In the previous lectures, the path to the goal was the entire point of the search (e.g., finding the shortest sequence of boat trips or puzzle slides). In local search, the path is completely irrelevant; the only thing that matters is the final state or configuration itself.

Because they don't keep track of the paths they traverse, local search algorithms have two major advantages: they use very little memory (usually a constant amount), and they can find reasonable solutions in extremely large or infinite (continuous) state spaces where systematic algorithms such as A* run out of memory or time.

Here is a detailed breakdown of the four algorithms covered:

1. Hill-Climbing Search

Imagine the state space as a landscape of hills and valleys, where elevation represents the objective function (the "fitness" or quality of the state). Hill-climbing looks only at its immediate neighbors and simply moves to the highest one; it never thinks ahead about where to go next. Because it is purely greedy, it can get stuck at a local maximum, on a plateau, or along a ridge, none of which is necessarily the global optimum.
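The greedy loop above can be sketched in a few lines. This is a minimal illustration, not code from the lecture; `neighbors` and `value` are hypothetical caller-supplied functions.

```python
def hill_climb(state, neighbors, value):
    """Greedy ascent: repeatedly move to the best neighbor until none improves."""
    while True:
        best = max(neighbors(state), key=value)
        if value(best) <= value(state):
            return state  # a local maximum, not necessarily the global one
        state = best

# Toy landscape: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
nbrs = lambda x: [x - 1, x + 1]
print(hill_climb(0, nbrs, f))  # climbs 0 -> 1 -> 2 -> 3 and stops
```

Note that starting at 0 reaches the optimum here only because this toy landscape has a single peak; on a multi-peaked landscape the same loop would stop at whichever local maximum it climbs first.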

2. Simulated Annealing

To fix the "getting stuck" problem of hill-climbing, Simulated Annealing introduces controlled randomness. It is inspired by metallurgy, where metals are heated to high temperatures and cooled gradually so the material settles into a low-energy, stable crystalline state. The algorithm occasionally accepts a move to a worse state; the probability of doing so shrinks as the "temperature" drops, so the search is exploratory early on and nearly greedy at the end.
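A minimal sketch of this acceptance rule follows. The geometric cooling schedule and the step-size of the `neighbor` function are assumptions chosen for illustration, not part of the lecture.

```python
import math
import random

def simulated_annealing(state, neighbor, value, schedule, max_steps=10_000):
    """Always accept uphill moves; accept a downhill move with
    probability exp(delta / T), where T falls over time per `schedule`."""
    for t in range(1, max_steps):
        T = schedule(t)
        if T < 1e-9:          # temperature has effectively reached zero
            break
        nxt = neighbor(state)
        delta = value(nxt) - value(state)
        if delta > 0 or random.random() < math.exp(delta / T):
            state = nxt
    return state

random.seed(0)                         # reproducibility of this sketch
f = lambda x: -(x - 3) ** 2            # same toy landscape as before
step = lambda x: x + random.choice([-1, 1])
cooling = lambda t: 10 * 0.95 ** t     # an assumed geometric cooling schedule
result = simulated_annealing(0, step, f, cooling)
```

Early on, T is large and even sharply downhill moves are often accepted; as T decays the exponential term vanishes and the loop behaves like hill-climbing.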

3. Local Beam Search

Instead of keeping just one node in memory, Local Beam Search keeps track of k states (initially generated at random). At each step, it generates all successors of all k states, pools them into a single list, and selects the best k successors to continue the search.
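The pooling step can be sketched as follows. This is an illustrative sketch only; `neighbors` and `value` are hypothetical caller-supplied functions, and the stopping rule (quit when no pooled successor beats the current best) is an assumption.

```python
def local_beam_search(start_states, neighbors, value, max_steps=100):
    """Pool all successors of the current k states and keep the best k."""
    k = len(start_states)
    states = sorted(start_states, key=value, reverse=True)
    for _ in range(max_steps):
        pool = {s for st in states for s in neighbors(st)}
        successors = sorted(pool, key=value, reverse=True)[:k]
        if value(successors[0]) <= value(states[0]):
            break              # no pooled successor beats the current best
        states = successors
    return states[0]

f = lambda x: -(x - 3) ** 2
nbrs = lambda x: [x - 1, x + 1]
print(local_beam_search([-10, 0, 10], nbrs, f))  # the three beams converge on 3
```

Because the k beams share one pool, useful information passes between them: beams in poor regions are quickly abandoned in favor of extra successors near the most promising one.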

4. Genetic Algorithms (GAs)

A Genetic Algorithm is a variant of stochastic beam search inspired by natural selection. Instead of just modifying single states, it generates successors by combining two "parent" states (crossover), and it occasionally applies small random changes to the offspring (mutation).
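The select-crossover-mutate cycle can be sketched as below. This is a minimal sketch under several assumptions not stated in the lecture: states are fixed-length bit tuples, parents are chosen with fitness-proportional probability, and crossover is one-point; the "one-max" objective is a standard toy example.

```python
import random

def genetic_algorithm(population, fitness, generations=200, mutation_rate=0.1):
    """Minimal GA: fitness-weighted parent selection, one-point crossover,
    and occasional single-bit mutation. States are tuples of bits."""
    n = len(population[0])
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        next_gen = []
        for _ in range(len(population)):
            p1, p2 = random.choices(population, weights=weights, k=2)
            cut = random.randrange(1, n)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < mutation_rate:     # flip one random bit
                i = random.randrange(n)
                child = child[:i] + (1 - child[i],) + child[i + 1:]
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)

# Toy objective ("one-max"): maximize the number of 1-bits in the string.
random.seed(1)
fitness = lambda bits: sum(bits) + 1   # +1 keeps all selection weights positive
pop = [tuple(random.randint(0, 1) for _ in range(8)) for _ in range(20)]
best = genetic_algorithm(pop, fitness)
```

Crossover is what distinguishes this from k parallel mutation-only searches: a child can inherit a good prefix from one parent and a good suffix from another, recombining partial solutions discovered independently.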