algorithms · level 14

Backtracking

Depth-first search with undo — the shape behind sudoku, n-queens, and every constraint solver.

200 XP


Backtracking is the shape of every "try, fail, undo, try again" search. It's what you do when you're assembling a sudoku row, stepping through an n-queens board, or working through a type-checker's constraints — a depth-first walk down a tree of partial guesses, backing out the moment a guess becomes impossible.

Analogy

Imagine you're threading a maze with a ball of string tied to the entrance. At every junction you pick a corridor and walk. If you hit a dead end, you follow the string back to the last junction and try the next corridor. You only ever forget a corridor once you've exhausted every branch off it. Eventually you either reach the exit or prove the maze has no exit — either way, the walk is systematic.

That's backtracking. The "string" is the recursion stack; the "forget" is the undo.

The template

Every backtracking solution fits the same short skeleton:

function search(partial):
  if partial is a full solution: record it; return
  if partial is already impossible: return         # the prune
  for each candidate extension:
    apply(extension)      # mutate partial in place
    search(partial)
    undo(extension)       # restore partial before trying the next candidate

The four names to know:

  • Partial solution — the answer-so-far. For n-queens it's a column-per-row array; for permutations it's a prefix; for sudoku it's the board with some cells filled.
  • Extension — one more unit of progress toward a full solution (placing a queen, picking a next element, filling a cell).
  • Prune — the check that says "this partial can't possibly extend to a full solution; don't bother recursing."
  • Undo — the bookkeeping that walks the partial state back so the next iteration of the for-loop starts from the same pre-extension state.
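As a concrete instance of the template, here is an illustrative n-queens solver in Python (the function and variable names are mine, not from a particular library). The partial solution is a list with one column per placed row; `safe` is the prune, `append` is the apply, `pop` is the undo:

```python
def solve_n_queens(n):
    """Return every n-queens solution as a list of columns, one per row."""
    solutions = []
    cols = []  # partial solution: cols[r] is the column of the queen in row r

    def safe(col):
        # Prune: does col clash with any earlier queen on a column or diagonal?
        row = len(cols)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def search():
        if len(cols) == n:            # full solution
            solutions.append(cols[:])  # record a copy
            return
        for col in range(n):          # candidate extensions
            if not safe(col):         # the prune
                continue
            cols.append(col)          # apply
            search()
            cols.pop()                # undo
    search()
    return solutions
```

Because `cols` is mutated in place, the undo is what guarantees each loop iteration starts from the same pre-extension state.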

Pruning is the whole game

Without the prune, backtracking degenerates to brute-force enumeration — you generate every possible partial and only check at the leaves. With a good prune, you chop vast sub-trees before descending into them.

For 8-queens, brute force would be 8^8 = 16,777,216 arrangements; with the column+diagonal prune you visit ~2000 partial boards. That's a 4-order-of-magnitude speedup from one if statement. The exercise in backtracking design is figuring out the cheapest possible check that rules out the most possible sub-trees.
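You can check those numbers directly. A sketch (my own instrumentation, not a standard routine) that counts every partial board the pruned search actually recurses into, versus the count a brute-force enumeration would visit:

```python
def count_queens_nodes(n):
    """Count partial boards entered by the pruned n-queens search."""
    nodes = solutions = 0
    cols = []

    def safe(col):
        row = len(cols)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def search():
        nonlocal nodes, solutions
        if len(cols) == n:
            solutions += 1
            return
        for col in range(n):
            if safe(col):
                cols.append(col)
                nodes += 1            # one more partial board visited
                search()
                cols.pop()
    search()
    return nodes, solutions

# Brute force visits every partial: 8 + 8^2 + ... + 8^8 placements.
brute_force_nodes = sum(8 ** k for k in range(1, 9))
```

On n=8 this reports roughly two thousand pruned-tree nodes against roughly nineteen million brute-force partials, matching the four-orders-of-magnitude claim.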

Canonical problems

Problem            Extension                            Prune
n-queens           place a queen in the next row        column or diagonal clash
permutations       pick an unused element               — (no prune; n! leaves)
subsets            include / exclude next element       — (no prune; 2ⁿ leaves)
sudoku             fill next empty cell                 row/column/box contains the digit
word search        step to a neighbouring grid cell     cell visited or doesn't match next letter
Hamiltonian path   visit an unvisited vertex            no edge from current to candidate

The pattern is universal. Once the template clicks, every one of these is the same shape with a different apply / prune / undo triple.
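To see the "same shape" claim concretely, here is the subsets row of the table as a sketch (illustrative names): the two extensions at each index are include and exclude, the apply/undo are append/pop, and there is no prune, so every leaf is a solution:

```python
def subsets(items):
    """Enumerate every subset of items via include/exclude backtracking."""
    out = []
    chosen = []  # partial solution: elements included so far

    def search(i):
        if i == len(items):          # full solution: a decision made for every item
            out.append(chosen[:])
            return
        chosen.append(items[i])      # apply: include items[i]
        search(i + 1)
        chosen.pop()                 # undo
        search(i + 1)                # second extension: exclude items[i]
    search(0)
    return out
```

With no prune this is exactly the brute-force case from earlier: 2ⁿ leaves, all visited.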

Backtracking vs dynamic programming

It's easy to confuse the two — they both sound like "recursion with bookkeeping." They aren't the same shape.

  • Dynamic programming works when sub-problems overlap (the same sub-problem appears in many branches) — you memoise so you solve each unique sub-problem once. The state space is a DAG of distinct sub-problems.
  • Backtracking works when sub-problems don't overlap (each partial leads to its own sub-tree; two different paths never reach the same partial). The state space is a tree, not a DAG. Memoisation doesn't help — there's nothing to reuse.

Rule of thumb: if the partial is fully described by a small number of parameters (index, sum, remaining-length), reach for DP. If the partial is the full path so far, reach for backtracking.
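The rule of thumb shows up cleanly in subset-sum. Asking "can some subset hit the target?" needs only (index, remaining), so it memoises; asking "which subsets hit the target?" needs the full path, so it backtracks. A sketch of both, assuming positive integers (the function names are illustrative):

```python
from functools import lru_cache

def can_sum(nums, target):
    """DP: the partial is fully described by (index, remaining) — memoise it."""
    @lru_cache(maxsize=None)
    def reachable(i, remaining):
        if remaining == 0:
            return True
        if i == len(nums) or remaining < 0:
            return False
        # Same (i, remaining) sub-problem is reached by many paths: overlap.
        return reachable(i + 1, remaining - nums[i]) or reachable(i + 1, remaining)
    return reachable(0, target)

def sums_to(nums, target):
    """Backtracking: the partial is the path itself — nothing to memoise."""
    out, path = [], []

    def search(i, remaining):
        if remaining == 0:
            out.append(path[:])      # record this particular path
            return
        if i == len(nums) or remaining < 0:
            return
        path.append(nums[i])         # apply: include nums[i]
        search(i + 1, remaining - nums[i])
        path.pop()                   # undo
        search(i + 1, remaining)     # exclude nums[i]
    search(0, target)
    return out
```

Memoising `sums_to` would be wrong, not just useless: two paths reaching the same (index, remaining) carry different prefixes, and both must be emitted.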

When the tree is too big

Pure backtracking explores every branch that isn't explicitly pruned. For some problems (travelling salesman, MAX-SAT, big sudokus), even a well-pruned tree is too big to enumerate. The standard upgrades:

  • Branch-and-bound adds a numeric bound: keep the best full solution seen so far and prune any sub-tree whose best-case value can't beat it. Turns backtracking into an optimisation algorithm.
  • Constraint propagation pushes the prune earlier: instead of only checking the just-placed piece, propagate the constraints it creates to all remaining cells and prune any sub-tree where a cell runs out of options.
  • Heuristics reorder the candidate extensions so the most-constraining or most-promising one is tried first — the tree doesn't get smaller, but the first solution gets found sooner (useful for any-one-solution tasks).

These three plus plain backtracking are the toolkit behind real-world solvers: SAT solvers, SMT solvers, ILP branch-and-bound, game-tree AI (minimax with alpha-beta pruning is branch-and-bound on a game tree), and type inference in compilers.
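Branch-and-bound is the easiest of the three to show in a few lines. A sketch on a toy problem of my choosing (maximise a subset sum without exceeding a cap): the bound is optimistic, "current plus everything remaining", and any sub-tree that can't beat the best full solution so far is cut:

```python
def best_sum_under(nums, cap):
    """Branch-and-bound: largest subset sum of nums not exceeding cap."""
    suffix = [0] * (len(nums) + 1)            # suffix[i] = sum of nums[i:]
    for i in range(len(nums) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + nums[i]
    best = 0

    def search(i, current):
        nonlocal best
        best = max(best, current)             # keep the best full solution seen
        if i == len(nums):
            return
        if current + suffix[i] <= best:       # bound: even taking all that remains
            return                            # can't beat best — prune the sub-tree
        if current + nums[i] <= cap:          # feasibility prune
            search(i + 1, current + nums[i])  # branch 1: take nums[i]
        search(i + 1, current)                # branch 2: skip nums[i]
    search(0, 0)
    return best
```

The same two-ingredient recipe, an incumbent plus an optimistic bound, is what alpha-beta pruning applies to game trees and what ILP solvers apply to relaxations.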

Playground

Pick a problem — n-queens or permutations — and an input size. Press Play to watch the recursion unfold, or Step to walk frame by frame. Each frame is one of try / reject / accept / undo, and the partial array on the right shows what's on the stack. Pruned branches fade out; accepted solutions flash green.

Useful experiments:

  • Run n-queens at n=5. Count the reject frames vs the try frames — pruning is doing most of the work.
  • Switch to permutations on [1, 2, 3]. Note there are no rejects: permutations have no impossible partials, so every leaf is an accept.
  • Run n-queens at 4, then 5, then 6. The frame count grows much faster than linearly in n — that's why the playground caps at 6.

Visualizer

The Backtrack Tree panel draws the recursion tree live. Attempted branches are drawn as solid lines; pruned branches are greyed out. Acceptance leaves are flagged. The picture is the entire pedagogy of this lesson: pruning is what keeps the tree small enough to traverse.