The “Combinations” problem asks us to generate all possible sets of k distinct numbers selected from the range [1, n]. Unlike permutations, order does not matter in combinations: for example, [1, 2] is the same as [2, 1], so only one of them is included in the result.
As an example, for n = 4 and k = 2, the valid combinations are: [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]].
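This example can be double-checked with the standard library's itertools.combinations, which yields each set exactly once, in lexicographic order:

```python
from itertools import combinations

# Each combination appears once; order within a set does not matter.
result = [list(c) for c in combinations(range(1, 5), 2)]
print(result)  # → [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
```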
This problem is best solved using a recursive backtracking approach. We build combinations from the highest number (n) down to 1. At each recursive call, we make a binary choice: either include the current number or skip it.
We maintain a temporary list (commonly named sol) that holds the current partial combination. If its length reaches k, we add a copy to the final list ans. After each recursive call, we backtrack by removing the last number to explore other options.
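A minimal Python sketch of this approach (the list names sol and ans follow the description above; the helper name backtrack is my own choice):

```python
def combine(n, k):
    """Return all k-sized combinations of the numbers 1..n."""
    ans = []  # completed combinations
    sol = []  # current partial combination

    def backtrack(i):
        # Found a complete combination: store a copy.
        if len(sol) == k:
            ans.append(sol[:])
            return
        # No numbers left to consider.
        if i == 0:
            return
        # Choice 1: include the current number i.
        sol.append(i)
        backtrack(i - 1)
        sol.pop()  # undo the choice (backtrack)
        # Choice 2: skip i entirely.
        backtrack(i - 1)

    backtrack(n)
    return ans
```

Because we count down from n, each combination is produced with its elements in descending order, e.g. [4, 3] rather than [3, 4]; the set of combinations is the same either way.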
One optimization is to stop recursion early: if the numbers still available are fewer than those needed to complete a valid combination, the branch can never succeed, so we abandon it. This pruning eliminates unnecessary computation.
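The pruning amounts to one extra guard. In the top-down traversal only the numbers 1..i remain when we are considering i, so a sketch of the pruned version (same assumed structure as above) is:

```python
def combine_pruned(n, k):
    """Return all k-sized combinations of 1..n, with early pruning."""
    ans, sol = [], []

    def backtrack(i):
        if len(sol) == k:
            ans.append(sol[:])
            return
        # Prune: only the numbers 1..i remain. If even taking all of
        # them cannot bring sol up to length k, stop this branch.
        if len(sol) + i < k:
            return
        sol.append(i)
        backtrack(i - 1)
        sol.pop()
        backtrack(i - 1)

    backtrack(n)
    return ans
```

Note the pruning guard also covers the i == 0 base case: when no numbers remain and the combination is incomplete, the branch returns immediately.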
The time complexity is O(C(n, k) × k), where C(n, k) is the number of combinations (the binomial coefficient) and the factor of k accounts for copying each completed combination.
The space complexity is also O(C(n, k) × k) to store all the combinations in memory, plus O(k) for the recursion stack and temporary list.
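The C(n, k) count in these bounds can be sanity-checked against the standard library (math.comb gives the binomial coefficient directly):

```python
import math
from itertools import combinations

# The number of generated combinations equals the binomial coefficient C(n, k).
for n in range(1, 8):
    for k in range(n + 1):
        assert len(list(combinations(range(1, n + 1), k))) == math.comb(n, k)

print(math.comb(4, 2))  # → 6, matching the earlier example
```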
An iterative solution using stacks or queues is possible but significantly more complex to implement cleanly. The backtracking approach is not only concise and readable but also performs very well with proper pruning.
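For comparison, one possible iterative version keeps an explicit stack of (next number to consider, partial combination) pairs; this is a sketch of one design, not a canonical implementation:

```python
def combine_iterative(n, k):
    """Return all k-sized combinations of 1..n without recursion."""
    ans = []
    # Each stack entry is (i, sol): numbers 1..i are still available,
    # and sol is the partial combination built so far.
    stack = [(n, [])]
    while stack:
        i, sol = stack.pop()
        if len(sol) == k:
            ans.append(sol)
            continue
        if len(sol) + i < k:  # same pruning as the recursive version
            continue
        # Push both choices: skip i, then include i.
        stack.append((i - 1, sol))
        stack.append((i - 1, sol + [i]))
    return ans
```

Building sol + [i] copies the partial list on every inclusion, which is what makes the iterative form clunkier than the recursive append/pop pattern.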
The “Combinations” problem is a fundamental example of recursive backtracking. It reinforces the concept of making binary choices (include or exclude), managing recursion state, and pruning inefficient paths. Mastering this pattern helps in solving more advanced problems involving subset generation, combinatorics, and decision trees.