Recursion
A recursive function solves a computational problem by calling itself, so that the solution depends on solutions to smaller instances of the same problem until a base case is reached. Recursion is widely used for tasks that can be broken down into similar subproblems, such as traversing data structures, computing mathematical sequences, or implementing divide-and-conquer algorithms.
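As a minimal sketch of this idea, consider summing a list by reducing it to a smaller instance of the same problem. The function name recursive_sum and the use of Python are illustrative choices, not part of the original text:

def recursive_sum(numbers):
    """Sum a list by reducing it to a smaller instance of the same problem."""
    if not numbers:          # base case: an empty list sums to 0
        return 0
    # recursive case: first element plus the sum of the remaining elements
    return numbers[0] + recursive_sum(numbers[1:])

print(recursive_sum([1, 2, 3, 4]))  # 10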
Base Case
This is the simplest instance of the problem, where the solution is straightforward and does not require further recursive calls. The base case prevents infinite recursion by acting as the stopping criterion. Without a base case, the recursion would continue indefinitely, leading to a stack overflow error. For example, in the factorial function n!, the base case is 0! = 1, as it doesn't require further computation.
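A short Python sketch of the factorial example mentioned above, with the base case marked explicitly (the function name and language choice are illustrative):

def factorial(n):
    """Compute n! recursively."""
    if n == 0:                       # base case: 0! = 1, no further calls needed
        return 1
    return n * factorial(n - 1)      # recursive case: n! = n * (n-1)!

print(factorial(5))  # 120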
Recursive Case
This defines how the problem is broken down into smaller subproblems and calls the function itself with these smaller inputs. The recursive case must progress toward the base case to ensure that the recursion terminates. For instance, in the Fibonacci sequence, the recursive case defines F(n) = F(n-1) + F(n-2), reducing the problem size with each call.
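The Fibonacci example can be sketched directly from that formula. This is a plain recursive version for illustration only (a production version would typically use memoization or iteration):

def fibonacci(n):
    """Return the n-th Fibonacci number (0-indexed)."""
    if n < 2:                          # base cases: F(0) = 0, F(1) = 1
        return n
    # recursive case: F(n) = F(n-1) + F(n-2); each call shrinks n toward the base cases
    return fibonacci(n - 1) + fibonacci(n - 2)

print([fibonacci(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]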
Function Call Stack
Recursion relies on the system's call stack to keep track of function calls. Each recursive call is pushed onto the stack, holding its execution context until the base case is reached. As the base case is resolved, the stack unwinds, and the results of recursive calls are combined. This mechanism is crucial for maintaining the correct sequence of execution but can also limit the depth of recursion due to memory constraints.
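The sketch below shows the stack at work: every call adds a frame until the base case is hit, and the results are combined as the stack unwinds. The depth limit shown is Python-specific and the function name is illustrative:

import sys

def depth(n):
    """Each call pushes a new frame onto the call stack until the base case is reached."""
    if n == 0:                   # base case: the stack begins to unwind from here
        return 0
    return 1 + depth(n - 1)      # results are combined frame by frame as the stack unwinds

print(depth(100))                # 100
print(sys.getrecursionlimit())   # typically 1000; exceeding it raises RecursionError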
Termination Condition
A properly designed recursive function ensures termination by guaranteeing that the recursive case leads to the base case. Without a clear termination condition, the recursion risks running indefinitely. This requires careful design to ensure that each step reduces the problem size and moves closer to the base case.
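A small illustration of this design concern, under the assumption of Python and a hypothetical countdown function: the base case checks n <= 0 rather than n == 0, so even a negative input still terminates, and each call strictly reduces n toward the base case:

def countdown(n):
    """Terminates because every recursive call strictly decreases n toward the base case."""
    if n <= 0:                 # termination condition: covers 0 and negative inputs
        print("done")
        return
    print(n)
    countdown(n - 1)           # progress: the problem size shrinks by 1 on each call

countdown(3)  # prints 3, 2, 1, done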