The bisection method is a classical root-finding technique used extensively in numerical analysis to locate a root of a continuous function $f(x)$.

Physically and mathematically, the idea is that if a function $f$ is continuous on an interval $[a, b]$ and $f(a)$ and $f(b)$ have opposite signs, then by the Intermediate Value Theorem $f$ must equal zero somewhere in $(a, b)$.
Conceptual Illustration:
In the above conceptual plot, the function crosses the $x$-axis between $a$ and $b$, with $f(a)$ and $f(b)$ on opposite sides of zero; that crossing is the root the method brackets and repeatedly halves toward.
Consider a continuous function $f(x)$ defined on an interval $[a, b]$ such that $f(a)f(b) < 0$.

The bisection method proceeds by evaluating the midpoint:

$$c = \frac{a + b}{2}$$

We then check the sign of $f(c)$:

- If $f(a)f(c) < 0$, then the root lies between $a$ and $c$.
- If $f(c)f(b) < 0$, then the root lies between $c$ and $b$.

In either case, we have reduced the interval size by a factor of 2. After $n$ such steps, the bracketing interval has length $(b - a)/2^n$.
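A single halving step can be sketched in a few lines of Python (the function here is illustrative, not one taken from the text):

```python
def f(x):
    return x * x - 2.0   # illustrative function; root at sqrt(2) ≈ 1.414

a, b = 0.0, 2.0          # f(0) = -2 and f(2) = 2, so a root is bracketed
c = (a + b) / 2.0        # midpoint c = 1.0
if f(a) * f(c) < 0:      # sign change on [a, c]: the root lies there
    b = c
else:                    # otherwise the sign change is on [c, b]
    a = c
print(a, b)              # after one step the bracket is [1.0, 2.0]
```

Here $f(0)f(1) > 0$, so the left half is discarded and the bracket shrinks to $[1, 2]$.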
I. Assumption: We start with a continuous function $f$ and an initial interval $[a_0, b_0]$ with $f(a_0)f(b_0) < 0$.

This ensures the existence of at least one root $r \in (a_0, b_0)$.
II. First Iteration:
Compute the midpoint:

$$c_1 = \frac{a_0 + b_0}{2}$$

Evaluate $f(c_1)$:

- If $f(a_0)f(c_1) < 0$, set $a_1 = a_0$ and $b_1 = c_1$.
- Else, set $a_1 = c_1$ and $b_1 = b_0$.

In either case, the new interval $[a_1, b_1]$ still brackets a root and is half the length of $[a_0, b_0]$.
III. Subsequent Iterations:
At the $k$-th iteration we compute the midpoint

$$c_k = \frac{a_{k-1} + b_{k-1}}{2}$$

and based on the sign test:

- If $f(a_{k-1})f(c_k) < 0$, set $a_k = a_{k-1}$ and $b_k = c_k$.
- Else, set $a_k = c_k$ and $b_k = b_{k-1}$.

As iterations proceed, the length of the interval containing the root is:

$$b_k - a_k = \frac{b_0 - a_0}{2^k}$$
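This halving law is easy to verify numerically. The sketch below runs five bisection steps on an illustrative function and compares each measured bracket width with $(b_0 - a_0)/2^k$; because each width is an exact power-of-two fraction of the original, the comparison holds exactly in floating point:

```python
def f(x):
    return x**3 - x - 2.0            # illustrative function; f(1) = -2, f(2) = 4

a, b = 1.0, 2.0                      # initial bracket [a_0, b_0]
width0 = b - a
for k in range(1, 6):
    c = (a + b) / 2.0                # midpoint c_k
    if f(a) * f(c) < 0:
        b = c                        # root in [a_{k-1}, c_k]
    else:
        a = c                        # root in [c_k, b_{k-1}]
    assert b - a == width0 / 2**k    # matches the formula above exactly
    print(k, b - a)
```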
IV. Convergence Criterion:
The process is repeated until the desired precision $\epsilon$ is reached, i.e. until

$$b_k - a_k < \epsilon,$$

or until the maximum number of iterations $n_{\max}$ is exhausted.
Input:
- A continuous function $f(x)$.
- Initial interval $[a, b]$ such that $f(a)f(b) < 0$.
- Tolerance $\epsilon > 0$ or maximum iterations $n_{\max}$.
Initialization:
- Set iteration counter $k = 0$.
Iteration:
I. Compute the midpoint:

$$c = \frac{a + b}{2}$$

II. Evaluate $f(c)$.

III. If $f(c) = 0$ or $b - a < \epsilon$:

- Stop and take $c$ as the approximate root.

IV. Else, determine the sub-interval:

- If $f(a)f(c) < 0$, set $b = c$.
- Else, set $a = c$.

V. Increment the iteration counter $k$; if $k < n_{\max}$, return to step I, otherwise stop.
Output:
- Approximate root $c$.
- Number of iterations performed.
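The algorithm above translates almost line for line into Python. This is a sketch; the example function, tolerance, and iteration cap are illustrative defaults, not values from the text:

```python
def bisect(f, a, b, eps=1e-10, n_max=100):
    """Bisection root-finder following steps I-V above.

    Requires f continuous on [a, b] with f(a) * f(b) < 0.
    Returns (approximate root, iterations performed).
    """
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    k = 0                                  # iteration counter
    while k < n_max:
        c = (a + b) / 2.0                  # I. midpoint
        fc = f(c)                          # II. evaluate f(c)
        if fc == 0.0 or b - a < eps:       # III. converged
            return c, k
        if f(a) * fc < 0:                  # IV. root in [a, c] ...
            b = c
        else:                              # ... or in [c, b]
            a = c
        k += 1                             # V. next iteration
    return (a + b) / 2.0, k                # cap reached; return best midpoint

# Example: the root of x^2 - 2 on [0, 2] is sqrt(2)
root, iters = bisect(lambda x: x * x - 2.0, 0.0, 2.0)
print(root, iters)
```

Note that $f(a)$ is re-evaluated each pass for clarity; a production version would cache it, since only one endpoint changes per iteration.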
Given Function:

For concreteness, take the illustrative function $f(x) = x^2 - 4$ on the interval $[0, 5]$.

We know $f(0) = -4 < 0$ and $f(5) = 21 > 0$,
so there is at least one root in $(0, 5)$.

Initial Setup:

- $a_0 = 0$
- $b_0 = 5$
- Interval length: $b_0 - a_0 = 5$

Iteration 1:

Compute midpoint:

$$c_1 = \frac{0 + 5}{2} = 2.5$$

Evaluate $f(c_1) = f(2.5) = 2.25$.

Check signs: $f(0)\,f(2.5) = (-4)(2.25) = -9$.

Since this is negative, the root lies in $[0, 2.5]$.

Update interval: $a_1 = 0$, $b_1 = 2.5$.

Iteration 2:

New interval: $[0, 2.5]$

Midpoint:

$$c_2 = \frac{0 + 2.5}{2} = 1.25$$

Evaluate $f(c_2) = f(1.25) = -2.4375$.

Check signs: $f(1.25)\,f(2.5) = (-2.4375)(2.25) < 0$.

This indicates the root is in $[1.25, 2.5]$.

Update interval: $a_2 = 1.25$, $b_2 = 2.5$.

Iteration 3:

New interval: $[1.25, 2.5]$

Midpoint:

$$c_3 = \frac{1.25 + 2.5}{2} = 1.875$$

Evaluate $f(c_3) = f(1.875) = -0.484375$.

Check signs: $f(1.875)\,f(2.5) < 0$.

So the root is now bracketed in $[1.875, 2.5]$.

Update interval: $a_3 = 1.875$, $b_3 = 2.5$.

Continuing this process, as we further narrow down the interval, we find that the root approaches the exact value $x = 2$.
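A quick way to check the arithmetic of a few iterations is to trace the loop directly; the sketch below uses $f(x) = x^2 - 4$ on $[0, 5]$ as an illustrative bracketing problem:

```python
def f(x):
    return x * x - 4.0   # illustrative function with root x = 2 in [0, 5]

a, b = 0.0, 5.0
for k in range(1, 4):
    c = (a + b) / 2.0
    if f(a) * f(c) < 0:
        b = c            # root in the left half
    else:
        a = c            # root in the right half
    print(f"iteration {k}: c = {c}, bracket = [{a}, {b}]")
```

The printed brackets shrink to $[0, 2.5]$, then $[1.25, 2.5]$, then $[1.875, 2.5]$, each half the width of the last.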
Advantages:

- Guaranteed convergence is ensured if $f$ is continuous and the interval $[a, b]$ satisfies $f(a)f(b) < 0$, making the method reliable for root-finding.
- The robustness and simplicity of the method come from its reliance only on function evaluations, requiring no derivatives or complex operations, making it easy to apply across a wide range of problems.
- The method’s stable and predictable behavior allows for precise estimation of the number of iterations required to achieve a desired accuracy since the interval halves with each iteration.
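That predictability can be stated as a formula: the smallest $n$ with $(b - a)/2^n \le \epsilon$ is $n = \lceil \log_2((b - a)/\epsilon) \rceil$. A one-function sketch:

```python
import math

def iterations_needed(a, b, eps):
    """Smallest n such that (b - a) / 2**n <= eps."""
    return math.ceil(math.log2((b - a) / eps))

print(iterations_needed(0.0, 1.0, 1e-6))   # 20 halvings suffice
```

For example, shrinking a unit interval below $10^{-6}$ always takes exactly 20 iterations, regardless of the function.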
Limitations:

- Slow convergence is a drawback, as the method converges only linearly, making it inefficient compared to faster methods such as Newton-Raphson or the secant method for well-behaved functions.
- The requirement for an initial bracketing of the root means that you must first identify two points $a$ and $b$ with $f(a)f(b) < 0$, which can be challenging if the function's behavior is not well known.
- The method is limited to finding a single root within a given interval, necessitating separate bracketing for each root in cases where multiple roots exist.
- Inapplicability to complex or multiple root situations arises because the method does not use additional information like derivatives or higher-order approximations, making it less suitable for complicated problems.
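A common workaround for both of the last two issues is to scan the domain on a grid, collect every sub-interval where the sign flips, and then bisect each bracket separately. The sketch below assumes a fixed grid resolution (an arbitrary choice here), which can miss closely spaced roots or roots of even multiplicity, where $f$ touches zero without changing sign:

```python
def find_brackets(f, lo, hi, n):
    """Return the grid sub-intervals of [lo, hi] on which f changes sign."""
    step = (hi - lo) / n
    xs = [lo + i * step for i in range(n + 1)]
    return [(x0, x1) for x0, x1 in zip(xs, xs[1:]) if f(x0) * f(x1) < 0]

# Illustrative: x^3 - x has three roots, at -1, 0, and 1
brackets = find_brackets(lambda x: x**3 - x, -2.0, 2.0, n=101)
print(brackets)   # three brackets, one per root
```

Each returned pair can then be fed to the bisection loop, turning a single-root method into a simple multi-root search.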