The Secant Method is a root-finding algorithm used in numerical analysis to approximate the zeros of a given function $f(x)$.
Conceptually, the Secant Method constructs a secant line between two points on the graph of $f$ and takes the point where that line crosses the x-axis as the next approximation to the root.
Conceptual Illustration:
Imagine plotting the function $f(x)$ and drawing a straight line through the two points $(x_0, f(x_0))$ and $(x_1, f(x_1))$ on its graph.
The intersection of the secant line with the x-axis gives the next approximation to the root, $x_2$; the process is then repeated with the two most recent points.
Consider a continuous function $f(x)$ and two approximations $x_{n-1}$ and $x_n$ to one of its roots. The slope of the secant line through $(x_{n-1}, f(x_{n-1}))$ and $(x_n, f(x_n))$ is $\frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}$.
This formula approximates the derivative $f'(x_n)$.
By replacing the derivative in Newton’s method with this difference quotient, we obtain an iteration that requires only function evaluations, as derived below.
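To see this numerically, here is a minimal Python sketch (the function $f(x) = x^2 - 4$ is the one used in the worked example later in this article, and the two sample points are chosen purely for illustration) comparing the secant slope with the exact derivative:

```python
# Minimal sketch: the secant slope as an approximation to the derivative f'(x).
# f(x) = x**2 - 4 is the example function used later; the points are illustrative.
def f(x):
    return x**2 - 4

x_prev, x_curr = 2.0, 2.1
secant_slope = (f(x_curr) - f(x_prev)) / (x_curr - x_prev)
exact_derivative = 2 * x_curr  # f'(x) = 2x for this function

print(secant_slope)      # about 4.1
print(exact_derivative)  # 4.2, close to the secant slope
```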
I. Starting from Newton’s Method:
Newton’s method update rule is:
$x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)}$
II. Approximating the Derivative:
If the derivative $f'(x_n)$ is unavailable or expensive to compute, it can be approximated by the difference quotient:
$f'(x_n) \approx \dfrac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}$
III. Substitute into Newton’s Formula:
Replacing $f'(x_n)$ in Newton’s formula with this approximation gives:
$x_{n+1} = x_n - \dfrac{f(x_n)(x_n - x_{n-1})}{f(x_n) - f(x_{n-1})}$
IV. Simplify the Expression:
By rearranging, we get:
$x_{n+1} = \dfrac{x_{n-1}\, f(x_n) - x_n\, f(x_{n-1})}{f(x_n) - f(x_{n-1})}$
This is the Secant Method iteration formula.
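As a quick sanity check, here is a tiny Python sketch of a single application of this formula; the function and the starting pair are taken from the worked example later in this article:

```python
# One secant update: compute x_{n+1} from x_{n-1} and x_n using the formula above.
def f(x):
    return x**2 - 4  # example function from the worked example below

def secant_step(x_prev, x_curr):
    return x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))

print(secant_step(0.0, 1.0))  # 4.0, matching Iteration 1 of the example below
```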
Input:
- A function $f(x)$.
- Two initial points $x_0$ and $x_1$.
- A tolerance $\epsilon > 0$ or a maximum number of iterations $n_{\max}$.
Initialization:
Set the iteration counter $n = 1$.
Iteration:
I. Evaluate $f(x_{n-1})$ and $f(x_n)$.
II. Compute:
$x_{n+1} = x_n - f(x_n)\,\dfrac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}$
III. Check for convergence:
- If $|x_{n+1} - x_n| < \epsilon$ or $|f(x_{n+1})| < \epsilon$, stop.
- If $n > n_{\max}$, stop.
IV. Update indices: set $n = n + 1$ and return to step I.
Output:
- Approximate root $x_{n+1}$.
- Number of iterations performed.
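A direct Python translation of this algorithm might look like the sketch below; the function name `secant_method`, the default tolerance, and the iteration cap are illustrative choices rather than part of any standard library:

```python
# Sketch of the secant method following the Input / Iteration / Output steps above.
def secant_method(f, x0, x1, eps=1e-10, n_max=100):
    """Return (approximate_root, iterations_performed) for a root of f."""
    x_prev, x_curr = x0, x1
    for n in range(1, n_max + 1):
        f_prev, f_curr = f(x_prev), f(x_curr)
        if f_curr == f_prev:  # secant line is horizontal; cannot continue
            return x_curr, n
        x_next = x_curr - f_curr * (x_curr - x_prev) / (f_curr - f_prev)
        if abs(x_next - x_curr) < eps or abs(f(x_next)) < eps:
            return x_next, n
        x_prev, x_curr = x_curr, x_next  # update indices for the next pass
    return x_curr, n_max

root, iterations = secant_method(lambda x: x**2 - 4, 0.0, 1.0)
print(root, iterations)  # root is approximately 2.0
```

The early return when $f(x_n) = f(x_{n-1})$ guards against division by zero, one of the failure modes discussed in the limitations below.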
Given Function:
$f(x) = x^2 - 4$, with initial guesses $x_0 = 0$ and $x_1 = 1$.
We know the roots are $x = -2$ and $x = 2$, so the iteration should approach $x = 2$.
Iteration 1:
- $f(x_0) = f(0) = 0^2 - 4 = -4$.
- $f(x_1) = f(1) = 1^2 - 4 = -3$.
Update:
$x_2 = x_1 - f(x_1)\,\dfrac{x_1 - x_0}{f(x_1) - f(x_0)} = 1 - (-3) \cdot \dfrac{1 - 0}{-3 - (-4)} = 1 + 3 = 4$
So $x_2 = 4$.
Iteration 2:
Now:
- $x_1 = 1$, $x_2 = 4$.
- $f(x_1) = f(1) = -3$, $f(x_2) = f(4) = 4^2 - 4 = 16 - 4 = 12$.
Update:
$x_3 = x_2 - f(x_2)\,\dfrac{x_2 - x_1}{f(x_2) - f(x_1)} = 4 - 12 \cdot \dfrac{4 - 1}{12 - (-3)} = 4 - \dfrac{36}{15} = 4 - 2.4 = 1.6$
Iteration 3:
Now:
- $x_2 = 4$, $x_3 = 1.6$.
- $f(x_2) = f(4) = 12$, $f(x_3) = f(1.6) = 1.6^2 - 4 = 2.56 - 4 = -1.44$.
Update:
$x_4 = x_3 - f(x_3)\,\dfrac{x_3 - x_2}{f(x_3) - f(x_2)} = 1.6 - (-1.44) \cdot \dfrac{1.6 - 4}{-1.44 - 12}$
Compute inside:
$(-1.44) \cdot \dfrac{-2.4}{-13.44} = -\dfrac{3.456}{13.44} \approx -0.2571$
So:
$x_4 \approx 1.6 - (-0.2571) = 1.8571$
Repeating further brings the sequence closer and closer to the true root $x = 2$.
Note also that the closer the starting points are to the root, the faster the method converges: if we start with two points near $x = 2$, the iterates match the root to several decimal places within just a few updates, as the sketch below shows.
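The short Python sketch below reproduces the iterates computed above and, for comparison, runs the same loop from starting points nearer the root (the pair 1.5 and 2.5 is chosen here purely for illustration):

```python
# Reproduce the worked example (x0 = 0, x1 = 1) and a run that starts nearer the root.
def f(x):
    return x**2 - 4

def run_secant(x_prev, x_curr, steps):
    for n in range(steps):
        x_next = x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))
        print(f"  x_{n + 2} = {x_next:.4f}")
        x_prev, x_curr = x_curr, x_next

print("Starting from x0 = 0, x1 = 1:")
run_secant(0.0, 1.0, 3)   # expected: 4.0000, 1.6000, 1.8571
print("Starting from x0 = 1.5, x1 = 2.5:")
run_secant(1.5, 2.5, 3)   # iterates are already very close to 2
```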
- Unlike Newton’s method, the secant method does not require an analytic derivative, which is particularly useful for functions that are difficult or impossible to differentiate analytically.
- Convergence of the secant method is typically superlinear (of order approximately 1.618, the golden ratio): faster than the linear convergence of the bisection method, though not as fast as the quadratic convergence of Newton’s method.
- The secant method is easy to implement due to its simple iterative formula, which relies only on function evaluations and does not require derivative or interval-based calculations.
- Convergence is not guaranteed, as poor initial guesses can cause divergence or failure to find a root.
- Functions with multiple roots pose challenges because the secant method may converge to an unintended root if the initial guesses are closer to a different zero.
- The method’s stability and efficiency are highly dependent on the initial guesses $x_0$ and $x_1$, which play a critical role in determining whether the method converges and how quickly.
- The secant method does not offer quadratic convergence like Newton’s method, resulting in slower convergence rates compared to derivative-based methods when derivatives are available and feasible to compute.
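To illustrate the convergence behaviour noted above, the following small self-contained sketch (again with $f(x) = x^2 - 4$; the starting points are illustrative) prints the error $|x_n - 2|$ at each step, which shrinks faster than linearly once the iterates settle near the root:

```python
# Track the error |x_n - 2| per iteration; it should decay superlinearly.
def f(x):
    return x**2 - 4

x_prev, x_curr = 1.0, 3.0  # illustrative starting points around the root at 2
for n in range(6):
    x_next = x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))
    print(f"iteration {n + 1}: error = {abs(x_next - 2):.2e}")
    x_prev, x_curr = x_curr, x_next
```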