==Specific algorithms==
The simplest root-finding algorithm is the [[bisection method]]: we start with two points ''a'' and ''b'' which bracket a root, and at every iteration we pick either the subinterval [''a'', ''c''] or [''c'', ''b''], where ''c'' = (''a'' + ''b'') / 2 is the midpoint between ''a'' and ''b''. The algorithm always selects a subinterval which contains a root. The bisection method is guaranteed to converge as long as the initial interval brackets a root, but it is relatively slow: the length of the interval, and hence the error bound, is only halved at every iteration.
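The bisection step described above can be sketched as follows; the function name, tolerance, and iteration cap are illustrative choices, not part of the method itself:

```python
def bisect(f, a, b, tol=1e-12, max_iter=100):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must bracket a root")
    for _ in range(max_iter):
        c = (a + b) / 2          # midpoint of the current interval
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:
            return c
        # Keep the subinterval whose endpoints still bracket the root.
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return (a + b) / 2
```

For example, `bisect(lambda x: x*x - 2, 1.0, 2.0)` approximates the square root of 2.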
[[Newton's method]], also called the Newton&ndash;Raphson method, approximates the function by its tangent line at the current iterate and takes the root of this tangent as the next iterate:
:<math> x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}. </math>
Newton's method may fail to converge if the starting point is too far from a root. However, when it does converge, it is faster than the bisection method (its convergence is quadratic). Newton's method is also important because it readily generalizes to higher-dimensional problems.
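A minimal Python sketch of the Newton iteration given above; the stopping rule and iteration cap are illustrative assumptions:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Apply the Newton-Raphson iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        d = fprime(x)
        if d == 0:
            raise ZeroDivisionError("derivative vanished; Newton step undefined")
        x_new = x - f(x) / d     # root of the tangent line at x
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Note that the caller must supply the derivative `fprime`; removing that requirement is exactly what motivates the secant method below.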
If we replace the derivative in Newton's method with a [[finite difference]], we get the [[secant method]]. It is defined by the recurrence relation
:<math>x_{n+1} = x_n - \frac{x_n-x_{n-1}}{f(x_n)-f(x_{n-1})} f(x_n). </math>
So, the secant method does not require the computation of a derivative, but the price is slower convergence (the order is approximately 1.6).
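The secant recurrence can be sketched as below; as with the other examples, the tolerance and iteration cap are illustrative:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Iterate x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break                # secant line is horizontal; cannot continue
        x2 = x1 - (x1 - x0) / (f1 - f0) * f1
        if abs(x2 - x1) < tol:
            return x2
        # Shift the pair of iterates forward.
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1
```

Unlike bisection, the two starting points need not bracket a root, and the iterates may escape the initial interval.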
The [[false position method]], also called the regula falsi method, is like the bisection method. However, it does not cut the interval into two equal parts at every iteration; instead, it cuts the interval at the point given by the formula for the secant method. The false position method inherits the robustness of the bisection method, since the root remains bracketed at every step, while in favourable cases it also attains convergence comparable to the secant method.
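A sketch combining the two ideas just described, with bisection's bracket-keeping update and the secant formula for the cut point (tolerance and iteration cap again illustrative):

```python
def false_position(f, a, b, tol=1e-12, max_iter=100):
    """Regula falsi: bisection-style bracketing with a secant-formula cut point."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must bracket a root")
    c = a
    for _ in range(max_iter):
        c = b - (b - a) / (fb - fa) * fb   # secant formula on the endpoints
        fc = f(c)
        if abs(fc) < tol:
            return c
        # Keep the subinterval whose endpoints still bracket the root.
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c
```

A known caveat (not mentioned above): on a convex or concave function one endpoint can stay fixed for many iterations, which slows convergence; variants of regula falsi exist to counter this.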
Other root-finding algorithms include:
* [[Ruffini's rule|Ruffini's method]]
* [[Successive substitutions]]
* [[Laguerre's method]]
* [[Brent's method]]