The secant method therefore does not require the computation of a derivative, but the price is slower convergence: its order of convergence is the [[golden ratio]], approximately 1.618.
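A minimal sketch of the secant iteration in Python; the test function, starting points, and tolerance below are illustrative choices, not part of the method itself:

<syntaxhighlight lang="python">
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Return an approximate root of f, starting from x0 and x1."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            break  # secant line is horizontal; no intersection with the axis
        # Next iterate: intersection of the secant through
        # (x0, f0) and (x1, f1) with the x-axis.
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Example: the positive root of x^2 - 2, i.e. sqrt(2).
print(secant(lambda x: x * x - 2, 1.0, 2.0))
</syntaxhighlight>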
The [[false position method]], also called the ''regula falsi'' method, is similar to the bisection method, but instead of cutting the interval into two equal parts at every iteration, it cuts it at the point given by the formula for the secant method. The false position method inherits the robustness of the bisection method, but in its basic form it does not inherit the speed of the secant method: one endpoint of the bracketing interval may remain fixed, in which case convergence is only linear.
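A sketch of the basic iteration in Python, assuming the initial interval brackets a root (the test function and tolerance are again illustrative):

<syntaxhighlight lang="python">
def false_position(f, a, b, tol=1e-12, max_iter=100):
    """Find a root of f in [a, b], where f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "interval must bracket a root"
    for _ in range(max_iter):
        # Cut point from the secant formula instead of the midpoint.
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        # Keep the subinterval that still brackets the root.
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

print(false_position(lambda x: x * x - 2, 1.0, 2.0))
</syntaxhighlight>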
The secant method also arises if one approximates the unknown function ''f'' by [[linear interpolation]]. When [[polynomial interpolation|quadratic interpolation]] is used instead, one arrives at [[Muller's method]]. It converges faster than the secant method. A particular feature of this method is that the iterates ''x''<sub>''n''</sub> may become [[complex number|complex]].
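A sketch of Muller's iteration in Python; the three starting points and the stopping tolerance are illustrative, and complex arithmetic is used since the iterates may leave the real line:

<syntaxhighlight lang="python">
import cmath

def muller(f, x0, x1, x2, tol=1e-12, max_iter=100):
    """Return a (possibly complex) approximate root of f."""
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h0, h1 = x1 - x0, x2 - x1
        d0, d1 = (f1 - f0) / h0, (f2 - f1) / h1
        a = (d1 - d0) / (h1 + h0)   # leading coefficient of the parabola
        b = a * h1 + d1             # slope of the parabola at x2
        c = f2
        disc = cmath.sqrt(b * b - 4 * a * c)
        # Pick the sign giving the larger denominator, for stability.
        denom = b + disc if abs(b + disc) >= abs(b - disc) else b - disc
        x3 = x2 - 2 * c / denom
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x3

# For x^2 + 1 the iterates leave the real line and find the root i.
print(muller(lambda x: x * x + 1, 0.0, 1.0, 2.0))
</syntaxhighlight>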
This can be avoided by interpolating the [[inverse function|inverse]] of ''f'', resulting in the [[inverse quadratic interpolation]] method. Again, convergence is asymptotically faster than the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root.
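One way to implement the idea in Python: the parabola ''x'' = ''q''(''y'') through the three points is written in Lagrange form and evaluated at ''y'' = 0. The starting points and tolerance are illustrative, and this sketch includes no safeguard against coincident function values:

<syntaxhighlight lang="python">
def inverse_quadratic(f, x0, x1, x2, tol=1e-12, max_iter=100):
    """Find a root of f by inverse quadratic interpolation."""
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        # Lagrange interpolation of x as a function of y, at y = 0.
        x3 = (x0 * f1 * f2 / ((f0 - f1) * (f0 - f2))
              + x1 * f0 * f2 / ((f1 - f0) * (f1 - f2))
              + x2 * f0 * f1 / ((f2 - f0) * (f2 - f1)))
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x3

print(inverse_quadratic(lambda x: x * x - 2, 1.0, 1.5, 2.0))
</syntaxhighlight>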
Finally, [[Brent's method]] is a combination of the bisection method, the secant method and inverse quadratic interpolation. At every iteration, Brent's method decides which method out of these three is likely to do best, and proceeds by doing a step according to that method. This gives a robust and fast method, which therefore enjoys considerable popularity.
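In practice one usually calls a library implementation rather than coding Brent's method by hand; for example, SciPy provides it as <code>scipy.optimize.brentq</code> (the function and bracket below are illustrative):

<syntaxhighlight lang="python">
from scipy.optimize import brentq

# Brent's method needs a bracketing interval [a, b] with a sign change.
root = brentq(lambda x: x * x - 2, 1.0, 2.0)
print(root)  # approximately 1.4142135623730951
</syntaxhighlight>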
Other root-finding algorithms include:
* [[Successive substitutions]]
* [[Broyden's method]]
* [[Fixed-point iteration]]
==Finding roots of polynomials==
Much attention has been given to the special case in which the function ''f'' is a [[polynomial]]. Of course, the methods described in the previous section can still be used. In particular, the derivative of a polynomial is easy to compute, so [[Newton's method]] is a viable candidate. But one can also choose a method that exploits the fact that ''f'' is a polynomial.
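As an illustration, the sketch below evaluates a polynomial and its derivative together with Horner's scheme and feeds both into the Newton step; the coefficient convention (highest degree first), starting guess, and tolerance are assumptions of this sketch:

<syntaxhighlight lang="python">
def polynomial_newton(coeffs, x0, tol=1e-12, max_iter=100):
    """Newton's method for a polynomial with coefficients
    [a_n, ..., a_1, a_0], highest degree first."""
    x = x0
    for _ in range(max_iter):
        # Horner's scheme evaluates p(x) and p'(x) in one pass.
        p, dp = coeffs[0], 0.0
        for a in coeffs[1:]:
            dp = dp * x + p
            p = p * x + a
        if dp == 0:
            break  # stationary point; the Newton step is undefined
        step = p / dp
        x -= step
        if abs(step) < tol:
            return x
    return x

# Example: x^3 - x - 2 has a real root near 1.52.
print(polynomial_newton([1, 0, -1, -2], x0=1.5))
</syntaxhighlight>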
One possibility is to form the [[companion matrix]] of the polynomial. Since the eigenvalues of this matrix coincide with the roots of the polynomial, one can now use any [[eigenvalue algorithm]] to find the roots of the original polynomial.
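A sketch of this approach in Python with NumPy; the coefficient convention (highest degree first) is again an assumption of the sketch:

<syntaxhighlight lang="python">
import numpy as np

def roots_via_companion(coeffs):
    """Roots of a_n x^n + ... + a_0 (coeffs highest degree first),
    computed as the eigenvalues of the companion matrix."""
    a = np.asarray(coeffs, dtype=float)
    a = a / a[0]                  # make the polynomial monic
    n = len(a) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)    # subdiagonal of ones
    C[:, -1] = -a[1:][::-1]       # last column: -(c_0, ..., c_{n-1})
    return np.linalg.eigvals(C)

# Example: x^2 - 3x + 2 = (x - 1)(x - 2).
print(roots_via_companion([1, -3, 2]))
</syntaxhighlight>

NumPy's <code>numpy.roots</code> follows essentially this strategy.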
Another possibility is [[Laguerre's method]], which is a rather complicated method that converges very fast. In fact, it exhibits cubic convergence for simple roots, beating the quadratic convergence displayed by Newton's method.
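A sketch of Laguerre's iteration in Python; the sign in the denominator is chosen to maximize its magnitude, and complex arithmetic is used because the square root may be negative (the coefficients and starting guess are illustrative):

<syntaxhighlight lang="python">
import cmath

def laguerre(coeffs, x0, tol=1e-12, max_iter=100):
    """Laguerre's method for one root of a polynomial
    (coeffs highest degree first); iterates may be complex."""
    n = len(coeffs) - 1
    x = complex(x0)
    for _ in range(max_iter):
        # Horner's scheme for p, p' and p''/2 in one pass.
        p, dp, d2p = coeffs[0], 0, 0
        for a in coeffs[1:]:
            d2p = d2p * x + dp
            dp = dp * x + p
            p = p * x + a
        d2p *= 2
        if abs(p) < tol:
            return x
        G = dp / p
        H = G * G - d2p / p
        sq = cmath.sqrt((n - 1) * (n * H - G * G))
        denom = G + sq if abs(G + sq) >= abs(G - sq) else G - sq
        x -= n / denom
    return x

# Example: a root of x^3 - 1 from a real starting guess.
print(laguerre([1, 0, 0, -1], 2.0))
</syntaxhighlight>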
If the polynomial has rational coefficients and only rational roots are of interest, then [[Ruffini's rule|Ruffini's method]] can also be used.
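A brute-force sketch of the underlying rational root test in Python, using exact rational arithmetic; it assumes integer coefficients with a nonzero constant term:

<syntaxhighlight lang="python">
from fractions import Fraction

def rational_roots(coeffs):
    """All rational roots of a polynomial with integer coefficients
    (highest degree first), via the rational root theorem."""
    if coeffs[-1] == 0:
        raise ValueError("zero constant term: factor out x first")

    def divisors(n):
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]

    def eval_poly(x):
        val = Fraction(0)
        for c in coeffs:          # Horner's scheme in exact arithmetic
            val = val * x + c
        return val

    # Candidates p/q: p divides the constant term, q the leading one.
    candidates = {s * Fraction(p, q)
                  for p in divisors(coeffs[-1])
                  for q in divisors(coeffs[0])
                  for s in (1, -1)}
    return sorted(r for r in candidates if eval_poly(r) == 0)

# Example: 2x^3 - 3x^2 - 3x + 2 = (x + 1)(2x - 1)(x - 2).
print(rational_roots([2, -3, -3, 2]))  # roots -1, 1/2 and 2
</syntaxhighlight>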
In any case, it should be borne in mind that the problem of finding roots can be [[condition number|ill-conditioned]], as the example of [[Wilkinson's polynomial]] shows.
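Wilkinson's classic experiment can be reproduced in a few lines of NumPy; the perturbation size 2<sup>−23</sup> is Wilkinson's original choice:

<syntaxhighlight lang="python">
import numpy as np

# Wilkinson's polynomial (x - 1)(x - 2)...(x - 20), with roots 1, ..., 20.
w = np.poly(np.arange(1, 21))

# Perturb the coefficient of x^19 by 2^-23, as in Wilkinson's experiment.
w_perturbed = w.copy()
w_perturbed[1] -= 2.0 ** -23

print(np.sort(np.roots(w).real))               # roughly 1, 2, ..., 20
print(np.abs(np.roots(w_perturbed).imag).max())  # some roots become complex
</syntaxhighlight>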
==External links==
*[http://www.nr.com Numerical Recipes Homepage]
[[Category:Root-finding algorithms|*]]