Root-finding algorithm
{{Short description|Algorithms for zeros of functions}}
In [[numerical analysis]], a '''root-finding algorithm''' is an [[algorithm]] for finding [[Zero of a function|zeros]], also called "roots", of [[continuous function]]s. A [[zero of a function]] {{math|''f''}} is a number {{math|''x''}} such that {{math|1=''f''(''x'') = 0}}. As, generally, the zeros of a function cannot be computed exactly nor expressed in [[closed form expression|closed form]], root-finding algorithms provide approximations to zeros. For functions from the [[real number]]s to real numbers or from the [[complex number]]s to the complex numbers, these are expressed either as [[floating-point arithmetic|floating-point]] numbers without error bounds or as floating-point values together with error bounds. The latter, approximations with error bounds, are equivalent to small isolating [[interval (mathematics)|intervals]] for real roots or [[disk (mathematics)|disks]] for complex roots.<ref>{{Cite book |last1=Press |first1=W. H. |title=Numerical Recipes: The Art of Scientific Computing |last2=Teukolsky |first2=S. A. |last3=Vetterling |first3=W. T. |last4=Flannery |first4=B. P. |publisher=Cambridge University Press |year=2007 |isbn=978-0-521-88068-8 |edition=3rd |publication-place=New York |chapter=Chapter 9. Root Finding and Nonlinear Sets of Equations |chapter-url=http://apps.nrbook.com/empanel/index.html#pg=442}}</ref>
 
[[Equation solving|Solving an equation]] {{math|1=''f''(''x'') = ''g''(''x'')}} is the same as finding the roots of the function {{math|1=''h''(''x'') = ''f''(''x'') − ''g''(''x'')}}. Thus root-finding algorithms can be used to solve any [[equation (mathematics)|equation]] of continuous functions. However, most root-finding algorithms do not guarantee that they will find all roots of a function, and if such an algorithm does not find any root, that does not necessarily mean that no root exists.
 
Most numerical root-finding methods are [[Iterative method|iterative methods]], producing a [[sequence]] of numbers that ideally converges towards a root as a [[Limit of a sequence|limit]]. They require one or more ''initial guesses'' of the root as starting values, then each iteration of the algorithm produces a successively more accurate approximation to the root. Since the iteration must be stopped at some point, these methods produce an approximation to the root, not an exact solution. Many methods compute subsequent values by evaluating an auxiliary function on the preceding values. The limit is thus a [[Fixed point (mathematics)|fixed point]] of the auxiliary function, which is chosen for having the roots of the original equation as fixed points and for converging rapidly to these fixed points.
 
The behavior of general root-finding algorithms is studied in [[numerical analysis]]. However, for polynomials specifically, the study of root-finding algorithms belongs to [[computer algebra]], since algebraic properties of polynomials are fundamental for the most efficient algorithms. The efficiency and applicability of an algorithm may depend sensitively on the characteristics of the given functions. For example, many algorithms use the [[derivative]] of the input function, while others work on every [[continuous function]]. In general, numerical algorithms are not guaranteed to find all the roots of a function, so failing to find a root does not prove that there is no root. However, for [[polynomial]]s, there are specific algorithms that use algebraic properties for certifying that no root is missed and for locating the roots in separate intervals (or disks for complex roots) that are small enough to ensure the convergence of numerical methods (typically [[Newton's method]]) to the unique root within each interval (or disk).
 
== Bracketing methods ==
Bracketing methods determine successively smaller intervals (brackets) that contain a root. When the interval is small enough, then a root is considered found. These generally use the [[intermediate value theorem]], which asserts that if a continuous function has values of opposite signs at the end points of an interval, then the function has at least one root in the interval. Therefore, they require starting with an interval such that the function takes opposite signs at the end points of the interval. However, in the case of [[polynomial]]s there are other methods such as [[Descartes' rule of signs]], [[Budan's theorem]] and [[Sturm's theorem]] for bounding or determining the number of roots in an interval. They lead to efficient algorithms for [[real-root isolation]] of polynomials, which find all real roots with a guaranteed accuracy.
 
=== Bisection method ===
The simplest root-finding algorithm is the [[bisection method]]. Let {{math|''f''}} be a [[continuous function]] for which one knows an interval {{math|[''a'', ''b'']}} such that {{math|''f''(''a'')}} and {{math|''f''(''b'')}} have opposite signs (a bracket). Let {{math|1=''c'' = (''a'' + ''b'')/2}} be the middle of the interval (the midpoint or the point that bisects the interval). Then either {{math|''f''(''a'')}} and {{math|''f''(''c'')}}, or {{math|''f''(''c'')}} and {{math|''f''(''b'')}} have opposite signs, and one has divided by two the size of the interval. Although the bisection method is robust, it gains one and only one [[bit]] of accuracy with each iteration. Therefore, the number of function evaluations required for finding an ''ε''-approximate root is <math>\log_2\frac{b-a}{\varepsilon}</math>. Other methods, under appropriate conditions, can gain accuracy faster.
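A minimal sketch of the bisection iteration in Python (the function name <code>bisect</code> and the tolerance <code>eps</code> are illustrative choices, not taken from any particular library):

<syntaxhighlight lang="python">
def bisect(f, a, b, eps=1e-12):
    """Approximate a root of f in [a, b], assuming a < b and f(a), f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > eps:
        c = (a + b) / 2          # midpoint of the current bracket
        fc = f(c)
        if fc == 0:
            return c
        if fa * fc < 0:          # the root lies in [a, c]
            b, fb = c, fc
        else:                    # the root lies in [c, b]
            a, fa = c, fc
    return (a + b) / 2

print(bisect(lambda x: x * x - 2, 1.0, 2.0))  # ~1.4142135623, the square root of 2
</syntaxhighlight>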
 
=== False position (''regula falsi'') ===
The [[false position method]], also called the ''regula falsi'' method, is similar to the bisection method, but instead of the midpoint of the interval it uses the [[x-intercept|{{math|''x''}}-intercept]] of the line that connects the function values at the endpoints of the interval, that is
:<math>c= \frac{af(b)-bf(a)}{f(b)-f(a)}.</math>
 
False position is similar to the [[secant method]], except that, instead of retaining the last two points, it makes sure to keep one point on either side of the root. The false position method can be faster than the bisection method and will never diverge like the secant method. However, it may fail to converge in some naive implementations due to roundoff errors that may lead to a wrong sign for {{math|''f''(''c'')}}. Typically, this may occur if the [[derivative]] of {{mvar|f}} is large in the neighborhood of the root.
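A sketch of the false position iteration in Python, using the chord formula above for the new point {{math|''c''}} (the function name and tolerances are illustrative):

<syntaxhighlight lang="python">
def false_position(f, a, b, tol=1e-12, max_iter=100):
    """Regula falsi: like bisection, but the new point is the x-intercept of the chord."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # x-intercept of the line through (a, f(a)) and (b, f(b))
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:                     # keep the sub-interval whose endpoints bracket the root
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

print(false_position(lambda x: x * x - 2, 1.0, 2.0))  # ~1.4142135623
</syntaxhighlight>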
 
== Interpolation ==
 
Many root-finding processes work by [[interpolation]]. This consists in using the last computed approximate values of the root for approximating the function by a [[polynomial]] of low degree, which takes the same values at these approximate roots. Then the root of the polynomial is computed and used as a new approximate value of the root of the function, and the process is iterated.
 
Interpolating two values yields a line: a polynomial of degree one. This is the basis of the [[secant method]]. ''Regula falsi'' is also an interpolation method that interpolates two points at a time but it differs from the secant method by using two points that are not necessarily the last two computed points. Three values define a parabolic curve: a [[quadratic function]]. This is the basis of [[Muller's method]].
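A sketch of Muller's method in Python; complex arithmetic is used so that the parabola's root can be taken even when it is not real (the function name and tolerances are illustrative):

<syntaxhighlight lang="python">
import cmath

def muller(f, x0, x1, x2, tol=1e-12, max_iter=50):
    """Fit a parabola through the last three iterates and step to its root nearest x2."""
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
        a = (d2 - d1) / (h2 + h1)            # leading coefficient of the parabola
        b = a * h2 + d2
        disc = cmath.sqrt(b * b - 4 * a * f2)
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        x3 = x2 - 2 * f2 / denom             # root of the parabola closest to x2
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

print(muller(lambda x: x**3 - x - 1, 0.0, 1.0, 2.0))  # ~1.3247, possibly with a negligible imaginary part
</syntaxhighlight>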
 
== Iterative methods ==
Although all root-finding algorithms proceed by [[iteration]], an [[iterative method|iterative]] root-finding method generally uses a specific type of iteration, consisting of defining an auxiliary function, which is applied to the last computed approximations of a root for getting a new approximation. The iteration stops when a [[fixed-point iteration|fixed point]] of the auxiliary function is reached to the desired precision, i.e., when a new computed value is sufficiently close to the preceding ones.
 
=== Newton's method (and similar derivative-based methods) ===
[[Newton's method]] assumes the function ''f'' to have a continuous [[derivative]]. It linearizes ''f'' at the current approximation to the root, which yields the [[recurrence relation]]
:<math> x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}. </math>
Newton's method may not converge if started too far away from a root. However, when it does converge, it is faster than the bisection method; its [[order of convergence]] is usually quadratic whereas the bisection method's is linear. Newton's method is also important because it readily generalizes to higher-dimensional problems. [[Householder's method]]s are a class of Newton-like methods with higher orders of convergence. The first one after Newton's method is [[Halley's method]] with cubic order of convergence.
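A sketch of the Newton iteration in Python; the caller supplies the derivative <code>df</code>, and the function name and tolerance are illustrative:

<syntaxhighlight lang="python">
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: repeatedly apply x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)          # step to the root of the tangent line at x
    return x

print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # ~1.4142135623
</syntaxhighlight>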
 
=== Secant method ===
Replacing the derivative in Newton's method with a [[finite difference]], we get the [[secant method]], defined by the recurrence relation
:<math>x_{n+1} = x_n - \frac{x_n-x_{n-1}}{f(x_n)-f(x_{n-1})} f(x_n). </math>
This method does not require the computation (nor the existence) of a derivative, but the price is slower convergence (the order of convergence is the [[golden ratio]], approximately 1.62<ref>{{Cite web |last=Chasnov |first=Jeffrey R. |date=October 3, 2024 |title=Order of Convergence |url=https://math.libretexts.org/Bookshelves/Applied_Mathematics/Numerical_Methods_(Chasnov)/02%3A_Root_Finding/2.04%3A_Order_of_Convergence |access-date=October 3, 2024 |website=LibreTexts Mathematics}}</ref>). A generalization of the secant method in higher dimensions is [[Broyden's method]].
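A sketch of the secant iteration in Python, using two starting values instead of a derivative (names and tolerances are illustrative):

<syntaxhighlight lang="python">
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton's method with the derivative replaced by a finite difference."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # x-intercept of the secant line
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

print(secant(lambda x: x * x - 2, 1.0, 2.0))  # ~1.4142135623
</syntaxhighlight>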
 
=== Steffensen's method ===
If we use a polynomial fit to remove the quadratic part of the finite difference used in the secant method, so that it better approximates the derivative, we obtain [[Steffensen's method]], which has quadratic convergence, and whose behavior (both good and bad) is essentially the same as Newton's method but does not require a derivative.
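A sketch of Steffensen's method in Python, in the common formulation that estimates the slope from a single evaluation point (names and tolerances are illustrative; like Newton's method, it works best when started reasonably close to the root):

<syntaxhighlight lang="python">
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Derivative-free iteration with quadratic convergence:
    the slope at x is estimated by g = (f(x + f(x)) - f(x)) / f(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        g = (f(x + fx) - fx) / fx   # divided-difference estimate of f'(x)
        x -= fx / g
    return x

print(steffensen(lambda x: x * x - 2, 1.5))  # ~1.4142135623
</syntaxhighlight>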
 
=== Fixed point iteration method ===
We can use [[fixed-point iteration]] to find the root of a function. Given a function <math> f(x) </math> which we have set to zero to find the root (<math> f(x)=0 </math>), we rewrite the equation in terms of <math> x </math> so that <math> f(x)=0 </math> becomes <math> x=g(x) </math> (note that there are often many possible functions <math> g(x) </math> for each equation <math> f(x)=0 </math>). Next, we relabel each side of the equation as <math> x_{n+1}=g(x_{n}) </math> so that we can perform the iteration. Finally, we pick a starting value <math> x_{1} </math> and iterate until the values converge towards a root of the function. If the iteration converges, its limit is a root; however, it converges only if <math> |g'(x)|<1 </math> in a neighborhood of the root.
 
As an example of converting <math> f(x)=0 </math> to <math> x=g(x) </math>, given the function <math> f(x)=x^2+x-1 </math>, we can rewrite it as any of the following equations (a short implementation sketch follows the list).
:<math> x_{n+1}=(1/x_n) - 1 </math>,
:<math> x_{n+1}=1/(x_n+1) </math>,
:<math> x_{n+1}=1-x_n^2 </math>,
:<math> x_{n+1}=x_n^2+2x_n-1 </math>, or
:<math> x_{n+1}=\pm \sqrt{1-x_n} </math>.
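For instance, a minimal fixed-point iteration sketch in Python using the second rewriting above, <math> x_{n+1}=1/(x_n+1) </math> (the function name and tolerance are illustrative):

<syntaxhighlight lang="python">
def fixed_point(g, x1, tol=1e-12, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive values agree to within tol."""
    x = x1
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Root of f(x) = x^2 + x - 1 via x = 1/(x + 1); |g'(x)| < 1 near the root, so the iteration converges.
print(fixed_point(lambda x: 1 / (x + 1), 1.0))  # ~0.6180339887
</syntaxhighlight>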
 
=== Inverse interpolation ===
The appearance of complex values in interpolation methods can be avoided by interpolating the [[inverse function|inverse]] of ''f'', resulting in the [[inverse quadratic interpolation]] method. Again, convergence is asymptotically faster than the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root.
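A sketch of a single inverse quadratic interpolation step in Python: the three points are interpolated with {{math|''x''}} regarded as a function of {{math|1=''y'' = ''f''(''x'')}} (Lagrange form), and the interpolant is evaluated at {{math|1=''y'' = 0}} (the helper name is illustrative):

<syntaxhighlight lang="python">
def inverse_quadratic_step(f, x0, x1, x2):
    """One step: interpolate x as a quadratic in y = f(x) through three points, evaluate at y = 0."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    return (x0 * f1 * f2 / ((f0 - f1) * (f0 - f2))
            + x1 * f0 * f2 / ((f1 - f0) * (f1 - f2))
            + x2 * f0 * f1 / ((f2 - f0) * (f2 - f1)))

print(inverse_quadratic_step(lambda x: x * x - 2, 1.0, 1.5, 2.0))  # ~1.4095, one step toward sqrt(2)
</syntaxhighlight>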
 
== Combinations of methods ==
 
=== Brent's method ===
[[Brent's method]] is a combination of the bisection method, the secant method and [[inverse quadratic interpolation]]. At every iteration, Brent's method decides which method out of these three is likely to do best, and proceeds by doing a step according to that method. This gives a robust and fast method, which therefore enjoys considerable popularity.
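Because a careful implementation involves a number of special cases, Brent's method is usually taken from a library rather than written by hand; for example, SciPy exposes it as <code>scipy.optimize.brentq</code>, which requires a bracketing interval:

<syntaxhighlight lang="python">
from scipy.optimize import brentq

# Brent's method needs an interval [a, b] with f(a) and f(b) of opposite signs.
root = brentq(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
print(root)  # ~2.0945514815
</syntaxhighlight>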
 
=== Ridders' method ===
[[Ridders' method]] is a hybrid method that uses the value of the function at the midpoint of the interval to perform an exponential interpolation to the root. This gives fast convergence while guaranteeing at most twice the number of iterations required by the bisection method.
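A sketch of Ridders' iteration in Python, following the usual update formula in which the midpoint value enters through the square root <math>\sqrt{f(c)^2 - f(a)f(b)}</math> (the function name and tolerances are illustrative):

<syntaxhighlight lang="python">
import math

def ridders(f, a, b, tol=1e-12, max_iter=60):
    """Ridders' method: exponential interpolation using the bracket endpoints and the midpoint."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2
        fc = f(c)
        s = math.sqrt(fc * fc - fa * fb)
        if s == 0:
            return c
        d = c + (c - a) * (1 if fa > fb else -1) * fc / s   # interpolated point, always inside the bracket
        fd = f(d)
        if abs(fd) < tol or abs(b - a) < tol:
            return d
        if fc * fd < 0:          # keep a pair of points that still brackets the root
            a, fa, b, fb = c, fc, d, fd
        elif fa * fd < 0:
            b, fb = d, fd
        else:
            a, fa = d, fd
    return (a + b) / 2

print(ridders(lambda x: math.cos(x) - x, 0.0, 1.0))  # ~0.7390851332
</syntaxhighlight>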
 
== Roots of polynomials {{anchor|Polynomials}} ==
{{excerpt|Polynomial root-finding algorithms}}
 
== Finding roots in higher dimensions ==
 
The [[bisection method]] has been generalized to higher dimensions; these methods are called '''generalized bisection methods'''.<ref name=":0">{{Cite journal |last1=Mourrain |first1=B. |last2=Vrahatis |first2=M. N. |last3=Yakoubsohn |first3=J. C. |date=2002-06-01 |title=On the Complexity of Isolating Real Roots and Computing with Certainty the Topological Degree |journal=Journal of Complexity |language=en |volume=18 |issue=2 |pages=612–640 |doi=10.1006/jcom.2001.0636 |issn=0885-064X|doi-access=free }}</ref><ref>{{Cite book |last=Vrahatis |first=Michael N. |date=2020 |editor-last=Sergeyev |editor-first=Yaroslav D. |editor2-last=Kvasov |editor2-first=Dmitri E. |chapter=Generalizations of the Intermediate Value Theorem for Approximating Fixed Points and Zeros of Continuous Functions |chapter-url=https://link.springer.com/chapter/10.1007/978-3-030-40616-5_17 |title=Numerical Computations: Theory and Algorithms |series=Lecture Notes in Computer Science |language=en |___location=Cham |publisher=Springer International Publishing |volume=11974 |pages=223–238 |doi=10.1007/978-3-030-40616-5_17 |isbn=978-3-030-40616-5 |s2cid=211160947}}</ref> At each iteration, the ___domain is partitioned into two parts, and the algorithm decides, based on a small number of function evaluations, which of these two parts must contain a root. In one dimension, the criterion for decision is that the function has opposite signs. The main challenge in extending the method to multiple dimensions is to find a criterion that can be computed easily and guarantees the existence of a root.
 
The [[Poincaré–Miranda theorem]] gives a criterion for the existence of a root in a rectangle, but it is hard to verify because it requires evaluating the function on the entire boundary of the rectangle.
 
Another criterion is given by a theorem of [[Leopold Kronecker|Kronecker]].<ref>{{Cite book |last1=Ortega |first1= James M. |last2=Rheinboldt |first2=Werner C. |title=Iterative solution of nonlinear equations in several variables |date=2000 |publisher=Society for Industrial and Applied Mathematics |isbn=978-0-89871-461-6 }}</ref>{{Page needed|date=December 2024}} It says that, if the [[Degree of a continuous mapping|topological degree]] of a function ''f'' on a rectangle is non-zero, then the rectangle must contain at least one root of ''f''. This criterion is the basis for several root-finding methods, such as those of Stenger<ref>{{Cite journal |last=Stenger |first=Frank |date=1975-03-01 |title=Computing the topological degree of a mapping in R<sup>n</sup> |url=https://doi.org/10.1007/BF01419526 |journal=Numerische Mathematik |language=en |volume=25 |issue=1 |pages=23–38 |doi=10.1007/BF01419526 |s2cid=122196773 |issn=0945-3245|url-access=subscription }}</ref> and Kearfott.<ref>{{Cite journal |last=Kearfott |first=Baker |date=1979-06-01 |title=An efficient degree-computation method for a generalized method of bisection |url=https://doi.org/10.1007/BF01404868 |journal=Numerische Mathematik |volume=32 |issue=2 |pages=109–127 |doi=10.1007/BF01404868 |s2cid=122058552 |issn=0029-599X|url-access=subscription }}</ref> However, computing the topological degree can be time-consuming.
 
A third criterion is based on a ''characteristic polyhedron''. This criterion is used by a method called Characteristic Bisection.<ref name=":0" />{{Rp|page=19--}} It does not require computing the topological degree; it only requires computing the signs of function values. The number of required evaluations is at least <math>\log_2(D/\epsilon)</math>, where ''D'' is the length of the longest edge of the characteristic polyhedron.<ref name=":2">{{Cite journal |last1=Vrahatis |first1=M. N. |last2=Iordanidis |first2=K. I. |date=1986-03-01 |title=A Rapid Generalized Method of Bisection for Solving Systems of Non-linear Equations |url=https://doi.org/10.1007/BF01389620 |journal=Numerische Mathematik |language=en |volume=49 |issue=2 |pages=123–138 |doi=10.1007/BF01389620 |issn=0945-3245 |s2cid=121771945|url-access=subscription }}</ref>{{Rp|page=11|___location=Lemma.4.7}} Note that Vrahatis and Iordanidis <ref name=":2" /> prove a lower bound on the number of evaluations, and not an upper bound.
 
A fourth method uses an [[intermediate value theorem]] on simplices.<ref>{{Cite journal |last=Vrahatis |first=Michael N. |date=2020-04-15 |title=Intermediate value theorem for simplices for simplicial approximation of fixed points and zeros |journal=Topology and Its Applications |language=en |volume=275 |pages=107036 |doi=10.1016/j.topol.2019.107036 |s2cid=213249321 |issn=0166-8641|doi-access=free }}</ref> Again, no upper bound on the number of queries is given.
 
== See also ==
{{cols}}
* [[List of root finding algorithms]]
* [[Fixed-point computation]]
* {{annotated link|Broyden's method}}
* {{annotated link|Cryptographically secure pseudorandom number generator}}
* [[GNU Scientific Library]]
* {{annotated link|Graeffe's method}}
* {{annotated link|Lill's method}}
* {{annotated link|MPSolve}}
* {{annotated link|Multiplicity (mathematics)}}
* [[Nth root algorithm|{{math|''n''}}th root algorithm]]
* {{annotated link|System of polynomial equations}}
* {{annotated link|Kantorovich theorem}}
{{colend}}
 
== References ==
{{reflist}}
== Further reading ==
* Victor Yakovlevich Pan: "Solving a Polynomial Equation: Some History and Recent Progress", ''SIAM Review'', Vol. 39, No. 2, pp. 187–220 (June 1997).
* John Michael McNamee: ''Numerical Methods for Roots of Polynomials - Part I'', Elsevier, ISBN 978-0-444-52729-5 (2007).
* John Michael McNamee and Victor Yakovlevich Pan: ''Numerical Methods for Roots of Polynomials - Part II'', Elsevier, ISBN 978-0-444-52730-1 (2013).
 
 
{{Root-finding algorithms}}
{{Data structures and algorithms}}
 
[[Category:Root-finding algorithms| ]]