{{Short description|Algorithms for zeros of functions}}
In [[numerical analysis]], a '''root-finding algorithm''' is an [[algorithm]] for finding [[Zero of a function|zeros]], also called "roots", of [[continuous function]]s. A [[zero of a function]] {{math|''f''}}, from the [[real number]]s to real numbers or from the [[complex number]]s to the complex numbers, is a number {{math|''x''}} such that {{math|1=''f''(''x'') = 0}}. As, generally, the zeros of a function cannot be computed exactly nor expressed in [[closed form expression|closed form]], root-finding algorithms provide approximations to zeros, expressed either as [[floating-point arithmetic|floating-point]] numbers or as small isolating [[interval (mathematics)|intervals]], or [[disk (mathematics)|disks]] for complex roots (an interval or disk output being equivalent to an approximate output together with an error bound).<ref>{{Cite book |last1=Press |first1=W. H. |title=Numerical Recipes: The Art of Scientific Computing |last2=Teukolsky |first2=S. A. |last3=Vetterling |first3=W. T. |last4=Flannery |first4=B. P. |publisher=Cambridge University Press |year=2007 |isbn=978-0-521-88068-8 |edition=3rd |publication-place=New York |chapter=Chapter 9. Root Finding and Nonlinear Sets of Equations |chapter-url=http://apps.nrbook.com/empanel/index.html#pg=442}}</ref>
[[Equation solving|Solving an equation]] {{math|1=''f''(''x'') = ''g''(''x'')}} is the same as finding the roots of the function {{math|1=''h''(''x'') = ''f''(''x'') − ''g''(''x'')}}. Thus root-finding algorithms can be used to solve any equation of continuous functions.
Most numerical root-finding methods are [[iterative method]]s: they require one or more initial guesses of the root as starting values, and each iteration produces a successively more accurate approximation, so that the sequence ideally converges to a root as a limit. Since the iteration must be stopped at some point, these methods produce an approximation to the root, not an exact solution.
The behavior of general root-finding algorithms is studied in [[numerical analysis]]. However, for polynomials specifically, the study of root-finding belongs generally to [[computer algebra]], since algebraic properties of polynomials are fundamental for the most efficient algorithms.
== Bracketing methods ==
Bracketing methods determine successively smaller intervals (brackets) that contain a root. When the interval is small enough, a root has been located to the desired accuracy. Bracketing methods generally rely on the [[intermediate value theorem]]: if a continuous function has values of opposite signs at the endpoints of an interval, then it has at least one root in that interval. They therefore require starting with an interval on whose endpoints the function takes opposite signs.
=== Bisection method ===
The simplest root-finding algorithm is the [[bisection method]]. Let {{math|''f''}} be a [[continuous function]] for which one knows an interval {{math|[''a'', ''b'']}} such that {{math|''f''(''a'')}} and {{math|''f''(''b'')}} have opposite signs (a bracket). Let {{math|1=''c'' = (''a'' + ''b'')/2}} be the middle of the interval. Then either {{math|''f''(''a'')}} and {{math|''f''(''c'')}}, or {{math|''f''(''c'')}} and {{math|''f''(''b'')}}, have opposite signs, so the size of the bracket has been halved. Although the bisection method is robust, it gains only one [[bit]] of accuracy per iteration; other methods, under appropriate conditions, gain accuracy faster.
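The following is a minimal sketch of the method in Python; the function name, tolerance, and iteration cap are illustrative choices rather than part of the method.

<syntaxhighlight lang="python">
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2              # midpoint of the current bracket
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:
            return c                 # bracket is small enough: accept the midpoint
        if fa * fc < 0:              # a root lies in [a, c]
            b, fb = c, fc
        else:                        # a root lies in [c, b]
            a, fa = c, fc
    return (a + b) / 2
</syntaxhighlight>

For example, <code>bisect(lambda x: x*x - 2, 0.0, 2.0)</code> converges to {{math|{{sqrt|2}}}}.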
=== False position (''regula falsi'') ===
The false position method is similar to the bisection method, but instead of the midpoint it uses the point where the line through {{math|(''a'', ''f''(''a''))}} and {{math|(''b'', ''f''(''b''))}} crosses the axis:
:<math>c= \frac{af(b)-bf(a)}{f(b)-f(a)}.</math>
False position is similar to the [[secant method]], except that, instead of retaining the last two points, it makes sure to keep one point on either side of the root.
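A minimal Python sketch, structured like the bisection sketch above and using the formula for {{math|''c''}} just given (names and tolerances are illustrative):

<syntaxhighlight lang="python">
def false_position(f, a, b, tol=1e-12, max_iter=200):
    """Regula falsi: split the bracket at the x-intercept of the secant
    line through (a, f(a)) and (b, f(b)), keeping a sign change."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # secant intercept (formula above)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:      # keep the sub-interval whose endpoints bracket the root
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c
</syntaxhighlight>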
== Interpolation ==
Many root-finding processes work by [[interpolation]]. This consists of using the last computed approximate values of the root to approximate the function by a [[polynomial]] of low degree, which takes the same values at these approximate roots. The root of the polynomial is then computed and used as a new approximate value of the root of the function, and the process is iterated.
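As an illustration, the following hedged Python sketch performs one step of ''inverse quadratic interpolation'' (the degree-two variant used inside [[Brent's method]]): it fits {{math|''x''}} as a quadratic function of {{math|''y''}} through the three latest points and evaluates that fit at {{math|1=''y'' = 0}}. The function and variable names are illustrative.

<syntaxhighlight lang="python">
def inverse_quadratic_step(f, x0, x1, x2):
    """One interpolation step: Lagrange-interpolate x as a quadratic in y
    through (f(x0), x0), (f(x1), x1), (f(x2), x2), evaluated at y = 0."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    return (x0 * f1 * f2 / ((f0 - f1) * (f0 - f2))
          + x1 * f0 * f2 / ((f1 - f0) * (f1 - f2))
          + x2 * f0 * f1 / ((f2 - f0) * (f2 - f1)))
</syntaxhighlight>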
== Iterative methods ==
Although all root-finding algorithms proceed by [[iteration]], an [[iterative method|iterative]] root-finding method generally uses a specific type of iteration, consisting of defining an auxiliary function, which is applied to the last computed approximations of a root to obtain a new approximation. The iteration stops when a [[fixed-point iteration|fixed point]] of the auxiliary function is reached to the desired precision, that is, when each newly computed value is sufficiently close to the preceding ones.
=== Newton's method (and similar derivative-based methods) ===
[[Newton's method]] assumes the function ''f'' to have a continuous [[derivative]]. Newton's method may not converge if started too far away from a root. However, when it does converge, it is faster than the bisection method; its [[order of convergence]] is usually quadratic, roughly doubling the number of correct digits at each iteration. Newton's method is also important because it readily generalizes to higher-dimensional problems. Newton-like methods with higher orders of convergence are [[Householder's method]]s; the first one after Newton's method is [[Halley's method]], with cubic order of convergence.
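A minimal Python sketch of the iteration {{math|1=''x''<sub>''n''+1</sub> = ''x''<sub>''n''</sub> − ''f''(''x''<sub>''n''</sub>)/''f''&prime;(''x''<sub>''n''</sub>)}}, assuming the caller supplies the derivative; tolerances and the iteration cap are illustrative.

<syntaxhighlight lang="python">
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: follow the tangent line at the current iterate
    down to its x-intercept.  df must be the derivative of f."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0:
            raise ZeroDivisionError("derivative vanished; Newton step undefined")
        step = f(x) / dfx
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence; try a starting point closer to a root")

# Example: newton(lambda x: x*x - 2, lambda x: 2*x, 1.0) converges to sqrt(2).
</syntaxhighlight>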
=== Secant method ===
Replacing the derivative in Newton's method with a [[finite difference]], we get the [[secant method]]. This method does not require the computation (nor the existence) of a derivative, but the price is slower convergence (the order of convergence is the [[golden ratio]], approximately 1.62). A generalization of the secant method to higher dimensions is [[Broyden's method]].
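A minimal Python sketch; the two starting values play the role that the single starting value and the derivative play in Newton's method (names and tolerances are illustrative):

<syntaxhighlight lang="python">
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton's method with the derivative replaced by the
    finite difference (f(x1) - f(x0)) / (x1 - x0)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("flat secant line; cannot continue")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # x-intercept of the secant line
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1
</syntaxhighlight>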
=== Steffensen's method ===
If we use a polynomial fit to remove the quadratic part of the finite difference used in the secant method, so that it better approximates the derivative, we obtain [[Steffensen's method]], which has quadratic convergence and whose behavior (both good and bad) is essentially the same as that of Newton's method, but which does not require a derivative.
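A hedged Python sketch of one common form of the method, in which the derivative is approximated by the divided difference {{math|1=''g''(''x'') = (''f''(''x'' + ''f''(''x'')) − ''f''(''x''))/''f''(''x'')}}; names and tolerances are illustrative.

<syntaxhighlight lang="python">
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's method: Newton-like iteration using the slope estimate
    g(x) = (f(x + f(x)) - f(x)) / f(x), so no derivative of f is needed,
    yet convergence is quadratic near a simple root."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        g = (f(x + fx) - fx) / fx      # divided-difference slope estimate
        if g == 0:
            raise ZeroDivisionError("slope estimate vanished")
        step = fx / g
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence from this starting point")
</syntaxhighlight>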
=== Fixed point iteration method ===
To find a root of {{math|''f''}}, one may rewrite the equation {{math|1=''f''(''x'') = 0}} in the form {{math|1=''x'' = ''g''(''x'')}} and iterate {{math|1=''x''<sub>''n''+1</sub> = ''g''(''x''<sub>''n''</sub>)}}. If {{math|''g''}} is a [[contraction mapping|contraction]] on an interval around the root that it maps into itself, the sequence converges to a fixed point of {{math|''g''}}, which is a root of {{math|''f''}}.
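A minimal Python sketch; the rewriting {{math|1=''g''(''x'') = (''x'' + 2/''x'')/2}} in the example is one illustrative choice among many for {{math|1=''f''(''x'') = ''x''<sup>2</sup> − 2}}.

<syntaxhighlight lang="python">
def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x = g(x) until successive values agree to within tol.
    Converges when g is a contraction near the fixed point."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence; g may not be a contraction here")

# Example: a root of f(x) = x**2 - 2 via the rewriting x = (x + 2/x) / 2.
root = fixed_point(lambda x: (x + 2 / x) / 2, 1.0)   # converges to sqrt(2)
</syntaxhighlight>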
== Finding roots in higher dimensions ==
The [[bisection method]] has been generalized to higher dimensions; these methods are called '''generalized bisection methods'''.<ref name=":0">{{Cite journal |last1=Mourrain |first1=B. |last2=Vrahatis |first2=M. N. |last3=Yakoubsohn |first3=J. C. |date=2002-06-01 |title=On the Complexity of Isolating Real Roots and Computing with Certainty the Topological Degree |journal=Journal of Complexity |language=en |volume=18 |issue=2 |pages=612–640 |doi=10.1006/jcom.2001.0636 |issn=0885-064X|doi-access=free }}</ref><ref>{{Cite book |last=Vrahatis |first=Michael N. |date=2020 |editor-last=Sergeyev |editor-first=Yaroslav D. |editor2-last=Kvasov |editor2-first=Dmitri E. |chapter=Generalizations of the Intermediate Value Theorem for Approximating Fixed Points and Zeros of Continuous Functions |chapter-url=https://link.springer.com/chapter/10.1007/978-3-030-40616-5_17 |title=Numerical Computations: Theory and Algorithms |series=Lecture Notes in Computer Science |language=en |___location=Cham |publisher=Springer International Publishing |volume=11974 |pages=223–238 |doi=10.1007/978-3-030-40616-5_17 |isbn=978-3-030-40616-5 |s2cid=211160947}}</ref> At each iteration, the ___domain is partitioned into two parts, and the algorithm decides, based on a small number of function evaluations, which of these two parts must contain a root. In one dimension, the criterion for the decision is that the function has opposite signs at the endpoints of the interval. The main challenge in extending the method to multiple dimensions is to find a criterion that can be computed easily and guarantees the existence of a root.
The [[Poincaré–Miranda theorem]] gives a criterion for the existence of a root in a rectangle, but it is hard to verify, because it requires sign conditions to hold on the entire boundary of the rectangle.
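In two dimensions, the criterion asks that the first component of {{math|''f''}} take opposite signs on the two vertical faces of the rectangle, and the second component on the two horizontal faces. The following hedged Python sketch merely ''samples'' these sign conditions at finitely many boundary points; sampling yields evidence rather than a proof, which is exactly why the criterion is hard to verify. All names are illustrative.

<syntaxhighlight lang="python">
def miranda_evidence(f1, f2, a1, b1, a2, b2, n=50):
    """Sample the Poincaré–Miranda sign conditions on the four faces of
    the rectangle [a1, b1] x [a2, b2] at n points per face."""
    ts = [i / (n - 1) for i in range(n)]
    xs = [a1 + t * (b1 - a1) for t in ts]      # samples along the x1 direction
    ys = [a2 + t * (b2 - a2) for t in ts]      # samples along the x2 direction
    return (all(f1(a1, y) <= 0 for y in ys)    # f1 <= 0 on the left face
        and all(f1(b1, y) >= 0 for y in ys)    # f1 >= 0 on the right face
        and all(f2(x, a2) <= 0 for x in xs)    # f2 <= 0 on the bottom face
        and all(f2(x, b2) >= 0 for x in xs))   # f2 >= 0 on the top face
</syntaxhighlight>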
Another criterion is given by a theorem of [[Leopold Kronecker|Kronecker]]: if the [[topological degree]] of a function {{math|''f''}} on a rectangle is non-zero, then the rectangle contains at least one root of {{math|''f''}}. However, computing the topological degree can be time-consuming.
A third criterion is based on a ''characteristic polyhedron''. This criterion is used by a method called Characteristic Bisection.<ref name=":0" />{{Rp|page=19}} It does not require computing the topological degree; it only requires computing the signs of function values.
A fourth method uses an [[intermediate value theorem]] on [[simplex|simplices]].
== See also ==
== References ==
{{reflist}}
== Further reading ==
* Victor Yakovlevich Pan: "Solving a Polynomial Equation: Some History and Recent Progress", ''SIAM Review'', Vol. 39, No. 2, pp. 187–220 (June 1997).
* J.M. McNamee: "Numerical Methods for Roots of Polynomials - Part I", Elsevier (2007).
* J.M. McNamee and Victor Y. Pan: "Numerical Methods for Roots of Polynomials - Part II", Elsevier (2013).