{{short description|Root-finding algorithm in numerical analysis}}
In [[numerical analysis]], '''Ridders' method''' is a [[root-finding algorithm]] based on the [[false position method]] and the use of an [[exponential function]] to successively approximate a [[Root of a function|root]] of a [[continuous function]] <math>f(x)</math>. The method is due to C. Ridders.<ref>{{Cite journal | last1 = Ridders | first1 = C. | doi = 10.1109/TCS.1979.1084580 | title = A new algorithm for computing a single root of a real continuous function | journal = IEEE Transactions on Circuits and Systems | volume = 26 | issue = 11 | pages = 979–980 | year = 1979 }}</ref><ref>{{cite book | title=Numerical Methods in Engineering with Python | first=Jaan | last=Kiusalaas | publisher=Cambridge University Press | year=2010 | isbn=978-0-521-19132-6 | edition=2nd | pages=146–150 | url=https://books.google.com/books?id=9SG1r8EJawIC&pg=PT156}}</ref>
 
Ridders' method is simpler than [[Muller's method]] or [[Brent's method]] but with similar performance.<ref>{{cite book | last1=Press | first1=WH | last2=Teukolsky | first2=SA | last3=Vetterling | first3=WT | last4=Flannery | first4=BP | year=2007 | title=[[Numerical Recipes]]: The Art of Scientific Computing | edition=3rd | publisher=Cambridge University Press | publication-place=New York | isbn=978-0-521-88068-8 | chapter=Section 9.2.1. Ridders' Method | chapter-url=http://apps.nrbook.com/empanel/index.html#pg=452}}</ref> The formula below converges quadratically when the function is well-behaved, which implies that the number of additional significant digits found at each step approximately doubles; but the function has to be evaluated twice for each step, so the overall [[order of convergence]] of the method with respect to function evaluations, rather than with respect to the number of iterates, is <math>\sqrt{2}</math>. If the function is not well-behaved, the root remains bracketed and the length of the bracketing interval at least halves on each iteration, so convergence is guaranteed. The algorithm also makes use of square roots, which are slower than basic floating point operations.
 
==Method==
Press et al. (2007) describe the method as follows. Given two values of the independent variable, <math>x_0</math> and <math>x_2</math>, which are on two different sides of the root being sought so that <math>f(x_0)f(x_2) < 0</math>, the method begins by evaluating the function at the midpoint <math>x_1 = (x_0+x_2)/2</math>. One then finds the unique exponential function of the form <math>e^{ax}</math> such that the function <math>h(x)=f(x)e^{ax}</math> satisfies <math>h(x_1)=(h(x_0)+h(x_2))/2</math>. Specifically, the parameter <math>a</math> is determined by
:<math>e^{a(x_1 - x_0)} = \frac{f(x_1)-\operatorname{sign}[f(x_0)]\sqrt{f(x_1)^2 - f(x_0)f(x_2)}}{f(x_2)} .</math>
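As a numerical sanity check of this transformation, the following Python sketch solves the formula above for <math>e^{a(x_1-x_0)}</math> and confirms that the transformed values at the three points are collinear, i.e. that <math>h(x_1)</math> equals the average of <math>h(x_0)</math> and <math>h(x_2)</math>. The test function <math>f(x)=x^2-2</math> and the bracket <math>[1, 2]</math> are assumptions chosen for illustration, not taken from the source.

```python
import math

# Illustrative test function (an assumption, not from the article):
# f(x) = x^2 - 2 on the bracket [1, 2], which contains the root sqrt(2).
def f(x):
    return x * x - 2.0

x0, x2 = 1.0, 2.0           # bracket with f(x0) * f(x2) < 0
x1 = (x0 + x2) / 2.0        # midpoint
f0, f1, f2 = f(x0), f(x1), f(x2)

# e^{a(x1 - x0)} from the displayed formula above.
w = (f1 - math.copysign(1.0, f0) * math.sqrt(f1 * f1 - f0 * f2)) / f2
a = math.log(w) / (x1 - x0)

# h(x) = f(x) e^{a x}: the midpoint value equals the average of the
# endpoint values, so the three transformed points lie on a line.
def h(x):
    return f(x) * math.exp(a * x)

print(h(x1) - (h(x0) + h(x2)) / 2.0)   # zero up to rounding error
```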
 
The false position method is then applied to the points <math>(x_0, h(x_0))</math> and <math>(x_2,h(x_2))</math>, leading to a new value <math>x_3</math> between <math>x_0</math> and <math>x_2</math> (equation 9.2.4 from Press et al.),
:<math>x_3 = x_1 + (x_1 - x_0)\frac{\operatorname{sign}[f(x_0) - f(x_2)]f(x_1)}{\sqrt{f(x_1)^2 - f(x_0)f(x_2)}} ,</math>
which will be used as one of the two bracketing values in the next step of the iteration. The other bracketing value is taken to be <math>x_1</math> if <math>f(x_1)f(x_3) < 0</math> (which will be true in the well-behaved case), or otherwise whichever of <math>x_0</math> and <math>x_2</math> has a function value of opposite sign to <math>f(x_3)</math>. The iterative procedure can be terminated when a target accuracy is obtained.
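The full iteration described above can be sketched in Python as follows. This is a minimal illustration under stated assumptions, not a production root finder: the function name <code>ridders</code>, the stopping rule based on the bracket width, and the iteration cap are choices of this sketch, not prescribed by the source.

```python
import math

def ridders(f, x0, x2, tol=1e-12, max_iter=60):
    """Minimal sketch of Ridders' method as described above."""
    f0, f2 = f(x0), f(x2)
    if f0 == 0.0:
        return x0
    if f2 == 0.0:
        return x2
    if f0 * f2 > 0.0:
        raise ValueError("root must be bracketed: f(x0)*f(x2) < 0")
    x3 = 0.5 * (x0 + x2)
    for _ in range(max_iter):
        x1 = 0.5 * (x0 + x2)               # midpoint
        f1 = f(x1)
        s = math.sqrt(f1 * f1 - f0 * f2)   # > 0 because f0 * f2 < 0
        # False position step on the exponentially transformed values:
        x3 = x1 + (x1 - x0) * math.copysign(1.0, f0 - f2) * f1 / s
        f3 = f(x3)
        if f3 == 0.0 or abs(x2 - x0) < tol:
            return x3
        # Re-bracket: prefer (x1, x3); otherwise keep whichever of
        # x0 and x2 has a function value of sign opposite to f(x3).
        if f1 * f3 < 0.0:
            x0, f0 = x1, f1
            x2, f2 = x3, f3
        elif f0 * f3 < 0.0:
            x2, f2 = x3, f3
        else:
            x0, f0 = x3, f3
    return x3
```

Note that since <math>|f(x_1)| \le \sqrt{f(x_1)^2 - f(x_0)f(x_2)}</math>, the step from <math>x_1</math> never exceeds half the bracket width, so <math>x_3</math> always stays inside the bracket.

```python
# Example: find sqrt(2) as the root of x^2 - 2 on [1, 2].
root = ridders(lambda x: x * x - 2.0, 1.0, 2.0)
```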
 
==References==
{{reflist}}
 
{{root-finding algorithms}}
 
[[Category:Root-finding algorithms]]
 
{{mathapplied-stub}}