Steffensen's method

 
==Simple description==
The simplest form of the formula for Steffensen's method occurs when it is used to find the zeros, or roots, of a function <math>f</math>, that is, to find the input value <math>x_\star</math> that satisfies <math>f(x_\star)=0</math>. Near the solution <math>x_\star</math>, the function <math>f</math> is supposed to approximately satisfy <math>-1 < f'(x_\star) < 0</math>, which makes it adequate as a correction function for finding its own solution, although it is not required to be efficient. For some functions, Steffensen's method can work even if this condition is not met, but in such a case, the starting value <math>x_0</math> must be ''very'' close to the actual solution <math>x_\star</math>, and convergence to the solution may be slow.
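The condition on <math>f'(x_\star)</math> can be illustrated numerically. As a minimal sketch (the function <math>f(x) = \ln 2 - \ln x</math> and the starting value are illustrative choices, not taken from the article), a function whose derivative at the root lies in <math>(-1, 0)</math> serves as its own correction function, so that simply adding <math>f(x_n)</math> to <math>x_n</math> already converges:

```python
import math

# Illustrative example (not from the article): f(x) = ln 2 - ln x has its
# root at x = 2, and f'(2) = -1/2 lies in the interval (-1, 0), so the
# fixed-point iteration x_{n+1} = x_n + f(x_n) converges to the root.
def f(x):
    return math.log(2) - math.log(x)

x = 1.0
for _ in range(50):
    x = x + f(x)  # f acts as its own correction function

print(x)  # approaches the root x = 2
```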
 
Given an adequate starting value <math>x_0\ </math>, a sequence of values <math>x_0,\ x_1,\ x_2,\dots,\ x_n,\dots</math> can be generated. When it works, each value in the sequence is much closer to the solution <math>x_\star</math> than the prior value. The value <math>x_n\ </math> from the current step generates the value <math>x_{n+1}\ </math> for the next step, via this formula<ref>Germund Dahlquist, Åke Björck, tr. Ned Anderson (1974) ''Numerical Methods'', pp.&nbsp;230–231, Prentice Hall, Englewood Cliffs, NJ</ref>:
:<math>x_{n+1} = x_n - \frac{f(x_n)}{g(x_n)}</math>

where the slope function <math>g(x_n)</math> is given by
:<math>g(x_n) = \frac{f(x_n + f(x_n)) - f(x_n)}{f(x_n)}</math>
 
The function <math>g</math> is the average slope of the function <math>f</math> between the last sequence point <math>(x,y)=( x_n,\ f(x_n) )</math> and the auxiliary point <math>(x,y)=( x_n + h,\ f(x_n + h) )</math>, with the step <math>h=f(x_n)\ </math>&nbsp;. It is only for the purpose of finding this auxiliary point that the function <math>f</math> must be an adequate correction function for its own solution, to get quick convergence for the sequence of values <math>x_n</math>&nbsp;. For all other parts of the calculation, Steffensen's method only requires the function <math>f</math> to be continuous and to actually have a nearby solution. Several modest modifications of the step <math>h</math> in the slope calculation <math>g</math> exist to accommodate functions <math>f</math> that do not quite meet this requirement.
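The iteration above can be sketched in a few lines of code. This is an illustrative implementation only (the function name, tolerance, and iteration cap are arbitrary choices, not part of the article's formulas):

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Find a root of f by Steffensen's method, starting from x0.

    Illustrative sketch; tol and max_iter are arbitrary choices.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        h = fx                       # step to the auxiliary point
        g = (f(x + h) - fx) / fx     # average slope between x and x + h
        x = x - fx / g               # Newton-like update, g in place of f'
    return x
```

For example, <code>steffensen(lambda x: x*x - 2, 1.5)</code> converges to <math>\sqrt{2} \approx 1.41421356</math> in a handful of iterations.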
 
The main advantage of Steffensen's method is that it can find the roots of a function <math>f</math> just as "[[quadratic convergence|quickly]]" as [[Newton's method]], but the formula does not require a separate function for the derivative, so it can be programmed for any generic function. In this case ''[[quadratic convergence|quickly]]'' means that the number of correct digits in the answer doubles with each step. The cost of the quick convergence is the double function evaluation: both <math>f(x_n)</math> and <math>f(x_n + f(x_n))</math> must be calculated, which might be time-consuming if <math>f</math> is a complicated function.
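The digit-doubling behaviour can be observed by tracking the error at each step. As a sketch (the test function <math>f(x) = x^2 - 2</math> and starting value are illustrative choices), each step roughly squares the error:

```python
# Sketch of quadratic convergence on f(x) = x**2 - 2, whose root is
# sqrt(2) ~ 1.41421356... (an illustrative choice, not from the article).
def step(f, x):
    fx = f(x)
    g = (f(x + fx) - fx) / fx   # average slope over the step h = f(x)
    return x - fx / g           # one Steffensen update

f = lambda x: x * x - 2
x = 1.5
errors = []
for _ in range(4):
    x = step(f, x)
    errors.append(abs(x - 2 ** 0.5))

# The errors shrink roughly quadratically: the number of correct
# digits approximately doubles with each step.
print(errors)
```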