Steffensen's method


In numerical analysis, Steffensen's method is a root-finding method. It is similar to Newton's method in that it also achieves quadratic convergence, but it does not use derivatives. The method is named after Johan Frederik Steffensen.

Simple description

The simplest form of the formula for Steffensen's method occurs when it is used to find the zeros, or roots, of a function f; that is, to find the input value x_⋆ that satisfies f(x_⋆) = 0. Near the solution x_⋆, the function f is supposed to approximately satisfy −1 < f′(x_⋆) < 0, which makes it adequate as an iteration function to find its own solution, although not necessarily efficiently. For some functions, Steffensen's method can work even if this condition is not met, but in such a case, the starting value x_0 must be very close to the actual solution x_⋆, and quadratic convergence may not be obtained.

Given an adequate starting value x_0, a sequence of values x_0, x_1, x_2, ..., x_n, ... can be generated. When it works, each value in the sequence is much closer to the solution x_⋆ than the prior value. The value x_n from the current step generates the value x_{n+1} for the next step, via this formula:[1]

x_{n+1} = x_n - \frac{f(x_n)}{g(x_n)}

for n = 0, 1, 2, 3, ..., where the slope function g(x_n) is a composite of the original function f, given by the following formula:

g(x_n) = \frac{f\bigl(x_n + f(x_n)\bigr)}{f(x_n)} - 1

The function g is the average slope of the function f between the last sequence point (x_n, f(x_n)) and the auxiliary point (x_n + h, f(x_n + h)), with the step h = f(x_n). It is only for the purpose of finding this auxiliary point that the function f must be an adequate iteration function for its own solution, in order to get quick convergence for the sequence of values x_n. For all other parts of the calculation, Steffensen's method only requires the function f to be continuous and to actually have a solution. Several modest modifications of the step h in the slope function g exist to accommodate functions f that do not quite meet this requirement.
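As a concrete illustration of the two formulas above, here is a minimal Python sketch of the iteration. The function name steffensen_root, the tolerance tol, and the example function cos(x) − x are illustrative choices, not part of the original description.

import math

def steffensen_root(f, x0, tol=1e-12, max_iter=50):
    """Find a root of f starting from x0, using the two formulas above.

    Each step costs two evaluations of f: one at x_n and one at the
    auxiliary point x_n + f(x_n).
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:                    # already exactly at a root
            return x
        g = f(x + fx) / fx - 1.0         # slope g(x_n) = f(x_n + f(x_n)) / f(x_n) - 1
        if g == 0.0:
            raise ZeroDivisionError("slope g(x_n) vanished; try a closer starting value")
        x_new = x - fx / g               # x_{n+1} = x_n - f(x_n) / g(x_n)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x                             # best estimate after max_iter steps

# Example: the root of cos(x) - x, approximately 0.7390851
print(steffensen_root(lambda x: math.cos(x) - x, 0.5))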

The main advantage of Steffensen's method is that it can find the roots of an equation f(x) = 0 just as "quickly" as Newton's method, but the formula does not require a separate function for the derivative, so it can be programmed for any generic function. In this case, quickly means that the number of correct digits in the answer doubles with each step. The cost of the quick convergence is the double function evaluation: both f(x_n) and f(x_n + f(x_n)) must be calculated, which might be time-consuming if f is a complicated function.

Similar to Newton's method and most other quadratically convergent methods, the crucial weakness of the method is the choice of the starting value x_0. If the value of x_0 is not "close enough" to the actual solution x_⋆, the method will fail and the sequence of values x_0, x_1, x_2, ... will either flip-flop between two extremes or diverge to infinity (possibly both!).

Generalised definition

Steffensen's method can also be used more abstractly, to find the input values at which a different kind of function f produces the same output as its input: f(x) = x. Such solutions are called fixed points. Many such functions can be used to find their own solutions by repeatedly recycling the result back as input, but the rate of convergence can be slow, or the function can fail to converge at all, depending on the individual function. Steffensen's method accelerates this convergence.

Steffensen's method finds fixed points of a mapping f. In Steffensen's original description, f was supposed to be a real function, but the method has been generalised for functions f defined on a Banach space X.

The method assumes that a family of bounded linear operators {L(u, v) : u, v ∈ X} associated with u and v can be found to satisfy the condition[2]

L(u, v)\,(u - v) = f(u) - f(v)

In the original form (given in the section above), where the function f simply takes in and produces real numbers, the operators L are divided differences. In the general form, the operators L are the analogue of divided differences in the Banach space.
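In the real-valued case, the divided-difference operator is simply the slope of the secant line through the points (u, f(u)) and (v, f(v)); this standard expression is included here for concreteness:

L(u, v) = \frac{f(u) - f(v)}{u - v}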

Steffensen's method is then very similar to Newton's method, except that it uses the divided difference L(f(x_n), x_n) instead of the derivative f′(x_n). It is thus defined by

x_{n+1} = x_n + \bigl[ I - L(f(x_n), x_n) \bigr]^{-1} \bigl( f(x_n) - x_n \bigr)

for n = 1, 2, 3, ..., and where I is the identity operator. If the operator L satisfies

\| L(u, v) - L(x, y) \| \le K \bigl( \| u - x \| + \| v - y \| \bigr)

for some constant K, then the method converges quadratically to a fixed point of f if the initial approximation x_0 is sufficiently close to the desired solution x_⋆, which satisfies x_⋆ = f(x_⋆).
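In the real-valued case, the general formula reduces to the well-known Aitken-style update, since the inverse of the 1 × 1 operator I − L is just division by the number 1 − L. The following Python sketch assumes this scalar case; the name steffensen_fixed_point and the example map cos(x) are illustrative choices, not from the original article.

import math

def steffensen_fixed_point(f, x0, tol=1e-12, max_iter=50):
    """Find a fixed point x = f(x) of a real-valued map f.

    Scalar case of the general formula: the divided difference
    L = (f(f(x_n)) - f(x_n)) / (f(x_n) - x_n) replaces the derivative,
    and [I - L]^{-1} is just division by (1 - L).
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        ffx = f(fx)
        denom = ffx - 2.0 * fx + x          # equals (f(x_n) - x_n) * (L - 1)
        if denom == 0.0:                    # stalled, or already converged
            return fx
        x_new = x - (fx - x) ** 2 / denom   # x_{n+1} = x_n + (1 - L)^{-1} (f(x_n) - x_n)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the fixed point of cos(x), approximately 0.7390851
print(steffensen_fixed_point(math.cos, 0.5))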

References

  1. ^ Germund Dahlquist and Åke Björck, tr. Ned Anderson (1974). Numerical Methods, pp. 230–231. Prentice Hall, Englewood Cliffs, NJ.
  2. ^ L. W. Johnson and D. R. Scholz (June 1968). "On Steffensen's Method". SIAM Journal on Numerical Analysis, vol. 5, no. 2, pp. 296–302.