In numerical analysis, Steffensen's method is a root-finding method. It is similar to Newton's method and also achieves quadratic convergence, but it does not use derivatives. The method is named after Johan Frederik Steffensen.
Simple description
The simplest form of the formula for Steffensen's method occurs when it is used to find the zeros, or roots, of a function $f$, that is, to find the input value $x_\star$ that satisfies $f(x_\star) = 0$. Given an adequate starting value $x_0$, a sequence of values $x_0, x_1, x_2, \dots, x_n, \dots$ can be generated. When it works, each value in the sequence is much closer to the solution $x_\star$ than the prior value. The value $x_n$ from the current step generates the value $x_{n+1}$ for the next step, via this formula[1]:

$$x_{n+1} = x_n - \frac{f(x_n)}{g(x_n)}$$
for $n = 1, 2, 3, \ldots$, where the slope $g(x_n)$ is a composite of the original function $f$ given by the following formula:

$$g(x_n) = \frac{f\bigl(x_n + f(x_n)\bigr)}{f(x_n)} - 1$$
The function $g$ is the average slope of the function $f$ between the last sequence point $(x, y) = \bigl(x_n, f(x_n)\bigr)$ and the auxiliary point $(x, y) = \bigl(x_n + h, f(x_n + h)\bigr)$, with the step $h = f(x_n)$.
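A minimal Python sketch of this iteration follows; the function name steffensen_root, the tolerance, and the iteration cap are illustrative choices rather than part of the method's description.

```python
def steffensen_root(f, x0, tol=1e-12, max_iter=50):
    """Find a root of f by iterating x_{n+1} = x_n - f(x_n) / g(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:                        # already exactly at a root
            return x
        gx = f(x + fx) / fx - 1.0            # g(x_n) = f(x_n + f(x_n)) / f(x_n) - 1
        if gx == 0.0:                        # slope estimate vanished; restart elsewhere
            raise ZeroDivisionError("g(x_n) is zero; try a different starting value")
        x_new = x - fx / gx                  # the Steffensen update
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter steps")
```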
The main advantage of Steffensen's method is that it can find the roots of an equation just as "quickly" as Newton's method, but the formula does not require a separate function for the derivative, so it can be programmed for any generic function. In this case, quickly means that the number of correct digits in the answer doubles with each step. The cost for the quick convergence is the double function evaluation: both $f(x_n)$ and $f\bigl(x_n + f(x_n)\bigr)$ must be calculated, which might be time-consuming if $f$ is a complicated function.
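To illustrate the digit-doubling claim, the short loop below applies the update to the test problem $f(x) = x^2 - 2$ (whose positive root is $\sqrt{2}$) from the starting value 1.5; both the test function and the starting value are chosen purely for demonstration.

```python
import math

f = lambda x: x * x - 2.0          # positive root is sqrt(2)
x = 1.5
for n in range(4):
    fx = f(x)
    gx = f(x + fx) / fx - 1.0      # average slope between x_n and x_n + f(x_n)
    x = x - fx / gx                # one Steffensen step
    print(n + 1, x, abs(x - math.sqrt(2.0)))
```

The printed error drops from roughly $10^{-2}$ to about machine precision within four steps, with the error exponent approximately doubling each time.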
Similar to Newton's method and most other quadratically convergent methods, the crucial weakness of the method is the choice of the starting value $x_0$. If the value of $x_0$ is not "close enough" to the actual solution, the method will fail and the sequence of values will either flip-flop between two extremes or diverge to infinity (possibly both!).
Generalised definition
Steffensen's method can also be used more abstractly, to find the inputs of a different kind of function $f$ for which the output equals the input: $x_\star = f(x_\star)$. Such solutions $x_\star$ are called fixed points. Many such functions can be used to find their own solutions by repeatedly recycling the result back as input, but the rate of convergence can be slow, or the function can fail to converge at all, depending on the individual function. Steffensen's method accelerates this convergence.
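As a concrete sketch for a real-valued map (with illustrative names and tolerances), each accelerated step takes two plain fixed-point steps and combines them; this scalar update is what the general formula given below reduces to, and it coincides with Aitken's delta-squared formula.

```python
def steffensen_fixed_point(f, x0, tol=1e-12, max_iter=50):
    """Accelerate the fixed-point iteration x = f(x) with Steffensen steps."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)                            # first plain fixed-point step
        ffx = f(fx)                          # second plain fixed-point step
        denom = ffx - 2.0 * fx + x
        if denom == 0.0:                     # the plain steps already agree: x is (numerically) fixed
            return ffx
        x_new = x - (fx - x) ** 2 / denom    # Steffensen / Aitken delta-squared update
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter steps")
```

For example, steffensen_fixed_point(math.cos, 0.5) reaches the fixed point of the cosine (about 0.7390851) in a handful of steps, whereas the plain iteration $x \mapsto \cos x$ needs dozens.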
Steffensen's method finds fixed points of a mapping $f$. In Steffensen's original description, $f$ was supposed to be a real function of a real variable, but the method has been generalised to mappings $f : X \to X$ on a Banach space $X$.
The method assumes that a family of bounded linear operators $\{L(u, v) : u, v \in X\}$ associated with $u$ and $v$ can be found to satisfy the condition[2]
$$L(u, v)\,(u - v) = f(u) - f(v).$$
In the original form (given in the section above), where the function $f$ simply takes in and produces real numbers, the operators $L$ are divided differences: in that case $L(u, v) = \dfrac{f(u) - f(v)}{u - v}$. In the general form, the operators $L$ are the analogue of divided differences in the Banach space $X$.
Steffensen's method is then very similar to Newton's method, except that it uses the divided difference $L\bigl(f(x_n), x_n\bigr)$ instead of the derivative $f'(x_n)$. It is thus defined by

$$x_{n+1} = x_n + \bigl[I - L\bigl(f(x_n), x_n\bigr)\bigr]^{-1}\bigl(f(x_n) - x_n\bigr)$$
for $n = 1, 2, 3, \ldots$, where $I$ is the identity operator. If the operator $L$ satisfies

$$\bigl\|L(u, v) - L(x, y)\bigr\| \le K \bigl(\|u - x\| + \|v - y\|\bigr)$$
for some constant $K$, then the method converges quadratically to a fixed point of $f$ if the initial approximation $x_0$ is sufficiently close to the desired solution $x_\star$, which satisfies $x_\star = f(x_\star)$.
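The sketch below illustrates this generalised iteration for a mapping on $\mathbb{R}^n$, using one common but here merely assumed construction of the divided-difference operator (a componentwise forward divided difference); it satisfies $L(u, v)(u - v) = f(u) - f(v)$ by telescoping, and for scalar inputs it reduces to the ordinary divided difference used above. The function names and tolerances are illustrative.

```python
import numpy as np

def divided_difference(f, u, v):
    """Componentwise divided-difference operator L(u, v) on R^n, satisfying
    L(u, v) @ (u - v) == f(u) - f(v).  Column j compares two mixed points
    that differ only in coordinate j (assumes u[j] != v[j] for every j)."""
    n = u.size
    L = np.zeros((n, n))
    for j in range(n):
        w_hi = np.concatenate([u[:j + 1], v[j + 1:]])   # u in coords 0..j, v afterwards
        w_lo = np.concatenate([u[:j], v[j:]])           # u in coords 0..j-1, v afterwards
        L[:, j] = (f(w_hi) - f(w_lo)) / (u[j] - v[j])
    return L

def steffensen_general(f, x0, tol=1e-12, max_iter=50):
    """x_{n+1} = x_n + [I - L(f(x_n), x_n)]^{-1} (f(x_n) - x_n)."""
    x = np.asarray(x0, dtype=float)
    I = np.eye(x.size)
    for _ in range(max_iter):
        fx = f(x)
        L = divided_difference(f, fx, x)             # L(f(x_n), x_n)
        x_new = x + np.linalg.solve(I - L, fx - x)   # generalised Steffensen update
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter steps")
```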