Steffensen's method: Difference between revisions

:<math>x_{n+1} = x_n - \frac{f(x_n)}{g(x_n)}</math>
 
for <math>n = 0,\ 1,\ 2,\ 3,\ \dots</math>, where the slope <math>g(x_n)\ </math> is a composite of the original function <math>f\ </math> given by the following formula:
 
:<math>g(x_n) = \frac{f(x_n + f(x_n)) - f(x_n)}{f(x_n)}</math>
 
The function <math>g\ </math> is the average slope of the function <math>f\ </math> between the last sequence point <math>x=x_n,\ y=f(x_n)</math> and the auxiliary point <math>x=x_n + f(x_n),\ y=f(x_n + f(x_n))</math>.
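The two formulas above can be sketched in a few lines of code. The following Python sketch (illustrative only; the function name, tolerance, and iteration cap are not from the article) computes the slope <math>g(x_n)\ </math> from the values of <math>f\ </math> alone and then takes the Newton-like step:

```python
import math

def steffensen(f, x0, tol=1e-10, max_iter=50):
    """Find a root of f starting from x0 using Steffensen's method."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:                # already exactly on a root
            return x
        g = (f(x + fx) - fx) / fx    # slope g(x_n) from the formula above
        x_new = x - fx / g           # Newton-like step with g in place of f'
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

# Example: solve cos(x) = x, i.e. find the root of f(x) = cos(x) - x.
root = steffensen(lambda x: math.cos(x) - x, 0.5)
```

Note that each pass of the loop evaluates <math>f\ </math> twice, once at <math>x_n\ </math> and once at <math>x_n + f(x_n)\ </math>, but never needs the derivative of <math>f\ </math>.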
 
The main advantage of Steffensen's method is that it can find the roots of a function <math>f\ </math> just as "[[quadratic convergence|quickly]]" as [[Newton's method]], but the formula does not require a separate function for the derivative, so it can be programmed for any generic function. In this case ''[[quadratic convergence|quickly]]'' means that the number of correct digits in the answer doubles with each step. The cost of the quick convergence is the double function evaluation: both <math>f(x_n)\ </math> and <math>f(x_n + f(x_n))\ </math> must be calculated, which might be time-consuming if <math>f\ </math> is a complicated function.
 
Similar to [[Newton's method]] and most other quadratically convergent methods, the crucial weakness of the method is the choice of the starting value <math>x_0\ </math>. If the value of <math>x_0\ </math> is not "close enough" to the actual solution, the method will fail and the sequence of values <math>x_0,\ x_1,\ x_2,\ x_3,\ \dots</math> will either flip-flop between two extremes or diverge to infinity (possibly both!).
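The sensitivity to the starting value can be seen with a small experiment. The sketch below (illustrative; the example function and starting points are chosen here, not taken from the article) applies the iteration to <math>f(x) = x^2 - 2</math>, whose roots are <math>\pm\sqrt{2}</math>: a start near a root converges, while a distant start sends the iterates off toward <math>-\infty</math>.

```python
def steffensen(f, x0, tol=1e-10, max_iter=50):
    """Steffensen iteration; returns the root, or None if it fails to converge."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        g = (f(x + fx) - fx) / fx    # slope estimate from the formula above
        x, x_old = x - fx / g, x
        if abs(x - x_old) < tol:
            return x
    return None                      # starting value was not "close enough"

f = lambda x: x * x - 2              # roots at +/- sqrt(2)

good = steffensen(f, 1.0)            # near the positive root: converges
bad = steffensen(f, -3.0)            # far from either root: iterates run off
```

With the distant start <math>x_0 = -3</math>, the first step already jumps to <math>x_1 = -10</math>, and each subsequent step pushes the iterate further toward <math>-\infty</math>, so the loop exhausts its iteration budget.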
 
==Generalised definition==