{{short description|Newton-like root-finding algorithm that does not use derivatives}}
In [[numerical analysis]], '''Steffensen's method''' is an [[iterative method]] for numerical [[root-finding method|root-finding]] named after [[Johan Frederik Steffensen]] that is similar to the [[secant method]] and to [[Newton's method]]. Steffensen's method achieves a quadratic [[order of convergence]] without using [[derivative]]s, whereas Newton's method converges quadratically but requires derivatives, and the secant method does not require derivatives but converges more slowly than quadratically.
Steffensen's method has the drawback that it requires two function evaluations per step, whereas the secant method requires only one, so depending on the number of iterations each method needs, Steffensen's method is not necessarily the more efficient in terms of [[computational cost]]. Newton's method also requires two evaluations per step – one of the function and one of its derivative – so for most functions, where evaluating the derivative is as computationally costly as evaluating the function itself, its cost per step matches that of Steffensen's method.{{efn|
For rare special-case functions, the derivative for Newton's method can be calculated at negligible cost by reusing saved intermediate results from evaluation of the main function. If optimized in this way, Newton's method becomes only slightly more costly per step than the secant method, with slightly faster convergence.
}}
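In its root-finding form, each step of Steffensen's method replaces the derivative in Newton's iteration with the divided difference <math>g(x) = \tfrac{f(x + f(x)) - f(x)}{f(x)}</math>, which accounts for the two function evaluations per step noted above. A minimal sketch in Python (function and parameter names are illustrative, not part of any standard library):

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Find a root of f near x0 using Steffensen's method.

    Each step uses two evaluations of f and no derivatives:
        g(x) = (f(x + f(x)) - f(x)) / f(x)   # slope estimate
        x   <- x - f(x) / g(x)               # Newton-like update
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        g = (f(x + fx) - fx) / fx  # divided-difference replacement for f'(x)
        x = x - fx / g
    return x

# Example: the positive root of f(x) = x**2 - 2 is sqrt(2).
root = steffensen(lambda x: x * x - 2, 1.0)
```

Like Newton's method, this iteration is only locally convergent: it can fail if the initial guess is far from the root or if the slope estimate vanishes.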
Steffensen's method can be derived as an adaptation of [[Aitken's delta-squared process]] applied to [[fixed-point iteration]]. Viewed in this way, Steffensen's method naturally generalizes to efficient fixed-point calculation in general [[Banach space|Banach spaces]], whenever fixed points are guaranteed to exist and fixed-point iteration is guaranteed to converge, although possibly slowly, by the [[Banach fixed-point theorem]].
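Concretely, in the fixed-point setting one Steffensen step computes two successive fixed-point iterates and combines them with Aitken's delta-squared formula. A minimal sketch for a real-valued map (names are illustrative):

```python
import math

def steffensen_fixed_point(p, x0, tol=1e-12, max_iter=50):
    """Accelerate the fixed-point iteration x <- p(x) with Aitken's
    delta-squared process, which yields Steffensen's method."""
    x = x0
    for _ in range(max_iter):
        x1 = p(x)            # first plain fixed-point iterate
        x2 = p(x1)           # second plain fixed-point iterate
        denom = x2 - 2 * x1 + x
        if denom == 0:       # iterates already (numerically) converged
            return x2
        # Aitken's delta-squared extrapolation of the three iterates
        x_new = x - (x1 - x) ** 2 / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the fixed point of cos(x), the Dottie number ~0.739085.
fp = steffensen_fixed_point(math.cos, 1.0)
```

Plain iteration of <code>cos</code> converges only linearly; the accelerated version typically reaches full precision in a handful of steps.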