{{Short description|Algorithms in numerical analysis}}
In [[numerical analysis]], '''predictor–corrector methods''' belong to a class of [[algorithm]]s designed to integrate ordinary differential equations{{snd}}to find an unknown function that satisfies a given differential equation. All such algorithms proceed in two steps:
 
# The initial, "prediction" step, starts from a function fitted to the function-values and derivative-values at a preceding set of points to extrapolate ("anticipate") this function's value at a subsequent, new point.
# The next, "corrector" step refines the initial approximation by using the ''predicted'' value of the function and ''another method'' to interpolate that unknown function's value at the '''same''' subsequent point.
 
== Predictor–corrector methods for solving ODEs ==
 
When considering the [[numerical methods for ordinary differential equations|numerical solution of ordinary differential equations (ODEs)]], a predictor–corrector method typically uses an [[explicit and implicit methods|explicit method]] for the predictor step and an implicit method for the corrector step.
 
=== Example: Euler method with the trapezoidal rule ===
 
A simple predictor–corrector method (known as [[Heun's method]]) can be constructed from the [[Euler method]] (an explicit method) and the [[trapezoidal rule (differential equations)|trapezoidal rule]] (an implicit method).
 
Consider the differential equation
 
: <math> y' = f(t,y), \quad y(t_0) = y_0, </math>
 
and denote the step size by <math>h</math>.
 
First, the predictor step: starting from the current value <math>y_i</math>, calculate an initial guess value <math>\tilde{y}_{i+1}</math> via the Euler method,
 
: <math>\tilde{y}_{i+1} = y_i + h f(t_i,y_i). </math>
 
Next, the corrector step: improve the initial guess using the trapezoidal rule,
 
: <math> y_{i+1} = y_i + \tfrac12 h \bigl( f(t_i, y_i) + f(t_{i+1},\tilde{y}_{i+1}) \bigr). </math>
 
That value is used as the next step.
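For illustration, the predictor and corrector formulas above can be sketched in Python; the helper name <code>heun_step</code> and the test problem are illustrative choices, not part of any standard library:

```python
import math

def heun_step(f, t, y, h):
    """One predictor-corrector step: explicit Euler predictor
    followed by a trapezoidal-rule corrector (Heun's method)."""
    y_tilde = y + h * f(t, y)                           # predictor
    return y + 0.5 * h * (f(t, y) + f(t + h, y_tilde))  # corrector

# Integrate y' = y, y(0) = 1 on [0, 1]; the exact value at t = 1 is e.
t, y, h = 0.0, 1.0, 0.01
for _ in range(100):
    y = heun_step(lambda t, y: y, t, y, h)
    t += h
error = abs(y - math.e)
```

The scheme is second-order accurate, so halving <code>h</code> reduces the error at a fixed time by roughly a factor of four.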
 
=== PEC mode and PECE mode ===
 
There are different variants of a predictor–corrector method, depending on how often the corrector method is applied. The Predict–Evaluate–Correct–Evaluate (PECE) mode refers to the variant in the above example:
 
: <math> \begin{align}
\tilde{y}_{i+1} &= y_i + h f(t_i,y_i), \\
y_{i+1} &= y_i + \tfrac12 h \bigl( f(t_i, y_i) + f(t_{i+1},\tilde{y}_{i+1}) \bigr).
\end{align} </math>
 
It is also possible to evaluate the function ''f'' only once per step by using the method in Predict–Evaluate–Correct (PEC) mode:
 
: <math> \begin{align}
\tilde{y}_{i+1} &= y_i + h f(t_i,\tilde{y}_i), \\
y_{i+1} &= y_i + \tfrac12 h \bigl( f(t_i, \tilde{y}_i) + f(t_{i+1},\tilde{y}_{i+1}) \bigr).
\end{align} </math>
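A Python sketch of PEC mode (the helper name <code>pec_integrate</code> is an illustrative assumption): the slope at the predicted value is stored and reused, so each step makes exactly one new evaluation of ''f''.

```python
def pec_integrate(f, t, y, h, n_steps):
    """Integrate y' = f(t, y) in PEC mode: predict, evaluate, correct.
    Only one evaluation of f is made per step, because the slope at the
    predicted value is carried over as the next step's old slope."""
    f_old = f(t, y)  # at the first step, the "predicted" value is y itself
    for _ in range(n_steps):
        y_tilde = y + h * f_old            # predict with the stored slope
        f_new = f(t + h, y_tilde)          # evaluate: the one call per step
        y = y + 0.5 * h * (f_old + f_new)  # correct (trapezoidal rule)
        f_old = f_new                      # reuse instead of re-evaluating
        t += h
    return y
```

The saved evaluation comes at a small cost in accuracy, since the old slope is taken at the predicted rather than the corrected value.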
 
Additionally, the corrector step can be repeated in the hope that this achieves an even better approximation to the true solution. If the corrector method is run twice, this yields the PECECE mode:
 
: <math> \begin{align}
\tilde{y}_{i+1} &= y_i + h f(t_i,y_i), \\
\hat{y}_{i+1} &= y_i + \tfrac12 h \bigl( f(t_i, y_i) + f(t_{i+1},\tilde{y}_{i+1}) \bigr), \\
y_{i+1} &= y_i + \tfrac12 h \bigl( f(t_i, y_i) + f(t_{i+1},\hat{y}_{i+1}) \bigr).
\end{align} </math>
 
The PECEC mode has one fewer function evaluation than PECECE mode.
 
More generally, if the corrector is run ''k'' times, the method is in P(EC)<sup>''k''</sup>
or P(EC)<sup>''k''</sup>E mode. If the corrector method is iterated until it converges, this could be called PE(CE)<sup>∞</sup>.<ref>{{harvnb|Butcher|2003|p=104}}</ref>
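Iterating the corrector to convergence, as in PE(CE)<sup>∞</sup> mode, amounts to solving the implicit corrector equation by fixed-point iteration. A Python sketch (the tolerance and iteration cap are illustrative parameters):

```python
def pece_inf_step(f, t, y, h, tol=1e-12, max_iter=50):
    """One PE(CE)^inf step: Euler predictor, then iterate the trapezoidal
    corrector until successive iterates agree to within tol.  For small
    enough h the iteration is a contraction and converges quickly."""
    y_new = y + h * f(t, y)  # predict
    for _ in range(max_iter):
        y_next = y + 0.5 * h * (f(t, y) + f(t + h, y_new))  # correct
        if abs(y_next - y_new) < tol:
            break
        y_new = y_next
    return y_next
```

In the limit this reproduces the implicit trapezoidal rule itself: for <math>y' = y</math>, one step converges to <math>y_i (1 + h/2)/(1 - h/2)</math>.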
 
== See also ==
* [[Beeman's algorithm]]
* [[Heun's method]]
* [[Mehrotra predictor–corrector method]]
* [[Numerical continuation]]
 
==Notes==
{{reflist}}
 
==References==
* {{Citation | last1=Butcher | first1=John C. | author1-link=John C. Butcher | title=Numerical Methods for Ordinary Differential Equations | publisher=[[John Wiley & Sons]] | ___location=New York | isbn=978-0-471-96758-3 | year=2003}}.
*{{Cite book | last1=Press | first1=WH | last2=Teukolsky | first2=SA | last3=Vetterling | first3=WT | last4=Flannery | first4=BP | year=2007 | title=Numerical Recipes: The Art of Scientific Computing | edition=3rd | publisher=Cambridge University Press | publication-place=New York | isbn=978-0-521-88068-8 | chapter=Section 17.6. Multistep, Multivalue, and Predictor-Corrector Methods | chapter-url=http://apps.nrbook.com/empanel/index.html#pg=942}}
 
== External links ==
 
* {{MathWorld |title=Predictor-Corrector Methods |urlname=Predictor-CorrectorMethods}}
* [https://web.archive.org/web/20080617035745/http://www.fisica.uniud.it/~ercolessi/md/md/node22.html Predictor–corrector methods] for differential equations
 
{{Numerical integrators}}
 
{{DEFAULTSORT:Predictor-corrector method}}
[[Category:Algorithms]]
[[Category:Numerical analysis]]