{{Short description|Algorithms in numerical analysis}}
In [[numerical analysis]], '''predictor–corrector methods''' belong to a class of [[algorithm]]s designed to integrate ordinary differential equations{{snd}}to find an unknown function that satisfies a given differential equation. All such algorithms proceed in two steps:
# The initial "prediction" step starts from a function fitted to the function-values and derivative-values at a preceding set of points to extrapolate ("anticipate") this function's value at a subsequent, new point.
# The next, "corrector" step refines the initial approximation by using the ''predicted'' value of the function and ''another method'' to interpolate that unknown function's value at the '''same''' subsequent point.
 
== Predictor–corrector methods for solving ODEs ==
 
When considering the [[numerical methods for ordinary differential equations|numerical solution of ordinary differential equations (ODEs)]], a predictor–corrector method typically uses an [[explicit and implicit methods|explicit method]] for the predictor step and an implicit method for the corrector step.
 
=== Example: Euler method with the trapezoidal rule ===
A simple predictor–corrector method (known as [[Heun's method]]) can be constructed from the [[Euler method]] (an explicit method) and the [[trapezoidal rule (differential equations)|trapezoidal rule]] (an implicit method).

Consider the differential equation
: <math> y' = f(t,y), \quad y(t_0) = y_0, </math>
and denote the step size by <math>h = \Delta t</math>, so that <math>t_{i+1} = t_i + h</math>.

First, the predictor step: starting from the current value <math>y_i</math>, calculate an initial guess value <math>\tilde{y}_{i+1}</math> via the [[Euler method]],
: <math> \tilde{y}_{i+1} = y_i + h f(t_i, y_i). </math>
 
Next, the corrector step: improve the initial guess using the trapezoidal rule,
 
: <math> y_{i+1} = y_i + \tfrac12 h \bigl( f(t_i, y_i) + f(t_{i+1},\tilde{y}_{i+1}) \bigr). </math>
 
That value is used as the next step.
 
Note that the overall error is determined not by the convergence of the corrector iteration but by the step size and the underlying method, which in this example is a trapezoidal (linear) approximation of the actual function. The step size <math>h</math> needs to be relatively small in order to get a good approximation. See also [[stiff equation]].

=== PEC mode and PECE mode ===
 
There are different variants of a predictor–corrector method, depending on how often the corrector method is applied. The Predict–Evaluate–Correct–Evaluate (PECE) mode refers to the variant in the above example:
 
: <math> \begin{align}
\tilde{y}_{i+1} &= y_i + h f(t_i,y_i), \\
y_{i+1} &= y_i + \tfrac12 h \bigl( f(t_i, y_i) + f(t_{i+1},\tilde{y}_{i+1}) \bigr).
\end{align} </math>
 
It is also possible to evaluate the function ''f'' only once per step by using the method in Predict–Evaluate–Correct (PEC) mode:
 
: <math> \begin{align}
\tilde{y}_{i+1} &= y_i + h f(t_i,\tilde{y}_i), \\
y_{i+1} &= y_i + \tfrac12 h \bigl( f(t_i, \tilde{y}_i) + f(t_{i+1},\tilde{y}_{i+1}) \bigr).
\end{align} </math>
 
Additionally, the corrector step can be repeated in the hope that this achieves an even better approximation to the true solution. If the corrector method is run twice, this yields the PECECE mode:
 
: <math> \begin{align}
\tilde{y}_{i+1} &= y_i + h f(t_i,y_i), \\
\hat{y}_{i+1} &= y_i + \tfrac12 h \bigl( f(t_i, y_i) + f(t_{i+1},\tilde{y}_{i+1}) \bigr), \\
y_{i+1} &= y_i + \tfrac12 h \bigl( f(t_i, y_i) + f(t_{i+1},\hat{y}_{i+1}) \bigr).
\end{align} </math>
 
The PECEC mode has one fewer function evaluation than PECECE mode.
 
More generally, if the corrector is run ''k'' times, the method is in P(EC)<sup>''k''</sup>
or P(EC)<sup>''k''</sup>E mode. If the corrector method is iterated until it converges, this could be called PE(CE)<sup>∞</sup>.<ref>{{harvnb|Butcher|2003|p=104}}</ref>
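A generic P(EC)<sup>''k''</sup> step for the Euler/trapezoidal pair might look as follows (a sketch under the same assumptions as above; the function name is hypothetical):

```python
def p_ec_k_step(f, t, y, h, k):
    # One step in P(EC)^k mode: predict once, then apply
    # (evaluate, correct) k times with the trapezoidal rule.
    fy = f(t, y)
    yc = y + h * fy                              # P: explicit Euler predictor
    for _ in range(k):
        yc = y + 0.5 * h * (fy + f(t + h, yc))   # EC: evaluate at yc, correct
    return yc

# With k = 1 this is one step of the PECE example above; as k grows,
# yc converges to the fixed point of the implicit trapezoidal rule,
# i.e. the PE(CE)^∞ limit.
```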
 
== See also ==
* [[Mehrotra predictor–corrector method]]
* [[Numerical continuation]]
 
==Notes==
{{reflist}}
 
==References==
* {{Citation | last1=Butcher | first1=John C. | author1-link=John C. Butcher | title=Numerical Methods for Ordinary Differential Equations | publisher=[[John Wiley & Sons]] | ___location=New York | isbn=978-0-471-96758-3 | year=2003}}.
*{{Cite book | last1=Press | first1=WH | last2=Teukolsky | first2=SA | last3=Vetterling | first3=WT | last4=Flannery | first4=BP | year=2007 | title=Numerical Recipes: The Art of Scientific Computing | edition=3rd | publisher=Cambridge University Press | publication-place=New York | isbn=978-0-521-88068-8 | chapter=Section 17.6. Multistep, Multivalue, and Predictor-Corrector Methods | chapter-url=http://apps.nrbook.com/empanel/index.html#pg=942}}
 
== External links ==
 
* {{MathWorld |title=Predictor-Corrector Methods |urlname=Predictor-CorrectorMethods}}
* [https://web.archive.org/web/20080617035745/http://www.fisica.uniud.it/~ercolessi/md/md/node22.html Predictor–corrector methods] for differential equations
 
{{Numerical integrators}}
 
{{DEFAULTSORT:Predictor-corrector method}}
[[Category:Algorithms]]
[[Category:Numerical analysis]]
 
[[ru:Схема предиктор-корректор]]