Predictor–corrector method

In [[mathematics]], particularly [[numerical analysis]], a '''predictor–corrector method''' is an [[algorithm]] that proceeds in two steps. First, the prediction step calculates a rough approximation of the desired quantity. Second, the corrector step refines the initial approximation using another means.
__TOC__
== Predictor–corrector methods for solving ODEs ==
 
When considering the [[numerical methods for ordinary differential equations|numerical solution of ordinary differential equations (ODEs)]], a predictor–corrector method typically uses an [[explicit and implicit methods|explicit method]] for the predictor step and an implicit method for the corrector step.
In approximating the solution to a first-order [[ordinary differential equation]], suppose one knows the solution points <math>y_0</math> and <math>y_1</math> at times <math>t_0</math> and <math>t_1</math>. By fitting a cubic polynomial to the points and their derivatives (obtained from the differential equation), one can predict a point <math>\tilde{y}_2</math> by [[Extrapolation|extrapolating]] to a future time <math>t_2</math>. Using the new value <math>\tilde{y}_2</math> and its derivative there, <math>\tilde{y}'_2</math>, along with the previous points and their derivatives, one can then [[Interpolation|interpolate]] the derivative between <math>t_1</math> and <math>t_2</math> to obtain a better approximation <math>y_2</math>. The interpolation and subsequent integration of the differential equation constitute the corrector step.
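
A short Python sketch of this idea is given below; the function name is illustrative, and the corrector uses the simplest choice, a linear interpolation of the derivative on <math>[t_1, t_2]</math> (the trapezoidal rule), rather than a higher-order interpolant.

<syntaxhighlight lang="python">
# Illustrative sketch: predict y(t2) by extrapolating the cubic Hermite
# polynomial fitted to (t0, y0, y'0) and (t1, y1, y'1), then correct by
# integrating a linear interpolation of the derivative over [t1, t2].

def predict_correct_once(f, t0, y0, t1, y1):
    h = t1 - t0                    # assume equally spaced times, so t2 = t1 + h
    t2 = t1 + h
    f0, f1 = f(t0, y0), f(t1, y1)  # derivatives supplied by the ODE y' = f(t, y)

    # Predictor: cubic Hermite basis evaluated at s = (t2 - t0)/h = 2.
    s = 2.0
    y2_pred = ((2*s**3 - 3*s**2 + 1) * y0 + (s**3 - 2*s**2 + s) * h * f0
               + (-2*s**3 + 3*s**2) * y1 + (s**3 - s**2) * h * f1)

    # Corrector: trapezoidal rule on [t1, t2] with the predicted derivative.
    y2 = y1 + 0.5 * h * (f1 + f(t2, y2_pred))
    return t2, y2
</syntaxhighlight>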
 
=== Example: Euler method with the trapezoidal rule ===
 
A simple predictor–corrector method can be constructed from the [[Euler method]] (an explicit method) and the [[trapezoidal rule (differential equations)|trapezoidal rule]] (an implicit method).
 
Consider the differential equation
: <math> y' = f(t,y), \quad y(t_0) = y_0, </math>
and denote the step size by <math>h</math>, so that <math>t_{i+1} = t_i + h</math>.
 
First, the predictor step: starting from the current value <math>y_i</math>, calculate an initial guess value <math>\tilde{y}_{i+1}</math> via the Euler method,
: <math>\tilde{y}_{i+1} = y_i + h f(t_i,y_i). </math>

Next, the corrector step: improve the initial guess using the [[trapezoidal rule (differential equations)|trapezoidal rule]],

: <math> y_{i+1} = y_i + \tfrac12 h \bigl( f(t_i, y_i) + f(t_{i+1},\tilde{y}_{i+1}) \bigr). </math>

That value can be used directly as the next step. Alternatively, the corrector step can be applied repeatedly, feeding each new guess back into the trapezoidal rule; this fixed-point iteration normally converges quickly. Writing <math>\tilde{y}_{[0]} = \tilde{y}_{i+1}</math> for the predicted value,

: <math>
\begin{align}
\tilde{y}_{[1]} & = y_i + \frac{h}{2}\bigl(f(t_i, y_i) + f(t_{i+1},\tilde{y}_{[0]})\bigr), \\
\tilde{y}_{[2]} & = y_i + \frac{h}{2}\bigl(f(t_i, y_i) + f(t_{i+1},\tilde{y}_{[1]})\bigr), \\
& {}\ \vdots \\
\tilde{y}_{[n]} & = y_i + \frac{h}{2}\bigl(f(t_i, y_i) + f(t_{i+1},\tilde{y}_{[n-1]})\bigr).
\end{align}
</math>

The iteration is stopped either after a fixed number of steps ''n'' or once successive guesses agree to within some error tolerance ''e'',

: <math> | \tilde{y}_{[n]} - \tilde{y}_{[n-1]} | \le e, </math>

and the final guess is then used as the next step:

: <math>y_{i+1} = \tilde{y}_{[n]}.</math>

Note that the overall error is governed not by the convergence of this iteration but by the step size and the underlying method, which in this example is a trapezoidal (linear) approximation of the actual function. The step size ''h'' needs to be relatively small in order to get a good approximation. See also [[stiff equation]].
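
The scheme translates almost directly into code. The following Python sketch is illustrative (the function and parameter names are not taken from the sources cited below); it performs one step with the Euler predictor and then iterates the trapezoidal corrector until two successive guesses agree to within the tolerance ''e'' or a maximum number of iterations is reached.

<syntaxhighlight lang="python">
# Illustrative sketch of one Euler-predictor / trapezoidal-corrector step.

def predictor_corrector_step(f, t_i, y_i, h, e=1e-10, max_iter=50):
    f_i = f(t_i, y_i)
    t_next = t_i + h

    # Predictor: an explicit Euler step gives the initial guess.
    y_guess = y_i + h * f_i

    # Corrector: iterate the trapezoidal rule until successive guesses
    # agree to within the tolerance e (or max_iter is reached).
    for _ in range(max_iter):
        y_new = y_i + 0.5 * h * (f_i + f(t_next, y_guess))
        converged = abs(y_new - y_guess) <= e
        y_guess = y_new
        if converged:
            break
    return t_next, y_guess


# Example: integrate y' = -y, y(0) = 1 (exact solution exp(-t)) with h = 0.1.
t, y = 0.0, 1.0
for _ in range(10):
    t, y = predictor_corrector_step(lambda t, y: -y, t, y, h=0.1)
</syntaxhighlight>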
 
=== PEC mode and PECE mode ===
There are different variants of a predictor–corrector method, depending on how often the corrector method is applied. The Predict–Evaluate–Correct–Evaluate (PECE) mode refers to the variant in the above example in which the corrector is applied only once:

: <math> \begin{align}
\tilde{y}_{i+1} &= y_i + h f(t_i,y_i), \\
y_{i+1} &= y_i + \tfrac12 h \bigl( f(t_i, y_i) + f(t_{i+1},\tilde{y}_{i+1}) \bigr).
\end{align} </math>
 
It is also possible to evaluate the function ''f'' only once per step by using the method in Predict–Evaluate–Correct (PEC) mode:
 
: <math> \begin{align}
\tilde{y}_{i+1} &= y_i + h f(t_i,\tilde{y}_i), \\
y_{i+1} &= y_i + \tfrac12 h \bigl( f(t_i, \tilde{y}_i) + f(t_{i+1},\tilde{y}_{i+1}) \bigr).
\end{align} </math>
 
Additionally, the corrector step can be repeated in the hope that this achieves an even better approximation to the true solution. If the corrector method is run twice, this yields the PECECE mode:
 
: <math> \begin{align}
\tilde{y}_{i+1} &= y_i + h f(t_i,y_i), \\
\hat{y}_{i+1} &= y_i + \tfrac12 h \bigl( f(t_i, y_i) + f(t_{i+1},\tilde{y}_{i+1}) \bigr), \\
y_{i+1} &= y_i + \tfrac12 h \bigl( f(t_i, y_i) + f(t_{i+1},\hat{y}_{i+1}) \bigr).
\end{align} </math>
 
Leaving out the final evaluation gives the PECEC mode, which has one fewer function evaluation than PECECE mode. More generally, if the corrector is run ''k'' times, the method is in P(EC)<sup>''k''</sup> or P(EC)<sup>''k''</sup>E mode. If the corrector method is iterated until it converges, this could be called PE(CE)<sup>∞</sup>.<ref>{{harvnb|Butcher|2003|p=104}}</ref>
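
As an illustration of how these modes differ in implementation, the following Python sketch (all names are illustrative, not taken from the cited sources) performs one step of the Euler/trapezoidal pair in P(EC)<sup>''k''</sup> or P(EC)<sup>''k''</sup>E mode. The stored derivative value is passed from step to step, so PEC mode (''k'' = 1 with no final evaluation) needs only one new evaluation of ''f'' per step.

<syntaxhighlight lang="python">
# Illustrative sketch of one step in P(EC)^k or P(EC)^k E mode for the
# Euler-predictor / trapezoidal-corrector pair.

def pec_step(f, t_i, y_i, f_stored, h, k=1, final_evaluate=True):
    # f_stored is the evaluation of f kept from the previous step:
    # f at the corrected value in P(EC)^k E mode, or at the predicted
    # value in P(EC)^k mode.
    t_next = t_i + h
    y = y_i + h * f_stored                       # P: Euler predictor
    for _ in range(k):
        f_next = f(t_next, y)                    # E: evaluate f at the current guess
        y = y_i + 0.5 * h * (f_stored + f_next)  # C: trapezoidal corrector
    if final_evaluate:
        f_next = f(t_next, y)                    # E: final evaluation (P(EC)^k E mode)
    return t_next, y, f_next                     # f_next is stored for the next step


# Example: PECE mode (k = 1, with the final evaluation) on y' = -y, y(0) = 1.
f = lambda t, y: -y
t, y = 0.0, 1.0
f_stored = f(t, y)
for _ in range(10):
    t, y, f_stored = pec_step(f, t, y, f_stored, h=0.1)
</syntaxhighlight>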
 
== See also ==
* [[Mehrotra predictor–corrector method]]
* [[Numerical continuation]]
 
==Notes==
{{reflist}}
 
==References==
* {{Citation | last1=Butcher | first1=John C. | author1-link=John C. Butcher | title=Numerical Methods for Ordinary Differential Equations | publisher=[[John Wiley & Sons]] | ___location=New York | isbn=978-0-471-96758-3 | year=2003}}.
* {{Cite book | last1=Press | first1=WH | last2=Teukolsky | first2=SA | last3=Vetterling | first3=WT | last4=Flannery | first4=BP | year=2007 | title=Numerical Recipes: The Art of Scientific Computing | edition=3rd | publisher=Cambridge University Press | publication-place=New York | isbn=978-0-521-88068-8 | chapter=Section 17.6. Multistep, Multivalue, and Predictor-Corrector Methods | chapter-url=http://apps.nrbook.com/empanel/index.html#pg=942}}