Predictor–corrector method
 
In approximating the solution to a first-order [[ordinary differential equation]], suppose one knows the solution points <math>y_0</math> and <math>y_1</math> at times <math>t_0</math> and <math>t_1</math>. By fitting a cubic polynomial to the points and their derivatives (obtained from the differential equation), one can predict a point <math>\tilde{y}_2</math> by [[Extrapolation|extrapolating]] to a future time <math>t_2</math>. Using the new value <math>\tilde{y}_2</math> and its derivative <math>\tilde{y}'_2</math> there, along with the previous points and their derivatives, one can then better [[Interpolation|interpolate]] the derivative between <math>t_1</math> and <math>t_2</math> to obtain a better approximation <math>y_2</math>. The interpolation and subsequent integration of the differential equation constitute the corrector step.
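
A minimal sketch of this predict-then-correct step in Python follows. The cubic fit is from the description above; the closed-form extrapolation coefficients, the trapezoidal average used as the corrector, and the function name <code>predict_correct_step</code> are illustrative assumptions, not a prescribed implementation:

<syntaxhighlight lang="python">
def predict_correct_step(f, t0, y0, t1, y1):
    """One predictor-corrector step (illustrative sketch).

    Predictor: extrapolate the cubic Hermite interpolant of
    (t0, y0) and (t1, y1), with slopes f(t0, y0) and f(t1, y1),
    to t2 = t1 + h.  Corrector: integrate the trapezoidal
    average of the slopes over [t1, t2].
    """
    h = t1 - t0                      # assumes equally spaced points
    t2 = t1 + h
    f0, f1 = f(t0, y0), f(t1, y1)
    # Cubic Hermite interpolant on [t0, t1] evaluated one full step
    # past t1 (parameter s = 2); exact when the solution is cubic.
    y2_pred = 5*y0 - 4*y1 + h*(2*f0 + 4*f1)
    # Corrector: trapezoidal rule over [t1, t2] using the slope
    # at the predicted point.
    y2 = y1 + (h / 2) * (f1 + f(t2, y2_pred))
    return t2, y2
</syntaxhighlight>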
 
== Euler–trapezoidal example ==
 
Example of an Euler–trapezoidal predictor–corrector method.
 
In this example the step size is ''h'' = <math>\Delta t</math>, so that <math> t_{i+1} = t_i + \Delta t = t_i + h </math>. The initial value problem is
 
: <math> y' = f(t,y), \quad y(t_0) = y_0. </math>
 
First calculate an initial guess value <math>\tilde{y}_{g}</math> via the [[Euler method]]:
 
: <math>\tilde{y}_{g} = y_i + h f(t_i,y_i)</math>
 
Next, improve the initial guess by iterating the trapezoidal rule:
 
: <math>\tilde{y}_{g+1} = y_i + \frac{h}{2}(f(t_i, y_i) + f(t_{i+1},\tilde{y}_{g})).</math>
 
: <math>\tilde{y}_{g+2} = y_i + \frac{h}{2}(f(t_i, y_i) + f(t_{i+1},\tilde{y}_{g+1})).</math>
: <math>\vdots</math>
: <math>\tilde{y}_{g+n} = y_i + \frac{h}{2}(f(t_i, y_i) + f(t_{i+1},\tilde{y}_{g+n-1})).</math>
 
until either a fixed iteration count ''n'' is reached or the guesses converge to within some error tolerance ''e'':
 
: <math> | \tilde{y}_{g+n} - \tilde{y}_{g+n-1} | \le e. </math>
 
Then use the final guess as the value of the next step:
 
: <math>y_{i+1} = \tilde{y}_{g+n}.</math>
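
Putting the steps together, a minimal Python sketch of this scheme might look as follows. The function name <code>euler_trapezoidal</code>, the iteration cap <code>n_max</code>, and the default tolerance are illustrative choices, not part of the method:

<syntaxhighlight lang="python">
def euler_trapezoidal(f, t0, y0, h, steps, n_max=10, e=1e-10):
    """Euler predictor with an iterated trapezoidal corrector.

    The corrector stops after n_max iterations or when two
    successive guesses agree to within the tolerance e.
    """
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(steps):
        # Predictor: explicit Euler supplies the initial guess.
        y_guess = y + h * f(t, y)
        # Corrector: iterate the trapezoidal rule on the guess.
        for _ in range(n_max):
            y_next = y + (h / 2) * (f(t, y) + f(t + h, y_guess))
            converged = abs(y_next - y_guess) <= e
            y_guess = y_next
            if converged:
                break
        # Use the final guess as the value of the next step.
        t, y = t + h, y_guess
        ts.append(t)
        ys.append(y)
    return ts, ys
</syntaxhighlight>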
 
Note that the overall error is governed not by the convergence of the corrector iteration but by the step size and the core method, which in this example is the trapezoidal rule, a linear approximation of the actual function on each step. The step size ''h'' (<math>\Delta t</math>) needs to be relatively small in order to get a good approximation. See also [[stiff equation]].
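
To see the dependence on step size, the sketch above can be run on the test problem <math>y' = -y</math>, <math>y(0) = 1</math>, whose exact solution is <math>e^{-t}</math>. The test problem and step sizes here are arbitrary choices for illustration, reusing the <code>euler_trapezoidal</code> function defined above:

<syntaxhighlight lang="python">
import math

f = lambda t, y: -y  # test problem y' = -y, y(0) = 1
for h in (0.1, 0.05):
    ts, ys = euler_trapezoidal(f, 0.0, 1.0, h, round(1.0 / h))
    err = abs(ys[-1] - math.exp(-1.0))
    print(f"h = {h}: |error| at t = 1 is {err:.2e}")
# Halving h should roughly quarter the error, since the
# trapezoidal corrector is second-order accurate.
</syntaxhighlight>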
 
 
== See also ==