Talk:Predictor–corrector method
{{WikiProject banner shell|class=Start|
{{WikiProject Mathematics|priority=Mid}}
}}
__TOC__
 
== Proposed example ==

Example of a trapezoidal predictor–corrector method.

In this example ''h'' = <math>\Delta{t}</math> and <math>t_{i+1} = t_i + \Delta{t} = t_i + h</math>. Consider the initial value problem
 
: <math> y' = f(t,y), \quad y(t_0) = y_0. </math>
 
First, calculate an initial guess <math>\tilde{y}_{g}</math> via the [[Euler method]]:
 
: <math>\tilde{y}_{g} = y_i + h f(t_i,y_i)</math>
 
Next, improve the initial guess by iterating the trapezoidal rule:
 
: <math>\tilde{y}_{g+1} = y_i + \frac{h}{2}(f(t_i, y_i) + f(t_{i+1},\tilde{y}_{g})).</math>
: <math>\tilde{y}_{g+n} = y_i + \frac{h}{2}(f(t_i, y_i) + f(t_{i+1},\tilde{y}_{g+n-1})).</math>
 
until some fixed iteration count ''n'' is reached, or until the guesses converge to within some error tolerance ''e'':
 
: <math> | \tilde{y}_{g+n} - \tilde{y}_{g+n-1} | \le e </math>
 
Once convergence is reached, use the final guess as the value for the next step:
 
: <math>y_{i+1} = \tilde{y}_{g+n}.</math>
 
If the guesses don't converge within some number of iterations, such as <math>n = 16</math>, reduce ''h'' and repeat the step; conversely, if the guesses converge too soon, such as within 4 iterations, increase ''h''. If I remember correctly, the iterative process converges quadratically. Note that the overall error is unrelated to convergence of the corrector iteration; it is determined by the step size and the core method, which in this example is a trapezoidal (linear) approximation of the actual function. The step size ''h'' (<math>\Delta{t}</math>) needs to be relatively small in order to get a good approximation. Also see [[stiff equation]].
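
The step-size adjustment can be sketched as well. A hedged Python version: the text only says "reduce ''h''" and "increase ''h''", so the halving/doubling factors and the return convention below are illustrative assumptions:

```python
def adaptive_pece_step(f, t_i, y_i, h, tol=1e-6, n_max=16, n_fast=4):
    """One adaptive trapezoidal predictor-corrector step.

    If the corrector fails to converge within n_max iterations, halve h
    and redo the step; if it converges within n_fast iterations, suggest
    a doubled h for the next step. Returns (y_next, h_used, h_suggested).
    The factor of 2 is an illustrative choice, not from the discussion.
    """
    while True:
        f_i = f(t_i, y_i)
        y_g = y_i + h * f_i                          # Euler predictor
        for k in range(1, n_max + 1):
            y_new = y_i + (h / 2.0) * (f_i + f(t_i + h, y_g))
            if abs(y_new - y_g) <= tol:              # converged after k iterations
                h_next = 2.0 * h if k <= n_fast else h
                return y_new, h, h_next
            y_g = y_new
        h /= 2.0                                     # no convergence: shrink h, retry
```

For a stiff right-hand side the corrector iteration diverges unless ''h'' is small (the contraction factor is roughly ''h''·|∂''f''/∂''y''|/2), so the outer loop keeps halving ''h'' until the iteration settles.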
 
Talk page for predictor-corrector method. [[User:Jeffareid|Jeffareid]] ([[User talk:Jeffareid|talk]]) 02:42, 25 July 2009 (UTC)
 
:The relation to the https://de.wikipedia.org/wiki/Picard-Iteration might be a worthwhile reference. I am dubious about the quadratic convergence claim, as it looks more like a type of gradient descent to me. [[Special:Contributions/2001:638:904:FFC8:3433:CB4E:3261:66DB|2001:638:904:FFC8:3433:CB4E:3261:66DB]] ([[User talk:2001:638:904:FFC8:3433:CB4E:3261:66DB|talk]]) 23:32, 11 March 2023 (UTC)