Linear multistep method: Difference between revisions

zero-stability, more on convergence, add reference
use same definition of local error throughout article, add links
Line 106:
& \qquad {} = h \bigl( b_s f(t_{n+s},y_{n+s}) + b_{s-1} f(t_{n+s-1},y_{n+s-1}) + \cdots + b_0 f(t_n,y_n) \bigr),
\end{align} </math>
a good approximation of the differential equation <math> y' = f(t,y) </math>? More precisely, a multistep method is ''consistent'' if the [[local truncation error]] goes to zero faster than the step size ''h'' as ''h'' goes to zero, where the ''local truncation error'' is defined to be the difference between the result <math>y_{n+s}</math> of the method, assuming that all the previous values <math>y_{n+s-1}, \ldots, y_n</math> are exact, and the exact solution of the equation at time <math>t_{n+s}</math>, divided by ''h''. A computation using [[Taylor series]] shows that a linear multistep method is consistent if and only if
:<math> \sum_{k=0}^{s-1} a_k = -1 \quad\text{and}\quad \sum_{k=0}^s b_k = s + \sum_{k=0}^{s-1} ka_k. </math>
All the methods mentioned above are consistent {{harv|Hairer|Nørsett|Wanner|1993|loc=§III.2}}.
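As an illustrative sketch (not part of the article), these two conditions can be checked directly from the coefficients. The helper below follows the article's convention <math>y_{n+s} + a_{s-1} y_{n+s-1} + \cdots + a_0 y_n = h(b_s f_{n+s} + \cdots + b_0 f_n)</math>; the function name and the use of the two-step Adams–Bashforth method as the test case are assumptions for illustration.

```python
from fractions import Fraction

def is_consistent(a, b):
    """Check the consistency conditions for a linear multistep method
    y_{n+s} + a_{s-1} y_{n+s-1} + ... + a_0 y_n
        = h (b_s f_{n+s} + ... + b_0 f_n),
    with a = [a_0, ..., a_{s-1}] and b = [b_0, ..., b_s]."""
    s = len(a)
    cond1 = sum(a) == -1
    cond2 = sum(b) == s + sum(k * ak for k, ak in enumerate(a))
    return cond1 and cond2

# Two-step Adams-Bashforth: y_{n+2} = y_{n+1} + h (3/2 f_{n+1} - 1/2 f_n),
# so a_0 = 0, a_1 = -1, b_0 = -1/2, b_1 = 3/2, b_2 = 0.
a = [Fraction(0), Fraction(-1)]
b = [Fraction(-1, 2), Fraction(3, 2), Fraction(0)]
print(is_consistent(a, b))  # True
```

Exact rational arithmetic via `fractions` avoids any floating-point ambiguity in the equality tests.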
Line 121:
 
=== Stability and convergence ===
<!-- [[Zero-stability]] redirects here -->
 
The numerical solution of a one-step method depends on the initial condition <math> y_0 </math>, but the numerical solution of an ''s''-step method depends on all the ''s'' starting values, <math> y_0, y_1, \ldots, y_{s-1} </math>. It is thus of interest whether the numerical solution is stable with respect to perturbations in the starting values. A linear multistep method is ''zero-stable'' for a certain differential equation on a given time interval, if a perturbation in the starting values of size ε causes the numerical solution over that time interval to change by no more than ''K''ε for some value of ''K'' which does not depend on the step size ''h''. This is called "zero-stability" because it is enough to check the condition for the differential equation <math> y' = 0 </math> {{harv|Süli|Mayers|2003|p=332}}.
 
If the roots of the characteristic polynomial ρ all have modulus less than or equal to 1 and the roots of modulus 1 are of multiplicity 1, we say that the ''root condition'' is satisfied. A linear multistep method is zero-stable if and only if the root condition is satisfied {{harv|Süli|Mayers|2003|p=335}}.
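As a numerical sketch (not part of the article), the root condition can be tested by computing the roots of <math>\rho(z) = z^s + a_{s-1} z^{s-1} + \cdots + a_0</math>; the function name and tolerance are assumptions for illustration.

```python
import numpy as np

def satisfies_root_condition(a, tol=1e-6):
    """Check the root condition for rho(z) = z^s + a_{s-1} z^{s-1} + ... + a_0,
    with a = [a_0, ..., a_{s-1}]: every root has modulus <= 1, and any root
    of modulus 1 is simple."""
    roots = np.roots([1.0] + list(reversed(a)))  # highest degree first
    if np.any(np.abs(roots) > 1 + tol):
        return False
    # roots of modulus 1 must have multiplicity 1
    unit_roots = roots[np.abs(np.abs(roots) - 1) < tol]
    for r in unit_roots:
        if np.sum(np.abs(roots - r) < tol) > 1:
            return False
    return True

# Two-step Adams-Bashforth: rho(z) = z^2 - z, roots {0, 1} -> zero-stable
print(satisfies_root_condition([0.0, -1.0]))   # True
# rho(z) = z^2 - 2z + 1 = (z - 1)^2: double root at 1 -> root condition fails
print(satisfies_root_condition([1.0, -2.0]))   # False
```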
 
Now suppose that a consistent linear multistep method is applied to a sufficiently smooth differential equation and that the starting values <math> y_1, \ldots, y_{s-1}</math> all converge to the initial value <math> y_0 </math> as <math> h \to 0 </math>. Then, the numerical solution converges to the exact solution as <math> h \to 0 </math> if and only if the method is zero-stable. This result is known as the ''Dahlquist equivalence theorem'', named after [[Germund Dahlquist]]; it is similar in spirit to the [[Lax equivalence theorem]] for [[finite difference method]]s. Furthermore, if the method has order ''p'', then the [[global truncation error|global error]] (the difference between the numerical solution and the exact solution at a fixed time) is <math> O(h^p) </math> {{harv|Süli|Mayers|2003|p=340}}.
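As an illustrative sketch (not part of the article), the <math>O(h^p)</math> behaviour of the global error can be observed numerically: halving ''h'' should divide the error by about <math>2^p</math>. The test problem <math>y' = y</math>, the helper names, and the use of the order-2 Adams–Bashforth method are assumptions for illustration.

```python
import math

def ab2(f, y0, y1, h, num_steps):
    """Two-step Adams-Bashforth: y_{n+2} = y_{n+1} + h (3/2 f(y_{n+1}) - 1/2 f(y_n)).
    Returns the last computed value after num_steps steps."""
    ys = [y0, y1]
    for _ in range(num_steps):
        ys.append(ys[-1] + h * (1.5 * f(ys[-1]) - 0.5 * f(ys[-2])))
    return ys[-1]

def error_at_t1(h):
    """Global error at t = 1 for y' = y, y(0) = 1 (exact solution e^t).
    The second starting value y_1 is taken from the exact solution."""
    n = round(1 / h)  # y_n approximates y(1); n - 1 steps remain after y_1
    return abs(ab2(lambda y: y, 1.0, math.exp(h), h, n - 1) - math.e)

# Observed order p from errors at two step sizes: p = log2(e(h) / e(h/2))
e1, e2 = error_at_t1(0.01), error_at_t1(0.005)
p = math.log2(e1 / e2)
print(p)  # should be close to 2 for this order-2 method
```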
 
Furthermore, if the method is convergent, the method is said to be ''strongly stable'' if <math>z=1</math> is the only root of modulus 1. If it is convergent, all roots of modulus 1 are simple, and there is more than one such root, it is said to be ''relatively stable''. Note that 1 must be a root of ρ for the method to be convergent; thus convergent methods are always one of these two.
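As an illustrative sketch (not part of the article), this classification can be read off from the roots of ρ. The function name is an assumption; the two example methods are the two-step Adams–Bashforth method and the explicit midpoint (leapfrog) method <math>y_{n+2} = y_n + 2h f(t_{n+1}, y_{n+1})</math>.

```python
import numpy as np

def stability_type(a, tol=1e-6):
    """Classify a convergent method by the roots of
    rho(z) = z^s + a_{s-1} z^{s-1} + ... + a_0, with a = [a_0, ..., a_{s-1}]:
    'strongly stable' if z = 1 is the only root of modulus 1,
    'relatively stable' if there are other (simple) roots of modulus 1."""
    roots = np.roots([1.0] + list(reversed(a)))
    unit_roots = roots[np.abs(np.abs(roots) - 1) < tol]
    return "strongly stable" if len(unit_roots) == 1 else "relatively stable"

# Adams-Bashforth: rho(z) = z^2 - z, roots {0, 1}
print(stability_type([0.0, -1.0]))   # strongly stable
# Leapfrog: y_{n+2} = y_n + 2h f_{n+1}, rho(z) = z^2 - 1, roots {1, -1}
print(stability_type([-1.0, 0.0]))   # relatively stable
```

The leapfrog method illustrates why the distinction matters: its spurious root at <math>z=-1</math> sits on the unit circle, so perturbations are not damped.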