=== Stability and convergence ===
The numerical solution of a one-step method depends on the initial condition <math> y_0 </math>, but the numerical solution of an ''s''-step method depends on all ''s'' starting values, <math> y_0, y_1, \ldots, y_{s-1} </math>. It is thus of interest whether the numerical solution is stable with respect to perturbations in the starting values. A linear multistep method is ''zero-stable'' for a certain differential equation on a given time interval if a perturbation in the starting values of size ε changes the numerical solution over that time interval by no more than ''K''ε, for some constant ''K'' which does not depend on the step size ''h''. This is called "zero-stability" because it suffices to check the condition for the differential equation <math> y' = 0 </math> {{harv|Süli|Mayers|2003|p=332}}.
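As an illustration (a sketch added here, not part of the standard presentation), consider the two-step method <math> y_{n+2} = 3y_{n+1} - 2y_n - hf(t_n, y_n) </math>. It is consistent, but its characteristic polynomial <math> \rho(z) = (z-1)(z-2) </math> has a root outside the unit circle, so zero-stability fails; applied to <math> y' = 0 </math>, a tiny perturbation in the second starting value is amplified by roughly <math> 2^n </math>:

```python
# Illustrative sketch, not from the article: the two-step method
#   y_{n+2} = 3*y_{n+1} - 2*y_n - h*f(t_n, y_n)
# is consistent, but rho(z) = (z - 1)*(z - 2) has the root z = 2,
# so it is not zero-stable. Apply it to y' = 0 (f vanishes, so the
# h-term drops out) with a perturbed second starting value.
eps = 1e-12
y = [0.0, eps]                  # exact starting values would be [0.0, 0.0]
for n in range(40):
    y.append(3 * y[-1] - 2 * y[-2])

# Closed form: y_n = eps * (2**n - 1), so the perturbation of size
# 1e-12 has grown to about 2.2 after 41 steps.
print(y[-1])
```

No matter how small ε is, taking more steps (i.e. letting <math> h \to 0 </math> on a fixed interval) makes the amplified perturbation arbitrarily large, which is exactly what zero-stability rules out.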
If all roots of the characteristic polynomial <math>\rho</math> have modulus less than or equal to 1, and every root of modulus 1 is simple (multiplicity 1), the method is said to satisfy the [[root condition]].
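The root condition is straightforward to check numerically. The following sketch (an illustration with a hypothetical helper name, not part of the article) computes the roots of <math>\rho</math> with NumPy and tests both requirements:

```python
# Hedged sketch: check the root condition for a linear multistep method,
# given the coefficients of its characteristic polynomial rho
# (highest degree first). The function name is illustrative.
import numpy as np

def satisfies_root_condition(rho_coeffs, tol=1e-6):
    """True if all roots of rho have modulus <= 1 and every root
    of modulus 1 is simple."""
    roots = np.roots(rho_coeffs)
    for i, r in enumerate(roots):
        if abs(r) > 1 + tol:
            return False                    # root outside the unit circle
        if abs(abs(r) - 1) <= tol:
            # a root of modulus 1 must not be repeated
            for j, s in enumerate(roots):
                if j != i and abs(r - s) <= tol:
                    return False
    return True

# Two-step Adams-Bashforth: rho(z) = z^2 - z, roots 0 and 1 -> satisfied
print(satisfies_root_condition([1, -1, 0]))    # True
# rho(z) = z^2 - 2z + 1 = (z - 1)^2, double root at 1 -> violated
print(satisfies_root_condition([1, -2, 1]))    # False
```

The tolerance is needed because <code>np.roots</code> locates repeated roots only approximately.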
Furthermore, if the method is zero-stable, it is said to be ''strongly stable'' if <math>z=1</math> is the only root of modulus 1. If it is zero-stable and has several roots of modulus 1, none of them repeated, it is said to be ''relatively stable''. Note that <math>z=1</math> must be a root of any consistent method (since <math>\rho(1)=0</math>); thus zero-stable consistent methods are always one of these two.
Now suppose that a consistent linear multistep method is applied to a sufficiently smooth differential equation and that the starting values <math> y_1, \ldots, y_{s-1}</math> all converge to the initial value <math> y_0 </math> as <math> h \to 0 </math>. Then, the numerical solution converges to the exact solution as <math> h \to 0 </math> if and only if the method is zero-stable. This result is known as the ''Dahlquist equivalence theorem'' (compare the [[Lax equivalence theorem]]). Furthermore, if the method has order ''p'', then the [[global truncation error|global error]] (the difference between the numerical solution and the exact solution at a fixed time) is <math> O(h^p) </math> {{harv|Süli|Mayers|2003|p=340}}.
=== Example ===
:<math>y_{n+1} = y_n + h\left( {23\over 12} f(t_n, y_n) - {16 \over 12} f(t_{n-1}, y_{n-1}) + {5\over 12}f(t_{n-2}, y_{n-2})\right).</math>
The characteristic equation is thus
:<math>\rho(z) = z^3 - z^2 = 0,</math>
which has roots <math>z = 0</math> (a double root) and <math>z = 1</math>, so the root condition is satisfied. As <math>z=1</math> is the only root of modulus 1, the method is strongly stable.
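Since the three-step Adams–Bashforth method is zero-stable and has order 3, the Dahlquist equivalence theorem predicts a global error of <math>O(h^3)</math>. A small numerical check (an illustration added here, not part of the article) on <math>y' = y</math>, <math>y(0) = 1</math>, using exact starting values:

```python
# Hedged sketch: apply the three-step Adams-Bashforth method above to
# y' = y, y(0) = 1 on [0, 1] and watch the global error shrink like h^3.
import math

def ab3(f, t0, y_start, h, n_steps):
    """Three-step Adams-Bashforth; y_start holds y_0, y_1, y_2."""
    y = list(y_start)
    t = [t0 + i * h for i in range(n_steps + 1)]
    for n in range(2, n_steps):
        y.append(y[n] + h * (23 * f(t[n], y[n])
                             - 16 * f(t[n - 1], y[n - 1])
                             + 5 * f(t[n - 2], y[n - 2])) / 12)
    return y[-1]                      # approximation to y(t0 + n_steps*h)

f = lambda t, y: y
errors = []
for n_steps in (50, 100, 200):
    h = 1.0 / n_steps
    start = [math.exp(i * h) for i in range(3)]   # exact starting values
    errors.append(abs(ab3(f, 0.0, start, h, n_steps) - math.e))

# successive error ratios as h is halved
print(errors[0] / errors[1], errors[1] / errors[2])
```

Halving ''h'' should divide the error by roughly <math>2^3 = 8</math>, and the printed ratios are indeed close to 8, consistent with the <math>O(h^3)</math> global error.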
=== First Dahlquist barrier ===
A zero-stable linear ''s''-step method cannot attain an order of accuracy greater than ''s''&nbsp;+&nbsp;1 if ''s'' is odd, or greater than ''s''&nbsp;+&nbsp;2 if ''s'' is even; if the method is explicit, its order cannot exceed ''s''.
=== Second Dahlquist barrier ===
* {{citation | first1 = W. E. | last1 = Milne | year = 1926 | title = Numerical integration of ordinary differential equations | journal = American Mathematical Monthly | volume = 33 | number = 9 | pages = 455–460 | issue = 9 | doi = 10.2307/2299609 | jstor = 2299609 | publisher = Mathematical Association of America }}.
* {{citation | first1 = Forest R. | last1 = Moulton | author1-link = Forest Ray Moulton | year = 1926 | title = New methods in exterior ballistics | publisher = University of Chicago Press }}.
* {{citation | first1 = Alfio | last1 = Quarteroni | first2 = Riccardo | last2 = Sacco | first3 = Fausto | last3 = Saleri | year = 2000 | title = Matematica Numerica | publisher = Springer Verlag | isbn = 978-8-847-00077-3 }}.
* {{Citation | last1=Süli | first1=Endre | last2=Mayers | first2=David | title=An Introduction to Numerical Analysis | publisher=[[Cambridge University Press]] | isbn=0521007941 | year=2003}}.
== External links ==