{{Short description|Class of iterative numerical methods for solving differential equations}}
{{redirect|Adams' method|the electoral apportionment method|Method of smallest divisors}}
{{no footnotes|date=June 2017}}
== Definitions ==
Numerical methods for ordinary differential equations approximate solutions to [[initial value problem]]s of the form
: <math> y' = f(t,y), \quad y(t_0) = y_0. </math>
The result is approximations for the value of <math> y(t) </math> at discrete times <math> t_i </math>:
: <math> y_i \approx y(t_i) \quad\text{where}\quad t_i = t_0 + ih, </math>
where <math> h </math> is the time step (sometimes referred to as <math> \Delta t </math>) and <math> i </math> is an integer.
Multistep methods use information from the previous <math> s </math> steps to calculate the next value. In particular, a ''linear'' multistep method uses a linear combination of <math> y_i </math> and <math> f(t_i,y_i) </math> to calculate the value of <math> y </math> for the desired current step. Thus, a linear multistep method is a method of the form
: <math> \begin{align}
& y_{n+s} + a_{s-1} \cdot y_{n+s-1} + a_{s-2} \cdot y_{n+s-2} + \cdots + a_0 \cdot y_n \\
& \qquad {} = h\cdot\left( b_s \cdot f(t_{n+s},y_{n+s}) + b_{s-1} \cdot f(t_{n+s-1},y_{n+s-1}) + \cdots + b_0 \cdot f(t_n,y_n) \right).
\end{align} </math>
Consider for an example the problem
: <math> y' = y, \quad y(0) = 1. </math>
The exact solution is <math> y(t) = e^t </math>.
=== One-step Euler ===
A simple numerical method is Euler's method:
: <math> y_{n+1} = y_n + h f(t_n, y_n). </math>
Euler's method can be viewed as an explicit multistep method for the degenerate case of one step.
This method, applied with step size <math> h = \tfrac{1}{2} </math> on the problem <math> y' = y </math>, gives the following results:
: <math> \begin{align}
y_1 &= y_0 + hf(t_0, y_0) = 1 + \tfrac{1}{2} \cdot 1 = 1.5, \\
y_2 &= y_1 + hf(t_1, y_1) = 1.5 + \tfrac{1}{2} \cdot 1.5 = 2.25, \\
y_3 &= y_2 + hf(t_2, y_2) = 2.25 + \tfrac{1}{2} \cdot 2.25 = 3.375, \\
y_4 &= y_3 + hf(t_3, y_3) = 3.375 + \tfrac{1}{2} \cdot 3.375 = 5.0625.
\end{align} </math>
=== Two-step Adams–Bashforth ===
Euler's method is a [[one-step method]]. A simple multistep method is the two-step Adams–Bashforth method
: <math> y_{n+2} = y_{n+1} + \tfrac{3}{2} hf(t_{n+1},y_{n+1}) - \tfrac{1}{2} hf(t_n,y_n). </math>
This method needs two values, <math> y_{n+1} </math> and <math> y_n </math>, to compute the next value, <math> y_{n+2} </math>. However, the initial value problem provides only one value, <math> y_0 = 1 </math>. One possibility to resolve this issue is to use the <math> y_1 </math> computed by Euler's method as the second value. With this choice, the Adams–Bashforth method yields (rounded to four digits):
: <math> \begin{align}
y_2 &= y_1 + \tfrac{3}{2} hf(t_1, y_1) - \tfrac{1}{2} hf(t_0, y_0) = 2.375, \\
y_3 &= y_2 + \tfrac{3}{2} hf(t_2, y_2) - \tfrac{1}{2} hf(t_1, y_1) = 3.7813, \\
y_4 &= y_3 + \tfrac{3}{2} hf(t_3, y_3) - \tfrac{1}{2} hf(t_2, y_2) = 6.0234.
\end{align} </math>
The exact solution at <math> t = t_4 = 2 </math> is <math> e^2 = 7.3891\ldots </math>, so the two-step Adams–Bashforth method is more accurate than Euler's method.
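The computation above can be checked with a few lines of code. The following Python sketch is an illustration added here (it is not drawn from the cited references, and the variable names are arbitrary); it reproduces both tables for the test problem <math> y' = y </math>, <math> y(0) = 1 </math> with <math> h = \tfrac{1}{2} </math>:
<syntaxhighlight lang="python">
import math

f = lambda t, y: y   # right-hand side of y' = y
h = 0.5

# Euler's method: y_{n+1} = y_n + h f(t_n, y_n)
y = [1.0]
for n in range(4):
    y.append(y[n] + h * f(n * h, y[n]))
print(y)  # [1.0, 1.5, 2.25, 3.375, 5.0625]

# Two-step Adams-Bashforth, bootstrapped with the Euler value y_1 = 1.5
y = [1.0, 1.5]
for n in range(3):
    y.append(y[n + 1] + h * (1.5 * f((n + 1) * h, y[n + 1]) - 0.5 * f(n * h, y[n])))
print(y[-1], math.exp(2))  # 6.0234375, closer to the exact 7.3891 than Euler's 5.0625
</syntaxhighlight>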
== Families of multistep methods ==
=== Adams–Bashforth methods ===
The Adams–Bashforth methods with ''s'' = 1, 2, 3, 4, 5 are ({{harvnb|Hairer|Nørsett|Wanner|1993|loc=§III.1}}; {{harvnb|Butcher|2003|p=103}}):
: <math> \begin{align}
y_{n+1} &= y_n + hf(t_n, y_n) , \qquad\text{(This is the Euler method)} \\
y_{n+2} &= y_{n+1} + h\left( \frac{3}{2}f(t_{n+1}, y_{n+1}) - \frac{1}{2}f(t_n, y_n) \right) , \\
y_{n+3} &= y_{n+2} + h\left( \frac{23}{12} f(t_{n+2}, y_{n+2}) - \frac{16}{12} f(t_{n+1}, y_{n+1}) + \frac{5}{12} f(t_n, y_n) \right) , \\
y_{n+4} &= y_{n+3} + h\left( \frac{55}{24} f(t_{n+3}, y_{n+3}) - \frac{59}{24} f(t_{n+2}, y_{n+2}) + \frac{37}{24} f(t_{n+1}, y_{n+1}) - \frac{9}{24} f(t_n, y_n) \right) , \\
y_{n+5} &= y_{n+4} + h\left( \frac{1901}{720} f(t_{n+4}, y_{n+4}) - \frac{2774}{720} f(t_{n+3}, y_{n+3}) + \frac{2616}{720} f(t_{n+2}, y_{n+2}) - \frac{1274}{720} f(t_{n+1}, y_{n+1}) + \frac{251}{720} f(t_n, y_n) \right) .
\end{align} </math>
The coefficients <math> b_j </math> can be determined as follows. Use [[polynomial interpolation]] to find the polynomial ''p'' of degree <math> s-1 </math> such that
: <math> p(t_{n+i}) = f(t_{n+i}, y_{n+i}), \qquad \text{for } i = 0, \ldots, s-1. </math>
The [[Lagrange polynomial|Lagrange formula]] for polynomial interpolation yields
: <math> p(t) = \sum_{j=0}^{s-1} f(t_{n+j}, y_{n+j}) \prod_{i=0 \atop i \ne j}^{s-1} \frac{t - t_{n+i}}{t_{n+j} - t_{n+i}}. </math>
The polynomial ''p'' is locally a good approximation of the right-hand side of the differential equation <math> y' = f(t,y) </math> that is to be solved, so consider the equation <math> y' = p(t) </math> instead. This equation can be solved exactly; the solution is simply the integral of ''p''. This suggests taking
: <math> y_{n+s} = y_{n+s-1} + \int_{t_{n+s-1}}^{t_{n+s}} p(t)\,\mathrm{d}t. </math>
The Adams–Bashforth method arises when the formula for ''p'' is substituted. The coefficients <math> b_j </math> turn out to be given by
: <math> b_{s-j-1} = \frac{(-1)^j}{j!(s-j-1)!} \int_0^1 \prod_{i=0 \atop i \ne j}^{s-1} (u+i) \,\mathrm{d}u, \qquad \text{for } j = 0, \ldots, s-1. </math>
Replacing <math> f(t, y) </math> by its interpolant ''p'' incurs an error of order ''h''<sup>''s''</sup>, and it follows that the ''s''-step Adams–Bashforth method has indeed order ''s'' {{harv|Iserles|1996|loc=§2.1}}.
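As a sketch of how the integral formula can be evaluated in practice, the following Python fragment (illustrative only; it assumes the [[SymPy]] library is available, and the function name is a choice made here) computes the coefficients <math> b_j </math> exactly:
<syntaxhighlight lang="python">
import sympy as sp

def adams_bashforth_coefficients(s):
    """Exact coefficients b_0, ..., b_{s-1} of the s-step Adams-Bashforth method."""
    u = sp.symbols('u')
    b = [sp.Integer(0)] * s
    for j in range(s):
        # product over i = 0, ..., s-1 with i != j, as in the integral formula
        poly = sp.prod([u + i for i in range(s) if i != j])
        b[s - j - 1] = (sp.Integer(-1)**j
                        / (sp.factorial(j) * sp.factorial(s - j - 1))
                        * sp.integrate(poly, (u, 0, 1)))
    return b

print(adams_bashforth_coefficients(3))  # [5/12, -4/3, 23/12]
</syntaxhighlight>
The printed values match the three-step method in the list above.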
=== Adams–Moulton methods ===
The Adams–Moulton methods are similar to the Adams–Bashforth methods in that they also have <math> a_{s-1} = -1 </math> and <math> a_{s-2} = \cdots = a_0 = 0 </math>. Again the ''b'' coefficients are chosen to obtain the highest order possible. However, the Adams–Moulton methods are implicit methods. By removing the restriction that <math> b_s = 0 </math>, an ''s''-step Adams–Moulton method can reach order <math> s+1 </math>, while an ''s''-step Adams–Bashforth method has only order ''s''.
The Adams–Moulton methods with ''s'' = 0, 1, 2, 3, 4 are listed below ({{harvnb|Hairer|Nørsett|Wanner|1993|loc=§III.1}}; {{harvnb|Quarteroni|Sacco|Saleri|2000}}); the first two methods are the [[backward Euler method]] and the [[Trapezoidal rule (differential equations)|trapezoidal rule]] respectively:
: <math> \begin{align}
y_{n} &= y_{n-1} + h f(t_{n},y_{n}), \\
y_{n+1} &= y_n + \frac{1}{2} h \left( f(t_{n+1},y_{n+1}) + f(t_n,y_n) \right), \\
y_{n+2} &= y_{n+1} + h \left( \frac{5}{12} f(t_{n+2},y_{n+2}) + \frac{2}{3} f(t_{n+1},y_{n+1}) - \frac{1}{12} f(t_n,y_n) \right) , \\
y_{n+3} &= y_{n+2} + h \left( \frac{9}{24} f(t_{n+3},y_{n+3}) + \frac{19}{24} f(t_{n+2},y_{n+2}) - \frac{5}{24} f(t_{n+1},y_{n+1}) + \frac{1}{24} f(t_n,y_n) \right) , \\
y_{n+4} &= y_{n+3} + h \left( \frac{251}{720} f(t_{n+4},y_{n+4}) + \frac{646}{720} f(t_{n+3},y_{n+3}) - \frac{264}{720} f(t_{n+2},y_{n+2}) + \frac{106}{720} f(t_{n+1},y_{n+1}) - \frac{19}{720} f(t_n,y_n) \right) .
\end{align} </math>
The derivation of the Adams–Moulton methods is similar to that of the Adams–Bashforth method; however, the interpolating polynomial uses not only the points <math>t_{n-1},\dots, t_{n-s} </math>, as above, but also <math> t_n </math>. The coefficients are given by
: <math> b_{s-j} = \frac{(-1)^j}{j!(s-j)!} \int_0^1 \prod_{i=0 \atop i \ne j}^{s} (u+i-1) \,\mathrm{d}u, \qquad \text{for } j = 0, \ldots, s. </math>
The Adams–Moulton methods are solely due to [[John Couch Adams]], like the Adams–Bashforth methods. The name of [[Forest Ray Moulton]] became associated with these methods because he realized that they could be used in tandem with the Adams–Bashforth methods as a [[Predictor-corrector method|predictor-corrector]] pair {{harv|Moulton|1926}}; {{harvtxt|Milne|1926}} had the same idea. Adams used [[Newton's method]] to solve the implicit equation {{harv|Hairer|Nørsett|Wanner|1993|loc=§III.1}}.
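To illustrate the predictor–corrector idea, here is a minimal Python sketch of one PECE (predict, evaluate, correct, evaluate) step that predicts with the two-step Adams–Bashforth method and corrects with the trapezoidal rule, the one-step Adams–Moulton method. The code is an illustration written for this section under the assumptions of the earlier worked example, not code from the cited sources:
<syntaxhighlight lang="python">
def pece_step(f, t, y_prev, y_curr, h):
    """One predictor-corrector step; y_prev ~ y(t - h), y_curr ~ y(t).

    Returns an approximation of y at t + h.
    """
    # Predict with the explicit two-step Adams-Bashforth method.
    y_pred = y_curr + h * (1.5 * f(t, y_curr) - 0.5 * f(t - h, y_prev))
    # Evaluate f at the predicted value, then correct with the trapezoidal rule
    # (the implicit one-step Adams-Moulton method), avoiding an implicit solve.
    return y_curr + 0.5 * h * (f(t + h, y_pred) + f(t, y_curr))

# Example: y' = y, y(0) = 1, with Euler supplying the second starting value.
f, h = (lambda t, y: y), 0.5
y0, y1 = 1.0, 1.5
print(pece_step(f, h, y0, y1, h))  # approximation of y(1) = e
</syntaxhighlight>
Using the predicted value inside the corrector replaces the implicit equation by a single explicit evaluation, which is the practical appeal of the pairing.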
=== Backward differentiation formulas (BDF) ===
The BDF methods are implicit methods with <math> b_{s-1} = \cdots = b_0 = 0 </math> and the other coefficients chosen such that the method attains order ''s'' (the maximum possible). These methods are especially used for the solution of [[stiff equation|stiff differential equation]]s.
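As an illustration of why implicitness pays off for stiff problems, the following Python sketch applies the two-step BDF method, <math> y_{n+2} - \tfrac{4}{3} y_{n+1} + \tfrac{1}{3} y_n = \tfrac{2}{3} h f(t_{n+2}, y_{n+2}) </math>, to the stiff test problem <math> y' = -50y </math>. Because the problem is linear, the implicit equation can be solved in closed form; for a nonlinear ''f'' one would use Newton's method instead. The code is a sketch written for this article, not taken from the cited references:
<syntaxhighlight lang="python">
lam = -50.0   # stiff decay rate in y' = lam * y
h = 0.1       # h*lam = -5: far outside the stability region of the explicit methods above

y = [1.0, 1.0 / (1.0 - h * lam)]   # start-up value from one backward Euler step

for n in range(20):
    # BDF2 step: solve (1 - (2/3) h lam) y_{n+2} = (4/3) y_{n+1} - (1/3) y_n
    y.append((4.0 / 3.0 * y[n + 1] - 1.0 / 3.0 * y[n]) / (1.0 - 2.0 / 3.0 * h * lam))

print(y[-1])  # stays bounded and tends to 0, as the exact solution e^(lam*t) does
</syntaxhighlight>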
== Analysis ==
=== Consistency and order ===
The first question is whether the method is consistent: is the difference equation
: <math> \begin{align}
& a_{s}y_{n+s} + a_{s-1} y_{n+s-1} + a_{s-2} y_{n+s-2} + \cdots + a_0 y_n \\
& \qquad {} = h \bigl( b_s f(t_{n+s},y_{n+s}) + b_{s-1} f(t_{n+s-1},y_{n+s-1}) + \cdots + b_0 f(t_n,y_n) \bigr),
\end{align} </math>
a good approximation of the differential equation <math> y' = f(t,y) </math>? More precisely, a multistep method is ''consistent'' if the [[local truncation error]] goes to zero faster than the step size ''h'' as ''h'' goes to zero, where the ''local truncation error'' is defined to be the difference between the result <math>y_{n+s}</math> of the method, assuming that all the previous values <math>y_{n+s-1}, \ldots, y_n</math> are exact, and the exact solution of the equation at time <math>t_{n+s}</math>. A computation using [[Taylor series]] shows that a linear multistep method is consistent if and only if
: <math> \sum_{k=0}^{s-1} a_k = -1 \quad\text{and}\quad \sum_{k=0}^{s} b_k = s + \sum_{k=0}^{s-1} k a_k. </math>
All the methods mentioned above are consistent {{harv|Hairer|Nørsett|Wanner|1993|loc=§III.2}}.
If the method is consistent, then the next question is how well the difference equation defining the numerical method approximates the differential equation. A multistep method is said to have ''order'' ''p'' if the local error is of order <math>O(h^{p+1})</math> as ''h'' goes to zero. This is equivalent to the following condition on the coefficients of the method:
: <math> \sum_{k=0}^{s-1} a_k = -1 \quad\text{and}\quad q \sum_{k=0}^{s} k^{q-1} b_k = s^q + \sum_{k=0}^{s-1} k^q a_k \quad \text{for } q = 1, \ldots, p. </math>
The ''s''-step Adams–Bashforth method has order ''s'', while the ''s''-step Adams–Moulton method has order <math>s+1</math> {{harv|Hairer|Nørsett|Wanner|1993|loc=§III.2}}.
These conditions are often formulated using the ''characteristic polynomials''
: <math> \rho(z) = z^s + a_{s-1} z^{s-1} + \cdots + a_0 \quad\text{and}\quad \sigma(z) = b_s z^s + b_{s-1} z^{s-1} + \cdots + b_0. </math>
In terms of these polynomials, the above condition for the method to have order ''p'' becomes
: <math> \rho(e^h) - h\sigma(e^h) = O(h^{p+1}) \quad \text{as } h \to 0. </math>
In particular, the method is consistent if it has order at least one, which is the case if <math>\rho(1)=0</math> and <math>\rho'(1)=\sigma(1)</math>.
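These order conditions are easy to check mechanically. The following Python sketch is an illustration written for this section (the function name and calling convention are choices made here); it determines the order of a method from its coefficients <math> a_0, \dots, a_{s-1} </math> and <math> b_0, \dots, b_s </math>:
<syntaxhighlight lang="python">
def multistep_order(a, b, max_p=10, tol=1e-10):
    """Largest p satisfying the order conditions; a = [a_0,...,a_{s-1}], b = [b_0,...,b_s]."""
    s = len(a)
    if abs(sum(a) + 1) > tol:   # consistency requires the a_k to sum to -1
        return 0
    for q in range(1, max_p + 1):
        lhs = q * sum(k**(q - 1) * b[k] for k in range(s + 1))  # note 0**0 == 1 in Python
        rhs = s**q + sum(k**q * a[k] for k in range(s))
        if abs(lhs - rhs) > tol:
            return q - 1
    return max_p

print(multistep_order([0, -1], [-0.5, 1.5, 0]))  # 2: the two-step Adams-Bashforth method
</syntaxhighlight>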
=== Stability and convergence ===
The numerical solution of a one-step method depends on the initial condition <math> y_0 </math>, but the numerical solution of an ''s''-step method depends on all the ''s'' starting values, <math> y_0, y_1, \ldots, y_{s-1} </math>. It is thus of interest whether the numerical solution is stable with respect to perturbations in the starting values. A linear multistep method is [[Zero stability|''zero-stable'']] for a certain differential equation on a given time interval, if a perturbation in the starting values of size ε causes the numerical solution over that time interval to change by no more than ''K''ε for some value of ''K'' which does not depend on the step size ''h''. This is called "zero-stability" because it is enough to check the condition for the differential equation <math> y' = 0 </math> {{harv|Süli|Mayers|2003|p=332}}.
If the roots of the characteristic polynomial ρ all have modulus less than or equal to 1 and the roots of modulus 1 are of multiplicity 1, we say that the ''root condition'' is satisfied. A linear multistep method is zero-stable if and only if the root condition is satisfied {{harv|Süli|Mayers|2003|p=335}}.
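The root condition can be verified numerically, as in the following Python sketch (an illustration added here; it assumes [[NumPy]] is available and uses an ad hoc tolerance):
<syntaxhighlight lang="python">
import numpy as np

def satisfies_root_condition(rho, tol=1e-8):
    """rho: coefficients of the characteristic polynomial, highest degree first."""
    roots = np.roots(rho)
    if np.any(np.abs(roots) > 1 + tol):
        return False                     # some root lies outside the unit disc
    on_circle = roots[np.abs(np.abs(roots) - 1) < tol]
    # roots of modulus one must be simple: no two of them may coincide
    for i, r in enumerate(on_circle):
        if np.any(np.abs(np.delete(on_circle, i) - r) < tol):
            return False
    return True

print(satisfies_root_condition([1, -1, 0, 0]))  # True: rho(z) = z^3 - z^2 has roots 0, 0, 1
</syntaxhighlight>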
To assess the performance of linear multistep methods on [[stiff equation]]s, consider the linear test equation ''y''' = λ''y''. A multistep method applied to this differential equation with step size ''h'' yields a linear [[recurrence relation]] with characteristic polynomial
: <math> \pi(z; h\lambda) = (1 - h\lambda b_s) z^s + \sum_{k=0}^{s-1} (a_k - h\lambda b_k) z^k = \rho(z) - h\lambda \sigma(z). </math>
This polynomial is called the ''stability polynomial'' of the multistep method. If all of its roots have modulus less than one then the numerical solution of the multistep method will converge to zero and the multistep method is said to be ''absolutely stable'' for that value of ''h''λ. The method is said to be ''A-stable'' if it is absolutely stable for all ''h''λ with negative real part. The ''region of absolute stability'' is the set of all ''h''λ for which the multistep method is absolutely stable {{harv|Süli|Mayers|2003|pp=347 & 348}}. For more details, see the section on [[Stiff equation#Multistep methods|stiff equations and multistep methods]].
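The region of absolute stability can be explored numerically. The following Python sketch (an illustration written for this section, assuming [[NumPy]]) tests whether the two-step Adams–Bashforth method is absolutely stable for a given value of ''h''λ by checking the roots of <math> \rho(z) - h\lambda\,\sigma(z) </math>:
<syntaxhighlight lang="python">
import numpy as np

rho   = np.array([1.0, -1.0, 0.0])   # rho(z)   = z^2 - z
sigma = np.array([0.0, 1.5, -0.5])   # sigma(z) = (3/2) z - 1/2

def absolutely_stable(hl):
    """True if all roots of the stability polynomial lie inside the unit circle."""
    return bool(np.all(np.abs(np.roots(rho - hl * sigma)) < 1))

print(absolutely_stable(-0.5))  # True: on the negative real axis, inside the region
print(absolutely_stable(0.1))   # False: h*lambda with positive real part
</syntaxhighlight>
Scanning such a test over a grid of complex values of ''h''λ traces out the region of absolute stability.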
== Example ==
Consider the Adams–Bashforth three-step method
: <math> y_{n+3} = y_{n+2} + h\left( \frac{23}{12} f(t_{n+2}, y_{n+2}) - \frac{4}{3} f(t_{n+1}, y_{n+1}) + \frac{5}{12} f(t_n, y_n) \right). </math>
One characteristic polynomial is thus
: <math> \rho(z) = z^3 - z^2 = z^2(z - 1), </math>
which has roots <math>z=0, 1</math>, and the conditions above are satisfied. As <math>z=1</math> is the only root of modulus 1, the method is strongly stable.
The other characteristic polynomial is
: <math> \sigma(z) = \frac{23}{12}z^{2} - \frac{4}{3}z + \frac{5}{12}. </math>
== First and second Dahlquist barriers ==
These two results were proved by [[Germund Dahlquist]] and represent an important bound on the order of convergence and on the [[A-stability]] of a linear multistep method.
=== First Dahlquist barrier ===
A zero-stable linear ''q''-step multistep method cannot attain an order of convergence greater than ''q'' + 1 if ''q'' is odd and greater than ''q'' + 2 if ''q'' is even. If the method is also explicit, then it cannot attain an order greater than ''q'' {{harv|Hairer|Nørsett|Wanner|1993|loc=§III.3}}.
=== Second Dahlquist barrier ===
No explicit linear multistep method is A-stable, and the maximal order of an (implicit) A-stable linear multistep method is 2. Among the A-stable linear multistep methods of order 2, the trapezoidal rule has the smallest error constant {{harv|Dahlquist|1963}}.
== See also ==
== References ==
* {{citation | first1 = Francis | last1 = Bashforth | year = 1883 | title = An Attempt to test the Theories of Capillary Action by comparing the theoretical and measured forms of drops of fluid. With an explanation of the method of integration employed in constructing the tables which give the theoretical forms of such drops, by J. C. Adams | ___location = Cambridge }}.
* {{Citation | last1=Butcher | first1=John C. | author1-link = John C. Butcher | title=Numerical Methods for Ordinary Differential Equations | publisher=John Wiley | isbn=978-0-471-96758-3 | year=2003}}.
* {{Citation | last1=Dahlquist | first1=Germund | author1-link=Germund Dahlquist | title=Convergence and stability in the numerical integration of ordinary differential equations | year=1956 | journal=Mathematica Scandinavica | volume=4 | pages=33–53}}.
* {{Citation | last1=Dahlquist | first1=Germund | author1-link=Germund Dahlquist | title=A special stability problem for linear multistep methods | doi=10.1007/BF01963532 | year=1963 | journal=BIT | issn=0006-3835 | volume=3 | issue=1 | pages=27–43}}.
* {{citation | first1 = Herman H. | last1 = Goldstine | author1-link = Herman Goldstine | year = 1977 | title = A History of Numerical Analysis from the 16th through the 19th Century | publisher = Springer-Verlag | ___location = New York | isbn = 978-0-387-90277-7 }}.
* {{citation | first1 = Ernst | last1 = Hairer | first2 = Syvert Paul | last2 = Nørsett | first3 = Gerhard | last3 = Wanner | year = 1993 | title = Solving ordinary differential equations I: Nonstiff problems | edition = 2nd | publisher = Springer Verlag | ___location = Berlin | isbn = 978-3-540-56670-0 }}.
* {{Citation | last1=Hairer | first1=Ernst | last2=Wanner | first2=Gerhard | title=Solving ordinary differential equations II: Stiff and differential-algebraic problems | publisher=[[Springer-Verlag]] | ___location=Berlin, New York | edition=2nd | isbn=978-3-540-60452-5 | year=1996}}.
* {{citation | first1 = Arieh | last1 = Iserles | year = 1996 | title = A First Course in the Numerical Analysis of Differential Equations | publisher = Cambridge University Press | bibcode = 1996fcna.book.....I | isbn = 978-0-521-55655-2 }}.
* {{citation | first1 = W. E. | last1 = Milne | year = 1926 | title = Numerical integration of ordinary differential equations | journal = American Mathematical Monthly | volume = 33 | issue = 9 | pages = 455–460 | doi = 10.2307/2299609 | jstor = 2299609 | publisher = Mathematical Association of America }}.
* {{citation | first1 = Forest R. | last1 = Moulton | author1-link = Forest Ray Moulton | year = 1926 | title = New methods in exterior ballistics | publisher = University of Chicago Press }}.
* {{citation |author-link1= Alfio Quarteroni | first1 = Alfio | last1 = Quarteroni | first2 = Riccardo | last2 = Sacco | first3 = Fausto | last3 = Saleri | year = 2000 | title = Matematica Numerica | publisher = Springer Verlag | isbn = 978-88-470-0077-3 }}.
* {{Citation | last1=Süli | first1=Endre | last2=Mayers | first2=David | title=An Introduction to Numerical Analysis | publisher=[[Cambridge University Press]] | isbn=0-521-00794-1 | year=2003}}.