{{Short description|Class of iterative numerical methods for solving differential equations}}
{{redirect|Adams' method|the electoral apportionment method|Method of smallest divisors}}
{{no footnotes|date=June 2017}}
'''Linear multistep methods''' are used for the [[numerical ordinary differential equations|numerical solution of ordinary differential equations]]. Conceptually, a numerical method starts from an initial point and then takes a short '''step''' forward in time to find the next solution point. The process continues with subsequent steps to map out the solution. Single-step methods (such as [[Euler's method]]) refer to only one previous point and its derivative to determine the current value. Methods such as [[Runge–Kutta methods|Runge–Kutta]] take some intermediate steps (for example, a half-step) to obtain a higher-order method, but then discard all previous information before taking a second step. Multistep methods attempt to gain efficiency by keeping and using the information from previous steps rather than discarding it. Consequently, multistep methods refer to several previous points and derivative values. In the case of ''linear'' multistep methods, a [[linear combination]] of the previous points and derivative values is used.
== Definitions ==
Numerical methods for ordinary differential equations approximate solutions to [[initial value problem]]s of the form
<math display="block"> y' = f(t,y), \quad y(t_0) = y_0. </math>
The result is approximations for the value of <math> y(t) </math> at discrete times <math> t_i </math>:
<math display="block"> y_i \approx y(t_i) \quad\text{where}\quad t_i = t_0 + ih, </math>
where ''h'' is the time step (sometimes referred to as <math> \Delta t </math>) and <math>i</math> is an integer.
Multistep methods use information from the previous <math>s</math> steps to calculate the next value. In particular, a ''linear'' multistep method uses a linear combination of <math> y_i </math> and <math> f(t_i, y_i) </math> to calculate the value of <math> y </math> for the desired current step. Thus, a linear multistep method is a method of the form
<math display="block"> \begin{align}
& y_{n+s} + a_{s-1} y_{n+s-1} + a_{s-2} y_{n+s-2} + \cdots + a_0 y_n \\
& \qquad {} = h \bigl( b_s f(t_{n+s},y_{n+s}) + b_{s-1} f(t_{n+s-1},y_{n+s-1}) + \cdots + b_0 f(t_n,y_n) \bigr),
\end{align} </math>
or equivalently, with <math> a_s = 1 </math>,
<math display="block"> \sum_{j=0}^s a_j y_{n+j} = h \sum_{j=0}^s b_j f(t_{n+j}, y_{n+j}). </math>
The coefficients <math> a_0, \dotsc, a_{s-1} </math> and <math> b_0, \dotsc, b_s </math> determine the method.
One can distinguish between [[explicit and implicit methods]]. If <math> b_s = 0 </math>, then the method is called "explicit", since the formula can directly compute <math> y_{n+s} </math>. If <math> b_s \ne 0 </math> then the method is called "implicit", since the value of <math> y_{n+s} </math> depends on the value of <math> f(t_{n+s}, y_{n+s}) </math>, and the equation must be solved for <math> y_{n+s} </math>.
Sometimes an explicit multistep method is used to "predict" the value of <math> y_{n+s} </math>. That value is then used in an implicit formula to "correct" the value.
== Examples ==
Consider as an example the problem
<math display="block"> y' = f(t, y) = y, \quad y(0) = 1. </math>
The exact solution is <math> y(t) = \mathrm{e}^t </math>.
=== One-step Euler ===
A simple numerical method is Euler's method:
<math display="block"> y_{n+1} = y_n + h f(t_n, y_n). </math>
Euler's method can be viewed as an explicit multistep method for the degenerate case of one step.
This method, applied with step size <math> h = \tfrac12 </math> on the problem <math> y' = y </math>, gives the following results:
<math display="block"> \begin{align}
y_1 &= y_0 + hf(t_0, y_0) = 1 + \tfrac12 \cdot 1 = 1.5, \\
y_2 &= y_1 + hf(t_1, y_1) = 1.5 + \tfrac12 \cdot 1.5 = 2.25, \\
y_3 &= y_2 + hf(t_2, y_2) = 2.25 + \tfrac12 \cdot 2.25 = 3.375, \\
y_4 &= y_3 + hf(t_3, y_3) = 3.375 + \tfrac12 \cdot 3.375 = 5.0625.
\end{align} </math>
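The computation above is easy to mechanize. The following Python sketch (the function name <code>euler</code> is illustrative, not from any particular library) reproduces the table:
<syntaxhighlight lang="python">
# Euler's method applied to y' = f(t, y); reproduces the table above
# for f(t, y) = y, y(0) = 1 and step size h = 1/2.
def euler(f, t0, y0, h, steps):
    t, y, values = t0, y0, [y0]
    for _ in range(steps):
        y += h * f(t, y)      # one explicit Euler step
        t += h
        values.append(y)
    return values

print(euler(lambda t, y: y, 0.0, 1.0, 0.5, 4))
# [1.0, 1.5, 2.25, 3.375, 5.0625]
</syntaxhighlight>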
=== Two-step Adams–Bashforth ===
Euler's method is a [[one-step method]]. A simple multistep method is the two-step Adams–Bashforth method
<math display="block"> y_{n+2} = y_{n+1} + \tfrac32 hf(t_{n+1},y_{n+1}) - \tfrac12 hf(t_n,y_n). </math>
This method needs two values, <math> y_{n+1} </math> and <math> y_n </math>, to compute the next value, <math> y_{n+2} </math>. However, the initial value problem provides only one value, <math> y_0 = 1 </math>. One possibility to resolve this issue is to use the <math> y_1 </math> computed by Euler's method as the second value. With this choice, the Adams–Bashforth method yields (rounded to four digits):
<math display="block"> \begin{align}
y_2 &= y_1 + \tfrac32 hf(t_1, y_1) - \tfrac12 hf(t_0, y_0) = 1.5 + \tfrac32 \cdot \tfrac12 \cdot 1.5 - \tfrac12 \cdot \tfrac12 \cdot 1 = 2.375, \\
y_3 &= y_2 + \tfrac32 hf(t_2, y_2) - \tfrac12 hf(t_1, y_1) = 2.375 + \tfrac32 \cdot \tfrac12 \cdot 2.375 - \tfrac12 \cdot \tfrac12 \cdot 1.5 = 3.7812, \\
y_4 &= y_3 + \tfrac32 hf(t_3, y_3) - \tfrac12 hf(t_2, y_2) = 3.7812 + \tfrac32 \cdot \tfrac12 \cdot 3.7812 - \tfrac12 \cdot \tfrac12 \cdot 2.375 = 6.0234.
\end{align} </math>
The exact solution at <math> t = t_4 = 2 </math> is <math> \mathrm{e}^2 = 7.3891\ldots </math>, so the two-step Adams–Bashforth method is more accurate than Euler's method. This is typical if the step size is small enough.
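For comparison, here is a minimal Python sketch of the two-step Adams–Bashforth method, with the second starting value bootstrapped by an Euler step as described above (the function name is illustrative):
<syntaxhighlight lang="python">
# Two-step Adams–Bashforth for y' = f(t, y). The second starting value
# y_1 is generated with one Euler step, as in the text.
def adams_bashforth2(f, t0, y0, h, steps):
    ys = [y0, y0 + h * f(t0, y0)]
    fs = [f(t0, y0), f(t0 + h, ys[1])]
    for n in range(steps - 1):
        ys.append(ys[-1] + h * (1.5 * fs[-1] - 0.5 * fs[-2]))
        fs.append(f(t0 + (n + 2) * h, ys[-1]))
    return ys

print(adams_bashforth2(lambda t, y: y, 0.0, 1.0, 0.5, 4))
# [1.0, 1.5, 2.375, 3.78125, 6.0234375]
</syntaxhighlight>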
== Families of multistep methods ==
Three families of linear multistep methods are commonly used: Adams–Bashforth methods, Adams–Moulton methods, and the [[backward differentiation formula]]s (BDFs).
=== Adams–Bashforth methods ===
The Adams–Bashforth methods are explicit methods. The coefficients are <math> a_{s-1}=-1 </math> and <math> a_{s-2} = \cdots = a_0 = 0 </math>, while the <math> b_j </math> are chosen such that the method has order ''s'' (this determines the method uniquely).
The Adams–Bashforth methods with ''s'' = 1, 2, 3, 4, 5 are ({{harvnb|Hairer|Nørsett|Wanner|1993|loc=§III.1}}; {{harvnb|Butcher|2003|p=103}}):
<math display="block"> \begin{align}
y_{n+1} &= y_n + hf(t_n, y_n) \qquad\text{(this is the Euler method)}, \\
y_{n+2} &= y_{n+1} + h\left( \tfrac32 f(t_{n+1}, y_{n+1}) - \tfrac12 f(t_n, y_n)\right), \\
y_{n+3} &= y_{n+2} + h\left( \tfrac{23}{12} f(t_{n+2}, y_{n+2}) - \tfrac{16}{12} f(t_{n+1}, y_{n+1}) + \tfrac{5}{12} f(t_n, y_n)\right), \\
y_{n+4} &= y_{n+3} + h\left( \tfrac{55}{24} f(t_{n+3}, y_{n+3}) - \tfrac{59}{24} f(t_{n+2}, y_{n+2}) + \tfrac{37}{24} f(t_{n+1}, y_{n+1}) - \tfrac{9}{24} f(t_n, y_n)\right), \\
y_{n+5} &= y_{n+4} + h\left( \tfrac{1901}{720} f(t_{n+4}, y_{n+4}) - \tfrac{2774}{720} f(t_{n+3}, y_{n+3}) + \tfrac{2616}{720} f(t_{n+2}, y_{n+2}) - \tfrac{1274}{720} f(t_{n+1}, y_{n+1}) + \tfrac{251}{720} f(t_n, y_n)\right).
\end{align} </math>
The coefficients <math> b_j </math> can be determined as follows. Use [[polynomial interpolation]] to find the polynomial ''p'' of degree <math> s-1 </math> such that
<math display="block"> p(t_{n+i}) = f(t_{n+i}, y_{n+i}), \qquad \text{for } i=0,\ldots,s-1. </math>
The [[Lagrange polynomial|Lagrange formula]] for polynomial interpolation yields
<math display="block"> p(t) = \sum_{j=0}^{s-1} \frac{(-1)^{s-j-1} f(t_{n+j}, y_{n+j})}{j!(s-j-1)!\,h^{s-1}} \prod_{i=0 \atop i \ne j}^{s-1} (t - t_{n+i}). </math>
The polynomial ''p'' is locally a good approximation of the right-hand side of the differential equation <math> y' = f(t,y) </math> that is to be solved, so consider the equation <math> y' = p(t) </math> instead. This equation can be solved exactly; the solution is simply the integral of ''p''. This suggests taking
<math display="block"> y_{n+s} = y_{n+s-1} + \int_{t_{n+s-1}}^{t_{n+s}} p(t)\,\mathrm{d}t. </math>
The Adams–Bashforth method arises when the formula for ''p'' is substituted. The coefficients <math> b_j </math> turn out to be given by
<math display="block"> b_{s-j-1} = \frac{(-1)^j}{j!(s-j-1)!} \int_0^1 \prod_{i=0 \atop i \ne j}^{s-1} (u+i)\,\mathrm{d}u, \qquad \text{for } j=0,\ldots,s-1. </math>
Replacing <math> f(t,y) </math> by its interpolant ''p'' incurs an error of order <math> h^s </math>, and it follows that the ''s''-step Adams–Bashforth method has indeed order ''s'' {{harv|Iserles|1996|loc=§2.1}}.
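The integral formula for the coefficients lends itself to symbolic evaluation. The following sketch, assuming the SymPy library is available, generates the <math> b_j </math> for any ''s'' and reproduces the tabulated values:
<syntaxhighlight lang="python">
# Evaluate the integral formula for the Adams–Bashforth coefficients b_j.
import sympy as sp

def adams_bashforth_coefficients(s):
    u = sp.Symbol('u')
    b = [sp.Integer(0)] * s
    for j in range(s):
        poly = sp.prod([u + i for i in range(s) if i != j])
        b[s - j - 1] = (sp.Integer(-1) ** j / (sp.factorial(j) * sp.factorial(s - j - 1))
                        * sp.integrate(poly, (u, 0, 1)))
    return b  # returns [b_0, ..., b_{s-1}]

print(adams_bashforth_coefficients(3))  # [5/12, -4/3, 23/12]
</syntaxhighlight>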
The Adams–Bashforth methods were designed by [[John Couch Adams]] to solve a differential equation modelling [[capillary action]] due to [[Francis Bashforth]]. {{harvtxt|Bashforth|1883}} published his theory and Adams' numerical method {{harv|Goldstine|1977}}.
=== Adams–Moulton methods ===
The Adams–Moulton methods are similar to the Adams–Bashforth methods in that they also have <math> a_{s-1} = -1 </math> and <math> a_{s-2} = \cdots = a_0 = 0 </math>. Again the ''b'' coefficients are chosen to obtain the highest order possible. However, the Adams–Moulton methods are implicit methods. By removing the restriction that <math> b_s = 0 </math>, an ''s''-step Adams–Moulton method can reach order <math> s+1 </math>, while an ''s''-step Adams–Bashforth method has only order ''s''.
The Adams–Moulton methods with ''s'' = 0, 1, 2, 3, 4 are listed below ({{harvnb|Hairer|Nørsett|Wanner|1993|loc=§III.1}}; {{harvnb|Quarteroni|Sacco|Saleri|2000}}); the first two methods are the [[backward Euler method]] and the [[Trapezoidal rule (differential equations)|trapezoidal rule]] respectively:
<math display="block"> \begin{align}
y_{n} &= y_{n-1} + h f(t_{n},y_{n}), \\
y_{n+1} &= y_n + \frac{1}{2} h \left( f(t_{n+1},y_{n+1}) + f(t_n,y_n) \right), \\
y_{n+2} &= y_{n+1} + h \left( \frac{5}{12} f(t_{n+2},y_{n+2}) + \frac{8}{12} f(t_{n+1},y_{n+1}) - \frac{1}{12} f(t_n,y_n) \right), \\
y_{n+3} &= y_{n+2} + h \left( \frac{9}{24} f(t_{n+3},y_{n+3}) + \frac{19}{24} f(t_{n+2},y_{n+2}) - \frac{5}{24} f(t_{n+1},y_{n+1}) + \frac{1}{24} f(t_n,y_n) \right), \\
y_{n+4} &= y_{n+3} + h \left( \frac{251}{720} f(t_{n+4},y_{n+4}) + \frac{646}{720} f(t_{n+3},y_{n+3}) - \frac{264}{720} f(t_{n+2},y_{n+2}) + \frac{106}{720} f(t_{n+1},y_{n+1}) - \frac{19}{720} f(t_n,y_n) \right).
\end{align} </math>
The derivation of the Adams–Moulton methods is similar to that of the Adams–Bashforth method; however, the interpolating polynomial uses not only the points <math> t_{n-1},\dots, t_{n-s} </math>, as above, but also <math> t_n </math>. The coefficients are given by
<math display="block"> b_{s-j} = \frac{(-1)^j}{j!(s-j)!} \int_0^1 \prod_{i=0 \atop i \ne j}^{s} (u+i-1)\,\mathrm{d}u, \qquad \text{for } j=0,\ldots,s. </math>
The Adams–Moulton methods are solely due to [[John Couch Adams]], like the Adams–Bashforth methods. The name of [[Forest Ray Moulton]] became associated with these methods because he realized that they could be used in tandem with the Adams–Bashforth methods as a [[Predictor-corrector method|predictor-corrector]] pair {{harv|Moulton|1926}}; {{harvtxt|Milne|1926}} had the same idea. Adams used [[Newton's method]] to solve the implicit equation {{harv|Hairer|Nørsett|Wanner|1993|loc=§III.1}}.
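As a concrete illustration of such a predictor–corrector pairing, the sketch below (illustrative code, not a library API) predicts with the two-step Adams–Bashforth method and corrects once with the trapezoidal rule:
<syntaxhighlight lang="python">
# One predictor-corrector step: the explicit two-step Adams-Bashforth
# method predicts y_{n+2}, and the implicit trapezoidal rule corrects it.
def predict_correct_step(f, t, y_prev, y_curr, h):
    f_prev, f_curr = f(t - h, y_prev), f(t, y_curr)
    y_pred = y_curr + h * (1.5 * f_curr - 0.5 * f_prev)    # predict (AB2)
    return y_curr + h / 2 * (f_curr + f(t + h, y_pred))    # correct (trapezoidal)

# Example on y' = y with h = 0.5, starting from y_0 = 1 and y_1 = 1.5:
print(predict_correct_step(lambda t, y: y, 0.5, 1.0, 1.5, 0.5))  # 2.46875
</syntaxhighlight>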
=== Backward differentiation formulas (BDF) ===
{{main|Backward differentiation formula}}
The BDF methods are implicit methods with <math> b_{s-1} = \cdots = b_0 = 0 </math> and the other coefficients chosen such that the method attains order ''s'' (the maximum possible). These methods are especially used for the solution of [[stiff equation|stiff differential equation]]s.
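As a sketch of how an implicit method is applied in practice, the following Python fragment advances the two-step BDF by solving the implicit equation numerically with SciPy's <code>fsolve</code>; the function names and the test problem are illustrative assumptions, not part of the method itself:
<syntaxhighlight lang="python">
# Two-step BDF: y_{n+2} - 4/3 y_{n+1} + 1/3 y_n = 2/3 h f(t_{n+2}, y_{n+2}).
# The implicit equation is solved numerically with SciPy's fsolve.
import math
from scipy.optimize import fsolve

def bdf2_step(f, t_next, y_prev, y_curr, h):
    residual = lambda y: y - (4 * y_curr - y_prev) / 3 - (2 / 3) * h * f(t_next, y)
    return fsolve(residual, y_curr)[0]   # previous value as initial guess

# One step on y' = -y with h = 0.1, given y_0 = 1 and y_1 = exp(-0.1):
print(bdf2_step(lambda t, y: -y, 0.2, 1.0, math.exp(-0.1), 0.1))
# ~0.8185 (exact solution: exp(-0.2) ~ 0.8187)
</syntaxhighlight>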
== Analysis ==
The central concepts in the analysis of linear multistep methods, and indeed any numerical method for differential equations, are [[Numerical ordinary differential equations#Analysis|convergence, order, and stability]].
=== Consistency and order ===
The first question is whether the method is consistent: is the difference equation
<math display="block"> \begin{align}
& a_s y_{n+s} + a_{s-1} y_{n+s-1} + a_{s-2} y_{n+s-2} + \cdots + a_0 y_n \\
& \qquad {} = h \bigl( b_s f(t_{n+s},y_{n+s}) + b_{s-1} f(t_{n+s-1},y_{n+s-1}) + \cdots + b_0 f(t_n,y_n) \bigr),
\end{align} </math>
a good approximation of the differential equation <math> y' = f(t,y) </math>? More precisely, a multistep method is ''consistent'' if the [[local truncation error]] goes to zero faster than the step size ''h'' as ''h'' goes to zero, where the local truncation error is defined to be the difference between the result <math> y_{n+s} </math> of the method, assuming that all the previous values <math> y_{n+s-1}, \ldots, y_n </math> are exact, and the exact solution of the equation at time <math> t_{n+s} </math>, divided by <math> h </math>. A computation using [[Taylor series]] shows that a linear multistep method is consistent if and only if
<math display="block"> \sum_{k=0}^{s-1} a_k = -1 \quad\text{and}\quad \sum_{k=0}^s b_k = s + \sum_{k=0}^{s-1} k a_k. </math>
All the methods mentioned above are consistent {{harv|Hairer|Nørsett|Wanner|1993|loc=§III.2}}.
If the method is consistent, then the next question is how well the difference equation defining the numerical method approximates the differential equation. A multistep method is said to have ''order'' ''p'' if the local error is of order <math>O(h^{p+1})</math> as ''h'' goes to zero. This is equivalent to the following condition on the coefficients of the methods:
<math display="block"> \sum_{k=0}^{s-1} a_k = -1 \quad\text{and}\quad q \sum_{k=0}^{s} k^{q-1} b_k = s^q + \sum_{k=0}^{s-1} k^q a_k \quad \text{for } q=1,\dots,p. </math>
The ''s''-step Adams–Bashforth method has order ''s'', while the ''s''-step Adams–Moulton method has order <math>s+1</math> {{harv|Hairer|Nørsett|Wanner|1993|loc=§III.2}}.
These conditions are often formulated using the ''characteristic polynomials''
<math display="block"> \rho(z) = z^s + \sum_{k=0}^{s-1} a_k z^k \quad\text{and}\quad \sigma(z) = \sum_{k=0}^s b_k z^k. </math>
In terms of these polynomials, the above condition for the method to have order ''p'' becomes
<math display="block"> \rho(\mathrm{e}^h) - h\sigma(\mathrm{e}^h) = O(h^{p+1}) \quad \text{as } h \to 0. </math>
In particular, the method is consistent if it has order at least one, which is the case if <math>\rho(1)=0</math> and <math>\rho'(1)=\sigma(1)</math>.
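These order conditions can be checked mechanically. A small Python sketch (assuming the coefficient form of the conditions given above) confirms that the two-step Adams–Bashforth method has order 2 but not order 3:
<syntaxhighlight lang="python">
# Check sum_k a_k = -1 and q * sum_k k^(q-1) b_k = s^q + sum_{k<s} k^q a_k
# for q = 1, ..., p (the coefficient form of the order conditions above).
def has_order(a, b, p):
    s = len(b) - 1          # a = (a_0, ..., a_{s-1}), b = (b_0, ..., b_s)
    if abs(sum(a) + 1) > 1e-12:
        return False
    for q in range(1, p + 1):
        lhs = q * sum(k ** (q - 1) * b[k] for k in range(s + 1))
        rhs = s ** q + sum(k ** q * a[k] for k in range(s))
        if abs(lhs - rhs) > 1e-12:
            return False
    return True

# Two-step Adams-Bashforth: order 2 but not order 3.
a, b = [0, -1], [-0.5, 1.5, 0]
print(has_order(a, b, 2), has_order(a, b, 3))  # True False
</syntaxhighlight>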
=== Stability and convergence ===
<!-- [[Zero stability]] redirects here -->
The numerical solution of a one-step method depends on the initial condition <math> y_0 </math>, but the numerical solution of an ''s''-step method depends on all the ''s'' starting values, <math> y_0, y_1, \ldots, y_{s-1} </math>. It is thus of interest whether the numerical solution is stable with respect to perturbations in the starting values. A linear multistep method is [[Zero stability|''zero-stable'']] for a certain differential equation on a given time interval, if a perturbation in the starting values of size ε causes the numerical solution over that time interval to change by no more than ''K''ε for some value of ''K'' which does not depend on the step size ''h''. This is called "zero-stability" because it is enough to check the condition for the differential equation <math> y' = 0 </math> {{harv|Süli|Mayers|2003|p=332}}.
If the roots of the characteristic polynomial <math>\rho</math> all have modulus less than or equal to 1 and the roots of modulus 1 are of multiplicity 1, we say that the [[root condition]] is satisfied. A linear multistep method is zero-stable if and only if the root condition is satisfied {{harv|Süli|Mayers|2003}}.
Now suppose that a consistent linear multistep method is applied to a sufficiently smooth differential equation and that the starting values <math> y_1, \ldots, y_{s-1}</math> all converge to the initial value <math> y_0 </math> as <math> h \to 0 </math>. Then, the numerical solution converges to the exact solution as <math> h \to 0 </math> if and only if the method is zero-stable. This result is known as the ''Dahlquist equivalence theorem'', named after [[Germund Dahlquist]]; this theorem is similar in spirit to the [[Lax equivalence theorem]] for [[finite difference method]]s. Furthermore, if the method has order ''p'', then the [[global truncation error|global error]] (the difference between the numerical solution and the exact solution at a fixed time) is <math> O(h^p) </math> {{harv|Süli|Mayers|2003|p=340}}.
Furthermore, if the method is stable, the method is said to be ''strongly stable'' if <math>z=1</math> is the only root of modulus 1. If it is stable and all roots of modulus 1 are not repeated, but there is more than one such root, it is said to be ''relatively stable''. Note that 1 must be a root; thus stable methods are always one of these two.
To assess the performance of linear multistep methods on [[stiff equation]]s, consider the linear test equation ''y''' = λ''y''. A multistep method applied to this differential equation with step size ''h'' yields a linear [[recurrence relation]] with characteristic polynomial
<math display="block"> \pi(z; h\lambda) = (1 - h\lambda\beta_s) z^s + \sum_{k=0}^{s-1} (\alpha_k - h\lambda\beta_k) z^k = \rho(z) - h\lambda\sigma(z). </math>
This polynomial is called the ''stability polynomial'' of the multistep method. If all of its roots have modulus less than one then the numerical solution of the multistep method will converge to zero and the multistep method is said to be ''absolutely stable'' for that value of ''h''λ. The method is said to be ''A-stable'' if it is absolutely stable for all ''h''λ with negative real part. The ''region of absolute stability'' is the set of all ''h''λ for which the multistep method is absolutely stable {{harv|Süli|Mayers|2003|pp=347 & 348}}. For more details, see the section on [[Stiff equation#Multistep methods|stiff equations and multistep methods]].
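The boundary of the region of absolute stability can be traced with the standard boundary-locus technique: the roots of <math>\pi(z; h\lambda)</math> cross the unit circle where <math> h\lambda = \rho(z)/\sigma(z) </math> with <math> z = e^{i\theta} </math>. Here is a sketch for the two-step Adams–Bashforth method, assuming NumPy and Matplotlib are available:
<syntaxhighlight lang="python">
# Trace the boundary of the region of absolute stability of the two-step
# Adams-Bashforth method: h*lambda = rho(z)/sigma(z) on the unit circle.
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0.0, 2.0 * np.pi, 500)
z = np.exp(1j * theta)
hl = (z**2 - z) / (1.5 * z - 0.5)     # rho(z)/sigma(z) for AB2

plt.plot(hl.real, hl.imag)
plt.xlabel('Re(h*lambda)')
plt.ylabel('Im(h*lambda)')
plt.show()
</syntaxhighlight>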
=== Example ===
Consider the Adams–Bashforth three-step method
<math display="block"> y_{n+3} = y_{n+2} + h\left( \tfrac{23}{12} f(t_{n+2}, y_{n+2}) - \tfrac{16}{12} f(t_{n+1}, y_{n+1}) + \tfrac{5}{12} f(t_n, y_n)\right). </math>
One characteristic polynomial is
<math display="block">\rho(z) = z^3 - z^2 = z^2(z-1),</math>
which has roots <math>z=0, 1</math>, and the conditions above are satisfied. As <math>z=1</math> is the only root of modulus 1, the method is strongly stable.
The other characteristic polynomial is
<math display="block">\sigma(z) = \frac{23}{12} z^2 - \frac{4}{3} z + \frac{5}{12} </math>
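Both polynomials can be checked numerically; for instance, the root condition for this method is confirmed by inspecting the roots of <math>\rho</math> (a sketch assuming NumPy):
<syntaxhighlight lang="python">
# Roots of rho(z) = z^3 - z^2 for the three-step Adams-Bashforth method.
import numpy as np
print(np.roots([1, -1, 0, 0]))  # [1. 0. 0.] -- root condition satisfied
</syntaxhighlight>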
==First and second Dahlquist barriers==
These two results were proved by [[Germund Dahlquist]] and represent an important bound for the order of convergence and for the [[Stiff equation#A-stability|A-stability]] of a linear multistep method. The first Dahlquist barrier was proved in {{harvtxt|Dahlquist|1956}} and the second in {{harvtxt|Dahlquist|1963}}.
=== First Dahlquist barrier ===
The first Dahlquist barrier states that a zero-stable and linear ''q''-step multistep method cannot attain an order of convergence greater than ''q''&nbsp;+&nbsp;1 if ''q'' is odd and greater than ''q''&nbsp;+&nbsp;2 if ''q'' is even. If the method is also explicit, then it cannot attain an order greater than ''q'' {{harv|Hairer|Nørsett|Wanner|1993|loc=Thm III.3.5}}.
=== Second Dahlquist barrier ===
The second Dahlquist barrier states that no explicit linear multistep methods are [[Stiff equation#A-stability|A-stable]]. Further, the maximal order of an (implicit) A-stable linear multistep method is 2. Among the A-stable linear multistep methods of order 2, the trapezoidal rule has the smallest error constant {{harv|Dahlquist|1963}}.
== See also ==
== References ==
* {{citation | first1 = Francis | last1 = Bashforth | year = 1883 | title = An Attempt to test the Theories of Capillary Action by comparing the theoretical and measured forms of drops of fluid. With an explanation of the method of integration employed in constructing the tables which give the theoretical forms of such drops, by J. C. Adams | ___location = Cambridge }}.
* {{Citation | last1=Butcher | first1=John C. | author1-link = John C. Butcher | title=Numerical Methods for Ordinary Differential Equations | publisher=John Wiley | isbn=978-0-471-96758-3 | year=2003}}.
* {{Citation | last1=Dahlquist | first1=Germund | author1-link=Germund Dahlquist | title=Convergence and stability in the numerical integration of ordinary differential equations | year=1956 | journal=Mathematica Scandinavica | volume=4 | pages=33–53| doi=10.7146/math.scand.a-10454 | doi-access=free }}.
* {{Citation | last1=Dahlquist | first1=Germund | author1-link=Germund Dahlquist | title=A special stability problem for linear multistep methods | doi=10.1007/BF01963532 | year=1963 | journal=BIT | issn=0006-3835 | volume=3 | pages=27–43| s2cid=120241743 }}.
* {{citation | first1 = Herman H. | last1 = Goldstine | author1-link = Herman Goldstine | year = 1977 | title = A History of Numerical Analysis from the 16th through the 19th Century | publisher = Springer-Verlag | ___location = New York | isbn = 978-0-387-90277-7 }}.
* {{citation | first1 = Ernst | last1 = Hairer | first2 = Syvert Paul | last2 = Nørsett | first3 = Gerhard | last3 = Wanner | year = 1993 | title = Solving ordinary differential equations I: Nonstiff problems | edition = 2nd | publisher = Springer Verlag | ___location = Berlin | isbn = 978-3-540-56670-0 }}.
* {{Citation | last1=Hairer | first1=Ernst | last2=Wanner | first2=Gerhard | title=Solving ordinary differential equations II: Stiff and differential-algebraic problems | publisher=[[Springer-Verlag]] | ___location=Berlin, New York | edition=2nd | isbn=978-3-540-60452-5 | year=1996}}.
* {{citation | first1 = Arieh | last1 = Iserles | year = 1996 | title = A First Course in the Numerical Analysis of Differential Equations | publisher = Cambridge University Press | bibcode = 1996fcna.book.....I | isbn = 978-0-521-55655-2 }}.
* {{citation | first1 = W. E. | last1 = Milne | year = 1926 | title = Numerical integration of ordinary differential equations | journal = American Mathematical Monthly | volume = 33 | issue = 9 | pages = 455–460 | doi = 10.2307/2299609 | jstor = 2299609 }}.
* {{citation | first1 = Forest R. | last1 = Moulton | author1-link = Forest Ray Moulton | year = 1926 | title = New methods in exterior ballistics | publisher = University of Chicago Press }}.
* {{citation |author-link1= Alfio Quarteroni | first1 = Alfio | last1 = Quarteroni | first2 = Riccardo | last2 = Sacco | first3 = Fausto | last3 = Saleri | year = 2000 | title = Matematica Numerica | publisher = Springer Verlag | isbn = 978-88-470-0077-3 }}.
* {{Citation | last1=Süli | first1=Endre | last2=Mayers | first2=David | title=An Introduction to Numerical Analysis | publisher=[[Cambridge University Press]] | isbn=0-521-00794-1 | year=2003}}.
== External links ==
* {{MathWorld | urlname= AdamsMethod| title= Adams Method }}
{{Numerical integrators}}
[[Category:Numerical differential equations]]
[[Category:Numerical analysis]]