{{Short description|Class of iterative numerical methods for solving differential equations}}
{{redirect|Adams' method|the electoral apportionment method|Method of smallest divisors}}
{{no footnotes|date=June 2017}}
A '''linear multistep method''' is a numerical method for solving [[ordinary differential equation]]s that computes the next value of the solution from a linear combination of several previous solution values and their derivatives.

== Definitions ==
Numerical methods for ordinary differential equations approximate solutions to [[initial value problem]]s of the form
: <math display="block"> y' = f(t,y), \quad y(t_0) = y_0. </math>
 
The result is a sequence of approximations to the value of <math> y(t) </math> at discrete times <math> t_i </math>:
: <math display="block"> y_i \approx y(t_i) \quad\text{where}\quad t_i = t_0 + i h, </math>
where <math> h </math> is the time step (sometimes referred to as <math> \Delta t </math>) and <math>i</math> is an integer.
 
Multistep methods use information from the previous <math> s </math> steps to calculate the next value. In particular, a ''linear'' multistep method uses a linear combination of <math> y_i </math> and <math> f(t_i,y_i) </math> to calculate the value of <math> y </math> for the desired current step. Thus, a linear multistep method is a method of the form
: <math display="block"> \begin{align}
& y_{n+s} + a_{s-1} \cdot y_{n+s-1} + a_{s-2} \cdot y_{n+s-2} + \cdots + a_0 \cdot y_n \\
& \qquad {} = h\cdot\left( b_s \cdot f(t_{n+s},y_{n+s}) + b_{s-1} \cdot f(t_{n+s-1},y_{n+s-1}) + \cdots + b_0 \cdot f(t_n,y_n) \right), \\
& \Leftrightarrow \sum_{j=0}^s a_jy_{n+j} = h\sum_{j=0}^sb_jf(t_{n+j},y_{n+j}),
\end{align} </math>
with <math>a_s=1</math>. The coefficients <math> a_0, \dotsc, a_{s-1} </math> and <math> b_0, \dotsc, b_s </math> determine the method. The designer of the method chooses the coefficients, balancing the need to get a good approximation to the true solution against the desire to get a method that is easy to apply. Often, many coefficients are zero to simplify the method.
 
One can distinguish between [[explicit and implicit methods]]. If <math> b_s = 0 </math>, then the method is called "explicit", since the formula can directly compute <math> y_{n+s} </math>. If <math> b_s \ne 0 </math> then the method is called "implicit", since the value of <math> y_{n+s} </math> depends on the value of <math> f(t_{n+s}, y_{n+s}) </math>, and the equation must be solved for <math> y_{n+s} </math>. [[Iterative methods]] such as [[Newton's method]] are often used to solve the implicit formula.
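
For illustration, one step of an explicit method of this form takes only a few lines of code. The following Python sketch (the function name and calling convention are choices of this illustration, not from the cited sources) evaluates the defining recurrence, assuming the <math> s </math> most recent values are already available:
<syntaxhighlight lang="python">
def explicit_lmm_step(f, ts, ys, h, a, b):
    """One step of an explicit s-step linear multistep method.

    ts, ys: the s most recent times and values, oldest first.
    a = (a_0, ..., a_{s-1}) and b = (b_0, ..., b_{s-1}); a_s = 1 and
    b_s = 0 are implied, so the recurrence can be solved directly:
        y_{n+s} = -sum_j a_j*y_{n+j} + h * sum_j b_j*f(t_{n+j}, y_{n+j})
    """
    fs = [f(t, y) for t, y in zip(ts, ys)]
    return (-sum(aj * yj for aj, yj in zip(a, ys))
            + h * sum(bj * fj for bj, fj in zip(b, fs)))
</syntaxhighlight>
For the two-step Adams–Bashforth method described below, one would pass <code>a = (0, -1)</code> and <code>b = (-1/2, 3/2)</code>. For an implicit method (<math> b_s \ne 0 </math>), the same recurrence would instead have to be solved for <math> y_{n+s} </math>, for example by fixed-point iteration or Newton's method.
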
== Examples ==
 
As an example, consider the problem
: <math display="block"> y' = f(t,y)=y, \quad y(0) = 1. </math>
The exact solution is <math> y(t) = \mathrm{e}^t </math>.
 
=== One-step Euler ===
A simple numerical method is Euler's method:
: <math display="block"> y_{n+1} = y_n + hf(t_n, y_n). \, </math>
Euler's method can be viewed as an explicit multistep method for the degenerate case of one step.
 
This method, applied with step size <math> h = \tfrac{1}{2} </math> on the problem <math> y' = y </math>, gives the following results:
: <math display="block"> \begin{align}
y_1 &= y_0 + hf(t_0, y_0) = 1 + \tfrac{1}{2} \cdot 1 = 1.5, \\
y_2 &= y_1 + hf(t_1, y_1) = 1.5 + \tfrac{1}{2} \cdot 1.5 = 2.25, \\
y_3 &= y_2 + hf(t_2, y_2) = 2.25 + \tfrac{1}{2} \cdot 2.25 = 3.375, \\
y_4 &= y_3 + hf(t_3, y_3) = 3.375 + \tfrac{1}{2} \cdot 3.375 = 5.0625.
\end{align} </math>
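
These four iterates can be reproduced with a short loop; the following sketch hard-codes the problem <math> y' = y </math>, <math> y(0) = 1 </math> and step size <math> h = \tfrac{1}{2} </math> from above:
<syntaxhighlight lang="python">
f = lambda t, y: y            # right-hand side of y' = y
h, t, y = 0.5, 0.0, 1.0       # step size and initial condition y(0) = 1
for n in range(4):
    y = y + h * f(t, y)       # Euler step: y_{n+1} = y_n + h f(t_n, y_n)
    t += h
    print(y)                  # 1.5, 2.25, 3.375, 5.0625
</syntaxhighlight>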
 
===Two-step Adams–Bashforth===
Euler's method is a [[one-step method]]. A simple multistep method is the two-step Adams–Bashforth method
: <math display="block"> y_{n+2} = y_{n+1} + \tfrac{3}{2} hf(t_{n+1},y_{n+1}) - \tfrac{1}{2} hf(t_n,y_n). </math>
This method needs two values, <math> y_{n+1} </math> and <math> y_n </math>, to compute the next value, <math> y_{n+2} </math>. However, the initial value problem provides only one value, <math> y_0 = 1 </math>. One possibility to resolve this issue is to use the <math> y_1 </math> computed by Euler's method as the second value. With this choice, the Adams–Bashforth method yields (rounded to four digits):
: <math display="block"> \begin{align}
y_2 &= y_1 + \tfrac{3}{2} hf(t_1, y_1) - \tfrac{1}{2} hf(t_0, y_0) = 1.5 + \tfrac{3}{2} \cdot \tfrac{1}{2} \cdot 1.5 - \tfrac{1}{2} \cdot \tfrac{1}{2} \cdot 1 = 2.375, \\
y_3 &= y_2 + \tfrac{3}{2} hf(t_2, y_2) - \tfrac{1}{2} hf(t_1, y_1) = 2.375 + \tfrac{3}{2} \cdot \tfrac{1}{2} \cdot 2.375 - \tfrac{1}{2} \cdot \tfrac{1}{2} \cdot 1.5 = 3.7812, \\
y_4 &= y_3 + \tfrac{3}{2} hf(t_3, y_3) - \tfrac{1}{2} hf(t_2, y_2) = 3.7812 + \tfrac{3}{2} \cdot \tfrac{1}{2} \cdot 3.7812 - \tfrac{1}{2} \cdot \tfrac{1}{2} \cdot 2.375 = 6.0234.
\end{align} </math>
The exact solution at <math> t = t_4 = 2 </math> is <math> \mathrm{e}^2 = 7.3891\ldots </math>, so the two-step Adams–Bashforth method is more accurate than Euler's method. This is always the case if the step size is small enough.
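
The same computation, including the Euler starting value <math> y_1 </math>, in a short sketch:
<syntaxhighlight lang="python">
f = lambda t, y: y                     # right-hand side of y' = y
h = 0.5
t = [0.0, 0.5]
y = [1.0, 1.0 + h * f(0.0, 1.0)]       # y_1 = 1.5 from one Euler step
for n in range(3):
    # two-step Adams-Bashforth: y_{n+2} = y_{n+1} + h(3/2 f_{n+1} - 1/2 f_n)
    y.append(y[-1] + h * (1.5 * f(t[-1], y[-1]) - 0.5 * f(t[-2], y[-2])))
    t.append(t[-1] + h)
print(y)                               # [1.0, 1.5, 2.375, 3.78125, 6.0234375]
</syntaxhighlight>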
 
== Families of multistep methods ==
Three families of linear multistep methods are commonly used: Adams–Bashforth methods, Adams–Moulton methods, and the [[backward differentiation formula]]s (BDFs).
 
=== Adams–Bashforth methods ===
The Adams–Bashforth methods are explicit methods. The coefficients are <math> a_{s-1}=-1 </math> and <math> a_{s-2} = \cdots = a_0 = 0 </math>, while the <math> b_j </math> are chosen such that the methods have order ''s'' (this determines the methods uniquely).
 
The Adams–Bashforth methods with ''s'' = 1, 2, 3, 4, 5 are ({{harvnb|Hairer|Nørsett|Wanner|1993|loc=§III.1}}; {{harvnb|Butcher|2003|p=103}}):
: <math display="block"> \begin{align}
y_{n+1} &= y_n + hf(t_n, y_n) , \qquad\text{(This is the Euler method)} \\
y_{n+2} &= y_{n+1} + h\left( \frac{3}{2}f(t_{n+1}, y_{n+1}) - \frac{1}{2}f(t_n, y_n) \right) , \\
y_{n+3} &= y_{n+2} + h\left( \frac{23}{12} f(t_{n+2}, y_{n+2}) - \frac{16}{12} f(t_{n+1}, y_{n+1}) + \frac{5}{12}f(t_n, y_n)\right) , \\
y_{n+4} &= y_{n+3} + h\left( \frac{55}{24} f(t_{n+3}, y_{n+3}) - \frac{59}{24} f(t_{n+2}, y_{n+2}) + \frac{37}{24} f(t_{n+1}, y_{n+1}) - \frac{9}{24} f(t_n, y_n) \right) , \\
y_{n+5} &= y_{n+4} + h\left( \frac{1901}{720} f(t_{n+4}, y_{n+4}) - \frac{2774}{720} f(t_{n+3}, y_{n+3}) + \frac{2616}{720} f(t_{n+2}, y_{n+2}) - \frac{1274}{720} f(t_{n+1}, y_{n+1}) + \frac{251}{720} f(t_n, y_n) \right) .
\end{align} </math>
 
The coefficients <math> b_j </math> can be determined as follows. Use [[polynomial interpolation]] to find the polynomial ''p'' of degree <math> s-1 </math> such that
:<math display="block"> p(t_{n+i}) = f(t_{n+i}, y_{n+i}), \qquad \text{for } i=0,\ldots,s-1. </math>
The [[Lagrange polynomial|Lagrange formula]] for polynomial interpolation yields
:<math display="block"> p(t) = \sum_{j=0}^{s-1} \frac{(-1)^{s-j-1}f(t_{n+j}, y_{n+j})}{j!(s-j-1)!h^{s-1}} \prod_{i=0 \atop i\ne j}^{s-1} (t-t_{n+i}). </math>
The polynomial ''p'' is locally a good approximation of the right-hand side of the differential equation <math> y' = f(t,y) </math> that is to be solved, so consider the equation <math> y' = p(t) </math> instead. This equation can be solved exactly; the solution is simply the integral of ''p''. This suggests taking
:<math display="block"> y_{n+s} = y_{n+s-1} + \int_{t_{n+s-1}}^{t_{n+s}} p(t)\,\mathrm dt. </math>
The Adams–Bashforth method arises when the formula for ''p'' is substituted. The coefficients <math> b_j </math> turn out to be given by
:<math display="block"> b_{s-j-1} = \frac{(-1)^j}{j!(s-j-1)!} \int_0^1 \prod_{i=0 \atop i\ne j}^{s-1} (u+i) \,\mathrm du, \qquad \text{for } j=0,\ldots,s-1. </math>
Replacing <math> f(t, y) </math> by its interpolant ''p'' incurs an error of order ''h''<sup>''s''</sup>, and it follows that the ''s''-step Adams–Bashforth method indeed has order ''s'' {{harv|Iserles|1996|loc=§2.1}}.
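
The integral formula for the coefficients lends itself to symbolic evaluation. The following sketch uses the SymPy library (an implementation choice of this illustration; the function name is hypothetical) and reproduces the coefficients tabulated above:
<syntaxhighlight lang="python">
import sympy as sp

def adams_bashforth_coeffs(s):
    """Coefficients b_0, ..., b_{s-1} of the s-step Adams-Bashforth method."""
    u = sp.symbols('u')
    b = [sp.Integer(0)] * s
    for j in range(s):
        # product over i = 0, ..., s-1 with i != j (empty product is 1)
        poly = sp.Mul(*[u + i for i in range(s) if i != j])
        b[s - j - 1] = ((-1) ** j * sp.integrate(poly, (u, 0, 1))
                        / (sp.factorial(j) * sp.factorial(s - j - 1)))
    return b

print(adams_bashforth_coeffs(3))  # [5/12, -4/3, 23/12]
</syntaxhighlight>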
 
=== Adams–Moulton methods ===
The Adams–Moulton methods are similar to the Adams–Bashforth methods in that they also have <math> a_{s-1} = -1 </math> and <math> a_{s-2} = \cdots = a_0 = 0 </math>. Again the ''b'' coefficients are chosen to obtain the highest order possible. However, the Adams–Moulton methods are implicit methods. By removing the restriction that <math> b_s = 0 </math>, an ''s''-step Adams–Moulton method can reach order <math> s+1 </math>, while an ''s''-step Adams–Bashforth methods has only order ''s''.
 
The Adams–Moulton methods with ''s'' = 0, 1, 2, 3, 4 are listed below ({{harvnb|Hairer|Nørsett|Wanner|1993|loc=§III.1}}; {{harvnb|Quarteroni|Sacco|Saleri|2000}}); the first two are the [[backward Euler method]] and the [[Trapezoidal rule (differential equations)|trapezoidal rule]], respectively:
: <math display="block"> \begin{align}
y_n &= y_{n-1} + h f(t_n,y_n) , \qquad\text{(This is the backward Euler method)}\\
y_{n+1} &= y_n + \frac{1}{2} h \left( f(t_{n+1},y_{n+1}) + f(t_n,y_n) \right) , \qquad\text{(This is the trapezoidal rule)}\\
y_{n+2} &= y_{n+1} + h \left( \frac{5}{12} f(t_{n+2},y_{n+2}) + \frac{8}{12} f(t_{n+1},y_{n+1}) - \frac{1}{12} f(t_n,y_n) \right) , \\
y_{n+3} &= y_{n+2} + h \left( \frac{9}{24} f(t_{n+3},y_{n+3}) + \frac{19}{24} f(t_{n+2},y_{n+2}) - \frac{5}{24} f(t_{n+1},y_{n+1}) + \frac{1}{24} f(t_n,y_n) \right) , \\
y_{n+4} &= y_{n+3} + h \left( \frac{251}{720} f(t_{n+4},y_{n+4}) + \frac{646}{720} f(t_{n+3},y_{n+3}) - \frac{264}{720} f(t_{n+2},y_{n+2}) + \frac{106}{720} f(t_{n+1},y_{n+1}) - \frac{19}{720} f(t_n,y_n) \right) .
\end{align} </math>
 
The derivation of the Adams–Moulton methods is similar to that of the Adams–Bashforth method; however, the interpolating polynomial uses not only the points <math>t_{n-1},\dots, t_{n-s} </math>, as above, but also <math> t_n </math>. The coefficients are given by
:<math display="block"> b_{s-j} = \frac{(-1)^j}{j!(s-j)!} \int_0^1 \prod_{i=0 \atop i\ne j}^{s} (u+i-1) \,\mathrm du, \qquad \text{for } j=0,\ldots,s. </math>
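
As with the Adams–Bashforth coefficients, this formula can be checked symbolically; a sketch in the same style as above (again using SymPy, with a hypothetical function name):
<syntaxhighlight lang="python">
import sympy as sp

def adams_moulton_coeffs(s):
    """Coefficients b_0, ..., b_s of the s-step Adams-Moulton method."""
    u = sp.symbols('u')
    b = [sp.Integer(0)] * (s + 1)
    for j in range(s + 1):
        # product over i = 0, ..., s with i != j (empty product is 1)
        poly = sp.Mul(*[u + i - 1 for i in range(s + 1) if i != j])
        b[s - j] = ((-1) ** j * sp.integrate(poly, (u, 0, 1))
                    / (sp.factorial(j) * sp.factorial(s - j)))
    return b

print(adams_moulton_coeffs(2))  # [-1/12, 2/3, 5/12]
</syntaxhighlight>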
 
The Adams–Moulton methods are solely due to [[John Couch Adams]], like the Adams–Bashforth methods. The name of [[Forest Ray Moulton]] became associated with these methods because he realized that they could be used in tandem with the Adams–Bashforth methods as a [[Predictor-corrector method|predictor-corrector]] pair {{harv|Moulton|1926}}; {{harvtxt|Milne|1926}} had the same idea. Adams used [[Newton's method]] to solve the implicit equation {{harv|Hairer|Nørsett|Wanner|1993|loc=§III.1}}.
 
=== Backward differentiation formulas (BDF) ===
:{{main|Backward differentiation formula}}
The BDF methods are implicit methods with <math> b_{s-1} = \cdots = b_0 = 0 </math> and the other coefficients chosen such that the method attains order ''s'' (the maximum possible). These methods are especially used for the solution of [[stiff equation|stiff differential equation]]s.
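
The BDF coefficients can be derived by interpolating the solution values <math> y_n, \ldots, y_{n+s} </math> and requiring the derivative of the interpolant to match <math> f </math> at <math> t_{n+s} </math>. A symbolic sketch of this derivation, assuming SymPy and equally spaced nodes (the function name is hypothetical):
<syntaxhighlight lang="python">
import sympy as sp

def bdf(s):
    """Coefficients (a_0, ..., a_s) and b_s of the s-step BDF, with a_s = 1."""
    h, t = sp.symbols('h t', positive=True)
    ys = sp.symbols(f'y0:{s + 1}')                 # stand-ins for y_n, ..., y_{n+s}
    pts = [(j * h, ys[j]) for j in range(s + 1)]   # nodes t_{n+j}, taking t_n = 0
    dp = sp.expand(sp.diff(sp.interpolate(pts, t), t).subs(t, s * h))
    c = [dp.coeff(yj) for yj in ys]                # p'(t_{n+s}) = sum_j c_j y_{n+j}
    a = [sp.simplify(cj / c[-1]) for cj in c]      # normalize so that a_s = 1
    b_s = sp.simplify(1 / (h * c[-1]))             # sum_j a_j y_{n+j} = h b_s f_{n+s}
    return a, b_s

# ([1/3, -4/3, 1], 2/3), i.e. y_{n+2} - 4/3 y_{n+1} + 1/3 y_n = 2/3 h f_{n+2}
print(bdf(2))
</syntaxhighlight>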
 
== Analysis ==
The central concepts in the analysis of linear multistep methods are convergence, order, and stability.

=== Consistency and order ===
The first question is whether the method is consistent: is the difference equation
:<math display="block"> \begin{align}
& a_{s}y_{n+s} + a_{s-1} y_{n+s-1} + a_{s-2} y_{n+s-2} + \cdots + a_0 y_n \\
& \qquad {} = h \bigl( b_s f(t_{n+s},y_{n+s}) + b_{s-1} f(t_{n+s-1},y_{n+s-1}) + \cdots + b_0 f(t_n,y_n) \bigr),
\end{align} </math>
a good approximation of the differential equation <math> y' = f(t,y) </math>? More precisely, a multistep method is ''consistent'' if the [[local truncation error]] goes to zero faster than the step size ''h'' as ''h'' goes to zero, where the ''local truncation error'' is defined to be the difference between the result <math>y_{n+s}</math> of the method, assuming that all the previous values <math>y_{n+s-1}, \ldots, y_n</math> are exact, and the exact solution of the equation at time <math>t_{n+s}</math>. A computation using [[Taylor series]] shows that a linear multistep method is consistent if and only if
:<math display="block"> \sum_{k=0}^{s-1} a_k = -1 \quad\text{and}\quad \sum_{k=0}^s b_k = s + \sum_{k=0}^{s-1} k a_k. </math>
All the methods mentioned above are consistent {{harv|Hairer|Nørsett|Wanner|1993|loc=§III.2}}.
 
If the method is consistent, then the next question is how well the difference equation defining the numerical method approximates the differential equation. A multistep method is said to have ''order'' ''p'' if the local error is of order <math>O(h^{p+1})</math> as ''h'' goes to zero. This is equivalent to the following condition on the coefficients of the methods:
:<math display="block"> \sum_{k=0}^{s-1} a_k = -1 \quad\text{and}\quad q \sum_{k=0}^s k^{q-1} b_k = s^q + \sum_{k=0}^{s-1} k^q a_k \text{ for } q=1,\ldots,p. </math>
The ''s''-step Adams–Bashforth method has order ''s'', while the ''s''-step Adams–Moulton method has order <math>s+1</math> {{harv|Hairer|Nørsett|Wanner|1993|loc=§III.2}}.
 
These conditions are often formulated using the ''characteristic polynomials''
:<math display="block"> \rho(z) = z^s + \sum_{k=0}^{s-1} a_k z^k \quad\text{and}\quad \sigma(z) = \sum_{k=0}^s b_k z^k. </math>
In terms of these polynomials, the above condition for the method to have order ''p'' becomes
:<math display="block"> \rho(\mathrm{e}^h) - h\sigma(\mathrm{e}^h) = O(h^{p+1}) \quad \text{as } h\to 0. </math>
In particular, the method is consistent if it has order at least one, which is the case if <math>\rho(1)=0</math> and <math>\rho'(1)=\sigma(1)</math>.
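
These order conditions can be verified mechanically for a given method. The following sketch uses exact rational arithmetic and returns the largest ''p'' for which the conditions hold (the helper name and the cap <code>max_p</code> are choices of this illustration):
<syntaxhighlight lang="python">
from fractions import Fraction as F

def lmm_order(a, b, max_p=20):
    """Order of the method with a = (a_0, ..., a_{s-1}), a_s = 1, b = (b_0, ..., b_s)."""
    s = len(a)
    if sum(a) != -1:                 # rho(1) = 0 fails: not even consistent
        return 0
    for p in range(max_p):
        q = p + 1                    # note 0**0 == 1 in Python, as needed for q = 1
        lhs = q * sum(k ** (q - 1) * b[k] for k in range(s + 1))
        rhs = s ** q + sum(k ** q * a[k] for k in range(s))
        if lhs != rhs:
            return p
    return max_p

# Two-step Adams-Bashforth: y_{n+2} = y_{n+1} + h(3/2 f_{n+1} - 1/2 f_n)
print(lmm_order((0, -1), (F(-1, 2), F(3, 2), 0)))  # 2
</syntaxhighlight>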
 
=== Stability and convergence ===
<!-- [[Zero-stability]] redirects here -->
 
The numerical solution of a one-step method depends on the initial condition <math> y_0 </math>, but the numerical solution of an ''s''-step method depends on all the ''s'' starting values, <math> y_0, y_1, \ldots, y_{s-1} </math>. It is thus of interest whether the numerical solution is stable with respect to perturbations in the starting values. A linear multistep method is [[Zero stability|''zero-stable'']] for a certain differential equation on a given time interval, if a perturbation in the starting values of size ε causes the numerical solution over that time interval to change by no more than ''K''ε for some value of ''K'' which does not depend on the step size ''h''. This is called "zero-stability" because it is enough to check the condition for the differential equation <math> y' = 0 </math> {{harv|Süli|Mayers|2003|p=332}}.
 
If the roots of the characteristic polynomial ρ all have modulus less than or equal to 1 and the roots of modulus 1 are of multiplicity 1, we say that the ''root condition'' is satisfied. A linear multistep method is zero-stable if and only if the root condition is satisfied {{harv|Süli|Mayers|2003|p=335}}.
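
The root condition is straightforward to test numerically. A sketch using NumPy (the detection of repeated roots by clustering of the computed roots is a heuristic of this illustration, not an exact multiplicity test):
<syntaxhighlight lang="python">
import numpy as np

def satisfies_root_condition(a, tol=1e-9):
    """Root condition for rho(z) = z^s + a_{s-1} z^{s-1} + ... + a_0."""
    roots = np.roots([1.0] + list(a[::-1]))   # a = (a_0, ..., a_{s-1})
    for r in roots:
        if abs(r) > 1 + tol:
            return False                      # a root outside the closed unit disc
        if abs(abs(r) - 1) <= tol and np.sum(np.abs(roots - r) <= tol) > 1:
            return False                      # a repeated root of modulus 1
    return True

# Three-step Adams-Bashforth: rho(z) = z^3 - z^2, roots 0, 0, 1
print(satisfies_root_condition((0.0, 0.0, -1.0)))  # True
</syntaxhighlight>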
 
To assess the performance of linear multistep methods on [[stiff equation]]s, consider the linear test equation ''y''' = λ''y''. A multistep method applied to this differential equation with step size ''h'' yields a linear [[recurrence relation]] with characteristic polynomial
:<math display="block"> \pi(z; h\lambda) = (1 - h\lambda b_s) z^s + \sum_{k=0}^{s-1} (a_k - h\lambda b_k) z^k = \rho(z) - h\lambda\sigma(z). </math>
This polynomial is called the ''stability polynomial'' of the multistep method. If all of its roots have modulus less than one then the numerical solution of the multistep method will converge to zero and the multistep method is said to be ''absolutely stable'' for that value of ''h''λ. The method is said to be ''A-stable'' if it is absolutely stable for all ''h''λ with negative real part. The ''region of absolute stability'' is the set of all ''h''λ for which the multistep method is absolutely stable {{harv|Süli|Mayers|2003|pp=347 & 348}}. For more details, see the section on [[Stiff equation#Multistep methods|stiff equations and multistep methods]].
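
Whether a given value of ''h''λ lies in the region of absolute stability can be tested by computing the roots of the stability polynomial. A sketch, using the coefficient conventions of this article (the function name is hypothetical; the sample values probe the stability interval of the two-step Adams–Bashforth method):
<syntaxhighlight lang="python">
import numpy as np

def absolutely_stable(hl, a, b, tol=1e-9):
    """Test |roots| < 1 for pi(z; hl) = rho(z) - hl*sigma(z).

    a = (a_0, ..., a_{s-1}) with a_s = 1; b = (b_0, ..., b_s); hl = h*lambda.
    """
    rho = np.array([1.0] + list(a[::-1]), dtype=complex)   # highest degree first
    sigma = np.array(list(b[::-1]), dtype=complex)
    return all(abs(r) < 1 - tol for r in np.roots(rho - hl * sigma))

# Two-step Adams-Bashforth: a = (0, -1), b = (-1/2, 3/2, 0)
print(absolutely_stable(-0.5, (0.0, -1.0), (-0.5, 1.5, 0.0)))  # True
print(absolutely_stable(-1.5, (0.0, -1.0), (-0.5, 1.5, 0.0)))  # False
</syntaxhighlight>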
 
=== Example ===
Consider the Adams–Bashforth three-step method
<!-- save expression that computes y_n+1 rather than y_n+s
:<math display="block">y_{n+1} = y_n + h\left( {23\over 12} f(t_n, y_n) - {16 \over 12} f(t_{n-1}, y_{n-1}) + {5\over 12}f(t_{n-2}, y_{n-2})\right).</math>
-->
:<math display="block">y_{n+3} = y_{n+2} + h\left( {23\over 12} f(t_{n+2}, y_{n+2}) - {4 \over 3} f(t_{n+1}, y_{n+1}) + {5\over 12}f(t_{n}, y_{n})\right).</math>
The first characteristic polynomial is thus
:<math display="block">\rho(z) = z^3-z^2 = z^2(z-1)\,</math>
which has roots <math>z=0, 1</math>, and the conditions above are satisfied. As <math>z=1</math> is the only root of modulus 1, the method is strongly stable.
 
The second characteristic polynomial is
:<math display="block">\sigma(z) = \frac{23}{12} z^{2} - \frac{4}{3} z + \frac{5}{12}. </math>
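
The consistency conditions <math>\rho(1)=0</math> and <math>\rho'(1)=\sigma(1)</math> from the previous section are easily confirmed for these two polynomials, for instance symbolically:
<syntaxhighlight lang="python">
import sympy as sp

z = sp.symbols('z')
rho = z**3 - z**2
sigma = sp.Rational(23, 12)*z**2 - sp.Rational(4, 3)*z + sp.Rational(5, 12)
# rho(1) = 0 and rho'(1) = sigma(1) = 1
print(rho.subs(z, 1), sp.Eq(sp.diff(rho, z).subs(z, 1), sigma.subs(z, 1)))  # 0 True
</syntaxhighlight>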
 
==First and second Dahlquist barriers==
===First Dahlquist barrier===
 
The first Dahlquist barrier states that a zero-stable linear ''q''-step multistep method cannot attain an order of convergence greater than ''q'' + 1 if ''q'' is odd and greater than ''q'' + 2 if ''q'' is even. If the method is also explicit, then it cannot attain an order greater than ''q'' {{harv|Hairer|Nørsett|Wanner|1993|loc=Thm III.3.5}}.
 
===Second Dahlquist barrier===
 
The second Dahlquist barrier states that no explicit linear multistep methods are [[Stiff equation#A-stability|A-stable]]. Further, the maximal order of an (implicit) A-stable linear multistep method is 2. Among the A-stable linear multistep methods of order 2, the [[Trapezoidal rule (differential equations)|trapezoidal rule]] has the smallest error constant {{harv|Dahlquist|1963|loc=Thm 2.1 and 2.2}}.
 
== See also ==
== References ==
* {{citation | first1 = Francis | last1 = Bashforth | year = 1883 | title = An Attempt to test the Theories of Capillary Action by comparing the theoretical and measured forms of drops of fluid. With an explanation of the method of integration employed in constructing the tables which give the theoretical forms of such drops, by J. C. Adams | ___location = Cambridge }}.
* {{Citation | last1=Butcher | first1=John C. | author1-link = John C. Butcher | title=Numerical Methods for Ordinary Differential Equations | publisher=John Wiley | isbn=978-0-471-96758-3 | year=2003}}.
* {{Citation | last1=Dahlquist | first1=Germund | author1-link=Germund Dahlquist | title=Convergence and stability in the numerical integration of ordinary differential equations | year=1956 | journal=Mathematica Scandinavica | volume=4 | pages=33–53 | doi=10.7146/math.scand.a-10454 | doi-access=free }}.
* {{Citation | last1=Dahlquist | first1=Germund | author1-link=Germund Dahlquist | title=A special stability problem for linear multistep methods | doi=10.1007/BF01963532 | year=1963 | journal=BIT | issn=0006-3835 | volume=3 | pages=27–43| s2cid=120241743 }}.
* {{citation | first1 = Herman H. | last1 = Goldstine | author1-link = Herman Goldstine | year = 1977 | title = A History of Numerical Analysis from the 16th through the 19th Century | publisher = Springer-Verlag | ___location = New York | isbn = 978-0-387-90277-7 }}.
* {{citation | first1 = Ernst | last1 = Hairer | first2 = Syvert Paul | last2 = Nørsett | first3 = Gerhard | last3 = Wanner | year = 1993 | title = Solving ordinary differential equations I: Nonstiff problems | edition = 2nd | publisher = Springer Verlag | ___location = Berlin | isbn = 978-3-540-56670-0 }}.
* {{Citation | last1=Hairer | first1=Ernst | last2=Wanner | first2=Gerhard | title=Solving ordinary differential equations II: Stiff and differential-algebraic problems | publisher=[[Springer-Verlag]] | ___location=Berlin, New York | edition=2nd | isbn=978-3-540-60452-5 | year=1996}}.
* {{citation | first1 = Arieh | last1 = Iserles | year = 1996 | title = A First Course in the Numerical Analysis of Differential Equations | publisher = Cambridge University Press | bibcode = 1996fcna.book.....I | isbn = 978-0-521-55655-2 }}.
* {{citation | first1 = W. E. | last1 = Milne | year = 1926 | title = Numerical integration of ordinary differential equations | journal = American Mathematical Monthly | volume = 33 | issue = 9 | pages = 455–460 | doi = 10.2307/2299609 | jstor = 2299609 | publisher = Mathematical Association of America }}.
* {{citation | first1 = Forest R. | last1 = Moulton | author1-link = Forest Ray Moulton | year = 1926 | title = New methods in exterior ballistics | publisher = University of Chicago Press }}.
* {{citation |author-link1= Alfio Quarteroni | first1 = Alfio | last1 = Quarteroni | first2 = Riccardo | last2 = Sacco | first3 = Fausto | last3 = Saleri | year = 2000 | title = Matematica Numerica | publisher = Springer Verlag | isbn = 978-88-470-0077-3 }}.
* {{Citation | last1=Süli | first1=Endre | last2=Mayers | first2=David | title=An Introduction to Numerical Analysis | publisher=[[Cambridge University Press]] | isbn=0-521-00794-1 | year=2003}}.
 
== External links ==
* {{MathWorld | urlname= AdamsMethod| title= Adams Method }}
*[http://www.dotnumerics.com/NumericalLibraries/DifferentialEquations/ DotNumerics: Ordinary Differential Equations for C# and VB.NET] Initial-value problem for nonstiff and stiff ordinary differential equations (explicit Runge–Kutta, implicit Runge–Kutta, Gear's BDF and Adams–Moulton).
 
{{Numerical integrators}}
 
[[Category:Numerical differential equations]]
[[Category:Numerical analysis]]