'''Linear multistep methods''' are used for the [[numerical ordinary differential equations|numerical solution of ordinary differential equations]]. Conceptually, a numerical method starts from an initial point and then takes a short '''step''' forward in time to find the next solution point. The process continues with subsequent steps to map out the solution. Single-step methods (such as [[Euler's method]]) refer to only one previous point and its derivative to determine the current value. Methods such as [[Runge–Kutta methods|Runge–Kutta]] take some intermediate steps (for example, a half-step) to obtain a higher order method, but then discard all previous information before taking a second step. Multistep methods attempt to gain efficiency by keeping and using the information from previous steps rather than discarding it. Consequently, multistep methods refer to several previous points and derivative values. In the case of ''linear'' multistep methods, a [[linear combination]] of the previous points and derivative values is used.
 
== Definitions ==
Numerical methods for ordinary differential equations approximate solutions to initial value problems of the form
: <math> y' = f(t,y), \quad y(t_0) = y_0. </math>
 
The result is approximations for the value of <math> y(t) </math> at discrete times <math> t_i </math>:
: <math> y_i \approx y(t_i) \quad\text{where}\quad t_i = t_0 + i h, </math>
where <math> h </math> is the time step (sometimes referred to as <math> \Delta t </math>).
 
Multistep methods use information from the previous <math> s </math> steps to calculate the next value. In particular, a ''linear'' multistep method uses a linear combination of <math> y_i </math> and <math> f(t_i,y_i) </math> to calculate the value of <math> y </math> for the desired current step. Thus, a linear multistep method is a method of the form
: <math> \begin{align}
& y_{n+s} + a_{s-1} \cdot y_{n+s-1} + a_{s-2} \cdot y_{n+s-2} + \cdots + a_0 \cdot y_n \\
& \qquad {} = h \cdot \bigl( b_s \cdot f(t_{n+s},y_{n+s}) + b_{s-1} \cdot f(t_{n+s-1},y_{n+s-1}) + \cdots + b_0 \cdot f(t_n,y_n) \bigr).
\end{align} </math>
The coefficients <math> a_0, \ldots, a_{s-1} </math> and <math> b_0, \ldots, b_s </math> determine the method. The designer of the method chooses the coefficients, balancing the need to get a good approximation to the true solution against the desire to get a method that is easy to apply. Often, many coefficients are zero to simplify the method.
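As an illustration of the general form above (not part of the original article), the following Python sketch implements the two-step Adams–Bashforth method, which is the linear multistep method with <math> s = 2 </math>, <math> a_1 = -1 </math>, <math> a_0 = 0 </math>, <math> b_2 = 0 </math>, <math> b_1 = 3/2 </math>, and <math> b_0 = -1/2 </math>. The function name and the Euler bootstrap for the first step are choices made here for the example:

```python
import math

def adams_bashforth2(f, t0, y0, h, n_steps):
    """Two-step Adams-Bashforth: an explicit linear multistep method with
    s = 2 and coefficients a_1 = -1, a_0 = 0, b_2 = 0, b_1 = 3/2, b_0 = -1/2:

        y_{n+2} = y_{n+1} + h * (3/2 f(t_{n+1}, y_{n+1}) - 1/2 f(t_n, y_n)).

    A multistep method needs s starting values; here the second starting
    value is bootstrapped with a single Euler step."""
    ts = [t0, t0 + h]
    ys = [y0, y0 + h * f(t0, y0)]  # Euler step supplies y_1
    for n in range(n_steps - 1):
        y_next = ys[n + 1] + h * (1.5 * f(ts[n + 1], ys[n + 1])
                                  - 0.5 * f(ts[n], ys[n]))
        ts.append(ts[-1] + h)
        ys.append(y_next)
    return ts, ys

# Solve y' = y, y(0) = 1 up to t = 1; the exact solution is e^t.
ts, ys = adams_bashforth2(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```

Note that all previous values enter only through the fixed linear combination of <math> y_i </math> and <math> f(t_i, y_i) </math>; this is what makes the method ''linear''.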
 
One can distinguish between [[explicit and implicit methods]]. If <math> b_s = 0 </math>, then the method is called "explicit", since the formula can directly compute <math> y_{n+s} </math>. If <math> b_s \ne 0 </math> then the method is called "implicit", since the value of <math> y_{n+s} </math> depends on the value of <math> f(t_{n+s}, y_{n+s}) </math>, and the equation must be solved for <math> y_{n+s} </math>. [[Iterative methods]] such as [[Newton's method]] are often used to solve the implicit formula.
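To make the explicit/implicit distinction concrete, here is a minimal sketch (an illustration, not from the article) of one step of the backward Euler method, the implicit method with <math> s = 1 </math> and <math> b_1 = 1 </math>. The implicit equation is solved by simple fixed-point iteration rather than Newton's method, which suffices when <math> h </math> times the Lipschitz constant of <math> f </math> is small; the function name and tolerances are choices made for the example:

```python
def backward_euler_step(f, t_next, y_curr, h, tol=1e-12, max_iter=100):
    """One step of backward Euler: solve the implicit equation

        y_next = y_curr + h * f(t_next, y_next)

    for y_next by fixed-point iteration, starting from the current value."""
    y_next = y_curr  # initial guess
    for _ in range(max_iter):
        y_new = y_curr + h * f(t_next, y_next)
        if abs(y_new - y_next) < tol:
            return y_new
        y_next = y_new
    return y_next

# One step of y' = -5y from y(0) = 1 with h = 0.1.
# The fixed point satisfies y1 = 1 - 0.5*y1, i.e. y1 = 2/3.
y1 = backward_euler_step(lambda t, y: -5.0 * y, 0.1, 1.0, 0.1)
```

An explicit method would instead compute <math> y_{n+s} </math> directly from the right-hand side, with no iteration required.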
 
Sometimes an explicit multistep method is used to "predict" the value of <math> y_{n+s} </math>. That value is then used in an implicit formula to "correct" the value. The result is a [[predictor–corrector method]].
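The predict-then-correct pattern can be sketched as follows (an illustrative example, not from the article), pairing an explicit Euler predictor with one application of the implicit trapezoidal-rule formula as corrector, evaluated at the predicted value:

```python
def predictor_corrector_step(f, t, y, h):
    """One predictor-corrector step: predict with explicit Euler, then
    correct with the trapezoidal rule, using the predicted value in
    place of the unknown y_{n+1} on the right-hand side."""
    y_pred = y + h * f(t, y)                          # predict (explicit)
    return y + h / 2 * (f(t, y) + f(t + h, y_pred))   # correct (implicit formula)

# One step of y' = y from y(0) = 1 with h = 0.1:
# predictor gives 1.1, corrector gives 1 + 0.05 * (1 + 1.1) = 1.105.
y1 = predictor_corrector_step(lambda t, y: y, 0.0, 1.0, 0.1)
```

Using the prediction inside the implicit formula avoids solving a nonlinear equation at each step, at the cost of some accuracy compared with solving the corrector exactly.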
 
==Examples==