In terms of mathematical optimization, dynamic programming usually refers to simplifying a decision by breaking it down into a sequence of decision steps over time. This is done by defining a sequence of '''value functions''' ''V''<sub>1</sub>, ''V''<sub>2</sub>, ..., ''V''<sub>''n''</sub> taking ''y'' as an argument representing the '''[[State variable|state]]''' of the system at times ''i'' from 1 to ''n''. The definition of ''V''<sub>''n''</sub>(''y'') is the value obtained in state ''y'' at the last time ''n''. The values ''V''<sub>''i''</sub> at earlier times ''i''&nbsp;=&nbsp;''n''&nbsp;&minus;1,&nbsp;''n''&nbsp;&minus;&nbsp;2,&nbsp;...,&nbsp;2,&nbsp;1 can be found by working backwards, using a [[Recursion|recursive]] relationship called the [[Bellman equation]]. For ''i''&nbsp;=&nbsp;2,&nbsp;...,&nbsp;''n'', ''V''<sub>''i''&minus;1</sub> at any state ''y'' is calculated from ''V''<sub>''i''</sub> by maximizing a simple function (usually the sum) of the gain from a decision at time ''i''&nbsp;&minus;&nbsp;1 and the function ''V''<sub>''i''</sub> at the new state of the system if this decision is made. Since ''V''<sub>''i''</sub> has already been calculated for the needed states, the above operation yields ''V''<sub>''i''&minus;1</sub> for those states. Finally, ''V''<sub>1</sub> at the initial state of the system is the value of the optimal solution. The optimal values of the decision variables can be recovered, one by one, by tracking back the calculations already performed.
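This backward recursion can be written out directly in code. The following Python sketch is illustrative only: the finite sets of states and decisions and the <code>gain</code>, <code>transition</code> and <code>terminal_value</code> functions are hypothetical problem data supplied by the modeller, not part of any standard library.
<syntaxhighlight lang="python">
# Illustrative sketch of backward induction over value functions V_1, ..., V_n.
# "states", "decisions", "gain", "transition" and "terminal_value" are
# hypothetical problem data assumed for this example.

def backward_induction(states, decisions, gain, transition, terminal_value, n):
    """Compute V_i(y) for i = n, n-1, ..., 1 and record the maximizing decisions."""
    V = {n: {y: terminal_value(y) for y in states}}   # V_n at the last time n
    policy = {}
    for i in range(n - 1, 0, -1):                     # i = n-1, n-2, ..., 1
        V[i], policy[i] = {}, {}
        for y in states:
            # Gain from a decision at time i plus the already-computed value
            # V_{i+1} at the state the decision leads to.
            best = max(decisions,
                       key=lambda d: gain(i, y, d) + V[i + 1][transition(i, y, d)])
            V[i][y] = gain(i, y, best) + V[i + 1][transition(i, y, best)]
            policy[i][y] = best
    return V, policy
</syntaxhighlight>
The optimal decisions can then be recovered by following <code>policy</code> forward from the initial state, mirroring the tracking-back step described above.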
 
=== Control theory ===
 
In [[control theory]], a typical problem is to find an admissible control <math>\mathbf{u}^{\ast}</math> which causes the system <math>\dot{\mathbf{x}}(t) = \mathbf{g} \left( \mathbf{x}(t), \mathbf{u}(t), t \right)</math> to follow an admissible trajectory <math>\mathbf{x}^{\ast}</math> on a continuous time interval <math>t_{0} \leq t \leq t_{1}</math> that minimizes a [[Loss function|cost function]]
:<math>J = b \left( \mathbf{x}(t_{1}), t_{1} \right) + \int_{t_{0}}^{t_{1}} f \left( \mathbf{x}(t), \mathbf{u}(t), t \right) \mathrm{d} t</math>
 
The solution to this problem is an optimal control law or policy <math>\mathbf{u}^{\ast} = h(\mathbf{x}(t), t)</math>, which produces an optimal trajectory <math>\mathbf{x}^{\ast}</math> and a [[cost-to-go function]] <math>J^{\ast}</math>. The latter obeys the fundamental equation of dynamic programming:
:<math>- J_{t}^{\ast} = \min_{\mathbf{u}} \left\{ f \left( \mathbf{x}(t), \mathbf{u}(t), t \right) + J_{x}^{\ast \mathsf{T}} \mathbf{g} \left( \mathbf{x}(t), \mathbf{u}(t), t \right) \right\}</math>
a [[partial differential equation]] known as the [[Hamilton–Jacobi–Bellman equation]], in which <math>J_{x}^{\ast} = \frac{\partial J^{\ast}}{\partial \mathbf{x}} = \left[ \frac{\partial J^{\ast}}{\partial x_{1}} ~~~~ \frac{\partial J^{\ast}}{\partial x_{2}} ~~~~ \dots ~~~~ \frac{\partial J^{\ast}}{\partial x_{n}} \right]^{\mathsf{T}}</math> and <math>J_{t}^{\ast} = \frac{\partial J^{\ast}}{\partial t}</math>. One finds the minimizing <math>\mathbf{u}</math> in terms of <math>t</math>, <math>\mathbf{x}</math>, and the unknown function <math>J_{x}^{\ast}</math> and then substitutes the result into the Hamilton–Jacobi–Bellman equation to get the partial differential equation to be solved with boundary condition <math>J \left( t_{1} \right) = b \left( \mathbf{x}(t_{1}), t_{1} \right)</math>.<ref>{{cite book |first1=M. I. |last1=Kamien |author-link=Morton Kamien |first2=N. L. |last2=Schwartz |author-link2=Nancy Schwartz |title=Dynamic Optimization: The Calculus of Variations and Optimal Control in Economics and Management |___location=New York |publisher=Elsevier |edition=Second |year=1991 |isbn=978-0-444-01609-6 |url=https://books.google.com/books?id=0IoGUn8wjDQC&pg=PA261 |page=261 }}</ref> In practice, this generally requires [[Numerical partial differential equations|numerical techniques]] for some discrete approximation to the exact optimization relationship.
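As a simple illustration (an example constructed here, not taken from the cited references), consider the scalar system <math>\dot{x} = u</math> with running cost <math>f = x^{2} + u^{2}</math>, no terminal cost, and the horizon taken to infinity so that <math>J_{t}^{\ast} = 0</math>. The Hamilton–Jacobi–Bellman equation then reduces to
:<math>0 = \min_{u} \left\{ x^{2} + u^{2} + J_{x}^{\ast} u \right\}.</math>
The inner minimization gives <math>u = -\tfrac{1}{2} J_{x}^{\ast}</math>; substituting back yields <math>0 = x^{2} - \tfrac{1}{4} \left( J_{x}^{\ast} \right)^{2}</math>, so that (taking the convex, nonnegative solution) <math>J^{\ast}(x) = x^{2}</math> and the optimal control law is <math>u^{\ast} = -x</math>.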
 
Alternatively, the continuous process can be approximated by a discrete system, which leads to the following recurrence relation, analogous to the Hamilton–Jacobi–Bellman equation:
 
:<math>J_{k}^{\ast} \left( \mathbf{x}_{n-k} \right) = \min_{\mathbf{u}_{n-k}} \left\{ \hat{f} \left( \mathbf{x}_{n-k}, \mathbf{u}_{n-k} \right) + J_{k-1}^{\ast} \left( \hat{\mathbf{g}} \left( \mathbf{x}_{n-k}, \mathbf{u}_{n-k} \right) \right) \right\}</math>
at the <math>k</math>-th stage of <math>n</math> equally spaced discrete time intervals, and where <math>\hat{f}</math> and <math>\hat{\mathbf{g}}</math> denote discrete approximations to <math>f</math> and <math>\mathbf{g}</math>. This functional equation is known as the [[Bellman equation]], which can be solved for an exact solution of the discrete approximation of the optimization equation.<ref>{{cite book |first=Donald E. |last=Kirk |title=Optimal Control Theory: An Introduction |___location=Englewood Cliffs, NJ |publisher=Prentice-Hall |year=1970 |isbn=978-0-13-638098-6 |pages=94–95 |url=https://books.google.com/books?id=fCh2SAtWIdwC&pg=PA94 }}</ref>
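This recurrence can be evaluated by backward iteration over a discretized state space. The Python sketch below is an illustration under stated assumptions: the state and control grids, the functions <code>f_hat</code> and <code>g_hat</code>, the terminal cost, and the projection of successor states onto the nearest grid point are all modelling choices made for the example.
<syntaxhighlight lang="python">
import numpy as np

# Sketch of the discrete recursion
#   J_k(x) = min_u { f_hat(x, u) + J_{k-1}(g_hat(x, u)) }
# over a finite state grid.  x_grid, u_grid, f_hat, g_hat and terminal_cost
# are hypothetical problem data; successor states are projected onto the
# nearest grid point, which is an additional assumption of this sketch.

def solve_discrete_dp(x_grid, u_grid, f_hat, g_hat, terminal_cost, n):
    x_grid = np.asarray(x_grid, dtype=float)
    J = np.array([terminal_cost(x) for x in x_grid])   # stage k = 0 (terminal)
    policy = []
    for k in range(1, n + 1):                           # stages k = 1, ..., n
        J_new = np.empty_like(J)
        u_new = np.empty_like(J)
        for i, x in enumerate(x_grid):
            # Immediate cost plus cost-to-go at the (grid-projected) successor state.
            costs = [f_hat(x, u) + J[int(np.argmin(np.abs(x_grid - g_hat(x, u))))]
                     for u in u_grid]
            best = int(np.argmin(costs))
            J_new[i], u_new[i] = costs[best], u_grid[best]
        J, policy = J_new, [u_new] + policy   # prepend so policy is in time order
    return J, policy
</syntaxhighlight>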
 
==== Example from economics: Ramsey's problem of optimal saving ====
{{See also|Ramsey–Cass–Koopmans model}}
In economics, the objective is generally to maximize (rather than minimize) some dynamic [[social welfare function]]. In Ramsey's problem, this function relates amounts of consumption to levels of [[utility]]. Loosely speaking, the planner faces the trade-off between contemporaneous consumption and future consumption (via investment in [[Physical capital|capital stock]] that is used in production), known as [[intertemporal choice]]. Future consumption is discounted at a constant rate <math>\beta \in (0,1)</math>. A discrete approximation to the transition equation of capital is given by
:<math>k_{t+1} = \hat{g} \left( k_{t}, c_{t} \right) = f(k_{t}) - c_{t}</math>
where <math>c</math> is consumption, <math>k</math> is capital, and <math>f</math> is a [[production function]] satisfying the [[Inada conditions]]. An initial capital stock <math>k_{0} > 0</math> is assumed.
 
Let <math>c_t</math> be consumption in period {{mvar|t}}, and assume consumption yields [[utility]] <math>u(c_t)=\ln(c_t)</math> as long as the consumer lives. Assume the consumer is impatient, so that he [[discounting|discounts]] future utility by a factor {{mvar|b}} each period, where <math>0<b<1</math>. Let <math>k_t</math> be [[capital (economics)|capital]] in period {{mvar|t}}. Assume initial capital is a given amount <math>k_0>0</math>, and suppose that this period's capital and consumption determine next period's capital as <math>k_{t+1}=Ak^a_t - c_t</math>, where {{mvar|A}} is a positive constant and <math>0<a<1</math>. Assume capital cannot be negative. Then the consumer's decision problem can be written as follows:
 
: <math>\max \sum_{t=0}^T b^t \ln(c_t)</math> subject to <math>k_{t+1}=Ak^a_t - c_t \geq 0</math> for all <math>t=0,1,2,\ldots,T</math>
 
Written this way, the problem looks complicated, because it involves solving for all the choice variables <math>c_0, c_1, c_2, \ldots , c_T</math>. (The capital <math>k_0</math> is not a choice variable—the consumer's initial capital is taken as given.)
 
The dynamic programming approach to solve this problem involves breaking it apart into a sequence of smaller decisions. To do so, we define a sequence of ''value functions'' <math>V_t(k)</math>, for <math>t=0,1,2,\ldots,T,T+1</math> which represent the value of having any amount of capital {{mvar|k}} at each time {{mvar|t}}. There is (by assumption) no utility from having capital after death, <math>V_{T+1}(k)=0</math>.
 
The value of any quantity of capital at any previous time can be calculated by [[backward induction]] using the [[Bellman equation]]. In this problem, for each <math>t=0,1,2,\ldots,T</math>, the Bellman equation is
 
: <math>V_t(k_t) \, = \, \max \left( \ln(c_t) + b V_{t+1}(k_{t+1}) \right)</math> subject to <math>k_{t+1}=Ak^a_t - c_t \geq 0</math>
This problem is much simpler than the one we wrote down before, because it involves only two decision variables, <math>c_t</math> and <math>k_{t+1}</math>. Intuitively, instead of choosing his whole lifetime plan at birth, the consumer can take things one step at a time. At time {{mvar|t}}, his current capital <math>k_t</math> is given, and he only needs to choose current consumption <math>c_t</math> and saving <math>k_{t+1}</math>.
 
To actually solve this problem, we work backwards. For simplicity, the current level of capital is denoted as {{mvar|k}}. <math>V_{T+1}(k)</math> is already known, so using the Bellman equation once, we can calculate <math>V_T(k)</math>, and so on until we get to <math>V_0(k)</math>, which is the ''value'' of the initial decision problem for the whole lifetime. In other words, once we know <math>V_{T-j+1}(k)</math>, we can calculate <math>V_{T-j}(k)</math>, which is the maximum of <math>\ln(c_{T-j}) + b V_{T-j+1}(Ak^a-c_{T-j})</math>, where <math>c_{T-j}</math> is the choice variable and <math>Ak^a-c_{T-j} \ge 0</math>.
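This backward pass can also be carried out numerically. The sketch below discretizes capital and consumption on grids and interpolates the next period's value function; the parameter values and grid sizes are illustrative assumptions, not part of the original problem statement.
<syntaxhighlight lang="python">
import numpy as np

# Numerical backward induction for the consumption problem.
# A, a, b, T and the grid sizes are illustrative assumptions.
A, a, b, T = 1.0, 0.5, 0.9, 10
k_grid = np.linspace(1e-3, 2.0, 400)       # capital grid
V = np.zeros_like(k_grid)                  # V_{T+1}(k) = 0: no utility after death

for t in range(T, -1, -1):                 # t = T, T-1, ..., 0
    V_next = V.copy()
    V = np.empty_like(k_grid)
    c_policy = np.empty_like(k_grid)
    for i, k in enumerate(k_grid):
        resources = A * k**a
        # Feasible consumption: 0 < c_t <= A k^a, so that k_{t+1} >= 0.
        c_grid = np.linspace(1e-6, resources, 200)
        k_next = resources - c_grid
        values = np.log(c_grid) + b * np.interp(k_next, k_grid, V_next)
        j = int(np.argmax(values))
        V[i], c_policy[i] = values[j], c_grid[j]

# After the loop, V approximates V_0(k) and c_policy approximates c_0(k).
</syntaxhighlight>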
 
Working backwards, it can be shown that the value function at time <math>t=T-j</math> is
 
: <math>V_{T-j}(k) \, = \, a \sum_{i=0}^j a^ib^i \ln k + v_{T-j}</math>
 
where each <math>v_{T-j}</math> is a constant, and the optimal amount to consume at time <math>t=T-j</math> is
 
: <math>c_{T-j}(k) \, = \, \frac{1}{\sum_{i=0}^j a^ib^i} Ak^a</math>
 
which can be simplified to
 
: <math>\begin{align}
c_{T}(k) & = Ak^a\\
c_{T-1}(k) & = \frac{Ak^a}{1+ab}\\
c_{T-2}(k) & = \frac{Ak^a}{1+ab+a^2b^2}\\
&\dots\\
c_2(k) & = \frac{Ak^a}{1+ab+a^2b^2+\ldots+a^{T-2}b^{T-2}}\\
c_1(k) & = \frac{Ak^a}{1+ab+a^2b^2+\ldots+a^{T-2}b^{T-2}+a^{T-1}b^{T-1}}\\
c_0(k) & = \frac{Ak^a}{1+ab+a^2b^2+\ldots+a^{T-2}b^{T-2}+a^{T-1}b^{T-1}+a^Tb^T}
\end{align}</math>
 
We see that it is optimal to consume a larger fraction of current wealth as one gets older, finally consuming all remaining wealth in period {{mvar|T}}, the last period of life.
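As a quick numerical check of this pattern (with illustrative parameter values <math>a = 0.5</math> and <math>b = 0.9</math>, which are assumptions), the fraction of output consumed, <math>1/\sum_{i=0}^{j}(ab)^i</math>, shrinks as the number of remaining periods {{mvar|j}} grows and equals 1 in the final period:
<syntaxhighlight lang="python">
# Fraction of output A*k^a consumed j periods before the end of life,
# 1 / sum_{i=0}^{j} (a*b)^i, for illustrative parameters a = 0.5, b = 0.9.
a, b = 0.5, 0.9
for j in range(6):
    frac = 1.0 / sum((a * b) ** i for i in range(j + 1))
    print(f"j = {j}: consume {frac:.3f} of A*k^a")
</syntaxhighlight>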