{{Short description|Algorithm for trajectory optimization}}
'''Differential dynamic programming''' ('''DDP''') is an [[optimal control]] algorithm of the [[trajectory optimization]] class. The algorithm was introduced in 1966 by [[David Mayne|Mayne]]<ref>{{cite journal
| volume = 3
| issue = 1
| pages = 85–95
| last = Mayne
| first = D. Q.
| author-link=David Mayne
| title = A second-order gradient method of optimizing non-linear discrete time systems
| journal = International Journal of Control
| year = 1966
| doi = 10.1080/00207176608921369
}}</ref> and subsequently analysed in Jacobson and Mayne's eponymous book.<ref>{{cite book
| last1 = Jacobson | first1 = D. H.
| last2 = Mayne | first2 = D. Q.
| title = Differential Dynamic Programming
| publisher = Elsevier
| location = New York
| year = 1970
}}</ref> The algorithm uses locally-quadratic models of the dynamics and cost functions, and displays quadratic convergence. It is closely related to Pantoja's step-wise [[Newton's method]].<ref>{{cite journal
| last = Pantoja | first = J. F. A. de O.
| title = Differential dynamic programming and Newton's method
| journal = International Journal of Control
| volume = 47
| issue = 5
| pages = 1539–1553
| year = 1988
| doi = 10.1080/00207178808906114
| issn = 0020-7179
}}</ref><ref>{{cite report
| last = Liao
| first = L. Z.
|author2=C. A. Shoemaker | author2-link = Christine Shoemaker
| title = Advantages of differential dynamic programming over Newton's method for discrete-time optimal control problems
| publisher = Cornell University
| year = 1992
| hdl = 1813/5474 |hdl-access=free
}}</ref>
== Differential dynamic programming ==
DDP proceeds by iteratively performing a backward pass on the nominal trajectory to generate a new control sequence, and then a forward pass to compute and evaluate a new nominal trajectory. We begin with the backward pass. If
:<math>\ell(\mathbf{x},\mathbf{u}) + V(\mathbf{f}(\mathbf{x},\mathbf{u}),i+1)</math>
is the argument of the <math>\min[\cdot]</math> operator in {{EquationNote|2|Eq. 2}}, let <math>Q</math> be the variation of this quantity around the <math>i</math>-th <math>(\mathbf{x},\mathbf{u})</math> pair:
:<math>\begin{align}Q(\delta\mathbf{x},\delta\mathbf{u})\equiv &\ell(\mathbf{x}+\delta\mathbf{x},\mathbf{u}+\delta\mathbf{u})&&{}+V(\mathbf{f}(\mathbf{x}+\delta\mathbf{x},\mathbf{u}+\delta\mathbf{u}),i+1)\\
-{}&\ell(\mathbf{x},\mathbf{u})&&{}-V(\mathbf{f}(\mathbf{x},\mathbf{u}),i+1)\end{align}</math>
A minimax variant of DDP has been applied to biped walking.<ref>{{cite conference
| last = Morimoto
| first = J.
| author2 = G. Zeglin
| author3 = C. G. Atkeson
| title = Minimax differential dynamic programming: Application to a biped walking robot
| conference = IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
| date = 2003
}}</ref>
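Expanding <math>Q</math> to second order around the nominal pair yields coefficients built from the derivatives of <math>\ell</math> and <math>\mathbf{f}</math> and from the next-step value derivatives <math>V_\mathbf{x}, V_{\mathbf{x}\mathbf{x}}</math>. The following is a minimal numerical sketch of these standard coefficient formulas; the function and argument names are illustrative, and the terms contracting <math>V_\mathbf{x}</math> with the second derivatives of the dynamics are omitted (the Gauss–Newton/iLQR approximation discussed below):

<syntaxhighlight lang="python">
import numpy as np

def q_expansion(l_x, l_u, l_xx, l_uu, l_ux, f_x, f_u, V_x, V_xx):
    """Second-order expansion coefficients of Q around the nominal
    (x, u) pair.  V_x, V_xx are the value-function derivatives at the
    next time step; the terms contracting V_x with the second
    derivatives of the dynamics are dropped (Gauss-Newton / iLQR)."""
    Q_x  = l_x  + f_x.T @ V_x
    Q_u  = l_u  + f_u.T @ V_x
    Q_xx = l_xx + f_x.T @ V_xx @ f_x
    Q_ux = l_ux + f_u.T @ V_xx @ f_x
    Q_uu = l_uu + f_u.T @ V_xx @ f_u
    return Q_x, Q_u, Q_xx, Q_ux, Q_uu
</syntaxhighlight>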
Minimizing the quadratic model over <math>\delta\mathbf{u}</math> gives the open-loop modification <math>\mathbf{k} = -Q_{\mathbf{u}\mathbf{u}}^{-1}Q_{\mathbf{u}}</math> and the feedback gain <math>\mathbf{K} = -Q_{\mathbf{u}\mathbf{u}}^{-1}Q_{\mathbf{u}\mathbf{x}}</math>; substituting the minimizer back into the quadratic model yields the expected cost reduction and the value-function derivatives at time <math>i</math>:
:<math>
\begin{alignat}{2}
\Delta V(i) &= &{} -\tfrac{1}{2}Q_{\mathbf{u}} Q_{\mathbf{u}\mathbf{u}}^{-1}Q_{\mathbf{u}}\\
V_\mathbf{x}(i) &= Q_\mathbf{x} & {}- Q_{\mathbf{x}\mathbf{u}} Q_{\mathbf{u}\mathbf{u}}^{-1}Q_{\mathbf{u}}\\
V_{\mathbf{x}\mathbf{x}}(i) &= Q_{\mathbf{x}\mathbf{x}} &{} - Q_{\mathbf{x}\mathbf{u}}Q_{\mathbf{u}\mathbf{u}}^{-1}Q_{\mathbf{u}\mathbf{x}}.
\end{alignat}
</math>
The backward passes and forward passes are iterated until convergence.
If the Hessians <math>Q_{\mathbf{x}\mathbf{x}}, Q_{\mathbf{u}\mathbf{u}}, Q_{\mathbf{u}\mathbf{x}}, Q_{\mathbf{x}\mathbf{u}}</math> are replaced by their Gauss–Newton approximation (dropping the terms that involve second derivatives of the dynamics <math>\mathbf{f}</math>), the method reduces to the iterative Linear Quadratic Regulator (iLQR).<ref>{{Cite conference
| title = A Unified Local Convergence Analysis of Differential Dynamic Programming, Direct Single Shooting, and Direct Multiple Shooting
| conference = 2023 European Control Conference (ECC)
| doi = 10.23919/ECC57647.2023.10178367
| url = https://ieeexplore.ieee.org/document/10178367
| url-access = subscription
}}</ref>▼
== Regularization and line-search ==
Differential dynamic programming is a second-order algorithm like [[Newton's method]]. It therefore takes large steps toward the minimum and often requires [[regularization (mathematics)|regularization]] and/or [[line-search]] to achieve convergence.<ref>
{{Cite journal |last=Liao |first=L. Z. |author2=C. A. Shoemaker |author2-link=Christine Shoemaker |year=1991 |title=Convergence in unconstrained discrete-time differential dynamic programming |journal=IEEE Transactions on Automatic Control |volume=36 |issue=6 |page=692 |doi=10.1109/9.86943}}
</ref><ref>{{cite thesis
| publisher = Hebrew University
| last = Tassa
| first = Y.
| title = Theory and Implementation of Biomimetic Motor Controllers
| date = 2011
| url = http://icnc.huji.ac.il/phd/theses/files/YuvalTassa.pdf
| access-date = 2012-02-27
| archive-url = https://web.archive.org/web/20160304023026/http://icnc.huji.ac.il/phd/theses/files/YuvalTassa.pdf
| archive-date = 2016-03-04
}}</ref> Regularization in the DDP context means ensuring that the <math>Q_{\mathbf{u}\mathbf{u}}</math> matrix in {{EquationNote|4|Eq. 4}} is [[positive definite matrix|positive definite]]. Line-search in DDP amounts to scaling the open-loop control modification <math>\mathbf{k}</math> by some <math>0<\alpha<1</math>.
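A minimal sketch of both devices follows, assuming a Levenberg–Marquardt-style diagonal shift for regularization and a simple backtracking schedule with a cost-decrease acceptance test; the parameter values and the <code>rollout</code> callback are illustrative choices, not part of the original formulation:

<syntaxhighlight lang="python">
import numpy as np

def regularize(Q_uu, mu=1e-6):
    """Shift Q_uu by mu*I, increasing mu until the Cholesky
    factorization succeeds, i.e. until Q_uu is positive definite."""
    n = Q_uu.shape[0]
    while True:
        try:
            np.linalg.cholesky(Q_uu + mu * np.eye(n))
            return Q_uu + mu * np.eye(n)
        except np.linalg.LinAlgError:
            mu *= 10.0

def line_search(rollout, current_cost, alphas=(1.0, 0.5, 0.25, 0.1)):
    """Backtracking on the step size: rollout(alpha) runs a forward
    pass with the open-loop term k scaled by alpha and returns the new
    trajectory and its total cost; accept the first cost decrease."""
    for alpha in alphas:
        xs, us, cost = rollout(alpha)
        if cost < current_cost:
            return xs, us, cost
    return None  # no improvement: raise mu and redo the backward pass
</syntaxhighlight>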
== Monte Carlo version ==
Sampled differential dynamic programming (SaDDP) is a Monte Carlo variant of differential dynamic programming that estimates the local quadratic models from sampled trajectories rather than from analytic derivatives of the dynamics.<ref>{{cite conference
| last1 = Rajamäki
| first1 = Joose
| last2 = Hämäläinen
| first2 = Perttu
| title = Sampled differential dynamic programming
| conference = 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
| year = 2016
}}</ref>
Sampled differential dynamic programming has been extended to Path Integral Policy Improvement with Differential Dynamic Programming.<ref>{{Cite book|last1=Lefebvre|first1=Tom|last2=Crevecoeur|first2=Guillaume|title=2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM) |chapter=Path Integral Policy Improvement with Differential Dynamic Programming |date=July 2019|chapter-url=https://ieeexplore.ieee.org/document/8868359|pages=739–745|doi=10.1109/AIM.2019.8868359|hdl=1854/LU-8623968|isbn=978-1-7281-2493-3|s2cid=204816072|url=https://biblio.ugent.be/publication/8623968 |hdl-access=free}}</ref> This creates a link between differential dynamic programming and path integral control,<ref>{{Cite book|last1=Theodorou|first1=Evangelos|last2=Buchli|first2=Jonas|last3=Schaal|first3=Stefan|title=2010 IEEE International Conference on Robotics and Automation |chapter=Reinforcement learning of motor skills in high dimensions: A path integral approach |date=May 2010|pages=2397–2403|doi=10.1109/ROBOT.2010.5509336|isbn=978-1-4244-5038-1|s2cid=15116370}}</ref> which is a framework of stochastic optimal control.
== Constrained problems ==
Interior point differential dynamic programming (IPDDP) is an [[interior-point method]] generalization of DDP that can address the optimal control problem with nonlinear state and input constraints.<ref>{{cite journal |last1=Pavlov |first1=Andrei|last2=Shames|first2=Iman| last3=Manzie|first3=Chris|date=2020 |title=Interior Point Differential Dynamic Programming |journal=IEEE Transactions on Control Systems Technology |volume=29 |issue=6 |page=2720 |doi=10.1109/TCST.2021.3049416 |arxiv=2004.12710 |bibcode=2021ITCST..29.2720P }}</ref>
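The full algorithm applies a primal-dual interior-point treatment stage-wise; a much simpler way to see how inequality constraints can enter a DDP-style method is to fold them into the stage cost with a logarithmic [[barrier function]]. The sketch below is illustrative only and is not the algorithm of the cited paper:

<syntaxhighlight lang="python">
import numpy as np

def barrier_cost(l, c, t=10.0):
    """Augment a stage cost l(x, u) with a log-barrier on the
    constraints c(x, u) <= 0 (elementwise).  As t grows, minimizers
    of the augmented cost approach the constrained optimum."""
    def l_bar(x, u):
        cv = c(x, u)
        if np.any(cv >= 0):
            return np.inf            # infeasible point
        return l(x, u) - np.sum(np.log(-cv)) / t
    return l_bar
</syntaxhighlight>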
== See also ==
== External links ==
* [http://www.mathworks.com/matlabcentral/fileexchange/52069-ilqg-ddp-trajectory-optimization A MATLAB implementation of DDP]
* The open-source software framework [https://github.com/acados/acados acados] provides an efficient and embeddable implementation of DDP.
<!--- Categories --->▼
▲<!--- Categories --->
[[Category:Dynamic programming]]