 
The backward passes and forward passes are iterated until convergence.
If the Hessians <math>Q_{\mathbf{x}\mathbf{x}}, Q_{\mathbf{u}\mathbf{u}}, Q_{\mathbf{u}\mathbf{x}}, Q_{\mathbf{x}\mathbf{u}}</math> are replaced by their Gauss–Newton approximation, the method reduces to the iterative Linear Quadratic Regulator (iLQR).<ref>{{Cite conference
| pages = 1–7
| last = Baumgärtner
| first = K.
| title = A Unified Local Convergence Analysis of Differential Dynamic Programming, Direct Single Shooting, and Direct Multiple Shooting
| conference = 2023 European Control Conference (ECC)
| year = 2023
| doi = 10.23919/ECC57647.2023.10178367
| url = https://ieeexplore.ieee.org/document/10178367
| url-access = subscription
}}</ref>
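
A minimal sketch of one stage of the resulting backward pass is shown below (illustrative Python/NumPy; the variable names and the added diagonal regularization term are assumptions of this sketch, not taken from the cited reference):

<syntaxhighlight lang="python">
import numpy as np

def ilqr_backward_step(fx, fu, lx, lu, lxx, luu, lux, Vx, Vxx, reg=1e-6):
    """One stage of the iLQR backward pass.

    fx, fu           -- dynamics Jacobians at this stage
    lx, lu, lxx, ... -- cost derivatives at this stage
    Vx, Vxx          -- value-function gradient/Hessian at the next stage
    """
    # Gauss-Newton (iLQR) approximation: the contractions with the second
    # derivatives of the dynamics, which full DDP would add to Q_xx, Q_uu,
    # and Q_ux, are dropped.
    Qx = lx + fx.T @ Vx
    Qu = lu + fu.T @ Vx
    Qxx = lxx + fx.T @ Vxx @ fx
    Quu = luu + fu.T @ Vxx @ fu + reg * np.eye(fu.shape[1])
    Qux = lux + fu.T @ Vxx @ fx

    # Minimizing the quadratic model over the control perturbation gives
    # the feedforward term k and the feedback gain K.
    Quu_inv = np.linalg.inv(Quu)
    k = -Quu_inv @ Qu
    K = -Quu_inv @ Qux

    # Propagate the quadratic value-function model one stage backward.
    Vx_new = Qx + K.T @ Quu @ k + K.T @ Qu + Qux.T @ k
    Vxx_new = Qxx + K.T @ Quu @ K + K.T @ Qux + Qux.T @ K
    return k, K, Vx_new, Vxx_new
</syntaxhighlight>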
 
== Regularization and line-search ==
 
== Monte Carlo version ==
Sampled differential dynamic programming (SaDDP) is a Monte Carlo variant of differential dynamic programming.<ref>{{Cite conference |title=Sampled differential dynamic programming |book-title=2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) |date=2016 |language=en-US |doi=10.1109/IROS.2016.7759229 |s2cid=1338737}}</ref><ref>{{Cite conference |last1=Rajamäki |first1=Joose |last2=Hämäläinen |first2=Perttu |url=https://ieeexplore.ieee.org/document/8430799 |title=Regularizing Sampled Differential Dynamic Programming |conference=2018 Annual American Control Conference (ACC) |date=June 2018 |pages=2182–2189 |doi=10.23919/ACC.2018.8430799 |s2cid=243932441 |language=en-US |access-date=2018-10-19 |url-access=subscription}}</ref><ref>{{Cite book |first=Joose |last=Rajamäki |date=2018 |title=Random Search Algorithms for Optimal Control |url=http://urn.fi/URN:ISBN:978-952-60-8156-4 |language=en |issn=1799-4942 |isbn=978-952-60-8156-4 |publisher=Aalto University}}</ref> It is based on treating the quadratic cost of differential dynamic programming as the energy of a [[Boltzmann distribution]]. This way the quantities of DDP can be matched to the statistics of a [[Multivariate normal distribution|multidimensional normal distribution]], and these statistics can be recomputed from sampled trajectories without differentiation.
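
A minimal sketch of the sample-based matching step is shown below (illustrative Python/NumPy; the function name, the Boltzmann temperature, and the per-stage sampling setup are assumptions of this sketch):

<syntaxhighlight lang="python">
import numpy as np

def gaussian_from_samples(controls, costs, temperature=1.0):
    """Fit a Gaussian to cost-weighted control samples at one stage.

    controls    -- array of shape (N, m): sampled controls
    costs       -- array of shape (N,): trajectory costs of the samples
    temperature -- Boltzmann temperature of the cost-to-weight mapping
    """
    # Boltzmann weights exp(-J / T), shifted for numerical stability.
    w = np.exp(-(costs - costs.min()) / temperature)
    w /= w.sum()

    # The weighted mean and covariance play the role of the minimizer and
    # curvature of the quadratic cost model; no derivatives of the
    # dynamics or the cost are required.
    mean = w @ controls
    centered = controls - mean
    cov = (centered * w[:, None]).T @ centered
    return mean, cov
</syntaxhighlight>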
 
Sampled differential dynamic programming has been extended to Path Integral Policy Improvement with Differential Dynamic Programming.<ref>{{Cite book|last1=Lefebvre|first1=Tom|last2=Crevecoeur|first2=Guillaume|title=2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM) |chapter=Path Integral Policy Improvement with Differential Dynamic Programming |date=July 2019|chapter-url=https://ieeexplore.ieee.org/document/8868359|pages=739–745|doi=10.1109/AIM.2019.8868359|hdl=1854/LU-8623968|isbn=978-1-7281-2493-3|s2cid=204816072|url=https://biblio.ugent.be/publication/8623968 |hdl-access=free}}</ref> This creates a link between differential dynamic programming and path integral control,<ref>{{Cite book|last1=Theodorou|first1=Evangelos|last2=Buchli|first2=Jonas|last3=Schaal|first3=Stefan|title=2010 IEEE International Conference on Robotics and Automation |chapter=Reinforcement learning of motor skills in high dimensions: A path integral approach |date=May 2010|chapter-url=https://ieeexplore.ieee.org/document/5509336|pages=2397–2403|doi=10.1109/ROBOT.2010.5509336|isbn=978-1-4244-5038-1|s2cid=15116370}}</ref> which is a framework of stochastic optimal control.
 
== Constrained problems ==
Interior Point Differential Dynamic Programming (IPDDP) is an [[interior-point method]] generalization of DDP that can address optimal control problems with nonlinear state and input constraints.<ref>{{cite journal |last1=Pavlov |first1=Andrei |last2=Shames |first2=Iman |last3=Manzie |first3=Chris |date=2021 |title=Interior Point Differential Dynamic Programming |journal=IEEE Transactions on Control Systems Technology |volume=29 |issue=6 |page=2720 |doi=10.1109/TCST.2021.3049416 |arxiv=2004.12710 |bibcode=2021ITCST..29.2720P}}</ref>
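
One way to illustrate the interior-point idea is to fold the inequality constraints into the stage cost through a logarithmic barrier whose weight is driven toward zero over the outer iterations; the sketch below uses this plain barrier formulation (illustrative Python/NumPy) and is not the primal–dual scheme developed in the cited paper:

<syntaxhighlight lang="python">
import numpy as np

def barrier_stage_cost(stage_cost, constraints, mu):
    """Wrap a stage cost with a log-barrier on inequality constraints.

    stage_cost  -- callable l(x, u) -> float
    constraints -- callable c(x, u) -> array; feasible when c(x, u) < 0
    mu          -- barrier weight, decreased over the outer iterations
    """
    def wrapped(x, u):
        c = constraints(x, u)
        if np.any(c >= 0):
            return np.inf  # the barrier is only defined in the interior
        return stage_cost(x, u) - mu * np.sum(np.log(-c))
    return wrapped
</syntaxhighlight>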
 
== See also ==
* [http://www.ros.org/wiki/color_DDP A Python implementation of DDP]
* [http://www.mathworks.com/matlabcentral/fileexchange/52069-ilqg-ddp-trajectory-optimization A MATLAB implementation of DDP]
* The open-source software framework [https://github.com/acados/acados acados] provides an efficient and embeddable implementation of DDP.
 
<!-- Categories -->