}}
</ref> Regularization in the DDP context means ensuring that the <math>Q_{\mathbf{u}\mathbf{u}}</math> matrix in {{EquationNote|4|Eq. 4}} is [[positive definite matrix|positive definite]]. Line-search in DDP amounts to scaling the open-loop control modification <math>\mathbf{k}</math> by some <math>0<\alpha<1</math>.
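A minimal sketch of these two devices, assuming NumPy and per-time-step matrices <math>Q_{\mathbf{u}\mathbf{u}}</math>, <math>Q_{\mathbf{u}}</math> and <math>Q_{\mathbf{u}\mathbf{x}}</math> already computed in the backward pass (all variable and function names here are illustrative, not part of any standard library):

<syntaxhighlight lang="python">
import numpy as np

def regularized_gains(Q_uu, Q_u, Q_ux, mu=1e-6):
    """Shift Q_uu by mu*I (a Levenberg-Marquardt-style regularization)
    until it is positive definite, then solve for the open-loop
    modification k and the feedback gain K."""
    n = Q_uu.shape[0]
    while True:
        try:
            np.linalg.cholesky(Q_uu + mu * np.eye(n))  # succeeds iff positive definite
            break
        except np.linalg.LinAlgError:
            mu *= 10.0  # not positive definite yet: increase the shift and retry
    Q_uu_reg = Q_uu + mu * np.eye(n)
    k = -np.linalg.solve(Q_uu_reg, Q_u)
    K = -np.linalg.solve(Q_uu_reg, Q_ux)
    return k, K

def forward_pass(x0, X, U, k, K, f, alpha):
    """Roll out the dynamics f(x, u) with the open-loop control
    modification scaled by the line-search parameter alpha."""
    X_new, U_new = [x0], []
    for t in range(len(U)):
        u = U[t] + alpha * k[t] + K[t] @ (X_new[t] - X[t])
        U_new.append(u)
        X_new.append(f(X_new[t], u))
    return X_new, U_new
</syntaxhighlight>

In a complete implementation the forward pass would be repeated with decreasing <math>\alpha</math> (for example, halving it) until the rollout achieves a sufficient decrease of the total cost.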
 
== Monte Carlo version ==
Sampled differential dynamic programming (SaDDP) is a Monte Carlo variant of differential dynamic programming.<ref>{{Cite web|url=https://ieeexplore.ieee.org/document/7759229|title=Sampled differential dynamic programming - IEEE Conference Publication|website=ieeexplore.ieee.org|language=en-US|access-date=2018-10-19}}</ref><ref>{{Cite web|url=https://ieeexplore.ieee.org/document/8430799|title=Regularizing Sampled Differential Dynamic Programming - IEEE Conference Publication|website=ieeexplore.ieee.org|language=en-US|access-date=2018-10-19}}</ref><ref>{{Cite journal|last=Rajamäki|first=Joose|date=2018|title=Random Search Algorithms for Optimal Control|url=http://urn.fi/URN:ISBN:978-952-60-8156-4|language=en|issn=1799-4942}}</ref> It is based on treating the quadratic cost of differential dynamic programming as the energy of a [[Boltzmann distribution]]. In this way the quantities of DDP can be matched to the statistics of a [[multivariate normal distribution]], and those statistics can be estimated from sampled trajectories without differentiation.
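The following sketch illustrates the underlying idea only, not the published SaDDP algorithm: sampled controls are weighted by Boltzmann weights <math>\exp(-\text{cost})</math>, and the weighted mean and covariance then play the roles of the quantities that DDP would otherwise obtain by differentiating the cost. All names are illustrative:

<syntaxhighlight lang="python">
import numpy as np

def boltzmann_statistics(controls, costs):
    """Estimate the mean and covariance of the Boltzmann distribution
    exp(-cost) from sampled controls (shape (N, m)) and their scalar
    costs (shape (N,)), without differentiating the cost function."""
    w = np.exp(-(costs - costs.min()))  # subtract the minimum for numerical stability
    w /= w.sum()                        # normalized Boltzmann weights
    mean = w @ controls                 # weighted sample mean
    diff = controls - mean
    cov = (w[:, None] * diff).T @ diff  # weighted sample covariance
    return mean, cov
</syntaxhighlight>

For an exactly quadratic cost the Boltzmann distribution is Gaussian, so the estimated covariance corresponds to the inverse of the <math>Q_{\mathbf{u}\mathbf{u}}</math> matrix of {{EquationNote|4|Eq. 4}}, and the mean corresponds to the minimizing control.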
 
== See also ==