Policy gradient method: Difference between revisions

* Use [[backtracking line search]] to enforce the trust-region constraint. Specifically, the step size is repeatedly shrunk by a factor <math>\alpha</math>, trying the candidates<math display="block">
\theta_{t+1} = \theta_t + \sqrt{\frac{2\epsilon}{x^T F x}}\, x,\quad \theta_t + \alpha \sqrt{\frac{2\epsilon}{x^T F x}}\, x,\quad \theta_t + \alpha^2 \sqrt{\frac{2\epsilon}{x^T F x}}\, x,\quad \dots
</math>until a <math>\theta_{t+1}</math> is found that both satisfies the KL constraint <math>\bar{D}_{KL}(\pi_{\theta_{t+1}} \| \pi_{\theta_{t}}) \leq \epsilon </math> and does not decrease the surrogate objective, i.e. <math>
L(\theta_t, \theta_{t+1}) \geq L(\theta_t, \theta_t)
</math>. Here, <math>\alpha \in (0,1)</math> is the backtracking coefficient.