Policy gradient method

This is an old revision of this page, as edited by Cosmia Nebula (talk | contribs) at 04:01, 25 January 2025 (Policy gradient: anchor REINFORCE).

Policy gradient methods are a class of reinforcement learning algorithms.

Policy gradient methods are a sub-class of policy optimization methods. Unlike value-based methods, which learn a value function and derive a policy from it, policy optimization methods directly learn a policy function that selects actions without consulting a value function. For policy gradient methods to apply, the policy function $\pi_\theta$ must be parameterized by a differentiable parameter $\theta$.

Overview

In policy-based RL, the actor is a parameterized policy function $\pi_\theta$, where $\theta$ are the parameters of the actor. The actor takes as argument the state of the environment $s$ and produces a probability distribution $\pi_\theta(\cdot \mid s)$ over actions.

If the action space $A$ is discrete, then $\sum_{a \in A} \pi_\theta(a \mid s) = 1$. If the action space is continuous, then $\int_A \pi_\theta(a \mid s)\, \mathrm{d}a = 1$.
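For instance, a discrete policy is commonly realized as a softmax over per-action scores, which guarantees the probabilities are positive and sum to one. The sketch below is illustrative only: the linear scoring and the feature map `phi` are assumptions of this example, not part of any standard library.

```python
import numpy as np

def phi(s, a, dim=4):
    """Hypothetical state-action feature map (a fixed pseudo-random embedding)."""
    rng = np.random.default_rng(hash((s, a)) % (2**32))
    return rng.standard_normal(dim)

def softmax_policy(theta, s, actions):
    """Discrete policy pi_theta(. | s): softmax over linear scores theta . phi(s, a)."""
    logits = np.array([theta @ phi(s, a) for a in actions])
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()              # a probability distribution: sums to 1

theta = np.zeros(4)
p = softmax_policy(theta, s=0, actions=[0, 1, 2])
```

With all-zero parameters the scores tie, so the policy is uniform over the three actions.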

The goal of policy optimization is to find some $\theta$ that maximizes the expected episodic reward $J(\theta)$:
$$J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \gamma^t R_t \,\Big|\, S_0 = s_0\right],$$
where $\gamma$ is the discount factor, $R_t$ is the reward at step $t$, $s_0$ is the starting state, and $T$ is the time-horizon (which can be infinite).

The policy gradient is defined as $\nabla_\theta J(\theta)$. Different policy gradient methods stochastically estimate the policy gradient in different ways. The goal of any policy gradient method is to iteratively maximize $J(\theta)$ by gradient ascent. Since the key part of any policy gradient method is the stochastic estimation of the policy gradient, they are also studied under the title of "Monte Carlo gradient estimation".[1]

REINFORCE

Policy gradient

The REINFORCE algorithm was the first policy gradient method.[2] It is based on the identity for the policy gradient
$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\left(\sum_{t=0}^{T} \nabla_\theta \ln \pi_\theta(A_t \mid S_t)\right)\left(\sum_{t=0}^{T} \gamma^t R_t\right)\right],$$
which can be improved via the "causality trick"[3]
$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \nabla_\theta \ln \pi_\theta(A_t \mid S_t) \sum_{\tau=t}^{T} \gamma^\tau R_\tau\right].$$

Lemma. The expectation of the score function is zero, conditional on any present or past state. That is, for any $0 \le i \le j \le T$ and any state $s_i$, we have
$$\mathbb{E}_{\pi_\theta}\left[\nabla_\theta \ln \pi_\theta(A_j \mid S_j) \,\Big|\, S_i = s_i\right] = 0.$$

Further, if $\Psi$ is a random variable that is independent of $A_j$ conditional on $S_j$ (for example, any function of $S_0, A_0, \dots, S_j$), then
$$\mathbb{E}_{\pi_\theta}\left[\Psi\, \nabla_\theta \ln \pi_\theta(A_j \mid S_j)\right] = 0.$$

Proof

Use the log-derivative trick: $\nabla_\theta \pi_\theta(a \mid s) = \pi_\theta(a \mid s)\, \nabla_\theta \ln \pi_\theta(a \mid s)$.

Since the policy $\pi_\theta(\cdot \mid s)$ is a probability distribution over actions for a given state, $\sum_a \pi_\theta(a \mid s) = 1$. Therefore,
$$\mathbb{E}_{\pi_\theta}\left[\nabla_\theta \ln \pi_\theta(A_j \mid S_j) \,\Big|\, S_j = s_j\right] = \sum_a \pi_\theta(a \mid s_j)\, \nabla_\theta \ln \pi_\theta(a \mid s_j) = \sum_a \nabla_\theta \pi_\theta(a \mid s_j) = \nabla_\theta \sum_a \pi_\theta(a \mid s_j) = \nabla_\theta 1 = 0.$$

Conditioning on an earlier state $S_i = s_i$ with $i \le j$ follows by the tower law and the previous lemma:
$$\mathbb{E}_{\pi_\theta}\left[\nabla_\theta \ln \pi_\theta(A_j \mid S_j) \,\Big|\, S_i = s_i\right] = \mathbb{E}_{\pi_\theta}\Big[\mathbb{E}_{\pi_\theta}\left[\nabla_\theta \ln \pi_\theta(A_j \mid S_j) \mid S_j\right] \,\Big|\, S_i = s_i\Big] = 0,$$
and similarly $\mathbb{E}\left[\Psi\, \nabla_\theta \ln \pi_\theta(A_j \mid S_j)\right] = \mathbb{E}\big[\Psi\, \mathbb{E}\left[\nabla_\theta \ln \pi_\theta(A_j \mid S_j) \mid S_j, \Psi\right]\big] = 0$ when $\Psi$ is independent of $A_j$ given $S_j$.

Proof

Applying the log-derivative trick to the trajectory distribution $p_\theta(\tau) = p(S_0) \prod_{t=0}^{T} \pi_\theta(A_t \mid S_t)\, p(S_{t+1} \mid S_t, A_t)$ (the transition probabilities do not depend on $\theta$),
$$\nabla_\theta J(\theta) = \nabla_\theta\, \mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \gamma^t R_t\right] = \mathbb{E}_{\pi_\theta}\left[\left(\sum_{t=0}^{T} \nabla_\theta \ln \pi_\theta(A_t \mid S_t)\right)\left(\sum_{t=0}^{T} \gamma^t R_t\right)\right],$$
which is the first equation.

By the lemma, $\mathbb{E}_{\pi_\theta}\left[\nabla_\theta \ln \pi_\theta(A_t \mid S_t)\, \gamma^\tau R_\tau\right] = 0$ for any $\tau < t$, since $R_\tau$ is a function of the past and thus independent of $A_t$ given $S_t$. Plugging this into the previous formula, we zero out a whole triangle of terms, to get
$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \nabla_\theta \ln \pi_\theta(A_t \mid S_t) \sum_{\tau=t}^{T} \gamma^\tau R_\tau\right],$$
which is the second equation.

Thus, we have an unbiased estimator of the policy gradient:
$$\nabla_\theta J(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \sum_{t=0}^{T} \nabla_\theta \ln \pi_\theta(A_{i,t} \mid S_{i,t}) \sum_{\tau=t}^{T} \gamma^\tau R_{i,\tau},$$
where the index $i$ ranges over $N$ rollout trajectories sampled using the policy $\pi_\theta$.

The score function $\nabla_\theta \ln \pi_\theta(a \mid s)$ can be interpreted as the direction in the parameter space that increases the probability of taking action $a$ in state $s$. The policy gradient, then, is a weighted average of all possible directions to increase the probability of taking any action in any state, weighted by reward signals: if taking a certain action in a certain state is associated with high reward, then that direction is strongly reinforced, and vice versa.
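The lemma above can be checked numerically for a small policy. For a softmax policy over three actions in a single state, the score of action $a$ is the one-hot vector $e_a$ minus the probability vector, and its expectation under the policy vanishes. This is a toy verification under those assumptions, not an implementation from the references.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.standard_normal(3)          # logits of a 3-action softmax policy

def probs(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

def score(theta, a):
    """grad_theta ln pi_theta(a) for a softmax policy: one-hot(a) - pi_theta."""
    g = -probs(theta)
    g[a] += 1.0
    return g

# E[score] = sum_a pi(a) * score(a), which the lemma says is the zero vector.
expected_score = sum(probs(theta)[a] * score(theta, a) for a in range(3))
```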

Algorithm

The REINFORCE algorithm is a loop:

  1. Rollout $N$ trajectories in the environment, using $\pi_{\theta_k}$ as the policy function.
  2. Compute the policy gradient estimate: $g_k = \frac{1}{N} \sum_{i=1}^{N} \sum_{t=0}^{T} \nabla_\theta \ln \pi_{\theta_k}(A_{i,t} \mid S_{i,t}) \sum_{\tau=t}^{T} \gamma^\tau R_{i,\tau}$
  3. Update the policy by gradient ascent: $\theta_{k+1} = \theta_k + \alpha_k g_k$

Here, $\alpha_k$ is the learning rate at update step $k$.
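A minimal sketch of the gradient-estimation step, assuming trajectories are given as lists of (state, action, reward) tuples and that `grad_log_pi` returns the score vector $\nabla_\theta \ln \pi_\theta(a \mid s)$ (both are assumptions of this sketch, not a fixed interface):

```python
import numpy as np

def reinforce_gradient(trajectories, grad_log_pi, gamma=0.99):
    """Monte Carlo estimate of the policy gradient from N rollout trajectories,
    using the reward-to-go form sum_{tau >= t} gamma^tau * R_tau (causality trick)."""
    g = None
    for traj in trajectories:
        rewards = [r for (_, _, r) in traj]
        for t, (s, a, _) in enumerate(traj):
            reward_to_go = sum(gamma**tau * rewards[tau] for tau in range(t, len(traj)))
            step = reward_to_go * grad_log_pi(s, a)
            g = step if g is None else g + step
    return g / len(trajectories)

# Gradient ascent update (step 3) would then be: theta = theta + alpha * g
```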

Variance reduction

REINFORCE is an on-policy algorithm, meaning that the trajectories used for the update must be sampled from the current policy $\pi_\theta$. This can lead to high variance in the updates, as the returns can vary significantly between trajectories. Many variants of REINFORCE have been introduced, under the title of variance reduction.

REINFORCE with baseline

A common way to reduce variance is the REINFORCE with baseline algorithm, based on the following identity:
$$\mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \nabla_\theta \ln \pi_\theta(A_t \mid S_t)\, \gamma^t\, b(S_t)\right] = 0$$
for any function $b$ of the state. This can be proven by applying the previous lemma.

The algorithm uses the modified gradient estimator
$$g_k = \frac{1}{N} \sum_{i=1}^{N} \sum_{t=0}^{T} \nabla_\theta \ln \pi_{\theta_k}(A_{i,t} \mid S_{i,t}) \left(\sum_{\tau=t}^{T} \gamma^\tau R_{i,\tau} - \gamma^t\, b(S_{i,t})\right),$$
and the original REINFORCE algorithm is the special case where $b = 0$.
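With a baseline, the estimator subtracts $\gamma^t b(S_t)$ from each reward-to-go term, and $b = 0$ recovers plain REINFORCE. A sketch, under the assumption that each trajectory is a list of (state, action, reward) tuples and `grad_log_pi` returns the score vector:

```python
import numpy as np

def reinforce_baseline_gradient(trajectories, grad_log_pi, baseline, gamma=0.99):
    """REINFORCE-with-baseline gradient estimate; baseline(s) is any state function."""
    g = None
    for traj in trajectories:
        rewards = [r for (_, _, r) in traj]
        for t, (s, a, _) in enumerate(traj):
            reward_to_go = sum(gamma**tau * rewards[tau] for tau in range(t, len(traj)))
            # subtract gamma^t * b(S_t); this leaves the estimator unbiased
            step = (reward_to_go - gamma**t * baseline(s)) * grad_log_pi(s, a)
            g = step if g is None else g + step
    return g / len(trajectories)
```

The baseline changes only the variance of the estimate, not its expectation.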

Actor-critic methods

If $b$ is chosen well, such that $b(S_t) \approx \mathbb{E}_{\pi_\theta}\left[\sum_{\tau=t}^{T} \gamma^{\tau-t} R_\tau \,\middle|\, S_t\right]$, this could significantly decrease variance in the gradient estimation. That is, the baseline should be as close to the value function $V^{\pi_\theta}$ as possible, approaching the ideal of
$$b(S_t) = V^{\pi_\theta}(S_t) = \mathbb{E}_{\pi_\theta}\left[\sum_{\tau=t}^{T} \gamma^{\tau-t} R_\tau \,\middle|\, S_t\right].$$
Note that, as the policy $\pi_\theta$ updates, the value function $V^{\pi_\theta}$ updates as well, so the baseline should also be updated. One common approach is to train a separate function that estimates the value function, and use that as the baseline. This is one of the actor-critic methods, where the policy function is the actor and the value function is the critic.

The Q-function $Q^{\pi_\theta}$ can also be used as the critic, since
$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \gamma^t\, \nabla_\theta \ln \pi_\theta(A_t \mid S_t)\, Q^{\pi_\theta}(S_t, A_t)\right]$$
by a similar argument using the tower law.

Subtracting the value function as a baseline, we find that the advantage function $A^{\pi_\theta}(S_t, A_t) = Q^{\pi_\theta}(S_t, A_t) - V^{\pi_\theta}(S_t)$ can be used as the critic as well. In summary, there are many unbiased estimators for $\nabla_\theta J(\theta)$, all in the form of
$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \gamma^t\, \nabla_\theta \ln \pi_\theta(A_t \mid S_t)\, \Psi_t\right],$$
where $\Psi_t$ is any linear sum of the following terms:

  •  $\sum_{0 \le \tau \le T} \gamma^{\tau - t} R_\tau$ (the full-trajectory return): never used.
  •  $\sum_{t \le \tau \le T} \gamma^{\tau - t} R_\tau$ (the reward-to-go): used by the REINFORCE algorithm.
  •  $\sum_{t \le \tau \le T} \gamma^{\tau - t} R_\tau - b(S_t)$: used by the REINFORCE with baseline algorithm.
  •  $R_t + \gamma V^{\pi_\theta}(S_{t+1}) - V^{\pi_\theta}(S_t)$: 1-step TD learning.
  •  $Q^{\pi_\theta}(S_t, A_t)$.
  •  $A^{\pi_\theta}(S_t, A_t)$.

Some more possible $\Psi_t$ are as follows, with very similar proofs.

  •  $R_t + \gamma R_{t+1} + \gamma^2 V^{\pi_\theta}(S_{t+2}) - V^{\pi_\theta}(S_t)$: 2-step TD learning.
  •  $\sum_{k=0}^{n-1} \gamma^k R_{t+k} + \gamma^n V^{\pi_\theta}(S_{t+n}) - V^{\pi_\theta}(S_t)$: n-step TD learning.
  •  $\sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l}$, where $\delta_t = R_t + \gamma V^{\pi_\theta}(S_{t+1}) - V^{\pi_\theta}(S_t)$: TD(λ) learning, also known as GAE (generalized advantage estimate).[4] This is obtained by an exponentially decaying sum of the n-step TD learning ones.
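The GAE term can be computed in one backward pass over a trajectory, accumulating an exponentially decaying sum of 1-step TD residuals. A sketch, assuming `values` holds critic estimates $V(S_0), \dots, V(S_T)$ plus one extra bootstrap entry for the final state:

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized advantage estimates: sum_l (gamma*lam)^l * delta_{t+l},
    where delta_t = R_t + gamma * V(S_{t+1}) - V(S_t).

    len(values) must equal len(rewards) + 1 (bootstrap value for the last state).
    """
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # 1-step TD residual
        running = delta + gamma * lam * running
        adv[t] = running
    return adv
```

Setting `lam=0` recovers the 1-step TD residual, while `lam=1` recovers the Monte Carlo advantage estimate, matching the interpolation described above.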

Other methods

Other important examples of policy gradient methods include Trust Region Policy Optimization (TRPO)[5] and Proximal Policy Optimization (PPO).[6]

Natural policy gradient

The natural policy gradient method is a variant of the policy gradient method, proposed by Sham Kakade in 2001.[7] Unlike standard policy gradient methods, which depend on the choice of parameters $\theta$ (making updates coordinate-dependent), the natural policy gradient aims to provide a coordinate-free update, which is geometrically "natural".

Motivation

Standard policy gradient updates solve a constrained optimization problem:
$$\max_{\Delta\theta}\; (\nabla_\theta J)^T \Delta\theta \quad \text{subject to} \quad \|\Delta\theta\|^2 \le \epsilon.$$
While the objective (linearized improvement) is geometrically meaningful, the Euclidean constraint $\|\Delta\theta\|^2 \le \epsilon$ introduces coordinate dependence. To address this, the natural policy gradient replaces the Euclidean constraint with a Kullback–Leibler divergence (KL) constraint:
$$\max_{\Delta\theta}\; (\nabla_\theta J)^T \Delta\theta \quad \text{subject to} \quad D_{\mathrm{KL}}\left(\pi_{\theta + \Delta\theta} \,\|\, \pi_\theta\right) \le \epsilon.$$
This ensures updates are invariant to invertible affine parameter transformations.

Fisher information approximation

For small $\Delta\theta$, the KL divergence is approximated by the Fisher information metric:
$$D_{\mathrm{KL}}\left(\pi_{\theta + \Delta\theta} \,\|\, \pi_\theta\right) \approx \tfrac{1}{2}\, \Delta\theta^T F(\theta)\, \Delta\theta,$$
where $F(\theta)$ is the Fisher information matrix of the policy, defined as
$$F(\theta) = \mathbb{E}_{\pi_\theta}\left[\nabla_\theta \ln \pi_\theta(A \mid S)\, \nabla_\theta \ln \pi_\theta(A \mid S)^T\right].$$
This transforms the problem into a problem in quadratic programming, yielding the natural policy gradient update:
$$\theta_{k+1} = \theta_k + \alpha\, F(\theta_k)^{-1}\, \nabla_\theta J(\theta_k).$$
The step size $\alpha$ is typically adjusted to maintain the KL constraint, with
$$\alpha = \sqrt{\frac{2\epsilon}{(\nabla_\theta J)^T F^{-1} (\nabla_\theta J)}}.$$
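The update can be sketched directly from these formulas, taking the Fisher matrix $F$ and the gradient estimate as given (in practice both would be estimated from samples of the score function):

```python
import numpy as np

def natural_gradient_step(theta, grad_J, F, eps=0.01):
    """One natural policy gradient update: theta + alpha * F^{-1} grad_J,
    with alpha chosen so the quadratic KL approximation 0.5 d^T F d equals eps."""
    nat_grad = np.linalg.solve(F, grad_J)            # F^{-1} grad_J without forming F^{-1}
    alpha = np.sqrt(2.0 * eps / (grad_J @ nat_grad)) # step size from the KL constraint
    return theta + alpha * nat_grad
```

With $F = I$ the step reduces to ordinary gradient ascent with step length $\sqrt{2\epsilon}$, which shows how the Fisher metric rescales the Euclidean step.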

Practical considerations

Inverting $F(\theta)$ is computationally intensive, especially for high-dimensional parameters (e.g., neural networks). Practical implementations often avoid the explicit inverse, for example by solving $F(\theta)\, x = \nabla_\theta J$ iteratively with the conjugate gradient method, as in TRPO.[5]

These methods address the trade-off between inversion complexity and policy update stability, making natural policy gradients feasible in large-scale applications.

References

  1. ^ Mohamed, Shakir; Rosca, Mihaela; Figurnov, Michael; Mnih, Andriy (2020). "Monte Carlo Gradient Estimation in Machine Learning". Journal of Machine Learning Research. 21 (132): 1–62. arXiv:1906.10652. ISSN 1533-7928.
  2. ^ Williams, Ronald J. (May 1992). "Simple statistical gradient-following algorithms for connectionist reinforcement learning". Machine Learning. 8 (3–4): 229–256. doi:10.1007/BF00992696. ISSN 0885-6125.
  3. ^ Sutton, Richard S; McAllester, David; Singh, Satinder; Mansour, Yishay (1999). "Policy Gradient Methods for Reinforcement Learning with Function Approximation". Advances in Neural Information Processing Systems. 12. MIT Press.
  4. ^ Schulman, John; Moritz, Philipp; Levine, Sergey; Jordan, Michael; Abbeel, Pieter (2018-10-20), High-Dimensional Continuous Control Using Generalized Advantage Estimation, doi:10.48550/arXiv.1506.02438
  5. ^ Schulman, John; Levine, Sergey; Moritz, Philipp; Jordan, Michael; Abbeel, Pieter (2015-07-06). "Trust region policy optimization". Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37. ICML'15. Lille, France: JMLR.org: 1889–1897.
  6. ^ Schulman, John; Wolski, Filip; Dhariwal, Prafulla; Radford, Alec; Klimov, Oleg (2017-08-28), Proximal Policy Optimization Algorithms, arXiv:1707.06347
  7. ^ Kakade, Sham M (2001). "A Natural Policy Gradient". Advances in Neural Information Processing Systems. 14. MIT Press.
  • Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement learning: an introduction. Adaptive computation and machine learning series (2 ed.). Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-03924-6.
  • Bertsekas, Dimitri P. (2019). Reinforcement learning and optimal control (2 ed.). Belmont, Massachusetts: Athena Scientific. ISBN 978-1-886529-39-7.
  • Szepesvári, Csaba (2010). Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning (1 ed.). Cham: Springer International Publishing. ISBN 978-3-031-00423-0.