{{Short description|Reinforcement learning algorithm}}
The '''actor-critic algorithm''' (AC) is a family of [[reinforcement learning]] (RL) algorithms that combine policy-based RL algorithms such as [[Policy gradient method|policy gradient methods]] with value-based RL algorithms such as [[Q-learning]], [[SARSA]], and [[Temporal difference learning|TD learning]].
An AC algorithm consists of two main components: an "'''actor'''" that determines which actions to take according to a policy function, and a "'''critic'''" that evaluates those actions according to a value function.<ref>{{Cite journal |
== Overview ==
=== Actor ===
The '''actor''' uses a policy function <math>\pi(a|s)</math>, while the critic estimates either the [[value function]] <math>V(s)</math>, the action-value Q-function <math>Q(s,a)</math>, the advantage function <math>A(s,a)</math>, or some combination of these.
The actor is a parameterized function <math>\pi_\theta</math>, where <math>\theta</math> are the parameters of the actor. The actor takes as argument the state of the environment <math>s</math> and produces a [[probability distribution]] <math>\pi_\theta(\cdot | s)</math>.
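For example, with a finite action space, the actor can be a small neural network that maps a state to a [[categorical distribution]] over actions. The following sketch (using PyTorch purely as an illustrative choice; the architecture and names are assumptions of this example, not part of any standard) is one way to parameterize <math>\pi_\theta(\cdot | s)</math>:
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Parameterized policy pi_theta(. | s) for a discrete action space."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.distributions.Categorical:
        logits = self.net(state)  # unnormalized log-probabilities, one per action
        return torch.distributions.Categorical(logits=logits)

# Sampling an action A ~ pi_theta(. | s):
#   dist = Actor(state_dim, n_actions)(state)
#   action = dist.sample()
#   log_prob = dist.log_prob(action)   # ln pi_theta(A | s), used by the policy gradient
</syntaxhighlight>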
The goal of policy gradient methods is to optimize <math>J(\theta)</math> by [[Gradient descent|gradient ascent]] on the policy gradient <math>\nabla_\theta J(\theta)</math>.
As detailed on the [[Policy gradient method#Actor-critic methods|policy gradient method]] page, there are many unbiased estimators of the policy gradient, all of the form<math display="block">\nabla_\theta J(\theta) = E_{\pi_\theta}\left[\sum_{0 \leq j \leq T} \nabla_\theta \ln \pi_\theta(A_j | S_j)
\cdot \Psi_j
\Big|S_0 = s_0 \right]</math>where <math display="inline">\Psi_j</math> is a linear sum of the following (an implementation sketch of one such choice is given after the list):
* <math display="inline">\gamma^j \left(R_j + \gamma V^{\pi_\theta}( S_{j+1}) - V^{\pi_\theta}( S_{j})\right)</math>: [[Temporal difference learning|TD(1) learning]].
* <math display="inline">\gamma^j Q^{\pi_\theta}(S_j, A_j)</math>.
* <math display="inline">\gamma^j A^{\pi_\theta}(S_j, A_j)</math>: '''Advantage Actor-Critic (A2C)'''.<ref name=":0">{{Citation |
* <math display="inline">\gamma^j \left(R_j + \gamma R_{j+1} + \gamma^2 V^{\pi_\theta}( S_{j+2}) - V^{\pi_\theta}( S_{j})\right)</math>: TD(2) learning.
* <math display="inline">\gamma^j \left(\sum_{k=0}^{n-1} \gamma^k R_{j+k} + \gamma^n V^{\pi_\theta}( S_{j+n}) - V^{\pi_\theta}( S_{j})\right)</math>: TD(n) learning.
* <math display="inline">\gamma^j \sum_{n=1}^\infty \frac{\lambda^{n-1}}{1-\lambda}\cdot \left(\sum_{k=0}^{n-1} \gamma^k R_{j+k} + \gamma^n V^{\pi_\theta}( S_{j+n}) - V^{\pi_\theta}( S_{j})\right)</math>: TD(λ) learning, also known as '''GAE (generalized advantage estimate)'''.<ref name="arxiv.org">{{Citation |
=== Critic ===
In the unbiased estimators given above, certain functions such as <math>V^{\pi_\theta}, Q^{\pi_\theta}, A^{\pi_\theta}</math> appear. These are approximated by the '''critic'''. Since these functions all depend on the actor, the critic must learn alongside the actor. The critic is learned by value-based RL algorithms.
For example, if the critic is estimating the state-value function <math>V^{\pi_\theta}(s)</math>, then it can be learned by any value function approximation method. Let the critic be a function approximator <math>V_\phi(s)</math> with parameters <math>\phi</math>.
The simplest example is TD(1) learning, which trains the critic to minimize the TD(1) error:<math display="block">\delta_i = R_i + \gamma V_\phi(S_{i+1}) - V_\phi(S_i)</math>The critic parameters are updated by gradient descent on the squared TD error:<math display="block">\phi \leftarrow \phi - \alpha \nabla_\phi (\delta_i)^2 = \phi + \alpha \delta_i \nabla_\phi V_\phi(S_i)</math>where <math>\alpha</math> is the learning rate. Note that the gradient is taken with respect to the <math>\phi</math> in <math>V_\phi(S_i)</math> only, since the <math>\phi</math> in <math>\gamma V_\phi(S_{i+1})</math> constitutes a moving target, and the gradient is not taken with respect to that. This is a common source of error in implementations that use [[automatic differentiation]], and requires "stopping the gradient" at that point.
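A minimal sketch of this critic update (again in PyTorch, with illustrative names; the <code>torch.no_grad()</code> block is one way to "stop the gradient" through the bootstrap target):
<syntaxhighlight lang="python">
import torch

def critic_step(critic, optimizer, state, reward, next_state, gamma=0.99):
    """One TD(1) update of V_phi by gradient descent on the squared TD error."""
    with torch.no_grad():
        # Bootstrap target R_i + gamma * V_phi(S_{i+1}): no gradient flows through it.
        target = reward + gamma * critic(next_state)
    td_error = target - critic(state)     # delta_i
    loss = td_error.pow(2).mean()          # (delta_i)^2
    optimizer.zero_grad()
    loss.backward()                        # gradient only through V_phi(S_i)
    optimizer.step()
    return td_error.detach()
</syntaxhighlight>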
The '''generalized advantage estimate''' interpolates between the high-variance, low-bias Monte Carlo return (<math>\lambda \to 1</math>) and the low-variance, high-bias one-step TD estimate (<math>\lambda \to 0</math>). This hyperparameter can be adjusted to pick the optimal bias-variance trade-off in advantage estimation. It uses an exponentially decaying average of n-step returns, with <math>\lambda</math> being the decay strength.<ref name="arxiv.org"/>
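In the common normalization of the generalized advantage estimate, the sum is computed over a finite trajectory by a single backward recursion over the one-step TD errors <math>\delta_j</math>, using <math>A_j = \delta_j + \gamma\lambda A_{j+1}</math>. A sketch in plain NumPy (the function name and the truncation to a finite horizon are assumptions of this illustration):
<syntaxhighlight lang="python">
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized advantage estimates for one trajectory.

    rewards -- array of shape (T,):   R_0 ... R_{T-1}
    values  -- array of shape (T+1,): V(S_0) ... V(S_T) (last entry bootstraps the tail)
    """
    T = len(rewards)
    deltas = rewards + gamma * values[1:] - values[:-1]  # one-step TD errors delta_j
    advantages = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        gae = deltas[t] + gamma * lam * gae               # A_t = delta_t + gamma*lambda*A_{t+1}
        advantages[t] = gae
    return advantages
</syntaxhighlight>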
== Variants ==
* '''Asynchronous Advantage Actor-Critic (A3C)''': [[Parallel computing|Parallel and asynchronous]] version of A2C.<ref name=":0" />
* '''Soft Actor-Critic (SAC)''': Incorporates entropy maximization for improved exploration.<ref>{{Citation |
* '''Deep Deterministic Policy Gradient (DDPG)''': Specialized for continuous action spaces.<ref>{{Citation |
== See also ==
== References ==
{{Reflist|30em}}
* {{Cite journal |
* {{Cite book |
* {{Cite book |last=Bertsekas |first=Dimitri P. |title=Reinforcement learning and optimal control |date=2019 |publisher=Athena Scientific |isbn=978-1-886529-39-7 |edition=2 |___location=Belmont, Massachusetts}}
* {{Cite book |last=Szepesvári |first=Csaba |title=Algorithms for Reinforcement Learning |date=2010 |publisher=Springer International Publishing |isbn=978-3-031-00423-0 |edition=1 |series=Synthesis Lectures on Artificial Intelligence and Machine Learning |___location=Cham}}
* {{Cite journal |last1=Grondman |first1=Ivo |last2=Busoniu |first2=Lucian |last3=Lopes |first3=Gabriel A. D. |last4=Babuska |first4=Robert |date=November 2012 |title=A Survey of Actor-Critic Reinforcement Learning: Standard and Natural Policy Gradients |journal=IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews |volume=42 |issue=6 |pages=1291–1307 |doi=10.1109/TSMCC.2012.2218595 |bibcode=2012ITHMS..42.1291G |issn=1094-6977 |url=https://hal.science/hal-00756747 }}
{{Artificial intelligence navbox}}
[[Category:Reinforcement learning]]
[[Category:Machine learning algorithms]]