Another major innovation is the use of transformer-based architectures in DRL. Unlike traditional models that rely on recurrent or convolutional networks, transformers can model long-term dependencies more effectively. The Decision Transformer and similar models treat RL as a sequence-modeling problem, enabling agents to generalize better across tasks.<ref>Kostas, J. et al. "Transformer-based reinforcement learning agents." arXiv preprint arXiv:2209.00588 (2022). https://arxiv.org/abs/2209.00588</ref>
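The sequence-modeling framing above can be made concrete with a minimal sketch. A Decision Transformer flattens each trajectory into interleaved (return-to-go, state, action) tokens and trains an autoregressive model to predict the next action from the prefix; at inference time the agent is conditioned on a desired return. The function names and toy trajectory below are illustrative assumptions, not taken from any published codebase:

```python
def returns_to_go(rewards):
    """Suffix sums of rewards: R_t = r_t + r_{t+1} + ... + r_T."""
    rtg, total = [], 0.0
    for r in reversed(rewards):
        total += r
        rtg.append(total)
    return list(reversed(rtg))

def to_sequence(states, actions, rewards):
    """Interleave (return-to-go, state, action) triples into one token list,
    the input format an autoregressive transformer would be trained on."""
    seq = []
    for g, s, a in zip(returns_to_go(rewards), states, actions):
        seq.extend([("rtg", g), ("state", s), ("action", a)])
    return seq

# Toy 3-step trajectory: reward only at the final step.
states  = [0, 1, 2]
actions = [1, 0, 1]
rewards = [0.0, 0.0, 1.0]

seq = to_sequence(states, actions, rewards)
# Every step is conditioned on the return still achievable from that step;
# at deployment, the first "rtg" token is set to the return the user wants.
```

Since the reward arrives only at the end, the return-to-go is 1.0 at every step, so each action prediction is conditioned on the same target return.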
In addition, research into open-ended learning has led to the creation of
=== Future directions ===