'''Deep reinforcement learning (DRL)''' is a [[machine learning]] method that combines principles from [[reinforcement learning]] and [[deep learning]] to obtain the benefits of both.
Deep reinforcement learning has a wide range of applications, including video games, computer science, healthcare, and finance. Deep reinforcement learning algorithms are able to take in very large amounts of input data (e.g. every pixel rendered to the screen in a video game) and decide what action to take in order to reach a goal.<ref name=":1" />
== Overview ==
=== Reinforcement Learning ===
[[File:Markov diagram v2.svg|alt=Diagram explaining the loop recurring in reinforcement learning algorithms|thumb|Diagram of the loop recurring in reinforcement learning algorithms]][[Reinforcement learning]] is a process in which an agent learns to perform an action through trial and error. In this process, the agent receives a reward indicating whether its previous action was good or bad and aims to optimize its behavior based on this reward.<ref>{{Cite journal|
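In code, this trial-and-error loop is often expressed as an episode in which the agent repeatedly observes a state, acts, and receives a reward. The sketch below is illustrative only; the <code>env</code> and <code>agent</code> objects and their methods are hypothetical placeholders, not a specific library API.
<syntaxhighlight lang="python">
# Minimal agent-environment interaction loop (hypothetical interfaces).
def run_episode(env, agent):
    state = env.reset()                                  # initial state of the environment
    done = False
    total_reward = 0.0
    while not done:
        action = agent.act(state)                        # agent chooses an action
        next_state, reward, done = env.step(action)      # environment returns feedback
        agent.learn(state, action, reward, next_state)   # agent updates its behavior
        state = next_state
        total_reward += reward
    return total_reward
</syntaxhighlight>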
=== Deep Learning ===
[[File:Neural network example.svg|thumb|241x241px|Depiction of a basic artificial neural network]]
* The [[AlphaZero]] algorithm, developed by [[DeepMind]], which has achieved superhuman performance in many games.<ref>{{Cite web|title=DeepMind - What if solving one problem could unlock solutions to thousands more?|url=https://deepmind.com/|access-date=2020-11-16|website=Deepmind}}</ref>
* Image enhancement models such as [[Generative adversarial network|GAN]]s and U-Net, which have attained much higher performance than previous methods on tasks such as [[Super-resolution imaging|super-resolution]] and segmentation<ref name=":1">{{Cite book
The training process of Q-learning involves exploring different actions and recording a table of Q-values corresponding to each state and action. Once the agent is sufficiently trained, the table should provide an accurate representation of the quality of each action given the state it is taken in.<ref>{{Cite web|last=Violante|first=Andre|date=2019-07-01|title=Simple Reinforcement Learning: Q-learning|url=https://towardsdatascience.com/simple-reinforcement-learning-q-learning-fcddc4b6fe56|access-date=2020-11-16|website=Medium|language=en}}</ref>
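An illustrative sketch of the tabular update is shown below; the learning rate, discount factor, and state/action representation are hypothetical.
<syntaxhighlight lang="python">
from collections import defaultdict

# Q maps (state, action) pairs to estimated quality values; unseen pairs default to 0.
Q = defaultdict(float)
alpha = 0.1    # learning rate (hypothetical value)
gamma = 0.99   # discount factor (hypothetical value)

def q_update(state, action, reward, next_state, actions):
    # Standard Q-learning update:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
</syntaxhighlight>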
==== Deep Q-Learning ====
[[Deep Q-learning]] takes the principles of standard Q-learning but approximates the Q-values using an artificial neural network. In many applications there is too much input data to account for (e.g. the millions of pixels on a computer screen), which would make the standard process of determining the Q-value for every state and action take a very long time. By using a neural network to process the data and predict a Q-value for each available action, the algorithm can be much faster and, consequently, process more data.<ref>{{Cite
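A minimal sketch of this idea, using a small feed-forward network (the layer sizes and framework usage are illustrative, not a specific published architecture):
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

state_dim, n_actions = 84, 4            # hypothetical observation size and action count

# The network replaces the Q-table: it maps a state vector to one Q-value per action.
q_network = nn.Sequential(
    nn.Linear(state_dim, 128),
    nn.ReLU(),
    nn.Linear(128, n_actions),
)

state = torch.randn(1, state_dim)       # a dummy state observation
q_values = q_network(state)             # predicted Q-values, shape (1, n_actions)
action = int(torch.argmax(q_values))    # act greedily on the predicted Q-values
</syntaxhighlight>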
=== Challenges ===
In the greedy learning policy, the agent chooses the action that has the greatest Q-value for the given state: <math>a = \arg\max_a Q(s,a)</math>. With this solution, the agent may get stuck in a local maximum and never discover possibly greater success, because it focuses only on maximizing the Q-value given its current knowledge.
In the epsilon-greedy method of training, before determining each action the agent decides whether to prioritize exploration, taking an action with an uncertain outcome for the purpose of gaining more knowledge, or exploitation, picking the action that maximizes the Q-value. At every iteration, a random number between zero and one is drawn. If this number is below the specified value of epsilon, the agent chooses a random action to prioritize exploration; otherwise it selects the action that maximizes the Q-value. Higher values of epsilon therefore result in a greater amount of exploration.<ref name=":0">{{Cite journal|
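A sketch of epsilon-greedy action selection (the <code>q_values</code> lookup and the value of epsilon are hypothetical):
<syntaxhighlight lang="python">
import random

def select_action(state, q_values, actions, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(actions)                        # explore: random action
    return max(actions, key=lambda a: q_values[(state, a)])  # exploit: best known action
</syntaxhighlight>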
In training reinforcement learning algorithms, agents are rewarded based on their behavior. Variation in the frequency and the occasions on which the agent is rewarded can have a large impact on the speed and quality of training.
When the goal is too difficult for the learning algorithm to complete, it may never reach the goal and will never be rewarded. Additionally, if a reward is only received at the end of a task, the algorithm has no way to differentiate between good and bad behavior during the task.<ref>{{Cite journal|
==== '''[[Bias–variance tradeoff|Bias–Variance Tradeoff]]''' ====
==== '''Reward Shaping''' ====
Reward shaping is the process of giving an agent intermediate rewards that are customized to fit the task it is attempting to complete. For example, if an agent is attempting to learn the game [[Atari Breakout]], it may get a positive reward every time it successfully hits the ball and breaks a brick, rather than only when it completes a level. This reduces the time it takes the agent to learn a task because less guessing is required. However, this method reduces the ability to generalize the algorithm to other applications, because the rewards would need to be tweaked for each individual circumstance, making it not an optimal solution.<ref>{{Citation|last=Wiewiora|first=Eric|title=Reward Shaping|date=2010|url=https://doi.org/10.1007/978-0-387-30164-8_731|encyclopedia=Encyclopedia of Machine Learning|pages=863–865|editor-last=Sammut|editor-first=Claude|place=Boston, MA|publisher=Springer US|language=en|doi=10.1007/978-0-387-30164-8_731|isbn=978-0-387-30164-8|access-date=2020-11-16|editor2-last=Webb|editor2-first=Geoffrey I.}}</ref>
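An illustrative shaped reward for a Breakout-style task might look like the following (the event flags and reward magnitudes are hypothetical):
<syntaxhighlight lang="python">
def shaped_reward(hit_ball, broke_brick, completed_level):
    # Intermediate rewards guide the agent instead of a single end-of-level reward.
    reward = 0.0
    if hit_ball:
        reward += 0.1    # small bonus for returning the ball
    if broke_brick:
        reward += 1.0    # larger bonus for each broken brick
    if completed_level:
        reward += 10.0   # final bonus for finishing the level
    return reward
</syntaxhighlight>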
==== '''Curiosity-Driven Exploration''' ====
The idea behind curiosity-driven exploration is to give the agent a motive to explore unknown outcomes in order to find the best solutions. This is done by "modify[ing] the loss function (or even the network architecture) by adding terms to incentivize exploration"<ref>{{Cite
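One common way to add such a term is to give an intrinsic reward proportional to how poorly the agent can predict the outcome of its own actions; the sketch below is illustrative only, with hypothetical sizes and scaling.
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2                         # hypothetical dimensions

# A small forward model tries to predict the next state from (state, action).
forward_model = nn.Sequential(
    nn.Linear(state_dim + action_dim, 64),
    nn.ReLU(),
    nn.Linear(64, state_dim),
)

def curiosity_bonus(state, action, next_state, scale=0.01):
    predicted = forward_model(torch.cat([state, action], dim=-1))
    error = torch.mean((predicted - next_state) ** 2)  # prediction error = "novelty"
    return scale * error.item()                        # added to the external reward
</syntaxhighlight>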
==== '''Hindsight Experience Replay''' ====
Hindsight experience replay is a method of training that involves storing and learning from previous failed attempts to complete a task, beyond just assigning a negative reward. While a failed attempt may not have reached the intended goal, it can serve as a lesson for how to achieve the unintended result.<ref>{{Cite
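A sketch of the relabeling idea (the transition format and reward values here are hypothetical):
<syntaxhighlight lang="python">
def relabel_episode(episode, replay_buffer):
    # Store each transition twice: once with the original goal, and once with the
    # goal replaced by the state the agent actually reached, so the failed episode
    # still provides a successful example for some goal.
    achieved_goal = episode[-1]["next_state"]
    for step in episode:
        replay_buffer.append(step)
        hindsight = dict(step)
        hindsight["goal"] = achieved_goal
        hindsight["reward"] = 1.0 if step["next_state"] == achieved_goal else 0.0
        replay_buffer.append(hindsight)
</syntaxhighlight>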
== Generalization ==
Deep reinforcement learning excels at generalization, or the ability to use one machine learning model for multiple tasks.
Reinforcement learning models require an indication of the state in order to function. When this state is provided by an artificial neural network, which is good at extracting features from raw data (e.g. pixels or raw image files), there is a reduced need to predefine the environment, allowing the model to be generalized to multiple applications. With this layer of abstraction, deep reinforcement learning algorithms can be designed in a way that allows them to become generalized, and the same model can be used for different tasks.
== References ==<!--- See http://en.wikipedia.org/wiki/Wikipedia:Footnotes on how to create references using <ref></ref> tags, these references will then appear here automatically -->