Recurrent neural network

{{Short description|Class of artificial neural network}}
{{Distinguish|Recursive neural network|Feedback neural network}}
{{Machine learning|Neural networks}}
 
In [[artificial neural networks]], '''recurrent neural networks''' ('''RNNs''') are designed for processing sequential data, such as text, speech, and [[time series]],<ref>{{Cite journal |last1=Tealab |first1=Ahmed |date=2018-12-01 |title=Time series forecasting using artificial neural networks methodologies: A systematic review |journal=Future Computing and Informatics Journal |volume=3 |issue=2 |pages=334–340 |doi=10.1016/j.fcij.2018.10.003 |issn=2314-7288 |doi-access=free}}</ref> where the order of elements is important. Unlike [[feedforward neural network]]s, which process inputs independently, RNNs utilize recurrent connections, where the output of a neuron at one time step is fed back as input to the network at the next time step. This enables RNNs to capture temporal dependencies and patterns within sequences.
The term "recurrent neural network" is used to refer to the class of networks with an [[infinite impulse response]], whereas "[[convolutional neural network]]" refers to the class of [[finite impulse response|finite impulse]] response. Both classes of networks exhibit temporal [[dynamic system|dynamic behavior]].<ref>{{Cite journal |last=Miljanovic |first=Milos |date=Feb–Mar 2012 |title=Comparative analysis of Recurrent and Finite Impulse Response Neural Networks in Time Series Prediction |url=http://www.ijcse.com/docs/INDJCSE12-03-01-028.pdf |journal=Indian Journal of Computer and Engineering |volume=3 |issue=1 }}</ref> A finite impulse recurrent network is a [[directed acyclic graph]] that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a [[directed cyclic graph]] that can not be unrolled.
 
The fundamental building block of RNN is the ''recurrent unit'', which maintains a ''hidden state''—a form of memory that is updated at each time step based on the current input and the previous hidden state. This feedback mechanism allows the network to learn from past inputs and incorporate that knowledge into its current processing. RNNs have been successfully applied to tasks such as unsegmented, connected [[handwriting recognition]],<ref>{{cite journal |last1=Graves |first1=Alex |author-link1=Alex Graves (computer scientist) |last2=Liwicki |first2=Marcus |last3=Fernandez |first3=Santiago |last4=Bertolami |first4=Roman |last5=Bunke |first5=Horst |last6=Schmidhuber |first6=Jürgen |author-link6=Jürgen Schmidhuber |year=2009 |title=A Novel Connectionist System for Improved Unconstrained Handwriting Recognition |url=http://www.idsia.ch/~juergen/tpami_2008.pdf |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=31 |issue=5 |pages=855–868 |citeseerx=10.1.1.139.4502 |doi=10.1109/tpami.2008.137 |pmid=19299860 |s2cid=14635907}}</ref> [[speech recognition]],<ref name="sak2014">{{Cite web |last1=Sak |first1=Haşim |last2=Senior |first2=Andrew |last3=Beaufays |first3=Françoise |year=2014 |title=Long Short-Term Memory recurrent neural network architectures for large scale acoustic modeling |url=https://research.google.com/pubs/archive/43905.pdf |publisher=Google Research}}</ref><ref name="liwu2015">{{cite arXiv |eprint=1410.4281 |class=cs.CL |first1=Xiangang |last1=Li |first2=Xihong |last2=Wu |title=Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition |date=2014-10-15}}</ref> [[natural language processing]], and [[neural machine translation]].<ref>{{Cite journal |last=Dupond |first=Samuel |date=2019 |title=<!-- for sure correct title? not found, nor in archive.org (for 2020-02-13), nor Volume correct? 2019 is vol 47-48 and 41 from 2016--> A thorough review on the current advance of neural network structures. |url=https://www.sciencedirect.com/journal/annual-reviews-in-control |journal=Annual Reviews in Control |volume=14 |pages=200–230}}</ref><ref>{{Cite journal |last1=Abiodun |first1=Oludare Isaac |last2=Jantan |first2=Aman |last3=Omolara |first3=Abiodun Esther |last4=Dada |first4=Kemi Victoria |last5=Mohamed |first5=Nachaat Abdelatif |last6=Arshad |first6=Humaira |date=2018-11-01 |title=State-of-the-art in artificial neural network applications: A survey |journal=Heliyon |volume=4 |issue=11 |pages=e00938 |bibcode=2018Heliy...400938A |doi=10.1016/j.heliyon.2018.e00938 |issn=2405-8440 |pmc=6260436 |pmid=30519653 |doi-access=free}}</ref>

Both finite impulse and infinite impulse recurrent networks can have additional stored states, and the storage can be under direct control by the neural network. The storage can also be replaced by another network or graph if that incorporates time delays or has feedback loops. Such controlled states are referred to as gated states or gated memory, and are part of [[long short-term memory]] networks (LSTMs) and [[gated recurrent unit]]s.
 
However, traditional RNNs suffer from the [[vanishing gradient problem]], which limits their ability to learn long-range dependencies. This issue was addressed by the development of the [[long short-term memory]] (LSTM) architecture in 1997, making it the standard RNN variant for handling long-term dependencies. Later, [[gated recurrent unit]]s (GRUs) were introduced as a more computationally efficient alternative.
{{toclimit|3}}
 
In recent years, [[Transformer (deep learning architecture)|transformers]], which rely on self-attention mechanisms instead of recurrence, have become the dominant architecture for many sequence-processing tasks, particularly in natural language processing, due to their superior handling of long-range dependencies and greater parallelizability. Nevertheless, RNNs remain relevant for applications where computational efficiency, real-time processing, or the inherent sequential nature of data is crucial.
 
==History==
 
===Before modern===
One origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in [[anatomy]]. In 1901, [[Santiago Ramón y Cajal|Cajal]] observed "recurrent semicircles" in the [[Cerebellum|cerebellar cortex]] formed by [[parallel fiber]]s, [[Purkinje cell]]s, and [[granule cell]]s.<ref>{{Cite journal |last1=Espinosa-Sanchez |first1=Juan Manuel |last2=Gomez-Marin |first2=Alex |last3=de Castro |first3=Fernando |date=2023-07-05 |title=The Importance of Cajal's and Lorente de Nó's Neuroscience to the Birth of Cybernetics |url=http://journals.sagepub.com/doi/10.1177/10738584231179932 |journal=The Neuroscientist |volume=31 |issue=1 |pages=14–30 |language=en |doi=10.1177/10738584231179932 |pmid=37403768 |hdl=10261/348372 |issn=1073-8584|hdl-access=free }}</ref><ref>{{Cite book |last=Ramón y Cajal |first=Santiago |url=https://archive.org/details/b2129592x_0002/page/n159/mode/2up |title=Histologie du système nerveux de l'homme & des vertébrés |date=1909 |publisher=Paris : A. Maloine |others=Foyle Special Collections Library King's College London |volume=II |pages=149}}</ref> In 1933, [[Rafael Lorente de Nó|Lorente de Nó]] discovered "recurrent, reciprocal connections" by [[Golgi's method]], and proposed that excitatory loops explain certain aspects of the [[vestibulo-ocular reflex]].<ref>{{Cite journal |last=de NÓ |first=R. Lorente |date=1933-08-01 |title=Vestibulo-Ocular Reflex Arc |url=http://archneurpsyc.jamanetwork.com/article.aspx?doi=10.1001/archneurpsyc.1933.02240140009001 |journal=Archives of Neurology and Psychiatry |volume=30 |issue=2 |pages=245 |doi=10.1001/archneurpsyc.1933.02240140009001 |issn=0096-6754|url-access=subscription }}</ref><ref>{{Cite journal |last=Larriva-Sahd |first=Jorge A. |date=2014-12-03 |title=Some predictions of Rafael Lorente de Nó 80 years later |journal=Frontiers in Neuroanatomy |volume=8 |pages=147 |doi=10.3389/fnana.2014.00147 |doi-access=free |issn=1662-5129 |pmc=4253658 |pmid=25520630}}</ref> During the 1940s, multiple people proposed the existence of feedback in the brain, which was a contrast to the previous understanding of the neural system as a purely feedforward structure. [[Donald O. Hebb|Hebb]] considered the "reverberating circuit" as an explanation for short-term memory.<ref>{{Cite web |title=reverberating circuit |url=https://www.oxfordreference.com/display/10.1093/oi/authority.20110803100417461 |access-date=2024-07-27 |website=Oxford Reference }}</ref> The McCulloch and Pitts paper (1943), which proposed the [[McCulloch-Pitts neuron]] model, considered networks that contain cycles. The current activity of such networks can be affected by activity indefinitely far in the past.<ref>{{Cite journal |last1=McCulloch |first1=Warren S. |last2=Pitts |first2=Walter |date=December 1943 |title=A logical calculus of the ideas immanent in nervous activity |url=http://link.springer.com/10.1007/BF02478259 |journal=The Bulletin of Mathematical Biophysics |volume=5 |issue=4 |pages=115–133 |doi=10.1007/BF02478259 |issn=0007-4985|url-access=subscription }}</ref> They were both interested in closed loops as possible explanations for e.g. [[epilepsy]] and [[Complex regional pain syndrome|causalgia]].<ref>{{Cite journal |last1=Moreno-Díaz |first1=Roberto |last2=Moreno-Díaz |first2=Arminda |date=April 2007 |title=On the legacy of W.S.
McCulloch |url=https://linkinghub.elsevier.com/retrieve/pii/S0303264706002152 |journal=Biosystems |volume=88 |issue=3 |pages=185–190 |doi=10.1016/j.biosystems.2006.08.010|pmid=17184902 |bibcode=2007BiSys..88..185M |url-access=subscription }}</ref><ref>{{Cite journal |last=Arbib |first=Michael A |date=December 2000 |title=Warren McCulloch's Search for the Logic of the Nervous System |url=https://muse.jhu.edu/article/46496 |journal=Perspectives in Biology and Medicine |volume=43 |issue=2 |pages=193–216 |doi=10.1353/pbm.2000.0001 |pmid=10804585 |issn=1529-8795|url-access=subscription }}</ref> [[Renshaw cell|Recurrent inhibition]] was proposed in 1946 as a negative feedback mechanism in motor control. Neural feedback loops were a common topic of discussion at the [[Macy conferences]].<ref>{{Cite journal |last=Renshaw |first=Birdsey |date=1946-05-01 |title=Central Effects of Centripetal Impulses in Axons of Spinal Ventral Roots |url=https://www.physiology.org/doi/10.1152/jn.1946.9.3.191 |journal=Journal of Neurophysiology |volume=9 |issue=3 |pages=191–204 |doi=10.1152/jn.1946.9.3.191 |pmid=21028162 |issn=0022-3077|url-access=subscription }}</ref> See <ref name=":0">{{Cite journal |last=Grossberg |first=Stephen |date=2013-02-22 |title=Recurrent Neural Networks |journal=Scholarpedia |volume=8 |issue=2 |pages=1888 |doi=10.4249/scholarpedia.1888 |doi-access=free |bibcode=2013SchpJ...8.1888G |issn=1941-6016}}</ref> for an extensive review of recurrent neural network models in neuroscience.[[File:Typical_connections_in_a_close-loop_cross-coupled_perceptron.png|thumb|A close-loop cross-coupled perceptron network<ref name=":1" />{{Pg|page=403|___location=Fig. 47}}]]
[[Frank Rosenblatt]] in 1960 published "close-loop cross-coupled perceptrons", which are 3-layered [[perceptron]] networks whose middle layer contains recurrent connections that change by a [[Hebbian theory|Hebbian learning]] rule.<ref>F. Rosenblatt, "[[iarchive:SelfOrganizingSystems/page/n87/mode/1up|Perceptual Generalization over Transformation Groups]]", pp. 63--100 in ''Self-organizing Systems: Proceedings of an Inter-disciplinary Conference, 5 and 6 May 1959''. Edited by Marshall C. Yovitz and Scott Cameron. London, New York, [etc.], Pergamon Press, 1960. ix, 322 p.</ref>{{Pg|pages=73-75}} Later, in ''Principles of Neurodynamics'' (1961), he described "closed-loop cross-coupled" and "back-coupled" perceptron networks, and made theoretical and experimental studies for Hebbian learning in these networks,<ref name=":1">{{Cite book |last=Rosenblatt |first=Frank |url=https://archive.org/details/DTIC_AD0256582/page/n3/mode/2up |title=DTIC AD0256582: PRINCIPLES OF NEURODYNAMICS. PERCEPTRONS AND THE THEORY OF BRAIN MECHANISMS |date=1961-03-15 |publisher=Defense Technical Information Center |language=english}}</ref>{{Pg|___location=Chapter 19, 21}} and noted that a fully cross-coupled perceptron network is equivalent to an infinitely deep feedforward network.<ref name=":1" />{{Pg|___location=Section 19.11}}
 
Similar networks were published by Kaoru Nakano in 1971,<ref name="Nakano1971">{{cite book |last1=Nakano |first1=Kaoru |title=Pattern Recognition and Machine Learning |date=1971 |isbn=978-1-4615-7568-9 |pages=172–186 |chapter=Learning Process in a Model of Associative Memory |doi=10.1007/978-1-4615-7566-5_15}}</ref><ref name="Nakano1972">{{cite journal |last1=Nakano |first1=Kaoru |date=1972 |title=Associatron-A Model of Associative Memory |journal=IEEE Transactions on Systems, Man, and Cybernetics |volume=SMC-2 |issue=3 |pages=380–388 |doi=10.1109/TSMC.1972.4309133}}</ref>[[Shun'ichi Amari]] in 1972,<ref name="Amari1972">{{cite journal |last1=Amari |first1=Shun-Ichi |date=1972 |title=Learning patterns and pattern sequences by self-organizing nets of threshold elements |journal=IEEE Transactions |volume=C |issue=21 |pages=1197–1206}}</ref> and {{ill|William A. Little (physicist)|lt=William A. Little|de|William A. Little}} in 1974,<ref name="little74">{{cite journal |last=Little |first=W. A. |year=1974 |title=The Existence of Persistent States in the Brain |journal=Mathematical Biosciences |volume=19 |issue=1–2 |pages=101–120 |doi=10.1016/0025-5564(74)90031-5}}</ref> who was acknowledged by Hopfield in his 1982 paper.
 
Another origin of RNN was [[statistical mechanics]]. The [[Ising model]] was developed by [[Wilhelm Lenz]]<ref name="lenz1920">{{Citation |last=Lenz |first=W. |title=Beiträge zum Verständnis der magnetischen Eigenschaften in festen Körpern |journal=Physikalische Zeitschrift |volume=21 |pages=613–615 |year=1920 |postscript=. |author-link=Wilhelm Lenz}}</ref> and [[Ernst Ising]]<ref name="ising1925">{{citation |last=Ising |first=E. |title=Beitrag zur Theorie des Ferromagnetismus |journal=Z. Phys. |volume=31 |issue=1 |pages=253–258 |year=1925 |bibcode=1925ZPhy...31..253I |doi=10.1007/BF02980577 |s2cid=122157319}}</ref> in the 1920s<ref>{{cite journal |last1=Brush |first1=Stephen G. |year=1967 |title=History of the Lenz-Ising Model |journal=Reviews of Modern Physics |volume=39 |issue=4 |pages=883–893 |bibcode=1967RvMP...39..883B |doi=10.1103/RevModPhys.39.883}}</ref> as a simple statistical mechanical model of magnets at equilibrium. [[Roy J. Glauber|Glauber]] in 1963 studied the Ising model evolving in time, as a process towards equilibrium ([[Glauber dynamics]]), adding in the component of time.<ref name=":22">{{cite journal |last1=Glauber |first1=Roy J. |date=February 1963 |title=Roy J. Glauber "Time-Dependent Statistics of the Ising Model" |url=https://aip.scitation.org/doi/abs/10.1063/1.1703954 |journal=Journal of Mathematical Physics |volume=4 |issue=2 |pages=294–307 |doi=10.1063/1.1703954 |access-date=2021-03-21|url-access=subscription }}</ref>
 
The [[Spin glass|Sherrington–Kirkpatrick model]] of spin glass, published in 1975,<ref>{{Cite journal |last1=Sherrington |first1=David |last2=Kirkpatrick |first2=Scott |date=1975-12-29 |title=Solvable Model of a Spin-Glass |url=https://link.aps.org/doi/10.1103/PhysRevLett.35.1792 |journal=Physical Review Letters |volume=35 |issue=26 |pages=1792–1796 |doi=10.1103/PhysRevLett.35.1792 |bibcode=1975PhRvL..35.1792S |issn=0031-9007|url-access=subscription }}</ref> is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In the 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions.<ref name="Hopfield19822">{{cite journal |last1=Hopfield |first1=J. J. |date=1982 |title=Neural networks and physical systems with emergent collective computational abilities |journal=Proceedings of the National Academy of Sciences |volume=79 |issue=8 |pages=2554–2558 |bibcode=1982PNAS...79.2554H |doi=10.1073/pnas.79.8.2554 |pmc=346238 |pmid=6953413 |doi-access=free}}</ref> In a 1984 paper he extended this to continuous activation functions.<ref name=":02">{{cite journal |last1=Hopfield |first1=J. J. |date=1984 |title=Neurons with graded response have collective computational properties like those of two-state neurons |journal=Proceedings of the National Academy of Sciences |volume=81 |issue=10 |pages=3088–3092 |bibcode=1984PNAS...81.3088H |doi=10.1073/pnas.81.10.3088 |pmc=345226 |pmid=6587342 |doi-access=free}}</ref> It became a standard model for the study of neural networks through statistical mechanics.<ref>{{Cite book |last1=Engel |first1=A. |title=Statistical mechanics of learning |last2=Broeck |first2=C. van den |date=2001 |publisher=Cambridge University Press |isbn=978-0-521-77307-2 |___location=Cambridge, UK; New York, NY}}</ref><ref>{{Cite journal |last1=Seung |first1=H. S. |last2=Sompolinsky |first2=H. |last3=Tishby |first3=N. |date=1992-04-01 |title=Statistical mechanics of learning from examples |url=https://journals.aps.org/pra/abstract/10.1103/PhysRevA.45.6056 |journal=Physical Review A |volume=45 |issue=8 |pages=6056–6091 |doi=10.1103/PhysRevA.45.6056|pmid=9907706 |bibcode=1992PhRvA..45.6056S }}</ref>
 
===Modern===
Modern RNNs are mainly based on two architectures: LSTM and BRNN.<ref>{{Cite book |last1=Zhang |first1=Aston |title=Dive into deep learning |last2=Lipton |first2=Zachary |last3=Li |first3=Mu |last4=Smola |first4=Alexander J. |date=2024 |publisher=Cambridge University Press |isbn=978-1-009-38943-3 |___location=Cambridge New York Port Melbourne New Delhi Singapore |chapter=10. Modern Recurrent Neural Networks |chapter-url=https://d2l.ai/chapter_recurrent-modern/index.html}}</ref>
 
At the resurgence of neural networks in the 1980s, recurrent networks were studied again. They were sometimes called "iterated nets".<ref>{{Cite journal |last1=Rumelhart |first1=David E. |last2=Hinton |first2=Geoffrey E. |last3=Williams |first3=Ronald J. |date=October 1986 |title=Learning representations by back-propagating errors |url=https://www.nature.com/articles/323533a0 |journal=Nature |language=en |volume=323 |issue=6088 |pages=533–536 |doi=10.1038/323533a0 |bibcode=1986Natur.323..533R |issn=1476-4687|url-access=subscription }}</ref> Two early influential works were the [[#Jordan network|Jordan network]] (1986) and the [[#Elman network|Elman network]] (1990), which applied RNN to study [[cognitive psychology]]. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent [[Layer (deep learning)|layers]] in an RNN unfolded in time.<ref name="schmidhuber1993">{{Cite book |last=Schmidhuber |first=Jürgen |url=ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf |title=Habilitation thesis: System modeling and optimization |year=1993}}{{Dead link|date=June 2024|bot=InternetArchiveBot|fix-attempted=yes}} Page 150 ff demonstrates credit assignment across the equivalent of 1,200 layers in an unfolded RNN.</ref>
 
[[Long short-term memory]] (LSTM) networks were invented by [[Sepp Hochreiter|Hochreiter]] and [[Jürgen Schmidhuber|Schmidhuber]] in 1995 and set accuracy records in multiple application domains.<ref>{{Cite Q|Q98967430}}</ref><ref name="lstm">{{Cite journal |last1=Hochreiter |first1=Sepp |author-link=Sepp Hochreiter |last2=Schmidhuber |first2=Jürgen |date=1997-11-01 |title=Long Short-Term Memory |journal=Neural Computation |volume=9 |issue=8 |pages=1735–1780 |doi=10.1162/neco.1997.9.8.1735|pmid=9377276 |s2cid=1915014 }}</ref> It became the default choice for RNN architecture.
 
[[Bidirectional recurrent neural networks]] (BRNN) use two RNNs that process the same input in opposite directions.<ref name="Schuster">Schuster, Mike, and Kuldip K. Paliwal. "[https://www.researchgate.net/profile/Mike_Schuster/publication/3316656_Bidirectional_recurrent_neural_networks/links/56861d4008ae19758395f85c.pdf Bidirectional recurrent neural networks]." Signal Processing, IEEE Transactions on 45.11 (1997): 2673-2681.</ref> These two are often combined, giving the bidirectional LSTM architecture.
 
Around 2006, bidirectional LSTM started to revolutionize [[speech recognition]], outperforming traditional models in certain speech applications.<ref>{{Cite journal |last1=Graves |first1=Alex |last2=Schmidhuber |first2=Jürgen |date=2005-07-01 |title=Framewise phoneme classification with bidirectional LSTM and other neural network architectures |journal=Neural Networks |series=IJCNN 2005 |volume=18 |issue=5 |pages=602–610 |citeseerx=10.1.1.331.5800 |doi=10.1016/j.neunet.2005.06.042 |pmid=16112549 |s2cid=1856462}}</ref><ref name="fernandez2007keyword">{{Cite conference |last1=Fernández |first1=Santiago |last2=Graves |first2=Alex |last3=Schmidhuber |first3=Jürgen |year=2007 |title=An Application of Recurrent Neural Networks to Discriminative Keyword Spotting |url=http://dl.acm.org/citation.cfm?id=1778066.1778092 |book-title=Proceedings of the 17th International Conference on Artificial Neural Networks |series=ICANN'07 |___location=Berlin, Heidelberg |publisher=Springer-Verlag |pages=220–229 |isbn=978-3-540-74693-5 }}</ref> They also improved large-vocabulary speech recognition<ref name="sak2014" /><ref name="liwu2015" /> and [[text-to-speech]] synthesis<ref name="fan2015">{{cite conference |last1=Fan |first1=Bo |last2=Wang |first2=Lijuan |last3=Soong |first3=Frank K. |last4=Xie |first4=Lei |title=Photo-Real Talking Head with Deep Bidirectional LSTM |chapter-url= |editor= |book-title=Proceedings of ICASSP 2015 IEEE International Conference on Acoustics, Speech and Signal Processing |doi=10.1109/ICASSP.2015.7178899 |date=2015 |isbn=978-1-4673-6997-8 |pages=4884–8 }}</ref> and was used in [[Google Voice Search|Google voice search]], and dictation on [[Android (operating system)|Android devices]].<ref name="sak2015">{{Cite web |url=http://googleresearch.blogspot.ch/2015/09/google-voice-search-faster-and-more.html |title=Google voice search: faster and more accurate |last1=Sak |first1=Haşim |last2=Senior |first2=Andrew |date=September 2015 |last3=Rao |first3=Kanishka |last4=Beaufays |first4=Françoise |last5=Schalkwyk |first5=Johan}}</ref> They broke records for improved [[machine translation]],<ref name="sutskever2014">{{Cite journal |last1=Sutskever |first1=Ilya |last2=Vinyals |first2=Oriol |last3=Le |first3=Quoc V. 
|year=2014 |title=Sequence to Sequence Learning with Neural Networks |url=https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf |journal=Electronic Proceedings of the Neural Information Processing Systems Conference |volume=27 |page=5346 |arxiv=1409.3215 |bibcode=2014arXiv1409.3215S }}</ref> [[Language Modeling|language modeling]]<ref name="vinyals2016">{{cite arXiv |last1=Jozefowicz |first1=Rafal |last2=Vinyals |first2=Oriol |last3=Schuster |first3=Mike |last4=Shazeer |first4=Noam |last5=Wu |first5=Yonghui |date=2016-02-07 |title=Exploring the Limits of Language Modeling |eprint=1602.02410 |class=cs.CL}}</ref> and Multilingual Language Processing.<ref name="gillick2015">{{cite arXiv |last1=Gillick |first1=Dan |last2=Brunk |first2=Cliff |last3=Vinyals |first3=Oriol |last4=Subramanya |first4=Amarnag |date=2015-11-30 |title=Multilingual Language Processing From Bytes |eprint=1512.00103 |class=cs.CL}}</ref> Also, LSTM combined with [[convolutional neural network]]s (CNNs) improved [[automatic image captioning]].<ref name="vinyals2015">{{cite arXiv |last1=Vinyals |first1=Oriol |last2=Toshev |first2=Alexander |last3=Bengio |first3=Samy |last4=Erhan |first4=Dumitru |date=2014-11-17 |title=Show and Tell: A Neural Image Caption Generator |eprint=1411.4555 |class=cs.CV }}</ref>
 
The idea of encoder-decoder sequence transduction had been developed in the early 2010s. The papers most commonly cited as the originators of seq2seq are two papers from 2014.<ref name=":2">{{Cite arXiv |last1=Cho |first1=Kyunghyun |last2=van Merrienboer |first2=Bart |last3=Gulcehre |first3=Caglar |last4=Bahdanau |first4=Dzmitry |last5=Bougares |first5=Fethi |last6=Schwenk |first6=Holger |last7=Bengio |first7=Yoshua |date=2014-06-03 |title=Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation |class=cs.CL |eprint=1406.1078}}</ref><ref name="sequence">{{cite arXiv |eprint=1409.3215 |class=cs.CL |first1=Ilya |last1=Sutskever |first2=Oriol |last2=Vinyals |title=Sequence to sequence learning with neural networks |date=14 Dec 2014 |last3=Le |first3=Quoc Viet}} [first version posted to arXiv on 10 Sep 2014]</ref> A [[seq2seq]] architecture employs two RNNs, typically LSTMs, as an "encoder" and a "decoder", for sequence transduction, such as machine translation. They became state of the art in machine translation, and were instrumental in the development of [[Attention (machine learning)|attention mechanisms]] and [[Transformer (deep learning architecture)|transformers]].
 
==Configurations==
{{main|Layer (deep learning)}}
 
An RNN-based model can be factored into two parts: configuration and architecture. Multiple RNNs can be combined in a data flow, and the data flow itself is the configuration. Each RNN itself may have any architecture, including LSTM, GRU, etc.
 
=== Standard ===
[[File:Recurrent neural network unfold.svg|thumb|Compressed (left) and unfolded (right) basic recurrent neural network]]
RNNs come in many variants. Abstractly speaking, an RNN is a function <math>f_\theta</math> of type <math>(x_t, h_t) \mapsto (y_t, h_{t+1})</math>, where
 
*<math>x_t</math>: input vector;
* <math>h_t</math>: hidden vector;
* <math>y_t</math>: output vector;
* <math>\theta</math>: neural network parameters.
 
In words, it is a neural network that maps an input <math>x_t</math> into an output <math>y_t</math>, with the hidden vector <math>h_t</math> playing the role of "memory", a partial record of all previous input-output pairs. At each step, it transforms input to an output, and modifies its "memory" to help it to better perform future processing.
 
The illustration to the right may be misleading to many because practical neural network topologies are frequently organized in "layers" and the drawing gives that appearance. However, what appears to be [[Layer (deep learning)|layers]] are, in fact, different steps in time, "unfolded" to produce the appearance of [[Layer (deep learning)|layers]].
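
The following is a minimal sketch of this abstract interface in Python with [[NumPy]]; the function name <code>rnn_step</code>, the dimensions and the random parameters are illustrative rather than part of any standard library.

<syntaxhighlight lang="python">
import numpy as np

def rnn_step(params, x_t, h_t):
    """One application of f_theta: maps (x_t, h_t) to (y_t, h_next)."""
    W_h, U_h, b_h, W_y, b_y = params
    h_next = np.tanh(W_h @ x_t + U_h @ h_t + b_h)  # update the "memory"
    y_t = W_y @ h_next + b_y                       # read an output from the new memory
    return y_t, h_next

rng = np.random.default_rng(0)
params = (rng.normal(size=(8, 4)),   # W_h: input -> hidden
          rng.normal(size=(8, 8)),   # U_h: hidden -> hidden (the recurrence)
          np.zeros(8),               # b_h
          rng.normal(size=(3, 8)),   # W_y: hidden -> output
          np.zeros(3))               # b_y

h = np.zeros(8)                      # initial hidden state ("empty memory")
for x in rng.normal(size=(5, 4)):    # a sequence of five 4-dimensional inputs
    y, h = rnn_step(params, x, h)    # the hidden state carries information forward
</syntaxhighlight>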
 
=== Stacked RNN ===
[[File:Stacked_RNN.png|thumb|Stacked RNN]]A '''stacked RNN''', or '''deep RNN''', is composed of multiple RNNs stacked one above the other. Abstractly, it is structured as follows:
 
# Layer 1 has hidden vector <math>h_{1, t}</math>, parameters <math>\theta_1</math>, and maps <math>f_{\theta_1} : (x_{0, t}, h_{1, t}) \mapsto (x_{1, t}, h_{1, t+1}) </math>.
# Layer 2 has hidden vector <math>h_{2, t}</math>, parameters <math>\theta_2</math>, and maps <math>f_{\theta_2} : (x_{1, t}, h_{2, t}) \mapsto (x_{2, t}, h_{2, t+1}) </math>.
# ...
# Layer <math>n </math> has hidden vector <math>h_{n, t}</math>, parameters <math>\theta_n</math>, and maps <math>f_{\theta_n} : (x_{n-1, t}, h_{n, t}) \mapsto (x_{n, t}, h_{n, t+1}) </math>.
 
Each layer operates as a stand-alone RNN, and each layer's output sequence is used as the input sequence to the layer above. There is no conceptual limit to the depth of stacked RNN.
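
A minimal sketch of a two-layer stacked RNN in Python with NumPy, assuming the simple <math>\tanh</math> recurrent unit above; all names and sizes are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def rnn_step(Wx, Wh, b, x_t, h_t):
    # One recurrent unit: the new hidden state doubles as the layer's output.
    return np.tanh(Wx @ x_t + Wh @ h_t + b)

def stacked_rnn(layers, xs):
    """layers: list of (Wx, Wh, b); xs: sequence of input vectors.
    Each layer's output sequence becomes the input sequence of the layer above."""
    seq = list(xs)
    for Wx, Wh, b in layers:
        h = np.zeros(Wh.shape[0])
        out = []
        for x in seq:
            h = rnn_step(Wx, Wh, b, x, h)
            out.append(h)
        seq = out          # feed the whole sequence upward
    return seq             # outputs of the top layer, one per time step

rng = np.random.default_rng(0)
dims = [4, 8, 8]           # input size, then hidden sizes of two stacked layers
layers = [(rng.normal(size=(dims[i + 1], dims[i])),
           rng.normal(size=(dims[i + 1], dims[i + 1])),
           np.zeros(dims[i + 1])) for i in range(2)]
top = stacked_rnn(layers, rng.normal(size=(6, 4)))   # 6 time steps
</syntaxhighlight>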
 
===Bidirectional===
{{Main|Bidirectional recurrent neural networks}}
[[File:Bidirectional_RNN.png|thumb|Bidirectional RNN]]A '''bidirectional RNN''' (biRNN) is composed of two RNNs, one processing the input sequence in one direction, and another in the opposite direction. Abstractly, it is structured as follows:
 
* The forward RNN processes in one direction: <math display="block">f_{\theta}(x_0, h_0) = (y_0, h_{1}), f_{\theta}(x_1, h_1) = (y_1, h_{2}), \dots</math>
* The backward RNN processes in the opposite direction:<math display="block">f'_{\theta'}(x_N, h_N') = (y'_N, h_{N-1}'), f'_{\theta'}(x_{N-1}, h_{N-1}') = (y'_{N-1}, h_{N-2}'), \dots</math>
 
The two output sequences are then concatenated to give the total output: <math>((y_0, y_0'), (y_1, y_1'), \dots, (y_N, y_N'))</math>.
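
A sketch of this scheme with a simple <math>\tanh</math> unit, assuming NumPy; the backward outputs are re-reversed so that position <math>t</math> holds both <math>y_t</math> and <math>y'_t</math>.

<syntaxhighlight lang="python">
import numpy as np

def run_rnn(Wx, Wh, b, xs):
    # Run a simple recurrent unit over a sequence, returning one hidden vector per step.
    h, out = np.zeros(Wh.shape[0]), []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
        out.append(h)
    return out

def bidirectional_rnn(fwd, bwd, xs):
    """fwd, bwd: parameter tuples (Wx, Wh, b) of two independent RNNs.
    The backward RNN reads the sequence reversed; its outputs are re-reversed
    so that position t holds context from both directions."""
    ys_f = run_rnn(*fwd, xs)
    ys_b = run_rnn(*bwd, xs[::-1])[::-1]
    return [np.concatenate([f, b]) for f, b in zip(ys_f, ys_b)]

rng = np.random.default_rng(0)
make = lambda: (rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), np.zeros(8))
outputs = bidirectional_rnn(make(), make(), list(rng.normal(size=(5, 4))))  # 5 steps, 16-dim outputs
</syntaxhighlight>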
 
Bidirectional RNN allows the model to process a token both in the context of what came before it and what came after it. By stacking multiple bidirectional RNNs together, the model can process a token increasingly contextually. The [[ELMo]] model (2018)<ref>{{cite arXiv |eprint=1802.05365 |class=cs.CL |title=Deep contextualized word representations |date=2018 |vauthors=Peters ME, Neumann M, Iyyer M, Gardner M, Clark C, Lee K, Zettlemoyer L}}</ref> is a stacked bidirectional [[Long short-term memory|LSTM]] which takes character-level inputs and produces word-level embeddings.
 
=== Encoder-decoder ===
{{Main|seq2seq}}
[[File:Decoder RNN.png|thumb|A decoder without an encoder]]
[[File:Seq2seq_RNN_encoder-decoder_with_attention_mechanism,_training_and_inferring.png|thumb|Encoder-decoder RNN without attention mechanism]]
[[File:Seq2seq_RNN_encoder-decoder_with_attention_mechanism,_training.png|thumb|Encoder-decoder RNN with attention mechanism]]
 
Two RNNs can be run front-to-back in an '''encoder-decoder''' configuration. The encoder RNN processes an input sequence into a sequence of hidden vectors, and the decoder RNN processes the sequence of hidden vectors into an output sequence, with an optional [[Attention (machine learning)|attention mechanism]]. This was used to construct state-of-the-art [[Neural machine translation|neural machine translators]] during the 2014–2017 period. This was an instrumental step towards the development of [[Transformer (deep learning architecture)|transformers]].<ref>{{Cite journal |last1=Vaswani |first1=Ashish |last2=Shazeer |first2=Noam |last3=Parmar |first3=Niki |last4=Uszkoreit |first4=Jakob |last5=Jones |first5=Llion |last6=Gomez |first6=Aidan N |last7=Kaiser |first7=Łukasz |last8=Polosukhin |first8=Illia |date=2017 |title=Attention is All you Need |url=https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=Curran Associates, Inc. |volume=30}}</ref>
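
A sketch of this configuration without attention, where only the encoder's final hidden state is handed to the decoder; the sizes, names and the fixed-length decoding loop are illustrative.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out = 4, 8, 5   # illustrative sizes (d_out plays the role of a tiny vocabulary)

enc = (rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)), np.zeros(d_h))
dec = (rng.normal(size=(d_h, d_out)), rng.normal(size=(d_h, d_h)), np.zeros(d_h),
       rng.normal(size=(d_out, d_h)), np.zeros(d_out))

def encode(xs):
    # The encoder compresses the whole input sequence into its final hidden state.
    Wx, Wh, b = enc
    h = np.zeros(d_h)
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h

def decode(h, steps):
    # The decoder starts from the encoder's state and feeds its own previous
    # output back in as its next input (no attention in this sketch).
    Wx, Wh, b, Wy, by = dec
    y, out = np.zeros(d_out), []
    for _ in range(steps):
        h = np.tanh(Wx @ y + Wh @ h + b)
        y = Wy @ h + by
        out.append(y)
    return out

translation = decode(encode(rng.normal(size=(6, d_in))), steps=4)
</syntaxhighlight>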
 
=== PixelRNN ===
An RNN may process data with more than one dimension. PixelRNN processes two-dimensional data, with many possible directions.<ref>{{Cite journal |last1=Oord |first1=Aäron van den |last2=Kalchbrenner |first2=Nal |last3=Kavukcuoglu |first3=Koray |date=2016-06-11 |title=Pixel Recurrent Neural Networks |url=https://proceedings.mlr.press/v48/oord16.html |journal=Proceedings of the 33rd International Conference on Machine Learning |publisher=PMLR |pages=1747–1756}}</ref> For example, the row-by-row direction processes an <math>n \times n</math> grid of vectors <math>x_{i, j}</math> in the following order: <math display="block">x_{1, 1}, x_{1, 2}, \dots, x_{1, n}, x_{2, 1}, x_{2, 2}, \dots, x_{2, n}, \dots, x_{n, n}</math>The '''diagonal BiLSTM''' uses two LSTMs to process the same grid. One processes it from the top-left corner to the bottom-right, such that it processes <math>x_{i, j}</math> depending on its hidden state and cell state on the top and the left side: <math>h_{i-1, j}, c_{i-1, j}</math> and <math>h_{i, j-1}, c_{i, j-1}</math>. The other processes it from the top-right corner to the bottom-left.
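
A sketch of the row-by-row processing order with an arbitrary recurrent step function; the diagonal BiLSTM variant is not shown.

<syntaxhighlight lang="python">
import numpy as np

def row_by_row_rnn(step, grid, d_h):
    """Process an n-by-n grid of vectors in raster order
    x[0,0], x[0,1], ..., x[0,n-1], x[1,0], ..., x[n-1,n-1]."""
    n = grid.shape[0]
    h = np.zeros(d_h)
    for i in range(n):
        for j in range(n):
            h = step(grid[i, j], h)   # one recurrent update per grid cell
    return h

rng = np.random.default_rng(0)
Wx, Wh = rng.normal(size=(8, 3)), rng.normal(size=(8, 8))
step = lambda x, h: np.tanh(Wx @ x + Wh @ h)
final_state = row_by_row_rnn(step, rng.normal(size=(5, 5, 3)), d_h=8)
</syntaxhighlight>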
 
== Architectures ==
 
===Fully recurrent ===
[[File:Hopfield-net-vector.svg|thumb|A fully connected RNN with 4 neurons]]
'''Fully recurrent neural networks''' (FRNN) connect the outputs of all neurons to the inputs of all neurons. In other words, it is a [[fully connected network]]. This is the most general neural network topology, because all other topologies can be represented by setting some connection weights to zero to simulate the lack of connections between those neurons.
[[File:RNN architecture.png|thumb|A simple Elman network where <math>\sigma_h = \tanh, \sigma_y = \text{Identity} </math>]]
 
===Hopfield ===
{{Main|Hopfield network}}
 
The '''[[Hopfield network]]''' is an RNN in which all connections across layers are equally sized. It requires [[Stationary process|stationary]] inputs and is thus not a general RNN, as it does not process sequences of patterns. However, it guarantees that it will converge. If the connections are trained using [[Hebbian learning]], then the Hopfield network can perform as [[Robustness (computer science)|robust]] [[content-addressable memory]], resistant to connection alteration.
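
A minimal sketch of Hebbian storage and recall in a Hopfield network, assuming bipolar (±1) patterns and, for brevity, synchronous updates.

<syntaxhighlight lang="python">
import numpy as np

def hebbian_store(patterns):
    # Hebbian learning: sum of outer products of the stored bipolar patterns,
    # with the diagonal (self-connections) zeroed out.
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, state, steps=10):
    # Repeatedly update all units with binary (sign) activations.
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
W = hebbian_store(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])      # corrupted copy of the first pattern
print(recall(W, noisy))                      # recovers [ 1 -1  1 -1  1 -1]
</syntaxhighlight>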
 
==={{Anchor|Elman network|Jordan network}}Elman networks and Jordan networks===
[[File:Elman srnn.png|thumb|right|The Elman network]]
 
An '''[[Jeff Elman|Elman]] network''' is a three-layer network (arranged horizontally as ''x'', ''y'', and ''z'' in the illustration) with the addition of a set of context units (''u'' in the illustration). The middle (hidden) layer is connected to these context units fixed with a weight of one.<ref name="bmm615">Cruse, Holk; [http://www.brains-minds-media.org/archive/615/bmm615.pdf ''Neural Networks as Cybernetic Systems''], 2nd and revised edition</ref> At each time step, the input is fed forward and a [[learning rule]] is applied. The fixed back-connections save a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform tasks such as sequence prediction that are beyond the power of a standard [[multilayer perceptron]].
 
'''[[Michael I. Jordan|Jordan]] networks''' are similar to Elman networks. The context units are fed from the output layer instead of the hidden layer. The context units in a Jordan network are also called the state layer. They have a recurrent connection to themselves.<ref name="bmm615" />
 
Elman and Jordan networks are also known as "Simple recurrent networks" (SRN).

;Elman network
:<math>
\begin{align}
h_t &= \sigma_h(W_{h} x_t + U_{h} h_{t-1} + b_h) \\
y_t &= \sigma_y(W_{y} h_t + b_y)
\end{align}
</math>
;Jordan network<ref>{{Cite book |last=Jordan |first=Michael I. |title=Neural-Network Models of Cognition - Biobehavioral Foundations |chapter=Serial Order: A Parallel Distributed Processing Approach |date=1997-01-01 |series=Advances in Psychology |volume=121 |pages=471–495 |doi=10.1016/s0166-4115(97)80111-2 |isbn=978-0-444-81931-4 |s2cid=15375627}}</ref>
:<math>
\begin{align}
h_t &= \sigma_h(W_{h} x_t + U_{h} s_{t-1} + b_h) \\
y_t &= \sigma_y(W_{y} h_t + b_y)\\
s_t &= \sigma_s(W_{s, s} s_{t-1} + W_{s, y} y_{t-1} + b_s)
\end{align}
</math>

Variables and functions
* <math>x_t</math>: input vector
* <math>h_t</math>: hidden layer vector
* <math>s_t</math>: "state" vector
* <math>y_t</math>: output vector
* <math>W</math>, <math>U</math> and <math>b</math>: parameter matrices and vector
* <math>\sigma_h</math>, <math>\sigma_y</math> and <math>\sigma_s</math>: [[Activation function]]s
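
A direct transcription of the Elman update equations above into Python with NumPy, taking <math>\sigma_h = \tanh</math> and <math>\sigma_y</math> as the identity; the parameter shapes are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def elman_step(p, x_t, h_prev):
    """One Elman-network step:
    h_t = sigma_h(W_h x_t + U_h h_{t-1} + b_h),  y_t = sigma_y(W_y h_t + b_y)."""
    h_t = np.tanh(p["W_h"] @ x_t + p["U_h"] @ h_prev + p["b_h"])   # sigma_h = tanh
    y_t = p["W_y"] @ h_t + p["b_y"]                                # sigma_y = identity
    return y_t, h_t

rng = np.random.default_rng(0)
p = {"W_h": rng.normal(size=(8, 4)), "U_h": rng.normal(size=(8, 8)), "b_h": np.zeros(8),
     "W_y": rng.normal(size=(3, 8)), "b_y": np.zeros(3)}

h = np.zeros(8)                       # the context units start at zero
for x in rng.normal(size=(5, 4)):     # a 5-step input sequence
    y, h = elman_step(p, x, h)        # the context units keep a copy of h_{t-1}
</syntaxhighlight>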
 
===Long short-term memory===
{{Main|Long short-term memory}}
 
[[File:Long Short-Term Memory.svg|thumb|Long short-term memory unit]]
'''Long short-term memory''' (LSTM) is the most widely used RNN architecture. It was designed to solve the [[vanishing gradient problem]]. LSTM is normally augmented by recurrent gates called "forget gates".<ref name="gers2002">{{Cite journal |last1=Gers |first1=Felix A. |last2=Schraudolph |first2=Nicol N. |last3=Schmidhuber |first3=Jürgen |year=2002 |title=Learning Precise Timing with LSTM Recurrent Networks |url=http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf |journal=Journal of Machine Learning Research |volume=3 |pages=115–143 |access-date=2017-06-13}}</ref> LSTM prevents backpropagated errors from vanishing or exploding.<ref name="hochreiter1991" /> Instead, errors can flow backward through unlimited numbers of virtual layers unfolded in space. That is, LSTM can learn tasks that require memories of events that happened thousands or even millions of discrete time steps earlier. Problem-specific LSTM-like topologies can be evolved.<ref name="bayer2009">{{Cite book |last1=Bayer |first1=Justin |url=http://mediatum.ub.tum.de/doc/1289041/document.pdf |title=Artificial Neural Networks – ICANN 2009 |last2=Wierstra |first2=Daan |last3=Togelius |first3=Julian |last4=Schmidhuber |first4=Jürgen |date=2009-09-14 |publisher=Springer |isbn=978-3-642-04276-8 |series=Lecture Notes in Computer Science |volume=5769 |___location=Berlin, Heidelberg |pages=755–764 |chapter=Evolving Memory Cell Structures for Sequence Learning |doi=10.1007/978-3-642-04277-5_76}}</ref> LSTM works even given long delays between significant events and can handle signals that mix low and high-frequency components.
 
Many applications use stacks of LSTMs,<ref name="fernandez2007">{{Cite conference |last1=Fernández |first1=Santiago |last2=Graves |first2=Alex |last3=Schmidhuber |first3=Jürgen |year=2007 |title=Sequence labelling in structured domains with hierarchical recurrent neural networks |url=https://www.ijcai.org/Proceedings/07/Papers/124.pdf |pages=774–9 |citeseerx=10.1.1.79.1887 |book-title=Proceedings of the 20th International Joint Conference on Artificial Intelligence, Ijcai 2007}}</ref> for which it is called "deep LSTM". LSTM can learn to recognize [[context-sensitive languages]] unlike previous models based on [[hidden Markov model]]s (HMM) and similar concepts.<ref name="peepholeLSTM" />
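
A sketch of a single LSTM step with input, forget and output gates (peephole connections are omitted); the parameter names and sizes are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(p, x_t, h_prev, c_prev):
    """One LSTM step with input (i), forget (f) and output (o) gates.
    The cell state c acts as long-term memory; the gates decide what to
    write, what to erase and what to expose as the hidden state h."""
    z = np.concatenate([x_t, h_prev])
    i = sigmoid(p["W_i"] @ z + p["b_i"])
    f = sigmoid(p["W_f"] @ z + p["b_f"])
    o = sigmoid(p["W_o"] @ z + p["b_o"])
    g = np.tanh(p["W_g"] @ z + p["b_g"])     # candidate cell update
    c = f * c_prev + i * g                   # forget old content, write new content
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
d_in, d_h = 4, 8
p = {k: rng.normal(size=(d_h, d_in + d_h)) * 0.1 for k in ("W_i", "W_f", "W_o", "W_g")}
p.update({b: np.zeros(d_h) for b in ("b_i", "b_f", "b_o", "b_g")})

h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.normal(size=(6, d_in)):
    h, c = lstm_step(p, x, h, c)
</syntaxhighlight>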
 
===Gated recurrent unit===
{{Main|Gated recurrent unit}}
 
[[File:Gated Recurrent Unit.svg|thumb|Gated recurrent unit]]
'''Gated recurrent unit''' (GRU), introduced in 2014, was designed as a simplification of LSTM. They are used in the full form and several further simplified variants.<ref>{{cite arXiv |eprint=1701.03452 |class=cs.NE |first1=Joel |last1=Heck |first2=Fathi M. |last2=Salem |title=Simplified Minimal Gated Unit Variations for Recurrent Neural Networks |date=2017-01-12}}</ref><ref>{{cite arXiv |eprint=1701.05923 |class=cs.NE |first1=Rahul |last1=Dey |first2=Fathi M. |last2=Salem |title=Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks |date=2017-01-20}}</ref> They have fewer parameters than LSTM, as they lack an output gate.<ref name="MyUser_Wildml.com_May_18_2016c">{{cite web |last=Britz |first=Denny |date=October 27, 2015 |title=Recurrent Neural Network Tutorial, Part 4 – Implementing a GRU/LSTM RNN with Python and Theano – WildML |url=http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/ |access-date=May 18, 2016 |newspaper=Wildml.com}}</ref>
 
Their performance on polyphonic music modeling and speech signal modeling was found to be similar to that of long short-term memory.<ref name="MyUser_Arxiv.org_May_18_2016c2">{{cite arXiv |eprint=1412.3555 |class=cs.NE |first1=Junyoung |last1=Chung |first2=Caglar |last2=Gulcehre |title=Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling |last3=Cho |first3=KyungHyun |last4=Bengio |first4=Yoshua |year=2014}}</ref> There does not appear to be a particular performance difference between LSTM and GRU.<ref name="MyUser_Arxiv.org_May_18_2016c2"/><ref name="gruber_jockisch">{{citation |last1=Gruber |first1=N. |title=Are GRU cells more specific and LSTM cells more sensitive in motive classification of text? |journal=Frontiers in Artificial Intelligence |volume=3 |page=40 |year=2020 |doi=10.3389/frai.2020.00040 |pmc=7861254 |pmid=33733157 |s2cid=220252321 |last2=Jockisch |first2=A. |doi-access=free}}</ref>
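
A sketch of a single GRU step, showing why it has fewer parameters than an LSTM: there is no output gate and no separate cell state. The gating convention (which term the update gate multiplies) varies between references.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(p, x_t, h_prev):
    """One GRU step with an update gate z and a reset gate r."""
    z = sigmoid(p["W_z"] @ x_t + p["U_z"] @ h_prev + p["b_z"])   # how much to rewrite
    r = sigmoid(p["W_r"] @ x_t + p["U_r"] @ h_prev + p["b_r"])   # how much past to use
    h_tilde = np.tanh(p["W_h"] @ x_t + p["U_h"] @ (r * h_prev) + p["b_h"])
    return (1 - z) * h_prev + z * h_tilde    # blend old state with the candidate

rng = np.random.default_rng(0)
d_in, d_h = 4, 8
p = {}
for g in ("z", "r", "h"):
    p[f"W_{g}"] = rng.normal(size=(d_h, d_in)) * 0.1
    p[f"U_{g}"] = rng.normal(size=(d_h, d_h)) * 0.1
    p[f"b_{g}"] = np.zeros(d_h)

h = np.zeros(d_h)
for x in rng.normal(size=(6, d_in)):
    h = gru_step(p, x, h)
</syntaxhighlight>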
 
===Bidirectional associative memory===
{{Main|Bidirectional associative memory}}
 
Introduced by [[Bart Kosko]],<ref>{{cite journal |year=1988 |title=Bidirectional associative memories |journal=IEEE Transactions on Systems, Man, and Cybernetics |volume=18 |issue=1 |pages=49–60 |doi=10.1109/21.87054 |last1=Kosko |first1=Bart |s2cid=59875735 }}</ref> a bidirectional associative memory (BAM) network is a variant of a Hopfield network that stores associative data as a vector. The bidirectionality comes from passing information through a matrix and its [[transpose]]. Typically, [[bipolar encoding]] is preferred to binary encoding of the associative pairs. Recently, stochastic BAM models using [[Markov chain|Markov]] stepping have been optimized for increased network stability and relevance to real-world applications.<ref>{{cite journal |last1=Rakkiyappan |first1=Rajan |last2=Chandrasekar |first2=Arunachalam |last3=Lakshmanan |first3=Subramanian |last4=Park |first4=Ju H. |date=2 January 2015 |title=Exponential stability for markovian jumping stochastic BAM neural networks with mode-dependent probabilistic time-varying delays and impulse control |journal=Complexity |volume=20 |issue=3 |pages=39–65 |doi=10.1002/cplx.21503 |bibcode=2015Cmplx..20c..39R }}</ref>
 
A BAM network has two layers, either of which can be driven as an input to recall an association and produce an output on the other layer.
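
A minimal sketch of a BAM storing two bipolar pattern pairs as a sum of outer products and recalling them by alternating passes through the weight matrix and its transpose; the patterns are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def bam_store(pairs):
    # Store bipolar (+1/-1) pattern pairs (a, b) as a sum of outer products.
    return sum(np.outer(a, b) for a, b in pairs).astype(float)

def bam_recall(W, a, steps=5):
    """Recall from a (possibly noisy) pattern on layer A: information passes
    through the matrix to layer B and back through its transpose."""
    sign = lambda v: np.where(v >= 0, 1, -1)
    for _ in range(steps):
        b = sign(W.T @ a)   # layer A drives layer B
        a = sign(W @ b)     # layer B drives layer A back
    return a, b

pairs = [(np.array([1, -1, 1, -1, 1, -1]), np.array([1, 1, -1, -1])),
         (np.array([1, 1, -1, -1, 1, 1]), np.array([-1, 1, -1, 1]))]
W = bam_store(pairs)
noisy_a = np.array([1, -1, 1, -1, 1, 1])   # corrupted copy of the first "a" pattern
a_rec, b_rec = bam_recall(W, noisy_a)      # recovers the first stored (a, b) pair
</syntaxhighlight>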

===Echo state===
{{Main|Echo state network}}
 
[[Echo state network|'''Echo state networks''']] (ESN) have a sparsely connected random hidden layer. The weights of output neurons are the only part of the network that can change (be trained). ESNs are good at reproducing certain [[time series]].<ref>{{Cite journal |last1=Jaeger |first1=Herbert |last2=Haas |first2=Harald |date=2004-04-02 |title=Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication |journal=Science |volume=304 |issue=5667 |pages=78–80 |doi=10.1126/science.1091277 |pmid=15064413 |bibcode=2004Sci...304...78J |citeseerx=10.1.1.719.2301 |s2cid=2184251 }}</ref> A variant for [[Spiking neural network|spiking neurons]] is known as a [[liquid state machine]].<ref>{{cite journal |first1=Wolfgang |last1=Maass |first2=Thomas |last2=Natschläger |first3=Henry |last3=Markram |title=Real-time computing without stable states: a new framework for neural computation based on perturbations |journal=Neural Computation |date=2002 |volume=14 |issue=11 |pages=2531–2560 |doi=10.1162/089976602760407955 |pmid=12433288 |s2cid=1045112 |url=https://igi-web.tugraz.at/people/maass/psfiles/130.pdf }}</ref>
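
A minimal echo state network sketch: the input and reservoir weights stay fixed, and only a linear readout is fitted by [[ridge regression]]; the spectral-radius scaling and the regularisation constant are illustrative choices.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 1

# Fixed random reservoir: only the linear readout below is ever trained.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(us):
    x, states = np.zeros(n_res), []
    for u in us:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x)
    return np.array(states)

# Toy task: predict the next value of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
X, y = run_reservoir(u[:-1]), u[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)   # ridge-regression readout
pred = X @ W_out                # one-step-ahead predictions of the trained ESN
</syntaxhighlight>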
 
===Independently RNN (IndRNN) ===
The Independently recurrent neural network (IndRNN)<ref name="auto">{{cite arXiv |title= Independently Recurrent Neural Network (IndRNN): Building a Longer and Deeper RNN|last1=Li |first1=Shuai |last2=Li |first2=Wanqing |last3=Cook |first3=Chris |last4=Zhu |first4=Ce |last5=Yanbo |first5=Gao |eprint=1803.04831|class=cs.CV |year=2018 }}</ref> addresses the gradient vanishing and exploding problems in the traditional fully connected RNN. Each neuron in one layer only receives its own past state as context information (instead of full connectivity to all other neurons in this layer) and thus neurons are independent of each other's history. The gradient backpropagation can be regulated to avoid gradient vanishing and exploding in order to keep long or short-term memory. The cross-neuron information is explored in the next layers. IndRNN can be robustly trained with the non-saturated nonlinear functions such as ReLU. Using skip connections, deep networks can be trained.
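
A sketch of the IndRNN update, in which the recurrent weight is a vector applied element-wise so that each neuron depends only on its own past state; the sizes and initial values are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def indrnn_step(W, u, b, x_t, h_prev):
    """IndRNN update: the recurrent weight u is a vector applied element-wise
    (Hadamard product) rather than a full matrix, and a non-saturated
    nonlinearity such as ReLU can be used."""
    return np.maximum(0.0, W @ x_t + u * h_prev + b)    # ReLU activation

rng = np.random.default_rng(0)
d_in, d_h = 4, 8
W = rng.normal(size=(d_h, d_in)) * 0.1
u = rng.uniform(-1.0, 1.0, size=d_h)   # keeping |u| bounded regulates the gradient over time
b = np.zeros(d_h)

h = np.zeros(d_h)
for x in rng.normal(size=(6, d_in)):
    h = indrnn_step(W, u, b, x, h)
</syntaxhighlight>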
 
===Recursive ===
{{Main|Recursive neural network}}
 
A '''[[recursive neural network]]'''<ref>{{cite book |last1=Goller |first1=Christoph |last2=Küchler |first2=Andreas |title=Proceedings of International Conference on Neural Networks (ICNN'96) |year=1996 |isbn=978-0-7803-3210-2 |volume=1 |page=347 |chapter=Learning task-dependent distributed representations by backpropagation through structure |citeseerx=10.1.1.52.4759 |doi=10.1109/ICNN.1996.548916 |s2cid=6536466}}</ref> is created by applying the same set of weights [[recursion|recursively]] over a differentiable graph-like structure by traversing the structure in [[topological sort|topological order]]. Such networks are typically also trained by the reverse mode of [[automatic differentiation]].<ref name="lin1970">{{cite thesis |first=Seppo |last=Linnainmaa |author-link=Seppo Linnainmaa |year=1970 |title=The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors |type=MSc thesis |language=fi |publisher=University of Helsinki }}</ref><ref name="grie2008">{{cite book |last1=Griewank |first1=Andreas |last2=Walther |first2=Andrea |author2-link=Andrea Walther |title=Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation |url={{google books |plainurl=y |id=xoiiLaRxcbEC}} |edition=Second |publisher=SIAM |year=2008 |isbn=978-0-89871-776-1}}</ref> They can process [[distributed representation]]s of structure, such as [[mathematical logic|logical terms]]. A special case of recursive neural networks is the RNN whose structure corresponds to a linear chain. Recursive neural networks have been applied to [[natural language processing]].<ref>{{citation |last1=Socher |first1=Richard |last2=Lin |first2=Cliff |last3=Ng |first3=Andrew Y. |last4=Manning |first4=Christopher D. |title=28th International Conference on Machine Learning (ICML 2011) |contribution=Parsing Natural Scenes and Natural Language with Recursive Neural Networks |contribution-url=https://ai.stanford.edu/~ang/papers/icml11-ParsingWithRecursiveNeuralNetworks.pdf}}</ref> The ''recursive neural tensor network'' uses a [[tensor]]-based composition function for all nodes in the tree.<ref>{{cite journal |last1=Socher |first1=Richard |last2=Perelygin |first2=Alex |last3=Wu |first3=Jean Y. |last4=Chuang |first4=Jason |last5=Manning |first5=Christopher D. |last6=Ng |first6=Andrew Y. |last7=Potts |first7=Christopher |title=Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank |url=http://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf |journal=Emnlp 2013}}</ref>
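
A sketch of a recursive neural network composing vectors over a small binary tree with a single shared weight matrix; the tree, the vectors and the sizes are illustrative.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
d = 4
W = rng.normal(size=(d, 2 * d)) * 0.5   # one weight matrix shared across the whole tree
b = np.zeros(d)

def compose(node):
    """Recursively compute a vector for each tree node: leaves carry word vectors,
    and the same (W, b) is applied to every pair of child representations."""
    if isinstance(node, np.ndarray):          # leaf: already a vector
        return node
    left, right = node
    return np.tanh(W @ np.concatenate([compose(left), compose(right)]) + b)

# A tiny parse tree: ((w1 w2) w3), with random vectors standing in for word embeddings.
w1, w2, w3 = (rng.normal(size=d) for _ in range(3))
root_vector = compose(((w1, w2), w3))         # representation of the whole phrase
</syntaxhighlight>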
 
===Neural history compressor===
The neural history compressor is an unsupervised stack of RNNs.<ref name="schmidhuber1992">{{cite journal |last1=Schmidhuber |first1=Jürgen |year=1992 |title=Learning complex, extended sequences using the principle of history compression |url=ftp://ftp.idsia.ch/pub/juergen/chunker.pdf |journal=Neural Computation |volume=4 |issue=2 |pages=234–242 |doi=10.1162/neco.1992.4.2.234 |s2cid=18271205 }}</ref> At the input level, it learns to predict its next input from the previous inputs. Only unpredictable inputs of some RNN in the hierarchy become inputs to the next higher level RNN, which therefore recomputes its internal state only rarely. Each higher level RNN thus studies a compressed representation of the information in the RNN below. This is done such that the input sequence can be precisely reconstructed from the representation at the highest level.

The system effectively minimises the description length or the negative [[logarithm]] of the probability of the data.<ref name="scholarpedia2015pre">{{cite journal |last1=Schmidhuber |first1=Jürgen |year=2015 |title=Deep Learning |journal=Scholarpedia |volume=10 |issue=11 |page=32832 |doi=10.4249/scholarpedia.32832 |bibcode=2015SchpJ..1032832S |doi-access=free }}</ref> Given a lot of learnable predictability in the incoming data sequence, the highest level RNN can use supervised learning to easily classify even deep sequences with long intervals between important events.

It is possible to distill the RNN hierarchy into two RNNs: the "conscious" chunker (higher level) and the "subconscious" automatizer (lower level).<ref name="schmidhuber1992" /> Once the chunker has learned to predict and compress inputs that are unpredictable by the automatizer, then the automatizer can be forced in the next learning phase to predict or imitate through additional units the hidden units of the more slowly changing chunker. This makes it easy for the automatizer to learn appropriate, rarely changing memories across long intervals. In turn, this helps the automatizer to make many of its once unpredictable inputs predictable, such that the chunker can focus on the remaining unpredictable events.<ref name="schmidhuber1992" />

A [[generative model]] partially overcame the [[vanishing gradient problem]]<ref name="hochreiter1991">Hochreiter, Sepp (1991), [http://people.idsia.ch/~juergen/SeppHochreiter1991ThesisAdvisorSchmidhuber.pdf Untersuchungen zu dynamischen neuronalen Netzen], Diploma thesis, Institut f. Informatik, Technische Univ. Munich, Advisor Jürgen Schmidhuber</ref> of [[automatic differentiation]] or [[backpropagation]] in neural networks in 1992. In 1993, such a system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.<ref name="schmidhuber1993" />

===Neural Turing machines===
{{Main|Neural Turing machine|Differentiable neural computer}}

'''Neural Turing machines''' (NTMs) are a method of extending recurrent neural networks by coupling them to external [[memory]] resources with which they interact. The combined system is analogous to a [[Turing machine]] or [[Von Neumann architecture]] but is [[Differentiable neural computer|differentiable]] end-to-end, allowing it to be efficiently trained with [[gradient descent]].<ref>{{cite arXiv |eprint=1410.5401 |class=cs.NE |first1=Alex |last1=Graves |first2=Greg |last2=Wayne |title=Neural Turing Machines |last3=Danihelka |first3=Ivo |year=2014}}</ref>

'''Differentiable neural computers''' (DNCs) are an extension of neural Turing machines, allowing for the usage of fuzzy amounts of each memory address and a record of chronology.<ref name="DNCnature2016">{{Cite journal |last1=Graves |first1=Alex |last2=Wayne |first2=Greg |last3=Reynolds |first3=Malcolm |last4=Harley |first4=Tim |last5=Danihelka |first5=Ivo |last6=Grabska-Barwińska |first6=Agnieszka |last7=Colmenarejo |first7=Sergio Gómez |last8=Grefenstette |first8=Edward |last9=Ramalho |first9=Tiago |date=2016-10-12 |title=Hybrid computing using a neural network with dynamic external memory |url=http://www.nature.com/articles/nature20101.epdf?author_access_token=ImTXBI8aWbYxYQ51Plys8NRgN0jAjWel9jnR3ZoTv0MggmpDmwljGswxVdeocYSurJ3hxupzWuRNeGvvXnoO8o4jTJcnAyhGuZzXJ1GEaD-Z7E6X_a9R-xqJ9TfJWBqz |journal=Nature |volume=538 |issue=7626 |pages=471–476 |bibcode=2016Natur.538..471G |doi=10.1038/nature20101 |issn=1476-4687 |pmid=27732574 |s2cid=205251479|url-access=subscription }}</ref>

'''Neural network pushdown automata''' (NNPDA) are similar to NTMs, but tapes are replaced by analog stacks that are differentiable and trained. In this way, they are similar in complexity to recognizers of [[context free grammar]]s (CFGs).<ref>{{Cite book |last1=Sun |first1=Guo-Zheng |title=Adaptive Processing of Sequences and Data Structures |last2=Giles |first2=C. Lee |last3=Chen |first3=Hsing-Hen |publisher=Springer |year=1998 |isbn=978-3-540-64341-8 |editor-last=Giles |editor-first=C. Lee |series=Lecture Notes in Computer Science |___location=Berlin, Heidelberg |pages=296–345 |chapter=The Neural Network Pushdown Automaton: Architecture, Dynamics and Training |citeseerx=10.1.1.56.8723 |doi=10.1007/bfb0054003 |editor-last2=Gori |editor-first2=Marco}}</ref>

Recurrent neural networks are [[Turing complete]] and can run arbitrary programs to process arbitrary sequences of inputs.<ref>{{cite journal |last1=Hyötyniemi |first1=Heikki |date=1996 |title=Turing machines are recurrent neural networks |journal=Proceedings of STeP '96/Publications of the Finnish Artificial Intelligence Society |pages=13–24}}</ref>
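A heavily simplified sketch of the differentiable, content-based memory read used by NTM-style models is shown below; it omits the interpolation, shift and sharpening stages of the full architecture, and all names are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def content_read(memory, key, beta=5.0):
    """Read from an external memory matrix by cosine similarity to a key.

    memory: (slots, width) matrix; key: (width,) query emitted by the controller.
    Returns soft attention weights over slots and the blended read vector;
    every operation is differentiable, so gradients can flow back to the controller.
    """
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = softmax(beta * sims)        # larger beta -> more sharply peaked addressing
    return w, w @ memory

mem = np.random.default_rng(2).normal(size=(16, 8))
weights, read_vec = content_read(mem, mem[3] + 0.05)   # query close to slot 3
print(weights.argmax(), read_vec.shape)                # most weight near slot 3, (8,)
</syntaxhighlight>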
===Second order RNNs===
Second-order RNNs use higher-order weights <math>w{}_{ijk}</math> instead of the standard <math>w{}_{ij}</math> weights, and states can be a product. This allows a direct mapping to a [[finite-state machine]] in training, stability, and representation.<ref>{{cite journal |first1=C. Lee |last1=Giles |first2=Clifford B. |last2=Miller |first3=Dong |last3=Chen |first4=Hsing-Hen |last4=Chen |first5=Guo-Zheng |last5=Sun |first6=Yee-Chun |last6=Lee |url=https://clgiles.ist.psu.edu/pubs/NC1992-recurrent-NN.pdf<!-- https://www.semanticscholar.org/paper/Learning-and-Extracting-Finite-State-Automata-with-Giles-Miller/872cdc269f3cb59f8a227818f35041415091545f --> |title=Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks |journal=Neural Computation |volume=4 |issue=3 |pages=393–405 |year=1992 |doi=10.1162/neco.1992.4.3.393 |s2cid=19666035 }}</ref><ref>{{cite journal |first1=Christian W. |last1=Omlin |first2=C. Lee |last2=Giles |title=Constructing Deterministic Finite-State Automata in Recurrent Neural Networks |journal=Journal of the ACM |volume=45 |issue=6 |pages=937–972 |year=1996 |doi=10.1145/235809.235811 |citeseerx=10.1.1.32.2364 |s2cid=228941 }}</ref> Long short-term memory is an example of this but has no such formal mappings or proof of stability.
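Under the notation above, one common reading of the second-order update is <math>s_i(t+1)=\sigma\left(\sum_{j,k} w_{ijk}\, s_j(t)\, x_k(t)\right)</math>, with one weight per (next state, current state, input symbol) triple. The sketch below is illustrative only; names and sizes are arbitrary.

<syntaxhighlight lang="python">
import numpy as np

def second_order_step(s, x, W):
    """Second-order RNN update: s_i(t+1) = sigma( sum_{j,k} W[i,j,k] * s[j] * x[k] ).

    Because each weight couples a state unit with an input symbol, the learned
    weights can be mapped onto finite-state transitions.
    """
    pre = np.einsum('ijk,j,k->i', W, s, x)
    return 1.0 / (1.0 + np.exp(-pre))        # logistic activation

rng = np.random.default_rng(3)
n_states, n_symbols = 5, 3
W = rng.normal(scale=0.5, size=(n_states, n_states, n_symbols))
s = np.full(n_states, 0.2)
for symbol in [0, 2, 1]:                     # a short input string, one-hot encoded
    x = np.eye(n_symbols)[symbol]
    s = second_order_step(s, x, W)
print(s)
</syntaxhighlight>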
 
==Training==
===Long short-term memory===
{{Main|Long short-term memory}}
 
[[File:Long Short-Term Memory.svg|thumb|Long short-term memory unit]]
Long short-term memory (LSTM) is a [[deep learning]] system that avoids the [[vanishing gradient problem]]. LSTM is normally augmented by recurrent gates called "forget gates".<ref name="gers2002">{{Cite journal |url=http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf |title=Learning Precise Timing with LSTM Recurrent Networks |last1=Gers |first1=Felix A. |last2=Schraudolph |first2=Nicol N. |journal=Journal of Machine Learning Research |volume=3 |access-date=2017-06-13 |last3=Schmidhuber |first3=Jürgen |pages=115–143 |year=2002 }}</ref> LSTM prevents backpropagated errors from vanishing or exploding.<ref name="hochreiter1991" /> Instead, errors can flow backwards through unlimited numbers of virtual layers unfolded in space. That is, LSTM can learn tasks<ref name="schmidhuber2015">{{Cite journal |last=Schmidhuber |first=Jürgen |date=January 2015 |title=Deep Learning in Neural Networks: An Overview |journal=Neural Networks |volume=61 |pages=85–117 |doi=10.1016/j.neunet.2014.09.003 |pmid=25462637 |arxiv=1404.7828 |s2cid=11715509 }}</ref> that require memories of events that happened thousands or even millions of discrete time steps earlier. Problem-specific LSTM-like topologies can be evolved.<ref name="bayer2009">{{Cite book |last1=Bayer |first1=Justin |last2=Wierstra |first2=Daan |last3=Togelius |first3=Julian |last4=Schmidhuber |first4=Jürgen |date=2009-09-14 |title=Evolving Memory Cell Structures for Sequence Learning |journal=Artificial Neural Networks – ICANN 2009 |publisher=Springer |___location=Berlin, Heidelberg |pages=755–764 |doi=10.1007/978-3-642-04277-5_76 |series=Lecture Notes in Computer Science |volume=5769 |isbn=978-3-642-04276-8|url=http://mediatum.ub.tum.de/doc/1289041/document.pdf }}</ref> LSTM works even given long delays between significant events and can handle signals that mix low and high frequency components.
 
Many applications use stacks of LSTM RNNs<ref name="fernandez2007">{{Cite journal |last1=Fernández |first1=Santiago |last2=Graves |first2=Alex |last3=Schmidhuber |first3=Jürgen |year=2007 |title=Sequence labelling in structured domains with hierarchical recurrent neural networks |citeseerx=10.1.1.79.1887 |journal=Proc. 20th International Joint Conference on Artificial Intelligence, Ijcai 2007 |pages=774–779 }}</ref> and train them by [[Connectionist Temporal Classification (CTC)]]<ref name="graves2006">{{Cite journal |last1=Graves |first1=Alex |last2=Fernández |first2=Santiago |last3=Gomez |first3=Faustino J. |year=2006 |title=Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks |citeseerx=10.1.1.75.6306 |journal=Proceedings of the International Conference on Machine Learning |pages=369–376}}</ref> to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.
 
LSTM can learn to recognize [[context-sensitive languages]] unlike previous models based on [[hidden Markov model]]s (HMM) and similar concepts.<ref>{{Cite journal |last1=Gers |first1=Felix A. |last2=Schmidhuber |first2=Jürgen<!-- the E. is a mistake --> |date=November 2001 |title=LSTM recurrent networks learn simple context-free and context-sensitive languages |journal=IEEE Transactions on Neural Networks |volume=12 |issue=6 |pages=1333–1340 |doi=10.1109/72.963769 |pmid=18249962 |s2cid=10192330 |issn=1045-9227 |url=https://semanticscholar.org/paper/f828b401c86e0f8fddd8e77774e332dfd226cb05<!-- or https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=963769 --> }}</ref>
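For illustration, a single step of a plain LSTM cell with input, forget and output gates can be sketched as below; biasing, initialization and peephole variants differ between implementations, and the names here are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W, U, b hold the stacked input, forget, output and
    cell-candidate parameters; the gated, additive cell update is what lets
    errors flow across long time lags."""
    z = W @ x + U @ h + b                  # shape (4*hidden,)
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c_new = f * c + i * g                  # cell state: forget old content, add new
    h_new = o * np.tanh(c_new)             # hidden state exposed to the next layer
    return h_new, c_new

rng = np.random.default_rng(4)
n_in, n_hid = 3, 6
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for t in range(20):
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
print(h.shape, c.shape)
</syntaxhighlight>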
 
=== Teacher forcing ===
[[File:Seq2seq_RNN_encoder-decoder_with_attention_mechanism,_training_and_inferring.png|thumb|Encoder-decoder RNN without attention mechanism. Teacher forcing is shown in red.]]An RNN can be trained into a conditionally [[generative model]] of sequences, aka '''autoregression'''.

Concretely, let us consider the problem of machine translation, that is, given a sequence <math>(x_1, x_2, \dots, x_n)</math> of English words, the model is to produce a sequence <math>(y_1, \dots, y_m)</math> of French words. It is to be solved by a [[seq2seq]] model.

Now, during training, the encoder half of the model would first ingest <math>(x_1, x_2, \dots, x_n)</math>, then the decoder half would start generating a sequence <math>(\hat y_1, \hat y_2, \dots, \hat y_{l})</math>. The problem is that if the model makes a mistake early on, say at <math>\hat y_2</math>, then subsequent tokens are likely to also be mistakes. This makes it inefficient for the model to obtain a learning signal, since the model would mostly learn to shift <math>\hat y_2</math> towards <math>y_2</math>, but not the others.

'''Teacher forcing''' makes it so that the decoder uses the correct output sequence for generating the next entry in the sequence. So for example, it would see <math>(y_1, \dots, y_{k})</math> in order to generate <math>\hat y_{k+1}</math>.
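The difference between teacher forcing and free-running decoding can be sketched schematically as below, where <code>decoder_step</code> is a stand-in for whatever recurrent cell is actually used; it is assumed for illustration, not taken from a specific library.

<syntaxhighlight lang="python">
import numpy as np

def decoder_step(y_prev, h):
    """Placeholder decoder cell: returns (predicted token vector, new state)."""
    h = np.tanh(h + y_prev)                  # stand-in for the real recurrence
    return np.roll(y_prev, 1) * 0.9 + 0.1 * h, h

def decode(target_seq, h0, teacher_forcing=True):
    h, y_prev = h0, np.zeros_like(h0)
    outputs = []
    for y_true in target_seq:
        y_pred, h = decoder_step(y_prev, h)
        outputs.append(y_pred)
        # teacher forcing: condition the next step on the ground-truth token,
        # so an early mistake does not contaminate every later prediction
        y_prev = y_true if teacher_forcing else y_pred
    return outputs

targets = [np.eye(4)[i] for i in (1, 3, 0, 2)]
h0 = np.zeros(4)
train_outputs = decode(targets, h0, teacher_forcing=True)    # as during training
infer_outputs = decode(targets, h0, teacher_forcing=False)   # free-running, as at test time
print(len(train_outputs), len(infer_outputs))
</syntaxhighlight>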
===Gated recurrent unit===
{{Main|Gated recurrent unit}}
 
[[File:Gated Recurrent Unit.svg|thumb|Gated recurrent unit]]
Gated recurrent units (GRUs) are a gating mechanism in [[recurrent neural networks]] introduced in 2014. They are used in the full form and several simplified variants.<ref>{{cite arXiv |last1=Heck |first1=Joel |last2=Salem |first2=Fathi M. |date=2017-01-12 |title=Simplified Minimal Gated Unit Variations for Recurrent Neural Networks |eprint=1701.03452 |class=cs.NE }}</ref><ref>{{cite arXiv |last1=Dey |first1=Rahul |last2=Salem |first2=Fathi M. |date=2017-01-20 |title=Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks |eprint=1701.05923 |class=cs.NE }}</ref> Their performance on polyphonic music modeling and speech signal modeling was found to be similar to that of long short-term memory.<ref name="MyUser_Arxiv.org_May_18_2016c">{{cite arXiv |class=cs.NE |first2=Caglar |last2=Gulcehre |title=Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling |eprint=1412.3555 |last1=Chung |first1=Junyoung |last3=Cho |first3=KyungHyun |last4=Bengio |first4=Yoshua |year=2014}}</ref> They have fewer parameters than LSTM, as they lack an output gate.<ref name="MyUser_Wildml.com_May_18_2016c">{{cite web |url=http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/ |title=Recurrent Neural Network Tutorial, Part 4 – Implementing a GRU/LSTM RNN with Python and Theano – WildML |newspaper=Wildml.com |access-date=May 18, 2016 |date=October 27, 2015 |first=Denny |last=Britz}}</ref>
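A minimal sketch of the fully gated unit is given below; conventions for which gate retains the old state vary between papers, and the variable names are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: two gates, no separate cell state and no output gate."""
    z = sigmoid(Wz @ x + Uz @ h)             # update gate: how much new content to admit
    r = sigmoid(Wr @ x + Ur @ h)             # reset gate: how much history the candidate sees
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))  # candidate activation
    return (1.0 - z) * h + z * h_cand

rng = np.random.default_rng(5)
n_in, n_hid = 3, 5
p = lambda *s: rng.normal(scale=0.1, size=s)
Wz, Wr, Wh = p(n_hid, n_in), p(n_hid, n_in), p(n_hid, n_in)
Uz, Ur, Uh = p(n_hid, n_hid), p(n_hid, n_hid), p(n_hid, n_hid)
h = np.zeros(n_hid)
for t in range(10):
    h = gru_step(rng.normal(size=n_in), h, Wz, Uz, Wr, Ur, Wh, Uh)
print(h)
</syntaxhighlight>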
 
===Bi-directional===
{{Main|Bidirectional recurrent neural networks}}
 
Bi-directional RNNs use a finite sequence to predict or label each element of the sequence based on the element's past and future contexts. This is done by concatenating the outputs of two RNNs, one processing the sequence from left to right, the other one from right to left. The combined outputs are the predictions of the teacher-given target signals. This technique has been proven to be especially useful when combined with LSTM RNNs.<ref>{{Cite journal |last1=Graves |first1=Alex |last2=Schmidhuber |first2=Jürgen |date=2005-07-01 |title=Framewise phoneme classification with bidirectional LSTM and other neural network architectures |journal=Neural Networks |series=IJCNN 2005 |volume=18 |issue=5 |pages=602–610 |doi=10.1016/j.neunet.2005.06.042|pmid=16112549 |citeseerx=10.1.1.331.5800 }}</ref><ref name="ThireoReczko">{{Cite journal |last1=Thireou |first1=Trias |last2=Reczko |first2=Martin |date=July 2007 |title=Bidirectional Long Short-Term Memory Networks for Predicting the Subcellular Localization of Eukaryotic Proteins |journal=IEEE/ACM Transactions on Computational Biology and Bioinformatics |volume=4 |issue=3 |pages=441–446 |doi=10.1109/tcbb.2007.1015 |pmid=17666763 |s2cid=11787259 }}</ref>
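The construction can be sketched as two independent passes whose hidden states are concatenated position by position; the bare tanh cell below is a stand-in for the LSTM cells usually used in practice, and all names are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def run_rnn(xs, W_in, W_rec):
    """Run a simple tanh RNN over a sequence and return all hidden states."""
    h = np.zeros(W_rec.shape[0])
    states = []
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h)
        states.append(h)
    return states

def bidirectional(xs, fwd, bwd):
    """Concatenate a left-to-right pass with a right-to-left pass at every position."""
    forward = run_rnn(xs, *fwd)
    backward = run_rnn(xs[::-1], *bwd)[::-1]   # reverse back so positions line up
    return [np.concatenate([f, b]) for f, b in zip(forward, backward)]

rng = np.random.default_rng(6)
n_in, n_hid, T = 3, 4, 7
make = lambda: (rng.normal(scale=0.3, size=(n_hid, n_in)),
                rng.normal(scale=0.3, size=(n_hid, n_hid)))
xs = [rng.normal(size=n_in) for _ in range(T)]
outputs = bidirectional(xs, make(), make())
print(len(outputs), outputs[0].shape)          # 7 positions, each an (8,) feature vector
</syntaxhighlight>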
 
===Continuous-time===
A continuous-time recurrent neural network (CTRNN) uses a system of [[ordinary differential equations]] to model the effects on a neuron of the incoming inputs. CTRNNs are typically analyzed by [[dynamical systems theory]], and many RNN models in neuroscience are continuous-time.<ref name=":0" />
 
For a neuron <math>i</math> in the network with activation <math>y_{i}</math>, the rate of change of activation is given by:
:<math>\tau_{i}\dot{y}_{i}=-y_{i}+\sum_{j=1}^{n}w_{ji}\sigma(y_{j}-\Theta_{j})+I_{i}(t)</math>
Where:
* <math>\tau_{i}</math> : Time constant of [[Synapse|postsynaptic]] node
* <math>y_{i}</math> : Activation of postsynaptic node
* <math>\dot{y}_{i}</math> : Rate of change of activation of postsynaptic node
* <math>w{}_{ji}</math> : Weight of connection from pre to postsynaptic node
* <math>\sigma(x)</math> : [[Sigmoid function|Sigmoid]] of x e.g. <math>\sigma(x) = 1/(1+e^{-x})</math>.
* <math>y_{j}</math> : Activation of presynaptic node
* <math>\Theta_{j}</math> : Bias of presynaptic node
* <math>I_{i}(t)</math> : Input (if any) to node
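The equation above can be simulated with a simple forward-Euler step, as in the illustrative sketch below; the step size and parameter values are arbitrary.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ctrnn_euler_step(y, W, tau, theta, I, dt=0.01):
    """One forward-Euler step of tau_i * dy_i/dt = -y_i + sum_j w_ji*sigma(y_j - theta_j) + I_i.

    W[j, i] is the weight from presynaptic node j to postsynaptic node i,
    matching the w_ji convention in the list above."""
    dydt = (-y + W.T @ sigmoid(y - theta) + I) / tau
    return y + dt * dydt

rng = np.random.default_rng(7)
n = 5
W = rng.normal(scale=1.0, size=(n, n))
tau = rng.uniform(0.5, 2.0, size=n)
theta = np.zeros(n)
y = rng.normal(size=n)
for step in range(1000):                 # integrate 10 time units with dt = 0.01
    y = ctrnn_euler_step(y, W, tau, theta, I=np.zeros(n))
print(y)
</syntaxhighlight>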
 
CTRNNs have been applied to [[evolutionary robotics]] where they have been used to address vision,<ref>{{citation |last1=Harvey |first1=Inman |last2=Husbands |first2=Phil |last3=Cliff |first3=Dave |title=3rd international conference on Simulation of adaptive behavior: from animals to animats 3 |year=1994 |pages=392–401 |contribution=Seeing the light: Artificial evolution, real vision |contribution-url=https://www.researchgate.net/publication/229091538_Seeing_the_Light_Artificial_Evolution_Real_Vision }}</ref> co-operation,<ref name="Evolving communication without dedicated communication channels">{{cite book |last=Quinn |first=Matthew |chapter=Evolving communication without dedicated communication channels |journal=Advances in Artificial Life |year=2001 |pages=357–366 |doi=10.1007/3-540-44811-X_38 |series=Lecture Notes in Computer Science |volume=2159 |isbn=978-3-540-42567-0 |citeseerx=10.1.1.28.5890 }}</ref> and minimal cognitive behaviour.<ref name="The dynamics of adaptive behavior: A research program">{{cite journal |first=Randall D. |last=Beer |title=The dynamics of adaptive behavior: A research program |journal=Robotics and Autonomous Systems |year=1997 |pages=257–289 |doi=10.1016/S0921-8890(96)00063-2 |volume=20 |issue=2–4}}</ref>
 
Note that, by the [[Shannon sampling theorem]], discrete time recurrent neural networks can be viewed as continuous-time recurrent neural networks where the differential equations have transformed into equivalent [[difference equation]]s.<ref name="Sherstinsky-NeurIPS2018-CRACT-3">{{cite conference |last=Sherstinsky |first=Alex |title=Deriving the Recurrent Neural Network Definition and RNN Unrolling Using Signal Processing |url=https://www.researchgate.net/publication/331718291 |conference=Critiquing and Correcting Trends in Machine Learning Workshop at NeurIPS-2018 |conference-url=https://ml-critique-correct.github.io/ |editor-last=Bloem-Reddy |editor-first=Benjamin |editor2-last=Paige |editor2-first=Brooks |editor3-last=Kusner |editor3-first=Matt |editor4-last=Caruana |editor4-first=Rich |editor5-last=Rainforth |editor5-first=Tom |editor6-last=Teh |editor6-first=Yee Whye |date=2018-12-07 }}</ref> This transformation can be thought of as occurring after the post-synaptic node activation functions <math>y_i(t)</math> have been low-pass filtered but prior to sampling.
 
 
=== Connectionist temporal classification ===
The [[connectionist temporal classification]] (CTC)<ref name="graves2006">{{Cite conference |last1=Graves |first1=Alex |last2=Fernández |first2=Santiago |last3=Gomez |first3=Faustino J. |year=2006 |title=Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks |url=https://axon.cs.byu.edu/~martinez/classes/778/Papers/p369-graves.pdf |pages=369–376 |citeseerx=10.1.1.75.6306 |doi=10.1145/1143844.1143891 |isbn=1-59593-383-2 |book-title=Proceedings of the International Conference on Machine Learning}}</ref> is a specialized loss function for training RNNs for sequence modeling problems where the timing is variable.<ref>{{Cite journal |last=Hannun |first=Awni |date=2017-11-27 |title=Sequence Modeling with CTC |url=https://distill.pub/2017/ctc |journal=Distill |language=en |volume=2 |issue=11 |pages=e8 |doi=10.23915/distill.00008 |issn=2476-0757|doi-access=free |url-access=subscription }}</ref>
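The idea can be illustrated by brute force on a toy alphabet: the probability of a label is the sum, over all frame-level paths that collapse to that label, of the product of per-frame probabilities. Real implementations use dynamic programming rather than the enumeration sketched below; all names and sizes are illustrative.

<syntaxhighlight lang="python">
import itertools
import numpy as np

BLANK = 0

def collapse(path):
    """CTC collapsing: merge repeated symbols, then drop blanks."""
    out, prev = [], None
    for s in path:
        if s != prev and s != BLANK:
            out.append(s)
        prev = s
    return tuple(out)

def ctc_label_probability(probs, label):
    """probs: (T, symbols) per-frame distributions, with symbol 0 as the blank.
    Sums path probabilities over every alignment that collapses to `label`."""
    T, S = probs.shape
    total = 0.0
    for path in itertools.product(range(S), repeat=T):    # exponential; toy sizes only
        if collapse(path) == tuple(label):
            total += np.prod([probs[t, s] for t, s in enumerate(path)])
    return total

rng = np.random.default_rng(8)
frames = rng.dirichlet(np.ones(3), size=4)                # T=4 frames, blank + 2 symbols
print(ctc_label_probability(frames, label=[1, 2]))        # P(label "1 2" | frames)
</syntaxhighlight>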
 
===Global optimization methods===
Training the weights in a neural network can be modeled as a non-linear [[global optimization]] problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence. Typically, the sum-squared difference between the predictions and the target values specified in the training sequence is used to represent the error of the current weight vector. Arbitrary global optimization techniques may then be used to minimize this target function.
 
The most common global optimization method for training RNNs is [[genetic algorithm]]s, especially in unstructured networks.<ref>{{citation |last1=Gomez |first1=Faustino J. |title=IJCAI 99 |year=1999 |access-date=5 August 2017 |contribution=Solving non-Markovian control tasks with neuroevolution |contribution-url=http://www.cs.utexas.edu/users/nn/downloads/papers/gomez.ijcai99.pdf |publisher=Morgan Kaufmann |last2=Miikkulainen |first2=Risto}}</ref><ref>{{cite thesis |url=http://arimaa.com/arimaa/about/Thesis/ |title=Applying Genetic Algorithms to Recurrent Neural Networks for Learning Network Parameters and Architecture |last=Syed |first=Omar |type=MSc |publisher=Department of Electrical Engineering, Case Western Reserve University |date=May 1995}}</ref><ref>{{Cite journal |last1=Gomez |first1=Faustino J. |last2=Schmidhuber |first2=Jürgen |last3=Miikkulainen |first3=Risto |date=June 2008 |title=Accelerated Neural Evolution Through Cooperatively Coevolved Synapses |url=https://www.jmlr.org/papers/volume9/gomez08a/gomez08a.pdf |journal=Journal of Machine Learning Research |volume=9 |pages=937–965}}</ref>
 
Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the [[Chromosome (genetic algorithm)|chromosome]] represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows:
 
* Each weight encoded in the chromosome is assigned to the respective weight link of the network.
* The training set is presented to the network which propagates the input signals forward.
* The mean-squared error is returned to the fitness function.
* This function drives the genetic selection process.
Many chromosomes make up the population; therefore, many different neural networks are evolved until a stopping criterion is satisfied. A common stopping scheme can be:
* When the neural network has learned a certain percentage of the training data.
* When the minimum value of the mean-squared-error is satisfied.
* When the maximum number of training generations has been reached.
The fitness function evaluates the stopping criterion as it receives the mean-squared error reciprocal from each network during training. Therefore, the goal of the genetic algorithm is to maximize the fitness function, reducing the mean-squared error.
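A toy sketch of this procedure is given below: each chromosome is a flat weight vector with one gene per weight link, fitness is the reciprocal of the mean-squared error on a training sequence, and truncation selection with Gaussian mutation stands in for the genetic operators. All names and hyper-parameters are illustrative.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(9)
n_in, n_hid, T = 1, 4, 20
xs = np.sin(np.linspace(0, 3 * np.pi, T + 1))
inputs, targets = xs[:-1], xs[1:]                    # task: predict the next sample

def unpack(chrom):                                   # one gene per weight link
    a = n_hid * n_in
    b = a + n_hid * n_hid
    return (chrom[:a].reshape(n_hid, n_in),
            chrom[a:b].reshape(n_hid, n_hid),
            chrom[b:].reshape(1, n_hid))

def mse(chrom):
    W_in, W_rec, W_out = unpack(chrom)
    h, err = np.zeros(n_hid), 0.0
    for x, y in zip(inputs, targets):
        h = np.tanh(W_in @ np.array([x]) + W_rec @ h)
        err += ((W_out @ h - y) ** 2).item()
    return err / T

n_genes = n_hid * n_in + n_hid * n_hid + n_hid
population = rng.normal(scale=0.5, size=(30, n_genes))
for generation in range(50):
    fitness = np.array([1.0 / (1e-9 + mse(c)) for c in population])  # maximize 1/MSE
    parents = population[np.argsort(fitness)[-10:]]                  # keep the fittest
    children = parents[rng.integers(0, 10, size=20)] \
        + rng.normal(scale=0.1, size=(20, n_genes))                  # mutated offspring
    population = np.vstack([parents, children])
print(min(mse(c) for c in population))
</syntaxhighlight>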
 
Other global (and/or evolutionary) optimization techniques may be used to seek a good set of weights, such as [[simulated annealing]] or [[particle swarm optimization]].
 
===Gradient descent===
{{Main|Gradient descent}}
Gradient descent is a [[:Category:First order methods|first-order]] [[Iterative algorithm|iterative]] [[Mathematical optimization|optimization]] [[algorithm]] for finding the minimum of a function. In neural networks, it can be used to minimize the error term by changing each weight in proportion to the derivative of the error with respect to that weight, provided the non-linear [[activation function]]s are [[Differentiable function|differentiable]]. Various methods for doing so were developed in the 1980s and early 1990s by [[Paul Werbos|Werbos]], [[Ronald J. Williams|Williams]], [[Tony Robinson (speech recognition)|Robinson]], [[Jürgen Schmidhuber|Schmidhuber]], [[Sepp Hochreiter|Hochreiter]], Pearlmutter and others.
 
The standard method is called "[[backpropagation through time]]" or BPTT, and is a generalization of [[back-propagation]] for feed-forward networks.<ref>{{Cite journal|last=Werbos|first=Paul J.|title=Generalization of backpropagation with application to a recurrent gas market model|journal=Neural Networks|volume=1|issue=4|pages=339–356|doi=10.1016/0893-6080(88)90007-x|year=1988|s2cid=205001834 |url=https://www.semanticscholar.org/paper/Learning-representations-by-back-propagating-errors-Rumelhart-Hinton/052b1d8ce63b07fec3de9dbb583772d860b7c769}}</ref><ref>{{cite book |url={{google books |plainurl=y |id=Ff9iHAAACAAJ}} |title=Learning Internal Representations by Error Propagation |last=Rumelhart |first=David E. |publisher=Institute for Cognitive Science, University of California |___location=San Diego (CA) |year=1985 }}</ref> Like that method, it is an instance of [[automatic differentiation]] in the reverse accumulation mode of [[Pontryagin's minimum principle]]. A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL,<ref>{{cite book |url={{google books |plainurl=y |id=6JYYMwEACAAJ }} |title=The Utility Driven Dynamic Error Propagation Network |series=Technical Report CUED/F-INFENG/TR.1 |last1=Robinson |first1=Anthony J.<!-- sometimes cited as T. (for "Tony") Robinson --> |first2=Frank |last2=Fallside |publisher=Department of Engineering, University of Cambridge |year=1987 }}</ref><ref>{{cite book |url={{google books |plainurl=y |id=B71nu3LDpREC}} |title=Backpropagation: Theory, Architectures, and Applications |editor-last1=Chauvin |editor-first1=Yves |editor-last2=Rumelhart |editor-first2=David E. |first1=Ronald J. |last1=Williams |first2=D. |last2=Zipser |contribution=Gradient-based learning algorithms for recurrent networks and their computational complexity |date=1 February 2013 |publisher=Psychology Press |isbn=978-1-134-77581-1 }}</ref> which is an instance of [[automatic differentiation]] in the forward accumulation mode with stacked tangent vectors. Unlike BPTT, this algorithm is local in time but not local in space.
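For a vanilla tanh RNN with a squared-error readout, backpropagation through time can be sketched as a forward unrolling followed by a backward sweep that accumulates gradients across time steps. The toy task and names below are illustrative; biases are omitted for brevity.

<syntaxhighlight lang="python">
import numpy as np

def bptt(xs, ts, Wxh, Whh, Why):
    """Forward pass then backpropagation through time; returns loss and gradients."""
    T, (H, _) = len(xs), Whh.shape
    hs = {-1: np.zeros(H)}
    ys, loss = {}, 0.0
    for t in range(T):                                 # unroll forward in time
        hs[t] = np.tanh(Wxh @ xs[t] + Whh @ hs[t - 1])
        ys[t] = Why @ hs[t]
        loss += 0.5 * np.sum((ys[t] - ts[t]) ** 2)
    dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
    dh_next = np.zeros(H)
    for t in reversed(range(T)):                       # walk backwards through the unrolled net
        dy = ys[t] - ts[t]
        dWhy += np.outer(dy, hs[t])
        dh = Why.T @ dy + dh_next                      # gradient from the output and from the future
        draw = (1.0 - hs[t] ** 2) * dh                 # through the tanh nonlinearity
        dWxh += np.outer(draw, xs[t])
        dWhh += np.outer(draw, hs[t - 1])
        dh_next = Whh.T @ draw                         # pass the gradient one step further back
    return loss, dWxh, dWhh, dWhy

rng = np.random.default_rng(10)
I, H, O, T = 2, 8, 1, 12
Wxh = rng.normal(scale=0.2, size=(H, I))
Whh = rng.normal(scale=0.2, size=(H, H))
Why = rng.normal(scale=0.2, size=(O, H))
xs = [rng.normal(size=I) for _ in range(T)]
ts = [np.array([x.sum()]) for x in xs]                 # toy target: sum of the inputs
for step in range(200):                                # plain gradient descent on the unrolled loss
    loss, dWxh, dWhh, dWhy = bptt(xs, ts, Wxh, Whh, Why)
    Wxh, Whh, Why = Wxh - 0.01 * dWxh, Whh - 0.01 * dWhh, Why - 0.01 * dWhy
print(loss)
</syntaxhighlight>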
 
In this context, local in space means that a unit's weight vector can be updated using only information stored in the connected units and the unit itself such that update complexity of a single unit is linear in the dimensionality of the weight vector. Local in time means that the updates take place continually (on-line) and depend only on the most recent time step rather than on multiple time steps within a given time horizon as in BPTT. Biological neural networks appear to be local with respect to both time and space.<ref>{{Cite journal |last=Schmidhuber |first=Jürgen |s2cid=18721007 |date=1989-01-01 |title=A Local Learning Algorithm for Dynamic Feedforward and Recurrent Networks |journal=Connection Science |volume=1 |issue=4 |pages=403–412 |doi=10.1080/09540098908915650 }}</ref><ref name="PríncipeEuliano2000">{{cite book |first1=José C. |last1=Príncipe |first2=Neil R. |last2= Euliano |first3=W. Curt |last3=Lefebvre |title=Neural and adaptive systems: fundamentals through simulations |url={{google books |plainurl=y |id=jgMZAQAAIAAJ}} |year=2000 |publisher=Wiley |isbn=978-0-471-35167-2 }}</ref>
 
For recursively computing the partial derivatives, RTRL has a time-complexity of O(number of hidden x number of weights) per time step for computing the [[Jacobian matrix|Jacobian matrices]], while BPTT only takes O(number of weights) per time step, at the cost of storing all forward activations within the given time horizon.<ref name="Ollivier2015">{{Cite arXiv |last1=Yann |first1=Ollivier |first2=Corentin |last2=Tallec |first3=Guillaume |last3=Charpiat |date=2015-07-28 |title=Training recurrent networks online without backtracking |eprint=1507.07680 |class=cs.NE }}</ref> An online hybrid between BPTT and RTRL with intermediate complexity exists,<ref>{{Cite journal |last=Schmidhuber |first=Jürgen |date=1992-03-01 |title=A Fixed Size Storage O(n3) Time Complexity Learning Algorithm for Fully Recurrent Continually Running Networks |journal=Neural Computation |volume=4 |issue=2 |pages=243–248 |doi=10.1162/neco.1992.4.2.243 |s2cid=11761172 }}</ref><ref>{{cite journal |first=Ronald J. |last=Williams |title=Complexity of exact gradient computation algorithms for recurrent neural networks |___location=Boston (MA) |series=Technical Report NU-CCS-89-27 |publisher=Northeastern University, College of Computer Science |year=1989 |url=http://citeseerx.ist.psu.edu/showciting?cid=128036 }}</ref> along with variants for continuous time.<ref>{{Cite journal |last=Pearlmutter |first=Barak A. |date=1989-06-01 |title=Learning State Space Trajectories in Recurrent Neural Networks |journal=Neural Computation |volume=1 |issue=2 |pages=263–269 |doi=10.1162/neco.1989.1.2.263 |s2cid=16813485 |url=http://repository.cmu.edu/cgi/viewcontent.cgi?article=2865&context=compsci }}</ref>
 
A major problem with gradient descent for standard RNN architectures is that [[Vanishing gradient problem|error gradients vanish]] exponentially quickly with the size of the time lag between important events.<ref name="hochreiter1991" /><ref name="HOCH2001">{{cite book |chapter-url={{google books |plainurl=y |id=NWOcMVA64aAC }} |title=A Field Guide to Dynamical Recurrent Networks |last=Hochreiter |first=Sepp |display-authors=etal |date=15 January 2001 |publisher=John Wiley & Sons |isbn=978-0-7803-5369-5 |chapter=Gradient flow in recurrent nets: the difficulty of learning long-term dependencies |editor-last2=Kremer |editor-first2=Stefan C. |editor-first1=John F. |editor-last1=Kolen }}</ref> LSTM combined with a BPTT/RTRL hybrid learning method attempts to overcome these problems.<ref name="lstm" /> This problem is also solved in the independently recurrent neural network (IndRNN)<ref name="auto"/> by reducing the context of a neuron to its own past state and the cross-neuron information can then be explored in the following layers. Memories of different range including long-term memory can be learned without the gradient vanishing and exploding problem.
 
The [[online algorithm]] called causal recursive backpropagation (CRBP) implements and combines the BPTT and RTRL paradigms for locally recurrent networks.<ref>{{Cite journal |last1=Campolucci |first1=Paolo |last2=Uncini |first2=Aurelio |last3=Piazza |first3=Francesco |last4=Rao |first4=Bhaskar D. |year=1999 |title=On-Line Learning Algorithms for Locally Recurrent Neural Networks |journal=IEEE Transactions on Neural Networks |volume=10 |issue=2 |pages=253–271 |doi=10.1109/72.750549 |pmid=18252525 |citeseerx=10.1.1.33.7550 }}</ref> It works with the most general locally recurrent networks. The CRBP algorithm can minimize the global error term. This fact improves the stability of the algorithm, providing a unifying view of gradient calculation techniques for recurrent networks with local feedback.

One approach to the computation of gradient information in RNNs with arbitrary architectures is based on signal-flow graphs diagrammatic derivation.<ref>{{Cite journal |last1=Wan |first1=Eric A. |last2=Beaufays |first2=Françoise |year=1996 |title=Diagrammatic derivation of gradient algorithms for neural networks |journal=Neural Computation |volume=8 |pages=182–201 |doi=10.1162/neco.1996.8.1.182 |s2cid=15512077 }}</ref> It uses the BPTT batch algorithm, based on Lee's theorem for network sensitivity calculations.<ref name="ReferenceA">{{Cite journal |last1=Campolucci |first1=Paolo |last2=Uncini |first2=Aurelio |last3=Piazza |first3=Francesco |year=2000 |title=A Signal-Flow-Graph Approach to On-line Gradient Calculation |journal=Neural Computation |volume=12 |issue=8 |pages=1901–1927 |doi=10.1162/089976600300015196 |pmid=10953244 |citeseerx=10.1.1.212.5406 |s2cid=15090951 }}</ref> It was proposed by Wan and Beaufays, while its fast online version was proposed by Campolucci, Uncini and Piazza.<ref name="ReferenceA"/>

===Hierarchical recurrent neural network===
Hierarchical recurrent neural networks (HRNN) connect their neurons in various ways to decompose hierarchical behavior into useful subprograms.<ref name="schmidhuber1992" /><ref>{{Cite journal |last1=Paine |first1=Rainer W. |last2=Tani |first2=Jun |s2cid=9932565 |date=2005-09-01 |title=How Hierarchical Control Self-organizes in Artificial Adaptive Systems |journal=Adaptive Behavior |volume=13 |issue=3 |pages=211–225 |doi=10.1177/105971230501300303}}</ref> Such hierarchical structures of cognition are present in theories of memory presented by philosopher [[Henri Bergson]], whose philosophical views have inspired hierarchical models.<ref name="auto1">{{Cite web| url=https://www.researchgate.net/publication/328474302 |title= Burns, Benureau, Tani (2018) A Bergson-Inspired Adaptive Time Constant for the Multiple Timescales Recurrent Neural Network Model. JNNS}}</ref>
 
Hierarchical recurrent neural networks are useful in [[forecasting]], helping to predict disaggregated inflation components of the [[consumer price index]] (CPI). The HRNN model leverages information from higher levels in the CPI hierarchy to enhance lower-level predictions. Evaluation of a substantial dataset from the US CPI-U index demonstrates the superior performance of the HRNN model compared to various established [[inflation]] prediction methods.<ref name="barkan">{{cite journal | last1 = Barkan | first1 = Oren | last2 = Benchimol | first2 = Jonathan | last3 = Caspi | first3 = Itamar | last4 = Cohen | first4 = Eliya | last5 = Hammer | first5 = Allon | last6 = Koenigstein | first6 = Noam | date = 2023 | title = Forecasting CPI inflation components with Hierarchical Recurrent Neural Networks | journal = International Journal of Forecasting | volume = 39 | issue = 3 | pages = 1145–1162 | doi = 10.1016/j.ijforecast.2022.04.009 | arxiv = 2011.07920 }}</ref>
 
===Recurrent multilayer perceptron network===
Generally, a recurrent multilayer perceptron network (RMLP network) consists of cascaded subnetworks, each containing multiple layers of nodes. Each subnetwork is feed-forward except for the last layer, which can have feedback connections. Each of these subnets is connected only by feed-forward connections.<ref>{{cite book |citeseerx=10.1.1.45.3527 |title=Recurrent Multilayer Perceptrons for Identification and Control: The Road to Applications |first=Kurt |last=Tutschku |publisher=University of Würzburg Am Hubland |series=Institute of Computer Science Research Report |volume=118 |date=June 1995 }}</ref>
 
===Multiple timescales model===

===Recurrent multilayer perceptron network===
Generally, a recurrent multilayer perceptron network (RMLP network) consists of cascaded subnetworks, each containing multiple layers of nodes. Each subnetwork is feed-forward except for the last layer, which can have feedback connections; the subnetworks are connected to each other only by feed-forward connections.<ref>{{cite book |citeseerx=10.1.1.45.3527 |title=Recurrent Multilayer Perceptrons for Identification and Control: The Road to Applications |first=Kurt |last=Tutschku |publisher=University of Würzburg Am Hubland |series=Institute of Computer Science Research Report |volume=118 |date=June 1995 }}</ref>

===Multiple timescales model===
A multiple timescales recurrent neural network (MTRNN) is a neural-based computational model that can simulate the functional hierarchy of the brain through self-organization, depending on the spatial connection between neurons and on distinct types of neuron activities, each with distinct time properties.<ref>{{Cite journal |last1=Yamashita |first1=Yuichi |last2=Tani |first2=Jun |date=2008-11-07 |title=Emergence of Functional Hierarchy in a Multiple Timescale Neural Network Model: A Humanoid Robot Experiment |journal=PLOS Computational Biology |volume=4 |issue=11 |pages=e1000220 |doi=10.1371/journal.pcbi.1000220 |pmc=2570613 |pmid=18989398 |bibcode=2008PLSCB...4E0220Y |doi-access=free }}</ref><ref>{{Cite journal |last1=Alnajjar |first1=Fady |last2=Yamashita |first2=Yuichi |last3=Tani |first3=Jun |year=2013 |title=The hierarchical and functional connectivity of higher-order cognitive mechanisms: neurorobotic model to investigate the stability and flexibility of working memory |journal=Frontiers in Neurorobotics |volume=7 |page=2 |doi=10.3389/fnbot.2013.00002 |pmc=3575058 |pmid=23423881 |doi-access=free }}</ref> With such varied neuronal activities, continuous sequences of any set of behaviors are segmented into reusable primitives, which in turn are flexibly integrated into diverse sequential behaviors. Biological support for such a hierarchy was discussed in the [[memory-prediction framework|memory-prediction]] theory of brain function by [[Jeff Hawkins|Hawkins]] in his book ''[[On Intelligence]]''.{{Citation needed |date=June 2017}} Such a hierarchy also agrees with theories of memory posited by philosopher [[Henri Bergson]], which have been incorporated into an MTRNN model.<ref name="auto1"/><ref>{{Cite web |url=http://jnns.org/conference/2018/JNNS2018_Technical_Programs.pdf |title=Proceedings of the 28th Annual Conference of the Japanese Neural Network Society (October, 2018) |access-date=2021-02-06 |archive-date=2020-05-09 |archive-url=https://web.archive.org/web/20200509004753/http://jnns.org/conference/2018/JNNS2018_Technical_Programs.pdf |url-status=dead }}</ref>

===Memristive networks===
Greg Snider of [[HP Labs]] describes a system of cortical computing with memristive nanodevices.<ref>{{Citation |last=Snider |first=Greg |title=Cortical computing with memristive nanodevices |journal=Sci-DAC Review |volume=10 |pages=58–65 |year=2008 |url=http://www.scidacreview.org/0804/html/hardware.html |access-date=2019-09-06 |archive-date=2016-05-16 |archive-url=https://web.archive.org/web/20160516070906/http://www.scidacreview.org/0804/html/hardware.html |url-status=dead }}</ref> The [[memristors]] (memory resistors) are implemented by thin-film materials in which the resistance is electrically tuned via the transport of ions or oxygen vacancies within the film. [[DARPA]]'s [[SyNAPSE|SyNAPSE project]] has funded IBM Research and HP Labs, in collaboration with the Boston University Department of Cognitive and Neural Systems (CNS), to develop neuromorphic architectures that may be based on memristive systems.

[[Memristive networks]] are a particular type of [[physical neural network]] with properties very similar to those of (Little-)Hopfield networks: they have continuous dynamics, a limited memory capacity, and natural relaxation via the minimization of a function that is asymptotic to the [[Ising model]]. In this sense, the dynamics of a memristive circuit have the advantage, compared to a resistor–capacitor network, of a more interesting non-linear behavior. From this point of view, engineering analog memristive networks is a peculiar type of [[neuromorphic engineering]] in which the device behavior depends on the circuit wiring or topology.

The evolution of these networks can be studied analytically using variations of the [[Caravelli-Traversa-Di Ventra equation]].<ref>{{cite journal |last1=Caravelli |first1=Francesco |last2=Traversa |first2=Fabio Lorenzo |last3=Di Ventra |first3=Massimiliano |title=The complex dynamics of memristive circuits: analytical results and universal slow relaxation |year=2017 |doi=10.1103/PhysRevE.95.022140 |pmid=28297937 |volume=95 |issue=2 |page=022140 |journal=Physical Review E |bibcode=2017PhRvE..95b2140C |s2cid=6758362 |arxiv=1608.08651 }}</ref>
 
=== Continuous-time ===
A continuous-time recurrent neural network (CTRNN) uses a system of [[ordinary differential equations]] to model the effects on a neuron of the incoming inputs. They are typically analyzed by [[dynamical systems theory]]. Many RNN models in neuroscience are continuous-time.<ref name=":0" />
 
For a neuron <math>i</math> in the network with activation <math>y_{i}</math>, the rate of change of activation is given by:
:<math>\tau_{i}\dot{y}_{i}=-y_{i}+\sum_{j=1}^{n}w_{ji}\sigma(y_{j}-\Theta_{j})+I_{i}(t)</math>
where:
* <math>\tau_{i}</math> : time constant of the [[Synapse|postsynaptic]] node
* <math>y_{i}</math> : activation of the postsynaptic node
* <math>\dot{y}_{i}</math> : rate of change of activation of the postsynaptic node
* <math>w_{ji}</math> : weight of the connection from the presynaptic to the postsynaptic node
* <math>\sigma(x)</math> : [[Sigmoid function|sigmoid]] of ''x'', e.g. <math>\sigma(x) = 1/(1+e^{-x})</math>
* <math>y_{j}</math> : activation of the presynaptic node
* <math>\Theta_{j}</math> : bias of the presynaptic node
* <math>I_{i}(t)</math> : input (if any) to the node
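
A minimal numerical sketch of this equation, using forward-Euler integration, is shown below; the network size, weights, and input signal are arbitrary illustrations rather than part of any standard implementation.
<syntaxhighlight lang="python">
# Illustrative only: integrating the CTRNN equation above with a forward-Euler step.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n = 3                                 # number of neurons (arbitrary)
dt, steps = 0.01, 1000                # integration step and horizon (arbitrary)
tau = np.ones(n)                      # time constants tau_i
w = 0.5 * np.random.randn(n, n)       # w[j, i] = weight from presynaptic j to postsynaptic i
theta = np.zeros(n)                   # biases Theta_j
y = np.zeros(n)                       # activations y_i

for step in range(steps):
    I = np.zeros(n)                   # external input I_i(t); zero in this sketch
    dydt = (-y + w.T @ sigmoid(y - theta) + I) / tau
    y = y + dt * dydt                 # forward-Euler update
</syntaxhighlight>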
 
CTRNNs have been applied to [[evolutionary robotics]] where they have been used to address vision,<ref>{{citation |last1=Harvey |first1=Inman |title=3rd international conference on Simulation of adaptive behavior: from animals to animats 3 |pages=392–401 |year=1994 |contribution=Seeing the light: Artificial evolution, real vision |contribution-url=https://www.researchgate.net/publication/229091538_Seeing_the_Light_Artificial_Evolution_Real_Vision |last2=Husbands |first2=Phil |last3=Cliff |first3=Dave}}</ref> co-operation,<ref name="Evolving communication without dedicated communication channels">{{cite conference |last=Quinn |first=Matt |year=2001 |title=Evolving communication without dedicated communication channels |pages=357–366 |doi=10.1007/3-540-44811-X_38 |isbn=978-3-540-42567-0 |book-title=Advances in Artificial Life: 6th European Conference, ECAL 2001}}</ref> and minimal cognitive behaviour.<ref name="The dynamics of adaptive behavior: A research program">{{cite journal |last=Beer |first=Randall D. |year=1997 |title=The dynamics of adaptive behavior: A research program |journal=Robotics and Autonomous Systems |volume=20 |issue=2–4 |pages=257–289 |doi=10.1016/S0921-8890(96)00063-2}}</ref>
 
Note that, by the [[Shannon sampling theorem]], discrete-time recurrent neural networks can be viewed as continuous-time recurrent neural networks where the differential equations have transformed into equivalent [[difference equation]]s.<ref name="Sherstinsky-NeurIPS2018-CRACT-3">{{cite conference |last=Sherstinsky |first=Alex |date=2018-12-07 |editor-last=Bloem-Reddy |editor-first=Benjamin |editor2-last=Paige |editor2-first=Brooks |editor3-last=Kusner |editor3-first=Matt |editor4-last=Caruana |editor4-first=Rich |editor5-last=Rainforth |editor5-first=Tom |editor6-last=Teh |editor6-first=Yee Whye |title=Deriving the Recurrent Neural Network Definition and RNN Unrolling Using Signal Processing |url=https://www.researchgate.net/publication/331718291 |conference=Critiquing and Correcting Trends in Machine Learning Workshop at NeurIPS-2018 |conference-url=https://ml-critique-correct.github.io/}}</ref> This transformation can be thought of as occurring after the post-synaptic node activation functions <math>y_i(t)</math> have been [[Low-pass filter|low-pass filtered]] but prior to sampling.
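
As a simple illustration of such a transformation (one possible discretization, not the derivation used in the cited work), applying a forward-Euler step of size <math>\Delta t</math> to the equation above yields the difference equation
:<math>y_{i}(t+\Delta t)=y_{i}(t)+\frac{\Delta t}{\tau_{i}}\left(-y_{i}(t)+\sum_{j=1}^{n}w_{ji}\sigma\left(y_{j}(t)-\Theta_{j}\right)+I_{i}(t)\right),</math>
which, when <math>\Delta t=\tau_{i}</math>, reduces to the familiar discrete-time update <math>y_{i}(t+1)=\sum_{j=1}^{n}w_{ji}\sigma(y_{j}(t)-\Theta_{j})+I_{i}(t)</math>.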
 
==Related fields and models==
RNNs may behave [[chaos theory|chaotically]]. In such cases, [[dynamical systems theory]] may be used for analysis.

They are in fact [[recursive neural network]]s with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.
 
From a time-series perspective, RNNs can appear as nonlinear versions of [[finite impulse response]] and [[infinite impulse response]] filters and also as a [[nonlinear autoregressive exogenous model]] (NARX).<ref>{{cite journal |url={{google books |plainurl=y |id=830-HAAACAAJ |page=208}} |title=Computational Capabilities of Recurrent NARX Neural Networks |last1=Siegelmann |first1=Hava T. |last2=Horne |first2=Bill G. |last3=Giles |first3=C. Lee |journal=IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) |volume=27 |issue=2 |pages=208–15 |year=1995 |pmid=18255858 |doi=10.1109/3477.558801 |citeseerx=10.1.1.48.7468 }}</ref> An RNN has an infinite impulse response, whereas a [[convolutional neural network]] has a [[finite impulse response]]. Both classes of networks exhibit temporal [[dynamic system|dynamic behavior]].<ref>{{Cite journal |last=Miljanovic |first=Milos |date=Feb–Mar 2012 |title=Comparative analysis of Recurrent and Finite Impulse Response Neural Networks in Time Series Prediction |url=http://www.ijcse.com/docs/INDJCSE12-03-01-028.pdf |journal=Indian Journal of Computer and Engineering |volume=3 |issue=1}}</ref> A finite impulse recurrent network is a [[directed acyclic graph]] that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a [[directed cyclic graph]] that cannot be unrolled.
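
For reference, a NARX model relates the current output to past outputs and past values of an exogenous input <math>u</math>; a generic form (the notation here is illustrative) is
:<math>y(t)=F\bigl(y(t-1),\ldots,y(t-n_{y}),\,u(t-1),\ldots,u(t-n_{u})\bigr),</math>
where <math>F</math> is a nonlinear function, often approximated by a feedforward network.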
 
Memory-based learning for the recognition of sequences can also be implemented by a more biologically based model that uses the silencing mechanism exhibited in neurons with relatively high-frequency [[Action potential|spiking activity]].<ref>{{Cite journal |last1=Hodassman |first1=Shiri |last2=Meir |first2=Yuval |last3=Kisos |first3=Karin |last4=Ben-Noam |first4=Itamar |last5=Tugendhaft |first5=Yael |last6=Goldental |first6=Amir |last7=Vardi |first7=Roni |last8=Kanter |first8=Ido |date=2022-09-29 |title=Brain inspired neuronal silencing mechanism to enable reliable sequence identification |journal=Scientific Reports |volume=12 |issue=1 |pages=16003 |doi=10.1038/s41598-022-20337-x |pmid=36175466 |pmc=9523036 |arxiv=2203.13028 |bibcode=2022NatSR..1216003H |issn=2045-2322|doi-access=free }}</ref>
 
Additional stored states, with the storage under direct control of the network, can be added to both [[infinite impulse response|infinite-impulse]] and [[finite impulse response|finite-impulse]] networks. The storage can also be replaced by another network or graph if that incorporates time delays or has feedback loops. Such controlled states are referred to as gated states or gated memory and are part of [[long short-term memory]] networks (LSTMs) and [[gated recurrent unit]]s; networks of this kind are also called feedback neural networks (FNNs).
 
==Libraries==
Modern libraries provide runtime-optimized implementations of the above functionality or allow the slow loop to be sped up by [[just-in-time compilation]]; a minimal usage sketch is given after the list below.
* [[Apache Singa]]
* [[Caffe (software)|Caffe]]: Created by the Berkeley Vision and Learning Center (BVLC). It supports both CPU and GPU. Developed in [[C++]], and has [[Python (programming language)|Python]] and [[MATLAB]] wrappers.
* [[Chainer]]: The first stable deep learning library that supports dynamic, define-by-run neural networks. Written fully in Python, with production support for CPU, GPU, and distributed training.
* [[Deeplearning4j]]: Deep learning in [[Java (programming language)|Java]] and [[Scala (programming language)|Scala]] on multi-GPU-enabled [[Apache Spark|Spark]]. A general-purpose [http://deeplearning4j.org/ deep learning library] for the [[Java virtual machine|JVM]] production stack running on a [https://github.com/deeplearning4j/libnd4j C++ scientific computing engine]. Allows the creation of custom layers. Integrates with [[Hadoop]] and [[Apache Kafka|Kafka]].
*[[Flux (machine-learning framework)|Flux]]: includes interfaces for RNNs, including GRUs and LSTMs, written in [[Julia (programming language)|Julia]].
* [[Keras]]: High-level, easy-to-use API, providing a wrapper to many other deep learning libraries.
* [[Microsoft Cognitive Toolkit]]
* [[MXNet]]: an open-source deep learning framework used to train and deploy deep neural networks.
* [[PyTorch]]: Tensors and Dynamic neural networks in Python with strong GPU acceleration.
* [[TensorFlow]]: Apache 2.0-licensed Theano-like library with support for CPU, GPU, Google's proprietary [[Tensor processing unit|TPU]],<ref>{{cite news |url=https://www.wired.com/2016/05/google-tpu-custom-chips/ |first=Cade |last=Metz |newspaper=Wired |date=May 18, 2016 |title=Google Built Its Very Own Chips to Power Its AI Bots }}</ref> and mobile devices.
* [[Theano (software)|Theano]]: A deep-learning library for Python with an API largely compatible with the popular [[NumPy]] library. Allows the user to write symbolic mathematical expressions, then automatically generates their derivatives, saving the user from having to code gradients or backpropagation. These symbolic expressions are automatically compiled to CUDA code for a fast, on-the-GPU implementation.
* [[Torch (machine learning)|Torch]] ([http://www.torch.ch/ www.torch.ch]): A scientific computing framework with wide support for machine learning algorithms, written in [[C (programming language)|C]] and [[Lua (programming language)|Lua]]. The main author is Ronan Collobert, and it is now used at Facebook AI Research and Twitter.
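
As an illustration of how such libraries expose recurrent layers, the following minimal sketch uses [[PyTorch]] to create an LSTM layer and run a batch of random sequences through it; the layer sizes and data are arbitrary.
<syntaxhighlight lang="python">
# Illustrative only: a recurrent (LSTM) layer applied to a batch of random sequences.
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=1, batch_first=True)
x = torch.randn(32, 50, 10)   # 32 sequences, 50 time steps, 10 features per step
output, (h_n, c_n) = rnn(x)   # output: hidden state at every step; h_n, c_n: final states
print(output.shape)           # torch.Size([32, 50, 20])
</syntaxhighlight>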
 
==Applications==
Applications of recurrent neural networks include:
*[[Machine translation]]<ref name="sutskever2014"/>
*[[Robot control]]<ref>{{Cite book |last1=Mayer |first1=Hermann |last2=Gomez |first2=Faustino J. |last3=Wierstra |first3=Daan |last4=Nagy |first4=Istvan |last5=Knoll |first5=Alois |last6=Schmidhuber |first6=Jürgen |title=2006 IEEE/RSJ International Conference on Intelligent Robots and Systems |chapter=A System for Robotic Heart Surgery that Learns to Tie Knots Using Recurrent Neural Networks |date=October 2006 |pages=543–548 |doi=10.1109/IROS.2006.282190 |isbn=978-1-4244-0258-8 |citeseerx=10.1.1.218.3399 |s2cid=12284900 }}</ref>
*[[Time series prediction]]<ref>{{Cite conference |last1=Wierstra |first1=Daan |last2=Schmidhuber |first2=Jürgen |last3=Gomez |first3=Faustino J. |year=2005 |title=Evolino: Hybrid Neuroevolution/Optimal Linear Search for Sequence Learning |url=https://www.academia.edu/5830256 |book-title=Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh |pages=853–858 |oclc=62330637}}</ref><ref>{{cite arXiv |last=Petneházi |first=Gábor |title=Recurrent neural networks for time series forecasting |date=2019-01-01 |eprint=1901.00069 |class=cs.LG }}</ref><ref>{{cite journal |last1=Hewamalage |first1=Hansika |last2=Bergmeir |first2=Christoph |last3=Bandara |first3=Kasun |title=Recurrent Neural Networks for Time Series Forecasting: Current Status and Future Directions |journal=International Journal of Forecasting |year=2020 |volume=37 |pages=388–427 |doi=10.1016/j.ijforecast.2020.06.008 |arxiv=1909.00590 |s2cid=202540863 }}</ref>
*[[Speech recognition]]<ref>{{cite journal |last1=Graves |first1=Alex |last2=Schmidhuber |first2=Jürgen |year=2005 |title=Framewise phoneme classification with bidirectional LSTM and other neural network architectures |journal=Neural Networks |volume=18 |issue=5–6 |pages=602–610 |doi=10.1016/j.neunet.2005.06.042 |pmid=16112549 |citeseerx=10.1.1.331.5800 |s2cid=1856462 }}</ref><ref>{{Cite book |last1=Fernández |first1=Santiago |last2=Graves |first2=Alex |last3=Schmidhuber |first3=Jürgen |year=2007 |title=An Application of Recurrent Neural Networks to Discriminative Keyword Spotting |url=http://dl.acm.org/citation.cfm?id=1778066.1778092 |journal=Proceedings of the 17th International Conference on Artificial Neural Networks |series=ICANN'07 |___location=Berlin, Heidelberg |publisher=Springer-Verlag |pages=220–229 |isbn=978-3540746935 }}</ref><ref name="graves2013">{{cite conference |last1=Graves |first1=Alex |last2=Mohamed |first2=Abdel-rahman |last3=Hinton |first3=Geoffrey E. |title=Speech recognition with deep recurrent neural networks |book-title=2013 IEEE International Conference on Acoustics, Speech and Signal Processing |year=2013 |pages=6645–6649 |arxiv=1303.5778 |bibcode=2013arXiv1303.5778G |doi=10.1109/ICASSP.2013.6638947 |isbn=978-1-4799-0356-6 |s2cid=206741496 }}</ref>
*[[Speech synthesis]]<ref>{{Cite journal |last1=Chang |first1=Edward F. |last2=Chartier |first2=Josh |last3=Anumanchipalli |first3=Gopala K. |date=24 April 2019 |title=Speech synthesis from neural decoding of spoken sentences |journal=Nature |language=en |volume=568 |issue=7753 |pages=493–498 |doi=10.1038/s41586-019-1119-1 |pmid=31019317 |pmc=9714519 |issn=1476-4687 |bibcode=2019Natur.568..493A |s2cid=129946122 }}</ref>
*[[Brain–computer interfaces]]<ref>{{Cite journal |last1=Moses |first1=David A. |last2=Metzger |first2=Sean L. |last3=Liu |first3=Jessie R. |last4=Anumanchipalli |first4=Gopala K. |last5=Makin |first5=Joseph G. |last6=Sun |first6=Pengfei F. |last7=Chartier |first7=Josh |last8=Dougherty |first8=Maximilian E. |last9=Liu |first9=Patricia M. |last10=Abrams |first10=Gary M. |last11=Tu-Chan |first11=Adelyn |last12=Ganguly |first12=Karunesh |last13=Chang |first13=Edward F. |date=2021-07-15 |title=Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria |journal=New England Journal of Medicine |volume=385 |issue=3 |pages=217–227 |doi=10.1056/NEJMoa2027540 |pmc=8972947 |pmid=34260835 }}</ref>
*Time series anomaly detection<ref>{{Cite conference |last1=Malhotra |first1=Pankaj |last2=Vig |first2=Lovekesh |last3=Shroff |first3=Gautam |last4=Agarwal |first4=Puneet |date=April 2015 |title=Long Short Term Memory Networks for Anomaly Detection in Time Series |book-title=European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning ESANN 2015 |url=https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2015-56.pdf |publisher=Ciaco |pages=89–94 |isbn=978-2-87587-015-5 }}</ref>
*[[Text-to-Video model]]<ref>{{Cite web |title=Papers with Code - DeepHS-HDRVideo: Deep High Speed High Dynamic Range Video Reconstruction |url=https://paperswithcode.com/paper/deephs-hdrvideo-deep-high-speed-high-dynamic |access-date=2022-10-13 |website=paperswithcode.com }}</ref>
*Rhythm learning<ref name="peephole2002">{{cite journal |last1=Gers |first1=Felix A. |last2=Schraudolph |first2=Nicol N. |last3=Schmidhuber |first3=Jürgen |year=2002 |title=Learning precise timing with LSTM recurrent networks |url=http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf |journal=Journal of Machine Learning Research |volume=3 |pages=115–143 }}</ref>
*Music composition<ref>{{Cite book |last1=Eck |first1=Douglas |last2=Schmidhuber |first2=Jürgen |title=Artificial Neural Networks — ICANN 2002 |chapter=Learning the Long-Term Structure of the Blues |date=2002-08-28 |publisher=Springer |___location=Berlin, Heidelberg |pages=284–289 |doi=10.1007/3-540-46084-5_47 |isbn=978-3-540-46084-8 |series=Lecture Notes in Computer Science |volume=2415 |citeseerx=10.1.1.116.3620 }}</ref>
*Grammar learning<ref>{{cite journal |last1=Schmidhuber |first1=Jürgen |last2=Gers |first2=Felix A. |last3=Eck |first3=Douglas |year=2002 |title=Learning nonregular languages: A comparison of simple recurrent networks and LSTM |journal=Neural Computation |volume=14 |issue=9 |pages=2039–2041 |doi=10.1162/089976602320263980 |pmid=12184841 |citeseerx=10.1.1.11.7369 |s2cid=30459046 }}</ref><ref name="peepholeLSTM">{{cite journal |last1=Gers |first1=Felix A. |last2=Schmidhuber |first2=Jürgen |year=2001 |title=LSTM Recurrent Networks Learn Simple Context Free and Context Sensitive Languages |url=ftp://ftp.idsia.ch/pub/juergen/L-IEEE.pdf |journal=IEEE Transactions on Neural Networks |volume=12 |issue=6 |pages=1333–1340 |doi=10.1109/72.963769 |pmid=18249962 |s2cid=10192330 |archive-url=https://web.archive.org/web/20170706014426/ftp://ftp.idsia.ch/pub/juergen/L-IEEE.pdf |archive-date=2017-07-06 |url-status=dead |access-date=2017-12-12 }}</ref><ref>{{cite journal |last1=Pérez-Ortiz |first1=Juan Antonio |last2=Gers |first2=Felix A. |last3=Eck |first3=Douglas |last4=Schmidhuber |first4=Jürgen |year=2003 |title=Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets |journal=Neural Networks |volume=16 |issue=2 |pages=241–250 |doi=10.1016/s0893-6080(02)00219-8 |pmid=12628609 |citeseerx=10.1.1.381.1992 }}</ref>
*[[Handwriting recognition]]<ref>{{cite conference |first1=Alex |last1=Graves |first2=Jürgen |last2=Schmidhuber |title=Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks |book-title=Advances in Neural Information Processing Systems 22, NIPS'22 |pages=545–552 |___location=Vancouver (BC) |publisher=MIT Press |year=2009 |url=http://papers.neurips.cc/paper/3449-offline-handwriting-recognition-with-multidimensional-recurrent-neural-networks.pdf}}</ref><ref>{{Cite conference |last1=Graves |first1=Alex |last2=Fernández |first2=Santiago |last3=Liwicki |first3=Marcus |last4=Bunke |first4=Horst |last5=Schmidhuber |first5=Jürgen |year=2007 |title=Unconstrained Online Handwriting Recognition with Recurrent Neural Networks |url=http://dl.acm.org/citation.cfm?id=2981562.2981635 |book-title=Proceedings of the 20th International Conference on Neural Information Processing Systems |series=NIPS'07 |publisher=Curran Associates Inc. |pages=577–584 |isbn=978-1-60560-352-0 }}</ref>
*Human action recognition<ref>{{cite book |first1=Moez |last1=Baccouche |first2=Franck |last2=Mamalet |first3=Christian |last3=Wolf |first4=Christophe |last4=Garcia |first5=Atilla |last5=Baskurt |title=Human Behavior Understanding |chapter=Sequential Deep Learning for Human Action Recognition |editor-first1=Albert Ali |editor-last1=Salah |editor-first2=Bruno |editor-last2=Lepri |___location=Amsterdam, Netherlands |pages=29–39 |series=Lecture Notes in Computer Science |volume=7065 |publisher=Springer |year=2011 |doi=10.1007/978-3-642-25446-8_4 |isbn=978-3-642-25445-1 }}</ref>
*Protein homology detection<ref>{{Cite journal
| last1 = Hochreiter | first1 = Sepp
| doi-access = free
}}</ref>
*Predicting subcellular localization of proteins<ref name="ThireoReczko">{{Cite journal |last1=Thireou |first1=Trias |last2=Reczko |first2=Martin |date=July 2007 |title=Bidirectional Long Short-Term Memory Networks for Predicting the Subcellular Localization of Eukaryotic Proteins |journal=IEEE/ACM Transactions on Computational Biology and Bioinformatics |volume=4 |issue=3 |pages=441–446 |doi=10.1109/tcbb.2007.1015 |pmid=17666763 |s2cid=11787259}}</ref>
*Several prediction tasks in the area of business process management<ref>{{cite book |last1=Tax |first1=Niek |last2=Verenich |first2=Ilya |last3=La Rosa |first3=Marcello |last4=Dumas |first4=Marlon |title=Advanced Information Systems Engineering |chapter=Predictive Business Process Monitoring with LSTM Neural Networks |year=2017 |pages=477–492 |doi=10.1007/978-3-319-59536-8_30 |series=Lecture Notes in Computer Science |volume=10253 |isbn=978-3-319-59535-1 |arxiv=1612.02130 |s2cid=2192354 }}</ref>
*Prediction in medical care pathways<ref>{{cite journal |last1=Choi |first1=Edward |last2=Bahadori |first2=Mohammad Taha |last3=Schuetz |first3=Andy |last4=Stewart |first4=Walter F. |last5=Sun |first5=Jimeng |year=2016 |title=Doctor AI: Predicting Clinical Events via Recurrent Neural Networks |url=http://proceedings.mlr.press/v56/Choi16.html |journal=Proceedings of the 1st Machine Learning for Healthcare Conference |volume=56 |pages=301–318 |bibcode=2015arXiv151105942C |arxiv=1511.05942 |pmid=28286600 |pmc=5341604 }}</ref>
* Predictions of fusion plasma disruptions in reactors (Fusion Recurrent Neural Network (FRNN) code) <ref>{{Cite web |title=Artificial intelligence helps accelerate progress toward efficient fusion reactions |url=https://www.princeton.edu/news/2017/12/15/artificial-intelligence-helps-accelerate-progress-toward-efficient-fusion-reactions |access-date=2023-06-12 |website=Princeton University }}</ref>
 
==References==
 
==Further reading==
* {{cite book |last1=Mandic |first1=Danilo P. |last2=Chambers |first2=Jonathon A. |name-list-style=amp |title=Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability |publisher=Wiley |year=2001 |isbn=978-0-471-49517-8 }}
* {{Cite journal |last=Grossberg |first=Stephen |date=2013-02-22 |title=Recurrent Neural Networks |journal=Scholarpedia |volume=8 |issue=2 |pages=1888 |doi=10.4249/scholarpedia.1888 |doi-access=free |bibcode=2013SchpJ...8.1888G |issn=1941-6016}}
 
==External links==
*[http://www.idsia.ch/~juergen/rnn.html Recurrent Neural Networks] with over 60 RNN papers by [[Jürgen Schmidhuber]]'s group at [[Dalle Molle Institute for Artificial Intelligence Research]]
*[http://jsalatas.ictpro.gr/weka Elman Neural Network implementation] for [[WEKA]]
 
{{Artificial intelligence navbox}}
{{Differentiable computing}}
{{Authority control}}
 
{{DEFAULTSORT:Recurrent Neural Network}}
[[Category:Neural network architectures]]
[[Category:Artificial neural networks]]