Recurrent neural network: Difference between revisions

{{Short description|Class of artificial neural network}}
{{Distinguish|Recursive neural network|Feedback neural network}}
{{Machine learning|Neural networks}}
 
'''Recurrent neural networks''' ('''RNNs''') are a class of [[artificial neural network]]s designed for processing sequential data, such as text, speech, and [[time series]],<ref>{{Cite journal |last1=Tealab |first1=Ahmed |date=2018-12-01 |title=Time series forecasting using artificial neural networks methodologies: A systematic review |journal=Future Computing and Informatics Journal |volume=3 |issue=2 |pages=334–340 |doi=10.1016/j.fcij.2018.10.003 |issn=2314-7288 |doi-access=free}}</ref> where the order of elements is important. Unlike [[feedforward neural network]]s, which process inputs independently, RNNs utilize recurrent connections, where the output of a neuron at one time step is fed back as input to the network at the next time step. This enables RNNs to capture temporal dependencies and patterns within sequences.
 
The fundamental building block of RNNs is the '''recurrent unit''', which maintains a '''hidden state'''—a form of memory that is updated at each time step based on the current input and the previous hidden state. This feedback mechanism allows the network to learn from past inputs and incorporate that knowledge into its current processing. RNNs have been successfully applied to tasks such as unsegmented, connected [[handwriting recognition]],<ref>{{cite journal |last1=Graves |first1=Alex |author-link1=Alex Graves (computer scientist) |last2=Liwicki |first2=Marcus |last3=Fernandez |first3=Santiago |last4=Bertolami |first4=Roman |last5=Bunke |first5=Horst |last6=Schmidhuber |first6=Jürgen |author-link6=Jürgen Schmidhuber |year=2009 |title=A Novel Connectionist System for Improved Unconstrained Handwriting Recognition |url=http://www.idsia.ch/~juergen/tpami_2008.pdf |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=31 |issue=5 |pages=855–868 |citeseerx=10.1.1.139.4502 |doi=10.1109/tpami.2008.137 |pmid=19299860 |s2cid=14635907}}</ref> [[speech recognition]],<ref name="sak2014">{{Cite web |last1=Sak |first1=Haşim |last2=Senior |first2=Andrew |last3=Beaufays |first3=Françoise |year=2014 |title=Long Short-Term Memory recurrent neural network architectures for large scale acoustic modeling |url=https://research.google.com/pubs/archive/43905.pdf |publisher=Google Research}}</ref><ref name="liwu2015">{{cite arXiv |eprint=1410.4281 |class=cs.CL |first1=Xiangang |last1=Li |first2=Xihong |last2=Wu |title=Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition |date=2014-10-15}}</ref> [[natural language processing]], and [[neural machine translation]].<ref>{{Cite journal |last=Dupond |first=Samuel |date=2019 |title=<!-- for sure correct title? not found, nor in archive.org (for 2020-02-13), nor Volume correct? 
2019 is vol 47-48 and 41 from 2016--> A thorough review on the current advance of neural network structures. |url=https://www.sciencedirect.com/journal/annual-reviews-in-control |journal=Annual Reviews in Control |volume=14 |pages=200–230}}</ref><ref>{{Cite journal |last1=Abiodun |first1=Oludare Isaac |last2=Jantan |first2=Aman |last3=Omolara |first3=Abiodun Esther |last4=Dada |first4=Kemi Victoria |last5=Mohamed |first5=Nachaat Abdelatif |last6=Arshad |first6=Humaira |date=2018-11-01 |title=State-of-the-art in artificial neural network applications: A survey |journal=Heliyon |volume=4 |issue=11 |pages=e00938 |bibcode=2018Heliy...400938A |doi=10.1016/j.heliyon.2018.e00938 |issn=2405-8440 |pmc=6260436 |pmid=30519653 |doi-access=free}}</ref>
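The hidden-state update described above can be sketched as a minimal tanh recurrent unit. This is an illustrative sketch, not the implementation of any particular library: the sizes, random weights, and inputs below are assumptions chosen for the example.

```python
import numpy as np

# A minimal sketch of a recurrent unit: the hidden state is updated at each
# time step from the current input and the previous hidden state.
rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # recurrent (feedback) weights
b_h = np.zeros(n_hidden)

def step(x, h_prev):
    """One time step: new hidden state from current input and previous state."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Process a sequence of 5 inputs, carrying the hidden state forward in time.
h = np.zeros(n_hidden)
for x in rng.normal(size=(5, n_in)):
    h = step(x, h)

print(h.shape)  # (4,)
```

Because `h` is threaded through every call to `step`, the state after the last input depends on the entire sequence, which is what lets the network capture temporal dependencies.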
 
However, traditional RNNs suffer from the [[vanishing gradient problem]], which limits their ability to learn long-range dependencies. This issue was addressed by the development of the [[long short-term memory]] (LSTM) architecture in 1997, which became the standard RNN variant for handling long-term dependencies. Later, [[gated recurrent unit]]s (GRUs) were introduced as a more computationally efficient alternative.
 
In recent years, [[Transformer (deep learning architecture)|transformers]], which rely on self-attention mechanisms instead of recurrence, have become the dominant architecture for many sequence-processing tasks, particularly in natural language processing, due to their superior handling of long-range dependencies and greater parallelizability. Nevertheless, RNNs remain relevant for applications where computational efficiency, real-time processing, or the inherent sequential nature of data is crucial.
 
=== Before modern ===
One origin of RNNs was neuroscience. The word "recurrent" is used to describe loop-like structures in [[anatomy]]. In 1901, [[Santiago Ramón y Cajal|Cajal]] observed "recurrent semicircles" in the [[Cerebellum|cerebellar cortex]] formed by [[parallel fiber]]s, [[Purkinje cell]]s, and [[granule cell]]s.<ref>{{Cite journal |last1=Espinosa-Sanchez |first1=Juan Manuel |last2=Gomez-Marin |first2=Alex |last3=de Castro |first3=Fernando |date=2023-07-05 |title=The Importance of Cajal's and Lorente de Nó's Neuroscience to the Birth of Cybernetics |url=http://journals.sagepub.com/doi/10.1177/10738584231179932 |journal=The Neuroscientist |volume=31 |issue=1 |pages=14–30 |language=en |doi=10.1177/10738584231179932 |pmid=37403768 |hdl=10261/348372 |issn=1073-8584|hdl-access=free }}</ref><ref>{{Cite book |last=Ramón y Cajal |first=Santiago |url=https://archive.org/details/b2129592x_0002/page/n159/mode/2up |title=Histologie du système nerveux de l'homme & des vertébrés |date=1909 |publisher=Paris : A. Maloine |others=Foyle Special Collections Library King's College London |volume=II |pages=149}}</ref> In 1933, [[Rafael Lorente de Nó|Lorente de Nó]] discovered "recurrent, reciprocal connections" by [[Golgi's method]], and proposed that excitatory loops explain certain aspects of the [[vestibulo-ocular reflex]].<ref>{{Cite journal |last=de NÓ |first=R. Lorente |date=1933-08-01 |title=Vestibulo-Ocular Reflex Arc |url=http://archneurpsyc.jamanetwork.com/article.aspx?doi=10.1001/archneurpsyc.1933.02240140009001 |journal=Archives of Neurology and Psychiatry |volume=30 |issue=2 |pages=245 |doi=10.1001/archneurpsyc.1933.02240140009001 |issn=0096-6754|url-access=subscription }}</ref><ref>{{Cite journal |last=Larriva-Sahd |first=Jorge A. 
|date=2014-12-03 |title=Some predictions of Rafael Lorente de Nó 80 years later |journal=Frontiers in Neuroanatomy |volume=8 |pages=147 |doi=10.3389/fnana.2014.00147 |doi-access=free |issn=1662-5129 |pmc=4253658 |pmid=25520630}}</ref> During the 1940s, multiple researchers proposed the existence of feedback in the brain, in contrast to the previous understanding of the neural system as a purely feedforward structure. [[Donald O. Hebb|Hebb]] considered the "reverberating circuit" as an explanation for short-term memory.<ref>{{Cite web |title=reverberating circuit |url=https://www.oxfordreference.com/display/10.1093/oi/authority.20110803100417461 |access-date=2024-07-27 |website=Oxford Reference }}</ref> The McCulloch and Pitts paper (1943), which proposed the [[McCulloch-Pitts neuron]] model, considered networks that contain cycles. The current activity of such networks can be affected by activity indefinitely far in the past.<ref>{{Cite journal |last1=McCulloch |first1=Warren S. |last2=Pitts |first2=Walter |date=December 1943 |title=A logical calculus of the ideas immanent in nervous activity |url=http://link.springer.com/10.1007/BF02478259 |journal=The Bulletin of Mathematical Biophysics |volume=5 |issue=4 |pages=115–133 |doi=10.1007/BF02478259 |issn=0007-4985|url-access=subscription }}</ref> Both authors were interested in closed loops as possible explanations for conditions such as [[epilepsy]] and [[Complex regional pain syndrome|causalgia]].<ref>{{Cite journal |last1=Moreno-Díaz |first1=Roberto |last2=Moreno-Díaz |first2=Arminda |date=April 2007 |title=On the legacy of W.S. 
McCulloch |url=https://linkinghub.elsevier.com/retrieve/pii/S0303264706002152 |journal=Biosystems |volume=88 |issue=3 |pages=185–190 |doi=10.1016/j.biosystems.2006.08.010|pmid=17184902 |bibcode=2007BiSys..88..185M |url-access=subscription }}</ref><ref>{{Cite journal |last=Arbib |first=Michael A |date=December 2000 |title=Warren McCulloch's Search for the Logic of the Nervous System |url=https://muse.jhu.edu/article/46496 |journal=Perspectives in Biology and Medicine |volume=43 |issue=2 |pages=193–216 |doi=10.1353/pbm.2000.0001 |pmid=10804585 |issn=1529-8795|url-access=subscription }}</ref> [[Renshaw cell|Recurrent inhibition]] was proposed in 1946 as a negative feedback mechanism in motor control. Neural feedback loops were a common topic of discussion at the [[Macy conferences]].<ref>{{Cite journal |last=Renshaw |first=Birdsey |date=1946-05-01 |title=Central Effects of Centripetal Impulses in Axons of Spinal Ventral Roots |url=https://www.physiology.org/doi/10.1152/jn.1946.9.3.191 |journal=Journal of Neurophysiology |volume=9 |issue=3 |pages=191–204 |doi=10.1152/jn.1946.9.3.191 |pmid=21028162 |issn=0022-3077|url-access=subscription }}</ref> See <ref name=":0">{{Cite journal |last=Grossberg |first=Stephen |date=2013-02-22 |title=Recurrent Neural Networks |journal=Scholarpedia |volume=8 |issue=2 |pages=1888 |doi=10.4249/scholarpedia.1888 |doi-access=free |bibcode=2013SchpJ...8.1888G |issn=1941-6016}}</ref> for an extensive review of recurrent neural network models in neuroscience.[[File:Typical_connections_in_a_close-loop_cross-coupled_perceptron.png|thumb|A close-loop cross-coupled perceptron network.<ref name=":1" />{{Pg|page=403|___location=Fig. 47}}.]]
[[Frank Rosenblatt]] in 1960 published "close-loop cross-coupled perceptrons", which are 3-layered [[perceptron]] networks whose middle layer contains recurrent connections that change by a [[Hebbian theory|Hebbian learning]] rule.<ref>F. Rosenblatt, "[[iarchive:SelfOrganizingSystems/page/n87/mode/1up|Perceptual Generalization over Transformation Groups]]", pp. 63--100 in ''Self-organizing Systems: Proceedings of an Inter-disciplinary Conference, 5 and 6 May 1959''. Edited by Marshall C. Yovitz and Scott Cameron. London, New York, [etc.], Pergamon Press, 1960. ix, 322 p.</ref>{{Pg|pages=73-75}} Later, in ''Principles of Neurodynamics'' (1961), he described "closed-loop cross-coupled" and "back-coupled" perceptron networks, and made theoretical and experimental studies of Hebbian learning in these networks,<ref name=":1">{{Cite book |last=Rosenblatt |first=Frank |url=https://archive.org/details/DTIC_AD0256582/page/n3/mode/2up |title=DTIC AD0256582: PRINCIPLES OF NEURODYNAMICS. PERCEPTRONS AND THE THEORY OF BRAIN MECHANISMS |date=1961-03-15 |publisher=Defense Technical Information Center |language=english}}</ref>{{Pg|___location=Chapter 19, 21}} and noted that a fully cross-coupled perceptron network is equivalent to an infinitely deep feedforward network.<ref name=":1" />{{Pg|___location=Section 19.11}}
 
Similar networks were published by Kaoru Nakano in 1971,<ref name="Nakano1971">{{cite book |last1=Nakano |first1=Kaoru |title=Pattern Recognition and Machine Learning |date=1971 |isbn=978-1-4615-7568-9 |pages=172–186 |chapter=Learning Process in a Model of Associative Memory |doi=10.1007/978-1-4615-7566-5_15}}</ref><ref name="Nakano1972">{{cite journal |last1=Nakano |first1=Kaoru |date=1972 |title=Associatron-A Model of Associative Memory |journal=IEEE Transactions on Systems, Man, and Cybernetics |volume=SMC-2 |issue=3 |pages=380–388 |doi=10.1109/TSMC.1972.4309133}}</ref> [[Shun'ichi Amari]] in 1972,<ref name="Amari1972">{{cite journal |last1=Amari |first1=Shun-Ichi |date=1972 |title=Learning patterns and pattern sequences by self-organizing nets of threshold elements |journal=IEEE Transactions |volume=C |issue=21 |pages=1197–1206}}</ref> and {{ill|William A. Little (physicist)|lt=William A. Little|de|William A. Little}} in 1974,<ref name="little74">{{cite journal |last=Little |first=W. A. |year=1974 |title=The Existence of Persistent States in the Brain |journal=Mathematical Biosciences |volume=19 |issue=1–2 |pages=101–120 |doi=10.1016/0025-5564(74)90031-5}}</ref> who was acknowledged by Hopfield in his 1982 paper.
 
Another origin of RNNs was [[statistical mechanics]]. The [[Ising model]] was developed by [[Wilhelm Lenz]]<ref name="lenz1920">{{Citation |last=Lenz |first=W. |title=Beiträge zum Verständnis der magnetischen Eigenschaften in festen Körpern |journal=Physikalische Zeitschrift |volume=21 |pages=613–615 |year=1920 |postscript=. |author-link=Wilhelm Lenz}}</ref> and [[Ernst Ising]]<ref name="ising1925">{{citation |last=Ising |first=E. |title=Beitrag zur Theorie des Ferromagnetismus |journal=Z. Phys. |volume=31 |issue=1 |pages=253–258 |year=1925 |bibcode=1925ZPhy...31..253I |doi=10.1007/BF02980577 |s2cid=122157319}}</ref> in the 1920s<ref>{{cite journal |last1=Brush |first1=Stephen G. |year=1967 |title=History of the Lenz-Ising Model |journal=Reviews of Modern Physics |volume=39 |issue=4 |pages=883–893 |bibcode=1967RvMP...39..883B |doi=10.1103/RevModPhys.39.883}}</ref> as a simple statistical mechanical model of magnets at equilibrium. [[Roy J. Glauber|Glauber]] in 1963 studied the Ising model evolving in time, as a process towards equilibrium ([[Glauber dynamics]]), thereby adding the component of time to the model.<ref name=":22">{{cite journal |last1=Glauber |first1=Roy J. |date=February 1963 |title=Time-Dependent Statistics of the Ising Model |url=https://aip.scitation.org/doi/abs/10.1063/1.1703954 |journal=Journal of Mathematical Physics |volume=4 |issue=2 |pages=294–307 |doi=10.1063/1.1703954 |access-date=2021-03-21|url-access=subscription }}</ref>
 
The [[Spin glass|Sherrington–Kirkpatrick model]] of spin glass, published in 1975,<ref>{{Cite journal |last1=Sherrington |first1=David |last2=Kirkpatrick |first2=Scott |date=1975-12-29 |title=Solvable Model of a Spin-Glass |url=https://link.aps.org/doi/10.1103/PhysRevLett.35.1792 |journal=Physical Review Letters |volume=35 |issue=26 |pages=1792–1796 |doi=10.1103/PhysRevLett.35.1792 |bibcode=1975PhRvL..35.1792S |issn=0031-9007|url-access=subscription }}</ref> is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In the 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions.<ref name="Hopfield19822">{{cite journal |last1=Hopfield |first1=J. J. |date=1982 |title=Neural networks and physical systems with emergent collective computational abilities |journal=Proceedings of the National Academy of Sciences |volume=79 |issue=8 |pages=2554–2558 |bibcode=1982PNAS...79.2554H |doi=10.1073/pnas.79.8.2554 |pmc=346238 |pmid=6953413 |doi-access=free}}</ref> In a 1984 paper he extended this to continuous activation functions.<ref name=":02">{{cite journal |last1=Hopfield |first1=J. J. |date=1984 |title=Neurons with graded response have collective computational properties like those of two-state neurons |journal=Proceedings of the National Academy of Sciences |volume=81 |issue=10 |pages=3088–3092 |bibcode=1984PNAS...81.3088H |doi=10.1073/pnas.81.10.3088 |pmc=345226 |pmid=6587342 |doi-access=free}}</ref> It became a standard model for the study of neural networks through statistical mechanics.<ref>{{Cite book |last1=Engel |first1=A. |title=Statistical mechanics of learning |last2=Broeck |first2=C. van den |date=2001 |publisher=Cambridge University Press |isbn=978-0-521-77307-2 |___location=Cambridge, UK; New York, NY}}</ref><ref>{{Cite journal |last1=Seung |first1=H. S. 
|last2=Sompolinsky |first2=H. |last3=Tishby |first3=N. |date=1992-04-01 |title=Statistical mechanics of learning from examples |url=https://journals.aps.org/pra/abstract/10.1103/PhysRevA.45.6056 |journal=Physical Review A |volume=45 |issue=8 |pages=6056–6091 |doi=10.1103/PhysRevA.45.6056|pmid=9907706 |bibcode=1992PhRvA..45.6056S }}</ref>
 
===Modern===
Modern RNNs are mainly based on two architectures: LSTM and BRNN.<ref>{{Cite book |last1=Zhang |first1=Aston |title=Dive into deep learning |last2=Lipton |first2=Zachary |last3=Li |first3=Mu |last4=Smola |first4=Alexander J. |date=2024 |publisher=Cambridge University Press |isbn=978-1-009-38943-3 |___location=Cambridge New York Port Melbourne New Delhi Singapore |chapter=10. Modern Recurrent Neural Networks |chapter-url=https://d2l.ai/chapter_recurrent-modern/index.html}}</ref>
 
With the resurgence of neural networks in the 1980s, recurrent networks were studied again. They were sometimes called "iterated nets".<ref>{{Cite journal |last1=Rumelhart |first1=David E. |last2=Hinton |first2=Geoffrey E. |last3=Williams |first3=Ronald J. |date=October 1986 |title=Learning representations by back-propagating errors |url=https://www.nature.com/articles/323533a0 |journal=Nature |language=en |volume=323 |issue=6088 |pages=533–536 |doi=10.1038/323533a0 |bibcode=1986Natur.323..533R |issn=1476-4687|url-access=subscription }}</ref> Two early influential works were the [[#Jordan network|Jordan network]] (1986) and the [[#Elman network|Elman network]] (1990), which applied RNNs to the study of [[cognitive psychology]]. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent [[Layer (deep learning)|layers]] in an RNN unfolded in time.<ref name="schmidhuber1993">{{Cite book |last=Schmidhuber |first=Jürgen |url=ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf |title=Habilitation thesis: System modeling and optimization |year=1993}}{{Dead link|date=June 2024|bot=InternetArchiveBot|fix-attempted=yes}} Page 150 ff demonstrates credit assignment across the equivalent of 1,200 layers in an unfolded RNN.</ref>
 
[[Long short-term memory]] (LSTM) networks were invented by [[Sepp Hochreiter|Hochreiter]] and [[Jürgen Schmidhuber|Schmidhuber]] in 1995 and set accuracy records in multiple application domains.<ref>{{Cite Q|Q98967430}}</ref><ref name="lstm">{{Cite journal |last1=Hochreiter |first1=Sepp |author-link=Sepp Hochreiter |last2=Schmidhuber |first2=Jürgen |date=1997-11-01 |title=Long Short-Term Memory |journal=Neural Computation |volume=9 |issue=8 |pages=1735–1780 |doi=10.1162/neco.1997.9.8.1735|pmid=9377276 |s2cid=1915014 }}</ref> LSTM became the default choice of RNN architecture.
 
[[Bidirectional recurrent neural networks]] (BRNN) use two RNNs that process the same input in opposite directions.<ref name="Schuster">Schuster, Mike, and Kuldip K. Paliwal. "[https://www.researchgate.net/profile/Mike_Schuster/publication/3316656_Bidirectional_recurrent_neural_networks/links/56861d4008ae19758395f85c.pdf Bidirectional recurrent neural networks]." Signal Processing, IEEE Transactions on 45.11 (1997): 2673-2681.</ref> These two are often combined, giving the bidirectional LSTM architecture.
 
Around 2006, bidirectional LSTM started to revolutionize [[speech recognition]], outperforming traditional models in certain speech applications.<ref>{{Cite journal |last1=Graves |first1=Alex |last2=Schmidhuber |first2=Jürgen |date=2005-07-01 |title=Framewise phoneme classification with bidirectional LSTM and other neural network architectures |journal=Neural Networks |series=IJCNN 2005 |volume=18 |issue=5 |pages=602–610 |citeseerx=10.1.1.331.5800 |doi=10.1016/j.neunet.2005.06.042 |pmid=16112549 |s2cid=1856462}}</ref><ref name="fernandez2007keyword">{{Cite conference |last1=Fernández |first1=Santiago |last2=Graves |first2=Alex |last3=Schmidhuber |first3=Jürgen |year=2007 |title=An Application of Recurrent Neural Networks to Discriminative Keyword Spotting |url=http://dl.acm.org/citation.cfm?id=1778066.1778092 |book-title=Proceedings of the 17th International Conference on Artificial Neural Networks |series=ICANN'07 |___location=Berlin, Heidelberg |publisher=Springer-Verlag |pages=220–229 |isbn=978-3-540-74693-5 }}</ref> They also improved large-vocabulary speech recognition<ref name="sak2014" /><ref name="liwu2015" /> and [[text-to-speech]] synthesis<ref name="fan2015">{{cite conference |last1=Fan |first1=Bo |last2=Wang |first2=Lijuan |last3=Soong |first3=Frank K. 
|last4=Xie |first4=Lei |title=Photo-Real Talking Head with Deep Bidirectional LSTM |chapter-url= |editor= |book-title=Proceedings of ICASSP 2015 IEEE International Conference on Acoustics, Speech and Signal Processing |doi=10.1109/ICASSP.2015.7178899 |date=2015 |isbn=978-1-4673-6997-8 |pages=4884–8 }}</ref> and was used in [[Google Voice Search|Google voice search]], and dictation on [[Android (operating system)|Android devices]].<ref name="sak2015">{{Cite web |url=http://googleresearch.blogspot.ch/2015/09/google-voice-search-faster-and-more.html |title=Google voice search: faster and more accurate |last1=Sak |first1=Haşim |last2=Senior |first2=Andrew |date=September 2015 |last3=Rao |first3=Kanishka |last4=Beaufays |first4=Françoise |last5=Schalkwyk |first5=Johan}}</ref> They broke records for improved [[machine translation]],<ref name="sutskever2014">{{Cite journal |last1=Sutskever |first1=Ilya |last2=Vinyals |first2=Oriol |last3=Le |first3=Quoc V. |year=2014 |title=Sequence to Sequence Learning with Neural Networks |url=https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf |journal=Electronic Proceedings of the Neural Information Processing Systems Conference |volume=27 |page=5346 |arxiv=1409.3215 |bibcode=2014arXiv1409.3215S }}</ref> [[Language Modeling|language modeling]]<ref name="vinyals2016">{{cite arXiv |last1=Jozefowicz |first1=Rafal |last2=Vinyals |first2=Oriol |last3=Schuster |first3=Mike |last4=Shazeer |first4=Noam |last5=Wu |first5=Yonghui |date=2016-02-07 |title=Exploring the Limits of Language Modeling |eprint=1602.02410 |class=cs.CL}}</ref> and Multilingual Language Processing.<ref name="gillick2015">{{cite arXiv |last1=Gillick |first1=Dan |last2=Brunk |first2=Cliff |last3=Vinyals |first3=Oriol |last4=Subramanya |first4=Amarnag |date=2015-11-30 |title=Multilingual Language Processing From Bytes |eprint=1512.00103 |class=cs.CL}}</ref> Also, LSTM combined with [[convolutional neural network]]s (CNNs) improved 
[[automatic image captioning]].<ref name="vinyals2015">{{cite arXiv |last1=Vinyals |first1=Oriol |last2=Toshev |first2=Alexander |last3=Bengio |first3=Samy |last4=Erhan |first4=Dumitru |date=2014-11-17 |title=Show and Tell: A Neural Image Caption Generator |eprint=1411.4555 |class=cs.CV }}</ref>
 
==Configurations==
{{main|Layer (deep learning)}}

An RNN-based model can be factored into two parts: configuration and architecture. Multiple RNNs can be combined in a data flow, and the data flow itself is the configuration. Each RNN itself may have any architecture, including LSTM, GRU, etc.
 
=== Standard ===
 
=== Stacked RNN ===
[[File:Stacked_RNN.png|thumb|Stacked RNN.]]A '''stacked RNN''', or '''deep RNN''', is composed of multiple RNNs stacked one above the other. Abstractly, it is structured as follows:
 
# Layer 1 has hidden vector <math>h_{1, t}</math>, parameters <math>\theta_1</math>, and maps <math>f_{\theta_1} : (x_{0, t}, h_{1, t}) \mapsto (x_{1, t}, h_{1, t+1}) </math>.
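The layer-stacking scheme above, where each layer's output sequence becomes the next layer's input sequence, can be sketched as follows. This is an illustrative sketch with generic tanh cells; the sizes, depth, and random weights are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_cell(n_in, n_hidden):
    """A generic tanh recurrent cell mapping (x_{k-1,t}, h_{k,t}) to (x_{k,t}, h_{k,t+1})."""
    W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))
    W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
    def f(x, h):
        h_new = np.tanh(W_xh @ x + W_hh @ h)
        return h_new, h_new  # output passed upward, new hidden state kept
    return f

n_in, n_hidden, depth, T = 3, 4, 2, 5
cells = [make_cell(n_in if k == 0 else n_hidden, n_hidden) for k in range(depth)]
hidden = [np.zeros(n_hidden) for _ in range(depth)]

xs = rng.normal(size=(T, n_in))
outputs = []
for x in xs:
    for k, cell in enumerate(cells):
        x, hidden[k] = cell(x, hidden[k])  # layer k's output feeds layer k+1
    outputs.append(x)

print(len(outputs), outputs[0].shape)  # 5 (4,)
```

Each layer keeps its own hidden state, so the stack is recurrent in time at every depth while information also flows upward within each time step.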
===Bidirectional===
{{Main|Bidirectional recurrent neural networks}}
[[File:Bidirectional_RNN.png|thumb|Bidirectional RNN.]]A '''bidirectional RNN''' (biRNN) is composed of two RNNs, one processing the input sequence in one direction, and another in the opposite direction. Abstractly, it is structured as follows:
 
* The forward RNN processes in one direction: <math display="block">f_{\theta}(x_0, h_0) = (y_0, h_{1}), f_{\theta}(x_1, h_1) = (y_1, h_{2}), \dots</math>
=== Encoder-decoder ===
{{Main|seq2seq}}
[[File:Decoder RNN.png|thumb|A decoder without an encoder.]]
[[File:Seq2seq_RNN_encoder-decoder_with_attention_mechanism,_training_and_inferring.png|thumb|Encoder-decoder RNN without attention mechanism.]]
[[File:Seq2seq_RNN_encoder-decoder_with_attention_mechanism,_training.png|thumb|Encoder-decoder RNN with attention mechanism.]]
 
Two RNNs can be run front-to-back in an '''encoder-decoder''' configuration. The encoder RNN processes an input sequence into a sequence of hidden vectors, and the decoder RNN processes the sequence of hidden vectors into an output sequence, with an optional [[Attention (machine learning)|attention mechanism]]. This configuration was used to construct state-of-the-art [[Neural machine translation|neural machine translators]] during the 2014–2017 period, and was an instrumental step towards the development of [[Transformer (deep learning architecture)|transformers]].<ref>{{Cite journal |last1=Vaswani |first1=Ashish |last2=Shazeer |first2=Noam |last3=Parmar |first3=Niki |last4=Uszkoreit |first4=Jakob |last5=Jones |first5=Llion |last6=Gomez |first6=Aidan N |last7=Kaiser |first7=Łukasz |last8=Polosukhin |first8=Illia |date=2017 |title=Attention is All you Need |url=https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=Curran Associates, Inc. |volume=30}}</ref>
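The encoder-decoder configuration can be sketched as two recurrent passes: the encoder compresses the input sequence into its final hidden state, which then seeds the decoder. This is a minimal sketch without attention; the sizes, random weights, and the input-free decoder recurrence are simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hidden, n_out = 3, 4, 2

We_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # encoder input weights
We_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # encoder recurrence
Wd_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # decoder recurrence
Wd_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))     # decoder readout

def encode(xs):
    """Run the encoder over the input sequence; return the final hidden state."""
    h = np.zeros(n_hidden)
    for x in xs:
        h = np.tanh(We_xh @ x + We_hh @ h)
    return h  # fixed-size summary of the whole input sequence

def decode(h, steps):
    """Unroll the decoder from the encoder's summary, emitting one output per step."""
    ys = []
    for _ in range(steps):
        h = np.tanh(Wd_hh @ h)  # decoder recurrence (no input feeding, for brevity)
        ys.append(Wd_hy @ h)    # linear readout at each step
    return ys

context = encode(rng.normal(size=(6, n_in)))
outputs = decode(context, steps=4)
print(len(outputs), outputs[0].shape)  # 4 (2,)
```

The fixed-size `context` vector is the bottleneck that attention mechanisms were later introduced to relieve, by letting the decoder consult all encoder states rather than only the last one.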
 
===Fully recurrent ===
[[File:Hopfield-net-vector.svg|thumb|A fully connected RNN with 4 neurons.]]
'''Fully recurrent neural networks''' (FRNN) connect the outputs of all neurons to the inputs of all neurons. In other words, such a network is a [[fully connected network]]. This is the most general neural network topology, because all other topologies can be represented by setting some connection weights to zero to simulate the lack of connections between those neurons.
[[File:RNN architecture.png|thumb|A simple Elman network where <math>\sigma_h = \tanh, \sigma_y = \text{Identity} </math>.]]
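The generality claim above can be illustrated directly: any sparser topology is a fully recurrent weight matrix with some entries fixed at zero. The sizes, random weights, and the ring topology below are assumptions chosen only to make the point concrete.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
W_full = rng.normal(size=(n, n))  # fully recurrent: every neuron feeds every neuron

# Simulate a sparser topology (a ring: neuron i receives only from neuron i-1)
# by zeroing the weights of all absent connections.
mask = np.zeros((n, n))
for i in range(n):
    mask[i, (i - 1) % n] = 1.0
W_ring = W_full * mask  # the fully recurrent network restricted to a ring

# One update step of the masked network behaves like a ring-topology RNN.
h = np.tanh(W_ring @ rng.normal(size=n))
print(h.shape)  # (4,)
```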
 
===Hopfield ===
{{Main|Bidirectional associative memory}}
 
Introduced by [[Bart Kosko]],<ref>{{cite journal |year=1988 |title=Bidirectional associative memories |journal=IEEE Transactions on Systems, Man, and Cybernetics |volume=18 |issue=1 |pages=49–60 |doi=10.1109/21.87054 |last1=Kosko |first1=Bart |s2cid=59875735 }}</ref> a bidirectional associative memory (BAM) network is a variant of a Hopfield network that stores associative data as a vector. The bidirectionality comes from passing information through a matrix and its [[transpose]]. Typically, [[bipolar encoding]] is preferred to binary encoding of the associative pairs. Recently, stochastic BAM models using [[Markov chain|Markov]] stepping have been optimized for increased network stability and relevance to real-world applications.<ref>{{cite journal |last1=Rakkiyappan |first1=Rajan |last2=Chandrasekar |first2=Arunachalam |last3=Lakshmanan |first3=Subramanian |last4=Park |first4=Ju H. |date=2 January 2015 |title=Exponential stability for markovian jumping stochastic BAM neural networks with mode-dependent probabilistic time-varying delays and impulse control |journal=Complexity |volume=20 |issue=3 |pages=39–65 |doi=10.1002/cplx.21503 |bibcode=2015Cmplx..20c..39R }}</ref>
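The matrix-and-transpose recall described above can be sketched with a Hebbian-style storage rule over bipolar patterns. The two stored pairs below are toy assumptions; real BAM capacity and convergence depend on the patterns chosen.

```python
import numpy as np

def sign(v):
    """Bipolar thresholding: map each component to +1 or -1."""
    return np.where(v >= 0, 1, -1)

# Two illustrative bipolar pattern pairs (x_i, y_i) to associate.
X = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])
Y = np.array([[1, -1, 1], [-1, 1, 1]])

# Hebbian-style storage: one matrix holds all associations.
M = sum(np.outer(x, y) for x, y in zip(X, Y))

# Recall: drive one layer with a stored pattern and bounce activity
# between the layers through M and its transpose until stable.
x = X[0].copy()
for _ in range(5):
    y = sign(M.T @ x)  # forward pass through M
    x = sign(M @ y)    # backward pass through the transpose

print(y)  # recovers the associated pattern Y[0] = [1, -1, 1]
```

Because either layer can be driven as the input, the same matrix recalls `y` from `x` or `x` from `y`, which is the bidirectionality the text describes.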
 
A BAM network has two layers, either of which can be driven as an input to recall an association and produce an output on the other layer.<ref>{{cite book
{{Main|Recursive neural network}}
 
A '''[[recursive neural network]]'''<ref>{{cite book |last1=Goller |first1=Christoph |title=Proceedings of International Conference on Neural Networks (ICNN'96) |last2=Küchler |first2=Andreas |year=1996 |isbn=978-0-7803-3210-2 |volume=1 |page=347 |chapter=Learning task-dependent distributed representations by backpropagation through structure |citeseerx=10.1.1.52.4759 |doi=10.1109/ICNN.1996.548916 |s2cid=6536466}}</ref> is created by applying the same set of weights [[recursion|recursively]] over a differentiable graph-like structure by traversing the structure in [[topological sort|topological order]]. Such networks are typically also trained by the reverse mode of [[automatic differentiation]].<ref name="lin1970">{{cite thesis |first=Seppo |last=Linnainmaa |author-link=Seppo Linnainmaa |year=1970 |title=The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors |type=MSc |language=fi |publisher=University of Helsinki}}</ref><ref name="grie2008">{{cite book |last1=Griewank |first1=Andreas |url={{google books |plainurl=y |id=xoiiLaRxcbEC}} |title=Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation |last2=Walther |first2=Andrea |author2-link=Andrea Walther |publisher=SIAM |year=2008 |isbn=978-0-89871-776-1 |edition=Second}}</ref> They can process [[distributed representation]]s of structure, such as [[mathematical logic|logical terms]]. A special case of recursive neural networks is the RNN whose structure corresponds to a linear chain. Recursive neural networks have been applied to [[natural language processing]].<ref>{{citation |last1=Socher |first1=Richard |title=28th International Conference on Machine Learning (ICML 2011) |contribution=Parsing Natural Scenes and Natural Language with Recursive Neural Networks |contribution-url=https://ai.stanford.edu/~ang/papers/icml11-ParsingWithRecursiveNeuralNetworks.pdf |last2=Lin |first2=Cliff |last3=Ng |first3=Andrew Y. 
|last4=Manning |first4=Christopher D.}}</ref> The ''recursive neural tensor network'' uses a [[tensor]]-based composition function for all nodes in the tree.<ref>{{cite journal |last1=Socher |first1=Richard |last2=Perelygin |first2=Alex |last3=Wu |first3=Jean Y. |last4=Chuang |first4=Jason |last5=Manning |first5=Christopher D. |last6=Ng |first6=Andrew Y. |last7=Potts |first7=Christopher |title=Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank |url=http://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf |journal=Emnlp 2013}}</ref>
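Applying one shared weight set recursively over a tree, as described above, can be sketched as a bottom-up traversal of a binary tree. The dimensions, random weights, and toy tree are assumptions for illustration; real recursive networks learn the weights and use task-specific trees.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3
W = rng.normal(scale=0.1, size=(d, 2 * d))  # one composition weight set, shared by all nodes

def compose(tree):
    """tree is either a leaf vector or a (left, right) pair of subtrees.
    Children are composed before their parent, i.e. in topological order."""
    if isinstance(tree, np.ndarray):
        return tree
    left, right = tree
    children = np.concatenate([compose(left), compose(right)])
    return np.tanh(W @ children)  # the same W is applied at every internal node

leaf = lambda: rng.normal(size=d)
root = compose(((leaf(), leaf()), leaf()))  # the tree ((a b) c)
print(root.shape)  # (3,)
```

When the tree degenerates to a linear chain, this procedure reduces to an ordinary RNN, matching the special case noted in the text.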
 
===Neural Turing machines===
'''Neural Turing machines''' (NTMs) are a method of extending recurrent neural networks by coupling them to external [[memory]] resources with which they interact. The combined system is analogous to a [[Turing machine]] or [[Von Neumann architecture]] but is [[Differentiable neural computer|differentiable]] end-to-end, allowing it to be efficiently trained with [[gradient descent]].<ref>{{cite arXiv |eprint=1410.5401 |class=cs.NE |first1=Alex |last1=Graves |first2=Greg |last2=Wayne |title=Neural Turing Machines |last3=Danihelka |first3=Ivo |year=2014}}</ref>
 
'''Differentiable neural computers''' (DNCs) are an extension of neural Turing machines, allowing for the usage of fuzzy amounts of each memory address and a record of chronology.<ref name="DNCnature2016">{{Cite journal |last1=Graves |first1=Alex |last2=Wayne |first2=Greg |last3=Reynolds |first3=Malcolm |last4=Harley |first4=Tim |last5=Danihelka |first5=Ivo |last6=Grabska-Barwińska |first6=Agnieszka |last7=Colmenarejo |first7=Sergio Gómez |last8=Grefenstette |first8=Edward |last9=Ramalho |first9=Tiago |date=2016-10-12 |title=Hybrid computing using a neural network with dynamic external memory |url=http://www.nature.com/articles/nature20101.epdf?author_access_token=ImTXBI8aWbYxYQ51Plys8NRgN0jAjWel9jnR3ZoTv0MggmpDmwljGswxVdeocYSurJ3hxupzWuRNeGvvXnoO8o4jTJcnAyhGuZzXJ1GEaD-Z7E6X_a9R-xqJ9TfJWBqz |journal=Nature |volume=538 |issue=7626 |pages=471–476 |bibcode=2016Natur.538..471G |doi=10.1038/nature20101 |issn=1476-4687 |pmid=27732574 |s2cid=205251479|url-access=subscription }}</ref>
 
'''Neural network pushdown automata''' (NNPDA) are similar to NTMs, but tapes are replaced by analog stacks that are differentiable and trained. In this way, they are similar in complexity to recognizers of [[context free grammar]]s (CFGs).<ref>{{Cite book |last1=Sun |first1=Guo-Zheng |title=Adaptive Processing of Sequences and Data Structures |last2=Giles |first2=C. Lee |last3=Chen |first3=Hsing-Hen |publisher=Springer |year=1998 |isbn=978-3-540-64341-8 |editor-last=Giles |editor-first=C. Lee |series=Lecture Notes in Computer Science |___location=Berlin, Heidelberg |pages=296–345 |chapter=The Neural Network Pushdown Automaton: Architecture, Dynamics and Training |citeseerx=10.1.1.56.8723 |doi=10.1007/bfb0054003 |editor-last2=Gori |editor-first2=Marco}}</ref>
 
Recurrent neural networks are [[Turing complete]] and can run arbitrary programs to process arbitrary sequences of inputs.<ref>{{cite journal |last1=Hyötyniemi |first1=Heikki |date=1996 |title=Turing machines are recurrent neural networks |journal=Proceedings of STeP '96/Publications of the Finnish Artificial Intelligence Society |pages=13–24}}</ref>
Gradient descent is a [[:Category:First order methods|first-order]] [[Iterative algorithm|iterative]] [[Mathematical optimization|optimization]] [[algorithm]] for finding the minimum of a function. In neural networks, it can be used to minimize the error term by changing each weight in proportion to the derivative of the error with respect to that weight, provided the non-linear [[activation function]]s are [[Differentiable function|differentiable]].
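The weight-update rule just described can be sketched for a single weight. This is a minimal illustration only; the quadratic loss, its derivative, and the learning rate are arbitrary assumptions, not taken from any cited source.

```python
# Minimal gradient-descent sketch: minimize E(w) = (w - 3)^2 for one weight.
# The loss function and learning rate are illustrative assumptions.

def grad_E(w):
    return 2.0 * (w - 3.0)  # dE/dw for E(w) = (w - 3)^2

w = 0.0   # initial weight
lr = 0.1  # learning rate

for _ in range(100):
    w -= lr * grad_E(w)  # change the weight in proportion to the derivative

print(round(w, 4))  # converges toward the minimum at w = 3
```

Each step moves the weight opposite the gradient, which is exactly the "change each weight in proportion to the derivative of the error" rule above.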
 
{{anchor|Real-Time Recurrent Learning}}The standard method for training RNN by gradient descent is the "[[backpropagation through time]]" (BPTT) algorithm, which is a special case of the general algorithm of [[backpropagation]]. A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL,<ref>{{cite book |last1=Robinson |first1=Anthony J.<!-- sometimes cited as T. (for "Tony") Robinson --> |url={{google books |plainurl=y |id=6JYYMwEACAAJ }} |title=The Utility Driven Dynamic Error Propagation Network |last2=Fallside |first2=Frank |publisher=Department of Engineering, University of Cambridge |year=1987 |series=Technical Report CUED/F-INFENG/TR.1}}</ref><ref>{{cite book |last1=Williams |first1=Ronald J. |url={{google books |plainurl=y |id=B71nu3LDpREC}} |title=Backpropagation: Theory, Architectures, and Applications |last2=Zipser |first2=D. |date=1 February 2013 |publisher=Psychology Press |isbn=978-1-134-77581-1 |editor-last1=Chauvin |editor-first1=Yves |contribution=Gradient-based learning algorithms for recurrent networks and their computational complexity |editor-last2=Rumelhart |editor-first2=David E.}}</ref> which is an instance of [[automatic differentiation]] in the forward accumulation mode with stacked tangent vectors. Unlike BPTT, this algorithm is [[Local algorithm|local]] in time but not local in space.
 
In this context, local in space means that a unit's weight vector can be updated using only information stored in the connected units and the unit itself such that update complexity of a single unit is linear in the dimensionality of the weight vector. Local in time means that the updates take place continually (on-line) and depend only on the most recent time step rather than on multiple time steps within a given time horizon as in BPTT. Biological neural networks appear to be local with respect to both time and space.<ref>{{Cite journal |last=Schmidhuber |first=Jürgen |date=1989-01-01 |title=A Local Learning Algorithm for Dynamic Feedforward and Recurrent Networks |journal=Connection Science |volume=1 |issue=4 |pages=403–412 |doi=10.1080/09540098908915650 |s2cid=18721007}}</ref><ref name="PríncipeEuliano2000">{{cite book |last1=Príncipe |first1=José C. |url={{google books |plainurl=y |id=jgMZAQAAIAAJ}} |title=Neural and adaptive systems: fundamentals through simulations |last2=Euliano |first2=Neil R. |last3=Lefebvre |first3=W. Curt |publisher=Wiley |year=2000 |isbn=978-0-471-35167-2}}</ref>
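To make the BPTT recursion concrete, the following sketch backpropagates through time in a scalar RNN <code>h_t = tanh(w·h_{t−1} + u·x_t)</code> with a squared-error loss on the final step, and checks the result against a numerical gradient. All weights and data are illustrative assumptions.

```python
import math

# BPTT sketch for a scalar RNN h_t = tanh(w*h_{t-1} + u*x_t),
# with loss L = (h_T - y)^2 on the final step.
# Weights, inputs, and target are illustrative assumptions.

def forward(w, u, xs, h0=0.0):
    hs = [h0]
    for x in xs:
        hs.append(math.tanh(w * hs[-1] + u * x))
    return hs

def bptt_grad_w(w, u, xs, y):
    hs = forward(w, u, xs)
    dL_dh = 2.0 * (hs[-1] - y)         # gradient at the last hidden state
    grad_w = 0.0
    for t in range(len(xs), 0, -1):    # walk backwards through time
        pre = w * hs[t - 1] + u * xs[t - 1]
        dL_dpre = dL_dh * (1.0 - math.tanh(pre) ** 2)
        grad_w += dL_dpre * hs[t - 1]  # accumulate dL/dw at this time step
        dL_dh = dL_dpre * w            # propagate to the previous hidden state
    return grad_w

# Check against a central-difference numerical gradient.
w, u, xs, y = 0.5, 1.0, [0.2, -0.1, 0.4], 0.3
eps = 1e-6
num = (((forward(w + eps, u, xs)[-1] - y) ** 2
       - (forward(w - eps, u, xs)[-1] - y) ** 2) / (2 * eps))
print(abs(bptt_grad_w(w, u, xs, y) - num))
```

Note that the backward pass needs the stored forward activations <code>hs</code>, illustrating why BPTT trades memory for its lower per-step cost compared with RTRL.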
For recursively computing the partial derivatives, RTRL has a time-complexity of O(number of hidden units × number of weights) per time step for computing the [[Jacobian matrix|Jacobian matrices]], while BPTT only takes O(number of weights) per time step, at the cost of storing all forward activations within the given time horizon.<ref name="Ollivier2015">{{Cite arXiv |eprint=1507.07680 |class=cs.NE |first1=Ollivier |last1=Yann |first2=Corentin |last2=Tallec |title=Training recurrent networks online without backtracking |date=2015-07-28 |first3=Guillaume |last3=Charpiat}}</ref> An online hybrid between BPTT and RTRL with intermediate complexity exists,<ref>{{Cite journal |last=Schmidhuber |first=Jürgen |date=1992-03-01 |title=A Fixed Size Storage O(n3) Time Complexity Learning Algorithm for Fully Recurrent Continually Running Networks |journal=Neural Computation |volume=4 |issue=2 |pages=243–248 |doi=10.1162/neco.1992.4.2.243 |s2cid=11761172}}</ref><ref>{{cite report |url=http://citeseerx.ist.psu.edu/showciting?cid=128036 |title=Complexity of exact gradient computation algorithms for recurrent neural networks |last=Williams |first=Ronald J. |publisher=Northeastern University, College of Computer Science |___location=Boston (MA) |access-date=2017-07-02 |archive-url=https://web.archive.org/web/20171020033840/http://citeseerx.ist.psu.edu/showciting?cid=128036 |archive-date=2017-10-20 |url-status=dead |series=Technical Report NU-CCS-89-27 |year=1989}}</ref> along with variants for continuous time.<ref>{{Cite journal |last=Pearlmutter |first=Barak A. |date=1989-06-01 |title=Learning State Space Trajectories in Recurrent Neural Networks |url=http://repository.cmu.edu/cgi/viewcontent.cgi?article=2865&context=compsci |journal=Neural Computation |volume=1 |issue=2 |pages=263–269 |doi=10.1162/neco.1989.1.2.263 |s2cid=16813485}}</ref>
 
A major problem with gradient descent for standard RNN architectures is that [[Vanishing gradient problem|error gradients vanish]] exponentially quickly with the size of the time lag between important events.<ref name="hochreiter1991" /><ref name="HOCH2001">{{cite book |last=Hochreiter |first=Sepp |title=A Field Guide to Dynamical Recurrent Networks |date=15 January 2001 |publisher=John Wiley & Sons |isbn=978-0-7803-5369-5 |editor-last1=Kolen |editor-first1=John F. |chapter=Gradient flow in recurrent nets: the difficulty of learning long-term dependencies |display-authors=etal |editor-last2=Kremer |editor-first2=Stefan C. |chapter-url={{google books |plainurl=y |id=NWOcMVA64aAC }}}}</ref> LSTM combined with a BPTT/RTRL hybrid learning method attempts to overcome these problems.<ref name="lstm" /> This problem is also solved in the independently recurrent neural network (IndRNN)<ref name="auto" /> by reducing the context of a neuron to its own past state; the cross-neuron information can then be explored in the following layers. Memories of different ranges, including long-term memory, can be learned without the gradient vanishing and exploding problems.
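The exponential decay can be seen directly in a scalar RNN <code>h_t = tanh(w·h_{t−1})</code>: the gradient of the final state with respect to the initial state is a product of per-step factors, each smaller in magnitude than <code>|w|</code>. The weight and time horizon below are arbitrary assumptions chosen for illustration.

```python
import math

# Vanishing-gradient illustration: in a scalar RNN h_t = tanh(w * h_{t-1}),
# dh_T/dh_0 is a product of T factors w * (1 - h_t^2), each of magnitude < |w|.
# The weight w and the horizon of 50 steps are illustrative assumptions.

w, h, grad = 0.9, 0.5, 1.0
for t in range(50):
    h_new = math.tanh(w * h)
    grad *= w * (1.0 - h_new ** 2)  # Jacobian factor contributed by this step
    h = h_new

print(grad)  # after 50 steps the factor has shrunk by orders of magnitude
```

With <code>|w| &lt; 1</code> the product decays roughly like <code>w^T</code>; with <code>|w| &gt; 1</code> it can instead explode, which is the dual problem mentioned above.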
 
The [[online algorithm]] called '''causal recursive backpropagation''' (CRBP) implements and combines the BPTT and RTRL paradigms for locally recurrent networks.<ref>{{Cite journal |last1=Campolucci |first1=Paolo |last2=Uncini |first2=Aurelio |last3=Piazza |first3=Francesco |last4=Rao |first4=Bhaskar D. |year=1999 |title=On-Line Learning Algorithms for Locally Recurrent Neural Networks |journal=IEEE Transactions on Neural Networks |volume=10 |issue=2 |pages=253–271 |citeseerx=10.1.1.33.7550 |doi=10.1109/72.750549 |pmid=18252525}}</ref> It works with the most general locally recurrent networks. The CRBP algorithm can minimize the global error term. This fact improves the stability of the algorithm, providing a unifying view of gradient calculation techniques for recurrent networks with local feedback.
 
One approach to computing gradient information in RNNs with arbitrary architectures is based on the diagrammatic derivation of signal-flow graphs.<ref>{{Cite journal |last1=Wan |first1=Eric A. |last2=Beaufays |first2=Françoise |year=1996 |title=Diagrammatic derivation of gradient algorithms for neural networks |journal=Neural Computation |volume=8 |pages=182–201 |doi=10.1162/neco.1996.8.1.182 |s2cid=15512077}}</ref> It uses the BPTT batch algorithm, based on Lee's theorem for network sensitivity calculations.<ref name="ReferenceA">{{Cite journal |last1=Campolucci |first1=Paolo |last2=Uncini |first2=Aurelio |last3=Piazza |first3=Francesco |year=2000 |title=A Signal-Flow-Graph Approach to On-line Gradient Calculation |journal=Neural Computation |volume=12 |issue=8 |pages=1901–1927 |citeseerx=10.1.1.212.5406 |doi=10.1162/089976600300015196 |pmid=10953244 |s2cid=15090951}}</ref> It was proposed by Wan and Beaufays, while its fast online version was proposed by Campolucci, Uncini and Piazza.<ref name="ReferenceA" />
 
=== Connectionist temporal classification ===
The [[connectionist temporal classification]] (CTC)<ref name="graves2006">{{Cite conference |last1=Graves |first1=Alex |last2=Fernández |first2=Santiago |last3=Gomez |first3=Faustino J. |year=2006 |title=Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks |url=https://axon.cs.byu.edu/~martinez/classes/778/Papers/p369-graves.pdf |pages=369–376 |citeseerx=10.1.1.75.6306 |doi=10.1145/1143844.1143891 |isbn=1-59593-383-2 |book-title=Proceedings of the International Conference on Machine Learning}}</ref> is a specialized loss function for training RNNs for sequence modeling problems where the timing is variable.<ref>{{Cite journal |last=Hannun |first=Awni |date=2017-11-27 |title=Sequence Modeling with CTC |url=https://distill.pub/2017/ctc |journal=Distill |language=en |volume=2 |issue=11 |pages=e8 |doi=10.23915/distill.00008 |issn=2476-0757|doi-access=free |url-access=subscription }}</ref>
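The many-to-one collapsing rule at the heart of CTC can be sketched directly: a per-frame alignment is mapped to a label sequence by first merging repeated symbols and then deleting blanks. (The full CTC loss, which sums the probabilities of all alignments that collapse to the target, is omitted here; the blank symbol "-" and the example alignment are illustrative assumptions.)

```python
# Sketch of CTC's collapsing rule: merge repeated symbols, then remove blanks.
# The blank token "-" and the example alignment are illustrative assumptions.

def ctc_collapse(alignment, blank="-"):
    out = []
    prev = None
    for sym in alignment:
        if sym != prev and sym != blank:
            out.append(sym)  # keep a symbol only when it starts a new run
        prev = sym
    return "".join(out)

print(ctc_collapse("hh-e-ll-lo"))  # "hello"
```

Because many alignments of different lengths collapse to the same label sequence, CTC lets an RNN be trained on unsegmented data where the timing of each label is variable.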
 
===Global optimization methods===
* The mean-squared error is returned to the fitness function.
* This function drives the genetic selection process.
Many chromosomes make up the population; therefore, many different neural networks are evolved until a stopping criterion is satisfied. A common stopping scheme can be:
* When the neural network has learned a certain percentage of the training data.
* When the minimum value of the mean-squared error is satisfied.
* When the maximum number of training generations has been reached.
The fitness function evaluates the stopping criterion as it receives the mean-squared error reciprocal from each network during training. Therefore, the goal of the genetic algorithm is to maximize the fitness function, reducing the mean-squared error.
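The scheme above can be sketched with a toy one-weight "network" evolved by an elitist genetic algorithm: chromosomes encode the weight, fitness is the reciprocal of the mean-squared error, and evolution stops at an error threshold or a generation limit. The model <code>y = w·x</code>, the dataset, and all constants are illustrative assumptions.

```python
import random

# Neuro-evolution sketch: chromosomes encode network weights, fitness is the
# reciprocal of the mean-squared error, and evolution stops when the error
# threshold is met or the generation limit is reached.
# The toy "network" y = w * x and all constants are illustrative assumptions.

random.seed(0)
data = [(x, 2.0 * x) for x in range(1, 6)]   # target weight is 2.0

def mse(w):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def fitness(w):
    return 1.0 / (mse(w) + 1e-12)            # reciprocal of the MSE

pop = [random.uniform(-5, 5) for _ in range(20)]  # initial chromosomes
for generation in range(200):
    pop.sort(key=fitness, reverse=True)      # genetic selection by fitness
    if mse(pop[0]) < 1e-6:                   # stopping criterion: error threshold
        break
    parents = pop[:10]
    pop = parents + [p + random.gauss(0, 0.1) for p in parents]  # mutation

print(round(pop[0], 2))  # best evolved weight, close to 2.0
```

Maximizing the fitness function is equivalent to minimizing the mean-squared error, as the text notes.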
 
===Independently RNN (IndRNN) ===
The independently recurrent neural network (IndRNN)<ref name="auto">{{cite arXiv |title= Independently Recurrent Neural Network (IndRNN): Building a Longer and Deeper RNN|last1=Li |first1=Shuai |last2=Li |first2=Wanqing |last3=Cook |first3=Chris |last4=Zhu |first4=Ce |last5=Yanbo |first5=Gao |eprint=1803.04831|class=cs.CV |year=2018 }}</ref> addresses the gradient vanishing and exploding problems in the traditional fully connected RNN. Each neuron in one layer only receives its own past state as context information (instead of full connectivity to all other neurons in this layer) and thus neurons are independent of each other's history. The gradient backpropagation can be regulated to avoid gradient vanishing and exploding in order to keep long or short-term memory. The cross-neuron information is explored in the next layers. IndRNN can be robustly trained with non-saturated nonlinear functions such as [[ReLU]]. Deep networks can be trained using [[skip connections]].
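A single IndRNN step can be sketched as follows: each neuron <code>n</code> has one scalar recurrent weight <code>u[n]</code> and sees only its own previous state, <code>h_t[n] = ReLU((Wx_t)[n] + u[n]·h_{t−1}[n])</code>. The layer size, weights, and inputs below are illustrative assumptions.

```python
# Sketch of one IndRNN step: neurons within a layer are independent of each
# other's history; each has a single scalar recurrent weight.
# Sizes, weights, and inputs are illustrative assumptions.

def relu(v):
    return max(0.0, v)

def indrnn_step(Wx, u, h_prev):
    # Wx: input projection W @ x_t (one value per neuron);
    # u: per-neuron recurrent weights; h_prev: previous hidden state.
    return [relu(wx_n + u_n * h_n) for wx_n, u_n, h_n in zip(Wx, u, h_prev)]

u = [0.9, 1.0, 0.5]                 # each neuron only sees its own past state
h = [0.0, 0.0, 0.0]
for Wx in ([0.2, -0.3, 0.1], [0.1, 0.4, -0.2]):
    h = indrnn_step(Wx, u, h)

print([round(v, 3) for v in h])
```

Because the recurrence is elementwise, the per-neuron gradient factor is just a power of <code>u[n]</code> times activation derivatives, so each <code>u[n]</code> can be constrained to keep gradients from vanishing or exploding.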
 
===Neural history compressor===
 
The ''neural history compressor'' is an unsupervised stack of RNNs.<ref name="schmidhuber1992">{{cite journal |last1=Schmidhuber |first1=Jürgen |year=1992 |title=Learning complex, extended sequences using the principle of history compression |url=ftp://ftp.idsia.ch/pub/juergen/chunker.pdf |journal=Neural Computation |volume=4 |issue=2 |pages=234–242 |doi=10.1162/neco.1992.4.2.234 |archive-url=https://web.archive.org/web/20170706014739/ftp://ftp.idsia.ch/pub/juergen/chunker.pdf |archive-date=2017-07-06 |url-status=dead |s2cid=18271205 }}</ref> At the input level, it learns to predict its next input from the previous inputs. Only unpredictable inputs of some RNN in the hierarchy become inputs to the next higher level RNN, which therefore recomputes its internal state only rarely. Each higher level RNN thus studies a compressed representation of the information in the RNN below. This is done such that the input sequence can be precisely reconstructed from the representation at the highest level.
 
The system effectively minimizes the description length or the negative [[logarithm]] of the probability of the data.<ref name="scholarpedia2015pre">{{cite journal |last1=Schmidhuber |first1=Jürgen |year=2015 |title=Deep Learning |journal=Scholarpedia |volume=10 |issue=11 |page=32832 |doi=10.4249/scholarpedia.32832 |bibcode=2015SchpJ..1032832S |doi-access=free }}</ref> Given a lot of learnable predictability in the incoming data sequence, the highest level RNN can use supervised learning to easily classify even deep sequences with long intervals between important events.
 
===Second order RNNs===
Second-order RNNs use higher order weights <math>w_{ijk}</math> instead of the standard <math>w_{ij}</math> weights, and states can be a product. This allows a direct mapping to a [[finite-state machine]] both in training, stability, and representation.<ref>{{cite journal |first1=C. Lee |last1=Giles |first2=Clifford B. |last2=Miller |first3=Dong |last3=Chen |first4=Hsing-Hen |last4=Chen |first5=Guo-Zheng |last5=Sun |first6=Yee-Chun |last6=Lee |url=https://clgiles.ist.psu.edu/pubs/NC1992-recurrent-NN.pdf<!-- https://www.semanticscholar.org/paper/Learning-and-Extracting-Finite-State-Automata-with-Giles-Miller/872cdc269f3cb59f8a227818f35041415091545f --> |title=Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks |journal=Neural Computation |volume=4 |issue=3 |pages=393–405 |year=1992 |doi=10.1162/neco.1992.4.3.393 |s2cid=19666035 }}</ref><ref>{{cite journal |first1=Christian W. |last1=Omlin |first2=C. Lee |last2=Giles |title=Constructing Deterministic Finite-State Automata in Recurrent Neural Networks |journal=Journal of the ACM |volume=45 |issue=6 |pages=937–972 |year=1996 |doi=10.1145/235809.235811 |citeseerx=10.1.1.32.2364 |s2cid=228941 }}</ref> Long short-term memory is an example of this but has no such formal mappings or proof of stability.
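A second-order recurrent step can be sketched as <math>s_i(t+1) = \sigma\left(\sum_{j,k} w_{ijk}\, s_j(t)\, x_k(t)\right)</math>: each weight couples one state unit with one input symbol, which is what enables the state transitions of a finite-state machine to be encoded directly. The network size, weights, and one-hot input below are illustrative assumptions.

```python
import math

# Sketch of a second-order recurrent step:
# s_i(t+1) = sigmoid(sum_{j,k} W[i][j][k] * s[j] * x[k]),
# so each weight couples a (state unit, input symbol) pair.
# Sizes, weights, and the one-hot input are illustrative assumptions.

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def second_order_step(W, s, x):
    return [sigmoid(sum(W[i][j][k] * s[j] * x[k]
                        for j in range(len(s))
                        for k in range(len(x))))
            for i in range(len(W))]

W = [[[2.0, -2.0], [-2.0, 2.0]],    # weights feeding state unit 0
     [[-2.0, 2.0], [2.0, -2.0]]]    # weights feeding state unit 1
s = [1.0, 0.0]                      # current state
x = [0.0, 1.0]                      # one-hot input symbol
s = second_order_step(W, s, x)
print([round(v, 3) for v in s])
```

With saturating weights, the product <code>s_j·x_k</code> selects one weight slice per (state, input) pair, mimicking a deterministic state-transition table.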
 
===Hierarchical recurrent neural network===
 
===Multiple timescales model===
A multiple timescales recurrent neural network (MTRNN) is a neural-based computational model that can simulate the functional hierarchy of the brain through self-organization depending on the spatial connection between neurons and on distinct types of neuron activities, each with distinct time properties.<ref>{{Cite journal |last1=Yamashita |first1=Yuichi |last2=Tani |first2=Jun |date=2008-11-07 |title=Emergence of Functional Hierarchy in a Multiple Timescale Neural Network Model: A Humanoid Robot Experiment |journal=PLOS Computational Biology |volume=4 |issue=11 |pages=e1000220 |doi=10.1371/journal.pcbi.1000220 |pmc=2570613 |pmid=18989398 |bibcode=2008PLSCB...4E0220Y |doi-access=free }}</ref><ref>{{Cite journal |last1=Alnajjar |first1=Fady |last2=Yamashita |first2=Yuichi |last3=Tani |first3=Jun |year=2013 |title=The hierarchical and functional connectivity of higher-order cognitive mechanisms: neurorobotic model to investigate the stability and flexibility of working memory |journal=Frontiers in Neurorobotics |volume=7 |page=2 |doi=10.3389/fnbot.2013.00002 |pmc=3575058 |pmid=23423881|doi-access=free }}</ref> With such varied neuronal activities, continuous sequences of any set of behaviors are segmented into reusable primitives, which in turn are flexibly integrated into diverse sequential behaviors. 
The biological plausibility of this type of hierarchy was discussed in the [[memory-prediction framework|memory-prediction]] theory of brain function by [[Jeff Hawkins|Hawkins]] in his book ''[[On Intelligence]]''.{{Citation needed |date=June 2017}} Such a hierarchy also agrees with theories of memory posited by philosopher [[Henri Bergson]], which have been incorporated into an MTRNN model.<ref name="auto1"/><ref>{{Cite web | url=http://jnns.org/conference/2018/JNNS2018_Technical_Programs.pdf | title= Proceedings of the 28th Annual Conference of the Japanese Neural Network Society (October, 2018) | access-date=2021-02-06 | archive-date=2020-05-09 | archive-url=https://web.archive.org/web/20200509004753/http://jnns.org/conference/2018/JNNS2018_Technical_Programs.pdf | url-status=dead }}</ref>
 
===Memristive networks===
}}</ref> The [[memristors]] (memory resistors) are implemented by thin film materials in which the resistance is electrically tuned via the transport of ions or oxygen vacancies within the film. [[DARPA]]'s [[SyNAPSE|SyNAPSE project]] has funded IBM Research and HP Labs, in collaboration with the Boston University Department of Cognitive and Neural Systems (CNS), to develop neuromorphic architectures that may be based on memristive systems.
[[Memristive networks]] are a particular type of [[physical neural network]] that have very similar properties to (Little-)Hopfield networks, as they have continuous dynamics, a limited memory capacity and natural relaxation via the minimization of a function which is asymptotic to the [[Ising model]]. In this sense, the dynamics of a memristive circuit have the advantage compared to a Resistor-Capacitor network to have a more interesting non-linear behavior. From this point of view, engineering analog memristive networks account for a peculiar type of [[neuromorphic engineering]] in which the device behavior depends on the circuit wiring or topology.
The evolution of these networks can be studied analytically using variations of the Caravelli–Traversa–Di Ventra equation.<ref>{{cite journal |last1=Caravelli |first1=Francesco |last2=Traversa |first2=Fabio Lorenzo |last3=Di Ventra |first3=Massimiliano |title=The complex dynamics of memristive circuits: analytical results and universal slow relaxation |year=2017 |doi=10.1103/PhysRevE.95.022140 |pmid=28297937 |volume=95 |issue= 2 |page= 022140 |journal=Physical Review E|bibcode=2017PhRvE..95b2140C |s2cid=6758362|arxiv=1608.08651 }}</ref>
 
=== Continuous-time ===
CTRNNs have been applied to [[evolutionary robotics]] where they have been used to address vision,<ref>{{citation |last1=Harvey |first1=Inman |title=3rd international conference on Simulation of adaptive behavior: from animals to animats 3 |pages=392–401 |year=1994 |contribution=Seeing the light: Artificial evolution, real vision |contribution-url=https://www.researchgate.net/publication/229091538_Seeing_the_Light_Artificial_Evolution_Real_Vision |last2=Husbands |first2=Phil |last3=Cliff |first3=Dave}}</ref> co-operation,<ref name="Evolving communication without dedicated communication channels">{{cite conference |last=Quinn |first=Matt |year=2001 |title=Evolving communication without dedicated communication channels |pages=357–366 |doi=10.1007/3-540-44811-X_38 |isbn=978-3-540-42567-0 |book-title=Advances in Artificial Life: 6th European Conference, ECAL 2001}}</ref> and minimal cognitive behaviour.<ref name="The dynamics of adaptive behavior: A research program">{{cite journal |last=Beer |first=Randall D. |year=1997 |title=The dynamics of adaptive behavior: A research program |journal=Robotics and Autonomous Systems |volume=20 |issue=2–4 |pages=257–289 |doi=10.1016/S0921-8890(96)00063-2}}</ref>
 
Note that, by the [[Shannon sampling theorem]], discrete-time recurrent neural networks can be viewed as continuous-time recurrent neural networks in which the differential equations have been transformed into equivalent [[difference equation]]s.<ref name="Sherstinsky-NeurIPS2018-CRACT-3">{{cite conference |last=Sherstinsky |first=Alex |date=2018-12-07 |editor-last=Bloem-Reddy |editor-first=Benjamin |editor2-last=Paige |editor2-first=Brooks |editor3-last=Kusner |editor3-first=Matt |editor4-last=Caruana |editor4-first=Rich |editor5-last=Rainforth |editor5-first=Tom |editor6-last=Teh |editor6-first=Yee Whye |title=Deriving the Recurrent Neural Network Definition and RNN Unrolling Using Signal Processing |url=https://www.researchgate.net/publication/331718291 |conference=Critiquing and Correcting Trends in Machine Learning Workshop at NeurIPS-2018 |conference-url=https://ml-critique-correct.github.io/}}</ref> This transformation can be thought of as occurring after the post-synaptic node activation functions <math>y_i(t)</math> have been [[Low-pass filter|low-pass filtered]] but prior to sampling.
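One way to see this transformation is forward-Euler discretization. Assuming the standard CTRNN form <math>\tau_i \dot{y}_i = -y_i + \sum_j w_{ji}\,\sigma(y_j) + I_i</math>, a fixed step size turns the differential equation into a discrete-time recurrence; the network size, weights, inputs, and step size below are illustrative assumptions.

```python
import math

# Forward-Euler discretization of the standard CTRNN equation
# tau_i * dy_i/dt = -y_i + sum_j w[j][i] * sigma(y_j) + I_i,
# turning the continuous dynamics into a discrete-time recurrence.
# Network size, weights, inputs, and step size are illustrative assumptions.

def sigma(v):
    return 1.0 / (1.0 + math.exp(-v))

def ctrnn_euler_step(y, w, tau, I, dt):
    act = [sigma(yj) for yj in y]
    return [y_i + (dt / tau_i) * (-y_i
                                  + sum(w[j][i] * act[j] for j in range(len(y)))
                                  + I_i)
            for i, (y_i, tau_i, I_i) in enumerate(zip(y, tau, I))]

y = [0.0, 0.0]
w = [[0.0, 1.5], [-1.5, 0.0]]   # w[j][i]: weight from neuron j to neuron i
tau = [1.0, 2.0]                # per-neuron time constants
I = [0.5, 0.0]                  # external inputs
for _ in range(100):
    y = ctrnn_euler_step(y, w, tau, I, dt=0.05)

print([round(v, 3) for v in y])
```

Smaller step sizes approximate the continuous dynamics more closely; in the limit the discrete recurrence and the differential equation describe the same trajectory.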
 
They are in fact [[recursive neural network]]s with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.
 
From a time-series perspective, RNNs can appear as nonlinear versions of [[finite impulse response]] and [[infinite impulse response]] filters and also as a [[nonlinear autoregressive exogenous model]] (NARX).<ref>{{cite journal |url={{google books |plainurl=y |id=830-HAAACAAJ |page=208}} |title=Computational Capabilities of Recurrent NARX Neural Networks |last1=Siegelmann |first1=Hava T. |last2=Horne |first2=Bill G. |last3=Giles |first3=C. Lee |journal= IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics|volume=27 |issue=2 |pages=208–15 |year=1995 |pmid=18255858 |doi=10.1109/3477.558801 |citeseerx=10.1.1.48.7468 }}</ref> An RNN has an infinite impulse response, whereas a [[convolutional neural network]] has a [[finite impulse response]]. Both classes of networks exhibit temporal [[dynamic system|dynamic behavior]].<ref>{{Cite journal |last=Miljanovic |first=Milos |date=Feb–Mar 2012 |title=Comparative analysis of Recurrent and Finite Impulse Response Neural Networks in Time Series Prediction |url=http://www.ijcse.com/docs/INDJCSE12-03-01-028.pdf |journal=Indian Journal of Computer and Engineering |volume=3 |issue=1}}</ref> A finite impulse recurrent network is a [[directed acyclic graph]] that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a [[directed cyclic graph]] that cannot be unrolled.
 
The effect of memory-based learning for the recognition of sequences can also be implemented by a more biological-based model which uses the silencing mechanism exhibited in neurons with a relatively high frequency [[Action potential|spiking activity]].<ref>{{Cite journal |last1=Hodassman |first1=Shiri |last2=Meir |first2=Yuval |last3=Kisos |first3=Karin |last4=Ben-Noam |first4=Itamar |last5=Tugendhaft |first5=Yael |last6=Goldental |first6=Amir |last7=Vardi |first7=Roni |last8=Kanter |first8=Ido |date=2022-09-29 |title=Brain inspired neuronal silencing mechanism to enable reliable sequence identification |journal=Scientific Reports |volume=12 |issue=1 |pages=16003 |doi=10.1038/s41598-022-20337-x |pmid=36175466 |pmc=9523036 |arxiv=2203.13028 |bibcode=2022NatSR..1216003H |issn=2045-2322|doi-access=free }}</ref>
 
Additional stored states and the storage under direct control by the network can be added to both [[infinite impulse response|infinite-impulse]] and [[finite impulse response|finite-impulse]] networks. Another network or graph can also replace the storage if that incorporates time delays or has feedback loops. Such controlled states are referred to as gated states or gated memory and are part of [[long short-term memory]] networks (LSTMs) and [[gated recurrent unit]]s. This is also called Feedback Neural Network (FNN).
*Rhythm learning<ref name="peephole2002">{{cite journal |last1=Gers |first1=Felix A. |last2=Schraudolph |first2=Nicol N. |last3=Schmidhuber |first3=Jürgen |year=2002 |title=Learning precise timing with LSTM recurrent networks |url=http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf |journal=Journal of Machine Learning Research |volume=3 |pages=115–143 }}</ref>
*Music composition<ref>{{Cite book |last1=Eck |first1=Douglas |last2=Schmidhuber |first2=Jürgen |title=Artificial Neural Networks — ICANN 2002 |chapter=Learning the Long-Term Structure of the Blues |date=2002-08-28 |publisher=Springer |___location=Berlin, Heidelberg |pages=284–289 |doi=10.1007/3-540-46084-5_47 |isbn=978-3-540-46084-8 |series=Lecture Notes in Computer Science |volume=2415 |citeseerx=10.1.1.116.3620 }}</ref>
*Grammar learning<ref>{{cite journal |last1=Schmidhuber |first1=Jürgen |last2=Gers |first2=Felix A. |last3=Eck |first3=Douglas |year=2002 |title=Learning nonregular languages: A comparison of simple recurrent networks and LSTM |journal=Neural Computation |volume=14 |issue=9 |pages=2039–2041 |doi=10.1162/089976602320263980 |pmid=12184841 |citeseerx=10.1.1.11.7369 |s2cid=30459046 }}</ref><ref name="peepholeLSTM">{{cite journal |last1=Gers |first1=Felix A. |last2=Schmidhuber |first2=Jürgen |year=2001 |title=LSTM Recurrent Networks Learn Simple Context Free and Context Sensitive Languages |url=ftp://ftp.idsia.ch/pub/juergen/L-IEEE.pdf |journal=IEEE Transactions on Neural Networks |volume=12 |issue=6 |pages=1333–40 |doi=10.1109/72.963769 |pmid=18249962 |s2cid=10192330 |archive-url=https://web.archive.org/web/20170706014426/ftp://ftp.idsia.ch/pub/juergen/L-IEEE.pdf |archive-date=2017-07-06 |url-status=dead |access-date=2017-12-12 }}</ref><ref>{{cite journal |last1=Pérez-Ortiz |first1=Juan Antonio |last2=Gers |first2=Felix A. |last3=Eck |first3=Douglas |last4=Schmidhuber |first4=Jürgen |year=2003 |title=Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets |journal=Neural Networks |volume=16 |issue=2 |pages=241–250 |doi=10.1016/s0893-6080(02)00219-8 |pmid=12628609 |citeseerx=10.1.1.381.1992 }}</ref>
*[[Handwriting recognition]]<ref>{{cite conference |first1=Alex |last1=Graves |first2=Jürgen |last2=Schmidhuber |title=Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks |book-title=Advances in Neural Information Processing Systems |volume=22, NIPS'22 |pages=545–552 |publisher=MIT Press |year=2009 |url=http://papers.neurips.cc/paper/3449-offline-handwriting-recognition-with-multidimensional-recurrent-neural-networks.pdf}}</ref><ref>{{Cite conference |last1=Graves |first1=Alex |last2=Fernández |first2=Santiago |last3=Liwicki |first3=Marcus |last4=Bunke |first4=Horst |last5=Schmidhuber |first5=Jürgen |year=2007 |title=Unconstrained Online Handwriting Recognition with Recurrent Neural Networks |url=http://dl.acm.org/citation.cfm?id=2981562.2981635 |book-title=Proceedings of the 20th International Conference on Neural Information Processing Systems |publisher=Curran Associates |pages=577–584 |isbn=978-1-60560-352-0 }}</ref>
*Human action recognition<ref>{{cite book |first1=Moez |last1=Baccouche |first2=Franck |last2=Mamalet |first3=Christian |last3=Wolf |first4=Christophe |last4=Garcia |first5=Atilla |last5=Baskurt |title=Human Behavior Unterstanding |chapter=Sequential Deep Learning for Human Action Recognition |editor-first1=Albert Ali |editor-last1=Salah |editor-first2=Bruno |editor-last2=Lepri |___location=Amsterdam, Netherlands |pages=29–39 |series=Lecture Notes in Computer Science |volume=7065 |publisher=Springer |year=2011 |doi=10.1007/978-3-642-25446-8_4 |isbn=978-3-642-25445-1 }}</ref>