Recurrent neural network
'''Recurrent neural networks''' ('''RNNs''') are a class of [[Neural network (machine learning)|artificial neural network]] commonly used for sequential data processing. Unlike [[feedforward neural network]]s, which process data in a single pass, RNNs process data across multiple time steps, making them well-adapted for modelling and processing text, speech, and [[time series]].<ref>{{Cite journal |last1=Tealab |first1=Ahmed |date=2018-12-01 |title=Time series forecasting using artificial neural networks methodologies: A systematic review |journal=Future Computing and Informatics Journal |volume=3 |issue=2 |pages=334–340 |doi=10.1016/j.fcij.2018.10.003 |issn=2314-7288 |doi-access=free}}</ref>
 
The building block of RNNs is the '''recurrent unit'''. This unit maintains a hidden state, essentially a form of memory, which is updated at each time step based on the current input and the previous hidden state. This feedback loop allows the network to learn from past inputs, and incorporate that knowledge into its current processing.
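The hidden-state update described above can be sketched as follows. This is an illustrative example, not text from the article: the weight matrices, sizes, and the ``tanh`` nonlinearity are assumptions corresponding to a simple Elman-style recurrent unit.

```python
import numpy as np

# Minimal sketch of a single recurrent unit (Elman-style).
# Sizes and random weights are illustrative placeholders, not trained values.
rng = np.random.default_rng(0)
hidden_size, input_size = 4, 3
W_x = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden-to-hidden (recurrent) weights
b = np.zeros(hidden_size)

def step(x, h):
    """Update the hidden state from the current input and the previous state."""
    return np.tanh(W_x @ x + W_h @ h + b)

# Process a sequence of 5 time steps; the hidden state carries
# information from earlier inputs forward into later steps.
h = np.zeros(hidden_size)
for x in rng.normal(size=(5, input_size)):
    h = step(x, h)
```

The same ``step`` function is applied at every time step with shared weights, which is what distinguishes a recurrent unit from a feedforward layer.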
 
Early RNNs suffered from the [[vanishing gradient problem]], which limited their ability to learn long-range dependencies. This was addressed by the [[long short-term memory]] (LSTM) architecture in 1997, which went on to become the standard RNN variant.
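The vanishing gradient problem can be illustrated numerically. In the sketch below (an assumption-laden toy example, not from the article), backpropagation through time repeatedly multiplies the gradient by the recurrent Jacobian; when that matrix's spectral norm is below 1, the gradient norm decays geometrically with the number of time steps.

```python
import numpy as np

# Toy illustration of vanishing gradients in backpropagation through time.
# We use a linear recurrence with a recurrent matrix of spectral norm 0.5;
# each backward step multiplies the gradient by W_h transposed.
W_h = 0.5 * np.eye(4)  # recurrent weights, spectral norm 0.5 (< 1)
grad = np.ones(4)
norms = []
for t in range(20):
    grad = W_h.T @ grad  # one backward step through time (linear case)
    norms.append(np.linalg.norm(grad))
# The gradient norm shrinks by a factor of 0.5 per step, so after
# 20 steps almost no learning signal reaches the earliest inputs.
```

With a spectral norm above 1 the same recurrence instead produces exploding gradients; LSTM's gated additive cell state mitigates both effects.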