{{Orphan|date=March 2016}}
 
'''Bidirectional recurrent neural networks''' ('''BRNN''') were invented in 1997 by Schuster and Paliwal.<ref name="Schuster">Schuster, Mike, and Kuldip K. Paliwal. "Bidirectional recurrent neural networks." IEEE Transactions on Signal Processing 45.11 (1997): 2673-2681.</ref> BRNNs were introduced to increase the amount of input information available to the network. For example, [[multilayer perceptron]]s (MLPs) and [[time delay neural network]]s (TDNNs) have limitations on input data flexibility, as they require their input data to be fixed. Standard [[recurrent neural network]]s (RNNs) also have restrictions, as future input information cannot be reached from the current state. BRNNs, on the contrary, do not require their input data to be fixed, and future input information is reachable from the current state. The basic idea of BRNNs is to connect two hidden layers of opposite directions to the same output. With this structure, the output layer can get information from both past and future states.
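
A minimal sketch of this structure in NumPy is given below. The weight matrix names (<code>W_f</code>, <code>U_f</code>, <code>V_f</code> and their backward counterparts) are illustrative assumptions, not notation from Schuster and Paliwal's paper; the sketch shows only how the two opposite-direction hidden layers feed one shared output.

<syntaxhighlight lang="python">
# Illustrative BRNN forward pass; weight names are assumptions, not
# the original paper's notation.
import numpy as np

def brnn_forward(x, W_f, U_f, W_b, U_b, V_f, V_b):
    """Run a bidirectional RNN over an input sequence x of shape (T, d_in).

    W_f, W_b: input-to-hidden weights of the forward/backward layers.
    U_f, U_b: recurrent (hidden-to-hidden) weights.
    V_f, V_b: hidden-to-output weights; both directions feed one output.
    """
    T = x.shape[0]
    d_h = U_f.shape[0]
    h_f = np.zeros((T, d_h))  # forward hidden states, filled t = 0 .. T-1
    h_b = np.zeros((T, d_h))  # backward hidden states, filled t = T-1 .. 0

    # Forward layer reads the sequence from past to future.
    h_prev = np.zeros(d_h)
    for t in range(T):
        h_prev = np.tanh(W_f @ x[t] + U_f @ h_prev)
        h_f[t] = h_prev

    # Backward layer reads the same sequence from future to past.
    h_next = np.zeros(d_h)
    for t in reversed(range(T)):
        h_next = np.tanh(W_b @ x[t] + U_b @ h_next)
        h_b[t] = h_next

    # Each output combines past context (h_f) and future context (h_b).
    return h_f @ V_f.T + h_b @ V_b.T
</syntaxhighlight>

Because the backward layer depends on inputs that come later in the sequence, a BRNN must see the entire input before producing any output, which suits offline tasks where the full sequence is available.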
 
BRNNs are especially useful when the context of the input is needed. For example, in handwriting recognition, performance can be enhanced by knowledge of the letters located before and after the current letter.