==Training==
 
BRNNs can be trained using algorithms similar to those for RNNs, because the two sets of directional neurons do not interact with each other. However, when back-propagation through time is applied, additional steps are needed because the input and output layers cannot be updated simultaneously. The general training procedure is as follows: in the forward pass, the forward states and backward states are computed first, and then the output neurons are computed; in the backward pass, the output neurons are processed first, and then the forward states and backward states are processed. Once the forward and backward passes are complete, the weights are updated.<ref name="Schuster" />
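
This ordering can be illustrated with a minimal sketch in a deep-learning framework such as PyTorch, where a single bidirectional layer provides the independent forward and backward state neurons and back-propagation through time is handled by automatic differentiation. The layer sizes and training hyperparameters below are arbitrary illustration values, not taken from the cited source.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Minimal sketch (illustrative sizes only): a bidirectional RNN whose
# forward and backward hidden states are concatenated before the output layer.
class BRNN(nn.Module):
    def __init__(self, input_size=8, hidden_size=16, output_size=4):
        super().__init__()
        # bidirectional=True creates separate, non-interacting forward and backward state neurons
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True, bidirectional=True)
        # the output layer reads the concatenated forward and backward states
        self.out = nn.Linear(2 * hidden_size, output_size)

    def forward(self, x):
        states, _ = self.rnn(x)   # forward pass: both directional states are computed first
        return self.out(states)   # then the output neurons are computed

model = BRNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

x = torch.randn(2, 5, 8)          # (batch, time steps, input features)
target = torch.randn(2, 5, 4)

loss = criterion(model(x), target)
loss.backward()                    # backward pass: output neurons first, then both directional states
optimizer.step()                   # weights are updated only after both passes are complete
</syntaxhighlight>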
 
==Applications==