===Independently RNN (IndRNN)===
The independently recurrent neural network (IndRNN)<ref name="auto">{{cite arXiv |title= Independently Recurrent Neural Network (IndRNN): Building a Longer and Deeper RNN|last1=Li |first1=Shuai |last2=Li |first2=Wanqing |last3=Cook |first3=Chris |last4=Zhu |first4=Ce |last5=Gao |first5=Yanbo |eprint=1803.04831|class=cs.CV |year=2018 }}</ref> addresses the gradient vanishing and exploding problems of the traditional fully connected RNN. Each neuron in a layer receives only its own past state as context information (instead of full connectivity to all other neurons in that layer), so neurons are independent of each other's history. The gradient backpropagation can be regulated to avoid gradient vanishing and exploding, preserving long- or short-term memory as needed. Cross-neuron information is instead captured by subsequent layers. IndRNN can be robustly trained with non-saturating nonlinear functions such as [[ReLU]], and deep networks can be trained using skip connections.
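A minimal sketch of one IndRNN step, following the per-neuron recurrence from the paper, <math>h_t = \sigma(W x_t + u \odot h_{t-1} + b)</math>: the recurrent weight <math>u</math> is a vector applied elementwise, not a full matrix, so each neuron only sees its own previous state. The dimensions and random initialization below are illustrative assumptions.

```python
import numpy as np

def indrnn_step(x_t, h_prev, W, u, b):
    """One IndRNN time step with ReLU activation.

    The recurrent term is elementwise (u * h_prev): neuron i only
    depends on its own past state h_prev[i]. A standard RNN would
    use a dense matrix product U @ h_prev here instead.
    """
    return np.maximum(0.0, W @ x_t + u * h_prev + b)

# Illustrative sizes and initialization (not from the article).
rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 3
W = rng.standard_normal((hidden_dim, input_dim)) * 0.1  # input weights
u = rng.uniform(-1.0, 1.0, hidden_dim)  # one recurrent weight per neuron
b = np.zeros(hidden_dim)

# Run a few steps on random inputs.
h = np.zeros(hidden_dim)
for t in range(5):
    h = indrnn_step(rng.standard_normal(input_dim), h, W, u, b)
```

Constraining <math>|u_i|</math> (e.g. keeping it at or below 1) bounds how the gradient scales across time steps for each neuron, which is how IndRNN regulates vanishing and exploding gradients.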
===Neural history compressor===