Metadata only
Date
2022-08
Type
Journal Article
Abstract
We propose a new stackable recurrent cell (STAR) for recurrent neural networks (RNNs) that has significantly fewer parameters than the widely used LSTM and GRU, while being more robust against vanishing or exploding gradients. Stacking multiple layers of recurrent units has two major drawbacks: (i) many recurrent cells (e.g., LSTM cells) are extremely eager in terms of parameters and computational resources, and (ii) deep RNNs are prone to vanishing or exploding gradients during training. We investigate the training of multi-layer RNNs and examine the magnitude of the gradients as they propagate through the network in the "vertical" direction. We show that, depending on the structure of the basic recurrent unit, the gradients are systematically attenuated or amplified. Based on our analysis, we design a new type of gated cell that better preserves gradient magnitude. We validate our design on a large number of sequence modelling tasks and demonstrate that the proposed STAR cell makes it possible to build and train deeper recurrent architectures, ultimately leading to improved performance while being computationally efficient.
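The abstract describes a gated cell with a single gate and fewer parameters than an LSTM. As a rough illustration only, here is a minimal NumPy sketch of one step of such a single-gate cell; the update rule, the weight names (Wz, Wk, Uk, bk), and the parameter-count comparison are assumptions for illustration, not the paper's exact STAR formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def single_gate_cell_step(x_t, h_prev, Wz, Wk, Uk, bk):
    """One step of a hypothetical single-gate recurrent cell.

    A single gate k_t blends the previous hidden state with a
    candidate z_t computed from the current input. All names are
    illustrative; see the paper for the actual STAR equations.
    """
    z_t = np.tanh(Wz @ x_t)                      # input candidate
    k_t = sigmoid(Wk @ x_t + Uk @ h_prev + bk)   # single gate
    # Convex combination of old state and candidate, squashed again
    # to keep activations (and hence gradients) bounded.
    return np.tanh((1.0 - k_t) * h_prev + k_t * z_t)

# Example: one step with hidden size 4, input size 3.
rng = np.random.default_rng(0)
d, n = 4, 3
Wz, Wk = rng.standard_normal((d, n)), rng.standard_normal((d, n))
Uk, bk = rng.standard_normal((d, d)), np.zeros(d)
h = single_gate_cell_step(rng.standard_normal(n), np.zeros(d),
                          Wz, Wk, Uk, bk)
```

Such a cell carries three weight matrices versus the eight of an LSTM of the same size, which is the flavour of parameter saving the abstract claims; the exact counts for STAR differ and are given in the paper.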
Publication status
published
Journal / Series
IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume
Pages / Article number
Publisher
IEEE
Subject
Recurrent neural network; deep RNN; multi-layer RNN
Organisational unit
03886 - Schindler, Konrad / Schindler, Konrad