Recurrent Highway Networks
Author / Producer
Zilly, Julian Georg; Srivastava, Rupesh Kumar; Koutník, Jan; Schmidhuber, Jürgen
Date
2017-08-06
Publication Type
Conference Paper
ETH Bibliography
yes
Abstract
Many sequential processing tasks require complex nonlinear transition functions from one step to the next. However, recurrent neural networks with “deep” transition functions remain difficult to train, even when using Long Short-Term Memory (LSTM) networks. We introduce a novel theoretical analysis of recurrent networks based on Geršgorin’s circle theorem that illuminates several modeling and optimization issues and improves our understanding of the LSTM cell. Based on this analysis, we propose Recurrent Highway Networks, which extend the LSTM architecture to allow step-to-step transition depths larger than one. Several language modeling experiments demonstrate that the proposed architecture results in powerful and efficient models. On the Penn Treebank corpus, solely increasing the transition depth from 1 to 10 improves word-level perplexity from 90.6 to 65.4 using the same number of parameters. On the larger Wikipedia datasets for character prediction (text8 and enwik8), RHNs outperform all previous results and achieve an entropy of 1.27 bits per character.
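
To make the abstract's architecture concrete, the sketch below shows one RHN time step in NumPy: instead of a single recurrent layer, the step-to-step transition is a stack of L highway layers, with the external input injected only into the first layer. This is an illustrative sketch, not the authors' code; it assumes the coupled-carry-gate variant (carry gate c = 1 - t) that the paper uses in its experiments, and all names, shapes, and initializations here are this sketch's own choices.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rhn_step(x, s_prev, Wh, Wt, Rh, Rt, bh, bt):
    """One time step of an RHN with transition depth L = len(Rh).

    x      : input vector at this time step, shape (input_dim,)
    s_prev : hidden state from the previous time step, shape (hidden_dim,)
    Wh, Wt : input projections, each of shape (hidden_dim, input_dim)
    Rh, Rt : lists of L recurrent matrices, each (hidden_dim, hidden_dim)
    bh, bt : lists of L bias vectors, each (hidden_dim,)
    """
    s = s_prev
    for l in range(len(Rh)):
        # The external input enters only the first highway layer.
        in_h = Wh @ x if l == 0 else 0.0
        in_t = Wt @ x if l == 0 else 0.0
        h = np.tanh(in_h + Rh[l] @ s + bh[l])   # candidate transformation
        t = sigmoid(in_t + Rt[l] @ s + bt[l])   # transform gate
        s = h * t + s * (1.0 - t)               # coupled carry gate: c = 1 - t
    return s

# Tiny usage example with random weights: depth-3 transition, 8-dim state.
rng = np.random.default_rng(0)
L, d_in, d_h = 3, 5, 8
Wh, Wt = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_in))
Rh = [rng.normal(size=(d_h, d_h)) * 0.1 for _ in range(L)]
Rt = [rng.normal(size=(d_h, d_h)) * 0.1 for _ in range(L)]
bh = [np.zeros(d_h) for _ in range(L)]
bt = [np.full(d_h, -2.0) for _ in range(L)]  # negative bias favors carrying
s = np.zeros(d_h)
for x in rng.normal(size=(4, d_in)):         # run four time steps
    s = rhn_step(x, s, Wh, Wt, Rh, Rt, bh, bt)
print(s.shape)  # (8,)

Note the negative transform-gate bias in the usage example: at initialization it keeps the gates mostly closed, so the state is carried across the stacked layers nearly unchanged, the kind of initial behavior that the highway-network line of work suggests eases optimization of deep transitions.
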
Publication status
published
Book title
Proceedings of the 34th International Conference on Machine Learning
Volume
70
Pages / Article No.
4189–4198
Publisher
PMLR
Event
34th International Conference on Machine Learning (ICML 2017)
Organisational unit
09574 - Frazzoli, Emilio
02130 - Dep. of Mechanical and Process Engineering