Metric entropy limits on recurrent neural network learning of linear dynamical systems


Date

2022-07

Publication Type

Journal Article

ETH Bibliography

yes

Abstract

One of the most influential results in neural network theory is the universal approximation theorem [1–3], which states that continuous functions can be approximated to within arbitrary accuracy by single-hidden-layer feedforward neural networks. The purpose of this paper is to establish a result in this spirit for the approximation of general discrete-time linear dynamical systems—including time-varying systems—by recurrent neural networks (RNNs). For the subclass of linear time-invariant (LTI) systems, we devise a quantitative version of this statement. Specifically, measuring the complexity of the considered class of LTI systems through metric entropy according to [4], we show that RNNs can optimally learn—or identify in system-theory parlance—stable LTI systems. For LTI systems whose input-output relation is characterized through a difference equation, this means that RNNs can learn the difference equation from input-output traces in a metric-entropy optimal manner.
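For illustration only (this sketch is not from the paper): the following Python snippet writes down a stable second-order LTI difference equation and a small linear RNN, in companion form, that realizes the same input-output map on a random input trace. The coefficients and the state construction are assumptions chosen for the example.

```python
# Minimal sketch (illustrative assumptions, not the paper's construction):
# an LTI difference equation  y[n] = a1*y[n-1] + a2*y[n-2] + b0*x[n] + b1*x[n-1]
# and a linear RNN  h[n] = A h[n-1] + B x[n],  y[n] = C h[n]  realizing it.
import numpy as np

a1, a2 = 0.5, -0.2          # autoregressive coefficients (chosen stable)
b0, b1 = 1.0, 0.3           # input coefficients

# Companion-style state h[n] = (y[n], y[n-1], x[n])
A = np.array([[a1, a2, b1],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
B = np.array([b0, 0.0, 1.0])
C = np.array([1.0, 0.0, 0.0])

rng = np.random.default_rng(0)
x = rng.standard_normal(200)            # an input trace

# Reference: simulate the difference equation directly
y_ref = np.zeros_like(x)
for n in range(len(x)):
    y_ref[n] = b0 * x[n]
    if n >= 1:
        y_ref[n] += b1 * x[n - 1] + a1 * y_ref[n - 1]
    if n >= 2:
        y_ref[n] += a2 * y_ref[n - 2]

# Linear RNN with zero initial state
h = np.zeros(3)
y_rnn = np.zeros_like(x)
for n in range(len(x)):
    h = A @ h + B * x[n]
    y_rnn[n] = C @ h

print(np.max(np.abs(y_ref - y_rnn)))    # agrees up to floating-point error
```

Learning, as studied in the paper, means recovering such a realization from input-output traces rather than writing it down; the sketch only illustrates that a linear RNN can represent the difference equation exactly.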

Publication status

published

Volume

59

Pages / Article No.

198–223

Publisher

Elsevier

Subject

Recurrent neural networks; Linear dynamical systems; Metric entropy; Hardy spaces; Universal approximation; System identification

Organisational unit

03610 - Boelcskei, Helmut / Boelcskei, Helmut
