Reducing Training Time of Deep Learning Based Digital Backpropagation by Stacking



Date

2022-04-01

Publication Type

Journal Article

ETH Bibliography

yes

Abstract

A method for reducing the training time of deep-learning-based digital backpropagation (DL-DBP) is presented. The method divides a link into smaller sections: the DL-DBP algorithm is trained to compensate one section, and the same trained model is then reapplied to the subsequent sections. In a 32 GBd 16QAM 2400 km 5-channel wavelength-division-multiplexing transmission experiment, we show that the proposed stacked DL-DBP provides a 0.41 dB gain over a linear compensation scheme. This compares with a 0.56 dB gain achieved by a non-stacked DL-DBP scheme, obtained at the price of a 203% increase in total training time. Furthermore, it is shown that by training only the last section of the stacked DL-DBP, the gain can be increased to 0.48 dB.
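The stacking idea in the abstract — train a compensator on a single link section, then reapply the same trained model to every subsequent section — can be illustrated with a toy NumPy sketch. Everything below is an illustrative assumption, not the paper's actual DL-DBP architecture: the channel is modeled as a power-dependent phase rotation per section, and "training" is a brute-force one-parameter grid search on one section only.

```python
import numpy as np

def propagate_section(x, phi):
    # Toy per-section distortion: power-dependent phase rotation
    # (a crude stand-in for fiber nonlinearity; illustrative only).
    return x * np.exp(1j * phi * np.abs(x) ** 2)

def compensate_section(x, phi_hat):
    # Inverse operation with an estimated parameter (the "trained model").
    return x * np.exp(-1j * phi_hat * np.abs(x) ** 2)

def train_on_one_section(tx, rx_one_section, grid):
    # Brute-force "training": pick the phi_hat that minimizes the MSE
    # after compensating a single section.
    errs = [np.mean(np.abs(compensate_section(rx_one_section, p) - tx) ** 2)
            for p in grid]
    return grid[int(np.argmin(errs))]

rng = np.random.default_rng(0)
# QPSK-like symbols with unit power (hypothetical test signal).
tx = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)

phi = 0.1        # true per-section nonlinearity, assumed identical sections
n_sections = 6

# Propagate through all sections of the link.
rx = tx.copy()
for _ in range(n_sections):
    rx = propagate_section(rx, phi)

# Train once on a single section, then "stack": reapply the same
# trained model once per section instead of training each section.
phi_hat = train_on_one_section(tx, propagate_section(tx, phi),
                               np.linspace(0.0, 0.2, 41))
y = rx.copy()
for _ in range(n_sections):
    y = compensate_section(y, phi_hat)

print("residual MSE:", float(np.mean(np.abs(y - tx) ** 2)))
```

Because all sections are assumed identical here, reusing the one trained model per section recovers the signal; the paper's point is that this reuse trades a small performance penalty (0.41 dB vs. 0.56 dB gain) for a large reduction in training time.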

Publication status

published

Volume

34 (7)

Pages / Article No.

387 - 390

Publisher

IEEE

Organisational unit

03974 - Leuthold, Juerg / Leuthold, Juerg
02635 - Institut für Elektromagnetische Felder / Institute of Electromagnetic Fields
