3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation



Date

2022

Publication Type

Conference Paper

ETH Bibliography

yes


Data

Rights / License

Abstract

We propose and study a new class of gradient compressors for communication-efficient training—three point compressors (3PC)—as well as efficient distributed nonconvex optimization algorithms that can take advantage of them. Unlike most established approaches, which rely on a static compressor choice (e.g., TopK), our class allows the compressors to evolve throughout the training process, with the aim of improving the theoretical communication complexity and practical efficiency of the underlying methods. We show that our general approach can recover the recently proposed state-of-the-art error feedback mechanism EF21 (Richtárik et al., 2021) and its theoretical properties as a special case, but also leads to a number of new efficient methods. Notably, our approach allows us to improve upon the state of the art in the algorithmic and theoretical foundations of the lazy aggregation literature (Liu et al., 2017; Lan et al., 2017). As a by-product that may be of independent interest, we provide a new and fundamental link between the lazy aggregation and error feedback literature. A special feature of our work is that we do not require the compressors to be unbiased.
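As a rough illustration of the error-feedback special case mentioned in the abstract, the sketch below implements one round of the EF21 update (Richtárik et al., 2021) with a greedy Top-K compressor: each worker compresses only the change of its fresh local gradient relative to its running estimate, so the effective input to the compressor evolves over the course of training. This is a minimal, illustrative sketch rather than the paper's general 3PC method; the helper names (`topk`, `ef21_round`) and the toy quadratic losses in the usage example are assumptions introduced here, not taken from the paper.

```python
import numpy as np

def topk(v, k):
    """Greedy Top-K compressor: keep the k largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef21_round(x, g_workers, grad_fns, gamma, k):
    """One round of EF21-style error feedback (the special case recovered by 3PC).

    x         : current model parameters, shape (d,)
    g_workers : list of per-worker gradient estimates g_i^t
    grad_fns  : list of callables returning the local gradient at a given point
    gamma     : step size
    k         : sparsity level of the Top-K compressor
    """
    # Server step: move along the aggregated estimate from the previous round.
    x_new = x - gamma * np.mean(g_workers, axis=0)
    # Worker step: compress only the *change* of the local gradient and
    # send the compressed correction; the estimate g_i absorbs it.
    new_g = []
    for g_i, grad_fn in zip(g_workers, grad_fns):
        c_i = topk(grad_fn(x_new) - g_i, k)  # message actually communicated
        new_g.append(g_i + c_i)
    return x_new, new_g

if __name__ == "__main__":
    # Toy usage: n workers with quadratic losses f_i(x) = 0.5 * ||x - b_i||^2.
    rng = np.random.default_rng(0)
    d, n = 10, 4
    targets = [rng.normal(size=d) for _ in range(n)]
    grad_fns = [lambda x, b=b: x - b for b in targets]

    x = np.zeros(d)
    g_workers = [np.zeros(d) for _ in range(n)]
    for _ in range(200):
        x, g_workers = ef21_round(x, g_workers, grad_fns, gamma=0.5, k=3)
    print("distance to optimum:", np.linalg.norm(x - np.mean(targets, axis=0)))
```

The point of the sketch is the update structure: the quantity compressed is a gradient difference rather than the raw gradient, which is the mechanism the abstract refers to when it says the compressors effectively evolve during training.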

Publication status

published

Book title

Proceedings of the 39th International Conference on Machine Learning

Volume

162

Pages / Article No.

18596 - 18648

Publisher

PMLR

Event

39th International Conference on Machine Learning (ICML 2022)

Edition / version

Methods

Software

Geographic location

Date collected

Date created

Subject

Organisational unit

09729 - He, Niao / He, Niao
02219 - ETH AI Center / ETH AI Center

Notes

Funding

Related publications and datasets