Abstract
Recent years have witnessed a growing list of systems for distributed data-parallel training. Existing systems largely fit into two paradigms, i.e., parameter server and MPI-style collective operations. On the algorithmic side, researchers have proposed a wide range of techniques to lower the communication cost via "system relaxations": quantization, decentralization, and communication delay. However, most, if not all, existing systems rely only on standard synchronous and asynchronous stochastic gradient (SG) based optimization and therefore cannot take advantage of all the optimizations that the machine learning community has been developing recently. Given this emerging gap between the current landscapes of systems and theory, we build BAGUA, an MPI-style communication library providing a collection of primitives that is both flexible and modular enough to support state-of-the-art system relaxation techniques for distributed training. Powered by this design, BAGUA can readily implement and extend various state-of-the-art distributed learning algorithms. On a production cluster with up to 16 machines (128 GPUs), BAGUA outperforms PyTorch-DDP, Horovod, and BytePS in end-to-end training time by a significant margin (up to 2x) across a diverse range of tasks. Moreover, we conduct a rigorous tradeoff exploration showing that different algorithms and system relaxations achieve the best performance under different network conditions.
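The abstract names three system relaxations (quantization, decentralization, and communication delay). As a rough illustration of the first, the sketch below shows generic uniform 8-bit gradient quantization, which shrinks the payload a worker would exchange during data-parallel synchronization by roughly 4x. The function names and the stand-alone round-trip check are illustrative assumptions, not BAGUA's actual API.

```python
# A minimal sketch (assumed names, not BAGUA's API) of one "system relaxation"
# mentioned in the abstract: uniform 8-bit gradient quantization to reduce
# the traffic exchanged during data-parallel gradient synchronization.
import torch

def quantize_int8(grad: torch.Tensor):
    """Map a float32 gradient to int8 plus a per-tensor scale (~4x smaller payload)."""
    scale = grad.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp((grad / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize_int8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate float32 gradient from the int8 payload."""
    return q.to(torch.float32) * scale

# Round-trip check on a synthetic gradient; in a real data-parallel job, the
# int8 tensor (plus the scalar scale) is what would be communicated between
# workers instead of the full-precision gradient.
g = torch.randn(1024)
q, s = quantize_int8(g)
g_hat = dequantize_int8(q, s)
print("max abs error:", (g - g_hat).abs().max().item())
```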
Permanent link: https://doi.org/10.3929/ethz-b-000571167
Publication status: published
Journal / series: Proceedings of the VLDB Endowment
Publisher: Association for Computing Machinery
Organisational unit: 09588 - Zhang, Ce (former)