Open access
Author
Date
2019
Type
Doctoral Thesis
ETH Bibliography
yes
Abstract
The design of machine learning algorithms is often conducted in a system-agnostic manner. As a consequence, established methods may not be well aligned with the particularities of the systems on which they are deployed. In this thesis we demonstrate that there is substantial potential to improve the performance and efficiency of machine learning applications by incorporating system characteristics into algorithm design.
We develop new, principled tools and methods for training machine learning models that are theoretically sound and enable the systematic utilization of the individual hardware resources available in heterogeneous systems. In particular, we focus on lowering the impact of slow interconnects on distributed training and on exploiting hierarchical memory structures, compute parallelism, and accelerator units.
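
To make the interconnect theme above concrete, the following NumPy sketch simulates one well-known communication-reduction pattern: workers take several local gradient steps on their own shard of an l2-regularized logistic regression problem (a generalized linear model) and only periodically average their models, so the slow interconnect is crossed once per round rather than once per step. All names, shapes, and hyperparameters are illustrative assumptions for this sketch, not the algorithms developed in the thesis.

# Minimal single-process simulation of local-update training with
# periodic model averaging. Illustrative only; not the thesis's method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data, sharded across simulated workers.
n, d, workers = 4000, 20, 4
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float) * 2 - 1  # labels in {-1, +1}
shards = [(X[i::workers], y[i::workers]) for i in range(workers)]

def logistic_grad(w, Xs, ys, lam=1e-3):
    """Gradient of the l2-regularized logistic loss on one shard."""
    margins = ys * (Xs @ w)
    probs = 1.0 / (1.0 + np.exp(margins))  # sigmoid(-margin)
    return -(Xs.T @ (ys * probs)) / len(ys) + lam * w

w_global = np.zeros(d)
rounds, local_steps, lr = 50, 10, 0.5
for r in range(rounds):
    local_models = []
    for Xs, ys in shards:              # each worker proceeds independently...
        w = w_global.copy()
        for _ in range(local_steps):   # ...taking several cheap local steps
            w -= lr * logistic_grad(w, Xs, ys)
        local_models.append(w)
    # One averaging step per round stands in for an all-reduce over the
    # interconnect: per gradient step, communication drops by a factor
    # of local_steps compared to fully synchronous SGD.
    w_global = np.mean(local_models, axis=0)

acc = np.mean(np.sign(X @ w_global) == y)
print(f"accuracy after {rounds} rounds: {acc:.3f}")

Raising local_steps reduces communication volume at the cost of slightly staler local models, which is the kind of system-aware trade-off the abstract describes.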
Permanent link
https://doi.org/10.3929/ethz-b-000341033
Publication status
published
External links
Search print copy at ETH Library
Contributors
Examiner: Hofmann, Thomas
Examiner: Pozidis, Haris
Examiner: Jaggi, Martin
Examiner: Hardt, Moritz
Examiner: Püschel, Markus
Publisher
ETH Zurich
Subject
machine learning; distributed algorithms; GPU acceleration; convex optimization; generalized linear models; heterogeneous system; snap machine learning; parallel algorithms
Organisational unit
09462 - Hofmann, Thomas / Hofmann, Thomas