
Open access
Author
Date
2019
Type
Doctoral Thesis
ETH Bibliography
yes
Abstract
The design of machine learning algorithms is often conducted in a system-agnostic manner. As a consequence, established methods may not be well aligned with the particularities of the systems on which they are deployed. In this thesis, we demonstrate that there is great potential for improving the performance and efficiency of machine learning applications by incorporating system characteristics into the algorithm design.
We develop new principled tools and methods for training machine learning models that are theoretically sound and enable the systematic utilization of individual hardware resources available in heterogeneous systems. In particular, we focus on lowering the impact of slow interconnects on distributed training and on exploiting hierarchical memory structures, compute parallelism, and accelerator units.
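
To illustrate the kind of interconnect-aware design the abstract refers to, the sketch below shows a generic communication-efficient training scheme in Python. Each simulated worker runs several local gradient steps on its own data shard before a single model-averaging step, so far fewer synchronization rounds have to cross a slow interconnect. This is a minimal sketch of the general local-update idea (in the spirit of local SGD and local-solver methods), not the specific algorithms developed in the thesis; all function names and parameters here are illustrative.

# Minimal sketch of communication-efficient distributed training.
# Generic local-update scheme, NOT the thesis's own algorithm:
# each worker takes several gradient steps on its local shard
# before one averaging step, trading cheap local computation
# for fewer rounds over the slow interconnect.
import numpy as np

def local_sgd(shards, w0, rounds=20, local_steps=10, lr=0.1):
    """Least-squares model trained with periodic model averaging.

    shards: list of (X_k, y_k) pairs, one per simulated worker.
    Only `rounds` model vectors are communicated in total, versus
    rounds * local_steps for step-wise synchronization.
    """
    w = w0.copy()
    for _ in range(rounds):                  # one communication round
        local_models = []
        for X, y in shards:                  # runs in parallel in practice
            w_k = w.copy()
            for _ in range(local_steps):     # local computation only
                grad = X.T @ (X @ w_k - y) / len(y)
                w_k -= lr * grad
            local_models.append(w_k)
        w = np.mean(local_models, axis=0)    # single averaging step
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = rng.normal(size=5)
    shards = []
    for _ in range(4):                       # 4 simulated workers
        X = rng.normal(size=(100, 5))
        shards.append((X, X @ w_true))
    w = local_sgd(shards, np.zeros(5))
    print("error:", np.linalg.norm(w - w_true))

In this toy setup the number of synchronization rounds (20) is ten times smaller than the number of gradient steps (200), which is the basic trade-off such methods exploit when the interconnect, rather than local compute, is the bottleneck.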
Persistent Link
https://doi.org/10.3929/ethz-b-000341033
Publication Status
published
External Links
Search for a print copy via the ETH Library
Contributors
Examiner: Hofmann, Thomas
Examiner: Pozidis, Haris
Examiner: Jaggi, Martin
Examiner: Hardt, Moritz
Examiner: Püschel, Markus

Publisher
ETH Zurich
Subject
machine learning; distributed algorithms; GPU acceleration; convex optimization; generalized linear models; heterogeneous system; snap machine learning; parallel algorithms
Organisational Unit
09462 - Hofmann, Thomas / Hofmann, Thomas