ZipML: An End-to-end Bitwise Framework for Dense Generalized Linear Models
dc.contributor.author: Zhang, Hantian
dc.contributor.author: Kara, Kaan
dc.contributor.author: Li, Jerry
dc.contributor.author: Alistarh, Dan
dc.contributor.author: Liu, Ji
dc.contributor.author: Zhang, Ce
dc.date.accessioned: 2020-07-13T10:17:38Z
dc.date.available: 2017-06-12T16:35:53Z
dc.date.available: 2020-05-15T07:25:30Z
dc.date.available: 2020-07-13T10:17:38Z
dc.date.issued: 2016-11-16
dc.identifier.uri: http://hdl.handle.net/20.500.11850/123108
dc.description.abstract [en_US]:
There has been significant recent interest in training machine-learning models at low precision: by reducing precision, one can reduce computation and communication costs by an order of magnitude. We examine training at reduced precision from both a theoretical and a practical perspective, and ask: is it possible to train models end-to-end at low precision with provable guarantees? Can this lead to consistent order-of-magnitude speedups? We present a framework called ZipML to answer these questions. For linear models, the answer is yes. We develop a simple framework based on a novel strategy called double sampling. Our framework executes training at low precision with no bias, guaranteeing convergence, whereas naive quantization would introduce significant bias. We validate the framework across a range of applications and show that it enables an FPGA prototype that is up to 6.5x faster than an implementation using full 32-bit precision. We further develop a variance-optimal stochastic quantization strategy and show that it can make a significant difference in a variety of settings. When applied to linear models together with double sampling, it saves up to another 1.7x in data movement compared with uniform quantization. When training deep networks with quantized models, we achieve higher accuracy than the state-of-the-art XNOR-Net. Finally, we extend our framework through approximation to non-linear models such as SVM. We show that, although using low-precision data induces bias, we can appropriately bound and control this bias. We find that, in practice, 8-bit precision is often sufficient to converge to the correct solution. Interestingly, however, our framework does not always outperform the naive rounding approach in practice. We discuss this negative result in detail.
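The bias issue and the double-sampling remedy named in the abstract can be made concrete. Below is a minimal sketch, not taken from the ZipML paper or its code: the function names, the least-squares running example, and the 3-bit uniform grid are illustrative assumptions. The point is that two independent stochastic quantizations of a sample keep the stochastic gradient unbiased, whereas reusing a single quantized copy would not.

```python
import numpy as np

def stochastic_quantize(x, levels):
    # Round each entry of x to one of its two neighbouring quantization
    # levels, with probabilities chosen so that E[Q(x)] = x (unbiased).
    idx = np.searchsorted(levels, x, side="right") - 1
    idx = np.clip(idx, 0, len(levels) - 2)
    lo, hi = levels[idx], levels[idx + 1]
    p_up = (x - lo) / (hi - lo)            # probability of rounding up
    up = np.random.rand(*x.shape) < p_up
    return np.where(up, hi, lo)

def double_sampling_gradient(a, b, w, levels):
    # Gradient of the least-squares loss (a^T w - b)^2 / 2 is a * (a^T w - b),
    # so the sample a appears twice. Two *independent* quantized copies are
    # drawn; reusing one copy would add a quantization-variance term and
    # bias the estimate.
    q1 = stochastic_quantize(a, levels)    # first quantized copy
    q2 = stochastic_quantize(a, levels)    # second, independent copy
    return q1 * (q2 @ w - b)

# Toy usage with a 3-bit uniform grid on [-1, 1] (illustrative values).
levels = np.linspace(-1.0, 1.0, 8)
a = np.random.uniform(-1, 1, size=16)
w = np.zeros(16)
g = double_sampling_gradient(a, b=0.5, w=w, levels=levels)
```

In expectation, `q1 * (q2 @ w - b)` equals the full-precision gradient `a * (a @ w - b)`, because the two quantized copies are independent and each is unbiased; reusing one copy for both occurrences of `a` would leave a term proportional to the quantization variance, which is the kind of bias the abstract attributes to naive quantization.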
dc.language.iso: en [en_US]
dc.publisher: Cornell University [en_US]
dc.title: ZipML: An End-to-end Bitwise Framework for Dense Generalized Linear Models [en_US]
dc.type: Working Paper
ethz.journal.title: arXiv
ethz.pages.start: 1611.05402 [en_US]
ethz.size: 37 p. [en_US]
ethz.identifier.arxiv: 1611.05402
ethz.publication.place: Ithaca, NY [en_US]
ethz.publication.status: published [en_US]
ethz.leitzahl: ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02150 - Dep. Informatik / Dep. of Computer Science::02663 - Institut für Computing Platforms / Institute for Computing Platforms::09588 - Zhang, Ce / Zhang, Ce [en_US]
ethz.leitzahl.certified: ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02150 - Dep. Informatik / Dep. of Computer Science::02663 - Institut für Computing Platforms / Institute for Computing Platforms::09588 - Zhang, Ce / Zhang, Ce
ethz.date.deposited: 2017-06-12T16:37:49Z
ethz.source: ECIT
ethz.identifier.importid: imp593654e6b085e43201
ethz.ecitpid: pub:185479
ethz.eth: yes [en_US]
ethz.availability: Metadata only [en_US]
ethz.rosetta.installDate: 2017-07-14T22:03:00Z
ethz.rosetta.lastUpdated: 2020-07-13T10:17:48Z
ethz.rosetta.exportRequired: true
ethz.rosetta.versionExported: true
ethz.COinS: ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.atitle=ZipML:%20An%20End-to-end%20Bitwise%20Framework%20for%20Dense%20Generalized%20Linear%20Models&rft.jtitle=arXiv&rft.date=2016-11-16&rft.spage=1611.05402&rft.au=Zhang,%20Hantian&Kara,%20Kaan&Li,%20Jerry&Alistarh,%20Dan&Liu,%20Ji&rft.genre=preprint&
Files in this item

There are no files associated with this item.

Publication type

Working Paper [5291]