Online Variance Reduction for Stochastic Optimization
dc.contributor.author: Borsos, Zalán
dc.contributor.author: Krause, Andreas
dc.contributor.author: Levy, Kfir Y.
dc.contributor.editor: Bubeck, Sébastien
dc.contributor.editor: Perchet, Vianney
dc.contributor.editor: Rigollet, Philippe
dc.date.accessioned: 2020-02-19T11:33:58Z
dc.date.available: 2018-12-12T12:33:01Z
dc.date.available: 2020-02-19T11:33:58Z
dc.date.issued: 2018
dc.identifier.issn: 2640-3498
dc.identifier.uri: http://hdl.handle.net/20.500.11850/310331
dc.description.abstract: Modern stochastic optimization methods often rely on uniform sampling, which is agnostic to the underlying characteristics of the data. This can degrade convergence by yielding gradient estimates that suffer from high variance. A possible remedy is to employ non-uniform importance sampling techniques, which take the structure of the dataset into account. In this work, we investigate a recently proposed setting that poses variance reduction as an online optimization problem with bandit feedback. We devise a novel and efficient algorithm for this setting that finds a sequence of importance sampling distributions competitive with the best fixed distribution in hindsight; this is the first result of its kind. While we present our method for sampling data points, it naturally extends to selecting coordinates or even blocks thereof. Empirical validations underline the benefits of our method in several settings.
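The abstract rests on a standard fact worth making concrete: if example i is sampled with probability p_i and its gradient is reweighted by 1/(n·p_i), the resulting estimate of the full-batch gradient stays unbiased, while the choice of p determines its variance. The sketch below, assuming a simple least-squares objective, illustrates that mechanism with a hand-picked gradient-norm heuristic for p; it is not the paper's algorithm, which instead learns the sampling distribution online from bandit feedback.

```python
import numpy as np

# Hypothetical sketch of non-uniform importance sampling for SGD.
# NOT the paper's bandit algorithm: here p is a fixed heuristic recomputed
# at each step, whereas the paper learns p online from bandit feedback.
rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

w = np.zeros(d)
lr = 0.01
for _ in range(200):
    residual = X @ w - y
    # Heuristic proposal: probability proportional to each example's gradient
    # norm, mixed with the uniform distribution so no p_i can reach zero.
    scores = np.abs(residual) * np.linalg.norm(X, axis=1)
    p = 0.5 * scores / scores.sum() + 0.5 / n
    i = rng.choice(n, p=p)
    # Gradient of the i-th squared loss; the 1/(n * p_i) importance weight
    # makes this an unbiased estimate of the full-batch gradient.
    g = residual[i] * X[i]
    w -= lr * g / (n * p[i])
```

Mixing with the uniform distribution is a common stabilization choice in this setting: it bounds the importance weights 1/(n·p_i), which would otherwise blow up the variance for rarely sampled points.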
dc.language.iso: en
dc.publisher: PMLR
dc.subject: importance sampling
dc.subject: variance reduction
dc.subject: bandit feedback
dc.subject: empirical risk minimization
dc.title: Online Variance Reduction for Stochastic Optimization
dc.type: Conference Paper
ethz.book.title: Proceedings of the 31st Conference On Learning Theory (COLT 2018)
ethz.journal.title: Proceedings of Machine Learning Research
ethz.journal.volume: 75
ethz.pages.start: 324
ethz.pages.end: 357
ethz.event: 31st Conference on Learning Theory (COLT 2018)
ethz.event.location: Stockholm, Sweden
ethz.event.date: July 6-9, 2018
ethz.grant: Scaling Up by Scaling Down: Big ML via Small Coresets
ethz.publication.place: Cambridge, MA
ethz.publication.status: published
ethz.leitzahl: ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02150 - Dep. Informatik / Dep. of Computer Science::02661 - Institut für Maschinelles Lernen / Institute for Machine Learning::03908 - Krause, Andreas / Krause, Andreas
ethz.leitzahl.certified: ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02150 - Dep. Informatik / Dep. of Computer Science::02661 - Institut für Maschinelles Lernen / Institute for Machine Learning::03908 - Krause, Andreas / Krause, Andreas
ethz.identifier.url: http://proceedings.mlr.press/v75/borsos18a.html
ethz.grant.agreementno: 167212
ethz.grant.fundername: SNF
ethz.grant.funderDoi: 10.13039/501100001711
ethz.grant.program: NFP 75: Gesuch
ethz.date.deposited: 2018-12-10T10:49:54Z
ethz.source: BATCH
ethz.source: FORM
ethz.eth: yes
ethz.availability: Metadata only
ethz.rosetta.installDate: 2018-12-12T12:33:09Z
ethz.rosetta.lastUpdated: 2021-02-15T08:09:35Z
ethz.rosetta.versionExported: true
dc.identifier.olduri: http://hdl.handle.net/20.500.11850/310308
dc.identifier.olduri: http://hdl.handle.net/20.500.11850/309647