Metadata only
Date: 2020-07
Type: Report
ETH Bibliography: yes
Abstract
Stochastic gradient descent (SGD) type optimization schemes are fundamental ingredients in a large number of machine learning based algorithms. In particular, SGD type optimization schemes are frequently employed in applications involving natural language processing, object and face recognition, fraud detection, computational advertisement, and numerical approximations of partial differential equations. In mathematical convergence results for SGD type optimization schemes, two types of error criteria are usually studied in the scientific literature: the error in the strong sense and the error with respect to the objective function. In applications one is often not only interested in the size of the error with respect to the objective function but also in the size of the error with respect to a test function which is possibly different from the objective function. The analysis of the size of this error is the subject of this article. In particular, the main result of this article proves under suitable assumptions that the size of this error decays at the same speed as in the special case where the test function coincides with the objective function.
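As a rough illustration of the error notions mentioned in the abstract, the following display uses assumed notation (the report's own symbols are not visible in this record): Θ_n denotes the n-th SGD iterate, ϑ the target minimizer of the objective function f, and φ a possibly different test function.

% Sketch of the three error criteria referred to in the abstract
% (notation assumed for illustration; it need not match the report).
\begin{align*}
  \text{error in the strong sense:}
    &\quad \big( \mathbb{E}\big[ \lVert \Theta_n - \vartheta \rVert^2 \big] \big)^{1/2}, \\
  \text{error w.r.t.\ the objective function:}
    &\quad \mathbb{E}\big[ f(\Theta_n) \big] - f(\vartheta), \\
  \text{error w.r.t.\ a test function:}
    &\quad \big\lvert \mathbb{E}\big[ \varphi(\Theta_n) \big] - \varphi(\vartheta) \big\rvert .
\end{align*}

In this assumed notation, the main result states that, under suitable assumptions, the third quantity decays at the same speed as in the special case where φ coincides with f.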
Publication status: published
Journal / series: SAM Research Report
Volume:
Publisher: Seminar for Applied Mathematics, ETH Zurich
Organisational unit: 02501 - Seminar für Angewandte Mathematik / Seminar for Applied Mathematics
Funding: 175699 - Higher order numerical approximation methods for stochastic partial differential equations (SNF)