Stochastic Gradient Descent: General Analysis and Improved Rates
A talk by
Peter Richtárik from KAUST, Saudi Arabia
Abstract: We propose a general yet simple theorem describing the convergence of SGD under the arbitrary sampling paradigm. Our theorem describes the convergence of an infinite array of variants of SGD, each of which is associated with a specific probability law governing the data selection rule used to form mini-batches. This is the first time such an analysis has been performed, and most of our variants of SGD have never been explicitly considered in the literature before. Our analysis relies on the recently introduced notion of expected smoothness and does not rely on a uniform bound on the variance of the stochastic gradients. By specializing our theorem to different mini-batching strategies, such as sampling with replacement and independent sampling, we derive exact expressions for the stepsize as a function of the mini-batch size. With this we can also determine the mini-batch size that optimizes the total complexity, and show explicitly that as the variance of the stochastic gradient evaluated at the minimum grows, so does the optimal mini-batch size. For zero variance, the optimal mini-batch size is one. Moreover, we prove insightful stepsize-switching rules which describe when one should switch from a constant to a decreasing stepsize regime.
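
To make the setting concrete, here is a minimal sketch (not the speaker's code) of mini-batch SGD on a least-squares problem, illustrating two ingredients mentioned in the abstract: a data selection rule (here, sampling with replacement) and a stepsize-switching rule (a constant stepsize followed by a decreasing one). All constants and names such as `batch_size`, `gamma0`, and `switch_at` are illustrative assumptions, not quantities from the talk.

```python
import numpy as np

# Minimal sketch of mini-batch SGD for f(x) = (1/n) * sum_i (a_i^T x - b_i)^2.
# Hyperparameters below are illustrative assumptions only.

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

def minibatch_grad(x, idx):
    """Stochastic gradient estimated from the sampled rows `idx`."""
    residual = A[idx] @ x - b[idx]
    return 2.0 * A[idx].T @ residual / len(idx)

def sgd(batch_size=8, gamma0=0.05, switch_at=1000, iters=3000):
    x = np.zeros(d)
    for t in range(iters):
        # Data selection rule: sampling with replacement, one of the
        # mini-batching strategies the abstract specializes the theorem to.
        idx = rng.integers(0, n, size=batch_size)
        # Stepsize-switching rule: constant early on, O(1/t) decay afterwards.
        gamma = gamma0 if t < switch_at else gamma0 * switch_at / (t + 1)
        x -= gamma * minibatch_grad(x, idx)
    return x

x_hat = sgd()
print("final objective:", np.mean((A @ x_hat - b) ** 2))
```

Different choices of the sampling law (e.g., independent sampling of each data point, or varying `batch_size`) correspond to different variants of SGD covered by the general theorem discussed in the talk.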