Authors: Zaiyi Chen, Zhuoning Yuan, Jinfeng Yi, Bowen Zhou, Enhong Chen, Tianbao Yang
Where published:
ICLR 2019
ArXiv: 1808.06296
Abstract URL: http://arxiv.org/abs/1808.06296v3
Although the stochastic gradient descent (SGD) method and its variants (e.g.,
stochastic momentum methods, AdaGrad) are the algorithms of choice for solving
non-convex problems (especially in deep learning), big gaps still remain
between theory and practice, with many questions unresolved. For example,
there is still no convergence theory for SGD and its variants that use a
stagewise step size and return an averaged solution, as is done in practice.
In addition, a theoretical explanation of why the adaptive step size of
AdaGrad can improve over the non-adaptive step size of SGD is still missing
for non-convex optimization. This paper aims to address these questions and
fill the gap between theory and practice. We propose a universal stagewise
optimization framework for a broad family of non-smooth, non-convex
(namely weakly convex) problems with the following key features: (i) at each
stage, any suitable stochastic convex optimization algorithm (e.g., SGD or
AdaGrad) that returns an averaged solution can be employed to minimize a
regularized convex problem; (ii) the step size is decreased in a stagewise
manner; (iii) an averaged solution is returned as the final solution that is
selected from all stagewise averaged solutions, with sampling probabilities
increasing in the stage number. Our theoretical results for stagewise AdaGrad
exhibit its adaptive convergence and thereby shed light on why it converges
faster than stagewise SGD on problems with sparse stochastic gradients.
To the best of our knowledge, these new results are the first of their kind for
addressing the unresolved issues of existing theories mentioned earlier.
Besides these theoretical contributions, our empirical studies show that our
stagewise SGD and stagewise AdaGrad improve the generalization performance of
existing variants/implementations of SGD and AdaGrad.
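The three features listed in the abstract suggest the following minimal sketch of the stagewise framework with SGD as the inner solver. It assumes, as is standard for weakly convex objectives, that each stage minimizes a quadratically regularized subproblem f(x) + ||x - x_ref||^2 / (2*gamma); the names (stagewise_sgd, stochastic_grad, eta0, gamma) are illustrative and not the paper's notation.

```python
import numpy as np

def stagewise_sgd(stochastic_grad, x0, num_stages=10, iters_per_stage=100,
                  eta0=0.1, gamma=1.0, seed=0):
    """Sketch of the stagewise framework: each stage runs SGD with a fixed step
    size on a regularized convex subproblem, the step size shrinks across
    stages, and the output is a stage average sampled with probability
    proportional to the stage index."""
    rng = np.random.default_rng(seed)
    x_ref = np.asarray(x0, dtype=float)    # reference point of the current stage
    stage_averages = []
    for s in range(1, num_stages + 1):
        eta = eta0 / s                     # (ii) stagewise decreasing step size
        x = x_ref.copy()
        running_avg = np.zeros_like(x)
        for t in range(1, iters_per_stage + 1):
            # (i) SGD step on the regularized convex subproblem
            #     f(x) + ||x - x_ref||^2 / (2 * gamma)   (assumed form)
            g = stochastic_grad(x) + (x - x_ref) / gamma
            x = x - eta * g
            running_avg += (x - running_avg) / t       # online iterate average
        x_ref = running_avg                # averaged solution starts the next stage
        stage_averages.append(running_avg)
    # (iii) return a stagewise averaged solution, sampled with probability
    #       increasing in the stage number
    probs = np.arange(1, num_stages + 1, dtype=float)
    probs /= probs.sum()
    return stage_averages[rng.choice(num_stages, p=probs)]
```

Averaging iterates within a stage and sampling the final output across stages mirrors features (i) and (iii); swapping the inner SGD update for an AdaGrad-style update would give the stagewise AdaGrad variant discussed in the abstract.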