
Kernel and Rich Regimes in Overparametrized Models

lib:41e471d04a3da1f0 (v1.0.0)

Authors: Blake Woodworth, Suriya Gunasekar, Jason D. Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, Nathan Srebro
Where published: ICLR 2020
ArXiv: 2002.09277
Document: PDF / DOI
Abstract URL: https://arxiv.org/abs/2002.09277v3


A recent line of work studies overparametrized neural networks in the "kernel regime," i.e. when the network behaves during training as a kernelized linear predictor, and thus training with gradient descent has the effect of finding the minimum RKHS norm solution. This stands in contrast to other studies which demonstrate how gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms. Building on an observation by Chizat and Bach, we show how the scale of the initialization controls the transition between the "kernel" (aka lazy) and "rich" (aka active) regimes and affects generalization properties in multilayer homogeneous models. We also highlight an interesting role for the width of a model in the case that the predictor is not identically zero at initialization. We provide a complete and detailed analysis for a family of simple depth-$D$ models that already exhibit an interesting and meaningful transition between the kernel and rich regimes, and we also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
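Illustrative sketch (not from the paper's code release): the simplest family the abstract refers to includes a depth-2 "diagonal" linear model whose effective predictor is beta(w) = w_plus**2 - w_minus**2, with both weight vectors initialized at a common scale alpha. The toy numpy script below, with assumed hyperparameters (plain gradient descent on squared loss, a noiseless sparse regression task, arbitrary step size and iteration count), is one way to observe the transition the abstract describes: a small alpha tends toward a sparse, l1-like solution ("rich" regime), while a large alpha tends toward the minimum-l2 interpolant ("kernel" regime).

import numpy as np

rng = np.random.default_rng(0)
n, d, k = 40, 100, 5                       # samples, dimension, true sparsity (toy choices)
X = rng.standard_normal((n, d))
beta_true = np.zeros(d)
beta_true[:k] = 1.0
y = X @ beta_true                          # noiseless sparse regression target

def train(alpha, lr=2e-3, steps=100_000):
    # Depth-2 "diagonal" model: beta(w) = w_plus**2 - w_minus**2, both initialized at scale alpha.
    w_plus = np.full(d, alpha)
    w_minus = np.full(d, alpha)
    for _ in range(steps):
        beta = w_plus**2 - w_minus**2
        grad_beta = X.T @ (X @ beta - y) / n      # gradient of the squared loss w.r.t. beta
        w_plus -= lr * 2.0 * w_plus * grad_beta   # chain rule through the elementwise squares
        w_minus += lr * 2.0 * w_minus * grad_beta
    return w_plus**2 - w_minus**2

for alpha in (1e-3, 2.0):                  # small alpha: "rich"/l1-like; large alpha: "kernel"/l2-like
    beta = train(alpha)
    print(f"alpha={alpha:g}  ||beta||_1={np.abs(beta).sum():.2f}  "
          f"||beta||_2={np.linalg.norm(beta):.2f}  "
          f"dist to beta_true={np.linalg.norm(beta - beta_true):.3f}")

With these (arbitrary) settings one would expect the small-alpha run to recover the planted sparse vector almost exactly, while the large-alpha run interpolates the training data with a denser predictor of larger l1 norm; the exact numbers depend on the chosen hyperparameters and random seed.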

Relevant initiatives

Related knowledge about this paper: Reproduced results (crowd-benchmarking and competitions), Artifact and reproducibility checklists, Common formats for research projects and shared artifacts, Reproducibility initiatives.
