Minimum ℓ_1-norm interpolators: Precise asymptotics and multiple descent

10/18/2021
by Yue Li, et al.

A growing line of machine learning work has observed empirical evidence suggesting that interpolating estimators – those that achieve zero training error – are not necessarily harmful. This paper pursues a theoretical understanding of an important type of interpolator: the minimum ℓ_1-norm interpolator, which is motivated by the observation that several learning algorithms favor low ℓ_1-norm solutions in the over-parameterized regime. Concretely, we consider the noisy sparse regression model under Gaussian design, focusing on linear sparsity and high-dimensional asymptotics (so that both the number of features and the sparsity level scale proportionally with the sample size). We observe, and provide rigorous theoretical justification for, a curious multi-descent phenomenon: the generalization risk of the minimum ℓ_1-norm interpolator undergoes multiple (possibly more than two) phases of descent and ascent as the model capacity increases. This phenomenon stems from the special structure of the minimum ℓ_1-norm interpolator, as well as the delicate interplay between the over-parameterization ratio and the sparsity, thus unveiling a fundamental geometric distinction from the minimum ℓ_2-norm interpolator. Our finding is built upon an exact characterization of the risk behavior, which is governed by a system of two non-linear equations with two unknowns.
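To make the object of study concrete: the minimum ℓ_1-norm interpolator solves min ‖β‖_1 subject to Xβ = y, which can be cast as a linear program. The sketch below (not the paper's code; the function name and dimensions are illustrative) computes it via `scipy.optimize.linprog` under the abstract's setup of a Gaussian design and a sparse signal, with the number of features p exceeding the sample size n so that exact interpolation is feasible.

```python
import numpy as np
from scipy.optimize import linprog

def min_l1_interpolator(X, y):
    """Minimum l1-norm interpolator: argmin ||b||_1 subject to X b = y.

    Standard LP reformulation: write b = u - v with u, v >= 0, so
    ||b||_1 = sum(u) + sum(v) at the optimum.
    """
    n, p = X.shape
    c = np.ones(2 * p)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([X, -X])          # equality constraint: X(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v

# Illustrative instance: over-parameterized (p > n), k-sparse ground truth.
rng = np.random.default_rng(0)
n, p, k = 20, 60, 5
X = rng.standard_normal((n, p)) / np.sqrt(n)   # Gaussian design
beta_star = np.zeros(p)
beta_star[:k] = 1.0                            # k-sparse signal
y = X @ beta_star + 0.1 * rng.standard_normal(n)

beta_hat = min_l1_interpolator(X, y)
print(np.max(np.abs(X @ beta_hat - y)))        # training error: numerically zero
```

As a sanity check, the ℓ_1 norm of this solution is no larger than that of the minimum ℓ_2-norm interpolator `np.linalg.pinv(X) @ y`, which also fits the data exactly; the two estimators differ precisely in the geometry the abstract contrasts.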
