AUTHOR
Mathurin Massias
Exploiting regularity in sparse Generalized Linear Models
International audience; Generalized Linear Models (GLM) are a wide class of regression and classification models, where the predicted variable is obtained from a linear combination of the input variables. For statistical inference in high dimensions, sparsity-inducing regularization has proven useful while offering statistical guarantees. However, solving the resulting optimization problems can be challenging: even for popular iterative algorithms such as coordinate descent, one needs to loop over a large number of variables. To mitigate this, techniques known as screening rules and working sets diminish the size of the optimization problem at hand, either by progressively removing variables, or by …
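A minimal sketch of the working-set idea this abstract refers to, using the Lasso as the canonical sparse GLM: an inner coordinate-descent solver is run on a small subset of features, and the subset is grown until the full problem is solved. The feature score (correlation with the residuals) and the geometric growth of the working set are illustrative choices, not the specific rules studied in the thesis.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def cd_lasso(X, y, lam, w, n_epochs=50):
    """Cyclic coordinate descent for 0.5 * ||y - X w||^2 + lam * ||w||_1."""
    lipschitz = (X ** 2).sum(axis=0)   # per-coordinate Lipschitz constants
    R = y - X @ w                      # residuals, maintained incrementally
    for _ in range(n_epochs):
        for j in range(X.shape[1]):
            if lipschitz[j] == 0.0:
                continue
            old = w[j]
            w[j] = soft_threshold(old + X[:, j] @ R / lipschitz[j],
                                  lam / lipschitz[j])
            if w[j] != old:
                R += (old - w[j]) * X[:, j]
    return w

def working_set_lasso(X, y, lam, n_outer=10, ws_size=10):
    """Solve the Lasso through a sequence of small subproblems."""
    p = X.shape[1]
    w = np.zeros(p)
    for _ in range(n_outer):
        R = y - X @ w
        top = np.argsort(np.abs(X.T @ R))[-ws_size:]   # most correlated features
        ws = np.union1d(np.flatnonzero(w), top)        # never drop the current support
        w_ws = cd_lasso(X[:, ws], y, lam, w[ws].copy())
        w = np.zeros(p)
        w[ws] = w_ws
        ws_size *= 2                                   # grow the working set
    return w
```

Each outer iteration is cheap because coordinate descent only loops over the working set; screening rules play the complementary role of certifying that discarded features cannot re-enter the solution.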
Dual Extrapolation for Sparse Generalized Linear Models
International audience; Generalized Linear Models (GLM) form a wide class of regression and classification models, where prediction is a function of a linear combination of the input variables. For statistical inference in high dimensions, sparsity-inducing regularizations have proven to be useful while offering statistical guarantees. However, solving the resulting optimization problems can be challenging: even for popular iterative algorithms such as coordinate descent, one needs to loop over a large number of variables. To mitigate this, techniques known as screening rules and working sets diminish the size of the optimization problem at hand, either by progressively removing variables, o…
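The dual extrapolation step behind this paper can be sketched in a few lines, again on the Lasso: residuals from successive coordinate-descent epochs are combined with Anderson-style weights, and the combination is rescaled into a feasible dual point whose duality gap is typically much tighter than the one obtained from the last residual alone. The constants and the fallback below are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def rescale_to_dual(R, X, lam):
    """Turn a residual vector into a feasible point of the Lasso dual."""
    return R / max(lam, np.max(np.abs(X.T @ R)))

def dual_objective(theta, y, lam):
    """D(theta) = 0.5 ||y||^2 - 0.5 lam^2 ||theta - y / lam||^2."""
    return 0.5 * y @ y - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)

def extrapolated_dual_point(residuals, X, lam):
    """Anderson-style extrapolation of the last K + 1 residuals r_t = y - X w_t.

    The weights c solve a small K x K linear system built from successive
    residual differences; if that system is singular, fall back to the
    classical residual-based dual point.
    """
    r = np.column_stack(residuals)       # shape (n, K + 1)
    U = np.diff(r, axis=1)               # shape (n, K)
    try:
        z = np.linalg.solve(U.T @ U, np.ones(U.shape[1]))
    except np.linalg.LinAlgError:
        return rescale_to_dual(residuals[-1], X, lam)
    c = z / z.sum()
    r_acc = r[:, 1:] @ c                 # extrapolated residual
    return rescale_to_dual(r_acc, X, lam)
```

In practice one keeps whichever of the extrapolated and residual-based dual points gives the larger value of dual_objective, so extrapolation can only improve the stopping criterion and the screening tests.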
Implicit differentiation for fast hyperparameter selection in non-smooth convex learning
International audience; Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques. In this work we study first-order methods when the inner optimization problem is convex but non-smooth. We show that the forward-mode differentiation of proximal gradient descent and proximal coordinate descent yield sequences of Jacobians converging toward the exact Jacobian. Using implicit differentiation, we show it is possible to leverage the non-smoothness of the inner problem to speed up the computation. Finally, we provide a bound on the error made on the hypergradient when the inner optimization problem is solved approxim…
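A minimal sketch of the forward-mode idea on the Lasso, assuming the hyperparameter is the regularization strength lam and the outer criterion is a held-out least-squares loss (both assumptions, chosen only for illustration): proximal gradient descent (ISTA) is iterated jointly on the primal variable and on its derivative with respect to lam, and the resulting Jacobian is chained with the validation gradient to form the hypergradient.

```python
import numpy as np

def lasso_hypergradient_forward(X, y, X_val, y_val, lam, n_iter=500):
    """Forward-mode differentiation of ISTA for the Lasso.

    Iterates w (primal variable) and dw = dw / dlam together; soft-thresholding
    is differentiable wherever a coordinate is strictly active or strictly
    inactive, which is what makes the Jacobian recursion well defined.
    """
    p = X.shape[1]
    L = np.linalg.norm(X, ord=2) ** 2          # Lipschitz constant of the data fit
    w, dw = np.zeros(p), np.zeros(p)
    for _ in range(n_iter):
        z = w - X.T @ (X @ w - y) / L          # gradient step
        dz = dw - X.T @ (X @ dw) / L           # its derivative w.r.t. lam
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        active = np.abs(z) > lam / L
        dw = active * (dz - np.sign(z) / L)    # chain rule through soft-thresholding
    # hypergradient of the validation loss 0.5 * ||y_val - X_val w||^2
    grad_val = X_val.T @ (X_val @ w - y_val)
    return grad_val @ dw
```

Implicit differentiation exploits the same structure without unrolling the solver: once the support of the solution is identified, the Jacobian is obtained from a linear system restricted to the active features, which is what makes the approach fast when the solution is sparse.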