0000000000202132
AUTHOR
Quentin Klopfenstein
Model identification and local linear convergence of coordinate descent
For composite nonsmooth optimization problems, the Forward-Backward algorithm achieves model identification (e.g., support identification for the Lasso) after a finite number of iterations, provided the objective function is regular enough. Results concerning coordinate descent are scarcer, and model identification has only been shown for specific estimators such as the support-vector machine. In this work, we show that cyclic coordinate descent achieves model identification in finite time for a wide class of functions. In addition, we prove explicit local linear convergence rates for coordinate descent. Extensive experiments on various estimators and on real datasets demonstrate that thes…
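To make the setting concrete, here is a minimal sketch (not the paper's implementation) of cyclic coordinate descent on the Lasso, a typical instance of the composite problems above. It records the support of the iterates after each pass to illustrate model identification in finitely many epochs; the problem sizes and regularization level are arbitrary illustrative choices.

import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

rng = np.random.default_rng(0)
n, p = 100, 30
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Lasso objective: 1/(2n) ||y - X b||^2 + lambda ||b||_1
lmbda = 0.1 * np.max(np.abs(X.T @ y)) / n   # fraction of lambda_max, arbitrary choice
beta = np.zeros(p)
col_norms = (X ** 2).sum(axis=0)
residual = y - X @ beta
supports = []

for epoch in range(50):
    for j in range(p):
        # Partial residual excluding coordinate j, then an exact 1D Lasso update.
        residual += X[:, j] * beta[j]
        beta[j] = soft_threshold(X[:, j] @ residual, n * lmbda) / col_norms[j]
        residual -= X[:, j] * beta[j]
    supports.append(tuple(np.flatnonzero(beta)))

# The support typically freezes after a finite number of epochs ("model identification");
# afterwards only the coefficient values keep converging, locally at a linear rate.
first_stable = next(k for k in range(len(supports)) if supports[k] == supports[-1])
print("support identified at epoch", first_stable, "->", supports[-1])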
Analysis of CD3 and CD8 immune infiltrates in 1280 patients with stage III colon cancer: a fully automated artificial intelligence approach
The retrospective study of tumors by immunohistochemistry is a key issue for improving the care of patients with cancer. The analysis, however, is time-consuming for the pathologist and most often remains semi-quantitative, which can entail a loss of information. Artificial intelligence, understood here as a fully automated high-throughput analysis approach, could in this context remove the dependence on the pathologist while additionally providing quantitative information. We therefore studied two immune parameters, CD3 and CD8, on 1280 slides from patients of the PETACC8 cohort [1], a European study of patients with colon cancers of…
Nonsmooth optimization for the estimation of cellular immune components in a tumor environment
In this PhD proposal we will investigate new regularization methods for inverse problems that provide an absolute quantification of immune cell subpopulations. The mathematical aspect of this proposal is twofold. The first goal is to enhance the underlying linear model through a more refined construction of the expression matrix. The second goal is, given this linear model, to derive the best possible estimator. These two issues can be treated in a decoupled way, which is the standard approach in existing methods such as Cibersort, or as a coupled optimization problem (known as blind deconvolution in signal processing).
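As an illustration of the linear model referred to above, the following sketch estimates cell-type proportions from a simulated bulk expression vector by non-negative least squares. This is a deliberately simple stand-in estimator, not the method proposed in the thesis (Cibersort itself relies on support vector regression), and the signature matrix and data are synthetic.

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_genes, n_cell_types = 500, 6
# Columns of A: reference expression profiles of immune cell types ("signature matrix").
A = rng.exponential(scale=1.0, size=(n_genes, n_cell_types))
x_true = rng.dirichlet(np.ones(n_cell_types))        # true proportions, sum to 1
b = A @ x_true + 0.05 * rng.standard_normal(n_genes)  # noisy bulk sample: b ~ A x

x_hat, _ = nnls(A, b)   # non-negative estimate of cell-type amounts
x_hat /= x_hat.sum()    # renormalize to proportions

print("true  :", np.round(x_true, 3))
print("estim.:", np.round(x_hat, 3))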
Evaluation of tumor immune contexture among intrinsic molecular subtypes helps to predict outcome in early breast cancer
Background: The prognosis of early breast cancer is linked to the clinicopathological stage and the molecular characteristics of intrinsic tumor cells. In some patients, the amount and quality of tumor-infiltrating immune cells appear to affect long-term outcome. We aimed to propose a new tool to estimate the immune infiltrate, and to link these factors to patient prognosis according to breast cancer molecular subtypes. Methods: We performed in silico analyses of more than 2800 early breast cancer transcriptomes with corresponding clinical annotations. We first developed a new gene expression deconvolution algorithm that accurately estimates the quantity of immune cell populations (tumor immune contexture,…
Implicit differentiation of Lasso-type models for hyperparameter optimization
Setting regularization parameters for Lasso-type estimators is notoriously difficult, though crucial in practice. The most popular hyperparameter optimization approach is grid-search using held-out validation data. Grid-search however requires choosing a predefined grid for each parameter, which scales exponentially in the number of parameters. Another approach is to cast hyperparameter optimization as a bi-level optimization problem, which one can solve by gradient descent. The key challenge for these methods is the estimation of the gradient w.r.t. the hyperparameters. Computing this gradient via forward or backward automatic differentiation is possible yet usually s…
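For a concrete picture of the implicit-differentiation step, the sketch below (assumed notation, not the paper's code) differentiates the Lasso solution with respect to its regularization parameter through the optimality condition restricted to the support, then chains it with a held-out least-squares validation loss to obtain the hypergradient.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p = 120, 40
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:4] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)
X_val = rng.standard_normal((n, p))
y_val = X_val @ beta_true + 0.1 * rng.standard_normal(n)

# Inner problem (sklearn's Lasso scaling): 1/(2n) ||y - X b||^2 + lambda ||b||_1
lmbda = 0.05 * np.max(np.abs(X.T @ y)) / n
beta = Lasso(alpha=lmbda, fit_intercept=False, tol=1e-10).fit(X, y).coef_

# Implicit differentiation on the support S:
#   X_S^T (X_S beta_S - y) / n + lambda * sign(beta_S) = 0
#   => d beta_S / d lambda = -n (X_S^T X_S)^{-1} sign(beta_S)
S = np.flatnonzero(beta)
X_S = X[:, S]
jac_S = -n * np.linalg.solve(X_S.T @ X_S, np.sign(beta[S]))

# Hypergradient of the validation loss 1/(2 n_val) ||y_val - X_val beta||^2.
grad_val = X_val[:, S].T @ (X_val @ beta - y_val) / len(y_val)
hypergrad = jac_S @ grad_val
print("support size:", S.size, " dL_val/dlambda approx.", hypergrad)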
Implicit differentiation for fast hyperparameter selection in non-smooth convex learning
Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques. In this work we study first-order methods when the inner optimization problem is convex but non-smooth. We show that the forward-mode differentiation of proximal gradient descent and proximal coordinate descent yield sequences of Jacobians converging toward the exact Jacobian. Using implicit differentiation, we show it is possible to leverage the non-smoothness of the inner problem to speed up the computation. Finally, we provide a bound on the error made on the hypergradient when the inner optimization problem is solved approxim…
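The sketch below illustrates the forward-mode idea on the Lasso solved by proximal gradient descent (ISTA): the Jacobian of the iterate with respect to the regularization parameter is propagated alongside the iterate by differentiating the gradient step and the soft-thresholding step. It is an illustrative reconstruction with synthetic data, not the authors' implementation.

import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

rng = np.random.default_rng(3)
n, p = 100, 30
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:3] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lmbda = 0.05 * np.max(np.abs(X.T @ y)) / n
step = 1.0 / (np.linalg.norm(X, ord=2) ** 2 / n)   # 1 / Lipschitz constant of the smooth part

beta = np.zeros(p)   # ISTA iterate
jac = np.zeros(p)    # d beta / d lambda, propagated in forward mode
for _ in range(2000):
    z = beta - step / n * X.T @ (X @ beta - y)
    dz = jac - step / n * X.T @ (X @ jac)        # chain rule through the gradient step
    active = np.abs(z) > step * lmbda            # where soft-thresholding is differentiable and nonzero
    beta = soft_threshold(z, step * lmbda)
    # d/dlambda ST(z, step*lambda) = 1_{active} * (dz - step * sign(z))
    jac = np.where(active, dz - step * np.sign(z), 0.0)

support = np.flatnonzero(beta)
print("support:", support, " d beta_S / d lambda:", jac[support])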