
Model Consistency for Learning with Mirror-Stratifiable Regularizers

Fadili, Jalal; Garrigos, Guillaume; Malick, Jérôme; Peyré, Gabriel (2019-04), Model Consistency for Learning with Mirror-Stratifiable Regularizers, 22nd International Conference on Artificial Intelligence and Statistics (AISTATS) 2019, 2019-04, Naha, Japan

View/Open
stratificationAISTATS.pdf (605.0 Kb)
Type: Conference paper
External document link: https://hal.archives-ouvertes.fr/hal-01988309
Date: 2019-04
Conference title: 22nd International Conference on Artificial Intelligence and Statistics (AISTATS) 2019
Conference date: 2019-04
Conference city: Naha
Conference country: Japan
Author(s)
Fadili, Jalal
Groupe de Recherche en Informatique, Image et Instrumentation de Caen [GREYC]
Garrigos, Guillaume
Université Paris 7
Malick, Jérôme
Laboratoire Jean Kuntzmann [LJK]
Peyré, Gabriel
CEntre de REcherches en MAthématiques de la DEcision [CEREMADE]
Abstract (EN)
Low-complexity non-smooth convex regularizers are routinely used to impose some structure (such as sparsity or low rank) on the coefficients of linear predictors in supervised learning. Model consistency then consists in selecting the correct structure (for instance, support or rank) by regularized empirical risk minimization. It is known that model consistency holds under appropriate non-degeneracy conditions. However, such conditions typically fail for highly correlated designs, and it is observed that regularization methods tend to select larger models. In this work, we provide the theoretical underpinning of this behavior using the notion of mirror-stratifiable regularizers. This class of regularizers encompasses the most well-known in the literature, including the ℓ1 or trace norms. It brings into play a pair of primal-dual models, which in turn allows one to locate the structure of the solution using a specific dual certificate. We also show how this analysis is applicable to optimal solutions of the learning problem, and also to the iterates computed by a certain class of stochastic proximal-gradient algorithms.
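The setting the abstract describes can be illustrated with a minimal sketch (not the paper's algorithm): proximal-gradient iterations (ISTA) for the lasso, where the regularizer is the ℓ1 norm and the "model" selected is the support of the solution. All names, the data, and the step-size choice below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: proximal gradient (ISTA) for the lasso problem
#   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
# The l1 norm is a mirror-stratifiable regularizer; the "model"
# selected by the method is the support of the computed solution.

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=2000):
    """Plain proximal-gradient iterations with fixed step 1/L."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Synthetic, well-conditioned design (illustrative data, not from the paper)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7]] = [1.5, -2.0]
y = A @ x_true

x_hat = ista(A, y, lam=0.5)
support = np.flatnonzero(np.abs(x_hat) > 1e-6)
print("recovered support:", support)
```

Under the non-degeneracy conditions the abstract refers to, the recovered support matches the true one ({2, 7} here); for highly correlated columns of `A`, the selected support tends to be strictly larger, which is the behavior the paper explains.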
Subjects / Keywords
Mirror-Stratifiable Regularizers

Related items

Showing items related by title and author.

  • Model Consistency of Partly Smooth Regularizers
    Vaiter, Samuel; Fadili, Jalal; Peyré, Gabriel (2018) Article accepted for publication or published
  • Risk estimation for matrix recovery with spectral regularization
    Deledalle, Charles-Alban; Vaiter, Samuel; Peyré, Gabriel; Fadili, Jalal; Dossal, Charles (2012) Conference paper
  • Local Linear Convergence of Inertial Forward-Backward Splitting for Low Complexity Regularization
    Liang, Jingwei; Fadili, Jalal M.; Peyré, Gabriel (2015) Conference paper
  • Local Linear Convergence of Douglas-Rachford/ADMM for Low Complexity Regularization
    Liang, Jingwei; Fadili, Jalal M.; Peyré, Gabriel; Luke, Russell (2015) Conference paper
  • Unbiased Risk Estimation for Sparse Analysis Regularization
    Dossal, Charles; Fadili, Jalal; Peyré, Gabriel; Vaiter, Samuel; Deledalle, Charles-Alban (2012) Conference paper