

About contrastive unsupervised representation learning for classification and its convergence

Merad, Ibrahim; Yu, Yiyang; Bacry, Emmanuel; Gaïffas, Stéphane (2021), About contrastive unsupervised representation learning for classification and its convergence. https://basepub.dauphine.psl.eu/handle/123456789/22718

File: 2012.01064.pdf (675.6 KB)
Type: Document de travail / Working paper
External document link: https://hal.archives-ouvertes.fr/hal-03438767
Date: 2021
Series title: Cahier de recherche CEREMADE, Université Paris Dauphine-PSL
Published in: Paris
Pages: 17
Author(s):
  Merad, Ibrahim
  Yu, Yiyang
  Bacry, Emmanuel (CEntre de REcherches en MAthématiques de la DEcision [CEREMADE])
  Gaïffas, Stéphane (Centre de Mathématiques Appliquées - Ecole Polytechnique [CMAP]; Laboratoire de Probabilités, Statistique et Modélisation [LPSM (UMR_8001)])
Abstract (EN)
Contrastive representation learning has recently been shown to be very efficient for self-supervised training. These methods have been used successfully to train encoders that perform comparably to supervised training on downstream classification tasks. A few works have started to build a theoretical framework around contrastive learning in which guarantees for its performance can be proven. We provide extensions of these results to training with multiple negative samples and to multiway classification. Furthermore, we provide convergence guarantees for the minimization of the contrastive training error with gradient descent for an overparametrized deep neural encoder, and we provide some numerical experiments that complement our theoretical findings.
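As a rough illustration of the objective the abstract refers to, below is a minimal sketch of a contrastive (InfoNCE-style) training loss with K negative samples, written in PyTorch. The function name, tensor shapes, and temperature parameter are illustrative assumptions for this sketch, not the paper's code; the paper's exact formulation may differ.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(anchor, positive, negatives, temperature=0.1):
        # anchor:    (B, D) encoder output f(x) for each anchor sample
        # positive:  (B, D) encoder output f(x+) for the matching positive sample
        # negatives: (B, K, D) encoder outputs for K negative samples per anchor
        pos = (anchor * positive).sum(dim=-1, keepdim=True)     # (B, 1) similarity to the positive
        neg = torch.einsum("bd,bkd->bk", anchor, negatives)     # (B, K) similarities to the negatives
        logits = torch.cat([pos, neg], dim=1) / temperature     # (B, 1 + K)
        labels = torch.zeros(anchor.size(0), dtype=torch.long)  # the positive pair sits at index 0
        return F.cross_entropy(logits, labels)

    # Toy usage: the loss is then minimized by (stochastic) gradient descent,
    # the setting for which the paper studies convergence guarantees.
    B, D, K = 32, 128, 8
    anchor = torch.randn(B, D, requires_grad=True)
    loss = info_nce_loss(anchor, torch.randn(B, D), torch.randn(B, K, D))
    loss.backward()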
Subjects / Keywords: unsupervised learning; contrastive learning; deep neural networks; theoretical guarantees

Related items

Showing items related by title and author.

  • ZiMM: a deep learning model for long term adverse events with non-clinical claims data
    Kabeshova, Anastasiia; Yu, Yiyang; Lukacs, Bertrand; Bacry, Emmanuel; Gaïffas, Stéphane (2020) Article accepted for publication or published
  • Screening anxiolytics, hypnotics, antidepressants and neuroleptics for bone fracture risk among elderly: a nation-wide dynamic multivariate self-control study using the SNDS claims database
    Morel, Maryan; Bouyer, Benjamin; Guilloux, Agathe; Laanani, Moussa; Leroy, Fanny; Nguyen, Dinh Phong; Sebiat, Youcef; Bacry, Emmanuel; Gaïffas, Stéphane (2021) Document de travail / Working paper
  • Dual Optimization for convex constrained objectives without the gradient-Lipschitz assumptions
    Bompaire, Martin; Gaïffas, Stéphane; Bacry, Emmanuel (2018) Document de travail / Working paper
  • Concentration inequalities for matrix martingales in continuous time
    Bacry, Emmanuel; Muzy, Jean-François; Gaïffas, Stéphane (2018) Article accepted for publication or published
  • ConvSCCS: convolutional self-controlled case-series model for lagged adverse event detection
    Morel, Maryan; Bacry, Emmanuel; Gaïffas, Stéphane; Guilloux, Agathe; Leroy, Fanny (2019) Article accepted for publication or published