
Algorithms that get old : the case of generative deep neural networks

Turinici, Gabriel (2022-09), Algorithms that get old : the case of generative deep neural networks, The 8th International Conference on Machine Learning, Optimization, and Data Science - LOD 2022, 2022-09, Siena, Italy

View/Open: 2202.03008.pdf (279.9 KB)
Type: Conference communication
Date: 2022-09
Conference title: The 8th International Conference on Machine Learning, Optimization, and Data Science - LOD 2022
Conference date: 2022-09
Conference city: Siena
Conference country: Italy
Pages: 9
Author(s): Turinici, Gabriel
Affiliation: CEntre de REcherches en MAthématiques de la DEcision [CEREMADE]
Abstract (EN)
Generative deep neural networks used in machine learning, such as Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs), produce new objects each time they are asked to, under the constraint that the new objects remain similar to a given list of input examples. This behavior, however, is unlike that of human artists, who change their style over time and seldom return to the style of their initial creations. We investigate a situation where VAEs are used to sample from a probability measure described by an empirical dataset. Based on recent work on Radon-Sobolev statistical distances, we propose a numerical paradigm, to be used in conjunction with a generative algorithm, that satisfies the following two requirements: the objects created do not repeat, and they evolve to fill the entire target probability distribution.
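The abstract's two requirements — generated samples that do not repeat yet spread out to cover the whole target distribution — can be illustrated with a toy sketch. The snippet below is not the paper's Radon-Sobolev construction: it uses a simple one-dimensional energy distance as a hypothetical stand-in for the statistical distance, and greedily proposes points that shrink that distance to an empirical target set.

```python
import random

def energy_distance(xs, ys):
    # Toy 1-D energy distance between two sample sets:
    # 2*E|X-Y| - E|X-X'| - E|Y-Y'| (zero iff the distributions match).
    def mean_abs(a, b):
        return sum(abs(u - v) for u in a for v in b) / (len(a) * len(b))
    return 2 * mean_abs(xs, ys) - mean_abs(xs, xs) - mean_abs(ys, ys)

def generate_non_repeating(target, n_new, candidates_per_step=50, seed=0):
    # Greedily add the candidate point that most shrinks the energy
    # distance to the target set, so the generated set spreads over the
    # target distribution instead of collapsing onto repeated values.
    rng = random.Random(seed)
    lo, hi = min(target), max(target)
    generated = [rng.uniform(lo, hi)]
    for _ in range(n_new - 1):
        cands = [rng.uniform(lo, hi) for _ in range(candidates_per_step)]
        best = min(cands,
                   key=lambda c: energy_distance(generated + [c], target))
        generated.append(best)
    return generated
```

Run against a bimodal target (two clusters near 0.1 and 1.1), the greedy rule places distinct points in both clusters, mimicking the "no repetition, full coverage" behaviour described above; the paper's actual method replaces this toy distance with a Radon-Sobolev one and couples it to a VAE.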
Subjects / Keywords
variational auto-encoder; generative adversarial network; statistical distance; vector quantization; deep neural network; measure compression

Related items

Showing items related by title and author.

  • Deep learning of Value at Risk through generative neural network models : the case of the Variational Auto Encoder
    Brugière, Pierre; Turinici, Gabriel (2022) Working paper
  • Convergence dynamics of Generative Adversarial Networks: the dual metric flows
    Turinici, Gabriel (2021) Conference communication
  • On the Expressive Power of Deep Fully Circulant Neural Networks
    Araújo, Alexandre; Negrevergne, Benjamin; Chevaleyre, Yann; Atif, Jamal (2019) Working paper
  • A priori convergence of the Greedy algorithm for the parametrized reduced basis method
    Buffa, Annalisa; Maday, Yvon; Patera, Anthony T.; Prud'Homme, Christophe; Turinici, Gabriel (2012) Article accepted for publication or published
  • The liquidity regimes and the prepayment option of a corporate loan in the finite horizon case
    Papin, Timothée; Turinici, Gabriel (2015) Article accepted for publication or published