
Depth-Adaptive Neural Networks from the Optimal Control viewpoint

Aghili, Joubine; Mula, Olga (2020), Depth-Adaptive Neural Networks from the Optimal Control viewpoint. https://basepub.dauphine.fr/handle/123456789/21136

View/Open
AM2020.pdf (1.440Mb)
Type
Working paper
Link to a document not held in this repository
https://hal.archives-ouvertes.fr/hal-02897466
Date
2020
Collection title
Cahier de recherche CEREMADE
Pages
40
Metadata
Author(s)
Aghili, Joubine
CEntre de REcherches en MAthématiques de la DEcision [CEREMADE]
Mula, Olga
Laboratoire Jacques-Louis Lions [LJLL]
CEntre de REcherches en MAthématiques de la DEcision [CEREMADE]
Abstract (EN)
In recent years, deep learning has been connected with optimal control as a way to define a notion of a continuous underlying learning problem. In this view, neural networks can be interpreted as a discretization of a parametric Ordinary Differential Equation which, in the limit, defines a continuous-depth neural network. The learning task then consists in finding the best ODE parameters for the problem under consideration, and their number increases with the accuracy of the time discretization. Although important steps have been taken to realize the advantages of such continuous formulations, most current learning techniques fix a discretization (i.e. the number of layers is fixed). In this work, we propose an iterative adaptive algorithm where we progressively refine the time discretization (i.e. we increase the number of layers). Provided that certain tolerances are met across the iterations, we prove that the strategy converges to the underlying continuous problem. One salient advantage of such a shallow-to-deep approach is that it helps to benefit in practice from the higher approximation properties of deep networks by mitigating over-parametrization issues. The performance of the approach is illustrated in several numerical examples.
Keywords
Neural Networks; Deep Learning; Continuous-Depth Neural Networks; Optimal Control
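As a rough illustration of the viewpoint described in the abstract (not the authors' implementation): a residual layer x ← x + Δt·f(x, θ) is one forward-Euler step of the ODE x'(t) = f(x(t), θ(t)), and a shallow-to-deep refinement can be mimicked by halving the time step while reusing each layer's parameters on the two finer sub-intervals. The function names, the tanh layer, and the piecewise-constant prolongation below are illustrative assumptions.

```python
import numpy as np

def forward_euler_net(x, thetas, T=1.0):
    """Forward pass of a residual network viewed as forward-Euler
    integration of x'(t) = f(x(t), theta(t)) on [0, T].
    Each layer applies x <- x + dt * tanh(W @ x + b)."""
    dt = T / len(thetas)
    for W, b in thetas:
        x = x + dt * np.tanh(W @ x + b)
    return x

def refine(thetas):
    """Double the number of layers (halve the time step) using a
    piecewise-constant prolongation: each layer's parameters are
    reused on the two finer sub-intervals it covers."""
    return [layer for layer in thetas for _ in range(2)]

rng = np.random.default_rng(0)
d = 3  # illustrative state dimension
thetas = [(rng.normal(size=(d, d)), rng.normal(size=d)) for _ in range(2)]
x0 = rng.normal(size=d)

coarse = forward_euler_net(x0, thetas)           # 2 layers, dt = 0.5
fine = forward_euler_net(x0, refine(thetas))     # 4 layers, dt = 0.25
```

In the adaptive strategy sketched by the abstract, such a refinement would be triggered only when certain tolerances are met, and training would then resume on the deeper network; here the prolongation step alone is shown.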

Related publications

Showing items related by title and author.

  • Mean-field Langevin System, Optimal Control and Deep Neural Networks 
    Hu, Kaitong; Kazeykina, Anna; Ren, Zhenjie (2019-09) Working paper
  • An Adaptive Nested Source Term Iteration for Radiative Transfer Equations 
    Dahmen, Wolfgang; Gruber, Felix; Mula, Olga (2020) Article accepted for publication or published
  • An Adaptive Parareal Algorithm 
    Maday, Yvon; Mula, Olga (2020) Article accepted for publication or published
  • Greedy Algorithms for Optimal Measurements Selection in State Estimation Using Reduced Models 
    Binev, Peter; Cohen, Albert; Mula, Olga; Nichols, James (2018) Article accepted for publication or published
  • Optimal reduced model algorithms for data-based state estimation 
    Cohen, Albert; Dahmen, Wolfgang; DeVore, Ron; Fadili, Jalal M.; Mula, Olga; Nichols, James (2020) Article accepted for publication or published