Stochastic Runge-Kutta methods and adaptive SGD-G2 stochastic gradient descent

Ayadi, Imen; Turinici, Gabriel (2020), Stochastic Runge-Kutta methods and adaptive SGD-G2 stochastic gradient descent. https://basepub.dauphine.fr/handle/123456789/20719

View/Open
turinici_ayadi2020-rk-adaptive-sgd.pdf (488.5 KB)
Type
Working paper
External document link
https://hal.archives-ouvertes.fr/hal-02483988
Date
2020
Publisher
Cahier de recherche CEREMADE, Université Paris-Dauphine
Published in
Paris
Pages
16
Author(s)
Ayadi, Imen
CEntre de REcherches en MAthématiques de la DEcision [CEREMADE]
Turinici, Gabriel
CEntre de REcherches en MAthématiques de la DEcision [CEREMADE]
Abstract (EN)
The minimization of the loss function is of paramount importance in deep neural networks. On the other hand, many popular optimization algorithms have been shown to correspond to some evolution equation of gradient flow type. Inspired by the numerical schemes used for general evolution equations, we introduce a second-order stochastic Runge-Kutta method and show that it yields a consistent procedure for the minimization of the loss function. In addition, it can be coupled, in an adaptive framework, with Stochastic Gradient Descent (SGD) to automatically adjust the learning rate of the SGD, without requiring any additional information on the Hessian of the loss functional. The adaptive SGD, called SGD-G2, is successfully tested on standard datasets.
Subjects / Keywords
SGD; stochastic gradient descent; Machine Learning; adaptive stochastic gradient; deep learning optimization; neural networks optimization
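
The record itself carries no code, so the following Python sketch only illustrates the general idea described in the abstract: pair a plain SGD (Euler) step with a second-order stochastic Runge-Kutta (Heun) step on the same stochastic gradient, and use the gap between the two iterates to adapt the learning rate without any Hessian information. The function name sgd_g2_sketch, the gradient oracle, and the step-size update rule are assumptions made for the sake of a runnable example; this is not the authors' implementation.

import numpy as np

def sgd_g2_sketch(grad, x0, lr=0.1, n_steps=100, tol=1e-3):
    """Illustrative sketch (assumed, not the paper's reference code).

    grad(x) returns a stochastic gradient estimate at x.
    Returns the final iterate and the adapted learning rate.
    """
    x, h = x0.astype(float), lr
    for _ in range(n_steps):
        g1 = grad(x)
        x_euler = x - h * g1                # first-order (plain SGD) step
        g2 = grad(x_euler)
        x_heun = x - 0.5 * h * (g1 + g2)    # second-order (Heun / RK2) step
        gap = np.linalg.norm(x_heun - x_euler)
        # Crude, hypothetical step-size control: shrink h when the two
        # steps disagree, grow it slightly when they agree well.
        if gap > tol:
            h *= 0.5
        else:
            h *= 1.1
        x = x_heun
    return x, h

# Toy usage: minimize a noisy quadratic, f(x) = |x|^2 with gradient noise.
rng = np.random.default_rng(0)
noisy_grad = lambda x: 2.0 * x + 0.01 * rng.standard_normal(x.shape)
x_opt, final_lr = sgd_g2_sketch(noisy_grad, np.array([3.0, -2.0]))
print(x_opt, final_lr)

Note that the gap between the Euler and Heun iterates equals 0.5 * h * (g1 - g2), i.e. the local second-order discrepancy that the adaptive rule in this sketch tries to keep small.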

Related items

Showing items related by title and author.

  • Stochastic Runge-Kutta methods and adaptive SGD-G2 stochastic gradient descent
    Ayadi, Imen; Turinici, Gabriel (2021) Conference communication
  • The convergence of the Stochastic Gradient Descent (SGD): a self-contained proof
    Turinici, Gabriel (2021) Working paper
  • Second-order in time schemes for gradient flows in Wasserstein and geodesic metric spaces
    Legendre, Guillaume; Turinici, Gabriel (2017) Article accepted for publication or published
  • Metric gradient flows with state dependent functionals: the Nash-MFG equilibrium flows and their numerical schemes
    Turinici, Gabriel (2017) Article accepted for publication or published
  • Finite volume approximation of optimal transport and Wasserstein gradient flows
    Todeschi, Gabriele (2021-12-13) Thesis