
Stochastic Runge-Kutta methods and adaptive SGD-G2 stochastic gradient descent
Ayadi, Imen; Turinici, Gabriel (2020), Stochastic Runge-Kutta methods and adaptive SGD-G2 stochastic gradient descent. https://basepub.dauphine.fr/handle/123456789/20719
Type
Working paper
External link (document not preserved in this repository)
https://hal.archives-ouvertes.fr/hal-02483988
Date
2020
Publisher
Cahier de recherche CEREMADE, Université Paris-Dauphine
Place of publication
Paris
Pages
16
Author(s)
Ayadi, Imen (CEntre de REcherches en MAthématiques de la DEcision [CEREMADE])
Turinici, Gabriel (CEntre de REcherches en MAthématiques de la DEcision [CEREMADE])
Abstract (EN)
The minimization of the loss function is of paramount importance in deep neural networks. On the other hand, many popular optimization algorithms have been shown to correspond to some evolution equation of gradient-flow type. Inspired by the numerical schemes used for general evolution equations, we introduce a second-order stochastic Runge-Kutta method and show that it yields a consistent procedure for minimizing the loss function. Moreover, it can be coupled, in an adaptive framework, with Stochastic Gradient Descent (SGD) to adjust the learning rate of the SGD automatically, without requiring any additional information on the Hessian of the loss functional. The resulting adaptive SGD, called SGD-G2, is successfully tested on standard datasets.
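The abstract outlines the coupling idea but not the exact update rule. The minimal Python sketch below illustrates the general principle under stated assumptions: a plain SGD (explicit Euler) step is compared against a second-order stochastic Runge-Kutta (Heun) step, and their discrepancy drives the learning-rate adaptation, with no Hessian information needed. The function name sgd_g2_sketch, the tolerance tol, and the grow/shrink factors are hypothetical illustrations, not the authors' formulas.

import numpy as np

def sgd_g2_sketch(grad, x0, h0=0.1, n_steps=100, tol=1e-2):
    # Illustrative adaptive SGD in the spirit of SGD-G2. Assumption:
    # this is NOT the paper's exact scheme, only the general idea of
    # pairing an Euler step with a second-order Runge-Kutta step.
    x = np.asarray(x0, dtype=float)
    h = h0
    for _ in range(n_steps):
        g1 = grad(x)                    # stochastic gradient at x
        euler = x - h * g1              # plain SGD (explicit Euler) step
        g2 = grad(euler)                # gradient at the Euler predictor
        heun = x - 0.5 * h * (g1 + g2)  # second-order Runge-Kutta (Heun) step
        # Embedded Runge-Kutta style error estimate: the Euler/Heun gap.
        err = np.linalg.norm(euler - heun)
        scale = np.linalg.norm(x) + 1e-12
        # Hypothetical adaptation rule: grow h while both steps agree,
        # shrink it otherwise (the paper derives its own criterion).
        h = h * 1.1 if err <= tol * scale else h * 0.5
        x = heun
    return x, h

# Usage on a noisy quadratic, f(x) = ||x||^2 with perturbed gradients:
rng = np.random.default_rng(0)
noisy_grad = lambda x: 2.0 * x + 0.01 * rng.standard_normal(x.shape)
x_min, h_final = sgd_g2_sketch(noisy_grad, x0=np.ones(5))

Using the Euler/Heun gap as an error estimate mirrors standard embedded Runge-Kutta step-size control, which is consistent with the abstract's claim that no Hessian information is required: only two gradient evaluations per step are used.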
Keywords
SGD; stochastic gradient descent; machine learning; adaptive stochastic gradient; deep learning optimization; neural networks optimization

Related publications
Showing items related by title and author.
- Ayadi, Imen; Turinici, Gabriel (2021), conference communication
- Turinici, Gabriel (2021), working paper
- Legendre, Guillaume; Turinici, Gabriel (2017), article accepted for publication or published
- Turinici, Gabriel (2017), article accepted for publication or published
- Todeschi, Gabriele (2021-12-13), thesis