Depth-Adaptive Neural Networks from the Optimal Control viewpoint
hal.structure.identifier | CEntre de REcherches en MAthématiques de la DEcision [CEREMADE] | |
dc.contributor.author | Aghili, Joubine
HAL ID: 2773 | |
hal.structure.identifier | Laboratoire Jacques-Louis Lions [LJLL] | |
hal.structure.identifier | CEntre de REcherches en MAthématiques de la DEcision [CEREMADE] | |
dc.contributor.author | Mula, Olga
HAL ID: 1531 ORCID: 0000-0002-3017-6598 | |
dc.date.accessioned | 2020-10-21T09:17:10Z | |
dc.date.available | 2020-10-21T09:17:10Z | |
dc.date.issued | 2020 | |
dc.identifier.uri | https://basepub.dauphine.fr/handle/123456789/21136 | |
dc.language.iso | en | en |
dc.subject | Neural Networks | en |
dc.subject | Deep Learning | en |
dc.subject | Continuous-Depth Neural Networks | en |
dc.subject | Optimal Control | en |
dc.subject.ddc | 515 | en |
dc.title | Depth-Adaptive Neural Networks from the Optimal Control viewpoint | en |
dc.type | Document de travail / Working paper | |
dc.description.abstracten | In recent years, deep learning has been connected with optimal control as a way to define a continuous underlying learning problem. In this view, neural networks can be interpreted as a discretization of a parametric Ordinary Differential Equation which, in the limit, defines a continuous-depth neural network. The learning task then consists in finding the best ODE parameters for the problem under consideration, and their number increases with the accuracy of the time discretization. Although important steps have been taken to realize the advantages of such continuous formulations, most current learning techniques fix a discretization (i.e. the number of layers is fixed). In this work, we propose an iterative adaptive algorithm where we progressively refine the time discretization (i.e. we increase the number of layers). Provided that certain tolerances are met across the iterations, we prove that the strategy converges to the underlying continuous problem. One salient advantage of such a shallow-to-deep approach is that it makes it easier to benefit in practice from the higher approximation power of deep networks by mitigating over-parametrization issues. The performance of the approach is illustrated in several numerical examples. | en |
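The shallow-to-deep idea sketched in the abstract — a network read as a forward-Euler discretization of a parametric ODE, whose time grid (number of layers) is progressively refined — can be illustrated as follows. This is a minimal sketch, not the paper's algorithm: it assumes forward-Euler steps with tanh dynamics, and the function names (`forward`, `refine`) and parameter shapes are illustrative choices of ours.

```python
import numpy as np

def forward(x, thetas, T=1.0):
    """Forward-Euler pass: x_{k+1} = x_k + h * tanh(W_k @ x_k + b_k).

    Reads a residual network with L = len(thetas) layers as a
    discretization of the ODE dx/dt = tanh(W(t) x + b(t)) on [0, T],
    with uniform time step h = T / L.
    """
    h = T / len(thetas)
    for W, b in thetas:
        x = x + h * np.tanh(W @ x + b)
    return x

def refine(thetas):
    """One depth-refinement step: halve the time step by splitting each
    layer in two, reusing its parameters. The piecewise-constant path
    theta(t) is unchanged; only the Euler grid is finer, so the refined
    network discretizes the same continuous-depth ODE more accurately."""
    return [theta for theta in thetas for _ in range(2)]
```

In an adaptive loop one would train the coarse (shallow) network, call `refine` to double the depth, and continue training from the interpolated parameters, stopping when the change across refinement levels falls below a tolerance.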
dc.identifier.citationpages | 40 | en |
dc.relation.ispartofseriestitle | Cahier de recherche CEREMADE | en |
dc.identifier.urlsite | https://hal.archives-ouvertes.fr/hal-02897466 | en |
dc.subject.ddclabel | Analyse | en |
dc.description.ssrncandidate | non | en |
dc.description.halcandidate | non | en |
dc.description.readership | recherche | en |
dc.description.audience | International | en |
dc.date.updated | 2020-10-21T09:14:15Z | |
hal.author.function | aut | |
hal.author.function | aut |