Bridging the gap between Markowitz planning and deep reinforcement learning

Benhamou, Éric; Saltiel, David; Ungari, Sandrine; Mukhopadhyay, Abhishek (2020), Bridging the gap between Markowitz planning and deep reinforcement learning. https://basepub.dauphine.psl.eu/handle/123456789/22199

File: Bridging_gap.pdf (542.5Kb)
Type: Document de travail / Working paper
Date: 2020
Series title: Preprint Lamsade
Published in: Paris
Author(s): Benhamou, Éric (Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision [LAMSADE]); Saltiel, David; Ungari, Sandrine; Mukhopadhyay, Abhishek
Abstract (EN)
While researchers in the asset management industry have mostly focused on techniques based on financial and risk planning, such as the Markowitz efficient frontier, minimum variance, maximum diversification, or equal risk parity, another community in machine learning has in parallel been working on reinforcement learning, and more particularly deep reinforcement learning, to solve other decision-making problems for challenging tasks such as autonomous driving, robot learning, and, on a more conceptual side, game solving such as Go. This paper aims to bridge the gap between these two approaches by showing that Deep Reinforcement Learning (DRL) techniques can shed new light on portfolio allocation thanks to a more general optimization setting that casts portfolio allocation as an optimal control problem: not just a one-step optimization, but a continuous control optimization with a delayed reward. The advantages are numerous: (i) DRL maps market conditions directly to actions by design and hence should adapt to a changing environment; (ii) DRL does not rely on traditional financial risk assumptions, such as risk being represented by variance; (iii) DRL can incorporate additional data and be a multi-input method, as opposed to more traditional optimization methods. We present some encouraging experimental results using convolutional networks.
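The abstract contrasts one-step financial planning (the Markowitz family) with a multi-step control framing of allocation. The following minimal Python sketch illustrates both sides of that contrast; all numbers (the covariance matrix, the simulated return distribution) are hypothetical and are not taken from the paper.

```python
import numpy as np

# --- One-step optimization: minimum-variance portfolio (Markowitz family) ---
# Hypothetical covariance matrix for three assets, for illustration only.
cov = np.array([
    [0.040, 0.006, 0.002],
    [0.006, 0.090, 0.010],
    [0.002, 0.010, 0.160],
])

def min_variance_weights(cov):
    """Closed-form minimum-variance weights: w = Sigma^-1 1 / (1' Sigma^-1 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

w_mv = min_variance_weights(cov)

# --- Multi-step framing: allocation as sequential control with delayed reward ---
# A toy episode: at each step the agent chooses portfolio weights, observes a
# simulated market return, and the reward (log-growth of wealth) only
# accumulates over the whole trajectory rather than from a single step.
rng = np.random.default_rng(0)

def episode_reward(policy, n_steps=50, n_assets=3):
    wealth = 1.0
    for _ in range(n_steps):
        weights = policy()                             # action: portfolio weights
        returns = rng.normal(0.001, 0.02, n_assets)    # simulated market step
        wealth *= 1.0 + weights @ returns              # cumulative, delayed payoff
    return np.log(wealth)

def equal_weight():
    return np.full(3, 1.0 / 3.0)

print(w_mv)                          # minimum-variance weights, sum to 1
print(episode_reward(equal_weight))  # episode-level reward a DRL agent would maximize
```

In the paper's setting a deep (convolutional) policy network would replace the fixed `equal_weight` policy and be trained on the episode-level reward; the sketch above only shows the shape of the two optimization problems being contrasted.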
Subjects / Keywords
asset management

Related items

Showing items related by title and author.

  • Time your hedge with Deep Reinforcement Learning 
    Benhamou, Éric; Saltiel, David; Ungari, Sandrine; Mukhopadhyay, Abhishek (2020) Document de travail / Working paper
  • AAMDRL: Augmented Asset Management with Deep Reinforcement Learning 
    Benhamou, Éric; Saltiel, David; Ungari, Sandrine; Mukhopadhyay, Abhishek; Atif, Jamal (2020) Document de travail / Working paper
  • Distinguish the indistinguishable: a Deep Reinforcement Learning approach for volatility targeting models 
    Benhamou, Éric; Saltiel, David; Tabachnik, Serge; Wong, Sui Kai; Chareyron, François (2021) Document de travail / Working paper
  • Deep Reinforcement Learning (DRL) for portfolio allocation 
    Benhamou, Éric; Saltiel, David; Ohana, Jean-Jacques; Atif, Jamal; Laraki, Rida Communication / Conférence
  • Similarities between policy gradient methods (PGM) in reinforcement learning (RL) and supervised learning (SL) 
    Benhamou, Éric (2019) Document de travail / Working paper