
hal.structure.identifier: Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision [LAMSADE]
dc.contributor.author: Lecoutre, Adrian
hal.structure.identifier: Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision [LAMSADE]
dc.contributor.author: Negrevergne, Benjamin
HAL ID: 172154
ORCID: 0000-0002-7074-8167
hal.structure.identifier: Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision [LAMSADE]
dc.contributor.author: Yger, Florian
HAL ID: 17768
ORCID: 0000-0002-7182-8062
dc.date.accessioned: 2018-03-21T12:52:49Z
dc.date.available: 2018-03-21T12:52:49Z
dc.date.issued: 2017
dc.identifier.uri: https://basepub.dauphine.fr/handle/123456789/17578
dc.language.iso: en
dc.subject: Art style recognition
dc.subject: Painting
dc.subject: Feature extraction
dc.subject: Deep learning
dc.subject.ddc: 005.7
dc.title: Recognizing Art Style Automatically in painting with deep learning
dc.type: Communication / Conférence
dc.description.abstracten: The artistic style (or artistic movement) of a painting is a rich descriptor that captures both visual and historical information about the painting. Correctly identifying the artistic style of a painting is crucial for indexing large artistic databases. In this paper, we investigate the use of deep residual neural networks to solve the problem of detecting the artistic style of a painting, and outperform existing approaches to reach an accuracy of 62% on the Wikipaintings dataset (for 25 different styles). To achieve this result, the network is first pre-trained on ImageNet and then deeply retrained for artistic style. We empirically evaluate that, to achieve the best performance, one needs to retrain about 20 layers. This suggests that the two tasks are as similar as expected, and explains the previous success of hand-crafted features. We also demonstrate that the styles detected on the Wikipaintings dataset are consistent with styles detected on an independent dataset, and describe a number of experiments we conducted to validate this approach both qualitatively and quantitatively.
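
A minimal sketch of the fine-tuning setup described in the abstract: an ImageNet-pre-trained residual network whose last ~20 weight layers are retrained for 25-way style classification. The PyTorch framework, the ResNet-50 backbone, the "last N weight layers" unfreezing rule, and the hyper-parameters are assumptions for illustration, not taken from the paper.

# Illustrative sketch only (assumptions, not the authors' code).
import torch
import torch.nn as nn
from torchvision import models

NUM_STYLES = 25          # 25 artistic styles, as in the Wikipaintings setting
LAYERS_TO_RETRAIN = 20   # "about 20 layers" per the abstract; exact rule is assumed

# ResNet-50 backbone pre-trained on ImageNet (backbone choice is an assumption).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Replace the 1000-class ImageNet head with a 25-way style classifier.
model.fc = nn.Linear(model.fc.in_features, NUM_STYLES)

# Freeze everything, then unfreeze roughly the last LAYERS_TO_RETRAIN weight layers.
for param in model.parameters():
    param.requires_grad = False
weight_layers = [m for m in model.modules()
                 if isinstance(m, (nn.Conv2d, nn.BatchNorm2d, nn.Linear))]
for layer in weight_layers[-LAYERS_TO_RETRAIN:]:
    for param in layer.parameters():
        param.requires_grad = True

# Optimise only the unfrozen parameters with a standard classification loss.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
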
dc.identifier.citationpages: 327-342
dc.relation.ispartoftitle: Proceedings of the 9th Asian Conference on Machine Learning (ACML 2017)
dc.relation.ispartofeditor: Noh, Yung-Kyun
dc.relation.ispartofeditor: Zhang, Min-Ling
dc.relation.ispartofpublname: JMLR: Workshop and Conference Proceedings
dc.relation.ispartofdate: 2017
dc.contributor.countryeditoruniversityother: FRANCE
dc.subject.ddclabel: Organisation des données
dc.relation.conftitle: 9th Asian Conference on Machine Learning (ACML 2017)
dc.relation.confdate: 2017-11
dc.relation.confcity: Seoul
dc.relation.confcountry: Korea
dc.relation.forthcoming: non
dc.description.ssrncandidate: non
dc.description.halcandidate: oui
dc.description.readership: recherche
dc.description.audience: International
dc.relation.Isversionofjnlpeerreviewed: non
dc.date.updated: 2018-03-21T12:40:24Z
hal.faultCode: The supplied XML description does not validate the AOfr schema
hal.author.function: aut
hal.author.function: aut
hal.author.function: aut

