Show simple item record

dc.contributor.author: Manzi, Alessandro
dc.contributor.author: Dario, Paolo
dc.contributor.author: Cavallo, Filippo
dc.date.accessioned: 2020-06-02T14:06:23Z
dc.date.available: 2020-06-02T14:06:23Z
dc.date.issued: 2017
dc.identifier.uri: https://basepub.dauphine.fr/handle/123456789/20810
dc.language.iso: en
dc.subject: human activity recognition
dc.subject: clustering
dc.subject: x-means
dc.subject: SVM
dc.subject: SMO
dc.subject: skeleton data
dc.subject: depth camera
dc.subject: RGB-D camera
dc.subject: assisted living
dc.subject.ddc: 005
dc.title: A Human Activity Recognition System Based on Dynamic Clustering of Skeleton Data
dc.type: Article accepted for publication or published
dc.description.abstract: Human activity recognition is an important area in computer vision, with a wide range of applications including ambient assisted living. In this paper, an activity recognition system based on skeleton data extracted from a depth camera is presented. The system makes use of machine learning techniques to classify the actions, which are described with a set of a few basic postures. The training phase creates several models related to the number of clustered postures by means of a multiclass Support Vector Machine (SVM), trained with Sequential Minimal Optimization (SMO). The classification phase adopts the X-means algorithm to find the optimal number of clusters dynamically. The contribution of the paper is twofold. The first aim is to perform activity recognition employing features based on a small number of informative postures, extracted independently from each activity instance; secondly, it aims to assess the minimum number of frames needed for an adequate classification. The system is evaluated on two publicly available datasets, the Cornell Activity Dataset (CAD-60) and the Telecommunication Systems Team (TST) Fall detection dataset. The number of clusters needed to model each instance ranges from two to four elements. The proposed approach achieves excellent performance using only about 4 s of input data (~100 frames) and outperforms the state of the art when it uses approximately 500 frames on the CAD-60 dataset. These results are promising for tests in real-world contexts.
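The abstract's core idea, choosing the number of key postures per activity instance dynamically (two to four clusters) with an X-means-style criterion, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: X-means proper grows clusters by splitting and re-scoring, whereas here each candidate cluster count is scored directly with a simplified spherical-Gaussian BIC; the function names, the deterministic seeding, and the toy data are all assumptions for the example.

```python
import numpy as np

def farthest_point_init(X, k):
    """Deterministic seeding: start at the first frame, then repeatedly
    add the frame farthest from its nearest chosen center."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(X[:, None] - np.array(centers)[None], axis=2), axis=1)
        centers.append(X[int(d.argmax())])
    return np.array(centers, dtype=float)

def kmeans(X, k, iters=100):
    """Plain Lloyd iterations; keeps a center in place if its cluster empties."""
    centers = farthest_point_init(X, k)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

def bic(X, centers, labels):
    """Simplified spherical-Gaussian BIC, in the spirit of X-means scoring."""
    n, d = X.shape
    k = len(centers)
    sse = sum(((X[labels == j] - centers[j]) ** 2).sum() for j in range(k))
    var = max(sse / (d * max(n - k, 1)), 1e-12)       # pooled per-dimension variance
    sizes = np.array([np.sum(labels == j) for j in range(k)])
    sizes = sizes[sizes > 0]
    log_lik = ((sizes * np.log(sizes / n)).sum()       # mixing proportions
               - 0.5 * n * d * np.log(2 * np.pi * var)
               - 0.5 * d * (n - k))
    n_params = k * (d + 1)                             # centroids plus mixing weights
    return log_lik - 0.5 * n_params * np.log(n)

def select_num_postures(frames, k_range=(2, 3, 4)):
    """Pick the number of key postures for one activity instance by BIC,
    over the two-to-four range reported in the abstract."""
    return max(k_range, key=lambda k: bic(frames, *kmeans(frames, k)))
```

On synthetic "frames" drawn around three well-separated posture prototypes, the BIC criterion selects three clusters; in the paper's pipeline the resulting posture centroids would then feed the multiclass SVM.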
dc.relation.isversionofjnlname: Sensors
dc.relation.isversionofjnlvol: 17
dc.relation.isversionofjnlissue: 5
dc.relation.isversionofjnldate: 2017-05
dc.relation.isversionofdoi: 10.3390/s17051100
dc.relation.isversionofjnlpublisher: MDPI
dc.subject.ddclabel: Programming, software, data organization
dc.relation.forthcoming: no
dc.relation.forthcomingprint: no
dc.description.ssrncandidate: yes
dc.description.halcandidate: yes
dc.description.readership: research
dc.description.audience: International
dc.relation.Isversionofjnlpeerreviewed: yes
hal.person.labIds: 504313
hal.person.labIds: 504313
hal.person.labIds: 504313
hal.faultCode: {"meta":{"domain":{"isEmpty":"This value is required and cannot be empty"}}}

