Show simple item record

hal.structure.identifier: Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision [LAMSADE]
dc.contributor.author: Airiau, Stéphane
  HAL ID: 742766
  ORCID: 0000-0003-4669-7619
hal.structure.identifier: Laboratoire d'Informatique Paris Descartes [LIPADE - EA 2517]
dc.contributor.author: Bonzon, Elise
hal.structure.identifier: Institute for Logic, Language and Computation [ILLC]
dc.contributor.author: Endriss, Ulle
hal.structure.identifier: Laboratoire d'Informatique de Paris 6 [LIP6]
dc.contributor.author: Maudet, Nicolas
  HAL ID: 4473
  ORCID: 0000-0002-4232-069X
hal.structure.identifier: Laboratoire d'Informatique Paris Descartes [LIPADE - EA 2517]
dc.contributor.author: Rossit, Julien
dc.date.accessioned: 2019-06-25T09:38:02Z
dc.date.available: 2019-06-25T09:38:02Z
dc.date.issued: 2017
dc.identifier.issn: 1076-9757
dc.identifier.uri: https://basepub.dauphine.fr/handle/123456789/19020
dc.language.iso: en
dc.subject: argumentation framework
dc.subject.ddc: 006.3
dc.title: Rationalisation of Profiles of Abstract Argumentation Frameworks: Characterisation and Complexity
dc.type: Article accepted for publication or published
dc.description.abstract: Different agents may have different points of view. Following a popular approach in the artificial intelligence literature, this can be modeled by means of different abstract argumentation frameworks, each consisting of a set of arguments the agent is contemplating and a binary attack-relation between them. A question arising in this context is whether the diversity of views observed in such a profile of argumentation frameworks is consistent with the assumption that every individual argumentation framework is induced by a combination of, first, some basic factual attack-relation between the arguments and, second, the personal preferences of the agent concerned regarding the moral or social values the arguments under scrutiny relate to. We treat this question of rationalisability of a profile as an algorithmic problem and identify tractable and intractable cases. In doing so, we distinguish different constraints on admissible rationalisations, e.g., concerning the types of preferences used or the number of distinct values involved. We also distinguish two different semantics for rationalisability, which differ in the assumptions made on how agents treat attacks between arguments they do not report. This research agenda, bringing together ideas from abstract argumentation and social choice, is useful for understanding what types of profiles can reasonably be expected to occur in a multiagent system.
dc.relation.isversionofjnlname: Journal of Artificial Intelligence Research
dc.relation.isversionofjnlvol: 60
dc.relation.isversionofjnldate: 2017
dc.relation.isversionofjnlpages: 149-177
dc.relation.isversionofdoi: https://doi.org/10.1613/jair.5436
dc.contributor.countryeditoruniversityother: FRANCE
dc.subject.ddclabel: Artificial intelligence
dc.relation.forthcoming: no
dc.relation.forthcomingprint: no
dc.description.ssrncandidate: no
dc.description.halcandidate: yes
dc.description.readership: research
dc.description.audience: International
dc.relation.Isversionofjnlpeerreviewed: yes
dc.date.updated: 2019-03-31T15:57:49Z
hal.faultCode: {"meta":{"identifier":{"regexNotMatch":"'https://doi.org/10.1613/jair.5436' is not a valid DOI, for example: 10.xxx"}}}
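The fault code above records why the deposit was flagged: the `dc.relation.isversionofdoi` field holds the full resolver URL, while HAL expects a bare DOI beginning with `10.`. A minimal sketch of that kind of check — the `DOI_PATTERN` regex below is an assumed approximation, not the actual pattern HAL applies:

```python
import re

# Assumed approximation of a DOI format check: a bare DOI starts with
# "10." followed by a registrant code and a suffix (e.g. 10.1613/jair.5436).
# The exact regex HAL uses is not shown in the record.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def is_valid_doi(value: str) -> bool:
    """Return True if value looks like a bare DOI rather than a resolver URL."""
    return bool(DOI_PATTERN.match(value))

url = "https://doi.org/10.1613/jair.5436"
print(is_valid_doi(url))        # False: full resolver URL, rejected as in the fault
print(is_valid_doi(url.removeprefix("https://doi.org/")))  # True: bare DOI
```

Stripping the `https://doi.org/` prefix before depositing would satisfy a check of this form.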
hal.author.function: aut
hal.author.function: aut
hal.author.function: aut
hal.author.function: aut
hal.author.function: aut

