RESEARCH PRODUCT
Explaining the behavior of remote robots to human users: an agent-oriented approach (original title: Expliquer le comportement de robots distants à des utilisateurs humains : une approche orientée-agent)
Yazan Mualla

Subject: [SPI.OTHER] Engineering Sciences [physics]/Other; Human-Computer Interaction; Explainable Artificial Intelligence; Multi-Agent Systems

Description:
With the widespread use of Artificial Intelligence (AI) systems, understanding the behavior of intelligent agents and robots is crucial to guarantee smooth human-agent collaboration, since it is not straightforward for humans to understand an agent's state of mind. Recent studies in the goal-driven Explainable AI (XAI) domain have confirmed that explaining the agent's behavior to humans improves their understanding of the agent and increases its acceptability. However, providing overwhelming or unnecessary information may also confuse human users and cause misunderstandings. For these reasons, the parsimony of explanations has been identified as one of the key features facilitating successful human-agent interaction, where a parsimonious explanation is defined as the simplest explanation that describes the situation adequately. While the parsimony of explanations is receiving growing attention in the literature, most works address it only conceptually.

This thesis proposes, following a rigorous research methodology, a mechanism for parsimonious XAI that strikes a balance between simplicity and adequacy. In particular, it introduces a context-aware and adaptive process of explanation formulation and proposes a Human-Agent Explainability Architecture (HAExA) that makes this process operational for remote robots represented as Belief-Desire-Intention (BDI) agents. To provide parsimonious explanations, HAExA relies first on generating normal and contrastive explanations, and second on updating and filtering them before communicating them to the human.

To evaluate the proposed architecture, we design and conduct empirical human-computer interaction studies employing agent-based simulation. The studies rely on well-established XAI metrics to estimate how well understood and how satisfactory the explanations provided by HAExA are. The results are analyzed and validated using parametric and non-parametric statistical tests.
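The abstract describes a generate-then-update-and-filter pipeline for parsimonious explanations. The following is a minimal, hypothetical Python sketch of that idea only; the class names, fields, and relevance threshold below (`Explanation`, `ExplainerAgent`, `update_and_filter`, `threshold=0.7`) are illustrative assumptions and not the thesis's actual HAExA interface.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch of the pipeline sketched in the abstract: a BDI-style
# agent generates normal and contrastive explanations, then updates and
# filters them before communicating to the human (parsimony). All names and
# thresholds are illustrative assumptions, not the thesis's actual API.

@dataclass
class Explanation:
    content: str
    contrastive: bool   # contrasts the chosen action with an alternative
    relevance: float    # context-dependent importance in [0, 1]

@dataclass
class ExplainerAgent:
    beliefs: Dict[str, str] = field(default_factory=dict)
    pending: List[Explanation] = field(default_factory=list)

    def generate(self, event: str) -> None:
        """Produce a normal and a contrastive explanation for an observed event."""
        self.pending.append(Explanation(f"I did X because of {event}.", False, 0.6))
        self.pending.append(Explanation(f"I did X rather than Y because of {event}.", True, 0.8))

    def update_and_filter(self, threshold: float = 0.7) -> List[Explanation]:
        """Keep only explanations still relevant enough in the current context."""
        kept = [e for e in self.pending if e.relevance >= threshold]
        self.pending.clear()
        return kept

# Usage: only the most relevant explanation is communicated to the human.
agent = ExplainerAgent()
agent.generate("low battery")
for exp in agent.update_and_filter():
    print(exp.content)
```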
| year | journal | country | edition | language |
|---|---|---|---|---|
| 2020-11-30 | | | | |