AI for Explaining Decisions in Multi-Agent Environments

lib:7b28155363bc5852 (v1.0.0)

Authors: Sarit Kraus, Amos Azaria, Jelena Fiosina, Maike Greve, Noam Hazon, Lutz Kolbe, Tim-Benjamin Lembcke, Jörg P. Müller, Sören Schleibaum, Mark Vollrath
ArXiv: 1910.04404
Abstract URL: https://arxiv.org/abs/1910.04404v2


Explanation is necessary for humans to understand and accept decisions made by an AI system when the system's goal is known. It is even more important when the AI system makes decisions in multi-agent environments, where the human does not know the system's goals, since these may depend on other agents' preferences. In such situations, explanations should aim to increase user satisfaction, taking into account the system's decision, the user's and the other agents' preferences, the environment settings, and properties such as fairness, envy, and privacy. Generating explanations that will increase user satisfaction is very challenging; to this end, we propose a new research direction: xMASE. We then review the state of the art and discuss research directions towards efficient methodologies and algorithms for generating explanations that will increase users' satisfaction with an AI system's decisions in multi-agent environments.
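To make the idea concrete, here is a minimal sketch (not the authors' method; all agents, utilities, and helper names below are hypothetical) of how an explanation might reference another agent's preferences and an envy check when justifying a decision:

```python
# Toy sketch: a template-based explanation for a decision in a
# two-agent setting that accounts for the other agent's preferences
# and checks for envy. All data here is hypothetical illustration.

from typing import Dict

# Each agent's utility for each possible assignment.
utilities: Dict[str, Dict[str, float]] = {
    "alice": {"slot_morning": 0.9, "slot_evening": 0.4},
    "bob":   {"slot_morning": 0.8, "slot_evening": 0.7},
}

# The system's decision: which slot each agent was assigned.
decision = {"alice": "slot_evening", "bob": "slot_morning"}


def envies(agent: str, other: str) -> bool:
    """True if `agent` prefers `other`'s assignment to their own."""
    own = utilities[agent][decision[agent]]
    theirs = utilities[agent][decision[other]]
    return theirs > own


def explain(agent: str) -> str:
    """Build a template explanation citing other agents' stakes."""
    lines = [f"You were assigned {decision[agent]}."]
    for other in (a for a in decision if a != agent):
        if envies(agent, other):
            # Justify the decision by the utility the other agent
            # would lose if the assignments were swapped.
            loss = (utilities[other][decision[other]]
                    - utilities[other][decision[agent]])
            lines.append(
                f"Although you prefer {decision[other]}, reassigning it "
                f"to you would cost {other} {loss:.2f} in utility."
            )
    return " ".join(lines)


print(explain("alice"))
# -> You were assigned slot_evening. Although you prefer slot_morning,
#    reassigning it to you would cost bob 0.10 in utility.
```

An xMASE-style explanation would, per the abstract, additionally need to account for the environment settings and for properties such as fairness and privacy, none of which this toy handles.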
