Author ORCID Identifier
Document Type
Conference Paper
Disciplines
Computer Sciences
Abstract
Exploring end-users’ understanding of Artificial Intelligence (AI) systems’ behaviours and outputs is crucial to developing accessible Explainable Artificial Intelligence (XAI) solutions. Investigating mental models of AI systems is core to understanding and explaining the often opaque, complex, and unpredictable nature of AI. Researchers employ surveys, interviews, and observations to evaluate software systems, yielding useful evaluations. However, an evaluation gulf still exists, primarily around comprehending end-users’ understanding of AI systems. It has been argued that exploring theories of human decision-making from the fields of psychology, philosophy, and human-computer interaction (HCI), in a people-centric rather than product- or technology-centric approach, can result in initial XAI solutions with great potential. Our work presents the results of a design thinking workshop with 14 cross-collaborative participants with backgrounds in philosophy, psychology, computer science, AI systems development, and HCI. Participants undertook design thinking activities to ideate how AI system behaviours may be explained to end-users to bridge the explanation gulf of AI systems. We reflect on design thinking as a methodology for exploring end-users’ perceptions and mental models of AI systems with a view to creating effective, useful, and accessible XAI.
DOI
https://doi.org/10.1007/978-3-031-35891-3_21
Recommended Citation
Sheridan, H., Murphy, E., O’Sullivan, D. (2023). Exploring Mental Models for Explainable Artificial Intelligence: Engaging Cross-disciplinary Teams Using a Design Thinking Approach. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2023. Lecture Notes in Computer Science, vol 14050. Springer, Cham. DOI: 10.1007/978-3-031-35891-3_21
Funder
TU Dublin Scholarship Programme
Publication Details
International Conference on Human-Computer Interaction
HCII 2023: Artificial Intelligence in HCI pp 337–354
Part of the Lecture Notes in Computer Science book series (LNAI, volume 14050)
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG