Author ORCID Identifier

https://orcid.org/0000-0001-6894-0996

Document Type

Conference Paper

Disciplines

Computer Sciences

Publication Details

International Conference on Human-Computer Interaction

HCII 2023: Artificial Intelligence in HCI pp 337–354

Part of the Lecture Notes in Computer Science book series (LNAI, volume 14050)

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

Abstract

Exploring end-users’ understanding of Artificial Intelligence (AI) systems’ behaviours and outputs is crucial to developing accessible Explainable Artificial Intelligence (XAI) solutions. Investigating mental models of AI systems is central to understanding and explaining the often opaque, complex, and unpredictable nature of AI. Researchers employ surveys, interviews, and observations of software systems, yielding useful evaluations. However, an evaluation gulf persists, primarily around comprehending end-users’ understanding of AI systems. It has been argued that exploring theories of human decision-making from psychology, philosophy, and human-computer interaction (HCI) in a people-centric, rather than product- or technology-centric, approach can yield initial XAI solutions with great potential. Our work presents the results of a design thinking workshop with 14 cross-disciplinary participants with backgrounds in philosophy, psychology, computer science, AI systems development, and HCI. Participants undertook design thinking activities to ideate how AI system behaviours might be explained to end-users, bridging the explanation gulf of AI systems. We reflect on design thinking as a methodology for exploring end-users’ perceptions and mental models of AI systems, with a view to creating effective, useful, and accessible XAI.

DOI

https://doi.org/10.1007/978-3-031-35891-3_21

Funder

TU Dublin Scholarship Programme
