Document Type

Conference Paper


Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence


Computer Sciences

Publication Details

CENTRIC 2022: The Fifteenth International Conference on Advances in Human-oriented and Personalized Mechanisms, Technologies, and Services, Lisbon


Abstract

Artificial Intelligence (AI) plays an important role in society, including in how vital, often life-changing decisions are made. For this reason, interest in Explainable Artificial Intelligence (XAI) has grown in recent years as a means of revealing the processes and operations within what is often described as a black box: an opaque system whose decisions are difficult for the end user to understand. This paper presents the results of a design thinking workshop with 20 participants (computer science and graphic design students) in which we investigated users' mental models when interacting with AI systems. Using two personas, participants were asked to empathise with two end users of an AI-driven recruitment system, identify pain points in a user's experience, and ideate on possible solutions to those pain points. These tasks were used to explore users' understanding of AI systems, the intelligibility of AI systems, and how the inner workings of these systems might be explained to end users. We found that visual feedback, analytics and comparisons, and feature highlighting, in conjunction with factual, counterfactual, and principal reasoning explanations, could be used to improve users' mental models of AI systems.



TU Dublin