-
Enhancing Early-Stage XAI Projects through Designer-led Visual Ideation of AI Concepts
Helen Sheridan, Emma Murphy, and Dympna O'Sullivan
The pervasive use of artificial intelligence (AI) in processing users' data is well documented, and AI is expected to profoundly change users' way of life in the near future. However, there remains a sense of mistrust among users who engage with AI systems, some of it stemming from a lack of transparency, including users failing to understand what AI is, what it can do and its impact on society. From this, the discipline of explainable artificial intelligence (XAI) has emerged: a way of designing and developing AI in which a system's decisions, processes and outputs are explained to, and can be understood by, the end user. It has been argued that designing for AI systems, especially for XAI, poses a unique set of challenges, as AI systems are often complex, opaque and difficult to visualise and interpret, particularly for those unfamiliar with their inner workings. For this reason, visual interpretations that match users' mental models of AI are a necessary step in the development of XAI solutions. Our research examines the inclusion of designers in an early-stage analysis of an AI recruitment system, taking a design thinking approach in the form of three workshops. We found that workshops that included designers yielded more visual interpretations of big ideas related to AI systems, and that the inclusion of designers encouraged more visual interpretations from non-designers and those not typically accustomed to using drawing to express their mental models.
-
Ideating Explainable AI
Helen Sheridan, Emma Murphy, and Dympna O'Sullivan
Exploring users' mental models of an AI-driven recruitment system using design thinking methods as an approach to ideating XAI.
-
Unlocking the Black Box: Evaluating XAI Through a Mixed Methods Approach Combining Quantitative Standardised Scales and Qualitative Techniques
Helen Sheridan, Dympna O'Sullivan, and Emma Murphy
In 1950, when Alan Turing published his groundbreaking paper "Computing Machinery and Intelligence" and asked "Can machines think?", a new era of research was ignited, exploring the intelligence of digital computers and their ability to deceive and/or imitate a human. From these first explorations to modern-day artificial intelligence and machine learning systems, many advances, breakthroughs and improved algorithms have been developed, often at an exponential pace. This has resulted in the pervasive use of AI systems in the processing of data. Concerns have been raised about biased decisions by AI systems that process personal data in domains such as recruitment, medicine and the judicial system, where AI systems make life-changing decisions for users. However, legislation such as the EU's AI Act has called for greater regulation of artificial intelligence in the EU, including categorising systems according to risk, developing systems with greater transparency and the right to an explanation of AI systems' decisions. Explainable artificial intelligence (XAI) is best described as a model that produces details or reasons that make the functioning of an AI system clear or easy to understand. Much research and development has been done in this area to demystify the black-box nature of some AI systems. With the right to an explanation, XAI will play a leading role in companies' compliance for high-risk AI systems and in the delivery of explanations to those who engage with AI.
However, when an XAI method is presented to a user, how do we know that the user understands the explanation? When we factor in other metrics that might be crucial when evaluating users' understanding of an XAI output, such as user satisfaction, user curiosity or need for an explanation, user trust in the system following an explanation, and users' mental models of the AI system, we can see the multiple evaluation methods, scales and tests that may need to be considered. We must also bear in mind the breadth of user types who require explanations and the range of domains that use AI for automated decision-making, both of which may also influence the evaluation method employed.
Evaluation methods traditionally used within the IT industry for software and websites have been utilised, examined, evaluated and verified extensively. Although there are many standardised scales and evaluation methods used to evaluate software and websites, such as SUS, PSSUQ and SUMI, few of these methods translate directly to the domain of XAI. Those designed specifically for XAI, such as the SCS, goodness check, satisfaction scale and trust scale, have not been as thoroughly tested and validated as those for software and websites. Many qualitative evaluation methods specifically tailored for evaluating AI and XAI, such as AAR/AI, explainability scenarios and counterfactual scenarios, suffer the same fate. This is to be expected, since XAI, in comparison to software and websites, is a relatively new field of research.
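As an illustration of how one of these standardised scales is scored, the minimal sketch below computes a System Usability Scale (SUS) score from a single participant's ten Likert ratings using the published scoring rule; the function name and the example ratings are illustrative and are not drawn from the study described here.

```python
# Minimal sketch: scoring the System Usability Scale (SUS).
# `responses` holds the ten 1-5 Likert ratings in questionnaire order.
def sus_score(responses):
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for item, rating in enumerate(responses, start=1):
        # Odd-numbered (positively worded) items contribute (rating - 1);
        # even-numbered (negatively worded) items contribute (5 - rating).
        total += (rating - 1) if item % 2 == 1 else (5 - rating)
    return total * 2.5  # scale the 0-40 raw sum to 0-100

# Hypothetical participant ratings, for illustration only.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```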
As part of a larger study, we present an overview of quantitative methods, in the form of standardised scales, and qualitative techniques considered user experience methods for evaluating more traditional forms of information technology such as software or websites, with an emphasis on those best suited to, and validated for, the evaluation of XAI. We also present an overview of evaluation methods specifically designed to evaluate XAI and discuss how these might be used in conjunction with traditional evaluation methods to determine users' understanding of XAI outputs.
-
Identifying Gendered Language
Shweta Soundararajan and Sarah Jane Delany
Gendered language refers to the use of words that indicate the gender of an individual. It can be explicit, where the gender is directly implied by the specific words used (e.g., mother, she, man), or implicit, where societal roles and behaviors convey a person's gender, for example, the expectation that women display communal traits (e.g., affectionate, caring, gentle) and men display agentic traits (e.g., assertive, competitive, decisive). The presence of gendered language in natural language processing (NLP) systems can reinforce gender stereotypes and bias. Our work introduces an approach to creating gendered language datasets using ChatGPT. These datasets are designed to support data-driven methods for identifying gender stereotypes and mitigating gender bias. The approach focuses on generating implicit gendered language that captures and reflects stereotypical characteristics or traits associated with a specific gender. This is achieved by constructing prompts for ChatGPT that incorporate gender-coded words sourced from gender-coded lexicons. Evaluation of the generated datasets demonstrates good examples of English-language gendered sentences that can be categorized as either contradictory to or consistent with gender stereotypes. Additionally, the generated data exhibits a strong gender bias.
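As a minimal sketch of how such prompts might be constructed (not the authors' actual prompts), the following uses the OpenAI Python client to ask ChatGPT for sentences built around example communal and agentic words; the model choice, prompt wording, word lists and function name are assumptions for illustration.

```python
# Illustrative sketch: generating implicitly gendered sentences by embedding
# gender-coded lexicon words into a ChatGPT prompt (OpenAI Python client).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Example gender-coded words; real gender-coded lexicons are much larger.
communal_words = ["affectionate", "caring", "gentle"]     # female-coded
agentic_words = ["assertive", "competitive", "decisive"]  # male-coded

def generate_sentences(coded_words, n=5):
    # Build a prompt that conveys traits implicitly, without naming a gender.
    prompt = (
        f"Write {n} sentences describing a person at work. "
        f"Each sentence should portray traits such as: {', '.join(coded_words)}. "
        "Do not state the person's gender explicitly."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model choice is an assumption
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(generate_sentences(communal_words))
print(generate_sentences(agentic_words))
```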
-
Detecting Patches on Road Pavement Images Acquired with 3D Laser Sensors using Object Detection and Deep Learning
Ibrahim Hassan Syed, Dympna O'Sullivan, Susan McKeever, David Power, Ray Mcgowan, and Kieran Feighan
Regular pavement inspections are key to good road maintenance and road defect corrections. Advanced pavement inspection systems such as LCMS (Laser Crack Measurement System) can automatically detect the presence of different defects using 3D lasers. However, such systems still require manual involvement to complete the detection of pavement defects. This work proposes an automatic patch detection system using an object detection technique. Results show that the object detection model can successfully detect patches inside LCMS images and suggest that the proposed approach could be integrated into existing pavement inspection systems.
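The abstract does not specify the detection architecture used; as a minimal sketch of how such a detector might be applied to an LCMS image, the following assumes a torchvision Faster R-CNN fine-tuned for a single "patch" class, with a hypothetical checkpoint file, input image and confidence threshold.

```python
# Illustrative sketch (architecture, file names and threshold are assumptions):
# running a fine-tuned Faster R-CNN over an LCMS pavement image to locate patches.
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

# Two classes: background + "patch". The checkpoint path is hypothetical.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
model.load_state_dict(torch.load("patch_detector.pth", map_location="cpu"))
model.eval()

image = Image.open("lcms_pavement_section.png").convert("RGB")
tensor = F.to_tensor(image)

with torch.no_grad():
    prediction = model([tensor])[0]  # dict with "boxes", "labels", "scores"

# Report detections above an arbitrary 0.5 confidence threshold.
for box, score in zip(prediction["boxes"], prediction["scores"]):
    if score >= 0.5:
        x1, y1, x2, y2 = box.tolist()
        print(f"patch at ({x1:.0f}, {y1:.0f}) -> ({x2:.0f}, {y2:.0f}), score {score:.2f}")
```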
-
Investigating the Use of Conversational Agents as Accountable Buddies to Support Health and Lifestyle Change
Ekaterina Uetova, Dympna O'Sullivan, Lucy Hederman, and Robert J. Ross
The poster focuses on the role of conversational agents in promoting health and well-being. Results of the literature review indicate that negative emotions can hinder individuals from taking necessary actions related to their health. The study concludes that understanding and addressing emotional barriers is essential to facilitating early access to health services and improving well-being. The poster outlines plans to investigate motivation strategies, develop a prototype conversational agent based on user study insights and chat log data, and incorporate emotion regulation to effectively manage users' emotional experiences.