Document Type
Theses, Ph.D
Disciplines
1.2 COMPUTER AND INFORMATION SCIENCE
Abstract
Explainable Artificial Intelligence (XAI) has grown rapidly in the past decade due to the prevalence of machine learning, especially deep learning, in fields like healthcare and finance. While these models excel in accuracy, their complexity hampers transparency and interpretability. Providing understandable explanations for AI predictions fosters trust, prevents errors, supports regulatory compliance, and enables model refinement. The research project outlined in this thesis unfolds in phases. It commences with a comprehensive review of existing XAI studies, contributing to the field’s knowledge by proposing a taxonomy that organises the theories and notions related to explainability, the evaluation approaches for XAI methods, and the XAI methods themselves according to various dimensions, including the format of their explanations. The central idea is unravelling the inferential process behind machine learning models. Although these models follow inference rules, such rules are seldom presented to users. Furthermore, no tool exists to check whether these rules are consistent with existing knowledge. Non-monotonic reasoning allows for the withdrawal of conclusions when new data emerges. Integrating Argumentation Theory (AT) and Abstract Argumentation Frameworks (AAFs) aims to manage conflicts in model reasoning. However, before this project, work on applying AT to mine arguments and counterarguments from machine learning models was limited. The core proposition of the research is an innovative XAI approach. This method employs computational argumentation principles to automatically construct an AAF-based representation of a machine learning model’s inferential process. Additionally, it generates weighted attacks to signify conflicting information while adhering to an “inconsistency budget” threshold to prune weaker attacks.
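The pruning step described at the end of the abstract can be sketched as follows. This is a minimal illustration, not the thesis’s actual implementation: the function name, data layout, and greedy weakest-first strategy are assumptions made for clarity. Attacks between arguments carry weights, and attacks are discarded, weakest first, as long as the cumulative weight removed stays within the inconsistency budget.

```python
# Hypothetical sketch: pruning weighted attacks in an abstract
# argumentation framework under an "inconsistency budget" (beta).
# Attacks are (attacker, target) pairs mapped to a conflict weight.

def prune_attacks(attacks, beta):
    """Discard the weakest attacks while their cumulative weight
    stays within the budget beta; return the attacks that remain."""
    spent = 0.0
    kept = dict(attacks)
    # Consider attacks in ascending order of weight (weakest first).
    for edge, weight in sorted(attacks.items(), key=lambda kv: kv[1]):
        if spent + weight <= beta:
            spent += weight
            del kept[edge]  # weak conflict: prune it
        else:
            break  # budget exhausted; stronger attacks survive
    return kept

# Example: three conflicting rules extracted from a model.
attacks = {("r1", "r2"): 0.2, ("r2", "r3"): 0.5, ("r3", "r1"): 0.9}
kept = prune_attacks(attacks, beta=0.3)  # only the 0.2 attack fits the budget
```

With a budget of 0.3, only the weakest attack (weight 0.2) is removed; spending on the next-weakest (0.5) would exceed the budget, so the two stronger conflicts remain in the framework.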
DOI
https://doi.org/10.21427/7909-4a46
Recommended Citation
Vilone, Giulia, "A Fully Automated Global Post-hoc Method Based on Abstract Argumentation for Explainable Artificial Intelligence and its Application on Fully Connected Dense Deep Neural Networks" (2024). Dissertations. 283.
https://arrow.tudublin.ie/scschcomdis/283
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Publication Details
Thesis submitted for the degree of Doctor of Philosophy, July 2024. School of Computer Science, Technological University Dublin.