A comparative analysis of rule-based, model-agnostic methods for explainable artificial intelligence
Author ORCID Identifier
https://orcid.org/0000-0002-2718-5426
Document Type
Article
Rights
Available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence
Disciplines
Computer Sciences
Abstract
The ultimate goal of Explainable Artificial Intelligence is to build models that possess both high accuracy and a high degree of explainability. Understanding the inferences of such models can be seen as a process that discloses the relationships between their input and output. These relationships can be represented as a set of inference rules, which are usually not explicit within a model. Scholars have proposed several methods for extracting rules from data-driven machine-learned models; however, limited work exists on their comparison. This study proposes a novel comparative approach to evaluate and compare the rulesets produced by four post-hoc rule extractors by employing six quantitative metrics. Findings demonstrate that these metrics can help identify which methods are superior to the others and are thus capable of successfully modelling distinct aspects of explainability.
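The six metrics used in the paper are not named in this abstract. As an illustration only, the following hypothetical Python sketch computes a few generic ruleset-quality measures (ruleset size, average rule length, coverage, and fidelity to the underlying model) of the kind such a quantitative comparison might employ; all names and the rule representation here are assumptions, not the paper's actual method.

    # Hypothetical sketch: generic quality metrics for an extracted ruleset.
    # The paper's actual six metrics are not listed in the abstract; the
    # measures below are illustrative examples of quantitative evaluation.
    from dataclasses import dataclass
    from typing import Callable, List, Sequence, Tuple

    # A condition is (feature index, operator, threshold), e.g. (2, "<=", 0.5).
    Condition = Tuple[int, str, float]

    @dataclass
    class Rule:
        conditions: List[Condition]
        prediction: int

    def matches(rule: Rule, x: Sequence[float]) -> bool:
        """Return True if instance x satisfies every condition of the rule."""
        ops: dict = {"<=": lambda a, b: a <= b, ">": lambda a, b: a > b}
        return all(ops[op](x[i], t) for i, op, t in rule.conditions)

    def coverage(rules: List[Rule], X: List[Sequence[float]]) -> float:
        """Fraction of instances matched by at least one rule."""
        return sum(any(matches(r, x) for r in rules) for x in X) / len(X)

    def fidelity(rules: List[Rule], X, model_preds: List[int]) -> float:
        """Agreement between the ruleset and the underlying model's
        predictions, counting unmatched instances as disagreements."""
        hits = 0
        for x, y in zip(X, model_preds):
            fired = [r for r in rules if matches(r, x)]
            if fired and fired[0].prediction == y:
                hits += 1
        return hits / len(X)

    def avg_rule_length(rules: List[Rule]) -> float:
        """Mean number of antecedent conditions per rule."""
        return sum(len(r.conditions) for r in rules) / len(rules)

    if __name__ == "__main__":
        ruleset = [Rule([(0, "<=", 0.5)], prediction=0),
                   Rule([(0, ">", 0.5), (1, "<=", 2.0)], prediction=1)]
        X = [[0.2, 1.0], [0.8, 1.5], [0.9, 3.0]]
        model_preds = [0, 1, 1]
        print(f"size={len(ruleset)}, "
              f"avg_len={avg_rule_length(ruleset):.2f}, "
              f"coverage={coverage(ruleset, X):.2f}, "
              f"fidelity={fidelity(ruleset, X, model_preds):.2f}")

Metrics of this kind trade off against one another (e.g. a larger ruleset may raise fidelity while lowering interpretability), which is why a multi-metric comparison across extractors can surface distinct aspects of explainability.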
DOI
https://doi.org/10.21427/z4x3-3f86
Recommended Citation
Vilone, G., Rizzo, L., Longo, L. A comparative analysis of rule-based, model-agnostic methods for explainable artificial intelligence. Proceedings for the 28th AIAI Irish Conference on Artificial Intelligence and Cognitive Science, Dublin, Ireland, December 7-8, 2020, Vol. 2771, pp. 85-96. DOI: 10.21427/z4x3-3f86
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Publication Details
Proceedings for the 28th AIAI Irish Conference on Artificial Intelligence and Cognitive Science, Dublin, Ireland, December 7-8, 2020