Author ORCID Identifier

0000-0003-4170-1295

Document Type

Conference Paper

Disciplines

Statistics

Publication Details

Statistical and Machine Learning: Methods and Applications (SAML-25), held on June 5th and 6th, 2025, at TU Dublin, Ireland.

doi:10.21427/wdvv-g139

Abstract

This paper proposes a novel framework for automating ML model interpretation and explainability across different applications, with an emphasis on transparency, trust, and human-centric decision-making support. Although ML models, particularly advanced architectures such as deep neural networks, offer superior predictive power, they are often opaque, which impedes their adoption in sensitive or regulated domains. Current explainable AI (XAI) techniques, though promising, tend to be post-hoc and do not scale to real-time or large-scale deployments. This study addresses these concerns by presenting an automated, modular pipeline in which interpretation techniques are embedded in the ML model development process. The framework employs a hybrid approach that combines model-agnostic and model-specific explanation methods, including SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and Integrated Gradients. These methods are integrated into a continuous learning platform so that interpretable insights are produced alongside model predictions. The framework was evaluated on benchmark datasets from the healthcare, finance, and environmental monitoring domains to assess generalizability, interpretability fidelity, and scalability. Empirical results demonstrate that the automated pipeline consistently delivered accurate predictions (mean accuracy of 93%) while producing interpretable explanations that met domain experts' expectations. Automatically generated attribution maps and rule-based summaries enabled real-time insight generation, reducing human interpretation effort by more than 60%. Additionally, explainability fidelity metrics confirmed strong alignment between model behaviour and the generated explanations. Ethical AI principles guided the research, with transparency, human oversight, and fairness embedded at every stage. By automating interpretation, the framework allows stakeholders to inspect model decisions, enhancing trust, accountability, and compliance in high-stakes environments. Overall, this work advances the operationalization of explainable AI, offering a scalable, ethics-grounded solution for interpretation automation and enabling more informed and responsible use of machine learning in high-stakes deployments.
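The hybrid explanation step described in the abstract can be pictured with a minimal sketch. The snippet below is illustrative only and is not the authors' pipeline: it trains a stand-in random-forest classifier on scikit-learn's breast-cancer dataset and returns SHAP attributions and a LIME explanation alongside each batch of predictions. The dataset, model, and function names are assumptions introduced for illustration; the paper's actual benchmarks and implementation are not reproduced here.

```python
# Illustrative sketch (not the paper's implementation): pairing predictions
# with SHAP and LIME explanations inside a single prediction step.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in data and model; the paper's healthcare, finance, and
# environmental-monitoring benchmarks are not publicly specified here.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Explainers built once and reused for every prediction batch:
# a model-specific SHAP explainer and a model-agnostic LIME explainer.
shap_explainer = shap.TreeExplainer(model)
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=list(data.feature_names), mode="classification"
)

def predict_with_explanations(batch):
    """Return predictions together with per-feature attributions (hypothetical helper)."""
    preds = model.predict(batch)
    # Per-feature SHAP attributions; return type (list vs. array) depends on the shap version.
    shap_vals = shap_explainer.shap_values(batch)
    # Local LIME explanation for the first row of the batch.
    lime_exp = lime_explainer.explain_instance(
        batch[0], model.predict_proba, num_features=5
    )
    return preds, shap_vals, lime_exp

preds, shap_vals, lime_exp = predict_with_explanations(X_test[:5])
print("predictions:", preds)
print("SHAP attribution shape:", np.shape(shap_vals))
print("LIME top features:", lime_exp.as_list())
```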

DOI

https://doi.org/10.21427/wdvv-g139

Creative Commons License

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
