1.2 COMPUTER AND INFORMATION SCIENCE
An open research question in deep reinforcement learning is how to focus policy learning on key decisions within a sparse domain. This paper focuses on combining the advantages of input-output hidden Markov models and reinforcement learning. We propose a novel hierarchical modeling methodology that, at a high level, detects and interprets the root cause of a failure as well as the health degradation of the turbofan engine, while at a low level, provides the optimal replacement policy. This approach outperforms baseline deep reinforcement learning (DRL) models and achieves performance comparable to that of a state-of-the-art reinforcement learning system while being more interpretable.
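The hierarchy described above can be illustrated with a minimal sketch: a high-level hidden-Markov-style belief tracker infers the engine's latent health state from sensor observations, and a low-level policy issues a replacement decision once the degraded-state belief crosses a threshold. All state spaces, probabilities, and the threshold here are hypothetical placeholders, not the paper's actual model.

```python
# Illustrative sketch only: a two-state health HMM (high level) feeding a
# threshold replacement policy (low level). All numbers are hypothetical.

def update_belief(belief, obs, trans, emit):
    """One forward-algorithm step: predict with `trans`, correct with `emit`."""
    n = len(belief)
    # Predict: propagate the belief through the transition matrix.
    pred = [sum(belief[i] * trans[i][j] for i in range(n)) for j in range(n)]
    # Correct: weight each state by the likelihood of the observed symbol.
    post = [pred[j] * emit[j][obs] for j in range(n)]
    z = sum(post)
    return [p / z for p in post]

def replace_policy(belief, threshold=0.8):
    """Low-level decision: replace once the degraded-state belief is high."""
    return "replace" if belief[1] >= threshold else "continue"

# States: 0 = healthy, 1 = degraded. Observations: 0 = nominal, 1 = anomalous.
trans = [[0.95, 0.05],   # healthy tends to stay healthy
         [0.00, 1.00]]   # degradation is absorbing
emit  = [[0.9, 0.1],     # healthy mostly emits nominal readings
         [0.3, 0.7]]     # degraded mostly emits anomalous readings

belief = [1.0, 0.0]
for obs in [0, 1, 1, 1]:          # a hypothetical sensor trace
    belief = update_belief(belief, obs, trans, emit)
    print(round(belief[1], 3), replace_policy(belief))
```

Separating belief inference from the decision rule is what makes such a hierarchy interpretable: the tracked belief over health states explains *why* the low-level policy chose to replace the component.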
Abbas, Ammar N.; Chasparis, Georgios C.; and Kelleher, John, "Interpretable Input-Output Hidden Markov Model-Based Deep Reinforcement Learning for the Predictive Maintenance of Turbofan Engines" (2023). Conference papers. 411.
This publication is the result of research and activities carried out within the Collaborative Intelligence for Safety-Critical Systems (CISC) project, which has received funding from the European Union’s Horizon 2020 Research and Innovation Program under the Marie Skłodowska-Curie grant agreement no. 955901. The research reported in this paper has been performed within the frame of SCCH, part of the COMET Program managed by FFG. The work of Kelleher is also partly funded by the ADAPT Centre, which is funded under the Science Foundation Ireland (SFI) Research Centres Program (Grant No. 13/RC/2106_P2).
Creative Commons License
This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.