Curriculum: Computer Science and Innovation for Societal Challenges, XXXVII series
Grant sponsor: PON
Supervisor: Gian Antonio Susto
Co-Supervisor: Anna Spagnolli
Project: Unveiling the Inner Mechanism: Intrinsic Explainability in Anomaly Detection and Reinforcement Learning
The full text of the dissertation can be downloaded from: https://hdl.handle.net/11577/3553660
Abstract: Explainability is increasingly recognized as a cornerstone of sustainable and trustworthy Machine Learning (ML), especially in industrial settings where errors can be costly and potentially harmful. This dissertation focuses on intrinsically interpretable models for two high-impact domains: Unsupervised Anomaly Detection (AD) and Hierarchical Reinforcement Learning (HRL). Emphasizing transparent decision-making not only fosters user trust but also promotes the sustainable adoption of ML by aligning with human-centric values and reducing the risk of wasteful, misinformed actions.

First, building on the Isolation Forest (IF) paradigm, a unified framework, referred to as Isolation-based Models, is introduced. It subsumes the standard IF, extended variants (e.g., Extended IF, Hypersphere IF, Deep IF), and newly proposed branching methods under a consistent mathematical umbrella. Critically, a generalized feature-importance algorithm is developed to ensure model outputs are natively interpretable, revealing both global (model-wide) and local (instance-wise) insights. Additionally, the framework incorporates a clustering extension, seamlessly combining anomaly detection and data segmentation in a single pipeline that preserves transparency. By offering interpretable outputs, the approach mitigates confusion over flagged anomalies and reduces the risk of resource-intensive guesswork, thus supporting more sustainable monitoring and maintenance workflows.

Next, the dissertation addresses Hierarchical Reinforcement Learning with the Multilayer Abstract Nested Generation of Options (MANGO) framework. MANGO constructs tiered abstractions that reflect the structural logic of the environment. Each high-level “macro-action” decomposes into sub-policies at lower levels, creating a layered, transparent decision-making process. Experiments on a grid-based environment underscore MANGO’s potential to handle sparse rewards more efficiently than basic RL methods, albeit requiring careful hyperparameter tuning. By breaking down complex tasks into human-intuitive subgoals, MANGO eases error diagnosis and fosters more responsible and explainable agent behaviors, aligning with industrial sustainability objectives such as conserving effort and reducing computational overhead.

Overall, these contributions demonstrate that intrinsic explainability can be integrated into advanced ML solutions without sacrificing performance. The resulting models illuminate why anomalies are flagged or how multi-step decisions are reached, thereby enhancing trust, enabling swift corrective actions, and minimizing wasteful resource expenditure. Such transparency resonates with emerging Industry 5.0 priorities, where human-centricity and sustainability guide the deployment of AI-driven technologies in high-stakes industrial processes.
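To make the isolation-based interpretability idea concrete, the following is a minimal, self-contained Python sketch of an isolation forest with a naive path-based feature-importance heuristic. It is illustrative only and is not the generalized algorithm developed in the dissertation; all function names, the subsampling choices, and the 1/depth crediting rule are assumptions made for the example.

    # Illustrative sketch only: a minimal isolation forest with a simple
    # path-based feature-importance heuristic. Features chosen along short
    # isolation paths receive more credit for an instance's anomaly score.
    import numpy as np

    def c_factor(n):
        """Average path length of an unsuccessful BST search over n points."""
        if n <= 1:
            return 0.0
        return 2.0 * (np.log(n - 1) + 0.5772156649) - 2.0 * (n - 1) / n

    def build_tree(X, depth, max_depth, rng):
        """Grow an isolation tree with random axis-aligned splits."""
        n = X.shape[0]
        if depth >= max_depth or n <= 1:
            return {"size": n}
        feat = rng.integers(X.shape[1])
        lo, hi = X[:, feat].min(), X[:, feat].max()
        if lo == hi:
            return {"size": n}
        split = rng.uniform(lo, hi)
        mask = X[:, feat] < split
        return {"feat": feat, "split": split,
                "left": build_tree(X[mask], depth + 1, max_depth, rng),
                "right": build_tree(X[~mask], depth + 1, max_depth, rng)}

    def path(x, node, depth=0, used=None):
        """Return (path length, features used along the isolation path)."""
        if used is None:
            used = []
        if "feat" not in node:
            return depth + c_factor(node["size"]), used
        used.append(node["feat"])
        child = node["left"] if x[node["feat"]] < node["split"] else node["right"]
        return path(x, child, depth + 1, used)

    def score_and_importance(x, trees, n_samples, n_features):
        """Anomaly score plus a heuristic local feature-importance vector."""
        depths, counts = [], np.zeros(n_features)
        for t in trees:
            d, used = path(x, t)
            depths.append(d)
            for f in used:
                counts[f] += 1.0 / d   # shorter paths => higher credit (heuristic)
        score = 2.0 ** (-np.mean(depths) / c_factor(n_samples))
        imp = counts / counts.sum() if counts.sum() > 0 else counts
        return score, imp              # score near 1 => anomalous; imp sums to 1

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 4))
    X[0, 2] = 8.0                      # plant an anomaly in feature 2
    trees = [build_tree(X[rng.integers(0, 256, 128)], 0, 8, rng) for _ in range(50)]
    s, imp = score_and_importance(X[0], trees, 128, 4)
    print(f"anomaly score {s:.2f}, per-feature importance {np.round(imp, 2)}")

In this toy version, the per-feature credit assigned along each isolation path plays the role of a local (instance-wise) explanation, while averaging such vectors over a dataset would give a rough global view; the dissertation's framework generalizes this idea across the whole family of Isolation-based Models.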
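Similarly, the layered decision-making described for MANGO can be illustrated with a toy two-level hierarchy on a grid. The 2x2 block abstraction, the scripted low-level controller, and the greedy high-level rule below are assumptions chosen for readability; MANGO learns its sub-policies rather than hard-coding them.

    # Illustrative sketch only: a two-level hierarchy on a toy grid, loosely in
    # the spirit of layered macro-actions. Each high-level macro ("move to the
    # neighbouring block") is executed by a sequence of primitive moves.
    GRID = 8          # 8x8 grid of cells, abstracted into 4x4 blocks of 2x2 cells
    MOVES = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}

    def block(cell):
        """Map a low-level cell to its high-level block coordinate."""
        return (cell[0] // 2, cell[1] // 2)

    def low_level_step(cell, macro):
        """One primitive move in the direction named by the macro, clamped to the grid."""
        dx, dy = MOVES[macro]
        return (min(max(cell[0] + dx, 0), GRID - 1),
                min(max(cell[1] + dy, 0), GRID - 1))

    def run_macro(cell, macro, max_steps=4):
        """Execute primitive moves until the agent leaves its current block."""
        start_block, trace = block(cell), [cell]
        for _ in range(max_steps):
            cell = low_level_step(cell, macro)
            trace.append(cell)
            if block(cell) != start_block:
                break
        return cell, trace

    def high_level_plan(cell, goal_block):
        """Pick macro-actions by comparing block coordinates (a transparent rule)."""
        macros = []
        while block(cell) != goal_block:
            bx, by = block(cell)
            gx, gy = goal_block
            macro = "E" if bx < gx else "W" if bx > gx else "S" if by < gy else "N"
            cell, trace = run_macro(cell, macro)
            macros.append((macro, trace))
        return macros

    for macro, trace in high_level_plan((0, 0), (3, 2)):
        print(f"macro {macro}: low-level trace {trace}")

The point of the sketch is the transparency of the decomposition: each high-level macro can be inspected on its own, and the low-level trace it produces explains exactly how that subgoal was achieved, which mirrors the kind of layered, human-readable behaviour the MANGO experiments aim for.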