This thesis explores the integration of Explainable Artificial Intelligence (XAI) techniques into Process Mining to address challenges in understanding and adopting complex AI-driven process technologies. The research aims to bridge the gap between advanced Process Mining technologies and human understanding, focusing on making these systems more transparent, interpretable, and ethically aligned.

The thesis identifies several key problems in the field of explainable Process Mining. These include the opacity of complex AI systems in critical operational processes, the scarcity of practical XAI solutions in real-world organizational settings, and the narrow scope of existing XAI techniques in Process Mining, which rely primarily on feature attribution methods. Further problems are insufficient qualitative grounding and a disconnect from real deployment ecosystems, evaluation methodologies that emphasize quantitative metrics over qualitative assessment, and the limited implementation of explainable Process Mining solutions within interactive interfaces or tools. Finally, there is little research on the real-world organizational challenges that affect XAI adoption, or on the regulatory challenge of balancing transparency requirements with innovation.

The research makes several significant contributions to the field. It presents a comprehensive framework bridging conceptual XAI solutions with their pragmatic realization in organizational settings. A systematic literature review analyzes existing XAI techniques in Process Mining, while a qualitative study uncovers real-world strategies and barriers in explaining process insights. The thesis also includes an analysis of emerging regulations and ethical risks in Process Mining, a risk taxonomy cataloguing technical and sociotechnical vulnerabilities, and a conceptual architecture aligning innovative XAI techniques with credible deployment impact. In addition, it provides XAI evaluation guidelines for Process Mining and tools for the participatory elicitation of explainability requirements.

Key findings from the literature review show that most studies focus on predictive process monitoring, with feature attribution methods such as SHAP and LIME being the most common (a minimal sketch of this pattern is given below). Causal explanation techniques are underutilized, real-world organizational logs are rarely used for evaluation, and explanation interfaces are seldom evaluated qualitatively.

A qualitative study with practitioners found that, while explainability in Process Mining is considered necessary, delivery strategies remain unclear. Data and system integration barriers hinder interpretability, explainability strategies are highly context-dependent, and there is a gap between client maturity and the demand for modular explanations.

The regulatory analysis focused on European Union regulations, which mandate extensive documentation for algorithmic accountability. However, individual rights to explanation lack concrete standards, and there are tensions between the desire for transparency and barriers to full disclosure.
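To make the dominant feature-attribution pattern identified in the review concrete, the following is a minimal, self-contained sketch of the typical setup: a case-level outcome predictor trained on features derived from an event log and explained with SHAP. The data, feature names, and model choice here are illustrative assumptions, not artefacts of the thesis.

```python
# Illustrative sketch only: synthetic data and hypothetical case-level features
# stand in for attributes aggregated from a real event log.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)

# Hypothetical case-level features aggregated from an event log.
cases = pd.DataFrame({
    "num_events": rng.integers(3, 40, size=500),      # events observed so far
    "elapsed_hours": rng.uniform(1, 200, size=500),   # time since case start
    "num_handovers": rng.integers(0, 10, size=500),   # resource handovers
    "rework_loops": rng.integers(0, 5, size=500),     # repeated activities
})
# Synthetic binary label standing in for "case will breach its deadline".
y = (cases["elapsed_hours"] / 10 + cases["rework_loops"] * 3
     + rng.normal(0, 2, size=500) > 15).astype(int)

model = GradientBoostingClassifier().fit(cases, y)

# TreeExplainer yields per-case, per-feature contributions to the prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(cases)

# Attribution for one running case: which features push it toward a breach?
case_idx = 0
for feature, value in zip(cases.columns, shap_values[case_idx]):
    print(f"{feature:>15}: {value:+.3f}")
```

The same per-feature contributions are what most reviewed approaches surface to analysts, typically as bar or force plots rather than raw numbers.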
Ethical considerations resulted in a taxonomy of technical and sociotechnical risks in XAI for Process Mining, along with ethical principles and risk mitigation strategies for responsible XAI development.

The thesis proposes a comprehensive conceptual framework for explainability in Process Mining that addresses the identified challenges through a phased approach: requirements elicitation, technical tool development, organizational integration, and governance and policy structures. The framework defines key dimensions of explainability and outlines a stakeholder-driven process for implementing explainable Process Mining solutions.

To demonstrate the practical application of the framework, a hypothetical healthcare case study is presented. It illustrates how the explainability framework can be adapted to optimize the process for patients with heart valve diseases, balancing explainability requirements against potential risks and costs in a specific context.

The thesis concludes by outlining future research directions in explainable Process Mining: developing multi-perspective and context-aware explanations, advancing causal inference and counterfactual explanations in Process Mining (a toy counterfactual example is sketched below), creating natural language interfaces for Process Mining insights, exploring privacy-preserving explanation techniques, leveraging simulation for explanation validation, and establishing ethical AI governance frameworks specific to Process Mining.
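To illustrate the counterfactual direction listed above, the following is a minimal sketch, using synthetic data and hypothetical feature names rather than anything from the thesis, of the simplest form such an explanation can take: searching for the smallest change to a controllable case feature that flips a predicted process outcome.

```python
# Illustrative sketch only: a naive one-feature counterfactual search on a
# synthetic process-outcome model; names and data are assumed, not from the thesis.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical case-level features: [rework_loops, num_handovers];
# synthetic label standing in for "case breaches its deadline".
X = rng.integers(0, 8, size=(400, 2)).astype(float)
y = (0.9 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0, 0.5, 400) > 4).astype(int)
model = LogisticRegression().fit(X, y)

case = np.array([[6.0, 3.0]])  # a running case the model flags as a likely breach
print("Predicted outcome for the case:", model.predict(case)[0])

if model.predict(case)[0] == 1:
    # Lower rework_loops one step at a time until the prediction flips,
    # reporting the smallest change that worked.
    candidate = case.copy()
    while candidate[0, 0] > 0 and model.predict(candidate)[0] == 1:
        candidate[0, 0] -= 1
    if model.predict(candidate)[0] == 0:
        print(f"Counterfactual: reducing rework_loops from {case[0, 0]:.0f} "
              f"to {candidate[0, 0]:.0f} flips the predicted outcome.")
    else:
        print("No counterfactual found along this feature alone.")
```

Realistic counterfactual methods for Process Mining would additionally have to respect process constraints (e.g., only changes that are actually executable in the remaining trace), which is part of what makes this research direction open.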