PhD defence by Vlad Paul Cosma
Title
Declarative process models as explainable and verifiable Artificial Intelligence
Abstract
As Artificial Intelligence-based decision support systems become an ever-increasing part of our society, regulators have found it necessary to devise laws and guidelines on the ethical use of AI. In this light, explainable AI has regained attention among researchers and businesses. Explainable AI has been with us for decades in the form of rule-based systems, where high-level declarative rules are often implemented as more imperative or procedural, behaviour-based code. Process mining has emerged as a prominent AI field whose goal is to use process models to describe real-world business processes. The “white-box” qualities that make process models ideal for describing business processes also make them prime candidates for use in decision support systems. With the goal of enhancing model explainability and expressiveness, this thesis addresses selected aspects of the process mining pipeline, from data extraction to model verification, with a focus on enhanced trustworthiness, understandability, and verifiability of declarative process models. The objective is to advance the process mining field towards providing explainable Artificial Intelligence-based decision support systems. To improve trustworthiness, we propose a participatory-design-based method for event log extraction that improves collaboration among stakeholders by extracting event logs that accurately and transparently capture business rules. To improve understandability, we propose new algorithms for discovering simpler, more understandable declarative process models that take the user's cognitive load into account. To enhance verifiability, we propose two approaches tailored to the Dynamic Condition Response (DCR) graphs declarative process notation. The first approach is based on a bisimilar mapping from DCR graphs to Petri nets that preserves the declarative behaviour, allowing us to take advantage of the mature model checking tools available for Petri nets. The second approach allows for the verification of selected trace behaviour via test-driven modelling; specifically, we check that past modelling decisions are not violated when declarative models evolve over time.
Supervisors
Principal Supervisor: Professor Thomas Troels Hildebrandt
Co-Supervisor: Associate Professor Tijs Slaats
Assessment Committee
Associate Professor Dmitriy Traytel, DIKU, University of Copenhagen
Professor Barbara Weber, University of St. Gallen
Associate Professor Claudio Di Ciccio, Department of Information and Computing Sciences, Utrecht University
Leader of defence: Associate Professor Boris Düdder
For an electronic copy of the thesis, please visit the PhD Programme page.