DIKU Bits: Learning Explainable Models of Behaviour


On 26 April the Software, Data, People, & Society (SDPS) Section at the Department of Computer Science, University of Copenhagen, will give a DIKU Bits lecture.


Tijs Slaats, Associate Professor at the University of Copenhagen's Department of Computer Science in the Software, Data, People, & Society Section.


Learning Explainable Models of Behaviour


Process discovery algorithms learn models of behaviour, e.g. how people do their work. While most AI approaches focus predominantly on model accuracy for tasks such as classification and prediction, process models also place a strong emphasis on human understanding: they are meant to be descriptive and explainable to regular business users.
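As a toy illustration (not from the talk itself), many process discovery techniques start by mining a directly-follows relation from an event log, i.e. counting how often one activity immediately follows another across cases. A minimal sketch, with made-up activity names:

```python
from collections import Counter

# Illustrative event log: each trace is the ordered list of
# activities observed in one case (hypothetical example data).
event_log = [
    ["register", "review", "approve"],
    ["register", "review", "reject"],
    ["register", "review", "approve"],
]

def directly_follows(log):
    """Count how often activity a is directly followed by activity b."""
    counts = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return counts

dfg = directly_follows(event_log)
print(dfg[("register", "review")])  # 3
```

The resulting counts can be drawn as a graph of activities, which is one reason such models remain readable to business users: the structure mirrors the work as it was actually observed.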

In this talk I will discuss exciting recent advances we’ve made in this field and the myriad of interesting research questions that are still open to explore.


Which courses do you teach? I teach the M.Sc. level course on Software Engineering and Architecture and part of the B.Sc. course on Reactive and Event Based Systems.

Which technology/research/projects/startup are you excited to see the evolution of? On a personal level I’m very excited by the recent success of the DisCoveR process miner, developed by DIKU students and myself, which was recognized as the best discovery algorithm at the International Conference on Process Mining last year. At the moment we’re working hard on further developing the algorithm and creating a user-friendly toolset around it, in part supported by DIREC through the AI and Blockchains for Complex Business Processes project.

On a more general level I’m quite enthusiastic about the recent societal attention that explainable AI has been receiving. As machine learning and AI become increasingly ubiquitous in our daily lives, it is critical that ordinary citizens can understand how these algorithms and models make decisions for them. I believe process discovery can play an important role as a case study on how one can successfully treat human understanding as a key design factor during the entire lifecycle of machine learning algorithms and tools.

What is your favorite sketch from the DIKUrevy? I’m afraid that I’ve not yet been properly introduced to DIKUrevy.