Last Fridays Talks: Causality & Explainability
Talk 1
Detecting and Explaining Drift in AI Models
Abstract
As real-world data evolves, even the most advanced AI models can lose accuracy — a challenge known as concept drift. Detecting and understanding these shifts is crucial to keeping AI systems trustworthy and effective. This talk introduces DRIFTLENS, an unsupervised, real-time framework that spots and explains data drift without needing labeled data. By analyzing deep learning representations, DRIFTLENS detects changes quickly and reveals their impact on model behavior. Tested across diverse datasets and applications, DRIFTLENS is faster, more accurate, and more interpretable than existing approaches, offering a powerful new lens for monitoring and maintaining reliable AI systems in dynamic environments.
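The abstract's core idea — detecting drift without labels by comparing the distribution of a model's internal representations against a reference — can be illustrated with a minimal sketch. This is not the DRIFTLENS algorithm itself; it is a simplified stand-in that fits diagonal Gaussians to embedding batches and flags a window whose Fréchet distance from the reference exceeds a threshold (all function names and the threshold are illustrative assumptions):

```python
import numpy as np

def gaussian_stats(embeddings):
    """Fit a diagonal Gaussian to a batch of embeddings, shape (n_samples, dim)."""
    return embeddings.mean(axis=0), embeddings.var(axis=0)

def frechet_distance_diag(mu1, var1, mu2, var2):
    """Fréchet distance between two diagonal Gaussians (a common drift score)."""
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))

def detect_drift(reference_emb, window_emb, threshold):
    """Flag drift when a window's embedding distribution strays from the reference."""
    mu_r, var_r = gaussian_stats(reference_emb)
    mu_w, var_w = gaussian_stats(window_emb)
    dist = frechet_distance_diag(mu_r, var_r, mu_w, var_w)
    return dist, dist > threshold

# Usage: a window drawn from the reference distribution vs. a shifted one
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(1000, 16))   # stand-in for deep embeddings
same = rng.normal(0.0, 1.0, size=(200, 16))
shifted = rng.normal(1.5, 1.0, size=(200, 16))      # simulated concept drift

d_same, flag_same = detect_drift(reference, same, threshold=1.0)
d_shift, flag_shift = detect_drift(reference, shifted, threshold=1.0)
```

Because the score is computed on representations rather than labels, the check stays unsupervised, which is the property the abstract emphasizes.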
Speaker
Tania Cerquitelli is a Full Professor in the Department of Control and Computer Engineering at the Polytechnic University of Turin, Italy. In addition to her academic role, she supports institutional initiatives under the Deputy Rector for Society, Community, and Program Delivery, contributes to the university’s trade union relations, and serves on the Steering Committee of the PoliTO Center for Social Impact. Her research focuses on data science and machine learning, with particular emphasis on explainable AI, the democratization of data science, and the early detection of concept drift. She is a member of the editorial boards of several leading international journals (Elsevier, Springer, IEEE). Her research is supported by the European Union, the Italian Ministry of University and Research, the Piedmont Region, and various industry partners.
Talk 2
Learning to Make Bayesian Decisions Under Non-Stationarity with Frequentist Performance Guarantees
Abstract
As AI agents become more powerful, we expect to train them on open-world scenarios with sparse and online feedback. These modern desiderata make it crucial for the agent to balance exploration and exploitation optimally. We study the foundations of the exploration-exploitation dilemma under non-stationarity. We adopt non-stationary linear contextual bandits as a representative use case and investigate how they handle the exploration-exploitation dilemma through the lens of sequential Bayesian inference. Whereas existing algorithms typically rely on weighted regularized least squares, we study weighted sequential Bayesian parameter updates. This approach maintains a posterior distribution over the time-varying reward parameters. We characterize the behavior of this learning principle with a new concentration inequality, which we in turn use to design three new algorithms. We show that these algorithms perform competitively both analytically and in practice. Our findings offer guidance for building sequential decision-making algorithms that can adapt to their perpetually changing environments in a trustworthy manner.
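The mechanism the abstract describes — maintaining a posterior over time-varying reward parameters via weighted sequential Bayesian updates — can be sketched with a simple discounted Bayesian linear regression. This is an illustrative assumption, not one of the talk's three algorithms: a discount factor `gamma` downweights old evidence so the posterior can track drift, and Thompson sampling draws from that posterior to trade off exploration and exploitation (class and parameter names are hypothetical):

```python
import numpy as np

class DiscountedBayesianLinearBandit:
    """Gaussian posterior over theta in r = x . theta + noise, with discounting.

    Each update discounts the accumulated precision and evidence by gamma,
    so stale observations fade and the posterior tracks a drifting parameter.
    """

    def __init__(self, dim, gamma=0.95, noise_var=0.1, prior_precision=1.0):
        self.gamma = gamma
        self.noise_var = noise_var
        self.precision = prior_precision * np.eye(dim)  # posterior precision matrix
        self.b = np.zeros(dim)                          # precision-weighted mean

    def update(self, x, reward):
        # Discount past evidence, then absorb the new observation
        self.precision = self.gamma * self.precision + np.outer(x, x) / self.noise_var
        self.b = self.gamma * self.b + x * reward / self.noise_var

    def sample_theta(self, rng):
        # Thompson sampling: act greedily w.r.t. a posterior draw
        cov = np.linalg.inv(self.precision)
        return rng.multivariate_normal(cov @ self.b, cov)

# Usage: the posterior mean tracks a reward parameter that changes mid-stream
rng = np.random.default_rng(1)
bandit = DiscountedBayesianLinearBandit(dim=2)
theta_true = np.array([1.0, -1.0])
for t in range(400):
    if t == 200:
        theta_true = np.array([-1.0, 1.0])  # abrupt non-stationarity
    x = rng.normal(size=2)
    r = x @ theta_true + rng.normal(scale=0.3)
    bandit.update(x, r)
posterior_mean = np.linalg.inv(bandit.precision) @ bandit.b
```

With `gamma = 1` this reduces to standard sequential Bayesian linear regression; `gamma < 1` is what lets the posterior forget, which is the role weighting plays in the non-stationary setting the abstract studies.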
Speaker
Melih Kandemir is an associate professor at the University of Southern Denmark (SDU). He is the founding PI of the SDU Adaptive Intelligence Laboratory (ADIN Lab), which conducts fundamental research on probabilistic approaches to reinforcement learning, with a specific focus on continuous adaptive control applications. Melih is also the founding head of the SDU Centre for Machine Learning. Prior to his current role, Melih led a research group at the Bosch Center for Artificial Intelligence in Renningen, Germany. His current research is funded by the Novo Nordisk Foundation, the Carlsberg Foundation, and the Independent Research Fund Denmark.