Talk by Maria Maistro – University of Copenhagen

Exploiting User Signals and Stochastic Models to Improve Information Retrieval Systems and Evaluation


In this talk I will discuss several issues related to different aspects of effectiveness measures, together with novel solutions that we propose to address each of these challenges.

First, I will present AWARE, a probabilistic framework for dealing with the noise and inconsistencies introduced when relevance labels are gathered from multiple crowd assessors. By modeling relevance judgements and crowd assessors as sources of uncertainty, I propose to directly combine the performance measures computed on the ground truth generated by each crowd assessor, instead of adopting a classification technique to merge the labels at pool level.

Next, I will discuss evaluation measures able to account for user signals. I will propose a new user model based on Markov chains, which allows the user to scan the result list with many degrees of freedom. I will exploit this Markovian model to inject user models into precision, defining a new family of evaluation measures, and I will embed this model as the objective function of a Learning to Rank (LtR) algorithm to improve system performance.

Finally, I will present ongoing work on click models. I will briefly introduce click models and perplexity, a popular measure for evaluating click models, and I will describe some evaluation pitfalls due to the imbalance between the number of clicks and skips.
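To give a flavour of the Markovian idea, the sketch below is a minimal, illustrative version of weighting precision by a Markov-chain browsing model: the user walks over rank positions according to a transition matrix, and precision@k at each rank is weighted by the stationary probability of visiting that rank. The specific transition matrix and combination rule here are assumptions for illustration, not the exact measures presented in the talk.

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    """Stationary distribution of an ergodic Markov chain via power iteration."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt
    return pi

def markov_weighted_precision(rels, P):
    """Weight precision@k at each rank by the stationary probability
    that the browsing user visits that rank."""
    rels = np.asarray(rels, dtype=float)
    prec_at_k = np.cumsum(rels) / np.arange(1, len(rels) + 1)
    pi = stationary_distribution(P)
    return float(pi @ prec_at_k)

# Illustrative transition matrix for a 5-document list: the user moves
# one rank forward with prob. 0.6, one rank back with prob. 0.2, and
# stays with prob. 0.2 (reflecting at the list boundaries).
n = 5
P = np.zeros((n, n))
for i in range(n):
    P[i, min(i + 1, n - 1)] += 0.6
    P[i, max(i - 1, 0)] += 0.2
    P[i, i] += 0.2

score = markov_weighted_precision([1, 0, 1, 1, 0], P)
```

Allowing backward as well as forward moves is what gives the user "many degrees of freedom" compared with the strictly top-down scan assumed by classical measures.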
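The click/skip imbalance pitfall can also be seen in a few lines. Below is the standard definition of perplexity for binary click predictions (lower is better, 1 is perfect), applied to a synthetic log where only 2 of 100 impressions are clicked; the data and the "trivial" baseline are assumptions for illustration. Because skips dominate, a model that always predicts the base click rate already scores close to the ideal value of 1, which makes competing click models hard to distinguish.

```python
import numpy as np

def perplexity(clicks, probs):
    """Perplexity of predicted click probabilities against observed
    binary clicks: 2 ** (- mean per-impression log2-likelihood)."""
    clicks = np.asarray(clicks, dtype=float)
    probs = np.asarray(probs, dtype=float)
    ll = clicks * np.log2(probs) + (1 - clicks) * np.log2(1 - probs)
    return 2.0 ** (-ll.mean())

# Heavily imbalanced synthetic log: 2 clicks out of 100 impressions.
clicks = np.zeros(100)
clicks[:2] = 1.0

# A trivial model that always predicts the base click rate of 0.02
# already achieves a perplexity close to the perfect value of 1.
trivial = np.full(100, 0.02)
perp = perplexity(clicks, trivial)
```

Under this imbalance, perplexity differences between genuinely better click models and the trivial baseline are compressed into a narrow band near 1, which is one of the evaluation pitfalls discussed in the talk.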


Maria Maistro received her master's degree in Mathematics from the University of Padua, specializing in probability and stochastic financial methods, operational research, and optimal control theory. Her thesis proposed stochastic user models for IR evaluation, introducing a new family of evaluation measures based on Markov chains. She is a PhD candidate at the Doctorate School in Information Engineering of the University of Padua, with a project concerning possible applications of Markov chains to IR, the analysis of user behavior from large amounts of click logs, click models, Learning to Rank, IR evaluation, and the impact of crowdsourced labels on evaluation measures. More about her research can be found at

This talk is part of the QUARTZ (Quantum Information Access and Retrieval Theory) ITN (No 721321).