An Uncertainty-aware Query Selection Model for Evaluation of Information Retrieval Systems – University of Copenhagen


An HCC/CCC talk by Prof. Ingemar J. Cox on the evaluation of information retrieval systems.

The talk is open to all.


Effective evaluation of information retrieval systems requires building test collections that contain a set of queries and associated relevance judgments. Constructing test collections requires significant resources, since obtaining relevance judgments can be costly. In real-world settings the budget is constrained, imposing a limit on the number of judgments that can be acquired; algorithms that reduce the number of judgments are therefore needed. We focus on query selection as a mechanism for reducing the cost of building test collections and show how query selection can be mathematically formulated as an optimization problem. In particular, our formulation explicitly models the uncertainty in the retrieval effectiveness metrics introduced by the absence of relevance judgments, and subsequent experiments demonstrate the advantages of modeling this uncertainty. Since the optimization problem is computationally intractable, we devise an iterative query selection algorithm that provides an approximate solution. Our method selects queries one by one and assumes that no relevance judgments are available for the query under consideration. Once a query is selected, the associated relevance assessments are acquired and then used to aid the selection of subsequent queries. We demonstrate the effectiveness of the algorithm using the TREC 2004 Robust track and TREC-8 Ad Hoc track data, and also introduce a new test collection with 50 rankers trained with different features and approximately 1000 queries. Our experimental results on all test collections show that the queries chosen by our model produce a ranking of systems that is better correlated with the actual ranking than the rankings produced by queries selected with existing baseline methods. Finally, we investigate how the selected query subset generalizes to (1) new, unseen systems and (2) changes to the evaluation metric, and we show that our iterative algorithm can be modified to improve generalizability in both cases.
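The iterative structure described above (select one query at a time, without judgments for the candidate; acquire judgments only after selection) can be sketched in a few lines. This is a minimal illustration of the loop, not the paper's method: the selection criterion is passed in as a function, and the `spread` criterion below is a hypothetical stand-in for the uncertainty-aware objective, chosen only so the example is self-contained and deterministic.

```python
def select_queries(scores, budget, estimate):
    """Iterative query selection mirroring the algorithm's structure.

    scores[s][q] holds an effectiveness metric for system s on query q;
    in the real setting these values are unknown until the query is judged.
    `estimate(q, judged, scores)` scores a candidate q using only the
    judgments acquired so far (the `judged` list).
    """
    n_queries = len(scores[0])
    judged, remaining = [], list(range(n_queries))
    while remaining and len(judged) < budget:
        # Score each candidate WITHOUT its own judgments.
        best = max(remaining, key=lambda q: estimate(q, judged, scores))
        remaining.remove(best)
        judged.append(best)  # judgments for `best` are acquired here
    return judged

def spread(q, judged, scores):
    """Hypothetical toy criterion: prefer queries far (by index) from those
    already judged. A real deployment would use the model's uncertainty
    estimate instead."""
    return min((abs(q - j) for j in judged), default=q)

order = select_queries([[0.0] * 5, [1.0] * 5], budget=3, estimate=spread)
```

With five queries and a budget of three, the sketch first picks the highest-index query (nothing is judged yet), then repeatedly picks the query farthest from the judged set, yielding the order `[4, 0, 2]`.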


Ingemar J. Cox is Professor and Director of Research in the Department of Computer Science at University College London (UCL) and Adjunct Professor at the Technical University of Denmark (DTU), Cognitive Systems section.

He is Head of the Future Media Group at UCL and was a recipient of a Royal Society Wolfson Fellowship (2002-2007). He received his B.Sc. from UCL and his Ph.D. from Oxford University. He was a member of the Technical Staff at AT&T Bell Laboratories, Murray Hill, from 1984 until 1989, where his research focused on mobile robots. In 1989 he joined NEC Research Institute in Princeton, NJ, as a senior research scientist in the computer science division. At NEC his research shifted to problems in computer vision, and he was responsible for creating the computer vision group at NECI. He has worked on stereo and motion correspondence and on the multimedia problems of image database retrieval and watermarking.

In 1999, he was awarded the IEEE Signal Processing Society Best Paper Award (Image and Multidimensional Signal Processing Area) for a paper he co-authored on watermarking. From 1997 to 1999, he served as Chief Technical Officer of Signafy, Inc., a subsidiary of NEC responsible for the commercialization of watermarking. Between 1996 and 1999, he led the design of NEC's watermarking proposal for DVD video disks and later collaborated with IBM in developing the technology behind the joint "Galaxy" proposal supported by Hitachi, IBM, NEC, Pioneer and Sony. In 1999, he returned to NEC Research Institute as a Research Fellow. He is a Fellow of the IEEE, the IET (formerly IEE), and the British Computer Society, and a member of the UK Computing Research Committee. He was founding co-editor-in-chief of the IEE Proc. on Information Security and is an associate editor of the IEEE Trans. on Information Forensics and Security. He is co-author of the book "Digital Watermarking" and its second edition, "Digital Watermarking and Steganography", and co-editor of two books, "Autonomous Robot Vehicles" and "Partitioning Data Sets: With Applications to Psychology, Computer Vision and Target Tracking".