Rethinking how to extend average precision to graded relevance

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

We present two new measures of retrieval effectiveness, inspired by Graded Average Precision (GAP), which extends Average Precision (AP) to graded relevance judgements. Starting from a model in which a user's relevance threshold is chosen at random, we define Extended Graded Average Precision (xGAP) and Expected Graded Average Precision (eGAP), which are more accurate than GAP when there are only a few highly relevant documents that users are very likely to consider relevant. The proposed measures are then evaluated on the TREC 10, TREC 14, and TREC 21 collections, showing that they capture a different angle from GAP and that they are robust to incomplete judgements and shallow pools.
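To make the underlying idea concrete, the sketch below shows standard Average Precision and an expected AP computed over a randomly chosen user relevance threshold, which is the kind of user model GAP builds on. This is only an illustration under assumed grade labels and threshold probabilities; it is not the paper's exact definition of GAP, xGAP, or eGAP.

```python
# Illustrative sketch only: binary AP plus an expected AP over a random user
# threshold that binarizes graded judgements. The grades and threshold
# probabilities below are hypothetical, not taken from the paper.

def average_precision(relevant_flags):
    """AP for a ranked list of 0/1 relevance flags."""
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(relevant_flags, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / hits if hits else 0.0

def expected_ap_over_thresholds(graded_run, threshold_probs):
    """Expected AP when a randomly drawn user treats grades >= t as relevant.

    graded_run      -- graded judgements of the ranked documents, e.g. [2, 0, 1]
    threshold_probs -- {threshold: probability that a user adopts it}
    """
    return sum(
        p * average_precision([1 if g >= t else 0 for g in graded_run])
        for t, p in threshold_probs.items()
    )

# Example: grades 0-2, users split 70%/30% between thresholds 1 and 2.
run = [2, 0, 1, 2, 0]
print(average_precision([1 if g > 0 else 0 for g in run]))
print(expected_ap_over_thresholds(run, {1: 0.7, 2: 0.3}))
```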

Original language: English
Title of host publication: Information Access Evaluation: Multilinguality, Multimodality, and Interaction - 5th International Conference of the CLEF Initiative, CLEF 2014, Proceedings
Number of pages: 12
Publisher: Springer Verlag
Publication date: 1 Jan 2014
Pages: 19-30
ISBN (Print): 9783319113814
DOIs
Publication status: Published - 1 Jan 2014
Externally published: Yes
Event: 5th International Conference of the CLEF Initiative, CLEF 2014 - Sheffield, United Kingdom
Duration: 15 Sep 2014 - 18 Sep 2014

Conference

Conference: 5th International Conference of the CLEF Initiative, CLEF 2014
Country: United Kingdom
City: Sheffield
Period: 15/09/2014 - 18/09/2014
Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 8685 LNCS
ISSN: 0302-9743
