Principled Multi-Aspect Evaluation Measures of Rankings
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Final published version, 1.4 MB, PDF document
Information Retrieval evaluation has traditionally focused on defining principled ways of assessing the relevance of a ranked list of documents with respect to a query. Several methods extend this type of evaluation beyond relevance, making it possible to evaluate different aspects of a document ranking (e.g., relevance, usefulness, or credibility) using a single measure (multi-aspect evaluation). However, these methods either are (i) tailor-made for specific aspects and do not extend to other types or numbers of aspects, or (ii) have theoretical anomalies, e.g., assigning the maximum score to a ranking where all documents are labelled with the lowest grade with respect to all aspects (e.g., not relevant, not credible, etc.). We present a theoretically principled multi-aspect evaluation method that can be used for any number, and any type, of aspects. A thorough empirical evaluation using up to 5 aspects and a total of 425 runs officially submitted to 10 TREC tracks shows that our method is more discriminative than, and overcomes the theoretical limitations of, the state of the art.
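To make the anomaly in (ii) concrete, here is a minimal toy sketch, not the paper's proposed measure: per-document grades across aspects are averaged and combined with a DCG-style positional discount. The function name, the averaging step, and the grade scale (each aspect graded in [0, 1]) are all illustrative assumptions. Under this naive scheme an all-lowest-grade ranking scores 0, whereas measures that normalize against the ranking itself can map that same ranking to the maximum score.

```python
import math

# Hypothetical toy measure (NOT the paper's method): average each document's
# grades across aspects, then apply a DCG-style log discount by rank position.
# Assumes each aspect's grade lies in [0, 1] (e.g., relevance, credibility).

def multi_aspect_dcg(ranking):
    """ranking: list of per-document grade tuples, one grade per aspect."""
    score = 0.0
    for rank, grades in enumerate(ranking, start=1):
        gain = sum(grades) / len(grades)       # naive aspect aggregation
        score += gain / math.log2(rank + 1)    # positional discount
    return score

# A ranking where every document gets the lowest grade on every aspect
# (not relevant, not credible) receives a score of 0 here; the anomaly the
# abstract describes arises in measures whose normalization maps such an
# all-zero ranking to the maximum attainable score.
```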
|Title of host publication||CIKM 2021 - Proceedings of the 30th ACM International Conference on Information and Knowledge Management|
|Publisher||Association for Computing Machinery, Inc|
|Publication status||Published - 2021|
|Event||30th ACM International Conference on Information and Knowledge Management, CIKM 2021 - Virtual, Online, Australia|
Duration: 1 Nov 2021 → 5 Nov 2021
|Conference||30th ACM International Conference on Information and Knowledge Management, CIKM 2021|
|Period||01/11/2021 → 05/11/2021|
|Sponsor||ACM SIGIR, ACM SIGWEB|
© 2021 Owner/Author.
- evaluation, multiple aspects, partial order, ranking