How to Measure the Reproducibility of System-oriented IR Experiments
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Accepted author manuscript, 592 KB, PDF document
Replicability and reproducibility of experimental results are primary concerns in all areas of science, and IR is no exception. Besides the problem of moving the field towards more reproducible experimental practices and protocols, we also face a severe methodological issue: we have no means to assess when reproduced is reproduced. Moreover, we lack any reproducibility-oriented dataset which would allow us to develop such methods. To address these issues, we compare several measures that objectively quantify to what extent we have replicated or reproduced a system-oriented IR experiment. These measures operate at different levels of granularity, from the fine-grained comparison of ranked lists to the more general comparison of the obtained effects and significant differences. Moreover, we develop a reproducibility-oriented dataset that allows us to validate our measures and that can also be used to develop future measures.
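To illustrate the fine-grained end of the spectrum, one plausible ranked-list comparison is Kendall's tau between the original and the reproduced run for a topic; this is only an illustrative sketch, not the paper's exact measure set, and the document IDs below are hypothetical:

```python
from itertools import combinations

def kendalls_tau(run_a, run_b):
    """Kendall's tau between two rankings of the same document set.

    run_a, run_b: lists of document IDs, most relevant first, no ties.
    Returns a value in [-1, 1]; 1 means identical orderings,
    -1 means fully reversed orderings.
    """
    assert set(run_a) == set(run_b), "rankings must cover the same documents"
    pos_b = {doc: i for i, doc in enumerate(run_b)}
    concordant = discordant = 0
    # Count document pairs on whose relative order the two runs (dis)agree.
    for x, y in combinations(run_a, 2):
        if pos_b[x] < pos_b[y]:
            concordant += 1
        else:
            discordant += 1
    n = len(run_a)
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical original and reproduced rankings for one topic:
original = ["d1", "d2", "d3", "d4"]
reproduced = ["d1", "d3", "d2", "d4"]
print(kendalls_tau(original, reproduced))  # one swapped pair out of six -> ~0.667
```

A value near 1 would indicate that the reproduced system orders documents almost exactly as the original did, while coarser measures in the paper abstract away from document order and compare effects or significance outcomes instead.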
|Title of host publication
|SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval
|Number of pages
|Publisher
|Association for Computing Machinery
|Publication status
|Published - 2020
|43rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2020 - Virtual, Online, China
Duration: 25 Jul 2020 → 30 Jul 2020
|ACM Special Interest Group on Information Retrieval (SIGIR)
© 2020 ACM.
|Keywords
|measure, replicability, reproducibility