Multileaving for online evaluation of rankers

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

  • Brian Brost

In online learning to rank we are faced with a tradeoff between exploring new, potentially superior rankers, and exploiting our preexisting knowledge of which rankers have performed well in the past. Multileaving methods offer an attractive approach to this problem, since they can efficiently use online feedback to evaluate an arbitrary number of rankers simultaneously. In this talk we discuss some of the main challenges in multileaving and highlight promising areas for future research.
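For illustration, below is a minimal sketch of one well-known multileaving variant, team-draft multileaving, which combines the result lists of several rankers into a single interleaved list and credits clicks back to the ranker that contributed each clicked document. The function names, the list-of-lists input format, and the simple click-credit scheme are illustrative assumptions, not the specific methods discussed in the talk.

```python
import random

def team_draft_multileave(rankings, length):
    """Sketch of team-draft multileaving: in each round the rankers take
    turns in a random order, each contributing its highest-ranked document
    that is not yet in the combined list.

    rankings: list of ranked document-id lists, one per ranker.
    Returns the multileaved list and, per position, the index of the
    ranker ("team") that contributed that document.
    """
    multileaved, teams, used = [], [], set()
    while len(multileaved) < length:
        added_this_round = False
        order = list(range(len(rankings)))
        random.shuffle(order)          # random turn order each round
        for r in order:
            if len(multileaved) >= length:
                break
            # highest-ranked document of ranker r not already used
            doc = next((d for d in rankings[r] if d not in used), None)
            if doc is not None:
                used.add(doc)
                multileaved.append(doc)
                teams.append(r)
                added_this_round = True
        if not added_this_round:       # all rankers exhausted
            break
    return multileaved, teams


def credit_clicks(teams, clicked_positions, n_rankers):
    """Give each ranker one credit per clicked document it contributed."""
    credit = [0] * n_rankers
    for pos in clicked_positions:
        credit[teams[pos]] += 1
    return credit


# Example: three rankers compared from clicks on one multileaved list.
rankers = [["d1", "d2", "d3"], ["d3", "d1", "d4"], ["d5", "d2", "d1"]]
result, teams = team_draft_multileave(rankers, length=5)
print(result, credit_clicks(teams, clicked_positions=[0, 2], n_rankers=3))
```

Aggregated over many queries, the per-ranker credits yield a preference ordering over all rankers from a single stream of click feedback, which is what makes multileaving attractive for evaluating many rankers at once.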

Original language: English
Title of host publication: Proceedings of the 1st International Workshop on LEARning Next gEneration Rankers co-located with the 3rd ACM International Conference on the Theory of Information Retrieval (ICTIR 2017)
Editors: Nicola Ferro, Claudio Lucchese, Maria Maistro, Raffaele Perego
Number of pages: 2
Publisher: CEUR-WS.org
Publication date: 2017
Publication status: Published - 2017
Event: 1st International Workshop on LEARning Next gEneration Rankers - Amsterdam, Netherlands
Duration: 1 Oct 2017 - 1 Oct 2017
Conference number: 1

Workshop

Workshop: 1st International Workshop on LEARning Next gEneration Rankers
Number: 1
Country: Netherlands
City: Amsterdam
Period: 01/10/2017 - 01/10/2017
Series: CEUR Workshop Proceedings
Volume: 2007
ISSN: 1613-0073
