Deep learning relevance: creating relevant information (as opposed to retrieving it)

Research output: Contribution to conference › Paper › Research › peer-review

Standard

Deep learning relevance : creating relevant information (as opposed to retrieving it). / Lioma, Christina; Larsen, Birger; Petersen, Casper; Simonsen, Jakob Grue.

2016. Paper presented at SIGIR 2016 Workshop on Neural Information Retrieval (Neu-IR), Pisa, Italy.


Harvard

Lioma, C, Larsen, B, Petersen, C & Simonsen, JG 2016, 'Deep learning relevance: creating relevant information (as opposed to retrieving it)', Paper presented at SIGIR 2016 Workshop on Neural Information Retrieval (Neu-IR), Pisa, Italy, 21/07/2016 - 21/07/2016. <https://arxiv.org/abs/1606.07660>

APA

Lioma, C., Larsen, B., Petersen, C., & Simonsen, J. G. (2016). Deep learning relevance: creating relevant information (as opposed to retrieving it). Paper presented at SIGIR 2016 Workshop on Neural Information Retrieval (Neu-IR), Pisa, Italy. https://arxiv.org/abs/1606.07660

Vancouver

Lioma C, Larsen B, Petersen C, Simonsen JG. Deep learning relevance: creating relevant information (as opposed to retrieving it). 2016. Paper presented at SIGIR 2016 Workshop on Neural Information Retrieval (Neu-IR), Pisa, Italy.

Author

Lioma, Christina ; Larsen, Birger ; Petersen, Casper ; Simonsen, Jakob Grue. / Deep learning relevance : creating relevant information (as opposed to retrieving it). Paper presented at SIGIR 2016 Workshop on Neural Information Retrieval (Neu-IR), Pisa, Italy. 6 p.

Bibtex

@conference{9514a731c0b8449c8c4e347b43cb1643,
title = "Deep learning relevance: creating relevant information (as opposed to retrieving it)",
abstract = "What if Information Retrieval (IR) systems did not just retrieve relevant information that is stored in their indices, but could also {"}understand{"} it and synthesise it into a single document? We present a preliminary study that makes a first step towards answering this question. Given a query, we train a Recurrent Neural Network (RNN) on existing relevant information to that query. We then use the RNN to {"}deep learn{"} a single, synthetic, and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the {"}deep learned{"} document is, compared to existing relevant documents. Users are shown a query and four word clouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked on average most relevant of all.",
author = "Christina Lioma and Birger Larsen and Casper Petersen and Simonsen, {Jakob Grue}",
year = "2016",
language = "English",
note = "SIGIR 2016 Workshop on Neural Information Retrieval (Neu-IR) ; Conference date: 21-07-2016 Through 21-07-2016",

}

RIS

TY - CONF

T1 - Deep learning relevance: creating relevant information (as opposed to retrieving it)

T2 - SIGIR 2016 Workshop on Neural Information Retrieval (Neu-IR)

AU - Lioma, Christina

AU - Larsen, Birger

AU - Petersen, Casper

AU - Simonsen, Jakob Grue

N1 - Conference code: 1

PY - 2016

Y1 - 2016

N2 - What if Information Retrieval (IR) systems did not just retrieve relevant information that is stored in their indices, but could also "understand" it and synthesise it into a single document? We present a preliminary study that makes a first step towards answering this question. Given a query, we train a Recurrent Neural Network (RNN) on existing relevant information to that query. We then use the RNN to "deep learn" a single, synthetic, and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is, compared to existing relevant documents. Users are shown a query and four word clouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked on average most relevant of all.

AB - What if Information Retrieval (IR) systems did not just retrieve relevant information that is stored in their indices, but could also "understand" it and synthesise it into a single document? We present a preliminary study that makes a first step towards answering this question. Given a query, we train a Recurrent Neural Network (RNN) on existing relevant information to that query. We then use the RNN to "deep learn" a single, synthetic, and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is, compared to existing relevant documents. Users are shown a query and four word clouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked on average most relevant of all.

M3 - Paper

Y2 - 21 July 2016 through 21 July 2016

ER -
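The abstract above outlines the paper's approach: train an RNN on text already judged relevant to a query, then sample from the trained model to generate a single synthetic document. The following is an illustrative sketch of that idea only, not the authors' implementation: a minimal character-level vanilla RNN in NumPy, where the corpus, layer sizes, and hyperparameters are all placeholders.

```python
import numpy as np

# Hypothetical stand-in for "existing relevant information to a query".
corpus = ("information retrieval systems rank documents by relevance "
          "to a query using term statistics and learned models ") * 4

chars = sorted(set(corpus))
ix = {c: i for i, c in enumerate(chars)}
V, H, T, lr = len(chars), 32, 12, 0.05  # vocab, hidden size, seq len, step size

rng = np.random.default_rng(0)
Wxh = rng.normal(0, 0.01, (H, V))
Whh = rng.normal(0, 0.01, (H, H))
Why = rng.normal(0, 0.01, (V, H))
bh, by = np.zeros(H), np.zeros(V)

def step(inputs, targets, hprev):
    """One forward/backward pass over a sequence; returns loss, grads, last h."""
    xs, hs, ps = {}, {-1: hprev}, {}
    loss = 0.0
    for t, (i, j) in enumerate(zip(inputs, targets)):
        xs[t] = np.zeros(V); xs[t][i] = 1                      # one-hot input
        hs[t] = np.tanh(Wxh @ xs[t] + Whh @ hs[t - 1] + bh)    # recurrent state
        y = Why @ hs[t] + by
        ps[t] = np.exp(y - y.max()); ps[t] /= ps[t].sum()      # softmax
        loss -= np.log(ps[t][j])                               # cross-entropy
    dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
    dbh, dby, dhnext = np.zeros_like(bh), np.zeros_like(by), np.zeros(H)
    for t in reversed(range(len(inputs))):                     # backprop through time
        dy = ps[t].copy(); dy[targets[t]] -= 1
        dWhy += np.outer(dy, hs[t]); dby += dy
        dh = Why.T @ dy + dhnext
        draw = (1 - hs[t] ** 2) * dh                           # tanh derivative
        dWxh += np.outer(draw, xs[t]); dWhh += np.outer(draw, hs[t - 1])
        dbh += draw; dhnext = Whh.T @ draw
    return loss, (dWxh, dWhh, dWhy, dbh, dby), hs[len(inputs) - 1]

h = np.zeros(H)
first_loss = last_loss = None
for epoch in range(200):
    p = (epoch * T) % (len(corpus) - T - 1)
    inp = [ix[c] for c in corpus[p:p + T]]
    tgt = [ix[c] for c in corpus[p + 1:p + T + 1]]
    loss, grads, h = step(inp, tgt, h)
    for W, dW in zip((Wxh, Whh, Why, bh, by), grads):
        W -= lr * np.clip(dW, -5, 5)                           # clipped SGD update
    first_loss = loss if first_loss is None else first_loss
    last_loss = loss

def sample(seed_char, n):
    """Generate n characters from the trained model (the 'synthetic document')."""
    x = np.zeros(V); x[ix[seed_char]] = 1
    hs = np.zeros(H); out = seed_char
    for _ in range(n):
        hs = np.tanh(Wxh @ x + Whh @ hs + bh)
        y = Why @ hs + by
        p = np.exp(y - y.max()); p /= p.sum()
        i = rng.choice(V, p=p)
        out += chars[i]
        x = np.zeros(V); x[i] = 1
    return out

synthetic = sample("i", 60)
```

The paper then evaluated such output indirectly, by showing crowdworkers word clouds of the synthetic document alongside word clouds of real relevant documents; this sketch stops at generation.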
