Generating Images Instead of Retrieving Them: Relevance Feedback on Generative Adversarial Networks

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Standard

Generating Images Instead of Retrieving Them: Relevance Feedback on Generative Adversarial Networks. / Ukkonen, Antti; Joona, Pyry; Ruotsalo, Tuukka.

SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. Association for Computing Machinery, 2020. pp. 1329-1338.


Harvard

Ukkonen, A, Joona, P & Ruotsalo, T 2020, Generating Images Instead of Retrieving Them: Relevance Feedback on Generative Adversarial Networks. in SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. Association for Computing Machinery, pp. 1329-1338. https://doi.org/10.1145/3397271.3401129

APA

Ukkonen, A., Joona, P., & Ruotsalo, T. (2020). Generating Images Instead of Retrieving Them: Relevance Feedback on Generative Adversarial Networks. In SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 1329-1338). Association for Computing Machinery. https://doi.org/10.1145/3397271.3401129

Vancouver

Ukkonen A, Joona P, Ruotsalo T. Generating Images Instead of Retrieving Them: Relevance Feedback on Generative Adversarial Networks. In SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. Association for Computing Machinery. 2020. p. 1329-1338 https://doi.org/10.1145/3397271.3401129

Author

Ukkonen, Antti; Joona, Pyry; Ruotsalo, Tuukka. / Generating Images Instead of Retrieving Them: Relevance Feedback on Generative Adversarial Networks. SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. Association for Computing Machinery, 2020. pp. 1329-1338

Bibtex

@inproceedings{ee9ac4935a1742119abe4aa8aa80dc0d,
title = "Generating Images Instead of Retrieving Them: Relevance Feedback on Generative Adversarial Networks",
abstract = "Finding images matching a user's intention has been largely based on matching a representation of the user's information needs with an existing collection of images. For example, using an example image or a written query to express the information need and retrieving images that share similarities with the query or example image. However, such an approach is limited to retrieving only images that already exist in the underlying collection. Here, we present a methodology for generating images matching the user intention instead of retrieving them. The methodology utilizes a relevance feedback loop between a user and generative adversarial neural networks (GANs). GANs can generate novel photorealistic images which are initially not present in the underlying collection, but generated in response to user feedback. We report experiments (N=29) where participants generate images using four different domains and various search goals with textual and image targets. The results show that the generated images match the tasks and outperform images selected as baselines from a fixed image collection. Our results demonstrate that generating new information can be more useful for users than retrieving it from a collection of existing information.",
author = "Antti Ukkonen and Pyry Joona and Tuukka Ruotsalo",
year = "2020",
doi = "10.1145/3397271.3401129",
language = "English",
pages = "1329--1338",
booktitle = "SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval",
publisher = "Association for Computing Machinery",

}

RIS

TY - GEN

T1 - Generating Images Instead of Retrieving Them

T2 - Relevance Feedback on Generative Adversarial Networks

AU - Ukkonen, Antti

AU - Joona, Pyry

AU - Ruotsalo, Tuukka

PY - 2020

Y1 - 2020

N2 - Finding images matching a user's intention has been largely based on matching a representation of the user's information needs with an existing collection of images. For example, using an example image or a written query to express the information need and retrieving images that share similarities with the query or example image. However, such an approach is limited to retrieving only images that already exist in the underlying collection. Here, we present a methodology for generating images matching the user intention instead of retrieving them. The methodology utilizes a relevance feedback loop between a user and generative adversarial neural networks (GANs). GANs can generate novel photorealistic images which are initially not present in the underlying collection, but generated in response to user feedback. We report experiments (N=29) where participants generate images using four different domains and various search goals with textual and image targets. The results show that the generated images match the tasks and outperform images selected as baselines from a fixed image collection. Our results demonstrate that generating new information can be more useful for users than retrieving it from a collection of existing information.

AB - Finding images matching a user's intention has been largely based on matching a representation of the user's information needs with an existing collection of images. For example, using an example image or a written query to express the information need and retrieving images that share similarities with the query or example image. However, such an approach is limited to retrieving only images that already exist in the underlying collection. Here, we present a methodology for generating images matching the user intention instead of retrieving them. The methodology utilizes a relevance feedback loop between a user and generative adversarial neural networks (GANs). GANs can generate novel photorealistic images which are initially not present in the underlying collection, but generated in response to user feedback. We report experiments (N=29) where participants generate images using four different domains and various search goals with textual and image targets. The results show that the generated images match the tasks and outperform images selected as baselines from a fixed image collection. Our results demonstrate that generating new information can be more useful for users than retrieving it from a collection of existing information.

U2 - 10.1145/3397271.3401129

DO - 10.1145/3397271.3401129

M3 - Article in proceedings

SP - 1329

EP - 1338

BT - SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval

PB - Association for Computing Machinery

ER -
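
Illustrative sketch

As a rough illustration of the relevance feedback loop described in the abstract above, the Python sketch below shows one plausible way such a loop could be wired up. It is not the authors' implementation: the generator stub, the latent dimensionality, and the Rocchio-style centroid update over latent vectors are assumptions made purely for illustration; a real system would plug in a pretrained GAN generator and collect feedback from an actual user.

import numpy as np

LATENT_DIM = 512  # assumed latent dimensionality of the GAN

def generate_images(latents):
    """Stand-in for a pretrained GAN generator mapping latents to images."""
    # In practice this would call a trained generator, e.g. generator(latents).
    return latents  # placeholder so the sketch runs end to end

def feedback_round(current_latents, relevant_mask, noise_scale=0.5, rng=None):
    """Propose new latents biased towards the images the user marked relevant."""
    if rng is None:
        rng = np.random.default_rng()
    relevant = current_latents[relevant_mask]
    if len(relevant) == 0:
        # No positive feedback: keep exploring with fresh random latents.
        return rng.standard_normal(current_latents.shape)
    center = relevant.mean(axis=0)                       # centroid of relevant latents
    noise = rng.standard_normal(current_latents.shape) * noise_scale
    return center + noise                                # sample around the centroid

# Example loop: start from random latents and refine over a few feedback rounds.
rng = np.random.default_rng(0)
latents = rng.standard_normal((8, LATENT_DIM))
for _ in range(3):
    images = generate_images(latents)
    # A real system would display `images` and collect the user's relevance
    # judgements; here we fake feedback by marking the first two as relevant.
    relevant_mask = np.zeros(len(images), dtype=bool)
    relevant_mask[:2] = True
    latents = feedback_round(latents, relevant_mask, rng=rng)

Each round narrows the sampled region of latent space around the user's positive feedback, which is the basic mechanism by which novel images matching the user's intention can be generated rather than retrieved from a fixed collection.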
