Neural Naturalist: Generating Fine-Grained Image Comparisons

Publication: Contribution to journal › Conference article › Research › peer-reviewed

We introduce the new Birds-to-Words dataset of 41k sentences describing fine-grained differences between photographs of birds. The language collected is highly detailed, while remaining understandable to the everyday observer (e.g., “heart-shaped face,” “squat body”). Paragraph-length descriptions naturally adapt, with the appropriate level of detail, to varying levels of taxonomic and visual distance, drawn from a novel stratified sampling approach. We propose a new model called Neural Naturalist that uses a joint image encoding and comparative module to generate comparative language, and evaluate the results with humans who must use the descriptions to distinguish real images. Our results indicate promising potential for neural models to explain differences in visual embedding space using natural language, as well as a concrete path for machine learning to aid citizen scientists in their effort to preserve biodiversity.
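To make the "joint image encoding and comparative module" idea concrete, the following is a minimal PyTorch sketch of that general arrangement. It is not the authors' architecture: the shared CNN encoder, the concatenation/difference fusion, and the GRU decoder are all illustrative assumptions standing in for the paper's actual design.

```python
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Tiny CNN encoder standing in for a real image backbone (assumption)."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):                      # x: (B, 3, H, W)
        return self.net(x).flatten(1)          # (B, dim)

class ComparativeCaptioner(nn.Module):
    """Encodes an image pair jointly, then decodes a comparative description."""
    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.encoder = ImageEncoder(dim)       # weights shared across both images
        self.compare = nn.Sequential(          # fuses the two embeddings (assumed scheme)
            nn.Linear(3 * dim, dim), nn.ReLU()
        )
        self.embed = nn.Embedding(vocab_size, dim)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, img_a, img_b, tokens):   # tokens: (B, T) shifted-right targets
        za, zb = self.encoder(img_a), self.encoder(img_b)
        joint = self.compare(torch.cat([za, zb, za - zb], dim=-1))  # (B, dim)
        h0 = joint.unsqueeze(0)                # decoder state initialized from the comparison
        hidden, _ = self.decoder(self.embed(tokens), h0)
        return self.out(hidden)                # (B, T, vocab_size) token logits

# Toy usage with random tensors, just to show the shapes flowing through.
model = ComparativeCaptioner(vocab_size=1000)
img_a = torch.randn(2, 3, 64, 64)
img_b = torch.randn(2, 3, 64, 64)
tokens = torch.randint(0, 1000, (2, 12))
print(model(img_a, img_b, tokens).shape)       # torch.Size([2, 12, 1000])
```

At training time such a model would be fit with cross-entropy on the human-written comparison sentences; the point of the sketch is only the data flow of two images through one encoder into a fused representation that conditions the language decoder.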

Original language: English
Journal: EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference
Pages (from-to): 708-717
Number of pages: 10
Status: Published - 2020
Externally published: Yes
Event: 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019 - Hong Kong, China
Duration: 3 Nov 2019 - 7 Nov 2019

Conference

Conference: 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019
Country: China
City: Hong Kong
Period: 03/11/2019 - 07/11/2019
Sponsors: Apple, ASAPP, Facebook, Google, Salesforce, et al.

Bibliographic note

Publisher Copyright:
© 2019 Association for Computational Linguistics

ID: 301823194