Intentonomy: A dataset and study towards human intent understanding

Research output: Contribution to journal › Conference article › Research › peer-review

An image is worth a thousand words, conveying information that goes beyond the mere visual content therein. In this paper, we study the intent behind social media images with an aim to analyze how visual information can facilitate recognition of human intent. Towards this goal, we introduce an intent dataset, Intentonomy, comprising 14K images covering a wide range of everyday scenes. These images are manually annotated with 28 intent categories derived from a social psychology taxonomy. We then systematically study whether, and to what extent, commonly used visual information, i.e., object and context, contribute to human motive understanding. Based on our findings, we conduct further study to quantify the effect of attending to object and context classes as well as textual information in the form of hashtags when training an intent classifier. Our results quantitatively and qualitatively shed light on how visual and textual information can produce observable effects when predicting intent.
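The abstract describes training a multi-label intent classifier over visual features plus hashtag text. Below is a minimal, hypothetical sketch of such a setup — concatenated image and hashtag features feeding a 28-way sigmoid head with a binary cross-entropy loss. All names, dimensions, and the linear model itself are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

NUM_INTENTS = 28  # intent categories in Intentonomy (per the abstract)

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(img_feat, tag_feat, W, b):
    """Multi-label intent scores from concatenated image + hashtag features."""
    x = np.concatenate([img_feat, tag_feat])
    return sigmoid(W @ x + b)  # independent probability per intent class

def bce_loss(probs, labels, eps=1e-7):
    """Binary cross-entropy summed over the intent labels."""
    p = np.clip(probs, eps, 1 - eps)
    return -np.sum(labels * np.log(p) + (1 - labels) * np.log(1 - p))

# Illustrative dimensions: 512-d visual features, 128-d hashtag embedding.
img_feat = rng.standard_normal(512)
tag_feat = rng.standard_normal(128)
W = rng.standard_normal((NUM_INTENTS, 640)) * 0.01
b = np.zeros(NUM_INTENTS)

probs = predict(img_feat, tag_feat, W, b)
labels = np.zeros(NUM_INTENTS)
labels[[3, 7]] = 1.0  # one image can express several intents at once
loss = bce_loss(probs, labels)
```

A sigmoid-per-class head (rather than a softmax) reflects that intent categories are not mutually exclusive: a single social media image can be annotated with multiple intents.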

Original language: English
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pages (from-to): 12981-12991
Number of pages: 11
ISSN: 1063-6919
Publication status: Published - 2021
Externally published: Yes
Event: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021 - Virtual, Online, United States
Duration: 19 Jun 2021 - 25 Jun 2021

Conference

Conference: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021
Country: United States
City: Virtual, Online
Period: 19/06/2021 - 25/06/2021

Bibliographical note

Funding Information:
Acknowledgement: We thank Luke Chesser and Timothy Carbone from Unsplash for providing the images, Kimberly Wilber and Bor-chun Chen for tips and suggestions about the annotation interface and annotator management, Kevin Musgrave for the general discussion, and anonymous reviewers for their valuable feedback. This work is supported by a Facebook AI research grant awarded to Cornell University.

Publisher Copyright:
© 2021 IEEE

ID: 301816716