Understanding image quality and trust in peer-to-peer marketplaces

Research output: Contribution to journal › Conference article › Research › peer-review

  • Xiao Ma
  • Lina Mezghani
  • Kimberly Wilber
  • Hui Hong
  • Robinson Piramuthu
  • Mor Naaman
  • Serge Belongie

As any savvy online shopper knows, second-hand peer-to-peer marketplaces are filled with images of mixed quality. How does image quality impact marketplace outcomes, and can quality be automatically predicted? In this work, we conducted a large-scale study on the quality of user-generated images in peer-to-peer marketplaces. By gathering a dataset of common second-hand products (~75,000 images) and annotating a subset with human-labeled quality judgments, we were able to model and predict image quality with decent accuracy (~87%). We then conducted two studies focused on understanding the relationship between these image quality scores and two marketplace outcomes: sales and perceived trustworthiness. We show that image quality is associated with a higher likelihood that an item will be sold, though other factors such as view count were better predictors of sales. Nonetheless, we show that high-quality user-generated images selected by our models outperform stock imagery in eliciting perceptions of trust from users. Our findings can inform the design of future marketplaces and guide potential sellers to take better product images.

Original language: English
Journal: Proceedings - 2019 IEEE Winter Conference on Applications of Computer Vision, WACV 2019
Pages (from-to): 511-520
Number of pages: 10
DOIs
Publication status: Published - 4 Mar 2019
Externally published: Yes
Event: 19th IEEE Winter Conference on Applications of Computer Vision, WACV 2019 - Waikoloa Village, United States
Duration: 7 Jan 2019 - 11 Jan 2019

Conference

Conference: 19th IEEE Winter Conference on Applications of Computer Vision, WACV 2019
Country: United States
City: Waikoloa Village
Period: 07/01/2019 - 11/01/2019
Sponsor: IEEE Biometrics Council, IEEE Computer Society

Bibliographical note

Funding Information:
This work is partly funded by a Facebook equipment donation to Cornell University and by AOL through the Connected Experiences Laboratory. We additionally wish to thank our crowd workers on Mechanical Turk and our colleagues from eBay.

Publisher Copyright:
© 2019 IEEE
