On Optimizing Human-Machine Task Assignments

Research output: Working paper › Preprint › Research

  • Andreas Veit
  • Michael J. Wilber
  • Rajan Vaish
  • James Davis
  • Vishal Anand
  • Anshu Aviral
  • Prithvijit Chakrabarty
  • Yash Chandak
  • Sidharth Chaturvedi
  • Chinmaya Devaraj
  • Ankit Dhall
  • Utkarsh Dwivedi
  • Sanket Gupte
  • Sharath N. Sridhar
  • Karthik Paga
  • Anuj Pahuja
  • Aditya Raisinghani
  • Ayush Sharma
  • Shweta Sharma
  • Darpana Sinha
  • Nisarg Thakkar
  • K. Bala Vignesh
  • Utkarsh Verma
  • Kanniganti Abhishek
  • Amod Agrawal
  • Arya Aishwarya
  • Aurgho Bhattacharjee
  • Sarveshwaran Dhanasekar
  • Venkata Karthik Gullapalli
  • Shuchita Gupta
  • G Chandana
  • Kinjal Jain
  • Simran Kapur
  • Meghana Kasula
  • Shashi Kumar
  • Parth Kundaliya
  • Utkarsh Mathur
  • Alankrit Mishra
  • Aayush Mudgal
  • Aditya Nadimpalli
  • Munakala Sree Nihit
  • Akanksha Periwal
  • Ayush Sagar
  • Ayush Shah
  • Vikas Sharma
  • Yashovardhan Sharma
  • Faizal Siddiqui
  • Virender Singh
  • S. Abhinav
  • Pradyumna Tambwekar
  • Rashida Taskin
  • Ankit Tripathi
  • Anurag D. Yadav
When crowdsourcing systems are combined with machine inference systems in the real world, they benefit most when the machine system is deeply integrated with the crowd workers. However, when researchers wish to combine the crowd with "off-the-shelf" machine classifiers, this deep integration is not always possible. This work explores two strategies for increasing accuracy and decreasing cost in this setting. First, we show that reordering the tasks presented to the human significantly improves accuracy. Second, we show that greedily choosing parameters to maximize machine accuracy is suboptimal, and that jointly optimizing the combined system improves performance.
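To make the second claim concrete, here is a minimal, hypothetical sketch (not the authors' code or data): a machine classifier answers items whose confidence exceeds a threshold tau and routes the rest to crowd workers. The synthetic confidences, the single threshold parameter, and the fixed crowd accuracy HUMAN_ACC are all illustrative assumptions. Greedily picking tau to maximize the machine's accuracy on only the items it keeps pushes nearly everything to the crowd, while jointly optimizing end-to-end accuracy chooses the threshold at which the machine actually outperforms the crowd.

    import numpy as np

    # Synthetic setup (assumption, for illustration only): each item has a
    # machine confidence in [0.5, 1.0), and the machine is correct with
    # probability equal to its confidence. Crowd accuracy is a fixed constant.
    rng = np.random.default_rng(0)
    N = 10_000
    confidence = rng.uniform(0.5, 1.0, N)
    machine_correct = rng.random(N) < confidence
    HUMAN_ACC = 0.95  # assumed crowd-worker accuracy

    def combined_accuracy(tau):
        # End-to-end accuracy: machine answers items with confidence >= tau,
        # the crowd answers the rest (in expectation, at HUMAN_ACC).
        machine_mask = confidence >= tau
        machine_hits = machine_correct[machine_mask].sum()
        expected_human_hits = HUMAN_ACC * (~machine_mask).sum()
        return (machine_hits + expected_human_hits) / N

    def machine_only_accuracy(tau):
        # Greedy objective: accuracy of the machine on just the items it keeps.
        kept = machine_correct[confidence >= tau]
        return kept.mean() if kept.size else 1.0

    taus = np.linspace(0.5, 1.0, 101)
    greedy_tau = max(taus, key=machine_only_accuracy)  # tunes the machine alone
    joint_tau = max(taus, key=combined_accuracy)       # tunes the whole pipeline

    print(f"greedy tau = {greedy_tau:.2f} -> combined accuracy {combined_accuracy(greedy_tau):.3f}")
    print(f"joint  tau = {joint_tau:.2f} -> combined accuracy {combined_accuracy(joint_tau):.3f}")

Under these assumptions the greedy objective is maximized by an extreme threshold that routes almost every item to the crowd, so the combined system is capped at the crowd's accuracy; the joint objective instead keeps the machine in the loop exactly where its confidence exceeds the crowd's accuracy, yielding a strictly better end-to-end result.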
Original language: English
Number of pages: 2
DOIs
Publication status: Submitted - 24 Sep 2015
Externally published: Yes
