Detecting Harmful Content on Online Platforms: What Platforms Need vs. Where Research Efforts Go

Research output: Contribution to journal · Journal article · Research · peer-review

Documents

  • Fulltext: Accepted author manuscript, 6.66 MB, PDF document

Authors

  • Arnav Arora
  • Preslav Nakov
  • Momchil Hardalov
  • Sheikh Muhammad Sarwar
  • Vibha Nayak
  • Yoan Dinkov
  • Dimitrina Zlatkova
  • Kyle Dent
  • Ameya Bhatawdekar
  • Guillaume Bouchard
  • Isabelle Augenstein

The proliferation of harmful content on online platforms is a major societal problem. Such content comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others. Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users. Researchers have developed different methods for automatically detecting harmful content, often focusing on specific sub-problems or on narrow communities, as what is considered harmful often depends on the platform and on the context. We argue that there is currently a dichotomy between the types of harmful content that online platforms seek to curb and the research efforts aimed at automatically detecting such content. We thus survey existing methods as well as the content moderation policies of online platforms in this light and suggest directions for future work.

Original language: English
Article number: 72
Journal: ACM Computing Surveys
Volume: 56
Issue number: 3
Pages (from-to): 1-17
ISSN: 0360-0300
DOIs
Publication status: Published - 2023
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.

Research areas

  • Additional Key Words and Phrases: Online harms, bullying and harassment, content moderation, graphic content, hate speech, misinformation, offensive language, self-harm, sexual abuse, spam, violence
