What Can We Do to Improve Peer Review in NLP?

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that a part of the problem is that the reviewers and area chairs face a poorly defined task forcing apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2020
Publisher: Association for Computational Linguistics
Publication date: 2020
Pages: 1256-1262
DOIs
Publication status: Published - 2020
Event: The 2020 Conference on Empirical Methods in Natural Language Processing - online
Duration: 16 Nov 2020 - 20 Nov 2020
http://2020.emnlp.org

Conference

Conference: The 2020 Conference on Empirical Methods in Natural Language Processing
Location: online
Period: 16/11/2020 - 20/11/2020
Internet address: http://2020.emnlp.org



