What Can We Do to Improve Peer Review in NLP?
Publication: Contribution to book/anthology/report › Article in proceedings › Research › peer-reviewed
Standard
What Can We Do to Improve Peer Review in NLP? / Rogers, Anna; Augenstein, Isabelle.
Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics, 2020. pp. 1256-1262.
RIS
TY - GEN
T1 - What Can We Do to Improve Peer Review in NLP?
AU - Rogers, Anna
AU - Augenstein, Isabelle
PY - 2020
Y1 - 2020
N2 - Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that a part of the problem is that the reviewers and area chairs face a poorly defined task forcing apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.
AB - Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that a part of the problem is that the reviewers and area chairs face a poorly defined task forcing apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.
U2 - 10.18653/v1/2020.findings-emnlp.112
DO - 10.18653/v1/2020.findings-emnlp.112
M3 - Article in proceedings
SP - 1256
EP - 1262
BT - Findings of the Association for Computational Linguistics: EMNLP 2020
PB - Association for Computational Linguistics
T2 - The 2020 Conference on Empirical Methods in Natural Language Processing
Y2 - 16 November 2020 through 20 November 2020
ER -