What Can We Do to Improve Peer Review in NLP?

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

What Can We Do to Improve Peer Review in NLP? / Rogers, Anna; Augenstein, Isabelle.

Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics, 2020. p. 1256-1262.


Harvard

Rogers, A & Augenstein, I 2020, What Can We Do to Improve Peer Review in NLP? in Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics, pp. 1256-1262, The 2020 Conference on Empirical Methods in Natural Language Processing, 16/11/2020. https://doi.org/10.18653/v1/2020.findings-emnlp.112

APA

Rogers, A., & Augenstein, I. (2020). What Can We Do to Improve Peer Review in NLP? In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 1256-1262). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp.112

Vancouver

Rogers A, Augenstein I. What Can We Do to Improve Peer Review in NLP? In Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics. 2020. p. 1256-1262 https://doi.org/10.18653/v1/2020.findings-emnlp.112

Author

Rogers, Anna ; Augenstein, Isabelle. / What Can We Do to Improve Peer Review in NLP?. Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics, 2020. pp. 1256-1262

Bibtex

@inproceedings{f464713e1976445baedcb9064992f527,
title = "What Can We Do to Improve Peer Review in NLP?",
abstract = "Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that a part of the problem is that the reviewers and area chairs face a poorly defined task forcing apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.",
author = "Anna Rogers and Isabelle Augenstein",
year = "2020",
doi = "10.18653/v1/2020.findings-emnlp.112",
language = "English",
pages = "1256--1262",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
publisher = "Association for Computational Linguistics",
note = "The 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020 ; Conference date: 16-11-2020 Through 20-11-2020",
url = "http://2020.emnlp.org",
}

RIS

TY - GEN

T1 - What Can We Do to Improve Peer Review in NLP?

AU - Rogers, Anna

AU - Augenstein, Isabelle

PY - 2020

Y1 - 2020

N2 - Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that a part of the problem is that the reviewers and area chairs face a poorly defined task forcing apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.

AB - Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that a part of the problem is that the reviewers and area chairs face a poorly defined task forcing apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.

U2 - 10.18653/v1/2020.findings-emnlp.112

DO - 10.18653/v1/2020.findings-emnlp.112

M3 - Article in proceedings

SP - 1256

EP - 1262

BT - Findings of the Association for Computational Linguistics: EMNLP 2020

PB - Association for Computational Linguistics

T2 - The 2020 Conference on Empirical Methods in Natural Language Processing

Y2 - 16 November 2020 through 20 November 2020

ER -
