Experiments with crowdsourced re-annotation of a POS tagging data set

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Experiments with crowdsourced re-annotation of a POS tagging data set. / Hovy, Dirk; Plank, Barbara; Søgaard, Anders.

Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Baltimore, Maryland: Association for Computational Linguistics, 2014. p. 377-382.

Harvard

Hovy, D, Plank, B & Søgaard, A 2014, Experiments with crowdsourced re-annotation of a POS tagging data set. in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Baltimore, Maryland, pp. 377-382.

APA

Hovy, D., Plank, B., & Søgaard, A. (2014). Experiments with crowdsourced re-annotation of a POS tagging data set. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (pp. 377-382). Association for Computational Linguistics.

Vancouver

Hovy D, Plank B, Søgaard A. Experiments with crowdsourced re-annotation of a POS tagging data set. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Baltimore, Maryland: Association for Computational Linguistics. 2014. p. 377-382

Author

Hovy, Dirk; Plank, Barbara; Søgaard, Anders. / Experiments with crowdsourced re-annotation of a POS tagging data set. Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Baltimore, Maryland: Association for Computational Linguistics, 2014. pp. 377-382.

BibTeX

@inproceedings{4f6899f31ad74b89a4130e912c4618be,
title = "Experiments with crowdsourced re-annotation of a POS tagging data set",
abstract = "Crowdsourcing lets us collect multiple annotations for an item from several annotators. Typically, these are annotations for non-sequential classification tasks. While there has been some work on crowdsourcing named entity annotations, researchers have assumed that syntactic tasks such as part-of-speech (POS) tagging cannot be crowdsourced. This paper shows that workers can actually annotate sequential data almost as well as experts. Further, we show that the models learned from crowdsourced annotations fare as well as the models learned from expert annotations in downstream tasks.",
author = "Dirk Hovy and Barbara Plank and Anders S{\o}gaard",
year = "2014",
month = jun,
language = "English",
pages = "377--382",
booktitle = "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
publisher = "Association for Computational Linguistics",
}

RIS

TY - GEN

T1 - Experiments with crowdsourced re-annotation of a POS tagging data set

AU - Hovy, Dirk

AU - Plank, Barbara

AU - Søgaard, Anders

PY - 2014/6

Y1 - 2014/6

N2 - Crowdsourcing lets us collect multiple annotations for an item from several annotators. Typically, these are annotations for non-sequential classification tasks. While there has been some work on crowdsourcing named entity annotations, researchers have assumed that syntactic tasks such as part-of-speech (POS) tagging cannot be crowdsourced. This paper shows that workers can actually annotate sequential data almost as well as experts. Further, we show that the models learned from crowdsourced annotations fare as well as the models learned from expert annotations in downstream tasks.

AB - Crowdsourcing lets us collect multiple annotations for an item from several annotators. Typically, these are annotations for non-sequential classification tasks. While there has been some work on crowdsourcing named entity annotations, researchers have assumed that syntactic tasks such as part-of-speech (POS) tagging cannot be crowdsourced. This paper shows that workers can actually annotate sequential data almost as well as experts. Further, we show that the models learned from crowdsourced annotations fare as well as the models learned from expert annotations in downstream tasks.

M3 - Article in proceedings

SP - 377

EP - 382

BT - Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

PB - Association for Computational Linguistics

CY - Baltimore, Maryland

ER -
