Modeling Information Change in Science Communication with Semantically Matched Paraphrases

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research

Standard

Modeling Information Change in Science Communication with Semantically Matched Paraphrases. / Wright, Dustin; Pei, Jiaxin; Jurgens, David; Augenstein, Isabelle.

Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2022. p. 1783-1807.


Harvard

Wright, D, Pei, J, Jurgens, D & Augenstein, I 2022, Modeling Information Change in Science Communication with Semantically Matched Paraphrases. in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pp. 1783-1807, 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, 07/12/2022. <https://aclanthology.org/2022.emnlp-main.117/>

APA

Wright, D., Pei, J., Jurgens, D., & Augenstein, I. (2022). Modeling Information Change in Science Communication with Semantically Matched Paraphrases. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (pp. 1783-1807). Association for Computational Linguistics. https://aclanthology.org/2022.emnlp-main.117/

Vancouver

Wright D, Pei J, Jurgens D, Augenstein I. Modeling Information Change in Science Communication with Semantically Matched Paraphrases. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. 2022. p. 1783-1807

Author

Wright, Dustin ; Pei, Jiaxin ; Jurgens, David ; Augenstein, Isabelle. / Modeling Information Change in Science Communication with Semantically Matched Paraphrases. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2022. pp. 1783-1807

Bibtex

@inproceedings{7b02ab2ea2c744d1b25ae5de32527810,
title = "Modeling Information Change in Science Communication with Semantically Matched Paraphrases",
abstract = "Whether the media faithfully communicate scientific information has long been a core issue to the science community. Automatically identifying paraphrased scientific findings could enable large-scale tracking and analysis of information changes in the science communication process, but this requires systems to understand the similarity between scientific information across multiple domains. To this end, we present the SCIENTIFIC PARAPHRASE AND INFORMATION CHANGE DATASET (SPICED), the first paraphrase dataset of scientific findings annotated for degree of information change. SPICED contains 6, 000 scientific finding pairs extracted from news stories, social media discussions, and full texts of original papers. We demonstrate that SPICED poses a challenging task and that models trained on SPICED improve downstream performance on evidence retrieval for fact checking of real-world scientific claims. Finally, we show that models trained on SPICED can reveal large-scale trends in the degrees to which people and organizations faithfully communicate new scientific findings. Data, code, and pre-trained models are available at http://www.copenlu.com/publication/2022_emnlp_wright/.",
author = "Dustin Wright and Jiaxin Pei and David Jurgens and Isabelle Augenstein",
note = "Publisher Copyright: {\textcopyright} 2022 Association for Computational Linguistics.; 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 ; Conference date: 07-12-2022 Through 11-12-2022",
year = "2022",
language = "English",
pages = "1783--1807",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
publisher = "Association for Computational Linguistics",
}

RIS

TY - GEN

T1 - Modeling Information Change in Science Communication with Semantically Matched Paraphrases

AU - Wright, Dustin

AU - Pei, Jiaxin

AU - Jurgens, David

AU - Augenstein, Isabelle

N1 - Publisher Copyright: © 2022 Association for Computational Linguistics.

PY - 2022

Y1 - 2022

N2 - Whether the media faithfully communicate scientific information has long been a core issue to the science community. Automatically identifying paraphrased scientific findings could enable large-scale tracking and analysis of information changes in the science communication process, but this requires systems to understand the similarity between scientific information across multiple domains. To this end, we present the SCIENTIFIC PARAPHRASE AND INFORMATION CHANGE DATASET (SPICED), the first paraphrase dataset of scientific findings annotated for degree of information change. SPICED contains 6,000 scientific finding pairs extracted from news stories, social media discussions, and full texts of original papers. We demonstrate that SPICED poses a challenging task and that models trained on SPICED improve downstream performance on evidence retrieval for fact checking of real-world scientific claims. Finally, we show that models trained on SPICED can reveal large-scale trends in the degrees to which people and organizations faithfully communicate new scientific findings. Data, code, and pre-trained models are available at http://www.copenlu.com/publication/2022_emnlp_wright/.

AB - Whether the media faithfully communicate scientific information has long been a core issue to the science community. Automatically identifying paraphrased scientific findings could enable large-scale tracking and analysis of information changes in the science communication process, but this requires systems to understand the similarity between scientific information across multiple domains. To this end, we present the SCIENTIFIC PARAPHRASE AND INFORMATION CHANGE DATASET (SPICED), the first paraphrase dataset of scientific findings annotated for degree of information change. SPICED contains 6,000 scientific finding pairs extracted from news stories, social media discussions, and full texts of original papers. We demonstrate that SPICED poses a challenging task and that models trained on SPICED improve downstream performance on evidence retrieval for fact checking of real-world scientific claims. Finally, we show that models trained on SPICED can reveal large-scale trends in the degrees to which people and organizations faithfully communicate new scientific findings. Data, code, and pre-trained models are available at http://www.copenlu.com/publication/2022_emnlp_wright/.

UR - http://www.scopus.com/inward/record.url?scp=85149438687&partnerID=8YFLogxK

M3 - Article in proceedings

AN - SCOPUS:85149438687

SP - 1783

EP - 1807

BT - Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

PB - Association for Computational Linguistics

T2 - 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022

Y2 - 7 December 2022 through 11 December 2022

ER -

ID: 341062805