Rewarding Coreference Resolvers for Being Consistent with World Knowledge

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Unresolved coreference is a bottleneck for relation extraction, and high-quality coreference resolvers may produce an output that makes it much easier to extract knowledge triples. We show how to improve coreference resolvers by forwarding their output to a relation extraction system and rewarding the resolvers for producing triples that are found in knowledge bases. Since relation extraction systems can rely on different forms of supervision and be biased in different ways, we obtain the best performance, improving over the state of the art, using multi-task reinforcement learning.
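To make the reward idea in the abstract concrete, here is a minimal, self-contained Python sketch (not the authors' code): a stand-in coreference resolver chooses whether to link a pronoun, the resolved text is passed to a stand-in relation extractor, and the reward is the number of extracted triples found in a toy knowledge base. All names (resolve_coreference, extract_triples, KB) and the single-action setup are hypothetical simplifications of the multi-task reinforcement learning setup described in the paper.

```python
# Illustrative sketch, not the authors' implementation: reward a coreference
# resolver for producing text whose extracted triples appear in a knowledge base.
from typing import List, Tuple

Triple = Tuple[str, str, str]

# Toy knowledge base of (subject, relation, object) facts (hypothetical).
KB = {
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "field", "physics"),
}

def resolve_coreference(text: str, link_pronoun: bool) -> str:
    """Stand-in resolver with one binary action: replace 'She' with its antecedent."""
    return text.replace("She", "Marie Curie") if link_pronoun else text

def extract_triples(text: str) -> List[Triple]:
    """Stand-in relation extractor using a single hard-coded surface pattern."""
    triples = []
    for sentence in text.split("."):
        sentence = sentence.strip()
        if " was born in " in sentence:
            subj, obj = sentence.split(" was born in ")
            triples.append((subj.strip(), "born_in", obj.strip()))
    return triples

def reward(text: str) -> float:
    """Reward = number of extracted triples that are found in the knowledge base."""
    return float(sum(t in KB for t in extract_triples(text)))

doc = "Marie Curie studied physics. She was born in Warsaw."

# Compare the reward of the two possible resolver actions; a policy-gradient
# (REINFORCE-style) learner would shift probability toward the higher-reward action.
for action in (False, True):
    resolved = resolve_coreference(doc, link_pronoun=action)
    print(f"link_pronoun={action}: reward={reward(resolved)}")
# Linking the pronoun yields one KB-consistent triple (reward 1.0 vs 0.0).
```

In the paper's setting, this kind of KB-consistency signal comes from relation extraction systems with different forms of supervision, and the resolver is fine-tuned with multi-task reinforcement learning rather than the single toy action shown here.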
Original language: English
Title of host publication: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Publisher: Association for Computational Linguistics
Publication date: 2019
Pages: 1229-1235
DOIs
Publication status: Published - 2019
Event: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) - Hong Kong, China
Duration: 1 Nov 2019 – 1 Nov 2019

Conference

Conference: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Location: Hong Kong, China
Period: 01/11/2019 – 01/11/2019


ID: 239862438