Can AMR Assist Legal and Logical Reasoning?

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research

Abstract

Abstract Meaning Representation (AMR) has been shown to be useful for many downstream tasks. In this work, we explore the use of AMR for legal and logical reasoning. Specifically, we investigate if AMR can help capture logical relationships on multiple choice question answering (MCQA) tasks. We propose neural architectures that utilize linearised AMR graphs in combination with pre-trained language models. While these models are not able to outperform text-only baselines, they correctly solve different instances from the text-only models, suggesting complementary abilities. Error analysis further reveals that AMR parsing quality is the most prominent challenge, especially for inputs with multiple sentences. We also conduct a theoretical analysis of how logical relations are represented in AMR and conclude that it may be helpful for some logical statements but not for others.
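
The abstract describes combining linearised AMR graphs with a pre-trained language model for multiple choice question answering. The following is a minimal sketch of that general setup, not the authors' implementation: the model name, the PENMAN-style linearisation, and the example question are illustrative assumptions, and the multiple-choice head shown here is untrained and would need fine-tuning in practice.

```python
# Sketch: score MCQA options by appending a linearised AMR graph to the
# question text and feeding each (context, option) pair to a pre-trained
# model with a multiple-choice head. Illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMultipleChoice.from_pretrained("roberta-base")

question = "If it rains, the match is cancelled. It rains. What follows?"
# Hypothetical linearised AMR (PENMAN notation flattened to one string).
amr = "(c / cancel-01 :ARG1 (m / match) :condition (r / rain-01))"
context = question + " " + tokenizer.sep_token + " " + amr

options = ["The match is cancelled.", "The match takes place."]

# Encode one (context, option) pair per answer candidate.
enc = tokenizer([context] * len(options), options,
                padding=True, truncation=True, return_tensors="pt")
# The multiple-choice model expects shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print("Predicted option:", options[logits.argmax(dim=-1).item()])
```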

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2022
Publisher: Association for Computational Linguistics
Publication date: 2022
Pages: 1555-1568
Publication status: Published - 2022
Event: 2022 Findings of the Association for Computational Linguistics: EMNLP 2022 - Abu Dhabi, United Arab Emirates
Duration: 7 Dec 2022 - 11 Dec 2022

Conference

Conference: 2022 Findings of the Association for Computational Linguistics: EMNLP 2022
Country: United Arab Emirates
City: Abu Dhabi
Period: 07/12/2022 - 11/12/2022

Bibliographical note

Publisher Copyright:
© 2022 Association for Computational Linguistics.

ID: 339845185