Can AMR Assist Legal and Logical Reasoning?

Publication: Contribution to book/anthology/report › Conference article in proceedings › Research

Standard

Can AMR Assist Legal and Logical Reasoning? / Schrack, Nikolaus; Cui, Ruixiang; López, Hugo A.; Hershcovich, Daniel.

Findings of the Association for Computational Linguistics: EMNLP 2022. Association for Computational Linguistics, 2022. pp. 1555-1568.


Harvard

Schrack, N, Cui, R, López, HA & Hershcovich, D 2022, Can AMR Assist Legal and Logical Reasoning? in Findings of the Association for Computational Linguistics: EMNLP 2022. Association for Computational Linguistics, pp. 1555-1568, 2022 Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, 07/12/2022. <https://aclanthology.org/2022.findings-emnlp.112>

APA

Schrack, N., Cui, R., López, H. A., & Hershcovich, D. (2022). Can AMR Assist Legal and Logical Reasoning? In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 1555-1568). Association for Computational Linguistics. https://aclanthology.org/2022.findings-emnlp.112

Vancouver

Schrack N, Cui R, López HA, Hershcovich D. Can AMR Assist Legal and Logical Reasoning? In: Findings of the Association for Computational Linguistics: EMNLP 2022. Association for Computational Linguistics; 2022. p. 1555-1568.

Author

Schrack, Nikolaus ; Cui, Ruixiang ; López, Hugo A. ; Hershcovich, Daniel. / Can AMR Assist Legal and Logical Reasoning? Findings of the Association for Computational Linguistics: EMNLP 2022. Association for Computational Linguistics, 2022. pp. 1555-1568

Bibtex

@inproceedings{9e73288e4d8a4451897896157f445bae,
title = "Can AMR Assist Legal and Logical Reasoning?",
abstract = "Abstract Meaning Representation (AMR) has been shown to be useful for many downstream tasks. In this work, we explore the use of AMR for legal and logical reasoning. Specifically, we investigate if AMR can help capture logical relationships on multiple choice question answering (MCQA) tasks. We propose neural architectures that utilize linearised AMR graphs in combination with pre-trained language models. While these models are not able to outperform text-only baselines, they correctly solve different instances than the text models, suggesting complementary abilities. Error analysis further reveals that AMR parsing quality is the most prominent challenge, especially regarding inputs with multiple sentences. We conduct a theoretical analysis of how logical relations are represented in AMR and conclude it might be helpful in some logical statements but not for others.",
author = "Nikolaus Schrack and Ruixiang Cui and L{\'o}pez, {Hugo A.} and Daniel Hershcovich",
note = "Publisher Copyright: {\textcopyright} 2022 Association for Computational Linguistics.; 2022 Findings of the Association for Computational Linguistics: EMNLP 2022 ; Conference date: 07-12-2022 Through 11-12-2022",
year = "2022",
language = "English",
pages = "1555--1568",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
publisher = "Association for Computational Linguistics",

}
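The abstract describes combining linearised AMR graphs with pre-trained language models for multiple choice question answering. As a minimal illustrative sketch (not the authors' exact pipeline: the helper name, the example AMR graph, and the `</s>` separator token are assumptions), linearisation can be as simple as flattening an AMR graph in PENMAN notation into a single token sequence, which is then concatenated with the question text before encoding:

```python
def linearise_amr(penman: str) -> str:
    """Flatten a PENMAN-notation AMR graph into a one-line token sequence.

    Parentheses are kept as separate tokens so the graph structure
    survives linearisation; newlines and indentation are discarded.
    """
    tokens = penman.replace("(", " ( ").replace(")", " ) ").split()
    return " ".join(tokens)


# Hypothetical AMR for "The tenant is obligated to pay rent."
amr = """(o / obligate-01
   :ARG2 (p / pay-01
      :ARG0 (t / tenant)
      :ARG1 (r / rent)))"""

linearised = linearise_amr(amr)

# The linearised graph can then be appended to the question text and fed
# to a sequence encoder, e.g. with a separator token between segments:
combined = "Must the tenant pay rent? </s> " + linearised
print(combined)
```

In a full MCQA setup, one such combined sequence would be built per answer option and scored by the language model; the paper notes that parsing quality, especially for multi-sentence inputs, is the main bottleneck of this approach.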

RIS

TY - GEN

T1 - Can AMR Assist Legal and Logical Reasoning?

AU - Schrack, Nikolaus

AU - Cui, Ruixiang

AU - López, Hugo A.

AU - Hershcovich, Daniel

N1 - Publisher Copyright: © 2022 Association for Computational Linguistics.

PY - 2022

Y1 - 2022

N2 - Abstract Meaning Representation (AMR) has been shown to be useful for many downstream tasks. In this work, we explore the use of AMR for legal and logical reasoning. Specifically, we investigate if AMR can help capture logical relationships on multiple choice question answering (MCQA) tasks. We propose neural architectures that utilize linearised AMR graphs in combination with pre-trained language models. While these models are not able to outperform text-only baselines, they correctly solve different instances than the text models, suggesting complementary abilities. Error analysis further reveals that AMR parsing quality is the most prominent challenge, especially regarding inputs with multiple sentences. We conduct a theoretical analysis of how logical relations are represented in AMR and conclude it might be helpful in some logical statements but not for others.

AB - Abstract Meaning Representation (AMR) has been shown to be useful for many downstream tasks. In this work, we explore the use of AMR for legal and logical reasoning. Specifically, we investigate if AMR can help capture logical relationships on multiple choice question answering (MCQA) tasks. We propose neural architectures that utilize linearised AMR graphs in combination with pre-trained language models. While these models are not able to outperform text-only baselines, they correctly solve different instances than the text models, suggesting complementary abilities. Error analysis further reveals that AMR parsing quality is the most prominent challenge, especially regarding inputs with multiple sentences. We conduct a theoretical analysis of how logical relations are represented in AMR and conclude it might be helpful in some logical statements but not for others.

UR - http://www.scopus.com/inward/record.url?scp=85149815433&partnerID=8YFLogxK

M3 - Article in proceedings

AN - SCOPUS:85149815433

SP - 1555

EP - 1568

BT - Findings of the Association for Computational Linguistics: EMNLP 2022

PB - Association for Computational Linguistics

T2 - 2022 Findings of the Association for Computational Linguistics: EMNLP 2022

Y2 - 7 December 2022 through 11 December 2022

ER -

ID: 339845185