LexGLUE: A Benchmark Dataset for Legal Language Understanding in English

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Standard

LexGLUE : A Benchmark Dataset for Legal Language Understanding in English. / Chalkidis, Ilias; Jana, Abhik; Hartung, Dirk; Bommarito, Michael; Androutsopoulos, Ion; Katz, Daniel Martin; Aletras, Nikolaos.

ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers). ed. / Smaranda Muresan; Preslav Nakov; Aline Villavicencio. Association for Computational Linguistics, 2022. pp. 4310-4330 (Proceedings of the Annual Meeting of the Association for Computational Linguistics, Vol. 1).


Harvard

Chalkidis, I, Jana, A, Hartung, D, Bommarito, M, Androutsopoulos, I, Katz, DM & Aletras, N 2022, LexGLUE: A Benchmark Dataset for Legal Language Understanding in English. in S Muresan, P Nakov & A Villavicencio (eds), ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers). Association for Computational Linguistics, Proceedings of the Annual Meeting of the Association for Computational Linguistics, vol. 1, pp. 4310-4330, 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022, Dublin, Ireland, 22/05/2022.

APA

Chalkidis, I., Jana, A., Hartung, D., Bommarito, M., Androutsopoulos, I., Katz, D. M., & Aletras, N. (2022). LexGLUE: A Benchmark Dataset for Legal Language Understanding in English. In S. Muresan, P. Nakov, & A. Villavicencio (Eds.), ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) (pp. 4310-4330). Association for Computational Linguistics. (Proceedings of the Annual Meeting of the Association for Computational Linguistics, Vol. 1).

Vancouver

Chalkidis I, Jana A, Hartung D, Bommarito M, Androutsopoulos I, Katz DM, et al. LexGLUE: A Benchmark Dataset for Legal Language Understanding in English. In Muresan S, Nakov P, Villavicencio A, editors, ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers). Association for Computational Linguistics. 2022. p. 4310-4330. (Proceedings of the Annual Meeting of the Association for Computational Linguistics, Vol. 1).

Author

Chalkidis, Ilias ; Jana, Abhik ; Hartung, Dirk ; Bommarito, Michael ; Androutsopoulos, Ion ; Katz, Daniel Martin ; Aletras, Nikolaos. / LexGLUE : A Benchmark Dataset for Legal Language Understanding in English. ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers). ed. / Smaranda Muresan ; Preslav Nakov ; Aline Villavicencio. Association for Computational Linguistics, 2022. pp. 4310-4330 (Proceedings of the Annual Meeting of the Association for Computational Linguistics, Vol. 1).

Bibtex

@inproceedings{70afd6f0a68e4df49c4b47ed2b6e49e3,
title = "LexGLUE: A Benchmark Dataset for Legal Language Understanding in English",
abstract = "Laws and their interpretations, legal arguments and agreements are typically expressed in writing, leading to the production of vast corpora of legal text. Their analysis, which is at the center of legal practice, becomes increasingly elaborate as these collections grow in size. Natural language understanding (NLU) technologies can be a valuable tool to support legal practitioners in these endeavors. Their usefulness, however, largely depends on whether current state-of-the-art models can generalize across various tasks in the legal domain. To answer this currently open question, we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks in a standardized way. We also provide an evaluation and analysis of several generic and legal-oriented models demonstrating that the latter consistently offer performance improvements across multiple tasks.",
author = "Ilias Chalkidis and Abhik Jana and Dirk Hartung and Michael Bommarito and Ion Androutsopoulos and {Daniel Martin} Katz and Nikolaos Aletras",
note = "Publisher Copyright: {\textcopyright} 2022 Association for Computational Linguistics.; 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022 ; Conference date: 22-05-2022 Through 27-05-2022",
year = "2022",
language = "English",
series = "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
pages = "4310--4330",
editor = "Smaranda Muresan and Preslav Nakov and Aline Villavicencio",
booktitle = "ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)",
publisher = "Association for Computational Linguistics",
}

RIS

TY - GEN

T1 - LexGLUE: A Benchmark Dataset for Legal Language Understanding in English

T2 - 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022

AU - Chalkidis, Ilias

AU - Jana, Abhik

AU - Hartung, Dirk

AU - Bommarito, Michael

AU - Androutsopoulos, Ion

AU - Katz, Daniel Martin

AU - Aletras, Nikolaos

N1 - Publisher Copyright: © 2022 Association for Computational Linguistics.

PY - 2022

Y1 - 2022

N2 - Laws and their interpretations, legal arguments and agreements are typically expressed in writing, leading to the production of vast corpora of legal text. Their analysis, which is at the center of legal practice, becomes increasingly elaborate as these collections grow in size. Natural language understanding (NLU) technologies can be a valuable tool to support legal practitioners in these endeavors. Their usefulness, however, largely depends on whether current state-of-the-art models can generalize across various tasks in the legal domain. To answer this currently open question, we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks in a standardized way. We also provide an evaluation and analysis of several generic and legal-oriented models demonstrating that the latter consistently offer performance improvements across multiple tasks.

AB - Laws and their interpretations, legal arguments and agreements are typically expressed in writing, leading to the production of vast corpora of legal text. Their analysis, which is at the center of legal practice, becomes increasingly elaborate as these collections grow in size. Natural language understanding (NLU) technologies can be a valuable tool to support legal practitioners in these endeavors. Their usefulness, however, largely depends on whether current state-of-the-art models can generalize across various tasks in the legal domain. To answer this currently open question, we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks in a standardized way. We also provide an evaluation and analysis of several generic and legal-oriented models demonstrating that the latter consistently offer performance improvements across multiple tasks.

UR - http://www.scopus.com/inward/record.url?scp=85137748584&partnerID=8YFLogxK

M3 - Article in proceedings

AN - SCOPUS:85137748584

T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics

SP - 4310

EP - 4330

BT - ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)

A2 - Muresan, Smaranda

A2 - Nakov, Preslav

A2 - Villavicencio, Aline

PB - Association for Computational Linguistics

Y2 - 22 May 2022 through 27 May 2022

ER -

ID: 339157744