Processing Long Legal Documents with Pre-trained Transformers: Modding LegalBERT and Longformer

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Processing Long Legal Documents with Pre-trained Transformers: Modding LegalBERT and Longformer. / Mamakas, Dimitris; Tsotsi, Petros; Androutsopoulos, Ion; Chalkidis, Ilias.

NLLP 2022 - Natural Legal Language Processing Workshop 2022, Proceedings of the Workshop. Association for Computational Linguistics (ACL), 2022. p. 130-142.

Harvard

Mamakas, D, Tsotsi, P, Androutsopoulos, I & Chalkidis, I 2022, Processing Long Legal Documents with Pre-trained Transformers: Modding LegalBERT and Longformer. in NLLP 2022 - Natural Legal Language Processing Workshop 2022, Proceedings of the Workshop. Association for Computational Linguistics (ACL), pp. 130-142, 4th Natural Legal Language Processing Workshop, NLLP 2022, co-located with the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, 08/12/2022. <https://aclanthology.org/2022.nllp-1.11>

APA

Mamakas, D., Tsotsi, P., Androutsopoulos, I., & Chalkidis, I. (2022). Processing Long Legal Documents with Pre-trained Transformers: Modding LegalBERT and Longformer. In NLLP 2022 - Natural Legal Language Processing Workshop 2022, Proceedings of the Workshop (pp. 130-142). Association for Computational Linguistics (ACL). https://aclanthology.org/2022.nllp-1.11

Vancouver

Mamakas D, Tsotsi P, Androutsopoulos I, Chalkidis I. Processing Long Legal Documents with Pre-trained Transformers: Modding LegalBERT and Longformer. In NLLP 2022 - Natural Legal Language Processing Workshop 2022, Proceedings of the Workshop. Association for Computational Linguistics (ACL). 2022. p. 130-142

Author

Mamakas, Dimitris; Tsotsi, Petros; Androutsopoulos, Ion; Chalkidis, Ilias. / Processing Long Legal Documents with Pre-trained Transformers: Modding LegalBERT and Longformer. NLLP 2022 - Natural Legal Language Processing Workshop 2022, Proceedings of the Workshop. Association for Computational Linguistics (ACL), 2022. pp. 130-142

Bibtex

@inproceedings{15b9fa6f84b24ff387cd9562f3be4ccd,
title = "Processing Long Legal Documents with Pre-trained Transformers: Modding LegalBERT and Longformer",
abstract = "Pre-trained Transformers currently dominate most NLP tasks. They impose, however, limits on the maximum input length (512 sub-words in BERT), which are too restrictive in the legal domain. Even sparse-attention models, such as Longformer and BigBird, which increase the maximum input length to 4,096 sub-words, severely truncate texts in three of the six datasets of LexGLUE. Simpler linear classifiers with TF-IDF features can handle texts of any length, require far less resources to train and deploy, but are usually outperformed by pre-trained Transformers. We explore two directions to cope with long legal texts: (i) modifying a Longformer warm-started from LegalBERT to handle even longer texts (up to 8,192 sub-words), and (ii) modifying LegalBERT to use TF-IDF representations. The first approach is the best in terms of performance, surpassing a hierarchical version of LegalBERT, which was the previous state of the art in LexGLUE. The second approach leads to computationally more efficient models at the expense of lower performance, but the resulting models still outperform overall a linear SVM with TF-IDF features in long legal document classification.",
author = "Dimitris Mamakas and Petros Tsotsi and Ion Androutsopoulos and Ilias Chalkidis",
note = "Publisher Copyright: {\textcopyright} 2022 Association for Computational Linguistics.; 4th Natural Legal Language Processing Workshop, NLLP 2022, co-located with the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 ; Conference date: 08-12-2022",
year = "2022",
language = "English",
pages = "130--142",
booktitle = "NLLP 2022 - Natural Legal Language Processing Workshop 2022, Proceedings of the Workshop",
publisher = "Association for Computational Linguistics (ACL)",
address = "United States",

}

RIS

TY - GEN

T1 - Processing Long Legal Documents with Pre-trained Transformers: Modding LegalBERT and Longformer

T2 - 4th Natural Legal Language Processing Workshop, NLLP 2022, co-located with the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022

AU - Mamakas, Dimitris

AU - Tsotsi, Petros

AU - Androutsopoulos, Ion

AU - Chalkidis, Ilias

N1 - Publisher Copyright: © 2022 Association for Computational Linguistics.

PY - 2022

Y1 - 2022

N2 - Pre-trained Transformers currently dominate most NLP tasks. They impose, however, limits on the maximum input length (512 sub-words in BERT), which are too restrictive in the legal domain. Even sparse-attention models, such as Longformer and BigBird, which increase the maximum input length to 4,096 sub-words, severely truncate texts in three of the six datasets of LexGLUE. Simpler linear classifiers with TF-IDF features can handle texts of any length, require far less resources to train and deploy, but are usually outperformed by pre-trained Transformers. We explore two directions to cope with long legal texts: (i) modifying a Longformer warm-started from LegalBERT to handle even longer texts (up to 8,192 sub-words), and (ii) modifying LegalBERT to use TF-IDF representations. The first approach is the best in terms of performance, surpassing a hierarchical version of LegalBERT, which was the previous state of the art in LexGLUE. The second approach leads to computationally more efficient models at the expense of lower performance, but the resulting models still outperform overall a linear SVM with TF-IDF features in long legal document classification.

AB - Pre-trained Transformers currently dominate most NLP tasks. They impose, however, limits on the maximum input length (512 sub-words in BERT), which are too restrictive in the legal domain. Even sparse-attention models, such as Longformer and BigBird, which increase the maximum input length to 4,096 sub-words, severely truncate texts in three of the six datasets of LexGLUE. Simpler linear classifiers with TF-IDF features can handle texts of any length, require far less resources to train and deploy, but are usually outperformed by pre-trained Transformers. We explore two directions to cope with long legal texts: (i) modifying a Longformer warm-started from LegalBERT to handle even longer texts (up to 8,192 sub-words), and (ii) modifying LegalBERT to use TF-IDF representations. The first approach is the best in terms of performance, surpassing a hierarchical version of LegalBERT, which was the previous state of the art in LexGLUE. The second approach leads to computationally more efficient models at the expense of lower performance, but the resulting models still outperform overall a linear SVM with TF-IDF features in long legal document classification.

UR - http://www.scopus.com/inward/record.url?scp=85154613920&partnerID=8YFLogxK

M3 - Article in proceedings

AN - SCOPUS:85154613920

SP - 130

EP - 142

BT - NLLP 2022 - Natural Legal Language Processing Workshop 2022, Proceedings of the Workshop

PB - Association for Computational Linguistics (ACL)

Y2 - 8 December 2022

ER -
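
The abstract quoted in the BibTeX and RIS records above describes warm-starting a longer-input model from LegalBERT so that it can read up to 8,192 sub-words. The snippet below is a minimal sketch of that idea only, not the authors' released code: it tiles the 512-entry position-embedding table of the public nlpaueb/legal-bert-base-uncased checkpoint out to 8,192 positions with the Hugging Face Transformers API. The checkpoint name, variable names, and the tiling strategy are illustrative assumptions.

# Hypothetical sketch only -- not the authors' released code. It illustrates the
# warm-starting idea from the abstract: reuse LegalBERT's pre-trained weights while
# stretching its 512-entry position-embedding table to 8,192 positions by tiling.
# Exact attribute names depend on the Hugging Face Transformers version.
import torch
from transformers import AutoModel, AutoTokenizer

MAX_POS = 8192  # target maximum input length in sub-words (per the abstract)
CHECKPOINT = "nlpaueb/legal-bert-base-uncased"  # public LegalBERT checkpoint

model = AutoModel.from_pretrained(CHECKPOINT)
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)

old_weights = model.embeddings.position_embeddings.weight.data  # shape (512, hidden)
old_len, hidden = old_weights.shape

# Tile the pre-trained position embeddings until they cover MAX_POS positions.
new_weights = old_weights.new_empty(MAX_POS, hidden)
for start in range(0, MAX_POS, old_len):
    end = min(start + old_len, MAX_POS)
    new_weights[start:end] = old_weights[: end - start]

model.embeddings.position_embeddings = torch.nn.Embedding.from_pretrained(
    new_weights, freeze=False
)
model.embeddings.position_ids = torch.arange(MAX_POS).unsqueeze(0)  # refresh position-id buffer
if hasattr(model.embeddings, "token_type_ids"):
    model.embeddings.token_type_ids = torch.zeros(1, MAX_POS, dtype=torch.long)
model.config.max_position_embeddings = MAX_POS
tokenizer.model_max_length = MAX_POS

# A model modified this way would still need sparse (e.g. Longformer-style) attention
# and further pre-training to be practical at this length: full self-attention over
# 8,192 tokens is quadratic in time and memory.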
