Copyright Violations and Large Language Models

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed


Language models may memorize more than just facts, including entire chunks of text seen during training. Fair use exemptions to copyright laws typically allow limited use of copyrighted material without permission from the copyright holder, but only for extraction of information from copyrighted materials, not for verbatim reproduction. This work explores the issue of copyright violations and large language models through the lens of verbatim memorization, focusing on possible redistribution of copyrighted text. We present experiments with a range of language models over a collection of popular books and coding problems, providing a conservative characterization of the extent to which language models can redistribute these materials. Overall, this research highlights the need for further examination and the potential impact on future developments in natural language processing to ensure adherence to copyright regulations. Code is at https://github.com/coastalcph/CopyrightLLMs.
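A conservative measure of verbatim redistribution can be obtained by comparing model output against the source text character by character. As a minimal sketch (not the paper's exact metric), assuming we quantify overlap as the length of the longest contiguous substring shared between a copyrighted passage and a model's generation:

```python
def longest_common_substring(a: str, b: str) -> int:
    """Length of the longest contiguous substring shared by a and b.

    Standard dynamic-programming approach: prev[j] holds the length of
    the common suffix ending at a[i-1] and b[j-1] from the previous row.
    """
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1] + 1
                best = max(best, curr[j])
        prev = curr
    return best


# Hypothetical example: how much of a passage a model output reproduces verbatim.
source = "it was the best of times, it was the worst of times"
output = "the model said: it was the best of times, and so on"
print(longest_common_substring(source, output))  # prints 26
```

In practice one would run this over many prompts and passages and report the distribution of overlap lengths; a long shared span is strong evidence of memorized, verbatim reproduction rather than paraphrase.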
Original language: English
Title: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Publisher: Association for Computational Linguistics (ACL)
Publication date: 2023
Pages: 7403–7412
ISBN (print): 979-8-89176-060-8
DOI
Status: Published - 2023
Event: 2023 Conference on Empirical Methods in Natural Language Processing - Singapore
Duration: 6 Dec 2023 → 10 Dec 2023

Conference

Conference: 2023 Conference on Empirical Methods in Natural Language Processing
City: Singapore
Period: 06/12/2023 → 10/12/2023

ID: 381725956