Language Modelling with Pixels

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Language Modelling with Pixels. / Rust, Phillip; Lotz, Jonas F.; Bugliarello, Emanuele; Salesky, Elizabeth; Lhoneux, Miryam de; Elliott, Desmond.

The Eleventh International Conference on Learning Representations. arXiv.org, 2023.

Harvard

Rust, P, Lotz, JF, Bugliarello, E, Salesky, E, Lhoneux, MD & Elliott, D 2023, Language Modelling with Pixels. in The Eleventh International Conference on Learning Representations. arXiv.org, 11th International Conference on Learning Representations - ICLR 2023, Kigali, Rwanda, 01/05/2023.

APA

Rust, P., Lotz, J. F., Bugliarello, E., Salesky, E., Lhoneux, M. D., & Elliott, D. (2023). Language Modelling with Pixels. In The Eleventh International Conference on Learning Representations arXiv.org.

Vancouver

Rust P, Lotz JF, Bugliarello E, Salesky E, Lhoneux MD, Elliott D. Language Modelling with Pixels. In The Eleventh International Conference on Learning Representations. arXiv.org. 2023

Author

Rust, Phillip ; Lotz, Jonas F. ; Bugliarello, Emanuele ; Salesky, Elizabeth ; Lhoneux, Miryam de ; Elliott, Desmond. / Language Modelling with Pixels. The Eleventh International Conference on Learning Representations. arXiv.org, 2023.

Bibtex

@inproceedings{acfb86e5200c4ed7aa82224b2974c95e,
title = "Language Modelling with Pixels",
abstract = " Language models are defined over a finite set of inputs, which creates a vocabulary bottleneck when we attempt to scale the number of supported languages. Tackling this bottleneck results in a trade-off between what can be represented in the embedding matrix and computational issues in the output layer. This paper introduces PIXEL, the Pixel-based Encoder of Language, which suffers from neither of these issues. PIXEL is a pretrained language model that renders text as images, making it possible to transfer representations across languages based on orthographic similarity or the co-activation of pixels. PIXEL is trained to reconstruct the pixels of masked patches instead of predicting a distribution over tokens. We pretrain the 86M parameter PIXEL model on the same English data as BERT and evaluate on syntactic and semantic tasks in typologically diverse languages, including various non-Latin scripts. We find that PIXEL substantially outperforms BERT on syntactic and semantic processing tasks on scripts that are not found in the pretraining data, but PIXEL is slightly weaker than BERT when working with Latin scripts. Furthermore, we find that PIXEL is more robust than BERT to orthographic attacks and linguistic code-switching, further confirming the benefits of modelling language with pixels. ",
keywords = "cs.CL, cs.AI, cs.CV, cs.LG",
author = "Phillip Rust and Lotz, {Jonas F.} and Emanuele Bugliarello and Elizabeth Salesky and Lhoneux, {Miryam de} and Desmond Elliott",
year = "2023",
language = "English",
booktitle = "The Eleventh International Conference on Learning Representations",
publisher = "arXiv.org",
note = "11h International Conference on Learning Representations - ICLR 2023 ; Conference date: 01-05-2023 Through 05-05-2023",

}

RIS

TY - GEN

T1 - Language Modelling with Pixels

AU - Rust, Phillip

AU - Lotz, Jonas F.

AU - Bugliarello, Emanuele

AU - Salesky, Elizabeth

AU - Lhoneux, Miryam de

AU - Elliott, Desmond

PY - 2023

Y1 - 2023

N2 - Language models are defined over a finite set of inputs, which creates a vocabulary bottleneck when we attempt to scale the number of supported languages. Tackling this bottleneck results in a trade-off between what can be represented in the embedding matrix and computational issues in the output layer. This paper introduces PIXEL, the Pixel-based Encoder of Language, which suffers from neither of these issues. PIXEL is a pretrained language model that renders text as images, making it possible to transfer representations across languages based on orthographic similarity or the co-activation of pixels. PIXEL is trained to reconstruct the pixels of masked patches instead of predicting a distribution over tokens. We pretrain the 86M parameter PIXEL model on the same English data as BERT and evaluate on syntactic and semantic tasks in typologically diverse languages, including various non-Latin scripts. We find that PIXEL substantially outperforms BERT on syntactic and semantic processing tasks on scripts that are not found in the pretraining data, but PIXEL is slightly weaker than BERT when working with Latin scripts. Furthermore, we find that PIXEL is more robust than BERT to orthographic attacks and linguistic code-switching, further confirming the benefits of modelling language with pixels.

AB - Language models are defined over a finite set of inputs, which creates a vocabulary bottleneck when we attempt to scale the number of supported languages. Tackling this bottleneck results in a trade-off between what can be represented in the embedding matrix and computational issues in the output layer. This paper introduces PIXEL, the Pixel-based Encoder of Language, which suffers from neither of these issues. PIXEL is a pretrained language model that renders text as images, making it possible to transfer representations across languages based on orthographic similarity or the co-activation of pixels. PIXEL is trained to reconstruct the pixels of masked patches instead of predicting a distribution over tokens. We pretrain the 86M parameter PIXEL model on the same English data as BERT and evaluate on syntactic and semantic tasks in typologically diverse languages, including various non-Latin scripts. We find that PIXEL substantially outperforms BERT on syntactic and semantic processing tasks on scripts that are not found in the pretraining data, but PIXEL is slightly weaker than BERT when working with Latin scripts. Furthermore, we find that PIXEL is more robust than BERT to orthographic attacks and linguistic code-switching, further confirming the benefits of modelling language with pixels.

KW - cs.CL

KW - cs.AI

KW - cs.CV

KW - cs.LG

M3 - Article in proceedings

BT - The Eleventh International Conference on Learning Representations

PB - arXiv.org

T2 - 11th International Conference on Learning Representations - ICLR 2023

Y2 - 1 May 2023 through 5 May 2023

ER -
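
The abstract describes PIXEL's pretraining objective: text is rendered as an image, split into patches, a subset of patches is masked, and the model is trained to reconstruct the masked pixels rather than predict a distribution over vocabulary tokens. As a rough illustration of that objective only (a minimal sketch, not the authors' implementation), here is a toy PyTorch version; the architecture size, patch dimensions, and masking ratio below are hypothetical stand-ins.

import torch
import torch.nn as nn

class ToyPixelReconstructor(nn.Module):
    """Illustrative masked-patch reconstruction model (hypothetical, not PIXEL)."""
    def __init__(self, patch_dim=256, d_model=128):
        super().__init__()
        self.embed = nn.Linear(patch_dim, d_model)   # patch pixels -> embeddings
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.decode = nn.Linear(d_model, patch_dim)  # embeddings -> patch pixels

    def forward(self, patches, mask):
        # patches: (batch, num_patches, patch_dim) pixels of rendered text
        # mask:    (batch, num_patches) bool, True where a patch is masked out
        hidden = self.encoder(self.embed(patches.masked_fill(mask.unsqueeze(-1), 0.0)))
        recon = self.decode(hidden)
        # As in the abstract: the loss reconstructs the pixels of masked
        # patches instead of predicting a distribution over tokens.
        return ((recon - patches) ** 2)[mask].mean()

# Hypothetical usage: a batch of 2 rendered-text images, 196 patches of 16x16.
patches = torch.rand(2, 196, 256)
mask = torch.rand(2, 196) < 0.25   # mask roughly 25% of the patches
loss = ToyPixelReconstructor()(patches, mask)
loss.backward()

PIXEL itself uses a much larger ViT-MAE-style encoder-decoder with span masking; the sketch above only shows the shape of the pixel-reconstruction objective.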
