A Latent-Variable Model for Intrinsic Probing

Research output: Working paper › Preprint › Research

Standard

A Latent-Variable Model for Intrinsic Probing. / Stańczak, Karolina; Hennigen, Lucas Torroba; Williams, Adina; Cotterell, Ryan; Augenstein, Isabelle.

arxiv.org, 2022.

Research output: Working paper › Preprint › Research

Harvard

Stańczak, K, Hennigen, LT, Williams, A, Cotterell, R & Augenstein, I 2022 'A Latent-Variable Model for Intrinsic Probing' arxiv.org.

APA

Stańczak, K., Hennigen, L. T., Williams, A., Cotterell, R., & Augenstein, I. (2022). A Latent-Variable Model for Intrinsic Probing. arxiv.org.

Vancouver

Stańczak K, Hennigen LT, Williams A, Cotterell R, Augenstein I. A Latent-Variable Model for Intrinsic Probing. arxiv.org. 2022 Jan 20.

Author

Stańczak, Karolina ; Hennigen, Lucas Torroba ; Williams, Adina ; Cotterell, Ryan ; Augenstein, Isabelle. / A Latent-Variable Model for Intrinsic Probing. arxiv.org, 2022.

Bibtex

@techreport{71e1979139a542afa5caf3a51201e5b9,
title = "A Latent-Variable Model for Intrinsic Probing",
abstract = " The success of pre-trained contextualized representations has prompted researchers to analyze them for the presence of linguistic information. Indeed, it is natural to assume that these pre-trained representations do encode some level of linguistic knowledge as they have brought about large empirical improvements on a wide variety of NLP tasks, which suggests they are learning true linguistic generalization. In this work, we focus on intrinsic probing, an analysis technique where the goal is not only to identify whether a representation encodes a linguistic attribute, but also to pinpoint where this attribute is encoded. We propose a novel latent-variable formulation for constructing intrinsic probes and derive a tractable variational approximation to the log-likelihood. Our results show that our model is versatile and yields tighter mutual information estimates than two intrinsic probes previously proposed in the literature. Finally, we find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax. ",
keywords = "cs.CL",
author = "Karolina Sta{\'n}czak and Hennigen, {Lucas Torroba} and Adina Williams and Ryan Cotterell and Isabelle Augenstein",
year = "2022",
month = jan,
day = "20",
language = "English",
publisher = "arxiv.org",
type = "WorkingPaper",
institution = "arxiv.org",

}

RIS

TY - UNPB

T1 - A Latent-Variable Model for Intrinsic Probing

AU - Stańczak, Karolina

AU - Hennigen, Lucas Torroba

AU - Williams, Adina

AU - Cotterell, Ryan

AU - Augenstein, Isabelle

PY - 2022/1/20

Y1 - 2022/1/20

N2 - The success of pre-trained contextualized representations has prompted researchers to analyze them for the presence of linguistic information. Indeed, it is natural to assume that these pre-trained representations do encode some level of linguistic knowledge as they have brought about large empirical improvements on a wide variety of NLP tasks, which suggests they are learning true linguistic generalization. In this work, we focus on intrinsic probing, an analysis technique where the goal is not only to identify whether a representation encodes a linguistic attribute, but also to pinpoint where this attribute is encoded. We propose a novel latent-variable formulation for constructing intrinsic probes and derive a tractable variational approximation to the log-likelihood. Our results show that our model is versatile and yields tighter mutual information estimates than two intrinsic probes previously proposed in the literature. Finally, we find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.

AB - The success of pre-trained contextualized representations has prompted researchers to analyze them for the presence of linguistic information. Indeed, it is natural to assume that these pre-trained representations do encode some level of linguistic knowledge as they have brought about large empirical improvements on a wide variety of NLP tasks, which suggests they are learning true linguistic generalization. In this work, we focus on intrinsic probing, an analysis technique where the goal is not only to identify whether a representation encodes a linguistic attribute, but also to pinpoint where this attribute is encoded. We propose a novel latent-variable formulation for constructing intrinsic probes and derive a tractable variational approximation to the log-likelihood. Our results show that our model is versatile and yields tighter mutual information estimates than two intrinsic probes previously proposed in the literature. Finally, we find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.

KW - cs.CL

M3 - Preprint

BT - A Latent-Variable Model for Intrinsic Probing

PB - arxiv.org

ER -
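
Note on the method named in the abstract: the paper describes a latent-variable formulation of intrinsic probing with a tractable variational approximation to the log-likelihood, which is then used to obtain mutual information estimates. The LaTeX sketch below illustrates the general shape of such an objective under stated assumptions: a latent subset of representation dimensions C, a variational distribution q(C), and a probe p_\theta(\pi \mid h_C) over attribute values \pi. The symbols and the exact factorization are chosen here for illustration and are not taken from the paper.

% Sketch: latent-variable probe objective and the induced MI lower bound.
% C: latent subset of representation dimensions; pi: linguistic attribute;
% h: contextual representation; q(C): variational distribution over subsets.
\begin{align}
  \log p_\theta(\pi \mid h)
    &= \log \sum_{C} p_\theta(\pi \mid h_C)\, p(C) \\
    &\geq \mathbb{E}_{q(C)}\!\left[\log p_\theta(\pi \mid h_C)\right]
       - \mathrm{KL}\!\left(q(C) \,\|\, p(C)\right)
       \qquad \text{(ELBO, by Jensen's inequality)}
\end{align}
% Because the cross-entropy of any probe upper-bounds the true conditional
% entropy, the probe's log-likelihood yields a lower bound on mutual information:
\begin{equation}
  \mathrm{I}(\Pi; H_C) \;=\; \mathrm{H}(\Pi) - \mathrm{H}(\Pi \mid H_C)
  \;\geq\; \mathrm{H}(\Pi) + \mathbb{E}\!\left[\log p_\theta(\pi \mid h_C)\right].
\end{equation}
% A tighter (larger) bound therefore corresponds to a better probe, which is
% the sense in which the abstract reports "tighter mutual information estimates".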
