TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

TX-Ray : Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP. / Rethmeier, Nils; Saxena, Vageesh Kumar; Augenstein, Isabelle.

Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI). ed. / Jonas Peters; David Sontag. PMLR, 2020. p. 440-449 (Proceedings of Machine Learning Research, Vol. 124).


Harvard

Rethmeier, N, Saxena, VK & Augenstein, I 2020, TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP. in J Peters & D Sontag (eds), Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI). PMLR, Proceedings of Machine Learning Research, vol. 124, pp. 440-449, 36th Conference on Uncertainty in Artificial Intelligence (UAI), Virtual, online, 03/08/2020.

APA

Rethmeier, N., Saxena, V. K., & Augenstein, I. (2020). TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP. In J. Peters, & D. Sontag (Eds.), Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI) (pp. 440-449). PMLR. Proceedings of Machine Learning Research Vol. 124

Vancouver

Rethmeier N, Saxena VK, Augenstein I. TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP. In Peters J, Sontag D, editors, Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI). PMLR. 2020. p. 440-449. (Proceedings of Machine Learning Research, Vol. 124).

Author

Rethmeier, Nils ; Saxena, Vageesh Kumar ; Augenstein, Isabelle. / TX-Ray : Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP. Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI). editor / Jonas Peters ; David Sontag. PMLR, 2020. pp. 440-449 (Proceedings of Machine Learning Research, Vol. 124).

Bibtex

@inproceedings{50f8b09f6e0e40d0baed3a93bf6ede69,
title = "TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP",
abstract = "While state-of-the-art NLP explainability (XAI) methods focus on explaining per-sample decisions in supervised end or probing tasks, this is insufficient to explain and quantify model knowledge transfer during (un-)supervised training. Thus, for TX-Ray, we modify the established computer vision explainability principle of {\textquoteleft}visualizing preferred inputs of neurons{\textquoteright} to make it usable for both NLP and for transfer analysis. This allows one to analyze, track and quantify how self- or supervised NLP models first build knowledge abstractions in pretraining (1), and then transfer abstractions to a new domain (2), or adapt them during supervised finetuning (3) – see Fig. 1. TX-Ray expresses neurons as feature preference distributions to quantify fine-grained knowledge transfer or adaptation and guide human analysis. We find that, similar to Lottery Ticket based pruning, TX-Ray based pruning can improve test set generalization and that it can reveal how early stages of self-supervision automatically learn linguistic abstractions like parts-of-speech.",
author = "Nils Rethmeier and Saxena, {Vageesh Kumar} and Isabelle Augenstein",
year = "2020",
language = "English",
series = "Proceedings of Machine Learning Research",
pages = "440--449",
editor = "Peters, {Jonas} and Sontag, {David}",
booktitle = "Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)",
publisher = "PMLR",
note = "36th Conference on Uncertainty in Artificial Intelligence (UAI) ; Conference date: 03-08-2020 Through 06-08-2020",

}

RIS

TY - GEN

T1 - TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP

AU - Rethmeier, Nils

AU - Saxena, Vageesh Kumar

AU - Augenstein, Isabelle

PY - 2020

Y1 - 2020

N2 - While state-of-the-art NLP explainability (XAI) methods focus on explaining per-sample decisions in supervised end or probing tasks, this is insufficient to explain and quantify model knowledge transfer during (un-)supervised training. Thus, for TX-Ray, we modify the established computer vision explainability principle of ‘visualizing preferred inputs of neurons’ to make it usable for both NLP and for transfer analysis. This allows one to analyze, track and quantify how self- or supervised NLP models first build knowledge abstractions in pretraining (1), and then transfer abstractions to a new domain (2), or adapt them during supervised finetuning (3) – see Fig. 1. TX-Ray expresses neurons as feature preference distributions to quantify fine-grained knowledge transfer or adaptation and guide human analysis. We find that, similar to Lottery Ticket based pruning, TX-Ray based pruning can improve test set generalization and that it can reveal how early stages of self-supervision automatically learn linguistic abstractions like parts-of-speech.

AB - While state-of-the-art NLP explainability (XAI) methods focus on explaining per-sample decisions in supervised end or probing tasks, this is insufficient to explain and quantify model knowledge transfer during (un-)supervised training. Thus, for TX-Ray, we modify the established computer vision explainability principle of ‘visualizing preferred inputs of neurons’ to make it usable for both NLP and for transfer analysis. This allows one to analyze, track and quantify how self- or supervised NLP models first build knowledge abstractions in pretraining (1), and then transfer abstractions to a new domain (2), or adapt them during supervised finetuning (3) – see Fig. 1. TX-Ray expresses neurons as feature preference distributions to quantify fine-grained knowledge transfer or adaptation and guide human analysis. We find that, similar to Lottery Ticket based pruning, TX-Ray based pruning can improve test set generalization and that it can reveal how early stages of self-supervision automatically learn linguistic abstractions like parts-of-speech.

M3 - Article in proceedings

T3 - Proceedings of Machine Learning Research

SP - 440

EP - 449

BT - Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)

A2 - Peters, Jonas

A2 - Sontag, David

PB - PMLR

Y2 - 3 August 2020 through 6 August 2020

ER -

ID: 255044224
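
Note: the abstract above describes expressing neurons as feature preference distributions whose shift quantifies knowledge transfer. The following is only a minimal illustrative sketch of that general idea, assuming "preference" is read as the histogram of input tokens for which a neuron is the maximally activated unit and that distribution shift is measured with the Hellinger distance; the function names, toy data, and these modelling choices are hypothetical simplifications, not the paper's exact procedure.

```python
import numpy as np

def feature_preference_distributions(activations, token_ids, vocab_size):
    """Build one token-preference distribution per neuron.

    activations: (num_tokens, num_neurons) neuron activations, one row per token occurrence.
    token_ids:   (num_tokens,) id of the input token for each row.
    Returns (num_neurons, vocab_size): each row is a normalized histogram of the
    tokens for which that neuron was the maximally activated neuron.
    """
    num_neurons = activations.shape[1]
    counts = np.zeros((num_neurons, vocab_size))
    winners = activations.argmax(axis=1)        # neuron that "prefers" each token
    for neuron, token in zip(winners, token_ids):
        counts[neuron, token] += 1
    totals = counts.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1.0                   # neurons that never win keep an all-zero row
    return counts / totals

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (0 = identical)."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Toy usage: compare each neuron's preferences before and after fine-tuning.
rng = np.random.default_rng(0)
acts_pre  = rng.random((1000, 8))               # stand-in for pretraining activations
acts_post = rng.random((1000, 8))               # stand-in for fine-tuned activations
tokens    = rng.integers(0, 50, size=1000)

dist_pre  = feature_preference_distributions(acts_pre,  tokens, vocab_size=50)
dist_post = feature_preference_distributions(acts_post, tokens, vocab_size=50)
shift = [hellinger(p, q) for p, q in zip(dist_pre, dist_post)]
print("per-neuron preference shift:", np.round(shift, 3))
```

In this toy reading, neurons whose preference histograms barely move between training stages would indicate transferred abstractions, while large shifts would indicate adaptation; consult the paper itself for the actual definitions and measures used.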