TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Documents

  • TX-Ray

    Final published version, 1.43 MB, PDF document

While state-of-the-art NLP explainability (XAI) methods focus on explaining per-sample decisions in supervised end or probing tasks, this is insufficient to explain and quantify model knowledge transfer during (un-)supervised training. Thus, for TX-Ray, we modify the established computer vision explainability principle of ‘visualizing preferred inputs of neurons’ to make it usable both for NLP and for transfer analysis. This allows one to analyze, track and quantify how self- or supervised NLP models first build knowledge abstractions in pretraining (1), and then transfer abstractions to a new domain (2), or adapt them during supervised finetuning (3) – see Fig. 1. TX-Ray expresses neurons as feature preference distributions to quantify fine-grained knowledge transfer or adaptation and to guide human analysis. We find that, similar to Lottery Ticket based pruning, TX-Ray based pruning can improve test set generalization, and that it can reveal how early stages of self-supervision automatically learn linguistic abstractions such as parts-of-speech.
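The abstract describes representing each neuron as a feature preference distribution and comparing such distributions across training stages. The snippet below is a minimal, illustrative Python sketch of that idea under simplifying assumptions: discrete per-token features (e.g. POS tags), each token credited to the neuron that activates most strongly on it, and a Hellinger-style distance between distributions. Function names and the exact aggregation rule are assumptions for illustration, not the authors' released code.

```python
import numpy as np
from collections import defaultdict


def feature_preference_distributions(activations, features, n_neurons):
    """Build one feature-preference distribution per neuron (illustrative sketch).

    activations: (n_tokens, n_neurons) array of neuron activations over a corpus
    features:    length-n_tokens sequence of discrete features per token
                 (e.g. POS tags or the tokens themselves)

    Each token is credited to the neuron that fires most strongly on it;
    a neuron's distribution is the normalised feature histogram of those tokens.
    """
    counts = [defaultdict(float) for _ in range(n_neurons)]
    preferred = np.asarray(activations).argmax(axis=1)  # winning neuron per token
    for neuron, feat in zip(preferred, features):
        counts[neuron][feat] += 1.0
    dists = []
    for c in counts:
        total = sum(c.values()) or 1.0
        dists.append({f: v / total for f, v in c.items()})
    return dists


def hellinger(p, q):
    """Hellinger distance between two discrete distributions given as dicts."""
    keys = set(p) | set(q)
    return np.sqrt(0.5 * sum((np.sqrt(p.get(k, 0.0)) - np.sqrt(q.get(k, 0.0))) ** 2
                             for k in keys))


# Hypothetical usage: per-neuron shift between a pretrained and a fine-tuned model.
# shifts = [hellinger(p, q) for p, q in zip(dists_pretrained, dists_finetuned)]
```

Larger per-neuron distances in such a comparison would indicate neurons whose feature preferences changed most during domain transfer or finetuning, which is the kind of fine-grained transfer signal the abstract refers to.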
Original language: English
Title of host publication: Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)
Editors: Jonas Peters, David Sontag
Publisher: PMLR
Publication date: 2020
Pages: 440-449
Publication status: Published - 2020
Event: 36th Conference on Uncertainty in Artificial Intelligence (UAI), Virtual/online
Duration: 3 Aug 2020 - 6 Aug 2020

Conference

Conference: 36th Conference on Uncertainty in Artificial Intelligence (UAI)
City: Virtual/online
Period: 03/08/2020 - 06/08/2020
Series: Proceedings of Machine Learning Research
Volume: 124
ISSN: 1938-7228


ID: 255044224