When does deep multi-task learning work for loosely related document classification tasks?

Publication: Contribution to book/anthology/report › Conference article in proceedings › Research › peer-reviewed

This work aims to contribute to our understanding of when multi-task learning through parameter sharing in deep neural networks leads to improvements over single-task learning. We focus on the setting of learning from loosely related tasks, for which no theoretical guarantees exist. We therefore approach the question empirically, studying which properties of datasets and single-task learning characteristics correlate with improvements from multi-task learning. We are the first to study this in a text classification setting and across more than 500 different task pairs.
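The multi-task setup the abstract refers to is hard parameter sharing: a shared encoder updated by all tasks, plus one output head per task. The abstract does not specify the authors' architecture, so the PyTorch sketch below is only an illustration of the general technique; the bag-of-words encoder, layer sizes, vocabulary size, and the alternating-batch training loop are all assumptions, not the paper's setup.

```python
# Minimal sketch of hard parameter sharing for two loosely related
# text classification tasks. All sizes and the encoder choice are
# illustrative assumptions, not the architecture used in the paper.
import torch
import torch.nn as nn

class MultiTaskClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=64, hidden_dim=32,
                 n_classes_a=2, n_classes_b=2):
        super().__init__()
        # Shared parameters: gradients from both tasks update these layers.
        self.shared = nn.Sequential(
            nn.EmbeddingBag(vocab_size, embed_dim),  # mean of token embeddings
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
        )
        # Task-specific heads: each is trained only on its own task's data.
        self.head_a = nn.Linear(hidden_dim, n_classes_a)
        self.head_b = nn.Linear(hidden_dim, n_classes_b)

    def forward(self, token_ids, task):
        h = self.shared(token_ids)
        return self.head_a(h) if task == "a" else self.head_b(h)

# Alternating-batch training: interleave batches from the two tasks so the
# shared encoder receives gradient signal from each. Random tensors stand
# in for real tokenized documents and labels.
model = MultiTaskClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(100):
    for task in ("a", "b"):
        tokens = torch.randint(0, 10000, (8, 20))  # batch of 8 docs, 20 tokens
        labels = torch.randint(0, 2, (8,))
        opt.zero_grad()
        loss = loss_fn(model(tokens, task), labels)
        loss.backward()
        opt.step()
```

The single-task baselines the paper compares against correspond to training the same encoder with only one head on one task's data; the study's question is which dataset and single-task properties predict when the shared variant wins.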
Original language: English
Title: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Publisher: Association for Computational Linguistics
Publication date: 2018
Pages: 1-8
Status: Published - 2018
Event: 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP - Brussels, Belgium
Duration: 1 Nov 2018 - 1 Nov 2018

Workshop

Workshop: 2018 EMNLP Workshop BlackboxNLP
Country: Belgium
City: Brussels
Period: 01/11/2018 - 01/11/2018

ID: 214759272