When does deep multi-task learning work for loosely related document classification tasks?

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

This work aims to contribute to our understanding of when multi-task learning through parameter sharing in deep neural networks leads to improvements over single-task learning. We focus on the setting of learning from loosely related tasks, for which no theoretical guarantees exist. We therefore approach the question empirically, studying which properties of datasets and single-task learning characteristics correlate with improvements from multi-task learning. We are the first to study this in a text classification setting and across more than 500 different task pairs.
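
The abstract refers to multi-task learning through parameter sharing in deep neural networks (often called hard parameter sharing). Below is a minimal sketch of what such a setup can look like for two text classification tasks; the module names, dimensions, and PyTorch framing are illustrative assumptions, not the paper's actual model.

# A minimal sketch of hard parameter sharing for two loosely related
# text classification tasks; all names and sizes are illustrative
# assumptions, not the architecture studied in the paper.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Shared layers (embedding + LSTM), reused by every task."""
    def __init__(self, vocab_size=10000, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        embedded = self.embed(token_ids)
        _, (hidden, _) = self.lstm(embedded)
        return hidden[-1]  # final hidden state as the document representation

class MultiTaskClassifier(nn.Module):
    """One shared encoder plus one task-specific output head per task."""
    def __init__(self, num_labels_per_task=(2, 2), hidden_dim=128):
        super().__init__()
        self.encoder = SharedEncoder(hidden_dim=hidden_dim)
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, n) for n in num_labels_per_task]
        )

    def forward(self, token_ids, task_id):
        return self.heads[task_id](self.encoder(token_ids))

# Training alternates between tasks, so gradients from both tasks
# update the shared encoder (random toy batches for illustration).
model = MultiTaskClassifier()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
toy_batches = [(torch.randint(0, 10000, (4, 20)), torch.randint(0, 2, (4,)))] * 2
for task_id, (x, y) in enumerate(toy_batches):
    optimizer.zero_grad()
    loss = loss_fn(model(x, task_id), y)
    loss.backward()
    optimizer.step()

Whether this kind of sharing helps or hurts depends on the task pair, which is precisely the question the paper investigates empirically.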
Original language: English
Title of host publication: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Publisher: Association for Computational Linguistics
Publication date: 2018
Pages: 1-8
Publication status: Published - 2018
Event: 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP - Brussels, Belgium
Duration: 1 Nov 2018 - 1 Nov 2018

Workshop

Workshop: 2018 EMNLP Workshop BlackboxNLP
Country: Belgium
City: Brussels
Period: 01/11/2018 - 01/11/2018
