Identifying beneficial task relations for multi-task learning in deep neural networks

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Multi-task learning (MTL) in deep neural networks for NLP has recently received increasing interest due to some compelling benefits, including its potential to efficiently regularize models and to reduce the need for labeled data. While it has brought significant improvements in a number of NLP tasks, mixed results have been reported, and little is known about the conditions under which MTL leads to gains in NLP. This paper sheds light on the specific task relations that can lead to gains from MTL models over single-task setups.

Original language: English
Title of host publication: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
Number of pages: 6
Volume: 2
Publisher: Association for Computational Linguistics
Publication date: 2017
Pages: 164-169
ISBN (Electronic): 9781510838604
Publication status: Published - 2017
Event: 15th Conference of the European Chapter of the Association for Computational Linguistics - Valencia, Spain
Duration: 3 Apr 2017 – 7 Apr 2017
Conference number: 15

ID: 184142439