Identifying beneficial task relations for multi-task learning in deep neural networks

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Identifying beneficial task relations for multi-task learning in deep neural networks. / Bingel, Joachim; Søgaard, Anders.

Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: volume 2, short papers. Vol. 2. Association for Computational Linguistics, 2017. p. 164-169.

Harvard

Bingel, J & Søgaard, A 2017, Identifying beneficial task relations for multi-task learning in deep neural networks. in Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: volume 2, short papers. vol. 2, Association for Computational Linguistics, pp. 164-169, 15th Conference of the European Chapter of the Association for Computational Linguistics, Valencia, Spain, 03/04/2017. <http://aclweb.org/anthology/E17-2026>

APA

Bingel, J., & Søgaard, A. (2017). Identifying beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: volume 2, short papers (Vol. 2, pp. 164-169). Association for Computational Linguistics. http://aclweb.org/anthology/E17-2026

Vancouver

Bingel J, Søgaard A. Identifying beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: volume 2, short papers. Vol. 2. Association for Computational Linguistics. 2017. p. 164-169.

Author

Bingel, Joachim ; Søgaard, Anders. / Identifying beneficial task relations for multi-task learning in deep neural networks. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: volume 2, short papers. Vol. 2. Association for Computational Linguistics, 2017. pp. 164-169

Bibtex

@inproceedings{74e59ef98caa49819b44c501218862fa,
title = "Identifying beneficial task relations for multi-task learning in deep neural networks",
abstract = "Multi-task learning (MTL) in deep neural networks for NLP has recently received increasing interest due to some compelling benefits, including its potential to efficiently regularize models and to reduce the need for labeled data. While it has brought significant improvements in a number of NLP tasks, mixed results have been reported, and little is known about the conditions under which MTL leads to gains in NLP. This paper sheds light on the specific task relations that can lead to gains from MTL models over single-task setups.",
author = "Joachim Bingel and Anders S{\o}gaard",
year = "2017",
language = "English",
volume = "2",
pages = "164--169",
booktitle = "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
publisher = "Association for Computational Linguistics",
note = "null ; Conference date: 03-04-2017 Through 07-04-2017",

}
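
For readers unfamiliar with the setup the abstract describes, the sketch below illustrates hard parameter sharing, the most common MTL architecture in deep networks for NLP: a shared encoder feeds task-specific output heads, so training on auxiliary tasks regularizes the shared parameters. This is a minimal PyTorch illustration under that assumption, not the paper's exact model; the task names, label counts, and layer sizes are placeholders invented for the example.

import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Hard parameter sharing: shared encoder, one output head per task."""

    def __init__(self, vocab_size, emb_dim, hidden_dim, task_label_counts):
        super().__init__()
        # Everything below the heads is shared across all tasks.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # One classification head per task, e.g. {"pos": 17, "chunk": 23}
        # (placeholder tasks and label counts, not taken from the paper).
        self.heads = nn.ModuleDict({
            task: nn.Linear(2 * hidden_dim, n_labels)
            for task, n_labels in task_label_counts.items()
        })

    def forward(self, token_ids, task):
        states, _ = self.encoder(self.embed(token_ids))
        return self.heads[task](states)  # per-token logits for this task

# Training typically alternates between tasks: sample a task, take a batch
# from its data, and update the shared encoder plus that task's head.
model = HardSharingMTL(vocab_size=10000, emb_dim=64, hidden_dim=128,
                       task_label_counts={"pos": 17, "chunk": 23})
batch = torch.randint(0, 10000, (8, 20))  # 8 sentences of 20 token ids
logits = model(batch, task="pos")          # shape: (8, 20, 17)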

RIS

TY - GEN

T1 - Identifying beneficial task relations for multi-task learning in deep neural networks

AU - Bingel, Joachim

AU - Søgaard, Anders

N1 - Conference code: 15

PY - 2017

Y1 - 2017

N2 - Multi-task learning (MTL) in deep neural networks for NLP has recently received increasing interest due to some compelling benefits, including its potential to efficiently regularize models and to reduce the need for labeled data. While it has brought significant improvements in a number of NLP tasks, mixed results have been reported, and little is known about the conditions under which MTL leads to gains in NLP. This paper sheds light on the specific task relations that can lead to gains from MTL models over single-task setups.

AB - Multi-task learning (MTL) in deep neural networks for NLP has recently received increasing interest due to some compelling benefits, including its potential to efficiently regularize models and to reduce the need for labeled data. While it has brought significant improvements in a number of NLP tasks, mixed results have been reported, and little is known about the conditions under which MTL leads to gains in NLP. This paper sheds light on the specific task relations that can lead to gains from MTL models over single-task setups.

UR - http://www.scopus.com/inward/record.url?scp=85021633901&partnerID=8YFLogxK

M3 - Article in proceedings

AN - SCOPUS:85021633901

VL - 2

SP - 164

EP - 169

BT - Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics

PB - Association for Computational Linguistics

Y2 - 3 April 2017 through 7 April 2017

ER -