Latent Multi-Task Architecture Learning

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Latent Multi-Task Architecture Learning. / Ruder, Sebastian; Bingel, Joachim; Augenstein, Isabelle; Søgaard, Anders.

Proceedings of 33rd AAAI Conference on Artificial Intelligence, AAAI 2019. AAAI Press, 2019. p. 4822-4829.


Harvard

Ruder, S, Bingel, J, Augenstein, I & Søgaard, A 2019, Latent Multi-Task Architecture Learning. in Proceedings of 33rd AAAI Conference on Artificial Intelligence, AAAI 2019. AAAI Press, pp. 4822-4829, 33rd AAAI Conference on Artificial Intelligence - AAAI 2019, Honolulu, United States, 27/01/2019. https://doi.org/10.1609/aaai.v33i01.33014822

APA

Ruder, S., Bingel, J., Augenstein, I., & Søgaard, A. (2019). Latent Multi-Task Architecture Learning. In Proceedings of 33rd AAAI Conference on Artificial Intelligence, AAAI 2019 (pp. 4822-4829). AAAI Press. https://doi.org/10.1609/aaai.v33i01.33014822

Vancouver

Ruder S, Bingel J, Augenstein I, Søgaard A. Latent Multi-Task Architecture Learning. In Proceedings of 33rd AAAI Conference on Artificial Intelligence, AAAI 2019. AAAI Press. 2019. p. 4822-4829 https://doi.org/10.1609/aaai.v33i01.33014822

Author

Ruder, Sebastian ; Bingel, Joachim ; Augenstein, Isabelle ; Søgaard, Anders. / Latent Multi-Task Architecture Learning. Proceedings of 33rd AAAI Conference on Artificial Intelligence, AAAI 2019. AAAI Press, 2019. pp. 4822-4829

Bibtex

@inproceedings{3fd6798c506a47288ab1791916c3bd8f,
title = "Latent Multi-Task Architecture Learning",
abstract = "Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)–(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15% average error reductions over common approaches to MTL.",
author = "Sebastian Ruder and Joachim Bingel and Isabelle Augenstein and Anders S{\o}gaard",
year = "2019",
doi = "10.1609/aaai.v33i01.33014822",
language = "English",
pages = "4822--4829",
booktitle = "Proceedings of 33nd AAAI Conference on Artificial Intelligence, AAAI 2019",
publisher = "AAAI Press",
note = "33rd AAAI Conference on Artificial Intelligence - AAAI 2019 ; Conference date: 27-01-2019 Through 01-02-2019",

}

RIS

TY - GEN

T1 - Latent Multi-Task Architecture Learning

AU - Ruder, Sebastian

AU - Bingel, Joachim

AU - Augenstein, Isabelle

AU - Søgaard, Anders

PY - 2019

Y1 - 2019

N2 - Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)–(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15% average error reductions over common approaches to MTL.

AB - Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)–(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15% average error reductions over common approaches to MTL.

U2 - 10.1609/aaai.v33i01.33014822

DO - 10.1609/aaai.v33i01.33014822

M3 - Article in proceedings

SP - 4822

EP - 4829

BT - Proceedings of 33rd AAAI Conference on Artificial Intelligence, AAAI 2019

PB - AAAI Press

T2 - 33rd AAAI Conference on Artificial Intelligence - AAAI 2019

Y2 - 27 January 2019 through 1 February 2019

ER -
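
The abstract describes jointly learning (a) which layers or subspaces share parameters, (b) how much they share, and (c) the relative weights of the task losses. Purely as an illustrative sketch of that idea, and not the authors' implementation, the Python (PyTorch) snippet below wires two task-specific networks together with trainable mixing coefficients and trainable loss weights; every class, parameter, and name here is invented for this example.

# Hypothetical sketch (not the paper's code): two tasks whose layers are
# mixed by trainable coefficients, with trainable relative loss weights.
import torch
import torch.nn as nn

class LatentSharingLayer(nn.Module):
    """One hidden layer per task plus a 2x2 trainable mixing matrix that
    decides how much each task reads from the other task's layer output."""
    def __init__(self, dim):
        super().__init__()
        self.layer_a = nn.Linear(dim, dim)
        self.layer_b = nn.Linear(dim, dim)
        # Initialised near the identity, i.e. close to "no sharing".
        self.alpha = nn.Parameter(torch.eye(2) + 0.05 * torch.randn(2, 2))

    def forward(self, x_a, x_b):
        h_a = torch.relu(self.layer_a(x_a))
        h_b = torch.relu(self.layer_b(x_b))
        # Each task's representation is a learned combination of both tasks.
        out_a = self.alpha[0, 0] * h_a + self.alpha[0, 1] * h_b
        out_b = self.alpha[1, 0] * h_a + self.alpha[1, 1] * h_b
        return out_a, out_b

class TwoTaskModel(nn.Module):
    def __init__(self, dim, n_classes_a, n_classes_b, n_layers=2):
        super().__init__()
        self.layers = nn.ModuleList([LatentSharingLayer(dim) for _ in range(n_layers)])
        self.head_a = nn.Linear(dim, n_classes_a)
        self.head_b = nn.Linear(dim, n_classes_b)
        # Trainable relative task-loss weights, kept positive via softmax.
        self.loss_logits = nn.Parameter(torch.zeros(2))

    def forward(self, x):
        h_a = h_b = x
        for layer in self.layers:
            h_a, h_b = layer(h_a, h_b)
        return self.head_a(h_a), self.head_b(h_b)

    def combined_loss(self, loss_a, loss_b):
        # Note: naively trained weights can collapse onto the easier task;
        # practical schemes (including the paper's) constrain or regularise
        # this weighting differently.
        w = torch.softmax(self.loss_logits, dim=0)
        return w[0] * loss_a + w[1] * loss_b

# Minimal usage example with random data.
model = TwoTaskModel(dim=16, n_classes_a=3, n_classes_b=5)
x = torch.randn(8, 16)
logits_a, logits_b = model(x)
loss = model.combined_loss(logits_a.square().mean(), logits_b.square().mean())
loss.backward()

Because the mixing matrices and loss weights are ordinary parameters, the degree of sharing is learned by gradient descent along with everything else, which is the sense in which the architecture itself is "latent" in the abstract's description.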
