Residual networks behave like ensembles of relatively shallow networks

Research output: Contribution to journal › Conference article › Research › peer-review

Standard

Residual networks behave like ensembles of relatively shallow networks. / Veit, Andreas; Wilber, Michael; Belongie, Serge.

In: Advances in Neural Information Processing Systems, 2016, p. 550-558.

Research output: Contribution to journal › Conference article › Research › peer-review

Harvard

Veit, A, Wilber, M & Belongie, S 2016, 'Residual networks behave like ensembles of relatively shallow networks', Advances in Neural Information Processing Systems, pp. 550-558.

APA

Veit, A., Wilber, M., & Belongie, S. (2016). Residual networks behave like ensembles of relatively shallow networks. Advances in Neural Information Processing Systems, 550-558.

Vancouver

Veit A, Wilber M, Belongie S. Residual networks behave like ensembles of relatively shallow networks. Advances in Neural Information Processing Systems. 2016;550-558.

Author

Veit, Andreas ; Wilber, Michael ; Belongie, Serge. / Residual networks behave like ensembles of relatively shallow networks. In: Advances in Neural Information Processing Systems. 2016 ; pp. 550-558.

BibTeX

@inproceedings{bfbff0e1e290428ca90a2a43bfddef42,
title = "Residual networks behave like ensembles of relatively shallow networks",
abstract = "In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.",
author = "Andreas Veit and Michael Wilber and Serge Belongie",
note = "Funding Information: We would like to thank Sam Kwak and Theofanis Karaletsos for insightful feedback. We also thank the reviewers of NIPS 2016 for their very constructive and helpful feedback and for suggesting the paper title. This work is partly funded by AOL through the Connected Experiences Laboratory (Author 1), an NSF Graduate Research Fellowship award (NSF DGE-1144153, Author 2), and a Google Focused Research award (Author 3). Publisher Copyright: {\textcopyright} 2016 NIPS Foundation - All Rights Reserved.; 30th Annual Conference on Neural Information Processing Systems, NIPS 2016 ; Conference date: 05-12-2016 Through 10-12-2016",
year = "2016",
language = "English",
pages = "550--558",
journal = "Advances in Neural Information Processing Systems",
issn = "1049-5258",
publisher = "Morgan Kaufmann Publishers, Inc",

}
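
The abstract above describes unrolling a residual network into an explicit collection of paths: with n residual blocks there are 2^n paths, and gradient flows mainly through the short ones. As a rough illustrative sketch (not the authors' code; it uses hypothetical toy blocks f_i(x) = w_i * x so the unrolled sum can be checked exactly), the following Python snippet compares the usual sequential residual computation with the explicit sum over all 2^n paths:

from itertools import combinations

def residual_forward(x, weights):
    # Standard residual recurrence: y_{i+1} = y_i + f_i(y_i), with toy blocks f_i(y) = w_i * y.
    for w in weights:
        x = x + w * x
    return x

def unrolled_forward(x, weights):
    # Explicit collection of paths: each subset S of the n blocks contributes
    # x * prod_{i in S} w_i, so there are 2^n paths in total and the number of
    # paths of length k is the binomial coefficient C(n, k).
    n = len(weights)
    total = 0.0
    for k in range(n + 1):
        for path in combinations(range(n), k):
            term = x
            for i in path:
                term *= weights[i]
            total += term
    return total

weights = [0.11, -0.07, 0.05, 0.02]   # hypothetical block weights
x0 = 1.5
print(residual_forward(x0, weights))  # sequential residual computation
print(unrolled_forward(x0, weights))  # identical value from the 2^4 = 16 paths

With these toy linear blocks both functions return x * prod_i (1 + w_i); real residual blocks are nonlinear, so the paper's unrolling is over computation paths rather than a closed-form product, but the 2^n path count and the binomial distribution of path lengths carry over.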

RIS

TY - GEN

T1 - Residual networks behave like ensembles of relatively shallow networks

AU - Veit, Andreas

AU - Wilber, Michael

AU - Belongie, Serge

N1 - Funding Information: We would like to thank Sam Kwak and Theofanis Karaletsos for insightful feedback. We also thank the reviewers of NIPS 2016 for their very constructive and helpful feedback and for suggesting the paper title. This work is partly funded by AOL through the Connected Experiences Laboratory (Author 1), an NSF Graduate Research Fellowship award (NSF DGE-1144153, Author 2), and a Google Focused Research award (Author 3). Publisher Copyright: © 2016 NIPS Foundation - All Rights Reserved.

PY - 2016

Y1 - 2016

N2 - In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.

AB - In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.

UR - http://www.scopus.com/inward/record.url?scp=85019250516&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:85019250516

SP - 550

EP - 558

JO - Advances in Neural Information Processing Systems

JF - Advances in Neural Information Processing Systems

SN - 1049-5258

T2 - 30th Annual Conference on Neural Information Processing Systems, NIPS 2016

Y2 - 5 December 2016 through 10 December 2016

ER -

ID: 301828179