Magnitude and Uncertainty Pruning Criterion for Neural Networks

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Standard

Magnitude and Uncertainty Pruning Criterion for Neural Networks. / Ko, Vinnie; Oehmcke, Stefan; Gieseke, Fabian.

2019 IEEE International Conference on Big Data, Big Data. ed. / Chaitanya Baru; Jun Huan; Latifur Khan; Xiaohua Tony Hu; Ronay Ak; Yuanyuan Tian; Roger Barga; Carlo Zaniolo; Kisung Lee; Yanfang Fanny Ye. IEEE, 2019. pp. 2317-2326 9005692 (Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019).

Harvard

Ko, V, Oehmcke, S & Gieseke, F 2019, Magnitude and Uncertainty Pruning Criterion for Neural Networks. in C Baru, J Huan, L Khan, XT Hu, R Ak, Y Tian, R Barga, C Zaniolo, K Lee & YF Ye (eds), 2019 IEEE International Conference on Big Data, Big Data., 9005692, IEEE, Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019, pp. 2317-2326, 2019 IEEE International Conference on Big Data, Big Data 2019, Los Angeles, USA, 09/12/2019. https://doi.org/10.1109/BigData47090.2019.9005692

APA

Ko, V., Oehmcke, S., & Gieseke, F. (2019). Magnitude and Uncertainty Pruning Criterion for Neural Networks. In C. Baru, J. Huan, L. Khan, X. T. Hu, R. Ak, Y. Tian, R. Barga, C. Zaniolo, K. Lee, & Y. F. Ye (Eds.), 2019 IEEE International Conference on Big Data, Big Data (pp. 2317-2326). [9005692] IEEE. Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019 https://doi.org/10.1109/BigData47090.2019.9005692

Vancouver

Ko V, Oehmcke S, Gieseke F. Magnitude and Uncertainty Pruning Criterion for Neural Networks. In Baru C, Huan J, Khan L, Hu XT, Ak R, Tian Y, Barga R, Zaniolo C, Lee K, Ye YF, editors, 2019 IEEE International Conference on Big Data, Big Data. IEEE. 2019. p. 2317-2326. 9005692. (Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019). https://doi.org/10.1109/BigData47090.2019.9005692

Author

Ko, Vinnie ; Oehmcke, Stefan ; Gieseke, Fabian. / Magnitude and Uncertainty Pruning Criterion for Neural Networks. 2019 IEEE International Conference on Big Data, Big Data. ed. / Chaitanya Baru ; Jun Huan ; Latifur Khan ; Xiaohua Tony Hu ; Ronay Ak ; Yuanyuan Tian ; Roger Barga ; Carlo Zaniolo ; Kisung Lee ; Yanfang Fanny Ye. IEEE, 2019. pp. 2317-2326 (Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019).

Bibtex

@inproceedings{f12ec045c7b94b038ae180ad4322818d,
title = "Magnitude and Uncertainty Pruning Criterion for Neural Networks",
abstract = "Neural networks have achieved dramatic improvements in recent years and now represent the state of the art for many real-world tasks. One drawback, however, is that many of these models are overparameterized, which makes them both computationally and memory intensive. Overparameterization can also lead to undesired overfitting. Inspired by recently proposed magnitude-based pruning schemes and the Wald test from statistics, we introduce a novel magnitude and uncertainty (MU) pruning criterion that helps to alleviate these shortcomings. One important advantage of the MU pruning criterion is that it is scale-invariant, a property that plain magnitude-based pruning lacks. In addition, we present a 'pseudo bootstrap' scheme that efficiently estimates the uncertainty of the weights from their update information during training. Our experimental evaluation on various neural network architectures and datasets shows that the new criterion yields more compressed models than magnitude-based pruning criteria alone, while losing less predictive power.",
keywords = "Neural network compression, overparameterization, pruning, Wald test",
author = "Vinnie Ko and Stefan Oehmcke and Fabian Gieseke",
year = "2019",
doi = "10.1109/BigData47090.2019.9005692",
language = "English",
series = "Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019",
pages = "2317--2326",
editor = "Chaitanya Baru and Jun Huan and Latifur Khan and Hu, {Xiaohua Tony} and Ronay Ak and Yuanyuan Tian and Roger Barga and Carlo Zaniolo and Kisung Lee and Ye, {Yanfang Fanny}",
booktitle = "2019 IEEE International Conference on Big Data, Big Data",
publisher = "IEEE",
note = "2019 IEEE International Conference on Big Data, Big Data 2019 ; Conference date: 09-12-2019 Through 12-12-2019",

}

RIS

TY - GEN

T1 - Magnitude and Uncertainty Pruning Criterion for Neural Networks

AU - Ko, Vinnie

AU - Oehmcke, Stefan

AU - Gieseke, Fabian

PY - 2019

Y1 - 2019

N2 - Neural networks have achieved dramatic improvements in recent years and now represent the state of the art for many real-world tasks. One drawback, however, is that many of these models are overparameterized, which makes them both computationally and memory intensive. Overparameterization can also lead to undesired overfitting. Inspired by recently proposed magnitude-based pruning schemes and the Wald test from statistics, we introduce a novel magnitude and uncertainty (MU) pruning criterion that helps to alleviate these shortcomings. One important advantage of the MU pruning criterion is that it is scale-invariant, a property that plain magnitude-based pruning lacks. In addition, we present a 'pseudo bootstrap' scheme that efficiently estimates the uncertainty of the weights from their update information during training. Our experimental evaluation on various neural network architectures and datasets shows that the new criterion yields more compressed models than magnitude-based pruning criteria alone, while losing less predictive power.

AB - Neural networks have achieved dramatic improvements in recent years and now represent the state of the art for many real-world tasks. One drawback, however, is that many of these models are overparameterized, which makes them both computationally and memory intensive. Overparameterization can also lead to undesired overfitting. Inspired by recently proposed magnitude-based pruning schemes and the Wald test from statistics, we introduce a novel magnitude and uncertainty (MU) pruning criterion that helps to alleviate these shortcomings. One important advantage of the MU pruning criterion is that it is scale-invariant, a property that plain magnitude-based pruning lacks. In addition, we present a 'pseudo bootstrap' scheme that efficiently estimates the uncertainty of the weights from their update information during training. Our experimental evaluation on various neural network architectures and datasets shows that the new criterion yields more compressed models than magnitude-based pruning criteria alone, while losing less predictive power.

KW - Neural network compression

KW - overparameterization

KW - pruning

KW - Wald test

U2 - 10.1109/BigData47090.2019.9005692

DO - 10.1109/BigData47090.2019.9005692

M3 - Article in proceedings

AN - SCOPUS:85081327889

T3 - Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019

SP - 2317

EP - 2326

BT - 2019 IEEE International Conference on Big Data, Big Data

A2 - Baru, Chaitanya

A2 - Huan, Jun

A2 - Khan, Latifur

A2 - Hu, Xiaohua Tony

A2 - Ak, Ronay

A2 - Tian, Yuanyuan

A2 - Barga, Roger

A2 - Zaniolo, Carlo

A2 - Lee, Kisung

A2 - Ye, Yanfang Fanny

PB - IEEE

T2 - 2019 IEEE International Conference on Big Data, Big Data 2019

Y2 - 9 December 2019 through 12 December 2019

ER -

ID: 241594680
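
Illustrative sketch

The abstract describes a magnitude and uncertainty (MU) criterion inspired by the Wald test, which judges an estimated parameter by its magnitude relative to its estimated uncertainty (the Wald statistic w / se(w)), together with a 'pseudo bootstrap' that estimates weight uncertainty from update information gathered during training. The Python sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation: the function names are invented, the score |w| / sigma_w is assumed from the stated Wald-test inspiration, and the standard deviation over training snapshots stands in for the paper's pseudo-bootstrap estimator.

import numpy as np

def pseudo_bootstrap_std(weight_snapshots):
    # Hypothetical stand-in for the paper's 'pseudo bootstrap': estimate
    # each weight's uncertainty as the standard deviation of its values
    # over snapshots collected during training.
    return np.std(np.stack(weight_snapshots), axis=0)

def mu_scores(weights, weight_std, eps=1e-12):
    # Assumed Wald-style score |w| / sigma_w. Rescaling a layer scales
    # both the weights and their uncertainty estimates, so the ratio is
    # scale-invariant, unlike the plain magnitude |w|.
    return np.abs(weights) / (weight_std + eps)

def prune_mask(weights, weight_std, prune_fraction=0.5):
    # Keep the weights with the largest MU scores; prune the rest.
    scores = mu_scores(weights, weight_std)
    threshold = np.quantile(scores, prune_fraction)
    return scores >= threshold

# Toy usage: four snapshots of a five-weight layer taken during training.
rng = np.random.default_rng(0)
snapshots = [np.array([0.9, -0.1, 0.5, 0.02, -0.7]) + 0.01 * rng.standard_normal(5)
             for _ in range(4)]
weights = snapshots[-1]
sigma = pseudo_bootstrap_std(snapshots)
print(prune_mask(weights, sigma, prune_fraction=0.4))

Under this score a weight with small magnitude but even smaller uncertainty can survive pruning, while a large but very noisy weight can be pruned, which is the qualitative behaviour the abstract attributes to the MU criterion.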