Activation Compression of Graph Neural Networks Using Block-Wise Quantization with Improved Variance Minimization

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Documents

  • Fulltext

    Accepted manuscript, 1.6 MB, PDF document

Efficient training of large-scale graph neural networks (GNNs) has been studied with a particular focus on reducing memory consumption. Liu et al. (2022) proposed extreme activation compression (EXACT), which drastically reduces memory consumption by quantizing the intermediate activation maps down to INT2 precision, achieving large reductions in GPU memory consumption with little to no loss in performance. In this work, we improve on the EXACT strategy by applying block-wise quantization to the intermediate activations. We experimentally analyze different block sizes and show a further reduction in memory consumption (>15%) and a per-epoch runtime speedup (≈5%), even at extreme levels of quantization, with performance trade-offs similar to the original EXACT. Further, we correct the assumption in EXACT that the intermediate activation maps are uniformly distributed, and show improved variance estimates for the quantization and dequantization steps.
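The paper's code is not reproduced here; the following is a minimal PyTorch sketch of what block-wise stochastic quantization of activations could look like. The function names, the default block size, and the padding scheme are our assumptions, not the authors' implementation, and the codes are stored one per uint8 for clarity rather than bit-packed four-per-byte as a real memory-saving implementation would do.

```python
import torch
import torch.nn.functional as F

def quantize_blockwise(x: torch.Tensor, bits: int = 2, block_size: int = 64):
    """Quantize activations in independent blocks of `block_size` elements.

    Each block keeps its own (min, scale), so a single outlier only
    widens the quantization range of its own block, not the whole tensor.
    """
    flat = x.reshape(-1)
    pad = (-flat.numel()) % block_size           # pad so blocks divide evenly
    if pad:
        flat = F.pad(flat, (0, pad))
    blocks = flat.view(-1, block_size)

    levels = 2 ** bits - 1                       # max integer code: 3 for INT2
    bmin = blocks.amin(dim=1, keepdim=True)
    scale = (blocks.amax(dim=1, keepdim=True) - bmin).clamp_min(1e-12) / levels

    normalized = (blocks - bmin) / scale         # mapped into [0, levels]
    noise = torch.rand_like(normalized)          # stochastic rounding -> unbiased
    codes = torch.clamp(torch.floor(normalized + noise), 0, levels).to(torch.uint8)
    return codes, bmin, scale, x.shape, pad

def dequantize_blockwise(codes, bmin, scale, shape, pad):
    """Reconstruct an approximation of the original tensor from block codes."""
    blocks = codes.to(scale.dtype) * scale + bmin
    flat = blocks.reshape(-1)
    if pad:
        flat = flat[:-pad]
    return flat.view(shape)

# Example round trip on a pretend activation map.
x = torch.randn(128, 256)
packed = quantize_blockwise(x, bits=2, block_size=64)
x_hat = dequantize_blockwise(*packed)
```

Under stochastic rounding, the per-element quantization variance is scale² · E[p(1−p)], where p is the fractional part of the normalized activation; the common scale²/6 figure follows only when p is uniformly distributed, which appears to be the kind of assumption on the activation distribution that the paper's variance correction revisits.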

Original language: English
Title: 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Proceedings
Number of pages: 5
Publisher: IEEE
Publication date: 2024
Pages: 7430-7434
ISBN (electronic): 9798350344851
DOI
Status: Published - 2024
Event: 49th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Seoul, South Korea
Duration: 14 Apr 2024 - 19 Apr 2024

Conference

Conference: 49th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024
Country: South Korea
City: Seoul
Period: 14/04/2024 - 19/04/2024
Sponsor: The Institute of Electrical and Electronics Engineers Signal Processing Society

Bibliographic note

Funding Information:
The authors acknowledge funding received under the European Union's Horizon Europe Research and Innovation programme, grant agreements No. 101070284 and No. 101070408.

Publisher Copyright:
© 2024 IEEE.
