Equity through Access: A Case for Small-scale Deep Learning

Publication: Working paper › Preprint › Research

Standard

Equity through Access: A Case for Small-scale Deep Learning. / Selvan, Raghavendra; Pepin, Bob; Igel, Christian; Samuel, Gabrielle; Dam, Erik B.

2024.

Harvard

Selvan, R, Pepin, B, Igel, C, Samuel, G & Dam, EB 2024 'Equity through Access: A Case for Small-scale Deep Learning'.

APA

Selvan, R., Pepin, B., Igel, C., Samuel, G., & Dam, E. B. (2024). Equity through Access: A Case for Small-scale Deep Learning.

Vancouver

Selvan R, Pepin B, Igel C, Samuel G, Dam EB. Equity through Access: A Case for Small-scale Deep Learning. 2024 Mar 19.

Author

Selvan, Raghavendra; Pepin, Bob; Igel, Christian; Samuel, Gabrielle; Dam, Erik B. / Equity through Access: A Case for Small-scale Deep Learning. 2024.

Bibtex

@techreport{2bd351d677414d2dac8f41ac8a16ddd2,
title = "Equity through Access: A Case for Small-scale Deep Learning",
abstract = "The recent advances in deep learning (DL) have been accelerated by access to large-scale data and compute. These large-scale resources have been used to train progressively larger models which are resource intensive in terms of compute, data, energy, and carbon emissions. These costs are becoming a new type of entry barrier to researchers and practitioners with limited access to resources at such scale, particularly in the Global South. In this work, we take a comprehensive look at the landscape of existing DL models for vision tasks and demonstrate their usefulness in settings where resources are limited. To account for the resource consumption of DL models, we introduce a novel measure to estimate the performance per resource unit, which we call the PePR score. Using a diverse family of 131 unique DL architectures (spanning 1M to 130M trainable parameters) and three medical image datasets, we capture trends about the performance-resource trade-offs. In applications like medical image analysis, we argue that small-scale, specialized models are better than striving for large-scale models. Furthermore, we show that using pretrained models can significantly reduce the computational resources and data required. We hope this work will encourage the community to focus on improving AI equity by developing methods and models with smaller resource footprints.",
keywords = "cs.LG, cs.AI, stat.ML",
author = "Raghavendra Selvan and Bob Pepin and Christian Igel and Gabrielle Samuel and Dam, {Erik B}",
note = "Source code available at https://github.com/saintslab/PePR",
year = "2024",
month = mar,
day = "19",
language = "Undefined/Unknown",
type = "WorkingPaper",
}

RIS

TY - UNPB

T1 - Equity through Access

T2 - A Case for Small-scale Deep Learning

AU - Selvan, Raghavendra

AU - Pepin, Bob

AU - Igel, Christian

AU - Samuel, Gabrielle

AU - Dam, Erik B

N1 - Source code available at https://github.com/saintslab/PePR

PY - 2024/3/19

Y1 - 2024/3/19

N2 - The recent advances in deep learning (DL) have been accelerated by access to large-scale data and compute. These large-scale resources have been used to train progressively larger models which are resource intensive in terms of compute, data, energy, and carbon emissions. These costs are becoming a new type of entry barrier to researchers and practitioners with limited access to resources at such scale, particularly in the Global South. In this work, we take a comprehensive look at the landscape of existing DL models for vision tasks and demonstrate their usefulness in settings where resources are limited. To account for the resource consumption of DL models, we introduce a novel measure to estimate the performance per resource unit, which we call the PePR score. Using a diverse family of 131 unique DL architectures (spanning 1M to 130M trainable parameters) and three medical image datasets, we capture trends about the performance-resource trade-offs. In applications like medical image analysis, we argue that small-scale, specialized models are better than striving for large-scale models. Furthermore, we show that using pretrained models can significantly reduce the computational resources and data required. We hope this work will encourage the community to focus on improving AI equity by developing methods and models with smaller resource footprints.

AB - The recent advances in deep learning (DL) have been accelerated by access to large-scale data and compute. These large-scale resources have been used to train progressively larger models which are resource intensive in terms of compute, data, energy, and carbon emissions. These costs are becoming a new type of entry barrier to researchers and practitioners with limited access to resources at such scale, particularly in the Global South. In this work, we take a comprehensive look at the landscape of existing DL models for vision tasks and demonstrate their usefulness in settings where resources are limited. To account for the resource consumption of DL models, we introduce a novel measure to estimate the performance per resource unit, which we call the PePR score. Using a diverse family of 131 unique DL architectures (spanning 1M to 130M trainable parameters) and three medical image datasets, we capture trends about the performance-resource trade-offs. In applications like medical image analysis, we argue that small-scale, specialized models are better than striving for large-scale models. Furthermore, we show that using pretrained models can significantly reduce the computational resources and data required. We hope this work will encourage the community to focus on improving AI equity by developing methods and models with smaller resource footprints.

KW - cs.LG

KW - cs.AI

KW - stat.ML

M3 - Preprint

BT - Equity through Access

ER -

ID: 387832287
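
Note

The abstract above describes the PePR score as a measure of performance per resource unit. As a rough, illustrative sketch only (the exact definition and reference implementation are in the paper and at https://github.com/saintslab/PePR), a generic performance-per-resource ratio could be computed as below; the normalisation and the specific functional form are assumptions made here for illustration, not the authors' formula.

# Illustrative sketch: a generic performance-per-resource ratio.
# The actual PePR score is defined in the paper and implemented in the
# saintslab/PePR repository; the 1/(1 + normalised cost) form used here
# is an assumption for demonstration purposes.

def performance_per_resource(performance: float, resource: float,
                             resource_max: float) -> float:
    """Return performance scaled by a normalised resource cost.

    performance  -- task metric of the model, e.g. test accuracy in [0, 1]
    resource     -- resource consumption (e.g. parameters, energy, GPU-hours)
    resource_max -- largest resource consumption in the model family, used to
                    put all models on a common [0, 1] scale
    """
    normalised_cost = resource / resource_max      # in (0, 1]
    return performance / (1.0 + normalised_cost)   # higher is better


if __name__ == "__main__":
    # Two hypothetical models: the larger one is slightly more accurate,
    # but the smaller one scores higher per unit of resource.
    models = {
        "small (5M params)":   {"accuracy": 0.86, "params": 5e6},
        "large (130M params)": {"accuracy": 0.88, "params": 130e6},
    }
    max_params = max(m["params"] for m in models.values())
    for name, m in models.items():
        score = performance_per_resource(m["accuracy"], m["params"], max_params)
        print(f"{name}: score = {score:.3f}")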