Smallcap: Lightweight Image Captioning Prompted with Retrieval Augmentation

Publication: Contribution to book/anthology/report › Article in proceedings › Research › peer-reviewed

Standard

Smallcap: Lightweight Image Captioning Prompted with Retrieval Augmentation. / Ramos, Rita; Martins, Bruno; Elliott, Desmond; Kementchedjhieva, Yova.

Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023. IEEE Computer Society Press, 2023. pp. 2840-2849.

Publication: Contribution to book/anthology/report › Article in proceedings › Research › peer-reviewed

Harvard

Ramos, R, Martins, B, Elliott, D & Kementchedjhieva, Y 2023, Smallcap: Lightweight Image Captioning Prompted with Retrieval Augmentation. in Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023. IEEE Computer Society Press, pp. 2840-2849, 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, Canada, 18/06/2023. https://doi.org/10.1109/CVPR52729.2023.00278

APA

Ramos, R., Martins, B., Elliott, D., & Kementchedjhieva, Y. (2023). Smallcap: Lightweight Image Captioning Prompted with Retrieval Augmentation. In Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 (pp. 2840-2849). IEEE Computer Society Press. https://doi.org/10.1109/CVPR52729.2023.00278

Vancouver

Ramos R, Martins B, Elliott D, Kementchedjhieva Y. Smallcap: Lightweight Image Captioning Prompted with Retrieval Augmentation. In: Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023. IEEE Computer Society Press. 2023. p. 2840-2849 https://doi.org/10.1109/CVPR52729.2023.00278

Author

Ramos, Rita ; Martins, Bruno ; Elliott, Desmond ; Kementchedjhieva, Yova. / Smallcap: Lightweight Image Captioning Prompted with Retrieval Augmentation. Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023. IEEE Computer Society Press, 2023. pp. 2840-2849

Bibtex

@inproceedings{c3de167a250647e6a537348fcae89d06,
title = "Smallcap: Lightweight Image Captioning Prompted with Retrieval Augmentation",
abstract = "Recent advances in image captioning have focused on scaling the data and model size, substantially increasing the cost of pretraining and finetuning. As an alternative to large models, we present Smallcap, which generates a caption conditioned on an input image and related captions retrieved from a datastore. Our model is lightweight and fast to train, as the only learned parameters are in newly introduced cross-attention layers between a pre-trained CLIP encoder and GPT-2 decoder. Smallcap can transfer to new domains without additional finetuning and can exploit large-scale data in a training-free fashion since the contents of the datastore can be readily replaced. Our experiments show that Smallcap, trained only on COCO, has competitive performance on this benchmark, and also transfers to other domains without retraining, solely through retrieval from target-domain data. Further improvement is achieved through the training-free exploitation of diverse human-labeled and web data, which proves to be effective for a range of domains, including the nocaps benchmark, designed to test generalization to unseen visual concepts.11Code: https://github.com/RitaRamo/smallcap.",
keywords = "Multi-modal learning",
author = "Rita Ramos and Bruno Martins and Desmond Elliott and Yova Kementchedjhieva",
note = "Publisher Copyright: {\textcopyright} 2023 IEEE.; 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 ; Conference date: 18-06-2023 Through 22-06-2023",
year = "2023",
doi = "10.1109/CVPR52729.2023.00278",
language = "English",
pages = "2840--2849",
booktitle = "Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023",
publisher = "IEEE Computer Society Press",
address = "United States",

}
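
The abstract describes the model's core mechanism: captions similar to the input image are retrieved from a datastore and prepended as a prompt for a GPT-2 decoder, while a CLIP encoder supplies the visual features. The sketch below illustrates only the retrieval-and-prompting step and is not the authors' implementation (see https://github.com/RitaRamo/smallcap for that); the CLIP checkpoint, the number of retrieved captions, and the prompt wording are assumptions made for illustration.

# Minimal sketch of retrieval-augmented prompting as described in the abstract.
# Not the official Smallcap code; the model choice, k, and the prompt template
# are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Datastore: a plain list of captions (e.g. from COCO, other human-labeled sets,
# or web data). Replacing this list is the "training-free" exploitation of new
# data that the abstract refers to.
datastore = [
    "a man riding a wave on top of a surfboard",
    "a plate of food with rice and vegetables",
    "a group of people standing around a kitchen",
]

with torch.no_grad():
    text_inputs = processor(text=datastore, return_tensors="pt", padding=True).to(device)
    caption_emb = clip.get_text_features(**text_inputs)
    caption_emb = caption_emb / caption_emb.norm(dim=-1, keepdim=True)

def build_prompt(image: Image.Image, k: int = 3) -> str:
    """Retrieve the k datastore captions most similar to the image and format
    them as the prompt that conditions the GPT-2 decoder."""
    with torch.no_grad():
        image_inputs = processor(images=image, return_tensors="pt").to(device)
        img_emb = clip.get_image_features(**image_inputs)
        img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
        top = (img_emb @ caption_emb.T).squeeze(0).topk(min(k, len(datastore))).indices
    retrieved = " ".join(datastore[i] + "." for i in top.tolist())
    # Assumed prompt wording; the decoder continues the text after "This image shows".
    return f"Similar images show {retrieved} This image shows"

# Example usage: prompt = build_prompt(Image.open("example.jpg"))

At caption time the decoder continues this prompt while attending to the CLIP image features through cross-attention, which is where the model's few trainable parameters live.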

RIS

TY - GEN

T1 - Smallcap: Lightweight Image Captioning Prompted with Retrieval Augmentation

T2 - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023

AU - Ramos, Rita

AU - Martins, Bruno

AU - Elliott, Desmond

AU - Kementchedjhieva, Yova

N1 - Publisher Copyright: © 2023 IEEE.

PY - 2023

Y1 - 2023

N2 - Recent advances in image captioning have focused on scaling the data and model size, substantially increasing the cost of pretraining and finetuning. As an alternative to large models, we present Smallcap, which generates a caption conditioned on an input image and related captions retrieved from a datastore. Our model is lightweight and fast to train, as the only learned parameters are in newly introduced cross-attention layers between a pre-trained CLIP encoder and GPT-2 decoder. Smallcap can transfer to new domains without additional finetuning and can exploit large-scale data in a training-free fashion since the contents of the datastore can be readily replaced. Our experiments show that Smallcap, trained only on COCO, has competitive performance on this benchmark, and also transfers to other domains without retraining, solely through retrieval from target-domain data. Further improvement is achieved through the training-free exploitation of diverse human-labeled and web data, which proves to be effective for a range of domains, including the nocaps benchmark, designed to test generalization to unseen visual concepts. Code: https://github.com/RitaRamo/smallcap.

AB - Recent advances in image captioning have focused on scaling the data and model size, substantially increasing the cost of pretraining and finetuning. As an alternative to large models, we present Smallcap, which generates a caption conditioned on an input image and related captions retrieved from a datastore. Our model is lightweight and fast to train, as the only learned parameters are in newly introduced cross-attention layers between a pre-trained CLIP encoder and GPT-2 decoder. Smallcap can transfer to new domains without additional finetuning and can exploit large-scale data in a training-free fashion since the contents of the datastore can be readily replaced. Our experiments show that Smallcap, trained only on COCO, has competitive performance on this benchmark, and also transfers to other domains without retraining, solely through retrieval from target-domain data. Further improvement is achieved through the training-free exploitation of diverse human-labeled and web data, which proves to be effective for a range of domains, including the nocaps benchmark, designed to test generalization to unseen visual concepts. Code: https://github.com/RitaRamo/smallcap.

KW - Multi-modal learning

UR - http://www.scopus.com/inward/record.url?scp=85164893030&partnerID=8YFLogxK

U2 - 10.1109/CVPR52729.2023.00278

DO - 10.1109/CVPR52729.2023.00278

M3 - Article in proceedings

AN - SCOPUS:85164893030

SP - 2840

EP - 2849

BT - Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023

PB - IEEE Computer Society Press

Y2 - 18 June 2023 through 22 June 2023

ER -
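
The abstract also states that the only learned parameters sit in newly introduced cross-attention layers between the frozen CLIP encoder and the GPT-2 decoder. The sketch below only illustrates that parameter-efficiency idea, freezing everything except the newly initialised cross-attention in a Hugging Face encoder-decoder; it uses a ViT encoder as a stand-in, since wiring a CLIP encoder in this way relies on the custom model classes in the official repository, so treat it purely as an illustration and not as the paper's setup.

# Sketch of training only the cross-attention layers; not the Smallcap code.
from transformers import VisionEncoderDecoderModel

# GPT-2 gains randomly initialised cross-attention blocks when it is loaded as
# the decoder of an encoder-decoder model. (ViT stands in for the CLIP vision
# encoder used in the paper.)
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2"
)

# Freeze every parameter except the new cross-attention blocks and their layer norms.
for name, param in model.named_parameters():
    param.requires_grad = ("crossattention" in name) or ("ln_cross_attn" in name)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable:,} of {total:,}")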

ID: 371289982