Smallcap: Lightweight Image Captioning Prompted with Retrieval Augmentation

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Documents

  • Full text

    Accepted manuscript, 2.34 MB, PDF document

Recent advances in image captioning have focused on scaling the data and model size, substantially increasing the cost of pretraining and finetuning. As an alternative to large models, we present Smallcap, which generates a caption conditioned on an input image and related captions retrieved from a datastore. Our model is lightweight and fast to train, as the only learned parameters are in newly introduced cross-attention layers between a pre-trained CLIP encoder and GPT-2 decoder. Smallcap can transfer to new domains without additional finetuning and can exploit large-scale data in a training-free fashion since the contents of the datastore can be readily replaced. Our experiments show that Smallcap, trained only on COCO, has competitive performance on this benchmark, and also transfers to other domains without retraining, solely through retrieval from target-domain data. Further improvement is achieved through the training-free exploitation of diverse human-labeled and web data, which proves to be effective for a range of domains, including the nocaps benchmark, designed to test generalization to unseen visual concepts. Code: https://github.com/RitaRamo/smallcap.
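The sketch below illustrates the retrieval-augmented prompting idea from the abstract: embed the input image with CLIP, retrieve the nearest captions from a datastore, and condition a GPT-2 decoder on them through a prompt. It is a minimal approximation using off-the-shelf Hugging Face and FAISS components; the prompt wording, datastore captions, and checkpoints are illustrative assumptions, and plain prompted GPT-2 stands in for Smallcap's trainable cross-attention layers (see the linked repository for the authors' implementation).

```python
# Minimal sketch of retrieval-augmented captioning, NOT the authors' exact
# setup: Smallcap trains cross-attention layers between CLIP and GPT-2,
# whereas here an unmodified GPT-2 is conditioned purely via the prompt.
import faiss
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor, GPT2LMHeadModel, GPT2Tokenizer

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Datastore: captions embedded with CLIP's text encoder, indexed with FAISS.
# Replacing these captions with target-domain data is what enables the
# training-free domain transfer described in the abstract. (Example captions
# are made up for illustration.)
captions = [
    "a dog running across a grassy field",
    "two people riding bicycles down a city street",
    "a plate of pasta with tomato sauce on a table",
]
with torch.no_grad():
    text_inputs = processor(text=captions, return_tensors="pt", padding=True)
    text_emb = clip.get_text_features(**text_inputs)
text_emb = torch.nn.functional.normalize(text_emb, dim=-1).numpy()
index = faiss.IndexFlatIP(text_emb.shape[1])  # inner product on unit vectors = cosine
index.add(text_emb)

def caption_image(path: str, k: int = 2) -> str:
    # Embed the query image with CLIP and retrieve the k nearest captions.
    image = Image.open(path)
    with torch.no_grad():
        img_inputs = processor(images=image, return_tensors="pt")
        img_emb = clip.get_image_features(**img_inputs)
    img_emb = torch.nn.functional.normalize(img_emb, dim=-1).numpy()
    _, idx = index.search(img_emb, k)
    retrieved = [captions[i] for i in idx[0]]

    # Condition generation on the retrieved captions through the prompt.
    prompt = "Similar images show: " + "; ".join(retrieved) + ". This image shows:"
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = gpt2.generate(ids, max_new_tokens=20, do_sample=False,
                        pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

# Example usage (any local image file):
# print(caption_image("example.jpg"))
```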

Original language: English
Title: Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023
Number of pages: 10
Publisher: IEEE Computer Society Press
Publication date: 2023
Pages: 2840-2849
ISBN (electronic): 9798350301298
DOI
Status: Published - 2023
Event: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 - Vancouver, Canada
Duration: 18 Jun 2023 – 22 Jun 2023

Conference

Conference: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023
Country: Canada
City: Vancouver
Period: 18/06/2023 – 22/06/2023
Sponsors: Amazon Science, Ant Research, Cruise, Google, Lambda, et al.

Bibliographical note

Funding Information:
This research was supported by the Portuguese Recovery and Resilience Plan through project C645008882-00000055, through Fundação para a Ciência e Tecnologia (FCT) with the Ph.D. scholarship 2020.06106.BD, and through the INESC-ID multi-annual funding from the PIDDAC programme (UIDB/50021/2020).

Publisher Copyright:
© 2023 IEEE.
