Smallcap: Lightweight Image Captioning Prompted with Retrieval Augmentation

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Documents

  • Fulltext: Accepted author manuscript, 2.34 MB, PDF document

Recent advances in image captioning have focused on scaling the data and model size, substantially increasing the cost of pretraining and finetuning. As an alternative to large models, we present Smallcap, which generates a caption conditioned on an input image and related captions retrieved from a datastore. Our model is lightweight and fast to train, as the only learned parameters are in newly introduced cross-attention layers between a pre-trained CLIP encoder and GPT-2 decoder. Smallcap can transfer to new domains without additional finetuning and can exploit large-scale data in a training-free fashion since the contents of the datastore can be readily replaced. Our experiments show that Smallcap, trained only on COCO, has competitive performance on this benchmark, and also transfers to other domains without retraining, solely through retrieval from target-domain data. Further improvement is achieved through the training-free exploitation of diverse human-labeled and web data, which proves to be effective for a range of domains, including the nocaps benchmark, designed to test generalization to unseen visual concepts. Code: https://github.com/RitaRamo/smallcap.
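The abstract outlines the architecture: a frozen CLIP encoder and a frozen GPT-2 decoder connected by newly introduced cross-attention layers, with captions retrieved from a datastore placed in the decoder prompt. The snippet below is a minimal, hypothetical sketch of that idea using the Hugging Face transformers library; the model checkpoints, prompt template, and loss computation are assumptions for illustration, not the authors' implementation (see the linked repository for the actual code).

# Hypothetical sketch of the Smallcap idea (not the authors' code): a frozen CLIP
# vision encoder and a frozen GPT-2 decoder joined by new cross-attention layers,
# with retrieved captions concatenated into the decoder prompt.
import torch
from transformers import (
    CLIPVisionModel, GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast,
)

encoder = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
decoder_config = GPT2Config.from_pretrained("gpt2", add_cross_attention=True)
decoder = GPT2LMHeadModel.from_pretrained("gpt2", config=decoder_config)
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Freeze both pre-trained models; only the randomly initialized
# cross-attention blocks (and their layer norms) remain trainable.
for p in encoder.parameters():
    p.requires_grad = False
for name, p in decoder.named_parameters():
    p.requires_grad = ("crossattention" in name) or ("ln_cross_attn" in name)

def build_prompt(retrieved_captions):
    # Captions retrieved from the datastore are inserted into a text prompt
    # (the exact template here is an assumption).
    joined = " ".join(retrieved_captions)
    return f"Similar images show: {joined} This image shows:"

def caption_loss(pixel_values, retrieved_captions, target_caption):
    # Image features attend into the decoder via the new cross-attention layers.
    image_states = encoder(pixel_values=pixel_values).last_hidden_state
    text = build_prompt(retrieved_captions) + " " + target_caption
    ids = tokenizer(text, return_tensors="pt").input_ids
    out = decoder(
        input_ids=ids,
        encoder_hidden_states=image_states,
        labels=ids,  # in practice the prompt tokens would be masked out (-100)
    )
    return out.loss

Swapping the datastore contents changes what gets retrieved and prompted, which is what allows the described training-free transfer to new domains.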

Original language: English
Title of host publication: Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023
Number of pages: 10
Publisher: IEEE Computer Society Press
Publication date: 2023
Pages: 2840-2849
ISBN (Electronic): 9798350301298
DOIs
Publication status: Published - 2023
Event: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 - Vancouver, Canada
Duration: 18 Jun 2023 - 22 Jun 2023

Conference

Conference: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023
Country: Canada
City: Vancouver
Period: 18/06/2023 - 22/06/2023
Sponsor: Amazon Science, Ant Research, Cruise, Google, Lambda, et al.

Bibliographical note

Publisher Copyright:
© 2023 IEEE.

Research areas

  • Multi-modal learning

ID: 371289982