Discriminative Class Tokens for Text-to-Image Diffusion Models

Publication: Contribution to book/anthology/report › Article in proceedings › Research › peer-reviewed

Standard

Discriminative Class Tokens for Text-to-Image Diffusion Models. / Schwartz, Idan; Snæbjarnarson, Vésteinn; Chefer, Hila; Belongie, Serge; Wolf, Lior; Benaim, Sagie.

2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023.


Harvard

Schwartz, I, Snæbjarnarson, V, Chefer, H, Belongie, S, Wolf, L & Benaim, S 2023, Discriminative Class Tokens for Text-to-Image Diffusion Models. in 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023 IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2023, Paris, France, 02/10/2023.

APA

Schwartz, I., Snæbjarnarson, V., Chefer, H., Belongie, S., Wolf, L., & Benaim, S. (2023). Discriminative Class Tokens for Text-to-Image Diffusion Models. In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE.

Vancouver

Schwartz I, Snæbjarnarson V, Chefer H, Belongie S, Wolf L, Benaim S. Discriminative Class Tokens for Text-to-Image Diffusion Models. In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE. 2023

Author

Schwartz, Idan ; Snæbjarnarson, Vésteinn ; Chefer, Hila ; Belongie, Serge ; Wolf, Lior ; Benaim, Sagie. / Discriminative Class Tokens for Text-to-Image Diffusion Models. 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023.

Bibtex

@inproceedings{edb44e255b4e4c569a64d9b79f1bfe96,
title = "Discriminative Class Tokens for Text-to-Image Diffusion Models",
abstract = "Recent advances in text-to-image diffusion models have enabled the generation of diverse and high-quality images. While impressive, the images often fall short of depicting subtle details and are susceptible to errors due to ambiguity in the input text. One way of alleviating these issues is to train diffusion models on class-labeled datasets. This approach has two disadvantages: (i) supervised datasets are generally small compared to large-scale scraped text-image datasets on which text-to-image models are trained, affecting the quality and diversity of the generated images, or (ii) the input is a hard-coded label, as opposed to free-form text, limiting the control over the generated images. In this work, we propose a non-invasive fine-tuning technique that capitalizes on the expressive potential of free-form text while achieving high accuracy through discriminative signals from a pretrained classifier. This is done by iteratively modifying the embedding of an added input token of a text-to-image diffusion model, by steering generated images toward a given target class according to a classifier. Our method is fast compared to prior fine-tuning methods and does not require a collection of in-class images or retraining of a noise-tolerant classifier. We evaluate our method extensively, showing that the generated images are: (i) more accurate and of higher quality than standard diffusion models, (ii) can be used to augment training data in a low-resource setting, and (iii) reveal information about the data used to train the guiding classifier. The code is available at \url{this https URL}.",
author = "Idan Schwartz and V{\'e}steinn Sn{\ae}bjarnarson and Hila Chefer and Serge Belongie and Lior Wolf and Sagie Benaim",
year = "2023",
language = "English",
booktitle = "2023 IEEE/CVF International Conference on Computer Vision (ICCV)",
publisher = "IEEE",
note = "2023 IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2023 ; Conference date: 02-10-2023 Through 06-10-2023",

}

RIS

TY - GEN

T1 - Discriminative Class Tokens for Text-to-Image Diffusion Models

AU - Schwartz, Idan

AU - Snæbjarnarson, Vésteinn

AU - Chefer, Hila

AU - Belongie, Serge

AU - Wolf, Lior

AU - Benaim, Sagie

PY - 2023

Y1 - 2023

N2 - Recent advances in text-to-image diffusion models have enabled the generation of diverse and high-quality images. While impressive, the images often fall short of depicting subtle details and are susceptible to errors due to ambiguity in the input text. One way of alleviating these issues is to train diffusion models on class-labeled datasets. This approach has two disadvantages: (i) supervised datasets are generally small compared to large-scale scraped text-image datasets on which text-to-image models are trained, affecting the quality and diversity of the generated images, or (ii) the input is a hard-coded label, as opposed to free-form text, limiting the control over the generated images. In this work, we propose a non-invasive fine-tuning technique that capitalizes on the expressive potential of free-form text while achieving high accuracy through discriminative signals from a pretrained classifier. This is done by iteratively modifying the embedding of an added input token of a text-to-image diffusion model, by steering generated images toward a given target class according to a classifier. Our method is fast compared to prior fine-tuning methods and does not require a collection of in-class images or retraining of a noise-tolerant classifier. We evaluate our method extensively, showing that the generated images are: (i) more accurate and of higher quality than standard diffusion models, (ii) can be used to augment training data in a low-resource setting, and (iii) reveal information about the data used to train the guiding classifier. The code is available at \url{this https URL}.

AB - Recent advances in text-to-image diffusion models have enabled the generation of diverse and high-quality images. While impressive, the images often fall short of depicting subtle details and are susceptible to errors due to ambiguity in the input text. One way of alleviating these issues is to train diffusion models on class-labeled datasets. This approach has two disadvantages: (i) supervised datasets are generally small compared to large-scale scraped text-image datasets on which text-to-image models are trained, affecting the quality and diversity of the generated images, or (ii) the input is a hard-coded label, as opposed to free-form text, limiting the control over the generated images. In this work, we propose a non-invasive fine-tuning technique that capitalizes on the expressive potential of free-form text while achieving high accuracy through discriminative signals from a pretrained classifier. This is done by iteratively modifying the embedding of an added input token of a text-to-image diffusion model, by steering generated images toward a given target class according to a classifier. Our method is fast compared to prior fine-tuning methods and does not require a collection of in-class images or retraining of a noise-tolerant classifier. We evaluate our method extensively, showing that the generated images are: (i) more accurate and of higher quality than standard diffusion models, (ii) can be used to augment training data in a low-resource setting, and (iii) reveal information about the data used to train the guiding classifier. The code is available at \url{this https URL}.

M3 - Article in proceedings

BT - 2023 IEEE/CVF International Conference on Computer Vision (ICCV)

PB - IEEE

T2 - 2023 IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2023

Y2 - 2 October 2023 through 6 October 2023

ER -

ID: 383747472
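
Note on the method (added illustration)

The abstract describes optimizing the embedding of a single added input token so that a frozen text-to-image diffusion model, guided by a frozen pretrained classifier, produces images of a given target class. The following is a minimal toy sketch of that optimization loop in PyTorch; it is not the authors' implementation, and the Generator and Classifier modules, dimensions, and hyperparameters below are hypothetical stand-ins for the diffusion model and the pretrained classifier used in the paper.

# Toy sketch (not the authors' code): keep a frozen generator and a frozen
# classifier, and optimize only the embedding of one added input token so
# that generated images are pushed toward a target class.
import torch
import torch.nn as nn

torch.manual_seed(0)

EMB_DIM, IMG_PIXELS, NUM_CLASSES, PROMPT_LEN = 32, 3 * 8 * 8, 10, 4

class Generator(nn.Module):
    """Stand-in for a differentiable text-conditioned image generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PROMPT_LEN * EMB_DIM + EMB_DIM, 128), nn.ReLU(),
            nn.Linear(128, IMG_PIXELS), nn.Tanh())
    def forward(self, prompt_emb, class_token_emb):
        cond = torch.cat([prompt_emb.flatten(1), class_token_emb], dim=1)
        return self.net(cond)

class Classifier(nn.Module):
    """Stand-in for a pretrained, frozen image classifier."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG_PIXELS, 64), nn.ReLU(),
                                 nn.Linear(64, NUM_CLASSES))
    def forward(self, img):
        return self.net(img)

generator, classifier = Generator(), Classifier()
for p in list(generator.parameters()) + list(classifier.parameters()):
    p.requires_grad_(False)  # both networks stay frozen

prompt_emb = torch.randn(1, PROMPT_LEN, EMB_DIM)            # fixed free-form prompt embedding
class_token = torch.randn(1, EMB_DIM, requires_grad=True)   # the added, learnable token
target_class = torch.tensor([3])                            # class the images should depict

optimizer = torch.optim.Adam([class_token], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    optimizer.zero_grad()
    image = generator(prompt_emb, class_token)   # generate with the current token embedding
    logits = classifier(image)                   # discriminative signal from the frozen classifier
    loss = loss_fn(logits, target_class)         # steer generation toward the target class
    loss.backward()                              # gradients flow only into class_token
    optimizer.step()

print(f"final loss: {loss.item():.4f}, predicted class: {logits.argmax(dim=1).item()}")

In the actual method the discriminative gradient flows through the diffusion sampling process into the added token's embedding, which is substantially more involved than the stand-in generator above; the sketch only illustrates that the generator and classifier stay fixed while a single token embedding is updated iteratively.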