Robustness and Generalization via Generative Adversarial Training

Research output: Contribution to journal › Conference article › Research › peer-review

Standard

Robustness and Generalization via Generative Adversarial Training. / Belongie, Serge; Poursaeed, Omid; Jiang, Tianxing; Yang, Harry; Lim, Ser-Nam.

In: IEEE Xplore Digital Library, Vol. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 28.02.2022, p. 15691-15700.


Harvard

Belongie, S, Poursaeed, O, Jiang, T, Yang, H & Lim, S-N 2022, 'Robustness and Generalization via Generative Adversarial Training', IEEE Xplore Digital Library, vol. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 15691-15700. https://doi.org/10.1109/ICCV48922.2021.01542

APA

Belongie, S., Poursaeed, O., Jiang, T., Yang, H., & Lim, S-N. (2022). Robustness and Generalization via Generative Adversarial Training. IEEE Xplore Digital Library, 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 15691-15700. https://doi.org/10.1109/ICCV48922.2021.01542

Vancouver

Belongie S, Poursaeed O, Jiang T, Yang H, Lim S-N. Robustness and Generalization via Generative Adversarial Training. IEEE Xplore Digital Library. 2022 Feb 28;2021 IEEE/CVF International Conference on Computer Vision (ICCV):15691-15700. https://doi.org/10.1109/ICCV48922.2021.01542

Author

Belongie, Serge ; Poursaeed, Omid ; Jiang, Tianxing ; Yang, Harry ; Lim, Ser-Nam. / Robustness and Generalization via Generative Adversarial Training. In: IEEE Xplore Digital Library. 2022 ; Vol. 2021 IEEE/CVF International Conference on Computer Vision (ICCV). pp. 15691-15700.

Bibtex

@inproceedings{b3aaa87b664a48eb809bff3f7b4ba511,
title = "Robustness and Generalization via Generative Adversarial Training",
abstract = "While deep neural networks have achieved remarkable success in various computer vision tasks, they often fail to generalize to subtle variations of input images. Several defenses have been proposed to improve the robustness against these variations. However, current defenses can only withstand the specific attack used in training, and the models often remain vulnerable to other input variations. Moreover, these methods often degrade the performance of the model on clean images. In this paper, we present Generative Adversarial Training, an approach to simultaneously improve the model's generalization and robustness to unseen adversarial attacks. Instead of altering a single pre-defined aspect of images, we generate a spectrum of low-level, mid-level and high-level changes using generative models with a disentangled latent space. Adversarial training with these examples enables the model to withstand a wide range of attacks by observing a variety of input alterations during training. We show that our approach not only improves the performance of the model on clean images but also makes it robust against unforeseen attacks and outperforms prior work. We validate the effectiveness of our method by demonstrating results on various tasks such as classification, semantic segmentation and object detection.",
author = "Serge Belongie and Omid Poursaeed and Tianxing Jiang and Harry Yang and Ser-Nam Lim",
year = "2022",
month = feb,
day = "28",
doi = "10.1109/ICCV48922.2021.01542",
language = "English",
booktitle = "2021 IEEE/CVF International Conference on Computer Vision (ICCV)",
pages = "15691--15700",
}

RIS

TY - GEN

T1 - Robustness and Generalization via Generative Adversarial Training

AU - Belongie, Serge

AU - Poursaeed, Omid

AU - Jiang, Tianxing

AU - Yang, Harry

AU - Lim, Ser-Nam

PY - 2022/2/28

Y1 - 2022/2/28

N2 - While deep neural networks have achieved remarkable success in various computer vision tasks, they often fail to generalize to subtle variations of input images. Several defenses have been proposed to improve the robustness against these variations. However, current defenses can only withstand the specific attack used in training, and the models often remain vulnerable to other input variations. Moreover, these methods often degrade the performance of the model on clean images. In this paper, we present Generative Adversarial Training, an approach to simultaneously improve the model's generalization and robustness to unseen adversarial attacks. Instead of altering a single pre-defined aspect of images, we generate a spectrum of low-level, mid-level and high-level changes using generative models with a disentangled latent space. Adversarial training with these examples enables the model to withstand a wide range of attacks by observing a variety of input alterations during training. We show that our approach not only improves the performance of the model on clean images but also makes it robust against unforeseen attacks and outperforms prior work. We validate the effectiveness of our method by demonstrating results on various tasks such as classification, semantic segmentation and object detection.

AB - While deep neural networks have achieved remarkable success in various computer vision tasks, they often fail to generalize to subtle variations of input images. Several defenses have been proposed to improve the robustness against these variations. However, current defenses can only withstand the specific attack used in training, and the models often remain vulnerable to other input variations. Moreover, these methods often degrade the performance of the model on clean images. In this paper, we present Generative Adversarial Training, an approach to simultaneously improve the model's generalization and robustness to unseen adversarial attacks. Instead of altering a single pre-defined aspect of images, we generate a spectrum of low-level, mid-level and high-level changes using generative models with a disentangled latent space. Adversarial training with these examples enables the model to withstand a wide range of attacks by observing a variety of input alterations during training. We show that our approach not only improves the performance of the model on clean images but also makes it robust against unforeseen attacks and outperforms prior work. We validate the effectiveness of our method by demonstrating results on various tasks such as classification, semantic segmentation and object detection.

UR - https://openaccess.thecvf.com/content/ICCV2021/html/Poursaeed_Robustness_and_Generalization_via_Generative_Adversarial_Training_ICCV_2021_paper.html

U2 - 10.1109/ICCV48922.2021.01542

DO - 10.1109/ICCV48922.2021.01542

M3 - Conference article

VL - 2021 IEEE/CVF International Conference on Computer Vision (ICCV)

SP - 15691

EP - 15700

JO - IEEE Xplore Digital Library

JF - IEEE Xplore Digital Library

ER -
