Generative Adversarial Perturbations

Research output: Contribution to journal › Conference article › Research › peer-review

Standard

Generative Adversarial Perturbations. / Poursaeed, Omid; Katsman, Isay; Gao, Bicheng; Belongie, Serge.

In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 14.12.2018, p. 4422-4431.

Research output: Contribution to journal › Conference article › Research › peer-review

Harvard

Poursaeed, O, Katsman, I, Gao, B & Belongie, S 2018, 'Generative Adversarial Perturbations', Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 4422-4431. https://doi.org/10.1109/CVPR.2018.00465

APA

Poursaeed, O., Katsman, I., Gao, B., & Belongie, S. (2018). Generative Adversarial Perturbations. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 4422-4431. https://doi.org/10.1109/CVPR.2018.00465

Vancouver

Poursaeed O, Katsman I, Gao B, Belongie S. Generative Adversarial Perturbations. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2018 Dec 14;4422-4431. https://doi.org/10.1109/CVPR.2018.00465

Author

Poursaeed, Omid ; Katsman, Isay ; Gao, Bicheng ; Belongie, Serge. / Generative Adversarial Perturbations. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2018 ; pp. 4422-4431.

Bibtex

@inproceedings{79a8471d66424a319cb7a84397f36a1e,
title = "Generative Adversarial Perturbations",
abstract = "In this paper, we propose novel generative models for creating adversarial examples, slightly perturbed images resembling natural images but maliciously crafted to fool pre-trained models. We present trainable deep neural networks for transforming images to adversarial perturbations. Our proposed models can produce image-agnostic and image-dependent perturbations for targeted and non-targeted attacks. We also demonstrate that similar architectures can achieve impressive results in fooling both classification and semantic segmentation models, obviating the need for hand-crafting attack methods for each task. Using extensive experiments on challenging high-resolution datasets such as ImageNet and Cityscapes, we show that our perturbations achieve high fooling rates with small perturbation norms. Moreover, our attacks are considerably faster than current iterative methods at inference time.",
author = "Omid Poursaeed and Isay Katsman and Bicheng Gao and Serge Belongie",
note = "Publisher Copyright: {\textcopyright} 2018 IEEE.; 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018 ; Conference date: 18-06-2018 Through 22-06-2018",
year = "2018",
month = dec,
day = "14",
doi = "10.1109/CVPR.2018.00465",
language = "English",
pages = "4422--4431",
journal = "I E E E Conference on Computer Vision and Pattern Recognition. Proceedings",
issn = "1063-6919",
publisher = "Institute of Electrical and Electronics Engineers",

}

RIS

TY - GEN

T1 - Generative Adversarial Perturbations

AU - Poursaeed, Omid

AU - Katsman, Isay

AU - Gao, Bicheng

AU - Belongie, Serge

N1 - Publisher Copyright: © 2018 IEEE.

PY - 2018/12/14

Y1 - 2018/12/14

N2 - In this paper, we propose novel generative models for creating adversarial examples, slightly perturbed images resembling natural images but maliciously crafted to fool pre-trained models. We present trainable deep neural networks for transforming images to adversarial perturbations. Our proposed models can produce image-agnostic and image-dependent perturbations for targeted and non-targeted attacks. We also demonstrate that similar architectures can achieve impressive results in fooling both classification and semantic segmentation models, obviating the need for hand-crafting attack methods for each task. Using extensive experiments on challenging high-resolution datasets such as ImageNet and Cityscapes, we show that our perturbations achieve high fooling rates with small perturbation norms. Moreover, our attacks are considerably faster than current iterative methods at inference time.

AB - In this paper, we propose novel generative models for creating adversarial examples, slightly perturbed images resembling natural images but maliciously crafted to fool pre-trained models. We present trainable deep neural networks for transforming images to adversarial perturbations. Our proposed models can produce image-agnostic and image-dependent perturbations for targeted and non-targeted attacks. We also demonstrate that similar architectures can achieve impressive results in fooling both classification and semantic segmentation models, obviating the need for hand-crafting attack methods for each task. Using extensive experiments on challenging high-resolution datasets such as ImageNet and Cityscapes, we show that our perturbations achieve high fooling rates with small perturbation norms. Moreover, our attacks are considerably faster than current iterative methods at inference time.

UR - http://www.scopus.com/inward/record.url?scp=85062857493&partnerID=8YFLogxK

U2 - 10.1109/CVPR.2018.00465

DO - 10.1109/CVPR.2018.00465

M3 - Conference article

AN - SCOPUS:85062857493

SP - 4422

EP - 4431

JO - IEEE Conference on Computer Vision and Pattern Recognition. Proceedings

JF - IEEE Conference on Computer Vision and Pattern Recognition. Proceedings

SN - 1063-6919

T2 - 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018

Y2 - 18 June 2018 through 22 June 2018

ER -
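The abstract above describes trainable generators that map an image (or, for the image-agnostic variant, a fixed input) to a norm-bounded perturbation that fools a pre-trained model. Below is a minimal PyTorch sketch of the image-dependent, non-targeted case; the `PerturbationGenerator` architecture, the 10/255 L-infinity budget, and the ResNet-18 victim are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code) of an image-dependent,
# non-targeted generative attack. All names and hyperparameters
# below are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class PerturbationGenerator(nn.Module):
    """Tiny stand-in for the paper's generator networks:
    maps an image to a same-sized perturbation in [-1, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

eps = 10 / 255                      # assumed L-infinity budget
generator = PerturbationGenerator()
victim = models.resnet18(weights=None).eval()  # use a pre-trained victim in practice
for p in victim.parameters():
    p.requires_grad_(False)          # only the generator is trained

opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
criterion = nn.CrossEntropyLoss()

def attack_step(images, labels):
    # Generate a perturbation and rescale it so its per-sample
    # L-infinity norm equals the budget, then add it to the input.
    delta = generator(images)
    scale = delta.abs().amax(dim=(1, 2, 3), keepdim=True).clamp(min=1e-12)
    adv = (images + eps * delta / scale).clamp(0, 1)
    # Non-targeted objective: ascend the victim's cross-entropy loss,
    # i.e. minimize its negative, pushing predictions off the true label.
    loss = -criterion(victim(adv), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return adv

# Usage: one training step on a random batch.
images = torch.rand(4, 3, 224, 224)
labels = torch.randint(0, 1000, (4,))
attack_step(images, labels)
```

The image-agnostic (universal) variant the abstract mentions would instead feed the generator a fixed input and apply the same rescaled perturbation to every image; a targeted attack would minimize the loss toward a chosen label rather than ascend it.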
