Generative Adversarial Perturbations
Publication: Contribution to journal › Conference article › Research › peer-reviewed
In this paper, we propose novel generative models for creating adversarial examples, slightly perturbed images resembling natural images but maliciously crafted to fool pre-trained models. We present trainable deep neural networks for transforming images to adversarial perturbations. Our proposed models can produce image-agnostic and image-dependent perturbations for targeted and non-targeted attacks. We also demonstrate that similar architectures can achieve impressive results in fooling both classification and semantic segmentation models, obviating the need for hand-crafting attack methods for each task. Using extensive experiments on challenging high-resolution datasets such as ImageNet and Cityscapes, we show that our perturbations achieve high fooling rates with small perturbation norms. Moreover, our attacks are considerably faster than current iterative methods at inference time.
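A minimal sketch of the core idea described in the abstract, assuming a PyTorch setup: a trainable generator network maps an input image to a perturbation, the perturbation is scaled to a fixed norm budget, and the generator is trained against a frozen pretrained classifier. This is an illustrative reconstruction, not the authors' released code; the class name `PerturbationGenerator`, the tiny conv architecture, the `eps` budget, and the use of ResNet-18 are all assumptions for the sake of a runnable example.

```python
# Hedged sketch of an image-dependent, non-targeted generative attack.
# Assumptions (not from the paper text): PyTorch, a small conv generator,
# an L-infinity budget `eps`, images in [0, 1], a frozen classifier.
import torch
import torch.nn as nn
import torchvision.models as models

class PerturbationGenerator(nn.Module):
    """Illustrative generator: image -> norm-bounded perturbation."""
    def __init__(self, eps=10 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # tanh keeps the raw output in (-1, 1); scale to the budget.
        return self.eps * torch.tanh(self.net(x))

# Frozen pretrained target model (stand-in for the attacked network).
target = models.resnet18(weights="IMAGENET1K_V1").eval()
for p in target.parameters():
    p.requires_grad_(False)

gen = PerturbationGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One non-targeted step: push the classifier away from the true labels."""
    adv = (images + gen(images)).clamp(0.0, 1.0)
    loss = -ce(target(adv), labels)  # maximize the classification loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

At inference time the attack is a single forward pass through `gen`, which is what makes this family of methods faster than iterative attacks. For the image-agnostic (universal) variant the abstract mentions, the generator would presumably be fed a fixed input pattern rather than each image, so one shared perturbation is produced for the whole dataset; a targeted variant would instead minimize the loss toward a chosen target label.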
Original language | English |
---|---|
Journal | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition |
Pages (from-to) | 4422-4431 |
Number of pages | 10 |
ISSN | 1063-6919 |
DOI | |
Status | Published - 14 Dec 2018 |
Externally published | Yes |
Event | 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018 - Salt Lake City, USA. Duration: 18 Jun 2018 → 22 Jun 2018 |
Conference

Conference | 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018 |
---|---|
Country | USA |
City | Salt Lake City |
Period | 18/06/2018 → 22/06/2018 |
Bibliographic note
Publisher Copyright:
© 2018 IEEE.