Enhancing adversarial example transferability with an intermediate level attack

Research output: Contribution to journal › Conference article › Research › peer-review

Neural networks are vulnerable to adversarial examples, malicious inputs crafted to fool trained models. Adversarial examples often exhibit black-box transfer, meaning that adversarial examples for one model can fool another model. However, adversarial examples are typically overfit to exploit the particular architecture and feature representation of a source model, resulting in sub-optimal black-box transfer attacks to other target models. We introduce the Intermediate Level Attack (ILA), which attempts to fine-tune an existing adversarial example for greater black-box transferability by increasing its perturbation on a pre-specified layer of the source model, improving upon state-of-the-art methods. We show that we can select a layer of the source model to perturb without any knowledge of the target models while achieving high transferability. Additionally, we provide some explanatory insights regarding our method and the effect of optimizing for adversarial examples using intermediate feature maps.
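The abstract describes ILA as fine-tuning an existing adversarial example so that its perturbation at a chosen intermediate layer of the source model grows along the direction already established by the original attack. The following is a minimal PyTorch-style sketch of that idea, not the paper's exact procedure: the feature extractor (model_features), layer choice, step size, and number of steps are illustrative assumptions, and the projection-style objective here is a simplified dot-product loss.

    # Illustrative ILA-style fine-tuning sketch (assumptions noted in the lead-in).
    import torch

    def ila_finetune(model_features, x, x_adv_init, eps=8/255, steps=10, lr=0.01):
        # model_features: callable mapping an input batch to the chosen intermediate
        # feature map (e.g. a truncated copy of the source model) -- an assumption here.
        # x: clean inputs; x_adv_init: adversarial examples from an existing attack.
        with torch.no_grad():
            f_clean = model_features(x)
            # Direction of the original attack's perturbation in feature space.
            direction = model_features(x_adv_init) - f_clean

        x_adv = x_adv_init.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            delta_f = model_features(x_adv) - f_clean
            # Push the new feature perturbation further along the original direction.
            loss = (delta_f * direction).flatten(1).sum(dim=1).mean()
            grad, = torch.autograd.grad(loss, x_adv)
            with torch.no_grad():
                x_adv = x_adv + lr * grad.sign()
                # Stay inside the original L-infinity budget and the valid image range.
                x_adv = torch.clamp(x_adv, min=x - eps, max=x + eps).clamp(0, 1).detach()
        return x_adv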

Original language: English
Journal: Proceedings of the IEEE International Conference on Computer Vision
Pages (from-to): 4732-4741
Number of pages: 10
ISSN: 1550-5499
DOIs
Publication status: Published - Oct 2019
Event: 17th IEEE/CVF International Conference on Computer Vision, ICCV 2019 - Seoul, Korea, Republic of
Duration: 27 Oct 2019 - 2 Nov 2019

Conference

Conference: 17th IEEE/CVF International Conference on Computer Vision, ICCV 2019
Country: Korea, Republic of
City: Seoul
Period: 27/10/2019 - 02/11/2019
Sponsor: Computer Vision Foundation, IEEE

Bibliographical note

Publisher Copyright:
© 2019 IEEE.
