Adversarial Example Decomposition
Research output: Working paper › Preprint › Research
Standard
Adversarial Example Decomposition. / Belongie, Serge; He, Horace; Jiang, Qingxuan; Katsman, Isay; Lim, Ser Nam.
2019.
RIS
TY - UNPB
T1 - Adversarial Example Decomposition
AU - Belongie, Serge
AU - He, Horace
AU - Jiang, Qingxuan
AU - Katsman, Isay
AU - Lim, Ser Nam
PY - 2019/6/21
Y1 - 2019/6/21
N2 - Research has shown that widely used deep neural networks are vulnerable to carefully crafted adversarial perturbations. Moreover, these adversarial perturbations often transfer across models. We hypothesize that adversarial weakness is composed of three sources of bias: architecture, dataset, and random initialization. We show that one can decompose adversarial examples into an architecture-dependent component, data-dependent component, and noise-dependent component and that these components behave intuitively. For example, noise-dependent components transfer poorly to all other models, while architecture-dependent components transfer better to retrained models with the same architecture. In addition, we demonstrate that these components can be recombined to improve transferability without sacrificing efficacy on the original model.
AB - Research has shown that widely used deep neural networks are vulnerable to carefully crafted adversarial perturbations. Moreover, these adversarial perturbations often transfer across models. We hypothesize that adversarial weakness is composed of three sources of bias: architecture, dataset, and random initialization. We show that one can decompose adversarial examples into an architecture-dependent component, data-dependent component, and noise-dependent component and that these components behave intuitively. For example, noise-dependent components transfer poorly to all other models, while architecture-dependent components transfer better to retrained models with the same architecture. In addition, we demonstrate that these components can be recombined to improve transferability without sacrificing efficacy on the original model.
UR - https://vision.cornell.edu/se3/wp-content/uploads/2019/06/Adversarial_Decomp_SPML19-1.pdf
U2 - https://doi.org/10.48550/arXiv.1812.01198
DO - https://doi.org/10.48550/arXiv.1812.01198
M3 - Preprint
BT - Adversarial Example Decomposition
ER -
ID: 306898197
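The abstract above describes separating an adversarial perturbation into data-dependent, architecture-dependent, and noise-dependent parts, and recombining them to study transferability. Below is a minimal, hypothetical sketch of one way such components could be estimated, by averaging FGSM perturbations over independently initialized models of each architecture. The model definitions, the attack, the averaging scheme, and all names and parameters are illustrative assumptions, not the authors' procedure from the paper.

```python
# Hypothetical sketch: estimate data-, architecture-, and noise-dependent
# components of an adversarial perturbation by averaging FGSM perturbations
# over several independently initialized models per architecture.
# Everything here (architectures, eps, averaging) is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def make_mlp(width):
    # A small MLP; `width` distinguishes the two toy architectures.
    return nn.Sequential(nn.Flatten(),
                         nn.Linear(28 * 28, width), nn.ReLU(),
                         nn.Linear(width, 10))

def fgsm_perturbation(model, x, y, eps=0.1):
    # Standard FGSM: sign of the input gradient of the loss, scaled by eps.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return eps * x.grad.sign()

# Two architectures, several random initializations ("retrainings") each.
archs = {"mlp_64": lambda: make_mlp(64), "mlp_128": lambda: make_mlp(128)}
x = torch.rand(1, 1, 28, 28)   # stand-in input (e.g., an MNIST-sized image)
y = torch.tensor([3])          # stand-in label

# Perturbations indexed by (architecture, initialization).
perts = {a: torch.stack([fgsm_perturbation(build(), x, y) for _ in range(4)])
         for a, build in archs.items()}

# Illustrative decomposition by successive averaging:
#   data component  ~ mean over all architectures and initializations
#   arch component  ~ per-architecture mean minus the data component
#   noise component ~ individual perturbation minus its architecture mean
data_component = torch.stack([p.mean(dim=0) for p in perts.values()]).mean(dim=0)
arch_component = {a: p.mean(dim=0) - data_component for a, p in perts.items()}
noise_component = {a: p - p.mean(dim=0) for a, p in perts.items()}

print({a: c.norm().item() for a, c in arch_component.items()})
```

Under these assumptions, the components could then be recombined (for example, data plus architecture components only) when probing transfer to retrained or differently architected models, in the spirit of the recombination experiment the abstract mentions.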