Adversarial Example Decomposition

Research output: Working paper › Preprint › Research

Standard

Adversarial Example Decomposition. / Belongie, Serge; He, Horace; Jiang, Qingxuan; Katsman, Isay; Lim, Ser Nam.

2019.

Research output: Working paper › Preprint › Research

Harvard

Belongie, S, He, H, Jiang, Q, Katsman, I & Lim, SN 2019 'Adversarial Example Decomposition'. https://doi.org/10.48550/arXiv.1812.01198

APA

Belongie, S., He, H., Jiang, Q., Katsman, I., & Lim, S. N. (2019). Adversarial Example Decomposition. https://doi.org/10.48550/arXiv.1812.01198

Vancouver

Belongie S, He H, Jiang Q, Katsman I, Lim SN. Adversarial Example Decomposition. 2019 Jun 21. https://doi.org/10.48550/arXiv.1812.01198

Author

Belongie, Serge ; He, Horace ; Jiang, Qingxuan ; Katsman, Isay ; Lim, Ser Nam. / Adversarial Example Decomposition. 2019.

Bibtex

@techreport{89716d2d164d49d285a2df13e013f0a1,
title = "Adversarial Example Decomposition",
abstract = "Research has shown that widely used deep neural networks are vulnerable to carefully crafted adversarial perturbations. Moreover, these adversarial perturbations often transfer across models. We hypothesize that adversarial weakness is composed of three sources of bias: architecture, dataset, and random initialization. We show that one can decompose adversarial examples into an architecture-dependent component, data-dependent component, and noise-dependent component and that these components behave intuitively. For example, noise-dependent components transfer poorly to all other models, while architecture-dependent components transfer better to retrained models with the same architecture. In addition, we demonstrate that these components can be recombined to improve transferability without sacrificing efficacy on the original model.",
author = "Serge Belongie and Horace He and Qingxuan Jiang and Isay Katsman and Lim, {Ser Nam}",
year = "2019",
month = jun,
day = "21",
doi = "https://doi.org/10.48550/arXiv.1812.01198",
language = "English",
type = "WorkingPaper",

}
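
The abstract describes decomposing an adversarial perturbation into data-dependent, architecture-dependent, and noise-dependent components and recombining them to improve transferability. The sketch below only illustrates that idea under a simple averaging assumption; it is not the authors' procedure. The names `perturbations` and `decompose` are hypothetical, and the perturbations are assumed to come from some standard attack (e.g. PGD) run against a grid of models that vary in architecture and random seed.

# Illustrative sketch (not from the paper): estimate the three components of
# an adversarial perturbation by averaging over a hypothetical grid of models.
import numpy as np

def decompose(perturbations):
    """perturbations: {architecture_name: [delta_per_seed, ...]} for one input.

    Returns (data_component, architecture_components, noise_components)
    under a simple averaging heuristic.
    """
    all_deltas = [d for seeds in perturbations.values() for d in seeds]
    data_component = np.mean(all_deltas, axis=0)  # part shared across all models

    architecture_components, noise_components = {}, {}
    for arch, deltas in perturbations.items():
        arch_mean = np.mean(deltas, axis=0)
        # part shared by retrained models with the same architecture
        architecture_components[arch] = arch_mean - data_component
        # per-seed remainder, tied to a specific random initialization
        noise_components[arch] = [d - arch_mean for d in deltas]
    return data_component, architecture_components, noise_components

# Per the abstract, recombining the data- and architecture-dependent parts
# (and dropping the noise-dependent part) is one way to probe transferability.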

RIS

TY - UNPB

T1 - Adversarial Example Decomposition

AU - Belongie, Serge

AU - He, Horace

AU - Jiang, Qingxuan

AU - Katsman, Isay

AU - Lim, Ser Nam

PY - 2019/6/21

Y1 - 2019/6/21

N2 - Research has shown that widely used deep neural networks are vulnerable to carefully crafted adversarial perturbations. Moreover, these adversarial perturbations often transfer across models. We hypothesize that adversarial weakness is composed of three sources of bias: architecture, dataset, and random initialization. We show that one can decompose adversarial examples into an architecture-dependent component, data-dependent component, and noise-dependent component and that these components behave intuitively. For example, noise-dependent components transfer poorly to all other models, while architecture-dependent components transfer better to retrained models with the same architecture. In addition, we demonstrate that these components can be recombined to improve transferability without sacrificing efficacy on the original model.

AB - Research has shown that widely used deep neural networks are vulnerable to carefully crafted adversarial perturbations. Moreover, these adversarial perturbations often transfer across models. We hypothesize that adversarial weakness is composed of three sources of bias: architecture, dataset, and random initialization. We show that one can decompose adversarial examples into an architecture-dependent component, data-dependent component, and noise-dependent component and that these components behave intuitively. For example, noise-dependent components transfer poorly to all other models, while architecture-dependent components transfer better to retrained models with the same architecture. In addition, we demonstrate that these components can be recombined to improve transferability without sacrificing efficacy on the original model.

UR - https://vision.cornell.edu/se3/wp-content/uploads/2019/06/Adversarial_Decomp_SPML19-1.pdf

U2 - 10.48550/arXiv.1812.01198

DO - 10.48550/arXiv.1812.01198

M3 - Preprint

BT - Adversarial Example Decomposition

ER -
