TY - GEN
T1 - ADMM attack
T2 - 24th Asia and South Pacific Design Automation Conference, ASPDAC 2019
AU - Zhao, Pu
AU - Xu, Kaidi
AU - Liu, Sijia
AU - Wang, Yanzhi
AU - Lin, Xue
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/1/21
Y1 - 2019/1/21
N2 - Many recent studies demonstrate that state-of-the-art deep neural networks (DNNs) can be easily fooled by adversarial examples, which are generated by adding carefully crafted, visually imperceptible distortions to original legitimate inputs through adversarial attacks. Adversarial examples can lead a DNN to misclassify them as any target label. In the literature, various methods have been proposed to minimize different ℓp norms of the distortion; however, a versatile framework covering all types of adversarial attacks is lacking. To achieve a better understanding of the security properties of DNNs, we propose a general framework for constructing adversarial examples that leverages the Alternating Direction Method of Multipliers (ADMM) to split the optimization problem for effective minimization of various ℓp norms of the distortion, including the ℓ0, ℓ1, ℓ2, and ℓ∞ norms. The proposed general framework thus unifies the methods of crafting ℓ0, ℓ1, ℓ2, and ℓ∞ attacks. Experimental results demonstrate that the proposed ADMM attacks achieve both a high attack success rate and minimal distortion for misclassification compared with state-of-the-art attack methods.
AB - Many recent studies demonstrate that state-of-the-art deep neural networks (DNNs) can be easily fooled by adversarial examples, which are generated by adding carefully crafted, visually imperceptible distortions to original legitimate inputs through adversarial attacks. Adversarial examples can lead a DNN to misclassify them as any target label. In the literature, various methods have been proposed to minimize different ℓp norms of the distortion; however, a versatile framework covering all types of adversarial attacks is lacking. To achieve a better understanding of the security properties of DNNs, we propose a general framework for constructing adversarial examples that leverages the Alternating Direction Method of Multipliers (ADMM) to split the optimization problem for effective minimization of various ℓp norms of the distortion, including the ℓ0, ℓ1, ℓ2, and ℓ∞ norms. The proposed general framework thus unifies the methods of crafting ℓ0, ℓ1, ℓ2, and ℓ∞ attacks. Experimental results demonstrate that the proposed ADMM attacks achieve both a high attack success rate and minimal distortion for misclassification compared with state-of-the-art attack methods.
UR - http://www.scopus.com/inward/record.url?scp=85061118037&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85061118037&partnerID=8YFLogxK
U2 - 10.1145/3287624.3288750
DO - 10.1145/3287624.3288750
M3 - Conference contribution
AN - SCOPUS:85061118037
T3 - Proceedings of the Asia and South Pacific Design Automation Conference, ASP-DAC
SP - 538
EP - 543
BT - ASP-DAC 2019 - 24th Asia and South Pacific Design Automation Conference
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 21 January 2019 through 24 January 2019
ER -