ADMM attack

An enhanced adversarial attack for deep neural networks with undetectable distortions

Pu Zhao, Kaidi Xu, Sijia Liu, Yanzhi Wang, Xue Lin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Many recent studies demonstrate that state-of-the-art deep neural networks (DNNs) can be easily fooled by adversarial examples, which are generated by adding carefully crafted, visually imperceptible distortions to legitimate inputs. Such adversarial examples can lead a DNN to misclassify them as arbitrary target labels. Various methods have been proposed in the literature to minimize different ℓp norms of the distortion, but a versatile framework covering all types of adversarial attacks is still lacking. To better understand the security properties of DNNs, we propose a general framework for constructing adversarial examples that leverages the Alternating Direction Method of Multipliers (ADMM) to split the optimization problem, enabling effective minimization of various ℓp norms of the distortion, including the ℓ0, ℓ1, ℓ2, and ℓ∞ norms. The proposed framework thus unifies the methods for crafting ℓ0, ℓ1, ℓ2, and ℓ∞ attacks. Experimental results demonstrate that the proposed ADMM attacks achieve both a high attack success rate and minimal distortion compared with state-of-the-art attack methods.
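The abstract describes splitting an attack objective of the form "distortion norm plus misclassification loss" via ADMM, so that the norm term is handled by a proximal update and the loss term by a gradient-based update. The following toy sketch illustrates that splitting idea on an ℓ2 distortion term with a generic differentiable loss; it is an illustration of scaled-form ADMM under simplifying assumptions (single gradient step per z-subproblem, fixed penalty rho), not the authors' actual algorithm, and the function names are hypothetical:

```python
import numpy as np

def l2_prox(v, lam):
    """Proximal operator of lam * ||.||_2 (block soft-thresholding)."""
    n = np.linalg.norm(v)
    return np.zeros_like(v) if n <= lam else (1.0 - lam / n) * v

def admm_attack(grad_loss, dim, steps=200, rho=1.0, lam=0.05, lr=0.1):
    """Toy ADMM loop for: minimize lam*||delta||_2 + loss(delta).

    The problem is split as delta = z, with delta carrying the norm
    term (prox step) and z carrying the loss term (gradient step),
    coupled through the scaled dual variable u.
    """
    delta = np.zeros(dim)   # distortion variable (norm subproblem)
    z = np.zeros(dim)       # consensus copy (loss subproblem)
    u = np.zeros(dim)       # scaled dual variable
    for _ in range(steps):
        # delta-update: exact prox of the l2 norm at (z - u)
        delta = l2_prox(z - u, lam / rho)
        # z-update: one gradient step on loss + rho/2*||delta - z + u||^2
        z = z - lr * (grad_loss(z) + rho * (z - delta - u))
        # dual update: accumulate the constraint residual delta - z
        u = u + delta - z
    return delta
```

With a quadratic loss grad_loss(z) = z - t pulling the distortion toward a target t, the loop converges to the shrunken solution (1 - lam/||t||)·t, matching the closed-form ℓ2 prox of the combined objective. Swapping l2_prox for the proximal operator of another ℓp norm (e.g. elementwise soft-thresholding for ℓ1) changes only the delta-update, which is the unification the abstract refers to.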

Original language: English (US)
Title of host publication: ASP-DAC 2019 - 24th Asia and South Pacific Design Automation Conference
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 538-543
Number of pages: 6
ISBN (Electronic): 9781450360074
DOI: 10.1145/3287624.3288750
State: Published - Jan 21 2019
Externally published: Yes
Event: 24th Asia and South Pacific Design Automation Conference, ASPDAC 2019 - Tokyo, Japan
Duration: Jan 21 2019 - Jan 24 2019

Publication series

Name: Proceedings of the Asia and South Pacific Design Automation Conference, ASP-DAC

Conference

Conference: 24th Asia and South Pacific Design Automation Conference, ASPDAC 2019
Country: Japan
City: Tokyo
Period: 1/21/19 - 1/24/19


ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Computer Science Applications
  • Computer Graphics and Computer-Aided Design

Cite this

Zhao, P., Xu, K., Liu, S., Wang, Y., & Lin, X. (2019). ADMM attack: An enhanced adversarial attack for deep neural networks with undetectable distortions. In ASP-DAC 2019 - 24th Asia and South Pacific Design Automation Conference (pp. 538-543). (Proceedings of the Asia and South Pacific Design Automation Conference, ASP-DAC). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/3287624.3288750

