Reinforced adversarial attacks on deep neural networks using ADMM

Pu Zhao, Kaidi Xu, Tianyun Zhang, Makan Fardad, Yanzhi Wang, Xue Lin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

As deep learning penetrates wide-ranging application domains, it is essential to evaluate the robustness of deep neural networks (DNNs) under adversarial attacks, especially for security-critical applications. To better understand the security properties of DNNs, we propose a general framework for constructing adversarial examples based on ADMM (the Alternating Direction Method of Multipliers). With minor changes, this framework can be adapted to implement both L2 and L0 attacks. Our ADMM attacks require less distortion to induce misclassification than the Carlini-Wagner (CW) attacks. The ADMM attack is also able to break defenses such as defensive distillation and adversarial training, and it exhibits strong attack transferability.
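To make the splitting idea concrete, below is a minimal sketch of an ADMM-style L2 attack on a toy linear classifier. This is an illustration of the general recipe (split the distortion term from the attack-loss term via a consensus constraint, then alternate closed-form updates), not the paper's actual formulation; the weights `w`, input `x`, and constants `c`, `rho`, `kappa` are invented for the example.

```python
import numpy as np

# Toy setup: linear classifier score s(x) = w @ x; the clean input is
# classified positive (score > 0). We seek a small perturbation delta with
#   minimize ||z||^2 + c * max(0, w @ (x + delta) + kappa)
#   subject to delta = z,
# i.e. the L2 distortion and the hinge-style attack loss are split across
# two variables coupled by a consensus constraint, as in ADMM.
w = np.array([1.0, -0.5])        # illustrative classifier weights
x = np.array([2.0, 1.0])         # clean input, score = 1.5 > 0
c, rho, kappa = 5.0, 2.0, 0.1    # attack weight, ADMM penalty, margin
ww = w @ w

delta = np.zeros(2)
z = np.zeros(2)
u = np.zeros(2)                  # scaled dual variable

for _ in range(100):
    # z-update: prox of ||z||^2, closed form
    z = rho * (delta + u) / (2.0 + rho)
    # delta-update: prox of the hinge loss composed with w, at v = z - u;
    # alpha is how far to move against w (clipped at the hinge's slope c/rho)
    v = z - u
    s = w @ (x + v) + kappa
    alpha = np.clip(s / ww, 0.0, c / rho)
    delta = v - alpha * w
    # dual update on the consensus residual delta - z
    u = u + delta - z

adv = x + delta
print(w @ x, w @ adv)  # clean score 1.5; adversarial score ≈ -0.1
```

Because both subproblems have closed-form proximal solutions here, the iterates converge quickly to the minimum-L2 perturbation that pushes the score just past the decision boundary (to -kappa). Real attacks replace the linear score with a DNN loss, so the delta-update becomes an inner gradient-based solve instead of a closed form.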

Original language: English (US)
Title of host publication: 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1169-1173
Number of pages: 5
ISBN (Electronic): 9781728112954
DOI: 10.1109/GlobalSIP.2018.8646651
State: Published - Feb 20, 2019
Event: 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Anaheim, United States
Duration: Nov 26, 2018 - Nov 29, 2018

Publication series

Name: 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Proceedings

Conference

Conference: 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018
Country: United States
City: Anaheim
Period: 11/26/18 - 11/29/18


Keywords

  • ADMM (Alternating Direction Method of Multipliers)
  • Adversarial Attacks
  • Deep Neural Networks

ASJC Scopus subject areas

  • Information Systems
  • Signal Processing

Cite this

Zhao, P., Xu, K., Zhang, T., Fardad, M., Wang, Y., & Lin, X. (2019). Reinforced adversarial attacks on deep neural networks using ADMM. In 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Proceedings (pp. 1169-1173). [8646651] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/GlobalSIP.2018.8646651
