Reinforced adversarial attacks on deep neural networks using ADMM

Pu Zhao, Kaidi Xu, Tianyun Zhang, Makan Fardad, Yanzhi Wang, Xue Lin

Research output: Conference contribution

2 Scopus citations

Abstract

As deep learning penetrates wide application domains, it is essential to evaluate the robustness of deep neural networks (DNNs) under adversarial attacks, especially for security-critical applications. To better understand the security properties of DNNs, we propose a general framework for constructing adversarial examples based on ADMM (Alternating Direction Method of Multipliers). This general framework can be adapted to implement L2 and L0 attacks with minor changes. Our ADMM attacks require less distortion to cause misclassification than C&W attacks. Our ADMM attack is also able to break defenses such as defensive distillation and adversarial training, and provides strong attack transferability.
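The abstract describes an ADMM-based splitting of the attack objective into a distortion term and a classification-loss term. The sketch below shows one way such an ADMM L2 attack could be organized, assuming a PyTorch classifier; the function name, the C&W-style margin loss, the closed-form distortion update, and all hyper-parameters (rho, c, iteration counts, learning rate) are illustrative assumptions rather than the paper's exact formulation.

```python
# A minimal sketch of an ADMM-based L2 attack in the spirit of the abstract.
# Assumptions (not from the paper): variable names, hyper-parameters, PyTorch,
# and the C&W-style margin loss are illustrative choices.
import torch


def admm_l2_attack(model, x, target, rho=1.0, c=1.0,
                   outer_iters=20, inner_iters=30, lr=1e-2):
    """Craft a targeted L2 adversarial example via ADMM splitting.

    The objective ||delta||_2^2 + c * f(x + delta) is split into two blocks
    (z for the distortion term, w for the loss term) coupled by the constraint
    z = w, and solved with alternating updates plus a scaled dual variable u.
    `target` is a LongTensor of desired labels.
    """
    x = x.detach()
    z = torch.zeros_like(x)          # distortion block (closed-form update)
    w = torch.zeros_like(x)          # loss block (gradient update)
    u = torch.zeros_like(x)          # scaled dual variable

    for _ in range(outer_iters):
        # z-update: argmin_z ||z||^2 + (rho/2)||z - (w - u)||^2, closed form
        z = (rho / (2.0 + rho)) * (w - u)

        # w-update: a few gradient steps on c*f(x+w) + (rho/2)||z - w + u||^2
        w = w.detach().requires_grad_(True)
        opt = torch.optim.Adam([w], lr=lr)
        for _ in range(inner_iters):
            logits = model(torch.clamp(x + w, 0.0, 1.0))
            target_logit = logits.gather(1, target.view(-1, 1)).squeeze(1)
            other_logit = logits.scatter(
                1, target.view(-1, 1), float("-inf")).max(dim=1).values
            # C&W-style margin loss: push the target class above the rest
            attack_loss = torch.clamp(other_logit - target_logit, min=0).sum()
            penalty = (rho / 2.0) * ((z - w + u) ** 2).sum()
            loss = c * attack_loss + penalty
            opt.zero_grad()
            loss.backward()
            opt.step()
        w = w.detach()

        # dual update enforcing the consensus constraint z = w
        u = u + z - w

    return torch.clamp(x + z, 0.0, 1.0)
```

For an L0 attack, the same splitting could in principle be reused with the closed-form z-update replaced by a sparsity-promoting proximal step (for example, hard-thresholding), which is consistent with the abstract's claim that the framework adapts to L2 and L0 attacks with minor changes.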

Original language: English (US)
Title of host publication: 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1169-1173
Number of pages: 5
ISBN (Electronic): 9781728112954
DOIs
State: Published - Jul 2 2018
Event: 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Anaheim, United States
Duration: Nov 26 2018 - Nov 29 2018

Publication series

Name: 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Proceedings

Conference

Conference: 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018
Country/Territory: United States
City: Anaheim
Period: 11/26/18 - 11/29/18

Keywords

  • ADMM (Alternating Direction Method of Multipliers)
  • Adversarial Attacks
  • Deep Neural Networks

ASJC Scopus subject areas

  • Information Systems
  • Signal Processing
