TY - GEN
T1 - Reinforced adversarial attacks on deep neural networks using ADMM
AU - Zhao, Pu
AU - Xu, Kaidi
AU - Zhang, Tianyun
AU - Fardad, Makan
AU - Wang, Yanzhi
AU - Lin, Xue
N1 - Funding Information:
This work is supported by the National Science Foundation (CCF-1733701, CNS-1704662, CNS-1739748, CMMI-1750531 and ECCS-1609916), Air Force Research Laboratory FA8750-18-2-0058, and the U.S. Office of Naval Research. The fourth author gratefully acknowledges financial support from the National Science Foundation under awards CAREER CMMI-1750531 and ECCS-1609916. We thank researchers at the U.S. Naval Research Laboratory for their comments on previous drafts of this paper.
Publisher Copyright:
© 2018 IEEE.
PY - 2019/2/20
Y1 - 2019/2/20
N2 - As deep learning penetrates into a wide range of application domains, it is essential to evaluate the robustness of deep neural networks (DNNs) under adversarial attacks, especially in security-critical applications. To better understand the security properties of DNNs, we propose a general framework for constructing adversarial examples based on ADMM (Alternating Direction Method of Multipliers). The framework can be adapted to implement L2 and L0 attacks with minor changes. Our ADMM attacks require less distortion to cause misclassification than the CW (Carlini & Wagner) attacks. The ADMM attack can also break defenses such as defensive distillation and adversarial training, and it exhibits strong attack transferability.
AB - As deep learning penetrates into a wide range of application domains, it is essential to evaluate the robustness of deep neural networks (DNNs) under adversarial attacks, especially in security-critical applications. To better understand the security properties of DNNs, we propose a general framework for constructing adversarial examples based on ADMM (Alternating Direction Method of Multipliers). The framework can be adapted to implement L2 and L0 attacks with minor changes. Our ADMM attacks require less distortion to cause misclassification than the CW (Carlini & Wagner) attacks. The ADMM attack can also break defenses such as defensive distillation and adversarial training, and it exhibits strong attack transferability.
KW - ADMM (Alternating Direction Method of Multipliers)
KW - Adversarial Attacks
KW - Deep Neural Networks
UR - http://www.scopus.com/inward/record.url?scp=85063101070&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85063101070&partnerID=8YFLogxK
U2 - 10.1109/GlobalSIP.2018.8646651
DO - 10.1109/GlobalSIP.2018.8646651
M3 - Conference contribution
AN - SCOPUS:85063101070
T3 - 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Proceedings
SP - 1169
EP - 1173
BT - 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018
Y2 - 26 November 2018 through 29 November 2018
ER -