TY - GEN
T1 - Fault sneaking attack on deep neural networks
T2 - 56th Annual Design Automation Conference, DAC 2019
AU - Zhao, Pu
AU - Wang, Siyue
AU - Gongye, Cheng
AU - Wang, Yanzhi
AU - Fei, Yunsi
AU - Lin, Xue
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/6/2
Y1 - 2019/6/2
N2 - Despite the great achievements of deep neural networks (DNNs), the vulnerability of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability. We propose the fault sneaking attack on DNNs, where the adversary aims to misclassify certain input images into arbitrary target labels by modifying the DNN parameters. We apply ADMM (alternating direction method of multipliers) to solve the optimization problem of the fault sneaking attack under two constraints: 1) the classification of the other images should remain unchanged and 2) the parameter modifications should be minimized. Specifically, the first constraint requires us not only to inject the designated faults (misclassifications), but also to hide the faults for stealthy or sneaking considerations by maintaining model accuracy. The second constraint requires us to minimize the parameter modifications, using the ℓ0 norm to measure the number of modifications and the ℓ2 norm to measure their magnitude. Comprehensive experimental evaluation demonstrates that the proposed framework can inject multiple sneaking faults without degrading the overall test accuracy.
AB - Despite the great achievements of deep neural networks (DNNs), the vulnerability of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability. We propose the fault sneaking attack on DNNs, where the adversary aims to misclassify certain input images into arbitrary target labels by modifying the DNN parameters. We apply ADMM (alternating direction method of multipliers) to solve the optimization problem of the fault sneaking attack under two constraints: 1) the classification of the other images should remain unchanged and 2) the parameter modifications should be minimized. Specifically, the first constraint requires us not only to inject the designated faults (misclassifications), but also to hide the faults for stealthy or sneaking considerations by maintaining model accuracy. The second constraint requires us to minimize the parameter modifications, using the ℓ0 norm to measure the number of modifications and the ℓ2 norm to measure their magnitude. Comprehensive experimental evaluation demonstrates that the proposed framework can inject multiple sneaking faults without degrading the overall test accuracy.
KW - ADMM
KW - Deep neural networks
KW - Fault injection
UR - http://www.scopus.com/inward/record.url?scp=85067812350&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85067812350&partnerID=8YFLogxK
U2 - 10.1145/3316781.3317825
DO - 10.1145/3316781.3317825
M3 - Conference contribution
AN - SCOPUS:85067812350
T3 - Proceedings - Design Automation Conference
BT - Proceedings of the 56th Annual Design Automation Conference 2019, DAC 2019
PB - Association for Computing Machinery
Y2 - 2 June 2019 through 6 June 2019
ER -