TY - JOUR
T1 - StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs
T2 - IEEE Transactions on Neural Networks and Learning Systems
AU - Zhang, Tianyun
AU - Ye, Shaokai
AU - Feng, Xiaoyu
AU - Ma, Xiaolong
AU - Zhang, Kaiqi
AU - Li, Zhengang
AU - Tang, Jian
AU - Liu, Sijia
AU - Lin, Xue
AU - Liu, Yongpan
AU - Fardad, Makan
AU - Wang, Yanzhi
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2022/5/1
Y1 - 2022/5/1
N2 - Weight pruning methods for deep neural networks (DNNs) have been demonstrated to achieve good model pruning rates without loss of accuracy, thereby alleviating the significant computation/storage requirements of large-scale DNNs. Structured weight pruning methods have been proposed to overcome the limitation of irregular network structure and have demonstrated actual GPU acceleration. However, in prior work, the pruning rate (degree of sparsity) and GPU acceleration are limited (to less than 50%) when accuracy needs to be maintained. In this work, we overcome these limitations by proposing a unified, systematic framework of structured weight pruning for DNNs. The framework can be used to induce different types of structured sparsity, such as filterwise, channelwise, and shapewise sparsity, as well as nonstructured sparsity. It incorporates stochastic gradient descent (SGD, or ADAM) with the alternating direction method of multipliers (ADMM) and can be understood as a dynamic regularization method in which the regularization target is analytically updated in each iteration. Leveraging special characteristics of ADMM, we further propose a progressive, multistep weight pruning framework and a network purification and unused-path removal procedure to achieve a higher pruning rate without accuracy loss. Without loss of accuracy on the AlexNet model, we achieve 2.58× and 3.65× average measured speedup on two GPUs, clearly outperforming prior work. The average speedups reach 3.15× and 8.52× when allowing a moderate accuracy loss of 2%. In this case, the model compression for convolutional layers is 15.0×, corresponding to 11.93× measured CPU speedup. As another example, for the ResNet-18 model on the CIFAR-10 data set, we achieve an unprecedented 54.2× structured pruning rate on CONV layers. This is a 32× higher pruning rate compared with recent work and can further translate into a 7.6× inference-time speedup on the Adreno 640 mobile GPU compared with the original, unpruned DNN model. We share our code and models at http://bit.ly/2M0V7DO.
AB - Weight pruning methods for deep neural networks (DNNs) have been demonstrated to achieve good model pruning rates without loss of accuracy, thereby alleviating the significant computation/storage requirements of large-scale DNNs. Structured weight pruning methods have been proposed to overcome the limitation of irregular network structure and have demonstrated actual GPU acceleration. However, in prior work, the pruning rate (degree of sparsity) and GPU acceleration are limited (to less than 50%) when accuracy needs to be maintained. In this work, we overcome these limitations by proposing a unified, systematic framework of structured weight pruning for DNNs. The framework can be used to induce different types of structured sparsity, such as filterwise, channelwise, and shapewise sparsity, as well as nonstructured sparsity. It incorporates stochastic gradient descent (SGD, or ADAM) with the alternating direction method of multipliers (ADMM) and can be understood as a dynamic regularization method in which the regularization target is analytically updated in each iteration. Leveraging special characteristics of ADMM, we further propose a progressive, multistep weight pruning framework and a network purification and unused-path removal procedure to achieve a higher pruning rate without accuracy loss. Without loss of accuracy on the AlexNet model, we achieve 2.58× and 3.65× average measured speedup on two GPUs, clearly outperforming prior work. The average speedups reach 3.15× and 8.52× when allowing a moderate accuracy loss of 2%. In this case, the model compression for convolutional layers is 15.0×, corresponding to 11.93× measured CPU speedup. As another example, for the ResNet-18 model on the CIFAR-10 data set, we achieve an unprecedented 54.2× structured pruning rate on CONV layers. This is a 32× higher pruning rate compared with recent work and can further translate into a 7.6× inference-time speedup on the Adreno 640 mobile GPU compared with the original, unpruned DNN model. We share our code and models at http://bit.ly/2M0V7DO.
KW - Alternating direction method of multipliers (ADMM)
KW - deep neural networks (DNNs)
KW - hardware acceleration
KW - weight pruning
UR - http://www.scopus.com/inward/record.url?scp=85100916448&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85100916448&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2020.3045153
DO - 10.1109/TNNLS.2020.3045153
M3 - Article
C2 - 33587706
AN - SCOPUS:85100916448
SN - 2162-237X
VL - 33
SP - 2259
EP - 2273
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 5
ER -
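
The abstract describes the method at a high level: train with SGD (or ADAM) on the original loss plus a quadratic term that pulls the weights toward a structured-sparse auxiliary variable, re-project that auxiliary variable analytically in each ADMM iteration, and update a dual variable. The following is a minimal, illustrative PyTorch sketch of one such round for filterwise sparsity; the helper project_filterwise, the hyperparameters, and the loop structure are assumptions made for illustration and are not taken from the authors' released code (available at the link above).

import torch

def project_filterwise(W, num_filters_to_keep):
    # W: conv weight tensor of shape (out_channels, in_channels, kH, kW).
    # Euclidean projection onto the filterwise-sparsity set: keep the
    # num_filters_to_keep filters with the largest L2 norm, zero the rest.
    norms = W.flatten(1).norm(dim=1)
    keep = norms.topk(num_filters_to_keep).indices
    Z = torch.zeros_like(W)
    Z[keep] = W[keep]
    return Z

def admm_round(model, layer, loss_fn, data_loader, k, rho=1e-3, lr=1e-2, epochs=1):
    # One ADMM round: SGD on (loss + rho/2 * ||W - Z + U||^2), followed by the
    # analytic Z-update (projection) and the dual update. Hyperparameter values
    # here are illustrative assumptions, not the paper's settings.
    W = layer.weight
    Z = project_filterwise(W.detach(), k)     # auxiliary structured-sparse copy
    U = torch.zeros_like(W)                   # scaled dual variable
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            # Dynamic regularizer: its target (Z - U) moves as Z and U are updated.
            penalty = (rho / 2) * (W - Z + U).pow(2).sum()
            loss = loss_fn(model(x), y) + penalty
            loss.backward()
            opt.step()
        Z = project_filterwise((W + U).detach(), k)   # Z-update: re-project
        U = U + W.detach() - Z                        # dual update
    return Z   # after convergence, hard-prune W to Z's support and retrain

In this sketch the per-layer structured-sparsity constraint (keep k filters) stands in for the paper's various sparsity types; channelwise or shapewise sparsity would only change the projection step.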