TY - JOUR
T1 - StructADMM
T2 - A systematic, high-efficiency framework of structured weight pruning for DNNs
AU - Zhang, Tianyun
AU - Ye, Shaokai
AU - Zhang, Kaiqi
AU - Ma, Xiaolong
AU - Liu, Ning
AU - Zhang, Linfeng
AU - Tang, Jian
AU - Ma, Kaisheng
AU - Lin, Xue
AU - Fardad, Makan
AU - Wang, Yanzhi
N1 - Publisher Copyright:
Copyright © 2018, The Authors. All rights reserved.
PY - 2018/7/29
Y1 - 2018/7/29
AB - Weight pruning methods for DNNs have been demonstrated to achieve a good model pruning rate without loss of accuracy, thereby alleviating the significant computation/storage requirements of large-scale DNNs. Structured weight pruning methods have been proposed to overcome the limitation of irregular network structure and have demonstrated actual GPU acceleration. However, in prior work the pruning rate (degree of sparsity) and GPU acceleration are limited (to less than 50%) when accuracy needs to be maintained. In this work, we overcome these limitations by proposing a unified, systematic framework of structured weight pruning for DNNs. It is a framework that can be used to induce different types of structured sparsity, such as filter-wise, channel-wise, and shape-wise sparsity, as well as non-structured sparsity. The proposed framework incorporates stochastic gradient descent with ADMM and can be understood as a dynamic regularization method in which the regularization target is analytically updated in each iteration. Without loss of accuracy on the AlexNet model, we achieve 2.58× and 3.65× average measured speedup on two GPUs, clearly outperforming prior work. The average speedups reach 3.15× and 8.52× when allowing a moderate accuracy loss of 2%. In this case, the model compression for convolutional layers is 15.0×, corresponding to 11.93× measured CPU speedup. Our experiments on the ResNet model and on other datasets such as UCF101 and CIFAR-10 demonstrate the consistently higher performance of our framework.
UR - http://www.scopus.com/inward/record.url?scp=85094542141&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85094542141&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:85094542141
JO - arXiv
JF - arXiv
ER -
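The abstract describes the core algorithm only at a high level: stochastic gradient descent combined with ADMM, understood as dynamic regularization whose target is analytically updated each iteration. Below is a minimal sketch of that idea for filter-wise sparsity, assuming PyTorch; every name and parameter here (admm_prune, project_filter_sparsity, prune_ratio, rho, admm_steps) is an illustrative assumption, not the authors' released code or exact method.

# Illustrative sketch of ADMM-based structured (filter-wise) weight pruning.
# W-step: SGD on loss + (rho/2)||W - Z + U||^2 (the "dynamic regularization").
# Z-step: analytic projection of W + U onto the filter-sparse set.
# U-step: dual update U += W - Z.
import torch
import torch.nn as nn
import torch.nn.functional as F


def project_filter_sparsity(W, prune_ratio):
    """Euclidean projection onto the filter-sparse set: keep the output
    filters with largest L2 norm, zero out the rest."""
    num_filters = W.shape[0]
    num_keep = max(1, int(round(num_filters * (1 - prune_ratio))))
    norms = W.reshape(num_filters, -1).norm(dim=1)
    keep = torch.topk(norms, num_keep).indices
    Z = torch.zeros_like(W)
    Z[keep] = W[keep]
    return Z


def admm_prune(model, conv_layers, data_loader, prune_ratio=0.7, rho=1e-3,
               admm_steps=5, sgd_steps_per_admm=100, lr=1e-2):
    """Alternate SGD on the regularized loss with analytic Z/U updates."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    # Auxiliary variable Z (regularization target) and scaled dual U per layer.
    Z = {m: project_filter_sparsity(m.weight.detach(), prune_ratio) for m in conv_layers}
    U = {m: torch.zeros_like(m.weight) for m in conv_layers}

    data_iter = iter(data_loader)
    for _ in range(admm_steps):
        for _ in range(sgd_steps_per_admm):
            try:
                x, y = next(data_iter)
            except StopIteration:
                data_iter = iter(data_loader)
                x, y = next(data_iter)
            loss = F.cross_entropy(model(x), y)
            # Dynamic regularization: pull each layer's weights toward Z - U.
            for m in conv_layers:
                loss = loss + 0.5 * rho * (m.weight - Z[m] + U[m]).pow(2).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Analytic update of the regularization target (Z-step) and dual (U-step).
        for m in conv_layers:
            W = m.weight.detach()
            Z[m] = project_filter_sparsity(W + U[m], prune_ratio)
            U[m] = U[m] + W - Z[m]

    # Hard-prune: project the trained weights so removed filters are exactly zero.
    with torch.no_grad():
        for m in conv_layers:
            m.weight.copy_(project_filter_sparsity(m.weight.detach(), prune_ratio))


if __name__ == "__main__":
    # Tiny smoke test on random data (illustrative only).
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
    )
    convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
    data = [(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))) for _ in range(4)]
    admm_prune(model, convs, data, prune_ratio=0.5, admm_steps=2, sgd_steps_per_admm=4)
    for m in convs:
        kept = (m.weight.detach().reshape(m.out_channels, -1).norm(dim=1) > 0).sum().item()
        print(f"kept {kept}/{m.out_channels} filters")

The Z-step is what makes the regularization target analytically computable: it is a Euclidean projection onto the chosen structured-sparsity set (here, keeping the filters with largest L2 norm). Inducing channel-wise, shape-wise, or non-structured sparsity instead would only change that projection, not the overall loop.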