An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM

Geng Yuan, Xiaolong Ma, Caiwen Ding, Sheng Lin, Tianyun Zhang, Zeinab S. Jalali, Yilong Zhao, Li Jiang, Sucheta Soundarajan, Yanzhi Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The high computation and memory storage requirements of large deep neural network (DNN) models pose intense challenges to the conventional von Neumann architecture, incurring substantial data movement in the memory hierarchy. The memristor crossbar array has emerged as a promising solution to mitigate these challenges and enable low-power acceleration of DNNs. Memristor-based weight pruning and weight quantization have been separately investigated and proven effective in reducing area and power consumption compared to the original DNN model. However, there has been no systematic investigation of memristor-based neuromorphic computing (NC) systems considering both weight pruning and weight quantization. In this paper, we propose a unified and systematic memristor-based framework considering both structured weight pruning and weight quantization by incorporating the alternating direction method of multipliers (ADMM) into DNN training. We consider hardware constraints such as crossbar block pruning, conductance range, and the mismatch between weight values and real devices, to achieve high accuracy, low power consumption, and a small area footprint. Our framework mainly consists of three steps: memristor-based ADMM-regularized optimization, masked mapping, and retraining. Experimental results show that our proposed framework achieves a 29.81× (20.88×) weight compression ratio, with 98.38% (96.96%) power reduction and 98.29% (97.47%) area reduction on the VGG-16 (ResNet-18) network, with only 0.5% (0.76%) accuracy loss, compared to the original DNN models. We share our models at the anonymous link http://bit.ly/2Jp5LHJ.
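To make the three-step flow in the abstract concrete: ADMM splits the hardware-constrained training problem min_W f(W) subject to W ∈ S (the set of weights that are simultaneously structured-sparse and restricted to discrete conductance levels) into alternating updates W^{k+1} = argmin_W f(W) + (ρ/2)‖W − Z^k + U^k‖², Z^{k+1} = Π_S(W^{k+1} + U^k), and U^{k+1} = U^k + W^{k+1} − Z^{k+1}, where Π_S is the Euclidean projection onto the constraint set. The sketch below is a minimal, hypothetical PyTorch illustration of this loop plus the masked-mapping and retraining steps; all names (project_structured, project_quantized, prune_ratio, q_levels) are illustrative assumptions, not the authors' released code, and the paper's device-level constraints (crossbar block pruning, conductance range, device mismatch) are reduced here to a simple column mask and a uniform level grid.

import torch
import torch.nn as nn

def project_structured(w, prune_ratio):
    # Keep the columns with the largest L2 norms; zero out the rest
    # (a simplified stand-in for crossbar column/block pruning).
    norms = w.norm(dim=0)
    k = max(1, int(norms.numel() * (1 - prune_ratio)))
    mask = torch.zeros_like(norms)
    mask[torch.topk(norms, k).indices] = 1.0
    return w * mask, mask

def project_quantized(w, q_levels):
    # Snap every weight to the nearest quantization level (a stand-in for
    # discrete memristor conductance states; 0 is a level, so pruned
    # entries stay zero).
    idx = (w.unsqueeze(-1) - q_levels).abs().argmin(dim=-1)
    return q_levels[idx]

# Step 1: ADMM-regularized optimization, shown on a stand-in single layer.
model = nn.Linear(64, 10)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
rho, prune_ratio = 1e-3, 0.75
q_levels = torch.linspace(-1.0, 1.0, 17)   # hypothetical 17-level grid
W = model.weight
Z, U = W.detach().clone(), torch.zeros_like(W)

for step in range(300):
    x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
    loss = nn.functional.cross_entropy(model(x), y)
    # Augmented-Lagrangian penalty pulls W toward its constrained copy Z.
    admm_loss = loss + (rho / 2) * (W - Z + U).pow(2).sum()
    opt.zero_grad(); admm_loss.backward(); opt.step()
    if (step + 1) % 50 == 0:
        # Z-update: project onto the constraint set (sparsify, then quantize).
        Zs, mask = project_structured((W + U).detach(), prune_ratio)
        Z = project_quantized(Zs, q_levels)
        U = U + W.detach() - Z              # dual update

# Steps 2-3: masked mapping and retraining - hard-apply the mask and
# quantization, then fine-tune while keeping pruned columns at zero.
with torch.no_grad():
    Zs, mask = project_structured(W.detach(), prune_ratio)
    W.copy_(project_quantized(Zs, q_levels))
W.register_hook(lambda g: g * mask)         # gradients of pruned columns -> 0
# ... retraining proceeds as an ordinary training loop on `loss` alone ...

In the full framework the projection would also have to respect device-level effects (finite conductance range, programming variations), and quantization is typically re-enforced during retraining rather than applied once; the sketch omits these details to stay readable.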

Original language: English (US)
Title of host publication: International Symposium on Low Power Electronics and Design, ISLPED 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728129549
DOI: 10.1109/ISLPED.2019.8824944
State: Published - Jul 2019
Event: 2019 IEEE/ACM International Symposium on Low Power Electronics and Design, ISLPED 2019 - Lausanne, Switzerland
Duration: Jul 29, 2019 - Jul 31, 2019

Publication series

Name: Proceedings of the International Symposium on Low Power Electronics and Design
Volume: 2019-July
ISSN (Print): 1533-4678

Conference

Conference: 2019 IEEE/ACM International Symposium on Low Power Electronics and Design, ISLPED 2019
Country: Switzerland
City: Lausanne
Period: 7/29/19 - 7/31/19

Fingerprint

Memristors
Data storage equipment
Electric power utilization
Deep neural networks
Hardware

ASJC Scopus subject areas

  • Engineering (all)

Cite this

Yuan, G., Ma, X., Ding, C., Lin, S., Zhang, T., Jalali, Z. S., ... Wang, Y. (2019). An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM. In International Symposium on Low Power Electronics and Design, ISLPED 2019 [8824944] (Proceedings of the International Symposium on Low Power Electronics and Design; Vol. 2019-July). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ISLPED.2019.8824944

An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM. / Yuan, Geng; Ma, Xiaolong; Ding, Caiwen; Lin, Sheng; Zhang, Tianyun; Jalali, Zeinab S.; Zhao, Yilong; Jiang, Li; Soundarajan, Sucheta; Wang, Yanzhi.

International Symposium on Low Power Electronics and Design, ISLPED 2019. Institute of Electrical and Electronics Engineers Inc., 2019. 8824944 (Proceedings of the International Symposium on Low Power Electronics and Design; Vol. 2019-July).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Yuan, G, Ma, X, Ding, C, Lin, S, Zhang, T, Jalali, ZS, Zhao, Y, Jiang, L, Soundarajan, S & Wang, Y 2019, An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM. in International Symposium on Low Power Electronics and Design, ISLPED 2019., 8824944, Proceedings of the International Symposium on Low Power Electronics and Design, vol. 2019-July, Institute of Electrical and Electronics Engineers Inc., 2019 IEEE/ACM International Symposium on Low Power Electronics and Design, ISLPED 2019, Lausanne, Switzerland, 7/29/19. https://doi.org/10.1109/ISLPED.2019.8824944
Yuan G, Ma X, Ding C, Lin S, Zhang T, Jalali ZS et al. An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM. In International Symposium on Low Power Electronics and Design, ISLPED 2019. Institute of Electrical and Electronics Engineers Inc. 2019. 8824944. (Proceedings of the International Symposium on Low Power Electronics and Design). https://doi.org/10.1109/ISLPED.2019.8824944
Yuan, Geng ; Ma, Xiaolong ; Ding, Caiwen ; Lin, Sheng ; Zhang, Tianyun ; Jalali, Zeinab S. ; Zhao, Yilong ; Jiang, Li ; Soundarajan, Sucheta ; Wang, Yanzhi. / An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM. International Symposium on Low Power Electronics and Design, ISLPED 2019. Institute of Electrical and Electronics Engineers Inc., 2019. (Proceedings of the International Symposium on Low Power Electronics and Design).
@inproceedings{dc7abaad59f5414d85de41235f9ef276,
title = "An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM",
abstract = "The high computation and memory storage requirements of large deep neural network (DNN) models pose intense challenges to the conventional von Neumann architecture, incurring substantial data movement in the memory hierarchy. The memristor crossbar array has emerged as a promising solution to mitigate these challenges and enable low-power acceleration of DNNs. Memristor-based weight pruning and weight quantization have been separately investigated and proven effective in reducing area and power consumption compared to the original DNN model. However, there has been no systematic investigation of memristor-based neuromorphic computing (NC) systems considering both weight pruning and weight quantization. In this paper, we propose a unified and systematic memristor-based framework considering both structured weight pruning and weight quantization by incorporating the alternating direction method of multipliers (ADMM) into DNN training. We consider hardware constraints such as crossbar block pruning, conductance range, and the mismatch between weight values and real devices, to achieve high accuracy, low power consumption, and a small area footprint. Our framework mainly consists of three steps: memristor-based ADMM-regularized optimization, masked mapping, and retraining. Experimental results show that our proposed framework achieves a 29.81× (20.88×) weight compression ratio, with 98.38{\%} (96.96{\%}) power reduction and 98.29{\%} (97.47{\%}) area reduction on the VGG-16 (ResNet-18) network, with only 0.5{\%} (0.76{\%}) accuracy loss, compared to the original DNN models. We share our models at the anonymous link http://bit.ly/2Jp5LHJ.",
author = "Geng Yuan and Xiaolong Ma and Caiwen Ding and Sheng Lin and Tianyun Zhang and Jalali, {Zeinab S.} and Yilong Zhao and Li Jiang and Sucheta Soundarajan and Yanzhi Wang",
year = "2019",
month = "7",
doi = "10.1109/ISLPED.2019.8824944",
language = "English (US)",
series = "Proceedings of the International Symposium on Low Power Electronics and Design",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
booktitle = "International Symposium on Low Power Electronics and Design, ISLPED 2019",

}

TY - GEN

T1 - An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM

AU - Yuan, Geng

AU - Ma, Xiaolong

AU - Ding, Caiwen

AU - Lin, Sheng

AU - Zhang, Tianyun

AU - Jalali, Zeinab S.

AU - Zhao, Yilong

AU - Jiang, Li

AU - Soundarajan, Sucheta

AU - Wang, Yanzhi

PY - 2019/7

Y1 - 2019/7

N2 - The high computation and memory storage requirements of large deep neural network (DNN) models pose intense challenges to the conventional von Neumann architecture, incurring substantial data movement in the memory hierarchy. The memristor crossbar array has emerged as a promising solution to mitigate these challenges and enable low-power acceleration of DNNs. Memristor-based weight pruning and weight quantization have been separately investigated and proven effective in reducing area and power consumption compared to the original DNN model. However, there has been no systematic investigation of memristor-based neuromorphic computing (NC) systems considering both weight pruning and weight quantization. In this paper, we propose a unified and systematic memristor-based framework considering both structured weight pruning and weight quantization by incorporating the alternating direction method of multipliers (ADMM) into DNN training. We consider hardware constraints such as crossbar block pruning, conductance range, and the mismatch between weight values and real devices, to achieve high accuracy, low power consumption, and a small area footprint. Our framework mainly consists of three steps: memristor-based ADMM-regularized optimization, masked mapping, and retraining. Experimental results show that our proposed framework achieves a 29.81× (20.88×) weight compression ratio, with 98.38% (96.96%) power reduction and 98.29% (97.47%) area reduction on the VGG-16 (ResNet-18) network, with only 0.5% (0.76%) accuracy loss, compared to the original DNN models. We share our models at the anonymous link http://bit.ly/2Jp5LHJ.

AB - The high computation and memory storage requirements of large deep neural network (DNN) models pose intense challenges to the conventional von Neumann architecture, incurring substantial data movement in the memory hierarchy. The memristor crossbar array has emerged as a promising solution to mitigate these challenges and enable low-power acceleration of DNNs. Memristor-based weight pruning and weight quantization have been separately investigated and proven effective in reducing area and power consumption compared to the original DNN model. However, there has been no systematic investigation of memristor-based neuromorphic computing (NC) systems considering both weight pruning and weight quantization. In this paper, we propose a unified and systematic memristor-based framework considering both structured weight pruning and weight quantization by incorporating the alternating direction method of multipliers (ADMM) into DNN training. We consider hardware constraints such as crossbar block pruning, conductance range, and the mismatch between weight values and real devices, to achieve high accuracy, low power consumption, and a small area footprint. Our framework mainly consists of three steps: memristor-based ADMM-regularized optimization, masked mapping, and retraining. Experimental results show that our proposed framework achieves a 29.81× (20.88×) weight compression ratio, with 98.38% (96.96%) power reduction and 98.29% (97.47%) area reduction on the VGG-16 (ResNet-18) network, with only 0.5% (0.76%) accuracy loss, compared to the original DNN models. We share our models at the anonymous link http://bit.ly/2Jp5LHJ.

UR - http://www.scopus.com/inward/record.url?scp=85072657580&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85072657580&partnerID=8YFLogxK

U2 - 10.1109/ISLPED.2019.8824944

DO - 10.1109/ISLPED.2019.8824944

M3 - Conference contribution

AN - SCOPUS:85072657580

T3 - Proceedings of the International Symposium on Low Power Electronics and Design

BT - International Symposium on Low Power Electronics and Design, ISLPED 2019

PB - Institute of Electrical and Electronics Engineers Inc.

ER -