SC-DCNN: Highly-scalable deep convolutional neural network using stochastic computing

Ao Ren, Zhe Li, Caiwen Ding, Qinru Qiu, Yanzhi Wang, Ji Li, Xuehai Qian, Bo Yuan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

43 Citations (Scopus)

Abstract

With recent advances in wearable devices and the Internet of Things (IoT), it becomes attractive to implement Deep Convolutional Neural Networks (DCNNs) in embedded and portable systems. Currently, executing software-based DCNNs requires high-performance servers, restricting their widespread deployment on embedded and mobile IoT devices. To overcome this obstacle, considerable research effort has gone into developing highly parallel, specialized DCNN accelerators using GPGPUs, FPGAs, or ASICs. Stochastic Computing (SC), which uses a bit-stream to represent a number within [-1, 1] by counting the number of ones in the bit-stream, has high potential for implementing DCNNs with high scalability and an ultra-low hardware footprint. Since multiplications and additions can be calculated using AND gates and multiplexers in SC, significant reductions in power (energy) and hardware footprint can be achieved compared to conventional binary arithmetic implementations. These tremendous savings in power (energy) and hardware resources open an immense design space for enhancing the scalability and robustness of hardware DCNNs. This paper presents SC-DCNN, the first comprehensive design and optimization framework for SC-based DCNNs, using a bottom-up approach. We first present designs of function blocks that perform the basic operations in a DCNN: inner product, pooling, and activation function. We then propose four designs of feature extraction blocks, which extract features from input feature maps by connecting different basic function blocks with joint optimization. Moreover, efficient weight storage methods are proposed to reduce area and power (energy) consumption. Putting it all together, with feature extraction blocks carefully selected, SC-DCNN is holistically optimized to minimize area and power (energy) consumption while maintaining high network accuracy.
Experimental results demonstrate that LeNet-5 implemented in SC-DCNN consumes only 17 mm² of area and 1.53 W of power, and achieves a throughput of 781,250 images/s, an area efficiency of 45,946 images/s/mm², and an energy efficiency of 510,734 images/J.
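To make the bit-stream arithmetic in the abstract concrete, the sketch below (not from the paper; stream length, seed, and helper names are illustrative assumptions) shows unipolar stochastic computing in Python: a value in [0, 1] is encoded as the probability of a 1-bit, multiplication reduces to a bitwise AND, and scaled addition to a 2-to-1 multiplexer. The bipolar encoding for [-1, 1] that the paper uses works analogously, with XNOR gates for multiplication.

```python
import random

def to_stream(value, length, rng):
    # Unipolar encoding: each bit is 1 with probability `value` (0 <= value <= 1),
    # so the expected fraction of ones equals the encoded number.
    return [1 if rng.random() < value else 0 for _ in range(length)]

def decode(stream):
    # Recover the value by counting the ones in the bit-stream.
    return sum(stream) / len(stream)

def sc_multiply(a_bits, b_bits):
    # Unipolar multiplication is a bitwise AND:
    # P(a AND b) = P(a) * P(b) for independent streams.
    return [a & b for a, b in zip(a_bits, b_bits)]

def sc_scaled_add(a_bits, b_bits, sel_bits):
    # A 2-to-1 multiplexer driven by a 0.5-probability select stream
    # computes the scaled sum (a + b) / 2.
    return [a if s else b for a, b, s in zip(a_bits, b_bits, sel_bits)]

rng = random.Random(0)
n = 20000  # longer streams reduce estimation error at the cost of latency
a, b = 0.6, 0.5
a_bits, b_bits = to_stream(a, n, rng), to_stream(b, n, rng)
sel_bits = to_stream(0.5, n, rng)

prod = decode(sc_multiply(a_bits, b_bits))          # close to 0.6 * 0.5 = 0.30
ssum = decode(sc_scaled_add(a_bits, b_bits, sel_bits))  # close to (0.6 + 0.5) / 2 = 0.55
print(prod, ssum)
```

This illustrates the trade-off the paper exploits: each arithmetic unit shrinks to a single gate or multiplexer, while accuracy is governed by stream length and stream independence.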

Original language: English (US)
Title of host publication: ASPLOS 2017 - 22nd International Conference on Architectural Support for Programming Languages and Operating Systems
Publisher: Association for Computing Machinery
Pages: 405-418
Number of pages: 14
Volume: Part F127193
ISBN (Electronic): 9781450344654
DOI: 10.1145/3037697.3037746
State: Published - Apr 4 2017
Event: 22nd International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2017 - Xi'an, China
Duration: Apr 8 2017 - Apr 12 2017



ASJC Scopus subject areas

  • Software
  • Information Systems
  • Hardware and Architecture

Cite this

Ren, A., Li, Z., Ding, C., Qiu, Q., Wang, Y., Li, J., ... Yuan, B. (2017). SC-DCNN: Highly-scalable deep convolutional neural network using stochastic computing. In ASPLOS 2017 - 22nd International Conference on Architectural Support for Programming Languages and Operating Systems (Vol. Part F127193, pp. 405-418). Association for Computing Machinery. https://doi.org/10.1145/3037697.3037746

@inproceedings{d97f85dc85314e879ff02330a66b1909,
title = "SC-DCNN: Highly-scalable deep convolutional neural network using stochastic computing",
abstract = "With the recent advance of wearable devices and Internet of Things (IoTs), it becomes attractive to implement the Deep Convolutional Neural Networks (DCNNs) in embedded and portable systems. Currently, executing the software-based DCNNs requires high-performance servers, restricting the widespread deployment on embedded and mobile IoT devices. To overcome this obstacle, considerable research efforts have been made to develop highly-parallel and specialized DCNN accelerators using GPGPUs, FPGAs or ASICs. Stochastic Computing (SC), which uses a bit-stream to represent a number within [-1, 1] by counting the number of ones in the bit-stream, has high potential for implementing DCNNs with high scalability and ultra-low hardware footprint. Since multiplications and additions can be calculated using AND gates and multiplexers in SC, significant reductions in power (energy) and hardware footprint can be achieved compared to the conventional binary arithmetic implementations. The tremendous savings in power (energy) and hardware resources allow immense design space for enhancing scalability and robustness for hardware DCNNs. This paper presents SC-DCNN, the first comprehensive design and optimization framework of SC-based DCNNs, using a bottom-up approach. We first present the designs of function blocks that perform the basic operations in DCNN, including inner product, pooling, and activation function. Then we propose four designs of feature extraction blocks, which are in charge of extracting features from input feature maps, by connecting different basic function blocks with joint optimization. Moreover, the efficient weight storage methods are proposed to reduce the area and power (energy) consumption. Putting all together, with feature extraction blocks carefully selected, SC-DCNN is holistically optimized to minimize area and power (energy) consumption while maintaining high network accuracy. 
Experimental results demonstrate that the LeNet5 implemented in SC-DCNN consumes only 17 mm2 area and 1.53 W power, achieves throughput of 781250 images/s, area efficiency of 45946 images/s/mm2, and energy efficiency of 510734 images/J.",
author = "Ao Ren and Zhe Li and Caiwen Ding and Qinru Qiu and Yanzhi Wang and Ji Li and Xuehai Qian and Bo Yuan",
year = "2017",
month = "4",
day = "4",
doi = "10.1145/3037697.3037746",
language = "English (US)",
volume = "Part F127193",
pages = "405--418",
booktitle = "ASPLOS 2017 - 22nd International Conference on Architectural Support for Programming Languages and Operating Systems",
publisher = "Association for Computing Machinery",

}
