Structural design optimization for deep convolutional neural networks using stochastic computing

Zhe Li, Ao Ren, Ji Li, Qinru Qiu, Bo Yuan, Jeffrey Draper, Yanzhi Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

15 Citations (Scopus)

Abstract

Deep Convolutional Neural Networks (DCNNs) have been demonstrated to be effective models for understanding image content. Because of their deep structure, the computation behind DCNNs relies heavily on hardware resources, and DCNNs have been implemented on various large-scale computing platforms. However, there is a growing trend of embedding DCNNs into lightweight local systems, which demands low power/energy consumption and a small hardware footprint. Stochastic Computing (SC) radically simplifies the hardware implementation of arithmetic units and has the potential to satisfy the low-power, small-footprint needs of DCNNs. However, local connectivity and down-sampling operations make DCNNs more complex to implement using SC. In this paper, eight feature extraction designs for DCNNs using SC, organized in two groups, are explored and optimized in detail from the perspective of calculation precision: we permute two SC implementations of the inner-product calculation, two down-sampling schemes, and two structures of DCNN neurons. For each DCNN built with one of the eight feature extraction designs, we evaluate both network accuracy and hardware performance. Through this exploration and optimization, the accuracy of SC-based DCNNs is maintained relative to software implementations on CPU/GPU and binary-based ASIC synthesis, while area, power, and energy are reduced by up to 776x, 190x, and 32,835x, respectively.
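The abstract's claim that SC "radically simplifies the hardware implementation of arithmetic units" can be made concrete: in the standard bipolar SC encoding, multiplication reduces to a single XNOR gate per bit pair and scaled addition to a multiplexer. The following is a minimal software sketch of these primitives, not the paper's actual designs; the stream length, seed, and function names are illustrative assumptions.

```python
import random

def encode(x, n, rng):
    """Bipolar SC encoding (standard, not paper-specific): a value x in
    [-1, 1] becomes an n-bit stream where P(bit = 1) = (x + 1) / 2."""
    return [1 if rng.random() < (x + 1) / 2 else 0 for _ in range(n)]

def decode(stream):
    """Recover the value a bipolar stream represents."""
    return 2 * sum(stream) / len(stream) - 1

def sc_multiply(a, b):
    """Bipolar SC multiplication: one XNOR gate per bit pair."""
    return [1 - (x ^ y) for x, y in zip(a, b)]

def sc_scaled_add(streams, rng):
    """Mux-based scaled addition: each cycle the mux passes one randomly
    selected input bit, so the output encodes the mean of the inputs."""
    k = len(streams)
    return [streams[rng.randrange(k)][i] for i in range(len(streams[0]))]

rng = random.Random(0)
n = 1 << 16  # longer streams trade latency for precision

# a * b via a single XNOR gate: 0.5 * -0.4 = -0.2
prod = decode(sc_multiply(encode(0.5, n, rng), encode(-0.4, n, rng)))

# mux-based scaled addition of 0.6 and -0.2: output encodes their mean, 0.2
avg = decode(sc_scaled_add([encode(0.6, n, rng), encode(-0.2, n, rng)], rng))
```

The results are only statistically accurate, which is exactly why the paper treats calculation precision as the axis along which the eight feature-extraction designs are explored.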

Original language: English (US)
Title of host publication: Proceedings of the 2017 Design, Automation and Test in Europe, DATE 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 250-253
Number of pages: 4
ISBN (Electronic): 9783981537093
DOI: 10.23919/DATE.2017.7926991
State: Published - May 11 2017
Event: 20th Design, Automation and Test in Europe, DATE 2017 - Swisstech, Lausanne, Switzerland
Duration: Mar 27 2017 - Mar 31 2017



ASJC Scopus subject areas

  • Computer Networks and Communications
  • Hardware and Architecture
  • Safety, Risk, Reliability and Quality

Cite this

Li, Z., Ren, A., Li, J., Qiu, Q., Yuan, B., Draper, J., & Wang, Y. (2017). Structural design optimization for deep convolutional neural networks using stochastic computing. In Proceedings of the 2017 Design, Automation and Test in Europe, DATE 2017 (pp. 250-253). [7926991] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.23919/DATE.2017.7926991

