Towards budget-driven hardware optimization for deep convolutional neural networks using stochastic computing

Zhe Li, Ji Li, Ao Ren, Caiwen Ding, Jeffrey Draper, Qinru Qiu, Bo Yuan, Yanzhi Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recently, Deep Convolutional Neural Networks (DCNNs) have achieved tremendous success in many machine learning applications. Nevertheless, the deep structure has brought significant increases in computational complexity. Large-scale deep learning systems mainly operate in high-performance server clusters, which restricts their extension to personal or mobile devices. Previous works on GPU and/or FPGA acceleration for DCNNs show increasing speedups, but ignore other constraints such as area, power, and energy. Stochastic Computing (SC), as a unique data representation and processing technique, has the potential to enable fully parallel and scalable hardware implementations of large-scale deep learning systems. This paper proposes an automatic design allocation algorithm driven by budget requirements while considering overall accuracy. This systematic method enables the automatic design of a DCNN in which all design parameters are jointly optimized. Experimental results demonstrate that the proposed algorithm achieves a joint optimization of all design parameters under a comprehensive budget for a DCNN.
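
To make the abstract's mention of Stochastic Computing concrete, below is a minimal illustrative sketch of unipolar SC, in which a value in [0, 1] is encoded as the probability of 1s in a random bitstream and multiplication reduces to a bitwise AND of two independent streams. It is not taken from the paper: the stream length here simply shows the kind of accuracy-versus-cost design parameter a budget-driven optimization would allocate, and the authors' actual allocation algorithm and hardware design are not reproduced.

# A minimal sketch of unipolar stochastic computing (SC); illustrative only,
# not the hardware design or allocation algorithm from the paper.
import random

def encode(x, length, rng):
    # Encode x in [0, 1] as a bitstream whose probability of a 1 equals x.
    return [1 if rng.random() < x else 0 for _ in range(length)]

def decode(stream):
    # Estimate the encoded value as the fraction of 1s in the stream.
    return sum(stream) / len(stream)

def sc_multiply(stream_a, stream_b):
    # In unipolar SC, multiplying two independent streams is a bitwise AND,
    # i.e. a single AND gate per bit in hardware.
    return [a & b for a, b in zip(stream_a, stream_b)]

if __name__ == "__main__":
    rng = random.Random(0)
    a, b = 0.6, 0.7
    for length in (128, 1024, 8192):  # longer streams: better accuracy, more latency/energy
        product = decode(sc_multiply(encode(a, length, rng), encode(b, length, rng)))
        print(f"length={length:5d}  a*b ~ {product:.3f}  (exact {a * b:.3f})")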

Original language: English (US)
Title of host publication: Proceedings - 2018 IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2018
Publisher: IEEE Computer Society
Pages: 28-33
Number of pages: 6
Volume: 2018-July
ISBN (Print): 9781538670996
DOI: 10.1109/ISVLSI.2018.00016
State: Published - Aug 7 2018
Event: 17th IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2018 - Hong Kong, Hong Kong
Duration: Jul 9 2018 - Jul 11 2018

Other

Other: 17th IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2018
Country: Hong Kong
City: Hong Kong
Period: 7/9/18 - 7/11/18

Fingerprint

Neural networks
Hardware
Learning systems
Mobile devices
Field programmable gate arrays (FPGA)
Servers
Processing
Deep learning

Keywords

  • Deep Convolutional Neural Networks
  • Deep Learning
  • Design Parameter Co-optimization
  • Stochastic Computing

ASJC Scopus subject areas

  • Hardware and Architecture
  • Control and Systems Engineering
  • Electrical and Electronic Engineering

Cite this

Li, Z., Li, J., Ren, A., Ding, C., Draper, J., Qiu, Q., ... Wang, Y. (2018). Towards budget-driven hardware optimization for deep convolutional neural networks using stochastic computing. In Proceedings - 2018 IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2018 (Vol. 2018-July, pp. 28-33). [8429337] IEEE Computer Society. https://doi.org/10.1109/ISVLSI.2018.00016
