Fully-Parallel Area-Efficient Deep Neural Network Design Using Stochastic Computing

Yi Xie, Siyu Liao, Bo Yuan, Yanzhi Wang, Zhongfeng Wang

Research output: Contribution to journal › Article › peer-review

37 Scopus citations


Deep neural networks (DNNs) have emerged as a powerful machine learning technique for various artificial intelligence applications. Due to its unique advantages in speed, area, and power, dedicated hardware design has become a very attractive solution for the efficient deployment of DNNs. However, the huge resource cost of multipliers makes fully-parallel implementations of multiplication-intensive DNNs prohibitive in many real-time, resource-constrained embedded applications. This brief proposes a fully-parallel, area-efficient stochastic DNN design. By leveraging the stochastic computing (SC) technique, the computations of the DNN are implemented using very simple stochastic logic, thereby enabling a low-complexity, fully-parallel DNN design. In addition, to avoid the accuracy loss incurred by the approximation of SC, we propose an accuracy-aware DNN datapath architecture that retains the test accuracy of the stochastic DNN. Moreover, we propose a novel low-complexity architecture for the binary-to-stochastic (B-to-S) interface that drastically reduces the footprint of the peripheral B-to-S circuit. Experimental results show that the proposed stochastic DNN design achieves much better hardware performance than a non-stochastic design with negligible test accuracy loss.
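For background on the SC technique the abstract refers to: stochastic computing encodes a number as the statistics of a random bitstream, so arithmetic reduces to trivial bit-level logic. The sketch below is not taken from the paper — it is a minimal Python illustration of the standard SC building blocks the abstract mentions (a B-to-S converter and a stochastic multiplier), using the common bipolar encoding in which a value x ∈ [-1, 1] maps to a stream with P(bit = 1) = (x + 1)/2 and multiplication becomes a single XNOR gate per bit. All function names are illustrative.

```python
import random

def b_to_s(x, n, rng):
    """Binary-to-stochastic (B-to-S) conversion, bipolar encoding.

    A value x in [-1, 1] becomes an n-bit stream whose probability of a 1
    is (x + 1) / 2. In hardware this is typically a comparator fed by a
    random-number generator such as an LFSR.
    """
    p = (x + 1) / 2
    return [1 if rng.random() < p else 0 for _ in range(n)]

def s_to_b(bits):
    """Recover the bipolar value from the fraction of 1s in the stream."""
    return 2 * sum(bits) / len(bits) - 1

def sc_multiply(sa, sb):
    """Bipolar SC multiplication: one XNOR gate per bit pair."""
    return [1 - (a ^ b) for a, b in zip(sa, sb)]

rng = random.Random(42)
n = 1 << 16  # longer streams trade latency for accuracy (~1/sqrt(n) error)
a, b = 0.5, -0.6
product = s_to_b(sc_multiply(b_to_s(a, n, rng), b_to_s(b, n, rng)))
# product approximates a * b = -0.3, up to stochastic fluctuation
```

The replacement of an array multiplier by one XNOR gate per bit is what enables the low-complexity, fully-parallel datapath; the price is the stream length n and the peripheral B-to-S converters, whose footprint the brief's proposed interface architecture targets.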

Original language: English (US)
Article number: 8022910
Pages (from-to): 1382-1386
Number of pages: 5
Journal: IEEE Transactions on Circuits and Systems II: Express Briefs
Issue number: 12
State: Published - Dec 2017


Keywords

  • Deep neural network
  • area-efficient
  • fully-parallel
  • stochastic computing

ASJC Scopus subject areas

  • Electrical and Electronic Engineering

