Fully-Parallel Area-Efficient Deep Neural Network Design using Stochastic Computing

Yi Xie, Siyu Liao, Bo Yuan, Yanzhi Wang, Zhongfeng Wang

Research output: Contribution to journal › Article



Deep neural networks (DNNs) have emerged as a powerful machine learning technique for various artificial intelligence applications. Due to its unique advantages in speed, area and power, dedicated hardware design has become a very attractive solution for the efficient deployment of DNNs. However, the huge resource cost of multipliers makes fully-parallel implementations of multiplication-intensive DNNs prohibitive in many real-time, resource-constrained embedded applications. This paper proposes a fully-parallel area-efficient stochastic DNN design. By leveraging the stochastic computing (SC) technique, the computations of the DNN are implemented using very simple stochastic logic, thereby enabling a low-complexity fully-parallel DNN design. In addition, to avoid the accuracy loss incurred by the approximation inherent in SC, we propose an accuracy-aware DNN datapath architecture that retains the test accuracy of the stochastic DNN. Moreover, we propose a novel low-complexity architecture for the binary-to-stochastic (B-to-S) interface to drastically reduce the footprint of the peripheral B-to-S circuit. Experimental results show that the proposed stochastic DNN design achieves much better hardware performance than the non-stochastic design with negligible test accuracy loss.
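The core appeal of SC mentioned in the abstract is that multiplication collapses to a single logic gate operating on random bitstreams. As an illustrative sketch only (not the paper's architecture), the following shows the standard bipolar SC scheme, where a value x in [-1, 1] is encoded as a bitstream with P(bit = 1) = (x + 1)/2, and multiplication of two independent streams is a per-bit XNOR; the B-to-S step is modeled here with a software random-number comparison rather than the paper's proposed low-complexity interface:

```python
import random

def to_stochastic(x, length, rng):
    """Bipolar B-to-S conversion: x in [-1, 1] -> P(bit = 1) = (x + 1) / 2.
    In hardware this is a comparator against a random-number generator."""
    p = (x + 1) / 2
    return [1 if rng.random() < p else 0 for _ in range(length)]

def from_stochastic(bits):
    """S-to-B decoding: recover x = 2 * P(1) - 1 from the observed bit frequency."""
    return 2 * sum(bits) / len(bits) - 1

def sc_multiply(sa, sb):
    """Bipolar SC multiplication: one XNOR gate per bit pair, no multiplier."""
    return [1 - (a ^ b) for a, b in zip(sa, sb)]

rng = random.Random(0)
n = 1 << 16  # stream length; accuracy improves roughly as O(1/sqrt(n))
sa = to_stochastic(0.5, n, rng)
sb = to_stochastic(-0.4, n, rng)
prod = from_stochastic(sc_multiply(sa, sb))
# prod approximates 0.5 * (-0.4) = -0.2, up to stochastic noise
```

This is exactly the trade-off the abstract describes: the multiplier's area cost is exchanged for long bitstreams and an approximation error that shrinks with stream length, which motivates the paper's accuracy-aware datapath.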

Original language: English (US)
Journal: IEEE Transactions on Circuits and Systems II: Express Briefs
State: Accepted/In press - Aug 30 2017


Keywords

  • Area-efficient
  • Deep Neural Network
  • Fully-parallel
  • Stochastic Computing

ASJC Scopus subject areas

  • Electrical and Electronic Engineering

