TY - JOUR
T1 - Fully-Parallel Area-Efficient Deep Neural Network Design Using Stochastic Computing
AU - Xie, Yi
AU - Liao, Siyu
AU - Yuan, Bo
AU - Wang, Yanzhi
AU - Wang, Zhongfeng
N1 - Publisher Copyright:
© 2004-2012 IEEE.
PY - 2017/12
Y1 - 2017/12
AB - The deep neural network (DNN) has emerged as a powerful machine learning technique for various artificial intelligence applications. Owing to its unique advantages in speed, area, and power, dedicated hardware design has become a very attractive solution for the efficient deployment of DNNs. However, the high resource cost of multipliers makes fully-parallel implementations of multiplication-intensive DNNs prohibitive in many real-time, resource-constrained embedded applications. This brief proposes a fully-parallel, area-efficient stochastic DNN design. By leveraging the stochastic computing (SC) technique, the computations of the DNN are implemented with very simple stochastic logic, enabling a low-complexity fully-parallel design. In addition, to avoid the accuracy loss incurred by the approximate nature of SC, we propose an accuracy-aware DNN datapath architecture that retains the test accuracy of the stochastic DNN. Moreover, we propose a novel low-complexity architecture for the binary-to-stochastic (B-to-S) interface that drastically reduces the footprint of the peripheral B-to-S circuit. Experimental results show that the proposed stochastic DNN design achieves much better hardware performance than its non-stochastic counterpart with negligible loss of test accuracy.
KW - Deep neural network
KW - area-efficient
KW - fully-parallel
KW - stochastic computing
UR - http://www.scopus.com/inward/record.url?scp=85028717713&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85028717713&partnerID=8YFLogxK
U2 - 10.1109/TCSII.2017.2746749
DO - 10.1109/TCSII.2017.2746749
M3 - Article
AN - SCOPUS:85028717713
SN - 1549-7747
VL - 64
SP - 1382
EP - 1386
JO - IEEE Transactions on Circuits and Systems II: Express Briefs
JF - IEEE Transactions on Circuits and Systems II: Express Briefs
IS - 12
M1 - 8022910
ER -