### Abstract

Deep learning, an important branch of machine learning and neural networks, plays an increasingly important role in fields such as computer vision and natural language processing. However, large-scale deep learning systems mainly run on high-performance server clusters, which restricts their deployment on personal or mobile devices. The solution proposed in this paper takes advantage of the attractive properties of stochastic computing. Stochastic computing is a data representation and processing technique that uses a binary bit stream to represent a probability, given by the fraction of ones in the stream. In stochastic computing, key arithmetic operations such as multiplications and additions can be implemented with very simple components, namely AND gates and multiplexers, respectively. It therefore offers an immense design space for integrating a large number of neurons and enables fully parallel and scalable hardware implementations of large-scale deep learning systems. In this paper, we present a reconfigurable large-scale deep learning system based on stochastic computing technologies, including the design of the neuron, the convolution function, the back-propagation function, and other basic operations. A network-on-chip architecture is also adopted to realize the large-scale hardware system. Our experiments validate the functionality of reconfigurable deep learning systems using stochastic computing and demonstrate that, when the bit streams are 8192 bits long, stochastic-computing-based classification of MNIST digits achieves an error rate as low as that of conventional binary arithmetic.
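The stochastic-computing primitives the abstract describes can be illustrated in software. The following is a minimal Python sketch (not code from the paper): unipolar encoding of a probability as a bit stream, multiplication via bitwise AND, and scaled addition via a multiplexer driven by a 0.5-probability select stream. The function names and the use of `random` are this sketch's own choices.

```python
import random

def to_stream(p, length, rng):
    """Encode probability p as a unipolar stochastic bit stream."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def to_prob(stream):
    """Decode a stream: its value is the fraction of ones."""
    return sum(stream) / len(stream)

def sc_multiply(a, b):
    """Unipolar multiplication: bitwise AND of independent streams."""
    return [x & y for x, y in zip(a, b)]

def sc_scaled_add(a, b, sel):
    """Scaled addition via a multiplexer: output approximates
    (pa + pb) / 2 when the select stream has probability 0.5."""
    return [x if s else y for x, y, s in zip(a, b, sel)]

rng = random.Random(0)
N = 8192  # stream length matching the paper's MNIST experiments
a = to_stream(0.5, N, rng)
b = to_stream(0.4, N, rng)
sel = to_stream(0.5, N, rng)

prod = to_prob(sc_multiply(a, b))         # approx. 0.5 * 0.4 = 0.20
summ = to_prob(sc_scaled_add(a, b, sel))  # approx. (0.5 + 0.4) / 2 = 0.45
```

With 8192-bit streams the decoded values sit within a few thousandths of the exact products and scaled sums, which is why the paper's longest streams can match conventional binary arithmetic on MNIST.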

Original language | English (US)
---|---
Title of host publication | 2016 IEEE International Conference on Rebooting Computing, ICRC 2016 - Conference Proceedings
Publisher | Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic) | 9781509013708
DOIs | 10.1109/ICRC.2016.7738685
State | Published - Nov 8 2016
Event | 2016 IEEE International Conference on Rebooting Computing, ICRC 2016 - San Diego, United States; Duration: Oct 17 2016 → Oct 19 2016

### Other

Other | 2016 IEEE International Conference on Rebooting Computing, ICRC 2016
---|---
Country | United States
City | San Diego
Period | 10/17/16 → 10/19/16

### Keywords

- deep learning
- large-scale
- neuron
- reconfigurable
- stochastic computing

### ASJC Scopus subject areas

- Hardware and Architecture

### Cite this

Ren, A., Li, Z., Wang, Y., Qiu, Q., & Yuan, B. (2016). **Designing reconfigurable large-scale deep learning systems using stochastic computing.** In *2016 IEEE International Conference on Rebooting Computing, ICRC 2016 - Conference Proceedings* (Article 7738685). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRC.2016.7738685

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
