Modular Spiking Neural Circuits for Mapping Long Short-Term Memory on a Neurosynaptic Processor

Amar Shrestha, Khadeer Ahmed, Yanzhi Wang, David P. Widemann, Adam T. Moody, Brian C. Van Essen, Qinru Qiu

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Due to the distributed and asynchronous nature of neural computation through low-energy spikes, brain-inspired hardware systems offer high energy efficiency and massive parallelism. One such platform is the IBM TrueNorth Neurosynaptic System. Recently, TrueNorth-compatible representation learning algorithms have emerged, achieving close to state-of-the-art performance on various datasets. However, their application to temporal sequence processing models such as recurrent neural networks (RNNs) is still only at the proof-of-concept level. There is an inherent difficulty in capturing the temporal dynamics of an RNN using spiking neurons, which is only exacerbated by the hardware constraints on connectivity and synaptic weight resolution. This work presents a design flow that overcomes these difficulties and maps a special case of recurrent networks, Long Short-Term Memory (LSTM), onto a spike-based platform. The framework utilizes approximation techniques (activation discretization, weight quantization, scaling and rounding); spiking neural circuits that implement the complex gating mechanisms; and a store-and-release technique that enables neuron synchronization and faithful storage. While the presented techniques can be applied to map LSTM to any Spiking Neural Network (SNN) simulator/emulator, here we choose the TrueNorth chip as the target platform and adhere to its hardware constraints. Three LSTM applications, parity check, Extended Reber Grammar and question classification, are evaluated. The tradeoffs among accuracy, performance and energy achieved on TrueNorth are demonstrated and compared to the performance on an SNN platform without hardware constraints, which represents the upper bound of the achievable accuracy.
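The abstract names scaling-and-rounding weight quantization and activation discretization among the approximation techniques. A minimal sketch of these two generic steps is given below; the function names, the 4-bit weight range, and the 16-level sigmoid are illustrative assumptions, not details taken from the paper or from the TrueNorth hardware specification.

```python
import numpy as np

def quantize_weights(w, n_bits=4):
    """Uniform symmetric weight quantization by scaling and rounding
    (a generic sketch; n_bits=4 is an illustrative choice)."""
    q_max = 2 ** (n_bits - 1) - 1            # e.g. 7 for 4 bits
    scale = np.max(np.abs(w)) / q_max        # map largest weight to q_max
    w_q = np.round(w / scale).astype(int)    # integers in [-q_max, q_max]
    return w_q, scale

def discretize_sigmoid(x, n_levels=16):
    """Discretize the LSTM gate activation into n_levels uniform steps,
    as a rate-coded spiking output would approximate it."""
    y = 1.0 / (1.0 + np.exp(-x))             # continuous gate value in (0, 1)
    return np.round(y * (n_levels - 1)) / (n_levels - 1)
```

Dequantization (`w_q * scale`) then recovers each weight to within half a quantization step, which is the approximation error such a mapping trades for low-resolution synapses.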

Original language: English (US)
Journal: IEEE Journal on Emerging and Selected Topics in Circuits and Systems
DOI: 10.1109/JETCAS.2018.2856117
State: Accepted/In press - Jul 13 2018

Keywords

  • Biological neural networks
  • Computer architecture
  • Hardware
  • Logic gates
  • Long Short-Term Memory
  • Neuromorphic Hardware
  • Neurons
  • Recurrent neural networks
  • Recurrent Neural Networks
  • Spiking Neural Networks
  • Synchronization

ASJC Scopus subject areas

  • Electrical and Electronic Engineering

Cite this

Modular Spiking Neural Circuits for Mapping Long Short-Term Memory on a Neurosynaptic Processor. / Shrestha, Amar; Ahmed, Khadeer; Wang, Yanzhi; Widemann, David P.; Moody, Adam T.; Van Essen, Brian C.; Qiu, Qinru.

In: IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 13.07.2018.

Research output: Contribution to journal › Article

Shrestha, Amar ; Ahmed, Khadeer ; Wang, Yanzhi ; Widemann, David P. ; Moody, Adam T. ; Van Essen, Brian C. ; Qiu, Qinru. / Modular Spiking Neural Circuits for Mapping Long Short-Term Memory on a Neurosynaptic Processor. In: IEEE Journal on Emerging and Selected Topics in Circuits and Systems. 2018.
@article{53b23ec78fed455395f5d0b3610ea8b4,
title = "Modular Spiking Neural Circuits for Mapping Long Short-Term Memory on a Neurosynaptic Processor",
abstract = "Due to the distributed and asynchronous nature of neural computation through low-energy spikes, brain-inspired hardware systems offer high energy efficiency and massive parallelism. One such platform is the IBM TrueNorth Neurosynaptic System. Recently, TrueNorth-compatible representation learning algorithms have emerged, achieving close to state-of-the-art performance on various datasets. However, their application to temporal sequence processing models such as recurrent neural networks (RNNs) is still only at the proof-of-concept level. There is an inherent difficulty in capturing the temporal dynamics of an RNN using spiking neurons, which is only exacerbated by the hardware constraints on connectivity and synaptic weight resolution. This work presents a design flow that overcomes these difficulties and maps a special case of recurrent networks, Long Short-Term Memory (LSTM), onto a spike-based platform. The framework utilizes approximation techniques (activation discretization, weight quantization, scaling and rounding); spiking neural circuits that implement the complex gating mechanisms; and a store-and-release technique that enables neuron synchronization and faithful storage. While the presented techniques can be applied to map LSTM to any Spiking Neural Network (SNN) simulator/emulator, here we choose the TrueNorth chip as the target platform and adhere to its hardware constraints. Three LSTM applications, parity check, Extended Reber Grammar and question classification, are evaluated. The tradeoffs among accuracy, performance and energy achieved on TrueNorth are demonstrated and compared to the performance on an SNN platform without hardware constraints, which represents the upper bound of the achievable accuracy.",
keywords = "Biological neural networks, Computer architecture, Hardware, Logic gates, Long Short-Term Memory, Neuromorphic Hardware, Neurons, Recurrent neural networks, Recurrent Neural Networks, Spiking Neural Networks, Synchronization",
author = "Amar Shrestha and Khadeer Ahmed and Yanzhi Wang and Widemann, {David P.} and Moody, {Adam T.} and {Van Essen}, {Brian C.} and Qinru Qiu",
year = "2018",
month = "7",
day = "13",
doi = "10.1109/JETCAS.2018.2856117",
language = "English (US)",
journal = "IEEE Journal on Emerging and Selected Topics in Circuits and Systems",
issn = "2156-3357",
publisher = "IEEE Computer Society",

}

TY - JOUR

T1 - Modular Spiking Neural Circuits for Mapping Long Short-Term Memory on a Neurosynaptic Processor

AU - Shrestha, Amar

AU - Ahmed, Khadeer

AU - Wang, Yanzhi

AU - Widemann, David P.

AU - Moody, Adam T.

AU - Van Essen, Brian C.

AU - Qiu, Qinru

PY - 2018/7/13

Y1 - 2018/7/13

N2 - Due to the distributed and asynchronous nature of neural computation through low-energy spikes, brain-inspired hardware systems offer high energy efficiency and massive parallelism. One such platform is the IBM TrueNorth Neurosynaptic System. Recently, TrueNorth-compatible representation learning algorithms have emerged, achieving close to state-of-the-art performance on various datasets. However, their application to temporal sequence processing models such as recurrent neural networks (RNNs) is still only at the proof-of-concept level. There is an inherent difficulty in capturing the temporal dynamics of an RNN using spiking neurons, which is only exacerbated by the hardware constraints on connectivity and synaptic weight resolution. This work presents a design flow that overcomes these difficulties and maps a special case of recurrent networks, Long Short-Term Memory (LSTM), onto a spike-based platform. The framework utilizes approximation techniques (activation discretization, weight quantization, scaling and rounding); spiking neural circuits that implement the complex gating mechanisms; and a store-and-release technique that enables neuron synchronization and faithful storage. While the presented techniques can be applied to map LSTM to any Spiking Neural Network (SNN) simulator/emulator, here we choose the TrueNorth chip as the target platform and adhere to its hardware constraints. Three LSTM applications, parity check, Extended Reber Grammar and question classification, are evaluated. The tradeoffs among accuracy, performance and energy achieved on TrueNorth are demonstrated and compared to the performance on an SNN platform without hardware constraints, which represents the upper bound of the achievable accuracy.

AB - Due to the distributed and asynchronous nature of neural computation through low-energy spikes, brain-inspired hardware systems offer high energy efficiency and massive parallelism. One such platform is the IBM TrueNorth Neurosynaptic System. Recently, TrueNorth-compatible representation learning algorithms have emerged, achieving close to state-of-the-art performance on various datasets. However, their application to temporal sequence processing models such as recurrent neural networks (RNNs) is still only at the proof-of-concept level. There is an inherent difficulty in capturing the temporal dynamics of an RNN using spiking neurons, which is only exacerbated by the hardware constraints on connectivity and synaptic weight resolution. This work presents a design flow that overcomes these difficulties and maps a special case of recurrent networks, Long Short-Term Memory (LSTM), onto a spike-based platform. The framework utilizes approximation techniques (activation discretization, weight quantization, scaling and rounding); spiking neural circuits that implement the complex gating mechanisms; and a store-and-release technique that enables neuron synchronization and faithful storage. While the presented techniques can be applied to map LSTM to any Spiking Neural Network (SNN) simulator/emulator, here we choose the TrueNorth chip as the target platform and adhere to its hardware constraints. Three LSTM applications, parity check, Extended Reber Grammar and question classification, are evaluated. The tradeoffs among accuracy, performance and energy achieved on TrueNorth are demonstrated and compared to the performance on an SNN platform without hardware constraints, which represents the upper bound of the achievable accuracy.

KW - Biological neural networks

KW - Computer architecture

KW - Hardware

KW - Logic gates

KW - Long Short-Term Memory

KW - Neuromorphic Hardware

KW - Neurons

KW - Recurrent neural networks

KW - Recurrent Neural Networks

KW - Spiking Neural Networks

KW - Synchronization

UR - http://www.scopus.com/inward/record.url?scp=85049952757&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85049952757&partnerID=8YFLogxK

U2 - 10.1109/JETCAS.2018.2856117

DO - 10.1109/JETCAS.2018.2856117

M3 - Article

AN - SCOPUS:85049952757

JO - IEEE Journal on Emerging and Selected Topics in Circuits and Systems

JF - IEEE Journal on Emerging and Selected Topics in Circuits and Systems

SN - 2156-3357

ER -