TY - GEN
T1 - Stable spike-timing dependent plasticity rule for multilayer unsupervised and supervised learning
AU - Shrestha, Amar
AU - Ahmed, Khadeer
AU - Wang, Yanzhi
AU - Qiu, Qinru
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/6/30
Y1 - 2017/6/30
N2 - Spike-Timing Dependent Plasticity (STDP), the canonical learning rule for spiking neural networks (SNN), is gaining tremendous interest because of its simplicity, efficiency and biological plausibility. However, to date, multilayer feed-forward networks of spiking neurons have either been only partially trained with STDP, pre-trained as traditional deep neural networks and then converted to deep spiking neural networks, or limited to two-layer networks in which STDP-learnt features are manually labelled. In this work, we present a low-cost, simplified, yet stable STDP rule for layer-wise unsupervised and supervised training of a multilayer feed-forward SNN. We propose to approximate a Bayesian neuron with a Stochastic Integrate-and-Fire (SIF) neuron model and introduce a supervised learning approach that uses teacher neurons to train the classification layer with one neuron per class. Using the proposed STDP rule, an SNN with multiple layers of spiking neurons, including both the feature extraction and classification layers, is trained to classify handwritten digits. Our method achieves accuracy on the MNIST dataset comparable to or better than that of manually labelled two-layer networks with a hidden layer of the same size. We also analyze the parameter space to provide rationales for parameter fine-tuning, and present additional methods to improve resilience to noise and to variations in input intensity. We further propose a Quantized 2-Power Shift (Q2PS) STDP rule, which reduces the implementation cost in digital hardware while achieving comparable performance.
AB - Spike-Timing Dependent Plasticity (STDP), the canonical learning rule for spiking neural networks (SNN), is gaining tremendous interest because of its simplicity, efficiency and biological plausibility. However, to date, multilayer feed-forward networks of spiking neurons have either been only partially trained with STDP, pre-trained as traditional deep neural networks and then converted to deep spiking neural networks, or limited to two-layer networks in which STDP-learnt features are manually labelled. In this work, we present a low-cost, simplified, yet stable STDP rule for layer-wise unsupervised and supervised training of a multilayer feed-forward SNN. We propose to approximate a Bayesian neuron with a Stochastic Integrate-and-Fire (SIF) neuron model and introduce a supervised learning approach that uses teacher neurons to train the classification layer with one neuron per class. Using the proposed STDP rule, an SNN with multiple layers of spiking neurons, including both the feature extraction and classification layers, is trained to classify handwritten digits. Our method achieves accuracy on the MNIST dataset comparable to or better than that of manually labelled two-layer networks with a hidden layer of the same size. We also analyze the parameter space to provide rationales for parameter fine-tuning, and present additional methods to improve resilience to noise and to variations in input intensity. We further propose a Quantized 2-Power Shift (Q2PS) STDP rule, which reduces the implementation cost in digital hardware while achieving comparable performance.
KW - Digit recognition
KW - Quantized STDP
KW - STDP
KW - Spiking neural network
KW - Supervised learning
KW - Unsupervised learning
UR - http://www.scopus.com/inward/record.url?scp=85031037119&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85031037119&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2017.7966096
DO - 10.1109/IJCNN.2017.7966096
M3 - Conference contribution
AN - SCOPUS:85031037119
T3 - Proceedings of the International Joint Conference on Neural Networks
SP - 1999
EP - 2006
BT - 2017 International Joint Conference on Neural Networks, IJCNN 2017 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2017 International Joint Conference on Neural Networks, IJCNN 2017
Y2 - 14 May 2017 through 19 May 2017
ER -