TY - GEN
T1 - Unsupervised Adaptation of Spiking Networks in a Gradual Changing Environment
AU - Mei, Zaidao
AU - Barnell, Mark
AU - Qiu, Qinru
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Spiking neural networks (SNNs) have drawn broad research interest in recent years due to their high energy efficiency and biological plausibility. They have proven competitive in many machine learning tasks. Like all artificial neural network (ANN) machine learning models, SNNs rely on the assumption that the training and testing data are drawn from the same distribution. As the environment changes gradually, the input distribution shifts over time, and the performance of SNNs becomes brittle. To this end, we propose a unified framework that adapts to non-stationary streaming data by exploiting unlabeled intermediate domains, and that fits the in-hardware SNN learning algorithm error-modulated STDP. Specifically, we propose a unique self-training framework that generates pseudo-labels to retrain the model on intermediate and target domains. In addition, we develop an online normalization method with an auxiliary neuron to normalize the outputs of the hidden layers. By combining normalization with self-training, our approach achieves average classification improvements of over 10% on MNIST, NMNIST, and two other datasets.
AB - Spiking neural networks (SNNs) have drawn broad research interest in recent years due to their high energy efficiency and biological plausibility. They have proven competitive in many machine learning tasks. Like all artificial neural network (ANN) machine learning models, SNNs rely on the assumption that the training and testing data are drawn from the same distribution. As the environment changes gradually, the input distribution shifts over time, and the performance of SNNs becomes brittle. To this end, we propose a unified framework that adapts to non-stationary streaming data by exploiting unlabeled intermediate domains, and that fits the in-hardware SNN learning algorithm error-modulated STDP. Specifically, we propose a unique self-training framework that generates pseudo-labels to retrain the model on intermediate and target domains. In addition, we develop an online normalization method with an auxiliary neuron to normalize the outputs of the hidden layers. By combining normalization with self-training, our approach achieves average classification improvements of over 10% on MNIST, NMNIST, and two other datasets.
KW - Spiking Neural Networks
KW - domain adaptation
KW - in-hardware learning
UR - http://www.scopus.com/inward/record.url?scp=85142304411&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85142304411&partnerID=8YFLogxK
U2 - 10.1109/HPEC55821.2022.9926367
DO - 10.1109/HPEC55821.2022.9926367
M3 - Conference contribution
AN - SCOPUS:85142304411
T3 - 2022 IEEE High Performance Extreme Computing Conference, HPEC 2022
BT - 2022 IEEE High Performance Extreme Computing Conference, HPEC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE High Performance Extreme Computing Conference, HPEC 2022
Y2 - 19 September 2022 through 23 September 2022
ER -