TY - GEN
T1 - Self-structured confabulation network for fast anomaly detection and reasoning
AU - Chen, Qiuwen
AU - Wu, Qing
AU - Bishop, Morgan
AU - Linderman, Richard
AU - Qiu, Qinru
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/9/28
Y1 - 2015/9/28
N2 - Inference models such as the confabulation network are particularly useful in anomaly detection applications because they allow introspection into the decision process. However, building such a network model typically requires expert knowledge. In this paper, we present a self-structuring technique that learns the structure of a confabulation network from unlabeled data. Without any assumptions about the distribution of the data, we leverage the mutual information between features to learn a succinct network configuration and enable fast incremental learning to refine the knowledge bases from continuous data streams. Compared to several existing anomaly detection methods, the proposed approach provides higher detection performance and excellent reasoning capability. We also exploit the massive parallelism inherent in the inference model and accelerate the detection process using GPUs. Experimental results show significant speedups and the potential to be applied to real-time applications with high-volume data streams.
AB - Inference models such as the confabulation network are particularly useful in anomaly detection applications because they allow introspection into the decision process. However, building such a network model typically requires expert knowledge. In this paper, we present a self-structuring technique that learns the structure of a confabulation network from unlabeled data. Without any assumptions about the distribution of the data, we leverage the mutual information between features to learn a succinct network configuration and enable fast incremental learning to refine the knowledge bases from continuous data streams. Compared to several existing anomaly detection methods, the proposed approach provides higher detection performance and excellent reasoning capability. We also exploit the massive parallelism inherent in the inference model and accelerate the detection process using GPUs. Experimental results show significant speedups and the potential to be applied to real-time applications with high-volume data streams.
KW - Logic gates
UR - http://www.scopus.com/inward/record.url?scp=84951148994&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84951148994&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2015.7280371
DO - 10.1109/IJCNN.2015.7280371
M3 - Conference contribution
AN - SCOPUS:84951148994
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2015 International Joint Conference on Neural Networks, IJCNN 2015
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - International Joint Conference on Neural Networks, IJCNN 2015
Y2 - 12 July 2015 through 17 July 2015
ER -