TY - GEN
T1 - Towards parallel implementation of associative inference for cogent confabulation
AU - Li, Zhe
AU - Qiu, Qinru
AU - Tamhankar, Mangesh
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/11/28
Y1 - 2016/11/28
N2 - The superb efficiency and noise resilience of human cognition come from its extensive, highly associative memory. For example, it is easy for humans to recognize occluded or incomplete text images based on their context. Associative inference in the neocortex is a concurrent process. Serial implementation of this concurrent process not only hinders its performance, but also limits the quality of recall. This paper investigates a parallel implementation of associative inference using the cogent confabulation model, a highly cross-dependent and cyclic knowledge network that supports probabilistic inference. By breaking the fixed processing order typical of sequential processing and introducing randomness generated from the race conditions of parallel processing, we not only reduce the runtime but also improve the accuracy. Further improvement can be achieved by scheduling the lexicon processing intermittently, which gives the changes time to settle. Using sentence construction as a case study, we demonstrate that the parallel implementation provides up to a 93.4% reduction in computation time and a 5% improvement in recall accuracy.
AB - The superb efficiency and noise resilience of human cognition come from its extensive, highly associative memory. For example, it is easy for humans to recognize occluded or incomplete text images based on their context. Associative inference in the neocortex is a concurrent process. Serial implementation of this concurrent process not only hinders its performance, but also limits the quality of recall. This paper investigates a parallel implementation of associative inference using the cogent confabulation model, a highly cross-dependent and cyclic knowledge network that supports probabilistic inference. By breaking the fixed processing order typical of sequential processing and introducing randomness generated from the race conditions of parallel processing, we not only reduce the runtime but also improve the accuracy. Further improvement can be achieved by scheduling the lexicon processing intermittently, which gives the changes time to settle. Using sentence construction as a case study, we demonstrate that the parallel implementation provides up to a 93.4% reduction in computation time and a 5% improvement in recall accuracy.
KW - Cogent Confabulation
KW - Multi-Threading
KW - Parallel Programming
KW - Sentence Completion
UR - http://www.scopus.com/inward/record.url?scp=85007071374&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85007071374&partnerID=8YFLogxK
U2 - 10.1109/HPEC.2016.7761623
DO - 10.1109/HPEC.2016.7761623
M3 - Conference contribution
AN - SCOPUS:85007071374
T3 - 2016 IEEE High Performance Extreme Computing Conference, HPEC 2016
BT - 2016 IEEE High Performance Extreme Computing Conference, HPEC 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2016 IEEE High Performance Extreme Computing Conference, HPEC 2016
Y2 - 13 September 2016 through 15 September 2016
ER -