TY - GEN
T1 - Accelerating cogent confabulation
T2 - 2008 International Joint Conference on Neural Networks, IJCNN 2008
AU - Qiu, Qinru
AU - Burns, Daniel
AU - Moore, Michael
AU - Linderman, Richard
AU - Renz, Thomas
AU - Wu, Qing
PY - 2008
Y1 - 2008
N2 - Cogent confabulation is a computation model that mimics the Hebbian learning, information storage, inter-relation of symbolic concepts, and recall operations of the brain. The model has been applied to cognitive processing of language, audio, and visual signals. In this project, we focus on how to accelerate the computations that underlie confabulation-based sentence completion through software and hardware optimization. On the software implementation side, appropriate data structures can improve the performance of the software by more than 5,000X. On the hardware implementation side, the cogent confabulation algorithm is an ideal candidate for parallel processing, and its performance can be significantly improved with the help of application-specific, massively parallel computing platforms. However, as the complexity and parallelism of the hardware increase, cost also increases. Architectures with different performance-cost tradeoffs are analyzed and compared. Our analysis shows that although increasing the number of processors or the size of memory per processor can increase performance, the hardware cost and performance improvements do not always exhibit a linear relation. Hardware configuration options must be carefully evaluated in order to achieve good cost-performance tradeoffs.
AB - Cogent confabulation is a computation model that mimics the Hebbian learning, information storage, inter-relation of symbolic concepts, and recall operations of the brain. The model has been applied to cognitive processing of language, audio, and visual signals. In this project, we focus on how to accelerate the computations that underlie confabulation-based sentence completion through software and hardware optimization. On the software implementation side, appropriate data structures can improve the performance of the software by more than 5,000X. On the hardware implementation side, the cogent confabulation algorithm is an ideal candidate for parallel processing, and its performance can be significantly improved with the help of application-specific, massively parallel computing platforms. However, as the complexity and parallelism of the hardware increase, cost also increases. Architectures with different performance-cost tradeoffs are analyzed and compared. Our analysis shows that although increasing the number of processors or the size of memory per processor can increase performance, the hardware cost and performance improvements do not always exhibit a linear relation. Hardware configuration options must be carefully evaluated in order to achieve good cost-performance tradeoffs.
UR - http://www.scopus.com/inward/record.url?scp=56349104782&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=56349104782&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2008.4633965
DO - 10.1109/IJCNN.2008.4633965
M3 - Conference contribution
AN - SCOPUS:56349104782
SN - 9781424418213
T3 - Proceedings of the International Joint Conference on Neural Networks
SP - 1292
EP - 1300
BT - 2008 International Joint Conference on Neural Networks, IJCNN 2008
Y2 - 1 June 2008 through 8 June 2008
ER -