Existing optical character recognition (OCR) software tools can detect and recognize text in images with fairly high accuracy; however, their performance degrades significantly when a character image is partially occluded or smudged. Such missing information does not hinder human perception, because we predict the missing parts from the word-level and sentence-level context of the character. To mimic this human cognitive behavior, we developed a hybrid cognitive architecture that combines two neuromorphic computing models, the brain-state-in-a-box (BSB) model and cogent confabulation, to achieve context-aware text recognition. The BSB model performs character recognition on the input image, while the confabulation models perform context-aware prediction based on word and sentence knowledge bases. The software tool is implemented on a 1,824-core computing cluster; its accuracy and performance are analyzed in this paper.
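The BSB model mentioned above is an attractor neural network whose recall iteration drives a noisy or partially occluded input state toward a stored pattern. The following is a minimal sketch of the standard BSB recall dynamics only, not the paper's implementation; the pattern size, the feedback coefficients `alpha` and `beta`, and the simple Hebbian outer-product training are illustrative assumptions.

```python
import numpy as np

def bsb_recall(A, x0, alpha=0.3, beta=1.0, n_iters=50):
    """Standard BSB recall: x <- clip(beta*x + alpha*A@x, -1, 1)
    iterated until the state stops changing (an attractor)."""
    x = x0.copy()
    for _ in range(n_iters):
        x_new = np.clip(beta * x + alpha * (A @ x), -1.0, 1.0)
        if np.allclose(x_new, x):
            break
        x = x_new
    return x

# Store one bipolar pattern with a simple Hebbian outer-product rule
# (illustrative; real systems train A on many character templates).
p = np.sign(np.random.default_rng(0).standard_normal(16))
A = np.outer(p, p) / p.size

# Simulate a partially occluded character: zero out some components.
noisy = p.copy()
noisy[:4] = 0.0

recalled = bsb_recall(A, noisy)  # converges back to the stored pattern p
```

Repeated application of the saturating update pushes each component of the state vector to a corner of the hypercube [-1, 1]^n, which is why a partially blanked input can still settle onto the stored character pattern.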