A Neuromorphic Architecture for Context Aware Text Image Recognition

Qinru Qiu, Zhe Li, Khadeer Ahmed, Wei Liu, Syed Faisal Habib, Hai (Helen) Li, Miao Hu

Research output: Contribution to journal › Article › peer-review

7 Scopus citations

Abstract

Although existing optical character recognition (OCR) tools can achieve excellent performance in text image detection and pattern recognition, they usually require a clean input image, and most of them do not perform well when the image is partially occluded or smudged. Humans can tolerate much lower image quality during reading because perception errors are corrected by knowledge of word- and sentence-level context. In this paper, we present a brain-inspired information processing framework for context-aware Intelligent Text Recognition (ITR) and its acceleration using a memristor-based crossbar array. The ITR system has a bottom layer of massively parallel Brain-state-in-a-box (BSB) engines that produce fuzzy pattern-matching results and an upper layer of statistical-inference-based error correction. Optimizations on each layer of the framework are introduced to improve system performance. A parallel architecture is presented that incorporates the memristor crossbar array to accelerate the pattern matching. Compared to a traditional multicore microprocessor, the accelerator has the potential to provide tremendous area and power savings and a speedup of more than 8,000 times.
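For readers unfamiliar with the BSB recall operation referenced in the abstract, the sketch below illustrates the standard Brain-state-in-a-box iteration, x(t+1) = S(alpha * A * x(t) + lambda * x(t)), where A is a trained weight matrix and S clips each element to [-1, 1]. The matrix-vector product A * x(t) is the step that a memristor crossbar can evaluate in analog, since the crossbar output currents follow I_j = sum_i G_ij * V_i. The function name, gain constants, and convergence test below are illustrative assumptions, not the paper's exact formulation or parameter values.

```python
import numpy as np

def bsb_recall(A, x0, alpha=0.3, lam=1.0, max_iters=100, tol=1e-4):
    """Illustrative Brain-state-in-a-box (BSB) recall loop (hypothetical sketch).

    A      : (n, n) trained weight matrix for one BSB engine.
    x0     : initial state vector derived from the (possibly noisy) input pattern.
    alpha  : feedback gain on the weight-matrix term (assumed value).
    lam    : gain on the previous state (assumed value).
    Returns the converged state, which would then be scored against stored
    prototypes to produce a fuzzy (ranked) pattern-matching result.
    """
    x = np.clip(x0, -1.0, 1.0)
    for _ in range(max_iters):
        # A @ x is the matrix-vector product a memristor crossbar could
        # compute in analog; here it is done digitally for illustration.
        x_new = np.clip(alpha * (A @ x) + lam * x, -1.0, 1.0)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```

In the framework described in the abstract, many such BSB engines would run in parallel (e.g., one per character class), and their fuzzy match scores feed the upper statistical-inference layer, which corrects character-level errors using word- and sentence-level context.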

Original language: English (US)
Pages (from-to): 355-369
Number of pages: 15
Journal: Journal of Signal Processing Systems
Volume: 84
Issue number: 3
DOIs
State: Published - Sep 1 2016

Keywords

  • Memristor crossbar array
  • Neuromorphic
  • Text recognition

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Theoretical Computer Science
  • Signal Processing
  • Information Systems
  • Modeling and Simulation
  • Hardware and Architecture

