TY - GEN
T1 - Assisting fuzzy offline handwriting recognition using recurrent belief propagation
AU - Li, Yilan
AU - Li, Zhe
AU - Qiu, Qinru
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2017/2/9
Y1 - 2017/2/9
N2 - Recognizing handwritten text is a challenging task due to the many different writing styles and the lack of clear boundaries between adjacent characters. This problem has been tackled by many previous researchers using techniques such as deep learning networks and hidden Markov models (HMMs). In this work we aim at offline fuzzy recognition of handwritten text. A probabilistic inference network that performs recurrent belief propagation is developed to post-process the recognition results of a deep convolutional neural network (CNN) (e.g., LeNet) and assemble individual characters into words. The post-processing is capable of correcting deletion, insertion, and replacement errors in a noisy input. The output of the inference network is a set of candidate words, each with its probability of being the correct one. To limit the number of candidate words, a series of improvements has been made to the probabilistic inference network, including a post Gaussian Mixture Estimation model that prunes insignificant words. The experiments show that this model gives a competitive average accuracy of 85.5%, and the improvements provide a 46.57% reduction in invalid candidate words.
AB - Recognizing handwritten text is a challenging task due to the many different writing styles and the lack of clear boundaries between adjacent characters. This problem has been tackled by many previous researchers using techniques such as deep learning networks and hidden Markov models (HMMs). In this work we aim at offline fuzzy recognition of handwritten text. A probabilistic inference network that performs recurrent belief propagation is developed to post-process the recognition results of a deep convolutional neural network (CNN) (e.g., LeNet) and assemble individual characters into words. The post-processing is capable of correcting deletion, insertion, and replacement errors in a noisy input. The output of the inference network is a set of candidate words, each with its probability of being the correct one. To limit the number of candidate words, a series of improvements has been made to the probabilistic inference network, including a post Gaussian Mixture Estimation model that prunes insignificant words. The experiments show that this model gives a competitive average accuracy of 85.5%, and the improvements provide a 46.57% reduction in invalid candidate words.
UR - http://www.scopus.com/inward/record.url?scp=85016061167&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85016061167&partnerID=8YFLogxK
U2 - 10.1109/SSCI.2016.7850026
DO - 10.1109/SSCI.2016.7850026
M3 - Conference contribution
AN - SCOPUS:85016061167
T3 - 2016 IEEE Symposium Series on Computational Intelligence, SSCI 2016
BT - 2016 IEEE Symposium Series on Computational Intelligence, SSCI 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2016 IEEE Symposium Series on Computational Intelligence, SSCI 2016
Y2 - 6 December 2016 through 9 December 2016
ER -