FPGA acceleration of recurrent neural network based language model

Sicheng Li, Chunpeng Wu, Hai Li, Boxun Li, Yu Wang, Qinru Qiu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

45 Scopus citations

Abstract

Recurrent neural network (RNN) based language model (RNNLM) is a biologically inspired model for natural language processing. It records historical information through additional recurrent connections and is therefore very effective in capturing the semantics of sentences. However, the use of RNNLM has been greatly hindered by the high computation cost of training. This work presents an FPGA implementation framework for RNNLM training acceleration. At the architectural level, we improve the parallelism of the RNN training scheme and reduce the computing resource requirement to enhance computation efficiency. The hardware implementation primarily targets reducing the data communication load. A multi-thread computation engine is utilized that can successfully mask the long memory latency and reuse frequently accessed data. The evaluation based on the Microsoft Research Sentence Completion Challenge shows that the proposed FPGA implementation outperforms traditional class-based modest-size recurrent networks and achieves 46.2% training accuracy. Moreover, experiments at different network sizes demonstrate the strong scalability of the proposed framework.
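The recurrent connections mentioned in the abstract can be illustrated with a minimal sketch (not the paper's implementation): a simple Elman-style hidden-state update in which each step mixes the current input with the previous hidden state, so history propagates through time. All names and weight values here are hypothetical.

```python
import math

def rnn_step(x, h_prev, W_xh, W_hh):
    """One hypothetical Elman-style recurrent step.

    The new hidden state combines the current input x with the
    previous hidden state h_prev, which is how the recurrent
    connection carries historical information forward.
    """
    return [
        math.tanh(
            sum(wx * xi for wx, xi in zip(W_xh[j], x)) +      # input contribution
            sum(wh * hi for wh, hi in zip(W_hh[j], h_prev))   # recurrent contribution
        )
        for j in range(len(h_prev))
    ]
```

Feeding the same input twice yields different hidden states because the second step sees the history accumulated in the first, which is the property the abstract credits for capturing sentence semantics.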

Original language: English (US)
Title of host publication: Proceedings - 2015 IEEE 23rd Annual International Symposium on Field-Programmable Custom Computing Machines, FCCM 2015
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 111-118
Number of pages: 8
ISBN (Print): 9781479999699
DOI: 10.1109/FCCM.2015.50
State: Published - Jul 15 2015
Event: 23rd IEEE Annual International Symposium on Field-Programmable Custom Computing Machines, FCCM 2015 - Vancouver, Canada
Duration: May 3 2015 – May 5 2015

Keywords

  • Acceleration
  • FPGA
  • Language model
  • Recurrent neural network (RNN)

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Hardware and Architecture

Cite this

Li, S., Wu, C., Li, H., Li, B., Wang, Y., & Qiu, Q. (2015). FPGA acceleration of recurrent neural network based language model. In Proceedings - 2015 IEEE 23rd Annual International Symposium on Field-Programmable Custom Computing Machines, FCCM 2015 (pp. 111-118). [7160054] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/FCCM.2015.50