Reinforcement learning with analogue memristor arrays

Zhongrui Wang, Can Li, Wenhao Song, Mingyi Rao, Daniel Belkin, Yunning Li, Peng Yan, Hao Jiang, Peng Lin, Miao Hu, John Paul Strachan, Ning Ge, Mark Barnell, Qing Wu, Andrew G. Barto, Qinru Qiu, R. Stanley Williams, Qiangfei Xia, J. Joshua Yang

Research output: Contribution to journal › Article › peer-review

Abstract

Reinforcement learning algorithms that use deep neural networks are a promising approach for the development of machines that can acquire knowledge and solve problems without human input or supervision. At present, however, these algorithms are implemented in software running on relatively standard complementary metal–oxide–semiconductor digital platforms, where performance will be constrained by the limits of Moore’s law and von Neumann architecture. Here, we report an experimental demonstration of reinforcement learning on a three-layer 1-transistor 1-memristor (1T1R) network using a modified learning algorithm tailored for our hybrid analogue–digital platform. To illustrate the capability of our approach for robust in situ training without the need for a model, we applied it to two classic control problems: the cart–pole and mountain car simulations. We also show that, compared with conventional digital systems in real-world reinforcement learning tasks, our hybrid analogue–digital computing system has the potential to achieve a significant boost in speed and energy efficiency.
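
The central mechanism described here, network weights stored as analogue memristor conductances so that each forward pass is a single physical vector–matrix multiply, is easy to emulate in software. The following Python sketch is an illustration under stated assumptions, not the paper's implementation: the conductance window, the differential (G+/G-) weight encoding, and all names (CrossbarLayer, G_MIN, G_MAX, program, lr) are ours.

```python
import numpy as np

# Hypothetical conductance window for an analogue memristor cell.
# These numbers are placeholders, not the paper's device parameters.
G_MIN, G_MAX = 1e-6, 1e-4  # siemens

class CrossbarLayer:
    """One analogue layer of a 1T1R crossbar, emulated in software.

    A signed weight is encoded as the difference of two device
    conductances (G+ - G-), a common scheme for memristor arrays.
    Physically, applying input voltages to the rows produces the
    column currents in a single step (Ohm's law plus Kirchhoff's
    current law); here that read-out is just a dot product.
    """

    def __init__(self, n_in, n_out, rng):
        self.g_pos = rng.uniform(G_MIN, G_MAX, (n_in, n_out))
        self.g_neg = rng.uniform(G_MIN, G_MAX, (n_in, n_out))

    def forward(self, v):
        # Column currents are the analogue vector-matrix product.
        return v @ (self.g_pos - self.g_neg)

    def program(self, dw, lr=1e-7):
        # In situ training: nudge the stored conductances toward the
        # desired weight update and clip to the physical window,
        # mimicking the bounded, imprecise writes of real devices.
        self.g_pos = np.clip(self.g_pos + lr * np.maximum(dw, 0.0), G_MIN, G_MAX)
        self.g_neg = np.clip(self.g_neg + lr * np.maximum(-dw, 0.0), G_MIN, G_MAX)

# Example: read out Q-values for the cart-pole task (4 state inputs, 2 actions).
rng = np.random.default_rng(0)
layer = CrossbarLayer(4, 2, rng)
state = np.array([0.0, 0.1, -0.05, 0.2])  # cart position/velocity, pole angle/velocity
q_values = layer.forward(state)           # one analogue read gives both Q-values
action = int(np.argmax(q_values))         # greedy action from the read-out
```

A full agent would stack three such layers and drive program() from the temporal-difference error of a Q-learning update; the paper's actual training procedure is a modified algorithm tailored to the hardware, which this sketch does not reproduce.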

Original language: English (US)
Pages (from-to): 115-124
Number of pages: 10
Journal: Nature Electronics
Volume: 2
Issue number: 3
State: Published - Mar 1 2019

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Instrumentation
  • Electrical and Electronic Engineering
