Abstract
Reinforcement learning algorithms that use deep neural networks are a promising approach for the development of machines that can acquire knowledge and solve problems without human input or supervision. At present, however, these algorithms are implemented in software running on relatively standard complementary metal–oxide–semiconductor digital platforms, where performance will be constrained by the limits of Moore's law and the von Neumann architecture. Here, we report an experimental demonstration of reinforcement learning on a three-layer 1-transistor 1-memristor (1T1R) network using a modified learning algorithm tailored for our hybrid analogue–digital platform. To illustrate the capability of our approach for robust in situ training without the need for a model, we applied it to two classic control problems: the cart–pole and mountain car simulations. We also show that, compared with conventional digital systems on real-world reinforcement learning tasks, our hybrid analogue–digital computing system has the potential to achieve a significant boost in speed and energy efficiency.
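The abstract does not spell out the modified learning algorithm, so for orientation the sketch below shows plain Q-learning with a three-layer network (input, hidden, and output neurons joined by two synaptic weight matrices) on the cart–pole task. It is a hypothetical software illustration only, assuming a standard Gymnasium environment; the weight names (`W1`, `W2`), the hyperparameters, and the NumPy training loop are my assumptions, not the authors' in situ procedure.

```python
# Minimal sketch: Q-learning with a three-layer network on cart-pole.
# Hypothetical illustration of the algorithm class, not the paper's method.
import numpy as np
import gymnasium as gym  # assumed available: pip install gymnasium

rng = np.random.default_rng(0)
env = gym.make("CartPole-v1")

# Three layers of neurons (4 inputs, 16 hidden, 2 outputs) joined by two
# synaptic weight matrices -- the quantities that a memristor implementation
# would map onto analogue conductances.
n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))
lr, gamma, eps = 1e-3, 0.99, 0.1  # hypothetical hyperparameters

def forward(s):
    """Return hidden activations and Q-value estimates for both actions."""
    h = np.maximum(0.0, W1 @ s)  # ReLU hidden layer
    return h, W2 @ h

for episode in range(300):
    s, _ = env.reset(seed=episode)
    done = False
    while not done:
        h, q = forward(s)
        # Epsilon-greedy action selection.
        a = env.action_space.sample() if rng.random() < eps else int(np.argmax(q))
        s2, r, terminated, truncated, _ = env.step(a)
        done = terminated or truncated
        # Semi-gradient Q-learning target: r + gamma * max_a' Q(s', a').
        target = r if terminated else r + gamma * float(np.max(forward(s2)[1]))
        td_err = target - q[a]
        # Gradient of Q(s, a) through the ReLU, taken before W2 is updated.
        grad_h = (h > 0.0) * W2[a]
        W2[a] += lr * td_err * h
        W1 += lr * td_err * np.outer(grad_h, s)
        s = s2
env.close()
```

On the hybrid analogue–digital platform the abstract describes, the two matrix–vector products in `forward` would be carried out in analogue by the crossbar, and the two weight updates would become conductance-programming operations applied in situ on the 1T1R array, which is where the reported speed and energy-efficiency gains would come from.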
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 115–124 |
| Number of pages | 10 |
| Journal | Nature Electronics |
| Volume | 2 |
| Issue number | 3 |
| DOIs | |
| State | Published - Mar 1 2019 |
ASJC Scopus subject areas
- Electronic, Optical and Magnetic Materials
- Instrumentation
- Electrical and Electronic Engineering