End-to-end reinforcement learning for multi-agent continuous control

Zilong Jiao, Jae Oh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In end-to-end reinforcement learning, an agent captures the entire mapping from its raw sensor data to actuation commands with a single neural network. End-to-end reinforcement learning has mostly been studied in single-agent domains, and its scalability to multi-agent settings remains under-explored. Without effective techniques, learning policies from the agents' joint observations can be intractable, particularly when the sensor data perceived by each agent is high-dimensional. Extending the multi-agent actor-critic method MADDPG, this paper presents Rec-MADDPG, an end-to-end reinforcement learning method for multi-agent continuous control in cooperative environments. To ease end-to-end learning in a multi-agent setting, we propose two embedding mechanisms, joint and independent embedding, that project the agents' joint sensor observations onto low-dimensional features. For training efficiency, we apply parameter sharing and an A3C-based asynchronous framework to Rec-MADDPG. Considering the challenges that arise in real-world multi-agent control, we evaluated Rec-MADDPG on robotic navigation tasks with realistically simulated robots in physics-enabled environments. Extensive evaluation demonstrates that Rec-MADDPG significantly outperforms MADDPG and learns individual end-to-end policies for continuous control from raw sensor data. In addition, independent embedding enables Rec-MADDPG to converge to better policies than joint embedding.
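The contrast between the two embedding schemes can be illustrated with a minimal sketch. Everything here is a hypothetical placeholder, not the paper's implementation: the dimensions, the random linear projections, and the agent count are assumptions chosen only to show how the two schemes differ in structure.

```python
# Hypothetical sketch of the two observation-embedding schemes named in the
# abstract; dimensions and linear projections are illustrative assumptions.
import random

def linear(in_dim, out_dim):
    """Return a random linear map as a weight matrix (list of rows)."""
    return [[random.gauss(0, 0.1) for _ in range(in_dim)]
            for _ in range(out_dim)]

def apply(W, x):
    """Apply weight matrix W to vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

n_agents, obs_dim, embed_dim = 3, 360, 16  # e.g. a 360-beam lidar per agent

obs = [[random.random() for _ in range(obs_dim)] for _ in range(n_agents)]

# Independent embedding: one encoder per agent; per-agent features are
# concatenated afterwards (result: n_agents * embed_dim values).
encoders = [linear(obs_dim, embed_dim) for _ in range(n_agents)]
indep = [f for enc, o in zip(encoders, obs) for f in apply(enc, o)]

# Joint embedding: a single encoder over the concatenated joint observation
# (result: embed_dim values).
joint_encoder = linear(n_agents * obs_dim, embed_dim)
joint = apply(joint_encoder, [f for o in obs for f in o])

print(len(indep), len(joint))  # 48 16
```

Either feature vector could then feed a MADDPG-style centralized critic in place of the raw joint observation; the sketch only shows how the input dimensionality is reduced before that stage.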

Original language: English (US)
Title of host publication: Proceedings - 18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019
Editors: M. Arif Wani, Taghi M. Khoshgoftaar, Dingding Wang, Huanjing Wang, Naeem Seliya
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 535-540
Number of pages: 6
ISBN (Electronic): 9781728145495
DOI: 10.1109/ICMLA.2019.00100
State: Published - Dec 2019
Event: 18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019 - Boca Raton, United States
Duration: Dec 16 2019 - Dec 19 2019

Publication series

Name: Proceedings - 18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019

Conference

Conference: 18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019
Country: United States
City: Boca Raton
Period: 12/16/19 - 12/19/19

Keywords

  • Continuous control
  • End-to-end reinforcement learning
  • Multi-agent learning
  • State abstraction

ASJC Scopus subject areas

  • Strategy and Management
  • Artificial Intelligence
  • Computer Science Applications
  • Decision Sciences (miscellaneous)
  • Signal Processing
  • Media Technology


Cite this

Jiao, Z., & Oh, J. (2019). End-to-end reinforcement learning for multi-agent continuous control. In M. A. Wani, T. M. Khoshgoftaar, D. Wang, H. Wang, & N. Seliya (Eds.), Proceedings - 18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019 (pp. 535-540). [8999172] (Proceedings - 18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICMLA.2019.00100