ACL-QL: Adaptive Conservative Level in Q-Learning for Offline Reinforcement Learning

Kun Wu, Yinuo Zhao, Zhiyuan Xu, Zhengping Che, Chengxiang Yin, Chi Harold Liu, Qinru Qiu, Feifei Feng, Jian Tang

Research output: Contribution to journal › Article › peer-review

Abstract

Offline reinforcement learning (RL), which operates solely on static datasets without further interactions with the environment, provides an appealing alternative for learning a safe and promising control policy. Prevailing methods typically learn a conservative policy to mitigate the problem of Q-value overestimation, but they are prone to overdoing it, leading to an overly conservative policy. Moreover, they optimize all samples equally with fixed constraints, lacking the nuanced ability to control the conservative level in a fine-grained manner, and this limitation results in a performance decline. To address both challenges in a unified way, we propose adaptive conservative level in Q-learning (ACL-QL), a framework that limits the Q-values to a mild range and enables adaptive control of the conservative level over each state-action pair, i.e., lifting the Q-values more for good transitions and less for bad transitions. We theoretically analyze the conditions under which the conservative level of the learned Q-function can be limited to a mild range and how each transition can be optimized adaptively. Motivated by this theoretical analysis, we propose a novel algorithm, ACL-QL, which uses two learnable adaptive weight functions to control the conservative level over each transition. We then design a monotonicity loss and surrogate losses to train the adaptive weight functions, Q-function, and policy network alternately. We evaluate ACL-QL on the commonly used Datasets for Deep Data-Driven Reinforcement Learning (D4RL) benchmark and conduct extensive ablation studies to illustrate its effectiveness and state-of-the-art performance compared with existing offline deep RL baselines.
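
The abstract sketches the core mechanism: a conservative Q-learning objective whose penalty is reweighted per transition by two learnable functions. As a rough illustration, below is a minimal PyTorch sketch of what such an adaptively weighted conservative loss could look like; the names (WeightNet, acl_ql_loss), the Softplus parameterization, and the exact penalty form are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """Maps a (state, action) pair to a positive per-transition weight
    (hypothetical parameterization; the paper may use a different form)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # Softplus keeps weights positive
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)

def acl_ql_loss(q_net, w_down, w_up, policy, batch, gamma=0.99):
    """CQL-style objective with per-transition adaptive weights.
    q_net: callable (s, a) -> Q-values of shape (B,)
    policy: callable s -> actions
    batch: (s, a, r, s2, done) tensors sampled from the offline dataset."""
    s, a, r, s2, done = batch
    # Standard TD target (target networks omitted for brevity).
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * q_net(s2, policy(s2))
    td_loss = ((q_net(s, a) - target) ** 2).mean()

    # Adaptive conservative term: push Q down on policy (possibly
    # out-of-distribution) actions and up on dataset actions, with
    # per-transition weights so that good transitions are lifted more
    # and bad transitions less.
    a_pi = policy(s)
    penalty = (w_down(s, a_pi) * q_net(s, a_pi)
               - w_up(s, a) * q_net(s, a)).mean()
    return td_loss + penalty
```

The monotonicity loss that trains the weight functions and the alternating updates of the Q-function and policy network are described in the paper and are not reproduced in this sketch.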

Original language: English (US)
Journal: IEEE Transactions on Neural Networks and Learning Systems
DOIs
State: Accepted/In press - 2024
Externally published: Yes

Keywords

  • Model-free reinforcement learning (RL)
  • offline RL
  • RL

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence
