Abstract
The explosive growth of mobile devices with built-in sensors such as GPS, accelerometers, gyroscopes, and cameras has made it possible to design mobile crowdsensing (MCS) applications, which create a new interface between humans and their surroundings. To date, a variety of MCS applications have been designed in which task initiators (TIs) recruit mobile users (MUs) to complete the required sensing tasks. In this paper, deep reinforcement learning (DRL) techniques are investigated to address the problem of assigning satisfactory yet profitable incentives to multiple TIs and MUs, modeled as an MCS game. Specifically, we first formulate the problem as a multi-leader multi-follower Stackelberg game in which the TIs are the leaders and the MUs are the followers. Then, the existence of the Stackelberg Equilibrium (SE) is proved. Given the difficulty of computing the SE directly, a DRL-based Dynamic Incentive Mechanism (DDIM) is proposed that enables the TIs to learn optimal pricing strategies directly from game experience without knowing the private information of the MUs. Finally, numerical experiments illustrate the effectiveness of the proposed incentive mechanism compared with both state-of-the-art and baseline approaches.
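To make the leader-follower structure concrete, below is a minimal Python sketch of the kind of interaction the abstract describes. It is not the paper's DDIM algorithm: the utility forms, the cost parameters `COSTS`, the data value `VALUE`, and the use of tabular bandit-style Q-learning in place of deep RL are all illustrative assumptions. It only shows how a TI (leader) can learn a pricing strategy from observed payoffs while the MUs' (followers') cost parameters remain private.

```python
import random

# Illustrative Stackelberg pricing game (assumed utility forms, not the paper's).
# One task initiator (leader) posts a unit reward p; each mobile user (follower)
# best-responds with the sensing effort x_j that maximizes  p*x_j - c_j*x_j**2,
# i.e. x_j* = p / (2*c_j). The cost parameters c_j are private to the MUs.

COSTS = [0.8, 1.0, 1.5, 2.0]              # hidden MU cost parameters (assumption)
VALUE = 3.0                               # leader's value per unit of data (assumption)
PRICES = [0.5 * k for k in range(1, 9)]   # discrete pricing actions: 0.5 .. 4.0

def leader_payoff(p):
    """Leader utility: data value minus reward paid, under followers' best responses."""
    efforts = [p / (2 * c) for c in COSTS]
    return sum((VALUE - p) * x for x in efforts)

# Bandit-style Q-learning over prices (a tabular stand-in for the paper's deep RL).
q = {p: 0.0 for p in PRICES}
counts = {p: 0 for p in PRICES}
epsilon = 0.1

for t in range(5000):
    # epsilon-greedy exploration over the discrete price set
    p = random.choice(PRICES) if random.random() < epsilon else max(q, key=q.get)
    r = leader_payoff(p)                  # the TI observes only its own payoff
    counts[p] += 1
    q[p] += (r - q[p]) / counts[p]        # incremental mean of observed payoffs

best = max(q, key=q.get)
print(f"learned price: {best:.2f}, estimated leader payoff: {q[best]:.2f}")
```

Under these assumed utilities, each follower's best response is x_j* = p/(2c_j), so the leader's payoff is concave in p and the epsilon-greedy learner converges to the discrete price nearest the analytical optimum p* = VALUE/2 without ever observing any c_j. In the paper, DDIM plays the analogous role with deep networks and multiple competing leaders.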
| Original language | English (US) |
|---|---|
| Article number | 8758205 |
| Pages (from-to) | 2316-2329 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Mobile Computing |
| Volume | 19 |
| Issue number | 10 |
| DOIs | |
| State | Published - Oct 1 2020 |
Keywords
- Incentive mechanism
- deep reinforcement learning
- multi-leader multi-follower mobile crowdsensing
- Stackelberg equilibrium
ASJC Scopus subject areas
- Software
- Computer Networks and Communications
- Electrical and Electronic Engineering