TY - GEN
T1 - Jamming Attacks on NextG Radio Access Network Slicing with Reinforcement Learning
AU - Shi, Yi
AU - Sagduyu, Yalin E.
AU - Erpek, Tugba
AU - Gursoy, M. Cenk
N1 - Funding Information:
This effort is supported by the U.S. Army Research Office under contract W911NF-17-C-0090. The content of the information does not necessarily reflect the position or the policy of the U.S. Government, and no official endorsement should be inferred.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - This paper studies how to launch an attack on reinforcement learning for network slicing in NextG radio access network (RAN). An adversarial machine learning approach is pursued to construct an over-the-air attack that manipulates the reinforcement learning algorithm and disrupts resource allocation of NextG RAN slicing. Resource blocks are allocated by the base station (gNodeB) to user equipment requests, and reinforcement learning is applied to maximize the total reward of accepted requests over time. In the meantime, the jammer builds its surrogate model with its own reinforcement learning algorithm by observing the spectrum. This surrogate model is used to select which resource blocks to jam subject to an energy budget. The jammer's goal is to maximize the number of failed network slicing requests. For that purpose, the jammer jams the resource blocks and reduces the reinforcement learning algorithm's reward that is used as the input to update the reinforcement learning algorithm for network slicing. As a result, the network slicing performance does not recover for a while even after the jammer stops jamming. The recovery time and the loss in the reward are evaluated for this attack. Results demonstrate the effectiveness of this attack compared to benchmark (random and myopic) jamming attacks, and indicate vulnerabilities of NextG RAN slicing to smart jammers.
AB - This paper studies how to launch an attack on reinforcement learning for network slicing in NextG radio access network (RAN). An adversarial machine learning approach is pursued to construct an over-the-air attack that manipulates the reinforcement learning algorithm and disrupts resource allocation of NextG RAN slicing. Resource blocks are allocated by the base station (gNodeB) to user equipment requests, and reinforcement learning is applied to maximize the total reward of accepted requests over time. In the meantime, the jammer builds its surrogate model with its own reinforcement learning algorithm by observing the spectrum. This surrogate model is used to select which resource blocks to jam subject to an energy budget. The jammer's goal is to maximize the number of failed network slicing requests. For that purpose, the jammer jams the resource blocks and reduces the reinforcement learning algorithm's reward that is used as the input to update the reinforcement learning algorithm for network slicing. As a result, the network slicing performance does not recover for a while even after the jammer stops jamming. The recovery time and the loss in the reward are evaluated for this attack. Results demonstrate the effectiveness of this attack compared to benchmark (random and myopic) jamming attacks, and indicate vulnerabilities of NextG RAN slicing to smart jammers.
UR - http://www.scopus.com/inward/record.url?scp=85142809647&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85142809647&partnerID=8YFLogxK
U2 - 10.1109/FNWF55208.2022.00076
DO - 10.1109/FNWF55208.2022.00076
M3 - Conference contribution
AN - SCOPUS:85142809647
T3 - Proceedings - 2022 IEEE Future Networks World Forum, FNWF 2022
SP - 397
EP - 402
BT - Proceedings - 2022 IEEE Future Networks World Forum, FNWF 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE Future Networks World Forum, FNWF 2022
Y2 - 12 October 2022 through 14 October 2022
ER -