This paper studies how to launch an over-the-air attack on reinforcement learning for network slicing in the NextG radio access network (RAN). An adversarial machine learning approach is pursued to manipulate the reinforcement learning algorithm and disrupt the resource allocation of NextG RAN slicing. The base station (gNodeB) allocates resource blocks to user equipment requests, and reinforcement learning is applied to maximize the total reward of accepted requests over time. Meanwhile, the jammer builds a surrogate model with its own reinforcement learning algorithm by observing the spectrum, and uses this surrogate model to select which resource blocks to jam subject to an energy budget. The jammer's goal is to maximize the number of failed network slicing requests. To that end, the jammer jams the selected resource blocks and thereby reduces the reward that is fed back to update the reinforcement learning algorithm for network slicing. As a result, the network slicing performance does not recover for some time even after the jammer stops jamming. The recovery time and the loss in reward are evaluated for this attack. Results demonstrate the effectiveness of this attack compared to benchmark (random and myopic) jamming attacks, and indicate the vulnerability of NextG RAN slicing to smart jammers.
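To make the attack loop concrete, the following is a minimal sketch of a surrogate learner that selects resource blocks to jam under an energy budget. Every detail here is an illustrative assumption rather than the paper's actual algorithm: the number of resource blocks, the budget, the bandit-style Q-update standing in for the jammer's reinforcement learning, and the `observed_disruption` function standing in for spectrum observations are all hypothetical.

```python
import random

random.seed(0)

NUM_RBS = 10       # resource blocks visible to the jammer (assumed)
ENERGY_BUDGET = 3  # max resource blocks jammed per step (assumed)
EPSILON = 0.1      # exploration rate of the surrogate learner (assumed)
ALPHA = 0.2        # learning rate of the surrogate learner (assumed)

# Surrogate Q-values: the jammer's running estimate of how disruptive
# jamming each resource block is.
q = [0.0] * NUM_RBS

def observed_disruption(rb):
    """Hypothetical stand-in for spectrum observation: 1.0 if jamming
    resource block `rb` caused a slicing request to fail, else 0.0.
    Here, higher-indexed resource blocks are assumed to be busier."""
    busy_prob = (rb + 1) / NUM_RBS
    return 1.0 if random.random() < busy_prob else 0.0

def select_rbs_to_jam():
    """Pick up to ENERGY_BUDGET resource blocks: usually the highest-Q
    ones, with epsilon-greedy exploration of the rest."""
    if random.random() < EPSILON:
        return random.sample(range(NUM_RBS), ENERGY_BUDGET)
    ranked = sorted(range(NUM_RBS), key=lambda rb: q[rb], reverse=True)
    return ranked[:ENERGY_BUDGET]

# Training loop: jam, observe the inferred disruption, update the surrogate.
for step in range(2000):
    for rb in select_rbs_to_jam():
        reward = observed_disruption(rb)
        q[rb] += ALPHA * (reward - q[rb])  # incremental value update

# The surrogate should now concentrate its budget on the busiest blocks.
best = sorted(range(NUM_RBS), key=lambda rb: q[rb], reverse=True)[:ENERGY_BUDGET]
print(sorted(best))
```

In this toy setting the learner converges on the resource blocks whose jamming most often fails a request, which mirrors the abstract's description of the jammer maximizing failed slicing requests within its energy budget; the paper's full attack additionally exploits the fact that the reduced reward poisons the gNodeB's own reinforcement learning updates.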