Spatial-temporal graphs have been widely observed in various domains such as neuroscience, climate research, and transportation engineering. State-of-the-art models for spatial-temporal graphs rely on Graph Neural Networks (GNNs) to obtain explicit representations of such networks and to discover hidden spatial dependencies within them. These models have demonstrated superior performance across a variety of tasks. In this paper, we propose ADVERSPARSE, a sparse adversarial attack framework, to show that removing only a few key connections in such graphs significantly disrupts the hidden spatial dependencies learned by spatial-temporal models, leading to issues such as increased prediction errors. We formulate the adversarial attack as an optimization problem and solve it with the Alternating Direction Method of Multipliers (ADMM). Experiments show that ADVERSPARSE can find and remove key connections in these graphs, causing models to malfunction, even models designed to learn hidden spatial dependencies.
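To illustrate the core idea of a sparsity-budgeted edge-removal attack, the sketch below zeroes out the few highest-scoring edges of an adjacency matrix. This is a minimal, hypothetical example: the function name `sparse_edge_attack` and the precomputed `saliency` scores are assumptions for illustration, whereas the paper's actual method obtains the attack via ADMM-based optimization.

```python
import numpy as np

def sparse_edge_attack(adj, saliency, budget):
    """Zero out the `budget` most salient existing edges of a graph.

    adj      : (n, n) adjacency matrix (nonzero entry = edge present)
    saliency : (n, n) per-edge importance scores (hypothetical; in the
               paper these would come from the ADMM-based optimization)
    budget   : number of edges the attacker may remove
    Returns the perturbed adjacency matrix.
    """
    attacked = adj.copy().astype(float)
    # Only existing edges are candidates for removal.
    scores = np.where(adj != 0, saliency, -np.inf)
    # Flattened indices of the `budget` highest-scoring edges.
    top = np.argsort(scores, axis=None)[::-1][:budget]
    rows, cols = np.unravel_index(top, adj.shape)
    attacked[rows, cols] = 0.0
    return attacked

# Toy example: a 4-node directed graph with 5 edges; a budget of 2
# removes the two edges with the highest saliency scores.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 0, 0]])
saliency = np.array([[0.0, 0.9, 0.1, 0.0],
                     [0.0, 0.0, 0.8, 0.0],
                     [0.2, 0.0, 0.0, 0.5],
                     [0.0, 0.0, 0.0, 0.0]])
attacked = sparse_edge_attack(adj, saliency, budget=2)
```

Because the perturbation is constrained to a small edge budget, the attacked graph remains nearly identical to the original, which is what makes such sparse attacks hard to detect while still degrading the spatial dependencies a GNN relies on.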