Visual tracking is a difficult task due to numerous challenges such as scale variation, occlusion, motion blur, and other deformations throughout a video sequence. While correlation filter trackers have recently shown promise, accounting for the many appearance changes an object undergoes during tracking remains a challenge. In this paper, we propose a selective parts-based approach, using correlation filters, that makes choices based on a consensus of the parts and the global tracker to track through occlusions. In contrast to existing part-based methods, the proposed method does not dilute accurate tracking by averaging results over multiple parts at every frame. Instead, we make location corrections only when a part diverges, and rely on these corrections to maintain an accurate appearance model. The proposed approach was evaluated on sequences from two challenging benchmark datasets. It achieves better overall precision and success rates than recent parts-based approaches, and performs especially well in occlusion scenarios.
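The selective-correction idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the median-based consensus, and the divergence threshold are all assumptions made for the sketch.

```python
import numpy as np

def select_position(global_pos, part_positions, part_offsets, threshold=20.0):
    """Illustrative selective-correction step (hypothetical, not the
    paper's exact method): keep the global correlation filter's
    prediction unless it diverges from the consensus of the parts.

    global_pos:     (x, y) predicted by the global correlation filter
    part_positions: (N, 2) positions predicted by the part filters
    part_offsets:   (N, 2) each part's offset from the object centre,
                    recorded when the part was initialised
    """
    # Each part votes for an object centre by subtracting its offset.
    votes = np.asarray(part_positions, float) - np.asarray(part_offsets, float)
    consensus = np.median(votes, axis=0)  # median is robust to a diverged part
    # Correct only on divergence; otherwise keep the global estimate,
    # so accurate tracking is not diluted by per-frame averaging.
    if np.linalg.norm(np.asarray(global_pos, float) - consensus) > threshold:
        return consensus
    return np.asarray(global_pos, dtype=float)
```

The key design point the sketch captures is that the parts act as a check on the global tracker rather than as co-equal estimators averaged at every frame.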