We present a structured dictionary learning method that removes blocking artifacts without blurring edges or making any assumptions about image gradients. Instead of a single overcomplete dictionary, we build multiple subspaces and impose sparsity on the reconstruction coefficients when projecting a given texture sample onto each subspace separately. If the texture matches the data on which a subspace was trained, that subspace's response is stronger and it is chosen to represent the texture. In this manner we compute the representations of all patches in the image and aggregate them to obtain the final image. Since blocking artifacts are small in magnitude compared to actual image edges, aggregation effectively removes the artifacts while preserving image gradients. We discuss choices of subspace parameterization and adaptation to the given data. Results on a large dataset of benchmark images demonstrate that the presented method yields superior results in terms of pixel-wise (PSNR) and perceptual (SSIM) measures.
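The core subspace-selection step can be illustrated with a minimal sketch. This is not the authors' implementation: it uses plain PCA subspaces (learned by SVD, one per texture class) and picks the subspace with the smallest projection residual, omitting the sparsity constraint on the coefficients. All function names and the `rank` parameter are hypothetical.

```python
import numpy as np

def learn_subspaces(training_sets, rank=8):
    """Learn a low-rank PCA basis for each texture class.
    training_sets: list of (n_patches, patch_dim) arrays, one per class.
    (Illustrative stand-in for the trained subspaces in the paper.)"""
    bases = []
    for X in training_sets:
        Xc = X - X.mean(axis=0)
        # top `rank` right singular vectors span the class subspace
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        bases.append(Vt[:rank].T)          # (patch_dim, rank), orthonormal columns
    return bases

def best_subspace_reconstruction(patch, bases):
    """Project a patch onto every subspace and keep the reconstruction
    from the subspace with the smallest residual (strongest response)."""
    best, best_err = None, np.inf
    for B in bases:
        coeffs = B.T @ patch               # projection coefficients
        recon = B @ coeffs                 # back-projection onto the subspace
        err = np.linalg.norm(patch - recon)
        if err < best_err:
            best, best_err = recon, err
    return best
```

In a full deblocking pipeline, `best_subspace_reconstruction` would be applied to overlapping patches and the results averaged in the overlap regions; that aggregation is what attenuates the small-magnitude block boundaries while leaving true edges intact.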