The problem of estimating the KL divergence between two unknown distributions is studied. The alphabet size k of the distributions can scale to infinity. The estimation is based on m and n independent samples drawn from the two distributions, respectively. It is first shown that there does not exist any consistent estimator that guarantees asymptotically small worst-case quadratic risk over the set of all pairs of distributions. A restricted set that contains pairs of distributions whose ratio is bounded by a function F(k) is further considered. An augmented plug-in estimator is proposed, and is shown to be consistent if and only if m = ω(k ∨ log²F(k)) and n = ω(kF(k)). Furthermore, if F(k) ≥ log²k and log²F(k) = o(k), it is shown that any consistent estimator must satisfy the necessary conditions m = ω(k/log k ∨ log²F(k)) and n = ω(kF(k)/log k).
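As an illustration of the kind of estimator discussed above, the following is a minimal sketch of an augmented plug-in KL divergence estimator. It assumes a finite alphabet {0, …, k−1}, estimates the first distribution by its raw empirical frequencies, and smooths the second distribution with add-one (Laplace) corrections so that the plug-in ratio is always finite; the function name, the specific smoothing constant, and the example distributions are illustrative assumptions, not details taken from the text.

```python
import numpy as np

def augmented_plugin_kl(samples_p, samples_q, k):
    """Hedged sketch of an augmented plug-in estimator of D(P || Q).

    Assumptions (not from the abstract): symbols are integers in
    {0, ..., k-1}; P is estimated by empirical frequencies; Q is
    estimated with add-one smoothing, q_hat_i = (count_i + 1) / (n + k),
    so no denominator is zero.
    """
    m, n = len(samples_p), len(samples_q)
    counts_p = np.bincount(samples_p, minlength=k)
    counts_q = np.bincount(samples_q, minlength=k)
    p_hat = counts_p / m                # plug-in estimate of P
    q_hat = (counts_q + 1) / (n + k)    # smoothed estimate of Q
    mask = p_hat > 0                    # treat 0 * log 0 as 0
    return np.sum(p_hat[mask] * np.log(p_hat[mask] / q_hat[mask]))

# Usage example on an alphabet of size k = 5 with made-up distributions.
rng = np.random.default_rng(0)
p = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
q = np.full(5, 0.2)
x = rng.choice(5, size=2000, p=p)
y = rng.choice(5, size=2000, p=q)
print(augmented_plugin_kl(x, y, k=5))
```

The sample sizes m and n in such a sketch would need to grow with k and F(k) at the rates stated above for the estimate to be reliable in the worst case.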