We consider two basic computational problems regarding discrete probability distributions: (1) approximating the statistical difference (aka variation distance) between two given distributions, and (2) approximating the entropy of a given distribution. Both problems are considered in two different settings. In the first setting the approximation algorithm is only given samples from the distributions in question, whereas in the second setting the algorithm is given the ``code'' of a sampling device (for the distributions in question).
We survey the known results regarding both settings, noting that they are fundamentally different: The first setting is concerned with the number of samples required for determining the quantity in question, and is thus essentially information theoretic. In the second setting the quantities in question are determined by the input, and the question is merely one of computational complexity. The focus of this survey is actually on the latter setting. In particular, the survey includes proof sketches of three central results regarding the latter setting, one of which has previously appeared only in the second author's PhD Thesis.
Errata (3-Feb-2019): As pointed out by Itay Berman, Akshay Degwekar, Ron Rothblum and Prashant Vasudevan, the proof (sketched in Sec 5.1) for the general part of Thm 1 holds only for constant $c$ and $f$ such that $c$ is smaller than $f^2$. Nevertheless, they were able to prove the stated generalization using a more complex argument [to be posted soon]. As for the original proof, it calls for setting $t$ so that $(f(n)^2/c(n))^{t/2} \geq 8n$ while assuming that $c(n)^t \geq 1/\poly(n)$. For $c(n)$ that is upper-bounded by a constant smaller than one, this assumption holds only if $t=O(\log n)$, which in turn implies that $f(n)^2/c(n)$ must be lower-bounded by a constant greater than one.
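The parameter constraint underlying the errata can be spelled out as follows; this is a sketch of the implicit calculation, assuming $c(n) \leq c_0$ for some constant $c_0 < 1$:
\begin{align*}
c(n)^t \geq 1/\poly(n)
  &\;\Longrightarrow\; t \cdot \ln(1/c_0) \leq O(\log n)
  \;\Longrightarrow\; t = O(\log n), \\
(f(n)^2/c(n))^{t/2} \geq 8n
  &\;\Longrightarrow\; \frac{t}{2} \cdot \ln\!\left(\frac{f(n)^2}{c(n)}\right) \geq \ln(8n)
  \;\Longrightarrow\; \ln\!\left(\frac{f(n)^2}{c(n)}\right) \geq \frac{\ln(8n)}{t/2} = \Omega(1),
\end{align*}
where the last step uses $t = O(\log n)$. Hence $f(n)^2/c(n)$ must indeed be bounded away from one, which is why the sketched proof does not cover the stated generalization.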