This work is a good opportunity to mention my work with Shafi Goldwasser and Asaf Nussboim, On the Implementation of Huge Random Objects; see Section 2.5 of that work, which explicitly discusses the special case of objects of feasible size. While (in Sec. 2.5) we considered efficient implementations of such objects (i.e., of distributions on strings of feasible size), the current work refers to super-efficient implementations (i.e., implementations by polynomial-size constant-depth circuits). Its focus is actually on lower bounds.
From that perspective, I find Thm 1.6 most interesting. It asserts that certain simple distributions cannot be approximated (well) by small decision forests. In contrast, Thm 1.3 refers to simultaneously achieving small locality and relatively low randomness complexity, and it is used for deriving lower bounds on data structures.
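To illustrate what small locality means in this context, recall the folklore example (which predates and motivates this line of work, and is not one of the paper's theorems): the distribution $(x,\bigoplus_{i=1}^{n}x_i)$, for uniform $x\in\{0,1\}^n$, can be sampled by a 2-local map. Given uniform bits $y_0,\dots,y_n$, output
$$(y_0\oplus y_1,\;y_1\oplus y_2,\;\dots,\;y_{n-1}\oplus y_n,\;y_0\oplus y_n).$$
The first $n$ output bits are uniformly distributed (the map to them is linear and onto), their parity telescopes to $y_0\oplus y_n$, and each output bit depends on only two input bits, although computing $\bigoplus_i x_i$ from a given $x$ cannot be done by small constant-depth circuits.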
The paper also presents an interesting pseudorandom generator (PRG), which is obtained by combining Thms 8.1 and 8.3. Specifically, for any constant $d$ there exists a constant $c$ such that $m$-bit long sequences generated from a seed of length $k=(\log m)^{c\log\log m}$ (i.e., $k=\exp(c(\log\log m)^2)$) by uniform $m^c$-size circuits of depth 2 fool any depth $d$ circuit of size $m$. That is, the PRG takes a seed of length $k$ and stretches it to a sequence of length $\ell(k)=\exp(2^{\sqrt{(\log k)/c}})$ such that it fools any depth $d$ circuit of size $\ell(k)$ (w.r.t. gap $1/\ell(k)$), while each bit of the output is produced by an explicit DNF of size $\ell(k)^c$. Note that Nisan's generator uses a seed of polylogarithmic length (i.e., $k={\rm poly}(\log m)$), but that generator itself is computed by a circuit of depth $O(d)$ (rather than depth 2).
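To verify that these two formulations of the parameters match, assuming that $\exp$ and $\log$ are taken to the same base (say 2), note that
$$\log k=c(\log\log m)^2\iff\log\log m=\sqrt{(\log k)/c}\iff m=\exp\!\big(2^{\sqrt{(\log k)/c}}\big)=\ell(k);$$
that is, the output length $\ell(k)$ is exactly the $m$ for which a seed of length $k$ suffices.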
Complexity theory typically studies the complexity of computing a function $h : \{0,1\}^m \to \{0,1\}^n$ of a given input $x$. We advocate the study of the complexity of generating the distribution $h(x)$ for uniform $x$, given random bits. Our main results are: