With the state of the art regarding pseudorandom generators (PRGs) for space-bounded machines stuck for a couple of decades, it makes sense to try to make partial progress by improving the dependence on one parameter (but not on others). A few such works have appeared recently, and improving the dependence on the error parameter is a popular direction. Such an improvement is obtained in the current work.

Unfortunately, as admitted by the authors, the construction that supports the claimed result is very complicated. I tried reading Section 2, which is supposed to provide a proof overview, but got stuck when the real action starts (Sec 2.3). I assume that more robust readers would reach an understanding of why the new notion of a pseudorandom pseudo-distribution (PRPD) helps, but even without seeing the actual benefit (to this work) it is evident that this notion has a potential benefit.

First, a *pseudo-distribution* is a sequence
of pairs in $\{0,1\}^* \times \mathbb{R}$ such
that the pair $(x,p)$ represents that $x$ occurs
with pseudo-probability (or weight) $p$,
which may be negative or larger than one.
(Furthermore, the same string $x$ may occur
several times in the sequence of pairs.)
Indeed, in a distribution, the probabilities
are non-negative and sum up to one,
but here nothing of the sort is required.
Now, a pseudo-distribution $X=\{(x_i,p_i):i\in[t]\}$
is called pseudorandom wrt an algorithm $A$ (or a class of algorithms)
if the probability that $A$ accepts an element
drawn from $X$ approximates the probability that $A$
accepts a uniformly distributed string,
where the probability that $A$ accepts $X$
is the sum of the $p_i$'s that are paired
with $x_i$'s that are accepted by $A$
(i.e., $\sum_{i:A(x_i)=1}p_i$).
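To make the definition concrete, here is a minimal Python sketch (the names `accept_prob`, `A`, and `X` are mine, not from the paper): it computes the probability that an algorithm $A$ accepts a pseudo-distribution, i.e., $\sum_{i:A(x_i)=1}p_i$, for a toy $A$ and a toy pseudo-distribution with a negative weight and a repeated string.

```python
def accept_prob(pseudo_dist, A):
    """Sum the weights p_i of all pairs (x_i, p_i) whose string A accepts."""
    return sum(p for x, p in pseudo_dist if A(x) == 1)

# A toy "algorithm" accepting strings that start with '1'.
A = lambda x: 1 if x.startswith('1') else 0

# A pseudo-distribution over {0,1}^2: note the negative weight
# and the string "10" occurring twice.
X = [("10", 0.75), ("01", 0.5), ("10", -0.25), ("11", 0.0)]

print(accept_prob(X, A))  # 0.75 - 0.25 + 0.0 = 0.5
```

Here the result (0.5) matches the acceptance probability of $A$ on a uniformly distributed 2-bit string, so this toy $X$ is pseudorandom wrt this single $A$.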

Note that if the class of algorithms is rich enough, then this induces constraints on the pseudo-distributions that may be used (e.g., the sum of all $p_i$ must approximate 1). In the current work, where non-uniform algorithms are considered, it follows that the sum of $p_i$'s that correspond to each individual $x_i$ must be close to zero (since it has to approximate $2^{-n}$). More importantly, a construction of a pseudo-distribution that is pseudorandom wrt a class of algorithms $C$, yields a hitting set for that class (which consists of all $x_i$'s in the support), and also yields a derandomization for the class (in which the deterministic algorithm invokes the original algorithm on all $x_i$'s and takes a weighted average of the responses (weighted by the $p_i$'s)).
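The two consequences noted above can be sketched in a few lines of Python (again with hypothetical names; this is only an illustration of the definitions, not the paper's construction): the hitting set is just the support, and the derandomization takes the $p_i$-weighted average of the algorithm's answers and rounds at $1/2$.

```python
def hitting_set(pseudo_dist):
    """The support of the pseudo-distribution: all strings x_i occurring in it."""
    return {x for x, _ in pseudo_dist}

def derandomize(pseudo_dist, A):
    """Deterministically decide A by the p_i-weighted average of its answers
    over the support, rounding the estimate at 1/2."""
    estimate = sum(p * A(x) for x, p in pseudo_dist)
    return 1 if estimate >= 0.5 else 0

# Toy usage: a pseudo-distribution with a negative weight.
X = [("00", 0.3), ("11", 0.8), ("01", -0.1)]
print(hitting_set(X))                    # {'00', '11', '01'}
print(derandomize(X, lambda x: 1))       # weighted average 1.0 -> 1
```

Note that the derandomization invokes $A$ once per pair, so the cost is governed by the length $t$ of the sequence (i.e., by the seed length of the pseudo-distribution).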

Nisan (Combinatorica'92) constructed a pseudorandom generator for length $n$, width $n$ read-once branching programs (ROBPs) with error $\varepsilon$ and seed length $O(\log^2{n} + \log{n} \cdot \log(1/\varepsilon))$. A major goal in complexity theory is to reduce the seed length, hopefully, to the optimal $O(\log{n}+\log(1/\varepsilon))$, or to construct improved hitting sets, as these would yield stronger derandomization of $\mathbf{BPL}$ and $\mathbf{RL}$, respectively. In contrast to a successful line of work in restricted settings, no progress has been made for general, unrestricted, ROBPs. Indeed, Nisan's construction is the best pseudorandom generator and, prior to this work, also the best hitting set for unrestricted ROBPs.

In this work, we make the first improvement for the general case by constructing a hitting set with seed length $\widetilde{O}(\log^2{n}+\log(1/\varepsilon))$. That is, we decouple $\varepsilon$ and $n$, and obtain near-optimal dependence on the former. The regime of parameters in which our construction strictly improves upon prior works, namely, $\log(1/\varepsilon) \gg \log{n}$, is well-motivated by the work of Saks and Zhou (JCSS'99) who use pseudorandom generators with error $\varepsilon = 2^{-(\log{n})^{2}}$ in their proof for $\mathbf{BPL} \subseteq \mathbf{L}^{3/2}$. We further suggest a research program towards proving that $\mathbf{BPL} \subseteq \mathbf{L}^{4/3}$ in which our result achieves one step.

As our main technical tool, we introduce and construct a new type of primitive we call pseudorandom pseudo-distributions. Informally, this is a generalization of pseudorandom generators in which one may assign negative and unbounded weights to paths as opposed to working with probability distributions. We show that such a primitive yields hitting sets and, for derandomization purposes, can be used to derandomize two-sided error algorithms.

See ECCC TR17-161.
