I am happy to recommend and commend any survey, let alone one on research directions that I am closely familiar with. This survey is about quantified derandomization, which is concerned with the following type of problem: For a class of circuits $\calC$ and a class of bounds $\calB$, provide an efficient deterministic algorithm that, given a circuit $C\in\calC$ that evaluates to 1 on all but at most $B(n)$ of its $n$-bit inputs, where $B\in\calB$, finds an input on which $C$ evaluates to 1.
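For concreteness, one way to state this formally is the following (a mere restatement of the description above, with efficiency read, say, as time polynomial in the size of $C$):
\[
\text{given } C\in\calC \text{ over } \{0,1\}^{n} \text{ with } \bigl|\{x\in\{0,1\}^{n} : C(x)=0\}\bigr| \le B(n) \text{ for some } B\in\calB, \text{ find } x \text{ such that } C(x)=1.
\]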
My own view of this type of problem is narrower than Roei's. While he advocates these problems first and foremost as interesting per se, I advocate them mainly for their relation to the standard derandomization challenge, which corresponds to $B(n)=2^{n}/3$. Like Roei, for a class of circuits $\calC$, I distinguish between feasible bounds and bootstrapping bounds: the former are bounds $B$ for which the quantified derandomization problem can be solved efficiently, and the latter are bounds $B$ such that solving the standard derandomization problem reduces to solving the quantified derandomization problem (with bound $B$).
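To recall why $B(n)=2^{n}/3$ captures the standard challenge, here is the usual correspondence, sketched under the simplifying assumption of a one-sided-error randomized algorithm $A$ that uses $n$ random coins and errs with probability at most $1/3$: for an input $x$ that $A$ should accept,
\[
\Pr_{r\in\{0,1\}^{n}}[A(x,r)=1] \ge 2/3
\;\Longleftrightarrow\;
\bigl|\{r\in\{0,1\}^{n} : A(x,r)=0\}\bigr| \le 2^{n}/3 ,
\]
so finding an accepting coin sequence for the residual circuit $C_x(r)=A(x,r)$ is exactly the quantified derandomization problem with bound $B(n)=2^{n}/3$.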
The results known so far (and surveyed by Roei) identify classes of circuits for which the feasible bounds are only a bit smaller than the bootstrapping bounds. Hence, the previously known qualitative difficulty of standard derandomization for these classes is reflected in quantitative terms: we can solve the problem for the former bounds, and the challenge is to push things a bit higher so as to obtain standard derandomization. Indeed, this narrow-minded and optimistic attitude was on my mind when collaborating with Avi on our joint work on the subject.
Note, however, that one can also fix the bound (or class of bounds) and distinguish between feasible circuit classes and bootstrapping circuit classes for this fixed bound. Some of the known results refer to this ``dual'' formulation, and my narrow-minded interpretation applies here too.
The survey's original abstract follows. The focus of this survey is the question of \emph{quantified derandomization}, which was introduced by Goldreich and Wigderson (2014): Does derandomization of probabilistic algorithms become easier if we only want to derandomize algorithms that err with extremely small probability? How small does this probability need to be in order for the problem's complexity to be affected?
This question opens the door to studying natural relaxed versions of the derandomization problem, and it allows us both to construct algorithms that are more efficient than in the general case and to make gradual progress towards solving the general case. In the survey I describe the body of knowledge accumulated since the question's introduction, focusing on the main directions and results.
Available from ECCC TR21-120