Hardness vs Randomness, Revised: Uniform, Non-Black-Box, and Instance-Wise

by Lijie Chen and Roei Tell

Oded's comments

This work presents a new hardness versus randomness connection. Unlike prior connections, the current one is fine-grained, instance-based, and inherently non-black-box.

Specifically, the hardness assumption refers to a function f that can be computed in polynomial time but not in lower (randomized) polynomial time, and the derandomization of a PPT algorithm A on each input x fails only if a related algorithm computes f on x, which means that the derandomization is instance-based. The derandomization relies on a (targeted) hitting-set generator that is constructed based on the description of a (medium-depth) circuit computing f, which means that the derandomization is highly non-black-box (beyond its being `targeted' in the sense that the set generated towards derandomizing A on input x depends on x).
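
To make the instance-wise guarantee concrete, it may be written schematically as follows (suppressing the exact time bounds and error parameters, which should be taken from the paper): for every PPT algorithm A there exist a deterministic polynomial-time algorithm D and a probabilistic algorithm R, running within the time bound against which f is assumed to be hard, such that for every input x

\[ D \mbox{ fails to derandomize } A \mbox{ on } x \;\Longrightarrow\; \Pr[R(x)=f(x)]\geq 2/3. \]

In particular, if every such R errs on f on all but finitely many inputs, then D correctly derandomizes A on all but finitely many inputs.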

One incarnation of the foregoing connection uses functions f that are computable in polynomial time such that each randomized algorithm that runs in a significantly lower polynomial time p errs on all but finitely many inputs. Such functions cannot be Boolean (or have a constant-size range), but, assuming BPP=P, their existence can be proved by diagonalization (i.e., associating a bit in the output with each possible algorithm), where BPP=P is used to close the gap between randomized algorithms computing f and deterministic ones. The current work uses a strengthening of the foregoing condition that requires that the function f be computed by a uniform family of circuits of medium depth (i.e., polynomial depth that is significantly smaller than the hardness bound p).
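
As a rough illustration of this diagonalization (glossing over the promise-problem technicalities involved in invoking BPP=P), fix an enumeration $M_1,M_2,\dots$ of probabilistic machines and define the $i$-th output bit of f so as to foil $M_i$; that is,

\[ f(x)_i \;=\; 1-\widehat{M_i}(x), \quad \mbox{for } i=1,\dots,m(|x|), \]

where $\widehat{M_i}(x)$ is a bit computed deterministically in polynomial time (using BPP=P) that agrees with the $i$-th output bit of $M_i(x)$ whenever that bit is determined with probability at least $2/3$ within $p(|x|)$ steps, and $m$ is a slowly growing bound on the number of machines being foiled. Each probabilistic $p$-time algorithm appears as some $M_i$ in the enumeration, and it fails to compute f on every x with $m(|x|)\geq i$; that is, on all but finitely many inputs.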

One unprecedented aspect of this work is the construction of a hitting-set generator that relies on an interactive proof system (i.e., the doubly-efficient one of Goldwasser, Kalai, and Rothblum; see survey). Specifically, the strings in the generated set are defined based on the (honest) prover strategy in that proof system, and one uses the fact that the number of rounds in that system (when applied to proving the value of f at x) is linearly related to the depth of the circuit computing f. Actually, the hitting set is a union of sets, each corresponding to the prover's strategy in some round, such that at least one of these sets is pseudorandom (equivalently, if none of these sets is pseudorandom, then f is easy to compute).
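
Schematically, the set generated for input x has the form

\[ H(x) \;=\; \bigcup_{i=1}^{r} H_i(x), \]

where r is (roughly) proportional to the depth of the circuit computing f, and $H_i(x)$ is derived from the honest prover's message in the $i$-th round of the proof system, when the latter is applied to the claim that this circuit evaluates to f(x) at x. The last assertion then reads: if, for every i, the test derived from A and x accepts at least half of all strings but rejects every string in $H_i(x)$, then this test can be used, round by round, to recover the prover's messages and thus to compute f(x) within time that contradicts the hardness of f (the exact densities and time bounds are hedged here; see the paper).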

The original abstract (slightly revised)

We propose a new approach to the hardness-to-randomness framework and to the promise-BPP=promise-P conjecture. Classical results rely on non-uniform hardness assumptions to construct derandomization algorithms that work in the worst case, or rely on uniform hardness assumptions to construct derandomization algorithms that work only in the average case. In both types of results, the derandomization algorithm is black-box and uses the standard PRG approach. In this work we present results that closely relate new and natural uniform hardness assumptions to worst-case derandomization of promise-BPP, where the algorithms underlying the latter derandomization are non-black-box.

In our main result, we show that promise-BPP=promise-P if the following holds: There exists a multi-output function computable by logspace-uniform circuits of polynomial size and depth $n^2$ that cannot be computed by uniform probabilistic algorithms in time $n^c$, for some universal constant $c\geq1$, on almost all inputs. The required failure on almost all inputs is stronger than the standard requirement of failing on one input of each length; however, the same assumption without the depth restriction on $f$ is necessary for the conclusion. This suggests a potential equivalence between worst-case derandomization of promise-BPP of any form (i.e., not necessarily by a black-box algorithm) and the existence of efficiently computable functions that are hard for probabilistic algorithms on almost all inputs.
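
Hence, denoting the foregoing hypothesis by $H_{\rm depth}$ and its relaxation that omits the depth restriction by $H$, the situation may be depicted as

\[ H_{\rm depth} \;\Longrightarrow\; \mbox{promise-BPP}=\mbox{promise-P} \;\Longrightarrow\; H, \]

where the first implication is the main result, the second implication is the aforementioned necessity, and closing the gap between $H$ and $H_{\rm depth}$ would yield the suggested equivalence.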

In our second result, we introduce a new and uniform hardness-to-randomness tradeoff for the setting of superfast average-case derandomization; prior to this work, superfast average-case derandomization was known only under non-uniform hardness assumptions. In an extreme instantiation of our new tradeoff, under appealing uniform hardness assumptions, we show that for every polynomial $T(n)$ and positive constant $\epsilon$ it holds that $BPTIME[T]\subseteq \mbox{heur-}DTIME[T\cdot n^{\epsilon}]$, where the ``heur'' prefix means that no polynomial-time algorithm can find, with non-negligible probability, an input on which the deterministic simulation errs.
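
Spelling out the ``heur'' prefix (in a schematic form; the exact formulation appears in the paper): a deterministic $T(n)\cdot n^{\epsilon}$-time algorithm D witnesses $L\in \mbox{heur-}DTIME[T\cdot n^{\epsilon}]$ if, for every probabilistic polynomial-time algorithm F that outputs candidate inputs (say, of length n), it holds that

\[ \Pr_{x\leftarrow F(1^n)}\big[D(x)\neq L(x)\big] \;\leq\; \mu(n) \]

for all sufficiently large n, where $\mu$ is a negligible function.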

Technically, our approach is to design targeted PRGs and HSGs, as introduced by Goldreich (LNCS 6650, 2011). The targeted PRGs/HSGs ``produce randomness from the input'', as suggested by Goldreich and Wigderson (RANDOM 2002), and their analysis relies on non-black-box versions of the reconstruction procedure of Impagliazzo and Wigderson (FOCS 1998). Our main reconstruction procedure crucially relies on the ideas underlying the proof system of Goldwasser, Kalai, and Rothblum (JACM 2015).

Available from ECCC TR21-080

