Natural Proofs Versus Derandomization

by Ryan Williams

Oded's comments

Here is another intriguing result by Ryan Williams, which I interpret as saying that we should be more careful about the interpretation of various results. In this case, the Natural Proofs framework is at stake, and it is vividly demonstrated that its largeness condition is quite problematic. (Indeed, we already got a fair warning in Chow's work on "almost natural proofs" [FOCS'08].) The current work builds on Ryan's prior works [see my choice Nr 57], which may be interpreted as saying that the fact that derandomization yields circuit lower bounds can offer a proof technique rather than a barrier.

But enough of my speculations: Here is Ryan's abstract (which highlights other aspects of his results).

The original abstract

We study connections between Natural Proofs, derandomization, and the problem of proving weak circuit lower bounds such as "NEXP is not contained in TC^0", which are still wide open. Natural Proofs have three properties: they are constructive (an efficient algorithm ALG is embedded in them), have largeness (ALG accepts a large fraction of strings), and are useful (ALG rejects all strings which are truth tables of small circuits). Strong circuit lower bounds that are "naturalizing" would contradict present cryptographic understanding, yet the vast majority of known circuit lower bound proofs are naturalizing. So it is imperative to understand how to pursue un-Natural Proofs. Some heuristic arguments say constructivity should be circumventable: largeness is inherent in many proof techniques, and it is probably our presently weak techniques that yield constructivity. We prove:

- Constructivity is unavoidable, even for NEXP lower bounds: for all "typical" non-uniform circuit classes C, NEXP is not contained in C if and only if there is a polynomial-time algorithm distinguishing some function from all functions computable by C-circuits. Hence proving that NEXP is not contained in C is equivalent to exhibiting a constructive property useful against C.

- There are no P-natural properties useful against C if and only if randomized exponential time can be "derandomized" using truth tables of circuits from C as random seeds. Therefore the task of proving that there are no P-natural properties is inherently a derandomization problem, weaker than (but implied by) the existence of strong pseudorandom functions.

These characterizations are applied to yield several new results. The two main applications are that NEXP \cap coNEXP does not have n^{log n} size ACC circuits, and a mild derandomization result for RP.
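To make the abstract's three conditions concrete, here is a minimal toy sketch in Python (my own illustration, not taken from the paper; the names are hypothetical). The property accepts a truth table exactly when no circuit with at most s AND/OR/NOT gates computes it, so it is useful against size-s circuits by construction; for a fixed constant s it is constructive, running in time polynomial in the truth-table length 2^n; and it is large, since so few functions are computable by such small circuits.

from itertools import product

def small_circuit_truth_tables(n, s):
    # All length-2^n truth tables (as bit tuples) computable by circuits
    # with at most s AND/OR/NOT gates over inputs x_0..x_{n-1} and constants.
    # Gate counts add up when subcircuits are combined, ignoring sharing,
    # so for larger s this slightly under-approximates the set (it is exact
    # for the tiny s used below).
    N = 2 ** n
    cost = {tuple((a >> i) & 1 for a in range(N)): 0 for i in range(n)}
    cost[tuple([0] * N)] = 0   # constant 0
    cost[tuple([1] * N)] = 0   # constant 1
    changed = True
    while changed:             # grow the set until a fixed point is reached
        changed = False
        items = list(cost.items())
        for t, c in items:     # one NOT gate on top
            neg = tuple(1 - b for b in t)
            if c + 1 <= s and cost.get(neg, s + 1) > c + 1:
                cost[neg] = c + 1
                changed = True
        for (t1, c1), (t2, c2) in product(items, repeat=2):
            c = c1 + c2 + 1    # one AND or OR gate on top
            if c > s:
                continue
            for t in (tuple(a & b for a, b in zip(t1, t2)),
                      tuple(a | b for a, b in zip(t1, t2))):
                if cost.get(t, s + 1) > c:
                    cost[t] = c
                    changed = True
    return set(cost)

def natural_property(tt, n, s):
    # Accept tt iff it is NOT the truth table of any circuit with <= s gates.
    return tuple(tt) not in small_circuit_truth_tables(n, s)

n, s = 3, 2
small = small_circuit_truth_tables(n, s)
total = 2 ** (2 ** n)          # number of Boolean functions on n bits
print(len(small), "of", total, "functions have circuits with at most", s, "gates")
parity = tuple(bin(a).count("1") % 2 for a in range(2 ** n))
print("accepts 3-bit parity:", natural_property(parity, n, s))

Of course, such a brute-force tester is feasible only at toy parameters; the characterizations above concern whether efficiently computable properties of this kind can exist at cryptographically meaningful parameters.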

