An observation regarding the construction of practical probabilistic proof systems

Taken from the work "On the Concrete-Efficiency Threshold of Probabilistically-Checkable Proofs" by Eli Ben-Sasson, Alessandro Chiesa, Daniel Genkin, and Eran Tromer (ECCC TR12-045).

Oded's comments

I am not sure whether this observation has been made before, but I find it of great potential value and thus worthy of highlighting.

The issue is that theoretical constructions of probabilistic proof systems (e.g., zero-knowledge (ZK) proofs or PCPs) typically start with a problem described via a TM computation, whereas in practice the starting point is a RAM computation.

Of course, one may emulate any RAM on a TM and proceed from there, but this incurs a quadratic blow-up in computation time. The observation is that one can emulate a RAM program on an NTM (non-deterministic TM) with little overhead, and then check such a computation via CSAT (Circuit SAT), again with small overhead. Indeed, the crucial point is that since we are anyhow using proof systems for problems in NP, the emulation on an NTM rather than on a DTM causes no significant cost when we get to verify the correctness of the emulation.
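For concreteness, here is a back-of-the-envelope comparison of the two paths; the O(T log T) bound for the RAM-to-NTM step is the one from Robson's reduction mentioned in Ryan Williams's comment below, the last step uses the standard quasi-linear reduction from nondeterministic time to Circuit SAT, and the exact log factors should be taken as indicative only:

    % Naive path: first remove the random access, then arithmetize.
    \mathrm{RAM}\ \text{time}\ T \;\longrightarrow\; \mathrm{DTM}\ \text{time}\ O(T^2) \;\longrightarrow\; \text{CSAT of size}\ \widetilde{O}(T^2)

    % Path suggested by the observation: keep the nondeterminism,
    % which the proof system tolerates anyway since it handles NP-statements.
    \mathrm{RAM}\ \text{time}\ T \;\longrightarrow\; \mathrm{NTM}\ \text{time}\ O(T\log T) \;\longrightarrow\; \text{CSAT of size}\ \widetilde{O}(T)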

The authors use the foregoing observation for the construction of PCPs, but the application to ZK seems to me far more compelling...

Comment by Ryan Williams (May 18, 2012): The efficient reduction of RAM computation to SAT is explicit in [J. M. Robson: An O(T log T) Reduction from RAM Computations to Satisfiability. TCS 82(1), pages 141-149, 1991], but this reduction only shaves a log factor off the composition of the RAM-to-NTM reduction and the NTM-to-SAT reduction (both cited in this paper). Such reductions have recently been used in several papers on lower bounds. In the thread of work on time-space tradeoffs for SAT, this reduction is what lets us argue for RAM time-space lower bounds; that is, a quasi-linear-time reduction is crucial here. It is also used in my recent work showing that faster SAT algorithms imply circuit lower bounds, where it is used to move from algorithms implemented on a RAM to SAT instances of comparable size. [End of Ryan's comment.]

I am not surprised that this observation was made and used in various complexity papers (but indeed I forgot it was actually used in the recent papers you mentioned). What surprised me was that this observation was not made before in the context of trying to get practical probabilistic proof systems (e.g., ZK ones), but I may be wrong here too. Let me repeat my request: If you know of such a prior use, do let me know.

Let me stress that my point is not the mere existence of an efficient emulation of RAM on NTM (nor the efficient emulation of NTM by CSAT, which is better known); my point is that if you want to construct a practical probabilistic proof system (e.g., a ZK proof) for some practical RAM computation, then you should consider taking the RAM-to-NTM-to-CSAT path.
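To illustrate what the nondeterministic emulation buys, here is a minimal Python sketch, not the construction of the paper, with purely illustrative names such as Step and verify_trace: the machine guesses the whole T-step execution trace and then checks it with two quasi-linear passes, one in time order and one over the same trace re-sorted by memory address, instead of deterministically shuffling a tape at every step.

    # A minimal, self-contained sketch (not the construction of the paper; all names
    # are illustrative) of the standard idea behind the cheap nondeterministic
    # emulation of a RAM: guess the whole execution trace, then check it with two
    # quasi-linear passes.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Step:
        time: int       # position of this step in the execution
        addr: int       # memory address touched at this step
        value: int      # value read from, or written to, that address
        is_write: bool  # True for a write, False for a read

    def check_transitions(trace: List[Step]) -> bool:
        """Time-ordered pass: each step must follow from its predecessor according
        to the program.  Here we only sanity-check the time stamps; a real check
        would re-execute one instruction per consecutive pair of CPU states."""
        return all(step.time == i for i, step in enumerate(trace))

    def check_memory_consistency(trace: List[Step]) -> bool:
        """Address-ordered pass: every read must return the value of the most
        recent write to the same address (or 0 if the address was never written)."""
        last_addr, last_val = None, 0
        for step in sorted(trace, key=lambda s: (s.addr, s.time)):
            if step.addr != last_addr:
                last_addr, last_val = step.addr, 0  # fresh address, memory assumed to start at 0
            if step.is_write:
                last_val = step.value
            elif step.value != last_val:
                return False  # a read that contradicts the last write to this address
        return True

    def verify_trace(trace: List[Step]) -> bool:
        # Both passes cost O(T log T) (dominated by sorting), which is what replaces
        # the quadratic cost of emulating random access on a deterministic tape.
        return check_transitions(trace) and check_memory_consistency(trace)

    # Tiny example: write 7 to address 3, then read it back.
    assert verify_trace([Step(0, 3, 7, True), Step(1, 3, 7, False)])

When compiled into a circuit, the sorting step is typically realized by a routing/sorting network of quasi-linear size, which is where the small overhead of the RAM-to-NTM-to-CSAT path comes from.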

Caveat: I did not read the paper nor do I recommend doing so; I was merely referred to this observation by Or Meir.

Request: If you know of a prior use of the RAM-to-NTM-to-CSAT path towards the construction of practical probabilistic proof systems (e.g., ZK ones), do let me know.

