Lecture Notes on Probabilistic Proofs and Pseudorandomness

The interplay between randomness and computation is one of the most fascinating scientific phenomena uncovered in the last couple of decades. This interplay is at the heart of modern cryptography and plays a fundamental role in complexity theory at large. Specifically, it is pivotal to several intriguing notions of probabilistic proof systems and is the focal point of the computational approach to randomness. The four sets of lecture notes available below focus on these two subjects.

Further comments on these sets of lecture notes follow.

Pseudorandomness

A fresh view at the question of randomness was taken in the theory of computing: It has been postulated that a distribution is pseudorandom if it cannot be told apart from the uniform distribution by any efficient procedure. This paradigm, originally associating efficient procedures with polynomial-time algorithms, has been applied also with respect to a variety of other classes of distinguishing procedures. We focus on pseudorandom generators; that is, deterministic programs that stretch short (random) seeds into much longer pseudorandom sequences.
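To make the notion of stretching concrete, the following toy sketch stretches a short seed into a long output by hashing the seed together with a counter. This is an illustration of the interface of a pseudorandom generator only, not a construction with a proven pseudorandomness guarantee; the function name and parameters are ours.

```python
import hashlib

def toy_prg(seed: bytes, out_len: int) -> bytes:
    """Deterministically stretch a short seed into out_len output bytes by
    hashing the seed together with a counter. A toy sketch of the PRG
    interface only; NOT a generator with a proven pseudorandomness
    guarantee."""
    out = b""
    counter = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:out_len]

# A 16-byte seed is stretched into a 1000-byte sequence; the same seed
# always yields the same sequence (the generator is deterministic).
stream = toy_prg(b"0123456789abcdef", 1000)
assert len(stream) == 1000
assert stream == toy_prg(b"0123456789abcdef", 1000)
```

The point is the type of the object: a deterministic map from short seeds to much longer strings, with all the "randomness" residing in the choice of seed.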

Two of the lecture series focus on pseudorandomness and, specifically, pseudorandom generators. The first lecture series focuses on the case where the pseudorandom generator runs in polynomial-time and withstands any polynomial-time distinguisher. In particular, the distinguisher may be more complex (i.e., have a higher polynomial running time) than the generator. This framework is natural in the context of designing general-purpose pseudorandom generators that can be used in any efficient (i.e., polynomial-time) application. Furthermore, this framework is almost mandatory in cryptographic applications, where the adversary is typically willing to invest more effort than the legitimate users.

The second lecture series regarding pseudorandomness focuses on the case where the pseudorandom generator runs in exponential time (with respect to the seed length) and withstands any distinguisher whose running time is bounded by a specific polynomial (in the length of the generator's output). In particular, the generator may be more complex than the distinguisher. This framework is natural in the context of de-randomization (i.e., converting randomized algorithms to deterministic ones). In this context, one may be willing to construct a special-purpose generator for each randomized algorithm (or each time bound), and scan through all possible seeds to the generator (thus not being intimidated by generators with exponential running time).
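The seed-scanning idea can be sketched as follows: run the randomized algorithm on the generator's output for every possible seed and take a majority vote. The toy instance below uses a randomized polynomial identity test, with the identity map standing in for a real generator; all names and parameters are illustrative assumptions.

```python
def derandomize(rand_alg, gen, seed_bits, x):
    """Run the randomized algorithm on the generator's output for every
    possible seed and take a majority vote; the scan costs 2**seed_bits
    runs, which is affordable when the seed is short (e.g., logarithmic
    in the input length)."""
    total = 2 ** seed_bits
    votes = sum(rand_alg(x, gen(s)) for s in range(total))
    return 2 * votes > total

def nonzero_poly_test(coeffs, r):
    # Randomized identity test: a nonzero polynomial of degree d has at
    # most d roots, so a random evaluation point exposes nonzeroness
    # with high probability.
    return sum(c * r ** i for i, c in enumerate(coeffs)) != 0

# Toy setting: the "generator" is simply the identity map on 6-bit seeds.
identity_gen = lambda s: s
# The zero polynomial is (correctly) declared zero by every seed.
assert derandomize(nonzero_poly_test, identity_gen, 6, [0, 0, 0]) is False
# 2 - 3r + r**2 has only the roots 1 and 2, so 62 of the 64 seeds vote
# "nonzero" and the majority is correct.
assert derandomize(nonzero_poly_test, identity_gen, 6, [2, -3, 1]) is True
```

A genuine de-randomization would replace the identity map by a generator whose seed is much shorter than the number of random bits the algorithm consumes; the exhaustive scan over seeds is exactly why exponential-time generators are acceptable here.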

For other notions of pseudorandom generators, the interested reader is referred to Chapter 3 of Modern Cryptography, Probabilistic Proofs and Pseudorandomness.

Probabilistic Proof Systems

Various types of probabilistic proof systems have played a central role in the development of computer science in the last decade. These proof systems share a common (untraditional) feature: they carry a probability of error; yet, this probability is explicitly bounded and can be reduced by successive applications of the proof system. The gain in allowing this untraditional relaxation is substantial, as demonstrated by three well known results regarding interactive proofs, zero-knowledge proofs, and probabilistically checkable proofs: In each of these cases, allowing a bounded probability of error makes the system much more powerful and useful than the traditional (error-less) counterparts.
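The error-reduction claim can be seen concretely: repeating a proof system with soundness error 1/2 independently k times, and accepting only if all executions accept, preserves perfect completeness while reducing the soundness error to (1/2)^k. A small simulation (function names are ours):

```python
import random

def repeat_and_accept(verify_once, k):
    """Sequentially repeat the proof system k times and accept only if
    every execution accepts: perfect completeness is preserved, while the
    soundness error drops from 1/2 to (1/2)**k."""
    return all(verify_once() for _ in range(k))

random.seed(0)
# An honest prover convinces every execution, hence also the repetition.
assert repeat_and_accept(lambda: True, 10)

# A cheating prover that fools one execution with probability 1/2 fools
# the 10-fold repetition only with probability about 2**-10.
fooled = sum(repeat_and_accept(lambda: random.random() < 0.5, 10)
             for _ in range(10000))
assert fooled < 100  # expected roughly 10 of 10000
```

Parallel repetition achieves a similar effect in fewer rounds, though its analysis is more delicate (notably for multi-prover systems).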

Loosely speaking, an interactive proof system (see first lecture in the third lecture series) is a game between a computationally bounded verifier and a computationally unbounded prover whose goal is to convince the verifier of the validity of some assertion. Specifically, the verifier is probabilistic and its time-complexity is polynomial in the length of the assertion. It is required that if the assertion holds then the verifier always accepts (when interacting with an appropriate prover strategy). On the other hand, if the assertion is false then the verifier must reject with probability at least $\frac12$, no matter what strategy is being employed by the prover. Thus, a "proof" in this context is not a fixed and static object, but rather a randomized (dynamic) process in which the verifier interacts with the prover. Zero-knowledge proofs (see second lecture in the third lecture series) are interactive proofs that yield nothing (to the verifier) beyond the fact that the assertion is indeed valid. That is, whatever the verifier can efficiently compute after interacting with a zero-knowledge prover can be efficiently computed from the assertion itself (without interacting with anyone). Thus, zero-knowledge proofs exhibit an extreme contrast between being convinced of the validity of a statement and learning anything in addition (while receiving such a convincing proof).
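A classic concrete example is the interactive proof for Graph Non-Isomorphism: the verifier secretly relabels one of the two graphs at random and challenges the prover to say which graph was relabeled. If the graphs are non-isomorphic, an unbounded prover always answers correctly; if they are isomorphic, the challenge is distributed independently of the verifier's choice, so no prover can do better than guessing (soundness error 1/2 per round). The sketch below simulates one round on tiny graphs; the function names are ours.

```python
import itertools
import random

def relabel(edges, perm):
    # Apply a vertex relabeling to a graph given as a set of edges.
    return frozenset(frozenset({perm[u], perm[v]})
                     for u, v in map(tuple, edges))

def isomorphic(e1, e2, n):
    # Brute-force isomorphism test: the computationally unbounded prover
    # can afford this, while the polynomial-time verifier cannot.
    return any(relabel(e1, p) == e2 for p in itertools.permutations(range(n)))

def gni_round(g1, g2, n, prover):
    """One round of the interactive proof for Graph Non-Isomorphism: the
    verifier secretly picks i in {1, 2}, sends a random relabeling of
    G_i, and accepts iff the prover identifies i."""
    i = random.randrange(1, 3)
    perm = list(range(n))
    random.shuffle(perm)
    challenge = relabel(g1 if i == 1 else g2, perm)
    return prover(challenge) == i

def honest_prover(g1, n):
    # Answers 1 iff the challenge graph is isomorphic to g1 (else 2).
    return lambda h: 1 if isomorphic(g1, h, n) else 2

# A triangle versus a path on 3 vertices -- not isomorphic, so the honest
# prover identifies the verifier's secret choice in every round
# (perfect completeness).
triangle = frozenset([frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})])
path = frozenset([frozenset({0, 1}), frozenset({1, 2})])
random.seed(0)
assert all(gni_round(triangle, path, 3, honest_prover(triangle, 3))
           for _ in range(25))
```

Running `gni_round` with two copies of the same graph instead shows acceptance in only about half the rounds, illustrating the soundness bound.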

In contrast to the above, probabilistically checkable proofs (PCPs) (see last lecture series) may be thought of as an augmentation of written proofs allowing very fast probabilistic verification via direct access to randomly selected portions of the (redundant) proof. Typically, the verifier accesses only a tiny part of the proof, and this part is much smaller than an ordinary proof. Again, it is required that if the assertion holds then the verifier always accepts (when given access to an adequate proof); whereas, if the assertion is false then the verifier must reject with probability at least $\frac12$, no matter which alleged proof is presented to it.
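Actual PCP constructions are far too involved to sketch here, but the flavor of sublinear-work probabilistic verification with bounded error appears already in a much simpler classic: Freivalds' check, which verifies a claimed matrix product using only matrix-vector multiplications, rejecting any false claim with probability at least 1/2 per round.

```python
import random

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically verify the claim A @ B == C by multiplying with a
    random 0/1 vector: each round costs only O(n**2) arithmetic (versus
    roughly n**3 for recomputing the product), and a false claim survives
    a round with probability at most 1/2."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randrange(2) for _ in range(n)]
        br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        abr = [sum(A[i][j] * br[j] for j in range(n)) for i in range(n)]
        cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if abr != cr:
            return False  # a single failed round refutes the claim
    return True

random.seed(0)
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]   # the true product A @ B
bad = [[19, 22], [43, 51]]    # one wrong entry
assert freivalds_check(A, B, good)
assert not freivalds_check(A, B, bad)
```

As with PCPs, completeness is perfect (a true claim always passes) while the soundness error decreases exponentially with the number of rounds.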

Other types of probabilistic proof systems are discussed in Chapter 2 of Modern Cryptography, Probabilistic Proofs and Pseudorandomness.

Background material

We assume the reader is fairly comfortable with basic notions of the theory of computation and elementary probability theory. Familiarity with some randomized algorithms may be useful too.

We refer the reader to Lecture Notes on Complexity Theory.
