In recent years, *randomness* has become a central notion in diverse
fields of computer science. Randomness is used in the design of algorithms in
fields such as computational number theory, computational geometry,
and parallel and distributed computing, and it is crucial for cryptography.
Since in most cases the interest is in the behavior of
efficient algorithms (modeled by polynomial-time computations),
the fundamental notion of *pseudorandomness* arises.
Pseudorandom distributions are probability distributions
on sets of strings that cannot be efficiently distinguished from the uniform
distribution on the same sets. In other words,
any efficient probabilistic algorithm performs essentially as well
when substituting its source of unbiased coins by a
sequence sampled from a pseudorandom distribution.
In this thesis we investigate the existence of pseudorandom
distributions and the computational difficulty of generating them.
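The indistinguishability requirement above can be made concrete with a toy sketch (not taken from the thesis): a distinguisher's *advantage* is the gap between its acceptance probability on samples from a candidate distribution and on truly uniform samples. The biased sampler, the parity test, and all parameters below are hypothetical illustrations.

```python
import secrets

def uniform_bits(n):
    # n truly random bits
    return [secrets.randbits(1) for _ in range(n)]

def biased_bits(n):
    # hypothetical "bad" generator: the last bit is the XOR of all earlier
    # bits, so every output string has even parity
    bits = [secrets.randbits(1) for _ in range(n - 1)]
    bits.append(sum(bits) % 2)
    return bits

def distinguisher(bits):
    # a simple statistical test: accept iff the string has even parity
    return 1 if sum(bits) % 2 == 0 else 0

def advantage(trials=2000, n=16):
    # estimate |Pr[D accepts biased sample] - Pr[D accepts uniform sample]|
    p_biased = sum(distinguisher(biased_bits(n)) for _ in range(trials)) / trials
    p_uniform = sum(distinguisher(uniform_bits(n)) for _ in range(trials)) / trials
    return abs(p_biased - p_uniform)
```

Here the parity test accepts the biased sampler always and uniform strings about half the time, so the advantage is close to 1/2; a distribution is pseudorandom precisely when every efficient test has negligible advantage.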

Pseudorandomness is practically beneficial if pseudorandom sequences
can be generated more easily than "truly random" ones.
This gives rise to the notion of a *pseudorandom generator*: an
efficient deterministic algorithm which *expands* short truly
random strings into longer pseudorandom sequences.
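The interface of such a generator can be sketched as follows. This is a purely illustrative sketch and not a construction from the thesis: SHA-256 in counter mode serves only as a heuristic stand-in, with no proof of pseudorandomness.

```python
import hashlib

def prg(seed: bytes, out_len: int) -> bytes:
    # Deterministically expand a short truly random seed into a longer
    # output by hashing seed || counter.  SHA-256 here is a heuristic
    # placeholder; a provable construction would be built from a one-way
    # function under an intractability assumption.
    out = b""
    counter = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:out_len]
```

Note the two defining properties visible in the interface: the algorithm is deterministic (the same seed always yields the same output) and expanding (the output is longer than the seed).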
The existence of pseudorandom generators has not yet been proven.
Such a proof would resolve the most important open problem in
theoretical computer science:
it would imply the existence of one-way functions and, in particular,
that **P ≠ NP**.
Thus, as long as we cannot settle these questions, the existence
of (polynomial-time) pseudorandom generators can be proven only
under intractability assumptions. In this thesis,
we present a new sufficient condition for the existence of such generators.
We show that pseudorandom generators can be constructed using
*regular one-way functions*. Regular functions are functions
in which every element in the range has the same number of
preimages (the actual condition is more general).
The novelty of our work is both in weakening previous sufficient
conditions for constructing pseudorandom generators, and in
presenting a new technique for iterating a (regular) one-way
function while preserving its one-wayness across the repeated iterations.
In particular, this result allows the construction of pseudorandom
generators based on specific intractability assumptions that were
not known to be sufficient for this task. Examples are the
(conjectured) intractability of general factoring, the (conjectured)
intractability of decoding random linear codes, and the (conjectured)
average-case difficulty of some combinatorial problems (e.g. subset-sum).
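The notion of regularity can be illustrated on a toy (and decidedly non-one-way) example, checked by brute-force preimage counting; this sketch is for illustration only and is not a construction from the thesis. Squaring modulo a prime p is 2-regular on the nonzero residues, since x and p - x share the same square.

```python
from collections import Counter

def preimage_sizes(f, domain):
    # Count how many domain elements map to each range element,
    # and return the set of distinct preimage-set sizes.
    counts = Counter(f(x) for x in domain)
    return set(counts.values())

def is_regular(f, domain):
    # f is regular on this finite domain iff every range element
    # has the same number of preimages
    return len(preimage_sizes(f, domain)) == 1

# Toy example: squaring modulo a small prime.  Each quadratic residue
# has exactly the two square roots x and p - x, so the map is 2-regular
# on {1, ..., p-1}.  (With p prime this is easy to invert, hence not
# one-way; the hard instances involve composite moduli.)
p = 101

def sq(x):
    return (x * x) % p
```

For instance, `is_regular(sq, range(1, p))` holds with every preimage set of size 2, whereas a map like `x % 3` on a domain of size 10 is not regular.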

We also investigate the existence of pseudorandom distributions
when decoupled from the notion of efficient generation.
We prove, without relying on any unproven assumption,
the existence and samplability of
*sparse* pseudorandom distributions,
which are substantially different from the uniform distribution.
We demonstrate the existence of non-polynomial-time
generators of pseudorandomness achieving an optimal expansion rate.
These algorithms also have "optimal" complexity measures
(such as running time and circuit size),
in the sense that improving these measures would lead to
major breakthroughs in Complexity Theory.

We prove the existence of pseudorandom distributions which
are *evasive*, that is,
any efficient algorithm trying to find an element in the support
of the distribution
(i.e., an element assigned non-zero probability)
succeeds with only negligible probability.
This result allowed us to resolve two open problems concerning
the composition of zero-knowledge proof systems.
We prove that the original definition of zero-knowledge
(involving uniform verifiers without auxiliary input)
is not robust under sequential composition,
and that even the strong formulations of zero-knowledge
are not closed under parallel composition.
Other results on the round complexity of zero-knowledge interactive
proofs, with significant implications to the parallelization of
zero-knowledge protocols, are also presented.

Finally, we investigate whether
some classical number generators,
called *congruential number generators*,
are pseudorandom generators.
These algorithms are extensions of the well-known linear
congruential generator, and are of interest because of their
simplicity and efficiency.
We prove that these number generators are not pseudorandom since
they can be efficiently predicted. We present an efficient algorithm which,
on input a prefix of the generated sequence, guesses the next
element in the sequence with a good probability of success.
This extends previous results on the predictability of
congruential generators and, in particular,
it implies an affirmative answer to the open question of
whether multivariate polynomial recurrences are efficiently predictable.
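As a minimal sketch of such prediction, consider only the simplest case: a linear congruential generator with a *known* modulus (the thesis treats far more general settings, including unknown moduli and multivariate polynomial recurrences). Three consecutive outputs usually suffice to recover the coefficients and predict the next element.

```python
# Predict a linear congruential generator x_{i+1} = a*x_i + c (mod m)
# with known modulus m but unknown coefficients a, c.

def predict_next(x1, x2, x3, m):
    # From x2 - x1 = a*(x1 - x0) pattern: a = (x3 - x2) / (x2 - x1) mod m.
    # This simple version requires gcd(x2 - x1, m) == 1; a full predictor
    # must handle the general case.
    d = (x2 - x1) % m
    a = ((x3 - x2) * pow(d, -1, m)) % m   # modular inverse (Python 3.8+)
    c = (x2 - a * x1) % m
    return (a * x3 + c) % m
```

For example, feeding the predictor three consecutive outputs of a textbook LCG (such as a = 1103515245, c = 12345, m = 2^31) recovers the recurrence and yields the fourth output exactly.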

*Submitted to the Senate of the Technion, February 1990.*

Available: the thesis (as a PDF file).
