Oded Goldreich - Academic Profile
My research interests lie within the
Theory of Computation.
Below is a brief introduction to that discipline,
followed by a brief description of
my specific areas of research
and a brief summary of some of my research
and expository contributions.
See also
``press releases'',
periodical statements,
Curriculum Vitae,
(annotated) list of publications,
and list of papers available on-line.
Last updated: Jan 2015.
A Brief Introduction to the Theory of Computation
Computer Science is a cluster of related scientific and engineering
disciplines concerned with the study and application of computations.
These disciplines range from the pure and basic scientific
discipline concerned with the foundations (or theory)
of computer science (or of computation) to engineering
disciplines concerned with specific applications.
The foundations (or theory) of computer science can be
partitioned into two sub-disciplines:
one concerned with the Theory of Computation,
and the other concerned with the Theory of Programming.
The Theory of Computation aims at understanding
the nature of computation, and specifically the
inherent possibilities and limitations of efficient computations.
The Theory of Programming is concerned with the actual task of
implementing computations (i.e., writing computer programs).
The Theory of Computation is a
scientific discipline concerned with the
study of general properties
of computation, be it natural, man-made, or imaginary.
Most importantly, it aims to understand the
nature of efficient computation.
A few examples related to my specific research interests follow.
One key question is: which functions can be efficiently computed?
For example, it is (relatively) easy to multiply integers,
but it seems hard to take the product and factor it into its
prime components.
In general, it seems that there are one-way computations,
or, put differently, one-way functions:
Such functions are easy to evaluate but hard to invert.
Do one-way functions exist?
We don't know, though we believe they do exist,
and we can relate this belief to other important questions.
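To make this asymmetry concrete, here is a minimal Python sketch
(an illustration with toy numbers; genuine candidate one-way functions
operate on numbers that are hundreds of digits long, for which no
efficient factoring method is known):

    def multiply(p, q):
        # the easy direction: multiplication runs in polynomial time
        return p * q

    def factor(n):
        # the (apparently) hard direction, here by naive trial
        # division, whose running time grows exponentially in the
        # number of digits of n
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return n, 1  # n is prime

    assert multiply(10007, 10009) == 100160063
    assert factor(100160063) == (10007, 10009)

Trial division is, of course, merely the naive inversion method;
the point is that no efficient method is known at all.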
A related question is that of the comparable difficulty
of solving problems versus verifying the validity of solutions.
We believe that some problems are much harder to solve than
to verify the validity of a solution for them.
However, we don't know this to be a fact either:
this is actually the famous P versus NP question.
Still, we know of many problems that are hard to solve,
provided that the above belief is indeed valid.
For each of these problems (called NP-hard),
an efficient solving method would imply an efficient
solving method for every problem in NP
(i.e., each problem for
which verifying the validity of solutions is easy).
Thus, all the (hundreds of natural) NP-complete problems
(i.e., problems that are both NP-hard and in NP)
are computationally equivalent,
although the formulation of many of them seems totally unrelated.
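The easy direction can be illustrated for one NP-complete problem,
the satisfiability of propositional formulae: checking a proposed
assignment takes time linear in the formula, whereas no efficient
method is known for finding one. A Python sketch:

    def verify_sat(clauses, assignment):
        # A CNF formula is a list of clauses; a clause is a list of
        # literals, where the integer v stands for variable v and -v
        # for its negation. Verification is linear in the formula.
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )

    formula = [[1, -2], [2, 3]]  # (x1 OR NOT x2) AND (x2 OR x3)
    print(verify_sat(formula, {1: True, 2: True, 3: False}))  # True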
The Theory of Computation provides a new viewpoint on old phenomena.
For example, a computational approach to randomness, which views
randomness as subjective to the observer, leads to the
conclusion that randomness can be expanded almost arbitrarily
(cf. the theory of pseudorandomness).
Likewise, a computational approach to proofs, which allows for
(probabilistic) interactive proofs, leads to the
conclusion that obtaining a proof of a statement
may not teach you anything beyond the validity of the statement
(such proofs are called zero-knowledge).
In general, a computational approach to proofs leads to the
realization that the standard notion may be generalized in various
ways, yielding
probabilistic proof systems that offer
advantages over standard (static and deterministic) proof systems.
The Theory of Computation is also concerned with finding
the most efficient methods for solving specific problems.
For example, multiplying numbers can be done more efficiently
than via the simple method learned in elementary school.
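One classical example is Karatsuba's method, which replaces the four
sub-products of the naive divide-and-conquer approach by three,
reducing the cost from roughly n^2 digit operations to roughly
n^1.585. A Python sketch (for illustration only):

    def karatsuba(x, y):
        if x < 10 or y < 10:
            return x * y
        m = max(x.bit_length(), y.bit_length()) // 2
        hi_x, lo_x = x >> m, x & ((1 << m) - 1)
        hi_y, lo_y = y >> m, y & ((1 << m) - 1)
        a = karatsuba(hi_x, hi_y)                # product of high parts
        b = karatsuba(lo_x, lo_y)                # product of low parts
        c = karatsuba(hi_x + lo_x, hi_y + lo_y)  # product of the sums
        # x*y = a*2^(2m) + (c - a - b)*2^m + b, using only 3 products
        return (a << (2 * m)) + ((c - a - b) << m) + b

    assert karatsuba(12345678, 87654321) == 12345678 * 87654321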
The nature of efficient computation (and computation in general)
is indeed the formative question of the Theory of Computation.
I consider this question (or rather a cluster of questions)
to be one of the most fundamental scientific questions ever asked.
Unfortunately, the fundamental status of this question is usually
disregarded due to its immediate technological impact (as argued next).
The revolutionary impact of a technology
(in our case the Computing Technology)
on our society does not necessarily facilitate
the appreciation of the intellectual contents of
the theory underlying it
(in our case the Theory of Computation).
Typically, people are so overwhelmed by the wonders of the technology
that they fail to wonder about the theory underlying it.
Specifically, people tend not to think of computing in general terms
but rather in the concrete terms in which they have most recently encountered it.
Consequently, the intellectual contents of the Theory of Computation
is rarely communicated and rarely understood (by non-specialists).
An attempt to redeem this state of affairs
is made in the material available from the web-page
Theory of Computing - A Scientific Perspective.
My Specific Areas of Research
The theory of computation can be sub-divided into numerous
overlapping areas.
Two main clusters of areas are complexity theory
and algorithms, where the distinction lies in whether the
focus is on the computational resources (as in complexity theory)
or on the tasks to be solved (as in algorithms).
Indeed, complexity theory is sub-divided according to the model
and resources of computation (i.e., time complexity,
space complexity, circuit complexity, communication complexity,
proof complexity, etc),
whereas algorithms are sub-divided according to the respective tasks
(e.g., graph algorithms, linear programming, approximation algorithms,
computational number theory, computational geometry, etc).
In addition, there are areas of the theory of computation
that should not be forced into the above two clusters.
Examples include Cryptography, Distributed Computing,
and Computational Learning Theory.
Currently, my main research areas belong to
the interplay of randomness and computation.
Specific examples include pseudorandomness, probabilistic proof
systems, and property testing, each described in more detail below.
This research tends to be classified
as belonging to complexity theory,
but at times it is very relevant to the design of algorithms.
In addition,
I am interested in complexity theory at large.
In the past, I was also active in
the foundations of cryptography
(see more details below);
nowadays, I no longer follow developments in that area.
I also have some research experience in Distributed Computing.
A fresh view of the question of randomness was taken in the
theory of computation: It has been postulated that a distribution
is pseudorandom if it cannot be told apart from the uniform distribution
by any efficient procedure.
This paradigm, originally associating efficient procedures
with polynomial-time algorithms, has been applied also with respect
to a variety of limited classes of such distinguishing procedures.
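For concreteness, the basic (polynomial-time) case of this postulate
is formalized as follows: an ensemble $\{X_n\}_{n\in\mathbb{N}}$ is
pseudorandom if for every probabilistic polynomial-time distinguisher
$D$ and every positive polynomial $p$, for all sufficiently large $n$,
\[
  \bigl|\, \Pr[D(X_n)=1] - \Pr[D(U_n)=1] \,\bigr| \;<\; \frac{1}{p(n)},
\]
where $U_n$ denotes the uniform distribution over $\{0,1\}^n$.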
Various types of probabilistic proof systems have played a
central role in the development of the theory of computation
in the last two decades.
These proof systems share a common (untraditional) feature:
they carry a probability of error;
yet, this probability is explicitly bounded and
can be reduced by successive application of the proof system.
The gain in allowing this untraditional relaxation is substantial,
as demonstrated by three well-known results regarding
interactive proofs, zero-knowledge proofs,
and probabilistically checkable proofs:
in each of these cases, allowing a bounded probability of error
makes the system much more powerful and useful than
its traditional (error-less) counterpart.
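For example, if a single execution may err with probability at most
1/2 (say, by accepting a false assertion), then $k$ independent
executions, accepted only if each of them accepts, err with
probability at most
\[
  (1/2)^k = 2^{-k},
\]
which is made negligible by a modest number of repetitions; two-sided
error is reduced similarly, by ruling according to a majority.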
The study of property testing,
a notion of approximation for decision problems:
Property Testing is the study of super-fast (randomized) algorithms
for approximate decision making. These algorithms are given
direct access to items of a huge data set, and determine
whether this data set has some predetermined (global) property
or is ``far'' from having this property. Remarkably, this approximate
decision is made by accessing a small portion of the data set.
That is, the focus of this study is on algorithms
that make an approximate decision without looking
at the entire object (and sometimes even running in constant time!).
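A toy example (an illustration of mine, simpler than those studied in
the literature): testing whether a huge bit-array is all zeros or
``far'' from being all zeros, while reading a number of positions that
is independent of the array's length.

    import math, random

    def test_all_zeros(query, n, eps):
        # query(i) returns the i-th bit of a length-n array.
        # If the array is all zeros, we always accept. If at least
        # an eps fraction of its entries are 1, then each sample hits
        # a 1 with probability at least eps, so ceil(2/eps) samples
        # all miss with probability at most (1-eps)^(2/eps) < e^(-2).
        for _ in range(math.ceil(2 / eps)):
            if query(random.randrange(n)) == 1:
                return False  # reject: evidence of being far
        return True  # accept

    # A length-10**9 "array" in which every 10th entry is 1 (i.e.,
    # 0.1-far from all zeros): rejected with probability > 2/3 after
    # reading only 20 of the 10**9 entries.
    print(test_all_zeros(lambda i: 1 if i % 10 == 0 else 0, 10**9, 0.1))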
Whereas classical cryptography was confined to the art
of designing and breaking encryption schemes (or ``secrecy codes''),
Modern Cryptography is concerned with the rigorous analysis
of any system that should withstand malicious attempts to abuse it.
We emphasize two aspects of the transition from classical to modern
cryptography: (1) the widening of scope from one specific task
to an utmost general class of tasks; and (2) the move from
an engineering art that thrives on ad-hoc tricks to a
scientific discipline based on rigorous approaches and techniques.
The foundations of cryptography are the paradigms,
approaches and techniques used to conceptualize, define and
provide solutions to natural cryptographic problems.
Currently, research in this area includes:
- The study of known paradigms (resp. approaches and techniques),
directed towards their better understanding and utilization.
- Discovery of new paradigms (resp. approaches and techniques)
that overcome inherent (or seemingly inherent) limitations of the
existing paradigms.
- Formulation of new cryptographic problems
and the study of these problems using known or new paradigms
(resp. approaches and techniques).
Some of my Research Contributions
My most important research contributions are:
Showing (together with Silvio Micali and Avi Wigderson)
how to construct zero-knowledge proof systems for NP:
Zero-knowledge proofs are probabilistic and interactive proofs
that efficiently demonstrate the validity of an assertion
without conveying any additional knowledge.
This work shows the wide applicability of zero-knowledge proofs.
Most importantly, assuming the existence of one-way functions,
it is shown that every set of NP-assertions (e.g., the satisfiability
of propositional formulae) has a zero-knowledge proof system.
[See a PR poster.]
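As a toy illustration of the flavor of such proofs, the following
Python sketch executes one round of the proof system for Graph
3-Colorability (an NP-complete problem); the SHA-256-based commitment
is merely a stand-in for the commitment schemes that the actual
construction derives from one-way functions.

    import hashlib, os, random

    def commit(value):
        # toy commitment: hash a random nonce together with the value
        nonce = os.urandom(16)
        return hashlib.sha256(nonce + bytes([value])).hexdigest(), nonce

    def zk_round(num_vertices, edges, coloring):
        # Prover: randomly rename the 3 colors and commit per vertex.
        perm = random.sample(range(3), 3)
        colors = {v: perm[coloring[v]] for v in range(num_vertices)}
        sealed = {v: commit(colors[v]) for v in range(num_vertices)}
        # Verifier: challenge with a uniformly chosen edge.
        u, w = random.choice(edges)
        # Prover opens only the two endpoints; Verifier checks the
        # openings and accepts iff the revealed colors differ.
        for v in (u, w):
            digest, nonce = sealed[v]
            assert hashlib.sha256(
                nonce + bytes([colors[v]])).hexdigest() == digest
        return colors[u] != colors[w]

    # A properly 3-colored triangle passes every round, yet each round
    # reveals only two randomly renamed (hence uninformative) colors.
    print(zk_round(3, [(0, 1), (1, 2), (0, 2)], {0: 0, 1: 1, 2: 2}))

Repeating such rounds drives a cheating prover's success probability
down exponentially, while an honest execution reveals nothing beyond
the validity of the claim.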
Showing (together with Silvio Micali and Avi Wigderson)
how to solve any multi-party protocol problem:
Assuming the existence of trapdoor permutations,
it is shown how to construct protocols for securely
computing any desirable multi-party functionality.
This result either requires a majority of honest players
or allows dishonest players to suspend the execution
(while being detected as bad).
Loosely speaking, in both cases,
the protocol emulates a trusted party
in an environment in which no party can be trusted.
Presenting (together with Leonid Levin)
a generic hard-core predicate for any one-way function:
A hard-core predicate of the function f is
a polynomial-time computable predicate of x that is hard
to approximate from f(x).
It is proved that any one-way function
of the form f(x,r)=(f'(x),r) has a hard-core predicate
(specifically, the inner-product mod 2 of x and r).
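In code, the predicate itself is a one-liner; the substance of the
result lies, of course, in the proof that predicting it from
(f'(x),r) is as hard as inverting f'.

    def hard_core(x_bits, r_bits):
        # inner product mod 2: the XOR of the bits of x selected by r
        return sum(xi & ri for xi, ri in zip(x_bits, r_bits)) % 2

    print(hard_core([1, 0, 1, 1], [1, 1, 0, 1]))  # (1+0+0+1) mod 2 = 0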
Showing (together with Shafi Goldwasser and Silvio Micali)
how to construct pseudorandom functions:
Extending the notion of pseudorandomness to functions,
it is shown how to construct pseudorandom functions,
using an arbitrary pseudorandom (bit) generator.
This means that a black box that stores only k secret bits
can implement a function from k-bit strings to k-bit strings
that cannot be distinguished from a random function by any
poly(k)-time observer that can ``query'' the function on
arguments of its choice.
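The construction (often called the GGM tree) applies a
length-doubling pseudorandom generator G along a root-to-leaf path of
a binary tree, with the input bits selecting the left or right half
of G's output at each level. A Python sketch, in which SHA-256 merely
stands in for G (the theorem assumes an arbitrary pseudorandom
generator):

    import hashlib

    def G(seed):
        # stand-in length-doubling generator: 32 bytes in, 64 bytes out
        return (hashlib.sha256(seed + b'L').digest(),
                hashlib.sha256(seed + b'R').digest())

    def prf(key, input_bits):
        # key: the secret seed (here 32 bytes); input_bits: the
        # function's argument, consumed bit by bit along the tree path
        value = key
        for bit in input_bits:
            value = G(value)[bit]
        return value

    key = b'\x07' * 32
    print(prf(key, [0, 1, 1, 0]).hex())  # deterministic given the key,
                                         # yet unpredictable without it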
Additional research contributions that I'd like to highlight
are arranged in the following (somewhat overlapping) categories.
A full list of my research contributions is available
in various formats.
See also the list of my works available on-line.
Contributions to the study of one-way functions and
pseudorandomness
Contributions to the study of derandomization and
weak sources of randomness
Contributions to the study
of various types of probabilistic proof systems
Probabilistically Checkable Proofs (PCP):
Statistical Zero-Knowledge (SZK):
Interactive Proofs (IP):
Knowledge Complexity (KC):
Contributions to the study of (computational)
zero-knowledge and related topics
Contributions to the study of Distributed Computing
Contributions to other aspects of the theory of computing
Expository Contributions: Books, Lecture Notes and Surveys
I have spent a considerable amount of time in trying to communicate
the main ideas developed in the research areas in which I have expertise.
This led me to write numerous surveys
and a few books and lecture notes.
My main expository works are briefly described below.
Computational Complexity:
A Conceptual Perspective (2008).
This book offers a conceptual perspective on complexity theory.
It is intended to serve as an introduction to the field,
and can be used either as a textbook or for self-study.
The book is also useful to experts, since it provides
expositions of various sub-areas of complexity theory.
The exposition starts from the intuitive questions,
as embodied in the concepts studied in each sub-area,
and discusses the importance of these questions,
the definitional choices made in the actual formulation,
and the approaches and ideas that underlie the answers.
Foundations of Cryptography:
a two-volume textbook
(Vol1, 2001;
Vol2, 2004).
This book is aimed at presenting firm foundations for cryptography;
that is, presenting the paradigms, approaches, and techniques
used to conceptualize, define, and provide solutions
to natural ``security concerns'',
as well as some of the fundamental results obtained using them.
The emphasis is on the clarification of fundamental concepts
and on demonstrating the feasibility of solving several central
cryptographic problems.
Modern Cryptography,
Probabilistic Proofs and Pseudorandomness (1998).
The interplay between randomness and computation is one
of the most fascinating scientific phenomena uncovered in
the last couple of decades.
This interplay is at the heart of modern cryptography
and plays a fundamental role in complexity theory at large.
Specifically, the interplay of randomness and computation
is pivotal to several intriguing notions of probabilistic proof systems
and is the focal point of the computational approach to randomness.
This book provides an introduction to these three,
somewhat interwoven domains
(i.e., cryptography, proofs and randomness).
Randomized Methods in Computation -
Lecture Notes (2001).
A variety of randomized methods are being employed
in the study of computation.
The aim of the current course is to make the students familiar
with some of these methods.
We wish to stress three aspects of this course:
(1) it focuses on methods (i.e., tools and techniques)
rather than on concepts; (2) these methods are probabilistic,
and so the results obtained are often not ``(fully) explicit'';
and (3) the focus is on applications to the study of computation.
Other activities: Guiding students and Editorial work
A partial list of students that I guided includes
Boaz Barak
(Ph.D. 04);
Ran Canetti
(M.Sc. 92, Ph.D. 95);
Guy Even (M.Sc. 91);
Iftach Haitner
(M.Sc. 03);
Amir Herzberg
(D.Sc. 91);
Hugo Krawczyk (D.Sc. 90);
Eyal Kushilevitz (M.Sc. 89);
Yehuda Lindell
(Ph.D. 02);
Noam Livne
(Ph.D. 10);
Or Meir
(M.Sc. 07, Ph.D. 11);
Erez Petrank
(M.Sc. 92,
D.Sc. 95);
Alon Rosen
(Ph.D. 03);
Ron Rothblum
(M.Sc. 10, Ph.D. 15).
For a full list click HERE.
I edited
a book in memory of Shimon Even
and a collection of surveys
and research summaries on property testing.
I served (or am still serving) on the editorial boards of
SICOMP (1996-2010),
Journal of Cryptology (1992-2011),
Computational Complexity (since 2003)
and ECCC
(since founding in 1994).
I served as chair of the steering committee of
the Theory of
Cryptography Conference (2005-13)
and as co-organizer of the
Oberwolfach Meeting on Complexity Theory
(since 1994).