The quest for efficiency is ancient and universal,
as time and other resources are always in short supply.
Thus, the question of which tasks can be performed efficiently
is central to the human experience.
A key step towards the systematic study of this question
is a rigorous definition of the notion of a task and of procedures for
solving tasks. These definitions were provided by computability theory,
which emerged in the 1930's. This theory focuses on computational tasks,
and considers automated procedures (i.e., computing devices and algorithms)
that may solve such tasks.
In focusing attention on computational tasks and algorithms,
computability theory has set the stage for the study of the
computational resources (like time) that are required by such algorithms.
When this study focuses on the resources that are necessary for
*any* algorithm that solves a particular task (or class of tasks),
the study becomes part of the theory of Computational Complexity
(also known as Complexity Theory).

Complexity Theory is a central field of the theoretical foundations
of Computer Science. It is concerned with the study of
the *intrinsic complexity of computational tasks*.
That is, a typical Complexity-theoretic study looks at a task
(or a class of tasks) and at the computational resources
required to solve this task,
rather than at a specific algorithm or algorithmic scheme.
Actually, research in Complexity Theory tends to
*start with the computational resources themselves*,
and addresses the effect of limiting these resources
on the class of tasks that can be solved.
Thus, Computational Complexity is the study of
what can be achieved within limited time
(and/or other limited natural computational resources).

The (half-century) history of Complexity Theory has witnessed two main research efforts (or directions). The first direction is aimed towards actually establishing concrete lower bounds on the complexity of problems, via an analysis of the evolution of the process of computation. Thus, in a sense, the heart of this direction is a "low-level" analysis of computation. Most research in circuit complexity and in proof complexity falls within this category.

In contrast, a second research effort is aimed at exploring the connections among computational problems and notions, without being able to provide absolute statements regarding the individual problems or notions. This effort may be viewed as a "high-level" study of computation. The theory of NP-completeness as well as the studies of approximation, probabilistic proof systems, pseudorandomness and cryptography all fall within this category.

- A brief overview, a teaser (2005); revised into An Invitation to Complexity Theory (2012).
- Computational Complexity: A Conceptual Perspective, a book (2008). [See drafts (2006-07).]
- P, NP, and NP-Completeness: The Basics of Computational Complexity, a textbook (2010).
- Expositions of various topics in Complexity Theory, various texts (1987-2011).
- On Teaching the Basics of Complexity Theory, a position paper (2005).
- Two surveys:
- focusing on the high-level approach (2000).
- including coverage of lower bounds (2004, co-authored by Avi Wigderson).

- Two sets of lecture notes:
- detailed notes written by students (1999)
- less detailed notes written by O.G. (2002).

- Background: draft of a textbook in Hebrew on Computability and Basic Complexity Theory.
- Modern Cryptography, Probabilistic Proofs and Pseudorandomness (1998).
- Two lectures on advanced topics in computability, 2002.
