Computational Complexity (or Complexity Theory) is a central subfield of the theoretical foundations of Computer Science. It is concerned with the study of the intrinsic complexity of computational tasks. This study tends to aim at generality; it focuses on natural computational resources, and considers the effect of limiting these resources on the class of problems that can be solved. It also tends towards asymptotics: studying this complexity as the size of the data grows. Another related subfield (represented in this volume) deals with the design and analysis of algorithms for specific (classes of) computational problems that arise in a variety of areas of mathematics, science and engineering.
The (half-century) history of Complexity Theory has witnessed two main research efforts (or directions). The first direction is aimed towards actually establishing concrete lower bounds on the complexity of problems, via an analysis of the evolution of the process of computation. Thus, in a sense, the heart of this direction is a ``low-level'' analysis of computation. Most research in circuit complexity and in proof complexity falls within this category. In contrast, a second research effort is aimed at exploring the connections among computational problems and notions, without being able to provide absolute statements. This effort may be viewed as a ``high-level'' study of computation. The theory of NP-completeness, the study of probabilistic proof systems, as well as pseudorandomness and cryptography, all fall within this category.
Two surveys are available on-line:
This work may be published or be a basis for publication in the future. Copyright may be transferred without further notice and this version may no longer be accessible.