Computational Complexity: A Conceptual Perspective
Oded Goldreich


This book is rooted in the thesis that
complexity theory is extremely rich in conceptual content,
and that this content should be explicitly
communicated in expositions and courses on the subject.
It focuses on several sub-areas of complexity theory,
starting from the intuitive questions addressed by each sub-area.
The exposition discusses the fundamental importance of these questions,
the choices made in the actual formulation of these questions and notions,
the approaches that underlie the answers,
and the ideas that are embedded in these answers.
ISBN 9780521884730
Published in US in May 2008.
Publisher:
Cambridge University Press
(see the
publisher's page for this volume).

Below is the book's tentative preface and Organization.
Drafts (and additional related texts)
are available from HERE.
See also list of corrections and additions
(to the print version).
Last updated: June 2008.
Preface (tentative, Jan. 2007)
The strive for efficiency is ancient and universal,
as time and other resources are always in shortage.
Thus, the question of which tasks can be performed efficiently
is central to the human experience.
A key step towards the systematic study of the aforementioned question
is a rigorous definition of the notion of a task and of procedures for
solving tasks. These definitions were provided by computability theory,
which emerged in the 1930's. This theory focuses on computational tasks,
and considers automated procedures
(i.e., computing devices and algorithms)
that may solve such tasks.
In focusing attention on computational tasks and algorithms,
computability theory has set the stage for the study of the
computational resources (like time) that are required by such algorithms.
When this study focuses on the resources that are necessary
for any algorithm that solves a particular task
(or a task of a particular type),
the study becomes part of the theory of Computational Complexity
(also known as Complexity Theory).
Complexity Theory is a central field of the theoretical foundations
of Computer Science. It is concerned with the study of
the intrinsic complexity of computational tasks.
That is, a typical complexity-theoretic study looks
at the computational resources required to solve
a computational task (or a class of such tasks),
rather than at a specific algorithm or an algorithmic schema.
Actually, research in Complexity Theory tends to
start with and focus on the computational resources themselves,
and addresses the effect of limiting these resources on the class
of tasks that can be solved.
Thus, Computational Complexity is the study of
what can be achieved within limited time
(and/or other limited natural computational resources).
The (half-century) history of Complexity Theory has witnessed
two main research efforts (or directions).
The first direction is aimed towards actually establishing concrete
lower bounds on the complexity of computational problems,
via an analysis of the evolution of the process of computation.
Thus, in a sense, the heart of this direction is a ``low-level''
analysis of computation. Most research in circuit complexity
and in proof complexity falls within this category.
In contrast, a second research effort is aimed at exploring
the connections among computational problems and notions,
without being able to provide absolute statements regarding
the individual problems or notions.
This effort may be viewed as a ``high-level'' study of computation.
The theory of NP-completeness as well as
the studies of approximation, probabilistic proof systems,
pseudorandomness and cryptography
all fall within this category.
The current book focuses on the latter effort (or direction).
We list several reasons for our decision to focus
on the ``high-level'' direction.
The first is the great conceptual significance
of the known results; that is, many known results
(as well as open problems)
in this direction have an extremely appealing conceptual message,
which can be appreciated also by non-experts.
Furthermore, these conceptual aspects may be explained
without entering excessive technical detail.
Consequently, the ``high-level'' direction is more suitable
for an exposition in a book of the current nature.
Finally, there is a subjective reason:
the ``high-level'' direction is within our own expertise,
while this cannot be said about the ``low-level'' direction.
The last paragraph brings us to a discussion
of the nature of the current book,
which is captured by the subtitle
(i.e., ``a conceptual perspective'').
Our main thesis is that
complexity theory is extremely rich in conceptual content,
and that this content should be explicitly
communicated in expositions and courses on the subject.
The desire to provide a corresponding textbook is indeed
the motivation for writing the current book
and its main governing principle.
This book offers a conceptual perspective on complexity theory,
and the presentation is designed to highlight this perspective.
It is intended to serve as an introduction to the field,
and can be used either as a textbook or for self-study.
Indeed, the book's primary target audience consists of students
that wish to learn complexity theory
and educators that intend to teach a course on complexity theory.
The book is also intended
to promote interest in complexity theory and make it
accessible to general readers with adequate background
(which mainly means being comfortable with abstract discussions,
definitions and proofs).
We expect most readers to have a basic knowledge of algorithms,
or at least be fairly comfortable with the notion of an algorithm.
The book focuses on several sub-areas of complexity theory
(see the following organization and chapter summaries).
In each case, the exposition starts from the intuitive questions
addressed by the sub-area, as embodied in the concepts that it studies.
The exposition discusses
the fundamental importance of these questions,
the choices made in the actual formulation
of these questions and notions,
the approaches that underlie the answers,
and the ideas that are embedded in these answers.
Our view is that these (``non-technical'') aspects are the core
of the field, and the presentation attempts to reflect this view.
We note that being guided by the conceptual content of the material
leads, in some cases, to technical simplifications.
Indeed, for many of the results presented in this book,
the presentation of the proof is different
(and arguably easier to understand)
than the standard presentations.
Table of contents (tentative, May 2007)

Preface, Organization and Chapter Summaries, Acknowledgments
1. Introduction and Preliminaries
1.1 Introduction
1.1.1 A brief overview of Complexity Theory;
1.1.2 Characteristics of Complexity Theory;
1.1.3 Contents of this book;
1.1.4 Approach and style of this book;
1.1.5 Standard notations and other conventions.
1.2 Computational Tasks and Models
1.2.1 Representation;
1.2.2 Computational Tasks;
1.2.3 Uniform Models (Algorithms)
[Turing machines, Uncomputable functions, Universal algorithms,
Time and space complexity, Oracle machines, Restricted models];
1.2.4 Non-uniform Models (Circuits and Advice)
[Boolean Circuits, Machines that take advice,
Restricted models];
1.2.5 Complexity Classes
Chapter Notes
2. P, NP and NP-Completeness
2.1 The P versus NP Question
2.1.1 The search version - finding versus checking;
2.1.2 The decision version - proving versus verifying;
2.1.3 Equivalence of the two formulations;
2.1.4 Two technical comments regarding NP;
2.1.5 The traditional definition of NP;
2.1.6 In support of P different from NP;
2.1.7 Philosophical meditations.
2.2 Polynomial-time Reductions
2.2.1 The general notion of a reduction;
2.2.2 Reducing optimization problems to search problems;
2.2.3 Self-reducibility of search problems;
2.2.4 Digest and general perspective.
2.3 NP-Completeness
2.3.1 Definitions;
2.3.2 The existence of NP-complete problems;
2.3.3 Some natural NP-complete problems
[Circuit and formula satisfiability - CSAT and SAT,
Combinatorics and graph theory];
2.3.4 NP sets that are neither in P nor NP-complete;
2.3.5 Reflections on complete problems.
2.4 Three relatively advanced topics
2.4.1 Promise Problems;
2.4.2 Optimal search algorithms for NP;
2.4.3 The class coNP and its intersection with NP.
Chapter Notes
Exercises
3. Variations on P and NP
3.1 Non-uniform polynomial-time (P/poly)
3.1.1 Boolean Circuits;
3.1.2 Machines that take advice.
3.2 The Polynomial-time Hierarchy (PH)
3.2.1 Alternation of quantifiers;
3.2.2 Non-deterministic oracle machines;
3.2.3 The P/poly-versus-NP Question and PH.
Chapter Notes
Exercises
4. More Resources, More Power?
4.1 Non-uniform complexity hierarchies
4.2 Time Hierarchies and Gaps
4.2.1 Time Hierarchies
[The Time Hierarchy Theorem,
Impossibility of speed-up for universal computation,
Hierarchy theorem for non-deterministic time];
4.2.2 Time Gaps and Speed-Up.
4.3 Space Hierarchies and Gaps
Chapter Notes
Exercises
5. Space Complexity
5.1 General preliminaries and issues
5.1.1 Important conventions;
5.1.2 On the minimal amount of useful computation space;
5.1.3 Time versus Space
[Two composition lemmas, An obvious bound,
Subtleties regarding space-bounded reductions,
Search vs. Decision, Complexity hierarchies and gaps,
Simultaneous time-space complexity];
5.1.4 Circuit Evaluation.
5.2 Logarithmic Space
5.2.1 The class L;
5.2.2 Log-Space Reductions;
5.2.3 Log-Space uniformity and stronger notions;
5.2.4 Undirected Connectivity.
5.3 Non-Deterministic Space Complexity
5.3.1 Two models;
5.3.2 NL and directed connectivity
[Completeness and beyond,
Relating NSPACE to DSPACE, Complementation or NL=coNL];
5.3.3 A retrospective discussion.
5.4 PSPACE and Games
Chapter Notes
Exercises
6. Randomness and Counting
6.1 Probabilistic Polynomial-Time
6.1.1 Basic modeling issues;
6.1.2 Two-sided error (BPP);
6.1.3 One-sided error (RP and coRP);
6.1.4 Zero-sided error (ZPP);
6.1.5 Randomized Log-Space
[Definitional issues, the accidental tourist sees it all].
6.2 Counting
6.2.1 Exact Counting [including completeness];
6.2.2 Approximate Counting [for #DNF and #P];
6.2.3 Searching for unique solutions;
6.2.4 Uniform generation of solutions
[Relation to approximate counting,
Direct uniform generation].
Chapter Notes
Exercises
7. The Bright Side of Hardness
7.1 One-Way Functions
7.1.1 The concept of one-way functions;
7.1.2 Amplification of Weak One-Way Functions;
7.1.3 Hard-Core Predicates;
7.1.4 Reflections on Hardness Amplification.
7.2 Hard Predicates in E
7.2.1 Amplification with respect to polynomial-size circuits
[From worst-case hardness to mild average-case hardness,
Yao's XOR Lemma, List decoding and hardness amplification];
7.2.2 Amplification with respect to exponential-size circuits
[Hard regions, Hardness amplification via hard regions].
Chapter Notes
Exercises
8. Pseudorandom Generators
Introduction
8.1 The General Paradigm
8.2 General-Purpose Pseudorandom Generators
8.2.1 The basic definition;
8.2.2 The archetypical application;
8.2.3 Computational Indistinguishability;
8.2.4 Amplifying the stretch function;
8.2.5 Constructions;
8.2.6 Non-uniformly strong pseudorandom generators;
8.2.7 Stronger notions and conceptual reflections.
8.3 Derandomization of time-complexity classes
8.3.1 Defining canonical derandomizers;
8.3.2 Constructing canonical derandomizers;
8.3.3 Technical variations and conceptual reflections.
8.4 Space Pseudorandom Generators
8.4.1 Definitional issues;
8.4.2 Two constructions.
8.5 Special Purpose Generators
8.5.1 Pairwise-Independence Generators;
8.5.2 Small-Bias Generators;
8.5.3 Random Walks on Expanders.
Chapter Notes
Exercises
9. Probabilistic Proof Systems
Introduction and Preliminaries
9.1 Interactive Proof Systems
9.1.1 Motivation and Perspective;
9.1.2 Definition;
9.1.3 The Power of Interactive Proofs;
9.1.4 Variants and finer structure
[Arthur-Merlin games a.k.a. public-coin proof systems,
Interactive proof systems with two-sided error,
A hierarchy of interactive proof systems,
Something completely different];
9.1.5 On computationally bounded provers
[How powerful should the prover be?
Computational soundness].
9.2 Zero-Knowledge Proof Systems
9.2.1 Definitional Issues;
9.2.2 The Power of ZeroKnowledge;
9.2.3 Proofs of Knowledge - a parenthetical subsection.
9.3 Probabilistically Checkable Proof Systems
9.3.1 Definition;
9.3.2 The Power of Probabilistically Checkable Proofs
[Proving that PCP[poly,O(1)] contains NP,
Overview of the first proof of the PCP Theorem,
Overview of the second proof of the PCP Theorem];
9.3.3 PCP and Approximation;
9.3.4 More on PCP itself - an overview.
Chapter Notes
Exercises
10. Relaxing the Requirements
10.1 Approximation
10.1.1 Search or Optimization;
10.1.2 Decision or Property Testing.
10.2 Average Case Complexity
10.2.1 The basic theory
[Definitional issues,
Complete problems, Probabilistic versions];
10.2.2 Ramifications
[Search versus Decision,
Simple versus sampleable distributions].
Chapter Notes
Exercises
Epilogue
Apdx A - Glossary of Complexity Classes
A.1 Preliminaries;
A.2 Algorithm-based classes
[Time complexity classes, Space complexity];
A.3 Circuit-based classes.
Apdx B - On the Quest for Lower Bounds
B.1 Preliminaries;
B.2 Boolean Circuit Complexity
[Basic Results and Questions,
Monotone Circuits, Bounded-Depth Circuits, Formula Size];
B.3 Arithmetic Circuits
[Univariate Polynomials, Multivariate Polynomials];
B.4 Proof Complexity
[Logical, Algebraic, and Geometric Proof Systems].
Apdx C - On the Foundations of Modern Cryptography
C.1 Introduction and Preliminaries;
C.2 Computational Difficulty
[One-Way Functions and Trapdoor Permutations];
C.3 Pseudorandomness
[including Pseudorandom Functions];
C.4 Zero-Knowledge
[The Simulation Paradigm, The Actual Definition,
A General Result and a Generic Application,
Definitional Variations and Related Notions];
C.5 Encryption Schemes
[Definitions, Constructions, Beyond Eavesdropping Security];
C.6 Signatures and Message Authentication
[Definitions, Constructions];
C.7 General Cryptographic Protocols
[The Definitional Approach and Some Models, Some Known Results,
Construction Paradigms and Two Simple Protocols, Concluding Remarks].
Apdx D - Probabilistic Preliminaries and Advanced Topics in Randomization
D.1 Probabilistic preliminaries
[Notational Conventions, Three Inequalities];
D.2 Hashing
[Definitions, Constructions, The Leftover Hash Lemma];
D.3 Sampling [including Hitters];
D.4 Randomness Extractors
[The Main Definition, Extractors as averaging samplers,
Extractors as randomness-efficient error-reductions,
Other perspectives, Constructions].
Apdx E - Explicit Constructions
E.1 Error Correcting Codes
[A mildly explicit version of the existential proof,
The Hadamard Code, The Reed-Solomon Code, The Reed-Muller Code,
Binary codes of constant relative distance and constant rate,
Two additional computational problems, A list decoding bound];
E.2 Expander Graphs
[Two Mathematical Definitions, Two levels of explicitness,
Two properties, The Margulis-Gabber-Galil Expander,
The Iterated Zig-Zag Construction].
Apdx F - Some Omitted Proofs
F.1 Proving that PH reduces to #P;
F.2 Proving that IP[O(f)] is contained in AM[O(f)] and in AM[f]
[Emulating general interactive proofs by AM-games,
Linear speedup for AM]
Apdx G - Some Computational Problems
G.1 Graphs
G.2 Formulae
G.3 Algebra
G.4 Number Theory
Bibliography

List of Corrections and Additions
The following list does not include errors that readers
are likely to overcome by themselves. Partial lists of such
errors appear in HERE
and HERE.

In retrospect, it would have been good to warn in Sec 1.1.5 (page 16)
that in mathematical equations "poly" usually stands for a fixed
(but unspecified) polynomial that may change without warning;
for example, the text sometimes uses $poly(n^2) = poly(n)$ etc.
(This is most relevant to highly technical parts such as Sec 7.2.)

An additional comment regarding the abstract RAM model (pages 25-26):
The current presentation is oversimplified and aimed at the question
of computability, while ignoring the question of complexity.
To properly account for the complexity of computation in this model,
one has to either restrict the size/length of the registers
or charge each operation according to the length of the integers
actually stored and manipulated. In the context of polynomial-time
computation, one typically postulates that all registers and
memory cells can hold integers of logarithmic length
(i.e., logarithmic in terms of the input length). In such a case,
operations like multiplication place the least significant part of
the result in one register and its most significant part in another.
[Rainer Plaga]
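To illustrate the convention described above, here is a minimal sketch (not from the book) of multiplication on a RAM with bounded registers; the word length W and the function name mul are hypothetical choices made for the illustration.

```python
# Minimal sketch of arithmetic in a RAM with logarithmic-length words:
# registers hold W-bit integers (W = O(log input length)), and a
# multiplication places the low W bits of the product in one register
# and the high W bits in another.  W and mul are illustrative names.

W = 8                       # word length (hypothetical small value)
MASK = (1 << W) - 1         # largest W-bit value

def mul(a: int, b: int) -> tuple[int, int]:
    """Multiply two W-bit registers; return (low word, high word)."""
    assert 0 <= a <= MASK and 0 <= b <= MASK
    prod = a * b            # the product is at most 2W bits long
    return prod & MASK, prod >> W

lo, hi = mul(200, 200)      # 40000 = 156 * 256 + 64, so lo = 64, hi = 156
```

Charging each such operation unit cost is then consistent with the logarithmic word-length postulate, since no register ever holds more than W bits.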

Typo on page 26, Footnote 12:
"Turning machine" should be "Turing machine".
[Ondrej Lengal]

The proof of Thm 1.4 (on page 27) leaves some gaps, including
(1) proving that the set of reals and the set of reals in [0,1]
have the same cardinality, (2) proving that the set of all functions
(from strings to strings) and the set of Boolean functions have the
same cardinality, and (3) dealing with the problem of non-uniqueness
of the binary representation of some (countably many) reals.
It would have been better to assert as the main result that
the set of Boolean functions and the power set of the integers
have the same cardinality. [Michael Forbes]
Also, in the theorem statement $\aleph$ should be replaced by $\aleph_0$.
[Jerry Grossman]

The current proof of Thm 1.5 is flawed,
since it is not necessarily the case that the output
of $M''$ on input $\ang{M''}$ equals its output on input $\ang{M'}$.
This can be fixed by changing $M''$ such that, in Step 1,
it discards $x$ and uses $\ang{M'}$ instead.
That is, Steps 2-4 should refer to an
execution of $M'$ on input $\ang{M'}$.
[Hamoon Mousavi]

On the last line of page 32,
the phrase "has time complexity at least T" should be replaced
by "cannot have time complexity lower than T". [Jerry Grossman]

Typo in the paragraph regarding universal machines on page 34:
The machine $U'$ halts in $poly(|X|+t)$ steps
(and not in $poly(|X|)$ steps, as stated).
[Igor Carboni Oliveira]

Errata regarding Def 1.11 (on page 35):
While this simplified definition suffices in the context
of mere computability, it is not good enough for the context
of resource-bounded computation (e.g., time complexity).
The problem is that, as formulated, the definition
allows the machine to invoke the oracle on the answer provided
by a previous call at unit cost (rather than in time that
is related to writing the new query). An easy fix is to
use two oracle tapes, one on which queries are written,
and the other on which the oracle answers are obtained.
(This is exactly the convention used in the context
of space complexity.)
An alternative solution is to use the convention that the query
(i.e., the actual contents of the tape when invoking the oracle)
is the contents of the cells up to the location of the machine's head
on the single tape, and that the following configuration
(which corresponds to the ``oracle spoke'' state)
has the machine's head located on the leftmost cell.
[Yosef Pogrow]

Regarding Section 2.2.1.2:
Karp-reductions are often referred to
as (polynomial-time) many-to-one reductions.
In fact, such references can be found also in later chapters of this book.
[Michal Koucky]

Typo on line 9 of page 66:
The reference to the ``advanced comment''
is meant to refer to Section 2.2.3.2.
Similarly, the last sentence in Section 2.2.3.1
should just refer to Section 2.2.3.2.

Minor error in the advanced comment on page 70:
Not every search problem is Karp-reducible to $R_H$;
this holds only for the class of search problems
that refer to relations for which the membership
problem is decidable. Also $R_H$ is neither solvable
nor is there an algorithm deciding membership in $R_H$.

Typo in Def. 2.30:
the set $P$ mentioned at the end should be $S_R$.

Typo on line 11 of page 96:
a close parenthesis is missing in the math expression.

Addition to the Chapter Notes (of Ch. 2, page 98).
The definition of the search problem classes PC and PF
coincide with the standard definitions known as FNP and FP,
although the notation of the latter classes, in which `F'
stands for function, is misleading.
As stated throughout the chapter, I am not happy with
the term NP, since it refers to Definition 2.7,
which I consider a conceptually wrong way of introducing
this fundamental class. However, I felt that changing
this extremely popular term is not feasible.
In contrast, the term FNP is not as popular as NP,
and there is hope to replace it.

Comment regarding Exer. 2.13 (Part 2):
In the guideline, ``self reduction'' should be
``downwards self-reduction'' (the one guaranteed for $S$).
The guideline falls short of addressing all issues.
- It is not clear how to emulate the execution of the reduction
on a given input, since the reduction may be adaptive.
One solution is to emulate all answers as `false' (and note that
the first query that is a yes-instance is not affected).
This requires defining $R$ such that $x_{i+1}$ is
in the set of queries made by the oracle machine on input $x_i$
when all prior queries are answered with `false'
(rather than being answered according to $S$).
- The guideline assumes that there exists a constant $c$
such that the reduction makes queries on any input that is longer
than $c$. The solution is to add a dummy query that is the shortest
yes-instance (resp., no-instance), whenever the reduction halts
accepting (resp., rejecting) without making any query.
In general, note that (constructible) complexity bounds on
reductions can be enforced regardless of the correctness
of the answers. [Dima Kogan]

Comment regarding Exer. 2.18:
Another alternative proof of Theorem 2.16 just relies on Theorem 2.10
(which implies that R reduces to NP).

Comment regarding Exer. 2.23:
This exercise is meant to prove that the said problem is NP-hard.
Proving that it is in NP is beyond the scope of this text;
it requires showing that if a solution exists, then a relatively short
solution exists as well.
Btw, there is a typo in the guideline:
The inequality $x+(1-y) \geq 1$ simplifies to $x-y \geq 0$.

Apropos Exer 2.28 [suggested by Fabio Massacci]:
A natural notion, which arises from viewing complete problems
as encoding all problems in a class,
is the notion of a ``universal (efficient) verification procedure''.
Following Def. 2.5, an NP-proof system is called universal
if verification in any other NP-proof system can be reduced to it
in the sense captured by the functions $f$ and $h$ of Exer 2.28.
That is, verifying that $x\in S$ via the NP-witness $y$
is ``reduced'' to verifying that $f(x)$ is in the universal set
via the NP-witness $h(x,y)$ (for the universal verifier).
We mention that this notion (of a universal NP-proof system)
and the fact that the standard NP-proof system for 3SAT is universal
underlie the proofs of several (``furthermore'') results in Section 9
(e.g., see Thm 9.11 and the paragraph following Thm 9.16).

Errata regarding Exer 2.29:
The case of 3-Colorability is far from being an adequate exercise;
see the recent work by R.R. Barbanchon,
``On Unique Graph 3-Colorability and Parsimonious Reductions in the Plane''
(TCS, Vol. 319, 2004, pages 455-482).
[Michael Forbes]

Errata and comment regarding Exer 2.31:
In item 1 (as throughout), the functions are length-increasing,
not length-preserving.
More importantly, it seems that a more intuitive exposition is possible.
Consider a bipartite graph between two copies of $\{0,1\}^*$,
called the $S$-world and the $T$-world,
and draw an edge from the string $x$ in the $S$-world (resp., $T$-world)
to $f(x)$ in the $T$-world (resp., $g(x)$ in the $S$-world).
Considering all these directed edges
(and using the length-increasing condition),
we get an infinite collection of infinite paths that cover
all vertices in both worlds.
We seek a perfect matching that is consistent with the directed edges,
since such a matching will yield the desired isomorphism
(provided we can efficiently determine the mate of each vertex).
The matching is obtained by detecting the endpoint of the infinite
path on which a vertex resides; once the endpoint is detected,
a perfect matching of the vertices on this path is uniquely defined.
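The mate-finding rule above can be sketched as follows, assuming length-increasing injections $f$ and $g$ with computable inverses; the function names and the toy injections are hypothetical, introduced only for illustration.

```python
# Sketch of the path-endpoint rule: to find the mate of x in the S-world,
# trace predecessors (each strictly shorter, so the trace terminates) until
# the path's endpoint is reached.  If the endpoint lies in the S-world,
# x is matched forwards to f(x); otherwise backwards to g^{-1}(x).
# g_inv/f_inv return None when the argument has no preimage.

def mate_in_S(x, f, g_inv, f_inv):
    side, v = 'S', x
    while True:
        pred = g_inv(v) if side == 'S' else f_inv(v)
        if pred is None:                   # reached the path's endpoint
            return f(x) if side == 'S' else g_inv(x)
        side, v = ('T', pred) if side == 'S' else ('S', pred)

# toy length-increasing injections on binary strings:
f = lambda x: x + '0'                      # S-world -> T-world
g = lambda y: y + '1'                      # T-world -> S-world
g_inv = lambda s: s[:-1] if s.endswith('1') else None
f_inv = lambda s: s[:-1] if s.endswith('0') else None
```

For instance, with these toy injections the empty string is a path endpoint in the $S$-world, so its mate is f('') = '0', whereas '1' lies one step past the endpoint '' of the $T$-world and is matched backwards to g_inv('1') = ''.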

Comment regarding Exer 2.36:
Note that Part 2 actually shows that the search problem of SAT
reduces to xSAT;
hence, PC reduces to the promise problem class NP intersection coNP.

An additional exercise for Chap 2:
Show that for all natural NPC problems,
finding an alternative/second solution is NPC.

Typo on page 111 (Def. 3.5):
replace "these exists" by "there exists".

On page 112 line 3 (just after Thm. 3.6),
the function $t$ should be restricted to be at least linear.
[Michal Koucky]

Typo on page 116 (line 10):
replace "In contract" by "In contrast".

Minor clarification re the proof of Thm 3.12 (page 120):
In representing $S\in\Pi_2$ via a set $S'\in\NP$
we use an analogue of Prop 3.9 (rather than using Prop 3.9 proper).

Errata regarding Exer 3.12:
Mahaney's result, by which there are no sparse NP-complete sets
(assuming that P is different from NP), is wrongly attributed to [81].
[Michael Forbes]

Addendum regarding Exer 3.12:
A nice proof of Mahaney's aforementioned result
can be found in Cai's lecture notes
for Introduction to Complexity Theory
(see Sec. 4 in Lecture 11).

Addendum regarding Thm 4.1 (p. 128):
The claim may be strengthened to assert that
for every $\ell:\N\to\N$ such that $\ell(n)\leq 2^n$,
it holds that $P/\ell$ strictly contains $P/(\ell-1)$.
See proof and discussion.

Typo in Corollary 4.4: "time-computable" should read "time-constructible".
[Michael Forbes]

Typo in Theorem 4.7: The function $g$ should be increasing,
or rather satisfy $g(t) \geq t$ for all $t$.
[Augusto Modanese]

Clarification regarding Item 5 of the summary on page 145:
The machine scans its output tape in one direction.
[Michal Koucky]

Addendum to the guideline of Exer. 4.13:
The emulation should check that the emulated machine
does not enter an infinite execution.

Typo on page 156 (3rd line from bottom):
"intra-cycle" should be "inter-cycle".
[Orr Paradise]

Minor clarification regarding the 2nd paragraph of page 157
(A couple of technicalities):
One had better guarantee that not only G1 but rather
every connected component of the graph G1 is non-bipartite.
Likewise, note that the fixed graph G
(used for the Zig-Zag product at the bottom of the page) is non-bipartite too.
This is crucial for establishing the fact that the Zig-Zag
product maintains the number of connected components
(rather than only prevents it from decreasing).
This fact can be deduced from the behavior of the eigenvalue bounds,
but a more direct proof consists of two steps:
- In each cloud that replaces a vertex, vertices at distance two
are connected in the resulting (Zig-Zag) graph.
- In each connected graph that is not bipartite,
each pair of vertices is connected
by a (not necessarily simple) path of even length.
[Tal Skverer]

Minor correction for page 162, Lemma 5.10:
The space bound should have an additional term
of $O(1)+\log t$ for the general overhead (of the emulation)
and for maintaining the index of the recursion level
(so that the emulator knows when it reaches the lowest
level of the recursion, which is important since at this
point it should refer the generated queries to its own oracle).

Typo on page 167 (last sentence of Sec. 5.3.2.2):
In the parenthetical comment, logarithmic depth
should be replaced by log-square depth; that is,
the alternative outlined in the parenthetical sentence should read
``by noting that any log-space reduction can be computed by a uniform
family of bounded fan-in circuits of log-square depth.''
[Michal Koucky]

Regarding Chapters 6-10:
As noted by Guillermo Morales-Luna,
the notions of probability ensembles
(i.e., sequences of distributions indexed by strings)
and noticeable probabilities
(i.e., a probability that decreases slower than the reciprocal
of some positive polynomial in the relevant parameter)
are not defined at their first usage.
Indeed, it might have been a good idea to include a glossary
of technical terms such as these as another appendix.
Such an appendix may also contain other definitions
(e.g., of negligible probability)
and notations such as $\{0,1\}^n$, $U_n$, and $M^f(x)$.

Regarding Section 6.1.3.2: in retrospect, it seems better to present
the ideas in terms of MA2 vs MA1,
where MA denotes a generic randomized version of NP,
and MA2 (resp., MA1) is the two-sided error (resp., one-sided error)
version of it.
Specifically, $S\in MA2$ (resp., $S\in MA1$) if there exists
a probabilistic polynomial-time machine $V$ such that
1. If $x\in S$, then there exists $y$ of $\poly(|x|)$ length
such that $\Prob[V(x,y)=1] \geq 2/3$ (resp., equals 1);
2. If $x\not\in S$, then for every $y$
it holds that $\Prob[V(x,y)=1] \leq 1/3$ (resp., $\leq 1/2$).
(Note that, as for BPP and RP, error-reduction holds here too.)
Obviously, both BPP and MA1 are contained in MA2;
and recall that it is not known whether BPP is in NP.
The proof to be presented shows that MA2 is contained in MA1;
that is, we can eliminate the error in the completeness condition.

On the 4th line of page 201, the fact that BPL is contained in P
does require a proof, so it was inadequate to label it as evident
(i.e., at the same level as RL contained in NL contained in P).
A possible proof consists of an algorithm that proceeds in iterations
such that in the $i$th iteration, the algorithm computes the probability
of reaching each of the possible configurations in $i$ steps,
where these probabilities are computed based on the probabilities
of reaching each of the configurations in $i-1$ steps.
Recall that the definition of BPL guarantees that the number
of configurations as well as the number of steps is polynomial
in the input length. [Gilad Ben-Uziahu]
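The iterative computation just described can be sketched as follows; this is a minimal illustration (not from the book), and the configuration graph and function names are hypothetical.

```python
# Sketch of the iterations: prob[c] holds the probability of being in
# configuration c after the current number of steps; each configuration
# has equiprobable successors (halting configurations loop to themselves).
# Since both the number of configurations and the number of steps are
# polynomial in the input length, the whole computation is polynomial-time.

def acceptance_probability(succ, start, accept, steps):
    """succ[c] = list of equiprobable successor configurations of c."""
    prob = {c: 0.0 for c in succ}
    prob[start] = 1.0
    for _ in range(steps):            # i-th iteration: i-step probabilities
        nxt = {c: 0.0 for c in succ}
        for c, p in prob.items():
            for s in succ[c]:
                nxt[s] += p / len(succ[c])
        prob = nxt
    return prob[accept]

# toy configuration graph: a single coin toss decides acceptance
succ = {'start': ['acc', 'rej'], 'acc': ['acc'], 'rej': ['rej']}
```

With this toy graph, acceptance_probability(succ, 'start', 'acc', t) equals 1/2 for every t >= 1, as expected for a single fair coin toss.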

On page 221: As noted by a group of students at Weizmann,
the definition of strongly parsimonious is slightly confusing,
since $x$ is used in a different sense a line before.
A better text may read:
Specifically, we refer to search problems $R\in\PC$
such that $R'(x;y')\eqdef\{y'':(x,y'y'')\in R\}$
is {\em strongly parsimoniously reducible to $R$},
where a {\sf strongly parsimonious reduction of $R'$ to $R$}
is a parsimonious reduction $g$ that is coupled with an
efficiently computable function $h$ such that $h(x',\cdot)$
is a 1-1 mapping of $R(g(x'))$ to $R'(x')$
(i.e., $(g,h)$ is a Levin-reduction of $R'$ to $R$
such that $h$ is a 1-1 mapping of solutions).

Clarification for Exercise 6.2 (page 230).
In all items, we assume that the reductions only make queries
of length that is polynomially related to the length of their input.
This guarantees that error probabilities that are negligible in terms
of the length of the queries are also negligible in terms of the length
of the input. This comment is crucial only for search problems,
where error reduction is not necessarily possible.
[Ben Berger and Yevgeny Levanzov]

Additional exercise for randomized log-space
[suggested by Yam Eitan]:
We say that a probabilistic machine always halts
if there exists no infinite sequence of internal coin tosses
for which the machine does not halt.
Show that a log-space probabilistic machine is admissible
(i.e., always halts in polynomial time)
if and only if it always halts.
Guideline: Show that both conditions are equivalent to requiring
that the digraph of all possible configurations has no directed cycle.
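The guideline's condition is easy to test in time polynomial in the number of configurations; the following DFS-based cycle check is a generic sketch (not part of the exercise), where succ is a hypothetical adjacency mapping and halting configurations have no successors.

```python
# Generic check of the guideline's condition: the digraph of possible
# configurations has a directed cycle iff some sequence of coin tosses
# keeps the machine running forever.  Standard three-color iterative DFS.

def has_directed_cycle(succ):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in succ}
    for root in succ:
        if color[root] != WHITE:
            continue
        color[root] = GRAY
        stack = [(root, iter(succ[root]))]
        while stack:
            v, it = stack[-1]
            advanced = False
            for w in it:
                if color[w] == GRAY:       # back edge: directed cycle found
                    return True
                if color[w] == WHITE:
                    color[w] = GRAY
                    stack.append((w, iter(succ[w])))
                    advanced = True
                    break
            if not advanced:
                color[v] = BLACK           # all successors explored
                stack.pop()
    return False
```

Since the number of configurations of a log-space machine is polynomial in the input length, this test runs in polynomial time.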

Typo in Exercise 6.39, Part 1 (page 240).
In the definition of $R'$, it should be $h(y)=0^i$ (not $h(x)=0^i$).
[Noa Loewenthal]

Typo on page 245. In Item 1 of Prop 7.2: NP should be PC.

Pages 261-263: See comment regarding Sec 1.1.5 (page 16).
Indeed, here "poly" usually stands for a fixed (but unspecified) polynomial
that may change without warning;
for example, the text sometimes uses $poly(n^2) = poly(n)$ etc.
[Dima Kogan]

Typo on page 264.
In the paragraph introducing the simplified notations,
one should set $Z=Z_{n-\ell}$
(rather than $Z=Y_{n-\ell}$ as stated).

Addition to Exer. 7.24:
An alternative and less direct solution is showing that
the existence of non-uniformly hard one-way functions
implies that PC contains search problems
that have no polynomial-size circuits (a.e.), which in turn
implies that NP (and also E) is not contained in P/poly (a.e.).
Note that the a.e. (almost everywhere) clause requires
using slightly more care in the reductions.

In Construction 8.17 (page 310),
we mean $i_1

Typo on page 317 (footnote 32):
replace "automata consider here" by "automata considered here".

Better formulation for the 6th line following Def 8.20
(on page 318): replace $s(k)-k$ by $k-s(k)$;
it reads more naturally this way.

Page 322 lines 21-22 (sketch of proof of Thm 8.22):
The two inputs of the extractor are presented here in reverse order.
See comment below regarding Apdx D.4.

Typo on page 322 line 27 (just before the itemized list):
replace "similarly for $G_2$ with respect to $(s_1,\eps_1)$ and $\ell_2$"
by "similarly for $G_2$ with respect to $(s_2,\eps_2)$ and $\ell_2$".

Typo on page 327 line 4:
the set of conditions has $j\in\{0,1,...,t-1\}$
(rather than $j\in\{1,...,t-1,t\}$).

Typo on page 332, 3rd line of Sec 8.5.2.3:
replace "characteristic" by "cardinality".

Addendum to Exer. 8.18: Actually size $O(2^k \cdot k)$ suffices;
the circuit may just hold the $(k+1)$-bit long prefixes of
the strings in the range of $G$.
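To see why the prefixes suffice: for a generator $G$ stretching $k$ bits to more than $k$ bits, the set of $(k+1)$-bit prefixes of $G$'s outputs has at most $2^k$ elements, i.e., at most half of $\{0,1\}^{k+1}$, so a table of these prefixes (of size $O(2^k \cdot k)$) accepts every pseudorandom string but a uniformly random string with probability at most $1/2$. A sketch with a hypothetical toy generator standing in for an arbitrary $G$:

```python
from itertools import product

def toy_G(seed_bits):
    # Hypothetical toy generator mapping k bits to 2k bits (the seed
    # followed by its bitwise complement); a stand-in for an arbitrary G.
    return seed_bits + tuple(1 - b for b in seed_bits)

k = 4
# The table: all (k+1)-bit prefixes of strings in the range of G.
prefixes = {toy_G(s)[:k + 1] for s in product((0, 1), repeat=k)}
assert len(prefixes) <= 2 ** k   # at most 2^k entries of k+1 bits each

def distinguisher(x):
    # Accept iff the (k+1)-bit prefix of x appears in the table.
    return x[:k + 1] in prefixes

# Accepts every output of G ...
assert all(distinguisher(toy_G(s)) for s in product((0, 1), repeat=k))
# ... but at most half of all (k+1)-bit prefixes of random strings.
frac = sum(distinguisher(p) for p in product((0, 1), repeat=k + 1)) / 2 ** (k + 1)
print(frac)  # 0.5
```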

Typo on line 22 of page 355:
replace "input common input" by "common input".

Typo on page 361 (just after Eq (9.3)):
replace "module" by "modulo".

Typo on page 364 (header of Def 9.6):
replace "proof" by "proofs".

Better formulation for 2nd sentence of Sec 9.1.5:
replace "conflicting" by "opposite".

Minor error on page 387 (line 7):
It does not follow that NP is contained in $PCP(q,O(1))$,
where $q$ is a quadratic polynomial, but rather that
the set of satisfied Quadratic Equations is in $PCP(q,O(1))$.

Typo on page 391 (line 7):
The time complexity is polynomial in the size of $C_\omega$
(and not in the size of $C_\omega^{-1}$).

Typo on page 400 (last paragraph):
add q as a superscript to gapGSAT.
Also replace "can be reduce" by "can be reduced".

Two typos on page 403 (last paragraph):
In the 4th line following Thm 9.23, replace the 2nd "is" by "are".
In the 3rd line from the bottom of the page, replace "PCPs" by "PCP".

In Exer 9.4, the current guideline presumes that each variable
appears in a constant number of clauses.
The general statement can be proved by arithmetizing the
Boolean formula by using $n'=n/\log n$ variables and looking
at the sum taken over $[n]^{n'}$ rather than over $\{0,1\}^n$.
This requires introducing polynomials that map $[n]$
to a sequence of $\log n$ values in $\{0,1\}$.
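One concrete way (a sketch, not part of the exercise text) to obtain polynomials mapping a value $v \in [n]$ to its $\log n$ bits is Lagrange interpolation over a prime field $F_q$ with $q > n$: for each bit position $j$, this gives a polynomial of degree at most $n-1$ agreeing with the $j$-th bit of $v$ on all of $[n]$. The parameters below ($n=8$, $q=97$) are illustrative:

```python
n, q = 8, 97   # hypothetical parameters: n values, a prime q > n

def poly_mul_linear(p, u, q):
    """Multiply polynomial p (coefficients, low degree first) by (x - u) mod q."""
    out = [0] * (len(p) + 1)
    for i, c in enumerate(p):
        out[i + 1] = (out[i + 1] + c) % q
        out[i] = (out[i] - u * c) % q
    return out

def bit_polynomial(j):
    """Coefficients of p_j with p_j(v) = j-th bit of v for every v in [n]."""
    coeffs = [0] * n
    for v in range(n):
        if not (v >> j) & 1:
            continue                      # basis terms with bit 0 contribute nothing
        basis, denom = [1], 1
        for u in range(n):
            if u != v:
                basis = poly_mul_linear(basis, u, q)
                denom = denom * (v - u) % q
        scale = pow(denom, q - 2, q)      # modular inverse of the denominator
        for i, c in enumerate(basis):
            coeffs[i] = (coeffs[i] + scale * c) % q
    return coeffs

def evaluate(p, x, q):
    return sum(c * pow(x, i, q) for i, c in enumerate(p)) % q

# Each p_j has degree at most n-1 and recovers bit j of every v in [n].
for j in range(3):
    p_j = bit_polynomial(j)
    assert all(evaluate(p_j, v, q) == (v >> j) & 1 for v in range(n))
print("all bit polynomials agree on [n]")
```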

In Exer 9.24, the current guideline does not yield
a linear size instance, but rather an instance
with a linear number of vertices/variables.
To obtain a linear size instance, either reduce from "restricted 3SAT"
in which each variable appears in O(1) clauses
(and you can show a linear-time reduction from 3SAT to this restricted form)
or use consistency conditions only on a spanning tree of the set of
clauses that share a variable.
[Pointed out by Nitzan Uziely]

Typo on page 431 (5th line of Step 2):
replace "is that it very sensitive"
by "is that it is very sensitive".

Typo on page 434 (5th line):
replace "practical significant" by "practical significance".
[Michal Koucky]

Copyeditor error on page 452:
The heading "Approximations" should be unnumbered.

Copyeditor error on page 471:
The definition should be numbered B.1.

Appendix D.4 - Randomness Extractors (pages 537-544):
The two inputs of the extractor are presented here in reverse order.
That is, usually the first input is the $n$-bit long source and the
second input is the $d$-bit long seed, whereas in the text the order
is reversed.
(This reversal was unconscious.
I guess the nonstandard order strikes me as more natural,
but the difference is not significant enough as to justify
deviation from the standard.)

Typo on page 556 (5th line in the paragraph on amplification):
deceased should be decreased.
[Orr Paradise]

Addendum to last paragraph of Section E.2.1.2 (page 557):
I forgot to credit Noga Alon for this suggestion,
but currently I do not see a simple way to prove $c/2$-expansion.
See a current memo,
which only establishes $c/12$-expansion
(using a more modular argument than the one envisioned above).

Addendum to Lemma E.8 (page 558).
In the special case of $B=V-A$,
we can use $\sqrt{|A|\cdot|B|} \leq N/2$,
and so the number of edges going out
of $A$ is at least $((1-\rho(A))\cdot d - \lambda/2)\cdot|A|$.
It follows that for $|A| \leq N/2$,
it holds that $|\Gamma(A)|-|A| \geq (1-(\lambda/d))\cdot|A|/2$.
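The final step of this addendum can be spelled out (assuming, as in the appendix's notation, that $\rho(A)$ denotes the density $|A|/N$): since each vertex outside $A$ absorbs at most $d$ of the edges leaving $A$, and $\rho(A)\leq 1/2$ whenever $|A|\leq N/2$,

```latex
|\Gamma(A)| - |A|
  \;\geq\; \frac{\big((1-\rho(A))\cdot d - \lambda/2\big)\cdot|A|}{d}
  \;\geq\; \frac{(d/2 - \lambda/2)\cdot|A|}{d}
  \;=\; \left(1-\frac{\lambda}{d}\right)\cdot\frac{|A|}{2}.
```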