This is an old list/webpage that was discontinued in March 2010. The current list appears in the webpage my-choice.html (one level below this directory).

My impression is that STOC and FOCS no longer function as forums devoted to the presentation and exchange of ideas (but rather function as "weight-lifting competitions"; see more on this). The question is what can be done to serve the need for the lost forums. One answer is provided by personal blogs in which individual researchers present their own choices of (relatively) recent works that have drawn their attention.

My problem with many of these blogs is that they cover the obvious (i.e., they all focus more or less on the same "hot results" that draw everybody's attention). I would like to contribute to the project of regaining forums devoted to the presentation of ideas in a different way; specifically, by calling attention to works that have fascinated me although they were not necessarily labeled as "hot".

Needless to say, my choices will be restricted to my own research areas, and even within these areas the choices will be confined to what I have heard and/or read (and understand well enough to write about...). Thus, the fact that a specific work is not mentioned does not indicate that I have a "not so high" opinion of it; it may be that I do not know about the work, that I do not know it well enough to feel comfortable writing about it, or that I am too preoccupied with something else and neglect this new commitment. Lastly, let me note that my own works will not be included in my choices (to follow).

**FAQ: but most of your choices have appeared in STOC/FOCS**

[see my answer]

Yes, **most**...

But my main point is not that the program of STOC/FOCS does not include interesting works, but rather that (1) not all interesting works appear in STOC/FOCS, and (2) the focus of attention at STOC/FOCS is not on the program itself (i.e., on actually learning from the works, discussing them with others, etc.). That is, (2) asserts that the atmosphere at STOC/FOCS is such that the actual content of the works is lost in a surrounding atmosphere that focuses on competition.

[Editorial decisions.]

Initial policy: Include only works that I heard or read after Jan. 1, 2009.

Feb. 22, 2009: Include also surveys
(see the 1st such case).

Mar. 8, 2009: Include "double features"
(see the 1st such case).

Jul. 10, 2009: Include also highlights on open problems
(see the 1st such case).

Jan. 15, 2010: Include also overall accounts of workshops and/or conferences
(see the 1st such case).

[Omer at Shafi's Seminar, Jan. 22, 2009]

**Accessible Entropy (tentative title)**
by Iftach Haitner, Omer Reingold, Salil Vadhan, and Hoeteck Wee.

[my comments]

The standard notion of "computational entropy" refers to a situation
in which a distribution appears to have a higher entropy than it
actually has; that is, a distribution $X$ is computationally
indistinguishable from a distribution $Y$ whereas $H(Y) > H(X)$,
where $H(Z)$ denotes the standard (information theoretic) measure
of entropy.
The new notion refers to a "kind of opposite" situation
in which sampling the support of a distribution fails
to achieve its entropy; that is, the output of any feasible procedure $S$
that samples the support of a distribution $X$ has entropy
lower than the entropy of $X$ (i.e., $H(S) < H(X)$,
where $S$ is identified with its output distribution).
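To fix notation, here is a minimal sketch (in Python, with hypothetical toy distributions of my own choosing) of the information-theoretic measure $H$ used above; it illustrates only standard Shannon entropy, not the computational notions themselves.

```python
from math import log2

def shannon_entropy(dist):
    """Shannon entropy H (in bits) of a finite distribution {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# X: uniform over four strings, so H(X) = 2 bits.
X = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}

# S: the output of a (toy) sampler that covers only half of X's support,
# mimicking the accessible-entropy situation H(S) < H(X).
S = {"00": 0.5, "01": 0.5}

print(shannon_entropy(X), shannon_entropy(S))  # 2.0 1.0
```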

One nice application of the new notion is a construction of a statistically hiding commitment (from any one-way function) which mirrors the construction of statistically binding commitment (from any one-way function, via the construction of a pseudorandom generator). Thus, these two dual primitives are derived via analogous manipulations of dual notions of computational entropy.

[original abstract]

We put forth a new computational notion of entropy, which measures the (in)feasibility of sampling high-entropy strings that are consistent with a given protocol. Specifically, we say that the $i$th round of a protocol $(A,B)$ has accessible entropy at most $k$, if no polynomial-time strategy $A^*$ can generate messages for $A$ such that the entropy of its message in the $i$th round, conditioned both on prior messages of the protocol and on prior coin tosses of $A^*$, is greater than $k$. As applications of this notion, we

1. Give a much simpler and more efficient construction of statistically hiding commitment schemes from arbitrary one-way functions.

2. Prove that constant-round statistically hiding commitments are necessary for constructing constant-round zero-knowledge proof systems for NP that remain secure under parallel composition (assuming the existence of one-way functions).

[Ran in the WIS Theory Seminar, Feb. 1, 2009]

**A Counterexample to Strong Parallel Repetition**
by Ran Raz

[my comments]

In contrast to prior researchers, who were discouraged by the observation
that providing such a counterexample would yield a surprising result
regarding a problem in geometry (i.e., "foams"),
Ran simply provides such a counterexample
(which subsequently led to the analogous "foam" result reported in
*Spherical Cubes and Rounding in High Dimensions* [by
Guy Kindler, Anup Rao, Ryan O'Donnell, and Avi Wigderson, 2008]).
Ran considers the very same Odd-Cycle Game,
which is a Unique Answer game and was considered before,
and provides almost optimal strategies for it.
These strategies select an edge to cheat about at random,
with a probability distribution that decreases gradually,
but exponentially (in a base adequately close to 1),
with the distance from the query.

The subsequent work *Rounding Parallel Repetitions of Unique Games*
[by Boaz Barak, Moritz Hardt, Ishay Haviv, Anup Rao, Oded Regev,
and David Steurer]
traces the source of the failure of strong parallel repetition
to a gap between the value of the game and the value of an SDP relaxation
of the same game (which, as shown by Feige and Lovasz [1992],
does satisfy strong parallel repetition).

[original abstract]

I will give a short introduction to the problem of parallel repetition of two-prover games and its applications to theoretical computer science, mathematics and physics. I will then describe a recent counterexample to the strong parallel repetition conjecture (a conjecture that would have had important applications).

In a two-prover (alternatively, two-player) game, a referee chooses questions $(x,y)$ according to a (publicly known) distribution, and sends $x$ to the first player and $y$ to the second player. The first player responds by $a=a(x)$ and the second by $b=b(y)$ (without communicating with each other). The players jointly win if a (publicly known) predicate $V(x,y,a,b)$ holds. The value of the game is the maximal probability of success that the players can achieve, where the maximum is taken over all protocols $a=a(x),b=b(y)$.

A parallel repetition of a two-prover game is a game where the players try to win $n$ copies of the original game simultaneously. More precisely, the referee generates questions $x=(x_1,...,x_n), y=(y_1,...,y_n)$, where each pair $(x_i,y_i)$ is chosen independently according to the original distribution. The players respond by $a=(a_1,...,a_n)$ and $b=(b_1,...,b_n)$. The players win if they win simultaneously on all the coordinates, that is, if for every $i$, $V(x_i,y_i,a_i,b_i)$ holds.
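As a concrete illustration of these definitions (my own, not taken from the talk), the value of a small game and of its parallel repetition can be computed by brute force over deterministic strategies (shared randomness cannot increase the value, so this search is exhaustive). The sketch below uses the classical CHSH game, where the players win iff $a \oplus b = x \wedge y$, and exploits the fact that, once the first player's strategy is fixed, the second player's best response can be chosen separately for each question.

```python
from itertools import product
from fractions import Fraction

def value(n):
    """Value of n parallel copies of the CHSH game, by brute force.
    Win condition on each coordinate j: a_j XOR b_j == x_j AND y_j."""
    Q = list(product([0, 1], repeat=n))  # each player's possible question tuples
    best = Fraction(0)
    for a in product(Q, repeat=len(Q)):  # first player: question tuple -> answer tuple
        wins = 0
        for y in Q:
            # For a fixed first-player strategy, the second player may
            # optimize the answer b separately for each question y.
            wins += max(
                sum(all(a[i][j] ^ b[j] == (x[j] & y[j]) for j in range(n))
                    for i, x in enumerate(Q))
                for b in Q
            )
        best = max(best, Fraction(wins, len(Q) ** 2))
    return best

v1, v2 = value(1), value(2)
print(v1, v2)               # the single-game value is 3/4
assert v1 ** 2 <= v2 <= v1  # repetition cannot beat one copy, nor do worse than independent play
```

Whether the two-fold value equals $(3/4)^2$ or strictly exceeds it is exactly the kind of question behind the strong parallel repetition problem; the brute force above settles it for this toy case.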

The parallel repetition theorem states that for any two-prover game, with value $1-\epsilon$ (for, say, $\epsilon < 1/2$), the value of the game repeated in parallel $n$ times is at most $(1- \epsilon^3)^{\Omega(n/s)}$, where $s$ is the answers' length (of the original game). The theorem has important applications in complexity theory, quantum computation, and geometry.

Several researchers asked whether this bound could be improved to $(1-\epsilon)^{\Omega(n/s)}$; this question is usually referred to as the strong parallel repetition problem. A positive answer would have had important applications. We show that the answer for this question is negative.

[Kobbi at Shafi's Seminar, Feb. 5, 2009]

**What Can We Learn Privately?**
by Shiva Kasiviswanathan, Homin Lee, Kobbi Nissim,
Sofya Raskhodnikova, and Adam Smith.

[my comments]

This work, which builds on the prior work
*Practical Privacy: The SuLQ Framework*
[by Avrim Blum, Cynthia Dwork, Frank McSherry, and Kobbi Nissim, 2005],
studies which PAC learning tasks can be performed privately;
that is, the requirement is that the output of the learning process
(i.e., the hypothesis) does not violate the privacy of the
individual samples (obtained in the learning process).
Loosely speaking, they show that any PAC learnable concept class
can be learned privately, but efficiency as well as the tight relation
to the VC-dimension may be lost.
They also relate the restricted PAC model of learning
by statistical queries to a restricted model of private computation
(i.e., local algorithms, a.k.a. "randomized response" algorithms).

A remotely related comment: the notion of "differential privacy" is stronger than the related notion of "statistical difference" (when both notions are quantified wrt the same error level). However, "differential privacy" is considered with respect to a small but noticeable error, whereas in Cryptography one usually considers negligible "statistical difference". (Recall that "differential privacy" refers to the pointwise ratio between two probability distributions, where the ratio is required to be close to 1 (i.e., it is 1+error).)
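To make the comparison concrete, here is a small sketch (mine, not from the paper) using classical randomized response with a hypothetical truth-telling probability of 3/4: the pointwise ratio between the two possible output distributions is bounded (so differential privacy holds with $\epsilon = \ln 3$), yet their statistical difference is a very noticeable 1/2.

```python
from math import log

p = 0.75  # hypothetical probability of reporting one's true bit

# Output distributions of randomized response, given true bit 0 or 1.
D0 = {0: p, 1: 1 - p}
D1 = {0: 1 - p, 1: p}

# Differential privacy: bound on the pointwise log-ratio of the two distributions.
eps = max(abs(log(D0[o] / D1[o])) for o in (0, 1))

# Statistical difference (total variation distance) between the same distributions.
sd = 0.5 * sum(abs(D0[o] - D1[o]) for o in (0, 1))

print(eps, sd)  # eps = ln 3 (so the pointwise ratio is at most 3), sd = 0.5
```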

[original abstract]

Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life datasets. Defining a private learning algorithm as an algorithm that is both a learner and preserves the notion of differential privacy, we can now examine what concept classes can be learned privately.

We begin by showing a variant of the classical Occam's razor result. I.e., that ignoring computational complexity, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the hypothesis class. Specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples.

We also present a computationally efficient private PAC learner for the class of parity functions. This result is somewhat surprising because parity is thought to be very hard to learn given random classification noise, and learning with noise and private learning are intuitively similar (both must be robust to small changes in inputs).

Finally, we focus on the class of local algorithms (also called randomized response), a class of private algorithms that has received extensive investigation due to their practicality. We provide a precise characterization of local private learning. Namely, a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model, and furthermore, the number of rounds in the local algorithm corresponds to the adaptivity of the SQ learner. This allows us to present a separation between the power of interactive and non-interactive local learning algorithms.

[Private presentation by Or Meir, Feb. 12, 2009]
(Avi: "Shame on you, did you not know it?")

**Uniform Direct Product Theorems: Simplified, Optimized, and Derandomized**
by Russell Impagliazzo, Ragesh Jaiswal, Valentine Kabanets, and Avi Wigderson

[my comments]

What is striking in this version of the Direct Product Theorem
is that it does not require samples of **(input,value)** pairs
for the basic function (denoted $f$).
Instead, the reduction
(of strongly approximating $f$ to weakly approximating $f^k$)
uses the oracle itself to generate such samples.
Actually, the reduction makes a single query and, based
on it, produces, with small (but noticeable) probability,
an oracle machine that strongly approximates $f$
(when given oracle access to a weak approximator of $f^k$).

Specifically, for $k'=k/2$, the reduction selects uniformly a $k'$-subset $A$ of the domain, and generates a machine $M_{A,V}$, where $V$ is the value obtained from the oracle when querying it on a random $k$-set that contains $A$. On input $x$ and oracle access to $F$ (which weakly approximates $f^k$), the machine $M_{A,V}$ samples random $k$-sets $B$ that contain $A$ and $x$, and returns $F(B)_x$ for the first $B$ that satisfies $F(B)_A=V$. There is also a derandomized version, which I did not study yet (shame on me!).
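The following toy sketch (with hypothetical tiny parameters of my own, and with a perfect oracle $F=f^k$ standing in for the weak approximator that the actual theorem handles) illustrates only the mechanics of the machine $M_{A,V}$:

```python
import random

rng = random.Random(0)
U = list(range(8))               # toy domain
k, kp = 4, 2                     # k, and k' = k/2
f = lambda x: (3 * x + 1) % 5    # the basic function (arbitrary toy choice)

def F(B):
    """Oracle for f^k on k-sets; here a perfect one, for illustration only."""
    return {x: f(x) for x in B}

# Build M_{A,V}: pick a random k'-subset A, query the oracle on a random
# k-set B0 containing A, and record the answers V on A.
A = frozenset(rng.sample(U, kp))
B0 = set(A) | set(rng.sample([u for u in U if u not in A], k - kp))
V = {a: F(B0)[a] for a in A}

def M(x, tries=100):
    """On input x: sample k-sets B containing A and x, and answer F(B)_x
    for the first B on which the oracle agrees with V on A."""
    for _ in range(tries):
        core = set(A) | {x}
        B = core | set(rng.sample([u for u in U if u not in core], k - len(core)))
        ans = F(B)
        if all(ans[a] == V[a] for a in A):
            return ans[x]
    return None

assert all(M(x) == f(x) for x in U)  # with a perfect oracle, M recovers f everywhere
```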

[original abstract]

The classical Direct-Product Theorem for circuits says that if a Boolean function $f:\{0,1\}^n\to\{0,1\}$ is somewhat hard to compute on average by small circuits, then the corresponding $k$-wise direct product function $f^k(x_1,\dots,x_k)=(f(x_1),\dots,f(x_k))$ (where each $x_i\in\{0,1\}^n$) is significantly harder to compute on average by slightly smaller circuits. We prove a *fully uniform* version of the Direct-Product Theorem with information-theoretically *optimal* parameters, up to constant factors. Namely, we show that for given $k$ and $\epsilon$, there is an efficient randomized algorithm $A$ with the following property. Given a circuit $C$ that computes $f^k$ on at least $\epsilon$ fraction of inputs, the algorithm $A$ outputs with probability at least $3/4$ a list of $O(1/\epsilon)$ circuits such that at least one of the circuits on the list computes $f$ on more than $1-\delta$ fraction of inputs, for $\delta = O((\log 1/\epsilon)/k)$; moreover, each output circuit is an $AC^0$ circuit (of size $poly(n,k,\log 1/\delta,1/\epsilon)$), with oracle access to the circuit $C$.

Using the Goldreich-Levin decoding algorithm [GL89], we also get a *fully uniform* version of Yao's XOR Lemma [Yao] with *optimal* parameters, up to constant factors. Our results simplify and improve those in [IJK06].

Our main result may be viewed as an efficient approximate, local, list-decoding algorithm for direct-product codes (encoding a function by its values on all $k$-tuples) with optimal parameters. We generalize it to a family of "derandomized" direct-product codes, which we call *intersection codes*, where the encoding provides values of the function only on a subfamily of $k$-tuples. The quality of the decoding algorithm is then determined by sampling properties of the sets in this family and their intersections. As a direct consequence of this generalization we obtain the first derandomized direct product result in the uniform setting, allowing hardness amplification with only constant (as opposed to a factor of $k$) increase in the input length. Finally, this general setting naturally allows the decoding of concatenated codes, which further yields nearly optimal derandomized amplification.

[an editorial comment]

Hardly a month has elapsed and I'm already faced with an "editorial" decision:
Should I include surveys?
This was not the original intention, but I really value surveys
and it would feel odd for me not to reflect this attitude here.

[Dieter at the WIS Theory Seminar, Feb. 22, 2009]

**A Survey of Lower Bounds for Satisfiability and Related Problems**
by Dieter van Melkebeek

[my comments]

Let me augment the high-level abstract with a taste of the actual results.
Loosely speaking, the main such result states that
for every $c \leq d$ such that either $(c-1)d<1$ or $(d-1)cd-2d < 1$
there exists an $e>0$
such that SAT is not in the intersection
of $DTime[n^c]$ and $DTiSp[n^d,n^e]$.
For example, one may conclude that SAT is not in the intersection
of $DTime[n^{1+o(1)}]$ and $DTiSp[poly(n),n^{o(1)}] \supseteq L$.

The proof typically proceeds by assuming, towards a contradiction, that $NTime[n]$ is contained in both $coNTime[n^c]$ and $DTiSp[n^d,n^e]$, and derives a contradiction by using (1) time speedup (for DTiSp) via alternation, and (2) eliminating alternation by using the 1st hypothesis. Specifically, a "DTiSp[t,s]-computation" can be encoded by a statement of the form "there exist $b-1$ intermediate configurations such that for every $i\in[b-1]$ the $i$th configuration follows from the $(i-1)$st in $t/b$ steps", which yields a $\Sigma_2$ computation with time $bs+(t/b)$ (which equals $2\sqrt{ts}$ when $b=\sqrt{t/s}$). We can also get a $\Pi_2$ expression by considering the negation. On the other hand, using $NTime[t] \subseteq coNTime[t^c]$, we derive $\Pi_2Time[t] \subseteq co\Pi_1Time[(n+t)^c]$. Combining both, we get $DTiSp[t,t^{o(1)}] \subseteq coNTime[t^{c/2}]$, which implies $NTime[n] \subseteq DTiSp[n^d,n^{o(1)}] \subseteq coNTime[n^{dc/2}]$ (yielding a contradiction for $cd < 2$).
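As a sanity check on the arithmetic in the block-decomposition step (with illustrative values of $t$ and $s$ that I picked for the example), the guess-and-check time $bs + t/b$ is indeed minimized at $b=\sqrt{t/s}$, where it equals $2\sqrt{ts}$:

```python
from math import sqrt, isclose

t, s = 10**6, 10**2   # illustrative time and space bounds
b = sqrt(t / s)       # optimal number of blocks

# At the optimum, guessing b-1 configurations (cost b*s) plus verifying
# one block (cost t/b) totals exactly 2*sqrt(t*s).
assert isclose(b * s + t / b, 2 * sqrt(t * s))

# Any other block count does strictly worse (AM-GM inequality).
for other in (b / 2, 2 * b):
    assert other * s + t / other > 2 * sqrt(t * s)

print(b * s + t / b)  # 20000.0
```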

[original abstract]

Ever since the work of Cook and Levin, satisfiability has been recognized as a central problem in computational complexity. It is widely believed to be intractable, and yet till recently even a linear-time logarithmic-space algorithm was not ruled out.

The last few years have seen considerable progress in time-space lower bounds for satisfiability and closely related problems on various models of computation. This talk will describe the state-of-the-art on deterministic, randomized, and quantum machines, survey the underlying ideas, and present some of the arguments involved.

[Avi at the WIS Theory Seminar, March 8, 2009]
[+ recall of a private presentation by Elazar, Dec. 25th 2008]

**(1) Locally Testing Direct Product in the Low Error Range**
by Irit Dinur and Elazar Goldenberg

**(2) New Direct Product Code Testers, and PCPs**
by Valentine Kabanets, Russell Impagliazzo, and Avi Wigderson

[my comments]

The object being investigated is the "direct product code", just as
in the work *Uniform Direct Product Theorems:
Simplified, Optimized, and Derandomized*. However, there the task
was (locally) decoding (in the list decoding regime), whereas here
the task is (locally) testing (also in the list decoding regime).
Locally testing this code (and its derandomization) was first considered
by Muli Safra and myself in the mid 1990s,
but while our original motivation was the list decoding regime,
we were only able to deal with the unique decoding regime.

Testing in the list decoding regime means a characterization of the type of functions $F:U^k \to R^k$ that are accepted by the test with small (but not too small) probability. For the natural test that checks consistency on two random $k$-sequences with a significant intersection (i.e., intersection size $m=\sqrt{k}$ or so), such a characterization was provided in [1] for the case that the acceptance probability is $k^{-\Omega(1)}$. It was also shown in [1] that this test cannot provide meaningful information for lower acceptance probability, whereas [2] showed that an alternative 3-query test can handle even exponentially small (in $k$) acceptance probability. Loosely speaking, in both cases the characterization says that any such passing $F$ fits a small (i.e., polynomially related to the acceptance probability) collection of functions $f:U \to R$, in the sense that for almost all $x=(x_1,...,x_k)\in U^k$ the value of $F(x)$ is close to $(f(x_1),...,f(x_k))$.

The proofs proceed in two steps. The first step refers to a notion of "local" consistency, which means that we consider all $k$-sequences that contain a typical $m$-subset (and $k-m$ other arbitrary elements) and show that this set of $k$-sequences fits a small collection of functions $f$. The second step shows that these different local views are actually "globally" consistent (i.e., they typically refer to the same collection of functions $f$). The intuition underlying both steps is that consistency of $F$ on two random $k$-sequences of intersection size $m$ implies similar consistency when the intersection size is $m+1$ or $2m$. (Be warned that the actual proofs are quite involved.)

[abstract of Avi's talk]

The direct product code encodes the truth table of a function $f:U\to R$ by a function $f^k:U^k \to R^k$, which lists the value of $f$ on all $k$-tuples of inputs. We study (approximate) proximity testing to this code in the "list decoding" regime.

We give a 3-query tester for distance $1-\exp(-k)$ (which is impossible with 2 queries). We also give a 2-query tester for distance $1-1/poly(k)$ for a derandomized version of this code (of polynomial rate).

We then use the same techniques to reduce soundness error in 2-query PCPs, which gives incomparable decay to the Parallel Repetition Theorem.

[Tal at TCC'09, March 15, 2009]

**An Optimally Fair Coin Toss**
by Tal Moran, Moni Naor, and Gil Segev.

[my comments]

The great success of theoretical research
in cryptography in the 1980s led to a feeling
that everything is possible. This view has been
changing since the 1990s, when the emergence of
various impossibility results got more attention.
Most of these impossibility results are not absolute,
but rather refer to natural well-defined abstractions
of the known (and even "perceivable") proof techniques.

For example, while 2-message zero-knowledge proofs were shown to exist only for BPP sets, restricting the simulation (which underlies the definition of ZK) to black-box use of the adversary also rules out 3-message protocols as well as any constant-round public-coin protocol. Indeed, Barak's ZK protocol (simulated via non-black-box use of the adversary's program) is the most dramatic example of a construction that bypasses the known limitations of a natural and general formulation of the known (and "perceivable") proof techniques.

The current paper provides yet another striking example. It refers to the tradeoff between the round complexity of a coin tossing protocol and the influence an adversary may have on its outcome (in a model that mandates that the honest party outputs a coin value also in the case that the other party aborts the protocol).

Cleve showed in 1986 that in any $r$-round protocol one party can influence the outcome by at least $\Omega(1/r)$, whereas a simple $r$-round protocol guarantees influence of at most $O(1/\sqrt{r})$. Furthermore, Cleve and Impagliazzo (1993) proved that, within a natural well-defined construction paradigm, no protocol can outperform the aforementioned simple protocol. Yet, the current paper presents an $r$-round protocol (which deviates from the latter construction paradigm) that guarantees influence of at most $O(1/r)$.
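The $O(1/\sqrt{r})$ bound for the simple protocol (essentially, taking the majority of $r$ jointly generated coins) comes from the chance that a single coin is pivotal for the majority, which scales like the central binomial probability; the actual protocol analysis is more delicate, but the $\Theta(1/\sqrt{r})$ scaling can be checked directly:

```python
from math import comb, pi, sqrt

def central(r):
    """Probability that r fair coins split exactly evenly (r even)."""
    return comb(r, r // 2) / 2 ** r

# By Stirling's approximation, central(r) ~ sqrt(2 / (pi * r)), i.e., Theta(1/sqrt(r)).
for r in (100, 400, 1600):
    assert abs(central(r) / sqrt(2 / (pi * r)) - 1) < 0.01

# Quadrupling the number of rounds roughly halves this pivotality probability.
assert abs(central(400) / central(100) - 0.5) < 0.01
```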

For comments on seven other talks given at TCC'09, see my notes.

[original abstract]

We address one of the foundational problems in cryptography: the bias of coin-flipping protocols. Coin-flipping protocols allow mutually distrustful parties to generate a common unbiased random bit, guaranteeing that even if one of the parties is malicious, it cannot significantly bias the output of the honest party. A classical result by Cleve [STOC '86] showed that for any two-party $r$-round coin-flipping protocol there exists an efficient adversary that can bias the output of the honest party by $\Omega(1/r)$. However, the best previously known protocol only guarantees $O(1/\sqrt{r})$ bias, and the question of whether Cleve's bound is tight has remained open for more than twenty years.

In this paper we establish the optimal trade-off between the round complexity and the bias of two-party coin-flipping protocols. Under standard assumptions (the existence of oblivious transfer), we show that Cleve's lower bound is tight: we construct an $r$-round protocol with bias $O(1/r)$.

[Krzysztof at Weizmann's Theory Seminar, Mar. 22, 2009]

**Sublinear Graph Approximation Algorithms**
by Krzysztof Onak, based on joint works
with Noga Alon, Avinatan Hassidim, Jonathan A. Kelner,
Philip N. Klein, and Huy N. Nguyen.

[my comments]

A natural setting in which sublinear time algorithms
may be useful is in approximating the value of various
graph theoretic quantities, especially when these quantities
are hard to determine (e.g., minVC).
Parnas and Ron (2007)
and Marko and Ron (2007) focused
on maximal matching and minVC, while observing a correspondence
between constant-time approximation algorithms for the size of
a maximal independent set and distributed algorithms that
find such sets. The current works continue this line of research,
while extending the study to many other problems (e.g., maximum
matching, minimum dominating sets, etc.)
and introducing new techniques and improved results.

For details, see *Constant-Time Approximation Algorithms via Local Improvements* (by Nguyen and Onak), *An Improved Constant-Time Approximation Algorithm for Maximum Matchings* (by Yoshida, Yamamoto, and Ito), and forthcoming papers.
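For intuition (my sketch, not the actual algorithms): this line of work locally emulates simple sequential procedures such as a random-order greedy maximal matching, which is already within a factor of 2 of the maximum; answering a single "is this edge in the matching?" query then requires exploring only a small neighborhood when the degree is bounded.

```python
import random

def greedy_maximal_matching(edges, seed=0):
    """Process the edges in a random order, adding an edge whenever both of
    its endpoints are still free. The result is a maximal matching, and any
    maximal matching has at least half the maximum matching's size."""
    rng = random.Random(seed)
    edges = list(edges)
    rng.shuffle(edges)
    matched, M = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            M.append((u, v))
            matched.update((u, v))
    return M

# Path on 9 vertices: the maximum matching has 4 edges.
path = [(i, i + 1) for i in range(8)]
M = greedy_maximal_matching(path)
assert 2 <= len(M) <= 4  # within a factor of 2 of the maximum
```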

[talk's abstract]

Can the size of a solution to a graph problem be computed faster than the solution itself? I will show that the answer to this question is positive in many cases. For example, I will present an approximation algorithm that for the class of graphs with degree bounded by $d$, computes the maximum matching size to within $\epsilon n$ in time that only depends on $d$ and $\epsilon$. I will give an overview of all known results, including some work still in progress.

[Klim at the WIS Theory Seminar, Mar. 29, 2009]

**3-Query Locally Decodable Codes of Subexponential Length**
by Klim Efremenko

[my comments]

This celebrated work is an obvious choice and requires no introduction.
(I'm happy to add it to my collection of choices,
based on the fact that it was presented today at our formal seminar.)
I wish, however, to emphasize two aspects.
Firstly, that the exposition of the construction,
which does build on Yekhanin's ideas, is quite accessible.
Secondly, that, in contrast to Yekhanin's construction,
the current construction scheme has the pleasing feature
of benefitting from more queries
(i.e., it yields shorter lengths when more queries are allowed).

[original abstract]

Locally Decodable Codes (LDC) allow one to decode any particular symbol of the input message by making a constant number of queries to a codeword, even if a constant fraction of the codeword is damaged. In a recent work Yekhanin constructs a $3$-query LDC with sub-exponential length of size $\exp(\exp(O({\log n\over \log\log n})))$. However, this construction requires a conjecture that there are infinitely many Mersenne primes. In this paper we give the first unconditional constant-query LDC construction with subexponential codeword length. In addition, our construction reduces the codeword length. We give a construction of a $3$-query LDC with codeword length $\exp(\exp(O(\sqrt{\log n \log \log n })))$. Our construction can also be extended to a higher number of queries. We give a $2^r$-query LDC with length of $\exp(\exp(O(\sqrt[r]{\log n (\log \log n)^{r-1} })))$.

[Iftach, private discussion, March 2009]

**A Parallel Repetition Theorem for Any Interactive Argument**
by Iftach Haitner

[my comments]

It has been known for more than a decade that parallel repetition
may fail to reduce the error in computationally-sound proof
(a.k.a. argument) systems. This work shows that any argument system
can be slightly modified such that parallel repetition does reduce
the error in the *resulting* argument system. The modification
of a generic argument amounts to having the verifier abort at random
with probability $1/4$ (i.e., after each round, the verifier aborts
with probability $1/(4r)$, where $r$ denotes the number of rounds).
In case of abort, the verifier always accepts, which means that
this modification increases the probability of error.
The benefit of this modification is that the probability
of cheating in the parallel execution is not sensitive to whether
the verifier aborts in any typical individual copy,
which establishes sufficient independence between the copies.
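
As a back-of-the-envelope check (my own sketch; the function names are mine), the cost of the modification is easy to quantify, and the wrapper itself is trivial to express:

```python
import random

def abort_probability(r):
    """Probability that the random-termination verifier halts prematurely
    (and thus accepts unconditionally) in at least one of the r rounds,
    when each round independently aborts with probability 1/(4r).
    This equals 1 - (1 - 1/(4r))^r, which is at most 1/4 for every r."""
    return 1 - (1 - 1 / (4 * r)) ** r

def random_termination(verifier_rounds, rng=random):
    """Wrap a list of per-round verifier predicates: before each round,
    halt and ACCEPT with probability 1/(4r) (the random-termination
    modification); otherwise run the round, and finally output the
    conjunction of the rounds' verdicts."""
    r = len(verifier_rounds)
    for round_fn in verifier_rounds:
        if rng.random() < 1 / (4 * r):
            return True          # premature halt: accept unconditionally
        if not round_fn():
            return False         # the original verifier would reject
    return True
```

So the soundness error grows by at most an additive $1/4$, which is the price paid for the independence between parallel copies.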

The above random-termination modification is applicable to error-reduction (by parallel repetition) of any two-party game, where the error here refers to the probability that a specific party wins although we wish it to lose.

Note that the result is actually incomparable to a recent Parallel Repetition Theorem for any (unmodified) public-coin protocol (by Hastad, Pass, Pietrzak, and Wikstrom).

[original abstract]

The question of whether or not parallel repetition reduces the soundness error is a fundamental question in the theory of protocols. While parallel repetition reduces (at an exponential rate) the error in interactive proofs and (at a weak exponential rate) in special cases of interactive arguments (e.g., 3-message protocols - Bellare, Impagliazzo and Naor [FOCS '97], and constant-round public-coin protocols - Pass and Venkitasubramaniam [STOC '07]), Bellare et al. gave an example of interactive arguments for which parallel repetition does not reduce the soundness error at all. We show that by slightly modifying any interactive argument, in a way that preserves its completeness and only slightly deteriorates its soundness, we get a protocol for which parallel repetition does reduce the error at a weak exponential rate. In this modified version, the verifier flips at the beginning of each round a $(1-\frac{1}{4m},\frac{1}{4m})$-biased coin (i.e., $1$ is tossed with probability $\frac{1}{4m}$), where $m$ is the round complexity of the (original) protocol. If the coin is one, the verifier halts the interaction and accepts; otherwise it sends the same message that the original verifier would. At the end of the protocol (if reached), the verifier accepts if and only if the original verifier would.

[reading, March 2009]

**Are PCPs Inherent in Efficient Arguments?**
by Guy Rothblum and Salil Vadhan

[my comments]

Known constructions of low-communication argument systems utilize PCP systems.
My feeling was that this is a coincidence
(or rather a consequence of the use of a specific construction paradigm),
but the current work essentially proves me wrong.
Specifically, any argument system of low communication complexity
that is proved sound by reducing (via relatively few queries)
the breaking of a cryptographic primitive to the cheating strategy
yields a good PCP. Interestingly, the soundness of the candidate PCP
is absolute whereas its completeness condition relies on the existence
of an implementation of the underlying cryptographic primitive.
Indeed, the work presents a general definition of a
cryptographic primitive, which is of independent interest.

The proof oracle of the constructed PCP system is used to describe a strategy for the prover in the argument system. The PCP verifier first emulates an execution of the argument system and then invokes the reduction in an attempt to break the underlying cryptographic primitive. The PCP verifier accepts iff the original system accepts and the reduction fails to break the primitive. Thus, computational soundness follows by noting that if the prover manages to cheat then the reduction will break the primitive. Interestingly, establishing the completeness condition is the challenging part and it requires the use of a cryptographic primitive that cannot be broken. The query complexity of the constructed PCP equals the sum of (1) the communication complexity of the argument system, and (2) the query complexity of the reduction. I wonder whether the latter term can be eliminated.

[To appear in CCC'09.]

[original abstract]

Starting with Kilian (STOC92), several works have shown how to use probabilistically checkable proofs (PCPs) and cryptographic primitives such as collision-resistant hashing to construct very efficient argument systems (a.k.a. computationally sound proofs), for example with polylogarithmic communication complexity. Ishai et al. (CCC07) raised the question of whether PCPs are inherent in efficient arguments, and to what extent. We give evidence that they are, by showing how to convert any argument system whose soundness is reducible to the security of some cryptographic primitive into a PCP system whose efficiency is related to that of the argument system and the reduction (under certain complexity assumptions).

[reading, March 2009]

**Worst-Case Running Times for Average-Case Algorithms**
by L. Antunes and L. Fortnow

[my comments]

This work builds on an earlier work of the authors
and Vinodchandran (2003),
who showed that the exact (i.e., pointwise) complexity of a problem
(or rather any pointwise behavior of an algorithm)
that is polynomial on the average w.r.t a certain distribution
is characterized by a function related to that distribution.
Specifically, both the distribution and the pointwise bound
are determined by the polynomial-time Kolmogorov distribution.

The current work goes beyond the previous one by proving that, under standard assumptions that allow derandomization, the polynomial-time Kolmogorov distribution is "universal" for the class of P-samplable distributions. Here universality means that the former distribution dominates all P-samplable distributions, which seems to be of independent interest.

[To appear in CCC'09.]

[original abstract]

Under a standard hardness assumption we exactly characterize the worst-case running time of languages that are in average polynomial-time over all polynomial-time samplable distributions. More precisely, we show that if exponential time is not infinitely often in subexponential space, then the following are equivalent for any algorithm $A$:

- For all P-samplable distributions $\mu$, $A$ runs in time polynomial on $\mu$-average.
- For all polynomials $p$, the running time of $A$ is bounded by $2^{O(K^p(x)-K(x)+\log(|x|))}$ for *all* inputs $x$,

where $K(x)$ is the Kolmogorov complexity (size of the smallest program generating $x$) and $K^p(x)$ is the size of the smallest program generating $x$ within time $p(|x|)$.

To prove this result we show that, under the hardness assumption, the polynomial-time Kolmogorov distribution, $m^p(x)=2^{-K^p(x)}$, is universal among the P-samplable distributions.

[reading + private discussions with the authors, April 2009]

**Composition of low-error 2-query PCPs using decodable PCPs**
by Irit Dinur and Prahladh Harsha

[my comments]

The abstract is quite long and does a good job of explaining
the context of this work, so I guess I need add nothing.
Actually, let me seize the opportunity to express my opinion that
the most important aspect of [MR08b] is achieving *almost-linear*
length PCPs of arbitrarily small constant error, rather than making this
small error vanish (i.e., be sub-constant).
In fact, obtaining a fixed polynomial length for an arbitrarily small
constant error would have been interesting too.

Regarding the proof of the result: The actual presentation proceeds in terms of (highly) robust PCPs, while relying on the equivalence of the latter to two-query PCPs. Indeed, this equivalence is another contribution of the paper. As for the composition of robust PCPs, it starts by observing that there is no need to keep the outer proof for consistency, since consistency can be tested by comparing two related inner proofs (which refer to the same position in the outer proof). By itself, this changes nothing, because consistency can still be achieved by modifying less than half of the total length of the two scanned parts. The new idea is to check consistency among $d$ such parts, which may allow reducing the robustness error to approximately $1/d$. Indeed, this is doable, and is what the new composition theorem does. Note that the above description relies on the fact that the inner PCP is decodable.

[original abstract]

Proof composition, introduced in [AS98], is an essential ingredient in all known constructions of probabilistically checkable proofs (PCPs). By now, modular composition theorems are known [BGH+06, DR06], but only in the constant soundness error regime. These theorems have led to better understanding of PCPs, resulting in alternate proofs of the PCP Theorem, and construction of shorter PCPs [BGH+06, DR06, BS08, Din08].

In contrast, a similar understanding of PCP composition in the low soundness error regime has been lacking. The existing composition methods are non-modular (i.e., composition is very much tailored to the specific PCPs that are composed in an ad-hoc fashion), resulting in complicated constructions of PCPs. Furthermore, until recently, composition in the low soundness error regime suffered from incurring an extra "consistency" query, resulting in a PCP that is not "two-query" and hence, much less useful for hardness-of-approximation reductions. In a recent breakthrough, Moshkovitz and Raz [MR08b] constructed almost linear-sized low-error 2-query PCPs for every language in NP. The main technical component of their construction is a composition technique, which however is fairly involved and works only for certain specific 2-query PCPs.

The main result of this paper is a surprisingly simple, yet generic, composition theorem for low error two-query PCPs. Towards this end, we introduce a new variant of PCP, which we call a decodable PCP (dPCP). A dPCP is an encoding of an NP witness that is both checkable and decodable. The dPCP verifier in addition to verifying the validity of the given proof like a standard PCP verifier, also locally decodes the original NP witness. Our composition is generic in the sense that it works regardless of the way the component PCPs are constructed.

By repeatedly applying the new composition theorem to known components, we give a considerably simpler and modular proof of the result of [MR08b]. However, our construction has the same bottleneck as [MR08b] with respect to the error probability, in that, it is inverse logarithmic (not polynomial) in the alphabet size.

[presentation in Eurocrypt'09, Apr. 27, 2009]
[+ a recall from announcement at TCC'09, Mar. 16, 2009]

Two recent works on resettable security by Vipul Goyal and Amit Sahai:

**(1) Resettably Secure Computation**,
presented at Eurocrypt'09.

**(2) Resolving the simultaneous Resettability Conjecture**,
announced at the rump session of TCC'09.

[my comments]

Resetting attacks refer to the possibility of forcing a party
to interact several times using the same internal coin tosses.
Such attacks are natural in a context where the party's randomness
is generated offline and hardwired into the party's strategy
(e.g., as in the case of some "smart cards").
Resetting attacks were first considered in the context of
zero-knowledge, first w.r.t a resetting attack on the prover
(when attempting to gain knowledge)
and later w.r.t a resetting attack on the verifier
(when attempting to fool it into accepting false statements).
[The corresponding primitives are denoted rZK and rsZK, resp.]

The current two works extend the study: first, by referring to arbitrary secure multi-party computation; and second, by considering resetting attacks on both parties of the same zero-knowledge argument. The first work handles resetting attacks on any honest majority, but in the case of two-party protocols the single party subjected to such attacks is predetermined (indeed, as in the case of rZK and rsZK). The second work seems a necessary step towards handling resetting attacks on any party (i.e., not a predetermined one) also in the case that the honest parties are not in majority (e.g., the two-party case).

Regarding the first work, it is important to note that resettable security is defined via an ideal model that allows resetting (or rather invoking the functionality with the same honest-party input and different inputs for the other party, which may be adaptively selected by the adversary). Note that such an attack cannot be avoided in the real resetting model, and thus it must be allowed in the ideal model. Consequently, both the real and ideal models for resetting attacks are stronger than the corresponding models for concurrent executions, and thus resettable security does not necessarily imply concurrent security. (Indeed, this stands in contrast to the case of zero-knowledge, where the parties are supposed to interact on a common input, which cannot be modified by an adversary.)

[abstract not available]

The first paper appears in the proceedings of Eurocrypt'09, LNCS 5479.

[email exchange with Rafail, May 2009; following announcement at TCC'09]

Two recent works justifying various features of known zero-knowledge protocols.

**(1) On the Composition of Public-Coin Zero-Knowledge Protocols**,
by Rafael Pass, Wei-Lung Dustin Tseng, and Douglas Wikstrom.

**(2) On Public versus Private Coins in Zero-knowledge Proof Systems**,
by Rafael Pass and Muthuramakrishnan Venkitasubramaniam.

[my comments]

Indeed, from my perspective, the common theme here is
the justification of various features of known zero-knowledge protocols.
Specifically, the **first work** [to appear in Crypto'09]
explains why the only known protocols
(i.e., proofs or arguments) that preserve zero-knowledge
under parallel composition are protocols that use private coins.
This paper shows that parallel executions of interactive arguments
of the public-coin type cannot be simulated in a black-box manner
(unless they are trivial; i.e., establish membership in a BPP-set).
The **second work** [announced at the rump session of TCC'09]
explains why the known construction of constant-round
zero-knowledge *proofs* uses a (seemingly) stronger assumption
than the mere existence of one-way functions.
This paper provides an upper-bound on the computational complexity
of sets having constant-round proof systems that use one-way functions
in a "fully black-box" manner
(i.e., the protocol makes a black-box use of the function,
and the simulator uses the adversary in a black-box way).

Note that these justifications do not say that it is impossible to get rid of the aforementioned features of the known zero-knowledge protocols. They only say that doing so will require non-black-box techniques. Still, this makes us feel better about our failure to get rid of these features so far (let alone in the past, before the potential of non-black-box techniques was realized).

[original abstracts]

Abstract for the 1st paper: We show that only languages in BPP have public-coin, black-box zero-knowledge protocols that are secure under an unbounded (polynomial) number of parallel repetitions. This result holds both in the plain model (without any set-up) and in the Bare Public-Key Model (where the prover and the verifier have registered public keys). We complement this result by showing the existence of a public-coin black-box zero-knowledge proof that remains secure under any a-priori bounded number of concurrent executions.

Abstract for the 2nd paper (tentative): Goldreich-Krawczyk show that only languages in BPP have constant-round *public-coin* black-box zero-knowledge protocols. We extend their lower bound to ``fully black-box'' *private-coin* protocols based on one-way functions. More precisely, we show that only languages in $BPP^CF$, where CF is a ``collision-finding'' oracle (as in [Haitner et al. 07], [Simon 98]), can have constant-round fully black-box zero-knowledge *proofs* based on one-way functions; the same holds for fully black-box zero-knowledge *arguments* with sublinear verifier communication complexity. We also establish near-linear lower bounds on the round complexity of fully black-box concurrent zero-knowledge proofs (or arguments with sublinear verifier communication) for languages outside $BPP^CF$. To establish these results we present a transformation from private-coin protocols into CF-relativized public-coin protocols; for the case of fully black-box protocols based on one-way functions, this transformation preserves zero knowledge, round complexity and communication complexity. As an additional corollary of this transformation (and relying on recent parallel repetition theorems for public-coin protocols), we establish that parallel repetition reduces the soundness error in all fully black-box private-coin arguments with sublinear verifier communication complexity.

[Per at STOC'09, June 1, 2009]

**Randomly supported independence and resistance**
by Per Austrin and Johan Hastad

[my comments]

The actual technical core of this paper is a study of
the size of random sets of $n$-bit strings that (w.h.p.)
support a $k$-wise independent distribution over $n$-bit strings,
where a set $S$ **supports** a $k$-wise independent distribution
if there exists such a distribution whose support is contained in $S$.
Interestingly, this happens at size $\tilde{O}(n^k)$,
and this result is essentially tight. The paper also studies
the size for which every set supports some $k$-wise independent
distribution, which is $(1-\Omega(1)^k)\cdot 2^n$.
My feeling is that these results may be useful in the context
of proving lower bounds (e.g., for property testing).
The application to *approximation resistance* follows
via a result of Austrin and Mossel (CCC'08).
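
As a point of contrast (my own illustration, not from the paper), recall the classical explicit construction for $k=2$: a pairwise independent distribution on $n$ bits whose support has size only $O(n)$, far below the roughly $n^2$ size that random sets need. The sketch below builds that support.

```python
from itertools import product

def pairwise_independent_sample_space(n):
    """A classical small set supporting a pairwise (2-wise) independent
    distribution on n bits: index bit i by a nonzero vector a_i in
    GF(2)^m, and output bit i as the inner product <a_i, s> for a
    uniform seed s in GF(2)^m.  Any two distinct nonzero vectors over
    GF(2) are linearly independent, so the bits are pairwise
    independent; the support has size 2^m = O(n), versus 2^n for
    full independence."""
    m = 1
    while 2 ** m - 1 < n:   # need n distinct nonzero vectors
        m += 1
    vecs = [tuple(int(b) for b in format(i, '0%db' % m))
            for i in range(1, n + 1)]
    return [tuple(sum(a * s for a, s in zip(v, seed)) % 2 for v in vecs)
            for seed in product((0, 1), repeat=m)]
```

For example, for $n=5$ the uniform distribution over the returned 8 strings is balanced and pairwise independent.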

[original abstract]

We prove that for any positive integer $k$, there is a constant $c_k$ such that a randomly selected set of $c_k n^k \log n$ Boolean vectors with high probability supports a balanced $k$-wise independent distribution. In the case of $k \leq 2$ a more elaborate argument gives the stronger bound $c_k n^k$. Using a recent result by Austrin and Mossel this shows that a predicate on $t$ bits, chosen at random among predicates accepting $c_2 t^2$ input vectors, is, assuming the Unique Games Conjecture, likely to be approximation resistant. These results are close to tight: we show that there are other constants, $c_k'$, such that a randomly selected set of cardinality $c_k' n^k$ points is unlikely to support a balanced $k$-wise independent distribution and, for some $c>0$, a random predicate accepting $ct^2/\log t$ input vectors is non-trivially approximable with high probability. In a different application of the result of Austrin and Mossel we prove that, again assuming the Unique Games Conjecture, any predicate on $t$ bits accepting at least $(32/33) \cdot 2^t$ inputs is approximation resistant. Essentially all results extend from the Boolean domain to larger finite domains.

[talk at STOC'09, June 1, 2009]

**Twice-Ramanujan Sparsifiers**
by Joshua D. Batson and Daniel A. Spielman and Nikhil Srivastava

[my comments]

A special case of this result refers to the construction of a
**cut sparsifier**, which is a subgraph that preserves the relative
densities of all cuts in the original graph.
The paper presents a (deterministic) polynomial-time algorithm
that finds such a sparsifier with a linear number of edges, improving
over prior works that obtained an almost linear number of edges
via a probabilistic polynomial-time algorithm.
The odd title refers to the fact that the degree bound achieved here
is twice the one obtained for the special case of the complete graph.
Unfortunately, this is not the only oddity in the exposition.

[original abstract]

We prove that for every $d>1$ and every undirected, weighted graph $G(V,E)$ on $n$ vertices, there exists a weighted subgraph $H$ of $G$ with at most $dn$ edges that preserves the Laplacian quadratic form of $G$ up to a multiplicative error of $d+1 \pm 2 \sqrt{d}$. Thus, $H$ approximates $G$ spectrally at least as well as a Ramanujan expander with $dn/2$ edges approximates the complete graph. We give an elementary, deterministic polynomial time algorithm for constructing $H$.

[two talks at STOC'09, May 31, 2009]

Two related works on non-malleable commitments (NMCs).

**(1) Non-Malleability Amplification**
by Huijia Lin and Rafael Pass

**(2) A Unified Framework for Concurrent Security: Universal Composability
from Stand-alone Non-malleability**
by Huijia Lin and Rafael Pass and Muthuramakrishnan Venkitasubramaniam

[my comments]

The key definition is that of a $c$-robust NMC, which is a commitment
scheme that remains non-malleably secure also when run in parallel to
a single (execution of an) arbitrary $c$-round protocol,
where $c$ is a constant (e.g., $c=4$ will do).
(In the papers, $c$-robust protocols are called "natural",
because it is quite easy to obtain this property from the basic case (of $c=0$).)

The first work presents several interesting techniques for the construction of NMCs, including a transformation of any $k$-round $c$-robust NMC into an $O(k)$-round NMC that is both $c$-robust and maintains security under concurrent self-composition. This transformation applies also in the case that security holds only w.r.t $t$-bit long IDs (where standard NMC essentially corresponds to IDs of linear length). Another crucial ingredient is a transformation of any ($c$-robust) protocol that is secure under self-composition w.r.t $t$-bit long IDs into a ($c$-robust) protocol that is secure w.r.t $2^{t-1}$-bit long IDs, where the latter protocol is no longer secure under self-composition but maintains the round complexity of the original protocol. Iterating these two steps $\log^*$ times, we obtain a $\log^*$-round NMC that (is $c$-robust and) maintains security under concurrent self-composition. Indeed, this is another incarnation of the amplification approach (i.e., obtaining a remarkable result by starting from a known result and applying a sequence of iterations such that each iteration improves the construct in a relatively moderate manner).

The second work presents a unified framework for constructing environmentally-secure (a.k.a. UC-secure) two-party protocols in various models (including various set-up assumptions and weak notions of security such as quasi-PPT simulation). The framework uses a $c$-robust NMC and refers to a rather generic gadget, called a "UC-puzzle", that provides (execution) "soundness" w.r.t environments and (statistical) stand-alone "secrecy" (via simulation). Intuitively, environmental "secrecy" is obtained via the NMC, which allows for simple instantiations of the UC-puzzle.

Taken together, these works provide a host of techniques for the construction of NMCs and a very important application of NMCs, which may be viewed as no less than a justification for the large effort invested in the construction of NMCs.

[original abstract of (1)]

We show a technique for amplifying commitment schemes that are non-malleable with respect to identities of length t, into ones that are non-malleable with respect to identities of length $\Omega(2^t)$, while only incurring a constant overhead in round-complexity. As a result we obtain a construction of $O(1)^{log^* n}$-round (i.e., "essentially" constant-round) non-malleable commitments from any one-way function, and using a black-box proof of security.

[original abstract of (2)]

We present a unified framework for obtaining Universally Composable (UC) protocols by relying on stand-alone secure non-malleable commitments. Essentially all results on concurrent secure computation---both in relaxed models (e.g., quasi-polynomial time simulation), or with trusted set-up assumptions (e.g., the CRS model, the imperfect CRS model, or the timing model)---are obtained as special cases of our framework. This not only leads to conceptually simpler solutions, but also to improved set-up assumptions, round-complexity, and computational assumptions. Additionally, this framework allows us to consider new relaxed models of security: we show that UC security where the adversary is a uniform PPT but the simulator is allowed to be a non-uniform PPT (i.e., essentially, traditional UC security, but with a non-uniform reduction) is possible without any trusted set-up. This gives the first results on concurrent secure computation without set-up, which can be used for securely computing ``computationally-sensitive'' functionalities (e.g., data-base queries, ``proof of work''-protocols, or playing bridge on the Internet).

[Eric at STOC'09, May 31, 2009]

**Testing Juntas Nearly Optimally**
by Eric Blais

[my comments]

Given the importance of dealing with functions that depend
on few of their inputs (aka juntas), the problem of optimizing
the complexity of testing juntas does deserve the attention
it has received. The algorithm presented in this work is
very appealing, and has query complexity $\tilde{O}(k/\epsilon)$,
where $\epsilon$ is the proximity parameter.

[original abstract]

A function on n variables is called a k-junta if it depends on at most k of its variables. In this article, we show that it is possible to test whether a function is a k-junta or is "far" from being a k-junta with O(k log k)/epsilon queries, where epsilon is the approximation parameter. This result improves on the previous best upper bound of O(k^{1.5})/epsilon queries and is asymptotically optimal, up to a logarithmic factor. We obtain the improved upper bound by introducing a new algorithm with one-sided error for testing juntas. Notably, the new algorithm is a valid junta tester under very general conditions: it holds for functions with arbitrary finite domains and ranges, and it holds under any product distribution over the domain. A key component of the analysis of the algorithm is a new structural result on juntas: roughly, we show that if a function f is "far" from being a k-junta, then f is "far" from being determined by k parts in a random partition. The structural lemma is proved using the Efron-Stein decomposition method.

[July 10, 2009]

*Efficient Two-Sources Randomness Extraction:
A Challenge from the mid-1980s*

[a brief description]

The challenge is constructing an explicit two-source randomness extractor
with parameters that meet the existential counting argument.
For starters, we highlight the problem of extracting a single bit
from two sources that have arbitrarily small constant rate,
where the extracted bit should have arbitrarily small constant bias.

[more details]

My paper with Benny (FOCS'85 and [SICOMP, Vol. 17, No. 2, 1988]) shows the existence of a function $f:\{0,1\}^n\times\{0,1\}^n\to\{0,1\}^m$ that is an extractor of error $\e$ for pairs of $(n,k)$-sources, provided that $m < 2k-2\log_2(m/\e)$. (The proof uses a counting argument that is based on the fact that it suffices to consider flat sources.) The challenge is to provide a polynomial-time computable function $f$ that meets the foregoing parameters.

For starters, let us focus on the case that $m=1$. In this case, the said paper provides a very simple function (i.e., inner-product) that constitutes an extractor of error $\e$ for pairs of $(n,k)$-sources, provided that $k > (n/2) + 2\log_2(2/\e)$. This result refers to min-entropy rate (i.e., $k/n$) one half, and a concrete challenge is to extend this result to any constant min-entropy rate (i.e., $(n,k)$-sources such that $k=\Omega(n)$).
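To make the inner-product extractor concrete, here is a small sketch (my illustration, not from the paper) that computes the extracted bit and measures its exact bias on two random flat sources. For any pair of flat sources of these sizes, Lindsey's lemma bounds the bias by $\sqrt{2^n/(|A||B|)}/2$, which is $0.125$ for the parameters below:

```python
import random

def inner_product_bit(x, y):
    # <x,y> mod 2, for n-bit strings encoded as Python integers
    return bin(x & y).count("1") & 1

def bias(src_a, src_b):
    # Exact bias of the extracted bit when each source is flat
    # (uniform over the given support) and the two are independent.
    ones = sum(inner_product_bit(a, b) for a in src_a for b in src_b)
    return abs(ones / (len(src_a) * len(src_b)) - 0.5)

random.seed(0)
n, k = 10, 7  # min-entropy rate k/n = 0.7 > 1/2
A = random.sample(range(2 ** n), 2 ** k)  # a random flat (n,k)-source
B = random.sample(range(2 ** n), 2 ** k)
print(bias(A, B))  # small; at most 0.125 by Lindsey's lemma
```

Restricting attention to flat sources is without loss of generality here, since every $(n,k)$-source is a convex combination of flat ones.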

Note that, under a reasonable conjecture regarding the Paley Graph (i.e., a graph with vertex set $Z_p$, for a prime $p$, such that $(i,j)$ is an edge iff $i-j$ is a quadratic residue modulo $p$), the corresponding function (i.e., $f(i,j)=1$ iff $(i,j)$ is an edge) is an extractor of exponentially vanishing error for pairs of sources of any constant min-entropy rate (see [CG, Corr 11, p. 240]).
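The Paley-graph based function itself is easy to compute; here is a sketch (my illustration) using Euler's criterion to test quadratic residuosity:

```python
def is_qr(a, p):
    # Euler's criterion: a nonzero a is a quadratic residue mod an odd
    # prime p iff a^((p-1)/2) == 1 (mod p).
    return pow(a % p, (p - 1) // 2, p) == 1

def paley_bit(i, j, p):
    # f(i,j) = 1 iff (i,j) is an edge of the Paley graph on Z_p,
    # i.e., iff i - j is a nonzero quadratic residue mod p.
    d = (i - j) % p
    return 1 if d != 0 and is_qr(d, p) else 0

# Taking p = 1 (mod 4) makes the relation symmetric, since -1 is then a QR.
p = 13
print(paley_bit(5, 1, p), paley_bit(3, 1, p))  # 4 is a QR mod 13, 2 is not
```

The open issue, of course, is not computing this function but proving the conjectured pseudo-random properties of the underlying graph.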

This is how things stood for almost two decades, but renewed interest in the problem (pioneered in 2004 by Barak, Impagliazzo and Wigderson) led to very significant progress on closely related problems (e.g., the problem of extraction from three sources, and the construction of two-source dispersers)^{[*]}. As for the problem of two-source extraction, we mention the well-known result of Bourgain (2005) that provides an explicit construction for min-entropy rate 0.4999.

^{[*]}The problem of extraction from three sources is much more important for the original motivation (of actually using such extractors to purify randomness in various applications), but one may argue that the construction of two-source dispersers is technically more related to the problem of constructing two-source extractors.

[Youming at Random'09, Aug. 21, 2009]

**On the Security of Goldreich's One-Way Function**
by Andrej Bogdanov and Youming Qiao.

[my comments]

For a couple of reasons,
my original suggestion referred to the length preserving case,
where $m=n$, yet the generalization to $m=O(n)$ seems natural.
Interestingly, this work shows that this generalization
is problematic when the O-notation hides a sufficiently large constant
(i.e., a constant that is exponential in the constant degree $d$).
In this case, predicates that have non-zero correlation with
any single bit (or even any fixed pair of bits) yield a function
that is easy to invert on most inputs (for a random underlying graph).
My interpretation of this result is that $m >> n$ should not
be allowed (recall that $m > n^d$ is definitely bad).

[original abstract]

Goldreich (ECCC 2000) suggested a simple construction of a candidate one-way function $f: \B^n \to \B^m$ where each bit of output is a fixed predicate $P$ of a constant number $d$ of (random) input bits. We investigate the security of this construction in the regime $m = Dn$, where $D=D(d)$ is a sufficiently large constant. We prove that for any predicate $P$ that correlates with either one or two of its variables, $f$ can be inverted with high probability. We also prove an amplification claim regarding Goldreich's construction. Suppose we are given an assignment $x' \in \B^n$ that has correlation $\epsilon > 0$ with the hidden assignment $x \in \B^n$. Then, given access to $x'$, it is possible to invert $f$ on $x$, with high probability, provided $D = D(d,\epsilon)$ is sufficiently large.
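For concreteness, here is a toy sketch (mine, not the paper's) of the candidate function: a random structure is chosen once and for all, assigning each output bit a set of d input positions, and each output bit applies a fixed predicate to its selected bits. The toy predicate below correlates with its first variable, which is exactly the kind of predicate the paper attacks in the regime $m = Dn$:

```python
import random

def sample_graph(n, m, d, rng):
    # The "random graph": output bit i reads d distinct random input positions.
    return [rng.sample(range(n), d) for _ in range(m)]

def evaluate(x, graph, pred):
    # f(x): apply the fixed d-ary predicate to each selected d-tuple of bits.
    return [pred([x[j] for j in tup]) for tup in graph]

def pred(bits):
    # A toy d=5 predicate; it correlates with its first variable
    # (Pr[pred = bits[0]] = 5/8 over uniform inputs), which is the kind
    # of predicate shown to be weak when m = Dn for large D.
    return bits[0] ^ (bits[1] & bits[2]) ^ (bits[3] & bits[4])

rng = random.Random(0)
n, m, d = 16, 16, 5
G = sample_graph(n, m, d, rng)
x = [rng.randint(0, 1) for _ in range(n)]
print(evaluate(x, G, pred))
```

The choice of predicate is illustrative only; the security of the construction depends crucially on which predicate is used and on the output length.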

[Shachar at Random'09, Aug. 21, 2009]

**Random low degree polynomials are hard to approximate**
by Ido Ben Eliezer, Rani Hod and Shachar Lovett.

[my comments]

This work shows that almost all degree $d+1$ polynomials over GF(2)
are extremely hard to approximate by degree $d$ polynomials;
that is, the correlation is exponentially vanishing in the number
of variables.
To me, this sounds like a very natural and/or basic result
in the context of arithmetic complexity.

[original abstract]

We study the problem of how well a typical multivariate polynomial can be approximated by lower degree polynomials over the field F_2. We prove that, with very high probability, a random degree d+1 polynomial has only an exponentially small correlation with all polynomials of degree d, for all degrees d up to Theta(n). That is, a random degree d+1 polynomial does not admit good approximations of lower degree. In order to prove this, we prove far tail estimates on the distribution of the bias of a random low degree polynomial. Recently, several results regarding the weight distribution of Reed--Muller codes were obtained. Our results can be interpreted as a new large deviation bound on the weight distribution of Reed--Muller codes.
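The quantity studied (the bias of a random low-degree polynomial) can be explored directly for small parameters. Here is a sketch (my illustration, using the convention that each monomial of degree at most d is included independently with probability 1/2, which yields a uniformly random polynomial of degree at most d):

```python
import itertools, random

def random_poly(n, d, rng):
    # A uniformly random polynomial over F_2 of degree at most d:
    # include each monomial on at most d variables independently w.p. 1/2.
    monomials = [m for r in range(d + 1)
                 for m in itertools.combinations(range(n), r)]
    return [m for m in monomials if rng.random() < 0.5]

def evaluate(poly, x):
    # Evaluate the polynomial (a list of monomials) at the 0/1 point x.
    return sum(all(x[i] for i in mono) for mono in poly) & 1

def bias(poly, n):
    # |Pr[p(x) = 1] - 1/2| over uniform x, by full enumeration (small n only).
    vals = [evaluate(poly, x) for x in itertools.product([0, 1], repeat=n)]
    return abs(sum(vals) / len(vals) - 0.5)

rng = random.Random(0)
print(bias(random_poly(10, 2, rng), 10))  # typically small already at degree 2
```

The paper's tail estimates quantify exactly this phenomenon: the bias of a random degree-(d+1) polynomial is exponentially small with very high probability.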

[Jeff at Random'09, Aug. 23, 2009]

**Pseudorandom Generators and Typically-Correct Derandomization**
by Jeff Kinne, Dieter van Melkebeek and Ronen Shaltiel.

[my comments]

This work presents interesting results regarding "typically correct"
derandomizations. On the positive side, it gets rid of the use
of randomness extractors in this context, showing that a certain type
of PRGs, called "seed-extending PRGs", is all that is needed.
(It should be noted that the NW-type PRGs are all seed extending,
and it is actually instructive to note this fact in any exposition
of the NW construction.)
On the negative side, it is shown that typically-correct
derandomization (with a sufficiently small number of errors;
e.g., as achieved in prior conditional results)
implies circuit lower bounds, very much like the implication
known for standard derandomization.

[original abstract]

The area of derandomization attempts to provide efficient deterministic simulations of randomized algorithms in various algorithmic settings. Goldreich and Wigderson introduced a notion of ``typically-correct'' deterministic simulations, which are allowed to err on few inputs. In this paper we further the study of typically-correct derandomization in two ways. First, we develop a generic approach for constructing typically-correct derandomizations based on seed-extending pseudorandom generators, which are pseudorandom generators that reveal their seed. We use our approach to obtain both conditional and unconditional typically-correct derandomization results in various algorithmic settings. We show that our technique strictly generalizes an earlier approach by Shaltiel based on randomness extractors, and simplifies the proofs of some known results. We also demonstrate that our approach is applicable in algorithmic settings where earlier work did not apply. For example, we present a typically-correct polynomial-time simulation for every language in BPP based on a hardness assumption that is weaker than the ones used in earlier work. Second, we investigate whether typically-correct derandomization of BPP implies circuit lower bounds. Extending the work of Kabanets and Impagliazzo for the zero-error case, we establish a positive answer for error rates in the range considered by Goldreich and Wigderson. In doing so, we provide a simpler proof of the zero-error result. Our proof scales better than the original one and does not rely on the result by Impagliazzo, Kabanets, and Wigderson that NEXP having polynomial-size circuits implies that NEXP coincides with EXP.

[Nina at the Barriers in Computational Complexity workshop, Aug. 28, 2009]

**Approximate Clustering without the Approximation**
by Maria-Florina (Nina) Balcan
[Based mainly on works with Avrim Blum, Anupam Gupta, and Santosh Vempala]

[my comments]

My own view is that this direction of research has much broader
applicability than realized so far.
For any search problem $R$, consider the problem of finding
a solution $y$ to a given instance $x$
(i.e., given $x$ find $y$ s.t. $(x,y)\in R$).
Suppose that with respect to an objective function $\Phi$
(modeling an auxiliary objective function believed to be minimized
by valid solutions) the following **$(c,\eps)$-property** holds:
for every $x,y$,
if $\Phi(x,y) \leq c\cdot OPT(x) \eqdef \min_z\{\Phi(x,z)\}$,
then $y$ is $\eps$-close to a solution to $x$ wrt $R$
(i.e., there exists $z$ s.t. $\Delta(y,z) < \eps |y|$ and $(x,z)\in R$).
In such cases (i.e., if the solution space (defined by $R$)
satisfies such a property wrt an auxiliary objective $\Phi$),
one typically tries to $c$-approximate the objective $\Phi$
in order to find strings that are $\eps$-close to being a valid solution.
Often, this is suggested as the justification for searching for
a $c$-approximation to the objective $\Phi$.
However, the point is that there may be a direct way of finding
a string that is $\eps$-close to being a solution,
instead of attempting to $c$-approximate the objective $\Phi$
(which may be infeasible to approximate...).
Furthermore, a proof of the $(c,\eps)$-property (of some $\Phi$)
may be useful towards this goal, in which case we may view $\Phi$
as a pivot of the analysis (rather than as a goal of the algorithm).

Later comment by Avrim and Nina: Our model is actually a bit more
like an instance-based version of the above condition. That is, say
that the instance $x$ satisfies the **$(c,\eps)$-property** if:
for every $y$, if $\Phi(x,y) \leq c\cdot OPT(x)$ then $y$ is
$\eps$-close to a solution to $x$ wrt $R$. Then the goal is to have
an algorithm that finds a good $y$ (one that is $O(\eps)$-close to a
solution) on those instances $x$ satisfying the property. This is
the kind of guarantee we get for clustering.

Oded: I still consider it good to present the global definition first,
and the instance-based version as a ramification.

[original abstract]

There has been substantial work on approximation algorithms for clustering data under distance-based objective functions such as k-median, k-means, and min-sum objectives. This work is fueled in part by the hope that approximating these objectives well will indeed yield more accurate solutions. That is, for problems such as clustering proteins by function, or clustering images by subject, there is some unknown correct “target” clustering and the implicit assumption is that clusterings that are approximately optimal in terms of these distance-based measures are also approximately correct in terms of error with respect to the target. In this work we show that if we make this implicit assumption explicit — that is, if we assume that any c-approximation to the given clustering objective Phi is epsilon-close to the target — then we can produce clusterings that are O(epsilon)-close to the target, even for values c for which obtaining a c-approximation is NP-hard. In particular, for the k-median, k-means, and min-sum objectives, we show that we can achieve this guarantee for any constant c > 1. Our results show how by explicitly considering the alignment between the objective function used and the true underlying clustering goals, one can bypass computational barriers and perform as if these objectives were computationally substantially easier.

[Eden at the WIS theory Seminar, Nov. 8, 2009]

**New Approximation Algorithms for Densest k-Subgraph**
by Aditya Bhaskara, Moses Charikar, Eden Chlamtac,
Uriel Feige and Aravindan Vijayaraghavan.

[my comments]

I was and am intrigued by the fact that the design of the algorithm
(for the worst-case problem) was actually guided by the analysis of
the planted random instance model. This seems related to the fact
that a key aspect of the problem at hand can be related to structures
that are in some sense random (or better than random, where
deviations from random are employed to the benefit of the algorithm).

[original abstract]

We consider the Densest $k$-Subgraph (DkS) problem: Given a graph $G$ and a parameter $k$, find a subgraph of $G$ on $k$ vertices with the maximum number of edges. This is a well known optimization problem, which is a natural optimization version of the decision problem k-CLIQUE. The previous best known approximation ratio by Feige, Kortsarz and Peleg in 2001 was $O(n^{1/3-\epsilon})$ for $\epsilon$ estimated at around 1/60. We improve on this result, giving an $O(n^{1/4+\epsilon})$ approximation for any $\epsilon>0$. In essence, our algorithm relies on the following principle: In a random graph with average degree $n^{\alpha}$, a planted $k$-subgraph can be detected if it has average degree (strictly) greater than $k^{\alpha}$. We show that this principle largely carries over to provide results in the same spirit for worst-case (non-random) instances.

[Amir at MFO, Nov. 16, 2009]

**Update on Polynomial Identity Testing**
by Amir Shpilka.

[my comments]

The most striking fact I learned is that the general case
of PIT reduces to the case of depth four.

[original abstract]

Polynomial Identity Testing (PIT for short) is the problem of efficiently and deterministically deciding whether a given (explicitly or via black-box access) arithmetic circuit computes the zero polynomial. This is one of the most fundamental problems in computational complexity and it is strongly related to the problem of proving lower bounds for arithmetic circuits. In this talk I will survey several results on the PIT problem. Specifically I will discuss the relation between derandomization of PIT and circuit lower bounds, explain the importance of derandomizing PIT in the bounded depth model and give an overview of the ideas behind several recent deterministic PIT algorithms, designed for restricted circuit classes. At the end of the talk I will present what I find to be the most accessible important open questions in the field.

[Zeev at MFO, Nov. 17, 2009]

**Kakeya Sets and Extractors (survey)**
by Zeev Dvir.

[my comments]

Just see the original abstract.

[original abstract]

In a series of works, we have been getting strong new bounds on the size of Kakeya sets, and using the same/related proofs to analyze a new family of randomness extractors that are capable of achieving parameters that were previously unknown. Details below. A Kakeya set in F_q^n is a set that contains a line in every direction. A randomness extractor is a function E:{0,1}^n x {0,1}^t -> {0,1}^m that satisfies the condition that if X is a random variable with sufficiently high min-entropy, and s is a uniform random variable (independent of X), then E(X,s) is distributed nearly uniformly. Our specific results are listed below. All the results are held together by a unifying technique that could be called the polynomial method. Here combinatorial parameters (like the cardinality) of a set S with algebraic properties are bounded by constructing a polynomial P_S that vanishes on S, and then analyzing this polynomial. This often leads to a very simple analysis of parameters that were otherwise elusive.

- Z. Dvir. On the size of Kakeya sets in finite fields. This work shows that Kakeya sets must have size c_n q^n where c_n is a universal constant independent of q.
- Z. Dvir and A. Wigderson. Kakeya sets, new mergers and old extractors. This work constructs a simple ``merger'' of random sources based on the ideas from [1], and builds alternate extractors that match state-of-the-art constructions.
- S. Saraf and M. Sudan. Improved lower bound on the size of Kakeya sets over finite fields. This work improves the lower bound of [1] to q^n/c^n for some constant c independent of q and n.
- Z. Dvir, S. Kopparty, S. Saraf and M. Sudan. Extensions to the Method of Multiplicities, with applications to Kakeya Sets and Mergers. This work improves the Kakeya sets bounds in [1], [3] to q^n/2^n (which is within a factor of 2+o(1) of the upper bounds). It also gives improved analysis of the mergers of [2], leading to explicit extractors attaining a new range of parameters.

[Les at MFO, Nov. 17, 2009]

**Holographic Algorithms** by Leslie Valiant.

[my comments]

As clarified in the talk, the key notion is actually
that of holographic reductions between counting problems.
These reductions map combinations of sets of solutions
rather than individual solutions, and they can be used
both for deriving algorithms (i.e., by a reduction
to an easy problem) and for deriving hardness results
(i.e., by reductions from hard problems).

[original abstract]

We briefly review where we are and summarize a known lower bound argument with regard to holographic algorithms. We go on to discuss, and illustrate with new examples, some avenues that remain open in this framework in the search for polynomial time algorithms.

[Anup at MFO, Nov. 17, 2009]

**Direct Sums in Randomized Communication Complexity**
by Boaz Barak, Mark Braverman,
Anup Rao and Xi Chen.

[my comments]

I would highlight more the new models and results regarding compressing
any communication protocol at the protocol level rather than
the message level. Indeed, these models and results seem complementary
to the treatment of error-correction at the protocol level by Schulman et al.
Thus, these two models extend the classical treatments of compression
and error-correction from the single-message level to the protocol level.

The direct sum result follows by starting with a protocol for the $k$-wise problem, deriving a protocol for a single instance that has the same communication complexity but low "information content", and then reducing the communication complexity by using the new compression (at the protocol level) scheme.

[original abstract]

Does computing several copies of a function require more computational effort than computing a single copy? In this work, we give the first answers to this question for the model of randomized communication complexity. Ignoring logarithmic factors, we show that:

- Computing n copies of a function requires sqrt{n} times the communication.
- For average case complexity, given any distribution mu on inputs, computing n copies of the function on n independent inputs sampled according to mu requires at least sqrt{n} times the communication for computing one copy.
- If mu is a product distribution, computing n copies on n independent inputs sampled according to mu requires n times the communication.

We also study the complexity of computing the parity of n evaluations of f, and obtain results analogous to those above. Our results are obtained by designing compression schemes for communication protocols that can be used to compress the communication in a protocol that does not transmit a lot of information about its inputs.

[Chris at MFO, Nov. 18, 2009]

**Fast polynomial factorization and modular composition**
by Kiran Kedlaya and Chris Umans

[my comments]

Just see the original abstract.

[original abstract]

We obtain randomized algorithms for factoring degree n univariate polynomials over F_q requiring O(n^{1.5+o(1)}log^{1+o(1)}q+n^{1+o(1)}log^{2+o(1)}q) bit operations. When log q < n, this is asymptotically faster than the best previous algorithms (von zur Gathen and Shoup (1992) and Kaltofen and Shoup (1998)); for log q > n, it matches the asymptotic running time of the best known algorithms. The improvements come from new algorithms for modular composition of degree n univariate polynomials, which is the asymptotic bottleneck in fast algorithms for factoring polynomials over finite fields. The best previous algorithms for modular composition use O(n^{(\omega+1)/2}) field operations, where \omega is the exponent of matrix multiplication (Brent and Kung (1978)), with a slight improvement in the exponent achieved by employing fast rectangular matrix multiplication (Huang and Pan (1997)). We show that modular composition and multipoint evaluation of multivariate polynomials are essentially equivalent, in the sense that an algorithm for one achieving exponent \alpha implies an algorithm for the other with exponent \alpha + o(1), and vice versa. We then give two new algorithms that solve the problem near-optimally: an algebraic algorithm for fields of characteristic at most n^{o(1)}, and a nonalgebraic algorithm that works in arbitrary characteristic. The latter algorithm works by lifting to characteristic 0, applying a small number of rounds of multimodular reduction, and finishing with a small number of multidimensional FFTs. The final evaluations are reconstructed using the Chinese Remainder Theorem. As a bonus, this algorithm produces a very efficient data structure supporting polynomial evaluation queries, which is of independent interest. The new algorithms for modular composition lead to asymptotic improvements in algorithms for irreducibility testing, computing minimal polynomials, and other algebraic problems.
Reducing the exponent of polynomial factorization from 1.5 to 1 is an outstanding open problem, and time permitting, I'll speculate on some possible routes to obtaining that result.
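As a point of reference, modular composition itself is straightforward to compute slowly. Here is a naive Horner-rule sketch over a prime field (my illustration of the problem statement, not any of the paper's algorithms; polynomials are coefficient lists, lowest degree first):

```python
def poly_add(a, b, q):
    n = max(len(a), len(b))
    return [((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)) % q
            for i in range(n)]

def poly_mod(a, h, q):
    # Reduce a modulo h over F_q (q prime), by repeated leading-term elimination.
    a, dh = a[:], len(h) - 1
    inv_lead = pow(h[-1], q - 2, q)  # inverse of h's leading coefficient
    while len(a) > dh:
        if a[-1] == 0:
            a.pop()
            continue
        c = a[-1] * inv_lead % q
        shift = len(a) - 1 - dh
        for i, hi in enumerate(h):
            a[shift + i] = (a[shift + i] - c * hi) % q
        a.pop()
    return a or [0]

def poly_mulmod(a, b, h, q):
    # Schoolbook product of a and b, then reduction mod h.
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % q
    return poly_mod(res, h, q)

def modular_composition(f, g, h, q):
    # Horner's rule: compute f(g(x)) mod h(x).  This is the slow baseline
    # that the fast algorithms discussed above improve upon.
    res = [0]
    for c in reversed(f):
        res = poly_add(poly_mulmod(res, g, h, q), [c], q)
    return res

# (x^2) composed with (x + 1), mod x^3, over F_5: (x+1)^2 = 1 + 2x + x^2
print(modular_composition([0, 0, 1], [1, 1], [0, 0, 0, 1], 5))  # → [1, 2, 1]
```

Each Horner step costs a full polynomial multiplication mod h, which is exactly why modular composition is the bottleneck the paper targets.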

[Omer at MFO, Nov. 18, 2009]

**Going down HILL: Efficiency Improvements
for Constructions of PRG from any OWF**
by Iftach Haitner, Omer Reingold and Salil Vadhan.

[my comments]

The key point is the introduction of the
notion of *next-bit/block pseudoentropy* (**nbPE**):
The random variable $X$ has nbPE at least $k$
if there exists a (correlated) $Y$ such that

- for all $i$ it holds that $(X_1,...,X_{i-1},Y_i) \equiv (X_1,...,X_{i-1},X_i)$,
- $\sum_i H(Y_i|X_1,...,X_{i-1}) \geq k$.

The false PE generator of HILL is thus replaced by the
following false nbPE generator, which is manipulated to obtain a PRG
(analogously but differently from the manipulation of PE in HILL).
The false nbPE generator takes input $x,h$,
where $h$ is a suitable length-preserving hashing function,
and outputs $(f(x),h,h(x))$. This has nbPE at least $|x|+|h|+\log|x|$.
(When proving this fact,
one analyzes the first $|f(x)|+|h|+k(x)-c\log|x|$ bits via extraction,
and the next $(c+1)\log|x|$ bits via hardcore bits,
where $k(x)$ is $\log_2|f^{-1}(f(x))|$.)

[original abstract]

We give a new construction of Pseudorandom Generators from any One-Way Function. The construction achieves better parameters and is simpler than that given in the seminal work of Håstad, Impagliazzo, Levin and Luby [HILL 99, Haitner-Harnik-Reingold-05, Holenstein 06]. The construction employs insights developed in the context of constructions of statistically hiding commitments [Haitner-Reingold-Vadhan-Wee09]. An additional advantage over all previous constructions is that our Pseudorandom Generators invoke the One-Way Function in a non-adaptive manner. Using [Applebaum-Ishai-Kushilevitz04], this implies the existence of Pseudorandom Generators in NC^0 based on the existence of One-Way Functions in NC^1.

[Prasad at MFO, Nov. 19, 2009]

**Complexity of CSPs: Exact and Approximation**
by Prasad Raghavendra.

[my comments]

While the main result asserts that,
for every CSP and every epsilon,
there exists a polynomial-time algorithm that gets within epsilon
of the approximation factor that is UG-hard,
I was most impressed by the fact that this seemingly optimal
approximation factor can be efficiently approximated.

[original abstract]

A Constraint Satisfaction Problem (CSP), such as 3-SAT or Horn-SAT, is specified by a set of predicates. In the exact version, the goal is to find out if an instance of the CSP is completely satisfiable or unsatisfiable. In the approximate version, the goal is to find an assignment that satisfies the maximum number of constraints. The algebraic dichotomy conjecture of Jeavons-Bulatov is an attempt at characterizing exactly for which CSPs the exact version is in P. Roughly speaking, this conjecture asserts that CSPs that are in P are characterized by non-trivial operations that take feasible solutions to an instance, to another feasible solution. For example, adding three feasible solutions to a linear system over F_2 yields another feasible solution, therefore linear equations are easy to solve. The Unique Games Conjecture of Khot yields the threshold of hardness for approximation versions of every CSP. In this work, we show that the Unique Games Conjecture is indeed a natural generalization of the algebraic dichotomy conjecture from exact CSP to approximate CSP. In a way, it yields a natural explanation as to where the approximation threshold for a CSP arises from.

[Ben at MFO, Nov. 20, 2009]

**k-Clique on Random Graphs**
by Ben Rossman.

[my comments]

Needless to say, the lower bound refers to the average-case
complexity of graphs drawn at the threshold probability.
The fact that these lower bounds are tight (as demonstrated
by matching upper bounds) is striking. The same holds for the fact
that the average-case lower bound significantly improves the best
previously known worst-case lower bounds in the AC0 setting.

[original abstract]

I will discuss results, of myself (lower bound) and Kazuyuki Amano (upper bound), showing that the average-case complexity of k-CLIQUE (for constant k) is n^(k/4 + O(1)) for AC^0 circuits as well as monotone circuits.

[Mark at MFO, Nov. 20, 2009]

**Poly-logarithmic independence fools AC0 circuits**
by Mark Braverman.

[my comments]

Just see the original abstract.

[original abstract]

We prove that poly-sized AC0 circuits cannot distinguish a poly-logarithmically independent distribution from the uniform one. This settles a conjecture of Linial and Nisan (1990). The only prior progress on the problem was by Bazzi (2007), who showed that O(log^2 n)-independent distributions fool poly-size DNF formulas. Razborov (2008) later gave a much simpler proof of Bazzi's theorem.

[Scott at MFO, Nov. 20, 2009]

**Efficient Simulation of Quantum Mechanics Collapses the Polynomial Hierarchy**
by Scott Aaronson.

[my comments]

I find the warm-up result, which refers to exact simulation
and has a much simpler proof, almost as appealing as the main result.
Here I refer to the fact that the probability distribution
to be simulated involves only probabilities that have a short
representation as binary fractions.

[original abstract]

We give a new type of evidence that quantum mechanics is hard to simulate classically---evidence that is more complexity-theoretic in character than Shor's famous factoring algorithm. Specifically we show the following: Suppose there's a BPP machine that, given any quantum circuit C, approximately samples (with constant error in variation distance) from the probability distribution over C's possible outputs. Then P^sharpP = BPP^NP, and in particular the polynomial hierarchy collapses. The proof uses a quantum algorithm that simulates a system of n identical, non-interacting bosonic particles. We exploit an old observation: that the amplitude for n non-interacting bosons to evolve to a particular state is given by the squared permanent of an n-by-n matrix, a sharpP-complete function. Therefore, one might hope that the ability to classically sample the bosons' final distribution would put sharpP in the polynomial hierarchy. However, pushing this implication through requires some further ideas from complexity theory, including the random self-reducibility of the permanent, noisy interpolation of polynomials, and approximate counting in BPP^NP. We also need to upper-bound the probability that two or more bosons will occupy the same state (unlike fermions, bosons are not subject to the Pauli exclusion principle, and can "pile on top of each other"). Our result can be strengthened in two ways. First, even if every distribution samplable in BQP can be approximately sampled in BPP^PH, we still get that P^sharpP = PH and PH collapses. This provides a new sort of evidence that quantum computers have capabilities outside the entire polynomial hierarchy. Second, we can prove a version of our result for relational problems (problems where the output is an n-bit string, and there could be many valid outputs), rather than sampling problems. What remains unknown is a result for decision problems (e.g., "if P=BQP then PH collapses").

[Shiri at the WIS Theory Seminar, Nov. 22, 2009]

**Fault-Tolerant Spanners for General Graphs**
by Shiri Chechik, Michael Langberg, David Peleg and Liam Roditty.

[my comments]

I assume that there are other examples of the following issue,
but none occurs to me right now. The issue is that the algorithm
works by invoking some algorithm on many related inputs
(in this case the subgraphs obtained by removing $f$ vertices)
while using the same coin tosses in the various invocations.
That is, the analysis benefits from the fact that some
random choices are the same in all these invocations.
(I vaguely recall the same occurring with respect to
invocations of some oracles that relate to local queries,
but the current case seems different.)

In the case of edge omissions, the construction involves an overhead factor that equals the number of omissions, which is optimal. For vertex omissions, the overhead is a factor of $k^{f+2}\log n$, where $f$ is the number of omitted vertices and $n$ is the total number of vertices.

[original abstract]

The idea of spanners is that for certain algorithmic purposes, instead of using the entire graph, it is sufficient to use a sparser subgraph that closely approximates the distances of the original graph. Formally, a subgraph $H$ is a $k$-spanner of $G$ if for every two vertices their distance in $H$ is at most $k$ times their distance in the original graph $G$. In the fault-tolerant settings, we would like to find a sparse spanner of $G$ which remains a ``good" spanner even when some of the vertices fail. We show how to construct sparse fault-tolerant spanners for general graphs. Our construction is based on the approximate distance oracle construction of Thorup and Zwick (2005). In addition, we show an extremely simple algorithm for constructing fault-tolerant spanners which are resilient to edge failures.
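For contrast with the fault-tolerant setting, here is the classical greedy construction of an ordinary (non-fault-tolerant) k-spanner (my illustration of the baseline notion, not the paper's construction):

```python
import heapq
from collections import defaultdict

def dijkstra_dist(adj, s, t, cutoff):
    # Shortest s-t distance in the weighted graph adj; searching is pruned
    # once distances exceed cutoff (returns inf if t is not reached).
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")) or d > cutoff:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def greedy_spanner(n, edges, k):
    # Classical greedy k-spanner: scan edges by nondecreasing weight and
    # keep (u,v,w) only if u and v are currently farther apart than k*w
    # in the spanner built so far.
    adj = defaultdict(list)
    spanner = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if dijkstra_dist(adj, u, v, k * w) > k * w:
            spanner.append((u, v, w))
            adj[u].append((v, w))
            adj[v].append((u, w))
    return spanner

# Complete graph on 5 vertices, unit weights, stretch k = 3:
edges = [(u, v, 1) for u in range(5) for v in range(u + 1, 5)]
print(greedy_spanner(5, edges, 3))  # a spanning star: 4 edges suffice
```

The fault-tolerant variant must additionally guarantee the stretch bound in every subgraph obtained by deleting up to f vertices (or edges), which is what drives the overheads discussed above.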

[Benny at the WIS Theory Seminar, Dec. 6, 2009]

**Cryptography by Cellular Automata
or How Fast Can Complexity Emerge in Nature?**
by Benny Applebaum, Yuval Ishai, and Eyal Kushilevitz.

[my comments]

The main result is a proof that the standard assumption
regarding the infeasibility of decoding random linear codes
implies a one-way function that can be computed by
a single step of a cellular automaton.
This result weakens the connection between graph expansion
and cryptographic hardness in NC0, or at least indicates that
the required expansion may be much weaker than suspected.
It also contradicts the feeling that cryptographic hardness
should go hand in hand with inapproximability, or again
indicates that such a connection should be better qualified.

[original abstract]

A cellular automaton (CA) is a discrete dynamical system in which cells are placed on a grid and the state of each cell is updated via a *spatially local* deterministic rule that depends only on the few cells within its close neighborhood. This model is commonly used to describe real world systems in nature and society. CAs are known to be capable of a highly complex behavior including the ability to generate computational intractability and computational unpredictability. However, it is not clear how fast these phenomena evolve and how common they are with respect to all possible initial configurations. Previous results suggest that computational intractability and unpredictability can evolve fast from some exceptional initial configurations, or alternatively, can evolve from almost all initial configurations after sufficiently long time. Our main results show that, under widely accepted conjectures, computational intractability and unpredictability can evolve after a single computation step from almost all initial configurations. This is done by constructing cryptographic primitives which are computed by a single step of a CA. These results support the view that computational forms of complexity can emerge from simple spatially local interactions in a very common and immediate way.

[ITCS, Tsinghua University, Beijing, Jan. 8-10, 2010]

**The ITCS mini-workshop on Property Testing, January 2010**

See an annotated program of this workshop.
The following three choices are taken from this program.

[workshop's abstract]

Property Testing is the study of super-fast (randomized) algorithms for approximate decision making. These algorithms are given direct access to items of a huge data set, and determine whether this data set has some predetermined (global) property or is far from having this property. Remarkably, this approximate decision is made by accessing a small portion of the data set. Property Testing has been a subject of intensive research in the last couple of decades, with hundreds of studies conducted in it and in closely related areas. Indeed, Property Testing is closely related to Probabilistically Checkable Proofs (PCPs), and is related to Coding Theory, Combinatorics, Statistics, Computational Learning Theory, Computational Geometry, and more.

The mini-workshop, hosted by the Institute for Computer Science (ITCS) at Tsinghua University (Beijing), brought together a couple of dozen of leading international researchers in Property Testing and related areas.

[Rocco at the ITCS mini-workshop on Property Testing, Jan. 8, 2010]

**Testing by Implicit Learning (survey)**
by Rocco Servedio.

[my comments]

I am fascinated by the **implicit learning** paradigm,
which is pivoted on emulating a learning algorithm
for $k$-variable functions by using $n$-bit long samples
with $k$ influential variables. The emulation proceeds by
randomly partitioning the variables into $O(k^2)$ sets,
identifying $k$ sets that are each likely to contain
a single influential variable, and converting the $n$-bit
long samples to corresponding $k$-bit samples. Specifically,
each $n$-bit sample is converted by determining, for
each influential set, whether the subset of variables labeled 1
or the subset labeled 0 is influential, and setting the corresponding
bit accordingly. Once the learning algorithm outputs a hypothesis
for the $k$-bit function, it is tested using adequate queries
(which are again emulated correspondingly).
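
As a toy illustration of the sample-conversion step, consider the following Python sketch. The oracle `is_influential` is a hypothetical stand-in for the independence/junta tests that the actual algorithm uses to decide whether a subset of variables carries influence; all names here are illustrative.

```python
def convert_sample(sample, influential_sets, is_influential):
    """Convert an n-bit sample to a k-bit sample, one bit per
    influential set.

    For each influential set, split its variables by their value in
    the sample; the (single) influential variable in that set takes
    value 1 in this sample iff the 1-labeled half is the influential
    one, so the corresponding output bit is set accordingly.
    `is_influential` is a hypothetical oracle standing in for the
    junta-test machinery of the real algorithm.
    """
    bits = []
    for s in influential_sets:
        ones = [i for i in s if sample[i] == 1]
        bits.append(1 if is_influential(ones) else 0)
    return bits

# toy check: one influential variable (index 2) inside one set
sample = [0, 1, 1, 0, 1]
sets = [[0, 2, 4]]
oracle = lambda subset: 2 in subset   # pretend the tests singled out var 2
assert convert_sample(sample, sets, oracle) == [sample[2]]
```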

[original abstract]

We describe a general approach to testing classes of Boolean functions. The method extends the early observation of Goldreich, Goldwasser and Ron that any proper learning algorithm can be used as a testing algorithm. This observation by itself does not typically give rise to constant-query (independent of n, the number of input variables) testing algorithms, because virtually every interesting class of n-variable functions requires a superconstant (as a function of n) number of examples for proper learning. Our "implicit learning" approach works essentially by running a proper learning algorithm over k-dimensional examples, where k is a function independent of n. The learning is "implicit" because the testing algorithm never actually knows which k of the n input variables are the ones that are being used for learning; this is accomplished using ingredients from the junta test of Fischer et al. Our approach can be used to test whether a Boolean function on n input variables has a "concise representation" in several different senses. Specifically, we obtain property testers that make poly(s/\epsilon) queries (independent of n) for Boolean function classes such as s-term DNF formulas (answering a question posed by Parnas et al.), size-s decision trees, size-s Boolean formulas, and size-s Boolean circuits. The method extends to classes of Boolean functions that have sparse Fourier representations, and also to non-Boolean classes such as s-sparse polynomials, size-s algebraic circuits, and size-s algebraic computation trees over finite fields of bounded size.

While the algorithms obtained by our approach typically have poly(s) query complexity, their running time is usually exponential in s because of a brute-force search that is used to do the learning. We have been able to achieve a better running time of poly(s) for the class of s-sparse GF(2) polynomials, using more sophisticated computationally efficient algorithms from learning theory for learning s-sparse GF(2) polynomials.

Relevant papers can be found at

[Madhu at the ITCS mini-workshop on Property Testing, Jan. 9, 2010]

**Invariance in Property testing (survey)**
by Madhu Sudan.

[my comments]

Madhu mused on what makes problems easily testable
and suggested that closure under a rich permutation group plays a
major role. This seems supported by the numerous results regarding
algebraic functions (which are all invariant under suitable affine
transformations) and graph properties (which, by definition, are closed
under isomorphism). He surveyed a unified approach that refers to the
former, where the affine transformations refer to the identification
of the domain with a vector space.

[original abstract]

This series of works was triggered by a question/conjecture in a work of Alon, Kaufman, Krivelevich, Litsyn and Ron (AKKLR), who asked if every 2-transitive code with a local constraint was locally testable. On thinking about the question, we realized that no one seemed to have explicitly studied the role of invariance of a property when testing it. In retrospect, this property seems to be crucial in (a) defining graph properties (these are properties that remain unchanged when we renumber the vertices), (b) separating the new (i.e., twentieth century) work in property testing from the classical domain of statistics: statistics typically deals with properties which enjoy a full symmetry group (names of all participants can be changed and this should not affect the statistics), whereas property testing tries to determine satisfiability of properties which do not enjoy a full symmetry group (e.g., graph properties are not invariant under arbitrary movement of edges), and (c) algebraic properties also have nice (and much smaller) symmetry groups. The AKKLR question/conjecture could be generalized in this sense to asking: When can you take a property described by a local forbidden pattern (constraint) and the symmetry group of the property, and use them to characterize and test the property? We explored this question in a series of works.

Algebraic Property Testing: The Role of Invariance (with Tali Kaufman, ECCC TR07-111): In this work we considered properties of functions that formed a linear vector space over some field F, where the domain of the functions was of the form K^n, where K was a (small) field extending F, and where the property was invariant under linear transformations of K^n. We gave necessary and sufficient conditions for such properties to be testable with constant queries (when K was of constant size). While the motivation for these families was low-degree polynomials, we showed that not all properties here were low-degree polynomials and that the degree did not characterize the locality of the test. Paper available here [1].

2-Transitivity is Insufficient for Property Testing (with Elena Grigorescu and Tali Kaufman, ECCC TR08-033): In this work we used some of the functions explored in paper [1] above, but over large fields K, to get a counterexample to the AKKLR conjecture (this clearly shows that affine/linear-invariance is not just a syntactic variation on polynomials, which were well-understood by AKKLR). Paper available here [2].

Testing Linear-Invariant Non-Linear Properties (with Arnab Bhattacharyya, Victor Chen, and Ning Xie, ECCC TR08-088): In this work we explored some properties that were not vector spaces (so functions that were not closed under addition), but still linear invariant. This led to a nice bridge between combinatorial property testing and algebraic property testing. Paper available here [3].

Succinct Representation of Codes with Applications to Testing (with Elena Grigorescu and Tali Kaufman, ECCC TR09-043): Paper [1] showed that if a property is characterized by a single local constraint and its rotations under the group of affine transformations, then it is locally testable by the natural test. In this work we showed that many codes, including sparse affine-invariant codes over fields of size $2^p$ for a prime $p$, have this property. Paper available here [4].

Limits on the rate of locally testable affine-invariant codes (with Eli Ben-Sasson, unpublished): In this work we show that (despite its richness) affine-invariance is not going to lead to much denser locally testable codes than already known. While the result is disappointing, if one cares about property testing beyond just the construction of locally testable codes, then this result adds to the body of knowledge on algebraic property testing.

Finally, let me list a few other results in this direction (though I won't describe them due to lack of time):

- Shapira - STOC 2009
- Kaufman and Wigderson - ICS 2010
- Bhattacharyya, Kopparty, Schoenebeck, Sudan, and Zuckerman - ECCC TR09-086

[Shubhangi at the ITCS mini-workshop on Property Testing, Jan. 10, 2010]

**Some recent results on testing of sparse linear codes**
by Swastik Kopparty and Shubhangi Saraf.

[my comments]

Shubhangi advocated viewing locally testable linear codes as
a special case of *tolerantly* testing the Hadamard codewords
under some adequate distributions (specifically, the distribution
that is uniform on the corresponding positions of the Hadamard code).
The tolerance condition prevents rejecting a codeword based on
positions that are assigned zero probability. She presented two
results, relating to two notions of tolerant testing (or rather
distance approximation): the more general result yields a weak
notion of approximation and may be obtained under the so-called
uniformly correlated condition, whereas a strong notion of
approximation requires a stronger correlation condition.
A distribution $\mu$ is called *$k$-uniformly correlated*
if there is a joint distribution of $k$-tuples such that each
element (in the $k$-tuple) is distributed as $\mu$
but their sum is uniformly distributed.
The stronger condition requires that this holds when the $k$
elements are independently drawn from $\mu$, which is equivalent to
requiring that the code be sparse and have small Fourier coefficients.
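
The stronger (independent-samples) condition can be probed empirically for toy distributions. The following Python sketch, with illustrative names, estimates the statistical distance between the XOR of $k$ independent samples from $\mu$ and the uniform distribution over $\{0,1\}^n$.

```python
import itertools
import random
from collections import Counter

def sum_distribution_distance(mu_support, k, n, trials=20000, seed=0):
    """Estimate the statistical (total variation) distance between the
    XOR of k independent samples from the uniform distribution over
    `mu_support` (a list of n-bit tuples) and the uniform distribution
    over {0,1}^n.  A small distance is evidence for the stronger,
    independent-samples version of k-uniform correlation.
    """
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(trials):
        s = (0,) * n
        for _ in range(k):
            x = rng.choice(mu_support)
            s = tuple(a ^ b for a, b in zip(s, x))  # coordinatewise XOR
        counts[s] += 1
    uniform_p = 1.0 / 2 ** n
    return 0.5 * sum(abs(counts.get(v, 0) / trials - uniform_p)
                     for v in itertools.product((0, 1), repeat=n))

cube = list(itertools.product((0, 1), repeat=3))
subspace = [v for v in cube if v[0] == 0]   # stuck in a proper subspace

# uniform over all of {0,1}^3 is uniform already for k = 1 ...
assert sum_distribution_distance(cube, 1, 3) < 0.1
# ... but a subspace-supported distribution never mixes (distance ~ 1/2)
assert sum_distribution_distance(subspace, 2, 3) > 0.4
```

As the toy example shows, a distribution supported on a proper linear subspace fails the condition for every $k$, since the sum can never leave the subspace.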

[original abstract]

We study the problem of tolerant linearity testing under general distributions, as well as an important special case, the problem of locally testing linear codes. Given a distribution μ over $F_2^n$ and a function $f: F_2^n \to F_2$, we are interested in distinguishing (using very few queries to f) between (1) the case where f is μ-close to a linear function, and (2) the case where f is μ-far from all linear functions (here the μ-distance between functions f and g is measured by the probability that f and g differ on a random point sampled from μ). For every linear code $C$, there is a distribution μ for which tolerant linearity testing implies the local testability of $C$. Via this connection, we give a very simple proof that sparse random linear codes are locally testable (and locally list-decodable) from (1/2 − ε)-fraction errors with constantly many queries, for every constant ε > 0. More precisely, we show that any linear code in $F_2^n$ that is:

- sparse (i.e., has only poly(n) codewords)
- unbiased (i.e., each nonzero codeword has Hamming weight in $(1/2-n^{-\gamma}, 1/2 + n^{-\gamma})$ for some constant γ > 0)
can be locally tested and locally list decoded from (1 / 2 − ε)-fraction errors using only $poly(1/\epsilon)$ queries to the received word. This simultaneously simplifies and strengthens a result of Kaufman and Sudan, who gave a local tester and local (unique) decoder for such codes from some constant fraction of errors.

We also give a general criterion which implies that linearity is testable under a given distribution. This condition, for example, implies that linearity is testable under the p-biased distribution.

[ETH, Zurich, Feb. 9-11, 2010]

**The 7th Theory of Cryptography Conference (TCC), January 2010**

See my notes regarding ten talks
presented in this conference.

[private presentation of Zvika (Brakerski), Mar. 14, 2010]

Two related works on Oblivious RAMs:

**(1) Oblivious RAMs without Cryptographic Assumptions**
by Ajtai.

**(2) Perfectly Secure Oblivious RAM Without Random Oracles**
by Damgard, Meldgaard, and Nielsen.

[my comments]

The issue is hiding the virtual access pattern
in a setting where a bounded-memory player stores
data items on a remote memory. Every desired item
is fetched by a visible sequence of real accesses,
and we wish this visible sequence to yield no information on the
request sequence (i.e., the actual items we are interested in).
A solution involving a polylogarithmic (amortized) overhead
has been known for two decades
(see Goldreich and Ostrovsky),
but this solution required the player to have access
to a random function
(which can be implemented using a pseudorandom function).
The current works waive this requirement;
instead of using a random function for storing a large
number of coin tosses, the current works use the memory
itself to store these values, while providing an oblivious
way of accessing the required random values.
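
For intuition about what is being hidden, here is a toy Python sketch of the classic square-root construction of Goldreich and Ostrovsky, whose need for a random function the above works address. It is deliberately simplified (no recursion, and the permutation is kept locally rather than accessed obliviously), and all names are illustrative.

```python
import random

class SqrtORAM:
    """Toy sketch of the square-root ORAM idea: store the N items under
    a random permutation together with about sqrt(N) dummy slots; keep
    recently fetched items in a small 'shelter'; on each request fetch
    either the item's permuted slot (first access) or a fresh dummy
    slot (if the item is already sheltered), so the visible slots form
    a random sequence of distinct positions that is independent of the
    request sequence; reshuffle when the epoch ends.
    """

    def __init__(self, data, seed=0):
        self.rng = random.Random(seed)
        self.data = list(data)
        self.n = len(data)
        self.n_dummies = max(1, round(self.n ** 0.5))
        self.visible = []                  # the observable access pattern
        self._shuffle()

    def _shuffle(self):
        slots = list(range(self.n + self.n_dummies))
        self.rng.shuffle(slots)
        self.pos = slots[:self.n]          # item i lives in slot pos[i]
        self.dummy_pos = slots[self.n:]    # fresh dummy slots
        self.shelter = {}                  # items fetched this epoch
        self.used_dummies = 0
        self.epoch_len = 0

    def read(self, i):
        if i in self.shelter:              # hide the repeat: touch a dummy
            slot = self.dummy_pos[self.used_dummies]
            self.used_dummies += 1
        else:                              # first access: fetch and shelter
            slot = self.pos[i]
            self.shelter[i] = self.data[i]
        self.visible.append(slot)
        self.epoch_len += 1
        if self.epoch_len == self.n_dummies:   # epoch over: reshuffle
            self._shuffle()
        return self.data[i]

oram = SqrtORAM(["a", "b", "c", "d"], seed=7)
oram.read(1); oram.read(1)   # the repeat touches a dummy, not slot pos[1]
```

Within each epoch the visible slots are distinct, and thanks to the random permutation their distribution is independent of which items were requested.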

[original abstract]

See Ajtai's Oblivious RAMs without Cryptographic Assumptions and Damgard et al.'s Perfectly Secure Oblivious RAM Without Random Oracles

[scanning the FOCS'09 proceedings, March 2010]
(better late than never.)

Needless to say, the following works are unrelated.

**(1) Faster Generation of Random Spanning Trees**
by Jonathan A. Kelner and Aleksander Madry.

**(2) Constructing Small-Bias Sets from Algebraic-Geometric Codes**
by Avraham Ben-Aroya and Amnon Ta-Shma.

**(3) A Probabilistic Inequality with Applications to Threshold
Direct-Product Theorems** by Falk Unger.

**(4) A Complete Characterization of Statistical Query Learning with
Applications to Evolvability** by Vitaly Feldman.

Note: a handful of other works are covered by previous choices.

[my comments]

Re (1): I'm happy to see yet another application of the paradigm
of decomposing any graph into parts that are rapidly mixing,
while omitting relatively few edges. Let me seize this opportunity
and mention the application of this paradigm in
the bipartite tester for bounded-degree graphs.

Re (2): When small-bias spaces were first defined and constructed, I felt that the exact polynomial dependence on the length and bias parameters (i.e., $n$ and $\epsilon$, respectively) was quite immaterial. Nowadays, being even more convinced of the fundamental nature of this notion, obtaining an efficient construction of optimal size seems a worthy goal to me, and this paper seems to make a significant step in this direction.
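
To recall the notion: a set is $\epsilon$-biased if every nonzero linear test is at most $\epsilon$-far from balanced on it. The following brute-force Python check (feasible only for small $n$; names are illustrative) computes the maximal bias of a given set.

```python
import itertools

def max_bias(S, n):
    """Maximal bias of a set S of n-bit tuples over all nonzero linear
    tests a: bias_a(S) = |E_{x in S}[(-1)^{<a,x>}]|.  S is an
    epsilon-biased set iff this value is at most epsilon.
    (Brute force over all 2^n - 1 tests; small n only.)
    """
    best = 0.0
    for a in itertools.product((0, 1), repeat=n):
        if not any(a):
            continue   # skip the all-zero (trivial) test
        total = sum((-1) ** (sum(ai * xi for ai, xi in zip(a, x)) % 2)
                    for x in S)
        best = max(best, abs(total) / len(S))
    return best

# the whole cube is a 0-biased set; a single point has maximal bias
assert max_bias(list(itertools.product((0, 1), repeat=3)), 3) == 0.0
assert max_bias([(0, 0, 0)], 3) == 1.0
```

The goal discussed above is to construct sets achieving a given bias $\epsilon$ with as few points as possible, ideally matching the known lower bounds.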

Re (3): This work generalizes the Chernoff bound to the case where the 0-1 random variables are not necessarily independent but have small bias, in the sense that the bias of any linear combination decreases exponentially with its size. On the other hand, these biases correspond to co-moments of the random variables, which may hint at why this condition can lead to strong tail inequalities. Turning back to the dual view of biases, which are reflected in guessing XORs of values, this leads to proofs of Threshold Direct-Product Theorems (DPTs) in various settings. Thus, while I still maintain that it is odd to derive a standard DPT via the XOR Lemma (cf. the remark following Lemma 7 in the old survey of Yao's XOR Lemma), this work demonstrates that the XOR Lemma is a natural tool towards deriving threshold DPTs.
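
The hypothesis of the inequality is easy to state in code. The following sketch (with illustrative names) computes the empirical bias of the XOR of a subset of 0-1 variables, the quantity that the result requires to decay exponentially in the subset's size.

```python
import itertools

def xor_bias(samples, subset):
    """Empirical bias of the XOR of the variables indexed by `subset`:
    |E[(-1)^(x_i1 + ... + x_ik)]| over the given 0/1 samples.
    Small biases for all subsets is the hypothesis under which the
    Chernoff-type tail bound is derived.
    """
    total = sum((-1) ** (sum(s[i] for i in subset) % 2) for s in samples)
    return abs(total) / len(samples)

# independent fair bits: every nonempty XOR is unbiased ...
cube = list(itertools.product((0, 1), repeat=3))
assert xor_bias(cube, (0, 1)) == 0.0
# ... whereas fully correlated bits have pairwise bias 1, and indeed
# their sum exhibits no Chernoff-type concentration
equal = [(0, 0, 0), (1, 1, 1)]
assert xor_bias(equal, (0, 1)) == 1.0
```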

Re (4): Being fascinated with Valiant's model of Evolvability, this paper is an obvious choice for me. It seems that this paper indicates (more than prior works) that evolvability as modeled by Valiant is quite realistic. I refer to the monotone feature, and do note that it is established (for all SQ-learnable classes) only via a distribution-dependent algorithm. Still!


Back to Oded Goldreich's homepage.