As usual, my choice of presentations is strongly biased towards my areas of interest. In particular, I attended only a small portion of APPROX and about half of RANDOM. (Also, in the context of this report, I feel it is OK for me to deviate from my policy of not surveying my own works.)

by Johan Hastad

Considering first the case of general systems, it is NP-hard to approximate the number of equations that can be satisfied simultaneously to within a factor of $(2-\eps)^d$ (or $2^d-\eps$, since we consider constant $d$ and $\eps>0$). However, in the case of satisfiable systems (for $d>1$), it is NP-hard to find an assignment that satisfies more than a $2^{-(d-1)}-2^{-(2d-1)}+\eps$ fraction of the equations, but there is a polynomial-time algorithm that finds an assignment that satisfies a $2^{-(d-1)}-2^{-(2d-1)}$ fraction of the equations.

The algorithm works by iteratively trying to find affine factors in the equations (which are normalized to the form Expression=1), and uses each such factor to "extract" one variable. Once the process gets stuck, a random assignment is shown to yield the desired bound. The hardness result is obtained by a novel twist on the standard construction of PCP systems, which uses as a gadget the extremal equation $x_1x_2\cdots x_d + x_{d+1}x_{d+2}\cdots x_{2d} = 1$. The novelty is in the use of this gadget: one runs $2d$ copies of the 3-query inner verifier (of Hastad's 3LIN result, but with related noise) and substitutes each $x_i$ (in the above polynomial equation) with the corresponding 3LIN equation.
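As a sanity check on the stated bound (my own illustrative computation, not taken from the paper): under a uniformly random assignment, each of the two disjoint degree-$d$ monomials in the extremal equation evaluates to 1 with probability $q=2^{-d}$, and the equation holds iff exactly one of them is 1, which happens with probability $2q(1-q) = 2^{-(d-1)}-2^{-(2d-1)}$. A few lines of exact rational arithmetic confirm the identity:

```python
from fractions import Fraction

def random_assignment_success(d):
    # Probability that a uniformly random assignment satisfies the
    # extremal equation x_1...x_d + x_{d+1}...x_{2d} = 1 over GF(2).
    # Each degree-d monomial is 1 with probability q = 2^{-d}, and the
    # two monomials involve disjoint variables, hence are independent.
    q = Fraction(1, 2**d)
    return 2 * q * (1 - q)  # exactly one of the two monomials equals 1

for d in range(2, 10):
    closed_form = Fraction(1, 2**(d - 1)) - Fraction(1, 2**(2*d - 1))
    assert random_assignment_success(d) == closed_form
```

This also explains why the bound is tight for a random assignment once the affine-factor extraction gets stuck.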

by Anand Louis, Prasad Raghavendra, Prasad Tetali and Santosh Vempala.

by Chandan Dubey and Thomas Holenstein.

by Omri Weinstein, Ronitt Rubinfeld, Dana Ron and Muli Safra.

The main observation underlying the algorithm is that, for every $k \leq \sqrt{n}$ (actually only $k = \sqrt{n}$ is used), the probability $\prob_{x,i}[f(x) \neq f(x \vee \chi_{\{i\}})]$ is relatively well approximated by $\prob_{x,I:|I|=k}[f(x) \neq f(x \vee \chi_I)]/k$, where $\chi_I$ denotes the vector that is 1 in the positions of $I$ and zero elsewhere. However, since the latter probability is $k$ times larger, approximating it via repeated samples requires a factor of $k$ fewer samples.
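To illustrate the observation (an illustrative check of my own, not the paper's algorithm), the following script computes both probabilities exactly, by exhaustive enumeration, for the majority function on $n=9$ bits with $k=3$; for this monotone function the ratio between $\prob_{x,I}[f(x)\neq f(x\vee\chi_I)]/k$ and $\prob_{x,i}[f(x)\neq f(x\vee\chi_{\{i\}})]$ is close to 1.

```python
from itertools import combinations, product

def maj(x):
    # Majority on an odd number of bits (a monotone Boolean function).
    return sum(x) > len(x) // 2

n, k = 9, 3
points = list(product([0, 1], repeat=n))

# p1 = Pr_{x,i}[ f(x) != f(x OR chi_{i}) ], exact by enumeration.
p1 = sum(maj(x) != maj(tuple(x[j] | (j == i) for j in range(n)))
         for x in points for i in range(n)) / (len(points) * n)

# pk = Pr_{x,I:|I|=k}[ f(x) != f(x OR chi_I) ], exact by enumeration.
subsets = list(combinations(range(n), k))
pk = sum(maj(x) != maj(tuple(x[j] | (j in I) for j in range(n)))
         for x in points for I in subsets) / (len(points) * len(subsets))

ratio = (pk / k) / p1
assert 0.8 < ratio < 1.25  # pk/k approximates p1 well for this monotone f
```

Of course, the point of the algorithm is that `pk` can be estimated from a factor of $k$ fewer samples than `p1`, since it is about $k$ times larger.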

by Dana Ron and Gilad Tsur

In contrast, if we are promised that the function is linear (or a low-degree polynomial), then the complexity can be reduced to a constant (whereas the $\Omega(k)$ lower bound on testing still holds).

by Brett Hemenway, Martin Strauss, Rafail Ostrovsky and Mary Wootters

The current work provides an improved construction of a locally decodable code (LDC) for this setting. Specifically, it weakens the cryptographic assumption, or rather the cryptographic primitive, on which the code is based (from a computational PIR to a standard public-key encryption scheme), and reduces two main complexity measures (i.e., the query complexity and the length of the public key).

by Irit Dinur and Tali Kaufman

by David Williamson

Among the ten problems he listed, I wish to highlight three. The first is the problem of efficiently coloring a 3-colorable graph with as few colors as possible; the current upper bound is $n^{0.211}$, whereas no constant number may suffice (assuming the Unique Game Conjecture). Can an efficient algorithm use a polylogarithmic number of colors? (How about "just" an $n^{o(1)}$ number?) The second problem refers to Asymmetric TSP under the triangle inequality, where a recent breakthrough offers a sub-logarithmic approximation factor. Can a constant factor approximation be achieved? The last problem refers to TSP under the triangle inequality, where we have been stuck with a 1.5-factor approximation since 1975. Can this be improved?

by Lidor Avigad and Oded Goldreich

by Oded Goldreich and Tali Kaufman

by Eldar Fischer and Eyal Rozenberg

Furthermore, in the case that the property is both approximately hereditary and inflatable, it is shown that any tester having feature (2) can be converted into one that does not depend on the size of the graph, while preserving the query complexity up to a polynomial blow-up. Note that the original tester may have two-sided error, whereas if the property is (perfectly) hereditary then this transformation [combined with prior results] yields a one-sided error tester of polynomially related query complexity. Recall that it is known that (perfectly) hereditary properties can be tested within a number of queries that depends only on the proximity parameter, but the said complexity may be huge. (N.B.: Properties that are approximately hereditary may exhibit a huge gap between the complexities of one-sided error and two-sided error testers.)

by Anindya De and Thomas Watson

The second result is related to Viola's study of the complexity of distributions [FOCS'10]. A stronger result was obtained independently by Viola [FOCS'11].

by Thomas Watson

by Sergei Artemenko and Ronen Shaltiel

Back to list of Oded's choices.