Comments on fourteen RANDOM/APPROX'11 Presentations (cf LNCS 6845)


As usual, my choice of presentations is strongly biased towards my areas of interest. In particular, I attended only a small portion of APPROX and about half of RANDOM. (Also, in the context of this report, I feel it is OK for me to deviate from my policy of not surveying my own works.)

1. Satisfying Degree d Equations over GF(2)
by Johan Hastad

Recall that while it is easy to determine whether a system of linear equations (over GF(2)) is satisfiable, it is NP-hard to approximate the number of equations that can be satisfied simultaneously to within a factor of $2-\eps$ (for every constant $\eps>0$). This paper shows that the discrepancy between the case of satisfiable systems and general ones carries over to degree d equations, but the result for satisfiable systems takes a different form.

Considering first the case of general systems, it is NP-hard to approximate the number of equations that can be satisfied simultaneously to within a factor of $(2-\eps)^d$ (or $2^d-\eps$, since we consider constant $d$ and $\eps>0$). However, in the case of satisfiable systems (for $d>1$), it is NP-hard to find an assignment that satisfies more than a $2^{-(d-1)}-2^{-(2d-1)}+\eps$ fraction of the equations, but there is a polynomial-time algorithm that finds an assignment that satisfies a $2^{-(d-1)}-2^{-(2d-1)}$ fraction of the equations.

The algorithm works by iteratively trying to find affine factors in the equations (which are normalized to the form Expression=1), and uses each such factor to "extract" one variable. Once the process gets stuck, a random assignment is shown to yield the desired bound. The hardness result is obtained by a novel twist on the standard construction of PCP systems, which uses as a gadget the extremal equation $x_1x_2...x_d + x_{d+1}x_{d+2}...x_{2d} = 1$. The novelty is in the use of this gadget: one runs $2d$ copies of the 3-query inner verifier (of Hastad's 3LIN result, but with related noise) and substitutes each $x_i$ (in the above polynomial equation) with the corresponding 3LIN equation.
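As a quick sanity check on the stated bound (my own illustration, not the paper's algorithm), one can verify by exhaustive enumeration that a uniformly random assignment satisfies the extremal equation with probability exactly $2^{-(d-1)}-2^{-(2d-1)}$:

```python
from itertools import product
from fractions import Fraction

def random_assignment_success(d):
    """Exact probability that a uniformly random 0/1 assignment satisfies
    x_1...x_d + x_{d+1}...x_{2d} = 1 over GF(2), by enumeration."""
    count = 0
    for bits in product([0, 1], repeat=2 * d):
        left = all(bits[:d])    # product of the first d variables
        right = all(bits[d:])   # product of the last d variables
        if left != right:       # their GF(2) sum equals 1
            count += 1
    return Fraction(count, 2 ** (2 * d))

for d in range(1, 6):
    formula = Fraction(1, 2 ** (d - 1)) - Fraction(1, 2 ** (2 * d - 1))
    assert random_assignment_success(d) == formula
```

Indeed, each degree-$d$ product is 1 with probability $2^{-d}$, and the equation holds iff exactly one of the two (independent) products is 1, giving $2\cdot 2^{-d}(1-2^{-d})$.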

2. Algorithmic Extensions of Cheeger's Inequality to Higher Eigenvalues and Partitions
by Anand Louis, Prasad Raghavendra, Prasad Tetali and Santosh Vempala.

Given the wide applicability of Cheeger's Inequality, any meaningful extension of it, let alone one that revisits the underlying ideas, seems of natural interest. The original inequality sandwiches the expansion/conductance between functions of the second eigenvalue, and the current work upper bounds generalizations of the expansion by functions of the higher eigenvalues.
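For concreteness, here is a minimal sketch (my own illustration, not from the paper) that verifies the original two-sided Cheeger bound on a small example, using the known closed-form spectrum of the cycle:

```python
import math
from itertools import combinations

# The 6-cycle C_6: a 2-regular graph, given by its adjacency matrix.
n, d = 6, 2
A = [[1 if (i - j) % n in (1, n - 1) else 0 for j in range(n)] for i in range(n)]

# Second-smallest eigenvalue of the normalized Laplacian of C_n
# (known closed form for the cycle: 1 - cos(2*pi/n)).
lam2 = 1 - math.cos(2 * math.pi / n)

# Conductance phi(G): minimize e(S, S^c) / (d * |S|) over sets with |S| <= n/2.
phi = min(
    sum(A[i][j] for i in S for j in range(n) if j not in S) / (d * len(S))
    for k in range(1, n // 2 + 1)
    for S in combinations(range(n), k)
)

# Cheeger's inequality: lam2 / 2 <= phi <= sqrt(2 * lam2).
assert lam2 / 2 <= phi <= math.sqrt(2 * lam2)
```

Here $\lambda_2 = 1/2$ and $\phi = 1/3$ (the best cut splits the cycle into two arcs), so both sides of the inequality are visible.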

3. Approximating lattice problems using an approximate shortest vector oracle
by Chandan Dubey and Thomas Holenstein.

The Shortest Vector Problem (SVP) is considered easier than the Closest Vector Problem (CVP); indeed, it is known that an $f$-factor approximation for SVP can be obtained via an $f$-factor approximation for CVP. Still, Kannan showed that approximate CVP can be reduced to approximate SVP, while losing in the approximation factor; specifically, $f^2 n^{3/2}$-CVP is reduced to $f$-SVP. The current paper reduces the loss to $f^2 n^{1/2}$, which feels right (at least wrt the dependence on $n$). (Caveat: I'm not an expert on lattices!)

4. Approximating the influence of a monotone Boolean function in $O(\sqrt{n})$ query complexity
by Omri Weinstein, Ronitt Rubinfeld, Dana Ron and Muli Safra.

The influence of an $n$-variable Boolean function $f$, denoted $\I(f)$, is $\sum_{i\in[n]}\prob_x[f(x)\neq f(x^i)]$, where $x^i$ denotes the string $x$ with the $i$th bit flipped. While obtaining a constant factor approximation of the influence of general functions has query complexity $\Theta(n/\I(f))$, this paper shows that in the case of monotone functions the query complexity is $\Theta(\sqrt{n}/\I(f))$.

The main observation underlying the algorithm is that, for every $k \leq \sqrt{n}$ (actually only $k = \sqrt{n}$ is used), the probability $\prob_{x,i}[f(x) \neq f(x \vee \chi_{\{i\}})]$ is relatively well approximated by $\prob_{x,I:|I|=k}[f(x) \neq f(x \vee \chi_I)]/k$, where $\chi_I$ denotes the vector that is 1 in the positions of $I$ and zero elsewhere. Since the latter probability is about $k$ times larger, approximating it to within the same relative accuracy via repeated samples requires a factor of $k$ fewer samples.
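The two quantities can be compared exactly by enumeration on a small monotone function; for majority they happen to coincide exactly (in general the observation asserts only approximate agreement). A sketch of mine, assuming nothing from the paper beyond the definitions:

```python
from itertools import product, combinations
from fractions import Fraction

def maj(x):
    """Majority of an odd number of bits."""
    return 1 if sum(x) > len(x) // 2 else 0

def p_single(f, n):
    """Pr_{x,i}[f(x) != f(x OR e_i)], computed exactly."""
    hits = 0
    for x in product([0, 1], repeat=n):
        for i in range(n):
            y = list(x); y[i] = 1
            hits += f(x) != f(tuple(y))
    return Fraction(hits, n * 2 ** n)

def p_kflip(f, n, k):
    """Pr_{x,I:|I|=k}[f(x) != f(x OR chi_I)], computed exactly."""
    sets = list(combinations(range(n), k))
    hits = 0
    for x in product([0, 1], repeat=n):
        for I in sets:
            y = list(x)
            for i in I:
                y[i] = 1
            hits += f(x) != f(tuple(y))
    return Fraction(hits, len(sets) * 2 ** n)

n, k = 5, 2
assert p_kflip(maj, n, k) / k == p_single(maj, n)
```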

5. On Approximating the Number of Relevant Variables in a Function
by Dana Ron and Gilad Tsur

This work considers what can be gained by relaxing the notion of testing $k$-juntas to distinguishing $k$-juntas from functions that are far from, say, $2k$-juntas. That is, it considers a ``double approximation'' obtained by combining the notion of close functions (underlying property testing) with the standard notion of approximation (for the number of influential variables). The main result is that one does not gain much: ignoring the effect of the proximity parameter (or viewing it as fixed), the query complexity of the latter problem is $\Theta(k)$, whereas the standard testing task has complexity $\tilde{O}(k)$.

In contrast, if we are promised that the function is linear (or a low-degree polynomial), then the complexity can be reduced to a constant (whereas the $\Omega(k)$ lower bound on testing still holds).
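To illustrate one way such constant-query savings can arise for parities (my own illustration, not necessarily the paper's algorithm): if each input bit is set independently with probability $1/(2k)$, then the acceptance probability of a parity on $s$ relevant variables is $(1-(1-1/k)^s)/2$, which separates $s=k$ from $s=2k$ by a constant gap, so constantly many such biased samples suffice.

```python
def accept_prob(s, k):
    """Pr[parity of s relevant variables = 1] when each input bit is set
    independently with probability 1/(2k): (1 - (1 - 1/k)^s) / 2."""
    return (1 - (1 - 1.0 / k) ** s) / 2

# The gap between s = k and s = 2k relevant variables stays bounded away
# from 0 as k grows (it tends to (e^{-1} - e^{-2}) / 2 ~ 0.116).
for k in [10, 100, 1000]:
    gap = accept_prob(2 * k, k) - accept_prob(k, k)
    assert gap > 0.1
```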

6. Public-Key Locally-Decodable Codes with Short Keys
by Brett Hemenway, Martin Strauss, Rafail Ostrovsky and Mary Wootters

This work refers to the computational channel model, which ``resides'' between the worst-case error model of Hamming and the random error model of Shannon. In this intermediate model, introduced by Lipton in the early 1990s, the errors are determined by a computationally bounded adversary, whereas the communicating parties either share some secret information or rely on some public-key infrastructure.

The current work provides an improved construction of a locally decodable code (LDC) for this setting. Specifically, it weakens the cryptographic assumption, or rather the cryptographic primitive on which the code is based (from a computational PIR to a standard public-key encryption scheme), and reduces two main complexity measures (i.e., the query complexity and the length of the public key).

7. Dense locally testable codes cannot have constant rate and distance
by Irit Dinur and Tali Kaufman

The paper considers four parameters of LTCs: (1) their distance (ideally, linear); (2) their rate (ideally, constant); (3) their locality/query complexity (ideally, a constant); and (4) their density, i.e., the number of short constraints. Referring to the open problem of whether ideal levels can be achieved simultaneously for parameters (1)-(3), this paper shows that this cannot be achieved by a dense code (i.e., one that satisfies a super-linear number of constraints of constant length).

8. Invited Talk: Some Open Problems in Approximation Algorithms
by David Williamson

David gave a very interesting talk, pivoted on his recent book on approximation algorithms co-authored with David Shmoys. In the first part of the talk, he discussed some of the choices made in the process of writing this book (most importantly, the decision to organize the material around techniques rather than around the problems being solved). In the second part, he discussed the list of ten open problems selected for the concluding chapter of the book.

Among the ten problems he listed, I wish to highlight three. The first is the problem of efficiently coloring a 3-colorable graph with as few colors as possible; the current upper bound is $n^{0.211}$, whereas no constant number may suffice (assuming the Unique Games Conjecture). Can an efficient algorithm use a polylogarithmic number of colors? (How about "just" an $n^{o(1)}$ number?) The second problem refers to Asymmetric TSP under the triangle inequality, where a recent breakthrough offers a sub-logarithmic approximation factor. Can a constant factor approximation be achieved? The last problem refers to TSP under the triangle inequality, where we have been stuck with a 1.5-factor approximation since the mid-1970s. Can this be improved?

9. Testing Graph Blow-Up
by Lidor Avigad and Oded Goldreich

I started the talk by saying that I strongly disagree with an often voiced opinion by which "testing graph properties in the dense graph model is well understood." This opinion seems to be rooted in the celebrated characterization of properties that are testable in complexity that depends only on the proximity parameter. Putting aside the fact that this characterization is not really "effective", I noted that it merely draws the borders of an interesting class of problems, but does not even attempt to classify these problems according to their relative complexity (i.e., the growth rate of complexity in terms of the proximity parameter). Needless to say, there is much to understand wrt the latter issue (e.g., polynomial vs super-polynomial dependence). This work is aimed at trying to understand the lowest (reasonable) complexity class; that is, the graph properties that are testable by a nonadaptive tester that uses a number of queries that is (almost) linear in the reciprocal of the proximity parameter. In particular, we show that this class contains a non-trivial set of natural problems, where each problem refers to the property of being a blow-up of some fixed graph (i.e., each fixed graph yields a different property - the set of graphs that can be obtained by a blow-up of this fixed graph).
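A minimal sketch of the blow-up operation just described (my own illustration): each vertex of the fixed graph is replaced by an independent cloud of vertices, and two clouds are completely connected iff the corresponding vertices are adjacent.

```python
def blow_up(H, sizes):
    """Blow-up of a graph H (adjacency matrix as a list of 0/1 lists):
    vertex i of H becomes an independent cloud of sizes[i] vertices, and
    clouds i, j are completely connected iff {i, j} is an edge of H."""
    n = len(H)
    # cloud[v] = index of the H-vertex whose cloud contains vertex v
    cloud = [i for i in range(n) for _ in range(sizes[i])]
    N = len(cloud)
    return [[H[cloud[u]][cloud[v]] for v in range(N)] for u in range(N)]

# Example: blowing up a single edge (K_2) with cloud sizes (2, 3)
# yields the complete bipartite graph K_{2,3}.
K2 = [[0, 1], [1, 0]]
G = blow_up(K2, [2, 3])
assert sum(sum(row) for row in G) == 2 * 2 * 3  # 6 edges, each counted twice
```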

10. Proximity Oblivious Testing and the Role of Invariances
by Oded Goldreich and Tali Kaufman

This work presents a general notion of properties that are defined by a small number of local constraints and an action on the functions' domain (and range). Such properties are said to satisfy the invariance condition, and they are clearly closed under the corresponding action. The question is whether satisfying the invariance condition (IC) is related to easy testability, and specifically to proximity-oblivious testing (POT). The plain answer is negative: In general, satisfying IC may not imply having a POT, and having a POT may not imply satisfying IC. However, in some interesting cases (i.e., in some "models of property testing"), the two notions are equivalent: E.g., in the model of testing graph properties in the adjacency matrix representation, a property satisfies IC iff it has a POT. Interestingly, natural models of property testing (including the one above) can be defined in terms of this formalism; i.e., as the class of all properties that are closed under a certain action.

11. Inflatable graph properties and natural property tests
by Eldar Fischer and Eyal Rozenberg

The starting point of this work is the question of how a tester (say, of graph properties in the adjacency matrix representation) may depend on the size of the object (i.e., the number of vertices in the graph). Three main types of dependency are discussed: (1) The size of the graph determines the set of vertices, which seems essential in order to make queries. But this seems an artificial problem, which may be avoided in case the tester just queries the subgraph induced by a random set of vertices of a specific size. (2) The size of the graph may determine the query complexity of the tester. But this is irrelevant in case the query complexity only depends on the proximity parameter. The last dependency seems more fundamental: (3) The tester's decision may depend (also) on the size of the graph; for example, consider the property that consists of all odd-sized graphs (or all odd-sized bipartite graphs and all even-sized 3-colorable graphs). Thus, unsurprisingly, a tester may be oblivious of the size of the graph only if the property is the "same" for all graph sizes. The question, of course, is what it means for a property to be the same for all graph sizes. The paper makes a convincing case that the right answer is that the property is both approximately hereditary and inflatable (which intuitively corresponds to downward and upward approximate-closures, respectively).

Furthermore, in the case that the property is both approximately hereditary and inflatable, it is shown that any tester having feature (2) can be converted into one that does not depend on the size of the graph, while preserving the query complexity up to a polynomial blow-up. Note that the original tester may have two-sided error, whereas if the property is (perfectly) hereditary then this transformation [combined with prior results] yields a one-sided error tester of polynomially related query complexity. Recall that it is known that (perfectly) hereditary properties can be tested within a number of queries that depends only on the proximity parameter, but the said complexity may be huge. (N.B.: Properties that are approximately hereditary may exhibit a huge gap between the complexities of one-sided error and two-sided error testers.)

12. Extractors and Lower Bounds for Locally Samplable Sources
by Anindya De and Thomas Watson

This work lumps together two remotely related results. The first result is a deterministic extractor for the class of distributions that can be generated by NC0 circuits. Since in such circuits each output depends on a constant number of inputs, these sources are called local. The key idea is that such sources can be approximated by convex combinations of (a restricted type of) affine sources, and so extractors that work for the latter will do here. This demonstrates again that the study of a clean model of sources may lead to results regarding sources of natural appeal, even when the former model seems to lack such an appeal.

The second result is related to Viola's study of the complexity of distributions [FOCS'10]. A stronger result was obtained independently by Viola [FOCS'11].

13. Query Complexity in Errorless Hardness Amplification
by Thomas Watson

This paper presents a query-optimal reduction for hardness amplification in the context of errorless circuits. Most importantly, the dependence on the desired hardness is reduced from linear to logarithmic.

14. Lower bounds on the query complexity of non-uniform and adaptive reductions showing hardness amplification
by Sergei Artemenko and Ronen Shaltiel

This work establishes the folklore belief by which hardness amplification actually trades off two hardness measures: the success probability is reduced at the cost of a corresponding increase in the size of the circuits (for which the success bound is stated). The main issue is dealing with (fully) non-uniform reductions (i.e., ones that utilize advice that may depend arbitrarily on the oracle) and with adaptivity.
