Dagstuhl workshop on Computational Complexity of Discrete Problems

I found the Dagstuhl workshop on Computational Complexity of Discrete Problems very engaging. Following are brief comments regarding twenty talks that I attended and feel comfortable commenting on.

Recent developments in arithmetic circuits (Shubhangi Saraf)

VP and VNP are classes of polynomials that may be viewed as the Arithmetic Circuit analogues of P and NP. A relatively recent line of results shows that separating them amounts to proving strong enough size lower bounds on depth-four Arithmetic circuits. Specifically, while every polynomial in VP has depth-four A-circuits with bottom fan-in sqrt(n) and size $n^{O(sqrt(n))}$, VNP has polynomials for which every depth-four A-circuit with bottom fan-in sqrt(n) must have size $n^{Omega(sqrt(n))}$. Hence a separation would follow if the latter lower bound can be improved (i.e., the Omega replaced by an omega, or even just the constant in the Omega made sufficiently large).

Still, one should be careful about being too optimistic, since the $n^{Omega(sqrt(n))}$ lower bound holds also for polynomials in VP...

Note (Apr 9, 2014): See ECCC TR14-045.

Circuits with medium fan-in (Anup Rao)

The model here is analogous to the one in my paper with Avi, ``Boolean Circuits of Depth Three and Arithmetic Circuits with Arbitrary Gates''. Specifically, the current work studies the size of circuits with general Boolean gates of a given arity that compute some Boolean function in P, whereas our paper studies Arithmetic circuits with general (set) multi-linear gates that compute (set) multi-linear functions. Here the arity is fixed and one seeks lower bounds on the size, whereas in our paper the focus is on the maximum of the arity and the size.

For a fixed arity $k$ (say $k=2n/3$), almost all functions require $2^{n-k}$ gates, and this bound is tight (e.g., consider the $2^{n-k}$ residual functions obtained by all possible fixings of the values of $n-k$ variables). The main result is that there exists a function in P that requires $Omega((\log n)^2)$ gates of arity $2n/3$. On the other hand, every simple function (i.e., one having a log-depth (bounded fan-in) circuit of linear size) can be computed by $o(n)$ gates of arity $n^{1/5}$.

Indexing irreducible polynomials over finite fields (Swastik Kopparty)

Indexing a set $S$ means having an efficiently computable bijection from $[|S|]$ to $S$. The inverse of such a bijection is a ranking function only if the indexing function is monotone (which is not required here). The main result is that indexing the set of irreducible polynomials of degree $n$ over $F=GF(q)$ is deterministically reducible to (1) finding such an irreducible polynomial $p$, and (2) finding a generator for $F[x]/p(x)$. The reduction proceeds via the problem of indexing the ``rotation classes'' of $[q]^n$ that have (full) period $n$, where rotation classes are equivalence classes (of $n$-long sequences over $[q]$) such that two sequences are equivalent if they are rotations of one another.
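As a sanity check on the reduction's target, one can verify by brute force that the number of full-period rotation classes of $[q]^n$ matches the standard necklace-counting formula $(1/n)\sum_{d|n}\mu(d)q^{n/d}$, which is also the number of monic irreducible polynomials of degree $n$ over $GF(q)$. The following toy sketch is my own illustration, not code from the talk:

```python
from itertools import product

def rotation_classes(q, n):
    """Group all n-long sequences over [q] into rotation classes."""
    seen, classes = set(), []
    for s in product(range(q), repeat=n):
        if s in seen:
            continue
        cls = {s[i:] + s[:i] for i in range(n)}
        seen |= cls
        classes.append(cls)
    return classes

def mobius(d):
    """Naive Mobius function via trial division."""
    result, m, p = 1, d, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # square factor
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

q, n = 3, 4
classes = rotation_classes(q, n)
full_period = [c for c in classes if len(c) == n]
# The count of full-period classes matches the necklace formula, which also
# counts monic irreducible polynomials of degree n over GF(q).
formula = sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n
print(len(full_period), formula)  # both 18 for q=3, n=4
```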

Small circuits and big numbers; better bounds on computing the bits of arithmetic circuits (Eric Allender)

The computational problems considered are of computing bits in the binary representation of large numbers that are presented in a succinct manner. For example, given an arithmetic circuit $C:\R^n\to\R$, an integer $i$, and a binary input $x\in\{0,1\}^n$, compute the $i$th bit in the binary expansion of $C(x)$. This problem is #P-hard and resides in the counting hierarchy. The complexity of computing the $i$th bit of $\pi$ is no larger. Another problem is, given natural numbers $x < y$ and $i$, to compute the $i$th bit in the binary expansion of $x/y$. This problem is in PH, and it is in P in the special case of $x=1$ (or when $x$ is a power of two). What happens if $x=3$?
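As an aside on the easy case: when $y$ and $i$ are given explicitly in binary, the $i$th fractional bit of $1/y$ falls out of a single modular exponentiation, since writing $2^i = qy+r$ shows that the bit $q$ mod 2 equals 1 iff $2^i$ mod $2y$ is at least $y$. The following sketch is my own illustration of this identity (recall that the talk concerns succinctly presented numbers, where such shortcuts need not be available):

```python
def bit_of_inverse(y, i):
    """The i-th bit after the binary point of 1/y.

    Writing 2**i = q*y + r gives 2**i % (2*y) == (q % 2)*y + r, so the
    bit floor(2**i / y) % 2 is 1 iff pow(2, i, 2*y) >= y; fast modular
    exponentiation makes this polynomial in the bit-lengths of y and i."""
    return 1 if pow(2, i, 2 * y) >= y else 0

# 1/3 = 0.010101..._2 and 1/5 = 0.00110011..._2
print([bit_of_inverse(3, i) for i in range(1, 9)])  # [0, 1, 0, 1, 0, 1, 0, 1]
print([bit_of_inverse(5, i) for i in range(1, 9)])  # [0, 0, 1, 1, 0, 0, 1, 1]
```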

On the Unique Games Conjecture and the Approximation Resistance of Predicates (Subhash Khot)

This was a double feature on two remotely related topics. The UGC states that for every $\e,\d>0$, there exists $k=k(\e,\d)$ such that, given a 2-CSP with permutation constraints over $k$ labels, it is NP-hard to distinguish instances of value at least $1-\e$ from instances of value at most $\d$. The problem is equivalent to the case that $\d$ is as high as $1-\omega(sqrt(\e))$. In general, a quadratic gap seems fundamental here.

In the second talk, the focus is on $k$-ary CSPs over Boolean values, where all constraints are induced by a single predicate (allowing possible negations of the chosen variables). The density of a predicate $f$, denoted $\rho(f)$, is the probability that $f$ evaluates to one under a random assignment, and $f$ is called approximation resistant if for every $\e>0$ it is hard to distinguish instances of value at least $1-\e$ from instances of value at most $\rho(f)+\e$. The result is a characterization of approximation resistant predicates, but the characterization is not effective (i.e., it is not clear that the characterizing condition is decidable).

Improved Inapproximability results for Hypergraph Coloring (Prahladh Harsha)

Given a simple 3-colorable graph, we can color it with $n^{0.204}$ colors in polynomial time, but coloring it with four colors is NP-hard and coloring it with polylogarithmically many colors is UGC-hard. Turning to 8-uniform hypergraphs, it is shown that given a 2-colorable hypergraph it is hard to color it with $exp(exp(sqrt(log log n)))$ colors. The techniques involve the use of the ``short long code'', which provides the evaluation of the input under all low degree polynomials (rather than under all polynomials/functions).

(2+epsilon)-Sat is NP-hard (Johan Hastad)

See a prior record

Small de Morgan formulas make low-frequency sound: Fourier concentration from shrinkage (Valentine Kabanets)

A function $f$ has Fourier concentration (with error $\gamma$) on degree $t$ if $\sum_{S:|S|>t} f_S^2 < \gamma$. It is known that functions computed by AC0 circuits of size $s$ have F-concentration on polylogarithmic (in $s$) degree. This work shows that if $f$ has a de-Morgan formula of size $s$, then it has F-concentration on degree $s^{0.51}$, where the half corresponds to the reciprocal of the shrinkage exponent (i.e., two).
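To make the notion concrete, here is a brute-force sketch of mine (not from the talk) that computes the Walsh-Fourier spectrum of a tiny function and measures its tail weight above a given degree; AND on four variables serves as a convenient small de Morgan formula:

```python
from itertools import product

def fourier_coefficients(f, n):
    """Brute-force Walsh-Fourier coefficients of f : {0,1}^n -> {-1,+1}."""
    coeffs = {}
    for S in product((0, 1), repeat=n):
        total = sum(f(x) * (-1) ** sum(s * b for s, b in zip(S, x))
                    for x in product((0, 1), repeat=n))
        coeffs[S] = total / 2 ** n
    return coeffs

def tail_weight(coeffs, t):
    """Fourier weight above degree t, i.e. sum_{S : |S| > t} f_S^2."""
    return sum(c ** 2 for S, c in coeffs.items() if sum(S) > t)

# AND of 4 variables in +-1 notation (-1 = true).
n = 4
f = lambda x: -1 if all(x) else 1
coeffs = fourier_coefficients(f, n)
total = sum(c ** 2 for c in coeffs.values())
print(total)                   # 1.0, by Parseval
print(tail_weight(coeffs, 2))  # 5 * 2^(-6) = 0.078125: most weight is low
```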

Direct product testing (Irit Dinur)

Following a general perspective, the focus was on the $k$-parallel repetition of the following confuse-and-compare game. With probability $p$, the referee sends the same random value $v\in V$ to both parties and accepts iff they answer with the same value. Otherwise, it sends them two independent random values and always accepts. The question is how the success probability of arbitrary strategies $A,B:V^k\to\{0,1\}^k$ relates to their ``closeness'' to being direct products of some functions $a,b:V\to\{0,1\}$.

For high success probability $1-\e$ and $p=1/sqrt(k)$ one gets good approximate agreement (i.e., $A$ agrees with some $a^k$ on almost all coordinates of a random $k$-tuple), and the new result, which refers to $p=Omega(1)$, yields exact agreement (i.e., agreement on all coordinates). For low success probability (i.e., as low as $exp(-sqrt(k))$), one gets occasional agreement.

Constant rate PCPs with sublinear query complexity (Eli Ben-Sasson)

The point is obtaining PCPs of length that is linear in the length of the original NP-witness (for SAT). This can be done with query complexity that is an arbitrarily small power of that length. Such a trade-off seems inherent if one is going to use the BFLS paradigm of embedding the input in a fixed routing network: in such a case the degree of the network lower-bounds the query complexity, and its depth lower-bounds the overhead factor (in the length).

Set membership with two bit probes (Jaikumar Radhakrishnan)

The task is to succinctly represent an $n$-sized subset of $[m]$ such that membership queries can be answered by $t$ probes. It is known that the required length is at least $t n^{1-(1/t)} m^{1/t}$ whereas length $t n m^{4/(t+1)}$ suffices for odd $t$. But the upper bound is useless for $t=2,3,4$. Focusing on $t=2$, it is shown that length $m^{1-(1/4n)}$ suffices whereas length $m^{1-(4/n)}$ is necessary.

Recent progress on interactive error correction: an overview (Mark Braverman)

The focus was on the rate of interactive error correction with coding over a constant-size alphabet. In such a case, non-interactive coding has rate $0.49999$ and rate $0.99999$ can be obtained in the list-decoding setting. In the interactive setting, the rate seems to double, but one can improve over it when the communication pattern (i.e., who talks when) is adaptive (which requires careful modeling with several non-equivalent models). A main tool in all of this is tree-codes, which are prefix codes $C:\{0,1\}^n\to\Sigma^n$ (i.e., the $i$-bit prefix of $C(x)$ only depends on the $i$-bit prefix of $x$) such that the distance between $C(x)$ and $C(y)$ is $Omega(n-|c(x,y)|)$, where $c(x,y)$ is the longest common prefix of $x$ and $y$.

An Interactive Information Odometer and Applications (Omri Weinstein)

The task is described by the following two-party functionality $(p,q)\mapsto (X,X)$, where $X$ is 1 with probability $(p-q)^2$ and zero otherwise. The aim is to perform this task while leaking a comparable amount of information. (Btw, the task of having $X$ be 1 with probability $|p-q|$ can be solved by Correlated Sampling; i.e., having each party toss a coin with bias determined by its input, using the same shared randomness, exchanging the outcomes, and outputting 1 iff they differ.)
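The parenthetical solution is easy to simulate; note that it relies on shared randomness (both parties threshold the same uniform value), since independent coins would differ with probability $p(1-q)+q(1-p)$ rather than $|p-q|$. A Monte Carlo sketch of mine:

```python
import random

def correlated_sample(p, q, rng):
    """Both parties threshold the SAME shared uniform value u; party A
    outputs [u < p], party B outputs [u < q]. The outputs differ exactly
    when u falls between p and q, i.e. with probability |p - q|."""
    u = rng.random()
    return (u < p) != (u < q)

rng = random.Random(0)
p, q = 0.7, 0.4
trials = 100_000
freq = sum(correlated_sample(p, q, rng) for _ in range(trials)) / trials
print(freq)  # close to |p - q| = 0.3
```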

Note (Apr 9, 2014): See ECCC TR14-047.

Extended Formulations and Information Complexity (Ankur Moitra)

Given a polyhedron $P$ with many faces, one may wish to express it as a projection of a higher-dimensional polyhedron that has fewer faces. At times, the number of faces can be decreased to a logarithm of the original number, and this is useful when one wants to optimize a function defined over these bodies. The extension complexity of $P$ is the minimum number of faces obtainable this way.

Arthur, Merlin, and Data Stream Computation (Amit Chakrabarti)

Interestingly, in the context of streaming, private-coin (probabilistic) proof systems are more powerful than their public-coin counterparts. The reason is that the sketch is generated based on random coins that may be tossed before the prover-verifier interaction, and so it is crucial whether these coins are given to the prover. (In contrast, for coins tossed during the interaction itself, after the stream was processed, public-vs-private is immaterial, as is the case whenever computational complexity is not an issue.) The results state an $Omega(n)$ lower bound on the product of the space and communication complexities for MA, but a three-message private-coin protocol of polylogarithmic complexity.

Turnstile Streaming Algorithms Might as Well Be Linear Sketches (David P. Woodruff)

The statement is proved in two stages. In the first stage, a generic streaming algorithm is made to depend only on the accumulated frequencies (obliviously of the particular stream that gave rise to them). The ideas have the flavour of the compression result for finite automata, but the details are far more complex here.
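The converse direction of the title is immediate: a linear sketch, by definition, depends only on the accumulated frequency vector, so any two streams with the same frequencies yield the same state. A toy random-sign sketch (my illustration, not the paper's construction):

```python
import random

def make_sketch(m, k, seed=0):
    """A random +-1 matrix, i.e. a toy linear map R^m -> R^k."""
    rng = random.Random(seed)
    return [[rng.choice((-1, 1)) for _ in range(m)] for _ in range(k)]

def sketch_stream(A, updates):
    """Process turnstile updates (i, delta), maintaining A * f where f is
    the frequency vector; the state depends only on accumulated frequencies."""
    state = [0] * len(A)
    for i, delta in updates:
        for r in range(len(A)):
            state[r] += A[r][i] * delta
    return state

A = make_sketch(m=8, k=3)
s1 = sketch_stream(A, [(0, 5), (2, -1), (0, -2)])
s2 = sketch_stream(A, [(2, -1), (0, 3)])  # different stream, same frequencies
print(s1 == s2)  # True
```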

Pseudorandom generators with optimal seed length for non-boolean poly-size circuits (Ronen Shaltiel)

Computational indistinguishability is defined with respect to distinguishers that output a binary value, but this work studies distinguishers $D$ that output non-binary values, requiring that $D(G(s))$ be statistically close to $D(r)$, where $s\in\{0,1\}^k$ and $r\in\{0,1\}^\ell$ are uniformly distributed and $\ell=|G(s)|$. The work considers a hybrid of the BMY and NW settings: On the one hand, the pseudorandom generator (PRG) is required to run in polynomial time (as in [BM,Y]), but on the other hand it is allowed more time than the distinguisher (as in [NW]). Such PRGs are constructed based on assumptions regarding non-deterministic complexity (as in works regarding derandomization of AM). Needless to say, their seed length must exceed the length of the output of the potential distinguishers.

Testing Equivalence of Polynomials under Shifts (Amir Shpilka)

Given two Arithmetic circuits, $C_1,C_2$, the problem is to find a shift (vector) $b$ such that $C_1(x)=C_2(x+b)$, if such exists. The main result is a (deterministic) reduction of this problem to PIT (which is the special case in which $C_2\equiv0$). This holds for any class of circuits that satisfies some (minimal) closure properties. The related problem of finding a linear transformation (i.e., a matrix $A$ and a vector $b$ such that $C_1(x)=C_2(Ax+b)$) is NP-hard.
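Note that verifying a given candidate shift $b$ is itself an instance of PIT, since $C_1(x)-C_2(x+b)$ should be the zero polynomial; the hard part, addressed by the talk, is finding $b$. A Schwartz-Zippel-style verification sketch (my own toy code, with hypothetical single-variable circuits given as callables):

```python
import random

def probably_equal_under_shift(C1, C2, b, n, trials=20, field=2**31 - 1):
    """Check that C1(x) == C2(x + b) as polynomials over a prime field,
    by comparing random evaluations (Schwartz-Zippel, one-sided error)."""
    rng = random.Random(1)
    for _ in range(trials):
        x = [rng.randrange(field) for _ in range(n)]
        xb = [(xi + bi) % field for xi, bi in zip(x, b)]
        if C1(x, field) != C2(xb, field):
            return False
    return True

# Toy circuits: C1(x) = (x0 + 1)^2 and C2(x) = x0^2, so C1(x) = C2(x + b)
# for the shift b = (1,).
C1 = lambda x, p: pow(x[0] + 1, 2, p)
C2 = lambda x, p: pow(x[0], 2, p)
print(probably_equal_under_shift(C1, C2, [1], 1))  # True
print(probably_equal_under_shift(C1, C2, [0], 1))  # False (w.h.p.)
```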

Approaches to bounding the exponent of matrix multiplication (Chris Umans)

Rephrasing the problem in terms of the asymptotic rank of the (3-dimensional) matrix multiplication tensor, progress was obtained by a sequence of sophisticated manipulations of tensors, including taking tensor powers, approximating tensors, and embedding tensors. So far all progress has relied on certain designs of small size (although searching for good designs is infeasible even at such small sizes). The group-theoretic approach offers the possibility of starting with a design of large size, and a recent version raises the hope that Abelian groups may suffice.
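For calibration, the classical starting point is that the $2\times2$ matrix multiplication tensor has rank at most 7, as witnessed by Strassen's identities:

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices using only 7 multiplications (Strassen),
    witnessing that the rank of the 2x2 matmul tensor is at most 7."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Recursing on this identity gives the exponent $\log_2 7 \approx 2.81$; the talk concerns the far more elaborate tensor manipulations used to push the exponent further down.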

Fixed Parameter Parallelism (Till Tantau)

For a class C, the parameterized class XC requires that the problem be in C for every fixed value of the parameter, whereas the FPT analogue of C requires that the dependence on the parameter be isolated; for example, XP asks that for each $k$ the complexity take the form $n^{f(k)}$, whereas FPT asks that it take the form $f(k) n^{O(1)}$. The question is whether problems in $FPT \cap XL$ have an FPT-type algorithm that has space complexity of the XL type.
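The contrast between the two forms is worth seeing on a concrete problem; a standard example (my choice, not necessarily the talk's) is $k$-vertex-cover, whose bounded search tree runs in $2^k \cdot poly(n)$ time, of the FPT form, versus the trivial $n^{O(k)}$ enumeration, of the XP form. A minimal sketch:

```python
def vertex_cover(edges, k):
    """Bounded search tree for k-vertex-cover: for any edge, one of its two
    endpoints must be in the cover, so branch on both choices. The tree has
    depth k and branching factor 2, giving O(2^k * |E|) time overall --
    the f(k) * poly(n) shape of FPT, versus the n^O(k) brute force."""
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    for pick in (u, v):
        rest = [e for e in edges if pick not in e]
        if vertex_cover(rest, k - 1):
            return True
    return False

triangle = [(0, 1), (1, 2), (0, 2)]
print(vertex_cover(triangle, 2))  # True: {0, 1} covers all edges
print(vertex_cover(triangle, 1))  # False: no single vertex suffices
```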

