Comments on some talks presented at a complexity workshop

Following are my comments on a few of the talks given at the Oberwolfach meeting on Complexity, which took place on November 11-16, 2018.

On Circuit Lower Bounds for Nondeterministic Quasi-Polytime (presented by Ryan Williams)

The framework, presented by Ryan in 2010, translates algorithmic improvements over exhaustive search into circuit lower bounds for functions computable in NEXP (or in smaller nondeterministic time classes). The framework has two versions: one relates to the problem of deciding the satisfiability of circuits from a chosen class C, and the other relates to the problem of distinguishing circuits that are satisfied by at least half of the possible assignments from circuits that are unsatisfiable, a (promise) problem which is easily solved in RP. The satisfiability problem (for the circuit class C) is denoted C-SAT, and the corresponding derandomization problem is denoted gap-C-SAT; in both problems we seek a deterministic algorithm whose running time is smaller than $2^n$, where $n$ is the length of the input to the circuit.
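
For concreteness, here is a schematic statement of the two problems (in my own notation), for a circuit $C$ from the class C over $n$ input bits:

  C-SAT: decide whether there exists $x\in\{0,1\}^n$ such that $C(x)=1$;
  gap-C-SAT: accept if $\Pr_{x\in\{0,1\}^n}[C(x)=1]\geq1/2$ and reject if $C$ is unsatisfiable,

where in both cases the trivial exhaustive-search algorithm runs in time $2^n\cdot\poly(|C|)$.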

The framework was originally applied to algorithms that run in time $2^n/n^c$, for a sufficiently large constant $c$, yielding size lower bounds for functions in NEXP. The current work applies this framework to algorithms that offer a more significant improvement in the running time; specifically, to algorithms running in time $2^{n-n^{c'}}$, for any positive constant $c'$. In doing so, it obtains size lower bounds for functions computable in quasi-polynomial nondeterministic time.

A point worth highlighting is that algorithms solving C-SAT in time $2^{(1-\Omega(1))n}$ are unlikely to exist even when C is the class of CNFs of bounded clause length, since their existence would refute the strong ETH. On the other hand, such (deterministic) algorithms for the derandomization problem (i.e., algorithms solving gap-C-SAT in time $2^{(1-\Omega(1))n}$) are widely believed to exist (e.g., BPP=P (in the promise-problem version) yields even stronger results).

The new results are based on analogous scaled-down versions of the easy witness lemma, which seem harder to prove. These lemmas refer to a generic deterministic verifier V of proofs, where these proofs are viewed as witnesses. Such a lemma asserts that if the set S of valid assertions accepted by V (with an adequate proof) has circuits of small size, then each input in S has proofs (or witnesses) accepted by V that can be described by circuits of relatively small size. The original lemma referred to sets in NEXP and poly-size circuits, whereas the current lemmas refer to a smaller gap between the NTIME bound and the circuit-size bound. For example, such a lemma can refer to any super-polynomial NTIME bound and the circuit class P/poly. Alternatively, it can refer to NP and translate a bound of $n^c$ on the size of circuits for the decision problem into a bound of $n^{O(c^3)}$ on the size of circuits describing witnesses.

On the Hardness of 2-to-2 Games (presented by Dor Minzer)

The Unique Game Conjecture refers to distinguishing instances (to be defined next) for which almost all constraints can be simultaneously satisfied from instances for which only a few constraints can be simultaneously satisfied. Specifically, the constraints refer to two variables assigned values in a fixed alphabet, and the uniqueness condition postulates that for each value assigned to one variable there exists at most one satisfying value for the other variable. The conjecture asserts that, for any $\epsilon>0$, it is hard to distinguish instances in which at least a $1-\epsilon$ fraction of the constraints can be simultaneously satisfied from instances in which at most an $\epsilon$ fraction of the constraints can be simultaneously satisfied.
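
To make the constraint types concrete, here is a small Python sketch (my own illustration, not from the talk) that represents a two-variable constraint as a set of allowed value pairs, checks the uniqueness condition (as well as the 2-to-2 relaxation discussed later in this entry), and computes the fraction of constraints satisfied by an assignment.

  # Each constraint is (u, v, allowed), where allowed is a set of pairs (a, b),
  # meaning: assigning a to u and b to v satisfies the constraint.

  def max_partners(allowed):
      left, right = {}, {}
      for a, b in allowed:
          left.setdefault(a, set()).add(b)
          right.setdefault(b, set()).add(a)
      return max(len(s) for s in list(left.values()) + list(right.values()))

  def is_unique(allowed):
      # Uniqueness: every value of either variable has at most one partner.
      return max_partners(allowed) <= 1

  def is_two_to_two(allowed):
      # The 2-to-2 relaxation: at most two partners on either side.
      return max_partners(allowed) <= 2

  def satisfied_fraction(constraints, assignment):
      sat = sum(1 for (u, v, allowed) in constraints
                if (assignment[u], assignment[v]) in allowed)
      return sat / len(constraints)

  # Toy instance over alphabet {0,1,2}: a single "unique" constraint between x and y.
  constraints = [('x', 'y', {(0, 1), (1, 2), (2, 0)})]
  print(is_unique(constraints[0][2]), satisfied_fraction(constraints, {'x': 0, 'y': 1}))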

The Unique Game Conjecture (UGC) yields optimal inapproximability thresholds for many natural problems (of the CSP type), and the most acute challenge to it arises from sub-exponential algorithms that solve this problem; specifically, their running time is $2^{n^\beta}$, where $\beta=\beta(\epsilon)$ is a constant that depends on $\epsilon$ and vanishes as $\epsilon$ tends to zero. The point is that an NP-hard problem is unlikely to have such algorithms (certainly, if the ETH holds), unless the reduction establishing its NP-hardness has a polynomial blow-up in which the degree of the polynomial is at least $1/\beta(\epsilon)$.
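
To spell out the arithmetic behind the last sentence (my own reconstruction of the standard reasoning): if the reduction maps instances of size $n$ to Unique Games instances of size $N=n^{1/\beta(\epsilon)}$, then running the sub-exponential algorithm on the reduced instance takes time

  $2^{N^{\beta(\epsilon)}} = 2^{(n^{1/\beta(\epsilon)})^{\beta(\epsilon)}} = 2^n$,

which does not contradict the ETH, whereas a reduction with a smaller polynomial blow-up (of degree $d<1/\beta(\epsilon)$) would yield a $2^{n^{d\cdot\beta(\epsilon)}}=2^{o(n)}$-time algorithm for the NP-hard problem, contradicting the ETH.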

Interestingly, the foregoing algorithms apply also to the problem of distinguishing instances in which at least half of the constraints can be simultaneously satisfied from instances in which at most an $\epsilon$ fraction of the constraints can be simultaneously satisfied, where half can be replaced by any fixed constant (that is independent of $\epsilon$). This is interesting since the current work shows that the latter problem is NP-hard, while indeed using a reduction with a polynomial blow-up in which the degree of the polynomial grows with $1/\epsilon$. Hence, in my opinion, the doubts regarding the UGC cast by these algorithms are eliminated.

The foregoing NP-hardness result follows by establishing the NP-hardness of the analogous 2-to-2 games, in which the instances are two-variable constraints such that for each value assigned to one variable there exist at most two satisfying values for the other variable. The main result is that, for any positive $\epsilon$, it is NP-hard to distinguish 2-to-2 game instances in which at least a $1-\epsilon$ fraction of the constraints can be simultaneously satisfied from instances in which at most an $\epsilon$ fraction of the constraints can be simultaneously satisfied. Furthermore, assuming the ETH, the time complexity of this problem, which is upper bounded by $2^{n^{\beta(\epsilon)}}$, is lower bounded by $2^{n^{\alpha(\epsilon)}}$, where $\alpha(\epsilon)\in(0,1)$ for every positive $\epsilon$.

On the Foundation of Program Obfuscation (presented by Rachel Lin)

While a strong notion of program obfuscation (known as virtual black-box) is known to be unimplementable, a weak notion that only requires that obfuscated versions of functionally-identical programs be indistinguishable is not ruled out and (in contrast to initial beliefs) has found many applications in cryptography. In particular, such indistinguishability obfuscators (iO) exist (in an uninteresting sense) if P=NP. Of course, believing that P is different from NP, the latter assertion only indicates that the impossibility of iO is hard to establish. But can we show that iO exists in a world in which P is different from NP, let alone under various reasonable complexity assumptions?

Being quite conservative wrt assumptions, I don't really feel at ease even with the DDH assumption, but researchers have investigated bilinear and multilinear versions of it, feeling quite comfortable with the bilinear version. The talk surveyed attempts to base iO on various incarnations of the multilinear version, while attempting to minimize the ``level'' of linearity with the hope of reaching level two. The current state at level three seems most promising, while first attempts at level two have failed (in ways that leave the door open to more sophisticated attempts).

On High Dimensional Expanders (presented by Tali Kaufman)

High Dimensional Expanders (HDE) are special cases of simplicial complexes, where a t-complex is a sequence $X=(X(0),X(1),...,X(t))$ such that $X(0)$ is a finite set of vertices and, for $i\geq1$, $X(i)$ is a collection of $(i+1)$-subsets of $X(0)$ that is closed under taking subsets (i.e., for $i<j$, each $(i+1)$-subset of a face in $X(j)$ is in $X(i)$). The elements of $X(i)$ are called $i$-faces. In particular, a 1-complex is a graph (in which the 1-faces are edges).

A high dimensional expander is a complex in which the underlying graph of each link is an expander (of bounded degree); that is, a complex is a $\lambda$-HDE if, for each link, all but the largest eigenvalue of (the normalized adjacency matrix of) its underlying graph reside in $[-\lambda,\lambda]$. Interestingly, HDEs (i.e., for $t \gt 1$) are a rare case; that is, their mere existence, let alone their explicit construction, is far from obvious. (In particular, a random bounded-degree complex is not an HDE, and in fact most of its links have an underlying graph with no edges.) This stands in contrast to the case of $t=1$, where a random d-regular graph is an expander (wvhp). On the other hand, showing that $X$ is an HDE reduces to showing that the underlying graph of $X$ is connected and that each 1-dimensional link is an expander (uniformly as above).
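
As a sanity check on these definitions, the following Python sketch (mine, not from the talk; it uses numpy and treats only 2-dimensional complexes given by their triangles) computes the underlying graph of the link of each vertex and the second-largest eigenvalue of its random-walk matrix.

  import itertools
  import numpy as np

  def link_graph(triangles, v):
      # Edges of the link of vertex v: pairs {u,w} such that {v,u,w} is a triangle.
      return [tuple(sorted(t - {v})) for t in triangles if v in t]

  def second_eigenvalue(edges):
      # Second-largest eigenvalue, in absolute value, of the random-walk matrix.
      verts = sorted({u for e in edges for u in e})
      idx = {u: i for i, u in enumerate(verts)}
      A = np.zeros((len(verts), len(verts)))
      for u, w in edges:
          A[idx[u], idx[w]] += 1
          A[idx[w], idx[u]] += 1
      W = A / A.sum(axis=1, keepdims=True)          # row-stochastic walk matrix
      return np.sort(np.abs(np.linalg.eigvals(W)))[-2]

  # Toy example: the complete 2-dimensional complex on 5 vertices.
  triangles = [set(t) for t in itertools.combinations(range(5), 3)]
  for v in range(5):
      print(v, second_eigenvalue(link_graph(triangles, v)))   # each link is K_4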

In any case, HDEs exist and can be explicitly constructed; in fact, the only known proof of their existence is constructive. An indication of their usefulness to complexity theory is provided by the fact that they can be used to construct an agreement test with parameters not achieved before. The global-vs-local behavior that they exhibit raises hope that they can be used towards constructing better locally testable codes and proofs (i.e., PCPs).

On Classical verification of quantum computations (presented by Thomas Vidick)

Thomas presented a work of Urmila Mahadev, which refers to the following notion. One may view it as an incarnation of a doubly-efficient proof system, since the prover's strategy has complexity that is polynomially related to the complexity of deciding, whereas the verifier is supposedly much more efficient (assuming BQP is different from BPP).

DEF: For a set S in BQP, a quantum-to-classical (q2c) argument system for S consists of a QPT prover P and a PPT verifier V satisfying the following conditions:

  1. Completeness (inputs in S are accepted whp): For all sufficiently long $x\in S$, the verifier V accepts w.p. at least 2/3 when interacting with P on common input x.
  2. Computational soundness (inputs outside S are rejected whp): For every QPT $\tilde P$ and all sufficiently long $x\not\in S$, the verifier V accepts $x$ w.p. at most 1/3 when interacting with $\tilde P$ on common input x.
The errors in both items can be reduced by sequential repetition. One may also consider non-uniform cheating provers in the computational soundness condition, but this is less called for here since the honest prover is a uniform machine that gets no auxiliary inputs (as opposed to the definition of ``classical'' argument systems for NP).

THM: Assuming that LWE (with exponential modulus and subexponential noise ratio) is hard for QPT, every set in BQP has a (four-round) q2c argument system.

On recent developments related to RL versus L (presented by Omer Reingold)

Omer surveyed three research threads related to the derandomization of RL.

  1. Hitting set generators (plus) for small error. Seed length $\tilde O(\log(nw)\log(n)+\log(1/\epsilon))$ was obtained, the point being that the dependence on the small error is added to the rest rather than being multiplied by a term that depends on $n$ (see the comparison sketched after this list). The "plus" means that this construct can even be used for derandomizing BPL, although it is not a PRG.
  2. Problems regarding (undirected) graphs beyond the problem of connectivity. Here (deterministic) algorithms of space complexity $\tilde O(\log n)$ were obtained for approximating parameters such as hitting time, commute time, and escape probabilities.
  3. PRGs for constant-width read-once branching programs, in two models.
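
Regarding item 1, a hedged quantitative comparison may help (my own, assuming I recall the classical benchmark correctly): Nisan's generator has seed length $O(\log(nw/\epsilon)\cdot\log n)$, in which the error enters multiplicatively. Thus, for $w=\poly(n)$ and $\epsilon=2^{-\log^2 n}$,

  $O(\log(nw/\epsilon)\cdot\log n)=O(\log^3 n)$ versus $\tilde O(\log(nw)\log(n)+\log(1/\epsilon))=\tilde O(\log^2 n)$,

which illustrates the gain from having the error term added rather than multiplied.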

Invariant theory: an introduction for complexity theorists (by Avi Wigderson)

I had a hard time following any of this, but I did hear the message loud and clear. It is that many central questions in complexity theory can be formulated, or rather cast, as questions that refer to the most basic concepts of invariant theory. It remains to be seen whether taking that perspective is useful (esp., towards making progress on our central questions).

On Doubly-efficient interactive proof systems (presented by Ron Rothblum)

I already wrote a lot about this subject (see choice 193 and surveys).

On Distributed PCP and Applications for Fine-Grained Complexity (presented by Aviad Rubinstein)

This result has nothing to do with PCP, and little to do with distributed computing. It rather refers to the fine-grained complexity of approximation versions of exact optimization problems that were previously studied in the fine-grained context.

The talk focused on the hardness of closest pair problems. Specifically, on input sets of $d$-dimensional 0-1 vectors $S$ and $T$, the desired output is the largest value of the inner product (over the integers) of $s$ and $t$, where $s\in S$ and $t\in T$. The complexity is stated in terms of $N=|S|=|T|$, assuming that $d=N^{o(1)}$. Specifically, assuming the SETH, it is shown that, for every positive $\epsilon$, no $O(N^{2-\epsilon})$-time algorithm can approximate this value to within a factor of $\exp((\log N)^{1-o(1)})$.
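
For orientation, the naive quadratic-time baseline is spelled out below (my own illustration, using numpy); the result says that improving on its $N^2$ factor, even when only a rather crude approximation is sought, would refute the SETH.

  import numpy as np

  def max_inner_product(S, T):
      # S, T: 0-1 matrices of shape (N, d); returns max over s in S, t in T of <s,t>.
      best = 0
      for s in S:                                  # N iterations ...
          best = max(best, int((T @ s).max()))     # ... each doing N inner products
      return best

  rng = np.random.default_rng(0)
  S = rng.integers(0, 2, size=(1000, 64))
  T = rng.integers(0, 2, size=(1000, 64))
  print(max_inner_product(S, T))                   # runs in time O(N^2 d)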

The reduction builds on the reduction used by Williams for establishing the corresponding hardness of the exact version. Specifically, given a k-CNF formula $\phi:\{0,1\}^n\to\{0,1\}$, we use $N=2^{n/2}\cdot2^{\sqrt{n}}$ and $d=2^{\log n}\cdot2^{\sqrt n}$, and refer to an MA communication complexity protocol for solving the set intersection problem using messages of length $\sqrt n$ and randomness complexity $\log n$. Denoting the strategies of the two parties by $A$ and $B$, for each input $\sigma\in\{0,1\}^{n/2}$, common randomness $r\in\{0,1\}^{\log n}=[n]$, and Merlin witness $w\in\{0,1\}^{\sqrt n}$, the $(r,m)$-th coordinate of $s^{\sigma,w}$ is 1 iff $A(\sigma,w,r)=m$ (i.e., A sends the message $m$ in this case). Likewise, for each input $\tau\in\{0,1\}^{n/2}$, the $(r,m)$-th coordinate of $t^{\tau,w}$ is 1 iff $B(\tau,w,r,m)=1$ (i.e., B accepts in case it gets the message $m$, when its input is $\tau$, it received the witness $w$, and it sees the coins $r$). Note that the inner product of $s^{\sigma,w}$ and $t^{\tau,w}$ equals $n$ times the probability that $B(\tau)$ accepts when interacting with $A(\sigma)$ and when the Merlin provides the witness $w$.
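
The following Python sketch (my own rendering of the construction described above; the strategies A and B are placeholders for the assumed MA communication protocol) makes the indexing explicit: vectors are indexed by pairs $(\sigma,w)$ or $(\tau,w)$, and coordinates by pairs $(r,m)$.

  import itertools

  def build_vectors(A, B, sigmas, taus, witnesses, coins, messages):
      # A(sigma, w, r) returns the message sent by the first party;
      # B(tau, w, r, m) returns 1 iff the second party accepts on message m.
      coords = list(itertools.product(coins, messages))          # (r, m) pairs
      S = {(sig, w): [1 if A(sig, w, r) == m else 0 for (r, m) in coords]
           for sig in sigmas for w in witnesses}
      T = {(tau, w): [1 if B(tau, w, r, m) == 1 else 0 for (r, m) in coords]
           for tau in taus for w in witnesses}
      return S, T

  # The inner product of S[(sigma, w)] and T[(tau, w)] counts the coins r on which
  # B accepts A's message, i.e., |coins| times the acceptance probability.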

Near-optimal constructions of epsilon-biased sets (presented by Amnon Ta-Shma)

The point is obtaining $\epsilon$-biased sequences of length $n$ using a sample space of size $O(n/\epsilon^2)$, rather than $O(n^2/\epsilon^2)$ and $O(n/\epsilon^3)$, which were known before.

The idea is to start with a small bias generator of small constant bias, denoted $c$, embed the possible seeds in a Ramanujan graph (expander of degree $2/c^2$ and spectral gap $1-c$), take a random walk of length $t=\log_2(1/\epsilon)+O(1)$ and XOR the corresponding pseudorandom sequences. Unfortunately, the bias of the resulting sequence is approximately $(2c)^{t/2} \approx c^{t/2}$, rather than $(2c)^t$, whereas the sample space grows by a factor of $(2/c^2)^t \approx (1/c)^{2t}$. That is, only every second random step contributes to decreasing the bias (at the expected rate of $c$). The solution is to take a pseudorandom walk on the $O(n)$-vertex expander graph of degree $d$, where the pseudorandom walk is generated by a walk on a $\poly(d)$-vertex graph of degree $d' \ll d$, but I don't really understand why it works. Nevertheless, the claim is that the bias is decreased by a factor of $\sqrt{d'}$ in each step.
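
To see the quantitative effect of "only every second step contributes", here is my own back-of-the-envelope count, using the approximate rates stated above: driving the bias down to $\epsilon$ at rate $c^{1/2}$ per step requires $t\approx2\log(1/\epsilon)/\log(1/c)$ steps, and hence

  sample space $\approx O(n)\cdot(1/c)^{2t}=O(n)\cdot(1/\epsilon)^4$,

whereas if every step reduced the bias by a factor of $c$ one could take $t\approx\log(1/\epsilon)/\log(1/c)$ and obtain a sample space of size $O(n)\cdot(1/\epsilon)^2$, which is the desired bound.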

On Pseudodeterministic Algorithms and Proofs (presented by Shafi Goldwasser)

The talk was intended to cover both pseudodeterministic algorithms and proof systems, but as Shafi feared ahead of time, she only covered the first type. Loosely speaking, pseudodeterministic algorithms are randomized algorithms for search problems that (whp) return the same ``canonical'' solution, whenever the instance does have a solution. That is, the focus is on search rather than on decision problems, and this setting raises an issue that does not arise in the case of decision problems. As commented by Madhu, the source of the issue is that the set of potential answers is not a predetermined set that can be easily enumerated (such as the set $\{0,1\}$ or $[\poly(n)]$).

Fixing a search problem $R \subseteq \{0,1\}^*\times\{0,1\}^*$, let $R(x) = \{y:(x,y)\in R\}$ denote the set of valid solutions for $x$, and let $S_R = \{x: R(x)\neq\emptyset\}$ denote the set of instances having a valid solution. We consider the "search analogues" of BPP (and P).

DEF: A relation $R$ is in "search-BPP" (resp., "search-P") if

  1. Membership in $R$ can be decided in PPT (resp., poly-time); that is, there exists a PPT (resp., deterministic poly-time) algorithm $V$ such that for every $(x,y) \in R$ it holds that $\prob[V(x,y)=1]\geq2/3$, and for every $(x,y) \not\in R$ it holds that $\prob[V(x,y)=0]\geq2/3$.
  2. Solutions can be found in PPT (resp., poly-time); that is, there exists a PPT (resp., deterministic poly-time) algorithm $A$ such that for every $x \in S_R$ it holds that $\prob[A(x) \in R(x)]\geq2/3$, whereas for every $x \not\in S_R$ it holds that $\prob[A(x)=\bot]\geq2/3$.
Condition (1) refers to the "meaningfulness" of solutions; that is, to the fact that they can be efficiently verified as valid.

Note that, both in the context of P and in that of BPP, (2) does not imply (1). On the other hand, in the context of deterministic algorithms (i.e., P), the question of whether "it is always the case that (1) implies (2)" is the "search formulation of the P-vs-NP problem"; that is, for polynomially bounded relations, "(1) always implies (2)" holds iff P=NP.

Seeking a pseudodeterministic algorithm for $R$ means replacing (2) by (2'): There exists a PPT (resp., deterministic poly-time) algorithm $A$ such that for every $x \in S_R$ there exists a canonical $y_x$ such that $\prob[A(x)=y_x]\geq2/3$, whereas for every $x \not\in S_R$ it holds that $\prob[A(x)=\bot]\geq2/3$.
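
A toy Python illustration of the difference between (2) and (2') (my own, not from the talk): for the trivial relation $R=\{(x,y):|y|=|x|\}$, every string of the right length is a valid solution, so finding some solution is easy, but only the second solver below is pseudodeterministic.

  import random

  def solver_plain(x):
      # Satisfies (2): always outputs a valid solution, but two runs typically differ.
      return ''.join(random.choice('01') for _ in x)

  def solver_pseudodet(x):
      # Satisfies (2'): always outputs the canonical solution y_x = 0^|x|.
      return '0' * len(x)

  x = '10110'
  print(solver_plain(x), solver_plain(x))            # very likely two different answers
  print(solver_pseudodet(x), solver_pseudodet(x))    # always the same answer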

In my opinion, a key message to complexity theorists is that the difference between BPP and promise-BPP translates to the difference between (2) and (2').

Hence, the difference between promise classes and pure decisional classes, often viewed as a "structural complexity" issue, translates here to a fundamental difference in (realistic) algorithmic capabilities. Indeed, if promise-BPP=P, then this difference disappears, but as long as we cannot establish the former equality we face two natural classes of search problems.

On the Algebraic CSP dichotomy theorem (presented by Venkat Guruswami)

The pivot of the dichotomy theorem is the notion of a polymorphism, which is a local mapping of several valid solutions to a CSP (wrt a fixed predicate) to a valid assignment. Actually, a polymorphism maps tuples of assignments that satisfy a fixed $k$-ary predicate $P$ to an assignment that satisfies $P$; that is, $f:D^m\to D$ is a polymorphism of $P:D^k\to\{0,1\}$ if whenever $x^{(i)}=(x_1^{(i)},...,x_k^{(i)})$ satisfies $P$ for every $i\in[m]$, it follows that $y=(y_1,...,y_k)$ s.t. $y_j=f(x_j^{(1)},...,x_j^{(m)})$ satisfies $P$.
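
The definition can be checked by brute force; the following Python sketch (mine, for illustration) tests whether $f:D^m\to D$ is a polymorphism of $P:D^k\to\{0,1\}$, and verifies the standard example that the coordinate-wise majority of three satisfying assignments is a polymorphism of a 2-SAT clause (in line with 2-SAT being solvable in polynomial time).

  import itertools

  def is_polymorphism(f, m, P, k, D):
      # For every m-tuple of satisfying assignments x^(1),...,x^(m) of P,
      # applying f coordinate-wise must yield a satisfying assignment of P.
      sats = [x for x in itertools.product(D, repeat=k) if P(x)]
      for rows in itertools.product(sats, repeat=m):
          y = tuple(f(tuple(rows[i][j] for i in range(m))) for j in range(k))
          if not P(y):
              return False
      return True

  maj3 = lambda bits: 1 if sum(bits) >= 2 else 0     # majority of three bits
  or2 = lambda x: x[0] or x[1]                       # the 2-SAT clause (x1 OR x2)
  print(is_polymorphism(maj3, 3, or2, 2, (0, 1)))    # True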

It turns out that P-CSP is solvable in poly-time iff $P$ has a non-trivial polymorphism, where dictatorship functions are trivial polymorphisms (and so are polymorphisms $f$ that are invariant under shifts of all assignments of the form $a^{k-1}b$). A key intermediate result is that adding polymorphisms makes the problem easier; that is, if the set of polymorphisms of P is contained in the set of polymorphisms of P', then solving P'-CSP is log-space reducible to solving P-CSP. Since we know that solving P'-CSP is NP-hard for some P', it follows that solving P-CSP is NP-hard for any P that has only dictatorship polymorphisms.


Back to list of Oded's choices.