A sample of FOCS19 presentations

As usual, I am posting brief comments on a few presentations that I attended, with the usual caveat that some omissions reflect my own limitations rather than anything else. Here is a list of the presentations I will comment on. See also my report on the event Why I Am Excited About This Research Direction: A Tribute to Shafi Goldwasser.

Derandomization from Algebraic Hardness: Treading the Borders by Zeyu Guo, Mrinal Kumar, Ramprasad Saptharishi, and Noam Solomon

Eventually, it turned out (in a subsequent work by some of the authors) that there is no need to tread the borders. Anyhow, the result is a tight conversion of hardness into pseudorandomness, in the form of a hitting set generator. It also turns out that any construction of a non-trivial hitting set (i.e., one that is smaller than the trivial one by a single unit) yields a hitting set of optimal size (up to a polynomial relation).

Sampling Graphs without Forbidden Subgraphs and Unbalanced Expanders with Negligible Error by Benny Applebaum and Eliran Kachlon

I have already recommended this interesting work (see 259), but wish to reiterate the main question addressed here. For a fixed graph $H$ and parameters $n,m$ (and $k$), we wish to generate an $n$-vertex graph with $m$ edges that is $H$-free. Furthermore, we want the distribution over graphs to be $k$-wise independent and yet yield an $H$-free graph with overwhelming probability (i.e., the graph fails to be $H$-free only with negligible probability). Note that, for $m=\Theta(n)$, a truly random graph will not be $H$-free with noticeable probability. This computational problem is solved by employing a pseudorandom construction that supports super-fast checking of being $H$-free (via a computation on the seed of the generator rather than by running a search on the resulting graph).
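To illustrate the observation that a truly random graph with $m=\Theta(n)$ edges fails to be $H$-free with noticeable probability, here is a small Python simulation for the special case $H=K_3$ (a triangle); the specific values of $n$, $c$, and the number of trials are arbitrary illustrative choices, not taken from the paper.

```python
import random

def random_graph(n, m):
    """Sample a uniformly random simple graph on n vertices with m edges."""
    edges, adj = set(), {v: set() for v in range(n)}
    while len(edges) < m:
        u, v = random.sample(range(n), 2)
        e = (min(u, v), max(u, v))
        if e not in edges:
            edges.add(e)
            adj[u].add(v)
            adj[v].add(u)
    return edges, adj

def has_triangle(edges, adj):
    """Check whether the graph contains a copy of H = K_3."""
    return any(adj[u] & adj[v] for (u, v) in edges)

n, c, trials = 1000, 1, 200
hits = sum(has_triangle(*random_graph(n, c * n)) for _ in range(trials))
print(f"fraction of samples containing a triangle: {hits / trials:.2f}")
```

With $m=n$ the expected number of triangles is a small constant, so a noticeable constant fraction of the sampled graphs contain one.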

Reed-Muller codes polarize by Emmanuel Abbe and Min Ye

My take-away from this presentation was a clear description of polarization theory. For an $n$-by-$n$ matrix $M$, it considers the process of applying $M$ to a random vector $u$, subjecting the result to noise (of rate $\eta$), yielding a vector $uM+e$, and iteratively decoding the bits of $u$ by scanning the bits of $(uM+e)M^{-1}$. In the $i$th iteration, we obtain (on average) $I_i$ units of information regarding $u_i$, where the average of the $I_i$'s equals $1-H_2(\eta)$, and the point is that for suitable matrices almost all of the $I_i$'s are either close to zero or close to one. If that is the case, then a good encoding is obtained by placing the plaintext bits at the indices $i$ for which $I_i$ is close to 1. (Warning: Finding this set of indices may not be easy, but a polynomial-time algorithm is known.)
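The paper deals with Reed-Muller codes and general binary-input channels, but the polarization phenomenon itself is easiest to see for the binary erasure channel, where Arikan's 2-by-2 kernel transforms a channel with erasure probability $z$ into two synthesized channels with erasure probabilities $2z-z^2$ and $z^2$ (so that $I_i=1-z_i$). The following sketch merely iterates this standard recursion to show the $z_i$'s drifting towards 0 or 1; it is an illustration of polarization in general, not of anything specific to this paper, and the parameters are arbitrary.

```python
def polarize(eps, levels):
    """Iterate Arikan's 2x2 kernel on a BEC(eps): each erasure probability z
    is replaced by the pair (2z - z*z, z*z).  After `levels` rounds we have
    2**levels synthesized channels, whose erasure probabilities polarize."""
    chans = [eps]
    for _ in range(levels):
        chans = [new for z in chans for new in (2*z - z*z, z*z)]
    return chans

chans = polarize(eps=0.3, levels=12)                 # 4096 synthesized channels
polarized = sum(z < 0.01 or z > 0.99 for z in chans)
print(f"{polarized} of {len(chans)} channels have I_i within 0.01 of 0 or 1")
```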

Lower bounds for maximal matchings and maximal independent sets by Alkida Balliu, Sebastian Brandt, Juho Hirvonen, Dennis Olivetti, Mikal Rabie, and Jukka Suomela

I have already recommended this interesting line of work: see my comments on Jukka Suomela's spotlight presentation at WOLA'19. The current work is arguably the crown jewel of that line, so far.

New Notions and Constructions of Sparsification for Graphs and Hypergraphs by Nikhil Bansal, Ola Svensson, and Luca Trevisan

I vaguely recall thinking of this notion of a cut-sparsifier two decades ago, but I definitely got nowhere. The point is that this work (like my own thinking) did not allow for weighted subgraphs and aimed only at graphs that admit an unweighted cut-sparsifier. The work suggests a weaker notion that in some cases coincides with the natural notion of an unweighted cut-sparsifier, but can be achieved for all graphs.

Efficient Construction of Rigid Matrices Using an NP Oracle by Josh Alman and Lijie Chen

It's not that I care so much about this specific problem (of obtaining rigidity at this range of parameters), let alone under the weak notion of explicitness (modulo an NP oracle), but I was stunned by the insights that underlie this work. The authors envision and materialize Ryan Williams' method (of deriving lower bounds from algorithmic improvements) in a domain that seems totally unrelated. Of course, the construction of rigid matrices is a derandomization problem, but it is not one that refers to a computational model. Still, they manage to derive a corresponding computational model and apply Ryan's method to it, which is indeed highly nontrivial given that this computational model is quite weird and restricted (so one has to transform all ingredients of the method accordingly).

Truly Optimal Euclidean Spanners by Hung Le and Shay Solomon

Given $n$ points in the plane (or in a higher-dimensional Euclidean space), the problem is to construct a weighted graph on these $n$ points that approximately preserves the Euclidean distances. The first result in this work is that the sparsity parameter achieved by the greedy spanner is optimal, and that its lightness parameter can be improved to be optimal. The second result is that one can improve over the foregoing bounds by allowing Steiner points.
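For concreteness, here is a sketch of the classical greedy spanner construction referred to above: scan the pairs of points in order of increasing distance, and add an edge only if the spanner built so far does not already provide stretch $t$ for that pair. This is only an illustration (with an unoptimized running time and arbitrary toy parameters), not the paper's construction.

```python
import heapq, math, random
from itertools import combinations

def greedy_spanner(points, t):
    """Greedy t-spanner: add an edge (i, j) iff the current spanner's distance
    between i and j exceeds t times their Euclidean distance."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    adj = {i: [] for i in range(n)}          # weighted adjacency lists

    def spanner_dist(src, dst):
        """Dijkstra on the partial spanner built so far."""
        best, heap = {src: 0.0}, [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                return d
            if d > best.get(u, math.inf):
                continue
            for v, w in adj[u]:
                if d + w < best.get(v, math.inf):
                    best[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return math.inf

    edges = []
    for i, j in sorted(combinations(range(n), 2), key=lambda p: dist(*p)):
        if spanner_dist(i, j) > t * dist(i, j):
            adj[i].append((j, dist(i, j)))
            adj[j].append((i, dist(i, j)))
            edges.append((i, j))
    return edges

pts = [(random.random(), random.random()) for _ in range(50)]
print(len(greedy_spanner(pts, t=1.5)), "spanner edges for", len(pts), "points")
```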

Hardness Magnification for all Sparse NP Languages by Lijie Chen, Ce Jin, and Ryan Williams

Hardness magnification means showing that seemingly weak lower bounds imply very strong lower bounds. Several results of this type have been proved in recent years for various models and computational problems (including works on quantified derandomization such as Roei Tell's thesis; see this). Most relevant to the current work is the recent work Weak Lower Bounds on Resource-Bounded Compression Imply Strong Separations of Complexity Classes (by McKay, Murray, and Williams). What the current work shows is that the results obtained in the latter work using the Minimum Circuit Size Problem and its variants are actually an artifact of these problems referring to sparse sets. Specifically, they show results of the following form: If a (sub-exponentially) sparse NP-set has no $n^{1.01}$-size circuits, then NP has no $n^{100}$-size circuits.

The Average-Case Complexity of Counting Cliques in Erdos-Renyi Hypergraphs by Enric Boix-Adsera, Matthew Brennan, and Guy Bresler

My own focus is on the case of graphs. For this case, this work provides a worst-case to average-case reduction for the problem of counting $k$-cliques, where the average-case version refers to the Erdos-Renyi distribution (with various edge densities). This result should be compared to my own work with Guy Rothblum. On the one hand, the distribution used in the current work is far simpler than the sampleable distribution used in the prior work. On the other hand, the current work refers to algorithms that err with small probability (tending to zero), whereas the prior work refers to algorithms that err with large probability (tending to one). (In both cases, the rate of convergence is the reciprocal of a polylogarithmic function.)
Added (May 2020): My take on the main result.

Non-deterministic Quasi-Polynomial Time is Average-case Hard for ACC Circuits by Lijie Chen

Prior works have shown worst-case ACC-hardness for NQP (see this), and average-case ACC-hardness for NEXP. The current work obtains the best (or worst) of these results, proving average-case ACC-hardness for NQP. The proof strategy follows Williams' celebrated method (of deriving lower bounds from algorithmic improvements), but runs into many difficulties that arise from the average-case setting (most notoriously in proving an adequate easy witness lemma).

Agreement testing theorems on layered set systems by Yotam Dikstein and Irit Dinur

An agreement test is specified by a distribution on pairs of sets in a set system (equivalently, by a graph in which these sets are the vertices, with the uniform distribution over its edges). Such a test is intended to check whether the partial assignments (i.e., local views) given to the individual sets fit some global assignment. The test picks a pair of sets according to this distribution and accepts if and only if the two assignments agree on all elements in the intersection of the two sets. The performance of the test is the function relating its rejection probability to the disagreement rate with the best global function; that is, in the current regime, the aim is showing that if the test rejects with probability $\epsilon$, then there exists a global assignment that agrees with a $1-O(\epsilon)$ fraction of the local views (i.e., the assignments to sets). This work presents a sufficient condition for such good performance.
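A minimal sketch of the test itself (not of the paper's layered set systems or its analysis); the representation of local views as dictionaries and the uniform choice from a list of pairs are illustrative assumptions:

```python
import random

def agreement_test_reject_rate(local_views, pairs, trials=10000):
    """Estimate the rejection probability of the basic agreement test:
    pick a pair of sets (here, uniformly from `pairs`) and reject iff
    the two local views disagree on some element of the intersection."""
    rejects = 0
    for _ in range(trials):
        s1, s2 = random.choice(pairs)
        a1, a2 = local_views[s1], local_views[s2]
        if any(a1[x] != a2[x] for x in s1 & s2):
            rejects += 1
    return rejects / trials

# Toy example: local views induced by a global assignment g, with one view corrupted.
g = {x: x % 2 for x in range(6)}
sets = [frozenset({0, 1, 2}), frozenset({1, 2, 3}), frozenset({2, 3, 4}), frozenset({3, 4, 5})]
views = {s: {x: g[x] for x in s} for s in sets}
views[sets[0]] = {x: 1 - g[x] for x in sets[0]}     # corrupt one local view
pairs = [(sets[i], sets[i + 1]) for i in range(len(sets) - 1)]
print(agreement_test_reject_rate(views, pairs))
```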

A characterization of graph properties testable for general planar graphs with one-sided error (It's all about forbidden subgraphs) by Artur Czumaj and Christian Sohler

One commendable aspect of this work is that it studies the general graph model, which is aimed at dealing with sparse graphs that lack a feasible degree bound. The main result is a (one-sided error) tester of size-oblivious complexity for subgraph-freeness, under the promise that the tested graph is planar. The point is that the straightforward tester (which performs a BFS of depth bounded by the radius of the forbidden subgraph), which works in the bounded-degree model, cannot be used here. Instead, one performs a ``randomly trimmed'' BFS in which, for each vertex in the frontier, the search is extended only to a bounded number of randomly selected neighbors. Of course, the issue is proving that this works, and planarity is used in that analysis, which raises the question of what is actually needed. (Some promise is indeed required, since no size-oblivious algorithm can distinguish an $n$-cycle from an $(n-\sqrt{n})$-cycle that is connected to a $\sqrt{n}$-vertex clique.)
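Here is a sketch of the ``randomly trimmed'' BFS exploration described above; the adjacency-list representation and the depth/branching parameters are illustrative assumptions, and the paper's actual tester and its analysis are, of course, more delicate.

```python
import random

def randomly_trimmed_bfs(adj, start, depth, branching):
    """Explore from `start` for `depth` rounds, but extend the search from each
    frontier vertex to at most `branching` randomly selected neighbours.
    Returns the explored vertex set, whose induced subgraph the tester would
    then search for a copy of the forbidden subgraph."""
    explored, frontier = {start}, [start]
    for _ in range(depth):
        next_frontier = []
        for u in frontier:
            for v in random.sample(adj[u], min(branching, len(adj[u]))):
                if v not in explored:
                    explored.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return explored
```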


See 60th FOCS.


Back to list of Oded's choices.