On the 3rd Workshop on Local Algorithms (WOLA19)

The 3rd Workshop on Local Algorithms (WOLA) took place on July 20-22, 2019, at ETH Zurich. While the list of organizers contains six names, the bulk of the work was performed by Mohsen Ghaffari, who should be credited for the success of this workshop; credit is also due to Ronitt Rubinfeld, who initiated the WOLA series. I found the WOLA'19 program extremely interesting and learned a lot from attending most of the presentations. In fact, I felt quite exhausted after each of the days. The less-than-exhaustive choice of presentations and the brief comments that appear below are restricted by my own capacity (and not only by my own taste and interests).

The program of the workshop, posted at the WOLA19 webpage, includes slides of almost all presentations. Below you can find my comments on six of the (45-minute) spotlight presentations and on ten of the (17-minute) regular presentations.


Spotlight presentations

I gained something significant from each of the following spotlight presentations and believe that most TOCers will also benefit from looking at the corresponding slides.

C. Seshadhri: How random walks led to advances in testing minor-freeness

The presentation is based on a couple of works (which Sesh co-authored with Akash Kumar and Andrew Stolman). I recommended each of these works in the past (see Choices 248 and 263), but this presentation provided an added value for me, since it clearly highlighted the common theme that is the pivot of both works.

In both works the easy case is when the graph is rapidly mixing, and the difficulty is coping with the general case. The key idea is using a very weak (and in some sense minimal) notion of rapid mixing; that is, rather than requiring convergence at an exponential rate, one requires convergence at a polynomial rate that is faster than the convergence rate on graphs from minor-closed families (like grids). This weak notion of convergence is shown to suffice for dealing with isolated components that are each rapidly mixing, and its benefit is that it allows a stronger result regarding the decomposition of general graphs (i.e., the decomposition provides stronger guarantees regarding the `interaction' between the components).
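
For intuition, a toy version of the random-walk spot-check may look as follows (my own sketch; the walk length, the slack factor, and the decision rule are illustrative placeholders, not the parameters used in the actual works). The graph is given as a dictionary adj mapping each vertex to its list of neighbors (every vertex is assumed to have at least one neighbor).

    import random

    def return_probability(adj, start, walk_len, trials=1000):
        # Estimate the probability that a lazy random walk of length walk_len,
        # started at start, ends back at start.
        returns = 0
        for _ in range(trials):
            v = start
            for _ in range(walk_len):
                if random.random() < 0.5:      # lazy step: stay put with prob. 1/2
                    continue
                v = random.choice(adj[v])      # otherwise move to a uniform random neighbor
            if v == start:
                returns += 1
        return returns / trials

    def looks_rapidly_mixing(adj, start, slack=4):
        # Weak mixing test: after a walk of polynomial length, the return
        # probability in an expander-like piece is close to the stationary value
        # deg(start)/(2m), whereas in grid-like pieces it is noticeably larger.
        n = len(adj)
        m = sum(len(nbrs) for nbrs in adj.values()) // 2
        walk_len = int(n ** 0.5)               # illustrative polynomial walk length
        stationary = len(adj[start]) / (2 * m)
        return return_probability(adj, start, walk_len) <= slack * stationary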

Sanjeev Khanna: Sublinear Algorithms for Graph Coloring

The presentation was based on a recent work (co-authored with Sepehr Assadi and Yu Chen), which provides sublinear-complexity algorithms for $(\Delta+1)$-coloring graphs in three different computational models, where $\Delta$ denotes the max-degree of the graph. In contrast, finding maximal independent sets and maximal matchings in some of these models requires linear complexity (e.g., in the LCA model there is an $\Omega(n^2)$ query lower bound, where $n$ is the number of vertices). In particular, in the LCA (Local Computation Algorithm) model, they obtain a $\tilde{O}(n^{3/2})$-time algorithm, while also establishing an $\Omega(n^{3/2})$ query lower bound. Furthermore, their algorithm is non-adaptive, whereas the lower bound holds also for adaptive algorithms.

The main technical tool is a palette sparsification theorem, which asserts that if each vertex is assigned a logarithmic number of random colors (out of the universal palette $[\Delta+1]$), then a corresponding $(\Delta+1)$-coloring still exists, with high probability over the foregoing choices.

The proof of this theorem combines two different strategies, which are adequate in two different extreme cases: the case that all (but one) vertices have degree at most $0.99\Delta$, and the case that the graph consists of isolated $(\Delta+1)$-cliques. In the first case, we consider log-many invocations of a sequential greedy algorithm that colors each uncolored vertex with a newly chosen random color if this color does not conflict with its already-colored neighbors, and observe that in each iteration each uncolored vertex gets colored with probability at least 0.01. In the second case, we consider the bipartite graph having the clique's vertices on one side and the assigned colors on the other side, and argue that this graph has a perfect matching. The highly non-trivial part is dealing with the general case, which is done by a complicated decomposition of any graph into a collection of almost-cliques that are sparsely connected.
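
To make the first strategy concrete, here is a minimal sketch (mine, not taken from the paper) of the repeated greedy passes; a single pass may well leave vertices uncolored, and the argument for the low-degree case is that logarithmically many passes suffice with high probability.

    import math
    import random

    def greedy_random_color_passes(adj, Delta, rounds=None):
        # Toy sketch: in each pass, every still-uncolored vertex draws one fresh
        # random color from {0,...,Delta} and keeps it if no already-colored
        # neighbor holds that color. The union of the colors drawn by a vertex
        # plays the role of its sparsified palette (`list').
        n = len(adj)
        if rounds is None:
            rounds = max(1, int(3 * math.log(n + 1)))   # illustrative constant
        color = {}
        for _ in range(rounds):
            for v in adj:                                # sequential pass over vertices
                if v in color:
                    continue
                c = random.randrange(Delta + 1)
                if all(color.get(u) != c for u in adj[v]):
                    color[v] = c
        return color                                     # may be a partial coloring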

Eric Vigoda: Local Markov Chains, Belief Propagation, and Phase Transitions

My own take-home message from this survey is that approximating the number of independent sets is easy (i.e., can be done in polynomial time) when the max-degree is at most five, but is hard (assuming NP is not in RP) even when the max-degree is six. This curious fact is rooted in the even more surprising fact that the feasibility of approximation is related to a statistical-physics phase transition on the infinite $\Delta$-regular tree, where $\Delta$ is the degree bound (and the phase transition occurs in the interval $(5,6)$).
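
For concreteness, the kind of local Markov chain in question is Glauber dynamics for the hard-core model; a generic sketch (mine, not taken from the talk) follows. Setting the fugacity parameter lam to 1 corresponds to uniformly random independent sets, and rapid mixing is known (by recent results) to hold up to the tree uniqueness threshold, which for max-degree at most five includes lam=1.

    import random

    def glauber_independent_sets(adj, steps, lam=1.0):
        # Glauber dynamics for the hard-core model: repeatedly pick a random
        # vertex and resample its membership in the independent set conditioned
        # on its neighbors; lam=1 corresponds to uniform independent sets.
        in_set = {v: False for v in adj}
        vertices = list(adj)
        for _ in range(steps):
            v = random.choice(vertices)
            if any(in_set[u] for u in adj[v]):
                in_set[v] = False                        # blocked by an occupied neighbor
            else:
                in_set[v] = random.random() < lam / (1 + lam)
        return {v for v, occupied in in_set.items() if occupied}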

Shubhangi Saraf: Some recent results on high rate local codes

This presentation provided a wide perspective on the known trade-offs between rate and query complexity for codes that are locally decodable (and testable), focusing on the high-rate regime. In 2015, I recommended the breakthrough work of Kopparty, Meir, Ron-Zewi, and Saraf (see Choice 177), but the current survey includes a recent improvement by Gopi, Kopparty, Oliveira, Ron-Zewi, and Saraf that obtains the above with a distance parameter that meets the GV bound. Hence, the take-home message is that local decodability (and testability), where locality means $n^{o(1)}$ query complexity, `comes for free' (in the sense that it can be obtained at the same distance-vs-rate trade-off as general explicit constructions of codes).

Keren Censor-Hillel: Three Challenges in Distributed Optimization

The focus of this survey presentation was actually on the complexity of minVC and maxIS in the two main models of distributed computation, and the third challenge was to go beyond these cases. In the LOCAL model, the question (or challenge) refers to the exploration distance that is required in order to locally determine a $(1+\eps)$-approximation of minVC (minimum Vertex Cover). Here the upper and lower bounds are in the ball-park of $poly(1/\eps,\log n)$. Turning to the CONGEST model (in which only bounded communication is allowed in each round), the question is whether the foregoing upper bound can be met. Currently, it is only known how to do so for 2-factor approximations (see Guy Even's presentation below).

The presentation focused on showing a $\tilde{\Omega}(n^2)$ lower bound on the number of rounds needed for computing the exact minVC. The proof is by reduction from the two-party communication complexity of (set) disjointness. In the reduction, each of the two inputs is encoded as a bipartite graph on $n+n$ vertices, and two (fixed) gadgets are used to connect these two bipartite graphs such that only few edges have endpoints in two different gadgets. Each party can simulate the execution of a CONGEST algorithm on the part of the graph that it holds, but CONGEST-model communication between vertices that belong to two different gadgets requires communication in the two-party CC model. Hence, the CC-model complexity is upper-bounded by the product of the round complexity of the CONGEST algorithm, the (logarithmic) message length, and the number of inter-gadget edges, which implies a lower bound on the round complexity that loses only the latter factors.
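
In symbols (my own spelling-out of the foregoing counting argument): if the CONGEST algorithm runs in $R$ rounds using $B$-bit messages (with $B=O(\log n)$), and only $E_{cut}$ edges cross between the two gadgets, then the simulation yields a two-party protocol of communication cost at most $R \cdot B \cdot E_{cut}$. Since disjointness on inputs of length $\Theta(n^2)$ (as suggested by the bipartite encoding) requires $\Omega(n^2)$ communication, it follows that $R = \Omega(n^2/(B \cdot E_{cut}))$, which is $\tilde{\Omega}(n^2)$ whenever $B$ and $E_{cut}$ are polylogarithmic in $n$.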

Jukka Suomela: Locality lower bounds through round elimination

This presentation provided an exposition of a surprising proof technique that has been successful in the context of providing (exploration) distance lower bounds in the LOCAL model. The surprising aspect of this inspiring technique is that it proceeds by transforming algorithms for certain problems into lower-complexity algorithms for related problems. Such a technique is almost unprecedented (see my comments on Choice 211), and is definitely surprising (i.e., how can one generically reduce the complexity of algorithms?).

Of course, the answer to the latter question is that the resulting algorithm solves a related problem, not the original one, and the related problem is potentially easier (unless the original problem was already simplified, as in the case of `sinkless orientation' restricted to graphs that are locally tree-like). So a key concern when designing such a complexity-reduction process is not to trivialize the resulting problem. Indeed, when seeking to prove a lower bound of $r+1$ rounds, we start from a hypothetical algorithm of round complexity $r$, perform $r$ round-reduction steps, and seek to obtain a zero-round algorithm that solves a non-trivial problem (i.e., a distributed problem that cannot be solved without any communication), which yields a contradiction.

The core of the round-reduction step is considering a new edge-labeling (subject to vertex-constraints) problem in which the labels are supposed to reflect the local views of the neighbors in the originally given protocol. The new vertex-constraints enforce consistency between these views (very much as in Dinur's gap amplification...). If we are lucky, then the resulting labeling problem can be further simplified (in a non-trivial manner); such a simplification is important in order to maintain some understanding of the resulting problem, since we need to prove the non-triviality of the problem obtained after `many' round-reduction steps.

When proving the $\Omega(\log^* n)$ lower bound for 3-coloring, the round-reduction step is from $k$-coloring to $2^k$-coloring, and so we can take close to $\log^* n$ steps and still get a non-trivial problem. When proving the $\Omega(\log n)$ lower bound for sinkless orientation, each pair of round-reduction steps returns to the original problem, which is restricted to graphs in which neighborhoods of logarithmic radius look like trees.
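
To see why nearly $\log^* n$ reduction steps are affordable here, note that starting from (a hypothetical fast algorithm for) 3-coloring, $t$ reduction steps yield an algorithm for coloring with a tower-of-exponents number of colors, and the reduced problem remains non-trivial for a zero-round algorithm as long as this number is below $n$. A tiny sketch of this bookkeeping (my own, with hypothetical helper names):

    def tower(t, base=3):
        # Number of colors after t round-reduction steps, starting from a
        # base-coloring problem and replacing k-coloring by 2^k-coloring at
        # each step. (Only meaningful for very small t.)
        k = base
        for _ in range(t):
            k = 2 ** k
        return k

    def max_nontrivial_steps(n, base=3):
        # Largest t such that tower(t) < n, i.e., the problem obtained after t
        # round-reduction steps is still a coloring problem with fewer than n
        # colors, and hence non-trivial for a zero-round algorithm.
        t, k = 0, base
        while k < (n - 1).bit_length():   # i.e., 2**k < n, without computing 2**k
            k = 2 ** k
            t += 1
        return t

    # e.g., max_nontrivial_steps(10 ** 9) == 2 and max_nontrivial_steps(2 ** 300) == 3,
    # and in general the count is log^*(n) minus an additive constant.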

Regular presentations

I selected only ten regular presentations from the program, but found other presentations interesting as well. My selection is confined to the ones on which I feel that I have something to say. (Most of the following presentations are based on joint works of several researchers, but I mention only the name of the presenter. Recall that the entire program of the workshop is posted at the WOLA19 webpage (see local copy HERE), and this webpage includes slides of almost all presentations.)

Sampling edges in bounded arboricity graphs by Talya Eden
A crucial point here is that the sampling algorithm generates edges that are distributed almost uniformly in a strong sense. Specifically, the generated distribution is close to the target distribution in a point-wise manner (i.e., each item occurs with probability that is within a constant factor of the desired probability). Indeed, we are often satisfied with generating a distribution that is statistically close to the desired one (i.e., close in Total Variation Distance). This is good enough in applications that use a small number of samples and output a bounded value (e.g., for $A:U\to[0,1]$, it holds that if $G$ is $\eps$-close to $T$ (in TVD), then $E[A(G)] = E[A(T)]\pm\eps$). But for applications that produce unbounded values, being statistically close may not suffice, whereas the stronger notion of point-wise approximation does suffice; for example, if $Prob[G=e] = (1\pm\eps)\cdot Prob[T=e]$ for every $e$, then $E[A(G)] = (1\pm\eps)\cdot E[A(T)]$ for every $A$ (including $A$ such that $Prob[A(G) \gg 1/\eps] = \eps$ and $Prob[A(G)=0] = 1-\eps$).
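
To make the distinction concrete, here is a toy numerical example of my own, mirroring the foregoing parenthetical: the estimator $A$ takes a huge value on a rare outcome, so an $\eps$-deviation in TVD can double its expectation, whereas a point-wise $(1\pm\eps)$-approximation cannot.

    EPS = 0.01
    SPECIAL = 0

    def A(x):
        # An estimator with unbounded values: huge on one rare outcome, zero otherwise.
        return 1 / EPS if x == SPECIAL else 0.0

    def expectation(dist):
        # E[A(X)] for a distribution given as {outcome: probability}.
        return sum(p * A(x) for x, p in dist.items())

    # Target distribution T: the special outcome has probability EPS.
    T = {SPECIAL: EPS, 1: 1 - EPS}

    # G1 is EPS-close to T in total variation distance, yet E[A(G1)] = 2 * E[A(T)].
    G1 = {SPECIAL: 2 * EPS, 1: 1 - 2 * EPS}

    # G2 approximates T point-wise (each probability within a (1 +/- EPS) factor),
    # hence E[A(G2)] = (1 +/- EPS) * E[A(T)].
    G2 = {SPECIAL: (1 + EPS) * EPS, 1: 1 - (1 + EPS) * EPS}

    print(expectation(T), expectation(G1), expectation(G2))   # 1.0, 2.0, 1.01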

On Solving Linear Systems in Sublinear Time by Robert Krauthgamer
This work appeared as the 5th entry in my report on ITCS19 (see Choice 258). I wish to highlight (again...) the fact that this work suggests considering Local Computation Algorithms for various algebraic and numerical problems, whereas so far LCAs were considered only for graph-theoretic problems.

Property testing of the Boolean rank by Michal Parnas
The Boolean rank is defined for Boolean matrices under Boolean operations (i.e., addition is replaced by logical OR, whereas multiplication corresponds to logical AND). Its logarithm equals the nondeterministic communication complexity of the two-argument function defined by the matrix, and it is NP-hard to compute. This work presents an $\eps$-tester (with one-sided error and non-adaptive queries) for having Boolean rank $r$ that operates in time polynomial in $r/\eps$. (Such a result was known for the standard (i.e., real) rank.) For the binary rank (i.e., rank over GF(2)), they have an $\eps$-tester of complexity exponential in the rank, and they ask whether this can be improved to polynomial.
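
To pin down the notion (my own toy code, which is exponential-time and unrelated to the tester itself): the Boolean rank of $M$ is the smallest $r$ such that the 1-entries of $M$ can be covered by $r$ all-ones combinatorial rectangles, equivalently such that $M$ is the entrywise OR of $r$ Boolean rank-one matrices.

    from itertools import combinations

    def boolean_rank(M):
        # Brute-force Boolean rank of a 0/1 matrix M (a list of lists); feasible
        # only for tiny matrices.
        n, m = len(M), len(M[0])
        ones = {(i, j) for i in range(n) for j in range(m) if M[i][j] == 1}
        if not ones:
            return 0
        # For every non-empty row set R, take the maximal all-ones rectangle with rows R.
        rects = []
        for mask in range(1, 2 ** n):
            R = [i for i in range(n) if mask >> i & 1]
            cols = [j for j in range(m) if all(M[i][j] == 1 for i in R)]
            if cols:
                rects.append({(i, j) for i in R for j in cols})
        # Find the smallest number of such rectangles covering all 1-entries.
        for r in range(1, len(ones) + 1):
            for cover in combinations(rects, r):
                if set().union(*cover) >= ones:
                    return r
        return len(ones)

    # e.g., boolean_rank([[1, 1, 0], [0, 1, 1]]) == 2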

Local Access to Huge Random Objects through Partial Sampling by Amartya Shankha Biswas
I was happy to see an interesting work that refers to the direction initiated in the work On the Implementation of Huge Random Objects. Generating Erdos-Renyi $G(n,p)$ graphs is trivial if one only needs to support adjacency queries, but here they also want to support (next and random) neighbor queries. These additional queries are easy to support when $p$ is extremely high (e.g., $p=\Omega(1)$) and are known to be supported when $p$ is extremely low (e.g., $p=O(1/n)$), but here they focus on the intermediate range (e.g., $p=1/\sqrt{n}$). The construction uses a small amount of memory that grows with the number of queries, but I conjecture that one can get a memoryless implementation as sought by the aforementioned work.
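
For the trivial case mentioned above, a lazy implementation just flips and caches a $p$-biased coin per queried pair, as in the generic sketch below (mine, not from the work); the non-trivial part addressed by the work is answering next-neighbor and random-neighbor queries consistently with such adjacency answers, which is not attempted here.

    import random

    class LazyGnp:
        # Local-access implementation of an Erdos-Renyi G(n,p) graph supporting
        # only adjacency queries: each queried pair gets an independent p-biased
        # coin, cached so that repeated queries are answered consistently.
        def __init__(self, n, p, seed=None):
            self.n, self.p = n, p
            self.rng = random.Random(seed)
            self.cache = {}

        def adjacent(self, u, v):
            key = (min(u, v), max(u, v))
            if key not in self.cache:
                self.cache[key] = self.rng.random() < self.p
            return self.cache[key]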

Time hierarchies of distributed complexities by Alkida Balliu
This presentation surveyed the fascinating structure of the round complexity (in the LOCAL model) of Locally Checkable Labeling (LCL) problems. The picture in the case of paths and cycles is quite simple: there are only three possible (deterministic) complexities (i.e., $\Theta(1)$, $\Theta(\log^* n)$, and $\Theta(\log n)$). For general trees and general graphs, the picture is more complex, with infinitely many possibilities. Specifically, for trees, we can also see (deterministic = randomized) complexities of the form $n^{1/k}$ for any natural $k$, as well as an exponential gap between the deterministic and randomized complexities (at $\log n$ vs $\log\log n$).

On the Volume Complexity of LCLs by Will Rosenbaum
Taking a closer look at the LOCAL model, this work suggests considering algorithms that explicitly and adaptively query for a specified neighbor of a visited vertex (by specifying the port number and obtaining the neighbor's ID as well as its degree and local input, if relevant). That is, at each round, the processor at each vertex may query the graph about neighbors of vertices revealed to it so far. The volume complexity of an execution is the number of queries made by the most active processor in that execution. This work presents hierarchy results for volume complexity and compares them to the results surveyed in the previous presentation.

Phase transitions in the independent sets of random graphs by Endre Csoka
This presentation highlights a possible application of the study of random graphs. Specifically, if random graphs have a certain value of some parameter, whereas they cannot be distinguished within some computational resources from graphs that have a different value of the same parameter, then we get a lower bound on the complexity of computing that parameter. As an assertion, this is trivial, but the point is that in some cases it may be useful to apply it (while using a known but non-trivial result about the behavior of random graphs).

Optimal Distributed Covering Algorithms by Guy Even
The presentation focused on a distributed algorithm for finding an approximate minimum vertex cover in $f$-uniform hypergraphs, for any fixed $f$ (e.g., $f=2$ corresponds to the standard minVC). It obtains an approximation factor of $f+\eps$ in time $poly(\log(n/\eps))$, improving over a prior bound that was polynomial in $1/\eps$ and $\log n$. Guy highlighted the fact that this improvement allows setting $\eps=1/2n$, and so obtaining a 2-factor approximation of minVC (since a cover of size at most $(2+\frac{1}{2n})\cdot OPT \leq 2\cdot OPT + \frac12$, where $OPT \leq n$, must actually have size at most $2\cdot OPT$), but I think that improving the dependence of the complexity on the approximation parameter is a natural and important issue when dealing with approximation algorithms. Guy's model presentation struck me as a tribute to his father, Shimon Even, who was famous for his crystal-clear presentations.

Fast Approximate Shortest Paths in the Congested Clique by Michal Dory
This presentation surveyed a host of new algorithms for finding approximate shortest paths in the Congested Clique model. In particular, for APSP they obtain a constant-factor approximation algorithm running in polylogarithmic time. All results are obtained by a clever use of sparse matrix multiplication, while overcoming the fact that the relevant matrices do not seem sparse at first glance.

Breaking Quadratic Time for Small Vertex Connectivity by Sorrachai Yingchareonthawornchai
This work appeared as the 3rd entry in my report on STOC19 (see Choice 269). As in STOC19, the declared take-home message was that `local algorithms are useful for vertex connectivity'. The current presentation, however, includes some improvements over the results presented at STOC. Specifically, the complexity of the exact algorithm was improved from $\tilde{O}(m + n^{4/3}k^{7/3})$ (which is $\tilde{O}(n^{4/3})$ for $m=O(n)$ and $k=O(1)$) to $\tilde{O}(m + nk^3)$ (which is $\tilde{O}(n)$ for $m=O(n)$ and $k=O(1)$), where $k$ is the (sought value of the) vertex connectivity. Furthermore, it presents a one-sided error $\eps$-tester for $k$-vertex-connectivity of complexity $\tilde{O}(k/\eps^2)$ (improving over a known $\tilde{O}((2/\eps)^{k+1})$ bound).

Slides of all foregoing presentations

The program of the workshop is posted at the WOLA19 webpage (see local copy HERE), and this webpage includes slides of all foregoing presentations.


Back to list of Oded's choices.