On three ICALP'19 presentations

Needless to say, it is great to experience a conference in Europe; European locations are typically far more charming than (North) American ones. Even a not-so-famous city like Patras, Greece, where the conference took place, has much to offer, not to mention that it is only a (long) bridge away from the charming small town of Nafpaktos. Due to various personal constraints, I attended only one day of the conference. Here are three presentations I wish to highlight.

  1. Computing Optimal Epsilon-Nets Is as Easy as Finding an Unhit Set by Nabil Mustafa
  2. Fine-Grained Complexity of Dynamic Programming Problems (survey) by Karl Bringmann
  3. A Quest Towards Strong Unconditional Lower Bounds (survey) by Kasper Green Larsen
Since I found the original abstracts extremely clear, I will just reproduce them below while adding brief comments.

Computing Optimal Epsilon-Nets Is as Easy as Finding an Unhit Set by Nabil Mustafa

Original abstract: Given a set system (X,R) with VC-dimension d, the celebrated result of Haussler and Welzl (1987) showed that there exists an epsilon-net for (X,R) of size O((d/epsilon) log(1/epsilon)). Furthermore, the algorithm is simple: just take a uniform random sample from X. However, for many geometric set systems this bound is sub-optimal, and since then there has been much work presenting improved bounds and algorithms tailored to specific geometric set systems. In this paper, we consider the following natural algorithm to compute an epsilon-net: start with an initial random sample N. Iteratively, as long as N is not an epsilon-net for R, pick any unhit set S in R (say, given by an Oracle), and add O(1) randomly chosen points from S to N. We prove that the above algorithm computes, in expectation, epsilon-nets of asymptotically optimal size for all known cases of geometric set systems. Furthermore, it makes O(1/epsilon) calls to the Oracle. In particular, this implies that computing optimal-sized epsilon-nets is as easy as computing an unhit set in the given set system.
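
To make the iterative process concrete, here is a minimal Python sketch of the algorithm described in the abstract; the oracle interface (verify), the constant c, and the initial sample size are illustrative placeholders of mine, not taken from the paper.

    import random

    def iterative_epsilon_net(X, epsilon, verify, c=2, seed=None):
        """Sketch of the iterative epsilon-net construction described above.

        X       : list of ground-set points
        epsilon : target net parameter
        verify  : oracle; returns None if N is an epsilon-net for the range
                  space, or a range S (a list of points of measure >= epsilon)
                  that N fails to hit
        c       : number of random points added per round (O(1) in the paper)
        """
        rng = random.Random(seed)
        # Initial uniform random sample from X; its size is a tunable
        # parameter here (the paper starts from a suitable random sample).
        N = set(rng.sample(X, min(len(X), max(1, int(1 / epsilon)))))
        while True:
            S = verify(N)          # oracle call: find an unhit range, if any
            if S is None:          # N hits every range of measure >= epsilon
                return N
            N.update(rng.sample(S, min(len(S), c)))  # add O(1) points of S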

My comment: The analysis is based on identifying a parameter (or a complexity measure) of set systems, bounding the (expected) running time of the algorithm in terms of this parameter, and determining the value of the parameter for all the popular set systems that were considered before.

Presburger Award Lecture by Karl Bringmann on Fine-Grained Complexity of Dynamic Programming Problems

Original abstract: Fine-grained complexity theory yields tight running-time lower bounds based on assumptions such as the Strong Exponential Time Hypothesis (SETH). This approach was particularly successful for many textbook dynamic programming problems, explaining the long-standing lack of algorithmic progress. In this talk we focus on a line of research that was triggered by a fine-grained lower bound for the Fréchet distance and resulted in tight bounds for many other classical problems, including sequence similarity problems such as edit distance and longest common subsequence, as well as regular expression pattern matching, tree edit distance, and compressed string problems.

My comment: The importance of fine-grained complexity is widely recognized by now. This lecture highlights the fact that many of the problems studied in this context have a seemingly optimal dynamic programming algorithm.
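
To illustrate what "seemingly optimal dynamic programming algorithm" means here, below is the classical quadratic-time dynamic program for edit distance, one of the problems mentioned in the abstract; a central point of the surveyed line of work is that, assuming SETH, no strongly subquadratic algorithm exists for this problem.

    def edit_distance(a: str, b: str) -> int:
        """Classical O(|a|*|b|) dynamic program for edit (Levenshtein) distance."""
        n, m = len(a), len(b)
        # dp[j] holds the distance between the current prefix of a and b[:j].
        dp = list(range(m + 1))
        for i in range(1, n + 1):
            prev_diag, dp[0] = dp[0], i
            for j in range(1, m + 1):
                prev_diag, dp[j] = dp[j], min(
                    dp[j] + 1,                           # delete a[i-1]
                    dp[j - 1] + 1,                       # insert b[j-1]
                    prev_diag + (a[i - 1] != b[j - 1]),  # substitute / match
                )
        return dp[m]

For example, edit_distance("kitten", "sitting") returns 3.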

Presburger Award Lecture by Kasper Green Larsen on A Quest Towards Strong Unconditional Lower Bounds

Original abstract: One of the holy grails of theoretical computer science is to prove strong unconditional lower bounds on the time needed to solve computational problems in realistic models of computation. Unfortunately, we are very far from this goal in most settings. One of the areas where we have been quite successful is in proving lower bounds for data structures and problems of a similar online flavour. In this talk, I will survey the state-of-the-art results and techniques in data structure lower bounds and the barriers we face in improving our techniques. Much of the focus will be on my own contributions to this area, but also on recent progress in areas outside of data structures where our techniques have found exciting applications.

My comments: Kasper argued that the lower bounds proved in the context of data structures are more genuinely time bounds than the lower bounds proved in communication complexity and streaming algorithms (and, I would add, property testing). But I beg to differ: the data structure "time" bounds are actually cell-probe bounds that arise from limited space, and talking about space bounds under limited time (or a limited number of passes) is just the dual view. Furthermore, in the context of property testing, the query complexity bounds are unrelated to space...

The data structure setting consists of some information that is (statically or dynamically) encoded in storage in a way that enables answering many possible queries (e.g., $n$ bits of information are encoded in $s=O(n)$ storage that supports a set of $m=poly(n)$ possible queries). The method surveyed in this talk consists of taking a random sample of the storage and arguing that, on the one hand, the sample is smaller than the information encoded in the storage, while on the other hand it allows for recovering the answers to a sufficient number of possible queries, which in turn determine the original information; the tension between these two facts is what yields the lower bound.
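
To make the counting concrete, here is a back-of-the-envelope version of such a sampling argument; the specific parameters and simplifications are mine, not the talk's. Suppose the $n$ bits are encoded in $s$ cells of $w$ bits each, and each of the $m$ queries is answered by probing at most $t$ cells. Sample each cell independently with probability $p$: a fixed query is fully resolved within the sample with probability at least $p^t$, so in expectation at least $m\cdot p^t$ queries are resolved, whereas the sample itself can be described using roughly $p\cdot s\cdot(w+\log s)$ bits. If $t$ were small, one could choose $p$ so that the sample holds fewer than $n$ bits while the $m\cdot p^t$ resolved queries still determine the original $n$ bits, which is a contradiction. Working out the quantities, for near-linear space (say $s\cdot w=n\cdot polylog(n)$) and $m=poly(n)$, this style of argument is known to yield cell-probe lower bounds of roughly the form $t=\Omega(\log m/\log(sw/n))$.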

