Fine Grained Approximation Algorithms and Complexity (FG-APX 2019)

Bertinoro Workshop, May 27-31, 2019

Program

 
Monday (May 27)

9:05  Opening remarks
9:15  Amir Abboud: Some Connections Between the Conjectures in Fine-Grained Complexity
      Vincent Cohen-Addad: New Hardness Results for Planar Graph Problems in P and an Algorithm for Sparsest Cut
10:30 Coffee break
11:00 Elazar Goldenberg: Hardness Amplification of Optimization Problems
11:35 Open problems collaboration
13:00 Lunch
15:00 Liam Roditty: Approximating the Diameter
15:35 Open problems collaboration
16:00 Coffee break
16:30 Open problems collaboration
19:30 Dinner

Tuesday (May 28)

9:15  Debarati Das: Truly Sub-quadratic Time Constant Factor Approximation of Edit Distance
      Michal Koucký: Constant factor approximations to edit distance on far input pairs in nearly linear time
10:30 Coffee break
11:00 Elazar Goldenberg: Sublinear Algorithms for Gap Edit Distance
11:35 Open problems collaboration
13:00 Lunch
15:00 Marvin Künnemann: Approximating APSP without Scaling
15:35 Open problems collaboration
16:00 Coffee break
16:30 Open problems collaboration
19:30 Dinner

Wednesday (May 29)

9:15  Aviad Rubinstein
      Karthik C. S.: Inapproximability of Clustering: Inspired from Fine-Grained Complexity
10:30 Coffee break
11:00 Ohad Trabelsi: New Algorithms and Lower Bounds for All-Pairs Max-Flow in Undirected Graphs
11:35 Open problems collaboration
13:00 Lunch
15:00 Excursion to Ravenna
19:30 Dinner

Thursday (May 30)

9:15  Richard Peng: Batch-Dynamic Graph Algorithms
      Euiwoong Lee: An FPT-approximation Algorithm for k-cut
10:30 Coffee break
11:00 Nicole Wein: Time/accuracy trade-off lower bounds from k-OV
11:35 Open problems collaboration
13:00 Lunch
15:00 Divesh Aggarwal: (Gap/S)-ETH Hardness of SVP
15:35 Open problems collaboration
16:00 Coffee break
16:30 Open problems collaboration
19:30 Dinner

Friday (May 31)

9:15  Rasmus Kyng: As hard as linear equations or two-commodity flow
      Arturs Backurs: Efficient Density Evaluation for Smooth Kernels
10:30 Coffee break
11:00 Open problems collaboration
13:00 Lunch
Workshop adjourns

 

Abstracts and slides

 

Amir Abboud
Some Connections Between the Conjectures in Fine-Grained Complexity

Fine-grained complexity utilizes a small set of conjectures to derive conditional lower bounds for a large collection of problems. These conjectures concern the time complexity of a few core problems such as k-SAT, Orthogonal Vectors, 3SUM, All Pairs Shortest Paths, and k-Clique. The relationships between these conjectures are poorly understood.

This talk will discuss some connections between the conjectures, including a tight reduction from Weighted-k-Clique to Orthogonal Vectors (joint work with Karl Bringmann, Holger Dell, and Jesper Nederlof) and new findings about the Set Cover Conjecture.

[slides (pptx)]

Divesh Aggarwal
(Gap/S)-ETH Hardness of SVP

There has been a lot of research in the last two decades on constructing cryptosystems whose security relies on the assumption that certain well-studied computational lattice problems cannot be solved efficiently. The most well-studied such problems are the shortest vector problem (SVP) and the closest vector problem (CVP).
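For readers less familiar with SVP, here is a deliberately naive sketch of the problem itself: given a lattice basis, find a shortest nonzero integer combination of the basis vectors. The basis and the coefficient bound below are illustrative assumptions, and rigorously bounding the coefficients is itself nontrivial, so this is an illustration of the problem rather than a real SVP solver.

```python
import itertools
import numpy as np

def shortest_vector_bruteforce(basis, coeff_bound=5):
    # Enumerate integer coefficient vectors c with |c_i| <= coeff_bound and return
    # the shortest nonzero lattice vector c @ basis found. This is exponential in
    # the dimension, and the coefficient bound is only a heuristic cutoff.
    basis = np.asarray(basis, dtype=float)
    best, best_norm = None, float("inf")
    for coeffs in itertools.product(range(-coeff_bound, coeff_bound + 1),
                                    repeat=len(basis)):
        if not any(coeffs):
            continue  # skip the all-zero combination
        v = np.asarray(coeffs) @ basis
        norm = float(np.linalg.norm(v))
        if norm < best_norm:
            best, best_norm = v, norm
    return best, best_norm

B = [[3, 1], [1, 2]]  # a toy 2-dimensional basis (illustrative)
print(shortest_vector_bruteforce(B))
```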

These problems are well known to be NP-hard. However, such hardness proofs tell us very little about their quantitative or fine-grained complexity: for example, does the fastest possible algorithm run in time at least, say, 2^{n/5}, or is there an algorithm that runs in time 2^{n/100} or even 2^{\sqrt{n}}? NP-hardness results cannot distinguish between these cases, but we certainly need to be confident in our answers to such questions if we plan to base the security of widespread cryptosystems on them.

The standard approach in complexity theory to studying the fine-grained hardness of computational problems is to assume the exponential time hypothesis (ETH), which states that 3-SAT cannot be solved in 2^{o(n)} time. In this talk, I will present our recent result on the fine-grained hardness of SVP under variants of the exponential time hypothesis.

This is joint work with Noah Stephens-Davidowitz.

[slides (pptx)]

Arturs Backurs
Efficient Density Evaluation for Smooth Kernels

Given a kernel function k and a dataset P (of points from R^d), the kernel density function of P at a point q from R^d is equal to KDF_P(q) := 1/|P| sum_{p in P} k(p,q). Kernel density evaluation has numerous applications in scientific computing, statistics, computer vision, machine learning, and other fields. In all of them it is necessary to evaluate KDF_P(q) quickly, often for many inputs q and large point-sets P.

In this paper we present a collection of algorithms for efficient KDF evaluation under the assumptions that the kernel k is "smooth", i.e., the value changes at most polynomially with the distance. This assumption is satisfied by several well-studied kernels, including the (generalized) t-student kernel and rational quadratic kernel. For smooth kernels, we give a data structure that, after O(dn log(Phi n)/eps^2) preprocessing, estimates KDF_P(q) up to a factor of 1+eps in O(d log (Phi n)/eps^2) time, where Phi is the aspect ratio. The log(Phi n) term can be further replaced by log n under an additional decay condition on k, which is satisfied by the aforementioned examples.
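As a point of reference for the quantities in the abstract, here is a minimal sketch of exact, naive KDF evaluation with one common form of the rational quadratic kernel; the dataset, query, and kernel parameter are illustrative assumptions, and this is not the data structure from the talk.

```python
import numpy as np

def rational_quadratic(p, q, alpha=1.0):
    # One common form of a "smooth" kernel: k(p, q) = (1 + ||p - q||^2)^(-alpha),
    # whose value changes at most polynomially with the distance.
    return 1.0 / (1.0 + np.sum((p - q) ** 2)) ** alpha

def kdf(P, q, kernel=rational_quadratic):
    # KDF_P(q) = (1/|P|) * sum over p in P of k(p, q); exact but O(|P| * d)
    # per query, which is the cost the preprocessing-based estimator improves on.
    return sum(kernel(p, q) for p in P) / len(P)

# Illustrative usage with random data (an assumption, not from the talk).
rng = np.random.default_rng(0)
P = rng.normal(size=(1000, 5))  # n = 1000 points in R^5
q = rng.normal(size=5)
print(kdf(P, q))
```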

We further extend the results in two ways. First, we use low-distortion embeddings to extend the results to kernels defined for spaces other than l_2. The key feature of this reduction is that the distortion of the embedding affects only the running time of the algorithm, not the accuracy of the estimation. As a result, we obtain (1+eps)-approximate estimation algorithms for kernels over other l_p norms, Earth-Mover Distance, and other metric spaces. Second, for smooth kernels that are decreasing with distance, we present a general reduction from density estimation to approximate near neighbor in the underlying space. This allows us to construct algorithms for general doubling metrics, as well as alternative algorithms for l_p norms and other spaces.

Joint work with Moses Charikar, Piotr Indyk and Paris Siminelakis.

[slides (pptx)]

Vincent Cohen-Addad
New Hardness Results for Planar Graph Problems in P and an Algorithm for Sparsest Cut

The Sparsest Cut and Minimum Quotient Cut problems are classic optimization problems that have been extensively studied. For planar inputs with integer vertex weights, both problems are in P, and the best known algorithms run in time O(n^2 W), where W is the sum of vertex weights. Both problems are key subroutines in several recursive procedures and are used to find "optimal" edge separators in planar graphs. However, despite a significant amount of effort, speeding up these algorithms has remained an open problem, and the only subquadratic algorithms achieve merely an O(log n)-approximation. We explain this lack of results by proving that the Sparsest Cut and Minimum Quotient Cut problems admit no O(n^{2-ε})-time algorithm even with unit node weights, under the (min, +)-convolution conjecture. Thus, these become the first natural planar graph problems in P for which we can prove conditional lower bounds, a repeatedly raised challenge.

To complement our results, we provide an O(1)-approximation in near-linear time, improving upon the 25-year-old near-linear time O(log n)-approximation and the 3.5-approximation algorithm running in time Õ(n^2), both due to Rao [STOC 1992]. We also prove near-quadratic lower bounds under SETH for variants of the closest pair problem in planar graphs, and use them to show that the popular Average-Linkage procedure for Hierarchical Clustering cannot be simulated in truly subquadratic time.

At the core of our constructions is a diamond-like gadget that we designed in order to settle the complexity of (planar) Diameter in (the standard model of) distributed computing. We prove an Ω(n/log n) lower bound on the number of communication rounds required to compute the weighted diameter of a network in the CONGEST model, even when the underlying graph is planar and all nodes are within D = 4 hops of each other (the unweighted diameter is 4). This is the first poly(n) lower bound in the planar-distributed setting, and it complements the recent poly(D, log n) upper bounds of Li and Parter [STOC 2019] for (exact) unweighted diameter and for (1 + ε)-approximate weighted diameter. Our lower bounds hold even for planar graphs of constant treewidth.
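For intuition about the objectives, which the abstract assumes as known, here is a minimal brute-force sketch under one standard vertex-weighted formulation: the quotient of a cut (S, V\S) is the weight of crossing edges divided by the smaller side's vertex weight, and the sparsity divides by the product of the two sides' vertex weights. The tiny graph is an illustrative assumption; the talk is about far faster algorithms and lower bounds, not this exhaustive scan.

```python
from itertools import combinations

def cut_values(nodes, node_weight, edges):
    # Exhaustive O(2^n) scan over cuts; returns the minimum quotient cut value
    # and the sparsest cut value under the formulation described above.
    nodes = list(nodes)
    total = sum(node_weight[v] for v in nodes)
    best_quotient = best_sparsity = float("inf")
    for r in range(1, len(nodes)):
        for side in combinations(nodes, r):
            S = set(side)
            crossing = sum(w for u, v, w in edges if (u in S) != (v in S))
            ws = sum(node_weight[v] for v in S)
            best_quotient = min(best_quotient, crossing / min(ws, total - ws))
            best_sparsity = min(best_sparsity, crossing / (ws * (total - ws)))
    return best_quotient, best_sparsity

nodes = ["a", "b", "c", "d"]
weights = {v: 1 for v in nodes}
edges = [("a", "b", 1), ("b", "c", 1), ("c", "d", 1), ("d", "a", 1)]  # 4-cycle
print(cut_values(nodes, weights, edges))  # (1.0, 0.5): cut the cycle into two halves
```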

Joint work with Amir Abboud and Philip N. Klein.

[slides (pptx)]

Debarati Das
Truly Sub-quadratic Time Constant Factor Approximation of Edit Distance

Edit distance is a measure of similarity of two strings based on the minimum number of character insertions, deletions, and substitutions required to transform one string into the other. This distance measure finds applications in fields such as computational biology, pattern recognition, text processing, and information retrieval. The edit distance can be computed exactly using a dynamic programming algorithm that runs in quadratic time, which is also known to be almost optimal under the SETH assumption [Backurs, Indyk 2015]. Andoni, Krauthgamer, and Onak (2010) gave a nearly linear time (randomized) algorithm that approximates edit distance within a polylog(n) approximation factor. In this talk we discuss a randomized algorithm with running time O(n^{2-2/7}) that approximates the edit distance within a constant factor.
(Joint work with Chakraborty, Goldenberg, Koucký, and Saks.)
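For concreteness, the quadratic-time dynamic program mentioned above can be sketched as follows; this is the textbook exact algorithm, not the sub-quadratic approximation from the talk.

```python
def edit_distance(a: str, b: str) -> int:
    # Standard O(|a| * |b|) dynamic program: after processing row i, prev[j] holds
    # the edit distance between a[:i] and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i] + [0] * len(b)
        for j, cb in enumerate(b, start=1):
            cur[j] = min(prev[j] + 1,                # delete ca
                         cur[j - 1] + 1,             # insert cb
                         prev[j - 1] + (ca != cb))   # substitute (or match)
        prev = cur
    return prev[len(b)]

print(edit_distance("kitten", "sitting"))  # 3
```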

[slides (pptx)]

Elazar Goldenberg
Hardness Amplification of Optimization Problems

In this talk, we will see a general hardness amplification scheme for optimization problems based on the technique of direct products. In particular, for the edit distance problem, our hardness amplification theorem may be informally stated as follows: if, for some \eps>0, there is a distribution D of Edit Distance instances such that any algorithm running in time n^{2-\eps} can correctly answer only on a 1-1/n^{o(1)} fraction of inputs sampled from D, then there is a distribution D' of Edit Distance instances such that any algorithm running in time n^{2-2\eps} can correctly answer only on a 0.01 fraction of inputs sampled from D'.
 
Moreover, we can extend the above hardness amplification result to problems in various classes such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Matrix Multiplication, and even problems in TFNP such as Factoring and computing Nash equilibrium.
 
Joint work with Karthik C. S.

[slides (pptx)]

Elazar Goldenberg
Sublinear Algorithms for Gap Edit Distance
 
In this talk, we will see a sublinear time algorithm solving the gap edit distance problem.
More specifically, we show an algorithm for distinguishing whether the edit distance is at most t or at least t^2 in time O(n/t + t^3).
We improve over the best known algorithm by Andoni and Onak (STOC 2009) for small values of t.
 
Our algorithm is based on a new approach that adaptively switches between uniform sampling and reading contiguous blocks of the input strings. In contrast, all previous algorithms choose which coordinates to query non-adaptively.

Joint work with Robert Krauthgamer and Barna Saha.

[slides (pptx)]

Michal Koucký
Constant factor approximations to edit distance on far input pairs in nearly linear time

I will describe an algorithm that runs in time n^{1+eps} and computes a constant factor approximation to the edit distance of two input strings, given that their edit distance is at least n^{1-delta} for some delta>0 depending on eps>0. This builds on previous work by Diptarka Chakraborty, Debarati Das, Elazar Goldenberg, Michal Koucký, and Mike Saks. Joint work with Mike Saks.

[slides (pptx)]

Karthik C. S.
Inapproximability of Clustering: Inspired from Fine-Grained Complexity

The two most popular objectives optimized by clustering algorithms are k-means and k-median. The k-means (resp. k-median) problem in the L_p-metric is specified by n input points, and the goal is to partition the input point set into k clusters so that the k-means (resp. k-median) objective is minimized. The best known inapproximability factors in the literature for these NP-hard problems in L_p-metrics were below 1.01. In this talk, we take a significant step toward improving the hardness of approximating these problems in various L_p-metrics.
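To fix notation (the abstract does not spell the objectives out), here is a minimal sketch of evaluating the two objectives for a given set of centers in an L_p-metric: each point is charged its distance (k-median) or squared distance (k-means) to its nearest center. The point set and candidate centers are illustrative assumptions.

```python
import numpy as np

def clustering_cost(points, centers, p=2.0, objective="means"):
    # Assign each point to its nearest center under the L_p distance and sum
    # distances (k-median) or squared distances (k-means).
    diffs = points[:, None, :] - centers[None, :, :]          # shape (n, k, d)
    dists = np.sum(np.abs(diffs) ** p, axis=2) ** (1.0 / p)   # shape (n, k)
    nearest = dists.min(axis=1)
    return np.sum(nearest ** 2) if objective == "means" else np.sum(nearest)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                   # n = 200 points in R^3
C = X[rng.choice(200, size=5, replace=False)]   # k = 5 candidate centers
print(clustering_cost(X, C, p=2.0, objective="means"),
      clustering_cost(X, C, p=1.0, objective="median"))
```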

Our techniques are inspired from the tools and perspectives developed in fine-grained complexity in the last couple of years. In particular, there is an interesting interplay between geometric embeddings and communication protocols that will be emphasized and exploited in our proofs.

Joint work with Vincent Cohen-Addad.

[slides (pptx)]

Marvin Künnemann
Approximating APSP without Scaling

Zwick's (1+\epsilon)-approximation algorithm for the All Pairs Shortest Path (APSP) problem runs in time \tilde O(n^\omega/\epsilon \cdot \log W), where \omega=2.373 is the exponent of matrix multiplication and W denotes the largest weight. Since it uses the scaling technique, it has a factor log W in the running time.
 
In this talk, we discuss whether APSP and related problems admit strongly polynomial approximation schemes (i.e., whose running time avoids a dependence on W).
In particular:
(1) We remove the \log W-factor from Zwick's running time for APSP on undirected graphs and several graph characteristics.
(2) We give a strongly polynomial approximation scheme for APSP on directed graphs, running in time \tilde O(n^{(\omega+3)/2} / \epsilon).
(3) We explain why our approximation scheme for APSP on directed graphs has a worse exponent than \omega : Any improvement over our exponent (\omega+3)/2 would improve the best known algorithm for Min-Max Product. In fact, we prove that approximating directed APSP and exactly computing the Min-Max Product are equivalent.
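Since item (3) hinges on the Min-Max Product, here is a minimal cubic-time sketch of that product (the definition is the standard bottleneck product; the example matrices are illustrative assumptions), which is the naive baseline that any faster approximation is measured against.

```python
import math

def min_max_product(A, B):
    # C[i][j] = min over k of max(A[i][k], B[k][j]); the naive algorithm is cubic.
    n, m, l = len(A), len(B), len(B[0])
    return [[min(max(A[i][k], B[k][j]) for k in range(m)) for j in range(l)]
            for i in range(n)]

A = [[1, 5], [2, math.inf]]
B = [[4, 3], [1, 2]]
print(min_max_product(A, B))  # [[4, 3], [4, 3]]
```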

This is joint work with Karl Bringmann and Karol Węgrzycki.

[slides (pptx)]

Rasmus Kyng
As hard as linear equations or two-commodity flow

I will explain some hardness results which demonstrate that several classes of structured systems of linear equations are as hard to solve as arbitrary systems of linear equations, up to polylogarithmic factors. More generally, we can ask which problems are at least as hard as solving arbitrary linear equations. Based on this idea, in some recent follow-up work we show that if fast algorithms for Packing and Covering linear programs could run with a somewhat better ε-dependence (e.g. ε^{−1/5} to obtain a (1 + ε)-approximate solution), then we would get algorithms that break long-standing barriers in the linear equation solver literature. Finally, we have been digging through the archives, and I will highlight an old result that, with a trivial tweak, has interesting consequences that are not widely known today: Alon Itai essentially showed in 1978 that two-commodity flow is as hard as general linear programming, in a fairly strong sense.

Based on joint works with [Peng Zhang] and [Richard Peng, Peng Zhang, and Di Wang].

[slides (pptx)]

Euiwoong Lee
An FPT-approximation Algorithm for k-cut

In the k-cut problem, we are given an edge-weighted graph G and an integer k, and have to remove a set of edges with minimum total weight so that G has at least k connected components. The problem can be solved exactly in time n^{O(k)}, and can be 2-approximated in time poly(n, k).

A reduction from k-clique suggests that a runtime of n^{Ω(k)} for an exact algorithm is needed: there is likely no exact FPT algorithm running in time f(k)*poly(n) for any function f. On the approximation side, the Small Set Expansion Hypothesis implies that the 2-approximation cannot be improved in poly(n, k) time. But can we approximate k-cut better than a factor of 2 in FPT time? We answer the question positively, and present a 1.8-approximation algorithm running in time f(k)*n^4.
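As a reference for the problem definition above, here is a minimal brute-force sketch that finds an optimal k-cut on a tiny example by trying all edge subsets. It is exponential and purely illustrative; the graph is an assumption, and this is not the FPT approximation algorithm from the talk.

```python
from itertools import combinations

def components(nodes, edges):
    # Count connected components with a simple union-find.
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v, _ in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in nodes})

def min_k_cut(nodes, edges, k):
    # Try every subset of edges to remove; keep the cheapest one that leaves
    # at least k connected components.
    best = None
    for r in range(len(edges) + 1):
        for removed in combinations(edges, r):
            kept = [e for e in edges if e not in removed]
            if components(nodes, kept) >= k:
                cost = sum(w for _, _, w in removed)
                if best is None or cost < best:
                    best = cost
    return best

nodes = ["a", "b", "c", "d"]
edges = [("a", "b", 2), ("b", "c", 1), ("c", "d", 3)]  # a weighted path
print(min_k_cut(nodes, edges, 3))  # remove the weight-1 and weight-2 edges: cost 3
```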

Joint work with Anupam Gupta and Jason Li.

[slides (pptx)]

Richard Peng
Batch-Dynamic Graph Algorithms

We study parallel algorithms for large evolving graphs: the edges are distributed over machines with small space, and updates/queries arrive in small batches. We give constant-round batch-dynamic algorithms for several problems considered difficult to solve in O(1) parallel rounds statically, including undirected connectivity, directed max-matching, and maximal independent set.
Lower bounds in this model face difficulties inherent to communication-based models: the machines can perform arbitrarily powerful computations between rounds of communication. Here we obtain a separation in the difficulty of batch-dynamic undirected connectivity by combining adaptivity with P-hardness reductions.

[slides (pptx)]

Liam Roditty
Approximating the Diameter

In this talk I will survey the literature on computing the diameter of general graphs. Then I will present a quadratic-time algorithm that almost achieves a 3/2-approximation.
The talk is based on joint work with Backurs, Segal, Vassilevska Williams, and Wein.
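As a baseline for the approximation factors discussed in the talk, recall the classic linear-time observation: the eccentricity of any single vertex (one BFS) is already within a factor 2 of the diameter. The sketch below is illustrative and is not the near-3/2-approximation from the talk; the toy graph is an assumption.

```python
from collections import deque

def eccentricity(adj, src):
    # BFS from src; the largest distance found is ecc(src), and by the triangle
    # inequality D/2 <= ecc(src) <= D for the (unweighted) diameter D.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}  # a path: diameter 3
print(eccentricity(adj, 2))  # prints 2; the true diameter is 3
```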

[slides (pptx)]

Ohad Trabelsi
New Algorithms and Lower Bounds for All-Pairs Max-Flow in Undirected Graphs

We investigate the time complexity of the All-Pairs Max-Flow problem: given a graph with n nodes and m edges, compute for all pairs of nodes the maximum-flow value between them. If Max-Flow (the version with a given source-sink pair s,t) can be solved in time T(m), then O(n^2)*T(m) is a trivial upper bound. But can we do better?
We show three results:
1. We design the first hardness reductions for All-Pairs Max-Flow in undirected graphs, giving an essentially optimal lower bound for the node-capacities setting.
2. For edge capacities, we have discovered a surprising new algorithm that breaks the O(mn) barrier for graphs with unit-capacity edges!
Assuming T(m) = m^{1+o(1)}, our algorithm runs in time m^{3/2+o(1)} and outputs a cut-equivalent tree. Even with current Max-Flow algorithms we improve the state of the art in many density regimes.
3. Finally, we explain the lack of lower bounds for constructing a Gomory-Hu tree by proving a non-reducibility result.
This result is based on a new near-linear time non-deterministic algorithm for constructing a cut-equivalent tree and may be of independent interest.
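To make the trivial O(n^2)*T(m) baseline from the first paragraph concrete, here is a minimal sketch that answers all pairs by running a textbook Edmonds-Karp max-flow once per pair on an undirected toy graph. The graph is an illustrative assumption, and this is not the cut-equivalent-tree algorithm from the talk.

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    # Edmonds-Karp on a residual-capacity dict; cap[u][v] is the residual capacity.
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        # Find the bottleneck along the augmenting path and update residuals.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] += aug
        flow += aug

def all_pairs_max_flow(edges, nodes):
    # Trivial O(n^2) * T(m) baseline: one max-flow computation per node pair.
    results = {}
    for i, s in enumerate(nodes):
        for t in nodes[i + 1:]:
            cap = defaultdict(lambda: defaultdict(int))
            for u, v, c in edges:  # an undirected edge becomes arcs both ways
                cap[u][v] += c
                cap[v][u] += c
            results[(s, t)] = max_flow(cap, s, t)
    return results

edges = [("a", "b", 3), ("b", "c", 1), ("a", "c", 2)]  # toy undirected graph
print(all_pairs_max_flow(edges, ["a", "b", "c"]))
```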

Joint work with Amir Abboud and Robert Krauthgamer.

[slides (pptx)]

Nicole Wein
Time/accuracy trade-off lower bounds from k-OV

There are many conditional lower bounds based on the orthogonal vectors (OV) conjecture, which states that the OV problem (given a set of binary vectors, find an orthogonal pair) cannot be solved in time n^{2-eps}. The strong exponential time hypothesis (SETH) implies the OV conjecture, as well as the more general k-OV conjecture, which states that k-OV (given a set of binary vectors, find k vectors that are jointly orthogonal, i.e., no coordinate is 1 in all of them) cannot be solved in time n^{k-eps}.
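For reference, the pairwise OV problem (the k = 2 case) can be solved naively in quadratic time by checking all pairs, as in the sketch below; the conjecture asserts that no n^{2-eps}-time algorithm does substantially better. The input vectors are an illustrative assumption.

```python
from itertools import combinations

def has_orthogonal_pair(vectors):
    # Naive O(n^2 * d) check: a pair is orthogonal iff no coordinate is 1 in both.
    return any(all(x * y == 0 for x, y in zip(u, v))
               for u, v in combinations(vectors, 2))

vecs = [(1, 0, 1), (0, 1, 1), (1, 1, 0), (0, 1, 0)]
print(has_orthogonal_pair(vecs))  # True: (1, 0, 1) and (0, 1, 0) are orthogonal
```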

For example, the OV conjecture implies a lower bound for the Diameter problem (find the farthest pair of vertices in a graph): the OV conjecture rules out a better than 3/2-approximation for Diameter in m^{2-eps} time. We show that by reducing from k-OV instead of OV, one can achieve a time-accuracy trade-off lower bound for Diameter (and related problems). For example, no m^{3/2−eps} time algorithm can achieve an approximation factor better than 5/3. This shows that the running time of the previously known 3/2-approximation in O(m^{3/2}) time is conditionally tight.

The purpose of this talk is to introduce the technique that yields time/accuracy trade-off lower bounds for Diameter (and related problems) with the goal of inspiring discussion about time/accuracy trade-off lower bounds for a broader range of problems.

Joint work with Arturs Backurs, Liam Roditty, Gilad Segal, and Virginia Vassilevska Williams.

[slides (pptx)]

END