SODA 2016 Accepted Papers with Abstracts

Yixin Cao. Linear Recognition of Almost Interval Graphs
Abstract: Let $\mbox{interval} + k v$, $\mbox{interval} + k e$, and $\mbox{interval} - k e$ denote the classes of graphs that can be obtained from some interval graph by adding $k$ vertices, adding $k$ edges, and deleting $k$ edges, respectively. When $k$ is small, these graph classes are called almost interval graphs. They are well motivated by applications in computational biology, where the data ought to be representable by an interval graph, while in practice one can hope for an almost interval graph at best. For any fixed $k$, we give linear-time algorithms for recognizing all these classes, and in the case of membership, our algorithms also provide a specific interval graph as evidence. When $k$ is part of the input, these problems are also known as graph modification problems, all NP-complete. Our results imply that they are fixed-parameter tractable parameterized by $k$, thereby resolving the long-standing open problem on the parameterized complexity of recognizing $\mbox{interval}+ k e$, first asked by Bodlaender et al.\ [Bioinformatics, 11:49--57, 1995]. Moreover, our algorithms for recognizing $\mbox{interval}+ k v$ and $\mbox{interval}- k e$ run in time $O(6^k \cdot (n + m))$ and $O(8^k \cdot (n + m))$ (where $n$ and $m$ denote the numbers of vertices and edges of the input graph), significantly improving the $O(k^{2k}\cdot n^3m)$-time algorithm of Heggernes et al.~[STOC 2007; SICOMP 2009] and the $O(10^k \cdot n^9)$-time algorithm of Cao and Marx [SODA 2014; TALG 2015], respectively.
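As a small illustration of these classes (our example, not from the abstract): the chordless cycle $C_4$ is not an interval graph, yet it lies in all three classes already for $k=1$:
\begin{equation}
C_4 \in \mbox{interval} - 1e, \qquad C_4 \in \mbox{interval} + 1e, \qquad C_4 \in \mbox{interval} + 1v. \nonumber
\end{equation}
Indeed, deleting the chord of the interval graph $K_4 - e$ yields $C_4$; adding the missing cycle edge to the interval path $P_4$ yields $C_4$; and $C_4$ arises from the interval path $P_3$ by adding one vertex.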
Noga Alon, Humberto Naves and Benny Sudakov. On the maximum quartet distance between phylogenetic trees
Abstract: A conjecture of Bandelt and Dress states that the maximum quartet distance
between any two phylogenetic trees on $n$ leaves is at most
$(\frac 23 +o(1))\binom{n}{4}$.
Using the machinery of flag algebras we improve the currently known bounds
regarding this conjecture; in particular, we show that the maximum is at most
$(0.69 +o(1))\binom{n}{4}$.
We also give further evidence that the conjecture is true by proving that
the maximum distance between caterpillar trees is at most
$(\frac 23 +o(1))\binom{n}{4}$.
Scott Garrabrant and Igor Pak. Permutation patterns are hard to count
Abstract: Let F be a finite set of permutations and let C(n,F) denote the number of permutations on n elements avoiding the set of patterns F. We prove that C(n,F) cannot be computed in time polynomial in n, unless EXP = parity EXP.

Our tools also allow us to disprove the Noonan--Zeilberger conjecture which states that the sequence {C(n,F)} is P-recursive.
Varun Kanade, Reut Levi, Zvi Lotker, Frederik Mallmann-Trenn and Claire Mathieu. Effective Diameter for Forest Fire and Social Random Walk Model
Abstract: Leskovec, Kleinberg and Faloutsos (2005) observed that many real social networks exhibit properties such as shrinking diameter, densification, and (power-law) heavy tail degree distributions. To explain these phenomena, they introduced a generative model, called the Forest Fire model, and using simulations showed that this model indeed exhibited these properties; however, proving this rigorously was left as an open problem.

In this paper, we analyze one of these properties, shrinking diameter. We define a restricted version of their model that incorporates the main features that seem to contribute towards this property, and prove that the graphs generated by this model exhibit shrinking diameter. We prove that an even simpler model, the random walk model, already exhibits this phenomenon.
Andreas Galanis and Leslie Ann Goldberg. The complexity of approximately counting in 2-spin systems on k-uniform bounded-degree hypergraphs
Abstract: One of the most important recent developments in the complexity of approximate counting is the classification of the complexity of approximating the partition functions of antiferromagnetic 2-spin systems on bounded-degree graphs. This classification is based on a beautiful connection to the so-called uniqueness phase transition from statistical physics on the infinite Delta-regular tree. Our objective is to study the impact of this classification on unweighted 2-spin models on k-uniform hypergraphs. As has already been indicated by Yin and Zhao, the connection between the uniqueness phase transition and the complexity of approximate counting breaks down in the hypergraph setting. Nevertheless, we show that for every non-trivial symmetric k-ary Boolean function f there exists a degree bound Delta_0 so that for all Delta >= Delta_0 the following problem is NP-hard: given a k-uniform hypergraph with maximum degree at most Delta, approximate the partition function of the hypergraph 2-spin model associated with f. It is NP-hard to approximate this partition function even within an exponential factor. By contrast, if f is a trivial symmetric Boolean function (e.g., any function f that is excluded from our result), then the partition function of the corresponding hypergraph 2-spin model can be computed exactly in polynomial time.
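For concreteness, the partition function in question can be formalized as follows (our gloss of the standard unweighted model; the paper's normalization may differ). For a $k$-uniform hypergraph $H=(V,E)$ and a symmetric $k$-ary Boolean function $f$,
\begin{equation}
Z_f(H) = \sum_{\sigma : V \to \{0,1\}} \prod_{e \in E} f\big(\sigma(v_{e,1}), \ldots, \sigma(v_{e,k})\big), \nonumber
\end{equation}
where $e = \{v_{e,1},\ldots,v_{e,k}\}$; since $f$ is symmetric, the ordering of the arguments is immaterial.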
Antares Chen, David Harris and Aravind Srinivasan. Partial Resampling to Approximate Covering Integer Programs
Abstract: We consider positive covering integer programs, which generalize set cover and which have attracted a long line of research developing (randomized) approximation algorithms. Srinivasan (2006) gave a rounding algorithm based on the FKG inequality for systems which are ``column-sparse.'' This algorithm may return an integer solution in which the variables get assigned large (integral) values; Kolliopoulos \& Young (2005) modified this algorithm to limit the solution size, at the cost of a worse approximation ratio.
We develop a new rounding scheme based on the Partial Resampling variant of the Lov\'{a}sz Local Lemma developed by Harris \& Srinivasan (2013). This achieves an approximation ratio of $1 + \frac{\ln (\Delta_1+1)}{a_{\min}} + O(\sqrt{ \frac{\log (\Delta_1+1)}{a_{\min}}} )$, where $a_{\min}$ is the minimum covering constraint and $\Delta_1$ is the maximum $\ell_1$-norm of any column of the covering matrix (whose entries are scaled to lie in $[0,1]$); we also show nearly-matching inapproximability and integrality-gap lower bounds.

Our approach improves asymptotically, in several different ways, over known results. First, it replaces $\Delta_0$, the maximum number of nonzeroes in any column (from the result of Srinivasan) by $\Delta_1$ which is always -- and can be much -- smaller than $\Delta_0$; this is the first such result in this context. Second,
our algorithm automatically handles multi-criteria programs; we achieve improved approximation ratios compared to the algorithm of Srinivasan, and give, for the first time when the number of objective functions is large, polynomial-time algorithms with good multi-criteria approximations.
We also significantly improve upon the upper-bounds of Kolliopoulos \& Young when the integer variables are required to be within $(1 + \epsilon)$ of some given upper-bounds, and show nearly-matching inapproximability.
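As a quick sanity check of the main ratio (our remark): for unweighted set cover the constraint matrix is 0/1, so $a_{\min} = 1$ and $\Delta_1 = \Delta_0 = d$, the maximum set size, and the guarantee specializes to
\begin{equation}
1 + \ln(d+1) + O\big(\sqrt{\log(d+1)}\big), \nonumber
\end{equation}
which matches the classical $\ln d + O(1)$ greedy bound for set cover up to lower-order terms.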
David Harris and Aravind Srinivasan. Algorithmic and Enumerative Aspects of the Moser-Tardos Distribution
Abstract: Moser \& Tardos have developed a powerful algorithmic approach (henceforth ``MT'') to the Lov\'{a}sz Local Lemma (LLL); the basic operation done in MT and its variants is a search for ``bad'' events in a current configuration. In the initial stage of MT, the variables are set independently. We examine the distributions on these variables which arise during intermediate stages of MT. We show that these configurations have a more or less ``random'' form, building further on the ``MT-distribution'' concept of Haeupler et al.\ in understanding the (intermediate and) output distribution of MT. This has a variety of algorithmic applications; the most important is that bad events can be found relatively quickly, improving upon MT across the complexity spectrum: it makes some polynomial-time algorithms sub-linear (e.g., for Latin transversals, which are of basic combinatorial interest), gives lower-degree polynomial run-times in some settings, transforms certain super-polynomial-time algorithms into polynomial-time ones, and leads to Las Vegas algorithms for some coloring problems for which only Monte Carlo algorithms were known.

We show that, under certain conditions in which the LLL condition is violated, a variant of the MT algorithm can still produce a distribution which avoids most of the bad events. We show that in some cases this MT variant can run faster than the original MT algorithm itself, and we develop the first known criterion for the case of the asymmetric LLL. This can be used to find partial Latin transversals -- improving upon earlier bounds of Stein (1975) -- among other applications. We furthermore give applications in enumeration, showing that most applications (where we aim for all or most of the bad events to be avoided) have many more solutions than previously known, by proving that the MT-distribution has ``large'' min-entropy and hence that its support size is large.
Tamal Dey, Facundo Memoli and Yusu Wang. Multiscale Mapper: An Algorithm for Topological Summarization via Codomain Covers
Abstract: Summarizing topological information from datasets and maps defined on them is a central theme in topological data analysis. \textsf{Mapper}, a tool for such summarization, takes as input both a possibly high-dimensional dataset and a map defined on the data, and produces a summary of the data by using a cover of the codomain of the map. This cover, via a pullback operation to the domain, produces a simplicial complex connecting the data points. The resulting view of the data through a cover of the codomain offers flexibility in analyzing the data. However, it offers only a view at a fixed scale at which the cover is constructed. Inspired by this concept, we explore a notion of a hierarchical family of coverings which induces a hierarchical family of simplicial complexes connected by simplicial maps, which we call {\em multiscale mapper}. We study the resulting structure, and design practical algorithms to compute its persistence diagrams efficiently. Specifically, when the domain is a simplicial complex and the map is a real-valued piecewise-linear function, the algorithm can compute the exact persistence diagram only from the 1-skeleton graph of the input complex. For general maps, we present a combinatorial version of the algorithm that acts only on \emph{vertex sets} connected by the 1-skeleton graph, and this algorithm approximates the exact persistence diagram thanks to a stability result that we show to hold.
Mahdi Cheraghchi and Piotr Indyk. Nearly Optimal Deterministic Algorithm for Sparse Walsh-Hadamard Transform
Abstract: For every fixed constant $\alpha > 0$, we design an algorithm for computing the $k$-sparse Walsh-Hadamard transform of an $N$-dimensional vector $x \in \mathbb{R}^N$ in time $k^{1+\alpha} (\log N)^{O(1)}$. Specifically, the algorithm is given query access to $x$ and computes a $k$-sparse $\tilde{x} \in \mathbb{R}^N$ satisfying $\|\tilde{x} - \hat{x}\|_1 \leq c \|\hat{x} - H_k(\hat{x})\|_1$, for an absolute constant $c > 0$, where $\hat{x}$ is the transform of $x$ and $H_k(\hat{x})$ is its best $k$-sparse approximation. Our algorithm is fully deterministic and only uses non-adaptive queries to $x$ (i.e., all queries are determined and performed in parallel when the algorithm starts).

An important technical tool that we use is a construction of nearly optimal and linear lossless condensers which is a careful instantiation of the GUV condenser (Guruswami, Umans, Vadhan, JACM 2009). Moreover, we design a deterministic and non-adaptive $\ell_1/\ell_1$ compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time $k^{1+\alpha} (\log N)^{O(1)}$ (for the GUV-based condenser) and is of independent interest. Our scheme significantly simplifies and improves an earlier expander-based construction due to Berinde, Gilbert, Indyk, Karloff, Strauss (Allerton 2008).

Our methods use linear lossless condensers in a black box fashion; therefore, any future improvement on explicit constructions of such condensers would immediately translate to improved parameters in our framework (potentially leading to $k (\log N)^{O(1)}$ reconstruction time with a reduced exponent in the poly-logarithmic factor, and eliminating the extra parameter $\alpha$).

Finally, by allowing the algorithm to use randomness, while still using non-adaptive queries, the running time of the algorithm can be improved to $\tilde{O}(k \log^3 N)$.
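For reference, the transform in question is, in one standard convention (the paper's normalization may differ): for $x \in \mathbb{R}^N$ with $N = 2^n$ and indices identified with $\{0,1\}^n$,
\begin{equation}
\hat{x}(j) = \sum_{i \in \{0,1\}^n} (-1)^{\langle i, j \rangle}\, x(i), \nonumber
\end{equation}
and $H_k(\hat{x})$ keeps the $k$ largest-magnitude entries of $\hat{x}$ and zeroes out the rest.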
Marcin Pilipczuk and Magnus Wahlström. Directed multicut is W[1]-hard, even for four terminal pairs
Abstract: We prove that Multicut in directed graphs, parameterized by the size of the cutset, is W[1]-hard and hence unlikely to be fixed-parameter tractable even if restricted to instances with only four terminal pairs. This negative result almost completely resolves one of the central open problems in the area of parameterized complexity of graph separation problems, posed originally by Marx and Razgon [SIAM J. Comput. 43(2):355-388 (2014)], leaving only the case of three terminal pairs open.

Our gadget methodology allows us also to prove W[1]-hardness of the Steiner Orientation problem parameterized by the number of terminal pairs, resolving an open problem of Cygan, Kortsarz, and Nutov [SIAM J. Discrete Math. 27(3):1503-1513 (2013)].
Ivan Bliznets, Fedor Fomin, Marcin Pilipczuk and Michał Pilipczuk. Subexponential parameterized algorithm for Interval Completion
Abstract: In the Interval Completion problem we are given an $n$-vertex graph $G$ and an integer $k$, and the task is to transform $G$ by making use of at most $k$ edge additions into an interval graph. This is a fundamental graph modification problem with applications in sparse matrix multiplication and molecular biology. The question about fixed-parameter tractability of Interval Completion was asked by Kaplan, Shamir and Tarjan~[FOCS 1994; SIAM J. Comput. 1999] and was answered affirmatively more than a decade later by Villanger et al.~[STOC 2007; SIAM J. Comput. 2009], who presented an algorithm with running time $O(k^{2k} n^3 m)$. We give the first subexponential parameterized algorithm solving Interval Completion in time $k^{O(\sqrt{k})} n^{O(1)}$. This adds Interval Completion to a very small list of parameterized graph modification problems solvable in subexponential time.
Moran Feldman, Ola Svensson and Rico Zenklusen. Online Contention Resolution Schemes
Abstract: We introduce a new rounding technique designed for online optimization problems, which is related to contention resolution schemes, a technique initially introduced in the context of submodular function maximization. Our rounding technique, which we call online contention resolution schemes (OCRSs), is applicable to many online selection problems, including Bayesian online selection, oblivious posted pricing mechanisms, and stochastic probing models. It allows for handling a wide set of constraints, and shares many strong properties of offline contention resolution schemes. In particular, OCRSs for different constraint families can be combined to obtain an OCRS for their intersection. Moreover, we can approximately maximize submodular functions in the online settings we consider.

We thus get a broadly applicable framework for several online selection problems, which improves on previous approaches in terms of the types of constraints that can be handled, the objective functions that can be dealt with, and the assumptions on the strength of the adversary. Furthermore, we resolve two open problems from the literature; namely, we present the first constant-factor constrained oblivious posted price mechanism for matroid constraints, and the first constant-factor algorithm for weighted stochastic probing with deadlines.
Aaron Bernstein and Clifford Stein. Faster Fully Dynamic Matchings with Small Approximation Ratios
Abstract: Maximum cardinality matching is a fundamental algorithmic problem with many algorithms and applications. The fully dynamic version, in which edges are inserted and deleted over time, has
also been the subject of much attention. Existing algorithms for dynamic matching (in general n-vertex m-edge graphs) fall into two groups: there are fast (mostly randomized) algorithms that do not achieve a better than 2-approximation, and there are slow algorithms with Omega(m^{1/2})
update time that achieve a better-than-2 approximation. Thus the obvious question is whether we can design an algorithm that achieves a tradeoff between these two: a o(m^{1/2}) update time and a better-than-2 approximation simultaneously. We answer this question in the affirmative. Previously, such bounds were only known for the special case of bipartite graphs.

Our main result is a fully dynamic deterministic algorithm that maintains a (3/2 + epsilon)-approximation in amortized update time O(m^{1/4}epsilon^{-2.5}). In addition to achieving the trade-off described above, our algorithm manages to be polynomially faster than all existing deterministic algorithms (excluding a log(n)-approximation of Onak et al.) while still maintaining a better-than-2 approximation.

We also give stronger results for graphs whose arboricity is at most a. We show how to maintain a (1+ epsilon)-approximate fractional matching or a (3/2 + epsilon)-approximate integral matching in worst-case time O(a(a + log n)) for constant epsilon. When the arboricity is constant, this bound is O(log n) and when the arboricity is polylogarithmic the update time is also polylogarithmic. Previous results for small arboricity non-bipartite graphs could only maintain a maximal matching (2-approximation).

We maintain the approximate matching without explicitly using augmenting paths. We define an intermediate graph, called an edge degree constrained subgraph (EDCS), show that the EDCS contains a large matching, and show how to maintain an EDCS H in G. The EDCS was used in previous works on bipartite graphs; however, the details and proofs are completely different in general graphs. The algorithm for bipartite graphs relies on ideas from flows and cuts to non-constructively prove the existence of a good matching in H, but these ideas do not seem to extend to non-bipartite graphs. In this paper we instead explicitly construct a large fractional matching in H. In some cases we can guarantee that this fractional matching is gamma-restricted, which means that it only uses values either in the range [0,gamma] or 1. We then combine this matching with a new structural property of maximum matchings in non-bipartite graphs, which is analogous to the cut induced by maximum matchings in bipartite graphs.
Timothy Chan and Ryan Williams. Deterministic APSP, Partial Matches, and More: Quickly Derandomizing the Polynomial Method
Abstract: We show how to solve all-pairs shortest paths on $n$ nodes in deterministic $n^3/2^{\Omega(\sqrt{\log n})}$ time, and how to count the pairs of orthogonal vectors among $n$ 0-1 vectors in $d = c\log n$ dimensions in deterministic $n^{2-1/O(\log c)}$ time. These running times essentially match the best known randomized algorithms of (Williams, STOC'14) and (Abboud, Williams, and Yu, SODA 2015) respectively, and the ability to count was open even for randomized algorithms. By reductions, these two results yield fast deterministic algorithms for many other problems.

A key to our constructions is an efficient way to deterministically simulate certain probabilistic polynomials critical to the algorithms of prior work, applying the tools of epsilon-biased sets and modulus-amplifying polynomials.
Rajesh Chitnis, Graham Cormode, Hossein Esfandiari, Mohammadtaghi Hajiaghayi, Andrew McGregor, Morteza Monemizadeh and Sofya Vorotnikova. Kernelization via Sampling with Applications to Dynamic Graph Streams
Abstract: In this paper we present a simple but powerful subgraph sampling primitive that is applicable in a variety of computational models including dynamic graph streams (where the input graph is defined by a sequence of edge/hyperedge insertions and deletions) and distributed systems such as MapReduce. In the case of dynamic graph streams, we use this primitive to prove the following results:

Matching: First, there exists an $\tilde{O}(k^2)$ space algorithm that returns an exact maximum matching under the assumption that its cardinality is at most $k$. The best previous algorithm used $\tilde{O}(kn)$ space, where $n$ is the number of vertices in the graph, and we prove our result is optimal up to logarithmic factors. Our algorithm has $\tilde{O}(1)$ update time. Second, there exists an $\tilde{O}(n^2/\alpha^3)$ space algorithm that returns an $\alpha$-approximation for matchings of arbitrary size. We generalize both results for weighted matching. Third, there exists an $\tilde{O}(n^{4/5})$ space algorithm that returns a constant approximation in graphs with bounded arboricity. While there has been a substantial amount of work on approximate matching in insert-only graph streams, these are the first non-trivial results in the dynamic setting.

Vertex Cover and Hitting Set: There exists an $\tilde{O}(k^d)$ space algorithm that solves the minimum hitting set problem where $d$ is the cardinality of the input sets and $k$ is an upper bound on the size of the minimum hitting set. We prove this is optimal up to logarithmic factors. Our algorithm has $\tilde{O}(1)$ update time. The case $d=2$ corresponds to minimum vertex cover.

Finally, we consider a larger family of parameterized problems (including $b$-matching, disjoint paths, and vertex coloring, among others) for which our subgraph sampling primitive yields fast, small-space dynamic graph stream algorithms. We then show lower bounds for natural problems outside this family.
Giorgi Nadiradze and Andreas Wiese. On approximating strip packing with a better ratio than 3/2
Abstract: In the strip packing problem we are given a set of rectangular items
that we want to place in a strip of given width such that we minimize
the height of the obtained packing. It is a very classical two-dimensional
packing problem that has received a lot of attention and it has applications
in many settings such as stock-cutting and scheduling. A straightforward
reduction from Partition shows that the problem cannot be
approximated with a better absolute factor than 3/2. However, this reduction
requires the numeric values to be exponentially large. In this paper,
we present a (1.4+epsilon)-approximation algorithm with pseudo-polynomial
running time. This implies that for polynomially bounded input data
the problem can be approximated with a strictly better ratio than
for exponential input, which is a very rare phenomenon in combinatorial
optimization.
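For intuition, the reduction can be sketched as follows (our sketch of the folklore argument): given numbers $a_1,\ldots,a_n$ with $\sum_i a_i = 2B$, take a strip of width $B$ and items of width $a_i$ and height $1$. Settling any packing under gravity makes all $y$-coordinates integral, so
\begin{equation}
\mathrm{OPT} = 2 \iff \exists\, I \subseteq \{1,\ldots,n\} : \sum_{i \in I} a_i = B, \qquad \mathrm{OPT} \ge 3 \text{ otherwise}. \nonumber
\end{equation}
Distinguishing heights $2$ and $3$ is therefore NP-hard, ruling out absolute ratios below $3/2$; note that Partition is only hard when the $a_i$ may be exponentially large, since it is solvable in pseudo-polynomial time.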

Our algorithm is based on a structural lemma proving that there is
a packing of height (1.4+epsilon)OPT that allows a partition
of the packing area into few rectangular boxes. These boxes have the
property that they decouple the packing decisions for the items that
are thin and high, wide and narrow, or large in both dimensions.
The interaction of these item types is typically a major problem
when designing algorithms, and our partition completely resolves this issue.
Particularly
difficult are items that are thin and high, since a single one of them
can increase the optimal packing height significantly if placed wrongly,
and there can be up to Omega(n) of them. For
those items the box partition is even fine grained enough so that
all items in a box have essentially the same height. This reduces
the usually difficult packing decisions for these items to a problem
that can be solved easily via a pseudo-polynomial time dynamic program.

The mentioned reduction from Partition also breaks down if we are allowed
to drop a constant number of input items (while the optimal solution we
compare against cannot do so). We show that in this setting we can approximate
the problem much better and present a polynomial time algorithm computing
a packing of height at most (1+epsilon)OPT that needs to drop at most
a constant number of items.
Stephen Alstrup, Cyril Gavoille, Esben Bistrup Halvorsen and Holger Petersen. Simpler, faster and shorter labels for distances in graphs
Abstract: We consider how to assign \emph{labels} to any undirected graph with $n$ nodes such that, given the labels of two nodes and no other information regarding the graph, it is possible to determine the distance between the two nodes. The challenge in such a \emph{distance labeling scheme} is primarily to minimize the maximum label length and secondarily to minimize the time needed to answer distance queries (decoding). Previous schemes have offered different trade-offs between label lengths and query time. This paper presents a simple algorithm with shorter labels and shorter query time than any previous solution, thereby improving the state-of-the-art with respect to both label length and query time in one single algorithm. Our solution addresses several open problems concerning label length and decoding time and is the first improvement of label length for more than three decades.

More specifically, we present a distance labeling scheme with labels of length $\frac{\log 3}{2}n+o(n)$ bits\footnote{Throughout the paper, all logarithms are in base 2.} and constant decoding time. This outperforms all existing results with respect to both size and decoding time, including Winkler's (Combinatorica 1983) decades-old result, which uses labels of size $(\log 3)n$ and $O(n / \log n)$ decoding time, and Gavoille et al.\ (SODA'01), which uses labels of size $11n+o(n)$ and $O(\log\log n)$ decoding time. In addition, our algorithm is simpler than the previous ones.

In the case of integral edge weights of size at most $W$, we present almost matching upper and lower bounds for the label size $\ell$:
$\frac{1}{2}(n-1) \log \left\lceil\frac{W}{2} + 1\right\rceil \le \ell \le \frac{1}{2}n \log{(2W+1)} + O(\log n\cdot\log(nW))$.
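For example, in the unweighted case $W = 1$ these bounds specialize (our arithmetic) to
\begin{equation}
\frac{n-1}{2} \le \ell \le \frac{1}{2}n \log 3 + O(\log^2 n) = \frac{\log 3}{2}n + o(n), \nonumber
\end{equation}
so the upper bound matches the scheme described above.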

Furthermore, for $r$-additive approximation labeling schemes, where distances can be off by up to an additive constant $r$, we present both upper and lower bounds. In particular, we present an upper bound for $1$-additive approximation schemes which, in the unweighted case, has the same size (ignoring second order terms) as an adjacency labeling scheme, namely $n/2$.

We also give results for bipartite graphs as well as for exact and $1$-additive distance oracles.
Yuichi Yoshida. Gowers Norm, Function Limits, and Parameter Estimation
Abstract: Let $\{f_i : \mathbb{F}_p^i \to \{0,1\}\}$ be a sequence of functions, where $p$ is a fixed prime and $\mathbb{F}_p$ is the finite field of order $p$. The limit of the sequence can be syntactically defined using the notion of ultra limit. Inspired by the Gowers norm, we introduce a metric over limits of function sequences, and study its properties. One application of this metric is that it provides a simpler characterization of affine-invariant parameters of functions that are constant-query estimable than the previous one obtained by Yoshida (STOC'14). Using this characterization, we show that the property of being a function of a constant number of low-degree polynomials and a constant number of factored polynomials (of arbitrary degrees) is constant-query testable if it is closed under blowing-up. Examples of this property include the property of having a constant spectral norm and degree-structural properties with rank conditions.
Jonathan Scarlett and Volkan Cevher. Phase Transitions in Group Testing
Abstract: We consider the fundamental limits on the number of tests required for the group testing problem, which has applications in medical testing, fault detection, communication protocols, pattern matching, and database systems. Specifically, we have $p$ items labeled $\{1,\dotsc,p\}$, among which a uniformly random subset $S$ of cardinality $k$ are defective. The goal is to recover the set $S$ based on $n$ possibly noisy measurements, each taking the form
\begin{equation}
Y = \mathbb{1}\bigg\{ \bigcup_{i \in S} \{X_i = 1\} \bigg\} \oplus Z, \nonumber
\end{equation}
where $X=(X_1,\dotsc,X_p)\in\{0,1\}^p$ is a measurement vector indicating the items involved in each test, and $Z\in\{0,1\}$ represents possible noise under modulo-$2$ addition. We consider Bernoulli designs, in which each entry of each measurement vector is independently drawn at random from a Bernoulli distribution.
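To make the design concrete (our remark; the abstract does not fix the Bernoulli parameter): if each entry is drawn as $X_i \sim \mathrm{Bernoulli}(q)$ i.i.d., then a single noiseless test is positive with probability
\begin{equation}
\mathbb{P}[Y = 1] = 1 - (1-q)^k, \nonumber
\end{equation}
and the common choice $q = (\ln 2)/k$ drives this probability to $1/2$ as $k \to \infty$, maximizing the entropy of each test outcome.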

In the noiseless case with $k = \Theta(p^{\theta})$ ($\theta \le \frac{1}{3}$), we show the following for an optimal recovery algorithm:
\begin{align}
n \ge \Big( k\log_2\frac{p}{k} \Big) (1+\eta) \implies \mathbb{P}[\mathrm{error}] \to 0 \nonumber \\
n \le \Big( k\log_2\frac{p}{k} \Big) (1-\eta) \implies \mathbb{P}[\mathrm{error}] \to 1 \nonumber
\end{align}
for any $\eta > 0$, where an error means that $\hat{S} \ne S$, with $\hat{S}$ denoting the estimate of $S$. That is, we provide an exact threshold at which the error probability undergoes a phase transition from $0$ to $1$, for a measurement-optimal recovery rule minimizing the required number of measurements regardless of the computational complexity. The same threshold was previously shown to be the best possible when the measurement vectors are designed adaptively, with each vector chosen based on the outcomes of previous tests. Thus, despite being non-adaptive, Bernoulli designs are \emph{asymptotically optimal} when $\theta \le \frac{1}{3}$.

In the case of partial recovery, we show that
\begin{align}
n \le \Big( (1-\alpha^{*}) k\log_2\frac{p}{k} \Big) (1-\eta) \implies \mathbb{P}[\mathrm{error}] \to 1, \nonumber
\end{align}
for any $\eta > 0$, where an error means that $|S \backslash \hat{S}| \ge \alpha^{*}|S|$ or $|\hat{S} \backslash S| \ge \alpha^{*}|S|$, so that $\alpha^{*}$ indicates the allowed ``distance'' from $\hat{S}$ to $S$. Thus, there is little gain to be obtained in the required number of measurements even under this relaxed criterion. Finally, analogous necessary and sufficient conditions are derived for the noisy setting with $Z \sim \mathrm{Bernoulli}(\rho)$, and we again provide exact thresholds for sufficiently small $\theta$.

Our analysis takes a novel approach revealing that the error probability is characterized by tail probabilities containing quantities of the form $\imath(x;y) := \log\frac{P_{Y|X}(y|x)}{P_Y(y)}$, permitting precise characterizations due to the fact that independent summations concentrate around their mean. This approach is a significant departure from widely-used tools such as maximum-likelihood decoding and Fano's inequality, and leads to sharper bounds for the case that $k$ scales with $p$, as well as the stronger statement $\mathbb{P}[\mathrm{error}] \to 1$ (as opposed to merely $\mathbb{P}[\mathrm{error}] \not\to 0$) in the converse bounds.
Noah Stephens-Davidowitz. Discrete Gaussian Sampling Reduces to CVP and SVP
Abstract: The discrete Gaussian $D_{L - t, s}$ is the distribution that assigns to each vector $x$ in a shifted lattice $L-t$ probability proportional to $e^{-\pi ||x||^2/s^2}$. It has long been an important tool in the study of lattices. More recently, algorithms for discrete Gaussian sampling (DGS) have found many applications in computer science. In particular, polynomial-time algorithms for DGS with very high parameters $s$ have found many uses in cryptography and in reductions between lattice problems. And, in the past year, Aggarwal, Dadush, Regev, and Stephens-Davidowitz showed $2^{n+o(n)}$-time algorithms for DGS with a much wider range of parameters and used them to obtain the current fastest known algorithms for the two most important lattice problems, the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP).

Motivated by its increasing importance, we investigate the complexity of DGS itself and its relationship to CVP and SVP. Our first result is a polynomial-time dimension-preserving reduction from DGS to CVP. There is a simple reduction from CVP to DGS, so this shows that DGS is equivalent to CVP. Our second result, which we find to be more surprising, is a polynomial-time dimension-preserving reduction from centered DGS (the important special case when $t = 0$) to SVP. In the other direction, there is a simple reduction from $\gamma$-approximate SVP to centered DGS for any $\gamma = \Omega(\sqrt{n/\log n})$, and we present some (relatively weak) evidence to suggest that this might be the best achievable approximation factor.
Ashwinkumar Badanidiyuru, Christos Papadimitriou, Aviad Rubinstein, Lior Seeman and Yaron Singer. Locally Adaptive Optimization: Adaptive Seeding for Monotone Submodular Functions
Abstract: The Adaptive Seeding problem is an algorithmic challenge motivated by influence maximization in social networks: One seeks to select among certain accessible nodes in a network, and then select, adaptively, among neighbors of those nodes as they become accessible in order to maximize a global objective function. More generally, adaptive seeding is a stochastic optimization framework where the choices in the first stage affect the realizations in the second stage, over which we aim to optimize.

Our main result is a $(1-1/e)^2$-approximation for the adaptive seeding problem for any monotone submodular function. While adaptive policies are often approximated via non-adaptive policies, our algorithm is based on a novel method we call \emph{locally-adaptive} policies. These policies combine a non-adaptive global structure with local adaptive optimizations. This method enables the $(1-1/e)^2$-approximation for general monotone submodular functions and circumvents some of the impossibilities associated with non-adaptive policies.

We also introduce a fundamental problem in submodular optimization that may be of independent interest: given a ground set of elements where every element appears with some small probability, find a set of expected size at most $k$ that has the highest expected value over the realization of the elements. We show a surprising result: there are classes of monotone submodular functions (including coverage) that can be approximated almost optimally as the probability vanishes. For general monotone submodular functions we show via a reduction from \textsc{Planted-Clique} that approximations for this problem are not likely to be obtainable. This optimization problem is an important tool for adaptive seeding via non-adaptive policies, and its hardness motivates the introduction of \emph{locally-adaptive} policies we use in the main result.
Amit Chakrabarti and Anthony Wirth. Incidence Geometries and the Pass Complexity of Semi-Streaming Set Cover
Abstract: Set cover, over a universe of size~$n$, may be modelled as a
data-streaming problem, where the $m$ sets that comprise the instance
are to be read one by one. A semi-streaming algorithm is allowed only
$O(n \cdot \mathrm{poly}(\log n, \log m))$ space to process this stream. For each
$p \ge 1$, we give a very simple deterministic algorithm that makes
$p$ passes over the input stream and returns an
appropriately certified $(p+1)n^{1/(p+1)}$-approximation to the
optimum set cover. More importantly, we proceed to show that this
approximation factor is essentially tight, by showing that a factor
better than $0.99 n^{1/(p+1)}/(p+1)^2$ is unachievable for a $p$-pass
semi-streaming algorithm, even allowing randomisation. In particular,
this implies that achieving a $\Theta(\log n)$-approximation requires
$\Omega(\log n/\log\log n)$ passes, which is tight up to the
$\log\log n$ factor.

These results extend to a relaxation of the set cover problem where we
are allowed to leave an~$\epsilon$ fraction of the universe uncovered: the
tight bounds on the best approximation factor achievable in~$p$ passes
turn out to be $\Theta_p(\min\{n^{1/(p+1)}, \epsilon^{-1/p}\})$.

Our lower bounds are based on a construction of a family of high-rank
incidence geometries, which may be thought of as vast generalisations
of affine planes. This construction, based on algebraic techniques,
appears flexible enough to find other applications and is therefore
interesting in its own right.
Christian Glacet, Avery Miller and Andrzej Pelc. Time vs. Information Tradeoffs for Leader Election in Anonymous Trees
Abstract: Leader election is one of the fundamental problems in distributed computing. It calls for all nodes of a network to agree on a single node, called the leader. If the nodes of the network have distinct labels, then agreeing on a single node means that all nodes have to output the label of the elected leader. If the nodes of the network are anonymous, the task of leader election is formulated as follows: every node $v$ of the network must output a simple path, which is coded as a sequence of port numbers, such that all these paths end at a common node, the leader. In this paper, we study deterministic leader election in anonymous trees.

Our aim is to establish tradeoffs between the allocated time $\tau$ and the amount of information that has to be given {\em a priori} to the nodes to enable leader election in time $\tau$ in all trees for which leader election in this time is at all possible. Following the framework of {\em algorithms with advice}, this information (a single binary string) is provided to all nodes at the start by an oracle knowing the entire tree. The length of this string is called the {\em size of advice}. For a given time $\tau$ allocated to leader election, we give upper and lower bounds on the minimum size of advice sufficient to perform leader election in time $\tau$.

For most values of $\tau$, our upper and lower bounds are either tight up to multiplicative constants, or they differ only by a logarithmic factor. Let $T$ be an $n$-node tree of diameter $diam \leq D$. While leader election in time $diam$ can be performed without any advice, for time $diam-1$ we give tight upper and lower bounds of $\Theta (\log D)$. For time $diam-2$ we give tight upper and lower bounds of $\Theta (\log D)$ for even values of $diam$, and tight upper and lower bounds of $\Theta (\log n)$ for odd values of $diam$. Moving to shorter time, in the interval $[\beta \cdot diam, diam -3]$ for constant $\beta >1/2$, we prove an upper bound of $O(\frac{n\log n}{D})$ and a lower bound of $\Omega(\frac{n}{D})$, the latter being valid whenever $diam$ is odd or when the time is at most $diam-4$. Hence, with the exception of the special case when $diam$ is even and time is exactly $diam-3$, our bounds leave only a logarithmic gap in this time interval. Finally, for time $\alpha \cdot diam$ for any constant $\alpha <1/2$ (except for the case of very small diameters), we again give tight upper and lower bounds, this time $\Theta (n)$.
Gerandy Brito, Ioana Dumitriu, Shirshendu Ganguly, Christopher Hoffman and Linh Tran. Recovery and rigidity in a regular stochastic block model
Abstract: The stochastic block model is a natural and popular model for studying community detection in random networks. Its clustering properties have been extensively studied in the statistics, physics and computer science literature. Recently this area has experienced major mathematical breakthroughs, particularly for the binary (two-community) version. In this paper, we introduce a variant of the binary model which we call the regular stochastic block model (RSBM). We prove rigidity of this model by showing that with high probability an exact recovery of the community structure is possible. Spectral methods exhibit a regime where this can be done efficiently. Moreover, we prove that, in this setting, any suitably good partial recovery can be bootstrapped to obtain a full recovery of the communities.
Ali Kemal Sinop. How to Round Subspaces: A New Spectral Clustering Algorithm
Abstract: A basic problem in spectral clustering is the following. If a solution obtained from the spectral relaxation is close to an integral solution, is it possible to find this integral solution even though the two might be expressed in completely different bases? In this paper, we propose a new spectral clustering algorithm. It can recover a $k$-partition such that the subspace corresponding to the span of its indicator vectors is $O(\sqrt{\epsilon})$-close to the original subspace in spectral norm, with $\epsilon$ being the minimum possible ($\epsilon \le 1$ always). Moreover, our algorithm does not impose any restriction on the cluster sizes. Previously, no algorithm was known which could find a $k$-partition closer than $o(k \epsilon)$.

We present two applications of our algorithm. The first finds a disjoint union of bounded degree expanders which approximate a given graph in spectral norm. The second approximates the sparsest $k$-partition in a graph in which each cluster has expansion at most $\phi_k$, provided $\phi_k \le O(\lambda_{k+1})$, where $\lambda_{k+1}$ is the $(k+1)^{st}$ eigenvalue of the Laplacian matrix. This significantly improves upon the previous algorithms, which required $\phi_k \le O(\lambda_{k+1}/k)$.
Dimitris Achlioptas and Fotis Iliopoulos. Focused Stochastic Local Search and the Lovasz Local Lemma
Abstract: We develop tools for analyzing focused stochastic local search algorithms. These are algorithms which search a state space probabilistically by repeatedly selecting some constraint that is violated in the current state and moving to a random nearby state which, hopefully, addresses the violation without introducing many new ones. A large class of such algorithms arise from the algorithmization of the Lov\'{a}sz Local Lemma, a non-constructive tool for proving the existence of satisfying states. Here we give tools that provide a unified analysis of such algorithms and of many more, expressing them as instances of a general framework.
Maria Chudnovsky, Jan Goedgebeur, Oliver Schaudt and Mingxian Zhong. Obstructions for three-coloring graphs with one forbidden induced subgraph
Abstract: The complexity of coloring graphs without long induced paths is a notorious problem in algorithmic graph theory, an especially intriguing case being that of 3-colorability. So far, not much was known about certification in this context.

We prove that there are only finitely many 4-critical $P_6$-free graphs, and give the complete list that consists of 24 graphs. In particular, we obtain a certifying algorithm for 3-coloring $P_6$-free graphs, which solves an open problem posed by Golovach et al. Here, $P_6$ denotes the induced path on six vertices.

Our result leads to the following dichotomy theorem: if $H$ is a connected graph, then there are finitely many 4-critical $H$-free graphs if and only if $H$ is a subgraph of $P_6$. This answers a question of Seymour.

The proof of our main result involves two distinct automatic proofs, and an extensive structural analysis by hand.
Marek Cygan, Fedor Fomin, Alexander Golovnev, Alexander Kulikov, Ivan Mihajlin, Jakub Pachocki and Arkadiusz Socała. Tight bounds for graph homomorphism and subgraph isomorphism
Abstract: TBA.
Sariel Har-Peled, Haim Kaplan and Micha Sharir. Approximating the k-Level in Three-Dimensional Plane Arrangements
Abstract: Let $H$ be a set of $n$ planes in three dimensions, and let $r<n$ be a parameter. We give a simple alternative proof of the existence of a $(1/r)$-cutting of the first $n/r$ levels of the arrangement $\mathcal{A}(H)$, which consists of $O(r)$ vertical triangular prisms. The proof does not use sampling, and exploits techniques based on planar separators and various structural properties of levels in three-dimensional arrangements and of planar maps. Interestingly, this yields a terrain that approximates the $k$-level and has its vertices on this level.
Richard Peng. Approximate Undirected Maximum Flows in O(m polylog(n)) Time
Abstract: We give the first O(m polylog(n)) time algorithms for approximating maximum flows in undirected graphs and constructing polylog(n)-quality cut-approximating hierarchical tree decompositions. Our algorithm invokes existing algorithms for these two problems recursively while gradually incorporating size reductions. These size reductions are in turn obtained via ultra-sparsifiers, which are key tools in solvers for symmetric diagonally dominant (SDD) linear systems.
Samuel Hopkins, Pravesh Kothari, Aaron Henry Potechin, Prasad Raghavendra and Tselil Schramm. On the Integrality Gap of Degree-4 Sum of Squares for Planted Clique
Abstract: TBA.
William M. Hoza and Leonard J. Schulman. The Adversarial Noise Threshold for Distributed Protocols
Abstract: We consider the problem of implementing distributed protocols, despite adversarial channel errors, on synchronous-messaging networks with arbitrary topology.

In our first result we show that any n-party T-round protocol on an undirected communication network G can be compiled into a robust simulation protocol on a sparse (O(n) edges) subnetwork so that the simulation tolerates an adversarial error rate of Omega(1/n); the simulation has a round complexity of O((m log n / n) T), where m is the number of edges in G. (So the simulation is work-preserving up to a log factor.) The adversary's error rate is within a constant factor of optimal. Given the error rate, the round complexity blowup is within a factor of O(k log n) of optimal, where k is the edge connectivity of G.
We also determine that the maximum tolerable error rate on directed communication networks is Theta(1/s) where s is the number of edges in a minimum equivalent digraph.

Next we investigate adversarial per-edge error rates, where the adversary is given an error budget on each edge of the network. We determine the limit for tolerable per-edge error rates on an arbitrary directed graph to within a factor of 2. However, the construction that approaches this limit has exponential round complexity, so we give another compiler, which transforms T-round protocols into O(mT)-round simulations, and prove that for polynomial-query black box compilers, the per-edge error rate tolerated by this last compiler is within a constant factor of optimal.
János Pach, Natan Rubin and Gábor Tardos. Beyond the Richter-Thomassen Conjecture
Abstract: If two closed Jordan curves in the plane have precisely one point in common, then this point is called a {\em touching point}. All other intersection points are called {\em crossing points}. The main result of this paper is a Crossing Lemma for closed curves: In any family of $n$ pairwise intersecting simple closed curves in the plane, no three of which pass through the same point, the number of crossing points exceeds the number of touching points by a factor of at least $\Omega((\log\log n)^{1/8})$.


As a corollary, we prove the following long-standing conjecture of Richter and Thomassen: The total number of intersection points between any $n$ pairwise intersecting simple closed curves in the plane, no three of which pass through the same point, is at least $(1-o(1))n^2$.
Xi Chen and Jinyu Xie. Tight Bounds for the Distribution-Free Testing of Monotone Conjunctions
Abstract: We improve both upper and lower bounds for the distribution-free testing of monotone conjunctions. Given oracle access to an unknown Boolean function $f : \{0,1\}^n \to \{0,1\}$ and sampling oracle access to an unknown distribution $D$ over $\{0,1\}^n$, we present an $\tilde{O}(n^{1/3}/\epsilon^5)$-query algorithm that tests whether $f$ is a monotone conjunction versus $\epsilon$-far from any monotone conjunction with respect to $D$. This improves the previous best upper bound of $\tilde{O}(n^{1/2}/\epsilon)$ by Dolev and Ron [DR11] when $1/\epsilon$ is small compared to $n$. For some constant $\epsilon_0 > 0$, we also prove a lower bound of $\tilde{\Omega}(n^{1/3})$ for the query complexity, improving the previous best lower bound of $\tilde{\Omega}(n^{1/5})$ by Glasner and Servedio [GS09]. Our upper and lower bounds are tight, up to a polylogarithmic factor, when the distance parameter $\epsilon$ is a constant.
Martin Dyer, Mark Jerrum and Haiko Müller. On the switch Markov chain for perfect matchings
Abstract: We study a simple Markov chain, the switch chain, on the set of all perfect matchings in a bipartite graph. This Markov chain was proposed by Diaconis, Graham and Holmes as a possible approach to a sampling problem arising in Statistics. They considered several classes of graphs, and conjectured that the switch chain would mix rapidly for graphs in these classes. Here we settle their conjecture almost completely. We ask:~for which graph classes is the Markov chain ergodic and for which is it rapidly mixing? We provide a precise answer to the ergodicity question and close bounds on the mixing question. We show for the first time that the mixing time of the switch chain is polynomial in the class of monotone graphs. This class was identified by Diaconis, Graham and Holmes as being of particular interest in the statistical setting.
Jisu Jeong, Eun Jung Kim and Sang-Il Oum. Constructive algorithm for path-width of matroids
Abstract: Given $n$ subspaces of a finite-dimensional vector space over a fixed finite field $\mathbb F$, we wish to find a linear layout $V_1,V_2,\ldots,V_n$ of the subspaces such that $\dim((V_1+V_2+\cdots+V_i) \cap (V_{i+1}+\cdots+V_n))\le k$ for all $i$;
such a linear layout is said to have width at most $k$.
When restricted to $1$-dimensional subspaces, this problem is equivalent to computing the path-width of an $\mathbb F$-represented matroid in matroid theory and computing the trellis-width (or minimum trellis state-complexity) of a linear code in coding theory.

We present a fixed-parameter tractable algorithm to construct a linear layout of width at most $k$, if it exists, for input subspaces of a finite-dimensional vector space over $\mathbb F$. As corollaries, we obtain a fixed-parameter tractable algorithm to produce a path-decomposition of width at most $k$ for an input $\mathbb F$-represented matroid of path-width at most $k$, and a fixed-parameter tractable algorithm to find a linear rank-decomposition of width at most $k$ for an input graph of linear rank-width at most $k$. In both corollaries, no such algorithms were known previously.
Our approach is based on dynamic programming combined with the idea developed by Bodlaender and Kloks (1996) for their work on path-width and tree-width of graphs.

It was previously known that a fixed-parameter tractable algorithm exists for the decision version of the problem for matroid path-width; a theorem by Geelen, Gerards, and Whittle~(2002) implies that for each fixed finite field $\mathbb F$, there are finitely many forbidden $\mathbb F$-representable minors for the class of matroids of path-width at most $k$. An algorithm by Hlin{\v e}n{\'y} (2006) can detect a minor in an input $\mathbb F$-represented matroid of bounded branch-width. However, this indirect approach would not produce an actual path-decomposition even if the complete list of forbidden minors were known.
Our algorithm is the first one to construct such a path-decomposition and does not depend on the finiteness of forbidden minors.
T-H. Hubert Chan and Shaofeng H.-C. Jiang. Reducing Curse of Dimensionality: Improved PTAS for TSP (with Neighborhoods) in Doubling Metrics
Abstract: We consider the Traveling Salesman Problem with Neighborhoods (TSPN) in doubling metrics. The goal is to find a shortest tour that visits each of a given collection of subsets (regions or neighborhoods) in the underlying metric space.


We give a randomized polynomial time approximation scheme (PTAS) when the regions are fat weakly disjoint. This notion of regions was first defined when a QPTAS was given for the problem in [SODA 2010: Chan and Elbassioni]. We combine the techniques of the previous work with the recent PTAS for TSP [STOC 2012: Bartal, Gottlieb and Krauthgamer] to achieve a PTAS for TSPN. However, several non-trivial technical hurdles need to be overcome to apply the PTAS framework to TSPN.

(1) Heuristic to detect sparse instances. In the STOC 2012 paper, a minimum spanning tree heuristic is used to estimate the portion of an optimal tour within some ball. However, for TSPN, it is not known if an optimal tour would use points inside the ball to visit regions that intersect the ball.


(2) Partially cut regions in the recursion. After a sparse ball is identified by the heuristic, the PTAS framework for TSP uses a dynamic program to solve the instance restricted to the sparse ball, and recursively solves the remaining instance. However, for TSPN, it is an important issue to decide whether each region partially intersecting the sparse ball should be solved in the sparse instance or considered in the remaining instance.

Surprisingly we show that both issues can be resolved by conservatively making the ball in question responsible for all intersecting regions. In particular, a sophisticated charging argument is needed to bound the cost of combining tours in the recursion.

Moreover, more refined procedures are used to improve the dependence of the running time on the doubling dimension $k$
from the previous $\exp[O(1)^{k^2}]$ (even for just TSP) to $\exp[O(1)^{O(k \log k)}]$.
Chidambaram Annamalai. Finding perfect matchings in bipartite hypergraphs
Abstract: Haxell’s condition is a natural hypergraph analog of Hall’s condition, which is a well-known necessary and sufficient condition for a bipartite graph to admit a perfect matching. That is, when Haxell’s condition holds, it forces the existence of a perfect matching in the bipartite hypergraph. Unlike in graphs, however, there is no known polynomial time algorithm to find the hypergraph perfect matching that is guaranteed to exist when Haxell’s condition is satisfied.
We prove the existence of an efficient algorithm to find perfect matchings in bipartite hypergraphs whenever a stronger version of Haxell’s condition holds. Our algorithm can be seen as a generalization of the classical Hungarian algorithm for finding perfect matchings in bipartite graphs. The techniques we use to achieve this result could be of use more generally in other combinatorial problems on hypergraphs where disjointness structure is crucial, e.g., Set Packing.
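For comparison (standard statement, our addition), Hall's condition for a bipartite graph with parts $A$ and $B$, $|A| = |B|$, reads
\begin{equation}
|N(S)| \ge |S| \quad \text{for all } S \subseteq A, \nonumber
\end{equation}
and is necessary and sufficient for a perfect matching; Haxell's condition is its hypergraph analog, with a quantitatively stronger requirement in place of the neighborhood bound.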
Sixia Chen, Matthew Dippel, Alexander Russell, Abhishek Samanta and Ravi Sundaram. How to Play Multichannel Rendezvous Games with Public Randomness
Abstract: We study a basic discovery problem arising in multichannel wireless
settings: Two players are equipped with radios that may be tuned to
channels in potentially different subsets, $S_1$ and $S_2$, of a
universe of $n$ channels. In order to discover each other, the
players must tune to the same channel at the same time. The behavior
of each player is determined by a potentially randomized channel
schedule, which determines which channel the player tunes to at each
positive time $t$; the challenge is to guarantee that the players
discover each other as quickly as possible. This is precisely the
``blind rendezvous problem'' studied in the popular cognitive radio
setting.

The problem is complicated by the fact that there is no notion of
identity: the channel schedule of a player may depend only on the
player's channel subset; furthermore, the players may have commenced
their schedules at different times and may be brought together at
arbitrary points in their schedules. In prior
work~\cite{CRSS:Deterministic} it was shown that
\emph{deterministic} schedules can guarantee rendezvous in time
$O(|S_1|\cdot|S_2|\log\log n)$.
These schedules require quadratic time to rendezvous, which is
impractical for typical applications with large channel universes.

In this paper, we beat this quadratic barrier by utilizing a public
source of randomness---a \emph{random beacon} that outputs a random
bit during each time step---to achieve rendezvous in expected time
\[
O\left(\log n + \frac{|S_1 \cup S_2|}{|S_1 \cap S_2|}\right)\,.
\]
As it happens, existing practical environments can indeed yield such
a public source of randomness. Observe that this is always at most
linear in $n$ and offers an exponential speedup when channel subsets
and their intersection are constant fractions of the universe. We
achieve these results by combining minwise independence with a novel
pseudo-random tool, the \emph{synchronizing disperser}. Typical
dispersers permit efficient use of randomness for hitting witness
sets; our synchronizing disperser does more, permitting two nodes
with differing views of the shared randomness (on account of their
different starting times) to extract common pseudorandom samples
efficiently. Our construction draws heavily on the expansion
properties of Cayley graphs and may be of independent
interest. Finally, we counterbalance this result by establishing an
$\Omega(|S_1|\cdot|S_2|)$ lower bound on the time to rendezvous with
any amount of private randomness but no public randomness.
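To see where the $|S_1 \cup S_2| / |S_1 \cap S_2|$ term comes from, here is a toy calculation of ours (not the paper's construction, which must additionally handle unsynchronized starting times via the synchronizing disperser): if in each step the beacon supplies a fresh uniformly random ordering of the $n$ channels and each player tunes to the first channel of its subset under that ordering, then the players meet in that step exactly when the minimum of $S_1 \cup S_2$ falls in $S_1 \cap S_2$, which happens with probability
\begin{equation}
\frac{|S_1 \cap S_2|}{|S_1 \cup S_2|}\,, \nonumber
\end{equation}
so the expected number of steps until rendezvous is $|S_1 \cup S_2| / |S_1 \cap S_2|$. This is exactly the minwise-independence idea mentioned above.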
Niv Buchbinder and Moran Feldman. Deterministic Algorithms for Submodular Maximization Problems
Abstract: Randomization is a fundamental tool used in many theoretical and practical areas of computer science. We study here the role of randomization in the area of submodular function maximization. In this area most algorithms are randomized, and in almost all cases the approximation ratios obtained by current randomized algorithms are superior to the best results obtained by known deterministic algorithms. Derandomization of algorithms for general submodular function maximization seems hard since the access to the function is done via a value oracle. This makes it hard, for example, to apply standard derandomization techniques such as conditional expectations. Therefore, an interesting fundamental problem in this area is whether randomization is inherently necessary for obtaining good approximation ratios.

In this work we give evidence that randomization is not necessary for obtaining good algorithms by presenting a new technique for derandomization of algorithms for submodular function maximization. Our high level idea is to maintain explicitly a (small) distribution over the states of the algorithm, and carefully update it using marginal values obtained from an extreme point solution of a suitable linear formulation. We demonstrate our technique on two recent algorithms for unconstrained submodular maximization and for maximizing submodular function subject to a cardinality constraint. In particular, for unconstrained submodular maximization we obtain an optimal deterministic 1/2-approximation showing that randomization is unnecessary for obtaining optimal results for this setting.
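For background (this is the earlier technique that the derandomization above improves on, not the paper's new method): the deterministic ``double greedy'' of Buchbinder, Feldman, Naor and Schwartz makes one pass over the ground set and achieves a 1/3-approximation for unconstrained nonnegative submodular maximization; its randomized variant achieves the optimal 1/2, which is exactly the gap the new technique closes. A minimal Python sketch, with f given as a value oracle:

def double_greedy(f, ground_set):
    # Deterministic double greedy: 1/3-approximation for unconstrained
    # nonnegative submodular maximization (Buchbinder et al.).
    # f is a value oracle mapping frozenset -> nonnegative number.
    X, Y = set(), set(ground_set)
    for e in ground_set:
        a = f(frozenset(X | {e})) - f(frozenset(X))   # gain of adding e to X
        b = f(frozenset(Y - {e})) - f(frozenset(Y))   # gain of removing e from Y
        if a >= b:
            X.add(e)       # commit e
        else:
            Y.remove(e)    # discard e
    return X               # X == Y at the end of the pass

# Example: the cut function of a small graph (cut functions are submodular).
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
cut = lambda S: sum(1 for u, v in edges if (u in S) != (v in S))
print(double_greedy(cut, range(4)))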
Andris Ambainis, Alexander Belov, Oded Regev and Ronald de Wolf. Efficient Quantum Algorithms for (Gapped) Group Testing and Junta Testing
Abstract: In the k-junta testing problem, a tester has to efficiently decide whether a given function f:{0,1}^n-->{0,1} is a k-junta (i.e., depends on at most k of its input bits) or is epsilon-far from any k-junta. Our main result is a quantum algorithm for this problem with query complexity O(sqrt{k/epsilon}) and time complexity O(n sqrt{k/epsilon}). This quadratically improves over the query complexity of the previous best quantum junta tester, due to Atici and Servedio. Our tester is based on a new quantum algorithm for a gapped version of the combinatorial group testing problem, with an up to quartic improvement over the query complexity of the best classical algorithm. For our upper bound on the time complexity we give a near-linear time implementation of a shallow variant of the quantum Fourier transform over the symmetric group, similar to the Schur-Weyl transform. We also prove a lower bound of Omega(k^{1/3}) queries for junta-testing (for constant epsilon).
Michael Kerber, Don Sheehy and Primoz Skraba. Persistent Homology and Nested Dissection
Abstract: Nested dissection exploits underlying topology to do matrix reductions while persistent homology exploits matrix reductions to reveal underlying topology. It seems natural that one should be able to combine these techniques to beat the currently best bound of matrix multiplication time for computing persistent homology. However, nested dissection works by fixing a reduction order, whereas persistent homology generally constrains the ordering. Despite this obstruction, we show that it is possible to combine these two theories. This shows that one can improve the computation of persistent homology of a filtration by exploiting information about the underlying space. It gives reasonable geometric conditions under which one can beat the matrix multiplication bound for persistent homology.
Kunal Agrawal, Jing Li, Kefu Lu and Benjamin Moseley. Scheduling Parallel DAG Jobs Online to Minimize Average Flow Time
Abstract: In this work, we study scheduling parallelizable jobs online with the objective of minimizing average flow time. Previous work has focused on a model of parallelizability known as the arbitrary speed-up curves setting, where a scalable algorithm is known. However, a model of parallelizability that is widely used by practitioners is the DAG model, and little is known for this model in the online setting. The DAG model and the speed-up curves model are incomparable, and algorithmic results for one do not immediately imply results for the other. Previous work left open the question of whether an online algorithm can be $O(1)$-competitive with $O(1)$-speed for average flow time in the DAG setting. In this work, we answer this question positively by giving a scalable algorithm that is $(1+\eps)$-speed $O(\frac{1}{\eps^3})$-competitive for any $\eps >0$. We further introduce the \emph{first} greedy algorithm for scheduling parallelizable jobs. Greedy algorithms are among the most useful in practice due to their simplicity. We show that this algorithm is $(2+\eps)$-speed $O(\frac{1}{\eps^3})$-competitive for any $\eps >0$.
Christian Wulff-Nilsen. Approximate Distance Oracles for Planar Graphs with Improved Query Time-Space Tradeoff
Abstract: We consider approximate distance oracles for edge-weighted n-vertex undirected planar graphs. Given epsilon > 0, we present a (1+epsilon)-approximate distance oracle with O_{epsilon}(n(loglog n)^2) space and O_{epsilon}((loglog n)^3) query time. For fixed epsilon, this improves the previous best product of query time and space of the oracles of Thorup (FOCS 2001, J. ACM 2004) and Klein (SODA 2002) from O(n log n) to O(n(loglog n)^5).
Zeyuan Allen-Zhu, Yin Tat Lee and Lorenzo Orecchia. Using Optimization to Obtain a Width-Independent, Parallel, Simpler, and Faster Positive SDP Solver
Abstract: We study the design of polylogarithmic depth algorithms for approximately solving packing and covering semidefinite programs (or positive SDPs for short). This is a natural SDP generalization of the well-studied positive LP problem.

Although positive LPs can be solved in polylogarithmic depth while using only $\log^2 n/\eps^3$ parallelizable iterations [3], the best known positive SDP solvers due to Jain and Yao [18] require $\log^{14} n/\eps^{13}$ parallelizable iterations. Several alternative solvers have been proposed to reduce the exponents in the number of iterations [19, 29]. However, the correctness of the convergence analyses in these works has been called into question [29], as they both rely on algebraic monotonicity properties that do not generalize to matrix algebra.

In this paper, we propose a very simple algorithm based on the optimization framework proposed in [3] for LP solvers. Our algorithm only needs $\log^2 n/\eps^3$ iterations, matching that of the best LP solver. To surmount the obstacles encountered by previous approaches, our analysis requires a new matrix inequality that extends the Lieb--Thirring inequality, and a sign-consistent, randomized variant of the gradient truncation technique proposed in [2, 3].
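For concreteness, one standard normalized form of the packing/covering SDP pair (our phrasing; normalizations vary across the cited works) is
\[
\max\Big\{ \mathbf{1}^{\top} y \;:\; \textstyle\sum_i y_i A_i \preceq I,\ y \geq 0 \Big\}
\qquad\text{and}\qquad
\min\Big\{ \operatorname{Tr}(X) \;:\; \operatorname{Tr}(A_i X) \geq 1 \ \forall i,\ X \succeq 0 \Big\},
\]
where each $A_i \succeq 0$; positive LPs are recovered as the special case in which all the $A_i$ are diagonal.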
Guillaume Chapuy and Guillem Perarnau. Connectivity in bridge-addable graph classes: the McDiarmid-Steger-Welsh conjecture
Abstract: The study of typical properties of random graphs is of particular importance for the theoretical analysis of complex networks. In this field, many models of randomness (such as Erd\H{o}s-R\'enyi random graphs, random planar graphs, and preferential attachment models) have been successfully analyzed thanks to the fact that their underlying structure enables one to perform explicit computations of some observables. Another approach, pioneered by McDiarmid, Steger and Welsh (2005), is to consider graphs taken uniformly from an abstract graph class, assuming only some global property of the class but without fully specifying it. Although exact computations are no longer possible, results obtained in this setup are arguably very robust, since they apply universally to many different models of random graphs.

The foundational and most studied problem in this second approach is a conjecture of these authors on bridge-addable classes that we prove in this paper. A class of graphs is \emph{bridge-addable} if any graph obtained by adding an edge between two connected components of a graph in the class is also in the class. Examples of bridge-addable classes include forests, planar graphs, graphs with bounded treewidth, or graphs excluding any 2-connected minor. We prove that a random graph from a bridge-addable class is connected with probability at least $e^{-\frac{1}{2}} + o(1)$, when its number of vertices tends to infinity.

This lower bound is tight since it is attained by forests. The best previously known constants were $e^{-1}$, $e^{-0.7983}$ and $e^{-2/3}$, proved respectively by McDiarmid, Steger and Welsh, by Balister, Bollob\'as and Gerke, and by Norin.
Zeyuan Allen-Zhu, Aditya Bhaskara, Silvio Lattanzi, Vahab Mirrokni and Lorenzo Orecchia. Expanders via Local Edge Flips
Abstract: Designing distributed and scalable algorithms to improve network connectivity is a central topic in peer-to-peer networks. In this paper we focus on the following well-known problem: given an $n$-node $d$-regular network, we want to design a decentralized, local algorithm that transforms the graph into one that has good connectivity properties (low diameter, expansion, etc.). To this end, Mahlmann and Schindelhauer introduced the random ``flip'' transformation, where in each time step, a random pair of vertices that have an edge decide to `swap a neighbor'. They conjectured that performing $\poly(d) \times n\log n$ such flips at random would convert any connected $d$-regular graph into a $d$-regular expander graph, with high probability. However, the best known upper bound for the number of steps is roughly $O(n^{17} d^{23})$, obtained via a delicate Markov chain comparison argument.

Our main result is to prove that a natural instantiation of the random flip produces an expander in at most $O(n^2 d^2 \sqrt{\log n})$ steps, with high probability. Our argument uses a potential-function analysis based on the matrix exponential, together with the recent beautiful results on the higher-order Cheeger inequality of graphs. We also show that our technique can be used to analyze another well-studied random process known as the `random switch', and show that it produces an expander in $O(n d)$ steps for $d=\Omega(\log n)$, with high probability.
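As a sketch, here is one flip step under our reading of the abstract (the exact acceptance rule of the Mahlmann and Schindelhauer transformation may differ in details): the endpoints of a random edge $(u,v)$ each select another neighbor and exchange them, provided the graph stays simple. Every degree is preserved and the edge $(u,v)$ itself is untouched, so connectivity is maintained.

import random

def flip_step(adj):
    # adj: dict mapping vertex -> set of neighbors (simple graph).
    # One random neighbor-swap flip; preserves all vertex degrees.
    u = random.choice(list(adj))
    if not adj[u]:
        return False
    v = random.choice(list(adj[u]))        # random edge (u, v)
    ca = list(adj[u] - {v})                # neighbors u could give away
    cb = list(adj[v] - {u})                # neighbors v could give away
    if not ca or not cb:
        return False
    a, b = random.choice(ca), random.choice(cb)
    # Reject swaps that would create a self-loop or a parallel edge.
    if a == b or b in adj[u] or a in adj[v]:
        return False
    adj[u].discard(a); adj[a].discard(u)   # u gives a to v
    adj[v].discard(b); adj[b].discard(v)   # v gives b to u
    adj[u].add(b); adj[b].add(u)
    adj[v].add(a); adj[a].add(v)
    return True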
Boris Bukh and Venkatesan Guruswami. An improved bound on fraction of correctable deletions
Abstract: We consider codes over fixed alphabets against worst-case symbol
deletions. For any fixed $k \ge 2$, we construct a
family of codes over alphabet of size $k$ with positive rate,
which allow efficient recovery from a worst-case deletion
fraction approaching $1-\frac{2}{k+1}$. In particular, for binary
codes, we are able to recover a fraction of deletions approaching
$1/3$. Previously, even non-constructively the largest deletion
fraction known to be correctable with positive rate was
$1-\Theta(1/\sqrt{k})$, and around $0.17$ for the binary case.

Our result pins down the largest fraction of correctable deletions for
$k$-ary codes as $1-\Theta(1/k)$, since $1-1/k$ is an upper bound even
for the simpler model of erasures where the locations of the missing
symbols are known.

Closing the gap between $1/3$ and $1/2$ for the limit of worst-case
deletions correctable by binary codes remains a tantalizing open
question.
Pankaj K. Agarwal, Kyle Fox and Oren Salzman. An Efficient Algorithm for Computing High Quality Paths amid Polygonal Obstacles
Abstract: We study a path-planning problem amid a set O of obstacles in R^2, in which we wish to compute a short path between two points while also maintaining a high clearance from O; the clearance of a point is its distance from a nearest obstacle in O. Specifically, the problem asks for a path minimizing the reciprocal of the clearance integrated over the length of the path. We present the first polynomial-time approximation scheme for this problem. Let n be the total number of obstacle vertices and let ε ∈ (0, 1]. Our algorithm computes in time O(n^2/ε^2 log n/ε) a path of total cost at most (1+ε) times the cost of the optimal path.
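In symbols, the cost of a path $\gamma$ is
\[
\mathrm{cost}(\gamma) \;=\; \int_{\gamma} \frac{ds}{\mathrm{clearance}(\gamma(s))},
\]
so a path is penalized both for being long and for passing close to the obstacles in O.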
Rasmus Pagh. Locality-sensitive Hashing without False Negatives
Abstract: We consider a new construction of locality-sensitive hash functions for Hamming space that is *covering* in the sense that it is guaranteed to produce a collision for every pair of vectors within a given radius r.
The construction is efficient in the sense that the expected number of hash collisions between vectors at distance cr, for a given c>1, comes close to that of the best possible data independent LSH without the covering guarantee, namely, the seminal LSH construction of Indyk and Motwani (FOCS '98).
The efficiency of the new construction essentially *matches* their bound if cr = log(n)/k, where n is the number of points in the data set and k>0 is integer, and differs from it by at most a factor ln(4)<1.4 in the exponent for general values of cr. As a consequence, LSH-based similarity search in Hamming space can avoid the problem of false negatives at little or no cost in efficiency.
Guy Kindler, Alexandra Kolla and Luca Trevisan. Approximation of non-boolean 2CSP
Abstract: We develop a polynomial-time $\Omega\left ( \frac 1R \log R \right)$-approximation algorithm for Max 2CSP-$R$, the problem in which we are given a collection of constraints, each involving two variables, where each variable ranges over a set of size $R$, and we want to find an assignment to the variables that maximizes the number of satisfied constraints. Assuming the Unique
Games Conjecture, this is the best possible approximation up to constant factors.

Previously, a $1/R$-approximate algorithm was known, based on linear programming. Our algorithm is based on semidefinite programming. The Semidefinite Program that we use has an almost-matching integrality gap.

For the more general Max $k$CSP-$R$, in which each constraint involves $k$ variables, each ranging over a set of size $R$, it was known that the best possible approximation is of the order of $k/R^{k-1}$, provided that $k$ is sufficiently large compared to $R$; our algorithm shows that the bound $k/R^{k-1}$ is not tight for $k=2$.
Christos Papadimitriou, George Pierrakos, Christos-Alexandros Psomas and Aviad Rubinstein. On the Complexity of Dynamic Mechanism Design
Abstract: We introduce a dynamic mechanism design problem in which the designer wants to offer for sale an item to an agent, and another item to the same agent at some point in the future. The agent’s joint distribution of valuations for the two items is known, and the agent knows the valuation for the current item (but not for the one in the future). The designer seeks to maximize expected revenue, and the auction must be deterministic, truthful, and ex post individually rational. The optimum mechanism involves a protocol whereby the seller elicits the buyer’s current valuation, and based on the bid makes two take-it-or-leave-it offers, one for now and one for the future. We show that finding the optimum deterministic mechanism in this situation — arguably the simplest meaningful dynamic mechanism design problem imaginable — is NP-hard. We also prove several positive results, among them a polynomial linear programming-based algorithm for the optimum randomized auction (even for many bidders and periods), and we show strong separations in revenue between non-adaptive, adaptive, and randomized auctions, even when the valuations in the two periods are uncorrelated. Finally, for the same problem in an environment in which contracts cannot be enforced, and thus perfection of equilibrium is necessary, we show that the optimum randomized mechanism requires multiple rounds of cheap talk-like interactions.
David Eppstein. Treetopes and their Graphs
Abstract: We define treetopes, a generalization of the three-dimensional roofless polyhedra (Halin graphs) to arbitrary dimensions. Like roofless polyhedra, treetopes have a designated base facet such that every face of dimension greater than one intersects the base in more than one point. We prove an equivalent characterization of the 4-treetopes using the concept of clustered planarity from graph drawing, and we use this characterization to recognize the graphs of 4-treetopes and realize these graphs as polytopes in polynomial time. This result provides the first class of non-simple 4-polytopes, other than pyramids, that can be recognized efficiently from their graphs.
Satoru Iwata, Shin-Ichi Tanigawa and Yuichi Yoshida. Improved Approximation Algorithms for k-Submodular Function Maximization
Abstract: This paper presents a polynomial-time $1/2$-approximation algorithm for maximizing nonnegative $k$-submodular functions. This improves upon the previous $\max\{1/3, 1/(1+a)\}$-approximation by Ward and Zivny, where $a=\max\{1, \sqrt{(k-1)/4}\}$. We also show that for monotone $k$-submodular functions there is a polynomial-time $k/(2k-1)$-approximation algorithm, while for any $\varepsilon>0$ a $((k+1)/2k+\varepsilon)$-approximation algorithm for maximizing monotone $k$-submodular functions would require exponentially many queries. In particular, our hardness result implies that our algorithms are asymptotically tight.
We also extend the approach to provide constant factor approximation algorithms for maximizing skew-bisubmodular functions, which were recently introduced as generalizations of bisubmodular functions.
Tsz Chiu Kwok, Lap Chi Lau and Yin Tat Lee. Improved Cheeger's Inequality and Analysis of Local Graph Partitioning using Vertex Expansion and Expansion Profile
Abstract: We prove two generalizations of Cheeger's inequality. The first shows that the second eigenvalue is at least the product of the edge expansion of G and the robust vertex expansion of G. The second shows that the second eigenvalue is at least the product of the edge expansion of G and the k-way expansion of G. These show that the spectral partitioning algorithm has better performance guarantees when the robust vertex expansion is large (e.g. planted random instances) or the k-way expansion is large (instances with few disjoint non-expanding sets). Both bounds are tight up to a constant factor.
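For context, the classical Cheeger inequality relates the second smallest eigenvalue $\lambda_2$ of the normalized Laplacian to the edge expansion (conductance) $\phi(G)$:
\[
\frac{\lambda_2}{2} \;\leq\; \phi(G) \;\leq\; \sqrt{2\lambda_2}.
\]
The second inequality is equivalent to $\lambda_2 \geq \phi(G)^2/2$; the generalizations above strengthen this lower bound to one of the form $\lambda_2 \gtrsim \phi(G) \cdot \psi(G)$, where $\psi$ is the robust vertex expansion or the $k$-way expansion.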

Our approach is based on a method to analyze solutions of Laplacian systems, and this allows us to extend the results to local graph partitioning algorithms. In particular, we show that our approach can be used to analyze personal pagerank vectors, and to give a local graph partitioning algorithm for the small-set expansion problem with performance guarantees similar to the generalizations of Cheeger's inequality. We also present a spectral approach to prove similar results for the truncated random walk algorithm. These show that local graph partitioning algorithms almost match the performance of the spectral partitioning algorithm, with the additional advantages that they apply to the small-set expansion problem and their running time could be sublinear. Our techniques provide common approaches to analyze the spectral partitioning algorithm and local graph partitioning algorithms.
Dimitris Achlioptas, S. Hamed Hassani, Nicolas Macris and Rudiger Urbanke. Bounds for Random Constraint Satisfaction Problems via Spatial Coupling
Abstract: We report on a novel technique called spatial coupling and its application in the analysis of random constraint satisfaction problems (CSP). Spatial coupling was invented as an engineering construction in the area of error correcting codes where it has resulted in efficient capacity-achieving codes for a wide range of channels. However, this technique is not limited to problems in communications, and can be applied in the much broader context of graphical models. We describe here a general methodology for applying spatial coupling to random constraint satisfaction problems and obtain lower bounds for their (coarse) satisfiability threshold. The main idea is to construct a distribution of geometrically structured random K-SAT instances - namely the spatially coupled ensemble - which has the same (coarse) satisfiability threshold, and is at the same time algorithmically easier to solve. Then by running well-known algorithms on the spatially coupled ensemble we obtain a lower bound on the (coarse) satisfiability threshold of the original ensemble. The method is versatile because one can choose the CSP, there is a certain amount of freedom in the construction of the bigger ensemble, and also in the choice of the algorithm. In this work we focus on random K-SAT but we have also checked that the method is successful for Coloring, NAE-SAT and XOR-SAT. We choose Unit Clause propagation for the algorithm which is analyzed over the spatially coupled instances. For K=3, for instance, our lower bound is equal to 3.67 which is greater than the current bounds in the literature. Similarly, for graph 3-colorability we get a bound of 4.44 which is also greater than the current bounds in the literature.
Sina Dehghani, Soheil Ehsani, Mohammadtaghi Hajiaghayi and Vahid Liaghat. Online Degree-Bounded Steiner Network Design
Abstract: We initiate the study of degree-bounded network design problems in the online setting.
The degree-bounded Steiner tree problem -- which
asks for a subgraph of minimum maximum degree that connects a
given set of vertices -- is perhaps one of the
most representative problems in this class. This paper
deals with its well-studied generalization called the
degree-bounded Steiner forest problem where the connectivity
demands are represented by vertex pairs that need to be individually connected.
In the classical online model, the input graph is
given offline but the demand pairs arrive sequentially
in online steps. The selected subgraph starts off as the
empty subgraph, but has to be augmented to satisfy
the new connectivity constraint in each online step.
The goal is to be competitive against an adversary that knows the input in advance.

The standard techniques for solving degree-bounded problems often fall in the category of \textit{iterative} and \textit{dependent} rounding techniques. Unfortunately, these rounding methods are inherently difficult to adapt to an online setting since the underlying fractional solution may change dramatically between rounding steps. Indeed, this might be the very reason that, despite many advances in the online network design paradigm in the past two decades, the natural family of degree-bounded problems has remained wide open.

In this paper, we design an intuitive greedy-like algorithm that achieves a competitive
ratio of $O(\log n)$ where $n$ is the number of vertices.
We show that no (randomized) algorithm can achieve a (multiplicative)
competitive ratio $o(\log n)$; thus our result is asymptotically tight.
We further show strong hardness results for the group Steiner tree and
the edge-weighted variants of degree-bounded connectivity problems.

F{\"u}rer and Raghavachari resolved the offline variant of degree-bounded
Steiner forest in their paper in SODA'92. Since then, the family of
degree-bounded network design problems has been extensively studied in the
literature resulting in the development of many interesting tools and
numerous papers on the topic.
We hope that our approach and its dual analysis pave the way for solving the
\textit{online} variants of the classical problems in this family.
John Iacono and Stefan Langerman. Weighted dynamic finger in binary search trees
Abstract: It is shown that the online binary search tree data structure GreedyASS performs asymptotically as well on a sufficiently long sequence of searches as any static binary search tree where each search begins from the previous search (rather than the root). This bound is known to be equivalent to assigning each item $i$ in the search tree a positive weight $w_i$ and bounding the search cost by $O\left(1+ \log \frac{\displaystyle \sum_{\min(s_{i-1},s_i) \leq x \leq \max(s_{i-1},s_i)}w_x}{\displaystyle \min(w_{s_i},w_{s_{i-1}})} \right)$ amortized. This result is the strongest finger-type bound proven for binary search trees. By setting the weights to be equal, one observes that our bound implies the dynamic finger bound. Compared to the previous proof of the dynamic finger bound for splay trees, our result is significantly shorter, stronger, simpler, and has reasonable constants.
Michael Dinitz and Zeyu Zhang. Approximating Low-Stretch Spanners
Abstract: Despite significant recent progress on approximating graph spanners (subgraphs which approximately preserve distances), there are still several large gaps in our understanding. We give new results for two of them: approximating basic $k$-spanner (particularly for small $k$), and the dependence on $f$ when approximating $f$-fault tolerant spanners.

For basic $(2k-1)$-spanner, we first show an integrality gap for the natural flow-based LP (the main tool in almost all nontrivial spanner approximations) which nearly matches the trivial $O(n^{1/k})$-approximation. For small stretch there is still room for improvement, though, so we also design an $\tO(n^{1/3})$-approximation for $4$-spanner (both basic and directed). This was the last value of $k$ for which only an $O(\sqrt{n})$-approximation was known for basic $k$-spanner, and thus implies that for any $k$ the approximation ratio is at most $\tO(n^{1/3})$.

For $f$-fault tolerant spanners, we show that in the small-stretch setting ($k \in \{3,4\}$) it is possible to entirely remove the dependence on $f$ from the approximation ratio, at the cost of moving to bicriteria guarantees. The previous best dependence on $f$ was either linear (in the stretch $3$ case) or exponential (for stretch $4$).
Daniel Lokshtanov, Marcin Pilipczuk and Erik Jan van Leeuwen. Independence and Efficient Domination on $P_6$-free Graphs
Abstract: In the Independent Set problem, the input is a graph $G$, every vertex has a non-negative integer weight, and the task is to find a set $S$ of pairwise non-adjacent vertices, maximizing the total weight of the vertices in $S$. We give an $n^{O(\log^2 n)}$ time algorithm for this problem on graphs excluding the path $P_6$ on 6 vertices as an induced subgraph. Currently, there is no constant $k$ known for which Independent Set on $P_k$-free graphs becomes NP-complete, and our result implies that if such a $k$ exists, then $k > 6$ unless all problems in NP can be decided in (quasi)polynomial time.

Using the combinatorial tools that we develop for the above algorithm, we also give a polynomial-time algorithm for Efficient Dominating Set on $P_6$-free graphs. In this problem, the input is a graph $G$, every vertex has an integer weight, and the objective is to find a set $S$ of maximum weight such that every vertex in $G$ has exactly one vertex in $S$ in its closed neighborhood, or to determine that no such set exists. Prior to our work, the class of $P_6$-free graphs was the only class of graphs defined by a single forbidden induced subgraph on which the computational complexity of Efficient Dominating Set was unknown.
Tasuku Soma and Yuichi Yoshida. Non-convex Compressed Sensing with the Sum-of-Squares Method
Abstract: We consider stable signal recovery in the $\ell_q$ quasi-norm for $0 < q \leq 1$. In this problem, given a measurement vector $y = Ax$ for some unknown signal vector $x \in \mathbb{R}^n$ and a known matrix $A \in \mathbb{R}^{m \times n}$, we want to recover $z \in \mathbb{R}^n$ with $\|x - z\|_q = O(\|x - x^*\|_q)$ from the measurement vector, where $x^*$ is the $s$-sparse vector closest to $x$ in the $\ell_q$ quasi-norm.
Although a small value of $q$ is favorable for measuring the distance to sparse vectors, previous methods for $q < 1$ involve $\ell_q$ quasi-norm minimization, which is computationally intractable. In this paper, we overcome this issue by using the sum-of-squares method, and give the first polynomial-time stable recovery scheme for a large class of matrices $A$ in the $\ell_q$ quasi-norm for any fixed constant $0 < q \leq 1$.
Yossi Azar, Amir Epstein, Lukasz Jez and Adi Vardi. Make-to-Order Integrated Scheduling and Distribution
Abstract: Production and distribution are fundamental operational functions in supply chains.
The main challenge is to design algorithms that optimize operational performance by jointly scheduling production and delivery of customer orders. In this paper we study a model of scheduling customer orders on multiple identical machines and their distribution
to customers afterwards.
The goal is to minimize the total time from release to distribution plus total distribution cost to the customers.
We design the first poly-logarithmic competitive algorithm for the problem (previously only linear competitive algorithms were known).
Our model generalizes two fundamental problems: scheduling of jobs on multiple identical machines
(where the goal function is to minimize the total flow time) as well as the TCP Acknowledgment problem.
John Fearnley and Rahul Savani. The Complexity of All-switches Strategy Improvement
Abstract: Strategy improvement is a widely-used and well-studied class of algorithms for solving graph-based infinite games. These algorithms are parametrized by a switching rule, and one of the most natural rules is ``all switches'', which switches as many edges as possible in each iteration. Continuing a recent line of work, we study all-switches strategy improvement from the perspective of computational complexity. We consider two natural decision problems, both of which have as input a game G, a starting strategy s, and an edge e. The problems are: 1. The edge switch problem, namely, is the edge e ever switched by all-switches strategy improvement when it is started from s on game G? 2. The optimal strategy problem, namely, is the edge e used in the final strategy that is found by strategy improvement when it is started from s on game G? We show PSPACE-completeness of the edge switch problem and optimal strategy problem for the following settings: Parity games with the discrete strategy improvement algorithm of Voge and Jurdzinski; mean-payoff games with the gain-bias algorithm; and discounted-payoff games and simple stochastic games with their standard strategy improvement algorithms. We also show PSPACE-completeness of an analogous problem to edge switch for the bottom-antipodal algorithm for Acyclic Unique Sink Orientations on Cubes.
Badih Ghazi, Ilan Komargodski, Pravesh Kothari and Madhu Sudan. Communication with Contextual Uncertainty
Abstract: We introduce a simple model illustrating the role of context in communication and the challenge posed by uncertainty of knowledge of context. We consider a variant of distributional communication complexity where Alice gets some information x and Bob gets y, where (x,y) is drawn from a known distribution, and Bob wishes to compute some function g(x,y) (with high probability over (x,y)). In our variant, Alice does not know g, but only knows some function f which is an approximation of g. Thus, the function being computed forms the context for the communication, and knowing it imperfectly models (mild) uncertainty in this context.
A naive solution would be for Alice and Bob to first agree on some common function h that is close to both f and g and then use a protocol for h to compute h(x,y). We show that any such agreement leads to a large overhead in communication, ruling out such a universal solution.
In contrast, we show that if g has a one-way communication protocol with complexity k in the standard setting, then it has a communication protocol with complexity O(k(1+I)) in the uncertain setting, where I denotes the mutual information between x and y. In the particular case where the input distribution is a product distribution, the protocol in the uncertain setting only incurs a constant factor blow-up in communication and error.
Furthermore, we show that the dependence on the mutual information I is required. Namely, we construct a class of functions along with a non-product distribution over (x,y) for which the communication complexity is a single bit in the standard setting but at least Omega(\sqrt{n}) bits in the uncertain setting.
David Peleg and Shay Solomon. Dynamic (1 + \eps)-Approximate Matchings: A Density-Sensitive Approach
Abstract: Approximate matchings in fully dynamic graphs have been intensively studied in recent years.
Gupta and Peng [FOCS'13] presented a deterministic algorithm for maintaining fully dynamic $(1+\eps)$-approximate maximum cardinality matching (MCM) in general graphs with worst-case update time $O(\sqrt{m} \cdot \eps^{-2})$, for any $\eps > 0$, where $m$ denotes the current number of edges in the graph.
Despite significant research efforts, this $\sqrt{m}$ update time barrier remains the state-of-the-art even if amortized time bounds and randomization are allowed or the approximation factor is allowed to increase from $1+\eps$ to $2-\eps$, and even in basic graph families such as planar graphs.

This paper presents a simple deterministic algorithm whose performance depends on the \emph{density} of the graph.
Specifically, we maintain fully dynamic $(1+\eps)$-approximate MCM with worst-case update time $O(\alpha \cdot \eps^{-2})$ for graphs with \emph{arboricity} bounded by $\alpha$.
(The arboricity of a graph is the minimum number of edge-disjoint forests into which it can be partitioned, and it is close to the density of its densest subgraph.)
The update time bound holds even if the arboricity bound $\alpha$ changes dynamically.
Since the arboricity ranges between $1$ and $\sqrt{m}$, our density-sensitive bound $O(\alpha \cdot \eps^{-2})$ naturally generalizes the $O(\sqrt{m} \cdot \eps^{-2})$ bound of Gupta and Peng.

For the family of constant arboricity graphs (which includes forests, planar graphs, and graphs excluding a fixed minor),
our update time reduces to a constant when $\eps$ is a fixed constant.
This should be contrasted with the previous best 2-approximation results for constant arboricity graphs,
which achieve either an $O(\log n)$ worst-case bound (Kopelowitz et al., ICALP'14) or an $O(\sqrt{\log n})$ amortized bound (He et al., ISAAC'14), where $n$ stands for the number of vertices in the graph.

En route to this result, we provide \emph{local} algorithms of independent interest for maintaining fully dynamic approximate matching and vertex cover.
Anne Driemel, Amer Krivosija and Christian Sohler. Clustering time series under the Frechet distance
Abstract: The Frechet distance is a popular distance measure for curves. We study the problem of clustering time series under the Frechet distance. In particular, we give $(1+\eps)$-approximation algorithms for variations of the following problem with parameters $k$ and $\ell$. Given a set $P$ of $n$ univariate time series, each of complexity at most $m$, we find $k$ time series, not necessarily from $P$, which we call cluster centers and which each have complexity at most $\ell$, such that (a) the maximum distance of an element of $P$ to its nearest cluster center or (b) the sum of these distances is minimized. Our algorithms have running time near-linear in the input size. To the best of our knowledge, our algorithms are the first clustering algorithms for the Frechet distance which achieve an approximation factor of $(1+\eps)$ or better.
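For reference, the (continuous) Frechet distance between curves $P,Q : [0,1] \to \mathbb{R}^d$ is
\[
d_F(P,Q) \;=\; \inf_{\alpha,\beta}\ \max_{t \in [0,1]} \big\| P(\alpha(t)) - Q(\beta(t)) \big\|,
\]
where $\alpha,\beta$ range over continuous non-decreasing surjections of $[0,1]$ onto itself; univariate time series correspond to the case $d=1$.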
Lin Chen, Nicole Megow and Kevin Schewior. An O(log m)-Competitive Algorithm for Online Machine Minimization
Abstract: We consider the online machine minimization problem in which jobs with hard deadlines arrive online over time at their release dates. The task is to determine a feasible preemptive schedule on a minimum number of machines. Our main result is a general O(log m)-competitive algorithm for the online problem, where m is the optimal number of machines used in an offline solution. This is the first improvement on an intriguing problem in nearly two decades. The previous best known result is an O(log(p_max/p_min))-competitive algorithm by Phillips et al. (STOC 1997) that depends on the ratio of the maximum and minimum job sizes, p_max and p_min. Even for m=2 no better algorithm was known; in this case our algorithm is constant-competitive. When applied to laminar or agreeable instances, our algorithm achieves a competitive ratio of O(1) even independently of m.
The following two key components lead to our new result. Firstly, we derive a new lower bound on the optimum value that relates the laxity and the number of jobs with intersecting time windows. Then, we design a new algorithm that is tailored to this lower bound and balances the delay of jobs by taking the number of currently running jobs into account.
Justin Hsu, Zhiyi Huang, Aaron Roth and Zhiwei Steven Wu. Jointly Private Convex Programming
Abstract: We present a general method for approximately solving convex programs defined
by private information from agents, when the solution can be naturally
partitioned among the agents. This class of problems includes multi-commodity
flow problems, general allocation problems, and multi-dimensional knapsack
problems, among other examples. The accuracy of our algorithm depends on the
number of coupling constraints, which bind multiple agents. On the
other hand, our accuracy is nearly independent of the number of variables, and
in many cases, actually improves as the number of agents increases. A special
case of our result (solving general allocation problems beyond ``Gross
Substitute'' preferences) resolves the main open problem from [Hsu et al. STOC
2014].

We also consider strategic agents who have preferences over their part of the
solution. For any convex program in our class that maximizes social
welfare, we show how to create an approximately dominant strategy
truthful mechanism, approximately maximizing welfare. The central idea is to
charge agents prices based on the approximately optimal dual variables, which
are themselves computed under differential privacy. Our results substantially
broaden the class of problems that are known to be solvable under privacy
and/or incentive constraints.
Merav Parter, David Peleg and Shay Solomon. Local-on-Average Distributed Tasks
Abstract: A distributed task is {\em local} if its time complexity is (nearly)
constant; otherwise it is {\em global}. Unfortunately, local tasks
are relatively scarce, and most distributed tasks require time at least
logarithmic in the network size (and often higher than that).
In a dynamic setting, i.e., when the network undergoes repeated and frequent
topological changes, such as vertex and edge insertions and deletions,
it is desirable to be able to perform a {\em local} update procedure around
the modified part of the network, rather than running a static {\em global}
algorithm from scratch following each change.

This paper takes a step toward establishing the hypothesis that
many (statically) {\em non-local} distributed tasks are
{\em local-on-average} in the dynamic setting, namely,
their \emph{amortized} time complexity is $O(\log^* n)$.

Towards establishing the plausibility of this hypothesis, we
develop a technique for transforming static $O(\polylog(n))$ time algorithms
into dynamic $O(\log^* n)$ amortized time update procedures. We then apply this
method to several fundamental problems whose static time complexity
is logarithmic, including forest-decomposition, edge-orientation
and coloring sparse graphs, and show that their amortized time complexity in the dynamic setting is indeed $O(\log^* n)$.
Luca Becchetti, Andrea Clementi, Emanuele Natale, Francesco Pasquale and Luca Trevisan. Stabilizing Consensus with Many Opinions
Abstract: We consider the following distributed consensus problem: Each node in a
complete communication network of size $n$ initially holds an \emph{opinion},
which is chosen arbitrarily from a finite set $\Sigma$. The
system must converge toward a consensus state in which all, or almost all,
nodes hold the same opinion. Moreover, this opinion should be \emph{valid},
i.e., it should be one among those initially present in the system. This
condition should be met even in the presence of an adaptive, malicious
adversary who can modify the opinions of a bounded number of nodes in every
round.

We consider the \emph{3-majority dynamics} in the \emph{Uniform-Gossip} model:
An efficient protocol in which, in every round, each node revises its opinion
on the basis of the opinions held in the previous round by 3 randomly chosen
neighbors (with ties broken arbitrarily). Let $k$ be the number of (initial)
valid opinions. We show that, for any $\alpha < 1/3$ and for $k =
\mathcal{O}(n^{\alpha})$, the 3-majority dynamics converges in time polynomial in $k$ and
$\log n$ even in the presence of an adversary who can affect up to
$o(\sqrt{n})$ nodes at each round.
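A single synchronous round of this dynamics on the complete network is straightforward to simulate (adversary omitted; as one arbitrary tie-breaking rule, a node that sees three distinct opinions keeps its first sample):

import random
from collections import Counter

def three_majority_round(opinions):
    # opinions: list where opinions[i] is node i's current opinion.
    # Each node samples 3 nodes uniformly at random (with replacement)
    # and adopts the majority of the sampled opinions; if all three
    # differ, it keeps the first sample (one arbitrary tie-break).
    n = len(opinions)
    new = []
    for _ in range(n):
        sample = [opinions[random.randrange(n)] for _ in range(3)]
        winner, count = Counter(sample).most_common(1)[0]
        new.append(winner if count >= 2 else sample[0])
    return new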

Previously, the convergence of the 3-majority protocol was known for $|\Sigma|
= 2$ only, with an argument that is robust to adversarial errors. On the other
hand, no anonymous uniform-gossip protocol that is robust to
adversarial errors was known for $|\Sigma| > 2$.
Shiri Chechik and Christian Wulff-Nilsen. Near-Optimal Light Spanners
Abstract: A spanner $H$ of a weighted undirected graph $G=(V,E)$
is a ``sparse'' subgraph that approximately preserves distances between every pair of nodes.
A spanner is said to have stretch $k$ (and is referred to as a $k$-spanner for short) if $\dist(s,t,H) \leq k \cdot \dist(s,t,G)$, where $\dist(s,t,H')$ for a subgraph $H'$ is the distance from $s$ to $t$ in $H'$.
Two main measures of the sparseness of the spanner are the size (number of edges) and the total weight (the sum of the weights of the edges in the spanner).

It is well-known that one can efficiently construct a $(2k-1)$-spanner with $O(n^{1+1/k})$ edges \cite{ADDJS93}.
This size-stretch tradeoff is conjectured to be optimal based on the girth conjecture of Erd\H{o}s \cite{Er64}.
However, the current state of the art for the second measure is not yet optimal.

Recently, Elkin, Neiman and Solomon [ICALP 14] improved the state of the art by presenting a sharper analysis of the greedy algorithm,
proving that the greedy algorithm achieves $(2k-1) \cdot (1+\eps)$ stretch and total edge weight of $O_{\eps}(k/\log{k} \cdot \omega(MST(G))\cdot n^{1/k})$, where
$\omega(MST(G))$ is the weight of the minimum spanning tree of $G$.
The previous analysis by Chandra \etal [SOCG 92] yielded
$(2k-1)\cdot (1+\eps)$ stretch and total edge weight of $O_{\eps}(k \omega(MST(G)) n^{1/k})$.
Hence, Elkin \etal improved the weight of the spanner by a $\log{k}$ factor.

In this work, we completely remove the $k$ factor from the weight, presenting a spanner with $(2k-1)\cdot (1+\eps)$ stretch,
$O_{\eps}(\omega(MST(G)) n^{1/k})$ total weight, and $O(n^{1+1/k})$ edges.
Up to the $(1+\eps)$ factor in the stretch, this matches the girth conjecture of Erd\H{o}s \cite{Er64}.
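For reference, the greedy algorithm analyzed in these works is the classical one of \cite{ADDJS93}: scan edges in nondecreasing order of weight and add an edge only if the spanner built so far does not already connect its endpoints within stretch $t$. A minimal Python sketch (ours), run with $t=(2k-1)(1+\eps)$ to match the setting above:

import heapq

def greedy_spanner(n, edges, t):
    # Classical greedy t-spanner: keep edge (w, u, v) iff the current
    # spanner distance between u and v exceeds t * w.
    adj = {u: [] for u in range(n)}
    kept = []
    for w, u, v in sorted(edges):
        if spanner_dist(adj, u, v, t * w) > t * w:
            kept.append((w, u, v))
            adj[u].append((v, w))
            adj[v].append((u, w))
    return kept

def spanner_dist(adj, s, goal, cutoff):
    # Dijkstra from s, pruned once tentative distances exceed cutoff.
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, x = heapq.heappop(pq)
        if x == goal:
            return d
        if d > dist.get(x, float("inf")) or d > cutoff:
            continue
        for y, w in adj[x]:
            nd = d + w
            if nd < dist.get(y, float("inf")):
                dist[y] = nd
                heapq.heappush(pq, (nd, y))
    return float("inf")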
Anupam Gupta, Viswanath Nagarajan and Sahil Singla. Algorithms and Adaptivity Gaps for Stochastic Probing
Abstract: A stochastic probing problem consists of a set of elements whose
values are independent random variables. The algorithm knows the
distributions of these variables, but not the actual outcomes. The
only way to learn the actual outcomes is to probe these elements.
However, there are constraints on which set of elements may be
probed. (E.g., we may have to travel in some metric to probe elements
and have limited time.) These constraints are called outer
constraints. We want to develop an algorithm that picks some set of
elements to maximize the (expected) value, subject to the picked
subset of elements satisfying some other set of constraints, called
the inner constraints. Such problems were studied for both inner and
outer constraints being matroid intersections; these modeled problems
like kidney matching and Bayesian auctions. One limitation of past
work was its reliance on linear-programming-like techniques,
which meant going beyond matroid-like structures was difficult.

In this work, we give a very general adaptivity gap result that holds for all
prefix-closed outer constraints, as long as the inner constraints are
matroid intersections. The adaptivity gap is O(log n) for any
constant number of inner matroid constraints. We feel prefix-closedness
captures all "reasonable" outer constraints, like orienteering, connectivity, and
precedence. Based on this we give the first approximation algorithms
for such stochastic probing problems. These have applications, e.g., to
path-planning and precedence-constrained scheduling problems.
John Augustine, William K. Moses Jr., Amanda Redlich and Eli Upfal. Balanced Allocation: Patience is not a Virtue
Abstract: Load balancing is a well-studied problem, with balls-in-bins being the primary framework. The greedy algorithm $\mathsf{Greedy}[d]$ of Azar et al. places each ball by probing $d > 1$ random bins and placing the ball in the least loaded of them. It ensures a maximum load that is exponentially better than the strategy of placing each ball uniformly at random. V\"ocking showed that a slightly asymmetric variant, $\mathsf{Left}[d]$, provides a further significant improvement. However, this improvement comes at an additional computational cost of imposing structure on the bins.

Here, we present a fully decentralized and easy-to-implement algorithm called $\mathsf{FirstDiff}[d]$ that combines the simplicity of $\mathsf{Greedy}[d]$ and the improved balance of $\mathsf{Left}[d]$. The key idea in $\mathsf{FirstDiff}[d]$ is to probe bins until one whose load differs from that of the first probed bin is located, and then place the ball. Although in principle the number of probes could be quite large, we show that $\mathsf{FirstDiff}[d]$ requires $d$ probes on average per ball (in both the standard and heavily-loaded settings). Thus the number of probes is no greater than that of either $\mathsf{Greedy}[d]$ or $\mathsf{Left}[d]$. More importantly, we show that $\mathsf{FirstDiff}[d]$ closely matches the improved maximum load ensured by $\mathsf{Left}[d]$ in both the standard and heavily-loaded settings. We additionally give experimental evidence that $\mathsf{FirstDiff}[d]$ is indeed as good as $\mathsf{Left}[d]$, if not better, in practice.
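A toy version of the placement rule as we read the abstract (the actual $\mathsf{FirstDiff}[d]$ uses the parameter $d$ to calibrate the expected number of probes; the probe cap below is our addition, to guarantee termination when all sampled loads tie):

import random

def first_diff_place(bins, max_probes=64):
    # Probe a first bin, then keep probing until a bin with a different
    # load appears; place the ball in the lighter of the two bins.
    first = random.randrange(len(bins))
    for _ in range(max_probes):
        nxt = random.randrange(len(bins))
        if bins[nxt] != bins[first]:
            target = nxt if bins[nxt] < bins[first] else first
            bins[target] += 1
            return target
    bins[first] += 1   # fallback: every probe saw the same load
    return first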
Robert Ganian, Ramanujan M. S. and Stefan Szeider. Discovering Archipelagos of Tractability for Constraint Satisfaction and Counting
Abstract: The Constraint Satisfaction Problem (CSP) is a central and generic computational problem which provides a common framework for many theoretical and practical applications. A central line of research is concerned with the identification of classes of instances for which CSP can be solved in polynomial time; such classes are often called “islands of tractability”. A prominent way of defining islands of tractability for CSP is to restrict the relations that may occur in the constraints to a fixed set, called a constraint language; a constraint language is conservative if it contains all unary relations. Schaefer’s famous Dichotomy Theorem (STOC 1978) identifies all islands of tractability in terms of tractable constraint languages over a Boolean domain of values. Since then many extensions and generalizations of this result have been obtained. Recently, Bulatov (TOCL 2011, JACM 2013) gave a full characterization of all islands of tractability for CSP and the counting version #CSP that are defined in terms of conservative constraint languages.

This paper addresses a general limitation of the above tractability results for CSP and #CSP: they only apply to instances where all constraints belong to a single tractable language (in general, the union of two tractable languages is not tractable). We show that we can overcome this limitation as long as we keep some control of how constraints over the various considered tractable languages interact with each other. For this purpose we utilize the notion of a strong backdoor of a CSP instance, as introduced by Williams et al. (IJCAI 2003), which is a set of variables that when instantiated moves the instance to an island of tractability, i.e., to a tractable class of instances. We consider strong backdoors into scattered classes, consisting of CSP instances where each connected component belongs entirely to some class from a list of tractable classes. Figuratively speaking, a scattered class constitutes an archipelago of tractability. The main difficulty lies in finding a strong backdoor of given size k; once it is found, we can try all possible instantiations of the backdoor variables and apply the polynomial-time algorithms associated with the islands of tractability on the list component-wise. Our main result is an algorithm that, given a CSP instance with n variables, finds in time f(k)*poly(n) a strong backdoor into a scattered class (associated with a list of finite conservative constraint languages) of size k, or correctly decides that no such backdoor exists. This also gives the running time for solving (#)CSP, provided that (#)CSP is polynomial-time tractable for the considered constraint languages. Our result makes significant progress towards the main goal of the backdoor-based approach to CSPs: the identification of maximal base classes for which small backdoors can be detected efficiently.
Patrizio Angelini, Giordano Da Lozzo, Giuseppe Di Battista, Valentino Di Donato, Philipp Kindermann, Günter Rote and Ignaz Rutter. Windrose Planarity: Embedding Graphs with Direction-Constrained Edges
Abstract: Given a planar graph $G(V,E)$ and a partition of the neighbors of each vertex $v \in V$ into four sets $UR(v)$, $UL(v)$, $DL(v)$, and $DR(v)$, the problem Windrose Planarity asks to decide whether $G$ admits a windrose-planar drawing, that is, a planar drawing in which (i) each neighbor $u \in UR(v)$ is above and to the right of $v$, (ii) each neighbor $u \in UL(v)$ is above and to the left of $v$, (iii) each neighbor $u \in DL(v)$ is below and to the left of $v$, (iv) each neighbor $u \in DR(v)$ is below and to the right of $v$, and (v) edges are represented by curves that are monotone with respect to each axis. By exploiting both the horizontal and the vertical relationship among vertices, windrose-planar drawings make it possible to simultaneously visualize two partial orders defined by means of the edges of the graph.

Although the problem is NP-hard in the general case, we give a polynomial-time algorithm for testing whether there exists a windrose-planar drawing that respects a combinatorial embedding that is given as part of the input. This algorithm is based on a characterization of the plane triangulations admitting a windrose-planar drawing. Furthermore, for any embedded graph admitting a windrose-planar drawing we show how to construct one with at most one bend per edge on an $O(n) \times O(n)$ grid. The latter result contrasts with the fact that straight-line windrose-planar drawings may require exponential area.
Antonio Blanca and Alistair Sinclair. Random-cluster Dynamics in Z^2
Abstract: The random-cluster model has been widely studied as a unifying framework for random graphs, spin systems and electrical networks, but its dynamics have so far largely resisted analysis. In this paper we analyze the Glauber dynamics of the random-cluster model in the canonical case where the underlying graph is an n by n box in the Cartesian lattice Z^2. Our main result is an O(n^2 log n) upper bound for the mixing time at all values of the model parameter p except the critical point p=p_c(q), and for all values of the second model parameter $q \ge 1$. We also provide a matching lower bound proving that our result is tight. Our analysis takes as its starting point the recent breakthrough by Beffara and Duminil-Copin on the location of the random-cluster phase transition in Z^2. It is reminiscent of similar results for spin systems such as the Ising and Potts models, but requires the reworking of several standard tools in the context of the random-cluster model, which is not a spin system in the usual sense.
Ishay Haviv and Oded Regev. The Restricted Isometry Property of Subsampled Fourier Matrices
Abstract: A matrix $A \in \C^{q \times N}$ satisfies the restricted isometry property of order $k$ with constant $\eps$ if it preserves the $\ell_2$ norm of all $k$-sparse vectors up to a factor of $1\pm \eps$. We prove that a matrix $A$ obtained by randomly sampling $q = O(k \cdot \log^2 k \cdot \log N)$ rows from an $N \times N$ Fourier matrix satisfies the restricted isometry property of order $k$ with a fixed $\eps$ with high probability. This improves on Rudelson and Vershynin (Comm. Pure Appl. Math., 2008), its subsequent improvements, and Bourgain (Geom. Funct. Anal., 2014).
Badih Ghazi, Pritish Kamath and Madhu Sudan. Communication Complexity of Permutation-Invariant Functions
Abstract: Motivated by the quest for a broader understanding of upper bounds in communication complexity, at least for simple functions, we introduce the class of ''permutation-invariant'' functions. A partial function $f:\{0,1\}^n \times \{0,1\}^n\to \{0,1,?\}$ is permutation-invariant if for every bijection $\pi:\{1,\ldots,n\} \to \{1,\ldots,n\}$ and every $\mathbf{x}, \mathbf{y} \in \{0,1\}^n$, it is the case that $f(\mathbf{x}, \mathbf{y}) = f(\mathbf{x}^{\pi}, \mathbf{y}^{\pi})$. Most of the commonly studied functions in communication complexity are permutation-invariant. For such functions, we present a simple complexity measure (computable in time polynomial in $n$ given an implicit description of $f$) that describes their communication complexity up to polynomial factors and up to an additive error that is logarithmic in the input size. This gives a coarse taxonomy of the communication complexity of simple functions. Our work highlights the role of the well-known lower bounds of functions such as 'Set-Disjointness' and 'Indexing', while complementing them with the relatively lesser-known upper bounds for 'Gap-Inner-Product' (from the sketching literature) and 'Sparse-Gap-Inner-Product' (from the recent work of Canonne et al. [ITCS 2015]). We also present consequences to the study of communication complexity with imperfectly shared randomness where we show that for total permutation-invariant functions, imperfectly shared randomness results in only a polynomial blow-up in communication complexity after an additive $O(\log \log n)$ overhead.
Joshua Brakensiek, Venkatesan Guruswami and Samuel Zbarsky. Efficient Low-Redundancy Codes for Correcting Multiple Deletions
Abstract: We consider the problem of constructing codes to recover from k-bit deletions with efficient encoding/decoding, for a fixed k. The single deletion case is well understood, with the Varshamov-Tenengolts code from 1965 giving an asymptotically optimal construction with ~ 2^n/n codewords of length n, i.e., at most log n bits of redundancy. However, even for the case of two deletions, there was no known explicit construction with redundancy less than n^{Omega(1)}.
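For concreteness, the Varshamov--Tenengolts code $VT_a(n)$ is the set of binary strings $x_1 \cdots x_n$ with $\sum_{i=1}^{n} i\, x_i \equiv a \pmod{n+1}$. The brute-force check below (ours, purely illustrative) confirms that its size is roughly $2^n/(n+1)$ and that the single-deletion balls of distinct codewords are pairwise disjoint, which is exactly what correcting one deletion requires:

from itertools import product

def vt_code(n, a=0):
    # All length-n binary strings with sum(i * x_i) = a mod (n+1),
    # positions indexed from 1 (the Varshamov-Tenengolts code).
    return [x for x in product((0, 1), repeat=n)
            if sum(i * b for i, b in enumerate(x, 1)) % (n + 1) == a]

def deletions(x):
    # All strings obtainable from x by deleting one symbol.
    return {x[:i] + x[i + 1:] for i in range(len(x))}

n = 8
code = vt_code(n)
print(len(code), 2**n // (n + 1))   # size is roughly 2^n / (n+1)
# Deletion balls of distinct codewords are pairwise disjoint.
assert all(deletions(c1).isdisjoint(deletions(c2))
           for i, c1 in enumerate(code) for c2 in code[i + 1:])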

For any fixed k, we construct a binary code with c_k log n redundancy that can be decoded from k deletions in O_k(n log^4 n) time. The coefficient c_k can be taken to be O(k^2 log k), which is only quadratically worse than the optimal, non-constructive bound of O(k). We also indicate how to modify this code to allow for a combination of up to k insertions and deletions.

We also note that amongst linear codes capable of correcting k deletions, the (k+1)-fold repetition code is essentially the best possible.
Xue Chen. Integrality Gaps and Approximation Algorithms for Dispersers and Bipartite Expanders
Abstract: We study the problem of approximating the quality of a disperser. A bipartite graph $G$ on $([N],[M])$ is a $(\rho N,(1-\delta)M)$-disperser if for any subset $S\subseteq [N]$ of size $\rho N$, the neighbor set $\Gamma(S)$ contains at least $(1-\delta)M$ distinct vertices. Our main results are strong integrality gaps in the Lasserre hierarchy and an approximation algorithm for dispersers.
\begin{enumerate}
\item For any $\alpha>0$, $\delta>0$, and a random bipartite graph $G$ with left degree $D=O(\log N)$, we prove that the Lasserre hierarchy cannot distinguish whether $G$ is an $(N^{\alpha},(1-\delta)M)$-disperser or not an $(N^{1-\alpha},\delta M)$-disperser.
\item For any $\rho>0$, we prove that there exist infinitely many constants $d$ such that the Lasserre hierarchy cannot distinguish whether a random bipartite graph $G$ with right degree $d$ is a $(\rho N, (1-(1-\rho)^d)M)$-disperser or not a $(\rho N, (1-\Omega(\frac{1-\rho}{\rho d + 1-\rho}))M)$-disperser. We also provide an efficient algorithm to find a subset of size exactly $\rho N$ whose approximation ratio matches the integrality gap up to an extra loss of $\frac{\min\{\frac{\rho}{1-\rho},\frac{1-\rho}{\rho}\}}{\log d}$.
\end{enumerate}
Our method gives an integrality gap in the Lasserre hierarchy for bipartite expanders with left degree~$D$. A bipartite graph $G$ on $([N],[M])$ is a $(\rho N,a)$-expander if for any subset $S\subseteq [N]$ of size $\rho N$, the neighbor set $\Gamma(S)$ contains at least $a \cdot \rho N$ distinct vertices. We prove that for any constant $\epsilon>0$, there exist constants $\epsilon'<\epsilon$, $\rho$, and $D$ such that the Lasserre hierarchy cannot distinguish whether a bipartite graph on $([N],[M])$ with left degree $D$ is a $(\rho N, (1-\epsilon')D)$-expander or not a $(\rho N, (1-\epsilon)D)$-expander.
Nicolas Bousquet, Yang Cai, Christoph Hunkenschroder and Adrian Vetta. On the Economic Efficiency of the Combinatorial Clock Auction
Abstract: Since the 1990s spectrum auctions have been implemented world-wide.
This has provided for a practical examination of an assortment of auction mechanisms and, amongst these,
two simultaneous ascending price auctions have proved to be extremely successful.
These are the simultaneous multiround ascending auction (SMRA) and the combinatorial
clock auction (CCA). It has long been known that, for certain classes of valuation functions,
the SMRA provides good theoretical guarantees on social welfare.
However, no such guarantees were known for the CCA.

In this paper, we show that CCA does provide strong guarantees on social welfare {\em provided}
the price increment and stopping rule are well-chosen.
This is very surprising in that the choice of price increment has been used primarily
to adjust auction duration, and the stopping rule has attracted little attention.
The main result is a polylogarithmic approximation guarantee for
social welfare when the maximum number of items $\C$ demanded by a bidder is fixed.
Specifically, we show that either the revenue of the CCA is at least
an $\Omega\Big(\frac{1}{\C^{2}\log n\log^2m}\Big)$-fraction of the optimal welfare or
the welfare of the CCA is at least an $\Omega\Big(\frac{1}{\log n}\Big)$-fraction of the optimal
welfare, where $n$ is the number of bidders and $m$ is the number of items. As a corollary,
the welfare ratio -- the worst case ratio between the social welfare of the
optimum allocation and the social welfare of the CCA allocation -- is at most $O(\C^2 \cdot \log n \cdot \log^2 m)$.
We emphasize that this latter result requires no assumption on bidders' valuation functions.
Finally, we prove that such a dependence on $\C$ is necessary. In particular, we show that the
welfare ratio of the CCA is at least $\Omega \Big(\C \cdot \frac{\log m}{\log \log m}\Big)$.
Gabor Braun, Jonah Brown-Cohen, Arefin Huq, Sebastian Pokutta, Prasad Raghavendra, Aurko Roy, Benjamin Weitz and Daniel Zink. The Matching Problem Has No Small Symmetric SDP
Abstract: Yannakakis (1991,1988) showed that the matching problem does not have a small
symmetric linear program. Rothvoss (2014)
recently proved that any linear program for the matching problem,
not necessarily symmetric, has exponential size. It is natural to ask whether the matching
problem can be expressed compactly in a framework such as
semidefinite programming (SDP) that is more powerful than linear
programming but still allows efficient optimization.
We answer this question negatively for symmetric SDPs: any symmetric
SDP for the matching problem has exponential size.

We also show that an O(k)-round Lasserre SDP relaxation for the
asymmetric metric traveling salesperson problem yields at least as good an approximation
as any symmetric SDP relaxation of size n^{k}.

The key technical ingredient underlying both these results is an upper
bound on the degree needed to derive polynomial identities that hold
over the space of matchings or traveling salesperson tours.
Yossi Azar, Ilan Cohen, Amos Fiat and Alan Roytman. Packing Small Vectors
Abstract: Online $d$-dimensional vector packing models many settings such as minimizing resources in data centers where jobs have multiple resource requirements (CPU, Memory, etc.). However, no online $d$-dimensional vector packing algorithm can achieve a competitive ratio better than $d$. Fortunately, in many natural applications, vectors are relatively small, and thus the lower bound does not hold. For sufficiently small vectors, an $O(\log d)$-competitive algorithm was known. We improve this to a constant competitive ratio, arbitrarily close to $e$, given that vectors are sufficiently small.

We give improved results for the two dimensional case. For arbitrarily small vectors, the first fit algorithm for two dimensional vector packing is no better than 2-competitive. We present a natural family of first fit variants, and for optimized parameters get a competitive ratio $\approx 1.48$ for sufficiently small vectors.

We improve upon the $1.48$-competitive ratio -- not via a first fit variant -- and give a competitive ratio arbitrarily close to $4/3$ for packing small, two dimensional vectors. We show that no algorithm can achieve better than a $4/3$ competitive ratio for two dimensional vectors, even if one allows the algorithm to split vectors amongst arbitrarily many bins.
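For reference, the first fit baseline discussed above can be sketched as follows; the paper's improved algorithms are variants of this rule and are not reproduced here.

```python
def first_fit(vectors, d):
    """Online first fit for d-dimensional vector packing: each arriving
    vector goes into the first open bin where it still fits coordinate-wise,
    otherwise a new bin is opened."""
    bins = []        # per-bin coordinate-wise load vectors
    assignment = []  # bin index chosen for each arriving vector
    for v in vectors:
        for j, load in enumerate(bins):
            if all(load[i] + v[i] <= 1.0 for i in range(d)):
                bins[j] = [load[i] + v[i] for i in range(d)]
                assignment.append(j)
                break
        else:
            bins.append(list(v))
            assignment.append(len(bins) - 1)
    return bins, assignment
```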
Avrim Blum, Sariel Har-Peled and Benjamin Raichel. Sparse Approximation via Generating Point Sets
Abstract: For a set $\dataset$ of $\npoints$ points in the unit ball $\Ball^d \subseteq \Re^d$, consider the problem of finding a small subset $\algset \subseteq \dataset$ whose convex hull $\eps$-approximates the convex hull of the original points. We present an efficient algorithm to compute such an $\eps'$-approximation of size $\alg$, where $\eps'$ is a function of $\eps$ and $\alg$ is a function of the minimum size $\opt$ of such an $\eps$-approximation. Surprisingly, neither bound depends on the dimension $d$. Furthermore, every point of $\dataset$ can be $\eps$-approximated by a convex combination of points of $\algset$ that is $O(1/\eps^2)$-sparse.

Our result can be viewed as a method for sparse, convex autoencoding: approximately representing the data in a compact way using sparse combinations of a small subset $\algset$ of the original data. The new algorithm can be kernelized, and it preserves sparsity in the original input.
Arturs Backurs, Piotr Indyk, Ilya Razenshteyn and David Woodruff. Nearly-optimal bounds for sparse recovery in generic norms, with applications to k-median sketching
Abstract: We initiate the study of trade-offs between sparsity and the number of measurements in sparse recovery schemes for {\em generic} norms. Specifically, for a norm $\|\cdot\|$, sparsity parameter $k$, approximation factor $K>0$, and probability of failure $P>0$, we ask: what is the minimal value of $m$ so that there is a distribution over $m \times n$ matrices $A$ with the property that for any~$x$, given $Ax$, we can recover a $k$-sparse approximation to $x$ in the given norm with probability at least $1-P$? We give a partial answer to this problem, by showing that for norms that admit efficient linear sketches, the optimal number of measurements $m$ is closely related to the {\em doubling dimension} of the metric induced by the norm $\|\cdot\|$ on the set of all $k$-sparse vectors. By applying our result to specific norms, we cast known measurement bounds in our general framework (for the $\ell_p$ norms, $p \in [1,2]$) as well as provide new, measurement-efficient schemes (for the Earth-Mover Distance norm). The latter result directly implies more succinct linear sketches for the well-studied planar {\em $k$-median clustering} problem. Finally, our lower bound for the doubling dimension of the EMD norm enables us to answer the open question of [Frahling-Sohler, STOC'05], showing that the space complexity of clustering problems in the dynamic streaming model requires a super-constant number of machine words.
Massimo Cairo, Roberto Grossi and Romeo Rizzi. New Bounds for Approximating Extremal Distances in Undirected Graphs
Abstract: The diameter D, the radius r, and the eccentricities ε(v) of all nodes v are well-known extremal distances of an undirected graph G=(V,E). (Here we denote n=|V| and m=|E|.) We investigate the possibility of providing α-multiplicative β-additive approximations D̃, r̃ and ε̃(v) for these extremal distances in Õ(mᵞ) expected time, for some α, β, γ, where Õ() neglects poly-log factors. Specifically, it is required that 1/α D - β ≤ D̃ ≤ D for the diameter, r ≤ r̃ ≤ α r + β for the radius, and 1/α ε(v) - β ≤ ε̃(v) ≤ ε(v) for the eccentricities.

All the known exact algorithms achieve α=1, β=0 and γ=2. A single graph search yields α=2 for the diameter and the radius and α=3 for the eccentricities, all with β=0 and γ=1. Roditty and Vassilevska W. [STOC13] obtain α=3/2, β<1 and γ=3/2 for D and r. Chechik, Larkin, Roditty, Schoenebeck, Tarjan and Vassilevska W. [SODA14] give α=3/2 for D, r and α=5/3 for ε(v), all with β=0 and γ=3/2. Under the Strong Exponential Time Hypothesis (SETH) of Impagliazzo, Paturi and Zane [JCSS01], it is impossible for the diameter to get α<3/2, β=0, γ<2 [STOC13] and α<4/3, β=O(mᵟ), γ < 2 - 2δ for any δ≥0 [SODA14].
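To make the single-search baseline concrete: one BFS from an arbitrary node v of a connected graph returns ε(v), and since ε(v) ≤ D ≤ 2ε(v) by the triangle inequality, reporting ε(v) achieves α=2, β=0, γ=1. A textbook sketch (not this paper's contribution):

```python
from collections import deque

def bfs_eccentricity(adj, v):
    """Eccentricity of v in a connected, unweighted graph (dict of lists)."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return max(dist.values())  # ecc(v): a 2-approximation of the diameter
```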

This paper contains the contributions summarized below.

First, we show under SETH that it is impossible to get α<3/2 for the diameter and α<5/3 for the eccentricities with β=O(mᵟ) and γ < 2 - 2δ for any δ ≥ 0. Thus, we tighten the above results for the diameter and the eccentricities. In particular, the recently achieved approximation factors α=3/2 for D and α=5/3 for ε(v) cannot be improved in truly sub-quadratic time (γ<2) even with a constant additive term β=O(1) under SETH: the eccentricities are more difficult to approximate than the diameter.

Second, we present an algorithmic scheme that gives nontrivial approximations with exponent γ = 1+∊ arbitrarily close to the optimum 1. Specifically, for any integer k ≥ 0, we present a randomized algorithm giving α = 2 - 1/2ᵏ for diameter and radius and α = 3 - 4/(2ᵏ+1) for eccentricities with β<1 in Õ(m n^{1/(k+1)}) expected time (thus γ = 1 + 1/(k+1)). For k ≥ 2, this gives a family of previously unknown bounds, and approaches near-linear running time as k grows.

Third, we observe a new connection between approximating the diameter and finding h-dominating sets, which are subsets of nodes at distance ≤ h from every other node. These sets could be of independent interest for other graph problems. Our algorithms discover an h-dominating set X for h ~ (α-1) D that is employed to give an α-approximation D̃ in Õ(|X| m) time. Since finding a smaller X would yield a faster algorithm, we give some lower bounds on the size of X in the worst case through explicit constructions. As an intriguing example, given any graph of diameter 4 our algorithms find a 3-dominating set of size Θ̃(n^{1/3}), which is proven to be tight up to poly-logs.
Mohsen Ghaffari. An Improved Distributed Algorithm for Maximal Independent Set
Abstract: The Maximal Independent Set (MIS) problem is one of the basic problems in the study of locality in distributed graph algorithms. This paper
presents an extremely simple randomized algorithm providing a near-optimal local complexity for this problem, which incidentally, when combined with some known techniques, also leads to a near-optimal global complexity.

Classical MIS algorithms of Luby [STOC'85] and Alon, Babai and Itai [JALG'86] provide the global complexity guarantee that, with high probability, all nodes terminate after $O(\log n)$ rounds. In contrast, our initial focus is on the local complexity, and our main contribution is to provide a very simple algorithm guaranteeing that each particular node $v$ terminates after $O(\log \deg(v)+\log 1/\epsilon)$ rounds, with probability at least $1-\epsilon$. The guarantee holds even if the randomness outside the 2-hop neighborhood of $v$ is determined adversarially. This degree-dependency is optimal, due to a lower bound of Kuhn, Moscibroda, and Wattenhofer [PODC'04].
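For context, the classical algorithm of Luby cited above can be simulated centrally in a few lines; the paper's algorithm refines the per-node marking probabilities, a refinement not reproduced in this sketch.

```python
import random

def luby_mis(adj):
    """Centralized simulation of Luby's MIS (adj: node -> list of neighbors).
    Nodes self-mark with probability 1/(2 deg); a mark losing to a marked
    neighbor of higher (degree, id) is dropped; winners join the MIS and
    are removed together with their neighbors."""
    alive, mis = set(adj), set()
    while alive:
        deg = {v: sum(1 for u in adj[v] if u in alive) for v in alive}
        marked = {v for v in alive
                  if random.random() < 1.0 / (2 * max(deg[v], 1))}
        winners = {v for v in marked
                   if all(u not in marked or (deg[u], u) < (deg[v], v)
                          for u in adj[v] if u in alive)}
        mis |= winners
        removed = set(winners)
        for v in winners:
            removed |= {u for u in adj[v] if u in alive}
        alive -= removed
    return mis
```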

Interestingly, this local complexity smoothly transitions to a global complexity: by adding techniques of Barenboim, Elkin, Pettie, and Schneider [FOCS'12; arXiv: 1202.1983v3], we get a randomized MIS algorithm with a high probability global complexity of $O(\log \Delta) + 2^{O(\sqrt{\log \log n})}$, where $\Delta$ denotes the maximum degree. This improves over the $O(\log^2 \Delta) + 2^{O(\sqrt{\log \log n})}$ result of Barenboim et al., and gets close to the $\Omega(\min\{\log \Delta, \sqrt{\log n}\})$ lower bound of Kuhn et al.

Corollaries include improved algorithms for MIS in graphs of upper-bounded arboricity, or lower-bounded girth, for Ruling Sets, for MIS in the Local Computation Algorithms (LCA) model, and a faster distributed algorithm for the Lov\'{a}sz Local Lemma.
Chandra Chekuri and Kent Quanrud. Fast Approximations for Matroid Intersection
Abstract: We present approximation algorithms for the weighted matroid intersection problem in the independence oracle model. Given two matroids $\mathcal{M}_1$ and $\mathcal{M}_2$ defined over a ground set $\mathcal{N}$ of $n$ elements, the best exact algorithm to find a maximum cardinality independent set (the unweighted case), due to Cunningham, requires $O(n k^{1.5})$ independence queries and time. Here $k$ is the rank of the matroid intersection. For the weighted case, algorithms due to Frank (and later others) require $O(n k^2)$ independence queries. There are also scaling based algorithms that require $O(\sqrt{k} n^2 \log (kW))$ independence queries where $W$ is the maximum weight (assuming all weights are integers). Recently, Huang, Kakimura, and Kamiyama described an algorithm that gives a $(1-\epsilon)$-approximation for the weighted matroid intersection problem while requiring $O(n k^{1.5} \log k / \epsilon)$ independence queries and time.

We observe that a $(1-\epsilon)$-approximation for the maximum cardinality case can be obtained in $O(nk/\epsilon)$ independence queries by adapting Cunningham's algorithm. Our main contribution is a $(1-\epsilon)$ approximation algorithm for the weighted matroid intersection problem that requires $O(n k \log^2 (1/\epsilon) / \epsilon^2)$ independence queries and time.
Michael Bender, Jeremy Fineman, Seth Gilbert and Maxwell Young. How to Scale Exponential Backoff: Constant Throughput, Polylog Access Attempts, and Robustness
Abstract: Randomized exponential backoff is a widely deployed technique for coordinating access to a shared resource. A good backoff protocol should, arguably, satisfy three natural properties: (i) it should provide constant throughput, wasting as little time as possible; (ii) it should require few failed access attempts, minimizing the amount of wasted effort; and (iii) it should be robust, continuing to work efficiently even if some of the access attempts fail for spurious reasons. Unfortunately, exponential backoff has some well-known limitations in two of these areas: it provides poor (sub-constant) throughput (in the worst case), and is not robust (to resource acquisition failures).
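For reference, the standard randomized exponential backoff loop that the paper sets out to fix looks roughly as follows; a minimal single-process sketch, where `try_send` is a placeholder for the shared-resource access attempt.

```python
import random

def exponential_backoff(try_send, max_attempts=32):
    """try_send() returns True on success; the window doubles per failure."""
    window = 1
    for attempt in range(max_attempts):
        slot = random.randrange(window)  # wait a random slot in the window
        # ... sleep for `slot` time slots here ...
        if try_send():
            return attempt + 1           # number of access attempts used
        window *= 2                      # exponential growth of the window
    raise RuntimeError("gave up after max_attempts")
```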

The goal of this paper is to fix exponential backoff, particularly focusing on the case where processes arrive in an on-line, worst-case fashion. We present a relatively simple backoff protocol Re-Backoff that has, at its heart, a version of exponential backoff. It guarantees expected constant throughput with dynamic process arrivals and requires only an expected polylogarithmic number of access attempts per process.


Re-Backoff is also robust to periods where the shared resource is unavailable. If it is unavailable for $D$ time slots, Re-Backoff provides the following guarantees. When the number of packets is a finite $n$, the average expected number of access attempts for successfully sending a packet is $O(\log^2(n + D))$. In the infinite case, the average expected number of access attempts for successfully sending a packet is $O(\log^2(n_c) + \log^2(D))$, where $n_c$ is the maximum number of processes that are ever in the system concurrently.
Shuchi Chawla, Nikhil R. Devanur, Anna Karlin and Balasubramanian Sivan. Simple pricing schemes for consumers with evolving values
Abstract: We consider a pricing problem where a buyer is interested in purchasing/using a good, such as an app or music or software, repeatedly over time. The consumer discovers his value for the good only as he uses it, and the value evolves with each use. Optimizing for the seller's revenue in such dynamic settings is a complex problem and requires assumptions about how the buyer behaves before learning his future value(s), and in particular, how he reacts to risk. We explore the performance of a class of pricing mechanisms that are extremely simple for both the buyer and the seller to use: the buyer reacts to prices myopically without worrying about how his value evolves in the future; the seller needs to optimize for revenue over a space of only two parameters, and can do so without knowing fine details of the value evolution process. We present simple-versus-optimal type results, namely that under certain assumptions, simple pricing mechanisms of the above form are approximately optimal regardless of the buyer's risk profile.

Our results assume that the buyer's value per usage evolves according to a Markov process that satisfies a martingale property. For our main result, we consider pricing mechanisms in which the seller offers the product for free for a certain number of uses, and then charges an appropriate fixed price per usage. We assume that the buyer responds by buying the product for as long as his value exceeds the fixed price. Importantly, the buyer does not need to know anything about how his future value will evolve, only how much he wants to use the product {\em right now}. This mitigation of risk yields a larger pool of buyers.
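A hedged toy simulation of this two-parameter mechanism, with a bounded random walk standing in for the martingale value process; all constants here are hypothetical, not from the paper.

```python
import random

def revenue(T, p, v0=1.0, sigma=0.1, horizon=1000, seed=0):
    """Free for the first T uses, then fixed price p per use, against a
    myopic buyer (illustrative value process; truncation at 0 makes it
    only approximately a martingale)."""
    rnd = random.Random(seed)
    v, rev = v0, 0.0
    for t in range(horizon):
        if t >= T:
            if v < p:       # myopic buyer stops once value drops below price
                break
            rev += p
        v = max(0.0, v + rnd.gauss(0.0, sigma))
    return rev

# The seller optimizes over just the two parameters (T, p):
best = max(((T, p) for T in range(0, 50, 10) for p in (0.25, 0.5, 0.75, 1.0)),
           key=lambda tp: sum(revenue(*tp, seed=s) for s in range(100)))
print(best)
```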
Ioannis Panageas, Piyush Srivastava and Nisheeth Vishnoi. Evolutionary dynamics in finite populations mix rapidly
Abstract: In this paper we prove that the rate of convergence, or mixing time, of a broad class of evolutionary dynamics in finite populations is roughly logarithmic in the size of the state space. An important special case of such a stochastic process is the Wright-Fisher model from evolutionary biology (with selection and mutation) on a population of size N over m genotypes. Our main result implies that the mixing time of this process is roughly O(log N) for all mutation rates and fitness landscapes, and solves the main open problem from DSV '12. In particular, it significantly extends the main result of V '15, which proved this for m=2. Biologically, such models have been used to study the evolution of viral populations, with applications to drug design strategies countering them. Here the time it takes for the population to reach a steady state is important both for estimating the steady-state structure of the population and for determining the treatment strength and duration. Our result, that such populations exhibit rapid mixing, makes both of these approaches sound. Our proof relies on a new connection between Markov chains arising in evolutionary dynamics and dynamical systems on the simplex, and we expect these techniques to be useful beyond the evolutionary biology setting.
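For concreteness, one generation of the Wright-Fisher process with selection and mutation can be simulated as below; the fitnesses f and mutation matrix Q are illustrative parameters, and this sketch is only a simulator, not the paper's mixing-time analysis.

```python
import random

def wright_fisher_step(counts, f, Q, N):
    """One generation over m genotypes: counts[i] is the current number of
    type-i individuals, f[i] its fitness, Q[i][j] the probability that an
    offspring of type i mutates to type j (rows of Q sum to 1)."""
    m = len(counts)
    w = [f[i] * counts[i] for i in range(m)]   # selection weights
    tot = sum(w)
    p = [sum(w[i] / tot * Q[i][j] for i in range(m)) for j in range(m)]
    new = [0] * m
    for _ in range(N):                         # resample N offspring
        r, acc = random.random(), 0.0
        for j in range(m):
            acc += p[j]
            if r <= acc:
                new[j] += 1
                break
        else:
            new[-1] += 1                       # guard against float rounding
    return new
```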
Mohsen Ghaffari and Bernhard Haeupler. Distributed Algorithms for Planar Networks II: Low-Congestion Shortcuts, MST, and Min-Cut
Abstract: This paper introduces the concept of low-congestion shortcuts for (near-)planar networks, and demonstrates their power by using them to obtain near-optimal distributed algorithms for problems such as Minimum Spanning Tree (MST) or Minimum Cut, in planar networks.

Consider a graph $G=(V, E)$ and a partitioning of $V$ into subsets of nodes $S_1$, ..., $S_{N}$, each inducing a connected subgraph $G[S_i]$. We define an $\alpha$-congestion shortcut with dilation $\beta$ to be a set of subgraphs $H_1$, ..., $H_{N} \subseteq G$, one for each subset $S_i$, such that:
(1) For each $i \in [1, N]$, the diameter of the subgraph $G[S_i]+H_i$ is at most $\beta$.
(2) For each edge $e\in E$, the number of subgraphs $G[S_i]+H_i$ containing $e$ is at most $\alpha$.
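The two conditions above are easy to verify directly for a proposed shortcut; a small checker (using networkx for diameters, and illustrating the definition rather than the paper's construction) might look like:

```python
import networkx as nx

def shortcut_params(G, parts, shortcuts):
    """Return (alpha, beta) for a partition `parts` (node sets S_i) and
    shortcut subgraphs H_i given as edge sets within G."""
    beta = 0
    congestion = {e: 0 for e in G.edges()}
    for S, H in zip(parts, shortcuts):
        Gi = nx.Graph()
        Gi.add_nodes_from(S)
        Gi.add_edges_from(G.subgraph(S).edges())
        Gi.add_edges_from(H)                 # the subgraph G[S_i] + H_i
        beta = max(beta, nx.diameter(Gi))    # condition (1)
        for e in set(G.subgraph(S).edges()) | set(H):
            key = e if e in congestion else (e[1], e[0])
            congestion[key] += 1             # condition (2)
    return max(congestion.values()), beta
```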

We prove that any partition of a $D$-diameter planar graph into individually-connected parts admits an $O(D\log D)$-congestion shortcut with dilation $O(D\log D)$, and we also present a distributed construction of it in $\tilde{O}(D)$ rounds. We moreover prove these parameters to be near-optimal; i.e., there are instances in which, unavoidably, $\max\{\alpha, \beta\} = \Omega(D\frac{\log D}{\log\log D})$.

Finally, we use low-congestion shortcuts, and their efficient distributed construction, to derive $\tilde{O}(D)$-round distributed algorithms for MST and Min-Cut, in planar networks. This complexity nearly matches the trivial lower bound of $\Omega(D)$. We remark that this is the first result bypassing the well-known $\tilde{\Omega}(D+\sqrt{n})$ existential lower bound of general graphs (see Peleg and Rubinovich [FOCS'99]; Elkin [STOC'04]; and Das Sarma et al. [STOC'11]) in a family of graphs of interest.
Timothy M. Chan. Improved Deterministic Algorithms for Linear Programming in Low Dimensions
Abstract: At SODA'93, Chazelle and Matou\v sek presented a derandomization of Clarkson's sampling-based algorithm for solving linear programs with $n$ constraints and $d$ variables in $d^{(7+o(1))d}n$ deterministic time. The time bound was subsequently improved to $d^{(5+o(1))d}n$ by Br\"onnimann, Chazelle, and Matou\v sek [FOCS'93]. We obtain a further improved deterministic algorithm running in $d^{(1/2+o(1))d}n$ time. Along the way, we also point out a simplified version of Chazelle and Matou\v sek's algorithm, avoiding $\eps$-approximations.
Ran Gelles, Bernhard Haeupler, Gillat Kol, Noga Ron-Zewi and Avi Wigderson. Towards optimal deterministic coding for interactive communication
Abstract: We study \emph{efficient, deterministic} interactive coding schemes
that simulate any interactive protocol both under adversarial and random errors,
and can approach rate 1.

For channels that flip bits with probability~$\epsilon<1/2$,
our coding scheme achieves a communication rate of $1 -\Theta\left(\sqrt{H({\eps})}\right)$ and
a failure probability of~$\exp(-n/ \log n)$ in length $n$ protocols. Prior to our work, all deterministic schemes (either efficient or not) had a rate bounded away from~$1$. Furthermore, the best failure probability achievable by an efficient deterministic protocol was only quasi-polynomial, i.e., of the form $\exp(- \log^{O(1)} n)$ (Braverman, ITCS 2012).

For channels in which an adversary controls the noise pattern, our coding scheme can tolerate an $O(1/\log n)$ fraction of errors with rate approaching 1. No similar deterministic and efficient
schemes were known with any constant rate.


Essential to both results is an explicit, efficiently encodable and decodable linear {\em tree-code} of length~$n$ that has a relative distance $\Theta(\eps/\log n)$ and a
rate~$1-\Theta(\epsilon)$, defined over a polynomial size alphabet. No nontrivial distance bound was known for any efficient constant rate tree-code.
The linearity of our code, and in particular the fact that it can be {\em systematic}, turns out to play an important role in approaching rate 1 for random and adversarial errors.

A central contribution of the paper, used in deriving our protocols for both random and adversarial errors,
is a novel code-concatenation scheme, a notion standard in coding theory which we adapt to the interactive setting. We use the above tree-code as the ``outer code'' in this concatenation. The necessary deterministic ``inner code'' is achieved by a nontrivial derandomization of the probabilistic protocol of (Haeupler, STOC 2014). This deterministic coding scheme (with {\em exponential-time} preprocessing,
but applied here to $\log n$ bit blocks) can handle an $\epsilon$ fraction of adversarial errors with a communication rate of $1 - \Theta\left(\sqrt{H({\eps})}\right)$.
Nick Gravin, Yuval Peres and Balasubramanian Sivan. Towards Optimal Algorithms for Prediction with Expert Advice
Abstract: We study the classical problem of prediction with expert advice in the adversarial setting with a geometric stopping time. In 1965, Cover gave the optimal algorithm for the case of 2 experts. In this paper, we design the optimal algorithm, adversary and regret for the case of 3 experts. Further, we show that the optimal algorithm for 2 and 3 experts is a probability matching algorithm (analogous to Thompson sampling) against a particular randomized adversary. Our proof shows that the probability matching algorithm is not only optimal against this particular randomized adversary, but also minimax optimal against all possible adversaries.

Our analysis develops upper and lower bounds simultaneously, analogous to the primal-dual method. Our analysis of the optimal adversary goes through delicate asymptotics of the random walk of a particle between multiple walls. We use the connection we develop to random walks to derive an improved algorithm and regret bound for the case of 4 experts, and provide a general framework for designing the optimal algorithm and adversary for an arbitrary number of experts.
Shi Li. Approximating capacitated $k$-median with $(1+\eps)k$ open facilities
Abstract: In the capacitated $k$-median (\CKM) problem, we are given a set $F$ of facilities, each facility $i \in F$ with a capacity $u_i$, a set $C$ of clients, a metric $d$ over $F \cup C$ and an integer $k$. The goal is to open $k$ facilities in $F$ and connect the clients $C$ to the open facilities such that each facility $i$ is connected by at most $u_i$ clients, so as to minimize the total connection cost.

In this paper, we give the first constant approximation for \CKM, that only violates the cardinality constraint by a factor of $1+\eps$. This generalizes the result of [Li15], which only works for the uniform capacitated case. Moreover, the approximation ratio we obtain is $O\big(\frac{1}{\eps^2}\log\frac1\eps\big)$, which is an exponential improvement over the ratio of $\exp(O(1/\epsilon^2))$ in [Li15]. The natural LP relaxation for the problem, which almost all previous algorithms for \CKM are based on, has unbounded integrality gap even if $(2-\eps)k$ facilities can be opened. We introduce a novel configuration LP for the problem, that overcomes this integrality gap.
Thodoris Lykouris, Vasilis Syrgkanis and Eva Tardos. Learning and Efficiency in Games with Dynamic Population
Abstract: We study the quality of outcomes in repeated games when the population of players is dynamically changing, and where participants use learning algorithms to adapt to the dynamic environment. The price of anarchy was originally introduced to study the Nash equilibria of one-shot games. Many games studied in computer science, such as packet routing or ad-auctions, are played repeatedly. Given the computational hardness of Nash equilibria, an attractive alternative in repeated game settings is that players use no-regret learning algorithms. The price of total anarchy considers the quality of such learning outcomes, assuming a steady environment and player population, which is rarely the case in online settings.

In this paper we analyze efficiency of repeated games in dynamically changing environments. An important trait of learning behavior is its versatility to changing environments, assuming that the learning method used is adaptive, i.e., doesn't rely too heavily on experience from the distant past. We show that, in large classes of games, if players choose their strategies in a way that guarantees low adaptive regret, high social welfare is ensured, even under very frequent changes.

A main technical tool for our analysis is the existence of a solution to the welfare maximization problem that is both close to optimal and relatively stable over time. Such a solution serves as a benchmark in the efficiency analysis of learning outcomes. We show that such a stable and close to optimal solution exists for many problems, even in cases when the exact optimal solution can be very unstable. We further show that a sufficient condition on the existence of stable outcomes is the existence of a differentially private algorithm for the welfare maximization problem. Hence, we draw a strong connection between differential privacy and high efficiency of learning outcomes in frequently changing repeated games.

We demonstrate our techniques by focusing on two classes of games as examples: independent item auctions and congestion games. In both applications we show that adaptive learning guarantees high social welfare even with surprisingly high churn in the player population.
Chien-Chung Huang, Naonori Kakimura and Naoyuki Kamiyama. Exact and Approximation Algorithms for Weighted Matroid Intersection
Abstract: We present exact and approximation algorithms for the weighted matroid intersection problem. Our exact algorithms are faster than previous algorithms when the largest weight is relatively small. Our approximation algorithms deliver a (1-\epsilon)-approximate solution with running times significantly faster than those of known exact algorithms.

The core of our algorithms is a decomposition technique: we decompose the weighted version of the problem into a set of unweighted matroid intersection problems. The computational advantage of this approach is that we can then make use of fast unweighted matroid intersection algorithms as a black box for designing algorithms. To be precise, we show that we can find an exact solution by solving W unweighted matroid intersection problems, where W is the largest given weight. Furthermore, we can find a (1-\epsilon)-approximate solution by solving O(\epsilon^{-1} \log r) unweighted matroid intersection problems, where r is the smaller rank of the given two matroids.
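One standard rounding step in the spirit of such a decomposition is to group weights into powers of (1+\epsilon), which loses at most a (1+\epsilon) factor and leaves O(\log_{1+\epsilon} W) classes; note that the paper's actual scheme is finer, achieving a number of subproblems depending on r rather than W. A generic sketch:

```python
import math

def round_weights(weights, eps):
    """Group elements by rounding each positive weight w down to a power of
    (1 + eps); each class can then feed one unweighted subproblem.  This is
    a generic illustration, coarser than the paper's decomposition."""
    base = 1.0 + eps
    classes = {}
    for e, w in weights.items():
        k = int(math.log(w, base))           # w lies in [base^k, base^{k+1})
        classes.setdefault(k, []).append(e)
    return classes
```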

Our algorithms are simple and flexible: they can be adapted to specific matroid intersection problems, making use of specialized unweighted matroid intersection algorithms. In this paper, we show the following results.
- Given two general matroids, using Cunningham's algorithm, we solve the problem exactly in O(\tau Wnr^{1.5}) time and (1-\epsilon)-approximately in O(\tau \epsilon^{-1}nr^{1.5}\log r) time, where \tau is the time complexity of an independence oracle call.
- Given two graphic matroids, using the algorithm of Gabow and Xu, we solve the problem exactly in O(W \sqrt{r}n \log r) time and (1-\epsilon)-approximately in
O( \epsilon^{-1} \sqrt{r}n \log^2r) time.
- Given two linear matroids (in the form of two r-by-n matrices), using the algorithm of Cheung, Kwok and Lau, we solve the problem exactly in O(nr \log r_*+ W n r_*^{\omega-1}) time and (1-\epsilon)-approximately in O(nr \log r_*+ \epsilon^{-1} n r_*^{\omega-1} \log r_* ) time, where r_* is the maximum size of a common independent set.

Finally, we give a further application of our decomposition technique. We use it to solve efficiently the rank-maximal matroid intersection problem, a problem motivated by matching problems under preferences.
Tsvi Kopelowitz, Seth Pettie and Ely Porat. Higher Lower Bounds from the 3SUM Conjecture
Abstract: The 3SUM conjecture has proven to be a valuable tool for proving conditional lower bounds on
dynamic data structures and graph problems.
This line of work was initiated by Patrascu (STOC 2010)
who reduced 3SUM to an offline SetDisjointness problem.
However, the reduction introduced by Patrascu suffers from several inefficiencies,
making it difficult to obtain {\em tight} conditional lower bounds from the 3SUM conjecture.
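For background, the 3SUM conjecture asserts that 3SUM on n numbers requires essentially quadratic time; for reference, the classic two-pointer quadratic solver reads:

```python
def three_sum(a):
    """Return a triple of entries of `a` summing to zero, or None."""
    a = sorted(a)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return a[i], a[lo], a[hi]
            if s < 0:
                lo += 1   # sum too small: advance the left pointer
            else:
                hi -= 1   # sum too large: retreat the right pointer
    return None
```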

In this paper we address many of the deficiencies of Patrascu's framework. We give new and efficient
reductions from 3SUM to offline SetDisjointness and offline SetIntersection (the reporting version of SetDisjointness)
which lead to polynomially higher lower bounds on several problems.
Using our reductions, we are able to show the essential optimality of several algorithms, assuming the 3SUM conjecture.

- Chiba and Nishizeki's $O(m\alpha)$-time algorithm (SICOMP 1985) for enumerating all
triangles in a graph with arboricity/degeneracy $\alpha$ is essentially optimal, for any $\alpha$.
- Bj{\o}rklund, Pagh, Williams, and Zwick's algorithm (ICALP 2014) for listing $t$ triangles is essentially optimal
(assuming the matrix multiplication exponent is $\omega=2$).
- Any static data structure for SetDisjointness that answers queries in constant time must spend $\Omega(N^{2-o(1)})$
time in preprocessing, where $N$ is the size of the set system.

These statements were unattainable via Patrascu's reductions.

We also introduce several new reductions from 3SUM to pattern matching problems and dynamic graph problems.
Of particular interest are new conditional lower bounds for dynamic versions of Maximum Cardinality Matching,
which introduce a new technique for obtaining amortized lower bounds.
Jean-François Biasse and Fang Song. A polynomial time quantum algorithm for computing class groups and solving the principal ideal problem in arbitrary degree number fields
Abstract: This paper gives polynomial time quantum algorithms for computing the
ideal class group and solving the principal ideal problem (PIP) in
arbitrary classes of number fields under the Generalized Riemann
Hypothesis. Previously, these problems were known to have polynomial time
solutions in classes of fixed degree number fields by a result of Hallgren
[Hallgren STOC05]. More recently, Biasse and Song [BiasseSong 15] showed
that the PIP could be solved in classes of totally real fields of
arbitrary degree (a special case relevant to cryptography) in polynomial
time by adapting the unit group algorithm of Eisentrager et al. [EHKS
STOC14].

Computing the class group and solving the PIP are fundamental problems in
number theory. In particular, they are connected to many unproven
conjectures in both analytic and algebraic number theory. Our algorithms
also directly apply to the computation of relative class groups and unit
groups, to the computation of ray class groups and to the resolution
of norm equations. Moreover, the resolution of the PIP is a key component
in the cryptanalysis of cryptosystems based on the hardness of finding a
short generator in a principal ideal.

Unlike the previous methods, our strategy for computing the ideal class
group and solving the PIP consists of reducing these tasks to the
computation of S-unit groups. This is another fundamental problem in
number theory, and it is the first time that an algorithm for this task is
described without relying on oracles for the PIP and the ideal class
group. We show that S-units can be found in polynomial time on a quantum
computer by generalizing the recent work of Eisentrager et al. [EHKS
STOC14] on the computation of the unit group. Our main technical
contribution is an efficient quantum reduction from computing the S-units
to solving an instance of a generalized hidden subgroup problem (HSP) on R^{O(n)}, which
involves new results on the metrical properties of lattices. In addition,
we show how to convert the output into an exact compact representation,
which is convenient for further algebraic manipulations.
Nikhil Bansal, Marek Elias and Arindam Khan. Improved Approximation for Vector Bin Packing
Abstract: We study the d-dimensional vector packing problem, a classical generalization of the bin packing problem that is widely used in resource allocation and scheduling. Here we are given a set of d-dimensional vectors v_1, ... , v_n in [0,1]^d. The goal is to pack them into the least number of unit bins so that, in every bin and every dimension, the sum of the packed vectors is at most one.

For the 2-dimensional case we give an asymptotic approximation guarantee of $1+\ln(1.5)+\epsilon \approx (1.405+\epsilon)$, improving upon the previous bound of $1+\ln 2+\epsilon \approx (1.693+\epsilon)$. We also give an almost tight $(1.5+\epsilon)$ absolute approximation guarantee, improving upon the previous bound of 2 by Kellerer and Kotov. For the $d$-dimensional case, we get a $1.5+\ln (\frac{d}2)+ o_d(1)+\epsilon$ guarantee, improving upon the previous $(1+\ln d+\epsilon)$ guarantee due to Bansal, Caprara and Sviridenko based on the Round & Approx (R & A) framework. Here $(1+ \ln d)$ was a natural barrier as rounding-based algorithms cannot achieve better than a $d$ approximation. We get around this by exploiting various structural properties of (near-)optimal packings, using multi-objective multi-budget matching based techniques, and expanding the R & A framework to go beyond rounding-based algorithms. Along the way we also prove several results that could be of independent interest.
Venkatesan Guruswami and Euiwoong Lee. Nearly Optimal NP-Hardness of Unique Coverage
Abstract: The {\em Unique Coverage} problem, given a universe $V$ of elements and a collection $E$ of subsets of $V$, asks to find $S \subseteq V$ maximizing the number of $e \in E$ that intersect $S$ in {\em exactly one} element. When each $e \in E$ has cardinality at most $k$, it is also known as {\em $1$-in-$k$ Hitting Set}, and admits a simple $\Omega(\frac{1}{\log k})$-approximation algorithm.

For constant $k$, we prove that $1$-in-$k$ Hitting Set is NP-hard to approximate within a factor $O(\frac{1}{\log k})$. This improves the result of Guruswami and Zhou [SODA'11, ToC'12], who proved the same result assuming the Unique Games Conjecture. For Unique Coverage, we prove that it is hard to approximate within a factor $O(\frac{1}{\log^{1 - \epsilon} n})$ for any $\epsilon > 0$, unless NP admits quasipolynomial time algorithms. This improves the results of Demaine et al. [SODA'06, SICOMP'08], including their $\approx 1/\log^{1/3} n$ inapproximability factor which was proven under the Random 3SAT Hypothesis. Our simple proof combines ideas from two classical inapproximability results for Set Cover and Constraint Satisfaction Problem, made efficient by various derandomization methods based on bounded independence.
Jiyan Yang, Yinlam Chow, Christopher Ré and Michael Mahoney. Weighted SGD for $\ell_p$ Regression with Randomized Preconditioning
Abstract: In recent years, stochastic gradient descent (SGD) methods and randomized linear algebra (RLA) algorithms have been applied to many large-scale problems in machine learning and data analysis. SGD methods are easy to implement and applicable to a wide range of convex optimization problems. In contrast, RLA algorithms provide much stronger performance guarantees but are applicable to a narrower class of problems. In this paper, we aim to bridge the gap between these two methods in solving {\it constrained} overdetermined linear regression problems---e.g., $\ell_2$ and $\ell_1$ regression problems.
\begin{compactitem}
\item
We propose a hybrid algorithm named \pcsgd that uses RLA techniques for preconditioning and constructing an importance sampling distribution, and then performs an SGD-like iterative process with weighted sampling on the preconditioned system.
\item
By rewriting the $\ell_p$ regression problem into a stochastic optimization problem, we connect \pcsgd to several existing $\ell_p$ solvers including RLA methods with algorithmic leveraging (RLA for short).
\item
We prove that \pcsgd inherits faster convergence rates that only depend on the lower dimension of the linear system, while maintaining low computation complexity.
This convergence rate is superior to that of related SGD algorithms such as the weighted randomized Kaczmarz algorithm.
\item
Particularly, when solving $\ell_1$ regression, \pcsgd returns an approximate solution with $\epsilon$ relative error on the objective value in $\bigO(\log n \cdot \nnz(A) + \poly(d)/\epsilon^2)$ time.
This complexity is {\it uniformly} better than that of RLA methods in terms of both $\epsilon$ and $d$ when the problem is unconstrained.
In the presence of constraints, \pcsgd only has to solve a sequence of much simpler and smaller optimization problems over the same constraints, which in general can be more efficient than solving the constrained subproblem required in RLA.
\item
For $\ell_2$ regression, \pcsgd returns an approximate solution with $\epsilon$ relative error on the objective value and solution vector in prediction norm in $\bigO(\log n \cdot \nnz(A) + \poly(d) \log(1/\epsilon) /\epsilon)$ time.
We show that when solving unconstrained $\ell_2$ regression, this complexity is comparable to that of RLA and is asymptotically better than that of several state-of-the-art solvers in the regime where the desired accuracy $\epsilon$, high dimension $n$ and low dimension $d$ satisfy $d\geq 1/\epsilon$ and $n \geq d^2/\epsilon$.
\end{compactitem}
Finally, the effectiveness of such algorithms is illustrated numerically on both synthetic and real datasets, and the results are consistent with our theoretical findings and demonstrate that \pcsgd converges to a medium-precision solution, e.g., $\epsilon=10^{-3}$, more quickly.
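A hedged numpy sketch of the main idea for unconstrained $\ell_2$ regression follows; here the preconditioner comes from a full QR for brevity (the actual method uses a fast randomized sketch of $A$ instead), and the step size is illustrative.

```python
import numpy as np

def pcsgd_l2(A, b, iters=20000, eta=0.5, seed=0):
    """Preconditioned weighted-SGD sketch for min ||Ax - b||_2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    Q, R = np.linalg.qr(A)              # A = QR, so Q = A R^{-1} is preconditioned
    lev = np.einsum('ij,ij->i', Q, Q)   # row norms^2 of Q = leverage scores
    probs = lev / lev.sum()             # importance sampling distribution
    y = np.zeros(d)
    for _ in range(iters):
        i = rng.choice(n, p=probs)
        grad = (Q[i] @ y - b[i]) * Q[i] / (n * probs[i])  # unbiased estimate
        y -= eta * grad
    return np.linalg.solve(R, y)        # map back: x = R^{-1} y
```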
Chandra Chekuri and Vivek Madan. Simple and Fast Rounding Algorithms for Directed and Node-weighted Multiway Cut
Abstract: We study the multiway cut problem in {\em directed} graphs and one
of its special cases, the {\em node-weighted} multiway cut problem
in {\em undirected} graphs. In {\sc Directed Multiway Cut} (\DirMC)
the input is an edge-weighted directed graph $G=(V,E)$ and a set of
$k$ terminal nodes $\{s_1,s_2,\ldots,s_k\} \subseteq V$; the goal is
to find a min-weight subset of edges whose removal ensures that there
is no path from $s_i$ to $s_j$ for any $i \neq j$. In {\sc
Node-weighted Multiway Cut} (\NodeMC) the input is a node-weighted
undirected graph $G$ and a set of $k$ terminal nodes
$\{s_1,s_2,\ldots,s_k\} \subseteq V$; the goal is to remove a
min-weight subset of nodes to disconnect each pair of
terminals. \DirMC admits a $2$-approximation \cite{NaorZ01}
and \NodeMC admits a $2(1-\frac{1}{k})$-approximation
\cite{GargVY04}, both via rounding of LP relaxations.
Previous rounding algorithms for these problems, from nearly twenty
years ago, are based on careful rounding of an {\em
optimum} solution to an LP relaxation. This is particularly true
for \DirMC for which the rounding relies on a custom LP formulation
instead of the natural distance based LP relaxation \cite{NaorZ01}.

In this paper we describe extremely simple and near linear-time
rounding algorithms for \DirMC and \NodeMC via a natural distance
based LP relaxation. The dual of this relaxation is a special case
of the maximum multicommodity flow problem. Our
algorithms achieve the same bounds as before but have the
significant advantage that they can work with {\em any feasible}
solution to the relaxation. Consequently, in addition to obtaining
``book'' proofs of LP rounding for these two basic problems, we also
obtain significantly faster approximation algorithms by taking
advantage of known algorithms for computing near-optimal solutions
for maximum multicommodity flow problems. We also investigate
lower bounds for \DirMC when $k=2$ and in particular prove that
the integrality gap of the LP relaxation is $2$ even in directed
planar graphs.
Chandra Chekuri and Vivek Madan. Constant Factor Approximation for Subset Feedback Problems via a new LP relaxation
Abstract: We consider subset feedback edge and vertex set problems in
undirected graphs. The input to these problems is an undirected
graph $G=(V,E)$ and a set of $k$ terminal nodes $S =
\{s_1,s_2,\dots,s_k \} \subset V$. A cycle in $G$ is interesting if
it contains a terminal. In the Subset Feedback Edge Set problem
(\SFES) the edges are weighted and the goal is to remove a minimum
weight set of edges such that no interesting cycle remains. In the
Subset Feedback Vertex Set (\SFVS) the nodes are weighted and the
goal is to remove a minimum weight set of nodes such that no
interesting cycle remains.

A $2$-approximation is known for \SFES \cite{EvenNSZ00} and an
$8$-approximation is known for \SFVS \cite{EvenNZ00}. The algorithm
and analysis for \SFVS are very complicated. One reason for the difficulty
in addressing feedback problems in undirected graphs has been the lack of
LP relaxations with constant integrality gaps. The natural LP has
an $\Omega(\log n)$ integrality gap.

In this paper, we introduce new LP relaxations for \SFES and \SFVS
and show that the integrality gap of these LP relaxations is at most
$13$ for both \SFES and \SFVS. Our LP formulation and analysis are
intuitive and simple to understand. We believe that related
formulations and ideas would lead to an approximation ratio better
than $8$ for \SFVS.
Vladimir Braverman, Harry Lang, Keith Levin and Morteza Monemizadeh. Clustering Problems on Sliding Windows
Abstract: We explore clustering problems in the streaming sliding window model in both general metric spaces and Euclidean space. We present the first polylogarithmic space $O(1)$-approximation to the metric $k$-median and metric $k$-means problems in the sliding window model, answering the main open problem posed by
Babcock, Datar, Motwani and O'Callaghan~\cite{BDMO03}, which has remained unanswered for over a decade. Our algorithm uses $O(k^3 \log^6 n)$ space
and $\poly(k, \log n)$ update time. This is an exponential improvement on the space required by the technique of Babcock et al. We introduce a data structure that extends smooth histograms, as introduced by Braverman and Ostrovsky~\cite{BO06}, to operate on a broader class of functions. In particular, we show that using only polylogarithmic space we can maintain a summary of the current window from which we can construct an $O(1)$-approximate clustering solution.

Merge-and-reduce is a generic method in computational geometry for adapting offline algorithms to the insertion-only streaming model. Several well-known coreset constructions can be maintained in the insertion-only streaming model using this method, including well-known coreset techniques for the $k$-median and $k$-means problems in both low- and high-dimensional Euclidean spaces~\cite{HPM04,Ch09}. Previous work \cite{FS05} has adapted coreset techniques to the insertion-deletion model, but translating them to the sliding window model has remained a challenge. We give the first algorithm that, given an insertion-only streaming coreset of space $s$ (maintained using the merge-and-reduce method), maintains this coreset in the sliding window model using $O(s^2\epsilon^{-2}\log n)$ space.

For clustering problems, our results constitute the first significant step towards resolving problem number 20 from the List of Open Problems in Sublinear Algorithms~\cite{sublinear_open_20}.
Damian Straszak and Nisheeth Vishnoi. Natural Algorithms for Flow Problems
Abstract: In the last few years, there has been significant interest in the computational abilities of Physarum polycephalum (a slime mold). This interest arose from a remarkable experiment showing that this organism can compute shortest paths in a maze (NTY '00). Subsequently, the workings of Physarum were mathematically modeled as a dynamical system, and algorithms inspired by this model were proposed to solve several graph problems, shortest paths and flows among them. Indeed, computer scientists have initiated a rigorous study of these dynamics; a first step was taken by BMV '12 and BBDKM '13, who proved that the Physarum dynamics for the shortest path problem are efficient (when edge lengths are polynomially bounded). In this paper, we take this further: we prove that the discrete-time Physarum dynamics can also efficiently solve uncapacitated min-cost flow problems on undirected and directed graphs, problems that are significant generalizations of shortest path. This raises the tantalizing possibility that nature, via evolution, developed algorithms that efficiently solve some of the most complex computational problems, about a billion years before we did.
Anja Becker, Leo Ducas, Nicolas Gama and Thijs Laarhoven. New directions in nearest neighbor searching with applications to lattice sieving
Abstract: To solve the approximate nearest-neighbor search problem (NNS) on the sphere, we propose a method using locality-sensitive filters (LSF), with the property that nearby vectors have a higher probability of surviving the same filter than vectors which are far apart. We instantiate the filters using spherical caps of height 1 - A, where a vector survives a filter if it is contained in this spherical cap, and where ideally each filter has an independent random direction.
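A minimal sketch of such filters with independent random directions (the inefficient baseline; the structured-code instantiation that makes finding relevant filters efficient is the paper's contribution and is not reproduced here):

```python
import numpy as np

def make_filters(t, dim, rng=np.random.default_rng(0)):
    """t random unit directions, one per filter."""
    U = rng.standard_normal((t, dim))
    return U / np.linalg.norm(U, axis=1, keepdims=True)

def surviving_filters(v, U, A):
    """A unit vector v survives filter u iff <v, u> >= A, i.e. v lies in
    the spherical cap of height 1 - A around u."""
    return np.nonzero(U @ v >= A)[0]

def build_index(points, U, A):
    buckets = {}
    for idx, v in enumerate(points):
        for f in surviving_filters(v, U, A):
            buckets.setdefault(f, []).append(idx)
    return buckets

def query(q, U, A, buckets, points):
    cand = {i for f in surviving_filters(q, U, A) for i in buckets.get(f, [])}
    return max(cand, key=lambda i: points[i] @ q, default=None)  # best cosine
```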

For small A, these filters are very similar to the spherical locality-sensitive hash family (LSH) previously studied by Andoni et al. For A > 0, however, we show that we achieve superior performance, provided we have access to an efficient oracle for finding relevant filters. Whereas LSH is limited by a performance parameter \rho > 1/(2c^2 - 1) for approximate NNS with approximation factor c, with spherical LSF we achieve values \rho < 1/(2c^2 - 1), with the exact exponent \rho depending on the density of the data set. For sparse data, we obtain \rho = 1/(2c^2 - 1).

To instantiate the above oracle, we replace the independent filters by filters from certain structured codes. We show that the additional structure in these codes does not significantly change the collision probabilities, while allowing efficient decoding for identifying relevant filters.

We finally apply this method to lattice sieving algorithms for solving the shortest vector problem (SVP) on lattices, and show that it leads to a heuristic time complexity of (3/2)^(n/2) ~ 2^0.292n for solving SVP in dimension n. This improves upon the current best algorithm for solving SVP which uses spherical LSH and runs in time 2^0.298n. Experiments with the GaussSieve validate the claimed speedup and show that this method may be practical as well, as the polynomial overhead is small.
Constantinos Daskalakis and Sebastien Roch. Species Trees from Gene Trees Despite a High Rate of Lateral Genetic Transfer: A Tight Bound
Abstract: Reconstructing the tree of life from molecular sequences is a fundamental problem in computational biology. Modern data sets often contain a large number of genes, which can complicate the reconstruction problem because different genes may undergo different evolutionary histories. This is the case in particular in the presence of lateral genetic transfer (LGT), whereby a gene is inherited from a distant species rather than an immediate ancestor. Such an event produces a gene tree which is distinct from (but related to) the species phylogeny.

In previous works, a natural stochastic model of LGT was introduced and it was shown that the species phylogeny can be reconstructed from gene trees despite surprisingly high rates of LGT. Both lower and upper bounds on this rate were obtained, but a large gap remained. Here we close this gap, up to a constant. Specifically, we show that the species phylogeny can be reconstructed perfectly even when each edge of the tree has a constant probability of being the location of an LGT event. Our new reconstruction algorithm builds the tree recursively from the leaves. We also provide a matching lower bound.
Yann Disser, Jan Hackfeld and Max Klimm. Undirected Graph Exploration with Θ(log log n) Pebbles
Abstract: We consider the fundamental problem of exploring an undirected and initially unknown graph by an agent with little memory. The vertices of the graph are unlabeled and the edges incident to a vertex have locally distinct labels. In this setting, it is known that Θ(log n) bits of memory are necessary and sufficient to explore any graph of size at most n. We show that this memory requirement can be decreased significantly by making a part of the memory distributable in the form of pebbles. A pebble is a device that can be dropped to mark a vertex and can be collected when the agent returns to the vertex. We show that Θ(log log n) pebbles are both necessary and sufficient to explore any graph of size at most n and bounded degree.
Paweł Brach, Marek Cygan, Jakub Łącki and Piotr Sankowski. Algorithmic Complexity of Power Law Networks
Abstract: It has been experimentally observed that the majority of real-world networks are scale-free and follow a power law degree distribution.
The aim of this paper is to study the algorithmic complexity of such ``typical'' networks. The contribution of this work is twofold.

First, we define a deterministic condition for checking whether a graph has a power law degree distribution and experimentally validate it on real-world networks.
This definition allows us to derive interesting properties of power law networks. We observe that for exponents of the degree distribution in the range $[1,2]$, such networks exhibit the double power law phenomenon that has been observed for several real-world networks. Our observation indicates that this phenomenon could be explained by pure graph-theoretic properties alone.

The second aim of our work is to give a novel theoretical explanation why many algorithms run faster on real-world data than what is predicted by algorithmic worst-case analysis.
We show how to exploit the power law degree distribution to design faster algorithms for a number of classical P-time problems including transitive closure, maximum matching, determinant, PageRank and matrix inverse.
Moreover, we deal with the problems of counting triangles and finding maximum clique.
Previously, it had only been shown that these problems can be solved very efficiently on power law graphs when these graphs are random, e.g., drawn at random from some distribution. However, it is unclear how to relate such a theoretical analysis to real-world graphs, which are fixed.
Instead, we show that the randomness assumption can be replaced with a simple condition on the degrees of adjacent vertices, which can be used to obtain similar results.
Again, we experimentally validate that many real-world graphs satisfy our property.
As a result, in some range of power law exponents, we are able to solve the maximum clique problem in polynomial time, although in general power law networks the problem is NP-complete.

In contrast to previous average-case analyses, we believe that this is the first ``waterproof'' argument that explains why many real-world networks are easier. Moreover, an interesting aspect of this study is the existence of structure-oblivious algorithms, i.e., algorithms that run faster on power law networks without explicit knowledge of this fact or of the parameters of the degree distribution, e.g., algorithms for maximum clique or triangle counting.
Søren Dahlgaard, Mathias Bæk Tejs Knudsen, Eva Rotenberg and Mikkel Thorup. The Power of Two Choices with Simple Tabulation
Abstract: The power of two choices is a classic paradigm for load balancing when assigning m balls to n bins. When placing a ball, we pick two bins according to two hash functions h0 and h1, and place the ball in the least loaded bin. Assuming fully random hash functions, when m = O(n), Azar et al. [STOC’94] proved that the maximum load is lg lg n + O(1) with high probability. No such bound was known with a hash function implementable in constant time.

In this paper, we investigate the power of two choices when the hash functions h0 and h1 are implemented with simple tabulation, which is a very efficient hash function evaluated in constant time. Following their analysis of Cuckoo hashing [J.ACM’12], Pătraşcu and Thorup claimed that the expected maximum load with simple tabulation is O(lg lg n). This did not include any high probability guarantee, so the load balancing could not yet be fully trusted.

Here we show that with simple tabulation, the maximum load is O(lg lg n) with high probability, giving the first constant time hash function with this guarantee. We also give a concrete example where, unlike with fully random hashing, the maximum load is not bounded by lg lg n + O(1), or even (1 + o(1)) lg lg n with high probability. Finally we show that the expected maximum load is lg lg n + O(1), just like with fully random hashing.
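For concreteness, a small sketch combining simple tabulation (XOR of per-character table lookups) with the two-choice placement rule; key and table sizes here are illustrative.

```python
import random

C, BITS = 4, 8                      # 32-bit keys viewed as 4 characters of 8 bits
random.seed(0)
TABLES = [[[random.getrandbits(30) for _ in range(1 << BITS)]
           for _ in range(C)] for _ in range(2)]  # one table set per h0, h1

def simple_tab(key, which, n_bins):
    """Simple tabulation: XOR of independent random table lookups,
    one per character of the key."""
    h = 0
    for c in range(C):
        char = (key >> (BITS * c)) & ((1 << BITS) - 1)
        h ^= TABLES[which][c][char]
    return h % n_bins

def place_balls(keys, n_bins):
    """Two-choice placement: each ball goes to the lighter of its two bins."""
    load = [0] * n_bins
    for k in keys:
        b0, b1 = simple_tab(k, 0, n_bins), simple_tab(k, 1, n_bins)
        load[min(b0, b1, key=lambda b: load[b])] += 1
    return max(load)
```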
Pratik Ghosal, Adam Kunysz and Katarzyna Paluch. Characterisation of Strongly Stable Matchings
Abstract: An instance of the strongly stable matching problem (SSMP) is an undirected bipartite graph $G=(A \cup B, E)$, in which the adjacency list of each vertex is a linearly ordered list of ties, i.e., subsets of vertices equally good for that vertex. Ties are disjoint and may consist of a single vertex. A matching $M$ is a set of vertex-disjoint edges. An edge $(x,y) \in E \setminus M$ is a {\em blocking edge} for $M$ if $x$ is either unmatched or strictly prefers $y$ to its current partner in $M$, and $y$ is either unmatched, strictly prefers $x$ to its current partner in $M$, or is indifferent between them. A matching is {\em strongly stable} if there is no blocking edge with respect to it. We present a characterisation of the set of all strongly stable matchings, thus solving an open problem stated in the book by Gusfield and Irving \cite{GI}. It has previously been shown that strongly stable matchings form a distributive lattice \cite{Manlove}, and although the number of strongly stable matchings can be exponential in the number of vertices, we show that there exists a partial order with $O(m)$ elements representing all strongly stable matchings, where $m$ denotes the number of edges in the graph. We give two algorithms that construct two such representations: one in $O(nm^2)$ time and the other in $O(nm)$ time, where $n$ denotes the number of vertices in the graph. Note that the construction of the second representation has the same time complexity as that of computing a single strongly stable matching.
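The definition above translates directly into a checker for strong stability; a small sketch, with ranks encoding the ordered ties (lower rank is better, and tied vertices share a rank):

```python
def is_strongly_stable(edges, rank, M):
    """edges: list of pairs (x, y); rank[v][u]: position of u in v's ordered
    list of ties; M: dict mapping each matched vertex to its partner."""
    def strictly_prefers(v, u):
        return v not in M or rank[v][u] < rank[v][M[v]]
    def weakly_prefers(v, u):       # strictly prefers, or is indifferent
        return v not in M or rank[v][u] <= rank[v][M[v]]
    for (x, y) in edges:
        if M.get(x) == y:
            continue                # edges of the matching cannot block
        if strictly_prefers(x, y) and weakly_prefers(y, x):
            return False            # (x, y) is a blocking edge
    return True
```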
Gwenaël Joret, Piotr Micek and Veit Wiechert. Sparsity and dimension
Abstract: We prove that posets of bounded height whose cover graphs belong to a fixed class with bounded expansion have bounded dimension. Bounded expansion, introduced by Ne\v{s}et\v{r}il and Ossona de Mendez as a model for sparsity in graphs, is a property that is naturally satisfied by a wide range of graph classes, from graph structure theory (graphs excluding a minor or a topological minor) to graph drawing (e.g.\ graphs with constant book thickness). Therefore, our theorem generalizes a number of results including the most recent one for posets of bounded height with cover graphs excluding a fixed graph as a topological minor (Walczak, SODA 2015). We also show that the result is in a sense best possible, as it does not extend to nowhere dense classes; in fact, it already fails for cover graphs with locally bounded treewidth.
Yu Yokoi and Satoru Iwata. Finding Stable Allocations in Polymatroid Intersection
Abstract: The stable matching model of Gale and Shapley (1962) has been generalized in various directions such as matroid kernels due to Fleiner (2001) and stable allocations in bipartite networks due to Baïou and Balinski (2002). Unifying these generalizations, we introduce a concept of stable allocations in polymatroid intersection.

Our framework includes both integer- and real-variable versions. The integer-variable version corresponds to a special case of the discrete-concave function model due to Eguchi, Fujishige, and Tamura (2003), who established the existence of a stable allocation by showing that a simple extension of the deferred acceptance algorithm of Gale and Shapley finds a stable allocation in pseudo-polynomial time. It has been open to develop a polynomial time algorithm even for our special case.

In this paper, we present the first strongly polynomial algorithm for finding a stable allocation in polymatroid intersection. To achieve this, we utilize the augmenting path technique for polymatroid intersection. In each iteration, the algorithm searches for an augmenting path by simulating a chain of proposes and rejects in the deferred acceptance algorithm. The running time of our algorithm is $O(n^3\gamma)$, where $n$ and $\gamma$ respectively denote the cardinality of the ground set and the time for computing the saturation and exchange capacities. This is as fast as the best known algorithm for the polymatroid intersection problem.
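For orientation, the deferred acceptance algorithm whose propose/reject chains the augmenting-path search simulates is, in its classical one-to-one form, the following (a textbook sketch assuming complete strict preference lists and equal-size sides, not the paper's polymatroid generalization):
\begin{verbatim}
from collections import deque

def deferred_acceptance(men_pref, women_pref):
    # men_pref[m]: list of women in decreasing preference
    # women_pref[w]: dict mapping each man to his rank (lower is better)
    next_idx = {m: 0 for m in men_pref}
    match = {}                              # woman -> tentatively accepted man
    free = deque(men_pref)
    while free:
        m = free.popleft()
        w = men_pref[m][next_idx[m]]        # m proposes to his best untried woman
        next_idx[m] += 1
        if w not in match:
            match[w] = m
        elif women_pref[w][m] < women_pref[w][match[w]]:
            free.append(match[w])           # w rejects her current partner
            match[w] = m
        else:
            free.append(m)                  # w rejects the proposal
    return match
\end{verbatim}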
Ran Duan, Jugal Garg and Kurt Mehlhorn. An Improved Combinatorial Polynomial Algorithm for the Linear Arrow-Debreu Market
Abstract: We present an improved combinatorial algorithm for the computation of equilibrium prices in the linear Arrow-Debreu model. For a market with $n$ agents and integral utilities bounded by $U$, the algorithm runs in $O(n^7 \log^3(nU))$ time. This improves upon the previously best algorithm of Ye by a factor of $\tilde{\Omega}(n)$. The algorithm refines the algorithm described by Duan and Mehlhorn and improves upon it by a factor of $\tilde{\Omega}(n^3)$. The improvement comes from a better understanding of the iterative price adjustment process, an improved balanced flow computation for nondegenerate instances, and a novel perturbation technique for achieving nondegeneracy.
Marek Cygan, Marcin Mucha, Piotr Sankowski and Qiang Zhang. Online Pricing with Impatient Bidders
Abstract: In this paper we consider the following online pricing problem. An auctioneer is selling identical items in unlimited supply, and each bidder from a given set is interested in purchasing a single copy of the item. Each bidder is characterized by a budget and a time interval during which he considers buying the item. Bidders are willing to buy the item at the earliest time that lies within their time interval and at which the price is within their budget. We call such bidders impatient bidders. The problem is considered in the online setting, i.e., each bidder arrives at the start of his time interval, and only then does an algorithm learn of his existence and his budget. The goal of the seller is to set the price of the item over time so that the total revenue is maximized.

The impatient bidders problem was introduced by Bansal et al. [TALG’10], where the authors considered the online setting in which additionally the algorithm is given the end of the time interval of each bidder when he arrives. Bansal et al. showed that the competitive ratio for deterministic algorithms is lower bounded by \Omega(\sqrt{log h}) and upper bounded by O(log h), whereas in the randomized setting the gap was between \Omega(\sqrt{log log h / log log log h}) and O(log log h).

We study two versions of the impatient bidders problem: the online setting of Bansal et al., and a setting that is more restrictive from the algorithmic perspective, in which the end of the time interval of each bidder remains unknown until it is hit. We prove tight bounds for both settings, and surprisingly, in both cases the optimum competitive ratios are the same. In particular, we prove that the competitive ratio of an optimum deterministic algorithm is \Theta(log h / log log h), whereas for randomized algorithms it is \Theta(log log h).

Additionally, we consider the Veblen bidders problem, where each bidder is only interested in buying the good at a given price level. If the price drops below that level at any point within the bidder's time interval, but before the purchase, the bidder loses interest in buying the good. The Veblen bidders problem plays a twofold role in our paper: not only is it interesting in itself, but it is also the core idea in our proofs for the impatient bidders problem.
Amir Abboud and Greg Bodwin. Error Amplification for Pairwise Spanner Lower Bounds
Abstract: A {\em pairwise spanner} of a graph $G = (V, E)$ and a ``pair set'' $P \subseteq V \times V$ is a subgraph $H$ that preserves all pairwise distances in $P$, up to some additive error term $+k$. When $k=0$ the object is called a {\em pairwise distance preserver}.
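Both objects are straightforward to verify, which may help fix the definition. A Python sketch for unweighted graphs (BFS distances; a connected $G$ assumed):
\begin{verbatim}
from collections import deque

def bfs_dist(adj, s):
    # single-source shortest-path distances in an unweighted graph
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_pairwise_spanner(adj_G, adj_H, P, k):
    # H preserves every pair in P up to additive error +k
    return all(bfs_dist(adj_H, u).get(v, float('inf'))
               <= bfs_dist(adj_G, u)[v] + k for (u, v) in P)
\end{verbatim}
With $k=0$ this checks a pairwise distance preserver.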

A large and growing body of work has considered upper bounds for these objects, but lower bounds have been elusive. The only known lower bound results are (1) Coppersmith \& Elkin (SODA'05) against preservers (zero error), and (2) considerably weaker bounds by Woodruff (FOCS'06) against spanners ($+k$ error).

Our main result is an amplification theorem: we prove that lower bounds against pairwise {\em distance preservers} imply lower bounds against pairwise \emph{spanners}. In other words, to prove lower bounds against {\em any} constant stretch spanners, it is enough to consider only subgraphs that are not allowed to stretch pairwise distances at all!

We apply this theorem to obtain drastically improved bounds. Some of these include:
\begin{itemize}
\item Linear size pairwise spanners with up to $+(2k-1)$ error cannot span $|P|=\omega(n^{(1+k)/(3+k)})$ pairs; this is a large improvement over Woodruff's $|P|=\omega(n^{2-2/k})$.
\item $|H| = \Omega(n^{1 + 1/k})$ edges are required for a $+(2k-1)$ spanner of $|P| = \Omega(n^{1 + 1/k})$ pairs; this is another large improvement over Woodruff's $|P| = \Omega(n^2)$.
\item The first tight bounds for pairwise spanners: for $+2$ error and $P=\Theta(n^{3/2})$ we show that $\Theta(n^{3/2})$ edges are necessary and sufficient.
\end{itemize}

We also show analogous lower bounds against subset spanners (where $P = S \times S$ for some node subset $S$) and $D$-spanners (where $P$ is the set of pairs whose distance exceeds some threshold $D$).
Greg Bodwin and Virginia Vassilevska Williams. Better Distance Preservers and Additive Spanners
Abstract: We make improvements to the upper bounds on several popular types of distance preserving graph sketches. These sketches are all various restrictions of the {\em additive pairwise spanner} problem, in which one is given an undirected unweighted graph $G$, a set of node pairs $P$, and an error allowance $+\beta$, and one must construct a sparse subgraph $H$ satisfying $\delta_H(u, v) \le \delta_G(u, v) + \beta$ for all $(u, v) \in P$.

The first part of our paper concerns {\em pairwise distance preservers}, which make the restriction $\beta=0$ (i.e. distances must be preserved {\em exactly}). Our main result here is an upper bound of $|H| = O(n^{2/3}|P|^{2/3} + n|P|^{1/3})$ when $G$ is undirected and unweighted. This improves on existing bounds whenever $|P| = \omega(n^{3/4})$, and it is the first such improvement in the last ten years.

We then devise a new application of distance preservers to graph clustering algorithms, and we apply this algorithm to {\em subset spanners}, which require $P = S \times S$ for some node subset $S$, and {\em (standard) spanners}, which require $P = V \times V$. For both of these objects, our construction generalizes the best known bounds when the error allowance is constant, and we obtain the strongest polynomial error/sparsity tradeoff that has yet been reported (in fact, for subset spanners, ours is the {\em first} nontrivial construction that enjoys improved sparsity from a polynomial error allowance).

We leave open a conjecture that $O(n^{2/3}|P|^{2/3} + n)$ pairwise distance preservers are possible for undirected unweighted graphs. Resolving this conjecture in the affirmative would improve and simplify our upper bounds for all the graph sketches mentioned above.
Subhash Khot, Rishi Saket and Devanathan Thiruvenkatachari. Hardness of Satisfiable CSPs and Hypergraph Coloring via efficient PCPs with Super-position Complexity
Abstract: In this work we construct a PCP Outer Verifier with a super-position complexity
gap to prove the following hardness results:

* It is quasi-NP-hard to satisfy 31/32 + 2^{-(log n)^{1/4 - o(1)}} fraction of the
clauses of a satisfiable Max-5SAT instance on n variables.

* It is quasi-NP-hard to color an n-vertex 2-colorable 6-uniform hypergraph using
2^{(log n)^{1/4 - o(1)}} colors.

The hardness result for Max-5SAT gives the best bound on the ``error'' in the
inapproximability threshold for satisfiable CSPs, improving the previous
2^{-2^{Omega(sqrt{log log n})}} error in the threshold for Max-4SAT proved by
Dinur and Guruswami. Our second result is an improvement over the hardness of
hypergraph coloring for 2-colorable 8-uniform hypergraphs (with the same
hardness factor) obtained by Huang, using a new construction of an Outer
Verifier with super-position complexity.

Huang's Outer Verifier substantially simplified and improved upon the first such
construction by Khot and Saket who introduced the notion of super-position
complexity and gave a hardness factor of 2^{(log n)^{1/20 - o(1)}} for
hypergraph coloring. The Outer Verifier of Huang is obtained by (i) a direct
reduction from standard Label Cover to a system of constant-width quadratic
equations over F_2 with an approximation as well as a super-position complexity
gap, and (ii) amplifying the approximation gap by Parallel Repetition while
preserving the super-position complexity bound.

We enhance this construction of the Outer Verifier by using a modified version
of Parallel Repetition which yields an additional projection bound, analogous to
the one proved by Håstad for the standard Label Cover. This guarantee allows us
to use more efficient Inner Verifiers to prove our hardness results.
Yair Bartal, Arnold Filtser and Ofer Neiman. Constructing Almost Minimum Spanning Trees with Constant Average Distortion
Abstract: Minimum Spanning Trees of weighted graphs are fundamental objects in numerous applications. In particular, in distributed networks, the minimum spanning tree of the network is often used to route messages between network nodes. Unfortunately, while being most efficient in the total cost of connecting all nodes, minimum spanning trees fail miserably at the desired property of approximately preserving distances between pairs.
While known lower bounds exclude the possibility of the worst case distortion of a tree being small, it was shown in \cite{ABN07} that there exists a spanning tree with constant average distortion. Yet, the weight of such a tree may be significantly larger than that of the MST.
In this paper we show that any weighted undirected graph admits a {\em spanning tree} whose weight is at most $(1+\delta)$ times that of the MST, providing {\em constant average distortion} $O(1/\delta^2)$.

The constant average distortion bound is implied by a stronger property of {\em scaling distortion}, i.e., improved distortion for smaller fractions of the pairs.
The result is achieved by first showing the existence of a low weight {\em spanner} with small {\em prioritized distortion}, an even stronger property allowing to prioritize the nodes whose associated distortions will be improved. We show that prioritized distortion implies scaling distortion via a general transformation, which may be of independent interest.
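For orientation, average and scaling distortion of a spanning tree $T$ of $G=(V,E)$ can be formalized as follows (one standard formalization; notation ours):
\[
\mathrm{avgdist}(T) \;=\; \binom{n}{2}^{-1} \sum_{\{u,v\}\subseteq V} \frac{d_T(u,v)}{d_G(u,v)},
\]
and $T$ has scaling distortion $f(\cdot)$ if, for every $\epsilon\in(0,1)$, all but an $\epsilon$ fraction of the pairs $\{u,v\}$ satisfy $d_T(u,v)/d_G(u,v) \le f(\epsilon)$.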
Lingxiao Huang, Pinyan Lu and Chihao Zhang. Canonical Paths for MCMC: from Art to Science
Abstract: The Markov Chain Monte Carlo (MCMC) method is a widely used algorithm design scheme with many applications. To make efficient use of this method, the key step is to prove that the Markov chain is rapidly mixing. Canonical paths is one of the two main tools for proving rapid mixing. However, there are far fewer success stories for canonical paths than for the other main tool, coupling. The main reason is that there has been no systematic approach or general recipe for designing canonical paths. Building on a previous exploration by McQuillan, we develop a general theory for designing canonical paths for MCMC: we reduce the task of designing canonical paths to solving a system of linear equations, which can even be carried out automatically by a machine.

Making use of this general approach, we obtain fully polynomial-time randomized approximation schemes (FPRAS) for counting the number of $b$-matchings with $b\leq 7$ and $b$-edge-covers with $b\leq 2$. These are natural generalizations of matchings and edge covers of graphs. No polynomial-time approximation algorithm was previously known for these problems.
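To illustrate the kind of chain such canonical-path arguments analyze, here is a lazy edge-toggle (Glauber-type) dynamics on $b$-matchings in Python; it is an illustrative example, not the paper's specific chain:
\begin{verbatim}
import random

def sample_b_matching(edges, num_vertices, b, steps, rng=random.Random(0)):
    current, deg = set(), [0] * num_vertices
    for _ in range(steps):
        if rng.random() < 0.5:
            continue                        # lazy step ensures aperiodicity
        u, v = e = rng.choice(edges)
        if e in current:                    # toggle the edge off
            current.remove(e); deg[u] -= 1; deg[v] -= 1
        elif deg[u] < b and deg[v] < b:     # toggle on, if degrees permit
            current.add(e); deg[u] += 1; deg[v] += 1
    return current          # approximately uniform once the chain has mixed

# Example: ordinary matchings (b = 1) of a 4-cycle
print(sample_b_matching([(0, 1), (1, 2), (2, 3), (3, 0)], 4, 1, 10000))
\end{verbatim}
The chain is symmetric (each move and its reversal have the same proposal probability), so its stationary distribution is uniform over $b$-matchings; canonical-path arguments are what bound its mixing time.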
Christos Kalaitzis. An Improved Approximation Guarantee for the Maximum Budgeted Allocation Problem
Abstract: We study the Maximum Budgeted Allocation problem,
which is the problem of assigning indivisible items
to players with budget constraints. In its most general
form, an instance of the MBA problem might include many
different prices for the same item among different
players, and different budget constraints for every
player. So far, the best approximation algorithms we know
for the MBA problem achieve a $3/4$-approximation ratio,
and employ a natural LP relaxation, called the Assignment-LP.
In this paper, we give an algorithm for MBA, and prove that
it achieves a $(3/4+c)$-approximation ratio, for some constant
$c>0$. This algorithm works by rounding solutions to an
LP called the Configuration-LP, therefore also showing that
the Configuration-LP is strictly stronger than the
Assignment-LP (for which we know that the integrality gap
is $3/4$) for the MBA problem.
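For orientation, the Assignment-LP mentioned above is the following standard relaxation (notation ours: $p_{ij}$ is the price player $i$ offers for item $j$, $B_i$ is $i$'s budget, and $x_{ij}$ is the fractional assignment):
\[
\max \sum_{i,j} p_{ij}\,x_{ij}
\quad\text{s.t.}\quad
\sum_{j} p_{ij}\,x_{ij} \le B_i \;\;\forall i,
\qquad
\sum_{i} x_{ij} \le 1 \;\;\forall j,
\qquad
x_{ij} \ge 0.
\]
The Configuration-LP instead has a variable for each player and each feasible set of items that player may receive, and the paper shows it is strictly stronger.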
Robert Hildebrand, Robert Weismantel and Kevin Zemmer. An FPTAS for Minimizing Indefinite Quadratic Forms over Integers in Polyhedra
Abstract: We present a generic approach that allows us to develop a fully polynomial-time approximation scheme (FPTAS) for minimizing nonlinear functions over the integer points in a rational polyhedron in fixed dimension. The approach may be regarded as a link between the subdivision strategy of Papadimitriou and Yannakakis (2000) and real algebraic certificates for the positivity of polynomials. Our general approach is widely applicable. We apply it, for instance, to the Motzkin polynomial and to indefinite quadratic forms x^T Q x in a fixed number of variables, where Q has at most one positive, or at most one negative, eigenvalue. In dimension three, this leads to an FPTAS for general Q.
Klaus Jansen, Marten Maack and Malin Rau. Approximation schemes for machine scheduling with resource (in-)dependent processing times
Abstract: We consider two related scheduling problems: resource constrained scheduling on identical parallel machines and a generalization with resource dependent processing times.
In both problems, jobs require a certain amount of an additional resource and have to be scheduled on machines minimizing the makespan, while at every point in time a given resource capacity is not exceeded.
In the first variant of the problem the processing times and resource amounts are fixed, while in the second the former depends on the latter.
We present asymptotic fully polynomial approximation schemes (AFPTAS) for the problems:
For any $\eps>0$ a schedule of length at most $(1+\eps)$ times the optimum plus an additive term of $\mathcal{O}(1/\eps^2)$ is provided, and the running time is polynomially bounded in $1/\eps$ and the input length.
Up to now only approximation algorithms with constant approximation ratios were known.
Riccardo Colini-Baldeschi, Bart de Keijzer, Stefano Leonardi and Stefano Turchetta. Approximately Efficient Double Auctions with Strong Budget Balance
Abstract: Mechanism design for one-sided markets is an area of extensive research in economics and, for more than a decade, in computer science as well.
Two-sided markets, on the other hand, have not received the same attention despite the numerous applications to web advertisement, stock exchange, and frequency spectrum allocation. This work studies double auctions, in which unit-demand buyers and unit-supply sellers act strategically.

An ideal goal in double auction design is to maximize the social welfare of buyers and sellers with individually rational (IR),
incentive compatible (IC), and strongly budget-balanced (SBB) mechanisms. The first two properties are standard.
SBB requires that the payments charged to the buyers are entirely handed to the sellers. This property is crucial in all contexts
that do not allow the auctioneer to retain a share of the buyers' payments or to subsidize the market.

Unfortunately, this goal is known to be unachievable even for the special case of bilateral trade, where there is only one
buyer and one seller. Therefore, in subsequent papers, meaningful trade-offs between these requirements have been investigated.

Our main contribution is the first IR, IC and SBB mechanism that provides an O(1)-approximation to the optimal social welfare. This result holds for any number of buyers and sellers with arbitrary, independent distributions. Moreover, our result continues to hold when there is an additional matroid constraint on the sets of buyers who may get allocated an item. To prove our main result, we devise an extension of sequential posted price mechanisms to two-sided markets. In addition to this, we improve the best-known approximation bounds for the bilateral trade problem.
Sepehr Assadi, Sanjeev Khanna, Yang Li and Grigory Yaroslavtsev. Maximum Matchings in Dynamic Graph Streams and the Simultaneous Communication Model
Abstract: We study the problem of finding an approximate maximum matching in two closely related computational models, namely, the dynamic graph streaming model and the simultaneous multi-party communication model. In the dynamic graph streaming model, the input graph is revealed as a stream of edge insertions and deletions, and the goal is to design a small space algorithm to approximate the maximum matching. In the simultaneous model, the input graph is partitioned across $k$ players, and the goal is to design a protocol where the $k$ players simultaneously send a small-size message to a coordinator, and the coordinator computes an approximate matching.

Dynamic graph streams: We resolve the space complexity of single-pass turnstile streaming
algorithms for approximating matchings by showing that for any $\eps > 0$, $\Theta(n^{2 - 3\eps})$ space is both sufficient and necessary (up to polylogarithmic factors) to compute an $n^{\eps}$-approximate matching; here $n$ denotes the number of vertices in the input graph.

The simultaneous communication model: Our results for dynamic graph streams also resolve the (per-player) simultaneous communication complexity for approximating matchings in the edge partition model. For the vertex partition model, we design new randomized and deterministic protocols for $k$ players to achieve an $\alpha$-approximation. Specifically, for $\alpha \geq \sqrt{k}$, we provide a randomized protocol with total communication of $O(nk/\alpha^2)$ and a deterministic protocol with total communication of $O(nk/\alpha)$. Both these bounds are tight. Moreover, our work generalizes the results established by Dobzinski et al. (STOC 2014) for the special case of $k = n$. Finally, for the case of $\alpha = o(\sqrt{k})$, we establish a new lower bound on the simultaneous communication complexity which is super-linear in $n$.
Attila Bernáth and Tamás Király. Blocking optimal k-arborescences
Abstract: Given a digraph $D=(V,A)$ and a positive integer $k$, an arc set $F\subseteq A$ is called a $k$-arborescence if it is the disjoint union of $k$ spanning arborescences. The problem of finding a minimum cost $k$-arborescence is known to be polynomial-time solvable using matroid intersection. In this paper we study the following problem: find a minimum cardinality subset of arcs that contains at least one arc from every minimum cost $k$-arborescence. For $k=1$, the problem was solved in [A. Bernáth, G. Pap, Blocking optimal arborescences, IPCO 2013]. In this paper we give an algorithm for general $k$ that has polynomial running time if $k$ is fixed.
Ross Churchley, Bojan Mohar and Hehui Wu. Packing edge-disjoint odd (u,v) trails
Abstract: Despite Menger's famous duality between packings and coverings of $(u,v)$-paths in a graph, there is no duality when we require the paths to be odd: a graph with no two edge-disjoint odd $(u,v)$-paths may need an arbitrarily large number of edges to cover all such paths. In this paper, we study the relaxed problem of packing odd trails. For a graph $G$ and vertices $u,v \in V(G)$, let $\nu_{ot}(u,v)$ be the maximum number of edge-disjoint $(u,v)$-trails of odd length in $G$ and let $\tau_{ot}(u,v)$ be the minimum number of edges that intersect every odd $(u,v)$-trail in $G$. Our main result is an approximate duality for odd trails: $\nu_{ot}(u,v) \le \tau_{ot}(u,v) \le 8\nu_{ot}(u,v)$. The proof leads to a polynomial-time algorithm to find, for any given $k$, either $k$ edge-disjoint odd $(u,v)$-trails or a set of fewer than $8k$ edges intersecting all such trails. This yields a constant-factor approximation algorithm for the odd trail packing number $\nu_{ot}(u,v)$.
Benjamin Sach, Raphael Clifford, Allyx Fontaine, Tatiana Starikovskaya and Ely Porat. The $k$-mismatch problem revisited
Abstract: We revisit the complexity of one of the most basic problems in pattern matching. In the $k$-mismatch problem we must compute the Hamming distance between a pattern of length $m$ and every $m$-length substring of a text of length $n$, as long as that Hamming distance is at most $k$. Where the Hamming distance is greater than $k$ at some alignment of the pattern and text, we simply output ``No''.
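As a baseline fixing the problem statement (not the paper's algorithm), the naive solution runs in O(nm) time:
\begin{verbatim}
def k_mismatch_naive(text, pattern, k):
    n, m = len(text), len(pattern)
    out = []
    for i in range(n - m + 1):
        d = 0
        for a, b in zip(text[i:i + m], pattern):
            if a != b:
                d += 1
                if d > k:            # more than k mismatches: stop early
                    break
        out.append(d if d <= k else "No")
    return out

# e.g. k_mismatch_naive("abracadabra", "abr", 1) reports 0 at shifts 0 and 7
# and "No" elsewhere
\end{verbatim}
The paper's offline algorithm computes the same output in $O(nk^2\log{k}/m+n\polylog{m})$ time.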

We study this problem in both the standard offline setting and also as a streaming problem. In the streaming $k$-mismatch problem the text arrives one symbol at a time and we must give an output before processing any future symbols. Our main results are as follows:
\begin{itemize}
\item Our first result is a deterministic $O(nk^2\log{k}/m+n\polylog{m})$ time \emph{offline} algorithm for $k$-mismatch on a text of length $n$. This is a factor of $k$ improvement over the fastest previous result of this form from SODA 2000~\cite{ALP:2000,ALP:2004}.
\item We then give a randomised and online algorithm which runs in the same time complexity but requires only $O(k^2\polylog{m})$ space in total.
\item Next we give a randomised $(1+\epsilon)$-approximation algorithm for the streaming $k$-mismatch problem which uses $O(k^2\polylog{m}/\epsilon^2)$ space and runs in $O(\polylog{m}/\epsilon^2)$ worst-case time per arriving symbol.
\item Finally we combine our new results to derive a randomised $O(k^2\polylog{m})$ space algorithm for the streaming $k$-mismatch problem which runs in $O(\sqrt{k}\log{k} + \polylog{m})$ worst-case time per arriving symbol. This improves the best previous space complexity for streaming $k$-mismatch from FOCS 2009~\cite{Porat:09} by a factor of $k$. We also improve the time complexity of this previous result by an even greater factor to match the fastest known offline algorithm (up to logarithmic factors).
\end{itemize}
Brendan Nagle, Vojtech Rodl and Mathias Schacht. An Algorithmic Hypergraph Regularity Lemma
Abstract: Szemerédi's Regularity Lemma is one of the most powerful tools in combinatorics. It asserts that all large graphs G admit a bounded partition of E(G), most classes of which consist of regularly distributed edges. The original proof of this result was non-constructive. A constructive proof was given by Alon, Duke, Lefmann, Rödl and Yuster, which allows one to efficiently construct a regular partition for any large graph.

Szemerédi's Regularity Lemma was extended to hypergraphs by various authors. Frankl and Rödl gave one such extension to 3-uniform hypergraphs, and Rödl and Skokan extended this result to k-uniform hypergraphs. W.T. Gowers gave another such extension, using a different concept of regularity from that of Frankl, Rödl and Skokan. As in the graph case, all of these proofs are non-constructive. In this paper, we report on a constructive proof of Gowers' Hypergraph Regularity Lemma, and discuss an application.
Michael B. Cohen. Simpler and tighter analysis of sparse oblivious subspace embeddings
Abstract: We present a new analysis of sparse oblivious subspace embeddings, based on the "matrix Chernoff" technique. These are probability distributions over (relatively) sparse matrices such that for any d-dimensional subspace of R^n, the norms of all vectors in the subspace are simultaneously approximately preserved by the embedding with high probability, typically with parameters depending on d but not on n. The families of embedding matrices considered here are essentially the same as those in [Nelson-Nguyen, 2012], but with better parameters (sparsity and embedding dimension). Because of this, this analysis essentially serves as a ``drop-in replacement'' for Nelson-Nguyen's, improving bounds on its many applications to problems such as least squares regression and low-rank approximation.

This new method is based on elementary tail bounds combined with matrix trace inequalities (Golden-Thompson or Lieb's theorem), and does not require combinatorics, unlike the Nelson-Nguyen approach. There are also variants of this method that are even simpler, at the cost of worse parameters. Furthermore, the bounds obtained are much tighter than previous ones, matching known lower bounds up to a single log(d) factor in embedding dimension (previous results had more log factors and also had suboptimal tradeoffs with sparsity).
Radu Curticapean and Dániel Marx. Tight conditional lower bounds for counting perfect matchings on graphs of bounded treewidth and cliquewidth
Abstract: Thanks to a series of recent results, by now we have a good
understanding of how NP-hard optimization problems become easier on graphs of
bounded treewidth and bounded cliquewidth: for various problems,
matching upper bounds and conditional lower bounds describe exactly how the
running time has to depend on treewidth or cliquewidth. Fomin~et~al.~[SODA 2009, SODA 2010] have shown a significant difference between the two parameters:
assuming the Exponential-Time Hypothesis (ETH), the best possible
algorithms for problems such as MAX CUT and EDGE DOMINATING SET have running time $2^{O(t)}\cdot n^{O(1)}$ when
parameterized by treewidth, but $n^{O(t)}$ when parameterized by
cliquewidth. In this paper, we show that a similar phenomenon occurs
also in the domain of counting problems. Specifically, we prove
that, assuming the Counting Strong Exponential-Time Hypothesis (#SETH), the
problem of counting perfect matchings

- has no $(2-\epsilon)^k\cdot n^{O(1)}$ time algorithm for any $\epsilon>0$ on graphs of treewidth $k$ (but it is known to be solvable in time $2^{k}\cdot n^{O(1)}$ if a tree decomposition of width $k$ is given), and

- has no $O(n^{(1-\epsilon)k})$ time algorithm for any $\epsilon>0$ on graphs of cliquewidth $k$ (but it can be solved in time $O(n^{k+1})$ if a $k$-expression is given).

A celebrated result of Fisher, Kasteleyn, and Temperley from the 1960s
shows that counting perfect matchings in planar graphs is
polynomial-time solvable, and Galluccio and Loebl [Electr. J. Comb., 6,
1999] gave a $4^{k}\cdot n^{O(1)}$ time algorithm for graphs of genus
$k$. We show that the dependence on the genus $k$ has to be exponential:
assuming #ETH, there is no $2^{o(k)}\cdot n^{O(1)}$ time
algorithm for the problem on graphs of genus $k$.

Typically, hardness proofs for counting perfect matchings involve a
``mysterious'' step where we reduce an NP-hard decision problem to
(the counting version of) a polynomial-time solvable problem. We
demonstrate that, by using the framework of Holant problems and
observing that every constant-size signature can be realized by a
matchgate, such hardness proofs can be obtained in a clean and
methodological manner, even in the parameterized setting. This opens
the possibility of further hardness results of similar flavor for
other counting problems.
Ivan Bliznets, Marek Cygan, Paweł Komosa, Lukas Mach and Michał Pilipczuk. Lower bounds for the parameterized complexity of Minimum Fill-in and other completion problems
Abstract: In this work, we focus on several completion problems for subclasses of chordal graphs: Minimum Fill-In, Interval Completion, Proper Interval Completion, Trivially Perfect Completion, and Threshold Completion. In these problems, the task is to add at most k edges to a given graph in order to obtain a chordal, interval, proper interval, trivially perfect, or threshold graph, respectively. We prove the following lower bounds for all these problems, as well as for the related Chain Completion problem:

* Assuming the Exponential Time Hypothesis, none of these problems can be solved in time 2^{O(n^{1/2} / log^c n)} or 2^{O(k^{1/4} / log^c k)} n^{O(1)} for some integer c.
* Assuming the non-existence of a subexponential-time approximation scheme for Min Bisection on d-regular graphs, for some constant d, none of these problems can be solved in time 2^{o(n)} or 2^{o(sqrt(k))} n^{O(1)}.

For all the aforementioned completion problems, apart from Proper Interval Completion, FPT algorithms with running time of the form 2^O(sqrt(k) log k) n^O(1) are known. Thus, the second result proves that a significant improvement of any of these algorithms would lead to a surprising breakthrough in the design of approximation algorithms for Min Bisection.

To prove our results, we propose a new reduction methodology based on combining the classic approach of starting with a sparse instance of 3-Sat, prepared using the Sparsification Lemma, with the existence of almost linear-size Probabilistically Checkable Proofs (PCPs). Apart from our main results, we also obtain lower bounds excluding the existence of subexponential algorithms for the Optimum Linear Arrangement problem, as well as improved, yet still not tight, lower bounds for Feedback Arc Set in Tournaments.
Shivam Garg and Geevarghese Philip. Raising The Bar For Vertex Cover: Fixed-parameter Tractability Above A Higher Guarantee
Abstract: We investigate the following above-guarantee parameterization of the classical Vertex Cover problem: Given a graph G and a non-negative integer k as input, does G have a vertex cover of size at most (2LP - MM) + k? Here MM is the size of a maximum matching of G, LP is the value of an optimum solution to the relaxed (standard) LP for Vertex Cover on G, and k is the parameter. Since (2LP - MM) >= LP >= MM, this is a stronger parameterization than those---namely, above MM, and above LP---which have been studied so far.
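For reference, the relaxed (standard) LP for Vertex Cover referred to above is the usual fractional relaxation (notation ours):
\[
\mathrm{LP}(G) \;=\; \min \sum_{v\in V(G)} x_v
\quad\text{s.t.}\quad
x_u + x_v \ge 1 \;\;\forall \{u,v\}\in E(G),
\qquad
0 \le x_v \le 1 \;\;\forall v\in V(G).
\]
Note that MM <= LP holds by LP duality (a maximum matching is a feasible fractional matching), which is what makes 2LP - MM at least as large as both earlier guarantees.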

We prove that Vertex Cover is fixed-parameter tractable for this stronger parameter k: We derive an algorithm which solves Vertex Cover in time O*(3^k), and thus push the envelope further on the parameterized tractability of Vertex Cover.
Djamal Belazzougui and Simon Puglisi. Range Predecessor and Lempel-Ziv Parsing
Abstract: The Lempel-Ziv parsing of a string (LZ77 for short) is one of the most important and widely used algorithmic tools in data compression and string processing. We show that the Lempel-Ziv parsing of a string of length $n$ on an alphabet of size $\sigma$ can be computed in $O(n\log\log\sigma)$ time ($O(n)$ time if we allow randomization) using $O(n\log\sigma)$ bits of working space; that is, using space proportional to that of the input string in bits. The previous fastest algorithm using $O(n\log\sigma)$ space takes $O(n(\log\sigma+\log\log n))$ time. We also consider the important {\em rightmost} variant of the problem,
where the goal is to associate with each phrase of the parsing its {\em most recent} occurrence in the input
string. We solve this problem in $O(n(1 + \log\sigma/\sqrt{\log n}))$ time, using the same working space as above. The previous best solution for rightmost parsing uses $O(n(1+\log\sigma/\log\log n))$ time and $O(n\log n)$ space. As a bonus, in our solution for rightmost parsing we provide a faster construction method for efficient {\em 2D orthogonal range reporting}, which is of independent interest.
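For reference, the (leftmost) parsing itself admits a simple quadratic-time baseline; the contribution above is computing it in near-linear time and compact space. A Python sketch (phrase encoding ours):
\begin{verbatim}
def lz77_parse(s):
    # Greedy LZ77: each phrase is the longest prefix of the remaining suffix
    # occurring at some earlier starting position (overlaps allowed), or a
    # single fresh character.  O(n^2) time; illustrative only.
    i, phrases = 0, []
    while i < len(s):
        best_len, best_src = 0, None
        for j in range(i):
            l = 0
            while i + l < len(s) and s[j + l] == s[i + l]:
                l += 1
            if l > best_len:
                best_len, best_src = l, j
        if best_len == 0:
            phrases.append((s[i], 0))          # literal phrase
            i += 1
        else:
            phrases.append((best_src, best_len))
            i += best_len
    return phrases

# lz77_parse("abababb") -> [('a', 0), ('b', 0), (0, 4), (1, 1)]
\end{verbatim}
The rightmost variant would instead record, for each phrase, the largest (most recent) such source position $j$; the naive scan above keeps the leftmost one.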
Joshua Wang, Amir Abboud and Virginia Vassilevska-Williams. Approximation and Fixed Parameter Subquadratic Algorithms for Radius and Diameter in Sparse Graphs
Abstract: The radius and diameter are fundamental graph parameters, with several natural definitions for directed graphs.
Each definition is well-motivated in a variety of applications. All versions of diameter and radius can be solved via solving all-pairs shortest paths (APSP), followed by a fast postprocessing step. However, solving APSP on $n$-node graphs requires $\Omega(n^2)$ time even in sparse graphs.

We study the question: when can diameter and radius in \emph{sparse} graphs be solved in truly subquadratic time, and when is such an algorithm unlikely?
Motivated by lower bounds on computing these measures exactly in truly subquadratic time under plausible assumptions, we search for \emph{approximation} and \emph{fixed parameter subquadratic} algorithms, and alternatively, for reasons why they do not exist.

We find that:

- Most versions of Diameter and Radius can be solved in truly subquadratic time with
\emph{optimal} approximation guarantees, under plausible assumptions.
For example, there is a $2$-approximation algorithm for directed Radius with one-way distances that runs in $\tilde{O}(m\sqrt{n})$ time, while a $(2-\delta)$-approximation algorithm in $O(n^{2-\eps})$ time is considered unlikely.

- On graphs with treewidth $k$, we can solve all versions in $2^{O(k\log{k})}n^{1+o(1)}$ time. We show that these algorithms are near \emph{optimal} since even a $(3/2-\delta)$-approximation algorithm that runs in time $2^{o(k)}n^{2-\eps}$ would refute the plausible assumptions.
Sunil Arya and David Mount. A Fast and Simple Algorithm for Computing Approximate Euclidean Minimum Spanning Trees
Abstract: The Euclidean minimum spanning tree (EMST) is a fundamental and widely studied structure. In the approximate version we are given an n-element point set P in R^d and an error parameter \eps > 0, and the objective is to compute a spanning tree over P whose weight is at most (1+\eps) times that of the true minimum spanning tree. Assuming that d is a fixed constant, existing algorithms have running times that (up to logarithmic factors) grow as O(n/\eps^{\Omega(d)}). We present an algorithm whose running time is O(n\log n + (\eps^{-2} \log^2 \frac{1}{\eps})n). Thus, this is the first algorithm for approximate EMSTs that eliminates the exponential \eps dependence on dimension. (Note that the O-notation conceals a constant factor of the form O(1)^d.) The algorithm is deterministic and very simple.
Oscar Gilbert, Jacob Hendricks, Matthew Patitz and Trent Rogers. Computing in continuous space with self-assembling polygonal tiles (extended abstract)
Abstract: In this paper we investigate the computational power of the polygonal tile assembly model (polygonal TAM) at temperature 1, i.e. in non-cooperative systems. The polygonal TAM is an extension of Winfree's abstract tile assembly model (aTAM) which not only allows for square tiles (as in the aTAM) but also allows for tile shapes that are polygons. Although a number of self-assembly results have shown computational universality at temperature 1, these are the first results to do so by fundamentally relying on tile placements in continuous, rather than discrete, space. With the square tiles of the aTAM, it is conjectured that the class of temperature 1 systems is not computationally universal. Here we show that the class of systems whose tiles are composed of a regular polygon P with n > 6 sides is computationally universal. On the other hand, we show that the class of systems whose tiles consist of a regular polygon P with n less than or equal to 6 cannot compute using any known techniques. In addition, in the extended version of the paper, we show a number of classes of systems whose tiles consist of a non-regular polygon with greater than 3 sides are computationally universal.
Matti Karppa, Petteri Kaski and Jukka Kohonen. A faster subquadratic algorithm for finding outlier correlations
Abstract: A fundamental task in data analysis is to detect {\em outlier pairs}
of strongly correlated variables among a collection of $n$ variables
with otherwise weak pairwise correlations.
After normalization, this task amounts to the geometric task where
we are given as input a set of $n$ vectors with unit Euclidean norm
and dimension $d$, and we are asked to find all the outlier pairs of
vectors whose inner product is at least $\rho$ in absolute value, subject
to the promise that all but at most $r$ pairs of vectors have inner product
at most $\tau$ in absolute value for some constants $0\leq \tau <\rho\leq 1$.
In particular this promise implies that there are at most $r$ outlier pairs.
A natural objective for algorithm design in this setting is to
seek designs whose running time scales {\em subquadratically} in
the number of variables $n$ and avoids substantial dependence
on the outlier parameter $\rho$ and the background parameter $\tau$.

In a recent breakthrough result, Gregory Valiant [FOCS 2012; J.\,ACM 2015]
showed that for
{\em binary inputs} (that is, $\{-1,1\}$-valued data normalized to
unit Euclidean length) there is a randomized algorithm that finds the
at-least-$\rho$-correlated pairs among an at-most-$\tau$-correlated
background in time
\[
\tilde O(n^{(5-\omega)/(4-\omega)+\omega(\log\rho)/(\log\tau)}+rdn^{1/(4-\omega)})\,,
\]
where $2\leq\omega<2.38$ is the exponent of square matrix
multiplication. In particular, for small values of $r$
the running time is subquadratic
in the number of variables $n$ assuming $\tau$ is sufficiently small
compared with $\rho$, and furthermore, unlike earlier approaches such as
{\em locality-sensitive hashing} [Indyk and Motwani, STOC 1998],
when $\tau\rightarrow 0$ the exponent of the running time converges
to a subquadratic value $(5-\omega)/(4-\omega)$ that is
{\em independent of the outlier strength $\rho$} for any constant $\rho$.
The key insight in Valiant's algorithm is to first
(i) {\em compress} the input by randomly expanding and aggregating the input
vectors and then (ii) {\em detect} the correlations in the compressed data
with fast matrix multiplication.
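A schematic rendering of that two-step pipeline in Python (a simplification: the expansion step that amplifies the gap between $\rho$ and $\tau$ is omitted, all parameters are illustrative, and flagged candidates are verified by brute force):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, d, g, rho = 512, 4096, 128, 0.9        # illustrative sizes and threshold
X = rng.choice([-1.0, 1.0], size=(n, d))  # background: near-orthogonal vectors
X[1] = np.where(rng.random(d) < 0.97, X[0], -X[0])   # plant one outlier pair

groups = rng.integers(0, g, size=n)       # (i) compress: aggregate random groups
members = [np.flatnonzero(groups == a) for a in range(g)]
Z = np.stack([X[m].sum(axis=0) for m in members])
G = Z @ Z.T                               # (ii) detect: one small matrix product

found = set()
for a in range(g):                        # within-group pairs: scan directly
    for i in members[a]:
        for j in members[a]:
            if i < j and abs(X[i] @ X[j]) >= rho * d:
                found.add((int(i), int(j)))
for a in range(g):                        # cross-group pairs: only flagged groups
    for b in range(a + 1, g):
        if abs(G[a, b]) >= rho * d / 4:   # loose flag; candidates verified below
            for i in members[a]:
                for j in members[b]:
                    if abs(X[i] @ X[j]) >= rho * d:
                        found.add((min(int(i), int(j)), max(int(i), int(j))))
print(sorted(found))                      # expect [(0, 1)]
\end{verbatim}
Because the groups are small, an aggregated inner product is dominated by any outlier pair it contains, and only a small fraction of group pairs needs the brute-force verification.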

We improve on Valiant's algorithm with a faster compression subroutine
that enables a better tradeoff between parts (i) and (ii) of the
framework, as controlled by a parameter $0<\gamma<1$.
We present a randomized algorithm that for binary inputs
runs in time
\[
\tilde O(n^{\max\,\{1-\gamma+M(\Delta\gamma,\gamma),\,M(1-\gamma,2\Delta\gamma)\}}+rdn^{2\gamma})\,,
\]
where $M(\mu,\nu)$ is the
exponent to multiply an $n^\mu\times n^\nu$ matrix with an $n^\nu\times n^\mu$
matrix and $\Delta=1/(1-(\log\rho)/(\log\tau))$.
With $\gamma=1/(2\Delta+1)$ our algorithm runs in time
$\tilde O(n^{2\omega/(3-(\log\rho)/(\log\tau))}+rdn^{1/(\Delta+1/2)})$
and with $\gamma=\alpha/(2\Delta+\alpha)$ in time
$\tilde O(n^{4/(2+\alpha(1-(\log\rho)/(\log\tau)))}+rdn^{\alpha/(\Delta+\alpha/2)})$,
where $0.3<\alpha\leq 1$ is the threshold exponent for
rectangular matrix multiplication. For example, the last bound
implies that asymptotically it is possible to detect extremely weak
outliers with, say, $\rho=2^{-100}$ and $\tau=2^{-101}$,
in time $\tilde O(n^{1.998}+rdn^{0.003})$.
Conversely, in the strong outlier setting with $\tau\rightarrow 0$ and
any constant $\rho$, we obtain the exponent
$2\omega/3$ for the running time, improving Valiant's exponent
$(5-\omega)/(4-\omega)$ across the range $2\leq\omega<2.38$.
(The notation $\tilde O(\cdot)$ hides polylogarithmic factors in $n$
and $d$ whose degree may depend on $\rho$ and $\tau$.)
Ross Berkowitz and Swastik Kopparty. Robust Positioning Patterns
Abstract: In this paper, we construct large sequences and matrices with the property that the contents of any small window determine the location of the window, *robustly*. Such objects have found many applications in practical settings, from positioning of wireless devices to smart pens, and have recently gained some theoretical interest.

In this context, we give the first explicit constructions of sequences and matrices with high rate and constant relative distance. Accompanying these efficient constructions, we also give efficient decoding algorithms, which can determine the position of the window given its contents, even if a constant fraction of the contents have been corrupted.
Sarah Cannon and Dana Randall. Sampling on lattices with free boundary conditions using randomized extensions
Abstract: Many statistical physics models are defined on an infinite lattice by taking appropriate limits of the model on finite lattice regions.   A key consideration is which boundary to use when taking these limits, since the boundary can have significant influence on properties of the limit.  Fixed boundary conditions assume that the boundary cells are given a fixed assignment, and free boundary conditions allow these cells to vary, taking the union of all possible fixed boundaries.  It is known that these two boundary conditions can cause significant differences in physical properties, such as whether there is a phase transition, as well as computational properties, including whether local Markov chain algorithms used to sample and approximately count are efficient.  

We show that by randomly extending the boundary of a configuration with free boundaries by a few layers, choosing among only a constant number of allowable extensions, we can generalize the arguments used in the fixed boundary setting to infer bounds on the mixing time for free boundaries. We demonstrate this technique of using randomized extensions for 3-colorings of rectangular regions of $Z^2$ and lozenge tilings of triangular regions of the triangular lattice with free boundaries, building on arguments for the fixed boundary cases due to Luby, Randall and Sinclair.
Surender Baswana, Shreejit Ray Chaudhury, Keerti Choudhary and Shahbaz Khan. Dynamic DFS Tree in Undirected Graphs: breaking the O(m) barrier
Abstract: The depth first search (DFS) tree is a fundamental data structure for solving various problems in graphs. It is well known that it takes O(m+n) time to build a DFS tree for a given undirected graph G=(V,E) on n vertices and m edges. We address the problem of maintaining a DFS tree when the graph is undergoing updates (insertions and deletions of vertices or edges).
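For reference, the static object being maintained is just the tree of parent pointers produced by a standard traversal; a minimal Python sketch for the component of a chosen root:
\begin{verbatim}
def dfs_tree(adj, root):
    # Iterative DFS producing parent pointers in O(m + n) time;
    # per-vertex iterators avoid rescanning adjacency lists.
    parent = {root: None}
    it = {u: iter(adj[u]) for u in adj}
    stack = [root]
    while stack:
        u = stack[-1]
        for v in it[u]:
            if v not in parent:
                parent[v] = u
                stack.append(v)
                break
        else:
            stack.pop()                 # all neighbours of u explored
    return parent
\end{verbatim}
The paper's challenge is to avoid re-running this O(m+n) procedure from scratch after each update.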


We present the following results for this problem.

1- Fault tolerant DFS tree:
There exists a data structure of size O~(m), where O~() hides polylogarithmic factors, such that given any k updates, a DFS tree for the resulting graph can be reported in O~(nk) worst case time. When the k updates are deletions only, this directly implies an O~(nk) time fault tolerant algorithm for a DFS tree.

2- Fully dynamic DFS tree:
There exists a fully dynamic algorithm for maintaining a DFS tree that takes worst case O~(\sqrt{mn}) time per update for any arbitrary online sequence of updates.

3- Incremental DFS tree:
Given any arbitrary online sequence of edge insertions, we can maintain a DFS tree in O~(n) worst case time per edge insertion.

These results are the first o(m) worst case time results for maintaining a DFS tree in a dynamic environment. Moreover, our fully dynamic algorithm provides, in a seamless manner, the first non-trivial deterministic algorithm with O(1) query time and o(m) worst case update time for the dynamic subgraph connectivity, bi-connectivity, and 2-edge connectivity.
Mark Braverman, Jieming Mao and S. Matthew Weinberg. Interpolating Between Truthful and Non-Truthful Mechanisms for Combinatorial Auctions
Abstract: We study the communication complexity of combinatorial auctions via \emph{interpolation mechanisms} that interpolate between non-truthful and truthful protocols. Specifically, an interpolation mechanism has two phases. In the first phase, the bidders participate in some non-truthful protocol \emph{whose output is itself a truthful protocol}. In the second phase, the bidders participate in the truthful protocol selected during phase one. Note that virtually all existing auctions have either a non-existent first phase (and are therefore truthful mechanisms), or a non-existent second phase (and are therefore just traditional protocols, analyzed via the Price of Anarchy/Stability).

The goal of this paper is to understand the benefits of interpolation mechanisms versus truthful mechanisms or traditional protocols, and to develop the necessary tools to formally study them. Interestingly, we exhibit settings where interpolation mechanisms greatly outperform the optimal traditional and truthful protocols. Yet, we also exhibit settings where interpolation mechanisms are (provably) no better than truthful ones. Finally, we apply our new machinery to prove that the recent single-bid mechanism of Devanur et al.~\cite{DevanurMSW15} (the only pre-existing interpolation mechanism in the literature) achieves the optimal price of anarchy among a wide class of protocols, a claim that cannot be addressed by appealing only to machinery from communication complexity or the study of truthful mechanisms.
George Christodoulou and Alkmini Sgouritsa. Designing Networks with Good Equilibria under Uncertainty
Abstract: We consider the problem of designing network cost-sharing protocols
with good equilibria under uncertainty. The underlying game is a
multicast game in a rooted undirected graph with nonnegative edge costs.
A set of k terminal vertices or players need to establish connectivity
with the root. The social optimum is the Minimum Steiner Tree.

We are interested in situations where the designer has incomplete
information about the input. We propose two different models, the
adversarial and the stochastic. In both models, the designer has
prior knowledge of the underlying metric but the requested subset
of the players is not known and is activated either in an adversarial
manner (adversarial model) or is drawn from a known probability
distribution (stochastic model).

In the adversarial model, the goal of the designer is to choose
a single, universal cost-sharing protocol that has low Price
of Anarchy (PoA) for all possible requested subsets of players.
The main question we address is: to what extent can prior knowledge
of the underlying metric help in the design?

We first demonstrate that there exist classes of graphs where
knowledge of the underlying metric can dramatically improve the
performance of good network cost-sharing design. For outerplanar
graph metrics, we provide a universal cost-sharing protocol with
constant PoA, in contrast to protocols that, by ignoring the graph
metric, cannot achieve PoA better than Omega(log k). Then, in our
main technical result, we show that there exist graph metrics, for
which knowing the underlying metric does not help and any universal
protocol has PoA of Omega(log k), which is tight. We attack this
problem by developing new techniques that employ powerful tools
from extremal combinatorics, and more specifically Ramsey Theory
in high dimensional hypercubes.

Then we switch to the stochastic model, where each player is
independently activated according to some probability distribution
that is known to the designer. We show that there exists a randomized
ordered protocol that achieves constant PoA. By using standard
derandomization techniques, we produce a deterministic ordered
protocol that achieves constant PoA. We remark that the first result
also holds in the black-box model, where the probabilities are not
known to the designer, who is instead allowed to draw independent
(polynomially many) samples.
Ittai Abraham, Shiri Chechik, Daniel Delling, Andrew Goldberg and Renato Werneck. On Dynamic Approximate Shortest Paths for Planar Graphs with Worst-Case Costs
Abstract: Given a base weighted planar graph $G_{input}$ on $n$ nodes and parameters $M, \epsilon$ we present a dynamic distance oracle with $1+\epsilon$ stretch and worst case update and query costs of $M \cdot \mbox{poly-log} (n)$. We allow arbitrary edge weight updates as long as the shortest path metric induced by the updated graph has stretch of at most $M$ relative to the shortest path metric of the base graph $G_{input}$.

For example, on a planar road network, we can support fast queries and dynamic traffic updates as long as the travel speed along any route from any source to any target (including arbitrary detours) stays between, say, 3 and 80 miles per hour.

As a warm-up we also prove that graphs of bounded treewidth have exact distance oracles in the dynamic edge model.


To the best of our knowledge, this is the first dynamic distance oracle for a non-trivial family of dynamic changes to planar graphs with worst case costs of $o(n^{1/2})$ both for query and for update operations.
Amir Abboud, Arturs Backurs, Thomas Dueholm Hansen, Virginia Vassilevska Williams and Or Zamir. Subtree Isomorphism Revisited
Abstract: The \emph{Subtree Isomorphism} problem asks whether a given tree is contained in another given tree.
The problem is of fundamental importance and has been studied since the 1960s.
For some variants, e.g., \emph{ordered trees}, near-linear time algorithms are known, but for the general case truly subquadratic algorithms remain elusive.


Our first result is a reduction from the Orthogonal Vectors problem to Subtree Isomorphism, showing that a truly subquadratic algorithm for the latter refutes the Strong Exponential Time Hypothesis (SETH).

In light of this conditional lower bound, we focus on natural special cases for which no truly subquadratic algorithms are known. We classify these cases against the quadratic barrier, showing in particular that:

\begin{itemize}
\item Even for binary, rooted trees, a truly subquadratic algorithm refutes SETH.
\item Even for rooted trees of depth $O(\log\log{n})$, where $n$ is the total number of vertices, a truly subquadratic algorithm refutes SETH.
\item For every constant $d$, there is a constant $\eps_d>0$ and a randomized, truly subquadratic algorithm for degree-$d$ rooted trees of depth at most $(1+ \eps_d) \log_{d}{n}$.
In particular, there is an $O(\min\{ 2.85^h ,n^2 \})$ algorithm for binary trees of depth $h$.
\end{itemize}

Our reductions utilize new ``tree gadgets'' that are likely useful for future SETH-based lower bounds for problems on trees. Our upper bounds apply a folklore result from randomized decision tree complexity.