## A few works from FOCS'14

The following three works were presented in FOCS'14,
and can be obtained from the program's website.
Additional works presented at FOCS'14 have appeared as prior choices.

#### Preventing False Discovery in Interactive Data Analysis is Hard
(by Moritz Hardt and Jonathan Ullman)

Iterative data analysis is modeled as an adaptive sequence of
statistical queries (a la the model of Kearns), which are answered
based on a single sample of the data (rather than by the corresponding
statistics of the underlying universe).
The issue is that the error in an adaptive sequence of queries
may be much larger than the error in a nonadaptive sequence of queries,
because the adaptive queries may depend on the sample through
the information on the sample provided by the answers to prior queries.
That is, answers to statistical queries that are based on the sample
may reveal information on the sample rather than on the universe.
The amount of such extra information is related to the non-privacy
of the mechanism used to answer these queries, and hence bounds
on privacy of certain mechanisms become relevant.
The main result is that the existence of one-way functions
implies that any computationally efficient mechanism for answering
adaptive queries will fail for $n^{3.01}$ queries (recently improved to $n^{2.01}$),
where $n$ denotes the size of the sample.
In contrast, such efficient mechanisms can handle $n^2$ queries,
whereas inefficient mechanisms can handle exponentially many queries.
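
To see why adaptivity is the crux here, the following is a minimal self-contained illustration (not the paper's construction, and with hypothetical parameter choices): a noiseless empirical mechanism answers statistical queries by sample means, and a single adaptively chosen query, built from the answers to prior queries, is already biased away from the true population value.

```python
import random

random.seed(0)
n, d = 100, 1000   # sample size and number of binary attributes (illustrative)

# Population: every attribute is an independent fair coin, so the TRUE
# answer to every query "mean of attribute j" (and to any average of
# such attributes) is exactly 0.5.
sample = [[random.randint(0, 1) for _ in range(d)] for _ in range(n)]

def answer(query):
    """The (noiseless) empirical mechanism: answer a statistical query
    by its average over the sample."""
    return sum(query(point) for point in sample) / n

# Non-adaptive phase: ask for the mean of each attribute; each answer
# equals 0.5 up to sampling error of about 1/sqrt(n).
means = [answer(lambda point, j=j: point[j]) for j in range(d)]

# Adaptive phase: the next query is chosen AFTER seeing the answers --
# average only those attributes whose empirical mean exceeded 1/2.
biased = [j for j in range(d) if means[j] > 0.5]
adaptive = answer(lambda point: sum(point[j] for j in biased) / len(biased))

print(adaptive)   # noticeably above the true value 0.5
```

The adaptive answer exceeds 1/2 by construction: the query depends on the sample through the earlier answers, which is exactly the failure mode discussed above.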

#### New algorithms and lower bounds for monotonicity testing
(by Xi Chen, Rocco Servedio, and Li-Yang Tan)

The issue is the query complexity of non-adaptive testers
for monotonicity of Boolean functions. The main result in this paper
is a lower bound of the form $n^{\Omega(1)}$, recently improved to $n^{0.499}$.
They also improve the upper bound from $\mathrm{poly}(1/\epsilon)\cdot\tilde{O}(n^{7/8})$
to $\mathrm{poly}(1/\epsilon)\cdot\tilde{O}(n^{5/6})$.
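For concreteness, here is a sketch of the folklore non-adaptive edge tester that such results improve upon (the paper's tester and lower-bound constructions are far more involved): all queried points are chosen independently of the function's answers.

```python
import random

random.seed(1)

def edge_tester(f, n, trials=200):
    """Folklore non-adaptive edge tester for monotonicity of
    f: {0,1}^n -> {0,1} (a baseline sketch, not this paper's tester).
    The queried points are chosen independently of f's answers,
    so the tester is non-adaptive."""
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        i = random.randrange(n)          # a random edge of the hypercube
        lo, hi = x[:], x[:]
        lo[i], hi[i] = 0, 1
        if f(tuple(lo)) > f(tuple(hi)):  # violated edge: f drops as the input grows
            return False                 # reject
    return True                          # accept

maj = lambda x: int(sum(x) > len(x) // 2)   # monotone: always accepted
anti = lambda x: 1 - x[0]                   # far from monotone: rejected w.h.p.
print(edge_tester(maj, 9), edge_tester(anti, 9))   # True False
```

The majority function has no violated edges, so it is always accepted; the anti-dictator violates every edge along the first coordinate, so 200 random edges catch a violation with overwhelming probability.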

#### Path-Finding Methods for Linear Programming: Solving Linear Programs in $\tilde{O}(\sqrt{\mathrm{rank}})$ Iterations and Faster Algorithms for Maximum Flow
(by Yin Tat Lee and Aaron Sidford)

The new algorithm is based on an iterative process that,
along with making the current step, updates auxiliary information
that is useful for the next step. That is, in each iteration,
the algorithm moves to a better interior point based on weights
that determine a good direction in which to move, where these
weights are computed based on the point reached in the prior iteration.
At a very high level this strategy reminds me of the strategy employed
in the polynomial-time approximation of the permanent of non-negative matrices
(by Jerrum, Sinclair, and Vigoda, 33rd STOC, 2001) as abstracted in
my review.
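The alternation described above can be caricatured as follows. This is a toy sketch, not the Lee-Sidford algorithm: it uses the leverage scores of the slack-rescaled constraint matrix as a simplified stand-in for the Lewis-type weights of the actual method, and a damped gradient step in place of their path-following machinery.

```python
import numpy as np

# Toy caricature of the alternation: each iteration recomputes weights
# at the point produced by the PRIOR iteration, then steps along the
# gradient of the corresponding weighted log-barrier.
# LP (illustrative instance): minimize c.x subject to A x <= b.
A = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
b = np.array([1., 1., 0., 0.])   # feasible region: the unit square
c = np.array([-1., -2.])         # minimize c.x, i.e. maximize x + 2y

x = np.array([0.5, 0.5])         # strictly interior starting point
t = 1.0                          # path parameter, increased geometrically
for _ in range(600):
    s = b - A @ x                # slacks at the point from the prior iteration
    M = A / s[:, None]           # slack-rescaled constraint matrix
    w = np.diag(M @ np.linalg.inv(M.T @ M) @ M.T)     # leverage scores = weights
    grad = t * c + A.T @ (w / s)                      # weighted-barrier gradient
    step = -0.02 * grad / (1 + np.linalg.norm(grad))  # damped step
    while np.any(b - A @ (x + step) <= 0):            # backtrack to stay interior
        step *= 0.5
    x = x + step
    t *= 1.02
print(np.round(x, 2))            # a feasible point near the optimal vertex (1, 1)
```

The point of the sketch is only the information flow: the weights used to pick the current direction are computed from the point produced by the previous step, matching the description above.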
