RANDOM+APPROX 2012
MIT, Aug 15-17, 2012
A kind of preface (by me...)
These "twin" mini-conferences have been running together for about
a decade and a half, but this year I felt that they were more
integrated than in the past. Also, my impression was that the location,
a lecture room in the always-busy Stata building at MIT,
fit these conferences very well.
Following is a small sample of the many interesting talks
that took place.
Invited talk on routing problems by Julia Chuzhoy
Julia started with the two "extreme" problems of
(1) congestion minimization (serving all request pairs)
and (2) maximization of Edge-Disjoint Paths (EDP), discussing
the "knowledge gap" between the directed case (in which the bounds
are tight) and the undirected case in which they are sub-exponentially
apart. She then moved to the "mixed problem" of EDP with congestion,
presenting the current state-of-the-art of congestion two and polylog
approximation factor (which is tight up to the specific polynomial).
Invertible Zero-Error Dispersers and Defective Memory
with Stuck-At Errors by Ariel Gabizon (and Ronen Shaltiel)
Ariel started with the "stuck-at" error, which is a coding problem
in which the encoder knows the location of "stuck" bits but the decoder
does not. The paper shows an efficient (randomized) coding scheme that
achieves the optimal rate, which is 1 minus the rate of "stuck" bits.
The construction is based on (1) observing that this problem is
equivalent to constructing a seedless, zero-error "invertible" disperser
for bit-fixing sources, and (2) constructing such a disperser by combining known constructs.
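The paper's actual construction goes through zero-error dispersers, but the stuck-at model itself can be illustrated by a toy scheme (my own, not the paper's) that handles a single stuck cell at rate $(n-1)/n$, using one cell as a complement flag:

```python
from itertools import product

# Toy illustration of the stuck-at model (NOT the paper's construction):
# n memory cells, at most one stuck cell; the encoder knows its location
# and value, the decoder does not. Cell 0 serves as a "complement flag",
# so (n-1) message bits are stored in n cells.

def encode(msg, stuck_pos, stuck_val):
    """msg: list of n-1 bits; returns n cells consistent with the stuck cell."""
    word = [0] + list(msg)              # cell 0 = flag, cells 1..n-1 = message
    if word[stuck_pos] != stuck_val:
        word = [1 - b for b in word]    # complement all cells; flag becomes 1
    return word                         # now word[stuck_pos] == stuck_val

def decode(word):
    flag, rest = word[0], word[1:]
    return [1 - b for b in rest] if flag else rest

# Exhaustive check for n = 4: every message, stuck position, and stuck value.
for msg in product([0, 1], repeat=3):
    for pos, val in product(range(4), [0, 1]):
        word = encode(list(msg), pos, val)
        assert word[pos] == val           # the stuck cell is respected
        assert decode(word) == list(msg)  # the decoder recovers the message
```

Note that the scheme works even when the flag cell itself is stuck, since the encoder simply complements (or does not) so as to match the stuck value.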
On the Coin Weighing Problem with the Presence of Noise
by Nader Bshouty
I could not attend the talk, but the paper seems interesting.
It refers to learning an unknown vector $x\in\{0,1\}^n$,
when allowed queries of the form $Q\subseteq[n]$, answered by the integer
value of $\sum_{i\in Q}x_i$. This was studied decades ago,
but the current work refers to the case that answers may be
wrong (or missing, which makes sense in the non-adaptive case).
The paper studies the non-adaptive case, showing that the
error rate (resp., missing rate) must be below 1/4 (resp., 1/2)
to allow a solution, and in that case the number of queries
is $\Theta(n/\log n)$, just as in the noise-free case.
It then presents a *polynomial-time* algorithm that makes
$O(n/\log\log n)$ queries and tolerates a constant error rate.
Add'l comment:
In addition to improving the performance of the latter, I wonder
if adaptivity helps here (but did not really think about it).
Also, I am wondering whether the aforementioned bounds
(e.g., on the error rate) can be derived directly from known coding bounds.
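To see where the $\Theta(n/\log n)$ behavior comes from, note that each answer is an integer in $\{0,...,n\}$ and so carries at most $\log_2(n+1)$ bits. A minimal sketch of the query model and this counting bound (my own illustration, not code from the paper):

```python
import math

def answer(x, Q):
    """The (noiseless) coin-weighing oracle: the number of i in Q with x_i = 1."""
    return sum(x[i] for i in Q)

def min_nonadaptive_queries(n):
    # Each answer lies in {0,...,n}, i.e., carries at most log2(n+1) bits,
    # so m queries distinguish at most (n+1)^m of the 2^n possible vectors.
    # Hence m >= n / log2(n+1), matching the Theta(n/log n) bound.
    return math.ceil(n / math.log2(n + 1))

print(answer([1, 0, 1, 1], {0, 1, 2}))  # 2
print(min_nonadaptive_queries(64))      # 11
```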
A new point of NP-hardness for 2-to-1 Label Cover
by (Per Austrin and Ryan O'Donnell and) John Wright
This result refers to label sizes 3 and 6, and improves the soundness
parameter for (perfect-completeness) systems. The talk also mentioned
a follow-up result that refers to 3-coloring. The technique, as in the
following two papers, focuses on the design and analysis of adequate tests.
Circumventing d-to-1 for Approximation Resistance
of Satisfiable Predicates Strictly Containing Parity of Width Four
by Cenny Wenner
The point is that the hardness result ("approximation resistance")
is now based on NP-hardness rather than on the d-to-1 LC Conjecture.
This refers to satisfiable instances of CSPs of arity four,
provided that the CSP predicate strictly implies the parity predicate.
On the NP-hardness of Max-Not-2
by Johan Hastad
This work also refers to satisfiable instances of CSP,
but they are of arity three. The known results were for
predicates that have 7 (i.e., SAT) and 6 satisfying assignments,
and the current work refers to a (the) predicate having 5.
(The case of 4 satisfying assignments is efficiently solvable, as it is Lin2.)
Here too, approximation resistance is proved based on NP-hardness.
Equivalently, this shows a non-adaptive PCP[log,3] with perfect
completeness and soundness error just above 5/8 for NP, which is tight
in light of Zwick's result (1998) that shows that lower soundness error
is only possible for P.
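For concreteness (my own illustration): the arity-three predicate in question, Not-Two, rejects exactly the assignments in which precisely two of the three inputs are true. A quick enumeration confirms the count of 5 and, with it, the 5/8 threshold:

```python
from itertools import product

# The Not-Two predicate: satisfied unless exactly two of the three inputs
# are true, i.e., satisfied by the assignments of weight 0, 1, or 3.
# Enumerating {0,1}^3 confirms that it has 5 satisfying assignments, so a
# uniformly random assignment satisfies a 5/8 fraction of the constraints,
# which is where the soundness bound "just above 5/8" comes from.
not_two = lambda a, b, c: a + b + c != 2
sat = [t for t in product([0, 1], repeat=3) if not_two(*t)]
print(len(sat))  # 5
```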
Optimal Hitting Sets for Combinatorial Shapes
by Aditya Bhaskara, Devendra Desai and Srikanth Srinivasan
One thing I liked about this work is the simple reduction to the special
case of Threshold shapes, which capitalizes on the fact that one is dealing
with hitting rather than with pseudorandomness.
Almost k-wise vs. k-wise independent permutations, and uniformity
for general group actions by Noga Alon and Shachar Lovett
[presented by Tali Kaufman]
The paper studies the question of local independence versus
local almost-independence within a general framework that
refers to group actions. On second thought, I realized that
this general framework may be of interest for other applications
and that the fact that results are established in it may shed
light on the principles underlying results of this sort.
Add'l comment:
The distributions in question are distributions over a group $G$,
which in turn acts on some domain $D$, and (almost) uniformity
refers to the distribution that results from that action
(which is a distribution on $D$).
In the case of standard $k$-wise independence over $B$,
we have $D={[n]\choose k}\times B^k$, where each $g\in G$
maps $(S,b_1...b_k)$ to $(S,b'_1...b'_k)$ such that
$b'_i=b_i+PRJ(g,s_i)$, where $s_i$ is the $i$th element of $S$
and $PRJ(g,j)$ is the $j$th element of the sequence $(e_1,...,e_n)\in B^n$
associated with $g$ (via a 1-1 correspondence $G\to B^n$).
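A minimal sketch of this framework (the names are mine, for illustration only), with $B=Z_2$, $n=4$, $k=2$, and $G$ identified with $B^n$; when $g$ is uniform over $B^n$, the image of any fixed point $(S,b)$ under the action is uniform over $B^k$, i.e., $k$-wise independence restated as uniformity of a group action:

```python
from collections import Counter
from itertools import product

# B = Z_2, G identified with B^n; g acts on a pair (S, (b_1,...,b_k))
# by adding (mod 2) the coordinates of g indexed by the elements of S.
n, k = 4, 2

def act(g, S, b):
    # g in B^n, S a k-subset of {0,...,n-1} (as a sorted tuple), b in B^k
    return (S, tuple((b_i + g[s_i]) % 2 for b_i, s_i in zip(b, S)))

S, b = (0, 2), (0, 0)               # a fixed point of the domain D
images = Counter(act(g, S, b)[1] for g in product([0, 1], repeat=n))
print(images)  # each of the 4 values in B^2 appears 2^(n-k) = 4 times
```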
Testing Lipschitz Functions on Hypergrid Domains
by (Pranjal Awasthi,) Madhav Jha (, Marco Molinaro
and Sofya Raskhodnikova)
This work extends the scope of testing Lipschitz functions from lines and
hypercubes to their common generalization, the hypergrid.
The elegant analysis of the tester introduces (and analyzes)
a (natural) local modification operator.
This work "rounds up" very nicely this appealing line of research,
but leaves open the important case of general range
(since, so far, only bounded integer range was treated).
Limitations of Local Filters of Lipschitz and Monotone Functions
by (Pranjal Awasthi, Madhav Jha,) Marco Molinaro
(and Sofya Raskhodnikova)
The known exponential (in dimension) lower bounds are extended to adaptive
filters, and to a relaxed notion that allows the filter to introduce
a large additive error (in each value).