- Michael Kearns: Experiments in Social Computation
- Eli Ben-Sasson: An Additive Combinatorics Approach to the Log-rank Conjecture
- Aleksander Madry: Online Algorithms and the K-server Conjecture
- Shubhangi Saraf: The Method of Multiplicities

These four lectures were a pleasure to listen to. In particular, the speakers did a great job in clarifying the issues at hand, and it is a pity that no record was made.

*Talk 1:*
Mike was apologetic about the nature of his talk,
giving the feeling that he does not consider it "theory";
but I think this is theory at its best
(although no definitions and theorems were given):
Bringing up issues is theory, provoking thought is theory,
offering insights is theory, and broadening horizons is theory.
Specifically, Mike talked of (social network) tasks that consist
of a network structure and a computational task.
The network structure is only known partially to the human vertices,
and it may vary within some specific classes. The tasks are defined
in terms of desired functionalities mapping local inputs to local
outputs. One insight is that it makes no sense to study the network
structure in isolation; it should be studied in connection to tasks,
since variations in structures may be better for one task but worse
for another. There was much more, but too much to cover
in a brief summary such as the present one, and much of it calls
for mathematical modeling in standard TOC terms
(which will have to avoid "solutions" that are unavailable
to a set of humans that are not coordinated a priori).
E.g., one reported experiment refers to reaching a consensus
on a color (among 9 possible colors), when the agents
only see their own color and the colors of their neighbors.
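The experiments themselves involved human subjects, but a toy simulation conveys why such consensus is non-trivial. The myopic majority dynamic below is a hypothetical rule of my own choosing (not the behavior observed in the experiments): agents sit on a ring and each adopts the most frequent color in its closed neighborhood.

```python
# Toy illustration (hypothetical dynamic, not the experimental protocol):
# agents on a ring each adopt the most frequent color among themselves
# and their two neighbors, breaking ties toward the smallest color index.
from collections import Counter

def majority_step(colors):
    n = len(colors)
    new = []
    for i in range(n):
        nbhd = [colors[(i - 1) % n], colors[i], colors[(i + 1) % n]]
        counts = Counter(nbhd)
        best = max(counts.values())
        new.append(min(c for c in counts if counts[c] == best))
    return new

# A striped coloring is a fixed point: local majority stalls short of consensus.
print(majority_step([0, 0, 1, 1, 2, 2]))  # -> [0, 0, 1, 1, 2, 2]
# A near-uniform coloring reaches consensus in one step.
print(majority_step([0, 0, 0, 1, 0, 0]))  # -> [0, 0, 0, 0, 0, 0]
```

The striped fixed point hints at why purely local rules may fail the global task, which is one way structure and task interact.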

*Talk 2:*
(The rank (over the reals) of the matrix describing a Boolean function
is often used to derive a lower bound on the communication complexity (CC)
of the function, since it is known that the CC is at least log the rank.
On the other hand, it is easy to see that CC is at most the rank.)
A well-known question is whether there exists a tighter relation
between communication complexity and (matrix) rank.
Using a (plausible) conjecture from additive combinatorics,
this work (presented by Eli)
shows that communication complexity is sub-linear in the rank.
My most important personal benefit from this talk is in drawing my
attention to a prior work by Eli Ben-Sasson and Noga Zewi
showing a transformation from (seedless)
extractors for affine sources to two-source extractors
(indeed, shame on me for missing that intriguing work before).
Specifically, the new work relates the Approximate Duality Conjecture
(introduced in the prior work) to a better known conjecture
(i.e., the Polynomial Freiman-Ruzsa conjecture, PFR).
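As a concrete (if tiny) illustration of the lower bound $CC \geq \log(rank)$, one may compute the real rank of the matrix of the equality function on 2-bit inputs; the exact-rational Gaussian elimination below is my own minimal sketch, not code from the work presented.

```python
# Minimal sketch: real rank of the matrix of a Boolean function,
# computed exactly over the rationals via Gaussian elimination.
from fractions import Fraction
from math import log2

def rank(matrix):
    m = [[Fraction(x) for x in row] for row in matrix]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Equality on 2-bit inputs: M[x][y] = 1 iff x == y, i.e. the 4x4 identity,
# whose rank is 4; hence CC >= log2(4) = 2 bits.
eq = [[1 if x == y else 0 for y in range(4)] for x in range(4)]
print(rank(eq), log2(rank(eq)))
```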

*Talk 3:*
I wish to commend Alex for not shying away from providing an
excellent perspective on on-line algorithms as a worst-case
approach to dealing with the uncertainty involved in inputs
that arrive gradually while one needs to make irrevocable
decisions about portions of them. I share his preference for this
approach to uncertainty over a stochastic approach that assumes
perfect knowledge of the distribution of inputs.
The talk itself focused on the $k$-server problem with $n$ locations,
recalling the lower bound of $k$ on the competitive ratio
of deterministic algorithms, which holds also when $n=k+1$,
and the logarithmic (in $k$) lower bound for randomized algorithms;
both hold also for the special case of caching/paging.
The new contribution is a randomized algorithm with competitive ratio
poly-logarithmic in $n$,
which raises the question of whether $polylog(k)$ is possible.
Alex concluded by returning to the big picture and suggesting
the exploration of models between worst-case competitive ratio
and average-case stochastic analysis. I was wondering whether
the notion of probability-bounded sources can yield an appealing model
(e.g., say that the next request is chosen by an adversary subject to
no location being selected with probability exceeding $1/3$).
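For the special case of paging, the classical randomized marking algorithm (of Fiat et al.) already achieves an $O(\log k)$ competitive ratio; the sketch below is my own and is only meant to illustrate its mechanics, not the new $k$-server algorithm.

```python
# Sketch of the randomized marking algorithm for paging (cache size k):
# on a fault, evict a uniformly random unmarked page; when every cached
# page is marked, a new phase begins and all marks are cleared.
import random

def marking_paging(requests, k, seed=0):
    rng = random.Random(seed)
    cache, marked, faults = set(), set(), 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                unmarked = sorted(cache - marked)
                if not unmarked:          # phase ends: unmark everything
                    marked.clear()
                    unmarked = sorted(cache)
                cache.remove(rng.choice(unmarked))
            cache.add(p)
        marked.add(p)                      # a requested page is marked
    return faults

reqs = [1, 2, 3, 1, 4, 2, 1, 5, 3, 1]
print(marking_paging(reqs, k=3))
```

With a cache larger than the number of distinct pages, the algorithm faults exactly once per distinct page, as no eviction ever occurs.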

*Talk 4:*
Shubhangi's survey of the Method of Multiplicities used the
application to Locally Decodable Codes (LDC) as its main arena.
While the initial focus in the study of LDC was on the length
of codes allowing recovery with a constant number of queries,
the aforementioned application related to the opposite setting:
It focuses on codes with rate close to one and seeks recovery
with a sublinear number of queries. Prior to the current work,
only a rate of one half was attainable (with a number of queries
that is the square root of the length), whereas the current work
allows rates arbitrarily close to one.
(Loosely speaking, the multiplicity of a polynomial $P$
at a point $a$ is the largest $s$ such that all partial
derivatives of $P$ up to order $s-1$ vanish at point $a$.
The ``SZ Lemma with multiplicities'' asserts that for
an $n$-var polynomial $P$ of degree $d$ over $F$,
it holds that $\sum_{a\in F^n}mult_P(a) \leq d\cdot|F|^{n-1}$.)
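The univariate case ($n=1$) of the lemma merely says that the root multiplicities of a degree-$d$ polynomial sum to at most $d$; the sanity check below (my own sketch) counts multiplicities by repeated synthetic division over $F_5$.

```python
# Sanity check of the n=1 case: for P(x) = (x-1)^2 (x-2) over F_5,
# i.e. P(x) = x^3 + x^2 + 3 (mod 5), the root multiplicities sum to 3 = d.
PRIME = 5
P_COEFFS = [3, 0, 1, 1]          # low-to-high coefficients of x^3 + x^2 + 3

def divide_by_root(coeffs, a, p):
    """Synthetic division of the polynomial (low-to-high coeffs) by (x - a) mod p."""
    high = list(reversed(coeffs))
    out = [high[0]]
    for c in high[1:]:
        out.append((out[-1] * a + c) % p)
    rem = out.pop()
    return list(reversed(out)), rem

def multiplicity(coeffs, a, p):
    """Largest s such that (x - a)^s divides the polynomial mod p."""
    s = 0
    while len(coeffs) > 1:
        quot, rem = divide_by_root(coeffs, a, p)
        if rem != 0:
            break
        s, coeffs = s + 1, quot
    return s

d = len(P_COEFFS) - 1
total = sum(multiplicity(P_COEFFS, a, PRIME) for a in range(PRIME))
print(total, d * PRIME ** 0)     # total multiplicity 3 <= d * |F|^(n-1) = 3
```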

See copy of the original program.
