Inspired by Shafi Goldwasser's approach and contributions, the theme of this event is highlighting the excitement of the process of doing research. Four speakers presented research they are particularly excited about through this lens. Following is my take on their presentations.

In any human life situation, it is this common context that makes coordination of action, let alone a feeling of mutual understanding, possible via brief communication. The communication is brief in comparison to the parties' inputs, let alone in comparison to the vast context that they are aware of. A technical talk at a conference such as FOCS presumes the use of a common natural language, with several layers of technical knowledge built on top of it (e.g., high-school math, knowledge of certain parts of TOC at large, and specific knowledge of the specific area).

Turning to the actual studies surveyed in this talk, they are toy examples in which the context takes the form of shared randomness, knowledge of the joint distribution from which inputs are picked, or the function that needs to be computed. The point is that each party may have a different view of these key components, and these views are highly correlated but not identical.

The second generation of candidate constructions is based on combining ``bilinear'' or ``trilinear'' maps with strong notions of pseudorandom generators (i.e., highly local ones, having degree two or three when written as polynomial maps).

Of course, this is easier said than done. It turns out that devising learning algorithms for this setting requires dealing with new challenges and phenomena (e.g., the performance of the parametric algorithms is typically discontinuous in the parameters, a phenomenon that does not occur in standard machine-learning settings). So this research is pushing forward both the algorithms used in applications and the algorithmic methods of machine learning itself.
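As a toy illustration of this discontinuity (my own hypothetical example, not one from the talk), consider a parametric algorithm with a threshold parameter theta that accepts the first offer priced at most theta. Its cost, viewed as a function of theta, is a step function that jumps at each input price, so gradient-style tuning of theta is useless.

```python
def cost(theta, prices):
    # toy parametric algorithm: accept the first price that is at most
    # theta; if no price qualifies, we are stuck with the last one
    for p in prices:
        if p <= theta:
            return p
    return prices[-1]

# the cost is a step function of theta: an infinitesimal change in the
# parameter can change which offer is accepted
print(cost(2.9, [5, 3, 8]))  # -> 8 (no price is at most 2.9)
print(cost(3.0, [5, 3, 8]))  # -> 3 (the second price now qualifies)
```

The jump from 8 to 3 as theta crosses 3 is exactly the kind of behavior that standard (continuous) machine-learning analyses do not have to contend with.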

A few comments are in order. First, we are talking about centralized algorithms, which are not restricted in their access to the input (unlike distributed algorithms in the so-called LOCAL model, where, in each round, each processor can obtain information only from its neighbors). Second, a few copies of the centralized algorithm may run on the same input, without coordinating their actions (via communication) beyond sharing some initial global randomness. Using randomization seems essential (challenge: formulate and prove this assertion), and different random choices may yield different outputs, but each choice of randomness must determine a unique output. In particular, the output is not allowed to depend on the sequence of queries. The latter requirement is natural per se, but it also implies that there is no need for inter-query storage, and that all copies of the algorithm (which may be used by different entities) obtain the same output (provided that they start with the same randomness).
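These features can be seen in a minimal sketch of a classical LCA for maximal independent set on a bounded-degree graph: the shared randomness assigns a random rank to each vertex, and each copy of the algorithm answers "is v in the set?" by simulating the greedy MIS with respect to these ranks. (The names `rank` and `in_mis` and the dictionary representation of the graph are illustrative, not from the talk.)

```python
import random

def rank(v, shared_seed):
    # the shared initial randomness: every copy of the algorithm derives
    # the same random rank for vertex v from the same integer seed
    return random.Random(shared_seed * 10**9 + v).random()

def in_mis(v, graph, shared_seed):
    # v belongs to the greedy MIS (w.r.t. the random ranks) iff no
    # lower-ranked neighbor of v belongs to it; the recursion only ever
    # probes vertices of strictly smaller rank, so it terminates
    return not any(rank(u, shared_seed) < rank(v, shared_seed)
                   and in_mis(u, graph, shared_seed)
                   for u in graph[v])
```

Note that two copies answering queries about different vertices, without any communication, report membership in one and the same independent set, which is determined by the seed alone; in particular, the answers do not depend on the sequence of queries, and no inter-query storage is needed.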

In spite of the differences, there is a tight connection between the study of LCAs and the study of the LOCAL model of distributed computing. In particular, for problems regarding bounded-degree graphs, the probe complexity of LCAs is at most exponential in the round complexity of the distributed algorithm.
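The simulation behind this bound can be sketched as follows: since the output of an r-round LOCAL algorithm at a vertex v depends only on the radius-r neighborhood of v, an LCA can recover that neighborhood by probing adjacency lists, at a cost that is exponential in r for bounded degree. (This is a hypothetical sketch under the assumption that the graph is given as a dictionary of adjacency lists; `probe_ball` is an illustrative name.)

```python
def probe_ball(v, graph, r):
    # collect the radius-r neighborhood of v by probing adjacency lists;
    # with maximum degree d this uses at most 1 + d + ... + d^(r-1)
    # probes, i.e., exponentially many in r
    seen, frontier, probes = {v}, [v], 0
    for _ in range(r):
        next_frontier = []
        for u in frontier:
            probes += 1          # one probe reveals u's adjacency list
            for w in graph[u]:
                if w not in seen:
                    seen.add(w)
                    next_frontier.append(w)
        frontier = next_frontier
    return seen, probes

# an r-round LOCAL algorithm's output at v is a function of this ball,
# so the LCA can compute it after calling probe_ball(v, graph, r)
```

For example, on the path 0-1-2-3-4, the radius-1 ball around vertex 2 is {1, 2, 3} and costs a single probe.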

Back to list of Oded's choices.