My impression is that the Theory-Fest/STOC17 (see program) was very successful.

I must confess that I was originally skeptical wrt the general idea as well as some specific aspects of it, and attended the event only because I was required to give a lecture. Still, I must admit I was wrong. I especially liked the idea of three parallel sessions coupled with a few talks in a plenary session, but I would have suggested having more (say 10-15) presentations of STOC papers rather than only 3+1 "best papers". Also, the presented papers need not be "best" (per merit) but rather "most adequate for a plenary session" per the appeal of what can be presented in a short talk. I also liked the idea of presenting papers that appeared in other conferences (esp., CCC, SODA, PODC, ML/NIPS, TCC, etc.) or even ("heavens forbid") only in journals. The tutorials and workshops also seem to add a lot, and I am told that the same holds wrt the poster sessions (although holding them on the night of a full day strikes me as excessive).

Following are some of the lectures I heard and got the most from.

- *Explicit, Almost Optimal, Epsilon-Balanced Codes* by Amnon Ta-Shma.
- Avi Wigderson: *On the Nature and Future of ToC*
- *Non-Interactive Delegation and Batch NP Verification from Standard Computational Assumptions* by Zvika Brakerski, Justin Holmgren, and Yael Tauman Kalai.
- *Competitive Distribution Estimation: Why is Good-Turing Good* by Alon Orlitsky.
- *Targeted Pseudorandom Generators, Simulation Advice Generators, and Derandomizing Logspace* by William Hoza and Chris Umans.
- *On the Complexity of Local Distributed Graph Problems* by Mohsen Ghaffari, Fabian Kuhn, and Yannic Maus.
- *On cap sets and the group-theoretic approach to matrix multiplication* by Chris Umans.
- Orna Kupferman: *Examining classical graph-theory problems from the viewpoint of formal-verification methods*
- *Optimal Mean-based Algorithms for Trace Reconstruction (using $exp(O(n^{1/3}))$ Samples)* by Anindya De, Ryan O'Donnell, Rocco Servedio, Fedor Nazarov, and Yuval Peres.
- *Time-Space Hardness of Learning Sparse Parities* by Gillat Kol, Ran Raz, and Avishay Tal.
- *Addition is Exponentially Harder than Counting for Shallow Monotone Circuits* by Xi Chen, Igor Oliveira, and Rocco Servedio.
- *An Improved Distributed Algorithm for Maximal Independent Set* by Mohsen Ghaffari.
- Workshop on *Probabilistically checkable and interactive proofs (PCP/IP): Between theory and practice* (organized by Eli Ben-Sasson, Alessandro Chiesa, and Yuval Ishai).

While I do not like the idea of trying to predict the future
of ToC, or of trying to direct it by authoritative advice,
I found some of the comments made in the panel *TCS - the Next Decade*
well taken. In particular, Cynthia Dwork highlighted the inspiration
provided to the study of differential privacy by past theoretical
research in cryptography (although the former had to make a significant
deviation from the mind-set of the latter). Ankur Moitra highlighted
the ToC tradition of differentiating between models and algorithms,
and the willingness to develop both in a slow and creative manner, while
noting that such an attitude is lacking in some engineering disciplines.
Tim Roughgarden added that core ToC tends not to care about
specific problems but rather uses them as a way to uncover
more general phenomena and/or to discover new techniques.
Russell Impagliazzo and Dan Spielman mused on the impression that
practical researchers tend to view all problems as feasibly solvable.

Amnon gave a highly intuitive talk that focused on the analysis, and I wish to complement it by spelling out the construction. One way to describe it is as a variant on the Zig-Zag/Replacement product, except that here we combine a large $D$-regular expander graph with a small $D^t$-vertex $d$-regular expander graph, where as usual $d \ll D$, but here $t>1$ (rather than $t=1$ as usual). In addition, we use functions $M:[D^t]\to[D]$ and $P:[D^t]\to[D^t]$ in order to map labels of vertices of the small graph to moves on the large graph, and to permute the labels of the small graph. Specifically, viewing $[D^t] = [D]^t$, Amnon lets $M$ take the first element in a $t$-sequence and $P$ be a cyclic shift of such a sequence. The $\eps$-bias space is obtained by starting with an $\eps_0$-bias space, for some small constant $\eps_0>0$, and associating its elements with the vertices of the composed graph. Taking a random walk on this expander, we output the XOR of the labels of the vertices visited.
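The expander-walk paradigm underlying this construction can be illustrated by a minimal sketch: walk on a graph whose vertices carry bits and XOR the labels visited. This toy version (all names are mine) walks on an arbitrary small graph rather than on Ta-Shma's composed expander, so it illustrates the paradigm only, not the actual parameters or analysis.

```python
import random

def walk_xor_sample(adj, labels, length, seed=None):
    """Toy illustration of the expander-walk paradigm: take a random walk
    on a graph whose vertices carry bits (think: bits of an eps_0-biased
    sample) and output the XOR of the visited labels.
    adj: dict mapping each vertex to a list of its neighbors.
    labels: dict mapping each vertex to a bit in {0, 1}."""
    rng = random.Random(seed)
    v = rng.choice(sorted(adj))        # uniformly random start vertex
    out = labels[v]
    for _ in range(length):
        v = rng.choice(adj[v])         # one random step on the graph
        out ^= labels[v]               # XOR in the new vertex's bit
    return out
```

In the actual construction, one such bit is produced per walk, and the output sample is the sequence of bits over all walks (equivalently, over all seeds of the generator).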

While not every one of us can deliver an exposition of the type delivered by Avi, we can and should all recognize and fully internalize the verity of the main message he communicated. Such an internalization will project a sense of the importance of ToC even if we don't communicate it explicitly and elaborately. For sure, we should stop portraying ourselves as supposedly playing useless games.

This specific work (first presented at NIPS 2015) suggests measuring the performance of distribution estimation algorithms by comparing it to the performance of the best algorithm that is given the histogram of the target distribution (i.e., given the distribution up to relabeling). It is shown that the error of a variant of the Good-Turing algorithm exceeds the optimal error by a function that is a negative power of the number of samples, and that this type of performance is the best possible.
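For concreteness, here is a sketch of the classical (unmodified) Good-Turing estimates, on which the paper's variant builds; the function name and interface are my own.

```python
from collections import Counter

def good_turing(sample):
    """Classical Good-Turing frequency estimates (a toy sketch; the paper
    analyzes a modified variant with better guarantees).
    Returns per-symbol probability estimates and the estimated mass of
    unseen symbols."""
    n = len(sample)
    counts = Counter(sample)                 # symbol -> frequency r
    freq_of_freq = Counter(counts.values())  # r -> N_r, number of symbols seen r times
    est = {}
    for sym, r in counts.items():
        # Good-Turing: total mass of symbols seen r times is (r+1) * N_{r+1} / n,
        # shared equally among the N_r such symbols.
        est[sym] = (r + 1) * freq_of_freq.get(r + 1, 0) / (freq_of_freq[r] * n)
    missing_mass = freq_of_freq.get(1, 0) / n  # estimated mass of unseen symbols
    return est, missing_mass
```

Note that the raw estimates are not normalized and can behave badly on tiny samples; smoothing the $N_r$ values is what practical variants (and the analyzed one) address.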

- Generalizations of classical graph theoretic problems obtained by considering edge labels. For example, one may search for shortest, Eulerian, or Hamiltonian paths among all paths that satisfy a predetermined condition regarding the sequence of labels that correspond to the edges on the path. Such a condition may be specified by a regular expression or by a more expressive formalism.
- Generalizations of classical graph theoretic problems into games in which an adversary controls some of the vertices (or edges). For example, max-flow can be generalized such that, at adversary-controlled vertices, a legal worst-case (rather than best) assignment of the arriving flow to the outgoing edges is selected.
- Classical graph theoretic problems regarding graphs that are given by succinct representation, where the succinct representation is given by a hierarchical structure (which specifies an iterative construction in which boxes in one super-graph are replaced by designated super-graphs).
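The first generalization above is typically handled by a product construction: run the graph in lockstep with an automaton for the label condition, and search in the product graph. A minimal sketch (all names hypothetical), with the DFA given as a transition dictionary:

```python
from collections import deque

def constrained_shortest_path(graph, dfa_trans, dfa_start, dfa_accept, s, t):
    """Length of a shortest s-t path whose edge-label sequence is accepted
    by the given DFA; returns None if no such path exists.
    graph: dict mapping u -> list of (v, label) edges.
    dfa_trans: dict mapping (state, label) -> next state (partial).
    A toy sketch of the product-construction idea (BFS on graph x DFA)."""
    start = (s, dfa_start)
    dist = {start: 0}
    q = deque([start])
    while q:
        u, state = q.popleft()
        if u == t and state in dfa_accept:
            return dist[(u, state)]
        for v, lab in graph.get(u, []):
            ns = dfa_trans.get((state, lab))  # advance the DFA on this label
            if ns is not None and (v, ns) not in dist:
                dist[(v, ns)] = dist[(u, state)] + 1
                q.append((v, ns))
    return None
```

Replacing BFS by Dijkstra handles weighted edges; richer formalisms than regular expressions require correspondingly richer product constructions.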

Improving over the classical results of Luby and of Alon, Babai, and Itai, the current paper (presented at SODA 2016) provides an algorithm of round complexity $O(\log d)+\exp(O(\sqrt{\log\log n}))$, where $d$ denotes the maximum degree. The general strategy consists of two steps. First, $O(\log d)$ rounds are used in order to ``shatter'' the graph into connected components of polylogarithmic size. This step is randomized and succeeds with very high probability; it cannot be recursed on, since we need to take care of an almost linear number of different events (whereas the round complexity is related both to the degree and to the desired error bound). The second stage applies the known deterministic algorithm (of complexity $\exp(O(\sqrt{\log s}))$ for $s$-vertex graphs) to each of the connected components.
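For reference, the classical randomized algorithm being improved upon (Luby's) can be simulated sequentially in a few lines. This is a toy sketch of the classical algorithm, not of Ghaffari's improvement:

```python
import random

def luby_mis(adj, seed=0):
    """Sequential simulation of Luby's classical randomized distributed MIS
    algorithm (a toy sketch; not Ghaffari's improved algorithm).
    adj: dict mapping each vertex to the set of its neighbors."""
    rng = random.Random(seed)
    active = set(adj)
    mis = set()
    while active:
        # Each active vertex draws a random value; local minima join the MIS.
        r = {v: rng.random() for v in active}
        winners = {v for v in active
                   if all(r[v] < r[u] for u in adj[v] if u in active)}
        mis |= winners
        # Remove the new MIS vertices and their neighbors from the graph.
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & active
        active -= removed
    return mis
```

Each iteration of the while loop corresponds to $O(1)$ distributed rounds, and $O(\log n)$ iterations suffice with high probability; Ghaffari's analysis refines this per-vertex rather than globally.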

Back to list of Oded's choices.