THE 2015 OBERWOLFACH MEETINGS ON COMPLEXITY THEORY

PROGRAM NOTES BY ODED GOLDREICH

MONDAY MORNING

* Welcome and individual introductions

* Special session on two-source randomness extractors

This session celebrated the breakthrough result of Eshan Chattopadhyay and David Zuckerman, which gave an explicit construction of a two-source extractor that handles poly-logarithmic min-entropy. David Zuckerman presented the history and wide context of the problem of constructing two-source extractors, and Gil Cohen presented an overview of the construction. On a later day, an informal specialized session featured more detailed descriptions of two components of the construction. Specifically, Eshan Chattopadhyay presented constructions of ``non-malleable extractors'' and Raghu Meka presented constructions of ``resilient functions''.

One interesting point regarding the construction of Eshan Chattopadhyay and David Zuckerman is that its analysis makes explicit use of Mark Braverman's celebrated result by which poly-logarithmic independence fools bounded-depth Boolean circuits. This is remarkable because these two areas of complexity theory did not seem related before, and their histories had registered no actual interaction so far.

* Short reports by first-time student and post-doc attendees (8 min each)

Christina Brzuska mused about the nature of a few recent computational assumptions used in cryptography, noting that some of them are not known to imply one-way functions and that some of them contradict one another.

Eshan Chattopadhyay presented a new type of randomness extractors, called extractors for sumset sources (i.e., the source is a sum, over any group, of a few independent min-entropy sources).

Gillat Kol briefly surveyed work on interactive compression, highlighting a general incompressibility lower bound as well as a new upper bound that holds for product distributions.

Li-Yang Tan reported on an explicit function having polynomial-size circuits of depth $d+1$ such that $\exp(n^{o(1/d)})$-size circuits of depth $d$ cannot predict it with advantage $n^{-\Omega(1/d)}$.

Sebastien Tavenas reported on work in arithmetic complexity, but I lost my notes for it.

MONDAY AFTERNOON

* Assumptions in cryptography: from one-way functions to the new age

Vinod Vaikuntanathan presented a unified framework that captures a wide spectrum of cryptographic assumptions, ranging from the very minimal assumption that one-way functions (denoted OWF) exist to the very speculative assumption by which it is feasible to obfuscate computer programs such that the obfuscations of functionally equivalent programs cannot be distinguished (known as the IO assumption). The framework is that of ``randomized encoding'' (RE), and the following holds: (1) RE for functions in NC1 is possible (unconditionally), (2) RE for functions in P is implied by the existence of OWF, (3) ``reusable RE'' is possible under a widely believed assumption that a specific function constitutes an OWF, and (4) ``public reusable RE'' is equivalent to IO.

* Preventing false discovery in interactive data analysis

Interactive (or adaptive) data analysis refers to a setting in which one first obtains a sample of the data, and then conducts a study of this sample by issuing queries and examining the answers (e.g., testing various hypotheses regarding the data). The point is that these queries are selected adaptively based on prior answers, and the problem is to avoid (false) discoveries that refer to the sample but do not reflect the original data.
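To make the phenomenon concrete, here is a minimal simulation sketch of my own (not taken from the talk); the parameters, i.e., the sample size, the number of candidate features, and the number of selected features, are arbitrary choices. The features and the labels are drawn independently, so every apparent correlation is spurious; yet an analyst who adaptively keeps the features that look best on the sample obtains a composite statistic whose in-sample correlation with the labels is clearly positive, while on a fresh sample the correlation essentially vanishes.

  import numpy as np

  rng = np.random.default_rng(0)
  n, d, k = 1000, 500, 20   # sample size, candidate features, features kept (arbitrary)

  # Population truth: features and labels are independent, so any "discovery" is false.
  X = rng.choice([-1, 1], size=(n, d))
  y = rng.choice([-1, 1], size=n)

  # Adaptive analyst: estimate each feature's correlation with y on the sample,
  # then aggregate the k features that look most correlated (signs chosen adaptively).
  corr = X.T @ y / n
  top = np.argsort(-np.abs(corr))[:k]
  score = (X[:, top] * np.sign(corr[top])).mean(axis=1)
  print("in-sample correlation   :", np.corrcoef(score, y)[0, 1])

  # The same composite statistic evaluated on a fresh sample: the effect disappears.
  X_new = rng.choice([-1, 1], size=(n, d))
  y_new = rng.choice([-1, 1], size=n)
  score_new = (X_new[:, top] * np.sign(corr[top])).mean(axis=1)
  print("fresh-sample correlation:", np.corrcoef(score_new, y_new)[0, 1])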
One key observation is that avoiding such a phenomenon is closely related to devising a ``privacy preserving mechanism'' for answering statistical queries to the data, and the design of such mechanisms is in turn related to complexity theory. In particular, it was shown that if one-way functions exist, then one cannot prevent false discoveries when making more than ${\widetilde O}(n^2)$ queries to a sample of size $n$.

TUESDAY MORNING

* Recent work in fine-grained complexity

Fine-grained complexity aims to provide evidence that the known polynomial-time algorithms for some natural tractable problems are actually the best possible. Ryan Williams surveyed research in this area, highlighting the connection between it and the study of the exact complexity of problems that seem to require exponential time, such as 3SAT.

* Lower bounds for bounded-depth formulas

Ben Rossman outlined his proof showing that the simple conversion of circuits of size $s$ and depth $d$ into formulas of size $s^d$ and depth $d$ is essentially the best possible. Specifically, he showed that the parity of $n$ variables, which can be computed by a depth-$d$ circuit of size $\exp(n^{1/(d-1)})$, requires depth-$d$ formulas of size $\exp(\Omega(d\cdot n^{1/d}))$.

* The complexity of random CSPs: recent advances

Ryan O'Donnell surveyed the state of the art regarding the complexity of solving random CSP instances, focusing on the use of the conjecture that it is hard to solve random instances of density close to the satisfiability threshold.

WEDNESDAY MORNING

* (Non)-compressibility of interactive communication: progress and challenges

Mark Braverman focused on the gap between the amount of syntactic communication and the information content of two-party interactive communication, raising the question of the extent to which an interactive communication can be compressed to its informative content. It is known that, in general, the best possible compression is to an amount that is exponential in the information content, but compression to a quadratic amount is possible when the distribution of each input is independent of the distribution of the other input.

* Communication complexity in distributed computing

Rotem Oshman focused on the relation between multi-party communication complexity and distributed computing, highlighting the difference between the ``local'' model (which allows messages of unbounded length in each round) and the ``congest'' model (in which only short messages are allowed in each round). In both models, in each round, each party can only communicate with its neighbors in the fixed communication network. The connection to multi-party communication complexity refers to the more natural but less studied model in which the parties' inputs are disjoint (a.k.a. the number-in-hand model), as opposed to the so-called number-on-forehead model.

* On lower bounds for low-depth arithmetic circuits

The survey, presented by Neeraj Kayal, visited some of the main themes and techniques in this area, starting from the observation that sufficiently strong lower bounds on the size of depth-four circuits would yield such lower bounds for general arithmetic circuits (of unbounded depth).

THURSDAY MORNING

* High-rate locally-testable and locally-correctable codes

The study of LTCs and LDCs focuses on two opposite extremes. At one extreme the focus is on a constant number of queries (i.e., optimal locality), whereas at the other extreme the focus is on linear length (i.e., constant or optimal rate).
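For orientation, here are the standard definitions underlying these two notions, in a commonly used formulation (a sketch of my own, not taken from the talks):

  A code $C:\{0,1\}^k\to\{0,1\}^n$ is a $q$-query locally decodable code (LDC) if there is a randomized decoder that, given an index $i\in[k]$ and oracle access to any word $w$ that disagrees with some codeword $C(x)$ on at most a $\delta$ fraction of the positions, makes at most $q$ queries to $w$ and outputs $x_i$ with probability at least $2/3$.

  A code $C$ is a $q$-query locally testable code (LTC) if there is a randomized tester that makes at most $q$ queries to a given word $w$, accepts every codeword with probability $1$, and rejects every other word with probability proportional to its relative distance from the code.

Here locality refers to $q$ and rate refers to $k/n$; the two extremes mentioned above correspond to constant $q$ (with $n$ as small as possible) and to $n=O(k)$ (with $q$ as small as possible), respectively.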
Or Meir considered the latter regime and presented LDCs that use $\exp({\widetilde O}(\sqrt{\log k}))=k^{o(1)}$ queries (improving over the prior bound of $k^{1/O(1)}$) and LTCs that use a quasi-poly-logarithmic number of queries (i.e., the number of queries is $(\log k)^{O(\log\log k)}$). Recall that in the constant-query regime, it is known that LDCs cannot have almost linear length (i.e., encoding $k$ bits requires length at least $k^{1+\Omega(1)}$), whereas the best known constructions have sub-exponential length. For LTCs, the best known constructions use almost linear length (i.e., the codeword has length ${\widetilde O}(k)$).

* Rigidity of random Toeplitz matrices and depth-three circuits

Avishay Tal addressed the problem of presenting explicit functions that require depth-three circuits of size $\exp(\omega({\sqrt n}))$. He presented a proof of such a result in a {\em restricted model}\/ of depth-three circuits, which arises from a natural model of multi-linear circuits for computing multi-linear functions, by studying the ``rigidity'' of random Boolean Toeplitz matrices. Specifically, he showed that such a random matrix disagrees with any rank-$r$ matrix on at least ${\widetilde\Omega}(n^3/r^2)$ entries, which improves over the previously known bound of $\Omega(n^2/r)$ whenever $r$ is smaller than $n$ by more than a poly-logarithmic factor.
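To see in which range the new bound beats the old one, here is a back-of-the-envelope comparison of the two expressions (mine, not from the talk): the ratio of the two bounds is

  ${\widetilde\Omega}(n^3/r^2) \,/\, \Omega(n^2/r) = {\widetilde\Omega}(n/r)$,

so the new bound dominates whenever $r$ is smaller than $n$ by more than the poly-logarithmic factors hidden in the ${\widetilde\Omega}$ notation; for instance, for $r=n^{0.9}$ it guarantees ${\widetilde\Omega}(n^{1.2})$ disagreeing entries, versus $\Omega(n^{1.1})$ from the previous bound.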