Dates: January 5-8, 2025
Venue: The beautiful Lopatie conference center, Weizmann Institute of Science, Israel.
Organizers: Irit Dinur and Thomas Vidick
[YouTube channel]
  
[Program with resources]
  
[Program PDF]
This workshop will feature several mini-courses that each go in depth into a single topic, together with several more standard one-hour lectures.
Abstract: The goal of this minicourse is to describe the overarching themes of the resolution of Connes' embedding problem (CEP) due to Ji--Natarajan--Vidick--Wright--Yuen and the refutation of the Aldous--Lyons conjecture (ALC) due to Bowen--Chapman--Lubotzky--Vidick. These two problems belong to a series of problems asking whether certain objects are approximable, and in the end, the above two results can be seen as inapproximability results for the relevant objects. The intriguing thing about these pure mathematics results is the methods used to deduce them, in particular the use and analysis of certain models of interactive proofs.

In the first lecture, we will present the aforementioned series of approximation problems --- CEP, ALC, the group soficity problem of Gromov--Weiss, and more. Our presentation will use a purely group-theoretic setup that allows us to phrase all of these problems as questions about the free group. We then relate CEP and ALC to certain interactive proof systems, MIP* and IP^{IRS} respectively, and show how undecidability in these proof systems implies a negative answer to the approximation problems.

In the second lecture, we relate a variant of MIP*, which we call TailoredMIP*, to IP^{IRS}. This allows us to deal with a single interactive proof system whose undecidability resolves both CEP and ALC. In the remaining time, we will describe why TailoredMIP* contains the Halting Problem. This uses a technique called compression.
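For orientation, the complexity-theoretic statement at the heart of the first result can be summarized (informally, and only as a signpost for the lectures) as
$$\mathrm{MIP}^* = \mathrm{RE},$$
i.e., classical polynomial-time verifiers interacting with multiple entangled provers can decide exactly the recursively enumerable languages. Roughly speaking, a positive answer to CEP would make the relevant quantities computably approximable, placing the corresponding decision problems in a decidable class, so the undecidability packed into the equality above is what forces the negative answer.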
This minicourse focuses on analytic methods in spectral graph theory, with a particular emphasis on free probability theory. The primary goal is to develop an understanding of what free probability theory entails and to explore its limitations in the study of graphs. We will then delve into its finite counterpart—finite free probability—and examine concepts such as interlacing families of polynomials. Finally, we will discuss applications of these theories to expander graphs.
Below are the details of the three lectures:
Lecture 1:
In the first lecture, we will begin with a brief introduction to spectral graph theory and expander graphs. Following this, we will provide an overview of free probability theory, which, broadly speaking, is a probabilistic framework suited to non-commutative random variables (e.g., matrices). Key concepts such as "freeness"—a form of independence in the non-commutative setting—will be introduced, along with its underlying elegant combinatorial structures and the analytic tools the theory offers. However, freeness is a phenomenon that does not exist in finite dimensions, making the standard form of free probability theory largely inapplicable to many problems in theoretical computer science.
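For concreteness, here is one standard way to phrase freeness (sketched informally): in a non-commutative probability space $(\mathcal{A}, \tau)$, unital subalgebras $\mathcal{A}_1, \dots, \mathcal{A}_k$ are free if
$$\tau(a_1 a_2 \cdots a_m) = 0 \quad \text{whenever } a_j \in \mathcal{A}_{i_j},\ \tau(a_j) = 0 \text{ for all } j, \text{ and } i_1 \neq i_2 \neq \cdots \neq i_m.$$
Random variables are free when the subalgebras they generate are free; this is the non-commutative analogue of independence alluded to above.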
Lecture 2:
Given the limitations of free probability theory for finite-dimensional settings, the second lecture will focus on finite free probability—a "sister theory" to classical free probability. This theory was pioneered by Marcus, Spielman, and Srivastava in a series of groundbreaking works about a decade ago. We will explore its principles and how it extends the ideas of free probability to finite settings.
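As a taste of the theory (one standard formulation, stated here only as a sketch): the finite free additive convolution of two monic real-rooted polynomials $p$ and $q$ of degree $d$ can be defined by
$$(p \boxplus_d q)(x) \;=\; \mathbb{E}_{Q}\left[\det\!\left(xI - A - QBQ^*\right)\right],$$
where $A$ and $B$ are Hermitian $d \times d$ matrices with characteristic polynomials $p$ and $q$, and $Q$ is a Haar-random unitary matrix. The expected characteristic polynomial turns out to be real-rooted again and to depend only on $p$ and $q$, which is what makes it a finite-dimensional analogue of free additive convolution.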
Lecture 3:
In the final lecture, we will examine applications of finite free probability, in particular the Marcus-Spielman-Srivastava (MSS) theorem resolving the existence of bipartite Ramanujan graphs of every degree. If time permits, we will also analyze combinatorial graph operations, such as the Zig-Zag product, through the analytic lens of free probability.
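As a reminder of the statement in question (informal): a $d$-regular bipartite graph is Ramanujan if every adjacency eigenvalue $\lambda$ other than $\pm d$ satisfies
$$|\lambda| \le 2\sqrt{d-1},$$
and the MSS interlacing-families argument shows that for every $d \ge 3$ there are infinitely many $d$-regular bipartite Ramanujan graphs.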
The graph counting lemma of Chung, Graham, and Wilson (Combinatorica 1988) is a fundamental result in combinatorics. It states that if a large graph $G$ is pseudorandom (in a specific sense that we will define), then the number of copies of any small graph $H$ in $G$ is close to what is expected in a random graph of the same density. However, this result is only nontrivial when $G$ is a dense graph. In this work, we obtain a counting lemma that works in the sparse setting and is well-suited for the density increment arguments of additive combinatorics. In a recent remarkable breakthrough, Kelley and Meka (FOCS 2023) obtained a strong upper bound on the density of sets of integers without nontrivial three-term arithmetic progressions. We combine our counting lemma with other ideas to establish Kelley--Meka type bounds for all linear patterns defined by translation-invariant systems of binary linear forms, i.e., systems in which each form depends on exactly two variables. In particular, we obtain strong bounds for the Turán problem in Abelian Cayley sum graphs, i.e., an upper bound on the maximum edge density of an Abelian Cayley sum graph with a clique of a given size. To prove our results, we employ some of the recent technology developed by Kelley and Meka, as well as the follow-up work of Kelley, Lovett, and Meka (STOC 2024).
This talk is based on joint work with Yuval Filmus, Hamed Hatami, and Kaave Hosseini (FOCS 2024; full version: arXiv:2311.12248).
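To give a feel for the dense counting lemma referred to above (a rough, informal statement; the precise pseudorandomness condition is the one defined in the talk): if an $n$-vertex graph $G$ of edge density $p$ is sufficiently pseudorandom, then for every fixed graph $H$ the number of labeled copies of $H$ in $G$ is
$$(1 + o(1))\, p^{e(H)} n^{v(H)}.$$
This guarantee degenerates once $p \to 0$, which is why a sparse analogue compatible with density increment arguments is needed.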
Abstract coming soon...
Abstract: Let A, B, C be subsets of the special unitary group SU(n) of Haar measure at least e^{−n^{1/3}}. Then ABC = SU(n). In fact, the product abc of random elements a ∼ A, b ∼ B, c ∼ C is equidistributed in SU(n). This makes progress on a question posed independently by Gowers, studying nonabelian variants of questions from additive combinatorics, and settles a conjecture of physicists studying quantum communication complexity. To prove our results we introduce a tool known as ‘hypercontractivity’ to the study of high-rank compact Lie groups. We then show that it synergizes with their representation theory to obtain our result. Time permitting, I will also discuss corresponding results for finite simple groups, where the relevant representation-theoretic notion of ‘tensor rank’ was introduced only recently and independently by Gurevich-Howe and Guralnick-Larsen-Tiep. (Based on joint work with Ellis-Kindler-Minzer, Filmus-Kindler-Minzer, Keevash, Evra-Kindler, and Evra-Kindler-Lindzey.)
Lecture 1:
Alphabet-Soundness Tradeoff in PCPs and Global Hypercontractivity
We will give a gentle introduction to PCPs, and discuss some of their parameters.
We will focus on the tradeoff between the alphabet size of a PCP and its soundness, a tradeoff that is often crucial in applications to hardness of approximation.
Time permitting, we will discuss the notion of global hypercontractivity and explain how it relates to questions about PCPs as above.
Based mostly on joint work with Kai Zhe Zhang.
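For readers less familiar with PCPs, the natural starting point (recalled here only as background) is the PCP theorem,
$$\mathrm{NP} = \mathrm{PCP}_{1,\,1/2}\big[O(\log n),\, O(1)\big],$$
i.e., every NP statement admits a polynomially long proof that a randomized verifier can check by reading a constant number of proof symbols, accepting valid proofs with probability $1$ and accepting any purported proof of a false statement with probability at most $1/2$. The lecture concerns how small this soundness error can be pushed as a function of the alphabet over which the proof is written.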
Lecture 2:
Size Efficient PCPs and Fault-tolerant Routing on HDX
This talk will focus on the size parameter of a PCP, and in particular on the problem of constructing PCPs with good soundness that have quasi-linear size.
We will discuss the ideas of hardness amplification and derandomized hardness amplification, and present such schemes based on high-dimensional expanders,
a recent generalization of expander graphs that has proven to be extremely useful.
Based mostly on joint works with Mitali Bafna, Noam Lifshitz, Nikhil Vyas and Zhiwei Yun.
Lecture 3:
On Approximability of Satisfiable CSPs and Friends
Constraint satisfaction problems (CSPs for short) are among the most important computational problems studied in TCS.
This talk will focus on a recent line of study addressing the complexity of approximating satisfiable instances of CSPs, and
connections of this study to multi-player parallel repetition theorems, property testing and extremal combinatorics.
Based mostly on joint works with Amey Bhangale, Subhash Khot and Yang P. Liu.
Quantum LDPC (qLDPC) codes are a natural extension of classical LDPC codes, designed to protect quantum bits, or qubits, instead of classical bits. However, unlike their classical counterparts, constructing good qLDPC codes with both non-vanishing relative distance and rate has proven to be a significant challenge. All currently known constructions of such codes are based on the concept of high-dimensional expansion, which directly accounts for their good distance and other desirable features, such as linear-time decoding and the NLTS property. In this talk, I will provide a gentle introduction to quantum LDPC codes and discuss recent constructions of good qLDPC codes, which naturally extend Sipser-Spielman expander codes from graphs to high-dimensional cell complexes.
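As minimal background (a sketch of the standard CSS framework, not of the specific constructions discussed in the talk): a CSS quantum code on $n$ qubits is specified by two parity-check matrices $H_X, H_Z$ over $\mathbb{F}_2$ satisfying
$$H_X H_Z^{\mathsf{T}} = 0,$$
it is LDPC if every row and column of $H_X$ and $H_Z$ has a bounded number of nonzero entries, and it is good if both the number of encoded qubits and the distance grow linearly in $n$.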
Coboundary expansion is a high dimensional analogue of the Cheeger constant for simplicial complexes. Originally, this notion was motivated by the fact that it implies topological expansion, but nowadays a significant part of the motivation stems from its deep connection to problems in theoretical computer science.
In this talk, we will restrict attention to coboundary expansion of two-dimensional complexes. In this setting, coboundary expansion can be defined with respect to coefficients in any group (not just Abelian groups). In fact, recent applications of this notion in theoretical CS place an emphasis on coboundary expansion with coefficients in finite symmetric groups.
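To fix a definition (one common formulation, stated here only as a sketch): for a two-dimensional complex $X$ and a group $G$, a $1$-cochain assigns to every oriented edge $uv$ an element $f(uv) \in G$ with $f(vu) = f(uv)^{-1}$, its coboundary on a triangle $uvw$ is $\delta f(uvw) = f(uv)\,f(vw)\,f(wu)$, and $f$ is a coboundary if $f(uv) = g(u)\,g(v)^{-1}$ for some vertex labelling $g$. The coboundary expansion constant is then
$$h^1(X; G) \;=\; \min_{f \notin B^1(X;G)} \frac{\mathrm{wt}(\delta f)}{\mathrm{dist}\big(f,\, B^1(X;G)\big)},$$
where $\mathrm{wt}(\delta f)$ is the fraction of triangles on which $\delta f$ is non-trivial and $\mathrm{dist}(f, B^1)$ is the fraction of edges on which $f$ must be changed to become a coboundary.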
The aim of this talk is to explain the following result: Given any finite group G, we construct a family of simplicial complexes (of uniformly bounded degree) that are high dimensional spectral expanders and whose 2-skeletons are coboundary expanders with respect to G. This construction is elementary and uses a variation of the coset complex construction of spectral high dimensional expanders.
This talk is based on a joint work with Tali Kaufman and Shmuel Weinberger.
Interactive oracle proofs (IOPs) extend the classical notion of probabilistically-checkable proofs (PCPs) by allowing a verifier to interact with a prover over a small number of rounds, while querying the prover’s messages in only a few locations.
A recent line of work gave highly-efficient IOPs outperforming state-of-the-art PCPs, for example, constant-round and constant-query IOPs with only a linear (and even approaching the witness length) amount of communication, as well as IOPs with linear-time prover complexity. These constructions were leveraged in turn to obtain highly-efficient succinct arguments and zero-knowledge proofs.
The improved efficiency was obtained by replacing polynomial-based codes, commonly used in such proof systems, with more efficient (tensor-based) codes. In particular, these constructions bypassed a barrier imposed by the need to encode the computation using a multiplication code.
In the talk I will survey these highly-efficient IOP constructions, and their cryptographic applications.
Based on joint works with Ron Rothblum and Mor Weiss.
I will survey the wide variety of methods for proving expansion of Cayley graphs with different groups and generators. I will also pose some of the problems that are still open in this broad area.
Abstract coming soon...
Abstract coming soon...