Some highlights of the LevinFest

On 10-11 July 2024, a workshop in honor of Leonid Levin took place at BU. The program of this workshop is posted HERE (see local copy). Following are my brief take-home messages from some of the presentations (each preceded by the original abstract).

Eric Allender - The Minimum Circuit Size Problem through the Ages

The Minimum Circuit Size Problem (MCSP) asks for the size of the smallest circuit that computes a given function. MCSP has played a significant role in the history of computational complexity theory, while somehow managing to avoid having its complexity be classified in the usual collection of complexity classes. This lecture will discuss some of this long history, present several connections between MCSP and central complexity-theoretic notions, and survey some of the recent developments where MCSP has played a crucial part.

The story begins in the 1950s in the USSR, where MCSP was first studied. Interest in it was revived in the 2000s by Kabanets and Cai. (Recall that in MCSP, given the truth-table of a Boolean function $f$ and an integer $i$, the question is whether $f$ has a circuit of size at most $i$.)
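
To make the decision problem concrete, here is a minimal brute-force sketch in Python (my own illustration, not part of the talk): it enumerates all circuits over the basis {AND, OR, NOT} with at most $i$ gates and checks whether any of them computes the given truth table. Needless to say, such exhaustive search is feasible only for tiny truth tables; the point of MCSP is that nothing significantly better is known.

    from itertools import product

    def mcsp(truth_table, size_bound):
        # Decide whether the Boolean function given by `truth_table` (a tuple
        # of 2**n bits, listing f(a) for a = 0,...,2**n - 1) has a circuit
        # with at most `size_bound` gates over the basis {AND, OR, NOT}.
        n = (len(truth_table) - 1).bit_length()
        assert len(truth_table) == 2 ** n
        # Represent each wire by its full truth table (a tuple of 2**n bits).
        inputs = [tuple((a >> j) & 1 for a in range(2 ** n)) for j in range(n)]
        target = tuple(truth_table)

        def gate_choices(k):
            # The k-th gate may feed on the n inputs and the k earlier gates.
            for i in range(n + k):
                yield ('NOT', i, i)
                for j in range(n + k):
                    yield ('AND', i, j)
                    yield ('OR', i, j)

        for s in range(size_bound + 1):  # try all circuits with s gates
            for gates in product(*(list(gate_choices(k)) for k in range(s))):
                wires = list(inputs)
                for op, i, j in gates:
                    a, b = wires[i], wires[j]
                    if op == 'NOT':
                        wires.append(tuple(1 - v for v in a))
                    elif op == 'AND':
                        wires.append(tuple(u & v for u, v in zip(a, b)))
                    else:
                        wires.append(tuple(u | v for u, v in zip(a, b)))
                if target in wires:  # unused gates can always be discarded
                    return True
        return False

For instance, mcsp((0, 1, 1, 0), 4) accepts, since the XOR of two variables can be computed by a 4-gate circuit over this basis.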

Kolmogorov complexity is related to it too. Versions include the original formulation (i.e., $K(x)$ is the length of the shortest program, for a fixed universal machine, that produces $x$), the time-bounded version that refers to programs that produce the desired string within a specified time bound (i.e., $K^t(x)$ refers to producing $x$ within $t(|x|)$ steps), and Levin's version $Kt(x)$, which equals the minimum of $|p|+\log_2 t$ over all programs $p$ that produce $x$ in $t$ steps. There are also versions in which one considers programs that produce individual bits (e.g., $U(p,i)=x_i$ for all $i$).
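
For concreteness, the three measures just mentioned can be written (with $U$ denoting the fixed universal machine) as

  $K(x) = \min\{|p| : U(p)=x\}$,
  $K^t(x) = \min\{|p| : U(p) \mbox{ outputs } x \mbox{ within } t(|x|) \mbox{ steps}\}$, and
  $Kt(x) = \min\{|p|+\log_2 t : U(p) \mbox{ outputs } x \mbox{ within } t \mbox{ steps}\}$,

where in the last case the minimization is over both $p$ and $t$.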

Known results about these problems are sensitive to details such as the type of reduction (e.g., deterministic vs randomized) and the level of approximation in the corresponding gap problems.

Q+A: MCSP is fundamentally different from circuit minimization, where one is given a circuit (a representation that may be far shorter than the truth-table) and is asked about the existence of a smaller circuit.

Oded Goldreich - Pseudorandom Generators

(This entry was included in order to report on a follow-up comment of Leonid Levin.)

Talking in the presence of Leonid Levin is the most intellectually frightening experience there is. Failure to survive is guaranteed, and one better choose a glorious way to fail. Of the glorious possibilities closely related to Levin and feasible for me - i.e., NP-completeness, probabilistic proof systems, and pseudorandomness - I chose the last.

I view the concept of pseudorandom generators as a general paradigm having numerous incarnations that share three basic aspects:

  1. a stretching (of a random seed to a pseudorandom sequence),
  2. a notion of computational indistinguishability (underlying the definition of pseudorandomness), and
  3. bounds on the complexity of the stretching (aka generation) process.

I will briefly preview several incarnations, starting from the archetypical case, which is tailored to any efficient application, and ending with a few highly restricted notions, which nevertheless have numerous applications.
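
For concreteness, in the archetypical (general-purpose) incarnation the three aspects take the following standard form (the formalization is the textbook one, not specific to the talk): the generator is a deterministic polynomial-time algorithm $G$ that stretches $n$-bit seeds into $\ell(n)$-bit sequences, where $\ell(n)>n$, such that for every probabilistic polynomial-time distinguisher $D$ it holds that

  $|\Pr[D(G(U_n))=1] - \Pr[D(U_{\ell(n)})=1]| < \mu(n)$,

where $U_m$ denotes the uniform distribution over $m$-bit strings and $\mu$ is a negligible function.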

Here is a report on Levin's reaction to my presentation of Mazor and Pass's version of the HILL construction. As I said in my presentation, Levin does not like the HILL construction, believing that there should be a simpler construction that does not suffer from a significant security loss (as HILL does). Levin presented his decades-old observation that simpler constructions suffice in case the one-way function $f$ is `almost' one-to-one, as well as the conjecture that any `natural' extraction from $(f(x),x)$ (i.e., $F(desc(E),x)=(desc(E),E(f(x)x))$, where $desc(E)$ denotes the description of the extraction function $E$) preserves the infeasibility of inversion. In order to avoid the question of what is natural, Levin suggests concrete instantiations such as the following one, in which the elements reside in a large finite field and all operations are in that field: $F(r,x) = (r,f(x)+rx)$. (Note the equivalence to the more redundant form $F(r,r',r'',x) = (r,r',r'',r'f(x)+rx+r'')$.)
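
To make the suggested instantiation concrete, here is a minimal sketch in Python, where I use a Mersenne prime field and plug in modular exponentiation merely as a stand-in candidate for an (almost one-to-one) one-way function; both of these choices are mine, for illustration only, and are not from the talk.

    # Levin's suggested instantiation F(r,x) = (r, f(x) + r*x), with all
    # elements and operations in a large prime field GF(P).
    P = 2**127 - 1   # a Mersenne prime, so field arithmetic is just mod P
    G = 3            # fixed base for the stand-in f below

    def f(x):
        # Stand-in candidate one-way function (discrete exponentiation);
        # any candidate that is `almost' one-to-one would do here.
        return pow(G, x, P)

    def F(r, x):
        # r plays the role of desc(E), and f(x) + r*x that of E(f(x)x).
        return (r, (f(x) + r * x) % P)

Note that evaluating $F$ costs little more than evaluating $f$; the conjecture is that inverting $F$ is infeasible whenever inverting $f$ is.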

Igor Carboni Oliveira - Levin Complexity and Pseudodeterministic Constructions of Primes

I will discuss an influential notion of complexity for strings introduced by L. Levin, known as Kt complexity, and some recent related developments motivated by the task of explicitly constructing large primes.

I have recommended this work HERE, but wish to add some comments about the proof.

The proof partitions all possible lengths into segments, and for each segment considers a sequence of increasing lengths, denoted $n_0,n_1,\ldots,n_t$. Starting with a trivial hitting set $H_0$ for length $n_0$, for each $i$, searching the hitting set $H_i$ for a valid solution (w.r.t. a fixed verification procedure) is reduced to the computation of a function $F_i$. Now, if this function is hard, then it is used in order to construct a hitting set $H_{i+1}$ for length $n_{i+1}$. Hence, if $F_i$ is hard, then we get a good hitting set for length $n_{i+1}$. Otherwise, we use the verification procedure as a distinguisher, and this yields (via a reconstruction algorithm) an efficient procedure for computing $F_i$. Lastly, at length $n_t$, we can afford to scan $H_t$, which means that either we solve the problem for length $n_t$ or there exists an $i$ such that $H_i$ is good but $H_{i+1}$ is not, which implies that we can afford to search $H_i$ (by computing $F_i$ as outlined above).
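
Schematically, the win-win iteration may be depicted as follows, where all helper functions are hypothetical placeholders that I introduce only to show the structure of the argument (they do not appear in the paper). Note, in particular, that the case split on the hardness of $F_i$ lives in the analysis; no algorithm is claimed to test hardness.

    # Hypothetical placeholders, named only for this sketch; each stands for
    # a component of the actual construction and is not implemented here.
    def reduce_to_function(H, verify):
        raise NotImplementedError  # searching H amounts to computing this F

    def is_hard(F):
        raise NotImplementedError  # analysis-only: hardness is not testable

    def next_hitting_set(F, n):
        raise NotImplementedError  # hardness-to-randomness construction

    def scan(H, verify):
        raise NotImplementedError  # exhaustive search of H with the verifier

    def iterate(lengths, H0, verify):
        # lengths = [n_0, ..., n_t]; H0 is the trivial hitting set for n_0.
        H = H0
        for i in range(len(lengths) - 1):
            F_i = reduce_to_function(H, verify)
            if is_hard(F_i):  # case split that lives in the analysis
                H = next_hitting_set(F_i, lengths[i + 1])  # good for n_{i+1}
            else:
                # the verifier distinguishes, so reconstruction computes F_i
                # efficiently, and searching H_i becomes affordable
                return scan(H, verify)
        return scan(H, verify)  # at n_t we can afford to scan H_t exhaustively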

Leonid Levin - Set Theory in the Foundation of Math: Internal Classes, External Sets

Usual math sets have special types: countable, compact, open, occasionally Borel, rarely projective, etc. They have finite "depth": are described by a single Set Theory formula with variables ranging over objects unrelated to math formulas. Exotic expressions referring to sets with no depth limit or to Powerset axiom appear mostly in esoteric or foundational studies.

Recognizing internal to math (formula-specified) and external (based on parameters in those formulas) aspects of math objects greatly simplifies foundations. I postulate external sets (not internally specified, treated as the domain of variables) to be hereditarily countable and independent of formula-defined classes, i.e. with finite Kolmogorov Information about them. This allows elimination of all non-integer quantifiers in Set Theory statements.

More details in https://www.cs.bu.edu/fac/lnd/.m/sta.pdf

I can only offer an extremely superficial overview. Set theory distinguishes classes of objects (which I'd call `vulgar sets') from actual sets (which I'd call `formal sets') that are constructed by following the ZF axioms. The latter axiomatization was suggested in order to avoid the contradictions that arise from Cantor's simpler definition. Levin considers the ZF axioms rather ad hoc, and suggests restricting the expressibility of the system in a different way, while (happily) sacrificing the theory of cardinals.

Madhu Sudan - Universality in Communication

The theories of computing and communication come to loggerheads when facing the problem of universality. Computing rejoices in the fact that there are infinitely many universal languages, which makes it possible to change the language of choice every minute. This is of course devastating for communication, where there is a need to synchronize languages among communicating elements so that they understand each other ... but should it be? Humans do seem to learn to communicate without prior synchronization. What is the underlying theory for such communication? And can this lead to universal communication?

In this talk we will describe some of our efforts over the years, starting with joint works with Juba and Goldreich, trying to grapple with this question. There may even be hints of answers.

In addition to touching on a theme close to Leonid Levin's works (universality), this series of works was also partially inspired by many Levinisms that challenge widely adopted notions in our communications (e.g., "what should be the ideal length of a paper?"). I hope to use this joyous occasion to re-raise all the controversies!

The talk had several parts that address issues that are crucial to real-life communication (esp., communication between humans). These include (1) the lack of a universal language that may be used in communication, and (2) the reliance of communication on a huge amount of shared context, which allows for far shorter communication. Focusing on issue (2), note that issue (1) enters here because there is no perfect agreement about the shared context.

In the briefly surveyed works, which use the framework of communication complexity as their starting point (but with the intention of providing efficient protocols rather than lower bounds), the shared context was modeled by either the function $f$ to be computed, or the shared randomness available to the parties, or a priori knowledge of the joint input distribution. In each of these cases, the studies focus on the setting in which the parties start with an approximate (rather than perfect) agreement on the shared context, and a key question is how the efficiency deteriorates with the approximation error.
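
As a toy illustration of the second modeling (shared randomness), here is a sketch (mine, not from the talk) of the standard randomized equality test run with imperfectly shared randomness; it shows, in the simplest possible setting, how a protocol's error deteriorates when the parties' copies of the shared randomness disagree in a random fraction of the positions.

    import random

    def equality_rounds(x, y, reps, noise=0.0):
        # x, y: bit tuples held by the two parties; `reps` independent rounds.
        # In each round the parties use a fresh block r of shared randomness;
        # one party sends <x,r> mod 2, and the other compares it with
        # <y,r'> mod 2, where r' is its own copy of r in which each bit got
        # flipped with probability `noise` (imperfectly shared context).
        for _ in range(reps):
            r = [random.randint(0, 1) for _ in x]
            r2 = [b ^ (random.random() < noise) for b in r]
            bit1 = sum(a & b for a, b in zip(x, r)) % 2
            bit2 = sum(a & b for a, b in zip(y, r2)) % 2
            if bit1 != bit2:
                return False  # declare `unequal'
        return True           # declare `equal'

With noise=0, equal inputs always pass, whereas unequal inputs slip through each round with probability exactly 1/2; with positive noise, even equal inputs may be rejected, so the error indeed deteriorates with the disagreement rate.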


Back to list of Oded's choices.