The story begins in the 1950s in the USSR, where MCSP was first studied. Interest in it was revived in the 2000s by Kabanets and Cai. (Recall that in MCSP, given the truth-table of a Boolean function $f$ and an integer $i$, the question is whether $f$ has a circuit of size at most $i$.)

Kolmogorov complexity is related to MCSP too. Versions include the original formulation (i.e., $K(x)$ is the length of the shortest program, with respect to a fixed universal machine, that produces $x$), the time-bounded version that refers to programs that produce the desired string within a specific complexity bound (i.e., $K^t(x)$ refers to producing $x$ within $t(|x|)$ steps), and Levin's version $Kt(x)$, which equals the minimum of $|p|+\log_2 t$ such that $p$ produces $x$ in $t$ steps. There are also versions in which one considers programs that produce individual bits (e.g., $U(p,i)=x_i$ for all $i$).
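For concreteness, the three variants just mentioned can be written uniformly (a standard formalization, where $U$ denotes the fixed universal machine):

```latex
\[
\begin{aligned}
K(x)   &= \min\{\, |p| \;:\; U(p)=x \,\},\\
K^t(x) &= \min\{\, |p| \;:\; U(p)=x \text{ within } t(|x|) \text{ steps} \,\},\\
Kt(x)  &= \min\{\, |p|+\log_2 t \;:\; U(p)=x \text{ within } t \text{ steps} \,\}.
\end{aligned}
\]
```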

Known results about these problems are sensitive to details such as the type of reduction (e.g., deterministic vs randomized) and the level of approximation in the corresponding gap problems.

Q+A: MCSP is fundamentally different from circuit minimization, where one is given a circuit (smaller than the truth-table length) and is asked about the existence of a smaller one.

*Talking in the presence of Leonid Levin is the most intellectually frightening experience there is. Failure to survive is guaranteed, and one better choose a glorious way to fail. Of the glorious possibilities closely related to Levin and feasible for me - i.e., NP-completeness, probabilistic proof systems, and pseudorandomness - I chose the latter.*

*I view the concept of pseudorandom generators as a general paradigm having numerous incarnations that share three basic aspects:*

- a stretching (of a random seed to a pseudorandom sequence),
- a notion of computational indistinguishability (underlying the definition of pseudorandomness), and
- bounds on the complexity of the stretching (aka generation) process.
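As a toy illustration of these three aspects, here is a sketch of my own (the stretcher is a linear congruential step, which is cryptographically worthless and easily distinguishable from random; it serves only to make the interface concrete):

```python
# Aspect 1: stretch a short seed into a longer bit sequence.
def stretch(seed: int, out_len: int) -> list:
    """Toy stretcher: iterate an LCG step and output the top bit each time.
    Runs in time linear in out_len (aspect 3: bounded generation cost)."""
    state, bits = seed, []
    for _ in range(out_len):
        state = (1103515245 * state + 12345) % 2**31  # LCG step (toy choice)
        bits.append(state >> 30)                       # top bit of the state
    return bits

# Aspect 2 (schematically): a candidate distinguisher.  A pseudorandom
# sequence should fool all efficient tests; here is just one such test.
def bias_test(bits: list) -> float:
    """Fraction of ones; a value far from 1/2 distinguishes from random."""
    return sum(bits) / len(bits)

bits = stretch(42, 64)
print(len(bits), bias_test(bits))
```

The point of the sketch is the division of labor: the stretcher, the class of distinguishers, and the complexity bound on generation are three independent knobs, and the different incarnations of the paradigm set them differently.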

Here is a report on Levin's reaction to my presentation of Mazor and Pass's version of the HILL construction. As I said in my presentation, Levin does not like the HILL construction, believing that there should be a simpler construction that does not suffer from a significant security loss (as HILL does). Levin presented his decades-old observation that simpler constructions suffice in case the one-way function $f$ is `almost' one-to-one, and conjectured that `natural' extraction from $(f(x),x)$ (i.e., $F(desc(E),x)=(desc(E),E(f(x)x))$, where $desc(E)$ is a description of the extractor $E$) preserves the infeasibility of inversion. In order to avoid the question of what is natural, Levin suggests concrete instantiations such as the following (in which the elements reside in a large finite field): $F(r,x) = (r,f(x)+rx)$, where the operations are those of the field. (Note the equivalence to the more redundant form $F(r,r',r'',x) = (r,r',r'',r'f(x)+rx+r'')$.)
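To make the concrete instantiation tangible, here is a toy rendering over a prime field. The choice of $f$ below (modular exponentiation, whose inversion is the discrete-logarithm problem) is my own illustrative stand-in for a candidate one-way function, not part of Levin's suggestion:

```python
# Toy rendering of F(r,x) = (r, f(x) + r*x) over the prime field GF(P).

P = 2**31 - 1   # a Mersenne prime, so field arithmetic is plain mod-P
G = 7           # fixed base for the stand-in one-way function

def f(x: int) -> int:
    """Stand-in candidate one-way function: x -> G^x mod P
    (inverting it is the discrete-logarithm problem)."""
    return pow(G, x, P)

def F(r: int, x: int) -> tuple:
    """Levin's concrete instantiation: output (r, f(x) + r*x) in GF(P)."""
    return (r, (f(x) + r * x) % P)

print(F(123456789, 987654321))
```

Note that $r$ is output in the clear, as in the definition; only the second coordinate mixes $f(x)$ with the seed-dependent term $rx$.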

I have recommended this work HERE, but wish to add some comments about the proof.

The proof partitions all possible lengths into segments, and for each segment considers a sequence of increasing lengths, denoted $n_0,n_1,\ldots,n_t$. Starting with a trivial hitting set $H_0$ for length $n_0$, for each $i$, searching the hitting set $H_i$ for a valid solution (wrt a fixed verification procedure) is reduced to the computation of a function $F_i$. Now, if this function is hard, then it is used in order to construct a hitting set $H_{i+1}$ for length $n_{i+1}$. Hence, if $F_i$ is hard, then we get a good hitting set for length $n_{i+1}$. Otherwise, we use the verification procedure as a distinguisher, and this yields (via a reconstruction algorithm) an efficient procedure for computing $F_i$. Lastly, at length $n_t$, we can afford to scan $H_t$, which means that either we solve the problem for length $n_t$ or there exists an $i$ such that $H_i$ is good but $H_{i+1}$ is not, which implies that we can scan $H_i$ (by computing $F_i$ as outlined above).
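The case analysis in this argument can be rendered as control flow. The sketch below is my own schematic; all the callables passed in (`is_hard`, `build_next`, `reconstruct`, `scan`) are stand-ins for the paper's non-constructive case analysis, so only the shape of the win-win is captured:

```python
# Schematic win-win control flow: at each step i, either F_i is hard
# (and yields a hitting set for the next length) or F_i is easy (and the
# reconstruction lets us scan the current hitting set for a solution).

def win_win_search(t, H0, is_hard, build_next, reconstruct, scan):
    """t+1 lengths n_0..n_t; H0 is a trivial hitting set for n_0.
    All four callables are stand-ins for the non-constructive argument."""
    H = H0
    for i in range(t):
        if is_hard(i):
            H = build_next(H, i)   # F_i hard -> hitting set for n_{i+1}
        else:
            # F_i easy -> reconstruction computes F_i efficiently,
            # so we can afford to scan the current H_i.
            return scan(H, reconstruct(i))
    return scan(H, None)           # at n_t a brute-force scan is affordable

# Toy run with trivial stubs, just to exercise the control flow:
grow = lambda H, i: H + [i + 1]
print(win_win_search(3, [0], lambda i: True, grow, lambda i: None,
                     lambda H, h: len(H)))
```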

Recognizing the internal-to-math (formula-specified) and the external (based on parameters in those formulas) aspects of mathematical objects greatly simplifies foundations. I postulate that external sets (not internally specified, treated as the domain of variables) are hereditarily countable and independent of formula-defined classes, i.e., carry finite Kolmogorov information about them. This allows the elimination of all non-integer quantifiers in Set Theory statements.

Talk Outline:

- Set Theory: Some History, Self-Referentials.
- Dealing with the Concerns, Cardinalities.
- Going at the Self-Referential Root.
- Radical Computer Theorist Hits Back.
- Some Complexity Background.
- Independence Postulate.
- Eliminating All Non-integer Quantifiers.
- A Problem: One-Way Functions.
- Takeout: the Issues.
- Takeout: a Way to Handle.
- Some More IP Applications.
- Appendix: ZFC Axioms.
- To Modify ZFC.

I can only offer an extremely superficial overview. Set theory distinguishes classes of objects (which I'd call `vulgar sets') from actual sets (which I'd call `formal sets') constructed by following the ZF axioms. The latter axiomatization was suggested in order to avoid the contradictions that arise from Cantor's simpler definition. Levin considers the ZF axioms rather *ad hoc*, and suggests to restrict the expressibility of the system in a different way, while (happily) sacrificing the theory of cardinals.
In this talk we will describe some of our efforts over the years, starting with joint works with Juba and Goldreich, trying to grapple with this question. There may even be hints of answers.

*In addition to touching on a theme close to Leonid Levin's works (universality), this series of works was also partially inspired by many Levinisms that challenge widely adopted notions in our communications (e.g., "what should be the ideal length of a paper?"). I hope to use this joyous occasion to re-raise all the controversies!*

The talk had several parts that address issues that are crucial to real-life communication (esp., communication between humans). These include (1) the lack of a universal language that may be used in communication, and (2) the reliance of communication on a huge amount of shared context, which allows for far shorter communication. Focusing on issue (2), note that issue (1) enters here because there is no perfect agreement about the shared context.

In the briefly surveyed works, which use the framework of communication complexity as their starting point (but with the intention of providing efficient protocols rather than lower bounds), the shared context was modeled by either the function $f$ to be computed, the shared randomness available to the parties, or a priori knowledge of the joint input distribution. In each of these cases, the studies focus on the case that the parties start with an approximate (rather than perfect) agreement on the shared context, and a key question is how the efficiency deteriorates with the approximation error.
