I wish to highlight one aspect of this work,
which I find extremely interesting.
It is that Avishay transforms average-case hardness with respect to
formulas of a given size into worst-case hardness for much larger formulas.
*This is a result of an unprecedented nature.*
Typically, transformations of hardness come at a cost of a lower
complexity of the computational devices being fooled (or failing).
This is an artifact of the proof by contradiction, which reduces
the violation of the hypothesis to the violation of the claim,
while this reduction (as any standard one) increases the complexity.
In contrast, Avishay shows a transformation between notions of hardness
that increases the complexity of the devices that are being fooled (or failing),
and his proof by contradiction decreases the complexity of the device
that violates the claim, thus obtaining a sufficiently smaller device
that violates the hypothesis. Indeed, this is achieved while
going from a strong notion of hardness (i.e., average-case hardness)
to a weaker one (i.e., worst-case hardness), but it is
remarkable nevertheless (and one cannot expect such a complexity
reduction without moving to a weaker notion of hardness).

As shown in the paper, the foregoing transformation is a powerful new tool with great potential for improving the state of the art regarding formula lower bounds, as demonstrated in the following toy example. Recall that this transformation (essentially) amplifies a lower bound of $s$ on the size of formulae that approximate a Boolean function with advantage $\epsilon>2^{-s}$ to a lower bound of ${\Omega}(\log(1/\epsilon))\cdot s$ on the size of formulae that compute the function. Starting with an almost trivial lower bound of $n-1$ on the size of any formula that approximates the parity of $n$ values with any positive advantage, one obtains the classical lower bound of ${\Omega}(n^2)$ on the size of any formula computing the parity.
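For concreteness, the arithmetic behind the parity example can be spelled out as follows (this instantiation is mine, not taken verbatim from the paper):

```latex
% The smallest positive advantage of a formula over random guessing is
% agreement on a single extra input out of $2^n$, i.e., $\epsilon = 2^{-n}$,
% so $\log(1/\epsilon) = n$.  Plugging in $s = n-1$:
\[
  \Omega(\log(1/\epsilon)) \cdot s
  \;=\; \Omega\!\left(\log 2^{n}\right) \cdot (n-1)
  \;=\; \Omega(n) \cdot (n-1)
  \;=\; \Omega(n^{2}).
\]
```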

**Comment (25/12/2016)**:
Some researchers have told me that they believe that
my claim of *unprecedented nature* is overstated in light of prior
works by Hajnal, Maass, Pudlak, Szegedy, and Turan ("Threshold circuits of
bounded depth", JCSS, April 1993) and by Buresh-Oppenheim and Santhanam
("Making Hard Problems Harder", ECCC, 2006).
In particular, it was pointed out that Avishay writes "The reduction
is in the spirit of the discriminator lemma of Hajnal et al. [HMP+93]
that shows that average-case lower bounds for thresholds circuits of depth d
implies worst-case lower bounds for thresholds circuits of depth d+1."
(end of Sec 1.0).
I believe that Avishay's text overstates the relation, since [HMP+93]
treat circuits that are composed in a way that supports that lemma;
that is, they consider a depth $d+1$ threshold circuit and the
average-case behavior of subcircuits that feed the output (threshold) gate.
As for the work of [BS06], firstly, it transforms the function for which the
average-case lower bound is given into a more complex one for which the
worst-case bound is shown, whereas Avishay maintains the same function.
More importantly, their proof does not increase the absolute size for which
lower bounds are proved but rather slightly decreases it (as usual);
however, their proof also reduces the input length, and so the size increases
as a function of the input length provided that the initial lower bound is
large enough. In other words, their proof by contradiction yields a slightly
larger circuit, not a smaller one as in Avishay's work (and in [HMP+93]).

A de Morgan formula over Boolean variables $x_1, \ldots, x_n$ is a binary tree whose internal nodes are marked with AND or OR gates and whose leaves are marked with variables or their negations. We define the size of the formula as the number of leaves in it. Proving that some explicit function (in P or NP) requires large formulas is a central open question in computational complexity. While we believe that some explicit functions require exponential formula size, currently the best lower bound for an explicit function is the $\tilde\Omega(n^3)$ lower bound for Andreev's function.
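The definition above can be made concrete with a minimal sketch (my own illustration, not code from the paper): a de Morgan formula as a binary tree whose leaves hold literals, with size counted as the number of leaves.

```python
# A minimal sketch of a de Morgan formula as a binary tree.
# Leaves hold a literal (a variable index, possibly negated);
# internal nodes hold an AND or OR gate.

from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    var: int        # index into the input x
    negated: bool   # True means the leaf reads NOT x[var]

@dataclass
class Gate:
    op: str                 # "AND" or "OR"
    left: "Formula"
    right: "Formula"

Formula = Union[Leaf, Gate]

def size(f: Formula) -> int:
    """Formula size = number of leaves."""
    if isinstance(f, Leaf):
        return 1
    return size(f.left) + size(f.right)

def evaluate(f: Formula, x: list) -> bool:
    if isinstance(f, Leaf):
        v = bool(x[f.var])
        return (not v) if f.negated else v
    l, r = evaluate(f.left, x), evaluate(f.right, x)
    return (l and r) if f.op == "AND" else (l or r)

# XOR of two bits as a size-4 de Morgan formula:
# (x0 AND NOT x1) OR (NOT x0 AND x1)
xor2 = Gate("OR",
            Gate("AND", Leaf(0, False), Leaf(1, True)),
            Gate("AND", Leaf(0, True), Leaf(1, False)))
```

Note that negations appear only at the leaves, which is exactly the de Morgan normal form described above.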

In this work, we show how to trade average-case hardness in exchange for size. More precisely, we show that if a function $f$ cannot be computed correctly on more than $\frac{1}{2} + 2^{-k}$ of the inputs by any formula of size at most $s$, then computing $f$ exactly requires formula size at least $\tilde\Omega(k) \cdot s$. The proof relies on a result from quantum query complexity by Reichardt [SODA, 2011]. Due to the work of Impagliazzo and Kabanets [CC, 2016], this tradeoff is essentially tight.

As an application, we improve the state of the art lower bounds for explicit functions by a factor of $\tilde\Omega(\log n)$.

Additionally, we present candidates for explicit simple functions that we believe have formula complexity $\tilde\Omega(n^4)$. In particular, one such function was studied in [Goldreich-Tal, CC, 2016] and is given by $F(x,y,z) = \sum_{i=1}^{n}\sum_{j=1}^{n} x_i y_j z_{i+j} \bmod 2$. Based on our main theorem, we give non-trivial super-quadratic lower bounds for these functions.
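A brute-force evaluator makes the candidate function concrete (a sketch; the 0-based indexing convention is my assumption, shifting the paper's $i,j \in \{1,\ldots,n\}$):

```python
# Evaluate F(x,y,z) = sum_{i,j} x_i y_j z_{i+j} mod 2.
# With 0-based indices i, j in {0, ..., n-1}, the index i+j ranges over
# {0, ..., 2n-2}, so x and y have length n and z has length 2n-1.

def F(x, y, z):
    n = len(x)
    assert len(y) == n and len(z) == 2 * n - 1
    total = 0
    for i in range(n):
        for j in range(n):
            total ^= x[i] & y[j] & z[i + j]  # XOR accumulates the mod-2 sum
    return total
```

For example, with $n = 2$, `F([1, 1], [1, 0], [1, 0, 1])` sums the single surviving term $x_0 y_0 z_0 = 1$ and returns 1.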

See ECCC TR16-179.
