While the current model may be adequate in many settings,
I am intrigued by an alternative model, which one may call *generic*.
Actually, there are variants according to how the restriction is chosen
(i.e., at random or by the learner).
In any case, for a chosen restriction $\rho$,
the learner gets a device computing $f_\rho$
(i.e., the function $f$ restricted by $\rho$),
rather than a device derived from $D$
(i.e., a device that computes $f$)
by restriction, simplification, and some "rearrangements" (see below).

Of course, this generic model may be very hard to analyze; still, there are some trivial results and immediate questions regarding whether one can do any better.

For example, if one chooses restrictions that fix log-many bits, then one can obtain a circuit for the function by obtaining poly-many circuits for restricted functions. This is trivial if one selects the restrictions, but it is also possible if one only gets random restrictions (where all restrictions fix log-many bits). Note that, with very high probability, a random set of $n 2^L$ restrictions, each fixing $L$ bits, will "cover" all $n$-bit strings; on the other hand, testing whether or not a given set of restrictions covers all $n$-bit strings is equivalent to SAT.
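As a sanity check on the coverage claim, the following small simulation (my own sketch, with hypothetical parameter choices; none of the names below come from the paper) draws $n 2^L$ random restrictions, each fixing $L$ of $n$ bits, and measures the fraction of $n$-bit strings consistent with at least one restriction.

```python
import itertools
import random

def random_restriction(n, L, rng):
    """Pick L positions uniformly at random and fix each to a random bit."""
    positions = rng.sample(range(n), L)
    return {i: rng.randint(0, 1) for i in positions}

def covers(restriction, x):
    """A restriction covers x iff x agrees with every fixed bit."""
    return all(x[i] == b for i, b in restriction.items())

def covered_fraction(n, L, num_restrictions, seed=0):
    """Fraction of all n-bit strings covered by some random restriction."""
    rng = random.Random(seed)
    restrictions = [random_restriction(n, L, rng) for _ in range(num_restrictions)]
    total = 0
    for x in itertools.product((0, 1), repeat=n):
        if any(covers(r, x) for r in restrictions):
            total += 1
    return total / 2**n

# With m = n * 2^L restrictions, a fixed string is missed with probability
# roughly (1 - 2^-L)^m = exp(-n), so almost all strings get covered.
n, L = 8, 3
print(covered_fraction(n, L, n * 2**L))
```

The union bound over all $2^n$ strings then gives failure probability about $(2/e)^n$, matching the "with very high probability" claim.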

From the foregoing perspective, the results presented in the paper are motivated as a small step towards dealing with the "generic" model. That is, at least in some computational models, the "rearrangements" (e.g., flipping sub-formulas) that are considered in the paper may be conceptualized as a subset of a set of modifications that allow one to move between any two devices that compute the same function.

We introduce a notion of non-black-box access to computational devices (such as circuits, formulas, decision trees, and so forth) that we call restriction access. Restrictions are partial assignments to input variables. Each restriction simplifies the device, and yields a new device for the restricted function on the unassigned variables. On one extreme, full restrictions (assigning all variables) correspond to evaluating the device on a complete input, yielding the result of the computation on that input, which is the same as standard black-box access. On the other extreme, empty restrictions (assigning no variables) yield a full description of the original device. We explore the grey-scale of possibilities in the middle. Different variants of restriction access, in which the values to restricted variables, as well as the subset of free (unassigned) variables, are generated deterministically or randomly, in friendly or adversarial fashions may naturally suit a variety of situations in computational learning, computational biology, automated proofs, cryptography and complexity theory.
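To make the simplification step concrete, here is a minimal sketch (my own representation, not the paper's) of how a restriction acts on a DNF formula, where the formula is a list of terms and each term is a dict mapping a variable index to the bit its literal requires.

```python
def restrict_dnf(terms, rho):
    """Apply a partial assignment rho (dict: variable -> bit) to a DNF.

    Each term is a dict: variable -> bit (the literal x_var if bit is 1,
    its negation if bit is 0).  Returns the simplified DNF over the
    unassigned variables, or True if the formula becomes identically
    true under rho.
    """
    new_terms = []
    for term in terms:
        # Drop the term if rho falsifies one of its literals.
        if any(var in rho and rho[var] != bit for var, bit in term.items()):
            continue
        # Keep only the literals on unassigned variables.
        reduced = {var: bit for var, bit in term.items() if var not in rho}
        if not reduced:          # every literal satisfied: formula is true
            return True
        new_terms.append(reduced)
    return new_terms

# (x0 AND NOT x1) OR (x1 AND x2), restricted by x1 = 0:
# the second term is falsified, the first loses its literal on x1.
print(restrict_dnf([{0: 1, 1: 0}, {1: 1, 2: 1}], {1: 0}))  # [{0: 1}]
```

A full restriction (assigning every variable) returns either True or the empty list, i.e., just the value of the formula, which is exactly the black-box extreme described above.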

Focusing on learning theory, we show that restriction access provides a setting in which one can obtain positive results for problems that have resisted attack in the black-box access model. We introduce a PAC-learning version of restriction access, and show that one can efficiently learn both decision trees and DNF formulas in this model. These two classes are not known to be learnable in the PAC model with black-box access.

Our DNF learning algorithm is obtained by a reduction to a general learning problem we call population recovery, in which random samples from an unknown distribution become available only after a random part of each was erased. More specifically, assume that every member of an unknown population is described by a vector of values (to some fixed set of attributes). The algorithm has access to random samples, each of which is a random member of the population, whose values are given only for a random subset of the attributes! Analyzing our efficient algorithm for fully recovering the unknown population calls for understanding another basic problem of independent interest: robust local inversion of matrices. The population recovery algorithm and construction of robust local inverses for some families of matrices are the main technical contributions of the paper.
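The sampling model just described can be sketched as follows (a hypothetical illustration of the model only; the function and parameter names are mine, and this is not the recovery algorithm itself).

```python
import random

def draw_sample(population, weights, keep_prob, rng):
    """One sample in the lossy population-recovery model: a random member
    is drawn according to the given distribution, and each of its
    attributes survives independently with probability keep_prob;
    erased attributes are reported as None."""
    member = rng.choices(population, weights=weights, k=1)[0]
    return tuple(v if rng.random() < keep_prob else None for v in member)

# Two members, three binary attributes, with (hypothetical) weights 0.7/0.3.
rng = random.Random(0)
population = [(0, 1, 1), (1, 0, 1)]
weights = [0.7, 0.3]
samples = [draw_sample(population, weights, 0.5, rng) for _ in range(5)]
print(samples)
```

The task is to recover both the vectors and their weights from such partially-erased samples, which is what makes the reduction from DNF learning under restriction access possible.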
