Testing by Implicit Learning (survey)

by Rocco Servedio.

Oded's comments

I am fascinated by the implicit learning paradigm, which pivots on emulating a learning algorithm for $k$-variable functions using $n$-bit long samples that have $k$ influential variables. The emulation proceeds by randomly partitioning the variables into $O(k^2)$ sets, identifying $k$ sets that are each likely to contain a single influential variable, and converting the $n$-bit long samples into corresponding $k$-bit samples. Specifically, each $n$-bit sample is converted by determining, for each influential set, whether the subset of its variables labeled 1 or the subset labeled 0 is influential, and setting the corresponding bit accordingly. Once the learning algorithm outputs a hypothesis for the $k$-bit function, this hypothesis is tested using adequate queries (which are again emulated correspondingly).
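The sample-conversion step can be sketched as follows. This is my own illustrative toy, not the paper's algorithm: the influence test of Fischer et al. is replaced by a stand-in `is_influential` that simply knows the influential variables, and the partition uses $4k^2$ buckets as a concrete choice of $O(k^2)$.

```python
import random

def make_converter(n, k, influential_vars, seed=0):
    """Sketch of converting n-bit samples to k-bit samples.

    `influential_vars` is assumed known here only for illustration;
    the actual algorithm detects influence via queries (Fischer et al.).
    """
    rng = random.Random(seed)
    num_buckets = 4 * k * k                       # O(k^2) random sets
    bucket_of = [rng.randrange(num_buckets) for _ in range(n)]

    def is_influential(subset):
        # Stand-in for the query-based influence test.
        return any(v in influential_vars for v in subset)

    # Identify the k buckets that each (hopefully) contain exactly one
    # influential variable.
    influential_buckets = sorted(
        b for b in range(num_buckets)
        if is_influential([i for i in range(n) if bucket_of[i] == b])
    )
    # Fails if two influential variables collided in one bucket.
    assert len(influential_buckets) == k

    def convert(x):
        """Map an n-bit sample x (list of 0/1) to a k-bit sample."""
        y = []
        for b in influential_buckets:
            ones = [i for i in range(n) if bucket_of[i] == b and x[i] == 1]
            # The influential variable of bucket b has value 1 iff the
            # 1-labeled subset of the bucket is influential (otherwise
            # the 0-labeled subset is, and the bit is 0).
            y.append(1 if is_influential(ones) else 0)
        return y

    return convert
```

Each output bit reports the value of the (unknown) influential variable hiding in one bucket, which is exactly why the learner never needs to know which of the $n$ variables it is.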

The original abstract

We describe a general approach to testing classes of Boolean functions. The method extends the early observation of Goldreich, Goldwasser and Ron that any proper learning algorithm can be used as a testing algorithm. This observation by itself does not typically give rise to constant-query (independent of n, the number of input variables) testing algorithms because virtually every interesting class of n-variable functions requires a superconstant (as a function of n) number of examples for proper learning. Our "implicit learning" approach works essentially by running a proper learning algorithm over k-dimensional examples where k is a function independent of n. The learning is "implicit" because the testing algorithm never actually knows which k of the n input variables are the ones that are being used for learning; this is accomplished using ingredients from the junta test of Fischer et al.

Our approach can be used to test whether a Boolean function on n input variables has a "concise representation" in several different senses. Specifically, we obtain property testers that make poly(s/\epsilon) queries (independent of n) for Boolean function classes such as s-term DNF formulas (answering a question posed by Parnas et al.), size-s decision trees, size-s Boolean formulas, and size-s Boolean circuits. The method extends to classes of Boolean functions that have sparse Fourier representations, and also to non-Boolean classes such as s-sparse polynomials, size-s algebraic circuits, and size-s algebraic computation trees over finite fields of bounded size.

While the algorithms obtained by our approach typically have poly(s) query complexity, their running time is usually exponential in s because of a brute-force search that is used to do the learning. We have been able to achieve a better running time of poly(s) for the class of s-sparse GF(2) polynomials, using more sophisticated computationally efficient algorithms from learning theory for learning s-sparse GF(2) polynomials.

Relevant papers can be found at
