# Evaluation May Be Easier Than Generation

### Moni Naor

### Abstract

Kearns et al. (STOC 94) defined two notions for learning a distribution
**D**. The first is with a *generator*, where the learner presents a generator
that outputs a distribution identical to, or close to, **D**. The other is
with an *evaluator*, where the learner presents a procedure that, on input
*x*, correctly evaluates (or approximates) the probability that *x*
is generated by **D**. They showed an example where efficient learning
by a generator is possible, but learning by an evaluator is computationally
infeasible.
Though it may seem that generation is, in general, *easier* than
evaluation, in this talk we show that the converse may be true: we provide
a class of distributions where efficient learning with an evaluator is
possible, but coming up with a generator that approximates the given distribution
is infeasible. We also show that some distributions may be learned (with
either a generator or an evaluator) to within any *epsilon > 0*, but
the learned hypothesis must be of size proportional to *1/epsilon* (and
not *log(1/epsilon)*, which is always the case in the distribution-free
PAC model).
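To make the two notions concrete, here is a minimal sketch (a toy example of my own, not taken from the talk) for a product distribution over bit strings, where both a generator and an evaluator happen to be trivially efficient; the point of the talk is that for some distribution classes only one of the two can be produced efficiently.

```python
import random
from itertools import product

# Toy illustration (names and the chosen distribution are illustrative):
# the two kinds of learning output for a distribution D over {0,1}^n in
# which each bit is independently 1 with probability p.

def make_generator(n, p):
    """A *generator* for D: a procedure that outputs samples x ~ D."""
    def sample():
        return tuple(1 if random.random() < p else 0 for _ in range(n))
    return sample

def make_evaluator(n, p):
    """An *evaluator* for D: a procedure mapping x to Pr_D[x]."""
    def prob(x):
        ones = sum(x)
        return (p ** ones) * ((1 - p) ** (len(x) - ones))
    return prob

gen = make_generator(4, 0.25)
ev = make_evaluator(4, 0.25)
x = gen()
print(x, ev(x))  # one sample from D and its exact probability under D

# Sanity check: the evaluator's probabilities over {0,1}^4 sum to 1.
assert abs(sum(ev(y) for y in product([0, 1], repeat=4)) - 1.0) < 1e-9
```

For this product distribution both objects are small and fast; the separations in the talk concern classes where one of the two representations is necessarily hard to compute.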

