Learning to Impersonate
or
Simulating Secret Knowledge: The Learnability of Adaptively Changing Distributions

Moni Naor and Guy N. Rothblum

Abstract:

Consider the T-1000 robot in the movie Terminator 2: Judgment Day. The robot can "imitate anything it touches... anything it samples," which, if it were possible, would be an astonishing computational learning achievement. The robot's goal is to perfectly impersonate objects, but the objects it is learning to impersonate (namely people) are highly complex, with probabilistic behavior that changes adaptively in different settings. In this work we consider this type of computational learning: we formalize a new model for learning adaptively changing distributions (ACDs), which is relevant both to the field of computational learning theory and to the field of cryptography.

Consider the task of learning to simulate a stochastic object that holds secret knowledge. The object generates outputs according to a distribution that depends on its secret knowledge and can change arbitrarily (and adaptively) over time. A learning algorithm that does not have the secret knowledge sees samples from the object's (adaptively changing) output distribution and would like to learn to simulate it. We present a learning algorithm whose sample complexity is linear in the number of secret bits and polynomial in the error and confidence parameters. We also show that computationally efficient learning of ACDs is always possible if and only if one-way functions do not exist.
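To make the setting concrete, the sketch below is a toy Python rendering of the framework described above, not the paper's algorithm: an object with hidden state whose output distribution shifts adaptively after every sample, and a naive learner that observes samples and then tries to simulate them. The interface and class names (ACD, BiasedCoinACD, EmpiricalSimulator) are illustrative assumptions.

```python
# Illustrative sketch only: a minimal interface for an adaptively changing
# distribution (ACD) and a toy learner. This is not the algorithm from the paper.
import random
from abc import ABC, abstractmethod


class ACD(ABC):
    """A stochastic object with secret state; its output distribution may
    change adaptively after every sample it emits."""

    @abstractmethod
    def sample(self) -> int:
        """Draw one output and (possibly) update the internal secret state."""


class BiasedCoinACD(ACD):
    """Toy example: a coin whose secret bias drifts after each sample."""

    def __init__(self, secret_bias: float):
        self._bias = secret_bias  # the "secret knowledge"

    def sample(self) -> int:
        out = 1 if random.random() < self._bias else 0
        # Adaptive change: the distribution shifts depending on its own output.
        self._bias = min(1.0, max(0.0, self._bias + (0.01 if out else -0.01)))
        return out


class EmpiricalSimulator:
    """Naive learner: observes samples, then simulates via the empirical
    frequency of ones. It never sees the secret bias itself."""

    def __init__(self):
        self._ones = 0
        self._total = 0

    def observe(self, x: int) -> None:
        self._ones += x
        self._total += 1

    def simulate(self) -> int:
        p = self._ones / self._total if self._total else 0.5
        return 1 if random.random() < p else 0


if __name__ == "__main__":
    acd = BiasedCoinACD(secret_bias=0.7)
    learner = EmpiricalSimulator()
    for _ in range(1000):          # learning phase: observe the ACD's outputs
        learner.observe(acd.sample())
    print([learner.simulate() for _ in range(10)])  # attempt to impersonate
```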

This is the full version of the ICML 2006 paper.
