Exploiting Computationally Bounded Adversaries in New Domains

by Jad Silbak

Oded's comments

Again (see prior choice), I was happy to obtain a good overview of this research direction as well as to get reminded of some classics. The point is that error-correcting codes and randomness extraction are typically viewed and studied as combinatorial (or information-theoretic) objects, where the computational (or rather algorithmic) aspect refers only to their proper operation. In contrast, Jad promoted revisiting these notions, as well as others, while relaxing their original definitions and using computational analogues of them. Indeed, this suggestion was originally made a few decades ago, but it has not been pursued extensively so far, and it is hoped that the recent series of works will change this sorry state of affairs.

In the context of coding theory, Jad's focus was on narrowing the gap between the random-noise (Shannon) model and the adversarial-noise (Hamming) model. The main technique here is sieving the output of a list decoder by a (hashing) mechanism that makes it infeasible for a computationally bounded channel to cause the new decoder to output more than a single codeword (within a given distance from the corrupted word).
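
The following is a minimal, schematic sketch of this sieving idea, not the actual construction from the talk. The names toy_list_decode and TAG_LEN, and the use of SHA-256 over a public seed as the sieve, are illustrative assumptions; the real schemes use carefully chosen seeded/keyed hash families with provable guarantees against computationally bounded channels, composed with an explicit list-decodable code.

```python
# Schematic sketch (assumed, simplified): sieve a list decoder's candidates
# by a hash check, so that at most one candidate survives.

import hashlib
from typing import List, Optional

TAG_LEN = 8  # bytes of hash appended to each message (illustrative choice)


def tag(seed: bytes, message: bytes) -> bytes:
    """Short hash of the message under a public random seed."""
    return hashlib.sha256(seed + message).digest()[:TAG_LEN]


def encode(seed: bytes, message: bytes) -> bytes:
    """Sender: append the tag; a list-decodable outer code would be applied
    on top of this in a real scheme (omitted in this toy sketch)."""
    return message + tag(seed, message)


def toy_list_decode(received: bytes) -> List[bytes]:
    """Stand-in for a list decoder: returns a short list of candidate
    codewords.  Here we just return the received word plus a corrupted
    variant, merely to exercise the sieving step below."""
    flipped = bytes([received[0] ^ 0xFF]) + received[1:]
    return [received, flipped]


def sieve(seed: bytes, candidates: List[bytes]) -> Optional[bytes]:
    """Keep only candidates whose embedded tag verifies.  The point of the
    computational setting is that a bounded channel cannot produce a
    corruption under which two distinct candidates pass this check, so the
    decoder can output the unique survivor."""
    survivors = []
    for cand in candidates:
        msg, t = cand[:-TAG_LEN], cand[-TAG_LEN:]
        if t == tag(seed, msg):
            survivors.append(msg)
    return survivors[0] if len(survivors) == 1 else None


if __name__ == "__main__":
    seed = b"public-random-seed"
    codeword = encode(seed, b"hello world")
    print(sieve(seed, toy_list_decode(codeword)))  # b'hello world'
```

In this toy run the corrupted candidate fails the hash check, so the sieve returns the single correct message; the actual works establish that no efficient channel can make two candidates survive.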

Original abstract

A central insight of modern cryptography is that to guarantee the security and integrity of a task, it often suffices to defend against computationally bounded adversaries. This perspective aligns with how we model real-world attackers and can enable outcomes that are provably impossible against unbounded adversaries. Although this idea has long been a driving force in cryptography and complexity theory, it has not been fully exploited in several natural and fundamental problems that have traditionally been studied in the unbounded-adversary model.

In this talk, we show that adopting this computational perspective enables us to surpass information-theoretic limits in three classic settings: communication, randomness, and privacy.


Back to list of Oded's choices.