I liked the perspective offered in this talk, according to which
various computationally bounded channels are to be viewed
as a middle ground between the overly simplified average-case notion
of a Binary Symmetric Channel and the worst-case notion of
an unrestricted (malicious-flavored) channel that flips
a bounded number of bits (attributed to Hamming).
The known gap between the rates achievable over
these two extreme types of channels raises the question
of which types of intermediate channels allow
meeting the higher rate of the most lenient channel (i.e., the BSC).
Furthermore, *for which settings can the rate bound
be matched by an efficient construction?*
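
For concreteness, here is the standard quantitative picture behind that gap (recalled from memory, not taken from the talk or the paper); $H$ denotes the binary entropy function.

```latex
% The binary entropy function and the rate gap between the two extreme channel types
% (binary alphabet, error fraction p; recalled from memory, not from the paper).
\[
  H(p) = -p\log_2 p - (1-p)\log_2(1-p), \qquad
  \mathrm{BSC}_p \ \text{(Shannon)}: \ R = 1 - H(p).
\]
\[
  \text{Hamming channel, unique decoding of a $p$-fraction of adversarial flips:}\quad
  1 - H(2p) \;\le\; R^{*}(p) \;<\; 1 - H(p),
  \qquad R^{*}(p) = 0 \ \text{for } p \ge 1/4.
\]
% E.g., at p = 0.1: 1 - H(0.1) is about 0.53, while 1 - H(0.2) is about 0.28.
```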

The latter question is the actual focus of this work, which considers the case in which the computational bound is captured by (concrete) bounds on the size of circuits that determine the corrupted locations as a function of the transmitted (randomized) codeword.
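
In symbols, and as I understand the setting from this description (the notation below is mine, not the paper's):

```latex
% A size-bounded channel maps the transmitted (randomized) codeword z in {0,1}^n
% to an error vector of relative Hamming weight at most p; the receiver gets z XOR e.
\[
  \mathsf{Chn}\colon \{0,1\}^{n} \to \{0,1\}^{n}, \qquad
  e = \mathsf{Chn}(z), \quad \mathrm{wt}(e) \le p\,n, \qquad
  \text{received word} = z \oplus e,
\]
\[
  \text{where } \mathsf{Chn} \text{ is computable by a circuit of a given (concrete) size bound.}
\]
```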

**Side comment:**
Let me seize the opportunity and comment that probabilistic encoding
in the context of error correction was first considered
(in the context of TOC) in the 1997 posting
*A Probabilistic Error-Correcting Scheme
that Provides Partial Secrecy*.

Guruswami and Smith (J. ACM 2016) considered codes for computationally bounded channels that modify at most a $p$-fraction of the bits of the codeword. This class of channels is significantly stronger than Shannon’s binary symmetric channel, which flips each bit independently with probability $p$, but weaker than Hamming’s channels, which may flip any $p$-fraction of the bits and are computationally unbounded.

Recently, there has been a growing body of work that aims to construct codes against channels that are computationally bounded (e.g., bounded-memory channels, channels computed by poly-size circuits). In this work, we consider bounded-size channels and construct codes that:

- Achieve the optimal rate of $1-H(p)$ (matching the rate of binary symmetric channels and beating the rate of Hamming channels).
- Are explicit, assuming that E does not have nondeterministic circuits of size $2^{\Omega(n)}$ (a schematic form of this assumption is sketched below).
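
For readers less used to this type of hypothesis, here is a schematic form of the assumption (my paraphrase of the standard formulation; the exact parameters are in the paper):

```latex
% "E is hard for exponential-size nondeterministic circuits" (schematic paraphrase).
\[
  \exists\, f \in \mathrm{E} = \mathrm{DTIME}\!\left(2^{O(n)}\right),\ \exists\, \delta > 0:\quad
  \text{for all sufficiently large } n,\ \text{no nondeterministic circuit of size } 2^{\delta n}
  \text{ computes } f \text{ on } \{0,1\}^{n}.
\]
```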

A key component in our codes (that we believe to be of independent interest) is a new complexity-theoretic notion of hard-to-sample functions (HTS). Loosely speaking, a function $f$ is HTS for circuits of size $n^c$ if for every randomized circuit $A$ of size $n^c$ that samples a distribution $(X,Y)$ such that $X$ has sufficiently large min-entropy, it holds that $\Pr[Y=f(X)]$ is small. This notion is inspired by a related notion introduced by Viola (SICOMP 2012) in which $X$ is the uniform distribution, and is similar in flavor to the definition of multi-collision-resistant hash functions. Building on classical works on “hardness amplification” (and using many additional tools and ideas from pseudorandomness), we construct HTS functions under hardness assumptions.
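
For reference, a schematic formalization of the HTS notion as I read it from the above (the min-entropy threshold $k$ and the error bound $\epsilon$ are placeholders for parameters that the abstract leaves unspecified):

```latex
% f is hard to sample (HTS) for circuits of size n^c, stated schematically.
\[
  f \ \text{is } (k,\epsilon)\text{-HTS for size } n^{c}
  \iff
  \forall\ \text{randomized circuits } A \text{ of size } n^{c} \text{ sampling } (X,Y):\
  H_{\infty}(X) \ge k \ \Longrightarrow\ \Pr[\,Y = f(X)\,] \le \epsilon.
\]
```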
