I was most fascinated by the replacement of the original conjecture, which referred to exponential-sized circuits with PH gates, by a strong version of the standard conjecture used in derandomization (i.e., there exists a positive constant $\beta$ such that some set in $E$ does not have size-$2^{\beta n}$ circuits). The stronger conjecture is that there exists a positive constant $\beta$ such that for every integer $i$ there is a set in $E$ that is not in $DTime(2^{i n})/2^{\beta n}$. (N.B.: the decoupling of the running time and the advice length.)
Another aspect that I found interesting is the notion of a multiplicative (error) extractor. Here it is required that the extractor's output hits each set with probability that is at most a factor $F$ times the density of the set. The original works focus on a multiplicative factor that is only slightly larger than 1 (e.g., $F=1.01$), but I was curious about larger factors (e.g., $F=n^3$). (An additive (negligible) error may be allowed on top of the multiplicative factor.) It turns out that this notion is closely related to the notion of a condenser with an entropy gap that is logarithmic in the foregoing factor.
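To fix notation (this is my own formalization, not taken from the talk): for $EXT:\{0,1\}^n \to \{0,1\}^m$, the multiplicative guarantee with factor $F$ and additive error $\epsilon$ reads
$$\Pr[EXT(X) \in S] \;\le\; F \cdot \frac{|S|}{2^m} + \epsilon \quad \text{for every } S \subseteq \{0,1\}^m.$$
Note that applying the guarantee (with $\epsilon=0$) to singletons $S=\{z\}$ says that every $z$ has probability at most $F \cdot 2^{-m}$ under $EXT(X)$; that is, $EXT(X)$ has min-entropy at least $m - \log_2 F$, which is exactly the condition that $EXT$ condenses $X$ with entropy gap $\log_2 F$.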
I was also happy to be reminded that the construction of (seedless) extractors for samplable sources implies circuit lower bounds. Specifically, a $t$-time computable extractor $EXT$ against sources sampled by size-$s$ circuits implies that $DTime(t)$ requires circuits of size $\Omega(s)$. This holds even if $EXT$ outputs a single bit with deviation/error (say) $1/10$ on samplable sources of min-entropy $n-1$.
This is proved by considering the source $X$ defined as follows: Uniformly select $x$ and $y$, and output $x$ if $EXT(x)=1$ (which happens with probability approximately $1/2$, since $EXT$ is almost unbiased on the uniform source) and $y$ otherwise. Then, each string occurs under $X$ with probability at most $2 \cdot 2^{-n}$, so $X$ has min-entropy at least $n-1$, whereas $\Pr[EXT(X)=1] \approx 3/4$, so $EXT$ fails on $X$. On the other hand, $X$ is sampled by a circuit of size $O(SIZE(EXT))$, which implies $SIZE(EXT) = \Omega(s)$.
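The foregoing calculation can be verified exactly by brute force for a toy choice of $EXT$. The sketch below (my own illustration; parity is merely a hypothetical stand-in for an extractor that is exactly balanced on the uniform source) enumerates the distribution of $X$ and checks both the min-entropy bound and the bias of $EXT(X)$.

```python
from math import log2

n = 10  # toy input length

def ext(x):
    # Hypothetical stand-in for EXT: the parity of x, which is
    # exactly balanced over uniform inputs (so p = 1/2 below).
    return bin(x).count("1") % 2

N = 2 ** n
p = sum(ext(x) for x in range(N)) / N  # Pr[EXT(x)=1] over uniform x

# Exact distribution of the source X: output x if EXT(x)=1, else an
# independent uniform y.  For each string z,
#   Pr[X=z] = [EXT(z)=1] * 2^{-n}  (the "x" branch)
#           + (1-p) * 2^{-n}       (the "y" branch).
pr = [(ext(z) / N) + (1 - p) / N for z in range(N)]

min_entropy = -log2(max(pr))                       # at least n-1
bias = sum(pr[z] for z in range(N) if ext(z) == 1)  # Pr[EXT(X)=1] = 3/4

print(min_entropy >= n - 1)  # True
print(bias)                  # 0.75, i.e., deviation 1/4 from uniform
```

Since the deviation $1/4$ exceeds the allowed error $1/10$, $EXT$ indeed fails on $X$, as claimed above.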
In an influential paper, Trevisan and Vadhan (FOCS 2000) introduced the notion of (seedless) extractors for samplable distributions (namely, distributions that can be sampled by a poly-size circuit). They showed that, under a strong complexity-theoretic hardness assumption, there are extractors for samplable distributions of large min-entropy $k=(1-\gamma) \cdot n$, for some small constant $\gamma \gt 0$.
In the talk, I will explain the motivation for extractors for samplable distributions, and the relation of this area to the well-known area of worst-case to average-case hardness amplification. I will give a high-level overview of the Trevisan-Vadhan construction, and will also explain some of the recent constructions.
See