The result outlined in the 2nd paragraph of the abstract essentially asserts that, under strong but reasonable intractability assumptions, it is possible to derandomize $BPTime[T]$ by $DTime[T']$ such that $T'(n)=n\cdot T(n)^{1.01}$.
One insight that I gained from the paper is the following. Most works on derandomization start by modelling a potential randomized decision procedure, and the potential inputs to it, by a non-uniform family of Boolean circuits. Consequently, derandomizing $BPTime[T]$ is reduced to constructing a pseudorandom generator that fools circuits of size $T$, which requires a seed of length at least $\log_2 T$; since the deterministic simulation enumerates all seeds and runs the $T$-time procedure on each, one cannot hope for a consequence better than $BPTime[T] \subseteq DTime[T^2]$. In contrast, as observed and capitalized upon in this work, such a modelling blurs the fact that we need only deal with a linear amount of non-uniformity (i.e., the $n$-bit input itself). Hence, $BPTime[T] \subseteq DTime[n\cdot T]$ is potentially possible (and is essentially achieved in this work, under some reasonable computational difficulty assumptions).
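To spell out the accounting behind this comparison (a back-of-the-envelope reading, not a calculation taken from the paper): a generator that fools circuits of size $T$ has at least $T$ seeds to enumerate, and each enumerated seed requires running the $T$-time procedure once, so the deterministic simulation costs at least
\[ 2^{\log_2 T(n)}\cdot T(n) \;=\; T(n)^2 . \]
In contrast, a generator that only needs to fool a uniform $T$-time procedure whose sole non-uniformity is its $n$-bit input may get away with a seed of length roughly $\log_2 n$, in which case enumerating all seeds (and evaluating the generator on each) can cost as little as
\[ 2^{\log_2 n}\cdot T(n)^{1+\eps} \;=\; n\cdot T(n)^{1+\eps} , \]
which is the bound stated in the second paragraph of the abstract below.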
Extending the classical ``hardness-to-randomness'' line of works, Doron et al. (FOCS 2020) recently proved that derandomization with near-quadratic time overhead is possible, under the assumption that there exists a function in $DTIME[2^n]$ that cannot be computed by randomized SVN circuits of size $2^{(1-\eps)n}$, for a small $\eps>0$.
In this work we extend their inquiry and answer several open questions that arose from their work. Our main result is that derandomization with almost no time overhead is possible, under a plausible hypothesis. Specifically, we show that probabilistic algorithms that run in time $T(n)$ can be deterministically simulated in time $n\cdot T(n)^{1+\eps}$, under a hypothesis that is formally incomparable to the one of Doron et al., but is arguably more standard: We assume that there exist non-uniformly secure one-way functions, and that for $\delta=\delta(\eps)$ and $k=k_T(\eps)$ there exists a problem in $DTIME[2^{kn}]$ that is hard for algorithms that run in time $2^{(k-\delta)n}$ and use $2^{(1-\delta)n}$ bits of advice. We also show that the latter hypothesis (or, more accurately, the relaxation of it that we actually use) is in fact necessary for obtaining the derandomization conclusion if one relies on a PRG construction (as is indeed our approach).
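Since the abstract states that the derandomization goes through a PRG construction, the following minimal Python sketch shows only the generic PRG-enumeration template that such a result instantiates; the toy decision procedure (randomized_majority), the placeholder generator (toy_prg), and all parameter choices below are mine, introduced purely for illustration, and none of the paper's actual construction or analysis is reproduced.

import random

def randomized_majority(x, coins, samples=64):
    """Toy two-sided-error decision procedure: decide whether the 0/1 string x
    contains more ones than zeros, by sampling positions indexed by bits read
    off the supplied coin tape."""
    n = len(x)
    it = iter(coins)
    ones = 0
    for _ in range(samples):
        idx = 0
        for _ in range(max(1, n.bit_length())):  # ~log2(n) coins per sample
            idx = (idx << 1) | next(it)
        ones += x[idx % n]
    return 2 * ones > samples

def toy_prg(seed, m):
    """Placeholder generator stretching a short seed to m bits.  This is just
    Python's Mersenne Twister and carries none of the hardness-based
    guarantees that the paper's PRG construction is assumed to provide."""
    rng = random.Random(seed)
    return [rng.getrandbits(1) for _ in range(m)]

def derandomize(alg, prg, x, coins_needed, seed_len):
    """Generic PRG-enumeration template: run alg on x with every pseudorandom
    coin tape prg(seed, coins_needed) and output the majority answer.  The
    deterministic cost is (number of seeds) times (one run of alg plus one
    generator evaluation)."""
    num_seeds = 2 ** seed_len
    votes = sum(alg(x, prg(seed, coins_needed)) for seed in range(num_seeds))
    return 2 * votes > num_seeds

if __name__ == "__main__":
    x = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1]
    n = len(x)
    coins_needed = 64 * max(1, n.bit_length())   # enough coins for one run
    print(derandomize(randomized_majority, toy_prg, x, coins_needed,
                      seed_len=n.bit_length())) # ~log2(n)-bit seeds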
For sub-exponential time functions $T(n)=2^{n^{o(1)}}$ we further improve the derandomization time to $n^{1+\eps}\cdot T(n)$, under a mildly stronger hypothesis. We also show that the multiplicative time overhead of $n$ is essentially optimal, conditioned on a counting version of the non-deterministic strong exponential-time hypothesis (i.e., on #NSETH). Nevertheless, we show that in the average-case setting a faster derandomization is possible: Under hypotheses similar to the ones in our main result, we show that for every $L\in BPTime[n^k]$ there exists a deterministic algorithm $A_L$ running in time $n^{k+\eps}$ such that for every distribution over $n$-bit strings that is samplable in time $n^k$ it holds that $\Pr_x[A_L(x)=L(x)]\geq 1-n^{-\omega(1)}$.
Lastly, we present an alternative proof for the result of Doron et al. using a proof paradigm that is both considerably simpler and more general; in fact, we show how to simplify the analysis of any construction that ``extracts randomness from a pseudoentropic string''. We use this simpler proof to extend their result, deducing a mildly slower derandomization (i.e., with cubic or quadratic overhead) from weaker hardness assumptions (i.e., for SVN circuits that do not use randomness).
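For readers unfamiliar with the paradigm named above, the following toy Python sketch shows what ``extracting randomness from a (pseudo)entropic string'' looks like in its simplest form; the Toeplitz-matrix hash below (toeplitz_extract) and all parameters are stand-ins of my own, chosen for brevity (actual derandomization constructions need extractors with logarithmically short seeds, so that all seeds can be enumerated), and the paper's simplified analysis of such constructions is not reproduced here.

import secrets

def toeplitz_extract(source_bits, seed_bits, out_len):
    """Seeded extractor via a Toeplitz-matrix hash over GF(2): the seed, of
    length len(source_bits) + out_len - 1, fixes a Toeplitz matrix M, and the
    output is M * source_bits (mod 2).  For a source with genuine min-entropy
    this family is a strong extractor (leftover hash lemma); for a string that
    is merely pseudoentropic the guarantee is only computational, which is the
    kind of analysis the paper simplifies."""
    k = len(source_bits)
    assert len(seed_bits) == out_len + k - 1
    output = []
    for i in range(out_len):
        bit = 0
        for j in range(k):
            # Toeplitz structure: the (i, j) entry depends only on i - j.
            bit ^= seed_bits[i - j + k - 1] & source_bits[j]
        output.append(bit)
    return output

if __name__ == "__main__":
    # A stand-in for a "pseudoentropic" string (here: truly random bits).
    source = [secrets.randbelow(2) for _ in range(256)]
    out_len = 64
    seed = [secrets.randbelow(2) for _ in range(out_len + len(source) - 1)]
    print("".join(str(b) for b in toeplitz_extract(source, seed, out_len)))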
Available from ECCC TR20-148.