I look forward to studying this survey, but, fearing that I will not be able to get to it in the near future, I am taking my chances and recommending it based on a superficial look. Yet, I don't feel that I am taking any risk here: the surveyed direction is most timely, and the author has made important contributions to it. More importantly, the author is genuinely interested in communicating the contents of this research direction to the complexity theory community at large (and even beyond it, to TOC at large), and possesses the relevant communication skills.
Update (30-Aug-22): I just finished reading the survey; so, in retrospect, my fear of not getting to it soon proved unfounded. But my speculation regarding the contents of the survey was correct. It is a great read: both very educational and fun.
Is randomness ever necessary for space-efficient computation? It is commonly conjectured that L = BPL, meaning that halting decision algorithms can always be derandomized without increasing their space complexity by more than a constant factor. In the past few years (say, from 2017 to 2022), there has been exciting progress toward proving this conjecture. Thanks to recent work, we have new pseudorandom generators (PRGs), new black-box derandomization algorithms (generalizations of PRGs), and new non-black-box derandomization algorithms. This article is a survey of these recent developments, organizing the underlying techniques into four overlapping themes.
Available from ECCC TR22-121.