This paper initiates a study of the simultaneous time-and-space complexity of composition. Its starting point is the pair of standard composition lemmas used in the context of space complexity (i.e., naive sequential composition and emulative composition). While the authors seem surprised that this question was not studied before, I am less surprised, but I do commend them for initiating this study.
Loosely speaking, focusing on logarithmic space, they show that for fast algorithms (e.g., nearly linear-time, log-space algorithms) composition comes with a cost (i.e., an overhead). In contrast, they show that extending this result to arbitrary polynomial-time algorithms would imply breakthrough derandomization results (for BPL).
The first result relies on almost-quadratic lower bounds on the time-space trade-off of SORTING (of integers with no duplications). In contrast, they decompose SORTING into two stages that can each be performed in logarithmic space and time $\tilde{O}(n^{3/2})$. In the first stage, for each $i\in[t]$, they find the $(i\cdot n/t)$-th smallest element (i.e., the $i$-th quantile) and list all elements that are not larger than this quantile but are larger than the previous quantile. Hence, in (log space and) time $t\cdot\tilde{O}(n)$, they obtain $t$ lists, each of size $m=n/t$, such that each list contains larger elements than the previous one. In the second stage, for each $i\in[t]$, they sort the $i$-th list (in log space and $\tilde{O}(m^2)$ time). Setting $t=\sqrt{n}$, both stages run in time $\tilde{O}(n^{3/2})$.
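The two-stage decomposition can be sketched as follows. This is an illustrative Python rendition (all function names are mine, not the paper's); in particular, `select` below finds a quantile by a naive rank-counting loop, standing in for the paper's log-space, $\tilde{O}(n)$-time selection, so the sketch mirrors only the control flow and not the exact time bound.

```python
import math

def rank(a, x):
    # Number of elements <= x, computed in one streaming pass
    # (the log-space-friendly primitive: a counter plus a scan).
    return sum(1 for y in a if y <= x)

def select(a, k):
    # k-th smallest of a (elements assumed distinct, as in the paper).
    # Naive stand-in: test each candidate's rank by rescanning the input.
    for x in a:
        if rank(a, x) == k:
            return x

def two_stage_sort(a):
    n = len(a)
    t = max(1, math.isqrt(n))   # t ~ sqrt(n) quantiles
    m = -(-n // t)              # block size, ceil(n/t)
    out, prev = [], float("-inf")
    for i in range(1, t + 1):
        # Stage 1: find the i-th quantile and collect the band
        # of elements between the previous quantile and this one.
        q = select(a, min(i * m, n))
        block = [x for x in a if prev < x <= q]
        # Stage 2: sort each small band independently.
        out.extend(sorted(block))
        prev = q
    return out
```

Note that only one band is materialized at a time; in the paper's actual algorithm even the band is re-derived by scanning, which is what keeps the space logarithmic.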
Given algorithms $A_1,A_2$ running in logspace and linear time, there are two basic ways to compute the composition $x\rightarrow A_2(A_1(x))$. Applying naive composition gives an algorithm in linear time but linear space, while applying emulative composition (i.e., the standard composition of space-bounded algorithms) gives an algorithm in logarithmic space but quadratic time. Such time overheads are extremely common when designing low-space algorithms.
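The contrast between the two composition lemmas can be made concrete with a toy Python sketch (a hypothetical illustration with a toy pair of algorithms, reversal and counting; none of these names come from the paper):

```python
def A1(x):
    # toy first algorithm: reverse the input string (linear time)
    return x[::-1]

def A1_symbol(x, j):
    # recompute only the j-th symbol of A1(x), using O(log n) extra space
    return x[len(x) - 1 - j]

def A2(y):
    # toy second algorithm: count the '1' symbols in its input
    return sum(1 for c in y if c == '1')

def A2_with_oracle(query, n):
    # the same algorithm, reading its input one queried symbol at a time
    return sum(1 for j in range(n) if query(j) == '1')

def naive_compose(x):
    # Naive composition: store y = A1(x) in full.
    # Linear time, but linear space for the intermediate string.
    return A2(A1(x))

def emulative_compose(x):
    # Emulative composition: never store y; whenever A2 reads position j,
    # recompute that symbol of A1(x) from scratch. Space stays small,
    # but each of the n reads costs another pass, hence the time blowup.
    return A2_with_oracle(lambda j: A1_symbol(x, j), len(x))
```

Here the per-symbol recomputation is cheap only because reversal is trivial; for a general log-space $A_1$, each query reruns $A_1$, which is exactly where the quadratic overhead of emulative composition comes from.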
A natural question is whether one can enjoy the best of both worlds: for every $A_1,A_2$, is there a routine to compute $A_2\circ A_1$ in linear time and in small space? We prove that:
To prove our results, we develop the technical toolkit of an ongoing line of work studying pseudorandomness for low-space algorithms, and as a by-product, we improve several very recent results in this line of work. For example, we show a new ``win-win pair of low-space algorithms'', extending the construction of (Doron, Pyne, Tell, Williams, STOC 2025) to more general functions and eliminating a previously large polynomial runtime overhead; we reduce the task of derandomizing low-space algorithms to the seemingly tractable task of proving lower bounds for uniform Read-Once Branching Programs; and we introduce two highly efficient targeted pseudorandom generators, which use optimized versions of previous generators as building-blocks.
See ECCC TR25-140.