I feel that the abstract speaks very well for itself, although I think that the term "separation results" is more adequate than "hierarchy theorems" for the results presented in this paper.
Actually, the abstract is a bit unclear as to the difference between the weak and strong versions. As clarified in the introduction, the difference lies not so much in the gap between the (upper bound on the) query complexity and the (lower bound on the) time complexity, but rather in the upper bound on the time complexity of the constructed properties. In my opinion, the latter aspect is rather secondary, and I would put far more weight on whether or not a computational assumption is used. Indeed, it is a bit surprising that one can get an unconditional separation result (but note that there are unconditional hierarchy theorems for randomized time, and maybe their use here led to the term hierarchy being applied to the result obtained).
Lastly, I note that an analogous study in the context of computational learning theory was initiated in the paper "Computational Sample Complexity". The original abstract follows.
We initiate a systematic study of the computational complexity of property testing, focusing on the relationship between query and time complexity. While traditional work in property testing has emphasized query complexity, often via information-theoretic techniques, relatively little is known about the computational hardness of property testers. Our goal is to chart the landscape of time-query interplay and develop tools for proving time complexity lower bounds. Our first contribution is a pair of time-query hierarchy theorems for property testing. For all suitable nondecreasing functions $q(n)$ and $t(n)$ with $t(n) \geq q(n)$, we construct properties with query complexity $\widetilde{\Theta}(q(n))$ and time complexity $\widetilde{\Omega}(t(n))$. Our weak hierarchy holds unconditionally, whereas the strong version, which assumes the Strong Exponential Time Hypothesis, provides better control over the time complexity of the constructed properties.
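For concreteness, here is one standard way to formalize the quantities being separated; this is my phrasing, not the paper's, and the fixed constant $\epsilon$ and success probability $2/3$ are the usual conventions rather than quotes from the abstract. For a property $\Pi = \{\Pi_n\}$, let
\[
  Q_\Pi(n) \;=\; \min_{T}\,\max_{f}\ \#\{\text{queries that } T \text{ makes to } f\},
  \qquad
  T_\Pi(n) \;=\; \min_{T}\,\max_{f}\ \mathrm{time}(T,f),
\]
where the minimum ranges over all $\epsilon$-testers $T$ for $\Pi_n$ (algorithms that, given oracle access to $f$, accept every $f \in \Pi_n$ and reject every $f$ that is $\epsilon$-far from $\Pi_n$, each with probability at least $2/3$). The hierarchy (or separation) theorems then assert that, for every suitable nondecreasing pair $q(n) \leq t(n)$, there is a property $\Pi$ with
\[
  Q_\Pi(n) = \widetilde{\Theta}(q(n)) \quad\text{and}\quad T_\Pi(n) = \widetilde{\Omega}(t(n)).
\]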
We then turn to halfspaces in $\mathbb{R}^d$, a fundamental class in property testing and learning theory. We study the problem of approximating the distance from the input function to the nearest halfspace within additive error $\epsilon$. (The distance approximation problem is known to have roughly the same complexity as tolerant property testing, for an appropriate setting of the parameters.) For the distribution-free distance approximation problem, known algorithms achieve query complexity $O(d/\epsilon^2)$ but run in time $\widetilde{\Theta}(1/\epsilon^d)$. We provide a fine-grained justification for this gap: assuming the (integer) $k$-SUM conjecture, any algorithm must have running time $(1/\epsilon)^{\lceil (d+1)/2 \rceil - o(1)}$. This fine-grained lower bound yields a separation, provable under a well-established assumption, between the query and time complexity of a natural and well-studied (tolerant) testing problem. We also prove that any randomized Statistical Query (SQ) algorithm under the standard Gaussian distribution requires $(1/\epsilon)^{\Omega(d)}$ queries if the queries are answered with additive error up to $\epsilon^{\Omega(d)}$, revealing a fundamental barrier even in the distribution-specific setting.
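To spell out the two tasks just mentioned (standard definitions, in my phrasing rather than the paper's): write $\mathcal{H}_d$ for the class of halfspaces $h(x) = \mathrm{sign}(\langle w, x \rangle - \theta)$ over $\mathbb{R}^d$. The distance of a function $f$ to $\mathcal{H}_d$ under a distribution $D$ is
\[
  \mathrm{dist}_D(f, \mathcal{H}_d) \;=\; \min_{h \in \mathcal{H}_d}\ \Pr_{x \sim D}\bigl[f(x) \neq h(x)\bigr],
\]
and a distance approximation algorithm must, with probability at least $2/3$, output a value within $\pm\epsilon$ of $\mathrm{dist}_D(f, \mathcal{H}_d)$; in the distribution-free setting, $D$ is unknown and accessed via samples. In the SQ model, the algorithm sees no samples at all: each query is a bounded function $\phi$, answered by some value $v$ satisfying
\[
  \bigl|\, v - \mathbf{E}_{x \sim N(0, I_d)}[\phi(x, f(x))] \,\bigr| \;\leq\; \tau,
\]
and the stated lower bound applies when the tolerance $\tau$ may be as large as $\epsilon^{\Omega(d)}$.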
See ECCC TR25-185.