If you know me, then you know that there is no way I can read this paper. On the other hand, there is no way for me to pass on this paper without commending it.
As noted in the abstract, the two known proofs of the PCP Theorem rely on the proof composition paradigm. In the second proof (i.e., of Dinur), a logarithmic number of compositions is inherent to the entire approach, which is based on gradual gap amplification. In the first proof (i.e., the one of ALMSS), the fact that two compositions are used rather than one seems an unlucky coincidence. Of course, this is not a pure coincidence, but is rather an inherent consequence of the specific outer and inner verifiers that ALMSS compose.
Specifically, the (robust) outer verifier of ALMSS has polylogarithmic query complexity (and logarithmic randomness), whereas the inner verifier (of proximity) uses a polynomial amount of randomness (and constant query complexity). In contrast, the alternative construction presented here is based on a better inner verifier (of proximity). This verifier uses an amount of randomness that is an arbitrarily small constant power of the length of the assertion, and its query complexity is a constant (which depends on the former small constant).
An alternative perspective is to consider the problem of providing a ``direct'' construction showing that, for every $c > 0$ (or rather just for $c=1/5$), CSAT has a PCP with $n^c$ randomness and a constant number of queries. Recall that the Hadamard-based construction yields a PCP with quadratic randomness and constant query complexity, whereas the RM-based PCP has logarithmic randomness and polylogarithmic query complexity.
(Section 1.0, and then the entire Section 1, makes the foregoing abstract description more clear. Note, however, that it talks in terms of a constant number of queries to a larger alphabet $\Sigma$, which yields robustness with $O(\log|\Sigma|)$ binary queries.)
All known proofs of the PCP theorem rely on multiple "composition" steps, where PCPs over large alphabets are turned into PCPs over much smaller alphabets at a (relatively) small price in the soundness error of the PCP. Algebraic proofs, starting with the work of Arora, Lund, Motwani, Sudan, and Szegedy use at least 2 such composition steps, whereas the "Gap amplification" proof of Dinur uses $\Theta(\log n)$ such composition steps. In this work, we present the first PCP construction using just one composition step. The key ingredient, missing in previous work and finally supplied in this paper, is a basic PCP (of Proximity) of size $2^{n^\varepsilon}$, for any $\varepsilon > 0$, that makes $\mathcal{O}_\varepsilon(1)$ queries.
At the core of our new construction is a new class of alternatives to "sum-check" protocols. As used in past PCPs, these provide a method by which to verify that an $m$-variate degree $d$ polynomial $P$ evaluates to zero at every point of some set $S \subseteq \mathbb{F}_q^m$. Previous works had shown how to check this condition for sets of the form $S = H^m$ using $\mathcal{O}(m)$ queries with alphabet $\mathbb{F}_q^d$, assuming $d \geq |H|$. Our work improves this basic protocol in two ways: First, we extend it to broader classes of sets $S$ (ones closer to Hamming balls rather than cubes). Second, we reduce the number of queries from $\mathcal{O}(m)$ to an absolute constant for the settings of $S$ we consider. Specifically, when $S = (\{0,1\}^{m/c}_{\leq 1})^c$, where $T = \{0,1\}^{a}_{\leq b} \subseteq \mathbb{F}_q^a$ denotes the set of Boolean vectors of Hamming weight at most $b$ in $\mathbb{F}_q^a$, we give such an alternative to the sum-check protocol with $\mathcal{O}(1)$ queries with alphabet $\mathbb{F}_q^{\mathcal{O}(c+d)}$, using proofs of size $q^{\mathcal{O}(m^2/c)}$. Our new protocols use insights from the powerful theory of Gröbner bases to extend previously known protocols to these new settings with surprising ease. In doing so, they highlight why these theories from algebra may be of further use in complexity theory.
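To make the shape of the sets concrete, the following sketch (an illustration written for this review, not code from the paper) enumerates the set $T = \{0,1\}^a_{\leq b}$ of Boolean vectors of Hamming weight at most $b$, and the product set $S = (\{0,1\}^{m/c}_{\leq 1})^c$ from the abstract; it shows how much smaller such an $S$ is than the full Boolean cube $\{0,1\}^m$.

```python
from itertools import product

def weight_at_most(a, b):
    """The set T = {0,1}^a_{<=b}: Boolean vectors of length a with
    Hamming weight at most b (viewed inside F_q^a in the paper)."""
    return [v for v in product((0, 1), repeat=a) if sum(v) <= b]

def product_set(a, b, c):
    """The set S = ({0,1}^a_{<=b})^c: the c-fold Cartesian product of T,
    with each c-tuple of blocks flattened into one vector of length a*c."""
    T = weight_at_most(a, b)
    return [sum(block, ()) for block in product(T, repeat=c)]

# Example: m = 8 variables split into c = 4 blocks of size a = m/c = 2,
# each block of Hamming weight <= 1 (the setting S = ({0,1}^{m/c}_{<=1})^c).
m, c = 8, 4
S = product_set(m // c, 1, c)
print(len(S))  # (m/c + 1)^c = 3^4 = 81, versus |{0,1}^m| = 256
```

Each block of size $a$ with weight at most $1$ contributes $a+1$ vectors, so $|S| = (m/c+1)^c$, which is far smaller than $2^m$; this is the sense in which these sets are "closer to Hamming balls than cubes."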
See ECCC TR25-165.