I have, of course, a natural interest in this paper. Having to retract a claimed result is quite embarrassing, but the embarrassment is somewhat diminished if the result is subsequently proved to be correct.
I find the current proof brilliant, and its simplicity is part of its appeal. My view of it is as follows: AGGM take the natural approach of trying to emulate the (adaptive) reduction while forcing the prover to give unique (valid) answers (i.e., unique inverses), as would have been provided by an inversion oracle. The forcing used by AGGM was flawed. In contrast, the current approach is "exactly" the opposite: give the prover full freedom (in providing any inverse it wishes), and argue about the size of the Cartesian product of the number of choices for the reduction times a reciprocal space that represents the limits of this freedom. The key observation is in the proof of Claim 4: for a fixed randomness of the reduction, the probability that a certain sequence of answers is chosen is $\prod_i (1/s(y_i))$, where $s(y)$ is the number of possible valid answers to the query $y$ (i.e., the number of $f$-preimages of $y$ if $y$ is in the image of $f$, and one (indicating ``no preimage'') otherwise). This may be confusing, since one may be tempted to think of all possible answer sequences, which is the wrong way to think of it. Instead, one should think of the conditional probabilities, conditioned on the last query made...
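The key observation can be checked on a toy example. The following sketch (a hypothetical illustration with a made-up function $f$, not anything from the paper) verifies that when each query $y_i$ is answered uniformly among its $s(y_i)$ valid answers, every fixed valid answer sequence indeed has probability $\prod_i (1/s(y_i))$, i.e., one over the number of valid answer sequences.

```python
from collections import defaultdict
from fractions import Fraction
from itertools import product

# Toy stand-in for f (hypothetical example): f maps {0,...,6} -> {0,1,2}
# by f(x) = x mod 3, so preimage sizes are 3, 2, 2.
domain = range(7)
f = lambda x: x % 3

preimages = defaultdict(list)
for x in domain:
    preimages[f(x)].append(x)

def s(y):
    # Number of valid answers to query y: the f-preimages of y,
    # or the single answer "no preimage" if y is outside the image.
    return len(preimages[y]) if y in preimages else 1

def valid_answers(y):
    return preimages[y] if y in preimages else ["no preimage"]

# Fix a query sequence made by the reduction (for fixed randomness);
# the query 5 lies outside the image of f.
queries = [0, 1, 2, 5]

# Probability of any fixed valid answer sequence under uniform answering.
pred = Fraction(1, 1)
for y in queries:
    pred *= Fraction(1, s(y))

# Exhaustive check: the valid answer sequences number prod_i s(y_i),
# and each is equally likely, with probability matching pred.
all_seqs = list(product(*(valid_answers(y) for y in queries)))
assert all(Fraction(1, len(all_seqs)) == pred for _ in all_seqs)
print(pred, len(all_seqs))
```

Here the product is $(1/3)(1/2)(1/2)(1/1) = 1/12$, matching the $3 \cdot 2 \cdot 2 \cdot 1 = 12$ valid answer sequences.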
We prove that if the hardness of inverting a size-verifiable one-way function can be based on NP-hardness via a general (adaptive) reduction, then NP is contained in coAM. This claim was made by Akavia, Goldreich, Goldwasser, and Moshkovitz (STOC 2006), but was later retracted (STOC 2010).
See ECCC TR14-108.