The main result is a polynomial-time (randomized) Byzantine Agreement protocol with optimal resilience (i.e., fewer than one third of the parties may be corrupted), obtained via fraud detection. Specifically, the model is asynchronous, and the adversary is computationally unbounded, has full information about the state of the entire system at all times, and may corrupt parties adaptively. The fraud detection mechanism actually detects the use of correlated random moves, rather than the truly random moves postulated by the protocol.
The foregoing paradigm is employed in the context of taking the majority of binary votes cast by the parties. The key observation is that even when the corrupted parties select their bits after the honest parties, their ability to control the majority (e.g., make it undetermined when the number of corrupted parties equals the number of honest parties) relies on correlating their votes with those of the honest parties. In particular, there must be a pair of parties (at least one of which is dishonest) whose votes are (slightly) correlated. This correlation can be detected (when many votes are cast), and the detection can be used to reduce the confidence placed in the votes of a correlated pair of parties (using a weighted majority), and ultimately to ignore their votes entirely. On the other hand, in periods (called epochs) in which no such correlation is detected, the majority votes are effectively certified for use in the application (as a global random coin). Hence, after sufficiently many epochs, we eliminate all bad parties (unless we have already halted earlier, after successfully using the global random coin).
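The epoch structure above can be sketched in code. The following is a toy Python illustration, not the actual protocol: it ignores asynchrony and message passing entirely, and the names (`run_epoch`, `detect_correlated_pair`) and the concrete correlation threshold and weight-halving rule are illustrative assumptions, chosen only to show how a correlated pair stands out statistically from truly random voters.

```python
import itertools
import random

def weighted_majority_coin(votes, weights):
    """Sign of the weighted sum of +/-1 votes; serves as the shared coin."""
    total = sum(w * v for v, w in zip(votes, weights))
    return 1 if total >= 0 else -1

def detect_correlated_pair(history, threshold):
    """Flag the pair of parties whose votes are most correlated over an epoch.

    history[t][i] is party i's vote (+1 or -1) in round t.  If every party
    voted uniformly at random, the empirical correlation of any fixed pair
    would concentrate near 0, so a pair well above `threshold` (a toy cutoff)
    is statistical evidence that the pair's votes are not truly random.
    """
    n, T = len(history[0]), len(history)
    best_pair, best_corr = None, 0.0
    for i, j in itertools.combinations(range(n), 2):
        corr = abs(sum(row[i] * row[j] for row in history)) / T
        if corr > best_corr:
            best_pair, best_corr = (i, j), corr
    return best_pair if best_corr > threshold else None

def run_epoch(parties, weights, rounds, threshold):
    """One epoch: collect votes, emit coins, then test for correlation.

    If a correlated pair is found, at least one of its members is dishonest,
    so halving the confidence (weight) of both eventually eliminates all
    bad parties while costing each honest party only a bounded amount.
    """
    history, coins = [], []
    for _ in range(rounds):
        votes = [p() for p in parties]
        history.append(votes)
        coins.append(weighted_majority_coin(votes, weights))
    suspect = detect_correlated_pair(history, threshold)
    if suspect is not None:
        for i in suspect:
            weights[i] /= 2  # reduce confidence in both members of the pair
    return coins, suspect
```

For example, running an epoch with four honest (uniformly random) voters and two colluders who always vote identically makes the colluding pair's empirical correlation close to 1, far above what any honest pair exhibits over a few hundred rounds, so the detector singles them out and their weights are cut.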
One thing that distinguishes (theoretical) computer science from other scientific disciplines is its full-throated support of a fundamentally adversarial view of the universe. Malicious adversaries, with unbounded computational advantages, attempt to foil our algorithms at every turn and destroy their quantitative guarantees. However, there is one strange exception to this world view and it is this: the algorithm must accept its input as sacrosanct, and may never simply reject its input as illegitimate. But what if some inputs really are illegitimate? Is building a bullshit detector for algorithms a good idea?
To illustrate the power of the Bullshit Detection worldview, we give the first polynomial-time protocol for Byzantine Agreement that is resilient to $f < n/3$ corruptions against an omniscient, computationally unbounded adversary in an asynchronous message-passing model. (This is the first improvement over Ben-Or's and Bracha's exponential-time protocols from the 1980s, which are resilient to $f < n/3$ corruptions.) The key problem is to design a coin-flipping protocol in which corrupted parties (who choose the outcomes of their coins maliciously) are eventually detected via statistical tests.
Based on extended abstracts at STOC 2022 and SODA 2023, which are available from HERE.