On Quantum Computing

by Oded Goldreich

Revised July 2005. See original posted version (Jan. 2004).

Main Points

Some (though, I am told, not all) believers in Quantum Computing (QC) assert that its possibility is ensured (or even required) by the Laws of Quantum Mechanics (QM). In my opinion, such an assertion rests on two fundamental confusions.

Firstly, the Laws of QM are merely a refutable model (or assumption) about the real world; they are not reality itself, nor can they ever be proved to provide a full description of reality. (This is indeed a generic remark that applies to any model of reality, but it is good to bear it in mind when one wants to talk about ``lack of assumptions'': The assumption that QM provides a ``correct'' description of reality is no less an assumption than the conjecture that one-way functions exist. If anything, the latter assumption may one day be proved correct, whereas the former can only be refuted (and can never be proved correct).) [See further discussion (Nov. 2011).]

Secondly, as far as I know (and here I may be wrong), QM says that certain things are not impossible, but it does not say that everything that is not impossible is indeed possible. For example, it says that you cannot effect non-unitary transformations, but this by itself does not mean that you can effect any unitary transformation that you want. [See further discussion (Feb. 2014).]
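
A standard counting consideration (my own addition, not part of the original argument) makes this gap explicit: unitarity is a constraint on admissible evolutions, not a guarantee of their realizability. In LaTeX notation:

    A closed-system evolution $U$ of $n$ qubits must be unitary,
    \[ U^{\dagger} U = I , \]
    but the unitary group $U(2^{n})$ is a continuum of real dimension $4^{n}$,
    whereas the circuits of any bounded size over a fixed finite gate set
    form a discrete set; hence most unitaries cannot even be approximated
    by circuits of sub-exponential size.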

The above concerns are quite abstract and seem unrelated to the most important discovery of QC: the existence of a simple and fast algorithm for factoring integers in the QC model. But the above concerns (and their ramifications) become relevant if one asks what the aforementioned discovery means.

As far as I am concerned, the QC model consists of exponentially-long vectors (possible configurations) and some ``uniform'' (or ``simple'') operations (computation steps) on such vectors. (The length of these vectors, which correspond to a standard work-tape, is exponential in the length of the actual input.) The key point is that the associated complexity measure postulates that each such operation can be effected at unit cost (or in unit time). My main concern is with this postulate. My own intuition is that the cost of such an operation, or of maintaining such vectors, should be linearly related to the amount of ``non-degeneracy'' of these vectors, where the ``non-degeneracy'' may vary from a constant to linear in the length of the vector (depending on the vector). Needless to say, I am not suggesting a concrete definition of ``non-degeneracy''; I am merely conjecturing that such a definition exists and that it captures the inherent cost of the computation. I would further suggest that this issue is related to the transition from QM behavior to ``seemingly classical'' behavior.
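
To make the unit-cost postulate concrete, here is a minimal sketch (in Python, with names of my own choosing) of what a classical simulator must do for a single computation step: the configuration vector of n qubits has 2^n entries, and even a one-qubit gate touches all of them.

    import numpy as np

    def apply_single_qubit_gate(state, gate, target):
        # state: a complex vector of length 2**n -- the exponentially-long
        # configuration vector of the QC model; gate: a 2x2 unitary matrix.
        # This direct simulation touches all 2**n entries, so its cost is
        # linear in the vector's length; the QC model, in contrast,
        # postulates unit cost for the very same step.
        new_state = np.empty_like(state)
        stride = 1 << target  # distance between paired amplitudes
        for i in range(len(state)):
            if i & stride == 0:
                a, b = state[i], state[i | stride]
                new_state[i] = gate[0, 0] * a + gate[0, 1] * b
                new_state[i | stride] = gate[1, 0] * a + gate[1, 1] * b
        return new_state

    # Example: a Hadamard gate on qubit 0 of a 10-qubit register.
    n = 10
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0  # start at the all-zero configuration
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    state = apply_single_qubit_gate(state, H, target=0)

(A vector with special structure, e.g., a product state, admits a compact representation, which is one way to read the ``non-degeneracy'' suggestion above.)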

I wish to stress that I am not concerned with the question of whether or not QCs that factor integers of the size used in commercial applications of RSA will be built in our lifetime. I am concerned with the question of whether the (to my mind, simplified) model of QC (with unit-cost operations) is an adequate model of reality. Needless to say, this question seems to lie more in the domain of physics than in that of computer science.

[I have held the aforementioned opinions since I first heard of QC in the early 1990s.]

Further discussion

In response to my original text, Scott Aaronson made a few comments, to which I responded, and so on. I believe that our points of agreement are numerous, and I'll use Scott's nicer formulation in order to express the main ones:

Our disagreements can also be clearly stated:

At times, I have heard believers in QC draw an analogy between QC and randomized computation. In my opinion, this analogy is highly misleading. The key point about QC is the ability to actually maintain a huge vector of (exponentially many) ``possibilities'', whereas a randomized algorithm merely maintains a single possibility (indeed selected at random) at any given time. The probability space of the execution of a probabilistic algorithm is a mental experiment that takes place in the analysis (not in reality).
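
For a toy illustration of this difference (my own, with made-up names): a randomized walk keeps a single current position, while maintaining the vector of possibilities, as the QC model postulates one can, requires storing and updating every entry at every step.

    import random

    def randomized_walk(steps, size):
        # A randomized algorithm maintains ONE configuration at a time;
        # the distribution over positions lives only in the analysis.
        pos = 0
        for _ in range(steps):
            pos = (pos + random.choice([-1, 1])) % size
        return pos

    def distribution_walk(steps, size):
        # The analysis (and, analogously, the QC model) maintains the
        # WHOLE vector of possibilities: all `size` entries are updated
        # at every step.
        dist = [0.0] * size
        dist[0] = 1.0
        for _ in range(steps):
            dist = [0.5 * (dist[(i - 1) % size] + dist[(i + 1) % size])
                    for i in range(size)]
        return dist

The first procedure uses constant space regardless of the number of steps; the second uses space (and time per step) linear in the number of possibilities.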

Extract from a paper of Levin (2003)

(Taken from https://www.cs.bu.edu/fac/lnd/expo/qc.htm, slightly revised, with boldface by me)

The development of RSA and other applications of one-way functions was all the more remarkable as the very existence of one-way functions remains unproven and subject to repeated assaults. The first came from Shamir himself, one of the inventors of the RSA system. He proved in 1979 that factoring (on the infeasibility of which RSA depends) can be done in a polynomial number of arithmetic operations. This result uses a so-called "unit-cost model," which charges one unit for each arithmetic operation, however long the operands. Squaring a number doubles its length; repeated squaring quickly brings it to cosmological sizes. Embedding a huge array of ordinary numbers into such a long one allows one arithmetic step to do much work, e.g., to check exponentially many factor candidates. The closed-minded cryptographers, however, were not convinced, and this result brought a dismissal of the unit-cost model, not of RSA.

Another, not dissimilar, attack is raging at this very moment. It started with the brilliant result of Peter Shor (in 1994): he factors integers in polynomial time using an imaginary analog device, the Quantum Computer (QC), inspired by the laws of quantum physics taken to their formal extreme.
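
The operand blow-up that the extract alludes to is easy to see in a few lines (a sketch of the growth only, not of Shamir's algorithm): under the unit-cost model each squaring counts as one operation, yet the operand's bit-length doubles every time.

    x = 3
    for step in range(1, 11):
        x = x * x  # one "unit-cost" arithmetic operation
        # after k squarings the operand has about 2**k bits
        print(step, x.bit_length())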

