**Scott Aaronson**, the Career Development Associate Professor of Electrical Engineering and Computer Science, a member of the Theory of Computation and Complexity Theory groups, and principal investigator with the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, has written about his field, quantum computing, in an essay for the *New York Times*, part of a series of Essays on Computing by leaders in the field. Read his Dec. 5, 2011 essay, "Quantum Computing Promises New Insights, Not Just Supermachines," also posted below in its entirety.

When people hear that I work on quantum computing — one of the most radical proposals for the future of computation — their first question is usually, "So when can I expect a working quantum computer on my desk?" Often they bring up breathless news reports about commercial quantum computers right around the corner. After I explain the strained relationship between those reports and reality, they ask: "Then when? In 10 years? Twenty?"

Unfortunately, this is sort of like asking Charles Babbage, who drew up the first blueprints for a general-purpose computer in the 1830s, whether his contraption would be hitting store shelves by the 1840s or the 1850s. Could Babbage have foreseen the specific technologies — the vacuum tube and transistor — that would make his vision a reality more than a century later? Today’s quantum computing researchers are in a similar bind. They have a compelling blueprint for a new type of computer, one that could, in seconds, solve certain problems that would probably take eons for today’s fastest supercomputers. But some of the required construction materials don’t yet exist.

So you might think quantum computers are something real scientists — as opposed to science-fiction buffs — won’t need to worry about for a long time. But I’d urge a different view. Quantum computing really is one of the most exciting things happening in science right now. Just not for the reasons you usually hear.

First, though, what is a quantum computer? Walk into a quantum computing lab, and you won’t see much: maybe a fist-size "trap" where ions (often cadmium or calcium) are suspended in a magnetic field, a laser for moving the ions around, a computer screen with a row of flickering white blobs representing the ions’ approximate locations. The real action, one might say, is happening in a different realm entirely: in the alien mathematics that governs what the ions are doing.

Struggling to shoehorn that mathematics into newspaper-friendly metaphors, most popular writers describe a quantum computer as a magic machine that could process every possible answer in parallel, rather than trying them one at a time. Supposedly, it could do that because, unlike today’s computers that manipulate bits, a quantum computer would manipulate quantum bits, or qubits, which can be 0 and 1 simultaneously.

But that’s a crude way to visualize what a quantum computer does, and misses the most important part of the story. When you measure a quantum computer’s output, you see just a single, random answer — not a listing of all possible answers. Of course, if you had merely wanted a random answer, you could have picked one yourself, with much less trouble.

Thus, the sole reason to prefer a quantum computer is that the subatomic world obeys different laws of probability than the ones we are used to. In everyday life, it would be silly to speak of a "minus 30 percent chance of rain tomorrow," much less a "square root of minus 1 percent chance." However, quantum mechanics is based on numbers called amplitudes, which are closely related to probabilities but can also be negative (in fact, they are complex numbers). Crucially, if an event (say, a photon hitting a screen) can happen one way with positive amplitude, and a different way with negative amplitude, then the two amplitudes can "interfere destructively" and cancel each other out, so that the event never happens at all. The goal in quantum computing is to choreograph a computation so that the amplitudes leading to wrong answers cancel each other out, while the amplitudes leading to right answers reinforce.
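That cancellation can be seen in a toy calculation. The sketch below (not part of the original essay) tracks the amplitudes of a single qubit through two applications of the standard Hadamard transform: after the first, the qubit is an equal mix of 0 and 1; after the second, the two paths leading to 1 carry amplitudes of opposite sign and cancel, so a measurement yields 0 every time.

```python
import numpy as np

# Amplitudes for a single qubit starting in the state |0>.
state = np.array([1.0, 0.0], dtype=complex)

# The Hadamard transform sends |0> to an equal mix of |0> and |1>;
# note the minus sign in the bottom-right entry -- that is where
# negative amplitudes come from.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

after_one = H @ state      # amplitudes (1/sqrt(2), 1/sqrt(2))
after_two = H @ after_one  # apply the transform a second time

# The two paths to |1> carry amplitudes +1/2 and -1/2, so they
# interfere destructively; only the paths to |0> reinforce.
probabilities = np.abs(after_two) ** 2
print(probabilities)       # [1. 0.] -- measuring always gives 0
```

Squaring the absolute values of the amplitudes recovers ordinary probabilities, which is why the minus signs matter only before the measurement, not after.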

Contrary to a widespread misconception, it’s for only a few specialized types of problems that researchers know how to exploit that trick to obtain dramatic speed increases over today’s computers. To date, the two main examples are simulating the behavior of atoms and molecules, and breaking certain cryptographic codes — including, by unlucky coincidence, most of the codes used in modern electronic commerce. Of course, both problems can be tackled with conventional computers, too — but, though no one has proved it, it’s plausibly conjectured that any conventional algorithm would need a huge amount of time. While code-breaking understandably grabs the headlines, it’s the more humdrum application of quantum computers — simulating quantum physics and chemistry — that has the potential to revolutionize fields from nanotechnology to drug design.

Unfortunately, while small quantum computations have already been demonstrated in the lab, they typically fall apart after only a few dozen operations. That’s why one of the most celebrated quantum computations to date has been to factor 15 into 3 times 5 — with high statistical confidence! The problem is decoherence: basically, stray interactions that intrude prematurely on the computer’s fragile quantum state, "collapsing" it like a soufflé. In theory, it ought to be possible to reduce decoherence to a level where error-correction techniques could render its remaining effects insignificant. But experimentalists seem nowhere near that critical level yet.
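A back-of-the-envelope calculation (not from the essay, and using a purely illustrative error rate) shows why "a few dozen operations" is the wall: if each operation independently survives decoherence with probability 1 − p, a computation of n operations survives with probability (1 − p)^n, which decays exponentially.

```python
# Assumed per-operation error rate of 1% -- illustrative only.
p = 0.01

# Probability that an n-operation computation suffers no error at all.
for n in (10, 50, 500):
    print(n, round((1 - p) ** n, 3))
# 10  -> 0.904   a short computation usually survives
# 50  -> 0.605   a few dozen operations: roughly a coin flip
# 500 -> 0.007   a long computation almost certainly fails
```

This is the regime error correction is meant to escape: below a critical error rate per operation, encoded qubits can in principle be refreshed faster than decoherence corrupts them.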

And yet, even though useful quantum computers might still be decades away, many of their payoffs are already arriving. For example, the mere possibility of quantum computers has all but overthrown a conception of the universe that scientists like Stephen Wolfram have championed. That conception holds that, as in the "Matrix" movies, the universe itself is basically a giant computer, twiddling an array of 1’s and 0’s in essentially the same way any desktop PC does.

Quantum computing has challenged that vision by showing that if "the universe is a computer," then even at a hard-nosed theoretical level, it’s a vastly more powerful kind of computer than any yet constructed by humankind. Indeed, the only ways to evade that conclusion seem even crazier than quantum computing itself: One would have to overturn quantum mechanics, or else find a fast way to simulate quantum mechanics using today’s computers.

Setting aside these cosmic concerns, there are more practical spinoffs from research in quantum computing. Techniques invented to understand quantum algorithms have repeatedly proved useful for understanding conventional algorithms — and there are now cryptographic codes, purely for conventional computers, for which the main evidence of their security comes from arguments based on quantum computing. Quantum computing ideas have also influenced chemistry and physics: Several research groups have used quantum-computing analogies to explain the remarkable light-harvesting efficiency of photosynthetic molecules in plants, and to suggest how solar panels might be designed with similar efficiencies.

But the biggest payoff so far may have been an improvement in the way quantum mechanics itself is taught and understood. Since its beginnings in the 1920s, quantum mechanics has been considered the prototype of an abstruse, complicated theory: something beyond the grasp of all but a few physicists. Today, though, I and others regularly explain its underlying logic to students by focusing on the simplest imaginable system to which that logic applies: the qubits that make up a quantum computer.

Like fusion power, practical quantum computers are a tantalizing possibility that the 21st century may or may not bring — depending on the jagged course not only of science and technology, but of politics and economics. By contrast, as a scientific endeavor that combines many of the deepest questions of physics and computer science, there’s no need to wait for quantum computing: It’s already here.
