Top Academics: Here’s How We Facilitate the Next Big Leap in Quantum Computing

Journal: PC Magazine
Thursday, April 11, 2024

In advance of the ribbon-cutting for its new IBM Quantum System One quantum computer, the first on a college campus, Rensselaer Polytechnic Institute (RPI) last week hosted a quantum computing day, which featured several prominent speakers who together provided a snapshot of where the field is now. I’ve been writing about quantum computing for a long time and have noted some big improvements, but there is also a host of challenges that still need to be overcome. Here are some highlights.

From Quantum Physics to Quantum Computing


Jay M. Gambetta, IBM (Credit: Michael J. Miller)

The first plenary speaker was Jay M. Gambetta, Vice President of Quantum Computing at IBM, who gave an overview of the history and progress of quantum computing, as well as the challenges and opportunities ahead. He explained that quantum computing is based on exploiting the quantum mechanical properties of qubits, such as superposition and entanglement, to perform computations that are impossible or intractable for classical computers. He talked about watching the development of superconducting qubits as they moved from single-qubit systems in 2007 to 3-qubit systems in 2011, and now to IBM’s Eagle chip, which has 127 qubits and is the heart of the Quantum System One.

He then asked how we could make quantum computing useful. His answer: We need to keep building larger and larger systems and we need to improve error correction.

“There are very strong reasons to believe there are problems that are going to be easy for a quantum computer but hard for a classical computer, and this is why we’re all excited,” Gambetta said. He discussed the development of quantum circuits and said that while the number of qubits is important, equally important are the circuit “depth,” meaning how many operations you can perform, and the accuracy of the results. Key to solving this are larger and larger systems, along with error mitigation, a topic that would be discussed in much greater detail later in the day.
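To make the width-versus-depth distinction concrete, here is a minimal sketch (my illustration, not an example from Gambetta's talk) using Qiskit, IBM's open-source quantum SDK. Width is the number of qubits; depth is the number of sequential layers of gates those qubits have to survive.

```python
# Minimal sketch (illustration only): width vs. depth of a quantum circuit in Qiskit.
from qiskit import QuantumCircuit

qc = QuantumCircuit(3)   # width: 3 qubits
qc.h(0)                  # layer 1: put qubit 0 in superposition
qc.cx(0, 1)              # layer 2: entangle qubits 0 and 1
qc.cx(1, 2)              # layer 3: entangle qubits 1 and 2

print("qubits (width):", qc.num_qubits)  # 3
print("depth:", qc.depth())              # 3 sequential gate layers
```

The deeper the circuit, the more chances noise has to corrupt the result, which is why depth and accuracy go hand in hand.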

To get to “quantum utility”—which he said would be reached when a quantum computer is better than a brute force simulation of a quantum computer on a classical machine—you would need larger systems with at least 1000 gates, along with improved accuracy and depth, and new efficient algorithms.
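For a rough sense of why brute-force simulation eventually gives out (a back-of-the-envelope sketch of mine, not a figure from the talk): an n-qubit state requires 2^n complex amplitudes, so the memory for a full statevector grows exponentially.

```python
# Back-of-the-envelope sketch: memory for brute-force statevector simulation,
# assuming 16 bytes per complex amplitude.
for n in (30, 40, 50, 127):
    amplitudes = float(2 ** n)
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n:3d} qubits: {amplitudes:.2e} amplitudes, ~{gib:.2e} GiB")
```

At 30 qubits that is about 16 GiB; at 127 qubits, the count of the Eagle chip, it is astronomically more than any classical memory could hold.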

He talked about quantum algorithmic discovery, which means finding new and efficient ways to map problems to quantum circuits. One example is a new variation on Shor’s algorithm, which allows factorization in much less time than would be possible on a classical computer. “The future of running error-mitigated circuits and mixing classical and quantum circuits sets us up to explore this space,” he said.

In a panel discussion that followed, James Misewich from Brookhaven National Laboratory discussed his interest in using quantum computing to understand quantum chromodynamics (QCD), the theory of strong interactions between quarks and gluons. QCD is a hard problem that scales well with the number of qubits and circuit depth, and he is looking at entanglement between jets coming out of particle collisions as a possible avenue for exploring quantum advantage.

Jian Shi and Ravishankar Sundararaman from RPI’s Materials Science and Engineering faculty talked about computational materials science, and applying quantum computing to discover new materials and properties. Shi noted there was a huge community now doing quantum chemistry, but there is a gap between that and quantum computing. He stressed that a partnership between the two groups will be important, so each learns the language of the other and can approach the problems from a different perspective.


Grand Challenges and Error Correction


Steve M. Girvin, Yale University (Credit: Michael J. Miller)

One of the most interesting talks was given by Steve M. Girvin, Eugene Higgins Professor of Physics at Yale University, who discussed the challenges of creating an error-corrected quantum computer.

Grand Challenges (Credit: Steve Girvin)

Girvin described how the first quantum revolution produced things like the transistor, the laser, and the atomic clock, while the second quantum revolution is based on a new understanding of how quantum mechanics works. He said he usually tells his students to do the things Einstein said were impossible, just to make sure they have a quantum computer and not a classical computer.


He thought there was a bit too much hype around quantum computing today. Quantum is going to be revolutionary and do absolutely amazing things, he said, but its time hasn’t come yet. We still have massive problems to solve.

He noted that quantum systems make extremely good sensors precisely because they are so sensitive, but that same sensitivity to external perturbations and noise is bad for building computers. Therefore, error correction is important.

Among the issues Girvin discussed was making measurements to detect errors, but he said we also need calculations to decide whether something truly is an error, where it is located, and what kind of error it is. Then there is the issue of deciding what signals to send to correct those errors. Beyond that, there is the challenge of putting these together in a system that reduces overall errors, perhaps borrowing from the flow-control techniques used in fields like telephony.
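To make that detect-decide-correct loop concrete, here is a toy classical decoder (my sketch, not an example from Girvin's talk) for a three-bit repetition code: two parity checks produce a "syndrome," and a lookup table maps the syndrome to the bit that should be flipped.

```python
# Toy sketch of the detect/decide/correct loop for a 3-bit repetition code.
# A logical bit is stored as three copies; two parity checks locate a single flipped bit.

def syndrome(bits):
    """Detect: measure the two parity checks without reading the data directly."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Decide: map each syndrome to the position of the (single) error, if any.
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    """Correct: flip the bit the decoder blames for the syndrome."""
    pos = CORRECTION[syndrome(bits)]
    if pos is not None:
        bits[pos] ^= 1
    return bits

noisy = [1, 0, 1]      # logical 1, with an error on the middle bit
print(correct(noisy))  # -> [1, 1, 1]
```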

In addition to quantum error detection, Girvin said there are “grand challenges all up and down the stack,” from materials to measurement to machine models and algorithms. We need to know how to make each layer of the stack more efficient, using less energy and fewer qubits, and get to higher performance so people can use these to solve science problems or economically interesting problems.

Then there are the algorithms. Girvin noted that there were algorithms long before there were computers, but it took time to settle on the best ones for classical computing. For quantum computing, this is just the beginning, and over time, we need people to figure out how to build up their algorithms and how to do heuristics. They need to discover why quantum computers are so hard to program and develop clever tools to solve those problems.

Another challenge he described was routing quantum information. He noted that two quantum computers that can communicate only classically are exponentially less capable than two quantum computers that can exchange quantum information and become entangled with each other.


He talked about fault tolerance, which is the ability to correct errors even when your error-correction circuit itself makes errors. He believes the fact that this is possible in a quantum system, at least in principle, is even more amazing than the fact that a perfect quantum computer could do interesting quantum calculations.

Girvin described the difficulty of correcting errors: you have an unknown quantum state, and you’re not allowed to know what it is, because it comes from the middle of a quantum computation. (If you knew what it was, you would have destroyed the superposition, and if you measure it to check for an error, it will randomly change due to state collapse.) Your job is simply this: if it develops an error, fix it.

“That’s pretty hard, but miraculously it can be done in principle, and it’s even been done in practice,” he said. We’re just entering the era of being able to do it. The basic idea is to build in redundancy, such as a logical qubit made up of multiple physical qubits, perhaps nine. You then have two possible giant entangled states corresponding to a logical zero and a logical one. Note that the one and zero don’t live in any single physical qubit; each exists only as a state spread across multiple qubits.

In that case, Girvin says, if the environment reaches in and measures one of those qubits, it doesn’t actually learn the logical state. There’s an error, but the environment doesn’t know which state the system is in, so there’s still a chance that you haven’t totally collapsed anything and lost the information.

He then discussed measuring the probability of errors and determining, with some complex math, whether it exceeds some threshold value, and then correcting the errors, hopefully quickly, something that should improve with new error-correction methods and better, more precise physical qubits.
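Here is a minimal circuit-level sketch of that redundancy idea (my illustration in Qiskit, simplified from the nine-qubit case Girvin mentioned to a three-qubit bit-flip code): the logical information is spread across three data qubits, and parity checks are copied onto ancilla qubits so the error's location can be inferred without measuring the data itself.

```python
# Minimal sketch (illustration only): a 3-qubit bit-flip repetition code in Qiskit.
# Qubits 0-2 hold the logical qubit; qubits 3-4 are ancillas that record the
# two parity checks (the error syndrome).
from qiskit import QuantumCircuit

qc = QuantumCircuit(5, 2)

# Encode: copy the state of qubit 0 onto qubits 1 and 2 (|0> -> |000>, |1> -> |111>).
qc.cx(0, 1)
qc.cx(0, 2)

# Suppose the environment flips qubit 1.
qc.x(1)

# Syndrome extraction: ancilla 3 records the parity of qubits 0,1; ancilla 4 that of qubits 1,2.
qc.cx(0, 3)
qc.cx(1, 3)
qc.cx(1, 4)
qc.cx(2, 4)
qc.measure(3, 0)
qc.measure(4, 1)
# Syndrome "11" points at qubit 1 without revealing the logical state,
# so a corrective X on qubit 1 can be applied afterward.
```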

Much of this is still theoretical, which is why Girvin sees fault tolerance as a journey, with improvements being made continuously. (This was in contrast to Gambetta, who said systems either are fault tolerant or they aren’t.) Overall, Girvin said, “We still have a long way to go, but we’re moving in the right direction.”

The Road to Quantum Advantage


Garde, Minnich, Corcoles, Gupta, Kleese van Dam (Credit: Michael J. Miller)

Later in the morning, Austin Minnich, Professor of Mechanical Engineering and Applied Physics at Caltech, described “mid-circuit measurement” and the need for hybrid circuits as a way of finding, and thus mitigating, errors.
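For a sense of what a mid-circuit measurement looks like in code, here is a minimal sketch (my illustration using Qiskit, not Minnich's example): an ancilla is measured partway through the circuit, and a later gate is conditioned on the outcome while the data qubit keeps computing.

```python
# Minimal illustration of a mid-circuit measurement with classical feed-forward in Qiskit.
from qiskit import ClassicalRegister, QuantumCircuit, QuantumRegister

data = QuantumRegister(1, "data")
anc = QuantumRegister(1, "ancilla")
flag = ClassicalRegister(1, "flag")
qc = QuantumCircuit(data, anc, flag)

qc.h(data[0])
qc.cx(data[0], anc[0])
qc.measure(anc[0], flag[0])     # mid-circuit measurement of the ancilla

# Condition a later operation on the measured flag (syntax varies by Qiskit
# version; recent releases use the if_test context manager).
with qc.if_test((flag, 1)):
    qc.x(data[0])               # this feed-forward X deterministically resets the data qubit to |0>

qc.h(data[0])                   # the circuit continues after the measurement
```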

In a discussion that followed, Kerstin Kleese van Dam, Director of the Computational Science Initiative at Brookhaven National Laboratory, explained that her team was looking for answers to problems, whether solved on traditional or quantum machines. She said there were problems they can’t solve accurately on a traditional computer, but there remains the question of whether the accuracy will matter. There are areas, such as machine learning, where quantum computers can do things accurately. She predicts that quantum advantage will come when we have systems that are large enough. But she also wondered about energy consumption, noting that a lot of power is going into today’s AI models, and asked whether quantum can be more efficient.

Shekhar Garde, Dean of the School of Engineering, RPI, who moderated this part of the discussion, compared the status of quantum computing today to where traditional computing was in the late 70s or early 80s. He asked what the next 10 years would bring.

Kleese van Dam said that within 10 years, we would see hybrid systems that combine quantum and classical computing, but she also hoped we would see libraries transferred from high-performance computing to quantum systems, so that a programmer could use them without having to understand how the gates work. Aparna Gupta, Professor and Associate Dean of RPI’s Lally School of Management, would bet on the hybrid approach offering easier access and cost-effectiveness, as well as “taking away the intrigue and the spooky aspects of quantum, so it is becoming real for all of us.”

Antonio Corcoles, Principal Research Scientist at IBM Quantum, said he hoped users who don’t know quantum will be able to use the system because the complexity will become more transparent, but that this can take a long time. In the meantime, they can develop quantum error correction in a way that is not as disruptive as current methods. Minnich talked about “blind quantum computing,” in which many smaller machines might be linked together.


Lin Lin, University of California, Berkeley (Credit: Michael J. Miller)

One of the most interesting talks came from Lin Lin, Professor of Mathematics at the University of California, Berkeley, who discussed the theoretical aspects and challenges of achieving quantum advantage for scientific computation. He defined quantum advantage as the ability to solve problems that are quantumly easy but classically hard, and proposed a hierarchy of four levels of problems.


Quantum Advantage Hierarchy (Credit: Lin Lin)
  • Level I: Cryptography. This is the most well-known and established area of quantum advantage, exemplified by Shor’s algorithm for factoring large numbers.

  • Level II: Unitary quantum processes, which involve simulating quantum dynamics under a given Hamiltonian. This implements Richard Feynman’s original vision for quantum computing (a small simulation sketch appears below). People don’t yet know for sure that the best possible algorithms for this will be quantum, but there is good reason to believe that quantum advantage can be achieved here, as such tasks are very hard for classical computers.

  • Level III: Non-unitary quantum processes. (This was a bit over my head.) Lin said non-unitary processes are not very natural to implement on quantum computers, but in the past decade there have been some ingenious ideas for efficient quantum algorithms that shoehorn these non-unitary problems into unitary ones.

  • Level IV: Classical processes. This involves solving linear or nonlinear equations that are not necessarily quantum in nature, particularly those with a high number of dimensions.

Lin said that for the first two levels, a lot of people think quantum advantage will be achieved, as the methods are generally understood. But for the next two levels, there needs to be a lot of work on the algorithms to see whether it will work. That’s why this is an exciting time for mathematicians as well as physicists, chemists, and computer scientists.
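To give a flavor of Level II (this is my toy sketch, not from Lin's talk): simulating dynamics under a Hamiltonian H means applying exp(-iHt), and a standard approach, Trotterization, approximates that exponential by alternating short evolutions under the individual terms of H. The check below is purely classical; on quantum hardware each factor would become a short sequence of gates.

```python
# Classical sanity check of first-order Trotterization for a toy one-qubit
# Hamiltonian H = X + Z, whose two terms do not commute.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = X + Z

t, steps = 1.0, 100
exact = expm(-1j * H * t)

dt = t / steps
step = expm(-1j * X * dt) @ expm(-1j * Z * dt)   # one Trotter step
trotter = np.linalg.matrix_power(step, steps)

# The error of this first-order splitting shrinks roughly as 1/steps.
print("approximation error:", np.linalg.norm(exact - trotter))
```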


This talk was followed by a panel during which Lin said that he is interested in solving quantum many-body problems, as well as applying quantum computing to other areas of mathematics, such as numerical analysis and linear algebra.

Michael J. Miller, PC Magazine