Tracking quantum computing can be confusing, in part because there are multiple approaches to it. Most of the effort goes toward what are called gate-based computers, which allow you to perform logical operations on individual qubits. These are well understood theoretically and can perform a wide variety of calculations. But gate-based systems can be built from a variety of qubits, including photons, ions, and electronic devices called transmons, and companies have grown up around each of these hardware options.
But there’s a separate form of computing called quantum annealing that also involves manipulating collections of interconnected qubits. Annealing hasn’t been as thoroughly worked out in theory, but it appears to be well matched to a class of optimization problems. And when it comes to annealing hardware, there’s only a single company: D-Wave.
Now, things are about to get more confusing still. On Tuesday, D-Wave released its roadmap for upcoming processors and software for its quantum annealers. But D-Wave is also announcing that it’s going to be developing its own gate-based hardware, which it will offer in parallel with the quantum annealer. We talked with company CEO Alan Baratz to understand all the announcements.
The simplest part of the announcement to understand is what’s happening with D-Wave’s quantum-annealing processor. The current processor, called Advantage, has 5,000 qubits and 40,000 connections among them. These connections play a major role in the chip’s performance: if a direct connection between two qubits can’t be established, other qubits have to act as a bridge, which lowers the effective qubit count.
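The bridging idea can be sketched with a toy model. The topology below is an assumed six-qubit ring for illustration only, not D-Wave's actual connection graph, and `bridge_path` is a hypothetical helper: when two qubits lack a direct coupler, a chain of intermediate qubits has to carry the interaction, and those intermediates are no longer available to hold problem variables.

```python
from collections import deque

def bridge_path(couplers, a, b):
    """Breadth-first search for the shortest chain of qubits linking
    a to b through the available couplers."""
    adj = {}
    for u, v in couplers:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    queue, seen = deque([[a]]), {a}
    while queue:
        path = queue.popleft()
        if path[-1] == b:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain exists

# Toy topology: a 6-qubit ring, where qubits 0 and 3 aren't directly coupled.
ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
path = bridge_path(ring, 0, 3)
print(path)           # a shortest chain, e.g. [0, 1, 2, 3]
print(len(path) - 2)  # 2 intermediate qubits consumed as the bridge
```

The more couplers each qubit has, the shorter these chains get, which is why raising per-qubit connectivity (and not just the qubit count) boosts the number of qubits left over for the actual problem.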
Starting this week, users of D-Wave’s cloud service will have access to an updated version of Advantage. The qubit and connection stats will remain the same, but the device will be less influenced by noise in the system (in technical terms, its qubits will maintain their coherence longer). “This performance update will allow us to solve larger problems with greater precision and a higher probability of correctness due to some new fabrication processes that we are using,” Baratz told Ars. He said the improvements came about through changes to the qubit fabrication process and the materials used to create them.
The influence of noise in a quantum optimizer doesn’t necessarily mean it will produce a “wrong” result. Typically, for optimization problems, it means the machine won’t find the optimal solution but will instead find something close to it. So the reduced noise in the new processor means the machine is more likely to land closer to the true optimum.
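This effect can be mimicked with a toy model (entirely assumed, and classical; it has nothing to do with D-Wave's actual physics): an optimizer that scans every candidate state but judges each one by a noise-corrupted energy reading. With no noise it returns the true optimum; with noise it tends to return low-energy states that aren't quite the minimum.

```python
from itertools import product
import random

TARGET = (1, 0, 1, 1, 0, 0, 1, 0)

def energy(bits):
    # Arbitrary toy cost: number of mismatches against a target pattern.
    return sum(b != t for b, t in zip(bits, TARGET))

def noisy_optimize(noise_sigma, rng):
    """Scan every 8-bit state, judging each by a noise-corrupted energy
    reading; return the true energy of the state that *looked* best."""
    best, best_seen = None, float("inf")
    for bits in product((0, 1), repeat=8):
        seen = energy(bits) + rng.gauss(0, noise_sigma)
        if seen < best_seen:
            best, best_seen = bits, seen
    return energy(best)

rng = random.Random(42)
print(noisy_optimize(0.0, rng))  # noiseless: finds the true optimum, energy 0
results = [noisy_optimize(2.0, rng) for _ in range(100)]
print(min(results), max(results))  # noisy picks cluster near, but not always at, 0
```

Shrinking `noise_sigma` pulls the noisy results toward the true minimum, which is the sense in which lower noise means "greater precision" here.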
Further out in the future is the follow-on system, Advantage 2, which is expected late next year or the year after. This will see another boost to the qubit count, going up to somewhere above 7,000. But connectivity will go up considerably as well, with D-Wave targeting 20 connections per qubit. “Now that we’ve crossed a certain threshold on the number of qubits, it seems to be connectivity that will give us the bigger boost,” Baratz told Ars.
Further from the hardware
D-Wave provides a set of developer tools it calls Ocean. In previous iterations, Ocean has allowed people to step back from directly controlling the hardware; instead, if a problem could be expressed as a quadratic unconstrained binary optimization (QUBO), Ocean could produce the commands needed to handle all the hardware configuration and run the problem on the optimizer. D-Wave referred to this as a hybrid problem solver, since Ocean would use classical computing to optimize the QUBO prior to execution.
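For a rough sense of what a QUBO is, the sketch below brute-forces one in plain Python (this is illustrative only, not Ocean's actual API). A QUBO asks for the binary vector x that minimizes the quadratic form x^T Q x; an annealer exists precisely because exhaustive search like this becomes impossible at scale.

```python
from itertools import product

def solve_qubo(Q):
    """Exhaustively minimize sum over (i, j) of Q[i, j] * x_i * x_j for
    binary x. Q is a dict mapping (i, j) index pairs to coefficients;
    only feasible for tiny problems."""
    n = 1 + max(max(i, j) for i, j in Q)
    best_x, best_energy = None, float("inf")
    for x in product((0, 1), repeat=n):
        e = sum(c * x[i] * x[j] for (i, j), c in Q.items())
        if e < best_energy:
            best_x, best_energy = x, e
    return best_x, best_energy

# A two-variable toy: -x0 - x1 + 2*x0*x1 is minimized by setting exactly
# one of the two variables, a common "choose one of these" encoding.
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}
print(solve_qubo(Q))  # -> ((0, 1), -1); (1, 0) ties at the same energy
```

Many familiar optimization problems (scheduling, portfolio selection, graph partitioning) can be rewritten as penalty terms in a Q matrix like this, which is the translation step Ocean's hybrid solver takes over once the QUBO itself is supplied.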
The only problem is that not everyone who might be interested in trying D-Wave hardware knows how to express their problem as a QUBO. So the new version of Ocean will add another layer of abstraction, letting problems be submitted in the formats typically used by the people who work on them. “You will now be able to specify problems in the language that data scientists and data analysts understand,” Baratz promised.
If that works out, it could eliminate a major roadblock keeping people from testing whether D-Wave’s hardware offers a speed-up on their problems.
The biggest part of today’s announcement, however, may be that D-Wave intends to also build gate-based hardware. Baratz explained that he thinks annealing-based optimization is likely to remain a valid approach, pointing to a draft publication showing that structuring some optimization problems for gate-based hardware may be so computationally expensive that it would offset any gains the quantum hardware could provide. But it’s also clear that gate-based hardware can solve an array of problems that a quantum annealer can’t.
He also argued that D-Wave has solved a number of problems that are currently limiting advances in gate-based hardware that uses electronic qubits called transmons. These include the amount and size of the hardware that’s needed to send control signals to the qubits and the ability to pack qubits in densely enough so that they’re easy to connect but not close enough that they start to interfere with each other.
One of the problems D-Wave faces, however, is that the qubits it uses for its annealer aren’t useful for gate-based systems. While they’re based on the same bit of hardware (the Josephson junction), the annealer’s qubits can only be set as up or down, whereas a gate-based qubit needs to allow manipulations in three dimensions. So the company is going to try building flux qubits, which also rely on Josephson junctions but use them in a different way, meaning at least some of the company’s engineering expertise should still apply.
Will the rest? There’s no way to find out without building hardware, and Baratz said that the first test qubits were just being chilled to operating temperatures when we spoke. He was also conservative about what the qubit count would look like once the hardware is ready for public use, saying, “Until we build and measure, I’m not going to guess.”