Both approaches can work for error correction, but current hardware limits them. The present generation of quantum hardware is called "NISQ": noisy, intermediate-scale quantum processors. The "intermediate scale" refers to the qubit count, typically in the dozens, while the "noisy" part refers to the fact that current qubits are error-prone. These errors can come from problems setting or reading the qubits, or from qubits losing their state while a calculation is underway.
In the long run, most experts expect some form of error correction to be essential. Most error-correction schemes involve distributing a logical qubit's information across several physical qubits and using additional qubits to track that information in order to identify and correct errors.
Returning to Google's quantum computing effort, the company has said its processor layout was chosen in part because it simplifies implementing error correction. Now, the team has implemented two different error-correction schemes on the processor. The results show that error correction clearly works, but we'll need both more qubits and lower inherent error rates before it can be useful.
In any quantum processor, qubits are wired to their neighbors. There are many ways these connections could potentially be arranged, with limitations stemming from the fact that some qubits have to sit at the edge of the grid and thus have fewer connections. (Most larger processors also have one or more inactive connections due to manufacturing flaws or high error rates.)
The connections among qubits on Google's Sycamore chip. The actual chip has more qubits, all following this pattern. John Timmer
Google chose an architecture that connects each internal qubit to four of its neighbors, while qubits on the edge have only a pair of connections each. You can see this layout in the image above.
The two error-correction schemes are shown below. In both, the data, a single logical qubit, is distributed among the qubits indicated by red dots. Blue dots are qubits that are measured to check for errors and manipulated to correct them. By analogy with classical bits, you can think of the blue qubits as checking the parity of their neighbors and, when an error occurs, identifying which qubit most likely has the problem. In the first scheme, at left, measurement and data qubits alternate along a linear chain, with the chain's length limited only by the number of qubits on the processor (which is larger than the one shown here). Each measurement qubit tracks both of its neighbors; if either experiences an error, the measurement detects it. (These being qubits, there's more than one possible type of error, and this scheme fails if both types occur at once.)
Data qubits (red) and measurement qubits (blue) arranged in two ways: as a single chain (left) and as a compact patch (right). John Timmer
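The parity-check idea described above can be sketched with classical bits in a few lines of Python. This is an illustrative toy, not Google's implementation: the list entries stand in for the red data qubits, and each check computes the parity of two adjacent data bits, playing the role of the blue measurement qubits. All function names here are invented for the example.

```python
# Classical analogy for the linear error-correction chain described above.
# Each "check" computes the parity of two adjacent data bits; comparing
# check results across rounds localizes a single flipped bit, because an
# interior flip changes the two checks on either side of it, while an
# edge flip changes only one.

def syndromes(data):
    """Parity of each adjacent pair of data bits (the parity checks)."""
    return [data[i] ^ data[i + 1] for i in range(len(data) - 1)]

def locate_single_flip(before, after):
    """Compare check results before/after a round; return the index of
    the data bit most likely to have flipped, or None if none changed."""
    changed = [i for i, (a, b) in enumerate(zip(before, after)) if a != b]
    if not changed:
        return None
    if len(changed) == 2 and changed[1] == changed[0] + 1:
        return changed[1]            # interior bit: both adjacent checks fire
    return 0 if changed[0] == 0 else len(before)   # edge bit: one check fires
```

For instance, starting from five data bits all set to 0 and flipping the middle one, the two middle checks change, which pinpoints bit 2 as the culprit. As noted in the text, the analogy breaks down for qubits, which have more than one kind of error.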
The second scheme, at right, requires a specific arrangement of qubits and is therefore harder to extend across larger areas of the processor. It's also more difficult to determine which data qubit is responsible for a detected error, which means that when problems are found, the affected computations have to be discarded rather than corrected. The advantage, however, is that both types of error can be detected simultaneously, providing stronger protection.
Does it work? Generally, yes. In what may be the clearest demonstration, the researchers started the linear scheme with a chain of five qubits and gradually extended it to 21 qubits. As the chain grew, the error correction got progressively stronger, with the error rate dropping by a factor of 100 between the five-qubit and 21-qubit chains. Errors still occurred, though, so error correction isn't flawless at this point. Performance remained stable through 50 rounds of error checking.
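The payoff from longer chains can be illustrated with a classical toy model. This sketch assumes independent bit flips and simple majority-vote decoding, which real quantum hardware does not satisfy, so it only illustrates the general trend, not the factor-of-100 figure reported above: when the per-bit error rate is below 50 percent, the chance that a majority of the chain flips falls rapidly as the chain grows.

```python
# Toy model: probability that majority-vote decoding over n independent
# bits fails, given each bit flips with probability p. This fails only
# when more than half the bits flip, which becomes exponentially
# unlikely as n grows (for p < 0.5).
from math import comb

def logical_error_rate(n, p):
    """P(at least ceil((n+1)/2) of n bits flip) under i.i.d. flips."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))
```

With, say, p = 0.05, an 11-bit chain fails far less often than a 5-bit chain, which in turn fails far less often than a single bit. Above p = 0.5 the trend reverses and longer chains make things worse, a crude analog of the error-rate threshold that real error-correcting codes exhibit.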
The second error-correction configuration also experienced errors, but most of them were caught, and the precise nature of the error could generally be inferred. Because this setup requires more exacting engineering to work, however, the team didn't extend it beyond a limited number of qubits.
The error-correction system gets tripped up in part because it's invoked so often. For the linear scheme, the researchers found that 11 percent of the runs ended up detecting an error. That's clearly a function of the "noisy" part of current NISQ processors, but it also means the error correction has to be remarkably good if it's going to catch them all. And since it runs on the same hardware, the correction process is likely subject to the same sorts of errors as the data qubits themselves.
Another problem the researchers noted is specific to the first, linear scheme. Because the chain winds back and forth across the processor, qubits that are far apart along the chain can be physically adjacent on the chip. That physical proximity lets them influence each other, creating correlated errors in the measurements.
Finally, the entire system occasionally suffers a catastrophic drop in performance. The researchers attribute this to cosmic rays or local radiation sources striking the chip. While not especially frequent, these events are frequent enough to cause problems, and the problem will grow with qubit count simply because larger processors present a bigger target.
Clearly, we're not there yet. For the second scheme, where error detection causes computations to be discarded, the research team found it threw out more than a quarter of its operations. "We find that the overall performance of Sycamore [processors] must be improved in order to observe error suppression in this code," the researchers acknowledge.
Even with a 21-qubit chain, the error rate ended up at roughly one in every 100,000 operations. That's certainly low enough to reasonably expect that computations could proceed with errors being caught and corrected along the way. But then you have to remember that all 21 of those qubits were used to encode a single logical qubit. Even the largest current processors could host only a couple of logical qubits using this scheme.
None of this comes as a surprise to anyone in the quantum computing field, since it's generally accepted that we'll need error-corrected qubits before we can reliably perform most useful calculations, likely requiring something on the order of a million physical qubits. That doesn't mean NISQ processors won't be useful before then. If an important computation would take a supercomputer a billion years, running it thousands of times on a quantum processor is still a good deal, so long as one of those runs produces an error-free result. But truly useful error correction will have to wait.
Nature, 2021. DOI: 10.1038/s41586-021-03588-y (about DOIs).