To use a quantum error-correcting code to improve the performance of a noisy quantum computer, we need to be able to do operations on encoded states without causing the uncontrollable spread of errors.
Consider the controlled-NOT, a very simple and common two-qubit operation. If the first bit is a 0, nothing happens. If the first bit is a 1, the second bit is flipped. This means that if, before the CNOT, there is a bit flip error in the first bit, then, after the CNOT, the second bit is wrong as well, since it gets flipped exactly when it is supposed to stay the same. The error has propagated from the first bit to the second bit.
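This forward propagation can be checked directly with small matrices. The following is an illustrative sketch in Python with NumPy (not part of the original discussion); the convention of putting the control qubit first is a choice made here:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])  # the bit-flip (Pauli X) operator

# CNOT with the first qubit as control, second as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# A bit flip on the first qubit before the CNOT equals bit flips on
# BOTH qubits after the CNOT:  CNOT (X x I) = (X x X) CNOT
print(np.allclose(CNOT @ np.kron(X, I), np.kron(X, X) @ CNOT))  # True
```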
This much is true classically. However, in a quantum computer, we also have to worry about sign errors. Consider what the CNOT does to the state
(a |0> + b |1>) (|0> + |1>).
When the first qubit is 1, the second qubit is flipped, which has no effect on this state, so after the CNOT, the state is still
(a |0> + b |1>) (|0> + |1>).
If there has been a sign error on the second qubit, so the state is actually
(a |0> + b |1>) (|0> - |1>),
then when the first qubit is 1, the second qubit gets an overall sign of -1. There is no sign change when the first qubit is 0, so in this case, after the CNOT the state is
(a |0> - b |1>) (|0> - |1>).
The sign error has propagated from the second qubit to the first qubit.
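The state calculation above can be replayed numerically. In this brief sketch, the amplitudes a = 0.6 and b = 0.8 are arbitrary values chosen for illustration:

```python
import numpy as np

# CNOT with the first qubit as control, second as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

a, b = 0.6, 0.8                          # arbitrary amplitudes, |a|^2 + |b|^2 = 1
first = np.array([a, b])                 # a|0> + b|1>
second = np.array([1, -1]) / np.sqrt(2)  # |0> - |1>, i.e. after a sign error

after = CNOT @ np.kron(first, second)

# The sign error has moved onto the FIRST qubit: (a|0> - b|1>)(|0> - |1>)
expected = np.kron(np.array([a, -b]), second)
print(np.allclose(after, expected))  # True
```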
The possibility of error propagation is a major concern when attempting fault-tolerant computation. If we are doing CNOTs willy-nilly, a single error in a block can propagate to other qubits in the block, becoming two or three or even more errors. If we are using a code that can only correct a small number of errors, we will move from a situation we can handle (one error) to a situation we cannot. In order to avoid this eventuality, we should set up our fault-tolerant algorithm to only do CNOTs between corresponding qubits in different blocks.
The other concern for fault-tolerant computation is performing operations which take valid codewords to valid codewords. A particularly good code for this purpose is the seven-qubit code. The 0 and 1 of this code are superpositions of the even and odd parity states, respectively, of the seven-bit classical Hamming code. Its stabilizer is given below (one standard choice of generators):

  I I I X X X X
  I X X I I X X
  X I X I X I X
  I I I Z Z Z Z
  I Z Z I I Z Z
  Z I Z I Z I Z
Suppose we take the Hadamard transform, which maps
|0> -> |0> + |1>
|1> -> |0> - |1>,
and perform it on each qubit of the seven-qubit code. The Hadamard transform effectively interchanges X and Z. Therefore, when applied to the seven-qubit code, it switches the first three generators of the stabilizer with the last three generators. Since the states of the code are exactly those states which are +1-eigenvectors of all six operators, after the Hadamard transform, they are still +1-eigenvectors of these same six operators, so they are valid codewords. In fact, it turns out that the new codeword is just the (encoded) Hadamard transform of the original codeword.
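This bookkeeping can be checked by writing each generator as a string of I, X, Z. The generators below follow one standard convention for the seven-qubit code, with the X-type generators listed first; other equivalent choices exist:

```python
# Stabilizer generators of the seven-qubit code (one standard convention):
# the first three are built from X, the last three from Z, with the pattern
# in each case coming from the Hamming code's parity check matrix.
stabilizer = ["IIIXXXX", "IXXIIXX", "XIXIXIX",
              "IIIZZZZ", "IZZIIZZ", "ZIZIZIZ"]

def transversal_hadamard(generator):
    """Apply a Hadamard to every qubit: H X H = Z, H Z H = X, H I H = I."""
    swap = {"X": "Z", "Z": "X", "I": "I"}
    return "".join(swap[p] for p in generator)

transformed = [transversal_hadamard(g) for g in stabilizer]

# The transform exchanges the X-type and Z-type generators, leaving the
# stabilizer unchanged as a set, so codewords are mapped to codewords.
print(set(transformed) == set(stabilizer))  # True
```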
There are a number of basic operations which, like the Hadamard transform, convert Pauli matrices into other Pauli matrices or tensor products of Pauli matrices. Because of the symmetry of the stabilizer of the seven-qubit code, any such operation will just permute the elements of the stabilizer. Therefore, any of these operations can be used to get a fault-tolerant operation on the seven-qubit code. This gives a large set of operations, but not quite enough for universal computation. A more complicated construction allows us to complete the set of basic operations, producing universal fault-tolerant computation for this code and similar codes.
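The phase gate S = diag(1, i) is a simple example of such an operation. This numerical check is an illustration added here, not drawn from the original text:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
S = np.diag([1.0, 1.0j])  # the phase gate

# Conjugation by S sends X to Y and leaves Z fixed, so S converts
# Pauli matrices into other Pauli matrices, like the Hadamard transform.
print(np.allclose(S @ X @ S.conj().T, Y))  # True
print(np.allclose(S @ Z @ S.conj().T, Z))  # True
```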
For a more general stabilizer code, most operations will not simply permute the elements of the stabilizer. However, by preparing certain ancilla states, using those operations that do work, and performing carefully chosen measurements, we can still do universal fault-tolerant computation.
Therefore, by choosing a suitable quantum code and performing fault-tolerant operations, we can reduce the effective noise in a quantum computer. We alternate fault-tolerant error correction, to clean up existing errors, with fault-tolerant operations to advance the calculation. Of course, errors are occurring even during error-correction, so there is a race between the appearance of new errors and our ability to correct them. If the error rate per gate is low enough, we can correct errors faster than they appear, and we can keep the error rate lower than it would be without error-correction.
Now, sooner or later, there will be a real error anyway. Just by chance, it is possible that a large number of errors will all occur at about the same time. Our code can only handle so many, and if there are more than it can handle, the result will be an error in the data. The more errors that must occur for this to happen, the less likely it is to happen. However, codes that correct more errors typically require more effort to work at all, so it is not clear we can ultimately win by using a larger code. As it happens, if we pick the right kind of larger code, we can correct more errors with only a modest increase in the complexity of the error-correction step. The result is that if the basic error rate is below some threshold, we can do arbitrarily long computations by choosing a sufficiently large code.
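The threshold behavior can be illustrated with a toy model: suppose each level of code concatenation squares the rescaled error rate, so the logical rate is roughly p_th (p/p_th)^(2^k) after k levels. The threshold value p_th = 10^-2 below is a made-up number for illustration only; real thresholds depend on the code and the error model.

```python
P_TH = 1e-2  # hypothetical threshold error rate (illustrative only)

def logical_error_rate(p, levels):
    """Toy model: each concatenation level squares the rescaled rate p / P_TH."""
    return P_TH * (p / P_TH) ** (2 ** levels)

# Below threshold, adding levels drives the logical rate down rapidly;
# above threshold, adding levels only makes things worse.
for p in (1e-3, 2e-2):
    print(p, [logical_error_rate(p, k) for k in range(4)])
```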
October 29, 1997