Philip Turk

Making quantum errors worse (on purpose)

Updated: Aug 17

In an internally funded project this year we are exploring the world of quantum computing together with our colleagues from the Mathematics & Cybernetics department at SINTEF.


The amazingness of quantum computers has been explained countless times by others and will not be the focus of this blog post. Nor will I go into much detail about how they work. For a short introduction, check out this cool little video from Kurzgesagt.


The important thing to note for now is that quantum computers may be a real game changer (some say a "revolution") for many fields of science and engineering once the technology becomes mature.


Size and quality

Before it can reach its full potential, however, we need to solve two main problems. The first one is the size of quantum computers, i.e. the number of quantum bits (qubits) a quantum computer has. Small quantum computers naturally have limited processing capacity, but a large enough one can tackle certain kinds of mathematical problems that are effectively out of reach for classical machines. Engineers have already managed to push the technological limit from just a handful of qubits a few years ago to 50 and beyond, and we can probably expect this rapid improvement to continue in the foreseeable future. Once we reach several hundred qubits, algorithms like Shor's integer factorisation method [link] start to become feasible, at least in principle.

The second problem has to do with the quality of the qubits, namely their error rate. Quantum states are inherently unstable and hard to control, which is why it is virtually impossible to build 100% accurate quantum computers. In fact, every operation a quantum computer performs on a set of qubits involves some amount of error. When we perform a sequence of many operations (as most algorithms do), this error accumulates and distorts the final result. To understand what this means and how to mitigate the problem, let's use the following analogy:


The imperfect archer

You have a friend who is an archer, and who is also very good at calculating things. So when you give him an integer and ask him to find its two prime factors, he can easily calculate the answer for you. Unfortunately, he is also mute.

To communicate the information to you, he uses a wall with a two-dimensional grid drawn on it, representing all the possible combinations of prime factors. He simply shoots an arrow at the point on the grid that represents the correct answer.

Archery, however, is never 100% precise, and your friend may occasionally miss his target, giving you a wrong result. Even worse, there is a constant wind from the right blowing his arrows off course, so they tend to land a bit to the left of where he is aiming. His errors are not only random; they also have a significant bias to one side.

Let's assume you gave him the number 91 to factorise. He calculates that 91 = 7 × 13, so he aims at the square 7 × 13 (coloured in green here - but you don't see that!). However, because of the wind and his imperfect aim, the arrow lands off to the left (blue dot).

Naively, you would interpret this as 91 = 7 × 11, which is of course the wrong answer.

What can you do to mitigate this? You cannot simply tell your friend to get better at aiming. You cannot turn off the wind. An electric fence is stopping him from getting any closer to the wall. There is no way of getting rid of the error.

However, you can always make the error worse! Right now, your friend is shooting from a distance of 20 metres, giving you the wrong result of 7 × 11. Now you ask him to shoot again from 40, 60, 80 and 100 metres. This is what happens:

The further away your friend stands, the larger the error gets and the further away he hits from where he was aiming. Of course you cannot really see where he was aiming, nor which square represents the correct answer. Instead, you look at the sequence of progressively worse hits and wonder: where would he hit if he could shoot from distance zero? Where was he actually aiming?

The blue dots don't make a perfectly straight line, but roughly so. You cannot guess exactly where the archer was aiming, but you estimate that it was probably somewhere within the red circle.

Which means that the correct solution is probably 7 × 13. You were able to guess this, even though not a single arrow hit correctly!
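In code, the guess you just made by eye is a simple straight-line extrapolation back to distance zero. Here is a minimal Python sketch; the hit coordinates are invented purely for illustration:

```python
import numpy as np

# Invented hit coordinates (in grid units) for shooting distances of
# 20, 40, 60, 80 and 100 metres: the first factor scatters around 7,
# the second drifts further and further to the left of 13.
distances = np.array([20, 40, 60, 80, 100])
p_hits = np.array([6.9, 7.1, 6.8, 7.2, 6.9])
q_hits = np.array([11.6, 10.4, 9.3, 7.9, 6.8])

# Fit a straight line through the hits and extrapolate to distance zero,
# i.e. estimate where the archer was actually aiming.
p_aim = np.polyval(np.polyfit(distances, p_hits, 1), 0)
q_aim = np.polyval(np.polyfit(distances, q_hits, 1), 0)
print(f"estimated aim: {p_aim:.1f} x {q_aim:.1f}")  # roughly 7 x 13
```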


You have probably already figured out that the archer in this analogy represents a quantum computer. Much like the archer, the quantum computer can perform amazing calculations for us, but with some amount of error that we cannot get rid of. We can, however, artificially make the error worse and then try to extrapolate what the correct (i.e. error-free) result might look like.


Quantum circuits

To see how we can make quantum errors worse, we need to look at quantum gates and quantum circuits.

What you see here is a quantum circuit on 5 qubits, visualised using IBM's Qiskit package [link]. Each horizontal line represents one qubit (called q0, q1, q2, q3 and q4), while the double line at the bottom, called c, represents 5 classical bits. Each of the boxes marked with H, X or Z represents a single-qubit quantum operation, i.e. some manipulation of the state of that qubit. The vertical lines connecting two qubits, marked with a "+", represent a two-qubit operation called CX or CNOT.

Right now, you don't need to know what all of these operations do, only that they are called quantum gates and that the quantum computer will apply them to the qubits in order from left to right.

In the end, we measure each of the 5 qubits and store the results in the 5 classical bits, as indicated by the 5 black boxes at the right end of the diagram. Measuring a qubit destroys its complex quantum state and produces only 0 or 1 as a result. The probability of getting 0 or 1 depends on the quantum state, but once we have measured it, there is no way of reconstructing it.
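If you would like to build such a circuit yourself, here is a minimal Qiskit sketch. The gate placement is made up for illustration; it is not the circuit from the figure:

```python
from qiskit import QuantumCircuit

# A 5-qubit circuit with 5 classical bits, in the same spirit as the one
# in the figure (the exact gate sequence here is made up).
qc = QuantumCircuit(5, 5)
qc.h(0)                         # Hadamard gate on qubit 0
qc.x(1)                         # Pauli-X (bit flip) on qubit 1
qc.z(2)                         # Pauli-Z (phase flip) on qubit 2
qc.cx(0, 3)                     # CNOT: control qubit 0, target qubit 3
qc.cx(1, 4)                     # CNOT: control qubit 1, target qubit 4
qc.measure(range(5), range(5))  # measure every qubit into a classical bit

print(qc.draw())
```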

Every quantum algorithm can in principle be represented in this way. This particular example is taken from a 2020 paper by our colleagues from the Mathematics & Cybernetics department [link].


Quantum error

If we let a quantum computer evaluate this circuit, we get the following result:

We have 5 measurements of qubits and each of them can turn out as 0 or 1, which gives us 32 possible combinations. The blue bars are what the result should be on a perfect, error-free quantum device. We would expect to measure "01101" and "11111" each with 50% probability. However, looking at the actual result from a real quantum computer (we used IBM's ibmqx2 in this case), we can see how badly the error has confounded the measurement. Given only the pink bars, we would have a hard time reconstructing the correct result. Sure enough, the two highest bars are still "01101" and "11111", but they are far from the 50% where they should be! All the other bars have varying heights - which is quite bad, considering that they should all lie at zero.

Note that what you see here is a typical, but not necessarily representative result. Next time we run the same circuit on the same quantum computer, the pink bars may look a bit different due to random noise.
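For reference, this is roughly how such a histogram is obtained in code. The sketch below uses the local Aer simulator (assuming the qiskit-aer package and the qc circuit from the sketch above) instead of real hardware:

```python
from qiskit import transpile
from qiskit_aer import AerSimulator

sim = AerSimulator()
shots = 8192
counts = sim.run(transpile(qc, sim), shots=shots).result().get_counts()

# Turn raw counts into estimated probabilities per bit string.
probs = {bits: n / shots for bits, n in counts.items()}
print(probs)
```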


Increasing the error

Now as with the archer, we want to make the error even worse, hoping that this will give us some clue as to what the correct result might be. How exactly do we do that?

This is where we come back to quantum gates. Counting all types of gates and their varieties in use today, there are a dozen or so, although not every quantum system implements all of them. The ones you see above (H, X, Z, CX) are among the most common. It just so happens that these four, as well as a few others, are a special type of operation called an involution! This is a mathematical property which means that such a gate cancels itself out when applied twice.

Now, if we triple every gate in our circuit, in other words replace H by HHH, X by XXX and so forth, two of the three copies will neutralise each other and essentially nothing will change - except that some more error is added.
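In code, this repetition could look something like the sketch below. It assumes a recent Qiskit version and a circuit built only from self-inverse gates such as H, X, Z and CX:

```python
from qiskit import QuantumCircuit

def repeat_gates(circuit: QuantumCircuit, factor: int) -> QuantumCircuit:
    """Replace every gate G by `factor` copies of G.

    `factor` must be odd, so that the extra copies cancel out in pairs and
    the circuit stays logically equivalent. Assumes all gates are
    self-inverse (true for H, X, Z and CX). Measurements cannot be
    repeated, so they are appended only once.
    """
    if factor % 2 == 0:
        raise ValueError("factor must be odd")
    folded = QuantumCircuit(*circuit.qregs, *circuit.cregs)
    for inst in circuit.data:
        repeats = 1 if inst.operation.name in ("measure", "barrier") else factor
        for _ in range(repeats):
            folded.append(inst.operation, inst.qubits, inst.clbits)
    return folded
```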

Here we see the same circuit as above, but with every gate tripled:

It is equivalent to the first one when run on a perfect, error-free quantum computer, except that it takes three times as long to run through. On a real machine, however, it should produce an error that is roughly three times as bad.

In the same fashion, we can construct more circuits by repeating every gate 5 times, 7 times, etc. (remember that we always add gates in pairs, ensuring that they cancel each other out). This is analogous to telling the archer to shoot from 3, 5 and 7 times the original distance.
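With a helper like the repeat_gates sketch above, generating the whole family of circuits is a one-liner:

```python
# Repetition factors 1, 3, 5 and 7 - the analogue of sending the archer
# to 1x, 3x, 5x and 7x the original distance.
folded_circuits = [repeat_gates(qc, f) for f in (1, 3, 5, 7)]
```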

And here is the result, as obtained from the same ibmqx2:

Again, note that this is a typical result which may vary somewhat each time we try this.


Analysis

Let's reflect a bit on what we see here. Generally, the error seems to get progressively worse with higher repetition factors, as we would expect.

Can we find enough regularities to extrapolate the correct result? Well, I would argue yes. At least partially. But the situation is a bit more complicated than with the archer.

Look at the outcome for measurement "00000" for instance. Just from looking at it, it seems pretty clear that it should be zero (or very close to zero) and that the rising bars are solely due to an increasing error - which is of course correct. The measurements for "00001", "00010" and a few others look similar.

Now look at the measurements for "11111". We see a sharp decline in the height of the bars and would therefore probably guess, correctly, that the blue bar should lie somewhere above the pink bar. But how high? Considering that the difference between each of the other bars and its neighbour is 5-10 percentage points, we might estimate the blue bar to be around 30%. Not a great estimate, but still better than what we measured.

Looking at "01101" things get even worse. There is no longer a clear progression here. We start at somewhere below 20%, followed by a sharp drop to around 5% where all the remaining bars lie. What do we make of this? Is it likely that the blue bar would be higher than the pink one, or is this too irregular to tell? Just from looking at the diagram, that seems unclear. Perhaps a more sophisticated analysis would bring us somewhere close to a good estimation, but that seems far from certain. Quite a few of the other measurements look equally inconclusive.

And then it gets outright misleading when we look at "11011"! These declining bars seem to indicate clearly that the correct result must be somewhere above 5%, while it should actually be zero.

In conclusion, we would have a hard time reconstructing the correct result exactly, but for the most part we would end up with something better than what we measured.
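This gut-feeling extrapolation can of course also be attempted numerically, one bit string at a time, just as with the archer. A minimal sketch with invented numbers and a simple polynomial fit (which, as discussed below, glosses over the non-linearity of real quantum errors):

```python
import numpy as np

# Invented probabilities of measuring one particular bit string at
# repetition factors 1, 3, 5 and 7.
factors = np.array([1, 3, 5, 7])
probs = np.array([0.32, 0.19, 0.12, 0.08])

# Fit a low-order polynomial and extrapolate to factor 0, the
# (hypothetical) zero-error limit.
fit = np.polyfit(factors, probs, 2)
estimate = np.polyval(fit, 0)
print(f"estimated error-free probability: {estimate:.2f}")
```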


Now why is this so much more complex than expected? Part of the reason is that the analogy with the archer is, of course, oversimplified. We sort of implied that tripling each quantum gate would simply triple the error, but it is not that easy. Quantum errors are generally non-linear; if you try to add them up, their combined effect is not simply the sum of their individual effects. We also neglected the measurement itself, which produces its own error but cannot be repeated like the other operations. Lastly, there is always a lot of random fluctuation that makes the data look even messier.

In the archer analogy, this is as if the wind were constantly shifting while the archer had to use randomly misshapen arrows. No wonder it's a little harder to guess the correct answer under such conditions.


The next step

This is more or less as far as our little project has got so far. We have developed some code to randomly build quantum circuits with certain parameters, and then some more code to artificially increase the error (we tried a few methods other than the one described here, with mixed success). We tested this on a few circuits and got results more or less similar to the example above.

But the work is far from done! So far, the extrapolation of the correct results has been based on human gut feeling. This is fine for building intuition and developing ideas, but it's not exactly science. The next step has to be a more systematic approach.

We want to let machine learning (ML) do the extrapolation for us. This way we can automate the process and ensure that results are reproducible and free of (human) bias. The best approach would be to generate a random sample of quantum circuits and test how well different ML algorithms perform in guessing the correct results.

The end goal here is to develop a kind of hybrid system between machine learning and quantum computing, which would have a higher accuracy than quantum computing alone.
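To make the idea more concrete, here is a purely illustrative sketch; the model choice and data layout below are assumptions for illustration, not decisions the project has made:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

n_outcomes = 2**5  # 32 possible bit strings for 5 qubits

# Placeholder data: in reality X would hold, for many random circuits, the
# measured distributions at factors 1, 3, 5 and 7 (concatenated), and y the
# ideal distributions obtained from classical simulation.
X = np.random.rand(1000, 4 * n_outcomes)
y = np.random.rand(1000, n_outcomes)

model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500)
model.fit(X, y)
estimated_ideal = model.predict(X[:1])  # predicted error-free distribution
```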


In all this, the machine learning itself must not use up more computational power than simply simulating the quantum computer on classical hardware - otherwise the simulation would be the more feasible option, and we would gain no advantage from having a quantum computer in the first place.


Scaling up

In order to train the ML algorithms to perform their task, we need training data that includes both the real measurement and the error-free correct answer. Since no quantum computer today is error-free, the correct answer has to come from simulations run on classical computers. This is no problem as long as we are talking about small quantum computers (up to ~30 qubits). Large quantum computers, however, are impossible to simulate with classical hardware.

If we want to apply this technique to quantum computers with hundreds of qubits, we have to find a way of training an ML algorithm on sample data from a smaller quantum computer and then somehow generalise its implicit knowledge to larger sizes. This in itself is an interesting problem to solve and it is not at all clear if it is even possible.

On the other hand, if this work is successful, it could significantly aid the development of larger quantum systems in the near future.



#quantum #computing #quantumerror #errormitigation #machinelearning #ML #extrapolation

