Quantum Computing, Beyond Qubits – Part 2: Leveraging Nature’s Randomness

Georges Lteif

Software Engineer

Last Updated on April 15, 2022.

1. Overview

In the first part of this series, we went through the basics of quantum computing, mainly from the perspective of a computer scientist.

We talked about bits versus qubits, and classical gates versus quantum gates. We completed the discussion with a simplified description of Deutsch’s quantum algorithm which, I hope, gave the reader a feel for what quantum computing is about.

In this second part of the series, we would like to approach the subject of randomness in the quantum world. In the third instalment, we will discuss quantum computers that can solve hard optimization problems through a process known as Quantum Annealing.

This will lead us to explore two topics.

A- Classical vs Quantum Randomness

First, we will look at nature’s way of creating randomness in the subatomic world, and how different that is from our day-to-day experience of random events.

B- Leveraging Quantum Randomness

Once we have that covered, we will then move on to the next topic: how quantum computing leverages randomness to achieve quantum speedups for some types of problems.

But before we proceed, a word of caution:

Do Not Be Intimidated by the Equations!

We will keep the mathematical equations in this text to an absolute minimum. There will also be a generous amount of accompanying explanation whenever these are present. In most cases, they are very simple and should not be difficult to comprehend.


2. Table of Contents


3. Leveraging Randomness in Quantum Computing

Before we dive into the strange world of quantum randomness, let’s have a look at what classical probability theory can tell us about classical randomness.

Once that has been done, we will look at how nature’s version of randomness can be put to good use in quantum computing.

3.1 Classical Randomness

Our classical formulation of probability theory relies on the three axioms proposed by Andrey Kolmogorov in 1933. The story is as follows.

Given a sample space of all possible outcomes of a certain experiment:

\Omega = \{A, B, C, \cdots \}

We define an event space F  on \Omega  , which is no more than a collection of events, each of which is a subset E \subseteq \Omega  .

To complete our probability space, we also need a probability measure which we call P  .

In this case, (\Omega, F, P)  is a probability space if it satisfies the following three axioms.

Axiom 1

The probability of any event is a non-negative number.

This translates to:

P(E) \in \mathbb{R}, P(E) \geq 0 \quad  \forall E \in F

Fairly simple and intuitive: the probability of observing an event or a subset of events is greater than or equal to zero.

Axiom 2

The probability that at least one event in F will occur is 1.

This can be denoted as:

P(\Omega) = 1

Axiom 3

The probability of a union of mutually exclusive events is the sum of the probabilities of the individual events.

Which we can write as:

P(\displaystyle\bigcup_{i=0}^{\infty}E_i) = \displaystyle\sum_{i=0}^{\infty} P(E_i)

What this means is that the probability of obtaining 1 or 6 on a fair die is the probability of obtaining 1 plus the probability of obtaining 6, i.e. 1/6 + 1/6 = 1/3.

This is all very commonplace, in fact so commonplace that we probably cannot imagine what negative or subtractive probability might look like.

Actually, no such forms of randomness have been observed on the macro scale. Things change significantly on the micro scale, though, as we shall see in a bit.
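The three axioms are easy to check concretely. Here is a minimal Python sketch (the die model and the `prob` helper are illustrative constructions, not from the text) that verifies each axiom for a fair six-sided die:

```python
from fractions import Fraction

# Sample space of a fair die; each outcome gets probability 1/6.
omega = {1, 2, 3, 4, 5, 6}
weights = {outcome: Fraction(1, 6) for outcome in omega}

def prob(event):
    """Probability measure P: sum the weights of the outcomes in the event."""
    return sum(weights[o] for o in event)

# Axiom 1: every event has a non-negative probability.
assert all(prob({o}) >= 0 for o in omega)

# Axiom 2: the probability of the whole sample space is 1.
assert prob(omega) == 1

# Axiom 3: probabilities of mutually exclusive events add up.
assert prob({1, 6}) == prob({1}) + prob({6}) == Fraction(1, 3)
```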

3.2 Quantum Superposition

We discussed quantum superposition in Part 1 of this series in quite some detail, so we will not attempt to go over that again here.

What we will do however is try and see A) what the mathematical formulation of quantum superposition actually looks like and B) why superposition alone is not enough to achieve any speedups in our quantum computations.

Let’s start with the mathematical formulation.

In the canonical basis, the qubit states can be written as follows:

|0\rangle = \displaystyle{1\choose 0}, \quad |1\rangle = \displaystyle{0\choose 1}

The joint state of two qubits, both in the zeroth state, can be written as the tensor product:

|00\rangle =  \begin{pmatrix} 1 \\ 0 \end{pmatrix}   \otimes   \begin{pmatrix} 1 \\ 0 \end{pmatrix}  =   \begin{pmatrix} 1 \otimes \begin{pmatrix} 1 \\ 0 \end{pmatrix} \\ 0 \otimes \begin{pmatrix} 1 \\ 0 \end{pmatrix} \end{pmatrix}  =  \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}

Take a second example, this time the state |10\rangle  , which we can compute as:

|10\rangle =  \begin{pmatrix} 0 \\ 1 \end{pmatrix}   \otimes   \begin{pmatrix} 1 \\ 0 \end{pmatrix}  =   \begin{pmatrix} 0 \otimes \begin{pmatrix} 1 \\ 0 \end{pmatrix} \\ 1 \otimes \begin{pmatrix} 1 \\ 0 \end{pmatrix} \end{pmatrix}  =  \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}

Hopefully the above two examples have successfully given you a feel for how:

  1. Multi-qubit states can be formed using the tensor product \otimes
  2. The dimension of the state vector grows exponentially (by powers of 2) as we add more qubits.
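The two tensor-product computations above can be reproduced with a few lines of numpy, where `numpy.kron` implements the Kronecker (tensor) product. This is only an illustrative sketch:

```python
import numpy as np

# Single-qubit computational basis states.
ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# Multi-qubit states are built with the tensor (Kronecker) product.
ket00 = np.kron(ket0, ket0)   # -> [1, 0, 0, 0]
ket10 = np.kron(ket1, ket0)   # -> [0, 0, 1, 0]

# The dimension doubles with each extra qubit: 2**n entries for n qubits.
ket100 = np.kron(ket10, ket0)
print(ket00, ket10, ket100.shape)   # three qubits -> 8 entries
```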

Now that we have the mathematical formulation in place, let’s move on to the next topic: how quantum superposition gets us halfway to achieving quantum computational speedup.

In fact, a system of n qubits can be put in a superposition of 2^n  states. We write that as follows:

S_n = \displaystyle\sum_{i=0}^{2^n-1} \alpha_i s_i

Where s_i  is a pure quantum state:

s_i = |\cdots 0010 \cdots \rangle

And the amplitudes \alpha_i  are complex numbers that satisfy the following constraint:

\displaystyle \sum_{i=0}^{2^n-1} |\alpha_i|^2 = 1

After we prepare the system, we can perform quantum measurements on it. Each measurement yields one of the possible results at random.

We do know however that the results are going to comply with two strict rules:

  1. Rule number one says that the result has to be one of the pure quantum states s_i
  2. The second rule decrees that the probability of observing s_i  is equal to the squared modulus of the amplitude \alpha_i  :

Pr(s_i) = |\alpha_i|^2
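Both rules can be simulated classically for a small system. The sketch below, using arbitrary made-up amplitudes for illustration, normalizes a four-dimensional state vector and samples measurement outcomes according to Pr(s_i) = |\alpha_i|^2  :

```python
import numpy as np

rng = np.random.default_rng(42)

n = 2  # two qubits -> 2**n = 4 basis states
# Arbitrary illustrative amplitudes, normalized so the squared moduli sum to 1.
alpha = np.array([1 + 1j, 0.5, 0.5j, -1.0])
alpha = alpha / np.linalg.norm(alpha)

probs = np.abs(alpha) ** 2           # rule 2: Pr(s_i) = |alpha_i|^2
assert np.isclose(probs.sum(), 1.0)  # the normalization constraint holds

# Rule 1: each measurement returns one of the basis states, at random.
samples = rng.choice(2 ** n, size=10_000, p=probs)
for i in range(2 ** n):
    label = format(i, f"0{n}b")
    print(f"|{label}>: predicted {probs[i]:.3f}, observed {(samples == i).mean():.3f}")
```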

What Does Superposition Allow Us to Do?

Exponentially Large Spaces

Quantum superposition allows us to construct systems of qubits whose parameters (the \alpha_i  ) inhabit an exponentially large vector space which scientists refer to as Hilbert space. This, in itself, is a massive improvement over classical systems, where representing such a space, although theoretically possible, is hopelessly impractical.

Quantum Circuits and Computations

Quantum circuits are composed of quantum gates (in some ways similar to logical Boolean gates) that allow us to apply unitary transformations to qubit systems. The computational speedup comes partially from the fact that those circuits can simultaneously run computations on all the states that form the qubit system in superposition.

Is That All?

Running measurements on the input qubits as they are will give us a totally random result, which is not very useful when running quantum computing algorithms. What we actually need is to increase the amplitude, and therefore the probability, of obtaining the correct answer. To achieve this, we use quantum interference.

3.3 Quantum Interference

A typical introduction to quantum interference would be to look at the double slit experiment.

This experiment has been told and retold many, many times, so we will skip the basics here and go straight to the interesting stuff.

But before we do that, let’s have a look at a comparison table between classical and quantum randomness when it comes to the Kolmogorov axioms.

| Axiom | Classical | Quantum |
|-------|-----------|---------|
| 1 | Probabilities are positive real numbers. | Amplitudes are complex numbers; probabilities are the squared moduli of the amplitudes. |
| 2 | The probability that at least one event happens is 1. | No deviation from the classical experience. |
| 3 | Probabilities of exclusive events add up. | Probabilities of exclusive events can experience constructive or destructive interference. |

Classical vs Quantum Randomness

What this table tells us is that quantum systems can experience constructive or destructive interference because the amplitudes are complex numbers.

Let’s see how that works by looking more closely at the double-slit experiment.

[Figure: The double-slit experiment illustrating quantum interference]

To fully appreciate the weirdness of quantum phenomena, particularly interference, we need to keep two things in mind:

  1. Interference can be observed not just with photons but with reasonably large molecules
  2. Interference happens even when we fire particles towards the screen one at a time

To understand where this phenomenon comes from, let’s look at the quantum state of the particle hitting the screen at position x.

Simply put, we can write this state as a superposition of two states, each representing one path the particle could use to travel from the emitter to the detector:

\Psi(x) = \Psi_A(x) + \Psi_B(x)

If this was a classical experiment, we could make the following argument:

Probability that the particle hits the screen at position x = Probability of the particle going through A towards x + Probability of going through B and hitting x as well.

We would have something like this:

Pr(x) = p_A + p_B = |\Psi_A(x)|^2 + |\Psi_B(x)|^2

Experiments have confirmed however that this model does not make correct predictions.

Actually, if we go one step back and look at the wavefunction of the quantum mechanical object, and try to calculate the probability from that function we obtain the following:

Pr(x) = |\Psi(x)|^2 = |\Psi_A(x) + \Psi_B(x)|^2

This happens to lead us to the following result:

Pr(x) = |\Psi_A(x)|^2 + |\Psi_B(x)|^2 + \Psi_A^* \Psi_B + \Psi_A \Psi_B^*

We can also write the wavefunctions in polar form, with the moduli chosen so that p_A = |\Psi_A(x)|^2  and p_B = |\Psi_B(x)|^2  :

\Psi_A(x) = \sqrt{p_A} \cdot e^{i\theta_A}

And:

\Psi_B(x) = \sqrt{p_B} \cdot e^{i\theta_B}

Which, after some elementary algebra, leads us to:

Pr(x) = p_A + p_B + 2\sqrt{p_A\, p_B} \cos(\theta_A-\theta_B)

The term 2\sqrt{p_A\, p_B} \cos(\theta_A-\theta_B)  can be positive, negative, or zero, and the result can be an increase or a decrease in the overall probability.

In the first case, the interference is said to be constructive as it magnifies the probability amplitude of the particle hitting x.

In the second case, the interference is said to be destructive because it reduces that same probability amplitude.

This interference, be it constructive or destructive, depends on the phase difference between the two paths through \cos(\theta_A-\theta_B)  . This is a crucial point that we will revisit over and over during this discussion.
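A quick numerical sanity check of this result: the sketch below compares |\Psi_A + \Psi_B|^2  directly against the formula p_A + p_B + 2*sqrt(p_A p_B)*cos(theta_A - theta_B) for a few phase differences. The equal path weights p_A = p_B = 0.5 are an arbitrary choice for illustration:

```python
import numpy as np

# Path amplitudes in polar form: sqrt(p) * exp(i * theta).
def psi(p, theta):
    return np.sqrt(p) * np.exp(1j * theta)

p_a = p_b = 0.5  # arbitrary equal path weights, for illustration only

for delta in [0.0, np.pi / 2, np.pi]:       # phase difference theta_A - theta_B
    psi_a, psi_b = psi(p_a, delta), psi(p_b, 0.0)
    pr = abs(psi_a + psi_b) ** 2            # quantum rule: add amplitudes first
    formula = p_a + p_b + 2 * np.sqrt(p_a * p_b) * np.cos(delta)
    assert np.isclose(pr, formula)
    print(f"delta = {delta:.2f}: Pr(x) = {pr:.2f} (classical sum: {p_a + p_b:.2f})")
```

At delta = 0 the interference is fully constructive, at delta = pi it is fully destructive, and at delta = pi/2 the cross term vanishes and the classical sum is recovered.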

Quantum Circuits Revisited

Now that we understand interference, how do we design a quantum circuit that amplifies the amplitude of obtaining the correct answer so that the chance of observing it is a bit more than 50%? That’s what quantum algorithms are all about.

Is 51% Good Enough?

Yes. In fact, if the probability of obtaining the correct result in a quantum computing exercise is slightly above 50%, we can repeat the experiment as many times as we like, take a majority vote, and select the winning answer. Naturally, the higher the probability, the fewer trials we need to perform and the sooner we can get to an answer.
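To see why a slight edge suffices, here is a rough simulation of the repeat-and-vote strategy. The 0.51 success rate and the trial counts are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(7)
p_correct = 0.51  # per-run chance of observing the correct answer

def majority_vote(trials):
    """Repeat the experiment `trials` times and return True when the
    correct answer wins the majority vote."""
    correct_runs = rng.binomial(trials, p_correct)
    return correct_runs > trials / 2

# The more repetitions per vote, the more reliably the majority is correct.
for trials in [11, 1_001, 100_001]:
    wins = sum(majority_vote(trials) for _ in range(200))
    print(f"{trials:>7} runs per vote -> majority correct {wins / 200:.0%} of the time")
```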


4. Quantum Algorithms Explained

It is true that quantum computing will not be available on your PC anytime soon. We are however lucky enough to live in a day and age where anybody who has access to the internet can run a quantum algorithm on a REAL quantum computer.

In fact, all you need to do is visit the IBM Quantum Composer website, go through a few basic tutorials, and you are good to go provided, of course, that you understand the basics of quantum computation.

In the following subsections, we will look at different elementary quantum circuits to get a feel of how quantum algorithms can be constructed and run.

4.1 Quantum Circuits and Algorithms

In Part 1 of this series, we introduced the idea of quantum gates by comparing them to their classical counterparts.

In this article, we will look at one example that will help us understand how quantum gates can be assembled in such a way as to form a problem-solving quantum computing machine.

We will show how the resulting quantum circuit leverages randomness to achieve quantum computational speedups.

At its most basic form, a quantum circuit would look like this:

[Figure: A basic quantum circuit]

The ultimate purpose of such an algorithm is to query a certain function f(x)  for the correct result, say locating its zero.

The function is usually a black box referred to as an Oracle. This problem has the nice property of having a solution that is hard to find but easy to verify.

To understand how a quantum algorithm can solve this problem (find the solution) easily, let’s look at each step and see if we can understand what’s happening.

First, we start with one qubit in state |0\rangle  . Easy. This qubit will store our result.

Next, we place the qubit in a superposition state of |0\rangle  and |1\rangle  . This will allow us to compute the result of the function we are querying for both inputs at the same time.

The third and most crucial step is the phase query. This is where quantum interference works its magic to magnify the amplitude of the correct result. We will talk about that in detail in the following sections.

In the fourth step, we reapply a Hadamard gate to get rid of the quantum superposition. At this stage, we should have a higher chance of observing the correct result since its amplitude was magnified in the prior step.

Finally, we measure the result.

Now that we have (hopefully!) got the big picture, let’s go over some of the interesting details.

4.2 Superposition and Hadamard Gates

The Hadamard gate is a unitary transformation that applies the following operation to single qubits.

H = \dfrac{1}{\sqrt{2}}   \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}

Its purpose is to take a pure quantum state and put it in a superposition. When applied to the |0\rangle  and |1\rangle  states, we obtain the following:

|+\rangle = H|0\rangle = \dfrac{1}{\sqrt{2}}   \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}  \begin{pmatrix} 1 \\ 0 \end{pmatrix}  = \dfrac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}

And:

|-\rangle = H|1\rangle = \dfrac{1}{\sqrt{2}}   \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}  \begin{pmatrix} 0 \\ 1 \end{pmatrix}  = \dfrac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}

To appreciate the strange nature of randomness in quantum mechanics, let’s compare a classical and a quantum experiment.

In the classical world, if you flip a coin once, you have a 50/50 chance of getting heads or tails. If you flip it twice, the odds remain the same, 50% each.

In the quantum world, however, things are quite different. In fact, if you start with a qubit in state |0\rangle  , apply a Hadamard gate, and perform a measurement, you get a 50% chance of measuring |0\rangle  and a 50% chance of |1\rangle  .

Let’s try that on IBM Quantum Composer:

[Figure: Experiment 1, a single Hadamard gate]

The result is somewhere around 50% each:

[Figure: Experiment results for a single Hadamard gate]

Now let’s change the experiment slightly by adding another Hadamard gate.

[Figure: Experiment 2, two Hadamard gates in sequence]

Reasoning in the classical manner, we expect to see a uniform distribution of results between 0 and 1, just like in the first case. But that’s not what happens; have a look:

[Figure: Experiment results for two Hadamard gates]

The application of two gates sequentially has eliminated the randomness to provide a definite outcome!

Actually, we can verify this by noting that applying the Hadamard gate twice amounts to multiplying by the identity matrix:

H \cdot H = I

That’s not very surprising though as these properties (unitarity and reversibility) are both requirements for building quantum gates.
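Both experiments, and the H \cdot H = I  identity, can be checked with a few lines of numpy. This is a sketch of the underlying linear algebra, not a real quantum run:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1, 0])

# One Hadamard: equal amplitudes, so measurements come out 50/50.
one_h = H @ ket0
print(np.abs(one_h) ** 2)    # [0.5, 0.5]

# Two Hadamards: the amplitudes interfere and the randomness disappears.
two_h = H @ H @ ket0
print(np.abs(two_h) ** 2)    # [1.0, 0.0], the outcome is always |0>

# The reason: applying H twice gives the identity matrix.
assert np.allclose(H @ H, np.eye(2))
```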

4.3 Interference and Phase Query Gates

Phase query gates can be defined by the following function:

Q_f = (-1)^{f(x)}

When applied to an input state |x\rangle  , we get the following:

Q_f|x\rangle = (-1)^{f(x)}|x\rangle

Given that f(x) \in \{0,1\}  , the result is always either |x\rangle  or -|x\rangle  , with the difference being only in the phase.

Why Is the Difference Only in the Phase?

Remember that \exp(i\pi) = -1  . This means that the probabilities of observing |x\rangle  or -|x\rangle  are the same, since |-1| = |\exp(i\pi)| = 1  .

Now let’s see how that works to amplify one result over another. In fact, for this trick to work, the input to the Oracle must be in a superposition of the two pure states, but that’s exactly what we would have done in a prior step by passing the qubit through the Hadamard gate!

The result should now look like this:

\begin{aligned}  Q_fH|0\rangle &= Q_f \left(\dfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle)\right) \\  &= \dfrac{1}{\sqrt{2}}((-1)^{f(0)}|0\rangle + (-1)^{f(1)}|1\rangle)  \end{aligned}

Looking back at the Deutsch-Jozsa problem, we see that when f(0) = f(1)  , the computation yields:

Q_fH|0\rangle = \dfrac{(-1)^{f(0)}}{\sqrt{2}}(|0\rangle + |1\rangle)

Which, after applying another Hadamard gate, will yield |0\rangle  (up to a global phase).

If, however, f(0) \neq f(1)  , the computation then yields:

Q_fH|0\rangle = \dfrac{(-1)^{f(0)}}{\sqrt{2}}(|0\rangle - |1\rangle)

Which, once more after applying another Hadamard gate, will yield |1\rangle  (up to a global phase).
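The whole circuit for this single-qubit problem fits in a short numpy simulation. The sketch below represents the phase oracle as a diagonal matrix, which is one straightforward way to encode Q_f  ; the function name `deutsch` is my own:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1, 0])

def deutsch(f):
    """Single-qubit Deutsch circuit: H, phase oracle Q_f, H, then measure.
    Returns 0 when f is constant and 1 when f is balanced."""
    state = H @ ket0                                   # superposition of |0> and |1>
    oracle = np.diag([(-1) ** f(0), (-1) ** f(1)])     # Q_f |x> = (-1)^f(x) |x>
    state = H @ (oracle @ state)
    return int(np.argmax(np.abs(state) ** 2))          # most probable outcome

assert deutsch(lambda x: 0) == 0      # constant: f(0) = f(1)
assert deutsch(lambda x: 1) == 0      # constant
assert deutsch(lambda x: x) == 1      # balanced: f(0) != f(1)
assert deutsch(lambda x: 1 - x) == 1  # balanced
```

A single query to the oracle settles a question that classically requires evaluating f twice.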


5. The Current State of Quantum Computing

It is really interesting to see how quantum computing is so radically different from its classical counterpart.

In fact, it would not be an exaggeration to place the current state of quantum computing in a similar position to classical computing in the 1950s.

Scientists are still trying to figure out:

  1. How to make these “computers” bigger and more stable, and
  2. What sort of algorithms would benefit from performance improvements on quantum computers.

It is really fascinating to be living in such a time to witness the birth of new, cutting-edge technology.

