Quest 5: Algorithm virtuoso

One of the main motivations for studying quantum computing is the fact that quantum computers can solve some problems much faster than we know how to solve them using the computers we currently have. To distinguish quantum computers from ordinary computers, we will use the general term classical computer to refer to any computational device that we currently have. This includes your laptop or desktop computer, but also very small computers such as the ones in your smartphone or smartwatch, as well as very big and powerful supercomputers that may occupy an entire room. What distinguishes all these computers from quantum computers is not how big, small, slow, or fast they are, but the fact that their inner workings can be described by classical physics (more specifically, electromagnetism). In other words, their hardware operates in ways that can be described by older physical theories that predate quantum mechanics. This is a bit similar to how the music composed by Mozart and Bach is called classical music. Just like classical computers, classical music also does not take full advantage of more advanced musical instruments, such as the electric guitar or the synthesizer.

As a consequence of the kind of hardware that classical computers have, all information they store – be it a picture, a sound file, a video, or a web page – is represented by bitstrings, i.e., long sequences of zeroes and ones. This information is processed by following some rules that describe how these zeroes and ones should be modified to get a useful answer. We call this sequence of instructions an algorithm. You can think of an algorithm as a recipe – it is a sequence of instructions such that, if you follow them carefully, you will get the desired outcome, such as a chocolate brownie! For example, an algorithm might take two bitstrings as input and produce another bitstring that contains the sum of the two original strings (when interpreted as numbers) as output. Just like recipes, algorithms were originally executed by people. In fact, the word ‘computer’ used to refer to a person who computes. Nowadays, however, algorithms are run on actual computers. Since computers generally are not very smart, they need the algorithm to be described to them in an extremely precise way. This description, called a program, is a concrete implementation of the abstract algorithm in such a way that the computer can understand it. For this, we need to use some programming language, which the computer is able to translate by itself into elementary operations on zeroes and ones and then execute on the actual hardware.
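To make this example concrete, here is a small sketch of such an addition algorithm, written in Python (the function name and details are chosen only for this illustration). It adds two bitstrings by the familiar grade-school method: scan both strings from the rightmost bit while keeping track of a carry.

def add_bitstrings(a: str, b: str) -> str:
    # Add two bitstrings, e.g. '101' + '11' -> '1000' (5 + 3 = 8),
    # scanning from the rightmost bit and keeping a carry bit.
    result = []
    carry = 0
    for i in range(1, max(len(a), len(b)) + 1):
        bit_a = int(a[-i]) if i <= len(a) else 0
        bit_b = int(b[-i]) if i <= len(b) else 0
        total = bit_a + bit_b + carry
        result.append(str(total % 2))  # bit written at this position
        carry = total // 2             # carry passed to the next position
    if carry:
        result.append('1')
    return ''.join(reversed(result))

print(add_bitstrings('101', '11'))  # prints 1000, i.e. 5 + 3 = 8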

When you design an algorithm, program it in your computer, and run it, you want to get the answer in some reasonable amount of time. The actual time it takes to get the answer, however, depends on many factors:

  1. how fast your computer is at executing different elementary instructions,

  2. whether the input of your program is read from a hard disk, a network connection, or directly from memory,

  3. which programming language you use to specify your program (and the version of the compiler or interpreter that runs the program),

and so forth. This makes it very hard to compare different algorithms.

To make the comparison of different algorithms easier and fairer, computer scientists do not look at how long it takes to run the algorithm on a specific computer configured in some particular way. Instead, they count the number of elementary operations the algorithm performs. This way they can make sure that they are comparing the algorithms themselves and not the computers the algorithms are running on (indeed, a good algorithm running on a very slow computer might appear worse than a bad algorithm running on a very fast computer). More specifically, what computer scientists want to know is how the number of operations grows with the size of the problem the algorithm must solve. Indeed, the larger the amount of data you need to process, the longer it will take, no matter how good the algorithm is. So what you really want to know is whether your algorithm will still be able to cope when the data gets extremely large. The area of computer science that studies this is called computational complexity.
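As a small illustration of this way of counting, we can instrument the addition sketch from above so that it also counts its elementary add-with-carry steps (again in Python, with all names invented for this example). The count does not depend on which computer runs the code or how fast it is; it only grows with the length n of the inputs – here, roughly linearly.

def add_with_op_count(a: str, b: str):
    # Same grade-school addition as before, but counting one elementary
    # operation per bit position as a toy model of the algorithm's cost.
    ops = 0
    result, carry = [], 0
    for i in range(1, max(len(a), len(b)) + 1):
        bit_a = int(a[-i]) if i <= len(a) else 0
        bit_b = int(b[-i]) if i <= len(b) else 0
        total = bit_a + bit_b + carry
        result.append(str(total % 2))
        carry = total // 2
        ops += 1  # one add-with-carry step per position
    if carry:
        result.append('1')
    return ''.join(reversed(result)), ops

# The operation count grows linearly with the input size n:
for n in [4, 8, 16, 32]:
    _, ops = add_with_op_count('1' * n, '1' * n)
    print(n, ops)  # prints 4 4, 8 8, 16 16, 32 32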

In quantum computing, we are interested in designing quantum algorithms which solve computational problems by manipulating quantum states instead of bitstrings. The elementary instructions that we will use are gates, such as the Hadamard gate, the controlled-NOT gate, or a measurement. We will specify our quantum algorithms pictorially in terms of a quantum circuit, which you already have a lot of experience crafting in Quirky. But we could also use a textual representation, like you would expect from an ordinary computer program. For example, the circuit shown below can be represented by the program text that follows it:

[Circuit diagram corresponding to the program text below]

CNOT q1 -> q2;
H q1;

where q1 and q2 refer to the two qubits (recall that the bottom wire corresponds to the first qubit). Given a computational problem, we can then compare the number of elementary instructions that the best known quantum and classical algorithms use to solve it. In this way, we can get a precise understanding of the advantage offered by future quantum computers.
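For readers who want to check what this small program does, here is a sketch that simulates it with explicit matrices (in Python with NumPy). The basis ordering, and the reading of CNOT q1 -> q2 as a controlled-NOT with control q1 and target q2, are assumptions made only for this illustration; the two gates themselves are the ones from the program above.

import numpy as np

# Convention assumed for this sketch: amplitudes are ordered by the basis
# states |q2 q1> = 00, 01, 10, 11, so q1 is the low (rightmost) bit.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

# CNOT with control q1 and target q2: flips q2 whenever q1 = 1,
# i.e. it swaps the basis states 01 and 11 and leaves 00 and 10 alone.
CNOT_q1_q2 = np.array([[1, 0, 0, 0],
                       [0, 0, 0, 1],
                       [0, 0, 1, 0],
                       [0, 1, 0, 0]])

# Hadamard acting on q1 only (identity on q2).
H_q1 = np.kron(I, H)

# Start in |q2 q1> = |00> and apply the program's two instructions in order.
state = np.array([1, 0, 0, 0], dtype=float)
state = CNOT_q1_q2 @ state   # CNOT q1 -> q2;
state = H_q1 @ state         # H q1;
print(state)                 # amplitudes for 00, 01, 10, 11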