1.1 Probabilistic bits
Probabilities are used to quantify the likelihood of events – the more likely an event is, the larger its probability. An event that occurs with certainty has probability 1 (indeed, it has a 100% chance of happening), while an event that never occurs has probability 0.
As an example, we can think of tossing a coin. If you toss a coin and cover it with your hands without looking at it, there are two possible events – either the coin shows “heads” or it shows “tails”. For a fair coin, the two events are equally likely to occur, so we assign a probability of 1/2 or 50% to both (this is represented by the middle coin in Fig. 1.1). However, the coin could be biased and more likely to land on one side than the other. Depending on how biased it is, we can imagine a whole spectrum of possibilities: one extremely biased coin might always show “heads” while another might always show “tails” (see the coins on the left and right side of Fig. 1.1). The first coin shows “tails” with probability 0 (since it always shows “heads”) while the second shows “tails” with probability 1.
Since we don’t want to worry about designing coins of different shapes and sizes, it is helpful to somehow abstract the information represented by a coin toss. We can do this by associating the outcomes “heads” and “tails” with the bit values 0 and 1, respectively. The coin toss can then be described by two probabilities: the probability $p_0$ that the bit is equal to 0 (“heads”), and the probability $p_1$ that it is equal to 1 (“tails”). Such a bit that attains its two possible values with some probabilities is called a probabilistic bit. Note that the two probabilities must necessarily add to one: $p_0 + p_1 = 1$. For example, if the coin is fair then $p_0 = p_1 = 1/2$, as explained above.
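To make this concrete in code, here is a minimal sketch of a probabilistic bit in Python; the class name ProbabilisticBit and the method sample are illustrative choices, not notation from the text. The class stores $p_1$, derives $p_0 = 1 - p_1$, and each toss returns 1 (“tails”) with probability $p_1$ and 0 (“heads”) otherwise.

import random

class ProbabilisticBit:
    """A bit that equals 1 with probability p1 and 0 with probability p0 = 1 - p1."""

    def __init__(self, p1):
        assert 0.0 <= p1 <= 1.0, "a probability must lie between 0 and 1"
        self.p1 = p1
        self.p0 = 1.0 - p1  # the two probabilities add to one

    def sample(self):
        # Toss the coin once: 1 ("tails") with probability p1, else 0 ("heads").
        return 1 if random.random() < self.p1 else 0

# A fair coin has p0 = p1 = 1/2; ten tosses give a random string of 0s and 1s.
fair_coin = ProbabilisticBit(p1=0.5)
print([fair_coin.sample() for _ in range(10)])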
Did you notice that it would be enough to specify just one of the probabilities $p_0$ or $p_1$, since either of them can be obtained from the other? For clarity we will always specify both and write them down as a vector:

$$\begin{pmatrix} p_0 \\ p_1 \end{pmatrix}. \tag{1.1}$$
We call this vector the probability distribution or the state of the probabilistic bit. This vector notation is convenient not only for collecting all probabilities together in a nice table, but it will also provide a geometrical way to visualize a probabilistic bit. Moreover, it will help us to see the analogy between probabilistic and quantum bits.
We will call the states associated to 0 or “heads” and to 1 or “tails” deterministic, since in this case there is no uncertainty – it is fully determined which side of the coin is up. In Eq. 1.1, they correspond to probability distributions with $p_0 = 1$, $p_1 = 0$ and $p_0 = 0$, $p_1 = 1$, respectively. Since they will be used very often, it is convenient to introduce a shorthand notation for them:

$$[0] = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad [1] = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$
This notation might look confusing at first, but you can also think of $[0]$ and $[1]$ as the numbers 0 and 1, or as “heads” and “tails”, if you wish.
These two states form a basis of all states, meaning that we can express any other state of a probabilistic bit as a linear combination of them:

$$\begin{pmatrix} p_0 \\ p_1 \end{pmatrix} = p_0 \begin{pmatrix} 1 \\ 0 \end{pmatrix} + p_1 \begin{pmatrix} 0 \\ 1 \end{pmatrix} = p_0 \, [0] + p_1 \, [1].$$
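As a quick sanity check (an illustrative sketch, not part of the text), the decomposition above can be verified numerically with NumPy: any probability vector equals $p_0$ times the first basis vector plus $p_1$ times the second.

import numpy as np

e0 = np.array([1.0, 0.0])  # the deterministic state [0] ("heads")
e1 = np.array([0.0, 1.0])  # the deterministic state [1] ("tails")

p0, p1 = 0.3, 0.7          # an arbitrary biased coin
state = np.array([p0, p1])

# The state is the linear combination p0*[0] + p1*[1] of the deterministic states.
assert np.allclose(state, p0 * e0 + p1 * e1)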
Every state is a two-dimensional vector, which we can visualize in a two-dimensional coordinate system. The possible states of a probabilistic bit then form precisely the line segment connecting the two points $[0] = (1, 0)$ and $[1] = (0, 1)$, which correspond to the two deterministic states of the bit (see Fig. 1.2).
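A picture along the lines of Fig. 1.2 can be reproduced with a few lines of Matplotlib; this is only a sketch, under the assumption that plotting the points $(p_0, 1 - p_0)$ for $p_0$ between 0 and 1 is an acceptable stand-in for the figure.

import numpy as np
import matplotlib.pyplot as plt

p0 = np.linspace(0.0, 1.0, 101)
plt.plot(p0, 1.0 - p0)                 # all valid states: the segment p0 + p1 = 1
plt.scatter([1, 0, 0.5], [0, 1, 0.5])  # the deterministic states and the fair coin
plt.xlabel("p0")
plt.ylabel("p1")
plt.show()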
Exercise 1.1 (Understanding the blue line segment).
According to Fig. 1.2, the possible states of a probabilistic bit form a line segment. Take some time to investigate this. See if you can answer the following questions:

1. Why do the states of a probabilistic bit all lie on a line?

2. Why does this line end at the coordinate axes and not go further?

3. Which point on the line segment corresponds to a fair coin?
Solution.
1. Since one of the two events has to occur, the two probabilities necessarily add to one. This means that $p_0 + p_1 = 1$. If you write this as $p_1 = 1 - p_0$, you recognize this as the equation of a line with slope minus one.

2. If the line went any further, one of the probabilities would become negative. Since probabilities cannot be negative, we must require that $p_0 \geq 0$ and $p_1 \geq 0$, which is equivalent to saying that the line segment must end at the coordinate axes.

3. This is the midpoint, where $p_0 = p_1 = 1/2$.