1.3 Measuring a probabilistic bit

If you toss a fair coin and immediately cover it, you have no idea which side landed up. In this situation your knowledge about the state of the coin is described by the uniform distribution

\begin{pmatrix}1/2\\ 1/2\end{pmatrix}=\frac{1}{2}\,[0]+\frac{1}{2}\,[1].
(1.29)

However, if you uncover the coin and see “heads”, your knowledge is updated to

[0]
(1.30)

because now you know with certainty that heads is up. We will refer to the process of uncovering a probabilistic coin to determine which side is up as measurement.³

³ We have borrowed this term from quantum computing, where we will see that a similar procedure exists.

Notice from Eqs. 1.29 and 1.30 that the state of the coin before and after the measurement is different. Indeed, after the measurement you are no longer ignorant about which side is up. Now, imagine you cover the coin again after you have just measured it. What is its state now? Well, it clearly is still

[0]
(1.31)

because you already know that “heads” is up. In fact, even if you measure (look at) the coin again, you will still see “heads”. Similarly, if you got “tails” the first time you measured a random coin, you will keep getting “tails” no matter how many times you measure it again.

More generally, if you have a probabilistic bit described by the distribution $\bigl(\begin{smallmatrix}p_{0}\\ p_{1}\end{smallmatrix}\bigr)$, measuring it yields the outcome $0$ (or “heads”) with probability $p_{0}$ and $1$ (or “tails”) with probability $p_{1}$:

[Diagram of measuring a probabilistic bit: outcome $0$ with probability $p_{0}$, outcome $1$ with probability $p_{1}$.]
(1.32)

The state of the probabilistic bit after the measurement is no longer $\bigl(\begin{smallmatrix}p_{0}\\ p_{1}\end{smallmatrix}\bigr)$ but one of the basis states $[0]=\bigl(\begin{smallmatrix}1\\ 0\end{smallmatrix}\bigr)$ or $[1]=\bigl(\begin{smallmatrix}0\\ 1\end{smallmatrix}\bigr)$, depending on the measurement outcome. For example, if you got outcome $1$ (or “tails”), the new state is $[1]$. In general, a measurement changes the state!
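The measurement rule above can be sketched in a few lines of code. This is a minimal illustration, not from the text; the function `measure` and its list-based representation of states are our own choices:

```python
import random

def measure(state):
    """Measure a probabilistic bit given as a distribution [p0, p1].

    Returns the random outcome (0 or 1) together with the
    post-measurement state, which is one of the basis states
    [0] = [1, 0] or [1] = [0, 1].
    """
    p0, p1 = state
    outcome = 0 if random.random() < p0 else 1
    post_state = [1, 0] if outcome == 0 else [0, 1]
    return outcome, post_state

# Measuring the fair-coin state collapses it to a basis state ...
outcome, state = measure([0.5, 0.5])
# ... so measuring again necessarily repeats the same outcome.
outcome_again, state = measure(state)
assert outcome_again == outcome
```

Note how the second measurement is deterministic: once the state is a basis state, `random.random() < p0` compares against $0$ or $1$, so the same outcome is returned every time, just like repeatedly uncovering the same coin.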

An important point about measurements is that they do not let you extract the actual probabilities $p_{0}$ and $p_{1}$ – all you get as a measurement outcome is a single bit, $0$ or $1$. Moreover, your original probabilistic bit $\bigl(\begin{smallmatrix}p_{0}\\ p_{1}\end{smallmatrix}\bigr)$ is gone after the measurement, so you cannot measure it again. This is in fact quite intuitive: if we toss a coin exactly once, we get a single random outcome – but from this single outcome alone we cannot learn whether the coin was fair or biased!

However, suppose we toss the same coin a large number of times. In this case we would expect that the fraction of times we obtain outcome $1$ is roughly $p_{1}$. In other words,

\frac{N_{1}}{N}\approx p_{1}, (1.33)

where $N$ is the total number of measurements and $N_{1}$ is the number of times that we obtained outcome $1$. The more outcomes we have gathered, the better the approximation should get.⁴

⁴ How good is this estimate? One can show that, on average, the error is of order $1/\sqrt{N}$, so it quickly goes to zero if we repeat the experiment many times.
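This convergence is easy to check numerically. The sketch below estimates $p_{1}$ for a biased coin using Eq. 1.33 at increasing $N$; the function name and the bias $0.6$ are our own choices for illustration:

```python
import random

def estimate_p1(p1, num_tosses):
    """Estimate p1 as in Eq. 1.33: toss num_tosses times, return N1/N."""
    n1 = sum(1 for _ in range(num_tosses) if random.random() < p1)
    return n1 / num_tosses

random.seed(1)
# The typical error shrinks roughly like 1/sqrt(N), so each hundredfold
# increase in N gains about one decimal digit of accuracy.
for n in (100, 10_000, 1_000_000):
    print(n, estimate_p1(0.6, n))
```

For a single run the estimates themselves are random, but with $N = 10^6$ tosses the estimate should typically land within about $0.001$ of the true value $0.6$.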

This provides us with a procedure for estimating $p_{1}$. Of course, since $p_{0}+p_{1}=1$, it also gives us an estimate of $p_{0}$. For example, Alice could use this procedure to estimate the probabilities that her biased coin in 1.3 lands on “heads” and on “tails”. You can now try this out for yourself.

Homework 1.4 (Coin tossing).
  1.

    Find a coin and draw 0 and 1 on its two sides with a marker. Toss it 30 times and write down the outcomes in a table of the following form:

    Number of tosses $N$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | … | 30
    The $N$-th outcome   | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | … | 1

    (The gray outcomes are just an example. Replace them by the outcomes that you got.)

  2.

    Estimate the probability of getting outcome $1$ for your coin by using Eq. 1.33.

  3.

    It is interesting to see how the estimate changes as you increase the number of tosses $N$. For this, extend your table from part 1 by three more rows so that it looks as follows:

    Number of tosses $N$   | 1    | 2    | 3    | 4    | 5    | 6    | 7    | 8    | … | 30
    The $N$-th outcome     | 1    | 0    | 1    | 0    | 0    | 0    | 1    | 1    | … | 1
    Cumulative sum $N_{1}$ | 1    | 1    | 2    | 2    | 2    | 2    | 3    | 4    | … | 16
    Ratio $N_{1}/N$        | 1    | 1/2  | 2/3  | 2/4  | 2/5  | 2/6  | 3/7  | 4/8  | … | 16/30
    Its numerical value    | 1.00 | 0.50 | 0.67 | 0.50 | 0.40 | 0.33 | 0.43 | 0.50 | … | 0.53

    The meaning of the rows is as follows: (1) the number $N$ of tosses so far, (2) the outcome of the $N$-th toss, (3) the sum of the first $N$ outcomes, (4) the estimate of the probability of getting outcome $1$ based on the first $N$ tosses, (5) the numerical value of this estimate. Feel free to use Excel or a similar program to make this table.

  4.

    Plot the last row of your table as a function of the number of tosses $N$.

Hack.
  1.

    We got the following sequence of outcomes:

    1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1.
  2.

    The estimated probability of outcome $1$ for our coin is $14/30 = 7/15 \approx 0.47$.

  3.

    The table in our case looks as follows (split into two halves):

    Number of tosses $N$   | 1    | 2    | 3    | 4    | 5    | 6    | 7    | 8    | 9    | 10   | 11   | 12   | 13   | 14   | 15
    The $N$-th outcome     | 1    | 0    | 1    | 0    | 0    | 0    | 1    | 1    | 1    | 0    | 0    | 1    | 1    | 1    | 0
    Cumulative sum $N_{1}$ | 1    | 1    | 2    | 2    | 2    | 2    | 3    | 4    | 5    | 5    | 5    | 6    | 7    | 8    | 8
    Ratio $N_{1}/N$        | 1    | 1/2  | 2/3  | 1/2  | 2/5  | 1/3  | 3/7  | 1/2  | 5/9  | 1/2  | 5/11 | 1/2  | 7/13 | 4/7  | 8/15
    Its numerical value    | 1.00 | 0.50 | 0.67 | 0.50 | 0.40 | 0.33 | 0.43 | 0.50 | 0.56 | 0.50 | 0.45 | 0.50 | 0.54 | 0.57 | 0.53

    Number of tosses $N$   | 16   | 17   | 18   | 19    | 20    | 21    | 22   | 23    | 24    | 25    | 26   | 27    | 28    | 29    | 30
    The $N$-th outcome     | 0    | 1    | 0    | 1     | 1     | 0     | 1    | 1     | 0     | 0     | 0    | 0     | 0     | 0     | 1
    Cumulative sum $N_{1}$ | 8    | 9    | 9    | 10    | 11    | 11    | 12   | 13    | 13    | 13    | 13   | 13    | 13    | 13    | 14
    Ratio $N_{1}/N$        | 1/2  | 9/17 | 1/2  | 10/19 | 11/20 | 11/21 | 6/11 | 13/23 | 13/24 | 13/25 | 1/2  | 13/27 | 13/28 | 13/29 | 7/15
    Its numerical value    | 0.50 | 0.53 | 0.50 | 0.53  | 0.55  | 0.52  | 0.55 | 0.57  | 0.54  | 0.52  | 0.50 | 0.48  | 0.46  | 0.45  | 0.47

  4.

    Here is a plot of our intermediate probability estimates:

    [Plot of the estimates $N_{1}/N$ as a function of the number of tosses $N$.]
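The cumulative sums and estimates above can be reproduced with a short script. This is a sketch in Python; the variable names are ours, and `fractions.Fraction` is used so the estimates come out as exact ratios like in the table:

```python
from fractions import Fraction

# The 30 outcomes from part 1 of the solution above.
outcomes = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0,
            0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1]

estimates = []
n1 = 0  # cumulative sum N_1
for n, outcome in enumerate(outcomes, start=1):
    n1 += outcome
    estimates.append(Fraction(n1, n))  # the estimate N_1 / N

print(estimates[-1])         # 7/15
print(float(estimates[-1]))  # about 0.467
```

Plotting `estimates` against $N$ (for example with a spreadsheet or a plotting library) reproduces the figure: the ratios fluctuate at small $N$ and settle near $7/15 \approx 0.47$.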