1.3 Measuring a probabilistic bit
If you toss a fair coin and immediately cover it, you have no idea which side it landed on. In this situation your knowledge about the state of the coin is described by the uniform distribution
$$\begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix}. \tag{1.29}$$
However, if you uncover the coin and see “heads”, your knowledge is updated to
$$\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \tag{1.30}$$
because now you know with certainty that heads is up. We will refer to the process of uncovering a probabilistic coin to determine which side is up as measurement.³

³ We have borrowed this term from quantum computing, where we will see that a similar procedure exists.
Notice from Eqs. 1.29 and 1.30 that the state of the coin before and after the measurement is different. Indeed, after the measurement you are no longer ignorant about which side is up. Now, imagine you cover the coin again right after you have measured it. What is its state now? Well, it is clearly still
$$\begin{pmatrix} 1 \\ 0 \end{pmatrix},$$
because you already know that “heads” is up. In fact, even if you measure (look at) the coin again, you will still see “heads”. Similarly, if you got “tails” the first time you measured a random coin, you will keep getting “tails” no matter how many times you measure it again.
More generally, if you have a probabilistic bit described by the distribution $p = \begin{pmatrix} p_0 \\ p_1 \end{pmatrix}$, measuring it yields the outcome $0$ (or “heads”) with probability $p_0$ and $1$ (or “tails”) with probability $p_1$:
$$\begin{pmatrix} p_0 \\ p_1 \end{pmatrix} \xrightarrow{\text{measure}} \begin{cases} 0 & \text{with probability } p_0, \\ 1 & \text{with probability } p_1. \end{cases}$$
The state of the probabilistic bit after the measurement is no longer $p$ but one of the basis states $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ or $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$, depending on the measurement outcome. For example, if you got outcome $1$ (or “tails”), the new state is $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$. In general, a measurement changes the state!
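To make this concrete, here is a minimal simulation sketch in Python. The representation of a probabilistic bit as a pair `(p0, p1)` and the function name `measure` are our own illustrative choices, not notation from the text:

```python
import random

def measure(p):
    """Measure a probabilistic bit p = (p0, p1): return the random outcome
    together with the post-measurement state, which is a basis state."""
    outcome = 0 if random.random() < p[0] else 1
    post_state = (1.0, 0.0) if outcome == 0 else (0.0, 1.0)
    return outcome, post_state

# Measuring a fair coin gives a random outcome and collapses the state...
outcome, state = measure((0.5, 0.5))
# ...and measuring the collapsed state again always repeats that outcome.
assert measure(state)[0] == outcome
```

Note that the second measurement is deterministic: once the state is a basis state, repeated measurements keep producing the same bit, exactly as with the covered coin.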
An important point about measurements is that they do not let you extract the actual probabilities $p_0$ and $p_1$ – all you get as a measurement outcome is a single bit, $0$ or $1$. Also, your original probabilistic bit is gone after the measurement, so you cannot measure it again. This is in fact quite intuitive: if we toss a coin exactly once, we get a single random outcome – but from this single outcome alone we cannot learn whether the coin was fair or biased!
However, suppose we toss the same coin a large number of times. In this case we would expect that the fraction of times that we obtain outcome $1$ is roughly $p_1$. In other words,
$$p_1 \approx \frac{N_1}{N}, \tag{1.33}$$
where $N$ is the total number of measurements and $N_1$ is the number of times that we obtained outcome $1$. The more outcomes we have gathered, the better the approximation should get.⁴

⁴ How good is this estimate? One can show that, on average, the error is of order $1/\sqrt{N}$, so it quickly goes to zero if we repeat the experiment many times.
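The convergence of the frequency estimate can be simulated in a few lines of Python. The bias value $0.3$ and the function name `estimate_p1` below are arbitrary illustrative choices:

```python
import random

def estimate_p1(p1, num_tosses):
    """Toss a coin that lands on 1 with probability p1, num_tosses times,
    and return the fraction N1/N of ones (the estimate from Eq. 1.33)."""
    n1 = sum(random.random() < p1 for _ in range(num_tosses))
    return n1 / num_tosses

# The estimate approaches the true bias as N grows (error ~ 1/sqrt(N)).
random.seed(1)
for n in (100, 10_000, 1_000_000):
    print(n, estimate_p1(0.3, n))
```

Running this, the printed estimates cluster ever more tightly around the true bias as the number of tosses grows.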
This provides us with a procedure for estimating $p_1$. Of course, since $p_0 = 1 - p_1$, this also gives us an estimate of $p_0$. For example, Alice could use this procedure to estimate the probabilities that her biased coin in 1.3 lands on $0$ and $1$. You can now try this out for yourself.
Homework 1.4 (Coin tossing).
1.
Find a coin and draw $0$ and $1$ on its two sides with a marker. Toss it $30$ times and write down the outcomes in a table of the following form:
(The gray outcomes are just an example. Replace them with the outcomes that you got.)
2.
Estimate the probability $p_1$ of getting outcome $1$ for your coin by using Eq. 1.33.
3.
It is interesting to see how the estimate changes as you increase the number of tosses. For this, extend your table from part 1 by three more rows so that it looks as follows:
The meaning of the rows is as follows: (1) the number $i$ of tosses so far, (2) the outcome of the $i$-th toss, (3) the sum of the first $i$ outcomes, (4) the estimate of the probability of getting outcome $1$ based on the first $i$ tosses, (5) the numerical value of this estimate. Feel free to use Excel or a similar program to make this table.
4.
Plot the last row of your table as a function of the number of tosses $i$.
Hack.
1.
We got the following sequence of outcomes:
1 0 1 0 0 0 1 1 1 0 0 1 1 1 0 0 1 0 1 1 0 1 1 0 0 0 0 0 0 1
2.
The estimated probability of outcome $1$ for our coin is $14/30 \approx 0.47$.
3.
The table in our case looks as follows (tosses 1–15 and 16–30):

Toss $i$:   1    2    3    4    5    6    7    8    9    10   11   12   13   14   15
Outcome:    1    0    1    0    0    0    1    1    1    0    0    1    1    1    0
Sum:        1    1    2    2    2    2    3    4    5    5    5    6    7    8    8
Estimate:   1.00 0.50 0.67 0.50 0.40 0.33 0.43 0.50 0.56 0.50 0.45 0.50 0.54 0.57 0.53

Toss $i$:   16   17   18   19   20   21   22   23   24   25   26   27   28   29   30
Outcome:    0    1    0    1    1    0    1    1    0    0    0    0    0    0    1
Sum:        8    9    9    10   11   11   12   13   13   13   13   13   13   13   14
Estimate:   0.50 0.53 0.50 0.53 0.55 0.52 0.55 0.57 0.54 0.52 0.50 0.48 0.46 0.45 0.47
4.
Here is a plot of our intermediate probability estimates:
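For reference, the running sums and running estimates in our table can be reproduced with a short Python script (the outcome list below is the sequence from part 1 of this solution):

```python
# Outcome sequence from part 1 of the solution above.
outcomes = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0,
            0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1]

# Row 3 of the table: running sum of outcomes after each toss.
# Row 4 of the table: running estimate (sum of first i outcomes) / i.
running_sum = 0
estimates = []
for i, a in enumerate(outcomes, start=1):
    running_sum += a
    estimates.append(running_sum / i)

print(running_sum)              # total number of ones: 14
print(round(estimates[-1], 2))  # final estimate: 0.47
```

The list `estimates` is exactly the last row of the table, and plotting it against the toss number gives the figure described above.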