Copyright ©1999 by Paul Niquette. All rights reserved.
Probability predictions are often surprising. In the case of the experiment described in the puzzle, Dr. Theodore P. Hill of the Georgia Institute of Technology wrote that a "quite involved calculation" revealed a surprising probability. It showed, he said, that the overwhelming odds are that...
Most fakers don't know this and avoid guessing long runs of heads or tails, which they mistakenly assume to be improbable. At just a glance, Dr. Hill can see whether or not a student's 200 coin-toss results contain a run of six heads or tails. If they don't, the student is branded a fake.
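Dr. Hill's glance test is easy to check empirically. Here is a quick Monte Carlo sketch in Python (the function name has_run and the trial count are my choices, assuming a fair coin):

```python
import random

def has_run(tosses, k=6):
    """Return True if the sequence contains a run of at least k identical faces."""
    run = 1
    for prev, cur in zip(tosses, tosses[1:]):
        run = run + 1 if cur == prev else 1
        if run >= k:
            return True
    return False

random.seed(1)
trials = 2000
hits = sum(
    has_run([random.choice("HT") for _ in range(200)])
    for _ in range(trials)
)
print(f"Honest 200-toss sequences containing a run of six: {hits / trials:.1%}")
```

The fraction comes out very close to the "overwhelming odds" Dr. Hill describes: the rare honest sequence without such a run is the one that would be wrongly branded a fake.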
To confirm Dr. Hill's probabilities relevant to coin-tossing, you are invited to review Randomly Right: the Mathematics and Randomly Right: the Model. To explore the psychology of cheating, have a look at Randomly Wrong: the Fakery.
Randomly Wrong: The Fakery
The sketch on the right depicts what is called a "state diagram." At any given toss, your simulated coin will be showing either H or T, and each arrow represents the probability for the face that will come up on the next toss. "The coin has no memory," you remind yourself. No matter which side comes up on a given toss, the next toss is equally likely to come up the same or the opposite. That's probability theory.
You might label the arrows p(H|H), p(H|T), p(T|T), p(T|H). For the honest coin, of course, you know that...
p(H|H) = p(H|T) = p(T|T) = p(T|H) = 1/2

While you are typing, a thought strikes you: "Even a dishonest coin has no memory." Thus, for a bent or weighted coin that favors heads, you might expect...
p(H|H) = p(T|H) > p(H|T) = p(T|T)

...for which the symptom would be more H's than T's. You have no intention of simulating a dishonest coin, so you keep track of the totals of H's and T's as you go along and try to keep them equal without being too obvious about it. To fake randomness, you dare not type any kind of repeating pattern. Faking is not as easy as you thought, is it? And there's more...
At some point you look at what you have just typed and see a string of, say, two H's. In the course of your faked random typing you have arrived at state H2. This state diagram shows you the situation. Here again, every arrow represents a probability. You know that for an honest coin...
p(H2|H3) = p(H2|T1) = 1/2

Now, along comes a disembodied voice that whispers a question in your ear, "You have just typed two H's, do you want to show three H's in a row?" You shrug and command your fingers to continue typing at what you intend to be random. Maybe a T appears. If so, fine. Maybe instead, you look at your screen and see another H, which means you have arrived at state H3, such that the following probabilities are supposed to apply...
p(H3|H4) = p(H3|T1) = 1/2

...but the voice asks, "You have just typed three H's, do you really want to show four H's in a row?" You try to disregard the question, probably with success. Nevertheless, at some state, Hn, the voice will be so insistent that you find yourself tempted to favor typing a T, thereby struggling to simulate random -- but equal -- likelihoods for H's and T's. According to the puzzle solution above, that temptation apparently becomes so strong that n < 6. State H6 or T6 simply does not arise in faked coin tosses.
The psychology explored in the Randomly Wrong puzzle is elementary and relevant to gambling. Whereas the coin has no memory, you do. You must try to keep in mind that wheels and dice have no memory. Wagering selections call for anticipating an outcome based on probabilities, which is tantamount to faking random events. Listening to that voice can lead to your downfall. Instead, when you observe some repeating pattern, like six H's, you might try to visualize that somewhere else in the universe, six T's in a row have just shown up.
You may have noticed that psychology takes us out of pure probability theory and into the realm of statistics. Computers are sensational for analyzing statistical phenomena (see, for example, Randomly Right: the Model). The faking of random coin tosses by real people offers an opportunity to discern any number of psychological insights. Consider the development of a Fake Randomizing Skill Database having the objective to calibrate the cognitive simulation of randomness, characterized by...
Finally, for representing faked coin tosses, the arrows on the state diagram need to be changed from pure probabilities to statistical distributions (mean and standard deviation). Readers are invited to make comments about how those distributions might look.
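One hypothetical way those arrows might look: suppose the faker's willingness to extend a run drops off linearly with its length. The sketch below simulates such a faker in Python; the linear "bias" model, the floor, and all parameter values are my assumptions, not data:

```python
import random

def fake_tosses(n=200, bias=0.12, floor=0.05):
    """Hypothetical faker: the longer the current run, the stronger the
    temptation to break it. After a run of length r, the chance of
    extending the run is 0.5 - bias*(r - 1), floored at `floor`."""
    tosses = [random.choice("HT")]
    run = 1
    for _ in range(n - 1):
        p_extend = max(floor, 0.5 - bias * (run - 1))
        if random.random() < p_extend:
            tosses.append(tosses[-1])   # extend the current run
            run += 1
        else:
            tosses.append("T" if tosses[-1] == "H" else "H")  # break the run
            run = 1
    return tosses

def longest_run(tosses):
    best = run = 1
    for prev, cur in zip(tosses, tosses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(2)
frac = sum(longest_run(fake_tosses()) >= 6 for _ in range(1000)) / 1000
print(f"Faked sequences showing a run of six: {frac:.1%}")
```

Under these assumed parameters, only a few percent of faked sequences ever show a run of six, compared with the overwhelming majority of honest ones -- which is exactly the asymmetry Dr. Hill's glance exploits.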
Randomly Right: the Mathematics
Meanwhile, let's just take a simplified look at the problem. Start with a thought experiment that calls for tossing a coin six times and figuring out the probability that six heads will come up in sequence: HHHHHH. One chance in 64, right? We can write that probability p(HHHHHH)=1/64. Same for six tails: p(TTTTTT)=1/64. The probability that either will occur is the sum, p(HHHHHH)+p(TTTTTT)=1/32. So, if we carry out 32 independent experiments, tossing a coin six times in each experiment, then we would expect to have maybe one of the experiments come up as a success. We observe that 32 such experiments require tossing the coin 6*32=192 times. Hey, that's less than 200 right there. Exclamation point withheld, because...
...if that were the basis of Dr. Hill's criterion, then he would be correct only about half the time in accusing his students of faking their tosses. Hardly what he called "overwhelming odds." Still, for the sophisticated student, this simple formulation affords a threshold value for successful faking, for we see that in his or her total of n mandated tosses, he or she should assure the presence of strings with length s such that s < n / 2^(s-1).

Now consider a 6-toss experiment in which almost six heads in sequence appear, THHHHH. The probability for that is one chance in 64, p(THHHHH)=1/64, just like any other combination of six tosses. Ah, but the first toss in the next independent experiment might be an H. Of course, H and T are equally likely, such that p(H)=p(T)=1/2. If H does come up, we have our six heads in sequence, five from one experiment and the sixth from the first toss in the next experiment. The joint probability for that is p(THHHHH)*p(H)=1/128. Same for p(HTTTTT)*p(T)=1/128, so the probability of either is 1/128+1/128=1/64. In the 32 6-toss experiments conducted within 200 tosses, that increases the quality of Dr. Hill's criterion by some amount; however, since the experiments are now overlapping, they are no longer independent, so some of that "quite involved calculation" gets brought into the picture here.
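The back-of-the-envelope arithmetic above is easy to check. The short Python sketch below (the helper name longest_safe_run is mine) computes both the naive detection probability for 32 independent blocks and the faker's threshold run length:

```python
from fractions import Fraction

# One 6-toss experiment succeeds (all heads or all tails) with probability
# p(HHHHHH) + p(TTTTTT) = 1/64 + 1/64 = 1/32.
p_block = Fraction(1, 64) + Fraction(1, 64)

# 32 independent (non-overlapping) experiments use 192 of the 200 tosses.
p_detect = 1 - (1 - p_block) ** 32
print(f"P(at least one all-same block in 32 tries) = {float(p_detect):.3f}")

def longest_safe_run(n):
    """Largest run length s satisfying s < n / 2**(s-1): short enough
    that n tosses would be expected to produce it by chance."""
    s = 1
    while (s + 1) < n / 2 ** s:   # test whether s+1 still satisfies the bound
        s += 1
    return s

print(f"Longest run a faker should include in 200 tosses: {longest_safe_run(200)}")
```

The naive detection probability comes out near 0.64 -- "correct only about half the time," as the text says -- and the threshold for n = 200 is a run of six, the very length fakers avoid.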
Likewise, for other successive 6-toss experiments...
p(TTHHHH)*p(HH) + p(HHTTTT)*p(TT) = 1/128

The detection does get improved further, inasmuch as the solution says, "in a series of 200 tosses, either heads or tails will come up six times in a row." Thus, it is not required that all tosses at the beginning of the first pair of 6-toss experiments be the same. If expressions of the form p(xxxHHH) are allowed, then each such probability is increased.
By now the overlapping of our simple thought experiments will have obliterated any numerical precision. It would be neat, though, to have a specific number for the probability that in 200 tosses, at least six consecutive heads or six consecutive tails will appear.
More than seven years after the publication of Randomly Wrong, the following e-mail message was received from Ryan Johnson, a graduate student in the department of Electrical and Computer Engineering at Carnegie Mellon University and obviously a sophisticated solver...
I was reading your site instead of studying for a queuing theory exam, and I realized that this problem can be modeled as a discrete-time Markov chain. The trick was to realize that (a) for a fair coin it doesn't matter whether a toss comes up heads or tails, only whether it matches the previous one, and...

The resulting chain has a transition matrix given by:
P = [[0 1.0000 0      0      0      0      0     ]
     [0 0.5000 0.5000 0      0      0      0     ]
     [0 0.5000 0      0.5000 0      0      0     ]
     [0 0.5000 0      0      0.5000 0      0     ]
     [0 0.5000 0      0      0      0.5000 0     ]
     [0 0.5000 0      0      0      0      0.5000]
     [0 0      0      0      0      0      1.0000]]

Most of its states are transient, but we don't want the limiting probabilities anyway.
Raising P to the 200th power results in:
P^200 = [[0 0.0176 0.0090 0.0046 0.0023 0.0012 0.9653]
         ... ]

The top right entry (1,7) is the probability of getting 6+ heads/tails in a row in 200 flips or fewer, assuming a fair coin. We only need to consider P^200 because state 6 is "sticky" and cannot be left, once entered. So, with probability 0.9653, a sequence of 200 coin flips will yield 6 or more heads/tails in a row at some point.
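The message's computation is easy to reproduce in pure Python. In this sketch the state numbering (0 = start, 1 through 5 = length of the current run of matches, 6 = absorbing) is my reading of the message:

```python
N = 7
P = [[0.0] * N for _ in range(N)]
P[0][1] = 1.0                    # the first toss always starts a run of one
for k in range(1, 6):
    P[k][1] = 0.5                # next toss differs: run restarts at length 1
    P[k][k + 1] = 0.5            # next toss matches: run grows by one
P[6][6] = 1.0                    # once a run of six appears, stay absorbed

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# Raise P to the 200th power by repeated multiplication (one step per flip).
M = P
for _ in range(199):
    M = mat_mul(M, P)

print(f"P(run of six or more in 200 tosses) = {M[0][6]:.4f}")
```

The top-right entry reproduces the 0.9653 reported in the message, confirming that the "quite involved calculation" reduces to a modest matrix power.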
Those who crave an exact answer are free to evaluate the following polynomial with p = 0.5. "Quite involved" indeed!
Applying the criteria set forth in Discovering Assumptions, this solution would have warranted a grade of 'C'. Ryan Johnson was not satisfied with that. "To earn a grade of 'B'," he wrote, "I should point out that this approach will work for any (k, N) where this case is (6, 200). For the 'A', we could ask, 'in a series of 200 tosses, what is the probability that the longest run of either heads or tails is six?' I'm not sure right off how to solve it, but it would probably involve a second chain that goes out to 7 before 'sticking'. Simply subtracting the probability of 'sticking' at 7 in the second chain from the answer to the first chain might do it."
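Ryan Johnson's generalization to any (k, N), and his subtraction idea for the 'A'-grade question, can be sketched as follows (the function name p_run is mine; a fair coin is assumed, and the chain is the one described in his message):

```python
def p_run(k, n):
    """Probability that n fair-coin tosses contain a run of at least k
    identical faces, via the discrete-time Markov chain from the message."""
    if k <= 1:
        return 1.0 if n >= 1 else 0.0
    size = k + 1                      # states: start, run lengths 1..k-1, absorbing
    P = [[0.0] * size for _ in range(size)]
    P[0][1] = 1.0
    for s in range(1, k):
        P[s][1] = 0.5                 # run broken: back to length 1
        P[s][s + 1] = 0.5             # run extended by one
    P[k][k] = 1.0                     # "sticky" absorbing state
    v = [1.0] + [0.0] * k             # start in state 0
    for _ in range(n):                # one transition per toss
        v = [sum(v[i] * P[i][j] for i in range(size)) for j in range(size)]
    return v[k]

p6 = p_run(6, 200)
p7 = p_run(7, 200)
print(f"P(longest run >= 6)       = {p6:.4f}")
print(f"P(longest run exactly 6)  = {p6 - p7:.4f}")
```

Subtracting P(run >= 7) from P(run >= 6) gives the probability that the longest run is exactly six, which is the second chain Ryan proposed, implemented here as one parameterized function rather than two separate chains.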
Randomly Right: The Model
Spreadsheet conventions used here include...
For n = 1...200, let...
Watch the two locations D201 and E201. One or the other of them will show non-zero values for nearly every honestly random sequence.