Bayes’ Theorem: Some Intuitive Principles, Part III

In the first two parts of this series, I expressed the practical consequences of Bayes’ theorem in terms of two informal principles:

Principle I:
All things being equal, the theory which predicts a higher likelihood for the observed data is the theory most likely to be true.


Principle II:
All things are not equal because evidence accumulates.

I explained these principles with the aid of a dice game. At random, I select either a 4-sided die or a 20-sided die, and report to you the numbers that I read off the selected die. The first principle is illustrated when, after rolling the die once and reporting a 3, you see that you have gained new information, and you know that it is more likely that I selected and rolled the 4-sided die than the 20-sided die. The second principle is illustrated when I roll the selected die 10 times and report 10 numbers, each 4 or less. With each roll of the die, you become more confident in the theory that the 4-sided die was selected. Indeed, the odds of rolling 10 numbers 4 or less on a 20-sided die are almost 1 in 10 million.
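The arithmetic behind both principles can be sketched in a few lines. This is a minimal illustration (not code from the original articles): with a 50/50 prior over the two dice, each report of a number 4 or less multiplies the 20-sided die's likelihood by 4/20, while the 4-sided die's likelihood stays at 1.

```python
from fractions import Fraction

# Prior: each die is selected with probability 1/2.
prior_d4 = Fraction(1, 2)
prior_d20 = Fraction(1, 2)

# Per-roll likelihood of reporting a number 4 or less.
like_d4 = Fraction(4, 4)    # every face of a 4-sided die is 4 or less
like_d20 = Fraction(4, 20)  # only 4 of the 20 faces

def posterior_d4(n_rolls):
    """Posterior probability that the 4-sided die was selected,
    after n_rolls reports that were each 4 or less."""
    p4 = prior_d4 * like_d4 ** n_rolls
    p20 = prior_d20 * like_d20 ** n_rolls
    return p4 / (p4 + p20)

print(posterior_d4(1))   # 5/6 after the single report of a 3
print(posterior_d4(10))  # 9765625/9765626 after ten small reports
```

After ten rolls the 20-sided die's likelihood is (4/20)^10 = 1/9,765,625, which is the "almost 1 in 10 million" figure above.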

At the conclusion of my last article, I posed the following puzzler: How should we update our beliefs if I report that I rolled a 14 on my eleventh roll of the die?

The question seems all too easy to answer. Since I can only roll a 14 on the 20-sided die, we are inclined to believe with certainty that I had selected the 20-sided die, and that, by a quirk of fate, we were led to believe that the 20-sided die was a 4-sided die.

Being rational is no guarantee of being correct, and a rational believer updates his beliefs in light of new evidence. Thus, our initial inclination to reverse our beliefs after hearing the report of a 14 seems both reasonable and consistent with our inferences so far.

The surprise ending here?

Assuming that I’m playing this game with complete honesty, my reports of the casting of my selected die are quite reliable, but they’re not infallible. There are many ways I could make a mistake, even in so simple a game as this: I could misread the die, become temporarily disoriented, or misspeak my report. Likewise, as an observer, you might mishear my report. Admittedly, these errors are highly improbable, but they’re not impossible. In the rolling of a single die, they are negligible. But when we have established a theory to 1 part in 10 million, we need to pay attention to the rates of these errors.

What is the probability that, on any given round of the game, you might incorrectly hear my report of “4” as “14”? What is the probability that on any particular round of the game, I might say “14” instead of “4”?

I don’t know these probabilities, but they are plausibly greater than 1 in 10 million. If playing this game were our full-time job, and a round took just 4 seconds to play, then 10 million rounds would take us more than 5 years to play out. It’s plausible that the die caster or the observer would make such errors several times over a five-year period. That being the case, a single report of a 14 cannot easily dislodge our theory that the 4-sided die was selected. After 10 rolls of the die, we’re so confident in that theory that any alternative conclusion represents an extraordinary claim. Even improbable variations of our 4-sided-die theory are rationally more probable than the alternatives.
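We can make this concrete with a rough sketch. Suppose, purely as an assumption, that any single report has a one-in-a-million chance `eps` of being garbled (a "4" spoken or heard as "14"). Comparing the two theories over the whole eleven-report sequence:

```python
# Hypothetical per-report error rate -- an assumed value, not a measured one.
eps = 1e-6

# Likelihood of the eleven reports (ten numbers 4 or less, then a "14"):
# Under the 4-sided-die theory, the first ten reports are unsurprising,
# but the "14" requires exactly one reporting error.
like_d4 = (1 - eps) ** 10 * eps
# Under the 20-sided-die theory, every report can be accurate, but the
# ten small numbers cost (4/20)^10, and the 14 itself costs 1/20.
like_d20 = (1 - eps) ** 11 * (4 / 20) ** 10 * (1 / 20)

# With a 50/50 prior, the posterior odds equal the likelihood ratio.
posterior_d4 = like_d4 / (like_d4 + like_d20)
print(posterior_d4)  # roughly 0.995
```

Even after hearing the "14", the 4-sided-die theory retains about 99.5% of the posterior probability under this assumed error rate, because a one-in-a-million misreport is still far more probable than the roughly five-in-a-billion sequence the 20-sided die would have to produce.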

From this, we reach the third informal principle of Bayesian inference, the famous adage:

Principle III
Extraordinary claims require extraordinary evidence.

When evidence for a theory accumulates, and we’ve concluded with high probability that the theory is true, it becomes difficult for competing theories to dislodge the leading theory without extremely good evidence. The evidence has to be of the kind that distinguishes the alternative theory from the leader.

The principal ingredient in the scientific method is experimental control. Control of an experiment means accounting for the types of experimenter errors that might throw our conclusions into doubt. Scientists strive to create procedures where human error or machine error won’t significantly affect their conclusions.

Recent controversy was stirred when an experiment at Europe’s prestigious physics laboratory, CERN, appeared to discover neutrino particles traveling faster than the speed of light. While anything is possible, faster-than-light neutrinos violate two interlocking theories about the laws of physics that explain more than a century of experimental evidence. Faster-than-light neutrinos are an extraordinary claim, and it would take more than the report of a single elite team of scientists to change rational beliefs about physics. Physicists didn’t simply take the result at face value. Indeed, even the team of scientists who initially discovered the anomaly recognized that the result was probably incorrect. Scientists got to work, comparing different lines of evidence and studying possible errors in the experiment. It turned out that a piece of bad wiring in the experiment was responsible for a delay of 60 billionths of a second in a GPS signal, and this delay explained the erroneous result.

Thus, when you read about an anecdote or single study that appears to overturn a well-established theory (or perhaps multiple interlocking theories), remember this third principle, and try to understand the experimental controls before reaching a conclusion.
