
Bayes' Theorem

A tool for better thinking and decision-making



Bayes' Theorem might seem like an abstract concept reserved for statisticians and mathematicians, but it's a tool we can all use in our decision-making processes. At its core, Bayes' Theorem is a formula for updating probabilities based on new evidence. This powerful theorem has applications ranging from medical diagnoses to machine learning, offering a systematic way to revise our beliefs in light of additional information.


What is Bayes' Theorem?


Bayes' Theorem is a principle in probability theory that describes how to update the probabilities of hypotheses when given new evidence. A hypothesis is an idea or assumption we develop based on limited evidence that serves as the starting point for further investigation. Evidence is the available facts or information we can use to evaluate whether a belief or hypothesis is true. In its simplest form, Bayes' Theorem is expressed as:

P(A|B) = [P(B|A) x P(A)] / P(B)

Where:

  • P(A|B) is a conditional probability: the probability of event A occurring given that B is true. We say this as "the probability of A given B."

  • P(B|A) is also a conditional probability: the probability of event B occurring given that A is true.

  • P(A) is the probability of hypothesis A being true without any given conditions. This is also called prior probability.

  • P(B) is the probability of observing evidence B without any given conditions. This is also called marginal probability.
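
The formula can be sketched as a small Python function (the function and variable names here are my own, and the numbers in the example call are purely illustrative):

```python
def bayes(p_b_given_a, p_a, p_b):
    """Bayes' Theorem: P(A|B) = P(B|A) x P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Illustrative numbers: P(B|A) = 0.9, P(A) = 0.1, P(B) = 0.2
posterior = bayes(0.9, 0.1, 0.2)  # 0.45
```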


History


Thomas Bayes was an English statistician, philosopher, and Presbyterian minister in the 18th century. Bayes' work on the theorem came from his interest in solving a particular problem: how to make inferences about an unknown parameter through repeated trials. The theorem was published posthumously in 1763, thanks to his friend Richard Price who recognized its significance and edited and presented Bayes' findings to the Royal Society. The theorem laid the groundwork for what we now call Bayesian statistics, offering a mathematical way to update beliefs based on new evidence.


Applications


Bayes' Theorem finds applications in a wide array of fields. In medicine, it's used for diagnostic testing, helping to determine the probability of a disease given the test results. In machine learning, Bayesian methods are employed to improve the decision-making processes of AI systems. The theorem is also pivotal in the field of finance, where it helps in risk assessment and the valuation of investments based on probabilistic outcomes. It's even the principle behind email spam filters.


An Example

Let's say you go to the doctor for a routine diagnostic test for a disease. The disease being tested for affects 1 out of 2,000 people in the population. The test can correctly identify if you have the disease 99% of the time (called sensitivity), and can correctly identify if you don't have the disease 99% of the time (called specificity). Given that you tested positive for the disease, what is the probability you have the disease?


Most people would say 99%, since the test is 99% accurate. But that's wrong. Let's use Bayes' Theorem to find out why.


We start by rewriting Bayes' Theorem in terms of our hypothesis and evidence. Our hypothesis is that we have the disease (D). Our evidence is that we tested positive (+).

P(D|+) = [P(+|D) x P(D)] / P(+)

The probability of testing positive given you have the disease, P(+|D), is 99%. The probability of having the disease, P(D), is 1 out of 2,000, or 0.05%. The probability of testing positive, P(+), is a combination of the probability of testing positive if you have the disease and the probability of falsely testing positive if you do not have the disease.


So P(+) = P(D) x P(+|D) + P(-D) x P(+|-D), where P(-D) means the probability that you don't have the disease.


Now we can plug in the numbers to Bayes' Theorem:

P(D|+) = (0.99 x 0.0005) / (0.0005 x 0.99 + 0.9995 x 0.01)
P(D|+) = 0.000495 / 0.01049
P(D|+) ≈ 0.047, or about 4.7%

We find that the probability of having the disease, given that we've tested positive for it, is only 4.7%. That seems odd at first, considering the test is 99% accurate. But it makes sense when we consider that the disease only affects 1 out of 2,000 people. So the test will correctly identify the one person with the disease (1 x 99%), but it will also incorrectly flag about 20 of the other 1,999 people as having the disease (1,999 x 1%). If only 1 out of those 21 total people who tested positive actually has the disease, then the probability that any individual has the disease given they tested positive is 1 out of 21, or about 4.7%.
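
The arithmetic above can be checked with a short Python sketch (the variable names are my own):

```python
# Numbers from the example: 1-in-2,000 prevalence, 99% sensitivity and specificity.
prior = 1 / 2000             # P(D): probability of having the disease
sensitivity = 0.99           # P(+|D): true positive rate
false_positive_rate = 0.01   # P(+|-D): 1 - specificity

# Total probability of testing positive, P(+)
p_positive = prior * sensitivity + (1 - prior) * false_positive_rate

# Bayes' Theorem: P(D|+) = P(+|D) x P(D) / P(+)
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # prints 0.047, i.e. about 4.7%
```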


That raises the question: if a 99% accurate test can only tell you that you have a 4.7% probability of having the disease, how can any doctor identify a disease with any accuracy?


That's where Bayes' Theorem really becomes important. That positive test is just one piece of evidence. Now we update our priors. We can run the same diagnostic test a second time. If we test positive a second time, our prior probability of having the disease is no longer 1 in 2,000 but 4.7%.


If we adjust our formula with this new information, we get:

P(D|+) = (0.99 x 0.047) / (0.047 x 0.99 + 0.953 x 0.01)
P(D|+) = 0.0465 / 0.0561
P(D|+) ≈ 0.83, or about 83%

Now we see that with two positive tests, the probability of having the disease jumps to 83%.
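
This updating step can be sketched as a reusable Python function (the names are my own) and applied twice, once per positive test:

```python
SENSITIVITY = 0.99          # P(+|D): true positive rate
FALSE_POSITIVE_RATE = 0.01  # P(+|-D): 1 - specificity

def update_after_positive(prior):
    """Return P(D|+) given the prior P(D), via Bayes' Theorem."""
    p_positive = prior * SENSITIVITY + (1 - prior) * FALSE_POSITIVE_RATE
    return SENSITIVITY * prior / p_positive

first = update_after_positive(1 / 2000)  # about 0.047 after one positive test
second = update_after_positive(first)    # about 0.83 after a second positive test
```

Note that the output of the first update becomes the prior for the second, which is exactly the "updating our priors" step described above.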


Other things that can affect our priors are whether you are showing symptoms of the disease, whether the disease runs in your family, or whether your lifestyle habits predispose you to it. If you have been smoking a pack a day for 40 years, your prior probability of having lung cancer before a test is much higher than a non-smoker's.


Modern screenings for various types of cancer, such as mammograms and colonoscopies, can certainly save many lives, since cancer is generally easier to treat when detected early. However, if you get a positive result on a routine screening and you don't have any symptoms, Bayes' Theorem can potentially put your mind at ease and help you make more informed decisions about the next steps.


 

Making Better Decisions


At its heart, Bayes' Theorem provides a framework for decision-making under uncertainty. It teaches us to be flexible with our beliefs and to adjust our views as new information becomes available. This can be as simple as revising the likelihood of rain based on a weather forecast update, or as complex as re-evaluating investment strategies after a market shift.


To apply Bayes' Theorem in everyday decision-making, start by establishing your initial belief or hypothesis and its rough probability. When new evidence is presented, use the theorem to calculate the updated probability. This iterative process of updating beliefs can lead to more informed and rational decisions.
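
One way to sketch this iterative process in Python (all names and numbers here are illustrative assumptions, not from the text):

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Revise belief in a hypothesis H after observing one piece of evidence E."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

belief = 0.5  # rough initial probability for the hypothesis
# Each pair is (P(E|H), P(E|not H)) for one new piece of evidence.
for likelihood_h, likelihood_not_h in [(0.8, 0.3), (0.7, 0.4)]:
    belief = update(belief, likelihood_h, likelihood_not_h)
# belief is now about 0.82
```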


Conclusion


Bayes' Theorem is more than just a mathematical formula; it's a fundamental approach to understanding uncertainty and making better decisions. It might come as a surprise that most things in life are uncertain. That's why it's important to think probabilistically rather than in certainties. After all, how can we learn anything new if we have convinced ourselves that our ideas are 100% right? The brilliance of Bayes' Theorem is that it's an iterative process that converges on the truth as more evidence is made available. Although we can never be 100% right, this tool allows us to be less wrong and make decisions that better serve us and those around us.


Additional Great Resources:

Video - The Bayesian Trap, by Veritasium

Book - The Signal and the Noise, by Nate Silver
