Recently I was asked to introduce Bayesian inference in three minutes flat.
In 10 slides, available at https://osf.io/68y75/, I made the following points:
- Bayesian inference is “common sense expressed in numbers” (Laplace)
- We start with at least two rival accounts of the world, aka hypotheses.
- These hypotheses make predictions, the quality of which determines their change in plausibility: hypotheses that predicted the observed data relatively well receive a boost in credibility, whereas hypotheses that predicted the observed data relatively poorly suffer a decline (a numerical sketch appears after this list).
- “Today’s posterior is tomorrow’s prior” (Lindley) – the cycle of knowledge updating and Bayesian learning never ends.
- When we learn, we (ought to) do so using Bayes’ rule: new knowledge equals old knowledge times a predictive updating factor (written out in symbols after this list).
- We use Bayes’ rule to avoid internal inconsistencies (i.e., inference that is silly, farcical, or ridiculous – pick your favorite term). When there are no internal inconsistencies, the system is called coherent.
- Be coherent! (Lindley, de Finetti, and, implicitly, all Bayesians)
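
For concreteness, here is that predictive updating factor written out. This is nothing more than the standard odds form of Bayes’ rule for two rival hypotheses, not anything specific to the slides:

```latex
\underbrace{\frac{p(\mathcal{H}_1 \mid \text{data})}{p(\mathcal{H}_2 \mid \text{data})}}_{\text{posterior odds}}
= \underbrace{\frac{p(\mathcal{H}_1)}{p(\mathcal{H}_2)}}_{\text{prior odds}}
\times \underbrace{\frac{p(\text{data} \mid \mathcal{H}_1)}{p(\text{data} \mid \mathcal{H}_2)}}_{\text{predictive updating factor}}
```

The updating factor is a ratio of predictive performance: the hypothesis that predicted the observed data better sees its odds improve, exactly as the third bullet point says.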
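And here is a minimal numerical sketch of the same logic. The two hypotheses (a fair coin versus a heads-biased coin), the 50/50 prior, and the data (8 heads in 10 tosses) are all hypothetical choices made for illustration; none of them come from the slides:

```python
from math import comb

def likelihood(theta, heads, n):
    """Binomial probability of observing `heads` heads in `n` tosses
    given a hypothesized probability of heads `theta`."""
    return comb(n, heads) * theta**heads * (1 - theta)**(n - heads)

# Two rival accounts of the world, with their hypothesized chance of heads.
hypotheses = {"fair (theta = 0.5)": 0.5, "biased (theta = 0.8)": 0.8}

# Prior plausibilities, set to 50/50 purely for illustration.
prior = {name: 0.5 for name in hypotheses}

# Hypothetical observed data: 8 heads in 10 tosses.
heads, n = 8, 10

# Bayes' rule: new knowledge = old knowledge * predictive updating factor,
# then normalize so the plausibilities sum to one.
unnormalized = {name: prior[name] * likelihood(theta, heads, n)
                for name, theta in hypotheses.items()}
total = sum(unnormalized.values())
posterior = {name: value / total for name, value in unnormalized.items()}

for name in hypotheses:
    print(f"{name}: prior {prior[name]:.2f} -> posterior {posterior[name]:.2f}")
```

Running this shows the biased-coin hypothesis rising from a prior plausibility of 0.50 to a posterior of roughly 0.87, with the fair-coin hypothesis declining accordingly: the boost-and-decline pattern from the list above. Feed in the next batch of tosses with these posteriors as the new priors and you have Lindley’s “today’s posterior is tomorrow’s prior” in action.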