Despite no one asking me, I really do believe that people adopting probabilistic thinking and a Bayesian approach to consuming media could improve our society. Bayesianism is not only one of the most defensible approaches to ‘knowing’ things, it also leads to a healthier society. In essence, Bayesian thinking is about how to ‘best’ update the probability that something is true given the evidence we’ve seen. When we receive new evidence, whether it confirms or disconfirms our belief, we should judge how strong or weak it is and update our beliefs accordingly. Almost never should a belief be 0% or 100% (absolute certainty against or for). Bayesianism forces you to interrogate your own beliefs by avoiding absolutes and requiring you to state other possibilities, and it generally pushes towards epistemic humility.
Under Bayesian thinking, saying you ‘believe’ something (your prior) is only partially informative. Belief is just your ‘best guess’ among all the possibilities you’ve considered; you arrive at it by comparing probabilities across those possibilities. Belief typically implies it is the most likely of all the possibilities (e.g. it is 40% likely my car is secretly a transformer tired from saving the world, and 30% likely it’s broken. Either way, it won’t start). In this case, I’d ‘believe’ the 40%.
By definition Bayesianism is open to new evidence, but not all evidence is weighted equally. The ‘likelihood ratio’ compares how likely a piece of evidence is if a specific possibility is true versus if it is false. If my car never tries to protect me from decepticons who then kill me, the likelihood of it being a transformer goes down sharply. However, there is still a small possibility it is a deep agent and I was sacrificed for some wider galactic good, maybe 1% (this would be a false negative). So the likelihood ratio is this: conditional on being a transformer, it is 1% likely my car would do nothing to save me, and 99% likely it would break cover and protect me. Of course, if my car is not a transformer, the likelihood that it would not save me is effectively 100%. Once I receive the evidence that it did not save me (putting aside that I’m dead), my belief should change as follows:
- I can calculate the prior odds of this being true: if I believe it is 40% likely that my car is a transformer, then there is a 60% chance it is not, before any evidence. This is roughly 2/3 odds (the probability it is true divided by the probability it is not, 40%/60%).
- I also must calculate the likelihood ratio of seeing the evidence that my car did not save me, given it is or is not a transformer. This is 1% / 100%, or 1/100 odds (the likelihood a transformer would not save me divided by the likelihood an ordinary car would not save me).
- Multiplying these two gives my updated odds that my car is a transformer: 2/3 × 1/100 ≈ .0067 (odds). I then convert this back to a probability: .0067 / (1 + .0067) ≈ .0066, or about 0.66%.
After not being saved by my car, I now believe it is only about 0.66% likely to be a transformer (my ‘posterior belief’).
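For anyone who wants to check the arithmetic, here is a minimal Python sketch of the same odds-form update. The function name and structure are mine, purely for illustration; the numbers are the ones from the example.

```python
def update_odds(prior_prob, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability after one piece of evidence."""
    prior_odds = prior_prob / (1 - prior_prob)            # e.g. 0.40 / 0.60 = 2/3
    likelihood_ratio = p_evidence_if_true / p_evidence_if_false
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)           # odds back to probability

# My car did not save me: 1% likely if it is a transformer, ~100% likely if it is not.
posterior = update_odds(0.40, 0.01, 1.00)
print(f"{posterior:.2%}")  # ~0.66%
```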
Unfortunately, we never ‘perceive’ evidence directly (that would be naive realism, which is generally rejected). Instead, our perception of the world -- the ‘evidence’ we see or receive -- is always mediated, at a minimum through our own senses. This matters because Bayesianism helps work against our strong cognitive bias toward overweighting evidence related to rare events. The evolutionary explanation is that we are wired to overreact to potential dangers, even rare ones. Thus, hearing that 5 people died from the latest Covid vaccine -- out of billions who took it -- somewhat breaks our brain. Likewise when we hear that someone down the street won the lottery. Hearing any non-zero number, or any even mildly convincing evidence related to a rare event, leads people to massively overestimate the likelihood that the rare event is in fact true. This is why gestures to rare events in the past are so powerful (they simply prove that something could happen, but we interpret them as making it likely to happen). This is both the seed and the growth engine of misinformation and conspiracies.
Mediation requires introducing a new Bayesian element: perceiving evidence of something that is not true, aka a false positive on a ‘test’ (here, evidence is implicitly a ‘test’ of a specific belief). In the above example, this would be the likelihood of perceiving my car fighting decepticons (it transforms, oddly organic-yet-metal machine guts fly, and I’m saved) while it is in fact not a transformer. Other possible explanations are unlikely, but they do exist (non-zero): I could be dreaming, going mad, or someone spiked my dinner with LSD. Let’s say the likelihood of this is .0001%, that is, 1 in 1 million. This is extremely unlikely, but not zero! It matters hugely, as we’ll see later, so keep this number in your head.
In reality, all ‘evidence’ is mediated this way. In clinical settings, the ‘perception’ of evidence comes via tests. All clinical tests have false positives (the test ‘perceives’ that you have cancer, but you don’t) and false negatives (the test ‘perceives’ that you do not have cancer, but you do). Interpreting these is only meaningful in relation to the base rate of the actual event in reality. For extremely rare events, even a test with a 1% false positive rate is almost useless.
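To make the clinical point concrete, here is a small Python sketch with invented but plausible numbers (a 1-in-10,000 condition, 99% sensitivity, a 1% false positive rate -- none of these come from the text above): even a positive result leaves the condition below a 1% probability.

```python
# Why base rates dominate for rare conditions (illustrative numbers only).
base_rate = 1 / 10_000        # P(condition)
sensitivity = 0.99            # P(positive test | condition)
false_positive_rate = 0.01    # P(positive test | no condition)

p_positive = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_condition_given_positive = sensitivity * base_rate / p_positive
print(f"{p_condition_given_positive:.2%}")  # ~0.98% -- a positive test, yet <1% chance of the condition
```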
Let’s apply this to my example, but instead of starting at 40%, we need to change our epistemic viewpoint to that of someone outside myself who is trying to judge whether my car is in fact a transformer based on my (truthful) account.
Bayesianism is especially powerful in these cases, when the base rate for an event is very low (it is very unlikely). In these cases, even seemingly extremely compelling evidence can fail to make something meaningfully likely. Let’s assume that there are only 20 transformers currently disguised as cars on Earth at any one time, and roughly 4 billion cars. The base rate is 20 / 4 billion, aka a base rate of .000000005. In this case, even if I did in fact see my car protect me from decepticons, the incredibly low base rate means that I may not be able to meaningfully convince anyone else that this happened. Even if they believed in transformers! This is because the other possible explanations (again: mad, dreaming, LSD-drugged) are all competing, and their base rates mean that my perception of the event is not, on its own, decisive that the event happened.
It is worth stressing that this is not because people do not believe in transformers (they do!), but because the phenomenon is real yet rare. In the world of media, this means that people who do believe rare events have happened in history -- political conspiracies, cover-ups, collusion, assassinations, shadow institutions, etc. -- should still rationally require extremely decisive evidence before believing a new conspiracy theory. Things looking fishy or ‘not adding up’ can be a good reason to ask more questions, but a poor reason to update beliefs about rare events. You want ‘red-handed’, ‘near-absolute’ evidence for rare events under Bayesianism.
Very sadly, in modern society our brains act the opposite way. The more outlandish the theory or claim, the more our brain pays attention, because of cognitive biases. These biases served us well in premodern jungles, savannahs, and deserts -- where overcaution protected us from rare events that could nonetheless lead to death -- but they leave us highly manipulable today. Media algorithms plus the profit motive turn this into a feedback loop: we disproportionately seek information about possible-but-unlikely claims and are then served more posts related to them. This creates an incentive to produce more ‘evidence’ about rare events. However, since rare events are (by definition) rare, they do not truly occur very often, so media is incentivized to highlight weak evidence around events which might have occurred. And since we strongly overweight evidence about outlandish claims, we reward this behavior (i.e. the rise of clickbait, tabloids, and the more recent brigade of podcasts and short-form video ‘analysts’ ‘just asking questions’ or drawing thin inferences from historical events to support conspiracies today).
Let’s return to our transformer example. If my friend knew the base rate of transformers in the car population is 20 in 4 billion, then his prior belief that my car is a transformer is .0000005% (not 40%). My friend absolutely trusts me (it’s not about my honesty): he trusts that I did in fact have a vivid, even exhilarating experience of being saved. If you remember, the likelihood of me having this experience (my car, the transformer, saving me) while it is not true is only 1 in 1 million, extraordinarily unlikely. Intuitively, we’d say that 1 in 1 million is so rare that he should believe me, but this is wrong. In fact, it shouldn’t even be close.
I won’t redo the math (it’s the same procedure), but in this case he still should not believe my car is a transformer.
Because the base rate of transformers in the car population is so low, he would correctly update the likelihood that my car is a transformer from .0000005% to .5%. That is, he should be 99.5% sure that my car is not a transformer, and instead strongly encourage me to take precautions: see a sleep doctor, see a neurologist, protect my food supply.
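For completeness, here is the friend’s update as a self-contained Python sketch, using the numbers from the example. The assumption that a real transformer rescue would essentially always be perceived is my simplification, not something stated above.

```python
# The friend's update. Base rate: 20 transformers among ~4 billion cars.
# My vivid perception: ~certain if my car really is a transformer,
# ~1 in a million otherwise (dreaming, madness, spiked dinner).
prior_prob = 20 / 4_000_000_000        # 5e-9, i.e. 0.0000005%
p_perception_if_transformer = 1.0      # simplifying assumption: a real rescue is always perceived
p_perception_if_not = 1 / 1_000_000    # false positive rate of my perception

prior_odds = prior_prob / (1 - prior_prob)
posterior_odds = prior_odds * (p_perception_if_transformer / p_perception_if_not)
posterior = posterior_odds / (1 + posterior_odds)
print(f"{posterior:.2%}")  # ~0.50% -- still 99.5% sure the car is not a transformer
```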
As a final small point: in the above example, fallibility and trustworthiness are not the same thing. My friend can believe I’m being truthful (100% trustworthy) and believe my evidence, while not believing my claim. I did in fact perceive something, but that perception could have been due to something other than a Cybertronian. Similarly, as we interpret media -- whether social, traditional, or other -- we can reject the claims of traditional outlets or TikTokers without necessarily believing they are lying. They can be telling the truth and still be wrong for a huge variety of reasons. After all, they mediate the evidence we receive, but the evidence they produce is also mediated (the chain is endless). Of course, we could have prior evidence (or a base-rate assumption) that certain outlets are lying: traditional media has business interests (but also institutional interests), while social media is built entirely around algorithmic virality.
Other people and (most importantly) we ourselves are likewise fallible. A healthy assumption (a healthy ‘prior’) is that people are accidentally wrong far more often than they are purposefully lying. There are lots of cognitive biases that lead us to double down on being accidentally wrong, though that’s beyond Bayes.
Bayesianism is almost certainly true, but it can’t solve our problems directly, because priors and the construction of evidence are inherently subjective. In the above example, I used 40% for my belief about my transformer-car, while my friend used the ‘general’ base rate. Unfortunately, Bayesianism does not offer guidance on exactly how to select (‘construct’) priors, nor on how to judge evidence. This is a recursive (i.e. ‘turtles all the way down’) problem, because something objective like a base rate must be arrived at inductively, and that process itself must have a prior. There are also ontological questions about what counts as evidence that can change how perceived events are weighted. For instance, the dreaming / going mad / LSD possibility is obviously constructed, and there are ~infinite other possibilities (esoteric, Rube Goldberg-like pathways) one could construct by which my true perception was nonetheless a false positive.
I think the implications of Bayesianism are relatively clear: