# Bayesian Robustness

As an addendum to my previous post I would like to show how sensitive that type of analysis is to our prior assumptions. Why would we do this? Well, let’s say we’re not really sure what a good estimate is for our prior assumptions. We can instead choose a range of values and see what happens to our posterior analysis.

I used a beta distribution to generate random values for $P(X|\theta)$, $P(X)$, and $P(\theta)$ (with means centered around 0.99, 0.2, and 0.0001, respectively). After generating those random values I obtained the following density for $P(\theta|X)$:

This shows a heavily weighted right tail. For this particular example the skewness comes from our assumption about $P(X)$: it sits in the denominator of Bayes’ theorem, so extremely small draws can cause the posterior to blow up. Nevertheless, the average state of our beliefs ($E[P(\theta|X)]\approx 0.0004$) says that we can be comfortable with our assumptions.
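If you want to reproduce this kind of sensitivity check yourself, here is a minimal sketch in Python. The beta parameters are my own assumptions, chosen only so the means land near 0.99, 0.2, and 0.0001 (a beta$(a, b)$ has mean $a/(a+b)$); the original analysis may well have used different spreads.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Draw each term of Bayes' theorem from a beta distribution.
# The (a, b) parameters below are assumptions picked to match the means.
likelihood = rng.beta(99, 1, n)    # P(X|theta), mean ~0.99
evidence   = rng.beta(20, 80, n)   # P(X),       mean ~0.2
prior      = rng.beta(1, 9999, n)  # P(theta),   mean ~0.0001

# Monte Carlo samples of the posterior via Bayes' theorem.
posterior = likelihood * prior / evidence

# The mean sits well above the median: a heavily weighted right tail.
print(posterior.mean(), np.median(posterior))
```

Because $P(X)$ appears in the denominator, the occasional tiny draw of the evidence produces the long right tail seen in the density plot.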

We could stop here, but I would like to dive into a little risk analysis. Wouldn’t you say that even a 1% risk is too great to allow Jeff Bezos to walk freely in American society, knowing he has a large potential for harm? Let’s say that if Bezos is an upstanding citizen, he provides some arbitrary unit of benefit to society (say +1 utility). However, if he is more concerned with bringing about the New World Order than with selling Amazon Prime memberships (and he certainly has the economic means to bring about mass destruction), then we can say his negative contribution is a hundred times worse than his positive contribution (say -100 utility).

So what is my belief about Bezos’ expected contribution to society after observing the evidence against him? Averaging the utility $U = (+1)\,(1 - P(\theta|X)) + (-100)\,P(\theta|X)$ over the posterior distribution above, he still, on average, provides 0.96 utility to society.
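The expected-utility step can be sketched the same way, reusing posterior samples like those generated above (the beta parameters are again my assumptions, not the exact values behind the 0.96 figure):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical posterior samples, built as before from assumed parameters.
posterior = rng.beta(99, 1, n) * rng.beta(1, 9999, n) / rng.beta(20, 80, n)

# Utility: +1 if he is upstanding (probability 1 - p), -100 if not (probability p).
utility = 1.0 * (1.0 - posterior) + (-100.0) * posterior

print(utility.mean())  # expected contribution to society
```

Even with the -100 downside weighted in, the posterior mass near zero keeps the expected utility close to +1.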

Even though his downside is disproportionately negative compared to his upside (based on my assumptions), my beliefs suggest that the risk still isn’t really worth entertaining.