Jensen’s Inequality

The modern philosopher Nassim Nicholas Taleb says that one should never cross a river that is, on average, 4 feet deep. The intuition is clear: Averages are a borderline useless metric when the payoff structure (i.e. the consequence of being in a certain depth of water) is non-linear. To put it another way, being in 8 feet of water is not merely twice as bad as being in 4 feet. Jensen’s Inequality captures this idea in a single elegant statement:

Let f be a convex function and X be an arbitrary random variable. Then

f(\mathbb{E}[X]) \leq \mathbb{E}[f(X)]

In the example above we could think of X as water depth and f as our relative discomfort. When pain is convex (disproportionately uncomfortable) with respect to some input, our expected pain is at least as bad as the pain of the average input, and usually much worse.
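If you want to see the inequality in action, here's a minimal sketch in Python. The quadratic discomfort function is my own stand-in for f (any convex function works), and the uniform depth distribution is likewise made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up convex "discomfort" function of water depth (any convex f works):
def discomfort(depth):
    return depth ** 2

# Suppose the river's depth is uniform between 0 and 8 feet: 4 feet on average.
depths = rng.uniform(0, 8, size=100_000)

print(f"f(E[X]) = {discomfort(depths.mean()):.2f}")  # discomfort at the average depth (~16)
print(f"E[f(X)] = {discomfort(depths).mean():.2f}")  # average discomfort (~21.3)
# Jensen's Inequality guarantees the second number is at least the first.
```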

This can be flipped into a more positive outcome. Think of how much more enjoyable it is to eat a single large meal in the evening than trying to micromanage your calorie intake with 10 smaller meals spread evenly throughout the day. I guarantee the total enjoyment from all those little snacks won’t add up to anywhere near the enjoyment of that dinner at the end of the day. Or to paraphrase Socrates, hunger is the best spice.

Try to think of more applications of Jensen’s Inequality. When else is the average utility greater than the utility of the average?

The Kelly Criterion

I have an open secret I’d like to share: I love gambling. It’s one of my favorite pastimes. If you don’t like gambling that’s fine. We’ll probably never be best friends, but we can get along. Now if you’re the type of person who judges people who enjoy gambling, that’s where we’re going to start running into problems. How would you like it if I judged you for loving Judd Apatow movies? You’d probably think I’m a jerk. Well, bug off, this post isn’t for you!

So there are a couple things you should know about me. I’m a mathematics enthusiast (obviously), but I was also born and raised in Vegas (I lived there for the first 27 years of my life), and I’ve worked in the casino gaming industry as a game designer and mathematician. Let me tell you something: You CAN’T win; not in the long run at least. I’m sure most of you know this, but there is still a small minority of people who think they have it figured out with some system (martingale betting being the most common mistake among less savvy gamblers). So if I can’t win, why do I enjoy it so much? Well, maybe I’m just an adrenaline junkie. It’s a roller-coaster ride. I don’t even know if I enjoy the money so much as the high of beating the odds against a rigged system. And I’m happy to pay a (reasonable) premium for that thrill.

Now don’t get me wrong, some people can beat the system. See Ed Thorp or Jeff Ma, who used card counting to beat blackjack. Or Haralabob Voulgaris, who has made a very nice living betting on NBA games. Then there’s the endless number of poker pros such as Phil Ivey and Daniel Negreanu. I’m sure most of them will tell you that poker isn’t gambling, and I’m inclined to agree, but we still associate poker with the casino.

No matter what kind of gambling you’re into, you should at least be familiar with one formula: The Kelly Criterion. And I’m using “gambling” in the loosest sense of the word. The Kelly Criterion can be applied to any quantifiable stake where your return depends on an uncertain outcome. This includes poker, the stock market, or evaluating a job offer!


The formula is given by:

 f =  \frac{p(b + 1) - 1}{b}  = p - \frac{1-p}{b} 

Let p be the probability of winning a bet with payoff odds of b (i.e. for every dollar you stake, you win b dollars of profit if the bet wins). Then f is the fraction of your bankroll that you should place on that bet in order to maximize your long-run returns. Note: A negative value implies you should place your stake on the other side of that bet.
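In code this is a one-liner. Here's a quick sketch (the example numbers are ones we'll use in a moment):

```python
def kelly_fraction(p, b):
    """Fraction of bankroll to stake on a bet won with probability p that
    pays b-to-1 (win b per dollar staked; otherwise lose the dollar)."""
    return p - (1 - p) / b

print(kelly_fraction(0.5, 100))   # 0.495 -- the 100-to-1 coin flip discussed below
```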

The first thing I noticed about this equation was that it seemed way too conservative. For example, take the limit as b\rightarrow\infty. We get

\lim_{b\rightarrow\infty} f = p

If we set p=\frac{1}{2} we get f\rightarrow\frac{1}{2}. So no matter how large the payoff on a 50-50 outcome, you should never risk more than half of your current bankroll. This flies in the face of a cursory examination of expected value. Say we get paid 100 to 1 on a coin flip. The expected return for every dollar we place into this wager is \frac{1}{2}(\$100) + \frac{1}{2}(-\$1) = \$49.50, so shouldn’t we place as much money as possible into this bet to maximize our returns? Absolutely not! And the reason is risk of ruin.

You don’t want to be put in a position where you can lose your entire bankroll, even with a near-infinite payoff. If you stake everything on every flip, then after repeated trials you’re still eventually going to lose everything, and once you’re at zero there’s no coming back.

So clearly any fraction of your bankroll less than 100% will avoid risk of ruin. Why not bet 90% of our bankroll on a positive EV bet? Well, Kelly derived this fraction to maximize the expected logarithm of your bankroll, which is what maximizes your long-run rate of return. I have to admit it still feels overly conservative, but that’s one of the beautiful things about math: It doesn’t care how you feel. You start with your assumptions, you go through the motions, and you observe the results.
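If you don't trust the derivation, you can check it numerically. The sketch below assumes the standard log-growth framing (maximizing expected log bankroll per bet) and grids over stake fractions for the 100-to-1 coin flip:

```python
import numpy as np

p, b = 0.5, 100           # the 100-to-1 coin flip from above
fs = np.linspace(0.01, 0.99, 99)

# Expected log-growth per bet when staking fraction f:
# with prob p the bankroll is multiplied by (1 + f*b), otherwise by (1 - f).
g = p * np.log(1 + fs * b) + (1 - p) * np.log(1 - fs)

best = fs[np.argmax(g)]
print(f"growth-maximizing fraction ~ {best:.2f}")   # ~0.50 on this grid, vs. Kelly's 0.495
print(f"log-growth at f = 0.99: {g[-1]:.3f}")       # ~0: overbetting kills long-run growth
```

Betting 99% of your bankroll has essentially zero long-run growth even though every individual bet has huge expected value; that's Jensen's Inequality from earlier biting us through the concavity of the logarithm.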

One side effect of the Kelly Criterion is it also makes a great case for diversification. Don’t put your entire bankroll on any single investment! No matter how great the return may look, if there’s even a 1% chance it could go bust you have to put some of your money somewhere else!


So let me ask a question: Can you think of any exceptions to using the Kelly Criterion? When might you want to be more or less aggressive? No right or wrong answer here. Just curious what you can come up with.

The Bellman Equation

I’ve been taking classes in AI and Machine Learning and I’ve already bumped into this one on a few separate occasions. In my experience I usually see it in the context of Markov Decision Processes. Here it is:

 V^\pi(s) = R(s,\pi(s)) + \gamma \sum_{s'} P(s'|s,\pi(s)) V^\pi(s')

This guy is pretty nasty looking, but he’s actually not so bad once you get to know him. First let me list out what everything in there is supposed to represent…

  •  V^\pi(s): Think of this as the value of being in a given state of the world (s) given that you tend to behave in a certain way (\pi).
  • \pi: In machine learning we call this the policy, but that’s just another way of saying an agent’s behavior. It maps the current state of the world to an action \pi(s)=a. Think of it this way: Let’s say my state is s=”hungry.” This state tends to be mapped to the action a=”go eat a sandwich.”
  • R(s,\pi(s)): Now we’re looking at the right-hand side of the equation. This is what’s called the reward function. It’s just a number in arbitrary units that tells us how good (or bad) it feels to be in state s and perform action \pi(s)=a.
    • Note: If we stopped here the equation above would look kind of stupid, right? The value of an action at a given state is equal to the reward from taking an action at a given state? Seems kind of redundant. Luckily it has more to say!
  • \gamma: The discount rate, where 0 \leq \gamma < 1. More on this later.
  • P(s'|s,\pi(s)): Think of s' as any state of the world after performing your action \pi(s)=a. Say you finish your sandwich; then your new state could be “I’m not hungry” or “I’m now dating Selena Gomez.” P(s'|s,\pi(s)) represents the probability of transitioning to that new state, given your current state and action.

With the meaning of these variables defined, we can think of the second term in the equation as an expected value. Specifically, the expected value of our future returns given our present behavior:

 V^\pi(s) = R(s,\pi(s)) + \gamma \mathbb{E}[V^\pi(S')|s,\pi]
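To make this concrete, here's a tiny MDP based on the "hungry"/"sandwich" example from the list above, with the value function computed by iterating the Bellman equation to its fixed point (policy evaluation). All the numbers are invented for illustration:

```python
import numpy as np

# A toy two-state MDP: states "hungry" and "full". The policy is fixed
# (eat when hungry, rest when full), so we only need the rewards and
# transition probabilities that this policy induces.
states = ["hungry", "full"]
R = np.array([-1.0, 2.0])        # R(s, pi(s)) for each state under the policy
P = np.array([[0.2, 0.8],        # P(s'|s, pi(s)) from "hungry": eating usually works
              [0.6, 0.4]])       # from "full": hunger tends to come back
gamma = 0.9                      # discount rate

# Policy evaluation: iterate the Bellman equation until V stops changing.
V = np.zeros(len(states))
for _ in range(10_000):
    V_next = R + gamma * P @ V   # the Bellman equation, vectorized over states
    done = np.max(np.abs(V_next - V)) < 1e-10
    V = V_next
    if done:
        break

for s, v in zip(states, V):
    print(f"V({s}) = {v:.3f}")
```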

Even though you won’t typically see the Bellman Equation unless you take some very specialized coursework in machine learning, I couldn’t help but feel a sense of familiarity with this one the first time I saw it. Then it hit me: I’d seen it in my economics classes! Every business student is familiar with discounted cash flow. In this case we’re not trying to calculate the net present value of an investment; we’re trying to calculate the net present value of an agent’s behavior.

The discount rate \gamma tells us to what degree we should value the outcome of our future behavior. With \gamma close to 0, an agent becomes hedonistic, concerned only with the current state of the world. With \gamma close to 1, the agent is willing to forgo present reward for future gain, but perhaps to a fault: who cares if I have a guaranteed way of becoming a billionaire 200 years from now? (Even at \gamma = 0.95 per year, a reward 200 years out is discounted by 0.95^{200} \approx 3.5 \times 10^{-5}.) The value of \gamma is up to us; it’s a matter of striking the right balance between the present and the future.

Net Present Value

Honestly, this equation doesn’t tickle me like the others do. It’s practical and it’s good to know, but it’s a little too cold-hearted for my tastes. There’s just no love in it. 😦 However, I still really want to talk about it as a lead-up to my next post, which will be about the Bellman Equation.

Before we dive deeper into the Bellman Equation, let’s look at how you would evaluate some investment s, where the best alternative investment returns 1+i per period:

 NPV(s) = \sum_{t=0}^{\infty} \frac{R_t}{(1+i)^{t}}

NPV(s) is the present value of some investment s, where R_t is the return (cash flow) the investment pays out t periods from now.

What this is trying to tell us is that we shouldn’t just look at the nominal returns of some asset; we should discount any future returns by how far out in the future we receive them. Think of it this way: A dollar today is more valuable than a dollar tomorrow. Why? We could invest that dollar right now and receive some (small) return. Or the dollar might not be as valuable tomorrow because of inflation. Or we could just straight up get hit by a comet before we get to enjoy the fruits of our newfound fortune! That’s what the discount factor \frac{1}{1+i} is trying to capture: Our value of the present over the value of the future.
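Here's a minimal sketch of the formula in Python; the investment numbers are made up:

```python
def npv(cash_flows, i):
    """Present value of a stream of returns, where cash_flows[t] = R_t is
    received t periods from now and i is the rate of the best alternative."""
    return sum(r / (1 + i) ** t for t, r in enumerate(cash_flows))

# Hypothetical investment: pay $100 now, receive $40 at the end of each
# of the next three years. At i = 5%, is it worth it?
print(round(npv([-100, 40, 40, 40], 0.05), 2))   # 8.93 > 0, so yes
```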

Now I’m going to ask you a question to help that discount rate sink in: How much would you pay for a million bucks in your bank account right now? That sounds kind of stupid, right? You’d pay any amount up to a million bucks. How about for that million a year down the road? Or 50 years down the road? Try to derive a value of \frac{1}{1+i} based on how much you would pay to receive that money in the future.
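If you want to turn your answer into a number, here's a hypothetical worked example (the $950,000 figure is just an assumed answer, not mine):

```python
# Suppose you'd pay $950,000 today for the million delivered one year from now.
pay_now, future_amount = 950_000, 1_000_000
discount_factor = pay_now / future_amount        # this is 1/(1+i)
i = future_amount / pay_now - 1
print(f"discount factor = {discount_factor:.3f}, implied i = {i:.1%}")
# -> discount factor = 0.950, implied i = 5.3%
```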