Graph all the things


A coworker posed this problem:

Suppose you're at a casino with $100, and need $200 to get home. The casino only supports one style of betting: you can put any amount of money on the table, and with 30% chance they double it, otherwise taking it away. What should your strategy be to maximize your chance of getting home?

Starting from $100, you'd think that you should just take the obvious 30% chance to double your money. And you'd be right. But why? I wasn't able to come up with any simple proof other than solving the general case, which I found surprisingly interesting.
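To see the gap concretely, here's a quick Monte Carlo sketch (the helper names and the timid $10-a-bet comparison strategy are my own additions for contrast; the casino's win chance is 30%):

```python
import random

random.seed(0)
ALPHA = 0.3   # the casino's 30% chance of doubling your bet

def got_home(bet_fn, bankroll=100, goal=200, max_bets=100_000):
    """Play one session; bet_fn maps the current bankroll to a bet size."""
    bets = 0
    while 0 < bankroll < goal and bets < max_bets:
        # never bet more than you have, or more than you need
        b = min(bet_fn(bankroll), bankroll, goal - bankroll)
        bankroll += b if random.random() < ALPHA else -b
        bets += 1
    return bankroll >= goal

def win_rate(bet_fn, trials=100_000):
    return sum(got_home(bet_fn) for _ in range(trials)) / trials

bold = win_rate(lambda x: x)    # stake everything each time
timid = win_rate(lambda x: 10)  # fixed $10 bets, for contrast
print(bold)    # close to 0.30
print(timid)   # tiny: the house edge grinds small bets down
```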

To study this, let's rephrase the problem so that you have x money, 0 < x < 1, and want to reach 1 money. Each bet has chance \alpha of being doubled, and is otherwise lost. Denote by b(x) the amount of money your strategy bets when you have x money, and suppose that following this strategy gives you a p(x) chance of winning when you have x money.

Here's an interactive graph of your probability of winning, playing by the optimal strategy.



Numerical Solution

I first solved the problem numerically, starting from an obvious lower bound on p(x):

p_0(x) = \begin{cases}0,&x < 1 \\ 1,&x = 1\end{cases}
and iteratively updating a lower bound p_{n+1}(x) by choosing the optimal b_{n+1}(x) based on p_n(x). In particular, at iteration n, update
b_{n+1}(x) = \text{argmax}_{0 \le b \le \min(x,\,1-x)}\,\alpha p_n(x + b) + (1-\alpha)p_n(x - b)
p_{n+1}(x) = \alpha p_n(x + b_{n+1}(x)) + (1-\alpha) p_n(x-b_{n+1}(x))

One can prove inductively that p_n(x) \le p(x).
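The iteration can be sketched on a dyadic grid (a minimal version; the resolution K, the iteration cap, and the convergence tolerance are my own arbitrary choices, and bets are restricted to grid multiples):

```python
ALPHA = 0.3   # chance the casino doubles your bet
K = 7         # resolution: money amounts are multiples of 2**-K
N = 2 ** K

# p[i] is the current lower bound p_n(i / N); start from p_0.
p = [0.0] * N + [1.0]

for _ in range(200):
    # one update: choose the best bet b against the current p_n
    new_p = p[:]
    for i in range(1, N):
        for b in range(1, min(i, N - i) + 1):  # keep both outcomes in [0, 1]
            new_p[i] = max(new_p[i], ALPHA * p[i + b] + (1 - ALPHA) * p[i - b])
    if max(abs(u - v) for u, v in zip(new_p, p)) < 1e-12:
        p = new_p
        break
    p = new_p

# p(1/2) converges to ALPHA: from 1/2, betting everything wins with chance ALPHA
print(p[N // 2], p[3 * N // 4])
```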

I was expecting something boring - that you should always bet as much as you can until you reach x = 1/2, then bet exactly as much as you need to reach 1 money - but I was wrong. Instead, the iteration converged to these interesting fractal shapes for b(x) and p(x):

A locally optimal strategy is globally optimal

I'll prove that the following local optimality condition is necessary and sufficient for a strategy to be globally optimal:

A strategy with bets b(x) and win probabilities p(x) is a globally optimal strategy (no strategy with win probabilities \widetilde{p}(x) has \widetilde{p}(x) > p(x) for any x) if and only if there exist no x_0 and b_0 such that betting b_0 with x_0 money, and otherwise following the strategy, gives a higher chance of winning than betting b(x_0).

To prove it is necessary, suppose that there were an x_0 and b_0 such that betting b_0 with x_0 money gave a higher chance of winning than betting b(x_0). Then clearly the strategy

\widetilde{b}(x) = \begin{cases}b_0,&x=x_0 \\ b(x),&x\ne x_0\end{cases}
outperforms b(x), since it will give higher chance of winning at x_0, and potentially at any other x satisfying x \pm b(x) = x_0, etc. This proves that our condition is necessary for any optimal strategy.

To prove that it is sufficient, suppose for the sake of contradiction that there were another strategy \widetilde{b}(x), with corresponding \widetilde{p}(x), such that \widetilde{p}(x_0) > p(x_0) for some x_0. Let \hat{b}(x) denote the composite strategy that follows b(x) wherever p(x) > \widetilde{p}(x) and \widetilde{b}(x) otherwise. This composite strategy satisfies \hat{p}(x) \ge p(x) at every x, and \hat{p}(x_0) > p(x_0).

Now consider the set of all x such that \hat{b}(x) \ne b(x), and suppose for the sake of contradiction that they all satisfy \alpha p(x + \hat{b}(x)) + (1-\alpha)p(x - \hat{b}(x)) \le p(x). Decreasing our chance of winning at such an x could only decrease it elsewhere too, so switching any subset of these points from b(x) to \hat{b}(x) must still yield a strategy no better than b(x). But this violates our assumption that \hat{p}(x_0) > p(x_0), so there must exist some x_1 whose bet \hat{b}(x_1) gives a chance of winning \alpha p(x_1 + \hat{b}(x_1)) + (1-\alpha)p(x_1 - \hat{b}(x_1)) > p(x_1). This contradicts local optimality of b(x), so any locally optimal strategy must also be a globally optimal one.

Exact Solution

I'll show that on the subset of points x with a terminating sequence of binary digits x = \overline{a_0.a_1a_2a_3\ldots a_k} (where a_0 = 0 since x < 1, and a_k = 1 so the expansion has no shorter form), an optimal solution is
b(x) = 2^{-k}
p(x) = \sum_{i=1}^ka_i\alpha^{\sum_{j=0}^{i-1}(1-a_j)}(1-\alpha)^{\sum_{j=0}^{i-1}a_j}
That is, each 1-digit a_i contributes a term with one factor per earlier digit: \alpha for each earlier 0 (counting a_0) and 1-\alpha for each earlier 1.
All I have to do is show that this b(x) is consistent with p(x) and that b(x) is locally optimal, implying global optimality.

To prove that b(x) is consistent with p(x), choose r such that the last r digits a_{k - r + 1}, \ldots, a_k of x are 1's, and a_{k-r} is a 0 (counting a_0 = 0, such an r always exists). Then betting 2^{-k} will give us an \alpha chance of having \overline{0.a_1\ldots a_{k-r-1}1} money if we win and a 1-\alpha chance of having \overline{0.a_1\ldots a_{k-1}} money if we lose. Our probability p(x) is then consistent with our bet b(x) if \alpha p(x + b(x)) + (1-\alpha) p(x - b(x)) = p(x). Plugging in this p and b, and letting q = \sum_{i=1}^{k-r-1}a_i\alpha^{\sum_{j=0}^{i-1}(1-a_j)}(1-\alpha)^{\sum_{j=0}^{i-1}a_j} and \beta = \alpha^{\sum_{j=0}^{k-r-1}(1-a_j)}(1-\alpha)^{\sum_{j=0}^{k-r-1}a_j}, we get

\alpha p(x + b(x)) + (1-\alpha) p(x - b(x)) = \alpha(q + \beta) + (1-\alpha)\left(q + \beta\sum_{j=0}^{r-2}\alpha(1-\alpha)^{j}\right)
\alpha p(x + b(x)) + (1-\alpha) p(x - b(x)) = q + \beta\left(\alpha + \sum_{j=0}^{r-2}\alpha(1-\alpha)^{j + 1}\right) = q + \beta\sum_{j=0}^{r-1}\alpha(1-\alpha)^{j}
\alpha p(x + b(x)) + (1-\alpha) p(x - b(x)) = p(x)
This shows that our betting strategy b(x) is consistent with p(x).
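As a sanity check on this consistency argument, here's a sketch in exact rational arithmetic (my own verification, not part of the proof; the digit formula weights each 1-digit by the digits strictly before it, counting the leading a_0 = 0 as a zero, which is the convention the q and \beta computation uses):

```python
from fractions import Fraction

ALPHA = Fraction(3, 10)   # exact arithmetic, so the identity can be checked exactly

def p_exact(x, k):
    """p(x) for x = m / 2**k via the digit formula: each 1-digit contributes
    alpha^(zeros before it) * (1 - alpha)^(ones before it), where the
    leading digit a_0 = 0 counts as a zero."""
    if x >= 1:
        return Fraction(1)
    total, zeros, ones = Fraction(0), 1, 0   # a_0 = 0 counts as one zero
    for _ in range(k):                       # peel off binary digits of x
        x *= 2
        if x >= 1:
            x -= 1
            total += ALPHA ** zeros * (1 - ALPHA) ** ones
            ones += 1
        else:
            zeros += 1
    return total

# For every x = m / 2**k whose last binary digit is 1 (m odd), betting
# b(x) = 2**-k should satisfy alpha*p(x+b) + (1-alpha)*p(x-b) = p(x).
k = 6
b = Fraction(1, 2 ** k)
for m in range(1, 2 ** k, 2):
    x = Fraction(m, 2 ** k)
    lhs = ALPHA * p_exact(x + b, k) + (1 - ALPHA) * p_exact(x - b, k)
    assert lhs == p_exact(x, k)
print("consistency holds at", 2 ** (k - 1), "points")
```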

To prove that the strategy is locally optimal, I'll consider any strategy played on amounts of money that terminate in binary, and show that any bet can only be worse than b(x) = 2^{-k}, given p(x). Consider any bet b at x, and write \overline{a_0.a_1\ldots a_mc_{m + 1}c_{m+2}\ldots c_k} = x + b and \overline{a_0.a_1\ldots a_md_{m+1}d_{m+2}\ldots d_k} = x - b, where the (m+1)st digit is the first one that differs; necessarily c_{m+1} = 1 and d_{m+1} = 0.

I'll skip the details, but one can prove that in any of the following cases, the bet b is no better than the smaller bet b - 2^{-(m+2)}, which brings the first m+2 digits of the two outcomes into closer agreement:

  • If c_{m+2} = d_{m+2} = 1.
  • If c_{m+2} = d_{m+2} = 0.
  • If c_{m+2} = 1 and d_{m+2} = 0.

Additionally, the outcomes \overline{a_0.a_1\ldots1000\ldots0} and \overline{a_0.a_1\ldots0111\ldots1} are at least as good as \overline{a_0.a_1\ldots1000\ldots1} and \overline{a_0.a_1\ldots0111\ldots0} on average. Applying all these rules implies that no bet is better than 2^{-k}, where the amount of money x has k binary digits after the binary point.

Using the previous result that a locally optimal strategy is globally optimal, this strategy is globally optimal.
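As a final cross-check of the whole chain (again my own verification, not part of the proof), one can evaluate the digit formula on a dyadic grid in exact rational arithmetic and confirm the local optimality condition at every grid point, i.e. that no grid bet beats p(x):

```python
from fractions import Fraction

ALPHA = Fraction(3, 10)
K = 7                     # check every x = i / 2**K
N = 2 ** K

def p_digits(i):
    """Digit-formula value at x = i / N: each 1-bit of i (reading x's binary
    expansion most significant digit first) contributes
    alpha^zeros * (1 - alpha)^ones over the digits before it,
    counting the leading a_0 = 0 as a zero."""
    total, zeros, ones = Fraction(0), 1, 0
    for pos in range(K - 1, -1, -1):          # most significant digit first
        if (i >> pos) & 1:
            total += ALPHA ** zeros * (1 - ALPHA) ** ones
            ones += 1
        else:
            zeros += 1
    return total

p = [p_digits(i) for i in range(N)] + [Fraction(1)]

# Local optimality on the grid: no bet beats the claimed value,
# and some bet attains it, so the max over bets equals p exactly.
for i in range(1, N):
    best = max(ALPHA * p[i + b] + (1 - ALPHA) * p[i - b]
               for b in range(1, min(i, N - i) + 1))
    assert best == p[i]
print("locally optimal at all", N - 1, "grid points")
```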