Consider the standard two-envelope paradox – there are two envelopes in front of you, and all you know is that one of them has twice as much money as the other. It seems that you should be indifferent to which envelope you choose to take. However, once you’ve taken an envelope and opened it, you’ll see some amount of money x there, and you can reason that the other envelope either has x/2 or 2x, which has an expected value of ~~1.5x~~ 1.25x, so you’ll wish you had taken the other.
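The naive switching calculation can be checked directly (a minimal sketch; the function name and the sample amount are mine):

```python
# Naive two-envelope reasoning: you hold x, and you (fallaciously) treat
# the other envelope as holding x/2 or 2x with equal probability.
def naive_expected_other(x):
    """Expected value of the other envelope under the 50/50 assumption."""
    return 0.5 * (x / 2) + 0.5 * (2 * x)

# For any x the naive expectation is 1.25x, which is why this line of
# reasoning tells you to switch no matter what amount you see.
print(naive_expected_other(100))  # 125.0
```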

Of course, this reasoning doesn’t really work, because of course you always know more than just that one envelope has twice as much money as the other, especially if you know who’s giving you the money. (If the amount I see is large enough, I’ll be fairly certain that this one is the larger of the two envelopes, and therefore be happy that I chose it rather than the other.)

But it’s also well-known that the paradox can be fixed up and made more paradoxical again by imposing a probability distribution on the amounts of money in the two envelopes. Let’s say that one envelope has 2^n dollars in it, and the other has 2^{n+1} dollars, where n is chosen by counting the number of tries it takes a biased coin to come up heads, where the probability of tails on any given flip is 2/3. Now, if you see that your envelope has 2 dollars, then you know the other one has 4 dollars, so you’d rather switch than stay. And if you see that your envelope has any amount other than 2, then (after working out the math) the expected value of the other one will be (I believe) 11/10 of the amount in your current envelope, so again you’d rather switch than stay.
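The 11/10 figure can be verified by computing the posterior over which envelope of the pair you are holding (a sketch under the setup above; the function names are mine):

```python
# One envelope holds 2^n, the other 2^(n+1), where n is the number of
# flips needed to get heads and each flip is tails with probability 2/3.
def p_n(k):
    """P(n = k): k-1 tails followed by one heads."""
    return (2/3) ** (k - 1) * (1/3)

def expected_other_over_current(m):
    """E[other envelope] / (amount seen), given you see 2^m with m >= 2.

    Seeing 2^m means either n = m (you hold the smaller envelope) or
    n = m - 1 (you hold the larger one); each envelope of a pair is
    equally likely to be picked, so those factors of 1/2 cancel.
    """
    w_small = p_n(m)       # you hold the smaller; the other has 2^(m+1)
    w_large = p_n(m - 1)   # you hold the larger; the other has 2^(m-1)
    total = w_small + w_large
    expected_other = (w_small / total) * 2 ** (m + 1) \
                   + (w_large / total) * 2 ** (m - 1)
    return expected_other / 2 ** m

# For every m >= 2 the ratio is 11/10, so switching always looks better.
print(expected_other_over_current(5))  # 1.1 (up to rounding)
```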

This is all fair enough – Dave Chalmers pointed out in his “The St. Petersburg Two-Envelope Problem” that there are other cases where A can be preferred to B conditional on the value of B, while B can be preferred to A conditional on the value of A, while unconditionally, one should be indifferent between A and B. This just means that we shouldn’t accept Savage’s “sure-thing principle”, which says that if there is a partition of possibilities, and you prefer A to B conditional on any element of the partition, then you should prefer A to B unconditionally. Of course, restricted versions of this principle hold, either when the partition is finite, or the payouts of the two actions are bounded, or one of the unconditional expected values is finite, or when the partition is fine enough that there is no uncertainty conditional on the partition (that is, when you’re talking about strict dominance rather than the sure-thing principle).

What I just noticed is that it’s trivial to come up with an example where we have the same pattern of conditional preferences, but there should be a strict unconditional preference for A over B. To see this, just consider this same example, where you know that the two envelopes are filled with the same pattern as above, but that 5% of the money has been taken out of envelope B. It seems clear that unconditionally one should prefer A to B, since it has the same probabilities of the same pre-tax amounts, and no tax. But once you know how much is in A, you should prefer B, because the 5% loss is smaller than the 10% difference in conditional expected values. And of course, the previous reasoning shows why, once you know how much is in B, you should prefer A.
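The arithmetic behind the strict-preference example is short (a minimal sketch; the 11/10 conditional ratio comes from working out the posterior in the coin setup, and the names are mine):

```python
# Envelope B is filled by the same process as A but then loses 5%.
TAX = 0.05
SWITCH_RATIO = 11 / 10  # conditional expected value of the other envelope

def expected_B_given_A(a):
    """Expected value of B after seeing amount a in A (for a >= 4)."""
    return (1 - TAX) * SWITCH_RATIO * a

# 0.95 * 1.1 = 1.045 > 1: even after the tax, B looks better once you
# have seen A, although unconditionally A clearly dominates B.
print(expected_B_given_A(100))  # about 104.5
```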

Violations of the sure-thing principle definitely feel weird, but I guess we just have to live with them if we’re going to allow decision theory to deal with infinitary cases.

Kevembuangga (03:00:19): LOL

Are you SURE that you are that much against “strong metaphysical claims”?

More seriously my point is, before trying to answer a question (any question!) is it not worthwhile to ponder HOW the question came about?

Ready made questions seem to me as suspicious as ready made answers.

P.S. For some strange reason posting a comment from an Opera 9.64/Linux browser doesn’t work, Javascript quirks I guess.

Badal Joshi (09:24:48): Shouldn’t the expected value in the last line of the first paragraph be 1.25x?

David Speyer (13:33:29): “Let’s say that one envelope has 2^n dollars in it, and the other has 2^{n+1} dollars, where n is chosen by counting the number of tries it takes a biased coin to come up heads, where the probability of tails on any given flip is 2/3.”

Why shouldn’t I just say that this hypothesis is impossible? The expected amount of money in the envelope is infinite. There is no way for you to credibly commit that you will put the money in the envelopes according to this rule.
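The divergence is easy to check numerically (a minimal sketch; the cutoffs are arbitrary choices of mine):

```python
# Partial sums of E[2^n] where P(n = k) = (1/3) * (2/3)^(k-1).
# Each term equals (1/3) * (2/3)^(k-1) * 2^k = (2/3) * (4/3)^(k-1),
# which grows geometrically, so the partial sums diverge.
def partial_expectation(cutoff):
    return sum((1/3) * (2/3) ** (k - 1) * 2 ** k
               for k in range(1, cutoff + 1))

for cutoff in (10, 20, 40):
    # The partial sums grow without bound as the cutoff increases.
    print(cutoff, partial_expectation(cutoff))
```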

Kenny (13:54:25): Hi David,

Of course I wouldn’t believe any person who claimed to be offering me such a game. But decision theory should be able to deal with any epistemic situation one might conceivably be in, and not just the ones that some human could offer you. It’s easier to describe these things in terms of someone offering you a bet or a gamble or whatever, but it’s really supposed to be something where some action you take is going to have consequences with this much utility, with such and such a probability, given your degrees of belief about how the world is. The payoffs wouldn’t be money, but rather complete ways everything you care about could go, measured in utility terms.

If you’re only ruling out distributions on the basis that their expectation is infinite, then you’d be fine with a distribution like this where the coin has any bias with probability < 1/2 of tails but the payoffs are the same, or a distribution like this where the payoffs are of size c^n where 1 < c < 3/2 but the bias is the same. It seems very strange to allow the same probabilities of different payoffs, and the same payoffs with different probabilities, but not this particular combination of payoffs and probabilities.

Another idea would be to say that no game with unbounded payoffs is possible. But then you allow every version of this game that is truncated at N flips, for any arbitrarily large integer N. Again, it seems quite strange to allow all of these, but not this limit game, which has extremely tiny probability of differing from any particular one of them, when N is large enough. The fact that there really do seem to be infinitely many distinct ways the world could turn out, and unboundedly large values that situations could have, suggests that we should be able to combine these into a description of some possible epistemic state.
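The tension between the truncated games and the limit game can be made concrete (a sketch; calling the game off when no heads appear within N flips is one truncation convention, chosen by me):

```python
# Truncate the game at N flips: if no heads by flip N, nothing is paid.
def p_game_differs(N):
    """Probability the truncated game differs from the limit game: P(n > N)."""
    return (2/3) ** N

def truncated_expectation(N):
    """Expected payoff 2^n restricted to the first N flips."""
    return sum((1/3) * (2/3) ** (k - 1) * 2 ** k for k in range(1, N + 1))

# Each truncated game has finite expectation, and the probability of any
# difference from the limit game vanishes as N grows -- yet the
# truncated expectations grow without bound.
print(p_game_differs(100))  # about 2.5e-18
print(truncated_expectation(100))
```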

Another idea would be to say that not only are unbounded payoffs in a particular game disallowed, but that in fact there is a uniform upper bound for the payoffs that are possible. I suspect that this would be the best way to go, but then we do end up with the slightly odd situation that there is some N such that nothing is N times as good as gaining a dollar. It's not obvious that there couldn't be such an N, but it does seem slightly strange.

Anyway, you're definitely right that these cases are odd and not likely to come up in any sort of application of decision theory. But since the theory is often considered to be an analysis of the notion of what one rationally ought to do in a given situation, it should apply to any conceivable situation, and not just the ones that we expect to run into in ordinary circumstances. I know this won't settle all your worries, but this is the sort of thing that motivates people to consider these strange infinitary scenarios. (Well, that together with the fact that it's fun to work with infinity.)

horus kemwer (00:05:30): Why isn’t this just one of these cases where utilities and monetary value come apart? The axioms of standard decision theory apply to *utilities*, not money. This paradox depends upon utility assignments tracking monetary values, but the fact of the matter is that money is subject to “diminishing marginal utility” like everything else. At some point we don’t switch envelopes because the amount in the first envelope is so valuable that the potential *monetary* benefit from switching envelopes does not surpass the sheer *utility* of staying with the received value. End of story – no puzzle or paradox at all.

This analysis extends to the stronger version presented here. At some point, the significance of the monetary difference between the two envelopes *even knowing that one has had 5% removed (or whatever)* is just swamped by the utility of discovering a sufficiently large amount in the first envelope opened. Past a certain point, even orders of magnitude difference mean nothing when it comes to a choice between different quantities of money. The 5% is just noise in the calculation, nothing more. If we assess the standard principles of decision theory w/r/t utility rather than money, there is simply no puzzle here.
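This line of thought can be illustrated with log utility as one concrete concave choice (my choice, not anything from the thread; with log utility the preference to switch in fact disappears at *every* amount, not just past a threshold):

```python
import math

# With log utility, the expected utility of switching, given you see
# 2^m (m >= 2), uses the same 2/5-vs-3/5 posterior as in the money case:
def utility_gain_from_switching(m):
    """E[log(other)] - log(current), given you see 2^m."""
    current = math.log(2 ** m)
    expected_other = (2/5) * math.log(2 ** (m + 1)) \
                   + (3/5) * math.log(2 ** (m - 1))
    return expected_other - current

# (2/5) * log 2 - (3/5) * log 2 = -(1/5) * log 2 < 0 for every m: in
# utility terms you prefer to keep your envelope, and the switching
# argument dissolves.
print(utility_gain_from_switching(5))  # about -0.1386
```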