A Stronger Two-Envelope Paradox

24 05 2009

Consider the standard two-envelope paradox – there are two envelopes in front of you, and all you know is that one of them has twice as much money as the other. It seems that you should be indifferent to which envelope you choose to take. However, once you’ve taken an envelope and opened it, you’ll see some amount x of money there, and you can reason that the other envelope has either 2x or x/2; treating these as equally likely gives an expected value of (1/2)(2x) + (1/2)(x/2) = 1.25x, so you’ll wish you had taken the other.

Of course, this reasoning doesn’t really work, because you always know more than just that one envelope has twice as much money as the other, especially if you know who’s giving you the money. (If the amount I see is large enough, I’ll be fairly certain that this one is the larger of the two envelopes, and therefore be happy that I chose it rather than the other.)

But it’s also well-known that the paradox can be fixed up and made more paradoxical again by imposing a probability distribution on the amounts of money in the two envelopes. Let’s say that one envelope has 2^n dollars in it, and the other has 2^{n+1} dollars, where n is the number of flips it takes a biased coin to come up heads, the probability of tails on any given flip being 2/3. Now, if you see that your envelope has 2 dollars, then you know the other one has 4 dollars, so you’d rather switch than stay. And if you see that your envelope has any amount other than 2, then (after working out the math – see the sketch below) the expected value of the other one comes out to 11/10 of the amount in your current envelope, so again you’d rather switch than stay.
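
Here is the conditional-expectation calculation, as a minimal Python sketch assuming exactly the setup just described (the Fraction arithmetic keeps everything exact):

    from fractions import Fraction

    # Setup from above: P(tails) = 2/3, so P(n = k) = (2/3)**(k-1) * (1/3) for k >= 1,
    # and the two envelopes hold 2**n and 2**(n+1) dollars, assigned to you at random.
    p_tails = Fraction(2, 3)

    def p_n(k):
        """Probability that the coin takes exactly k flips to come up heads."""
        return p_tails ** (k - 1) * (1 - p_tails)

    def other_over_mine(k):
        """E[other envelope | mine holds 2**k dollars] divided by 2**k, for k >= 2."""
        w_small = p_n(k)       # mine is the smaller envelope, so n = k
        w_large = p_n(k - 1)   # mine is the larger envelope, so n = k - 1
        expected_other = (w_small * 2 ** (k + 1) + w_large * 2 ** (k - 1)) / (w_small + w_large)
        return expected_other / 2 ** k

    print([other_over_mine(k) for k in range(2, 7)])   # Fraction(11, 10) every time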

This is all fair enough – Dave Chalmers pointed out in his “The St. Petersburg Two-Envelope Paradox” that there are other cases where A can be preferred to B conditional on the value of B, and B can be preferred to A conditional on the value of A, even though unconditionally one should be indifferent between A and B. This just means that we shouldn’t accept Savage’s “sure-thing principle”, which says that if there is a partition of possibilities, and you prefer A to B conditional on every element of the partition, then you should prefer A to B unconditionally. Of course, restricted versions of this principle hold: when the partition is finite, when the payouts of the two actions are bounded, when one of the unconditional expected values is finite, or when the partition is fine enough that there is no uncertainty conditional on its elements (that is, when you’re talking about strict dominance rather than the sure-thing principle).
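
In expected-utility terms (a rough formalization of the principle, not Savage’s own axiomatic statement), the inference at issue is just an application of the law of total expectation over the partition {E_i}:

    \[
    E[u(A)] \;=\; \sum_i P(E_i)\, E[u(A) \mid E_i] \;\ge\; \sum_i P(E_i)\, E[u(B) \mid E_i] \;=\; E[u(B)],
    \]

and that step is only licensed when the sums on both sides are well-defined – which is exactly what fails in the St. Petersburg-style cases.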

What I just noticed is that it’s trivial to come up with an example where we have the same pattern of conditional preferences, but there should be a strict unconditional preference for A over B. To see this, just consider this same example, where you know that the two envelopes are filled with the same pattern as above, but that 5% of the money has been taken out of envelope B. It seems clear that unconditionally one should prefer A to B, since it has the same probabilities of the same pre-tax amounts, and no tax. But once you know how much is in A, you should prefer B, because the 5% loss is smaller than the 10% difference in expected values. And of course, the previous reasoning shows why, once you know how much is in B, you should prefer A.
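
As a quick arithmetic check on that comparison (a sketch continuing the exact calculation above, treating the 5% figure as a hypothetical tax rate):

    from fractions import Fraction

    # Conditional on envelope A holding 2**k dollars, the untaxed expectation of the other
    # envelope is (11/10) * 2**k, so the 5%-taxed envelope B is expected to hold:
    taxed_ratio = Fraction(95, 100) * Fraction(11, 10)
    print(taxed_ratio)   # 209/200, i.e. 1.045 * 2**k - still more than the 2**k sitting in A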

Violations of the sure-thing principle definitely feel weird, but I guess we just have to live with them if we’re going to allow decision theory to deal with infinitary cases.




Betting Odds and Credences

17 08 2007

I was just reading the interesting paper When Betting Odds and Credences Come Apart, by Darren Bradley and Hannes Leitgeb, at least in part because of some issues that are coming up in my dissertation about the relations between bets and credences. Their paper is a response to a paper by Chris Hitchcock arguing for the 1/3 answer in the Sleeping Beauty problem, where he shows that if Beauty bets as if her credences were anything other than 1/3, then she is susceptible to a Dutch book.

They end up agreeing that she should bet as if her credences were 1/3, but they argue that this doesn’t mean that her credences should actually be 1/3, because of some similarities this case has to other cases where betting odds and credences come apart. I know at least Darren supports (or has supported) the 1/2 answer in the Sleeping Beauty case, so he’s got a reason to argue for this position.

I think in the end, though, their paper has convinced me of the opposite – the correct thing to do in this situation is to bet as if one’s credence is 1/2, even though one’s credence should actually be 1/3! I get the 1/3 credence argument from a bunch of sources (especially Mike Titelbaum’s work on the topic). As for betting as if one’s credence is 1/2, I might be using the term “bet” in a somewhat non-standard way; however, I think my usage is inspired by my attempt to resist some of the claims of Bradley and Leitgeb.

They give some examples of other cases in which it might look as if one should bet at different odds than one’s credences. For instance, if one is offered a bet on a coin coming up heads, but knows that this bet will only be offered if the coin has actually come up tails, then it looks as if one should bet at odds different from one’s credences. However, they agree that in this case one’s credences change as soon as the bet is offered, and one should bet at odds equal to the new credences.

Their next example is very similar, but without the shift in credences. One is offered a bet on a coin coming up heads, but knows that if the coin actually came up heads then the bet is carried out with fake money (indistinguishably replacing the real money in your and the bookie’s pockets) and is real if the coin actually came up tails. In this case, it looks like one should bet at odds different from one’s credences, which should still be 1/2.

However, I think that in this case what’s going on is that one isn’t really being offered a proper bet on heads at odds of 1/2. Functionally speaking, the money transfer involved will be like a bet on heads at odds of 1. It might be described as a bet at different odds, but I think bets should be individuated in some sort of functionalist way here, rather than according to their description in this sense. Thus, since one’s credence in heads is less than 1, one shouldn’t accept this bet.
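
A quick way to see the functional point (a sketch with hypothetical $10 stakes, not the authors’ own numbers):

    # A nominal even-odds $10 bet on heads, where the money is fake (so nothing really
    # changes hands) if the coin actually landed heads.
    def real_expected_value(credence_in_heads):
        # Heads: the "win" is in fake money, so no real transfer.  Tails: you really lose $10.
        return credence_in_heads * 0 + (1 - credence_in_heads) * (-10)

    print(real_expected_value(0.5))   # -5.0: negative for any credence in heads short of 1,
                                      # which is why it functions like a bet on heads at odds of 1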

Bradley and Leitgeb then say that what goes on in Hitchcock’s set-up of the Sleeping Beauty bets is similar. The bet will be made twice if the coin comes up tails (because Beauty and the bookie both forget the Monday bet), and thus this is a situation like the one with the bet that might turn out to be with pretend money, but in the opposite direction. Thus, this bet ends up being one that costs the agent $20 if the coin comes up heads, and wins her $20 if it comes up tails, so it’s functionally a bet at odds of 1/2. I think this is the set of bets she should be willing to accept, but that her credence in heads should be 1/3, so her betting odds and credences should come apart.
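
Here is the bookkeeping behind those totals (a sketch with one natural choice of per-awakening stakes, chosen to be fair for someone whose credence in heads is 1/3 – not necessarily Hitchcock’s exact numbers):

    # Per awakening, suppose Beauty loses $20 on heads and wins $10 on tails; at credence
    # 1/3 in heads this has expected value (1/3)(-20) + (2/3)(10) = 0, i.e. a fair bet.
    # She is awakened (and bets) once if heads, twice if tails.
    per_awakening = {"heads": -20, "tails": +10}
    awakenings = {"heads": 1, "tails": 2}
    totals = {outcome: per_awakening[outcome] * awakenings[outcome] for outcome in per_awakening}
    print(totals)   # {'heads': -20, 'tails': 20}: functionally an even-odds $20 bet on tails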

Of course, there may be a slight difference between the situations. In this version of the Sleeping Beauty bets, the bet gets made twice if the coin comes up tails, rather than paying off double. Perhaps the fact that it’s agreed to multiple times doesn’t make the same difference that having money replaced by something twice as valuable would. If so, then this bet really was properly described as a bet at odds of 1/3, so that I would no longer think that this is an example where betting odds and credences should come apart.

So I think I don’t really accept the particular claims that Bradley and Leitgeb make in this paper, but it’s only because I’m trying to do something subtle about how to individuate bets in functional terms. I’m sure there are good examples out there on which betting odds and credences could rationally come apart, but I’m not convinced that the Sleeping Beauty case is one of them.





Back from Australia

5 07 2007

I’m back from spending three weeks in Australia again – as usual, it was a very productive trip. It was also nice to get to attend the workshops on Norms and Analysis and on Probability that went on last week. There were a lot of interesting talks there, so I won’t go through very many of them. Overall, I think the most interesting was Peter Railton’s talk in the first workshop, where he seemed to be supporting a framework for metaethics and reasons that is broadly compatible with the framework of decision theory. However, he brought in lots of empirical work in psychology to show that for both degree of belief and degree of desire, there seem to be two distinct systems at work – one that more immediately regulates behavior, and another that is more responsive to feedback and generally regulates the first. It reminded me somewhat of what Daniel Kahneman was talking about in a lecture here at Berkeley a few months ago. But not being an expert in any of this stuff, I can’t say too much more than that.

Another particularly thought-provoking talk was Roy Sorensen’s in the Norms and Analysis workshop. He presented a situation in which you are the detective in a library. You just saw Tom steal a book, so you know that he’s guilty. However, before you punish him, the defense presents an envelope that may either contain nothing, or may contain exculpatory evidence (something like, “Tom has an identical twin brother in town”, or “The librarians have done a count and it seems that no books are missing”, which would make you give up your belief that Tom was guilty). Given that you know Tom is guilty, should you open the envelope or not? On the one hand, it seems you should, because you should make maximally informed decisions. On the other hand, it seems you shouldn’t, because either the envelope contains nothing, or it contains information you know is misleading, and in either case it’s no good.

Sorensen was arguing that you shouldn’t open the envelope, but I don’t think he succeeded in convincing any of the audience. But I think the puzzle sheds interesting light on what it takes to know that evidence is misleading, and how apparent evidence or the lack thereof really plays out when you know other background facts about where the evidence is coming from.





The Las Vegas Paradox

8 01 2007

It seems safe to say that money (and basically any other good) has generally diminishing marginal value. This is perhaps one of the biggest justifications for redistributive taxation, in which we take a bunch of money unequally from people and give it to people in some much more even distribution, as with social security and some other government programs. (Of course, most programs redistribute things unequally, but still often in a more equal way than the money was originally distributed.)

However, another sort of redistribution sometimes seems justified, and it suggests that the marginal value of money can’t be strictly decreasing. If we took a penny from everyone in the country, and gave the resulting $3,000,000 to one person at random, it seems that it would make that one person tremendously happy at basically no cost to anyone else. And sure enough, people voluntarily engage in this sort of activity all the time, in raffles, and (notably, especially since I just spent a week and a half visiting my parents at their new place in Las Vegas) in slot machines. In fact, in both cases, people willingly take part despite the fact that some of the money is siphoned off either for charity or to the rich people that own the casinos.

Now, perhaps this behavior is just irrational (so that we shouldn’t derive any moral about the marginal utility of money from it). Or perhaps people get some other benefit from the transaction (like the feeling of doing good for a charity in the case of a raffle, or the excitement one gets from occasionally getting small payoffs that one promptly loses again in the slot machine). But at some level, the original game of taking a penny from everyone to give the entire amount to someone chosen at random just intuitively (at least to me) seems reasonable.

However, there may be some sort of argument that it isn’t reasonable. After all, if it were an improvement to overall utility to do that, then some sort of principle of additivity (which I’m willing to question for other reasons, however) would suggest that it would be good to do it multiple times. It’s unclear at what point it could go from being good to bad (maybe there’s a sorites in here?), so if it’s good to do it once, then it would be good to do it 300,000,000 times. But at that point, it seems that it could have a severe negative effect on total utility (if some people ended up losing $3,000,000 overall while some other lucky ones won it two or more times – a loss of $3,000,000 is clearly much worse to almost everyone than a gain of that much is good to anyone) or at best a neutral effect (if everyone wins exactly once). So either it was never good to begin with (which just seems implausible to me), or it switches from being good to being bad at some point (though it seems very hard to say where), or else we have to give up some sort of additivity for gambles, though it’s unclear just how.
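
To put rough numbers on that worry (a sketch assuming a population of 300 million and independent draws, so each person’s number of wins is approximately Poisson with mean 1):

    import math

    # After 300,000,000 rounds of the penny-for-a-jackpot transfer, everyone has paid
    # $3,000,000 in pennies, and each person's number of $3,000,000 jackpots is roughly
    # Poisson(1).  Net outcome for someone with w wins: 3,000,000 * (w - 1).
    for wins in range(4):
        prob = math.exp(-1) / math.factorial(wins)
        net = 3_000_000 * (wins - 1)
        print(f"{wins} wins: probability {prob:.2f}, net ${net:+,}")
    # Roughly 37% of people end up down the full $3,000,000, which, given diminishing
    # marginal value, plausibly outweighs the gains of the lucky repeat winners.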





Texas Decision Theory

24 10 2006

I was in Austin a couple weeks ago for the second Texas Decision Theory Workshop, which was a lot of fun. It was a fairly small group, and some interesting topics I didn’t know much about were discussed. In particular, there was a lot of discussion (primarily by Sahotra Sarkar and Carl Wagner) about decision making with imprecise probabilities. There was also a lot of discussion of multiple-criteria decision making. My friend Alex Moffett discussed the impossibility theorems of Arrow, and Gibbard and Satterthwaite – he mentioned an analogy between multiple-agent decision making (as in the traditional presentations of these theorems) and multiple-criteria decision making, suggesting that in this context at least, the “independence of irrelevant alternatives” criterion really is important. And Mike Titelbaum presented some of his work on generalized versions of conditionalization as constraints on rational agents even under forgetting.

I talked about some of the things I discussed earlier this summer, but with more of a worked-out formalism for describing the decision apparatus (and constructing stronger decision theories out of weaker ones). I was surprised to see that some of this formalism that I developed for infinitary cases seems to resemble some of the formalisms for imprecise probabilities! I’ll have to look into this more to see how they really connect.





Dominance and Decisions

21 07 2006

I’ve finally posted slides from the talk that I gave at Stanford and a couple times in Australia in the last couple months. (Each of the talks was somewhat shorter than this whole set of slides – I’ve combined them all here.) I discuss some ideas about putting decision theory on new foundations in order to better deal with some problematic cases due to Alan Hájek and others, and in the process get a slightly more unified account of the Two-Envelope Paradox and some others. Of course, my theory’s not fully worked out yet, so comments and criticism are certainly welcome!





Melbourne Visit

18 06 2006

Like Richard before me (and myself last year), I had a nice visit in Melbourne. Unfortunately, it was fairly short because the tickets were more expensive at other times. It’s amazing how helpful it can be to explain your ideas to someone who isn’t working immediately in the same field – I got some useful ideas from my conversations with Greg and Zach that I spent some time writing up yesterday. In some sense they’re just points about how to present some of the ideas, but the right way to present and link ideas is certainly an extremely large part of the advances in most good work (if not 90% of the progress).

Anyway, so that this post has some slight amount of content itself, here is a link my boyfriend sent me: a talk by psychologist Daniel Gilbert on decision theory, and how people are often bad at estimating both probabilities and utilities. I find it particularly interesting because I’m talking on Tuesday about decision theory here in Canberra (I’ll be repeating it at the AAP in a couple of weeks, and I gave a version a few weeks ago at Stanford as well). But also, it’s interesting that someone could be talking about this stuff to a general audience at South by Southwest (which apparently is much more than just a music festival).