Probabilistic Inference Barrier

21 01 2007

Using the methods of Russell and Restall’s paper on inference barriers, I will show that one can’t derive an “is” from a “probably”. That is, no consistent set of statements expressing only relations among the probabilities of statements expressible in a non-probabilistic object language can entail anything about the actual truth-values of those object language statements, unless the object language statements in question are tautologies or contradictions.

Let S be a (consistent) set of probabilistic statements, and let O be the (non-tautological) object language statement that is said to follow from S. To show that O does not follow from S, start with a probability space (a set of “states”, together with an algebra of subsets of this set called “events”, a real-valued function on this algebra satisfying the probability axioms, and a specification of which state is “actual”) satisfying all of S. Call this space P. Now create a space P’ by adding to the state space a single state X in which O is false. A subset of this new space is an event iff either it doesn’t contain X and was an event in P, or it does contain X and removing X gives an event in P. Define the probability function on these events by assigning to each event of P’ the probability that it (or it with X removed) had in P. Finally, let X be the “actual” state in the new space.
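To make the construction concrete, here’s a minimal sketch in Python for the finite case, where every subset of the state space counts as an event. The particular states, probabilities, and the role of O are just made-up placeholders, not part of the argument itself.

```python
from itertools import chain, combinations

def powerset(states):
    """All subsets of the state set (the event algebra, in the finite case)."""
    s = list(states)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def prob(event, state_probs):
    """An event's probability is the sum of its states' probabilities."""
    return sum(state_probs[s] for s in event)

# Original space P: hypothetical states with probabilities, plus an actual state.
P = {"s1": 0.7, "s2": 0.3}
P_actual = "s1"

# New space P': adjoin a single state X, carrying probability 0, and
# stipulate that X (a state in which O is false) is the actual state.
X = "X"
P_prime = dict(P, X=0.0)
P_prime_actual = X

# Every event keeps its old probability, whether or not X is adjoined to it,
# so every purely probabilistic statement gets the same truth-value as before.
for e in powerset(P):
    assert prob(e, P) == prob(e, P_prime) == prob(e | {X}, P_prime)
```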

By construction, every probabilistic statement will have the same truth-value in the two spaces, because every proposition has exactly the same probability (effectively, all we did was add a single state with 0 probability). In particular, all of S is true in P’. Also, O is false in P’, since the actual state of P’ is X, a state in which O is false. Thus, S does not entail O, QED.

One way to block this argument would be to require that every non-empty event have non-zero probability, but this would rule out a lot of interesting probability spaces (in particular, any space with uncountably many mutually incompatible events). However, a very similar argument would go through if one allowed some small tolerance of epsilon in the probabilistic statements of S (assuming none of the statements are conditional probability statements, whose values can in fact deviate by much more than epsilon when a single state of probability epsilon is added).
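To see why the conditional probability caveat is needed, here’s a toy calculation with made-up numbers: if the conditioning event itself has probability on the order of epsilon, a single added state of probability epsilon can move the conditional probability by about a half, not by epsilon.

```python
eps = 0.001

# Before adding the new state: B has probability eps, and A holds
# throughout B, so P(A|B) = 1.
p_B = eps
p_A_and_B = eps
print(p_A_and_B / p_B)            # 1.0

# After adding a single state of probability eps that lies in B but
# not in A (ignoring the negligible renormalization):
p_B_new = p_B + eps
p_A_and_B_new = p_A_and_B
print(p_A_and_B_new / p_B_new)    # 0.5 -- a shift of about 0.5, not eps
```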

But in some sense, this shows the weakness of these “inference barrier” results, which Gillian Russell points out should really be called “implication barriers”. Under certain conditions, it’s certainly rational to infer that it will rain, given that there’s a 99% chance of rain. This result merely shows that no amount of probabilistic evidence will ever entail anything with certainty, even though it might entail it with probability 1. The distinction between probability 1 and certainty is something I’m thinking about right now for my dissertation.





Frank P. Ramsey Appreciation Society

11 01 2007

I recently stumbled upon the FPRAS webpage through a fortuitously placed ad in Gmail. It’s good to know that there’s a society devoted to this important intellectual figure, but it’s a bit distressing that they have such poor web design sensibilities. Also, the only description it gives of the society suggests that it’s all about Ramsey Theory, ignoring his philosophical and economic contributions. Ramsey Theory is definitely very interesting stuff – on one level it basically says that if you’re looking at a big enough collection, then there’s bound to be some ordered substructure. (More precisely, for any positive integers n and k, there is an N such that any coloring of the edges of a complete graph on N vertices with at most k colors has some set of n vertices where all edges between them are the same color. For n = 3 and k = 2, the least such N is 6, so that if you have 6 people at a party, there are bound to be either 3 mutual acquaintances or 3 mutual strangers.)
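The party version is small enough to check by brute force; here’s a quick Python verification (my own illustration, nothing from the society’s page) that every 2-coloring of the edges of K_6 contains a monochromatic triangle.

```python
from itertools import combinations, product

vertices = range(6)
edges = list(combinations(vertices, 2))  # the 15 edges of K_6

def has_mono_triangle(coloring):
    """coloring maps each edge to 0 or 1; look for a one-color triangle."""
    for a, b, c in combinations(vertices, 3):
        if coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]:
            return True
    return False

# Check all 2^15 = 32768 colorings of the edges.
assert all(has_mono_triangle(dict(zip(edges, colors)))
           for colors in product([0, 1], repeat=len(edges)))
print("Every 2-coloring of K_6 has a monochromatic triangle.")
```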





The Las Vegas Paradox

8 01 2007

It seems safe to say that money (and basically any other good) has a generally diminishing marginal value. This is perhaps one of the biggest justifications for redistributive taxation, in which we take a bunch of money unequally from people and give it to people in some much more even distribution, as with Social Security and some other government programs. (Of course, most programs redistribute things unequally, but still often in a more equal way than the money was originally distributed.)

However, another sort of redistribution sometimes seems justified, and it suggests that the marginal value of money can’t be strictly decreasing. If we took a penny from everyone in the country, and gave the resulting $3,000,000 to one person at random, it seems that it would make that one person tremendously happy at basically no cost to anyone else. And sure enough, people voluntarily engage in this sort of activity all the time, in raffles, and (notably, especially since I just spent a week and a half visiting my parents at their new place in Las Vegas) in slot machines. In fact, in both cases, people willingly take part despite the fact that some of the money is siphoned off either for charity or to the rich people that own the casinos.
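To put some numbers on this (entirely made up: $50,000 starting wealth per person, and log utility as a stock example of strictly diminishing marginal value), the summed utility cost of 300,000,000 penny losses actually exceeds the single winner’s utility gain. That’s exactly the tension: if the raffle really is fine, then the marginal value of money can’t be strictly decreasing in the way log utility says.

```python
# Back-of-the-envelope check with made-up numbers.
import math

population = 300_000_000
wealth = 50_000.0
penny = 0.01
prize = population * penny          # $3,000,000

def u(w):
    return math.log(w)              # one stock example of strict concavity

loss_per_person = u(wealth) - u(wealth - penny)
total_loss = (population - 1) * loss_per_person
winner_gain = u(wealth - penny + prize) - u(wealth)

print(f"summed utility loss of the losers: {total_loss:.2f}")   # ~60
print(f"utility gain of the winner:        {winner_gain:.2f}")  # ~4.1
# So if marginal value were strictly decreasing (as log assumes), the
# raffle would lower total utility, despite seeming intuitively fine.
```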

Now, perhaps this behavior is just irrational (so that we shouldn’t derive any moral about the marginal utility of money from it). Or perhaps people get some other benefit from the transaction (like the feeling of doing good for a charity in the case of a raffle, or the excitement one gets from occasionally getting small payoffs that one promptly loses again in the slot machine). But at some level, the original game of taking a penny from everyone to give the entire amount to someone chosen at random just intuitively (at least to me) seems reasonable.

However, there may be some sort of argument that it isn’t reasonable. After all, if it was an improvement to overall utility to do that, then some sort of principle of additivity (which I’m willing to question for other reasons, however) would suggest that it would be good to do it multiple times. It’s unclear at what point it could go from being good to bad (maybe there’s a sorites in here?), so if it’s good to do it once, then it would be good to do it 300,000,000 times. But at that point, it seems that it could have a severe negative effect on total utility (if some people ended up losing $3,000,000 overall while some other lucky ones won it two or more times – a loss of $3,000,000 is clearly much worse to almost everyone than a gain of that much is good to anyone) or at best a neutral effect (if everyone wins exactly once). So either it was never good to begin with (which just seems implausible to me), or it switches from being good to being bad at some point (though it seems very hard to say where), or else we have to give up some sort of additivity for gambles, though it’s unclear just how.
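For the repeated case, here’s a very rough scaled-down simulation (1,000 people instead of 300,000,000, with the same made-up $50,000 wealth and log utility as above), running the penny lottery once per person: some people win several times, many never win, and total utility ends up slightly lower than it started. At this scale the stakes are only $10 each, so nothing as dramatic as a $3,000,000 loss shows up, but the direction of the effect does.

```python
# Scaled-down, made-up illustration: N people, N rounds of the penny lottery.
import math
import random

random.seed(0)
N = 1_000
penny = 0.01
wealth = [50_000.0] * N

for _ in range(N):
    pot = N * penny                      # everyone chips in a penny
    wealth = [w - penny for w in wealth]
    wealth[random.randrange(N)] += pot   # one random person takes the pot

before = N * math.log(50_000.0)
after = sum(math.log(w) for w in wealth)
print(f"change in total log-utility: {after - before:.3e}")  # small but negative
```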