Although I initially intended this blog to be mainly about philosophy of math and metaphysics, I suppose it was inevitable that I’d eventually mention probability here, given the paper I presented at FEW in Austin a month ago and will present again in a week and a half at the AAP in Sydney.
When I finished the first version of that paper in December, I had noticed some claims in van Fraassen’s “Fine-Grained Opinion, Probability, and the Logic of Full Belief” that I thought were a little odd. He suggests dealing with conditionalizing on events of probability 0 by having a well-ordered sequence of probability functions, and just switching to the first function in the sequence on which the conditionalizing event has non-zero probability. I seem to remember (the paper doesn’t seem to be online, and I left my photocopy in my office) that he also suggested this was a substantially better method than using infinitesimal probabilities to represent possible events that seem to have probability 0. But I convinced myself (and probably managed to prove) that a system using infinitesimal probabilities would in fact be equivalent to the one van Fraassen endorses, rather than worse. I was going to write this up, but Branden Fitelson mentioned to me that Vann McGee had already proven this, and said it was mentioned in Ernest Adams’ A Primer of Probability Logic.
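For concreteness, here is a rough sketch of that lexicographic recipe as I understand it, in a toy finite case (the particular outcomes and numbers are made up purely for illustration):

```python
# Sketch of lexicographic conditionalization: keep an ordered sequence of
# probability functions and, to conditionalize on an event E, use the first
# function in the sequence that gives E non-zero probability.

def lex_conditional(prob_sequence, event, outcome):
    """P(outcome | event), computed from an ordered list of probability functions.

    Each probability function is a dict mapping outcomes to probabilities.
    """
    for p in prob_sequence:
        p_event = sum(p.get(w, 0) for w in event)
        if p_event > 0:
            return (p.get(outcome, 0) if outcome in event else 0) / p_event
    raise ValueError("event has probability 0 in every function in the sequence")

# Toy example: a coin that might, with probability 0, land on its edge.
p0 = {"heads": 0.5, "tails": 0.5, "edge": 0.0}   # primary opinion
p1 = {"heads": 0.0, "tails": 0.0, "edge": 1.0}   # fallback, used only on learning "edge"

print(lex_conditional([p0, p1], {"heads", "tails"}, "heads"))  # 0.5, from p0
print(lex_conditional([p0, p1], {"edge"}, "edge"))             # 1.0, from p1
```

The infinitesimal approach instead builds the fallback opinions into a single nonstandard-valued function, which is part of why the two pictures struck me as equivalent.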
I finally got around to glancing through the relevant portions of that book a couple weeks ago, and found citations to the article “Learning the Impossible”, in Ellery Eells and Brian Skyrms (eds.), Probability and Conditionals. I read that article this afternoon, and saw that in fact McGee hadn’t proved the equivalence I had thought he had, but had instead shown that infinitesimals and Popper functions are the same!
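Roughly, and subject to my reading of the article, the correspondence is that a nonstandard-valued probability function $\mu^{*}$ determines a Popper-style conditional probability by taking standard parts of ratios:

$$
P(A \mid B) \;=\; \operatorname{st}\!\left(\frac{\mu^{*}(A \cap B)}{\mu^{*}(B)}\right), \qquad \text{whenever } \mu^{*}(B) > 0 \text{ (possibly infinitesimal)},
$$

and, if I’m reading McGee right, every Popper function can be represented in this way.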
So I was about to write up my proof and put it on my website, but I decided to search for van Fraassen’s article first, to remind myself just what it was I was arguing against there. I didn’t find the article itself on Google Scholar, but the first thing that came up citing it was “Lexicographic Probability, Conditional Probability, and Nonstandard Probability”, by Joseph Halpern, from the CS department at Cornell. I haven’t read the whole thing yet, but it looks like he proves all three systems of conditionalizing on probability 0 equivalent under the assumption of countable additivity, and shows which ones are more general when that assumption (or a few others) is dropped. So it looks like there’s no need for me to write anything up at all, which is too bad, I guess. But it looks like a good paper that all philosophers interested in this debate should look at.
Note that all three of these forms of conditionalizing are in general different from the solution I advocate in my paper linked above. I argue that Popper functions overgenerate conditional probabilities, just as Kolmogorov’s ratio analysis undergenerates, and suggest that some probabilities conditional on events of probability 0 should be defined only relative to a set of relevant alternatives to the condition, rather than absolutely as all these approaches require. Teddy Seidenfeld has pointed out in his joint paper “Improper Regular Conditional Distributions” that the method I advocate (initially proposed by Kolmogorov, in fact) runs into some problems in certain spaces. But I think that this just means that in some of these cases there are in fact no conditional probabilities, rather than that they are given by Popper functions.
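To illustrate the kind of relativity I have in mind (this is just the familiar Borel-Kolmogorov example, not one of the cases from my paper): take a point chosen uniformly on the surface of a sphere and condition on its lying on a particular great circle, an event of probability 0. The answer depends on which family of alternatives the circle is treated as belonging to:

$$
p(\text{latitude } \phi \mid \text{a given meridian}) \;\propto\; \cos\phi,
\qquad
p(\text{longitude } \theta \mid \text{the equator}) \;\propto\; 1.
$$

Relative to the partition into meridians, points of the circle near the equator are more probable than points near the poles; relative to the partition into circles of latitude, the distribution along the circle is uniform. Same sort of great circle, different conditional probabilities, depending on the set of relevant alternatives.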