I had a long conversation with Mike Titelbaum yesterday, largely about Hilary Greaves’ manuscript, “Probability in the Everett Interpretation”. I think it’s a very interesting paper, trying to use a deterministic interpretation of quantum mechanics (essentially a many-worlds interpretation) and ordinary principles of decision theory to show that a rational agent will always act as if her credences matched the probabilities recommended by the Born rule, so that there’s no need for objective chance. In talking about this, I was trying to explain to Mike why I’ve always been suspicious of objective chances, because they imply some sort of fishy metaphysics, like that of the Copenhagen interpretation of quantum mechanics.
Mike’s been reading Lewis’ “A Subjectivist’s Guide to Objective Chance”, where Lewis suggests that the Principal Principle (basically, the rule that you should proportion your credences according to the chances) gives him almost his complete understanding of chance. So Mike suggested that theories that postulate chances might instead be interpreted as just directly specifying credences for an agent to have. A very simple such theory tells me to always have credence 1/2 in heads when I’m flipping an ordinary coin. Although such a theory isn’t as good as a deterministic theory in always telling me to believe the truth, it does at least guarantee that I can’t get Dutch booked (because I use a probability function), and in addition that my credences tend to match the long-run frequencies, at least with probability 1.
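The “matches the long-run frequencies with probability 1” claim is just the strong law of large numbers applied to the coin case, and it’s easy to illustrate with a simulation (the numbers here are arbitrary choices of mine, not from the discussion):

```python
import random

random.seed(0)

# The simple chance-like theory: always have credence 1/2 in heads.
credence = 0.5

# Simulate many flips of a fair coin and track the empirical frequency.
n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
frequency = heads / n

# By the strong law of large numbers, frequency -> 1/2 with probability 1,
# so the recommended credence and the observed frequency nearly coincide.
print(abs(frequency - credence))
```

Of course, as the next paragraph notes, the “probability 1” here is computed using the very probability measure the theory recommends, so the simulation illustrates the claim rather than justifying it.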
But, I pointed out, this “probability 1” is only the probability according to the theory, which is exactly what we’re trying to justify here. So it’s unclear exactly what makes these credences good ones to have (this is basically what we were trying to puzzle out from Hilary Greaves’ paper, which I haven’t fully read). There’s also another weirdness in these theories.
Most scientific theories affect our beliefs just by telling us what is true. On this view, a theory with chances affects our beliefs by telling us how strongly to believe things, without saying anything about what’s true. You can say that it says “the coin has 1/2 chance of heads” is true, but this is a purely theoretical statement since it involves chances – just as a delta function fills the place of a function but isn’t one, “chance” fills the place of a noun but doesn’t refer to anything. What we have instead are rules of use: we know how to calculate integrals involving a delta function, and we know how to adjust our beliefs when “believing” a sentence with “chance” in it. In this sense, a theory involving chances doesn’t make metaphysical claims the way ordinary scientific theories do – ordinary theories try to tell us what’s true (and thereby what to believe), while the chance-based theory just tells us what to believe, without going through the intermediary of truth.
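The delta-function analogy can be made concrete. The delta “function” is characterized not by its values but by a rule for evaluating expressions that contain it, the sifting property:

```latex
\int_{-\infty}^{\infty} f(x)\,\delta(x - a)\,dx = f(a)
```

Analogously, on this view “the chance of heads is 1/2” is characterized not by what it describes but by the rule (the Principal Principle) for what accepting it amounts to: set your credence in heads to 1/2.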
In a sense, this is like some of what happens in the Stalnaker framework for conversations. Ordinarily, a conversational context is taken to be the set of worlds that is theoretically open for speakers in the conversation (either one of the participants might actually know enough to rule out some of these worlds, but this set represents the “common ground” between them). Whenever someone asserts something, the proposition expressed by the asserted sentence defines some set of possible worlds, and the context set is then intersected with this set to produce the new conversational context. However, some people have proposed that certain types of sentences, like epistemic modals involving “might”, work differently. They have a context change potential just like ordinary sentences, but it is directly a function from contexts to contexts, rather than a set that gets intersected. Thus, these sentences tell you how to update the context, but not by telling you what’s true. Instead, they just do it directly.
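The contrast between the two kinds of update can be sketched in a few lines of code. This is a toy model of my own, with “might” treated as a Veltman-style test (pass the context through if some open world verifies the prejacent, otherwise crash it to the empty context), which is one way, though not the only way, of spelling out a direct context-change potential:

```python
# Worlds modeled as frozensets of the atomic propositions true at them.
worlds = [frozenset(s) for s in ({"p", "q"}, {"p"}, {"q"}, set())]

def assert_update(context, prop):
    """Ordinary assertion: intersect the context with the set of
    worlds where the asserted proposition holds."""
    return {w for w in context if prop(w)}

def might(prop):
    """'Might p' as a direct function from contexts to contexts:
    it tests the context rather than intersecting it with a
    proposition.  (A Veltman-style rendering -- my assumption.)"""
    def update(context):
        return context if any(prop(w) for w in context) else set()
    return update

p = lambda w: "p" in w
q = lambda w: "q" in w

context = set(worlds)                    # initial common ground: all worlds open
context = assert_update(context, p)      # asserting "p" rules out the not-p worlds
context = might(q)(context)              # "might q" passes: some open world has q
```

Note that `might(q)` never supplies a set of “might-q worlds” to intersect with; it just maps the old context to a new one directly, which is the point of the contrast in the paragraph above.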
On the view described above, scientific theories involving chances do something similar. If such theories are accepted, it’s a blow for scientific realism, because we’ll have a theory that doesn’t say what’s true. But it might be the best we can do, if we can make sense of just what makes such a theory a good one. As Mike points out, though, this might just mean solving the problem of induction, because it’s exactly the sense in which I believe with credence 1/2 that the coin will come up heads because approximately half of the flips in the past have come up heads.
(I’m not sure if this post by Cosma Shalizi is hinting at something similar or not.)