From the last Carnival of Philosophy, I found a post by another Kenny about the relation between Bayesian epistemology and probability! He puts forward three views of what this relation might be:

Here are brief definitions of each view, and how each one relates subjective degrees of rational confidence to probabilities (I will explain in more depth later).

* (P) takes subjective degrees of rational confidence as primitive. There is no state space for degrees of rational confidence, because they aren’t probabilities.

* (KPW) takes subjective degrees of rational confidence to be *actual probabilities* over the state space of all *epistemically possible worlds*, where the epistemically possible worlds are formal constructions that may or may not be objectively possible.

* (LPW) takes subjective degrees of rational confidence to be *actual probabilities* over the state space of the subset of the really possible worlds which are *epistemically accessible*.

However, he seems to be focused on a very particular understanding of the word “probability” that might not quite be what I would mean by it. The very fact that he talks about a relation between rational degrees of confidence and probabilities suggests that he’s understanding the word differently from how I am.

My understanding of the word is that “probability” refers to any function from a Boolean algebra to the real numbers satisfying the following three properties: (1) it is never negative; (2) the tautology is assigned value 1; (3) finite additivity (that is, given two elements whose conjunction is the contradiction, the probability of their disjunction is the sum of their probabilities). I’d also be willing to apply the term “probability” in cases where instead of a Boolean algebra in the strict mathematical sense, one uses any structure where the terms “tautology”, “conjunction”, “contradiction”, and “disjunction” have a natural interpretation.
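The three properties above can be checked mechanically. Here is a minimal sketch, using an illustrative three-element state space with the powerset as the Boolean algebra (the definition itself, as noted, does not require a state space; the small example is just for checkability):

```python
from itertools import combinations

# Illustrative state space; the Boolean algebra is its powerset.
STATES = frozenset({"a", "b", "c"})

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_probability(p):
    """Check the three properties of a probability function."""
    events = subsets(STATES)
    if any(p(e) < 0 for e in events):                  # (1) never negative
        return False
    if abs(p(STATES) - 1) > 1e-9:                      # (2) tautology gets 1
        return False
    for e in events:                                   # (3) finite additivity
        for f in events:
            if e & f == frozenset():                   # conjunction is the contradiction
                if abs(p(e | f) - (p(e) + p(f))) > 1e-9:
                    return False
    return True

uniform = lambda e: len(e) / len(STATES)
print(is_probability(uniform))                         # → True
print(is_probability(lambda e: 0.5))                   # → False (tautology gets 0.5)
```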

It seems that Kenny Pearce, by contrast, understands the term to require that the algebra be an algebra of sets over some state space, and that there be some objective fact about the probability values. If this interpretation is right, then I don’t think I’d quite take any of the positions he mentions. At any rate, I think I support something more like (KPW) than the others, where “actual probabilities” isn’t taken in any objective sense. In explaining this position, I think I can give answers to three questions he raises:

1. Why should we suppose that we can use the math of probability theory in dealing with degrees of rational confidence?

2. The math of probability theory is generally interpreted in terms of sets called state spaces, but, ex hypothesi, degrees of rational confidence, not being probabilities, have no state spaces. What, then, does the math mean?

3. Why should we suppose that when an occurrence has a well-defined objective probability, our subjective degree of rational confidence should be assigned a value equal to its probability?

In response to the first question, the standard answer would be to refer to something like a Dutch book argument – degrees of rational confidence can be described by the mathematics of probability theory because if they couldn’t, the agent would be subject to a sure loss from a set of bets they would be willing to take, and therefore would be irrational. (There’s some slipperiness here in generating the bets from the confidences, and in concluding irrationality from a collection of bets the agent would only take individually, but I think this can be cleaned up.) There’s also a host of other arguments for something like this same conclusion (though Alan Hájek raises issues for them in his (forthcoming?) “Arguments For – Or Against? – Probabilism”). As Kenny Pearce notes, nothing about these arguments requires there to be a state space, so degrees of confidence don’t end up being probabilities in his sense (due to Kolmogorov), but they do seem to be probabilities in the sense I use (and Popper, and Borel, and others).
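The Dutch book idea can be sketched numerically with made-up credences: an agent whose credences in p and not-p sum to more than 1 will accept a pair of bets that together lose money however p turns out.

```python
# Hypothetical incoherent credences: 0.6 in p and 0.6 in not-p (sum > 1).
cred_p, cred_not_p = 0.6, 0.6

# The bookie sells a $1 bet on each proposition, priced at the agent's credence.
cost = cred_p + cred_not_p            # agent pays 1.2 in total
for p_is_true in (True, False):
    payoff = 1                        # exactly one of the two bets pays $1
    net = payoff - cost
    print(f"p={p_is_true}: net = {net:+.2f}")   # → -0.20 in both worlds
```

The agent loses $0.20 no matter what, which is the sense in which the credences are irrational.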

As for the second question, I think that there actually *is* a state space that is relevant for degrees of rational confidence, which is why I lean more towards something like what Kenny Pearce calls (KPW) rather than (P). The state space here would be the set of epistemic possibilities (whatever those are – I don’t really have a good theory of them, do you?). Despite my lack of an account of them, I think they do need to play a role. I think we can’t make very good sense of the notion of a degree of confidence in p, *supposing* q, without a set of possibilities that we can restrict to the q-possibilities. Also, these epistemic possibilities seem to play an important role in other aspects of epistemic modality, not just degree of belief.

And most importantly, I think there’s a rational difference between having a rational confidence of 0 in p and actually being certain that p will not happen. When measuring the speed of light, there’s a difference between my attitude towards it being exactly 2.9980000000000001 x 10^{8} m/s and my attitude towards it being exactly 3 m/s – I consider the former possible given what I know, and the latter not. However, since there is some interval around 2.998 x 10^{8} m/s that I can’t rule out, and there are infinitely many such values that I am indifferent between, I can’t give any of them a positive value without either violating additivity or assigning values larger than 1 to certain disjunctions. So I propose that the state space contains infinitely many epistemic possibilities, and that my degree of confidence in certain sets of these possibilities is 0, even though the set is non-empty. (Of course, for the empty set, I trivially have confidence 0 in that set of possibilities.)
So I think this aspect of the math actually applies quite well to degrees of confidence, though I’m willing to concede that many people will want to challenge this point, and I don’t think it’s as important as the point that degrees of confidence must be probabilities in something like the general sense I outlined earlier.
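This confidence-zero-but-possible point can be illustrated with a toy uniform credence (the interval bounds are hypothetical): every exact value gets confidence 0 even though it is epistemically possible, while sub-intervals get positive confidence, with no violation of finite additivity.

```python
# Hypothetical interval of speed-of-light values I can't rule out (m/s),
# with credence spread uniformly over it.
lo, hi = 2.997e8, 2.999e8

def confidence(a, b):
    """Credence that the true value lies in [a, b], clipped to [lo, hi]."""
    a, b = max(a, lo), min(b, hi)
    return max(b - a, 0.0) / (hi - lo)

print(confidence(2.998e8, 2.998e8))      # a single exact value: 0.0
print(confidence(2.9975e8, 2.9985e8))    # a sub-interval: 0.5
```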

However, I don’t think such a state space comes with objectively correct probabilities to assign – after all, it’s infinite, and Bertrand’s Paradox shows how all sorts of troubles arise when we think that symmetries of an infinite space constrain probability assignments.
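Bertrand’s paradox can be checked by simulation: three natural ways of drawing a “uniformly random” chord of a unit circle give three different probabilities that the chord is longer than the side of the inscribed equilateral triangle (sqrt(3)). A sketch:

```python
import math, random

random.seed(1)
N = 200_000
side = math.sqrt(3)   # side length of the inscribed equilateral triangle

def on_circle():
    t = random.uniform(0, 2 * math.pi)
    return (math.cos(t), math.sin(t))

# Method 1: two uniform endpoints on the circle (answer: 1/3)
m1 = sum(math.dist(on_circle(), on_circle()) > side for _ in range(N)) / N

# Method 2: uniform distance from the center along a random radius (answer: 1/2)
m2 = sum(2 * math.sqrt(1 - random.uniform(0, 1) ** 2) > side for _ in range(N)) / N

# Method 3: uniform midpoint inside the disk, by rejection sampling (answer: 1/4)
def m3_trial():
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - (x * x + y * y)) > side
m3 = sum(m3_trial() for _ in range(N)) / N

print(round(m1, 2), round(m2, 2), round(m3, 2))   # ≈ 0.33 0.5 0.25
```

Each sampling scheme is “uniform” by some symmetry of the space, yet the three answers disagree, which is the sense in which symmetries of an infinite space fail to pin down a unique probability assignment.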

As for the third question, I’m not sure I agree with its premise. I’m not totally convinced that when there is a well-defined objective probability, we should match it with our degrees of confidence. Consider a fair coin that has just been tossed. There is some sense in which it had an objective probability of 1/2 of coming up heads, so this principle would suggest having degree of belief 1/2 in heads. But if I also know that this coin was one of 10 fair coins flipped at that point, 9 of which happened to come up heads, then (in the frequency sense, as opposed to the chance sense) there is also an objective probability of 9/10 of that coin being heads up, so this principle would suggest the contradictory degree of belief of .9. Maybe in this situation one of the two principles wins out (my guess would be the latter), but I don’t really know under what circumstances something like this should be the case. Of course, I also don’t really know what sorts of objective things count as “well-defined objective probabilities” – is it chances, frequencies, or something else? There are many well-defined objective things that obey the mathematics of probability, but it’s an interesting question which (if any) should be tracked by our degrees of confidence.
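The tension in the coin example can be checked by simulation, modeling the frequency sense by drawing a coin at random from the batch (trial counts are made up): conditioning on 9 of the 10 fair coins having come up heads, a coin from the batch is heads about 9/10 of the time, even though each coin’s chance of heads was 1/2.

```python
import random

random.seed(0)
trials, heads = 0, 0
for _ in range(100_000):
    flips = [random.random() < 0.5 for _ in range(10)]  # 10 fair coins
    if sum(flips) == 9:                # condition on the observed frequency
        coin = random.choice(flips)    # "that coin", drawn from the batch
        trials += 1
        heads += coin
print(round(heads / trials, 2))        # close to 0.9, not 0.5
```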

Kenny Pearce suggests that on the (KPW) theory of degrees of confidence, it’s the fact that “the worlds … divide more or less evenly” that makes us assign 1/6 to each of the propositions about the way the die might land. I don’t think there’s any such thing as an objective measure over this infinite state space, so we can’t even make sense of the worlds dividing more or less evenly. Thus, if there is some objective reason for the degrees of belief we assign, I don’t know what it is yet, but I don’t think it could be anything like what Kenny Pearce suggests in either (KPW) or (LPW). (Also, I don’t think (LPW) is even a viable candidate, because this is supposed to be a theory of degree of rational belief, and actual possibilities have almost nothing to do with rational epistemic possibilities – one could try to make a modified 2-dimensionalist version of this strategy, as Frank Jackson does, but I’m not convinced that this will work.)

I think that these degrees of confidence exist, and are actually often much more precise than we realize (there’s no reason we should have transparent access to exactly what our degrees of belief are), but they’re not constitutively tied to any sort of objective probability in the sense that Kenny Pearce was expecting for a relation between Bayesian epistemology and probability. These degrees of belief are themselves probabilities, just in a different interpretation than Kenny Pearce was specifically considering.

Charles Wells (12:18:41): Taking degrees of rational confidence as given sounds more like fuzzy logic than probability. Have you looked into that?

Kenny (08:39:57): I have looked somewhat into fuzzy logic, but I don’t think it’s really that relevant here. It’s more a device for dealing with vagueness than with any of the things that probability is normally taken to deal with. And for the argument that degrees of rational confidence should be modeled with a probability function (or something very much like one), look at Dutch book arguments.

Paul Shearer (22:48:37): Fun fact from a practicing mathematician — in Bayesian inference, “degrees of rational confidence” which are not probability distributions are sometimes used. They are called improper priors.

Improper priors sometimes produce paradoxical results, though, so they are often modified to produce proper probability distributions.
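A numerical sketch of this point (all numbers illustrative): a flat “prior” over the whole real line has infinite total mass, so it is not a probability in either sense discussed above, yet combined with an ordinary normal likelihood it yields a posterior that does normalize.

```python
import math

def likelihood(mu, x=1.3, sigma=1.0):
    """Normal density N(x | mu, sigma) as a function of the unknown mean mu."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Flat improper prior: prior(mu) = 1 for every mu (infinite total mass).
# Riemann sum of prior * likelihood over a wide range approximates the
# posterior's normalizing constant.
dmu = 0.001
total = sum(likelihood(-10 + i * dmu) * dmu for i in range(20_000))
print(round(total, 3))   # ≈ 1.0: the posterior normalizes despite the improper prior
```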