Dominance and Decisions

21 07 2006

I’ve finally posted slides from the talk that I gave at Stanford and a couple times in Australia in the last couple months. (Each of the talks was somewhat shorter than this whole set of slides – I’ve combined them all here.) I discuss some ideas about putting decision theory on new foundations in order to better deal with some problematic cases due to Alan Hájek and others, and in the process get a slightly more unified account of the Two-Envelope Paradox and some others. Of course, my theory’s not fully worked out yet, so comments and criticism are certainly welcome!





Penrose and Perry

17 07 2006

Last week, one of the campers asked me about Penrose’s arguments about the mind based on Gödel’s theorems. I wasn’t entirely sure about them, since it’s been years since I read Penrose, but I discussed it for about an hour with the camper and another staff member who was interested, and helped clarify (to myself as well) what seems to be going on here. In the process, I think I’ve found a connection between an issue in these arguments and John Perry’s “The Problem of the Essential Indexical”. (Of course, I should probably check out the book on Gödel’s theorem by the late Torkel Franzén for more about these arguments.)

Anyway, as far as I can remember (which may not be terribly accurate), Penrose’s arguments go something like the following. If humans were computational beings, then the set of mathematical truths knowable by humans would be recursively axiomatizable. The set of truths knowable by humans seems to include Peano arithmetic. Gödel’s second incompleteness theorem then tells us that the consistency of this set is independent of the set itself, and thus can’t be known. However, since we know that the truth is consistent, and we know this set of sentences to be true, this means that we know the set is consistent, which is a contradiction. Therefore, our initial assumption that humans are computational must be false.
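To lay the argument out a bit more explicitly (this schematization is my own, not Penrose’s exact formulation):

```latex
\begin{enumerate}
  \item Suppose humans are computational. Then the set $K$ of mathematical
        truths knowable by humans is recursively axiomatizable.
  \item $K$ includes Peano arithmetic, so G\"odel's second incompleteness
        theorem applies to it: $K \nvdash \mathrm{Con}(K)$.
  \item Everything in $K$ is true, so $K$ is consistent; and since we
        seemingly know this, $\mathrm{Con}(K) \in K$, i.e.\ $K \vdash \mathrm{Con}(K)$.
  \item Contradiction. So the supposition fails: humans are not computational.
\end{enumerate}
```

The challenge discussed below targets step 3 – in particular, the move from knowing the truths to knowing, of the set so described, that it is consistent.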

Some versions of this argument might use some propositional attitude other than knowledge – I think those would generally be weaker arguments, because there’s no good reason to suppose that beliefs (or anything else) should be consistent. I will also bracket the contention by various non-classical logicians that the truth might not actually be consistent – I think all that we need is that 0=1 is not provable from the knowable truths, and that Gödel’s theorem tells us we can’t prove that. And besides, I can’t conceive of what it would mean for the truth to be inconsistent (except perhaps for some special cases like the liar paradox).

I think the important challenge to this argument comes when I say “we know this set of sentences to be true, [and thus] we know the set is consistent”. There seems to me to be an important ambiguity in knowing a set of sentences to be true. In particular, there is a difference between knowing, of each sentence in the set, that it is true, and knowing of the set as a whole, that all of its elements are true sentences. To know that the set is consistent, one needs the latter, rather than the former, because consistency is a property of the set as a whole, and not of its individual sentences, the way truth is.

In other words, although I might know that “the set of all mathematical truths that I know is a consistent set”, it seems plausible that if I were presented with this same set under a different mode of presentation, I might not recognize it as a consistent set. This seems to be an important step in this version of Penrose’s argument – I don’t think Gödel’s theorems would cause trouble for my knowledge of the statement “the set of all mathematical truths I know is consistent”, though they would cause trouble for any system S proving the statement “system S is consistent”, when system S is presented in some sort of transparent manner, say by listing its axioms. Presenting it as “my system” doesn’t let me prove much about its consequences, so I can know it to be consistent. Presenting it in a more extensional format lets me prove a lot about its consequences, perhaps at the cost of my knowing that it’s consistent. Penrose would need to bridge that gap in order for his argument to be valid. (At least as stated above – it seems quite plausible that he’s got a more subtle argument that gets around these points.)

Anyway, I found an interesting duality here with Perry’s point in “The Problem of the Essential Indexical”. In that paper, Perry points out the importance of learning some essentially indexical information in addition to purely descriptive information. In his example, one can be in a grocery store and see a trail of spilled sugar, and realize “someone has a leaky bag of sugar – I should go let him or her know”. After following the trail for a little bit, one realizes one is walking in a circle, and eventually gains the new information “I have a leaky bag of sugar”, at which point one can remedy the situation. This information must be presented with the indexical “I”, rather than with any description or other third-person presentation (unless one already has the connection between that description and the first person).

Conversely, in this case with the Penrose argument, it seems to me that one might have the indexical information, without the third-person description! That is, one can know “the set of mathematical truths I know is consistent” (because it is a set of truths), without knowing, “system S is consistent”, even if system S characterizes the set of mathematical truths I know. Perhaps all that’s important here is the difference between two different descriptions of the same set, but maybe there’s a role for Perry’s “essential indexical” as well, though the role is the opposite of the one he is concerned with.





AAP

11 07 2006

The conference of the Australasian Association for Philosophy was just last week, and it was a lot of fun. The probability stream in particular I found quite interesting – there were some interesting pairings of talks, with both Rachael Briggs and Mike Titelbaum talking about updating on indexical beliefs and applying it to Sleeping Beauty; Matt Weiner and me talking about infinitary decision theory and versions of the two-envelope paradox; and Jonathan Schaffer and Antony Eagle talking about the (in)compatibility of non-extreme chances with determinism.

There were a couple other talks I found quite interesting as well. Brendan Jackson and Sam Cumming gave interesting talks on formal semantics (Brendan on analyticity coming from semantic structure alone, and Sam on discourse-referent-based approaches to Frege’s puzzle). And Peter Forrest posed a problem for general relativity: its ontology calls for space-time to be a differentiable manifold, which requires not only a set of points and a designated collection of “open” sets, but also a collection of functions mapping these sets to real numbers in order to give the differential structure of things. He pointed out that it’s quite unclear in what sense these functions have any physical reality, and proposed some ways to minimize this ontology, some of which seemed to have consequences for views on time.

Finally, Zach Weber (a student at Melbourne) gave a very interesting talk presenting some of his results using naive Fregean set theory, in a relevant logic to avoid triviality. He’s been able to prove, fairly simply, the existence of inaccessible cardinals, the axiom of choice, and many other interesting results that one wouldn’t necessarily expect to get so easily. Whatever one’s thoughts on paraconsistent logics, I think there’s probably interesting stuff here to motivate some sort of principle to transfer some of these results to classical set theory, if possible.

Anyway, in addition to all these talks, the conference was quite a nice environment to talk to people from all over Australia (and the US, and other places) about various philosophical topics, and was a nice way to spend my last week in the country. Immediately afterwards, I left for the US, and I’m now teaching at the Canada/USA Mathcamp until early August, before returning to Berkeley. So I might not have as much time for posting in the next month. But it’ll probably be more mathematically oriented than a lot of what I’ve been posting recently.





Chance Expressivism

1 07 2006

I had a long conversation with Mike Titelbaum yesterday, largely about Hilary Greaves’ manuscript, “Probability in the Everett Interpretation”. I think it’s a very interesting paper, trying to use a deterministic interpretation of quantum mechanics (essentially a many-worlds interpretation) and ordinary principles of decision theory to show that a rational agent will always act as if her credences matched the probabilities recommended by the Born rule, so that there’s no need for objective chance. In talking about this, I was trying to explain to Mike why I’ve always been suspicious of objective chances, because they imply some sort of fishy metaphysics, like that of the Copenhagen interpretation of quantum mechanics.

Mike’s been reading Lewis’ “A Subjectivist’s Guide to Objective Chance”, where Lewis suggests that the Principal Principle (basically, the rule that you should proportion your credences according to the chances) gives him almost his complete understanding of chance. So Mike suggested that theories that postulate chances might instead be interpreted as just directly specifying credences for an agent to have. A very simple such theory tells me to always have credence 1/2 in heads when I’m flipping an ordinary coin. Although such a theory isn’t as good as a deterministic theory in always telling me to believe the truth, it does at least guarantee that I can’t get Dutch booked (because I use a probability function) and in addition that my credences tend to match the long-run frequencies, at least with probability 1.
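To illustrate the long-run-frequency point with a toy simulation (my own illustration, not anything from Lewis or Greaves): the law of large numbers says a fixed credence of 1/2 in heads will, with probability 1, match the limiting frequency of a fair coin.

```python
import random

def long_run_frequency(n_flips: int, seed: int = 0) -> float:
    """Simulate n_flips of a fair coin and return the observed frequency of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

credence = 0.5  # the credence the simple chance theory recommends
freq = long_run_frequency(100_000)
# The gap shrinks as n grows, by the law of large numbers
print(abs(freq - credence))
```

Of course, this simulation just builds the probability-1/2 assumption into the random number generator, which is exactly the circularity worried about below.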

But, as I pointed out, this “probability 1” is only the probability according to the theory itself, which is exactly what we’re trying to justify here. So it’s unclear exactly what makes these credences good ones to have (this is basically what we were trying to puzzle out from Hilary Greaves’ paper, which I haven’t fully read). There’s another weirdness in these theories as well.

Most scientific theories affect our beliefs just by telling us what is true. On this view, a theory with chances affects our beliefs by telling us how strongly to believe things, without saying anything about what’s true. You can say that it says “the coin has 1/2 chance of heads” is true, but this is a purely theoretical statement since it involves chances – just as a delta function fills the place of a function but isn’t one, “chance” fills the place of a noun, but doesn’t refer to anything. Instead, we know how to calculate integrals involving a delta function, and how to adjust our beliefs when “believing” a sentence with “chance” in it. In this sense, a theory involving chances doesn’t make metaphysical claims the way ordinary scientific theories do – ordinary theories try to tell us what’s true (and thereby what to believe), while the chance-based theory just tells us what to believe without going through the intermediary of truth.
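To spell out the delta-function analogy: the delta “function” isn’t a genuine function from reals to reals, but any integral containing it is given a definite value by the rule

```latex
\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx = f(0)
```

so one can manipulate expressions containing $\delta$ without it ever referring to anything; the suggestion is that “chance” works the same way inside our belief-updating rules.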

In a sense, this is like some of what happens in the Stalnaker framework for conversations. Ordinarily, a conversational context is taken to be the set of worlds that is theoretically open for speakers in the conversation (either one of the participants might actually know enough to rule out some of these worlds, but this set represents the “common ground” between them). Whenever someone asserts something, the proposition expressed by the asserted sentence defines some set of possible worlds, and the context set is then intersected with this set to produce the new conversational context. However, some people have proposed that certain types of sentences, like epistemic modals involving “might”, work differently. They have a context change potential just like ordinary sentences, but it’s just directly a function from contexts to contexts, rather than a set that is intersected. Thus, these sentences tell you how to update the context, but not by telling you what’s true. Instead, they just do it directly.
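As a minimal sketch of the contrast (my own toy model – the treatment of “might” here follows Veltman-style test semantics, which is one proposal among several):

```python
# Worlds are modeled as dicts assigning truth values to atomic sentences.
worlds = [
    {"rain": True,  "wind": True},
    {"rain": True,  "wind": False},
    {"rain": False, "wind": True},
]

def assert_prop(context, prop):
    """Ordinary assertion: intersect the context with the proposition's worlds."""
    return [w for w in context if prop(w)]

def might(prop):
    """Epistemic 'might': directly a function from contexts to contexts.
    It tests the context: unchanged if some world verifies prop, else absurd."""
    def update(context):
        return context if any(prop(w) for w in context) else []
    return update

context = worlds
context = assert_prop(context, lambda w: w["rain"])  # "It's raining."
context = might(lambda w: w["wind"])(context)        # "It might be windy."
print(len(context))  # 2: both rain-worlds survive the 'might' test
```

The point of the analogy: `assert_prop` updates by way of a set of worlds where a sentence is true, while `might` never supplies such a set – it just acts on the context directly.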

On the view described above, scientific theories involving chances do something similar. If such theories are accepted, it’s a blow for scientific realism, because we’ll have a theory that doesn’t say what’s true. But it might be the best we can do – if we can make sense of the way in which such a theory might be good. As Mike points out, though, this might just mean solving the problem of induction, since it would have to capture exactly the sense in which it’s good for me to believe with credence 1/2 that the coin will come up heads, given that approximately half of past flips have come up heads.

(I’m not sure if this post by Cosma Shalizi is hinting at something similar or not.)