## Epistemic Modals and Modality

24 06 2006

On Thursday and Friday this week there was a conference on epistemic modality here at ANU – though more of it ended up being about the semantics of epistemic modal words. Unfortunately, John MacFarlane and Brian Weatherson couldn’t be here, so the conference was trimmed slightly.

On the first day, Frank Jackson wondered about how we should assign probabilities to possible worlds so that we don’t end up with metaphysical omniscience – his answer was basically that our credence in a sentence should be the sum of the probabilities of the worlds contained in its A-intension, rather than its C-intension (I hope I’m getting the terminology right). That is, rather than finding out what proposition the sentence expresses and summing the probabilities over the (centered) worlds where that proposition is true, we should figure out in which centered worlds the sentence expresses a proposition that is also true in that centered world, and sum over those. So for instance, although “water is H2O” actually expresses a necessarily true proposition, there are worlds (ones like Putnam’s twin earth) in which “water” refers to a different substance, so the proposition expressed at that world ends up being false at that world. Since there might be worlds that we can’t tell apart from the actual one, it makes sense that those would play a role in our probability assignments. (My concern about the whole framework is a worry about why we should be assigning probabilities to sets of worlds in the first place, rather than to something more epistemically accessible – in which case the problem doesn’t arise.)
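To make the contrast concrete, here is a toy model of my own (not Jackson’s formalism – the worlds and probabilities are made up for illustration) of how the two ways of summing come apart for “water is H2O”:

```python
# Hypothetical centered worlds with invented credences; each world fixes
# what "water" picks out locally (its watery stuff).
worlds = {
    "earth":      {"prob": 0.6, "water_refers_to": "H2O"},
    "twin_earth": {"prob": 0.4, "water_refers_to": "XYZ"},
}

# C-intension: the proposition actually expressed ("H2O is H2O") is
# necessarily true, so it holds at every world and credence is forced to 1.
c_credence = sum(w["prob"] for w in worlds.values())

# A-intension: at each centered world, let that world fix the referent of
# "water", and ask whether the sentence comes out true there.
a_credence = sum(w["prob"] for w in worlds.values()
                 if w["water_refers_to"] == "H2O")

print(c_credence, a_credence)
```

The C-intension route gives credence 1 (the metaphysical-omniscience worry), while the A-intension route gives 0.6, reflecting our inability to tell the two worlds apart.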

Andy Egan spoke about his version of relativist semantics for epistemic modals. Since I’ve mainly only been exposed to John MacFarlane’s version, I was quite interested in this. It sounds to me as though the frameworks could be intertranslatable, if done carefully. John does this approximately by saying that the proposition expressed by uttering “might P” at CU is true when assessed at CA iff the proposition expressed by P (at CU?) is compatible with the knowledge of the agent at CA. Andy does it approximately by saying that the proposition expressed by uttering “might P” is a set of centered worlds including all centers whose knowledge is compatible with P. So roughly, John seems to view the proposition as a function, while Andy views it as a set. However, their main differences are in the norms for asserting and denying such statements, which I don’t think I understand fully enough in either case – but they’ll have to give some convincing story in order to be able to say that propositions aren’t just true or false simpliciter.

Matt Weiner picked up where Andy Egan left off and gave an interesting argument for why we might have relativist semantics for epistemic modals, rather than contextualist semantics, like we do for “I”, “here”, and other similar terms. Basically, the idea is that we’ve got a conversational norm that one shouldn’t let a proposition one judges false remain unchallenged. Then relativist semantics makes it easy for people to assert modals to share their ignorance, and requires others to share their knowledge to fix this state. So this semantics makes them good tools for joint inquiry.

Seth Yalcin started the next morning by pointing out that epistemic modals lead to a certain type of Moore’s paradox. The ordinary paradox is approximately, “It’s raining, but I don’t know that it’s raining”, which is certainly a very bad thing for anyone to ever assert, but is perfectly reasonable to suppose or embed in the antecedent of a conditional. Depending on your account of epistemic modals, this should mean approximately the same as “It’s raining, but it might not be raining” – which is just as bad to assert, but, interestingly, is about equally bad to suppose or embed in a conditional. (Compare “If it’s raining, but I don’t know that it’s raining, then I must be confused” with “If it’s raining, but it might not be raining, then (I must be confused/it’s raining/etc.)”.) Then he attempted to explain this by giving a very interesting semantics involving sets of probability functions over sets of worlds, rather than just sets of worlds. I’m very interested in looking more at the details of that as he works it out.

Jonathan Schaffer then argued for the KGB account of modals over the CIA account. (That is, the contextualist view he called “Kratzer’s Graded Basis” over the relativists’ “Contexts and Indices of Assessment”.) He disputed the accuracy of a lot of the CIA data, showed that the KGB deals better with modals of all sorts (with epistemic modals just a special case), and showed that some of the propaganda pushed by CIA agents is predicted by the KGB, so it shouldn’t mislead us.

Finally, Dave Chalmers returned to the notion of epistemic modality, after four papers on semantics, and disputed Frank Jackson’s idea of doing epistemic modality in terms of worlds. Instead, he suggested that there should be some space of epistemic possibilities, which are effectively something like maximal consistent conjunctions (or perhaps sets, to deal with infinities?) of sentences. However, the sentences should be phrased in some basic vocabulary that is sufficient to deal with all concepts whatsoever, and “consistent” means “not knowable to be false by any a priori means”. Thus, he’s allowing much stronger reasoning principles than just first-order logic, because he thinks that all mathematical claims (for instance) can be settled by a priori reasoning (which I guess must therefore be much stronger than Turing complete). Also, he’d like to modify this view to deal with epistemically non-ideal agents.

Anyway, it was quite an interesting conference, I learned a lot, and I’m very interested in seeing how these projects continue to develop!

## Melbourne Visit

18 06 2006

Like Richard before me (and myself last year), I had a nice visit in Melbourne. Unfortunately, it was fairly short because the tickets were more expensive at other times. It’s amazing how helpful it can be to explain your ideas to someone who isn’t working immediately in the same field – I got some useful ideas from my conversations with Greg and Zach that I spent some time writing up yesterday. In some sense they’re just points about how to present some of the ideas, but the right way to present and link ideas is certainly an extremely large part of the advances in most good work (if not 90% of the progress).

Anyway, so that this post has some slight amount of content itself, here is a link my boyfriend sent me to a talk by psychologist Daniel Gilbert on decision theory, and how people are often bad at estimating both probabilities and utilities. I find it particularly interesting because I’m talking on Tuesday about decision theory here in Canberra (I’ll be repeating it at the AAP in a couple weeks, and I gave a version a few weeks ago at Stanford as well). But also, it’s interesting that someone could be talking about this stuff to a general audience at South by Southwest (which apparently is much more than just a music festival).

## Discontinuities in States and Actions

13 06 2006

(The ideas in this post originate in several conversations with Alan Hájek and Aidan Lyon.)

In chapter 15 of his big book on probability, E. T. Jaynes said “Not only in probability theory, but in all mathematics, it is the careless use of infinite sets, and of infinite and infinitesimal quantities, that generates most paradoxes.” (By “paradox”, he means “something which is absurd or logically contradictory, but which appears at first glance to be the result of sound reasoning. … A paradox is simply an error out of control; i.e. one that has trapped so many unwary minds that it has gone public, become institutionalized in our literature, and taught as truth.”) His position is somewhat unorthodox, hinting that in some sense all of infinite set theory (and many classic examples in probability theory) is made up of this sort of paradox. But I think a lot of what he says in the chapter is useful, and I intend to study it more to see what it says about the particular infinities and zeroes that I’ve been worrying about in probability theory.

His recommendation of what to do is as follows:

> Apply the ordinary processes of arithmetic and analysis only to expressions with a finite number n of terms. Then after the calculation is done, observe how the resulting finite expressions behave as the parameter n increases indefinitely.
>
> Put more succinctly, passage to a limit should always be the last operation, not the first.
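A standard toy illustration (my own, not from Jaynes’ chapter) of why the order of these operations matters:

```python
# Row n of an infinite array has a single 1, in position n, and 0 elsewhere.
def row_sum(n, width):
    """Sum of the first `width` entries of row n."""
    return sum(1 if k == n else 0 for k in range(width))

# Finite expressions first: every row with n < width sums to exactly 1,
# so "sum first, then let n grow" gives 1.
assert all(row_sum(n, 1000) == 1 for n in range(100))

# But the pointwise limit of the rows as n grows is the all-zero row,
# so "take the limit first, then sum" gives 0 -- a different answer.
limit_row = [0] * 1000
assert sum(limit_row) == 0
```

Summing and then passing to the limit gives 1; passing to the limit and then summing gives 0, which is exactly the kind of reversal Jaynes wants to rule out.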

One suggestion to take from this idea might be that in phrasing any well-defined decision problem, the payoffs should in some sense be a continuous function of the states. (I should point out that I got this suggestion from Aidan Lyon.) For instance, consider the game in the St. Petersburg paradox (interestingly, the argument in section 3 there about boundedness of utilities seems to miss a possibility – Martin considers the case of bounded utilities where the maximum is achievable, and unbounded ones where the maximum is not achievable, but not bounded ones where the maximum is unachievable, which seems to largely invalidate the argument). An objection you might be able to make to this game is that payoffs haven’t been specified for every possibility – although the probability of a fair coin repeatedly flipped coming up heads every time is zero, that doesn’t mean that it’s impossible. So we must specify a payoff for this state. But of course, we’d like to not have to wait forever to start giving you your payout, since then you’ll effectively get no payout. So we have to be giving a sequence of approximations at each step. Which basically suggests that we should make the payoff for the limit state be the limit of the payoffs of the finite states. Which is just as Jaynes would like – we shouldn’t do something (specify a payout) after taking a limit. Instead, we should specify payouts, and then take a limit, making the payout of the limit stage be the limit of the finite payouts. Which in this case means that there’s actually a chance of an infinite payout for the St. Petersburg game! (Even if that chance has probability zero.) So perhaps it’s no longer so problematic a game – the expectation is no longer higher than every payout.
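In this spirit, the St. Petersburg expectation can be computed as a finite expression first, with the limit taken only at the end (a quick sketch of my own, using the standard payoff schedule of \$2^k for a first head on flip k):

```python
def truncated_expectation(n):
    """Expected payout of a St. Petersburg game cut off after n flips.
    The game pays 2**k dollars if the first head comes on flip k."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n + 1))

# Each term contributes (1/2**k) * (2**k) = 1 dollar, so the finite
# expectation is just n, and it grows without bound as n does --
# computing the finite expression first makes the divergence explicit.
for n in (10, 100, 1000):
    print(n, truncated_expectation(n))
```

Following Jaynes, the divergence shows up as the behavior of a perfectly well-defined finite quantity as n increases, rather than as a mysterious infinite sum.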

Note the sort of continuity I’m considering here. In some sense the payouts are discontinuous (obviously, they jump with each coin toss). But in the natural topology on the space (where the open sets are exactly the pieces of evidence we could have at some point – in this case that the game will take at least n flips) it is continuous. Which leads us to a distinction between two games that classically look the same – in one game I flip a fair coin and give you \$1 if it comes up heads and nothing if it comes up tails; in the other game I throw an infinitely thin dart randomly at a dartboard and give you \$1 if it hits the left half and nothing if it hits the right half (stipulate that the upper half of the center line counts as left, the lower half counts as right, and the center point doesn’t exist). The difference is that in the former case, it’s always easy to tell which state has occurred, so we can calculate the payoff. In the latter case though, if the dart hits exactly on the middle line, then we can’t tell which payoff you should get unless we can measure the location of the dart with infinite accuracy. If we can only tell to within 1 mm where the dart has hit, then any dart that hits within 1 mm of the center line will be impossible to pay on. If we can refine our observations, then we can pay up for most of these points, but even closer ones will still cause trouble. And no matter how much we refine them, a dart that hits the line exactly (this has probability zero, but it seems that it still might happen, since it’ll hit some line) will be one that we can never know which payoff is right. So you’ll be stuck waiting for your payoff rather than actually getting one or the other. So the game is bad again, although the analogous coin-flipping game is good.
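Here is a small sketch of the dart game’s measurement problem (the coordinates and precisions are made up for illustration):

```python
def dart_payout(x, precision):
    """x in [0, 1) is the dart's horizontal position; the left half pays $1.
    With measurement precision `precision`, a dart within that distance of
    the center line at 0.5 can't yet be classified, so no payout is due."""
    if abs(x - 0.5) < precision:
        return None  # undecided at this precision -- keep refining
    return 1 if x < 0.5 else 0

# Refining the measurement shrinks the undecided region but never empties
# it: a dart at exactly 0.5 stays undecided at every finite precision.
for precision in (1e-1, 1e-3, 1e-6):
    assert dart_payout(0.5, precision) is None
assert dart_payout(0.25, 1e-3) == 1
```

The coin game has no analogue of the `None` case: every state is decidable after finitely many observations.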

So once we’ve found the right topology for the state space, it seems that we may want to require that the payoffs for any well-defined game be continuous on it. (For other reasons, like representing our limited capacity to know about the world, we might want to require that any isolated points (where the payoff can jump) must have positive probability, like the finite numbers of coin flips in the St. Petersburg case, but not the infinite one.)

In conversation today, Alan Hájek pointed out another sort of discontinuity that can arise in decision theory, namely one where payoffs are discontinuous on one’s actions. In the cases above, what’s discontinuous are the payoffs within some action I might agree to perform (say, playing the St. Petersburg game, or agreeing to a payoff schedule for a dart throw). But we can run into problems when we’re faced with multiple possible actions. The one that Alan mentioned was where you’re at the gates of heaven, and god offers you the possibility to stay as many days as you like, provided that you give him the (finite) number ahead of time on a piece of paper (a really large one, that you can compress as much text on as you want). So you start writing down a large number, say by writing 9999…. No matter how long you keep writing, it’s in your interest to write another 9. However, Alan points out that the worst possible outcome here is that you keep writing 9’s forever and never actually make it into heaven!

To model this in the framework of decision theory, there are a bunch of actions that you can choose from. Action n results in you being in heaven for exactly n days, and then there is one more action, that’s somehow the limit of all these previous actions, in which you never make it to heaven. No state space or anything else is relevant (in this particular case). But on your preferences, action n is preferred to action m iff n>m, except that the limit of these actions doesn’t get the limit of the payoffs. That is, staying there writing 9’s forever doesn’t give you infinitely many days in heaven – in fact, it gives you none! So this decision problem might also be counted as somehow bad on Jaynes’ account, because it involves an essential discontinuity in the payoffs, though this time it is with respect to the space of actions rather than the space of states. (The topology on the space of actions will have to be defined with open sets being possible partial actions, or something of the sort, just as the topology on the space of states is defined with open sets being the possible partial observations, or something.)
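A minimal way to code up this decision problem (a toy model of my own, with a `"write forever"` token standing in for the limit action):

```python
def payoff(action):
    """Days in heaven for each action: writing down the finite number n
    yields n days, while the 'limit' action of writing forever yields 0."""
    if action == "write forever":
        return 0
    return action

# Every finite action n is beaten by n + 1, so no action is optimal ...
assert all(payoff(n + 1) > payoff(n) for n in range(1000))
# ... and the limit of the improving actions gets the worst payoff of all.
assert payoff("write forever") == 0
```

The discontinuity is visible in the last line: the limit of the actions does not receive the limit of the payoffs.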

Maybe this is a problem for Jaynes? This game that god has given you doesn’t seem to be too paradoxical – somehow it doesn’t seem as bad as St. Petersburg, even though it almost puts decision theory in a worse place (no decision is correct, rather than the correct decision being to pay any finite amount for a single shot at a game).

Anyway, I thought there was an interesting distinction here between these two types of discontinuities. I don’t know if one is more problematic than the other, but it’s something to think about. Also, I should point out that a decision problem like this last one seems to have first been introduced by Cliff Landesman, in “When to Terminate a Charitable Trust?”, which came out in Analysis some time in the mid-’90s, I think.

## Maddy on Indispensability

13 06 2006

I was discussing indispensability arguments at the bar this evening with some of the philosophers in Canberra, and an interesting question came up regarding Penelope Maddy’s position on them. I unfortunately don’t have access to a copy of her book Naturalism in Mathematics right now, so I’m half doing this as a reminder to myself to check it out when I get a chance (or in case someone who knows better than me reads this and decides to comment to clear things up).

Anyway, my recollection of her position (in her naturalist phase) is roughly as follows. Quine has pointed out that natural science is a powerful and progressing body of knowledge that has helped us build a tremendous amount of understanding. Therefore, we should adopt the methods of its practitioners (or at least, the methods they follow when doing their best work, not necessarily the methods they say they adopt) when we want to find out what’s really going on fundamentally in the world. Maddy points out that mathematics is also such a body of knowledge, and that when Quine applied the methods of the natural sciences, he ended up with a much weaker theory than mathematicians (or at least, set theorists) want. Therefore, she suggests that when we talk about mathematics, we should adopt the methods of mathematicians – the needs of scientists are neither necessary nor sufficient (nor, perhaps, even relevant) for answering questions about whether various mathematical claims are true.

Of course, the methods of mathematics are fairly restrictive and straightforward, so we can’t even say anything about many supposed ontological questions about mathematics (like whether numbers really exist), and about basically all epistemic questions about mathematics (like how we come to have knowledge about numbers). As a result, these questions are effectively meaningless, because there is no way to answer them. So Maddy’s naturalism is a sort of third way, distinct from both realism and nominalism.

There’s also something misleading, it seems to me, about calling it “naturalism”. She develops it on analogy with Quinean naturalism, but it has important differences. In particular, it says that there is a body of knowledge that is not continuous with the natural sciences, namely mathematics! On at least some ways of putting Quinean naturalism, this is exactly what he wants to reject! (Of course, the alternative bodies of knowledge he was thinking about were things like “first philosophy”, rather than mathematics.)

But now I wonder – since Maddy (as I understand her) accepts something very much like Quinean naturalism about the physical sciences (when considered separately from mathematics), what does she have to say about traditional indispensability arguments? She obviously doesn’t think that they give one reason to believe that numbers and sets really exist (at one point she says something like, “if science can’t criticize, it also can’t support mathematical claims”). However, it seems that if entities of these sorts really are indispensable in doing natural science, then don’t we have scientific reason to say they actually exist, even if not mathematical reason to say so? Just as science makes us say there are electrons and quarks and genes and ions, it also seems to make us say that there are numbers and functions and the like, because all of these entities appear in our best theories. Maybe this is no reason from the mathematical point of view, but don’t we end up in effect with a reason to believe in physical objects with all the properties of numbers and functions and sets and the like? (Of course, these are quite unusual physical objects that have no spatiotemporal location and no causal properties, but science has already told us about strange particles that have no identity conditions and multiple positions (like electrons) or strange causal isolations (like black holes), so the “mathematical” entities mentioned in the theories could be seen as just even stranger physical objects, if Maddy won’t accept them as mathematical objects.)

I’m actually fairly sympathetic to this position – if I believed in the actual indispensability of mathematics, then I would grant mathematical entities exactly this kind of physical existence. But I’m also inclined to think that most people would regard this as a reductio of any position if it made one say that mathematical entities had such existence. Especially if the point of the theory was to remove mathematical existence claims from special philosophical consideration.

But maybe I’m just misreading one or more parts of the theory.

## Linguistics and Philosophy

6 06 2006

I’m off to Australia this evening (I’ll be there until July 7), and it’ll probably be a few days before I get settled in.

Anyway, until then, I was struck by this post by Mark Liberman over at Language Log, where he points out that by several measures, it seems that the field of psychology is around 10 to 100 times larger than the field of linguistics. This quite surprised me, because I was under the impression that linguistics was substantially larger than philosophy (at least, currently). I’m not entirely sure where I got this impression, because when I repeat the sorts of searches that Mark Liberman mentions there, philosophy comes out ahead of linguistics. I must have picked it up when finding papers on Google Scholar and noticing that, for instance, important papers in philosophy like “On What There Is” and “Two Dogmas of Empiricism” have 192 and 649 citations listed respectively, while Grice’s two big papers get 478 and 3487 citations each. I’m sure it’s at least in part because Grice’s papers are somewhat more recent, and thus have been cited by more papers that have made it into Google’s database, but I had also thought that citations by linguists were swamping those by philosophers. It’s also possible that Google just has better coverage of linguistics journals than philosophical ones. But anyway, now I’m wondering, which discipline is larger, and how is it really possible to measure something like that?