Gricean Silences

29 05 2006

This post came out of a conversation I had with Andy Egan and Gillian Russell the other day, and then similar topics came up in Ryan Muldoon's comments on Ed Epsen's paper the next day at FEW. I don't remember exactly how the topic came up, but we were trying to figure out whether silences can carry implicatures – or, to put it more ordinarily, whether you can say something without words. And of course the answer is yes:

Q: “What do you like about John?”
A: [silence]

This one works almost exactly like Grice’s recommendation letter example in “Logic and Conversation”. (By the way, anyone who hasn’t read this paper really should. I think the notion of implicature is one of the most significant advances philosophy has made in our understanding of the world. The paper is widely reprinted, but unfortunately not available online – there’s a two-page excerpt here.) Based on the conversational context, A is expected to make a contribution to the conversation mentioning some positive fact about John. A’s silence violates the maxim of quantity (she hasn’t said as much as is expected), so Q can infer that some other conversational principle (one requiring politeness) must conflict with anything that A would be in a position to say. Therefore, Q comes to believe that there is nothing (or at least nothing relevant) about John that A likes.

But then I realized that we should think about this (and perhaps the original recommendation letter example) a bit more carefully. It seems that the story given above could work in at least two different ways. In one case, A is struggling for an answer, and the silence just comes about because she can’t think of anything she likes about John. In the second case, A knows there is nothing she likes about John and remains actively silent. I think the second case is an example of an implicature carried by a silence, but the first is not.

The explanation of the distinction comes from Grice's earlier classic paper, "Meaning", in which he suggests that speaker A means y by utterance x iff "A intended the utterance of x to produce some effect [y] in an audience by means of recognition of this intention." He comes to this recursive intention account of meaning by way of a bunch of examples, which I think parallel the situation here. If I don't intend you to believe (or consider) something by means of my action, then I didn't mean it, even if it's true. Thus, my silence can reveal my dislike for John, just as an accidentally dropped photograph can reveal where I was the other day, but neither means it. But even performing the action intentionally isn't enough – Grice suggests that showing someone a photograph doesn't count as meaning what is depicted, because my intention plays no essential role in the observer's coming to believe the truth of what is depicted; the photograph would produce that belief on its own.

However, showing someone a drawing can constitute a meaning, since the person has to recognize the intention of the person who made the drawing in order to come to believe in the truth of what is depicted (assuming that this was in fact the intent of the drawer). One reason for this distinction that Grice doesn't discuss in that paper is that only a recursive intention like this can help the speaker and hearer achieve "common knowledge" of the content of the proposition. (That is, not only do you know p, but I know you know p, and you know I know you know p, and …) If the utterance succeeds, then the listener believes that p. But in addition, the listener believes that the utterer intended her to believe that p, so the listener can believe that the utterer believes that the listener now believes that p. But the listener believes that the utterer intended her to believe this too, and the cycle can repeat, generating common knowledge. Common knowledge is important in a lot of acts of communication, and a simple, non-recursive intention can't generate it.
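
Just to make the recursive structure vivid, here is a tiny sketch (my own, with illustrative agent names and an arbitrary cutoff, not anything from Grice) that generates the first few levels of the nested-belief tower a successful utterance is supposed to set off:

```python
# A minimal sketch of how a recognized recursive intention can generate the tower
# of nested beliefs that constitutes common knowledge. The agent names and the
# cutoff depth are illustrative assumptions.

def belief_tower(proposition: str, depth: int) -> list[str]:
    """Generate the first `depth` levels of the nested-belief hierarchy."""
    agents = ["the listener", "the utterer"]  # alternating perspectives
    levels = []
    current = proposition
    for i in range(depth):
        agent = agents[i % 2]
        current = f"{agent} believes that {current}"
        levels.append(current)
    return levels

if __name__ == "__main__":
    for level in belief_tower("p", 4):
        print(level)
    # the listener believes that p
    # the utterer believes that the listener believes that p
    # the listener believes that the utterer believes that the listener believes that p
    # ...
```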

So for a silence to really implicate (in Grice's "speaker meaning" sense) something, it has to be intentionally given, and its meaning must be intentional as well, and so on. This makes it much harder than I initially thought for a silence to mean something, but a slightly more specific version of the original example can fix it.

(Ed’s paper was on zero-knowledge signalling in games of imperfect information, and Ryan pointed out that in Ed’s particular example, one player comes to know that the other player knows some fact, but not because the other player intended her to come to know this. However, it seems that a slightly modified version of the example will put the Gricean condition back in. It was quite an interesting application of the zero-knowledge proof literature in computer science to game theory.)





Motivation for Expected Utility

28 05 2006

FEW just ended, and it was just as exciting as ever. I think it was a bit more international, a bit more interdisciplinary, and substantially larger (in terms of audience) than either of the past two years. Anyway, as seems to happen when I'm at conferences, I've got some more ideas to blog about. This one was something I thought of when writing my comments for Katie Steele's paper on decision theory and the Allais paradox. To start out, here is a presentation of the Allais paradox: In the first decision, one has a 90% chance of $1,000,000 and is choosing whether the other chances should be a 1% chance of $0 and a 9% chance of $5,000,000, or a 10% chance of $1,000,000. The second decision is just the same, but with a 90% chance of $0. The principle of independence suggests that whether the 90% chance is of $1,000,000 or of $0 should make no difference when choosing between the different 10% options – yet many seemingly rational people choose the flat $1,000,000 in the first case and the 9% chance of $5,000,000 in the second.
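
To make the structure of the example easy to check, here is a small sketch of the four gambles (my own code, not Katie's); the payoffs and probabilities are from the example above, and the square-root utility function is just an arbitrary illustrative choice. The point is that the difference between the two options is the same in both decisions, since the common 90% branch cancels, so an expected utility maximizer can never exhibit the standard Allais pattern:

```python
# A minimal sketch of the Allais gambles, showing why any expected-utility
# maximizer must rank the two options the same way in both decisions: the common
# 90% branch contributes equally to each option and cancels out.

def expected_utility(gamble, u):
    """gamble: list of (probability, payoff) pairs."""
    return sum(p * u(x) for p, x in gamble)

def u(x):
    return x ** 0.5  # any increasing utility function will do for the point

# Decision 1: the 90% branch pays $1,000,000.
option_1a = [(0.90, 1_000_000), (0.10, 1_000_000)]              # the flat million
option_1b = [(0.90, 1_000_000), (0.01, 0), (0.09, 5_000_000)]

# Decision 2: the 90% branch pays $0; the remaining 10% is unchanged.
option_2a = [(0.90, 0), (0.10, 1_000_000)]
option_2b = [(0.90, 0), (0.01, 0), (0.09, 5_000_000)]

diff_1 = expected_utility(option_1b, u) - expected_utility(option_1a, u)
diff_2 = expected_utility(option_2b, u) - expected_utility(option_2a, u)
print(diff_1, diff_2)  # equal (up to rounding), whichever utility function is chosen
```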

There are several principles of decision theory that seem fairly intuitive, which I have abstracted from the general notion of "independence", and they seem to lead directly to expected utility theory:

  • The value of a gamble is a real number.
  • Nothing besides probabilities and utilities of outcomes is relevant for the value of a gamble.
  • The value of a gamble is a weighted sum of several contributions.
  • Each contribution is associated with a particular possible outcome.
  • The weight of each contribution is proportional to the probability of the corresponding outcome.
  • The value of each contribution is proportional to the utility of that outcome.

From these principles, it is easy to see that (wherever possible) the value of a gamble is equal to its expected utility (modulo some scalar multiple that applies equally to all gambles).
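
Spelling out how the principles combine (the notation and the constants below are mine, not the post's):

```latex
% A sketch of the arithmetic. Let gamble g have possible outcomes x_1, ..., x_n
% with probabilities p_1, ..., p_n.
V(g) = \sum_{i=1}^{n} w_i \, v_i                      % principles 3, 4: a weighted sum, one term per outcome
     = \sum_{i=1}^{n} (\alpha p_i)(\beta \, u(x_i))   % principles 5, 6: weights and values proportional to p_i and u(x_i)
     = \alpha\beta \sum_{i=1}^{n} p_i \, u(x_i)       % i.e. expected utility, up to the scalar \alpha\beta
```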

These principles seem fairly plausible, but of course there are reasons one might question each. The first principle underlies most traditional decision theory of any sort, even though there are traditional examples (the St. Petersburg game) that seem to contradict it, and one can also come up with actions where the intuitive preference relation between them can't possibly be represented by real numbers.

The second principle can be questioned in cases where one already has a package of gambles, and one wants to amortize risk. That is, if purchasing insurance is to be considered rational, it will be because we care not just about the probabilities and payoffs of the insurance gamble, but also about the fact that we get positive payoffs when something bad happens, and negative payoffs when good things happen.

The third principle is probably fairly easy to question, but I don’t know of any natural way to reject it.

There is one natural way to reject the fourth principle in order to deal with risk-aversion. For instance, in addition to the "local" factors associated with each outcome, one can add a "global" factor associated with the variance or standard deviation of the payoffs of the gamble. We might need to be careful when adding this factor to make sure that we don't violate more fundamental constraints (like the principle of dominance – that if gamble A has a better payoff than gamble B in every state, then one should prefer A to B). In introducing such a factor, we'll have to figure out just what extra factors might be relevant, and how to weight them, which is at least one reason why this option is much less attractive than standard expected utility theory, though it has obvious appeal for dealing with risk-aversion.
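
As a sanity check on this idea, here is a toy sketch (my own construction, not from Katie Steele's paper) in which the global factor is a penalty on the standard deviation of the payoffs; utility is taken to be linear in money, and the weight 0.305 is chosen purely to show that such a factor can reproduce the common Allais choices:

```python
# A toy "global factor" model: value = expected payoff minus a penalty
# proportional to the standard deviation of the payoffs. LAMBDA is an
# illustrative risk weight, tuned only to exhibit the Allais pattern.

def mean_std(gamble):
    """gamble: list of (probability, payoff) pairs."""
    mean = sum(p * x for p, x in gamble)
    var = sum(p * (x - mean) ** 2 for p, x in gamble)
    return mean, var ** 0.5

def value(gamble, risk_weight):
    mean, std = mean_std(gamble)
    return mean - risk_weight * std   # the extra "global" contribution

LAMBDA = 0.305

option_1a = [(1.00, 1_000_000)]
option_1b = [(0.90, 1_000_000), (0.01, 0), (0.09, 5_000_000)]
option_2a = [(0.90, 0), (0.10, 1_000_000)]
option_2b = [(0.91, 0), (0.09, 5_000_000)]

print(value(option_1a, LAMBDA) > value(option_1b, LAMBDA))  # True: take the sure million
print(value(option_2b, LAMBDA) > value(option_2a, LAMBDA))  # True: take the 9% shot at $5,000,000
```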

I don't know if there's any reason one might rationally reject the fifth principle, though presumably it will have to be relaxed somehow if one is to rationally prefer gamble A to gamble B when they are identical, save for a much higher payoff for A than B on a state of probability 0.

The most natural way to relax the sixth may be in conjunction with relaxing the second. Some other factor beyond utility of an outcome may be considered. Another way to relax it would be to make the contribution of an outcome depend not only on its utility, but also on how things could have turned out otherwise on the same gamble. In an example by Amartya Sen cited by Katie, cracking open a bottle of champagne when receiving nothing in the mail may make quite a different contribution to an overall gamble when one could have received a serious traffic summons in the mail than when one could have received a large check in the mail.

To make it clearer that this doesn't only arise when the experience of the event is different, we can consider a version of the Allais paradox with memory erasure. That is, the payoffs are just the same, but in addition to the cash, one has one's memory erased and replaced with the memory of making a gamble with a sure outcome of whatever it is one received. Thus, there is nothing worse about the $0 when one could have had a guaranteed $1,000,000 than about the $0 when one only had a chance of making money. Since we seem to make the same decisions anyway, it seems that a counterfactual factor (what could have happened otherwise), rather than an emotional factor, must enter into the value of the gamble.
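
Here, similarly, is a toy sketch (again my own, not from the post or from Sen) of the counterfactual idea: each outcome's contribution is discounted by how much better one could have done on that very gamble. Payoffs are in millions of dollars, utility is linear, and the penalty function and its weight are chosen only to show that the Allais pattern can be reproduced this way too:

```python
# A toy version of relaxing the sixth principle: an outcome's contribution depends
# on its utility *and* on the best outcome of the same gamble. DELTA and the
# square-root penalty are illustrative choices; payoffs are in millions.

DELTA = 0.25

def value(gamble):
    """gamble: list of (probability, payoff-in-millions) pairs."""
    best = max(x for _, x in gamble)   # how well things could have gone instead
    return sum(p * (x - DELTA * (best - x) ** 0.5) for p, x in gamble)

option_1a = [(1.00, 1)]
option_1b = [(0.90, 1), (0.01, 0), (0.09, 5)]
option_2a = [(0.90, 0), (0.10, 1)]
option_2b = [(0.91, 0), (0.09, 5)]

print(value(option_1a) > value(option_1b))  # True: the sure million wins
print(value(option_2b) > value(option_2a))  # True: the 9% shot at five million wins
```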

These methods of relaxing the fourth and sixth principles violate the notion of independence in different ways (the sixth maintaining a kind of locality, and the fourth adding a global factor), but it turns out that each procedure can model exactly the same decisions as the other. I think the only way to decide between them will be by finding replacements for these principles and seeing which has more natural restrictions.

I hadn't thought much about violations of independence before reading Katie's paper, but I think they might be quite plausible. However, it's interesting to see how some very strong versions of principles like independence lead directly to expected utility, in a way that avoids standard representation theorems and the laws of large numbers.





FEW 2006

23 05 2006

I’ll be busy the rest of the week with the Formal Epistemology Workshop going on here at Berkeley. It should be a lot of fun – sounds like lots of people are coming in this year for it! I’ll post about it afterwards.





Etchemendy on Consequence

15 05 2006

I’m really not totally sure what to make of Etchemendy’s objections to Tarski’s account of consequence in The Concept of Logical Consequence – perhaps I shouldn’t admit that while I’m TAing a class that covers this book, and grading papers about it. In general, the particular points he makes seem largely right, but I’m not really sure how they add up to the supposed conclusion. I suppose this all means that at some point I should put in some more serious study of the book. But the middle of exam week may not be the time for that.

His objections basically seem to be that Tarski’s account is either extensionally inadequate (if the universe is finite) or adequate for purely coincidental reasons; and that it doesn’t seem to be able to guarantee the right modal and epistemic features.

The worry about finiteness runs as follows – Tarski says that a sentence is a logical truth iff there is no model in which it is false. If there are only finitely many objects (including sets and models and the like), then every model has a domain which is a subset of this finite set, so there is some finite n such that no model has more than n objects in its domain. Thus, any sentence that says there are at most n objects must come out logically true. However, intuitively, this is just an empirical matter, and not one that is up to logic alone, so the account must be wrong. Even worse, sentences of this sort can be expressed in a language that doesn't even involve quantifiers or identity, so we can't blame this on some sort of error in identifying logical constants. (One might try to sneak out of this objection by pointing out that the sentences involved always have more than n symbols – so every sentence that would exist in such a case would get the "correct" logical truth-value. However, there are sentences that are true in every finite model, but not in all models, and these raise a similar problem.)
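
For concreteness, here is the familiar first-order way (with quantifiers and identity) of saying there are at most n objects; the post's further point is that analogous sentences arise even in languages without quantifiers or identity:

```latex
% "There are at most n objects": among any n+1 things, two are identical.
% If no model's domain has more than n elements, this sentence is true in every
% model, and so counts as logically true on Tarski's account.
\forall x_1 \cdots \forall x_{n+1} \;\; \bigvee_{1 \le i < j \le n+1} x_i = x_j
```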

However, I think this isn’t really a terrible worry – Tarski’s account of consequence (like his account of truth) makes essential use of quantification over sets. Thus, anyone who’s even prepared to consider it as an account of consequence must be making use of some sort of set theory. But just about every set theory that has been proposed guarantees the existence of infinitely many objects (of some sort or another), so we don’t need to worry about finiteness. Etchemendy suggests that this is putting the cart before the horse, in making logical consequence depend on something distinctly non-logical. But perhaps this isn’t really bad – after all, Tarski didn’t claim that his definition was logically the same as the notion of consequence, but rather that it was conceptually the same. Just because the truth of set theory isn’t logical doesn’t mean that set theory isn’t a conceptual truth – and if some set of axioms guaranteeing the existence of infinitely many objects is conceptually necessary (as neo-logicists seem to claim), then Tarski’s account could be extensionally adequate as a matter of conceptual necessity, even if not of logical necessity.

As for the requisite epistemic and modal features, there might be a bit more worry here. After all, nothing about models seems to directly indicate anything modal or epistemic. However, it does seem eminently plausible that every way things (logically) could have been would be represented by some model. In fact, we can basically prove this result for finite possible worlds using an extremely weak set theory (only an empty set, pairing, and union are needed). It seems likely that the same would obtain among the actual sets for infinite possible worlds as well. However, ZFC doesn't seem to provide any natural way of extending this to all possible worlds – in fact, ZFC can prove that there is no model that correctly represents the actual world, because there are too many things to form a set! Fortunately, this problem doesn't seem to arise for other set theories, like Quine's NF, and certainly not for a Russellian type theory. And even in ZFC, the fact that Gödel could prove his completeness theorem provides some guide – any syntactically consistent set of sentences has a model, so that even if there is no model representing a particular logically possible world, there is at least a model satisfying exactly the same sentences, so that logical consequence judgements all come out right. But that's a bit unsatisfying, seeing as how it makes the semantic notion of consequence depend on the syntactic one.
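
Here is a quick sketch (my reconstruction, not spelled out in the post) of why empty set, pairing, and union suffice for the finite case:

```latex
% With just empty set, pairing, and union we can build a domain of any finite size,
% via the von Neumann numerals:
0 = \varnothing, \qquad k+1 = \bigcup \{\, k, \{k\} \,\} = k \cup \{k\}
% (pairing gives \{k\} and then \{k,\{k\}\}; union collapses the pair).
% Kuratowski pairs \langle a, b \rangle = \{\{a\},\{a,b\}\} need only pairing, and any
% finite collection of already-constructed sets can be assembled by pairing and repeated
% union, so the extensions of the predicates over an n-element domain can be built as well.
```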

At any rate, it seems available to classical logicians to suggest that it is a matter of conceptual necessity that every way things could logically have been is adequately represented by the sets – and thus that Tarski’s account is correct and Etchemendy’s criticisms inconclusive. I’m pretty sure something like this has to be right.

Of course, the non-realist about mathematics has to give a different account of consequence (as Hartry Field tries to do starting with “On Conservativeness and Incompleteness”) – but this will just be part of paraphrasing away all the many uses we have for set theory. This one is remarkably central (especially given that linguists now suggest that something like it is at the root of all natural language semantics) and so it will be the important test case in a reduction. But the criticism will be substantially different from Etchemendy’s – before the paraphrase, the non-realist can still make the same arguments I’m suggesting above.

(Of course, if Etchemendy’s criticisms are right, they could themselves form the starting point for a useful dispensability argument for mathematical non-realism – if we don’t need sets for consequence, then the strongest indispensability consideration is gone, and we’re just left with the physical sciences, all of which seem to require something much weaker than full set theory.)





Formal Philosophy

11 05 2006

I went to Stanford yesterday with a bunch of my colleagues for a relatively informal workshop put together by Johan van Benthem. In addition to the Berkeley and Stanford students, there were visitors from Amsterdam and Paris in town, so it was quite a nice chance to meet people working in formal areas of philosophy in a variety of locations. Because the people presenting were mostly Johan's students and Branden Fitelson's students, there was an interesting mix of talks on dynamic epistemic logic and talks on probability. Future work to synthesize these two approaches to representation of uncertainty should be quite interesting.

It’s a shame that there haven’t been more interactions like this between the Berkeley and Stanford philosophy departments, but I guess it’s because we’re extremely far apart for two universities in the same metropolitan area. Anyway, it sounds like more such things will go on in future – and this was a great warmup for FEW!





Monism and the Possibility of Anti-Gunk

8 05 2006

Here are some thoughts I had, inspired by Jonathan Schaffer's talk at the APA a month and a half ago. Basically, I point out that if gunk is a problem for the nihilist, then anti-gunk (if it makes sense) is a problem for the monist. But gunk might not be a problem for the nihilist in the end.

The paper was called "From Nihilism to Monism", and in it he argued that any argument leading one to believe that there are no composite objects should in fact push one all the way to believing that there is only one object – the entire universe. Unfortunately, I didn't stick around for the comments by Ted Sider and Ned Markosian, which I'm sure shed some light on very interesting issues. However, I'm wondering whether some of the arguments could be turned around. For instance, the seeming possibility of gunk (stuff such that every part of it has even smaller parts) can't be paraphrased by the minimal nihilist (someone who thinks there are just lots of small simples), though it can by the monist (someone who thinks there's just one big simple).

But what about the possibility of anti-gunk? Just as we used to have an unquestioned assumption that every object has atomic parts, don’t we also have an unquestioned assumption that there is a biggest thing, that is not a part of anything else? For instance, if everything that there is has a finite size, but there is unrestricted (finitary) composition, we could have bigger and bigger things without end. This possibility could not be paraphrased by the monist, but the minimal nihilist could deal with it just fine.

The standard representation of unrestricted (perhaps finitary) composition has all the objects as elements of a boolean algebra (the bottom element is the only one that doesn't represent an object). A relatively straightforward theorem shows that a dense subset of the algebra suffices to represent every element of the algebra as a set of parts. If the algebra is atomic, then the set of atoms will be a dense set. But, as Ted Sider points out in "Van Inwagen and the Possibility of Gunk", if it's atomless (or has an atomless part) then any dense set will contain two elements, one of which is a part of the other. This is incompatible with the nihilist position, on which no object is a part of any other.
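
Here is a quick reconstruction (mine) of why an atomless part defeats any dense set of nihilist-friendly simples:

```latex
% Call D a dense subset of the algebra B if every nonzero b in B has some nonzero
% d in D with d \le b. Suppose B has an atomless part: some a \ne 0 with no atoms
% below it. Then
\exists d_1 \in D : 0 < d_1 \le a
\quad\Rightarrow\quad \exists c : 0 < c < d_1 \ \ (\text{$d_1$ is not an atom})
\quad\Rightarrow\quad \exists d_2 \in D : 0 < d_2 \le c < d_1 ,
% so D contains two elements, d_2 and d_1, one of which is a proper part of the other.
```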

The dual worry for the monist is if we drop the top element of the boolean algebra, just as we drop the bottom element. Or perhaps if we consider just some distributive lattice, rather than a boolean algebra. There’s no a priori reason why the objects should form a boolean algebra rather than one of these other structures (at least, not obviously, not any more than that the algebra should be atomic).

There might be a paraphrase strategy, where we just talk about some fictional largest thing. But maybe we can do the same in the other direction – even if there are no atoms, we can talk as if there are some! Just as we can fictionally add a top element to the algebra, we can fictionally add elements at the bottom of chains – that is, instead of considering elements of the algebra, we can consider infinite descending chains of elements. Any element can then be represented as the set of all chains containing it. This is exactly analogous to the process by which we represent real numbers as Dedekind cuts or Cauchy sequences of rationals – we add ideal elements at the limits of chains, even though in the "actual" structure, there are no limits.

Sider says, "A hunk of gunk does not even have atomic parts 'at infinity'; all parts of such an object have proper parts." However, for any boolean algebra in which there is gunk (i.e., some non-atomic object), there is an atomic boolean algebra in which it can be embedded. Every object in the old algebra will be represented as some object in the new one containing continuum-many atoms. This might raise some concern, because the atomic algebra will have, in addition to the atoms, many new objects (like the finitary joins of atoms, and possibly some countable joins as well) – but the nihilist can say that the reason we don't talk about those in ordinary language is that our grasp on the world only gets really large, crude chunks, rather than anything closer to the atoms – this is why the world looks gunky.
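
For what it's worth, one standard way to make the embedding claim precise – not the chains construction described above, but in the same spirit – is the Stone representation (my addition, not something the post invokes):

```latex
% Let U(B) be the set of ultrafilters on the boolean algebra B, and map each element
% to the set of ultrafilters containing it:
s : B \to \mathcal{P}(U(B)), \qquad s(b) = \{\, F \in U(B) : b \in F \,\}
% s embeds B into the atomic algebra \mathcal{P}(U(B)); when B is atomless, each nonzero
% b lies in at least continuum-many ultrafilters, matching the claim that every old object
% corresponds to a new object containing continuum-many atoms.
```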

Thus, the possibility of gunk isn’t really much of a worry for the nihilist. Sider says, “Surely there are both atomistic possible worlds and gunk worlds, and for that matter in-between worlds with both atoms and gunk.” But I suggest that the nihilist could say that there are only atomistic possible worlds – the ones we might ordinarily call gunk worlds are really just ones in which all our ordinary predicates pick out continuum-sized sets of atoms with certain uniformity properties.