End of APA

31 12 2005

There were a bunch of other talks I went to, but for some I didn’t understand the material well enough, for some I didn’t take good enough notes, and for some I was just too tired. Also, I’ll let Greg Frost-Arnold discuss his own paper (which was an interesting suggestion for a semantics of non-uniquely-referring singular terms, in a sense somewhat dual to that of free logic for non-referring singular terms). The session on Jody Azzouni’s book (with Mike Resnik, Gideon Rosen, and Otávio Bueno criticizing, and Mark Colyvan chairing) gave me a lot of material to think about, and I’ll probably mention it several times over the course of my next several posts, rather than making one post about it.

And the “informational session” on epistemic modals by Kai von Fintel, Thony Gillies, and John MacFarlane was also excellent. I’ll let one of them with a blog discuss it first (especially since Brian Weatherson had to fill in at the last minute for Kai von Fintel – which he did excellently), unless they choose not to. At any rate, they discussed three very different aspects of the semantics of epistemic modals (“might” and “must” in claims like “oh, I see your umbrella is wet, so it must be raining” and “as far as I know, he might be in Boston”). Thony Gillies tried to claim that the models of information states he was talking about were very sexy, but I think he was beaten by John MacFarlane, who pointed out that his theory of relativism about truth might better be described as “bicontextualism”.

And it was also great to meet the many people that I did in different sessions and at the “smokers” in the evening (for non-attendees, that’s apparently what everyone calls the “reception” in the main ballroom on the middle two nights), including a bunch of readers of this blog, as well as several other bloggers. I can only hope that the conference is even marginally as fun as this one in the year when I have to go on the job market and wear a suit all day, run back and forth between interviews instead of sessions, and impress people into hiring me rather than just meeting people casually.


APA Blogging: Dorr, Bennett

31 12 2005

Cian Dorr’s talk, “Of Numbers and Electrons”, on Thursday morning made me realize that we’ve got a lot of the same metaphysical goals. The point of his talk was to show that a weakened (and therefore tractable) version of Hartry Field’s program will be able to support realism about theoretical entities of physics and anti-realism about mathematical entities. The scientific anti-realist might suggest a theory like the following:
BAD: As far as observable matters are concerned, it is just as if T
where T is our actual scientific theory, that talks about electrons and other unobservables. However, almost everyone agrees that such a theory is bad (hence the name Dorr has given it). The mathematical realist then claims that the mathematical anti-realist would have to give a theory like:
AS-IF: As far as the concrete world is concerned, it is just as if T
where T is our actual scientific theory, that talks about functions and numbers and other abstract entities. Dorr proposes an alternative.

APA Blogging: Rabin

28 12 2005

Michael Rabin started his talk by mentioning that the traditional picture of proof says that in principle a proof is formalizable, can be verified in an automated way, and is reproducible, publishable, and transferable (that is, if I have a proof, then I can give it to you and then you will have a proof). In practice, this isn’t entirely correct, because actual proofs are rarely formalized, and are normally checked by a social process rather than an automated one. Apparently there have been case studies showing that even restricting attention to the Hilbert problems, a large number of results have been “proven” once, and then years later reproven in a way showing that the initial proof was incorrect.

He argued that with the rise of new methods of proof (some of which I discussed earlier), there are even more differences – now, there can be proofs that are non-transferrable and non-publishable.

APA Blogging: Hamkins

28 12 2005

Joel David Hamkins gave a talk last night on the modal logic of forcing, based on work he had done with Benedikt Löwe. (The talk isn’t listed here yet, but several related ones are.) He said that the aim was to do for forceability what Solovay had done for provability. The idea is that if M is a model of ZFC, and G is some generic filter over some partial order in M, then M[G] is a model of ZFC that is accessible from M, because M has names for all the elements of M[G], and can prove many of the logical relations between facts about M[G] (in fact, in many cases it knows the truth value in M[G] of every sentence with no free variables). Using this accessibility relation, with the “worlds” being models of set theory, we can then define a modal logic. I’ll write []p for the sentence of set theory (which is in fact standardly expressible) saying that p is true in every generic extension of the actual universe, and <>p for the sentence saying that p is true in some generic extension.
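In modal terms (my own summary of the setup just described, not notation from the talk):

```latex
% Worlds: models M of ZFC.
% Accessibility: M sees N iff N = M[G] for some M-generic filter G
% on a partial order in M.
\Box p \;\text{abbreviates:}\; p \text{ holds in every generic extension } M[G],
\qquad
\Diamond p \;\text{abbreviates:}\; p \text{ holds in some generic extension } M[G].
% As usual, the two operators are dual:
\Diamond p \;\leftrightarrow\; \neg \Box \neg p.
```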

APA, holidays

24 12 2005

I’ve noticed that readership dropped this week, understandably, with people traveling for the holidays, and the semester ending in North America. (I suppose it ended a while ago in Australia – I have no idea how it works in Europe, and especially Latin America, Africa, and Asia, though I don’t get many hits from those areas.)

Anyway, I’ll be in New York next week for the APA Eastern division conference. I told Richard that I’ll try to see what I can do about liveblogging some of the talks (well, during free periods in the afternoons or evenings, not actually during the talks), though it may end up having to wait until after the thing’s over. So I’ll either post nothing next week, or a whole bunch between the 27th and 30th – we’ll see.

Happy Holidays!


23 12 2005

I’ve now put up a draft of my expository talk on forcing, which I gave a month ago to the math grad students at Berkeley. I’m hoping that it should make basic independence proofs intelligible to math grad students, and to interested philosophers with a fairly technical disposition. If you read it, certainly leave comments here, or e-mail me (easwaran AT berkeley DOT edu), with any questions, unclarities, inaccuracies, or anything else. I think the first page has a pretty pompous tone, so I should probably change that, but I’m not sure about the rest.

Anyway, the reason I decided to write it up (apart from being able to explain this material more clearly to people in future discussion and talks), is because it’s seemed to me that there’s no accessible introduction to this stuff available (EDIT: since making this post, I noticed that Ars Mathematica mentioned “Forcing for Dummies” by Tim Chow – I’m reading it now). All the set theory books seem to either just do basic stuff with ordinals and cardinals (excluding forcing and large cardinals and determinacy and the like) or put forcing at least 100 or 200 pages in and rely on a lot of the material that’s discussed in earlier chapters. However, I recently discovered (thanks to the latest issue of Phi-News, which I ran into through a post by Gillian Russell) John Bell’s book Set Theory: Boolean-Valued Models and Independence Proofs. This book presupposes some amount of familiarity with set theory, but it jumps into this material right away. And I’ve found it much easier to read than the relevant chapters of Jech or Kunen (but perhaps that’s at least in part because I spent so much time in August and September going through the relevant chapters of Jech). So there’s not as much of a hole to fill as I thought, but I’m trying not to presuppose any theorems of set theory beyond Russell’s paradox.

Bell’s book is also quite interesting to me because it presents the material entirely in the framework of boolean-valued models, rather than forcing. The results and the methods are almost entirely equivalent, but the formulation is different. The method of forcing requires the existence of countable transitive models of ZF (which aren’t guaranteed by Con(ZF)), but then gives a standard model-theoretic consistency proof by explicitly creating a model of the new theory. The method of boolean-valued models works on the universe as a whole, rather than on some subset of it. But as a result, it doesn’t actually construct the domain of some structure for the theory – instead it gives a proper class with some boolean-valued (rather than true/false-valued) relations on it representing identity and set membership, and shows that the set of formulas receiving value “1” must be consistent, and can be made to include theories of appropriate sorts. The boolean-valued approach has the advantage of making all the calculations of “truth-values” for sentences much easier, but the disadvantage of making the model somehow “blurry” and indistinct. Forcing, on the other hand, gives a clear model, in exchange for some extra calculational difficulties.
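To give the flavor of those “truth-value” calculations (these are the standard clauses for a model valued in a complete boolean algebra B; my notation, not necessarily Bell’s):

```latex
% Boolean truth values ||.|| in a B-valued model,
% B a complete boolean algebra:
\|\varphi \wedge \psi\| = \|\varphi\| \wedge \|\psi\|, \qquad
\|\neg\varphi\| = \|\varphi\|^{*} \quad (\text{boolean complement}),
\|\exists x\,\varphi(x)\| = \bigvee_{a} \|\varphi(a)\|, \qquad
\|\forall x\,\varphi(x)\| = \bigwedge_{a} \|\varphi(a)\|,
% where a ranges over the B-valued universe.  A sentence counts as
% "true" in the model when its value is 1, the top of the algebra.
```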

I’ve always felt more comfortable with an approach highlighting boolean-valued models much more than Kunen does, and probably even a bit more than Jech, but Bell’s approach has felt alien in that it doesn’t mention what seems to be the more standard approach at all until page 88. At any rate, it’s been an interesting read so far.

Joyce on Evidential Weight

17 12 2005

A new paper by Jim Joyce, “How Probabilities Reflect Evidence” discusses more clearly some of the issues I mentioned in a previous post, where I suggested that Henry Kyburg opposed subjective probabilities because he misunderstood some of the ideas of Bayesian epistemology. What I was talking about there looks just like the contrast between what Joyce calls the “balance” and the “weight” of evidence. He also mentions “specificity” of evidence, and shows that Bayesian epistemology can deal with this as well. All three of these distinctions are made in examples where the agent has hypotheses about objective chance processes, but I’m sure that some of this can eventually be generalized beyond those circumstances. Anyway, it’s quite an interesting paper – I was especially intrigued by the mention of some attempts to avoid something like Bertrand’s paradox for the Indifference Principle (which Joyce calls “The Principle of Insufficient Reason”) about randomly generated squares.

(Thanks to Brian Weatherson and Jon Kvanvig for pointing me to the relevant issue of Philosophical Perspectives. I might as well also mention now that I’ll be on a panel about philosophy blogging with both of them, as well as Gillian Russell, at the Pacific APA in Portland in March.)

Realism vs. Anti-Realism or Plenitude vs. Non-Plenitude?

15 12 2005

As suggested at the end of my last post, I’m beginning to think that a more important issue in the philosophy of mathematics than the question of whether or not mathematical entities exist is the question of whether questions independent of ZFC (or PA, or some suitable other theory) can be decided. The former question is that of “realism in ontology”, while the latter is more like the “realism in evidence” that I once mentioned. (And there’s also traditionally the question of “realism in truth-values”, about whether or not these statements have truth-values independent of our abilities to come to know them.) Both platonists and anti-platonists have taken both answers to the question of “realism in evidence”.

Gödel motivated his search for new axioms in platonism, but it was platonism of a very definite sort, in which there was one universe of sets that our theory is aiming to describe. The new axioms were supposed to be further truths about this universe. The natural contrast at the time was with something like the formalist position, on which any consistent set of axioms was as good as any other, so that if our interest was in ZFC, then any extension of it was as good as any other. Later, Hartry Field developed a more sophisticated form of anti-realism, though he too has suggested that there is very little we can say beyond ZFC.

Since then, people like Ed Zalta and Mark Balaguer have suggested that there is a sense of platonism on which all consistent theories describe actual universes of mathematical entities. This view has been called “plentiful platonism”, “full-blooded platonism”, and “plenitudinous platonism”, among others. Traces of this view can be found probably at least as far back as Carnap’s “Empiricism, Semantics, and Ontology”. For these philosophers however, the plenitudinousness is more important than the platonism – both have suggested that an anti-realist interpretation of their view is plausible as well.

I think there are problems with such views, and perhaps the most prominent defender of the non-plenitudinous views is Penelope Maddy, whose ontological views are at this point also neither clearly platonist nor anti-platonist. (More accurately, she seems to be against ontological claims of either sort.) Interestingly, this evening I heard a talk by Daniel Isaacson (from Oxford), advocating a sort of structuralism that he suggested would also support the search for new axioms. Although the model of mathematics may not be unique, he suggests that the structure up to isomorphism should be (i.e., the theory should be categorical), and thus there is at least a truth-value to statements beyond those of first-order ZFC, even if we can’t immediately find out what that truth-value is.

I still want to defend anti-platonist views of the ontology of mathematics, but questions about the plenitude or not of mathematical entities (or stories) seem to be more pressing, because they affect the actual mathematical practice of set theorists. Now that set theory has pressed so far beyond the widely accepted axioms of ZFC, this question is taking over some of the importance of the traditional foundational questions. We are all fairly confident that mathematics will not turn out to be unjustifiable (though some people I know have suggested they think ZFC might be inconsistent), so one way or another the foundational issues can probably be either resolved or ignored. But whether set theorists are doing anything mathematically worthwhile is a more controversial question.

What’s the Difference between Realism and Anti-Realism?

11 12 2005

One of the debates in philosophy of math that I’m quite interested in is the question of whether mathematical objects actually exist or not. This debate seems to have been one of the most central ones in the field in the last several decades. However, mathematicians tend to dismiss this debate, though they do care about some others, about methods of proof, justification of axioms, the role of explanation, and the like. Many philosophers often feel the same way, both about this ontological debate, and about other debates in analytic metaphysics.

John MacFarlane has pressed me on several occasions with a worry about something like this, and a related point is discussed in the introduction to Hartry Field’s Reason, Mathematics, and Modality (I think). If someone asserts that a very skilled detective lived on Baker Street in London in the 19th century, but then says that this assertion was meant in a merely fictional way, then there is a clear change in my reaction to the assertion. Instead of verifying it by looking at real birth records and legal histories and the like, I investigate fictional works (in this case by Conan Doyle). Instead of basing historical arguments on the facts mentioned, I base arguments about the fictional world and the like. However, if a mathematician tells me that every continuous closed curve cuts a plane into two disjoint connected regions, and then a philosopher tells me that this assertion was made merely within the fiction of mathematics, it’s not clear what difference this could make in my relation to the assertion. In either case I would verify it by proofs using the axioms, and make the same physical and mathematical applications of the theorem. My acceptance or non-acceptance of this philosophical thesis will not be manifest in any of my actions (other than my bestowing or withholding the honorific “exists”, or “literally exists” or the like). So something about the debate seems potentially misguided.

Worries like this may be behind Jody Azzouni’s assertions in Deflating Existential Consequence that there is no rational argument in favor of any particular criterion for ontological commitment, though we as a community have chosen to adopt ontological independence as such a criterion. He is here primarily concerned with mathematical entities, but also theoretical entities of other kinds as well. These seem to be precisely the kinds of entities where the above worry is strongest. Once he has adopted this criterion, then claims of mathematical anti-realism can be manifested in the kind of free-wheeling postulation of entities that he claims is characteristic of mathematics (and other “ultrathin posits”, though I think there’s room to contest this claim about mathematics). This postulation is, I think, what he calls “ontological dependence”, and is characteristic of fictional entities and other paradigmatic examples of non-existent posits. If this is right, then realism about mathematics would be manifested by trying to establish what he calls either thick or thin epistemic access to mathematical entities.

This may not be the best way to cash out the distinction between a commitment to the existence and non-existence of entities in reality, but however the distinction is made, I think we can get implications for the fictionalist position: if a philosopher manifests her belief in the real existence or non-existence of objects in a certain way, then she should manifest her belief in the fictional existence or non-existence of objects in a similar way. For instance, on Azzouni’s criterion, a fictionalist about mathematical objects should look for fictional epistemic access of either a thick or thin nature. Since this doesn’t seem to be plausible, Azzouni would have to say that even in most reasonable fictions, mathematical objects don’t exist.

However, someone with a more Quinean criterion might be able to take a fictionalist position. If our existence criterion is “playing an explanatory role in our best theory of the world”, then realist truth about mathematics would make verification dependent on applicability in scientific theories. (We don’t obviously seem to do this, which is why Maddy and others reject a Quinean realism about mathematics, but I think that we may have done this for some small number of axioms, so that it’s not obvious whether or not we have manifested a commitment to realist truth in this way.) Fictionalist truth would be manifest in an attempt to show that mathematical entities fictionally explain our observations – and I think this is exactly Hartry Field’s project. This makes sense of the fact that Field seems to turn the Quinean arguments on their head to say that mathematical objects don’t exist actually, but merely fictionally.

On a more Hilbertian criterion, that every postulated set of axioms describes some objects, it seems that there could be no reasonably different fictional manifestation of acceptance of an existence claim. Thus, for someone like Ed Zalta, a worry like John MacFarlane’s would be relevant. But this seems to be ok for him, because he has described his view both as a sort of platonized naturalism (which I take to be realist), in “Naturalized Platonism vs. Platonized Naturalism”, with Bernard Linsky, and also as a sort of nominalism, in “A Nominalist’s Dilemma and its Solution”, with Otávio Bueno.

Thus, the burden is on any such philosopher to show that mathematicians do in fact manifest their acceptance of mathematical statements in the way that the philosopher says they should (whether realist, fictionalist, or other). The difference between Quine and Field is just such a debate, as is the disagreement between Azzouni and the platonist Zalta. However, these two debates are somewhat orthogonal to one another, as they take acceptance of a statement to be manifested in a different way, so their disagreements may be merely verbal, as someone like MacFarlane might worry. But at any rate, those involved in these debates do seem to be engaged in the project of showing that mathematicians behave the way their theories predict, so MacFarlane’s worry doesn’t seem to damage any of these projects directly.

Large Cardinals and their Justifications

5 12 2005

Most mathematicians are willing to use ZFC (or something fairly similar) as a foundation for their work. However, since Gödel, we know that these axioms are essentially incomplete – that is, any consistent, recursive extension of them will still leave various statements undecided. Some people seem willing to say that this just means that there is no fact of the matter about statements that go beyond ZFC, but I think this is just too hasty.

After all, if we are platonists, then we think that ZFC is true, and thus consistent. And if we are fictionalists, we at least think that it’s a good story, but part of being a good story seems to require consistency. In fact, on just about any reasonable view of mathematics, it seems that however much we can actually say to be the case should at least be consistent. Thus, whatever theory T we use, we should probably be willing to say “T is consistent” as well.

Now, “T is consistent” is not itself a mathematical statement, but there is a substitute, which I will call “Con(T)”, which states that no natural number has a certain property, which we intuitively understand as saying that it codes a proof from T of a contradiction. Whatever we might or might not be able to say about the natural numbers, I think we understand them well enough to say that this proof-coding mechanism is actually correct. (You can think of this coding as the way that a computer codes text, and can reason syntactically about this text to see which strings are correct proofs in our formal system – there’s another coding that’s easier to work with and is therefore more standard in the literature, involving powers of primes.) Thus, if T is an appropriate formalization of some part of the truths (or useful fictions, or whatever) of mathematics, then Con(T) should be as well.
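As a toy illustration of the prime-power coding just mentioned (my own sketch, with an arbitrary choice of symbol numbering rather than any textbook’s official one): a finite sequence of positive symbol codes can be packed into a single natural number and unpacked again, which is what lets statements about strings and proofs be rephrased as statements about numbers.

```python
def primes():
    """Generate primes 2, 3, 5, ... by trial division (fine for a toy)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n**0.5) + 1)):
            yield n
        n += 1

def godel_number(symbol_codes):
    """Code a sequence of positive integers c1, c2, c3, ... as
    2^c1 * 3^c2 * 5^c3 * ...  (codes must be positive, so the
    length of the sequence is recoverable from the factorization)."""
    g = 1
    for p, c in zip(primes(), symbol_codes):
        g *= p ** c
    return g

def decode(n):
    """Recover the sequence by reading off the prime exponents of n."""
    seq = []
    for p in primes():
        if n == 1:
            break
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        seq.append(e)
    return seq

# Example: the sequence [3, 1, 2] becomes 2^3 * 3^1 * 5^2 = 600,
# and factoring 600 recovers [3, 1, 2].
print(godel_number([3, 1, 2]))   # 600
print(decode(600))               # [3, 1, 2]
```

Syntactic operations on strings (like “this number codes a proof of a contradiction”) then become arithmetical properties of the coded numbers, which is what makes Con(T) a statement about natural numbers.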

Now, by Gödel’s Completeness Theorem (which requires some fragment of ZFC to prove), we know that if Con(T) is the case, then there is actually some set and a collection of relations and functions on that set that can be used as the interpretation of the symbols in T to make it come out true. That is, any consistent theory (according to the numerical coding) has a model. Thus, if we think that ZFC+Con(ZFC) is a reasonable formulation of some part of mathematics, then there must in addition be some model of ZFC. If in addition, we assume Con(ZFC+Con(ZFC)), then there is a model of ZFC, which itself contains a model of ZFC.

Now, these models of ZFC may bear very little resemblance to the “real thing”. In particular, the symbol for the set membership relation may be interpreted as some relation that has no connection to the real thing. In fact, there is a model whose domain is just the natural numbers, and whose “set membership relation” is some relation among numbers. But if there is an inaccessible cardinal I, then there is in fact a set (called V_I) such that using the actual set membership relation on this domain creates a model of ZFC. This set contains all members of its elements, so it actually “knows” what sets they are (unlike one of those models whose “sets” are natural numbers, and assigns them elements in some seemingly arbitrary way), and also contains the actual powersets of its elements (and thus all their subsets, not just some of them), and thus has a lot more properties in common with the whole of mathematical reality than the other sorts of models of ZFC. In fact, it contains all the natural numbers and knows which set is the set of all natural numbers, so it knows any true statement (or true according to the fiction of mathematics, or whatever) of the form Con(T). So if there is an inaccessible cardinal, then V_I is a model of ZFC, so Con(ZFC) is true, so V_I is also a model of ZFC+Con(ZFC), so Con(ZFC+Con(ZFC)) is true, so V_I is also a model of ZFC+Con(ZFC+Con(ZFC)), so Con(ZFC+Con(ZFC+Con(ZFC))) is true, etc. So finding a model of this appropriate sort guarantees a lot of consistency statements.
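For reference (this is the standard definition, which the post takes for granted): a cardinal is inaccessible when it is uncountable, regular, and a strong limit.

```latex
% k is inaccessible iff:
\kappa > \aleph_0, \qquad
\operatorname{cf}(\kappa) = \kappa \quad (\text{regular}), \qquad
\forall \lambda < \kappa \; (2^{\lambda} < \kappa) \quad (\text{strong limit}).
% For such a k, the stage V_k of the cumulative hierarchy is closed
% under power sets and under replacement, which is why (V_k, \in)
% satisfies ZFC with the genuine membership relation.
```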

As I said above, we have good reason to adopt all these statements. In addition, for any (even non-recursive) combination of consistency statements, a model of the form V_k satisfies all of them, so in some sense, saying that a model of this form exists just says that these non-recursive sets of axioms are consistent, just as the statement Con(T) said that a certain recursive set of axioms was consistent. Thus, we seem to have good reason to adopt the existence of an inaccessible as well as all the consistency statements.

Of course, once we have adopted ZFC+Inaccessible, we can start the whole process all over again, and eventually reach ZFC+”There are at least two inaccessibles”. Similar arguments get us arbitrarily large numbers of inaccessibles (even into infinite orderings of them).

I’m told that larger large cardinal axioms state similar properties about the universe and guarantee that we can bring them down to models of a certain form, extending Gödel’s Completeness Theorem.

Now, I’ll consider the endpoint of all this. The collection of all true (or fictional, or whatever) statements of set theory should be consistent. Thus, to extend Gödel’s Completeness Theorem, there should be some model of the form V_k that satisfies all true statements of set theory, i.e., is elementarily equivalent to the universe as a whole. Now, the statement that such a model exists is certainly not consistent if it can be stated (because it would violate Gödel’s Second Incompleteness Theorem, saying that no theory can assert its own consistency, or equivalently the existence of a model of the entire theory). Perhaps more relevantly, I think it can’t even be stated, because Tarski showed that we can’t define the truth-predicate for a model inside that very model, so we can’t state that a model of “all true statements” exists. So this statement can’t be adopted as a “final large cardinal”.

However, even if it could be stated, or if it were consistent, we might still want to go farther. Since the universe contains an elementary submodel of this form, say V_a, then V_a must also contain such a model, say V_b. Thus, there should be a and b such that V_b is an elementary submodel of V_a. And this statement can in fact be expressed in the language of set theory.
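In symbols (my rendering of the claim above), the proposed axiom is:

```latex
% There are ordinals a > b with V_b an elementary submodel of V_a:
\exists a \, \exists b \, \bigl( b < a \;\wedge\; V_b \prec V_a \bigr).
% This IS first-order expressible in set theory: V_a and V_b are sets,
% and satisfaction for set-sized structures is definable, so "V_b \prec V_a"
% can be written out -- unlike "V_b is elementarily equivalent to the
% whole universe", which Tarski's theorem blocks.
```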

I don’t know if this statement can be shown to be inconsistent (which would call into question this means of justifying large cardinals). It might also be equivalent to some already-known large cardinal axiom (my friend Adam Booth suggested to me that it sounds like it could be a consequence of the existence of an inaccessible that is the limit of a set of measurables). If anyone with a relevant background in set theory could tell me, that would be great.

Anyway, all this is just one means of justifying large cardinal axioms, but it seems to make sense to me. It also has the added benefit of not requiring a platonist view of mathematics, but works also on a fictionalist view, and probably a variety of other views as well. All that is needed is to assume that whatever set of axioms we should adopt should be consistent, and that the set of natural numbers that our axioms describe should satisfy Con(T) iff T is actually consistent. Of course, there will still be further statements that we won’t be able to decide with a principle like this, but it gives us a means of going far beyond ZFC. Recent work of Hugh Woodin and others suggests, however, that it won’t be enough to settle the Continuum Hypothesis.