## Dimensionless Constants

21 09 2006

Over at the n-category cafe, John Baez discusses dimensional analysis in physics, as the beginning of a “collection of wisdom on gnarly issues in physics”. (It sounds like “gnarly issues” means “philosophical issues” – so philosophers of physics should definitely go contribute!)

Many quantities in physics (like the speed of light) inherently carry with them certain “dimensions” (in this case, length divided by time). In general, physicists can solve a lot of problems quickly by noting that it only makes sense to add quantities that have the same dimensions, and that in many situations a single dimensionless constant will be the only relevant factor. For instance, suppose you know that the only factors relevant to the time it takes an object to fall to the ground are its initial altitude and the acceleration due to gravity, and that the dimensions of these three quantities (fall time, altitude, acceleration) are time, length, and length divided by time squared. Then you can quickly see that when the gravitational acceleration is fixed, the time it takes to fall is proportional to the square root of the initial altitude.
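A quick numerical check of that scaling claim (a toy sketch; `fall_time` is a hypothetical helper of my own, and air resistance is ignored):

```python
import math

def fall_time(h, g=9.8):
    """Time to fall from altitude h under constant acceleration g.
    Dimensional analysis alone forces t = k * sqrt(h / g) for some
    dimensionless constant k; solving the equations of motion gives
    k = sqrt(2)."""
    return math.sqrt(2 * h / g)

# Quadrupling the altitude doubles the fall time, whatever the units:
print(round(fall_time(40.0) / fall_time(10.0), 9))  # 2.0
```

The dimensionless constant sqrt(2) is exactly the part that dimensional analysis cannot determine, which is what makes the question about why such constants are constant interesting.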

John Baez points out that this sort of reasoning seems to beg a question, in that we have no clear justification for why dimensionless quantities in our theories are always constant. To put it more perspicuously, some constants that appear in our theories (like the speed of light and Newton’s universal gravitational constant) have dimensions associated with them, and their numerical values therefore depend on the choice of units (say, feet versus meters for length, and seconds versus minutes for time). However, other constants (like the “fine structure constant”, which is the square of the charge on the electron divided by 4π times the permittivity of the vacuum, the reduced Planck constant, and the speed of light) are dimensionless, and their numerical values (in this case approximately 1/137) don’t depend on our choice of units. Why do physicists find the latter particularly compelling?

At any rate, I found that this stuff all made more conceptual sense after I had read Hartry Field’s Science Without Numbers a few times. In this book, Field argues that numbers don’t really exist – they (like all mathematical abstractions) are just useful fictions that let us extend our physical theories in ways that make calculations much easier. Just as Hilbert showed how to properly axiomatize geometry in terms of betweenness, distance-congruence, and identity of points and lines without talking about real numbers or coordinates or actual distances or the like, Field shows how to add a few more basic relations on spatial points and axiomatize Newtonian gravitational mechanics without any talk of real numbers or coordinates. When thought of like this, it seems to become more clear why the laws have to be phrased in dimensionless terms (or in terms where the dimensions on opposite sides of an equality sign are equal). All the relevant relations that occur in Field’s theories are something like distance-congruence, which is a relation between four points, that are taken to define two distances in pairs. There just is no relation in the language that considers a pair of points defining a distance and a pair of points defining a duration. All the quantities involved are actual physical things (no dimensionless numbers), and the operations only make sense when the quantities are of the same sort.

I don’t know how much this makes sense of what’s going on, but it seems to me to help clarify some things. (I don’t know if it’s evidence that Field’s view is the right way to see things – I’m tempted to think it is, but that may just be my fictionalist bias.) Another way to clarify these things is in the language of torsors, as a commenter on Baez’s post mentions. (I’ve only heard of torsors once before, on a page by John Baez explaining them. But even then, I was struck by how they seem to relate to the way Field thinks of physics and mathematics.)

## Fictionalist Abstraction Principles

24 04 2006

I’ve been in Paolo Mancosu’s seminar this semester going through John Burgess’ new book Fixing Frege on the various approximately Frege-like theories and how much of classical mathematics they can do. Of course, Frege’s original system could do all of it, but turned out to be inconsistent. Burgess’ book starts with the weakest (and most clearly consistent) systems, and moves on towards stronger and stronger systems that capture more of mathematics, but get closer towards contradiction.

Last week we were going through the first few sections that allow impredicative comprehension (that is, in this system, concepts can be defined by using formulas with quantifiers ranging over concepts – including the one that is being defined!). These systems supplement second-order logic with various “abstraction principles” adding new objects – that is, we add a function symbol to the language, define some equivalence relation on the type of entities that are in the domain of the function, and state that two outputs of the function are identical iff the inputs bear the relation to one another. Effectively, the “abstracts” are like equivalence classes under the relation.

Two well-known abstraction principles are Frege’s Basic Law V, which gives objects known as extensions to concepts that are coextensive (this is how he introduced the notion of a set); and what Boolos and others have called Hume’s Principle, which assigns cardinalities to concepts that are equinumerous. It turns out that Basic Law V is in fact inconsistent – a slightly impredicative comprehension principle for concepts gives us Russell’s Paradox. Hume’s Principle on the other hand is consistent – it turns out that Hume’s Principle plus any impredicative amount of second-order logic is equiconsistent with the same amount of second-order Peano Arithmetic.
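Stated in the usual second-order notation (a sketch of the standard formulations, not Burgess’s exact wording), the two principles look like this, where F ≈ G abbreviates the second-order claim that some relation correlates the Fs one-one with the Gs:

```latex
% Basic Law V: extensions are identical iff the concepts are coextensive
\mathrm{ext}(F) = \mathrm{ext}(G) \;\leftrightarrow\; \forall x\,(Fx \leftrightarrow Gx)

% Hume's Principle: cardinalities are identical iff the concepts are equinumerous
\#F = \#G \;\leftrightarrow\; F \approx G
```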

Crispin Wright, Bob Hale, and others have used this fact to try to motivate Hume’s Principle as a logical principle that guarantees the existence of numbers, and reduces arithmetic to a kind of logic. However, beyond worries about whether or not this is a sort of logical principle, Øystein Linnebo and others have pointed out that there is an important second kind of impredicativity in Hume’s Principle, and most other abstraction principles that add new objects to the domain. Namely, the outputs of the abstraction function (cardinalities) are taken to be in the domain of objects that must be quantified over to see if two concepts are coextensive. Burgess points out that we can avoid this by taking the range of the abstraction function to be a new sort of entities beyond the objects and concepts in the original language. (This is in a sense a way to avoid Frege’s “Julius Caesar” problem of wondering whether Julius Caesar might be the number 3 – by stipulating that number and set abstracts get put in their own logical sort, we guarantee that none will be identical to any pre-existing object like Julius Caesar.)

He remarks on p. 158 (and proves a related result around p. 134) that any abstraction principle that is predicativized like this ends up being consistent! In fact, it’s a conservative extension of the original theory, and is additionally expressively conservative, in that any sentence at all in the extended language can be proven equivalent to one phrased in the restricted language. The reason for this is that the only sentences in our new language that even mention the abstracts are identity claims among them (because all our other relations only apply to entities of pre-existing sorts), and these identity claims can be translated away in terms of the equivalence relation on the elements of the domain. (Incidentally, I think if we add abstraction principles for every equivalence relation, each in a new sort, then we get what model theorists call M^eq, which I think is an important object of study. Unless I’m misremembering things.) One nice historical point here is that it suggests that Frege’s concerns about the Julius Caesar problem were in fact quite important – the fact that he didn’t just stipulate the answer to be “no” is what allowed his system to become inconsistent.
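The translation argument can be made concrete with a toy model in which abstracts are just equivalence classes (the names here are my own, not Burgess’s):

```python
# A toy model of a predicative abstraction principle: the abstract of x
# under an equivalence relation is modeled by its equivalence class, so
# an identity claim between abstracts "translates away" into the
# relation holding on the base domain.

def abstract(x, domain, rel):
    """Return the abstract of x: here, its equivalence class under rel."""
    return frozenset(y for y in domain if rel(x, y))

domain = range(8)
same_remainder = lambda x, y: x % 3 == y % 3  # an equivalence relation

# The identity claim between abstracts holds exactly when the relation does:
print(all((abstract(x, domain, same_remainder) ==
           abstract(y, domain, same_remainder)) == same_remainder(x, y)
          for x in domain for y in domain))  # True
```

Since no other predicate in the language applies to the abstracts, every sentence mentioning them reduces in this way to one about the base domain, which is the expressive conservativity Burgess proves.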

The problem with putting these abstracts into new sorts is that one of the major motivations for Hume’s Principle was to guarantee that there were infinitely many objects – once you’ve got 0, you can get 1 as the cardinality of the concept applying just to 0, and 2 as the cardinality of the concept applying just to 0 and 1, and 3 for the concept applying just to 0,1,2, and so on. This obviously can’t happen with a conservative extension, and in particular it’s because concepts can’t apply (or fail to apply) to entities in the new sort. So we can get a model with one object, two concepts, and two cardinalities, and that’s it. So it’s not very useful to the logicist, who wanted to get arithmetic out of logic alone.
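For reference, the bootstrap being blocked is Frege’s chain of definitions (writing \#F for the cardinality of the concept F):

```latex
0 := \#[x : x \neq x] \qquad
1 := \#[x : x = 0] \qquad
2 := \#[x : x = 0 \lor x = 1] \qquad
3 := \#[x : x = 0 \lor x = 1 \lor x = 2] \quad \ldots
```

With sorted abstracts, a concept like [x : x = 0] is no longer well-formed, since concepts apply only to entities of the original sorts, and so the chain stops after the first step.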

However, it seems to me that a fictionalist like Hartry Field might be able to get more use out of this process. If the axioms about the physical objects guarantee that there are infinitely many of them (as Field’s axioms do, because he assumes for instance that between any two points of space there is another), then there will be concepts of every finite cardinality, and even some infinite ones. The fact that the extension talking about them is conservative does basically everything that Field needs the conservativity of mathematics to do (though he does need his more sophisticated physical theory to guarantee that one can do abstraction to get differentiable real-valued functions as some sort of abstraction extension as well). Of course, there’s the further problem that this process needs concepts to be the domain of some quantifiers even before the abstraction, and I believe Field really wants to be a nominalist, and therefore not quantify over concepts. But this position seems to get a lot of the work that Field’s nominalism does, much more easily, with only a slight compromise. To put it tendentiously, perhaps mathematical fictionalism is the best way to be a neo-logicist.

## “As If” Theories and the Challenge of Approximation

6 01 2006

Most people (except for extreme scientific anti-realists) say that a theory of the form “There are no Xs, but everything observable is just as it would be if there were Xs” is bad. As I mentioned before, Cian Dorr would like to give a fairly novel explanation of why these theories are bad, but here I’m going to try to focus on what I take to be the more traditional account, and its application to the debate about mathematical nominalism. In particular, one objection to fictionalism about mathematical entities that I have seen mentioned in work of Mark Colyvan, Mike Resnik, and possibly Penelope Maddy, is that not only do we need the claims of ZFC to be true for our best applied theories, but we also need them to be true even to use physical theories (like Newton’s) that we take to be false. I will discuss this objection towards the end, but first I will return to the instrumentalist move, saying that things behave just as if there were Xs, even though there aren’t.

The point seems to be one about inference to the best explanation. If things look just as if there were Xs, then one explanation would be if there actually were Xs. However, the “as if” style theory explicitly denies this explanation, without giving another one. Therefore, in one sense it’s not a theory, but rather the data that the theory needs to explain.

However, such data often can be explained. An idealized Adam Smith in an idealized free-market world might have observed that prices generally stay close to levels as if they were set by some “invisible hand” observing the needs of society at large. However, there are decent reasons to believe there is no actual invisible hand, so Adam Smith sought another explanation, and found one in the mechanisms of competition for both supply and demand.

One might try to be purely instrumentalist about the material world, saying there are no material objects, but things appear just as if there were. In particular, I might say “there is no strawberry in front of me, but it looks just as if there were”. However, while the instrumentalist might want to say this always, even the realist says this on certain occasions, when a mirror of a certain type is used. There is no strawberry there, but because there is one three inches below that spot, and the mirror is curved in exactly the right way, and you’re looking at it from an angle that is high enough for your line of sight to intersect the mirror in that place, and low enough not to see the actual strawberry, it looks just as if there were a strawberry there. The advertisement claims the images “defy, yet demand explanation” – it’s true that they demand explanation, but they don’t defy a suitably optically-informed explanation. At any rate, there is a clear contrast between the case of such an illusion and the ordinary cases of seeing an actual strawberry. Realists can make sense of and explain this contrast, but instrumentalists have to be a bit more careful. (I’m sure that it’s possible for them to cash out just what it means to look like there’s a strawberry there without really looking like there’s a strawberry there, or something, but it’ll be more complicated.)

There seems to be no reason why one couldn’t be a global instrumentalist about everything (except maybe sense data, or something of the sort), but at intermediate levels, it seems that one really does need an explanation. Hartry Field, in Science Without Numbers, attempts to do something like this for a Newtonian universe – he can explain why everything acts just as if there were the real numbers and continuous functions that Newton talked about, even though all there actually is is just regions of space-time with various three- and four-place betweenness and equidistance properties. He still needs to help himself to some fairly strong logic (a quantifier saying “there are infinitely many Xs such that…”), but it’s a nice development.

More simply, a nominalist can explain why our counting practice works the way it does, just as if there actually were abstract entities known as numbers, even though there aren’t any (according to the nominalist). This explanation would point out the isomorphism between the counting process and the successor operation. It would point out that for any particular application of counting, a non-standard semantics can be given for the numerical terms on which they denote the objects counted rather than numbers. And it would point out that every particular numerical statement can be translated into a statement with numerical quantifiers, which can be translated in terms of existential and universal quantifiers, connectives, and identity.
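For instance, a claim like “there are exactly two Fs” unpacks into plain quantifiers, connectives, and identity, with no reference to the number 2:

```latex
\exists_{=2}x\, Fx \;:\equiv\;
\exists x\,\exists y\,\bigl(Fx \land Fy \land x \neq y \land \forall z\,(Fz \rightarrow z = x \lor z = y)\bigr)
```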

However, even if this project of nominalizing our best scientific theory succeeds, realists might object that we would still need mathematics to be true to explain our practices of approximation. For instance, say that the population in any year of some herd of bison is given by a difference equation in terms of the populations in the previous few years. In many cases, the behavior of difference equations is hard to predict, but they can be usefully approximated by differential equations, where exact solutions can often be achieved. Thus, we might use a differential equation to model the population of the bison, even though we don’t believe that the population actually increases between breeding seasons, and we believe that the population is always integer-valued rather than real-valued as the differential equation requires. The realist can explain why the differential equation is a good approximation by pointing out all sorts of mathematical theorems about the systems of equations involved. However, even if the nominalist has managed to nominalize away all talk of numbers and equations in using the difference equation, she will have trouble explaining why the differential equation is a good approximation. She can’t appeal to facts about the mathematical structures denoted, because she says there are no such structures. And she presumably can’t nominalize the differential equation in any nice way, because it refers to fractional bison, and bison born at the wrong time of year. Instead, she’ll have to fall back to something like the instrumentalist position and say that the differential equation is not correct, but it makes very good predictions, and she can’t explain why.
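To make the bison example concrete, here is a toy sketch with made-up numbers (simple proportional growth rather than a genuine multi-year difference equation, but the point about approximation is the same):

```python
import math

def herd_discrete(p0, r, n):
    """Population after n breeding seasons: P_{k+1} = P_k + r*P_k."""
    p = p0
    for _ in range(n):
        p += r * p
    return p

def herd_continuous(p0, r, t):
    """Solution of the differential equation dP/dt = r*P,
    which pretends the herd grows between breeding seasons too."""
    return p0 * math.exp(r * t)

p0, r, n = 100.0, 0.05, 20
exact = herd_discrete(p0, r, n)     # about 265.3
approx = herd_continuous(p0, r, n)  # about 271.8
print(abs(approx - exact) / exact < 0.03)  # True
```

The realist explains why the error stays small by citing theorems relating exp(r) to (1+r); the nominalist’s challenge is that the continuous model, with its fractional bison, resists any direct nominalization.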

The only remotely promising move I see at this point is to say that “according to the fiction of mathematics, the differential equation and the difference equation will always make very similar predictions”. However, if mathematics isn’t actually true, it will be hard for her to explain why it is still correct when it says two theories make similar predictions. In a sense, this is the same problem that Field ran into in expressing the claim “mathematical physics is a conservative extension of nominalist physics”, which is what justifies the practice of using mathematical physics to make predictions even though it is not literally true. Except here, we have to deal not only with conservative extensions, but with good approximations that aren’t conservative.

It would be very unappealing to end up in the situation where mathematics was unnecessary to make correct predictions, but necessary for various methods that give approximate predictions. (Some examples other than differential equations involve frictionless planes, infinitely deep oceans, light that travels infinitely fast, and the like.) In that case, the indispensability argument would only apply through our practice of approximation, and not through actual science. At this point, I think more people would be willing to bite the nominalist bullet and say that we’re not really justified in using our approximations, but it would still be an odd situation.

Fortunately, most of science remains un-nominalized, and the people that think we can some day nominalize most of it will probably believe that we can nominalize our approximation methods in some way too. It’s just an extra challenge to meet.

## What’s the Difference between Realism and Anti-Realism?

11 12 2005

One of the debates in philosophy of math that I’m quite interested in is the question of whether mathematical objects actually exist or not. This debate seems to have been one of the most central ones in the field in the last several decades. However, mathematicians tend to dismiss this debate, though they do care about some others, about methods of proof, justification of axioms, the role of explanation, and the like. Many philosophers often feel the same way, both about this ontological debate, and about other debates in analytic metaphysics.

John MacFarlane has pressed me on several occasions with a worry about something like this, and a related point is discussed in the introduction to Hartry Field’s Reason, Mathematics, and Modality (I think). If someone asserts that a very skilled detective lived on Baker Street in London in the 19th century, but then says that this assertion was meant in a merely fictional way, then there is a clear change in my reaction to the assertion. Instead of verifying it by looking at real birth records and legal histories and the like, I investigate fictional works (in this case by Conan Doyle). Instead of basing historical arguments on the facts mentioned, I base arguments about the fictional world and the like. However, if a mathematician tells me that every continuous simple closed curve cuts the plane into two disjoint connected regions, and then a philosopher tells me that this assertion was made merely within the fiction of mathematics, it’s not clear what difference this could make in my relation to the assertion. In either case I would verify it by proofs using the axioms, and make the same physical and mathematical applications of the theorem. My acceptance or non-acceptance of this philosophical thesis will not be manifest in any of my actions (other than my bestowing or withholding the honorific “exists”, or “literally exists” or the like). So something about the debate seems potentially misguided.

Worries like this may be behind Jody Azzouni’s assertions in Deflating Existential Consequence that there is no rational argument in favor of any particular criterion for ontological commitment, though we as a community have chosen to adopt ontological independence as such a criterion. He is here primarily concerned with mathematical entities, but also theoretical entities of other kinds as well. These seem to be precisely the kinds of entities where the above worry is strongest. Once he has adopted this criterion, then claims of mathematical anti-realism can be manifested in the kind of free-wheeling postulation of entities that he claims is characteristic of mathematics (and other “ultrathin posits”, though I think there’s room to contest this claim about mathematics). This postulation is, I think, what he calls “ontological dependence”, and is characteristic of fictional entities and other paradigmatic examples of non-existent posits. If this is right, then realism about mathematics would be manifested by trying to establish what he calls either thick or thin epistemic access to mathematical entities.

This may not be the best way to cash out the distinction between commitment to the existence and to the non-existence of entities in reality, but however the distinction is made, I think we can get implications for the fictionalist position. Namely, if a philosopher manifests her belief in the real existence or non-existence of objects in a certain way, then she should manifest her belief in the fictional existence or non-existence of objects in a similar way. For instance, on Azzouni’s criterion, a fictionalist about mathematical objects should look for fictional epistemic access of either a thick or thin nature. Since this doesn’t seem plausible, Azzouni would have to say that even in most reasonable fictions, mathematical objects don’t exist.

However, someone with a more Quinean criterion might be able to take a fictionalist position. If our existence criterion is “playing an explanatory role in our best theory of the world”, then realist truth about mathematics would make verification dependent on applicability in scientific theories. (We don’t obviously seem to do this, which is why Maddy and others reject a Quinean realism about mathematics, but I think that we may have done this for some small number of axioms, so that it’s not obvious whether or not we have manifested a commitment to realist truth in this way.) Fictionalist truth would be manifest in an attempt to show that mathematical entities fictionally explain our observations – and I think this is exactly Hartry Field’s project. This makes sense of the fact that Field seems to turn the Quinean arguments on their head to say that mathematical objects don’t exist actually, but merely fictionally.

On a more Hilbertian criterion, that every postulated set of axioms describes some objects, it seems that there could be no reasonably different fictional manifestation of acceptance of an existence claim. Thus, for someone like Ed Zalta, a worry like John MacFarlane’s would be relevant. But this seems to be ok for him, because he has described his view both as a sort of platonized naturalism (which I take to be realist), in “Naturalized Platonism vs. Platonized Naturalism”, with Bernard Linsky, and also as a sort of nominalism, in “A Nominalist’s Dilemma and its Solution”, with Otávio Bueno.

Thus, the burden is on any such philosopher to show that mathematicians do in fact manifest their acceptance of mathematical statements in the way that the philosopher says they should (whether realist, fictionalist, or other). The difference between Quine and Field is just such a debate, as is the disagreement between Azzouni and the platonist Zalta. However, these two debates are somewhat orthogonal to one another, as they take acceptance of a statement to be manifested in a different way, so their disagreements may be merely verbal, as someone like MacFarlane might worry. But at any rate, those involved in these debates do seem to be engaged in the project of showing that mathematicians behave the way their theories predict, so MacFarlane’s worry doesn’t seem to damage any of these projects directly.

## Are Mathematical Posits Ultrathin?

17 08 2005

As mentioned in my previous post, Jody Azzouni thinks that ultrathin posits (the kind that have no epistemic burdens at all) aren’t taken to exist. He suggests that mathematical posits are ultrathin, and thus we shouldn’t take them to exist, so we get nominalism the easy way.

> [Ultrathin posits] can be found – although not exclusively – in pure mathematics where sheer postulation reigns: A mathematical subject with its accompanying posits can be created ex nihilo by simply writing down a set of axioms; notice that both individuals and collections of individuals can be posited in this way.
>
> Sheer postulation (in practice) is restricted by one other factor: Mathematicians must find the resulting mathematics “interesting”. But nothing else seems required of posits as they arise in pure mathematics; they’re not even required to pay their way via applications to already established mathematical theories or to one or another branch of empirical science. (pgs. 127-8)

In a footnote to this passage, he suggests that he argues for this claim more thoroughly in his 1994 book Metaphysical myths, mathematical practice: the ontology and epistemology of the exact sciences. I haven’t looked at that book yet (though I plan to in the not-too-distant future), but the claim here just doesn’t seem convincing at all.

It’s true that mathematical posits aren’t required to have any applications, even to other areas of mathematics. After all, who thought of the applications of non-Euclidean geometries, aperiodic tilings, or measurable cardinals? Generally not the people that came up with them.

But as any set theorist knows from experience, mathematicians don’t just go along with any postulation that one makes. They require it to be consistent, and often even want it to be known to be consistent – which is why many mathematicians still object to various large cardinal axioms, even though the structures they postulate have a lot of the unnecessary virtues, like mathematical interest and application. Mathematicians sometimes just don’t seem to want set theorists to postulate these things willy-nilly. And on a more down-to-earth level, we can’t just postulate solutions to differential equations, or even algebraic equations – we have to prove existence theorems. In algebra, fortunately, people building on the work of Galois were able to show that for any polynomial with coefficients in some field K, there is some field K’ containing K in which that polynomial has a solution (this required a lot of tricks of algebra and knowledge of how to actually construct fields from scratch). Kurt Gödel actually helped this cause of proving existence theorems greatly – thanks to his completeness theorem, we (those who accept ZFC) now know that any consistent set of axioms defines a non-empty class of mathematical structures (this leads to much shorter, more powerful proofs of the existence of things like algebraically closed fields). Thus, thanks to Gödel (and our antecedent acceptance of at least a sufficient fragment of ZFC), Azzouni is almost right about postulating the existence of structures. But we still need to prove them consistent, or otherwise argue for their existence in the cases where we can’t prove them consistent.

Thus, mathematical posits don’t live down to his ultrathin expectations for them. It’s not clear to me if they are really “thin” in his sense either (and they’re almost certainly not “thick”), so it’s not immediately clear whether his criterion should say they exist or not.

## Types of Realism in Mathematics

5 06 2005

The issues noted in my “Eight Views of Mathematics” have of course been noted by other people before (after all, I was talking about the programs of Field, Steel/Woodin/Martin et al, and Dummett). However, it seems that at least the second of these questions has been somewhat overlooked in the past. Some people (I don’t recall who off the top of my head) refer to the first position as “anti-realism in ontology” and the third as “anti-realism in truth value”, but they don’t seem to have a name for the middle one. Similarly, Mark Colyvan (on p. 2 of his The Indispensability of Mathematics) talks about “metaphysical realism” and “semantic realism”. I suppose I’m tempted to call the middle one a sort of question of “epistemological realism”, or “realism in evidence”, or something of the sort.

The first question is about what objects mentioned in mathematical statements exist, the second is about what mathematical statements we can know, and the third is about what mathematical statements are true or false. Dummett seems to want to link the first and third question. But it seems that most people try to assimilate the middle question to one of the other two. I suppose that’s where it falls to me to defend my position that we can know (or believe or accept) mathematical statements much beyond ZFC without being committed to mathematical objects, or to the claim that all mathematical statements are either true or false. (Though I’m probably agnostic about that last claim, rather than opposed.)

## Spooky Action at a Distance

4 05 2005

I’ve been rereading a bunch of the stuff on Field in the last couple days (my exam is in about 14 hours – I hadn’t planned on publicizing this blog until afterwards, but I guess I ended up doing so early) and I’ve been struck by the thought that a lot of nominalist worries about the causal efficacy of mathematical objects could be put a bit more mildly.

In general I’m a believer in the Quine-Putnam argument, though I think that mathematics actually is dispensable, so it ends up being a nominalist argument. This means that if mathematics had turned out to be indispensable, then I’d have to believe in its entities and confront the epistemological worries. But I think they aren’t really a serious problem, because our theory can be confirmed as a whole. I do think there is a slight issue though, in that these numbers that enter into our physical theories aren’t located near the entities they’re correlated with, or connected to them in any obviously causal way. The appearance of abstract objects in a physical theory seems to me like it’s at least a minor point against the theory, because it’s basically a stronger version of the “action at a distance” that Einstein and the early modern scientists disliked so much.

Of course, if the best theory predicts action at a distance, then I guess we have to live with it, and if the best theory predicts action from nowhere then we have to live with that too. But it doesn’t mean I have to like it any more than I should have to like a theory that is essentially non-deterministic (like quantum mechanics) or has any of a number of other blemishes that I’d like to think an ideal scientific theory wouldn’t have.

## Eight Views of Mathematics

3 05 2005

It seems to me that the three topics of my qualifying exam relate to three different questions about the objectivity of mathematics. The first is Hartry Field’s question about whether or not mathematical objects exist. The second is the one involved in higher set theory about whether or not there are methods of verifying mathematical statements that go beyond proof from already established axioms. And the third is Dummett’s question about whether or not there is a fact of the matter about mathematical statements that go beyond whatever verification procedures we have. Of course, in the latter two questions, the term “verification” should be taken with a grain of salt, because unless the answer to the first question is affirmative, mathematical statements won’t be literally true. But I think that in whatever sense one approves of standard mathematical theorems as opposed to their negations, one can raise the corresponding two questions.

Now, it seems plausible to me that these three questions are in fact independent, and I’ll try to sketch views that would be characterized as each of these combinations.

Y,Y,Y – I would think that something like this is the view of most set theorists working on large cardinals, and in particular was probably Gödel’s view. He took a naively platonistic view of sets, thought that there were means to discover new axioms beyond ZFC, and that every question about sets had a resolution in this platonic universe. Contemporary set theorists like Woodin and Martin retreat a little bit and call themselves “conditional platonists” or something of the sort, and also have a more sophisticated view of what counts as confirmation for an unprovable axiom. For instance, for them the axiom of projective determinacy (and thus the existence of infinitely many Woodin cardinals) is confirmed not just by the fact that each of its stateable instances is provable, but also that it predicted claims about sets of Turing degrees and continuous functions that not only have themselves been individually confirmed, but have also since become important topics in the study of their respective structures. Thus, the axiom makes many important and divergent explanatory predictions, and thus should be accepted as true.

Y,Y,N – Any position that states that mathematical objects exist, but that the facts of the matter don’t extend beyond our means of verification is going to be at least somewhat awkward. But I may be able to say that a position like this is what Steel suggests when he says that the continuum hypothesis might be ambiguous. That is, there might be two equally good set-theoretic universes, each isomorphic to a generic extension of the other, one of which satisfies CH and one of which doesn’t. Every statement a set theorist working in one structure makes can be translated into a statement about a generic extension of the universe by a set theorist working in the other. There is no fact of the matter as to which one of the two is talking about the “real” universe of sets, because each can be seen as being contained in the other. Most interesting mathematical statements will be decidable as above, but some (like CH) will just be considered ambiguous because they vary between the two models.

Y,N,Y – This is perhaps the naive view one takes after reading about Gödel’s incompleteness results. One believes that the universe of sets exists, and there are many facts of the matter about it, but that we have no access to any of these facts beyond ZFC. Never mind how we have knowledge about the axioms of ZFC. At any rate, there are strict limits to how much we can know.

Y,N,N – This is perhaps the slightly more sophisticated version of the above view. There are many universes of sets, and each consistent extension of ZFC is true in one of them. However, there is no fact of the matter as to which consistent extension is the “right” one, and we have no principled way to adjudicate between them.

N,Y,Y – This position also seems slightly awkward, but I can imagine someone like Hartry Field holding it. Mathematical entities don’t exist, but to the extent that we pretend they do, we might as well pretend they have a complete theory. Presumably the methods of set theorists will allow us to determine what statements should be satisfied beyond the axioms of ZFC. Or maybe we’re just allowed to make it up freely?

N,Y,N – I think this is the position that I am closest to. I agree with Field that mathematical objects don’t exist, and I agree with Dummett that there’s no point in saying there’s a fact of the matter about things we can’t assert (perhaps even more so when the “fact” of the matter is about a fiction). However, I am also convinced by Martin and others about the fact that certain extensions of ZFC are far more natural than others. If an author leaves enough hints in a novel, it seems that there can be a fact of the matter as to who the murderer was in the story, even though it was never explicitly stated. Similarly, if there is enough evidence in the theory of ZFC, it seems that projective determinacy must be true in the fiction as well, even though it was never explicitly stipulated.

N,N,Y – I’m not sure what would persuade one to adopt this position. However, there may be something like this in an instrumentalist view of science like van Fraassen’s. An agnosticism about the facts of the theory might allow one to believe that they could be secretly true, even though we have no way of knowing them to be true, and there aren’t objects to make them true. But enough agnosticism may allow one to consider this position while considering the next one as well.

N,N,N – This seems to be closest to the one that Dummett describes. If mathematical truth is identified with provability from the axioms, then negative answers to the second and third question follow immediately from Gödel’s incompleteness results. And a negative answer to the first question seems to be presupposed in identifying mathematical truth with provability rather than facts about the mathematical objects.

Now of course, all these views can probably be further multiplied by considering their restrictions to particular domains of mathematical discourse, like Peano arithmetic, ZFC, small large cardinals, large large cardinals, and statements like CH and beyond. So someone like Sol Feferman might take the views “Y,Y,Y” about number theory, but “N,N,N” about statements beyond ZFC. But this is just a general outline to show how these different questions might interact, and that they are at least somewhat independent.

## Reconstructive Nominalism and Representation Theorems

6 04 2005

In “Science Without Numbers”, Hartry Field shows how to give a nominalistic theory for Newtonian gravity that agrees with established, platonistic theory in all its nominalistic predictions. One part of showing that it agrees is by showing that (assuming ZFC) any model of the nominalistic theory is isomorphic to a submodel of the platonistic theory.

In “A Subject With No Object”, Burgess and Rosen reconstruct Field’s argument, along with those of Chihara and Hellman, by trying to show that each one of them is able to construct a nominalistic theory over which the platonistic theory is conservative. However, rather than accepting Field’s (admittedly somewhat weak) arguments for the conservatism of mathematics in general, they try to prove a reverse representation theorem, establishing that the real numbers can be represented by some k-tuples of physical objects actually countenanced by Field’s theory. If all reference to real numbers can be replaced by reference to (say) triples of space-time points, then clearly we can translate the platonistic theory into a purely nominalistic form and preserve all the standard results.
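In schematic terms (my own gloss, not a formulation from either book), the two claims at issue can be written as follows, with T_N the nominalistic theory, T_P the platonistic one, and φ ranging over nominalistically stateable sentences:

```latex
% Conservativity: anything nominalistic that is provable with the
% mathematical apparatus was already provable without it.
T_P \vdash \varphi \;\Longrightarrow\; T_N \vdash \varphi

% Representation: every model of the nominalistic theory embeds in
% (the nominalistic part of) a model of the platonistic theory.
\mathcal{M} \models T_N \;\Longrightarrow\;
  \mathcal{M} \hookrightarrow \mathcal{N}
  \text{ for some } \mathcal{N} \models T_P
```

The first claim says the platonistic theory adds no new nominalistic consequences; the second is the model-theoretic route by which Field argues for agreement between the two theories.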

However, this was definitely not Field’s strategy. Burgess and Rosen note that with this strategy, Field would be able to define multiplication directly on triples of points, and thus wouldn’t need his cardinal comparison relations, which are not purely first-order definable. Thus, they suggest that if he had followed their strategy, he could avoid many of the logical worries that plague him towards the end of “Science Without Numbers” and in many of his exchanges with Stewart Shapiro (and more recently Otávio Bueno).

However, I think I can explain why Field used a different strategy. Field didn’t want to find surrogates for real numbers, so that (say) the weight function would return a tuple of points rather than a real number – he wanted to define weight comparison relations, so that there is no entity at all that can be said to be the weight of an object; we can just talk about when one object is heavier than another, and when the differences between the weights of two pairs of objects are the same. The particular surrogates Burgess and Rosen use are in fact quite problematic, because there seems to be no reason why particular space-time points should be connected with an object in the way that its weight would have to be. While it gets over the anti-platonist argument that there’s no way at all the weights could be causally connected to the objects, there still seems to be no plausible way in which the weights are connected to the objects. And only slightly weakened versions of the similar epistemic arguments would still apply as well. In addition, Burgess and Rosen point out that such a strategy requires the existence of infinitely many (in fact, uncountably many) physical objects in order to represent all the real numbers.

Thus, there’s little sense in which the Burgess and Rosen style of reconstruction would be a scientific improvement over platonistic theories, and thus the arguments in their last chapter would have a lot of force. However, I think Field’s theory really does limit itself to primitives (like weight comparison and betweenness) that seem perspicuous, whereas Burgess and Rosen’s nominalistic theory has to have a primitive that says when a triple of points represents the weight of a particular object. Field’s theory actually seems to me to be an improvement on standard Newtonian theory in just the same way that Hilbert’s is an improvement on Euclidean geometry. Few people will actually want to work with the newly reconstructed system, but it is characterized in a much more purely internal way, and thus can be more easily generalized and more compactly axiomatized. (That is, there is no need to add extra axioms to spell out all the details of the mathematical apparatus that goes along with the physical part.)

## Conservative Extensions Are Permissible, But Non-Conservative Ones Can Be Mandatory

18 03 2005

On the face of it, Hartry Field’s insistence that the conservativity of mathematics (together with the existence of acceptable nominalistic scientific theories) means that we shouldn’t believe in numbers flies in the face of mathematical history. It seems that historically, it’s precisely when complex numbers were shown to be conservative over the reals that they were first fully accepted. However, I think that this can be explained in a Fieldian way. The mathematician who believes in complex numbers believes in them exactly the way she believes in real numbers – which I think a hermeneutic reading of Field’s fictionalism suggests is just fictional. The established conservativity of the complex numbers means that they are a consistent extension that is conservative over the nominalistic physical theory we already have, and therefore it is just as acceptable to talk about complex numbers as it is to talk about real numbers. This sort of acceptability doesn’t mean that the claims should be literally read as true (on Field’s account), but they are still legitimate to talk about as fictional entities.
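To fix ideas, here is the schematic form of the conservativity claim (my own formulation), with T the theory of the reals and T′ its extension by complex-number axioms:

```latex
% T' is a conservative extension of T: every sentence phrased purely
% in the original language L(T) that T' proves, T already proved.
\forall \varphi \in L(T): \quad
  T' \vdash \varphi \;\Longrightarrow\; T \vdash \varphi
```

On this reading, the complex numbers earn their acceptability by adding nothing new about the reals, just as mathematics as a whole, on Field’s view, earns its acceptability by adding nothing new about the nominalistic world.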

This legitimacy is meant to contrast with the non-acceptability of the talk of a set of all sets, or Frege’s Basic Law V. Once they were established to be conservative, complex numbers became just as much a permissible subject of mathematical discourse as the real numbers. The Russell set or Fregean extensions have never become permissible, because of the particular form of their non-conservativity (i.e., contradictoriness).

However, non-conservativity of an extension sometimes makes the extension mandatory rather than impermissible. The parts of our physical theories that talk about subatomic particles are not conservative over the nicely formulated parts that don’t, but we must accept them and their implied entities anyway. This is because the extra statements that get proven by the non-conservative extension have explanatory power for facts that were observable at the macroscopic level. Similarly, Donald Martin (in “Mathematical Explanation”, in eds. Dales and Oliveri, Truth in Mathematics, 1998, Oxford University Press) suggests that once one countenances the entities whose existence can be derived from ZFC, one should countenance large cardinals as well, because they are a non-conservative addition in precisely a way that explains certain facts about the existence of Turing cones and Wadge degrees in all known sets of a certain complexity. So in whatever sense one accepts ZFC, one must accept certain large cardinal axioms, and in whatever sense one accepts one’s observations of macroscopic objects one must accept subatomic particles.

However, just by accepting macroscopic objects and subatomic particles, one is not forced to accept sets. They can be taken (fictionally, according to Field) or left.