I’ve been in Paolo Mancosu’s seminar this semester going through John Burgess’ new book Fixing Frege, on the various approximately Frege-like theories and how much of classical mathematics they can do. Of course, Frege’s original system could do all of it, but turned out to be inconsistent. Burgess’ book starts with the weakest (and most clearly consistent) systems, and moves on towards stronger and stronger systems that capture more of mathematics, but edge closer to contradiction.
Last week we were going through the first few sections that allow impredicative comprehension (that is, in these systems, concepts can be defined by formulas with quantifiers ranging over concepts – including the very one being defined!). These systems supplement second-order logic with various “abstraction principles” adding new objects – that is, we add a function symbol to the language, define some equivalence relation on the type of entities that are in the domain of the function, and state that two outputs of the function are identical iff the inputs bear the relation to one another. Effectively, the “abstracts” are like equivalence classes under the relation.
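In symbols (my shorthand, not necessarily Burgess’): given an equivalence relation ~ on concepts, the abstraction principle for a new function symbol § says

§F = §G ↔ F ~ G, for all concepts F and G.

Two concepts get the same abstract exactly when they stand in the relation.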
Two well-known abstraction principles are Frege’s Basic Law V, which assigns objects known as extensions to concepts, the same extension going to coextensive concepts (this is how he introduced the notion of a set); and what Boolos and others have called Hume’s Principle, which assigns cardinalities to concepts, the same cardinality going to equinumerous concepts. It turns out that Basic Law V is in fact inconsistent – even a slightly impredicative comprehension principle for concepts gives us Russell’s Paradox. Hume’s Principle, on the other hand, is consistent – indeed, Hume’s Principle plus a given amount of (possibly impredicative) second-order comprehension is equiconsistent with the corresponding amount of second-order Peano Arithmetic.
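For reference, the two principles written out (in what I take to be the standard notation):

Basic Law V: εF = εG ↔ ∀x(Fx ↔ Gx)
Hume’s Principle: #F = #G ↔ F ≈ G

where F ≈ G (equinumerosity) says that some relation pairs the Fs one-to-one with the Gs – something expressible in pure second-order logic. The route from Basic Law V to Russell’s Paradox, roughly: comprehension gives a concept R true of just those x such that x = εF and ¬Fx for some F, and Basic Law V then makes R(εR) hold iff it fails.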
Crispin Wright, Bob Hale, and others have used this fact to try to motivate Hume’s Principle as a logical principle that guarantees the existence of numbers, and reduces arithmetic to a kind of logic. However, beyond worries about whether this really is a logical principle, Øystein Linnebo and others have pointed out that there is an important second kind of impredicativity in Hume’s Principle, and in most other abstraction principles that add new objects to the domain. Namely, the outputs of the abstraction function (the cardinalities) are taken to be in the domain of objects that must be quantified over to see whether two concepts are equinumerous (unpacked just below). Burgess points out that we can avoid this by taking the range of the abstraction function to be a new sort of entity beyond the objects and concepts of the original language. (This is in a sense a way to avoid Frege’s “Julius Caesar” problem of wondering whether Julius Caesar might be the number 3 – by stipulating that number and set abstracts get put in their own logical sort, we guarantee that none will be identical to any pre-existing object like Julius Caesar.)
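To unpack that second impredicativity (modulo my choice of symbols): F ≈ G becomes

∃R(∀x(Fx → ∃!y(Gy ∧ Rxy)) ∧ ∀y(Gy → ∃!x(Fx ∧ Rxy))),

and the object quantifiers ∀x and ∀y there range over the whole domain – which, if the cardinalities are objects in that very domain, includes #F and #G themselves.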
He remarks on p. 158 (and proves a related result around p. 134) that any abstraction principle that is predicativized like this ends up being consistent! In fact, it’s a conservative extension of the original language, and is additionally expressively conservative, in that any sentence at all in the extended language can be proven equivalent to one phrased in the restricted language. The reason for this is that the only sentences in our new language that even mention the abstracts are identity claims among them (because all our other relations only apply to entities of pre-existing sorts), and these identity claims can be translated away in terms of the equivalence relation on the elements of the domain (I’ll spell out the clause below). (Incidentally here, I think if we add abstraction principles for every equivalence relation, each in a new sort, then we get what model theorists call M^eq, which I think is an important object of study. Unless I’m misremembering things.) One nice historical point here is that it suggests that Frege’s concerns about the Julius Caesar problem were in fact quite important – the fact that he didn’t just stipulate the answer to be “no” is what allowed his system to become inconsistent.
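Spelling it out, if I’m reconstructing the proof correctly, the key translation clause just trades abstract-identities for the equivalence relation,

§F = §G ⟼ F ~ G,

quantifiers over the new sort get traded for quantifiers over concepts (every abstract being §F for some concept F), and an induction on complexity then matches each sentence of the extended language with an equivalent one in the old language.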
The problem with putting these abstracts into new sorts is that one of the major motivations for Hume’s Principle was to guarantee that there are infinitely many objects – once you’ve got 0, you can get 1 as the cardinality of the concept applying just to 0, and 2 as the cardinality of the concept applying just to 0 and 1, and 3 for the concept applying just to 0, 1, 2, and so on. This obviously can’t happen with a conservative extension, and the particular reason is that concepts can’t apply (or fail to apply) to entities of the new sort. So we can get a model with one object, two concepts, and two cardinalities, and that’s it. So it’s not very useful to the logicist, who wanted to get arithmetic out of logic alone.
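In symbols, the bootstrapping just described would run: 0 = #[x : x ≠ x], 1 = #[x : x = 0], 2 = #[x : x = 0 ∨ x = 1], and so on. In the two-sorted setting it is blocked at the very first step after 0: concepts apply only to objects of the original sort, and 0 now lives in the new sort of cardinalities, so “x = 0” no longer defines a concept. Hence the tiny model just mentioned: one object a, the two concepts [x : x = a] and [x : x ≠ a] (up to coextensiveness), and the two distinct cardinalities abstraction assigns them.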
However, it seems to me that a fictionalist like Hartry Field might be able to get more use out of this process. If the axioms about the physical objects guarantee that there are infinitely many of them (as Field’s axioms do, because he assumes for instance that between any two points of space there is another), then there will be concepts of every finite cardinality, and even some infinite ones. The fact that the extension talking about them is conservative does basically everything that Field needs the conservativity of mathematics to do (though he does need his more sophisticated physical theory to guarantee that one can do abstraction to get differentiable real-valued functions as some sort of abstraction extension as well). Of course, there’s the further problem that this process needs concepts to be in the domain of some quantifiers even before the abstraction, and I believe Field really wants to be a nominalist, and therefore not to quantify over concepts. But this position seems to do much of the work that Field’s nominalism does, much more easily, with only a slight compromise. To put it tendentiously, perhaps mathematical fictionalism is the best way to be a neo-logicist.
I think I mentioned this a while ago, but Hale and Wright have never presented HP as a logical principle. The ‘neo’ in neo-logicism is usually taken to indicate a recognition that this isn’t logicism as Frege conceived it – HP is taken to implicitly define ‘cardinal number’ or the ‘number of’ operator, and so its status is meant to be analyticity (or something close enough). The end result isn’t meant to be that arithmetic has the status of logic; it’s that our knowledge of arithmetic is no more problematic than our grasp of logic (albeit higher-order) plus a single principle implicitly defining the target concept.
That aside, Hale and Wright have been very sensitive to the issue of impredicativity – it’s one of the main issues in the cluster of essays they wrote in response to Dummett 1991 (for example, the suggestively titled ‘On the Harmless Impredicativity of N=’). They’ve also been very concerned by the Caesar problem; they too have concluded that Frege’s recognition of the Caesar problem started the slide into inconsistency, since it caused him to reject as inadequate the implicit definition of number in Hume’s Principle and instead adopt an explicit definition in terms of extensions.
The point, then (finally!), is that I’m not sure I see why fictionalism is the best way to be a neo-logicist; unless, that is, we dismiss somehow what Hale and Wright have said on the issue of impredicativity and their attempt to show that the platonist-Fregean has the resources to solve the Caesar problem. There may be good grounds for such a dismissal, but I don’t know what those are yet.
That Burgess book sounds *really* cool. I wonder if my library has it. (Nope.)
Out of curiosity, what does it mean for a logic to get “closer toward contradiction”? Do you mean that in the sequence L1, …, Ln of logics, in order of increasing strength, Ln is closer to being inconsistent than L1, …, Ln-1 are?
Isn’t that only a problem if each Li is obtained from Li-1 by adding *more* axioms?
Aidan – That clarifies things quite a bit for me. As I might have mentioned, the Burgess book is great on the technical issues, but leaves a lot of the philosophical concerns implicit. The point about HP defining “cardinal number” makes some comments by Linnebo that Burgess cites make a bit more sense. Burgess shows that one of the systems is strong enough to interpret a substantial amount of PA, but the numbers have to get a slightly non-standard interpretation. Linnebo seemed to suggest that this wasn’t good enough, probably for this reason.
I’ve been meaning to read “On the Harmless Impredicativity…” but didn’t get around to it when it was assigned this semester. I’ll try to do that soon.
Maybe fictionalism isn’t the best way to be a neo-logicist, but it means that you really don’t have to worry about those other problems. Though I guess you still have to worry about the problems Field has.
lumpy pea coat – As for “closer to inconsistency”, I think I only mean that in a good way. As I was typing it, I realized that I wasn’t clear if it actually meant anything, but it’s certainly how set theorists talk about large cardinals and other stronger axioms.
“Burgess shows that one of the systems is strong enough to interpret a substantial amount of PA, but the numbers have to get a slightly non-standard interpretation. Linnebo seemed to suggest that this wasn’t good enough, probably for this reason.”
Kenny, how does Aidan’s point help with this particular problem? The system that Linnebo doesn’t like is one that also has SOL and Hume’s Principle, and differs from classical Frege Arithmetic only in what it takes to be the “
Actually Fabrizio, I think you’re right – in fact, Burgess’ definition of order seems much more in line with our intuitive concept than the Fregean definition! (Well, if we’re really thinking of numbers as cardinals – the Fregean account might work better if we think of them as ordinals.)
I’d agree that Burgess’ definition of order is more in line with our intuitive concept. Indeed, it’s the one I used in my paper (2001) “Systems for a Foundation of Arithmetic” (www.andrewboucher.com/papers/foundations_of_arithmetic.pdf), page 1:
x is less than or equal to y iff Nx and Ny and (there exist P,Q)(P is a subset of Q and Mx,P and My,Q),
where “Mx,P” means “P numbers x”.
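So, for instance, 2 ≤ 3 comes out true because one can take P to be a concept with exactly two instances and Q one with exactly those two plus a third, so that P is a subset of Q, M2,P, and M3,Q.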
I should add that the definition is so natural it surely has been used before me as well…
As to Field’s approach, the traditional way of getting real numbers is to take them as “abstracted ratios” (Newton’s expression in the Universal Arithmetick) of geometric magnitudes. This reduces easily to taking them to be abstractions with respect to the equivalence relation on triples of points under which a,b,c is equivalent to d,e,f if and only if the proportion ab:ac::de:df holds. This representation of real numbers is implicit in Field’s presentation and explicit in A Subject with No Object.
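In abstraction-principle form, this would come to something like

ρ(a,b,c) = ρ(d,e,f) ↔ ab:ac :: de:df,

with the proportion relation definable from the geometric primitives Field allows himself (betweenness and congruence), so the reals enter only as abstracts over triples of points.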
As to the definition of order I use, the obvious historical inspiration is Cantor. What is peculiar in the approach in Fixing Frege is the definition of (proto)natural number.