Not Countably Many

26 04 2005

This post has nothing to do with my qual topics, except that it uses a little bit of set theory. However, I have a bit of a soft spot for mereology.

I was browsing through Brian Weatherson’s archive last night and stumbled across a post wondering how many things* there could be in an infinite universe if all mereological fusions exist. Daniel Nolan answered that if there is no atomless gunk (ie, if every object has a part with no proper part), then there must be 2^k objects for some cardinal k. The only cardinalities this obviously rules out are ones that can’t be the size of a powerset, like the inaccessibles. Gabriel Uzquiano then mentioned a result that says there are complete atomless boolean algebras of every inaccessible cardinality, and since a world composed entirely of atomless gunk is basically just a complete atomless boolean algebra (with the zero element removed), this seems to suggest that every infinite cardinality is at least possibly the cardinality of some universe of unrestricted mereological fusion.

However, the countable cardinality (I’ll use “A” instead of aleph_0 for typographical reasons) isn’t obviously inaccessible in the same way as the uncountable inaccessibles, and I conjectured that there might not be a complete atomless boolean algebra of that cardinality. Today, after consulting Thomas Jech’s Set Theory (3rd Millennium Edition), I found an exercise on p. 88 that proves my conjecture. In fact, any universe of unrestricted mereological fusion has to be at least the size of the continuum. If there is no atomless gunk, then the size of the universe is the size of the powerset of the set of atoms, which (for an infinite universe) is at least the size of the continuum. If there is some object composed entirely of atomless gunk, then it’s clear that this object has infinitely many pairwise disjoint parts (because if n were the greatest natural number such that it had n pairwise disjoint parts, then none of those parts could be divided any further, so each would be an atom). But then every non-empty collection of these parts has a fusion, and distinct collections have distinct fusions, so the universe must have at least as many objects as this powerset, which is at least the size of the continuum. QED.
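To spell out the counting in the gunky case a bit more explicitly (writing it in LaTeX notation, with aleph_0 for what I’m calling A):

\[
\begin{aligned}
&\text{Fix pairwise disjoint gunky parts } p_0, p_1, p_2, \dots\\
&\text{For each non-empty } S \subseteq \mathbb{N}, \text{ let } f(S) \text{ be the fusion of } \{p_n : n \in S\}.\\
&\text{If } S \neq S', \text{ then some } p_n \text{ is part of one fusion but disjoint from the other, so } f(S) \neq f(S').\\
&\text{Hence there are at least } |\mathcal{P}(\mathbb{N})| = 2^{\aleph_0} \text{ objects.}
\end{aligned}
\]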

In fact, though, the result mentioned in Jech is substantially stronger. It says that if K is the cardinality of some infinite complete boolean algebra, then K=K^A, where again A is the countable cardinality. Exactly which class of cardinalities satisfies this property is unclear, because of things like the Singular Cardinals Hypothesis and Easton’s Theorem (which states that it is consistent for the powerset of each regular cardinal to have any cardinality whatsoever, so long as the assignment is non-decreasing and the powerset of each regular cardinal K has cofinality strictly greater than K). It is clear that 2^K satisfies it (since (2^K)^A = 2^(KA) = 2^K, by simple cardinal arithmetic). And the result Gabriel Uzquiano cites suggests that every (uncountable, strong) inaccessible satisfies it as well.

However, it is clear that nothing with cofinality A is a possible size of the universe, which rules out A itself (so the universe must be uncountable, answering Brian Weatherson’s first question negatively), as well as aleph_A, aleph_(aleph_A), aleph_(A+A), and the first fixed point of the aleph function (ie, the limit of A, aleph_A, aleph_(aleph_A), …), since all of these have cofinality A.
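The fact doing the work in this cofinality restriction is König’s theorem, which says that K^cf(K) > K for every infinite cardinal K. Combined with the K = K^A result from Jech, in LaTeX notation:

\[
\mathrm{cf}(\kappa) = \aleph_0 \;\Longrightarrow\; \kappa < \kappa^{\mathrm{cf}(\kappa)} = \kappa^{\aleph_0},
\]

so no cardinal of countable cofinality can satisfy K^A = K, and hence none can be the cardinality of a complete boolean algebra, or of a universe of unrestricted fusion.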

In addition, assuming there aren’t a proper class of objects, the universe can be separated into a part consisting of all the atoms and a part consisting of all the atomless gunk. Since every object is the fusion of an atomic part and a gunky part (either of which may be empty), the cardinality of the universe is the product of the cardinalities of these two parts, which is just the larger of the two cardinalities, by a basic result of cardinal arithmetic. The atomic part has cardinality 2^K (where K is the number of atoms), which is fairly restrictive, since this means that it can’t have cofinality A and can’t be inaccessible. In addition, the only 2^K that can have cofinality aleph_1 is 2^A, so either the atomic part is required to have cofinality at least aleph_2, or 2^A is at least aleph_(aleph_1), in which case the universe is required to have at least that cardinality, which rules out uncountably many cardinalities. Similar results obtain for every cofinality: since the atomic part of the universe has size 2^K for some K, it has cofinality strictly greater than K, which rules out most limit cardinals.

GCH seems to be the way to make this the most permissive, since allowing any one limit cardinal of cofinality L as the size of the atomic part requires increasing some 2^K up to that cardinal, which rules out uncountably many lower cardinals. So intuitively, using forcing to make every powerset as small as possible is the way to rule out the fewest cardinals, and that gives us GCH, which says that the powerset cardinalities are exactly the successor cardinals. Under that hypothesis, the atomic part of the universe has a successor cardinality.
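To spell out the step above about cofinality aleph_1: König’s theorem also tells us that the cofinality of 2^K is strictly greater than K, so, in LaTeX notation:

\[
\mathrm{cf}(2^{\kappa}) = \aleph_1 \;\Longrightarrow\; \kappa < \aleph_1 \;\Longrightarrow\; \kappa \leq \aleph_0 \;\Longrightarrow\; 2^{\kappa} \leq 2^{\aleph_0},
\]

so among the powersets of infinite sets, the only one whose cardinality can have cofinality aleph_1 is the powerset of a countable set, ie 2^A.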

Now, the atomless part of the universe is less clear. Gabriel Uzquiano says that every inaccessible is possible as a cardinality for this part, and this result sounds quite plausible. The appropriate chapter in Jech didn’t seem to give any sufficient conditions for a cardinal to be the size of a complete atomless boolean algebra, just necessary conditions, so all I can say is that the atomless part could have inaccessible cardinality, and that its cardinality K certainly satisfies K^A=K, so it does not have countable cofinality.

So an answer to Brian Weatherson’s second question requires a solution to the generalized continuum problem for the atomic part of the universe (ie, what cardinals are the sizes of powersets), and some more knowledge of complete atomless boolean algebras than I have for the atomless part.

But at any rate, the universe is not countable, and does not have countable cofinality, and is also at least the size of the continuum.

*This entire discussion assumes that the sets aren’t part of the universe, so that we can properly talk about the cardinality of the universe. One way to accomplish this is to be a nominalist and think that there just aren’t any sets, but allow ourselves to use sets fictionally to talk about cardinalities. Another way is to consider the universe from the perspective of some “bigger universe” containing more sets, so that the actual universe (including all its sets) forms a set and not a proper class. However, if the universe satisfies ZFC, then we can’t be total mereological universalists, because not every collection of sets has a fusion – otherwise, the universe would form a set, since we could take the fusion of all the singletons. To address the case where the universe contains sets, I think you have to be careful just how you phrase the complete mereology axiom. The discussion here seems like it will be quite useful.





Tertium non Datur

24 04 2005

[UPDATE 1/28/06: Since everyone seems to be finding this page (and my blog in general) by googling “Tertium non Datur”, I figure I’ll explain the principle right here. The Wikipedia page on the Law of Excluded Middle may be useful as well. “Tertium non Datur” is, I believe, Latin for “the third is not given”, meaning that there is no third option beyond true and false for sentences. Michael Dummett (in the introduction to Truth and Other Enigmas) distinguishes this from the Law of Bivalence, which says that not only does no sentence have a value other than true or false, but in fact every sentence does have one of those two values. They are very similar principles, but are importantly different if one doesn’t assume classical logic.]

I commented on a paper this weekend at the Berkeley-Stanford-Davis Graduate Philosophy Conference that was a defense of the law of excluded middle. However, it seems to me that it might be better to call it a defense of the law of tertium non datur, at least according to the confusing terminology Dummett adopts on pages xviii-xix of the preface to Truth and Other Enigmas in an attempt to clarify the essay “Truth”. According to Dummett’s usage, the law of excluded middle is the logical law asserting that “A or not A” is always true. The closely related law of bivalence states that every statement is either true or false. He distinguishes these from a principle he calls tertium non datur, which states that no statement is neither true nor false.

It seems clear that the law of bivalence implies tertium non datur (if there were a statement that was neither true nor false, then bivalence would be false, so there must not be such a statement), but the reverse implication only works if one believes the law of double negation, which someone like Dummett rejects.
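In symbols (writing T(A) for “A is true” and F(A) for “A is false”), the situation as I understand it is:

\[
T(A) \lor F(A) \;\vdash\; \neg(\neg T(A) \land \neg F(A)),
\qquad
\neg(\neg T(A) \land \neg F(A)) \;\dashv\vdash\; \neg\neg(T(A) \lor F(A)),
\]

where both claims hold intuitionistically, so getting from tertium non datur back to bivalence is exactly an application of double negation elimination to the disjunction.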

The reason Dummett felt obliged to include all this terminology in the preface to his collection is that in “Truth”, he takes himself to be defending tertium non datur while attacking the law of bivalence.

I won’t discuss the paper I was commenting on, but I will point out that Paul Horwich seems to have been a victim of the same confusion in his book Truth. In Question 18, he seeks to show how his minimal conception of truth can preserve the fact that the product of ideal inquiry is true, while attacking the notion that all truth is the product of some ideal inquiry, which he takes to be a constructivist or anti-realist position of a sort. (Some of the same confusions arise in Questions 26-28, though perhaps Horwich is clearer about what’s going on there.)

He points out that there are facts beyond the reach of ideal investigation (he question-beggingly calls them “truths”), such as facts about borderline cases of vague predicates, or perhaps Dummett’s example about the courage of a now-dead individual who was never exposed to danger in her life. In such a case, neither A nor not-A is verifiable, and thus (on the view Horwich is attacking) A is neither true nor false. But (using “T” for the truth predicate and “~” for negation) this just means ~T(A)&~T(~A), which, by the T-schema (which Horwich supposes everyone should accept), comes to ~A&~~A, a clear contradiction.
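Spelled out (this is my reconstruction of the little argument, not a quotation from Horwich):

\[
\neg T(A) \land \neg T(\neg A)
\quad\text{together with the T-schema}\quad
T(X) \leftrightarrow X
\quad\text{yields}\quad
\neg A \land \neg\neg A,
\]

which is a contradiction even by intuitionistic lights.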

But this just points up some of the dangers of supposing that a statement’s truth consists in its ideal verification, not dangers of anti-realism in general. I think this means that an anti-realist who wants to preserve the T-schema should say that the truth of an atomic statement consists in its ideal verifiability, while the truth of a negation consists in its being ideally verifiable that no actually possible investigation will establish the truth of the original proposition. Thus, rather than “X was not brave” meaning that it is verifiable that X would have acted cowardly had she been exposed to danger, this anti-realist should say that “X was not brave” means that no process of inquiry will reveal that X would have acted bravely had she been exposed to danger. Thus, it is possible to conclude that X was not brave and that X was not cowardly, without concluding that X was neither brave nor not brave.

Thus, on this view, establishing that a particular atomic proposition is undecidable will establish its falsity.

Since the meaning of a disjunction is taken to consist in the idealized verification of one of the disjuncts, we see that the universalized law of excluded middle might fail, even though tertium non datur holds. That is, we can’t guarantee that either there is a verification of a statement or a verification that it can’t be verified. This doesn’t mean that we leave it open that there is such a statement (because to establish that a particular statement is such a counterexample would be to verify both that it can’t be verified and that there is no verification that it can’t be verified, which is contradictory), but it does mean that we can’t guarantee that the law holds in general. We can just show that in every particular instance, it won’t fail (though we don’t know in which of the two ways it will not fail).
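This is just the familiar intuitionistic point that excluded middle, while not provable as a general law, can never be refuted in any particular instance. A sketch of the standard derivation:

\[
\begin{aligned}
&\text{Assume } \neg(A \lor \neg A).\\
&\text{If } A \text{ held, then } A \lor \neg A \text{ would hold, contradicting the assumption; so } \neg A.\\
&\text{But then } A \lor \neg A \text{ holds, contradicting the assumption again.}\\
&\text{Hence } \neg\neg(A \lor \neg A).
\end{aligned}
\]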

Of course, it might also be possible for an anti-realist to deny the T-schema, but I don’t think that’s necessary. At any rate, it would bring the discussion too far away from the positions that Horwich is contemplating here.





Dummett on Connectives

21 04 2005

There’s an interesting post here on the restrictions Dummett had for logical connectives. In particular, the discussion is about the introduction and elimination rules for a connective being in “harmony” with one another.

I haven’t read that particular chapter yet, but whatever notion he was after, it seems to me that the position he takes in “The Justification of Deduction” (that deduction is justified insofar as it transforms evidence for the premises into evidence for the conclusion) suggests that the meaning of a connective is just the specification of what counts as evidence for a complex statement built up using it. The standard connectives are such that evidence for a formula is simply related to evidence for the subformulas. For instance, evidence for a conjunction is just the combination of evidence for both conjuncts. This can be used to justify an introduction rule, in that if I have evidence for A and evidence for B, I can easily transform this into evidence for A&B just by concatenating the evidence. Similarly, this justifies the elimination rules, in that anyone who has evidence for the conjunction must have evidence for both conjuncts, and thus can easily generate evidence for either one alone.
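Schematically (this is just the usual constructive clause for conjunction, not a quotation from Dummett), the thought is something like:

\[
e \text{ is evidence for } A \land B
\;\iff\;
e = \langle e_1, e_2 \rangle \text{, where } e_1 \text{ is evidence for } A \text{ and } e_2 \text{ is evidence for } B,
\]

which immediately yields the introduction rule (pair up the evidence) and the elimination rules (project out either component).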

For a connective like “tonk”, it seems that no good meaning can be found that yields both the introduction rule and the elimination rule. To justify the introduction rule, one would need evidence for the left contonkt to be sufficient evidence for the whole statement, but to justify the elimination rule, one needs evidence for the whole to be sufficient evidence for the right contonkt. Since the subformulas can be arbitrary, there doesn’t seem to be any way both demands can be satisfied.
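For reference, Prior’s rules for “tonk” are:

\[
\frac{A}{A \;\mathrm{tonk}\; B}\;(\text{tonk-intro})
\qquad\qquad
\frac{A \;\mathrm{tonk}\; B}{B}\;(\text{tonk-elim}),
\]

which, chained together, license inferring an arbitrary B from an arbitrary A; so no single account of what counts as evidence for “A tonk B” can underwrite both rules at once.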

Thus, on a meaning theory for connectives like Dummett’s, introduction and elimination rules are secondary to the evidentiary relations, and (since he prefers intuitionist logic) it should be clear that truth-tables are largely irrelevant as well.





Reconstructive Nominalism and Representation Theorems

6 04 2005

In “Science Without Numbers”, Hartry Field shows how to give a nominalistic theory for Newtonian gravity that agrees with the established, platonistic theory in all its nominalistic predictions. One part of showing that it agrees is showing that (assuming ZFC) any model of the nominalistic theory is isomorphic to a submodel of the platonistic theory.

In “A Subject With No Object”, Burgess and Rosen reconstruct Field’s argument, along with those of Chihara and Hellman, by trying to show that each one of them is able to construct a nominalistic theory over which the platonistic theory is conservative. However, rather than accepting Field’s (admittedly somewhat weak) arguments for the conservatism of mathematics in general, they try to prove a reverse representation theorem, establishing that the real numbers can be represented by some k-tuples of physical objects actually countenanced by Field’s theory. If all reference to real numbers can be replaced by reference to (say) triples of space-time points, then clearly we can translate the platonistic theory into a purely nominalistic form and preserve all the standard results.

However, this was definitely not Field’s strategy. Burgess and Rosen note that with this strategy, Field would be able to define multiplication directly on triples of points, and thus wouldn’t need his cardinal comparison relations, which are not purely first-order definable. Thus, they suggest that if he had followed their strategy, he could have avoided many of the logical worries that plague him towards the end of “Science Without Numbers” and in many of his exchanges with Stewart Shapiro (and more recently Otávio Bueno).

However, I think I can explain why Field used a different strategy. Field didn’t want to find surrogates for real numbers, so that (say) the weight function would return a tuple of points rather than a real number – he wanted to define weight comparison relations, so that there is no entity at all that can be said to be the weight of an object; we can just talk about when one object is heavier than another, and when the differences between the weights of two pairs of objects are the same. The particular surrogates Burgess and Rosen use are in fact quite problematic, because there seems to be no reason why particular space-time points should be connected with an object in the way that its weight would have to be. While this gets over the anti-platonist argument that there’s no way at all the weights could be causally connected to the objects, there still seems to be no plausible way in which the weights are connected to the objects. And similar, only slightly weakened, versions of the epistemic arguments would still apply as well. In addition, Burgess and Rosen point out that such a strategy requires the existence of infinitely many (in fact, uncountably many) physical objects in order to represent all the real numbers.

Thus, there’s little sense in which the Burgess and Rosen style of reconstruction would be a scientific improvement over platonistic theories, so the arguments in their last chapter would have a lot of force. However, I think Field’s theory really does limit itself to primitives (like weight comparison and betweenness) that seem perspicuous, whereas Burgess and Rosen’s nominalistic theory has to have a primitive that says when a triple of points represents the weight of a particular object. Field’s theory actually seems to me to be an improvement on standard Newtonian theory in just the same way that Hilbert’s axiomatization is an improvement on Euclid’s geometry. Few people will actually want to work with the newly reconstructed system, but it is characterized in a much more purely internal way, and thus can be more easily generalized and more compactly axiomatized. (That is, there is no need to add extra axioms to spell out all the details of the mathematical apparatus that goes along with the physical part.)