Monism and the Possibility of Anti-Gunk

8 05 2006

Here are some thoughts I had, inspired by Jonathan Schaffer’s talk at the APA a month and a half ago. Basically, I point out that if gunk is a problem for the nihilist, then anti-gunk (if it makes sense) is a problem for the monist. But gunk might not be a problem for the nihilist in the end.

The paper was called “From Nihilism to Monism”, and in it he argued that any argument leading one to believe that there are no composite objects should in fact push one all the way to believing that there is only one object – the entire universe. Unfortunately, I didn’t stick around for the comments by Ted Sider and Ned Markosian, which I’m sure shed some light on very interesting issues. However, I’m wondering whether some of the arguments could be turned around. For instance, the seeming possibility of gunk (stuff such that every part of it has even smaller parts) can’t be paraphrased by the minimal nihilist (someone who thinks there are just lots of small simples), though it can be by the monist (someone who thinks there’s just one big simple).

But what about the possibility of anti-gunk? Just as we used to have an unquestioned assumption that every object has atomic parts, don’t we also have an unquestioned assumption that there is a biggest thing that is not a part of anything else? For instance, if everything that there is has a finite size, but there is unrestricted (finitary) composition, we could have bigger and bigger things without end. This possibility could not be paraphrased by the monist, but the minimal nihilist could deal with it just fine.

The standard way to represent unrestricted (perhaps finitary) composition is to take the objects to be the elements of a boolean algebra (the bottom element is the only one that doesn’t represent an object). A relatively straightforward theorem shows that a dense subset of the algebra (a set such that every object has some member of the set as a part) suffices to represent every element of the algebra as a set of parts. If the algebra is atomic, then the set of atoms is such a dense set. But, as Ted Sider points out in “Van Inwagen and the Possibility of Gunk”, if the algebra is atomless (or has an atomless part), then any dense set will contain two elements, one of which is a part of the other. This is incompatible with the nihilist position, on which no object is a part of any other.
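To make the dense-subset idea concrete, here is a small Python sketch (my own toy illustration, not anything from Sider’s paper), using the powerset of a three-element set as a finite, atomic boolean algebra with the singletons as atoms:

    from itertools import combinations

    universe = {"a", "b", "c"}

    def powerset(s):
        s = list(s)
        return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

    algebra = powerset(universe)                 # all elements, including the bottom
    objects = [x for x in algebra if x]          # drop the bottom element
    atoms = [x for x in objects if len(x) == 1]  # elements with no proper nonzero part

    # Density: every object has some atom as a part.
    assert all(any(atom <= obj for atom in atoms) for obj in objects)

    # Representation: each object is the fusion (here, union) of the atoms it contains.
    for obj in objects:
        assert obj == frozenset().union(*[a for a in atoms if a <= obj])

The point of the assertions is just that in the atomic case the atoms are dense, and every object is recoverable as the fusion of the atoms that are parts of it – which is why the nihilist can get away with talking only about atoms there.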

The dual worry for the monist arises if we drop the top element of the boolean algebra, just as we drop the bottom element – or perhaps if we consider just some distributive lattice rather than a boolean algebra. There’s no a priori reason why the objects should form a boolean algebra rather than one of these other structures (at least, not obviously – no more reason than there is for thinking the algebra should be atomic).

There might be a paraphrase strategy, where we just talk about some fictional largest thing. But maybe we can do the same in the other direction – even if there are no atoms, we can talk as if there are some! Just as we can fictionally add a top element to the algebra, we can fictionally add elements at the bottom of chains – that is, instead of considering elements of the algebra, we can consider infinite descending chains of elements. Any element can then be represented as the set of all chains containing it. This is exactly analogous to the process by which we represent real numbers as Dedekind cuts or Cauchy sequences of rationals – we add ideal elements at the limits of chains, even though in the “actual” structure, there are no limits.

Sider says, “A hunk of gunk does not even have atomic parts ‘at infinity’; all parts of such an object have proper parts.” However, for any boolean algebra in which there is gunk (ie, some non-atomic object), there is an atomic boolean algebra in which it can be embedded. Every object in the old algebra will be represented as some object in the new one containing continuum-many atoms.

This might raise some concern, because the atomic algebra will have, in addition to the atoms, many new objects (like the finitary joins of atoms, and possibly some countable joins as well) – but the nihilist can say that the reason we don’t talk about those in ordinary language is that our grasp on the world only gets really large, crude chunks, rather than anything closer to the atoms – this is why the world looks gunky.
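The analogy with the reals can be made concrete with a minimal sketch (again my own illustration, not Sider’s or Schaffer’s): the rationals contain no square root of two, but a descending chain of nested rational intervals pins down an ideal element “at the bottom” of the chain, just as descending chains of algebra elements would pin down fictional atoms.

    from fractions import Fraction

    def chain_toward_sqrt2(steps):
        """Bisect a rational interval around sqrt(2), yielding a descending chain."""
        lo, hi = Fraction(1), Fraction(2)
        chain = []
        for _ in range(steps):
            chain.append((lo, hi))
            mid = (lo + hi) / 2
            if mid * mid < 2:
                lo = mid
            else:
                hi = mid
        return chain

    chain = chain_toward_sqrt2(20)
    lo, hi = chain[-1]
    # No member of the chain is itself sqrt(2); the chain as a whole plays that role,
    # since every interval in it contains sqrt(2) and the widths shrink toward zero.
    print(float(lo), float(hi), float(hi - lo))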

Thus, the possibility of gunk isn’t really much of a worry for the nihilist. Sider says, “Surely there are both atomistic possible worlds and gunk worlds, and for that matter in-between worlds with both atoms and gunk.” But I suggest that the nihilist could say that there are only atomistic possible worlds – the ones we might ordinarily call gunk worlds are really just ones in which all our ordinary predicates pick out continuum-sized sets of atoms with certain uniformity properties.





Definabilism and Combinatorialism

18 07 2005

I’ve been reading more of Maddy’s book Naturalism in Mathematics, and she gives an interesting case history for a realist argument against Gödel’s axiom of constructibility, V=L. She does this by giving parallels with the development of physical science from Mechanism to a broader Physicalism. I think the analogies might well have something to say in other philosophical debates as well.

Apparently, in the wake of Galileo’s and Newton’s successes in explaining parts of nature mechanically, there was a thought that everything in nature could be explained in terms of particles acting only on one another, in the direction of the line segment separating them, with forces that depend only on their distance from one another. Eventually, with the kinetic theory of gases, this model was able to explain a huge range of phenomena. However, with Oersted’s experiments on how electric currents affect a compass, it became clear that some forces can act perpendicularly and with a strength that depends on the speed of the current and not just the distance. Though ad hoc modifications were able to temporarily save Mechanism, it eventually became clear that the electromagnetic field was required in addition to the particles. Thus, although Mechanism had succeeded for a long time, we eventually needed to broaden our scope of theories, and this became Physicalism.

Maddy then describes a similar development in mathematics from a position she calls Definabilism to one she calls Combinatorialism. Definabilism is the idea that all functions (and in a sense, all objects) capable of mathematical study are somehow definable. Combinatorialism is the more modern picture, where absolutely arbitrary functions and objects are permitted. Descartes apparently considered only algebraically definable curves when he invented the idea of coordinate geometry. Later, in investigating the behavior of vibrating strings using partial differential equations, d’Alembert noted that the behavior of an actual plucked string couldn’t be modelled by his theory, because the initial condition (when the string is shaped like two straight lines with an angle between them) wasn’t a mathematical function. But Euler and one of the Bernoullis were eventually able to show that this function could be described properly (as the sort of piecewise function we’re familiar with from high school calculus textbooks today) and the differential equation could still be solved. Fourier showed how to represent such a function as a sum of sines and cosines, and conjectured that all such functions could be so represented.

But as people worked towards proving this conjecture (and meanwhile rigorizing calculus), they started coming up with new counterexamples, leading eventually to the pathological functions considered in any real analysis class today (the function that is 1 on the rationals and 0 on the irrationals, the one that takes value 0 on the irrationals and 1/q on any rational p/q in lowest terms, and Weierstrass’ function that is continuous everywhere and differentiable nowhere). Still, these functions were all definable, in some suitably general sense. But after the work of Cantor in analyzing the sets of points of discontinuity of such functions, the French and Russian analysts were eventually able to classify the Borel and analytic sets, and showed by cardinality arguments that there must be still stranger functions. Thus, they gradually adopted the modern Combinatorialist approach, whereby a function is just an arbitrary association.
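For concreteness, here is a quick Python sketch of the first two pathological examples just mentioned (standardly known as the Dirichlet function and Thomae’s function), evaluated on exact rationals – my own illustration, not anything from Maddy:

    from fractions import Fraction

    def dirichlet(x):
        """1 on the rationals, 0 on the irrationals."""
        return 1 if isinstance(x, (Fraction, int)) else 0

    def thomae(x):
        """0 on the irrationals, 1/q at a rational p/q in lowest terms."""
        if isinstance(x, (Fraction, int)):
            return Fraction(1, Fraction(x).denominator)  # Fraction keeps p/q in lowest terms
        return 0

    # Rational inputs are passed as Fractions; a float stands in for an irrational,
    # purely for illustration.
    print(dirichlet(Fraction(3, 4)), thomae(Fraction(3, 4)))  # 1 and 1/4
    print(dirichlet(2 ** 0.5), thomae(2 ** 0.5))              # 0 and 0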

Maddy suggests that adopting V=L would be a return to Definabilism, and in fact a very restricted form of it (where only predicative definitions are allowed). Instead, because we have adopted the Combinatorialist position, we should opt for “maximizing” rather than “minimizing” principles in set theory.

I think the situation here should be somewhat familiar from debates in metaphysics. The two I’m thinking of in particular are about the range of possible worlds that exist (they can be ersatz worlds), and about which collections of objects compose a further object. I’m not sure what most metaphysicians think is the range of possible worlds that exist, but I think most people think it’s more than just the worlds compatible with actual physical laws, and less than the set of all logically possible worlds. (Of course, there are also the Australians who believe in impossible worlds.) And in mereology, we have the debate between fans of unrestricted composition and those who argue for a more moderate answer to the composition question. If there are enough analogies with the mathematical situation that Maddy describes, then perhaps we should adopt the most permissive answer possible in each case. I tend to think, however, that there should be some restriction in all three cases (set theory, possible worlds, and composition). The restrictions I would propose are much more permissive than V=L, or merely finite composition in mereology. But I don’t think there’s a coherent way to say “any collection of things forms a set” or “any collection of objects composes a further object” without running into the set-theoretic paradoxes. And I have no idea how one could phrase the sort of middle way I propose (which most people would probably still consider to be quite extreme towards the universalist picture).





Time Gunk and Zeno

30 05 2005

I suppose this is an odd topic to post about immediately after FEW, having nothing to do with what went on there. But I talked to several people about this at lunches and in the evening, and they don’t think anyone has written up this idea before.

It seems that some objects may be “metaphysically simple” in the sense that they have no proper parts. For instance, it was once thought that atoms had this property (hence their name) and it may still be thought that quarks and electrons do. However, some people believe it’s at least possible that some objects are atomless, in that every part of the object can be further subdivided into more parts. Any such object is said to be made up of “atomless gunk”.

Of course, all this is talking about the spatial parts of some object. But as Ted Sider points out (in his chapter on Temporal Parts), it seems natural to consider the temporal parts of some objects as well. Some examples he gives are “Ted when he had long hair”, “Ted when he had short hair”, “the piece of clay while it was shaped into a statue”, and “the current time slice of the Eiffel Tower”.

But the talk of time slices (Sider’s “stage view”, I think) seems to me to make a presumption that he didn’t want to make in the case of spatial parts. If we think that some spatial parts of objects may be atomless, in that they have no part without a proper spatial subpart, then why not think that some temporal parts of some objects might have a similar divisibility property? In the spatial case, it seems possible that some physical objects have atomic parts and some not. In the temporal case this would seem weirder, but there’s obviously no logical contradiction in the idea.

I think Sider wants to allow that space and time both be coordinatized by real numbers. Thus, it could still make sense to talk about points of space even if they are located in the region spanned by some gunky object – there’s no need to suppose that the object and space have any parts in common, especially if one is a relationalist about space rather than a substantivalist. Similarly, an object could be temporally gunky even if it makes sense to talk about instantaneous moments of time. This would allow a temporally gunky object to exist at the same time as some non-gunky one.

If two objects cannot spatiotemporally overlap without sharing some parts, then this would also mean that no matter how you break the universe up into parts, there are only countably many disjoint ones (assuming the coordinatization of space-time is Archimedean) – the thought being that each disjoint part would have to take up its own extended region of space-time, and there is only room for countably many non-overlapping such regions.

If all objects are temporally gunky, then this would provide a nice resolution of Zeno’s paradox of the arrow. The paradox says that at any moment, the arrow has a specific location. Thus, there is no moment at which the arrow moves. So the arrow must be stationary. However, if the arrow is temporally gunky, then it doesn’t make sense to talk about the arrow at any particular instant. It may only make sense to talk about the arrow extended over a (perhaps extremely small) interval of time. Any such part of the arrow occupies slightly different spatial regions at each moment, and thus every part of the arrow is moving. The “time slices” don’t move, but they also don’t exist on this picture.

For someone who believes what I imagine Sider to believe, that every collection of objects forms a further object, no matter how the parts are arranged in space and time, the paradox also goes away even if time is atomic. However, the explanation seems to miss something about the intuitive idea of motion. The theory suggests that it doesn’t make sense to talk about the motion of a temporally non-extended object, and I would agree. But this would mean that there are some objects (temporally extended ones) of which it makes sense to ask if they are in motion, and some objects (the stages) of which it doesn’t. Perhaps this is no worse than saying that an object must be spatially extended in order to ask whether it has a direction, but it does seem at least slightly more troubling.

And to say that there are some times at which the arrow is moving and some times at which it isn’t would only make sense (if at all) when talking about the whole arrow, and not its stage, because the stage is also part of many other objects that don’t move at all. But if it has no temporally atomic parts, then in the interval when the arrow moves from the bow to the target, it has no parts which are not moving. In some larger intervals it may have some parts that move and some that don’t. But at any rate, Zeno’s paradox disappears.





Nihilism vs. Universalism

18 05 2005

When I was at the APA Pacific Division conference at the end of March, I attended an interesting talk by Matthew Slater opposing mereological moderation – the doctrine that for some things, there is an object made up of them as parts, and for some things, there is no object such that every part has a part that is part of one of those things. However, he suggested that he might be neutral between nihilism (the thought that no collections of things are the parts of some further thing) and universalism (the thought that any collection of things is the parts of some further thing).

It seems to me that if these really are the choices, then Hartry Field’s nominalist reconstruction of physics pushes us most definitely towards the latter. He has to quantify over non-atomic entities in order to do physics, so at least some entities must have non-trivial parts. But this just means that nihilism is not an option. To embrace this nihilism would mean a definite rejection of Field’s program, and thus an embrace of (perhaps more troubling?) abstract objects, rather than composite physical objects.

Not knowing the literature, I don’t know if this argument has already been made (or if the naturalistic prejudice it has is considered legitimate there), but it’s what I was thinking when I was at that talk.





Not Countably Many

26 04 2005

This post has nothing to do with my qual topics, except that it uses a little bit of set theory. However, I have a bit of a soft spot for mereology.

I was browsing through Brian Weatherson’s archive last night and stumbled across a post wondering how many things* there could be in an infinite universe if all mereological fusions exist. Daniel Nolan answered that if there is no atomless gunk (ie, if every object has a part with no proper part), then there must be 2^k objects for some cardinal k. This only obviously rules out inaccessible cardinalities. Gabriel Uzquiano then mentioned a result that said there are complete atomless boolean algebras of every inaccessible cardinality, and since a world composed entirely of atomless gunk is basically just a complete atomless boolean algebra, this seems to suggest that every infinite cardinality is at least possibly the cardinality of some universe of unrestricted mereological fusion.

However, the countable cardinality (I’ll use “A” instead of aleph_0 for typographical considerations) isn’t obviously inaccessible in the same way as uncountable inaccessibles, and I conjectured that there might be no complete atomless boolean algebra of that cardinality. Today, after consulting Thomas Jech’s Set Theory (3rd Millennium Edition), I found an exercise on pg. 88 that proves my conjecture. In fact, any universe of unrestricted mereological fusion has to be at least the size of the continuum. If there is no atomless gunk, then the size of the universe is the size of the powerset of the set of atoms. If there is some object composed entirely of atomless gunk, then it’s clear that this object has infinitely many disjoint parts (because if n were the greatest natural number such that it had n disjoint parts, then each of those parts would have to be an atom). But then every collection of these parts forms an object, so the universe must have at least as many objects as there are collections of these parts – the powerset of an infinite set – which is at least the size of the continuum, QED.
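In symbols (just restating the argument above, with U for the universe and p_0, p_1, p_2, … for the disjoint gunky parts): distinct nonempty collections of the p_i have distinct fusions, so

    \[
      S \longmapsto \mathrm{fusion}(S) \ \text{is injective on nonempty}\ S \subseteq \{p_0, p_1, p_2, \dots\},
      \quad\text{and hence}\quad
      |U| \geq 2^{\aleph_0}.
    \]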

In fact, though, the result mentioned in Jech is substantially stronger. It says that if K is the cardinality of some complete boolean algebra, then K=K^A, where again A is the countable cardinality. Exactly which class of cardinalities satisfies this property is unclear, because of things like the Singular Cardinals Hypothesis and Easton’s Theorem (which states that it is consistent for the cardinalities of the powersets of the regular cardinals to be anything, as long as they are non-decreasing and the powerset of each K has cofinality strictly greater than K). It is clear that 2^K satisfies it (since (2^K)^A=2^(KA)=2^K, by simple cardinal arithmetic). And the result Gabriel Uzquiano cites suggests that every (uncountable, strong) inaccessible satisfies it as well.

However, it is clear that nothing with cofinality A is a possible size of the universe, which rules out A itself (so the universe must be uncountable, answering Brian Weatherson’s first question negatively), and aleph_A, and aleph_(aleph_A), and aleph_(A+A) (reading the subscripts as ordinals), and the first fixed point of the aleph function (ie, the limit of A, aleph_A, aleph_(aleph_A), …).

In addition, assuming there isn’t a proper class of objects, the universe can be separated into a part consisting of all the atoms and a part consisting of all the atomless gunk. Since every object can be partitioned the same way, the cardinality of the universe is going to be the product of the cardinalities of these two parts, which is just the larger of the two cardinalities, by a basic result of cardinal arithmetic. The atomic part has cardinality 2^K, which is fairly restrictive, since this means that it can’t have cofinality A and can’t be inaccessible. In addition, the only 2^K that can have cofinality aleph_1 is 2^A, so either the atomic part is required to have cofinality at least aleph_2, or 2^A is at least aleph_(aleph_1), in which case the universe is required to have at least that cardinality, which rules out uncountably many cardinalities. Similar results obtain for every cofinality, so that if the atomic part of the universe has size at least 2^K, then it has cofinality strictly greater than K, which rules out most limit cardinals. GCH seems to be the way to make this the most permissive, since allowing any one limit cardinal with cofinality L requires increasing 2^K to be that cardinal, which rules out uncountably many lower cardinals. So intuitively, using forcing to make every powerset as small as possible is the way to rule out the fewest cardinals. That gives GCH, on which the powerset cardinalities are exactly the successors; so under that hypothesis, the atomic part of the universe has a successor cardinality.
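The two standard facts of cardinal arithmetic being leaned on here, written out for definiteness (just background, nothing beyond what the paragraph already assumes):

    \[
      \kappa \cdot \lambda = \max(\kappa, \lambda) \ \text{for infinite}\ \kappa, \lambda,
      \qquad
      \mathrm{cf}(2^{\kappa}) > \kappa \ \text{(K\"onig's theorem)}.
    \]

The first gives the “product is the max” step; the second is why the atomic part, of size 2^K, can’t have cofinality A (or any cofinality up to K), and since K < 2^K, the cardinal 2^K also can’t be a strong limit, hence can’t be strongly inaccessible.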

Now, the atomless part of the universe is less clear. Gabriel Uzquiano says that every inaccessible is possible as a cardinality for this part, and this result sounds quite plausible. The appropriate chapter in Jech didn’t seem to give any sufficient conditions for a cardinal to be the cardinality of a complete atomless boolean algebra, just necessary conditions, so all I can say is that the atomless part could have inaccessible cardinality, and that it certainly has cardinality K such that K^A=K, so it does not have countable cofinality.

So an answer to Brian Weatherson’s second question requires a solution to the generalized continuum problem for the atomic part of the universe (ie, what cardinals are the sizes of powersets), and some more knowledge of complete atomless boolean algebras than I have for the atomless part.

But at any rate, the universe is not countable, and does not have countable cofinality, and is also at least the size of the continuum.

*This entire discussion assumes that the sets aren’t part of the universe, so that we can properly talk about the cardinality of the universe. One way to accomplish this is to be a nominalist and think that there just aren’t any sets, but allow ourselves to use sets fictionally to talk about cardinalities. Another way is to consider the universe from the perspective of some “bigger universe” containing more sets, so that the actual universe (including all its sets) forms a set and not a proper class. However, if the universe satisfies ZFC, then we can’t be total mereological universalists, because not every collection of sets has a fusion – otherwise, the universe would form a set, since we could take the fusion of all the singletons. To address the case where the universe contains sets, I think you have to be careful just how you phrase the complete mereology axiom. The discussion here seems like it will be quite useful.