Domain Name Trouble

28 01 2006

Sorry about the blog being down most of the past week – it looks like I started the blog over a year ago, and the domain name registration expired after a year. It seems to be intermittent right now, but hopefully it’ll be working fully again soon. I have two posts that went up after the domain name went down, so you may want to check them out.





Do Mathematical Concepts Have Essences?

24 01 2006

In John MacFarlane’s seminar today on Brandom’s Making it Explicit, the distinction was discussed between necessary and sufficient conditions for a concept and the essence of a concept. The distinction is roughly that necessary and sufficient conditions for the application of a concept don’t necessarily tell you in virtue of what the concept is satisfied. For instance, we have two extensionally equivalent notions – that of being a pawn in chess; and that of being permitted to move forwards one square, capture diagonally forwards one square, move two squares forwards in certain contexts, and so on. At first it might seem correct to say that the piece can move forwards one square because it is a pawn, but further reflection suggests that this would leave the notion of being a pawn unanalyzed. After all, a piece is not a pawn in virtue of its shape (we could be playing chess with an elaborately carved ivory set, or labeled checkers rather than standard chess pieces, or even patterns of light on a computer screen), nor in virtue of its position on the board (any piece could be in most of the positions a pawn could be in), nor in virtue of almost anything else. It seems that in fact, the reason it is appropriate to call this piece a pawn is that we give it the normative status of being able to move in certain ways (along with giving other pieces related permissions and obligations). Thus, it seems that it is a pawn in virtue of its normative statuses, and it has these statuses in virtue of our agreement to treat it thus (or something like that).

Now, whether this “in virtue of” makes sense or not is a contentious debate I’m sure. But if it does, then it seems to motivate various projects, both philosophical and otherwise. For instance, the physicalist program seeks to find physical facts in virtue of which all other facts hold (whether about consciousness, laws of nature, normativity, life, etc.) and in general any reductionist program seeks to show that a certain set of facts holds in virtue of some other set (though they may argue that even the distinction between these two sets of facts is merely illusory).

Another example that was used in seminar to motivate this distinction was that we know that, necessarily, any straight-edged figure in which each external angle is equal to the sum of the internal angles at the other vertices is a triangle, and vice versa. However, there is a sense in which it is a triangle in virtue of its having three sides, rather than in virtue of this fact about the external angles. So I wondered how far this idea can be extended in mathematics.
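
(To spell out why that equivalence holds, here is my own quick reconstruction, not something from the seminar: take a simple straight-edged figure with n vertices and internal angles a_1, …, a_n, so that the internal angles sum to (n-2)·180°.)

```latex
% The external angle at vertex i is 180 - a_i, and the internal angles of a simple
% n-sided figure sum to (n-2)*180.  Requiring each external angle to equal the sum
% of the other internal angles forces n = 3:
\begin{align*}
180^\circ - a_i = \sum_{j \neq i} a_j = (n-2)\cdot 180^\circ - a_i
\;\Longrightarrow\; 180^\circ = (n-2)\cdot 180^\circ
\;\Longrightarrow\; n = 3.
\end{align*}
% Conversely, in a triangle a_1 + a_2 + a_3 = 180^\circ, so 180^\circ - a_i = a_j + a_k
% at each vertex.
```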

At first I thought it would quickly fail – after all, it’s extremely common not to have a single definition for a mathematical object. For instance, an ordinal can be defined as a transitive set linearly ordered by the membership relation, as a transitive set of transitive sets, as the Mostowski collapse of a well-ordering, and probably in many other ways. In different presentations of set theory, a different one of these is taken to be the definition, and all the others are proven as theorems. Similarly, a normal subgroup of a group G can be defined either as the kernel of a homomorphism from G to some group H, or as a subgroup of G that is closed under conjugation by elements of G, or probably in many other ways as well.
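
(For concreteness, here is the standard equivalence between the two characterizations of a normal subgroup just mentioned; nothing original, just the textbook statement.)

```latex
% Two equivalent definitions of a normal subgroup N of a group G:
%   (1) N is the kernel of some homomorphism out of G;
%   (2) N is a subgroup of G closed under conjugation.
\begin{align*}
\text{(1)}\quad & N = \ker\varphi \ \text{ for some homomorphism } \varphi : G \to H \\
\text{(2)}\quad & N \leq G \ \text{ and } \ gNg^{-1} = N \ \text{ for all } g \in G
\end{align*}
% (1) implies (2): if \varphi(n) = e, then \varphi(gng^{-1}) = \varphi(g)\,e\,\varphi(g)^{-1} = e.
% (2) implies (1): take \varphi to be the quotient map G \to G/N, whose kernel is exactly N.
```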

However, I’m starting to think that maybe there still is a notion of essence here. For most of the uses of the normal subgroup concept, the fact that it is the kernel of a homomorphism is really the fundamental fact. This explains why you can take quotients by normal subgroups, and more easily generalizes to the notion of an ideal in a ring. With the ordinal concept, it’s a bit harder to see what the fundamental fact is, but it’s clear that well-ordering is at the bottom of it – after all, when generalizing to models of set theory without the Axiom of Foundation, we normally restrict the notion of ordinal to well-founded sets, which the first two definitions on their own would not require.

If this is right, then I think that a lot of the history of the development of mathematics can be seen as a search for the essences of our concepts (and for the important concepts to find the essences of). Although we often think that theorems are the main product of mathematics, it seems that a lot of the time just identifying the “right” structures to be talking about is really the goal.

Something like this can be seen in the history of algebraic geometry. At first, it was the study of curves in the real plane defined by polynomials. Eventually, it was realized that setting it in the complex plane (or the plane over any algebraically closed field) makes a lot of things clearer. (For instance, Bezout’s theorem – that a curve of degree m and a curve of degree n always intersect in exactly mn points, counting multiplicity – holds in this setting.) Then it was generalized to n-dimensional spaces, and to projective spaces as well, to take care of a few troubling instances of Bezout’s theorem and to make sure that every pair of algebraic curves (now called varieties) intersects. After noticing the connection between algebraic curves and ideals in the ring of polynomials on the space (there is a natural pairing between algebraic subsets of a space and radical ideals in the ring of polynomials), it became natural to define a ring of polynomial-like functions on the algebraic curves themselves. With this definition, it was clear that projective spaces are somehow the same as algebraic curves in higher-dimensional spaces, and affine spaces are their complements. Thus, instead of restricting attention to affine and projective n-spaces over algebraically closed fields, the spaces of study became “quasiprojective varieties” – intersections of algebraic subsets of these spaces and their complements. In the ’50s and ’60s, this notion was generalized even further to consider any topological space with an associated ring satisfying certain conditions – that is, the objects of study became sheaves of rings over a topological space satisfying certain gluing conditions. Finally (I think it was finally), Grothendieck consolidated all of this with the notion of a scheme.
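
(To make the Bezout point concrete, here are a few toy instances of the kind of trouble the two moves repair; these are my own illustrations, not drawn from any particular historical episode, and the LaTeX assumes amssymb for the blackboard-bold symbols.)

```latex
% Bezout: in the projective plane over an algebraically closed field, curves of degrees
% m and n with no common component meet in exactly mn points, counted with multiplicity.
\begin{itemize}
  \item Missing real points: $x^2 + y^2 = 1$ and $y = 2$ don't meet over $\mathbb{R}$,
        but over $\mathbb{C}$ they meet at the two points $(\pm i\sqrt{3},\, 2)$.
  \item Missing points at infinity: the parallel lines $y = 0$ and $y = 1$ never meet in
        the affine plane, but in the projective plane they share the point $[1:0:0]$.
  \item Multiplicity: the tangent line $y = 0$ meets the parabola $y = x^2$ only at the
        origin, but $x^2 = 0$ counts that point with multiplicity $2$, giving $1 \cdot 2 = 2$.
\end{itemize}
```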

At various points in the development of algebraic geometry, the spaces under study changed in various ways. At first, extra restrictions were imposed by requiring the field to be algebraically closed. But then other restrictions were removed by allowing the dimension to be higher. Moving to projective spaces was another restriction (in a sense), but then moving to quasiprojective varieties was a generalization. The moves to locally ringed spaces, then sheaves, and finally schemes were greater generalizations that (I think) brought back the spaces that were originally excluded by requiring algebraic closure. However, the spaces that were excluded by that first move could now be understood in much better ways using the tools of ideal theory, the Zariski topology, and much else that was naturally developed in the more restricted setting. I am told that the notion of a scheme helped tie everything together and allowed algebraic geometers to finally reach a concept with all the power and interest they wanted, so that they could give good explanations for facts like the analogs of Bezout’s theorem and also start dealing with problems of number theory in geometric terms.





Is Truth Necessary for Scientific Explanation?

24 01 2006

Over the last year or two I’ve become pretty convinced (though not necessarily for good reason) that inference to the best explanation (IBE) is the main (if not the only) tool of inference in non-deductive contexts. Even in mathematics I’d like to say this is the case, in arriving at standard axioms, in supporting new ones, and in developing conjectures. (Of course, this is pending an account of explanation in mathematics.)

Anyway, some discussions I’ve had during that time with Peter Gerdes have made me wonder some more about the nature of explanation. He has on several occasions argued against the use of inference to the best explanation, making a claim to the effect that something’s being a good explanation presupposes that it is true, so we can’t recognize good explanations until after we’ve recognized the truth of the explainer.

Now, I don’t know the literature on this at all (I should probably look at this quite a bit before I decide to get around to graduating), so I don’t even know for instance what sorts of things A and B are supposed to be if A explains B. (Theories? Propositions? Facts? Events?) At any rate, it seems clear that you can’t explain something that didn’t happen, so B should be true (or actual, or whatever the appropriate property is for the sort of entity in question). However, this doesn’t seem so clear for A.

In ordinary usage, it does at first seem that A has to be true – I can’t explain why Mary is looking around on the ground by saying that she lost her wallet, unless she actually did lose her wallet. However, in the scientific case (and I would guess, the more complicated ordinary cases as well), it seems that good explanations really can come from false theories. For instance, Newton’s laws of gravitation and of motion explain Kepler’s laws of planetary motion (or at least, the data leading to his postulation of them) quite well – even though we all believe Newton’s laws don’t actually obtain. In fact, for this particular set of data, it’s not at all clear that relativity (or quantum gravity, or string theory, or …) is a better explanation just because it happens to be true (or closer to the truth).
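
(The explanatory fit here is quite concrete; the standard textbook derivation of Kepler’s third law from Newton’s laws, in the simplified special case of a circular orbit of radius r, goes as follows.)

```latex
% Kepler's third law from Newton's laws, for a body of mass m in a circular orbit of
% radius r around a sun of mass M: gravity supplies exactly the centripetal force that
% the second law requires.
\begin{align*}
\frac{G M m}{r^2} \;=\; m\,\omega^2 r \;=\; m \left(\frac{2\pi}{T}\right)^{\!2} r
\quad\Longrightarrow\quad
T^2 \;=\; \frac{4\pi^2}{G M}\, r^3
\end{align*}
% The square of the period is proportional to the cube of the orbital radius, with the
% same constant for every planet orbiting the same sun, which is Kepler's third law.
```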

There does seem to be something to the simplicity of the Newtonian explanation that makes it preferable. In addition, Newtonian mechanics is close enough to being correct that it seems to be useful as an explanation even though it’s actually false. That is, it helps us conceptualize what’s going on, make predictions about related facts, remember Kepler’s laws when we’re not literally memorizing them, and so on. There are very few senses in which saying “God is crying” is a good explanation for why it’s raining, and a lot more in which saying something not quite accurate about warm fronts and dew points and such is. If our notion of explanation wasn’t tolerant of falsehood in the antecedent, then science would rarely (if ever) help us explain anything – after all, we have good reason to believe that every scientific theory believed more than twenty years ago was false, which itself gives us good evidence to believe that current ones are false as well. However, it seems clear that science generally provides us better and better explanations of all sorts of phenomena, suggesting that false theories can in fact provide good explanations.

If explanation really is falsity-tolerant in the antecedent, as I suspect, then I think we can get IBE off the ground. Of course, we’d need to tell some story like what Jonah Schupbach was saying about a year ago at his blog, about why IBE tends to lead us towards the truth, even if it doesn’t presuppose it. And we’d have to watch out for the worries van Fraassen raises for using IBE as a supplement to probabilistic reasoning (which I learned about from a post by Dustin Locke on his blog). I think these are compatible, if Jonah is right that IBE is just a heuristic for simplifying Bayesian computations, rather than a supplement to them as van Fraassen supposes. But we’d need to work things out in more detail of course.





Resnik’s “Euclidean Rescues”

11 01 2006

Mike Resnik’s book Mathematics as a Science of Patterns gives a picture of mathematics that I generally agree with. He takes something like a Quinean position, saying that mathematics, logic, science, and observation are all together in our web of belief, and the process of confirmation is always global. Any statement in the web is in principle revisable. However, we still have the intuition that particular experiments test particular components of our theory individually, because a change in that component wouldn’t reverberate as much through our web of belief. We are free to make the larger changes if further experiments eventually suggest that doing so would be correct, but the changes involved would often be so drastic as to be effectively ruled out. Thus, we get the illusion of non-holism in our practice of confirmation.

Resnik denies that mathematics is in some absolute sense a priori – instead, it is just more general than any science and therefore less susceptible to refutation, due to the greater effects any change on it would have in all our other beliefs. “The relative apriority of mathematics is thus due to its role as the most global theory science uses rather than to some purely logical considerations that shield it from experiential refutation. The same good sense counsels us to use Euclidean rescues to save our mathematical hypotheses from empirical refutation, that is, to save them by holding that a putative physical application failed to exhibit a structure appropriate to the mathematics in question.” (p. 173) He calls such a move a “Euclidean rescue” by analogy with the case of Euclidean geometry – when Einstein and others gave evidence showing that physical space did not obey Euclidean geometry, geometry was reinterpreted as being about abstract points and lines, rather than physical ones as had always been presupposed. Every non-Euclidean geometry contains within it models of Euclidean geometry, so we can save the truth of Euclid’s theorems by saying that he was talking about these models, rather than actual space.

However, it’s not clear if his notion of a Euclidean rescue really allows us to shield our web of belief from the repercussions of a change. Let’s consider an example. Say at some hypothetical future point in time, people have been led to adopt some axiom A of number theory that goes beyond PA (since PA is relatively weak, this might even be some consequence of ZFC, but it might not be). Because this axiom will be taken to be true of our concept of number, it will end up having an impact on various complicated computations in the natural sciences. But then let’s say that our notion of physics has also progressed to the extent that we have theoretical justification for saying that a particular machine can carry out a hypercomputation. (That is, it can carry out steps in exponentially decreasing amounts of time, so that it can go through an entire sequence of omega steps in a finite amount of time.) We can then use such a machine to check a simple universally quantified statement of number theory, let’s say C, and assume that C is a consequence of PA+A. If the machine tells us that C is false, then we can either say that our physical theory was wrong in saying that the machine accurately models hypercomputation, or reject PA+A as our correct number theory, or possibly revise logic in some way so that C is no longer a consequence of PA+A.
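
(The timing trick behind such a machine is just a convergent geometric series; here is a minimal sketch of how it would settle C, on the assumption that C has the form ∀n φ(n) with each instance φ(n) mechanically checkable.)

```latex
% Suppose the machine checks instance \varphi(n) during step n, and step n takes
% 2^{-n} seconds.  Then the whole \omega-sequence of steps finishes within
\[
  \sum_{n=0}^{\infty} 2^{-n} \;=\; 2 \ \text{seconds.}
\]
% If some instance fails, that gets flagged at a finite stage; if no flag has appeared
% after 2 seconds, every instance holds.  Either way the machine reports the truth value
% of C = \forall n\,\varphi(n) after a finite wait.
```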

Clearly, the last option is going to be unpalatable, because this will have repercussions throughout our entire web of belief. Ordinarily, we would use this experiment to reject our physical theory (as we reject the theory that a calculator is built in a certain way, if it tells us that 3+5=7), but in this case it seems that the justification for A is likely to be much weaker than that for PA, and possibly even weaker than that for our physical theory. In that case, the right thing to do is reject A.

Resnik suggests that then, for purposes of minimal mutilation of our overall theory, we should perform a Euclidean rescue on PA+A, and say that it’s not meant to be a theory of the actual notion of number, but just some other structure (say, “schmumber”). That’s fine as far as it goes, but it doesn’t prevent us from having to take back many of the other empirical consequences of A that we derived earlier – all of those were derived from A’s role as an axiom about number, not schmumber. So performing the Euclidean rescue doesn’t save us from having to revise large parts of our web of belief. Of course, revising the physical theory might be just as bad (especially if it’s a fairly far-reaching theory, and the number-theoretic statement is of relatively limited application), so we’ll have to make drastic changes of this sort no matter what choice we make. But I’m just suggesting that Resnik is wrong to say that the Euclidean rescue can prevent most of this revision.

“Because one can always save a consistent branch of mathematics via a Euclidean rescue (and we have assumed that our mathematicians have excluded this), for them to reject the axioms of ZFC+A would be to take them to be inconsistent!” (p. 134) He’s right that we probably won’t want to say just because of this calculation that PA+A is inconsistent. But saying that it is consistent but false is not the same as attempting a Euclidean rescue – we don’t automatically have a structure that realizes the axioms, the way we actually did in the Euclidean case as a subset of physical space.

His statement that a Euclidean rescue is possible for any consistent theory presumes Gödel’s completeness theorem, which guarantees that every consistent theory has a model. If ZFC is true (or a certain fragment of it at least) then we have the completeness theorem. In fact, even if ZFC is false as a set of statements about sets (the way four dimensional Euclidean geometry is false as a set of statements about actual space-time), but we have performed a Euclidean rescue on it to say that it talks about some domain other than the “actual sets”, then we can use the rescued completeness theorem to give us a domain (not necessarily among the actual sets) for any consistent theory we want to talk about.

But this means that if some piece of empirical evidence were to challenge ZFC (or some principle of the weaker system that proves the completeness theorem), we wouldn’t immediately be justified in performing a Euclidean rescue on it, the way we can be for another theory in the context of ZFC. We would either have to reject it in favor of some theory that proves ZFC has a model, or else we would have to have some other reason to believe that a structure supporting the rescue exists.

One reason Resnik might presuppose all of ZFC is that it seems to be necessary for an adequate (Tarskian) account of first-order logical validity. But in the very difficult section 8.3 of his book, he endorses something like a relativist view about logic. (Not that the truth of logical sentences is relative, but merely their status as logical truths, rather than other sorts, is relative.) I’ll have to read this section again more closely, and I’d be glad if any readers could help clarify it for me. But at any rate, it doesn’t seem clear to me that a role in explicating logic would make ZFC indispensable on Resnik’s account, so it’s not clear how he gets ZFC off the ground to perform his Euclidean rescues later on. (Unless he just means that for us (ie, people who believe in ZFC) a Euclidean rescue is always possible for mathematical theories.)

The only way a Euclidean rescue can prevent large-scale mutilation of our web of belief is if we can find a structure closely related to the original one, for which the theory actually is true. But the only way to be sure we can always provide a Euclidean rescue seems to be through the completeness theorem, which doesn’t guarantee that the new structure is at all related to the old one. So his two uses of Euclidean rescues seem to work at cross purposes with one another.





Philosophy Blogs

9 01 2006

I’m participating in a session on blogging in philosophy at the Pacific APA in March, with Brian Weatherson, Gillian Russell, and Jon Kvanvig, so I spent a large part of this weekend glancing through the blogs on Dave Chalmers’ list. Sorting them by date created, I noticed some interesting patterns – for instance there were big bursts of new blogs formed in May/June 2004, and January 2005 (including this one). The May/June 04 burst is when most of the group blogs were formed (though a few of them really only have one person who does almost all the posting), and is also the time when topically focused blogs (like basically all the ones in the sidebar here) started becoming common. It looks like December 2005 may also have been a burst of new blog formation, but it remains to be seen how many of them last and how many just haven’t been noticed by Dave Chalmers (or any of the blogs he does list) yet.

Anyway, I discovered some other interesting links:

Someone else has discovered the Frege hair coloring salon, which I meant to take a picture of last time I was visiting near UCLA.

The list of papers for the USC/UCLA Graduate Conference is up. (In addition to me on this blog, you can find Neil Tognazzini at The Garden of Forking Paths and Aidan McGlynn at The Boundaries of Language. I don’t think the other three have blogs.)

David Corfield has a blog! Somehow I didn’t notice this (and neither has Dave Chalmers yet), but I’ll certainly have to start reading it. (And maybe this means I’ll finally get around to reading his book, which I’ve been meaning to read for almost two years, since a friend pointed me to John Baez’ review.)





“As If” Theories and the Challenge of Approximation

6 01 2006

Most people (except for extreme scientific anti-realists) say that a theory of the form “There are no Xs, but everything observable is just as it would be if there were Xs” is bad. As I mentioned before, Cian Dorr would like to give a fairly novel explanation of why these theories are bad, but here I’m going to try to focus on what I take to be the more traditional account, and its application to the debate about mathematical nominalism. In particular, one objection to fictionalism about mathematical entities that I have seen mentioned in the work of Mark Colyvan, Mike Resnik, and possibly Penelope Maddy, is that not only do we need the claims of ZFC to be true for our best applied theories, but we also need them to be true even to use physical theories (like Newton’s) that we take to be false. I will discuss this objection towards the end, but first I will return to the instrumentalist move, saying that things behave just as if there were Xs, even though there aren’t.

The point seems to be one about inference to the best explanation. If things look just as if there were Xs, then one explanation would be if there actually were Xs. However, the “as if” style theory explicitly denies this explanation, without giving another one. Therefore, in one sense it’s not a theory, but rather the data that the theory needs to explain.

However, such data often can be explained. An idealized Adam Smith in an idealized free-market world might have observed that prices generally stay close to levels as if they were set by some “invisible hand” observing the needs of society at large. However, there are decent reasons to believe there is no actual invisible hand, so Adam Smith sought another explanation, and found one in the mechanisms of competition for both supply and demand.

One might try to be purely instrumentalist about the material world, saying there are no material objects, but things appear just as if there were. In particular, I might say “there is no strawberry in front of me, but it looks just as if there were”. However, while the instrumentalist might want to say this always, even the realist says this on certain occasions, when a mirror of a certain type is used. There is no strawberry there, but because there is one three inches below that spot, and the mirror is curved in exactly the right way, and you’re looking at it from an angle that is high enough for your line of sight to intersect the mirror in that place, and low enough not to see the actual strawberry, it looks just as if there were a strawberry there. The advertisement claims the images “defy, yet demand explanation” – it’s true that they demand explanation, but they don’t defy a suitably optically-informed explanation. At any rate, there is a clear contrast between the case of such an illusion and the ordinary cases of seeing an actual strawberry. Realists can make sense of and explain this contrast, but instrumentalists have to be a bit more careful. (I’m sure that it’s possible for them to cash out just what it means to look like there’s a strawberry there without really looking like there’s a strawberry there, or something, but it’ll be more complicated.)

There seems to be no reason why one couldn’t be a global instrumentalist about everything (except maybe sense data, or something of the sort), but at intermediate levels, it seems that one really does need an explanation. Hartry Field, in Science Without Numbers attempts to do something like this for a Newtonian universe – he can explain why everything acts just as if there were the real numbers and continuous functions that Newton talked about, even though all there actually is is just regions of space-time with various three- and four-place betweenness and equidistance properties. He still needs to help himself to some fairly strong logic (a quantifier saying “there are infinitely many Xs such that…”), but it’s a nice development.

More simply, a nominalist can explain why our counting practice works the way it does, just as if there actually were abstract entities known as numbers, even though there aren’t any (according to the nominalist). This explanation would point out the isomorphism between the counting process and the successor operation. It would point out that for any particular application of counting, a non-standard semantics can be given for the numerical terms on which they denote the objects counted rather than numbers. And it would point out that every particular numerical statement can be translated into a statement with numerical quantifiers, which can be translated in terms of existential and universal quantifiers, connectives, and identity.
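
(The last translation step is the familiar treatment of numerically definite quantifiers; for instance, “there are exactly two apples” goes over into a statement using only quantifiers, connectives, identity, and the predicate “is an apple”, with no term denoting a number.)

```latex
% "There are exactly two apples", with A standing for "is an apple":
\[
  \exists x\, \exists y\, \bigl( Ax \wedge Ay \wedge x \neq y \wedge
    \forall z\, ( Az \rightarrow (z = x \vee z = y) ) \bigr)
\]
% No numeral, and no quantification over numbers, appears in the official statement.
```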

However, even if this project of nominalizing our best scientific theory succeeds, realists might object that we would still need mathematics to be true to explain our practices of approximation. For instance, say that the population in any year of some herd of bison is given by a difference equation in terms of the populations in the previous few years. In many cases, the behavior of difference equations is hard to predict, but they can be usefully approximated by differential equations, where exact solutions can often be found. Thus, we might use a differential equation to model the population of the bison, even though we don’t believe that the population actually increases between breeding seasons, and we believe that the population is always integer-valued rather than real-valued as the differential equation requires. The realist can explain why the differential equation is a good approximation by pointing out all sorts of mathematical theorems about the systems of equations involved. However, even if the nominalist has managed to nominalize away all talk of numbers and equations in using the difference equation, she will have trouble explaining why the differential equation is a good approximation. She can’t appeal to facts about the mathematical structures denoted, because she says there are no such structures. And she presumably can’t nominalize the differential equation in any nice way, because it refers to fractional bison, and bison born at the wrong time of year. Instead, she’ll have to fall back on something like the instrumentalist position and say that the differential equation is not correct, but it makes very good predictions, and she can’t explain why.
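
(To make the approximation point vivid, here is a toy sketch, entirely my own illustration with made-up numbers and a one-step growth model rather than anything from the literature, comparing a simple difference equation for the herd with the differential equation that approximates it.)

```python
# Toy comparison of a difference equation and its differential-equation approximation.
# The growth model and all parameter values here are hypothetical illustrations.
import math

r = 0.05      # yearly growth rate (made up)
p0 = 200.0    # initial herd size (made up)

def difference_model(n):
    """Population after n breeding seasons: P_{k+1} = P_k + r*P_k, so P_n = p0*(1+r)^n."""
    return p0 * (1.0 + r) ** n

def differential_model(t):
    """Continuous approximation: dP/dt = r*P, so P(t) = p0*exp(r*t)."""
    return p0 * math.exp(r * t)

for n in (10, 25, 50):
    diff = difference_model(n)
    cont = differential_model(n)
    gap = abs(diff - cont) / diff
    print(f"year {n:2d}: difference = {diff:7.1f}, differential = {cont:7.1f}, relative gap = {gap:.2%}")
```

(The realist explains the small gap by citing theorems such as the fact that (1+r)^(t/r) tends to e^t as r shrinks; the nominalist’s problem, as above, is saying what such a theorem is about.)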

The only remotely promising move I see at this point is to say that “according to the fiction of mathematics, the differential equation and the difference equation will always make very similar predictions”. However, if mathematics isn’t actually true, it will be hard for her to explain why it is still correct when it says that two theories make similar predictions. In a sense, this is the same problem that Field ran into in expressing the claim “mathematical physics is a conservative extension of nominalist physics”, which is what justifies the practice of using mathematical physics to make predictions even though it is not literally true. Except here, we have to deal not only with conservative extensions, but with good approximations that aren’t conservative.

It would be very unappealing to end up in the situation where mathematics was unnecessary to make correct predictions, but necessary for various methods that give approximate predictions. (Some examples other than differential equations involve frictionless planes, infinitely deep oceans, light that travels infinitely fast, and the like.) In that case, the indispensability argument would only apply through our practice of approximation, and not through actual science. At this point, I think more people would be willing to bite the nominalist bullet and say that we’re not really justified in using our approximations, but it would still be an odd situation.

Fortunately, most of science remains un-nominalized, and the people that think we can some day nominalize most of it will probably believe that we can nominalize our approximation methods in some way too. It’s just an extra challenge to meet.