## What Sorts of Proofs do Mathematicians Want?

30 11 2005

My advisor, Branden Fitelson, recently pointed me to a couple papers by Don Fallis about standards for proof in mathematics. In “Intentional Gaps in Mathematical Proofs” he points out that mathematicians very often (perhaps even almost always) publish results with only incomplete proofs. Even ignoring the fact that most of these proofs couldn’t be translated into a formal language in any reasonable way, people occasionally miss steps and give genuinely incorrect proofs, but also often intentionally leave out many steps. The latter occurs whenever words like “as one can easily see” or “as the reader can verify” occur, but also when appeal is made to a generally known theorem, especially “folk theorems” that are known by specialists but have never been proven (correctly) in print. These and related practices are generally accepted, though in some cases (like the computer proof of the four-color theorem, or the massively collaborative proof of the classification theorem for finite simple groups) there is some controversy because no human has verified or could verify the whole thing. In fact, skipping steps in a proof is not just tolerated, but strongly encouraged, because it helps clarify the structure of the whole argument, and prevents the reader from getting bogged down in unilluminating details that take more time than insight to verify.

However, though mathematicians accept incomplete deductive proofs, they don’t accept probabilistic proofs. In “What Do Mathematicians Want?: Probabilistic Proofs and the Epistemic Goals of Mathematicians”, Fallis argues that there is no good reason for this prejudice. Any epistemic goals that deductive proofs can satisfy can also be satisfied by some probabilistic proofs, and any goal that rules out probabilistic proofs would rule out many seemingly acceptable deductive proofs as well. In particular, certainty is not better served by requiring deductive proofs – after all, it is far more likely that the several hundred (several thousand?) pages of dense, specialized work that together proved the classification of finite simple groups contain some mistakes, than that the two hundred-digit numbers certified as prime by the probabilistic tests used in RSA key generation are actually composite. There is a possibility of error in both cases, and the probabilistic proof has one more potential source of incorrectness, but since the overall proof is much shorter, the possibility of error in the long and complicated deductive proof is greater.
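To make the RSA comparison concrete, here is a sketch (my own illustration, not anything from Fallis’s paper) of the standard Miller–Rabin primality test, the kind of probabilistic test used when generating RSA keys. The key fact is that a composite number survives any single round with probability at most 1/4, so the chance of error after k rounds is at most (1/4)^k:

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test. A composite n passes each round
    with probability at most 1/4, so after `rounds` rounds the chance
    that a composite is wrongly declared prime is at most (1/4)**rounds."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 as 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True  # n is prime with very high probability
```

With 20 rounds, a false “prime” answer occurs with probability at most 4^-20, roughly 10^-12 – which is Fallis’s point: that bound is far tighter than our confidence in any hundred-page deductive proof.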

I think Fallis overstates some of his points in this paper – a deductive proof can be rechecked many times, decreasing the likelihood of error arbitrarily far; for a probabilistic proof, to decrease the likelihood of error, one must do more work beyond verifying the parts of the proof that are already present (assuming the proof is complete, contra the other paper). This fact seems potentially relevant. In addition, extremely long and complicated deductive proofs are often considered questionable, like the proofs of the four-color theorem and the classification of finite simple groups. It’s true that they have been accepted while probabilistic proofs have not, but for an already-established result, an exceedingly long and complicated deductive proof is no more likely to be published than a probabilistic one. When mathematicians seek new proofs of old results, they want the new proof to somehow be enlightening, revealing some new connection, or avoiding some complicated technology used in the old proof. Or the new argument should provide a better explanation, or suggest new generalizations, or apply new technologies from some other field. Probabilistic proofs are unlikely to do any of these, except avoid complicated technologies. So the choices of mathematicians may be rational after all.
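The contrast can be put in a simple idealized model (my own gloss, assuming checks are independent, which real rechecking surely isn’t quite): if each independent check of a deductive proof misses a given error with probability $p$, then after $k$ checks

$$P(\text{undetected error}) \le p^{\,k},$$

and this can be driven down just by rereading the same pages. For a probabilistic proof, the analogous bound

$$P(\text{error survives } k \text{ rounds}) \le (1/4)^{k}$$

shrinks just as fast, but each round requires a fresh computation rather than reinspection of work already done – which is the asymmetry I’m pointing to above.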

However, I think Fallis’ apparent program to point out some fallibilities in existing epistemological practices in mathematics is quite interesting. Mathematicians would do well to reconsider whether or not they really do or should require complete deductive proofs of their results. They simultaneously seem to have higher and lower standards than their own rhetoric indicates they should.

## Explanations in Fiction

28 11 2005

In Tony Martin’s paper, “Evidence in Mathematics” (in Truth in Mathematics, edited by Dales and Oliveri), he gives arguments that one should adopt axioms well up the large cardinal hierarchy (countably many Woodins, I believe) because they provide a good explanation for various facts that we can already observe from ZFC. This is because they are (approximately) equivalent to projective determinacy, which states that every “projective” set of real numbers has various nice properties. The investigation of these properties led to new unifying results in recursion theory, stating that various sets of Turing degrees contain cones, and that the Wadge degrees have a particular nice structure up to a very high level. Results related to these properties are now important in recursion theory and wouldn’t have been discovered without the axiom, and every particular consequence of these results that has been considered has in fact been verified directly from ZFC (though often in more difficult ways). Since projective determinacy is known to be independent of ZFC (if consistent), it seems that we need to postulate it in order to properly explain the phenomena we can already observe, just knowing ZFC.

Based on arguments like this, it seems that Quine misstated the naturalist position, when he said that it should tell us to adopt ZFC+V=L. His reasoning was that ZFC should be accepted because it is an indispensable part of our scientific explanations of the world. However, the only particular sets that are indispensable in these explanations are all constructible, so those are the only ones whose existence we should countenance (all the other sets seem to be in some sense idle, like the angels that make sure gravity keeps doing its job, and the elves that make quarks obey the strong nuclear force). Thus, we should believe that the constructible sets are all that exist, so V=L.

But V=L is incompatible even with fairly weak large cardinal axioms (the existence of a measurable cardinal), and therefore with the stronger axioms advocated by Martin as part of an explanation of what’s going on with sets of Turing degrees and such. So I think Martin’s argument suggests that we do in fact have evidence for sets beyond L. This evidence may not be based directly in the physical world, but if Quine is serious about his holistic picture of science, then ZFC is just as much a part of science as relativity, and just as ZFC is justified because we need (large parts of) it for relativity (and just about every other scientific theory we’ve ever considered), and relativity is justified because it gives the best explanations of our observations, it seems that projective determinacy is justified because it gives the best explanations of phenomena in ZFC.

So if we believe the indispensability theorist, then we really should believe most of the large cardinal axioms, and not just ZFC. I mentioned this point in passing in a recent post.

However, I think most of this will be able to go through for the fictionalist just as well as for the indispensability-argument realist. If ZFC isn’t actually indispensable for our science, but is still quite useful, then someone like Hartry Field is willing to accept it at least as a good story, even if not literally the truth. But once we’re considering the story, I think we should adopt projective determinacy within the story as well. It seems to me that what is true in a fiction is not just what the author has literally asserted; further facts may be true in it as well, if they provide good explanations for what the author has in fact asserted. For instance, in a detective novel with a stupid detective, there may be enough clues presented for the reader to find out who did it, even if the detective never does and the author never explicitly says who did. And in a movie, it may become clear that a certain scene was actually a dream and not reality, because that’s the best way to reconcile it with the rest of the characters’ actions and desires. I think the audience discovers these facts in just the same way that we use inference to the best explanation in science (and our ordinary lives). Such inferences are always defeasible (we may find a better explanation, the author may explicitly deny the truth of the inference, further evidence may count against the inference, etc.) but it seems plausible that they are always active, whether in fiction or reality.

Therefore, I think that the fictionalist is just as justified in ascending the large cardinal hierarchy as the indispensability theorist, and both of them are in fact justified. Penelope Maddy is worried that they might not be, because of Quine’s argument I’ve paraphrased above (I’ve paraphrased that argument from my memory of Maddy’s paraphrase of it, so I may have misrepresented one or both of them through an inaccurate memory). This worry is a large part of what drives her to her position in Naturalism in Mathematics, but I think it is unjustified. Both the fictionalist and the Quinean naturalist should accept large cardinal axioms, just as Maddy believes set theorists should.

## Statistics

26 11 2005

This post has no philosophical (or really mathematical) content – hopefully I’ll soon have a finished draft of the talk I gave last week to the Berkeley math grad students on forcing and the Continuum Hypothesis. It’s got no original material, but I think it should be a fairly accessible introduction (well, as accessible as it’s going to get), and I don’t know of any that exist elsewhere. If anyone else knows of one, let me know! I’ll put mine up soon once I’ve proofread it and such.

Also, I started using Google Analytics to check my viewing stats, and they’re finally available. I’m getting far more views than I expected – 51 on Monday, despite no new content in over a week! In addition to the unsurprising clusters in college towns, like 5 from Stanford, 4 in Ann Arbor, and 3 from Melbourne, there’s 3 from Iasi, Romania, and several from South America. I’m guessing the latter is because of a link from this blog by Juan de Mairena – unfortunately I don’t read enough Spanish to understand much of what’s going on there, or why he linked to all the logic blogs I know of, plus one in Spanish (which seems to have interesting liar- and Yablo-type paradoxes).

I suppose it might be a bit spooky that I can see how many visitors I get from various locations, and from which referrers, etc. But I think most bloggers have some way of checking this – I just hadn’t signed up for such a service until now, and I might be able to get a bit more info about readers because I have so many fewer than some of the bigger blogs.

Anyway, real content soon, I promise.


## To Know or to Do?

9 11 2005

Yesterday I was explaining to a chemist friend just what sorts of questions philosophers of physics, biology, and math are interested in, and we were speculating about what philosophers of chemistry might work on. (I had just found Synthese’s June 1997 special issue on philosophy of chemistry, but hadn’t read any of the articles yet.) It became clear in our discussion that he saw the primary goal of science as enabling us to do useful things, while I had always seen the goal as enabling us to understand how the world works.

Of course, it’s clear that having either as a fundamental goal licenses the other as an instrumental goal – it’s generally hard to change the world without having any understanding, and hard to understand the world without using various aspects of technology to change small parts of it. There does seem to be an ordinary language distinction between science and technology, in which science focuses on understanding and technology focuses on acting. But it’s also probably true that this distinction is overstated – it’s likely that large numbers of scientists see each of “understanding the world” and “making the world a better place” as their primary goal, and an even larger group might say that it’s some combination of the two. So we can’t just ask the scientists which is more important.

Arguing in favor of the understanding side, it seems to be a very (scientifically) unsatisfactory situation when pharmacologists are able to provide medicines that treat various conditions, even though they have no understanding of the underlying mechanisms. If we compare this situation to the converse, we see that in mathematics, it’s a perfectly normal (and not distressing) situation to develop understanding of some system without thereby increasing our practical powers. But the defender of the practically-based picture of science might respond that math is a non-representative case, and point out that a large part of the string theory controversy is exactly about the fact that string theory may explain the world, but it doesn’t help us do anything. It might just be a prejudice of philosophers to say that understanding is the more fundamental goal of inquiry, and ability is only secondary – after all, in our profession, epistemology is central, while philosophy of action and even ethics are somewhat secondary.

Of course, to switch from a view of science as aimed at explanation to a view of science as aimed at practical results would mean a radical change in a more pragmatist direction. But it’s not clear just how we can argue that such a shift would be wrong.

In unrelated news, Kansas has changed the definition of science “so that it is no longer limited to the search for natural explanations of phenomena.”

## Disjunctive Justifications for Mathematics

7 11 2005

One of Penelope Maddy’s main reasons for objecting to the indispensability argument in her book Naturalism in Mathematics is that it seems to make mathematics too contingent – as she says, if indispensability were our grounds for believing in the axioms, then set theorists arguing about large cardinal axioms should be paying attention to quantum gravity and other cutting-edge physics to see what sorts of math are indispensable for it. And more importantly, she thinks that Quine has shown that indispensability arguments only get us ZFC and not the further large cardinal axioms – and that in fact we are limited to V=L, which is incompatible with most of the larger axioms that set theorists emphatically want us to adopt (and which she thinks we have good mathematical reason to adopt).

However, it seems that a Fieldian nominalist has an easier time justifying our mathematical practice, if the program can ever be made to succeed. The goal is to show that mathematics is actually dispensable (and thus the entities it appears to talk about don’t actually exist) using the Fieldian strategy of giving an attractive nominalistic physical theory that the platonistic theory conservatively extends. If this can be done, it undercuts mathematics in one sense, by saying that it is not literally true. But it supports it in another (perhaps more important?) sense, by showing that it’s a perfectly useful way to talk that (while not itself true) will help us get to the truth more easily in the domains we’re actually concerned with, namely the physical.
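Field’s key notion of a conservative extension can be stated precisely (this formulation is standard, though the symbols are my own shorthand rather than Field’s): adding the mathematical theory $M$ to a nominalistic physical theory $T$ proves no new nominalistic claims. That is,

$$\text{for every sentence } \varphi \text{ in the nominalistic vocabulary:}\quad T + M \vdash \varphi \;\Longrightarrow\; T \vdash \varphi.$$

So $T+M$ can make derivations shorter and smoother, but any purely physical conclusion it yields was already a consequence of $T$ alone – which is exactly the sense in which the mathematics is useful without being indispensable.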

Thus, the problem for the Quinean realist justification is that we need to show that the entities quantified over in mathematics are indispensable for our scientific theories in order to justify our mathematical talk. The problem for the Fieldian nominalist justification is that we need to show that these same entities are dispensable in our scientific theories. Thus, Maddy rejects both attempts at justification, which are based on reading the indispensability argument in opposite directions, and instead suggests that mathematics needs no external justification, just as most naturalist philosophers think about science as a totality. However, I think that Maddy’s move is unnecessary, and that we may even be able to put together these two justifications to show why!

#### Combining Platonistic and Nominalistic Justifications

First, note that the important trouble steps in these approaches are nicely complementary. However, they aren’t negations of one another. In one case, we need to show that the only (nice) physical theory that explains all our data is a theory that includes mathematics. In the other case, we need to show that there is a nice theory that explains all the data and includes the mathematics, and there is also a nice theory that explains all the data that doesn’t include the mathematics. So to get my proposed disjunctive justification off the ground, we’ll need to show that there is a nice physical theory including the relevant mathematics that explains all our data. Fortunately, for any theory up to the strength of ZFC, we’ve got such a theory. (In another post I’ll mention why I think going further up isn’t such a problem. For now, I think it’s good enough to observe that as long as we think the large cardinal axioms are at least consistent, then we can extend our theory to one including all of them.)

So let E be the set of all our relevant evidence, let M be the relevant mathematical theory, and T+M be our nice theory that currently explains all of E. At this point there are two possibilities – either every nice theory that explains E includes M, or there is some alternate theory T’ that doesn’t include M that explains E equally well. (For now let’s assume that T’ is a nominalistic theory referring only to entities also referred to in T+M.) In the first case, the indispensability argument suggests to us that M is in fact true, and thus refers to a class of existing entities, and is therefore a justifiable part of our scientific discourse. (Never mind that those entities may well be acausal, atemporal, or whatever – they’re indispensable for our science, so we know about them just as we know about quarks.) In the second case, the indispensability argument suggests that M is in fact false, and there are no entities of the type it refers to. However, following Field, we can still use M in our scientific reasoning, because it is part of T+M, which is just as good a way of making predictions about E as T’ is. In either case, M is justified as part of our scientific reasoning, so Maddy needn’t be concerned.

#### Some Generalizations

If E is a complete theory, then both T’ and T+M will be conservative extensions of it, so we’ll be in exactly the situation Field takes himself to have given us for Newtonian gravitation. Of course, E is our set of actual observations, so it won’t be complete, but there’s a sense in which this doesn’t matter. Alternative scientific theories don’t have to agree with our current ones in every prediction – they just have to be equally good at explaining our data. (In fact, they don’t even necessarily have to be equally good at all of it – if one theory does a better job of explaining some data, and the other theory does a better job on a different set, then both might be useful theories.) So in a sense, Field might be aiming too high when he aims for conservativity of mathematical theories over nominalistic ones. All he needs is something more like empirical and explanatory adequacy. I think he comes around to a position like this in his 1985 “On Conservativeness and Incompleteness” where he suggests that it might be ok for the nominalistic theory to miss out on some translations of Gödel sentences – these are unlikely to appear in the data, so they aren’t a good reason to decide between two theories that differ severely in their ontological virtues.

Now, let’s note that once we know that T+M exists, we don’t need to know anything about what T’ is, or even whether it exists. In the ideal situation (which Field approximately gives us for Newtonian gravitation and calculus, and also sketches for a very general theory of counting medium-sized dry goods and the natural numbers) we know exactly what T’ is, and that T+M is a conservative extension of it. But even if we don’t know it to be conservative, we’re justified in using M either way. If we even know it not to be conservative, we may then be able to empirically test which theory’s predictions are correct – but until then, they both explain our current data equally well. But even if we don’t know whether such a T’ exists, we’re justified by the existence of T+M in using M.

The only way we can lose this justification (without simultaneously replacing it by a Fieldian one) is by coming up with some other theory U that does a substantially better job of explaining E, and doesn’t contain M. Since this theory is better than T+M, it would undermine our indispensability justification. But if it doesn’t contain M, then we’d need to show that U+M was a conservative extension of U (and a useful one) in order to get a Fieldian justification.

Field (early in Science Without Numbers) claims to have an argument that mathematical theories are conservative extensions of any physical theory (though not necessarily useful extensions). But even ignoring this claim, I find it hard to imagine that we will find a useful theory to explain the world that neither includes (some substantial fragment of) ZFC nor has a useful conservative extension including it. This is even less plausible if we’re talking about the theory of real-valued functions. And I venture to say that it’ll be impossible to describe the world in a way that wouldn’t be usefully and conservatively supplemented by Peano arithmetic. So whatever the status of our current theories, I think this disjunctive justification will let us use ZFC in good conscience, or at least the theory of real-valued functions, and certainly PA. And once we’ve got these axioms, I think we can get all the way up to where Maddy wants us to be, as I’ll show in a later post.

So Maddy really has no reason to be concerned about indispensability arguments depriving us of mathematics. Field has shown how to convert indispensability refutations into alternative justifications for mathematics, showing why the minor amount of empiricism the indispensability argument brings to mathematics is so utterly invisible to us.