I just read a very interesting paper by Tyler Burge on computer proofs (“Computer Proofs, Apriori Knowledge, and Other Minds”, Phil. Perspectives 12, 1998). I suspect that many mathematicians, as well as philosophers, will find the paper quite interesting. His major claim is that, contrary to most assumptions, computer proofs can in fact give mathematicians a priori knowledge of their theorems.
An important question here is just what it means for knowledge to be a priori. Burge defines the notion minimally: knowledge is a priori if it doesn’t depend for its justification on any sensory experience – however, he allows that a priori knowledge may depend for its possibility on sensory experience. This account allows the knowledge that red is a color to count as a priori, even though having this knowledge requires sensory experience of red in order to have the concepts needed to even formulate the idea. Burge’s main goal here is to defend a sort of rationalism – the claim that a lot of a priori knowledge is possible – so it’s important that his account of the a priori is broader than one might initially think. One might worry that this waters down the notion of the a priori so much as to make this form of rationalism uninteresting, but I think it still gets at some interesting distinctions. For instance, his account will end up saying that almost all mathematical knowledge is a priori (or at least, can be), while very little knowledge in the other sciences can be. This may be very interesting for those wanting to claim a principled difference between mathematics and the other sciences. (For those of you familiar with the talk I’ve been giving recently about probabilistic proofs, I suspect that in addition to a priority, the notion I call “transferability” gives mathematics a lot of its specific character, but that’s a different issue.)
The biggest premise that Burge asks us to grant is that most ordinary mathematical knowledge that doesn’t rely on computer proofs is itself a priori. In order to grant this, we have to grant that testimony can preserve a priority, since very many mathematical proofs depend on theorems or lemmas that the author has never worked through, but believes just on the basis of testimony (having skimmed a proof, or even just read a result published in another paper). For an extreme instance, consider the classification of finite simple groups – no one has worked through all the steps in the proof, yet it seems at least plausible that our knowledge of the result is a priori in some interesting sense. The sense Burge suggests is this: although a mathematician may depend on testimony for her knowledge of the theorem, the actual steps in the justification of the result were a priori for whoever carried them out.
Burge needs to do a lot of work to show that this transfer of knowledge through testimony can preserve a priority – he ends up showing that we can often get a priori justification, by the same means, for the claim that there are other minds. He suggests that the very fact that some part of the world appears to express a propositional content gives us some defeasible a priori reason to believe the content of that claim, and also to believe that some mind somewhere is responsible for that content, even if at some remove. (That is, although books and computers can display things expressing propositional contents despite lacking minds themselves, the only reason they normally succeed in doing this is that some mind somewhere gave them this capacity, either by directly putting the symbols on them or by causing them to shuffle symbols in intelligible ways. Although you often get significant and meaningful patterns in nature, it’s exceedingly rare that something natural appears to express a specific proposition. See Grice for more on this.)
Once we’ve got this idea that testimony can preserve a priority, it becomes more plausible to think that computer proofs can generate a priori knowledge. However, Burge still has to do a lot of work to argue that we have an undefeated warrant for believing the claims of the computer. Clearly, in many cases where someone utters a sentence, we have strong reason to doubt the truth of that sentence, unless we have positive reason to believe that the person is a very skilled mathematician. In the case of something like Fermat’s Last Theorem, it seems that even that is insufficient (after all, even Fermat’s word on the theorem really doesn’t seem like sufficient grounds for knowledge). Burge needs to do some fancy footwork to argue that the means by which we build up our trust in a source of utterances can itself be a priori, since it depends only on the success of apparent utterances, and not on the fact that the source of the utterances really is as it appears to be. (It doesn’t matter to me whether Andrew Wiles (whom Burge embarrassingly refers to as Michael Wiles!) is a real person, a computer, or a simulation in the Matrix – he’s done enough to prove that he’s a reliable source of knowledge of complicated mathematical statements of certain forms.)
I think in the end, Burge spends a lot of time focusing on the wrong sorts of support for the reliability of a person or computer who claims to have proved something difficult. He mostly focuses on the fact that this source has gotten many other difficult statements correct before. I think in actual mathematical practice the more important criteria are that the outline of the strategy used looks much more promising than previous attempts at the problem, and that the source has given good indication of being able to carry out particular sub-parts of the proof. Burge does deal with this sort of justification, but not as centrally as one might wish.
So I think Burge has come up with some interesting arguments that computer proof preserves the fact that most mathematical knowledge is a priori, even though I think he makes some errors of focus and some errors about particular claims of mathematical history. I think his defense of computer proof also still allows that other sorts of computational arguments (DNA computing, for instance) really don’t preserve a priority. After all, in these cases, the computer isn’t going through something like a reasoning process, but rather is doing something more like the observation of an experiment. The way that most ordinary computers process data still shares abstract features of reasoning that are important for a notion of the a priori, in ways that DNA computing doesn’t. (If we didn’t grant this, then it might seem that there’s no such thing as a priori knowledge at all, since we always in some sense rely on the physical reliability of our own brains – he gives some good arguments for why we should dismiss this worry.)
I think this sort of epistemology of mathematics is probably of much more practical interest for mathematicians than the standard questions of ontology and logic that more traditional philosophy of mathematics deals with.