## Probabilistic Proofs of Undecidable Sentences?

22 04 2009

As I suggested (though perhaps not as clearly as I would have liked) in my forthcoming paper “Probabilistic Proofs and Transferability”, I think that probabilistic proofs can be used to provide knowledge of mathematical facts, even though there are reasons why we might think they should remain unacceptable as part of mathematical practice. (Basically, even though mathematicians gain knowledge through testimony all the time, there may be reasons for a methodological insistence that no mathematical argument rely essentially on testimony, the way probabilistic proofs (just like experiments in the natural sciences) do.) Given that this allows for a non-deductive means of acquiring mathematical knowledge, a natural question is whether probabilistic proofs can be used to settle questions that are undecided by standard axiom sets. Unfortunately, I think the answer is probably “no”.

The sort of probabilistic proofs that I endorse aren’t just any old arguments that aren’t deductively valid. In my paper, I contrast probabilistic proofs of the primality of a given number with certain arguments in favor of the Goldbach Conjecture and the Riemann Hypothesis, as well as lottery arguments. The primality arguments work by showing that if n is prime, then a certain result is guaranteed, while if n is composite, then the probability of the same sort of result is at most $1/4^k$, where k can be made as large as one wants by generating more random numbers. As a result the evidence “tracks” the truth of the claim involved (that is, the evidence is very likely if the claim is true and very unlikely if the claim is false).
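The standard example of such a test is Miller–Rabin. The following is a minimal sketch (one illustrative test with the $1/4^k$ error bound described above, not necessarily the exact procedure discussed in the paper):

```python
import random

def is_probable_prime(n, k=20):
    """Miller-Rabin test: if n is prime, every round passes; if n is
    composite, each random round passes with probability at most 1/4,
    so k rounds wrongly report "prime" with probability at most 4**-k."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = 2^r * d with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True  # prime, with error probability at most 4**-k
```

The tracking property is visible in the structure: a “prime” verdict is guaranteed for primes, and each extra random witness multiplies the chance of a misleading verdict on a composite by at most 1/4.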

In the case of the Goldbach Conjecture, one of the pieces of “probabilistic” evidence that we have is that none of the first several million even numbers is a counterexample to the claim. However, although this evidence is very likely if the claim is true, we have no good a priori bounds on how likely this evidence would be if the claim were false. Thus, the evidence may give us some reason to believe the Goldbach Conjecture, but it’s hard to see just how much more evidence we’d need in order for this to give us knowledge that the Goldbach Conjecture is in fact true.

In the case of the Riemann Hypothesis, I don’t have room (or the knowledge) to state all the relevant pieces of evidence, but in this case the evidence is more of the type of “inference to the best explanation”, but again lacks any sort of probabilistic bounds on the possibility of it arising in misleading cases. This is not to say that this type of evidence can’t provide knowledge – in fact, we may already have enough evidence to count as knowing that the Riemann Hypothesis is true, though I’m definitely not certain of this. At any rate, if this does provide knowledge, it’s of a different sort than probabilistic proof, and may well have its own problems for acceptability in the mathematical canon.

In “lottery cases”, we have strong probabilistic evidence for the claim that a particular ticket will not win (in particular, that this ticket is in a fair lottery, and that there are a billion separate tickets, exactly one of which is a winner). However, most people want to say that we don’t have knowledge of this claim. Again, at least part of the reason is that our evidence doesn’t track the truth of the claim – nothing about our evidence is sensitive to the ticket’s status, and the evidence is just as likely if the claim is false (that is, the ticket is a winner) as if it’s true (that the ticket isn’t a winner).

Therefore, it seems that for any sort of evidence that we’d count as a “probabilistic proof” that gives us knowledge of some mathematical claim, we’d want to have good bounds on the probability of getting such evidence if the claim were true, as well as good bounds on the probability of getting such evidence if the claim were false. At the moment, I can’t really conceive of ways of generating mathematical evidence where there would be clear bounds on these probabilities other than by randomly sampling numbers from a particular bounded set. (There may be possibilities of sampling from unbounded sets, provided that the sampling method doesn’t rely on a non-countably-additive probability, but it’s a bit harder to see how these sorts of things could be relevant.)

Now let’s consider some claim that is independent of Peano Arithmetic, and see if there might be any way to come up with a probabilistic technique that would give evidence for or against such a claim, that had the relevant probabilistic bounds.

Unfortunately, I don’t think there will be. Consider two models of PA – one in which the claim is true, and one in which it’s false. We can assume that one of these models is the standard model (that is, it just consists of the standard natural numbers) while the other is some end-extension of this model. If our probabilistic method involves generating natural numbers through some random process and then checking their properties, then this method will have the same probabilities of generating the same numbers if applied in either one of these models (since presumably, the method will only ever generate standard natural numbers, and not any of the extra things in the non-standard model). Thus, any result of such a test will have the same probability whether the statement is true or false, and thus can’t be sensitive in the sense demanded above.

Now, one might worry that this argument assumes that conditional probabilities will have to be probabilities given by considering a particular non-standard model. This clearly isn’t what we’re doing in the case of primality testing, however – we don’t consider the probabilities of various outcomes in a model where n is prime and in a model where n is not prime, not least because we have no idea what a model of the false claim would even look like.

However, I think this is what we’d have to do for statements that could conceivably be independent of PA. Consider any process that involves sampling uniformly among the integers up to k, and then performing some calculations on them. Knowing the truth of PA, we can calculate what all the possible results of such a process could be, and verify that these will be the same whether our independent statement is true or false. Since this calculation will tell us everything we could possibly gain from running the probabilistic process, and it hasn’t given us knowledge about the independent statement, the probabilistic process can’t either, since it adds no new information, and therefore no new knowledge.

If we allow for probabilistic processes that sample from all natural numbers (rather than just sampling uniformly from a bounded range of them) I think we still can’t get any relevant knowledge. Assuming that the sampling process must have some countably additive probability distribution, then for any ε there must be some sufficiently large N that the probability that the process never generates a number larger than N is at most ε. By the above argument, the process can’t tell us anything about the independent statement if it never generates a number larger than N. Therefore, the difference in probability of various outcomes given the statement and its negation can’t be any larger than ε. But since ε was arbitrary, this means that the difference in probabilities must be 0. Thus, the process still fails to meet the sensitivity requirement, and therefore can’t give knowledge about the independent statement.
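The tail-bound step in the argument above can be illustrated concretely. The snippet below (an illustrative sketch; the geometric distribution is just an assumed example of a countably additive distribution on the naturals) finds, for a given ε, an N with P(X > N) ≤ ε:

```python
# For any countably additive distribution on the natural numbers,
# the probabilities sum to 1, so for each epsilon there is an N
# with P(X > N) <= epsilon. This computes such an N.
def tail_cutoff(pmf, epsilon):
    total, n = 0.0, 0
    while 1.0 - total > epsilon:
        total += pmf(n)
        n += 1
    return n - 1  # P(X > n-1) = 1 - total <= epsilon

# Assumed example: geometric sampling, P(X = k) = 2^-(k+1)
geom = lambda k: 0.5 ** (k + 1)
print(tail_cutoff(geom, 1e-3))  # prints 9, since P(X > 9) = 2^-10
```

Since the process almost never samples beyond N, and samples below N carry no information about the independent statement, the total difference in outcome probabilities is at most ε, for every ε.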

Thus, we can’t get probabilistic information about independent claims. Of course, since many people will be willing to say that we have good reason to believe the Goldbach Conjecture, and the Riemann Hypothesis, on the evidence mentioned above, we may be able to get arguments of those sorts for independent claims. But such arguments aren’t really “probabilistic” in the sense standardly discussed.

Additionally, there are many claims independent of Peano Arithmetic that we can prove using standard mathematical means. Some examples include Goodstein’s Theorem, the Paris-Harrington Theorem, and Borel Determinacy. (The latter is in fact so strong that it can’t be proven in Zermelo set theory, which is ZF set theory without the Axiom of Replacement.) In fact, the arguments given above show that probabilistic methods not only fail to decide claims that are undecided in PA, but also fail to decide claims that are independent of substantially weaker systems, like Robinson arithmetic and probably even Primitive Recursive Arithmetic. Since nearly everything of mathematical interest is independent of PRA (basically all it can prove are statements saying that a recursive function has a specific value on specified inputs, like “376+91=467”), this means that probabilistic proofs really can’t do much. All they can do is give feasible proofs of claims that would otherwise take a very long time to verify, as in the primality of a particular number.

## Computer Proofs Give A Priori Knowledge

30 06 2008

I just read a very interesting paper by Tyler Burge on computer proofs. (“Computer Proofs, Apriori Knowledge, and Other Minds”, Phil. Perspectives 12, 1998) I suspect that many mathematicians, as well as philosophers, will find the paper quite interesting. His major claim is that, contrary to most assumptions, computer proofs can in fact give mathematicians a priori knowledge of their theorems.

An important question here is just what it means for knowledge to be a priori. Burge defines the notion as just stating that the knowledge doesn’t depend for its justification on any sensory experience – however, he allows that a priori knowledge may depend for its possibility on sensory experience. This account allows for the knowledge that red is a color to be a priori, even though having this knowledge requires having sensory experience of red in order to have the concepts required to even formulate the idea. Burge’s main goal here is to defend a sort of rationalism, which is the claim that a lot of a priori knowledge is possible, so it’s somewhat important that his account of the a priori is broader than one might initially think. One might worry that this waters down the notion of the a priori so much as to make this form of rationalism uninteresting, but I think it still gets at some interesting distinctions. For instance, his account will end up saying that almost all mathematical knowledge is a priori (or at least, can be) while very little knowledge in the other sciences can be. This may be very interesting for those wanting to claim a principled difference between mathematics and the other sciences. (For those of you familiar with the talk I’ve been giving recently about probabilistic proofs, I suspect that in addition to a priority, the notion I call “transferability” gives mathematics a lot of its specific character, but that’s a different issue.)

The biggest premise that Burge asks us to grant is that most ordinary mathematical knowledge that doesn’t rely on computer proofs is itself a priori. In order to grant this, we have to grant that testimony can preserve a priority, since very many mathematical proofs depend on theorems or lemmas that the author has never worked through, but believes just on the basis of testimony (having skimmed a proof, or even just read a result published in another paper). For an extreme instance, consider the classification of finite simple groups – no one has worked through all the steps in the proof, yet it seems at least plausible that our knowledge of the result is a priori in some interesting sense. The sense Burge suggests is this: although a mathematician may depend on testimony for her knowledge of the theorem, the actual steps in the justification of the result were a priori for whoever carried them out.

Burge needs to do a lot of work to suggest that this transfer of knowledge through testimony can preserve a priority – he ends up showing that we can often get a priori justification for the claim that there are other minds by the same means. He suggests that the very fact that some part of the world appears to express a propositional content gives us some defeasible a priori reason to believe the content of that claim, and also to believe that some mind somewhere is responsible for that content, even if at some remove. (That is, although books and computers can display things expressing propositional contents despite lacking minds themselves, the only reason they normally succeed in doing this is because some mind somewhere gave them this capacity, either by directly putting the symbols on them, or causing them to shuffle symbols in intelligible ways. Although you often get significant and meaningful patterns in nature, it’s exceedingly rare that something natural appears to express a specific proposition. See Grice for more on this.)

Once we’ve got this idea that testimony can preserve a priority, it becomes more plausible to think that computer proofs can generate a priori knowledge. However, he still has to go through a lot of work to argue that we have an undefeated warrant for believing the claims of the computer. Clearly, in many cases where someone utters a sentence, we have strong reason to doubt the truth of that sentence, unless we have positive reason to believe that the person is a very skilled mathematician. In the case of something like Fermat’s Last Theorem, it seems that even that is insufficient (after all, even Fermat’s word on the theorem really doesn’t seem like sufficient grounds for knowledge). Burge needs to do some fancy footwork to argue that the means by which we build up our trust in a source of utterances can itself be a priori, since it only depends on success of apparent utterances, and not on the fact that the source of the utterances really is as it appears to be. (It doesn’t matter to me whether Andrew Wiles (whom Burge embarrassingly refers to as Michael Wiles!) is a real person, a computer, or a simulation in the Matrix – he’s done enough to prove that he’s a reliable source of knowledge of complicated mathematical statements of certain forms.)

I think in the end, Burge spends a lot of time focusing on the wrong sorts of support for the reliability of a person or computer who claims to have proved something difficult. He mostly focuses on the fact that this source has gotten many other difficult statements correct before. I think in actual mathematical practice the more important criterion is that the outline of the strategy used looks much more promising than previous attempts at the problem, and the source has given good indication of being able to carry out particular sub-parts of the proof. Burge does deal with this justification, but in not as central a way as might be desired.

So I think Burge has come up with some interesting arguments that computer proof still preserves the fact that most mathematical knowledge is a priori, even though I think he makes some errors of focus, and some errors about particular claims in mathematical history. I think his defense of computer proof also still allows that other mathematical methods (like DNA computing, for instance) really don’t preserve a priority. After all, in these cases, the computer isn’t going through something like a reasoning process, but rather is doing something more like an observation of an experiment. The way that most ordinary computers process data still shares abstract features of reasoning that are important for a notion of the a priori, in ways that DNA computing doesn’t. (If we didn’t grant this, then it might seem that there’s no such thing as any a priori knowledge, since we always in some sense rely on the physical reliability of our own brains – he gives some good arguments for why we should dismiss this worry.)

I think this sort of epistemology of mathematics is probably of much more practical interest for mathematicians than the standard questions of ontology and logic that more traditional philosophy of mathematics deals with.

## An Economic Argument for a Mathematical Conclusion

27 04 2008

How valuable is an income stream that pays $1000 a year in perpetuity? Naively, one might suspect that since this stream will eventually pay out arbitrarily large amounts of money, it should be worth infinitely much. But of course, this is clearly not true – for a variety of reasons, future money is not as valuable as present money. (One reason economists focus on is the fact that present money can be invested and thus become a larger amount of future money. Another reason is that one may die at any point, and thus one may not live to be able to use the future money. Yet another reason is that one’s interests and desires gradually change, so one naturally cares less about one’s future self’s purchasing power than about one’s current purchasing power.) Thus, there must be some sort of discount rate.

For now, let’s make the simplifying assumption that the discount rate is constant over future years, so that money in any year from now into the future is worth 1.01 times the same amount of money a year later. Then we can calculate mathematically that the present value of an income stream of $1000 a year in perpetuity is given by the sum $\frac{1000}{1.01}+\frac{1000}{1.01^2}+\frac{1000}{1.01^3}+\dots$. Going through the work of summing this geometric series, we find that the present value is $\frac{1000/1.01}{1-1/1.01}=\frac{1000}{1.01-1}=100,000$.

However, there is an easier way to calculate this present value that is purely economic. The argument is not mathematically rigorous, but there are probably economic assumptions that could be used to make it so. We know that physical intuition can often suggest mathematical calculations that can later be worked out in full rigor (consider things like the Kepler conjecture on sphere packing, or the work that led to Witten’s Fields Medal), but I’m suggesting here that the same can be true for economic intuition (though of course the mathematical calculation I’m after is much simpler).
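As a quick numerical check of the summation (an illustrative sketch, not part of the argument itself), truncating the series after a great many years reproduces the closed-form value:

```python
# Sum the discounted income stream directly and compare with the
# closed form 1000 / (1.01 - 1) = 100,000.
def present_value(payment, rate, years):
    # sum of payment / (1 + rate)^t for t = 1 .. years
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

pv = present_value(1000, 0.01, 10000)
print(pv)  # approximately 100000
```

The truncation error after n years is $100{,}000/1.01^n$, which is already negligible well before the ten-thousandth year.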

The economic argument goes as follows. If money in any year is worth 1.01 times money in the next year, then in an efficient market, there would be investments one could make that pay an interest of 1% in each year. Investing $100,000 permanently in this and taking out the interest each year gives rise to this income stream, and thus one can fairly trade $100,000 to receive this perpetual income stream, so they must be equal in value. We don’t need to sum the series at all.

Now perhaps there is a sense in which the mathematical argument and the economic argument given above can be translated into one another, but it’s far from clear to me. Thus, it looks like at least sometimes, economic intuition can solve mathematical problems. People often talk about the “unreasonable effectiveness of mathematics in the sciences”, but here I think I have another example of the unreasonable effectiveness of the sciences in mathematics.

## Putnam on Indispensability

29 10 2007

I mostly wrote this post three weeks ago, when Hilary Putnam was at Berkeley to give the three Townsend lectures, but I didn’t get around to finalizing it until just now.

Putnam gave three talks, which were all quite interesting. The first discussed his own type of realism, and tried to counteract false impressions people have had of his views changing drastically with regards to this question over time. (He admitted that they had changed at a few points, but several other apparent changes were just due to some poor choice of terminology on his part. In particular, he had used the phrases “internal realism” and “scientific realism” in a few different ways by accident.)

The second discussed the Lucas/Penrose arguments trying to use Gödel’s theorems to undercut mechanism about the mind – he claimed that although these arguments couldn’t really work, there is an interesting modification that should undercut Chomsky’s thesis that there is a shared innate “scientific competence” that underlies the notion of scientific justification. If there were such an algorithm, then we could never recognize that algorithm as the true one – but since this thesis has to be an idealization (because of the infinities involved) it doesn’t make much sense to argue that our competence is represented by a Turing machine without our being able to say which one. However, I wonder if there’s still some sense to be made of the question – perhaps the idealization says that our scientific competence is given by some arithmetically definable set. But in that case, it is an interesting question whether the best such set is itself Turing computable (even though we can’t know which Turing machine gives it), or more complicated.

The third discussed his picture of “mathematics as modality”. What I found most interesting about this talk was his discussion of the so-called “Quine-Putnam indispensability argument”. He pointed out that when he gave the argument, it had a very different form and conclusion from Quine’s – while Quine used it as an argument for “realism in ontology” about mathematics (that is, that mathematical objects actually exist), Putnam claimed merely that it established “realism in truth-value”. Putnam’s argument was intended to show that there is a serious tension between scientific realism and verificationism about mathematics, on which mathematical claims don’t have truth-values unless they can be proven or disproven. The scientific realist says that there are some sets of equations (say, equations of motion, or wave-equations) that correctly describe the physical world. Now, if these equations have enough complexity (I suppose a three-body system under Newtonian gravitation suffices), then given the truth of certain initial conditions, there will be some interval (a,b) and some time t such that the question of whether one of the objects is in that interval or not at that time is undecidable. But if there is a fact of the matter for the physical claim, and the equations correctly describe the physical world, then there must be a fact of the matter about the mathematical claim, despite its being undecidable.

Putnam took care to point out that this only establishes that there is a fact of the matter about the mathematics, and not that the mathematical objects actually exist. Also, he took care to point out that he was not trying to make an epistemological point – nothing in this argument establishes that this is how we know the truth-values of mathematical claims, just that there must be such truth values. Since Quine concluded that mathematical claims must be about something (because of his picture of ontology as being given by the existential commitments of the best theory of the world), he also concluded that this is how we know which mathematical axioms are true.

16 09 2007

First, I’ll mention that I’ve updated my blogroll – there’s been a real burst in math blogs over the summer, at least in part instigated by my friends at the Secret Blogging Seminar, but also by the spurt of Fields Medalists with blogs. (Are we up to 10% of the total number now?) I’ve also added a few philosophy blogs that I’ve been reading for a while, and a couple that I should have been reading, but of course I’m sure I’m missing others.

Anyway, there’s new math job search gossip going on on the web – I think the discussion on that post is interesting and relevant across disciplines for people trying to figure out whether this is generally a good thing or not.

Tim Gowers discusses the way logarithms and other abstract things should be taught. He advocates a way that’s a bit more formalist than some others suggest, but it sounds reasonable to me. There’s also interesting discussion of formalism there in the comments, though some of it sounds more like structuralism to me. See for example Terence Tao’s comment, “I guess there is a fundamental transition in mathematical learning when one realises that what mathematical objects are (and how they are constructed) may be less important than what mathematical objects do (e.g. what properties they obey).”

Also, a discussion about the Axiom of Choice at The Everything Seminar (I may add that one to my links later too), focusing on a puzzle I first heard from my friend Lukas Biewald. There’s interesting discussion in the comments that reveals implicit ideas about platonism and formalism among mathematicians. I think the anti-platonist majority there should be a bit more careful though, because similar issues apply in arithmetic, thanks to Gödel’s results. I think we should be much more hesitant to say that the natural numbers are just something we make up than to say the same of the universe of ZFC (or a topos, or whatever), as I mentioned before.

## Crazy Neo-Falsificationism

2 09 2007

Karl Popper’s criterion of “falsifiability” for scientific theories (saying that a theory counts as scientific only if there is some hypothetical observation that would prove it to be false) is a very good heuristic for thinking about what science (or any sort of evidence-based procedure for finding out about the world) is like. However, regardless of what scientists say (whether they be physicists yelling about string theory, biologists yelling about intelligent design, or anyone railing at crackpots, economists, or whoever else they don’t like), it just isn’t right as even part of a criterion for what counts as science. But I think there is perhaps a way to use something like it as a criterion for what counts as a belief, though perhaps my suggestion is crazy.

First, a quick rundown of the problems with falsificationism as a criterion for science. As Popper was well aware, it can’t apply to statistical theories – in most cases, no evidence could actually rule out a statistical theory, rather than just making it extremely improbable, and you might think we shouldn’t rule something out just because it’s extremely improbable, because (in the long run) we’re bound to get unlucky and rule out the truth at some point. A bigger problem is the Quine-Duhem problem – basically no theory is falsifiable in a strict sense, because falsification of a theory by evidence always depends on auxiliary hypotheses, which can be let go of to save the theory. For instance, an observation of Uranus or Mercury in a place where you don’t expect it to be might look like a straightforward falsification of Newtonian mechanics, but there’s also room to postulate a so-far-unobserved planet (Neptune or Vulcan), or to argue that there was some optical artifact in the way the telescope was working, or even just that the astronomer misremembered or misrecorded the observation. Thus, there is no sharp line that can be drawn by a falsifiability criterion of this sort. In addition, theories that look straightforwardly unfalsifiable can still serve as useful heuristics for the further development of science – for instance, the theory that there actually are quarks (as opposed to the theory that protons and neutrons and cloud chambers and the like all behave “as if quarks existed”) can lead one to think of different modifications of the Standard Model in the face of recalcitrant data.

But despite all these problems, I think there’s still something very useful about the idea of falsificationism. But rather than a logical criterion, as Popper considered it, I’d rather think of it as an epistemological, or perhaps even psychological one. Popper thought that a theory needed to be specific enough that certain observations would be logically inconsistent with it, in order to count as a scientific theory. I’d rather say that a belief needs to be flexible enough that certain observations would lead the agent to give it up, in order for the belief to count as a “rational” or “scientific” one. (Or perhaps even to count as a belief at all, rather than just an article of faith, or something like that.) That is, it doesn’t need to be inconsistent with any set of observations – it just needs to be held in a way that is not totally unshakable. Although this is a psychological criterion I’m suggesting, I don’t think that the observations that would lead an agent to give it up need to be known to the agent – they just need to actually have the relevant dispositions. This removes the worries about statistical theories and the Quine-Duhem problem – although it might be that any theory could logically be saved from the data by giving up enough auxiliaries, it seems plausible that any rational agent would have some limit to the lengths to which they would go to save the theory. (I don’t know if comparative amounts of evidence needed to shake one’s belief should say anything interesting about the comparison between two agents.) This also applies to the more “standardly unfalsifiable” theories that I’d like to defend – I say that they’re important because they give useful heuristics for how to modify theories that are different from their empirically identical peers. But if these heuristics never seem to lead one to good modifications, then eventually one would likely give up this theory. It can’t be falsified, but one can still be made to give it up by seeing how fruitless it is and how much more fruitful its competitor is (which is just as unfalsifiable in this respect).

One might have worries about mathematical truths, or other potential “analytic” truths. Popper explicitly set these aside and said that his criterion only applied to things that weren’t logical truths (or closely enough related to logical truths). However, I suspect that something like my criterion might still apply here – although there is no possible observation that is inconsistent with Cauchy’s theorem on path integrals in the complex plane, I suspect that there are possible observations that would make anyone give up their belief in this theorem. For instance, someone could uncover a very subtle logical flaw that appears in every published proof of the theorem, and then exhibit some strange function that is complex-differentiable everywhere but whose integral around a closed curve is non-zero. Or at least, someone could do something that looks very much like this and would convince everyone, even though I think they couldn’t actually discover such a function because there isn’t one. It’s tougher to imagine what sort of observation would make mathematicians give up their beliefs in much simpler propositions, like the claim that there are infinitely many primes, or that 2+2=4, but as I said, there’s no need for the agents to actually be able to imagine the relevant observations – the disposition to give up the belief in certain circumstances just has to exist.

I think this is a relatively low bar for a belief to reach – I suspect that just about all apparent beliefs that people have would actually be given up under certain observations. However, with logical beliefs and religious beliefs, people often claim that no possible observation would make them give it up (this is called “analyticity” for logical beliefs, and “faith” for religious beliefs). I don’t know if that should actually count as a defect for either of these types of belief, but I think it is good reason to worry about them, at least to some extent.

## Mathematical Existence

2 05 2007

Last night, I mentioned to my roommate (who’s a mathematician) the fact that some realist philosophers of mathematics have (perhaps in a tongue-in-cheek kind of way) used the infinitude of primes to argue for the existence of mathematical entities. After all, if there are infinitely many primes, then there are primes, and primes are numbers, so there are numbers.

As might be expected, he was unimpressed, and suggested there was some sort of ambiguity in these existence claims. He wanted to push the idea that mathematically speaking, all it means for a statement to be true (in particular, an existence statement) is that it be provable from the relevant axioms. I was eventually able to argue him out of this position, but it was surprisingly hard, and made major use of the Gödel and Löwenheim-Skolem-Tarski theorems.

The picture that appeals to the mathematician is that we’re just talking about any old structure that satisfies the axioms, just as when we talk about groups, graphs, or topological spaces. However, I pointed out that the relevant axioms for the natural numbers can’t just be Peano Arithmetic, because we’ll all also accept the arithmetical sentence Con(PA), once we’ve figured out what it’s saying – and by Gödel’s theorem, PA itself (if consistent) doesn’t prove Con(PA). The same move applies to whatever recursive set of axioms we accept for the natural numbers: we’ll also accept its consistency statement, which it doesn’t prove. So we can’t have any particular recursive set of axioms in mind.

To make things worse for the person who wants to think of the natural numbers as just given by some set of axioms, there’s also the fact that we only mean to count the members of the smallest model of PA, not any model. By the LST theorem, any first-order set of axioms that could possibly characterize the natural numbers will also have uncountable models. In order to pick out the ones we really mean, we either have to accept some mathematical class of pre-existing entities (the models of PA, so we can pick out the smallest one), or accept a second-order axiomatization, or give up any distinction between standard and non-standard natural numbers.

Of course, the same argument can be run for set theory, and at this point, accepting a second-order axiomatization causes trouble. Some philosophers might argue a way around this using plural quantification, but at least on its face, it seems that second-order quantification requires a prior commitment to sets, so it can’t underlie set theory. And we really don’t want to admit non-standard models of set theory either, because they give rise to non-standard models of arithmetic, which include certain infinite-sized members as “natural numbers”.

So it seems, for number theory and set theory at least, the mathematician has to admit that something other than just axioms underlies what they’re talking about. I’ve argued that the existence of and agreement on these axioms means that mathematicians don’t often have to think about what’s going on there. And from what I can see so far, the mathematician could take a platonist, fictionalist, or some other standpoint without any trouble. And in fact, most of the reasoning is going on at a high enough level that even if there’s some problem with the axioms (say, ZFC turns out to be inconsistent) this will have almost no effect on most other areas of mathematics, just as a discovery that quantum mechanics doesn’t correctly describe the behavior of subatomic particles will have relatively little effect on biology. But the mathematician can’t insist that existence in mathematics just means provability from the axioms – she can insist on some distinction between mathematical existence and ordinary physical existence, but it will be something like the minor distinction that the platonist recognizes, or possibly something like the distinction between real and fictional existence.

## Piecemeal Discovery in Mathematics

31 03 2007

I recently saw an interesting post pointing out that the invention/discovery of calculus didn’t take place totally out of the blue (as we’re used to thinking) but was really the culmination of a lot of historical developments that happened to be coming together at the right time. (Possibly including the development two centuries earlier, in southern India, of many particular cases of Taylor series!)

Anyway, this seems to relate to an earlier discussion that Brian Weatherson and I had about the notion of discovery in mathematics. On some pictures of discovery (such as where a proposition is discovered to be true iff someone brings it from being not believed to being known), if someone earlier had a justified true belief in a proposition, then it turns out that no one really discovered it. But in many instances of real, complicated mathematical and scientific history, we don’t really want to attribute a particular discovery to anyone, because particular cases and broad patterns had been noticed by many people independently, and the person to whom the discovery is usually attributed was just the first to codify it in some useful and very general way. Of course, there are also many cases (like that of Cavalieri’s principle mentioned in the post referred to above, and that of Lagrange’s theorem in group theory, which came almost a century before the modern definition of a group) in which the standard attribution pushes the discovery anachronistically early, because the development of the special case was thought of as the key to the discovery, rather than the generalization that came later on.

## Return

20 09 2006

I just realized it’s been almost two months since I’ve posted here! You may have noticed trouble leaving comments in the last month or so – apparently my host site updated something in their system, and only today did I find the change needed to get it working again.

Anyway, after finishing up at the Canada/USA Mathcamp, I visited some friends in Bellingham and Vancouver, and then had the beginning of the semester to deal with, which all distracted me from blogging for a while. Last week I was in Berlin for a workshop, Towards a New Epistemology of Mathematics, attached to the Gesellschaft für Analytische Philosophie’s large conference. An overview of the workshop by David Corfield is here.

A few talks he didn’t mention caught my attention, so I’ll briefly note them here. Tatiana Arrigoni presented some discussion of the candidate set-theoretic axiom V=L, mentioning that although many philosophers of set theory argue that it should be rejected, there are some (like Ronald Jensen) who argue in its favor. Her idea seemed to be that there might be at least two different kinds of set-theoretic intuition that lead to different sets of axioms.

Curtis Franks suggested that although Hilbert is traditionally regarded as a mathematical formalist, some of his early writings suggest that his program was motivated by a sort of naturalism, perhaps in Maddy’s vein. He objected to the intuitionists and others by saying that standard mathematical practice is just obviously justified, because it’s been so successful and hasn’t led to any problems – in a sense, their philosophical arguments are no better than those of the skeptic. However, Hilbert wanted to reformulate the consistency of mathematics as a mathematical (rather than philosophical) question – Gödel just showed that this was impossible.

And in the main conference, Øystein Linnebo suggested that John Burgess’ system inspired by Bernays and Boolos, although it very elegantly derives all of ZFC (and large cardinals up to indescribables) just by means of a fairly straightforward plural quantification system together with something like Cantor’s limitation of size principle, doesn’t provide a much stronger justification than Bernays’ original system. The particular plural logic used here does matter, and a seemingly similarly justified limitation of size principle leads to Russell’s paradox. I’m not convinced that Linnebo undermines Burgess’ system terribly much, but it’s definitely interesting to see how these systems develop.