I just got back from the 7th annual Midwest Philosophy of Mathematics Workshop at Notre Dame, which was really quite a good event. I met a lot of people working on mathematical issues, including a bunch of students from Pittsburgh and Ohio State (surprisingly, I don’t remember meeting any students from Notre Dame). Anyway, here are my thoughts on some of the talks:
Jeremy Heis gave a very interesting talk on Ernst Cassirer’s philosophy of mathematics. I knew nothing about Cassirer before this talk, but his work sounds quite interesting, in that it applied a more naturalistic version of Kantian methods to issues of the applicability of mathematics. Not being a historian, I can’t say much more about this, but the talk was well worth hearing.
Alan Baker gave an interesting talk on “countermathematicals”, which are sentences of the form “If A were the case, then B would be”, where A contradicts some mathematical truth. The main interest here is that most theories of counterfactuals can’t deal with situations like this, where the antecedent is impossible. Either the sentence comes out trivially true (so that there is no distinction between different countermathematicals) or we run into other problems. Baker discussed some particular examples of such sentences – he mentioned David Lewis’s discussion of proofs by reductio, where we say “If A were true, then B would be, but that contradicts C (which we already know), so A is false”. Baker pointed out that reductios seem to behave more like indicative conditionals than subjunctives though – we’re allowed to call upon any previously proved statement, whereas in subjunctives we expect some of them not to be relevant to the situation we’re considering. The ones he found most interesting, though, were a bunch related to “spoof perfect numbers” – a perfect number is one that equals the sum of its proper divisors (like 6 and 28); a spoof perfect number is one that would be perfect if one of its composite divisors were prime. For instance, “If 4 were prime, then 60 would be perfect” or (in an example due to Descartes) “If 22021 were prime, then 198585576189 would be perfect”. Baker’s discussion focused on these claims and suggested that they motivate a theory of countermathematicals based on mathematical practice.
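To make the arithmetic behind these examples concrete, here’s a quick sketch of my own (not from the talk) that checks both claims. Writing n = m·p and pretending p is prime, the divisors of n would be d and d·p for each divisor d of m, so the divisor sum would be σ(m)·(1 + p), and n would be perfect just in case that equals 2n:

```python
def sigma(n):
    """Sum of all divisors of n (including n itself)."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def spoof_perfect(m, p):
    """Would n = m * p be perfect if p were prime?

    Treating p as prime, the divisors of n are d and d*p for each
    divisor d of m, so the divisor sum is sigma(m) * (1 + p);
    n is "perfect" when that sum equals 2 * n.
    """
    n = m * p
    return sigma(m) * (1 + p) == 2 * n

# "If 4 were prime, then 60 would be perfect":
print(spoof_perfect(15, 4))  # 60 = 15 * 4 -> True

# Descartes' example: 198585576189 = 3^2 * 7^2 * 11^2 * 13^2 * 22021
print(spoof_perfect(3**2 * 7**2 * 11**2 * 13**2, 22021))  # -> True
```

Running this confirms both countermathematical claims, in the sense that the relevant divisor-sum identity would hold if the factor in question were prime.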
Philip Ehrlich had some interesting discussion of John Conway’s “surreal numbers”, which are a generalization both of the von Neumann ordinals and of Dedekind cuts. He showed that the surreal numbers have a lot of interesting maximality and uniqueness properties, so that every theory of the continuum, or of infinite sizes more generally, can be seen as describing a substructure of them.
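As a side note, the basic comparison relation on surreal numbers is simple enough to sketch in code. The following is my own illustration (not from the talk): a surreal number is a pair of sets of previously created surreal numbers, and x ≤ y holds iff no member of x’s left set is ≥ y and no member of y’s right set is ≤ x:

```python
def leq(x, y):
    """Surreal comparison: x <= y iff no x_L in x's left set has y <= x_L,
    and no y_R in y's right set has y_R <= x."""
    x_left, _ = x
    _, y_right = y
    return (all(not leq(y, xl) for xl in x_left) and
            all(not leq(yr, x) for yr in y_right))

# A surreal number is a pair (left_set, right_set) of earlier surreals.
zero = ((), ())          # { | }     born on day 0
one = ((zero,), ())      # { 0 | }   born on day 1
neg_one = ((), (zero,))  # { | 0 }   born on day 1
half = ((zero,), (one,)) # { 0 | 1 } born on day 2

print(leq(zero, one) and not leq(one, zero))  # True: 0 < 1
print(leq(neg_one, half) and leq(half, one))  # True: -1 <= 1/2 <= 1
```

Iterating this “birthday” construction into the transfinite is what yields the ordinals, the reals, and infinitesimals all inside one ordered structure.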
Øystein Linnebo began the next day with an interesting discussion of the “bad company” objections to neologicism. He suggested that the problems with isolating “good” abstraction principles are so great that, instead of accepting universal comprehension for concepts and trying to restrict the abstraction principles, we should accept universal principles of abstraction and instead restrict comprehension for concepts to the good ones. His particular method involved breaking up the universe of objects and concepts into a bunch of layers, with an accessibility relation given by one layer being an extension of another. Provided that this relation is “directed” (so that any two extensions of one layer have a common extension further on), we can then quantify over all objects whatsoever by combining a modal box with the universal quantifier. Linnebo arrives at his particular theory by expressing, in this modal language, restrictions on when new concepts can occur in new layers. It’ll be interesting to see whether this gives a provably consistent theory, and if so, how much of the neologicist program it can reconstruct.
Andrew Arana gave an interesting talk on purity of methods in proof. He suggested that for any mathematical problem, we have both a goal and a set of concepts that one must understand in order to understand the goal. He said that a solution to such a problem is “pure” iff it requires no understanding of concepts beyond those already needed for the problem itself. The reason he suggested we might want purity is that it’s the only way to be sure that the result of our solution is exactly the same as the original goal, given the same understanding of the concepts involved. Thus, an impure solution runs the risk of “changing the subject”. I wasn’t quite sure whether this was always the case – in some examples he gave (such as a coordinatized proof of a statement in pure synthetic geometry) I could see that this was going on, while in others (a topological proof of the infinitude of primes) this didn’t seem like a risk to me. I also thought that this explains why we sometimes want impure proofs, exactly because they give a different understanding of the concepts involved in the original problem – when one uses complex numbers to solve a cubic, one gets a more algebraic understanding of the reals; when one uses elliptic curves and modular forms to prove Fermat’s Last Theorem, one gets a better understanding of the natural numbers as they relate to these other objects. Anyway, it’s a very interesting project.
Finally, Colin McLarty discussed category theory as a method for implementing mathematical structuralism. His primary point, I gather, was that Bourbaki had tried to give a unified understanding of mathematical structure in largely set-theoretic terms in their series of books in the mid-20th century, and ended up failing, in part because they only considered isomorphisms rather than more general maps between structures, and in part because they couldn’t come up with a unified notion of structure in this language. He suggested that modern philosophical structuralists are trying very similar strategies without realizing it, and that they should instead look to category theory as the foundation, treating maps, rather than elements, as basic.