Do Mathematical Concepts Have Essences?

24 01 2006

In John MacFarlane’s seminar today on Brandom’s Making it Explicit, we discussed the distinction between necessary and sufficient conditions for a concept and the essence of a concept. The distinction is roughly that necessary and sufficient conditions for the application of a concept don’t necessarily tell you in virtue of what the concept is satisfied. For instance, we have two extensionally equivalent notions – that of being a pawn in chess, and that of being permitted to move forwards one square, capture diagonally forwards one square, move two squares forwards in certain contexts, and so on. At first it might seem correct to say that the piece can move forwards one square because it is a pawn, but further reflection suggests that this would leave the notion of being a pawn unanalyzed. After all, a piece is not a pawn in virtue of its shape (we could be playing chess with an elaborately carved ivory set, with labeled checkers rather than standard chess pieces, or even with patterns of light on a computer screen), nor in virtue of its position on the board (any piece could be in most of the positions a pawn could be in), nor almost anything else. It seems that, in fact, the reason it is appropriate to call this piece a pawn is that we give it the normative status of being able to move in certain ways (along with giving other pieces related permissions and obligations). Thus, it seems that it is a pawn in virtue of its normative statuses, and it has these statuses in virtue of our agreement to treat it thus (or something like that).

Now, whether this “in virtue of” makes sense or not is a contentious debate I’m sure. But if it does, then it seems to motivate various projects, both philosophical and otherwise. For instance, the physicalist program seeks to find physical facts in virtue of which all other facts hold (whether about consciousness, laws of nature, normativity, life, etc.) and in general any reductionist program seeks to show that a certain set of facts holds in virtue of some other set (though they may argue that even the distinction between these two sets of facts is merely illusory).

Another example used in seminar to motivate this distinction: we know that, necessarily, any straight-edged figure each of whose exterior angles equals the sum of the non-adjacent interior angles is a triangle, and vice versa. However, there is a sense in which it is a triangle in virtue of its having three sides, rather than in virtue of this fact about the exterior angles. So I wondered how far this idea can be extended in mathematics.
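(For what it’s worth, the exterior-angle fact is a one-line consequence of the interior angle sum – a quick sketch, not something from the seminar:)

```latex
A + B + C = 180^\circ
\quad\Longrightarrow\quad
\underbrace{180^\circ - C}_{\text{exterior angle at } C} = A + B
```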

At first I thought it would quickly fail – after all, it’s extremely common not to have a single definition for a mathematical object. For instance, an ordinal can be defined as a transitive set linearly ordered by the membership relation, as a transitive set of transitive sets, as the Mostowski collapse of a well-ordering, and probably in many other ways. In different presentations of set theory, a different one of these is taken to be the definition, and all the others are proven as theorems. Similarly, a normal subgroup of a group G can be defined either as the kernel of a homomorphism from G to some group H, or as a subgroup of G that is closed under conjugation by elements of G, or probably in many other ways as well.
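To make the two characterizations of normality concrete, here is a toy sketch (my own example, not from the post): in the symmetric group S3, the even permutations A3 satisfy both definitions – they form the kernel of the sign homomorphism, and they are closed under conjugation by every element of S3.

```python
from itertools import permutations

def compose(p, q):
    # (p*q)(i) = p(q(i)), with permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def sign(p):
    # parity of a permutation = parity of its inversion count
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p)) if p[i] > p[j])
    return (-1) ** inversions

S3 = list(permutations(range(3)))

# sign really is a homomorphism from S3 to {1, -1}
assert all(sign(compose(p, q)) == sign(p) * sign(q) for p in S3 for q in S3)

# Definition 1: A3 is the kernel of the sign homomorphism
A3 = [p for p in S3 if sign(p) == 1]

# Definition 2: A3 is closed under conjugation by every element of S3
assert all(compose(compose(g, n), inverse(g)) in A3 for g in S3 for n in A3)

print(len(A3))  # prints 3: the identity and the two 3-cycles
```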

However, I’m starting to think that maybe there still is a notion of essence here. For most uses of the normal subgroup concept, the fact that it is the kernel of a homomorphism is really the fundamental one. This explains why you can take quotients by normal subgroups, and it generalizes more easily to the notion of an ideal in a ring. With the ordinal concept, it’s a bit harder to see what the fundamental fact is, but it’s clear that well-ordering is at the bottom of it – after all, when generalizing to models of set theory without the Axiom of Foundation, we normally restrict the notion of ordinal to well-founded sets, which the first two definitions on their own would not guarantee.

If this is right, then I think that a lot of the history of the development of mathematics can be seen as a search for the essences of our concepts (and for the important concepts to find the essences of). Although we often think that theorems are the main product of mathematics, it seems that a lot of the time just identifying the “right” structures to be talking about is really the goal.

Something like this can be seen in the history of algebraic geometry. At first, it was the study of curves in the real plane defined by polynomials. Eventually, it was realized that setting it in the complex plane (or the plane over any algebraically closed field) makes a lot of things make more sense. (For instance – Bezout’s theorem is true in this setting: a curve of degree m and a curve of degree n, with no common component, always intersect in exactly mn points, counting multiplicity.) Then it was generalized to n-dimensional spaces, and to projective spaces as well, to take care of a few troubling instances of Bezout’s theorem and to make sure that every pair of algebraic curves (now called varieties) intersects. After noticing the connection between algebraic curves and ideals in the ring of polynomials on the space (there is a natural pairing between algebraic subsets of a space and radical ideals in the ring of polynomials), it became natural to define a ring of polynomial-like functions on the algebraic curves themselves. With this definition, it was clear that projective spaces are somehow the same as algebraic curves in higher-dimensional spaces, and affine spaces are their complements. Thus, instead of restricting attention to affine and projective n-spaces over algebraically closed fields, the spaces of study became “quasiprojective varieties” – intersections of algebraic subsets of these spaces and their complements. In the ’50s and ’60s, this notion was generalized even further to consider any topological space with an associated ring satisfying certain conditions – that is, the objects of study became sheaves of rings over a topological space satisfying certain gluing conditions. Finally (I think it was finally), Grothendieck consolidated all of this with the notion of a scheme.
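The first step in that history can be illustrated with a toy computation (my own example): over the reals, the line y = x + 2 misses the unit circle entirely, but over the complex numbers Bezout’s count of 1 × 2 = 2 intersection points is restored.

```python
import cmath

# Intersect the line y = x + 2 (degree 1) with the unit circle
# x^2 + y^2 = 1 (degree 2).  Bezout's theorem predicts 1 * 2 = 2
# intersection points over an algebraically closed field.
# Substituting y = x + 2 gives 2x^2 + 4x + 3 = 0.
a, b, c = 2, 4, 3
disc = b * b - 4 * a * c  # -8: negative, so there are no *real* solutions
roots = [(-b + s * cmath.sqrt(disc)) / (2 * a) for s in (1, -1)]
points = [(x, x + 2) for x in roots]

print(len(points))  # prints 2: exactly the count Bezout predicts, over C
```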

At various points in the development of algebraic geometry, the spaces under study changed in various ways. At first, extra restrictions were imposed by requiring the field to be algebraically closed. But then other restrictions were removed by allowing the dimension to be higher. Moving to projective spaces was another restriction (in a sense), but moving to quasiprojective varieties was a generalization. Moving to locally ringed spaces, then to sheaves, and finally to schemes was a still greater generalization that (I think) incorporated the spaces originally excluded by requiring algebraic closure. Those excluded spaces, however, could now be understood in much better ways, using the tools of ideal theory, the Zariski topology, and much else that was naturally developed in the more restricted setting. I am told that the notion of a scheme helped tie everything together, letting algebraic geometers finally reach a concept with all the power and interest they wanted – enough to give good explanations for facts like the analogues of Bezout’s theorem, and to start dealing with problems of number theory in geometric terms.


5 responses

29 01 2006
Peter

Kenny — page and comments seem to be OK again.

My comment on this post is just a minor observation: You say: “it’s extremely common not to have a single definition for a mathematical object.”

Indeed, having more than one definition of an object is often how we understand the object. As an example, consider a derivative, which can be understood in terms of epsilon-delta arguments, or infinitesimals, or as a functor between appropriate categories, and so on.

29 01 2006
Kenny

That definitely seems right – when we have all these different sets of necessary and sufficient conditions for something, we tend to get a better understanding of it. This is why people always look for classification theorems, and for ways to characterize which Xs have a certain property Y. (What they seem to mean is some characterization that doesn’t make explicit mention of X and Y, but is somehow more “direct” or something.)

For many of these concepts, though, we don’t use the alternative characterization as a definition, but for some (like ordinals) we do. One of my colleagues told me a story about her first model theory class, when the professor mentioned the phrase “complete theory”, and she asked what it was. Rather than saying “a theory that logically entails every sentence or its negation”, he responded “a theory, all of whose models have isomorphic ultrapowers”.

31 01 2006
David Corfield

A very interesting post. I think it’s right to look beyond definitions that are equivalent in specific situations. One of them often extends to a more general situation in a thoroughly pleasing way. It is also possible that a pair of definitions can generalise in different ways, leaving open the possibility of some future convergence.

“Although we often think that theorems are the main product of mathematics, it seems that a lot of the time just identifying the “right” structures to be talking about is really the goal.” Precisely!

Raising the topic of essences as you do, the natural next step is to think about natural kinds, laws, etc. I made a start in Mathematical Kinds, or Being Kind to Mathematics. Admittedly, it’s not the most careful piece, but then I often feel close to quitting philosophy of math and don’t want to leave half-formed ideas out.

18 07 2006
John Baez

Kenny Easwaran writes:


Finally (I think it was finally), Grothendieck consolidated all of this with the notion of a scheme.

I’m very much in favor of what you say in this post. But I’m not sure that mathematical essences are ever revealed to us in a final “ultimate” form.

In particular, while your history of algebraic geometry is excellent, it may have stopped a little too soon, because after Grothendieck gave his famous definition of a scheme as a ringed space pieced together out of spectra of commutative rings, he gave a much simpler and more general definition in which a scheme is simply a functor from commutative rings to sets.

It seems nobody has ever put much work into studying this definition, because Grothendieck had already shot too far ahead of his time and exhausted everyone who was trying to keep up. Even now, most mathematicians still find his first definition of scheme to be painfully abstract and general. But his later definition is beautiful and simple, and may someday be taken up and given serious study.

18 07 2006
Kenny

You’re right – I should have been more careful with that use of “finally”, because I agree that, like in any other science, we rarely (or perhaps never?) reach the final theory of anything. But it sounds like you’re suggesting that I even missed a point in the historical development that has happened – I’ll have to check that out, if my limited knowledge of algebraic geometry is sufficient.

Anyway, thanks for the interest and the confirmation!
