More than a Century Old

19 08 2007

Joe Shipman recently posted an interesting message to the Foundations of Mathematics (FOM) e-mail list:

I propose the thesis “any mathematics result more than a century old is suitable for undergraduate math majors”.

Note that the original proofs may be too difficult for undergraduates; I am only requiring that today a “boiled-down” proof (which may be embedded in a much larger theory than existed at the time of the original proof) could be taught.

So far I have only found one significant counterexample, Dirichlet’s theorem (which, in its logically simplest form, states that if a is prime to b, there exists a prime congruent to a mod b).

Can anyone think of better counterexamples? Does anyone know of a proof of Dirichlet’s theorem that does not require prerequisites beyond the standard undergraduate curriculum?

(Two other possible counterexamples, the Prime Number Theorem and the Transcendence of Pi, are proven sufficiently easily at the following links that they would, in my opinion, be appropriate for a senior seminar:

http://www.ma.utexas.edu/users/dafr/M375T/Newman.pdf

http://sixthform.info/maths/files/pitrans.pdf

).

Another version of the thesis is “any mathematics result more than 200 years old is suitable for freshmen” (note that most high schools offer a full year of Calculus). Results that were merely conjectured more than 200 years ago but not really proved until later don’t count.

— JS
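
For readers who haven’t seen it, the theorem Shipman mentions is usually stated with “infinitely many”: if $a$ and $b$ are positive integers with $\gcd(a,b) = 1$, then there are infinitely many primes $p$ with

$$p \equiv a \pmod{b}.$$

The “logically simplest form” above asserts only the existence of one such prime, but since that is asserted for every modulus, the two forms are equivalent: given some primes in the progression, applying the existence statement to a larger modulus chosen (via the Chinese remainder theorem) to exclude the primes already found produces a new one.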

I’ve sometimes considered something like this thesis myself. Can anyone else think of potential counterexamples? I wonder whether some 18th-century results on solutions of differential equations would be too advanced for first-years. And there were probably some particular calculations done in the 19th century that are simply too large for an undergraduate to manage properly. It’s also possible that some of Cantor’s results on the possible Borel structures of the sets of discontinuities of real-valued functions might be too advanced, though advanced seniors may be able to manage them. Or perhaps the Riemann-Roch theorem? (I don’t actually even know enough to state that theorem myself.)

Another interesting question arising from this discussion – what’s the earliest result of the 20th century that is beyond the reach of an advanced undergraduate?





Betting Odds and Credences

17 08 2007

I was just reading the interesting paper “When Betting Odds and Credences Come Apart,” by Darren Bradley and Hannes Leitgeb, at least in part because of some issues that are coming up in my dissertation about the relations between bets and credences. Their paper is a response to a paper by Chris Hitchcock arguing for the 1/3 answer in the Sleeping Beauty problem, in which he shows that if Beauty bets as if her credence were anything other than 1/3, then she is susceptible to a Dutch book.

They end up agreeing that she should bet as if her credence were 1/3, but they argue that this doesn’t mean her credence should actually be 1/3, because of some similarities this case has to other cases where betting odds and credences come apart. I know that Darren, at least, supports (or has supported) the 1/2 answer in the Sleeping Beauty case, so he’s got a reason to argue for this position.

In the end, though, I think their paper has convinced me of the opposite – the correct thing to do in this situation is to bet as if one’s credence is 1/2, even though one’s credence should actually be 1/3! The argument for the 1/3 credence comes from a bunch of sources (especially Mike Titelbaum’s work on the topic). As for betting as if one’s credence is 1/2, I may be using the term “bet” in a somewhat non-standard way, but my usage is motivated by an attempt to resist some of the claims of Bradley and Leitgeb.

They give some examples of other cases in which it might look as if one should bet at different odds than one’s credences. For instance, if one is offered a bet on a coin coming up heads, but knows that this bet will only be offered if the coin has actually come up tails, then it looks as if one should bet at odds different from one’s credences. However, they agree that in this case one’s credences change as soon as the bet is offered, and one should bet at odds equal to the new credences.
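
To put this in symbols (writing $H$ for heads and $O$ for the proposition that the bet is offered; the notation is mine): once the bet is offered, conditionalizing gives

$$Cr_{\text{new}}(H) = Cr(H \mid O) = \frac{Cr(H \wedge O)}{Cr(O)} = 0,$$

since the bet is only offered when the coin has come up tails, so $Cr(H \wedge O) = 0$. Betting at odds equal to the new credence then means refusing any stake on heads, which matches the intuitive verdict.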

Their next example is very similar, but without the shift in credences. One is offered a bet on a coin coming up heads, but knows that if the coin actually came up heads, then the bet is carried out with fake money (indistinguishably replacing the real money in one’s own and the bookie’s pockets), while if it came up tails, the bet is real. In this case, it looks like one should bet at odds different from one’s credences, which should still be 1/2.

However, I think that in this case what’s going on is that one isn’t really being offered a proper bet on heads at odds of 1/2. Functionally speaking, the money transfer involved will be like a bet on heads at odds of 1. It might be described as a bet at different odds, but I think bets should be individuated in some sort of functionalist way here, rather than according to their description in this sense. Thus, since one’s credence in heads is less than 1, one shouldn’t accept this bet.
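
To make the functional description concrete (the \$10 stake is my example): suppose the nominal bet is \$10 on heads at even stakes. In real money, the payoffs are \$0 if heads (the winnings are fake) and $-\$10$ if tails, so the expected real payoff at credence $x$ in heads is

$$x \cdot \$0 + (1 - x) \cdot (-\$10) = -(1 - x) \cdot \$10,$$

which is non-negative only when $x = 1$. That is why the transfer functions like a bet on heads at odds of 1: only an agent certain of heads should accept it.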

Bradley and Leitgeb then say that what goes on in Hitchcock’s set-up of the Sleeping Beauty bets is similar. The bet will be made twice if the coin comes up tails (because Beauty and the bookie both forget the Monday bet), and thus this is a situation like the one with the bet that might turn out to be with pretend money, but in the opposite direction. As a result, this bet ends up being one that costs the agent $20 if the coin comes up heads, and wins her $20 if it comes up tails, so it’s functionally a bet at odds of 1/2. I think this is the set of bets she should be willing to accept, but that her credence in heads should be 1/3, so her betting odds and credences should come apart.
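
Spelling out the arithmetic (the per-waking stakes are my reconstruction, chosen to match the \$20 totals just mentioned): suppose that at each waking Beauty accepts a bet that costs her \$20 if heads and pays her \$10 if tails. At credence 1/3 in heads, each individual bet is fair:

$$\tfrac{1}{3} \cdot (-\$20) + \tfrac{2}{3} \cdot (+\$10) = 0.$$

But heads means the bet is made once, while tails means it is made twice, so over the whole experiment the payoffs are $-\$20$ on heads and $+\$20$ on tails, a package that is fair only at probability 1/2 for heads.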

Of course, there may be a slight difference between the situations. In this version of the Sleeping Beauty bets, the bet gets made twice if the coin comes up tails, rather than paying off double. Perhaps the fact that it’s agreed to multiple times doesn’t make the same difference that having money replaced by something twice as valuable would. If so, then this bet really was properly described as a bet at odds of 1/3, so that I would no longer think that this is an example where betting odds and credences should come apart.

So I think I don’t really accept the particular claims that Bradley and Leitgeb make in this paper, but it’s only because I’m trying to do something subtle about how to individuate bets in functional terms. I’m sure there are good cases out there in which betting odds and credences could rationally come apart, but I’m not convinced that the Sleeping Beauty case is one of them.





The Principal Principle

3 08 2007

A very plausible normative principle relating subjective degree of belief to objective chance is David Lewis’ “Principal Principle”. In a simplified version, this principle says that if you know the objective chance of some inherently chancy outcome, then your degree of belief in that outcome should equal the chance. Thus, if you know that the coin is fair, then you should have degree of belief 1/2 that it will come up heads.
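
In symbols, the simplified version might be put like this (roughly following Lewis, with $Cr$ for the agent’s credence function and $ch$ for the chance function):

$$Cr(A \mid ch(A) = x) = x,$$

so that, for instance, $Cr(\text{heads} \mid \text{the coin is fair}) = 1/2$.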

This has some added bite because the chance information overrules a lot of other information – if you know the coin is fair, then it doesn’t matter how it happened to come up on the last 1000 flips; you should still believe in heads to degree 1/2, even if those 1000 flips were all tails. This is one diagnosis of what’s fallacious about the gambler’s fallacy (or the inverse gambler’s fallacy).
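
In the same notation, the point is that the record of past flips is screened off by the chance information:

$$Cr(\text{heads}_{1001} \mid \text{fair} \wedge \text{tails}_1 \wedge \cdots \wedge \text{tails}_{1000}) = 1/2,$$

since a history of outcomes is exactly the sort of evidence the chance hypothesis is supposed to trump.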

Of course, some sorts of information can overrule the chance information – if a very accurate fortuneteller has told you that the coin will come up heads, then maybe you should believe in heads to a degree higher than 1/2, even though you still believe the coin is fair. This sort of information is what Lewis called “inadmissible” information. The question for the Principal Principle, then, is just what counts as inadmissible information.

To answer this, I think we need to consider just what chance really is. On one notion of chance, it requires that the world be objectively indeterministic, so that there is no fact of the matter about future chancy events. On this account, the idea of an accurate fortuneteller for chancy events doesn’t even make sense. This might be a natural view of chance that arises from the many-worlds interpretation of quantum mechanics. On this view, the chance of an event could potentially depend on anything for which there is a fact of the matter – but this only includes facts about the past and present. But since you’d need to know all this information (or the relevant parts anyway) to know the chances, there will trivially be no possibility of inadmissible evidence, so the Principal Principle stands (if at all) in a very simple form!

But there are other notions of chance I’ve heard people talk about. One is supposed to be compatible with strict determinism. I don’t know too many of the details, but I suspect that the idea is that there’s some natural class of “nearby worlds”, and chance is just some sort of probability measure on those worlds. This can definitely give rise to non-extreme values for chances, even though nothing other than what is determined to happen is genuinely possible. However, on this interpretation of chance, I don’t see why anything like the Principal Principle would have any normative force at all. I suppose it would make sense if you could somehow narrow things down enough to know what the chances are, while still being unable to eliminate any of the worlds in the class that defines the chances. But it’s far from clear to me why this situation would be at all common.

Then of course there’s Lewis’ own characterization of chance. I believe his idea is that one can read off the natural laws of a world by seeing what best systematizes the entire history of it. If there are certain types of events that have no interesting pattern to them at all except for a certain limiting frequency, then the best way to systematize these will be with chancy laws. In this setting it’s not clear how one would justify the Principal Principle, or how one would claim to have knowledge about the chances.

At any rate, the Principal Principle seems to say different things on these different interpretations of chance, and it gives rise to either different justifications or different accounts of what should count as “inadmissible evidence”.