I went to a math graduate student talk yesterday about regular primes and their relations to Fermat’s Last Theorem, class numbers of fields, zeta functions, and the like. The thing that struck me most about the talk was how many “proofs” due to Euler were used that really did nothing like what a proof is supposed to do.

Here’s a simple example of the sorts of “proof” involved in the lecture. We know that if a geometric series 1+r+r^2+r^3+… converges, then a simple calculation shows that it converges to 1/(1-r). (If we just multiply through by 1-r, we see that every term cancels except for the 1. More rigorously, if we multiply the partial sums by 1-r, we get 1-r^(n+1), and if |r| < 1, then r^(n+1) goes to 0, so the partial sums converge to 1/(1-r).) Euler happily applied the formula even to series that diverge in the ordinary sense. And in fact, if we complete the rationals with respect to the p-adic distance for some prime p, we get the p-adic numbers as the completion. Amazingly enough, in the 2-adics, the series 1+2+4+8+16+… really does converge to -1. And in the 5-adics, the series 1+5+25+125+… really does converge to -1/4. (The argument from above actually works completely unchanged, except that |r| < 1 is now measured p-adically, where high powers of p count as small.) Here’s another blogospheric discussion of this phenomenon.
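The p-adic claims can be checked with nothing more than congruences: a sequence converges p-adically to L when its distance from L is divisible by ever-higher powers of p. A quick sketch in Python (the helper name `partial_sum` is mine):

```python
# p-adic convergence of geometric series, verified via congruences.
# x_n -> L in the p-adics means p^n divides (x_n - L) for growing n.

def partial_sum(r, n):
    """Partial sum 1 + r + r^2 + ... + r^(n-1)."""
    return sum(r**i for i in range(n))

# 2-adics: 1 + 2 + 4 + ... -> -1.  The nth partial sum is 2^n - 1,
# so its difference from -1 is exactly 2^n.
for n in range(1, 20):
    assert (partial_sum(2, n) - (-1)) % 2**n == 0

# 5-adics: 1 + 5 + 25 + ... -> -1/4.  Avoiding fractions: the partial
# sum s satisfies 4*s + 1 = 5^n, so s is 5-adically close to -1/4.
for n in range(1, 20):
    assert (4 * partial_sum(5, n) + 1) % 5**n == 0

print("both congruences hold")
```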


Theo (10:53:55): Many, although not all, of Euler’s manipulations can be made rigorous by taking averages:

Consider a formal series $\sum a_i$, and let $s_i$ denote its sequence of partial sums. Cauchy tells us when the series converges: $\sum a_i = A$ if $s_i \to A$ in the usual sense. I want to call this “0th order convergence” (although probably that term is used for something else too).

This kind of convergence makes sense of sums like $1 + 1/2 + 1/4 + 1/8 + …$ but fails with $1 - 1 + 1 - 1 + 1 - …$. We should not rule out the latter sum outright, just because, say, rearrangement of terms leads to problems. No, Cauchy does tell us how to evaluate $1 - 1/2 + 1/3 - 1/4 + 1/5 - …$, even though this is also susceptible to rearrangement.

The sequence of partial sums associated with $\sum (-1)^n$ is $1,0,1,0,1,0,1,0,…$, which does converge in the average to $1/2$. This suggests the following: given a sequence of partial sums $s_i = s_i^{(0)}$, consider the new sequence $s_i^{(1)} = \frac{1}{i} \sum_{j=0}^{i-1} s_j^{(0)}$. If the original sequence converges, then so does this average sequence, and to the same thing.
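This averaging recipe is easy to check numerically. A minimal sketch in Python (the function name `averages` is my own):

```python
def averages(seq):
    """One round of Cesaro averaging: t_i = (s_0 + ... + s_(i-1)) / i."""
    out, total = [], 0.0
    for i, s in enumerate(seq, start=1):
        total += s
        out.append(total / i)
    return out

# Partial sums of 1 - 1 + 1 - 1 + ... are 1, 0, 1, 0, ...
partials = [1, 0] * 5000
print(averages(partials)[-1])  # -> 0.5
```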

We can continue to take averages, so that we can make series like $1 - 2 + 3 - 4 + 5 - …$ converge. First, what should we expect this to converge to, a la Euler? Well, Euler points out that $(\sum a_i)(\sum b_i) = \sum_i \sum_{j=0}^{i} a_j b_{i-j}$, so we expect $(1 - 1 + 1 - 1 + …)^2 = 1 - 2 + 3 - 4 + …$. And, sure enough, after taking averages a couple times, it turns out that $1 - 2 + 3 - 4 + …$ does indeed converge to $1/4$.
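Iterating the averaging really does land $1 - 2 + 3 - 4 + …$ on $1/4$; here is a quick numerical sketch (the cutoff and helper name are my choices):

```python
def averages(seq):
    """One round of Cesaro averaging of a sequence."""
    out, total = [], 0.0
    for i, s in enumerate(seq, start=1):
        total += s
        out.append(total / i)
    return out

# Partial sums of 1 - 2 + 3 - 4 + ... are 1, -1, 2, -2, 3, -3, ...
n = 100_000
partials, total = [], 0
for k in range(n):
    total += (-1)**k * (k + 1)
    partials.append(total)

once = averages(partials)   # still oscillates, between ~0 and ~1/2
twice = averages(once)      # settles down near 1/4
print(twice[-1])
```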

I call a series “nth-order convergent” if its n-times average converges; then it turns out that any sum or product of nth- and mth-order convergent series is (n+m+1)th-order convergent (I don’t remember if you actually need the +1). And, of course, if a series’ n-times average converges, then so does its m-times average for all m > n, and to the same thing.

(Incidentally, the Hahn-Banach theorem allows us to extend such convergences to all series with bounded partial sums; one can construct a sequence of 0s and 1s that never converges no matter how many times you take averages, but using Choice we can pick a consistent limit for it (consistent with assigning limits of other sequences to their averages).)

I’m not sure what to do with Euler’s other manipulations, on things like $1 + 4 + 9 + 16 + …$ or factorizing power series. I’m guessing that the factorization you quote goes through, because analytic functions are very constrained. Indeed, I’d expect that a thorough sojourn through complex analysis would get us a lot of the way.

Euler is much maligned for his often-fuzzy manipulations. And he did get things wrong, arguing, for instance, that $\sqrt{-2}\sqrt{-3} = \sqrt{6}$. But more can be made correct by this kind of argument than many people give him credit for, and I agree with you that it’s important to remember that these kinds of formal manipulations can lead to great insights.

Peter (11:09:27): Apropos intuitions for infinite series: at times, the great Indian mathematician Ramanujan apparently had very good intuitions and insights about infinite series, identifying as true claims which he or others only later proved true. But some of his intuitions were poor; i.e., some claims he believed to be true were later proven false. It is not at all clear what kind of thing intuition is, if even for a genius it is sometimes correct and sometimes not.

Kenny (19:04:50): Theo – if you could e-mail me some references about that, that would be great. I’ve been thinking about a puzzle involving deciding which of two games is better, when neither has an expected utility because $\sum xP(X=x)$ is only conditionally convergent. Some people have suggested that we can treat such games as having any expected utility we like, just as you’ve suggested here with these non-convergent sequences of 0s and 1s.
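The trouble with a conditionally convergent $\sum xP(X=x)$ is Riemann’s rearrangement theorem: reordering the terms can produce any sum at all. A toy illustration with the alternating harmonic series, greedily rearranged to hit an arbitrary target (the target value and cutoff are my choices):

```python
# Riemann rearrangement in action: the terms of 1 - 1/2 + 1/3 - 1/4 + ...
# normally sum to ln 2 ~ 0.693, but a greedy reordering steers the
# running total toward any chosen target.

target = 2.0  # arbitrary: any real number works
pos = (1.0 / k for k in range(1, 10**7, 2))   # 1, 1/3, 1/5, ...
neg = (-1.0 / k for k in range(2, 10**7, 2))  # -1/2, -1/4, ...

total = 0.0
for _ in range(100_000):
    # below the target: spend a positive term; above it: a negative one
    total += next(pos) if total <= target else next(neg)

print(total)  # hovers near the target
```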

Matt Weiner (19:05:34): I don’t think I ever knew enough math to follow sigfpe’s entry, but I love that its URL is in part product-of-all-primes-is-42

(Hmph. No HTML in comments. Boldface the ’42’.)

logicnazi (03:38:55): I think you are being unfair. Most of these operations on divergent series can readily be verified as harmless in the individual cases in question. It is actually a fairly straightforward proof to show that if a geometric series converges, then the above method is valid.

Why is this not just another instance where people know what they would do if pressed? If they wanted to, they could calculate the errors, show that they go to 0, and make these arguments rigorous. Isn’t this just another case of leaving out the details?

Kenny (03:51:23): I think in this case it really isn’t clear what to do. At least, when the formula for computing the sums of geometric series was first discovered, people had no idea about p-adics or anything like that, so the answers in many cases didn’t make sense. Only later did more of what was going on become clear.