Probabilistic Methods in Mathematics

3 12 2005

In a comment to my previous post, Anthony Widjaja To points out an ambiguity in the term “probabilistic proof”. There are three senses I can think of:

Erdős’ “Probabilistic Method” in Graph Theory

This method is not related to what I was talking about before, because it gives a deductively valid proof of the result, though it uses facts about probabilities in intermediate stages. An example is the following proof of a bound for Ramsey numbers. (Recall that the Ramsey number R(k) is defined as the least n such that among any n people, there is a group of k such that every member of the group knows every other, or no member of the group knows any other. R(2) = 2, R(3) = 6, R(4) = 18, and larger Ramsey numbers are unknown.)

Represent the people as a set of vertices, with a red edge between them if they know each other, and a blue edge if they don’t. We want to find n such that for any coloring of such a graph, there is either a set of k vertices all of whose edges are blue, or such a set all of whose edges are red. So color such a graph randomly, with each edge having probability 1/2 of being blue, and 1/2 of being red. For any given set of k vertices, there are k(k-1)/2 edges between them, so it has probability 2^(1-k(k-1)/2) of being monochromatic. If n = R(k), then the probability of there being such a monochromatic subgraph must be 1. But this probability is at most the number of k-element sets, (n choose k), times 2^(1-k(k-1)/2), so that number must be at least 2^(k(k-1)/2-1) for the probability to reach 1. So if R(k) = n, then n must satisfy (n choose k) ≥ 2^(k(k-1)/2-1).
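To see what this bound actually gives, here’s a short sketch (in Python, with a function name of my own choosing) that finds, for a given k, the largest n for which (n choose k)·2^(1-k(k-1)/2) is still below 1, so that the argument above forces R(k) > n:

```python
import math

def ramsey_lower_bound(k):
    """Largest n with C(n, k) * 2^(1 - k(k-1)/2) < 1, so that some
    coloring of n vertices has no monochromatic k-set and R(k) > n."""
    threshold = 2 ** (k * (k - 1) // 2)  # 2^(k(k-1)/2)
    n = k
    # C(n+1, k) * 2^(1-K) < 1  is equivalent to  2 * C(n+1, k) < 2^K
    while 2 * math.comb(n + 1, k) < threshold:
        n += 1
    return n

for k in range(3, 8):
    print(f"R({k}) > {ramsey_lower_bound(k)}")
```

The bound is weak for small k (it gives only R(4) > 6, while R(4) = 18), but it grows exponentially in k.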

More complicated applications of this method start with probabilities other than 1/2, and often make use of more advanced concepts in probability, like variance, Chebyshev’s Inequality, and the like. But they all end up giving deductive proofs of the result.

Probabilistic Proofs of Primality

The methods that Don Fallis was talking about, which I discussed in my previous post, are proofs that don’t give a guarantee that the result holds. For instance, Michael Rabin proved in 1980 that for any n, if it is not prime, then at least half the numbers less than it have a certain property that is easy to check. Therefore, if we check 20 of these numbers (say, chosen by some pseudo-random number generator, seeded by the nanoseconds reading on the computer clock when the user hits a key 20 separate times), then if n is composite, we only have about a 1 in a million chance (1/2^20) of getting a false positive for being prime. On the other hand, if it’s prime, then we are guaranteed to get a positive result.
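For concreteness, here’s a minimal sketch of such a test, the Miller–Rabin test based on Rabin’s result. The function name and round count are my own choices, and Python’s random module stands in for the keystroke-seeded generator described above:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test. False means n is certainly composite;
    if n is composite, the chance that all rounds pass is at most
    1/2^rounds (in fact at most 1/4^rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # handle small n by trial division
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:  # write n - 1 as d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

print(is_probable_prime(2**61 - 1))  # True: a known Mersenne prime
```

If n is prime, every round passes; if n is composite, each independently chosen a fails to be a witness with probability at most 1/2, which is where the 1/2^k figure below comes from.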

Note that many deductive proofs that mathematicians accept are so complicated that there’s far more than a 1 in a million chance that the proof went wrong somewhere, even if the result is true.

Inductive Generalizations from Instances

The Goldbach Conjecture, stating that every even number (except 2) is the sum of two primes, is widely believed to be true. However, the best evidence that I know of in favor of it is the fact that people have searched for counterexamples and shown that there are none less than 10^11. (There are other related results, like the fact that every even number is the sum of no more than 300,000 prime numbers, and that every even number is the sum of a prime and a product of at most two primes, according to this site.)
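To illustrate the kind of search involved, here’s a naive sketch of my own; actual verifications use far more efficient methods, and a bound like 10^11 is well beyond this toy version:

```python
def goldbach_counterexample(limit):
    """Return the first even number in [4, limit] that is not a sum
    of two primes, or None if there is no counterexample below limit."""
    sieve = [True] * (limit + 1)  # Sieve of Eratosthenes
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    primes = [p for p in range(2, limit + 1) if sieve[p]]
    for n in range(4, limit + 1, 2):
        if not any(sieve[n - p] for p in primes if p <= n // 2):
            return n
    return None

print(goldbach_counterexample(10_000))  # None: no counterexample below 10^4
```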

Both of these last two methods may be useful for raising one’s subjective credence in some result on a Bayesian account of mathematics (suitably modified to avoid the fact that standard Bayesian epistemology implies logical omniscience). After all, for any n I should assign some probability to there being no counterexample below n even if the theorem is false. As n goes to infinity, it seems that this probability should go to zero. So for any epsilon, there is some finite number of cases to check that would reduce my subjective probability that the theorem is false below epsilon. Similarly, for the primality result, if I assign some probability p to n being composite, and I check k numbers and find no witnesses, then I should assign probability roughly p·2^(-k) to the number being composite. Choosing k appropriately lets me make this posterior probability as small as I like.
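Here’s that update worked out exactly, under the stated idealization that each check passes with probability 1/2 if n is composite and always passes if n is prime:

```python
def posterior_composite(p, k):
    """Bayes' theorem: P(n composite | k checks found no witness),
    assuming each check passes with probability 1/2 given compositeness
    and probability 1 given primality."""
    passed = 0.5 ** k
    return p * passed / (p * passed + (1 - p))

# With a prior of 0.1 that n is composite, after 20 clean checks:
print(posterior_composite(0.1, 20))  # ~1.1e-07, close to p * 2^-20
```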

The difference between these two cases is that the method is purely subjective for the verification of Goldbach’s Conjecture, while there are some relatively objective bounds one can give in the case of primality. By choosing the potential witnesses to check using the nanoseconds reading on the computer clock, we can reasonably suggest that the chosen numbers won’t follow any pattern related to n. There is probably some complex relationship between n and the precise distribution of Rabin witnesses for compositeness, and if we used some mathematical formula to choose the potential witnesses, then we might be unlucky and use a formula that just so happens to always avoid witnesses, so the probability of a chosen number not being a witness wouldn’t be at most 1/2 each time. By choosing the numbers using independent readings of the clock, we can be justified in regarding each of these events as independent, giving us a conditional probability of a false positive, given that n is composite, of at most 1/2^k. So in some relatively objective sense, we can measure the likelihood, and see that we have very good evidence of primality.

We can also get a decently objective value for the prior probability that n is prime, using the Prime Number Theorem. This theorem states that there are approximately n/log n primes less than n, and we can use this to estimate that n has a probability approximately 1/log n of being prime. This is more questionable than the previous calculation, because, for instance, we know that no even number (other than 2), and no number that ends with 5 or 0 (except 5) is prime, so if n is not of this form, then it is more likely to be prime. So we don’t have a very good methodology for establishing a prior probability that n is prime.
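As a quick check on that estimate (my own comparison, not from any source cited here), one can set the heuristic 1/log n against the actual density of primes below n:

```python
import math

def prime_count(n):
    """pi(n), computed with a simple sieve."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return sum(sieve)

n = 10 ** 6
print(prime_count(n) / n)  # empirical density of primes: ~0.0785
print(1 / math.log(n))     # Prime Number Theorem heuristic: ~0.0724
```

The two agree to within about 10 percent at this scale, which is about as much as the heuristic promises.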

But we can estimate the likelihoods in some fairly objective way using this method, which is why it’s much better than the method of verification for the Goldbach Conjecture. After all, someone might provide a relatively straightforward proof that if there is a counterexample to the Goldbach Conjecture, then it is at least 10^100, and then the empirical evidence that we have already would be of no use. But it’s unlikely that someone will demonstrate a correlation between the timing of my keystrokes and the witnesshood of numbers less than n, so we can at least get a likelihood there.

There’s a sense in which a deductive proof only gives a likelihood as well, since there is a certain chance of it being wrong. However, again we have no good means of estimating the likelihood of producing such a seemingly correct (but actually incorrect) proof given that the theorem is true, or given that it’s false. Of course, the same worry arises with the probabilistic proof of primality, so the difference between the two is only in this extra possibility of error, which can be bounded in a rational way. But it’s interesting to note the advantages that this type of proof has over other non-deductive means of verification.

4 responses

4 12 2005
Peter

Well, one branch of pure mathematics uses probabilistic proofs all the time (speaking of “probabilistic proof” in your second sense above) — namely, probability theory, and its close relative, measure theory.

These areas of mathematics have many theorems of the form:

“Statement X is true almost everywhere” (i.e., true except possibly on a set of measure zero)

OR

“Statement X(n) is eventually true for all n” (i.e., there is some k such that X(n) is true for all n > k)

OR

“Statement X(n) is true with probability p(n)”

etc.

The greatest achievement of 20th century mathematical statistics — Neyman & Pearson’s theory of statistical inference, which tamed the uncertainty inherent in inductive inference — is a result of this kind. Contemporary experimental and quantitative social science could barely exist without this result, and its descendants.

4 12 2005
Anthony Widjaja To

Your example of why probabilistic proofs of primality are not well-accepted in mathematics reminds me of the reason why computer scientists try to “derandomize” randomized algorithms, the best such example being the recently discovered algorithm that witnesses that primality is polynomial-time computable. It is a truism in computer science that randomized algorithms are often simpler and more elegant than their deterministic counterparts. However, as commonly argued by many algorithmists (not including myself), randomized algorithms hide the forest behind the trees, i.e., they do not reveal the structural reasons why some computational problems (such as primality testing) can be solved efficiently. In other words, we want to derandomize randomized algorithms because we want to understand more. So they say, at least. On the other hand, as you have pointed out, although we have managed to derandomize a certain randomized algorithm, this still does not rule out the possibility that the proof of the correctness of the derandomized algorithm contains mistakes, which is possible as deterministic algorithms are often more complicated than their randomized counterparts.

Anyway, talking about the notion of “almost always true” in mathematics, there is an interesting result in computer science that says that the problem of deciding whether a statement in first-order logic is almost always true can be decided in polynomial space (contrast this to Turing’s result that deciding whether a first-order statement is true is incomputable). Interestingly, if you go to second-order logic, the problem quickly becomes undecidable 😉

4 12 2005
Theo

“Of course, the same worry arises with the probabilistic proof of primality, so the difference between the two is only in this extra possibility of error, which can be bounded in a rational way.”

Except, of course, for the differences in length of proof — presumably the chance that a long proof is wrong is less than the chance that a longer proof is wrong, other things being equal.

5 12 2005
Kenny

Oh, that’s true of course – but I suppose I just meant to say that this is why probabilistic primality testing is controversial, but probabilistically establishing the Goldbach Conjecture is just out of the question. (Though everyone believes it’s true, as far as I know.)
