I was browsing in a bookstore today and saw a copy of The Vastness of Natural Languages, by Langendoen and Postal (1984), which had been mentioned to me last semester by my semantics professor as a controversial attempt to show natural languages are infinite. I had wondered then why that claim was controversial, but on looking at the book, I can see why – they claim that any actual natural language has not just countably many sentences, but actually a proper class! That is, there are more sentences of English than any finite or transfinite cardinality! I only read the chapter arguing for this conclusion, and skimmed a bit of the chapter discussing some of the implications. But I think that their arguments are possibly somewhat careless (for instance, they would prove too much if applied to domains other than natural language), and that even if their conclusion is correct, it might not be relevant. But it makes some interesting connections between the traditional philosophy of mathematics and linguistics.
They first look at the traditional arguments that there is no finite upper bound on the length of sentences in any natural language. They say that the standard argument (which they attribute to Chomsky, among others, and to some earlier work by Postal) is just unsound. The argument says that if S is any grammatical sentence of English, then “I know that S” is a grammatical sentence. Similar versions say that a word like “very” can obviously be repeated any number of times, or that two sentences can always be conjoined to produce a longer one. However, they point out (I think rightly) that the argument begs the question against someone who presupposes that there is some finite limit to the length of sentences in a language. We say the resulting longer sentences are grammatical because they are constructed by grammaticality-preserving rules, but the evidence that these rules are grammaticality-preserving should itself be based on judgements of the grammaticality of individual sentences. So we would have to recognize these extremely long sentences as grammatical before accepting the rules as grammaticality-preserving, while the opponent has already claimed that these sentences are too long to be grammatical; by their lights the rules are not grammaticality-preserving, even though they appear to be (and are, for shorter sentences). (They also point out the analogy between this argument and Dedekind’s argument for the existence of infinitely many concepts, or sets, or objects of thought, where he assumes that for any thought T, there is a distinct thought, “T is a thought”. They note that Dedekind’s set is exactly the sort that gets one into the semantic paradoxes.)
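Just to make the shape of the standard argument concrete, here is a toy sketch (my own illustration, not anything from the book) of the embedding rule S → “I know that S”. Each application adds a fixed number of words, so if the rule really preserves grammaticality, sentence length is unbounded; of course, that conditional is exactly what the opponent denies.

```python
# Toy illustration (mine, not the book's) of the putative
# grammaticality-preserving rule: S -> "I know that S".

def embed(sentence, n):
    """Apply the embedding rule n times to a seed sentence."""
    for _ in range(n):
        sentence = "I know that " + sentence
    return sentence

print(embed("snow is white", 2))
# -> "I know that I know that snow is white"

# Each application adds exactly three words, so lengths grow without bound:
print(len(embed("snow is white", 100).split()))  # 303 words
```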
They also point out what they say is the only non-question-begging argument that natural languages have no finite bound on the length of sentences, which they attribute to Jerrold Katz. This argument depends more specifically on some of the methodology of linguistics. When analyzing the grammatical structure of some language L, we always work with some (necessarily finite) set of sentences that have been judged to be grammatical by a native speaker, which they call “IB(L)”, as an inductive basis. We attempt to find the rules of L by observing regularities in IB(L), but we don’t want to generalize accidental features of IB(L). For instance, we generally don’t want to say that to be grammatical in L just is to be a member of IB(L) – that would be too accidental and tied to the particular set of data we gathered. Similarly, since the set of sentences is finite, there will in fact be some greatest length k of any sentence of IB(L). But to say that every sentence of L must be at most as long as k would also seem to tie features of L too closely to accidental features of IB(L). And to choose any other length as the upper bound would be arbitrary – there is no clear reason to choose one natural number over another as the bound. So there should be no rule of the grammar of L directly specifying the maximum length of a sentence of L.
One might try to suggest that people will never be able to remember or understand a sentence that is a million words in length. However, this ties the language too much to contingent “performance limitations” of speakers. I’m sure there’s a decent amount of controversy around the relevance of performance limitations, but I’m willing to concede to them the claim that to get a good theory of a natural language, one should abstract away from the performance capacities of its actual speakers.
They point out that nothing in this argument depends on the putative size limitation on sentences being finite. Thus, the only good argument that there is no in-principle limit on the length of sentences of English also shows (according to them) that there are sentences of English of arbitrary transfinite length, as well as arbitrary finite length. Since there is then a proper class of possible lengths, English must contain a proper class of grammatically well-formed sentences, as they claim.
I think their argument moves too fast. Just because we don’t want to generalize a size limitation that we find in some particular inductive basis IB(L) doesn’t mean that we can’t get any principled size limitation. We might instead discover that the other syntactic rules we extract from the data jointly constrain the possible lengths of sentences. For instance, in a propositional logic with only binary connectives, every sentence of the language must have an odd number of symbols – even if we don’t make this generalization from observing some IB(L) where every sentence has an odd number of symbols, we will still arrive at it by noticing that the only well-formed sentences are either single-symbol sentences, or of the form (A^B), where A and B are well-formed, or similarly for other connectives. (In fact, these rules generate the constraint that every sentence has a length congruent to 1 mod 4.) If a language is complicated enough, the rules might well naturally generate some constraint ensuring that sentences can never be more than k symbols long – even if k is not itself the upper bound of the lengths of strings in IB(L).
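The mod-4 claim about the toy propositional language can be checked mechanically. Here is a small sketch (mine, with illustrative names) that enumerates the attainable sentence lengths up to a few levels of nesting: atomic sentences have length 1, and wrapping two sentences as (A^B) adds three symbols (two parentheses and a connective).

```python
# Sketch: enumerate possible sentence lengths in a toy propositional
# language whose only sentences are single symbols or (A^B).

def lengths(max_depth):
    """Return the set of sentence lengths attainable with up to
    max_depth applications of the binary-connective rule."""
    current = {1}  # atomic sentences: one symbol
    for _ in range(max_depth):
        # (A^B) adds 3 symbols: two parentheses plus the connective
        current |= {a + b + 3 for a in current for b in current}
    return current

print(sorted(lengths(3)))
# -> [1, 5, 9, 13, 17, 21, 25, 29]
print(all(n % 4 == 1 for n in lengths(3)))
# -> True: every attainable length is congruent to 1 mod 4
```

Inductively: 1 ≡ 1 (mod 4), and if a ≡ b ≡ 1 (mod 4), then a + b + 3 ≡ 5 ≡ 1 (mod 4), so the constraint holds at every depth.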
Of course, it doesn’t seem that the syntactic rules of English generate such a constraint. (We really do seem to be able to iterate “very” as many times as we want.) But another non-arbitrary constraint may arise. If we observe the frequency of sentences of various lengths in our IB(L), we might discover that sentences of length k occur in proportion to the square root of 1,000,000 - k². Then, even if the inductive basis happens not to contain any sentences of length greater than 900, we might suppose that 1000 is the maximal length of any sentence of the language. Perhaps more naturally, we might notice that sentences of length k occur in proportion to 2^(-k), which would lead us not to put any finite bound on the length of sentences. However, this would give us good reason to suppose that there are no sentences of any infinite length, since their occurrence would be proportional to 0.
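To illustrate the arithmetic of the first toy distribution: the hypothesized frequency, proportional to the square root of 1,000,000 - k², first vanishes at k = 1000, so that is the bound the fitted curve suggests even when no sampled sentence exceeds length 900. (This is my sketch of the toy example above, not real linguistic data.)

```python
import math

def hypothesized_freq(k):
    """Relative frequency of length-k sentences, proportional to
    sqrt(1,000,000 - k^2).  The functional form is the toy example
    from the text, not anything empirical."""
    return math.sqrt(max(1_000_000 - k**2, 0))

# Even if the longest sampled sentence has length 900, the curve
# first hits zero at k = 1000, which we might take as the
# principled maximum sentence length.
inferred_bound = next(k for k in range(1, 2000) if hypothesized_freq(k) == 0)
print(inferred_bound)  # -> 1000
```

The geometric alternative, frequencies proportional to 2^(-k), never reaches zero at any finite k, which matches the text’s point: no finite bound, but weight 0 at any infinite length.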
Of course, this presupposes that IB(L) was chosen through some means of representative sampling, which it generally isn’t. (It’s generally invented entirely by the linguist, and then confirmed by a native speaker as containing only grammatical sentences. And actually, there’s normally a set, say IB*(L) of sentences judged ungrammatical by a native speaker, to constrain the theory from the other side. The fact that both sets are likely to be bounded by the same length k makes it even more clear that we shouldn’t just judge that anything beyond length k is ungrammatical.) And it gives more weight to the performance limitations of speakers than Postal and Langendoen might want.
I was struck in the course of reading their chapter that many of the arguments for a finite bound on the length of sentences in a natural language are like the arguments given by ultrafinitists for an upper bound on the size of the natural numbers that exist. (For instance, “no one could conceive of counting to a number as large as 2^(1,000,000)”, or concerns based on the total number of particles in the universe.) But if Langendoen and Postal are right, then I think their arguments should carry over to the case of natural numbers, which would seem to show that there is no (finite or transfinite) upper bound to the size of natural numbers!
Fine, let’s concede this and just say that all of Cantor’s ordinals are themselves “natural numbers” in the sense that all these transfinitely long strings are in fact “grammatical sentences of English”. There are still many important situations in which people want to work just with the standard natural numbers – and I think it’s even more clear that there are important situations in which people only care about the “finitary fragment” of these extremely vast “natural languages” that Postal and Langendoen want to talk about. One of the important consequences they list for their theory is that every traditional account of syntax is wrong, because such accounts all insist that the set of grammatical sentences is recursively enumerable, and thus countable. Yet if these theories aim not to account for the full “natural language” as Postal and Langendoen conceive it, but merely for the finitary fragment, then I don’t see how this criticism applies. Perhaps they’ll argue that such a restricted theory won’t be able to get at what’s psychologically real in our syntactic structures – but I’d like to see some evidence from their side that they get anything different without severely increasing the complexity of the rules for the finitary fragment. In addition, the techniques of modern proof theory and recursion theory show how to extend notions of computability into the transfinite – perhaps traditional accounts of syntax can similarly be extended into the transfinite, requiring only small changes to existing theories to account for all the new sentences Postal and Langendoen want to consider.
Unless they can defuse these criticisms, I don’t see what allowing transfinite sentences gets them. But it’s definitely an interesting place to try comparing arguments about ultrafinitism, finitism, constructivism, and platonism in mathematics with a similar set of positions in language!