Coding Truth and Satisfaction

23 07 2007

I just got back from a week visiting the Canada/USA Mathcamp, where I spent a few days teaching a class on Gödel’s theorems. I only got to the incompleteness theorems on the last day, when I introduced Gödel numbering, showed that some syntactic relations were definable, suggested that provability was therefore definable, and showed that truth (in the standard interpretation) is not definable. Thus, I didn’t get to anything like the full incompleteness theorems, but just showed that there is no recursive set of axioms such that truth in the standard interpretation corresponds to provability from those axioms.

The thing about showing that truth is not definable is that you normally go through satisfaction instead. Given the fact that syntactic relations are definable, we can define functions NUMERAL(z) (which returns the code number for the numeral expressing z) and SUBST(x,y) (which takes x as the code of a formula with one free variable, and substitutes in the term that y is the code of, and returns the code of the resulting sentence). Then SAT(x,y) (which says that x is the code of a formula satisfied by number y) can be defined in terms of TRUE(x) by TRUE(SUBST(x,NUMERAL(y))).
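
To make the bookkeeping concrete, here is a minimal Python sketch in which formulas are just strings and a formula’s “code” is the string itself – this illustrates what NUMERAL and SUBST do, not their arithmetical definability (the names numeral and subst below are mine):

    # Toy coding: formulas are strings, and a string serves as its own code.
    def numeral(z):
        """Return the numeral denoting z: 0, S(0), S(S(0)), ..."""
        term = "0"
        for _ in range(z):
            term = "S(" + term + ")"
        return term

    def subst(x, y):
        """x codes a formula with free variable v; substitute the term coded by y."""
        return x.replace("v", y)

    # If TRUE(x) were definable, SAT(x, y) would be definable too, as
    #   SAT(x, y) := TRUE(subst(x, numeral(y)))
    print(subst("v + v = S(S(0))", numeral(1)))  # prints: S(0) + S(0) = S(S(0))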

Then, it’s fairly straightforward to show that SAT(x,y) is not definable. If it were, then we could define \lnot SAT(x,x), which is a formula with one free variable, and so has some code number n. On one hand, \lnot SAT(n,n) is just the result of substituting the numeral for n into \lnot SAT(x,x), so \lnot SAT(n,n) is true iff n satisfies \lnot SAT(x,x). On the other hand, by the intended meaning of SAT, n satisfies \lnot SAT(x,x) iff SAT(n,n) is true. Putting these together, \lnot SAT(n,n) is true iff SAT(n,n) is, which is a contradiction. Therefore, SAT(x,y) is not definable.

Afterwards, someone pointed out that this proof of the undefinability of SAT(x,y) doesn’t make any assumption about how the coding works, which seems somewhat surprising. After all, I could take highly non-effective codings, and the undefinability of SAT would still hold. This is surprising, because TRUE(x) can be defined under certain encodings. One such encoding is just to separate the sentences into true and false ones, and code them respectively by even and odd numbers, using some sort of lexicographic ordering. Then, since being even and being odd are both definable, truth would be definable as well. But it turns out that no trick like this is going to let satisfaction be definable, because of the diagonalization argument I gave above.
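
For instance, here is a sketch of how such a coding makes truth definable. List the true sentences in lexicographic order as t_0, t_1, t_2, …, and the false ones as f_0, f_1, f_2, …, and let the code of t_k be 2k and the code of f_k be 2k+1. Then
TRUE(x) iff exists k (x = k + k),
which is certainly arithmetically definable, even though the coding itself is wildly non-effective (constructing it would require already knowing which sentences are true).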

My guess as to why is that satisfaction requires something like a coding of some of the syntactic structure as well as some of the semantic structure. Standard codings make the syntax definable, but then the semantics fails. Certain bizarre codings let the semantics be definable, but they make the syntax fail. But since the syntax and semantics aren’t recursive in terms of each other, there’s no way to make them both recursive, the way satisfaction seems to require. (Of course, you could probably choose some awful coding that makes both sets of relations undefinable, but why would you want to do that?)

Anyway, going back to camp is always great for precisely this sort of reason – I get to try to teach something new and interesting that I haven’t taught before, and people ask questions that also help me understand the material in a new way.





Five Days of Formal Philosophy, and Uniform Solutions

22 05 2007

I just finished quite a streak of formal talks in philosophy. From Thursday night until Sunday, I (like Marc Moffett) was in Vancouver for the Society for Exact Philosophy, which was quite a fun little conference with a lot of interesting talks. Then on Monday, those of us at Berkeley working with Branden Fitelson got together with the people at Stanford working with Johan van Benthem for an informal workshop, which like last year had a lot of talks on probability from the Berkeley side and dynamic epistemic logic from the Stanford side, and again helped reveal a lot of interesting connections between these two rather different formal approaches to epistemological questions. And then today we had our quasi-monthly formal epistemology reading group meeting at Berkeley, with Jonathan Vogel from UC Davis.

There was a lot of interesting stuff discussed at all these places, but I’m glad there’s a bit of a break before FEW 2007 in Pittsburgh. Anyway, it’s also very nice to know that there is all this work going on relating formal and traditional issues, both in epistemology and other areas of philosophy.

Anyway, among the many interesting talks, the one that’s got me thinking the most about things I wasn’t already thinking about was the one by Mark Colyvan, on what he calls the “principle of uniform solution”. The basic idea is that if two paradoxes are “basically the same”, then the correct resolution to both should also be “basically the same”. So for instance, it would be very strange for someone to claim that the correct approach to Curry’s Paradox is that certain types of circularity make sentences ill-formed, while the correct approach to the Liar Paradox is to adopt a paraconsistent logic. Mark pointed out that there are some problems with properly formulating the principle though – do we decide when paradoxes are “basically the same” in terms of their formal properties, the sorts of solutions they respond to, or the role they’ve played in various arguments? For instance, Yablo’s Paradox was explicitly introduced in order to point out that self-reference is not the key issue in the Liar, Curry, and Russell paradoxes – which suggests either that the relevant formal property they share is something else, or that the proper way to think of paradoxes is something else.

In hearing this, I started to wonder just why we should believe anything like this principle of uniform solution anyway. The strongest case of the relevant form of argument seems to me to be the appeal in Tim Williamson’s Knowledge and its Limits to various different forms of the Surprise Examination Paradox – he points out that some traditional resolutions only solve the most traditional version, but that a slightly modified version gets through, and that his proposed solution to that version blocks the traditional version as well. Since both cases seem problematic, and one “covers” the other, it seems that we only need to worry about solving the covering case. I take it that something like this is at work as well when Graham Priest uses the Liar paradox to argue for dialetheism, and then suggests a return to Frege’s inconsistent axiomatization of mathematics rather than using the much more complex system of ZFC.

If this is the form of argument, then we shouldn’t always expect the principle of uniform solution to be worth following. If I (like most philosophers who don’t work directly on this sort of stuff) think that something like ZFC is the right approach to Russell’s Paradox, and something like Tarski’s syntactically typed notion of truth is the right approach to the Liar Paradox, then both get solved, but neither approach would work for both. Their formal similarities are interesting, but there’s no reason they should have the same solution, since there isn’t an obvious solution that works for both (unless you go for something as extreme as Priest’s approach). Formal or other similarities in paradoxes often help show that resolving one will automatically resolve the other, so that the above argument will work, but there’s no reason to think that this will always (or even normally) be the case.

But at the same time, something like this principle seems to work much more generally than in the case of paradoxes. There are certain similarities between the notion of objective chance, and the notion of subjective uncertainty, so it makes sense that we use a single mathematical formalism (probability theory) to address both. Alan Hájek has suggested that these analogies continue even to the case of conditionalizing on events of probability zero, though I think that this case isn’t as strong. (Though that might just be because I’m skeptical about objective chances.) There’s a general heuristic here, that similar issues should be dealt with similarly. In some sense, it seems very natural to suggest that differences in approaches to different issues should somehow line up with the differences between the issues. But we don’t expect it to always work out terribly nicely.

Anyway, there’s a lot of interesting methodological stuff here to think about, for paradoxes in particular, and for philosophy in general (as well as mathematics and the sciences).





Etchemendy on Consequence

15 05 2006

I’m really not totally sure what to make of Etchemendy’s objections to Tarski’s account of consequence in The Concept of Logical Consequence – perhaps I shouldn’t admit that while I’m TAing a class that covers this book, and grading papers about it. In general, the particular points he makes seem largely right, but I’m not really sure how they add up to the supposed conclusion. I suppose this all means that at some point I should put in some more serious study of the book. But the middle of exam week may not be the time for that.

His objections basically seem to be that Tarski’s account is either extensionally inadequate (if the universe is finite) or adequate for purely coincidental reasons; and that it doesn’t seem to be able to guarantee the right modal and epistemic features.

The worry about finiteness runs as follows – Tarski says that a sentence is a logical truth iff there is no model in which it is false. If there are only finitely many objects (including sets and models and the like), then every model has a domain that is a subset of this finite set, so there is some finite size n that no model achieves. Thus, any sentence that says there are at most n objects must come out logically true. However, intuitively, this is just an empirical matter, and not one that is up to logic alone, so the account must be wrong. Even worse, sentences of this sort can be expressed in a language that doesn’t even involve quantifiers or identity, so we can’t blame this on some sort of error in identifying logical constants. (One might try to sneak out of this objection by pointing out that the sentences involved always have more than n symbols – so every sentence that would exist in such a case would get the “correct” logical truth-value. However, there are sentences that are true in every finite model, but not in all models, and these raise a similar problem.)
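
To make that parenthetical concrete, here is a standard example of such a sentence. Let
φ = forall x (\lnot R(x,x)) and forall x forall y forall z ((R(x,y) and R(y,z)) implies R(x,z)) and forall x exists y R(x,y),
saying that R is irreflexive, transitive, and serial. Any model of φ must be infinite (following R-successors in a finite domain eventually revisits an element, and transitivity then gives R(a,a) for some a, contradicting irreflexivity), while the natural numbers with R as “less than” satisfy φ. So \lnot φ is true in every finite model but not in all models, and on the finite-universe hypothesis it would wrongly come out logically true.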

However, I think this isn’t really a terrible worry – Tarski’s account of consequence (like his account of truth) makes essential use of quantification over sets. Thus, anyone who’s even prepared to consider it as an account of consequence must be making use of some sort of set theory. But just about every set theory that has been proposed guarantees the existence of infinitely many objects (of some sort or another), so we don’t need to worry about finiteness. Etchemendy suggests that this is putting the cart before the horse, in making logical consequence depend on something distinctly non-logical. But perhaps this isn’t really bad – after all, Tarski didn’t claim that his definition was logically the same as the notion of consequence, but rather that it was conceptually the same. Just because the truth of set theory isn’t logical doesn’t mean that set theory isn’t a conceptual truth – and if some set of axioms guaranteeing the existence of infinitely many objects is conceptually necessary (as neo-logicists seem to claim), then Tarski’s account could be extensionally adequate as a matter of conceptual necessity, even if not of logical necessity.

As for the requisite epistemic and modal features, there might be a bit more worry here. After all, nothing about models seems to directly indicate anything modal or epistemic. However, it does seem eminently plausible that every way things (logically) could have been would be represented by some model. In fact, we can basically prove this result for finite possible worlds using an extremely weak set theory (only an empty set, pairing, and union are needed). It seems likely that the same would obtain among the actual sets for infinite possible worlds as well. However, ZFC doesn’t seem to provide any natural way of extending this to all possible worlds – in fact, ZFC can prove that there is no model that correctly represents the actual world, because there are too many things to form a set! Fortunately, this problem doesn’t seem to arise for other set theories, like Quine’s NF, and certainly not for a Russellian type theory. And even in ZFC, the fact that Gödel could prove his completeness theorem provides some guide – any syntactically consistent set of sentences has a model, so that even if there is no model representing a particular logically possible world, there is at least a model satisfying exactly the same sentences, so that logical consequence judgements all come out right. But that’s a bit unsatisfying, seeing as it makes the semantic notion of consequence depend on the syntactic one.
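
To sketch the construction for finite worlds alluded to above (the standard one, via the von Neumann numerals): a domain of n objects can be represented by the first n numerals, each built from the empty set using only pairing and union, via
0 = ∅ and k+1 = k ∪ {k},
where {k} comes from pairing k with itself and k ∪ {k} from taking the union of the pair {k, {k}}. Relations and functions on such a domain are then finite sets of ordered pairs (a,b) = {{a},{a,b}}, again available from pairing and union, so every finite possible world gets a set-sized representative.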

At any rate, it seems available to classical logicians to suggest that it is a matter of conceptual necessity that every way things could logically have been is adequately represented by the sets – and thus that Tarski’s account is correct and Etchemendy’s criticisms inconclusive. I’m pretty sure something like this has to be right.

Of course, the non-realist about mathematics has to give a different account of consequence (as Hartry Field tries to do starting with “On Conservativeness and Incompleteness”) – but this will just be part of paraphrasing away all the many uses we have for set theory. This one is remarkably central (especially given that linguists now suggest that something like it is at the root of all natural language semantics) and so it will be the important test case in a reduction. But the criticism will be substantially different from Etchemendy’s – before the paraphrase, the non-realist can still make the same arguments I’m suggesting above.

(Of course, if Etchemendy’s criticisms are right, they could themselves form the starting point for a useful dispensability argument for mathematical non-realism – if we don’t need sets for consequence, then the strongest indispensability consideration is gone, and we’re just left with the physical sciences, all of which seem to require something much weaker than full set theory.)





Thoughts, Words, Objects

17 04 2006

I just got back yesterday from the University of Texas Graduate Philosophy Conference, which was a lot of fun. In fact, I think it was the most fun I’ve had at a conference other than FEW last year, which coincidentally was also in Austin – maybe it’s just a fun town! At any rate, there were a lot of very good papers, and I got a lot of good ideas from the discussion after each one as well. Even the papers about mind-body supervenience and Aristotelian substance (which aren’t issues I’m normally terribly interested in) made important use of logical and mathematical arguments that kept me interested. And the fact that both keynote speakers and several Texas faculty were sitting in on most of the sessions helped foster a very collegial mood. I’d like to thank the organizers for putting on such a good show, and especially Tim Pickavance for making everything run so smoothly, and Aidan McGlynn for being a good commentator on my paper (and distracting Jason Stanley from responding to my criticism!).

Because the theme was “thoughts, words, objects”, most of the papers were about language and metaphysics, and perhaps about their relation. There seems to be a methodological stance expressed in some of the papers, with some degree of explicitness in the talks by Jason Stanley and Josh Dever, that I generally find quite congenial but others might find a bit out there. I’ll just state my version of what’s going on, because I’m sure there are disagreements about the right way to phrase this, and I certainly don’t pretend to be stating how anyone else thinks of what’s going on.

But basically, when Quine brought back metaphysics, it was with the understanding that it wouldn’t be this free-floating discipline that it had become with some of the excesses of 19th century philosophy and medieval theology – no counting angels on the heads of pins. Instead, we should work in conjunction with the sciences to establish our best theory of the world, accounting for our experiences and the like and giving good explanations of them. And at the end, if our theories are expressed in the right language (first-order logic), we can just read our ontological commitments off of this theory, and that’s the way we get our best theory of metaphysics. There is no uniquely metaphysical way of figuring things out, apart from the general scientific and philosophical project of understanding the world.

More recently, it’s become clear that much of our work just won’t come already phrased in first-order logic, so the commitments might not be transparent on the surface. However, the growth of formal semantics since Montague (building on the bits that Frege and Tarski had already put together) led linguists and philosophers to develop much more sophisticated accounts that can give the apparent truth-conditions for sentences of more complicated languages than first-order logic, like say, ordinary English. Armed with these truth-conditions from the project of semantics, and the truth-values for these statements that we gather from the general scientific project, we can then figure out just what we’re committed to metaphysically being the case.

Of course, science is often done in much more regimented and formalized languages, so that less semantic work needs to be done to find its commitments, which is why Quine didn’t necessarily see the need to do formal semantics. Not to mention that no one else had really done anything like modern formal semantics for more than a very weak fragment of any natural language, so that the very idea of such a project might well have been foreign to what he was proposing in “On What There Is”.

In addition to the obvious worry that this seems to do so much metaphysical work with so little metaphysical effort, there are more Quinean worries one might have about this project. For one thing, it seems odd that formal semantics, alone among the sciences (or “sciences”, depending on how one sees things), gets a special role to play. On a properly Quinean view there should be no such clear seams. And I wonder if the two projects can really be separated in such a clear way – it seems very plausible to me that what one wants to say about semantics might well be related to what one wants to say about ordinary object-language facts of the matter, especially in disciplines like psychology, mathematics, and epistemology.

In discussion this afternoon, John MacFarlane pointed out to me that this sort of project has clear antecedents in Davidson, when he talks about the logical structure of action sentences and introduces the semantic tool of quantifying over events. This surprises me, because I always think of Davidson as doing semantics backwards from how I want to do it, but maybe I’ve been reading him wrong.

At any rate, thinking about things this way definitely renews my interest in the problems around various forms of context-sensitivity. The excellent comments by Julie Hunter on Elia Zardini’s paper helped clarify what some of the issues around MacFarlane-style relativism really are. Jason Stanley had been trying to convince me of some problems that threaten to make it not make sense, but she expressed them in a way that I could understand, though I still can’t adequately convey them. It seems to be something about not being able to make proper sense of “truth” as a relation rather than a predicate, except within a formal system. Which is why it seems that MacFarlane has emphasized the role of norms of assertion and retraction rather than mere truth-conditions, and why he started talking about “accuracy” rather than “truth” in the session in Portland.

Anyway, lots of interesting stuff! Back to regularly-scheduled content about mathematics and probability shortly…





Quantifying into Sentence Position

23 02 2006

In his “Concept of Truth in Formalized Languages”, Tarski considers an alternative truth-definition that involves sentence-position variables inside quotes. In what he calls a “formally correct” truth definition, we would have a condition of the form “forall x (x is true iff …)”. “x” here is a variable that ranges over mentioned sentences, and “…” should be filled in with our definition. The attempt under consideration is
forall x (x is true iff exists σ (x = “σ” and σ)).
Here, “σ” is a variable that is supposed to range over used sentences, rather than mentioned sentences. We will say that x is true if it is a sentence that can be used to mean σ, so we need x to be an expression for σ, which is why we need to say “x = “σ””.

However weird sentence-position quantification might be, the worse problem here is that we have to quantify into quotation marks. Note that the quoted letter sigma appears inside the truth-definition in a position where we have to quantify into it, but in my first sentence after that definition, I used that same expression to name the variable, not to give an expression with a free variable inside it naming a sentence. That usage is what we would expect given ordinary rules for using quotation marks, but Tarski considers what would happen if we allowed for this unusual usage (which would make it tough to talk about the language) and shows that we can get versions of the liar paradox, which would undermine the whole goal of trying to define truth.

However, I’ve got another reason to think that we shouldn’t have quantification into quotes that behaves the way we want it to in the attempted truth-definition – instead it should behave the way it does in my first sentence after the definition. The reason is mainly that we often want to have distinct object-language sentences with the same truth-conditions (or perhaps even the same meanings). As a result, the range of values for the sentence-position variable will have to come with both intensional and extensional information. That is, to be used in sentence position, a value will have to carry at least the extensional information of the truth-conditions; but in order to get different values for quote-sigma given extensionally equivalent values of sigma, it will need to somehow carry intensional information as well. Now, this is possible, but somewhat unwieldy.

In addition, if sigma is a variable that appears in the object language as well as in the metalanguage, then we’ll have to have a different procedure to indicate that we mean to refer to that variable, rather than have a free sentence-position variable inside quotation marks. This is also possible – in the LaTeX markup language, one can do this for special characters by putting a backslash in front of them; in some other languages one doubles up the special character, or uses some other way to “escape” it. Of course, if one wants to have that expression in quotes, rather than just the letter sigma, then one needs a further set of commands to escape the relevant characters. It’s possible, but it involves replacing a lot of the standard names for certain symbols in the language inside quotes.
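
The nesting problem is familiar from programming languages. Here is a quick Python illustration (my own example, not Tarski’s) of how each further layer of quotation multiplies the escapes:

    s = 'He said "hi"'                          # quote marks inside a string
    t = "He said \"hi\""                        # the same string, with escaped quotes
    u = "Quoting t: \"He said \\\"hi\\\"\""     # quoting the quoted string doubles the escapes
    print(s)  # He said "hi"
    print(t)  # He said "hi"
    print(u)  # Quoting t: "He said \"hi\""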

So we can reconsider why we wanted to be able to quantify into quotes to begin with. The reason was so that we can have one position that names a sentence while another position uses the same sentence, with the sentence being quantified over. Since every sentence has exactly one meaning (or set of truth-conditions), while truth-conditions are in general shared by multiple sentences, it seems most natural for our metalanguage function to go the other way. Instead of going from use to mention, it should go from mention to use, because that function will be well-defined – multiple sentences correspond to the same truth-conditions, but not vice versa. Thus, we should be able to express our truth-definition roughly as the following:
forall x (x is true iff exists S (x = S and F(S))),
where “F” is the function that gets us from mention to use – the direction of semantic descent. That is, “F(S)” corresponds to “σ” and “S” corresponds to ““σ””. We don’t have to worry here about any collision between object-language and metalanguage variables, so I think this proposal is overall more natural.

But we can see that this definition is equivalent to
forall x (x is true iff F(x)),
which means that “F” just is the truth-predicate. I think this is why natural language has a truth-predicate rather than a quote-quantifying sentence-position variable. They can express all the same things, but one is more convenient than the other. Semantic descent is easier than semantic ascent, so the truth-predicate is the function that we have built into our language.

As a result, we have to go to more work to define truth, but Tarski has shown us that this is generally possible, as long as we don’t mind the problems Field points out, namely that the definition is non-systematic and non-explanatory.





Logic in Mathematics, Philosophy, and Computer Science

7 08 2005

In discussion with Jon Cohen in the past few weeks, I’ve realized a bit more explicitly the different ways logic seems to be treated in the three different disciplines that it naturally forms a part of. I intended to post this note as a comment on his recent post, but for some reason the combination of old operating system and browser on the computer I’m using here wouldn’t let me do it. So I’m making it my own post.

The thing that has struck me most in my experience with doing logic in both mathematics and philosophy departments is that in mathematics, “logic” is seen as just being classical propositional or first-order logic, while in philosophy a wide range of other logics are discussed. The most notable example is modal logic of various sorts, though intuitionist logic and various relevance logics and paraconsistent logics are also debated in some circles. But in talking to Jon I’ve realized that there are far more logics out there that very few philosophers are even aware of, like linear logics, non-commutative logics, and various logics removing structural rules like weakening, contraction, or exchange (which together basically allow one to treat the premises as a set, rather than a multiset or sequence). In his sketch of history, it seems mathematicians are stuck in the 1930s, and philosophers are stuck in the early 1980s, in terms of what sorts of systems they admit as a logic. Of course, all three disciplines have developed large amounts of logical material relating to their chosen systems.
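
For readers who haven’t seen them, the three structural rules, in sequent-calculus form, are the following:
Weakening: from Γ ⊢ C, infer Γ, A ⊢ C.
Contraction: from Γ, A, A ⊢ C, infer Γ, A ⊢ C.
Exchange: from Γ, A, B, Δ ⊢ C, infer Γ, B, A, Δ ⊢ C.
With all three, the premises behave like a set; dropping contraction and weakening (as linear logic does) makes them a multiset, and dropping exchange as well makes them a sequence.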

The reason for these divisions seems to be a disagreement as to what a logic is. Mathematicians just want to formalize mathematical reasoning in a sense, and so have fixed on classical logic, as it seems to best capture the work that mathematicians find acceptable and necessary. Philosophers, on the other hand, have debates about whether classical logic, intuitionist logic, some sort of relevance logic, or some other logic is the “one true logic” (or one of the several true logics, as advocated by Greg Restall and JC Beall). Although computer scientists study even more types of logic, they don’t seem to argue about which is an appropriate logic for doing their reasoning in – from what I understand, they do all their metareasoning in classical logic (or some approximation thereof). The various systems are studied to gain structural insights, and to model the capacities of various computational systems, but not to talk about truth per se.

Does this sound about right?





Azzouni on Deflation

27 07 2005

Now that I’ve finished Maddy’s Naturalism in Mathematics, I’ve started reading Jody Azzouni’s recent book, Deflating Existential Consequence, which apparently tries to argue that although existential claims about mathematical entities (and many other entities) may be indispensable to our best scientific theories, this doesn’t mean we’re “ontologically committed” to them. I suppose I’ll get into that stuff later, but for now I’m reading his early chapters about truth.

He suggests that we must be deflationists about truth in order to use the truth predicate the way we do. One of the important uses, he suggests, is “blind ascription”, which is when I say something like “What Mary said is true”, rather than actually exhibiting a sentence in quotation marks followed by “is true”. It’s clear that we have reason to engage in blind ascription of truth in our scientific theorizing, either when we talk about the consequences of an infinite theory (or at least an unwieldy and large one), or when we use a simplified version of a theory to make a calculation (like replacing “sin θ” by “θ” in calculating the period of oscillation of a pendulum) and suggest “something in the vicinity of this result is true”. In order for blind ascription to work, he suggests, we need to have a theory of truth that endorses every Tarski biconditional of the form “‘__’ is true iff __”. But he suggests that only the deflationist about truth can really do this.
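
To spell out the pendulum example (the standard small-angle computation, not anything specific to Azzouni): for a pendulum of length L, the exact equation of motion for the angle θ is
θ′′ = −(g/L) sin θ,
and replacing sin θ with θ gives the linear equation θ′′ = −(g/L) θ, whose period is T = 2π√(L/g). Since the true period depends on the amplitude, what we assert is precisely that something in the vicinity of this result is true.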

The problem is that any supplementation of deflationist (Tarski-biconditional governed) truth faces a dilemma. Either it falsifies a Tarski biconditional (and so proves unfit for blind ascription), or it fails to be a genuine supplementation of the deflationist notion of truth.

As an example, he considers the requirement one might have that a certain type of compositionality holds. That is, “snow is white” is true just in case there is some object that “snow” refers to and a predicate referred to by “white” that applies to that object. If this requirement goes beyond the requirement of the biconditional, then such a compositional notion of truth will be unfit for blind ascription. But if it doesn’t, then he says that this requirement is “toothless”, and doesn’t get us a notion of truth any different from the deflationist.

This latter seems to me to be wrong though. Both Davidson (in “Truth and Meaning”) and Field (in “Tarski’s Theory of Truth”) apply such a requirement to the truth predicate. While Davidson seems to be happy to take a deflationist account of truth, and then use the compositionality requirement to explicate the notions of reference and meaning, Field seems to do something different. Field (at least at the time of that paper, and probably into the mid-80’s) wanted a physicalist explanation of the notions of reference and meaning for individual words, and then used the notion of compositionality to define truth. Then, using the Tarski biconditionals, we can understand just what our ordinary sentences commit us to, and we have used truth as a step in the understanding of language, rather than using understanding language as a step in explaining reference as Davidson wanted.

To see that Field’s notion of truth in this case isn’t just deflationary, I point out a usage I believe Field mentions in a much later paper (“Correspondence Truth, Disquotational Truth, and Deflationism”, from 1986). This is the example that convinced me not to be a deflationist about truth, though ironically I hear that Field became one very soon afterwards. But basically, for the deflationist, the Tarski biconditionals are constitutive of the meaning of the truth predicate, so “‘Grass is white’ is true” has just the same content as “Grass is white”. Similarly, “‘Grass is white’ might have been true” is the same as “Grass might have been white”. To see the problem, we note that as a result, “If we had used our words differently, ‘Grass is white’ might have been true” comes out the same as “If we had used our words differently, grass might have been white”. But intuitively, the former is correct and the latter not, so the deflationist must be wrong. I think the compositional account of truth gets this right, and the biconditionals are then useful only to understand how language works and to establish the practice of blind ascription.
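
Schematically, writing W(…) for “if we had used our words differently, it might have been the case that …”, the deflationist takes
“Grass is white” is true
to have the very same content as
grass is white,
so embedding both under W should preserve sameness of content:
W(“Grass is white” is true) should have the same content as W(grass is white).
But the former is plausibly correct (had “white” meant green, the sentence would have been true), while the latter is not (no change in our words would have changed the color of grass), so the two contents must differ after all.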

So I think on this point, Azzouni is not quite right, and we can have a non-deflationary position that asserts all the same biconditionals, and is thus fit for blind ascription, and thus for science. In fact, I think there’s reason to believe that this is the right sort of truth theory, though of course that’s quite hard to argue. He has a footnote that points out what I take to be the Davidsonian position, but I think he might miss the Fieldian position. Unless I’m wrong about what this footnote is supposed to mean.