Non-Factive Knowledge

28 04 2006

Bayesians have suggested that belief is not an all-or-nothing notion, but rather one that comes in degrees from 0 to 1 (which happen to obey the Kolmogorov axioms of probability, at least for rational agents). I’ve lately been wondering whether we can do the same with knowledge – on something like a justified true belief account, we can obviously grade knowledge based on the strength of the belief involved, but it could also be graded based on the level of justification. If instead we can figure out a way to grade knowledge directly, maybe we can get a more sophisticated account of knowledge out of this, rather than seeking the “fourth condition”. The most natural attempts would probably involve something like the tracking account given by Sherri Roush in Tracking Truth (which I embarrassingly still haven’t read yet).

Anyway, thinking about this, I was struck by this account of knowledge given by Roger Shuy, over at Language Log:

1. One believes it to be true.
2. One has good reason to believe it to be true.
3. There is a substantial probability that it is true.

It seems quite parallel to the JTB account (or BJT, in this ordering), except that he has weakened the truth condition quite a bit! I’ve sometimes thought that slightly relaxing the factivity condition on knowledge could make it fit much better with ordinary linguistic usage, but everyone tells me I’m crazy when I suggest this.

If we consider the correctness of knowledge attributions, rather than the obtaining of the actual state of having knowledge, then maybe this makes more sense – an agent A can judge an assertion that S knows P to be correct to degree D, where D=J*T*B, and J is S’s degree of justification for P, T is A’s subjective probability that P is true, and B is S’s subjective probability that P is true. Perhaps J and B should actually be modified to be A’s estimate of S’s degree of justification and subjective probability, rather than being the actual degree of justification and subjective probability.
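
To make the proposal concrete, here’s a minimal sketch in Python of the D = J*T*B grading scheme. The function name and the particular numbers are mine, chosen purely for illustration, and the assumption that all three components live on a 0-to-1 scale just follows the Bayesian setup above.

```python
# A minimal sketch of the D = J*T*B grading idea; numbers are made up for illustration.

def attribution_degree(justification, attributor_prob, subject_prob):
    """Degree to which attributor A should count "S knows P" as correct.

    justification   -- J: S's degree of justification for P
    attributor_prob -- T: A's subjective probability that P is true
    subject_prob    -- B: S's subjective probability that P is true (degree of belief)
    """
    return justification * attributor_prob * subject_prob

# Example: S is well justified and fairly confident, and A thinks P is very likely true.
print(attribution_degree(0.9, 0.95, 0.85))  # 0.72675
```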

Of course, this still doesn’t account for Gettier cases unless we understand “degree of justification” in some very strong way, and this might be totally crazy, but it’s just something I’m playing around with.





Fictionalist Abstraction Principles

24 04 2006

I’ve been in Paolo Mancosu’s seminar this semester going through John Burgess’ new book Fixing Frege on the various approximately Frege-like theories and how much of classical mathematics they can do. Of course, Frege’s original system could do all of it, but turned out to be inconsistent. Burgess’ book starts with the weakest (and most clearly consistent) systems, and moves on towards stronger and stronger systems that capture more of mathematics, but get closer towards contradiction.

Last week we were going through the first few sections that allow impredicative comprehension (that is, in this system, concepts can be defined by using formulas with quantifiers ranging over concepts – including the one that is being defined!). These systems supplement second-order logic with various “abstraction principles” adding new objects – that is, we add a function symbol to the language, define some equivalence relation on the type of entities that are in the domain of the function, and state that two outputs of the function are identical iff the inputs bear the relation to one another. Effectively, the “abstracts” are like equivalence classes under the relation.
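
In symbols (my notation, not a quotation from Burgess), an abstraction principle for an operator § over entities standing in an equivalence relation ∼ has the form:

```latex
% The general shape of an abstraction principle: the abstraction operator \S sends
% entities a, b to the same abstract exactly when they are equivalent under \sim.
\[
  \S(a) = \S(b) \;\leftrightarrow\; a \sim b
\]
```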

Two well-known abstraction principles are Frege’s Basic Law V, which assigns objects known as extensions to concepts, with coextensive concepts receiving the same extension (this is how he introduced the notion of a set); and what Boolos and others have called Hume’s Principle, which assigns cardinalities to concepts, with equinumerous concepts receiving the same cardinality. It turns out that Basic Law V is in fact inconsistent – a slightly impredicative comprehension principle for concepts gives us Russell’s Paradox. Hume’s Principle, on the other hand, is consistent – it turns out that Hume’s Principle plus any amount of (even impredicative) second-order logic is equiconsistent with the same amount of second-order Peano Arithmetic.
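
Written out (again in my own notation, with ext for the extension operator, # for the cardinality operator, and ≈ for equinumerosity), the two principles are instances of that schema:

```latex
% Basic Law V: coextensive concepts F and G receive the same extension.
\[
  \mathrm{ext}(F) = \mathrm{ext}(G) \;\leftrightarrow\; \forall x\,(Fx \leftrightarrow Gx)
\]
% Hume's Principle: equinumerous concepts receive the same cardinality, where
% F \approx G says some relation correlates the Fs one-to-one with the Gs.
\[
  \#F = \#G \;\leftrightarrow\; F \approx G
\]
```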

Crispin Wright, Bob Hale, and others have used this equiconsistency fact to try to motivate Hume’s Principle as a logical principle that guarantees the existence of numbers, and reduces arithmetic to a kind of logic. However, beyond worries about whether or not this is a sort of logical principle, Øystein Linnebo and others have pointed out that there is an important second kind of impredicativity in Hume’s Principle, and in most other abstraction principles that add new objects to the domain. Namely, the outputs of the abstraction function (cardinalities) are taken to be among the objects that must be quantified over to settle whether two concepts are equinumerous. Burgess points out that we can avoid this by taking the range of the abstraction function to be a new sort of entities beyond the objects and concepts in the original language. (This is in a sense a way to avoid Frege’s “Julius Caesar” problem of wondering whether Julius Caesar might be the number 3 – by stipulating that number and set abstracts get put in their own logical sort, we guarantee that none will be identical to any pre-existing object like Julius Caesar.)

He remarks on p. 158 (and proves a related result around p. 134) that any abstraction principle that is predicativized like this ends up being consistent! In fact, it’s a conservative extension of the original language, and is additionally expressively conservative, in that any sentence at all in the extended language can be proven equivalent to one phrased in the restricted language. The reason for this is that the only sentences in our new language that even mention the abstracts are identity claims among them (because all our other relations only apply to entities of pre-existing sorts), and these identity claims can be translated away in terms of the equivalence relation on the elements of the domain. (Incidentally here, I think if we add abstraction principles for every equivalence relation, each in a new sort, then we get what model theorists call Meq, which I think is an important object of study. Unless I’m misremembering things.) One nice historical point here is that it suggests that Frege’s concerns about the Julius Caesar problem were in fact quite important – the fact that he didn’t just stipulate the answer to be “no” is what allowed his system to become inconsistent.
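
As a rough illustration of how the translation goes (my own sketch of the idea, not Burgess’ exact clauses): since every abstract is §(a) for some entity a of the old sorts, quantifiers over the new sort can be traded for quantifiers over the old entities, and the only atomic claims about abstracts, identities, reduce to the equivalence relation:

```latex
% The key translation clause: an identity between abstracts becomes the
% corresponding equivalence claim back in the original language.
\[
  \S(a) = \S(b) \;\rightsquigarrow\; a \sim b
\]
```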

The problem with putting these abstracts into new sorts is that one of the major motivations for Hume’s Principle was to guarantee that there were infinitely many objects – once you’ve got 0, you can get 1 as the cardinality of the concept applying just to 0, and 2 as the cardinality of the concept applying just to 0 and 1, and 3 for the concept applying just to 0,1,2, and so on. This obviously can’t happen with a conservative extension, and in particular it’s because concepts can’t apply (or fail to apply) to entities in the new sort. So we can get a model with one object, two concepts, and two cardinalities, and that’s it. So it’s not very useful to the logicist, who wanted to get arithmetic out of logic alone.
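
Spelled out in standard neo-logicist notation (this is the usual Fregean bootstrap, not a quotation from Burgess), the construction that the move to a separate sort blocks runs roughly like this:

```latex
% The standard bootstrapping of the finite cardinals from Hume's Principle; once the
% cardinalities live in their own sort, # can no longer apply to concepts of them,
% so the sequence never gets started.
\begin{align*}
  0 &:= \#[x : x \neq x] \\
  1 &:= \#[x : x = 0] \\
  2 &:= \#[x : x = 0 \lor x = 1] \\
    &\;\;\vdots
\end{align*}
```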

However, it seems to me that a fictionalist like Hartry Field might be able to get more use out of this process. If the axioms about the physical objects guarantee that there are infinitely many of them (as Field’s axioms do, because he assumes for instance that between any two points of space there is another), then there will be concepts of every finite cardinality, and even some infinite ones. The fact that the extension talking about them is conservative does basically everything that Field needs the conservativity of mathematics to do (though he does need his more sophisticated physical theory to guarantee that one can do abstraction to get differentiable real-valued functions as some sort of abstraction extension as well). Of course, there’s the further problem that this process needs concepts to be the domain of some quantifiers even before the abstraction, and I believe Field really wants to be a nominalist, and therefore not quantify over concepts. But this position seems to get a lot of the work that Field’s nominalism does, much more easily, with only a slight compromise. To put it tendentiously, perhaps mathematical fictionalism is the best way to be a neo-logicist.





Church-Turing Theses

18 04 2006

On my way back from Austin, I visited my friend Bob McGrew in Palo Alto, and we caught up with each other a bit. At a certain point, when we were talking about our respective research, the subject of the Church-Turing Thesis came up (I don’t remember how, because it’s not directly related to what either of us does, though it’s not terribly far from either). In our discussion, we each clarified a version of the CT Thesis that the other hadn’t considered.

The basic idea is that the CT thesis is an analysis of the pre-theoretic notion of computability – in this version the thesis approximately states: A problem is intuitively computable iff it can be solved by a Turing machine. However, it’s easy to forget that this is a conceptual analysis claim (especially if one isn’t a philosopher, and so isn’t exposed to conceptual analyses all the time). There are only a few other equally widely accepted conceptual analyses – perhaps Tarski’s account of logical consequence (which has been disputed by John Etchemendy), and maybe one or two others. But in general, such analyses are highly contentious claims that are often disputed, the way Gettier disputed the justified true belief account of knowledge.

Because this particular analysis has been so successful, it’s easy for computer scientists to forget that it is one, and instead think the CT thesis is the following: A problem is physically computable iff it can be solved by a Turing machine. Now that we’re used to physical computers doing this sort of stuff all the time, it seems that this is what we’re talking about – but there was a notion of what an algorithm was, or an effective process, long before we had machines to carry them out. And people can still intuitively recognize whether a sequence of instructions is algorithmic without trying to reduce it to a physically programmable algorithm. However, CS texts very often say something like, “the CT thesis is a statement that is not amenable to proof or verification, but is just a matter of faith”. This statement doesn’t accurately describe this second thesis though – after all, it’s a matter of empirical physical investigation whether relativity or quantum mechanics allow for any processes that are not Turing-computable. Perhaps more relevantly, it may also be an empirical matter whether a physical device with the full strength of a Turing machine can be created! After all, a Turing machine needs an unbounded tape, while actual computers have finite memory space – and any device that can accept or reject only finitely many strings is automatically computable in just about every sense of the word (including senses much weaker than Turing-computability, like recognizability by a finite-state machine, and the like).
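
For concreteness, here’s a minimal sketch in Python (a toy example of my own, not anything from a CS text) of a Turing machine simulator with an unbounded tape, deciding the language {0^n 1^n}, which no finite-state machine can recognize (though any finite-memory device can handle the finitely many instances that fit in its memory):

```python
# A minimal sketch of a Turing machine with an unbounded tape, deciding {0^n 1^n}.
# The machine, state names, and step bound are all my own toy choices.

from collections import defaultdict

# Transition table: (state, symbol) -> (new state, symbol to write, head movement).
# Strategy: repeatedly cross off the leftmost 0 (as X) and the next matching 1 (as Y).
TRANSITIONS = {
    ("start", "0"): ("seek1", "X", +1),   # cross off a 0, go find a matching 1
    ("start", "Y"): ("check", "Y", +1),   # no 0s left: make sure only Ys remain
    ("start", "_"): ("accept", "_", 0),   # empty input is in the language
    ("seek1", "0"): ("seek1", "0", +1),
    ("seek1", "Y"): ("seek1", "Y", +1),
    ("seek1", "1"): ("back", "Y", -1),    # cross off the matching 1
    ("back", "0"): ("back", "0", -1),
    ("back", "Y"): ("back", "Y", -1),
    ("back", "X"): ("start", "X", +1),    # return to the leftmost uncrossed 0
    ("check", "Y"): ("check", "Y", +1),
    ("check", "_"): ("accept", "_", 0),
}

def run(input_string, max_steps=10000):
    # The defaultdict gives a tape that is blank ("_") everywhere beyond the input.
    tape = defaultdict(lambda: "_", enumerate(input_string))
    state, head = "start", 0
    for _ in range(max_steps):
        if state == "accept":
            return True
        key = (state, tape[head])
        if key not in TRANSITIONS:
            return False                  # no applicable transition: reject
        state, tape[head], move = TRANSITIONS[key]
        head += move
    return False                          # did not halt within the step bound

if __name__ == "__main__":
    for s in ["", "01", "0011", "000111", "0101", "001", "10"]:
        print(repr(s), run(s))
```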

At any rate, one might want to make the three-way identity claim: The pre-theoretic notion of computability, Turing’s account of computability, and physical computability all coincide.

What Bob pointed out to me was that many computer scientists actually subscribe to a claim that’s much stronger than these ones. Rather than focusing on the general notion of what is computable, it deals with the speed of the computation. Roughly speaking, this thesis says: Every physically realistic model of computation is polynomially reducible to every other one. I believe that this thesis has lately been threatened by the feasibility of quantum computation – although quantum computation preserves the absolute notion of computability, it apparently allows some problems (like factoring) that are in non-deterministic polynomial time (NP) but widely believed to lie outside deterministic polynomial time (P) to be solved in polynomial time. I suppose if P=NP turns out to be true, then this doesn’t damage the thesis, but since most people believe that it’s false, this seems to be evidence against it.

However, there is a slightly weaker version available, still much stronger than the earlier ones, saying: Every mathematical model of computation from a certain list is polynomially reducible to every other one. This seems to be true of register machines, Turing machines, and apparently lots of the other formalisms that have been developed, both in the 1930s and later. Of course, the claim can’t be strengthened too much, because an oracle for Turing computability could solve every computable problem in constant time. We also need to specify an appropriate notion of time and/or space complexity for each notion of computing. But so far, the most natural-seeming notions have ended up having this feature. Which is really quite interesting when you think about it.





Thoughts, Words, Objects

17 04 2006

I just got back yesterday from the University of Texas Graduate Philosophy Conference, which was a lot of fun. In fact, I think it was the most fun I’ve had at a conference other than FEW last year, which coincidentally was also in Austin – maybe it’s just a fun town! At any rate, there were a lot of very good papers, and I got a lot of good ideas from the discussion after each one as well. Even the papers about mind-body supervenience and Aristotelian substance (which aren’t issues I’m normally terribly interested in) made important use of logical and mathematical arguments that kept me interested. And the fact that both keynote speakers and several Texas faculty were sitting in on most of the sessions helped foster a very collegial mood. I’d like to thank the organizers for putting on such a good show, especially Tim Pickavance for making everything run so smoothly, and Aidan McGlynn for being a good commentator on my paper (and for distracting Jason Stanley from responding to my criticism!).

Because the theme was “thoughts, words, objects”, most of the papers were about language and metaphysics, and perhaps about their relation. There seems to be a methodological stance expressed in some of the papers, with some degree of explicitness in the talks by Jason Stanley and Josh Dever, that I generally find quite congenial but others might find a bit out there. I’ll just state my version of what’s going on, because I’m sure there are disagreements about the right way to phrase this, and I certainly don’t pretend to be stating how anyone else thinks of what’s going on.

But basically, when Quine brought back metaphysics, it was with the understanding that it wouldn’t be the free-floating discipline it had become with some of the excesses of 19th century philosophy and medieval theology – no counting angels on the heads of pins. Instead, we should work in conjunction with the sciences to establish our best theory of the world, accounting for our experiences and the like and giving good explanations of them. And at the end, if our theories are expressed in the right language (first-order logic), we can just read our ontological commitments off of this theory, and that’s the way we get our best theory of metaphysics. There is no uniquely metaphysical way of finding out about things, apart from the general scientific and philosophical project of understanding the world.

More recently, it’s become clear that much of our work just won’t come already phrased in first-order logic, so the commitments might not be transparent on the surface. However, the growth of formal semantics since Montague (building on the bits that Frege and Tarski had already put together) led linguists and philosophers to develop much more sophisticated accounts that can give the apparent truth-conditions for sentences of more complicated languages than first-order logic, like say, ordinary English. Armed with these truth-conditions from the project of semantics, and the truth-values for these statements that we gather from the general scientific project, we can then figure out just what we’re committed to metaphysically being the case.

Of course, science is often done in much more regimented and formalized languages, so that less semantic work needs to be done to find its commitments, which is why Quine didn’t necessarily see the need to do formal semantics. Not to mention that no one else had really done anything like modern formal semantics for more than a very weak fragment of any natural language, so that the very idea of such a project might well have been foreign to what he was proposing in “On What There Is”.

In addition to the obvious worry that this seems to do so much metaphysical work with so little metaphysical effort, there are more Quinean worries one might have about this project. For one thing, it seems odd that formal semantics, alone among the sciences (or “sciences”, depending on how one sees things) gets a special role to play. On a properly Quinean view there should be no such clear seams. And I wonder if the two projects can really be separated in such a clear way – it seems very plausible to me that what one wants to say about semantics might well be related to what one wants to say about ordinary object language facts of the matter, especially in disciplines like psychology, mathematics, and epistemology.

In discussion this afternoon, John MacFarlane pointed out to me that this sort of project has clear antecedents in Davidson, when he talks about the logical structure of action sentences, and introduced the semantic tool of quantifying over events. This surprises me, because I always think of Davidson as doing semantics as backwards from how I want to do it, but maybe I’ve been reading him wrong.
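
The standard textbook illustration of that move (my example, not one discussed at the conference) analyzes an action sentence by existentially quantifying over an event, so that adverbial modifiers become predicates of the event:

```latex
% A Davidson-style event analysis of an action sentence.
\[
  \text{``Brutus stabbed Caesar violently''}
  \;\approx\;
  \exists e\,\bigl(\mathrm{Stabbing}(e, \mathrm{Brutus}, \mathrm{Caesar}) \wedge \mathrm{Violent}(e)\bigr)
\]
```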

At any rate, thinking about things this way definitely renews my interest in the problems around various forms of context-sensitivity. The excellent comments by Julie Hunter on Elia Zardini’s paper helped clarify what some of the issues around MacFarlane-style relativism really are. Jason Stanley had been trying to convince me of some problems suggesting the view might not even make sense, but she expressed them in a way that I could understand, though I still can’t adequately convey them myself. It seems to be something about not being able to make proper sense of “truth” as a relation rather than a predicate, except within a formal system. Which is why it seems that MacFarlane has emphasized the role of norms of assertion and retraction rather than mere truth-conditions, and why he started talking about “accuracy” rather than “truth” in the session in Portland.

Anyway, lots of interesting stuff! Back to regularly-scheduled content about mathematics and probability shortly…





Off to Texas

13 04 2006

I’ll be out of town this weekend at the University of Texas Graduate Philosophy Conference, so probably no posting until the middle of next week. My commenter is Aidan McGlynn, who I met at the USC/UCLA graduate conference a few months ago. It should be fun!





Tarski and Fraenkel-Mostowski

10 04 2006

I had a bit of a blogging hiatus last week because I was fairly busy with some talks. In particular, I gave a talk to the math graduate students about the Fraenkel-Mostowski method of proving the independence of the Axiom of Choice from a slight weakening of Zermelo-Fraenkel set theory. Coincidentally, Sol Feferman was in town giving the Tarski Lectures, and the one that day was on a topic with some similarities, namely Tarski’s notion of a logical constant.

The question of which symbols count as logical is an important one in many characterizations of the notion of logical consequence, so after giving his account of consequence, Tarski started work on a characterization of the logical symbols. The idea he hit upon is phrased in terms of a sort of Russellian type-theory, but it could probably be equally well formulated in something more like ZF set theory. Basically, the idea is that we start with some domain of basic objects, and then form sets of these objects, and sets of those sets, and so on. Now, we consider any permutation of the basic objects, and see what effect these permutations have on sets of various levels. Tarski’s idea was that the ones that are fixed under all permutations are the sets that can be adequately denoted by a logical symbol. For instance, the identity relation (consisting of all ordered pairs of a basic object and itself) is fixed under every permutation, as are its negation, the trivial relation that holds between no objects, and the relation that holds between any two objects. At a level one step higher, the sets of sets that are fixed include the set of all non-empty sets (which corresponds to the existential quantifier), the set consisting of just the empty set (which corresponds to the negated universal quantifier), the set consisting of all sets of cardinality k (which corresponds to a cardinality quantifier), and so on. So it seems like a pretty good match for the intuitive notion. Feferman went on to point out that this idea was further developed by a variety of people over later decades, notably Vann McGee, who advocated a slightly different characterization.
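
Here’s a small sketch in Python (an ad hoc three-element domain and a made-up “likes” relation, purely for illustration) of the permutation-invariance test at the level of binary relations:

```python
# A toy illustration of Tarski's permutation-invariance test: an extension counts as
# "logical" if every permutation of the basic objects leaves it fixed.
# The domain and the "likes" relation are made up for illustration.

from itertools import permutations

DOMAIN = ["a", "b", "c"]

def image_of_relation(perm, pairs):
    """Image of a set of ordered pairs under a permutation of the domain."""
    return {(perm[x], perm[y]) for (x, y) in pairs}

def invariant(pairs):
    """Is this binary relation fixed by every permutation of DOMAIN?"""
    return all(
        image_of_relation(dict(zip(DOMAIN, target)), pairs) == pairs
        for target in permutations(DOMAIN)
    )

identity  = {(x, x) for x in DOMAIN}                  # fixed by all permutations: logical
universal = {(x, y) for x in DOMAIN for y in DOMAIN}  # fixed: logical
empty     = set()                                     # fixed: logical
likes     = {("a", "b"), ("b", "c")}                  # not fixed: non-logical

for name, rel in [("identity", identity), ("universal", universal),
                  ("empty", empty), ("likes", likes)]:
    print(name, invariant(rel))
```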

The Fraenkel-Mostowski method actually works fairly similarly. We start with a collection of basic objects (“urelements”) and construct a hierarchy of sets above them. The existence of urelements is a slight weakening of standard ZF set theory, which only allows a single object with no elements, namely the empty set. Thus, this theory is called ZFU. If we start with a model of ZFU with infinitely many urelements, then we can construct a model of ZFU that falsifies choice fairly easily. Basically, we consider all the permutations, and take just the sets that are fixed under “most” permutations, rather than the sets fixed under all of them, like in Tarski’s idea for the purely logical notions. To measure what “most” means, we say that a set is fixed by most permutations if we can choose finitely many urelements, such that any permutation fixing all of them also fixes the set in question. Of course, the finitely many urelements to be fixed will in general be different for each set in this class, which I will call “FB”, for the sets with “finite basis”.
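
In symbols (my own notation): each permutation π of the set U of urelements extends to the whole hierarchy by setting π(x) = {π(y) : y ∈ x}, and then the condition is:

```latex
% The "finite basis" (finite support) condition.
\[
  x \in \mathrm{FB}
  \;\iff\;
  \exists E \subseteq U \,\Bigl( E \text{ finite} \;\wedge\;
    \forall \pi\,\bigl[\, \pi \restriction E = \mathrm{id}_E \;\rightarrow\; \pi(x) = x \,\bigr] \Bigr)
\]
```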

It is straightforward to see that in FB, every set containing just urelements is either finite, or contains all but finitely many urelements. But the set of urelements is infinite, and the Axiom of Choice would let us split any infinite set into two disjoint infinite pieces (for instance, by well-ordering it and taking alternate elements), so this is a violation of Choice, which is what we want. However, FB isn’t quite a model of ZFU – for instance, it includes the full powerset of the set of urelements, even though there may be sets of urelements that are not included among the sets with a finite basis. We want our model to be transitive – that is, any element of a set in the model is itself in the model. This condition is very useful for verifying the axioms of Extensionality and Foundation. So we define HFB (for “hereditarily FB”) to be the sets in FB whose elements are all in HFB – this is a recursive characterization of HFB that basically guarantees that its elements are in FB, as are their elements, and their elements, and so on. It doesn’t take too much work to prove that HFB is a model of ZFU (the hard axioms to check are powerset and replacement – some lemmas to use are the fact that any formula satisfied by a sequence of sets is also satisfied by their images under any permutation, and that a permutation applied to a set with a finite basis gives another set with a finite basis).
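
Spelled out (again in my own notation), the recursion is just:

```latex
% The hereditary version, defined by recursion on rank: a set belongs to HFB when it
% has a finite basis and all of its elements already belong to HFB.
\[
  x \in \mathrm{HFB}
  \;\iff\;
  x \in \mathrm{FB} \;\wedge\; \forall y \in x\ \bigl( y \in \mathrm{HFB} \bigr)
\]
```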

Thus, a very similar method gives Tarski’s account of the logical constants, and also a proof of the independence of Choice! Of course, both needed to be updated to deal with trouble cases, but it’s interesting to see that a similar start gives rise to two seemingly unrelated results.