I just finished quite a streak of talks in formal philosophy. From Thursday night until Sunday, I (like Marc Moffett) was in Vancouver for the Society for Exact Philosophy, which was quite a fun little conference with a lot of interesting talks. Then on Monday, those of us at Berkeley working with Branden Fitelson got together with the people at Stanford working with Johan van Benthem for an informal workshop, which, like last year’s, had a lot of talks on probability from the Berkeley side and dynamic epistemic logic from the Stanford side, and again helped reveal a lot of interesting connections between these two rather different formal approaches to epistemological questions. And then today we had our quasi-monthly formal epistemology reading group meeting at Berkeley, with Jonathan Vogel from UC Davis.
There was a lot of interesting stuff discussed at all these places, but I’m glad there’s a bit of a break before FEW 2007 in Pittsburgh. It’s also very nice to know that there is all this work going on relating formal and traditional issues, both in epistemology and in other areas of philosophy.
Anyway, among the many interesting talks, the one that’s got me thinking the most about things I wasn’t already thinking about was the one by Mark Colyvan, on what he calls the “principle of uniform solution”. The basic idea is that if two paradoxes are “basically the same”, then the correct resolution to both should also be “basically the same”. So for instance, it would be very strange for someone to claim that the correct approach to Curry’s Paradox is that certain types of circularity make sentences ill-formed, while the correct approach to the Liar Paradox is to adopt a paraconsistent logic. Mark pointed out that there are some problems with properly formulating the principle, though – do we decide when paradoxes are “basically the same” in terms of their formal properties, the sorts of solutions they respond to, or the role they’ve played in various arguments? For instance, Yablo’s Paradox was explicitly introduced to show that self-reference is not the key issue in the Liar, Curry, and Russell paradoxes – which suggests either that the relevant formal property these paradoxes share is something other than self-reference, or that formal properties aren’t the right way to classify paradoxes in the first place.
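To see why Yablo’s construction cuts against the self-reference diagnosis, here is the standard presentation (my own gloss, not anything specific from Mark’s talk). Take an infinite list of sentences where, for each $n$,

$$S_n: \text{ for every } k > n, \ S_k \text{ is not true.}$$

No sentence in the list refers to itself, or even to any earlier sentence, yet no assignment of truth values is consistent: if some $S_n$ were true, then every later sentence would be untrue, including $S_{n+1}$; but the untruth of $S_{n+1}$ requires some true $S_m$ with $m > n+1$, contradicting $S_n$. So every $S_n$ is untrue – but then what each $S_n$ says is the case, so each is true after all.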
Hearing all this, I started to wonder just why we should believe anything like this principle of uniform solution in the first place. The strongest cases of the relevant form of argument seem to me to be ones like Tim Williamson’s appeal, in Knowledge and its Limits, to several different forms of the Surprise Examination Paradox – he points out that some traditional resolutions only solve the most traditional version, that a slightly modified version gets past them, and that his proposed solution to the modified version blocks the traditional version as well. Since both versions seem problematic, and one “covers” the other, we only need to worry about solving the covering case. I take it that something like this is at work when Graham Priest uses the Liar Paradox to argue for dialetheism, and then suggests a return to Frege’s inconsistent axiomatization of mathematics rather than the much more complex system of ZFC.
If this is the form of argument, then we shouldn’t always expect the principle of uniform solution to be worth following. If I (like most philosophers who don’t work directly on this sort of stuff) think that something like ZFC is the right approach to Russell’s Paradox, and something like Tarski’s syntactically typed notion of truth is the right approach to the Liar Paradox, then both paradoxes get solved, but neither approach would work for both. Their formal similarities are interesting, but there’s no reason they should have the same solution, since there isn’t an obvious solution that works for both (unless you go for something as extreme as Priest’s approach). Formal or other similarities between paradoxes often help show that resolving one will automatically resolve the other, so that the above argument goes through, but there’s no reason to think that this will always (or even normally) be the case.
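To spell out the formal similarity I have in mind (this is just the standard textbook comparison), both paradoxes arise by diagonalizing on a negated predicate. Naive comprehension gives us a set $R = \{x : x \notin x\}$, and instantiating its defining condition with $R$ itself yields

$$R \in R \leftrightarrow R \notin R,$$

while a Liar sentence $L$ with $L \leftrightarrow \neg \mathrm{Tr}(\ulcorner L \urcorner)$, combined with the T-schema $\mathrm{Tr}(\ulcorner L \urcorner) \leftrightarrow L$, yields

$$\mathrm{Tr}(\ulcorner L \urcorner) \leftrightarrow \neg \mathrm{Tr}(\ulcorner L \urcorner).$$

But the standard fixes don’t transfer: ZFC’s restricted comprehension simply denies that the first definition picks out a set, while Tarski’s hierarchy denies that a language can apply a truth predicate to its own sentences, and neither move has a natural analogue on the other side.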
But at the same time, something like this principle seems to work much more generally than in the case of paradoxes. There are certain similarities between the notion of objective chance and the notion of subjective uncertainty, so it makes sense that we use a single mathematical formalism (probability theory) to address both. Alan Hájek has suggested that these analogies extend even to the case of conditionalizing on events of probability zero, though I think this case isn’t as strong. (Though that might just be because I’m skeptical about objective chances.) There’s a general heuristic here: similar issues should be dealt with similarly. In some sense, it seems very natural to suggest that differences in approaches to different issues should somehow line up with the differences between the issues themselves. But we shouldn’t expect things always to work out terribly nicely.
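To say a bit more about what’s at issue in the probability-zero case (as I understand Hájek’s point): the usual ratio definition

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$$

is undefined when $P(B) = 0$, even though the conditional judgment can seem perfectly determinate on both the chance and the credence reading. A standard sort of example: a point chosen uniformly from a square lands on any particular vertical line with chance zero, yet the chance that it’s in the top half, given that it’s on that line, seems to be exactly 1/2, and a rational credence should presumably agree. The suggested fix in both cases is to take two-place conditional probability as primitive (in the style of Popper or Rényi) rather than defining it by the ratio, and it’s how far that further analogy holds up that I’m less sure about.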
Anyway, there’s a lot of interesting methodological stuff here to think about, for paradoxes in particular, and for philosophy in general (as well as mathematics and the sciences).