Karl Popper’s criterion of “falsifiability” for scientific theories (the claim that a theory counts as scientific only if there is some hypothetical observation that would prove it false) is a very good heuristic for thinking about what science (or any evidence-based procedure for finding out about the world) is like. However, regardless of what scientists say (whether they be physicists yelling about string theory, biologists yelling about intelligent design, or anyone railing at crackpots, economists, or anyone else they don’t like), it just isn’t right as even part of a criterion for what counts as science. But I think there is perhaps a way to use something like it as a criterion for what counts as a belief, though my suggestion may be crazy.
First, a quick rundown of the problems with falsificationism as a criterion for science. As Popper was well aware, it can’t apply to statistical theories – in most cases, no evidence could actually rule out a statistical theory, rather than just making it extremely improbable, and you might think we shouldn’t rule something out just because it’s extremely improbable, because (in the long run) we’re bound to get unlucky and rule out the truth at some point. A bigger problem is the Quine-Duhem problem – basically no theory is falsifiable in a strict sense, because falsification of a theory by evidence always depends on auxiliary hypotheses, which can be let go of to save the theory. For instance, an observation of Uranus or Mercury in a place where you don’t expect it to be might look like a straightforward falsification of Newtonian mechanics, but there’s also room to postulate a so-far-unobserved planet (Neptune or Vulcan), or to argue that there was some optical artifact in the way the telescope was working, or even just that the astronomer misremembered or misrecorded the observation. Thus, there is no sharp line that can be drawn by a falsifiability criterion of this sort. In addition, theories that look straightforwardly unfalsifiable can still serve as useful heuristics for the further development of science – for instance, the theory that there actually are quarks (as opposed to the theory that protons and neutrons and cloud chambers and the like all behave “as if quarks existed”) can lead one to think of different modifications of the Standard Model in the face of recalcitrant data.
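(A quick toy illustration of that statistical point, with my own example rather than Popper’s: the hypothesis that a coin is fair assigns a non-zero probability to every finite sequence of outcomes, so no finite run of flips is logically inconsistent with it – even a hundred heads in a row merely makes it wildly improbable:

$$P(\text{100 heads in a row} \mid \text{fair coin}) = 2^{-100} \approx 8 \times 10^{-31} > 0.$$

So strict falsification never arrives; at best one decides the probability has gotten too small to take seriously.)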
But despite all these problems, I think there’s still something very useful about the idea of falsificationism. Rather than a logical criterion, as Popper considered it, I’d prefer to think of it as an epistemological, or perhaps even psychological, one. Popper thought that a theory needed to be specific enough that certain observations would be logically inconsistent with it, in order to count as a scientific theory. I’d rather say that a belief needs to be flexible enough that certain observations would lead the agent to give it up, in order for the belief to count as a “rational” or “scientific” one. (Or perhaps even to count as a belief at all, rather than just an article of faith, or something like that.) That is, the belief doesn’t need to be inconsistent with any set of observations – it just needs to be held in a way that is not totally unshakable. Although this is a psychological criterion I’m suggesting, I don’t think the observations that would lead an agent to give up the belief need to be known to the agent – the agent just needs to actually have the relevant dispositions. This removes the worries about statistical theories and the Quine-Duhem problem – although any theory could logically be saved from the data by giving up enough auxiliaries, it seems plausible that any rational agent would have some limit to the lengths to which they would go to save the theory. (I don’t know whether the comparative amounts of evidence needed to shake a belief say anything interesting when comparing two agents.) This also applies to the more “standardly unfalsifiable” theories that I’d like to defend – I say that they’re important because they give useful heuristics for modifying theories, heuristics that differ from those of their empirically equivalent peers. But if these heuristics never seem to lead one to good modifications, then eventually one would likely give up the theory. It can’t be falsified, but one can still be made to give it up by seeing how fruitless it is and how much more fruitful its competitor is (which is just as unfalsifiable in this respect).
One might have worries about mathematical truths, or other potential “analytic” truths. Popper explicitly set these aside and said that his criterion only applied to things that weren’t logical truths (or closely enough related to logical truths). However, I suspect that something like my criterion might still apply here – although there is no possible observation that is inconsistent with Cauchy’s theorem on path integrals in the complex plane, I suspect that there are possible observations that would make anyone give up their belief in this theorem. For instance, someone could uncover a very subtle logical flaw that appears in every published proof of the theorem, and then exhibit some strange function that is complex-differentiable everywhere but whose integral around a closed curve is non-zero. Or at least, someone could do something that looks very much like this and would convince everyone, even though I think they couldn’t actually discover such a function because there isn’t one. It’s tougher to imagine what sort of observation would make mathematicians give up their beliefs in much simpler propositions, like the claim that there are infinitely many primes, or that 2+2=4, but as I said, there’s no need for the agents to actually be able to imagine the relevant observations – the disposition to give up the belief in certain circumstances just has to exist.
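(For reference, the statement I have in mind – glossing over the exact regularity conditions on the curve – is that if $f$ is holomorphic on a simply connected domain $D \subseteq \mathbb{C}$ and $\gamma$ is a closed curve lying in $D$, then

$$\oint_\gamma f(z)\,dz = 0.$$

The imagined discovery would be something that looked like a function satisfying the hypotheses while violating the conclusion.)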
I think this is a relatively low bar for a belief to reach – I suspect that just about all apparent beliefs that people have would actually be given up under certain observations. However, with logical beliefs and religious beliefs, people often claim that no possible observation would make them give such beliefs up (this is called “analyticity” for logical beliefs, and “faith” for religious beliefs). I don’t know whether that should actually count as a defect for either of these types of belief, but I think it is good reason to worry about them, at least to some extent.
I’d rather say that a belief needs to be flexible enough that certain observations would lead the agent to give it up, in order for the belief to count as a “rational” or “scientific” one. (Or perhaps even to count as a belief at all, rather than just an article of faith, or something like that.)
I’d recommend against making this sort of flexibility a necessary condition for being a belief. Utterances like “John believes X, and there’s no possible set of evidence that could lead him to give it up” don’t sound contradictory to my ear. Articles of faith seem to me a species of belief, rather than a different thing altogether.
You know this stuff a whole lot better than me, but I once heard someone say that a belief held with credence 1 won’t ever be revised away. If it even makes sense to say something like that, it seems that we can have unshakeable beliefs.
I think I agree that this is too strong a criterion for belief, but I’m not certain of that. Utterances of the sort you mention I would explain away as a kind of hyperbole. I probably shouldn’t have said “articles of faith” either, because I suspect that for most articles of faith, there are in fact hypothetical sets of evidence that would force the person to give them up, even if they don’t realize this right now.
It’s true that under an orthodox Bayesian picture, where you only ever update beliefs by conditionalization, and only ever conditionalize on events of non-zero probability, a credence of 1 never goes away. However, I suspect that one can often conditionalize on events of probability zero (not events regarded ahead of time as impossible, just things like an infinitely thin dart hitting a specific line on a dartboard, or the value of a calculation coming out to some precise real number), which will knock some credences down from 1 at least occasionally. And I also suspect that even rational agents sometimes consider something absolutely impossible that is in fact true – in that case they can’t update by conditionalization, but they have to change their beliefs somehow, and in doing so many of their previous certainties might be shaken. It would be fairly surprising to me to find many apparent beliefs that would survive any such process. (One of the few that might plausibly do so would be the belief “I exist”.)
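(The orthodox point, in symbols: if $P(A) = 1$, then $P(A \cap E) = P(E)$ for any evidence $E$, so whenever $P(E) > 0$,

$$P(A \mid E) = \frac{P(A \cap E)}{P(E)} = \frac{P(E)}{P(E)} = 1,$$

and conditionalization alone can never dislodge a credence of 1 – which is why the escape routes I’m gesturing at involve either evidence of probability zero or some non-conditionalization form of belief revision.)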
Anyway, that’s also why I said this position is “crazy” neo-falsificationism 😉
Once you allow that “there’s no need for the agents to actually be able to imagine the relevant observations – the disposition to give up the belief in certain circumstances just has to exist,” I think you’ve set the bar so low as to be indistinguishable from “anything goes” a la Paul Feyerabend. All anyone would ever have to do is change their attitude toward their belief, not the belief itself.
That’s not a problem to me, mind you. I quite like Feyerabend.
I’m not quite sure why they wouldn’t have to change the belief. I’m just saying that they don’t have to know what the circumstances are under which they would give it up, just that there have to be such potential circumstances.