Karl Popper’s criterion of “falsifiability” for scientific theories (the claim that a theory counts as scientific only if there is some hypothetical observation that would prove it false) is a very good heuristic for thinking about what science, or any evidence-based procedure for finding out about the world, is like. However, regardless of what scientists say when invoking it (physicists yelling about string theory, biologists yelling about intelligent design, or anyone railing against crackpots, economists, or whoever else they don’t like), it just isn’t right as even part of a criterion for what counts as science. But I think there is perhaps a way to use something like it as a criterion for what counts as a belief, though perhaps my suggestion is crazy.
First, a quick rundown of the problems with falsificationism as a criterion for science. As Popper was well aware, it can’t apply to statistical theories – in most cases, no evidence could actually rule out a statistical theory, rather than just making it extremely improbable, and you might think we shouldn’t rule something out just because it’s extremely improbable, because (in the long run) we’re bound to get unlucky and rule out the truth at some point. A bigger problem is the Quine-Duhem problem – basically no theory is falsifiable in a strict sense, because falsification of a theory by evidence always depends on auxiliary hypotheses, which can be let go of to save the theory. For instance, an observation of Uranus or Mercury in a place where you don’t expect it to be might look like a straightforward falsification of Newtonian mechanics, but there’s also room to postulate a so-far-unobserved planet (Neptune or Vulcan), or to argue that there was some optical artifact in the way the telescope was working, or even just that the astronomer misremembered or misrecorded the observation. Thus, there is no sharp line that can be drawn by a falsifiability criterion of this sort. In addition, theories that look straightforwardly unfalsifiable can still serve as useful heuristics for the further development of science – for instance, the theory that there actually are quarks (as opposed to the theory that protons and neutrons and cloud chambers and the like all behave “as if quarks existed”) can lead one to think of different modifications of the Standard Model in the face of recalcitrant data.
But despite all these problems, I think there’s still something very useful about the idea of falsificationism. Rather than a logical criterion, as Popper considered it, I’d rather think of it as an epistemological, or perhaps even psychological, one. Popper thought that a theory needed to be specific enough that certain observations would be logically inconsistent with it, in order to count as a scientific theory. I’d rather say that a belief needs to be held flexibly enough that certain observations would lead the agent to give it up, in order for the belief to count as a “rational” or “scientific” one. (Or perhaps even to count as a belief at all, rather than just an article of faith, or something like that.) That is, the belief doesn’t need to be inconsistent with any set of observations – it just needs to be held in a way that is not totally unshakable. Although this is a psychological criterion I’m suggesting, I don’t think the observations that would lead an agent to give up a belief need to be known to the agent – the agent just needs to actually have the relevant dispositions. This removes the worries about statistical theories and the Quine-Duhem problem – although any theory could logically be saved from the data by giving up enough auxiliaries, it seems plausible that any rational agent would have some limit to the lengths to which they would go to save the theory. (I don’t know whether the comparative amount of evidence needed to shake a belief says anything interesting when comparing two agents.) This also applies to the more “standardly unfalsifiable” theories that I’d like to defend – I say that they’re important because they provide useful heuristics for modifying theories, heuristics that differ from those of their empirically identical peers. But if these heuristics never seem to lead one to good modifications, then eventually one would likely give up the theory. It can’t be falsified, but one can still be made to give it up by seeing how fruitless it is and how much more fruitful its competitor is (a competitor that is just as unfalsifiable in this respect).
One might have worries about mathematical truths, or other potential “analytic” truths. Popper explicitly set these aside and said that his criterion only applied to things that weren’t logical truths (or closely enough related to logical truths). However, I suspect that something like my criterion might still apply here – although there is no possible observation that is inconsistent with Cauchy’s theorem on path integrals in the complex plane, I suspect that there are possible observations that would make anyone give up their belief in this theorem. For instance, someone could uncover a very subtle logical flaw that appears in every published proof of the theorem, and then exhibit some strange function that is complex-differentiable everywhere but whose integral around a closed curve is non-zero. Or at least, someone could do something that looks very much like this and would convince everyone, even though I think they couldn’t actually discover such a function because there isn’t one. It’s tougher to imagine what sort of observation would make mathematicians give up their beliefs in much simpler propositions, like the claim that there are infinitely many primes, or that 2+2=4, but as I said, there’s no need for the agents to actually be able to imagine the relevant observations – the disposition to give up the belief in certain circumstances just has to exist.
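For concreteness, here is a standard statement of the theorem in question, in my own paraphrase rather than quoted from any particular text:

```latex
% Cauchy's integral theorem (standard statement):
% if $f$ is holomorphic on a simply connected open set
% $U \subseteq \mathbb{C}$, then for every closed curve
% $\gamma$ in $U$,
\oint_{\gamma} f(z)\, dz = 0
```

The imagined “strange function” above would be a counterexample to exactly this identity: a function that is complex-differentiable everywhere, yet whose integral around some closed curve is non-zero.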
I think this is a relatively low bar for a belief to reach – I suspect that just about all apparent beliefs that people have would actually be given up under certain observations. However, with logical beliefs and religious beliefs, people often claim that no possible observation would make them give the belief up (this is called “analyticity” for logical beliefs, and “faith” for religious beliefs). I don’t know whether that should actually count as a defect for either of these types of belief, but I think it is good reason to worry about them, at least to some extent.