## Logic in Mathematics, Philosophy, and Computer Science

7 08 2005

In discussion with Jon Cohen in the past few weeks, I’ve realized a bit more explicitly the different ways logic seems to be treated in the three different disciplines that it naturally forms a part of. I intended to post this note as a comment on his recent post, but for some reason the combination of old operating system and browser on the computer I’m using here wouldn’t let me do it. So I’m making it my own post.

The thing that has struck me most in my experience doing logic in both mathematics and philosophy departments is that in mathematics, “logic” is seen as just being classical propositional or first-order logic, while in philosophy a wide range of other logics is discussed. The most notable example is modal logic of various sorts, though intuitionistic logic and various relevance and paraconsistent logics are also debated in some circles. But in talking to Jon I’ve realized that there are far more logics out there that very few philosophers are even aware of, like linear logics, non-commutative logics, and various logics that remove structural rules like weakening, contraction, or exchange (the rules that allow one to treat the premises as a set, rather than as a multiset or sequence). On his sketch of the history, mathematicians are stuck in the 1930s, and philosophers are stuck in the early 1980s, in terms of what sorts of systems they admit as a logic. Of course, all three disciplines have developed large amounts of logical material relating to their chosen systems.
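To make the parenthetical remark about structural rules concrete, here is a small sketch (my own illustration, not anything from the substructural literature itself) of how admitting or dropping exchange and contraction changes what a context of premises *is*. Weakening isn’t modeled here, since it governs whether extra premises may be added rather than when two contexts count as the same:

```python
# Sketch: structural rules determine the identity conditions of a context.
# Premises start life as a sequence; each admissible rule licenses a
# coarser view of them.

def normalize(premises, exchange=True, contraction=True):
    """Return a canonical object for the context, given which rules hold.

    - no exchange: order and multiplicity matter -> sequence (tuple)
    - exchange only: multiplicity matters, order doesn't -> multiset
    - exchange + contraction: only membership matters -> set
    """
    if exchange and contraction:
        return frozenset(premises)        # set: A, A, B same as B, A
    if exchange:
        return tuple(sorted(premises))    # multiset: A, A, B same as B, A, A
    return tuple(premises)                # sequence: order is fixed

# Classical logic has all the structural rules: contexts are sets.
assert normalize(["A", "A", "B"]) == normalize(["B", "A"])

# Drop contraction (as linear logic does): "A, A" is not "A".
assert normalize(["A", "A", "B"], contraction=False) != \
       normalize(["A", "B"], contraction=False)

# Drop exchange (as non-commutative logics do): order matters.
assert normalize(["A", "B"], exchange=False) != \
       normalize(["B", "A"], exchange=False)
```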

The reason for these divisions seems to be a disagreement about what a logic is. Mathematicians just want to formalize mathematical reasoning, and so have fixed on classical logic, since it seems to best capture the arguments that mathematicians find acceptable and necessary. Philosophers, on the other hand, debate whether classical logic, intuitionistic logic, some sort of relevance logic, or some other system is the “one true logic” (or one of several true logics, as advocated by Greg Restall and JC Beall). Although computer scientists study even more types of logic, they don’t seem to argue about which one is appropriate for doing their own reasoning in – from what I understand, they do all their metareasoning in classical logic (or some approximation thereof). The various systems are studied to gain structural insights, and to model the capacities of various computational systems, but not to talk about truth per se.

Does this sound about right?

### 4 responses

8 08 2005

Can I add a few words as a computer scientist?

I think the primary (although not the only) motivation for computer scientists’ interest in non-classical logics is to find formal models able to represent and mimic ordinary human reasoning. Certainly, the interest in non-monotonic logics from the 1970s onward was and is driven by the desire to create machines able to exhibit intelligence, which, until recently, was taken to mean intelligence-in-a-form-recognizable-to-humans. (Only with the recent rise of ideas of swarm intelligence and other non-humanoid intelligence has this bias shifted.)

Claims in classical logics are not defeasible — once proven, they stay true. This is clearly inadequate as an account of everyday human reasoning (despite the claims of some logicians), since we humans change our beliefs, and our intended actions, all the time. Hence the focus in AI on non-monotonic logics and similar defeasible knowledge representation and reasoning systems (such as Bayesian Belief Networks). A similar motivation — aiming to build software entities capable of reasoning with all the sophistication and complexity of humanoid life-forms — has driven Computer Scientists’ recent interest in formal models of argument and dialog.

The focus in AI and CS on modal logics is also motivated by this ambition. If you want to build an intelligent machine in an inter-connected world, then it will need to be able to reason about other machines, notably about their actions and intentions, and perhaps, in consequence, about their beliefs, desires and values. Hence, the attention now paid to epistemic, temporal, deontic, and doxastic modal logics in AI.
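The family of modal logics mentioned above all share one formal skeleton, which a toy sketch can make vivid (this is my illustration of standard possible-worlds semantics, not anything specific to the comment): “agent i believes p at world w” holds just in case p is true at every world the agent’s accessibility relation links to w. Epistemic, doxastic, deontic, and temporal readings differ mainly in how that relation is interpreted and constrained:

```python
# Toy possible-worlds evaluation of the belief (box) modality.
# `accessible[agent][w]` lists the worlds the agent cannot rule out at w;
# `valuation[w]` is the set of atomic facts true at world w.

def believes(accessible, valuation, agent, world, p):
    """Box for `agent`: p holds at every world the agent considers possible."""
    return all(p in valuation.get(v, set())
               for v in accessible[agent].get(world, []))

# A robot at w0 that cannot distinguish two candidate worlds.
accessible = {"robot": {"w0": ["w1", "w2"]}}
valuation = {"w1": {"door_open"}, "w2": {"door_open", "light_on"}}

assert believes(accessible, valuation, "robot", "w0", "door_open") is True
assert believes(accessible, valuation, "robot", "w0", "light_on") is False
```

The point of the sketch: reasoning about other machines’ beliefs reduces to quantifying over the worlds those machines cannot yet rule out, which is why these modal logics are natural tools for multi-agent AI.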

8 08 2005

So in a sense, philosophers are thinking of logic as a normative model for reasoning, while (at least some) computer scientists are looking for a descriptive model?

8 08 2005

In computer science, it is not particularly useful just to know that something is “true”. Why is it true? How can we show it? How much does it cost to show it? Merely knowing whether something is true is a very special and computationally uninteresting situation – similar to just having an oracle.

Even within computer science, there are different religions. As Peter said, some people in the AI community see logic as a tool for describing agents’ behaviours. Some of the logics these guys use are seriously weird and it is not clear how useful they are (perhaps because the field still has a long way to go before it matures). A much more mature use of temporal logic is in model checking concurrent and distributed systems – the most famous example of this is SPIN (your comment system seems to strip HTML tags – the link is http://spinroot.com/spin/whatispin.html). Again, this is a descriptive situation – a branching temporal logic with quantification over paths is precisely what is needed to talk about alternative computation paths in an effective manner. And then you get those silly people who are interested in applying stuff like proof theory in order to reason about computation and do some automated reasoning 🙂
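The phrase “quantification over paths” can be made concrete with a toy model checker (my sketch of the standard EF check; SPIN itself checks LTL properties over Promela models and works quite differently). The CTL formula EF p asks whether *some* computation path from a state eventually reaches a p-state, which for finite systems is just reachability:

```python
from collections import deque

# Toy branching-time check: EF p at `start` holds iff some path from
# `start` reaches a state labelled with p. For a finite Kripke structure
# this is breadth-first reachability.

def ef(transitions, labels, start, p):
    """Existential 'eventually': some path from `start` hits a p-state."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        if p in labels.get(s, set()):
            return True
        for t in transitions.get(s, []):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

# A system that can either finish or loop idling forever.
transitions = {"init": ["work", "idle"], "work": ["done"],
               "idle": ["idle"], "done": []}
labels = {"done": {"finished"}}

assert ef(transitions, labels, "init", "finished") is True   # some path finishes
assert ef(transitions, labels, "idle", "finished") is False  # the idle loop never does
```

The universal quantifier (AF, AG) is what distinguishes branching-time logics from linear-time ones: “on all paths” and “on some path” come apart exactly when the system can branch, as at `init` above.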

9 08 2005

To Kenny — yes, with the qualification that at least some computer scientists want descriptive models of something other than actual human reasoning (e.g., human-like reasoning, or non-human reasoning).

To Jon (first para) — Moreover, large parts of computer science are not devoted to “truth” at all (i.e., to logics for beliefs), but to action — reasoning about actions, building machines which can plan, execute, reason and argue about actions, etc. As yet, we have nothing like the Tarskian account of truth to apply to actions. At best, all we have are the criteria of economics — Pareto optimality, for instance. But these criteria and the associated economic models are partial and very inadequate, because they assume an idealized world (e.g., agents with perfect information, or without resource constraints, or engaged in utility-maximization, etc.).

Computer science now suffers from the lack of attention historically paid to reasoning about action. To show the historical bias in philosophy, I like to cite an (otherwise nice) introductory text by the prominent philosopher Stephen Toulmin, “Knowing and Acting: An Invitation to Philosophy” (Macmillan, 1976), which has 18 chapters devoted to beliefs and just 1 chapter devoted to actions.