these notions of “computation” do not describe most of the actual computation we see around us

I suspect that most of the time there is little distinction between these notions, which is why no distinction is made. I expect that most questions about processes that start before their inputs have arrived can be rephrased in terms of conventional computing machines. However, this isn’t necessarily the case for all types of computation.

Processes that start before their inputs are received typically involve multiple agents, and such processes are usually known as ‘protocols’. And quantum protocols appear to be different from classical protocols, because quantum encryption, an example of a quantum protocol, allows you to be provably “almost certain” that nobody could have eavesdropped on your message. When asking questions about ordinary computations we’re often asking “with these assumptions about our universe, does there exist an algorithm such that…”. Classical and quantum computing give essentially the same answers because a quantum computer can be simulated by a classical one. But with encryption protocols we’re asking questions like “does there exist an algorithm such that there is no algorithm such that…”. And here we appear to see a big difference between classical and quantum computing.

But perhaps more importantly, the reason these notions are of any use is to say what sorts of functions can be computed with them (and how fast). The fact that we use computers to do all sorts of things other than computing functions just means that other abstract models will be useful for those purposes, not that these models are useless.

Nearly all of us, every day, are using operating systems which violate these assumptions. Increasingly, with service-oriented computing, agent-oriented and P2P systems, we use applications which also violate these assumptions.

IMO, the interesting question is sociological rather than philosophical: Why do so many computer scientists still cling to a model of computation which is violated by their own everyday experience?

is the class of all problems for which a randomized poly-time algorithm correctly identifies membership with a probability bounded away from 0.5, typically stated as a 2/3 probability of being correct and at most a 1/3 probability of being wrong.

Since the algorithm may choose not to use its random coins, BPP contains P. The question P =? BPP, also open, can be paraphrased as “Does randomness help in computation?”. If polled, most people would probably suspect that P = BPP, but we don’t know what the true situation is.
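To make the BPP definition concrete, here is a toy sketch of a classic randomized algorithm of exactly this flavor: Freivalds’ check that a matrix product A·B equals C, which runs faster than recomputing the product and errs with probability at most 1/2 per trial, amplified by repetition. (The function name and plain-list matrix representation are my own choices for illustration, not anything from the discussion above.)

```python
import random

def freivalds(A, B, C, trials=10):
    """Randomized check that A * B == C (Freivalds' algorithm).
    When A * B != C, each trial catches the mismatch with
    probability >= 1/2, so 'trials' repetitions push the chance
    of a wrong "True" answer below 2**-trials."""
    n = len(A)
    for _ in range(trials):
        # Pick a random 0/1 vector r and compare A(Br) with Cr,
        # which costs O(n^2) per trial instead of O(n^3).
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # definitely a mismatch
    return True  # probably a correct product
```

The one-sided error here is even stronger than BPP requires (a “False” answer is always right); repeating trials is the standard amplification trick that lets the 2/3 vs 1/3 constants in the definition be replaced by any fixed gap.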

I am not sure what you mean when you say that P is “too large”. Is it because it includes problems with running times of n^100, which are not efficient in practice?

PSPACE includes NP, yes. The inclusion goes

P \subseteq NP \subseteq PSPACE. It is possible that P = PSPACE, just as it is possible that P = NP, although I suspect P = PSPACE is believed with a lot less certitude.

You’re right that just because it’s an empirical question doesn’t make it easy.

Thanks for the clarification on quantum computation – I thought they already knew how to do NP-hard problems on a quantum computer, except that they just don’t have the physical machines working beyond four or five qubits or whatever.

Is BPP slightly larger than P, or slightly smaller? Because it seems that in some other sense of effective, P is too large. Though maybe there’s no principled way to cut it down.

PSPACE includes NP, right?

Anyway, thanks for your comment!

Regarding CT thesis II, I don’t know if it’s *just* an empirical matter of verifying whether relativity or quantum physics can simulate non-Turing computations. Even with explicit physical systems like quantum computers, it is not trivial to figure out how powerful they are (and a quantum computer can be simulated by a Turing machine without changing any notions of computability).

You rightly argue that CT III (the “effective” CT thesis, as we often call it) is what people often believe, and is threatened by quantum computing. However, contrary to what you indicate, it is NOT the case that quantum computers can solve NP-hard problems, at least not yet.

As an aside, it’s not so much whether a quantum computer can solve an “NP” problem. Since P is contained in NP, this is always true. What is relevant is whether a quantum computer can solve an NP-complete problem in polynomial time, where an NP-complete problem is one that is, in a formal sense, the hardest problem in NP, and one that can be used to solve all problems in NP.
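The defining feature of NP mentioned here, that solutions can be checked quickly even when they may be hard to find, can be sketched with a tiny certificate checker for CNF satisfiability. (The encoding of clauses as lists of (variable, polarity) pairs and the function name are illustrative assumptions of mine.)

```python
def verify_sat(clauses, assignment):
    """Check a candidate truth assignment against a CNF formula in
    time linear in the formula size -- the 'easy to verify' half of
    the NP definition. Each clause is a list of literals, where a
    literal is a (variable_name, is_positive) pair; a clause is
    satisfied if at least one of its literals holds."""
    return all(
        any(assignment[var] == positive for var, positive in clause)
        for clause in clauses
    )

# (x OR NOT y) AND (y OR z), encoded as lists of literals:
clauses = [[("x", True), ("y", False)], [("y", True), ("z", True)]]
```

Finding a satisfying assignment is the NP-complete part; checking one, as above, is trivially polynomial, and it is the former task for which no polynomial-time quantum algorithm is known.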

Finally, you are absolutely right that one can’t willy-nilly define any model of computation, since oracles are an easy way of subverting the effective CT thesis. However, the strength of this last thesis is that every physically realizable model we have been able to come up with (i.e., one where all steps can be “implemented”) is polytime reducible to every other model. Quantum computing is fascinating precisely because it doesn’t seem to behave this way. But we have developed very sophisticated notions of time and space complexity to “stratify” these classes, so to speak.

P, the class of poly-time solvable problems, is really the class that some believe captures effective computations. BPP, the class of efficient randomized algorithms, is probably a better choice, but one could nitpick ever so slightly about random sources, and I won’t get into that.

PSPACE, on the other hand, the class of poly-space computations, appears to be quite a bit more powerful. Again, we just don’t know how much more. So there is a good deal of stratification in complexity classes, but for effective computations, it really boils down to P (or BPP) vs NP.
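A standard illustration of why poly-space looks more powerful than poly-time is the evaluation of quantified boolean formulas (TQBF, the canonical PSPACE-complete problem): the obvious recursive evaluator below may take exponential time, but its recursion depth, and hence space, is only linear in the number of variables. (The function name and the representation of formulas as Python callables are my own illustrative choices.)

```python
def eval_qbf(quantifiers, formula, assignment=None):
    """Evaluate a quantified boolean formula by recursion on the
    quantifier prefix. Space usage is proportional to the number of
    variables (one stack frame each), even though the number of
    leaves explored can be exponential -- the shape of argument that
    puts TQBF in PSPACE.
    quantifiers: list of ('forall'|'exists', var_name) pairs.
    formula: a function from an assignment dict to bool."""
    if assignment is None:
        assignment = {}
    if not quantifiers:
        return formula(assignment)
    q, var = quantifiers[0]
    results = []
    for value in (False, True):
        assignment[var] = value
        results.append(eval_qbf(quantifiers[1:], formula, assignment))
    del assignment[var]
    return all(results) if q == 'forall' else any(results)
```

For example, “for all x there exists y with x != y” evaluates to true, while “there exists x such that for all y, x == y” evaluates to false; the evaluator reuses the same assignment dictionary throughout, so only the recursion stack grows.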
