This post aims to show a surprising connection between a reasonably well-known computer science concept (NP-complete problems) and the reasons why computers have become so ubiquitous. If you’ve never heard of NP-complete problems, the “Why computers are special devices” section might still be of interest!
I thought that this was an interesting and provocative essay, but there's one link in the argument that doesn't quite go through, as far as I can tell.
You want to identify a given complexity class with a restricted class of universal machines - one which monitors the resources consumed and cuts off the computation when the resource limit is reached.
So let's say that you are commissioned to make such a machine for the class P (or NP, whichever you want). Then I give you a problem instance, and ask you to run it within a polynomial-time constraint.
So... how many steps does your polynomial-time machine run it for before stopping it?
The difficulty is that the class P is defined in terms of "problems", which are really sets of problem instances, and a problem is in P because of the asymptotic behavior of the cost as its instances get larger. A given universal computer, when confronted with a *single* problem instance and asked to run it, has no way to know which "problem" (i.e. which set of problem instances) it belongs to. So I think you may have difficulty defining a "polynomial-time" machine that cuts off computation after a polynomial number of steps. Let me know if I am missing something.
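The commenter's difficulty can be made concrete with a toy sketch (the instruction set and the `run_with_budget` helper below are invented for illustration, not from the essay): building a machine that halts after a fixed step budget is easy; the hard part is that no single instance tells you which polynomial budget, if any, is the right one for the problem it came from.

```python
def run_with_budget(program, tape, budget):
    """Run a toy program (a list of (op, arg) pairs) for at most `budget` steps.

    Returns the accumulator on normal termination, or None if the
    resource limit was hit and the computation was cut off.
    """
    steps = 0
    pc = 0   # program counter
    acc = 0  # accumulator
    while pc < len(program):
        if steps >= budget:
            return None  # resource limit reached: cut off the computation
        op, arg = program[pc]
        if op == "load":       # load a tape cell into the accumulator
            acc = tape[arg]
        elif op == "add":      # add a constant to the accumulator
            acc += arg
        pc += 1
        steps += 1
    return acc

# The machine itself is unproblematic; the open question in the comment
# is what `budget` should be. For an instance of size n we might guess
# n**2 or n**3, but nothing about the instance alone determines which
# polynomial (if any) applies to the problem it belongs to.
program = [("load", 0), ("add", 5)]
print(run_with_budget(program, [37], budget=10))  # completes: 42
print(run_with_budget(program, [37], budget=1))   # cut off: None
```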
I didn't quite understand this sentence: "For virtually any class we might define there will be a way of handicapping the universal computer such that it is still able to solve all of the problems in the class while at the same time not breaking the class specific resource constraint." This seems to assert that any problem in such a class can be solved by a suitably handicapped universal computer, but as far as I know it's still not known whether, for example, NP-complete problems can be solved efficiently. From what you wrote earlier in the article I assume you agree with that, which means I am misunderstanding something.
Your insight is very interesting. My very speculative question is whether there might be a way of looking at the universal computer’s resources such that it might be viewed as a handicapped version of a more powerful “super universal” computer? For instance at any time and state of knowledge the laws of physics in our universe will constrain the resources actually available to any physically realisable universal computer we could build [modulo that knowledge], but improved knowledge would relax those constraints somewhat? In this speculative vein the state of our knowledge would “guarantee” the related degree of handicap at all times. Apologies if this is too much “woo woo” speculation but interested in your thoughts.
"Clearly this shapeshifting capability has its limits. It’s difficult to imagine what kind of software you’d have to install on your laptop to turn it into a coffee grinder."
Is computational universality a bad name because of these limits? What does computational universality really mean then? What is the class of things that ENIAC can do?
Speaking of constrained computational cost: this has been done within Ethereum, whose "gas" mechanism limits the cost of each transaction to prevent attacks on the infrastructure: https://www.sofi.com/learn/content/what-is-ethereum-gas/