This post aims to show a surprising connection between a reasonably well-known computer science concept (NP-complete problems) and the reasons why computers have become so ubiquitous. If you’ve never heard of NP-complete problems, the “Why computers are special devices” section might still be of interest!
I thought that this was an interesting and provocative essay, but there's one link in the argument that doesn't quite go through, as far as I can tell.
You want to identify a given complexity class with a restricted class of universal machines - one which monitors the resources consumed and cuts off the computation when the resource limit is reached.
So let's say that you are commissioned to make such a machine for the class P (or NP, whichever you want). Then I give you a problem instance, and ask you to run it within a polynomial-time constraint.
So .... how many steps does your polynomial-time machine run it for before stopping it?
The difficulty is that class P is defined by "problems", which are really sets of problem instances, and the members of P are in that class because of the asymptotic behavior as their problem instances get larger. A given universal computer, when confronted with a *single* problem instance and asked to run it, has no way to know what "problem" (i.e. set of problem instances) it belongs to. So I think you may have difficulty defining a "polynomial-time" machine that cuts off computation after a polynomial number of steps.
Let me know if I am missing something.
Yes, I deliberately simplified the argument, and this is one of the important details that got cut out. The way the reduction actually works is that the "handicapped machine" is given the program, the input, and the *exact number of steps the simulation can go on for, in unary*. This has several consequences, but one of them is getting around the problem you describe. When we do the reduction we actually need to decide exactly how many time steps we will allow the handicapped machine to run for (as well as waste at least that many steps producing this input). Many thanks for the comment; this really helps with gauging how much to include in the future!
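To make that concrete, here is a rough sketch in Python of the kind of "handicapped machine" I have in mind. The generator convention and the toy `linear_search` program are my own illustrative placeholders, not the actual Turing-machine construction; the one faithful detail is that the step budget arrives in unary, so its length is the bound.

```python
# Toy model of the "handicapped" universal machine: it simulates a program
# for at most as many steps as there are '1's in the unary budget, then cuts
# the computation off.  (Illustrative sketch only.)

def bounded_simulate(program, x, unary_budget):
    """Run `program` on input `x` for at most len(unary_budget) steps."""
    budget = len(unary_budget)      # unary encoding: the length IS the bound
    run = program(x)                # start the simulated computation
    try:
        for _ in range(budget):
            next(run)               # advance the simulation by one step
    except StopIteration as done:
        return done.value           # the program halted within the budget
    return None                     # budget exhausted: computation cut off


# Example "program": linear search, modelled as a generator that yields
# once per comparison, i.e. once per simulated step.
def linear_search(inp):
    target, items = inp
    for i, item in enumerate(items):
        yield                       # one simulated step
        if item == target:
            return i
    return -1


print(bounded_simulate(linear_search, (7, [3, 5, 7, 9]), "1" * 10))  # 2
print(bounded_simulate(linear_search, (9, [3, 5, 7, 9]), "1" * 2))   # None (cut off)
```

Note that the machine itself never chooses the budget; whoever prepares the input has to, which is exactly the detail the reduction needs to handle.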
Thanks. Isn't it more than just an oversimplification in explanation though?
I took you to be arguing that a complexity class such as P corresponds to a (single) appropriately-restricted universal machine. It's not obvious to me that you can construct such a machine.
Yes, you can create a machine that takes as input a program and a precise number of steps that the program will be allowed to run. The question is how you arrive at a number of steps that makes the overall machine polynomially restricted.
Imagine that I give the machine a program N and a number-of-steps bound S. The execution of that program would normally take more than S steps, so our restricted machine stops the computation. But maybe the polynomial that describes the asymptotic behavior of the overall problem is 10x as large as we were assuming (since constant factors don't matter), so a problem belonging properly in P is not computed by the machine.
The two questions that remain for me are:
1) Are you in fact claiming that we can construct a single appropriately-restricted universal machine that can solve exactly the problems that are in class P (halting on the others)?
2) If so, and if we are doing this with a simulation technique where we supply the number of steps allowed as part of the input, then how is that number of steps determined in general?
Thank you for your indulgence and explanations.
To answer them in reverse order because I think it's clearer:
2) There is no way to do so in general, but likewise there is no way to determine the program you should pass to the machine in general either. To use the restricted computer to solve the problem, one needs to discover the relevant program, and one can think of the polynomial as the additional part of the program code one needs to figure out. But if there is a polynomial that works, there will be a way of using this restricted universal machine to solve the problem via simulation in a way that scales only polynomially.
which brings us to...
1) That depends on the exact meaning of "solve". If by solve we mean solving via this simulation while the time scales only polynomially, then the answer is "yes". (We do so as shown above, and the process clearly fails for any problem where all programs that solve it scale exponentially time-wise.) There are weaker senses of "solve" that I'd guess still hold, but demonstrating that would mean proving various difficult conjectures. And there are some senses in which it might be false.
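As a rough illustration of that last point, under the same toy model as the earlier sketch: to use the restricted machine you supply the program *and* a polynomial, and the unary budget is computed from the polynomial before the simulation starts. The program `is_sorted_prog` and the polynomial p(n) = 3n below are made-up placeholders, not anything from the post.

```python
# Toy illustration: "solving" a problem with the restricted machine means
# supplying a program, an input, and a unary step budget derived from a
# polynomial discovered ahead of time.  (Illustrative sketch only.)

def bounded_simulate(program, x, unary_budget):
    """Same restricted machine as in the earlier sketch."""
    run = program(x)
    try:
        for _ in range(len(unary_budget)):
            next(run)
    except StopIteration as done:
        return done.value
    return None                      # cut off: the budget was too small


def is_sorted_prog(xs):
    """Toy polynomial-time program: one yield (step) per comparison."""
    for a, b in zip(xs, xs[1:]):
        yield
        if a > b:
            return False
    return True


def solve_with_restricted_machine(x):
    p_of_n = 3 * len(x)              # the polynomial we had to figure out
    budget = "1" * p_of_n            # writing this out already takes p(n) steps
    return bounded_simulate(is_sorted_prog, x, budget)


print(solve_with_restricted_machine([1, 2, 3, 5]))  # True
print(solve_with_restricted_machine([4, 1, 2]))     # False
```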
Happy to respond more!
I didn’t quite understand this sentence: "For virtually any class we might define there will be a way of handicapping the universal computer such that it is still able to solve all of the problems in the class while at the same time not breaking the class specific resource constraint." This seems to be asserting that any class of problems that is complete can be solved by a universal computer, but as far as I know it's still not known whether, for example, NP problems can be solved. From what you wrote earlier in the article I assume you agree with that, which means I am misunderstanding something.
So there are two different meanings of "solve" here. An earlier version of the draft actually contained a clarification but in the end I took it out.
The sense in which it is not known whether NP problems can be solved is whether there is a *polynomial-time algorithm* that solves them. There are brute-force exponential-time algorithms for all NP problems.
So all NP problems can be *reduced* (the second meaning of solve) to the universal computer problem in a straightforward way by submitting a program for one of those exponential-time algorithms. The simulation will then take exponential time, of course.
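For a concrete toy example of the kind of program one would submit, here is a brute-force subset-sum search in Python. Subset sum is just a stand-in for "some NP problem", and the code is mine rather than anything from the post; the point is only that the program is straightforward and takes exponential time, so the simulation does too.

```python
# Brute-force decision procedure for subset sum (an NP problem): try every
# subset, so the running time grows like 2^n.  Submitting a program like this
# to the universal computer "solves" the problem only in the reduction sense;
# the simulation still takes exponential time.

from itertools import combinations

def subset_sum_brute_force(numbers, target):
    """Return True iff some subset of `numbers` sums to `target`."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return True
    return False


print(subset_sum_brute_force([3, 34, 4, 12, 5, 2], 9))    # True  (4 + 5)
print(subset_sum_brute_force([3, 34, 4, 12, 5, 2], 100))  # False
```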
I feel like this might still not answer your question fully but it should give a sense of the difference. Thanks for the comment, much appreciated!
Speaking of constrained computational cost, this has been done within Ethereum to prevent attacks on the infrastructure: https://www.sofi.com/learn/content/what-is-ethereum-gas/
Your insight is very interesting. My very speculative question is whether there might be a way of looking at the universal computer’s resources such that it might be viewed as a handicapped version of a more powerful “super universal” computer? For instance at any time and state of knowledge the laws of physics in our universe will constrain the resources actually available to any physically realisable universal computer we could build [modulo that knowledge], but improved knowledge would relax those constraints somewhat? In this speculative vein the state of our knowledge would “guarantee” the related degree of handicap at all times. Apologies if this is too much “woo woo” speculation but interested in your thoughts.
It's definitely imaginable. For example the universal machine is usually thought to be finite state which is a very real limitation that I could see being relaxed. And as you indicate if laws of physics are different there are other possibilities as well. Hope this is a good enough answer and thank you for your comment!
"Clearly this shapeshifting capability has its limits. It’s difficult to imagine what kind of software you’d have to install on your laptop to turn it into a coffee grinder."
Is computational universality a bad name because of these limits? What does computational universality really mean then? What is the class of things that ENIAC can do?
Roughly speaking, the idea is that it can do any kind of information processing/computation. For example, what Turing managed to show is that if you have a computer with a finite number of states operating on an infinite tape, then any function that machine could compute, ENIAC could also compute. That is very general. Now in practice computers get connected to all kinds of input/output devices, which expands their use quite a bit. Hope that roughly answers the question, thanks for the comment!
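If it helps to see the shape of Turing's model, here is a tiny sketch in Python: a finite transition table acting on a tape, plus one simulation loop that works for any such table. The `flip_bits` machine is a toy example I made up; the universality point is that a general-purpose computer only ever needs this one loop.

```python
# Minimal Turing-machine simulator: a finite set of states (the transition
# table) operating on a tape.  Toy sketch; the tape here only grows to the
# right, which is enough for the example machine.

def run_turing_machine(transitions, tape, state="start", blank="_"):
    """transitions maps (state, symbol) -> (new_state, write_symbol, move)."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)


# Toy machine: scan right, flipping 0 <-> 1, and halt at the first blank cell.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(flip_bits, "10110"))  # 01001_
```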