LM101-075: Can computers think? A Mathematician’s Response using a Turing Machine Argument (remix)
Episode Summary:
In this episode, we explore the question of what can computers do as well as what computers can’t do using the Turing Machine argument. Specifically, we discuss the computational limits of computers and raise the question of whether such limits pertain to biological brains and other non-standard computing machines.
Show Notes:
Hello everyone! Welcome to the 75th podcast in the podcast series Learning Machines 101. In this series of podcasts my goal is to discuss important concepts of artificial intelligence and machine learning in hopefully an entertaining and educational manner. This particular podcast is dedicated to my mom Sandy Golden who passed away on March 29, 2017 at the age of 83. She was a great educator and an amazing person who had a tremendous impact on my life, the lives of my family, the lives of her students and colleagues, and the lives of many others. My mom was a great personal inspiration to me and I really miss her. Since her birthday was December 12, 1934, I thought December 12th would be a good day to deliver “her podcast”. I’ve posted a hyperlink to a short commentary regarding my mom which appeared in the San Diego Jewish World by Donald Harrison in the show notes at the end of this episode on the website: www.learningmachines101.com.
Today, we will learn how mathematicians have approached the problem of characterizing the computational limits of computing machines. The podcast today has lots of examples but before jumping to the examples, let’s briefly discuss the core idea that will be covered today.
The basic idea is as follows. We will define a simple logical rule-based machine which can, in principle, replicate the computations of every possible modern electronic digital computer. This simple logical rule-based machine consists of a collection of IF-THEN rules and can be considered to be a special type of production system such as discussed in Episode 74. We will call this simple but very special logical rule-based machine a “Turing Machine”. We will then review a mathematical result which states that no matter how carefully we try to define the logical rules that govern the behavior of the Turing Machine, there will always be certain problems in deductive inference that cannot be solved by the machine. Since this simplified machine can represent the behavior of any possible electronic digital computer, and the simplified Turing Machine cannot solve certain problems, it follows that those problems cannot be solved by any possible computer.
This is an interesting argument because it is a theoretical argument about the limitations of what can and cannot be computed, and it is applicable not only to every possible computer that exists today but also to every possible computer that could be built in the distant future!
Ok…now that we have reviewed the key idea of this podcast…let’s discuss each of the components of this key idea. The first component is the concept of a “theoretical argument”. An experimental argument involves making some observations and gathering evidence to support some hypothesis. Experimental arguments are fundamentally inductive arguments. The result of an experimental argument could be called an “experimental result”. In contrast, a “theoretical argument” involves making certain assumptions about the world and then using deductive inference to generate the implications of those assumptions. The result of a theoretical argument could be called a “theoretical result” and is typically presented in the form of a mathematical theorem.
This is the power of mathematics. Using mathematical reasoning it is sometimes possible to figure out by logical deduction the outcome of an experiment without actually doing the experiment at all!! This type of mathematical reasoning works as follows. First, we try to write down all the assumptions characterizing the context of the experiment. Then we try to use logical deduction to determine what are the implications of those assumptions and come to conclusions regarding what will happen. If we can explicitly write down every single assumption in an exact manner and if we can explicitly document every specific logical deduction which we used to come to our conclusions, then we have what is called a theoretical result. Such theoretical results are summarized as theorems. A theorem states that IF we make certain assumptions about the world and the experimental context, THEN we can state the outcome of the experiment even though we never did the experiment at all!
Theorems are always true. However, just because a theorem is always true doesn’t mean the theorem makes a true prediction about the world in which we live. That is because the assumptions of the theorem may or may not be true of the physical world. That is, the IF part of the theorem is often an “abstraction” or “approximation” of the real world. For example, the laws of Newtonian gravitational physics are sometimes used to calculate the gravitational force between the earth and the sun where both the earth and sun are treated as “point masses”. However, this is just an approximation to reality.
Finally it is important to emphasize that the above discussion is not intended to argue the irrelevance of mathematics. I consider myself an applied mathematician. I am interested in useful mathematics which can help me solve real-world problems. If the assumptions and conclusions of the mathematical model make contact with reality, then mathematics is an amazingly powerful tool which plays a crucial role in the development and evaluation of complex systems such as artificially intelligent machines.
The Great Mathematician Hilbert worried that the field of mathematics was not built on sturdy foundations. All of mathematics is based upon logical deduction. In mathematics, as we previously noted, you make assumptions and then you prove theorems. That is, you do logical thought experiments exploring the implications of specific assumptions. Hilbert wanted to prove a theorem which said that if the field of mathematics came up with just the right set of IF-THEN rules describing mathematical results, then it would be possible by logical deduction to prove whether any statement in mathematics was true or false. Hilbert was unable to prove this theorem.
Although this problem may not seem as if it is related to artificial intelligence, it is in fact very central to the field. For example, suppose that we have a robot and we have provided the robot with a knowledge base in the form of IF-THEN rules. Some of these IF-THEN rules correspond to knowledge about how the world works while other IF-THEN rules correspond to knowledge about strategies for guiding behavior. When we provide the robot with a request, it tries to apply its IF-THEN rules in order to comply with the request. This is actually equivalent to proving a mathematical theorem where the premise of the theorem corresponds to the current state of the world and the conclusion of the theorem corresponds to your request to the robot. The robot must figure out how to apply the IF-THEN rules so that the request can be achieved given the current state of the world. Or equivalently, the robot must prove a theorem!
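To make this rule-application idea concrete, here is a minimal forward-chaining sketch in Python. The facts, rules, and the `forward_chain` function are hypothetical toy examples invented for illustration; this is not any production AI system, just the bare idea of firing IF-THEN rules until a requested conclusion is derived.

```python
# A toy forward-chaining engine: facts are strings, rules are
# (premises, conclusion) pairs. The "robot" complies with a request
# (the goal) by repeatedly firing IF-THEN rules whose premises hold.
def forward_chain(facts, rules, goal):
    facts = set(facts)
    changed = True
    while changed and goal not in facts:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)   # the rule "fires"
                changed = True
    return goal in facts

# Hypothetical world state and knowledge base for a toy robot.
rules = [
    (["door_closed"], "must_open_door"),
    (["must_open_door", "has_key"], "door_open"),
    (["door_open"], "can_enter_room"),
]
print(forward_chain(["door_closed", "has_key"], rules, "can_enter_room"))  # True
```

The engine keeps firing rules until the goal is derived or no rule adds anything new, which is exactly a (very crude) proof search: the initial facts are the premise and the goal is the conclusion of the theorem to be proved.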
Thus, Hilbert’s problem was not only a problem about proving mathematical theorems; it also had an equivalent interpretation as a problem about deriving plans of action from IF-THEN rules for robots! One of the first important advances in Artificial Intelligence was the development of computer programs that could automatically prove theorems from a collection of IF-THEN rules. The first such program, which was based upon propositional logic, was the Logic Theory Machine developed by Allen Newell, Herbert Simon, and J. C. Shaw in 1956. Since then, theorem proving has continued to be an important and powerful tool for building artificially intelligent systems. Automated Theorem Proving is widely used today in many applications including problems in microprocessor software and hardware design.
For many reasons, one might think that essentially any problem in artificial intelligence could be addressed by reformulating that problem as a theorem to be proved: the initial conditions of the problem and knowledge represented as IF-THEN rules form the premise, and the conclusion of the theorem corresponds to the desired behavior. An Automated Theorem Proving Machine would figure out which IF-THEN rules to select, and the order in which those IF-THEN rules should be applied, in order to generate the desired behavior of the intelligent machine. This is certainly a great idea which has had much success and continues to make critical contributions to work in the field of Artificial Intelligence. However, there are intrinsic limitations to this approach.
First, of course, there is a limitation of using IF-THEN rules to represent knowledge as discussed in Episode 74 and Episode 71 in the Learning Machines 101 series. I will let you refer to those episodes to learn more about these intrinsic limitations of knowledge representation using IF-THEN rules. But second, there is an even more fundamental limitation associated with the very concept of a theorem-proving approach. There are some theorems that just cannot be proved regardless of how clever you are in designing your IF-THEN rules!
This second fundamental limitation was identified by the Great Mathematician Gödel in the year 1931. Gödel showed that no matter how hard you try, you will never be able to come up with a set of IF-THEN rules for a sufficiently rich collection of statements that will allow you to determine whether each member of the set of statements is definitely true or definitely false. A famous example of such a situation, which is closely related to the approach Gödel used in his analysis, is called the “Liar’s Paradox”. The Liar’s Paradox can be described as follows.
Suppose we are trying to cross a bridge but there is a monster troll who guards the bridge. The monster troll will let us cross the bridge but only if we ask the troll a question that it can not answer correctly. After consulting briefly with Gödel, we ask the troll the following question:
“Monster Troll! Here is the question we ask in order to cross the bridge! Please tell us whether or not the meaning of the statement on this piece of paper is either TRUE or FALSE. The statement on this piece of paper is: “THIS STATEMENT IS FALSE.””
Note that if the Monster Troll decides the meaning of the statement on the piece of paper is true then this is a contradiction because the meaning of the statement on the piece of paper is false. If the Monster Troll decides the meaning of the statement on the piece of paper is false then this is a contradiction because that implies that the meaning of the statement on the piece of paper is true!
The “Liar’s Paradox” as illustrated by the Monster Troll story essentially implies that for certain collections of IF-THEN statements representing logical rules, there will always exist some statements that can’t be proved to be either true or false.
In other words, some problems in logic (including some theorem-proving problems in artificial intelligence) are not solvable using logical deduction. This appears to be a serious limitation regarding what types of computers can be built.
A few years after Gödel’s analysis of Hilbert’s problem, in the year 1936 (10 years before the first electronic digital computer, ENIAC, was completed in 1946), the Great Mathematician Alan Turing was interested in pursuing a “thought experiment”. Specifically, he was trying to convince other researchers as well as himself that if a logical machine were actually built, it could do all sorts of amazing things. We will call this a Turing Machine. The Turing Machine can be interpreted as a type of production system for manipulating logical rules. We discussed production systems in Episode 74 in the Learning Machines 101 podcast series.
Here is a description of Turing’s logic machine which is called a Turing Machine.
Turing imagined a logic machine that had the following three ingredients:
(1) A finite number of IF-THEN rules
(2) An infinite amount of memory storage
(3) The machine can run forever if necessary
A machine that incorporates these 3 elements is said to have implemented an “effective procedure”. Every modern computer can be viewed as a special case of this more general machine so if we understand the limitations of computing for this machine then this helps us understand the limitations of computation for a wide range of computing machines.
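As a sketch of these three ingredients, here is a tiny Turing-machine simulator in Python. The rule-table format and the bit-flipping example machine are my own hypothetical simplifications: the dictionary-based tape supplies the "infinite" memory (cells spring into existence on demand), and the `max_steps` cap stands in for "run forever if necessary", since a real simulation must be interruptible.

```python
# A minimal Turing machine: a finite rule table (IF state+symbol THEN
# write, move, next-state), an unbounded tape, and a loop that runs
# until a halting state is reached.
def run_turing_machine(rules, tape, state="start", halt="halt", max_steps=10_000):
    tape = dict(enumerate(tape))   # dict = tape that grows without bound
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, "_")            # "_" denotes a blank cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example rule table: flip every bit, then halt at the first blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(rules, "1011"))  # 0100_
```

Every program on a modern computer can, in principle, be recast as a (much larger) rule table of exactly this form, which is what makes the Turing Machine a useful worst-case model of all digital computation.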
This is the first amazing contribution of Turing!!! When we are trying to develop a THEORETICAL RESULT, in many cases 50% of the problem is coming up with the right assumptions. Proving an amazing theoretical result is always a lot easier when you have the right assumptions. In this case, Turing formulated a collection of assumptions which to some extent capture the essence of all “computing”. Not just electronic computers but also biological computers such as human and animal brains!!!! Note, however, the key assumption of Turing’s analysis is that “computing” means “computing with logical rules”.
We will define a Turing Machine as “solving a computational problem” if it stops in a finite number of steps following the formulation of the mathematician Kleene. This is a little different from Turing’s original definition.
Ok…so now…Turing proved the following Theorem.
- it doesn’t matter how many rules you provide the computing machine…
- it doesn’t matter how much time you give the computing machine…
- it doesn’t matter how much knowledge (i.e., the IF part or Assumptions of the IF-THEN rules) you give the machine…
there will always be some computational problems that the computing machine will never be able to answer. This theorem turned out to be another way of stating Gödel’s theorem but it was formulated in terms of the problem of computing.
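The modern form of this result is the undecidability of the halting problem, and the heart of Turing's proof is a self-reference trick much like the Liar's Paradox. Below is a hedged Python sketch of that diagonal argument; the `halts` oracles passed in are hypothetical stand-ins, since the whole point is that no correct one can exist.

```python
# Diagonal argument sketch: suppose halts(f) were a program that always
# correctly answers the question "does the program f halt when run?".
def make_paradox(halts):
    def paradox():
        if halts(paradox):
            while True:          # halts() said "halts" -> loop forever
                pass
        return "halted"          # halts() said "loops" -> halt at once
    return paradox

# Whatever a claimed halts() says about paradox, paradox does the
# opposite. Here a (wrong) oracle claims paradox loops forever, so
# paradox promptly halts:
p = make_paradox(lambda f: False)
print(p())  # prints "halted", contradicting the oracle's claim
```

An oracle answering `True` would be contradicted too (the machine would loop forever), so every candidate `halts` is wrong about some input: no finite rule table decides halting for all programs.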
In Chapter 27 of the fiction book “The Hitchhiker’s Guide to the Galaxy” by Douglas Adams, a race of hyperintelligent pandimensional beings builds a fantastic supercomputer the size of a small city. They build the supercomputer in the hopes that it will be able to provide answers to their social problems as well as answers to all problems associated with the universe.
This supercomputer was called “Deep Thought”. On the day of the “Great On-Turning”, two of the hyperintelligent pandimensional beings approach the supercomputer and ask the supercomputer for “The ANSWER to Life, the Universe, and Everything”. Deep Thought responds that it would take about Seven and a Half Million years to figure out the answer. The pandimensional beings were disappointed but agreed to wait for the answer.
Seven and a half million years later, a great celebration was held among the hyperintelligent pandimensional beings since this was the day that Deep Thought had said that the answer would be revealed.
All the hyperintelligent pandimensional beings gathered to hear Deep Thought’s Answer. Deep Thought began to talk. His first words were: “Well…I have the answer but you guys are not going to like this answer!” The hyperintelligent pandimensional beings were not dissuaded…indeed they had waited 7.5 million years for the answer…they wanted to hear it!!! And so after some persuasion, Deep Thought provided them with “The ANSWER to Life, the Universe, and Everything”. Deep Thought’s answer was: 42!
There was a long dead silence as all of the hyperintelligent pandimensional beings tried to absorb Deep Thought’s statement.
Finally, in great consternation, the hyperintelligent pandimensional beings cried out that they did not understand the meaning of this answer and asked the Stupendous Supercomputer Deep Thought to explain. But the Stupendous Supercomputer simply said that if the hyperintelligent pandimensional beings had a better understanding of the question, then the answer 42 would be more meaningful to them. To this, the hyperintelligent pandimensional beings asked the Stupendous Supercomputer Deep Thought if he could provide the QUESTION associated with the answer 42 in more detail.
The Stupendous Supercomputer Deep Thought responded that its computing power was limited and even if it worked on this problem forever it would be incapable of figuring out the Ultimate Question even though he had the computational power to provide the ANSWER to the Ultimate Question. The pandimensional beings were upset with this news, but Deep Thought managed to make them feel a little better by saying the following.
Deep Thought explained that it (Deep Thought) was capable of designing an even more powerful computer which, in fact, after ten million years of computations would be able to compute the Ultimate QUESTION to Life, the Universe, And Everything whose ANSWER is 42.
Deep Thought described this new and infinitely more powerful supercomputer (which Deep Thought would design) as a computer of “infinite and subtle complexity that organic Life itself shall form part of its operational matrix”. Moreover, the new computer will take ten million years to complete the calculation of the Ultimate Question. Deep Thought refers to this new computer as: Earth.
And so the Earth was created by Deep Thought……
The concept of the Earth as a giant supercomputer is intriguing. We can visualize the evolution of all physical processes on Earth at multiple levels: subatomic, atomic, molecular, organic, geological. The current state of the Earth can be visualized as a long list of features each of which takes on the value of “present” or the value of “absent”. As each physical process evolves from one time instant to the next, the state of the Earth is recomputed from the previous state of the Earth according to a large collection of rules.
If these rules could be represented as IF-THEN rules then why couldn’t we imagine the Earth to be some sort of super advanced computer that is doing some sort of super advanced calculation?
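A toy version of this picture is an elementary cellular automaton: each "feature" is present (1) or absent (0), and the next state of every cell is computed from its local neighborhood by a fixed IF-THEN lookup table. The sketch below uses Wolfram's Rule 110, which is known to be Turing-complete; the wrap-around boundary is just a convenience for this demo.

```python
# Elementary cellular automaton (Rule 110): the next state of each
# cell is an IF-THEN lookup on the (left, self, right) neighborhood.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

state = [0] * 15 + [1]          # one "present" feature on the right edge
for _ in range(8):
    print("".join("#" if c else "." for c in state))
    state = step(state)
```

Each printed row is one "time instant": the entire world-state is recomputed from the previous one by nothing but rule lookups, which is exactly the picture of the Earth as a rule-based computer.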
On the other hand, one might argue that the state variables of the planet Earth are not true or false but rather are numerical. If this argument is correct, then certain features of the physical dynamics of the Earth cannot be represented as IF-THEN rules. In that case, either we must conclude that the Earth is not a computer, or it is a computer which can’t necessarily be represented as a Turing machine and maybe is even more powerful than a Turing machine. Or, in other words, perhaps the concept of a “computer” should not be limited to a machine that is restricted to symbolic IF-THEN rule-based computations?
We can develop a generalization of the concept of a symbolic rule-based machine such as a Turing machine by introducing the idea of a dynamical system. Every Turing machine can be represented as a dynamical system, but there are many dynamical systems that cannot be represented by Turing machines. A dynamical system consists of a collection of variables. The particular value a variable takes on is called the variable’s state. The state of a variable in a Turing machine is restricted to taking on the values of 1 (true) and 0 (false). However, in a dynamical system the range of possible values a state variable can realize is not restricted to the values 0 or 1 but is extended to many possible values. In fact, a state variable can have a numerical state drawn from an infinite collection of values.
Consider the problem of computing the number pi, which is the ratio of the circumference of a circle to its diameter. This is an irrational number with an infinite number of digits. It is impossible for a Turing machine to represent the number pi exactly because a Turing machine can only represent numbers with a finite number of digits. However, a Turing machine can compute the value of pi to any desired number of digits. This is a common theme in the foundations of mathematics. We can define the number pi, talk about the number pi, and use the number pi, but we can never actually “know” the actual number pi. All we can do is compute the number pi to an arbitrary number of decimal places. This example suggests that any computation requiring continuous states can be handled by a discrete-state machine, up to a level of precision which can be arbitrarily chosen.
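As a sketch of how a discrete machine pins down pi digit by digit, here is an integer-only Python computation using Machin's formula pi/4 = 4·arctan(1/5) − arctan(1/239). The helper names and the choice of 10 guard digits are my own illustrative choices; the point is that only finite integer arithmetic is used, exactly what a Turing machine can do.

```python
# Compute pi to any requested number of digits using only integer
# arithmetic (Machin's formula), as a discrete-state machine could.
def arctan_inv(x, scale):
    # arctan(1/x) * scale, via the alternating Taylor series.
    term = total = scale // x
    n, sign = 1, 1
    while term > 0:
        term //= x * x
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

def pi_digits(digits):
    scale = 10 ** (digits + 10)           # 10 extra "guard" digits
    pi = 4 * (4 * arctan_inv(5, scale) - arctan_inv(239, scale))
    return str(pi)[: digits + 1]          # leading "3" + requested digits

print(pi_digits(30))  # 3141592653589793238462643383279
```

Asking for more digits just means a larger `scale` and a longer (but still finite) run: the machine never holds pi itself, only ever a finite prefix of it.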
Another feature of a Turing machine is that it operates in discrete-time rather than continuous-time. A dynamical system generates a particular system state at some future time based upon the current system state and current time. Thus, the dynamical system is a mathematical model of a physical machine. Many physical systems are naturally modeled as continuous-time systems rather than discrete-time systems. For example, Newton’s Laws of motion are based upon the concepts of velocity and acceleration which involve the concepts of continuous-time rather than discrete-time. As another example, an electric razor, motorcycle, and sun-powered sundial are examples of machines that are not based upon digital logic and operate using numerically-continuous-states and in continuous-time. Perhaps some types of computers might be naturally represented as continuous-time systems?
If we had a continuous-time Turing machine, what would that look like? One example of a continuous-time Turing machine is called the Zeno machine. The Zeno machine performs its first computation step in 1 second, the second computation step in ½ second, the third computation step in ¼ second, the fourth computation step in 1/8 second, and so on. If you go to your bookshelf and take out your calculus book, you will find that the sum of the numbers 1 + ½ + ¼ + 1/8 + 1/16 + 1/32 + … going on into infinity adds up to 2. So basically, the Zeno machine is capable of an infinite number of computations in 2 seconds! So, first, if we could build a Zeno machine this would be very useful and REALLY GREAT because any computation, no matter how complicated, could be completed in 2 seconds using the Zeno machine! On the other hand, the Zeno machine is not a very helpful model of computation for the purpose of distinguishing time-consuming versus non-time-consuming computations for a digital computer since all computations take at most 2 seconds! Since one of the key benefits of postulating a Turing machine is to help distinguish computations that are easy versus computations that are difficult, the idea of a continuous-time Turing machine such as a Zeno machine seems like a fun but not very useful idea.
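The geometric-series claim is easy to check numerically. The partial sums below give the total time the Zeno machine needs for its first n steps (the function name is just for this illustration):

```python
def zeno_time(steps):
    # Total seconds for the Zeno machine's first `steps` steps:
    # 1 + 1/2 + 1/4 + 1/8 + ... (a geometric series with ratio 1/2).
    return sum(0.5 ** n for n in range(steps))

for n in (1, 2, 4, 10, 30):
    print(f"time for the first {n:2d} steps: {zeno_time(n):.9f} seconds")
# The partial sums approach but never reach 2, so the limit of
# infinitely many steps fits inside 2 seconds.
```

Each additional step closes half of the remaining gap to 2, which is why no finite number of steps ever reaches 2 but all infinitely many of them fit within it.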
On the other hand, mathematicians and computer scientists have begun exploring more general notions of computation involving systems with either (or both) continuous-time dynamics and numerically continuous states. Computations with such machines have been referred to as Super-Turing computation or hypercomputation. These computations use more general dynamical systems which are not representable as Turing machines: they may be physically realizable as machines, yet they can’t be exactly represented on a digital computer (or Turing Machine). Such dynamical systems are designed to be mathematical models of the physical world and are not limited to symbolic representations that involve IF-THEN rules.
Recent theoretical work by Blum, Shub, and Smale and others is highly mathematical yet suggests that it might be reasonable to extend the current definition of “computable” to go beyond IF-THEN rules. If this were indeed the case, then it would follow that biological computers such as human brains might be capable of computing things which would be impossible using Turing machines or, equivalently, standard computer technology.
Interestingly enough, in this series of podcasts we will discuss a number of learning machines (some of which are called “statistical learning machines” while others are called “artificial neural networks”) whose natural formulation involves continuous states. Like biological brains, such statistical learning machines might be capable of physically realizable super-Turing computations as well.
Still, the concepts of “super-Turing computation” and “hypercomputation” are not mainstream topics in computer science and artificial intelligence. The reason for this is that one could argue that if one had a Super-Turing Computer which was in principle more powerful than a Turing computer, then it should always be possible to approximately simulate the Super-Turing computer on a digital computer. If so, then the computational power of a Super-Turing computer would be reduced to the computational power of a Turing machine since a Turing machine can represent any computation that is implemented on any digital computer! On the other hand, one might argue that this theoretical response is based upon an “approximation assumption” which has not been properly formalized.
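This "approximation assumption" is exactly what numerical simulation of a continuous-time dynamical system does every day. As a sketch, the forward Euler method below discretizes the continuous flow dx/dt = −x, whose exact solution is x(t) = e^(−t); the approximation error shrinks as the digital machine takes smaller steps, but it never vanishes entirely, which is the gap the hypercomputation debate turns on. The function and step sizes are my own illustrative choices.

```python
import math

def euler(x0, t_end, dt):
    # Discrete-time approximation of the continuous-time system
    # dx/dt = -x, stepping the state forward in increments of dt.
    x, t = x0, 0.0
    while t < t_end - 1e-12:
        x += dt * (-x)          # one discrete step of the dynamics
        t += dt
    return x

exact = math.exp(-1.0)          # true continuous-time answer at t = 1
for dt in (0.1, 0.01, 0.001):
    err = abs(euler(1.0, 1.0, dt) - exact)
    print(f"dt = {dt:5.3f}  approximation error = {err:.6f}")
```

Halving the step size roughly halves the error here, so the digital simulation can be made as accurate as desired, but only approximately and at increasing cost; whether "as accurate as desired" is the same as "computes the same thing" is precisely the unformalized point.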
One item on my future reading list is to take a closer look at a book published by Professor Hava T. Siegelmann which is titled “Neural Networks and Analog Computation: Beyond the Turing Limit” (Progress in Theoretical Computer Science). The book is designed to introduce the concepts of computational complexity to a broader audience and explore possible alternative models of computation involving analog computation machines which may be more powerful than the classical Turing machine. A hyperlink to this book can be found in the show notes of this episode which is Episode 75 of Learning Machines 101.
Another item on my future reading list will be to take a closer look at the Special Issue on Hypercomputation in the journal Applied Mathematics and Computation which was published in 2006. A hyperlink to this special issue can be found in the show notes on the website: www.learningmachines101.com. This special issue on hypercomputation contains a variety of articles discussing issues of super-Turing computers and hypercomputation. Here are a few of the more intriguing titles of some of the articles in this special issue.
- The case for hypercomputation by Mike Stannett
- A hypercomputational alien by Udi Boker and Nachum Dershowitz
- How much can analog and hybrid systems be proved (super-)Turing by Olivier Bournez
- The Church-Turing thesis: Still valid after all these years? By Anthony Galton
- Analog computation beyond the Turing limit by Jerzy Mycka
- Relativistic computers and the Turing Barrier by Istvan Nemeti and Gyula David
- The many forms of hypercomputation by Toby Ord
But the most intriguing title was the title of the article associated with the author who was invited to provide the background for all of these articles on hypercomputation and super-Turing computers. This author was Martin D. Davis, an American mathematician who has won numerous prizes in mathematics and was selected in 2012 as a fellow of the American Mathematical Society. Professor Davis’s expertise is precisely in the area of investigating issues associated with Gödel’s Theorem. The title of his paper was: “Why there is no such discipline as hypercomputation”. I will now quote the first two sentences of his introduction to this special issue on hypercomputation and super-Turing computers published in the 2006 Special Issue on Hypercomputation from the journal Applied Mathematics and Computation.
“The editors have kindly invited me to write an introduction to this special issue devoted to hypercomputation despite their perfect awareness of my belief there is no such subject. …In [previous published work, for example,] I rather thoroughly debunked the claims of two of the leading proponents of this supposed branch of learning….”
The value of alternative theories of computation is therefore not clear. I need to think some more about these issues. However, since this is my podcast, here are my current thoughts.
First, one should keep in mind that a major interest in the field of computer science is understanding what types of computations are going to be easy or difficult using digital computer technology. Computer scientists are not pursuing research for the purpose of developing theories about what types of computations are going to be easy or difficult using biological brain computers or analog computers. Similarly, mathematicians are concerned with proving theorems and, as we mentioned previously, proving theorems is formally equivalent to digital computation. By the way, mathematicians prove lots of theorems about dynamical systems in continuous-time and continuous-space which could be interpreted as computational mechanisms!
A major interest in the field of computational neuroscience, on the other hand, is understanding what types of computations are going to be easy or difficult to implement in biological brains. Computational neuroscientists are simply not interested in developing theories about what types of computations are going to be easy or difficult using any arbitrary digital computer that has been constructed in the past or could ever be constructed in the future. New types of machine learning algorithms formulated as dynamical systems in continuous-time and continuous-space may easily solve classes of important computational problems which are difficult for conventional digital computers to solve. In such situations, one might consider extending the definition of the fundamental concept of computation….but we should be careful here about throwing out the baby with the bathwater…
So…the take-home question is: If one had a Hypercomputer or Super-Turing computer and you programmed a Turing Machine or digital computer to simulate that Hypercomputer, would that be beneficial or just a waste of time?
Thank you again for listening to this episode of Learning Machines 101! I would like to remind you also that if you are a member of the Learning Machines 101 community, please update your user profile and let me know what topics you would like me to cover in this podcast. You can update your user profile when you receive the email newsletter by simply clicking on the: “Let us know what you want to hear” link!
If you are not a member of the Learning Machines 101 community, you can join the community by visiting our website at: www.learningmachines101.com and you will have the opportunity to update your user profile at that time. You can also post requests for specific topics or comments about the show in the Statistical Machine Learning Forum on Linked In.
From time to time, I will review the profiles of members of the Learning Machines 101 community and comments posted in the Statistical Machine Learning Forum on Linked In and do my best to talk about topics of interest to the members of this group!
And don’t forget to follow us on TWITTER. The twitter handle for Learning Machines 101 is “lm101talk”!
Also please visit us on ITUNES and leave a review. You can do this by going to the website: www.learningmachines101.com and then clicking on the ITUNES icon. This will be very helpful to this podcast! Thank you so much. Your feedback and encouragement are greatly valued!
Keywords: Features, Logical Rules, Production Systems, Superturing, Hypercomputation, Theorem-Proving
Further Reading:
- The Illustrated Hitchhiker’s Guide to the Galaxy by Douglas Adams (Harmony Books, New York, 1994). [See Chapter 27. Easy reading!!!]
- Charles Petzold. The Annotated Turing: A Guided Tour Through Alan Turing’s Historic Paper on Computability and the Turing Machine (a little advanced but fun to skim). (Wiley, 2008).
- What is Gödel’s Theorem? (Henriksen, 1999, Scientific American) http://www.scientificamerican.com/article/what-is-Gödels-theorem/
- Siegelmann, Hava (1999). Neural networks and analog computation: Beyond the Turing Limit (Progress in Theoretical Computer Science). https://www.amazon.com/Neural-Networks-Analog-Computation-Theoretical/dp/0817639497
- The Wikipedia article on Hypercomputation, which is relatively readable: http://en.wikipedia.org/wiki/Hypercomputation
- Special Issue on Hypercomputation in the journal Applied Mathematics and Computation (https://www.sciencedirect.com/journal/applied-mathematics-and-computation/vol/178/issue/1). Edited by F. A. Doria and J. F. Costa (2006).
- Martin Davis (2004). The myth of hypercomputation. Christof Teuscher (Ed.), Alan Turing: Life and Legacy of a Great Thinker, Springer, Berlin (2004), pp. 195-212
- Broersma, Stepney, Wendin (2017). Computability and complexity of unconventional computing devices. https://arxiv.org/abs/1702.02980 (very readable…complete paper is available at no charge..just click on PDF link).
- Blum’s review of her work on super-Turing computation, which is very relevant to this discussion: http://www.cs.cmu.edu/~lblum/PAPERS/TuringMeetsNewton.pdf (still pretty hard to understand but easier than the Blum, Shub, and Smale paper below).
- Blum, L., M. Shub, and S. Smale, “On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines”, Bulletin of the AMS, Vol. 21, No. 1, 1-46 (July 1989). [requires advanced knowledge]
Copyright Notice:
Copyright © 2014-2018 by Richard M. Golden. All rights reserved.