This distinction is crucial for AI: unless it can be clearly made, the research will be aimless, and drawing this line is equivalent to giving "intelligence" a working definition. [Special Topic: working definition]
Since a working definition should be faithful to the common usage of the concept, we should start with how the word "intelligence" is used by people. However, like most words in a natural language, "intelligence" is used with many different senses in different contexts, so to define it "as it is", even in an edited version as in a dictionary, is still too messy to satisfy the other requirements of a good working definition. Therefore, every researcher in the field has tried to focus on a certain "essence" of the concept. Since human intelligence is the best example of intelligence as far as we know, it is natural that all the existing working definitions of "intelligence" generalize certain aspects of human intelligence, to the extent that they can also be applied to non-human systems.
But which aspect? Within the information system framework, we can describe a system from the inside, by talking about its goals, actions, and knowledge, or from the outside, by talking about its experience and behavior. Since it is easier to evaluate the system's outside activities, and all the differences on the inside will eventually show up on the outside, whether (or how much) a system is intelligent is usually judged from the outside.
The same problem arises with variations of the above working definition. In AI textbooks, it is common to define intelligence by human capability (though not necessarily human behavior). For example, master chess players are usually considered very intelligent, so if a computer can reach that level of capability, it should be considered intelligent, too. Though this opinion sounds natural, it suffers from several problems. First, a master-level chess-playing program usually has little capability in other fields, which conflicts with our intuition that intelligence is versatile. Furthermore, why does chess playing require intelligence, while solving many other problems does not? After all, computer systems nowadays have solved many problems better than any human can, but why do we still feel that they are not intelligent? What is missing in conventional computer systems?
If we take each concrete goal of a system as a "problem", and the related actions of the system as a "solution" to the problem, then there are two typical types of systems.
In one type of system, the same problem always gets the same solution. The best example of this type is a conventional computer program, where the same input data is always processed in the same way, and produces the same output data. Mathematically, such a program serves as a function that maps a (valid) input into a desired output. We can find the same kind of input-output mapping in many low-level animals, where the same stimulus leads to the same response. Since in such a system the input-output mapping is innate or inborn, I call it an "instinctive system".
On the contrary, in the human mind most of the problem-solution mappings are learned, so for the same problem the solution often changes over time. Furthermore, this change is not arbitrary, but the result of the system's adaptation to its environment. In this process, the system has to deal with problems by taking its past experience into consideration, and to work within the restrictions of available knowledge and resources. I call such a system an "intelligent system".
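The contrast between the two types of systems can be sketched in code. The following is a minimal, hypothetical illustration only, not an implementation of any actual system; the names (instinctive_system, IntelligentSystem) and the toy stimulus-response pairs are made up for this sketch.

```python
def instinctive_system(problem):
    # A fixed, innate input-output mapping: the same problem
    # always gets the same solution, like a pure function.
    table = {"light": "approach", "shadow": "retreat"}
    return table.get(problem)

class IntelligentSystem:
    # Here the problem-solution mapping is learned from experience,
    # so the solution to the same problem may change over time.
    def __init__(self):
        self.memory = {}

    def solve(self, problem):
        # Answer with the best solution known so far; returns None
        # when the system's knowledge is insufficient for the problem.
        return self.memory.get(problem)

    def learn(self, problem, solution):
        # Adaptation: new experience revises the mapping.
        self.memory[problem] = solution

agent = IntelligentSystem()
print(agent.solve("light"))        # None: no relevant experience yet
agent.learn("light", "approach")
print(agent.solve("light"))        # "approach"
agent.learn("light", "retreat")    # new experience revises the solution
print(agent.solve("light"))        # "retreat": same problem, new solution
```

The instinctive system behaves identically on every encounter with a problem, while the intelligent system's answer depends on its experience so far, which is the distinction the two paragraphs above draw.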
Intelligence, as the experience-driven form of adaptation, is the ability of an information system to achieve its goals with insufficient knowledge and resources. The content of this brief definition can be further explained as follows:
First, we can compare the intelligence of different systems according to the definition. If everything else is equal, but one system is faster than another in adapting to changes in the environment, then the former is more intelligent than the latter. The same is true if the former is open to more forms of input information, or is more efficient in using available resources.
Based on this kind of comparison, we can define a relative measurement of a system's intelligence: if a system can be compared with n other systems, and is more intelligent than m of them, then the system's degree of intelligence is m/n. Therefore, among comparable systems, the most intelligent one will have a degree of 1, while the least intelligent one will have a degree of 0.
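The relative measurement defined above is simple enough to sketch directly. In the following hypothetical illustration, numeric scores stand in for whatever pairwise comparison (adaptation speed, openness to input, resource efficiency) establishes that one system is more intelligent than another; the function name and the scores are assumptions made for this sketch.

```python
def degree_of_intelligence(system_score, peer_scores):
    # n comparable systems; m of them are less intelligent than
    # the given system; the degree of intelligence is m/n.
    n = len(peer_scores)
    m = sum(1 for s in peer_scores if system_score > s)
    return m / n

peers = [0.2, 0.5, 0.7, 0.9]
print(degree_of_intelligence(1.0, peers))  # 1.0: more intelligent than all peers
print(degree_of_intelligence(0.1, peers))  # 0.0: less intelligent than all peers
print(degree_of_intelligence(0.6, peers))  # 0.5: more intelligent than half of them
```

As the paragraph notes, the measure is only defined among comparable systems, which the single shared score dimension here makes explicit.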
Of course, this solution still cannot compare the degrees of intelligence of two arbitrary systems. However, it is good enough for this theory at the current stage. Hopefully, in the future we can develop a more general measurement of intelligence, based on new knowledge coming from progress in this field.