On the other hand, since human intelligence is just a special case of intelligence in general, this theory makes no attempt to explain human-specific properties and mechanisms. Therefore, one of the biggest challenges for a general theory of intelligence is to separate the intelligence-general aspects from the human-specific aspects in the literature of psychology, linguistics, philosophy, and even neuroscience, because in these fields little effort has been made to distinguish "intelligence" from "human intelligence" (and the same is true for "cognition", "mind", "thinking", etc.).
The working definition of intelligence introduced in Section 2.1 is indeed consistent with our knowledge about human intelligence, without being bound by human-specific (biological, evolutionary, or social) factors.
Jean Piaget (1950) said intelligence "is the most highly developed form of mental adaptation", and there is a huge psychological literature documenting the adaptive nature of the human mind. The restrictions on human cognition are also not a new discovery for psychologists. For example, an influential textbook on Cognitive Psychology, Medin and Ross (1992), "is organized around the idea that much of intelligent behavior can be understood in terms of strategies for coping with too little information and too many possibilities". Indeed, if the human mind always had enough knowledge and resources, most cognitive faculties would not have been developed, either at the species level or at the individual level.
The only non-intuitive aspect of the definition is its treatment of "intelligence" as a meta-capability, rather than a class of capabilities for solving concrete domain problems. In everyday usage, "intelligence" indicates both concrete problem-solving capabilities (in playing chess, proving theorems, solving puzzles, composing music, etc.) and the meta-level learning capability (in acquiring the above capabilities). However, as analyzed in Section 2.2, though in the human mind the two types are closely correlated, in general this is not necessarily the case. In AI and computer science, there have been many systems with strong problem-solving capability in limited domains, but with little ability to learn and adapt. When these two types of capability must be distinguished, it seems more natural to call the domain-dependent and problem-specific capability "skill" or "expertise", and reserve the term "intelligence" for the domain-independent and general-purpose meta-capability by which the skills are obtained.
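To make the contrast concrete, the following sketch (in Python; the class names and the trivial table-based "learning" are hypothetical illustrations of mine, not part of this theory) separates a fixed, domain-specific "skill" from the meta-capability that acquires such skills:

```python
# Hypothetical illustration of the "skill" vs. "intelligence" distinction.
# A Skill is a fixed, problem-specific solver; Intelligence is the meta-capability
# that acquires and revises skills from experience.

class Skill:
    """A fixed, domain-specific capability: it solves, but never adapts."""
    def __init__(self, solutions=None):
        self.solutions = dict(solutions or {})   # problem -> solution, fixed

    def solve(self, problem):
        return self.solutions.get(problem)       # useless outside its domain


class Intelligence:
    """The meta-capability: it obtains and revises skills from experience."""
    def __init__(self):
        self.skill = Skill()                     # starts with no expertise

    def learn(self, problem, solution):
        self.skill.solutions[problem] = solution  # experience changes the skill

    def solve(self, problem):
        return self.skill.solve(problem)


expert = Skill({"mate_in_one": "Qh7#"})          # strong in its domain, cannot adapt
learner = Intelligence()                         # weak at first, but can adapt
learner.learn("mate_in_one", "Qh7#")
```

The point of the sketch is only the asymmetry: the expert's competence is frozen at design time, while the learner's competence is a function of its experience.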
Though this theory is not proposed to compete with psychology, linguistics, etc., in the detailed description of the human mind, it does contribute to the understanding of human intelligence by providing new interpretations, explanations, and justifications, to be introduced in the following chapters.
Instead of taking "AI" to mean "what the AI researchers do", this theory argues for a paradigm shift in the research, by redefining the research goal of the field of AI. As analyzed in the previous sections, most of the existing projects in the AI field are not aimed at intelligent systems, as defined in Section 2.1. Though such research has its theoretical and practical value, it still belongs largely to the field of Computer Science.
The common usage of a term, including "AI", is often formed in history by all kinds of accidental reasons, rather than by rational considerations on what the term should mean. Therefore, we cannot expect the common usage of AI to be changed very soon, if that will ever happen. It is for this reason that some researchers, including myself, have begun to use alternative terms, like Artificial General Intelligence (AGI), to mean what they think AI should mean. [Special Topic: AI or AGI]
No matter which label is used, the difference between this theory and the current mainstream AI research can be clearly recognized by comparing the working definition given in Section 2.1 with the ones that define intelligence as human problem-solving capabilities. In the following chapters, I will show the implications of this difference.
As a result, though this book proposes a high-level theory for AI, it does not provide solutions to many existing "AI problems". Instead, it dismisses many of them as problems in computer science, and clarifies some "real AI problems", with suggested solutions. Concretely, this theory will build a normative model for adaptive systems working with insufficient knowledge and resources, by discussing the design of such systems, as well as their performance. After reading this book, hopefully the readers will see that such systems can be built using existing technology, and that the systems built in this way will indeed show properties that are usually associated with the notion of intelligence. Of course, given the nature of this book, the description will be conceptual, not technical; the technical details are provided in my other publications.
Most people will agree that some animals are more "intelligent" than others, though they may have different opinions on how to judge and compare them on this property. For the current discussion, we do not need to get into those details, but only to decide on the nature of intelligence when the concept is used in the study of animal behavior.
The study of animal intelligence usually happens in fields like "animal cognition" or "comparative psychology". In these studies, animal behaviors are often categorized into those that are mostly attributed to "instinct" and those that are mostly attributed to "intelligence", though the two are often tangled together in concrete cases. A discussion of this topic can be found in the book "Instinct and Intelligence: Behavior of Animals and Man" by S. A. Barnett (1967). Even though the concrete situations are complicated, the general principle behind this distinction remains clear: "intelligence" is responsible for learned behavior that depends on the experience of an individual, while "instinct" is responsible for innate behavior that depends on the heritage of a species. This distinction has nothing to do with the complexity of a behavior, nor with whether it is similar to a human behavior. Therefore, in this field intelligence is defined in the same way as proposed in this theory.
Since animal learning is simpler than human learning, its study can make an important contribution to a theory of intelligence in general. For example, Pavlovian conditioning shows how a system (an animal) builds and adjusts simple causal knowledge according to its past experience, and uses it to predict the future. Though the details of the process vary from species to species, we can expect the same principle to be followed by all intelligent systems.
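As a rough illustration of this principle (not of the mechanism proposed in this book), the following sketch uses the Rescorla-Wagner rule, a standard formalization of Pavlovian conditioning, to show how a single association strength is built and revised from experience and then used as a prediction; the learning rate and variable names are my own assumptions:

```python
# A minimal Rescorla-Wagner style sketch of Pavlovian conditioning (illustrative
# only). The association strength v between a conditioned stimulus (a bell) and
# an unconditioned stimulus (food) is adjusted on each trial by the prediction
# error, and v itself serves as the system's prediction of the outcome.

def condition(v, outcome, rate=0.1):
    """One trial: move the association strength v toward the observed outcome."""
    return v + rate * (outcome - v)

v = 0.0
for _ in range(30):          # acquisition: bell repeatedly paired with food
    v = condition(v, outcome=1.0)
print(f"prediction after pairing:    {v:.2f}")   # close to 1.0

for _ in range(30):          # extinction: bell presented without food
    v = condition(v, outcome=0.0)
print(f"prediction after extinction: {v:.2f}")   # decays back toward 0.0
```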
On the other hand, there are certain special features of animal learning that do not have much to do with intelligence in general. For example, many behavioral changes in animals are mainly caused by maturation, and therefore have more to do with instinct than with intelligence. However, when the two processes happen at the same time, it is very hard to separate them. Actually, the same issue also exists in human beings. It is well known that certain skills must be learned before a certain age, after which they are very hard to acquire. This phenomenon is not necessarily there in intelligent computers, which, unlike biological intelligent systems, do not grow up from a simpler system.
A similar situation happens when the "degree of intelligence" is considered. In both analysis and development, the study should start with simple systems of relatively low intelligence, then move up to more intelligent ones. However, there is no strong argument (though some researchers believe it) that in AI research we should first target insect-level intelligence, then mammal-level, and eventually human-level, because AI does not have to follow the exact evolutionary path that led to human intelligence. [Special Topic: intelligence and evolution]
Like the situation of animal intelligence, a group system can be analyzed in terms of its "instinct" and "intelligence", that is, its fixed behavior and learned behavior. When such a system shows the relevant properties, we may want to refer to them as "group intelligence", "collective intelligence", or "swarm intelligence". In AI research, technical terms on related topics include "distributed intelligence" and "multi-agent system".
It is worth the trouble to introduce a new concept here, because the behavior of a group of systems often cannot be seen as the sum, or average, of that of the individuals. Instead, the various relations among the individuals play an important role in the analysis. On the other hand, the capability of each individual obviously still matters for the whole group.
We are going to return to this topic in Chapter 6, where we will focus on certain aspects of group systems, after we have a more detailed discussion of individual systems.
A truly general theory of intelligence should also cover alien intelligence, but since we have not found any, there is no knowledge of this type of intelligence that we can use. Even so, the possibility allows us to carry out some interesting thought experiments, which will help the establishment of such a theory.
Here we are not going to analyze the proposed arguments for and against the existence of alien intelligence. Instead, let us talk about its recognition, that is, how can we recognize an alien intelligence when we meet a suspect in the future?
First, the concept of "alien intelligence" seems to be considered meaningful and not self-contradictory; that is, we all agree that it is possible for some unknown and non-human objects to be considered intelligent when we meet them one day. This acknowledgment itself already excludes several popular working definitions of "intelligence", since any definition that identifies intelligence with human behaviors or with the human brain would rule out alien intelligence by stipulation.
Unless we attribute alien intelligence only to "parallel universes", there is no reason for us to expect an alien intelligence to have gone through the same evolutionary path in the same environment, and therefore to end up with the same behavior or capability as human beings. To judge its intelligence using the Turing Test sounds ridiculous. It is easy to imagine that there are things we can do better, and other things they can do better.
In such a context, for the concept of "alien intelligence" to make any sense, what needs to be checked is not the alien's behavior or problem-solving capability, but its learning capability, that is, whether it can adapt to a novel environment. Once again, the working definition proposed in this theory is more appropriate here than the alternative definitions.
Though alien intelligence is not the focus of this book, hopefully the conclusions developed in this theory will help the progress in that field.