Generally speaking, an adaptive system changes its behavior in the direction of improving its problem-solving (that is, goal-achieving) ability.
As information systems, adaptive systems and instinctive systems have fundamental differences.
A well-known presentation of this point of view is given by David Marr (1982), who identified three levels of analysis to work through when we want to understand a cognitive function and then reproduce it in a computer: the computational level, which specifies what is computed and why, that is, the input-output mapping to be realized; the algorithmic level, which specifies the representation and the algorithm by which the computation is carried out; and the implementational level, which specifies how the representation and algorithm are realized physically.
However, a computer system designed and built in this way is not adaptive. The system's goal is the specified computation, and its knowledge is the algorithm that achieves the goal through the system's actions (i.e., the sequence of operations). What the system can do, and how it does it, are constant and independent of the system's experience. In contrast, in an adaptive system there is no fixed one-to-one mapping between input and output, so their relation cannot be specified as a "function" or "computation" in the mathematical sense of these terms.
According to the previous distinction, a system built in this way is an instinctive system, because its functionality is fully determined by its design and has nothing to do with its experience. It can carry out the specified computation, but it is brittle, with little tolerance for variations in the environment or in the system itself.
This conclusion does not mean that a computer system cannot be adaptive. After all, it is possible to build a computer system whose life-long behavior is fully determined by the system's initial state and life-long experience, but in which each "problem" (a section of the life-long experience) may correspond to different "solutions" (sections of the life-long behavior), depending on where the problem appears in the experience.
For an adaptive system, by definition, its input-output relationship cannot be specified as a computation, nor can its problem-processing follow an algorithm, because both change over time. At different moments, the same problem may be solved in different ways, with different results. The system's behavior is determined not only by its initial state and innate mechanism, but also by its experience.
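To make the distinction concrete, here is a minimal sketch (in Python; all names here are illustrative, not part of the theory): an instinctive system is a pure function of its input, while an adaptive system carries internal state accumulated from experience, so the same input can produce different outputs at different moments.

```python
# Instinctive system: a fixed input-output mapping (a mathematical function).
def instinctive_answer(problem: str) -> str:
    # The same problem always yields the same solution, regardless of history.
    return "solution-for-" + problem

# Adaptive system: the output depends on accumulated experience, not on input alone.
class AdaptiveSystem:
    def __init__(self):
        self.experience = []  # life-long record of what the system has encountered

    def answer(self, problem: str) -> str:
        # The solution depends on where this problem appears in the experience,
        # so there is no fixed problem -> solution mapping to abstract away.
        solution = f"solution-{len(self.experience)}-for-{problem}"
        self.experience.append(problem)
        return solution

adaptive = AdaptiveSystem()
assert instinctive_answer("p") == instinctive_answer("p")  # always identical
assert adaptive.answer("p") != adaptive.answer("p")        # differs with experience
```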
Of course, this does not mean that such a system cannot be accurately designed or implemented in a computer. The designer of such a system still specifies computations and designs algorithms to carry them out, though these computations and algorithms are at a higher level, or a meta-level, with respect to the (object-level) problems the system deals with itself.
For example, an adaptive system may have the ability to learn chess, even though no chess-playing algorithm is built into it. There are algorithms in the system that make it adaptive (among other things), but they do not directly prescribe predetermined paths for the system to achieve its goals. To achieve a goal, what the system can depend on is its experience, accumulated up to the current moment.
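As an illustration (a sketch only, not a chess engine; the move representation, update rule, and parameters are all assumptions made for this example), the built-in part below is a fixed meta-level learning rule, while the object-level move choices are specified nowhere in the code and emerge only from accumulated experience:

```python
import random

# Meta-level: a fixed, designed learning rule (this part does follow an algorithm).
class MoveLearner:
    def __init__(self, learning_rate: float = 0.1):
        self.value = {}   # experience-derived estimate of each move's worth
        self.lr = learning_rate

    def choose(self, legal_moves: list, epsilon: float = 0.2) -> str:
        # Object-level: no chess knowledge is built in; apart from occasional
        # exploration, the choice is driven entirely by accumulated experience.
        if random.random() < epsilon:
            return random.choice(legal_moves)
        return max(legal_moves, key=lambda m: self.value.get(m, 0.0))

    def learn(self, move: str, outcome: float) -> None:
        # Fixed update rule: nudge the move's estimate toward the observed outcome.
        old = self.value.get(move, 0.0)
        self.value[move] = old + self.lr * (outcome - old)

learner = MoveLearner()
for _ in range(100):                  # one hundred toy "games"
    move = learner.choose(["e4", "d4", "c4"])
    outcome = random.random()         # stand-in for the game's result
    learner.learn(move, outcome)
```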
Assuming a system's practical problem-solving capability can be measured numerically, its adaptation capability can then be displayed as a function showing how this value changes over time. An instinctive system corresponds to a horizontal line (since the value does not change), while a system with a constant "adaptation rate", or "learning rate", corresponds to a line with a positive slope (since its capability increases at a fixed rate). Intuitively, the slope, or derivative, of the capability function indicates the system's adaptation ability, or its intelligence.
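In symbols (with $c(t)$ introduced here, not in the text above, to denote the measured capability at time $t$):
\[
c(t) = c_0 \ \text{(instinctive)}, \qquad
c(t) = c_0 + r\,t,\ r > 0 \ \text{(constant learning rate)}, \qquad
\text{intelligence} \;\sim\; \frac{dc}{dt}.
\]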
For human beings, problem-solving capability and learning capability are highly correlated. Since human babies are born with very similar innate problem-solving capabilities, their differences in problem-solving at a later age are largely attributed to their different learning capabilities. Actually, this is how the Intelligence Quotient, or IQ, was originally defined: as the ratio of mental age to chronological age, so a ten-year-old child with an IQ of 120 has learned as much as a normal twelve-year-old, as shown by the child's current problem-solving capability.
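In its original ratio form, the quotient is conventionally scaled by 100, so the example works out as:
\[
\mathit{IQ} \;=\; 100 \times \frac{\text{mental age}}{\text{chronological age}}
            \;=\; 100 \times \frac{12}{10} \;=\; 120.
\]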
In contrast, the problem-solving capability and learning capability of computer systems are largely independent of each other. There are many computer systems with very high innate problem-solving capability but little learning capability, and we can also build learning systems that cannot do much at the very beginning. Consequently, many capabilities that must be learned by human beings can be built into computers. Merely checking the problem-solving capability of a computer system at a certain moment usually cannot tell whether that capability is built-in or learned, while in human beings this distinction is usually clear.
Therefore, seeing intelligence as problem-solving capability or as learning capability leads to very different research approaches. What Marr described is the former, while this theory takes the latter. Of course, both are useful, but they are very different.
Most work in machine learning still agrees with Marr's three levels, though here it is not the human designers alone who specify the computation, design the algorithm, and develop the implementation. Instead, some of these jobs are done by the machines themselves.
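A minimal sketch of this division of labor (using linear regression purely as an example; the task, data, and numbers are assumptions made here, not taken from the text): the designer specifies the computation (minimize squared prediction error) and the algorithm (gradient descent), while the machine itself produces the learned mapping.

```python
# Designer-specified (Marr's levels, applied at the meta-level):
#   computation: find w, b minimizing squared prediction error on the data
#   algorithm:   batch gradient descent with a fixed step size
# Machine-produced: the concrete values of w and b, i.e., the learned mapping.

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # (x, y) examples
w, b, step = 0.0, 0.0, 0.01

for _ in range(5000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w, b = w - step * grad_w, b - step * grad_b

print(f"learned mapping: y = {w:.2f} * x + {b:.2f}")  # filled in by experience
```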
Clearly, compared to systems without any learning ability, these learning systems are closer to the adaptive systems defined above. Even so, they are still too constrained by the traditional theory of computation and algorithms to cover the full range of adaptation.
From the previous description of adaptive systems, we can see that there is no guarantee that a system's responses to various goals will converge to a stable state that can be abstracted into a "computation" and an "algorithm". Instead, the adaptation process may be open-ended and never converge to any fixed mapping; that is, the system's responses remain experience-dependent and context-sensitive. Furthermore, the adaptation process itself may be too flexible to be specified as a "learning algorithm"; therefore, we can only talk about it as a process, not as a computation following a predetermined algorithm.
Roughly speaking, "intelligence" corresponds to experience-dependent changes within an individual, while "evolution" corresponds to experience-independent changes within a species. Though the two have similarities, their differences are also important. For a system, the changes produced by intelligence are usually more conservative, gradual, and cautious, while the changes produced by evolution are usually more radical, abrupt, and vital. In general, we cannot say which one is better, since they are suited to different situations.
A more detailed comparison between the two is left for [Special Topic: Intelligence and Evolution]; the rest of the book focuses on intelligence.
It largely depends on the environment. If the environment never changes, or only changes in a cyclic or otherwise predictable way, then an instinctive system is more stable and efficient in achieving its goals, because its knowledge provides a determined way to invoke the needed actions whenever necessary. This explains why, for many tasks, conventional computer systems work better than human beings --- when a goal can be routinely achieved by carrying out a sequence of operations, it is better to build an instinctive system, following the three levels Marr outlined, than to give the task to an adaptive or intelligent system, because the flexibility of the latter can only make things worse.
If the environment changes in an unpredictable way, an instinctive system will not always be able to achieve its goals, given its fixed, and therefore outdated, knowledge about the effects of its actions in the environment. It is in such an environment that an adaptive (including intelligent) system has some chance. The system attempts to adjust its knowledge to capture the changes in the effects of its actions, so as to better achieve its goals. As long as the environment does not change too quickly or too radically, these attempts may succeed, after some failures.
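This contrast can be simulated (a sketch under assumed conditions; the drift model and all numbers are invented for illustration): an instinctive system keeps its designed-in estimate of an action's effect, while an adaptive one keeps revising its estimate from experience, so only the latter can track a drifting environment.

```python
import random

random.seed(0)
true_effect = 1.0          # the actual effect of the action in the environment
instinctive = 1.0          # fixed, designed-in knowledge (correct at design time)
adaptive, rate = 1.0, 0.1  # experience-adjusted estimate and its learning rate

for t in range(1000):
    true_effect += random.gauss(0.0, 0.05)           # unpredictable drift
    observed = true_effect + random.gauss(0.0, 0.1)  # noisy feedback from acting
    adaptive += rate * (observed - adaptive)         # revise knowledge from experience

print(f"true effect now:      {true_effect:.2f}")
print(f"instinctive estimate: {instinctive:.2f}  (outdated)")
print(f"adaptive estimate:    {adaptive:.2f}  (tracks the drift)")
```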
It is important to remember that, by definition, an intelligent system only tries to adapt, which does not mean that it always adapts well enough to achieve its goals. Since the only guidance the intelligence has is the system's past experience, and the future experience will surely be different, failures in prediction are inevitable.
David Hume (1748) demonstrated clearly that there is no reasonable way to establish a "Principle of the Uniformity of Nature" that guarantees the correctness of our predictions about the future, even in a probabilistic sense. Therefore, when we say that an adaptive system is "better" than an instinctive system in an unpredictable environment, it is because we can still have some hope for the former, while the latter is completely hopeless. As for an intelligent system itself, it does not follow its experience because it believes the future will be the same as, or similar to, the past, but because it is built to behave that way. "To predict the future according to the past" is how adaptation is defined; it is therefore "meta-knowledge" in every intelligent system, even though it does not always lead to correct results when applied to each concrete situation.