For NARS, the innate properties are in its design, as briefly described in Chapter 3 and further specified in [Wang, 2006]. Concretely, the representation language determines what tasks and beliefs the system can have, the inference rules determine how the beliefs can be used to process the tasks, and the memory and control mechanism determine how the system selects the task and the belief used in each inference step. Furthermore, in each implementation of NARS, a set of built-in operations determines the basic actions the system can take, which implicitly determines the scope of goals the system can achieve.
On the other hand, the acquired properties of an NARS, as a concrete implementation of the design in a computer, are determined by the system's experience, that is, the stream of input tasks (judgments, questions, and goals). Initially, the memory of the system is empty. It is with the arrival of input tasks that concepts and beliefs are built, and the inference activity of the system adds new items (tasks, beliefs, and concepts) into the memory, removes some old ones, and modifies existing ones, including the input tasks.
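For instance, in the ASCII notation of Narsese used by the open-source implementations (the exact syntax varies among versions, and the terms here are chosen only for illustration), the three types of input task look like this:

    <swan --> bird>. %1.00;0.90%
    <swan --> [white]>?
    <{door-1} --> [opened]>!

The first line is a judgment ("swan is a kind of bird") carrying a truth value that records its frequency and confidence, the second is a question asking whether swans are white, and the third is a goal to be achieved (to make door-1 opened).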
Technically, it is easy to save the memory of an NARS at a certain moment and to use a copy of it as the initial memory of another NARS. We can even directly edit the memory of the system. However, in principle all of these activities should be considered alternative forms of experience for the system, that is, their overall effect should be equivalent to some experience the system could have. Conceptually, the system's design (its nature) is a function that maps the system's experience (its nurture) into the state of its memory.
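This functional view can be stated compactly. As a sketch in LaTeX notation (the symbols $\mathcal{D}$, $e_i$, $M_t$, and $E$ are introduced here only for illustration):

    % The design D maps an experience stream onto a memory state,
    % starting from an empty memory:
    M_t = \mathcal{D}(\langle e_1, e_2, \ldots, e_t \rangle)
    % A saved, copied, or hand-edited memory M is conceptually legitimate
    % only if it is reachable through some possible experience E:
    \exists E \,.\; \mathcal{D}(E) = M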
Therefore, the nature and nurture of NARS are clearly separated. The former is built into the system, is described in a meta-language (English, in this book), and corresponds to the possibilities (what the system could become); the latter is learned by the system from experience, is described in an object language (Narsese), and corresponds to the reality (what the system has become). When the system is designed, no assumption is made about the content of its experience. On the other hand, the results of learning are all at the object level, not at the meta-level.
In this aspect, artificial intelligence, as represented by NARS, is different from human and animal intelligence. Since a computer is not a biological system, it does not have a growth process, which in living systems is neither typical nature nor typical nurture. NARS is like a special type of baby that is born with an adult body but has no experience at all. Since, according to this theory, "intelligence" is the nature of the system, it is not itself learned in NARS. Instead, it is the ability to learn.
According to this theory, adaptation and learning at the meta-level are carried out mainly by evolution, rather than by intelligence. [Special Topic: Intelligence and Evolution]
On the contrary, in NARS learning and reasoning are considered different descriptions of the same process. Since in the system "learning" means changes in the memory, and all such changes, according to the description in Chapter 3, are either governed by the inference rules or part of the inference control mechanism, learning is carried out by reasoning. On the other hand, all reasoning activities cause irreversible changes in the memory, as attempts to extend past experience to new situations, so reasoning is carried out by learning. Therefore, "learning" and "reasoning" are different names for the same process.
This unification is possible because in NARS reasoning is not binary deduction, but is "ampliative", in the sense that the statement in the conclusion says something not stated in the premises; therefore it is "learning". Meanwhile, the truth-value of the conclusion indicates the evidential support it obtains from the premises, so in another sense the conclusion does not create information out of nothing, but only reorganizes the information in the premises into a different form, so it is logically justifiable. In this way, NARS provides a "logic of learning".
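To make this concrete, here is a minimal sketch in Python, assuming the standard NAL-1 truth functions for deduction and induction (K is the "evidential horizon" parameter, usually 1; this is a simplified illustration, not a full implementation):

    K = 1.0  # evidential horizon: how fast confidence grows with evidence

    def c_from_w(w):
        """Map an amount of evidence w to a confidence value in [0, 1)."""
        return w / (w + K)

    def deduction(f1, c1, f2, c2):
        """{M->P <f1,c1>, S->M <f2,c2>} |- S->P: a strong rule."""
        f = f1 * f2
        return f, f * c1 * c2

    def induction(f1, c1, f2, c2):
        """{M->P <f1,c1>, M->S <f2,c2>} |- S->P: a weak (ampliative) rule."""
        w_plus = f1 * f2 * c1 * c2   # positive evidence provided by M
        w = f2 * c1 * c2             # total evidence provided by M
        f = w_plus / w if w > 0 else 0.5
        return f, c_from_w(w)

From two highly confident premises, induction(1.0, 0.9, 1.0, 0.9) yields a frequency of 1.0 but a confidence of only about 0.45: the conclusion genuinely goes beyond the premises, yet its confidence is bounded by the single piece of evidence actually supplied (in one step it can never exceed 0.5), which is what keeps the ampliative step accountable.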
Even so, "reasoning" and "learning" still have some important difference: when the process is perceived as reasoning, as in Chapter 3, the focus is on the promise-conclusion relation in each step; when it is perceived as learning, as in this chapter, the focus is its long-term effect in the system. Later we will see that the same is true on many other topics, that is, many cognitive processes traditionally considered as separated from each other are all unified in the same process in NARS, though the traditional notions are still useful in focusing on different facets of the process.
According to the working definition of intelligence advocated in this theory, "intelligence" is not "problem-solving capability" but "learning capability". For a system like NARS, the initial problem-solving capability can be anything (determined by its innate goals, actions, and beliefs), while its learning capability is comparable to that of a human adult. After the system begins to communicate with its environment, its problem-solving capability usually increases as a result of self-organization, while its learning capability remains more or less the same throughout its lifetime.
In this way, the level of intelligence of a system like NARS is reflected in the expressive power of its knowledge representation language, the inferential power of its inference engine, and the resource efficiency of its management mechanism. These factors are independent of the goals, actions, beliefs, and concepts the system has at a given moment. It will be possible to define a quantitative measurement of a system's intelligence by comparing these factors of a system to those of a reference class, though such a measurement is only indirectly related to measurements of the system's problem-solving capability.
Under the restriction of insufficient knowledge and resources, learning in a system like NARS is inevitably biased and restricted by the system's experience and capability. This self-organization process is persistently driven by two fundamental conflicts within the system:
"Rationality" actually has different forms, under different assumptions about the system and its environment. An intelligent system makes mistakes all the time, in the sense that its expectations and predictions fail to be realized. However, these expectations and predictions can be "rational", as far as they are the "best" the system can find, given its knowledge and resources. In this sense, what is considered as rational by one system may be considered as irrational by another system, and even for the same system, a rational conclusion in one situation may be considered as irrational in another situation.
In the self-organization process of an intelligent system, "experience" plays both a destructive and a constructive role. Since new experience often disconfirms old knowledge, it does not merely add material to the memory structure monotonically, but also tears down the parts that have been disqualified. At the same time, experience constantly suggests new patterns and regularities for the self-reorganization.
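The revision rule of NAL shows both roles in a single step. Continuing the Python sketch above (again a simplified illustration, reusing K and c_from_w), revision merges two truth values of the same statement that rest on distinct bodies of evidence:

    def revision(f1, c1, f2, c2):
        """Merge two truth values of the same statement (distinct evidence)."""
        w1 = K * c1 / (1.0 - c1)      # recover amounts of evidence from confidences
        w2 = K * c2 / (1.0 - c2)
        w = w1 + w2
        f = (w1 * f1 + w2 * f2) / w   # evidence-weighted frequency
        return f, c_from_w(w)

If the system firmly believes "birds fly" with truth value <1.0, 0.9> and then accumulates equally strong counter-evidence <0.0, 0.9>, revision yields roughly <0.5, 0.95>: the old frequency is torn down from 1.0 to 0.5 (the destructive role), while the total evidence, and with it the confidence, grows (the constructive role).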
While changing the memory, new experience gets changed in the process, too. Input tasks are almost never processed exactly as they are. Instead, they are recognized, interpreted, and categorized according to the system's existing knowledge; they are processed as what the system thinks they are. In Piaget's terms, assimilation and accommodation are the two sides of adaptation and learning. We will see various examples of this in the rest of this chapter.
For a system designed under the assumption of insufficient knowledge and resources, the above situation is inevitable. Since the system has to stay open to unpredictable experience all the time, it can neither follow a predetermined learning procedure nor stop at a predetermined state. Instead, learning becomes a life-long, open-ended, context-sensitive, and irreversible process, in which the system constantly adjusts the contents of its memory to better organize its experience for dealing with the current tasks. It has to make the best use of the available knowledge and resources, even though they are never enough to provide perfect solutions.
In this situation, specifying learning in the framework of an inference system has its advantages. Even though it is impossible to specify a complete learning process in advance, it is still possible to specify all valid types of step in such a process, as well as how the steps can be combined into a process. In this way, the rigidity of the individual steps and the flexibility of their combination allow a system to be adaptive in realistic situations, while still being justifiable as following certain principles of rationality.
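As an illustration of this division of labor, the following Python sketch separates the rigid part (the fixed form of one working cycle) from the flexible part (which items happen to be selected). It is a drastically simplified rendering of the control mechanism described in Chapter 3, and the class and function names are invented for this sketch:

    import random
    from dataclasses import dataclass, field

    @dataclass
    class Item:
        content: str
        priority: float          # biases the chance of being selected

    @dataclass
    class Bag:
        """A container supporting priority-biased probabilistic selection."""
        items: list = field(default_factory=list)

        def put(self, item):
            self.items.append(item)

        def take(self):
            """Select an item with probability proportional to its priority.

            Assumes a non-empty bag; unlike the real mechanism, the item
            is not removed here.
            """
            total = sum(i.priority for i in self.items)
            r = random.uniform(0.0, total)
            for i in self.items:
                r -= i.priority
                if r <= 0.0:
                    return i
            return self.items[-1]

    def cycle(tasks, beliefs, rules):
        """One inference step: rigid in form, flexible in combination."""
        task = tasks.take()                  # which task gets attention is contingent
        belief = beliefs.take()              # which belief is recalled is contingent
        for rule in rules:                   # the valid step types are fixed in advance
            derived = rule(task, belief)
            if derived is not None:
                tasks.put(derived)           # results compete for future processing
        # priority adjustments of the used items are omitted in this sketch

Each pass through cycle is a valid step by construction, yet the overall process that emerges from many such steps depends on the experience-determined contents and priorities of the bags, and so can never be laid out in advance.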