A General Theory of Intelligence

Chapter 4. Self-Organizing Process


Section 4.1. Learning as self-organization

For a system like NARS, its behaviors are determined by two factors: its design (i.e., the built-in inference rules and control algorithms) and its experience (its history of interaction with the environment).  More concretely, the former determines the range of possibilities (what the system could become), and the latter turns one of the possibilities into reality (what the system has become).

The self-organization of a system like NARS is persistently driven by two fundamental conflicts within the system:

  1. The system has to predict the future with its past experience, which is always insufficient for the purpose.
  2. The system has to achieve its goals with its constant resources, which are always insufficient for the purpose.

Therefore, the success of the self-organization should not be judged by the correctness or optimality of the system's behaviors, because such a judgment would require comparing the behaviors with future experience, while ignoring the knowledge and resource constraints under which the behaviors were produced.  Instead, the aim of self-organization is to produce "reasonable behaviors", that is, behaviors with the best expectation of achieving the current goals, given the available knowledge and resources.

Contrary to the common understanding in the "Machine Learning" community, learning in a system like NARS is not a computational process following an algorithm. Instead, it is an unpredictable and open-ended process influenced by many factors in the system and the environment. Nor is it an "improving routine" added on top of the "working routine" of the system. Instead, in a system like NARS, "learning" is just the long-term effect of the reasoning process.


Section 4.2. The self-organization of goals

For a system like NARS, all the goals of the system are derived from a group of original goals that are built into, or imposed upon, the system. The system has no way to choose or restrict these original goals, as long as they arrive in a format recognizable to it.

While some of the goals can be directly achieved by the system's actions, most of them need to be achieved indirectly, via the achievement of derived goals. Due to the system's insufficient knowledge and resources, the derived goals may turn out to conflict with their parent goals, a phenomenon called "alienation". Though often seen as undesirable, this phenomenon is also responsible for the autonomy, initiative, and originality of intelligent systems.
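
As a minimal illustration of goal derivation, consider the following Python sketch, in which a belief of the form "achieving A implies achieving G" is used backward: when G is desired, A becomes a derived goal. The representation of goals and beliefs here is an assumption made for the example, not NARS's actual data structures.

    def derive_subgoals(goal, beliefs):
        """Backward use of implication beliefs: if the system believes
        'achieving A implies achieving G' and G is desired, then A
        becomes a derived goal (a candidate means to the parent end)."""
        return [condition for (condition, consequence) in beliefs
                if consequence == goal]

    # e.g. the goal "door-open" plus the belief ("turn-handle", "door-open")
    # yields the derived goal "turn-handle"
    beliefs = [("turn-handle", "door-open"), ("eat", "not-hungry")]
    print(derive_subgoals("door-open", beliefs))  # ['turn-handle']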

Also because of the insufficiency of knowledge and resources, the system cannot process goals one by one, but has to process many of them in parallel, in a time-sharing manner. At any moment, the system's decisions are usually made by taking multiple goals into consideration, rather than determined by a single goal. In the competition among goals, the original goals do not always win over the derived goals. The system attempts to distribute resources unevenly among the goals, so as to achieve the highest overall satisfaction.
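
One simple way to realize such uneven time-sharing is probabilistic selection biased by priority, sketched below in Python. The Goal class and its priority field are assumptions made for the example, not NARS's actual control structures.

    import random

    class Goal:
        def __init__(self, name, priority):
            self.name = name
            self.priority = priority  # relative urgency/importance, in (0, 1]

    def select_goal(goals):
        """Pick the goal to work on in the next time slice. The choice is
        random but biased by priority, so every goal gets some share of
        the processing time, and the shares are uneven."""
        total = sum(g.priority for g in goals)
        r = random.uniform(0, total)
        for g in goals:
            r -= g.priority
            if r <= 0:
                return g
        return goals[-1]  # guard against floating-point rounding

Under such a scheme, a high-priority derived goal can repeatedly win time slices over a low-priority original goal, as described above.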

Another important aspect of the self-organization of goals is the resolution of conflicts among goals, by selecting actions that represent a compromise among them. When a goal can be achieved by multiple actions, the selection is often based on the impact of those actions on the other goals.
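
The following Python sketch (reusing the Goal class from the sketch above) shows one way such a compromise could be computed. The impact function is an assumption made for the example; it stands for the system's expectation, derived from its beliefs, of how an action affects a goal.

    def select_action(actions, goals, impact):
        """Choose the action with the best overall effect across all goals.
        impact(action, goal) is assumed to return a value in [-1, 1]:
        positive if the action helps the goal, negative if it hurts it.
        Weighting by priority makes the choice a compromise, rather than
        the best action for any single goal."""
        def overall(action):
            return sum(g.priority * impact(action, g) for g in goals)
        return max(actions, key=overall)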


Section 4.3. The self-organization of actions

The actions of the system at a given moment are the set of operations the system can perform on its internal structure or external environment.  The system is born with a set of primary actions, as well as operators that compose compound actions from existing ones.  The self-organization of actions consists mainly of selectively building and maintaining compound actions.
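
As an illustration, the following Python sketch implements one such composition operator, sequential execution. The representation is an assumption made for the example; NARS defines its own operators and syntax.

    class Action:
        """An action is either primary (directly executable) or compound
        (built from existing actions by a composition operator)."""
        def __init__(self, name, components=None):
            self.name = name
            self.components = components or []  # empty for primary actions

    def sequence(*actions):
        """Compose a compound action that executes its components in order."""
        name = "(" + "; ".join(a.name for a in actions) + ")"
        return Action(name, components=list(actions))

    # e.g. compose a "pick up" skill from three primary actions
    pick_up = sequence(Action("reach"), Action("grasp"), Action("lift"))
    print(pick_up.name)  # (reach; grasp; lift)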

The meaning of an action, whether primary or compound, is mostly revealed by its sufficient and necessary conditions, which indicate the causes and effects of the action. For a system with insufficient knowledge and resources, the meaning of an action is never fully given, but gradually acquired from the experience of the system.

Being able to reason about actions allows the system to predict the effects of an action without actually executing it.  This is the "internalization" of actions, that is, the simulation of the corresponding actions by reasoning about them.
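
A minimal sketch of such internalized execution, in Python, where beliefs about actions are represented as (action, effect, expectation) triples; this representation is an assumption made for the example.

    def predict(action, beliefs):
        """Simulate an action by reasoning: look up what the system believes
        the action leads to, instead of actually executing it."""
        expectations = [(effect, e) for (act, effect, e) in beliefs
                        if act == action]
        if not expectations:
            return None  # no relevant experience with this action yet
        # return the effect the system most strongly expects
        return max(expectations, key=lambda pair: pair[1])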

To use its limited resources efficiently, an adaptive system must use its experience to guide the self-reorganization of actions, which means (1) creating a compound action only when it solves an existing task, (2) adjusting the related truth-values and priorities according to past performance, and (3) gradually forgetting the actions that turn out to be of little use. Through this process, the system reorganizes its actions around the goals that repeatedly appear in its experience, so as to improve its overall efficiency.
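
Points (2) and (3) could be realized, for example, as in the following Python sketch; the update and decay rules, and their constants, are assumptions made for the example rather than NARS's actual mechanism.

    class CompoundAction:
        def __init__(self, name, priority=0.5):
            self.name = name
            self.priority = priority  # running estimate of usefulness

    def after_use(action, succeeded, rate=0.1):
        """Move the priority toward 1 after a success, and toward 0 after
        a failure, so that the estimate tracks past performance."""
        target = 1.0 if succeeded else 0.0
        action.priority += rate * (target - action.priority)

    def forget(actions, decay=0.99, threshold=0.05):
        """Let all priorities decay over time, and drop the actions whose
        priority has fallen below a threshold (gradual forgetting)."""
        for a in actions:
            a.priority *= decay
        return [a for a in actions if a.priority >= threshold]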

When the system is equipped with (external) tools, a similar process happens within the system, because with a tool, an existing action changes its meaning.  The system needs exercise and practice to use a tool skillfully.


Section 4.4. The self-organization of beliefs

In an intelligent system like NARS, the self-organization of beliefs becomes necessary because (1) with insufficient knowledge, the system must extend its past experience to current and future situations, and (2) with insufficient resources, the system must compress its experience, so as to use time and space efficiently.

As a result, the ultimate aim of belief self-organization is not to get a "true description" of the world, but to efficiently connect the system's goals to its actions, according to past experience.  For this purpose, there are three requirements that the system should try to satisfy when organizing its beliefs:

  1. The beliefs should faithfully summarize the system's past experience.
  2. The beliefs should be compact, so that they can be stored and used with little time and space.
  3. The beliefs should be useful, in the sense of connecting the system's goals to its actions.

These three requirements are independent of each other.  When there are conflicts among them, none is logically superior to the others in general, though in a given situation one may be more important than the others.

The self-organization of beliefs is carried out by constantly generating derived beliefs from existing ones, adjusting the truth-values of beliefs by combining evidence for the same statement collected in different ways, adjusting the priority of beliefs according to their usefulness and relevance, and removing the beliefs with low priority.
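
For the second of these operations, the revision rule of NAL (the logic implemented in NARS) gives a concrete form: each truth-value is a (frequency, confidence) pair standing for a body of evidence, and combining two such pairs amounts to pooling the evidence. The following Python sketch assumes the standard representation with evidential horizon k = 1.

    K = 1.0  # the evidential horizon; NAL conventionally uses k = 1

    def revise(f1, c1, f2, c2):
        """Combine two truth-values for the same statement that are based
        on independent bodies of evidence. Each (frequency, confidence)
        pair is converted back into amounts of evidence, the amounts are
        added, and the sum is converted back into a truth-value."""
        w1 = K * c1 / (1 - c1)           # total evidence behind (f1, c1)
        w2 = K * c2 / (1 - c2)           # total evidence behind (f2, c2)
        w_plus = f1 * w1 + f2 * w2       # positive evidence adds up
        w = w1 + w2                      # and so does total evidence
        return w_plus / w, w / (w + K)   # the revised (frequency, confidence)

    # e.g. revise(0.9, 0.5, 0.5, 0.5) gives (0.7, ~0.67): the frequency is
    # intermediate, and the confidence is higher than either input's, since
    # the conclusion now rests on more evidence.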


Section 4.5. The self-organization of concepts

Concepts provide an intermediate level of structure between the whole memory and the individual goals, actions, and beliefs. In a system like NARS, a concept is both a storage unit that keeps all directly related items (goals, actions, and beliefs) together, and a processing unit for inference activities, because entities directly interact with each other only if they are semantically related by sharing a common term.
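
This dual role can be sketched as follows, in Python; the field names and the dictionary-based memory are assumptions made for the example.

    class Concept:
        """Storage unit: everything directly about one term is kept together,
        and inference happens between items stored in the same concept."""
        def __init__(self, term):
            self.term = term
            self.beliefs = []    # beliefs whose statements contain the term
            self.goals = []      # goals that involve the term
            self.priority = 0.5  # the concept's share of processing resources

    memory = {}  # term -> Concept

    def concept_of(term):
        """Look up, or create on first mention, the concept named by a term."""
        if term not in memory:
            memory[term] = Concept(term)
        return memory[term]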

A concept is not an internal representation of an external object (or a set of them), but an identifiable ingredient of the system's experience. A concept has no "true" or "real" meaning, only a meaning that depends on the system's experience with it. To improve efficiency, self-organization at the concept level attempts to give each concept a relatively clear and stable meaning. However, as the context changes and new experience arrives, this process usually does not converge to a final meaning of the concept.

To improve the efficiency of summarizing experience, the inference process constantly composes new concepts, as novel ways to cluster related items. This process is not random, but data-driven, in the sense that a new term (and the related concept) is built only when it provides a preferred way to organize some experience in a certain situation. Whether the concept has long-term value is determined by the subsequent experience of the system. The quality of a concept is not a matter of true or false, but of good or bad.

The self-organization process evaluates and adjusts the priority distribution among the concepts, and forms a conceptual hierarchy by arranging concepts according to their inheritance relations. When the system encounters a new situation, a perception/categorization process relates it to existing concepts, and the result suggests possible behaviors according to the system's experience in similar situations.
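
A minimal sketch of this last step, in Python, reusing the Concept and memory structures from the earlier sketch; the attention-raising rule and its constant are assumptions made for the example.

    def categorize(observed_terms):
        """Relate a new situation to existing concepts: raise the priority
        of (pay attention to) each recognized concept, and collect the
        experience stored there as suggestions for behavior."""
        suggestions = []
        for term in observed_terms:
            concept = memory.get(term)
            if concept is None:
                continue  # nothing is known about this term yet
            concept.priority = min(1.0, concept.priority + 0.1)
            suggestions.extend(concept.beliefs)
        return suggestions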