A General Theory of Intelligence     Chapter 4. Self-Organizing Process

Section 4.5. The self-organization of concepts

Concepts of the system

As explained in Section 1.2, every information system can be described in terms of its goals, actions, and beliefs. However, not every information system explicitly uses concepts in its internal structure.

In NARS, concepts provide an intermediate level of structure between the memory and the individual goals, actions, and beliefs. Intuitively, the memory contains concepts, and each concept contains the related tasks and beliefs. Therefore, a concept is both a storage unit (by containing all tasks and beliefs that are directly related to each other by sharing a common term) and a processing unit (since inference only happens between a task and a belief that have a common term in them). Technically speaking, the tasks and beliefs are not stored within the related concepts, but linked from them.
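To make this two-level structure concrete, here is a minimal sketch in Python; all class and attribute names are illustrative assumptions, not taken from any actual NARS implementation:

    class Concept:
        """Named by a term; holds links to (not copies of) the related items."""
        def __init__(self, term):
            self.term = term
            self.task_links = []    # links to tasks in which the term appears
            self.belief_links = []  # links to beliefs in which the term appears

    class Memory:
        """The top level of storage: concepts indexed by their naming terms."""
        def __init__(self):
            self.concepts = {}      # term -> Concept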

A concept can either come from the experience or be built by the system. If an input task contains a new term, the system will create a concept for the term and link the task from it. When a composition or decomposition rule (explained in Section 3.4) is applied, the conclusion will contain a term that is not in the premises. If the term is also new to the whole memory, a new concept is created and linked to the task. Therefore, the creation and modification of concepts is straightforward: a task (input or derived) is linked from every concept corresponding to a term in the task, so that inference can happen between the task and the beliefs sharing that term; if such a concept does not yet exist in memory, it is created first.
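Continuing the sketch above, the linking routine might look as follows, where task.terms() is a hypothetical accessor returning the terms that appear in a task:

    def link_task(memory, task):
        # A task (input or derived) is linked from every concept that
        # corresponds to a term in it; missing concepts are created first.
        for term in task.terms():
            concept = memory.concepts.get(term)
            if concept is None:
                concept = Concept(term)        # built on demand
                memory.concepts[term] = concept
            concept.task_links.append(task)    # a link, not a copy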

In NARS, each concept has a maximum storage capacity, which is a system parameter with a value determined when the system is "born". When the tasks and beliefs contained in it (technically, referred to from it) use up all the allocated space, some of them (those with low priority) will be removed to release space. Similarly, the memory has a maximum storage capacity, and when it is full, some concepts will be removed.
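The capacity limit can be sketched as a simple eviction rule; the capacity value and the priority attribute below are assumptions of this sketch:

    CONCEPT_CAPACITY = 100  # the "birth-time" parameter (illustrative value)

    def insert_with_eviction(links, new_link, capacity=CONCEPT_CAPACITY):
        # When the allocated space is used up, the lowest-priority item
        # is removed to release space for the new one.
        links.append(new_link)
        if len(links) > capacity:
            links.remove(min(links, key=lambda link: link.priority))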

At any moment, there is a priority distribution among the concepts in the memory, as well as among tasks and beliefs in each concept, that determines the relative probability for each item (concept, task, or belief) to be accessed in the next working cycle.
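One simple way to realize such a distribution is selection with probability proportional to priority, sketched below under the assumption that every item carries a numeric priority:

    import random

    def select_by_priority(items):
        # The probability of accessing an item is proportional to its
        # current priority: high-priority items are favored, but no item
        # is excluded outright. Assumes a non-empty list of items.
        threshold = random.uniform(0, sum(item.priority for item in items))
        running = 0.0
        for item in items:
            running += item.priority
            if running >= threshold:
                return item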

Meaning of concept

In NARS, the "meaning" of a concept, or what the concept means to the system, is closely associated with the meaning of the term naming the concept, which is defined in Section 3.3. Since the meaning of a term is defined to be "its experienced relations with other terms", the meaning of a concept can be seen as "its experienced relations with other concepts", which is specified by the tasks and beliefs linked to the concept.

Here it is important to distinguish this kind of "relative definition" from the "circular definitions" that should be avoided. In the former, some concepts get their (partial) meanings from their mutual relations, which is very common in natural language (such as "big" and "small", "old" and "young"). In the latter, some concepts are "reduced" into each other, leading to an infinite loop.

This kind of definition tends to become holistic: since everything can be related to everything else, specifying the meaning of a concept would require mentioning all other concepts, which is theoretically undesirable and practically impossible. Fortunately, in NARS this problem is naturally solved by the restriction on knowledge and resources. Even though in principle any two concepts can be somehow related ("with no known relation" is itself a relation), in reality the system only considers the relations that present themselves in its experience, within the time it can afford. Therefore, the meaning of a concept in the system, as defined by its currently linked tasks and beliefs, does not exhaust the concept's full potential and possibilities, but only covers its "revealed" or "explicit" part. As the system gets new tasks and beliefs, and forgets old ones, the meaning of the concept changes.

Furthermore, when a concept is involved in processing a task, only some of its tasks and beliefs are accessed (due to the resource restriction), and they constitute the "current meaning" of the concept. For the same concept, the "current meaning" may change from situation to situation, depending on the diversity among its tasks and beliefs. As a special case, if a concept is named by a compound term, then its relations with the component terms are among the relations that define its meaning, but these "syntactic relations" cannot fully determine the "semantic relations" that relate the compound term as a whole to other terms in experience. Consequently, when the current meaning of the term is dominated by its syntactic relations, the term is used "literally", and its meaning is derived from those of its components. Otherwise, the term is basically used as a whole, and its meaning cannot be reduced to those of its components, even if that is how the concept was composed in the first place.
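In terms of the earlier sketch, the "current meaning" of a concept in a given working cycle can be approximated as the subset of linked beliefs that fits the resource budget of that cycle; the budget parameter is a hypothetical count, and taking the top beliefs by priority is only one possible access policy:

    def current_meaning(concept, budget):
        # Only the beliefs the system can afford to consult in this cycle
        # are accessed; this accessed subset serves as the current meaning.
        ranked = sorted(concept.belief_links,
                        key=lambda belief: belief.priority, reverse=True)
        return ranked[:budget]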

Intra-concept self-organization

Within a concept, self-organization is responsible for the evolution of the concept's meaning.

On one hand, new experience and inference activity constantly reveal new relations with other concepts, and produce new evaluations of the existing relations. On the other hand, to improve efficiency, the self-organization within a concept attempts to give the concept a relatively clear and stable meaning. This effort is mainly implemented in the resource-allocation policy among beliefs.

For example, everything else being equal, higher priority will be given to beliefs that have higher confidence, that are syntactically simpler, and that have been more useful in processing previous tasks.

In the long run, a concept may form a relatively stable "core meaning", or "essence", consisting of a small number of high-priority beliefs that are useful in various situations. Consequently, each time the concept is used, its current meanings are similar to each other, and task processing tends to be efficient. On the contrary, a concept may also end up "messy", having a large number of loosely related beliefs, which tend to give only weak solutions to problems. Of course, the difference here is always a matter of degree, and where a concept falls depends on the relevant experience of the system.

Inter-concept self-organization

Since in a system like NARS, the memory mainly consists of a group of concepts, there is also a self-organizing process at the conceptual level.

To improve the efficiency of summarizing experience, the inference process constantly composes new concepts, as novel ways to cluster related items. This process is not random, but data-driven, in the sense that a new term (and the related concept) is built only when it provides a preferred way to organize some experience in a certain situation. Whether the concept has long-term value is determined by the system's subsequent experience. It is a common mistake to assume concepts are created by going through all possibilities and using a fixed criterion to harvest the desired ones; once again, the system has neither the knowledge (to recognize the "good" concepts once and for all) nor the resources (to exhaust all possible compound terms) to do that.
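As a simplified illustration of such data-driven composition, consider an intersection-style rule in the spirit of Section 3.4, which builds a compound term only when two beliefs actually share a subject in the experience, rather than by enumerating all possible compounds; the Belief representation below is an assumption of this sketch:

    from collections import namedtuple

    Belief = namedtuple("Belief", ["subject", "predicate"])

    def compose_intersection(b1, b2):
        # Build the compound (P1 & P2) only when two beliefs share a
        # subject, i.e. when the experience itself suggests clustering
        # the two predicates together.
        if b1.subject == b2.subject and b1.predicate != b2.predicate:
            compound = ("&", b1.predicate, b2.predicate)
            return Belief(subject=b1.subject, predicate=compound)
        return None

For instance, from beliefs corresponding to "a raven is a bird" and "a raven is a black thing", the rule composes "a raven is a (bird & black thing)", introducing the new compound term only because the experience suggests it.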

The quality of a concept is not a matter of true or false, but of good or bad. Desired properties of concepts are similar to those of beliefs: to summarize the experience faithfully, to be structurally compact, and to be useful in the processing of tasks.

Similar to the situation inside a concept, the self-organization of memory is also implemented in the resource-allocation policy among concepts. For example, everything else being equal, higher priority will be given to concepts that contain high-priority tasks and beliefs, and that have been useful in processing previous tasks. It is not a surprise that human concepts satisfying the above requirements roughly belong to the "basic-level categories" studied in psychology. We can expect the same situation in any intelligent system, though the actual concepts with this status differ from system to system, since they depend on the system's body and experience.

Since the inter-concept relations are inheritance and its variants (similarity, implication, and equivalence), in NARS the concepts form a partial "special-general" order. However, the memory is not a well-structured hierarchy, as suggested by many works in the AI subfield of "ontology". Instead, relations can be built between any pair of concepts, in any direction.

Conceptual perception

Equipped with concepts, NARS has the ability to form conceptual perceptions of situations. That is, when facing a situation initially described by a sequence of input sentences, these "raw data" may soon be replaced by a description mainly using concepts that do not come from the input sentences, but from the system's concept repository. Though this change carries the danger of misunderstanding, it allows the system to understand the situation by relating it to situations it has met and categorized before, and therefore makes it easier for the system to prepare responses.

Even from the same input, the situation can be perceived in different ways, in terms of the granularity and scope of the description (the two are usually in inverse proportion to each other), the facets under focus, the emotional tone, the associated response, and so on. There is no "objective" way to decide on a best one; the perception that actually appears in the system is shaped by multiple factors, including the dominant goal(s), the system's experience with similar situations, etc. Changes in these factors cause the system to form different perceptions of different occurrences of the same situation.

Since these conceptual perceptions are learned, not built into the system, they are products of self-organization. They are necessary for an intelligent system because, even when an accurate description of the current situation matches nothing in its history, partial and approximate descriptions can still associate the current situation with previous ones, for which the system has (direct or indirect) experience. In this sense, in an intelligent system "categorizing" is nothing but the process by which the system's past experience is applied to the current situation, to decide what the system should do so as to achieve its goals. The ability to describe a situation at different conceptual levels allows the system to match it with the past at different scales and scopes, to serve different purposes.

This understanding of perception is fundamentally different from theories that insist on using a fixed vocabulary to describe a situation, such as seeing it as a point in a multi-dimensional space or as a Turing machine. In those theories, "perception" is basically "selection", that is, picking an answer from a predetermined candidate list. Conceptual perception, on the other hand, is a constructive process, with the internal representation as a construct that is actually built by the system, perhaps for the first time.

As far as the current discussion is concerned, there is no major difference between "perception" and "categorization", though usually the input materials in the former are closer to the sensorimotor level, while those in the latter are closer to the symbolic level. Whatever the materials are, they are processed according to the same principle, that is, to selectively focus on parts of the information, so as to turn them into something the system already knows.

In other words, by reorganizing its beliefs and concepts, the system constantly attempts to invent a "theory" to explain its experience, or the environment. We call it a "theory" because the resulting beliefs and concepts are organized into a compact structure, which is more general than the experience, but can explain the experience by providing relations among the events in it, as well as predict future events.