A General Theory of Intelligence     Chapter 4. Self-Organizing Process

Section 4.4. The self-organization of beliefs

Beliefs of the system

In general, "beliefs" in an information system provide relations between goals and actions. In reactive systems, such a relation directly associates a goal to an action that can achieves the goal. In more complicated systems, however, the relation between goals and actions are indirect, and go through many intermediate stops. Therefore, most beliefs in such a system are relations between concepts. In intelligent systems, beliefs change as the system adapts to its environment, while in instinctive systems they remain fixed.

In NARS, a "belief" is a Narsese statement with a truth-value, like "birdanimal <0.9; 0.8>", as explained in Section 3.3. Such a belief indicates, according to the system's experience, to what extent term bird specialized term animal, or, equivalently, term animal generalized term bird. This belief allows the system to use one term as the other in certain contexts.

In NARS, a belief is created by copying a task that is a judgment. Therefore, the Narsese sentence "bird → animal <0.9; 0.8>" may exist in memory as a task, a belief, or both. The distinction between "task" and "belief" is introduced in NARS for a technical reason. An inference rule typically has two premises, one from a task and the other from a belief. When the task is a judgment, the rule does forward inference and produces a judgment as its conclusion; when the task is a question (or a goal), the rule does backward inference and produces a question (or a goal) as its conclusion. Intuitively speaking, "tasks" are active, corresponding to the problems the system is working on, while "beliefs" are passive, corresponding to the knowledge by which those problems are solved. The memory usually contains many more beliefs than tasks, and a belief is usually kept much longer than the task that created it in the first place.
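
This division of labor can be sketched as follows: each inference step takes one task and one belief as premises, and the type of the task decides whether the step is forward or backward. The code below is only a schematic illustration of that dispatch; the names and the placeholder derivations are not from the actual implementation.

    from dataclasses import dataclass

    @dataclass
    class Sentence:
        kind: str        # "judgment", "question", or "goal"
        statement: str

    def inference_step(task: Sentence, belief: Sentence) -> Sentence:
        if task.kind == "judgment":
            # forward inference: two judgments yield a derived judgment
            return Sentence("judgment",
                            f"derived from {task.statement} and {belief.statement}")
        # backward inference: a question (or goal) plus a judgment yields
        # a derived question (or goal) of the same kind
        return Sentence(task.kind,
                        f"sub-question of {task.statement} via {belief.statement}")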

If we define "consistency" as "every statement can only has one truth-value", then the beliefs of NARS is not necessarily consistent, because it is quite common for the system to derive conclusions with the same content (i.e., the same statement) but different truth-values, by taking different fragments of experience into consideration. Since the system works with insufficient knowledge and resources, it normally cannot base all of conclusions on its whole experience. Actually, sometime that is not even desired, when the system want to focus on some special experience (examples: analogy and metaphor). To allow this kind of inconsistency does not mean the system would do nothing about it. When a task and a belief have the same content, but truth-values from different evidence (i.e., fragments of the experience), the revision rule is applied to get a conclusion based on the pooled evidence.

The control mechanism assigns priority values to beliefs, mainly based on their past usefulness and their relevance to the current context. Beliefs with low priority will be forgotten by the system, even though they may turn out to be needed at a future time.

The "beliefs" in NARS include what we usually call "knowledge", "facts", "opinions", "thought", "hypotheses", "guesses", etc, with the subtle difference among the words represented by truth-value, source, and other properties of the beliefs.

Reorganizing beliefs

The importance of domain knowledge has been stressed by many AI researchers, and consequently, many AI systems are knowledge-based. However, in most of them, knowledge is simply stored in a knowledge base, waiting to be accessed by the other parts of the system when solving problems. If there is any structure in the knowledge base (in the form of indexes, ranks, clusters, etc.), it is established by the designer before the system begins to run, rather than by the system itself while it is running.

In contrast, NARS organizes its beliefs by carrying out forward inference on new knowledge to reveal its implications, adjusting the priority ranking among related beliefs, and removing unproductive beliefs to save resources. Once again, this is because the system has only insufficient knowledge and resources.

In principle, every new problem may differ from all the problems the system has encountered, but an intelligent system has no choice except to solve it according to its experience, that is, to perceive the current situation as similar to something in the past. For this purpose, the system needs not only to remember its actual experience as it was, but also to reconstruct the information in that experience in other forms, so as to obtain "equivalent experience" that can be used to solve problems which cannot be directly mapped into the actual experience. This is what forward inference is about in NARS. For example, the knowledge "Swans are birds" and "Swans swim" provides positive evidence for "Birds swim", though it cannot make that conclusion absolutely true. That is not a problem in NARS, because the confidence factor in the truth-value of the inductive conclusion explicitly indicates that it is supported by only one piece of positive evidence. Even deductive conclusions are not absolutely true, though their confidence values are usually higher than those of inductive conclusions.
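
As a worked instance of this point (assuming the NAL mapping from amounts of evidence to a truth-value, f = w+/w and c = w/(w+k), with the evidential horizon k set to 1): a single piece of positive evidence gives w+ = w = 1, hence f = 1 and c = 0.5, so the inductive conclusion "Birds swim" would carry a truth-value like <1.0; 0.5>, that is, a frequency with no counterexamples so far, but with low confidence.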

Forward inference alone is not enough, because there are simply too many possible ways to derive conclusions using a piece of new knowledge and the existing beliefs as premises, not to mention that those conclusions can in turn be used as premises to derive even more conclusions. With insufficient resources, NARS never tries to find all the implications a piece of new knowledge may produce in theory. Instead, each new task is given a priority value (explained in Section 3.5), and the forward inference on this task continues until the task is removed from the resource competition because its priority has become too low.
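
A schematic control loop for this kind of resource competition is sketched below: tasks carry a priority that decays each time they are processed, and a task drops out of the competition once its priority falls below a threshold. In the actual system the selection is probabilistic and priority-biased rather than strictly ordered, and the decay factor and threshold here are made-up values for illustration.

    import heapq

    DECAY = 0.8        # illustrative priority decay per processing step
    THRESHOLD = 0.1    # illustrative removal threshold

    def process(tasks):
        # tasks: list of (priority, name); heapq is a min-heap, so negate priorities
        heap = [(-p, name) for p, name in tasks]
        heapq.heapify(heap)
        while heap:
            neg_p, name = heapq.heappop(heap)
            priority = -neg_p
            if priority < THRESHOLD:
                continue                 # too weak to compete for resources any longer
            print(f"one inference step on {name} (priority {priority:.2f})")
            heapq.heappush(heap, (-(priority * DECAY), name))

    process([(0.9, "task-A"), (0.4, "task-B")])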

When a given problem (a question or a goal) cannot be directly solved, it triggers backward inference. In terms of self-organization, the function of backward inference is to activate the relevant beliefs. For example, if the system cannot directly answer "Do swans fly?", this question and the belief "Birds fly" will derive the question "Are swans birds?". If this question is directly answered by the belief "Swans are birds", that belief will be "activated" into a task, which, when it meets "Birds fly", will derive "Swans fly", an answer to the original question. Afterwards, the beliefs "Swans are birds", "Birds fly", and "Swans fly" all have their priority increased, as a reward for their contributions to the solving of the task.
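
The first step of this example can be sketched as follows, using the syllogistic pattern behind it: to answer the question "S → P?" with the help of a belief "M → P", the system needs to know whether "S → M", so that question is derived. The representation below is illustrative.

    def backward(question, belief):
        qs, qp = question          # question: is S a specialization of P?
        bs, bp = belief            # belief:   M is a specialization of P
        if qp == bp and qs != bs:
            return (qs, bs)        # derived question: is S a specialization of M?
        return None

    question = ("swan", "flyer")   # "Do swans fly?"
    belief = ("bird", "flyer")     # "Birds fly"
    print(backward(question, belief))   # ('swan', 'bird'), i.e., "Are swans birds?"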

Objective of self-organization

In NARS, the ultimate aim of belief self-organization is not to obtain a "true description" of the world, but to efficiently connect the system's goals to its actions, according to past experience. The system's reasoning ability allows the past to be extended to handle the present and to predict the future. Consequently, "internal thinking" can simulate "external doing", even though the simulation cannot be perfect.

For this purpose, the objectives of the reorganization of beliefs are the following:

Correctness: the beliefs should agree with the experience they summarize, with as few counterexamples as possible.
Compactness: the beliefs should summarize the experience with a small amount of storage and processing.
Concreteness: the beliefs should be specific enough to provide concrete predictions and guidance for the current situation.

These three requirements are independent of each other, so achieving one does not necessarily lead to another. Actually, they often point in different directions. For example, the most correct description of the experience is the experience itself, but it is not compact at all; a compact summary of experience can be too general to make concrete predictions; and concrete beliefs are less compact and tend to have more counterexamples. In the face of these conflicts, no single objective is logically superior to the others, though for a given situation it is still possible to choose among different ways of organizing the given experience.

Stressing one objective while ignoring another is a common problem among existing AI approaches. For example, some people take "intelligence" to be the ability to compress information. This opinion covers Correctness and Compactness, but even a lossless compression of the complete past experience cannot guide the system in a novel situation involving something that never appeared in that experience.

To pursue these objectives together in a balanced manner, NARS ranks beliefs by priority, which depends on frequency, confidence, past usefulness, and so on. Since high priority leads to a high retrieval probability, correct and concrete beliefs are used more often. The compactness of the beliefs is achieved by forgetting beliefs with low priority.
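
An illustrative priority function along these lines is sketched below: a belief's priority grows with the quality of its truth-value (frequency and confidence) and with how useful it has been so far. The particular weighting is invented for illustration; the formulas actually used in NARS are different.

    def priority(frequency, confidence, usefulness):
        # usefulness in [0, 1]: e.g. a decayed count of past contributions to solved tasks
        quality = confidence * max(frequency, 1.0 - frequency)
        return 0.5 * quality + 0.5 * usefulness

    print(priority(0.9, 0.8, 0.3))   # strong evidence, rarely used so far
    print(priority(0.6, 0.4, 0.9))   # weaker evidence, but frequently useful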

The self-organization of beliefs is an irreversible process, and the beliefs, together with their priority distribution, at any moment are the product of the whole history of the system. Even forgotten beliefs may leave impacts behind. For a given piece of knowledge, the system can "neutralize" it by revising it with its negation, so that the positive evidence and the negative evidence weigh the same. However, the total amount of evidence increases in this process, which makes future revisions harder. With the accumulation of evidence, the system becomes less and less sensitive to the same amount of new evidence; that is, the confidence factor increases monotonically and converges to 1, while the frequency factor does not necessarily converge at all.
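
A small numerical illustration of the last point, again assuming c = w/(w+k) with k = 1: as evidence accumulates, the confidence rises monotonically toward 1, while the frequency w+/w simply tracks the proportion of positive evidence, which need not converge.

    K = 1.0
    w_plus, w = 0.0, 0.0
    for i, positive in enumerate([1, 1, 0, 1, 0, 0, 1, 1] * 3, start=1):
        w_plus += positive           # accumulate positive evidence
        w += 1                       # accumulate total evidence
        if i % 6 == 0:
            print(f"after {i:2d} pieces: f = {w_plus / w:.2f}, c = {w / (w + K):.2f}")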

In a relatively stable environment, the system will gradually form a stable belief structure, which allows routine problems to be solved more efficiently. However, this stability also means bias when novel problems show up. In that situation, the system will first try to handle them in the "good old ways", and start to think differently only when the old methods do not work. As a result, the belief structure will be changed, more or less.