In NARS, a "belief" is a Narsese statement with a truth-value, like "bird → animal <0.9; 0.8>", as explained in Section 3.3. Such a belief indicates, according to the system's experience, to what extent term bird specialized term animal, or, equivalently, term animal generalized term bird. This belief allows the system to use one term as the other in certain contexts.
In NARS, a belief is created by copying a task that is a judgment. Therefore, the Narsese sentence "bird → animal <0.9; 0.8>" may exist in memory as a task, a belief, or both. The distinction between "task" and "belief" is introduced in NARS for a technical reason. An inference rule typically has two premises, one from a task and the other from a belief. When the task is a judgment, the rule does forward inference and produces a judgment as its conclusion; when the task is a question (or goal), the rule does backward inference and produces a question (or goal) as its conclusion. Intuitively speaking, "tasks" are active, corresponding to the problems the system is working on, while "beliefs" are passive, corresponding to the knowledge by which the problems are solved. The memory usually contains many more beliefs than tasks, and a belief is usually kept much longer than the task that created it in the first place.
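The distinction can be pictured with a small sketch (the class and function names below are illustrative, not taken from any actual NARS implementation): a sentence carries a Narsese statement and, when it is a judgment, a truth-value such as the <0.9; 0.8> above, and the type of the task premise alone decides whether an inference step runs forward or backward.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TruthValue:
    frequency: float    # e.g. 0.9 in "bird → animal <0.9; 0.8>"
    confidence: float   # e.g. 0.8

@dataclass
class Sentence:
    statement: str                        # Narsese statement, e.g. "bird --> animal"
    punctuation: str                      # "." judgment, "?" question, "!" goal
    truth: Optional[TruthValue] = None    # only judgments carry a truth-value

# The same content may exist in memory as a task, as a belief, or as both:
belief = Sentence("bird --> animal", ".", TruthValue(0.9, 0.8))
task = Sentence("bird --> animal", ".", TruthValue(0.9, 0.8))

def inference_direction(task: Sentence) -> str:
    """The type of the task premise decides the direction of an inference step."""
    if task.punctuation == ".":
        return "forward (derives a judgment)"
    return "backward (derives a question or goal)"

print(inference_direction(task))                             # forward (derives a judgment)
print(inference_direction(Sentence("swan --> flyer", "?")))  # backward (derives a question or goal)
```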
If we define "consistency" as "every statement can only has one truth-value", then the beliefs of NARS is not necessarily consistent, because it is quite common for the system to derive conclusions with the same content (i.e., the same statement) but different truth-values, by taking different fragments of experience into consideration. Since the system works with insufficient knowledge and resources, it normally cannot base all of conclusions on its whole experience. Actually, sometime that is not even desired, when the system want to focus on some special experience (examples: analogy and metaphor). To allow this kind of inconsistency does not mean the system would do nothing about it. When a task and a belief have the same content, but truth-values from different evidence (i.e., fragments of the experience), the revision rule is applied to get a conclusion based on the pooled evidence.
The control mechanism assigns priority values among beliefs, mainly based on their usefulness in the past and their relevance to the current context. Beliefs with low priority will be forgotten by the system, even though they may turn out to be needed at a future time.
The "beliefs" in NARS include what we usually call "knowledge", "facts", "opinions", "thought", "hypotheses", "guesses", etc, with the subtle difference among the words represented by truth-value, source, and other properties of the beliefs.
Rather than keeping its beliefs exactly as they were acquired, NARS organizes them by carrying out forward inference on new knowledge to reveal its implications, adjusting the priority ranking among related beliefs, and removing unproductive beliefs to save resources. Once again, this is because the system has insufficient knowledge and resources.
In principle, every new problem may be different from all the problems the system has encountered, but an intelligent system has no choice but to solve it according to its experience, that is, to perceive the current situation as similar to something in the past. For this purpose, the system needs not only to remember the actual experience as it was, but also to reconstruct the information in the experience in other forms, so as to get "equivalent experience" that can be used to solve problems that cannot be directly mapped into actual experience. This is what forward inference is about in NARS. For example, the knowledge "Swans are birds" and "Swans swim" provides positive evidence for "Birds swim", though it cannot make the conclusion absolutely true. That is not a problem in NARS, because the confidence factor in the truth-value of that inductive conclusion explicitly indicates that it is only supported by one piece of positive evidence. Even deductive conclusions are not absolutely true, though their confidence values are usually higher than those of inductive conclusions.
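To see how the truth-value records this, the sketch below applies the NAL-style induction truth function to the swan example, using the same evidence mapping and k = 1 convention as in the revision sketch above; the exact numbers depend on these conventions.

```python
K = 1.0  # evidential horizon

def induction(truth_mp: tuple[float, float], truth_ms: tuple[float, float]) -> tuple[float, float]:
    """NAL-style induction: from M→P <f1; c1> and M→S <f2; c2>, derive S→P.
    The shared term M contributes at most one unit of evidence for the conclusion."""
    f1, c1 = truth_mp
    f2, c2 = truth_ms
    w_plus = f1 * f2 * c1 * c2   # positive evidence for S→P
    w = f2 * c1 * c2             # total evidence for S→P
    return w_plus / w, w / (w + K)

# "Swans swim"      = swan → swimmer <1.0; 0.9>   (M→P)
# "Swans are birds" = swan → bird    <1.0; 0.9>   (M→S)
f, c = induction((1.0, 0.9), (1.0, 0.9))
print(f, round(c, 2))  # 1.0 0.45 -- "Birds swim" is supported, but only with low confidence
```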
Forward inference alone is not enough, because there are simply too many possible ways to derive conclusions using a piece of new knowledge and the existing beliefs as premises, not to mention that the conclusions can themselves be used as premises to derive even more conclusions. With insufficient resources, NARS never tries to find all the implications that a piece of new knowledge may produce in theory. Instead, each new task is given a priority value (explained in Section 3.5), and forward inference on this task continues only until the task is removed from the resource competition because its priority has become too low.
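The effect can be pictured with a toy control loop (purely illustrative, not the actual NARS control code; priority values themselves are explained in Section 3.5): derived tasks inherit a discounted priority from their parent, so the derivation tree is cut off long before all possible implications are produced.

```python
import heapq

def process(new_task_priority: float, branching: int = 3,
            discount: float = 0.6, threshold: float = 0.1) -> int:
    """Count how many inference steps a new task triggers before its
    descendants fall below the priority threshold (illustrative only)."""
    steps = 0
    agenda = [-new_task_priority]  # max-heap of priorities (heapq is a min-heap, so negate)
    while agenda:
        priority = -heapq.heappop(agenda)
        if priority < threshold:
            break                  # the remaining tasks have lost the resource competition
        steps += 1
        for _ in range(branching):                        # each step derives a few new tasks...
            heapq.heappush(agenda, -(priority * discount))  # ...with discounted priority
    return steps

print(process(0.9))  # the derivation stops long before all implications are exhausted
```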
When a given problem (question or goal) cannot be directly solved, it triggers backward inference. In terms of self-organization, the function of backward inference is to activate the relevant beliefs. For example, if the system cannot directly answer "Do swans fly?", this question and the belief "Birds fly" will derive the question "Are swans birds?". If this question is directly answered by the belief "Swans are birds", that belief will be "activated" into a task, which, when it meets "Birds fly", will derive "Swans fly", an answer to the original question. Afterwards, the beliefs "Swans are birds", "Birds fly", and "Swans fly" all have their priorities increased, to reward their contribution to the solving of a task.
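For the inheritance statements used in this example, the derived question can be produced almost syntactically. The sketch below is simplified and illustrative, covering only the two deduction-shaped cases; it derives "Are swans birds?" from the question "Do swans fly?" and the belief "Birds fly".

```python
from typing import Optional

def backward(question: str, belief: str) -> Optional[str]:
    """Derive a sub-question whose answer would let deduction answer the original
    question; statements are simplified to "subject --> predicate" strings."""
    q_subj, q_pred = [t.strip() for t in question.split("-->")]
    b_subj, b_pred = [t.strip() for t in belief.split("-->")]
    if q_pred == b_pred and q_subj != b_subj:
        return f"{q_subj} --> {b_subj}"   # question S→P + belief M→P  derive  question S→M
    if q_subj == b_subj and q_pred != b_pred:
        return f"{b_pred} --> {q_pred}"   # question S→P + belief S→M  derive  question M→P
    return None

# "Do swans fly?" cannot be answered directly, but with the belief "Birds fly":
print(backward("swan --> flyer", "bird --> flyer"))  # swan --> bird, i.e. "Are swans birds?"
```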
For this purpose, the objectives of the reorganization of beliefs are:
To stress one objective while ignoring another is a common problem among existing AI approaches. For example, some people take "intelligence" as the ability to compress information. This opinion covers Correctness and Compactness, but even a lossless compression of the complete past experience cannot provide guidance for the system to handle a novel situation containing something that never appeared in that experience.
To achieve the objectives together in a balanced manner, NARS ranks beliefs by priority, which depends on frequency, confidence, usefulness in history, and so on. Since high priority leads to a high retrieval probability, correct and concrete beliefs are used more often. The compactness of the beliefs is achieved by forgetting beliefs with low priority.
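One way to picture this ranking (an illustrative sketch; the class name, capacity handling, and priority values below are assumptions, not part of any NARS specification) is a fixed-capacity store in which retrieval probability is proportional to priority and the lowest-priority belief is forgotten when the capacity is exceeded.

```python
import random

class BeliefTable:
    """Illustrative priority-ranked store: probabilistic retrieval, forgetting by eviction."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items: dict[str, float] = {}     # statement -> priority in (0, 1]

    def put(self, statement: str, priority: float) -> None:
        self.items[statement] = priority
        if len(self.items) > self.capacity:   # compactness: forget the lowest-priority belief
            self.items.pop(min(self.items, key=self.items.get))

    def take(self) -> str:
        """Pick a belief with probability proportional to its priority."""
        statements = list(self.items)
        weights = [self.items[s] for s in statements]
        return random.choices(statements, weights=weights, k=1)[0]

table = BeliefTable(capacity=3)
table.put("swan --> bird", 0.8)
table.put("bird --> flyer", 0.6)
table.put("swan --> swimmer", 0.3)
table.put("robin --> bird", 0.5)   # capacity exceeded: "swan --> swimmer" is forgotten
print(sorted(table.items), table.take())
```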
The self-organization of beliefs is an irreversible process, and the beliefs, together with their priority distribution at any moment, are caused by the whole history of the system. Even forgotten beliefs may leave impacts behind. For a given piece of knowledge, the system can "neutralize" it by revising it with its negation, so that the positive evidence and negative evidence weigh the same. However, the total amount of evidence increases in this process, which makes future revisions harder. With the accumulation of evidence, the system becomes less and less sensitive to the same amount of new evidence; that is, the confidence factor increases monotonically and converges to 1, while the frequency factor does not necessarily converge at all.
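This behaviour follows directly from the evidence-based truth-value used in the revision sketch above: since confidence is w / (w + k) and revision only ever adds evidence, confidence can only grow toward 1, while frequency w+ / w is free to keep drifting. A small numeric illustration (same k = 1 convention, alternating positive and negative evidence):

```python
K = 1.0  # evidential horizon, as in the revision sketch

def truth(w_plus: float, w: float) -> tuple[float, float]:
    """Truth-value <f; c> from positive evidence w_plus and total evidence w."""
    return w_plus / w, w / (w + K)

w_plus, w = 0.0, 0.0
for step in range(1, 9):
    w_plus += step % 2   # odd steps add one unit of positive evidence...
    w += 1               # ...even steps add one unit of negative evidence
    f, c = truth(w_plus, w)
    print(step, round(f, 2), round(c, 2))
# Confidence rises monotonically toward 1 at every step, while frequency keeps
# oscillating around 0.5: each new unit of evidence moves the truth-value less.
```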
In a relatively stable environment, the system will gradually form a stable belief structure, which allows routine problems to be solved more efficiently. However, this stability also means bias when novel problems show up. In that situation, the system will try to handle them in the "good old ways" first, and start to think differently only when the old methods do not work. As a result, the belief structure will change, more or less.