However, as in all realistic systems, resources are limited, and NARS is designed under the assumption of insufficient knowledge and resources. Consequently, in NARS the memory and control mechanism is responsible for resource management, which means selectively turning some of the possibilities allowed by the language and rules into reality, by actually applying the corresponding rules to the premises, while at the same time ignoring other possibilities, even though they are logically relevant.
Under the assumption of insufficient knowledge, the solutions NARS provides to problems cannot be guaranteed to be absolutely optimal, because the system does not have all the relevant knowledge; under the assumption of insufficient resources, they are not even optimal relative to the knowledge the system does have. What can be expected in this situation is for them to be optimal relative to the available knowledge and resources.
As described in Section 3.4, in NARS "to process a task" means to let it, and its derived tasks, interact with the system's relevant beliefs, one per inference step. Since at each step, the task-belief combination completely decides which inference rules can use them as premises, and what kind of conclusions can be derived from them, the processing of a task is determined by the beliefs selected by the control mechanism to interact with it.
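As an illustration of how the task-belief pair determines the applicable rule, the sketch below (hypothetical names, not actual NARS code) picks a first-order syllogistic rule from the position of the shared term in two inheritance statements.

```python
from typing import Optional, Tuple

# An inheritance statement "S -> P", represented as a (subject, predicate) pair.
Statement = Tuple[str, str]

def select_rule(task: Statement, belief: Statement) -> Optional[str]:
    """Pick a syllogistic rule from the position of the shared term.

    In NAL, {M -> P, S -> M} yields deduction; a shared predicate yields
    abduction; a shared subject yields induction.
    """
    s1, p1 = task
    s2, p2 = belief
    if p1 == s2 or p2 == s1:   # the shared term sits "in the middle"
        return "deduction"
    if p1 == p2:               # shared predicate
        return "abduction"
    if s1 == s2:               # shared subject
        return "induction"
    return None                # no shared term: this pair derives nothing

print(select_rule(("robin", "bird"), ("bird", "animal")))  # deduction
```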
For a given task, how many beliefs should be taken into consideration? There are several conventional answers to this question, but none of them is appropriate in this case.
To indicate the importance of a task, it is given a numerical priority value, which is roughly proportional to the amount of processor time the task will receive in the near future. This priority value does not correspond to an absolute deadline or a fixed number of beliefs to interact with; it is relative to the priority values of the other concurrent tasks. Consequently, the same task with the same priority value may get very different amounts of resources in different situations, depending on whether the system is busy or idle.
In this way, the priority of a task indicates its processing speed relative to other tasks. By default, the longer a task has existed in the system, the less valuable it becomes, so its priority decays over time, at a rate specified by the durability value of the task. Together, priority and durability specify the current time budget of a task, that is, the amount of processing time the task is going to get in the near future.
The budget of a task is adjusted from time to time. Besides the gradual priority decay, the priority can be lowered significantly when a best-so-far solution is found, since the task then becomes less demanding compared to the others. However, since there is no such thing as a "perfect solution" in NARS, the budget of a "solved" problem usually remains non-zero, so as to allow the system to look for better solutions, though with fewer resources spent on it. On the other hand, the budget of a task can be increased if the task is repeatedly derived, that is, if there are constant drives toward its solution.
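A minimal sketch of such a budget, assuming one plausible reading of the text in which the priority is multiplied by the durability once per working cycle; the class, the adjustment formulas, and the numbers are illustrative, not those of an actual NARS implementation.

```python
from dataclasses import dataclass

@dataclass
class Budget:
    priority: float    # in (0, 1]: relative share of processor time in the near future
    durability: float  # in (0, 1): how slowly the priority decays

def decay(b: Budget) -> None:
    # One plausible decay scheme: the priority shrinks geometrically each
    # working cycle, so a higher durability means slower relative forgetting.
    b.priority *= b.durability

def on_solution_found(b: Budget, solution_quality: float) -> None:
    # A best-so-far solution makes the task less demanding, but its priority
    # stays non-zero so that better solutions can still be looked for.
    b.priority = max(b.priority * (1.0 - solution_quality), 0.01)

def on_rederived(b: Budget, derived_priority: float) -> None:
    # A repeatedly derived task gets its priority pushed back up,
    # reflecting the constant drive toward its solution.
    b.priority = max(b.priority, derived_priority)

b = Budget(priority=0.8, durability=0.9)
decay(b)                                      # 0.72
on_solution_found(b, solution_quality=0.75)   # 0.18
on_rederived(b, derived_priority=0.5)         # back up to 0.5
print(round(b.priority, 2))
```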
The priority values of tasks are also used in space management. Under the assumption of insufficient resources, the storage space for tasks is finite. With new tasks constantly being added (coming from the environment or derived by the system itself), there will eventually be a shortage of space. Whenever such a situation occurs, the tasks with the lowest priority values are removed to make room for new tasks.
Now we can see that there are two types of forgetting happening to tasks: relative forgetting, in which the priority of a task decays so that it gradually receives less processing time, and absolute forgetting, in which a low-priority task is removed from the memory altogether.
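The second, absolute kind can be sketched as a bounded store that evicts its lowest-priority item when it runs out of space. The heap used here is only one possible realization; actual implementations of NARS typically use a probabilistic "bag" structure for selection and removal.

```python
import heapq
import itertools

class BoundedTaskStore:
    """Keep at most `capacity` tasks; when full, drop the lowest-priority one."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._heap = []                    # min-heap ordered by priority
        self._counter = itertools.count()  # tie-breaker for equal priorities

    def add(self, priority: float, task: str) -> None:
        entry = (priority, next(self._counter), task)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        elif priority > self._heap[0][0]:
            heapq.heapreplace(self._heap, entry)  # evict the least urgent task
        # otherwise the new task itself is the least urgent, and is dropped

    def contents(self):
        return sorted(task for _, _, task in self._heap)

store = BoundedTaskStore(capacity=2)
store.add(0.7, "task A")
store.add(0.2, "task B")
store.add(0.9, "task C")     # "task B" is forgotten to make room
print(store.contents())      # ['task A', 'task C']
```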
To let all tasks and beliefs interact and compete within the whole system is neither efficient nor necessary. Since in NARS each inference step typically takes a task and a belief as premises, and the two must contain a common term, it is natural to introduce an intermediate unit for processing and storage, identified by a term.
In NARS, each term can name a concept, which holds all the existing tasks and beliefs containing that term. Consequently, in each inference step, the two premises must come from the same concept. For example, all sentences containing the statement "robin → bird", as well as those containing "bird → animal", are collected in the concept "bird". If in an inference step one of them is the content of the selected task and the other the selected belief, then the derived task will have the statement "robin → animal" as its content, and it will be sent to the concept "robin", the concept "animal", and the concept "robin → animal" (a compound term can also name a concept).
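The routing of a derived task can be sketched as follows, with a statement represented simply as a (subject, predicate) pair; the names and the representation are illustrative only.

```python
from collections import defaultdict

def terms_of(statement: tuple) -> list:
    """For an inheritance statement (subject, predicate), the terms that name
    concepts are the subject, the predicate, and the statement as a whole."""
    subject, predicate = statement
    return [subject, predicate, f"{subject} -> {predicate}"]

# memory maps a concept name to the tasks stored under that concept
memory = defaultdict(list)

def add_task(statement: tuple) -> None:
    for term in terms_of(statement):
        memory[term].append(statement)

# The derived task "robin -> animal" is filed under three concepts:
add_task(("robin", "animal"))
print(sorted(memory.keys()))  # ['animal', 'robin', 'robin -> animal']
```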
At the level of concepts, the same resource allocation mechanism is used. Each concept has a priority value and a durability value attached, indicating its current budget. Intuitively, the budget of a concept reflects the total budgets of the tasks in it, and preference is also given to concepts that are relatively "well-defined", that is, those whose beliefs are more certain.
Now we can envision the memory of NARS as a two-layer structure: at the first level, the memory is a collection of concepts, each named by a term; within each concept, there is a collection of tasks and a collection of beliefs. In all three types of "collections", the items (concepts, tasks, or beliefs) have associated priority and durability values, as well as a quality measurement. Alternatively, we can envision the memory of NARS as a network, with terms as nodes, and tasks and beliefs as links. In this picture, a concept is a node together with its directly associated links. Each node and link has a priority-durability-quality triple attached.
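To make the triple concrete, here is a compact sketch of the two-layer structure, extending the earlier budget sketch with the quality value; all names and numbers are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Budget:
    priority: float    # urgency: share of resources in the near future
    durability: float  # decay rate of the priority
    quality: float     # long-term usefulness of the item

@dataclass
class Item:
    content: str       # a task or a belief, written here as a Narsese-like string
    budget: Budget

@dataclass
class Concept:
    term: str
    budget: Budget
    tasks: List[Item] = field(default_factory=list)
    beliefs: List[Item] = field(default_factory=list)

# First layer: the memory is a collection of concepts, indexed by term.
memory: Dict[str, Concept] = {}

# Second layer: each concept holds its own tasks and beliefs.
bird = Concept("bird", Budget(0.7, 0.9, 0.6))
bird.beliefs.append(Item("<robin --> bird>.", Budget(0.5, 0.8, 0.9)))
bird.tasks.append(Item("<bird --> animal>?", Budget(0.8, 0.7, 0.5)))
memory["bird"] = bird
```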
The complete processing of a given task normally consists of multiple working cycles. As described above, the overall process and its result depend on the task itself and the related beliefs, as well as on other factors such as the priority distribution in the system and the existence of other tasks. From the task and the beliefs, the possible solutions can be determined in theory, though which ones will actually be produced, and in which order, is determined by the other factors.
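As a rough, toy-scale illustration of one working cycle under the structure just described, the sketch below selects premises with a priority-biased choice and derives a conclusion by deduction; it is a simplification for illustration, not the actual NARS control algorithm.

```python
import random

def select_by_priority(items):
    """Probabilistic choice biased by priority: higher-priority items are
    picked more often, but low-priority ones are not starved completely."""
    if not items:
        return None
    weights = [priority for priority, _ in items]
    values = [value for _, value in items]
    return random.choices(values, weights=weights, k=1)[0]

def deduce(task, belief):
    """If the two inheritance statements share a middle term, derive by deduction."""
    (s1, p1), (s2, p2) = task, belief
    if p1 == s2:
        return [(s1, p2)]
    if p2 == s1:
        return [(s2, p1)]
    return []

# A toy concept "bird": one prioritized task and two prioritized beliefs.
tasks = [(0.8, ("robin", "bird"))]
beliefs = [(0.6, ("bird", "animal")), (0.3, ("bird", "flyer"))]

def working_cycle():
    task = select_by_priority(tasks)
    belief = select_by_priority(beliefs)
    derived = deduce(task, belief)
    print(f"premises {task}, {belief} -> derived {derived}")

working_cycle()  # e.g. premises ('robin', 'bird'), ('bird', 'animal') -> [('robin', 'animal')]
```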
When the environment (a human user or another computer) assigns an input task to NARS, it can either use the default budget or assign a specific priority and durability to influence the system's handling of the task. However, since these initial values will be modified many times by the system, it is the system itself that actually decides how much of its resources to spend on each task, according to the nature of the task as well as the resource availability at the time. Consequently, there is no fixed "processing cost" for a task.
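A hypothetical input interface illustrating this choice; the function name, the Narsese strings, and the default values are assumptions for illustration, not part of any NARS specification.

```python
DEFAULT_PRIORITY = 0.8     # assumed defaults, for illustration only
DEFAULT_DURABILITY = 0.8

def input_task(narsese: str,
               priority: float = DEFAULT_PRIORITY,
               durability: float = DEFAULT_DURABILITY) -> dict:
    """Package an input sentence with either the default or a caller-chosen budget.

    The system will later revise these values on its own, so they only
    influence, rather than determine, how much processing the task receives.
    """
    return {"sentence": narsese, "priority": priority, "durability": durability}

# The user may rely on the defaults, or bias the system toward a task:
t1 = input_task("<robin --> bird>.")
t2 = input_task("<robin --> animal>?", priority=0.95)
```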
In summary, as discussed in Section 2.2, in NARS the processing of a task neither follows a predetermined algorithm nor has a fixed time-space complexity. Instead, tasks are processed in a case-by-case manner, each handled according to the currently available knowledge and resources. In this process, each step still follows an algorithm, but the steps are assembled into a problem-solving process in an experience-dependent and context-sensitive manner.
For example, the constant k in the definition of confidence in Section 3.3 can take any positive value, though if the value is too large or too small, the system's behavior will look abnormal. Even so, it is hard to argue that there is an optimal value for this parameter in all possible intelligent systems. Instead, it is more like a "personality parameter" of the system: different choices will let confidence increase at different rates as evidence accumulates, and therefore lead to different preferences or biases in the system's behavior. Each choice may have some advantages and some disadvantages, so there is no best value, just as there is no "best character" for a person.
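To make the effect of k concrete, the snippet below evaluates the confidence definition from Section 3.3, c = n/(n + k), with n the amount of evidence, for a few values of k; it only shows how the growth rate changes, and does not suggest a "correct" value.

```python
def confidence(evidence: float, k: float) -> float:
    """Confidence as defined in Section 3.3: c = n / (n + k)."""
    return evidence / (evidence + k)

# The same amount of evidence yields different confidence for different k:
# a cautious system (large k) needs more evidence before it becomes confident.
for k in (1, 2, 10):
    values = [round(confidence(n, k), 3) for n in (1, 4, 16)]
    print(f"k={k:>2}: c(1), c(4), c(16) = {values}")
# k= 1: [0.5, 0.8, 0.941]
# k= 2: [0.333, 0.667, 0.889]
# k=10: [0.091, 0.286, 0.615]
```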
Similar situations arise in many places in NARS. For example, how much space should a concept occupy? How fast should a belief be forgotten? What should be the default confidence value for input judgments? In each case, there is a rough notion of "normal values", though its boundary is fuzzy, and the choice within it is more or less arbitrary. On the other hand, to keep internal consistency and coherence, it is better for each system to have its parameters fixed, though different systems (which are all NARS by definition) can choose different values.
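Such choices can be pictured as a single fixed parameter set per system; the names and default values below are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Personality:
    """System parameters fixed for one copy of the system.

    All names and defaults here are hypothetical; the point is only that
    each value is chosen from a fuzzy range of "normal values" and then
    kept fixed to preserve internal consistency.
    """
    confidence_k: float = 1.0               # the constant k in c = n / (n + k)
    concept_capacity: int = 100             # how much space a concept may occupy
    belief_forgetting_rate: float = 0.1     # how fast beliefs are forgotten
    default_input_confidence: float = 0.9   # confidence assumed for input judgments

cautious = Personality(confidence_k=10.0)
credulous = Personality(confidence_k=0.5, default_input_confidence=0.99)
```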
Consequently, when multiple copies of NARS are implemented, each of them may have a different personality, determined by its "DNA", the values of its system parameters. Even if they are given exactly the same experience, the systems may behave somewhat differently, though within a certain range.