This function is both necessary and possible in intelligent systems, because the relations between goals and actions are not fixed, as they are in instinctive systems. The adaptive nature of the system allows the internal relations to be established and revised according to experience, while the insufficiency of knowledge and resources makes it impossible for these relations to be established once and for all.
An instinctive system can be described as a deterministic stimulus-response black box, or a function that maps input to output. For an adaptive system, on the contrary, such a description is not proper, because there are always novel stimuli or inputs for which no response is ready-made. Instead, the system has to solve each problem when it appears. For an intelligent system, adaptation means handling the situation according to experience, even though no past situation is exactly like the current one when all the details are taken into consideration.
Such an understanding is very different from the traditional view that valid reasoning is the process from "truth to truth", while it still keeps the demand for rationality:
While the validity of the deduction rule seems obvious, that is not the case for the other rules. The induction rule has the form {(M → S), (M → P)} ⊢ (S → P), and the abduction rule, {(S → M), (P → M)} ⊢ (S → P). Unlike deduction, in these two rules the conclusion does not approach absolute truth even when the premises do. Instead, in these cases the conclusion is supported by a piece of positive evidence of unit amount, and therefore its truth-value approaches <1, 0.5>.
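To see where <1, 0.5> comes from, the following minimal sketch (in Python) maps amounts of evidence to a truth-value, assuming the usual NARS definitions f = w+/w and c = w/(w+k), with the evidential horizon k set to 1:

    # A minimal sketch of the evidence-to-truth-value mapping, assuming
    # f = w+/w and c = w/(w+k), with evidential horizon k = 1:
    def truth_value(w_plus, w, k=1):
        f = w_plus / w      # frequency: proportion of positive evidence
        c = w / (w + k)     # confidence: weight of evidence collected so far
        return f, c

    # One unit of positive evidence, as produced by induction or abduction:
    print(truth_value(1, 1))    # -> (1.0, 0.5), i.e., <1, 0.5>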
The justification of non-deductive inference is a well-known difficult problem in logic and philosophy. In NARS, this problem is solved by using an experience-grounded semantics, so the non-deductive rules are valid in the sense that their conclusions are "true" to the extent corresponding to their evidential support, not to the extent corresponding to the "matter of fact". Non-deductive conclusions are usually less certain than deductive conclusions, but the difference is in the confidence factor, not the frequency factor, of their truth-values.
The confidence factor of conclusions (deductive or not) can be increased by the revision rule, which merges evidence collected from different fragments of experience on the same statement. This rule also settles inconsistency among the system's beliefs, by letting the frequency of the conclusion be a compromise between the conflicting premises.
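As a sketch of how the rule can work, the following function pools the evidence behind two truth-values, using the same evidence/truth-value mapping as above; this evidence-addition form is one standard way to express revision:

    # A sketch of revision as evidence pooling: recover the evidence
    # amounts behind each premise, add them, and map back to a
    # truth-value (same mapping as above, k = 1):
    def revise(tv1, tv2, k=1):
        (f1, c1), (f2, c2) = tv1, tv2
        w1 = k * c1 / (1 - c1)          # total evidence behind premise 1
        w2 = k * c2 / (1 - c2)          # total evidence behind premise 2
        w_plus = f1 * w1 + f2 * w2      # positive evidence adds up
        w = w1 + w2                     # so does total evidence
        return w_plus / w, w / (w + k)

    # Two conflicting premises: the revised frequency is a compromise,
    # while the confidence is higher than in either premise:
    print(revise((1.0, 0.5), (0.0, 0.5)))   # -> (0.5, 0.666...)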
These four rules (deduction, induction, abduction, and revision) form the core of a non-axiomatic logic, defined on inheritance statements with simple terms. These rules are syllogistic, because like the rules in Aristotle's syllogistic, their premises and conclusions all have the "subject-copula-predicate" form, and for each rule, its two premises must have a shared term. However, unlike traditional syllogistic, the rules in NARS cover types of inference that are beyond binary deduction.
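The shared-term requirement makes rule selection purely structural, as the following sketch illustrates; statements are represented as subject-predicate pairs under the inheritance copula, truth-value functions are omitted, and the terms used in the example are illustrative:

    # A sketch of syllogistic rule selection: the position of the shared
    # term M in the two premises determines the type of inference.
    def syllogism(p1, p2):
        (s1, pr1), (s2, pr2) = p1, p2
        if pr1 == s2:                       # {(S -> M), (M -> P)}
            return "deduction", (s1, pr2)   #   |- (S -> P)
        if s1 == s2:                        # {(M -> S), (M -> P)}
            return "induction", (pr1, pr2)  #   |- (S -> P)
        if pr1 == pr2:                      # {(S -> M), (P -> M)}
            return "abduction", (s1, s2)    #   |- (S -> P)
        return None                         # no shared term, no conclusion

    # From "swans are birds" and "swans are swimmers", induction yields
    # the hypothesis "birds are swimmers":
    print(syllogism(("swan", "bird"), ("swan", "swimmer")))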
Besides the above rules defined on inheritance, in NARS there are other syllogistic rules defined on the other copulas (similarity, implication, and equivalence) that carry out inference of types such as comparison, analogy, prediction, and explanation.
As described in Section 3.3, in terms of structure there are two types of terms: simple and compound. A compound term can either come directly from the system's experience, or be built by the system itself, using certain composition rules. Similarly, there are decomposition rules that introduce the components of compound terms appearing in the premises.
For each type of compound term in Narsese, there are corresponding composition rules and decomposition rules.
For example, given terms T1 and T2, their "extensional intersection" is a compound term, (T1 ∩ T2), with an extension (i.e., set of specializations) consisting of the terms that are in both the extension of T1 and the extension of T2, just as a "green light" is a "light" that is "green". Though in theory such a compound can be built from any pair of terms, the system cannot afford the resources for that. Instead, the compound is built when the system meets an instance of it. So there is the following composition rule: {(M → T1), (M → T2)} ⊢ (M → (T1 ∩ T2)), where the conclusion approaches truth when both premises approach truth. It is like "If this is a light and it is green, then it must be a green light". Similarly, there is a decomposition rule for inference like "If this is a light but not a green light, then it must not be green".
Like syllogistic rules, composition/decomposition rules also have associated truth-value functions, designed according to the experience-grounded semantics. In this way, the function of compound terms in NARS is to summarize recurring patterns in the experience of the system. If terms T1 and T2 have many common specializations, so does (T1 ∩ T2); if they have no known common specialization, (T1 ∩ T2) will not even exist in the system (unless it directly appears in the input sentences), even though it is a valid term by definition.
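As an illustration, the composition rule above can be sketched as follows; the conjunctive truth function used here is an assumption that satisfies the stated property, and the exact NAL function may differ in detail:

    # A sketch of the composition rule for extensional intersection,
    # {(M -> T1), (M -> T2)} |- (M -> (T1 & T2)).  The conjunctive
    # truth function below is an assumption: it makes the conclusion
    # approach truth only when both premises do, but may differ from
    # the exact NAL function.
    def compose_intersection(tv1, tv2):
        (f1, c1), (f2, c2) = tv1, tv2
        return f1 * f2, c1 * c2

    # "If this is a light and it is green, then it must be a green light":
    print(compose_intersection((1.0, 0.9), (1.0, 0.9)))   # -> (1.0, 0.81)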
However, since NARS is designed under the assumption of insufficient resources, it cannot do this. When the task to be processed is a question or a goal, the system must use it to guide the inference and limit the scope of premise selection, so as to focus the system's attention on the relevant beliefs only. Fortunately, in a term logic like the one used in NARS, there is a direct relation between the rules used in forward inference and those used in backward inference.
For example, assume the task is a question Q, for which there is no belief in the system that can directly answer it. However, such an answer J could be derived by a forward rule {J1, J2} ⊢ J from judgments J1 and J2. Also assume that J2 already exists as a belief in the system; then what the system can do is use a backward inference rule {Q, J2} ⊢ Q1, where Q1 is a derived question that can be answered by J1. If this question can be answered, then its answer, plus the belief J2, can be used to derive an answer to the original question by forward inference. The same is true if the question is replaced by a goal, which will be achieved by achieving multiple derived subgoals.
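The following sketch shows this correspondence for the case of deduction: given a forward rule {(S → M), (M → P)} ⊢ (S → P), a question (S → P)? plus a belief (M → P) yields the derived question (S → M)? (again with statements as subject-predicate pairs, and illustrative terms):

    # A sketch of backward inference derived from forward deduction:
    # forward:  {J1 = (S -> M), J2 = (M -> P)} |- J = (S -> P)
    # backward: {Q = (S -> P)?, J2 = (M -> P)} |- Q1 = (S -> M)?
    def backward_deduction(question, belief):
        (qs, qp), (bs, bp) = question, belief
        if qp == bp and qs != bs:   # the belief shares the predicate of Q
            return (qs, bs)         # derived question Q1
        return None

    # "Is a robin a flyer?" plus the belief "birds are flyers" derives
    # the question "Is a robin a bird?"; answering it answers Q:
    print(backward_deduction(("robin", "flyer"), ("bird", "flyer")))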
Now we can uniformly represent all the above-mentioned rules in the following format: {T, B} ⊢ T1, where T is a task, B is a belief (a judgment), and T1 is the derived task. When T is a judgment, this is forward inference; otherwise (when T is a question or a goal), it is backward inference. T1 has the same type as T, and this task-derivation process can be repeated any number of times, until a derived task is directly solved.
For a question, a "direct solution" means there is a belief that can directly answer it. When there are multiple candidate answers, a choice rule is applied to pick the better one. For a "yes-no" question, a better answer has a higher confidence value (defined in Section 3.3); for a "wh-question", a better answer has a higher expectation value, which is an estimate of the chance for the answer to be confirmed in the future.
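A sketch of the choice rule follows; the expectation formula e = c · (f − 1/2) + 1/2 is the usual NARS definition, and it keeps e near 0.5 when confidence is low:

    # A sketch of the choice rule.  With low confidence, the
    # expectation of a truth-value stays close to the neutral 0.5.
    def expectation(f, c):
        return c * (f - 0.5) + 0.5

    def choose(candidates, yes_no):
        """Pick the better answer among (answer, (f, c)) candidates."""
        if yes_no:      # "yes-no" question: prefer higher confidence
            return max(candidates, key=lambda a: a[1][1])
        else:           # "wh-question": prefer higher expectation
            return max(candidates, key=lambda a: expectation(*a[1]))

    # A well-supported answer can beat a high-frequency answer that is
    # backed by little evidence:
    print(choose([("A", (1.0, 0.3)), ("B", (0.8, 0.9))], yes_no=False))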
For a goal, a "direct solution" means there is an executable operation that can directly satisfy it. Since in NARS at any moment there are typically many goals coexisting, additional care is needed to prevent an operation from producing undesired effects on other goals. Therefore, goal derivation actually consists of two steps. After the backward rule {G, B} ⊢ G1 derives G1 from goal G and belief B, G1 is not immediately treated as a new goal. Instead, it increases the "desire-value" of the corresponding statement S, which is the content of G1. In a similar way, other goals may decrease the desire-value of S. It is only when the desire-value of S becomes high enough that a decision-making rule turns it into a goal to be actually pursued by the system. In this way, every new goal is established based on an overall consideration, rather than on the need of a single goal.
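The two-step process can be sketched as follows, reusing the revise() and expectation() functions from the sketches above; pooling desire-values by revision and the concrete threshold value are assumptions made for illustration:

    # A sketch of the decision-making step.  Desire-values are pooled
    # like truth-values (via the revise() sketch above), and a statement
    # is promoted to a pursued goal only when its overall desirability
    # passes a threshold; 0.6 is an illustrative value, not one fixed
    # by the theory.
    DECISION_THRESHOLD = 0.6

    def decide(desire):
        f, c = desire
        return expectation(f, c) > DECISION_THRESHOLD

    # One goal desires S, another opposes it; the pooled desire-value
    # is a compromise, so S is not turned into a goal:
    print(decide(revise((1.0, 0.5), (0.0, 0.5))))   # -> False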
Many people have argued for the illogical nature of human thinking. Given the mistakes it makes all the time, how can we argue that the mind follows a "logic"? There is no contradiction here: when a solution of the system is judged to be a "mistake", it is being compared with the system's future experience, while the solution can still be "logical" with respect to the knowledge and resource restrictions of the system at the time it was made.
Another common objection to the logical approach toward AI is that it is not psychologically or biologically plausible. Since NARS is a normative model of general intelligence, not a descriptive model of human intelligence, it does not have to explain how the process is carried out in the brain in all its details. However, we can still see that, compared with other logic systems, NARS is more plausible, because its representation language is basically a conceptual network, in which two concepts are linked when they can substitute for each other (in opposite directions) in a thinking process, and inference is basically the transition and expansion of this substitutability. In particular, the treatment of positive and negative "evidence" in NAL is consistent with Hebbian learning at the neural level and Pavlovian conditioning at the behavioral level.