What makes the situation more complicated than in instinctive systems is that the system needs to deal with goals for which the best action sequence is unknown at the moment and has to be found by the system itself. To do this according to experience means remembering the "history" of each action, such as its preconditions, causes, and consequences. Since new situations are usually not identical to old ones, and the system cannot afford the resources to remember all the details, the knowledge about each action must be represented in a general form, which is not only compact, and therefore more efficient, but also ignores irrelevant details, so that individual aspects of a new situation can be recognized as the same as those that appeared in the past.
To make the above happen, the system needs to systematically represent its goals and actions, as well as its beliefs that relate them to one another. This representation should allow a situation to be described at different levels, each with its own granularity and level of detail, to satisfy different requirements on the accuracy and complexity of the description. A language is nothing but such a "systematic way of representation".
Concretely speaking, in an inference system a language may serve two major functions:
As mentioned in Section 3.2, traditional inference systems are based on model-theoretic semantics, where the "meaning" of a word is the entity it denotes in the model, and the "truth-value" of a statement indicates whether it matches a fact in the model.
Under the assumption of insufficient knowledge and resources, NARS cannot assume the existence of a "model", as a complete and consistent description of the environment, and specify meaning and truth-value accordingly. Instead, the system's knowledge about the environment comes entirely from its experience. Consequently, the system's semantics has to be "experience-grounded", that is, the meaning of every word and the truth-value of every statement in NARS are defined with respect to the given experience of the system, at least in principle.
This experience-grounded semantics is fundamentally different from model-theoretic semantics in several major aspects:
In mathematical logic, the design of the formal language is strongly influenced by mathematical languages, and has deliberately kept a distance from natural languages, to avoid ambiguity and other forms of uncertainty. For example, the language used in FOPL consists of constants and variables to represent outside objects, as well as predicates to represent properties of, and relations among, the objects. Each simple proposition represents a statement about a property of an object, or a relation among some objects. Complicated information is represented by using logical operations (such as "and", "or", and "not") to connect simple propositions into compound propositions.
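For example (with predicate and constant names invented here purely for illustration), a simple and a compound proposition in such a language might look like this:

```latex
% A simple proposition: the object named by the constant tweety has the
% property named by the predicate Bird.
Bird(tweety)

% A compound proposition, built from simple ones with "and", "or", "not":
(Bird(tweety) \lor Bat(tweety)) \land \lnot Fish(tweety)
```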
The precondition for the usage of such a language is that the world has already been clearly categorized into objects, properties, and relations, where each category has a well-defined boundary and determinate criteria, specified independently of the system that lives and works in the world. Furthermore, the state of affairs in the world is assumed to be fully expressible in the language, so what the system needs to do is simply to identify the true statements among all possible statements.
For a system whose language is used to represent not the world as it is, but the world as the system has experienced it, the situation is fundamentally different. Here the basic unit of representation is not the individual object, but the primitive and atomic components of the system's experience, that is, its "percepts" and "acts". From them, complex components are built hierarchically to represent the recurring structures or patterns in experience, and some of them intuitively correspond to what we usually call an "object", "property", or "relation". However, since all of them are summaries of experience, not names of pre-existing entities, their application boundaries are fuzzy, relatively defined, and context-sensitive.
To deal with this situation, Narsese is designed to be a "term logic" (also called "categorical logic"), where a basic statement has the form of "subject-copula-predicate". In this structure, the subject and the predicate are both terms, and the "copula" links the former to the latter.
A term can be simple, as a string of characters from a certain alphabet, or compound, as a structure composed of other (simpler) terms by a term operator from a certain set. The basic type of copula is called inheritance in Narsese, and is written as "→". As a result, the simplest statement form in Narsese is "s → p". To make the description easier, in the following examples English words are used as terms, though it is not claimed that the meaning of such a word in NARS is exactly the same as what it means to a native English speaker, only that it is related to it. It is under this condition that we can use the Narsese statement "bird → animal", and say that it intuitively means "Bird is a kind of animal" in English.
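To make the subject-copula-predicate structure concrete, here is a minimal sketch in Python; it is only an illustration of the statement form, not the actual NARS data structures, and the class and field names are invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Inheritance:
    """A basic Narsese statement "subject → predicate"."""
    subject: str    # a simple term, e.g. "bird"
    predicate: str  # a simple term, e.g. "animal"

    def __str__(self) -> str:
        return f"{self.subject} → {self.predicate}"

# "Bird is a kind of animal"
belief = Inheritance("bird", "animal")
print(belief)  # bird → animal
```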
As the above example shows, the statement specifies the relation between the two terms as "bird is a specialization of animal", or equivalently, "animal is a generalization of bird", where each term represents an abstraction of certain aspects of the system's experience, and the statement says that one can be treated as the other in certain ways.
As said above, the "meaning" of a term to the system is determined by the role it plays in the experience of the system, so now we can put the definition in a more concrete form: the meaning of a term T is determined by its specializations (i.e., all the x that satisfy "x → T") and its generalizations (i.e., all the y that satisfy "T → y"). Furthermore, since this "inheritance" relation is reflexive and transitive, it can be proved that "S → P" holds if and only if all the specializations of S are also specializations of P, or, equivalently, if and only if all the generalizations of P are also generalizations of S.
In the technical writings on NARS, the above "specializations" and "generalizations" are called "extension" and "intension", respectively.
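Under the simplifying assumption that the relevant part of the system's experience is just a finite set of inheritance statements, extension and intension can be read off directly, as in this sketch (the belief set is invented, and transitive closure is ignored for brevity):

```python
# Each pair (s, p) stands for a statement "s → p" in the system's experience.
beliefs = {("robin", "bird"), ("bird", "animal"), ("animal", "living_thing")}

def extension(term, beliefs):
    """The specializations of term: all x with "x → term"."""
    return {s for (s, p) in beliefs if p == term}

def intension(term, beliefs):
    """The generalizations of term: all y with "term → y"."""
    return {p for (s, p) in beliefs if s == term}

print(extension("bird", beliefs))  # {'robin'}
print(intension("bird", beliefs))  # {'animal'}
```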
However, this does not mean that every statement is equally justified; in an adaptive system, all justification is a matter of degree. Here a numerical measurement is preferred, not because it is more accurate, but because it is compact and general. Though it might be more informative to remember all the relevant details about the confirmation and disconfirmation of a statement in the past, the system usually cannot afford the resources for that, and doing so would also make the comparison between competing conclusions very complicated. For this reason, NARS uses numerical measurements for truth-value, even though the numbers do not provide all the relevant information about the "history" of a statement.
Since the statement "S → P" is equivalent to "all the specializations of S are also specializations of P, and all the generalizations of P are also generalizations of S", it can be seen as a summary of many cases in the system's experience. If a case is consistent with this summary, it can be counted as a piece of positive evidence; otherwise it is negative evidence. Thus, for the statement "bird → animal", its positive evidence includes the shared specializations and generalizations of the terms bird and animal, while its negative evidence includes the specializations of bird that are not specializations of animal, as well as the generalizations of animal that are not generalizations of bird.
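The definition can be read schematically as follows (a sketch only: the sets would be obtained from the system's actual experience, and the function name is invented):

```python
def evidence(ext_s, int_s, ext_p, int_p):
    """Amounts of positive and negative evidence for "S → P",
    given the extensions and intensions of S and P as sets."""
    w_plus = len(ext_s & ext_p) + len(int_s & int_p)   # shared specializations and generalizations
    w_minus = len(ext_s - ext_p) + len(int_p - int_s)  # specializations of S missing from P,
                                                       # generalizations of P missing from S
    return w_plus, w_minus
```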
Assume the available amounts of positive and negative evidence of a statement are written as w+ and w-, respectively; then the total amount of evidence is w = w+ + w-. The frequency of the statement is f = w+/w, and the confidence of the statement is c = w/(w + k), where k is a positive constant; for the current discussion, we can take k = 1. The truth-value of the statement is the pair <f, c>.
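In code, the definitions transcribe directly (with k = 1 as above):

```python
def truth_value(w_plus, w_minus, k=1):
    """Return the pair <f, c> from the amounts of positive and negative evidence."""
    w = w_plus + w_minus    # total evidence
    f = w_plus / w          # frequency
    c = w / (w + k)         # confidence
    return f, c

print(truth_value(3, 1))  # (0.75, 0.8)
```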
Therefore, frequency is the proportion of positive evidence among the current evidence, while confidence is the proportion of the current evidence among all evidence after the arrival of new evidence of amount k. While frequency indicates the extent to which the statement is consistent with the system's experience, confidence indicates the extent to which the frequency can be modified by future evidence. The two measurements are independent of each other, in the sense that from one of them the other cannot be determined, or even bounded, except in trivial cases.
The confidence measurement is the major factor that distinguishes the NARS approach to uncertainty processing from other approaches, such as Bayesian networks and fuzzy logic. Since each belief in NARS is based on finite evidence, its confidence value can approach the upper bound 1 indefinitely, but never reach it. Consequently, "absolute truth" is represented by the truth-value <1, 1>, a limit of the actual truth-values in the system.
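For instance, with k = 1 the confidence of a belief grows toward 1 as its evidence accumulates, but never reaches it:

```python
for w in (1, 10, 100, 1000):
    print(w, w / (w + 1))   # 0.5, 0.909..., 0.990..., 0.999...
```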
Unlike simple terms such as bird and animal, which have no internal structure, a compound term consists of one or more component terms that are bundled together by a term operator.
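Continuing the earlier sketch, a compound term can be pictured as a term operator applied to a tuple of components; the operator symbol and the printed format below are chosen for illustration and are not meant as a definition of Narsese syntax:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Compound:
    operator: str      # the term operator, e.g. "&"
    components: tuple  # the component terms, e.g. ("bird", "swimmer")

    def __str__(self) -> str:
        return "(" + self.operator + ", " + ", ".join(map(str, self.components)) + ")"

print(Compound("&", ("bird", "swimmer")))  # (&, bird, swimmer)
```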
An event is a statement with a relative temporal attribute, specified with respect to other events, as in "Tom met Mary, then they became friends". In Narsese, the two basic temporal relations are "subsequent" and "simultaneous".
An operation is an event that corresponds to one of the system's actions, such as "to move forward" in a robot. It is a statement with a procedural interpretation.
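One minimal way to picture events and operations together is the following sketch; the class and field names are invented here and are not part of NARS:

```python
from dataclasses import dataclass
from enum import Enum

class Temporal(Enum):
    SIMULTANEOUS = "simultaneous"  # happens together with the reference event
    SUBSEQUENT = "subsequent"      # happens after the reference event

@dataclass
class Event:
    statement: str              # the statement that occurs, e.g. "Tom meets Mary"
    relation: Temporal          # temporal relation to some other event
    is_operation: bool = False  # True when the event is an action taken by the system itself

meeting = Event("Tom meets Mary", Temporal.SIMULTANEOUS)
friendship = Event("Tom and Mary become friends", Temporal.SUBSEQUENT)  # after the meeting
move_forward = Event("move forward", Temporal.SUBSEQUENT, is_operation=True)
```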
Many practical problems are most efficiently represented in special formats, such as matrices in mathematics and trees in data structures. Narsese allows these representations to be embedded in it, using relations and compound terms. In this way, Narsese serves as a common meta-language for various special representation formats and languages.
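As a rough illustration of this idea (the encodings below are invented for the example and are not official Narsese conventions), a small matrix or tree can be written as nested "operator plus components" compounds:

```python
# A 2x2 matrix as a compound of row compounds, and a small binary tree as
# nested compounds; both reuse the generic "operator + components" shape.
matrix = ("matrix", ("row", 1, 2), ("row", 3, 4))
tree = ("node", "a", ("node", "b"), ("node", "c"))
```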