In Chapter 3 and Chapter 4, the experience of NARS is described as an incoming stream of Narsese sentences, each treated as a task by the system. Like any description of an information system, this one is an abstraction of the actual process, at a certain level.
There are two aspects of the process that have been omitted in this abstraction:
NARS, as described in the previous chapters, seems to be an easy target of some existing criticisms of traditional "symbolic AI", which is considered "disembodied" and "ungrounded" because such a system has no sensorimotor mechanism. Strictly speaking, these criticisms often get the real issue wrong. In a general sense, every implemented system has a "body", which is the hardware, wetware, or other form of hosting entity. As far as the system interacts with its environment, it has a "sensorimotor mechanism" carrying out the interaction, and that mechanism can be described in the language of physics, chemistry, biology, or some other theory. The traditional symbolic AI systems are indeed disembodied and ungrounded, but not because they have no body or sensor; rather, it is because their behaviors are largely independent of their experience, so it is not necessary to mention their sensorimotor mechanisms when describing their interaction with the environment.
NARS is embodied and grounded, because Narsese has an experience-grounded semantics (as described in Section 3.3), and the meaning and truth-value of its terms are experience-dependent. The body and sensorimotor mechanism are not mentioned because, as long as the system's experience can be abstractly described as a stream of incoming Narsese sentences, the actual physical devices and processes that carry out the interaction can be omitted from the description; this does not mean that they do not exist or cannot be described. NARS is fully intelligent, according to the working definition of intelligence introduced in Chapter 2.
Even so, the practical problem-solving capability of NARS can be extended by using tools, as explained in Section 4.3. We can refer to any "NARS + tools" system as "NARS+" ("NARS plus"). In all systems in this category, the "NARS" parts are the same (with minor differences in parameter values), while the "plus" parts can be very different from each other, given the huge number of possible tools that can be used by NARS. In a NARS+, the system's interaction with the environment is no longer restricted to Narsese, but can include many other forms that can be converted to and from Narsese by the tools used by NARS.
As this book presents a general theory of intelligence, no assumption is made here about the nature, scope, and granularity of the sensory channels an intelligent system can have. In fact, they can be anything, and are not limited to the sensory organs of humans and animals. Roughly speaking, sensory experience is processed in two stages. In the first stage, the recognizable signals are registered in the system, or we can say that the stimuli are transformed into an internal representation within the system. This is usually referred to as "sensation". Then, in the second stage, the internal representations of the stimuli are reorganized and related to the existing contents of the memory. This is usually referred to as "perception".
Any non-trivial description of the sensation process has to address the underlying physical, chemical, or biological process, and therefore depends heavily on the modality. On the other hand, perception is a process that is much more uniform across all modalities. This process has two major aspects:
In NARS+, some terms will correspond to recognizable signals in the supported sensory channel, so that whenever a signal is sensed (not passively, but as the result of an operation of the system), the corresponding term will be activated, and interpreted as an event indicating the sensing of the signal. At the same time, compound terms are also composed or activated, corresponding to patterns of these events. In NARS+, the primary "patterns" are temporal. As mentioned in Section 3.3, compound terms can be built for sequential or simultaneous events, so there will be terms corresponding to complex events with longer durations and richer contents. Spatial patterns and other patterns are special cases of temporal patterns, in the sense that they are nothing but certain (temporal) arrangements of signals and actions.
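As an illustration of the composition step described above, the following sketch groups sensed events into temporal compound terms, using Narsese's parallel ("&|") and sequential ("&/") conjunction operators. The code is illustrative only, not part of any actual NARS implementation; the names `Event` and `sequence_term` are assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    term: str      # term corresponding to a recognizable signal
    time: int      # discrete occurrence time (simplifying assumption)

def sequence_term(events):
    """Compose simultaneous events with '&|' and sequential groups
    with '&/', following Narsese's temporal conjunctions."""
    # group events that occur at the same instant
    by_time = {}
    for e in events:
        by_time.setdefault(e.time, []).append(e.term)
    groups = []
    for t in sorted(by_time):
        terms = sorted(by_time[t])
        groups.append(terms[0] if len(terms) == 1
                      else "(&|, " + ", ".join(terms) + ")")
    return groups[0] if len(groups) == 1 else "(&/, " + ", ".join(groups) + ")"

# two simultaneous signals followed by a third
events = [Event("bright", 1), Event("warm", 1), Event("touched", 2)]
print(sequence_term(events))   # (&/, (&|, bright, warm), touched)
```

The resulting compound term names a complex event with a longer duration and richer content than any of its component signals, and can itself enter further compositions.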
Like other compound terms, patterns built in this way will soon be forgotten by the system, unless they repeatedly appear in the system's experience and turn out to be useful in achieving the system's goals. Some large-scale patterns with stable properties will be considered "objects" in the environment. As with other terms in NARS, the meaning of such a term is determined by its experienced relations with other terms, including those that directly correspond to recognizable signals, though it cannot be reduced to the signals, since the other relations, such as those with existing concepts, beliefs, goals, and actions, all contribute to what the term "means" to the system.
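The forgetting dynamics just described can be caricatured as a priority value that decays every working cycle and is reinforced whenever the pattern recurs in experience. The decay constant, boost rule, and threshold below are illustrative assumptions, not the actual parameter values or budget functions of any NARS release.

```python
DECAY = 0.9   # multiplicative decay per working cycle (assumed value)

class PatternMemory:
    """Toy model: patterns survive only if they recur often enough."""

    def __init__(self):
        self.priority = {}          # pattern -> current priority in [0, 1]

    def observe(self, pattern, boost=0.5):
        # reinforcement: each recurrence moves priority toward 1
        p = self.priority.get(pattern, 0.0)
        self.priority[pattern] = p + (1.0 - p) * boost

    def cycle(self, threshold=0.05):
        # decay every pattern; forget those that fall below the threshold
        for pat in list(self.priority):
            self.priority[pat] *= DECAY
            if self.priority[pat] < threshold:
                del self.priority[pat]

mem = PatternMemory()
mem.observe("(&/, bright, touched)")
for _ in range(5):
    mem.cycle()                       # priority decays: 0.5 * 0.9**5
mem.observe("(&/, bright, touched)")  # recurrence keeps the pattern alive
```

A pattern observed once and never again would eventually drop below the threshold and be forgotten, while a recurring and goal-relevant pattern remains available as a candidate "object".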
The beliefs related to sensation are produced and revised in the same way as other beliefs in the system. Since each belief indicates an inheritance (or one of its variants: similarity, implication, and equivalence), a belief related to a sensed pattern typically contributes to its perception or understanding by indicating what the pattern "can be seen as", and to what extent. That is, the experience-grounded semantics can be applied to the "sensory terms" just as it is applied to the more abstract terms, which implies that the same set of inference rules should be used to process them. This should not be a surprise, because the system is basically facing the same issues at different levels of abstraction.
An intelligent system needs to learn when an action can be executed, and what effects it will have. This learning is usually achieved through a sensorimotor feedback loop: when an action is executed, its observed effect and its expected effect are compared, so as to revise the system's beliefs about the action. On the other hand, the sensory capacity of a system usually depends on its motor capacity, because many observations require the execution of certain actions. Therefore, the sensorimotor capacity should be treated as one mechanism, with a sensory aspect and a motor aspect. Furthermore, this mechanism is a natural extension of the system's general-purpose intelligence into the domain of a concrete modality.
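One pass through the feedback loop above can be sketched as follows: execute the operation, compare the observed effect with the expected one, convert the comparison into a unit of new evidence, and revise the belief about the action with the system's ordinary revision rule. The function names are illustrative, and the embedded evidence-pooling revision (with assumed horizon k = 1) is a minimal stand-in for the full NARS machinery.

```python
def revise_stub(t1, t2):
    # minimal evidence-pooling revision with horizon k = 1 (assumption)
    (f1, c1), (f2, c2) = t1, t2
    w1, w2 = c1 / (1 - c1), c2 / (1 - c2)
    w = w1 + w2
    return ((f1 * w1 + f2 * w2) / w, w / (w + 1.0))

def feedback_step(belief, execute, expected_effect):
    """One loop iteration. belief: (frequency, confidence) of the
    statement 'this action leads to this effect'."""
    observed = execute()                         # run the operation
    outcome = 1.0 if observed == expected_effect else 0.0
    # a single observation carries one unit of evidence: c = 1/(1+k) = 0.5
    return revise_stub(belief, (outcome, 0.5))

belief = (0.8, 0.6)      # prior belief about the action's effect
# a successful execution strengthens the belief
belief = feedback_step(belief, lambda: "door_open", "door_open")
```

Here the executed operation confirmed the expectation, so both the frequency and the confidence of the belief increase; a disconfirming observation would lower the frequency instead, while still raising the confidence.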
NARS can be implemented with different system parameters, and each implementation will have a different "personality"; it is hard to say which one is "better" than the others. Since NARS interacts with its environment in Narsese, its behavior is independent of its body, as long as the body allows Narsese to be used in its complete form.
However, this is no longer the case for NARS+. In such a system, the "NARS" part remains the same (except for the values of the system parameters), while the "plus" part can change from system to system. We can think of NARS as the intelligent core of the system, and the "plus" part as the tools or sensorimotor mechanism that serve as an interface between the core and the environment. In such a system, the "body" includes not only the hardware/wetware hosting the core, but also the sensorimotor devices that to a large extent decide the type (though not the content) of the experience, by specifying the recognizable signals and executable operations. Consequently, if multiple NARS+ systems have incompatible sensorimotor mechanisms, their beliefs and concepts may have little overlap. Even when the systems are put into the same physical world, each of them will form its own "world view", partially shaped by its sensorimotor mechanism. Among them, there is no "correct" or "objective" way to talk about the world.
Therefore, it is correct to demand that an AI system be "embodied" and "situated"; however, it is wrong to suggest that these properties can only be achieved by robots or systems with simulated human sensorimotor mechanisms, because the system's experience can be very different from that of a human being. Similarly, it is correct to claim that since AI systems won't have a human body and human experience, their beliefs and concepts will never be identical to those of a typical human being; however, it is wrong to conclude from this that a computer system cannot be truly intelligent, because "intelligence" is not defined by concrete beliefs and concepts, but by how they are related to the system's experience.
In this sense, the content of a mind depends on its body, though all types of minds share certain core functionality, which is largely body-independent. NARS is an attempt to specify the shared core (mind design), while leaving the details of the sensorimotor mechanism (body design) to be specified by the various NARS+ systems.