On the other hand, the contents of the above entities are surely different. The system has different actions and goals with respect to these two parts of the environment, and they usually cannot substitute for each other. Consequently, the beliefs and concepts the system developed to describe its experience in these two parts use different vocabularies.
As explained in Section 4.3, in NARS some actions are voluntary: they are specified in Narsese and controlled by the decision-making process of the system. The current NARS design allows all kinds of inner operations to be specified and "plugged" into the system to extend it into various versions of NARS+.
Some of the inner-oriented operations have no direct relation to the reasoning process, and can be handled exactly the same way as the outer-oriented operations. In a robotic NARS+, examples of such operations may include one that detects a low battery charge level and generates a "hungry" signal, and many that detect body damage and generate various "pain" signals. Of course, these signals will not be identical to what we call "hungry" and "pain", though they serve very similar functions in the system.
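For concreteness, the following sketch shows one way such an inner-oriented operation could be plugged into the system. It is only an illustration under assumptions: the callbacks read_battery_level and inject_event, the threshold value, and the injected Narsese-like statement are all hypothetical, not part of any actual NARS implementation.

```python
# Sketch of a plug-in inner-oriented operation: a hypothetical monitor polls
# the robot's battery level and, below a threshold, injects an event that
# plays the role of a "hungry" signal for the reasoner.

HUNGRY_THRESHOLD = 0.2  # assumed threshold; not specified by NARS itself

def make_battery_monitor(read_battery_level, inject_event):
    """read_battery_level and inject_event are hypothetical callbacks
    provided by the host robot and by the reasoner, respectively."""
    def monitor():
        level = read_battery_level()          # e.g. a value in 0.0 .. 1.0
        if level < HUNGRY_THRESHOLD:
            # The signal enters the system as experience, not as a built-in goal.
            inject_event('<{SELF} --> [hungry]>. :|:')
    return monitor
```

The point of the sketch is that such an operation only reports an internal condition as experience; what the system does about it is left to its own reasoning and decision making.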
Other inner-oriented operations are directly related to the reasoning process. These operations give the system the abilities of self-awareness and self-control, and allow it to modify its internal processing, which would otherwise be carried out entirely by the automatic actions.
There are, however, restrictions on what the system can know and can do about itself.
Depending only on the automatic actions embedded in the control mechanism of NARS, what the system does resembles "free association", which has all the necessary ingredients of intelligent problem-solving; for complicated problems, however, deliberation and reflection are needed, in which the system thinks about its own thinking process. For this reason, it may be necessary to consider some of the above operations as part of NARS, while leaving the others as optional components to be included in various types of NARS+.
The self-other distinction is learned by the system in its interaction with the environment. Here what really matters is not physical proximity and connectivity, but accessibility and controllability. This is especially the case for artificial systems. Since they are not biological, what is considered "one system" cannot be recognized according to whether it has grown out of the same origin, but by whether, or to what extent, it functions coherently in achieving its goals. A system's "self" can be distributed spatially while still preserving its conceptual integrity.
Similar to other knowledge, a system's "self understanding" evolves over time, and is never finished or complete. Since the system cannot fully specify the preconditions and consequences of any of its operations, the system as a whole cannot always accurately predict the effect of its future behavior, or fully explain why its past behavior happened.
The system's self-awareness is restricted to the scope of its inner-oriented operations, which cannot detect all the events that happen in the system's body. Even for the detectable events, the system usually does not have the resources to record and follow all of them. Consequently, some of the events are perceived by the system and constitute what we usually call the "conscious mind", while the other events constitute the "unconscious mind". The distinction between the two is mainly determined by the self-perception of the system, rather than by how the events are processed. According to the current theory, the conscious mind and the unconscious mind basically follow the same "logic", and their difference is usually a matter of degree. For example, conscious thinking usually focuses on a small number of high-priority tasks, while unconscious thinking covers everything else happening in the system.
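The "matter of degree" claim can be illustrated by a simplified sketch, which is not the actual NARS control mechanism but only assumes priority-proportional selection of tasks: a few high-priority tasks dominate the processing time, while the many low-priority tasks are still occasionally attended to.

```python
# Illustration (assumed, simplified): tasks selected with probability
# proportional to priority. The few high-priority tasks get most of the
# processing ("conscious"), the many low-priority ones get a little
# ("unconscious"), with no sharp boundary between the two.
import random

def select_task(tasks):
    """tasks: list of (name, priority) pairs; priority-proportional choice."""
    total = sum(p for _, p in tasks)
    r = random.uniform(0, total)
    for name, p in tasks:
        r -= p
        if r <= 0:
            return name
    return tasks[-1][0]

tasks = [("report-status", 0.9), ("plan-route", 0.8)] + \
        [(f"background-{i}", 0.01) for i in range(100)]
counts = {}
for _ in range(10_000):
    name = select_task(tasks)
    counts[name] = counts.get(name, 0) + 1
# The two high-priority tasks receive the bulk of the selections, yet every
# background task still has a nonzero chance of being attended to.
```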
When the system is facing a choice among several alternatives, it can usually perceive the alternatives and its decision-making criteria at that moment, though not in all their details. When the situation is complicated enough, the system needs time to evaluate the alternatives and to compare their respective outcomes, but before the final decision is reached, the system feels that it can choose any of them if it wants, which is the feeling of free will. However, if an observer can simulate all the relevant events in the system with enough accuracy, the system's choice appears deterministic, and there is no room left for a "free will". Therefore, whether a system has free will depends on whose viewpoint is under consideration. When the complexity of the system goes beyond the simulation capability of the observer, even the observer has to use "free will" to describe the system's choices, though this does not mean that the system's thinking process cannot be further explained.
As soon as a system can think and talk about its own thinking process, some form of self-reference becomes inevitable. Very often, self-reference is needed in self-awareness and self-control, though certain forms of self-reference do not produce anything useful, as revealed by the "Liar Paradox". In NARS, all these paradoxical statements will be given a confidence of 0, since they cannot get any evidential support from the system's experience. Therefore they make little contribution to the system's behavior, and their existence in the system will cause little trouble, as long as the system does not spend too much time thinking about them.
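This consequence can be seen from the confidence function used in NARS, c = w / (w + k), where w is the total amount of evidence for a statement and k is a constant (often 1). The following sketch simply evaluates it at w = 0, the situation of a statement with no evidential support.

```python
# Sketch: the NARS confidence function c = w / (w + k). A statement with no
# evidence (w = 0), such as a Liar-style sentence, gets confidence 0 and so
# cannot influence the system's conclusions.

def confidence(w, k=1.0):
    return w / (w + k)

print(confidence(0.0))   # 0.0 -- no evidence, no weight in reasoning
print(confidence(4.0))   # 0.8 -- an ordinary belief supported by experience
```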
According to the previous discussion, the opinion presented here is neither dualism nor reductionism. There are no separate existences of a brain and a mind, nor are there separate physical events and mental events. Instead, what we see here are two descriptions of the same underlying process. Dualism is wrong in taking the mental processes as separate from (though often parallel to) the physical processes, to the extent that one could exist without the other (for example, that there could be a zombie that behaves exactly like a normal human but has no consciousness). Reductionism is wrong in taking the physical processes as "real" and the mental processes as merely "emergent" from them.
Since the "physical language" and the "mental language" both come from (different) sensorimotor mechanisms, neither of the two describe the substance and process "as it is". Instead, each of them "see" the world through the "filter" provided by the available operations, under the influence of the current goals and beliefs. Since the operations are different, there is no accurate translation between sentences of the two languages, though some rough correspondence can be established.
According to this opinion, a zombie, as defined above, cannot exist: if a system has no self-awareness and self-control, it will fail to behave like a normal human being, since these functions are constantly needed by a human being. On the other hand, subjective experience cannot be fully expressed in physical language. Even if one day we can observe a living brain in the desired detail, the (third-person) descriptions of the brain activity still cannot be fully exchanged with the (first-person) experience of the brain's owner, even after the correspondence between the two languages is fully known.