A General Theory of Intelligence     Chapter 5. Experience and Socialization

Section 5.2. Self-awareness and self-control

Internal environment

The "environment" of an information system includes both an external part and an internal part. Concretely speaking, some of the voluntary actions the system can take changes its own internal states to achieve certain goals. There are beliefs that links the actions, goals, and other beliefs. From a logical point of view, there is no fundamental difference between the external environment and the internal environment, in terms of how the involved goals, actions, and beliefs are represented, interpreted, and processed.

On the other hand, the contents of the above entities are surely different. The system has different actions and goals with respect to these two parts of the environment, and they usually cannot be substituted for each other. Consequently, the beliefs and concepts the system develops to describe its experience in these two parts use different vocabularies.

As explained in Section 4.3, in NARS some actions are voluntary: they are specified in Narsese and controlled by the decision-making process of the system. The current NARS design allows all kinds of inner operations to be specified and "plugged" into the system, extending it into various NARS+s.

Some of the inner-oriented operations have no direct relation to the reasoning process, and can be handled in exactly the same way as the outer-oriented operations. In a robotic NARS+, examples of such operations may include one that detects a low battery charge level and generates a "hungry" signal, and many that detect body damage and generate various "pain" signals. Of course, these signals will not be identical to what we call "hungry" and "pain", though they serve very similar functions in the system.
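
To make the idea concrete, the following Python sketch shows how such an operation might be plugged into a robotic NARS+. The interface names (register_operation, inject_event) and the Narsese-style statement are assumptions made for this illustration, not the API of any actual implementation.

    LOW_BATTERY = 0.2   # illustrative threshold for "low charge"

    def check_battery(reasoner, read_battery_level):
        # Poll the battery sensor; if the charge is low, report a "hungry"
        # event to the reasoner as an ordinary inner-oriented observation.
        level = read_battery_level()
        if level < LOW_BATTERY:
            # Named "hungry" only by analogy with the human feeling.
            reasoner.inject_event("<{SELF} --> [hungry]>. :|:")

    # Plugged into the system as one of its voluntary operations, e.g.:
    # reasoner.register_operation("^check_battery", check_battery)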

Inference monitoring

There are also inner-oriented operations that directly manipulate the reasoning process of the system. We can expect these operations to include ones that check and adjust the truth-values, desire-values, and priority-values of the items involved in the system's reasoning. These operations are not equivalent to the corresponding built-in algorithms (in the source code of the system). For example, the operations on truth-value, desire-value, and priority-value do not use exact numbers, but relative indicators, like "high" and "low".

These operations give the system a degree of self-awareness and self-control, and allow it to modify its internal processing, which would otherwise be carried out entirely by the automatic actions.
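
As an illustration of the "relative indicator" idea, the following Python sketch maps a numeric truth-, desire-, or priority-value onto coarse labels before it is reported to the system itself. The thresholds and names are assumptions made for this example, not part of the NARS specification.

    def as_indicator(value: float) -> str:
        # Map a numeric value in [0, 1] to a coarse, relative indicator,
        # which is all that the inner-oriented operations report.
        if value > 0.66:
            return "high"
        if value < 0.34:
            return "low"
        return "moderate"

    # An operation monitoring a task's priority would thus describe a value
    # of 0.85 simply as "high", rather than exposing the number itself.
    assert as_indicator(0.85) == "high"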

There are restrictions on what the system can know and can do about itself.

Even with these restrictions, such operations still allow the system to reason about its own reasoning process. They also give procedural meaning to terms like "think", "know", "want", "feel", etc.

Relying only on the automatic actions embedded in the control mechanism of NARS, what the system does is like "free association", which has all the necessary ingredients of intelligent problem-solving. For complicated problems, however, deliberation and reflection are needed, in which the system thinks about its own thinking process. For this reason, it may be necessary to consider some of the above operations as part of NARS, while leaving the others as optional, to be included in various types of NARS+.

Self, consciousness, and free will

Like all concepts in NARS, the meaning of "I", or "self", is determined by the related experience of the system. In particular, the scope of "I" is not bound to a physical body, but to what can be reached and managed by the system's operations. That is why a familiar tool often feels like a body part, while a malfunctioning organ may no longer feel like part of oneself.

The self-other distinction is learned by the system in its interaction with the environment. What really matters here is not physical proximity and connectivity, but accessibility and controllability. This is especially the case for artificial systems. Since they are not biological, what counts as "one system" cannot be recognized by whether it has grown out of the same origin, but by whether, or to what extent, it functions coherently in achieving its goals. A system's "self" can be distributed spatially, while still preserving its conceptual integrity.

Similar to other knowledge, a system's "self-understanding" evolves over time, and is never finished or completed. Since the system cannot fully specify the preconditions and consequences of any of its operations, the system as a whole cannot always accurately predict the effects of its future behavior, or fully explain why its past behavior happened.

The system's self-awareness is restricted to the scope of its inner-oriented operations, which cannot detect all the events that happen in the system's body. Even for the detectable events, the system usually does not have the resources to record and follow all of them. Consequently, some of the events are perceived by the system, and they constitute what we usually call the "conscious mind", while the other events constitute the "unconscious mind". The distinction between the two is mainly determined by the self-perception of the system, rather than by how they work. According to the current theory, the conscious mind and the unconscious mind basically follow the same "logic", and their difference is usually a matter of degree. For example, conscious thinking usually focuses on a small number of high-priority tasks, while unconscious thinking covers everything else happening in the system.

When the system is facing a choice among several alternatives, it can usually perceive the alternatives and its decision-making criteria at that moment, though not in full detail. When the situation is complicated enough, the system needs time to evaluate the alternatives and to compare their respective outcomes, but before the final decision is reached, the system feels that it can choose any of them if it wants, which is the feeling of free will. However, if an observer can simulate all the relevant events in the system with enough accuracy, the system's choice appears deterministic, and there is no room left for "free will". Therefore, whether a system has free will depends on whose viewpoint is under consideration. When the complexity of the system goes beyond the simulation capability of the observer, even the observer has to use "free will" to describe the system's choices, though this does not mean that the system's thinking process cannot be further explained.

As soon as a system can think and talk about its own thinking process, some form of self-reference becomes inevitable. Very often, self-reference is needed in self-awareness and self-control, though certain forms of self-reference do not produce anything useful, as revealed by the "Liar Paradox". In NARS, all these paradoxical statements will be given a confidence of 0, since they cannot get any evidential support from the system's experience. Therefore they make little contribution to the system's behavior. Their existence in the system will cause little trouble, as long as the system does not spend too much time thinking about them.
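
The point about confidence can be made concrete with the standard NAL truth-value definitions, where frequency is f = w+/w and confidence is c = w/(w + k), for positive evidence w+, total evidence w, and a constant k. The small Python sketch below, with illustrative names and a placeholder treatment of the no-evidence case, shows that a statement with no evidential support ends up with confidence 0.

    K = 1.0  # evidential horizon; the exact value does not matter here

    def truth_value(w_plus: float, w: float, k: float = K):
        # Map evidence counts onto a (frequency, confidence) pair.
        confidence = w / (w + k)
        frequency = w_plus / w if w > 0 else 0.5  # no evidence: frequency undefined
        return frequency, confidence

    # A paradoxical, self-referential statement can gather no evidence at all:
    print(truth_value(0.0, 0.0))   # (0.5, 0.0): zero confidence, so it hardly affects choice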

The mind-body problem

Since the outer-oriented and inner-oriented sensorimotor mechanisms consist of two different sets of operations, the system uses two languages to describe its external and internal environments, respectively. When concepts from these two languages are involved in the same statement, the mind-body problem appears: what is the relation between the events in the mind and the events in the body? According to dualism, the mind and the body are parallel substances with parallel processes, while according to reductionism, the mind is an abstraction, so the events in the mind can be fully explained by the events in the body, especially the brain. There are many other opinions on this issue, though the above two are the most influential.

According to the previous discussion, the opinion presented here is neither dualism nor reductionism. There are no separate existences of a brain and a mind, nor are there separate physical events and mental events. Instead, what we see here are two descriptions of the same underlying process. Dualism is wrong in taking the mental processes as separate from (though often parallel to) the physical processes, to the extent that one could exist without the other (for example, that there could be a zombie that behaves exactly like a normal human but has no consciousness). Reductionism is wrong in taking the physical processes as "real", with the mental processes merely "emergent" from them.

Since the "physical language" and the "mental language" both come from (different) sensorimotor mechanisms, neither of the two describe the substance and process "as it is". Instead, each of them "see" the world through the "filter" provided by the available operations, under the influence of the current goals and beliefs. Since the operations are different, there is no accurate translation between sentences of the two languages, though some rough correspondence can be established.

According to this opinion, a zombie, as defined above, cannot exist, because a system that has no self-awareness and self-control will fail to behave like a normal human being, since these functions are needed by a human being constantly. On the other hand, subjective experience cannot be fully expressed in physical language. Even if one day we can observe a living brain in the desired detail, the (third-person) descriptions of the brain activity still cannot be fully exchanged with the (first-person) experience of the brain's owner, even after the correspondence between the two languages is fully known.