A General Theory of Intelligence

Chapter 5. Experience and Socialization


Section 5.1. Sensorimotor mechanism

Every information system, intelligent or not, interacts with its environment. In this sense, it has a sensorimotor mechanism, though it may be completely different from the human sensorimotor mechanism. A system may have several sensory channels, each of which can recognize a certain type of signal (light, sound, ...). In each channel, the recognizable signals are determined by the system's hardware/wetware, and usually remain unchanged after the system is born/implemented. The system can use tools to extend its sensorimotor capability.

The arrival of each recognizable signal triggers an action of the system that generates an internal representation of the signal. The underlying process of such an action is specific to the concrete (physical/chemical/...) properties of the signal, though the overall cause and effect can be described abstractly as information transfer.
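As a minimal sketch of this arrangement (in Python, with class and attribute names that are merely illustrative, not part of the theory), a fixed sensory channel can be modeled as follows:

    from dataclasses import dataclass
    from typing import Any, Callable, Optional

    @dataclass
    class SensoryChannel:
        """One channel recognizes one type of signal; the recognizable
        range is fixed by the hardware/wetware when the system is built."""
        signal_type: str                     # e.g., "light" or "sound"
        recognizes: Callable[[Any], bool]    # hardware-determined test
        encode: Callable[[Any], Any]         # signal-specific process

        def perceive(self, signal: Any) -> Optional[Any]:
            """A recognizable signal triggers the action that produces an
            internal representation; anything else is simply not sensed."""
            if self.recognizes(signal):
                return self.encode(signal)   # information transfer, abstractly
            return None

    # A channel that only senses brightness values:
    vision = SensoryChannel("light",
                            recognizes=lambda s: isinstance(s, (int, float)),
                            encode=lambda s: {"brightness": s})
    vision.perceive(0.8)      # -> {'brightness': 0.8}
    vision.perceive("hello")  # -> None (outside the channel's range)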

Systems with different sensorimotor mechanisms may perceive the same environment in different ways, and therefore form different world views. On the other hand, they may still have very similar "intelligence", that is, similar ways in which experience is processed and used. Given the differences between the human body and computer hardware, we should not expect an AI system to have beliefs and concepts, and therefore behaviors, identical to those of a typical human being.

Since low-level perception (in sensorimotor activity) and high-level perception (in categorization) face essentially the same problem, and work under the same restrictions, we can expect them to follow similar principles, though the details of the processing may be very different.

An intelligent system needs to learn when an action can be executed, and what effects it will have. This learning is usually achieved through a sensorimotor feedback loop: when an action is executed, its observed effect and its expected effect are compared, so as to revise the system's beliefs about the action. Also, the sensory capacity of a system usually depends on its motor capacity, because many observations require the execution of certain actions. Therefore, sensorimotor should be treated as one mechanism, with a sensation aspect and a motion aspect.
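One pass of such a feedback loop can be sketched as follows; the belief representation, the similarity measure, and the revision rule are all illustrative assumptions, not commitments of the theory:

    def revise_action_belief(belief, observed_effect, similarity, rate=0.1):
        """One pass of the sensorimotor feedback loop: compare the observed
        effect of an executed action with its expected effect, and revise
        the system's belief about the action accordingly.
        `belief` is a dict {"expected_effect": ..., "confidence": float}."""
        match = similarity(belief["expected_effect"], observed_effect)
        # Confidence drifts toward how well the prediction held (1.0 = perfect).
        belief["confidence"] += rate * (match - belief["confidence"])
        # A badly wrong prediction replaces the expectation itself.
        if match < 0.5:
            belief["expected_effect"] = observed_effect
        return belief

    # The system believed that pushing the button turns the light on:
    belief = {"expected_effect": "light_on", "confidence": 0.9}
    same = lambda a, b: 1.0 if a == b else 0.0
    revise_action_belief(belief, observed_effect="light_off", similarity=same)
    # confidence drops toward 0, and the expectation is revised to "light_off"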


Section 5.2. Self-awareness and self-control

When talking about the "environment" of an information system, the "internal environment" should also be included. The sensory ability represents internal states and their changes as beliefs about the self, and the motor ability carries out certain state changes. As with the outside environment, the system's knowledge of, and control over, its internal structure and activity are limited and selective.

The inner-oriented sensorimotor mechanism follows the same principles as the outer-oriented one, though the two use different sensors and actuators. Consequently, the system develops different concepts and beliefs when describing internal and external events, which is where the "mind-body problem" starts. Since internal events are only observable to the system itself, their descriptions are inevitably from a first-person point of view. In contrast, external events happen in the shared environment, and so can be described from a third-person point of view. The relations between these two types of events cross the mind-body boundary in their description, though this should not be taken as a relation between "mind" and "body" as independent entities.

Since not all internal events are represented in the system's beliefs, we can distinguish voluntary processes from automatic ones: the former can be manipulated by the system's information-processing mechanism, while the latter cannot. This is also where the "Self and Other" distinction comes from: "Self" is defined by self-awareness and self-control.
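As a rough illustration of this boundary, assume (purely for the sketch) that each internal process carries a flag for whether it is represented in the system's beliefs and one for whether the system can manipulate it:

    from dataclasses import dataclass

    @dataclass
    class InternalProcess:
        name: str
        represented: bool    # appears in the system's beliefs? (self-awareness)
        manipulable: bool    # changeable by the processing mechanism? (self-control)

    def classify(p: InternalProcess) -> str:
        """Only processes the system is both aware of and able to control
        fall on the 'Self' side of the voluntary/automatic boundary."""
        return "voluntary" if p.represented and p.manipulable else "automatic"

    classify(InternalProcess("attention shift", True, True))    # 'voluntary'
    classify(InternalProcess("hormone release", False, False))  # 'automatic'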

Self-consciousness develops in advanced intelligent systems for complicated adaptive behaviors; it is not something "additional" or "optional" to those behaviors. Some AI systems will be self-conscious, but because of its intrinsic first-person nature, we cannot observe self-consciousness directly, but have to recognize it in the system's behaviors.


Section 5.3. Communication and language

Communication is a type of abstract interaction between two information systems, in which one system can serve as a sensor, processor, or actuator of the other, in a way similar to tool usage. Communication provides shared experience for the systems, and increases their capabilities via cooperation.

Communication happens in a language, which is a sign system associated with the concepts of the systems. Though language comprehension and production are supported by the sensorimotor mechanism, the conventional nature of language allows the systems to interact with each other at the conceptual level (to directly describe beliefs, goals, and actions), and to ignore the sensorimotor details. A communication language provides an approximate many-to-many mapping between signs in the language and concepts in the systems, and the mapping is established historically by evolving conventions.
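A minimal sketch of such a mapping, assuming (for illustration only) that signs and concepts can be named by strings:

    from collections import defaultdict

    class Lexicon:
        """An approximate many-to-many mapping between signs (e.g., words)
        and concepts, built up by convention rather than fixed by design."""
        def __init__(self):
            self.sign_to_concepts = defaultdict(set)
            self.concept_to_signs = defaultdict(set)

        def associate(self, sign: str, concept: str) -> None:
            """Record one sign-concept pairing observed in communication."""
            self.sign_to_concepts[sign].add(concept)
            self.concept_to_signs[concept].add(sign)

    lex = Lexicon()
    lex.associate("bank", "river-edge")
    lex.associate("bank", "financial-institution")  # one sign, several concepts
    lex.associate("riverbank", "river-edge")        # one concept, several signs

Since each system builds its own mapping from its own experience, two systems' lexicons can only be approximately aligned with each other.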

Communication is a goal-directed process between two or more information systems, though their goals for the process may not be the same. For a communication to be successful, the signals involved should correspond to similar concepts in all the systems, though "perfect mutual understanding" is usually impossible. As with the sensorimotor mechanism, the language comprehension/production ability of a system is highly language-specific, and is acquired from language-specific experience.

Language usage presumes a categorization and inference mechanism. Historically, language capability starts at pragmatics, since communication is a goal-directed activity for the participating systems. The stable conventions on word-concept relations formed in communication become semantics. Finally, syntax and grammar appear to express complicated semantic structures. Language acquisition and processing happen at these three levels simultaneously, and are carried out by the same mechanism responsible for intelligence and cognition in general.


Section 5.4. Socialization

As soon as two or more intelligent systems begin to communicate with each other, they start to have common experience, which shapes their beliefs, goals, and actions. In the long run, a system's behavior will be strongly influenced by the society it lives in, via such a socialization process. Though living in a society is not a precondition for intelligence, social experience constitutes a major part of the experience of advanced intelligent systems. If an intelligent system only gets its knowledge through its own sensorimotor mechanism, its capability will be highly restricted.

The common beliefs accepted by most members of the society at the current time provide an "objective world view". To an individual system, a large part of this common knowledge is directly accepted, and becomes the system's beliefs. Where common knowledge conflicts with personal beliefs, the result is usually a compromise.
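One way to picture such a compromise (the numeric degrees of belief and the fixed weight below are illustrative assumptions, not part of the theory):

    def merge_belief(personal: float, common: float,
                     social_weight: float = 0.7) -> float:
        """Compromise between a personal degree of belief and the common
        one, as a weighted average; a higher social_weight models a
        system that defers more to its society."""
        return social_weight * common + (1 - social_weight) * personal

    # A system personally doubts a claim (0.2) that its society accepts (0.9):
    merge_belief(personal=0.2, common=0.9)   # -> 0.69, between the two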

As a special case, language-related conventions are the common knowledge of the community of users of a given language. To communicate effectively with others, an individual must follow the common usage of the language. On the other hand, given each individual's special experience and needs, violations of the common usage are inevitable, and these are the forces behind language evolution.

Socialization not only provides knowledge to the system, but also regulates the development of the goal complexes of individuals. A system will obtain rewards or punishments during socialization, depending on the compatibility of its goals with the goals of the other systems. Knowledge of morality and ethics is also acquired in this process.
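As a sketch of this regulation, assume (for illustration) that each goal carries a priority in [0, 1] and that social feedback can be summarized as a compatibility score in [-1, 1]:

    def socialize_goal(priority: float, compatibility: float,
                       rate: float = 0.2) -> float:
        """Adjust a goal's priority after a social interaction: positive
        compatibility with others' goals acts as reward and strengthens
        the goal; negative compatibility acts as punishment and weakens it."""
        priority += rate * compatibility
        return min(1.0, max(0.0, priority))   # keep priority in [0, 1]

    socialize_goal(0.5, compatibility=+0.8)   # rewarded  -> 0.66
    socialize_goal(0.5, compatibility=-0.8)   # punished  -> 0.34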

Furthermore, socialization extends the system's available action set, by allowing individuals to participate in social cooperation.


Section 5.5. Education

Education, or training, is a special type of socialization. In this process the system being educated or trained is provided with predetermined partial experience, so as to instill desired goals, beliefs, and actions. It is a semi-compulsory form of socialization. For a society, education is an efficient way to pass certain experience on to new members. Education is necessary for a society to adapt to its environment and to keep its internal consistency, though very often various biases are spread in this process, too.

With the birth of truly intelligent computer systems, "education of AI" will become a necessary step. This is the stage where domain-specific requirements are taken into consideration, since they should not be hard-wired into the system. Unlike with the human mind, it is possible to "implant" knowledge into an AI system, though that cannot completely replace education.

The education of AI will to a large extent follow the same principles and procedures as human education. Just loading a huge amount of "facts" into a system is not the right way to educate it, because a proper memory structure should also include knowledge about the relative priorities among the beliefs, as well as the related questions and goals that make the beliefs useful to the system.
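To make the contrast concrete, a belief might be stored together with the context that makes it usable, rather than as a bare fact; the field names below are illustrative assumptions:

    from dataclasses import dataclass, field

    @dataclass
    class BeliefNode:
        """A belief stored with what makes it useful to the system: a
        relative priority, and the questions and goals it bears on."""
        content: str
        priority: float = 0.5                 # relative priority among beliefs
        related_questions: list = field(default_factory=list)
        related_goals: list = field(default_factory=list)

    memory = [
        BeliefNode("fire is hot", priority=0.9,
                   related_questions=["is this object dangerous?"],
                   related_goals=["avoid injury"]),
    ]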

The "possible behavior space" of an intelligent system is determined by three factors:

  1. its internal heritage — how the system is designed,
  2. its external heritage — how the system is educated before it is "on its own",
  3. its personal experience — what situations the system has gone through.

To make an AI system safe for human beings, the key is in its education, because its design is content-independent, and its personal experience is not fully controllable.