With the arrival of each recognizable signal, an action of the system is triggered to generate an internal representation of the signal. The underlying process of such an action is specific to the concrete (physical/chemical/...) property of the signal, though the overall cause and effect can be described abstractly as information transfer.
Systems with different sensorimotor mechanisms may perceive the same environment in different ways, and therefore form different world views. On the other hand, they may still have very similar "intelligence", that is, similar ways in which experience is processed and used. Given the difference between the human body and computer hardware, we should not expect an AI system to have identical beliefs and concepts, and therefore identical behavior, to a typical human being.
Since low-level perception (in sensorimotor) and high-level perception (in categorization) basically face the same problem and work under the same restrictions, we can expect them to follow similar principles, though the details of the processing may be very different.
An intelligent system needs to learn about when an action can be executed, and what effects it will have. This learning is usually achieved through a sensorimotor feedback loop: when an action is executed, its observed effect and its expected effect are compared, so as to revise the system's beliefs about the action. Also, the sensory capacity of the system usually depends on the motor capacity, because many observations require the execution of certain actions. Therefore, sensorimotor should be treated as one mechanism, with a sensation aspect and a motion aspect.
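The feedback loop described above can be illustrated with a minimal sketch. The names here (ActionBelief, revise, feedback_loop) and the simple frequency-counting revision rule are assumptions for illustration, not a design prescribed by the text: the system keeps a belief about each action's expected effect, and revises that belief whenever an executed action's observed effect is compared with the expectation.

```python
# A minimal sketch of a sensorimotor feedback loop (illustrative assumptions,
# not a prescribed design): beliefs about an action's effect are revised by
# comparing observed effects with the currently expected effect.

class ActionBelief:
    def __init__(self, action_name):
        self.action_name = action_name
        self.effect_counts = {}   # observed effect -> number of occurrences
        self.total = 0

    def expected_effect(self):
        """The effect currently believed to be most likely, if any."""
        if not self.effect_counts:
            return None
        return max(self.effect_counts, key=self.effect_counts.get)

    def revise(self, observed_effect):
        """Compare the observation with the expectation and update the belief."""
        expected = self.expected_effect()
        surprising = (expected is not None and observed_effect != expected)
        self.effect_counts[observed_effect] = self.effect_counts.get(observed_effect, 0) + 1
        self.total += 1
        return surprising   # a surprise could trigger further learning


def feedback_loop(belief, execute, steps):
    """Repeatedly execute the action (motion aspect) and revise the belief
    from what is observed (sensation aspect)."""
    for _ in range(steps):
        observed = execute()
        belief.revise(observed)
```

In this sketch the sensation and motion aspects are deliberately handled by one loop, reflecting the point that sensorimotor is a single mechanism rather than two separate modules.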
The inner-oriented sensorimotor mechanism follows the same principles as the outer-oriented one, though the two use different sensors and actuators. Consequently, the system develops different concepts and beliefs when describing internal and external events, which is where the "mind-body problem" starts. Since the internal events are observable only to the system itself, their descriptions are inevitably from a first-person point of view. In contrast, the external events happen in the shared environment, so they can be described from a third-person point of view. The relations between these two types of events cross the mind-body boundary in their descriptions, though this should not be taken as a relation between "mind" and "body" as independent entities.
Since not all internal events are represented in the system's beliefs, we can distinguish voluntary processes from automatic processes. The former can be manipulated by the system's information-processing mechanism, while the latter cannot. This is also where the "Self and Other" distinction comes from: "Self" is defined by self-awareness and self-control.
Self-consciousness develops in advanced intelligent systems to support complicated adaptive behaviors; it is not something "additional" or "optional" to those behaviors. Some AI systems will be self-conscious, but because of its intrinsically "first-person" nature, we cannot observe it directly, and instead have to recognize it in the system's behavior.
Communication happens in a language, which is a sign system associated with the concepts of the systems. Though language comprehension and production are supported by sensorimotor, the conventional nature of language allows the systems to interact with each other at the conceptual level (to directly describe beliefs, goals, and actions) and to ignore the details of sensorimotor. A communication language provides approximate many-to-many mappings between signs in the language and concepts in the systems, and the mapping is established historically by an evolving convention.
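The many-to-many, convention-based character of this mapping can be made concrete with a small sketch. The class and method names (Lexicon, observe_usage, comprehend, produce) are assumptions for illustration: each observed usage strengthens one sign-concept association, so the "convention" is simply the accumulated, slowly drifting weights.

```python
# A minimal sketch (assumed names) of an approximate many-to-many mapping
# between signs and concepts, built up from observed usage.

from collections import defaultdict

class Lexicon:
    def __init__(self):
        self.sign_to_concepts = defaultdict(lambda: defaultdict(float))
        self.concept_to_signs = defaultdict(lambda: defaultdict(float))

    def observe_usage(self, sign, concept, weight=1.0):
        """Record one occasion on which `sign` was used for `concept`."""
        self.sign_to_concepts[sign][concept] += weight
        self.concept_to_signs[concept][sign] += weight

    def comprehend(self, sign):
        """Return candidate concepts for a sign, strongest association first."""
        candidates = self.sign_to_concepts[sign]
        return sorted(candidates, key=candidates.get, reverse=True)

    def produce(self, concept):
        """Return candidate signs for a concept, strongest association first."""
        candidates = self.concept_to_signs[concept]
        return sorted(candidates, key=candidates.get, reverse=True)
```

Because two systems accumulate different usage histories, their lexicons will overlap but never coincide exactly, which is why the mappings are only approximate and "perfect mutual understanding" is usually impossible.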
Communication is a goal-directed process between two or more information systems, though their goals for the process may not be the same. For communication to be successful, the signals involved should correspond to similar concepts in all the systems, though "perfect mutual understanding" is usually impossible. As with sensorimotor, the language comprehension/production ability of a system is highly language-specific, and is acquired from language-specific experience.
Language usage presupposes a categorization and inference mechanism. Historically, language capability starts with pragmatics, since communication is a goal-directed activity for the participating systems. The stable conventions on word-concept relations formed in communication become semantics. Finally, syntax and grammar appear to express complicated semantic structures. Language acquisition and processing happen at these three levels simultaneously, and are carried out by the same mechanism responsible for intelligence and cognition in general.
The common beliefs accepted by most members of the society at the current time provide an "objective world view". To an individual system, a large part of this common knowledge is directly accepted and becomes the system's beliefs. Where common knowledge conflicts with personal beliefs, the result is usually a compromise.
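One way to picture such a compromise is as a confidence-weighted merge rather than a simple replacement. The following sketch is an assumption for illustration only; the function name and the particular weighting rule are not taken from the text.

```python
# A minimal sketch of a "compromise" between a personal belief and a
# conflicting common belief: each belief carries a degree and a confidence
# (both assumed positive), and the merged degree is a confidence-weighted
# average of the two.

def compromise(personal_degree, personal_conf, common_degree, common_conf):
    """Merge two conflicting beliefs about the same statement."""
    total = personal_conf + common_conf
    merged_degree = (personal_degree * personal_conf +
                     common_degree * common_conf) / total
    merged_conf = min(1.0, total)   # evidence from both sources accumulates
    return merged_degree, merged_conf
```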
As a special case, language-related conventions are the common knowledge of the community of users of a given language. To communicate effectively with others, an individual must follow the common usage of the language. On the other hand, given each individual's special experience and needs, violations of the common usage are inevitable, and these violations are the forces behind language evolution.
Socialization not only provides knowledge to the system, but also regulates the development of the goal complex of individuals. A system will obtain rewards or punishments during socialization, depending on the compatibility of its goals with the goals of the other systems. Knowledge of morality and ethics is also acquired in this process.
Furthermore, socialization extends the system's available action set by allowing individuals to participate in social cooperation.
With the birth of truly intelligent computer systems, "education of AI" will become a necessary step. This is the stage where domain-specific requirements, which should not be hard-wired into the system, are taken into consideration. Unlike with the human mind, it is possible to "implant" knowledge into an AI system, though that cannot completely replace education.
The education of AI will, to a large extent, follow the same principles and procedures as human education. Just loading a huge amount of "facts" into a system is not the right way to educate it, because a proper memory structure should also include knowledge about the relative priority among the beliefs, as well as the related questions and goals that make the beliefs useful to the system.
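The contrast can be sketched in code. The names below (BeliefEntry, the priority field, the example statements) are assumptions for illustration: the point is only that a structured memory entry records more than the "fact" itself.

```python
# A minimal sketch (assumed names) contrasting a bare list of "facts" with a
# memory entry that also records the belief's relative priority and the
# questions and goals that make it useful to the system.

from dataclasses import dataclass, field

@dataclass
class BeliefEntry:
    statement: str                      # the content of the belief
    priority: float = 0.5               # relative priority among beliefs
    related_questions: list = field(default_factory=list)
    related_goals: list = field(default_factory=list)

# Merely "loading facts":
flat_memory = ["water boils at 100 C", "Paris is in France"]

# A structured entry that an education process would build up over time:
structured_entry = BeliefEntry(
    statement="water boils at 100 C",
    priority=0.7,
    related_questions=["At what temperature does water boil?"],
    related_goals=["prepare safe drinking water"],
)
```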
The "possible behavior space" of an intelligent system is determined by three factors: