For a society, education is an efficient way to pass accumulated experience to its new members. This mechanism is necessary for a social system to adapt to its environment while keeping internal consistency among its members, though very often various biases are spread by this process, too.
With the birth of truly intelligent computer systems, "education of AI" will become a necessary step for a system to acquire the desired functionality. Now it is time to summarize the developmental stages a NARS implementation will go through in its life cycle:
The system's behavior is influenced by the above factors in different ways. Roughly speaking, its "intelligence" comes from its intrinsic core, and is innate, fixed, general-purpose, and domain-independent; its "capability" comes from the other factors in the list, and is mostly acquired, growing, special-purpose, and domain-dependent. The two cannot substitute for each other. Even if a system is fully intelligent by design, it may still fail to achieve any useful capability if the education process is confusing or misleading. On the other hand, an innate defect in the intelligence of a system usually cannot be compensated for by education.
For an AI system like NARS, simply loading a large number of tasks and beliefs into its memory is not the right way to educate it, because the memory does not merely contain the tasks and beliefs that are directly expressible in the experience of the system. More accurately, we can say that the system's knowledge cannot be directly and efficiently acquired as a sequence of tasks and beliefs, for the following reasons:
This means an educator should have an education plan, as well as a good understanding of the usual processing procedure in the student system to be educated. To achieve a given objective, the following factors should be taken into consideration:
One advantage of developing human-compatible AI is that such a system can be educated with human knowledge, which already exists in various forms and with all kinds of content.
Since most human knowledge is expressed in natural language, it will be convenient for the system to learn a natural language first (as described in Section 5.3), and then be educated using materials in that language.
Another source of knowledge is the data and knowledge in various computer-processable formats, such as databases, spreadsheets, markup languages, etc. To acquire knowledge from these sources, a system like NARS can either learn directly how to convert each data item from its native format into Narsese, or use a special-purpose software tool for the conversion, perhaps with some data-mining or knowledge-discovery carried out first, so that the system is fed only the result of this preprocessing rather than the "raw data".
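As a rough illustration, here is a minimal sketch of the second approach in Python; the CSV file, its schema, and the Narsese templates are all hypothetical, meant only to show the general shape of such a format-to-Narsese translation.

    import csv

    def row_to_narsese(row, subject_col, table_name):
        """Turn one tabular record into Narsese judgment strings.

        Hypothetical mapping: the record's key becomes an instance of the
        table's category, and every other column becomes a relational
        inheritance statement, e.g. <(*, {tweety}, yellow) --> color>.
        """
        subject = row[subject_col]
        statements = [f"<{{{subject}}} --> {table_name}>."]
        for col, value in row.items():
            if col == subject_col or not value:
                continue  # skip the key column and empty fields
            statements.append(f"<(*, {{{subject}}}, {value}) --> {col}>.")
        return statements

    # Usage: translate each row of a (hypothetical) CSV table and feed the
    # resulting judgments to the reasoner as ordinary input.
    with open("birds.csv", newline="") as f:
        for row in csv.DictReader(f):
            for judgment in row_to_narsese(row, subject_col="name",
                                           table_name="bird"):
                print(judgment)  # in practice, sent to the system's input channel

A practical tool would of course need an adapter per source format and a proper Narsese serializer; the point is that the preprocessing happens outside the reasoner, which only receives ordinary Narsese input.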
The human knowledge involved in this process can be either common-sense knowledge or expert knowledge. Though these two types of knowledge often come from different sources, there is no reason to believe that they should be processed differently in an intelligent system. There are no separate mechanisms for "common-sense reasoning" and "expert reasoning", though as knowledge, expertise is usually more accurate and less ambiguous than common sense.
No matter how carefully the teaching materials are chosen and how the education is carried out, an AI system usually will not end up with knowledge exactly like that of a typical human being; at the very least, it usually does not have human biological and social experience, and simulations of them have their limits. It is unrealistic to expect an AI to behave exactly like a human, just as it is unrealistic to expect people who grew up in very different societies to fully agree with each other on everything. It is important to understand that such a difference cannot be used as a reason to consider one system "more intelligent" than another. Instead, they may be equally intelligent but have gone through different education and socialization processes, and so end up with different behaviors. As far as the current discussion is concerned, we cannot say which of the systems is "better"; it is better to just consider them "different".
This concern is understandable. Though many advances in science and technology have solved many problems for us, they have also created various new problems, and sometimes it is hard to say whether a specific theory or technique is beneficial or harmful. Given the potential impact of AI on human society, we AI researchers do have the responsibility to carefully anticipate the social consequences of our research results, and to do our best to bring out the benefits of the technology while preventing its harms.
Previously, the factors influencing the system's behavior have been listed. Among them, the core intelligence, as represented by NARS, is morally neutral; that is, the degree of intelligence of a system has nothing to do with whether the system is considered beneficial or harmful, either by a single human or by the whole human species, because the intelligence mechanism is independent of the content of the system's goals, actions, and beliefs, which are determined mainly by the system's experience.
Therefore, to control the behavior of an intelligent system means to control its experience, that is, to educate the system. We cannot design a human-friendly AI, but have to teach an AI to be human-friendly, by using carefully chosen materials to shape its goals, actions, and beliefs. Initially, we can load its memory with certain goals and beliefs, in the spirit of Asimov's Three Laws of Robotics, as well as many more detailed requirements and regulations, as sketched below.
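To make the idea concrete, here is a minimal sketch of such initial loading, assuming a hypothetical Reasoner wrapper whose input() method accepts Narsese lines; the goal statements and their desire-values are illustrative inventions, not part of any actual NARS release.

    # A Reasoner stand-in with an input() method; a real NARS implementation
    # would parse each Narsese line into a task and insert it into memory.
    class Reasoner:
        def __init__(self):
            self.memory = []

        def input(self, line: str):
            self.memory.append(line)  # placeholder for parse-and-insert

    # Illustrative initial goals, loosely in the spirit of Asimov's laws;
    # these Narsese statements and desire-values are made up for this sketch.
    INITIAL_EXPERIENCE = [
        "<(*, {SELF}, human) --> harm>! %0.0;0.9%",  # harming humans: undesired
        "<{SELF} --> [obedient]>! %1.0;0.9%",        # following instructions: desired
        "<{SELF} --> [safe]>! %1.0;0.9%",            # self-preservation: desired
    ]

    reasoner = Reasoner()
    for line in INITIAL_EXPERIENCE:
        reasoner.input(line)

Such seeded goals are only a starting point: as discussed below, derived goals and unanticipated consequences mean that loading is no substitute for ongoing education.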
The difficulty of this topic comes from the fact that for a sufficiently complicated intelligent system, it is practically impossible to fully control its experience. Or, to put it another way, if a system's experience could be fully controlled, its behavior would be fully predictable; however, such a system could not be fully intelligent. As explained in Section 4.2, the derived goals of an intelligent system are not always consistent with their parent goals. Similarly, the system cannot fully anticipate all the consequences of its actions, so even if its goal is benign, the actual consequence may still turn out to be harmful, to the surprise of the system itself.
As a result, the fundamental ethical and moral status of AI is the same as that of most other science and technology: neither beneficial in a foolproof manner, nor inevitably harmful. The situation is similar to what every parent has learned: a friendly child is usually the product of education, not bioengineering, though this "education" is not a one-time event, and one should always be prepared for unexpected events. AI researchers have to keep the ethical issues in mind at all times, and make the best selections at each design stage, without expecting to settle the issue once and for all, or to cut off the research altogether just because it may go wrong; that is not how an intelligent species deals with uncertain situations.