A General Theory of Intelligence

Chapter 3. Inference System


Section 3.1. Formalization of an information system

Formalization brings accuracy to description. To formalize an information system means to use symbols (including numbers) to represent its goals, actions, and beliefs. There are three major traditions for doing so. Though in principle these frameworks have equivalent expressive and processing power, for a concrete problem they may differ greatly in ease and naturalness. For (general-purpose) intelligent systems, the inferential framework is preferred, because of its domain independence, step justifiability, and process flexibility.

Section 3.2.  Types of inference systems

According to their assumptions about knowledge and resources, three types of inference systems can be distinguished. Pure-axiomatic systems are studied in mathematics, and they are not directly related to AI. Most of the existing AI work in the inference framework belongs to the category of semi-axiomatic systems, which partially extend or revise mathematical logic while keeping the other parts. What AI really needs is non-axiomatic systems, which do not assume the sufficiency of knowledge and resources in any aspect of the system.

NARS (Non-Axiomatic Reasoning System) is a concrete example of a non-axiomatic system. It will serve as the working model in the following description of intelligent systems.


Section 3.3. NARS: language

NARS uses a formal language Narsese to represent goals, actions, and beliefs.

The basic unit of the language is the term, which can be intuitively thought of as the name or label of a concept in the system. A term can either be a simple identifier or a compound consisting of component terms.

Two terms connected by an inheritance relation (or one of its variants) form a statement, which indicates that one term can be used as the other, as a specialization (extension) or generalization (intension), to a certain extent. This extent, the truth-value of the statement, is indicated by a frequency value (the proportion of positive evidence supporting the statement among all current evidence) and a confidence value (the proportion of current evidence among the evidence available in the near future). Consequently, the system's beliefs are represented as statements with truth-values, and they summarize the system's experience in terms of the substitutability among terms (and concepts).
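As a concrete illustration, the two components of a truth-value can be computed from evidence counts. The sketch below follows the standard NARS definitions; the "evidential horizon" constant k = 1 is an assumed default:

```python
def truth_value(w_plus: float, w: float, k: float = 1.0) -> tuple[float, float]:
    """Compute (frequency, confidence) from evidence counts.

    w_plus: amount of positive evidence
    w:      total amount of evidence (positive + negative)
    k:      evidential horizon constant (assumed default k = 1)
    """
    frequency = w_plus / w       # proportion of positive evidence among all evidence
    confidence = w / (w + k)     # current evidence vs. evidence in the near future
    return frequency, confidence

# 3 pieces of positive evidence out of 4 observed pieces:
f, c = truth_value(3, 4)         # f = 0.75, c = 0.8
```

Note how confidence depends only on the amount of evidence, not on its direction: more evidence of any kind makes the truth-value more stable.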

The meaning of a term is determined by its extension and intension, which are the collection of the inheritance relations between this term and other terms, obtained from the experience of the system. NARS includes three variants of the inheritance relation: similarity (symmetric inheritance), implication (derivability), and equivalence (symmetric implication). They also contribute to the meaning of the terms involved.

To represent complicated experience, Narsese uses compound terms for common ways to combine terms into patterns or structures. The meaning of a compound term is partially determined by its logical relations with its components, and partially by the system's experience with the compound term as a whole.

An event is a special type of statement whose truth-value is time-dependent. An operation is a special type of event that can occur by the system's decision. A goal is a special type of event that the system attempts to realize by carrying out certain operations. Besides goals to be achieved, NARS can also accept tasks that are knowledge to be absorbed and questions to be answered.


Section 3.4.  NARS: rules

The basic function of the inference rules in NARS is to derive new beliefs from current beliefs. Since NARS is a term logic, the premises in each inference step must share at least one common term. The position of the shared term and the combination of premise types decide the type of the inference. Each inference rule has an associated truth-value function that calculates the truth-value of the conclusion, depending solely on the evidence provided by the premises.
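As one example of such a function, the following sketch gives a published formulation of the deduction truth function (taken as an illustration; the exact functions vary across versions of the system):

```python
def deduction(f1: float, c1: float, f2: float, c2: float) -> tuple[float, float]:
    """Deduction truth function (one published NAL formulation, shown
    here as an illustrative sketch): from premises M->P <f1, c1> and
    S->M <f2, c2>, which share the term M, derive S->P <f, c>."""
    f = f1 * f2                  # both inheritance links must hold
    c = f1 * f2 * c1 * c2        # conclusion is never more confident than premises
    return f, c

# Two fairly strong premises still yield a weaker conclusion:
f, c = deduction(0.9, 0.9, 0.9, 0.9)   # f = 0.81, c = 0.6561
```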

The truth-value in NARS is intuitively related to probability (as studied in probability theory) and degree of membership (as studied in fuzzy logic), though not identical to either of them. The truth-value calculation in the system can be seen as an extended Boolean algebra, where the value range of variables is the continuous interval [0, 1], not the binary set {0, 1}.
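The extended Boolean operators mentioned above can be sketched directly; on the end points 0 and 1 they reduce to the ordinary Boolean operations:

```python
def w_and(*xs: float) -> float:
    """Extended Boolean AND: the product of the operands, over [0, 1]."""
    out = 1.0
    for x in xs:
        out *= x
    return out

def w_or(*xs: float) -> float:
    """Extended Boolean OR: one minus the product of the complements."""
    out = 1.0
    for x in xs:
        out *= 1.0 - x
    return 1.0 - out

def w_not(x: float) -> float:
    """Extended Boolean NOT: the complement."""
    return 1.0 - x

# On {0, 1} these agree with ordinary Boolean algebra:
assert w_and(1.0, 0.0) == 0.0 and w_or(1.0, 0.0) == 1.0
```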

NARS uniformly formalizes multiple types of inference, including revision, choice, deduction, abduction, induction, exemplification, comparison, analogy, resemblance, conjunction, disjunction, negation, etc. Some of the rules (like deduction) can produce highly confident conclusions, while others (like induction) can only produce low-confidence conclusions (hypotheses). When evidence collected from different sources is pooled together by the revision rule, the confidence of the conclusion increases.
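The effect of the revision rule can be sketched by converting each truth-value back into evidence counts, adding the counts, and converting the sum back into a truth-value (again assuming the evidential horizon constant k = 1):

```python
K = 1.0  # evidential horizon constant (assumed default)

def to_evidence(f: float, c: float) -> tuple[float, float]:
    """Recover (positive evidence, total evidence) from a truth-value."""
    w = K * c / (1.0 - c)
    return f * w, w

def revision(f1: float, c1: float, f2: float, c2: float) -> tuple[float, float]:
    """Pool the evidence behind two truth-values for the same statement."""
    p1, w1 = to_evidence(f1, c1)
    p2, w2 = to_evidence(f2, c2)
    w_plus, w = p1 + p2, w1 + w2
    return w_plus / w, w / (w + K)

f, c = revision(0.8, 0.5, 0.8, 0.5)
# both premises have c = 0.5; the pooled conclusion has c = 2/3 > 0.5
```

The conclusion's confidence always exceeds that of either premise, which matches the qualitative claim above: accumulated evidence makes a belief harder to revise.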

Besides deriving relations among existing terms, the NARS inference rules also derive new terms to summarize complicated experience. The inference rules can also be used backward, to derive a goal (or question) from a belief and a goal (or question), under the condition that achieving the derived goal (or answering the derived question) can be combined with the belief to achieve the original goal (or answer the original question).

If an event is judged to imply the achievement of a goal, then the desirability of the event is increased, and the system also evaluates its plausibility, that is, how likely the event is to be achieved if pursued as a goal. When an event is both sufficiently desirable and sufficiently plausible, the system makes the decision to turn the event into a goal to be actively pursued.
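A minimal sketch of this decision step, using the NAL expectation function e = c(f - 1/2) + 1/2 to reduce a truth- or desire-value to a single number; the threshold value here is an illustrative assumption, not a prescribed constant:

```python
def expectation(f: float, c: float) -> float:
    """Expectation of a truth- or desire-value: e = c * (f - 0.5) + 0.5.
    Low confidence pulls the value toward the neutral point 0.5."""
    return c * (f - 0.5) + 0.5

def decide(desire_f: float, desire_c: float,
           plaus_f: float, plaus_c: float,
           threshold: float = 0.6) -> bool:
    """Turn an event into an actively pursued goal only when it is both
    sufficiently desirable and sufficiently plausible.
    (The threshold value is an illustrative assumption.)"""
    return (expectation(desire_f, desire_c) > threshold and
            expectation(plaus_f, plaus_c) > threshold)

decide(0.9, 0.9, 0.8, 0.7)   # desirable and plausible: the event becomes a goal
```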


Section 3.5.  NARS: memory and control

Since NARS is a term logic, every inference step requires the premises to share a common term. Therefore, all inference is "local" and happens within a concept, where the tasks and beliefs named by the shared term are linked.

The system's memory consists of a collection of concepts, as well as a buffer for new tasks that come from the environment or from previous reasoning activity. The system repeatedly executes a basic working cycle, in which a concept is selected probabilistically, and then a task and a belief are selected within that concept, also probabilistically. The task and the belief are used as premises to produce derived tasks, according to the applicable rules.

Since the system works with insufficient resources, it cannot afford the time to let a task interact with every belief in the concept, nor can it keep all the derived beliefs. Instead of treating all tasks and beliefs as equal, the system maintains priority distributions among concepts, as well as among the tasks and beliefs within a concept, to indicate the amount of resources the system plans to spend on them. After each inference step, the priority values of the involved items (concept, task, and belief) are adjusted according to the immediate feedback collected in that step. In the long run, the system attempts to give more resources to the more relevant and useful items.
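The control loop described above can be sketched as priority-biased probabilistic selection. The data structures below are hypothetical minimal stand-ins for illustration, not the actual NARS implementation:

```python
import random

def pick(items):
    """Probabilistically select one (priority, item) pair, biased by priority:
    an item's chance of selection is proportional to its priority value."""
    total = sum(p for p, _ in items)
    r = random.uniform(0.0, total)
    for priority, item in items:
        r -= priority
        if r <= 0.0:
            return priority, item
    return items[-1]  # guard against floating-point rounding

# Hypothetical memory: each concept holds prioritized tasks and beliefs.
memory = {
    "bird": {"priority": 0.8,
             "tasks":   [(0.9, "bird->animal?")],
             "beliefs": [(0.7, "bird->animal <0.9, 0.8>")]},
}

def working_cycle(memory):
    """One basic cycle: select a concept, then a task and a belief in it;
    the pair serves as premises for the applicable inference rules."""
    concepts = [(c["priority"], name) for name, c in memory.items()]
    _, name = pick(concepts)
    concept = memory[name]
    _, task = pick(concept["tasks"])
    _, belief = pick(concept["beliefs"])
    return name, task, belief
```

In the real system the selected items' priorities would then be adjusted by the feedback from the inference step, so that relevant and useful items gradually attract more processing time.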

The meaning of a concept is its experienced relations with other concepts. With the constant addition of new and derived beliefs, the deletion of useless beliefs, and the changing of the priority values of beliefs, the meaning of a concept evolves over time; the process is not arbitrary, but adaptive to the system's experience. Furthermore, each time a concept is used in processing a task, only part of its meaning is involved.