NARS as an AGI

Evidence and Uncertainty

1. Induction as an example

Example: How to predict the next number if the observed sequence is 1, 2, 3, 4? Or, how to predict the color of the next raven if 100 ravens have been observed, and all of them are black?

Francis Bacon: You can find a regularity by induction.

David Hume: You cannot, unless the future is just like the past, but how can you know that? Induction is just a mental habit.

Karl Popper: There is no such thing as "induction" in science. You can only falsify a general statement, never verify it.

Rudolf Carnap: At least we can evaluate the probability of a hypothesis, under certain assumptions.

Thomas Bayes: Degrees of belief should be treated as probability (Dutch Book Arguments), and belief revision as conditional probability calculation. Induction is included as a special case.

Ray Solomonoff: Assuming the environment follows some unknown but computable probability distribution, the optimal prediction can be determined by giving simpler hypotheses (that fit the observation) higher probability.

2. Validity in reasoning

In traditional reasoning systems, the validity of inference must be justified by the (at least probabilistic) correctness of its conclusions with respect to the real world or future observations.

However, under the Assumption of Insufficient Knowledge and Resources (AIKR), such validity can no longer be achieved. On the other hand, validity still makes sense in adaptive systems, where it means that a conclusion must be justified according to the system's past experience.

Under the restriction of resources, the system cannot take the whole past experience into account when evaluating a conclusion. Instead, it can only depend on the evidence collected from the past using available resources. In this aspect, validity means the most efficient strategy in resource allocation among the tasks, for the system as a whole.

In summary, there are multiple standards of rationality or validity, each of which is applicable to a certain type of system. In a non-axiomatic system, a "valid" (or "rational", "reasonable") conclusion is one that is supported by available evidence.

3. Amount of evidence

In a non-axiomatic system, the truth-value of a statement is determined by its relation with evidence, which is (input or derived) information that has an impact on the truth-value of the statement in an inconclusive manner.

A quantitative representation is necessary for an adaptive system, since the amount of evidence matters when a selection is made among competing conclusions. The advantage of a numerical measurement of evidence is its generality, not its accuracy. Furthermore, an interpretation of the measurement is required, which usually defines the measurement in an idealized situation.

One simple case is enumerative induction, where a statement summarizes many observations.

Nicod's Criterion: for "Ravens are black" (Is it the same as "All ravens are black" or "The next raven is black"?), black ravens are positive evidence, non-black ravens are negative evidence, and non-ravens are irrelevant.

In ideal situations (ignoring fuzziness, inaccuracy, etc.), the amount of evidence can be represented by a pair of non-negative integers, which can be generalized into a pair of non-negative real numbers, w+ and w-. We use w for all available evidence, which is the sum of w+ and w-.
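The evidence counting described above can be sketched in Python. This is an illustrative example, not part of any NARS implementation; the function name and the encoding of observations as (is_raven, is_black) pairs are assumptions made here for the "Ravens are black" case under Nicod's Criterion.

```python
def count_evidence(observations):
    """Count evidence for "Ravens are black" under Nicod's Criterion.

    observations: iterable of (is_raven, is_black) pairs.
    Returns (w_plus, w_minus, w): positive evidence (black ravens),
    negative evidence (non-black ravens), and their sum.
    Non-ravens are irrelevant and contribute to neither count.
    """
    w_plus = sum(1 for is_raven, is_black in observations
                 if is_raven and is_black)
    w_minus = sum(1 for is_raven, is_black in observations
                  if is_raven and not is_black)
    return w_plus, w_minus, w_plus + w_minus

# Two black ravens, one white raven, one black non-raven (irrelevant):
obs = [(True, True), (True, True), (True, False), (False, True)]
print(count_evidence(obs))  # → (2, 1, 3)
```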

4. Truth value

When truth is evaluated according to experience, a "truth-value" measures evidential support, and indicates the system's degree of belief.

Though in principle the information is already carried by the amount of evidence, very often a relative and bounded measurement is preferred, especially for derived beliefs.

A natural indicator of truth is the frequency (proportion) of positive evidence in all evidence, that is, f = w+ / w. While frequency compares positive and negative evidence, a second measurement, confidence, can compare past and future evidence in the same manner. Here the key idea is to consider only a constant horizon of k pieces of future evidence, where k is a positive constant, so that c = w / (w + k).

The ⟨frequency, confidence⟩ pair can be used as the truth-value of a statement in a non-axiomatic system. It is fully defined on available evidence, without any assumption about future evidence. It is related to the binary truth-value and probability, but cannot be reduced to either of them.
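The two measurements can be computed directly from the evidence pair. The sketch below is illustrative; the function name is hypothetical, and k = 1 is an assumed default, not a value mandated by the text.

```python
def truth_value(w_plus, w_minus, k=1):
    """Compute the <frequency, confidence> pair from amounts of evidence.

    f = w+ / w compares positive against all evidence;
    c = w / (w + k) compares past evidence against a horizon of
    k pieces of future evidence.
    """
    w = w_plus + w_minus
    # Null evidence (w = 0) is a limit case handled only in the
    # meta-language, so a normal statement is assumed to have w > 0.
    assert w > 0, "frequency is undefined with null evidence"
    return w_plus / w, w / (w + k)

# Two positive and one negative piece of evidence:
f, c = truth_value(2, 1)  # f = 2/3, c = 3/4
```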

5. Representations of uncertainty

The frequency value will be in the interval [lower, upper] = [w+ / (w + k), (w+ + k) / (w + k)] in the near future specified by the horizon k: the lower bound is reached if all k new pieces of evidence are negative, and the upper bound if they are all positive. The width of the interval, u − l = k / (w + k), equals 1 − c.

The three representations, {w+, w}, ⟨f, c⟩, and [l, u], can be transformed into each other. They all represent the system's degree of belief in the statement, or its evidential support.
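The transformations among the three representations can be sketched as follows. This is a minimal illustration assuming the definitions f = w+ / w, c = w / (w + k), l = w+ / (w + k), and u = (w+ + k) / (w + k); function names are hypothetical.

```python
def evidence_to_interval(w_plus, w_minus, k=1):
    """{w+, w} → [l, u]: the range frequency can take after
    k more pieces of evidence."""
    w = w_plus + w_minus
    return w_plus / (w + k), (w_plus + k) / (w + k)

def truth_to_evidence(f, c, k=1):
    """⟨f, c⟩ → {w+, w-}: invert c = w / (w + k) to get
    w = k * c / (1 - c), then split w by the frequency f."""
    w = k * c / (1 - c)
    return f * w, (1 - f) * w

# Round trip starting from w+ = 2, w- = 1 (so f = 2/3, c = 3/4):
l, u = evidence_to_interval(2, 1)    # l = 0.5, u = 0.75
w_plus, w_minus = truth_to_evidence(2 / 3, 3 / 4)  # back to (2, 1)
```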

At the input/output interface, the truth-value of a statement can also be represented imprecisely. If there are N verbal labels, the [0, 1] interval can be divided into N equal-width subintervals. Or, default values can be used, so that the users can omit the numbers.
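The division of [0, 1] into N equal-width subintervals can be sketched as below. The label names are illustrative assumptions, not a vocabulary prescribed by the text.

```python
def frequency_label(f, labels=("false", "unlikely", "uncertain",
                               "likely", "true")):
    """Map a frequency in [0, 1] to one of N verbal labels by
    dividing [0, 1] into N equal-width subintervals."""
    n = len(labels)
    # min() keeps f = 1.0 inside the last subinterval.
    return labels[min(int(f * n), n - 1)]

print(frequency_label(0.95))  # → "true"
print(frequency_label(0.5))   # → "uncertain"
```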

There are reasons to use high-accuracy representations inside the system, while allowing low-accuracy representations outside the system.

In a non-axiomatic system, a normal statement always has a finite amount of evidence. However, there are two limit cases that are discussed in the meta-language only: null (zero) evidence and full (infinite, or no future) evidence.


Reading