Reasoning with Uncertainty
Though there are various types of uncertainty in various aspects of a reasoning system, research on "reasoning with uncertainty" (or "reasoning under uncertainty") in AI has focused on the uncertainty of truth values, that is, on allowing and processing truth values other than "true" and "false".
Generally speaking, to develop a system that reasons with uncertainty means to provide the following:
Nonmonotonic logics are used to formalize plausible reasoning, such as the following inference step:
Birds typically fly.
Tweety is a bird.
--------------------------
Tweety (presumably) flies.

Such reasoning is characteristic of commonsense reasoning, where default rules are applied when case-specific information is not available.
The conclusion of a nonmonotonic argument may turn out to be wrong. For example, if Tweety is a penguin, it is incorrect to conclude that Tweety flies. Nonmonotonic reasoning often requires jumping to a conclusion and later retracting it when further information becomes available.
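The Tweety example can be sketched as a default rule that is applied only when no exception is known; the predicate names below are illustrative, not part of any particular nonmonotonic logic.

```python
# A minimal sketch of default reasoning with retraction (hypothetical
# fact names; not a full nonmonotonic logic).

def conclusions(facts):
    """Apply the default "birds typically fly" unless an exception
    (being a penguin) is known among the facts."""
    derived = set(facts)
    if "bird(tweety)" in derived and "penguin(tweety)" not in derived:
        derived.add("flies(tweety)")   # default conclusion
    return derived

# With only "Tweety is a bird", we jump to the conclusion that it flies.
assert "flies(tweety)" in conclusions({"bird(tweety)"})

# Learning that Tweety is a penguin retracts the conclusion: the set of
# conclusions shrinks as facts grow, which is what "nonmonotonic" means.
assert "flies(tweety)" not in conclusions({"bird(tweety)", "penguin(tweety)"})
```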
All systems of nonmonotonic reasoning are concerned with the issue of consistency. Inconsistency is resolved by removing the relevant conclusion(s) derived previously by default rules. Simply speaking, the truth value of propositions in a nonmonotonic logic can be classified into the following types:
A related issue is belief revision. Revising a knowledge base often follows the principle of minimal change: one conserves as much information as possible.
One approach to this problem is the truth maintenance system (TMS), in which a "justification" is stored for each proposition, so that when some propositions are rejected, others that depend on them may need to be removed, too.
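The justification-tracking idea can be sketched as follows: each derived belief records the beliefs that justify it, and rejecting a belief transitively removes everything whose justification no longer holds. The belief names and the flat set representation are illustrative simplifications.

```python
# A minimal justification-based TMS sketch (illustrative names;
# real TMS implementations track multiple justifications per belief).

def retract(beliefs, justifications, rejected):
    """Remove `rejected`, then repeatedly remove any belief whose
    stored justification is no longer fully believed."""
    beliefs = set(beliefs) - {rejected}
    changed = True
    while changed:
        changed = False
        for belief, just in justifications.items():
            if belief in beliefs and not just <= beliefs:
                beliefs.discard(belief)
                changed = True
    return beliefs

beliefs = {"bird(tweety)", "flies(tweety)", "can_migrate(tweety)"}
justifications = {
    "flies(tweety)": {"bird(tweety)"},        # derived from the premise
    "can_migrate(tweety)": {"flies(tweety)"}, # derived from the derivation
}

# Rejecting the premise removes both conclusions that depended on it.
assert retract(beliefs, justifications, "bird(tweety)") == set()
```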
Major problems in these approaches:
Justification: though no conclusion is absolutely true, the one with the highest probability is preferred. Under certain assumptions, probability theory gives the optimum solutions.
To extend the basic Boolean connectives to probability functions:
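One standard way to do this extension is to define probabilities over a joint distribution of possible worlds, under which negation becomes 1 - P(A) and disjunction follows inclusion-exclusion. The sketch below uses an illustrative two-variable distribution.

```python
# A sketch of Boolean connectives extended to probabilities, using a
# small joint distribution over possible worlds (values are illustrative).

worlds = {  # (A, B) -> probability; entries sum to 1
    (True, True): 0.3, (True, False): 0.2,
    (False, True): 0.4, (False, False): 0.1,
}

def p(event):
    """Probability of an event = total mass of worlds where it holds."""
    return sum(pr for w, pr in worlds.items() if event(w))

p_a       = p(lambda w: w[0])
p_b       = p(lambda w: w[1])
p_not_a   = p(lambda w: not w[0])
p_a_and_b = p(lambda w: w[0] and w[1])
p_a_or_b  = p(lambda w: w[0] or w[1])

# Negation: P(not A) = 1 - P(A)
assert abs(p_not_a - (1 - p_a)) < 1e-9
# Inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B)
assert abs(p_a_or_b - (p_a + p_b - p_a_and_b)) < 1e-9
```

Note that, unlike the Boolean case, conjunction is not determined by P(A) and P(B) alone: it depends on the joint distribution, which is one reason probabilistic reasoning is computationally harder than Boolean reasoning.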
Bayesian Networks are directed acyclic graphs in which the nodes represent variables of interest and the links represent informational or causal dependencies among the variables. The strength of a dependency is represented by conditional probabilities. Compared to other approaches to probabilistic reasoning, a Bayesian network is more efficient, though its actual computational cost is still high for complicated problems.
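As a sketch, consider a tiny network with two independent causes (Rain and Sprinkler, both parents of WetGrass) and inference by enumerating the joint distribution; all of the numbers are illustrative, and real systems use more efficient algorithms than enumeration.

```python
# A tiny Bayesian network: Rain -> WetGrass <- Sprinkler.
# Conditional probability tables (illustrative values):
P_rain      = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet       = {  # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(r, s, w):
    """Joint probability, factored along the network structure."""
    pw = P_wet[(r, s)] if w else 1 - P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[s] * pw

def p_rain_given_wet():
    """P(Rain | WetGrass) by enumerating the hidden variable."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
    return num / den

# Observing wet grass raises the probability of rain above its prior.
assert p_rain_given_wet() > P_rain[True]
```

Enumeration like this is exponential in the number of variables; the efficiency of Bayesian networks comes from exploiting the factored structure, but exact inference remains expensive in densely connected networks.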
Challenges to probabilistic approaches:
Examples of fuzzy concepts: "young", "furniture", "most", "cloudy", and so on.
According to fuzzy logic, whether an instance belongs to a concept is usually not a matter of "yes/no", but a matter of degree. Fuzzy logic uses a degree of membership, which is a real number in [0, 1].
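A degree of membership is typically defined by a membership function; the sketch below uses an illustrative piecewise-linear function for "young" (the breakpoints 25 and 45 are assumptions, not standard values), together with the common min/max/complement connectives.

```python
# A sketch of fuzzy membership: "young" as a degree in [0, 1] that
# decreases with age (breakpoints 25 and 45 are illustrative).

def young(age):
    if age <= 25:
        return 1.0
    if age >= 45:
        return 0.0
    return (45 - age) / 20  # linear transition between 25 and 45

# Common fuzzy connectives: AND = min, OR = max, NOT = 1 - x.
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1 - a

assert young(20) == 1.0
assert young(50) == 0.0
assert 0 < young(35) < 1                      # membership is a matter of degree
assert f_and(young(35), f_not(young(35))) > 0 # "A and not A" need not be 0
```

The last assertion illustrates a key departure from Boolean logic: a 35-year-old is both "young" and "not young" to degree 0.5.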
A major difference between this number and probability is that the uncertainty in fuzzy concepts usually does not decrease as new information arrives. Compare the following two cases:
Challenges to fuzzy approaches:
The basic idea is to see the truth value of a statement as measuring the evidential support the statement gets from the system's experience. Such a truth value consists of two factors: frequency (the proportion of positive evidence among available evidence) and confidence (the proportion of currently available evidence among all evidence available in the near future).
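The two factors described above can be sketched numerically. The formulas below (frequency f = w+/w, confidence c = w/(w + k), with k a constant "evidential horizon") are one common formalization of this idea and are an assumption here, not quoted from this text.

```python
# A sketch of a two-factor, evidence-based truth value (the formulas
# are an assumed formalization, with evidential horizon k = 1).

def truth_value(w_plus, w, k=1):
    """w_plus = positive evidence, w = total evidence."""
    frequency  = w_plus / w       # proportion of positive evidence
    confidence = w / (w + k)      # current evidence vs. near-future evidence
    return frequency, confidence

f1, c1 = truth_value(3, 4)   # 3 positive cases out of 4
f2, c2 = truth_value(6, 8)   # same proportion, twice the evidence

assert f1 == f2 == 0.75      # frequency unchanged
assert c2 > c1               # more evidence -> higher confidence
```

Separating frequency from confidence lets the system distinguish "3 out of 4" from "600 out of 800": the ratio is the same, but the second conclusion rests on far more evidence and is revised less by a single new observation.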
This approach attempts to uniformly represent various types of uncertainty.