CIS 5603. Artificial Intelligence

Uncertain Reasoning

One of the differences between everyday reasoning and mathematical reasoning is the presence of uncertainty of various types. In the study of AI, the representation and processing of uncertainty have been an active field for decades, with many approaches influenced by theories from mathematics, logic, psychology, decision theory, game theory, economics, and other fields.

1. Reasoning with uncertainty

Everyday reasoning is "uncertain" in (at least) two senses: the truth of a belief is often a matter of degree rather than simply true or false, and a conclusion may need to be revised when new evidence arrives. The latter issue has been addressed in the discussion of non-monotonic reasoning, though it becomes more complicated when combined with the former issue.

For the purpose of AI, a three-valued logic (true, false, unknown) is usually not powerful enough to choose among competing hypotheses, since all merely possible conclusions receive the same third truth value.
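
For illustration, here is a minimal sketch (in Python) of Kleene's strong three-valued connectives, with None standing for "unknown"; the two hypotheses at the end are made up for the example. It shows why such a logic gives no basis for preferring one competing hypothesis over another.

    # A minimal sketch of Kleene's strong three-valued logic.
    # Truth values: True, False, and None (standing for "unknown").

    def k_not(a):
        return None if a is None else (not a)

    def k_and(a, b):
        if a is False or b is False:
            return False
        if a is None or b is None:
            return None
        return True

    def k_or(a, b):
        if a is True or b is True:
            return True
        if a is None or b is None:
            return None
        return False

    # Two competing hypotheses that are both merely "unknown" end up with
    # the same truth value, so the logic cannot rank them against each other.
    h1, h2 = None, None
    print(k_and(h1, k_not(h2)))   # None
    print(k_and(h2, k_not(h1)))   # None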

An intuitively appealing idea is to use "probability" as the truth value, so as to merge binary logic and probability theory. Various models of probabilistic logic have been proposed, including attempts to add probabilities to Prolog.
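
The following fragment is a sketch of this idea in plain Python, in the spirit of probabilistic extensions of Prolog (such as ProbLog) but without using any actual library; the facts, the rule, the probability values, and the independence assumption are all chosen for the illustration.

    # A toy illustration of treating probability as a truth value.
    # Facts and probabilities below are invented for this example, and
    # independence between the facts is simply assumed.

    facts = {
        "bird(tweety)": 0.9,
        "has_wings(tweety)": 0.8,
    }

    # Rule: flies(X) :- bird(X), has_wings(X).
    def prob_flies(x, facts):
        # Under the independence assumption, the probability of a
        # conjunction is the product of the probabilities of its conjuncts.
        return facts[f"bird({x})"] * facts[f"has_wings({x})"]

    print(prob_flies("tweety", facts))   # 0.72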

There are still many open theoretical issues in such models.

Alternative approaches to representing and processing uncertainty have also been proposed.

2. Bayesian networks

To directly use statistical inference in AI, a major problem is the requirement of a joint distribution over all the random variables involved. To solve this problem, Judea Pearl created Bayesian Networks, also known as Belief Networks. The basic idea is to represent the dependencies among the variables as a directed acyclic graph, so that the joint distribution can be factored into the conditional probability of each variable given its parents, which greatly reduces the number of probability values that must be specified and supports efficient inference. As a further development, structural causal models were proposed to distinguish causality from correlation, and to bring intervention and counterfactuals into consideration.
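
The following Python sketch illustrates the factorization idea on the familiar three-variable "sprinkler" network (Rain, Sprinkler, WetGrass); the probability values are illustrative numbers rather than data from any real domain, and the inference is done by brute-force enumeration rather than by an efficient network algorithm.

    # A minimal sketch of the Bayesian-network idea: the joint distribution
    # over Rain (R), Sprinkler (S), and WetGrass (W) is specified as a
    # product of local conditional probabilities following the network
    #     R -> S,  R -> W,  S -> W
    # All numbers below are invented for the illustration.

    from itertools import product

    P_R = {True: 0.2, False: 0.8}              # P(R)
    P_S_given_R = {True: 0.01, False: 0.4}     # P(S=true | R)
    P_W_given_SR = {(True, True): 0.99, (True, False): 0.9,
                    (False, True): 0.8, (False, False): 0.0}  # P(W=true | S, R)

    def joint(r, s, w):
        # P(R=r, S=s, W=w) as the product of the local factors.
        p_s = P_S_given_R[r] if s else 1 - P_S_given_R[r]
        p_w = P_W_given_SR[(s, r)] if w else 1 - P_W_given_SR[(s, r)]
        return P_R[r] * p_s * p_w

    # Inference by enumeration: P(R=true | W=true).
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    print(num / den)   # posterior probability of rain given wet grass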

Other related works have been developed along this line.

3. Decision making

In practical reasoning, "achieving a goal" can be generalized to "achieving the highest reward/utility".

Decision theory: When each state has an associated utility, and each action has a probability distribution over the states it may lead to, the best action is the one with the Maximum Expected Utility (MEU).
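
Below is a minimal Python sketch of the MEU rule; the actions, outcome states, probabilities, and utilities are all invented for the illustration.

    # A minimal sketch of the Maximum Expected Utility (MEU) rule.

    utility = {"success": 100, "partial": 40, "failure": -50}

    # For each action, a probability distribution over the states it may lead to.
    actions = {
        "cautious":   {"success": 0.3, "partial": 0.6, "failure": 0.1},
        "aggressive": {"success": 0.5, "partial": 0.2, "failure": 0.3},
    }

    def expected_utility(dist):
        return sum(p * utility[state] for state, p in dist.items())

    best = max(actions, key=lambda a: expected_utility(actions[a]))
    for a, dist in actions.items():
        print(a, expected_utility(dist))
    print("MEU action:", best)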

A Markov Decision Process (MDP) is one formalization of multi-step decision making or probabilistic planning, where the optimal policy maximizes the expected (discounted) total reward.
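
The sketch below shows value iteration, one standard way to compute an optimal policy for an MDP, in Python; the two-state model, its transitions, rewards, and discount factor are invented for the illustration.

    # A minimal sketch of value iteration for a small Markov Decision Process.
    # States, actions, transitions, rewards, and the discount factor are
    # invented for the illustration.

    states = ["s0", "s1"]
    actions = ["stay", "move"]
    gamma = 0.9   # discount factor

    # T[(s, a)] is a list of (probability, next_state, reward) triples.
    T = {
        ("s0", "stay"): [(1.0, "s0", 0.0)],
        ("s0", "move"): [(0.8, "s1", 5.0), (0.2, "s0", 0.0)],
        ("s1", "stay"): [(1.0, "s1", 1.0)],
        ("s1", "move"): [(0.7, "s0", 0.0), (0.3, "s1", 1.0)],
    }

    def q_value(s, a, V):
        # Expected immediate reward plus discounted value of the next state.
        return sum(p * (r + gamma * V[s2]) for p, s2, r in T[(s, a)])

    # Value iteration: repeatedly apply the Bellman optimality update.
    V = {s: 0.0 for s in states}
    for _ in range(1000):
        V = {s: max(q_value(s, a, V) for a in actions) for s in states}

    policy = {s: max(actions, key=lambda a: q_value(s, a, V)) for s in states}
    print(V)
    print(policy)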

These classical models are successful in many applications, though they still have problems in explaining human decision making, as well as limitations from the perspective of AI.

