Uncertain Reasoning
One of the differences between everyday reasoning and mathematical reasoning is uncertainty, which comes in various types. In the study of AI, the representation and processing of uncertainty has been an active field for decades, with many approaches influenced by theories from mathematics, logic, psychology, decision theory, game theory, economics, and other disciplines.
1. Reasoning with uncertainty
Everyday reasoning is "uncertain" in (at least) two senses:
- The truth value of a statement is not binary (true/false);
- The truth value of a statement may change.
The latter issue has been addressed in the discussion of non-monotonic reasoning, though it becomes more complicated when combined with the former issue.
For the purpose of AI, a three-valued logic is usually not powerful enough to choose among competing hypotheses.
An intuitively appealing idea is to use "probability" as truth-value, so as to merge binary logic and probability theory. Various models of probabilistic logic have been proposed, including extensions that add probabilities to Prolog.
There are still many theoretical issues, such as where the initial probability values come from, how to handle dependence among statements, and the computational cost of keeping all the values consistent.
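For instance, probabilistic "modus ponens" is underdetermined: knowing P(A) and P(B|A) alone does not fix P(B), because P(B|not-A) is unknown, so only an interval can be derived. A minimal sketch, with made-up numbers:

```python
# One theoretical issue with probability as truth-value: inference
# from P(A) and P(B|A) bounds P(B) but does not determine it, since
# P(B) = P(B|A)P(A) + P(B|not A)(1 - P(A)) and P(B|not A) is unknown.
# The numbers below are illustrative assumptions.

p_a = 0.9          # P(A)
p_b_given_a = 0.8  # P(B|A)

lower = p_b_given_a * p_a              # take P(B|not A) = 0
upper = p_b_given_a * p_a + (1 - p_a)  # take P(B|not A) = 1

print(round(lower, 2), round(upper, 2))  # 0.72 0.82: an interval, not a point
```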
Alternative approaches:
- Fuzzy logic differs from probabilistic logic both in the interpretation and the calculation of truth-value (the two are contrasted in the sketch after this list);
- Dempster-Shafer theory and imprecise probability theory address imprecision and ignorance by replacing a single probability value with a probability interval;
- Confidence can be used to measure evidential support in an open system where beliefs may conflict and change.
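To make the first contrast concrete: fuzzy logic typically evaluates a conjunction by the minimum of the two degrees, while probability (under an independence assumption) uses their product. A minimal sketch, with made-up predicates and degrees:

```python
# Contrasting fuzzy and probabilistic calculation of a conjunction's
# truth-value; the predicates and degrees are illustrative assumptions.

def fuzzy_and(a, b):
    # Standard (Zadeh) fuzzy conjunction: limited by the weaker degree.
    return min(a, b)

def prob_and(a, b):
    # Probabilistic conjunction, assuming the two events are independent.
    return a * b

tall, heavy = 0.8, 0.6  # degrees of "tall" and "heavy" for one person
print(fuzzy_and(tall, heavy))          # 0.6
print(round(prob_and(tall, heavy), 2)) # 0.48
```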
2. Bayesian networks
To directly use statistical inference in AI, a major obstacle is the requirement of a joint distribution over all the random variables involved. To address this problem, Judea Pearl created Bayesian networks, also known as belief networks. Basic ideas:
- Interpret probability as degree of belief given available evidence,
- Update probability with new evidence according to Bayes's rule,
- Use assumptions on independence to reduce computational cost (see the sketch after this list).
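A minimal sketch of the last two ideas, assuming made-up events and numbers: a degree of belief is revised by Bayes's rule when evidence arrives, and a chain-structured network A -> B -> C factors the joint distribution so that fewer parameters are needed.

```python
# -- Bayes's rule: revise the degree of belief in h after observing e --
# (hypothesis, evidence, and numbers are illustrative assumptions)
p_h = 0.01              # prior: P(h)
p_e_given_h = 0.90      # likelihood of e if h holds
p_e_given_not_h = 0.05  # likelihood of e if h does not hold

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability
p_h_given_e = p_e_given_h * p_h / p_e                  # posterior
print(round(p_h_given_e, 3))  # ~0.154: still low, but ~15x the prior

# -- Independence: a chain-structured network A -> B -> C --
# The full joint P(A, B, C) over three binary variables needs 7 free
# parameters; the factored form P(A) * P(B|A) * P(C|B) needs only 5,
# and the saving grows exponentially with the number of variables.
p_a = 0.3
p_b_given_a = {True: 0.8, False: 0.1}
p_c_given_b = {True: 0.7, False: 0.2}

p_abc = p_a * p_b_given_a[True] * p_c_given_b[True]  # P(A=T, B=T, C=T)
print(round(p_abc, 3))  # 0.168
```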
As a further development, structural causal models have been proposed to distinguish causation from correlation, and to bring interventions and counterfactuals into consideration.
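A minimal sketch of what intervention adds, under an assumed toy model (a hidden confounder U raises both X and Y, while X itself has no effect on Y): observing X = 1 raises the probability of Y, but setting X = 1 by intervention does not.

```python
import random

# Conditioning vs. intervening in a toy structural causal model.
# The model and its numbers are illustrative assumptions.
random.seed(0)
N = 100_000

def sample(do_x=None):
    u = random.random() < 0.5  # hidden confounder
    # X is caused by U unless it is set from outside (an intervention).
    x = do_x if do_x is not None else (u or random.random() < 0.1)
    # Y depends only on U, not on X.
    y = random.random() < (0.9 if u else 0.2)
    return x, y

obs = [y for x, y in (sample() for _ in range(N)) if x]
do = [y for x, y in (sample(do_x=True) for _ in range(N))]

print(round(sum(obs) / len(obs), 2))  # P(Y | X=1)     ~0.84, via the confounder
print(round(sum(do) / len(do), 2))    # P(Y | do(X=1)) ~0.55, X has no effect
```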
3. Decision making
In practical reasoning, "achieving a goal" can be generalized to "achieving the highest reward/utility".
Decision theory: when each state has an associated utility, and each action has a probability distribution over the states it may lead to, the best action is the one with the Maximum Expected Utility (MEU).
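A minimal sketch of the MEU rule, with made-up actions, outcome distributions, and utilities:

```python
# Choosing the action with Maximum Expected Utility; the actions,
# outcome distributions, and utilities are illustrative assumptions.

utilities = {"sunny_picnic": 10, "rainy_picnic": -5, "stay_home": 1}

actions = {
    "go_out":  {"sunny_picnic": 0.7, "rainy_picnic": 0.3},
    "stay_in": {"stay_home": 1.0},
}

def expected_utility(outcomes):
    # Sum of utility of each possible state, weighted by its probability.
    return sum(p * utilities[state] for state, p in outcomes.items())

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # go_out 5.5
```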
A Markov Decision Process (MDP) is one formalization of multi-step decision making, or probabilistic planning, where the optimal policy is the one that maximizes the expected (discounted) total reward.
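One standard way to compute such a policy is value iteration, which repeatedly applies the Bellman optimality update until the state values converge. A minimal sketch on an assumed two-state MDP:

```python
# Value iteration on a tiny MDP; the states, actions, transitions,
# rewards, and discount factor are illustrative assumptions.

gamma = 0.9  # discount factor for future rewards
states = ["s0", "s1"]
# transitions[s][a] = list of (probability, next_state, reward)
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)]},
}

V = {s: 0.0 for s in states}
for _ in range(100):  # Bellman optimality update, iterated to convergence
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outs)
                for outs in transitions[s].values())
         for s in states}

# The optimal policy is greedy with respect to the converged values.
policy = {s: max(transitions[s],
                 key=lambda a: sum(p * (r + gamma * V[s2])
                                   for p, s2, r in transitions[s][a]))
          for s in states}
print({s: round(v, 2) for s, v in V.items()}, policy)
# ~{'s0': 18.54, 's1': 20.0} {'s0': 'go', 's1': 'stay'}
```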
These classical models are successful in many applications, though they still have problems in explaining human decision making, as well as limitations from the perspective of AI.
Reading
- Poole and Mackworth: Chapters 9 & 11
- Russell and Norvig: Chapters 12 & 13 & 16
- Luger: Chapter 2, Sections 14.1-2