3203. Introduction to Artificial Intelligence

Automated Reasoning

Since reasoning is an important cognitive function, the study of "automated reasoning", in particular "theorem proving in predicate logic", has been a topic of AI research since the very beginning of the field.

### 1. Resolution and refutation

Since it is hard to directly implement the inference rules of predicate logic, other methods have been developed. A well-known one is resolution, which uses a single rule to carry out all types of inference.

The resolution rule for the propositional calculus can be stated as follows: from (P ∨ Q) and (¬P ∨ R), we can derive (Q ∨ R). Here P is an atomic proposition, while Q and R can be any propositions (atomic or not), and either may be absent.

There are several ways to see the soundness of this rule:

• With a truth table, it can be proven that ((P ∨ Q) ∧ (¬P ∨ R)) → (Q ∨ R) is a tautology;
• The two premises can be equivalently rewritten as (¬Q → P) and (P → R), and from them we get (¬Q → R), which is (Q ∨ R);
• Since exactly one of P and ¬P is true, Q and R cannot both be false; otherwise one of the premises would be false.
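The truth-table argument above can be checked exhaustively. The following sketch (function name is illustrative) enumerates all eight assignments to P, Q, and R and confirms that the implication is a tautology:

```python
from itertools import product

def resolution_is_sound():
    # Check ((P v Q) and (~P v R)) -> (Q v R) under every assignment.
    for p, q, r in product([False, True], repeat=3):
        premises = (p or q) and ((not p) or r)
        conclusion = q or r
        if premises and not conclusion:
            return False  # found a countermodel
    return True

print(resolution_is_sound())  # True: no countermodel exists
```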
For this rule to be applied to an arbitrary premise set D, we must convert the well-formed formulas (wffs) into conjunctions of clauses, where a clause is a disjunction of atomic propositions or their negations. Any wff in propositional calculus can be converted to an equivalent conjunction of clauses by the following procedure:
1. Eliminate implication and equivalence signs by using their equivalent forms.
2. Reduce the scope of negation sign to atomic propositions with De Morgan's Law.
3. Convert to conjunctive normal form by using the associative and distributive laws.
For example, assume the wff to be converted is ¬(P → Q) ∨ (R → P), then it becomes, step by step
1. ¬(¬P ∨ Q) ∨ (¬R ∨ P)
2. (P ∧ ¬Q) ∨ (¬R ∨ P)
3. (P ∨ ¬R) ∧ (¬Q ∨ ¬R ∨ P)
A conjunction of clauses is usually expressed as a set of clauses, with the conjunction between them implied. Thus the above result is represented as {P ∨ ¬R, ¬Q ∨ ¬R ∨ P}.
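As an illustration of this representation, the sketch below encodes each clause as a set of literals (a '~' prefix marks negation; all names are illustrative) and verifies by truth table that the clause set above is equivalent to the original wff ¬(P → Q) ∨ (R → P):

```python
from itertools import product

# The clause set {P v ~R, ~Q v ~R v P} as sets of literals.
clauses = [frozenset({'P', '~R'}), frozenset({'~Q', '~R', 'P'})]

def lit_value(lit, v):
    # A literal is a name, or a name prefixed with '~' for negation.
    return not v[lit[1:]] if lit.startswith('~') else v[lit]

def clause_set_value(cs, v):
    # A clause set is a conjunction of disjunctions.
    return all(any(lit_value(l, v) for l in c) for c in cs)

def original(v):
    P, Q, R = v['P'], v['Q'], v['R']
    # ~(P -> Q) v (R -> P), with X -> Y rewritten as ~X v Y.
    return (not ((not P) or Q)) or ((not R) or P)

# The clause form is logically equivalent to the original wff:
for bits in product([False, True], repeat=3):
    v = dict(zip('PQR', bits))
    assert clause_set_value(clauses, v) == original(v)
print("equivalent")
```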

Resolution by itself is not complete, that is, there are certain valid conclusions that cannot be derived by this rule alone. For example, (P ∨ Q) cannot be derived from P by resolution, even though the latter entails the former. However, it has been proven that resolution refutation is complete. The basic idea of refutation is to show that the negation of the conclusion is inconsistent with the premises. That is, (D ⊨ w) holds if and only if (D ∧ ¬w) is inconsistent.

In general, a resolution refutation for proving an arbitrary wff, w, from a set of wffs, D, proceeds as follows:

1. Convert the wffs in D into clause form, that is, a (conjunctive) set of clauses.
2. Convert the negation of w, the wff to be proved, into clause form, too.
3. Merge the above two sets of clauses into a single clause set, S.
4. Iteratively apply resolution to the clauses in S and add the results to S, either until no new clause can be generated (then w cannot be proven), or until an "empty clause" is generated (then w is proven).
The conclusion is justified because an "empty clause" can only be generated by the resolution rule when a pair of contradictory unit clauses, P and ¬P, are used as premises.

A similar procedure can be used to prove a theorem w, by skipping the first step — a theorem is "unconditionally true".

The resolution refutation procedure described above is decidable in propositional calculus: since only finitely many distinct clauses can be built from the atomic propositions in S, the iteration always terminates with one of the two outcomes.

An example: to derive A from {B, B → C, (B∧C) → A}.

1. The clause set is {B, ¬B∨C, ¬B∨¬C∨A, ¬A}.
2. B and ¬B∨C derives C.
3. B and ¬B∨¬C∨A derives ¬C∨A.
4. C and ¬C∨A derives A.
5. A and ¬A derives empty clause.
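The refutation procedure and the example above can be sketched in a few lines of Python. Clauses are sets of literals, and the loop saturates the clause set until an empty clause appears or nothing new can be derived; all names are illustrative, not from any particular library:

```python
from itertools import combinations

def negate(lit):
    # '~P' <-> 'P'
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """All resolvents of two clauses (sets of literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def refutes(clauses):
    """True iff the clause set is inconsistent (empty clause derivable)."""
    s = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(s, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True   # empty clause: contradiction found
                new.add(r)
        if new <= s:
            return False          # saturated: no contradiction
        s |= new

# Premises {B, B -> C, (B & C) -> A} plus the negated conclusion ~A:
clauses = [frozenset({'B'}), frozenset({'~B', 'C'}),
           frozenset({'~B', '~C', 'A'}), frozenset({'~A'})]
print(refutes(clauses))  # True: A follows from the premises
```

Note that the saturation loop always terminates in the propositional case, which is exactly the decidability claim made above.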

### 2. Resolution in predicate logic

To extend the above resolution refutation procedure into predicate calculus, variables and quantifiers need to be handled properly.

When wffs are converted into clauses in predicate calculus, variables and quantifiers must also be processed, so the conversion is more complicated. It consists of the following steps:

1. Eliminate implication and equivalence signs — same as in propositional calculus.
2. Reduce scopes of negation sign — same as in propositional calculus, except that ¬(∀x)P(x) becomes (∃x)¬P(x), and ¬(∃x)P(x) becomes (∀x)¬P(x).
3. Standardize variables apart, so that no two quantifiers share a variable name.
4. Eliminate existential quantifiers (Skolemization). If an existential quantifier is within the scopes of some universal quantifiers, the existential variable is replaced by a new function (a Skolem function) of those universal variables. Otherwise (i.e., it is not in the scope of any universal quantifier), it is replaced by a new constant.
5. Drop all universal quantifiers, so that all remaining variables are universal by default.
6. Convert to conjunctive normal form — same as in propositional calculus.
7. Turn the conjuncts into clauses in a set, and rename variables so that no two clauses share a variable name.
In predicate calculus, for the resolution rule to be applied, the two clauses do not have to contain one proposition and its exact negation, as in (P ∨ Q) and (¬P ∨ R). Instead, they can be (P1 ∨ Q) and (¬P2 ∨ R), where P1 and P2 can be unified, that is, made identical by renaming variables and substituting variables with constants or terms.

For example, P(x, y, x) and P(A, u, v) can be unified by substitution {A/x, y/u, A/v} (that is, using A to replace x and v, and using y to replace u) — both [P(x, y, x)]{A/x, y/u, A/v} and [P(A, u, v)]{A/x, y/u, A/v} are P(A, y, A). On the other hand, P(x, y, x) and P(A, u, B) cannot be unified.

For given clauses (P1 ∨ Q) and (¬P2 ∨ R), if s is a substitution such that [P1]s is identical to [P2]s, then the resolution rule can be applied to get [(Q ∨ R)]s.
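A minimal unification sketch illustrates the substitutions used above (it omits the occurs check that a full implementation needs). Variables are lowercase strings, constants are uppercase strings, and an atomic proposition like P(x, y, x) is encoded as a tuple; all names and conventions here are illustrative:

```python
def is_var(t):
    return isinstance(t, str) and t[0].islower()

def walk(t, s):
    # Follow variable bindings in substitution s to their current value.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(t1, t2, s=None):
    """Return a substitution unifying t1 and t2, or None on failure."""
    s = dict(s or {})
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        s[t1] = t2
        return s
    if is_var(t2):
        s[t2] = t1
        return s
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# P(x, y, x) and P(A, u, v) unify; the binding y -> u is equivalent
# to the text's {y/u} up to variable renaming.
print(unify(('P', 'x', 'y', 'x'), ('P', 'A', 'u', 'v')))
# P(x, y, x) and P(A, u, B) do not: x cannot be both A and B.
print(unify(('P', 'x', 'y', 'x'), ('P', 'A', 'u', 'B')))  # None
```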

Resolution refutation in predicate calculus is semi-decidable: if the clause set is inconsistent, the rule will eventually derive an empty clause from it, but if the set is consistent, the inference process may never stop. This is because of the existence of variables and function symbols. Example: from {P(x) ∨ ¬P(f(x)), ¬P(a)}, resolution produces ¬P(f(a)), then ¬P(f(f(a))), and so on forever, yet the set is consistent (e.g., interpret P as false everywhere).
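A tiny sketch of why this example never terminates: each resolution step binds x to the current term and yields a unit clause one f-application deeper, so the resolvents grow without bound (term encoding and names are illustrative):

```python
# Terms: the constant a is the string 'a'; f(t) is the tuple ('f', t).
def next_resolvent(t):
    # Resolving ~P(t) with (P(x) v ~P(f(x))) under substitution {t/x}
    # yields the new unit clause ~P(f(t)).
    return ('f', t)

t = 'a'
trace = []
for _ in range(3):
    t = next_resolvent(t)
    trace.append(t)
print(trace)  # [('f', 'a'), ('f', ('f', 'a')), ('f', ('f', ('f', 'a')))]
```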

### 3. Horn clauses and Prolog

A Horn clause is a clause with at most one positive literal.

A Prolog fact is a Horn clause with one positive literal and no negative literals; a Prolog rule is a Horn clause with one positive literal (the head) and one or more negative literals (the body); a Prolog goal is a Horn clause with no positive literal and one or more negative literals.

Inference in Prolog can be seen as a special type of resolution.
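For instance, the example from Section 1 can be written as Horn clauses and solved by forward chaining. This sketch only illustrates Horn-clause inference; Prolog itself answers goals by backward chaining (SLD resolution), and all names below are illustrative:

```python
def forward_chain(rules, goal):
    """Each rule is (head, body); a fact is a rule with an empty body.
    Repeatedly fire rules whose bodies are fully satisfied."""
    known = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return goal in known

rules = [('B', []),          # fact:  B.
         ('C', ['B']),       # rule:  C :- B.
         ('A', ['B', 'C'])]  # rule:  A :- B, C.
print(forward_chain(rules, 'A'))  # True
```

With Horn clauses, each rule application adds at most one new positive literal, which is why this simple fixed-point loop suffices.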

### 4. Control strategy

Generally speaking, an inference system (or reasoning system) consists of a logic part and a control part. The former includes a formal language (deciding how knowledge is represented) and a set of inference rules (deciding what new knowledge can be derived in a single step); the latter can be a simple algorithm or some complicated processes for premises/rule selection (deciding what to do in each step).

Since in resolution refutation there is only one rule to use, the control strategy is simplified — here the only decision to be made is the selection of the two clauses in each inference step.

A necessary condition is that the two clauses contain unifiable atomic propositions with opposite signs (one positive, one negative), but when there are many such pairs, a good selection strategy becomes important. The strategy not only influences the efficiency of the system, it may also influence its completeness (though it never influences its soundness).

Several ideas have been proposed as selection preferences. For example, the clauses that come from the (negated) conclusion may be preferred, so that the inference is carried out in a "goal-oriented" manner (i.e., backward inference), which helps when not all premises are relevant to the conclusion. Another idea is to prefer shorter clauses, because they are "closer" to an empty clause.

Even with all these ideas, a theorem-proving system using resolution refutation is usually very slow, because most of the time is wasted on inference steps that contribute nothing to the final solution. One reason for this is the conversion to clause form: though the logical content of the wffs is preserved in the process, the original knowledge structure, which carries information about the preferred usage of the knowledge, is lost.
