3203. Introduction to Artificial Intelligence

Issues in Search

 

1. Using heuristic search

In the early years of AI, some researchers believed that, to a certain extent, "AI" can be reduced to "symbolic problem solving", which can be further reduced to "heuristic search". Such an opinion can be found in "Computer Science as Empirical Inquiry: Symbols and Search", the 1975 ACM Turing Award Lecture by Newell and Simon.

Since "best-first search" is a well-defined algorithm, the only issue left is to find a good heuristic function. Among general-purpose heuristics, an example is means-ends analysis used in General Problem Solver. The idea is to measure the difference between the current state and the goal state, and to choose actions that reduce the difference as much as possible.

Heuristic search can be successfully used when the problem can be accurately represented as search in a state graph, and a good heuristic function can be found for it.

Now the problem is: what should we do if the above conditions are not satisfied yet? Some people will try harder to satisfy them, while others will turn to alternative approaches.
 

2. Beyond heuristic search

Though it sounds natural to treat problem solving as graph search, there are implicit assumptions in this approach which limit its application. In the following, let us analyze these assumptions and introduce alternative approaches.

[All options are listed at each step]

To represent a problem-solving process as heuristic search in a state graph assumes that all possibilities are taken into consideration at each moment, and the best one is selected to follow.

First, available "operations" often exist in different "spaces", and to compare them is improper. A natural solution is to group the operations into a hierarchy, and solve the problem in "macro-level" before going into "micro-level".

Also, unknown possibilities always exist in human problem solving. Actually, recognizing possibilities is often more crucial than evaluating and comparing them.

Finally, it is important for the efficiency of the system that certain possibilities are ignored. In everyday life, bad alternatives are seldom explicitly evaluated and eliminated. Example: chess masters do not evaluate the bad moves at all.

For these reasons, properly representing a problem as a graph is often a harder task than searching the graph. Instead of depending on an existing graph, an AI system often needs to build its knowledge representation while solving problems.

[There is a predetermined heuristic function]

To let the search be guided by a predetermined heuristic function assumes that the system has no memory, and does not learn from its experience. For example, if the same problem is repeated, the same search procedure will be repeated exactly.

One way to combine search and memory is to use search results to revise the heuristic function. For example, the checkers program developed by Arthur Samuel plays by search, but automatically adjusts the parameters in its heuristic function according to the result of each game.
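The following sketch shows the general idea in a Samuel-style form (the linear evaluation function and the update rule are assumptions for illustration, not Samuel's actual procedure):

    def evaluate(features, weights):
        # Heuristic value of a position: a weighted sum of board features.
        return sum(w * f for w, f in zip(weights, features))

    def adjust_weights(weights, game_positions, outcome, rate=0.01):
        # outcome = +1 for a win, -1 for a loss; after each game, nudge the
        # weights toward the features of the positions actually reached.
        for features in game_positions:
            weights = [w + rate * outcome * f for w, f in zip(weights, features)]
        return weights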

In general, this means that domain knowledge includes not only "facts" and "factual relations" that can be used by the logic part of the system, but also "guiding information" that can be used by the control part of the system, and both types of knowledge can be learned from the system's experience. In this way, the system's capability is no longer restricted by its initial design.

[In each step, a choice is made among several options]

This kind of sequential search often misses good solutions that are just a few steps away on an unexplored branch.

Parallel search would solve this problem, but given hardware restrictions, exploring all paths in parallel is impossible, nor is it really desirable.

Parallel search can be simulated sequentially by time-sharing. Actually, breadth-first search can be seen as an extreme case of this, where every path advances at the same speed. However, it fails to use the available heuristic information.

A better solution is to explore different paths at different speeds, depending on the heuristic function. In this way, "choice among options" becomes "resource distribution among options".
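For instance (a made-up allocation scheme, for illustration only), an expansion budget could be divided among candidate paths in inverse proportion to their heuristic values:

    def distribute_budget(paths, h, total_steps):
        # h(path) > 0, smaller is better; each path gets expansion "speed"
        # proportional to its promise, and no path is dropped outright.
        weights = [1.0 / h(p) for p in paths]
        total = sum(weights)
        return [round(total_steps * w / total) for w in weights]

    # Three paths with heuristic values 1, 2, and 5 share 16 expansion steps:
    print(distribute_budget(["A", "B", "C"], {"A": 1, "B": 2, "C": 5}.get, 16))
    # -> [9, 5, 2]: the most promising path is explored fastest.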

One related approach is the genetic algorithm, which lets multiple candidate solutions co-exist, compete, and evolve in a manner similar to natural selection. Example: Genetic Algorithm - Traveling Salesman Problem.
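A minimal sketch of this approach for the TSP (the population size, the mutation scheme, and the omission of crossover are simplifications for brevity):

    import random

    def tour_length(tour, dist):
        # Total length of a closed tour over the distance matrix dist.
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def evolve(dist, pop_size=50, generations=200):
        # Candidate tours co-exist; the fitter ones survive and reproduce.
        n = len(dist)
        pop = [random.sample(range(n), n) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda t: tour_length(t, dist))
            pop = pop[:pop_size // 2]          # selection: the fitter half survives
            while len(pop) < pop_size:         # mutation: swap two cities in a parent
                child = random.choice(pop[:10])[:]
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
                pop.append(child)
        return min(pop, key=lambda t: tour_length(t, dist))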

[Each operation leads to a certain state]

To treat an operation as a way to reach a certain state assumes that all operations have deterministic results.

In practical situations, all operations are incompletely specified, and may have unexpected results. One way to represent this information is to attach a probability value to the state-transition function, as in a Markov decision process, where "best-first" means pursuing the maximum expected utility or reward.
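Under this view, choosing an action amounts to comparing expected values, as in the sketch below (the toy actions and payoffs are invented):

    def best_action(actions, transitions, utility):
        # transitions(a) yields (probability, next_state) pairs for action a;
        # "best-first" now means the maximum expected utility, not the best
        # single successor.
        def expected_utility(a):
            return sum(p * utility(s) for p, s in transitions(a))
        return max(actions, key=expected_utility)

    # A sure 4 beats a gamble worth 0.5*10 + 0.5*(-5) = 2.5 on average:
    t = {"safe": [(1.0, 4)], "risky": [(0.5, 10), (0.5, -5)]}
    print(best_action(["safe", "risky"], t.get, lambda s: s))   # -> safe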

Also, "planning-then-acting" should be changed into a "sense-plan-act" cycle, with environmental feedback. Such an approach is widely used in robotics.

[The process stops at a goal state]

To let the search process stop at predetermined final states assumes that all the goal states are equally good, and all the other states are equally bad as ending states.

With insufficient computational resources, a "satisficing" strategy is often more realistic, where the search stops when a "good enough" answer is found, even if it is not the best one.

Instead of ending when a fixed standard is met, an anytime algorithm returns the best answer it has found so far even when it is not allowed to run to completion, and may improve on the answer if it is allowed to run longer. Such an algorithm lets the user, not the system (or the algorithm designer), decide when to stop.
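Both ideas can be sketched with a generator that yields ever-better answers (the random-sampling minimizer and the threshold below are illustrative): a satisficing caller stops at a "good enough" value, while an anytime caller stops whenever its time runs out and keeps the latest answer.

    import random

    def anytime_minimize(f, sample):
        # Yields each improvement over the best answer found so far; the
        # caller, not the algorithm, decides when to stop consuming it.
        best_x, best_val = None, float("inf")
        while True:
            x = sample()
            if f(x) < best_val:
                best_x, best_val = x, f(x)
                yield best_x, best_val

    # Satisficing use: stop as soon as the answer is good enough.
    for x, val in anytime_minimize(lambda x: (x - 3) ** 2,
                                   lambda: random.uniform(-10, 10)):
        if val < 0.01:                 # good enough, though not the best
            print(x, val)
            break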

The above discussion shows that in AI the hard part of research is often not in solving problems within a given framework, but in deciding when to jump out of a framework (i.e., the representations, the goals, and the assumptions of the research).

 

3. Game playing with/without search: an example

Let's see a simple 2-player game (a simple version of Nim): given N (N > 3) stones to start with, two players take turns removing stones, and dice are used to decide which player moves first. Each time a player must take 1 or 2 stones away, and whoever takes the last stone wins the game.

There are the following algorithms (sketched in code after the list):

  1. Always take 1.
  2. If N = 1, take 1; otherwise randomly take 1 or 2.
  3. If N%3 = 2, take 2; otherwise take 1.
  4. Use Minimax/Alphabeta search to decide how many to take.
  5. Use an N-by-2 matrix to keep a score for each choice. After a winning game, increase the scores of the choices made in this game; after a losing game, decrease them. In following games, at each step choose the move with the higher score.
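A minimal sketch of the five strategies (the game-tree search in algorithm 4 and the score update in algorithm 5 are filled in with assumed details):

    import random

    def always_one(n):                     # algorithm 1
        return 1

    def random_move(n):                    # algorithm 2
        return 1 if n == 1 else random.choice([1, 2])

    def modular(n):                        # algorithm 3: leave a multiple of 3
        return 2 if n % 3 == 2 else 1

    def minimax_move(n):                   # algorithm 4: search the game tree
        def wins(m):                       # can the player to move force a win?
            return any(k == m or (k < m and not wins(m - k)) for k in (1, 2))
        for k in (1, 2):
            if k == n or (k < n and not wins(n - k)):
                return k
        return 1                           # losing position; take 1 and hope

    scores = {}                            # algorithm 5: (stones, move) -> score

    def learned_move(n):
        moves = [1] if n == 1 else [1, 2]
        return max(moves, key=lambda k: scores.get((n, k), 0))

    def play(strategies, n):
        turn, history = random.randint(0, 1), []    # dice decide who starts
        while n > 0:
            k = strategies[turn](n)
            history.append((turn, n, k))
            n, turn = n - k, 1 - turn
        winner = 1 - turn                  # whoever took the last stone
        for who, m, k in history:          # reinforce only the learner's choices
            if strategies[who] is learned_move:
                scores[(m, k)] = scores.get((m, k), 0) + (1 if who == winner else -1)
        return winner

    # Training the learner against the optimal player; over many games it
    # can rediscover the N % 3 rule from wins and losses alone:
    for _ in range(5000):
        play([learned_move, modular], random.randint(4, 30))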
Discussion: