CIS203 A.I.
December 9, 2001
Really Fast Fruit Flies:
A Study of Artificial Intelligence in Gaming
“I propose to consider the question, ‘Can machines think?’” (Alan Turing). With these simple words the discipline of Artificial Intelligence (A.I.) was born some 50 years ago. Since that time researchers from around the world have endeavored to create machines (in modern terms, computers) that are capable of intelligent thought. One of the domains within A.I. that has received considerable attention over the past 50 years is the area of game-playing and puzzle-solving. This domain is often considered one of the most successful within the field of A.I. due to its many impressive milestones. Among these milestones are a chess program capable of playing at grandmaster level and an Othello program that can defeat the best human players in the world. However, by looking closely at the methods used to develop these programs, we can see that this domain has contributed very little to the field of A.I. in terms of scientific knowledge. In fact, one of the only things that research in this domain has proven is that computational power can mimic intelligence on a game board.
The
game of chess seems to be the most logical place to start a discussion about
game-playing A.I. After all, the first
game programs written were chess programs, and much of modern game research has
focused on chess. However, before we look into how most chess programs work it
might be helpful to have some knowledge of the game's history.
According
to most chess playing experts, chess was invented in the 6th century AD in
India. Sam Sloan, a linguist and a
ranked chess player who has extensively researched the origins of chess,
contests this assessment. Sloan insists
that chess was originally developed in China sometime during the 2nd century
BC. Regardless of its exact origins, it
is generally accepted that after its origination, chess spread throughout the
world and took on new variations of play.
The variation of chess that is most often played by Western Europeans
and Americans evolved in Europe throughout the Middle Ages and the
Renaissance. (Sloan)
The
rules of modern European chess, though more complex than those of some games, are not difficult to
learn and can be picked up by most people in a single session of play. Playing the game well, on the other hand, is
not such an easy task. Mastering the
game of chess requires a tremendous amount of talent and a lifetime of
devotion. In fact, many are of the
opinion that the ability to master chess is an achievement no less significant
than winning a Nobel Prize. It is for
this reason that research into designing a chess playing program was started. Researchers believed that if they could
design a program that could play chess, then they would have succeeded in their
efforts to create Artificial Intelligence.
Thus,
in the early 1950's Alan Turing wrote the first modern chess program. The program, which Turing had to simulate by
hand due to a lack of resources, lost to a weak human opponent. Following Turing’s work were a series of
chess programs, each apparently more successful than the previous one. Due to the success of some of these
programs, tournaments were started so that computer programmers could pit their
chess programs against one another. A
few of the big names in these tournaments were Cray Blitz, which ran on a Cray
supercomputer, and HITECH, which boasted 64 specialized VLSI chips. The efforts of all of these various programs
culminated in 1988, when the chess machine DEEP THOUGHT, the successor to the
earlier ChipTest, played and defeated a human grandmaster. This defeat aroused enough
interest that chess giant Garry Kasparov agreed to play a two-game exhibition
match against the machine in 1989.
Kasparov won both games. DEEP THOUGHT
later evolved into DEEP BLUE. All three machines, from ChipTest onward, utilized a collection of
specialized "chess" chips based on VLSI technology. In 1997 Kasparov agreed to play DEEP BLUE in
a six-game exhibition match. That match
ended with one win for Kasparov, two wins for DEEP BLUE,
and three draws. (IBM, Schaeffer)
With
DEEP BLUE's victory over world chess champion Garry Kasparov, it seemed that the
domain of games research had achieved a tremendous milestone. After all, if chess requires intelligence to
play and humans are capable of creating machines that play chess as well as the
best human players, then humans must be capable of creating intelligent
machines. If computers played chess the
same way that humans do, then this hypothesis could be accurate. However, the way in which a computer plays
chess and the way in which a human plays chess are very different.
Let
us first examine how a computer plays chess and then compare this method with
that used by human players. Without
exception all of today’s most successful chess programs use some kind of search
algorithm as their foundation for choosing moves. To understand exactly how these search algorithms work, it might
be helpful to pretend for a moment that we are a computer.
If
we were a computer programmed to play chess via a search algorithm,
the first thing that we would do is write down all 20 of the possible opening
moves that we could make. Then we would
assign values to each of these moves based on a mathematical function that we
knew. After we had done that,
underneath each one of our possible opening moves, we would write down all of
the moves that our opponent could take given that move. Again we would assign a value to each of
these moves based on our function. This
would leave us with 20 lists. Each list
would consist of one of our moves at the top and all of our opponent’s moves
that could result from that move underneath.
Next, underneath each of our opponent's moves we would write down all of
the possible moves that we could take given their move, and again we would
assign them values. This process would
continue until we no longer had the resources to write down any more moves
(i.e. we ran out of memory or processing power). At this point we would look at all of the sequences of moves we
could take and find the sequence that leaves us with the highest assigned
value. We would then choose the first
move of this sequence and play it.
After playing our first move we would then erase all of the lists and
sub-lists that are no longer possible because we have already chosen our first
move. We would then start writing down
moves again, this time going deeper into the game.
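The procedure described above is, in essence, the minimax search algorithm. A minimal sketch follows; note that the helper functions (legal_moves, apply, evaluate) are hypothetical stand-ins for a real engine's move generator and scoring function, not taken from any actual chess program:

```python
# A minimal sketch of minimax search: the "write down every move,
# score it, look deeper" process described above, expressed as a
# recursive function. legal_moves, apply, and evaluate are
# hypothetical placeholders supplied by the caller.

def minimax(position, depth, maximizing, legal_moves, apply, evaluate):
    """Return (best_score, best_move) for the side to move."""
    moves = legal_moves(position)
    if depth == 0 or not moves:       # out of "resources": stop and score
        return evaluate(position), None
    best_move = None
    if maximizing:                    # our turn: pick the highest value
        best_score = float("-inf")
        for move in moves:
            score, _ = minimax(apply(position, move), depth - 1, False,
                               legal_moves, apply, evaluate)
            if score > best_score:
                best_score, best_move = score, move
    else:                             # opponent's turn: assume the worst for us
        best_score = float("inf")
        for move in moves:
            score, _ = minimax(apply(position, move), depth - 1, True,
                               legal_moves, apply, evaluate)
            if score < best_score:
                best_score, best_move = score, move
    return best_score, best_move
```

Erasing the now-impossible lists after each move corresponds to simply calling the function again from the new position.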
Although
this is an oversimplification of how a computer plays chess, the idea is still
the same. Basically the computer
repeats the same thing over and over again until the game ends. That doesn't seem very smart. So how is it that this algorithm beat Garry
Kasparov in chess? Hardware. Simply put, what makes this algorithm
effective is not the algorithm's design, but instead the speed with which it is
executed. The DEEP BLUE chess machine
contains 480 specially designed "chess" chips.
Working together, these chips allow DEEP BLUE to consider
roughly 200 million chess positions a second. It was through this phenomenal amount of
calculations that DEEP BLUE was able to defeat Garry Kasparov. (Ginsberg, Schaeffer)
In
fact, if we compare the number of chess positions that DEEP BLUE can consider,
roughly 200 million per second, with the number of chess positions that
Kasparov can consider, 2 per second, it seems a wonder that Kasparov won at
all. How is it possible that Kasparov
looked at so few positions, compared to DEEP BLUE, and still managed to
win? The answer lies within the way
humans play chess.
As
Mathew Ginsberg, a research associate at the University of Oregon, writes,
"Considering the far greater computational capacity of computers, how is
it that humans can win at all? The answer is that although we look at a
relative handful of successor positions, we look at the right handful." He goes on to explain how we identify
the "right" positions:
We identify the best positions to consider by a process known as pattern matching. It is how we immediately identify the four-legged platform in a room as a table or the images in a police mug shot as those of the same person despite the different orientations of the front and profile views. An expert chess player compares a given chess position to the tremendous number of such positions that he has seen over the course of his career, using lessons learned from analyzing those other positions to identify good moves in the current game. (Ginsberg)
Supporting
Ginsberg's assessment that humans process the game of chess in a parallel
fashion, Donald Michie, Professor Emeritus at the University of Edinburgh, writes,
Leading chess players tightly focus their searches-they tend to look at only a few branches in the tree of possibilities, and only a few moves ahead. Grandmasters famously compensate for this by using pattern recognition to form judgments about the broad line that future play might take. An admirer once asked the grandmaster Richard Réti how many moves he looked ahead. "One," he replied, "the right one." Réti was exaggerating, of course. But it is true that a grandmaster does not ordinarily examine more than about thirty positions in the tree, and rarely more than fifty. In some cases, however, Deep Blue examines 50 billion or more. (Michie)
In
addition to searching for patterns on the chess board humans also consider the
strategic merit of their moves, thus further reducing the number of positions
they consider. For example, John
McCarthy, a noted Stanford professor of computer science, writes of "how a
tree of depth 20 can be reduced to a four node tree by ignoring distinctions
between black’s moves that are tactically similar and regarding moving the
white king along a certain path as a single action as long as black keeps his
king in a certain region." Here
again we see the human ability to distill knowledge into simpler form, or, to
put it simply, the ability to think.
(McCarthy)
It
is this ability to think that allows human game players to compete with machine
game players. As McCarthy writes,
"Chess programs compensate for the lack of this intellectual mechanism by
doing thousands or, in the case of Deep Blue, many millions of times as much
computation." (McCarthy)
It
should be obvious, however, that simply throwing greater computational power at
a problem does not make it go away. In
the domain of game-playing, this point can be seen when one considers the game
of Go. As with chess it might be
helpful to have some knowledge of the history of Go before we go into the
details of the game.
Most
historians believe that the game of Go originated in China some 3,000 to 4,000
years ago. Like chess, Go traveled
after its creation, and eventually made its way to Japan some 1,200 years
ago. In Japan, Go developed a huge
following among the warrior class of samurai.
(McAdams)
The
rules of modern Go are believed to be the same as those of ancient Go. There are two sides, black and white, which
take turns placing stones on a 19-by-19 board. When an opponent's stones surround a stone or group of stones, the
surrounded stone(s) are removed from the game.
Beyond that there are few other rules. As with chess, however, the game of Go is easy to learn and hard
to master. In fact, the game is so
difficult to master that children in Japan who show promise in the game are
schooled full-time by a professional Go master, an intensity of training
comparable to that undergone by Olympic athletes. Another noteworthy
example of the game's complexity comes from the fact that Chairman Mao of China
forced his generals to play Go in order to sharpen their minds towards
strategy. (McAdams)
The
first known Go programs were developed in the 1970's and since then little
progress has been made. As it stands
now, most Go programs are rated somewhere around the middle amateur level,
though this rating is sometimes disputed by Go-playing experts, who suggest it is much lower. Why is there such a discrepancy between the level at which chess
programs can perform and the level at which Go programs can perform? The answer lies within the methods used to
create a Go playing program.
When
designing a Go playing program, "brute-force" searching is about as
effective as "trying to solve an algebra equation by chewing
bubblegum." (Schmich) This is due to the enormous number of possible moves
that one can make at any single point during the game. In chess there are usually around 35 moves
that one can make at any given time. In
Go there are usually around 200 different moves that one can make. If we look back at our analogy of how computers
play chess, we can see the problem quite clearly. The first thing that we would do is write down all of the
starting moves that we could possibly make.
In chess that number came to 20 different moves. In Go the number of opening moves comes to
361, quite a few more. As New York
Times writer George Johnson so effectively points out, "a Go computer as
fast as Deep Blue (which analyzed some 200 million chess positions per second)
would take a year and a half to mull over a single move." (Johnson)
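The scale of the problem is easy to check with a back-of-the-envelope calculation, using the rough branching factors cited above (about 35 legal moves per chess position, about 200 per Go position; both are averages, not exact figures):

```python
# Rough game-tree sizes, using the average branching factors cited in
# the text: about 35 legal moves per position in chess, about 200 in Go.
chess_branching, go_branching = 35, 200

def tree_size(branching, depth):
    """Number of leaf positions in a full game tree of the given depth."""
    return branching ** depth

# Looking just 6 half-moves (plies) ahead:
print(tree_size(chess_branching, 6))   # 1,838,265,625: under 2 billion
print(tree_size(go_branching, 6))      # 64,000,000,000,000: 64 trillion
```

Even this shallow six-ply lookahead makes the gap plain: the Go tree is already tens of thousands of times larger, and every additional ply multiplies the gap again.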
The problem of searching Go is so large, in fact, that even if we had
computers that were far superior to all modern machines we still would not be
able to utilize searching algorithms to play Go. As Mathew Ginsberg notes, "Even with the most powerful
computer imaginable--say, one performing perhaps 10^17 operations per second
(one operation in the amount of time it takes light to traverse the space of a
hydrogen atom)--you still would not be able to make a dent in solving a game
such as Go, for which there are some 10^170 possible positions."
(Ginsberg) Furthermore, even if we were able to search such a massive amount of
data it could not be so easily analyzed as it was with chess. In our chess playing analogy we simply
computed a mathematical formula for the value of each move, whereas in Go,
there is no simple formula for determining the value of a specific move. Johnson illustrates this well:
... a chess-playing computer tries to choose the move that will leave
it in the strongest position. It determines this by using fairly simple
formulas called evaluation functions. Each piece can be assigned a number
indicating its rank (pawns are worth 1, knights and bishops 3, rooks 5, queens
9). This figure can be multiplied by another number indicating the strength of
the piece's position on the board. Other formulas quantify concepts like
"king safety," or how well-protected that piece is. These rules,
called heuristics, are hardly infallible, but they give the computer a rough
sense of the state of the game and a basis on which to make its decisions. (Johnson)
He goes on to write that,
Go does not succumb to such simple analysis. There is no single piece, like a king, whose loss decides the game. Even counting the amount of territory each player has captured is not very revealing. With the placement of a single stone, a seeming underdog might surround the grand structure his opponent has been assiduously building and turn it -- jujitsu-like -- into his own (Johnson).
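To make the contrast concrete, the chess evaluation Johnson describes is simple enough to sketch as a bare material count. This is a toy illustration only: the piece values are the ones he cites, but the board representation and everything else here are invented for this example, and a real engine would add positional terms such as the "king safety" heuristics he mentions.

```python
# A toy material-count evaluation in the spirit of Johnson's description:
# pawns 1, knights and bishops 3, rooks 5, queens 9. The board is just a
# list of (piece, color) pairs; kings carry no material value.

PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def evaluate(pieces, side):
    """Material balance from `side`'s point of view."""
    score = 0
    for piece, color in pieces:
        value = PIECE_VALUES.get(piece, 0)
        score += value if color == side else -value
    return score

# White is up a rook for a knight: +5 - 3 = +2 from White's view.
board = [("rook", "white"), ("knight", "black"),
         ("king", "white"), ("king", "black")]
print(evaluate(board, "white"))   # 2
```

No comparably simple function exists for Go, which is precisely the difficulty Johnson identifies.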
Given
that Go exhibits all of these attributes, the question that arises is: "How is it that humans play
Go?" The answer to that question
is the same as it was for chess: pattern-recognition. Human Go experts recognize patterns of stones on the Go board and
determine whether those stones are "alive" (incapable of capture),
"dead" (those fated to be captured), or undecided. Here again we see that it is the human
ability to understand information, not just process it, which makes us
intelligent and allows us to play games.
As McCarthy writes, "The research of Go programs is still in its
infancy, but we shall see that to bring Go programs to a level comparable with
current chess programs, investigations of a totally different kind than used in
computer chess are needed." (McCarthy)
So
now that we have identified the differences between man and machine, is there
not some way in which to create chess programs that play like humans? The final answer to that question still lies
beyond the scope of modern research. However,
chess programs that were designed to mimic human play have been around since
the beginning. The problem is that they
were not very successful in tournament play.
As McCarthy so aptly points out, "the fixation of most computer
chess work on success in tournament play has come at scientific cost."
(McCarthy) He further goes on to cite several attributes that a computer chess
player must have in order to play on a more human level. Some of these attributes were included in
early chess playing programs, but were replaced by brute-force searching
techniques. The three attributes that
he listed were: forward pruning of the search tree, comparing moves to those
found in a parallel position and pruning them if they were originally
unsuccessful, and partitioning of positions into sub-positions. The first of these attributes, forward
pruning of the search tree, refers to the process of selectively searching
certain paths of play that seem promising, while removing/pruning others that
are seen as less noteworthy. This
feature was initially present in most early chess programs but was scrapped as
computers became faster and programs could search more of the game tree. However, this feature is absolutely
essential in Go where the number of possible moves is beyond calculation. The second feature, comparing current
positions with parallel previous positions, seems a blatantly obvious feature
of human chess play. Basically, what it
boils down to is rejecting a move based on past experience, something that humans
do naturally. However, as McCarthy
writes "Present programs spend most of their time rejecting the same moves
millions of times apiece." (McCarthy) The third attribute is also perhaps
the most complicated. It involves
breaking down a problem into smaller sub-problems, which is one of the fundamental
challenges of A.I. and one of the fundamental strengths of human
intelligence. McCarthy writes, "
Human chess players often partition a position into subpositions, for
example the king's side and queen's side. We analyze the subpositions somewhat
separately and then consider their interaction. Present chess programs only
consider the position as a whole, and computer power makes up for the
inefficiency. Computer Go programs are weak, because partitioning is absolutely
necessary for Go. We still don't know how to make computers partition effectively.
Imagine playing wide chess on an 8-by-32 board. Humans would play almost as
well as on the normal board, because humans use partitioning on all problems,
but programs based on present principles would be unable to search deeply.
(McCarthy)
McCarthy
has no suggestions on how to solve this last problem, but at least it has been
identified.
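The first of McCarthy's three attributes, forward pruning, is the easiest to illustrate. A rough sketch follows; the helpers (legal_moves, apply, evaluate) are hypothetical stand-ins, and keeping three moves is an arbitrary cutoff chosen for the example:

```python
# A sketch of the forward-pruning idea: instead of expanding every
# legal move, rank the moves with a quick one-ply evaluation, keep
# only the few that look most promising, and search just those more
# deeply. Everything pruned here is never examined at all.

def prune_moves(position, legal_moves, apply, evaluate, keep=3):
    """Return the `keep` moves whose resulting positions score best."""
    moves = legal_moves(position)
    ranked = sorted(moves, key=lambda m: evaluate(apply(position, m)),
                    reverse=True)
    return ranked[:keep]   # the rest of the tree below them is discarded
```

Pruning this way risks discarding the winning move, which is one reason tournament chess programs abandoned it once hardware allowed deeper exhaustive search; in Go, with hundreds of moves per position, there is no such luxury.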
In
1965 Alexander Kronrod said, "Chess is the Drosophila of Artificial
Intelligence." (Schaeffer) By this statement he was drawing an analogy between
the study of chess in A.I. and the study of fruit flies in genetics, implying the
importance of the study of chess to A.I.
Now that we have examined the ins and the outs of computer chess, we can
see this importance. However, we can
also see how research in this field was twisted into a bizarre man vs. machine
sport that provided us with little usable scientific knowledge. This twist is best interpreted by John
McCarthy who writes, "...computer chess has developed much as genetics
might have if the geneticists had concentrated their efforts starting in 1910
on breeding racing Drosophila. We would
have some science, but mainly we would have very fast fruit flies."
(Schaeffer)
Works Cited
Ginsberg, Mathew L. "Computers, Games and the Real
World." Scientific American Nov. 1998. 5 Dec. 2001
<http://www.sciam.com/specialissues/1198intelligence/1198ginsberg.html>.
IBM. Research Pages: Deep Blue. 5 Dec. 2001
<http://www.research.ibm.com/deepblue/meet/html/d.2.html>.
Johnson, George. "To Test a Powerful Computer, Play an Ancient Game." The
New York Times 29 July 1997.
5 Dec. 2001 <http://www.ishipress.com/times-go.htm>.
McAdams, Mindy. Introduction to Go. 5 Dec. 2001
<http://www.well.com/~mmcadams/gointro.html>.
McCarthy, John. Abstract: What is AI. 5 Dec. 2001
<http://www-formal.stanford.edu/jmc/whatisai.pdf>.
- - -. AGA Computer Go. 5 Dec. 2001
<http://www.usgo.org/computer/>.
- - -. MAKING COMPUTER CHESS SCIENTIFIC. 5 Dec. 2001
<http://www-formal.stanford.edu/jmc/chess.html>.
- - -. Review of Newborn Article. 5 Dec. 2001
<http://www-formal.stanford.edu/jmc/newborn/newborn.html>.
Michie, Donald. "Slaughter on Seventh Avenue." New
Scientist 7 June 1997. 5 Dec. 2001
<http://www.newscientist.com/hottopics/ai/slaughter.jsp>.
Schaeffer, Jonathan. "A Gamut of Games." A.I.
Magazine Fall 2001: 7-9.
Schmich, Mary. "Advice, like youth, probably just wasted on
the young." Chicago Tribune
1 June 1997. 5
Dec. 2001
<http://chicagotribune.com/news/columnists/chi970601sunscreen.column?coll=chi%2Dhomepagenews%2Dutl>.
Sloan, Sam. The Origin of Chess. Sloan Publishers, 1985. ISBN 0-9609190-1-5. 5
Dec. 2001 <http://www.shamema.com/origin.htm>.