CIS 203: Artificial Intelligence

Fall Semester 2001

Professor: Dr. Pei Wang

Report on Autonomous Vehicle Systems

By James Laurence


Table of Contents

Motivation for Autonomous Vehicle Systems

Some of the Technology Used in Autonomous Vehicle Systems

Practical Applications / Examples of Some Autonomous Vehicles

What Must Be Done in the Field of Autonomous Systems?

Summary of Autonomous Vehicle Systems

Works Consulted

Autonomous Vehicle Systems

 

 

Motivation for Autonomous Vehicle Systems:

 

 

            In many situations encountered in life, direct human intervention is unnecessary or even undesirable. Environments that are hostile to humans, such as the vacuum of space or the high pressures of the ocean floor, pose real dangers to human explorers and should be avoided where possible. In addition, humans often place themselves in hazardous situations, ranging from drunk driving all the way to the extreme of engaging in combat against an enemy, such as in aircraft combat. If a means could be developed to remove or suppress the human element in these activities, many lives could be saved. This is one of the motivations behind the construction and implementation of autonomous systems. There are also cases in which human involvement in certain activities poses no life-threatening risk but is certainly tedious or physically exhausting; examples include landscaping work (grass cutting, bush trimming) and carpet vacuuming. Again, autonomous systems are a solution to these cases. By giving people time to do what they enjoy instead of spending it on mundane chores (I would argue that most people do not enjoy vacuuming, for instance), autonomous systems can improve the quality of life.

 

 

Some of the Technology Used in Autonomous Vehicle Systems:

 

Autonomous Navigation Systems:

 

The two major autonomous navigation systems encountered during research for this report were AURORA (Automotive Run-Off-Road Avoidance System) and DAMN (Distributed Architecture for Mobile Navigation). Both systems are being developed at the Robotics Institute at Carnegie Mellon University in Pittsburgh, PA.

AURORA is able to negotiate roads via an onboard camera system. The cameras are pointed downward at the road and rely on a reflective source as a means of guiding the vehicle (Chen 2). For instance, the same paint that guides human drivers guides AURORA. Whether it is the double yellow lines in the center of the road or the single white line marking a lane boundary or the edge of the road, AURORA’s cameras are able to keep a ‘lock’ on the paint. This means that the on-board computer can determine the lateral position of the car on the road by measuring the distance between the car and either of the painted lane markers, since the width of the car is a known constant (4).
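
As a rough illustration of that geometry (a sketch only; the function name and the lane and car widths below are my assumptions, not values from the AURORA paper), the car’s lateral offset from the lane center can be computed from the measured gap to the left lane marker:

#include <stdio.h>

/* Illustrative constants, not AURORA's actual values. */
#define LANE_WIDTH_M 3.6   /* typical highway lane width */
#define CAR_WIDTH_M  1.8   /* known constant for the vehicle */

/* Given the camera-measured gap between the car's left side and the
   left lane marker, return the offset of the car's centerline from
   the lane center (negative = left of center). */
double lateralOffset(double leftGapM)
{
    double carCenterFromLeft = leftGapM + CAR_WIDTH_M / 2.0;
    return carCenterFromLeft - LANE_WIDTH_M / 2.0;
}

int main(void)
{
    /* A car hugging the left marker with a 0.2 m gap is 0.7 m left of center. */
    printf("offset = %.2f m\n", lateralOffset(0.2));
    return 0;
}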

A sub-system of AURORA implements a so-called ‘roadway departure warning.’ This system is designed to gauge when the human driver has drifted too far into another traffic lane and is thus in danger (12). If such an incident occurs, the system sounds an audible alarm, and the driver must then make the required course correction. Future implementations of AURORA promise to feature a navigation system that can actually take over for the human driver if he or she is driving in an erratic manner, presumably while intoxicated (2).

The success of this system was originally highly dependent on the condition of the road. Specifically, proper navigation depended on how much contrast there was between the paint and the road surface, which determines how reliable a signal the camera(s) receive (9). The system was also seen as too sensitive to how most people drive. Most humans do not drive in a perfectly straight line along a given stretch of road; many tend to stray from left to right and back again on a regular basis, and of course a lane change would be the ultimate insult to a system geared to stay within the lines. With this style of driving the system was put into a perpetual state of signaling false alarms (12). Such a system is of little value (everyone would switch it off), so additional algorithms have been put in place to give AURORA a ‘feel’ for how humans drive. AURORA is now at the point that it can reliably navigate along highways and two-lane rural roads at realistic speeds of up to 60 MPH (16)!
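
One simple way to build in that kind of tolerance, sketched below purely for illustration (AURORA’s actual algorithms are described in Chen’s paper; the thresholds here are assumptions), is to sound the alarm only when drift persists across many control cycles rather than on every momentary excursion:

#include <stdbool.h>

/* Illustrative thresholds, not AURORA's actual values. */
#define DRIFT_LIMIT_M  0.5   /* offset from lane center that counts as drift */
#define PERSIST_CYCLES 10    /* consecutive drifting cycles before warning */

/* Call once per control cycle with the current lateral offset.
   Returns true only when drift has persisted, suppressing the
   false alarms caused by normal side-to-side wander. */
bool departureWarning(double offsetM)
{
    static int driftCount = 0;

    if (offsetM > DRIFT_LIMIT_M || offsetM < -DRIFT_LIMIT_M)
        driftCount++;
    else
        driftCount = 0;

    return driftCount >= PERSIST_CYCLES;
}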

The other autonomous navigation project researched for this report is DAMN. DAMN is based on accepting and processing many types of sensor data and then performing an action in response to the data.

 

Unlike AURORA, DAMN is aimed at general robot navigation. The processing of the data seems to be done in a manner reminiscent of neural networks: every piece of raw sensor data received by DAMN is given a weight (or vote) reflecting its importance in the robot’s overall decision-making process (Rosenblatt 2). This ‘voting’ approach is justified since, as Rosenblatt states in his paper, “[combining sensor data] into one coherent system…has proven to be very difficult” (1). DAMN is thus given a rather high degree of flexibility in how it can respond to obstacles. The voting system gives DAMN the option of going forward (if the obstacle is still a fair distance away), making a ‘soft right’ turn, or making a ‘hard right’ turn (5).

While the robot is traversing a course, votes are taken regarding, among other things, the distance from the obstacle. DAMN therefore has the liberty of altering course whenever it ‘feels’ it is necessary. This dependence on the current environment is a strength for DAMN in that it allows the system to dynamically come up with a course of action for each unique situation it encounters (6). In the end, what DAMN does is compare what the votes are saying (what the actual situation is) to its goal state (where the robot wants to go). The strength of DAMN is its ability to take in a wide assortment of sensory data and put it to use. The tradeoff is that DAMN cannot process the data fast enough to travel at speeds as high as AURORA’s. However, the choices DAMN makes are, in my opinion, more reliable, since they are based on more than one signal, unlike AURORA’s reliance on the camera alone.
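
A minimal sketch of this voting idea follows; the behaviors, weights, and the five steering choices are illustrative assumptions rather than Rosenblatt’s actual parameters. Each behavior casts a vote for every candidate steering command, the arbiter sums the weighted votes, and the command with the highest total wins:

#define NUM_TURNS     5   /* hard left, soft left, straight, soft right, hard right */
#define NUM_BEHAVIORS 2

/* Each behavior fills in a vote between -1 (vetoed) and +1 (preferred)
   for every candidate steering command. */
typedef void (*behavior_fn)(double votes[NUM_TURNS]);

static void avoidObstacle(double votes[NUM_TURNS])
{
    /* Hypothetical scene: obstacle ahead and to the left, so prefer right turns. */
    double v[NUM_TURNS] = { -1.0, -0.5, 0.0, 0.6, 1.0 };
    for (int i = 0; i < NUM_TURNS; i++) votes[i] = v[i];
}

static void seekGoal(double votes[NUM_TURNS])
{
    /* Hypothetical scene: goal lies straight ahead. */
    double v[NUM_TURNS] = { -0.5, 0.2, 1.0, 0.2, -0.5 };
    for (int i = 0; i < NUM_TURNS; i++) votes[i] = v[i];
}

/* Sum the weighted votes and return the index of the winning command. */
int arbitrate(void)
{
    behavior_fn behaviors[NUM_BEHAVIORS] = { avoidObstacle, seekGoal };
    double weights[NUM_BEHAVIORS] = { 1.5, 1.0 };
    double total[NUM_TURNS] = { 0 };

    for (int b = 0; b < NUM_BEHAVIORS; b++) {
        double votes[NUM_TURNS];
        behaviors[b](votes);
        for (int i = 0; i < NUM_TURNS; i++)
            total[i] += weights[b] * votes[i];
    }

    int best = 0;
    for (int i = 1; i < NUM_TURNS; i++)
        if (total[i] > total[best]) best = i;
    return best;
}

Weighting the obstacle-avoidance behavior more heavily than goal seeking reflects the idea that safety-related votes should dominate when the two conflict.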

 

Autonomous Programming Languages:

 

                One programming language that has been created to deal with the control of autonomous systems is the Task Description Language (TDL). TDL is designed to handle the three parts of a robot’s control system: behavior, planning, and executive (Simmons 1). The first part, behavior, is low-level programming having to do with controlling the robot’s actuators and sensors. The planning level is concerned with achieving the robot’s goals. Finally, the executive layer is a go-between for the first two, executing commands and handling the exceptions generated when those commands cannot be carried out (1). Like LISP, the underlying structure of TDL is a tree, here called a task tree, although TDL itself is firmly based on C++. Regarding these tree structures, Simmons explains that “Each task tree node has an action associated with it…an action can perform computations, dynamically add child nodes to the task tree, or perform some physical action in the world…” (2). The strength of this structure is that new tree nodes can be generated by the goals of the system, and the goals are themselves dynamic since they depend on sensor data. The tree consists of goal nodes with their corresponding command (task) nodes as children (3). Also important is the support for concurrency: the robot should be able to move and ‘see’ at the same time, which requires two command nodes to execute simultaneously. Even after this brief summary it is clear that TDL shows promise as a control language for robotics.
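
To make the task-tree idea concrete, here is a minimal sketch in C (TDL itself extends C++, and the node and function names below are invented for illustration): goal nodes decompose into child command nodes, and executing a goal walks its children. A real executive would interleave commands to get the concurrency described above, rather than running them depth-first.

#include <stdio.h>

#define MAX_CHILDREN 8

typedef enum { GOAL, COMMAND } NodeKind;

/* A task-tree node: goals decompose into child commands (or sub-goals);
   commands carry an action to perform in the world. */
typedef struct TaskNode {
    NodeKind kind;
    const char *name;
    void (*action)(void);              /* physical action, for COMMAND nodes */
    struct TaskNode *children[MAX_CHILDREN];
    int numChildren;
} TaskNode;

static void addChild(TaskNode *parent, TaskNode *child)
{
    if (parent->numChildren < MAX_CHILDREN)
        parent->children[parent->numChildren++] = child;
}

/* Depth-first execution of a goal and its command children. */
static void execute(TaskNode *node)
{
    printf("executing %s\n", node->name);
    if (node->kind == COMMAND && node->action)
        node->action();
    for (int i = 0; i < node->numChildren; i++)
        execute(node->children[i]);
}

static void drive(void) { /* actuator commands would go here */ }

int main(void)
{
    TaskNode goal = { GOAL, "deliver-mail", NULL, {0}, 0 };
    TaskNode move = { COMMAND, "move-to-office", drive, {0}, 0 };
    addChild(&goal, &move);
    execute(&goal);
    return 0;
}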

 

Practical Applications / Examples of Some Autonomous Vehicles:

 

ANDI and CIMP

 

            ANDI (Automated Nondestructive Inspector) and CIMP (Crown Inspection Mobile Platform) together form an alternative to traditional aircraft inspection. As stated in the introduction, one of the more noble goals of constructing autonomous systems is to improve the quality and safety of human life. Traditional aircraft inspection is conducted by human workers who, by the nature of the occupation, are in danger of serious bodily injury.

            The autonomous aircraft inspection vehicle ANDI is the first part of the answer to this problem. ANDI is used primarily to detect minor problems with an airplane’s skin. At present, ANDI’s cameras do not have high enough resolution to detect serious structural faults in an aircraft (Siegel 3). ANDI is used merely to conduct what Siegel refers to as “opportunistic kinds of visual inspection,” which consists mostly of looking for dents in the aircraft (3).

            The second part of the inspection team is CIMP. CIMP has an array of sensors with which to scan the aircraft, and correspondingly it uses a large amount of image enhancement software to visually identify most kinds of defects in an aircraft (4). Without going into the details of how CIMP does this, some of the features of the system are as follows: image understanding, edge detection, and feature vector calculation (9). The result of all these enhancement algorithms is that CIMP is reliable in finding cracks and scratches and, most importantly, in discerning between the two (9). With even more software, the CIMP design team hopes to make it as dependable as a human inspector. Together, ANDI and CIMP promise to make human inspection of aircraft obsolete.
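
As a toy illustration of feature-vector classification (the features and thresholds below are invented for this report, not taken from CIMP), an edge region found by edge detection might be summarized by a few numbers and labeled with a simple decision rule:

/* Hypothetical per-region features extracted after edge detection. */
typedef struct {
    double elongation;   /* length / width of the edge region */
    double straightness; /* 0..1, how well the region fits a line */
    double contrast;     /* mean gradient magnitude along the edge */
} EdgeFeatures;

typedef enum { SCRATCH, CRACK, UNKNOWN } DefectLabel;

/* Toy decision rule: scratches tend to be long, straight, and shallow,
   while cracks tend to be jagged and high-contrast. */
DefectLabel classifyDefect(const EdgeFeatures *f)
{
    if (f->elongation > 5.0 && f->straightness > 0.9)
        return SCRATCH;
    if (f->elongation > 3.0 && f->straightness < 0.7 && f->contrast > 0.5)
        return CRACK;
    return UNKNOWN;
}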

 

Sage

 

            The Sage robot represents a collaboration between the Carnegie Museum of Natural History and the Carnegie Mellon University Robotics Institute (Nourbakhsh 1). The goal of Sage is to attract museumgoers to the less frequently visited exhibits at the museum, while at the same time giving them useful information about what they are seeing (1).

            The navigation system of Sage is based upon sonar. Signals are sent out to detect any approaching humans, and if anyone comes into Sage’s path, it will alter course in order to avoid a collision (8). In order to be more ‘lifelike’ in its movements, Sage incorporates human characteristics such as graceful starts and stops. To accomplish this, Sage calculates its best path of avoidance every control cycle (Sage runs on a Pentium 166). This reflects the fact that humans (especially children) may walk in an erratic manner, so maintaining a persistent internal representation (in memory) of the visitors in the room would be counterproductive. Here is some sample C code that summarizes Sage’s obstacle avoidance methodology:

 

/* Helper routines from Sage's sensor and motor layers; prototypes are
   added here so the fragment is self-contained. (cycleTime is unused
   in this excerpt.) */
void updateVirtualSonars(int dir);
int  frontBlocked(void);
int  forwardObstacle(void);
int  clearTowardGoal(void);
void normal_acc(void);
void middle_acc(void);
void smooth_acc(void);
int  calcSpeedObstacle(int speed);

int calcspeed(int dir, int maxSpeed, int cycleTime)
{
    /* update virtual sonar in the travel direction */
    updateVirtualSonars(dir);

    /* if close obstacle, then stop */
    if (frontBlocked()) {
        normal_acc();
        return 0;
    }
    /* else if middle-distance obstacle, proportional speed */
    else if (forwardObstacle()) {
        middle_acc();
        return calcSpeedObstacle(maxSpeed - 100);
    }
    /* else if NO obstacles for a long distance, full steam ahead! */
    else if (clearTowardGoal()) {
        smooth_acc();
        return maxSpeed;
    }
    /* finally, if path clear but far obstacle, reasonable speed */
    else {
        return maxSpeed - 100;
    }
}

     

What Sage does maintain is an internal representation of the museum hallways. In order to know where it is, markers are installed at the ends of the museum hallways; when the robot detects a marker with its camera, it updates its position in memory (12). To date, Sage has never hit any obstacles in the museum.
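
The marker update can be summarized in a short sketch; the marker IDs and coordinate table below are hypothetical rather than taken from the Sage paper:

#define NUM_MARKERS 3

/* Hypothetical table mapping marker IDs to surveyed hallway coordinates. */
static const struct { int id; double x, y; } markerMap[NUM_MARKERS] = {
    { 0,  0.0,  0.0 },
    { 1,  0.0, 25.0 },
    { 2, 18.0, 25.0 }
};

/* When the camera reports a hallway marker, snap the dead-reckoned
   pose to the marker's surveyed coordinates, canceling accumulated drift. */
void updatePosition(int markerId, double *x, double *y)
{
    for (int i = 0; i < NUM_MARKERS; i++) {
        if (markerMap[i].id == markerId) {
            *x = markerMap[i].x;
            *y = markerMap[i].y;
            return;
        }
    }
    /* unknown marker: keep the current estimate */
}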

The most important part of Sage is not its navigational ability but rather its ability to communicate with museum visitors in a productive manner. Sage is responsible for the ‘Dinosaur Hall’ exhibit at the museum. This responsibility entails giving a thirty-minute tour of the area, complete with explanations at each point of interest (18). It was therefore imperative that Sage be able to communicate information in a manner humans (especially children) would find enjoyable (18). Techniques used included outfitting the robot with a laserdisc player for multimedia presentations and a speech synthesizer for narratives. The true test of Sage’s being ‘intelligent’ in this case is the robot’s acceptance by human visitors: the level of respect visitors gave to the robot and the response the robot gave when it was in fact being disrespected. The following table summarizes problems Sage had in the beginning and what the design team did to correct them (23).

Table 1 - Ways of promoting human/robot interaction:

     By giving the robot a way to respond to negative actions by its human visitors (referring to the last entry in the table), the developers were able to make Sage more lifelike. The results of this experiment in human-computer interaction are mixed. While many people enjoyed the robot, most (74%) stayed with it for only half the tour, presumably because their interest had dulled. Also, most visitors rated a traditional (human) guided tour as superior to the tour given by Sage (25). Clearly, more work needs to be done if robot guides are to be accepted.

  

What Must Be Done in the Field of Autonomous Systems?

 

            Based on the information above, there are two things that must be done if the field of autonomous systems is to be seen as a serious discipline. After all, successful autonomous systems must be more than just toys.

            First, autonomous systems must be proven to perform better than their human counterparts in order to justify their use. It is not enough to design an ‘automatic pilot’ for a car, for example, if the system cannot handle ‘stop and go’ situations, since such driving is more difficult than highway driving. A system that controls the car only on the highway is useful, but if the purpose of the system is to prevent accidents due to intoxication, for instance, the system must do all the driving. In addition, we cannot rely on an intoxicated passenger to give accurate directions home, so such a control system would certainly need the assistance of some external navigation system (such as GPS) in order to reliably get the passenger home safely.

            Second, autonomous systems must, where needed, incorporate the major aspects of human behavior. One of the reasons some humans dislike computers so much is that they are impersonal. Losing all of one’s work only to have the computer reply ‘stack overflow’ or ‘this program has performed an illegal operation,’ for instance, is not a good way to establish good relations between computer and user. Therefore, more cooperation is needed among sociologists, psychologists, and system designers to ensure an acceptable (i.e., friendly) user interface. Only when these ideas are implemented can effective autonomous systems be realized.

 

Summary of Autonomous Vehicle Systems:

               

                From the discussion above, I hope to have conveyed my feeling that the field of autonomous vehicle systems shows much promise. While there is certainly still much work to be done, I would argue that autonomous systems have an advantage over other areas of AI in terms of how soon the general public can expect to see practical applications emerge from this field. Other areas of AI research, particularly speech recognition and production, try in some way to replicate human brain activity; we need to know to some degree how the brain works in order to teach a computer to understand speech. Such an understanding is not absolutely necessary to teach a computer how to drive a car, unless of course we wish to teach it “road rage”! While designing a computer system to flawlessly drive a vehicle or perform some other task is certainly not trivial, the main constraint on autonomous systems is the technology currently available. For this reason it is hard to imagine that the near future will not see everyday implementations of autonomous systems, such as the autonomous lawnmower or the autonomous vacuum cleaner.

 


Works Consulted

 

 

Ackerman, Robert. “Processing technologies give robots the upper hand.” Signal. Jul. 2001: 17-20.

Adams, Jarret. “Beautiful Vision.” Red Herring. Aug. 2000.

Al-Shihabi, Talal. “Developing Intelligent Agents for Autonomous Vehicles in Real-Time Driving Simulators.” Online. Internet. 17 Oct. 2001. Available: http://www.coe.neu.edu/~mourant/velab.html

Chen, Mei, Todd Jochem, and Dean Pomerleau. “AURORA: A Vision-Based Roadway Departure Warning System.” Carnegie Mellon University. Pittsburgh, PA, 1997.

Foessel-Bunting. “Radar Sensor Model for Three-Dimensional Map Building.” Carnegie Mellon University. Pittsburgh, PA, 2000.

Nourbakhsh, Illah R. “An Affective Mobile Robot Educator with a Full-time Job.” Carnegie Mellon University. Pittsburgh, PA, 1999.

Rosenblatt, Julio K. “DAMN: A Distributed Architecture for Mobile Navigation.” Diss. Carnegie Mellon University. Pittsburgh, PA, 1997.

Siegel, Mel, and Priyan Gunatilake. “Remote Inspection Technologies for Aircraft Skin Inspection.” Carnegie Mellon University. Pittsburgh, PA, 1997.

Simmons, Reid, and David Apfelbaum. “A Task Description Language for Robot Control.” Carnegie Mellon University. Pittsburgh, PA, 1998.

West, James. “Computer Vision.” University of Sunderland. Sunderland, UK, 1997.