Research

My research focuses on the control and coordination of intelligent agents in highly dynamic environments. My goal is to discover efficient techniques for controlling autonomous teams of agents and to allow human users to interface with such teams in a more natural way. This research has applications in many areas, including physical robots engaged in cooperative activity (e.g. reconnaissance, toxic waste clean-up, soccer) and software agents in training simulations or computer games.

HIVEMind

As part of my dissertation, I created a multi-robot control architecture called HIVEMind, which is an instantiation of a class of coordination protocols known as broadcast-and-aggregate mechanisms. HIVEMind maintains constant shared situational awareness among all the robots, allowing the team to respond in real time to contingencies sensed by individual robots while using surprisingly little bandwidth.

In many cases, each agent on a cooperative team has a limited view of the world, i.e. some significant changes in the environment are observable by only a few, or even just one, member of the team. Broadcast-and-aggregate protocols establish a single synchronized representation of the world across the team by aggregating shared data from all team members. Each agent can then autonomously make appropriate control decisions based on that team-synchronized view of the environment.
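
To make the aggregation step concrete, here is a minimal sketch, in Python, of how each agent could fold its teammates' broadcasts into one team-synchronized view. It is my own illustration rather than the actual HIVEMind code; the observation keys and the freshest-report-wins merge rule are assumptions.

    # Sketch only: hypothetical data layout, not the HIVEMind implementation.
    def aggregate(local_view, teammate_views):
        """Merge locally sensed facts with teammates' broadcasts.

        Each view maps an observation key (e.g. "ball_position") to a
        (timestamp, value) pair. The freshest report wins, so every agent
        that merges the same set of broadcasts arrives at the same
        team-synchronized picture of the world.
        """
        merged = dict(local_view)
        for view in teammate_views:
            for key, (stamp, value) in view.items():
                if key not in merged or stamp > merged[key][0]:
                    merged[key] = (stamp, value)
        return merged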

There are many techniques for sharing data, ranging from simple broadcast protocols to reliable remote procedure call mechanisms. I was able to formally show that, given the following assumptions:

  • Relevant aspects of the environment change relatively quickly.
  • All team members must be informed of these changes in bounded time.
  • All team members must be able to detect if they are failing to receive data in a timely manner from their teammates.

having each robot periodically broadcast all its data to its teammates is actually optimal in the number of communication packets required for synchronization. This data sharing technique was used in the HIVEMind architecture.
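
In outline, each robot's communication loop only needs to broadcast its full local state once per period and flag any teammate whose most recent broadcast is older than the agreed bound. The sketch below is my own Python illustration of that loop; the constants, the send/inbox transport hooks, and the packet layout are all assumptions, not the real robot code.

    import time

    BROADCAST_PERIOD = 0.5  # assumed seconds between full-state broadcasts
    STALENESS_BOUND = 2.0   # assumed bound: a silent teammate is presumed failed after this

    def communication_step(my_id, my_state, last_heard, send, inbox):
        """One tick of the periodic full-state broadcast scheme.

        The caller runs this every BROADCAST_PERIOD seconds. send(packet)
        transmits to all teammates and inbox lists (sender, state, stamp)
        tuples received since the last tick; both are placeholders for
        whatever transport the team actually uses.
        """
        now = time.time()
        send({"from": my_id, "state": my_state, "stamp": now})  # broadcast everything we know
        for sender, _state, _stamp in inbox:                    # record who we heard from
            last_heard[sender] = now
        # Teammates silent for longer than the bound are presumed to have failed.
        return [robot for robot, t in last_heard.items() if now - t > STALENESS_BOUND]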

The HIVEMind architecture was implemented on a team of physical robots that performed a set of coordinated systematic search tasks. The robots were controlled by a human commander through a console that had a limited natural language interface and status displays indicating the current state of the team. The robots were successful in cooperatively performing the complex tasks they were assigned while using a very small fraction of the available network bandwidth.

Two robots starting out on a search task

Finding the ball in the "find static object" task

Trapping the human scum in the "Capture Evading Target" task


Flexbot

Origins

Flexbot began as an attempt to construct infrastructure on which I could build a software-only implementation of HIVEMind. Greg Dunham and Sanjay Sood were the first people to work on Flexbot, during Spring of 2001, with guidance from Ian Horswill and myself. Nick Trienens came aboard the project during Summer of 2001. The initial goal of the Flexbot project was to construct a bot, or non-player character (NPC), interface to Half-Life, a first-person shooter game; this would allow us to build bots that could interact with the game.

By the end of that summer, we had released Flexbot version Alpha2 to the world. At this point, Flexbot could be considered a successful project: the implementation was stable (we had Flexbot running continuously for days at a time, sometimes weeks), the Flexbot interface was well fleshed out, and we had a set of debugging tools for developers. At the time of writing (February 1, 2002), we’re working on Flexbot Beta1, which will have some new sensors, some bug fixes, and a modified architecture that allows developers to create new debugging and monitoring tools more easily.

Research on Flexbot

Ledgewalker in action

Ultimately, the purpose of Flexbot is to provide an extensible architecture on which to build research systems. The first project we worked on was a bot that played deathmatch in Half-Life. The original incarnation of this project was codenamed Ledgewalker, and the current version is known as Groo. See our paper submitted to the AAAI Spring Symposium on Artificial Intelligence and Interactive Entertainment for more information.

PackHunter (now known as Groopack) and Hunter-Runner make use of the HIVEMind architecture to demonstrate close team coordination. PackHunter attempts to coordinate deathmatch bots on the same team. Hunter-Runner scenarios involve one Runner attempting to escape from a set of 3-5 Hunters. The Hunters cooperatively search the map until they locate and trap the Runner, at which point they open fire and destroy it.
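
The coordination itself falls out of the shared state: as soon as any Hunter reports the Runner, every Hunter switches from searching to converging. The toy decision rule below is only my illustration of that idea in Python; the team_view key, the goal tuples, and the search_pattern helper are hypothetical, not the actual Groopack code.

    def hunter_behavior(team_view, my_position, search_pattern):
        """Choose a Hunter's next goal from the team-synchronized view.

        team_view is the merged broadcast-and-aggregate state. If any
        teammate has reported the Runner, all Hunters converge on that
        report; otherwise each Hunter continues its own systematic search.
        """
        sighting = team_view.get("runner_position")  # (timestamp, position) or None
        if sighting is not None:
            _stamp, runner_pos = sighting
            return ("converge", runner_pos)          # close in and trap the Runner
        # search_pattern is a hypothetical helper supplying the next search waypoint
        return ("search", search_pattern.next_waypoint(my_position))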

We have started work on some other research areas using Flexbot as a foundation. Patton is a wireless control system built on top of Flexbot: human commanders can grab statistical information from the Flexbot game and display it on PDAs, while injecting commands into the system through a simple browser interface. We are also working on new approaches to navigation that do not utilize A*, and experimenting with genetic programming for automated bot development.
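
To give a flavor of the Patton setup, the commander-facing side could be as small as an HTTP service that serves team statistics and queues commands submitted from a browser. The Python sketch below is purely illustrative and is not the real Patton code; the endpoint names, port, and statistics are made up.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs, urlparse

    TEAM_STATS = {"bots_alive": 3, "frags": 12}  # placeholder for data pulled from the game
    COMMAND_QUEUE = []                           # commands waiting to be injected into the game

    class PattonHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            url = urlparse(self.path)
            if url.path == "/command":           # e.g. /command?order=regroup
                COMMAND_QUEUE.append(parse_qs(url.query))
                body = b"command queued"
            else:                                # default page: current team statistics
                body = repr(TEAM_STATS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8000), PattonHandler).serve_forever()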

Flexbot as a teaching tool

Flexbot has also been used as a simulator for the Behavior-based Robotics class that was held in Fall 2001. The students in the class, using techniques they had learned for the physical robots, implemented deathmatch bots that squared off in a tournament. Two possible development paths were given to the students: a deathmatch path and a non-violent path. The non-violent task involved collecting a set of objects on the map; the person who collected the most objects in the shortest amount of time won. All the DLLs for the winners can be obtained from the downloads page of the Flexbot site.

Future Work

In the future, I plan to continue investigating techniques for controlling cooperative multi-agent teams. Present unmanned vehicles (for example, the Predator UAV) still require multiple human controllers per unit; the problem of controlling multiple units per human user remains an area of active research. I believe there are two ways of approaching this problem. First, the degree of autonomy of the agents can be increased through more intelligent control systems. In particular, it would be useful for the agents to have the ability to dynamically integrate new information into the system, either gleaned from previous experience or explicitly provided by the human user. Accomplishing this will involve explicitly representing long-term memory within the control system of the agents. An important observation here is that this representation has to be efficient in order to be effective, since the agents operate in a highly dynamic environment. Second, we can improve the human-agent interface by developing more natural ways of monitoring and tasking agent teams. For example, incorporating team-wide question-answering ability (e.g. “why did you go to that location?”) and case-based reasoning (e.g. “this problem is similar to mission alpha.”) would allow more natural interactions between the human user and the autonomous agent team.

khoo at cs.northwestern.edu | Room 330, 1890 Maple Ave, Evanston | (847)491-8931