Introduction

There are many efforts in academia that aim at creating universal agent architectures, or at developing agent-oriented languages that would speed up the development of agents in various environments. Despite years of effort in this area, there has been no real breakthrough in agent languages or architectures. The most widely used solutions are still finite state machines and behavior trees. Complex agent architectures/languages from academia were often used only by their creators in proof-of-concept examples, and a deeper analysis of whether the approach really offers benefits to programmers is lacking. When such an analysis is performed, the results are usually mixed [Pibil, JASON paper; our JAVA vs POSH paper]. In this document we try to pin down several reasons for this, based on our experience with the development of intelligent virtual agents.
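
For a sense of scale, the sort of thing practitioners actually ship is small. Below is a minimal behavior-tree sketch in Java; the Status/Node/Sequence/Selector names and the tick() contract are our simplification for illustration, not the API of any real engine or of Pogamut.

    // Minimal behavior-tree sketch. Status, Node, Sequence and Selector are
    // simplified names for illustration, not the API of any real engine.
    import java.util.Arrays;
    import java.util.List;

    enum Status { SUCCESS, FAILURE, RUNNING }

    interface Node {
        Status tick();
    }

    // Sequence: runs children in order, succeeds only if all succeed.
    class Sequence implements Node {
        private final List<Node> children;
        Sequence(List<Node> children) { this.children = children; }
        public Status tick() {
            for (Node child : children) {
                Status s = child.tick();
                if (s != Status.SUCCESS) return s;
            }
            return Status.SUCCESS;
        }
    }

    // Selector: returns the result of the first child that does not fail.
    class Selector implements Node {
        private final List<Node> children;
        Selector(List<Node> children) { this.children = children; }
        public Status tick() {
            for (Node child : children) {
                Status s = child.tick();
                if (s != Status.FAILURE) return s;
            }
            return Status.FAILURE;
        }
    }

    public class BtSketch {
        public static void main(String[] args) {
            final boolean enemyVisible = false; // stub world state
            Node tree = new Selector(Arrays.asList(
                new Sequence(Arrays.asList(
                    () -> enemyVisible ? Status.SUCCESS : Status.FAILURE,
                    () -> { System.out.println("attack"); return Status.RUNNING; }
                )),
                () -> { System.out.println("patrol"); return Status.RUNNING; }
            ));
            System.out.println(tree.tick()); // prints "patrol", then RUNNING
        }
    }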

Problems

  1. The environments available nowadays are not complex enough to really benefit from complex agent languages such as JSoar, Jason, etc.
  2. Today's environments usually pose a few specific problems; if you solve those properly, the agent's behavior is perceived as intelligent even if other problems are not solved that well - a nice example is path finding in FPS games.
  3. Which behaviors even require a complex decision-making architecture in 3D FPS games? Picture the limited view of the player (for instance in Oblivion, where she cannot see the whole world, only a small part of it), who thus interacts with only a few agents at once. How can an agent display intelligent behavior when all it has are “move”, “say” and “attack” actions? All the player can possibly see is the “execution” of a single intention; she cannot perceive the complex decisions behind it (a minimal sketch of such an agent follows this list).
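
To make point 3 concrete, the following Java sketch is a complete finite state machine covering the whole observable repertoire mentioned above. All predicates and actions are hypothetical stubs, not calls into any real game engine; the point is how little architecture the observable behavior requires.

    import java.util.Random;

    // Minimal FSM sketch for point 3: "move", "say" and "attack" already
    // cover everything the player can observe. All predicates and actions
    // below are hypothetical stubs.
    public class FsmBot {
        enum State { PATROL, CHAT, COMBAT }

        private State state = State.PATROL;
        private final Random rng = new Random();

        // One decision per game tick.
        void tick() {
            switch (state) {
                case PATROL:
                    if (enemyVisible()) state = State.COMBAT;
                    else if (playerNearby()) state = State.CHAT;
                    else move();
                    break;
                case CHAT:
                    if (enemyVisible()) state = State.COMBAT;
                    else if (!playerNearby()) state = State.PATROL;
                    else say("Nice weather today.");
                    break;
                case COMBAT:
                    if (!enemyVisible()) state = State.PATROL;
                    else attack();
                    break;
            }
        }

        // Stubs standing in for engine queries and actions.
        boolean enemyVisible() { return rng.nextInt(10) == 0; }
        boolean playerNearby() { return rng.nextInt(5) == 0; }
        void move()           { System.out.println("move"); }
        void say(String line) { System.out.println("say: " + line); }
        void attack()         { System.out.println("attack"); }
    }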

Methodology

Research in CS disciplines like machine learning, computer vision, planning or database systems relies heavily on a methodology where one:

  1. implements a new algorithm
  2. tests its performance on some STANDARDIZED dataset/task/domain

This enables one to compare the performance of different approaches, to evaluate the usefulness of new features, and possibly to advance the research faster. One common task used for agent programming languages is the “Tower of Hanoi”, but it does not reflect the complexity and constraints of decision making in complex virtual environments.
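
To underline this: the complete optimal Tower of Hanoi “behavior” is a few lines of deterministic recursion, with full observability, no uncertainty and no real-time pressure. A sketch in Java:

    // The complete optimal Tower of Hanoi "agent": fully observable,
    // deterministic, and free of any real-time pressure.
    public class Hanoi {
        static void solve(int n, char from, char to, char via) {
            if (n == 0) return;
            solve(n - 1, from, via, to);   // clear the way
            System.out.println("move disk " + n + " from " + from + " to " + to);
            solve(n - 1, via, to, from);   // restack on top
        }

        public static void main(String[] args) {
            solve(3, 'A', 'C', 'B');       // 7 moves, provably optimal
        }
    }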

In order to define such standardised environments/tasks we have to:

  1. identify the key components of every agent system - e.g. the environment (ENV), the decision making (DM), and the middleware connecting the ENV and the DM (a sketch of this split in code follows this list)
  2. identify the features of those components - e.g. does the middleware perform some ontology-driven inference on percepts, or is this functionality realised by the DM? …
  3. define standardised tasks/environments and provide them in an open source package that will enable others to produce REPRODUCIBLE results - e.g.
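
A rough sketch of what this component split could look like in Java. The interface names and the percept/action type parameters are invented here purely to fix ideas; they are not a proposed standard.

    import java.util.List;

    // Sketch of the three-component split; all names are our invention.
    // Environment: produces raw percepts, executes actions.
    interface Environment<RawPercept, Action> {
        List<RawPercept> sense();
        void act(Action action);
        boolean episodeFinished();
    }

    // DecisionMaking: maps processed percepts to the next action.
    interface DecisionMaking<Percept, Action> {
        Action decide(List<Percept> percepts);
    }

    // Middleware: mediates between the two; ontology-driven inference on
    // percepts (point 2 above) could live here instead of in the DM.
    interface Middleware<RawPercept, Percept> {
        List<Percept> process(List<RawPercept> raw);
    }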

With this environment and these standardised tasks we can eventually measure the performance of different components.
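
Assuming interfaces like the ones sketched above, a measurement harness is essentially one loop. A hypothetical sketch; the metric chosen here, decision time, is just one possibility:

    // Hypothetical measurement loop over the interfaces sketched above: the
    // same Environment and Middleware can be run against different DMs and
    // their decision times (or task scores) compared.
    class Harness<R, P, A> {
        long runEpisode(Environment<R, A> env,
                        Middleware<R, P> mw,
                        DecisionMaking<P, A> dm) {
            long decisionNanos = 0;
            while (!env.episodeFinished()) {
                java.util.List<P> percepts = mw.process(env.sense());
                long t0 = System.nanoTime();
                A action = dm.decide(percepts);
                decisionNanos += System.nanoTime() - t0;
                env.act(action);
            }
            return decisionNanos; // decision time; task score is another option
        }
    }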

Maybe check out what they have at http://www.robocuprescue.org/agentsim.html.

Note:

Motto: one or two GOOD agent programming languages are enough; we don't need 20.

Inherent obstacle

Different components of multiagent systems are not easily interchangeable: connecting a new environment to existing middleware (e.g. UDK to Pogamut) takes several weeks of work for an experienced programmer. The same applies to connecting agent programming languages. It is much easier to build on the work of others in, say, mathematics, where you can take a newly proved lemma and start your own theory on it.
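
Part of the reason is that the “glue” is not one clean interface but a long list of per-message and per-action mappings. A hypothetical fragment, only to show the shape of the work; none of these types are the real Pogamut or UDK API:

    // Hypothetical adapter fragment: every message and action of the new
    // environment needs its own mapping. None of these types are the real
    // Pogamut or UDK API; they only show the shape of the glue work.
    class UdkMessage { String type; String payload; }
    class Percept { String name; String value; }

    class UdkAdapter {
        Percept translate(UdkMessage msg) {
            switch (msg.type) {
                case "SEE_PLAYER": return percept("playerVisible", msg.payload);
                case "DAMAGE":     return percept("hurt", msg.payload);
                // ...dozens of further message types, each with its own quirks...
                default: throw new IllegalArgumentException("unmapped: " + msg.type);
            }
        }

        private Percept percept(String name, String value) {
            Percept p = new Percept();
            p.name = name;
            p.value = value;
            return p;
        }
    }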