From: Daniel Lowd (lowd_at_cs.washington.edu)
Date: Sun Oct 19 2003 - 16:21:03 PDT
Jacob Eisenstein's term paper "Evolving Tank Controllers" describes his
development of the genome representation TableRex and his success in using
it to evolve robot tank controllers.
The first contribution of this paper is the presentation of Robocode as a
good environment for studying artificial intelligence in general and
genetic algorithms in particular, since it features a clear objective but
many interesting challenges. His success supports this claim and could
lead to more work in this area. Perhaps environments such as Robocode,
more complicated than the microworlds of the 1960s but simpler than the
real world, could spur the development of new AI techniques that would
generalize to other situations.
The most important contribution, however, is probably the development of an
effective representation for Robocode tank controllers. Placing commands
and results in a table so that commands can easily refer to the results of
other commands is a novel and efficient way to represent a piece of
event-driven code. Not being familiar with the original language REX, I
cannot judge how much additional insight this adaptation required, but it
appears to be fairly original. Furthermore, such a representation might be
usable for developing other real-time controllers, not just robot tanks.
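The table-based idea can be made concrete with a small sketch. The following is my own illustrative assumption of a TableRex-like linear genome, not the paper's actual encoding: the genome is a table of rows, each row applies an operation to earlier values (sensor inputs or the outputs of previous rows), so any command can reference the results of other commands by index.

```python
# Hypothetical sketch of a TableRex-style genome (names and operations are
# illustrative assumptions). Each row = (operation, arg1, arg2); args index
# either the sensor inputs or the outputs of earlier rows in the table.

OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "gt":  lambda a, b: 1.0 if a > b else 0.0,
}

def evaluate(genome, inputs):
    """Fill the table row by row; each row may reference any earlier value."""
    values = list(inputs)            # rows 0..len(inputs)-1 hold sensor inputs
    for op, i, j in genome:
        values.append(OPS[op](values[i], values[j]))
    return values[-1]                # last row drives one actuator output

# Example with two sensor inputs, e.g. [enemy_bearing, enemy_distance]:
genome = [
    ("sub", 0, 1),   # row 2 = input0 - input1
    ("gt",  2, 0),   # row 3 = 1.0 if row 2 > input0 else 0.0
]
print(evaluate(genome, [10.0, 4.0]))
```

Because every row has a fixed arity and references only earlier rows, crossover and mutation on the table always yield a valid program, which is one plausible reason the representation suits a genetic algorithm.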
Unfortunately, the author limits his discussion to possible advancements
in producing robot controllers. While Robocode is an interesting
environment, it hardly seems like a goal in itself. This work could have
greater impact if it suggested possible implications for broader areas of
research.
Much of what makes this particular GA solution effective is still unknown,
as is how it compares to other, slightly different GA solutions. For
example, how does shortening or lengthening the genome affect the
capability of the controllers and the time required to learn them? Is
there an optimal length? What would be the effect of slightly changing the
available commands or inputs? To what extent is the approach robust to
such changes? Answering these questions would help one understand why
Eisenstein's approach works.
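The genome-length question could be probed with a controlled sweep. The sketch below is my own toy setup, not the paper's experiment: a trivial bit-counting fitness stands in for battle score, and the same GA loop is run at several genome lengths so their effect under a fixed evaluation budget can be compared.

```python
import random

def evolve(length, generations=30, pop_size=20, seed=0):
    """Run a minimal GA at a given genome length; return best final fitness.

    Toy fitness (fraction of genes set to 1) is an assumption standing in
    for the expensive battle simulations used to score real controllers.
    """
    rng = random.Random(seed)
    fitness = lambda g: sum(g) / len(g)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # truncation selection (elitist)
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(length)] ^= 1   # one-point mutation
            children.append(child)
        pop = parents + children
    return max(fitness(g) for g in pop)

# Sweep genome length while holding the evaluation budget fixed.
for n in (8, 32, 128):
    print(n, evolve(n))
```

Even this toy makes the trade-off visible: longer genomes offer more room for behavior but take longer to optimize under the same budget, which is exactly the trade-off the questions above ask about for TableRex.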
Furthermore, it would be good to learn whether a robot could be trained
against multiple combatants in random starting locations, given enough
compute power. It seems the only reason that scenario could not be tested
was a lack of computing power, but with a distributed testing framework it
might be possible to train such a robot. If a robot could succeed in random
starting locations against varied opponents, then perhaps it could compete
effectively in full Robocode tournaments against hand-coded robots, the
truest measure of success.
This archive was generated by hypermail 2.1.6 : Sun Oct 19 2003 - 16:21:08 PDT