From: Alan L. Liu (aliu_at_cs.washington.edu)
Date: Sun Oct 19 2003 - 23:41:18 PDT
Paper Reviewed: Evolving Robot Tank Controllers
Author: Jacob Eisenstein
Summary: The author used genetic algorithm (GA) techniques on genomes
representing RoboCode tank controller programs to evolve tanks that
were effective competitors against adversaries.
Important ideas:
* GA techniques can be successfully used to develop competitive robots.
Eisenstein's training yielded robots that could beat a single
adversary in one-on-one matches, starting from either fixed or varied
starting positions, as well as multiple adversaries from fixed
starting positions. He did this using limited computational and time
resources. This validates the use of GAs for robot evolution.
* The success of GA techniques hinges on the problem formulation.
Eisenstein noted that evolving Java code directly was a far-from-ideal
approach. He had to develop a new language, TableRex, to represent
controller code, so that every controller his simulation generated
would be valid and would not need to go through a costly compilation
process (see the first sketch below). In addition, minor changes to
his fitness and evaluation functions led to unexpected and undesired
outcomes; for example, making the raw fitness function correspond to
the total number of points led to robots that would win a few fights
by a wide margin and lose the rest (see the second sketch below).
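To illustrate the representation point, here is a minimal sketch in
Python (rather than the paper's Java); the opcode set, register count,
and program length are invented for illustration and are not
TableRex's actual format. The idea is that a fixed-width instruction
table makes every possible genome a valid program:

  import random

  OPCODES = ["add", "sub", "mul", "greater_than", "load_const"]
  NUM_REGISTERS = 16
  PROGRAM_LENGTH = 8

  def random_row():
      # (opcode, arg1, arg2, output register) -- every field is drawn
      # from a closed set, so any row is a legal instruction.
      return (random.choice(OPCODES),
              random.randrange(NUM_REGISTERS),
              random.randrange(NUM_REGISTERS),
              random.randrange(NUM_REGISTERS))

  def random_program():
      return [random_row() for _ in range(PROGRAM_LENGTH)]

  def mutate(program, rate=0.1):
      # Point mutation resamples whole rows; unlike mutating Java
      # source text, the offspring is always valid and needs no
      # compilation before it can be evaluated.
      return [random_row() if random.random() < rate else row
              for row in program]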
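The fitness pitfall can be made concrete the same way. A minimal
sketch with invented battle scores, contrasting raw total points
against counting battles won:

  # Hypothetical per-battle results: (our points, adversary's points).
  battle_scores = [
      (500, 20),  # two lopsided wins...
      (480, 30),
      (40, 60),   # ...followed by three narrow losses
      (45, 55),
      (35, 65),
  ]

  def fitness_total_points(scores):
      # Raw total points: a robot can rank highly on a couple of
      # blowouts while losing the majority of its battles.
      return sum(ours for ours, _ in scores)

  def fitness_battles_won(scores):
      # Counting wins rewards consistency across battles instead.
      return sum(1 for ours, theirs in scores if ours > theirs)

  print(fitness_total_points(battle_scores))  # 1100 -- looks dominant
  print(fitness_battles_won(battle_scores))   # 2 of 5 -- mostly losses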
Flaws:
At many points in the paper, Eisenstein speculates on workarounds to the
drawbacks and limitations of his experimental setup.
For instance, he wonders how he might encourage robots to bother
shooting. He proposes something akin to a "childhood" phase to ensure
that robots are trained on targeting, etc. There is an inherent flaw
in this reasoning -- evolution is essentially dumb, with no knowledge
built into it, and therefore no explicit goal either. Eisenstein
strays from this principle because he wants to alter the simulation to
fit his expectations.
Instead of examining whether GAs can successfully evolve individuals
in this virtual world the way evolution does in the real world, the
paper ends up focusing more on how to modify a particular GA to get
expected results. It does not even attempt to resolve the question of
whether a general GA can be applied to any given problem, or whether
every use of GAs must be accompanied by the same amount of tweaking.
Open research questions:
In section 7.1, Eisenstein suggests that neural networks might be
more effective than GAs at supporting actions that require complicated
mathematics (like targeting). Does that mean GAs are unsuitable for
these types of problems, or can a smarter problem formulation allow
GAs to work just as well? What is at stake is whether GAs have
inherent limitations that render them useful for only a small class of
problems (one where even the relatively simple space of RoboCode
controllers is too hard).
As a continuation of the issue I raised in the Flaws section, can GAs
be effective on a large class of applications without so much tweaking
of simulation parameters? One of the upsides to GAs is their supposed
ability to find solutions to problems where the right answer isn't
readily apparent. Unfortunately, this paper sends the message that a
successful GA must have pre-existing knowledge geared toward "coaxing"
out good results.