From: Andrei Alexandrescu (andrei_at_cs.washington.edu)
Date: Mon Oct 20 2003 - 13:18:20 PDT
Evolving Robot Tank Controllers by Jacob Eisenstein
Summary: The paper evaluates the design and evolution of a Robocode
controller built using genetic programming techniques. The controller is
written in TableRex, a language amenable to genetic transformations.
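
To make the encoding concrete, here is a minimal sketch of what a linear,
register-based representation in the spirit of TableRex might look like.
The row layout, operation set, and register conventions are assumptions
made for illustration, not the paper's exact format.

import java.util.List;

// A minimal, loop-free, register-based program in the spirit of TableRex.
// The row layout, operation set, and register conventions are assumptions
// for illustration, not the paper's exact encoding.
final class TableRexSketch {

    enum Op { ADD, SUB, MUL, GREATER, IF_POSITIVE }

    // One table row: an operation, two input registers, and an output register.
    record Row(Op op, int in1, int in2, int out) {}

    // Registers 0..k-1 are pre-loaded with sensor readings (enemy bearing,
    // distance, own energy, ...); the rest are scratch space. Designated
    // output registers are read by the event handler after evaluation.
    static void evaluate(List<Row> program, double[] registers) {
        for (Row r : program) {
            double a = registers[r.in1()];
            double b = registers[r.in2()];
            switch (r.op()) {
                case ADD         -> registers[r.out()] = a + b;
                case SUB         -> registers[r.out()] = a - b;
                case MUL         -> registers[r.out()] = a * b;
                case GREATER     -> registers[r.out()] = (a > b) ? 1.0 : 0.0;
                case IF_POSITIVE -> registers[r.out()] = (a > 0.0) ? b : registers[r.out()];
            }
        }
        // No loops or jumps: evaluation always terminates in |program| steps,
        // and any random sequence of rows is a syntactically valid program,
        // which is what keeps crossover and mutation simple.
    }
}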
Most important ideas:
* Genetic programming can create successful controllers. This sounds
like a tautology, but given that the whole Genetic Programming field
is in need of identity and confirmation, the point is important.
Eisenstein's contribution was to find a suitable encoding (TableRex)
and transformations of that encoding that yielded increasingly good
behavior.
* The evaluation and definition of the fitness function influence genetic
code evolution in essential ways. Eisenstein showed that the way we
humans distinguish a "good" robot from a "bad" one has a real effect on
the behavior of the evolved robots. If the measure is too coarse, the
algorithm has nothing to learn from. His algorithms yielded controllers
that did well by some measure of performance without being truly "good"
at an intuitive level (see the sketch after this list).
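
To make the granularity point concrete, here is the sketch referred to
above. It contrasts a coarse win/lose fitness signal with a graded one;
the specific measures (damage dealt, damage taken, survival time) and
their weights are illustrative assumptions, not the fitness functions
the paper actually used.

import java.util.List;

// Coarse versus graded fitness for a population of tank controllers.
final class FitnessSketch {

    record BattleResult(boolean won, double damageDealt,
                        double damageTaken, long survivalTicks) {}

    // Coarse: only outright wins count. Early, mostly random controllers
    // almost never win, so selection has no gradient to act on.
    static double coarseFitness(List<BattleResult> battles) {
        return battles.stream().filter(BattleResult::won).count();
    }

    // Graded: partial credit for damage and survival rewards "slightly less
    // bad" individuals long before any of them can win a battle outright.
    // Note that a controller can score well here while still looking
    // mediocre to a human observer, which is exactly the point above.
    static double gradedFitness(List<BattleResult> battles) {
        return battles.stream()
                      .mapToDouble(b -> b.damageDealt()
                                        - 0.5 * b.damageTaken()
                                        + 0.001 * b.survivalTicks())
                      .sum();
    }
}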
Largest flaws:
* Overspecialization is not intelligence. Using the real adversaries
as sparring partners is a luxury often unavailable in the real world.
One could think that the new, improved robots would simply do more of
what existing robots already do, but the paper offers absolutely no
evidence that the genetic controller can do well in situations it has
not been trained for. In fact, there is negative evidence: changing a
detail as minor as the starting positions degrades performance to
unimpressive levels.
* Each event handling function is independent of the others. There
is much opportunity to speed up learning if the functions could
exploit the obvious correlations that exist between events.
Open questions:
* What would an expressive, general-purpose genetic programming
language look like? Researchers have used LISP S-expressions in the
past, and Eisenstein uses a limited (no loops!) linear language. The
ideal language for meaningful genetic transformations is still around
the corner (a sketch contrasting the two representations closes this
review).
* How can reflexes, goals, and instincts be built into genetic evolution?
In the real world, learning babies do have many reflexes and instincts
built in; many mysteries about infant and child behavior still escape
scientists. How can we incorporate the ineffable mystery that makes
life so wonderfully fervid into the initial conditions and the entire
learning process of genetic algorithms?
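
Finally, the sketch promised in the language question above: an
S-expression-style tree genome of the kind used in classic genetic
programming, for contrast with the flat, loop-free instruction list
sketched earlier. The node types and operators are assumptions chosen
for illustration.

// An S-expression-style tree genome, of the kind used in classic genetic
// programming. Node types and operators are illustrative assumptions.
final class TreeGenomeSketch {

    sealed interface Expr permits Const, Sensor, Call {}
    record Const(double value) implements Expr {}
    record Sensor(int index) implements Expr {}        // e.g. enemy distance
    record Call(String op, Expr left, Expr right) implements Expr {}

    // Recursive evaluation. Crossover swaps whole subtrees, so offspring
    // are always syntactically valid; the price is unbounded depth and,
    // if loops or recursion were allowed, unbounded run time, which the
    // loop-free linear representation avoids by construction.
    static double eval(Expr e, double[] sensors) {
        return switch (e) {
            case Const c  -> c.value();
            case Sensor s -> sensors[s.index()];
            case Call c   -> switch (c.op()) {
                case "+"   -> eval(c.left(), sensors) + eval(c.right(), sensors);
                case "-"   -> eval(c.left(), sensors) - eval(c.right(), sensors);
                case "*"   -> eval(c.left(), sensors) * eval(c.right(), sensors);
                case "if>" -> eval(c.left(), sensors) > 0 ? eval(c.right(), sensors) : 0.0;
                default    -> 0.0;
            };
        };
    }
}

Trees keep genetic operators syntactically safe at the cost of unbounded
program size; the linear, loop-free table trades expressiveness for
guaranteed termination. The ideal GP language will have to balance the two.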