From: snavely_at_cs.washington.edu
Date: Mon Oct 20 2003 - 02:17:26 PDT
Jacob Eisenstein's paper, "Evolving Robot Tank Controllers," describes
the author's experience applying genetic programming to generate
successful controllers for Robocode robots.
The way the author chose to represent evolved controllers (i.e. as
programs in the TableRex language) is probably the most important
consideration in the design of his genetic programming experiment. In
this case, the sensors and actions available to evolved programs are
fairly fixed and straightforward (though the choice of providing the
bots with the constants 1, 2, 10, and 90 is interesting), so the most
interesting aspects of the design of the search problem are the
language the author chose for the programs and the modifications he
made to the language based on observation. For instance, observing
that hand-coded agents often used global variables to communicate
between event handlers, Eisenstein gave each event handler program the
ability to provide input to the others. Also, Eisenstein had to
make sure that the evolved programs could be interpreted efficiently.
As a result, TableRex programs are interpreted linearly from start to
finish, so there is no real control flow (although comparisons can be
performed). In particular, there can be no loops. It is not clear
whether observation of hand-coded agents supports ruling out loops,
though it may be hard to guarantee bounds on the running time of
programs that contain backwards branches. Similarly, there can be no calls to
evolved TableRex subroutines, only the fixed routines listed in the
appendix. Again, it is not clear whether humans have discovered any
useful subroutines that have become commonly used in hand-coded
robots. If such subroutines do exist, it would be interesting to see
if they could be evolved automatically, or alternatively if they could
be used as effectively if provided as primitives. In general, the
paper could have better explained the rationale behind its various
design decisions, whether they were grounded in observation or merely
in common sense.
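To make the linear-interpretation point concrete, here is a rough
sketch (in Java, since Robocode controllers are written in Java) of how
a TableRex-style table might be evaluated. The opcode names and row
layout here are my own guesses, not the paper's; the actual instruction
set is listed in the appendix. The key property is that the table is
read once from top to bottom, so the running time is proportional to
the number of rows and there is simply no way to loop.

    public class TableRexSketch {
        // Hypothetical opcode set; the real one is in the appendix.
        enum Op { ADD, SUB, MUL, DIV, GREATER, LESS }

        // One row of the table: an operation and the indices of its two
        // inputs, which may point at sensor values, constants, or earlier
        // rows.
        static class Row {
            final Op op;
            final int in1, in2;
            Row(Op op, int in1, int in2) { this.op = op; this.in1 = in1; this.in2 = in2; }
        }

        // Evaluate the table once, top to bottom.  The "inputs" array would
        // hold sensor readings plus constants like 1, 2, 10, and 90.  There
        // are no branches, so execution time is fixed by the table length,
        // and comparisons just write 0/1 values into the table.
        static double[] run(double[] inputs, Row[] program) {
            double[] table = new double[inputs.length + program.length];
            System.arraycopy(inputs, 0, table, 0, inputs.length);
            for (int i = 0; i < program.length; i++) {
                Row r = program[i];
                double a = table[r.in1], b = table[r.in2], v;
                switch (r.op) {
                    case ADD:     v = a + b; break;
                    case SUB:     v = a - b; break;
                    case MUL:     v = a * b; break;
                    case DIV:     v = (b == 0) ? 0 : a / b; break; // protected division
                    case GREATER: v = (a > b) ? 1 : 0; break;      // comparison, not a branch
                    default:      v = (a < b) ? 1 : 0; break;      // LESS
                }
                table[inputs.length + i] = v;
            }
            return table; // designated rows are read off as actions or as
                          // inputs shared with the other event handlers
        }
    }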
Eisenstein was not able to run experiments that trained agents against
multiple agents with multiple starting points for very many
generations because of time constraints. It would be interesting to
see if these experiments produced robots that were more general in
their ability to win, or if instead the techniques that worked in
other cases would fail. Also, it would be interesting to see whether
coevolution could work if the process were altered in the ways the
author suggests, e.g. rewarding robots for incremental progress (such
as moving); to me, it does not seem obvious that this could help the
robots explore the search space enough to develop more and more lethal
strategies without the designer leading them on too much.
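For concreteness, the kind of shaped fitness the author suggests might
look something like the sketch below; the weights and the fields of
BattleResult are purely illustrative and not taken from the paper.

    public class ShapedFitnessSketch {
        // Fields are illustrative; a real Robocode battle would supply these.
        static class BattleResult {
            double distanceMoved; // total distance traveled during the battle
            double damageDealt;   // damage inflicted on the opponent
            boolean won;          // whether this robot won the battle
        }

        // Reward incremental progress (moving, landing hits) as well as
        // winning, so that early random robots still get a usable fitness
        // gradient to climb.
        static double fitness(BattleResult r) {
            double score = 0.01 * r.distanceMoved  // small credit just for moving
                         + 1.00 * r.damageDealt;   // larger credit for damage
            if (r.won) {
                score += 100.0;                    // winning still dominates
            }
            return score;
        }
    }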