From: Lincoln Ritter (lritter_at_cs.washington.edu)
Date: Sun Oct 19 2003 - 10:46:25 PDT
Evolving Robot Tank Controllers
Jacob Eisenstein
Reviewed by: Lincoln Ritter
Summary:
This paper explores the application of genetic programming/genetic
algorithm techniques to evolve competitive agent strategies for the
RoboCode simulated combat environment.
Review:
The author states that there have been other attempts to evolve robot
controllers using techniques similar to those described in this paper,
but that these endeavors have not borne any fruit. By comparison, the
results of this study seem to imply that the use of genetic algorithms
is potentially useful. Eisenstein implies that his design of the genome
is part of the reason for the relative success of his research. Although
the author discusses the criticism that modeling behavior at so high a
level as was done in this study can impose human bias on the process, he also
notes that he has been unable to find success stories which employ lower
level modeling. The lesson to be learned here, above and beyond the
particulars of this project, is that knowledge representation at the
proper level/degree of granularity is pivotal to the success or failure of
genetic techniques.
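To make this lesson concrete, here is a minimal sketch of what a "high-level" genome might look like: each gene selects or parameterizes a hand-defined behavior rather than encoding raw instructions. The gene names, ranges, and operators below are illustrative assumptions, not taken from Eisenstein's paper.

```python
import random

# Hypothetical high-level genome for a tank controller: each gene tunes
# a predefined behavior. These names and ranges are illustrative only.
GENE_RANGES = {
    "move_pattern": (0, 3),        # index into predefined movement routines
    "fire_power": (1, 3),          # energy committed to each shot
    "fire_threshold": (0, 100),    # minimum target-lock confidence to fire
    "evade_distance": (50, 400),   # range at which evasion begins
}

def random_genome(rng):
    """Sample one genome: a dict of high-level behavior parameters."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in GENE_RANGES.items()}

def mutate(genome, rng, rate=0.2):
    """Resample each gene with probability `rate`, staying within range."""
    child = dict(genome)
    for name, (lo, hi) in GENE_RANGES.items():
        if rng.random() < rate:
            child[name] = rng.uniform(lo, hi)
    return child

def crossover(a, b, rng):
    """Uniform crossover: each gene is inherited from one parent at random."""
    return {name: (a if rng.random() < 0.5 else b)[name] for name in GENE_RANGES}

rng = random.Random(0)
parent1, parent2 = random_genome(rng), random_genome(rng)
child = mutate(crossover(parent1, parent2, rng), rng)
```

The point of the sketch is the representation choice itself: because every gene already maps onto a sensible behavior, even a random genome produces a plausible (if poor) controller, which is one plausible reading of why the high-level encoding helped.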
Another startling result of this study is that, even if human bias is
encoded in the representation used, the evolved behavior may not be at
all what is desired or expected. The evolution of controllers in this
study that do not fire is quite remarkable: every human-coded system
included firing, so we might reasonably have assumed that tanks are expected to fire.
Perhaps this is a bug. But perhaps it is actually a feature. Though we
may think we know a good solution, we may be deluding ourselves.
Although the results of this study are encouraging, the study is lacking
in an important regard. The resultant robot was evolved using a
particular set of robots. At the conclusion of the evolutionary stage of
robot development, the evolved robot was tested against the same set of
robots that it had trained (evolved) against. As a result, the study does
not address the effectiveness of using evolutionary techniques to develop
a robot controller for use against general opponents.
Another possible drawback of this study is that the controller evolved
only by assessing the effectiveness of the agent's actions as a whole at
the end of each round. While this approach was able to generate a fairly
competitive controller, applications to real world situations may not be
so forgiving. After all, if a tank is blown up in the real world, it does
not get to be reincarnated after evaluating its success.
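The end-of-round scheme described above can be sketched as follows. This is an assumed, simplified stand-in: the `simulate_round` stub replaces an actual RoboCode battle, and the scoring terms are illustrative rather than the paper's actual fitness function.

```python
import random

def simulate_round(genome, rng):
    """Stub battle standing in for a RoboCode round (illustrative only).

    Returns (damage_dealt, survived) for a genome given as a list of
    parameters in [0, 1]; higher average values mean a stronger agent.
    """
    skill = sum(genome) / len(genome)
    damage = skill * rng.uniform(0.5, 1.5)
    survived = rng.random() < skill
    return damage, survived

def episodic_fitness(genome, rng, rounds=10):
    """Score a controller only by aggregate end-of-round outcomes.

    No credit is assigned to individual actions during the round; the
    agent is free to "die" repeatedly while evolution proceeds, which
    is exactly the luxury a real-world system would not have.
    """
    total = 0.0
    for _ in range(rounds):
        damage, survived = simulate_round(genome, rng)
        total += damage + (1.0 if survived else 0.0)
    return total / rounds

weak = [0.1, 0.1, 0.1]
strong = [0.9, 0.9, 0.9]
```

Under this scheme the fitness signal is very coarse, which is precisely the review's concern: the evaluator only sees whole-round outcomes, so nothing penalizes a fatal mistake mid-round as long as enough rounds go well on average.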
The work presented in this paper suggests a number of intriguing
possibilities for future work. For example, investigating why
effective targeting strategies failed to evolve could prove
quite insightful. Eisenstein suggests that an entirely different model of
representation may be necessary to evolve effective targeting. This leads
to the question, is it possible to evolve effective targeting using the
existing model? Is it possible to evolve a robot controller using
only the model used for targeting? In other words, are there inherent
limitations on the effectiveness of a given representation within an
evolutionary framework? Insights gained from investigating this question could lead to
more effective engineering of robot controllers and evolved AI as a whole.
Another interesting idea for future research stemming from this paper is
that of evolved cooperation among multiple agents. Evolving cooperating
teams of agents would not only make for some entertaining battle scenes,
but would also lead to insights into the way that knowledge/intelligence can be
distributed and the possibilities for intelligent behavior to arise
emergently from simpler, "lower" levels.
This archive was generated by hypermail 2.1.6 : Sun Oct 19 2003 - 10:46:26 PDT