(no subject)

From: Sandra B Fan (sbfan_at_cs.washington.edu)
Date: Thu Oct 16 2003 - 20:13:35 PDT


    "Evolving Robot Tank Controllers"
    Jacob Eisenstein

    Using TableRex, a language he designed, together with a subsumption
    architecture, Eisenstein genetically evolves Robocode tank controllers that
    beat human-coded ones in the simpler cases, but not in the most complex
    ones.

    Two most important ideas: the first is that genetic programming will not
    necessarily evolve machines that act the way we want them to, so we must
    be careful about how we define the fitness function. For instance, the raw
    fitness function initially looked only at the total number of points a
    robot gained over a series of battles, but that led to robots that simply
    won one or two big fights instead of trying to win them all. Genetic
    evolution also produced robots that avoided firing, since they could gain
    more points in other ways. Perhaps not firing really is the best way to
    win, and by trying to make the robots fire we are placing our own biases
    on them. So we have to look carefully at what our goals for the robots (or
    whatever we are trying to evolve) are, and whether or not we want to place
    our own preconceptions on them.
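
    To make the point concrete, here is a minimal sketch (in Java, since
    Robocode controllers are Java programs) of how the choice of fitness
    function changes which behavior gets rewarded. The class and method names
    are hypothetical and not from Eisenstein's paper; "battleScores" simply
    stands in for the points a candidate earned in each battle of an
    evaluation series.

        import java.util.List;

        /**
         * Sketch of the fitness-shaping issue: a raw "total points" fitness
         * rewards one big win, while a consistency-based fitness does not.
         * All names here are illustrative placeholders.
         */
        public class FitnessSketch {

            // Raw fitness: total points over all battles. A controller can
            // score well by winning one or two big fights and losing the rest.
            static double rawFitness(List<Double> battleScores) {
                return battleScores.stream().mapToDouble(Double::doubleValue).sum();
            }

            // One possible reshaping: count battles in which the controller
            // scored above a threshold, so a single large win no longer
            // dominates the evaluation.
            static long consistencyFitness(List<Double> battleScores, double winThreshold) {
                return battleScores.stream().filter(s -> s >= winThreshold).count();
            }

            public static void main(String[] args) {
                List<Double> steady = List.of(60.0, 55.0, 58.0, 62.0);
                List<Double> oneBigWin = List.of(250.0, 5.0, 3.0, 4.0);

                // Raw fitness prefers the one-big-win controller...
                System.out.println(rawFitness(steady) + " vs " + rawFitness(oneBigWin));
                // ...while the reshaped fitness prefers the consistent one.
                System.out.println(consistencyFitness(steady, 50.0) + " vs "
                        + consistencyFitness(oneBigWin, 50.0));
            }
        }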

    The second most important idea is that the main reason the more
    complicated experiments could not be performed appears to be a lack of
    resources, both time and hardware, which suggests that this work is
    heading in a promising direction, given that we will eventually be able
    to overcome those constraints.

    One of the flaws is tied to the first idea above. We are assuming that a
    good robotic controller will fire, and so we try to make the evolved
    controllers fire. Maybe we shouldn't.

    One possible open research question is that of co-evolution. Is there a
    way to co-evolve programs without running into the problem Eisenstein did,
    of robots that simply stand around doing nothing? The ability to
    successfully co-evolve programs would make it easier to evolve programs
    when the features of a successful solution are unclear, or to tackle tasks
    for which no pre-made "robots" exist to test against.
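
    One way to frame this question concretely is sketched below, assuming a
    "hall of fame" of past champions, which is one common device for keeping
    co-evolving opponents from settling into a mutual do-nothing equilibrium.
    None of the names come from Eisenstein's system or the Robocode API; the
    battle and mutation functions are dummy stand-ins.

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.Deque;
        import java.util.List;
        import java.util.Random;

        /**
         * Sketch of co-evolution against a bounded archive of past champions.
         * Controller, battleScore(), and mutate() are illustrative
         * placeholders only.
         */
        public class CoevolutionSketch {

            // Stand-in for an evolved controller; a real one would hold a
            // TableRex-style program rather than a single number.
            record Controller(double genome) {}

            static final Random RNG = new Random(0);

            // Hypothetical battle: returns a score for the candidate against
            // the opponent. A real system would run a Robocode battle here.
            static double battleScore(Controller candidate, Controller opponent) {
                return candidate.genome() - 0.5 * opponent.genome() + RNG.nextGaussian();
            }

            // Hypothetical variation operator.
            static Controller mutate(Controller parent) {
                return new Controller(parent.genome() + RNG.nextGaussian());
            }

            public static void main(String[] args) {
                Deque<Controller> hallOfFame = new ArrayDeque<>();
                hallOfFame.add(new Controller(0.0)); // seed opponent
                Controller champion = new Controller(0.0);

                for (int generation = 0; generation < 20; generation++) {
                    // Generate candidates by mutating the current champion.
                    List<Controller> candidates = new ArrayList<>();
                    for (int i = 0; i < 10; i++) {
                        candidates.add(mutate(champion));
                    }

                    // Score each candidate against every archived champion,
                    // so fitness is never earned against a do-nothing partner.
                    champion = candidates.stream()
                            .max(Comparator.comparingDouble((Controller c) ->
                                    hallOfFame.stream()
                                            .mapToDouble(o -> battleScore(c, o))
                                            .average().orElse(0.0)))
                            .orElseThrow();

                    // Archive the new champion and keep the archive bounded.
                    hallOfFame.addLast(champion);
                    if (hallOfFame.size() > 5) {
                        hallOfFame.removeFirst();
                    }
                }
                System.out.println("Final champion genome: " + champion.genome());
            }
        }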

    Another direction for research is finding out how to evolve robots for the
    "multiple adversary, multiple starting positions" condition more quickly.
    This is important because, of course, real-life conditions are not as
    simple and restricted as the ones under which Eisenstein's tanks performed
    well. Eisenstein managed success at one level; now let's see if we can
    take it to another.

