From: Craig M Prince (cmprince@cs.washington.edu)
Date: Wed Dec 01 2004 - 02:21:01 PST
The Evolutionary Origin of Complex Features
by Richard E. Lenski, Charles Ofria, Robert T. Pennock, and Christoph Adami
This paper probes whether digital "organisms" with op-code "DNA" can evolve
higher-level behavior through mutation of the virtual DNA under a
survival-of-the-fittest regime -- namely, what prerequisites are necessary
and how this higher-level behavior develops.
One of the most important ideas presented in this paper is that higher-level
functions were never discovered unless lower-level building blocks had
developed first and those building blocks themselves provided some benefit.
This implies that for genetic evolution algorithms to be successful, the
reward function for the "organisms" must allow for incremental rewards and
cannot simply be all-or-nothing. This could limit the diversity of problems
the approach can work for. There is a parallel between finding such a reward
function and developing a good heuristic for a search algorithm: the
heuristic directs the search, much as the reward function directs genetic
mutation (since poor candidates eventually disappear).
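To make the idea concrete, here is a rough Python sketch (my own
illustration, not the paper's actual parameters) contrasting an incremental
reward function with an all-or-nothing one. The function names and reward
values are made up, though they mirror the paper's scheme of rewarding nine
logic functions with EQU as the complex target:

PARTIAL_REWARDS = {
    "not": 2, "nand": 2,      # simplest building blocks
    "and": 4, "or_not": 4,
    "or": 8, "and_not": 8,
    "nor": 16, "xor": 16,
    "equ": 32,                # the complex target function
}

def incremental_reward(functions_performed):
    # Partial credit: every function the organism performs earns a reward,
    # so selection can climb toward EQU one building block at a time.
    return sum(PARTIAL_REWARDS.get(f, 0) for f in functions_performed)

def all_or_nothing_reward(functions_performed):
    # Contrast case: only the final target is rewarded, leaving no
    # gradient for selection to climb.
    return 32 if "equ" in functions_performed else 0

Under the all-or-nothing version there is nothing for selection to act on
until the full function appears, which matches the paper's finding that EQU
never evolved when only the final function was rewarded.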
There seem to be many parallels between this evolutionary approach and
search. Specifically, it looks like a specialized case of randomized search:
we search through the op-code space, sometimes making random choices and
sometimes making choices guided by a heuristic (the reward), all wrapped in
the framework of being "based on evolution".
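As a sketch of this framing (again my own illustration; the names and
parameters are hypothetical), the whole process reduces to a weighted-random
search loop over op-code space:

import random

OPCODES = list(range(26))   # the paper's instruction set has 26 op-codes;
                            # encoding them as integers here is illustrative

def mutate(genome, rate=0.01):
    # Random choices: each position is replaced by a random op-code
    # with probability `rate`.
    return [random.choice(OPCODES) if random.random() < rate else op
            for op in genome]

def evolve(population, fitness, generations=1000):
    for _ in range(generations):
        # Heuristic-guided choices: fitter genomes are more likely to
        # reproduce, so poor candidates eventually disappear.
        weights = [fitness(g) + 1e-9 for g in population]
        parents = random.choices(population, weights=weights,
                                 k=len(population))
        population = [mutate(p) for p in parents]
    return max(population, key=fitness)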
Another important finding in the paper is that the evolutionary approach
allows "bad" mutations to linger, which can lead to bigger future gains.
This is how "evolution" can escape local optima in the fitness landscape.
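To see why, compare a greedy hill-climber (another sketch of my own, with
hypothetical names) that discards every deleterious mutation on the spot:

def hill_climb(genome, fitness, mutate, steps=1000):
    # Greedy search: any mutation that lowers fitness is rejected
    # immediately, so the search can stall on a local optimum.
    for _ in range(steps):
        candidate = mutate(genome)
        if fitness(candidate) >= fitness(genome):
            genome = candidate
    return genome

By contrast, the evolve() loop above samples parents in proportion to
fitness, so a slightly worse genome still has a nonzero chance of
reproducing -- a "lingering" bad mutation that a later compensating
mutation may turn into a net gain.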
One big issue I have with the paper is that the digital organisms are
exceedingly complex to begin with: there is a built-in mechanism that turns
the abstract op-codes into something that computes, and this conversion from
representation to action is artificial. The issue is similar to that of
discovering features in machine learning. Classically, given a set of
features and their values, I can learn from them and make inferences;
however, these features are assumed to include all the relevant ones. How
do we discover or derive new features?
I think the important research issue still to explore is exactly what the
power of this approach is. Can evolution be used to create code that solves
more general problems? What types of problems does this method work well
for, and when does it fail? The second important issue to examine is how to
create good reward functions. This paper seemed to imply that there are
important nuances in setting up rewards, but it was unclear how this
generalizes to other problems. Can we simply use any good heuristic (perhaps
one derived via relaxation)?