J. Eisenstein. Evolving robot tank controllers. May 2003. Reviewed by Jessica Wilkinson.

summary
-------
Creation of Robocode controllers using genetic programming techniques, and the use of TableRex to facilitate more rapid and efficient evolution.
-------------------------

Eisenstein claims that genetic programming (GP) techniques are well suited to the Robocode simulation environment, primarily because tank controllers must "adapt" to unpredictable adversary behaviors in order to "survive" battles. Unfortunately, his discussion of agent optimization using subsumption-based GP is obscured by constant conjectures about biological realism. False analogies aside, this paper does provide a useful case study for GP, with important contributions regarding problem-space modelling and proposed solutions for the computational complexity inherent in GP applications.

Eisenstein presented some valid improvements that addressed commonly cited shortcomings of GP techniques. For example, his use of additional mutations when all controllers reached the same fitness was a simple way to deal with overly rapid convergence (settling on a "local maximum"). It would have been interesting to see a more precise evaluation of this technique, however, since it is not immediately apparent how population diversity could be significantly improved in the heavily overdominant case. This is especially important given that increased diversity has been shown to improve progress rates, as discussed by Goldberg.

Eisenstein's development of TableRex was an interesting way to restructure the GP model of program trees (a natural way to view tank controllers) as encoded n-bit hypothesis strings, which may then be evolved using less computationally intensive genetic algorithm (GA) operations such as point mutation and crossover.

The biggest problem I saw with this paper was that Eisenstein did not model the problem space in a sufficiently precise way.
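To make the TableRex point above concrete: the lightweight GA operators it enables, point mutation and crossover on fixed-length encodings, can be sketched in a few lines. This is a generic illustration on plain bit strings with assumed parameters (genome length, mutation rate), not Eisenstein's actual TableRex instruction format:

```python
import random

random.seed(0)  # make the sketch reproducible

def point_mutate(genome: list[int], rate: float = 0.01) -> list[int]:
    """Flip each bit independently with probability `rate`."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def one_point_crossover(a: list[int], b: list[int]) -> tuple[list[int], list[int]]:
    """Swap the tails of two equal-length genomes at a random cut point."""
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

# Two toy parents: all-zeros and all-ones, 16 bits each.
parent1 = [0] * 16
parent2 = [1] * 16
child1, child2 = one_point_crossover(parent1, parent2)
mutant = point_mutate(child1, rate=0.1)
```

Both operators run in time linear in genome length, which is what makes them cheap relative to tree manipulation; note also that one-point crossover swaps whole contiguous segments, a property that becomes relevant to the destructiveness criticism raised below.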
He asserted that survival was a "clear and objective fitness metric"; however, even in a simulated environment, survival is in fact a complex, compound objective, and it would have been more useful to break it down into several smaller sub-problems or goals. As discussed by DeJong and others, it is a common mistake to apply genetic algorithms best suited to narrow-focus optimization to long-term problems, such as adaptation in an unpredictable environment. I was also extremely disappointed by the lack of precise outcome measures, and I do not understand why Eisenstein would cite Koza's and Goldberg's papers on "tournament fitness" and then claim that battling agents one-on-one was an open research question.

In future research, it would be interesting to combine strategies such as cooperative teams with a well-defined problem segregated into smaller sub-problems. Sub-problems requiring less adaptation could be split off as tasks for "drone" agents while more complex solutions were being evolved. This might decrease computational complexity while preserving those adaptations that proved beneficial; continuous evolution as described by Eisenstein can destroy valid functional groupings through large destructive operations such as crossover.
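On the tournament-fitness point: scoring controllers by round-robin one-on-one matches is straightforward to express. The sketch below is a generic illustration under assumed names; the `battle` callable stands in for a full Robocode match between two evolved controllers and is not Eisenstein's code:

```python
import itertools

def tournament_fitness(population, battle):
    """Score each individual by wins in round-robin one-on-one battles.

    `battle(a, b)` must return the winner of a single match. Population
    members are assumed hashable and distinct; losers keep a score of 0.
    """
    wins = {individual: 0 for individual in population}
    for a, b in itertools.combinations(population, 2):
        wins[battle(a, b)] += 1
    return wins

# Toy stand-in battle: the numerically larger "controller" always wins.
pop = [3, 1, 4, 5]
scores = tournament_fitness(pop, battle=lambda a, b: max(a, b))
```

Each of the n*(n-1)/2 pairings costs one simulated battle, which illustrates why fitness evaluation, rather than the genetic operators themselves, tends to dominate the computational budget in this kind of coevolutionary setup.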