SATplan

From: Nan Li (annli_at_cs.washington.edu)
Date: Tue Apr 08 2003 - 11:58:34 PDT

    Automatic SAT-Compilation of Planning Problems, by Ernst, M.;
    Millstein, T.; and Weld, D.

    The authors present a framework for comparing various encodings
    asymptotically, and they build an automatic compiler that generates
    axioms from STRIPS operators to support their theory.

    One important part of the paper is the parameterization of the
    encoding space along two major dimensions: action representation and
    frame representation. The former dimension has four levels (regular,
    simply split, overloaded split, and bitwise), ordered along an axis
    of variable count; the latter offers two approaches to constraining
    unaffected fluents when an action occurs (classical versus
    explanatory frame axioms). This multi-dimensional view is a simple
    yet smart way to organize the widely varying existing encodings
    clearly, it naturally suggests novel encodings, and it gives any
    automatic compiler a well-defined foundation for generating them.
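
    To make the action dimension concrete, here is a minimal sketch of
    the four levels, assuming hypothetical variable names (the paper's
    compiler uses its own naming scheme):

        # A sketch of the four action representations, applied to the
        # ground action move(a, b, c) at time step t.  Variable names
        # here are hypothetical, not the compiler's actual syntax.
        from math import ceil, log2

        def regular(action, args, t):
            # Regular: one variable per fully-ground action instance.
            return [f"{action}_{'_'.join(args)}_{t}"]

        def simply_split(action, args, t):
            # Simply split: a conjunction of one variable per argument
            # position, private to this action schema.
            return [f"{action}_arg{i}_{a}_{t}"
                    for i, a in enumerate(args, 1)]

        def overloaded_split(action, args, t):
            # Overloaded split: argument variables are shared across
            # schemata; one extra variable names the schema itself.
            return ([f"act_{action}_{t}"] +
                    [f"arg{i}_{a}_{t}" for i, a in enumerate(args, 1)])

        def bitwise(index, n_ground_actions, t):
            # Bitwise: which action occurs at t is log-encoded in
            # ceil(log2 n) bits; negative literals are marked with "~".
            n_bits = max(1, ceil(log2(n_ground_actions)))
            return [("" if (index >> j) & 1 else "~") + f"bit{j}_{t}"
                    for j in range(n_bits)]

        print(regular("move", ["a", "b", "c"], 3))       # 1 variable
        print(simply_split("move", ["a", "b", "c"], 3))  # 3 variables
        print(bitwise(5, 1000, 3))                       # 10 literals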

    Of course, the automatically generated axioms need to be optimized
    for the sake of computation. This is the second important part of the
    paper. The main idea is factoring: use as few conjuncts as possible
    to represent the constraints and effects of an action. The asymptotic
    analysis and the experimental results show that these optimization
    techniques help a great deal at the two middle levels of the action
    dimension, the "splitting" representations, but hardly at all at the
    two ends. This highlights, as the authors intend, the tradeoff
    between the numbers of variables and clauses.
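
    To illustrate why factoring pays off for split encodings, here is a
    rough sketch: the precondition on(x, y) of a hypothetical operator
    move(x, y, z) never mentions z, so the factored axiom only needs to
    quantify over (x, y) pairs. The names are illustrative, not the
    compiler's output:

        objects = [f"o{i}" for i in range(10)]  # 10 domain objects

        # Unfactored: one clause per fully-ground instance (x, y, z),
        # even though z is irrelevant to this precondition.
        unfactored = [(f"~move_arg1_{x}", f"~move_arg2_{y}",
                       f"~move_arg3_{z}", f"on_{x}_{y}")
                      for x in objects for y in objects for z in objects]

        # Factored: drop the z literal and quantify over (x, y) only.
        factored = [(f"~move_arg1_{x}", f"~move_arg2_{y}", f"on_{x}_{y}")
                    for x in objects for y in objects]

        print(len(unfactored), len(factored))  # 1000 vs 100: n^3 -> n^2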

    My question comes from this tradeoff. It seems obvious to me that
    there must be some tradeoff between the two sizes; after all, the
    total amount of information to be represented is fixed. The bitwise
    approach looks as if it was proposed only to complete the spectrum in
    this paper; it is very unlikely that anyone would actually use this
    representation. In fact, if we re-order the four levels by how
    tightly structured the representation is, the pattern is that the
    more loosely structured the encoding, the more opportunity there is
    for optimization.
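
    A back-of-the-envelope count makes the tradeoff concrete. For a
    single action schema of arity 3 over 10 objects (1000 ground
    actions), the per-time-step variable counts would be roughly:

        from math import ceil, log2

        n_objects, arity = 10, 3
        n_ground = n_objects ** arity           # 1000 ground actions

        regular    = n_ground                   # 1000: one per instance
        simple     = arity * n_objects          #   30: (position, object)
        overloaded = 1 + arity * n_objects      #   31: schema var + args
        bitwise    = ceil(log2(n_ground))       #   10: log-encoded choice

        print(regular, simple, overloaded, bitwise)

    The saving in variables is not free: a bitwise clause must mention
    all ten bits just to name a single action, so clause sizes grow as
    variable counts shrink.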

    One other thing I am not clear about from the paper is the
    "state-based" encoding. The authors mention it in a single sentence
    and then omit it entirely. Can this approach fit into the framework?
    To my understanding, it tries to use only fluents to represent both
    actions and other domain-specific knowledge. Surely it can be
    obtained "by resolving away the actions in the encodings", as the
    authors say, but if the whole action dimension is resolved away,
    doesn't that change everything else? I don't know the answer yet.
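
    For what "resolving away the actions" could mean, here is a toy
    illustration using plain propositional resolution (the fluent and
    action names are made up, and this is not necessarily the paper's
    exact procedure). Combining an action's precondition axiom with an
    explanatory frame axiom on the action variable leaves a constraint
    over state fluents only:

        def resolve(c1, c2, var):
            # Resolve clause c1 (containing var) with clause c2
            # (containing the negation ~var).
            assert var in c1 and "~" + var in c2
            return (c1 - {var}) | (c2 - {"~" + var})

        # ~a_t v pre_t: action a at time t requires its precondition.
        precond = frozenset({"~a_t", "pre_t"})
        # ~f_t v f_t1 v a_t: if fluent f became false between t and
        # t+1, its (here unique) deleter a must have occurred.
        frame = frozenset({"~f_t", "f_t1", "a_t"})

        # The resolvent {~f_t, f_t1, pre_t} mentions no action variable:
        # a pure state-to-state constraint.
        print(resolve(frame, precond, "a_t"))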

