From: Tal Shaked (tshaked_at_u.washington.edu)
Date: Mon Apr 07 2003 - 22:51:15 PDT
Automatic SAT-Compilation of Planning Problems – M. Ernst, T. Millstein, D.
Weld
This paper extends previous work on compiling planning problems into
satisfiability problems by considering eight different encodings plus
several additional enhancements. Experiments compare the different
encodings, and the authors analyze the sizes and expected efficiency of the
resulting compilations.
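To make the basic idea concrete, here is a minimal sketch (my own, not the paper's) of what such a compilation looks like: a hypothetical one-action, one-step planning problem unrolled into CNF clauses and checked by brute-force enumeration. The fluent and action names are illustrative assumptions, not taken from the paper's domains.

```python
from itertools import product

# Toy domain (illustrative, not from the paper): one action "act" with
# precondition p and effect q, unrolled for a single time step.
# Variables: p0, q0 (state at t=0), a0 (act executed at t=0), p1, q1 (t=1).
VARS = ["p0", "q0", "a0", "p1", "q1"]

# CNF clauses; each clause is a list of (variable, polarity) literals.
CLAUSES = [
    [("p0", True)],                 # initial state: p holds
    [("q0", False)],                # initial state: q does not hold
    [("q1", True)],                 # goal: q holds at t=1
    [("a0", False), ("p0", True)],  # a0 -> p0 (precondition axiom)
    [("a0", False), ("q1", True)],  # a0 -> q1 (effect axiom)
    # explanatory frame axiom: if q changed from false to true, some
    # action that explains the change (here only a0) must have occurred
    [("q0", True), ("q1", False), ("a0", True)],
    # classical-style frame axiom: act does not delete p (a0 & p0 -> p1)
    [("a0", False), ("p0", False), ("p1", True)],
]

def satisfying_assignments():
    """Brute-force all 2^5 assignments and yield the models."""
    for bits in product([False, True], repeat=len(VARS)):
        assign = dict(zip(VARS, bits))
        if all(any(assign[v] == pol for v, pol in clause)
               for clause in CLAUSES):
            yield assign

models = list(satisfying_assignments())
# Every model of this encoding executes the action to reach the goal,
# so a SAT solver's model can be decoded directly into a plan.
assert models and all(m["a0"] for m in models)
```

A real compiler emits these clause schemas for every action, fluent, and time step, then hands the CNF to a SAT solver; the point here is only to show the shape of the encoding.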
The paper does a good job of presenting many ideas on how to simplify the
encodings, as well as the tradeoffs among the numbers of variables,
literals, and clauses, and how much simplification is likely under each
representation. It is difficult to figure out exactly what is going on and
why certain results occur, so it is nice that a fair amount of analysis
accompanies the graphs. Of course, with so many parameters to change, all
affecting each other in somewhat unpredictable ways, it is hard to say
anything with certainty.
Some of the most important observations are that explanatory frame axioms
tend to do better than classical frame axioms; that parallelism is a big
advantage (and worth adopting conflict exclusion rather than complete
exclusion); and that although factoring can sometimes yield no gain, in
practice it tends to be very helpful. Type inference was also useful, which
suggests that further domain-specific axioms, automatically generated, might
help as well.
One problem I had when reading this paper is that it presents so many ideas
and goes back and forth between them, making it easy to get lost and
confused. Perhaps a little more familiarity with the basic SAT-compilation
techniques would help, or maybe the ideas could be organized to show the
contrasts between them more clearly. The graphs, although informative, were
also hard to interpret, and patterns were difficult to find because so much
data was plotted. The text helped summarize the important results, but
otherwise it was hard to look just at the graphs and draw conclusions
quickly.
Another question is how difficult the planning domains in the experiments
were, and how they differed from each other; this was not made clear. It
might have been interesting to see how to artificially construct the best
and worst possible domains for the different encodings. There was something
mentioned about fridge, but it was not clear what that meant.
Overall, this paper illustrates that there are many ways to compile planning
problems, and that one has to choose wisely which techniques to focus on and
which are most likely to be beneficial. As the paper suggests, exploiting
redundancy through factoring and automatically finding domain-specific
axioms (such as those from type inference) are probably the best routes to
future progress. Although the paper also suggests a number of hybrids that
trade off variables against compactness, the results tended to show that the
"smarter" simplifications were more significant.