

CSE 573 - Artificial Intelligence - Autumn 2004
Mini Project 1 - Satisfiability
In this project you should experiment with (at least) two
different satisfiability solvers. A natural direction would be to compare a
systematic algorithm (such as DPLL) and a stochastic one. Alternatively, you
could pick one algorithmic family and try several variations, heuristics, data
structures, or bookkeeping strategies. In any case, you should study them
experimentally, and try to design and test your own improved solver. Below, I
describe one way you might proceed, but feel free to try something different.
Regardless of your approach, however, I would like you to abide by the file
formats described below.
- Implement:
- Algorithm 1. (E.g. the DPLL systematic satisfiability solver; a minimal sketch appears after this list.)
- Algorithm 2. (E.g. the WALKSAT stochastic satisfiability solver; a minimal sketch also appears after this list.)
- A generator of satisfiability problems in CNF (conjunctive normal form).
This generator should take as input the desired number of variables,
clauses, and variables per clause, plus any other parameters you decide to
allow, and output a random CNF formula. Successive calls to the generator
should produce different random CNFs. (One possible generator is sketched
after this list.)
- Study and compare the two solvers
empirically. Here you might characterize the hard and easy (types of)
problems for each. Identify the similarities and differences between the
problems that are hard for DPLL and those that are hard for WALKSAT.
"Hard" and "easy" should be measured in terms of how
long it takes to produce an answer, be it positive or negative (and of the
percentage of problems for which the time bounds you set were
exceeded). In other words, the fundamental dependent variable in
your empirical study will be the time to solution, and the independent
variables will be the parameters you allow in the generator and relations
among them (e.g., the ratio of clauses to variables). Note that the
algorithms may require significant parameter tuning to perform well, and
that this tuning is a significant part of the empirical work. Your writeup
(see below) should discuss this process and what you learned from it.
- Based on your interpretation of the
experimental results and your own ideas, propose and implement an improved
satisfiability solver. Of course, the goal is to produce a solver that: a)
outperforms both DPLL and WALKSAT in some identifiable subset of the
"hard" satisfiability problems; and b) is significantly
different from any solver previously proposed in the literature. However,
it's not crucial to satisfy both of these constraints. And if your
experimental analysis is thorough enough, you could do a nice project by
simply looking at a previously reported approach in a more informative
manner (i.e., your experimental study should add to our understanding of
the algorithm). If you do aim for a novel solution and it turns out to
underperform DPLL and WALKSAT, then your report should have a sensible
rationale for your method, and propose an empirically-supported
explanation of why it didn't work. A novel solver could be a refinement of
DPLL, a refinement of WALKSAT, a combination of elements of both, or a
new approach of your own design. Improvements could be either
"internal" (e.g., optimizations to the algorithm that make it
run faster) or "external" (e.g., using additional sources of
information that allow you to solve the problem better). Developing a new
solver that outperforms DPLL and WALKSAT will likely involve several
iterations of design, experimentation, and interpretation of results. Below
you will find a short reading list that provides some background on
the current state of satisfiability research, guidance on how to conduct
your experiments, and perhaps some inspiration.
- See the main project 1 page for a
description of what to turn in.
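To make the algorithm bullets above concrete, here is one possible (and deliberately unoptimized) sketch of a DPLL-style solver in Python. Nothing about it is required; the clause representation matches the file format given at the bottom of this page (each clause is a list of signed integers), and the branching rule is the naive "first variable" choice, which you would almost certainly want to replace with something smarter.

def simplify(clauses, var, value):
    # Drop clauses satisfied by the assignment var=value and delete the
    # now-false occurrences of the opposite literal from the rest.
    lit = var if value else -var
    result = []
    for clause in clauses:
        if lit in clause:
            continue                          # clause is satisfied
        result.append([l for l in clause if l != -lit])
    return result

def unit_propagate(clauses, assignment):
    # Repeatedly assign variables that are forced by unit clauses.
    while True:
        if any(len(c) == 0 for c in clauses):
            return clauses, assignment, False     # empty clause: conflict
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            return clauses, assignment, True
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        clauses = simplify(clauses, abs(lit), lit > 0)

def dpll(clauses, assignment=None):
    # Minimal DPLL: unit propagation plus chronological backtracking on a
    # split variable.  Returns a {variable: bool} dict or None.  Variables
    # left unconstrained by the search are simply absent from the dict.
    if assignment is None:
        assignment = {}
    clauses, assignment, ok = unit_propagate(clauses, dict(assignment))
    if not ok:
        return None
    if not clauses:
        return assignment
    var = abs(clauses[0][0])                  # naive branching heuristic
    for value in (True, False):
        result = dpll(simplify(clauses, var, value), {**assignment, var: value})
        if result is not None:
            return result
    return None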
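Similarly, a bare-bones WALKSAT-style local search might look like the sketch below. The noise probability p, the restart and flip limits, and the explicit break count are all illustrative choices rather than requirements, and a real implementation would maintain incremental data structures instead of rescanning every clause on every flip.

import random

def satisfied(lit, assignment):
    # A positive literal is true when its variable is True; a negative one when False.
    return assignment[abs(lit)] == (lit > 0)

def break_count(clauses, assignment, var):
    # Number of clauses satisfied now that become unsatisfied if var is flipped
    # (computed by brute force here; a real solver keeps this incrementally).
    before = [any(satisfied(l, assignment) for l in c) for c in clauses]
    assignment[var] = not assignment[var]
    after = [any(satisfied(l, assignment) for l in c) for c in clauses]
    assignment[var] = not assignment[var]
    return sum(1 for b, a in zip(before, after) if b and not a)

def walksat(clauses, num_vars, max_restarts=10, max_flips=10000, p=0.5):
    # Returns a {variable: bool} dict if a satisfying assignment is found, else None.
    for _ in range(max_restarts):
        assignment = {v: random.random() < 0.5 for v in range(1, num_vars + 1)}
        for _ in range(max_flips):
            unsat = [c for c in clauses
                     if not any(satisfied(l, assignment) for l in c)]
            if not unsat:
                return assignment
            clause = random.choice(unsat)
            if random.random() < p:
                var = abs(random.choice(clause))          # random-walk move
            else:
                var = min((abs(l) for l in clause),       # greedy move
                          key=lambda v: break_count(clauses, assignment, v))
            assignment[var] = not assignment[var]
    return None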
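Finally, one way the generator could work (again, only a sketch): sample each clause's variables without replacement, negate each with probability 1/2, and write the standard file format described at the bottom of this page. The script name generate.py in the usage comment is just a placeholder.

import random
import sys

def generate_cnf(num_vars, num_clauses, clause_len, out=sys.stdout):
    # Emit the header (numvars, numclauses, clauselength) followed by the
    # clauses, with spaces around every literal and parenthesis.
    print(num_vars, file=out)
    print(num_clauses, file=out)
    print(clause_len, file=out)
    for _ in range(num_clauses):
        variables = random.sample(range(1, num_vars + 1), clause_len)
        literals = [v if random.random() < 0.5 else -v for v in variables]
        print("( " + " ".join(str(l) for l in literals) + " )", file=out)

if __name__ == "__main__":
    # e.g.:  python generate.py 4 5 3 > problem.wff
    num_vars, num_clauses, clause_len = (int(a) for a in sys.argv[1:4])
    generate_cnf(num_vars, num_clauses, clause_len)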
Recommended reading:
- Stephen A. Cook and David G.
Mitchell, Finding Hard Instances of the Satisfiability Problem: A Survey.
In Satisfiability Problem: Theory and Applications. Du, Gu and Pardalos (Eds).
DIMACS Series in Discrete Mathematics and Theoretical Computer Science,
Volume 35.
[http://www.cs.utoronto.ca/~mitchell/papers/cook-mitchell.ps]
- Joao P. Marques-Silva, An Overview
of Backtrack Search Satisfiability Algorithms. In Proc. Fifth
International Symposium on Artificial Intelligence and Mathematics,
January 1998.
[http://www.cs.berkeley.edu/~russell/classes/cs289/f04/readings/Marques-Silva:1998.pdf]
- David McAllester, Bart Selman, and
Henry Kautz, Evidence for Invariants in Local Search. In Proc. AAAI-97, Providence, RI, 1997.
[http://www.cs.cornell.edu/home/selman/papers-ftp/97.aaai.invariants.ps]
- Bart Selman, Henry Kautz, and David
McAllester, Ten Challenges in Propositional Reasoning and Search. In Proc.
IJCAI-97, Nagoya, Japan, 1997.
[http://www.research.att.com/~kautz/challenge/challenge.ps].
Or better yet, see the updated version.
- David S. Johnson, A Theoretician's
Guide to the Experimental Analysis of Algorithms. Technical report,
AT&T Research.
[http://www.research.att.com/~dsj/papers/exper.ps]
- Yongshao Ruan, Henry Kautz, and
Eric Horvitz, The Backdoor Key: A Path to Understanding Problem Hardness.
In Proc. AAAI-04.
[This is just a sample of the recent
satisfiability research here at UW.]
Much of the recent research on satisfiability has appeared in AAAI, the National
Conference on Artificial Intelligence. The proceedings are available in the
library, and many of the papers can be found online.
Standard file formats to be used:
Your sat-solver should accept files of the following format:
numvars
numclauses
clauselength
clause1
clause2
...
clausen
Where
numvars, numclauses and clauselength are integers and the format for a clause
is a parenthesized list of literals.
( lit1 lit2 ... litm )
Literals will be represented by integers, with negated literals represented by negative
numbers. For example, the 5-clause, 4-variable 3-CNF formula
( A1 v ~A2 v A3 ) ^ ( ~A3 v ~A1 v A4 ) ^ ( A2 v A1 v ~A4 )
^ ( ~A2 v A3 v ~A4 ) ^ ( ~A1 v A2 v A3 )
would be represented by the file:
4
5
3
( 1 -2 3 )
( -3 -1 4 )
( 2 1 -4 )
( -2 3 -4 )
( -1 2 3 )
The program should have, somewhere in its output, a line that reads
"Solution:" followed by a line containing either the string "Fail", if no
solution was found, or a parenthesized list of variable assignments sorted by
variable number. So for the input above, the program might print:
Solution:
( 1 2 -3 -4 )
Meaning that the variable assignment:
A1 = True, A2 = True, A3 = False, A4 = False
was found as a solution to this problem.
Note the spaces before and after each literal in the above I/O formats.
Your program should be able to read the test file from the standard input
stream. Any parameters your program uses should be set using command-line
options. For example, if you want to be able to control the number of
restarts and flips, you should create command-line options for them:
walksat -r10000 -f1000 < problem.wff > problem.out
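If it helps, the reading and writing side of this contract might look like the following Python sketch. It relies on the guaranteed spaces around literals and parentheses, and the solver call at the bottom is only a placeholder.

import sys

def read_cnf(stream):
    # Parse the standard input format: three header integers, then clauses whose
    # tokens are space-separated (so '(' and ')' arrive as their own tokens).
    tokens = stream.read().split()
    num_vars, num_clauses, clause_len = (int(t) for t in tokens[:3])
    literals = [int(t) for t in tokens[3:] if t not in ("(", ")")]
    # Every clause has exactly clause_len literals, per the header.
    clauses = [literals[i:i + clause_len]
               for i in range(0, len(literals), clause_len)]
    assert len(clauses) == num_clauses
    return num_vars, clauses

def print_solution(assignment, num_vars):
    # Print the required "Solution:" block; assignment is {var: bool} or None.
    print("Solution:")
    if assignment is None:
        print("Fail")
    else:
        # Variables the solver left unconstrained default to False here.
        lits = [v if assignment.get(v, False) else -v
                for v in range(1, num_vars + 1)]
        print("( " + " ".join(str(l) for l in lits) + " )")

if __name__ == "__main__":
    num_vars, clauses = read_cnf(sys.stdin)
    assignment = None      # placeholder: call your solver here
    print_solution(assignment, num_vars)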
The generate program should take at least three inputs: numvars, numclauses and
clauselen. Any other inputs you'd like to specify should be clearly documented.
Code provided: To help with the experimentation phase, we are
providing some infrastructure, but note that this has not been checked and may
suffer from bit-rot. No warranty provided. The file run-exp.pl
should do multiple runs of your program over test files. See the file
itself for a description of how to use it. The file check-soln.pl should check that the solution
returned by your solver is valid. They're both Perl scripts, so after
saving them on a Unix system, you'll have to make them executable:
chmod 700 run-exp.pl check-soln.pl
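If the scripts have in fact bit-rotted, a minimal replacement for the checking step is easy to write yourself. The sketch below (assuming the file formats above) just verifies that every clause contains at least one literal made true by the reported assignment:

def check_solution(clauses, solution_literals):
    # solution_literals is the parenthesized list from the "Solution:" line,
    # e.g. [1, 2, -3, -4]; a clause is satisfied if it shares a literal with it.
    true_literals = set(solution_literals)
    return all(any(lit in true_literals for lit in clause) for clause in clauses)

# With the example formula above:
example_clauses = [[1, -2, 3], [-3, -1, 4], [2, 1, -4], [-2, 3, -4], [-1, 2, 3]]
print(check_solution(example_clauses, [1, 2, -3, -4]))   # prints True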