First Project: Satisfiability Solvers
Due Date: Wednesday, November 10, 1999
In this project you will implement a systematic satisfiability solver and
a stochastic one, study them experimentally, and design and test your
own improved solver.
What to do:
Form into groups of two. Email Adam (firstname.lastname@example.org) with the
names of the people in your group, your choice of language, and your email addresses.
Implement the following:
The DPLL systematic satisfiability solver.
The WALKSAT stochastic satisfiability solver.
A generator of satisfiability problems in CNF (conjunctive normal form).
This generator takes as input the desired number of variables, clauses, and
variables per clause, plus any other parameters you decide to allow, and
outputs a random CNF formula. Successive calls to the generator should
produce different random CNFs.
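A minimal generator along these lines might look like the following C++ sketch; the function name and the choice to keep a clause's variables distinct are ours, not requirements:

```cpp
#include <random>
#include <set>
#include <vector>

// Sketch of a random k-CNF generator: numclauses clauses, each over
// clauselen distinct variables drawn from 1..numvars, each negated with
// probability 1/2. Seed the rng differently on each program run (e.g.
// from std::random_device) so successive calls give different formulas.
std::vector<std::vector<int>> generate_cnf(int numvars, int numclauses,
                                           int clauselen, std::mt19937 &rng) {
    std::uniform_int_distribution<int> pick_var(1, numvars);
    std::uniform_int_distribution<int> pick_sign(0, 1);
    std::vector<std::vector<int>> formula;
    for (int c = 0; c < numclauses; ++c) {
        std::set<int> vars;  // a set keeps the clause's variables distinct
        while ((int)vars.size() < clauselen) vars.insert(pick_var(rng));
        std::vector<int> clause;
        for (int v : vars) clause.push_back(pick_sign(rng) ? v : -v);
        formula.push_back(clause);
    }
    return formula;
}
```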
Study and compare the two solvers empirically.
Characterize the hard and easy (types of) problems for each. Identify
the similarities and differences between the problems that are hard for
DPLL and those that are hard for WALKSAT. "Hard" and "easy" should be measured
in terms of how long it takes to produce an answer, be it positive or negative
(and of the percentage of problems for which the time bounds you set were
exceeded). In other words, the fundamental dependent variable in
your empirical study will be the time to solution, and the independent
variables will be the parameters you allow in the generator and relations
among them (e.g., the ratio of clauses to variables). Note that the algorithms
may require significant parameter tuning to perform well, and that this
tuning is a significant part of the empirical work. Your writeup (see below)
should discuss this process and what you learned from it.
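Since time to solution is the fundamental dependent variable, you will want a uniform way to time runs. One illustrative wrapper (the name and the use of a generic callable standing in for your solver entry point are our own):

```cpp
#include <chrono>
#include <functional>

// Illustrative timing wrapper for the empirical study: measures wall-clock
// time to solution for whatever solver is passed in. The solve callable is
// a placeholder for your DPLL or WALKSAT entry point; it should return
// true/false for satisfiable/no solution found.
double seconds_to_solution(const std::function<bool()> &solve, bool &answer) {
    auto start = std::chrono::steady_clock::now();
    answer = solve();
    std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;
    return elapsed.count();
}
```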
Based on your interpretation of the experimental results and your own
ideas, propose and implement a new satisfiability solver. The goal is
to produce a solver that: a) outperforms both DPLL and WALKSAT in some
identifiable subset of the "hard" satisfiability problems; and b) is
significantly different from any solver previously proposed in the
literature. However, you may still obtain a passing grade on the
project if one of these two constraints is not met. Specifically, you
could implement a novel solution which doesn't turn out to outperform
DPLL and WALKSAT, or you could implement an extension that is largely
based on one in the literature. In the first case, you should have a
sensible rationale for your method, and propose an
empirically-supported explanation of why it didn't work. In the second
case, your experimental study should add to our understanding of the
algorithm. If your solver does outperform DPLL and WALKSAT, you should
provide empirical evidence of it. Your solver can be a refinement of
DPLL, a refinement of WALKSAT, combine elements of both, or
incorporate a new approach of your own design. Improvements could be
either "internal" (e.g., optimizations to the algorithm that make it
run faster) or "external" (e.g., using additional sources of
information that allow you to solve the problem better). Developing a
new solver that outperforms DPLL and WALKSAT will likely involve
several iterations of design, experimentation and interpretation of
results. Below you will find a reading list that will give you some
background on the current state of satisfiability research and
guidance on how to conduct your experiments. You can also use this
list as a source of inspiration for ideas, and/or to check that your
ideas are new; and you can use it as a source of further pointers to
the literature for these purposes.
What to turn in:
We may ask you to do a demo / oral discussion of the project.
The code you wrote: DPLL, WALKSAT, your own algorithm, problem
generator(s), and any other code you used to run experiments. The code
should be clear and reasonably documented (comments, brief description
of architecture / guide to the code, and brief user
manual). Also include instructions for building the executables
and any command line options the grading program should use.
A description of which parts of the project were done by each of the two group members.
An article describing your proposals, experiments and results. This article
should be written as a research article for submission to a conference.
It should have a maximum of 20 letter-sized pages in 12pt font with 1"
margins, including all tables and figures, but excluding references. The
project will be graded according to the AAAI review
form. Research articles typically have sections describing: the problem
they address and its motivation; the new solution(s) proposed, and the
rationale for them; empirical and/or theoretical evidence for their superiority
to previous approaches; a discussion of relations between the new proposals
and previous research, including a candid description of the new approach's
limitations and disadvantages; and directions for future research. Citations
made in the body of the paper are collected in a list of references at
the end. If the algorithm(s) you proposed didn't outperform DPLL and WALKSAT,
you can propose (and possibly test) your explanation(s) in the empirical
and/or discussion sections.
Acceptable languages for the project are: LISP, C/C++, and Java. Other languages may be allowed by special request.
Much of the recent research on satisfiability has appeared in AAAI, the
National Conference on Artificial Intelligence. The proceedings are available
in the library, and many of the papers can be found online.
Stephen A. Cook and David G. Mitchell, Finding Hard Instances of the
Satisfiability Problem: A Survey. In Satisfiability Problem: Theory
and Applications. Du, Gu and Pardalos (Eds). DIMACS Series in
Discrete Mathematics and Theoretical Computer Science, Volume 35.
Joao P. Marques-Silva, An Overview of Backtrack Search Satisfiability
Algorithms. In Proc. Fifth International Symposium on Artificial
Intelligence and Mathematics, January 1998.
David McAllester, Bart Selman, and Henry Kautz, Evidence for Invariants
in Local Search. In Proc. AAAI-97, Providence, RI, 1997.
Bart Selman, Henry Kautz, and David McAllester, Ten Challenges in Propositional
Reasoning and Search. In Proc. IJCAI-97, Nagoya, Japan, 1997.
David S. Johnson, A Theoretician's Guide to the Experimental Analysis of
Algorithms. Technical report, AT&T Research.
Standard file formats to be used:
Your sat-solver should accept files of the following format, where numvars,
numclauses and clauselength are integers and the format for a clause is a
parenthesized list of literals:
( lit1 lit2 ... litm )
Literals will be represented by integers, with negated literals
represented by negative numbers. For example, the 5-clause,
4-variable 3-CNF formula
( A1 v ~A2 v A3 ) ^ ( ~A3 v ~A1 v A4 ) ^ ( A2 v A1 v ~A4 )
^ ( ~A2 v A3 v ~A4 ) ^ ( ~A1 v A2 v A3 )
would be represented by the file:
( 1 -2 3 )
( -3 -1 4 )
( 2 1 -4 )
( -2 3 -4 )
( -1 2 3 )
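A reader for this clause format can be sketched as follows. This is a minimal sketch that assumes one clause per line, as in the example above; a real reader should validate its input:

```cpp
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Read clauses in the assignment's file format: each line is a
// parenthesized list of integer literals, negative meaning negated.
// Assumes one clause per line; no error checking.
std::vector<std::vector<int>> read_cnf(std::istream &in) {
    std::vector<std::vector<int>> formula;
    std::string line;
    while (std::getline(in, line)) {
        // Blank out the parentheses, then pull out the integer literals.
        for (char &ch : line)
            if (ch == '(' || ch == ')') ch = ' ';
        std::istringstream tokens(line);
        std::vector<int> clause;
        int lit;
        while (tokens >> lit) clause.push_back(lit);
        if (!clause.empty()) formula.push_back(clause);
    }
    return formula;
}
```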
The program should have, somewhere in its output, a line that reads "Solution:"
followed by a line with the string "Fail" if no solution was found or a
parenthesized list of variable assignments sorted by variable number.
So for the input above, the program might print:
Solution:
( 1 2 -3 -4 )
Meaning that the variable assignment:
A1 = True, A2 = True, A3 = False, A4 = False
was found as a solution to this problem.
Note the spaces before and after each literal in the above i/o formats.
The grading program will require that you follow the correct format exactly.
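The required output block can be produced with a small helper along these lines; representing "no solution found" by an empty assignment is our own convention, not part of the spec:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Format the required output: a "Solution:" line, then either "Fail" or
// the assignment as a parenthesized, space-separated list of signed
// variable numbers (assumed already sorted by variable number).
// An empty assignment here stands for "no solution found".
std::string format_solution(const std::vector<int> &assignment) {
    std::ostringstream out;
    out << "Solution:\n";
    if (assignment.empty()) {
        out << "Fail\n";
        return out.str();
    }
    out << "(";
    for (int lit : assignment) out << " " << lit;  // space before each literal
    out << " )\n";
    return out.str();
}
```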
Your program should be able to read the test file from the standard
input stream. Any parameters your program uses should be set using
command line options. For example, if you want to be able to
control the number of restarts and flips, you should support a command line like:
walksat -r10000 -f1000 < problem.wff > problem.out
The generate program should take at least three inputs: numvars,
numclauses and clauselen. Any other inputs you'd like to specify
should be clearly documented.
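Option handling of this kind can be sketched with POSIX getopt; the -r/-f names match the walksat example above, but the struct, function name, and defaults below are illustrative choices of ours:

```cpp
#include <cstdlib>
#include <unistd.h>

// Hypothetical option parsing for a command line such as
//   walksat -r10000 -f1000 < problem.wff > problem.out
struct Options {
    long restarts = 10;  // illustrative default, not a required value
    long flips = 1000;   // illustrative default, not a required value
};

Options parse_options(int argc, char **argv) {
    Options o;
    optind = 1;  // reset getopt state so the function can be called repeatedly
    int opt;
    while ((opt = getopt(argc, argv, "r:f:")) != -1) {
        if (opt == 'r') o.restarts = std::strtol(optarg, nullptr, 10);
        else if (opt == 'f') o.flips = std::strtol(optarg, nullptr, 10);
    }
    return o;
}
```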
To help with the experimentation phase, we are providing some infrastructure.
The file run-exp.pl will do multiple runs of your
program over test files. See the file itself for a description of
how to use it. The file check-soln.pl
will check that the solution returned by your solver is valid. They're
both perl scripts, so after saving them on a unix system, you'll have to
make them executable.
chmod 700 run-exp.pl check-soln.pl
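The check that check-soln.pl performs can be sketched as follows. This is an illustrative reimplementation of the idea, not the provided script: an assignment is valid exactly when every clause contains at least one literal the assignment makes true.

```cpp
#include <vector>

// Validity check for a solution: the assignment is a list of signed
// variable numbers (e.g. {1, 2, -3, -4}); a clause is satisfied iff it
// contains a literal with the same variable and the same sign.
bool satisfies(const std::vector<std::vector<int>> &formula,
               const std::vector<int> &assignment) {
    for (const auto &clause : formula) {
        bool sat = false;
        for (int lit : clause) {
            for (int a : assignment) {
                if (a == lit) { sat = true; break; }  // same var, same sign
            }
            if (sat) break;
        }
        if (!sat) return false;  // this clause has no true literal
    }
    return true;
}
```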