From: Danny Wyatt (danny@arbitrary.com)
Date: Wed Apr 28 2004 - 01:56:02 PDT
Before reading this paper, I was unaware of the descriptive complexity
definitions of P, NP, and PSPACE. I find them satisfying because
showing membership in a class by finding a new description of a problem
might provide more insight into the problem than showing the same
membership via a new algorithm.
On the topic of databases, they show that relational queries are
equivalent to FOL queries and that the complexity of a query depends
only on the query's length and not the size of the database. What I
find particularly interesting about this is that they also define P as
equivalent to first-order logic with recursively defined predicates. A
concrete example for this comes from homework 1: what is the length of
the query that finds actors with a Bacon number of infinity? The way I
structured my query, it would grow linearly with the length of a path
in the Bacon connected component. This means that the length of the
query was dependent on the size of the database. Thus, query engines
that support a fixpoint operator cannot guarantee a bound on the running
time of a query based on the length of the query alone, since they have
admitted queries whose "true" lengths (were they expanded, akin to
Immerman's quantifier blocks iterated polynomially many times) depend on
the size of the database.
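To make this concrete, here is a minimal Python sketch of a fixpoint computation of reachability (the edge relation and names are invented for illustration; in homework 1 the edges would come from joining the cast table with itself). Each pass extends paths by one edge, so the number of passes to reach the fixpoint grows with the longest path in the data, not with the length of the query text:

```python
# Hypothetical "acted together" edges; two components, one containing Bacon.
edges = {("Bacon", "A"), ("A", "B"), ("B", "C"), ("X", "Y")}

def reachable(edges, source):
    """Iterate to a fixpoint: add every node one edge away from the
    current set until nothing new appears. The iteration count is
    bounded by the longest path in the data, a property of the
    database rather than of the query."""
    reached = {source}
    iterations = 0
    while True:
        iterations += 1
        new = {y for (x, y) in edges if x in reached} - reached
        if not new:
            break
        reached |= new
    return reached, iterations

nodes = {v for e in edges for v in e}
reached, n_iter = reachable(edges, "Bacon")
# Actors with an infinite Bacon number are exactly those never reached.
unreachable = nodes - reached
```

A fixed-length first-order query can only unroll a fixed number of these passes, which is why the non-recursive homework query had to grow with the database.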
In the last paper we saw the use of DB2's fixpoint operator to support
the "//" operator in XQuery. Intuitively, the "//" operator admits
queries whose length depends on the size of the data. The theory in this
paper suggests that there is a fundamental mismatch in complexity
between XQuery and relational algebra queries, and that there can be no
fixed-length translation between the two.
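The same point can be sketched for "//" itself (the tree shape and tag names below are invented for illustration): selecting descendants at any depth is an inherently recursive traversal, so a non-recursive query can only union a fixed number of child-axis steps, and its unrolled length tracks document depth.

```python
# A hypothetical XML-like tree encoded as (tag, [children]).
doc = ("a", [("b", [("c", [])]), ("c", [])])

def descendants(node):
    """Collect the tags of all descendants of `node`, at any depth --
    the traversal underlying XQuery's "//" axis. Without recursion (or
    a fixpoint operator), a query must spell out one child step per
    level of nesting it wants to reach."""
    tag, children = node
    out = []
    for child in children:
        out.append(child[0])
        out.extend(descendants(child))
    return out

# "//c" selects every descendant named "c", regardless of depth.
matches = [t for t in descendants(doc) if t == "c"]
```

This is why DB2's fixpoint operator was needed in the last paper: it supplies exactly the recursion that "//" demands and plain relational algebra lacks.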
This archive was generated by hypermail 2.1.6 : Wed Apr 28 2004 - 01:56:13 PDT