From: Alan L. Liu (aliu_at_cs.washington.edu)
Date: Wed Oct 22 2003 - 09:24:23 PDT
Paper Reviewed: Two Theses of KR
Review author: Alan Liu
Summary: This paper critiques the theses that general-purpose KR systems
must restrict their language in order to guarantee polynomial-time
inference and must keep terminological and assertional knowledge
separate.
Most Important Ideas
* Classification is important
The paper describes how general classification can be used in reasoning
and why it matters for knowledge-base maintenance. The authors argue
that it is a good example of a knowledge-base operation because KB
representations are often organized around taxonomic hierarchies. (A
small sketch of taxonomic classification follows this item.)
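To make the classification operation concrete, here is a minimal sketch
(my own illustration, not code from the paper) that places a new concept
into a taxonomy via subsumption tests, with subsumption approximated as
a subset check on defining properties:

    class Concept:
        def __init__(self, name, properties):
            self.name = name
            self.properties = set(properties)
            self.children = []

        def subsumes(self, other):
            # A concept subsumes another if its defining properties are
            # a subset of the other's (a crude stand-in for a real
            # description-logic subsumption test).
            return self.properties <= other.properties

    def classify(root, new_concept):
        # Walk down the taxonomy to the most specific concept that
        # still subsumes the new one, then attach the new concept
        # beneath it.
        parent = root
        while True:
            narrower = [c for c in parent.children
                        if c.subsumes(new_concept)]
            if not narrower:
                parent.children.append(new_concept)
                return parent
            parent = narrower[0]  # follow one chain, for brevity

    animal = Concept("animal", {"alive"})
    mammal = Concept("mammal", {"alive", "fur"})
    animal.children.append(mammal)
    dog = Concept("dog", {"alive", "fur", "barks"})
    print(classify(animal, dog).name)  # -> mammal

Maintenance operations such as adding a new concept then reduce to
re-running this placement, which is why classification is so central to
keeping a taxonomically organized KB consistent.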
* Worst-case complexity is an inadequate measure; utility may be a
better one, but in the end there is no "general-purpose" language
Taking too long on an operation is bad, but the authors argue that
since most problem instances are not hard, one should not lobotomize
the KR language just to rule out the rare hard instances. Doing so not
only makes those instances inexpressible but also removes many valid,
useful language constructs, which leads system designers to hack around
the limitations or even to abandon the system entirely.
Using utility to factor in user needs can be a powerful tool for
relaxing the time limits placed on an AI system. Instead of assuming
that users will not tolerate long computations and limiting what they
can ask a KB, a better approach might be to warn the user of
potentially long computations, or to decide based on how long the user
says he or she is willing to wait. (A toy sketch of this policy
follows.)
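As a toy sketch of that policy (the names and numbers here are my own
assumptions, not the paper's), the system would compare a predicted
running time against the user's stated budget instead of forbidding the
query outright:

    def should_run(estimated_seconds, budget_seconds):
        # Proceed only if the predicted cost fits the user's budget;
        # otherwise warn rather than silently refusing the query.
        if estimated_seconds <= budget_seconds:
            return True
        print("Warning: this query may take ~%.0fs, over your %.0fs "
              "budget." % (estimated_seconds, budget_seconds))
        return False

    # The user says they will wait at most 10 seconds.
    if should_run(estimated_seconds=120, budget_seconds=10):
        pass  # run the full inference procedure here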
Unfortunately, no matter what the KR language is, there will always be
a trade-off between its expressiveness and its complexity, so no single
KR language can hope to perform adequately across all possible
applications.
Flaws
The paper could have spent more time on the restricted classification
thesis, which I felt received far less attention than the dismantling
of the restricted language thesis.
The authors admit that it is unclear how to compute average time
efficiency for any given algorithm, but they gloss over this by saying
that empirical evidence is an acceptable heuristic. That approach lacks
rigor. Time-critical applications would still have to use a restricted
language in the style of Levesque and Brachman, because in any given
situation the empirically derived heuristic might be wrong and lead the
system to start a lengthy computation that has no chance of finishing
on time. The paper does not offer a better solution to that problem. (A
sketch of what the empirical heuristic amounts to follows.)
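To spell out what that empirical heuristic amounts to (this sketch is
my reading of it, not the authors' code): time the reasoner on a sample
of representative queries and use the sample mean as the cost predictor
fed into the utility calculation above.

    import time

    def estimate_mean_runtime(reasoner, sample_queries):
        # Sample mean of observed running times, used as the predicted
        # cost of future queries.
        total = 0.0
        for q in sample_queries:
            start = time.perf_counter()
            reasoner(q)
            total += time.perf_counter() - start
        return total / len(sample_queries)

Note that a sample mean says nothing about the particular instance at
hand, which is exactly why this fails for time-critical applications.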
Important and Open Research Questions
I think it will be interesting to see whether iterating on estimates of
average time complexity can produce decent results in a reasonable
amount of time, and how effective such empirical evidence will be when
used by a utility-based KB system. If it proves effective, utility-based
KB systems would be a vast improvement over restricted-language systems.
Better still would be a theoretically grounded approach to computing
average time complexity.
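One concrete form that iteration could take (my speculation, not
something the paper proposes) is an online update that refines the cost
estimate as real queries complete, e.g. an exponential moving average:

    def update_estimate(current, observed_seconds, alpha=0.2):
        # Blend the newest observation into the running estimate;
        # alpha controls how quickly older evidence is discounted.
        return (1 - alpha) * current + alpha * observed_seconds

Whether such a scheme converges to useful predictions quickly enough is
precisely the open question.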
The paper lists several alternative approaches, all of which deserve
attention because they open up new classes of applications. For
instance, there are valid reasons to want to perform classification
over relevant knowledge even while knowledge and assumptions are
changing.