PAST review

From: David Coleman (dcoleman_at_cs.washington.edu)
Date: Wed Mar 03 2004 - 14:59:30 PST

    When studying PAST, the first thing that occurred to me was that PAST
    seems to be a capability-based system: you cannot even refer to a file
    whose ID you have not been given, and file IDs are so large (160 bits)
    that fabricating a valid one is essentially impossible. It wasn't
    clear to me, and maybe I simply missed it in the paper, whether any
    sort of rights are associated with the file ID. The only thing that
    somewhat acts as a right is the owner's credentials, which are used
    to sign the certificate that is stored with the file.
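    As a concrete illustration, here is a minimal sketch of how a fileId
    might be computed as the paper describes it (SHA-1 over the file
    name, the owner's public key, and a random salt); the salt width and
    the concatenation order are my assumptions:

        import hashlib
        import os

        def make_file_id(file_name: bytes, owner_public_key: bytes):
            # PAST fileIds are the 160-bit SHA-1 hash of the file name,
            # the owner's public key, and a random salt.
            salt = os.urandom(20)  # assumed salt width
            digest = hashlib.sha1(file_name + owner_public_key + salt)
            return digest.digest(), salt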

    To deal with the problem of storage balancing, I cannot see why
    simply building a new file ID that maps into the region of the system
    that has storage space would be a problem. Generating a random number
    between two 128-bit nodeIds and then randomly extending it out to 160
    bits should be (although mathematically it probably is not) as random
    and unique as the SHA-1 hash of the file name, the owner's public
    key, and a random salt value. ID collisions are already a possibility
    and are already dealt with sufficiently. The 160-bit address space
    should be large enough for such targeted values as well as for
    pseudo-random ones.
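    To make the suggestion concrete, here is a rough sketch of that
    scheme (the region bounds would come from knowing which nodeIds have
    free space; the 128/160-bit split follows Pastry/PAST, but the helper
    name and the assumption that lo_prefix < hi_prefix are mine):

        import secrets

        def targeted_file_id(lo_prefix: int, hi_prefix: int) -> int:
            # Pick a random 128-bit prefix inside [lo_prefix, hi_prefix),
            # a region of the nodeId space known to have storage
            # available, then append 32 random bits to reach the
            # 160-bit fileId width.
            prefix = lo_prefix + secrets.randbelow(hi_prefix - lo_prefix)
            return (prefix << 32) | secrets.randbits(32)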

    I was a little surprised by the rules regarding available space on a
    node when joining the network. It is certainly cleaner to work with
    essentially uniform nodes, but after a certain length of time storage
    imbalance and general use end up fragmenting the space available on
    nodes, so that any balancing attempted on the assumption of
    pseudo-uniform node sizes would be invalid.

    I was also surprised that compression (lossless, of course) wasn't
    considered. Given the amount of encryption already being used, a
    little extra CPU time for compression would have saved a significant
    amount of network traffic. Given the orders-of-magnitude speed
    difference between the CPU and the network, any time spent
    compressing would have been recovered in transmission. It also saves
    storage space.
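    One ordering detail worth noting: the compression would have to
    happen before the client encrypts, since ciphertext is effectively
    incompressible. A minimal sketch (the encrypt callable here is a
    placeholder of my own, not part of PAST):

        import zlib

        def prepare_for_insert(plaintext: bytes, encrypt) -> bytes:
            # Compress first; encrypted data looks random and will not
            # compress, so the order matters.
            return encrypt(zlib.compress(plaintext, 6))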

    I also got a chuckle out of the fact that the benchmarks were run on a
    Unix variant by a researcher from MSR. But hey, it’s nice to have a
    64-bit OS available to play with.

    I would like to have more time to study this concept and specifically
    these two papers. They are both fairly deep papers content-wise, and
    I don't feel that I've done them justice in the time I've had to
    spend on them (I couldn't fully present the concepts and details
    yet). I can see why this area of CS research is hot – there seem to
    be quite a few applications of these concepts.

