From: David Coleman (dcoleman_at_cs.washington.edu)
Date: Wed Feb 25 2004 - 16:14:44 PST
Porcupine strikes me as the most functional system that we’ve read about
to date (excluding core technologies such as RPC). It is one of the most
complicated systems we’ve looked at, but that seems reasonable given
its dynamic nature.
One thing that struck me regarding the design of hard versus soft state
is persistence and frequency of change. Is it true that if data doesn’t
change very often, it is a better candidate for hard state, simply
because it requires fewer replication operations, assuming the data is
replicated for availability?
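To make that intuition concrete, here is a rough back-of-the-envelope
model (my own, not from the paper; the function and numbers are
illustrative assumptions):

    # Back-of-the-envelope only: my own model, not from the paper.
    # Replication traffic for a piece of state is roughly its update
    # rate times the number of other replicas that must see each change.

    def replication_cost(updates_per_sec, replica_count, bytes_per_update):
        # Each update is pushed to every other replica.
        return updates_per_sec * (replica_count - 1) * bytes_per_update

    # A user profile changing once a day is cheap to keep as hard,
    # replicated state; state changing ten times a second would be
    # far more expensive to replicate durably.
    print(replication_cost(1 / 86400, 3, 512))   # ~0.01 bytes/sec
    print(replication_cost(10, 3, 512))          # ~10 KB/sec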
An additional factor in fail-over was discussed in Grapevine.
Overprovisioning didn’t work well there: when a machine failed and its
backup took over, the backup couldn’t handle both its own workload and
the failed machine’s load. A cluster of many small machines means each
one carries a small fraction of the total load, so a failure is
absorbed incrementally across the survivors, whereas a single large
machine (or a few of them) needs a dedicated fail-over sized to the
capacity of the original machine.
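The arithmetic behind that (a sketch of my own, assuming load
rebalances evenly across the survivors, which is what Porcupine’s
dynamic membership aims for):

    # Sketch of my own, assuming the cluster rebalances evenly across
    # the N-1 survivors. Each survivor then runs at u * N / (N - 1);
    # a dedicated 1-for-1 backup instead absorbs the failed machine's
    # entire load at once.

    def post_failure_load(n_nodes, utilization):
        return utilization * n_nodes / (n_nodes - 1)

    for n in (2, 10, 30):
        print(n, round(post_failure_load(n, 0.6), 3))
    # 2 nodes:  0.6 -> 1.2   (the Grapevine problem: survivor overloaded)
    # 10 nodes: 0.6 -> 0.667
    # 30 nodes: 0.6 -> 0.621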
In today’s virus- and worm-prevalent environments, homogeneity can be a
problem. A single effective worm would bring the cluster to its knees.
Even with the strictest safeguards, a virus or worm can still breach the
network. A heterogeneous environment would help to limit the
effectiveness of these attacks.
I wonder how it would handle today’s larger message sizes. Just this
afternoon, we discussed the best method for getting a 570MB file to a
user at another company, and email came up as a solution (only briefly,
but it came up). I routinely send multi-megabyte files to multiple
users. I also wonder whether the strategy used in one of the systems
we’ve studied (I can’t remember which) of storing a single physical
copy of a message and letting multiple recipients share it would work
in Porcupine.
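For what I mean by sharing, here is a toy single-instance store
(entirely hypothetical, not Porcupine’s design; all names are mine).
The open question is whether one shared, reference-counted copy is
compatible with Porcupine’s per-user placement of mailbox fragments
across nodes:

    # Toy illustration, entirely hypothetical (not Porcupine's actual
    # design). One stored copy of a message body; per-recipient
    # mailboxes hold references, and a count tells us when the body
    # can be reclaimed.

    class MessageStore:
        def __init__(self):
            self.bodies = {}     # message_id -> [body, refcount]
            self.mailboxes = {}  # user -> list of message_ids

        def deliver(self, message_id, body, recipients):
            # Store the body once, however many recipients there are.
            self.bodies[message_id] = [body, len(recipients)]
            for user in recipients:
                self.mailboxes.setdefault(user, []).append(message_id)

        def delete(self, user, message_id):
            # Reclaim the shared body only when the last reference goes.
            self.mailboxes[user].remove(message_id)
            entry = self.bodies[message_id]
            entry[1] -= 1
            if entry[1] == 0:
                del self.bodies[message_id]

    store = MessageStore()
    store.deliver("m1", b"big attachment", ["alice", "bob"])
    store.delete("alice", "m1")  # bob still holds a reference
    store.delete("bob", "m1")    # last reference gone; body reclaimed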