From: Brian Milnes (brianmilnes_at_qwest.net)
Date: Wed Jan 28 2004 - 14:18:19 PST
Implementing Remote Procedure Calls - Birrell and Nelson
The authors review their implementation of a realistic remote procedure
call system. They identify the major issues of an RPC design: the semantics
of calls in the face of failures, the lack of a shared address space,
integration with the programming system, the transport protocol, and
security. Their intent was to make distributed computation easy with a
powerful, efficient, and secure RPC.
They reject sharing address spaces over the network as a way of giving
semantics to pointer data, because they feel it would be too expensive and
too difficult to integrate with their programming language, Mesa. Instead,
their approach generates stub modules for each interface and binds clients
to servers through a match-making service. They use the Grapevine
distributed database, rather nice for its time, to build an RPC location
and authentication service. RPC servers must register their exported
interfaces, and Grapevine checks that they are running on an appropriate
host as an appropriate user. RPC clients can look up one or more instances
of a service by type and instance name, and the service attempts to give
the client the instance closest on the network.
Instead of using an existing connection-oriented protocol, they built their
own connectionless protocol to minimize server cost. The protocol is
opportunistic: in the best case it sends a single request packet and a
single result packet, with the result serving as the acknowledgement. The
sender is responsible for retrying and requesting an explicit
acknowledgement on loss or timeout. If the arguments do not fit in a single
packet, every packet except the last is explicitly acknowledged. The
protocol could really use a sliding-window optimization to cut down these
per-packet round trips.
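
A minimal sketch of that optimistic exchange, assuming a UDP transport (the
address, payload, and retry parameters are my inventions, not the paper's):

  package main

  import (
          "fmt"
          "net"
          "time"
  )

  // call sends one request packet and waits for the result packet, which
  // doubles as the acknowledgement; on timeout it retransmits.
  func call(conn net.Conn, req []byte) ([]byte, error) {
          buf := make([]byte, 1500)
          for try := 0; try < 5; try++ {
                  if _, err := conn.Write(req); err != nil {
                          return nil, err
                  }
                  conn.SetReadDeadline(time.Now().Add(500 * time.Millisecond))
                  n, err := conn.Read(buf)
                  if err == nil {
                          return buf[:n], nil // result is the implicit ack
                  }
                  // Timeout: retransmit. A fuller sketch would now ask for
                  // an explicit ack and probe whether the server is alive.
          }
          return nil, fmt.Errorf("no result after retries")
  }

  func main() {
          conn, err := net.Dial("udp", "127.0.0.1:9999") // hypothetical server
          if err != nil {
                  panic(err)
          }
          if resp, err := call(conn, []byte("call 1: Lookup")); err == nil {
                  fmt.Printf("result: %q\n", resp)
          }
  }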
Mesa provides catchable exceptions, much like the mechanisms in more modern
languages such as Java and SML. They use this to generate and handle
exceptions caused by RPCs, such as call failed. They say that forking a
process costs only about as much as ten local procedure calls; I'd love to
know why. The server stockpiles idle processes to handle incoming RPCs, and
they modify their Ethernet packet driver to dispatch packets to waiting
processes by call identifier. This predates the Berkeley Packet Filter,
which I've used to dispatch from the link layer to new protocols.
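
The dispatch-by-call-id trick translates naturally to modern concurrency
primitives. A minimal sketch, with goroutines standing in for Mesa
processes and channels for the handoff (all names mine):

  package main

  import (
          "fmt"
          "sync"
  )

  // Dispatcher routes arriving packets directly to the process waiting on
  // that call id, rather than through a generic input queue.
  type Dispatcher struct {
          mu      sync.Mutex
          waiting map[uint64]chan []byte // call id -> waiting caller
  }

  func NewDispatcher() *Dispatcher {
          return &Dispatcher{waiting: make(map[uint64]chan []byte)}
  }

  // Register records a channel for an outstanding call.
  func (d *Dispatcher) Register(id uint64) chan []byte {
          ch := make(chan []byte, 1)
          d.mu.Lock()
          d.waiting[id] = ch
          d.mu.Unlock()
          return ch
  }

  // Deliver hands a result packet straight to the waiting caller.
  func (d *Dispatcher) Deliver(id uint64, pkt []byte) {
          d.mu.Lock()
          ch, ok := d.waiting[id]
          delete(d.waiting, id)
          d.mu.Unlock()
          if ok {
                  ch <- pkt
          }
  }

  func main() {
          d := NewDispatcher()
          ch := d.Register(42)
          go d.Deliver(42, []byte("result")) // as if from the packet driver
          fmt.Printf("call 42 got %q\n", <-ch)
  }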
They use Grapevine as an authentication service and encrypt with DES.
They achieve 2 Mb/s throughput on a 3 Mb/s network, which was hard to get
on a shared medium like Ethernet over coaxial cable.
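
For flavor, here is what encrypting an RPC payload with DES looks like in
Go; the key, IV, and payload are hypothetical, and DES itself is of course
long broken and survives in standard libraries only for legacy protocols:

  package main

  import (
          "crypto/cipher"
          "crypto/des"
          "fmt"
  )

  func main() {
          key := []byte("8bytekey") // hypothetical conversation key (DES needs 8 bytes)
          iv := make([]byte, des.BlockSize)
          block, err := des.NewCipher(key)
          if err != nil {
                  panic(err)
          }
          // Pad the payload to a whole number of 8-byte DES blocks.
          payload := []byte("call 42: Lookup(name)")
          for len(payload)%des.BlockSize != 0 {
                  payload = append(payload, 0)
          }
          ct := make([]byte, len(payload))
          cipher.NewCBCEncrypter(block, iv).CryptBlocks(ct, payload)
          fmt.Printf("%x\n", ct)
  }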
This is a very strong paper, but it fails to try the internet protocols or
to use different protocols for different distances and reliabilities of
communication. They also don't benchmark their server speed under load,
which has since become critical; still, they were way ahead of their time.