1. P&D 9.17

a) 1 KB/packet * 8 bits/byte = 8 Kb/packet; 8 Kb / 100,000 Kbps = 0.08 ms/packet.

b) Basically anything larger than 500 ms is fine. Strictly speaking you should add in the transmission delay at one end so that the figure is end-to-end, plus some slop time, but all of those terms are dominated by the 500 ms and can't really be determined anyhow.

c) The required buffering is the amount of data that arrives, on average, during the 500 ms: (100 Mbps / 8 bits/byte) * 0.5 s = 12.5 MB/s * 0.5 s = 6.25 MB.

2. The obvious way to use multicast to manage updates to web pages is to establish either a single multicast address (easier on address assignment) or an address per object (easier on the network once things are set up, since it limits transmissions better) onto which updates about objects are broadcast. Clients would first go to the server to get the most recent version of the object and, more importantly, to learn the multicast address being used to discuss it. They could then display the object and subscribe to that multicast address. Whenever anybody wants to change an object, they broadcast the new version on the object's multicast address. Changers should probably insist on an ack from the server to ensure that their changes are permanent.

The problem is that if updates can arrive out of order you can get write-after-write (WAW) conflicts: some clients could receive a long-delayed update after a logically later one and display the superseded value. This can be fought the same way it was in lecture: every sender maintains a version number and keeps track of the most recent version numbers it has seen from everybody else, and transmits that vector as the "required context" with every message, so that a message is delayed at a receiver until its required context has arrived. This ensures that updates are applied in order at every site. A simpler algorithm could get by with just a version number on every object: when a client makes a change it increments whatever it believed was the current version number, and when clients receive a message they adjust their belief about the current version number. A sketch of the vector scheme appears below.

All of these schemes can get into trouble if multiple clients make updates simultaneously. The only easy way around that is to go through the server and have it serialize the updates, which has obvious problems with scalability and fault tolerance.
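The delayed-delivery rule is easier to see in code. Here is a minimal sketch of the "required context" idea, assuming a simple in-memory receiver and a made-up message format (sender id, the sender's new version number, the vector of versions the sender had already seen, and the new object value); none of these names come from the problem itself.

    from collections import defaultdict

    class Receiver:
        def __init__(self):
            self.seen = defaultdict(int)   # sender id -> highest version applied
            self.pending = []              # updates whose context hasn't arrived
            self.value = None              # current contents of the object

        def _deliverable(self, msg):
            sender, version, context, _ = msg
            # This must be the next version from that sender, and every update
            # the sender had already seen must be applied here first.
            return (version == self.seen[sender] + 1 and
                    all(self.seen[s] >= v
                        for s, v in context.items() if s != sender))

        def receive(self, sender, version, context, new_value):
            self.pending.append((sender, version, dict(context), new_value))
            # Keep delivering anything whose required context is now satisfied.
            progress = True
            while progress:
                progress = False
                for msg in list(self.pending):
                    if self._deliverable(msg):
                        s, v, _, val = msg
                        self.seen[s] = v
                        self.value = val
                        self.pending.remove(msg)
                        progress = True

    # Example: an update from B that depends on A's first update is held back
    # until A's update arrives, even though B's message gets there first.
    r = Receiver()
    r.receive("B", 1, {"A": 1}, "v2")   # required context missing; buffered
    assert r.value is None
    r.receive("A", 1, {}, "v1")         # fills in the context; both now apply
    assert r.value == "v2"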
3. Cancelled

4.

a) If the second request (the read) is routed to a different server than the first request (the update), and that second server hasn't yet received the new information, then the client will not see its own update.

b) One easy solution is to insist that clients keep going through the server they started with; this, however, hurts fault tolerance if that server goes down. What we really need is for a client never to talk to a server that is unaware of its most recent update, either by finding another server or by blocking until the update arrives. There also needs to be some mechanism to deal with the possibility that the client's last update died with the server it was made on: either a timeout/failure report, or transactional behavior at the servers and faith that when a server comes back up its updates will be propagated.

One way to do this is to use a vector of version numbers to keep track of the collective state of the servers: each server keeps a version number that it increments when it makes changes, and every server keeps track of the last known version number of all the others. Whenever a client makes a change it gets back the vector of version numbers, and on each later request it passes back the last vector it received. If a server gets a request carrying a vector in which some entry is greater than the corresponding entry the server knows about, then that client must wait until the server has received the updates up to that point. A sketch of this check appears after part c).

c) The only real way to get sequential ordering is to only update owned data. There are two obvious ways to do this:

1) Each server is responsible for some subset of the database: anybody can read anywhere, but updates must go through the responsible server. This obviously gives availability problems if that server goes down.

2) Servers dynamically pass ownership of database entities around. There needs to be some way to find out who the current owner is; there are ways of doing this correctly, but we have not seen any of them. There would typically be some sort of voting mechanism to deal with network partitions.
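For 4 b), here is a minimal sketch of the version-vector check, assuming each server keeps one counter per server and the client simply replays the last vector a write handed back; the class and exception names are illustrative, not part of the problem, and instead of blocking this version just signals that the server is behind.

    class NotYetCaughtUp(Exception):
        """Client knows about updates this replica hasn't applied yet."""

    class ReplicaServer:
        def __init__(self, name, all_names):
            self.name = name
            # Last known version number for every server, including ourselves.
            self.versions = {n: 0 for n in all_names}
            self.data = {}

        def write(self, key, value):
            # A local update bumps our own entry; the client stores the
            # returned vector and presents it with later requests.
            self.versions[self.name] += 1
            self.data[key] = value
            return dict(self.versions)

        def read(self, key, client_vector):
            # If the client has seen a newer state of any server than we have,
            # serving this read could violate read-your-writes.
            for server, v in client_vector.items():
                if v > self.versions.get(server, 0):
                    raise NotYetCaughtUp(server)
            return self.data.get(key)

    # Example: a write goes to s1, the read is routed to s2 before the update
    # has propagated, and s2 correctly refuses instead of returning stale data.
    s1 = ReplicaServer("s1", ["s1", "s2"])
    s2 = ReplicaServer("s2", ["s1", "s2"])
    vec = s1.write("x", 42)
    try:
        s2.read("x", vec)
    except NotYetCaughtUp:
        print("s2 is behind; retry elsewhere or wait for propagation")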
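And for option 1) of 4 c), a toy illustration of static ownership: the key's hash picks the single server allowed to apply updates, so writes to any one key are serialized by its owner, while reads can go to any replica (and may be stale). The names and the hash-based partitioning are my own choices, propagation to the other replicas is omitted, and this does nothing about the availability problem noted above.

    from zlib import crc32

    class Replica:
        def __init__(self, name):
            self.name = name
            self.store = {}          # key -> (owner-assigned seq, value)
            self.seq = 0             # this owner's serialization of its keys

        def apply_write(self, key, value):
            # Only ever called on the key's owner, so these seq numbers give
            # a single total order of updates for each key it owns.
            self.seq += 1
            self.store[key] = (self.seq, value)
            return self.seq

        def read(self, key):
            return self.store.get(key)

    class Cluster:
        def __init__(self, names):
            self.replicas = {n: Replica(n) for n in names}

        def owner_of(self, key):
            # Static partitioning: the key's hash picks its single owner.
            names = sorted(self.replicas)
            return names[crc32(key.encode()) % len(names)]

        def write(self, key, value):
            # Updates must go through the responsible server (propagation of
            # the write to the other replicas is omitted in this sketch).
            owner = self.owner_of(key)
            self.replicas[owner].apply_write(key, value)
            return owner

        def read(self, key, at):
            # Reads may be served by any replica, possibly returning stale data.
            return self.replicas[at].read(key)

    c = Cluster(["s0", "s1", "s2"])
    owner = c.write("balance", 100)
    print("owner of 'balance':", owner)
    print("read at the owner:", c.read("balance", owner))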