1) P&D 8.3

The basic idea is to tweak the FQ computation to adjust the "transmission finishing times." The algorithm keeps the conceptual idea of sending the packet that would finish first in a bit-interleaved setup, where the weights now control how many bits of each round a stream gets:

  1) compute F_i = max(F_{i-1}, A_i) + (W * P_i) / W_i
  2) send the packet with the lowest F_i first

W is the total weight; I don't believe it is actually required. The definition of a tick has been tweaked to be the time to send one bit, not the time to send one bit from each stream.

2) P&D 8.5

The graph needs to plot phase (round trips) versus window size. The packets are included here because that is the easiest way to derive it; lost packets are shown in parentheses.

  phase  winsize  packets
    1       1     1
    2       2     2,3
    3       3     4,5,6
    4       4     7,8,(9),10
    5       2     9,11
    6       3     12,13,14
    7       4     15,16,17,18
    8       5     19,20,21,22,23
    9       6     24,(25),26,27,28,29
   10       3     25,(30),31
   11       1     30
   12       2     31,32
   13       3     33,34,35
   14       4     36,37,(38),39
   15       2     38,40
   16       3     41,42,43
   17       4     44,45,46,47
   18       5     48,49,(50),51,52
   19       2     50,53

3) P&D 8.6

(53 packets * 1 KB/packet) / (19 RTT * 100 ms/RTT) = 53 KB / 1.9 s ≈ 27.89 KB/s

4) P&D 8.8

In order for fast retransmit to work, the sender needs to be able to keep sending data so that the receiver will keep sending duplicate acks. Consider a case where a substantial number of packets have been dropped. Before progress can be made, all of these packets must be retransmitted, but there may not be enough other packets in the window getting through to provide the duplicate acks needed for all of the dropped packets to be resent. If this happens, the sender will run out of window space to send, and will be forced to block until the coarse-grained timeouts fire.

5) P&D 8.17

The idea is to discuss why it is hard to build a network with good congestion control for bursty traffic. My take is this: if you want to do congestion management really right, you want to do bandwidth reservation.
Consider a system in which congestion is absolutely forbidden: all streams will have to reserve for their maximum requirements, which on average wastes most of the network if the traffic is bursty. On the other hand, if we design for the average case, then we may get into trouble if the bursts all hit at once.

6) P&D 5.19

  D:  R2 -> Top rcvr
      R1 -> Left rcvr
      R4 -> Lower left left rcvr, Lower left right rcvr
      R5 -> Right rcvr

  E:  R6 -> Lower left left rcvr, Lower left right rcvr
      R3 -> Left rcvr
      R4 -> R2 -> Top rcvr
      R7 -> Right rcvr

There are a couple of arbitrary choices here. D to the Left rcvr can go through R4 and R3, and E to the Top rcvr can go through R7, R5, and R2.

7) The obvious problem here is that a multicast tree is not a linear structure, and it isn't even a regular tree. There are lots of possible ways to encode the tree, but all of them will get large and unwieldy very fast. It is also difficult for the sender to know all of the receivers and their routes for widely disseminated information.

8) The problem setup is two networks with two paths between them. Since the source and destination networks are running different protocols (and probably each treating the other as a single entity), they can independently come to different conclusions about which gateway to use. The sender, being efficient, will only send packets to the receiver through the gateway it chose, but the receiver, being efficient, will drop packets coming in that way because they are not on what it considers to be the good path. The only real solution I'm aware of involves forcing one of the networks to change its choice. The best way I can think of is for the gateways to get together (they all must know about each other) and decide who will carry the traffic. The others then lie when asked if they can get data through, forcing the traffic through the chosen gateway. This has all sorts of awful implications.
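As an aside, the finish-time rule from (1) can be sketched as a small simulation. The function name, the flow/weight structure, and the packet sizes below are illustrative assumptions, not part of the problem; the sketch assumes all packets arrive before scheduling starts.

```python
import heapq

def wfq_schedule(flows, weights):
    """Sketch of weighted fair queuing via per-flow finish times.

    flows:   dict flow_id -> list of (arrival_time, packet_size_bits),
             in arrival order within each flow
    weights: dict flow_id -> weight W_i
    Returns packets as (flow_id, arrival, size) in transmission order.
    """
    total_weight = sum(weights.values())  # W in the formula (arguably optional)
    last_finish = {f: 0 for f in flows}   # F_{i-1} for each flow
    heap = []
    for f, packets in flows.items():
        for arrival, size in packets:
            # F_i = max(F_{i-1}, A_i) + (W * P_i) / W_i
            finish = max(last_finish[f], arrival) + total_weight * size / weights[f]
            last_finish[f] = finish
            heapq.heappush(heap, (finish, f, arrival, size))
    # Send the packet with the lowest finish time first.
    return [(f, arrival, size) for _, f, arrival, size in
            (heapq.heappop(heap) for _ in range(len(heap)))]
```

For example, with flows "a" (weight 2, two 100-bit packets at t=0) and "b" (weight 1, one 100-bit packet at t=0), flow "a" gets both of its packets out before "b" gets its one, matching the 2:1 weighting.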