Re: [Etherboot-developers] [RFC] Multicast reception....
From: <ebi...@ln...> - 2002-06-12 06:12:41
Anselm Martin Hoffmeister <an...@ho...> writes:

> What about the standard protocol (or less-standard?) you want to have for
> multicast? Do you have documentation at hand?

You can scan the code for some more details, but here is the basic protocol I am using. The target is multicast data on local networks, so I have the TTL set to 1 on all of my nodes. The security is approximately equal to TFTP.

There are 3 types of packets (DATA, NACK, REQ). There are 2 channels, multicast and unicast. There are 2 kinds of clients, master and non-master. The protocol handles both small and terabyte-sized files; a variable-length binary encoding is used for the numbers to keep the overhead down.

The DATA packet is transmitted over the multicast channel. It contains:
  transaction number, total file size, block size, packet number,
  block-size bytes worth of data

The NACK packet is transferred unicast to the server. It contains, counting from the beginning of the file, pairs of:
  received packets, requested packets

The REQ packet is transferred unicast to the clients. It contains:
  transaction number, total file size, block size

Unicast is used for the NACK and REQ packets because they aren't broadcast to everyone on a pure layer-2 switch, and they are a little more likely to get through. Additionally, in any direction there is now only one type of packet per (IP address, port number) pair:

  server->REQ->client    unicast
  server<-NACK<-client   unicast
  server->DATA->client   multicast

The client: at startup it first listens on the multicast channel, and if it finds data it downloads it; otherwise, after a timeout, the client sends a NACK to the server to get the download started. After receiving data, if the client has not received everything, it waits for the transmission to restart; after the appropriate timeout it sends a NACK, doubles the possible timeout interval, and waits again. The exponential backoff of the clients should keep the network quiet when the clients are running and the network is busy.
When all of the data has been received, if the client has transmitted a NACK, it should transmit an additional NACK to the server consisting of just the byte 0, to indicate it is going away. The final empty NACK is an optimization to tell the server the client is gone; if the packet doesn't make it, oh well.

Another optimization involves the server sending a REQ to the client, in which case the client forgets its timeout and sends a NACK immediately to the server. This allows the server to pick a ``master'' client and pick on it until that client has all of its data, ensuring some level of fairness.

The server: it starts up and listens for NACKs. For every NACK it gets, it adds the sending machine to its list of known clients, unless it is the special disconnect NACK, in which case it removes the client. Looking at the data from the NACKs, the server decides to send some data. Generally all of the clients' requested data is sent, but on large files it can be beneficial to send only as much as the server can easily cache. After the data is transmitted, the server picks on a known client and sends a REQ. If the client doesn't respond within the server's timeout, it picks on another client, sends a REQ, and forgets the previous client even existed.

Comments: by doing all of the control packets over UDP, and having no explicit acknowledgement that the data even arrived, some interesting things result.

1) Minimum network packet count (except in the case of a slow server, which all of the clients NACK).
2) Multiple policies can be implemented by both the client and the server.
3) By tracking which packets of the entire transmission have arrived, and delaying the NACKs, full network bandwidth can be achieved, as opposed to TFTP, which is limited by the round-trip time.

> Else I would like to start adding tftp-mcast support as my quickly hacked
> daemon for that protocol at least runs, on low-load-testing with two
> clients stably.
> Perhaps if ready I could have a mass-test (ok, 15 clients is not much, but
> better than nothing) the after-next weekend at my "private testing
> laboratory", until then I should have made a release out of it, announce
> will follow. But of course, if you have a better protocol at hand, please
> let me know.

Everything is now in CVS. Take whichever one you prefer. It wouldn't be evil to have both in Etherboot, but I would be surprised if the experimental multicast TFTP had any advantages except better documentation.

Eric