From: James G. <ja...@ww...> - 2002-07-24 16:37:00
|
I'm pretty new to Player/Stage, so I apologize if this has been discussed before. I was wondering if anyone had thought of using multicast for server->client communication, and then a unicast connection for client->server communication. It seems that as long as all the networks between the server and client were multicast-enabled, it would solve the problem that was mentioned in the iros01-player.ps paper.

If there were 50 clients receiving the laser data at 10Hz across wireless and it was multicast-enabled, there would actually only need to be one 10Hz "stream" out from the server, and then all the packets would be distributed a la multicast to the hosts that have subscribed to each sensor "stream".

James |
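The fan-out James describes could be sketched roughly as below. The group address and port here are invented for illustration (Player itself spoke only TCP at the time); the point is that the server sends each scan once, and the network replicates it to every subscriber.

```python
import socket
import struct

# Hypothetical group/port -- not part of the Player protocol.
LASER_GROUP = "239.255.42.10"   # administratively scoped multicast range
LASER_PORT = 6670

def open_sender(ttl=1):
    """Server side: one socket, one send per scan, any number of listeners."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return s

def open_receiver(group=LASER_GROUP, port=LASER_PORT):
    """Client side: bind the shared port and join the group; routers and
    switches (not the server) replicate each datagram to every member."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return s

# The server would then do, once per scan, regardless of client count:
#   open_sender().sendto(scan_bytes, (LASER_GROUP, LASER_PORT))
```

With 50 subscribed clients, the wireless link carries one 10Hz stream instead of fifty.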
From: Richard V. <va...@hr...> - 2002-07-24 17:03:02
|
Hi James,

Yes, multicast would work as you say. From talking to our users, it seems quite unusual for very many clients to be consuming the same data, though it is absolutely possible as you describe. Almost all users we know connect approximately 1 client to each robot (though we think using multiple clients is an interesting idea - Brian published a paper about this at IROS01).

The main headache with a multicast stream is that it uses UDP, so the application would have to manage the data stream very carefully to make sure that no packets were dropped. TCP does a great job of this, so we take advantage of the built-in quality control. If your application could really take advantage of multicast, and doesn't care if packets are sometimes lost, it sounds interesting and we'd like to hear about it. Of course, you're absolutely free to add multicast support to the code yourself! It shouldn't be very difficult...

cheers, Richard.

-- Richard Vaughan HRL Laboratories [va...@hr...] |
From: James G. <ja...@ww...> - 2002-07-24 17:33:56
|
I figured the reliability of UDP was probably one of the reasons it wasn't used.

In the applications that you have worked on, have you tried artificially creating high latency and packet loss? I'm curious whether the overhead of TCP, along with all of its reliability, would provide any major benefit over vanilla UDP. For example, if a few TCP packets get dropped (assuming 10 Hz laser data) and the TCP stack starts to retransmit, would this slight delay be worse than just not receiving the packets? I guess it's possible that the extra overhead of TCP would make it worse on average (assuming a reliable/uncongested network) than vanilla UDP.

Now suppose multicast were used and there were clients in multiple locations: say 2 clients on the same network, 2 clients on a network across campus, and 5-10 clients somewhere out on the internet. If all the clients let their polling interval be determined by their network location (if they want something different they could use unicast), the server would send out packets at the fastest rate available (10 Hz?) with a TTL of 1, then every X Hz the TTL would increase. So it wouldn't require separate streams for each different interval.

James |
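The expanding-TTL idea could look something like the sketch below. The rates and TTL tiers are invented for illustration (real scope boundaries depend on router configuration), and the gap-counting helper shows the small amount of bookkeeping a UDP client would need in place of TCP's built-in reliability.

```python
# Every packet goes out with TTL 1 (same LAN); every 2nd packet also
# reaches campus (TTL 32); every 10th reaches the wider internet
# (TTL 128). One outgoing stream serves all three client populations.

def ttl_for(seq, tiers=((10, 128), (2, 32), (1, 1))):
    """Pick the widest TTL whose period divides this sequence number."""
    for period, ttl in tiers:
        if seq % period == 0:
            return ttl
    return 1

# A distant client then sees every 10th scan (1 Hz from a 10 Hz source),
# a campus client every 2nd (5 Hz), and a local client all of them.

def missed(prev_seq, seq, period):
    """How many scans this client should have seen but didn't, given the
    sequence numbers of the last two datagrams that actually arrived."""
    expected = (seq - prev_seq) // period - 1
    return max(expected, 0)
```

A client that only cares about the freshest scan can simply note the gap and move on, which is exactly the loss tolerance the unreliable stream assumes.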
From: brian g. <bg...@po...> - 2002-07-26 21:40:59
|
hi James,

i think that you're correct that UDP would be more efficient than TCP, and probably more often than you think. in my experience with UDP on modern networks, *very* few packets are dropped or mis-ordered, even over wireless links. thus, the client has little to worry about with regard to quality control; it just has to tolerate a missing packet here and there (of course, that would complicate the request/reply interactions, but we could fix it with a handshake). however, TCP has worked fine for us so far; i have yet to see or hear of a case in which the Player data/command traffic exceeds available bandwidth.

with regard to the client limits that we discovered and reported at IROS 2001, i now believe that the problem had nothing to do with available bandwidth, and was instead a result of our over-use of threads. at that time, each client was serviced by a pair of threads. with many clients come many threads, and the Linux scheduler was (and still is) very bad at spawning and running large numbers of processes (LinuxThreads are essentially processes). now that we're down to 3 threads total (and i'm working on cutting that to 1), i think that Player could service a great many clients, more or less up to available bandwidth (anybody want to run tests?).

if you do have a compelling case for decreasing the network traffic, i'd like to hear it, as i'm not opposed to the idea of adding optional UDP multicast support.

by the way, the paper that Richard mentioned, with many clients connected to one server, was actually presented at ICRA 2002. you can read it if you want:

http://robotics.usc.edu/~gerkey/research/final_papers/icra02-collab.ps.gz
http://robotics.usc.edu/~gerkey/research/final_papers/icra02-collab.pdf

brian. |
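The handshake Brian mentions for request/reply over a lossy link could be as simple as stop-and-wait with retransmission. This is a generic sketch, not Player code: the transport is injected so the retry logic is independent of the actual socket, and `recv(timeout)` is assumed to return `(reply_id, payload)` or `None` on timeout.

```python
class StopAndWait:
    """Minimal handshake for request/reply over an unreliable datagram
    link: resend the request until a reply with a matching id arrives.
    `send(req_id, payload)` and `recv(timeout)` are supplied by the
    caller, so this works over any transport (or a fake one, for tests)."""

    def __init__(self, send, recv, timeout=0.2, retries=5):
        self.send, self.recv = send, recv
        self.timeout, self.retries = timeout, retries
        self.next_id = 0

    def request(self, payload):
        req_id = self.next_id
        self.next_id += 1
        for _ in range(self.retries):
            self.send(req_id, payload)
            reply = self.recv(self.timeout)
            if reply is not None and reply[0] == req_id:
                return reply[1]
            # timeout, or a stale reply to an earlier request: retransmit
        raise TimeoutError("no matching reply after %d tries" % self.retries)
```

The data/command streams stay fire-and-forget; only the (comparatively rare) configuration requests pay the retransmission cost.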