From: Kratochvil, B. E. <kra...@ir...> - 2004-04-29 20:03:29
Hello,

Thanks for the reply. After pondering adding another interface for a
while, I independently came to the conclusion you suggested. I'm just
going to use the position interface and not implement all the
functionality of the y axis.

I've been hacking away at it the past couple of days, and I think I've
made some good progress. I have a much better idea of how the Player
server works now. I'd just like to say nice work to the architects out
there :)

I added an aioproxy to the C++ client library. It's pretty simple, but
I can send you the file if you'd like to add it to the source. I think
I will also be extending the dio (and possibly the aio) interfaces to
encompass output as well as the current input. If you're interested,
let me know.

I've been using the p2os driver as a reference so far, and life has
been going well. I have run into a bit of a sticky situation, though.
These Sensoray cards can control only 4 motors apiece, and we need 5,
so I'm pondering how to add a second Sensoray card to the system. The
driver for the card supports it, so that's not a problem; it's tricky
to control multiple cards with a driver set up like the p2os one. Do
you think your new driver will make life any easier on this front? If
not, I'll probably have to ponder a little more.

Thanks again,

Brad

---------------------
Institute of Robotics and Intelligent Systems
http://www.iris.ethz.ch
From: Kratochvil, B. E. <kra...@ir...> - 2004-05-10 14:32:05
Hello again,

I've run into an interesting little problem. I made up a dummy Player
driver that just accepts commands as a position device and returns the
values of the command (xpos, speed, etc.) as the data.

I wanted to see what kind of throughput you could get on a local
machine. I'm able to control the system at roughly 100Hz (the
theoretical limit), but I get some strange results from the C++ proxy.
When I run it at that speed, I call print() every 10 commands I send to
the server. After a couple of seconds, the print() calls stop working,
but the server still receives commands (I've checked this using another
client that only listens for data at a slower rate). By "stops working"
I mean that no new data is displayed to the console. A few seconds
later I start getting this from Player:

Player warning : clientdata.cc:Write():
273 bytes leftover on write() to the client

This happens regardless of whether or not I have the second client
connected.

Now, the plus side of this is that I don't need to run 100Hz control
loops, so it probably won't be a problem. I'm just curious what's going
on. Any ideas on where to start looking at this?

On a related note, I've heard that with the new Linux 2.6 kernel the
scheduling granularity has been reduced to 1ms. Is there anything
keeping Player from working at the increased granularity? (That's
potential for 1000Hz as opposed to 100Hz operation.)

Best regards,

Brad
From: Brian G. <ge...@ro...> - 2004-05-10 22:43:12
On Mon, 10 May 2004, Kratochvil, Bradley Eugene wrote:

> I've run into an interesting little problem. I made up a dummy Player
> driver that just accepts commands as a position device and returns the
> values of the command (xpos, speed, etc.) as the data.
>
> I wanted to see what kind of throughput you could get on a local
> machine. I'm able to control the system at roughly 100Hz (the
> theoretical limit), but I get some strange results from the C++ proxy.
> When I run it at that speed, I call print() every 10 commands I send
> to the server. After a couple of seconds, the print() calls stop
> working, but the server still receives commands (I've checked this
> using another client that only listens for data at a slower rate). By
> "stops working" I mean that no new data is displayed to the console.
> A few seconds later I start getting this from Player:
>
> Player warning : clientdata.cc:Write():
> 273 bytes leftover on write() to the client
>
> This happens regardless of whether or not I have the second client
> connected.
>
> Now, the plus side of this is that I don't need to run 100Hz control
> loops, so it probably won't be a problem. I'm just curious what's
> going on. Any ideas on where to start looking at this?

hi Brad,

Data has backed up on the TCP socket going from client to server. The
warning indicates that both the send and receive buffers are now full,
and so Player has stopped writing data to the client. Btw, you can
monitor this like so:

  $ netstat -A inet -c

You'll see the size of the send and receive queues for all your
Internet-domain sockets, including the ones between Player and your
client(s). In general, the queues on the Player sockets should be
empty.

The reason for this problem is usually that your client is reading too
slowly: the server is producing data faster than your client is
consuming it. As for why that's happening, I'm not sure. Are the data
or command packets particularly large?
From: Kratochvil, B. E. <kra...@ir...> - 2004-05-11 07:41:15
> Data has backed up on the TCP socket going from client to server. The
> warning indicates that both the send and receive buffers are now full,
> and so Player has stopped writing data to the client. Btw, you can
> monitor this like so:
>
>   $ netstat -A inet -c
>
> You'll see the size of the send and receive queues for all your
> Internet-domain sockets, including the ones between Player and your
> client(s). In general, the queues on the Player sockets should be
> empty.
>
> The reason for this problem is usually that your client is reading too
> slowly: the server is producing data faster than your client is
> consuming it. As for why that's happening, I'm not sure. Are the data
> or command packets particularly large? Maybe you're doing a lot of
> processing in your Print() method, or somewhere else in the loop
> between Read()s? Don't sleep on the client side; use the blocking
> Read() as your clock.

That's exactly what's happening. When I run netstat, I can watch the
send and receive queues grow. (On a side note, I'd suggest running
"netstat -A inet -c -p" because it also shows the name of the program
that holds each socket.)

So, with this tool in hand, I've managed to track down the bug (a
little), and will continue to chase it today. This is kind of a
show-stopper for me right now because we need Player to run a
relatively tight control loop (roughly >=50Hz).

So, without further ado: I've written a small client to demonstrate the
problem, which is at the end of this mail. On the server side, you can
run Player with the player-mod-0.1 driver and a slight change to the
driver from PLAYER_READ_MODE to PLAYER_ALL_MODE. Now you have a client
that connects and writes to the server and prints the results in a
tight loop. No additional processing occurs on the client side. If only
a Print() call is executed, there is no problem with the buffers
backing up, but as soon as the SetSpeed command is issued, you can
watch the queues back up. This happens at refresh rates as low as 20Hz
and gets bad at around 40Hz on my machine (2.8GHz Pentium, 1GB RAM).
The Player Recv-Q tends to be the first to back up, but once that gets
large, the rest go too.

So, my questions are: Do you think this behavior is a bug, or should it
be expected? It seems to me that sending position commands at 40Hz to
one positioning device shouldn't be too much to expect. This only
happens with commands and not data, so I think the throughput should be
OK. If you do suspect this is improper behavior, do you have any ideas
where I should start trying to track it down?

Best regards,

Brad

Test program:

#include "playerclient.h"
#include <stdlib.h>   /* for exit() */
#include <unistd.h>

int main(int argc, char *argv[])
{
  PlayerClient robot("localhost");
  robot.SetFrequency(40);   // given in Hz
  PositionProxy pp0(&robot, 0, 'a');

  for(;;)
  {
    if(robot.Read())
      exit(1);

    pp0.SetSpeed(10, 0);
    pp0.Print();
  }
}
From: Brian G. <ge...@ro...> - 2004-05-14 06:55:17
On Tue, 11 May 2004, Kratochvil, Bradley Eugene wrote:

> > Data has backed up on the TCP socket going from client to server.
> > The warning indicates that both the send and receive buffers are
> > now full, and so Player has stopped writing data to the client.
>
> That's exactly what's happening. When I run netstat, I can watch the
> send and receive queues grow. (On a side note, I'd suggest running
> "netstat -A inet -c -p" because it also shows the name of the program
> that holds each socket.)
>
> Now you have a client that connects and writes to the server and
> prints the results in a tight loop. No additional processing occurs
> on the client side. If only a Print() call is executed, there is no
> problem with the buffers backing up, but as soon as the SetSpeed
> command is issued, you can watch the queues back up. This happens at
> refresh rates as low as 20Hz and gets bad at around 40Hz on my
> machine (2.8GHz Pentium, 1GB RAM). The Player Recv-Q tends to be the
> first to back up, but once that gets large, the rest go too.
>
> So, my questions are: Do you think this behavior is a bug, or should
> it be expected? It seems to me that sending position commands at 40Hz
> to one positioning device shouldn't be too much to expect. This only
> happens with commands and not data, so I think the throughput should
> be OK. If you do suspect this is improper behavior, do you have any
> ideas where I should start trying to track it down?

hi Brad,

This does seem to be a bug. And the fact that you're the first to
report it tells me that most users aren't turning up the data rate...

I've played around with your example. Here's what's happening: your
client is sending commands faster than Player is consuming them.
Eventually the client->server socket backs up, and your client's next
attempt to send a command hangs, waiting for the socket to clear. Then
the server->client socket backs up, because the server is still
writing but your client is no longer reading. Then you get the warning
messages on the console from the server.

The question is: why can't the server keep up? I'm not sure yet, but
I'll look into it. In the meantime, here's a workaround: in
server/main.cc, in main(), comment out the call to usleep() and
re-compile Player. Then you can turn your client's rate up to about
1kHz without trouble.

brian.
From: Brian G. <ge...@ro...> - 2004-04-30 15:51:07
On Thu, 29 Apr 2004, Kratochvil, Bradley Eugene wrote:

> I've been hacking away at it the past couple of days, and think I've
> made some good progress. I have a much better idea of how the Player
> server works now. I'd just like to say nice work to the architects
> out there :)

Thanks; we do what we can...

> I added an aioproxy to the c++ clientlib. It's pretty simple, but I
> can send you the file if you'd like to add it to the source. I think
> I will also be extending the dio (and possibly the aio) interfaces to
> encompass output as well as the current input. If you're interested,
> let me know.

That sounds good. If you send a patch, I'll merge it into CVS.

> I've been using the p2os driver as a reference so far, and life has
> been going well. I have run into a bit of a sticky situation, though.
> These Sensoray cards can control only 4 motors apiece, and we need 5,
> so I'm pondering how to add a second Sensoray card to the system. The
> driver for the card supports it, so that's not a problem; it's tricky
> to control multiple cards with a driver set up like the p2os one. Do
> you think your new driver will make life any easier on this front? If
> not, I'll probably have to ponder a little more.

Why is it hard to manage multiple cards with one driver? Can't you just
multiplex the low-level device I/O (e.g., using poll()) into the main
driver thread? And, no, I don't expect that the changes I have in mind
for the driver API will make this situation any easier to handle.

brian.

--
Brian P. Gerkey        ge...@ro...
Stanford AI Lab        http://ai.stanford.edu/~gerkey