From: Avram D. <do...@es...> - 2007-06-15 14:09:01
I think I've clarified this bug a little more. It appears that if the client is killed without being able to send an indication to the server that it is shutting down, then the server treats all subsequent client connections as though they began at the same time as the first one.

So if your first client transmits for 10 seconds and then dies ungracefully, and then a minute passes before the second client connects, you instantly get 70 reports like this from the server:

[  4] 27.0-28.0 sec  0.00 Bytes  0.00 bits/sec  0.000 ms    0/    0 (nan%)

before you start getting real reports. Then, when you get a final report from the server, it includes the full elapsed time in its calculations.

-Avram

On Jun 15, 2007, at 7:56 AM, Avram Dorfman wrote:

> Hello,
>
> I seem to be experiencing a server bug when using reporting intervals.
>
> On the first connection from a client, everything is fine.
>
> If the client dies, maybe only if it dies ungracefully, the next
> time a client connects, the server seems to begin with the time
> interval that the previous client ended with. So if the first
> client ran for 200 seconds, and I have a 1-second reporting
> interval, I see 200 reports whiz by with all-zero data in them
> before I start seeing real reports.
>
> Unfortunately, when the new client terminates, the erroneous time
> is included in the final report calculations by the server.
>
> Has anyone else experienced this? Has anyone figured out a simple
> fix I can stick in the code for it?
>
> Thanks,
> Avram