From: David W. <wo...@pl...> - 2002-10-26 20:23:41

It turns out that I was using gettimeofday incorrectly. I've fixed that and now the results look much more reasonable. Take a look.

Dave

--
----------------------------------------------------------------
I encourage correspondence using GnuPG/OpenPGP encryption.
My public key: http://www.cs.plu.edu/~dwolff/pgpkey.txt
From: Nathan Y. <na...@yo...> - 2002-10-26 22:34:22

Ahhh - much cleaner - I was wondering about that still ;-) I did a bit of looking, but you beat me to it. There is also a clock_t type and a clock() function that do the same thing (no more/less accurate) with clock ticks, thereby allowing us to measure CPU time vs. wall-clock time, but I think we are really only interested in wall clock.

Interesting to note, my speeds seem to show that the first clients connected get slightly higher average transfers:

Forking 21 clients.
0.101624 sec/message, 9729.83 bytes/sec
0.107163 sec/message, 9226.97 bytes/sec
0.113180 sec/message, 8736.43 bytes/sec
0.112972 sec/message, 8752.53 bytes/sec
0.121449 sec/message, 8141.61 bytes/sec
0.123179 sec/message, 8027.26 bytes/sec
... etc.

However, does 9729.83 bytes/sec = ~9.7K/sec? I am not up to speed on my K/k/B/b notation/math. Is this a good speed? It seems respectable on a per-message (less than a second) basis, but overall?

I also get different sec/message rates depending on the number of clients: ~0.001 with < 10, 0.01 with 10-15, 0.1 with 15-20. But that makes sense, as each must share bandwidth with the others.

Nate

On Sat, 2002-10-26 at 13:23, David Wolff wrote:
> It turns out that I was using gettimeofday incorrectly. I've
> fixed that and now the results look much more reasonable. Take
> a look.
>
> Dave
From: David W. <wo...@pl...> - 2002-10-26 22:57:32

Nathan Yocom wrote:
> Ahhh - much cleaner - I was wondering about that still ;-) I did a bit
> of looking, but you beat me to it. There is also a clock_t and clock()
> function that do the same thing (no more/less accurate) with clock
> ticks, thereby allowing us to measure CPU time vs. wall-clock time,
> but I think we are really only interested in wall clock.

Yeah, I think the gettimeofday routine is working fine for what we want (wall-clock time). I can't imagine how CPU time will be of interest to us.

> Interesting to note, my speeds seem to show that the first clients
> connected get slightly higher average transfers:
>
> Forking 21 clients.
> 0.101624 sec/message, 9729.83 bytes/sec
> 0.107163 sec/message, 9226.97 bytes/sec
> ...

Hmmm... Do you find these numbers to be repeatable? I'd guess that a difference that small is probably "within the noise." The good thing is that all clients seem to be getting about the same response rates (within 2%).

> However, does 9729.83 bytes/sec = ~9.7K/sec? I am not up to speed on
> my K/k/B/b notation/math. Is this a good speed?

Yes, I believe that 9729 ~ 9.5K. However, there are 21 clients sharing the same "pipe," so that is actually 9.5K * 21 bytes/sec, or about 200K/sec. Now, this assumes that they all start and end at the same time, which is not exactly true, but hopefully close. Do you agree with that logic?

> I also get different sec/message rates depending on the # of clients:
> ~0.001 with < 10, 0.01 with 10-15, 0.1 with 15-20. But that makes
> sense, as each must share bandwidth with the others.

Yes, I get that too, and it makes perfect sense. The important thing is that they share the load equally. My results show that all timings are consistent for all clients (they seem to be sharing the load equally). Good news!

Dave

--
----------------------------------------------------------------
I encourage correspondence using GnuPG/OpenPGP encryption.
My public key: http://www.cs.plu.edu/~dwolff/pgpkey.txt
From: Nathan Y. <na...@yo...> - 2002-10-27 16:13:07

> Yeah, I think the gettimeofday routine is working fine for what
> we want (wall-clock time). I can't imagine how CPU time will be of
> interest to us.

Ditto.

> Yes, I believe that 9729 ~ 9.5K. However, there are 21 clients sharing
> the same "pipe," so that is actually 9.5K * 21 bytes/sec, or about
> 200K/sec. Now, this assumes that they all start and end at the same
> time, which is not exactly true, but hopefully close. Do you agree
> with that logic?

I agree with the logic, but over a local bus on the same machine we should see a much bigger pipe than 200K/sec... It could have something to do with how we receive the messages too (byte by byte vs. a whole buffer at a time), but I would think we could get MUCH higher. Perhaps it's time we ran the server on shemp and beat against it with a few clients to see what kind of responses we get?

> Yes, I get that too, and it makes perfect sense. The important thing
> is that they share the load equally. My results show that all timings
> are consistent for all clients (they seem to be sharing the load
> equally). Good news!

I agree - very good news! :-)

Nate
From: David W. <wo...@pl...> - 2002-10-26 23:00:58

> Forking 21 clients.
> 0.101624 sec/message, 9729.83 bytes/sec
> 0.107163 sec/message, 9226.97 bytes/sec
> 0.113180 sec/message, 8736.43 bytes/sec
> 0.112972 sec/message, 8752.53 bytes/sec
> 0.121449 sec/message, 8141.61 bytes/sec
> 0.123179 sec/message, 8027.26 bytes/sec
> ...

Whoops, I should have seen this earlier. Of course this list is not in the order the clients are forked; it is in the order the clients finish! So you should always see better timings for the first ones! :)

--
----------------------------------------------------------------
I encourage correspondence using GnuPG/OpenPGP encryption.
My public key: http://www.cs.plu.edu/~dwolff/pgpkey.txt
From: Nathan Y. <na...@yo...> - 2002-10-27 16:10:23

> Whoops, I should have seen this earlier. Of course this list is not
> in the order the clients are forked; it is in the order the clients
> finish! So you should always see better timings for the first
> ones! :)

Doh! Good catch... sometimes it's the simple things ;-) Perhaps we should have each fork also output its relative position to the others (via a counter), or perhaps its token or something - just so we can identify trends (if there are any) relating to order of connection, etc.

Nate
From: David W. <wo...@pl...> - 2002-10-27 20:47:16

> Perhaps we should have each fork also output its relative position to
> the others (via a counter), or perhaps its token or something - just
> so we can identify trends (if there are any) relating to order of
> connection, etc.

Good idea, I just went ahead and added that.

Dave

--
----------------------------------------------------------------
I encourage correspondence using GnuPG/OpenPGP encryption.
My public key: http://www.cs.plu.edu/~dwolff/pgpkey.txt
From: David W. <wo...@pl...> - 2002-10-27 20:52:09

>> Yes, I believe that 9729 ~ 9.5K. However, there are 21 clients
>> sharing the same "pipe," so that is actually 9.5K * 21 bytes/sec, or
>> about 200K/sec. Now, this assumes that they all start and end at the
>> same time, which is not exactly true, but hopefully close. Do you
>> agree with that logic?
>
> I agree with the logic, but over a local bus on the same machine we
> should see a much bigger pipe than 200K/sec... It could have something
> to do with how we receive the messages too (byte by byte vs. a whole
> buffer at a time), but I would think we could get MUCH higher.
> Perhaps it's time we ran the server on shemp and beat against it with
> a few clients to see what kind of responses we get?

Hmmm... You make an interesting point. I guess the question is, how does Linux handle a localhost socket connection? As far as I know, it uses TCP/IP even though the client and server are on the same host. Hence my guess is that it is not just communication over the bus; the data may be routed through some kind of serial interface. I'm guessing that we should expect slower performance than, say, regular IPC.

Yes, we definitely should test it out with clients on different machines. I've already done a little of that on my home network, but I haven't compared the results to localhost communications.

> I agree - very good news! :-)

Yeah! Have a beer in celebration (hey, any chance to celebrate, eh?).

Dave

--
----------------------------------------------------------------
I encourage correspondence using GnuPG/OpenPGP encryption.
My public key: http://www.cs.plu.edu/~dwolff/pgpkey.txt
From: Nathan Y. <na...@yo...> - 2002-10-27 21:58:28

On Sun, 2002-10-27 at 12:52, David Wolff wrote:
> Hmmm... You make an interesting point. I guess the question is, how
> does Linux handle a localhost socket connection? As far as I know, it
> uses TCP/IP even though the client and server are on the same host.
> Hence my guess is that it is not just communication over the bus; the
> data may be routed through some kind of serial interface. I'm
> guessing that we should expect slower performance than, say, regular
> IPC.

True - good point. I did some tests with other network services (apache/sftp/ftp) and file transfer rates on a localhost connection, and indeed I seem to transfer a 171 MB file in a little over a minute (making for about 300K/sec) - so we may be on the right track.

> Yes, we definitely should test it out with clients on different
> machines. I've already done a little of that on my home network, but
> I haven't compared the results to localhost communications.

Cool - why don't we meet on Thursday and give this a shot? We can run it on our office machines as well as the oracle machine and shemp itself... should give us a pretty good idea of how things look.

> Yeah! Have a beer in celebration (hey, any chance to celebrate, eh?).

Celebration ;-) means more than just A beer, hehe. Sounds like a damn good idea though, I think I will follow up on that.

Later!

Nate