From: David W. <wo...@pl...> - 2002-10-26 22:57:32
Nathan Yocom wrote:
> Ahhh - much cleaner - I was wondering about that still ;-)  I did a
> bit of looking, but you beat me to it.. there is also a clock_t and
> clock() function that do the same thing (no more/less accurate) with
> clock ticks, thereby allowing us to measure CPU time vs. wall-clock
> time, but I think we are really only interested in wall clock.

Yeah, I think the gettimeofday routine is working fine for what we want
(wall-clock time). I can't imagine how CPU time would be of interest to
us. (I've put a minimal sketch of the two calls at the end of this
message, after my sig.)

> Interesting to note, my speeds seem to show that the first clients
> connected get slightly higher average transfers:
>
> Forking 21 clients.
> 0.101624 sec/message, 9729.83 bytes/sec
> 0.107163 sec/message, 9226.97 bytes/sec
> 0.113180 sec/message, 8736.43 bytes/sec
> 0.112972 sec/message, 8752.53 bytes/sec
> 0.121449 sec/message, 8141.61 bytes/sec
> 0.123179 sec/message, 8027.26 bytes/sec
> ...

Hmmm... Do you find these numbers to be repeatable? I'd guess that a
difference that small is probably "within the noise." The good thing is
that all clients seem to be getting about the same response rate
(within 2%).

> etc.. However, does 9729.83 bytes/sec = ~9.7K/sec? I am not up to
> speed on my K/k/B/b etc. notation/math. Is this a good speed? It
> seems respectable on a per-message (less than a second) basis.. but
> overall?

Yes, I believe 9729 bytes/sec is about 9.5K/sec (dividing by 1024
bytes per K). However, there are 21 clients sharing the same "pipe,"
so the aggregate is actually 9.5K/sec * 21, or about 200K/sec. Now,
this assumes that they all start and end at the same time, which is
not exactly true, but hopefully close. Do you agree with that logic?
(The arithmetic is worked through at the end of this message as well.)

> I also get different sec/message rates depending on the # of clients:
> ~0.001 with < 10, 0.01 with 10-15, 0.1 with 15-20. But that makes
> sense, as each must share bandwidth with the others.

Yes, I get that too, and it makes perfect sense. The important thing is
that they share the load equally, and my results show that the timings
are consistent across all clients. Good news!

Dave

----------------------------------------------------------------
I encourage correspondence using GnuPG/OpenPGP encryption.
My public key: http://www.cs.plu.edu/~dwolff/pgpkey.txt
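
P.S. For the archives, here is a minimal sketch of the two timing
calls we discussed. This is illustrative only, not our actual client
code; the busy loop just stands in for the real send/recv work:

    #include <stdio.h>
    #include <sys/time.h>   /* gettimeofday(), struct timeval */
    #include <time.h>       /* clock(), CLOCKS_PER_SEC */

    int main(void)
    {
        struct timeval start, end;
        clock_t cpu_start, cpu_end;
        volatile long sink = 0;
        long i;
        double wall, cpu;

        gettimeofday(&start, NULL);      /* wall-clock start */
        cpu_start = clock();             /* CPU-time start */

        for (i = 0; i < 50000000L; i++)  /* stand-in for real work */
            sink += i;

        cpu_end = clock();
        gettimeofday(&end, NULL);

        /* wall-clock seconds, including any time spent blocked */
        wall = (end.tv_sec - start.tv_sec)
             + (end.tv_usec - start.tv_usec) / 1e6;

        /* CPU seconds actually consumed by this process */
        cpu = (double)(cpu_end - cpu_start) / CLOCKS_PER_SEC;

        printf("wall: %f sec, cpu: %f sec\n", wall, cpu);
        return 0;
    }

For a client that spends most of its time blocked waiting on the
socket, the wall number comes out much larger than the cpu number,
which is why gettimeofday is the right tool for measuring transfer
rates.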
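
P.P.S. The bandwidth arithmetic spelled out, taking K = 1024 bytes and
using the measured figure from your first client:

    #include <stdio.h>

    int main(void)
    {
        double per_client = 9729.83;  /* measured bytes/sec, 1 client */
        int    nclients   = 21;       /* clients sharing the pipe */

        /* 9729.83 / 1024 ~= 9.5, so each client sees ~9.5K/sec */
        printf("per client: %.1f K/sec\n", per_client / 1024.0);

        /* assuming all 21 run concurrently over the whole interval,
           the aggregate is the sum: 9.5 * 21 ~= 200K/sec */
        printf("aggregate:  %.1f K/sec\n",
               per_client * nclients / 1024.0);
        return 0;
    }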