From: Donal K. F. <don...@ma...> - 2011-03-26 10:50:52
On 25/03/2011 13:49, miguel sofer wrote:
> I can now document my misgivings about tclbench: it is too noisy. It may
> be suitable for some comparisons, but its value as a guide for
> optimization work is at least doubtful.

I suspect some of the issue is that tclbench itself is adding randomness
of its own (i.e., there's a lot of instrument error). When I use the
microbenchmark code I wrote for TclOO, I get reasonably consistent
results so long as I'm not looking to get below about 1% variability
(and don't run it on a machine that's overloaded with other stuff, of
course).

The trick is that it builds the test script, runs it once inside [time]
and throws away the result, runs it again inside [time] with a small
number of iterations to get an estimate of the speed, and then computes
the number of iterations required to run the benchmark for a second.
It's *that* value which it then uses for the performance-measurement
benchmarking run, and even then it double-checks that it got at least a
second's worth of time (if not, it increases the number of iterations
and tries again; IIRC, there's also a minimum number of iterations).
Finally, it reports the reciprocal of the value, because I believe that
bigger is better. :-)

As I said, this seems to give reliable results when dealing with very
fast parts of Tcl such as the command dispatch mechanism. I don't know
how effective it would be when faced with longer-running benchmarks.
It's also not dealing with benchmarks that do heavy memory allocation or
I/O; I suspect those will always have a higher degree of variability
because they can/will depend on OS interaction.

Donal.
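For anyone wanting to try the same idea outside Tcl, here is a minimal
Python sketch of the calibration scheme described above (warm-up run,
small probe run, scale up to roughly a second, double-check, report the
reciprocal). It is not Donal's actual TclOO code; all names
(calibrate_and_measure, bench, target_secs, min_iters) are illustrative,
and bench(n) stands in for the generated script run under [time].

```python
import time

def calibrate_and_measure(bench, target_secs=1.0, min_iters=10):
    """Auto-calibrating microbenchmark in the spirit of the scheme above.

    bench(n) runs the benchmark body n times; it plays the role of the
    built test script executed inside [time].
    """
    def timed(n):
        start = time.perf_counter()
        bench(n)
        return time.perf_counter() - start

    timed(1)                        # warm-up run; result thrown away
    probe = max(timed(100), 1e-9)   # small run to estimate the speed
    # Number of iterations needed to run for about target_secs,
    # subject to a minimum iteration count.
    iters = max(min_iters, int(100 * target_secs / probe))

    while True:
        elapsed = timed(iters)
        if elapsed >= target_secs:  # double-check: got a full second?
            break
        iters *= 2                  # too fast: increase and try again

    per_call = elapsed / iters
    return 1.0 / per_call           # reciprocal, so bigger is better

# Example: measure a trivial operation
rate = calibrate_and_measure(lambda n: [len("abc") for _ in range(n)])
```

The measurement run dominates the cost, so the probe's noise only
affects how many retries the doubling loop needs, not the final figure.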