Mala Anand wrote:
>>Since you consider idle time important, what was the idle time in the
>>SPECweb numbers you *did* use? That wasn't included in Troy's brief
>>note. Did the idle time gain hold in a validated run?
> Troy's runs are 100% compliant. That means 100% CPU utilized at the
> conforming throughput rate.
There is absolutely no direct relationship between conformance and CPU
utilization. A 100% compliant load with 10 connections is going to use
quite a bit less CPU than a 100% compliant load with 2300 connections.
Yes, as you add more connections you use more CPU, but there is no fixed
ratio between the two.
In a perfect world where the OS was never waiting on I/O, never making
scheduling mistakes, and where the benchmark is calibrated perfectly, and
where there are no random variances in your testing conditions, you will
get 100% conforming connections, with every single last tick of every
single CPU in the system used in a productive manner. I know for a fact
that at most one of these conditions is met in my testing environment.
Troy's is no different.
One of the tenets of Specweb99 is that a compliant connection is served at
a rate of at least 320 kbytes/sec.
For argument's sake, let's pretend that we have a perfect world.
- I have a 32 Mbyte/sec pipe coming out of my server.
- There is no routing or transport-layer overhead
- I have exactly enough CPU to satisfy 100 connections
- My CPUs are 100% utilized
- The OS is completely fair to each client
- Each connection is served at 320 kbytes/sec
- There is 100% conformance when tested
Now, I go to 101 connections. The bandwidth awarded to each one is
computed the same way as before: total_bandwidth/clients =
bandwidth_per_client, which is now 316.831 kbytes/sec. Since the OS is
fair, each client gets exactly this share, no more, no less. Since the
aggregate bandwidth hasn't changed, the CPU time hasn't changed either:
still 100%. But, according to Specweb, the compliance is 0.00000%,
because every connection is served at less than 320 kbytes/sec. Tricky,
huh?
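The arithmetic above can be sketched in a few lines. This is just an
illustration of the idealized fair-share scenario from this note, not
SPECweb99 harness code; the pipe size is taken from the "exactly enough
for 100 conforming connections" assumption:

```python
# Idealized fair-share model: pipe sized for exactly 100 conforming
# connections, OS divides bandwidth perfectly evenly. Not real
# SPECweb99 code -- just the toy scenario from this note.
PIPE_KBYTES_PER_SEC = 100 * 320   # enough for exactly 100 conforming clients
CONFORMING_RATE = 320.0           # Specweb99 floor, kbytes/sec

def per_client_rate(clients):
    """A perfectly fair OS gives every client an equal share of the pipe."""
    return PIPE_KBYTES_PER_SEC / clients

def conformance(clients):
    """Fraction of connections served at >= 320 kbytes/sec."""
    return 1.0 if per_client_rate(clients) >= CONFORMING_RATE else 0.0

print(per_client_rate(100))   # 320.0 kbytes/sec each
print(per_client_rate(101))   # ~316.83 kbytes/sec each
print(conformance(100))       # 1.0 -> 100% compliant
print(conformance(101))       # 0.0 -> 0% compliant, same CPU load
```

Note the cliff: adding one connection moves conformance from 100% to 0%
while aggregate bandwidth, and therefore CPU time, stays identical.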
Conformance percentage across runs means basically zilch. You can _never_
say that a run is X% better because it increased conformance by Y%. You
may be able to make a case that RunA was better than RunB, but be very,
very careful about how you quantify it.
When performance testing Linux, the area of maximum throughput is rarely
the most interesting place to analyze. We don't care when it is working
well, we want to know where it breaks! Take your test and triple the
requested connections. Now, that's interesting! See how your patch holds
up under severe torture. (The wli method)
This is how I understand Specweb. I would be delighted if someone more
knowledgeable than me can teach me more about it.
>>I would suggest that even with the variance, this data would have been
>>more interesting than the "cycle measurements" you chose to use. That
>>is an unfamiliar measurement tool to most of the community.
> I guess that is just your opinion. After providing the information
> Ben Lehaise asked for, I have not seen any complaint from the open source
> community. You all didn't even look at Dave's data properly and jumped
> to conclusions.
I've found silence in the open source community to be more damning than
any amount of complaining.