On Tue, 1 Jan 2002, Jeff Dike wrote:
> I've seen the same sorts of things, and that casts doubt on the reliability
> of the rest of the numbers as far as I'm concerned.
> It's hard enough for lmbench to get good numbers on a physical machine, so
> when you stack a UML on top and run lmbench inside it, things just get worse.
> So, I've been pretty much ignoring lmbench numbers (unless they show a truly
> large change).
As an absolute measure, yes, it's not very precise, but as a relative
one (comparing one UML to another), it's still interesting.
Another thing I did was to run UML under the -mjc kernel with preemption
enabled. Syscall latency went down from 32us to 20us.
> The scheme that I've considered (but never actually done) to minimize
> variation is to boot the host single-user, with nothing
> mounted/running except what's necessary to run UML, boot a minimal UML
> and run lmbench. Then do exactly the same procedure several more
> times to see if the numbers are somewhat consistent.
That's what I did. Not single-user, but with pretty much everything
killed off except a few necessary things (sshd), which I don't expect to
affect anything.
> Or if you want the UML lmbench numbers to benefit from a warm host
> cache, don't reboot the host in between, just keep rebooting UML and
> running lmbench and see if the results converge.
That would only affect the filesystem numbers, and I specifically didn't run
> The thing I did in 2.4.17 was change the number of host context
> switches per *UML context switch*, not per UML system call. And that
> doesn't seem to have made much difference in the lmbench numbers,