From: Andi K. <ak...@su...> - 2001-02-10 11:14:45
On Fri, Feb 09, 2001 at 03:22:50PM -0500, Hubertus Franke wrote:

> Mike, it goes to the heart of the question whether we MUST or NEED to stick
> to the current scheduler semantics.

When it complicates the code for no good reason, it's probably better not to
stick to them slavishly. Linux scheduling behaviour has changed in the past
too, especially on SMP (e.g. from 2.2 to 2.4); only UP has been relatively
stable. So it's a moving target.

> I have taken the MQ scheduler and sub-divided it into the cpu pools. I have
> posted regarding this already under our latest status report for the
> scheduling:
> http://lse.sourceforge.net/scheduling/results012501/status.html#Load%20Balancing
>
> Running the chatroom with 30/300 gives the following results. (I will post
> these on Monday on our lse.sourceforge.net/scheduling site
> for general consumption.)

I think one problem is that it has not been generally accepted that chatroom
(with so many threads on the run queue) is a good benchmark. A better
benchmark would, for example, exercise wakeup optimizations (where Linux is
relatively poor at the moment, and latency is important), with a reasonable
number of running threads on each CPU that get woken up by someone else.
Unfortunately I cannot offer a good benchmark for this.

> I think that the usage of cpu_allowed can be tied into this. cpu_allowed
> is only a simple mechanism and not a policy. I think we need

Just call it a "hack for Tux" to work around the reordering problem in the
network stack. I don't think anybody else is using it yet. Real dynamic irq
affinity redirection would be much better anyway.

-Andi