From: Martin J. B. <mb...@ar...> - 2002-10-06 00:39:34
> Could you send me a /proc/slabinfo output from a NUMA system with your
> patch applied, for an arbitrary stress test (dbench, kernel compile,
> whatever you use)?
>
> Which percentage of the free calls are foreign? I.e. what's the reason
> for accessing the kmem_cache_node_t structures? Are the majority of the
> accesses from the local node, when it tries to refill/drain its per-cpu
> array, or from remote nodes, when they return objects to the correct
> node?
>
> If the majority is from remote nodes, then I don't see the reason for
> using per-node spinlocks and pointers for the node structure (+ node
> local memory) in kmem_cache_t - just added complexity, without reducing
> cross-node memory traffic.

That'll depend on what scheduler he uses as well, I suspect. I would hope
the addition of a NUMA scheduler would help locality a lot; if you run
just raw 2.4 without even the O(1) scheduler, your results will probably
be totally wacko ;-)

M.