From: Hubertus F. <fr...@wa...> - 2001-11-05 16:32:17
Well, my 2.5 cents: by "chunk" I usually understand a smaller piece of
a larger thing, so the term doesn't really fit here.  "MemoryBlock"
would sit better with me.

-- Hubertus

* Paul Jackson <pj...@en...> [20011102 20:09]:
> On Fri, 2 Nov 2001, Paul Dorwin wrote:
> > I have a few comments regarding the cpumemset document.
>
> Excellent - thank you!
>
> > You refer to memory as memory nodes.  Other times you refer to
> > the same thing as a node.  And still other times you refer to a
> > node in the more familiar context of a container.  To me, the
> > term node refers to something which can contain cpus, memory,
> > and IO.  I would be more comfortable with some other term which
> > refers to a range of memory.
>
> I would be more comfortable with another name as well ;).  Any
> suggestions?  If I had to pick an alternative right now, it would
> be "memory chunk".
>
> Earlier I tried "memory bank", but that had too many prior
> connotations for me.  I used "memory node" in this version because
> it seemed that, on the architectures we are currently concerned
> with (the big ia64 NUMA systems I knew of), there was a one-to-one
> relation between chunks of memory and system nodes.
>
> I thought I had been fairly pedantic in using "memory node"
> everywhere, except sometimes when using the term multiple times in
> a single sentence, where I hoped that secondary references could
> be abbreviated to just "nodes" without confusion.  If you see a
> contrary instance, I'd be happy to fix it.
>
> Or if you have a better name, I'm interested.
>
> > ---
> >
> > In 'Using CpuMemSets' you say:
> >
> >   On systems supporting hot-swap of CPUs (or even memory, if
> >   someone can figure that out) the system administrator would be
> >   able to change CPUs and remap by changing the application's
> >   CpuMemMap, without the application being aware of the change.
> >
> > How are you doing this?
>
> See the Bulk Remap call (CMS_BULK_ALL).
>
> It goes through and alters any CpuMemMap as requested, perhaps to
> remove a cpu or memory node (according to its system numbering)
> from service by replacing that system number with another.  The
> implementation then walks through the tasks and vm areas in the
> system, recomputing cpus_allowed and zone lists as need be, to
> reflect the changed CpuMemMaps.  By the time that one system call
> returns, no further task will be scheduled on the mapped-out cpu,
> and no further memory will be allocated on the mapped-out memory
> node.
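For concreteness, a minimal C sketch of the bulk remap just described.
Only the call name cmsBulkRemap and the CMS_BULK_ALL selector appear in
this thread; the cms_remap_t layout, the prototype, the selector value,
and everything else below are assumptions made for illustration.

    /* Hypothetical sketch: retire system cpu 5 by replacing it with
     * system cpu 7 in every CpuMemMap on the system.  Struct layout,
     * prototype, and selector value are assumed, not taken from the
     * draft API. */
    #include <stdio.h>

    #define CMS_BULK_ALL 1                  /* selector value assumed */

    typedef struct {
        int old_syscpu;                     /* system cpu being retired */
        int new_syscpu;                     /* its replacement */
    } cms_remap_t;                          /* layout assumed */

    /* Prototype assumed for illustration: remap `npairs` cpu pairs. */
    extern int cmsBulkRemap(int selector, cms_remap_t *pairs, int npairs);

    int main(void)
    {
        cms_remap_t pair = { 5, 7 };

        /* Per the description above: the kernel rewrites any CpuMemMap
         * naming cpu 5, then recomputes cpus_allowed and zone lists.
         * When the call returns, nothing further runs on cpu 5. */
        if (cmsBulkRemap(CMS_BULK_ALL, &pair, 1) < 0) {
            perror("cmsBulkRemap");
            return 1;
        }
        return 0;
    }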
> > Will there be /proc/<pid>/cpumemset and /proc/<pid>/cpumemmap
> > interfaces?
>
> No plans for this, though it's possible.
>
> > A /proc interface would also be useful for managing an
> > application which is already running.
>
> Use the cms*() calls with "pid" arguments, such as:
>
>   cmsQueryCMM, cmsSetCMM, cmsQueryCMSbyPid, cmsSetCMSbyPid,
>   cmsBulkRemap
>
> to manage currently running applications.  I find the use of /proc
> to manage a system, as opposed to (1) reporting on it, or (2)
> toggling obscure debug hooks, an ugly interface, and resist such.
>
> From the latest work I see from Rusty Russell <ru...@ru...>:
>
>   [PATCH] 2.5 PROPOSAL: Replacement for current /proc of shit.
>
> I am not alone in this opinion.
>
> > One could view existing memmaps with cat /proc/123/cpumemmap.
> > A line for each memmap used by the application would be printed.
> > Using your example, a line could be displayed as follows:
> >
> >   addr size 8 4,5,6,7,8,9,10,11 2 1,2
> >
> > Using your example again, could one modify an existing
> > application from the command line by specifying a memmap as
> > follows:
> >
> >   echo "migrate addr size 8 4,5,6,7,8,9,10,11 2 1,2" > /proc/123/cpumemmap
>
> ugh - try instead:
>
>   cmsSetCMM (CMS_CURRENT, 123, 0, 0, &cmm);
>
> > And finally, the process could be migrated to processors 4-11
> > via echo ff0 > /proc/123/cpus_allowed.
>
> There is no visible "cpus_allowed" in CpuMemSets - rather,
> cpus_allowed is an implementation detail of the task scheduler for
> systems with fewer than 64 cpus.
>
> Rather, we need command line utilities, built on the CpuMemSets
> infrastructure, to support such migration and related tasks.
>
> > You could also use /proc/cpumemset and /proc/cpumemmap to alter
> > the system defaults.
>
> You _could_.  I hope not.  Also, there is no particularly
> interesting "system default" map or set beyond the initial one the
> kernel sets up during boot and uses when starting the init
> process.  From that point forward, all maps and sets are inherited
> or user specified.
>
> > ---
> >
> > In 'Processors, Memory and Distance' your discussion of <cpu,mem>
> > distances deals primarily with cache warmth issues.  Should you
> > also discuss the disadvantages of scheduling a process on a cpu
> > further from where the physical pages reside?
>
> My recollection is that I have two distances:
>
>   <cpu, mem> - for modeling cpu-to-memory latency/bandwidth
>   <cpu, cpu> - for modeling cache warmth
>
> Perhaps something in my presentation is confusing these two?
>
> > For example, you run on node 0 and allocate pages from the
> > memory on that node.  If you sleep (maybe on IO), you no longer
> > have any cache warmth.  However, you would still incur a
> > potentially more expensive penalty if you are scheduled on a cpu
> > on another node, because you now have to pull all data into
> > cache over a longer latency/lower bandwidth pipe.
>
> This example sounds like it is getting at <cpu, mem> distances.
>
>                  I won't rest till it's the best ...
>                  Manager, Linux Scalability
>                  Paul Jackson <pj...@sg...> 1.650.933.1373
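For completeness, a fuller sketch of the cmsSetCMM() call quoted above,
building a cpumemmap that mirrors the example line "8 4,5,6,7,8,9,10,11
2 1,2" (8 application cpus backed by system cpus 4-11, and 2 application
memory nodes backed by system nodes 1 and 2).  Only the call
cmsSetCMM(CMS_CURRENT, 123, 0, 0, &cmm) appears in the thread; the
cpumemmap_t layout, the prototype, the CMS_CURRENT value, and the
meaning of the 0, 0 arguments are assumptions made for illustration.

    /* Hypothetical sketch of the cmsSetCMM() call quoted above. */
    #include <stdio.h>
    #include <stddef.h>
    #include <sys/types.h>

    #define CMS_CURRENT 0   /* selector value assumed */

    typedef struct {
        int  ncpus;         /* number of application cpus */
        int *cpus;          /* cpus[i] = system cpu backing app cpu i */
        int  nmems;         /* number of application memory nodes */
        int *mems;          /* mems[i] = system node backing app mem i */
    } cpumemmap_t;          /* layout assumed, for illustration only */

    /* Prototype assumed: apply cmm to pid's map over [start, start+len). */
    extern int cmsSetCMM(int selector, pid_t pid, void *start, size_t len,
                         cpumemmap_t *cmm);

    int main(void)
    {
        int cpus[] = { 4, 5, 6, 7, 8, 9, 10, 11 };
        int mems[] = { 1, 2 };
        cpumemmap_t cmm = { 8, cpus, 2, mems };

        /* start = NULL, len = 0 as in the quoted call, read here as
         * "apply to the whole address space" (an assumption). */
        if (cmsSetCMM(CMS_CURRENT, 123, NULL, 0, &cmm) < 0) {
            perror("cmsSetCMM");
            return 1;
        }
        return 0;
    }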