From: Erich F. <ef...@hp...> - 2004-10-10 12:27:27
|
On Saturday 09 October 2004 01:50, Nick Piggin wrote:
> Erich Focht wrote:
> >>I personally like the hierarchical idea. Machine topologies tend to
> >>look tree-like, and every useful sched_domain layout I've ever seen has
> >>been tree-like. I think our interface should match that.
> >
> > I like the hierarchical idea, too. The natural way to build it would
> > be by starting from the cpus and going up. This tree stands on its
> > leafs... and I'm not sure how to express that in a filesystem.
>
> Why would you ever want to play around with the internals of the
> thing though? Provided you have a way to create exclusive sets of
> CPUs, when would you care about doing more?

Three reasons come immediately to my mind:

- Move the sched domains setup out of the kernel into user space. With
  my proposal of a filesystem with directory operations only (just moving
  cpuX virtual files around), the boot setup should just be:

      global/
          cpu1
          cpu2
          ...

  The rest could be done in a very machine- and load-specific way in user
  space. This way the kernel scheduler wouldn't need to struggle to keep
  up with the characteristics of new machines as they appear on the radar.

- I sometimes want to create/destroy isolated partitions at a high rate
  (through a batch scheduler), and a reasonable API enables me to keep the
  domains consistent at any time.

- Flexibility of isolated partitions is a bare necessity. If you simply
  divide your system into an interactive and a batch partition, you'd
  certainly want to decrease the size of the interactive partition during
  the night without rebooting the machine...

Regards,
Erich
From: Erich F. <ef...@hp...> - 2004-10-10 12:48:24
|
On Saturday 09 October 2004 03:05, Matthew Dobson wrote:
> On Fri, 2004-10-08 at 15:51, Erich Focht wrote:
> > We're building this from bottom (cpus) up and need to take care of the
> > unlinking of the global domain when inserting something. But otherwise
> > this could be sufficient.
>
> I personally like to think of it from the top down. The internal API I
> came up with looks like:
>
> 	create_domain(parent_domain, type);
> 	destroy_domain(domain);
> 	add_cpu_to_domain(cpu, domain);
>
> So you basically build your domain from the top down, from your 1 or
> more top-level domains, down to your lowest level domains. You then add
> cpus (1 or more per domain) to the leaf domains in the tree you built.
> Those cpus cascade up the tree, and the whole tree knows exactly which
> cpus are contained in each domain in it.
>
> I think these are the three main functions you need to construct pretty
> much any conceivable, useful sched_domains hierarchy.

I'd suggest adding:

	reparent_domain(domain, new_parent_domain);

When I said that the domains tree is standing on its leaves I meant
that the core components are the CPUs. Or the Nodes, if you already
have them. Or some supernodes, if you already have them. In a "normal"
filesystem you have the root directory, create subdirectories and
create files in them. Here you already have the files but not the
structure (or the simplest possible structure).

Anyhow, the 4 command API can well be the guts of the directory
operations API which I proposed.

Regards,
Erich
From: Matthew D. <col...@us...> - 2004-10-12 22:46:02
|
On Sun, 2004-10-10 at 05:45, Erich Focht wrote:
> On Saturday 09 October 2004 03:05, Matthew Dobson wrote:
> > I personally like to think of it from the top down. The internal API I
> > came up with looks like:
> >
> > 	create_domain(parent_domain, type);
> > 	destroy_domain(domain);
> > 	add_cpu_to_domain(cpu, domain);
>
> I'd suggest adding:
> 	reparent_domain(domain, new_parent_domain);
>
> When I said that the domains tree is standing on its leaves I meant
> that the core components are the CPUs. Or the Nodes, if you already
> have them. Or some supernodes, if you already have them. In a "normal"
> filesystem you have the root directory, create subdirectories and
> create files in them. Here you already have the files but not the
> structure (or the simplest possible structure).
>
> Anyhow, the 4 command API can well be the guts of the directory
> operations API which I proposed.
>
> Regards,
> Erich

I like that suggestion. Paul has been sucked away to other work, giving
me a chance to work on my code, so I will be focusing on getting the
cpusets/CKRM style interface working with my sched_domains API. I like
the reparent_domain() suggestion, and it makes sense with the 'mv'
command, in regards to the filesystem model that cpusets/CKRM currently
uses.

-Matt
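[For illustration: the thread never shows the proposed internal API being driven, so here is a minimal, self-contained user-space sketch of the top-down build order Matthew describes (create parent domains first, then leaves, then add cpus, which "cascade up the tree"). Only the function names come from the discussion; the struct layout, type strings and bitmask representation are assumptions made for this toy example.]

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy stand-ins for the proposed API -- illustrative only. */
    struct domain {
            struct domain *parent;
            unsigned long cpus;     /* bitmask of cpus contained in this domain */
            const char *type;
    };

    static struct domain *create_domain(struct domain *parent, const char *type)
    {
            struct domain *d = calloc(1, sizeof(*d));
            d->parent = parent;
            d->type = type;
            return d;
    }

    static void add_cpu_to_domain(int cpu, struct domain *d)
    {
            /* "Those cpus cascade up the tree": mark the cpu in every ancestor. */
            for (; d; d = d->parent)
                    d->cpus |= 1UL << cpu;
    }

    static void reparent_domain(struct domain *d, struct domain *new_parent)
    {
            d->parent = new_parent; /* real code would also migrate d->cpus */
    }

    int main(void)
    {
            struct domain *top   = create_domain(NULL, "system");
            struct domain *node0 = create_domain(top, "node");
            struct domain *node1 = create_domain(top, "node");
            int cpu;

            for (cpu = 0; cpu < 2; cpu++)
                    add_cpu_to_domain(cpu, node0);  /* cpus 0-1 */
            for (cpu = 2; cpu < 4; cpu++)
                    add_cpu_to_domain(cpu, node1);  /* cpus 2-3 */

            printf("top knows cpus 0x%lx, node0 0x%lx, node1 0x%lx\n",
                   top->cpus, node0->cpus, node1->cpus);

            reparent_domain(node1, node0);          /* Erich's suggested operation */
            return 0;
    }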
From: Dinakar G. <di...@in...> - 2005-04-18 20:08:37
Attachments:
dyn-sd.patch
|
Here's an attempt at dynamic sched domains aka isolated cpusets.

o This functionality is on top of CPUSETs and provides a way to completely
  isolate any set of CPUs dynamically.
o There is a new cpu_isolated flag that allows users to convert an
  exclusive cpuset to an isolated one.
o The isolated CPUs are part of their own sched domain. This ensures that
  the rebalance code works within the domain, and prevents the overhead of
  a cpu trying to pull tasks only to find that their cpus_allowed masks do
  not allow them to be pulled. However, it does not kick existing processes
  off the isolated domain.
o There is very little code change in the scheduler sched domain code. Most
  of it is just splitting up of the arch_init_sched_domains code to be
  called dynamically instead of only at boot time. It has only one API,
  which takes in the map of all cpus affected and the two new domains to
  be built:

      rebuild_sched_domains(cpumask_t change_map, cpumask_t span1, cpumask_t span2)

There are some things that may/will change:

o This has been tested only on x86 [8 way -> 4 way with HT]. Still needs
  work on other arch's.
o I didn't get a chance to see Nick Piggin's RCU sched domains code as yet,
  but I know there would be changes here because of that...
o This does not support CPU hotplug as yet.
o Making a cpuset isolated manipulates its parent's cpus_allowed mask. When
  viewed from userspace this is represented as follows:

      [root@llm11 cpusets] cat cpus
      0-3[4-7]

  This indicates that CPUs 4-7 are isolated and are part of some child
  cpuset(s).

Appreciate any feedback.

Patch against linux-2.6.12-rc1-mm1.

 include/linux/init.h  |    2
 include/linux/sched.h |    1
 kernel/cpuset.c       |  141 ++++++++++++++++++++++++++++++++++++++++++++++++--
 kernel/sched.c        |  109 +++++++++++++++++++++++++------------
 4 files changed, 213 insertions(+), 40 deletions(-)

-Dinakar
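[For illustration: a rough sketch of how an in-kernel caller might use the single entry point quoted above to split an 8-cpu box into two 4-cpu domains. The rebuild_sched_domains() signature is taken from the patch description; the wrapper function and the particular cpu numbers are hypothetical.]

    /* Hypothetical in-kernel caller: partition cpus 0-7 into [0-3] and [4-7]. */
    static void split_into_two_domains(void)
    {
            cpumask_t span1 = CPU_MASK_NONE;
            cpumask_t span2 = CPU_MASK_NONE;
            cpumask_t change_map;
            int i;

            for (i = 0; i < 4; i++)
                    cpu_set(i, span1);              /* cpus 0-3 */
            for (i = 4; i < 8; i++)
                    cpu_set(i, span2);              /* cpus 4-7 */

            /* change_map is the union of the two new spans. */
            cpus_or(change_map, span1, span2);

            /*
             * Build one sched domain per span; per the patch, an empty
             * span (CPU_MASK_NONE) would simply be skipped.
             */
            rebuild_sched_domains(change_map, span1, span2);
    }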
From: Nick P. <nic...@ya...> - 2005-04-18 23:44:20
|
Dinakar Guniguntala wrote:
> Here's an attempt at dynamic sched domains aka isolated cpusets
>

Very good, I was wondering when someone would try to implement this ;)
It needs some work. A few initial comments on the kernel/sched.c change -
sorry, don't have too much time right now...

> --- linux-2.6.12-rc1-mm1.orig/kernel/sched.c	2005-04-18 00:46:40.000000000 +0530
> +++ linux-2.6.12-rc1-mm1/kernel/sched.c	2005-04-18 00:47:55.000000000 +0530
> @@ -4895,40 +4895,41 @@ static void check_sibling_maps(void)
>  }
>  #endif
>
> -/*
> - * Set up scheduler domains and groups.  Callers must hold the hotplug lock.
> - */
> -static void __devinit arch_init_sched_domains(void)
> +static void attach_domains(cpumask_t cpu_map)
>  {

This shouldn't be needed. There should probably just be one place that
attaches all domains. It is a bit difficult to explain what I mean when
you have 2 such places below.

[...]

> +void rebuild_sched_domains(cpumask_t change_map, cpumask_t span1, cpumask_t span2)
> +{

Interface isn't bad. It would seem to be able to handle everything, but
I think it can be made a bit simpler.

	fn_name(cpumask_t span1, cpumask_t span2)

Yeah? The change_map is implicitly the union of the 2 spans. Also I don't
really like the name. It doesn't rebuild so much as split (or join). I
can't think of anything good off the top of my head.

> +	unsigned long flags;
> +	int i;
> +
> +	local_irq_save(flags);
> +
> +	for_each_cpu_mask(i, change_map)
> +		spin_lock(&cpu_rq(i)->lock);
> +

Locking is wrong. And it has changed again in the latest -mm kernel.
Please diff against that.

> +	if (!cpus_empty(span1))
> +		build_sched_domains(span1);
> +	if (!cpus_empty(span2))
> +		build_sched_domains(span2);
> +

You also can't do this - you have to 'offline' the domains first before
building new ones. See the CPU hotplug code that handles this.

[...]

> @@ -5046,13 +5082,13 @@ static int update_sched_domains(struct n
>  					unsigned long action, void *hcpu)
>  {
>  	int i;
> +	cpumask_t temp_map, hotcpu = cpumask_of_cpu((long)hcpu);
>
>  	switch (action) {
>  	case CPU_UP_PREPARE:
>  	case CPU_DOWN_PREPARE:
> -		for_each_online_cpu(i)
> -			cpu_attach_domain(&sched_domain_dummy, i);
> -		arch_destroy_sched_domains();
> +		cpus_andnot(temp_map, cpu_online_map, hotcpu);
> +		rebuild_sched_domains(cpu_online_map, temp_map, CPU_MASK_NONE);

This makes a hotplug event destroy your nicely set up isolated domains,
doesn't it?

This looks like the most difficult problem to overcome. It needs some
external information to redo the cpuset splits at cpu hotplug time.
Probably a hotplug handler in the cpusets code might be the best way
to do that.

--
SUSE Labs, Novell Inc.
From: Dinakar G. <di...@in...> - 2005-04-19 08:58:04
|
On Tue, Apr 19, 2005 at 09:44:06AM +1000, Nick Piggin wrote:
> Very good, I was wondering when someone would try to implement this ;)

Thank you for the feedback !

> >-static void __devinit arch_init_sched_domains(void)
> >+static void attach_domains(cpumask_t cpu_map)
> > {
>
> This shouldn't be needed. There should probably just be one place that
> attaches all domains. It is a bit difficult to explain what I mean when
> you have 2 such places below.

Can you explain a bit more, not sure I understand what you mean.

> Interface isn't bad. It would seem to be able to handle everything, but
> I think it can be made a bit simpler.
>
> 	fn_name(cpumask_t span1, cpumask_t span2)
>
> Yeah? The change_map is implicitly the union of the 2 spans. Also I don't
> really like the name. It doesn't rebuild so much as split (or join). I
> can't think of anything good off the top of my head.

Yeah, agreed. It kinda lived on from earlier versions I had.

> >+	unsigned long flags;
> >+	int i;
> >+
> >+	local_irq_save(flags);
> >+
> >+	for_each_cpu_mask(i, change_map)
> >+		spin_lock(&cpu_rq(i)->lock);
> >+
>
> Locking is wrong. And it has changed again in the latest -mm kernel.
> Please diff against that.

I haven't looked at the RCU sched domain changes as yet, but I put this
in to address some problems I noticed during stress testing. Basically,
with the current hotplug code it is possible to have a scenario like this:

    rebuild domains              load balance
          |                           |
          |                 take existing sd pointer
          |                           |
    attach to dummy domain            |
          |                 loop thro sched groups
    change sched group info           |
                            access invalid pointer and panic

> >+	if (!cpus_empty(span1))
> >+		build_sched_domains(span1);
> >+	if (!cpus_empty(span2))
> >+		build_sched_domains(span2);
> >+
>
> You also can't do this - you have to 'offline' the domains first before
> building new ones. See the CPU hotplug code that handles this.

By offline, if you mean attach to dummy domain, see above.

> This makes a hotplug event destroy your nicely set up isolated domains,
> doesn't it?
>
> This looks like the most difficult problem to overcome. It needs some
> external information to redo the cpuset splits at cpu hotplug time.
> Probably a hotplug handler in the cpusets code might be the best way
> to do that.

Yes, I am aware of this. What I have in mind is for the hotplug code from
the scheduler to call into cpusets code. This will just return say 1 when
cpusets is not compiled in, and the sched code can continue to do what it
is doing right now; else the cpusets code will find the leaf cpuset that
contains the hotplugged cpu and rebuild the domains accordingly.

However, the question still remains as to how cpusets should handle this
hotplugged cpu.

-Dinakar
From: Paul J. <pj...@sg...> - 2005-04-19 05:55:33
|
Hmmm ... interesting patch. My reaction to the changes in kernel/cpuset.c
is complicated:

 * I'm supposed to be on vacation the rest of this month, so trying
   (entirely unsuccessfully so far) not to think about this.
 * This is perhaps the first non-trivial cpuset patch to come in the
   last many months from someone other than Simon or myself - welcome.
 * Some coding style and comment details will need work.
 * The conceptual model for how to represent this in cpusets needs
   some work.

Let me do two things in this reply. First I'll just shoot off shotgun
style the nit picking coding and comment details that I notice, in a scan
of the patch. Then I will step back to a discussion of the conceptual
model. I suspect that by the time we nail the conceptual model, the code
will be sufficiently rewritten that most of the coding and comment nits
will no longer apply anyway. But, since nit picking is easier than real
thinking ...

 * I'd probably ditch the all_cpus() macro, on the concern that it
   obfuscates more than it helps.
 * The need for _both_ a per-cpuset flag 'CS_CPU_ISOLATED' and another
   per-cpuset mask 'isolated_map' concerns me. I guess that the
   isolated_map is just a cache of the set of CPUs isolated in child
   cpusets, not an independently settable mask, but it needs to be
   clearly marked as such if so.
 * Some code lines go past column 80.
 * The name 'isolated' probably won't work. There is already a boottime
   option "isolcpus=..." for 'isolated' cpus which is (I think ?) rather
   different. Perhaps a better name will fall out of the conceptual
   discussion, below.
 * The change to the output format of the special cpuset file 'cpus', to
   look like '0-3[4-7]', bothers me in a couple of ways. It complicates
   the format from being a simple list. And it means that the output
   format is not the same as the input format (you can't just write back
   what you read from such a file anymore).
 * Several comments start with the word 'Set', as in:
       Set isolated ON on a non exclusive cpuset
   Such wording suggests to me that something is being set, some bit or
   value changed or turned on. But in each case, you are just testing for
   some condition that will return or error out. Some phrasing such as
   "If ..." or other conditional would be clearer.
 * The update_sched_domains() routine is complicated, and hence a primary
   clue that the conceptual model is not clean yet.
 * None of this was explained in Documentation/cpusets.txt.
 * Too bad that cpuset_common_file_write() has to have special logic for
   this isolated case. The other flag settings just turn on and off the
   associated bit, and don't trigger any kernel code to adapt to new cpu
   or memory settings. We should make an exception to that behaviour only
   if we must, and then we must be explicit about the exception.

Ok - enough nits. Now, onto the real stuff.

This same issue, in a strange way, comes up on the memory side, as well
as on the cpu side.

First, let me verify one thing. I understand that the _key_ purpose of
your patch is not so much to isolate cpus, as it is to allow for
structuring scheduling domains to align with cpuset boundaries. I
understand real isolated cpus to be ones that don't have a scheduling
domain (have only the dummy one), as requested by the "isolcpus=..."
boot flag. The following code snippet from kernel/sched.c is what I
derive this understanding from:

===
static void __devinit arch_init_sched_domains(void)
{
	...
	/*
	 * Setup mask for cpus without special case scheduling requirements.
	 * For now this just excludes isolated cpus, but could be used to
	 * exclude other special cases in the future.
	 */
	cpus_complement(cpu_default_map, cpu_isolated_map);
	cpus_and(cpu_default_map, cpu_default_map, cpu_online_map);

	/*
	 * Set up domains. Isolated domains just stay on the dummy domain.
	 */
	for_each_cpu_mask(i, cpu_default_map) {
	...
===

Second, let me describe how this same issue shows up on the memory side.

Let's say, for example, someone has partitioned a large system (100's of
cpus and nodes) in two major halves using cpusets, each half being used
by a different organization. On one of the halves, they are running a
large scientific program that works on a huge data set that just fits in
the memory available on that half, and they are running a set of related
tools that run different passes over that data. Some of these tools might
take several cpus, running parallel threads, and using a little more data
shared by the threads in that tool. Each of these tools might get its own
cpuset, a child (subset) of the big cpuset that defines the half of the
system that this large scientific program is running within.

The big dataset has to be constrained to the big cpuset (that half of the
system). The smaller tools have to be constrained to their child cpusets,
both for memory and scheduling. The individual threads of a tool should
probably be placed using the set_mempolicy and sched_setaffinity calls,
from within the tool. But the tool placement typically needs to be done
from the outside, which placement cpusets handles better.

This results in some 'memory domains', which are larger than a leaf node
cpuset, smaller than the entire system, and which will constrain some
memory allocations. In this example, the half of the system holding the
big data set is a memory domain. These 'memory domains' can be naturally
defined by the memory nodes contained in selected cpusets.

===

Looking at this mathematically, as a hierarchy of nested sets and subsets,
I think we have the same problem, on both the cpu and memory side.

In both cases, we have an intermediate degree of partitioning of a system,
neither at the most detailed leaf cpuset, nor at the all encompassing top
cpuset. And in both cases, we want to partition the system, along cpuset
boundaries.

Here I use "partition" in the mathematical sense:

===============================================================
A partition of a set X is a set of nonempty subsets of X such that every
element x in X is in exactly one of these subsets.

Equivalently, a set P of subsets of X, is a partition of X if

1. No element of P is empty.
2. The union of the elements of P is equal to X. (We say the elements
   of P cover X.)
3. The intersection of any two elements of P is empty. (We say the
   elements of P are pairwise disjoint.)

http://www.absoluteastronomy.com/encyclopedia/p/pa/partition_of_a_set.htm
===============================================================

In the case of cpus, we really do prefer the partitions to be disjoint,
because it would be better not to confuse the domain scheduler with
overlapping domains.

In the case of memory, we technically probably don't _have_ to keep the
partitions disjoint. I doubt that the page allocator
(mm/page_alloc.c:__alloc_pages()) really cares. It will strive valiantly
to satisfy the memory request from any of the zones (each node specific)
in the list passed into it.

But for the purposes of providing a clear conceptual model to our users,
I think it is best that we impose this constraint on the memory side as
well as on the cpu side. And I don't think it will deprive users of any
useful configuration alternatives that they will really miss. Indeed, the
typical user will be striving to use this mechanism to separate different
demands for memory - to isolate them onto different nodes, in your sense
of the word isolate.

So, what we want, I claim, is two partitions of the system:

 1) A partition of cpus.
 2) A partition of memory nodes.

I mean 'partition' in the above mathematical sense, with the one
additional constraint:

 * Each subset in both these partitions corresponds to some cpuset.

That is, for the partition of cpus, for each subset of the partition,
there is a cpuset having exactly the same cpus as that subset, no more,
no less. Similarly, for the partition of memory nodes.

At any point in time, there would be exactly one such partitioning of
cpus, and one of memory nodes, on the system.

For the cpu case, we would provide a scheduler domain for each subset of
the cpu partitioning.

For the memory case, we would constrain a given allocation request to
either the current tasks cpuset, or to the containing subset of the
memory node partition, depending on per-cpuset options which will need to
be developed in future patches that will enable marking either GFP_KERNEL
allocations, or allocations for a named shared memory region (mapped file
or such, not anonymous) to be constrained not by the current tasks
cpuset, but by the encompassing subset of the current partition of memory
nodes - (2) above.

Observe that:

 * We can specify whether a given cpusets cpus define one of the subsets
   of the systems partitioning of cpus, in (1) above, using a per-cpuset
   boolean flag.
 * We can similarly specify whether a given cpusets memory nodes define
   one of the subsets of the systems partitioning of memory nodes, in (2)
   above, using one more per-cpuset boolean flag.
 * We cannot however do all this correctly just manipulating each cpuset
   in isolation, with per-cpuset atomic operations. Or at least it _seems_
   that we cannot do this. Keep reading; I will find a way.

As you discovered in some of the more complex code in your
update_sched_domains() method, we are dealing with system wide properties
here. The above mathematical properties of a partition must be followed.
If we only have atomic operations on individual cpusets, then it would
_seem_ that more or less any possible change in the partition subsets
will require that we go through an intermediate state that is illegal.
For example, to move a cpu from one subset to another, it would seem that
we must pass through an intermediate state where it is either in both
subsets, or in neither.

So we require a way for the user to tell us which of the several cpusets
in the system define the current partitioning of cpus, as will be used to
determine scheduler domains, and we require a way for the user to tell us
which of the several cpusets in the system define the current
partitioning of memory nodes, as will be used to determine where
specified memory allocations will be constrained, when they are allowed
to escape the cpuset of the allocating task.

In both these cases, we must handle the case that the user didn't follow
the properties of a partition (perhaps the subsets overlap, or don't
cover), and return an error without making a change.

In both of these cases, the user must pass in a selection of cpusets,
some specified subset of all the cpusets on a system, which the user
wants to define the partition of the cpus or memory nodes on the system,
henceforth.

Well, not actually system wide. If the user has rights to modify some
existing cpuset Foo in the system, and if the current cpu or memory
partition of the system exactly aligns with that cpuset Foo (all subsets
of the current cpu or memory partition of the system are either entirely
within, or entirely outside), then the user could be allowed to redefine
the partition subsets within Foo to another that also aligned with Foo.
Perhaps the user could choose two child cpusets of Foo to define the
partitions subsets, and then later revert to having just the cpuset Foo
define them.

This leads to a possible interface. For each of cpus and memory, add four
per-cpuset control files. Let me take the cpu case first. Add the
per-cpuset control files:

 * domain_cpu_current	# readonly boolean
 * domain_cpu_pending	# read/write boolean
 * domain_cpu_rebuild	# write only trigger
 * domain_cpu_error	# read only - last error msg

To rebuild the cpu partitioning below a given cpuset Foo, the user would:

 1) Write 0 or 1 to the domain_cpu_pending file of each cpuset Foo and
    below, so that just the cpusets whose cpus were desired to define the
    partition subsets (and hence have dedicated scheduler domains) had
    the value '1' in this file.
 2) Write a 1 to the domain_cpu_rebuild trigger file of cpuset Foo.
 3) If the write succeeded, the scheduler domains within the set of cpus
    in Foo were rebuilt, at that time.
 4) If the write failed, read the domain_cpu_error file for an
    explanation.

If cpuset Foo aligns with the current system cpu partition, and if the
cpus of the cpusets marked domain_cpu_pending below Foo define a proper
partition of the cpus in Foo, then the write will succeed, updating the
values of the domain_cpu_current control files for Foo and below to the
values that were in the domain_cpu_pending files, and provoking a rebuild
of the scheduler domains below Foo.

Otherwise the write will fail, and an error message explaining the
problem made available in domain_cpu_error for subsequent reading. Just
setting errno would be insufficient in this case, as the possible reasons
for error are too complex to be adequately described that way.

Similarly for memory, add the per-cpuset control files:

 * domain_mem_current	# readonly boolean
 * domain_mem_pending	# read/write boolean
 * domain_mem_rebuild	# write only trigger
 * domain_mem_error	# read only - last error msg

Note, as a detail, that there is no interaction of this domain feature
with the cpu_exclusive or mem_exclusive feature. This is good. The
exclusive feature is of narrow usefulness, and attempting to integrate it
into this domain feature will cause more grief than benefit.

Also note that adding or removing a cpu from a cpuset that has its
domain_cpu_current flag set true must fail, and similarly for
domain_mem_current.

There are likely (hopefully ;) other possible API's that accomplish the
same thing. But in the process of describing this API, I hope I have
touched on some of the properties that cpuset domains for cpu and memory
should have.

The above scheme should significantly reduce the number of special cases
in the update_sched_domains() routine (which I would rename to
update_cpu_domains, alongside another one to be provided later,
update_mem_domains.)

These new update routines will verify that all the preconditions are met,
tear down all the cpu or mem domains within the scope of the specified
cpuset, and rebuild them according to the partition defined by the
pending_*_domain flags on the descendent cpusets. It's the same complete
rebuild of the partitioning of some subtree, each time, without all the
special cases for incrementally adding and removing cpus or mems from
this or that. Complex nested if-else-if-else logic is a breeding ground
for bugs -- good riddance.

As stated above, there is a single system wide partition of cpus, and
another of mems. I suspect we should consider finding a way to nest
partitions. My (shaky) understanding of what Nick is doing with scheduler
domains is that for the biggest of systems, we will probably want little
scheduler domains inside bigger ones.

However, if we thought we could avoid, or at least delay consideration of
nested partitions, that would be nice. This thing is already abstract
enough to puzzle many users, without adding that elaboration.

There -- what do you think of this alternative?

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj...@en...> 1.650.933.1373, 1.925.600.0401
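[For illustration: the proposal above requires the kernel to verify, before committing, that the selected cpusets really form a partition (a pairwise-disjoint cover) of the parent's cpus, and to return an error otherwise. A minimal sketch of such a check, using the cpumask operators of that kernel era, follows; the function name and the array-of-masks calling convention are assumptions made for this example, not part of any posted patch.]

    /*
     * Hypothetical helper: check that the cpumasks in part[0..n-1] form a
     * partition (pairwise disjoint cover) of parent_cpus, as would be
     * required before a domain_cpu_rebuild write is allowed to succeed.
     */
    static int is_cpu_partition(const cpumask_t *part, int n, cpumask_t parent_cpus)
    {
            cpumask_t cover = CPU_MASK_NONE;
            cpumask_t overlap;
            int i, j;

            for (i = 0; i < n; i++) {
                    if (cpus_empty(part[i]))
                            return 0;       /* no element may be empty */
                    for (j = i + 1; j < n; j++) {
                            cpus_and(overlap, part[i], part[j]);
                            if (!cpus_empty(overlap))
                                    return 0; /* elements must be pairwise disjoint */
                    }
                    cpus_or(cover, cover, part[i]);
            }
            return cpus_equal(cover, parent_cpus); /* union must cover the parent */
    }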
From: Nick P. <nic...@ya...> - 2005-04-19 06:19:44
|
On Mon, 2005-04-18 at 22:54 -0700, Paul Jackson wrote:
> Now, onto the real stuff.
>
> This same issue, in a strange way, comes up on the memory side,
> as well as on the cpu side.
>
> First, let me verify one thing. I understand that the _key_
> purpose of your patch is not so much to isolate cpus, as it
> is to allow for structuring scheduling domains to align with
> cpuset boundaries. I understand real isolated cpus to be ones
> that don't have a scheduling domain (have only the dummy one),
> as requested by the "isolcpus=..." boot flag.

Yes.

> The following code snippet from kernel/sched.c is what I derive
> this understanding from:

Correct. A better name instead of isolated cpusets may be 'partitioned
cpusets' or somesuch. On the other hand, it is more or less equivalent to
a single isolated CPU. Instead of an isolated cpu, you have an isolated
cpuset.

Though I imagine this becomes a complete superset of the isolcpus=
functionality, and it would actually be easier to manage a single
isolated CPU and its associated processes with the cpusets interfaces
after this.

> In both cases, we have an intermediate degree of partitioning
> of a system, neither at the most detailed leaf cpuset, nor at
> the all encompassing top cpuset. And in both cases, we want
> to partition the system, along cpuset boundaries.

Yep. This sched-domains partitioning only works when you have more than
one completely disjoint top level cpusets. That is, you effectively
partition the CPUs.

It doesn't work if you have *most* jobs bound to either {0, 1, 2, 3} or
{4, 5, 6, 7} but one which should be allowed to use any CPU from 0-7.

> Here I use "partition" in the mathematical sense:
>
> ===============================================================
> A partition of a set X is a set of nonempty subsets of X such
> that every element x in X is in exactly one of these subsets.
>
> Equivalently, a set P of subsets of X, is a partition of X if
>
> 1. No element of P is empty.
> 2. The union of the elements of P is equal to X. (We say the
>    elements of P cover X.)
> 3. The intersection of any two elements of P is empty. (We say
>    the elements of P are pairwise disjoint.)
>
> http://www.absoluteastronomy.com/encyclopedia/p/pa/partition_of_a_set.htm
> ===============================================================
>
> In the case of cpus, we really do prefer the partitions to be
> disjoint, because it would be better not to confuse the domain
> scheduler with overlapping domains.

Yes. The domain scheduler can't handle this at all, it would have to fall
back on cpus_allowed, which in turn can create big problems for
multiprocessor balancing.

> For the cpu case, we would provide a scheduler domain for each
> subset of the cpu partitioning.

Yes.

[snip the rest, which I didn't finish reading :P]

From what I gather, this partitioning does not exactly fit the cpusets
architecture. Because with cpusets you are specifying on what cpus a set
of tasks can run, not dividing the whole system.

Basically for the sched-domains code to be happy, there should be some
top level entity (whether it be cpusets or something else) which records
your current partitioning (the default being one set, containing all
cpus).

> As stated above, there is a single system wide partition of
> cpus, and another of mems. I suspect we should consider finding
> a way to nest partitions. My (shaky) understanding of what
> Nick is doing with scheduler domains is that for the biggest of
> systems, we will probably want little scheduler domains inside
> bigger ones.

The sched-domains setup code will take care of all that for you already.
It won't know or care about the partitions. If you partition a 64-way
system into 2 32-ways, the domain setup code will just think it is
setting up a 32-way system.

Don't worry about the sched-domains side of things at all, that's pretty
easy. Basically you just have to know that it has the capability to
partition the system into an arbitrary disjoint set of sets of cpus. If
you can make use of that, then we're in business ;)

--
SUSE Labs, Novell Inc.
From: Paul J. <pj...@sg...> - 2005-04-19 07:22:14
|
Nick wrote:
> It doesn't work if you have *most* jobs bound to either
> {0, 1, 2, 3} or {4, 5, 6, 7} but one which should be allowed
> to use any CPU from 0-7.

How bad does it not work?

My understanding is that Dinakar's patch did _not_ drive tasks out of
larger cpusets that included two or more of what he called isolated
cpusets, I call cpuset domains.

For example:

    System starts up with 8 CPUs and all tasks (except for
    a few kernel per-cpu daemons) in the root cpuset, able
    to run on CPUs 0-7.

    Two cpusets, Alpha and Beta are created, where Alpha
    has CPUs 0-3, and Beta has CPUs 4-7.

    Anytime someone logs in, their login shell and all
    they run from it are placed in one of Alpha or Beta.
    The main spawning daemons, such as inetd and cron,
    are placed in one of Alpha or Beta.

    Only a few daemons that don't do much are left in the
    root cpuset, able to run across 0-7.

If we tried to partition the sched domains with Alpha and Beta as
separate domains, what kind of pain do these few daemons in the root
cpuset, on CPUs 0-7, cause?

If the pain is too intolerable, then I'd guess not only do we have to
purge any cpusets superior to the ones determining the domain
partitioning of _all_ tasks, but we'd also have to invent yet one more
boolean flag attribute for any such superior cpusets, to mark them as
_not_ able to allow a task to be attached to them. And we'd have to
refine the hotplug co-existence logic in cpusets, which currently bumps
a task up to its parent cpuset when all the cpus in its current cpuset
are hot unplugged, to also rebuild the sched domains to some legal
configuration, if the parent cpuset was not allowed to have any tasks
attached.

I'd rather not go there, unless push comes to shove. How hard are you
pushing?

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj...@en...> 1.650.933.1373, 1.925.600.0401
From: Nick P. <nic...@ya...> - 2005-04-19 07:57:30
|
On Tue, 2005-04-19 at 00:19 -0700, Paul Jackson wrote:
> Nick wrote:
> > It doesn't work if you have *most* jobs bound to either
> > {0, 1, 2, 3} or {4, 5, 6, 7} but one which should be allowed
> > to use any CPU from 0-7.
>
> How bad does it not work?
>
> My understanding is that Dinakar's patch did _not_ drive tasks out of
> larger cpusets that included two or more of what he called isolated
> cpusets, I call cpuset domains.
>
> For example:
>
>     System starts up with 8 CPUs and all tasks (except for
>     a few kernel per-cpu daemons) in the root cpuset, able
>     to run on CPUs 0-7.
>
>     Two cpusets, Alpha and Beta are created, where Alpha
>     has CPUs 0-3, and Beta has CPUs 4-7.
>
>     Anytime someone logs in, their login shell and all
>     they run from it are placed in one of Alpha or Beta.
>     The main spawning daemons, such as inetd and cron,
>     are placed in one of Alpha or Beta.
>
>     Only a few daemons that don't do much are left in the
>     root cpuset, able to run across 0-7.
>
> If we tried to partition the sched domains with Alpha and Beta as
> separate domains, what kind of pain do these few daemons in
> the root cpuset, on CPUs 0-7, cause?

They don't cause any pain for the scheduler. They will be *in* some pain
because they can't escape from the domain in which they have been placed
(unless you do a set_cpus_allowed thingy).

So, eg. inetd might start up a million blahd servers, but they'll all be
stuck in Alpha even if Beta is completely idle.

> If the pain is too intolerable, then I'd guess not only do we have to
> purge any cpusets superior to the ones determining the domain
> partitioning of _all_ tasks, but we'd also have to invent yet one more
> boolean flag attribute for any such superior cpusets, to mark them as
> _not_ able to allow a task to be attached to them. And we'd have to
> refine the hotplug co-existence logic in cpusets, which currently bumps
> a task up to its parent cpuset when all the cpus in its current cpuset
> are hot unplugged, to also rebuild the sched domains to some legal
> configuration, if the parent cpuset was not allowed to have any tasks
> attached.
>
> I'd rather not go there, unless push comes to shove. How hard are
> you pushing?

Well the scheduler simply can't handle it, so it is not so much a matter
of pushing - you simply can't use partitioned domains and meaningfully
have a cpuset above them.

--
SUSE Labs, Novell Inc.
From: Paul J. <pj...@sg...> - 2005-04-19 20:35:11
|
Nick wrote:
> Well the scheduler simply can't handle it, so it is not so much a
> matter of pushing - you simply can't use partitioned domains and
> meaningfully have a cpuset above them.

Translating that into cpuset-speak, I think what you mean is that I can't
have partitioned sched domains and have a task attached to a cpuset above
them, if it matters to me that the task can actually use all the CPUs in
its larger cpuset.

But what you actually said was that I cannot have a cpuset above them. I
certainly _can_ have a cpuset above the cpusets that define the
partitioned domains. I _have_ to have that, or toss the entire
hierarchical cpuset design. The top cpuset encompasses all the CPUs on
the system, and is above all others.

Let's see if the following example helps clear up these confusions.

Let's say we started out as one big happy family, with a single top
cpuset, and a single sched domain, each encompassing the entire machine.
All tasks are attached to that cpuset and load balanced and scheduled in
that sched domain. Any task can be run anywhere.

Then some yahoo comes along and decides to complicate things. They create
my two cpusets Alpha and Beta, each covering half the system. They create
two partitioned sched domains corresponding to Alpha and Beta,
respectively. They move almost every task into one of Alpha or Beta,
expecting henceforth that each such moved task will only run on whichever
half of the system it was placed in. For instance, if they moved init
into Alpha, that means they _want_ the init task to be constrained to the
Alpha half of the system, even if every CPU in Beta has been idle for the
last 5 hours.

So far, all fine and dandy.

But they leave behind a few tasks still attached to the top cpuset, with
those tasks' cpus_allowed still allowing any CPU in the system. They
actually don't give a rat's patootie about these few tasks, because they
consume less than 10 seconds each per day, and so long as they are
allowed their few CPU cycles when they want them, all is well. They could
have moved these tasks as well into Alpha or Beta, but they wanted to be
annoying and see if they could concoct a test case that would break
something here. Or maybe they were just forgetful.

What breaks?

You seem to be telling me that this is verboten, but I don't see yet
where the problem is.

My timid guess is that about all that breaks is that each of these stray
tasks will be forever after stuck in whichever one of Alpha or Beta it
happened to be in at the point of the Great Divide. If say one of these
tasks happened to be on the Beta side at that point, the Beta domain
scheduler will never let an Alpha CPU see that task, leaving the task to
only ever be picked up by a Beta CPU (even though the task's cpuset and
cpus_allowed would have allowed an Alpha CPU, in theory).

Translating this back into a language my users might speak, I guess this
means I tell them:

 * No scheduling or load balancing is done across partitioned scheduler
   domains.
 * Even if one such domain is hugely oversubscribed, and another totally
   idle, no task in one will run in the other. If that's what you want,
   then go for it.
 * Tasks left attached to cpusets higher up in the hierarchy don't get
   moved or load balanced between partitioned sched domains below their
   cpuset. They will get stuck in one of the domains, willy-nilly. So if
   it matters to you in the slightest which of the partitions a task runs
   in, attach it appropriately, to one of the cpusets that define the
   partitioned scheduler domains, or below.

In short, perhaps you were trying to make my life, or at least my efforts
to understand this, simple, by telling me that I simply can't have any
cpusets above partitioned sched domains. The literal translation of that
into cpuset-speak throws out the entire cpuset architecture. So I have to
push back and figure out in more detail what really matters here.

Am I anywhere close?

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj...@en...> 1.650.933.1373, 1.925.600.0401
From: Paul J. <pj...@sg...> - 2005-04-23 23:28:57
|
A few days ago, Nick wrote:
> Well the scheduler simply can't handle it, so it is not so much a
> matter of pushing - you simply can't use partitioned domains and
> meaningfully have a cpuset above them.

And I (pj) replied:
> Translating that into cpuset-speak, I think what you mean is ...

I then went on to ask some questions. I haven't seen a reply. I probably
wrote too many words, and you had more pressing matters to deal with.
Which is fine.

Let's make this simpler. Ignore cpusets -- let's just talk about a task's
cpus_allowed value, and scheduler domains. Think of cpusets as just a
strange way of setting a task's cpus_allowed value.

Question: What happens if we have say two isolated scheduler domains on a
system, covering say two halves of the system, and some task has its
cpus_allowed set to allow _all_ CPUs? What kind of pain does that cause?

I'm hoping you will say that the only pain it causes is that the task
will only run on one half of the system, even if the other half is idle.
And that so long as I don't mind that, it's no problem to do this.

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj...@en...> 1.650.933.1373, 1.925.600.0401
From: Dinakar G. <di...@in...> - 2005-04-19 09:34:14
|
On Tue, Apr 19, 2005 at 04:19:35PM +1000, Nick Piggin wrote:

[...Snip...]

> Though I imagine this becomes a complete superset of the
> isolcpus= functionality, and it would actually be easier to
> manage a single isolated CPU and its associated processes with
> the cpusets interfaces after this.

That is the idea, though I think that we need to be able to provide users
the option of not doing a load balance within a sched domain.

> It doesn't work if you have *most* jobs bound to either
> {0, 1, 2, 3} or {4, 5, 6, 7} but one which should be allowed
> to use any CPU from 0-7.

That is the current definition of cpu_exclusive on cpusets. I initially
thought of attaching exclusive cpusets to sched domains, but that would
not work because of this reason.

> > In the case of cpus, we really do prefer the partitions to be
> > disjoint, because it would be better not to confuse the domain
> > scheduler with overlapping domains.
>
> Yes. The domain scheduler can't handle this at all, it would
> have to fall back on cpus_allowed, which in turn can create
> big problems for multiprocessor balancing.

I agree.

> From what I gather, this partitioning does not exactly fit
> the cpusets architecture. Because with cpusets you are specifying
> on what cpus can a set of tasks run, not dividing the whole system.

Since isolated cpusets are trying to partition the system, this can be
restricted to only the first level of cpusets. Keeping in mind that we
have a flat sched domain hierarchy, I think probably this would simplify
the update_sched_domains function quite a bit.

Also I think we can add further restrictions in terms of not being able
to change (add/remove) cpus within an isolated cpuset. Instead one would
have to tear down an existing cpuset and make a new one with the required
configuration. That would simplify things even further.

> The sched-domains setup code will take care of all that for you
> already. It won't know or care about the partitions. If you
> partition a 64-way system into 2 32-ways, the domain setup code
> will just think it is setting up a 32-way system.
>
> Don't worry about the sched-domains side of things at all, that's
> pretty easy. Basically you just have to know that it has the
> capability to partition the system in an arbitrary disjoint set
> of sets of cpus.

And maybe also have a flag that says whether to have load balancing in
this domain or not.

-Dinakar
From: Paul J. <pj...@sg...> - 2005-04-19 15:28:47
|
Dinakar, replying to Nick:
> > It doesn't work if you have *most* jobs bound to either
> > {0, 1, 2, 3} or {4, 5, 6, 7} but one which should be allowed
> > to use any CPU from 0-7.
>
> That is the current definition of cpu_exclusive on cpusets.
> I initially thought of attaching exclusive cpusets to sched domains,
> but that would not work because of this reason

I can't make any sense of this reply, Dinakar.

You say "_That_" is the current definition of cpu_exclusive -- I have no
idea what "_That_" refers to. I see nothing in what Nick wrote that has
anything much to do with the definition of cpu_exclusive.

If a cpuset is marked cpu_exclusive, it means that the kernel will not
allow any of its siblings to have overlapping CPUs. It doesn't mean that
its parent can't overlap CPUs -- indeed its parent must contain a
superset of all the CPUs in a cpu_exclusive cpuset and its siblings. It
doesn't mean that there cannot be tasks attached to each of the
cpu_exclusive cpuset, its siblings and its parent.

You say "attaching exclusive cpusets to sched domains ... would not work
because of this reason." I have no idea what "this reason" is.

I am pretty sure of a couple of things:

 * Your understanding of "cpu_exclusive" is not the same as mine.
 * We want to avoid any dependency on "cpu_exclusive" here.

> Since isolated cpusets are trying to partition the system, this
> can be restricted to only the first level of cpusets.

I do not think such a restriction is a good idea. For example, let's say
our 8 CPU system has the following cpusets:

    /			# 0-7
    /Alpha		# 0-3
    /Alpha/phi		# 0-1
    /Alpha/chi		# 2-3
    /Beta		# 4-7

Then I see no problem with cpusets /Alpha/phi, /Alpha/chi and /Beta being
the isolated cpusets, with corresponding scheduler domains. But phi and
chi are not "first level cpusets."

If we require a partition (disjoint cover) of the CPUs in the system,
then enforce exactly that. Do not confuse a rough approximation with a
simplified model.

> Also I think we can add further restrictions in terms not being able
> to change (add/remove) cpus within a isolated cpuset.

My approach agrees on this restriction. Earlier I wrote:
> Also note that adding or removing a cpu from a cpuset that has
> its domain_cpu_current flag set true must fail, and similarly
> for domain_mem_current.

This restriction is required in my approach because the CPUs in the
domain_cpu_current cpusets (the isolated CPUs, in your terms) form a
partition (disjoint cover) of the CPUs in the system, which property
would be violated immediately if any CPU were added or removed from any
cpuset defining the partition.

> Instead one would
> have to tear down an existing cpuset and make a new one with the
> required configuration. that would simplify things even further

You've just come close to describing the approach that it took me
"several more" words to describe. Though one doesn't need to tear down or
make any new cpusets; rather one atomically selects a new set of cpusets
to define the partition.

If one had to tear down and remake cpusets to change the partition, then
one would be in trouble -- it would be difficult to provide an API that
allowed doing that atomically. If it's not atomic, then we have illegal
intermediate states, where one cpuset is gone and the new one has not
arrived, and our partition of the cpusets in the system no longer covers
the system ("our cover is blown", as they say in undercover police work.)

> And maybe also have a flag that says whether to have load balancing
> in this domain or not

It's probably too early to think about that.

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj...@en...> 1.650.933.1373, 1.925.600.0401
From: Dinakar G. <di...@in...> - 2005-04-20 07:31:13
|
On Tue, Apr 19, 2005 at 08:26:39AM -0700, Paul Jackson wrote:
> * Your understanding of "cpu_exclusive" is not the same as mine.

Sorry for creating confusion by what I said earlier, I do understand
exactly what cpu_exclusive means. It's just that when I started working
on this (a long time ago) I had a different notion, and that is what I
was referring to. I probably should never have brought that up.

> > Since isolated cpusets are trying to partition the system, this
> > can be restricted to only the first level of cpusets.
>
> I do not think such a restriction is a good idea. For example, let's say
> our 8 CPU system has the following cpusets:

And my current implementation has no such restriction, I was only
suggesting that to simplify the code.

> > Also I think we can add further restrictions in terms not being able
> > to change (add/remove) cpus within a isolated cpuset.
>
> My approach agrees on this restriction. Earlier I wrote:
> > Also note that adding or removing a cpu from a cpuset that has
> > its domain_cpu_current flag set true must fail, and similarly
> > for domain_mem_current.
>
> This restriction is required in my approach because the CPUs in the
> domain_cpu_current cpusets (the isolated CPUs, in your terms) form a
> partition (disjoint cover) of the CPUs in the system, which property
> would be violated immediately if any CPU were added or removed from any
> cpuset defining the partition.

See my other note explaining how things work currently. I do feel that
this restriction is not good.

-Dinakar
From: Paul J. <pj...@sg...> - 2005-04-19 20:42:43
|
Dinakar wrote:
> Also I think we can add further restrictions in terms not being able
> to change (add/remove) cpus within a isolated cpuset. Instead one would
> have to tear down an existing cpuset and make a new one with the
> required configuration. that would simplify things even further

My earlier reply to this missed the mark a little. Instead what I would
say is this.

If one wants to move a CPU from one cpuset to another, where these two
cpusets are not in the same partitioned scheduler domain, then one first
has to collapse the scheduler domain partitions so that both cpusets
_are_ in the same partitioned scheduler domain. Then one can move the CPU
between the two cpusets, and reestablish the more fine grained
partitioned scheduler domain structure that isolates these two cpusets
into different partitioned scheduler domains.

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj...@en...> 1.650.933.1373, 1.925.600.0401
From: Dinakar G. <di...@in...> - 2005-04-19 09:16:38
|
On Mon, Apr 18, 2005 at 10:54:27PM -0700, Paul Jackson wrote:
> Hmmm ... interesting patch. My reaction to the changes in
> kernel/cpuset.c are complicated:

Thanks Paul for taking time off your vacation to reply to this. I was
expecting to see one of your huge mails, but this has exceeded all my
expectations :)

> * I'd probably ditch the all_cpus() macro, on the
>   concern that it obfuscates more than it helps.
> * The need for _both_ a per-cpuset flag 'CS_CPU_ISOLATED'
>   and another per-cpuset mask 'isolated_map' concerns me.
>   I guess that the isolated_map is just a cache of the
>   set of CPUs isolated in child cpusets, not an independently
>   settable mask, but it needs to be clearly marked as such
>   if so.

Currently the isolated_map is read-only, as you have guessed. I did think
of the user adding cpus to this map from the cpus_allowed mask, but
thought the current approach made more sense.

> * Some code lines go past column 80.

I need to set my vi to wrap past 80...

> * The name 'isolated' probably won't work. There is already
>   a boottime option "isolcpus=..." for 'isolated' cpus which
>   is (I think ?) rather different. Perhaps a better name will
>   fall out of the conceptual discussion, below.

I was hoping that by the time we are done with this, we would be able to
completely get rid of the isolcpus= option. For that of course we need to
be able to build domains that don't run load balance.

> * The change to the output format of the special cpuset file
>   'cpus', to look like '0-3[4-7]' bothers me in a couple of
>   ways. It complicates the format from being a simple list.
>   And it means that the output format is not the same as the
>   input format (you can't just write back what you read from
>   such a file anymore).

As I had said in my earlier mail, this was just one way of representing
what I call isolated cpus. The other was to expose isolated_map to
userspace and move cpus between cpus_allowed and isolated_map.

> * Several comments start with the word 'Set', as in:
>       Set isolated ON on a non exclusive cpuset
>   Such wording suggests to me that something is being set,
>   some bit or value changed or turned on. But in each case,
>   you are just testing for some condition that will return
>   or error out. Some phrasing such as "If ..." or other
>   conditional would be clearer.

The wording was from the user's point of view for what action was being
done, guess I'll change that.

> * The update_sched_domains() routine is complicated, and
>   hence a primary clue that the conceptual model is not
>   clean yet.

It is complicated because it has to handle all of the different possible
actions that the user can initiate. It can be simplified if we have
stricter rules of what the user can/cannot do w.r.t. isolated cpusets.

> * None of this was explained in Documentation/cpusets.txt.

Yes, I plan to add the documentation shortly.

> * Too bad that cpuset_common_file_write() has to have special
>   logic for this isolated case. The other flag settings just
>   turn on and off the associated bit, and don't trigger any
>   kernel code to adapt to new cpu or memory settings. We
>   should make an exception to that behaviour only if we must,
>   and then we must be explicit about the exception.

See my notes on isolated_map above.

> First, let me verify one thing. I understand that the _key_
> purpose of your patch is not so much to isolate cpus, as it
> is to allow for structuring scheduling domains to align with
> cpuset boundaries. I understand real isolated cpus to be ones
> that don't have a scheduling domain (have only the dummy one),
> as requested by the "isolcpus=..." boot flag.

Not really. Isolated cpusets allow you to do a soft-partition of the
system, and it would make sense to continue to have load balancing within
these partitions. I would think not having load balancing should be one
of the options available.

> Second, let me describe how this same issue shows up on the
> memory side.
> ...snip...
>
> In the case of cpus, we really do prefer the partitions to be
> disjoint, because it would be better not to confuse the domain
> scheduler with overlapping domains.

Absolutely, one of the problems I had was to map the flat disjoint
hierarchy of sched domains to the tree-like hierarchy of cpusets.

> In the case of memory, we technically probably don't _have_ to
> keep the partitions disjoint. I doubt that the page allocator
> (mm/page_alloc.c:__alloc_pages()) really cares. It will strive
> valiantly to satisfy the memory request from any of the zones
> (each node specific) in the list passed into it.

I must confess that I haven't looked at the memory side all that much,
having more interest in trying to build soft-partitioning of the cpus.

> But for the purposes of providing a clear conceptual model to
> our users, I think it is best that we impose this constraint on
> the memory side as well as on the cpu side. And I don't think
> it will deprive users of any useful configuration alternatives
> that they will really miss. Indeed, the typical user will be
> striving to use this mechanism to separate different demands
> for memory - to isolate them on to different nodes in your
> sense of the word isolate.

[...Big snip of new model...]

Ok, I need to spend more time on your model Paul, but my first guess is
that it doesn't seem to be very intuitive and seems to make it very
complex from the user's perspective. However, as I said, I need to
understand your model a bit more before I comment on it.

> However, if we thought we could avoid, or at least delay
> consideration of nested partitions, that would be nice.
> This thing is already abstract enough to puzzle many users,
> without adding that elaboration.

Nested sched domains are going to be nasty and I am not at all for it.
Moreover I think it makes more sense to have a flat hierarchy for sched
domains.

-Dinakar
From: Paul J. <pj...@sg...> - 2005-04-19 17:26:17
|
Dinakar wrote: > I was hoping that by the time we are done with this, we would > be able to completely get rid of the isolcpus= option. I won't miss it. Though, since it's in the main line kernel, do you need to mark it deprecated for a while first? > For that > ofcourse we need to be able build domains that dont run > load balance Ah - so that's what these isolcpus are - ones not load balanced? This was never clear to me. > The wording [/* Set ... */ ] was from the users point of view > for what action was being done, guess I'll change that Ok - at least now I can read and understand the comments, knowing this. The other comments in cpuset.c don't follow this convention, of speaking in the "user's voice", but rather speak in the "responding systems voice." Best to remain consistent in this matter. > It is complicated because it has to handle all of the different > possible actions that the user can initiate. It can be simplified > if we have stricter rules of what the user can/cannot do > w.r.t to isolated cpusets It is complicated because you are trying to pretend that to be doing a complex state change one step at a time, without a precise statement (at least, not that I saw) of what the invariants are, and atomic operations that preserve the invariants. > > First, let me verify one thing. I understand that the _key_ > > purpose of your patch is not so much to isolate cpus, as it > > is to allow for structuring scheduling domains to align with > > cpuset boundaries. I understand real isolated cpus to be ones > > that don't have a scheduling domain (have only the dummy one), > > as requested by the "isolcpus=..." boot flag. > > Not really. Isolated cpusets allows you to do a soft-partition > of the system, and it would make sense to continue to have load > balancing within these partitions. I would think not having > load balancing should be one of the options available Ok ... then is it correct to say that your purpose is to partition the systems CPUs into subsets, such that for each subset, either there is a scheduler domain for that exactly the CPUs in that subset, or none of the CPUs in the subset are in any scheduler domain? > I must confess that I havent looked at the memory side all that much, > having more interest in trying to build soft-partitioning of the cpu's This is an understandable focus of interest. Just know that one of the sanity tests I will apply to a solution for CPUs is whether there is a corresponding solution for Memory Nodes, using much the same principles, invariants and conventions. > ok I need to spend more time on you model Paul, but my first > guess is that it doesn't seem to be very intuitive and seems > to make it very complex from the users perspective. However as > I said I need to understand your model a bit more before I > comment on it Well ... I can't claim that my approach is simple. It does have a clearly defined (well, clear to me ;) mathematical model, with some invariants that are always preserved in what user space sees, with atomic operations for changing from one legal state to the next. The primary invariant is that the sets of CPUs in the cpusets marked domain_cpu_current form a partition (disjoint covering) of the CPUs in the system. What are your invariants, and how can you assure yourself and us that your code preserves these invariants? Also, I don't know that the sequence of user operations required by my interface is that much worse than yours. Let's take an example, and compare what the user would have to do. 
Let's say we have the following cpusets on our 8 CPU system:

    /              # CPUs 0-7
    /Alpha         # CPUs 0-3
    /Alpha/phi     # CPUs 0-1
    /Alpha/chi     # CPUs 2-3
    /Beta          # CPUs 4-7

Let's say we currently have three scheduler domains, for three isolated
(in your terms) cpusets: /Alpha/phi, /Alpha/chi and /Beta.

Let's say we want to change the configuration to have just two
scheduler domains (two isolated cpusets): /Alpha and /Beta.

A user of my API would do the operations:

    echo 1 > /Alpha/domain_cpu_pending
    echo 1 > /Beta/domain_cpu_pending
    echo 0 > /Alpha/phi/domain_cpu_pending
    echo 0 > /Alpha/chi/domain_cpu_pending
    echo 1 > /domain_cpu_rebuild

The domain_cpu_current state would not change until the final write
(echo) above, at which time the cpuset_sem lock would be taken, and the
system would, atomically to all viewing tasks, change from having the
three cpusets /Alpha/phi, /Alpha/chi and /Beta marked with a true
domain_cpu_current, to having the two cpusets /Alpha and /Beta so
marked.

The alternative API, which I didn't explore, could do this in one step
by writing the new list of cpusets defining the partition, doing the
rough equivalent (need nul separators, not space separators) of:

    echo /Alpha /Beta > /list_cpu_subdomains

How does this play out in your interface? Are you convinced that your
invariants are preserved at all times, to all users? Can you present a
convincing argument to others that this is so?

-- 
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj...@en...> 1.650.933.1373, 1.925.600.0401
|
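A minimal sketch of how the sequence above might be driven from a
program rather than a shell, in keeping with the view of /dev/cpuset as
a programmatic interface. The domain_cpu_pending and domain_cpu_rebuild
file names are the ones proposed in this thread; the mount point, the
helper function and the error handling here are illustrative
assumptions only, not part of any posted patch.

    /*
     * Illustrative only: stage a new partition through the proposed
     * domain_cpu_pending files, then commit it with one write to
     * domain_cpu_rebuild.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int write_flag(const char *path, const char *val)
    {
            int fd = open(path, O_WRONLY);
            ssize_t n;

            if (fd < 0)
                    return -1;
            n = write(fd, val, strlen(val));
            close(fd);
            return n < 0 ? -1 : 0;
    }

    int main(void)
    {
            /* Stage the new partition: nothing changes yet. */
            write_flag("/dev/cpuset/Alpha/domain_cpu_pending", "1");
            write_flag("/dev/cpuset/Beta/domain_cpu_pending", "1");
            write_flag("/dev/cpuset/Alpha/phi/domain_cpu_pending", "0");
            write_flag("/dev/cpuset/Alpha/chi/domain_cpu_pending", "0");

            /* Single commit point: the kernel would take cpuset_sem and
             * switch domain_cpu_current atomically, or reject the request. */
            if (write_flag("/dev/cpuset/domain_cpu_rebuild", "1") < 0)
                    fprintf(stderr, "rebuild rejected: pending masks not a partition?\n");
            return 0;
    }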
From: Dinakar G. <di...@in...> - 2005-04-20 06:58:09
|
On Tue, Apr 19, 2005 at 10:23:48AM -0700, Paul Jackson wrote:
>
> How does this play out in your interface? Are you convinced that
> your invariants are preserved at all times, to all users? Can
> you present a convincing argument to others that this is so?

Let me give an example of how the current version of isolated cpusets
can be used, and hopefully clarify my approach.

Consider a system with 8 cpus that needs to run a mix of workloads.
One set of applications has low latency requirements and another set
has a mixed workload. The administrator decides to allot 2 cpus to the
low latency application and the rest to other apps. To do this, he
creates two cpusets (all cpusets are considered to be exclusive for
this discussion):

    cpuset            cpus   isolated   cpus_allowed   isolated_map
    top               0-7    1          0-7            0
    top/lowlat        0-1    0          0-1            0
    top/others        2-7    0          2-7            0

He now wants to partition the system along these lines, as he wants to
isolate lowlat from the rest of the system, to ensure that

  a. No tasks from the parent cpuset (top_cpuset in this case) use
     these cpus
  b. load balance does not run across all cpus 0-7

He does this by:

    cd /mount-point/lowlat
    /bin/echo 1 > cpu_isolated

Internally it takes the cpuset_sem, does some sanity checks and ensures
that these cpus are not visible to any other cpuset including its
parent (by removing these cpus from its parent's cpus_allowed mask and
adding them to its parent's isolated_map) and then calls sched code to
partition the system as

    [0-1] [2-7]

The internal state of the data structures is as follows:

    cpuset            cpus   isolated   cpus_allowed   isolated_map
    top               0-7    1          2-7            0-1
    top/lowlat        0-1    1          0-1            0
    top/others        2-7    0          2-7            0

-------------------------------------------------------

The administrator now wants to further partition the "others" cpuset
into a cpu intensive application and a batch one:

    cpuset            cpus   isolated   cpus_allowed   isolated_map
    top               0-7    1          2-7            0-1
    top/lowlat        0-1    1          0-1            0
    top/others        2-7    0          2-7            0
    top/others/cint   2-3    0          2-3            0
    top/others/batch  4-7    0          4-7            0

If now the administrator wants to isolate the cint cpuset...

    cd /mount-point/others
    /bin/echo 1 > cpu_isolated

(At this point no new sched domains are built, as there already exists
a sched domain which exactly matches the cpus in the "others" cpuset.)

    cd /mount-point/others/cint
    /bin/echo 1 > cpu_isolated

At this point cpus from the "others" cpuset are also taken away from
its parent's cpus_allowed mask and put into the parent's isolated_map.
This means that the parent's cpus_allowed mask is empty.
This would now result in partitioning the "others" cpuset and building
two new sched domains, as follows:

    [2-3] [4-7]

Notice that cpus 0-1, having already been isolated, are not affected by
this operation.

    cpuset            cpus   isolated   cpus_allowed   isolated_map
    top               0-7    1          0              0-7
    top/lowlat        0-1    1          0-1            0
    top/others        2-7    1          4-7            2-3
    top/others/cint   2-3    1          2-3            0
    top/others/batch  4-7    0          4-7            0

-------------------------------------------------------

The admin now wants to run more applications in the cint cpuset and
decides to borrow a couple of cpus from the batch cpuset. He removes
cpus 4-5 from batch and adds them to cint:

    cpuset            cpus   isolated   cpus_allowed   isolated_map
    top               0-7    1          0              0-7
    top/lowlat        0-1    1          0-1            0
    top/others        2-7    1          6-7            2-5
    top/others/cint   2-5    1          2-5            0
    top/others/batch  6-7    0          6-7            0

As cint is already isolated, adding cpus causes the sched domains
covering its cpus_allowed and its parent's cpus_allowed to be rebuilt,
so the new sched domains will look as follows:

    [2-5] [6-7]

cpus 0-1 are of course still not affected.

Similarly the admin can remove cpus from cint, which will result in the
domains being rebuilt to what they were before:

    [2-3] [4-7]

-------------------------------------------------------

Hope this clears up my approach. Also note that we still need to take
care of the cpu hotplug case, where any random cpu can be removed and
added back, and this code needs to take care of rebuilding the
appropriate sched domains.

-Dinakar
|
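Reading the tables above, the bookkeeping appears to follow one rule at
every level: a cpuset's cpus are split between its cpus_allowed and its
isolated_map, and the two never overlap. A rough sketch of that rule,
written with the kernel's cpumask helpers of that era, follows; the
struct layout and the check itself are illustrative and are not taken
from the patch.

    #include <linux/cpumask.h>

    /* Illustrative layout only, mirroring the columns in the tables. */
    struct cpuset_masks {
            cpumask_t cpus;            /* all CPUs this cpuset manages */
            cpumask_t cpus_allowed;    /* CPUs usable by tasks attached here */
            cpumask_t isolated_map;    /* CPUs handed to isolated children */
    };

    /* cpus_allowed and isolated_map partition cpus: they are disjoint,
     * and their union gives back every CPU the cpuset manages. */
    static int masks_consistent(struct cpuset_masks *cs)
    {
            cpumask_t overlap, together;

            cpus_and(overlap, cs->cpus_allowed, cs->isolated_map);
            cpus_or(together, cs->cpus_allowed, cs->isolated_map);

            return cpus_empty(overlap) && cpus_equal(together, cs->cpus);
    }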
From: Paul J. <pj...@sg...> - 2005-04-20 19:12:19
|
Earlier, I wrote to Dinakar:
> What are your invariants, and how can you assure yourself and us
> that your code preserves these invariants?

I repeat that question.

===

On my first reading of your example, I see the following.

It is sinking into my dense skull more than it had before that your
patch changes the meaning of the cpuset field 'cpus_allowed', to only
include the cpus not in isolated children. However there are other uses
of the 'cpus_allowed' field in the cpuset code that are not changed,
and comments and documentation describing this field that are not
changed. I suspect this is an incomplete change.

You don't actually state it, that I noticed, but the main point of your
example seems to be that you support incrementally moving individual
cpus between cpusets, without the constraint that both cpusets be in
the same subset of the partition (the same isolation group). So you can
move a cpu in and out of an isolated group without tearing the group
down first, only to rebuild it after.

To do this, you've added new semantics to some of the operations that
write the 'cpus' special file of a cpuset, if and only if that cpuset
is marked isolated, which involves changing some other masks. These new
semantics are something along the lines of "adding a cpu here implies
removing it from there." This presumably allows you to move cpus in or
out of or between isolated cpusets, while preserving the essential
property of a partition - that it is a disjoint covering.

> He removes cpus 4-5 from batch and adds them to cint

Could you spell out the exact steps the user would take, for this part
of your example? What does the user do, what does the kernel do in
response, and what state do the cpusets end up in, after each action of
the user?

===

So far, to be honest, I am finding your patch to be rather frustrating.

Perhaps the essential reason is this. The interface that cpusets
presents in the cpuset file system, mounted at /dev/cpuset, is not, in
my intention, primarily a human interface. It is primarily a
programmatic interface. As such, there is a high premium on clarity of
design, consistency of behaviour and absence of side effects. Each
operation should do one thing, clearly defined, changing only what is
operated on, preserving clearly spelled out invariants.

If it takes three steps instead of one to accomplish a typical task,
that's fine. The programs that layer on top of /dev/cpuset don't mind
doing three things to get one thing done. But such programs are a pain
in the backside to program correctly if the effects of each operation
are not clearly defined, not focused on the obvious object being
operated on, or not precisely consistent with an overriding model.

This patch seems to add side effects and change the meanings of things,
doing so with only minimal mention in the description, without clearly
and consistently spelling out the new mental model, and without
uniformly changing all uses, comments and documentation to fit the new
model.

This cpuset facility is also a less commonly used kernel facility, and
changes to cpusets, outside of a few key hooks in the scheduler and
allocator, are not performance critical. This means that there is a
premium on keeping the kernel code minimal, leaving as many details as
practical to userland. This patch seems to increase the kernel text
size, for an ia64 SN2 build using gcc 3.2.3 of a 2.6.12-rc1-mm4 tree I
had at hand, _just_ for the cpuset.c changes, from 23071 bytes to
28999.
That's over a 25% increase in the kernel text size of the file
kernel/cpuset.o, just for this feature. That's too much, in my view.

I don't know yet if the ability to move cpus between isolated sched
domains, without tearing them down and rebuilding them, is a critical
feature for you or not. You have not been clear on what the essential
requirements of this feature are. I don't even know for sure yet that
this is the one key feature, in your view, that separates your proposal
from the variations I explored.

But if this is for you the critical feature that your proposal has, and
mine lack, then I'd like to see if there is a way to do it without
implicit side effects, without messing with the semantics of what's
there now, and with significantly fewer bytes of kernel text space. And
I'd like to see if we can have uniform and precisely spelled out
semantics, in the code, comments and documentation, with any changes to
the current semantics made everywhere, uniformly.

-- 
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj...@en...> 1.650.933.1373, 1.925.600.0401
|
From: Dinakar G. <di...@in...> - 2005-04-21 16:10:07
|
On Wed, Apr 20, 2005 at 12:09:46PM -0700, Paul Jackson wrote:
> Earlier, I wrote to Dinakar:
> > What are your invariants, and how can you assure yourself and us
> > that your code preserves these invariants?

Ok, let me begin at the beginning and attempt to define what I am doing
here.

1. I need a method to isolate a random set of cpus in such a way that
   only the set of processes that are specifically assigned can make
   use of these CPUs
2. I need to ensure that the sched load balance code does not pull any
   tasks other than the assigned ones onto these cpus
3. I need to be able to create multiple such groupings of cpus that are
   disjoint from the rest and run only specified tasks
4. I need a user interface to specify which random set of cpus form
   such a grouping of disjoint cpus
5. I need to be able to dynamically create and destroy these groupings
   of disjoint cpus
6. I need to be able to add/remove cpus to/from this grouping

Now if you try to fit these requirements onto cpusets, keeping in mind
that it already has a user interface and some of the framework required
to create disjoint groupings of cpus:

1. An exclusive cpuset ensures that the cpus it has are disjoint from
   all other cpusets except its parent and children
2. So now I need a way to disassociate the cpus of an exclusive cpuset
   from its parent, so that this set of cpus is truly disjoint from the
   rest of the system.
3. After I have done (2) above, I now need to build two sets of sched
   domains corresponding to the cpus of this exclusive cpuset and the
   remaining cpus of its parent
4. Ensure that the current rules of non-isolated cpusets are all
   preserved, such that if this feature is not used, all other features
   work as before

This is exactly what I have tried to do.

1. Maintain a flag to indicate whether a cpuset is isolated
2. Maintain an isolated_map for every cpuset. This contains a cache of
   all cpus associated with isolated children
3. To isolate a cpuset x, x has to be an exclusive cpuset and its
   parent has to be an isolated cpuset
4. On isolating a cpuset by issuing

       /bin/echo 1 > cpu_isolated

   it ensures that the conditions in (3) are satisfied and then removes
   the cpus of the current cpuset from the parent's cpus_allowed mask.
   (It also puts the cpus of the current cpuset into the isolated_map
   of its parent.) This ensures that only the current cpuset and its
   children will have access to the now isolated cpus. It also rebuilds
   the sched domains into two new domains consisting of
     a. All cpus in the parent->cpus_allowed
     b. All cpus in current->cpus_allowed
5. Similarly, on setting isolated off on an isolated cpuset (or on
   doing an rmdir on an isolated cpuset), it adds all of the cpus of
   the current cpuset into its parent cpuset's cpus_allowed mask and
   removes them from its parent's isolated_map. This ensures that all
   of the cpus in the current cpuset are now visible to the parent
   cpuset. It now rebuilds only one sched domain consisting of all of
   the cpus in its parent's cpus_allowed mask.
6. You can also modify the cpus present in an isolated cpuset x,
   provided that x does not have any children that are also isolated.
7. On adding or removing cpus from an isolated cpuset that does not
   have any isolated children, it reworks the parent cpuset's
   cpus_allowed and isolated_map masks and rebuilds the sched domains
   appropriately
8. Since the function update_cpu_domains, which does all of the above
   updates to the parent cpuset's masks, is always called with
   cpuset_sem held, it ensures that all these changes are atomic.
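A simplified sketch of what steps 4 and 5 above amount to in terms of
mask bookkeeping. The field names (cpus_allowed, isolated_map, parent)
and the cpuset_sem/update_cpu_domains context are the ones described in
this thread; the struct layout, the domain-rebuilding helper and the
function bodies below are illustrative assumptions, not the code of the
posted patch.

    #include <linux/cpumask.h>

    /* Illustrative layout only; the real struct cpuset has more fields. */
    struct cpuset {
            cpumask_t cpus_allowed;    /* cpus usable by tasks attached here */
            cpumask_t isolated_map;    /* cpus handed over to isolated children */
            struct cpuset *parent;
    };

    /* Hypothetical helper: rebuild the sched domains covering these spans. */
    void rebuild_two_domains(cpumask_t span1, cpumask_t span2);

    /* Step 4: isolate cs.  Caller holds cpuset_sem, so the update is atomic. */
    static void isolate_cpuset(struct cpuset *cs)
    {
            struct cpuset *parent = cs->parent;

            /* Hide cs's cpus from the parent ... */
            cpus_andnot(parent->cpus_allowed, parent->cpus_allowed, cs->cpus_allowed);
            /* ... and remember them as belonging to an isolated child. */
            cpus_or(parent->isolated_map, parent->isolated_map, cs->cpus_allowed);

            /* Two domains: the parent's remaining cpus, and cs's cpus. */
            rebuild_two_domains(parent->cpus_allowed, cs->cpus_allowed);
    }

    /* Step 5: turn isolation off (or rmdir), giving the cpus back. */
    static void unisolate_cpuset(struct cpuset *cs)
    {
            struct cpuset *parent = cs->parent;

            cpus_or(parent->cpus_allowed, parent->cpus_allowed, cs->cpus_allowed);
            cpus_andnot(parent->isolated_map, parent->isolated_map, cs->cpus_allowed);

            /* One domain again, spanning the parent's cpus_allowed. */
            rebuild_two_domains(parent->cpus_allowed, CPU_MASK_NONE);
    }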
> > He removes cpus 4-5 from batch and adds them to cint
>
> Could you spell out the exact steps the user would take, for this part
> of your example? What does the user do, what does the kernel do in
> response, and what state the cpusets end up in, after each action of
> the user?

    cpuset            cpus   isolated   cpus_allowed   isolated_map
    top               0-7    1          0              0-7
    top/lowlat        0-1    1          0-1            0
    top/others        2-7    1          4-7            2-3
    top/others/cint   2-3    1          2-3            0
    top/others/batch  4-7    0          4-7            0

At this point, to remove cpus 4-5 from batch and add them to cint, the
admin would do the following steps:

    # Remove cpus 4-5 from batch
    # batch is not an isolated cpuset and hence this step
    # has no other implications
    /bin/echo 6-7 > /top/others/batch/cpus

    cpuset            cpus   isolated   cpus_allowed   isolated_map
    top               0-7    1          0              0-7
    top/lowlat        0-1    1          0-1            0
    top/others        2-7    1          4-7            2-3
    top/others/cint   2-3    1          2-3            0
    top/others/batch  6-7    0          6-7            0

    # Add cpus 4-5 to cint along with existing cpus 2-3
    /bin/echo 2-5 > /top/others/cint/cpus

    cpuset            cpus   isolated   cpus_allowed   isolated_map
    top               0-7    1          0              0-7
    top/lowlat        0-1    1          0-1            0
    top/others        2-7    1          6-7            2-5
    top/others/cint   2-5    1          2-5            0
    top/others/batch  6-7    0          6-7            0

As you can see there are no "side effects" here. All of these are
legitimate operations and work the same even in the current cpusets
code as in mainline. (Except of course the isolation part.)

Hope this helps in clarifying all your questions.

However, after taking into account all of your comments so far, I have
reworked my patch and reduced and simplified it quite a bit. I have
maintained all of the functionality that I have described so far
(adding one restriction, viz. you can modify the cpus present in an
isolated cpuset x only provided that x does not have any children that
are also isolated). I'll send that in a new mail.

Thanks for all your comments and review so far.

-Dinakar
|
From: Paul J. <pj...@sg...> - 2005-04-22 21:28:37
|
Dinakar wrote:
> Ok, Let me begin at the beginning and attempt to define what I am
> doing here

The statement of requirements and approach helps. Thank you. And the
comments in the code patch are much easier for me to understand.
Thanks.

Let me step back and consider where we are here.

I've not been entirely happy with the cpu_exclusive (and mem_exclusive)
properties. They were easy to code, and they require only looking at
one's siblings and parent, but they don't provide all that people
usually want, which is system-wide exclusivity, because they don't
exclude tasks in one's parent (or more remote ancestor) cpusets from
stealing resources.

I take your isolated cpusets as a reasonable attempt to provide what's
really wanted.

I had avoided simple, system-wide exclusivity because I really wanted
cpusets to be hierarchical. One should be able to subdivide and manage
one subtree of the cpuset hierarchy, oblivious to what someone else is
doing with a disjoint subtree. Your work shows how to provide a
stronger form of isolation (exclusivity) without abandoning the
hierarchical structure.

There are three directions we could go from here. I am not yet decided
between them:

 1) Remove the cpu and mem exclusive flags - they are of limited use.
 2) Leave the code as is.
 3) Extend the exclusive capability to include isolation from parents,
    along the lines of your patch.

If I were redoing cpusets from scratch, I might not include the
exclusive feature at all - not sure. But it's cheap, at least in terms
of code, and of some use to some users. So I would choose (2) over (1),
given where we are now. The main cost at present of the exclusive flags
is the cost in understanding - they tend to confuse people at first
glance, due to their somewhat unusual approach.

If we go with (3), then I'd like to consider the overall design of this
a bit more. Your patch, as is common for patches, attempts to work
within the current framework, minimizing change. Better to take a step
back and consider what would have been the best design as if the past
didn't matter, then, with that clearly in mind, ask how best to get
there from here.

I don't think we would have both isolated and exclusive flags in the
'ideal design.' The exclusive flags are essentially half (or a third)
of what's needed, and the isolated flags and masks the rest of it.

Essentially, your patch replaces the single set of CPUs in a cpuset
with three related sets:

 A] the set of all CPUs managed by that cpuset
 B] the set of CPUs allowed to tasks attached to that cpuset
 C] the set of CPUs isolated for the dedicated use of some descendent

Sets [B] and [C] form a partition of [A] -- their intersection is
empty, and their union is [A].

Your current presentation of these sets of CPUs shows set [B] in the
cpus file, followed by set [C] in brackets, if I am recalling
correctly. This changes the format of the current cpus_allowed file,
and it violates the preference for a single value or vector per file. I
would like to consider alternatives.

Your code automatically updates [C] if the child cpuset adds or removes
CPUs from those it manages in isolation (though I am not sure that your
code manages this change all the way back up the hierarchy to the top
cpuset, and I am wondering if perhaps your code should be doing this,
as noted in my detailed comments on your patch earlier today.)

I'd be tempted, if taking this approach (3), to consider a couple of
alternatives.
As I spelled out a few days ago, one could mark some cpusets that form
a partition of the system's CPUs, for the purposes of establishing
isolated scheduler domains, without requiring the above three related
sets per cpuset instead of one.

I am still unsure how much of your motivation is the need to make the
scheduler more efficient by establishing useful isolated sched domains,
and how much is the need to keep the usage of CPUs by various jobs
isolated, even from tasks attached to parent cpusets.

One can obtain the job isolation just in user code - if you don't want
a task to use a parent cpuset's access to your isolated cpuset, then
simply don't attach tasks to the parent cpusets. I do not understand
yet how strong your requirement is to have the _kernel_ enforce that
there are no tasks in a parent cpuset which could intrude on the
non-isolated resources of a child.

I provide (non open source) user level tools to my users which enable
them to conveniently ensure that there are no such unwanted tasks, so
they don't have a problem with a parent cpuset's CPUs overlapping a
cpuset that they are using for an isolated job. Perhaps I could
persuade my employer that it would be appropriate to open source these
tools.

In any case, going with (3) would result in _one_ attribute, not two
(both exclusive and isolated, with overlapping semantics, which is
confusing).

-- 
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj...@en...> 1.650.933.1373, 1.925.600.0401
|
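A rough sketch of the check implied by the "mark some cpusets that form
a partition" alternative: before rebuilding scheduler domains, the
marked cpusets' CPU masks must be pairwise disjoint and together cover
every online CPU. The function, its arguments and its caller are
illustrative assumptions; only the disjoint-covering requirement comes
from the discussion above.

    #include <linux/cpumask.h>

    /* Return true iff the marked cpusets' masks are pairwise disjoint
     * and together cover every online CPU. */
    static int is_valid_partition(cpumask_t *marked, int nr_marked)
    {
            cpumask_t covered = CPU_MASK_NONE;
            cpumask_t overlap;
            int i;

            for (i = 0; i < nr_marked; i++) {
                    cpus_and(overlap, covered, marked[i]);
                    if (!cpus_empty(overlap))
                            return 0;   /* two marked cpusets share a CPU */
                    cpus_or(covered, covered, marked[i]);
            }
            /* every online CPU must land in exactly one marked cpuset */
            return cpus_equal(covered, cpu_online_map);
    }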
From: Dinakar G. <di...@in...> - 2005-04-23 07:09:38
Attachments:
sd-cpuset-v1-mail.patch
|
On Fri, Apr 22, 2005 at 02:26:18PM -0700, Paul Jackson wrote:
> 3) Extend the exclusive capability to include isolation from parents,
>    along the lines of your patch.

This was precisely the design that I first came up with not so long
ago, but never posted. The reason being that I thought all parties
involved had already agreed to this design, for some reason (unknown to
me) that was already discussed in detail during the last flurry of
emails.

Now that you have asked this question and actually said that this would
probably be a better design, I wholeheartedly agree, and what's more, I
already have most of the code required. In fact, here it is. I think
I'll redo the patch and post it for review shortly.

-Dinakar

(Warning: this has all the warts that have previously been pointed out,
and more.)
|
From: Paul J. <pj...@sg...> - 2005-04-23 22:33:29
|
Dinakar wrote:
> cpuset            cpus   isolated   cpus_allowed   isolated_map
> top               0-7    1          0              0-7

The top cpuset holds the kernel threads that are pinned to a particular
cpu or node. It's not right that their cpuset's cpus_allowed is empty,
which is what I guess the "0" in the cpus_allowed column above means.
(Even if the "0" means CPU 0, that still conflicts with kernel threads
on CPUs 1-7.)

We might get away with it on cpus, because we don't change the task's
cpus_allowed to match the cpuset's cpus_allowed (we don't call
set_cpus_allowed from kernel/cpuset.c) _except_ when someone rebinds
that task to its cpuset by writing its pid into the cpuset tasks file.
So as long as no one tries to rebind the per-cpu or per-node kernel
threads, no one will notice that they are in a cpuset with an empty
cpus_allowed.

This won't even work that well on the memory side, where we resync a
task with its cpuset any time that a task goes to allocate memory (if
it can WAIT and it is not in interrupt) and we notice that someone has
bumped the mems_generation for its cpuset.

In other words, I strongly suspect that:

 1) The top cpuset should allow all cpus and all memory nodes.
 2) The way to assure that one task can't have its cpu or memory stolen
    by another is to put the other tasks in cpusets that don't overlap.
 3) The wrong way to assure this is by refusing to have any other
    cpusets that have overlapping cpus_allowed or mems_allowed.
 4) There are some tasks that _do_ need to run on the same cpus as the
    tasks you would assign to isolated cpusets. These kernel threads,
    such as for example the migration and ksoftirqd threads, must be
    set up well before user code is run that can configure job-specific
    isolated cpusets, so these tasks need a cpuset to run in that can
    be created during system boot, before init (pid == 1) starts up.
    This cpuset is the top cpuset.

My users are successfully managing what tasks can use what cpu or
memory resources by controlling which tasks are in which cpusets. They
do not require the ability to disable allowed cpus or memory nodes in
other cpusets to do this. It is not entirely clear to me that they even
require the minimal cpu_exclusive/mem_exclusive facility that is there
now.

I don't understand why what's there now isn't sufficient. I don't see
that this patch provides any capability that you can't get just by
properly placing tasks in cpusets that have the desired cpus and nodes.
This patch leaves the per-cpu kernel threads with no cpuset that allows
what they need, and it complicates the semantics of things, in ways
that I still don't entirely understand.

Earlier you wrote:
> 1. I need a method to isolate a random set of cpus in such a way that
>    only the set of processes that are specifically assigned can
>    make use of these CPUs

I don't see why you need this. Nor do I think it is possible. You don't
need to isolate a set of cpus; you need to isolate a set of processes.

So long as you can create non-overlapping cpusets, and assign processes
to them, I don't see where it matters that you cannot prohibit the
creation of overlapping cpusets, or, in the case of the top cpuset, why
it matters that you cannot _disallow_ allowed cpus or memory nodes in
existing cpusets. And this is not possible because at least the kernel
per-cpu threads _do_ need to run on each cpu in the system, including
those cpus you would isolate.

-- 
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj...@en...> 1.650.933.1373, 1.925.600.0401
|
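The "police it with task placement" argument above lends itself to a
small userland audit. The sketch below walks from an isolated cpuset up
to the root of /dev/cpuset and prints any tasks still attached to
ancestor cpusets, so they can be moved elsewhere; it is only an
illustration of the idea and says nothing about how the (non open
source) tools mentioned earlier actually work. The mount point and
command-line convention are assumptions.

    #include <libgen.h>
    #include <stdio.h>
    #include <string.h>

    /* Print the pids listed in <cpuset_dir>/tasks, one per line. */
    static void print_attached_tasks(const char *cpuset_dir)
    {
            char path[512], line[64];
            FILE *f;

            snprintf(path, sizeof(path), "%s/tasks", cpuset_dir);
            f = fopen(path, "r");
            if (!f)
                    return;
            while (fgets(line, sizeof(line), f))
                    printf("%s: task %s", cpuset_dir, line);
            fclose(f);
    }

    int main(int argc, char **argv)
    {
            char dir[512], parent[512];

            /* argv[1]: path of the isolated cpuset, e.g. /dev/cpuset/lowlat */
            if (argc != 2)
                    return 1;
            snprintf(dir, sizeof(dir), "%s", argv[1]);

            /* Report tasks attached to every ancestor, up to /dev/cpuset. */
            while (strcmp(dir, "/dev/cpuset") != 0 && strcmp(dir, "/") != 0) {
                    snprintf(parent, sizeof(parent), "%s", dir);
                    snprintf(dir, sizeof(dir), "%s", dirname(parent));
                    print_attached_tasks(dir);
            }
            return 0;
    }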
From: Paul J. <pj...@sg...> - 2005-04-19 07:01:41
|
Nick wrote:
> Basically you just have to know that it has the
> capability to partition the system in an arbitrary disjoint set
> of sets of cpus.
>
> If you can make use of that, then we're in business ;)

You read fast ;)

So you do _not_ want to consider nested sched domains, just disjoint
ones. Good.

> From what I gather, this partitioning does not exactly fit
> the cpusets architecture. Because with cpusets you are specifying
> on what cpus can a set of tasks run, not dividing the whole system.

My evil scheme, and Dinakar's as well, is to provide a way for the user
to designate _some_ of their cpusets as also defining the partition
that controls which cpus are in each sched domain, and so dividing the
system.

    "partition" == "an arbitrary disjoint set of sets of cpus"

This fits naturally with the way people use cpusets anyway. They divide
up the system along boundaries that are natural topologically and that
provide a good fit for their jobs, and hope that the kernel will adapt
to such localized placement. They then throw a few more nested
(smaller) cpusets at the problem, to deal with various special needs.

If we can provide them with a means to tell us which of their cpusets
define the natural partitioning of their system, for the job mix and
hardware topology they have, then all is well.

-- 
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj...@en...> 1.650.933.1373, 1.925.600.0401
|