From: Matthew D. <col...@us...> - 2002-08-26 19:11:24
Attachments:
simple_topo-v0.4-in_kernel-2.5.31.patch
|
Hello all, This patch is designed to be the beginning of a complete topology API. The plan is to export all this data to userspace via driverfs. For now, it only deals with CPUs, Nodes, and Memory Blocks, but it leaves it up to individual architectures to define what those are. I hope that this will enable us to have an extremely flexible, but at the same time extremely useful, topology infrastructure in the kernel & available to userspace processes. This will eventually (I hope) facilitate the intelligent binding and communication of processes, especially in large systems. After some discussions on & off list about the last version of my Topology API patch, I've made some fairly extensive revisions and am reposting it, albeit to a smaller audience. I believe that all of the direct recipients have some interest (however small ;) in this patch. If not, then I probably believe you should! ;) This is the in-kernel portion of the patch (hence the 1/2). The userspace portion is still being cleaned up, and will be posted later today. This patch now touches every architecture, so I'd like to hear your thoughts. Especially if I mucked around with a platform you care about! TIA! -Matt |
From: Andi K. <ak...@su...> - 2002-08-26 19:22:29
|
> This is the in-kernel portion of the patch (hence the 1/2). The userspace > portion is still being cleaned up, and will be posted later today. This patch > now touches every architecture, so I'd like to hear your thoughts. > Especially if I mucked around with a platform you care about! Why do you work with memblks in the low level API instead of just addresses? There seems to be no way to map an address to a memblk, which makes it look a bit disabled. What is the exact difference between a node and a memblk? I guess the kernel internally should only know about nodes and addresses, and if such a memblk thing should be really useful for the user API then it should be only managed in the API interface. Also I would drop the _ prefix from the low level macros. Does not seem to be needed (cosmetic nit). -Andi |
From: Matthew D. <col...@us...> - 2002-08-26 21:31:12
|
Andi Kleen wrote: > Why do you work with memblks in the low level API instead of just addresses? Well, we felt it was an important abstraction for the kernel to understand. It will likely be possible in the future (if everything goes as I've deviously planned! ;) for memory to be remapped to be on different memblks. I also feel that it is just plain useful to be able to see how the memory is physically/logically broken up (or at least how the arch wants you to see it). > There seems to be no way to map an address to a memblk, which makes it > look a bit disabled. Actually there is, but it is conspicuously absent from this patch. Pat Gaughen came up with the NUMA-Q version... I believe it is in her discontig-mem patch... I'll add that to the next revision... > What is the exact difference between a node and a memblk? Good question! I'm rewriting the API spec and FAQ about it (used to be called the NUMA Binding API, also discussed here on LSE Tech some months ago)... For now, this will have to do for a definition: A memory block (memblk) is a physically contiguous chunk of memory. A node (at least as far as this API is concerned) is totally abstract. It is little more than a collection of CPUs and memblks, although theoretically it could contain both, either, or neither! The node is definable by the architecture. For example, on NUMA-Q, our node is comprised of (up to) 4 CPUs, and a memblk. I imagine that this definition is fairly typical, but I'm sure it is not the only definition. I've tried to leave it as open as possible, so as to allow for as much future expansion as possible. For another example: On a NUMA machine with hyperthreading, one might define 2 layers of nodes. A bottom layer node might consist solely of the 2 'virtual' CPUs contained in each CPU. A top layer node might consist of a group of those bottom layer nodes with a memblk (physical chunk of memory). That was the purpose of including the _node_to_node() function, to allow for hierarchical NUMA. 
Clear as mud? Great! ;) > Also I would drop the _ prefix from the low level macros. Does not seem to be > needed (cosmetic nit). Well, the userspace API portion of the patch introduces the functions without the '_' prefix (cpu_to_node vs _cpu_to_node). That should get posted later today... Cheers! -Matt > > -Andi > |
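[Editor's note: a minimal sketch of the macro layering Matt describes — a flat, NUMA-Q-like layout with 4 CPUs per node and one memblk per node. These definitions and values are illustrative only, not taken from the actual patch.]

```c
/* Hypothetical arch-level topology macros, as discussed above.
 * Flat NUMA-Q-like layout: 4 CPUs per node, one memblk per node,
 * no supernodes, so a node is its own parent. */
#define CPUS_PER_NODE 4

static inline int _cpu_to_node(int cpu)     { return cpu / CPUS_PER_NODE; }
static inline int _node_to_memblk(int node) { return node; }  /* 1:1 mapping here */
static inline int _node_to_node(int node)   { return node; }  /* flat hierarchy */

/* The userspace API drops the '_' prefix, as Matt notes
 * (cpu_to_node vs _cpu_to_node). */
#define cpu_to_node(c)    _cpu_to_node(c)
#define node_to_memblk(n) _node_to_memblk(n)
```

With these, the "nearest memblk to a CPU" idiom from later in the thread becomes `node_to_memblk(cpu_to_node(cpuid))`.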
From: Erich F. <ef...@es...> - 2002-08-27 09:37:28
|
On Monday 26 August 2002 21:22, Andi Kleen wrote: > Why do you work with memblks in the low level API instead of just > addresses? There seems to be no way to map an address to a memblk, which > makes it look a bit disabled. I agree here. Normally there is such a thing in DISCONTIGMEM and it could get a place in the API. > What is the exact difference between a node and a memblk? In my understanding memblks make much sense if there is more than one level in the node hierarchy. And then memblks will be typically mapped to a certain node-hierarchy level. If you have CPUs, nodes, supernodes, memblks might be identical with nodes. Unfortunately multiple levels of hierarchy are not clearly handled. It would be nice to operate with something like logical nodes (incremental numbering, starting at 0) on each level. E.g.: nodes 0..3 belong to supernode 0, nodes 4..7 belong to supernode 1. The _node_to_node() function doesn't clearly say from which level to which level we want to convert and actually forces us to use ugly supernode numbers. With the current API we'd need to have something like: 0...7 are nodes, 8, 9 are supernodes. The _node_to_node and _node_to_memblk functions would be:

node             0 1 2 3 4 5 6 7 8 9
_node_to_node    8 8 8 8 9 9 9 9 8 9
_node_to_memblk  0 1 2 3 4 5 6 7 ? ?    ( "?" is undefined?)

Hmm, I think I missed a part of the NUMA-API discussion on LSE, don't want to restart it. But I'm currently dealing with a computer with two levels of hierarchy and have the feeling that the hierarchy levels are conceptually not "clean" in the API definition (I don't mean Matt's implementation!). So maybe just one small question: are there arguments against replacing the concept of node by node_on_hierarchy_level, which would be (node,level)? This could be done by a platform specific node_t definition, platforms which don't need it would just take typedef int node_t ... Others replace it by a structure. 
node_to_node would get an additional argument (target_level) and node_to_memblk would be just one of the node_to_node mappings, to one particular node level. Regards, Erich -- Global warming and climate change is everyone's business! Sooner or later. Links? http://unfccc.int/ http://www.climnet.org/ http://www.fossil-of-the-day.org/ |
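[Editor's note: Erich's node/supernode table can be encoded directly as lookup tables. This is a sketch of his example only — the `-1` standing in for the undefined "?" entries is an assumption, not part of any proposed API.]

```c
/* Erich's example layout: leaf nodes 0..7, supernodes 8 and 9.
 * Values follow his table; -1 marks the undefined "?" entries. */
static const int node_to_parent[10]     = { 8, 8, 8, 8, 9, 9, 9, 9, 8, 9 };
static const int node_to_memblk_tbl[10] = { 0, 1, 2, 3, 4, 5, 6, 7, -1, -1 };

static inline int _node_to_node(int node)   { return node_to_parent[node]; }
static inline int _node_to_memblk(int node) { return node_to_memblk_tbl[node]; }
```

His complaint is visible here: nothing in `_node_to_node()`'s signature says which hierarchy level 8 and 9 live on — the caller just has to know.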
From: Andi K. <ak...@su...> - 2002-08-27 09:47:50
|
> Hmmm, I think I missed a part of the NUMA-API discussion on LSE, don't > want to restart it. But I'm currently dealing with a computer with two > levels of hierarchy and have the feeling that the hierarchy levels are > conceptually not "clean" in the API definition (I don't mean Matt's > implementation!). So maybe just one small question: are there arguments > against replacing the concept of node by node_on_hierarchy_level, which > would be (node,level)? This could be done by a platform specific > node_t definition, platforms which don't need it would just take > typedef int node_t ... (node,level) would be probably too complex. The kernel does not really want to work with graphs that complicated. If most topologies can be expressed in memblks/nodes, then that would be probably fine. Of course there is still the question e.g. how to teach the scheduler about them. -Andi |
From: Erich F. <ef...@es...> - 2002-08-28 01:26:39
|
On Tuesday 27 August 2002 11:47, Andi Kleen wrote: > (node,level) would be probably too complex. The kernel does not really > want to work with graphs that complicated. The NUMA-API proposal convinced me that we can live with the node_to_node() hierarchy discovery (or similar). > If most topologies can be expressed in memblks/nodes, then that would > be probably fine. > > Of course there is still the question e.g. how to teach the scheduler > about them. We're currently running the node affine NUMA scheduler on a 32 CPU Itanium2 with 8 nodes and 2 supernodes and seeing benefits of the multi-hierarchy features. But the scheduler just knows about the nodes and their "distances" (memory access latency ratios), supernodes are not used directly. The number of hierarchy levels is "discovered" from the distances and used only during the setup phase. Regards, Erich |
From: Andi K. <ak...@su...> - 2002-08-28 08:43:04
|
> We're currently running the node affine NUMA scheduler on a 32 CPU > Itanium2 with 8 nodes and 2 supernodes and seeing benefits of the > multi-hierarchy features. But the scheduler just knows about the nodes > and their "distances" (memory access latency ratios), supernodes are > not used directly. The number of hierarchy levels is "discovered" from > the distances and used only during the setup phase. I see the point for the scheduler (and may even need it myself in future) But I don't think it makes sense to complicate the NUMA API just because the scheduler has more extensive requirements than everybody else. Better is probably to use a load balancer hook for this that is architecture specific and can tune the scheduler for your box. I think Robert Love posted a patch for this recently on l-k. -Andi |
From: Martin J. B. <Mar...@us...> - 2002-08-28 15:27:57
|
>> We're currently running the node affine NUMA scheduler on a 32 CPU >> Itanium2 with 8 nodes and 2 supernodes and seeing benefits of the >> multi-hierarchy features. But the scheduler just knows about the nodes >> and their "distances" (memory access latency ratios), supernodes are >> not used directly. The number of hierarchy levels is "discovered" from >> the distances and used only during the setup phase. > > I see the point for the scheduler (and may even need it myself in future) > But I don't think it makes sense to complicate the NUMA API just because > the scheduler has more extensive requirements than everybody else. > Better is probably to use a load balancer hook for this that is architecture > specific and can tune the scheduler for your box. I think Robert Love posted > a patch for this recently on l-k. Having every architecture implement its own scheduler code seems like a lot of rework to me (that people are likely to screw up). Erich - how much commonality is there between platforms at the moment ... do you feel you can make a generalised NUMA scheduler that touches no arch code if you had generic topology information? And what information would you need? M. |
From: Andi K. <ak...@su...> - 2002-08-28 15:37:46
|
> Having every architecture implement its own scheduler code seems > like a lot of rework to me (that people are likely to screw up). > Erich - how much commonality is there between platforms at the > moment ... do you feel you can make a generalised NUMA scheduler > that touches no arch code if you had generic topology information? > And what information would you need? It doesn't need to be a completely separate algorithm. Just the parts that check the topology would be arch specific. -Andi |
From: Martin J. B. <Mar...@us...> - 2002-08-28 15:41:03
|
>> Having every architecture implement its own scheduler code seems >> like a lot of rework to me (that people are likely to screw up). >> Erich - how much commonality is there between platforms at the >> moment ... do you feel you can make a generalised NUMA scheduler >> that touches no arch code if you had generic topology information? >> And what information would you need? > > It doesn't need to be a completely separate algorithm. Just the parts > that check the topology would be arch specific. Doesn't that make a topology API? ;-) Maybe I'm not quite seeing what you're envisaging ... could you elaborate? We could probably separate out the scheduler info if it's totally distinct from what the other subsystems want, but I don't see any benefit at the moment, and it seems neater to keep it all in one place to me. M. |
From: Andi K. <ak...@su...> - 2002-08-28 15:44:13
|
On Wed, Aug 28, 2002 at 08:39:24AM -0700, Martin J. Bligh wrote: > >> Having every architecture implement its own scheduler code seems > >> like a lot of rework to me (that people are likely to screw up). > >> Erich - how much commonality is there between platforms at the > >> moment ... do you feel you can make a generalised NUMA scheduler > >> that touches no arch code if you had generic topology information? > >> And what information would you need? > > > > It doesn't need to be a completely separate algorithm. Just the parts > > that check the topology would be arch specific. > > Doesn't that make a topology API? ;-) Maybe I'm not quite seeing > what you're envisaging ... could you elaborate? A kind of yes. But it's a specific thing tied to a scheduler, not a general facility. e.g. the load balancer asks: what CPUs would you prefer me to move this task to? Doing such things is far off from a full-blown topology API that gives the whole structure. And it's a lot simpler. -Andi |
From: Andrea A. <an...@su...> - 2002-08-28 15:42:24
|
On Wed, Aug 28, 2002 at 08:25:15AM -0700, Martin J. Bligh wrote: > >> We're currently running the node affine NUMA scheduler on a 32 CPU > >> Itanium2 with 8 nodes and 2 supernodes and seeing benefits of the > >> multi-hierarchy features. But the scheduler just knows about the nodes > >> and their "distances" (memory access latency ratios), supernodes are > >> not used directly. The number of hierarchy levels is "discovered" from > >> the distances and used only during the setup phase. > > > > I see the point for the scheduler (and may even need it myself in future) > > But I don't think it makes sense to complicate the NUMA API just because > > the scheduler has more extensive requirements than everybody else. > > Better is probably to use a load balancer hook for this that is architecture > > specific and can tune the scheduler for your box. I think Robert Love posted > > a patch for this recently on l-k. > > Having every architecture implement its own scheduler code seems > like a lot of rework to me (that people are likely to screw up). That's not necessary. You can have fallback library code enabled by a CONFIG_NUMA_SCHEDULER, and archs with holes in the middle of the nodes can implement their own special arch code instead. All numa archs I work with don't have holes in the middle of the nodes of course, so any abstraction on top of the nodes would be superfluous. Furthermore I think it's flawed to put memory coming from the same numa node in two different pgdats just because there's a hole in the middle of the node. You should definitely use nonlinear from Daniel instead. The only case where nonlinear is useful is when there is a hole in the middle of the numa nodes. The reason I opposed nonlinear so strongly originally is of course that I never heard of any arch with holes in the middle of the nodes before, but somebody is apparently doing that kind of hardware for unknown reasons ;(. 
The scheduler hooks are the best way to deal with these issues because they don't force you on an API that may not be flexible enough for your lowlevel needs, and all normal numas will take advantage of the straightforward library code with no code replication. Andrea |
From: Erich F. <ef...@es...> - 2002-08-28 23:54:54
|
On Wednesday 28 August 2002 17:25, Martin J. Bligh wrote: > >> We're currently running the node affine NUMA scheduler on a 32 CPU > >> Itanium2 with 8 nodes and 2 supernodes and seeing benefits of the > >> multi-hierarchy features. But the scheduler just knows about the nodes > >> and their "distances" (memory access latency ratios), supernodes are > >> not used directly. The number of hierarchy levels is "discovered" from > >> the distances and used only during the setup phase. > > > > I see the point for the scheduler (and may even need it myself in future) > > But I don't think it makes sense to complicate the NUMA API just because > > the scheduler has more extensive requirements than everybody else. > > Better is probably to use a load balancer hook for this that is > > architecture specific and can tune the scheduler for your box. I think > > Robert Love posted a patch for this recently on l-k. > > Having every architecture implement its own scheduler code seems > like a lot of rework to me (that people are likely to screw up). > Erich - how much commonality is there between platforms at the > moment ... do you feel you can make a generalised NUMA scheduler > that touches no arch code if you had generic topology information? > And what information would you need? The current NUMA scheduler is pretty platform independent. All the code is in kernel/sched.c. It needs to build up the CPU pools (nodes) from something like cpu_to_node() and uses a SLIT-like matrix representing the latency ratios between the nodes for building up some hierarchy information. I could use something like node_to_node() here, but on IA64 we get the SLIT (System Locality Information Table) for free from ACPI. And the latency ratios are useful for setting up delays, anyway. So all you need for a new platform is this table and cpu_to_node(). The hierarchy information is only in the delays and the selection of the task to be stolen. 
In the memory allocation part we are currently using a zone ordering which is hierarchy based. node_to_node would do the job here. This is separate from the scheduler code, anyway. All we need here is to be able to specify from which node (no matter which memblk on that node) the memory should be allocated. Regards, Erich |
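[Editor's note: a sketch of the "discover the hierarchy levels from the distances" idea Erich describes. The matrix values are made up for a hypothetical 4-node machine with 2 supernodes; this is not code from his scheduler.]

```c
/* SLIT-like latency-ratio matrix for an imagined machine:
 * 10 = local node, 20 = same supernode, 40 = remote supernode. */
#define NR_NODES 4

static const int slit[NR_NODES][NR_NODES] = {
    { 10, 20, 40, 40 },
    { 20, 10, 40, 40 },
    { 40, 40, 10, 20 },
    { 40, 40, 20, 10 },
};

/* Each distinct off-diagonal distance corresponds to one level of
 * the node hierarchy, so counting them "discovers" the depth. */
static int discover_levels(void)
{
    int seen[256] = { 0 };
    int i, j, levels = 0;

    for (i = 0; i < NR_NODES; i++)
        for (j = 0; j < NR_NODES; j++)
            if (i != j && !seen[slit[i][j]]) {
                seen[slit[i][j]] = 1;
                levels++;
            }
    return levels;
}
```

For this matrix `discover_levels()` finds two levels (intra-supernode and inter-supernode), matching Erich's point that supernodes need not appear explicitly once the distances are known.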
From: Matthew D. <col...@us...> - 2002-08-27 20:53:34
|
Erich Focht wrote: > In my understanding memblks make much sense if there is more than one > level in the node hierarchy. And then memblks will be typically mapped > to a certain node-hierarchy level. If you have CPUs, nodes, supernodes, > memblks might be identical with nodes. If you look at my last post on this thread, your statement isn't quite true. I've tried with this API to draw a line between memblk and node. My theory on this is that the node concept should be as abstract as possible. Nothing more than a container for other topology elements, really. The memblks and CPUs are obviously 'real' things, with physical representations. Other things that I hope to add to the topology in the near future are system busses (PCI busses), PCI devices, and things of that nature. Again, the node concept will act as little more than a container for these things. It is also very handy as a conversion unit. I see functions like X_to_node and node_to_X for all topology elements, ie: If you want to find the nearest memblk to a particular CPU you'd say: memblkid = node_to_memblk(cpu_to_node(cpuid)); > Unfortunately multiple levels of hierarchy are not clearly handled. It > would be nice to operate with something like logical nodes (incremental > numbering, starting at 0) on each level. E.g.: nodes 0..3 belong to > supernode 0, nodes 4..7 belong to supernode 1. The _node_to_node() > function doesn't clearly say from which level to which level we want to > convert and actually forces us to use ugly supernode numbers. Ugly? All numbers are beautiful! ;) In all seriousness though, I don't really like the level concept. Then each platform needs to define some maximum level, a top level of the hierarchy, and a number of nodes at each level. Not difficult, but I don't think it really buys us much... Please look at http://lse.sourceforge.net/numa/numa_api_rationale.html#numbering to see the rationale for using a flat node numbering to describe hierarchical numa. 
> With the > current API we'd need to have something like: 0...7 are nodes, 8, 9 are > supernodes. The _node_to_node and _node_to_memblk functions would be: > node 0 1 2 3 4 5 6 7 8 9 > _node_to_node 8 8 8 8 9 9 9 9 8 9 > _node_to_memblk 0 1 2 3 4 5 6 7 ? ? ( "?" is undefined?) ? would be entirely up to the platform in question. > Hmmm, I think I missed a part of the NUMA-API discussion on LSE, don't > want to restart it. But I'm currently dealing with a computer with two > levels of hierarchy and have the feeling that the hierarchy levels are > conceptually not "clean" in the API definition (I don't mean Matt's > implementation!). So maybe just one small question: are there arguments > against replacing the concept of node by node_on_hierarchy_level, which > would be (node,level)? This could be done by a platform specific > node_t definition, platforms which don't need it would just take > typedef int node_t ... > Others replace it by a structure. node_to_node would get an additional > argument (target_level) and node_to_memblk would be just one of > the node_to_node mappings, to one particular node level. I have objections to the added complexity. If I'm outnumbered here, I'll certainly follow the group's opinion on it... Cheers! -Matt |
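[Editor's note: the flat-numbering scheme Matt defends implies a simple way to find the top of the hierarchy — follow `_node_to_node()` until a node is its own parent, as in Erich's table where nodes 8 and 9 map to themselves. The table here repeats his illustrative values; the walking function is an editor's sketch, not part of the patch.]

```c
/* Erich's example layout again: nodes 0..7 under supernodes 8 and 9;
 * a top-level node is its own parent. */
static const int parent[10] = { 8, 8, 8, 8, 9, 9, 9, 9, 8, 9 };

static int node_to_top(int node)
{
    /* Iterate _node_to_node() until we hit a fixpoint, i.e. a node
     * that contains itself -- the top of this branch of the tree. */
    while (parent[node] != node)
        node = parent[node];
    return node;
}
```

This shows why Matt argues no explicit level numbers are needed: the hierarchy depth falls out of the walk itself.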
From: Andi K. <ak...@su...> - 2002-08-27 21:09:51
|
> I have objections to the added complexity. If I'm outnumbered here, I'll > certainly follow the group's opinion on it... I agree with you that simpler is better. That is why I suggested even getting rid of the memblks. -Andi |
From: Matthew D. <col...@us...> - 2002-08-27 21:39:01
|
Well, simpler is better if the added complexity isn't worth it by the added functionality, etc. In the case of memblks, *I* feel that it is worth the added complexity because of the flexibility and extensibility it buys us. Again, if that turns out to be a minority opinion, it may change. But for now it stays... ;) -Matt Andi Kleen wrote: >>I have objections to the added complexity. If I'm outnumbered here, I'll >>certainly follow the group's opinion on it... > > > I agree with you that simpler is better. > > That is why I suggested even getting rid of the memblks. > > -Andi > |
From: Martin J. B. <Mar...@us...> - 2002-08-27 21:41:30
|
The ia64 NUMA platforms seem to need this to support discontiguous memory within a node. M. --On Tuesday, August 27, 2002 14:35:49 -0700 Matthew Dobson <col...@us...> wrote: > Well, simpler is better if the added complexity isn't worth it by the added functionality, etc. In the case of memblks, *I* feel that it is worth the added complexity because of the flexibility and extensibility it buys us. Again, if that turns out to be a minority opinion, it may change. But for now it stays... ;) > > > -Matt > > Andi Kleen wrote: >>> I have objections to the added complexity. If I'm outnumbered here, I'll >>> certainly follow the group's opinion on it... >> >> >> I agree with you that simpler is better. >> >> That is why I suggested even getting rid of the memblks. >> >> -Andi >> > > > > |
From: Michael H. <hoh...@us...> - 2002-08-27 22:49:35
|
If you do away with memblks, thus assuming that node equates to the memory contained within that node, then hierarchical NUMA becomes problematic. Not all nodes necessarily have memory associated with them, i.e., a node that contains subnodes, the subnodes containing the memory. Or the opposite case of treating a HT processor as a node which contains two processors, but no memory. Reiterating Matt's comment, a node is just a container, memory is one of the items that may be within the container. Therefore, we need to have the concept of a memblk to represent the memory within a container. It also allows for the possibility of multiple memblks within a node which may be desired for some conceivable NUMA architecture. Michael On Tue, 2002-08-27 at 14:35, Matthew Dobson wrote: > Well, simpler is better if the added complexity isn't worth it by the added > functionality, etc. In the case of memblks, *I* feel that it is worth the > added complexity because of the flexibility and extensibility it buys us. > Again, if that turns out to be a minority opinion, it may change. But for now > it stays... ;) > > > -Matt > > Andi Kleen wrote: > >>I have objections to the added complexity. If I'm outnumbered here, I'll > >>certainly follow the group's opinion on it... > > > > > > I agree with you that simpler is better. > > > > That is why I suggested even getting rid of the memblks. > > > > -Andi > > > > -- Michael Hohnbaum 503-578-5486 hoh...@us... T/L 775-5486 |
From: Niels C. <nc...@ej...> - 2002-08-27 23:07:18
|
Michael, looks like you are proposing a tool with more capabilities than what is required for the job... Making a tool for some "conceivable requirement" is not what you used to advocate! Actually, memblks are way too abstract for my cup of tea (as apparently for the tea of others too). -nc- ----- Original Message ----- From: "Michael Hohnbaum" <hoh...@us...> To: "Matthew Dobson" <col...@us...> Subject: Re: [Lse-tech] [patch][rfc] Topology API v0.4 (1/2) > If you do away with memblks, thus assuming that node equates to the > memory contained within that node, then hierarchical NUMA becomes > problematic. Not all nodes necessarily have memory associated with > them, i.e., a node that contains subnodes, the subnodes containing > the memory. Or the opposite case of treating a HT processor as a > node which contains two processors, but no memory. > > Reiterating Matt's comment, a node is just a container, memory is one > of the items that may be within the container. Therefore, we need to > have the concept of a memblk to represent the memory within a container. > It also allows for the possibility of multiple memblks within a node > which may be desired for some conceivable NUMA architecture. > > Michael |
From: Martin J. B. <Mar...@us...> - 2002-08-27 23:17:47
|
> Michael, looks like you are proposing a tool with more capabilities than > what is required for the job... Making a tool for some "conceivable > requirement" is not what you used to advocate! Actually. memblks are way > too abstract for my cup of tea (as apparently for the tea of others too). Ummm ... we just gave two specific examples of machines that don't have a 1-1 mapping between memblocks and nodes. So those capabilities surely are required? M. |
From: Niels C. <nc...@ej...> - 2002-08-27 23:45:39
|
Nah, not really! Those were what-ifs and repercussions of other design decisions. Hierarchical nodes are an awkward way of handling hyper-threading. This Topology API was grafted on the root of wild rose but the graft doesn't seem to have taken. I think y'all need to go back to the garage and reinvent this (and the SGI cpumemsets, while you are at it). And I still think it should be syscall based (in addition to an interface through driverfs, which needs a major overhaul as well, incidentally). -nc- ----- Original Message ----- From: "Martin J. Bligh" <Mar...@us...> To: "Niels Christiansen" <nc...@ej...>; "lse-tech" <lse...@li...> Sent: Tuesday, August 27, 2002 6:13 PM Subject: Re: [Lse-tech] [patch][rfc] Topology API v0.4 (1/2) > > Michael, looks like you are proposing a tool with more capabilities than > > what is required for the job... Making a tool for some "conceivable > > requirement" is not what you used to advocate! Actually. memblks are way > > too abstract for my cup of tea (as apparently for the tea of others too). > > Ummm ... we just gave two specific examples of machines that don't have a > 1-1 mapping between memblocks and nodes. So those capabilities surely > are required? > > M. > > |
From: Paul J. <pj...@en...> - 2002-08-28 00:34:36
|
On Tue, 27 Aug 2002, Niels Christiansen wrote: > ... I think y'all need to go back to the garage and reinvent > this (and the SGI cpumemsets, while you are at it) ... Thanks Niels. I'm glad you still remember us. -- I won't rest till it's the best ... Programmer, Linux Scalability Paul Jackson <pj...@sg...> 1.650.933.1373 |
From: Niels C. <nc...@ej...> - 2002-08-28 03:48:02
|
Oh, I do, I do. Haven't seen much from you on lse-tech lately, though... -nc- > > ... I think y'all need to go back to the garage and reinvent > > this (and the SGI cpumemsets, while you are at it) ... > > Thanks Neils. I'm glad you still remember us. > > -- > I won't rest till it's the best ... > Programmer, Linux Scalability > Paul Jackson <pj...@sg...> 1.650.933.1373 |
From: Martin J. B. <Mar...@us...> - 2002-08-28 00:44:26
|
> Hierarchical nodes is an awkward way of handling hyper-threading. This > Topology API was grafted on the root of wild rose but the graft doesn't seem > to have taken. I think y'all need to go back to the garage and reinvent > this (and the SGI cpumemsets, while you are at it). And I still think it > should be syscall based (in addition to an interface through driverfs, which > needs a major overhaul as well, incidentally). IIRC, we went through this before and we decided you were wrong. Hierarchical nodes aren't just for handling hyperthreading, and you don't have to describe hyperthreaded systems like that unless you want to. So unless you have a simpler suggestion that still covers all the bases, I think we should feel free to ignore you. M. |
From: Niels C. <nc...@ej...> - 2002-08-28 03:45:59
|
I don't recall discussing this with you, but of course you should feel free to ignore whoever you want to ignore. -nc- > IIRC, we went through this before and we decided you were wrong. > Hierarchical nodes aren't just for handling hyperthreading, and you don't > have to describe hyperthreaded systems like that unless you want to. So > unless you have a simpler suggestion that still covers all the bases, I think > we should feel free to ignore you. > > M. |