* [PATCH] top level scheduler domain for ia64
@ 2004-10-19 21:27 Jesse Barnes
2004-10-20 0:02 ` Nick Piggin
` (25 more replies)
0 siblings, 26 replies; 27+ messages in thread
From: Jesse Barnes @ 2004-10-19 21:27 UTC (permalink / raw)
To: linux-ia64
[-- Attachment #1: Type: text/plain, Size: 810 bytes --]
Some have noticed that the overlapping sched domains code doesn't quite work
as intended (it results in disjoint domains on some machines), and that a top
level, machine spanning domain is needed. This patch from John Hawkes adds
it to the ia64 code. This allows processes to run on all CPUs in large
systems, though balancing is limited. It should go to Linus soon; otherwise
large systems will only have ~16p (depending on topology) usable by
the scheduler. I sanity checked it on a small system after rediffing John's
original, and he's done some testing on very large systems.
Nick, can you buy off on the sched.c change? Alternatively, do you want to
send that fix separately, John?
Signed-off-by: Jesse Barnes <jbarnes@sgi.com>
Signed-off-by: John Hawkes <hawkes@sgi.com>
Thanks,
Jesse
[-- Attachment #2: sched-domains-top-level-3.patch --]
[-- Type: text/plain, Size: 3203 bytes --]
===== arch/ia64/kernel/domain.c 1.3 vs edited =====
--- 1.3/arch/ia64/kernel/domain.c 2004-10-18 22:26:51 -07:00
+++ edited/arch/ia64/kernel/domain.c 2004-10-19 14:18:07 -07:00
@@ -119,6 +119,14 @@
*/
static DEFINE_PER_CPU(struct sched_domain, node_domains);
static struct sched_group *sched_group_nodes[MAX_NUMNODES];
+
+static DEFINE_PER_CPU(struct sched_domain, allnodes_domains);
+static struct sched_group sched_group_allnodes[MAX_NUMNODES];
+
+static int __devinit cpu_to_allnodes_group(int cpu)
+{
+ return cpu_to_node(cpu);
+}
#endif
/*
@@ -149,9 +157,21 @@
cpus_and(nodemask, nodemask, cpu_default_map);
#ifdef CONFIG_NUMA
+ if (num_online_cpus()
+ > SD_NODES_PER_DOMAIN*cpus_weight(nodemask)) {
+ sd = &per_cpu(allnodes_domains, i);
+ *sd = SD_ALLNODES_INIT;
+ sd->span = cpu_default_map;
+ group = cpu_to_allnodes_group(i);
+ sd->groups = &sched_group_allnodes[group];
+ p = sd;
+ } else
+ p = NULL;
+
sd = &per_cpu(node_domains, i);
*sd = SD_NODE_INIT;
sd->span = sched_domain_node_span(node);
+ sd->parent = p;
cpus_and(sd->span, sd->span, cpu_default_map);
#endif
@@ -201,6 +221,9 @@
}
#ifdef CONFIG_NUMA
+ init_sched_build_groups(sched_group_allnodes, cpu_default_map,
+ &cpu_to_allnodes_group);
+
for (i = 0; i < MAX_NUMNODES; i++) {
/* Set up node groups */
struct sched_group *sg, *prev;
@@ -282,6 +305,15 @@
power = SCHED_LOAD_SCALE + SCHED_LOAD_SCALE *
(cpus_weight(sd->groups->cpumask)-1) / 10;
sd->groups->cpu_power = power;
+
+#ifdef CONFIG_NUMA
+ sd = &per_cpu(allnodes_domains, i);
+ if (sd->groups) {
+ power = SCHED_LOAD_SCALE + SCHED_LOAD_SCALE *
+ (cpus_weight(sd->groups->cpumask)-1) / 10;
+ sd->groups->cpu_power = power;
+ }
+#endif
}
#ifdef CONFIG_NUMA
===== include/asm-ia64/topology.h 1.12 vs edited =====
--- 1.12/include/asm-ia64/topology.h 2004-10-18 22:26:52 -07:00
+++ edited/include/asm-ia64/topology.h 2004-10-19 14:18:06 -07:00
@@ -58,7 +58,26 @@
| SD_BALANCE_EXEC \
| SD_WAKE_BALANCE, \
.last_balance = jiffies, \
- .balance_interval = 10, \
+ .balance_interval = 1, \
+ .nr_balance_failed = 0, \
+}
+
+/* sched_domains SD_ALLNODES_INIT for IA64 NUMA machines */
+#define SD_ALLNODES_INIT (struct sched_domain) { \
+ .span = CPU_MASK_NONE, \
+ .parent = NULL, \
+ .groups = NULL, \
+ .min_interval = 80, \
+ .max_interval = 320, \
+ .busy_factor = 320, \
+ .imbalance_pct = 125, \
+ .cache_hot_time = (10*1000000), \
+ .cache_nice_tries = 1, \
+ .per_cpu_gain = 100, \
+ .flags = SD_LOAD_BALANCE \
+ | SD_BALANCE_EXEC, \
+ .last_balance = jiffies, \
+ .balance_interval = 100*(63+num_online_cpus())/64, \
.nr_balance_failed = 0, \
}
===== kernel/sched.c 1.367 vs edited =====
--- 1.367/kernel/sched.c 2004-10-18 22:26:52 -07:00
+++ edited/kernel/sched.c 2004-10-19 14:18:06 -07:00
@@ -4378,11 +4378,10 @@
printk("domain %d: ", level);
if (!(sd->flags & SD_LOAD_BALANCE)) {
- printk("does not balance");
+ printk("does not load-balance");
if (sd->parent)
printk(" ERROR !SD_LOAD_BALANCE domain has parent");
printk("\n");
- break;
}
printk("span %s\n", str);
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
@ 2004-10-20 0:02 ` Nick Piggin
2004-10-20 17:48 ` Luck, Tony
` (24 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Nick Piggin @ 2004-10-20 0:02 UTC (permalink / raw)
To: linux-ia64
Jesse Barnes wrote:
> Some have noticed that the overlapping sched domains code doesn't quite work
> as intended (it results in disjoint domains on some machines), and that a top
> level, machine spanning domain is needed. This patch from John Hawkes adds
> it to the ia64 code. This allows processes to run on all CPUs in large
> systems, though balancing is limited. It should go to Linus soon now
> otherwise large systems will only have ~16p (depending on topology) usable by
> the scheduler. I sanity checked it on a small system after rediffing John's
> original, and he's done some testing on very large systems.
>
> Nick, can you buy off on the sched.c change? Alternatively, do you want to
> send that fix separately John?
No, that looks good. It is a pretty trivial change, so I think it's harmless
to go in with this patch.
I'd consider increasing balancing rates for the SD_NODE_INIT domains after
this patch goes in too.
* RE: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
2004-10-20 0:02 ` Nick Piggin
@ 2004-10-20 17:48 ` Luck, Tony
2004-10-20 18:02 ` Nick Piggin
` (23 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Luck, Tony @ 2004-10-20 17:48 UTC (permalink / raw)
To: linux-ia64
[-- Attachment #1: Type: text/plain, Size: 1073 bytes --]
>Some have noticed that the overlapping sched domains code
>doesn't quite work as intended (it results in disjoint domains
>on some machines), and that a top level, machine spanning domain
>is needed.
Why is the solution to jam this into the ia64 specific code? From
this description it sounds like a generic scheduler problem, so
the solution ought to be up in some generic code.
+ .min_interval = 80, \
+ .max_interval = 320, \
+ .busy_factor = 320, \
+ .imbalance_pct = 125, \
+ .cache_hot_time = (10*1000000), \
+ .balance_interval = 100*(63+num_online_cpus())/64, \
That's a lot of magic numbers and formulae ... are they right?
How would a user know if they are right?
>Nick, can you buy off on the sched.c change? Alternatively,
>do you want to send that fix separately John?
I saw the ACK from Nick ... but kernel/sched.c changes will have
to go through Andrew, not me. Are the ia64 and generic parts
separable? If the sched.c change goes in, do other architectures
need to have some equivalent change?
-Tony
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
2004-10-20 0:02 ` Nick Piggin
2004-10-20 17:48 ` Luck, Tony
@ 2004-10-20 18:02 ` Nick Piggin
2004-10-20 18:03 ` Jesse Barnes
` (22 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Nick Piggin @ 2004-10-20 18:02 UTC (permalink / raw)
To: linux-ia64
Luck, Tony wrote:
>>Some have noticed that the overlapping sched domains code
>>doesn't quite work as intended (it results in disjoint domains
>>on some machines), and that a top level, machine spanning domain
>>is needed.
>
>
> Why is the solution to jam this into the ia64 specific code? From
> this description it sounds like a generic scheduler problem, so
> the solution ought to be up in some generic code.
>
So far, ia64 is the only one that uses these overlapping domains.
The code to build them is complex enough that we didn't want to
put it into the generic code at the moment (but it is fairly well
contained, all just in arch/ia64/kernel/domain.c and asm/topology.h).
> + .min_interval = 80, \
> + .max_interval = 320, \
> + .busy_factor = 320, \
> + .imbalance_pct = 125, \
> + .cache_hot_time = (10*1000000), \
> + .balance_interval = 100*(63+num_online_cpus())/64, \
>
> That's a lot of magic numbers and formulae ... are they right?
> How would a user know if they are right.
>
To be honest you really wouldn't. It would take a lot of careful
testing on numerous workloads and systems. I believe SGI is
starting to do a bit of testing... I don't have the resources to
do many "real world" tests.
At this stage I wouldn't let them worry you too much :P
Hopefully they'll gradually improve.
>
>>Nick, can you buy off on the sched.c change? Alternatively,
>>do you want to send that fix separately John?
>
>
> I saw the ACK from Nick ... but kernel/sched.c changes will have
> to go through Andrew, not me. Are the ia64 and generic parts
> separable? If the sched.c change goes in, do other architectures
> need to have some equivalent change?
>
No, it should be ok as is. The parts should be ok to separate if
you would prefer it that way.
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (2 preceding siblings ...)
2004-10-20 18:02 ` Nick Piggin
@ 2004-10-20 18:03 ` Jesse Barnes
2004-10-21 14:11 ` Xavier Bru
` (21 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Jesse Barnes @ 2004-10-20 18:03 UTC (permalink / raw)
To: linux-ia64
On Wednesday, October 20, 2004 10:48 am, Luck, Tony wrote:
> >Some have noticed that the overlapping sched domains code
> >doesn't quite work as intended (it results in disjoint domains
> >on some machines), and that a top level, machine spanning domain
> >is needed.
>
> Why is the solution to jam this into the ia64 specific code? From
> this description it sounds like a generic scheduler problem, so
> the solution ought to be up in some generic code.
It used to be in generic code, but now it's arch specific, so each arch builds
its own scheduling domains. This patch adds some NUMA specific scheduling
domain code to build a top level domain on boxes with lots of nodes. It
won't affect non-NUMA or small NUMA boxes.
> + .min_interval = 80, \
> + .max_interval = 320, \
> + .busy_factor = 320, \
> + .imbalance_pct = 125, \
> + .cache_hot_time = (10*1000000), \
> + .balance_interval = 100*(63+num_online_cpus())/64, \
>
> That's a lot of magic numbers and formulae ... are they right?
> How would a user know if they are right.
John has run several tests on large systems to come up with something
reasonable, but no doubt they could use more tweaking. John or Nick, care to
comment?
> >Nick, can you buy off on the sched.c change? Alternatively,
> >do you want to send that fix separately John?
>
> I saw the ACK from Nick ... but kernel/sched.c changes will have
> to go through Andrew, not me. Are the ia64 and generic parts
> separable? If the sched.c change goes in, do other architectures
> need to have some equivalent change?
Ok, I'll send out the sched.c bit separately then as it's standalone.
Thanks,
Jesse
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (3 preceding siblings ...)
2004-10-20 18:03 ` Jesse Barnes
@ 2004-10-21 14:11 ` Xavier Bru
2004-10-21 14:34 ` Nick Piggin
` (20 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Xavier Bru @ 2004-10-21 14:11 UTC (permalink / raw)
To: linux-ia64
Hello Nick & all,
Nick Piggin wrote:
> Luck, Tony wrote:
>
>
>> + .min_interval = 80, \
>> + .max_interval = 320, \
>> + .busy_factor = 320, \
>> + .imbalance_pct = 125, \
>> + .cache_hot_time = (10*1000000), \
>> + .balance_interval = 100*(63+num_online_cpus())/64, \
>>
>> That's a lot of magic numbers and formulae ... are they right?
>> How would a user know if they are right.
>>
>
> To be honest you really wouldn't. It would take a lot of careful
> testing on numerous workloads and systems. I believe SGI is
> starting to do a bit of testing... I don't have the resources to
> do many "real world" tests.
>
> At this stage I wouldn't let them worry you too much :P
> Hopefully they'll gradually improve.
Why shouldn't we use the node_distance() function to build the NUMA
hierarchy in an independent way and compute the right parameters for
each level?
--
Best regards.
_____________________________________________________________________
Xavier BRU BULL ISD/R&D/INTEL office: FREC B1-422
tel : +33 (0)4 76 29 77 45 http://www-frec.bull.fr
fax : +33 (0)4 76 29 77 70 mailto:Xavier.Bru@bull.net
addr: BULL, 1 rue de Provence, BP 208, 38432 Echirolles Cedex, FRANCE
_____________________________________________________________________
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (4 preceding siblings ...)
2004-10-21 14:11 ` Xavier Bru
@ 2004-10-21 14:34 ` Nick Piggin
2004-10-28 9:29 ` Takayoshi Kochi
` (19 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Nick Piggin @ 2004-10-21 14:34 UTC (permalink / raw)
To: linux-ia64
Xavier Bru wrote:
> Hello Nick & all,
>
> Nick Piggin wrote:
>
>> Luck, Tony wrote:
>>
>>
>>> + .min_interval = 80, \
>>> + .max_interval = 320, \
>>> + .busy_factor = 320, \
>>> + .imbalance_pct = 125, \
>>> + .cache_hot_time = (10*1000000), \
>>> + .balance_interval = 100*(63+num_online_cpus())/64, \
>>>
>>> That's a lot of magic numbers and formulae ... are they right?
>>> How would a user know if they are right.
>>>
>>
>> To be honest you really wouldn't. It would take a lot of careful
>> testing on numerous workloads and systems. I believe SGI is
>> starting to do a bit of testing... I don't have the resources to
>> do many "real world" tests.
>>
>> At this stage I wouldn't let them worry you too much :P
>> Hopefully they'll gradually improve.
>
>
> Why should'nt we use the node_distance() function to build in an
> independant way the Numa hierarchy and compute the right parameters for
> each level ?
>
>
Hi Xavier,
That would probably be a good idea where possible, although for many
architectures this sort of information won't be available. It may be
that we ultimately will want to represent the NUMA topology with
node_distance being the first class function/measure (I personally
think sched-domains should be extended into the memory topology). At
the present time though, it would be a backward step to force everyone
to build a node_distance table.
Two things to note - first, even if node_distance does return something
meaningful, it still has to be input into a larger field of parameters,
so there will still be some heuristics/fudging/tuning going on.
Second, we can do runtime probing to gain more information. For example,
Ingo has a patch in the works that will compute real cache transfer
times between any two CPUs, which looks promising. We can query the
number of online CPUs when deciding on balancing rates, etc etc.
So in short, we basically want as much info as we can possibly gather...
and that is the easy part :( People need to do tests on their real life
workloads with real systems to convert this information into useful
parameters.
Anyway, the scheduler isn't _quite_ at the point where you want to be
doing serious fine tuning with it yet; we've got to get a few more
things to go in (eg. Ingo's patch, improvements from John Hawkes, some
performance patches from me, etc).
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (5 preceding siblings ...)
2004-10-21 14:34 ` Nick Piggin
@ 2004-10-28 9:29 ` Takayoshi Kochi
2004-10-28 15:26 ` Jesse Barnes
` (18 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Takayoshi Kochi @ 2004-10-28 9:29 UTC (permalink / raw)
To: linux-ia64
[-- Attachment #1: Type: Text/Plain, Size: 4360 bytes --]
Hi Jesse,
From: Jesse Barnes <jbarnes@engr.sgi.com>
Subject: [PATCH] top level scheduler domain for ia64
Date: Tue, 19 Oct 2004 14:27:27 -0700
> Some have noticed that the overlapping sched domains code doesn't quite work
> as intended (it results in disjoint domains on some machines), and that a top
> level, machine spanning domain is needed. This patch from John Hawkes adds
> it to the ia64 code. This allows processes to run on all CPUs in large
> systems, though balancing is limited. It should go to Linus soon now
> otherwise large systems will only have ~16p (depending on topology) usable by
> the scheduler. I sanity checked it on a small system after rediffing John's
> original, and he's done some testing on very large systems.
Our 32-way machine still isn't configured well with the overlapping
domain partitioning. CPUs 0-15 belong to domain (0 1 2 3 4 5 6)
and CPUs 16-31 belong to domain (0 1 4 5 6 7), which is asymmetric
and at the very least does not reflect the real connection.
dmesg of the machine is attached.
The following patch makes the ia64 domain partitioning (maybe Altix
specific ;) optional and makes the magic number (6) configurable.
What do you think of this?
Signed-off-by: Takayoshi Kochi <t-kochi@bq.jp.nec.com>
Index: 269-bk/include/asm-ia64/processor.h
===================================================================
--- 269-bk/include/asm-ia64/processor.h (revision 48)
+++ 269-bk/include/asm-ia64/processor.h (working copy)
@@ -20,8 +20,10 @@
#include <asm/ptrace.h>
#include <asm/ustack.h>
+#ifdef CONFIG_IA64_SCHED_SPAN
/* Our arch specific arch_init_sched_domain is in arch/ia64/kernel/domain.c */
#define ARCH_HAS_SCHED_DOMAIN
+#endif
#define IA64_NUM_DBG_REGS 8
/*
Index: 269-bk/arch/ia64/Kconfig
===================================================================
--- 269-bk/arch/ia64/Kconfig (revision 48)
+++ 269-bk/arch/ia64/Kconfig (working copy)
@@ -188,6 +188,24 @@
or have huge holes in the physical address space for other reasons.
See <file:Documentation/vm/numa> for more.
+config IA64_SCHED_SPAN
+ bool "Bounded spanning of scheduling domains for large NUMA"
+ depends on NUMA
+ default y if IA64_SGI_SN2
+ help
+ Say Y to support bounded spanning of scheduling domains for large
+ NUMA systems, which will bound the number of nodes to span
+ for a top-level scheduling domain. If not set, the top-level
+ domain spans over all nodes.
+
+config IA64_SD_NODES_PER_DOMAIN
+ int "Nodes to span"
+ depends on IA64_SCHED_SPAN
+ default "6"
+ help
+ This is the number of nodes to span for a top-level scheduling
+ domain.
+
config IA64_CYCLONE
bool "Cyclone (EXA) Time Source support"
help
Index: 269-bk/arch/ia64/kernel/domain.c
===================================================================
--- 269-bk/arch/ia64/kernel/domain.c (revision 49)
+++ 269-bk/arch/ia64/kernel/domain.c (working copy)
@@ -12,8 +12,6 @@
#include <linux/init.h>
#include <linux/topology.h>
-#define SD_NODES_PER_DOMAIN 6
-
#ifdef CONFIG_NUMA
/**
* find_next_best_node - find the next node to include in a sched_domain
@@ -77,7 +75,7 @@
cpus_or(span, span, nodemask);
set_bit(node, used_nodes);
- for (i = 1; i < SD_NODES_PER_DOMAIN; i++) {
+ for (i = 1; i < CONFIG_IA64_SD_NODES_PER_DOMAIN; i++) {
int next_node = find_next_best_node(node, used_nodes);
nodemask = node_to_cpumask(next_node);
cpus_or(span, span, nodemask);
@@ -158,7 +156,7 @@
#ifdef CONFIG_NUMA
if (num_online_cpus()
- > SD_NODES_PER_DOMAIN*cpus_weight(nodemask)) {
+ > CONFIG_IA64_SD_NODES_PER_DOMAIN*cpus_weight(nodemask)) {
sd = &per_cpu(allnodes_domains, i);
*sd = SD_ALLNODES_INIT;
sd->span = cpu_default_map;
Index: 269-bk/arch/ia64/kernel/Makefile
===================================================================
--- 269-bk/arch/ia64/kernel/Makefile (revision 48)
+++ 269-bk/arch/ia64/kernel/Makefile (working copy)
@@ -14,7 +14,8 @@
obj-$(CONFIG_IA64_PALINFO) += palinfo.o
obj-$(CONFIG_IOSAPIC) += iosapic.o
obj-$(CONFIG_MODULES) += module.o
-obj-$(CONFIG_SMP) += smp.o smpboot.o domain.o
+obj-$(CONFIG_SMP) += smp.o smpboot.o
+obj-$(CONFIG_IA64_SCHED_SPAN) += domain.o
obj-$(CONFIG_PERFMON) += perfmon_default_smpl.o
obj-$(CONFIG_IA64_CYCLONE) += cyclone.o
obj-$(CONFIG_IA64_MCA_RECOVERY) += mca_recovery.o
---
Takayoshi Kochi
[-- Attachment #2: dmesg-domain --]
[-- Type: Text/Plain, Size: 9023 bytes --]
Brought up 32 CPUs
Total of 32 processors activated (47813.16 BogoMIPS).
CPU0:
domain 0: span 0000000f
groups: 00000001 00000002 00000004 00000008
domain 1: span 00ffffff
groups: 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000
domain 2: span ffffffff
groups: 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000 0f000000 f0000000
CPU1:
domain 0: span 0000000f
groups: 00000002 00000004 00000008 00000001
domain 1: span 00ffffff
groups: 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000
domain 2: span ffffffff
groups: 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000 0f000000 f0000000
CPU2:
domain 0: span 0000000f
groups: 00000004 00000008 00000001 00000002
domain 1: span 00ffffff
groups: 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000
domain 2: span ffffffff
groups: 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000 0f000000 f0000000
CPU3:
domain 0: span 0000000f
groups: 00000008 00000001 00000002 00000004
domain 1: span 00ffffff
groups: 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000
domain 2: span ffffffff
groups: 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000 0f000000 f0000000
CPU4:
domain 0: span 000000f0
groups: 00000010 00000020 00000040 00000080
domain 1: span 00ffffff
groups: 000000f0 00000f00 0000f000 000f0000 00f00000 0000000f
domain 2: span ffffffff
groups: 000000f0 00000f00 0000f000 000f0000 00f00000 0f000000 f0000000 0000000f
CPU5:
domain 0: span 000000f0
groups: 00000020 00000040 00000080 00000010
domain 1: span 00ffffff
groups: 000000f0 00000f00 0000f000 000f0000 00f00000 0000000f
domain 2: span ffffffff
groups: 000000f0 00000f00 0000f000 000f0000 00f00000 0f000000 f0000000 0000000f
CPU6:
domain 0: span 000000f0
groups: 00000040 00000080 00000010 00000020
domain 1: span 00ffffff
groups: 000000f0 00000f00 0000f000 000f0000 00f00000 0000000f
domain 2: span ffffffff
groups: 000000f0 00000f00 0000f000 000f0000 00f00000 0f000000 f0000000 0000000f
CPU7:
domain 0: span 000000f0
groups: 00000080 00000010 00000020 00000040
domain 1: span 00ffffff
groups: 000000f0 00000f00 0000f000 000f0000 00f00000 0000000f
domain 2: span ffffffff
groups: 000000f0 00000f00 0000f000 000f0000 00f00000 0f000000 f0000000 0000000f
CPU8:
domain 0: span 00000f00
groups: 00000100 00000200 00000400 00000800
domain 1: span 00ffffff
groups: 00000f00 0000f000 000f0000 00f00000 0000000f 000000f0
domain 2: span ffffffff
groups: 00000f00 0000f000 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0
CPU9:
domain 0: span 00000f00
groups: 00000200 00000400 00000800 00000100
domain 1: span 00ffffff
groups: 00000f00 0000f000 000f0000 00f00000 0000000f 000000f0
domain 2: span ffffffff
groups: 00000f00 0000f000 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0
CPU10:
domain 0: span 00000f00
groups: 00000400 00000800 00000100 00000200
domain 1: span 00ffffff
groups: 00000f00 0000f000 000f0000 00f00000 0000000f 000000f0
domain 2: span ffffffff
groups: 00000f00 0000f000 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0
CPU11:
domain 0: span 00000f00
groups: 00000800 00000100 00000200 00000400
domain 1: span 00ffffff
groups: 00000f00 0000f000 000f0000 00f00000 0000000f 000000f0
domain 2: span ffffffff
groups: 00000f00 0000f000 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0
CPU12:
domain 0: span 0000f000
groups: 00001000 00002000 00004000 00008000
domain 1: span 00ffffff
groups: 0000f000 000f0000 00f00000 0000000f 000000f0 00000f00
domain 2: span ffffffff
groups: 0000f000 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0 00000f00
CPU13:
domain 0: span 0000f000
groups: 00002000 00004000 00008000 00001000
domain 1: span 00ffffff
groups: 0000f000 000f0000 00f00000 0000000f 000000f0 00000f00
domain 2: span ffffffff
groups: 0000f000 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0 00000f00
CPU14:
domain 0: span 0000f000
groups: 00004000 00008000 00001000 00002000
domain 1: span 00ffffff
groups: 0000f000 000f0000 00f00000 0000000f 000000f0 00000f00
domain 2: span ffffffff
groups: 0000f000 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0 00000f00
CPU15:
domain 0: span 0000f000
groups: 00008000 00001000 00002000 00004000
domain 1: span 00ffffff
groups: 0000f000 000f0000 00f00000 0000000f 000000f0 00000f00
domain 2: span ffffffff
groups: 0000f000 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0 00000f00
CPU16:
domain 0: span 000f0000
groups: 00010000 00020000 00040000 00080000
domain 1: span ffff00ff
groups: 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0
domain 2: span ffffffff
groups: 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0 00000f00 0000f000
CPU17:
domain 0: span 000f0000
groups: 00020000 00040000 00080000 00010000
domain 1: span ffff00ff
groups: 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0
domain 2: span ffffffff
groups: 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0 00000f00 0000f000
CPU18:
domain 0: span 000f0000
groups: 00040000 00080000 00010000 00020000
domain 1: span ffff00ff
groups: 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0
domain 2: span ffffffff
groups: 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0 00000f00 0000f000
CPU19:
domain 0: span 000f0000
groups: 00080000 00010000 00020000 00040000
domain 1: span ffff00ff
groups: 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0
domain 2: span ffffffff
groups: 000f0000 00f00000 0f000000 f0000000 0000000f 000000f0 00000f00 0000f000
CPU20:
domain 0: span 00f00000
groups: 00100000 00200000 00400000 00800000
domain 1: span ffff00ff
groups: 00f00000 0f000000 f0000000 0000000f 000000f0 000f0000
domain 2: span ffffffff
groups: 00f00000 0f000000 f0000000 0000000f 000000f0 00000f00 0000f000 000f0000
CPU21:
domain 0: span 00f00000
groups: 00200000 00400000 00800000 00100000
domain 1: span ffff00ff
groups: 00f00000 0f000000 f0000000 0000000f 000000f0 000f0000
domain 2: span ffffffff
groups: 00f00000 0f000000 f0000000 0000000f 000000f0 00000f00 0000f000 000f0000
CPU22:
domain 0: span 00f00000
groups: 00400000 00800000 00100000 00200000
domain 1: span ffff00ff
groups: 00f00000 0f000000 f0000000 0000000f 000000f0 000f0000
domain 2: span ffffffff
groups: 00f00000 0f000000 f0000000 0000000f 000000f0 00000f00 0000f000 000f0000
CPU23:
domain 0: span 00f00000
groups: 00800000 00100000 00200000 00400000
domain 1: span ffff00ff
groups: 00f00000 0f000000 f0000000 0000000f 000000f0 000f0000
domain 2: span ffffffff
groups: 00f00000 0f000000 f0000000 0000000f 000000f0 00000f00 0000f000 000f0000
CPU24:
domain 0: span 0f000000
groups: 01000000 02000000 04000000 08000000
domain 1: span ffff00ff
groups: 0f000000 f0000000 0000000f 000000f0 000f0000 00f00000
domain 2: span ffffffff
groups: 0f000000 f0000000 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000
CPU25:
domain 0: span 0f000000
groups: 02000000 04000000 08000000 01000000
domain 1: span ffff00ff
groups: 0f000000 f0000000 0000000f 000000f0 000f0000 00f00000
domain 2: span ffffffff
groups: 0f000000 f0000000 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000
CPU26:
domain 0: span 0f000000
groups: 04000000 08000000 01000000 02000000
domain 1: span ffff00ff
groups: 0f000000 f0000000 0000000f 000000f0 000f0000 00f00000
domain 2: span ffffffff
groups: 0f000000 f0000000 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000
CPU27:
domain 0: span 0f000000
groups: 08000000 01000000 02000000 04000000
domain 1: span ffff00ff
groups: 0f000000 f0000000 0000000f 000000f0 000f0000 00f00000
domain 2: span ffffffff
groups: 0f000000 f0000000 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000
CPU28:
domain 0: span f0000000
groups: 10000000 20000000 40000000 80000000
domain 1: span ffff00ff
groups: f0000000 0000000f 000000f0 000f0000 00f00000 0f000000
domain 2: span ffffffff
groups: f0000000 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000 0f000000
CPU29:
domain 0: span f0000000
groups: 20000000 40000000 80000000 10000000
domain 1: span ffff00ff
groups: f0000000 0000000f 000000f0 000f0000 00f00000 0f000000
domain 2: span ffffffff
groups: f0000000 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000 0f000000
CPU30:
domain 0: span f0000000
groups: 40000000 80000000 10000000 20000000
domain 1: span ffff00ff
groups: f0000000 0000000f 000000f0 000f0000 00f00000 0f000000
domain 2: span ffffffff
groups: f0000000 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000 0f000000
CPU31:
domain 0: span f0000000
groups: 80000000 10000000 20000000 40000000
domain 1: span ffff00ff
groups: f0000000 0000000f 000000f0 000f0000 00f00000 0f000000
domain 2: span ffffffff
groups: f0000000 0000000f 000000f0 00000f00 0000f000 000f0000 00f00000 0f000000
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (6 preceding siblings ...)
2004-10-28 9:29 ` Takayoshi Kochi
@ 2004-10-28 15:26 ` Jesse Barnes
2004-11-01 6:35 ` Takayoshi Kochi
` (17 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Jesse Barnes @ 2004-10-28 15:26 UTC (permalink / raw)
To: linux-ia64
On Thursday, October 28, 2004 2:29 am, Takayoshi Kochi wrote:
> Our 32-way machine still isn't configured well with the overlapping
> domain partitioning. CPUs 0-15 belong to domain (0 1 2 3 4 5 6)
> and CPUs 16-31 belong to domain (0 1 4 5 6 7), which is asymmetric
> and at least does not reflect the real connection.
Hm, I was afraid of that since the top level domain is only built if the
system has more CPUs than SD_NODES_PER_DOMAIN * cpus_per_node. We could
change that to be a simple if (numnodes > SD_NODES_PER_DOMAIN) instead.
> dmesg of the machine is attached.
>
> The following patch makes the ia64 domain partitioning (maybe Altix
> specific ;) optional and makes the magic number (6) configurable.
>
> What do you think of this?
Maybe a boot parameter would be better for configuring SD_NODES_PER_DOMAIN?
That would allow a single kernel binary to be configured to run well on many
types of NUMA systems. It would also mean you could play with very large
node domains, with and w/o a top level domain.
Jesse
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (7 preceding siblings ...)
2004-10-28 15:26 ` Jesse Barnes
@ 2004-11-01 6:35 ` Takayoshi Kochi
2004-11-01 17:07 ` Jesse Barnes
` (16 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Takayoshi Kochi @ 2004-11-01 6:35 UTC (permalink / raw)
To: linux-ia64
Hi Jesse,
From: Jesse Barnes <jbarnes@engr.sgi.com>
Subject: Re: [PATCH] top level scheduler domain for ia64
Date: Thu, 28 Oct 2004 08:26:00 -0700
> > Our 32-way machine still isn't configured well with the overlapping
> > domain partitioning. CPUs 0-15 belong to domain (0 1 2 3 4 5 6)
> > and CPUs 16-31 belong to domain (0 1 4 5 6 7), which is asymmetric
> > and at least does not reflect the real connection.
>
> Hm, I was afraid of that since the top level domain is only built if the
> system has more CPUs than SD_NODES_PER_DOMAIN * cpus_per_node. We could
> change that to be a simple if (numnodes > SD_NODES_PER_DOMAIN) instead.
But I don't think the domain partitioning will be an optimization in
every case where (numnodes > SD_NODES_PER_DOMAIN) is true.
> > dmesg of the machine is attached.
> >
> > The following patch makes the ia64 domain partitioning (maybe Altix
> > specific ;) optional and makes the magic number (6) configurable.
> >
> > What do you think of this?
>
> Maybe a boot parameter would be better for configuring SD_NODES_PER_DOMAIN?
> That would allow a single kernel binary to be configured to run well on many
> types of NUMA systems. It would also mean you could play with very large
> node domains, with and w/o a top level domain.
It sounds good for -mm, but not for Linus?
It would be a pain for an ordinary user to test many parameter values
for each configuration (and the number makes very little difference,
which would make it even harder to find the best one).
What I'd like to do now is to keep subarch-specific optimizations
out of generic kernels.
Do you have any objection to including the patch in mainline?
---
Takayoshi Kochi
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (8 preceding siblings ...)
2004-11-01 6:35 ` Takayoshi Kochi
@ 2004-11-01 17:07 ` Jesse Barnes
2004-11-01 17:16 ` Matthew Wilcox
` (15 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Jesse Barnes @ 2004-11-01 17:07 UTC (permalink / raw)
To: linux-ia64
On Sunday, October 31, 2004 10:35 pm, Takayoshi Kochi wrote:
> > Maybe a boot parameter would be better for configuring
> > SD_NODES_PER_DOMAIN? That would allow a single kernel binary to be
> > configured to run well on many types of NUMA systems. It would also mean
> > you could play with very large node domains, with and w/o a top level
> > domain.
>
> It sounds good for -mm, but not for Linus?
> That would be a pain for any ordinary user to test many parameters
> for each configuration (and the number makes very little difference,
> which would make it more difficult to find the best number).
>
> What I'd like to do now is to get rid of subarch specific optimization
> out of generic kernels.
>
> Do you have any objection for the patch to be included in the mainline?
I don't want it included since it hurts generic kernels on large systems (i.e.
either they wouldn't boot or wouldn't see all the CPUs). It should be
possible to use a runtime check (whether a boot parameter or just a
comparison on numnodes) instead to avoid non-overlapping domains.
If I understand you right, you don't want a top level domain for your 32 way
systems, but you *do* want the node domains to span the whole thing. Is that
right?
If so, you could do something like this I think?
	if (numnodes <= SMALL_SYSTEM_THRESHOLD) {
		SD_NODES_PER_DOMAIN = numnodes;
		build_node_domains();		/* each one spans the system */
	} else {
		SD_NODES_PER_DOMAIN = 4;	/* or whatever */
		build_node_domains();		/* only spans nearby nodes */
		build_top_level_domain();	/* whole system, infrequently balanced */
	}
Would that address your concerns?
Jesse
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (9 preceding siblings ...)
2004-11-01 17:07 ` Jesse Barnes
@ 2004-11-01 17:16 ` Matthew Wilcox
2004-11-01 18:36 ` Jesse Barnes
` (14 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Matthew Wilcox @ 2004-11-01 17:16 UTC (permalink / raw)
To: linux-ia64
On Mon, Nov 01, 2004 at 09:07:32AM -0800, Jesse Barnes wrote:
> If I understand you right, you don't want a top level domain for your 32 way
> systems, but you *do* want the node domains to span the whole thing. Is that
> right?
>
> If so, you could do something like this I think?
>
> 	if (numnodes <= SMALL_SYSTEM_THRESHOLD) {
> 		SD_NODES_PER_DOMAIN = numnodes;
> 		build_node_domains();		/* each one spans the system */
> 	} else {
> 		SD_NODES_PER_DOMAIN = 4;	/* or whatever */
> 		build_node_domains();		/* only spans nearby nodes */
> 		build_top_level_domain();	/* whole system, infrequently balanced */
> 	}
>
> Would that address your concerns?
Doesn't sound like a great idea. HP's already shipping 128-way Superdome
IA-64 systems, and they'll want to be set up rather differently from the
Altix systems. I think this code needs to be autotuning so it doesn't
need to be touched whenever a vendor releases a new configuration (I
think I heard that NASA's Altixes had a custom CPU brick with twice the
CPUs in it?)
--
"Next the statesmen will invent cheap lies, putting the blame upon
the nation that is attacked, and every man will be glad of those
conscience-soothing falsities, and will diligently study them, and refuse
to examine any refutations of them; and thus he will by and by convince
himself that the war is just, and will thank God for the better sleep
he enjoys after this process of grotesque self-deception." -- Mark Twain
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (10 preceding siblings ...)
2004-11-01 17:16 ` Matthew Wilcox
@ 2004-11-01 18:36 ` Jesse Barnes
2004-11-01 18:53 ` Luck, Tony
` (13 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Jesse Barnes @ 2004-11-01 18:36 UTC (permalink / raw)
To: linux-ia64
On Monday, November 1, 2004 9:16 am, Matthew Wilcox wrote:
> On Mon, Nov 01, 2004 at 09:07:32AM -0800, Jesse Barnes wrote:
> > If I understand you right, you don't want a top level domain for your 32
> > way systems, but you *do* want the node domains to span the whole thing.
> > Is that right?
> >
> > If so, you could do something like this I think?
> >
> > 	if (numnodes <= SMALL_SYSTEM_THRESHOLD) {
> > 		SD_NODES_PER_DOMAIN = numnodes;
> > 		build_node_domains();		/* each one spans the system */
> > 	} else {
> > 		SD_NODES_PER_DOMAIN = 4;	/* or whatever */
> > 		build_node_domains();		/* only spans nearby nodes */
> > 		build_top_level_domain();	/* whole system, infrequently balanced */
> > 	}
> >
> > Would that address your concerns?
>
> Doesn't sound like a great idea. HP's already shipping 128-way Superdome
> IA-64 systems, and they'll want to be set up rather differently from the
> Altix systems. I think this code needs to be autotuning so it doesn't
> need to be touched whenever a vendor releases a new configuration (I
> think I heard that NASA's Altixes had a custom CPU brick with twice the
> CPUs in it?)
Yeah, but still the same number of CPUs/FSB. I agree that autotuning would be
best (the above is a crude example of that). Any suggestions?
Jesse
* RE: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (11 preceding siblings ...)
2004-11-01 18:36 ` Jesse Barnes
@ 2004-11-01 18:53 ` Luck, Tony
2004-11-01 19:02 ` Jesse Barnes
` (12 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Luck, Tony @ 2004-11-01 18:53 UTC (permalink / raw)
To: linux-ia64
>> > 	if (numnodes <= SMALL_SYSTEM_THRESHOLD) {
>> > 		SD_NODES_PER_DOMAIN = numnodes;
>> > 		build_node_domains();	/* each one spans the system */
>> > 	} else {
>> > 		SD_NODES_PER_DOMAIN = 4;	/* or whatever */
Ugh! The magic '4' is going to be different for different machines,
so there is no value that will make all of the people happy all of
the time.
>Yeah, but still the same number of CPUs/FSB. I agree that
>autotuning would be
>best (the above is a crude example of that). Any suggestions?
Can't you use the SLIT table to build a hierarchy of scheduler
domains that matches the actual hardware configuration, instead
of using compiled in constants or even boot-time command line
parameters?
-Tony
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (12 preceding siblings ...)
2004-11-01 18:53 ` Luck, Tony
@ 2004-11-01 19:02 ` Jesse Barnes
2004-11-01 19:45 ` Luck, Tony
` (11 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Jesse Barnes @ 2004-11-01 19:02 UTC (permalink / raw)
To: linux-ia64
On Monday, November 1, 2004 10:53 am, Luck, Tony wrote:
> >> > 	if (numnodes <= SMALL_SYSTEM_THRESHOLD) {
> >> > 		SD_NODES_PER_DOMAIN = numnodes;
> >> > 		build_node_domains();	/* each one spans the system */
> >> > 	} else {
> >> > 		SD_NODES_PER_DOMAIN = 4;	/* or whatever */
>
> Ugh! The magic '4' is going to be different for different machines,
> so there is no value that will make all of the people happy all of
> the time.
Right.
> >Yeah, but still the same number of CPUs/FSB. I agree that
> >autotuning would be
> >best (the above is a crude example of that). Any suggestions?
>
> Can't you use the SLIT table to build a hierarchy of scheduler
> domains that matches the actual hardware configuration, instead
> of using compiled in constants or even boot-time command line
> parameters?
Maybe, but that will get complex very quickly, I think. Right now we have
three domains on ia64: the cpu domain; the node domain, which contains
several nodes' worth of CPUs; and the top level domain, which spans the whole
machine.
So there are two questions, how big should the node domain be and should it
span the whole machine (avoiding the need for a top level domain)? Obviously
the answer is pretty machine specific, and I'm not sure the SLIT helps us
much since its values are arbitrary distance values, not anything concrete.
Jesse
* RE: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (13 preceding siblings ...)
2004-11-01 19:02 ` Jesse Barnes
@ 2004-11-01 19:45 ` Luck, Tony
2004-11-01 22:39 ` Jesse Barnes
` (10 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Luck, Tony @ 2004-11-01 19:45 UTC (permalink / raw)
To: linux-ia64
>Maybe, but that will get complex very quickly I think. Right now we have
>three domains on ia64, the cpu domain, the node domain, which contains
>several nodes worth of CPUs, and the top level domain, which spans the whole
>machine.
>
>So there are two questions, how big should the node domain be and should it
>span the whole machine (avoiding the need for a top level domain)? Obviously
>the answer is pretty machine specific, and I'm not sure the SLIT helps us
>much since its values are arbitrary distance values, not anything concrete.
I'd pose a broader question ... are the manufacturers of big machines happy
with three domains? Perhaps it makes sense to allow for more levels that
match the physical parameters of the machine. E.g. the NEC box has 4 CPUs
per node, 4 nodes in a "super-node", and 2 "super-nodes" in a machine.
It would make sense to me if there was a scheduler domain level that would
handle balancing between the nodes in a super-node in addition to the top-level
domain to handle balancing between super-nodes.
While the values in SLIT can be somewhat abstract, they could be used to derive
the whole node, super-node, hyper-node, ultra-node, marketing-zeta-node
structure to build as many levels as make sense into the scheduler.
Or am I over-engineering?
-Tony
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (14 preceding siblings ...)
2004-11-01 19:45 ` Luck, Tony
@ 2004-11-01 22:39 ` Jesse Barnes
2004-11-02 0:12 ` Zou, Nanhai
` (9 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Jesse Barnes @ 2004-11-01 22:39 UTC (permalink / raw)
To: linux-ia64
On Monday, November 1, 2004 11:45 am, Luck, Tony wrote:
> I'd pose a broader question ... are the manufacturers of big machines happy
> with three domains? Perhaps it makes sense to allow for more levels that
> match the physical parameters of the machine. E.g. the NEC box has 4 cpus
> per-node, and 4 nodes in a "super-node", and 2 "super-nodes" in a machine.
> It would make sense to me if there was a scheduler domain level that would
> handle balancing between the nodes in a super-node in addition to the
> top-level domain to handle balancing between super-nodes.
>
> While the values in SLIT can be somewhat abstract, they could be used to
> derive the whole node, super-node, hyper-node, ultra-node,
> marketting-zeta-node structure to build as many levels as make sense into
> the scheduler.
>
> Or am I over-engineering?
Just guessing, but I think that might be overkill. We'd have to collect data
to know for sure though (which means someone has to implement it :).
Jesse
* RE: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (15 preceding siblings ...)
2004-11-01 22:39 ` Jesse Barnes
@ 2004-11-02 0:12 ` Zou, Nanhai
2004-11-02 7:36 ` Takayoshi Kochi
` (8 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Zou, Nanhai @ 2004-11-02 0:12 UTC (permalink / raw)
To: linux-ia64
I have a patch in the latest -mm tree which allows users to adjust
scheduler domain parameters via sysctl.
However, I have not included SPAN in the controllable parameters list.
Would including SPAN in the sysctl parameter list help with this issue?
Zou Nan hai
-----Original Message-----
From: linux-ia64-owner@vger.kernel.org
[mailto:linux-ia64-owner@vger.kernel.org] On Behalf Of Jesse Barnes
Sent: Tuesday, November 02, 2004 3:02 AM
To: Luck, Tony
Cc: Matthew Wilcox; Takayoshi Kochi; linux-ia64@vger.kernel.org;
hawkes@sgi.com; nickpiggin@yahoo.com.au
Subject: Re: [PATCH] top level scheduler domain for ia64
On Monday, November 1, 2004 10:53 am, Luck, Tony wrote:
> >> > 	if (numnodes <= SMALL_SYSTEM_THRESHOLD) {
> >> > 		SD_NODES_PER_DOMAIN = numnodes;
> >> > 		build_node_domains();	/* each one spans the system */
> >> > 	} else {
> >> > 		SD_NODES_PER_DOMAIN = 4;	/* or whatever */
>
> Ugh! The magic '4' is going to be different for different machines,
> so there is no value that will make all of the people happy all of
> the time.
Right.
> >Yeah, but still the same number of CPUs/FSB. I agree that
> >autotuning would be
> >best (the above is a crude example of that). Any suggestions?
>
> Can't you use the SLIT table to build a hierarchy of scheduler
> domains that matches the actual hardware configuration, instead
> of using compiled in constants or even boot-time command line
> parameters?
Maybe, but that will get complex very quickly I think. Right now we
have three domains on ia64, the cpu domain, the node domain, which
contains several nodes worth of CPUs, and the top level domain, which
spans the whole machine.
So there are two questions, how big should the node domain be and
should it span the whole machine (avoiding the need for a top level
domain)? Obviously the answer is pretty machine specific, and I'm not
sure the SLIT helps us much since its values are arbitrary distance
values, not anything concrete.
Jesse
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (16 preceding siblings ...)
2004-11-02 0:12 ` Zou, Nanhai
@ 2004-11-02 7:36 ` Takayoshi Kochi
2004-11-02 8:48 ` Gerald Pfeifer
` (7 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Takayoshi Kochi @ 2004-11-02 7:36 UTC (permalink / raw)
To: linux-ia64
Hi,
From: "Luck, Tony" <tony.luck@intel.com>
Subject: RE: [PATCH] top level scheduler domain for ia64
Date: Mon, 1 Nov 2004 11:45:44 -0800
> >Maybe, but that will get complex very quickly I think. Right now we have
> >three domains on ia64, the cpu domain, the node domain, which contains
> >several nodes worth of CPUs, and the top level domain, which spans the whole
> >machine.
> >
> >So there are two questions, how big should the node domain be and should it
> >span the whole machine (avoiding the need for a top level domain)? Obviously
> >the answer is pretty machine specific, and I'm not sure the SLIT helps us
> >much since its values are arbitrary distance values, not anything concrete.
Right, but if SLIT is implemented in real firmware (not a fake),
the numbers should have some meaning, describing how the machine
is configured.
For machines with a tree-like NUMA hierarchy, the SLIT table
can help enough to construct scheduling domains (like NEC's, and
I suppose Superdome is similar) and fits the sched-domain
scheduler well.
The problem is that there can be other forms of topology,
and it may be hard to derive the actual connections from SLIT alone.
Quoting a SLIT example from the ACPI spec 3.0...
domain [0] [1] [2] [3]
[0] 10 15 20 18
[1] 15 10 16 24
[2] 20 16 10 12
[3] 18 24 12 10
This example is not from a real machine, but it is possible
from the viewpoint of the ACPI spec ;)
For this case, it's very hard to say whether it's worth building
another three levels of scheduling domains above the cpu domain.
The userspace adjustment which Nanhai proposed does not help
with booting the 512-way Altix anyway (if we remove the node
domain from the generic kernel).
I think the short-term solution is to add the boot-time parameter
Jesse suggested to control the creation and span size of the
node domain, and then see what is necessary next.
---
Takayoshi Kochi
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (17 preceding siblings ...)
2004-11-02 7:36 ` Takayoshi Kochi
@ 2004-11-02 8:48 ` Gerald Pfeifer
2004-11-02 9:31 ` Takayoshi Kochi
` (6 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Gerald Pfeifer @ 2004-11-02 8:48 UTC (permalink / raw)
To: linux-ia64
[-- Attachment #1: Type: TEXT/PLAIN, Size: 1050 bytes --]
On Mon, 1 Nov 2004, Matthew Wilcox wrote:
> Doesn't sound like a great idea. HP's already shipping 128-way Superdome
> IA-64 systems, and they'll want to be set up rather differently from the
> Altix systems. I think this code needs to be autotuning so it doesn't
> need to be touched whenever a vendor releases a new configuration (I
> think I heard that NASA's Altixes had a custom CPU brick with twice the
> CPUs in it?)
Yup. Though that new brick is not specific to NASA's new Columbia system.
http://www.sgi.com/company_info/newsroom/press_releases/2004/october/altix_bx2.html
| SGI Doubles Density of High-End Altix Server
|
| New Altix 3700 Bx2 Ratchets Up Performance and Cost-Efficiency with
| Twice the Bandwidth, Half the Footprint of Flagship System
|
| MOUNTAIN VIEW, Calif., (October 28, 2004) [...] Silicon Graphics today
| unveiled a new version of its SGI® Altix® 3700 system that delivers
| twice the bandwidth and processor density of its flagship high-end
| model.
Gerald (not working for SGI ;-)
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (18 preceding siblings ...)
2004-11-02 8:48 ` Gerald Pfeifer
@ 2004-11-02 9:31 ` Takayoshi Kochi
2004-11-02 21:31 ` Luck, Tony
` (5 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Takayoshi Kochi @ 2004-11-02 9:31 UTC (permalink / raw)
To: linux-ia64
From: Takayoshi Kochi <t-kochi@bq.jp.nec.com>
Subject: Re: [PATCH] top level scheduler domain for ia64
Date: Tue, 02 Nov 2004 16:36:36 +0900 (JST)
> I think the short-term solution is to make the boot-time parameter
> like Jesse said to control the creation and span size of the
> node domain and see what is necessary next.
This patch adds a new kernel parameter "nodes_per_domain"
which specifies how many nodes are included in a node
scheduling domain.
Signed-off-by: Takayoshi Kochi <t-kochi@bq.jp.nec.com>
Index: 269-bk/arch/ia64/kernel/domain.c
===================================================================
--- 269-bk/arch/ia64/kernel/domain.c	(revision 49)
+++ 269-bk/arch/ia64/kernel/domain.c (working copy)
@@ -12,9 +12,9 @@
#include <linux/init.h>
#include <linux/topology.h>
-#define SD_NODES_PER_DOMAIN 6
+#ifdef CONFIG_NUMA
+static int sd_nodes_per_domain = 6;
-#ifdef CONFIG_NUMA
/**
* find_next_best_node - find the next node to include in a sched_domain
* @node: node whose sched_domain we're building
@@ -77,7 +77,7 @@
cpus_or(span, span, nodemask);
set_bit(node, used_nodes);
- for (i = 1; i < SD_NODES_PER_DOMAIN; i++) {
+ for (i = 1; i < sd_nodes_per_domain; i++) {
int next_node = find_next_best_node(node, used_nodes);
nodemask = node_to_cpumask(next_node);
cpus_or(span, span, nodemask);
@@ -158,7 +158,7 @@
#ifdef CONFIG_NUMA
if (num_online_cpus()
- > SD_NODES_PER_DOMAIN*cpus_weight(nodemask)) {
+ > sd_nodes_per_domain*cpus_weight(nodemask)) {
sd = &per_cpu(allnodes_domains, i);
*sd = SD_ALLNODES_INIT;
sd->span = cpu_default_map;
@@ -379,3 +379,18 @@
#endif
}
+#ifdef CONFIG_NUMA
+static int __init
+set_nodes_per_domain (char *str)
+{
+ int tmp;
+
+ get_option(&str, &tmp);
+ if (tmp > 1 && tmp <= MAX_NUMNODES)
+ sd_nodes_per_domain = tmp;
+
+ return 1;
+}
+
+__setup("nodes_per_domain=", set_nodes_per_domain);
+#endif
---
Takayoshi Kochi
* RE: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (19 preceding siblings ...)
2004-11-02 9:31 ` Takayoshi Kochi
@ 2004-11-02 21:31 ` Luck, Tony
2004-11-03 6:15 ` Takayoshi Kochi
` (4 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Luck, Tony @ 2004-11-02 21:31 UTC (permalink / raw)
To: linux-ia64
>> I think the short-term solution is to make the boot-time parameter
>> like Jesse said to control the creation and span size of the
>> node domain and see what is necessary next.
>
>This patch adds a new kernel parameter "nodes_per_domain"
>which specifies how many nodes are included in a node
>scheduling domain.
Can you add a short write-up for Documentation/kernel-parameters.txt
to explain the effect, and how a user should pick what value to use
(examples would be good ... if SGI want to throw in some
values for small/large Altix boxes, that would be good too).
Is '6' a reasonable default if the parameter is not set?
-Tony
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (20 preceding siblings ...)
2004-11-02 21:31 ` Luck, Tony
@ 2004-11-03 6:15 ` Takayoshi Kochi
2004-11-03 16:22 ` Jesse Barnes
` (3 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Takayoshi Kochi @ 2004-11-03 6:15 UTC (permalink / raw)
To: linux-ia64
Hi,
From: "Luck, Tony" <tony.luck@intel.com>
Subject: RE: [PATCH] top level scheduler domain for ia64
Date: Tue, 2 Nov 2004 13:31:16 -0800
> >> I think the short-term solution is to make the boot-time parameter
> >> like Jesse said to control the creation and span size of the
> >> node domain and see what is necessary next.
> >
> >This patch adds a new kernel parameter "nodes_per_domain"
> >which specifies how many nodes are included in a node
> >scheduling domain.
>
> Can you add a short write-up for Documentation/kernel-parameters.txt
> to explain the effect, and how a user should pick what value to use
> (examples would be good ... if SGI want to throw in some
> values for small/large Altix boxes, that would be good too).
The diff is attached.
Though I'm not sure how to pick the best number...
Any ideas from SGI?
> Is '6' a reasonable default if the parameter is not set?
Maybe for Altix. I guess the only user of this domain split
is Altix, so it's ok ;)
The patch below is a documentation for 'nodes_per_domain=' option.
Signed-off-by: Takayoshi Kochi <t-kochi@bq.jp.nec.com>
===== my-bk/Documentation/kernel-parameters.txt 1.57 vs edited =====
--- 1.57/Documentation/kernel-parameters.txt	2004-10-30 18:31:21 +09:00
+++ edited/my-bk/Documentation/kernel-parameters.txt 2004-11-03 15:07:20 +09:00
@@ -475,6 +475,15 @@
hugepages= [HW,IA-32,IA-64] Maximal number of HugeTLB pages.
+ nodes_per_domain=n
+ [IA-64] specifies the number of nodes in a scheduling
+ domain. If the number of nodes in a system is
+ large enough, the node scheduling domain (which
+ usually spans the whole system) is split into
+ domains of <n> nodes each. Default <n> is 6.
+ If you have a large NUMA machine, consider using
+ this option to reduce load balancing overhead.
+
noirqbalance [IA-32,SMP,KNL] Disable kernel irq balancing
i8042.direct [HW] Put keyboard port into non-translated mode
---
Takayoshi Kochi
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (21 preceding siblings ...)
2004-11-03 6:15 ` Takayoshi Kochi
@ 2004-11-03 16:22 ` Jesse Barnes
2004-11-03 16:57 ` Luck, Tony
` (2 subsequent siblings)
25 siblings, 0 replies; 27+ messages in thread
From: Jesse Barnes @ 2004-11-03 16:22 UTC (permalink / raw)
To: linux-ia64
On Tuesday, November 2, 2004 10:15 pm, Takayoshi Kochi wrote:
> Hi,
>
> From: "Luck, Tony" <tony.luck@intel.com>
> Subject: RE: [PATCH] top level scheduler domain for ia64
> Date: Tue, 2 Nov 2004 13:31:16 -0800
>
> > >> I think the short-term solution is to make the boot-time parameter
> > >> like Jesse said to control the creation and span size of the
> > >> node domain and see what is necessary next.
> > >
> > >This patch adds a new kernel parameter "nodes_per_domain"
> > >which specifies how many nodes are included in a node
> > >scheduling domain.
> >
> > Can you add a short write-up for Documentation/kernel-parameters.txt
> > to explain the effect, and how a user should pick what value to use
> > (examples would be good ... if SGI want to throw in some
> > values for small/large Altix boxes, that would be good too).
>
> The diff is attached.
> Though I'm not sure how to pick the best number...
> Any ideas from SGI?
>
> > Is '6' a reasonable default if the parameter is not set?
>
> Maybe for Altix. I guess the only user of this domain split
> is Altix, so it's ok ;)
Yep, looks good to me. Thanks.
Jesse
* RE: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (22 preceding siblings ...)
2004-11-03 16:22 ` Jesse Barnes
@ 2004-11-03 16:57 ` Luck, Tony
2004-11-03 17:04 ` Jesse Barnes
2004-11-08 17:31 ` John Hawkes
25 siblings, 0 replies; 27+ messages in thread
From: Luck, Tony @ 2004-11-03 16:57 UTC (permalink / raw)
To: linux-ia64
>> > Is '6' a reasonable default if the parameter is not set?
>>
>> Maybe for Altix. I guess the only user of this domain split
>> is Altix, so it's ok ;)
>
>Yep, looks good to me. Thanks.
So tell me who wants a value other than 6, and what is the value
that they want? I was hoping for some more specific advice in
the kernel-parameters.txt file. What's a good choice for a 16-node
machine? For a 128-node machine?
-Tony
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (23 preceding siblings ...)
2004-11-03 16:57 ` Luck, Tony
@ 2004-11-03 17:04 ` Jesse Barnes
2004-11-08 17:31 ` John Hawkes
25 siblings, 0 replies; 27+ messages in thread
From: Jesse Barnes @ 2004-11-03 17:04 UTC (permalink / raw)
To: linux-ia64
On Wednesday, November 3, 2004 8:57 am, Luck, Tony wrote:
> >> > Is '6' a reasonable default if the parameter is not set?
> >>
> >> Maybe for Altix. I guess the only user of this domain split
> >> is Altix, so it's ok ;)
> >
> >Yep, looks good to me. Thanks.
>
> So tell me who wants a value other than 6, and what is the value
> that they want? I was hoping for some more specific advice in
> the kernel-parameters.txt file. What's a good choice for a 16-node
> machine? For a 128-node machine?
I don't know. The original value I chose was 4, but I think Nick changed it
to 6 to get more overlap. Any ideas John?
Jesse
* Re: [PATCH] top level scheduler domain for ia64
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
` (24 preceding siblings ...)
2004-11-03 17:04 ` Jesse Barnes
@ 2004-11-08 17:31 ` John Hawkes
25 siblings, 0 replies; 27+ messages in thread
From: John Hawkes @ 2004-11-08 17:31 UTC (permalink / raw)
To: linux-ia64
From: "Jesse Barnes" <jbarnes@engr.sgi.com>
> > So tell me who wants a value other than 6, and what is the value
> > that they want? I was hoping for some more specific advice in
> > the kernel-parameters.txt file. What's a good choice for a 16-node
> > machine? For a 128-node machine?
>
> I don't know. The original value I chose was 4, but I think Nick changed it
> to 6 to get more overlap. Any ideas John?
I haven't been able to get enough reliably reproducible performance numbers
using the recent -mm kernels to allow me to experiment with different
nodes-per-domain values. Using 6 and having an occasionally load-balancing
umbrella covering all the CPUs in the system seems to behave reasonably for
64p and above. I just haven't tried other values, apart from once trying 32
without that all-CPU umbrella, and that performance was bad.
John Hawkes
end of thread, other threads:[~2004-11-08 17:31 UTC | newest]
Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2004-10-19 21:27 [PATCH] top level scheduler domain for ia64 Jesse Barnes
2004-10-20 0:02 ` Nick Piggin
2004-10-20 17:48 ` Luck, Tony
2004-10-20 18:02 ` Nick Piggin
2004-10-20 18:03 ` Jesse Barnes
2004-10-21 14:11 ` Xavier Bru
2004-10-21 14:34 ` Nick Piggin
2004-10-28 9:29 ` Takayoshi Kochi
2004-10-28 15:26 ` Jesse Barnes
2004-11-01 6:35 ` Takayoshi Kochi
2004-11-01 17:07 ` Jesse Barnes
2004-11-01 17:16 ` Matthew Wilcox
2004-11-01 18:36 ` Jesse Barnes
2004-11-01 18:53 ` Luck, Tony
2004-11-01 19:02 ` Jesse Barnes
2004-11-01 19:45 ` Luck, Tony
2004-11-01 22:39 ` Jesse Barnes
2004-11-02 0:12 ` Zou, Nanhai
2004-11-02 7:36 ` Takayoshi Kochi
2004-11-02 8:48 ` Gerald Pfeifer
2004-11-02 9:31 ` Takayoshi Kochi
2004-11-02 21:31 ` Luck, Tony
2004-11-03 6:15 ` Takayoshi Kochi
2004-11-03 16:22 ` Jesse Barnes
2004-11-03 16:57 ` Luck, Tony
2004-11-03 17:04 ` Jesse Barnes
2004-11-08 17:31 ` John Hawkes