Hello Nick and all,

I remember this was discussed some months ago, but on 2.6.10 SD_NODES_PER_DOMAIN still seems to be statically defined to 6. This is not what is expected on Bull ia64 platforms, which are built from modules of 4 bricks of 4 CPUs each. Using the CPU hot-plug mechanism to redefine the sched-domains dynamically looks heavy-handed (please correct me if I am wrong).

Below is a trivial patch that allows SD_NODES_PER_DOMAIN to be set at configuration time or at boot time. The boot-time parameter should be helpful: on a 32-way machine built from 2 modules of 4 bricks of 4 CPUs each, it allows building either 1 (SD_NODES_PER_DOMAIN=8) or 2 (SD_NODES_PER_DOMAIN=4) NUMA sched-domain levels.

Thanks in advance for your comments.

diff --exclude-from /home15/xb/proc/patch.exclude -Nurp /tmp/linux-2.6.10/arch/ia64/Kconfig linux-2.6.10/arch/ia64/Kconfig
--- /tmp/linux-2.6.10/arch/ia64/Kconfig	2004-12-24 22:35:29.000000000 +0100
+++ linux-2.6.10/arch/ia64/Kconfig	2005-02-15 17:07:46.741070673 +0100
@@ -168,6 +168,17 @@ config NUMA
 	  Access).  This option is for configuring high-end multiprocessor
 	  server systems.  If in doubt, say N.
 
+config SD_NODES_PER_DOMAIN
+	int "Number of nodes per base sched_domains"
+	default "4" if IA64_DIG
+	default "6"
+	help
+	  Number of nodes per base sched_domains.
+	  Should be 6 for SGI platforms.
+	  Should be 4 for DIG platforms.
+	  This value can be provided at boot time using the
+	  sd_nodes_per_domain boot parameter.
+
 config VIRTUAL_MEM_MAP
 	bool "Virtual mem map"
 	default y if !IA64_HP_SIM
diff --exclude-from /home15/xb/proc/patch.exclude -Nurp /tmp/linux-2.6.10/arch/ia64/kernel/domain.c linux-2.6.10/arch/ia64/kernel/domain.c
--- /tmp/linux-2.6.10/arch/ia64/kernel/domain.c	2004-12-24 22:35:40.000000000 +0100
+++ linux-2.6.10/arch/ia64/kernel/domain.c	2005-02-15 15:04:08.964794354 +0100
@@ -13,7 +13,14 @@
 #include
 #include
 
-#define SD_NODES_PER_DOMAIN 6
+int sd_nodes_per_domain = CONFIG_SD_NODES_PER_DOMAIN;
+
+static int __init set_sd_nodes_per_domain(char *str)
+{
+	get_option(&str, &sd_nodes_per_domain);
+	return 1;
+}
+__setup("sd_nodes_per_domain=", set_sd_nodes_per_domain);
 
 #ifdef CONFIG_NUMA
 /**
@@ -78,7 +85,7 @@ static cpumask_t __devinit sched_domain_
 	cpus_or(span, span, nodemask);
 	set_bit(node, used_nodes);
 
-	for (i = 1; i < SD_NODES_PER_DOMAIN; i++) {
+	for (i = 1; i < sd_nodes_per_domain; i++) {
 		int next_node = find_next_best_node(node, used_nodes);
 		nodemask = node_to_cpumask(next_node);
 		cpus_or(span, span, nodemask);
@@ -159,7 +166,7 @@ void __devinit arch_init_sched_domains(v
 
 #ifdef CONFIG_NUMA
 		if (num_online_cpus()
-				> SD_NODES_PER_DOMAIN*cpus_weight(nodemask)) {
+				> sd_nodes_per_domain*cpus_weight(nodemask)) {
 			sd = &per_cpu(allnodes_domains, i);
 			*sd = SD_ALLNODES_INIT;
 			sd->span = cpu_default_map;
--
Best regards.