* Increase default nodes shift to 10
@ 2006-08-22 2:11 Christoph Lameter
2006-08-22 11:10 ` Horms
` (6 more replies)
0 siblings, 7 replies; 8+ messages in thread
From: Christoph Lameter @ 2006-08-22 2:11 UTC (permalink / raw)
To: linux-ia64
We have systems with 1024 nodes and 1024 processors. Could we set
the default nodes shift to 10?
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Index: linux-2.6.18-rc4/arch/ia64/Kconfig
===================================================================
--- linux-2.6.18-rc4.orig/arch/ia64/Kconfig	2006-08-06 11:20:11.000000000 -0700
+++ linux-2.6.18-rc4/arch/ia64/Kconfig	2006-08-21 19:06:30.329846676 -0700
@@ -354,7 +354,7 @@ config NUMA
config NODES_SHIFT
int "Max num nodes shift(3-10)"
range 3 10
- default "8"
+ default "10"
depends on NEED_MULTIPLE_NODES
help
This option specifies the maximum number of nodes in your SSI system.
* Re: Increase default nodes shift to 10
2006-08-22 2:11 Increase default nodes shift to 10 Christoph Lameter
@ 2006-08-22 11:10 ` Horms
2006-08-22 16:58 ` Christoph Lameter
` (5 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Horms @ 2006-08-22 11:10 UTC (permalink / raw)
To: linux-ia64
On Mon, 21 Aug 2006 19:11:52 -0700 (PDT), Christoph Lameter wrote:
> We have systems with 1024 nodes and 1024 processors. Could we set
> the default nodes shift to 10?
It seems curious to set the default to the top of the available range;
presumably there aren't many of these systems around. Is there a
penalty for smaller systems?
--
Horms
H: http://www.vergenet.net/~horms/
W: http://www.valinux.co.jp/en/
* Re: Increase default nodes shift to 10
2006-08-22 2:11 Increase default nodes shift to 10 Christoph Lameter
2006-08-22 11:10 ` Horms
@ 2006-08-22 16:58 ` Christoph Lameter
2006-08-22 17:47 ` Christoph Lameter
` (4 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Christoph Lameter @ 2006-08-22 16:58 UTC (permalink / raw)
To: linux-ia64
On Tue, 22 Aug 2006, Horms wrote:
>
> On Mon, 21 Aug 2006 19:11:52 -0700 (PDT), Christoph Lameter wrote:
> > We have systems with 1024 nodes and 1024 processors. Could we set
> > the default nodes shift to 10?
>
> It seems curious to set the default to the top of the available range;
> presumably there aren't many of these systems around. Is there a
> penalty for smaller systems?
IA64 is 64-bit, so 8-bit processing has no advantage.
* Re: Increase default nodes shift to 10
2006-08-22 2:11 Increase default nodes shift to 10 Christoph Lameter
2006-08-22 11:10 ` Horms
2006-08-22 16:58 ` Christoph Lameter
@ 2006-08-22 17:47 ` Christoph Lameter
2006-08-22 19:06 ` Luck, Tony
` (3 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Christoph Lameter @ 2006-08-22 17:47 UTC (permalink / raw)
To: linux-ia64
Hmmm... I thought this would only describe the size of the bit
field in the page flags. But it seems that this also determines
MAX_NUMNODES, which sizes several kernel arrays, among them the per-node
arrays of the slab allocator. So this change would lead to more memory
use. However, without this change generic kernel configurations will not
work on all IA64 machines.
Also, why is this set to 8 if we only have 64 processors by default?
That means the default configuration would allow 256 nodes but only 64
processors.
Could we set these limits consistently to the largest IA64 configuration to
make sure that a generic IA64 kernel is able to run on all machines?
For that,
NODES_SHIFT needs to be 10
and
NR_CPUS needs to be 1024.
* RE: Increase default nodes shift to 10
2006-08-22 2:11 Increase default nodes shift to 10 Christoph Lameter
` (2 preceding siblings ...)
2006-08-22 17:47 ` Christoph Lameter
@ 2006-08-22 19:06 ` Luck, Tony
2006-08-22 19:31 ` Christoph Lameter
` (2 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Luck, Tony @ 2006-08-22 19:06 UTC (permalink / raw)
To: linux-ia64
> Could we set these limits consistently to the largest IA64 configuration to
> make sure that a generic IA64 kernel is able to run on all machines?
We could ... but Horms has a good point that we might not
want to do this if the cost is high. Can you estimate how
much memory will be allocated in these NR_CPUS and MAX_NUMNODES
sized arrays? If it is only[1] a couple of hundred Kbytes, then
it might be worth it (even little IA-64 systems have 1GB, so
100K is 0.01%).
-Tony
[1] "only" still sounds weird when talking about more memory
than my first UNIX computer (a pdp11/34) had in total to
support up to 30 users.
* RE: Increase default nodes shift to 10
2006-08-22 2:11 Increase default nodes shift to 10 Christoph Lameter
` (3 preceding siblings ...)
2006-08-22 19:06 ` Luck, Tony
@ 2006-08-22 19:31 ` Christoph Lameter
2006-08-23 2:38 ` Horms
2006-08-23 2:43 ` Christoph Lameter
6 siblings, 0 replies; 8+ messages in thread
From: Christoph Lameter @ 2006-08-22 19:31 UTC (permalink / raw)
To: linux-ia64
On Tue, 22 Aug 2006, Luck, Tony wrote:
> > Could we set these limits consistently to the largest IA64 configuration to
> > make sure that a generic IA64 kernel is able to run on all machines?
>
> We could ... but Horms has a good point that we might not
> want to do this if the cost is high. Can you estimate how
> much memory will be allocated in these NR_CPUS and MAX_NUMNODES
> sized arrays. If it is only[1] a couple of hundred Kbytes, then
> it might be worth it (even little IA-64 systems have 1GB, so
> 100K is 0.01%).
Worst case is probably the slab allocator.
The kmem_cache struct has pointer arrays per node and per cpu. If a
corresponding cpu / node is not up then no further structures will be
allocated.
Pointers to the per-cpu caches array: NR_CPUS * sizeof(void *) = 8k,
up from 64 * sizeof(void *) = 512 bytes.
Pointers to the per-node l3 array: MAX_NUMNODES * sizeof(void *) = 8k,
again up from the current 2k.
So the additional overhead per slab is around 13k. With about 40 slabs, that
results in ~520k of additional overhead.
* Re: Increase default nodes shift to 10
2006-08-22 2:11 Increase default nodes shift to 10 Christoph Lameter
` (4 preceding siblings ...)
2006-08-22 19:31 ` Christoph Lameter
@ 2006-08-23 2:38 ` Horms
2006-08-23 2:43 ` Christoph Lameter
6 siblings, 0 replies; 8+ messages in thread
From: Horms @ 2006-08-23 2:38 UTC (permalink / raw)
To: linux-ia64
On Tue, Aug 22, 2006 at 12:31:07PM -0700, Christoph Lameter wrote:
>
> On Tue, 22 Aug 2006, Luck, Tony wrote:
>
> > > Could we set these limits consistently to the largest IA64 configuration to
> > > make sure that a generic IA64 kernel is able to run on all machines?
> >
> > We could ... but Horms has a good point that we might not
> > want to do this if the cost is high. Can you estimate how
> > much memory will be allocated in these NR_CPUS and MAX_NUMNODES
> > sized arrays. If it is only[1] a couple of hundred Kbytes, then
> > it might be worth it (even little IA-64 systems have 1GB, so
> > 100K is 0.01%).
>
> Worst case is probably the slab allocator.
>
> The kmem_cache struct has pointer arrays per node and per cpu. If a
> corresponding cpu / node is not up then no further structures will be
> allocated.
>
> Pointers to the per-cpu caches array: NR_CPUS * sizeof(void *) = 8k,
> up from 64 * sizeof(void *) = 512 bytes.
>
> Pointers to the per-node l3 array: MAX_NUMNODES * sizeof(void *) = 8k,
> again up from the current 2k.
>
> So the additional overhead per slab is around 13k. With about 40 slabs, that
> results in ~520k of additional overhead.
I will throw an additional 2c in here and say that this really
doesn't seem that bad, and thus I think that increasing the
default to 10, as you suggested, is quite reasonable. In the unlikely
case where a smaller system really cares about this amount of RAM,
the value could always be reduced at configure time, which seems
altogether more agreeable than having larger systems fail to boot.
Actually, given the amount of memory involved, it raises the
question of whether it needs to be configurable at all.
--
Horms
H: http://www.vergenet.net/~horms/
W: http://www.valinux.co.jp/en/
* Re: Increase default nodes shift to 10
2006-08-22 2:11 Increase default nodes shift to 10 Christoph Lameter
` (5 preceding siblings ...)
2006-08-23 2:38 ` Horms
@ 2006-08-23 2:43 ` Christoph Lameter
6 siblings, 0 replies; 8+ messages in thread
From: Christoph Lameter @ 2006-08-23 2:43 UTC (permalink / raw)
To: linux-ia64
On Wed, 23 Aug 2006, Horms wrote:
> Actually, given the amount of memory involved, it raises the
> question of whether it needs to be configurable at all.
Thank you. Here is the patch that changes both the NODES_SHIFT
and the NR_CPUS so that even big machines can boot all nodes and
processors with a generic kernel.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Index: linux-2.6.18-rc4/arch/ia64/Kconfig
===================================================================
--- linux-2.6.18-rc4.orig/arch/ia64/Kconfig	2006-08-06 11:20:11.000000000 -0700
+++ linux-2.6.18-rc4/arch/ia64/Kconfig	2006-08-22 19:40:43.466355915 -0700
@@ -258,7 +258,7 @@ config NR_CPUS
int "Maximum number of CPUs (2-1024)"
range 2 1024
depends on SMP
- default "64"
+ default "1024"
help
You should set this to the number of CPUs in your system, but
keep in mind that a kernel compiled for, e.g., 2 CPUs will boot but
@@ -354,7 +354,7 @@ config NUMA
config NODES_SHIFT
int "Max num nodes shift(3-10)"
range 3 10
- default "8"
+ default "10"
depends on NEED_MULTIPLE_NODES
help
This option specifies the maximum number of nodes in your SSI system.
end of thread, other threads: [~2006-08-23 2:43 UTC | newest]
Thread overview: 8+ messages
2006-08-22 2:11 Increase default nodes shift to 10 Christoph Lameter
2006-08-22 11:10 ` Horms
2006-08-22 16:58 ` Christoph Lameter
2006-08-22 17:47 ` Christoph Lameter
2006-08-22 19:06 ` Luck, Tony
2006-08-22 19:31 ` Christoph Lameter
2006-08-23 2:38 ` Horms
2006-08-23 2:43 ` Christoph Lameter