public inbox for linux-kernel@vger.kernel.org
* [PATCH] sched: Don't try allocating memory from offline nodes
@ 2012-05-29 17:30 Luck, Tony
  2012-05-30  3:21 ` David Rientjes
  0 siblings, 1 reply; 4+ messages in thread
From: Luck, Tony @ 2012-05-29 17:30 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel, Peter Zijlstra

From: Peter Zijlstra <a.p.zijlstra@chello.nl>

Allocators don't appreciate it when you try to allocate memory from
offline nodes.

Reported-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---

This patch has been sitting around since Friday.

 kernel/sched/core.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

Index: linux-2.6/kernel/sched/core.c
===================================================================
--- linux-2.6.orig/kernel/sched/core.c
+++ linux-2.6/kernel/sched/core.c
@@ -6449,7 +6449,7 @@ static void sched_init_numa(void)
 			return;
 
 		for (j = 0; j < nr_node_ids; j++) {
-			struct cpumask *mask = kzalloc_node(cpumask_size(), GFP_KERNEL, j);
+			struct cpumask *mask = kzalloc(cpumask_size(), GFP_KERNEL);
 			if (!mask)
 				return;
 



^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH] sched: Don't try allocating memory from offline nodes
  2012-05-29 17:30 [PATCH] sched: Don't try allocating memory from offline nodes Luck, Tony
@ 2012-05-30  3:21 ` David Rientjes
  2012-05-30  9:44   ` Peter Zijlstra
  0 siblings, 1 reply; 4+ messages in thread
From: David Rientjes @ 2012-05-30  3:21 UTC (permalink / raw)
  To: Luck, Tony; +Cc: Linus Torvalds, linux-kernel, Peter Zijlstra

On Tue, 29 May 2012, Luck, Tony wrote:

> Index: linux-2.6/kernel/sched/core.c
> ===================================================================
> --- linux-2.6.orig/kernel/sched/core.c
> +++ linux-2.6/kernel/sched/core.c
> @@ -6449,7 +6449,7 @@ static void sched_init_numa(void)
>  			return;
>  
>  		for (j = 0; j < nr_node_ids; j++) {
> -			struct cpumask *mask = kzalloc_node(cpumask_size(), GFP_KERNEL, j);
> +			struct cpumask *mask = kzalloc(cpumask_size(), GFP_KERNEL);
>  			if (!mask)
>  				return;
>  

It's definitely better if we can allocate on the node, though, so perhaps 
do the same thing that I did in 
http://marc.info/?l=linux-kernel&m=133778739503111 by doing
kzalloc_node(..., node_online(j) ? j : NUMA_NO_NODE)?


* Re: [PATCH] sched: Don't try allocating memory from offline nodes
  2012-05-30  3:21 ` David Rientjes
@ 2012-05-30  9:44   ` Peter Zijlstra
  2012-05-30 21:39     ` David Rientjes
  0 siblings, 1 reply; 4+ messages in thread
From: Peter Zijlstra @ 2012-05-30  9:44 UTC (permalink / raw)
  To: David Rientjes; +Cc: Luck, Tony, Linus Torvalds, linux-kernel

On Tue, 2012-05-29 at 20:21 -0700, David Rientjes wrote:
> On Tue, 29 May 2012, Luck, Tony wrote:
> 
> > Index: linux-2.6/kernel/sched/core.c
> > ===================================================================
> > --- linux-2.6.orig/kernel/sched/core.c
> > +++ linux-2.6/kernel/sched/core.c
> > @@ -6449,7 +6449,7 @@ static void sched_init_numa(void)
> >  			return;
> >  
> >  		for (j = 0; j < nr_node_ids; j++) {
> > -			struct cpumask *mask = kzalloc_node(cpumask_size(), GFP_KERNEL, j);
> > +			struct cpumask *mask = kzalloc(cpumask_size(), GFP_KERNEL);
> >  			if (!mask)
> >  				return;
> >  
> 
> It's definitely better if we can allocate on the node, though, so perhaps 
> do the same thing that I did in 
> http://marc.info/?l=linux-kernel&m=133778739503111 by doing
> kzalloc_node(..., node_online(j) ? j : NUMA_NO_NODE)?

This data isn't used overly much, only when rebuilding the sched
domains, so it's not performance critical. I only used per-node
allocations because it seemed the right thing to do. If it doesn't
work, I wouldn't bother with making it more complex.


* Re: [PATCH] sched: Don't try allocating memory from offline nodes
  2012-05-30  9:44   ` Peter Zijlstra
@ 2012-05-30 21:39     ` David Rientjes
  0 siblings, 0 replies; 4+ messages in thread
From: David Rientjes @ 2012-05-30 21:39 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Luck, Tony, Linus Torvalds, linux-kernel

On Wed, 30 May 2012, Peter Zijlstra wrote:

> On Tue, 2012-05-29 at 20:21 -0700, David Rientjes wrote:
> > On Tue, 29 May 2012, Luck, Tony wrote:
> > 
> > > Index: linux-2.6/kernel/sched/core.c
> > > ===================================================================
> > > --- linux-2.6.orig/kernel/sched/core.c
> > > +++ linux-2.6/kernel/sched/core.c
> > > @@ -6449,7 +6449,7 @@ static void sched_init_numa(void)
> > >  			return;
> > >  
> > >  		for (j = 0; j < nr_node_ids; j++) {
> > > -			struct cpumask *mask = kzalloc_node(cpumask_size(), GFP_KERNEL, j);
> > > +			struct cpumask *mask = kzalloc(cpumask_size(), GFP_KERNEL);
> > >  			if (!mask)
> > >  				return;
> > >  
> > 
> > It's definitely better if we can allocate on the node, though, so perhaps 
> > do the same thing that I did in 
> > http://marc.info/?l=linux-kernel&m=133778739503111 by doing
> > kzalloc_node(..., node_online(j) ? j : NUMA_NO_NODE)?
> 
> This data isn't used overly much, only when rebuilding the sched
> domains, so it's not performance critical. I only used per-node
> allocations because it seemed the right thing to do. If it doesn't
> work, I wouldn't bother with making it more complex.
> 

Ok, if you don't think these cpumasks need locality for performance, then

Acked-by: David Rientjes <rientjes@google.com>


end of thread, other threads:[~2012-05-30 21:39 UTC | newest]

Thread overview: 4+ messages
-- links below jump to the message on this page --
2012-05-29 17:30 [PATCH] sched: Don't try allocating memory from offline nodes Luck, Tony
2012-05-30  3:21 ` David Rientjes
2012-05-30  9:44   ` Peter Zijlstra
2012-05-30 21:39     ` David Rientjes

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox