linux-kernel.vger.kernel.org archive mirror
* [PATCH] sched/numa: Restore sched feature NUMA to its earlier avatar.
@ 2015-07-08 13:20 Srikar Dronamraju
  2015-07-08 13:50 ` Rik van Riel
  2015-07-08 13:56 ` Ingo Molnar
  0 siblings, 2 replies; 10+ messages in thread
From: Srikar Dronamraju @ 2015-07-08 13:20 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: linux-kernel, srikar, Rik van Riel, Mel Gorman

In commit:8a9e62a "sched/numa: Prefer NUMA hotness over cache hotness"
sched feature NUMA was always set to true. However this sched feature was
suppose to be enabled on NUMA boxes only thro set_numabalancing_state().

To get back to the above behaviour, bring back NUMA_FAVOUR_HIGHER feature.
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 587a2f6..aea72d5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5676,10 +5676,10 @@ static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
 	unsigned long src_faults, dst_faults;
 	int src_nid, dst_nid;
 
-	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
+	if (!sched_feat(NUMA) || !sched_feat(NUMA_FAVOUR_HIGHER))
 		return -1;
 
-	if (!sched_feat(NUMA))
+	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
 		return -1;
 
 	src_nid = cpu_to_node(env->src_cpu);
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 83a50e7..d4d4726 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -79,12 +79,13 @@ SCHED_FEAT(LB_MIN, false)
  * numa_balancing=
  */
 #ifdef CONFIG_NUMA_BALANCING
+SCHED_FEAT(NUMA,	false)
 
 /*
- * NUMA will favor moving tasks towards nodes where a higher number of
- * hinting faults are recorded during active load balancing. It will
- * resist moving tasks towards nodes where a lower number of hinting
- * faults have been recorded.
+ * NUMA_FAVOUR_HIGHER will favor moving tasks towards nodes where a
+ * higher number of hinting faults are recorded during active load
+ * balancing. It will resist moving tasks towards nodes where a lower
+ * number of hinting faults have been recorded.
  */
-SCHED_FEAT(NUMA,	true)
+SCHED_FEAT(NUMA_FAVOUR_HIGHER, true)
 #endif
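The reordered checks can be sketched as a userspace model (toy code, not the kernel implementation; the two feature names and the -1 "no preference" return value come from the diff above, while the bitmask representation and the simplified parameters are illustrative):

```c
#include <assert.h>

/* Toy model of the sched_feat() bitmask; not the kernel's implementation. */
enum {
	FEAT_NUMA		= 1u << 0,  /* meant to be set at boot on NUMA boxes */
	FEAT_NUMA_FAVOUR_HIGHER	= 1u << 1,  /* default-on tunable */
};

static int sched_feat(unsigned int features, unsigned int f)
{
	return !!(features & f);
}

/*
 * Mirrors the check order after this patch: the feature gates come
 * first, so a UMA box (NUMA feature off) bails out with -1 ("NUMA has
 * no opinion on this migration") before looking at per-task state.
 */
static int migrate_degrades_locality(unsigned int features,
				     int has_numa_faults, int sd_numa)
{
	if (!sched_feat(features, FEAT_NUMA) ||
	    !sched_feat(features, FEAT_NUMA_FAVOUR_HIGHER))
		return -1;
	if (!has_numa_faults || !sd_numa)
		return -1;
	return 0;	/* proceed to the src/dst fault comparison */
}
```

On a UMA box only FEAT_NUMA_FAVOUR_HIGHER ends up set, so the function always answers -1 and load balancing falls back to pure cache hotness, which is the behaviour the patch aims to restore.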



* Re: [PATCH] sched/numa: Restore sched feature NUMA to its earlier avatar.
  2015-07-08 13:20 [PATCH] sched/numa: Restore sched feature NUMA to its earlier avatar Srikar Dronamraju
@ 2015-07-08 13:50 ` Rik van Riel
  2015-07-08 13:56 ` Ingo Molnar
  1 sibling, 0 replies; 10+ messages in thread
From: Rik van Riel @ 2015-07-08 13:50 UTC (permalink / raw)
  To: Srikar Dronamraju, Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel, Mel Gorman

On 07/08/2015 09:20 AM, Srikar Dronamraju wrote:
> In commit:8a9e62a "sched/numa: Prefer NUMA hotness over cache hotness"
> sched feature NUMA was always set to true. However this sched feature was
> suppose to be enabled on NUMA boxes only thro set_numabalancing_state().
> 
> To get back to the above behaviour, bring back NUMA_FAVOUR_HIGHER feature.
> Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>

Reviewed-by: Rik van Riel <riel@redhat.com>

-- 
All rights reversed


* Re: [PATCH] sched/numa: Restore sched feature NUMA to its earlier avatar.
  2015-07-08 13:20 [PATCH] sched/numa: Restore sched feature NUMA to its earlier avatar Srikar Dronamraju
  2015-07-08 13:50 ` Rik van Riel
@ 2015-07-08 13:56 ` Ingo Molnar
  2015-07-08 15:26   ` Rik van Riel
  2015-07-08 16:06   ` Srikar Dronamraju
  1 sibling, 2 replies; 10+ messages in thread
From: Ingo Molnar @ 2015-07-08 13:56 UTC (permalink / raw)
  To: Srikar Dronamraju; +Cc: Peter Zijlstra, linux-kernel, Rik van Riel, Mel Gorman


* Srikar Dronamraju <srikar@linux.vnet.ibm.com> wrote:

> In commit:8a9e62a "sched/numa: Prefer NUMA hotness over cache hotness"
> sched feature NUMA was always set to true. However this sched feature was
> suppose to be enabled on NUMA boxes only thro set_numabalancing_state().
>
> To get back to the above behaviour, bring back NUMA_FAVOUR_HIGHER feature.

Three typos and a non-standard commit ID reference.

>  /*
> + * NUMA_FAVOUR_HIGHER will favor moving tasks towards nodes where a
> + * higher number of hinting faults are recorded during active load
> + * balancing. It will resist moving tasks towards nodes where a lower
> + * number of hinting faults have been recorded.
>   */
> -SCHED_FEAT(NUMA,	true)
> +SCHED_FEAT(NUMA_FAVOUR_HIGHER, true)
>  #endif
> 

So the comment spells 'favor' American, the constant you introduce is British 
spelling via 'FAVOUR'? Please use it consistently!

Also, this name is totally non-intuitive.

Make it something like NUMA_FAVOR_BUSY_NODES or so?

Also, I'm wondering how this can schedule in a stable fashion: if a non-busy node 
is not favored, how can we end up there to start building up hinting faults?

Thanks,

	Ingo


* Re: [PATCH] sched/numa: Restore sched feature NUMA to its earlier avatar.
  2015-07-08 13:56 ` Ingo Molnar
@ 2015-07-08 15:26   ` Rik van Riel
  2015-07-09  6:29     ` Ingo Molnar
  2015-07-08 16:06   ` Srikar Dronamraju
  1 sibling, 1 reply; 10+ messages in thread
From: Rik van Riel @ 2015-07-08 15:26 UTC (permalink / raw)
  To: Ingo Molnar, Srikar Dronamraju; +Cc: Peter Zijlstra, linux-kernel, Mel Gorman

On 07/08/2015 09:56 AM, Ingo Molnar wrote:
> 
> * Srikar Dronamraju <srikar@linux.vnet.ibm.com> wrote:
> 
>> In commit:8a9e62a "sched/numa: Prefer NUMA hotness over cache hotness"
>> sched feature NUMA was always set to true. However this sched feature was
>> suppose to be enabled on NUMA boxes only thro set_numabalancing_state().
>>
>> To get back to the above behaviour, bring back NUMA_FAVOUR_HIGHER feature.
> 
> Three typos and a non-standard commit ID reference.
> 
>>  /*
>> + * NUMA_FAVOUR_HIGHER will favor moving tasks towards nodes where a
>> + * higher number of hinting faults are recorded during active load
>> + * balancing. It will resist moving tasks towards nodes where a lower
>> + * number of hinting faults have been recorded.
>>   */
>> -SCHED_FEAT(NUMA,	true)
>> +SCHED_FEAT(NUMA_FAVOUR_HIGHER, true)
>>  #endif
>>
> 
> So the comment spells 'favor' American, the constant you introduce is British 
> spelling via 'FAVOUR'? Please use it consistently!
> 
> Also, this name is totally non-intuitive.
> 
> Make it something like NUMA_FAVOR_BUSY_NODES or so?

It is not about relocating tasks to busier nodes. The scheduler still
moves tasks from busier nodes to idler nodes.

This code makes the scheduler more prone to move tasks from nodes where
they have fewer NUMA faults, to nodes where they have more.

Not sure what a good name would be to describe that...

-- 
All rights reversed


* Re: [PATCH] sched/numa: Restore sched feature NUMA to its earlier avatar.
  2015-07-08 13:56 ` Ingo Molnar
  2015-07-08 15:26   ` Rik van Riel
@ 2015-07-08 16:06   ` Srikar Dronamraju
  1 sibling, 0 replies; 10+ messages in thread
From: Srikar Dronamraju @ 2015-07-08 16:06 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, linux-kernel, Rik van Riel, Mel Gorman

> > In commit:8a9e62a "sched/numa: Prefer NUMA hotness over cache hotness"
> > sched feature NUMA was always set to true. However this sched feature was
> > suppose to be enabled on NUMA boxes only thro set_numabalancing_state().
> >
> > To get back to the above behaviour, bring back NUMA_FAVOUR_HIGHER feature.
> 
> Three typos and a non-standard commit ID reference.

Sorry .. Will fix it.
> 
> >  /*
> > + * NUMA_FAVOUR_HIGHER will favor moving tasks towards nodes where a
> > + * higher number of hinting faults are recorded during active load
> > + * balancing. It will resist moving tasks towards nodes where a lower
> > + * number of hinting faults have been recorded.
> >   */
> > -SCHED_FEAT(NUMA,	true)
> > +SCHED_FEAT(NUMA_FAVOUR_HIGHER, true)
> >  #endif
> > 
> 
> So the comment spells 'favor' American, the constant you introduce is British 
> spelling via 'FAVOUR'? Please use it consistently!
> 
> Also, this name is totally non-intuitive.
> 
> Make it something like NUMA_FAVOR_BUSY_NODES or so?

Okay will modify as suggested.

> 
> Also, I'm wondering how this can schedule in a stable fashion: if a non-busy node 
> is not favored, how can we end up there to start building up hinting faults?

The NUMA feature is supposed to be enabled automatically on NUMA systems.
This feature is tied to starting the hinting faults.

However the other feature, NUMA_FAVOUR_HIGHER / NUMA_FAVOR_BUSY_NODES,
only controls whether we apply a NUMA bias during regular load
balancing. It wouldn't affect NUMA hinting faults or the task swaps
that we do based on NUMA faults. So its impact is very limited.

Would you recommend removing the feature?

-- 
Thanks and Regards
Srikar Dronamraju



* Re: [PATCH] sched/numa: Restore sched feature NUMA to its earlier avatar.
  2015-07-08 15:26   ` Rik van Riel
@ 2015-07-09  6:29     ` Ingo Molnar
  2015-07-09  6:57       ` Srikar Dronamraju
  0 siblings, 1 reply; 10+ messages in thread
From: Ingo Molnar @ 2015-07-09  6:29 UTC (permalink / raw)
  To: Rik van Riel; +Cc: Srikar Dronamraju, Peter Zijlstra, linux-kernel, Mel Gorman


* Rik van Riel <riel@redhat.com> wrote:

> On 07/08/2015 09:56 AM, Ingo Molnar wrote:
> > 
> > * Srikar Dronamraju <srikar@linux.vnet.ibm.com> wrote:
> > 
> >> In commit:8a9e62a "sched/numa: Prefer NUMA hotness over cache hotness"
> >> sched feature NUMA was always set to true. However this sched feature was
> >> suppose to be enabled on NUMA boxes only thro set_numabalancing_state().
> >>
> >> To get back to the above behaviour, bring back NUMA_FAVOUR_HIGHER feature.
> > 
> > Three typos and a non-standard commit ID reference.
> > 
> >>  /*
> >> + * NUMA_FAVOUR_HIGHER will favor moving tasks towards nodes where a
> >> + * higher number of hinting faults are recorded during active load
> >> + * balancing. It will resist moving tasks towards nodes where a lower
> >> + * number of hinting faults have been recorded.
> >>   */
> >> -SCHED_FEAT(NUMA,	true)
> >> +SCHED_FEAT(NUMA_FAVOUR_HIGHER, true)
> >>  #endif
> >>
> > 
> > So the comment spells 'favor' American, the constant you introduce is British 
> > spelling via 'FAVOUR'? Please use it consistently!
> > 
> > Also, this name is totally non-intuitive.
> > 
> > Make it something like NUMA_FAVOR_BUSY_NODES or so?
> 
> It is not about relocating tasks to busier nodes. The scheduler still
> moves tasks from busier nodes to idler nodes.
> 
> This code makes the scheduler more prone to move tasks from nodes where
> they have fewer NUMA faults, to nodes where they have more.
> 
> Not sure what a good name would be to describe that...

So I find the patch, the description and the comments in the code conflicting and 
confusing.

The patch does this:

@@ -5676,10 +5676,10 @@ static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
        unsigned long src_faults, dst_faults;
        int src_nid, dst_nid;

-       if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
+       if (!sched_feat(NUMA) || !sched_feat(NUMA_FAVOUR_HIGHER))
                return -1;

-       if (!sched_feat(NUMA))
+       if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
                return -1;

        src_nid = cpu_to_node(env->src_cpu);


while the default for 'NUMA' is 0, 'NUMA_FAVOUR_HIGHER' is 1.

Which in itself is confusing: WTH do we have a generic switch called 'NUMA' and 
then have it disabled?

Secondly, and more importantly, this patch is equivalent to adding this (for the 
default case):

	return -1;

i.e. it's in essence a revert of 8a9e62a!

And it provides no explanation whatsoever. Why did we do the original change 
(8a9e62a) which was well argued but apparently broken in some fashion, and why do 
we want to change it back now?

I.e. this patch sucks on multiple grounds, and 8a9e62a probably sucks as well. And 
you added a Reviewed-by while you should have noticed at least 2-3 flaws in the 
patch and its approach. Not good.

Thanks,

	Ingo


* Re: [PATCH] sched/numa: Restore sched feature NUMA to its earlier avatar.
  2015-07-09  6:29     ` Ingo Molnar
@ 2015-07-09  6:57       ` Srikar Dronamraju
  2015-07-09  8:01         ` Ingo Molnar
  0 siblings, 1 reply; 10+ messages in thread
From: Srikar Dronamraju @ 2015-07-09  6:57 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Rik van Riel, Peter Zijlstra, linux-kernel, Mel Gorman

> So I find the patch, the description and the comments in the code conflicting and 
> confusing.
> 
> The patch does this:
> 
> @@ -5676,10 +5676,10 @@ static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
>         unsigned long src_faults, dst_faults;
>         int src_nid, dst_nid;
> 
> -       if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
> +       if (!sched_feat(NUMA) || !sched_feat(NUMA_FAVOUR_HIGHER))
>                 return -1;
> 
> -       if (!sched_feat(NUMA))
> +       if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
>                 return -1;
> 
>         src_nid = cpu_to_node(env->src_cpu);
> 
> 
> while the default for 'NUMA' is 0, 'NUMA_FAVOUR_HIGHER' is 1.
> 
> Which in itself is confusing: WTH do we have a generic switch called 'NUMA' and 
> then have it disabled?

The NUMA feature gets enabled on multi-node boxes via this call chain:

start_kernel() -> numa_policy_init() -> check_numabalancing_enable() ->
 set_numabalancing_state() -> sched_feat_set("NUMA");
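That boot path can be modelled in a few lines of userspace C (a sketch based on the call chain above; the function names come from the chain, while the node-count parameter is a stand-in for the kernel's online-node count):

```c
#include <assert.h>

/* Sketch of the boot-time enablement path; not kernel code. */
enum { FEAT_NUMA = 1u << 0 };

static unsigned int sched_features;

static void set_numabalancing_state(int enabled)
{
	if (enabled)
		sched_features |= FEAT_NUMA;	/* sched_feat_set("NUMA") */
	else
		sched_features &= ~FEAT_NUMA;	/* sched_feat_set("NO_NUMA") */
}

/* Stand-in for check_numabalancing_enable(), called from numa_policy_init(). */
static void check_numabalancing_enable(int nr_online_nodes)
{
	set_numabalancing_state(nr_online_nodes > 1);
}

static int numa_feat_enabled(void)
{
	return !!(sched_features & FEAT_NUMA);
}
```

So the feature bit ends up set only when more than one node is online, which is why the SCHED_FEAT default of false is not the whole story on NUMA boxes.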

> 
> Secondly, and more importantly, this patch is equivalent to adding this (for the 
> default case):
> 
> 	return -1;

This is true only on a UMA box. On a NUMA box, the NUMA feature would be
enabled, so it wouldn't return -1 by default.

> 
> i.e. it's in essence a revert of 8a9e62a!
> 

While 8a9e62a did miss the part where we enable NUMA on NUMA
boxes, this commit doesn't completely revert 8a9e62a.

> And it provides no explanation whatsoever. Why did we do the original change 
> (8a9e62a) which was well argued but apparently broken in some fashion, and why do 
> we want to change it back now?

The original change, 8a9e62a, gives preference to NUMA hotness over
cache hotness. The rationale: for NUMA workloads, tasks are better
off converging on a node than being spread around based on cache
hotness. migrate_swap()/migrate_task_to() already move tasks without
considering cache hotness, so that convergence is achieved.

> 
> I.e. this patch sucks on multiple grounds, and 8a9e62a probably sucks as well. And 
> you added a Reviewed-by while you should have noticed at least 2-3 flaws in the 
> patch and its approach. Not good.
> 

-- 
Thanks and Regards
Srikar Dronamraju



* Re: [PATCH] sched/numa: Restore sched feature NUMA to its earlier avatar.
  2015-07-09  6:57       ` Srikar Dronamraju
@ 2015-07-09  8:01         ` Ingo Molnar
  2015-07-10 17:28           ` Srikar Dronamraju
  0 siblings, 1 reply; 10+ messages in thread
From: Ingo Molnar @ 2015-07-09  8:01 UTC (permalink / raw)
  To: Srikar Dronamraju; +Cc: Rik van Riel, Peter Zijlstra, linux-kernel, Mel Gorman


* Srikar Dronamraju <srikar@linux.vnet.ibm.com> wrote:

> > So I find the patch, the description and the comments in the code conflicting and 
> > confusing.
> > 
> > The patch does this:
> > 
> > @@ -5676,10 +5676,10 @@ static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
> >         unsigned long src_faults, dst_faults;
> >         int src_nid, dst_nid;
> > 
> > -       if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
> > +       if (!sched_feat(NUMA) || !sched_feat(NUMA_FAVOUR_HIGHER))
> >                 return -1;
> > 
> > -       if (!sched_feat(NUMA))
> > +       if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
> >                 return -1;
> > 
> >         src_nid = cpu_to_node(env->src_cpu);
> > 
> > 
> > while the default for 'NUMA' is 0, 'NUMA_FAVOUR_HIGHER' is 1.
> > 
> > Which in itself is confusing: WTH do we have a generic switch called 'NUMA' and 
> > then have it disabled?
> 
> NUMA feature gets enabled on multi-node boxes because of
> 
> start_kernel() -> numa_policy_init() -> check_numabalancing_enable() ->
>  set_numabalancing_state() -> sched_feat_set("NUMA");

Ugh, that is nonsensical!

If CONFIG_SCHED_DEBUG is disabled then sched_features is a constant value:

  # define const_debug const

  ...

  extern const_debug unsigned int sysctl_sched_features;

sched_features are _only_ meant for debugging. They turn into an unchangeable set 
of features when SCHED_DEBUG is disabled - and that is very much by design.

The whole set_numabalancing_state() muck needs to be fixed.
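Ingo's point can be compressed into a sketch (a model of the mechanism he quotes above, not the actual kernel sources): with CONFIG_SCHED_DEBUG unset, the feature word is const and lands in read-only data, so nothing like sched_feat_set() can flip bits in it at run time.

```c
#include <assert.h>

/* Sketch of the const_debug mechanism quoted above; not kernel code. */
#ifdef CONFIG_SCHED_DEBUG
# define const_debug		/* mutable: debugfs can rewrite it */
#else
# define const_debug const	/* frozen at compile time */
#endif

enum { FEAT_NUMA = 1u << 0 };

/*
 * In a !CONFIG_SCHED_DEBUG build this is a const object: any
 * sched_feat_set()-style store to it would be a compile error, so the
 * NUMA bit keeps its build-time value forever.
 */
static const_debug unsigned int sysctl_sched_features = 0u;

static int numa_feature_enabled(void)
{
	return !!(sysctl_sched_features & FEAT_NUMA);
}
```

Compiled without CONFIG_SCHED_DEBUG, the NUMA bit can never be switched on, which is why relying on sched_feat_set("NUMA") from the boot path is broken by design.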

Thanks,

	Ingo


* Re: [PATCH] sched/numa: Restore sched feature NUMA to its earlier avatar.
  2015-07-09  8:01         ` Ingo Molnar
@ 2015-07-10 17:28           ` Srikar Dronamraju
  2015-07-11  8:53             ` Ingo Molnar
  0 siblings, 1 reply; 10+ messages in thread
From: Srikar Dronamraju @ 2015-07-10 17:28 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Rik van Riel, Peter Zijlstra, linux-kernel, Mel Gorman

> > > Which in itself is confusing: WTH do we have a generic switch called 'NUMA' and 
> > > then have it disabled?
> > 
> > NUMA feature gets enabled on multi-node boxes because of
> > 
> > start_kernel() -> numa_policy_init() -> check_numabalancing_enable() ->
> >  set_numabalancing_state() -> sched_feat_set("NUMA");
> 
> Ugh, that is nonsensical!
> 
> If CONFIG_SCHED_DEBUG is disabled then sched_features is a constant value:
> 
>   # define const_debug const
> 
>   ...
> 
>   extern const_debug unsigned int sysctl_sched_features;
> 
> sched_features are _only_ meant for debugging. They turn into an unchangeable set 
> of features when SCHED_DEBUG is disabled - and that is very much by design.
> 
> The whole set_numabalancing_state() muck needs to be fixed.

Would something like the below suffice? If yes, I can send out a formal
patch for it. Here we are making the numabalancing_enabled variable
common, i.e. defined for both CONFIG_SCHED_DEBUG and
!CONFIG_SCHED_DEBUG builds.

Also removing sched_feat_numa because it is no longer used.
numabalancing_enabled is already used similarly in task_tick_fair()
and task_numa_fault().

-------------->8------------------------------------------------------8<--------------

 kernel/sched/core.c  | 5 +++--
 kernel/sched/fair.c  | 2 +-
 kernel/sched/sched.h | 6 ------
 3 files changed, 4 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 78b4bad10..69ccbda4 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2059,17 +2059,18 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 }
 
 #ifdef CONFIG_NUMA_BALANCING
+__read_mostly bool numabalancing_enabled;
+
 #ifdef CONFIG_SCHED_DEBUG
 void set_numabalancing_state(bool enabled)
 {
+	numabalancing_enabled = enabled;
 	if (enabled)
 		sched_feat_set("NUMA");
 	else
 		sched_feat_set("NO_NUMA");
 }
 #else
-__read_mostly bool numabalancing_enabled;
-
 void set_numabalancing_state(bool enabled)
 {
 	numabalancing_enabled = enabled;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 587a2f6..1b86455 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5679,7 +5679,7 @@ static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
 	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
 		return -1;
 
-	if (!sched_feat(NUMA))
+	if (!numabalancing_enabled)
 		return -1;
 
 	src_nid = cpu_to_node(env->src_cpu);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 84d4879..d460fe3 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1014,14 +1014,8 @@ extern struct static_key sched_feat_keys[__SCHED_FEAT_NR];
 #endif /* SCHED_DEBUG && HAVE_JUMP_LABEL */
 
 #ifdef CONFIG_NUMA_BALANCING
-#define sched_feat_numa(x) sched_feat(x)
-#ifdef CONFIG_SCHED_DEBUG
-#define numabalancing_enabled sched_feat_numa(NUMA)
-#else
 extern bool numabalancing_enabled;
-#endif /* CONFIG_SCHED_DEBUG */
 #else
-#define sched_feat_numa(x) (0)
 #define numabalancing_enabled (0)
 #endif /* CONFIG_NUMA_BALANCING */
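The effect of the proposed patch can be sketched as a userspace model (numabalancing_enabled and set_numabalancing_state() are from the diff above; everything else is simplified for illustration):

```c
#include <assert.h>

/*
 * Model of the arrangement proposed above: the bool is now maintained
 * on both SCHED_DEBUG and !SCHED_DEBUG builds; not kernel code.
 */
static int numabalancing_enabled;	/* __read_mostly bool in the patch */

static void set_numabalancing_state(int enabled)
{
	numabalancing_enabled = enabled;
	/*
	 * On CONFIG_SCHED_DEBUG builds the NUMA feature bit is also
	 * flipped here, purely to keep the debugfs view in sync.
	 */
}

/* The gate migrate_degrades_locality() now uses instead of sched_feat(NUMA). */
static int numa_locality_gate(void)
{
	return numabalancing_enabled ? 0 : -1;	/* -1: no NUMA preference */
}
```

With this shape the fast-path check no longer depends on the debug-only feature mask, so it behaves identically whether or not SCHED_DEBUG is built in.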
 


-- 
Thanks and Regards
Srikar Dronamraju



* Re: [PATCH] sched/numa: Restore sched feature NUMA to its earlier avatar.
  2015-07-10 17:28           ` Srikar Dronamraju
@ 2015-07-11  8:53             ` Ingo Molnar
  0 siblings, 0 replies; 10+ messages in thread
From: Ingo Molnar @ 2015-07-11  8:53 UTC (permalink / raw)
  To: Srikar Dronamraju; +Cc: Rik van Riel, Peter Zijlstra, linux-kernel, Mel Gorman


* Srikar Dronamraju <srikar@linux.vnet.ibm.com> wrote:

> > > > Which in itself is confusing: WTH do we have a generic switch called 'NUMA' and 
> > > > then have it disabled?
> > > 
> > > NUMA feature gets enabled on multi-node boxes because of
> > > 
> > > start_kernel() -> numa_policy_init() -> check_numabalancing_enable() ->
> > >  set_numabalancing_state() -> sched_feat_set("NUMA");
> > 
> > Ugh, that is nonsensical!
> > 
> > If CONFIG_SCHED_DEBUG is disabled then sched_features is a constant value:
> > 
> >   # define const_debug const
> > 
> >   ...
> > 
> >   extern const_debug unsigned int sysctl_sched_features;
> > 
> > sched_features are _only_ meant for debugging. They turn into an unchangeable set 
> > of features when SCHED_DEBUG is disabled - and that is very much by design.
> > 
> > The whole set_numabalancing_state() muck needs to be fixed.
> 
> Would something like the below suffice. If yes I can send out a formal
> patch for the same. Here we are moving numabalancing_enabled variable to
> common i.e for both CONFIG_SCHED_DEBUG and !CONFIG_SCHED_DEBUG.
> 
> Also removing sched_feat_numa because its no more getting used.
> numabalancing_enabled is already being used similarly in task_tick_fair
> and task_numa_fault.
> 
> -------------->8------------------------------------------------------8<--------------
> 
>  kernel/sched/core.c  | 5 +++--
>  kernel/sched/fair.c  | 2 +-
>  kernel/sched/sched.h | 6 ------
>  3 files changed, 4 insertions(+), 9 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 78b4bad10..69ccbda4 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2059,17 +2059,18 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
>  }
>  
>  #ifdef CONFIG_NUMA_BALANCING
> +__read_mostly bool numabalancing_enabled;

s/numabalancing_enabled/sched_numa_balancing

Other than that this would be OK.

Thanks,

	Ingo

