public inbox for linux-kernel@vger.kernel.org
* [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest()
@ 2024-02-06  4:39 David Vernet
  2024-02-06  4:39 ` [PATCH v3 1/3] sched/fair: Remove unnecessary goto in update_sd_lb_stats() David Vernet
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: David Vernet @ 2024-02-06  4:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	bsegall, bristot, vschneid, kernel-team

update_sd_pick_busiest() (and its caller) has some room for small
optimizations, and some improvements in readability.

- In update_sd_lb_stats(), we're using a goto to skip a single if check.
  Let's remove the goto and just add another condition to the if.

- In update_sd_pick_busiest(), only update a group_misfit_task group to
  be the busiest if it has strictly more load than the current busiest
  group, rather than >= the load.

- When comparing the current struct sched_group with the yet-busiest
  domain in update_sd_pick_busiest(), if the two groups have the same
  group type, we're currently doing a bit of unnecessary work for any
  group >= group_misfit_task. We're comparing the two groups, and then
  returning only if false (the group in question is not the busiest).
  Otherwise, we break, do an extra unnecessary conditional check that's
  vacuously false for any group type > group_fully_busy, and then always
  return true. This patch series has us instead simply return directly
  in the switch statement, saving some bytes in load_balance() (see the
  sketch below).
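
For reference, the last point boils down to the control-flow change
sketched below. This is a minimal, self-contained illustration with toy
types and a made-up trailing check standing in for the real one; it is
not the kernel code.

enum group_type { group_fully_busy, group_overloaded };

/*
 * Before: the group_overloaded case breaks out of the switch and then
 * runs a trailing check whose group-type clause can never hold for it.
 */
static int pick_busiest_before(enum group_type type, int asym_capacity,
			       unsigned long load, unsigned long busiest_load)
{
	switch (type) {
	case group_overloaded:
		if (load <= busiest_load)
			return 0;
		break;		/* falls into the check below */
	case group_fully_busy:
		break;
	}

	/* Vacuously false whenever group_overloaded reaches it. */
	if (asym_capacity && type <= group_fully_busy)
		return 0;

	return 1;
}

/* After: the case returns its comparison directly and skips the check. */
static int pick_busiest_after(enum group_type type, int asym_capacity,
			      unsigned long load, unsigned long busiest_load)
{
	switch (type) {
	case group_overloaded:
		return load > busiest_load;
	case group_fully_busy:
		break;
	}

	if (asym_capacity && type <= group_fully_busy)
		return 0;

	return 1;
}

Once the case returns directly, the compiler can drop that dead
comparison on the group_overloaded path, which is presumably where the
load_balance() size reduction comes from.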

---

v1: https://lore.kernel.org/lkml/20240202070216.2238392-1-void@manifault.com/
v2: https://lore.kernel.org/all/20240204044618.46100-1-void@manifault.com/

v2 -> v3:
- Add Vincent's Reviewed-by tags
- Fix stale commit summary sentence (Vincent)

v1 -> v2 changes:

- Split the patch series into separate patches (Valentin)
- Update the group_misfit_task busiest check to use strict inequality

David Vernet (3):
  sched/fair: Remove unnecessary goto in update_sd_lb_stats()
  sched/fair: Do strict inequality check for busiest misfit task group
  sched/fair: Simplify some logic in update_sd_pick_busiest()

 kernel/sched/fair.c | 19 ++++---------------
 1 file changed, 4 insertions(+), 15 deletions(-)

-- 
2.43.0


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v3 1/3] sched/fair: Remove unnecessary goto in update_sd_lb_stats()
  2024-02-06  4:39 [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest() David Vernet
@ 2024-02-06  4:39 ` David Vernet
  2024-02-28 22:00   ` [tip: sched/core] " tip-bot2 for David Vernet
  2024-02-06  4:39 ` [PATCH v3 2/3] sched/fair: Do strict inequality check for busiest misfit task group David Vernet
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: David Vernet @ 2024-02-06  4:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	bsegall, bristot, vschneid, kernel-team

In update_sd_lb_stats(), when we're iterating over the sched groups that
comprise a sched domain, we're skipping the call to
update_sd_pick_busiest() for the sched group that contains the local /
destination CPU. We use a goto to skip the call, but we could just as
easily check !local_group, as there's no other logic that we need to
skip with the goto. Let's remove the goto, and check for !local_group in
the if statement instead.

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Signed-off-by: David Vernet <void@manifault.com>
---
 kernel/sched/fair.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b803030c3a03..e7519ea434b1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10578,16 +10578,11 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 
 		update_sg_lb_stats(env, sds, sg, sgs, &sg_status);
 
-		if (local_group)
-			goto next_group;
-
-
-		if (update_sd_pick_busiest(env, sds, sg, sgs)) {
+		if (!local_group && update_sd_pick_busiest(env, sds, sg, sgs)) {
 			sds->busiest = sg;
 			sds->busiest_stat = *sgs;
 		}
 
-next_group:
 		/* Now, start updating sd_lb_stats */
 		sds->total_load += sgs->group_load;
 		sds->total_capacity += sgs->group_capacity;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v3 2/3] sched/fair: Do strict inequality check for busiest misfit task group
  2024-02-06  4:39 [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest() David Vernet
  2024-02-06  4:39 ` [PATCH v3 1/3] sched/fair: Remove unnecessary goto in update_sd_lb_stats() David Vernet
@ 2024-02-06  4:39 ` David Vernet
  2024-02-06 13:17   ` Valentin Schneider
  2024-02-28 22:00   ` [tip: sched/core] " tip-bot2 for David Vernet
  2024-02-06  4:39 ` [PATCH v3 3/3] sched/fair: Simplify some logic in update_sd_pick_busiest() David Vernet
  2024-02-16 19:44 ` [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest() David Vernet
  3 siblings, 2 replies; 10+ messages in thread
From: David Vernet @ 2024-02-06  4:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	bsegall, bristot, vschneid, kernel-team

In update_sd_pick_busiest(), when comparing two sched groups that are
both of type group_misfit_task, we currently consider the new group as
busier than the current busiest group even if the new group has the
same misfit task load as the current busiest group. We can avoid some
unnecessary writes if we instead only consider the newest group to be
the busiest if it has a higher load than the current busiest. This
matches the behavior of other group types where we compare load, such as
two groups that are both overloaded.

Let's update the group_misfit_task type comparison to also only update
the busiest group in the event of strict inequality.
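
To make the saved work concrete, here is a simplified sketch of the
caller-side pattern (toy struct and function names, not the kernel
code): with a strict '>', a tie returns false and the caller skips
re-copying the group stats.

struct sg_stats {
	unsigned long group_misfit_task_load;
	/* ...the real struct sg_lb_stats carries many more fields... */
};

/* Strict '>': an equal misfit load keeps the current busiest group. */
static int pick_busiest(const struct sg_stats *sgs,
			const struct sg_stats *busiest)
{
	return sgs->group_misfit_task_load > busiest->group_misfit_task_load;
}

static void update_busiest(struct sg_stats *busiest_stat,
			   const struct sg_stats *sgs)
{
	if (pick_busiest(sgs, busiest_stat))
		*busiest_stat = *sgs;	/* struct copy skipped on a tie */
}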

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: David Vernet <void@manifault.com>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e7519ea434b1..76d03106040d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10028,7 +10028,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 		 * If we have more than one misfit sg go with the biggest
 		 * misfit.
 		 */
-		if (sgs->group_misfit_task_load < busiest->group_misfit_task_load)
+		if (sgs->group_misfit_task_load <= busiest->group_misfit_task_load)
 			return false;
 		break;
 
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v3 3/3] sched/fair: Simplify some logic in update_sd_pick_busiest()
  2024-02-06  4:39 [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest() David Vernet
  2024-02-06  4:39 ` [PATCH v3 1/3] sched/fair: Remove unnecessary goto in update_sd_lb_stats() David Vernet
  2024-02-06  4:39 ` [PATCH v3 2/3] sched/fair: Do strict inequality check for busiest misfit task group David Vernet
@ 2024-02-06  4:39 ` David Vernet
  2024-02-28 22:00   ` [tip: sched/core] sched/fair: Simplify the update_sd_pick_busiest() logic tip-bot2 for David Vernet
  2024-02-16 19:44 ` [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest() David Vernet
  3 siblings, 1 reply; 10+ messages in thread
From: David Vernet @ 2024-02-06  4:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	bsegall, bristot, vschneid, kernel-team

When comparing the current struct sched_group with the yet-busiest
domain in update_sd_pick_busiest(), if the two groups have the same
group type, we're currently doing a bit of unnecessary work for any
group >= group_misfit_task. We're comparing the two groups, and then
returning only if false (the group in question is not the busiest).
Otherwise, we break, do an extra unnecessary conditional check that's
vacuously false for any group type > group_fully_busy, and then always
return true.

Let's just return directly in the switch statement instead. This doesn't
change the size of vmlinux with llvm 17 (not surprising given that all
of this is inlined in load_balance()), but it does shrink load_balance()
by 88 bytes on x86. Given that it also improves readability, this seems
worth doing.

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Signed-off-by: David Vernet <void@manifault.com>
---
 kernel/sched/fair.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 76d03106040d..fa049f866461 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10006,9 +10006,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	switch (sgs->group_type) {
 	case group_overloaded:
 		/* Select the overloaded group with highest avg_load. */
-		if (sgs->avg_load <= busiest->avg_load)
-			return false;
-		break;
+		return sgs->avg_load > busiest->avg_load;
 
 	case group_imbalanced:
 		/*
@@ -10019,18 +10017,14 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 
 	case group_asym_packing:
 		/* Prefer to move from lowest priority CPU's work */
-		if (sched_asym_prefer(sg->asym_prefer_cpu, sds->busiest->asym_prefer_cpu))
-			return false;
-		break;
+		return sched_asym_prefer(sds->busiest->asym_prefer_cpu, sg->asym_prefer_cpu);
 
 	case group_misfit_task:
 		/*
 		 * If we have more than one misfit sg go with the biggest
 		 * misfit.
 		 */
-		if (sgs->group_misfit_task_load <= busiest->group_misfit_task_load)
-			return false;
-		break;
+		return sgs->group_misfit_task_load > busiest->group_misfit_task_load;
 
 	case group_smt_balance:
 		/*
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH v3 2/3] sched/fair: Do strict inequality check for busiest misfit task group
  2024-02-06  4:39 ` [PATCH v3 2/3] sched/fair: Do strict inequality check for busiest misfit task group David Vernet
@ 2024-02-06 13:17   ` Valentin Schneider
  2024-02-28 22:00   ` [tip: sched/core] " tip-bot2 for David Vernet
  1 sibling, 0 replies; 10+ messages in thread
From: Valentin Schneider @ 2024-02-06 13:17 UTC (permalink / raw)
  To: David Vernet, linux-kernel
  Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	bsegall, bristot, kernel-team

On 05/02/24 22:39, David Vernet wrote:
> In update_sd_pick_busiest(), when comparing two sched groups that are
> both of type group_misfit_task, we currently consider the new group as
> busier than the current busiest group even if the new group has the
> same misfit task load as the current busiest group. We can avoid some
> unnecessary writes if we instead only consider the newest group to be
> the busiest if it has a higher load than the current busiest. This
> matches the behavior of other group types where we compare load, such as
> two groups that are both overloaded.
>
> Let's update the group_misfit_task type comparison to also only update
> the busiest group in the event of strict inequality.
>
> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>

Reviewed-by: Valentin Schneider <vschneid@redhat.com>


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest()
  2024-02-06  4:39 [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest() David Vernet
                   ` (2 preceding siblings ...)
  2024-02-06  4:39 ` [PATCH v3 3/3] sched/fair: Simplify some logic in update_sd_pick_busiest() David Vernet
@ 2024-02-16 19:44 ` David Vernet
  2024-02-26 17:50   ` David Vernet
  3 siblings, 1 reply; 10+ messages in thread
From: David Vernet @ 2024-02-16 19:44 UTC (permalink / raw)
  To: peterz
  Cc: mingo, linux-kernel, juri.lelli, vincent.guittot,
	dietmar.eggemann, bsegall, bristot, vschneid, kernel-team

On Mon, Feb 05, 2024 at 10:39:18PM -0600, David Vernet wrote:
> update_sd_pick_busiest() (and its caller) has some room for small
> optimizations, and some improvements in readability.

Hello Peter, hello Ingo,

Friendly ping. Is there anything else required for this to land?

Thanks,
David

> 
> - In update_sd_lb_stats(), we're using a goto to skip a single if check.
>   Let's remove the goto and just add another condition to the if.
> 
> - In update_sd_pick_busiest(), only update a group_misfit_task group to
>   be the busiest if it has strictly more load than the current busiest
>   group, rather than >= the load.
> 
> - When comparing the current struct sched_group with the yet-busiest
>   domain in update_sd_pick_busiest(), if the two groups have the same
>   group type, we're currently doing a bit of unnecessary work for any
>   group >= group_misfit_task. We're comparing the two groups, and then
>   returning only if false (the group in question is not the busiest).
>   Otherwise, we break, do an extra unnecessary conditional check that's
>   vacuously false for any group type > group_fully_busy, and then always
>   return true. This patch series has us instead simply return directly
>   in the switch statement, saving some bytes in load_balance().
> 
> ---
> 
> v1: https://lore.kernel.org/lkml/20240202070216.2238392-1-void@manifault.com/
> v2: https://lore.kernel.org/all/20240204044618.46100-1-void@manifault.com/
> 
> v2 -> v3:
> - Add Vincent's Reviewed-by tags
> - Fix stale commit summary sentence (Vincent)
> 
> v1 -> v2 changes:
> 
> - Split the patch series into separate patches (Valentin)
> - Update the group_misfit_task busiest check to use strict inequality
> 
> David Vernet (3):
>   sched/fair: Remove unnecessary goto in update_sd_lb_stats()
>   sched/fair: Do strict inequality check for busiest misfit task group
>   sched/fair: Simplify some logic in update_sd_pick_busiest()
> 
>  kernel/sched/fair.c | 19 ++++---------------
>  1 file changed, 4 insertions(+), 15 deletions(-)
> 
> -- 
> 2.43.0
> 


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest()
  2024-02-16 19:44 ` [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest() David Vernet
@ 2024-02-26 17:50   ` David Vernet
  0 siblings, 0 replies; 10+ messages in thread
From: David Vernet @ 2024-02-26 17:50 UTC (permalink / raw)
  To: peterz
  Cc: mingo, linux-kernel, juri.lelli, vincent.guittot,
	dietmar.eggemann, bsegall, bristot, vschneid, kernel-team

On Fri, Feb 16, 2024 at 01:44:40PM -0600, David Vernet wrote:
> Hello Peter, hello Ingo,
> 
> Friendly ping. Is there anything else required for this to land?

Hello,

Sending another ping.

Thanks,
David

> 
> Thanks,
> David
> 
> > 
> > - In update_sd_lb_stats(), we're using a goto to skip a single if check.
> >   Let's remove the goto and just add another condition to the if.
> > 
> > - In update_sd_pick_busiest(), only update a group_misfit_task group to
> >   be the busiest if it has strictly more load than the current busiest
> >   group, rather than >= the load.
> > 
> > - When comparing the current struct sched_group with the yet-busiest
> >   domain in update_sd_pick_busiest(), if the two groups have the same
> >   group type, we're currently doing a bit of unnecessary work for any
> >   group >= group_misfit_task. We're comparing the two groups, and then
> >   returning only if false (the group in question is not the busiest).
> >   Otherwise, we break, do an extra unnecessary conditional check that's
> >   vacuously false for any group type > group_fully_busy, and then always
> >   return true. This patch series has us instead simply return directly
> >   in the switch statement, saving some bytes in load_balance().
> > 
> > ---
> > 
> > v1: https://lore.kernel.org/lkml/20240202070216.2238392-1-void@manifault.com/
> > v2: https://lore.kernel.org/all/20240204044618.46100-1-void@manifault.com/
> > 
> > v2 -> v3:
> > - Add Vincent's Reviewed-by tags
> > - Fix stale commit summary sentence (Vincent)
> > 
> > v1 -> v2 changes:
> > 
> > - Split the patch series into separate patches (Valentin)
> > - Update the group_misfit_task busiest check to use strict inequality
> > 
> > David Vernet (3):
> >   sched/fair: Remove unnecessary goto in update_sd_lb_stats()
> >   sched/fair: Do strict inequality check for busiest misfit task group
> >   sched/fair: Simplify some logic in update_sd_pick_busiest()
> > 
> >  kernel/sched/fair.c | 19 ++++---------------
> >  1 file changed, 4 insertions(+), 15 deletions(-)
> > 
> > -- 
> > 2.43.0
> > 




^ permalink raw reply	[flat|nested] 10+ messages in thread

* [tip: sched/core] sched/fair: Simplify the update_sd_pick_busiest() logic
  2024-02-06  4:39 ` [PATCH v3 3/3] sched/fair: Simplify some logic in update_sd_pick_busiest() David Vernet
@ 2024-02-28 22:00   ` tip-bot2 for David Vernet
  0 siblings, 0 replies; 10+ messages in thread
From: tip-bot2 for David Vernet @ 2024-02-28 22:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: David Vernet, Ingo Molnar, Vincent Guittot, Valentin Schneider,
	x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     7e9f7d17fe6c23432fda9a3648a858b7589cb4aa
Gitweb:        https://git.kernel.org/tip/7e9f7d17fe6c23432fda9a3648a858b7589cb4aa
Author:        David Vernet <void@manifault.com>
AuthorDate:    Mon, 05 Feb 2024 22:39:21 -06:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Wed, 28 Feb 2024 15:19:26 +01:00

sched/fair: Simplify the update_sd_pick_busiest() logic

When comparing the current struct sched_group with the yet-busiest
domain in update_sd_pick_busiest(), if the two groups have the same
group type, we're currently doing a bit of unnecessary work for any
group >= group_misfit_task. We're comparing the two groups, and then
returning only if false (the group in question is not the busiest).

Otherwise, we break out, do an extra unnecessary conditional check that's
vacuously false for any group type > group_fully_busy, and then always
return true.

Let's just return directly in the switch statement instead. This doesn't
change the size of vmlinux with llvm 17 (not surprising given that all
of this is inlined in load_balance()), but it does shrink load_balance()
by 88 bytes on x86. Given that it also improves readability, this seems
worth doing.

Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lore.kernel.org/r/20240206043921.850302-4-void@manifault.com
---
 kernel/sched/fair.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 448520f..51fe17f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10010,9 +10010,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	switch (sgs->group_type) {
 	case group_overloaded:
 		/* Select the overloaded group with highest avg_load. */
-		if (sgs->avg_load <= busiest->avg_load)
-			return false;
-		break;
+		return sgs->avg_load > busiest->avg_load;
 
 	case group_imbalanced:
 		/*
@@ -10023,18 +10021,14 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 
 	case group_asym_packing:
 		/* Prefer to move from lowest priority CPU's work */
-		if (sched_asym_prefer(sg->asym_prefer_cpu, sds->busiest->asym_prefer_cpu))
-			return false;
-		break;
+		return sched_asym_prefer(sds->busiest->asym_prefer_cpu, sg->asym_prefer_cpu);
 
 	case group_misfit_task:
 		/*
 		 * If we have more than one misfit sg go with the biggest
 		 * misfit.
 		 */
-		if (sgs->group_misfit_task_load <= busiest->group_misfit_task_load)
-			return false;
-		break;
+		return sgs->group_misfit_task_load > busiest->group_misfit_task_load;
 
 	case group_smt_balance:
 		/*

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [tip: sched/core] sched/fair: Remove unnecessary goto in update_sd_lb_stats()
  2024-02-06  4:39 ` [PATCH v3 1/3] sched/fair: Remove unnecessary goto in update_sd_lb_stats() David Vernet
@ 2024-02-28 22:00   ` tip-bot2 for David Vernet
  0 siblings, 0 replies; 10+ messages in thread
From: tip-bot2 for David Vernet @ 2024-02-28 22:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: David Vernet, Ingo Molnar, Vincent Guittot, Valentin Schneider,
	x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     9dfbc26d27aaf0f5958c5972188f16fe977e5af5
Gitweb:        https://git.kernel.org/tip/9dfbc26d27aaf0f5958c5972188f16fe977e5af5
Author:        David Vernet <void@manifault.com>
AuthorDate:    Mon, 05 Feb 2024 22:39:19 -06:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Wed, 28 Feb 2024 15:19:23 +01:00

sched/fair: Remove unnecessary goto in update_sd_lb_stats()

In update_sd_lb_stats(), when we're iterating over the sched groups that
comprise a sched domain, we're skipping the call to
update_sd_pick_busiest() for the sched group that contains the local /
destination CPU. We use a goto to skip the call, but we could just as
easily check !local_group, as there's no other logic that we need to
skip with the goto. Let's remove the goto, and check for !local_group in
the if statement instead.

Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lore.kernel.org/r/20240206043921.850302-2-void@manifault.com
---
 kernel/sched/fair.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 352222d..41dda53 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10580,16 +10580,11 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 
 		update_sg_lb_stats(env, sds, sg, sgs, &sg_status);
 
-		if (local_group)
-			goto next_group;
-
-
-		if (update_sd_pick_busiest(env, sds, sg, sgs)) {
+		if (!local_group && update_sd_pick_busiest(env, sds, sg, sgs)) {
 			sds->busiest = sg;
 			sds->busiest_stat = *sgs;
 		}
 
-next_group:
 		/* Now, start updating sd_lb_stats */
 		sds->total_load += sgs->group_load;
 		sds->total_capacity += sgs->group_capacity;

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [tip: sched/core] sched/fair: Do strict inequality check for busiest misfit task group
  2024-02-06  4:39 ` [PATCH v3 2/3] sched/fair: Do strict inequality check for busiest misfit task group David Vernet
  2024-02-06 13:17   ` Valentin Schneider
@ 2024-02-28 22:00   ` tip-bot2 for David Vernet
  1 sibling, 0 replies; 10+ messages in thread
From: tip-bot2 for David Vernet @ 2024-02-28 22:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: David Vernet, Ingo Molnar, Vincent Guittot, Valentin Schneider,
	x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     7f1a7229718d788f26a711374da83adc2689837f
Gitweb:        https://git.kernel.org/tip/7f1a7229718d788f26a711374da83adc2689837f
Author:        David Vernet <void@manifault.com>
AuthorDate:    Mon, 05 Feb 2024 22:39:20 -06:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Wed, 28 Feb 2024 15:19:24 +01:00

sched/fair: Do strict inequality check for busiest misfit task group

In update_sd_pick_busiest(), when comparing two sched groups that are
both of type group_misfit_task, we currently consider the new group as
busier than the current busiest group even if the new group has the
same misfit task load as the current busiest group. We can avoid some
unnecessary writes if we instead only consider the newest group to be
the busiest if it has a higher load than the current busiest. This
matches the behavior of other group types where we compare load, such as
two groups that are both overloaded.

Let's update the group_misfit_task type comparison to also only update
the busiest group in the event of strict inequality.

Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lore.kernel.org/r/20240206043921.850302-3-void@manifault.com
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 41dda53..448520f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10032,7 +10032,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 		 * If we have more than one misfit sg go with the biggest
 		 * misfit.
 		 */
-		if (sgs->group_misfit_task_load < busiest->group_misfit_task_load)
+		if (sgs->group_misfit_task_load <= busiest->group_misfit_task_load)
 			return false;
 		break;
 

^ permalink raw reply related	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2024-02-28 22:00 UTC | newest]

Thread overview: 10+ messages
2024-02-06  4:39 [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest() David Vernet
2024-02-06  4:39 ` [PATCH v3 1/3] sched/fair: Remove unnecessary goto in update_sd_lb_stats() David Vernet
2024-02-28 22:00   ` [tip: sched/core] " tip-bot2 for David Vernet
2024-02-06  4:39 ` [PATCH v3 2/3] sched/fair: Do strict inequality check for busiest misfit task group David Vernet
2024-02-06 13:17   ` Valentin Schneider
2024-02-28 22:00   ` [tip: sched/core] " tip-bot2 for David Vernet
2024-02-06  4:39 ` [PATCH v3 3/3] sched/fair: Simplify some logic in update_sd_pick_busiest() David Vernet
2024-02-28 22:00   ` [tip: sched/core] sched/fair: Simplify the update_sd_pick_busiest() logic tip-bot2 for David Vernet
2024-02-16 19:44 ` [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest() David Vernet
2024-02-26 17:50   ` David Vernet
