public inbox for linux-kernel@vger.kernel.org
* [PATCH 0/3] sched/fair: Sort out 'blocked_load*' namespace confusion
@ 2025-12-02  8:13 Ingo Molnar
  2025-12-02  8:13 ` [PATCH 1/3] sched/fair: Rename the 'has_blocked_load' local variable to 'has_blocked' in _nohz_idle_balance() Ingo Molnar
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Ingo Molnar @ 2025-12-02  8:13 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Frederic Weisbecker, Shrikanth Hegde, Juri Lelli,
	Dietmar Eggemann, Valentin Schneider, Vincent Guittot,
	Linus Torvalds, Mel Gorman, Steven Rostedt, Thomas Gleixner,
	Ingo Molnar

There are three separate, independent pieces of logic in the
scheduler that are named 'has_blocked':

   1) nohz.has_blocked,
   2) rq->has_blocked_load - both of these relate to NOHZ balancing,

   3) and cfs_rq_has_blocked(), which operates on SMP load-balancing
      averages.

While reviewing this code I noticed a couple of inconsistencies:

  - nohz.has_blocked sometimes gets handled via a local variable
    that is named 'has_blocked_load' - but it's the runqueue
    that has the has_blocked_load field, not the nohz structure ...

  - The cfs_rq_has_blocked() function does SMP load-balancing and
    has no relation to NOHZ has_blocked logic.

  - The update_blocked_load_status() function, which sets the
    rq->has_blocked_load field, has a parameter named 'has_blocked',
    but that's the field name of the nohz structure.

To sort all of this out, standardize on 3 distinct patterns:

  (1) nohz.has_blocked related functions and variables use the
      'has_blocked' nomenclature,

  (2) rq->has_blocked_load related functions and variables use
      'has_blocked_load',

  (3) and cfs_rq_has_blocked() uses 'has_blocked_load_avg'.

This series implements (1), (2) and (3) via three patches.

The end result becomes a lot more greppable, as the following
'git grep' commands:

  git grep 'has_blocked\>'                kernel/sched/
  git grep 'has_blocked_load\>'           kernel/sched/
  git grep 'has_blocked_load_avg\>'       kernel/sched/

... will now output only the respective relevant lines.
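The separation relies on the '\>' end-of-word anchor: '_' counts as a
word character, so 'has_blocked\>' cannot match inside
'has_blocked_load'. A quick sketch of the same effect with plain GNU
grep (which supports the same anchor as 'git grep' does above), on a
made-up sample file:

```shell
# One line per post-series name pattern:
printf 'nohz.has_blocked\nrq->has_blocked_load\ncfs_rq_has_blocked_load_avg()\n' > names.txt

# '\>' only matches where the next character is a non-word character,
# so each pattern hits exactly one of the three lines:
grep 'has_blocked\>' names.txt           # -> nohz.has_blocked
grep 'has_blocked_load\>' names.txt      # -> rq->has_blocked_load
grep 'has_blocked_load_avg\>' names.txt  # -> cfs_rq_has_blocked_load_avg()
```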

No change in functionality.

Thanks,

	Ingo

================

Ingo Molnar (3):
  sched/fair: Rename the 'has_blocked_load' local variable to 'has_blocked' in _nohz_idle_balance()
  sched/fair: Rename the 'has_blocked' parameter in update_blocked_load_status() to 'has_blocked_load'
  sched/fair: Rename cfs_rq_has_blocked() => cfs_rq_has_blocked_load_avg()

 kernel/sched/fair.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/3] sched/fair: Rename the 'has_blocked_load' local variable to 'has_blocked' in _nohz_idle_balance()
  2025-12-02  8:13 [PATCH 0/3] sched/fair: Sort out 'blocked_load*' namespace confusion Ingo Molnar
@ 2025-12-02  8:13 ` Ingo Molnar
  2025-12-02  8:13 ` [PATCH 2/3] sched/fair: Rename the 'has_blocked' parameter in update_blocked_load_status() to 'has_blocked_load' Ingo Molnar
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 12+ messages in thread
From: Ingo Molnar @ 2025-12-02  8:13 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Frederic Weisbecker, Shrikanth Hegde, Juri Lelli,
	Dietmar Eggemann, Valentin Schneider, Vincent Guittot,
	Linus Torvalds, Mel Gorman, Steven Rostedt, Thomas Gleixner,
	Ingo Molnar, Frederic Weisbecker

There are three separate, independent pieces of logic in the
scheduler that are named 'has_blocked':

  (1) nohz.has_blocked,
  (2) rq->has_blocked_load - both of these relate to NOHZ balancing,

  (3) and cfs_rq_has_blocked(), which operates on SMP load-balancing
      averages.

To reduce confusion, split these 3 overlapping uses of the
'has_blocked' name into 3 distinct and greppable patterns:

  (1) nohz.has_blocked related functions and variables use
      'has_blocked',

  (2) rq->has_blocked_load related functions and variables use
      'has_blocked_load',

  (3) and cfs_rq_has_blocked() uses 'has_blocked_load_avg'.

This patch implements (1) and in _nohz_idle_balance() renames the
'has_blocked_load' local variable to 'has_blocked'.

No change in functionality.

Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ff647e92a314..3f21b77b59ce 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12666,7 +12666,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 	/* Earliest time when we have to do rebalance again */
 	unsigned long now = jiffies;
 	unsigned long next_balance = now + 60*HZ;
-	bool has_blocked_load = false;
+	bool has_blocked = false;
 	int update_next_balance = 0;
 	int this_cpu = this_rq->cpu;
 	int balance_cpu;
@@ -12710,7 +12710,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 		 */
 		if (!idle_cpu(this_cpu) && need_resched()) {
 			if (flags & NOHZ_STATS_KICK)
-				has_blocked_load = true;
+				has_blocked = true;
 			if (flags & NOHZ_NEXT_KICK)
 				WRITE_ONCE(nohz.needs_update, 1);
 			goto abort;
@@ -12719,7 +12719,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 		rq = cpu_rq(balance_cpu);
 
 		if (flags & NOHZ_STATS_KICK)
-			has_blocked_load |= update_nohz_stats(rq);
+			has_blocked |= update_nohz_stats(rq);
 
 		/*
 		 * If time for next balance is due,
@@ -12756,7 +12756,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 
 abort:
 	/* There is still blocked load, enable periodic update */
-	if (has_blocked_load)
+	if (has_blocked)
 		WRITE_ONCE(nohz.has_blocked, 1);
 }
 
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 2/3] sched/fair: Rename the 'has_blocked' parameter in update_blocked_load_status() to 'has_blocked_load'
  2025-12-02  8:13 [PATCH 0/3] sched/fair: Sort out 'blocked_load*' namespace confusion Ingo Molnar
  2025-12-02  8:13 ` [PATCH 1/3] sched/fair: Rename the 'has_blocked_load' local variable to 'has_blocked' in _nohz_idle_balance() Ingo Molnar
@ 2025-12-02  8:13 ` Ingo Molnar
  2025-12-02  8:13 ` [PATCH 3/3] sched/fair: Rename cfs_rq_has_blocked() => cfs_rq_has_blocked_load_avg() Ingo Molnar
  2025-12-02  9:04 ` [PATCH 0/3] sched/fair: Sort out 'blocked_load*' namespace confusion Vincent Guittot
  3 siblings, 0 replies; 12+ messages in thread
From: Ingo Molnar @ 2025-12-02  8:13 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Frederic Weisbecker, Shrikanth Hegde, Juri Lelli,
	Dietmar Eggemann, Valentin Schneider, Vincent Guittot,
	Linus Torvalds, Mel Gorman, Steven Rostedt, Thomas Gleixner,
	Ingo Molnar, Frederic Weisbecker

There are three separate, independent pieces of logic in the
scheduler that are named 'has_blocked':

  1) nohz.has_blocked,
  2) rq->has_blocked_load - both of these relate to NOHZ balancing,

  3) and cfs_rq_has_blocked(), which operates on SMP load-balancing
     averages.

To reduce confusion, split these 3 overlapping uses of the
'has_blocked' name into 3 distinct and greppable patterns:

  1) nohz.has_blocked related functions and variables use
     'has_blocked',

  2) rq->has_blocked_load related functions and variables use
     'has_blocked_load',

  3) and cfs_rq_has_blocked() uses 'has_blocked_load_avg'.

This patch implements (2) and renames the 'has_blocked' parameter
in update_blocked_load_status() to 'has_blocked_load'.

No change in functionality.

Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3f21b77b59ce..a8a67d25873c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9800,16 +9800,16 @@ static inline void update_blocked_load_tick(struct rq *rq)
 	WRITE_ONCE(rq->last_blocked_load_update_tick, jiffies);
 }
 
-static inline void update_blocked_load_status(struct rq *rq, bool has_blocked)
+static inline void update_blocked_load_status(struct rq *rq, bool has_blocked_load)
 {
-	if (!has_blocked)
+	if (!has_blocked_load)
 		rq->has_blocked_load = 0;
 }
 #else /* !CONFIG_NO_HZ_COMMON: */
 static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq) { return false; }
 static inline bool others_have_blocked(struct rq *rq) { return false; }
 static inline void update_blocked_load_tick(struct rq *rq) {}
-static inline void update_blocked_load_status(struct rq *rq, bool has_blocked) {}
+static inline void update_blocked_load_status(struct rq *rq, bool has_blocked_load) {}
 #endif /* !CONFIG_NO_HZ_COMMON */
 
 static bool __update_blocked_others(struct rq *rq, bool *done)
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 3/3] sched/fair: Rename cfs_rq_has_blocked() => cfs_rq_has_blocked_load_avg()
  2025-12-02  8:13 [PATCH 0/3] sched/fair: Sort out 'blocked_load*' namespace confusion Ingo Molnar
  2025-12-02  8:13 ` [PATCH 1/3] sched/fair: Rename the 'has_blocked_load' local variable to 'has_blocked' in _nohz_idle_balance() Ingo Molnar
  2025-12-02  8:13 ` [PATCH 2/3] sched/fair: Rename the 'has_blocked' parameter in update_blocked_load_status() to 'has_blocked_load' Ingo Molnar
@ 2025-12-02  8:13 ` Ingo Molnar
  2025-12-02  9:04 ` [PATCH 0/3] sched/fair: Sort out 'blocked_load*' namespace confusion Vincent Guittot
  3 siblings, 0 replies; 12+ messages in thread
From: Ingo Molnar @ 2025-12-02  8:13 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Frederic Weisbecker, Shrikanth Hegde, Juri Lelli,
	Dietmar Eggemann, Valentin Schneider, Vincent Guittot,
	Linus Torvalds, Mel Gorman, Steven Rostedt, Thomas Gleixner,
	Ingo Molnar, Frederic Weisbecker

There are three separate, independent pieces of logic in the
scheduler that are named 'has_blocked':

  1) nohz.has_blocked,
  2) rq->has_blocked_load - both of these relate to NOHZ balancing,

  3) and cfs_rq_has_blocked(), which operates on SMP load-balancing
     averages.

To reduce confusion, split these 3 overlapping uses of the
'has_blocked' name into 3 distinct and greppable patterns:

  1) nohz.has_blocked related functions and variables use
     'has_blocked',

  2) rq->has_blocked_load related functions and variables use
     'has_blocked_load',

  3) and cfs_rq_has_blocked() uses 'has_blocked_load_avg'.

This patch implements (3) and renames cfs_rq_has_blocked() to
cfs_rq_has_blocked_load_avg().

No change in functionality.

Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a8a67d25873c..c098aedab694 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9767,7 +9767,7 @@ static void attach_tasks(struct lb_env *env)
 }
 
 #ifdef CONFIG_NO_HZ_COMMON
-static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq)
+static inline bool cfs_rq_has_blocked_load_avg(struct cfs_rq *cfs_rq)
 {
 	if (cfs_rq->avg.load_avg)
 		return true;
@@ -9806,7 +9806,7 @@ static inline void update_blocked_load_status(struct rq *rq, bool has_blocked_lo
 		rq->has_blocked_load = 0;
 }
 #else /* !CONFIG_NO_HZ_COMMON: */
-static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq) { return false; }
+static inline bool cfs_rq_has_blocked_load_avg(struct cfs_rq *cfs_rq) { return false; }
 static inline bool others_have_blocked(struct rq *rq) { return false; }
 static inline void update_blocked_load_tick(struct rq *rq) {}
 static inline void update_blocked_load_status(struct rq *rq, bool has_blocked_load) {}
@@ -9866,7 +9866,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 			list_del_leaf_cfs_rq(cfs_rq);
 
 		/* Don't need periodic decay once load/util_avg are null */
-		if (cfs_rq_has_blocked(cfs_rq))
+		if (cfs_rq_has_blocked_load_avg(cfs_rq))
 			*done = false;
 	}
 
@@ -9926,7 +9926,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 	bool decayed;
 
 	decayed = update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);
-	if (cfs_rq_has_blocked(cfs_rq))
+	if (cfs_rq_has_blocked_load_avg(cfs_rq))
 		*done = false;
 
 	return decayed;
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH 0/3] sched/fair: Sort out 'blocked_load*' namespace confusion
  2025-12-02  8:13 [PATCH 0/3] sched/fair: Sort out 'blocked_load*' namespace confusion Ingo Molnar
                   ` (2 preceding siblings ...)
  2025-12-02  8:13 ` [PATCH 3/3] sched/fair: Rename cfs_rq_has_blocked() => cfs_rq_has_blocked_load_avg() Ingo Molnar
@ 2025-12-02  9:04 ` Vincent Guittot
  2025-12-02  9:34   ` [PATCH 1/1 -v2] sched/fair: Sort out 'blocked_load*' namespace noise Ingo Molnar
  3 siblings, 1 reply; 12+ messages in thread
From: Vincent Guittot @ 2025-12-02  9:04 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Frederic Weisbecker,
	Shrikanth Hegde, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Linus Torvalds, Mel Gorman, Steven Rostedt, Thomas Gleixner

On Tue, 2 Dec 2025 at 09:13, Ingo Molnar <mingo@kernel.org> wrote:
>
> There's three separate, independent pieces of logic in the
> scheduler that are named 'has_blocked':
>
>    1) nohz.has_blocked,
>    2) rq->has_blocked_load - both of these relate to NOHZ balancing,
>
>    3) and cfs_rq_has_blocked(), which operates on SMP load-balancing
>       averages.
>
> While reviewing this code I noticed a couple of inconsistencies:
>
>   - nohz.has_blocked sometimes gets handled via a local variable
>     that is named 'has_blocked_load' - but it's the runqueue
>     that has the has_blocked_load field, not the nohz structure ...
>
>   - The cfs_rq_has_blocked() function does SMP load-balancing and
>     has no relation to NOHZ has_blocked logic.
>
>   - The update_blocked_load_status() function, which sets the
>     rq->has_blocked_load field, has a parameter named 'has_blocked',
>     but that's the field name of the nohz structure.
>
> To sort all of this out, standardize on 3 distinct patterns:
>
>   (1) nohz.has_blocked related functions and variables use the
>       'has_blocked' nomenclature,
>
>   (2) rq->has_blocked_load related functions and variables use
>       'has_blocked_load',
>
>   (3) and cfs_rq_has_blocked() uses 'has_blocked_load_avg'.

They are all implementing the same feature: update the blocked pelt
signal of idle rqs.

If we want some renaming, we should use the same naming for all to
show that it's all about the same thing

nohz.has_blocked_load
cfs_rq_has_blocked_load()
rq->has_blocked_load

>
> This series implements (1), (2) and (3) via three patches.
>
> The end result becomes a lot more greppable, as the following
> 'git grep' commands:
>
>   git grep 'has_blocked\>'                kernel/sched/
>   git grep 'has_blocked_load\>'           kernel/sched/
>   git grep 'has_blocked_load_avg\>'       kernel/sched/
>
> ... will now output only the respective relevant lines.
>
> No change in functionality.
>
> Thanks,
>
>         Ingo
>
> ================
>
> Ingo Molnar (3):
>   sched/fair: Rename the 'has_blocked_load' local variable to 'has_blocked' in _nohz_idle_balance()
>   sched/fair: Rename the 'has_blocked' parameter in update_blocked_load_status() to 'has_blocked_load'
>   sched/fair: Rename cfs_rq_has_blocked() => cfs_rq_has_blocked_load_avg()
>
>  kernel/sched/fair.c | 22 +++++++++++-----------
>  1 file changed, 11 insertions(+), 11 deletions(-)
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/1 -v2] sched/fair: Sort out 'blocked_load*' namespace noise
  2025-12-02  9:04 ` [PATCH 0/3] sched/fair: Sort out 'blocked_load*' namespace confusion Vincent Guittot
@ 2025-12-02  9:34   ` Ingo Molnar
  2025-12-02 10:13     ` Vincent Guittot
                       ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Ingo Molnar @ 2025-12-02  9:34 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, Peter Zijlstra, Frederic Weisbecker,
	Shrikanth Hegde, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Linus Torvalds, Mel Gorman, Steven Rostedt, Thomas Gleixner


* Vincent Guittot <vincent.guittot@linaro.org> wrote:

> On Tue, 2 Dec 2025 at 09:13, Ingo Molnar <mingo@kernel.org> wrote:
> >
> > There's three separate, independent pieces of logic in the
> > scheduler that are named 'has_blocked':
> >
> >    1) nohz.has_blocked,
> >    2) rq->has_blocked_load - both of these relate to NOHZ balancing,
> >
> >    3) and cfs_rq_has_blocked(), which operates on SMP load-balancing
> >       averages.
> >
> > While reviewing this code I noticed a couple of inconsistencies:
> >
> >   - nohz.has_blocked sometimes gets handled via a local variable
> >     that is named 'has_blocked_load' - but it's the runqueue
> >     that has the has_blocked_load field, not the nohz structure ...
> >
> >   - The cfs_rq_has_blocked() function does SMP load-balancing and
> >     has no relation to NOHZ has_blocked logic.
> >
> >   - The update_blocked_load_status() function, which sets the
> >     rq->has_blocked_load field, has a parameter named 'has_blocked',
> >     but that's the field name of the nohz structure.
> >
> > To sort all of this out, standardize on 3 distinct patterns:
> >
> >   (1) nohz.has_blocked related functions and variables use the
> >       'has_blocked' nomenclature,
> >
> >   (2) rq->has_blocked_load related functions and variables use
> >       'has_blocked_load',
> >
> >   (3) and cfs_rq_has_blocked() uses 'has_blocked_load_avg'.
> 
> They are all implementing the same feature: update the blocked pelt
> signal of idle rqs.

Yeah, should have said 3 separate layers of logic that
each deal with the same thing. When writing the
changelog I missed how update_blocked_load_status()
feeds into rq->has_blocked_load via the !done PELT
signal we get back from the load-balancers, and didn't
look further. :-/

> If we want some renaming, we should use the same naming for all to
> show that it's all about the same thing
> 
> nohz.has_blocked_load()
> cfs_rq_has_blocked_load()
> rq->has_blocked_load()

I'd still argue that greppability of the 3 layers might 
have a small code readability value:

  git grep 'has_blocked\>'                kernel/sched/
  git grep 'has_blocked_load\>'           kernel/sched/
  git grep 'has_blocked_load_avg\>'       kernel/sched/

... and I've fixed up the changelogs to say:

	There are three separate layers of logic in the scheduler that
	deal with 'has_blocked' handling of the NOHZ code:

	  (1) nohz.has_blocked,
	  (2) rq->has_blocked_load - both of which deal with NOHZ
	      idle balancing,
	  (3) and cfs_rq_has_blocked(), which is part of the layer
	      that is passing the SMP load-balancing signal to the
	      NOHZ layers.

	To make it easier to separate them, split these 3 mixed uses
	of the 'has_blocked' name into 3 distinct and greppable
	patterns:

	  (1) nohz.has_blocked related functions and variables use
	      'has_blocked',

	  (2) rq->has_blocked_load related functions and variables use
	      'has_blocked_load',

	  (3) and cfs_rq_has_blocked() uses 'has_blocked_load_avg'.

... but if you still object to that notion, we can also 
do your suggestion - see the patch below. Both variants 
are fine to me, no strong preferences, as long as the 
names remove the existing random noise. :-)

In fact, on second thought, I slightly prefer your
suggestion, as 'has_blocked_load' has a proper noun.

Thanks,

	Ingo

========================>
From: Ingo Molnar <mingo@kernel.org>
Subject: sched/fair: Sort out 'blocked_load*' namespace noise

There are three layers of logic in the scheduler that
deal with 'has_blocked' (load) handling of the NOHZ code:

  (1) nohz.has_blocked,
  (2) rq->has_blocked_load - both of which deal with NOHZ idle
      balancing,
  (3) and cfs_rq_has_blocked(), which is part of the layer
      that is passing the SMP load-balancing signal to the
      NOHZ layers.

The 'has_blocked' and 'has_blocked_load' names are used 
in a mixed fashion, sometimes within the same function.

Standardize on 'has_blocked_load' to make it all easy 
to read and easy to grep.

No change in functionality.

Signed-off-by: Ingo Molnar <mingo@kernel.org>

 kernel/sched/fair.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ff647e92a314..c6d3d01945bb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7137,7 +7137,7 @@ static DEFINE_PER_CPU(cpumask_var_t, should_we_balance_tmpmask);
 static struct {
 	cpumask_var_t idle_cpus_mask;
 	atomic_t nr_cpus;
-	int has_blocked;		/* Idle CPUS has blocked load */
+	int has_blocked_load;		/* Idle CPUS has blocked load */
 	int needs_update;		/* Newly idle CPUs need their next_balance collated */
 	unsigned long next_balance;     /* in jiffy units */
 	unsigned long next_blocked;	/* Next update of blocked load in jiffies */
@@ -9767,7 +9767,7 @@ static void attach_tasks(struct lb_env *env)
 }
 
 #ifdef CONFIG_NO_HZ_COMMON
-static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq)
+static inline bool cfs_rq_has_blocked_load(struct cfs_rq *cfs_rq)
 {
 	if (cfs_rq->avg.load_avg)
 		return true;
@@ -9800,16 +9800,16 @@ static inline void update_blocked_load_tick(struct rq *rq)
 	WRITE_ONCE(rq->last_blocked_load_update_tick, jiffies);
 }
 
-static inline void update_blocked_load_status(struct rq *rq, bool has_blocked)
+static inline void update_blocked_load_status(struct rq *rq, bool has_blocked_load)
 {
-	if (!has_blocked)
+	if (!has_blocked_load)
 		rq->has_blocked_load = 0;
 }
 #else /* !CONFIG_NO_HZ_COMMON: */
-static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq) { return false; }
+static inline bool cfs_rq_has_blocked_load(struct cfs_rq *cfs_rq) { return false; }
 static inline bool others_have_blocked(struct rq *rq) { return false; }
 static inline void update_blocked_load_tick(struct rq *rq) {}
-static inline void update_blocked_load_status(struct rq *rq, bool has_blocked) {}
+static inline void update_blocked_load_status(struct rq *rq, bool has_blocked_load) {}
 #endif /* !CONFIG_NO_HZ_COMMON */
 
 static bool __update_blocked_others(struct rq *rq, bool *done)
@@ -9866,7 +9866,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 			list_del_leaf_cfs_rq(cfs_rq);
 
 		/* Don't need periodic decay once load/util_avg are null */
-		if (cfs_rq_has_blocked(cfs_rq))
+		if (cfs_rq_has_blocked_load(cfs_rq))
 			*done = false;
 	}
 
@@ -9926,7 +9926,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 	bool decayed;
 
 	decayed = update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);
-	if (cfs_rq_has_blocked(cfs_rq))
+	if (cfs_rq_has_blocked_load(cfs_rq))
 		*done = false;
 
 	return decayed;
@@ -12443,7 +12443,7 @@ static void nohz_balancer_kick(struct rq *rq)
 	if (likely(!atomic_read(&nohz.nr_cpus)))
 		return;
 
-	if (READ_ONCE(nohz.has_blocked) &&
+	if (READ_ONCE(nohz.has_blocked_load) &&
 	    time_after(now, READ_ONCE(nohz.next_blocked)))
 		flags = NOHZ_STATS_KICK;
 
@@ -12604,9 +12604,9 @@ void nohz_balance_enter_idle(int cpu)
 
 	/*
 	 * The tick is still stopped but load could have been added in the
-	 * meantime. We set the nohz.has_blocked flag to trig a check of the
+	 * meantime. We set the nohz.has_blocked_load flag to trig a check of the
 	 * *_avg. The CPU is already part of nohz.idle_cpus_mask so the clear
-	 * of nohz.has_blocked can only happen after checking the new load
+	 * of nohz.has_blocked_load can only happen after checking the new load
 	 */
 	if (rq->nohz_tick_stopped)
 		goto out;
@@ -12622,7 +12622,7 @@ void nohz_balance_enter_idle(int cpu)
 
 	/*
 	 * Ensures that if nohz_idle_balance() fails to observe our
-	 * @idle_cpus_mask store, it must observe the @has_blocked
+	 * @idle_cpus_mask store, it must observe the @has_blocked_load
 	 * and @needs_update stores.
 	 */
 	smp_mb__after_atomic();
@@ -12635,7 +12635,7 @@ void nohz_balance_enter_idle(int cpu)
 	 * Each time a cpu enter idle, we assume that it has blocked load and
 	 * enable the periodic update of the load of idle CPUs
 	 */
-	WRITE_ONCE(nohz.has_blocked, 1);
+	WRITE_ONCE(nohz.has_blocked_load, 1);
 }
 
 static bool update_nohz_stats(struct rq *rq)
@@ -12676,8 +12676,8 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 
 	/*
 	 * We assume there will be no idle load after this update and clear
-	 * the has_blocked flag. If a cpu enters idle in the mean time, it will
-	 * set the has_blocked flag and trigger another update of idle load.
+	 * the has_blocked_load flag. If a cpu enters idle in the mean time, it will
+	 * set the has_blocked_load flag and trigger another update of idle load.
 	 * Because a cpu that becomes idle, is added to idle_cpus_mask before
 	 * setting the flag, we are sure to not clear the state and not
 	 * check the load of an idle cpu.
@@ -12685,12 +12685,12 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 	 * Same applies to idle_cpus_mask vs needs_update.
 	 */
 	if (flags & NOHZ_STATS_KICK)
-		WRITE_ONCE(nohz.has_blocked, 0);
+		WRITE_ONCE(nohz.has_blocked_load, 0);
 	if (flags & NOHZ_NEXT_KICK)
 		WRITE_ONCE(nohz.needs_update, 0);
 
 	/*
-	 * Ensures that if we miss the CPU, we must see the has_blocked
+	 * Ensures that if we miss the CPU, we must see the has_blocked_load
 	 * store from nohz_balance_enter_idle().
 	 */
 	smp_mb();
@@ -12757,7 +12757,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 abort:
 	/* There is still blocked load, enable periodic update */
 	if (has_blocked_load)
-		WRITE_ONCE(nohz.has_blocked, 1);
+		WRITE_ONCE(nohz.has_blocked_load, 1);
 }
 
 /*
@@ -12819,7 +12819,7 @@ static void nohz_newidle_balance(struct rq *this_rq)
 		return;
 
 	/* Don't need to update blocked load of idle CPUs*/
-	if (!READ_ONCE(nohz.has_blocked) ||
+	if (!READ_ONCE(nohz.has_blocked_load) ||
 	    time_before(jiffies, READ_ONCE(nohz.next_blocked)))
 		return;
 

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/1 -v2] sched/fair: Sort out 'blocked_load*' namespace noise
  2025-12-02  9:34   ` [PATCH 1/1 -v2] sched/fair: Sort out 'blocked_load*' namespace noise Ingo Molnar
@ 2025-12-02 10:13     ` Vincent Guittot
  2025-12-02 16:09       ` [PATCH 1/1 -v3] " Ingo Molnar
  2025-12-14  7:46     ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2025-12-15  7:59     ` tip-bot2 for Ingo Molnar
  2 siblings, 1 reply; 12+ messages in thread
From: Vincent Guittot @ 2025-12-02 10:13 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Frederic Weisbecker,
	Shrikanth Hegde, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Linus Torvalds, Mel Gorman, Steven Rostedt, Thomas Gleixner

On Tue, 2 Dec 2025 at 10:35, Ingo Molnar <mingo@kernel.org> wrote:
>
>
> * Vincent Guittot <vincent.guittot@linaro.org> wrote:
>
> > On Tue, 2 Dec 2025 at 09:13, Ingo Molnar <mingo@kernel.org> wrote:
> > >
> > > There's three separate, independent pieces of logic in the
> > > scheduler that are named 'has_blocked':
> > >
> > >    1) nohz.has_blocked,
> > >    2) rq->has_blocked_load - both of these relate to NOHZ balancing,
> > >
> > >    3) and cfs_rq_has_blocked(), which operates on SMP load-balancing
> > >       averages.
> > >
> > > While reviewing this code I noticed a couple of inconsistencies:
> > >
> > >   - nohz.has_blocked sometimes gets handled via a local variable
> > >     that is named 'has_blocked_load' - but it's the runqueue
> > >     that has the has_blocked_load field, not the nohz structure ...
> > >
> > >   - The cfs_rq_has_blocked() function does SMP load-balancing and
> > >     has no relation to NOHZ has_blocked logic.
> > >
> > >   - The update_blocked_load_status() function, which sets the
> > >     rq->has_blocked_load field, has a parameter named 'has_blocked',
> > >     but that's the field name of the nohz structure.
> > >
> > > To sort all of this out, standardize on 3 distinct patterns:
> > >
> > >   (1) nohz.has_blocked related functions and variables use the
> > >       'has_blocked' nomenclature,
> > >
> > >   (2) rq->has_blocked_load related functions and variables use
> > >       'has_blocked_load',
> > >
> > >   (3) and cfs_rq_has_blocked() uses 'has_blocked_load_avg'.
> >
> > They are all implementing the same feature: update the blocked pelt
> > signal of idle rqs.
>
> Yeah, should have said 3 separate layers of logic that
> each deal with the same thing, when writing the
> changelog I missed how update_blocked_load_status()
> feeds into rq->has_blocked_load via !done PELT signal
> we get back from the load-balancers and didn't look
> further. :-/
>
> > If we want some renaming, we should use the same naming for all to
> > show that it's all about the same thing
> >
> > nohz.has_blocked_load()
> > cfs_rq_has_blocked_load()
> > rq->has_blocked_load()
>
> I'd still argue that greppability of the 3 layers might
> have a small code readability value:
>
>   git grep 'has_blocked\>'                kernel/sched/
>   git grep 'has_blocked_load\>'           kernel/sched/
>   git grep 'has_blocked_load_avg\>'       kernel/sched/
>
> ... and I've fixed up the changelogs to say:
>
>         There's three separate layers of logic in the scheduler that
>         deal with 'has_blocked' handling of the NOHZ code:
>
>           (1) nohz.has_blocked,
>           (2) rq->has_blocked_load, deal with NOHZ idle balancing,
>           (3) and cfs_rq_has_blocked(), which is part of the layer
>               that is passing the SMP load-balancing signal to the
>               NOHZ layers.
>
>         To make it easier to separate them, split these 3 shared-mixed
>         uses of 'has_blocked' name patterns into 3 distinct and greppable
>         patterns:
>
>           (1) nohz.has_blocked related functions and variables use
>               'has_blocked',
>
>           (2) rq->has_blocked_load related functions and variables use
>               'has_blocked_load',
>
>           (3) and cfs_rq_has_blocked() uses 'has_blocked_load_avg'.
>
> ... but if you still object to that notion, we can also
> do your suggestion - see the patch below. Both variants
> are fine to me, no strong preferences, as long as the
> names remove the existing random noise. :-)
>
> In fact, on a second thought, I slightly prefer your
> suggestion, as 'has_blocked_load' has a proper noun.

I would prefer using 'has_blocked_load' as I find it easier to see
that it's the same info saved in different places.

Thanks

>
> Thanks,
>
>         Ingo
>
> ========================>
> From: Ingo Molnar <mingo@kernel.org>
> Subject: sched/fair: Sort out 'blocked_load*' namespace noise
>
> There's three layers of logic in the scheduler that
> deal with 'has_blocked' (load) handling of the NOHZ code:
>
>   (1) nohz.has_blocked,
>   (2) rq->has_blocked_load, deal with NOHZ idle balancing,
>   (3) and cfs_rq_has_blocked(), which is part of the layer
>       that is passing the SMP load-balancing signal to the
>       NOHZ layers.
>
> The 'has_blocked' and 'has_blocked_load' names are used
> in a mixed fashion, sometimes within the same function.
>
> Standardize on 'has_blocked_load' to make it all easy
> to read and easy to grep.
>
> No change in functionality.
>
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
>
>  kernel/sched/fair.c | 38 +++++++++++++++++++-------------------
>  1 file changed, 19 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index ff647e92a314..c6d3d01945bb 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7137,7 +7137,7 @@ static DEFINE_PER_CPU(cpumask_var_t, should_we_balance_tmpmask);
>  static struct {
>         cpumask_var_t idle_cpus_mask;
>         atomic_t nr_cpus;
> -       int has_blocked;                /* Idle CPUS has blocked load */
> +       int has_blocked_load;           /* Idle CPUS has blocked load */
>         int needs_update;               /* Newly idle CPUs need their next_balance collated */
>         unsigned long next_balance;     /* in jiffy units */
>         unsigned long next_blocked;     /* Next update of blocked load in jiffies */
> @@ -9767,7 +9767,7 @@ static void attach_tasks(struct lb_env *env)
>  }
>
>  #ifdef CONFIG_NO_HZ_COMMON
> -static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq)
> +static inline bool cfs_rq_has_blocked_load(struct cfs_rq *cfs_rq)
>  {
>         if (cfs_rq->avg.load_avg)
>                 return true;
> @@ -9800,16 +9800,16 @@ static inline void update_blocked_load_tick(struct rq *rq)
>         WRITE_ONCE(rq->last_blocked_load_update_tick, jiffies);
>  }
>
> -static inline void update_blocked_load_status(struct rq *rq, bool has_blocked)
> +static inline void update_blocked_load_status(struct rq *rq, bool has_blocked_load)
>  {
> -       if (!has_blocked)
> +       if (!has_blocked_load)
>                 rq->has_blocked_load = 0;
>  }
>  #else /* !CONFIG_NO_HZ_COMMON: */
> -static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq) { return false; }
> +static inline bool cfs_rq_has_blocked_load(struct cfs_rq *cfs_rq) { return false; }
>  static inline bool others_have_blocked(struct rq *rq) { return false; }
>  static inline void update_blocked_load_tick(struct rq *rq) {}
> -static inline void update_blocked_load_status(struct rq *rq, bool has_blocked) {}
> +static inline void update_blocked_load_status(struct rq *rq, bool has_blocked_load) {}
>  #endif /* !CONFIG_NO_HZ_COMMON */
>
>  static bool __update_blocked_others(struct rq *rq, bool *done)
> @@ -9866,7 +9866,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
>                         list_del_leaf_cfs_rq(cfs_rq);
>
>                 /* Don't need periodic decay once load/util_avg are null */
> -               if (cfs_rq_has_blocked(cfs_rq))
> +               if (cfs_rq_has_blocked_load(cfs_rq))
>                         *done = false;
>         }
>
> @@ -9926,7 +9926,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
>         bool decayed;
>
>         decayed = update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);
> -       if (cfs_rq_has_blocked(cfs_rq))
> +       if (cfs_rq_has_blocked_load(cfs_rq))
>                 *done = false;
>
>         return decayed;
> @@ -12443,7 +12443,7 @@ static void nohz_balancer_kick(struct rq *rq)
>         if (likely(!atomic_read(&nohz.nr_cpus)))
>                 return;
>
> -       if (READ_ONCE(nohz.has_blocked) &&
> +       if (READ_ONCE(nohz.has_blocked_load) &&
>             time_after(now, READ_ONCE(nohz.next_blocked)))
>                 flags = NOHZ_STATS_KICK;
>
> @@ -12604,9 +12604,9 @@ void nohz_balance_enter_idle(int cpu)
>
>         /*
>          * The tick is still stopped but load could have been added in the
> -        * meantime. We set the nohz.has_blocked flag to trig a check of the
> +        * meantime. We set the nohz.has_blocked_load flag to trig a check of the
>          * *_avg. The CPU is already part of nohz.idle_cpus_mask so the clear
> -        * of nohz.has_blocked can only happen after checking the new load
> +        * of nohz.has_blocked_load can only happen after checking the new load
>          */
>         if (rq->nohz_tick_stopped)
>                 goto out;
> @@ -12622,7 +12622,7 @@ void nohz_balance_enter_idle(int cpu)
>
>         /*
>          * Ensures that if nohz_idle_balance() fails to observe our
> -        * @idle_cpus_mask store, it must observe the @has_blocked
> +        * @idle_cpus_mask store, it must observe the @has_blocked_load
>          * and @needs_update stores.
>          */
>         smp_mb__after_atomic();
> @@ -12635,7 +12635,7 @@ void nohz_balance_enter_idle(int cpu)
>          * Each time a cpu enter idle, we assume that it has blocked load and
>          * enable the periodic update of the load of idle CPUs
>          */
> -       WRITE_ONCE(nohz.has_blocked, 1);
> +       WRITE_ONCE(nohz.has_blocked_load, 1);
>  }
>
>  static bool update_nohz_stats(struct rq *rq)
> @@ -12676,8 +12676,8 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
>
>         /*
>          * We assume there will be no idle load after this update and clear
> -        * the has_blocked flag. If a cpu enters idle in the mean time, it will
> -        * set the has_blocked flag and trigger another update of idle load.
> +        * the has_blocked_load flag. If a cpu enters idle in the mean time, it will
> +        * set the has_blocked_load flag and trigger another update of idle load.
>          * Because a cpu that becomes idle, is added to idle_cpus_mask before
>          * setting the flag, we are sure to not clear the state and not
>          * check the load of an idle cpu.
> @@ -12685,12 +12685,12 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
>          * Same applies to idle_cpus_mask vs needs_update.
>          */
>         if (flags & NOHZ_STATS_KICK)
> -               WRITE_ONCE(nohz.has_blocked, 0);
> +               WRITE_ONCE(nohz.has_blocked_load, 0);
>         if (flags & NOHZ_NEXT_KICK)
>                 WRITE_ONCE(nohz.needs_update, 0);
>
>         /*
> -        * Ensures that if we miss the CPU, we must see the has_blocked
> +        * Ensures that if we miss the CPU, we must see the has_blocked_load
>          * store from nohz_balance_enter_idle().
>          */
>         smp_mb();
> @@ -12757,7 +12757,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
>  abort:
>         /* There is still blocked load, enable periodic update */
>         if (has_blocked_load)
> -               WRITE_ONCE(nohz.has_blocked, 1);
> +               WRITE_ONCE(nohz.has_blocked_load, 1);
>  }
>
>  /*
> @@ -12819,7 +12819,7 @@ static void nohz_newidle_balance(struct rq *this_rq)
>                 return;
>
>         /* Don't need to update blocked load of idle CPUs*/
> -       if (!READ_ONCE(nohz.has_blocked) ||
> +       if (!READ_ONCE(nohz.has_blocked_load) ||
>             time_before(jiffies, READ_ONCE(nohz.next_blocked)))
>                 return;
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/1 -v3] sched/fair: Sort out 'blocked_load*' namespace noise
  2025-12-02 10:13     ` Vincent Guittot
@ 2025-12-02 16:09       ` Ingo Molnar
  2025-12-02 16:55         ` Vincent Guittot
  2025-12-02 20:26         ` Shrikanth Hegde
  0 siblings, 2 replies; 12+ messages in thread
From: Ingo Molnar @ 2025-12-02 16:09 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, Peter Zijlstra, Frederic Weisbecker,
	Shrikanth Hegde, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Linus Torvalds, Mel Gorman, Steven Rostedt, Thomas Gleixner


* Vincent Guittot <vincent.guittot@linaro.org> wrote:

> On Tue, 2 Dec 2025 at 10:35, Ingo Molnar <mingo@kernel.org> wrote:
> >
> >
> > * Vincent Guittot <vincent.guittot@linaro.org> wrote:
> >
> > > On Tue, 2 Dec 2025 at 09:13, Ingo Molnar <mingo@kernel.org> wrote:
> > > >
> > > > There are three separate, independent pieces of logic in the
> > > > scheduler that are named 'has_blocked':
> > > >
> > > >    1) nohz.has_blocked,
> > > >    2) rq->has_blocked_load - both of these relate to NOHZ balancing,
> > > >
> > > >    3) and cfs_rq_has_blocked(), which operates on SMP load-balancing
> > > >       averages.
> > > >
> > > > While reviewing this code I noticed a couple of inconsistencies:
> > > >
> > > >   - nohz.has_blocked sometimes gets handled via a local variable
> > > >     that is named 'has_blocked_load' - but it's the runqueue
> > > >     that has the has_blocked_load field, not the nohz structure ...
> > > >
> > > >   - The cfs_rq_has_blocked() function does SMP load-balancing and
> > > >     has no relation to NOHZ has_blocked logic.
> > > >
> > > >   - The update_blocked_load_status() function, which sets the
> > > >     rq->has_blocked_load field, has a parameter named 'has_blocked',
> > > >     but that's the field name of the nohz structure.
> > > >
> > > > To sort all of this out, standardize on 3 distinct patterns:
> > > >
> > > >   (1) nohz.has_blocked related functions and variables use the
> > > >       'has_blocked' nomenclature,
> > > >
> > > >   (2) rq->has_blocked_load related functions and variables use
> > > >       'has_blocked_load',
> > > >
> > > >   (3) and cfs_rq_has_blocked() uses 'has_blocked_load_avg'.
> > >
> > > They all implement the same feature: updating the blocked PELT
> > > signal of idle rqs.
> >
> > Yeah, I should have said 3 separate layers of logic that
> > each deal with the same thing. When writing the
> > changelog I missed how update_blocked_load_status()
> > feeds into rq->has_blocked_load via the !done PELT signal
> > we get back from the load-balancers, and didn't look
> > further. :-/
> >
> > > If we want some renaming, we should use the same naming for all to
> > > show that it's all about the same thing:
> > >
> > > nohz.has_blocked_load
> > > cfs_rq_has_blocked_load()
> > > rq->has_blocked_load
> >
> > I'd still argue that the greppability of the 3 layers
> > might have some code-readability value:
> >
> >   git grep 'has_blocked\>'                kernel/sched/
> >   git grep 'has_blocked_load\>'           kernel/sched/
> >   git grep 'has_blocked_load_avg\>'       kernel/sched/
> >
> > ... and I've fixed up the changelogs to say:
> >
> >         There are three separate layers of logic in the scheduler that
> >         deal with 'has_blocked' handling of the NOHZ code:
> >
> >           (1) nohz.has_blocked,
> >           (2) rq->has_blocked_load - both deal with NOHZ idle balancing,
> >           (3) and cfs_rq_has_blocked(), which is part of the layer
> >               that passes the SMP load-balancing signal to the
> >               NOHZ layers.
> >
> >         To make it easier to separate them, split these 3 mixed
> >         uses of the 'has_blocked' name pattern into 3 distinct and
> >         greppable patterns:
> >
> >           (1) nohz.has_blocked related functions and variables use
> >               'has_blocked',
> >
> >           (2) rq->has_blocked_load related functions and variables use
> >               'has_blocked_load',
> >
> >           (3) and cfs_rq_has_blocked() uses 'has_blocked_load_avg'.
> >
> > ... but if you still object to that notion, we can also
> > do your suggestion - see the patch below. Both variants
> > are fine to me, no strong preferences, as long as the
> > names remove the existing random noise. :-)
> >
> > In fact, on second thought, I slightly prefer your
> > suggestion, as 'has_blocked_load' names an actual noun.
> 
> > I would prefer using 'has_blocked_load', as it makes it easier to see
> > that it's the same info saved in different places.
> 
> Thanks

Okay, agreed - the reworked -v3 version is attached. 
I've optimistically added your Reviewed-by tag as well. :-)

Thanks,

	Ingo

- Change from -v2: also rename to cfs_rq_has_blocked_load().

===================>
From 395fc683e48f6fe5f36082691681d0d64d1a48ff Mon Sep 17 00:00:00 2001
From: Ingo Molnar <mingo@kernel.org>
Date: Tue, 2 Dec 2025 10:35:06 +0100
Subject: [PATCH] sched/fair: Sort out 'blocked_load*' namespace noise

There are three layers of logic in the scheduler that
deal with 'has_blocked' (load) handling of the NOHZ code:

  (1) nohz.has_blocked,
  (2) rq->has_blocked_load - both deal with NOHZ idle balancing,
  (3) and cfs_rq_has_blocked(), which is part of the layer
      that passes the SMP load-balancing signal to the
      NOHZ layers.

The 'has_blocked' and 'has_blocked_load' names are used
in a mixed fashion, sometimes within the same function.

Standardize on 'has_blocked_load' to make it all easy
to read and easy to grep.

No change in functionality.

Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://patch.msgid.link/aS6yvxyc3JfMxxQW@gmail.com
---
 kernel/sched/fair.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b6043ec4885b..76f5e4b78b30 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7140,7 +7140,7 @@ static DEFINE_PER_CPU(cpumask_var_t, should_we_balance_tmpmask);
 static struct {
 	cpumask_var_t idle_cpus_mask;
 	atomic_t nr_cpus;
-	int has_blocked;		/* Idle CPUS has blocked load */
+	int has_blocked_load;		/* Idle CPUS has blocked load */
 	int needs_update;		/* Newly idle CPUs need their next_balance collated */
 	unsigned long next_balance;     /* in jiffy units */
 	unsigned long next_blocked;	/* Next update of blocked load in jiffies */
@@ -9776,7 +9776,7 @@ static void attach_tasks(struct lb_env *env)
 }
 
 #ifdef CONFIG_NO_HZ_COMMON
-static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq)
+static inline bool cfs_rq_has_blocked_load(struct cfs_rq *cfs_rq)
 {
 	if (cfs_rq->avg.load_avg)
 		return true;
@@ -9809,16 +9809,16 @@ static inline void update_blocked_load_tick(struct rq *rq)
 	WRITE_ONCE(rq->last_blocked_load_update_tick, jiffies);
 }
 
-static inline void update_blocked_load_status(struct rq *rq, bool has_blocked)
+static inline void update_has_blocked_load_status(struct rq *rq, bool has_blocked_load)
 {
-	if (!has_blocked)
+	if (!has_blocked_load)
 		rq->has_blocked_load = 0;
 }
 #else /* !CONFIG_NO_HZ_COMMON: */
-static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq) { return false; }
+static inline bool cfs_rq_has_blocked_load(struct cfs_rq *cfs_rq) { return false; }
 static inline bool others_have_blocked(struct rq *rq) { return false; }
 static inline void update_blocked_load_tick(struct rq *rq) {}
-static inline void update_blocked_load_status(struct rq *rq, bool has_blocked) {}
+static inline void update_has_blocked_load_status(struct rq *rq, bool has_blocked_load) {}
 #endif /* !CONFIG_NO_HZ_COMMON */
 
 static bool __update_blocked_others(struct rq *rq, bool *done)
@@ -9875,7 +9875,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 			list_del_leaf_cfs_rq(cfs_rq);
 
 		/* Don't need periodic decay once load/util_avg are null */
-		if (cfs_rq_has_blocked(cfs_rq))
+		if (cfs_rq_has_blocked_load(cfs_rq))
 			*done = false;
 	}
 
@@ -9935,7 +9935,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 	bool decayed;
 
 	decayed = update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);
-	if (cfs_rq_has_blocked(cfs_rq))
+	if (cfs_rq_has_blocked_load(cfs_rq))
 		*done = false;
 
 	return decayed;
@@ -9956,7 +9956,7 @@ static void __sched_balance_update_blocked_averages(struct rq *rq)
 	decayed |= __update_blocked_others(rq, &done);
 	decayed |= __update_blocked_fair(rq, &done);
 
-	update_blocked_load_status(rq, !done);
+	update_has_blocked_load_status(rq, !done);
 	if (decayed)
 		cpufreq_update_util(rq, 0);
 }
@@ -12452,7 +12452,7 @@ static void nohz_balancer_kick(struct rq *rq)
 	if (likely(!atomic_read(&nohz.nr_cpus)))
 		return;
 
-	if (READ_ONCE(nohz.has_blocked) &&
+	if (READ_ONCE(nohz.has_blocked_load) &&
 	    time_after(now, READ_ONCE(nohz.next_blocked)))
 		flags = NOHZ_STATS_KICK;
 
@@ -12613,9 +12613,9 @@ void nohz_balance_enter_idle(int cpu)
 
 	/*
 	 * The tick is still stopped but load could have been added in the
-	 * meantime. We set the nohz.has_blocked flag to trig a check of the
+	 * meantime. We set the nohz.has_blocked_load flag to trig a check of the
 	 * *_avg. The CPU is already part of nohz.idle_cpus_mask so the clear
-	 * of nohz.has_blocked can only happen after checking the new load
+	 * of nohz.has_blocked_load can only happen after checking the new load
 	 */
 	if (rq->nohz_tick_stopped)
 		goto out;
@@ -12631,7 +12631,7 @@ void nohz_balance_enter_idle(int cpu)
 
 	/*
 	 * Ensures that if nohz_idle_balance() fails to observe our
-	 * @idle_cpus_mask store, it must observe the @has_blocked
+	 * @idle_cpus_mask store, it must observe the @has_blocked_load
 	 * and @needs_update stores.
 	 */
 	smp_mb__after_atomic();
@@ -12644,7 +12644,7 @@ void nohz_balance_enter_idle(int cpu)
 	 * Each time a cpu enter idle, we assume that it has blocked load and
 	 * enable the periodic update of the load of idle CPUs
 	 */
-	WRITE_ONCE(nohz.has_blocked, 1);
+	WRITE_ONCE(nohz.has_blocked_load, 1);
 }
 
 static bool update_nohz_stats(struct rq *rq)
@@ -12685,8 +12685,8 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 
 	/*
 	 * We assume there will be no idle load after this update and clear
-	 * the has_blocked flag. If a cpu enters idle in the mean time, it will
-	 * set the has_blocked flag and trigger another update of idle load.
+	 * the has_blocked_load flag. If a cpu enters idle in the mean time, it will
+	 * set the has_blocked_load flag and trigger another update of idle load.
 	 * Because a cpu that becomes idle, is added to idle_cpus_mask before
 	 * setting the flag, we are sure to not clear the state and not
 	 * check the load of an idle cpu.
@@ -12694,12 +12694,12 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 	 * Same applies to idle_cpus_mask vs needs_update.
 	 */
 	if (flags & NOHZ_STATS_KICK)
-		WRITE_ONCE(nohz.has_blocked, 0);
+		WRITE_ONCE(nohz.has_blocked_load, 0);
 	if (flags & NOHZ_NEXT_KICK)
 		WRITE_ONCE(nohz.needs_update, 0);
 
 	/*
-	 * Ensures that if we miss the CPU, we must see the has_blocked
+	 * Ensures that if we miss the CPU, we must see the has_blocked_load
 	 * store from nohz_balance_enter_idle().
 	 */
 	smp_mb();
@@ -12766,7 +12766,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 abort:
 	/* There is still blocked load, enable periodic update */
 	if (has_blocked_load)
-		WRITE_ONCE(nohz.has_blocked, 1);
+		WRITE_ONCE(nohz.has_blocked_load, 1);
 }
 
 /*
@@ -12828,7 +12828,7 @@ static void nohz_newidle_balance(struct rq *this_rq)
 		return;
 
 	/* Don't need to update blocked load of idle CPUs*/
-	if (!READ_ONCE(nohz.has_blocked) ||
+	if (!READ_ONCE(nohz.has_blocked_load) ||
 	    time_before(jiffies, READ_ONCE(nohz.next_blocked)))
 		return;
 

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/1 -v3] sched/fair: Sort out 'blocked_load*' namespace noise
  2025-12-02 16:09       ` [PATCH 1/1 -v3] " Ingo Molnar
@ 2025-12-02 16:55         ` Vincent Guittot
  2025-12-02 20:26         ` Shrikanth Hegde
  1 sibling, 0 replies; 12+ messages in thread
From: Vincent Guittot @ 2025-12-02 16:55 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Frederic Weisbecker,
	Shrikanth Hegde, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Linus Torvalds, Mel Gorman, Steven Rostedt, Thomas Gleixner

On Tue, 2 Dec 2025 at 17:10, Ingo Molnar <mingo@kernel.org> wrote:
>
>
> * Vincent Guittot <vincent.guittot@linaro.org> wrote:
>
> > On Tue, 2 Dec 2025 at 10:35, Ingo Molnar <mingo@kernel.org> wrote:
> > >
> > >
> > > * Vincent Guittot <vincent.guittot@linaro.org> wrote:
> > >
> > > > On Tue, 2 Dec 2025 at 09:13, Ingo Molnar <mingo@kernel.org> wrote:
> > > > >
> > > > > There are three separate, independent pieces of logic in the
> > > > > scheduler that are named 'has_blocked':
> > > > >
> > > > >    1) nohz.has_blocked,
> > > > >    2) rq->has_blocked_load - both of these relate to NOHZ balancing,
> > > > >
> > > > >    3) and cfs_rq_has_blocked(), which operates on SMP load-balancing
> > > > >       averages.
> > > > >
> > > > > While reviewing this code I noticed a couple of inconsistencies:
> > > > >
> > > > >   - nohz.has_blocked sometimes gets handled via a local variable
> > > > >     that is named 'has_blocked_load' - but it's the runqueue
> > > > >     that has the has_blocked_load field, not the nohz structure ...
> > > > >
> > > > >   - The cfs_rq_has_blocked() function does SMP load-balancing and
> > > > >     has no relation to NOHZ has_blocked logic.
> > > > >
> > > > >   - The update_blocked_load_status() function, which sets the
> > > > >     rq->has_blocked_load field, has a parameter named 'has_blocked',
> > > > >     but that's the field name of the nohz structure.
> > > > >
> > > > > To sort all of this out, standardize on 3 distinct patterns:
> > > > >
> > > > >   (1) nohz.has_blocked related functions and variables use the
> > > > >       'has_blocked' nomenclature,
> > > > >
> > > > >   (2) rq->has_blocked_load related functions and variables use
> > > > >       'has_blocked_load',
> > > > >
> > > > >   (3) and cfs_rq_has_blocked() uses 'has_blocked_load_avg'.
> > > >
> > > > They all implement the same feature: updating the blocked PELT
> > > > signal of idle rqs.
> > >
> > > Yeah, I should have said 3 separate layers of logic that
> > > each deal with the same thing. When writing the
> > > changelog I missed how update_blocked_load_status()
> > > feeds into rq->has_blocked_load via the !done PELT signal
> > > we get back from the load-balancers, and didn't look
> > > further. :-/
> > >
> > > > If we want some renaming, we should use the same naming for all to
> > > > show that it's all about the same thing:
> > > >
> > > > nohz.has_blocked_load
> > > > cfs_rq_has_blocked_load()
> > > > rq->has_blocked_load
> > >
> > > I'd still argue that the greppability of the 3 layers
> > > might have some code-readability value:
> > >
> > >   git grep 'has_blocked\>'                kernel/sched/
> > >   git grep 'has_blocked_load\>'           kernel/sched/
> > >   git grep 'has_blocked_load_avg\>'       kernel/sched/
> > >
> > > ... and I've fixed up the changelogs to say:
> > >
> > >         There are three separate layers of logic in the scheduler that
> > >         deal with 'has_blocked' handling of the NOHZ code:
> > >
> > >           (1) nohz.has_blocked,
> > >           (2) rq->has_blocked_load - both deal with NOHZ idle balancing,
> > >           (3) and cfs_rq_has_blocked(), which is part of the layer
> > >               that passes the SMP load-balancing signal to the
> > >               NOHZ layers.
> > >
> > >         To make it easier to separate them, split these 3 mixed
> > >         uses of the 'has_blocked' name pattern into 3 distinct and
> > >         greppable patterns:
> > >
> > >           (1) nohz.has_blocked related functions and variables use
> > >               'has_blocked',
> > >
> > >           (2) rq->has_blocked_load related functions and variables use
> > >               'has_blocked_load',
> > >
> > >           (3) and cfs_rq_has_blocked() uses 'has_blocked_load_avg'.
> > >
> > > ... but if you still object to that notion, we can also
> > > do your suggestion - see the patch below. Both variants
> > > are fine to me, no strong preferences, as long as the
> > > names remove the existing random noise. :-)
> > >
> > > In fact, on second thought, I slightly prefer your
> > > suggestion, as 'has_blocked_load' names an actual noun.
> >
> > I would prefer using 'has_blocked_load', as it makes it easier to see
> > that it's the same info saved in different places.
> >
> > Thanks
>
> Okay, agreed - the reworked -v3 version is attached.
> I've optimistically added your Reviewed-by tag as well. :-)

Yes, looks good to me

>
> Thanks,
>
>         Ingo
>
> - Change from -v2: also rename to cfs_rq_has_blocked_load().
>
> ===================>
> From 395fc683e48f6fe5f36082691681d0d64d1a48ff Mon Sep 17 00:00:00 2001
> From: Ingo Molnar <mingo@kernel.org>
> Date: Tue, 2 Dec 2025 10:35:06 +0100
> Subject: [PATCH] sched/fair: Sort out 'blocked_load*' namespace noise
>
> There are three layers of logic in the scheduler that
> deal with 'has_blocked' (load) handling of the NOHZ code:
>
>   (1) nohz.has_blocked,
>   (2) rq->has_blocked_load - both deal with NOHZ idle balancing,
>   (3) and cfs_rq_has_blocked(), which is part of the layer
>       that passes the SMP load-balancing signal to the
>       NOHZ layers.
>
> The 'has_blocked' and 'has_blocked_load' names are used
> in a mixed fashion, sometimes within the same function.
>
> Standardize on 'has_blocked_load' to make it all easy
> to read and easy to grep.
>
> No change in functionality.
>
> Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Frederic Weisbecker <frederic@kernel.org>
> Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
> Link: https://patch.msgid.link/aS6yvxyc3JfMxxQW@gmail.com
> ---
>  kernel/sched/fair.c | 40 ++++++++++++++++++++--------------------
>  1 file changed, 20 insertions(+), 20 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index b6043ec4885b..76f5e4b78b30 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7140,7 +7140,7 @@ static DEFINE_PER_CPU(cpumask_var_t, should_we_balance_tmpmask);
>  static struct {
>         cpumask_var_t idle_cpus_mask;
>         atomic_t nr_cpus;
> -       int has_blocked;                /* Idle CPUS has blocked load */
> +       int has_blocked_load;           /* Idle CPUS has blocked load */
>         int needs_update;               /* Newly idle CPUs need their next_balance collated */
>         unsigned long next_balance;     /* in jiffy units */
>         unsigned long next_blocked;     /* Next update of blocked load in jiffies */
> @@ -9776,7 +9776,7 @@ static void attach_tasks(struct lb_env *env)
>  }
>
>  #ifdef CONFIG_NO_HZ_COMMON
> -static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq)
> +static inline bool cfs_rq_has_blocked_load(struct cfs_rq *cfs_rq)
>  {
>         if (cfs_rq->avg.load_avg)
>                 return true;
> @@ -9809,16 +9809,16 @@ static inline void update_blocked_load_tick(struct rq *rq)
>         WRITE_ONCE(rq->last_blocked_load_update_tick, jiffies);
>  }
>
> -static inline void update_blocked_load_status(struct rq *rq, bool has_blocked)
> +static inline void update_has_blocked_load_status(struct rq *rq, bool has_blocked_load)
>  {
> -       if (!has_blocked)
> +       if (!has_blocked_load)
>                 rq->has_blocked_load = 0;
>  }
>  #else /* !CONFIG_NO_HZ_COMMON: */
> -static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq) { return false; }
> +static inline bool cfs_rq_has_blocked_load(struct cfs_rq *cfs_rq) { return false; }
>  static inline bool others_have_blocked(struct rq *rq) { return false; }
>  static inline void update_blocked_load_tick(struct rq *rq) {}
> -static inline void update_blocked_load_status(struct rq *rq, bool has_blocked) {}
> +static inline void update_has_blocked_load_status(struct rq *rq, bool has_blocked_load) {}
>  #endif /* !CONFIG_NO_HZ_COMMON */
>
>  static bool __update_blocked_others(struct rq *rq, bool *done)
> @@ -9875,7 +9875,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
>                         list_del_leaf_cfs_rq(cfs_rq);
>
>                 /* Don't need periodic decay once load/util_avg are null */
> -               if (cfs_rq_has_blocked(cfs_rq))
> +               if (cfs_rq_has_blocked_load(cfs_rq))
>                         *done = false;
>         }
>
> @@ -9935,7 +9935,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
>         bool decayed;
>
>         decayed = update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);
> -       if (cfs_rq_has_blocked(cfs_rq))
> +       if (cfs_rq_has_blocked_load(cfs_rq))
>                 *done = false;
>
>         return decayed;
> @@ -9956,7 +9956,7 @@ static void __sched_balance_update_blocked_averages(struct rq *rq)
>         decayed |= __update_blocked_others(rq, &done);
>         decayed |= __update_blocked_fair(rq, &done);
>
> -       update_blocked_load_status(rq, !done);
> +       update_has_blocked_load_status(rq, !done);
>         if (decayed)
>                 cpufreq_update_util(rq, 0);
>  }
> @@ -12452,7 +12452,7 @@ static void nohz_balancer_kick(struct rq *rq)
>         if (likely(!atomic_read(&nohz.nr_cpus)))
>                 return;
>
> -       if (READ_ONCE(nohz.has_blocked) &&
> +       if (READ_ONCE(nohz.has_blocked_load) &&
>             time_after(now, READ_ONCE(nohz.next_blocked)))
>                 flags = NOHZ_STATS_KICK;
>
> @@ -12613,9 +12613,9 @@ void nohz_balance_enter_idle(int cpu)
>
>         /*
>          * The tick is still stopped but load could have been added in the
> -        * meantime. We set the nohz.has_blocked flag to trig a check of the
> +        * meantime. We set the nohz.has_blocked_load flag to trig a check of the
>          * *_avg. The CPU is already part of nohz.idle_cpus_mask so the clear
> -        * of nohz.has_blocked can only happen after checking the new load
> +        * of nohz.has_blocked_load can only happen after checking the new load
>          */
>         if (rq->nohz_tick_stopped)
>                 goto out;
> @@ -12631,7 +12631,7 @@ void nohz_balance_enter_idle(int cpu)
>
>         /*
>          * Ensures that if nohz_idle_balance() fails to observe our
> -        * @idle_cpus_mask store, it must observe the @has_blocked
> +        * @idle_cpus_mask store, it must observe the @has_blocked_load
>          * and @needs_update stores.
>          */
>         smp_mb__after_atomic();
> @@ -12644,7 +12644,7 @@ void nohz_balance_enter_idle(int cpu)
>          * Each time a cpu enter idle, we assume that it has blocked load and
>          * enable the periodic update of the load of idle CPUs
>          */
> -       WRITE_ONCE(nohz.has_blocked, 1);
> +       WRITE_ONCE(nohz.has_blocked_load, 1);
>  }
>
>  static bool update_nohz_stats(struct rq *rq)
> @@ -12685,8 +12685,8 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
>
>         /*
>          * We assume there will be no idle load after this update and clear
> -        * the has_blocked flag. If a cpu enters idle in the mean time, it will
> -        * set the has_blocked flag and trigger another update of idle load.
> +        * the has_blocked_load flag. If a cpu enters idle in the mean time, it will
> +        * set the has_blocked_load flag and trigger another update of idle load.
>          * Because a cpu that becomes idle, is added to idle_cpus_mask before
>          * setting the flag, we are sure to not clear the state and not
>          * check the load of an idle cpu.
> @@ -12694,12 +12694,12 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
>          * Same applies to idle_cpus_mask vs needs_update.
>          */
>         if (flags & NOHZ_STATS_KICK)
> -               WRITE_ONCE(nohz.has_blocked, 0);
> +               WRITE_ONCE(nohz.has_blocked_load, 0);
>         if (flags & NOHZ_NEXT_KICK)
>                 WRITE_ONCE(nohz.needs_update, 0);
>
>         /*
> -        * Ensures that if we miss the CPU, we must see the has_blocked
> +        * Ensures that if we miss the CPU, we must see the has_blocked_load
>          * store from nohz_balance_enter_idle().
>          */
>         smp_mb();
> @@ -12766,7 +12766,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
>  abort:
>         /* There is still blocked load, enable periodic update */
>         if (has_blocked_load)
> -               WRITE_ONCE(nohz.has_blocked, 1);
> +               WRITE_ONCE(nohz.has_blocked_load, 1);
>  }
>
>  /*
> @@ -12828,7 +12828,7 @@ static void nohz_newidle_balance(struct rq *this_rq)
>                 return;
>
>         /* Don't need to update blocked load of idle CPUs*/
> -       if (!READ_ONCE(nohz.has_blocked) ||
> +       if (!READ_ONCE(nohz.has_blocked_load) ||
>             time_before(jiffies, READ_ONCE(nohz.next_blocked)))
>                 return;
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/1 -v3] sched/fair: Sort out 'blocked_load*' namespace noise
  2025-12-02 16:09       ` [PATCH 1/1 -v3] " Ingo Molnar
  2025-12-02 16:55         ` Vincent Guittot
@ 2025-12-02 20:26         ` Shrikanth Hegde
  1 sibling, 0 replies; 12+ messages in thread
From: Shrikanth Hegde @ 2025-12-02 20:26 UTC (permalink / raw)
  To: Ingo Molnar, Vincent Guittot
  Cc: linux-kernel, Peter Zijlstra, Frederic Weisbecker, Juri Lelli,
	Dietmar Eggemann, Valentin Schneider, Linus Torvalds, Mel Gorman,
	Steven Rostedt, Thomas Gleixner


>  From 395fc683e48f6fe5f36082691681d0d64d1a48ff Mon Sep 17 00:00:00 2001
> From: Ingo Molnar <mingo@kernel.org>
> Date: Tue, 2 Dec 2025 10:35:06 +0100
> Subject: [PATCH] sched/fair: Sort out 'blocked_load*' namespace noise
> 
> There's three layers of logic in the scheduler that
> deal with 'has_blocked' (load) handling of the NOHZ code:
> 
>    (1) nohz.has_blocked,
>    (2) rq->has_blocked_load, deal with NOHZ idle balancing,
>    (3) and cfs_rq_has_blocked(), which is part of the layer
>        that is passing the SMP load-balancing signal to the
>        NOHZ layers.
> 
> The 'has_blocked' and 'has_blocked_load' names are used
> in a mixed fashion, sometimes within the same function.
> 
> Standardize on 'has_blocked_load' to make it all easy
> to read and easy to grep.
> 
> No change in functionality.
> 

Nothing to do with this patch, but a question for my understanding.

rq->has_blocked_load is cleared when there is no cfs, rt, dl, irq or hw_pressure
load left to decay. That means this CPU has been idle for some time, right?

Then, just before disabling the tick, it is set to 1 again in nohz_balance_enter_idle().
At that point it still won't have anything on its rq, right? It would set nohz.has_blocked_load,
and that could trigger a NOHZ_STATS_KICK. That doesn't do any balancing per se; it would
just end up clearing the same rq->has_blocked_load back to 0 again.

When the CPU wakes up after some time, I don't see rq->has_blocked_load being updated. So it would
continue to be 0 until the CPU hits nohz_balance_enter_idle() again?

> Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Frederic Weisbecker <frederic@kernel.org>
> Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
> Link: https://patch.msgid.link/aS6yvxyc3JfMxxQW@gmail.com
> ---
>   kernel/sched/fair.c | 40 ++++++++++++++++++++--------------------
>   1 file changed, 20 insertions(+), 20 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index b6043ec4885b..76f5e4b78b30 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7140,7 +7140,7 @@ static DEFINE_PER_CPU(cpumask_var_t, should_we_balance_tmpmask);
>   static struct {
>   	cpumask_var_t idle_cpus_mask;
>   	atomic_t nr_cpus;
> -	int has_blocked;		/* Idle CPUS has blocked load */
> +	int has_blocked_load;		/* Idle CPUS has blocked load */
>   	int needs_update;		/* Newly idle CPUs need their next_balance collated */
>   	unsigned long next_balance;     /* in jiffy units */
>   	unsigned long next_blocked;	/* Next update of blocked load in jiffies */
> @@ -9776,7 +9776,7 @@ static void attach_tasks(struct lb_env *env)
>   }
>   
>   #ifdef CONFIG_NO_HZ_COMMON
> -static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq)
> +static inline bool cfs_rq_has_blocked_load(struct cfs_rq *cfs_rq)
>   {
>   	if (cfs_rq->avg.load_avg)
>   		return true;
> @@ -9809,16 +9809,16 @@ static inline void update_blocked_load_tick(struct rq *rq)
>   	WRITE_ONCE(rq->last_blocked_load_update_tick, jiffies);
>   }
>   
> -static inline void update_blocked_load_status(struct rq *rq, bool has_blocked)
> +static inline void update_has_blocked_load_status(struct rq *rq, bool has_blocked_load)
>   {
> -	if (!has_blocked)
> +	if (!has_blocked_load)
>   		rq->has_blocked_load = 0;
>   }
>   #else /* !CONFIG_NO_HZ_COMMON: */
> -static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq) { return false; }
> +static inline bool cfs_rq_has_blocked_load(struct cfs_rq *cfs_rq) { return false; }
>   static inline bool others_have_blocked(struct rq *rq) { return false; }
>   static inline void update_blocked_load_tick(struct rq *rq) {}
> -static inline void update_blocked_load_status(struct rq *rq, bool has_blocked) {}
> +static inline void update_has_blocked_load_status(struct rq *rq, bool has_blocked_load) {}
>   #endif /* !CONFIG_NO_HZ_COMMON */
>   
>   static bool __update_blocked_others(struct rq *rq, bool *done)
> @@ -9875,7 +9875,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
>   			list_del_leaf_cfs_rq(cfs_rq);
>   
>   		/* Don't need periodic decay once load/util_avg are null */
> -		if (cfs_rq_has_blocked(cfs_rq))
> +		if (cfs_rq_has_blocked_load(cfs_rq))
>   			*done = false;
>   	}
>   
> @@ -9935,7 +9935,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
>   	bool decayed;
>   
>   	decayed = update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);
> -	if (cfs_rq_has_blocked(cfs_rq))
> +	if (cfs_rq_has_blocked_load(cfs_rq))
>   		*done = false;
>   
>   	return decayed;
> @@ -9956,7 +9956,7 @@ static void __sched_balance_update_blocked_averages(struct rq *rq)
>   	decayed |= __update_blocked_others(rq, &done);
>   	decayed |= __update_blocked_fair(rq, &done);
>   
> -	update_blocked_load_status(rq, !done);
> +	update_has_blocked_load_status(rq, !done);
>   	if (decayed)
>   		cpufreq_update_util(rq, 0);
>   }
> @@ -12452,7 +12452,7 @@ static void nohz_balancer_kick(struct rq *rq)
>   	if (likely(!atomic_read(&nohz.nr_cpus)))
>   		return;
>   
> -	if (READ_ONCE(nohz.has_blocked) &&
> +	if (READ_ONCE(nohz.has_blocked_load) &&
>   	    time_after(now, READ_ONCE(nohz.next_blocked)))
>   		flags = NOHZ_STATS_KICK;
>   
> @@ -12613,9 +12613,9 @@ void nohz_balance_enter_idle(int cpu)
>   
>   	/*
>   	 * The tick is still stopped but load could have been added in the
> -	 * meantime. We set the nohz.has_blocked flag to trig a check of the
> +	 * meantime. We set the nohz.has_blocked_load flag to trig a check of the
>   	 * *_avg. The CPU is already part of nohz.idle_cpus_mask so the clear
> -	 * of nohz.has_blocked can only happen after checking the new load
> +	 * of nohz.has_blocked_load can only happen after checking the new load
>   	 */
>   	if (rq->nohz_tick_stopped)
>   		goto out;
> @@ -12631,7 +12631,7 @@ void nohz_balance_enter_idle(int cpu)
>   
>   	/*
>   	 * Ensures that if nohz_idle_balance() fails to observe our
> -	 * @idle_cpus_mask store, it must observe the @has_blocked
> +	 * @idle_cpus_mask store, it must observe the @has_blocked_load
>   	 * and @needs_update stores.
>   	 */
>   	smp_mb__after_atomic();
> @@ -12644,7 +12644,7 @@ void nohz_balance_enter_idle(int cpu)
>   	 * Each time a cpu enter idle, we assume that it has blocked load and
>   	 * enable the periodic update of the load of idle CPUs
>   	 */
> -	WRITE_ONCE(nohz.has_blocked, 1);
> +	WRITE_ONCE(nohz.has_blocked_load, 1);
>   }
>   
>   static bool update_nohz_stats(struct rq *rq)
> @@ -12685,8 +12685,8 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
>   
>   	/*
>   	 * We assume there will be no idle load after this update and clear
> -	 * the has_blocked flag. If a cpu enters idle in the mean time, it will
> -	 * set the has_blocked flag and trigger another update of idle load.
> +	 * the has_blocked_load flag. If a cpu enters idle in the mean time, it will
> +	 * set the has_blocked_load flag and trigger another update of idle load.
>   	 * Because a cpu that becomes idle, is added to idle_cpus_mask before
>   	 * setting the flag, we are sure to not clear the state and not
>   	 * check the load of an idle cpu.
> @@ -12694,12 +12694,12 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
>   	 * Same applies to idle_cpus_mask vs needs_update.
>   	 */
>   	if (flags & NOHZ_STATS_KICK)
> -		WRITE_ONCE(nohz.has_blocked, 0);
> +		WRITE_ONCE(nohz.has_blocked_load, 0);
>   	if (flags & NOHZ_NEXT_KICK)
>   		WRITE_ONCE(nohz.needs_update, 0);
>   
>   	/*
> -	 * Ensures that if we miss the CPU, we must see the has_blocked
> +	 * Ensures that if we miss the CPU, we must see the has_blocked_load
>   	 * store from nohz_balance_enter_idle().
>   	 */
>   	smp_mb();
> @@ -12766,7 +12766,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
>   abort:
>   	/* There is still blocked load, enable periodic update */
>   	if (has_blocked_load)
> -		WRITE_ONCE(nohz.has_blocked, 1);
> +		WRITE_ONCE(nohz.has_blocked_load, 1);
>   }
>   
>   /*
> @@ -12828,7 +12828,7 @@ static void nohz_newidle_balance(struct rq *this_rq)
>   		return;
>   
>   	/* Don't need to update blocked load of idle CPUs*/
> -	if (!READ_ONCE(nohz.has_blocked) ||
> +	if (!READ_ONCE(nohz.has_blocked_load) ||
>   	    time_before(jiffies, READ_ONCE(nohz.next_blocked)))
>   		return;
>   


* [tip: sched/core] sched/fair: Sort out 'blocked_load*' namespace noise
  2025-12-02  9:34   ` [PATCH 1/1 -v2] sched/fair: Sort out 'blocked_load*' namespace noise Ingo Molnar
  2025-12-02 10:13     ` Vincent Guittot
@ 2025-12-14  7:46     ` tip-bot2 for Ingo Molnar
  2025-12-15  7:59     ` tip-bot2 for Ingo Molnar
  2 siblings, 0 replies; 12+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2025-12-14  7:46 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Guittot, Ingo Molnar, Peter Zijlstra, Frederic Weisbecker,
	Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     098f3900b035ab94fccd19d77b9ec2b13c300599
Gitweb:        https://git.kernel.org/tip/098f3900b035ab94fccd19d77b9ec2b13c300599
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Tue, 02 Dec 2025 10:35:06 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Sun, 14 Dec 2025 08:25:03 +01:00

sched/fair: Sort out 'blocked_load*' namespace noise

There's three layers of logic in the scheduler that
deal with 'has_blocked' (load) handling of the NOHZ code:

  (1) nohz.has_blocked,
  (2) rq->has_blocked_load, deal with NOHZ idle balancing,
  (3) and cfs_rq_has_blocked(), which is part of the layer
      that is passing the SMP load-balancing signal to the
      NOHZ layers.

The 'has_blocked' and 'has_blocked_load' names are used
in a mixed fashion, sometimes within the same function.

Standardize on 'has_blocked_load' to make it all easy
to read and easy to grep.

No change in functionality.

Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://patch.msgid.link/aS6yvxyc3JfMxxQW@gmail.com
---
 kernel/sched/fair.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b6043ec..76f5e4b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7140,7 +7140,7 @@ static DEFINE_PER_CPU(cpumask_var_t, should_we_balance_tmpmask);
 static struct {
 	cpumask_var_t idle_cpus_mask;
 	atomic_t nr_cpus;
-	int has_blocked;		/* Idle CPUS has blocked load */
+	int has_blocked_load;		/* Idle CPUS has blocked load */
 	int needs_update;		/* Newly idle CPUs need their next_balance collated */
 	unsigned long next_balance;     /* in jiffy units */
 	unsigned long next_blocked;	/* Next update of blocked load in jiffies */
@@ -9776,7 +9776,7 @@ static void attach_tasks(struct lb_env *env)
 }
 
 #ifdef CONFIG_NO_HZ_COMMON
-static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq)
+static inline bool cfs_rq_has_blocked_load(struct cfs_rq *cfs_rq)
 {
 	if (cfs_rq->avg.load_avg)
 		return true;
@@ -9809,16 +9809,16 @@ static inline void update_blocked_load_tick(struct rq *rq)
 	WRITE_ONCE(rq->last_blocked_load_update_tick, jiffies);
 }
 
-static inline void update_blocked_load_status(struct rq *rq, bool has_blocked)
+static inline void update_has_blocked_load_status(struct rq *rq, bool has_blocked_load)
 {
-	if (!has_blocked)
+	if (!has_blocked_load)
 		rq->has_blocked_load = 0;
 }
 #else /* !CONFIG_NO_HZ_COMMON: */
-static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq) { return false; }
+static inline bool cfs_rq_has_blocked_load(struct cfs_rq *cfs_rq) { return false; }
 static inline bool others_have_blocked(struct rq *rq) { return false; }
 static inline void update_blocked_load_tick(struct rq *rq) {}
-static inline void update_blocked_load_status(struct rq *rq, bool has_blocked) {}
+static inline void update_has_blocked_load_status(struct rq *rq, bool has_blocked_load) {}
 #endif /* !CONFIG_NO_HZ_COMMON */
 
 static bool __update_blocked_others(struct rq *rq, bool *done)
@@ -9875,7 +9875,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 			list_del_leaf_cfs_rq(cfs_rq);
 
 		/* Don't need periodic decay once load/util_avg are null */
-		if (cfs_rq_has_blocked(cfs_rq))
+		if (cfs_rq_has_blocked_load(cfs_rq))
 			*done = false;
 	}
 
@@ -9935,7 +9935,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 	bool decayed;
 
 	decayed = update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);
-	if (cfs_rq_has_blocked(cfs_rq))
+	if (cfs_rq_has_blocked_load(cfs_rq))
 		*done = false;
 
 	return decayed;
@@ -9956,7 +9956,7 @@ static void __sched_balance_update_blocked_averages(struct rq *rq)
 	decayed |= __update_blocked_others(rq, &done);
 	decayed |= __update_blocked_fair(rq, &done);
 
-	update_blocked_load_status(rq, !done);
+	update_has_blocked_load_status(rq, !done);
 	if (decayed)
 		cpufreq_update_util(rq, 0);
 }
@@ -12452,7 +12452,7 @@ static void nohz_balancer_kick(struct rq *rq)
 	if (likely(!atomic_read(&nohz.nr_cpus)))
 		return;
 
-	if (READ_ONCE(nohz.has_blocked) &&
+	if (READ_ONCE(nohz.has_blocked_load) &&
 	    time_after(now, READ_ONCE(nohz.next_blocked)))
 		flags = NOHZ_STATS_KICK;
 
@@ -12613,9 +12613,9 @@ void nohz_balance_enter_idle(int cpu)
 
 	/*
 	 * The tick is still stopped but load could have been added in the
-	 * meantime. We set the nohz.has_blocked flag to trig a check of the
+	 * meantime. We set the nohz.has_blocked_load flag to trig a check of the
 	 * *_avg. The CPU is already part of nohz.idle_cpus_mask so the clear
-	 * of nohz.has_blocked can only happen after checking the new load
+	 * of nohz.has_blocked_load can only happen after checking the new load
 	 */
 	if (rq->nohz_tick_stopped)
 		goto out;
@@ -12631,7 +12631,7 @@ void nohz_balance_enter_idle(int cpu)
 
 	/*
 	 * Ensures that if nohz_idle_balance() fails to observe our
-	 * @idle_cpus_mask store, it must observe the @has_blocked
+	 * @idle_cpus_mask store, it must observe the @has_blocked_load
 	 * and @needs_update stores.
 	 */
 	smp_mb__after_atomic();
@@ -12644,7 +12644,7 @@ out:
 	 * Each time a cpu enter idle, we assume that it has blocked load and
 	 * enable the periodic update of the load of idle CPUs
 	 */
-	WRITE_ONCE(nohz.has_blocked, 1);
+	WRITE_ONCE(nohz.has_blocked_load, 1);
 }
 
 static bool update_nohz_stats(struct rq *rq)
@@ -12685,8 +12685,8 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 
 	/*
 	 * We assume there will be no idle load after this update and clear
-	 * the has_blocked flag. If a cpu enters idle in the mean time, it will
-	 * set the has_blocked flag and trigger another update of idle load.
+	 * the has_blocked_load flag. If a cpu enters idle in the mean time, it will
+	 * set the has_blocked_load flag and trigger another update of idle load.
 	 * Because a cpu that becomes idle, is added to idle_cpus_mask before
 	 * setting the flag, we are sure to not clear the state and not
 	 * check the load of an idle cpu.
@@ -12694,12 +12694,12 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 	 * Same applies to idle_cpus_mask vs needs_update.
 	 */
 	if (flags & NOHZ_STATS_KICK)
-		WRITE_ONCE(nohz.has_blocked, 0);
+		WRITE_ONCE(nohz.has_blocked_load, 0);
 	if (flags & NOHZ_NEXT_KICK)
 		WRITE_ONCE(nohz.needs_update, 0);
 
 	/*
-	 * Ensures that if we miss the CPU, we must see the has_blocked
+	 * Ensures that if we miss the CPU, we must see the has_blocked_load
 	 * store from nohz_balance_enter_idle().
 	 */
 	smp_mb();
@@ -12766,7 +12766,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 abort:
 	/* There is still blocked load, enable periodic update */
 	if (has_blocked_load)
-		WRITE_ONCE(nohz.has_blocked, 1);
+		WRITE_ONCE(nohz.has_blocked_load, 1);
 }
 
 /*
@@ -12828,7 +12828,7 @@ static void nohz_newidle_balance(struct rq *this_rq)
 		return;
 
 	/* Don't need to update blocked load of idle CPUs*/
-	if (!READ_ONCE(nohz.has_blocked) ||
+	if (!READ_ONCE(nohz.has_blocked_load) ||
 	    time_before(jiffies, READ_ONCE(nohz.next_blocked)))
 		return;
 


* [tip: sched/core] sched/fair: Sort out 'blocked_load*' namespace noise
  2025-12-02  9:34   ` [PATCH 1/1 -v2] sched/fair: Sort out 'blocked_load*' namespace noise Ingo Molnar
  2025-12-02 10:13     ` Vincent Guittot
  2025-12-14  7:46     ` [tip: sched/core] " tip-bot2 for Ingo Molnar
@ 2025-12-15  7:59     ` tip-bot2 for Ingo Molnar
  2 siblings, 0 replies; 12+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2025-12-15  7:59 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Guittot, Ingo Molnar, Peter Zijlstra, Frederic Weisbecker,
	Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     527a521029c3edd38fb9fc96cd58e3fd7393d28e
Gitweb:        https://git.kernel.org/tip/527a521029c3edd38fb9fc96cd58e3fd7393d28e
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Tue, 02 Dec 2025 10:35:06 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 15 Dec 2025 07:52:45 +01:00

sched/fair: Sort out 'blocked_load*' namespace noise

There's three layers of logic in the scheduler that
deal with 'has_blocked' (load) handling of the NOHZ code:

  (1) nohz.has_blocked,
  (2) rq->has_blocked_load, deal with NOHZ idle balancing,
  (3) and cfs_rq_has_blocked(), which is part of the layer
      that is passing the SMP load-balancing signal to the
      NOHZ layers.

The 'has_blocked' and 'has_blocked_load' names are used
in a mixed fashion, sometimes within the same function.

Standardize on 'has_blocked_load' to make it all easy
to read and easy to grep.

No change in functionality.

Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://patch.msgid.link/aS6yvxyc3JfMxxQW@gmail.com
---
 kernel/sched/fair.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a8ac68d..d588eb8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7140,7 +7140,7 @@ static DEFINE_PER_CPU(cpumask_var_t, should_we_balance_tmpmask);
 static struct {
 	cpumask_var_t idle_cpus_mask;
 	atomic_t nr_cpus;
-	int has_blocked;		/* Idle CPUS has blocked load */
+	int has_blocked_load;		/* Idle CPUS has blocked load */
 	int needs_update;		/* Newly idle CPUs need their next_balance collated */
 	unsigned long next_balance;     /* in jiffy units */
 	unsigned long next_blocked;	/* Next update of blocked load in jiffies */
@@ -9770,7 +9770,7 @@ static void attach_tasks(struct lb_env *env)
 }
 
 #ifdef CONFIG_NO_HZ_COMMON
-static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq)
+static inline bool cfs_rq_has_blocked_load(struct cfs_rq *cfs_rq)
 {
 	if (cfs_rq->avg.load_avg)
 		return true;
@@ -9803,16 +9803,16 @@ static inline void update_blocked_load_tick(struct rq *rq)
 	WRITE_ONCE(rq->last_blocked_load_update_tick, jiffies);
 }
 
-static inline void update_blocked_load_status(struct rq *rq, bool has_blocked)
+static inline void update_has_blocked_load_status(struct rq *rq, bool has_blocked_load)
 {
-	if (!has_blocked)
+	if (!has_blocked_load)
 		rq->has_blocked_load = 0;
 }
 #else /* !CONFIG_NO_HZ_COMMON: */
-static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq) { return false; }
+static inline bool cfs_rq_has_blocked_load(struct cfs_rq *cfs_rq) { return false; }
 static inline bool others_have_blocked(struct rq *rq) { return false; }
 static inline void update_blocked_load_tick(struct rq *rq) {}
-static inline void update_blocked_load_status(struct rq *rq, bool has_blocked) {}
+static inline void update_has_blocked_load_status(struct rq *rq, bool has_blocked_load) {}
 #endif /* !CONFIG_NO_HZ_COMMON */
 
 static bool __update_blocked_others(struct rq *rq, bool *done)
@@ -9869,7 +9869,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 			list_del_leaf_cfs_rq(cfs_rq);
 
 		/* Don't need periodic decay once load/util_avg are null */
-		if (cfs_rq_has_blocked(cfs_rq))
+		if (cfs_rq_has_blocked_load(cfs_rq))
 			*done = false;
 	}
 
@@ -9929,7 +9929,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
 	bool decayed;
 
 	decayed = update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);
-	if (cfs_rq_has_blocked(cfs_rq))
+	if (cfs_rq_has_blocked_load(cfs_rq))
 		*done = false;
 
 	return decayed;
@@ -9950,7 +9950,7 @@ static void __sched_balance_update_blocked_averages(struct rq *rq)
 	decayed |= __update_blocked_others(rq, &done);
 	decayed |= __update_blocked_fair(rq, &done);
 
-	update_blocked_load_status(rq, !done);
+	update_has_blocked_load_status(rq, !done);
 	if (decayed)
 		cpufreq_update_util(rq, 0);
 }
@@ -12446,7 +12446,7 @@ static void nohz_balancer_kick(struct rq *rq)
 	if (likely(!atomic_read(&nohz.nr_cpus)))
 		return;
 
-	if (READ_ONCE(nohz.has_blocked) &&
+	if (READ_ONCE(nohz.has_blocked_load) &&
 	    time_after(now, READ_ONCE(nohz.next_blocked)))
 		flags = NOHZ_STATS_KICK;
 
@@ -12607,9 +12607,9 @@ void nohz_balance_enter_idle(int cpu)
 
 	/*
 	 * The tick is still stopped but load could have been added in the
-	 * meantime. We set the nohz.has_blocked flag to trig a check of the
+	 * meantime. We set the nohz.has_blocked_load flag to trig a check of the
 	 * *_avg. The CPU is already part of nohz.idle_cpus_mask so the clear
-	 * of nohz.has_blocked can only happen after checking the new load
+	 * of nohz.has_blocked_load can only happen after checking the new load
 	 */
 	if (rq->nohz_tick_stopped)
 		goto out;
@@ -12625,7 +12625,7 @@ void nohz_balance_enter_idle(int cpu)
 
 	/*
 	 * Ensures that if nohz_idle_balance() fails to observe our
-	 * @idle_cpus_mask store, it must observe the @has_blocked
+	 * @idle_cpus_mask store, it must observe the @has_blocked_load
 	 * and @needs_update stores.
 	 */
 	smp_mb__after_atomic();
@@ -12638,7 +12638,7 @@ out:
 	 * Each time a cpu enter idle, we assume that it has blocked load and
 	 * enable the periodic update of the load of idle CPUs
 	 */
-	WRITE_ONCE(nohz.has_blocked, 1);
+	WRITE_ONCE(nohz.has_blocked_load, 1);
 }
 
 static bool update_nohz_stats(struct rq *rq)
@@ -12679,8 +12679,8 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 
 	/*
 	 * We assume there will be no idle load after this update and clear
-	 * the has_blocked flag. If a cpu enters idle in the mean time, it will
-	 * set the has_blocked flag and trigger another update of idle load.
+	 * the has_blocked_load flag. If a cpu enters idle in the mean time, it will
+	 * set the has_blocked_load flag and trigger another update of idle load.
 	 * Because a cpu that becomes idle, is added to idle_cpus_mask before
 	 * setting the flag, we are sure to not clear the state and not
 	 * check the load of an idle cpu.
@@ -12688,12 +12688,12 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 	 * Same applies to idle_cpus_mask vs needs_update.
 	 */
 	if (flags & NOHZ_STATS_KICK)
-		WRITE_ONCE(nohz.has_blocked, 0);
+		WRITE_ONCE(nohz.has_blocked_load, 0);
 	if (flags & NOHZ_NEXT_KICK)
 		WRITE_ONCE(nohz.needs_update, 0);
 
 	/*
-	 * Ensures that if we miss the CPU, we must see the has_blocked
+	 * Ensures that if we miss the CPU, we must see the has_blocked_load
 	 * store from nohz_balance_enter_idle().
 	 */
 	smp_mb();
@@ -12760,7 +12760,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 abort:
 	/* There is still blocked load, enable periodic update */
 	if (has_blocked_load)
-		WRITE_ONCE(nohz.has_blocked, 1);
+		WRITE_ONCE(nohz.has_blocked_load, 1);
 }
 
 /*
@@ -12822,7 +12822,7 @@ static void nohz_newidle_balance(struct rq *this_rq)
 		return;
 
 	/* Don't need to update blocked load of idle CPUs*/
-	if (!READ_ONCE(nohz.has_blocked) ||
+	if (!READ_ONCE(nohz.has_blocked_load) ||
 	    time_before(jiffies, READ_ONCE(nohz.next_blocked)))
 		return;
 


end of thread, other threads:[~2025-12-15  7:59 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-12-02  8:13 [PATCH 0/3] sched/fair: Sort out 'blocked_load*' namespace confusion Ingo Molnar
2025-12-02  8:13 ` [PATCH 1/3] sched/fair: Rename the 'has_blocked_load' local variable to 'has_blocked' in _nohz_idle_balance() Ingo Molnar
2025-12-02  8:13 ` [PATCH 2/3] sched/fair: Rename the 'has_blocked' parameter in update_blocked_load_status() to 'has_blocked_load' Ingo Molnar
2025-12-02  8:13 ` [PATCH 3/3] sched/fair: Rename cfs_rq_has_blocked() => cfs_rq_has_blocked_load_avg() Ingo Molnar
2025-12-02  9:04 ` [PATCH 0/3] sched/fair: Sort out 'blocked_load*' namespace confusion Vincent Guittot
2025-12-02  9:34   ` [PATCH 1/1 -v2] sched/fair: Sort out 'blocked_load*' namespace noise Ingo Molnar
2025-12-02 10:13     ` Vincent Guittot
2025-12-02 16:09       ` [PATCH 1/1 -v3] " Ingo Molnar
2025-12-02 16:55         ` Vincent Guittot
2025-12-02 20:26         ` Shrikanth Hegde
2025-12-14  7:46     ` [tip: sched/core] " tip-bot2 for Ingo Molnar
2025-12-15  7:59     ` tip-bot2 for Ingo Molnar
