public inbox for linux-kernel@vger.kernel.org
* [PATCH] sched/idle: Fix avg_idle saturation by establishing symmetric idle entry hook
@ 2026-04-17  2:06 Masahito S
  2026-04-22 14:26 ` Christian Loehle
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Masahito S @ 2026-04-17  2:06 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot
  Cc: dietmar.eggemann, rostedt, bsegall, mgorman, vschneid,
	kprateek.nayak, linux-kernel, Masahito S

update_rq_avg_idle(), called from put_prev_task_idle(), computes
rq->avg_idle as rq_clock() - rq->idle_stamp.  However, idle_stamp is
only set by sched_balance_newidle() when a CPU enters CPU_NEWLY_IDLE
through the fair class path.  When the idle task is preempted without
sched_balance_newidle() having run (boot, hotplug, sched class
transitions), idle_stamp remains 0, producing a delta equal to
rq_clock() — a value in the billions of nanoseconds — which saturates
avg_idle at 2 * max_idle_balance_cost.

This inflated avg_idle prevents sched_balance_newidle() from
early-returning (fair.c: avg_idle < max_newidle_lb_cost check),
making it overly aggressive.  The resulting excess newidle migrations
override wake-time placement decisions made by select_idle_sibling(),
degrading cache locality that careful placement (recent_used_cpu,
select_idle_core, etc.) is designed to preserve.

Fix this by:

1. Adding an idle_stamp validity guard to update_rq_avg_idle(), so
   that a zero idle_stamp is never used as a timestamp.

2. Setting idle_stamp in set_next_task_idle() when it has not already
   been set by sched_balance_newidle().  This establishes a symmetric
   idle entry/exit contract: set_next_task_idle() marks the start of
   the idle period, put_prev_task_idle() measures and records it via
   update_rq_avg_idle().

The entry hook preserves idle_stamp if sched_balance_newidle() has
already set it, maintaining the existing semantic where balance-attempt
duration is included in the idle measurement.
---
 kernel/sched/core.c | 3 +++
 kernel/sched/idle.c | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 496dff740d..ec801f731c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3633,6 +3633,9 @@ static inline void ttwu_do_wakeup(struct task_struct *p)
 
 void update_rq_avg_idle(struct rq *rq)
 {
+	if (!rq->idle_stamp)
+		return;
+
 	u64 delta = rq_clock(rq) - rq->idle_stamp;
 	u64 max = 2*rq->max_idle_balance_cost;
 
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index a83be0c834..9ceb7e6224 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -491,6 +491,9 @@ static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool fir
 	schedstat_inc(rq->sched_goidle);
 	next->se.exec_start = rq_clock_task(rq);
 
+	if (!rq->idle_stamp)
+		rq->idle_stamp = rq_clock(rq);
+
 	/*
 	 * rq is about to be idle, check if we need to update the
 	 * lost_idle_time of clock_pelt
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [PATCH] sched/idle: Fix avg_idle saturation by establishing symmetric idle entry hook
  2026-04-17  2:06 [PATCH] sched/idle: Fix avg_idle saturation by establishing symmetric idle entry hook Masahito S
@ 2026-04-22 14:26 ` Christian Loehle
  2026-04-23  2:33 ` [PATCH v2] " Masahito S
  2026-04-23  7:46 ` [PATCH] " Vincent Guittot
  2 siblings, 0 replies; 6+ messages in thread
From: Christian Loehle @ 2026-04-22 14:26 UTC (permalink / raw)
  To: Masahito S, mingo, peterz, juri.lelli, vincent.guittot
  Cc: dietmar.eggemann, rostedt, bsegall, mgorman, vschneid,
	kprateek.nayak, linux-kernel

On 4/17/26 03:06, Masahito S wrote:
> update_rq_avg_idle(), called from put_prev_task_idle(), computes
> rq->avg_idle as rq_clock() - rq->idle_stamp.  However, idle_stamp is
> only set by sched_balance_newidle() when a CPU enters CPU_NEWLY_IDLE
> through the fair class path.  When the idle task is preempted without
> sched_balance_newidle() having run (boot, hotplug, sched class
> transitions), idle_stamp remains 0, producing a delta equal to
> rq_clock() — a value in the billions of nanoseconds — which saturates
> avg_idle at 2 * max_idle_balance_cost.
> 
> This inflated avg_idle prevents sched_balance_newidle() from
> early-returning (fair.c: avg_idle < max_newidle_lb_cost check),
> making it overly aggressive.  The resulting excess newidle migrations
> override wake-time placement decisions made by select_idle_sibling(),
> degrading cache locality that careful placement (recent_used_cpu,
> select_idle_core, etc.) is designed to preserve.
> 
> Fix this by:
> 
> 1. Adding an idle_stamp validity guard to update_rq_avg_idle(), so
>    that a zero idle_stamp is never used as a timestamp.
> 
> 2. Setting idle_stamp in set_next_task_idle() when it has not already
>    been set by sched_balance_newidle().  This establishes a symmetric
>    idle entry/exit contract: set_next_task_idle() marks the start of
>    the idle period, put_prev_task_idle() measures and records it via
>    update_rq_avg_idle().
> 
> The entry hook preserves idle_stamp if sched_balance_newidle() has
> already set it, maintaining the existing semantic where balance-attempt
> duration is included in the idle measurement.


You're missing the sign-off tag.
The actual change looks reasonable to me, albeit the impact is surely limited.

> ---
>  kernel/sched/core.c | 3 +++
>  kernel/sched/idle.c | 3 +++
>  2 files changed, 6 insertions(+)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 496dff740d..ec801f731c 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3633,6 +3633,9 @@ static inline void ttwu_do_wakeup(struct task_struct *p)
>  
>  void update_rq_avg_idle(struct rq *rq)
>  {
> +	if (!rq->idle_stamp)
> +		return;
> +
>  	u64 delta = rq_clock(rq) - rq->idle_stamp;
>  	u64 max = 2*rq->max_idle_balance_cost;
>  
> diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> index a83be0c834..9ceb7e6224 100644
> --- a/kernel/sched/idle.c
> +++ b/kernel/sched/idle.c
> @@ -491,6 +491,9 @@ static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool fir
>  	schedstat_inc(rq->sched_goidle);
>  	next->se.exec_start = rq_clock_task(rq);
>  
> +	if (!rq->idle_stamp)
> +		rq->idle_stamp = rq_clock(rq);
> +
>  	/*
>  	 * rq is about to be idle, check if we need to update the
>  	 * lost_idle_time of clock_pelt



* [PATCH v2] sched/idle: Fix avg_idle saturation by establishing symmetric idle entry hook
  2026-04-17  2:06 [PATCH] sched/idle: Fix avg_idle saturation by establishing symmetric idle entry hook Masahito S
  2026-04-22 14:26 ` Christian Loehle
@ 2026-04-23  2:33 ` Masahito S
  2026-04-23  5:10   ` K Prateek Nayak
  2026-04-23  5:48   ` Eric Naim
  2026-04-23  7:46 ` [PATCH] " Vincent Guittot
  2 siblings, 2 replies; 6+ messages in thread
From: Masahito S @ 2026-04-23  2:33 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot
  Cc: dietmar.eggemann, rostedt, bsegall, mgorman, vschneid,
	kprateek.nayak, linux-kernel, dnaim, christian.loehle, Masahito S

update_rq_avg_idle(), called from put_prev_task_idle(), computes
rq->avg_idle as rq_clock() - rq->idle_stamp.  However, idle_stamp is
only set by sched_balance_newidle() when a CPU enters CPU_NEWLY_IDLE
through the fair class path.  When the idle task is preempted without
sched_balance_newidle() having run (boot, hotplug, sched class
transitions), idle_stamp remains 0, producing a delta equal to
rq_clock() — a value in the billions of nanoseconds — which saturates
avg_idle at 2 * max_idle_balance_cost.

This inflated avg_idle prevents sched_balance_newidle() from
early-returning (fair.c: avg_idle < max_newidle_lb_cost check),
making it overly aggressive.  The resulting excess newidle migrations
override wake-time placement decisions made by select_idle_sibling(),
degrading cache locality that careful placement (recent_used_cpu,
select_idle_core, etc.) is designed to preserve.

Fix this by:

1. Adding an idle_stamp validity guard to update_rq_avg_idle(), so
   that a zero idle_stamp is never used as a timestamp.

2. Setting idle_stamp in set_next_task_idle() when it has not already
   been set by sched_balance_newidle().  This establishes a symmetric
   idle entry/exit contract: set_next_task_idle() marks the start of
   the idle period, put_prev_task_idle() measures and records it via
   update_rq_avg_idle().

The entry hook preserves idle_stamp if sched_balance_newidle() has
already set it, maintaining the existing semantic where balance-attempt
duration is included in the idle measurement.

Signed-off-by: Masahito Suzuki <firelzrd@gmail.com>
---
Changes in v2:
- Added missing Signed-off-by tag (no functional changes).
  Thanks to Eric Naim and Christian Loehle for pointing this out.

 kernel/sched/core.c | 3 +++
 kernel/sched/idle.c | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 496dff740d..ec801f731c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3633,6 +3633,9 @@ static inline void ttwu_do_wakeup(struct task_struct *p)
 
 void update_rq_avg_idle(struct rq *rq)
 {
+	if (!rq->idle_stamp)
+		return;
+
 	u64 delta = rq_clock(rq) - rq->idle_stamp;
 	u64 max = 2*rq->max_idle_balance_cost;
 
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index a83be0c834..9ceb7e6224 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -491,6 +491,9 @@ static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool fir
 	schedstat_inc(rq->sched_goidle);
 	next->se.exec_start = rq_clock_task(rq);
 
+	if (!rq->idle_stamp)
+		rq->idle_stamp = rq_clock(rq);
+
 	/*
 	 * rq is about to be idle, check if we need to update the
 	 * lost_idle_time of clock_pelt
-- 
2.34.1



* Re: [PATCH v2] sched/idle: Fix avg_idle saturation by establishing symmetric idle entry hook
  2026-04-23  2:33 ` [PATCH v2] " Masahito S
@ 2026-04-23  5:10   ` K Prateek Nayak
  2026-04-23  5:48   ` Eric Naim
  1 sibling, 0 replies; 6+ messages in thread
From: K Prateek Nayak @ 2026-04-23  5:10 UTC (permalink / raw)
  To: Masahito S, mingo, peterz, juri.lelli, vincent.guittot,
	John Stultz
  Cc: dietmar.eggemann, rostedt, bsegall, mgorman, vschneid,
	linux-kernel, dnaim, christian.loehle

Hello Masahito,

On 4/23/2026 8:03 AM, Masahito S wrote:
> update_rq_avg_idle(), called from put_prev_task_idle(), computes
> rq->avg_idle as rq_clock() - rq->idle_stamp.  However, idle_stamp is
> only set by sched_balance_newidle() when a CPU enters CPU_NEWLY_IDLE
> through the fair class path.  When the idle task is preempted without
> sched_balance_newidle() having run (boot, hotplug, sched class
> transitions), idle_stamp remains 0, producing a delta equal to
> rq_clock() — a value in the billions of nanoseconds — which saturates
> avg_idle at 2 * max_idle_balance_cost.

But these are rare cases right?

Hotplug would anyways trigger a load balance when the CPU comes online
and the avg_idle will stabilize thereafter.

Boot happens once so it should be fine to saturate the counter at boot, and
idle task being preempted implies we are calling put_prev_task_idle(). For
idle task to be picked again it has to go though the pick for rest of the
scheduler classes which would do a newidle balance at fair right?

Maybe there is some case with sched-ext where this saturates but then the
counter only becomes relevant when the sched-ext scheduler is unloaded.

With PROXY_EXEC, we do switch to idle between schedule() calls to take the
current task off the CPU, so this does have some merit.

> 
> This inflated avg_idle prevents sched_balance_newidle() from
> early-returning (fair.c: avg_idle < max_newidle_lb_cost check),
> making it overly aggressive.  The resulting excess newidle migrations
> override wake-time placement decisions made by select_idle_sibling(),
> degrading cache locality that careful placement (recent_used_cpu,
> select_idle_core, etc.) is designed to preserve.
> 
> Fix this by:
> 
> 1. Adding an idle_stamp validity guard to update_rq_avg_idle(), so
>    that a zero idle_stamp is never used as a timestamp.
> 
> 2. Setting idle_stamp in set_next_task_idle() when it has not already
>    been set by sched_balance_newidle().  This establishes a symmetric
>    idle entry/exit contract: set_next_task_idle() marks the start of
>    the idle period, put_prev_task_idle() measures and records it via
>    update_rq_avg_idle().
> 
> The entry hook preserves idle_stamp if sched_balance_newidle() has
> already set it, maintaining the existing semantic where balance-attempt
> duration is included in the idle measurement.
> 
> Signed-off-by: Masahito Suzuki <firelzrd@gmail.com>
> ---
> Changes in v2:
> - Added missing Signed-off-by tag (no functional changes).
>   Thanks to Eric Naim and Christian Loehle for pointing this out.
> 
>  kernel/sched/core.c | 3 +++
>  kernel/sched/idle.c | 3 +++
>  2 files changed, 6 insertions(+)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 496dff740d..ec801f731c 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3633,6 +3633,9 @@ static inline void ttwu_do_wakeup(struct task_struct *p)
> 
>  void update_rq_avg_idle(struct rq *rq)
>  {
> +       if (!rq->idle_stamp)
> +               return;
> +

I think this makes sense since we can be forced into idle and we
don't want to account that.

>         u64 delta = rq_clock(rq) - rq->idle_stamp;
>         u64 max = 2*rq->max_idle_balance_cost;
> 
> diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> index a83be0c834..9ceb7e6224 100644
> --- a/kernel/sched/idle.c
> +++ b/kernel/sched/idle.c
> @@ -491,6 +491,9 @@ static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool fir
>         schedstat_inc(rq->sched_goidle);
>         next->se.exec_start = rq_clock_task(rq);
> 
> +       if (!rq->idle_stamp)
> +               rq->idle_stamp = rq_clock(rq);
> +

I don't think this is required, because we can switch the donor to the idle
task for PROXY_EXEC, and we don't want to account that as a short idle time,
unless there is another case I'm missing.

>         /*
>          * rq is about to be idle, check if we need to update the
>          * lost_idle_time of clock_pelt
-- 
Thanks and Regards,
Prateek



* Re: [PATCH v2] sched/idle: Fix avg_idle saturation by establishing symmetric idle entry hook
  2026-04-23  2:33 ` [PATCH v2] " Masahito S
  2026-04-23  5:10   ` K Prateek Nayak
@ 2026-04-23  5:48   ` Eric Naim
  1 sibling, 0 replies; 6+ messages in thread
From: Eric Naim @ 2026-04-23  5:48 UTC (permalink / raw)
  To: Masahito S, mingo, peterz, juri.lelli, vincent.guittot
  Cc: dietmar.eggemann, rostedt, bsegall, mgorman, vschneid,
	kprateek.nayak, linux-kernel, christian.loehle

On 4/23/26 10:33 AM, Masahito S wrote:
> update_rq_avg_idle(), called from put_prev_task_idle(), computes
> rq->avg_idle as rq_clock() - rq->idle_stamp.  However, idle_stamp is
> only set by sched_balance_newidle() when a CPU enters CPU_NEWLY_IDLE
> through the fair class path.  When the idle task is preempted without
> sched_balance_newidle() having run (boot, hotplug, sched class
> transitions), idle_stamp remains 0, producing a delta equal to
> rq_clock() — a value in the billions of nanoseconds — which saturates
> avg_idle at 2 * max_idle_balance_cost.
> 
> This inflated avg_idle prevents sched_balance_newidle() from
> early-returning (fair.c: avg_idle < max_newidle_lb_cost check),
> making it overly aggressive.  The resulting excess newidle migrations
> override wake-time placement decisions made by select_idle_sibling(),
> degrading cache locality that careful placement (recent_used_cpu,
> select_idle_core, etc.) is designed to preserve.
> 
> Fix this by:
> 
> 1. Adding an idle_stamp validity guard to update_rq_avg_idle(), so
>    that a zero idle_stamp is never used as a timestamp.
> 
> 2. Setting idle_stamp in set_next_task_idle() when it has not already
>    been set by sched_balance_newidle().  This establishes a symmetric
>    idle entry/exit contract: set_next_task_idle() marks the start of
>    the idle period, put_prev_task_idle() measures and records it via
>    update_rq_avg_idle().
> 
> The entry hook preserves idle_stamp if sched_balance_newidle() has
> already set it, maintaining the existing semantic where balance-attempt
> duration is included in the idle measurement.
> 
> Signed-off-by: Masahito Suzuki <firelzrd@gmail.com>

Should this have

  Fixes: 4b603f1551a73 ("sched: Update rq->avg_idle when a task is moved to an idle CPU")

> ---
> Changes in v2:
> - Added missing Signed-off-by tag (no functional changes).
>   Thanks to Eric Naim and Christian Loehle for pointing this out.
> 
>  kernel/sched/core.c | 3 +++
>  kernel/sched/idle.c | 3 +++
>  2 files changed, 6 insertions(+)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 496dff740d..ec801f731c 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3633,6 +3633,9 @@ static inline void ttwu_do_wakeup(struct task_struct *p)
>  
>  void update_rq_avg_idle(struct rq *rq)
>  {
> +	if (!rq->idle_stamp)
> +		return;
> +
>  	u64 delta = rq_clock(rq) - rq->idle_stamp;
>  	u64 max = 2*rq->max_idle_balance_cost;
>  
> diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> index a83be0c834..9ceb7e6224 100644
> --- a/kernel/sched/idle.c
> +++ b/kernel/sched/idle.c
> @@ -491,6 +491,9 @@ static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool fir
>  	schedstat_inc(rq->sched_goidle);
>  	next->se.exec_start = rq_clock_task(rq);
>  
> +	if (!rq->idle_stamp)
> +		rq->idle_stamp = rq_clock(rq);
> +
>  	/*
>  	 * rq is about to be idle, check if we need to update the
>  	 * lost_idle_time of clock_pelt


-- 
Regards,
  Eric


* Re: [PATCH] sched/idle: Fix avg_idle saturation by establishing symmetric idle entry hook
  2026-04-17  2:06 [PATCH] sched/idle: Fix avg_idle saturation by establishing symmetric idle entry hook Masahito S
  2026-04-22 14:26 ` Christian Loehle
  2026-04-23  2:33 ` [PATCH v2] " Masahito S
@ 2026-04-23  7:46 ` Vincent Guittot
  2 siblings, 0 replies; 6+ messages in thread
From: Vincent Guittot @ 2026-04-23  7:46 UTC (permalink / raw)
  To: Masahito S
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, vschneid, kprateek.nayak, linux-kernel

On Fri, 17 Apr 2026 at 04:07, Masahito S <firelzrd@gmail.com> wrote:
>
> update_rq_avg_idle(), called from put_prev_task_idle(), computes
> rq->avg_idle as rq_clock() - rq->idle_stamp.  However, idle_stamp is
> only set by sched_balance_newidle() when a CPU enters CPU_NEWLY_IDLE
> through the fair class path.  When the idle task is preempted without
> sched_balance_newidle() having run (boot, hotplug, sched class
> transitions), idle_stamp remains 0, producing a delta equal to

Having avg_idle set to 2*rq->max_idle_balance_cost after boot or
hotplug seems like a good initial value.

What do you mean by sched class transitions? pick_next_task_fair() ->
sched_balance_newidle() is always called before picking an idle task.

> rq_clock() — a value in the billions of nanoseconds — which saturates
> avg_idle at 2 * max_idle_balance_cost.
>
> This inflated avg_idle prevents sched_balance_newidle() from
> early-returning (fair.c: avg_idle < max_newidle_lb_cost check),
> making it overly aggressive.  The resulting excess newidle migrations
> override wake-time placement decisions made by select_idle_sibling(),
> degrading cache locality that careful placement (recent_used_cpu,
> select_idle_core, etc.) is designed to preserve.
>
> Fix this by:
>
> 1. Adding an idle_stamp validity guard to update_rq_avg_idle(), so
>    that a zero idle_stamp is never used as a timestamp.
>
> 2. Setting idle_stamp in set_next_task_idle() when it has not already
>    been set by sched_balance_newidle().  This establishes a symmetric
>    idle entry/exit contract: set_next_task_idle() marks the start of
>    the idle period, put_prev_task_idle() measures and records it via
>    update_rq_avg_idle().
>
> The entry hook preserves idle_stamp if sched_balance_newidle() has
> already set it, maintaining the existing semantic where balance-attempt
> duration is included in the idle measurement.

Signed-off-by is missing


> ---
>  kernel/sched/core.c | 3 +++
>  kernel/sched/idle.c | 3 +++
>  2 files changed, 6 insertions(+)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 496dff740d..ec801f731c 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3633,6 +3633,9 @@ static inline void ttwu_do_wakeup(struct task_struct *p)
>
>  void update_rq_avg_idle(struct rq *rq)
>  {
> +       if (!rq->idle_stamp)
> +               return;
> +
>         u64 delta = rq_clock(rq) - rq->idle_stamp;
>         u64 max = 2*rq->max_idle_balance_cost;
>
> diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> index a83be0c834..9ceb7e6224 100644
> --- a/kernel/sched/idle.c
> +++ b/kernel/sched/idle.c
> @@ -491,6 +491,9 @@ static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool fir
>         schedstat_inc(rq->sched_goidle);
>         next->se.exec_start = rq_clock_task(rq);
>
> +       if (!rq->idle_stamp)
> +               rq->idle_stamp = rq_clock(rq);
> +
>         /*
>          * rq is about to be idle, check if we need to update the
>          * lost_idle_time of clock_pelt
> --
> 2.34.1
>


end of thread, other threads:[~2026-04-23  7:47 UTC | newest]
