[PATCH] sched/idle: Fix avg_idle saturation by establishing symmetric idle entry hook
From: Masahito S @ 2026-04-17 2:06 UTC
To: mingo, peterz, juri.lelli, vincent.guittot
Cc: dietmar.eggemann, rostedt, bsegall, mgorman, vschneid,
kprateek.nayak, linux-kernel, Masahito S

update_rq_avg_idle(), called from put_prev_task_idle(), computes
rq->avg_idle from the delta rq_clock(rq) - rq->idle_stamp. However,
idle_stamp is only set by sched_balance_newidle() when a CPU enters
CPU_NEWLY_IDLE through the fair class path. When the idle task is
preempted without sched_balance_newidle() having run (boot, hotplug,
sched class transitions), idle_stamp remains 0, so the delta equals
rq_clock(rq) itself, a value in the billions of nanoseconds, which
saturates avg_idle at 2 * max_idle_balance_cost.
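
To see the failure mode concretely, here is a minimal user-space model
of the update. This is a sketch: the 1/8-weight running average mirrors
the update_avg() helper in kernel/sched, and all constants are
illustrative rather than the kernel's actual values.

#include <stdio.h>
#include <stdint.h>

/* Same shape as kernel/sched's update_avg(): avg += (sample - avg) / 8 */
static void update_avg(uint64_t *avg, uint64_t sample)
{
	int64_t diff = (int64_t)(sample - *avg);

	*avg += diff / 8;
}

int main(void)
{
	uint64_t clock      = 5ULL * 1000000000ULL; /* 5 s of uptime, in ns */
	uint64_t idle_stamp = 0;                    /* never set: the bug */
	uint64_t avg_idle   = 500000;               /* 0.5 ms starting point */
	uint64_t max        = 2 * 500000;           /* 2 * max_idle_balance_cost */
	uint64_t delta      = clock - idle_stamp;   /* == clock: ~5e9 ns */

	update_avg(&avg_idle, delta);
	if (avg_idle > max)
		avg_idle = max;                     /* instantly pinned at the cap */

	printf("delta=%llu ns, avg_idle=%llu ns (cap=%llu ns)\n",
	       (unsigned long long)delta,
	       (unsigned long long)avg_idle,
	       (unsigned long long)max);
	return 0;
}

A single such sample is enough to pin avg_idle at the cap, and each
further zero-stamp exit from idle keeps it there.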

This inflated avg_idle prevents sched_balance_newidle() from taking
its early return (the avg_idle < max_newidle_lb_cost check in
kernel/sched/fair.c), making newidle balancing overly aggressive. The
resulting excess of newidle migrations overrides wake-time placement
decisions made by select_idle_sibling(), degrading the cache locality
that careful placement (recent_used_cpu, select_idle_core(), etc.) is
designed to preserve.
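
For reference, the gate in question looks roughly like this
(paraphrased from sched_balance_newidle(), not a verbatim quote):

	/*
	 * Skip newidle balancing when this CPU's idle periods are too
	 * short to amortize the cost of a balance pass.
	 */
	if (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)
		goto out;

Since rq->max_idle_balance_cost is maintained to be at least as large
as any single domain's max_newidle_lb_cost, a saturated avg_idle of
2 * max_idle_balance_cost can never drop below the cost estimate, so
this early return is effectively dead while the saturation persists.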

Fix this by:

1. Adding an idle_stamp validity guard to update_rq_avg_idle(), so
   that a zero idle_stamp is never used as a timestamp.

2. Setting idle_stamp in set_next_task_idle() when it has not already
   been set by sched_balance_newidle(). This establishes a symmetric
   idle entry/exit contract: set_next_task_idle() marks the start of
   the idle period, and put_prev_task_idle() measures and records it
   via update_rq_avg_idle().

The entry hook preserves idle_stamp if sched_balance_newidle() has
already set it, maintaining the existing semantics whereby the
duration of the balance attempt is included in the idle measurement.
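
Schematically, the idle entry/exit flow with this patch applied:

  fair-path idle entry:
    sched_balance_newidle()    sets rq->idle_stamp
      set_next_task_idle()     stamp already set -> preserved

  other idle entry (boot, hotplug, class transitions):
    set_next_task_idle()       stamp is 0 -> set it here

  idle exit (both cases):
    put_prev_task_idle()
      update_rq_avg_idle()     delta = rq_clock(rq) - idle_stamp,
                               folded into rq->avg_idle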
---
 kernel/sched/core.c | 3 +++
 kernel/sched/idle.c | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 496dff740d..ec801f731c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3633,6 +3633,9 @@ static inline void ttwu_do_wakeup(struct task_struct *p)
 void update_rq_avg_idle(struct rq *rq)
 {
 	u64 delta = rq_clock(rq) - rq->idle_stamp;
 	u64 max = 2*rq->max_idle_balance_cost;
+
+	if (!rq->idle_stamp)
+		return;
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index a83be0c834..9ceb7e6224 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -491,6 +491,9 @@ static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
 	schedstat_inc(rq->sched_goidle);
 	next->se.exec_start = rq_clock_task(rq);
 
+	if (!rq->idle_stamp)
+		rq->idle_stamp = rq_clock(rq);
+
 	/*
 	 * rq is about to be idle, check if we need to update the
 	 * lost_idle_time of clock_pelt
--
2.34.1