From: Peter Zijlstra <peterz@infradead.org>
To: mingo@kernel.org
Cc: peterz@infradead.org, juri.lelli@redhat.com,
vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
vschneid@redhat.com, linux-kernel@vger.kernel.org,
wangtao554@huawei.com, quzicheng@huawei.com,
kprateek.nayak@amd.com, dsmythies@telus.net,
shubhang@os.amperecomputing.com
Subject: [PATCH v2 4/7] sched/fair: Fix lag clamp
Date: Thu, 19 Feb 2026 08:58:44 +0100 [thread overview]
Message-ID: <20260219080624.830623197@infradead.org> (raw)
In-Reply-To: <20260219075840.162631716@infradead.org>

Vincent reported that he was seeing undue lag clamping in a mixed
slice workload. Implement the max_slice tracking as per the todo
comment.

Fixes: 147f3efaa241 ("sched/fair: Implement an EEVDF-like scheduling policy")
Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Tested-by: Shubhang Kaushik <shubhang@os.amperecomputing.com>
Link: https://patch.msgid.link/20250422101628.GA33555@noisy.programming.kicks-ass.net
---
 include/linux/sched.h |  1 +
 kernel/sched/fair.c   | 39 +++++++++++++++++++++++++++++++++++----
 2 files changed, 36 insertions(+), 4 deletions(-)
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -574,6 +574,7 @@ struct sched_entity {
 	u64				deadline;
 	u64				min_vruntime;
 	u64				min_slice;
+	u64				max_slice;
 
 	struct list_head		group_node;
 	unsigned char			on_rq;
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -748,6 +748,8 @@ u64 avg_vruntime(struct cfs_rq *cfs_rq)
 	return cfs_rq->zero_vruntime;
 }
 
+static inline u64 cfs_rq_max_slice(struct cfs_rq *cfs_rq);
+
 /*
  * lag_i = S - s_i = w_i * (V - v_i)
  *
@@ -761,17 +763,16 @@ u64 avg_vruntime(struct cfs_rq *cfs_rq)
  * EEVDF gives the following limit for a steady state system:
  *
  *   -r_max < lag < max(r_max, q)
- *
- * XXX could add max_slice to the augmented data to track this.
  */
 static void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
+	u64 max_slice = cfs_rq_max_slice(cfs_rq) + TICK_NSEC;
 	s64 vlag, limit;
 
 	WARN_ON_ONCE(!se->on_rq);
 
 	vlag = avg_vruntime(cfs_rq) - se->vruntime;
-	limit = calc_delta_fair(max_t(u64, 2*se->slice, TICK_NSEC), se);
+	limit = calc_delta_fair(max_slice, se);
 
 	se->vlag = clamp(vlag, -limit, limit);
 }
@@ -829,6 +830,21 @@ static inline u64 cfs_rq_min_slice(struc
 	return min_slice;
 }
 
+static inline u64 cfs_rq_max_slice(struct cfs_rq *cfs_rq)
+{
+	struct sched_entity *root = __pick_root_entity(cfs_rq);
+	struct sched_entity *curr = cfs_rq->curr;
+	u64 max_slice = 0ULL;
+
+	if (curr && curr->on_rq)
+		max_slice = curr->slice;
+
+	if (root)
+		max_slice = max(max_slice, root->max_slice);
+
+	return max_slice;
+}
+
 static inline bool __entity_less(struct rb_node *a, const struct rb_node *b)
 {
 	return entity_before(__node_2_se(a), __node_2_se(b));
@@ -853,6 +869,15 @@ static inline void __min_slice_update(st
 	}
 }
 
+static inline void __max_slice_update(struct sched_entity *se, struct rb_node *node)
+{
+	if (node) {
+		struct sched_entity *rse = __node_2_se(node);
+		if (rse->max_slice > se->max_slice)
+			se->max_slice = rse->max_slice;
+	}
+}
+
 /*
  * se->min_vruntime = min(se->vruntime, {left,right}->min_vruntime)
  */
@@ -860,6 +885,7 @@ static inline bool min_vruntime_update(s
 {
 	u64 old_min_vruntime = se->min_vruntime;
 	u64 old_min_slice = se->min_slice;
+	u64 old_max_slice = se->max_slice;
 	struct rb_node *node = &se->run_node;
 
 	se->min_vruntime = se->vruntime;
@@ -870,8 +896,13 @@ static inline bool min_vruntime_update(s
 	__min_slice_update(se, node->rb_right);
 	__min_slice_update(se, node->rb_left);
 
+	se->max_slice = se->slice;
+	__max_slice_update(se, node->rb_right);
+	__max_slice_update(se, node->rb_left);
+
 	return se->min_vruntime == old_min_vruntime &&
-	       se->min_slice == old_min_slice;
+	       se->min_slice == old_min_slice &&
+	       se->max_slice == old_max_slice;
 }
 
 RB_DECLARE_CALLBACKS(static, min_vruntime_cb, struct sched_entity,
2026-02-19 7:58 [PATCH v2 0/7] sched: Various reweight_entity() fixes Peter Zijlstra
2026-02-19 7:58 ` [PATCH v2 1/7] sched/fair: Fix zero_vruntime tracking Peter Zijlstra
2026-02-23 10:56 ` Vincent Guittot
2026-02-23 13:09 ` Dietmar Eggemann
2026-02-23 14:15 ` Peter Zijlstra
2026-02-24 8:53 ` Dietmar Eggemann
2026-02-24 9:02 ` Peter Zijlstra
2026-03-28 5:44 ` John Stultz
2026-03-28 17:04 ` Steven Rostedt
2026-03-30 17:58 ` John Stultz
2026-03-30 18:27 ` Steven Rostedt
2026-03-30 9:43 ` Peter Zijlstra
2026-03-30 17:49 ` John Stultz
2026-03-30 10:10 ` Peter Zijlstra
2026-03-30 14:37 ` K Prateek Nayak
2026-03-30 14:40 ` Peter Zijlstra
2026-03-30 15:50 ` K Prateek Nayak
2026-03-30 19:11 ` Peter Zijlstra
2026-03-31 0:38 ` K Prateek Nayak
2026-03-31 4:58 ` K Prateek Nayak
2026-03-31 7:08 ` Peter Zijlstra
2026-03-31 7:14 ` Peter Zijlstra
2026-03-31 8:49 ` K Prateek Nayak
2026-03-31 9:29 ` Peter Zijlstra
2026-03-31 12:20 ` Peter Zijlstra
2026-03-31 16:14 ` Peter Zijlstra
2026-03-31 17:02 ` K Prateek Nayak
2026-03-31 22:40 ` John Stultz
2026-03-30 19:40 ` John Stultz
2026-03-30 19:43 ` Peter Zijlstra
2026-03-30 21:45 ` John Stultz
2026-02-19 7:58 ` [PATCH v2 2/7] sched/fair: Only set slice protection at pick time Peter Zijlstra
2026-02-19 7:58 ` [PATCH v2 3/7] sched/eevdf: Update se->vprot in reweight_entity() Peter Zijlstra
2026-02-19 7:58 ` Peter Zijlstra [this message]
2026-02-23 10:23 ` [PATCH v2 4/7] sched/fair: Fix lag clamp Dietmar Eggemann
2026-02-23 10:57 ` Vincent Guittot
2026-02-19 7:58 ` [PATCH v2 5/7] sched/fair: Increase weight bits for avg_vruntime Peter Zijlstra
2026-02-23 10:56 ` Vincent Guittot
2026-02-23 11:51 ` Peter Zijlstra
2026-02-23 12:36 ` Peter Zijlstra
2026-02-23 13:06 ` Vincent Guittot
2026-03-30 7:55 ` K Prateek Nayak
2026-03-30 9:27 ` Peter Zijlstra
2026-04-02 5:28 ` K Prateek Nayak
2026-04-02 10:22 ` Peter Zijlstra
2026-04-02 10:56 ` K Prateek Nayak
2026-04-03 4:02 ` K Prateek Nayak
2026-04-07 12:00 ` Peter Zijlstra
2026-04-07 13:42 ` [tip: sched/core] sched/fair: Avoid overflow in enqueue_entity() tip-bot2 for K Prateek Nayak
2026-02-19 7:58 ` [PATCH v2 6/7] sched/fair: Revert 6d71a9c61604 ("sched/fair: Fix EEVDF entity placement bug causing scheduling lag") Peter Zijlstra
2026-02-23 10:57 ` Vincent Guittot
2026-03-24 10:01 ` William Montaz
2026-04-07 13:45 ` Peter Zijlstra
2026-02-19 7:58 ` [PATCH v2 7/7] sched/fair: Use full weight to __calc_delta() Peter Zijlstra
2026-02-23 10:57 ` Vincent Guittot