From: Tejun Heo <tj@kernel.org>
To: David Vernet <void@manifault.com>,
Andrea Righi <arighi@nvidia.com>,
Changwoo Min <changwoo@igalia.com>
Cc: sched-ext@lists.linux.dev, Emil Tsalapatis <emil@etsalapatis.com>,
linux-kernel@vger.kernel.org, Tejun Heo <tj@kernel.org>
Subject: [PATCH 7/6 sched_ext/for-7.1] sched_ext: Use schedule_deferred_locked() in schedule_dsq_reenq()
Date: Fri, 13 Mar 2026 08:37:53 -1000
Message-ID: <20260313183753.1825456-1-tj@kernel.org>
In-Reply-To: <20260313113114.1591010-1-tj@kernel.org>

schedule_dsq_reenq() always uses schedule_deferred(), which falls back to
irq_work. However, callers such as schedule_reenq_local() already hold the
target rq lock, and scx_bpf_dsq_reenq() may hold it via the ops callback.

Add a locked_rq parameter so that schedule_dsq_reenq() can use
schedule_deferred_locked() when the target rq lock is already held. The
locked variant can use cheaper paths (balance callbacks, wakeup hooks)
instead of always bouncing through irq_work.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
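(Review note, not part of the applied patch: a condensed sketch of the
logic this change adds to the tail of schedule_dsq_reenq(). The helper
name pick_deferred() is hypothetical; the real code open-codes the branch,
as the diff below shows.)

	/*
	 * Once the target rq is resolved, defer via the locked variant
	 * when the caller already holds that rq's lock; otherwise use
	 * the generic variant, which may bounce through irq_work.
	 */
	static void pick_deferred(struct rq *rq, struct rq *locked_rq)
	{
		if (rq == locked_rq)
			schedule_deferred_locked(rq);
		else
			schedule_deferred(rq);
	}
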
kernel/sched/ext.c | 24 +++++++++++++++---------
1 file changed, 15 insertions(+), 9 deletions(-)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index f7def0c57b51..a87d99ffe1fe 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -1218,8 +1218,10 @@ static void schedule_deferred_locked(struct rq *rq)
}
static void schedule_dsq_reenq(struct scx_sched *sch, struct scx_dispatch_q *dsq,
- u64 reenq_flags)
+ u64 reenq_flags, struct rq *locked_rq)
{
+ struct rq *rq;
+
/*
* Allowing reenqueues doesn't make sense while bypassing. This also
* prevents new reenqueues from being scheduled on dead scheds.
@@ -1228,7 +1230,8 @@ static void schedule_dsq_reenq(struct scx_sched *sch, struct scx_dispatch_q *dsq
return;
if (dsq->id == SCX_DSQ_LOCAL) {
- struct rq *rq = container_of(dsq, struct rq, scx.local_dsq);
+ rq = container_of(dsq, struct rq, scx.local_dsq);
+
struct scx_sched_pcpu *sch_pcpu = per_cpu_ptr(sch->pcpu, cpu_of(rq));
struct scx_deferred_reenq_local *drl = &sch_pcpu->deferred_reenq_local;
@@ -1247,10 +1250,9 @@ static void schedule_dsq_reenq(struct scx_sched *sch, struct scx_dispatch_q *dsq
list_move_tail(&drl->node, &rq->scx.deferred_reenq_locals);
WRITE_ONCE(drl->flags, drl->flags | reenq_flags);
}
-
- schedule_deferred(rq);
} else if (!(dsq->id & SCX_DSQ_FLAG_BUILTIN)) {
- struct rq *rq = this_rq();
+ rq = this_rq();
+
struct scx_dsq_pcpu *dsq_pcpu = per_cpu_ptr(dsq->pcpu, cpu_of(rq));
struct scx_deferred_reenq_user *dru = &dsq_pcpu->deferred_reenq_user;
@@ -1269,11 +1271,15 @@ static void schedule_dsq_reenq(struct scx_sched *sch, struct scx_dispatch_q *dsq
list_move_tail(&dru->node, &rq->scx.deferred_reenq_users);
WRITE_ONCE(dru->flags, dru->flags | reenq_flags);
}
-
- schedule_deferred(rq);
} else {
scx_error(sch, "DSQ 0x%llx not allowed for reenq", dsq->id);
+ return;
}
+
+ if (rq == locked_rq)
+ schedule_deferred_locked(rq);
+ else
+ schedule_deferred(rq);
}
static void schedule_reenq_local(struct rq *rq, u64 reenq_flags)
@@ -1283,7 +1289,7 @@ static void schedule_reenq_local(struct rq *rq, u64 reenq_flags)
if (WARN_ON_ONCE(!root))
return;
- schedule_dsq_reenq(root, &rq->scx.local_dsq, reenq_flags);
+ schedule_dsq_reenq(root, &rq->scx.local_dsq, reenq_flags, rq);
}
/**
@@ -8845,7 +8851,7 @@ __bpf_kfunc void scx_bpf_dsq_reenq(u64 dsq_id, u64 reenq_flags,
reenq_flags |= SCX_REENQ_ANY;
dsq = find_dsq_for_dispatch(sch, this_rq(), dsq_id, smp_processor_id());
- schedule_dsq_reenq(sch, dsq, reenq_flags);
+ schedule_dsq_reenq(sch, dsq, reenq_flags, scx_locked_rq());
}
/**
--
2.53.0