From: Andrea Righi <arighi@nvidia.com>
To: Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Juri Lelli <juri.lelli@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
Valentin Schneider <vschneid@redhat.com>,
Tejun Heo <tj@kernel.org>, David Vernet <void@manifault.com>,
Changwoo Min <changwoo@igalia.com>, Shuah Khan <shuah@kernel.org>,
Joel Fernandes <joelagnelf@nvidia.com>,
Christian Loehle <christian.loehle@arm.com>,
Emil Tsalapatis <emil@etsalapatis.com>,
Luigi De Matteis <ldematteis123@gmail.com>,
sched-ext@lists.linux.dev, bpf@vger.kernel.org,
linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 08/11] sched/deadline: Account ext server bandwidth
Date: Wed, 29 Oct 2025 20:08:45 +0100 [thread overview]
Message-ID: <20251029191111.167537-9-arighi@nvidia.com> (raw)
In-Reply-To: <20251029191111.167537-1-arighi@nvidia.com>

Always account for both the ext_server and the fair_server bandwidth,
especially during CPU hotplug operations.

Ignoring either one can lead to imbalances in total_bw when sched_ext
schedulers are active and CPUs are brought online or offline.

Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/deadline.c | 54 +++++++++++++++++++++++++++++++----------
kernel/sched/topology.c | 5 ++++
2 files changed, 46 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 6ecfaaa1f912d..f786174a126c8 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2994,6 +2994,36 @@ void dl_add_task_root_domain(struct task_struct *p)
task_rq_unlock(rq, p, &rf);
}
+static void dl_server_add_bw(struct root_domain *rd, int cpu)
+{
+ struct sched_dl_entity *dl_se;
+
+ dl_se = &cpu_rq(cpu)->fair_server;
+ if (dl_server(dl_se))
+ __dl_add(&rd->dl_bw, dl_se->dl_bw, dl_bw_cpus(cpu));
+
+#ifdef CONFIG_SCHED_CLASS_EXT
+ dl_se = &cpu_rq(cpu)->ext_server;
+ if (dl_server(dl_se))
+ __dl_add(&rd->dl_bw, dl_se->dl_bw, dl_bw_cpus(cpu));
+#endif
+}
+
+static u64 dl_server_read_bw(int cpu)
+{
+ u64 dl_bw = 0;
+
+ if (cpu_rq(cpu)->fair_server.dl_server)
+ dl_bw += cpu_rq(cpu)->fair_server.dl_bw;
+
+#ifdef CONFIG_SCHED_CLASS_EXT
+ if (cpu_rq(cpu)->ext_server.dl_server)
+ dl_bw += cpu_rq(cpu)->ext_server.dl_bw;
+#endif
+
+ return dl_bw;
+}
+
void dl_clear_root_domain(struct root_domain *rd)
{
int i;
@@ -3013,10 +3043,9 @@ void dl_clear_root_domain(struct root_domain *rd)
* them, we need to account for them here explicitly.
*/
for_each_cpu(i, rd->span) {
- struct sched_dl_entity *dl_se = &cpu_rq(i)->fair_server;
-
- if (dl_server(dl_se) && cpu_active(i))
- __dl_add(&rd->dl_bw, dl_se->dl_bw, dl_bw_cpus(i));
+ if (!cpu_active(i))
+ continue;
+ dl_server_add_bw(rd, i);
}
}
@@ -3513,7 +3542,7 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
unsigned long flags, cap;
struct dl_bw *dl_b;
bool overflow = 0;
- u64 fair_server_bw = 0;
+ u64 dl_server_bw = 0;
rcu_read_lock_sched();
dl_b = dl_bw_of(cpu);
@@ -3546,27 +3575,26 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
cap -= arch_scale_cpu_capacity(cpu);
/*
- * cpu is going offline and NORMAL tasks will be moved away
- * from it. We can thus discount dl_server bandwidth
- * contribution as it won't need to be servicing tasks after
- * the cpu is off.
+ * cpu is going offline and NORMAL and EXT tasks will be
+ * moved away from it. We can thus discount dl_server
+ * bandwidth contribution as it won't need to be servicing
+ * tasks after the cpu is off.
*/
- if (cpu_rq(cpu)->fair_server.dl_server)
- fair_server_bw = cpu_rq(cpu)->fair_server.dl_bw;
+ dl_server_bw = dl_server_read_bw(cpu);
/*
* Not much to check if no DEADLINE bandwidth is present.
* dl_servers we can discount, as tasks will be moved out the
* offlined CPUs anyway.
*/
- if (dl_b->total_bw - fair_server_bw > 0) {
+ if (dl_b->total_bw - dl_server_bw > 0) {
/*
* Leaving at least one CPU for DEADLINE tasks seems a
* wise thing to do. As said above, cpu is not offline
* yet, so account for that.
*/
if (dl_bw_cpus(cpu) - 1)
- overflow = __dl_overflow(dl_b, cap, fair_server_bw, 0);
+ overflow = __dl_overflow(dl_b, cap, dl_server_bw, 0);
else
overflow = 1;
}
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 711076aa49801..1ec8e74b80219 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -508,6 +508,11 @@ void rq_attach_root(struct rq *rq, struct root_domain *rd)
if (rq->fair_server.dl_server)
__dl_server_attach_root(&rq->fair_server, rq);
+#ifdef CONFIG_SCHED_CLASS_EXT
+ if (rq->ext_server.dl_server)
+ __dl_server_attach_root(&rq->ext_server, rq);
+#endif
+
rq_unlock_irqrestore(rq, &rf);
if (old_rd)
--
2.51.2