* [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server
@ 2026-04-30 21:38 Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 01/29] sched/deadline: Fix replenishment logic for non-deferred servers Yuri Andriaccio
` (28 more replies)
0 siblings, 29 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
Hello,
This is v5 of the Hierarchical Constant Bandwidth Server, aiming to replace
the current RT_GROUP_SCHED mechanism with something more robust and
theoretically sound. The patchset has been presented at OSPM25 and OSPM26
(https://retis.sssup.it/ospm-summit/), and a summary of its inner workings can
be found at https://lwn.net/Articles/1021332/ . The previous versions of this
patchset are linked at the bottom of this message; in particular, the v1 cover
letter describes in more detail what this patchset is all about and how it is
implemented.
This v5 addresses the reviewers' comments and introduces the following
notable changes:
- Update to kernel version 7.0.
- General refactorings, cleanups, extensive use of lock guards for cleaner code.
- Add missing RCU read-side critical sections in deadline.c and rt.c.
- Include fix for non-deferred deadline server logic (Patch 1).
- Account HCBS deadline servers along with all the active tasks when the servers
  are active. This ensures correct behaviour for servers that have just been
  replenished but have no tasks to run.
- Update and reuse __checkparam_dl to also check the HCBS servers' parameters.
- Update the default sysctl_sched_rt_runtime to 1s, matching
  sysctl_sched_rt_period. These parameters only manage the bandwidth of
  deadline tasks and servers, not the actual parameters of the fair (and ext)
  servers.
- Add early release of cgroup resources in unregister_rt_sched_group, reducing
  from two to one the number of RCU grace periods to wait before the reserved
  deadline bandwidth is released.
- Remove rt_server_try_pull, as it is now possible to pull tasks directly in
  rt_server_pick on server replenishment.
- Remove the dl_server_stop call when emptying a cgroup's runqueue, as the
  server is stopped anyway on the next server pick (if the pull operation fails).
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Summary of the patches:
1) Replenishment logic fix for non-deferred deadline servers
2-5) Preparation patches, so that the RT classes' code can be used both
for normal and cgroup scheduling.
6-17) Implementation of HCBS, with no migration support and a one-level
hierarchy only. The old RT_GROUP_SCHED code is removed.
18-19) Remove cgroups v1 in favour of v2.
20) Add support for deeper hierarchies.
21) Update default bandwidth for deadline entities.
22-26) Add support for tasks migration.
27) Documentation for HCBS.
28-29) Optional patches adding debug BUG_ONs.
Updates from v4:
- Rebase to latest tip/master.
- General rebasing/cleanup.
- Update default sysctl_sched_rt_runtime to 1s, same as the period.
- Fix non-deferred deadline server replenishment logic.
- Add missing RCU read sections.
- Account HCBS servers along with their tasks when the servers are active.
- Release bandwidth resources early in unregister_rt_sched_group.
- Drop server_try_pull_task as it is now redundant.
- Remove dl_server_stop call in dequeue_task_rt.
- Update to reuse __checkparam_dl for deadline servers.
Updates from v3:
- Rebase to latest tip/master.
- General rebasing/cleanup.
- Add Documentation.
- Define **live** and **active** groups.
- Introduce server_try_pull_task in place of the removed server_has_task.
- Introduce RELEASE_LOCK helper macro for guard-based locking.
- Update inc/dec_dl_tasks to account for served runqueues regardless of the
server type.
- Fix computation of new bandwidth values in dl_init_tg.
- Fix check in dl_check_tg to use capacity scaling.
- Fix wakeup_preempt_rt to check if curr is a DEADLINE task.
Updates from v2:
- Rebase to latest tip/master.
- Remove fair-servers' bw reclaiming.
- Fix a check which prevented execution of wakeup_preempt code.
- Fix a priority check in group_pull_rt_task between tasks of different groups.
- Rework allocation/deallocation code for rt-cgroups.
- Update signatures for some group related migration functions.
- Add documentation for wakeup_preempt preemption rules.
Updates from v1:
- Rebase to latest tip/master.
- Add migration code.
- Split big patches for more readability.
- Refactor code to use guarded locks where applicable.
- Remove unnecessary patches from v1 which have been addressed differently by
mainline updates.
- Remove unnecessary checks and general code cleanup.
Notes:
Patch 1 has already been submitted for review at:
https://lore.kernel.org/all/20260420163410.20808-1-yurand2000@gmail.com/
Patches 28-29 are completely optional and are not meant to be included in the
final patchset: they just add some invasive BUG_ONs that assert preconditions
expected by certain function calls.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Testing v5:
The patchset has been tested with a suite of tests tailored to stress all the
implemented functionalities.
The tests are available at https://github.com/Yurand2000/HCBS-Test-Suite .
Refer to the README of the repository for more details.
Follow these steps to test HCBS v5:
- Get a kernel with the HCBS patches up and running. Any kernel/distro should
  work effortlessly.
- Get, compile and _install_ the tests.
- Run the `go_rt.sh` script to set the frequency of the CPUs to a fixed value
and disable hyperthreading and power saving features.
- Run the `run_tests.sh full` script to run the whole test suite.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Future Work:
We think the current patchset is stable enough. Our test suite demonstrates,
on our limited hardware, that the kernel does not throw warnings and that it is
actually possible to guarantee time reservations and isolation among tenants.
We hope that the pre-migration patches (2-19) have reached a decent final form;
we of course expect comments on the migration-related code (22-26) and on the
other patches (1, 20-21).
Since the updates addressing the latest comments were already in place, we
decided to release v5 without the multi-CPU feature presented at OSPM26, as
that code is not yet fully tested and cleaned up; we hope to include it in a
future v6 RFC.
Additional future work:
- capacity-aware bandwidth reservation.
- CPU hotplug/hotunplug management.
Have a nice day,
Yuri
v1: https://lore.kernel.org/all/20250605071412.139240-1-yurand2000@gmail.com/
v2: https://lore.kernel.org/all/20250731105543.40832-1-yurand2000@gmail.com/
v3: https://lore.kernel.org/all/20250929092221.10947-1-yurand2000@gmail.com/
v4: https://lore.kernel.org/all/20251201124205.11169-1-yurand2000@gmail.com/
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Yuri Andriaccio (13):
sched/deadline: Fix replenishment logic for non-deferred servers
sched/rt: Disable RT_GROUP_SCHED
sched/rt: Remove unnecessary runqueue pointer in struct rt_rq
sched/rt: Implement dl-server operations for rt-cgroups
sched/rt: Update task event callbacks for HCBS scheduling
sched/rt: Allow zeroing the runtime of the root control group
sched/rt: Remove support for cgroups-v1
sched/rt: Update default bandwidth for real-time tasks to ONE
sched/rt: Try pull task on empty server pick.
sched/core: Execute enqueued balance callbacks after
migrate_disable_switch
Documentation: Update documentation for real-time cgroups
sched/rt: Add debug BUG_ONs for pre-migration code
sched/rt: Add debug BUG_ONs in migration code
luca abeni (16):
sched/deadline: Do not access dl_se->rq directly
sched/deadline: Distinguish between dl_rq and my_q
sched/rt: Pass an rt_rq instead of an rq where needed
sched/rt: Move functions from rt.c to sched.h
sched/rt: Introduce HCBS specific structs in task_group
sched/core: Initialize HCBS specific structures
sched/deadline: Add dl_init_tg
sched/rt: Add {alloc/unregister/free}_rt_sched_group
sched/deadline: Account rt-cgroups bandwidth in deadline tasks
schedulability tests.
sched/rt: Update rt-cgroup schedulability checks
sched/rt: Remove old RT_GROUP_SCHED data structures
sched/core: Cgroup v2 support
sched/deadline: Allow deeper hierarchies of RT cgroups
sched/rt: Add rt-cgroup migration functions
sched/rt: Hook HCBS migration functions
sched/core: Execute enqueued balance callbacks when changing allowed
CPUs
Documentation/scheduler/sched-rt-group.rst | 504 ++-
include/linux/rcupdate.h | 1 +
include/linux/sched.h | 10 +-
kernel/sched/autogroup.c | 4 +-
kernel/sched/core.c | 74 +-
kernel/sched/deadline.c | 251 +-
kernel/sched/debug.c | 6 -
kernel/sched/ext.c | 4 +-
kernel/sched/fair.c | 4 +-
kernel/sched/rt.c | 3240 ++++++++++----------
kernel/sched/sched.h | 178 +-
kernel/sched/syscalls.c | 9 +-
12 files changed, 2393 insertions(+), 1892 deletions(-)
base-commit: 028ef9c96e96197026887c0f092424679298aae8
--
2.53.0
* [RFC PATCH v5 01/29] sched/deadline: Fix replenishment logic for non-deferred servers
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 02/29] sched/deadline: Do not access dl_se->rq directly Yuri Andriaccio
` (27 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
Enqueue and replenish non-deferred deadline servers when their runtime is
exhausted and the replenishment timer could not be started because it is
too close to the wake-up instant.
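For reference, the resulting branch in update_curr_dl_se() after this change
looks roughly as follows (context trimmed, comments added here only for
illustration):
	if (unlikely(is_dl_boosted(dl_se) || !start_dl_timer(dl_se))) {
		if (dl_server(dl_se)) {
			if (dl_se->dl_defer) {
				/* deferred server: open a new period and re-arm the timer */
				replenish_dl_new_period(dl_se, rq);
				start_dl_timer(dl_se);
			} else {
				/* non-deferred server: replenish and re-enqueue immediately */
				enqueue_dl_entity(dl_se, ENQUEUE_REPLENISH);
			}
		} else {
			enqueue_task_dl(rq, dl_task_of(dl_se), ENQUEUE_REPLENISH);
		}
	}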
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/deadline.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 674de6a48551..fb7b62e8190e 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1523,8 +1523,12 @@ static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se, s64
if (unlikely(is_dl_boosted(dl_se) || !start_dl_timer(dl_se))) {
if (dl_server(dl_se)) {
- replenish_dl_new_period(dl_se, rq);
- start_dl_timer(dl_se);
+ if (dl_se->dl_defer) {
+ replenish_dl_new_period(dl_se, rq);
+ start_dl_timer(dl_se);
+ } else {
+ enqueue_dl_entity(dl_se, ENQUEUE_REPLENISH);
+ }
} else {
enqueue_task_dl(rq, dl_task_of(dl_se), ENQUEUE_REPLENISH);
}
--
2.53.0
* [RFC PATCH v5 02/29] sched/deadline: Do not access dl_se->rq directly
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 01/29] sched/deadline: Fix replenishment logic for non-deferred servers Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 03/29] sched/deadline: Distinguish between dl_rq and my_q Yuri Andriaccio
` (26 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Make the deadline.c code access a scheduling entity's runqueue through
rq_of_dl_se() (or the rq already available in the caller) instead of
dereferencing the rq pointer saved in the sched_dl_entity data structure.
This allows future patches to store runqueues other than the global ones in
sched_dl_entity.
Move the dl_server_apply_params() call in sched_init_dl_servers() after the
dl_server flag is set, as rq_of_dl_se() returns the correct runqueue only
when that flag is set.
Add a WARN_ON on the return value of dl_server_apply_params in
sched_init_dl_servers as this function may fail if the kernel is not
configured correctly.
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/deadline.c | 34 +++++++++++++++++-----------------
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index fb7b62e8190e..ce80d9c08e31 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -863,7 +863,7 @@ static void replenish_dl_entity(struct sched_dl_entity *dl_se)
* and arm the defer timer.
*/
if (dl_se->dl_defer && !dl_se->dl_defer_running &&
- dl_time_before(rq_clock(dl_se->rq), dl_se->deadline - dl_se->runtime)) {
+ dl_time_before(rq_clock(rq), dl_se->deadline - dl_se->runtime)) {
if (!is_dl_boosted(dl_se)) {
/*
@@ -1180,11 +1180,11 @@ static enum hrtimer_restart dl_server_timer(struct hrtimer *timer, struct sched_
* of time. The dl_server_min_res serves as a limit to avoid
* forwarding the timer for a too small amount of time.
*/
- if (dl_time_before(rq_clock(dl_se->rq),
+ if (dl_time_before(rq_clock(rq),
(dl_se->deadline - dl_se->runtime - dl_server_min_res))) {
/* reset the defer timer */
- fw = dl_se->deadline - rq_clock(dl_se->rq) - dl_se->runtime;
+ fw = dl_se->deadline - rq_clock(rq) - dl_se->runtime;
hrtimer_forward_now(timer, ns_to_ktime(fw));
return HRTIMER_RESTART;
@@ -1195,7 +1195,7 @@ static enum hrtimer_restart dl_server_timer(struct hrtimer *timer, struct sched_
enqueue_dl_entity(dl_se, ENQUEUE_REPLENISH);
- if (!dl_task(dl_se->rq->curr) || dl_entity_preempt(dl_se, &dl_se->rq->curr->dl))
+ if (!dl_task(rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl))
resched_curr(rq);
__push_dl_task(rq, rf);
@@ -1490,7 +1490,7 @@ static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se, s64
hrtimer_try_to_cancel(&dl_se->dl_timer);
- replenish_dl_new_period(dl_se, dl_se->rq);
+ replenish_dl_new_period(dl_se, rq);
if (idle)
dl_se->dl_defer_idle = 1;
@@ -1584,14 +1584,14 @@ static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se, s64
void dl_server_update_idle(struct sched_dl_entity *dl_se, s64 delta_exec)
{
if (dl_se->dl_server_active && dl_se->dl_runtime && dl_se->dl_defer)
- update_curr_dl_se(dl_se->rq, dl_se, delta_exec);
+ update_curr_dl_se(rq_of_dl_se(dl_se), dl_se, delta_exec);
}
void dl_server_update(struct sched_dl_entity *dl_se, s64 delta_exec)
{
/* 0 runtime = fair server disabled */
if (dl_se->dl_server_active && dl_se->dl_runtime)
- update_curr_dl_se(dl_se->rq, dl_se, delta_exec);
+ update_curr_dl_se(rq_of_dl_se(dl_se), dl_se, delta_exec);
}
/*
@@ -1800,7 +1800,7 @@ void dl_server_update(struct sched_dl_entity *dl_se, s64 delta_exec)
*/
void dl_server_start(struct sched_dl_entity *dl_se)
{
- struct rq *rq = dl_se->rq;
+ struct rq *rq;
dl_se->dl_defer_idle = 0;
if (!dl_server(dl_se) || dl_se->dl_server_active || !dl_se->dl_runtime)
@@ -1809,15 +1809,15 @@ void dl_server_start(struct sched_dl_entity *dl_se)
/*
* Update the current task to 'now'.
*/
+ rq = rq_of_dl_se(dl_se);
rq->donor->sched_class->update_curr(rq);
-
if (WARN_ON_ONCE(!cpu_online(cpu_of(rq))))
return;
dl_se->dl_server_active = 1;
enqueue_dl_entity(dl_se, ENQUEUE_WAKEUP);
- if (!dl_task(dl_se->rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl))
- resched_curr(dl_se->rq);
+ if (!dl_task(rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl))
+ resched_curr(rq);
}
void dl_server_stop(struct sched_dl_entity *dl_se)
@@ -1859,9 +1859,9 @@ void sched_init_dl_servers(void)
WARN_ON(dl_server(dl_se));
- dl_server_apply_params(dl_se, runtime, period, 1);
-
dl_se->dl_server = 1;
+ WARN_ON(dl_server_apply_params(dl_se, runtime, period, 1));
+
dl_se->dl_defer = 1;
setup_new_dl_entity(dl_se);
@@ -1870,9 +1870,9 @@ void sched_init_dl_servers(void)
WARN_ON(dl_server(dl_se));
- dl_server_apply_params(dl_se, runtime, period, 1);
-
dl_se->dl_server = 1;
+ WARN_ON(dl_server_apply_params(dl_se, runtime, period, 1));
+
dl_se->dl_defer = 1;
setup_new_dl_entity(dl_se);
#endif
@@ -1898,7 +1898,7 @@ int dl_server_apply_params(struct sched_dl_entity *dl_se, u64 runtime, u64 perio
{
u64 old_bw = init ? 0 : to_ratio(dl_se->dl_period, dl_se->dl_runtime);
u64 new_bw = to_ratio(period, runtime);
- struct rq *rq = dl_se->rq;
+ struct rq *rq = rq_of_dl_se(dl_se);
int cpu = cpu_of(rq);
struct dl_bw *dl_b;
unsigned long cap;
@@ -1974,7 +1974,7 @@ static enum hrtimer_restart inactive_task_timer(struct hrtimer *timer)
p = dl_task_of(dl_se);
rq = task_rq_lock(p, &rf);
} else {
- rq = dl_se->rq;
+ rq = rq_of_dl_se(dl_se);
rq_lock(rq, &rf);
}
--
2.53.0
* [RFC PATCH v5 03/29] sched/deadline: Distinguish between dl_rq and my_q
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 01/29] sched/deadline: Fix replenishment logic for non-deferred servers Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 02/29] sched/deadline: Do not access dl_se->rq directly Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 04/29] sched/rt: Pass an rt_rq instead of an rq where needed Yuri Andriaccio
` (25 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Split the single runqueue pointer in sched_dl_entity into two separate
pointers, following the existing pattern used by sched_rt_entity:
- dl_rq: Points to the deadline runqueue where this entity is queued
(global runqueue).
- my_q: Points to the runqueue that this entity serves (for servers).
This distinction is currently redundant for the fair_server and ext_servers
(both point to the same CPU's structures), but is essential for future RT
cgroup support where deadline servers will be queued on the global dl_rq while
serving tasks from cgroup-specific runqueues.
Update rq_of_dl_se() to use container_of_const() to recover the global rq from
dl_rq, and update fair.c and ext.c to explicitly use my_q (local rq) when
accessing the served runqueue.
Update dl_server_init() to take a dl_rq pointer (used to retrieve the global
runqueue where the dl_server is scheduled) and an rq pointer (the local
runqueue served by the server).
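As a purely illustrative sketch (not taken from the later patches; group_dl_se,
group_rq and group_server_pick_task are invented names), a per-cgroup server
could then be wired up as:
	/*
	 * Hypothetical example only: the server entity is queued on the
	 * CPU's global dl_rq, while my_q points at the cgroup-local
	 * runqueue whose tasks the server picks from.
	 */
	dl_server_init(&group_dl_se, &cpu_rq(cpu)->dl, group_rq,
		       group_server_pick_task);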
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
include/linux/sched.h | 6 ++++--
kernel/sched/deadline.c | 10 +++++++---
kernel/sched/ext.c | 4 ++--
kernel/sched/fair.c | 4 ++--
kernel/sched/sched.h | 3 ++-
5 files changed, 17 insertions(+), 10 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 5a5d3dbc9cdf..eb8b57f689b5 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -733,9 +733,11 @@ struct sched_dl_entity {
* Bits for DL-server functionality. Also see the comment near
* dl_server_update().
*
- * @rq the runqueue this server is for
+ * @dl_rq the runqueue on which this entity is (to be) queued
+ * @my_q the runqueue "owned" by this entity
*/
- struct rq *rq;
+ struct dl_rq *dl_rq;
+ struct rq *my_q;
dl_server_pick_f server_pick_task;
#ifdef CONFIG_RT_MUTEXES
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index ce80d9c08e31..219fe2fd697d 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -75,10 +75,12 @@ static inline struct rq *rq_of_dl_rq(struct dl_rq *dl_rq)
static inline struct rq *rq_of_dl_se(struct sched_dl_entity *dl_se)
{
- struct rq *rq = dl_se->rq;
+ struct rq *rq;
if (!dl_server(dl_se))
rq = task_rq(dl_task_of(dl_se));
+ else
+ rq = container_of_const(dl_se->dl_rq, struct rq, dl);
return rq;
}
@@ -1833,10 +1835,12 @@ void dl_server_stop(struct sched_dl_entity *dl_se)
dl_se->dl_server_active = 0;
}
-void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
+void dl_server_init(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq,
+ struct rq *served_rq,
dl_server_pick_f pick_task)
{
- dl_se->rq = rq;
+ dl_se->dl_rq = dl_rq;
+ dl_se->my_q = served_rq;
dl_se->server_pick_task = pick_task;
}
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 064eaa76be4b..382152f8895f 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -2606,7 +2606,7 @@ ext_server_pick_task(struct sched_dl_entity *dl_se, struct rq_flags *rf)
if (!scx_enabled())
return NULL;
- return do_pick_task_scx(dl_se->rq, rf, true);
+ return do_pick_task_scx(dl_se->my_q, rf, true);
}
/*
@@ -2618,7 +2618,7 @@ void ext_server_init(struct rq *rq)
init_dl_entity(dl_se);
- dl_server_init(dl_se, rq, ext_server_pick_task);
+ dl_server_init(dl_se, &rq->dl, rq, ext_server_pick_task);
}
#ifdef CONFIG_SCHED_CORE
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ab4114712be7..8c951186d5e5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9059,7 +9059,7 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
static struct task_struct *
fair_server_pick_task(struct sched_dl_entity *dl_se, struct rq_flags *rf)
{
- return pick_task_fair(dl_se->rq, rf);
+ return pick_task_fair(dl_se->my_q, rf);
}
void fair_server_init(struct rq *rq)
@@ -9068,7 +9068,7 @@ void fair_server_init(struct rq *rq)
init_dl_entity(dl_se);
- dl_server_init(dl_se, rq, fair_server_pick_task);
+ dl_server_init(dl_se, &rq->dl, rq, fair_server_pick_task);
}
/*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1ef9ba480f51..8572bd12d0a2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -412,7 +412,8 @@ extern void dl_server_update_idle(struct sched_dl_entity *dl_se, s64 delta_exec)
extern void dl_server_update(struct sched_dl_entity *dl_se, s64 delta_exec);
extern void dl_server_start(struct sched_dl_entity *dl_se);
extern void dl_server_stop(struct sched_dl_entity *dl_se);
-extern void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
+extern void dl_server_init(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq,
+ struct rq *served_rq,
dl_server_pick_f pick_task);
extern void sched_init_dl_servers(void);
--
2.53.0
* [RFC PATCH v5 04/29] sched/rt: Pass an rt_rq instead of an rq where needed
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (2 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 03/29] sched/deadline: Distinguish between dl_rq and my_q Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 05/29] sched/rt: Move functions from rt.c to sched.h Yuri Andriaccio
` (24 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Make the rt.c code access the runqueue through the rt_rq data structure rather
than passing an rq pointer directly. This allows future patches to define
rt_rq data structures which do not refer only to the global runqueue, but also
to local cgroup runqueues (as rt_rq will not always be equal to &rq->rt).
Add checks in rt_queue_{push/pull}_tasks to make sure that the given rt_rq
object refers to a global runqueue and not any local one.
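As a hypothetical sketch (the actual assertion added by this patch may differ),
such a check can rely on the fact that the global rt_rq is the one embedded in
its struct rq:
	/* anything that is not &rq->rt must be a cgroup-local runqueue */
	WARN_ON_ONCE(rt_rq != &rq_of_rt_rq(rt_rq)->rt);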
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/rt.c | 99 ++++++++++++++++++++++++++---------------------
1 file changed, 54 insertions(+), 45 deletions(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index f69e1f16d923..597eaba00a20 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -370,9 +370,9 @@ static inline void rt_clear_overload(struct rq *rq)
cpumask_clear_cpu(rq->cpu, rq->rd->rto_mask);
}
-static inline int has_pushable_tasks(struct rq *rq)
+static inline int has_pushable_tasks(struct rt_rq *rt_rq)
{
- return !plist_head_empty(&rq->rt.pushable_tasks);
+ return !plist_head_empty(&rt_rq->pushable_tasks);
}
static DEFINE_PER_CPU(struct balance_callback, rt_push_head);
@@ -381,50 +381,54 @@ static DEFINE_PER_CPU(struct balance_callback, rt_pull_head);
static void push_rt_tasks(struct rq *);
static void pull_rt_task(struct rq *);
-static inline void rt_queue_push_tasks(struct rq *rq)
+static inline void rt_queue_push_tasks(struct rt_rq *rt_rq)
{
- if (!has_pushable_tasks(rq))
+ struct rq *rq = container_of_const(rt_rq, struct rq, rt);
+
+ if (!has_pushable_tasks(rt_rq))
return;
queue_balance_callback(rq, &per_cpu(rt_push_head, rq->cpu), push_rt_tasks);
}
-static inline void rt_queue_pull_task(struct rq *rq)
+static inline void rt_queue_pull_task(struct rt_rq *rt_rq)
{
+ struct rq *rq = container_of_const(rt_rq, struct rq, rt);
+
queue_balance_callback(rq, &per_cpu(rt_pull_head, rq->cpu), pull_rt_task);
}
-static void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
+static void enqueue_pushable_task(struct rt_rq *rt_rq, struct task_struct *p)
{
- plist_del(&p->pushable_tasks, &rq->rt.pushable_tasks);
+ plist_del(&p->pushable_tasks, &rt_rq->pushable_tasks);
plist_node_init(&p->pushable_tasks, p->prio);
- plist_add(&p->pushable_tasks, &rq->rt.pushable_tasks);
+ plist_add(&p->pushable_tasks, &rt_rq->pushable_tasks);
/* Update the highest prio pushable task */
- if (p->prio < rq->rt.highest_prio.next)
- rq->rt.highest_prio.next = p->prio;
+ if (p->prio < rt_rq->highest_prio.next)
+ rt_rq->highest_prio.next = p->prio;
- if (!rq->rt.overloaded) {
- rt_set_overload(rq);
- rq->rt.overloaded = 1;
+ if (!rt_rq->overloaded) {
+ rt_set_overload(rq_of_rt_rq(rt_rq));
+ rt_rq->overloaded = 1;
}
}
-static void dequeue_pushable_task(struct rq *rq, struct task_struct *p)
+static void dequeue_pushable_task(struct rt_rq *rt_rq, struct task_struct *p)
{
- plist_del(&p->pushable_tasks, &rq->rt.pushable_tasks);
+ plist_del(&p->pushable_tasks, &rt_rq->pushable_tasks);
/* Update the new highest prio pushable task */
- if (has_pushable_tasks(rq)) {
- p = plist_first_entry(&rq->rt.pushable_tasks,
+ if (has_pushable_tasks(rt_rq)) {
+ p = plist_first_entry(&rt_rq->pushable_tasks,
struct task_struct, pushable_tasks);
- rq->rt.highest_prio.next = p->prio;
+ rt_rq->highest_prio.next = p->prio;
} else {
- rq->rt.highest_prio.next = MAX_RT_PRIO-1;
+ rt_rq->highest_prio.next = MAX_RT_PRIO-1;
- if (rq->rt.overloaded) {
- rt_clear_overload(rq);
- rq->rt.overloaded = 0;
+ if (rt_rq->overloaded) {
+ rt_clear_overload(rq_of_rt_rq(rt_rq));
+ rt_rq->overloaded = 0;
}
}
}
@@ -1431,6 +1435,7 @@ static void
enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)
{
struct sched_rt_entity *rt_se = &p->rt;
+ struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
if (flags & ENQUEUE_WAKEUP)
rt_se->timeout = 0;
@@ -1444,17 +1449,18 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)
return;
if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
- enqueue_pushable_task(rq, p);
+ enqueue_pushable_task(rt_rq, p);
}
static bool dequeue_task_rt(struct rq *rq, struct task_struct *p, int flags)
{
struct sched_rt_entity *rt_se = &p->rt;
+ struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
update_curr_rt(rq);
dequeue_rt_entity(rt_se, flags);
- dequeue_pushable_task(rq, p);
+ dequeue_pushable_task(rt_rq, p);
return true;
}
@@ -1645,14 +1651,14 @@ static void wakeup_preempt_rt(struct rq *rq, struct task_struct *p, int flags)
static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool first)
{
struct sched_rt_entity *rt_se = &p->rt;
- struct rt_rq *rt_rq = &rq->rt;
+ struct rt_rq *rt_rq = rt_rq_of_se(&p->rt);
p->se.exec_start = rq_clock_task(rq);
if (on_rt_rq(&p->rt))
update_stats_wait_end_rt(rt_rq, rt_se);
/* The running task is never eligible for pushing */
- dequeue_pushable_task(rq, p);
+ dequeue_pushable_task(rt_rq, p);
if (!first)
return;
@@ -1665,7 +1671,7 @@ static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool f
if (rq->donor->sched_class != &rt_sched_class)
update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
- rt_queue_push_tasks(rq);
+ rt_queue_push_tasks(rt_rq);
}
static struct sched_rt_entity *pick_next_rt_entity(struct rt_rq *rt_rq)
@@ -1716,7 +1722,7 @@ static struct task_struct *pick_task_rt(struct rq *rq, struct rq_flags *rf)
static void put_prev_task_rt(struct rq *rq, struct task_struct *p, struct task_struct *next)
{
struct sched_rt_entity *rt_se = &p->rt;
- struct rt_rq *rt_rq = &rq->rt;
+ struct rt_rq *rt_rq = rt_rq_of_se(&p->rt);
if (on_rt_rq(&p->rt))
update_stats_wait_start_rt(rt_rq, rt_se);
@@ -1732,7 +1738,7 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p, struct task_s
* if it is still active
*/
if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1)
- enqueue_pushable_task(rq, p);
+ enqueue_pushable_task(rt_rq, p);
}
/* Only try algorithms three times */
@@ -1742,16 +1748,16 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p, struct task_s
* Return the highest pushable rq's task, which is suitable to be executed
* on the CPU, NULL otherwise
*/
-static struct task_struct *pick_highest_pushable_task(struct rq *rq, int cpu)
+static struct task_struct *pick_highest_pushable_task(struct rt_rq *rt_rq, int cpu)
{
- struct plist_head *head = &rq->rt.pushable_tasks;
+ struct plist_head *head = &rt_rq->pushable_tasks;
struct task_struct *p;
- if (!has_pushable_tasks(rq))
+ if (!has_pushable_tasks(rt_rq))
return NULL;
plist_for_each_entry(p, head, pushable_tasks) {
- if (task_is_pushable(rq, p, cpu))
+ if (task_is_pushable(rq_of_rt_rq(rt_rq), p, cpu))
return p;
}
@@ -1851,14 +1857,15 @@ static int find_lowest_rq(struct task_struct *task)
return -1;
}
-static struct task_struct *pick_next_pushable_task(struct rq *rq)
+static struct task_struct *pick_next_pushable_task(struct rt_rq *rt_rq)
{
+ struct rq *rq = rq_of_rt_rq(rt_rq);
struct task_struct *p;
- if (!has_pushable_tasks(rq))
+ if (!has_pushable_tasks(rt_rq))
return NULL;
- p = plist_first_entry(&rq->rt.pushable_tasks,
+ p = plist_first_entry(&rt_rq->pushable_tasks,
struct task_struct, pushable_tasks);
BUG_ON(rq->cpu != task_cpu(p));
@@ -1911,7 +1918,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
*/
if (unlikely(is_migration_disabled(task) ||
!cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) ||
- task != pick_next_pushable_task(rq))) {
+ task != pick_next_pushable_task(&rq->rt))) {
double_unlock_balance(rq, lowest_rq);
lowest_rq = NULL;
@@ -1945,7 +1952,7 @@ static int push_rt_task(struct rq *rq, bool pull)
if (!rq->rt.overloaded)
return 0;
- next_task = pick_next_pushable_task(rq);
+ next_task = pick_next_pushable_task(&rq->rt);
if (!next_task)
return 0;
@@ -2020,7 +2027,7 @@ static int push_rt_task(struct rq *rq, bool pull)
* run-queue and is also still the next task eligible for
* pushing.
*/
- task = pick_next_pushable_task(rq);
+ task = pick_next_pushable_task(&rq->rt);
if (task == next_task) {
/*
* The task hasn't migrated, and is still the next
@@ -2213,7 +2220,7 @@ void rto_push_irq_work_func(struct irq_work *work)
* We do not need to grab the lock to check for has_pushable_tasks.
* When it gets updated, a check is made if a push is possible.
*/
- if (has_pushable_tasks(rq)) {
+ if (has_pushable_tasks(&rq->rt)) {
raw_spin_rq_lock(rq);
while (push_rt_task(rq, true))
;
@@ -2242,6 +2249,7 @@ static void pull_rt_task(struct rq *this_rq)
int this_cpu = this_rq->cpu, cpu;
bool resched = false;
struct task_struct *p, *push_task;
+ struct rt_rq *src_rt_rq;
struct rq *src_rq;
int rt_overload_count = rt_overloaded(this_rq);
@@ -2271,6 +2279,7 @@ static void pull_rt_task(struct rq *this_rq)
continue;
src_rq = cpu_rq(cpu);
+ src_rt_rq = &src_rq->rt;
/*
* Don't bother taking the src_rq->lock if the next highest
@@ -2279,7 +2288,7 @@ static void pull_rt_task(struct rq *this_rq)
* logically higher, the src_rq will push this task away.
* And if its going logically lower, we do not care
*/
- if (src_rq->rt.highest_prio.next >=
+ if (src_rt_rq->highest_prio.next >=
this_rq->rt.highest_prio.curr)
continue;
@@ -2295,7 +2304,7 @@ static void pull_rt_task(struct rq *this_rq)
* We can pull only a task, which is pushable
* on its rq, and no others.
*/
- p = pick_highest_pushable_task(src_rq, this_cpu);
+ p = pick_highest_pushable_task(src_rt_rq, this_cpu);
/*
* Do we have an RT task that preempts
@@ -2401,7 +2410,7 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
if (!task_on_rq_queued(p) || rq->rt.rt_nr_running)
return;
- rt_queue_pull_task(rq);
+ rt_queue_pull_task(rt_rq_of_se(&p->rt));
}
void __init init_sched_rt_class(void)
@@ -2437,7 +2446,7 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
*/
if (task_on_rq_queued(p)) {
if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
- rt_queue_push_tasks(rq);
+ rt_queue_push_tasks(rt_rq_of_se(&p->rt));
if (p->prio < rq->donor->prio && cpu_online(cpu_of(rq)))
resched_curr(rq);
}
@@ -2462,7 +2471,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, u64 oldprio)
* may need to pull tasks to this runqueue.
*/
if (oldprio < p->prio)
- rt_queue_pull_task(rq);
+ rt_queue_pull_task(rt_rq_of_se(&p->rt));
/*
* If there's a higher priority task waiting to run
--
2.53.0
* [RFC PATCH v5 05/29] sched/rt: Move functions from rt.c to sched.h
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (3 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 04/29] sched/rt: Pass an rt_rq instead of an rq where needed Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 06/29] sched/rt: Disable RT_GROUP_SCHED Yuri Andriaccio
` (23 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Make the following functions/macros non-static and move them to sched.h, so
that they can also be used in other source files:
- rt_entity_is_task()
- rt_task_of()
- rq_of_rt_rq()
- rt_rq_of_se()
- rq_of_rt_se()
There are no functional changes, apart from the use of container_of_const()
instead of container_of() where applicable. This is needed by future patches.
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/rt.c | 56 ------------------------------------------
kernel/sched/sched.h | 58 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 58 insertions(+), 56 deletions(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 597eaba00a20..5f89c080a3ef 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -166,36 +166,6 @@ static void destroy_rt_bandwidth(struct rt_bandwidth *rt_b)
hrtimer_cancel(&rt_b->rt_period_timer);
}
-#define rt_entity_is_task(rt_se) (!(rt_se)->my_q)
-
-static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
-{
- WARN_ON_ONCE(!rt_entity_is_task(rt_se));
-
- return container_of(rt_se, struct task_struct, rt);
-}
-
-static inline struct rq *rq_of_rt_rq(struct rt_rq *rt_rq)
-{
- /* Cannot fold with non-CONFIG_RT_GROUP_SCHED version, layout */
- WARN_ON(!rt_group_sched_enabled() && rt_rq->tg != &root_task_group);
- return rt_rq->rq;
-}
-
-static inline struct rt_rq *rt_rq_of_se(struct sched_rt_entity *rt_se)
-{
- WARN_ON(!rt_group_sched_enabled() && rt_se->rt_rq->tg != &root_task_group);
- return rt_se->rt_rq;
-}
-
-static inline struct rq *rq_of_rt_se(struct sched_rt_entity *rt_se)
-{
- struct rt_rq *rt_rq = rt_se->rt_rq;
-
- WARN_ON(!rt_group_sched_enabled() && rt_rq->tg != &root_task_group);
- return rt_rq->rq;
-}
-
void unregister_rt_sched_group(struct task_group *tg)
{
if (!rt_group_sched_enabled())
@@ -294,32 +264,6 @@ int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
#else /* !CONFIG_RT_GROUP_SCHED: */
-#define rt_entity_is_task(rt_se) (1)
-
-static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
-{
- return container_of(rt_se, struct task_struct, rt);
-}
-
-static inline struct rq *rq_of_rt_rq(struct rt_rq *rt_rq)
-{
- return container_of(rt_rq, struct rq, rt);
-}
-
-static inline struct rq *rq_of_rt_se(struct sched_rt_entity *rt_se)
-{
- struct task_struct *p = rt_task_of(rt_se);
-
- return task_rq(p);
-}
-
-static inline struct rt_rq *rt_rq_of_se(struct sched_rt_entity *rt_se)
-{
- struct rq *rq = rq_of_rt_se(rt_se);
-
- return &rq->rt;
-}
-
void unregister_rt_sched_group(struct task_group *tg) { }
void free_rt_sched_group(struct task_group *tg) { }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 8572bd12d0a2..2b8630ed1353 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3305,6 +3305,64 @@ extern void set_rq_offline(struct rq *rq);
extern bool sched_smp_initialized;
+#ifdef CONFIG_RT_GROUP_SCHED
+#define rt_entity_is_task(rt_se) (!(rt_se)->my_q)
+
+static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
+{
+ WARN_ON_ONCE(!rt_entity_is_task(rt_se));
+
+ return container_of_const(rt_se, struct task_struct, rt);
+}
+
+static inline struct rq *rq_of_rt_rq(struct rt_rq *rt_rq)
+{
+ /* Cannot fold with non-CONFIG_RT_GROUP_SCHED version, layout */
+ WARN_ON(!rt_group_sched_enabled() && rt_rq->tg != &root_task_group);
+ return rt_rq->rq;
+}
+
+static inline struct rt_rq *rt_rq_of_se(struct sched_rt_entity *rt_se)
+{
+ WARN_ON(!rt_group_sched_enabled() && rt_se->rt_rq->tg != &root_task_group);
+ return rt_se->rt_rq;
+}
+
+static inline struct rq *rq_of_rt_se(struct sched_rt_entity *rt_se)
+{
+ struct rt_rq *rt_rq = rt_se->rt_rq;
+
+ WARN_ON(!rt_group_sched_enabled() && rt_rq->tg != &root_task_group);
+ return rt_rq->rq;
+}
+#else
+#define rt_entity_is_task(rt_se) (1)
+
+static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
+{
+ return container_of_const(rt_se, struct task_struct, rt);
+}
+
+static inline struct rq *rq_of_rt_rq(struct rt_rq *rt_rq)
+{
+ return container_of_const(rt_rq, struct rq, rt);
+}
+
+static inline struct rq *rq_of_rt_se(struct sched_rt_entity *rt_se)
+{
+ struct task_struct *p = rt_task_of(rt_se);
+
+ return task_rq(p);
+}
+
+static inline struct rt_rq *rt_rq_of_se(struct sched_rt_entity *rt_se)
+{
+ struct rq *rq = rq_of_rt_se(rt_se);
+
+ return &rq->rt;
+}
+#endif
+
DEFINE_LOCK_GUARD_2(double_rq_lock, struct rq,
double_rq_lock(_T->lock, _T->lock2),
double_rq_unlock(_T->lock, _T->lock2))
--
2.53.0
* [RFC PATCH v5 06/29] sched/rt: Disable RT_GROUP_SCHED
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (4 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 05/29] sched/rt: Move functions from rt.c to sched.h Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 07/29] sched/rt: Remove unnecessary runqueue pointer in struct rt_rq Yuri Andriaccio
` (22 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
Disable the old RT_GROUP_SCHED scheduler. Note that this does not
completely remove all the RT_GROUP_SCHED functionality, just unhooks it
and removes most of the relevant functions. Some of the RT_GROUP_SCHED
functions are kept because they will be adapted for the HCBS scheduling.
Most notably:
- Disable the initialization of the rt_bandwidth for group scheduling.
- Unhook any functionality for RT_GROUP_SCHED in normal rt.c code, leaving
only non-group functionality.
- Remove group related field initialization in init_rt_rq().
- Remove all the unhooked (and so unused) functions from RT_GROUP_SCHED.
- Remove all allocation/deallocation code for rt-groups, always returning
failure on allocation.
- Update the inc/dec_rt_tasks active-task counters, as rt scheduling entities
  now represent only a single task and no longer a group of tasks.
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/core.c | 6 -
kernel/sched/deadline.c | 34 --
kernel/sched/debug.c | 6 -
kernel/sched/rt.c | 861 ++--------------------------------------
kernel/sched/sched.h | 15 +-
kernel/sched/syscalls.c | 13 -
6 files changed, 26 insertions(+), 909 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 496dff740dca..a203a27fb16d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8647,11 +8647,6 @@ void __init sched_init(void)
init_defrootdomain();
-#ifdef CONFIG_RT_GROUP_SCHED
- init_rt_bandwidth(&root_task_group.rt_bandwidth,
- global_rt_period(), global_rt_runtime());
-#endif /* CONFIG_RT_GROUP_SCHED */
-
#ifdef CONFIG_CGROUP_SCHED
task_group_cache = KMEM_CACHE(task_group, 0);
@@ -8703,7 +8698,6 @@ void __init sched_init(void)
* starts working after scheduler_running, which is not the case
* yet.
*/
- rq->rt.rt_runtime = global_rt_runtime();
init_tg_rt_entry(&root_task_group, &rq->rt, NULL, i, NULL);
#endif
rq->next_class = &idle_sched_class;
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 219fe2fd697d..67615a0539fe 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1539,40 +1539,6 @@ static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se, s64
if (!is_leftmost(dl_se, &rq->dl))
resched_curr(rq);
}
-
- /*
- * The dl_server does not account for real-time workload because it
- * is running fair work.
- */
- if (dl_se->dl_server)
- return;
-
-#ifdef CONFIG_RT_GROUP_SCHED
- /*
- * Because -- for now -- we share the rt bandwidth, we need to
- * account our runtime there too, otherwise actual rt tasks
- * would be able to exceed the shared quota.
- *
- * Account to the root rt group for now.
- *
- * The solution we're working towards is having the RT groups scheduled
- * using deadline servers -- however there's a few nasties to figure
- * out before that can happen.
- */
- if (rt_bandwidth_enabled()) {
- struct rt_rq *rt_rq = &rq->rt;
-
- raw_spin_lock(&rt_rq->rt_runtime_lock);
- /*
- * We'll let actual RT tasks worry about the overflow here, we
- * have our own CBS to keep us inline; only account when RT
- * bandwidth is relevant.
- */
- if (sched_rt_bandwidth_account(rt_rq))
- rt_rq->rt_time += delta_exec;
- raw_spin_unlock(&rt_rq->rt_runtime_lock);
- }
-#endif /* CONFIG_RT_GROUP_SCHED */
}
/*
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 15bf45b6f912..e50e5115d4fd 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -997,12 +997,6 @@ void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq)
PU(rt_nr_running);
-#ifdef CONFIG_RT_GROUP_SCHED
- P(rt_throttled);
- PN(rt_time);
- PN(rt_runtime);
-#endif
-
#undef PN
#undef PU
#undef P
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 5f89c080a3ef..392212ac90d8 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -82,115 +82,19 @@ void init_rt_rq(struct rt_rq *rt_rq)
rt_rq->highest_prio.next = MAX_RT_PRIO-1;
rt_rq->overloaded = 0;
plist_head_init(&rt_rq->pushable_tasks);
- /* We start is dequeued state, because no RT tasks are queued */
- rt_rq->rt_queued = 0;
-
-#ifdef CONFIG_RT_GROUP_SCHED
- rt_rq->rt_time = 0;
- rt_rq->rt_throttled = 0;
- rt_rq->rt_runtime = 0;
- raw_spin_lock_init(&rt_rq->rt_runtime_lock);
- rt_rq->tg = &root_task_group;
-#endif
}
#ifdef CONFIG_RT_GROUP_SCHED
-static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun);
-
-static enum hrtimer_restart sched_rt_period_timer(struct hrtimer *timer)
-{
- struct rt_bandwidth *rt_b =
- container_of(timer, struct rt_bandwidth, rt_period_timer);
- int idle = 0;
- int overrun;
-
- raw_spin_lock(&rt_b->rt_runtime_lock);
- for (;;) {
- overrun = hrtimer_forward_now(timer, rt_b->rt_period);
- if (!overrun)
- break;
-
- raw_spin_unlock(&rt_b->rt_runtime_lock);
- idle = do_sched_rt_period_timer(rt_b, overrun);
- raw_spin_lock(&rt_b->rt_runtime_lock);
- }
- if (idle)
- rt_b->rt_period_active = 0;
- raw_spin_unlock(&rt_b->rt_runtime_lock);
-
- return idle ? HRTIMER_NORESTART : HRTIMER_RESTART;
-}
-
-void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime)
-{
- rt_b->rt_period = ns_to_ktime(period);
- rt_b->rt_runtime = runtime;
-
- raw_spin_lock_init(&rt_b->rt_runtime_lock);
-
- hrtimer_setup(&rt_b->rt_period_timer, sched_rt_period_timer, CLOCK_MONOTONIC,
- HRTIMER_MODE_REL_HARD);
-}
-
-static inline void do_start_rt_bandwidth(struct rt_bandwidth *rt_b)
-{
- raw_spin_lock(&rt_b->rt_runtime_lock);
- if (!rt_b->rt_period_active) {
- rt_b->rt_period_active = 1;
- /*
- * SCHED_DEADLINE updates the bandwidth, as a run away
- * RT task with a DL task could hog a CPU. But DL does
- * not reset the period. If a deadline task was running
- * without an RT task running, it can cause RT tasks to
- * throttle when they start up. Kick the timer right away
- * to update the period.
- */
- hrtimer_forward_now(&rt_b->rt_period_timer, ns_to_ktime(0));
- hrtimer_start_expires(&rt_b->rt_period_timer,
- HRTIMER_MODE_ABS_PINNED_HARD);
- }
- raw_spin_unlock(&rt_b->rt_runtime_lock);
-}
-
-static void start_rt_bandwidth(struct rt_bandwidth *rt_b)
-{
- if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF)
- return;
-
- do_start_rt_bandwidth(rt_b);
-}
-
-static void destroy_rt_bandwidth(struct rt_bandwidth *rt_b)
-{
- hrtimer_cancel(&rt_b->rt_period_timer);
-}
-
void unregister_rt_sched_group(struct task_group *tg)
{
- if (!rt_group_sched_enabled())
- return;
- if (tg->rt_se)
- destroy_rt_bandwidth(&tg->rt_bandwidth);
}
void free_rt_sched_group(struct task_group *tg)
{
- int i;
-
if (!rt_group_sched_enabled())
return;
-
- for_each_possible_cpu(i) {
- if (tg->rt_rq)
- kfree(tg->rt_rq[i]);
- if (tg->rt_se)
- kfree(tg->rt_se[i]);
- }
-
- kfree(tg->rt_rq);
- kfree(tg->rt_se);
}
void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
@@ -200,66 +104,19 @@ void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
struct rq *rq = cpu_rq(cpu);
rt_rq->highest_prio.curr = MAX_RT_PRIO-1;
- rt_rq->rt_nr_boosted = 0;
rt_rq->rq = rq;
rt_rq->tg = tg;
tg->rt_rq[cpu] = rt_rq;
tg->rt_se[cpu] = rt_se;
-
- if (!rt_se)
- return;
-
- if (!parent)
- rt_se->rt_rq = &rq->rt;
- else
- rt_se->rt_rq = parent->my_q;
-
- rt_se->my_q = rt_rq;
- rt_se->parent = parent;
- INIT_LIST_HEAD(&rt_se->run_list);
}
int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
{
- struct rt_rq *rt_rq;
- struct sched_rt_entity *rt_se;
- int i;
-
if (!rt_group_sched_enabled())
return 1;
- tg->rt_rq = kzalloc_objs(rt_rq, nr_cpu_ids);
- if (!tg->rt_rq)
- goto err;
- tg->rt_se = kzalloc_objs(rt_se, nr_cpu_ids);
- if (!tg->rt_se)
- goto err;
-
- init_rt_bandwidth(&tg->rt_bandwidth, ktime_to_ns(global_rt_period()), 0);
-
- for_each_possible_cpu(i) {
- rt_rq = kzalloc_node(sizeof(struct rt_rq),
- GFP_KERNEL, cpu_to_node(i));
- if (!rt_rq)
- goto err;
-
- rt_se = kzalloc_node(sizeof(struct sched_rt_entity),
- GFP_KERNEL, cpu_to_node(i));
- if (!rt_se)
- goto err_free_rq;
-
- init_rt_rq(rt_rq);
- rt_rq->rt_runtime = tg->rt_bandwidth.rt_runtime;
- init_tg_rt_entry(tg, rt_rq, rt_se, i, parent->rt_se[i]);
- }
-
return 1;
-
-err_free_rq:
- kfree(rt_rq);
-err:
- return 0;
}
#else /* !CONFIG_RT_GROUP_SCHED: */
@@ -377,9 +234,6 @@ static void dequeue_pushable_task(struct rt_rq *rt_rq, struct task_struct *p)
}
}
-static void enqueue_top_rt_rq(struct rt_rq *rt_rq);
-static void dequeue_top_rt_rq(struct rt_rq *rt_rq, unsigned int count);
-
static inline int on_rt_rq(struct sched_rt_entity *rt_se)
{
return rt_se->on_rq;
@@ -426,16 +280,6 @@ static inline bool rt_task_fits_capacity(struct task_struct *p, int cpu)
#ifdef CONFIG_RT_GROUP_SCHED
-static inline u64 sched_rt_runtime(struct rt_rq *rt_rq)
-{
- return rt_rq->rt_runtime;
-}
-
-static inline u64 sched_rt_period(struct rt_rq *rt_rq)
-{
- return ktime_to_ns(rt_rq->tg->rt_bandwidth.rt_period);
-}
-
typedef struct task_group *rt_rq_iter_t;
static inline struct task_group *next_task_group(struct task_group *tg)
@@ -461,457 +305,20 @@ static inline struct task_group *next_task_group(struct task_group *tg)
iter && (rt_rq = iter->rt_rq[cpu_of(rq)]); \
iter = next_task_group(iter))
-#define for_each_sched_rt_entity(rt_se) \
- for (; rt_se; rt_se = rt_se->parent)
-
-static inline struct rt_rq *group_rt_rq(struct sched_rt_entity *rt_se)
-{
- return rt_se->my_q;
-}
-
static void enqueue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags);
static void dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags);
-static void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
-{
- struct task_struct *donor = rq_of_rt_rq(rt_rq)->donor;
- struct rq *rq = rq_of_rt_rq(rt_rq);
- struct sched_rt_entity *rt_se;
-
- int cpu = cpu_of(rq);
-
- rt_se = rt_rq->tg->rt_se[cpu];
-
- if (rt_rq->rt_nr_running) {
- if (!rt_se)
- enqueue_top_rt_rq(rt_rq);
- else if (!on_rt_rq(rt_se))
- enqueue_rt_entity(rt_se, 0);
-
- if (rt_rq->highest_prio.curr < donor->prio)
- resched_curr(rq);
- }
-}
-
-static void sched_rt_rq_dequeue(struct rt_rq *rt_rq)
-{
- struct sched_rt_entity *rt_se;
- int cpu = cpu_of(rq_of_rt_rq(rt_rq));
-
- rt_se = rt_rq->tg->rt_se[cpu];
-
- if (!rt_se) {
- dequeue_top_rt_rq(rt_rq, rt_rq->rt_nr_running);
- /* Kick cpufreq (see the comment in kernel/sched/sched.h). */
- cpufreq_update_util(rq_of_rt_rq(rt_rq), 0);
- }
- else if (on_rt_rq(rt_se))
- dequeue_rt_entity(rt_se, 0);
-}
-
-static inline int rt_rq_throttled(struct rt_rq *rt_rq)
-{
- return rt_rq->rt_throttled && !rt_rq->rt_nr_boosted;
-}
-
-static int rt_se_boosted(struct sched_rt_entity *rt_se)
-{
- struct rt_rq *rt_rq = group_rt_rq(rt_se);
- struct task_struct *p;
-
- if (rt_rq)
- return !!rt_rq->rt_nr_boosted;
-
- p = rt_task_of(rt_se);
- return p->prio != p->normal_prio;
-}
-
-static inline const struct cpumask *sched_rt_period_mask(void)
-{
- return this_rq()->rd->span;
-}
-
-static inline
-struct rt_rq *sched_rt_period_rt_rq(struct rt_bandwidth *rt_b, int cpu)
-{
- return container_of(rt_b, struct task_group, rt_bandwidth)->rt_rq[cpu];
-}
-
-static inline struct rt_bandwidth *sched_rt_bandwidth(struct rt_rq *rt_rq)
-{
- return &rt_rq->tg->rt_bandwidth;
-}
-
-bool sched_rt_bandwidth_account(struct rt_rq *rt_rq)
-{
- struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);
-
- return (hrtimer_active(&rt_b->rt_period_timer) ||
- rt_rq->rt_time < rt_b->rt_runtime);
-}
-
-/*
- * We ran out of runtime, see if we can borrow some from our neighbours.
- */
-static void do_balance_runtime(struct rt_rq *rt_rq)
-{
- struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);
- struct root_domain *rd = rq_of_rt_rq(rt_rq)->rd;
- int i, weight;
- u64 rt_period;
-
- weight = cpumask_weight(rd->span);
-
- raw_spin_lock(&rt_b->rt_runtime_lock);
- rt_period = ktime_to_ns(rt_b->rt_period);
- for_each_cpu(i, rd->span) {
- struct rt_rq *iter = sched_rt_period_rt_rq(rt_b, i);
- s64 diff;
-
- if (iter == rt_rq)
- continue;
-
- raw_spin_lock(&iter->rt_runtime_lock);
- /*
- * Either all rqs have inf runtime and there's nothing to steal
- * or __disable_runtime() below sets a specific rq to inf to
- * indicate its been disabled and disallow stealing.
- */
- if (iter->rt_runtime == RUNTIME_INF)
- goto next;
-
- /*
- * From runqueues with spare time, take 1/n part of their
- * spare time, but no more than our period.
- */
- diff = iter->rt_runtime - iter->rt_time;
- if (diff > 0) {
- diff = div_u64((u64)diff, weight);
- if (rt_rq->rt_runtime + diff > rt_period)
- diff = rt_period - rt_rq->rt_runtime;
- iter->rt_runtime -= diff;
- rt_rq->rt_runtime += diff;
- if (rt_rq->rt_runtime == rt_period) {
- raw_spin_unlock(&iter->rt_runtime_lock);
- break;
- }
- }
-next:
- raw_spin_unlock(&iter->rt_runtime_lock);
- }
- raw_spin_unlock(&rt_b->rt_runtime_lock);
-}
-
-/*
- * Ensure this RQ takes back all the runtime it lend to its neighbours.
- */
-static void __disable_runtime(struct rq *rq)
-{
- struct root_domain *rd = rq->rd;
- rt_rq_iter_t iter;
- struct rt_rq *rt_rq;
-
- if (unlikely(!scheduler_running))
- return;
-
- for_each_rt_rq(rt_rq, iter, rq) {
- struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);
- s64 want;
- int i;
-
- raw_spin_lock(&rt_b->rt_runtime_lock);
- raw_spin_lock(&rt_rq->rt_runtime_lock);
- /*
- * Either we're all inf and nobody needs to borrow, or we're
- * already disabled and thus have nothing to do, or we have
- * exactly the right amount of runtime to take out.
- */
- if (rt_rq->rt_runtime == RUNTIME_INF ||
- rt_rq->rt_runtime == rt_b->rt_runtime)
- goto balanced;
- raw_spin_unlock(&rt_rq->rt_runtime_lock);
-
- /*
- * Calculate the difference between what we started out with
- * and what we current have, that's the amount of runtime
- * we lend and now have to reclaim.
- */
- want = rt_b->rt_runtime - rt_rq->rt_runtime;
-
- /*
- * Greedy reclaim, take back as much as we can.
- */
- for_each_cpu(i, rd->span) {
- struct rt_rq *iter = sched_rt_period_rt_rq(rt_b, i);
- s64 diff;
-
- /*
- * Can't reclaim from ourselves or disabled runqueues.
- */
- if (iter == rt_rq || iter->rt_runtime == RUNTIME_INF)
- continue;
-
- raw_spin_lock(&iter->rt_runtime_lock);
- if (want > 0) {
- diff = min_t(s64, iter->rt_runtime, want);
- iter->rt_runtime -= diff;
- want -= diff;
- } else {
- iter->rt_runtime -= want;
- want -= want;
- }
- raw_spin_unlock(&iter->rt_runtime_lock);
-
- if (!want)
- break;
- }
-
- raw_spin_lock(&rt_rq->rt_runtime_lock);
- /*
- * We cannot be left wanting - that would mean some runtime
- * leaked out of the system.
- */
- WARN_ON_ONCE(want);
-balanced:
- /*
- * Disable all the borrow logic by pretending we have inf
- * runtime - in which case borrowing doesn't make sense.
- */
- rt_rq->rt_runtime = RUNTIME_INF;
- rt_rq->rt_throttled = 0;
- raw_spin_unlock(&rt_rq->rt_runtime_lock);
- raw_spin_unlock(&rt_b->rt_runtime_lock);
-
- /* Make rt_rq available for pick_next_task() */
- sched_rt_rq_enqueue(rt_rq);
- }
-}
-
-static void __enable_runtime(struct rq *rq)
-{
- rt_rq_iter_t iter;
- struct rt_rq *rt_rq;
-
- if (unlikely(!scheduler_running))
- return;
-
- /*
- * Reset each runqueue's bandwidth settings
- */
- for_each_rt_rq(rt_rq, iter, rq) {
- struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);
-
- raw_spin_lock(&rt_b->rt_runtime_lock);
- raw_spin_lock(&rt_rq->rt_runtime_lock);
- rt_rq->rt_runtime = rt_b->rt_runtime;
- rt_rq->rt_time = 0;
- rt_rq->rt_throttled = 0;
- raw_spin_unlock(&rt_rq->rt_runtime_lock);
- raw_spin_unlock(&rt_b->rt_runtime_lock);
- }
-}
-
-static void balance_runtime(struct rt_rq *rt_rq)
-{
- if (!sched_feat(RT_RUNTIME_SHARE))
- return;
-
- if (rt_rq->rt_time > rt_rq->rt_runtime) {
- raw_spin_unlock(&rt_rq->rt_runtime_lock);
- do_balance_runtime(rt_rq);
- raw_spin_lock(&rt_rq->rt_runtime_lock);
- }
-}
-
-static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
-{
- int i, idle = 1, throttled = 0;
- const struct cpumask *span;
-
- span = sched_rt_period_mask();
-
- /*
- * FIXME: isolated CPUs should really leave the root task group,
- * whether they are isolcpus or were isolated via cpusets, lest
- * the timer run on a CPU which does not service all runqueues,
- * potentially leaving other CPUs indefinitely throttled. If
- * isolation is really required, the user will turn the throttle
- * off to kill the perturbations it causes anyway. Meanwhile,
- * this maintains functionality for boot and/or troubleshooting.
- */
- if (rt_b == &root_task_group.rt_bandwidth)
- span = cpu_online_mask;
-
- for_each_cpu(i, span) {
- int enqueue = 0;
- struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);
- struct rq *rq = rq_of_rt_rq(rt_rq);
- struct rq_flags rf;
- int skip;
-
- /*
- * When span == cpu_online_mask, taking each rq->lock
- * can be time-consuming. Try to avoid it when possible.
- */
- raw_spin_lock(&rt_rq->rt_runtime_lock);
- if (!sched_feat(RT_RUNTIME_SHARE) && rt_rq->rt_runtime != RUNTIME_INF)
- rt_rq->rt_runtime = rt_b->rt_runtime;
- skip = !rt_rq->rt_time && !rt_rq->rt_nr_running;
- raw_spin_unlock(&rt_rq->rt_runtime_lock);
- if (skip)
- continue;
-
- rq_lock(rq, &rf);
- update_rq_clock(rq);
-
- if (rt_rq->rt_time) {
- u64 runtime;
-
- raw_spin_lock(&rt_rq->rt_runtime_lock);
- if (rt_rq->rt_throttled)
- balance_runtime(rt_rq);
- runtime = rt_rq->rt_runtime;
- rt_rq->rt_time -= min(rt_rq->rt_time, overrun*runtime);
- if (rt_rq->rt_throttled && rt_rq->rt_time < runtime) {
- rt_rq->rt_throttled = 0;
- enqueue = 1;
-
- /*
- * When we're idle and a woken (rt) task is
- * throttled wakeup_preempt() will set
- * skip_update and the time between the wakeup
- * and this unthrottle will get accounted as
- * 'runtime'.
- */
- if (rt_rq->rt_nr_running && rq->curr == rq->idle)
- rq_clock_cancel_skipupdate(rq);
- }
- if (rt_rq->rt_time || rt_rq->rt_nr_running)
- idle = 0;
- raw_spin_unlock(&rt_rq->rt_runtime_lock);
- } else if (rt_rq->rt_nr_running) {
- idle = 0;
- if (!rt_rq_throttled(rt_rq))
- enqueue = 1;
- }
- if (rt_rq->rt_throttled)
- throttled = 1;
-
- if (enqueue)
- sched_rt_rq_enqueue(rt_rq);
- rq_unlock(rq, &rf);
- }
-
- if (!throttled && (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF))
- return 1;
-
- return idle;
-}
-
-static int sched_rt_runtime_exceeded(struct rt_rq *rt_rq)
-{
- u64 runtime = sched_rt_runtime(rt_rq);
-
- if (rt_rq->rt_throttled)
- return rt_rq_throttled(rt_rq);
-
- if (runtime >= sched_rt_period(rt_rq))
- return 0;
-
- balance_runtime(rt_rq);
- runtime = sched_rt_runtime(rt_rq);
- if (runtime == RUNTIME_INF)
- return 0;
-
- if (rt_rq->rt_time > runtime) {
- struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);
-
- /*
- * Don't actually throttle groups that have no runtime assigned
- * but accrue some time due to boosting.
- */
- if (likely(rt_b->rt_runtime)) {
- rt_rq->rt_throttled = 1;
- printk_deferred_once("sched: RT throttling activated\n");
- } else {
- /*
- * In case we did anyway, make it go away,
- * replenishment is a joke, since it will replenish us
- * with exactly 0 ns.
- */
- rt_rq->rt_time = 0;
- }
-
- if (rt_rq_throttled(rt_rq)) {
- sched_rt_rq_dequeue(rt_rq);
- return 1;
- }
- }
-
- return 0;
-}
-
-#else /* !CONFIG_RT_GROUP_SCHED: */
+#else /* !CONFIG_RT_GROUP_SCHED */
typedef struct rt_rq *rt_rq_iter_t;
#define for_each_rt_rq(rt_rq, iter, rq) \
for ((void) iter, rt_rq = &rq->rt; rt_rq; rt_rq = NULL)
-#define for_each_sched_rt_entity(rt_se) \
- for (; rt_se; rt_se = NULL)
-
-static inline struct rt_rq *group_rt_rq(struct sched_rt_entity *rt_se)
-{
- return NULL;
-}
-
-static inline void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
-{
- struct rq *rq = rq_of_rt_rq(rt_rq);
-
- if (!rt_rq->rt_nr_running)
- return;
-
- enqueue_top_rt_rq(rt_rq);
- resched_curr(rq);
-}
-
-static inline void sched_rt_rq_dequeue(struct rt_rq *rt_rq)
-{
- dequeue_top_rt_rq(rt_rq, rt_rq->rt_nr_running);
-}
-
-static inline int rt_rq_throttled(struct rt_rq *rt_rq)
-{
- return false;
-}
-
-static inline const struct cpumask *sched_rt_period_mask(void)
-{
- return cpu_online_mask;
-}
-
-static inline
-struct rt_rq *sched_rt_period_rt_rq(struct rt_bandwidth *rt_b, int cpu)
-{
- return &cpu_rq(cpu)->rt;
-}
-
-static void __enable_runtime(struct rq *rq) { }
-static void __disable_runtime(struct rq *rq) { }
-
-#endif /* !CONFIG_RT_GROUP_SCHED */
+#endif /* CONFIG_RT_GROUP_SCHED */
static inline int rt_se_prio(struct sched_rt_entity *rt_se)
{
-#ifdef CONFIG_RT_GROUP_SCHED
- struct rt_rq *rt_rq = group_rt_rq(rt_se);
-
- if (rt_rq)
- return rt_rq->highest_prio.curr;
-#endif
-
return rt_task_of(rt_se)->prio;
}
@@ -931,67 +338,8 @@ static void update_curr_rt(struct rq *rq)
if (unlikely(delta_exec <= 0))
return;
-#ifdef CONFIG_RT_GROUP_SCHED
- struct sched_rt_entity *rt_se = &donor->rt;
-
if (!rt_bandwidth_enabled())
return;
-
- for_each_sched_rt_entity(rt_se) {
- struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
- int exceeded;
-
- if (sched_rt_runtime(rt_rq) != RUNTIME_INF) {
- raw_spin_lock(&rt_rq->rt_runtime_lock);
- rt_rq->rt_time += delta_exec;
- exceeded = sched_rt_runtime_exceeded(rt_rq);
- if (exceeded)
- resched_curr(rq);
- raw_spin_unlock(&rt_rq->rt_runtime_lock);
- if (exceeded)
- do_start_rt_bandwidth(sched_rt_bandwidth(rt_rq));
- }
- }
-#endif /* CONFIG_RT_GROUP_SCHED */
-}
-
-static void
-dequeue_top_rt_rq(struct rt_rq *rt_rq, unsigned int count)
-{
- struct rq *rq = rq_of_rt_rq(rt_rq);
-
- BUG_ON(&rq->rt != rt_rq);
-
- if (!rt_rq->rt_queued)
- return;
-
- BUG_ON(!rq->nr_running);
-
- sub_nr_running(rq, count);
- rt_rq->rt_queued = 0;
-
-}
-
-static void
-enqueue_top_rt_rq(struct rt_rq *rt_rq)
-{
- struct rq *rq = rq_of_rt_rq(rt_rq);
-
- BUG_ON(&rq->rt != rt_rq);
-
- if (rt_rq->rt_queued)
- return;
-
- if (rt_rq_throttled(rt_rq))
- return;
-
- if (rt_rq->rt_nr_running) {
- add_nr_running(rq, rt_rq->rt_nr_running);
- rt_rq->rt_queued = 1;
- }
-
- /* Kick cpufreq (see the comment in kernel/sched/sched.h). */
- cpufreq_update_util(rq, 0);
}
static void
@@ -1062,58 +410,11 @@ dec_rt_prio(struct rt_rq *rt_rq, int prio)
dec_rt_prio_smp(rt_rq, prio, prev_prio);
}
-#ifdef CONFIG_RT_GROUP_SCHED
-
-static void
-inc_rt_group(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
-{
- if (rt_se_boosted(rt_se))
- rt_rq->rt_nr_boosted++;
-
- start_rt_bandwidth(&rt_rq->tg->rt_bandwidth);
-}
-
-static void
-dec_rt_group(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
-{
- if (rt_se_boosted(rt_se))
- rt_rq->rt_nr_boosted--;
-
- WARN_ON(!rt_rq->rt_nr_running && rt_rq->rt_nr_boosted);
-}
-
-#else /* !CONFIG_RT_GROUP_SCHED: */
-
-static void
-inc_rt_group(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
-{
-}
-
-static inline
-void dec_rt_group(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq) {}
-
-#endif /* !CONFIG_RT_GROUP_SCHED */
-
static inline
-unsigned int rt_se_nr_running(struct sched_rt_entity *rt_se)
+unsigned int is_rr_task(struct sched_rt_entity *rt_se)
{
- struct rt_rq *group_rq = group_rt_rq(rt_se);
-
- if (group_rq)
- return group_rq->rt_nr_running;
- else
- return 1;
-}
-
-static inline
-unsigned int rt_se_rr_nr_running(struct sched_rt_entity *rt_se)
-{
- struct rt_rq *group_rq = group_rt_rq(rt_se);
struct task_struct *tsk;
- if (group_rq)
- return group_rq->rr_nr_running;
-
tsk = rt_task_of(rt_se);
return (tsk->policy == SCHED_RR) ? 1 : 0;
@@ -1122,26 +423,21 @@ unsigned int rt_se_rr_nr_running(struct sched_rt_entity *rt_se)
static inline
void inc_rt_tasks(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
{
- int prio = rt_se_prio(rt_se);
-
- WARN_ON(!rt_prio(prio));
- rt_rq->rt_nr_running += rt_se_nr_running(rt_se);
- rt_rq->rr_nr_running += rt_se_rr_nr_running(rt_se);
+ WARN_ON(!rt_prio(rt_se_prio(rt_se)));
+ rt_rq->rt_nr_running += 1;
+ rt_rq->rr_nr_running += is_rr_task(rt_se);
- inc_rt_prio(rt_rq, prio);
- inc_rt_group(rt_se, rt_rq);
+ inc_rt_prio(rt_rq, rt_se_prio(rt_se));
}
static inline
void dec_rt_tasks(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
{
WARN_ON(!rt_prio(rt_se_prio(rt_se)));
- WARN_ON(!rt_rq->rt_nr_running);
- rt_rq->rt_nr_running -= rt_se_nr_running(rt_se);
- rt_rq->rr_nr_running -= rt_se_rr_nr_running(rt_se);
+ rt_rq->rt_nr_running -= 1;
+ rt_rq->rr_nr_running -= is_rr_task(rt_se);
dec_rt_prio(rt_rq, rt_se_prio(rt_se));
- dec_rt_group(rt_se, rt_rq);
}
/*
@@ -1170,10 +466,6 @@ static void __delist_rt_entity(struct sched_rt_entity *rt_se, struct rt_prio_arr
static inline struct sched_statistics *
__schedstats_from_rt_se(struct sched_rt_entity *rt_se)
{
- /* schedstats is not supported for rt group. */
- if (!rt_entity_is_task(rt_se))
- return NULL;
-
return &rt_task_of(rt_se)->stats;
}
@@ -1186,9 +478,7 @@ update_stats_wait_start_rt(struct rt_rq *rt_rq, struct sched_rt_entity *rt_se)
if (!schedstat_enabled())
return;
- if (rt_entity_is_task(rt_se))
- p = rt_task_of(rt_se);
-
+ p = rt_task_of(rt_se);
stats = __schedstats_from_rt_se(rt_se);
if (!stats)
return;
@@ -1205,9 +495,7 @@ update_stats_enqueue_sleeper_rt(struct rt_rq *rt_rq, struct sched_rt_entity *rt_
if (!schedstat_enabled())
return;
- if (rt_entity_is_task(rt_se))
- p = rt_task_of(rt_se);
-
+ p = rt_task_of(rt_se);
stats = __schedstats_from_rt_se(rt_se);
if (!stats)
return;
@@ -1235,9 +523,7 @@ update_stats_wait_end_rt(struct rt_rq *rt_rq, struct sched_rt_entity *rt_se)
if (!schedstat_enabled())
return;
- if (rt_entity_is_task(rt_se))
- p = rt_task_of(rt_se);
-
+ p = rt_task_of(rt_se);
stats = __schedstats_from_rt_se(rt_se);
if (!stats)
return;
@@ -1254,9 +540,7 @@ update_stats_dequeue_rt(struct rt_rq *rt_rq, struct sched_rt_entity *rt_se,
if (!schedstat_enabled())
return;
- if (rt_entity_is_task(rt_se))
- p = rt_task_of(rt_se);
-
+ p = rt_task_of(rt_se);
if ((flags & DEQUEUE_SLEEP) && p) {
unsigned int state;
@@ -1275,21 +559,8 @@ static void __enqueue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flag
{
struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
struct rt_prio_array *array = &rt_rq->active;
- struct rt_rq *group_rq = group_rt_rq(rt_se);
struct list_head *queue = array->queue + rt_se_prio(rt_se);
- /*
- * Don't enqueue the group if its throttled, or when empty.
- * The latter is a consequence of the former when a child group
- * get throttled and the current group doesn't have any other
- * active members.
- */
- if (group_rq && (rt_rq_throttled(group_rq) || !group_rq->rt_nr_running)) {
- if (rt_se->on_list)
- __delist_rt_entity(rt_se, array);
- return;
- }
-
if (move_entity(flags)) {
WARN_ON_ONCE(rt_se->on_list);
if (flags & ENQUEUE_HEAD)
@@ -1319,57 +590,18 @@ static void __dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flag
dec_rt_tasks(rt_se, rt_rq);
}
-/*
- * Because the prio of an upper entry depends on the lower
- * entries, we must remove entries top - down.
- */
-static void dequeue_rt_stack(struct sched_rt_entity *rt_se, unsigned int flags)
-{
- struct sched_rt_entity *back = NULL;
- unsigned int rt_nr_running;
-
- for_each_sched_rt_entity(rt_se) {
- rt_se->back = back;
- back = rt_se;
- }
-
- rt_nr_running = rt_rq_of_se(back)->rt_nr_running;
-
- for (rt_se = back; rt_se; rt_se = rt_se->back) {
- if (on_rt_rq(rt_se))
- __dequeue_rt_entity(rt_se, flags);
- }
-
- dequeue_top_rt_rq(rt_rq_of_se(back), rt_nr_running);
-}
-
static void enqueue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)
{
- struct rq *rq = rq_of_rt_se(rt_se);
-
update_stats_enqueue_rt(rt_rq_of_se(rt_se), rt_se, flags);
- dequeue_rt_stack(rt_se, flags);
- for_each_sched_rt_entity(rt_se)
- __enqueue_rt_entity(rt_se, flags);
- enqueue_top_rt_rq(&rq->rt);
+ __enqueue_rt_entity(rt_se, flags);
}
static void dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)
{
- struct rq *rq = rq_of_rt_se(rt_se);
-
update_stats_dequeue_rt(rt_rq_of_se(rt_se), rt_se, flags);
- dequeue_rt_stack(rt_se, flags);
-
- for_each_sched_rt_entity(rt_se) {
- struct rt_rq *rt_rq = group_rt_rq(rt_se);
-
- if (rt_rq && rt_rq->rt_nr_running)
- __enqueue_rt_entity(rt_se, flags);
- }
- enqueue_top_rt_rq(&rq->rt);
+ __dequeue_rt_entity(rt_se, flags);
}
/*
@@ -1429,13 +661,7 @@ requeue_rt_entity(struct rt_rq *rt_rq, struct sched_rt_entity *rt_se, int head)
static void requeue_task_rt(struct rq *rq, struct task_struct *p, int head)
{
- struct sched_rt_entity *rt_se = &p->rt;
- struct rt_rq *rt_rq;
-
- for_each_sched_rt_entity(rt_se) {
- rt_rq = rt_rq_of_se(rt_se);
- requeue_rt_entity(rt_rq, rt_se, head);
- }
+ requeue_rt_entity(rt_rq_of_se(&p->rt), &p->rt, head);
}
static void yield_task_rt(struct rq *rq)
@@ -1636,21 +862,6 @@ static struct sched_rt_entity *pick_next_rt_entity(struct rt_rq *rt_rq)
return next;
}
-static struct task_struct *_pick_next_task_rt(struct rq *rq)
-{
- struct sched_rt_entity *rt_se;
- struct rt_rq *rt_rq = &rq->rt;
-
- do {
- rt_se = pick_next_rt_entity(rt_rq);
- if (unlikely(!rt_se))
- return NULL;
- rt_rq = group_rt_rq(rt_se);
- } while (rt_rq);
-
- return rt_task_of(rt_se);
-}
-
static struct task_struct *pick_task_rt(struct rq *rq, struct rq_flags *rf)
{
struct task_struct *p;
@@ -1658,7 +869,7 @@ static struct task_struct *pick_task_rt(struct rq *rq, struct rq_flags *rf)
if (!sched_rt_runnable(rq))
return NULL;
- p = _pick_next_task_rt(rq);
+ p = rt_task_of(pick_next_rt_entity(&rq->rt));
return p;
}
@@ -2322,8 +1533,6 @@ static void rq_online_rt(struct rq *rq)
if (rq->rt.overloaded)
rt_set_overload(rq);
- __enable_runtime(rq);
-
cpupri_set(&rq->rd->cpupri, rq->cpu, rq->rt.highest_prio.curr);
}
@@ -2333,8 +1542,6 @@ static void rq_offline_rt(struct rq *rq)
if (rq->rt.overloaded)
rt_clear_overload(rq);
- __disable_runtime(rq);
-
cpupri_set(&rq->rd->cpupri, rq->cpu, CPUPRI_INVALID);
}
@@ -2495,12 +1702,10 @@ static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
* Requeue to the end of queue if we (and all of our ancestors) are not
* the only element on the queue
*/
- for_each_sched_rt_entity(rt_se) {
- if (rt_se->run_list.prev != rt_se->run_list.next) {
- requeue_task_rt(rq, p, 0);
- resched_curr(rq);
- return;
- }
+ if (rt_se->run_list.prev != rt_se->run_list.next) {
+ requeue_task_rt(rq, p, 0);
+ resched_curr(rq);
+ return;
}
}
@@ -2518,16 +1723,7 @@ static unsigned int get_rr_interval_rt(struct rq *rq, struct task_struct *task)
#ifdef CONFIG_SCHED_CORE
static int task_is_throttled_rt(struct task_struct *p, int cpu)
{
- struct rt_rq *rt_rq;
-
-#ifdef CONFIG_RT_GROUP_SCHED // XXX maybe add task_rt_rq(), see also sched_rt_period_rt_rq
- rt_rq = task_group(p)->rt_rq[cpu];
- WARN_ON(!rt_group_sched_enabled() && rt_rq->tg != &root_task_group);
-#else
- rt_rq = &cpu_rq(cpu)->rt;
-#endif
-
- return rt_rq_throttled(rt_rq);
+ return 0;
}
#endif /* CONFIG_SCHED_CORE */
@@ -2774,13 +1970,7 @@ long sched_group_rt_period(struct task_group *tg)
#ifdef CONFIG_SYSCTL
static int sched_rt_global_constraints(void)
{
- int ret = 0;
-
- mutex_lock(&rt_constraints_mutex);
- ret = __rt_schedulable(NULL, 0, 0);
- mutex_unlock(&rt_constraints_mutex);
-
- return ret;
+ return 0;
}
#endif /* CONFIG_SYSCTL */
@@ -2815,10 +2005,6 @@ static int sched_rt_global_validate(void)
return 0;
}
-static void sched_rt_do_global(void)
-{
-}
-
static int sched_rt_handler(const struct ctl_table *table, int write, void *buffer,
size_t *lenp, loff_t *ppos)
{
@@ -2846,7 +2032,6 @@ static int sched_rt_handler(const struct ctl_table *table, int write, void *buff
if (ret)
goto undo;
- sched_rt_do_global();
sched_dl_do_global();
}
if (0) {
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 2b8630ed1353..5833905d8eaa 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -820,7 +820,7 @@ struct scx_rq {
static inline int rt_bandwidth_enabled(void)
{
- return sysctl_sched_rt_runtime >= 0;
+ return 0;
}
/* RT IPI pull logic requires IRQ_WORK */
@@ -860,7 +860,7 @@ struct rt_rq {
static inline bool rt_rq_is_runnable(struct rt_rq *rt_rq)
{
- return rt_rq->rt_queued && rt_rq->rt_nr_running;
+ return rt_rq->rt_nr_running;
}
/* Deadline class' related fields in a runqueue */
@@ -2775,7 +2775,7 @@ static inline bool sched_dl_runnable(struct rq *rq)
static inline bool sched_rt_runnable(struct rq *rq)
{
- return rq->rt.rt_queued > 0;
+ return rq->rt.rt_nr_running > 0;
}
static inline bool sched_fair_runnable(struct rq *rq)
@@ -2887,9 +2887,6 @@ extern void resched_curr(struct rq *rq);
extern void resched_curr_lazy(struct rq *rq);
extern void resched_cpu(int cpu);
-extern void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime);
-extern bool sched_rt_bandwidth_account(struct rt_rq *rt_rq);
-
extern void init_dl_entity(struct sched_dl_entity *dl_se);
extern void init_cfs_throttle_work(struct task_struct *p);
@@ -3306,12 +3303,8 @@ extern void set_rq_offline(struct rq *rq);
extern bool sched_smp_initialized;
#ifdef CONFIG_RT_GROUP_SCHED
-#define rt_entity_is_task(rt_se) (!(rt_se)->my_q)
-
static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
{
- WARN_ON_ONCE(!rt_entity_is_task(rt_se));
-
return container_of_const(rt_se, struct task_struct, rt);
}
@@ -3336,8 +3329,6 @@ static inline struct rq *rq_of_rt_se(struct sched_rt_entity *rt_se)
return rt_rq->rq;
}
#else
-#define rt_entity_is_task(rt_se) (1)
-
static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
{
return container_of_const(rt_se, struct task_struct, rt);
diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
index cadb0e9fe19b..806bc88d21ee 100644
--- a/kernel/sched/syscalls.c
+++ b/kernel/sched/syscalls.c
@@ -606,19 +606,6 @@ int __sched_setscheduler(struct task_struct *p,
change:
if (user) {
-#ifdef CONFIG_RT_GROUP_SCHED
- /*
- * Do not allow real-time tasks into groups that have no runtime
- * assigned.
- */
- if (rt_group_sched_enabled() &&
- rt_bandwidth_enabled() && rt_policy(policy) &&
- task_group(p)->rt_bandwidth.rt_runtime == 0 &&
- !task_group_is_autogroup(task_group(p))) {
- retval = -EPERM;
- goto unlock;
- }
-#endif /* CONFIG_RT_GROUP_SCHED */
if (dl_bandwidth_enabled() && dl_policy(policy) &&
!(attr->sched_flags & SCHED_FLAG_SUGOV)) {
cpumask_t *span = rq->rd->span;
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 07/29] sched/rt: Remove unnecessary runqueue pointer in struct rt_rq
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (5 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 06/29] sched/rt: Disable RT_GROUP_SCHED Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 08/29] sched/rt: Introduce HCBS specific structs in task_group Yuri Andriaccio
` (21 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
Remove the rq field from struct rt_rq.
The rq field now just caches a pointer to the global runqueue of the given
rt_rq, so it is unnecessary: the global runqueue can be retrieved in other
ways.
Introduce served_rq_of_rt_rq to retrieve the runqueue that the given rt_rq is
serving.
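As an illustration (not part of the diff below), the idea is that an rt_rq
embedded in a struct rq can recover its containing runqueue via container_of,
so no back-pointer needs to be cached:

    /* sketch: recover the rq that this rt_rq serves */
    struct rq *rq = container_of(rt_rq, struct rq, rt);
    /* for a cgroup's rt_rq this is the group-local rq; the global rq is
       then reached through its CPU number, see rq_of_rt_rq() below */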
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/rt.c | 7 ++-----
kernel/sched/sched.h | 21 +++++++++++++--------
2 files changed, 15 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 392212ac90d8..dd4aee5570aa 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -101,10 +101,7 @@ void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
struct sched_rt_entity *rt_se, int cpu,
struct sched_rt_entity *parent)
{
- struct rq *rq = cpu_rq(cpu);
-
rt_rq->highest_prio.curr = MAX_RT_PRIO-1;
- rt_rq->rq = rq;
rt_rq->tg = tg;
tg->rt_rq[cpu] = rt_rq;
@@ -184,7 +181,7 @@ static void pull_rt_task(struct rq *);
static inline void rt_queue_push_tasks(struct rt_rq *rt_rq)
{
- struct rq *rq = container_of_const(rt_rq, struct rq, rt);
+ struct rq *rq = served_rq_of_rt_rq(rt_rq);
if (!has_pushable_tasks(rt_rq))
return;
@@ -194,7 +191,7 @@ static inline void rt_queue_push_tasks(struct rt_rq *rt_rq)
static inline void rt_queue_pull_task(struct rt_rq *rt_rq)
{
- struct rq *rq = container_of_const(rt_rq, struct rq, rt);
+ struct rq *rq = served_rq_of_rt_rq(rt_rq);
queue_balance_callback(rq, &per_cpu(rt_pull_head, rq->cpu), pull_rt_task);
}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 5833905d8eaa..770de5afd3a9 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -850,8 +850,6 @@ struct rt_rq {
raw_spinlock_t rt_runtime_lock;
unsigned int rt_nr_boosted;
-
- struct rq *rq; /* this is always top-level rq, cache? */
#endif
#ifdef CONFIG_CGROUP_SCHED
struct task_group *tg; /* this tg has "this" rt_rq on given CPU for runnable entities */
@@ -3308,11 +3306,16 @@ static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
return container_of_const(rt_se, struct task_struct, rt);
}
+static inline struct rq *served_rq_of_rt_rq(struct rt_rq *rt_rq)
+{
+ WARN_ON(!rt_group_sched_enabled() && rt_rq->tg != &root_task_group);
+ return container_of_const(rt_rq, struct rq, rt);
+}
+
static inline struct rq *rq_of_rt_rq(struct rt_rq *rt_rq)
{
/* Cannot fold with non-CONFIG_RT_GROUP_SCHED version, layout */
- WARN_ON(!rt_group_sched_enabled() && rt_rq->tg != &root_task_group);
- return rt_rq->rq;
+ return cpu_rq(served_rq_of_rt_rq(rt_rq)->cpu);
}
static inline struct rt_rq *rt_rq_of_se(struct sched_rt_entity *rt_se)
@@ -3323,10 +3326,7 @@ static inline struct rt_rq *rt_rq_of_se(struct sched_rt_entity *rt_se)
static inline struct rq *rq_of_rt_se(struct sched_rt_entity *rt_se)
{
- struct rt_rq *rt_rq = rt_se->rt_rq;
-
- WARN_ON(!rt_group_sched_enabled() && rt_rq->tg != &root_task_group);
- return rt_rq->rq;
+ return rq_of_rt_rq(rt_se->rt_rq);
}
#else
static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
@@ -3334,6 +3334,11 @@ static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
return container_of_const(rt_se, struct task_struct, rt);
}
+static inline struct rq *served_rq_of_rt_rq(struct rt_rq *rt_rq)
+{
+ return container_of_const(rt_rq, struct rq, rt);
+}
+
static inline struct rq *rq_of_rt_rq(struct rt_rq *rt_rq)
{
return container_of_const(rt_rq, struct rq, rt);
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 08/29] sched/rt: Introduce HCBS specific structs in task_group
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (6 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 07/29] sched/rt: Remove unnecessary runqueue pointer in struct rt_rq Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 09/29] sched/core: Initialize HCBS specific structures Yuri Andriaccio
` (20 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Add an array of sched_dl_entity objects in task_group.
Create the dl_bandwidth struct and add a field for it in task_group.
Add a rq pointer field in struct rt_rq.
---
For each CPU on the host system, the task_group manages a sched_dl_entity and
a rt_rq object, which in turn keeps a pointer to its locally managed runqueue.
The sched_dl_entity object manages the deadline server that will be scheduled
for execution on the CPU, while the rt_rq object references the local
runqueue's data and entities; it is used when an actual task must be scheduled
once the CPU is given to the dl_server.
The dl_bandwidth object keeps track of the bandwidth currently allocated to
the cgroup.
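A minimal sketch of how the new per-CPU objects relate (illustrative only; the
fields are introduced in the hunk below), for a non-root task_group tg:

    struct sched_dl_entity *dl_se = tg->dl_se[cpu]; /* the CPU's deadline server */
    struct rt_rq *rt_rq = tg->rt_rq[cpu];           /* the group's rt runqueue   */
    struct rq *served_rq = rt_rq->rq;               /* its locally served rq     */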
Co-developed-by: Alessio Balsini <a.balsini@sssup.it>
Signed-off-by: Alessio Balsini <a.balsini@sssup.it>
Co-developed-by: Andrea Parri <parri.andrea@gmail.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Co-developed-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
---
kernel/sched/sched.h | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 770de5afd3a9..1c614e54eba4 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -322,6 +322,13 @@ struct rt_bandwidth {
unsigned int rt_period_active;
};
+struct dl_bandwidth {
+ raw_spinlock_t dl_runtime_lock;
+ u64 dl_runtime;
+ u64 dl_period;
+};
+
+
static inline int dl_bandwidth_enabled(void)
{
return sysctl_sched_rt_runtime >= 0;
@@ -495,10 +502,17 @@ struct task_group {
#endif /* CONFIG_FAIR_GROUP_SCHED */
#ifdef CONFIG_RT_GROUP_SCHED
+ /*
+ * Each task group manages a different scheduling entity per CPU, i.e. a
+ * different deadline server, and a runqueue per CPU. All the dl-servers
+ * share the same dl_bandwidth object.
+ */
struct sched_rt_entity **rt_se;
+ struct sched_dl_entity **dl_se;
struct rt_rq **rt_rq;
struct rt_bandwidth rt_bandwidth;
+ struct dl_bandwidth dl_bandwidth;
#endif
struct scx_task_group scx;
@@ -854,6 +868,12 @@ struct rt_rq {
#ifdef CONFIG_CGROUP_SCHED
struct task_group *tg; /* this tg has "this" rt_rq on given CPU for runnable entities */
#endif
+
+ /*
+ * The cgroup's served runqueue if the rt_rq entity belongs to a cgroup,
+ * otherwise the top-level global runqueue.
+ */
+ struct rq *rq;
};
static inline bool rt_rq_is_runnable(struct rt_rq *rt_rq)
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 09/29] sched/core: Initialize HCBS specific structures
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (7 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 08/29] sched/rt: Introduce HCBS specific structs in task_group Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 10/29] sched/deadline: Add dl_init_tg Yuri Andriaccio
` (19 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Update autogroups' creation/destruction to use the new data structures.
Initialize the default bandwidth for rt-cgroups (sched_init).
Initialize rt-scheduler's specific data structures for the root control
group (sched_init).
Remove init_tg_rt_entry in favour of manual setup of the necessary data
structures in sched_init.
Add utility functions to check whether an rt_rq entity is connected to an
rt-cgroup, and to retrieve the cgroup's deadline server.
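Illustrative use of the new helpers (defined in the sched.h hunk below):

    if (is_dl_group(rt_rq)) {
        /* rt_rq belongs to an rt-cgroup served by this deadline server */
        struct sched_dl_entity *dl_se = dl_group_of(rt_rq);
    } else {
        /* rt_rq is the root runqueue's &rq->rt */
    }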
Co-developed-by: Alessio Balsini <a.balsini@sssup.it>
Signed-off-by: Alessio Balsini <a.balsini@sssup.it>
Co-developed-by: Andrea Parri <parri.andrea@gmail.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Co-developed-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
---
kernel/sched/autogroup.c | 4 ++--
kernel/sched/core.c | 11 +++++++++--
kernel/sched/deadline.c | 8 ++++++++
kernel/sched/rt.c | 11 -----------
kernel/sched/sched.h | 30 +++++++++++++++++++++++++++---
5 files changed, 46 insertions(+), 18 deletions(-)
diff --git a/kernel/sched/autogroup.c b/kernel/sched/autogroup.c
index e380cf9372bb..2122a0740a19 100644
--- a/kernel/sched/autogroup.c
+++ b/kernel/sched/autogroup.c
@@ -52,7 +52,7 @@ static inline void autogroup_destroy(struct kref *kref)
#ifdef CONFIG_RT_GROUP_SCHED
/* We've redirected RT tasks to the root task group... */
- ag->tg->rt_se = NULL;
+ ag->tg->dl_se = NULL;
ag->tg->rt_rq = NULL;
#endif
sched_release_group(ag->tg);
@@ -109,7 +109,7 @@ static inline struct autogroup *autogroup_create(void)
* the policy change to proceed.
*/
free_rt_sched_group(tg);
- tg->rt_se = root_task_group.rt_se;
+ tg->dl_se = root_task_group.dl_se;
tg->rt_rq = root_task_group.rt_rq;
#endif /* CONFIG_RT_GROUP_SCHED */
tg->autogroup = ag;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a203a27fb16d..4e58b4f165ed 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8636,7 +8636,7 @@ void __init sched_init(void)
scx_tg_init(&root_task_group);
#endif /* CONFIG_EXT_GROUP_SCHED */
#ifdef CONFIG_RT_GROUP_SCHED
- root_task_group.rt_se = (struct sched_rt_entity **)ptr;
+ root_task_group.dl_se = (struct sched_dl_entity **)ptr;
ptr += nr_cpu_ids * sizeof(void **);
root_task_group.rt_rq = (struct rt_rq **)ptr;
@@ -8647,6 +8647,11 @@ void __init sched_init(void)
init_defrootdomain();
+#ifdef CONFIG_RT_GROUP_SCHED
+ init_dl_bandwidth(&root_task_group.dl_bandwidth,
+ global_rt_period(), global_rt_runtime());
+#endif /* CONFIG_RT_GROUP_SCHED */
+
#ifdef CONFIG_CGROUP_SCHED
task_group_cache = KMEM_CACHE(task_group, 0);
@@ -8698,7 +8703,9 @@ void __init sched_init(void)
* starts working after scheduler_running, which is not the case
* yet.
*/
- init_tg_rt_entry(&root_task_group, &rq->rt, NULL, i, NULL);
+ rq->rt.tg = &root_task_group;
+ root_task_group.rt_rq[i] = &rq->rt;
+ root_task_group.dl_se[i] = NULL;
#endif
rq->next_class = &idle_sched_class;
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 67615a0539fe..7c039d5f3c5d 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -505,6 +505,14 @@ static inline int is_leftmost(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq
static void init_dl_rq_bw_ratio(struct dl_rq *dl_rq);
+void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime)
+{
+ raw_spin_lock_init(&dl_b->dl_runtime_lock);
+ dl_b->dl_period = period;
+ dl_b->dl_runtime = runtime;
+}
+
+
void init_dl_bw(struct dl_bw *dl_b)
{
raw_spin_lock_init(&dl_b->lock);
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index dd4aee5570aa..741fac9f57ac 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -97,17 +97,6 @@ void free_rt_sched_group(struct task_group *tg)
return;
}
-void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
- struct sched_rt_entity *rt_se, int cpu,
- struct sched_rt_entity *parent)
-{
- rt_rq->highest_prio.curr = MAX_RT_PRIO-1;
- rt_rq->tg = tg;
-
- tg->rt_rq[cpu] = rt_rq;
- tg->rt_se[cpu] = rt_se;
-}
-
int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
{
if (!rt_group_sched_enabled())
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1c614e54eba4..e7e263d3cddb 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -604,9 +604,6 @@ extern void start_cfs_bandwidth(struct cfs_bandwidth *cfs_b);
extern void unthrottle_cfs_rq(struct cfs_rq *cfs_rq);
extern bool cfs_task_bw_constrained(struct task_struct *p);
-extern void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
- struct sched_rt_entity *rt_se, int cpu,
- struct sched_rt_entity *parent);
extern int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us);
extern int sched_group_set_rt_period(struct task_group *tg, u64 rt_period_us);
extern long sched_group_rt_runtime(struct task_group *tg);
@@ -2905,6 +2902,7 @@ extern void resched_curr(struct rq *rq);
extern void resched_curr_lazy(struct rq *rq);
extern void resched_cpu(int cpu);
+void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime);
extern void init_dl_entity(struct sched_dl_entity *dl_se);
extern void init_cfs_throttle_work(struct task_struct *p);
@@ -3348,6 +3346,22 @@ static inline struct rq *rq_of_rt_se(struct sched_rt_entity *rt_se)
{
return rq_of_rt_rq(rt_se->rt_rq);
}
+
+static inline int is_dl_group(struct rt_rq *rt_rq)
+{
+ return rt_rq->tg != &root_task_group;
+}
+
+/*
+ * Return the scheduling entity of this group of tasks.
+ */
+static inline struct sched_dl_entity *dl_group_of(struct rt_rq *rt_rq)
+{
+ if (WARN_ON_ONCE(!is_dl_group(rt_rq)))
+ return NULL;
+
+ return rt_rq->tg->dl_se[served_rq_of_rt_rq(rt_rq)->cpu];
+}
#else
static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
{
@@ -3377,6 +3391,16 @@ static inline struct rt_rq *rt_rq_of_se(struct sched_rt_entity *rt_se)
return &rq->rt;
}
+
+static inline int is_dl_group(struct rt_rq *rt_rq)
+{
+ return 0;
+}
+
+static inline struct sched_dl_entity *dl_group_of(struct rt_rq *rt_rq)
+{
+ return NULL;
+}
#endif
DEFINE_LOCK_GUARD_2(double_rq_lock, struct rq,
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 10/29] sched/deadline: Add dl_init_tg
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (8 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 09/29] sched/core: Initialize HCBS specific structures Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 11/29] sched/rt: Add {alloc/unregister/free}_rt_sched_group Yuri Andriaccio
` (18 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Add dl_init_tg to initialize and/or update an rt-cgroup's dl_server and to
account the allocated bandwidth. This function is currently unhooked; it will
later be used to allocate bandwidth to rt-cgroups.
Add a lock guard for raw_spin_rq_lock_irq for cleaner code.
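As a hedged usage sketch (the actual hook-up comes in later patches, and the
helper name below is purely illustrative), a cgroup bandwidth update would
roughly do:

    static void set_group_bandwidth(struct task_group *tg, u64 runtime, u64 period)
    {
        int cpu;

        /* reconfigure every per-CPU deadline server of the group */
        for_each_possible_cpu(cpu)
            dl_init_tg(tg->dl_se[cpu], runtime, period);
    }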
Co-developed-by: Alessio Balsini <a.balsini@sssup.it>
Signed-off-by: Alessio Balsini <a.balsini@sssup.it>
Co-developed-by: Andrea Parri <parri.andrea@gmail.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Co-developed-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
---
kernel/sched/deadline.c | 31 +++++++++++++++++++++++++++++++
kernel/sched/sched.h | 5 +++++
2 files changed, 36 insertions(+)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 7c039d5f3c5d..5532ca4ad969 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -332,6 +332,37 @@ void cancel_inactive_timer(struct sched_dl_entity *dl_se)
cancel_dl_timer(dl_se, &dl_se->inactive_timer);
}
+#ifdef CONFIG_RT_GROUP_SCHED
+void dl_init_tg(struct sched_dl_entity *dl_se, u64 rt_runtime, u64 rt_period)
+{
+ struct rq *rq = container_of_const(dl_se->dl_rq, struct rq, dl);
+ int is_active;
+ u64 new_bw;
+
+ guard(raw_spin_rq_lock_irq)(rq);
+ is_active = dl_se->my_q->rt.rt_nr_running > 0;
+
+ update_rq_clock(rq);
+ dl_server_stop(dl_se);
+
+ new_bw = to_ratio(rt_period, rt_runtime);
+ dl_rq_change_utilization(rq, dl_se, new_bw);
+
+ dl_se->dl_runtime = rt_runtime;
+ dl_se->dl_deadline = rt_period;
+ dl_se->dl_period = rt_period;
+
+ dl_se->runtime = 0;
+ dl_se->deadline = 0;
+
+ dl_se->dl_bw = new_bw;
+ dl_se->dl_density = new_bw;
+
+ if (is_active)
+ dl_server_start(dl_se);
+}
+#endif
+
static void dl_change_utilization(struct task_struct *p, u64 new_bw)
{
WARN_ON_ONCE(p->dl.flags & SCHED_FLAG_SUGOV);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e7e263d3cddb..ca69d2132061 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -423,6 +423,7 @@ extern void dl_server_init(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq,
struct rq *served_rq,
dl_server_pick_f pick_task);
extern void sched_init_dl_servers(void);
+extern void dl_init_tg(struct sched_dl_entity *dl_se, u64 rt_runtime, u64 rt_period);
extern void fair_server_init(struct rq *rq);
extern void ext_server_init(struct rq *rq);
@@ -2023,6 +2024,10 @@ static inline struct rq *_this_rq_lock_irq(struct rq_flags *rf) __acquires_ret
return rq;
}
+DEFINE_LOCK_GUARD_1(raw_spin_rq_lock_irq, struct rq,
+ raw_spin_rq_lock_irq(_T->lock),
+ raw_spin_rq_unlock_irq(_T->lock))
+
#ifdef CONFIG_NUMA
enum numa_topology_type {
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 11/29] sched/rt: Add {alloc/unregister/free}_rt_sched_group
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (9 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 10/29] sched/deadline: Add dl_init_tg Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 12/29] sched/deadline: Account rt-cgroups bandwidth in deadline tasks schedulability tests Yuri Andriaccio
` (17 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Add allocation and deallocation code for rt-cgroups.
Declare dl_server-specific functions (skeletons only, with no implementation
yet), which the deadline servers need to call when trying to schedule.
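The allocation path relies on the scoped-cleanup helpers from
<linux/cleanup.h>; a minimal sketch of the idiom used below (illustrative,
not part of the patch):

    struct foo *p __free(kfree) = kzalloc(sizeof(*p), GFP_KERNEL);
    if (!p)
        return 0;               /* p is kfree()d automatically on return */
    tg->foo = no_free_ptr(p);   /* ownership transferred, no auto-free   */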
Co-developed-by: Alessio Balsini <a.balsini@sssup.it>
Signed-off-by: Alessio Balsini <a.balsini@sssup.it>
Co-developed-by: Andrea Parri <parri.andrea@gmail.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Co-developed-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
---
kernel/sched/rt.c | 151 +++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 149 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 741fac9f57ac..3d7f2b2ebe60 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -88,24 +88,171 @@ void init_rt_rq(struct rt_rq *rt_rq)
void unregister_rt_sched_group(struct task_group *tg)
{
+ int i;
+
+ if (!rt_group_sched_enabled())
+ return;
+
+ if (!tg->dl_se || !tg->rt_rq)
+ return;
+ for_each_possible_cpu(i) {
+ if (!tg->dl_se[i] || !tg->rt_rq[i])
+ continue;
+
+ if (tg->dl_se[i]->dl_runtime)
+ dl_init_tg(tg->dl_se[i], 0, tg->dl_se[i]->dl_period);
+ }
}
void free_rt_sched_group(struct task_group *tg)
{
+ int i;
+ unsigned long flags;
+
if (!rt_group_sched_enabled())
return;
+
+ if (!tg->dl_se || !tg->rt_rq)
+ return;
+
+ for_each_possible_cpu(i) {
+ if (!tg->dl_se[i] || !tg->rt_rq[i])
+ continue;
+
+ /*
+ * Shutdown the dl_server and free it
+ *
+ * Since the dl timer is going to be cancelled,
+ * we risk to never decrease the running bw...
+ * Fix this issue by changing the group runtime
+ * to 0 immediately before freeing it.
+ */
+ if (tg->dl_se[i]->dl_runtime)
+ dl_init_tg(tg->dl_se[i], 0, tg->dl_se[i]->dl_period);
+
+ raw_spin_rq_lock_irqsave(cpu_rq(i), flags);
+ hrtimer_cancel(&tg->dl_se[i]->dl_timer);
+ raw_spin_rq_unlock_irqrestore(cpu_rq(i), flags);
+ kfree(tg->dl_se[i]);
+
+ /* Free the local per-cpu runqueue */
+ kfree(served_rq_of_rt_rq(tg->rt_rq[i]));
+ }
+
+ kfree(tg->rt_rq);
+ kfree(tg->dl_se);
+}
+
+static struct task_struct *rt_server_pick(struct sched_dl_entity *dl_se, struct rq_flags *rf)
+{
+ return NULL;
+}
+
+static inline void __rt_rq_free(struct rt_rq **rt_rq)
+{
+ int i;
+
+ for_each_possible_cpu(i) {
+ kfree(served_rq_of_rt_rq(rt_rq[i]));
+ }
+
+ kfree(rt_rq);
+}
+
+DEFINE_FREE(rt_rq_free, struct rt_rq **, if (_T) __rt_rq_free(_T))
+
+static inline void __dl_se_free(struct sched_dl_entity **dl_se)
+{
+ int i;
+
+ for_each_possible_cpu(i) {
+ kfree(dl_se[i]);
+ }
+
+ kfree(dl_se);
+}
+
+DEFINE_FREE(dl_se_free, struct sched_dl_entity **, if (_T) __dl_se_free(_T))
+
+static int __alloc_rt_sched_group_data(struct task_group *tg) {
+ /* Instantiate automatic cleanup in event of kalloc fail */
+ struct rt_rq **tg_rt_rq __free(rt_rq_free) = NULL;
+ struct sched_dl_entity **tg_dl_se __free(dl_se_free) = NULL;
+ struct sched_dl_entity *dl_se __free(kfree) = NULL;
+ struct rq *s_rq __free(kfree) = NULL;
+ int i;
+
+ tg_rt_rq = kcalloc(nr_cpu_ids, sizeof(struct rt_rq *), GFP_KERNEL);
+ if (!tg_rt_rq)
+ return 0;
+
+ tg_dl_se = kcalloc(nr_cpu_ids,
+ sizeof(struct sched_dl_entity *), GFP_KERNEL);
+ if (!tg_dl_se)
+ return 0;
+
+ for_each_possible_cpu(i) {
+ s_rq = kzalloc_node(sizeof(struct rq),
+ GFP_KERNEL, cpu_to_node(i));
+ if (!s_rq)
+ return 0;
+
+ dl_se = kzalloc_node(sizeof(struct sched_dl_entity),
+ GFP_KERNEL, cpu_to_node(i));
+ if (!dl_se)
+ return 0;
+
+ tg_rt_rq[i] = &no_free_ptr(s_rq)->rt;
+ tg_dl_se[i] = no_free_ptr(dl_se);
+ }
+
+ tg->rt_rq = no_free_ptr(tg_rt_rq);
+ tg->dl_se = no_free_ptr(tg_dl_se);
+
+ return 1;
}
int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
{
+ struct sched_dl_entity *dl_se;
+ struct rq *s_rq;
+ int i;
+
if (!rt_group_sched_enabled())
return 1;
+ /* Allocate all necessary resources beforehand */
+ if (!__alloc_rt_sched_group_data(tg))
+ return 0;
+
+ /* Initialize the allocated resources now. */
+ init_dl_bandwidth(&tg->dl_bandwidth, 0, 0);
+
+ for_each_possible_cpu(i) {
+ s_rq = served_rq_of_rt_rq(tg->rt_rq[i]);
+ dl_se = tg->dl_se[i];
+
+ init_rt_rq(&s_rq->rt);
+ s_rq->cpu = i;
+ s_rq->rt.tg = tg;
+
+ init_dl_entity(dl_se);
+ dl_se->dl_runtime = tg->dl_bandwidth.dl_runtime;
+ dl_se->dl_deadline = tg->dl_bandwidth.dl_period;
+ dl_se->dl_period = tg->dl_bandwidth.dl_period;
+ dl_se->runtime = 0;
+ dl_se->deadline = 0;
+ dl_se->dl_bw = to_ratio(dl_se->dl_period, dl_se->dl_runtime);
+ dl_se->dl_density = to_ratio(dl_se->dl_deadline, dl_se->dl_runtime);
+ dl_se->dl_server = 1;
+ dl_server_init(dl_se, &cpu_rq(i)->dl, s_rq, rt_server_pick);
+ }
+
return 1;
}
-#else /* !CONFIG_RT_GROUP_SCHED: */
+#else /* !CONFIG_RT_GROUP_SCHED */
void unregister_rt_sched_group(struct task_group *tg) { }
@@ -115,7 +262,7 @@ int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
{
return 1;
}
-#endif /* !CONFIG_RT_GROUP_SCHED */
+#endif /* CONFIG_RT_GROUP_SCHED */
static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev)
{
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 12/29] sched/deadline: Account rt-cgroups bandwidth in deadline tasks schedulability tests
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (10 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 11/29] sched/rt: Add {alloc/unregister/free}_rt_sched_group Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 13/29] sched/rt: Implement dl-server operations for rt-cgroups Yuri Andriaccio
` (16 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Account the rt-cgroups hierarchy's reserved bandwidth in the schedulability
test of deadline entities. This mechanism allows a portion of the rt-bandwidth
to be fully reserved for rt-cgroups even if they do not use all of it.
Also account for the rt-cgroups' reserved bandwidth when changing the total
bandwidth dedicated to real-time tasks.
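A worked example with illustrative numbers, assuming the full 1.0 rt bandwidth
limit that this series makes the default (runtime == period == 1s): with 250ms
per second reserved by rt-cgroups, get_dl_groups_bw() corresponds to a
utilization of 0.25. Admitting a new DEADLINE task with bandwidth 0.80 on a
full-capacity CPU then fails, since 0.80 + 0.25 > 1.00, even if no cgroup is
currently consuming its reservation.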
Co-developed-by: Alessio Balsini <a.balsini@sssup.it>
Signed-off-by: Alessio Balsini <a.balsini@sssup.it>
Co-developed-by: Andrea Parri <parri.andrea@gmail.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Co-developed-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
---
kernel/sched/deadline.c | 20 +++++++++++++++++---
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 5532ca4ad969..084af1d375b5 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -202,11 +202,22 @@ void __dl_add(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
__dl_update(dl_b, -((s32)tsk_bw / cpus));
}
+static inline u64 get_dl_groups_bw(void)
+{
+#ifdef CONFIG_RT_GROUP_SCHED
+ return to_ratio(root_task_group.dl_bandwidth.dl_period,
+ root_task_group.dl_bandwidth.dl_runtime);
+#else
+ return 0;
+#endif
+}
+
static inline bool
__dl_overflow(struct dl_bw *dl_b, unsigned long cap, u64 old_bw, u64 new_bw)
{
return dl_b->bw != -1 &&
- cap_scale(dl_b->bw, cap) < dl_b->total_bw - old_bw + new_bw;
+ cap_scale(dl_b->bw, cap) < dl_b->total_bw - old_bw + new_bw
+ + cap_scale(get_dl_groups_bw(), cap);
}
static inline
@@ -3462,8 +3473,9 @@ int sched_dl_global_validate(void)
u64 period = global_rt_period();
u64 new_bw = to_ratio(period, runtime);
u64 cookie = ++dl_cookie;
+ u64 dl_groups_root = get_dl_groups_bw();
struct dl_bw *dl_b;
- int cpu, cpus, ret = 0;
+ int cpu, cap, cpus, ret = 0;
unsigned long flags;
/*
@@ -3478,10 +3490,12 @@ int sched_dl_global_validate(void)
goto next;
dl_b = dl_bw_of(cpu);
+ cap = dl_bw_capacity(cpu);
cpus = dl_bw_cpus(cpu);
raw_spin_lock_irqsave(&dl_b->lock, flags);
- if (new_bw * cpus < dl_b->total_bw)
+ if (new_bw * cpus < dl_b->total_bw +
+ cap_scale(dl_groups_root, cap))
ret = -EBUSY;
raw_spin_unlock_irqrestore(&dl_b->lock, flags);
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 13/29] sched/rt: Implement dl-server operations for rt-cgroups
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (11 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 12/29] sched/deadline: Account rt-cgroups bandwidth in deadline tasks schedulability tests Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-05-05 13:04 ` Peter Zijlstra
2026-04-30 21:38 ` [RFC PATCH v5 14/29] sched/rt: Update task event callbacks for HCBS scheduling Yuri Andriaccio
` (15 subsequent siblings)
28 siblings, 1 reply; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
Implement rt_server_pick, the callback that deadline servers use to
pick a task to schedule.
rt_server_pick(): pick the next runnable rt task and tell the
scheduler that it is going to be scheduled next.
Let the enqueue_task_rt function start the attached deadline server when the
first task is enqueued on a specific rq/server.
The server is not symmetrically stopped in dequeue_task_rt as it is
stopped when server_pick_task returns NULL (see deadline.c).
Change update_curr_rt to perform a deadline server update if the updated
task is served by a non-root group.
Update inc/dec_dl_tasks to account for the number of active tasks in the
local runqueue of rt-cgroup servers: their local runqueue is different from
the global runqueue, so when an rt-group server is activated/deactivated the
number of served tasks must be added/removed. This uses nr_running to be
compatible with future dl-server interfaces. Also account for the deadline
server itself, so that it is picked for shutdown when its runqueue is empty
(future patches will try to pull tasks before stopping).
Update inc/dec_rt_prio_smp to change a rq's cpupri only if the rt_rq
is the global runqueue, since cgroups are scheduled via their
dl-server priority.
Update inc/dec_rt_tasks to account for waking/sleeping tasks on the global
runqueue when the task runs in the root cgroup or its local dl server is
active. The accounting is not done when servers are throttled, as they
add/sub the number of running tasks when they get enqueued/dequeued. For rt
cgroups, account for the number of active tasks in the nr_running field of
the local runqueue (add/sub_nr_running), as this number is used when a dl
server is enqueued/dequeued.
Update set_task_rq to record the dl_rq, tracking which deadline
server manages a task.
Update set_task_rq to not use the parent field anymore, as it is
unused by this patchset's code. Remove the unused parent field from
sched_rt_entity.
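A simplified sketch of the enqueue-side behaviour described above (the real
hunk below also checks rt_group_sched_enabled() and handles statistics and
blocked tasks):

    /* Task arriving in an idle group: start its deadline server. */
    if (is_dl_group(rt_rq) && rt_rq->rt_nr_running == 0)
        dl_server_start(dl_group_of(rt_rq));

    enqueue_rt_entity(rt_se, flags);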
Co-developed-by: Alessio Balsini <a.balsini@sssup.it>
Signed-off-by: Alessio Balsini <a.balsini@sssup.it>
Co-developed-by: Andrea Parri <parri.andrea@gmail.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Co-developed-by: luca abeni <luca.abeni@santannapisa.it>
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
include/linux/sched.h | 1 -
kernel/sched/deadline.c | 8 ++++++
kernel/sched/rt.c | 60 ++++++++++++++++++++++++++++++++++++++---
kernel/sched/sched.h | 8 +++++-
4 files changed, 71 insertions(+), 6 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index eb8b57f689b5..ea2e74598b93 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -630,7 +630,6 @@ struct sched_rt_entity {
struct sched_rt_entity *back;
#ifdef CONFIG_RT_GROUP_SCHED
- struct sched_rt_entity *parent;
/* rq on which this entity is (to be) queued: */
struct rt_rq *rt_rq;
/* rq "owned" by this entity/group: */
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 084af1d375b5..c82810732106 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2093,6 +2093,10 @@ void inc_dl_tasks(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
if (!dl_server(dl_se))
add_nr_running(rq_of_dl_rq(dl_rq), 1);
+ else if (rq_of_dl_se(dl_se) != dl_se->my_q) {
+ WARN_ON(dl_se->my_q->rt.rt_nr_running != dl_se->my_q->nr_running);
+ add_nr_running(rq_of_dl_rq(dl_rq), dl_se->my_q->nr_running + 1);
+ }
inc_dl_deadline(dl_rq, deadline);
}
@@ -2105,6 +2109,10 @@ void dec_dl_tasks(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
if (!dl_server(dl_se))
sub_nr_running(rq_of_dl_rq(dl_rq), 1);
+ else if (rq_of_dl_se(dl_se) != dl_se->my_q) {
+ WARN_ON(dl_se->my_q->rt.rt_nr_running != dl_se->my_q->nr_running);
+ sub_nr_running(rq_of_dl_rq(dl_rq), dl_se->my_q->nr_running - 1);
+ }
dec_dl_deadline(dl_rq, dl_se->deadline);
}
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 3d7f2b2ebe60..defb812b0e48 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -144,9 +144,22 @@ void free_rt_sched_group(struct task_group *tg)
kfree(tg->dl_se);
}
+static struct sched_rt_entity *pick_next_rt_entity(struct rt_rq *rt_rq);
+static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool first);
+
static struct task_struct *rt_server_pick(struct sched_dl_entity *dl_se, struct rq_flags *rf)
{
- return NULL;
+ struct rt_rq *rt_rq = &dl_se->my_q->rt;
+ struct rq *rq = rq_of_rt_rq(rt_rq);
+ struct task_struct *p;
+
+ if (!sched_rt_runnable(dl_se->my_q))
+ return NULL;
+
+ p = rt_task_of(pick_next_rt_entity(rt_rq));
+ set_next_task_rt(rq, p, true);
+
+ return p;
}
static inline void __rt_rq_free(struct rt_rq **rt_rq)
@@ -462,6 +475,7 @@ static inline int rt_se_prio(struct sched_rt_entity *rt_se)
static void update_curr_rt(struct rq *rq)
{
struct task_struct *donor = rq->donor;
+ struct rt_rq *rt_rq;
s64 delta_exec;
if (donor->sched_class != &rt_sched_class)
@@ -471,8 +485,18 @@ static void update_curr_rt(struct rq *rq)
if (unlikely(delta_exec <= 0))
return;
- if (!rt_bandwidth_enabled())
+ if (!rt_group_sched_enabled())
return;
+
+ if (!dl_bandwidth_enabled())
+ return;
+
+ rt_rq = rt_rq_of_se(&donor->rt);
+ if (is_dl_group(rt_rq)) {
+ struct sched_dl_entity *dl_se = dl_group_of(rt_rq);
+
+ dl_server_update(dl_se, delta_exec);
+ }
}
static void
@@ -483,7 +507,7 @@ inc_rt_prio_smp(struct rt_rq *rt_rq, int prio, int prev_prio)
/*
* Change rq's cpupri only if rt_rq is the top queue.
*/
- if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && &rq->rt != rt_rq)
+ if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq))
return;
if (rq->online && prio < prev_prio)
@@ -498,7 +522,7 @@ dec_rt_prio_smp(struct rt_rq *rt_rq, int prio, int prev_prio)
/*
* Change rq's cpupri only if rt_rq is the top queue.
*/
- if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && &rq->rt != rt_rq)
+ if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq))
return;
if (rq->online && rt_rq->highest_prio.curr != prev_prio)
@@ -561,6 +585,16 @@ void inc_rt_tasks(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
rt_rq->rr_nr_running += is_rr_task(rt_se);
inc_rt_prio(rt_rq, rt_se_prio(rt_se));
+
+ if (rt_group_sched_enabled() && is_dl_group(rt_rq)) {
+ struct sched_dl_entity *dl_se = dl_group_of(rt_rq);
+
+ if (!dl_se->dl_throttled)
+ add_nr_running(rq_of_rt_rq(rt_rq), 1);
+ add_nr_running(served_rq_of_rt_rq(rt_rq), 1);
+ } else {
+ add_nr_running(rq_of_rt_rq(rt_rq), 1);
+ }
}
static inline
@@ -571,6 +605,16 @@ void dec_rt_tasks(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
rt_rq->rr_nr_running -= is_rr_task(rt_se);
dec_rt_prio(rt_rq, rt_se_prio(rt_se));
+
+ if (rt_group_sched_enabled() && is_dl_group(rt_rq)) {
+ struct sched_dl_entity *dl_se = dl_group_of(rt_rq);
+
+ if (!dl_se->dl_throttled)
+ sub_nr_running(rq_of_rt_rq(rt_rq), 1);
+ sub_nr_running(served_rq_of_rt_rq(rt_rq), 1);
+ } else {
+ sub_nr_running(rq_of_rt_rq(rt_rq), 1);
+ }
}
/*
@@ -752,6 +796,14 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)
check_schedstat_required();
update_stats_wait_start_rt(rt_rq_of_se(rt_se), rt_se);
+ /* Task arriving in an idle group of tasks. */
+ if (rt_group_sched_enabled() &&
+ is_dl_group(rt_rq) && rt_rq->rt_nr_running == 0) {
+ struct sched_dl_entity *dl_se = dl_group_of(rt_rq);
+
+ dl_server_start(dl_se);
+ }
+
enqueue_rt_entity(rt_se, flags);
if (task_is_blocked(p))
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ca69d2132061..d949babfe16a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2292,7 +2292,7 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
if (!rt_group_sched_enabled())
tg = &root_task_group;
p->rt.rt_rq = tg->rt_rq[cpu];
- p->rt.parent = tg->rt_se[cpu];
+ p->dl.dl_rq = &cpu_rq(cpu)->dl;
#endif /* CONFIG_RT_GROUP_SCHED */
}
@@ -2954,6 +2954,9 @@ static inline void add_nr_running(struct rq *rq, unsigned count)
unsigned prev_nr = rq->nr_running;
rq->nr_running = prev_nr + count;
+ if (rq != cpu_rq(rq->cpu))
+ return;
+
if (trace_sched_update_nr_running_tp_enabled()) {
call_trace_sched_update_nr_running(rq, count);
}
@@ -2967,6 +2970,9 @@ static inline void add_nr_running(struct rq *rq, unsigned count)
static inline void sub_nr_running(struct rq *rq, unsigned count)
{
rq->nr_running -= count;
+ if (rq != cpu_rq(rq->cpu))
+ return;
+
if (trace_sched_update_nr_running_tp_enabled()) {
call_trace_sched_update_nr_running(rq, -count);
}
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 14/29] sched/rt: Update task event callbacks for HCBS scheduling
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (12 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 13/29] sched/rt: Implement dl-server operations for rt-cgroups Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-05-05 13:16 ` Peter Zijlstra
2026-04-30 21:38 ` [RFC PATCH v5 15/29] sched/rt: Update rt-cgroup schedulability checks Yuri Andriaccio
` (14 subsequent siblings)
28 siblings, 1 reply; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
Update wakeup_preempt_rt, switched_{from/to}_rt and prio_changed_rt with
rt-cgroup's specific preemption rules:
- In wakeup_preempt_rt(), whenever a task wakes up, it must be checked whether
  it is served by a deadline server or lives on the global runqueue.
  Preemption rules (as documented in the function) change based on the current
  task's and the woken task's runqueue:
  - If both tasks are FIFO/RR tasks on the global runqueue, or in the same
    cgroup, run as normal.
  - If the woken task is inside a cgroup but the donor is a FIFO task on the
    global runqueue, always preempt. If the donor is a DEADLINE task, check
    whether the dl server preempts the donor.
  - If both tasks are FIFO/RR tasks served by different groups, check whether
    the woken task's server preempts the donor's server.
- In switched_from_rt(), perform a pull only on the global runqueue, and
do nothing if the task is inside a group. This will change when
migrations are added.
- In switched_to_rt(), queue a push only on the global runqueue, while
  performing a priority check when the switching task is inside a group.
  This too will change when migrations are added.
- In prio_changed_rt(), queue a pull only on the global runqueue, if the
  task is not queued. If the task is queued, run preemption checks only if
  both the prio-changed task and curr are in the same cgroup.
Update sched_rt_can_attach() to check whether a task can be attached to a
given cgroup. For now the check only verifies that the group has non-zero
bandwidth. Remove the tsk argument from sched_rt_can_attach, as it is unused.
Change cpu_cgroup_can_attach() to check if the attachee is a FIFO/RR
task before attaching it to a cgroup.
Update __sched_setscheduler() to perform checks when switching a task inside
a cgroup to FIFO/RR, as the group needs to have runtime allocated.
Update task_is_throttled_rt() for SCHED_CORE to return the server's
is_throttled value if present; global rt-tasks are never throttled.
Co-developed-by: Alessio Balsini <a.balsini@sssup.it>
Signed-off-by: Alessio Balsini <a.balsini@sssup.it>
Co-developed-by: Andrea Parri <parri.andrea@gmail.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Co-developed-by: luca abeni <luca.abeni@santannapisa.it>
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/core.c | 2 +-
kernel/sched/rt.c | 106 +++++++++++++++++++++++++++++++++++-----
kernel/sched/sched.h | 2 +-
kernel/sched/syscalls.c | 12 +++++
4 files changed, 109 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4e58b4f165ed..98a53b60e21f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -9270,7 +9270,7 @@ static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
goto scx_check;
cgroup_taskset_for_each(task, css, tset) {
- if (!sched_rt_can_attach(css_tg(css), task))
+ if (rt_task(task) && !sched_rt_can_attach(css_tg(css)))
return -EINVAL;
}
scx_check:
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index defb812b0e48..67fbf4bbe461 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -975,7 +975,58 @@ static int balance_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
static void wakeup_preempt_rt(struct rq *rq, struct task_struct *p, int flags)
{
struct task_struct *donor = rq->donor;
+ struct sched_dl_entity *woken_dl_se = NULL;
+ struct sched_dl_entity *donor_dl_se = NULL;
+
+ if (!rt_group_sched_enabled())
+ goto no_group_sched;
+ /*
+ * Preemption checks are different if the waking task and the current task
+ * are running on the global runqueue or in a cgroup. The following rules
+ * apply:
+ * - dl-tasks (and equally dl_servers) always preempt FIFO/RR tasks.
+ * - if curr is a FIFO/RR task inside a cgroup (i.e. run by a
+ * dl_server), or curr is a DEADLINE task and waking is a FIFO/RR task
+ * on the root cgroup, do nothing.
+ * - if waking is inside a cgroup but curr is a FIFO/RR task in the root
+ * cgroup, always reschedule.
+ * - if they are both on the global runqueue, run the standard code.
+ * - if they are both in the same cgroup, check for tasks priorities.
+ * - if they are both in a cgroup, but not the same one, check whether the
+ * woken task's dl_server preempts the current's dl_server.
+ * - if curr is a DEADLINE task and waking is in a cgroup, check whether
+ * the woken task's server preempts curr.
+ */
+ if (is_dl_group(rt_rq_of_se(&p->rt)))
+ woken_dl_se = dl_group_of(rt_rq_of_se(&p->rt));
+ if (is_dl_group(rt_rq_of_se(&donor->rt)))
+ donor_dl_se = dl_group_of(rt_rq_of_se(&donor->rt));
+ else if (task_has_dl_policy(donor))
+ donor_dl_se = &donor->dl;
+
+ if (woken_dl_se != NULL && donor_dl_se != NULL) {
+ if (woken_dl_se == donor_dl_se) {
+ if (p->prio < donor->prio)
+ resched_curr(rq);
+
+ return;
+ }
+
+ if (dl_entity_preempt(woken_dl_se, donor_dl_se))
+ resched_curr(rq);
+
+ return;
+
+ } else if (woken_dl_se != NULL) {
+ resched_curr(rq);
+ return;
+
+ } else if (donor_dl_se != NULL) {
+ return;
+ }
+
+no_group_sched:
/*
* XXX If we're preempted by DL, queue a push?
*/
@@ -1026,7 +1077,8 @@ static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool f
if (rq->donor->sched_class != &rt_sched_class)
update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
- rt_queue_push_tasks(rt_rq);
+ if (!IS_ENABLED(CONFIG_RT_GROUP_SCHED) || !is_dl_group(rt_rq))
+ rt_queue_push_tasks(rt_rq);
}
static struct sched_rt_entity *pick_next_rt_entity(struct rt_rq *rt_rq)
@@ -1736,6 +1788,8 @@ static void rq_offline_rt(struct rq *rq)
*/
static void switched_from_rt(struct rq *rq, struct task_struct *p)
{
+ struct rt_rq *rt_rq = rt_rq_of_se(&p->rt);
+
/*
* If there are other RT tasks then we will reschedule
* and the scheduling of the other RT tasks will handle
@@ -1743,10 +1797,11 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
* we may need to handle the pulling of RT tasks
* now.
*/
- if (!task_on_rq_queued(p) || rq->rt.rt_nr_running)
+ if (!task_on_rq_queued(p) || rt_rq->rt_nr_running)
return;
- rt_queue_pull_task(rt_rq_of_se(&p->rt));
+ if (!IS_ENABLED(CONFIG_RT_GROUP_SCHED) || !is_dl_group(rt_rq))
+ rt_queue_pull_task(rt_rq);
}
void __init init_sched_rt_class(void)
@@ -1766,6 +1821,8 @@ void __init init_sched_rt_class(void)
*/
static void switched_to_rt(struct rq *rq, struct task_struct *p)
{
+ struct rt_rq *rt_rq = rt_rq_of_se(&p->rt);
+
/*
* If we are running, update the avg_rt tracking, as the running time
* will now on be accounted into the latter.
@@ -1781,8 +1838,14 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
* then see if we can move to another run queue.
*/
if (task_on_rq_queued(p)) {
- if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
- rt_queue_push_tasks(rt_rq_of_se(&p->rt));
+ if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq)) {
+ if (p->prio < rq->donor->prio)
+ resched_curr(rq);
+ } else {
+ if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
+ rt_queue_push_tasks(rt_rq_of_se(&p->rt));
+ }
+
if (p->prio < rq->donor->prio && cpu_online(cpu_of(rq)))
resched_curr(rq);
}
@@ -1795,6 +1858,8 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
static void
prio_changed_rt(struct rq *rq, struct task_struct *p, u64 oldprio)
{
+ struct rt_rq *rt_rq = rt_rq_of_se(&p->rt);
+
if (!task_on_rq_queued(p))
return;
@@ -1807,15 +1872,25 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, u64 oldprio)
* may need to pull tasks to this runqueue.
*/
if (oldprio < p->prio)
- rt_queue_pull_task(rt_rq_of_se(&p->rt));
+ if (!IS_ENABLED(CONFIG_RT_GROUP_SCHED) || !is_dl_group(rt_rq))
+ rt_queue_pull_task(rt_rq);
/*
* If there's a higher priority task waiting to run
* then reschedule.
*/
- if (p->prio > rq->rt.highest_prio.curr)
+ if (p->prio > rt_rq->highest_prio.curr)
resched_curr(rq);
} else {
+ /*
+ * This task is not running, thus we check against the currently
+ * running task for preemption. We can preempt only if both tasks are
+ * in the same cgroup or on the global runqueue.
+ */
+ if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) &&
+ rt_rq_of_se(&p->rt)->tg != rt_rq_of_se(&rq->curr->rt)->tg)
+ return;
+
/*
* This task is not running, but if it is
* greater than the current running task
@@ -1908,7 +1983,16 @@ static unsigned int get_rr_interval_rt(struct rq *rq, struct task_struct *task)
#ifdef CONFIG_SCHED_CORE
static int task_is_throttled_rt(struct task_struct *p, int cpu)
{
+#ifdef CONFIG_RT_GROUP_SCHED
+ struct rt_rq *rt_rq;
+
+ rt_rq = task_group(p)->rt_rq[cpu];
+ WARN_ON(!rt_group_sched_enabled() && rt_rq->tg != &root_task_group);
+
+ return dl_group_of(rt_rq)->dl_throttled;
+#else
return 0;
+#endif
}
#endif /* CONFIG_SCHED_CORE */
@@ -2159,16 +2243,16 @@ static int sched_rt_global_constraints(void)
}
#endif /* CONFIG_SYSCTL */
-int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk)
+int sched_rt_can_attach(struct task_group *tg)
{
/* Don't accept real-time tasks when there is no way for them to run */
- if (rt_group_sched_enabled() && rt_task(tsk) && tg->rt_bandwidth.rt_runtime == 0)
+ if (rt_group_sched_enabled() && tg->dl_bandwidth.dl_runtime == 0)
return 0;
return 1;
}
-#else /* !CONFIG_RT_GROUP_SCHED: */
+#else /* !CONFIG_RT_GROUP_SCHED */
#ifdef CONFIG_SYSCTL
static int sched_rt_global_constraints(void)
@@ -2176,7 +2260,7 @@ static int sched_rt_global_constraints(void)
return 0;
}
#endif /* CONFIG_SYSCTL */
-#endif /* !CONFIG_RT_GROUP_SCHED */
+#endif /* CONFIG_RT_GROUP_SCHED */
#ifdef CONFIG_SYSCTL
static int sched_rt_global_validate(void)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d949babfe16a..fceb02a04858 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -609,7 +609,7 @@ extern int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us)
extern int sched_group_set_rt_period(struct task_group *tg, u64 rt_period_us);
extern long sched_group_rt_runtime(struct task_group *tg);
extern long sched_group_rt_period(struct task_group *tg);
-extern int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk);
+extern int sched_rt_can_attach(struct task_group *tg);
extern struct task_group *sched_create_group(struct task_group *parent);
extern void sched_online_group(struct task_group *tg,
diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
index 806bc88d21ee..15653840c812 100644
--- a/kernel/sched/syscalls.c
+++ b/kernel/sched/syscalls.c
@@ -606,6 +606,18 @@ int __sched_setscheduler(struct task_struct *p,
change:
if (user) {
+ /*
+ * Do not allow real-time tasks into groups that have no runtime
+ * assigned.
+ */
+ if (rt_group_sched_enabled() &&
+ dl_bandwidth_enabled() && rt_policy(policy) &&
+ !sched_rt_can_attach(task_group(p)) &&
+ !task_group_is_autogroup(task_group(p))) {
+ retval = -EPERM;
+ goto unlock;
+ }
+
if (dl_bandwidth_enabled() && dl_policy(policy) &&
!(attr->sched_flags & SCHED_FLAG_SUGOV)) {
cpumask_t *span = rq->rd->span;
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 15/29] sched/rt: Update rt-cgroup schedulability checks
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (13 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 14/29] sched/rt: Update task event callbacks for HCBS scheduling Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-05-05 14:36 ` Peter Zijlstra
2026-04-30 21:38 ` [RFC PATCH v5 16/29] sched/rt: Allow zeroing the runtime of the root control group Yuri Andriaccio
` (13 subsequent siblings)
28 siblings, 1 reply; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Update sched_group_rt_runtime/period and sched_group_set_rt_runtime/period
to use the newly defined data structures and perform necessary checks to
update both the runtime and period of a given group.
The 'set' functions call tg_set_rt_bandwidth() which is also updated:
- Use the newly added HCBS dl_bandwidth structure instead of rt_bandwidth.
- Update __rt_schedulable() to check for numerical issues:
- Reuse __checkparam_dl.
- Add an allow_zero_runtime parameter to __checkparam_dl, as cgroups may
  zero their runtime whereas DEADLINE tasks are not allowed to do so.
- Use RCU lock guard instead of rcu_read_lock/unlock.
- Update tg_rt_schedulable(), used when walking the cgroup tree to check
  that all invariants are met:
  - Update most of the code to read data from the newly added data
    structures (dl_bandwidth).
- If the task group is the root group, run a total bandwidth check with
the newly added dl_check_tg() function.
- After all checks are successful, if the changed group is not the root
cgroup, update the assigned runtime and period to all the local
deadline servers.
- Additionally use mutex guards instead of manually locking/unlocking.
Add dl_check_tg(), which performs an admission control test similar to
__dl_overflow(), but here we are updating the cgroups' total bandwidth
rather than admitting a new DEADLINE task or updating a non-cgroup
deadline server. A worked example is given below.
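As a worked example (illustrative numbers, not taken from the patch): on
a root domain whose total capacity is 1024, so that cap_scale() leaves
values unchanged, let dl_b->bw correspond to 95% and dl_b->total_bw to
50% of BW_UNIT. A cgroup configuration requesting total = 50% is then
rejected, since 0.50 + 0.50 > 0.95, while total = 40% is admitted, since
0.50 + 0.40 <= 0.95.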
Add rcu_sched lock guard for rcu_read_lock/unlock_sched.
Finally, prevent the creation of cgroup hierarchies with depth greater
than two; deeper hierarchies are addressed in a later patch. A depth-two
hierarchy is sufficient for now for testing the patchset.
Co-developed-by: Alessio Balsini <a.balsini@sssup.it>
Signed-off-by: Alessio Balsini <a.balsini@sssup.it>
Co-developed-by: Andrea Parri <parri.andrea@gmail.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Co-developed-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
---
include/linux/rcupdate.h | 1 +
kernel/sched/core.c | 6 ++++
kernel/sched/deadline.c | 43 +++++++++++++++++-----
kernel/sched/rt.c | 77 +++++++++++++++++++---------------------
kernel/sched/sched.h | 3 +-
kernel/sched/syscalls.c | 2 +-
6 files changed, 82 insertions(+), 50 deletions(-)
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 04f3f86a4145..032cfa763047 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -1191,6 +1191,7 @@ extern int rcu_expedited;
extern int rcu_normal;
DEFINE_LOCK_GUARD_0(rcu, rcu_read_lock(), rcu_read_unlock())
+DEFINE_LOCK_GUARD_0(rcu_sched, rcu_read_lock_sched(), rcu_read_unlock_sched())
DECLARE_LOCK_GUARD_0_ATTRS(rcu, __acquires_shared(RCU), __releases_shared(RCU))
#endif /* __LINUX_RCUPDATE_H */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 98a53b60e21f..0c7032d254ba 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -9205,6 +9205,12 @@ cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
return &root_task_group.css;
}
+ /* Do not allow cpu_cgroup hierarchies with depth greater than 2. */
+#ifdef CONFIG_RT_GROUP_SCHED
+ if (parent != &root_task_group)
+ return ERR_PTR(-EINVAL);
+#endif
+
tg = sched_create_group(parent);
if (IS_ERR(tg))
return ERR_PTR(-ENOMEM);
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index c82810732106..74bff7fb7b92 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -343,7 +343,39 @@ void cancel_inactive_timer(struct sched_dl_entity *dl_se)
cancel_dl_timer(dl_se, &dl_se->inactive_timer);
}
+/*
+ * Used for dl_bw check and update, used under sched_rt_handler()::mutex and
+ * sched_domains_mutex.
+ */
+u64 dl_cookie;
+
#ifdef CONFIG_RT_GROUP_SCHED
+int dl_check_tg(unsigned long total)
+{
+ int which_cpu;
+ int cap;
+ struct dl_bw *dl_b;
+ u64 gen = ++dl_cookie;
+
+ for_each_possible_cpu(which_cpu) {
+ guard(rcu_sched)();
+
+ if (!dl_bw_visited(which_cpu, gen)) {
+ cap = dl_bw_capacity(which_cpu);
+ dl_b = dl_bw_of(which_cpu);
+
+ guard(raw_spinlock_irqsave)(&dl_b->lock);
+
+ if (dl_b->bw != -1 &&
+ cap_scale(dl_b->bw, cap) < dl_b->total_bw + cap_scale(total, cap))
+ return 0;
+ }
+
+ }
+
+ return 1;
+}
+
void dl_init_tg(struct sched_dl_entity *dl_se, u64 rt_runtime, u64 rt_period)
{
struct rq *rq = container_of_const(dl_se->dl_rq, struct rq, dl);
@@ -3469,12 +3501,6 @@ DEFINE_SCHED_CLASS(dl) = {
#endif
};
-/*
- * Used for dl_bw check and update, used under sched_rt_handler()::mutex and
- * sched_domains_mutex.
- */
-u64 dl_cookie;
-
int sched_dl_global_validate(void)
{
u64 runtime = global_rt_runtime();
@@ -3670,7 +3696,7 @@ void __getparam_dl(struct task_struct *p, struct sched_attr *attr)
* below 2^63 ns (we have to check both sched_deadline and
* sched_period, as the latter can be zero).
*/
-bool __checkparam_dl(const struct sched_attr *attr)
+bool __checkparam_dl(const struct sched_attr *attr, bool allow_zero_runtime)
{
u64 period, max, min;
@@ -3686,7 +3712,8 @@ bool __checkparam_dl(const struct sched_attr *attr)
* Since we truncate DL_SCALE bits, make sure we're at least
* that big.
*/
- if (attr->sched_runtime < (1ULL << DL_SCALE))
+ if ((!allow_zero_runtime || attr->sched_runtime != 0) &&
+ attr->sched_runtime < (1ULL << DL_SCALE))
return false;
/*
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 67fbf4bbe461..c994447f5b1c 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2035,11 +2035,6 @@ DEFINE_SCHED_CLASS(rt) = {
};
#ifdef CONFIG_RT_GROUP_SCHED
-/*
- * Ensure that the real time constraints are schedulable.
- */
-static DEFINE_MUTEX(rt_constraints_mutex);
-
static inline int tg_has_rt_tasks(struct task_group *tg)
{
struct task_struct *task;
@@ -2073,8 +2068,8 @@ static int tg_rt_schedulable(struct task_group *tg, void *data)
unsigned long total, sum = 0;
u64 period, runtime;
- period = ktime_to_ns(tg->rt_bandwidth.rt_period);
- runtime = tg->rt_bandwidth.rt_runtime;
+ period = tg->dl_bandwidth.dl_period;
+ runtime = tg->dl_bandwidth.dl_runtime;
if (tg == d->tg) {
period = d->rt_period;
@@ -2090,8 +2085,7 @@ static int tg_rt_schedulable(struct task_group *tg, void *data)
/*
* Ensure we don't starve existing RT tasks if runtime turns zero.
*/
- if (rt_bandwidth_enabled() && !runtime &&
- tg->rt_bandwidth.rt_runtime && tg_has_rt_tasks(tg))
+ if (dl_bandwidth_enabled() && !runtime && tg_has_rt_tasks(tg))
return -EBUSY;
if (WARN_ON(!rt_group_sched_enabled() && tg != &root_task_group))
@@ -2105,12 +2099,17 @@ static int tg_rt_schedulable(struct task_group *tg, void *data)
if (total > to_ratio(global_rt_period(), global_rt_runtime()))
return -EINVAL;
+ if (tg == &root_task_group) {
+ if (!dl_check_tg(total))
+ return -EBUSY;
+ }
+
/*
* The sum of our children's runtime should not exceed our own.
*/
list_for_each_entry_rcu(child, &tg->children, siblings) {
- period = ktime_to_ns(child->rt_bandwidth.rt_period);
- runtime = child->rt_bandwidth.rt_runtime;
+ period = child->dl_bandwidth.dl_period;
+ runtime = child->dl_bandwidth.dl_runtime;
if (child == d->tg) {
period = d->rt_period;
@@ -2128,24 +2127,30 @@ static int tg_rt_schedulable(struct task_group *tg, void *data)
static int __rt_schedulable(struct task_group *tg, u64 period, u64 runtime)
{
- int ret;
-
struct rt_schedulable_data data = {
.tg = tg,
.rt_period = period,
.rt_runtime = runtime,
};
- rcu_read_lock();
- ret = walk_tg_tree(tg_rt_schedulable, tg_nop, &data);
- rcu_read_unlock();
+ struct sched_attr attr = {
+ .sched_flags = 0,
+ .sched_runtime = runtime,
+ .sched_deadline = period,
+ .sched_period = period,
+ };
- return ret;
+ if (!__checkparam_dl(&attr, true))
+ return -EINVAL;
+
+ guard(rcu)();
+ return walk_tg_tree(tg_rt_schedulable, tg_nop, &data);
}
static int tg_set_rt_bandwidth(struct task_group *tg,
u64 rt_period, u64 rt_runtime)
{
+ static DEFINE_MUTEX(rt_constraints_mutex);
int i, err = 0;
/*
@@ -2155,44 +2160,36 @@ static int tg_set_rt_bandwidth(struct task_group *tg,
if (tg == &root_task_group && rt_runtime == 0)
return -EINVAL;
- /* No period doesn't make any sense. */
- if (rt_period == 0)
- return -EINVAL;
-
/*
* Bound quota to defend quota against overflow during bandwidth shift.
*/
if (rt_runtime != RUNTIME_INF && rt_runtime > max_rt_runtime)
return -EINVAL;
- mutex_lock(&rt_constraints_mutex);
+ guard(mutex)(&rt_constraints_mutex);
err = __rt_schedulable(tg, rt_period, rt_runtime);
if (err)
- goto unlock;
+ return err;
- raw_spin_lock_irq(&tg->rt_bandwidth.rt_runtime_lock);
- tg->rt_bandwidth.rt_period = ns_to_ktime(rt_period);
- tg->rt_bandwidth.rt_runtime = rt_runtime;
+ guard(raw_spinlock_irq)(&tg->dl_bandwidth.dl_runtime_lock);
+ tg->dl_bandwidth.dl_period = rt_period;
+ tg->dl_bandwidth.dl_runtime = rt_runtime;
- for_each_possible_cpu(i) {
- struct rt_rq *rt_rq = tg->rt_rq[i];
+ if (tg == &root_task_group)
+ return 0;
- raw_spin_lock(&rt_rq->rt_runtime_lock);
- rt_rq->rt_runtime = rt_runtime;
- raw_spin_unlock(&rt_rq->rt_runtime_lock);
+ for_each_possible_cpu(i) {
+ dl_init_tg(tg->dl_se[i], rt_runtime, rt_period);
}
- raw_spin_unlock_irq(&tg->rt_bandwidth.rt_runtime_lock);
-unlock:
- mutex_unlock(&rt_constraints_mutex);
- return err;
+ return 0;
}
int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us)
{
u64 rt_runtime, rt_period;
- rt_period = ktime_to_ns(tg->rt_bandwidth.rt_period);
+ rt_period = tg->dl_bandwidth.dl_period;
rt_runtime = (u64)rt_runtime_us * NSEC_PER_USEC;
if (rt_runtime_us < 0)
rt_runtime = RUNTIME_INF;
@@ -2206,10 +2203,10 @@ long sched_group_rt_runtime(struct task_group *tg)
{
u64 rt_runtime_us;
- if (tg->rt_bandwidth.rt_runtime == RUNTIME_INF)
+ if (tg->dl_bandwidth.dl_runtime == RUNTIME_INF)
return -1;
- rt_runtime_us = tg->rt_bandwidth.rt_runtime;
+ rt_runtime_us = tg->dl_bandwidth.dl_runtime;
do_div(rt_runtime_us, NSEC_PER_USEC);
return rt_runtime_us;
}
@@ -2222,7 +2219,7 @@ int sched_group_set_rt_period(struct task_group *tg, u64 rt_period_us)
return -EINVAL;
rt_period = rt_period_us * NSEC_PER_USEC;
- rt_runtime = tg->rt_bandwidth.rt_runtime;
+ rt_runtime = tg->dl_bandwidth.dl_runtime;
return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
}
@@ -2231,7 +2228,7 @@ long sched_group_rt_period(struct task_group *tg)
{
u64 rt_period_us;
- rt_period_us = ktime_to_ns(tg->rt_bandwidth.rt_period);
+ rt_period_us = tg->dl_bandwidth.dl_period;
do_div(rt_period_us, NSEC_PER_USEC);
return rt_period_us;
}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index fceb02a04858..78f080275bf0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -364,7 +364,7 @@ extern void sched_dl_do_global(void);
extern int sched_dl_overflow(struct task_struct *p, int policy, const struct sched_attr *attr);
extern void __setparam_dl(struct task_struct *p, const struct sched_attr *attr);
extern void __getparam_dl(struct task_struct *p, struct sched_attr *attr);
-extern bool __checkparam_dl(const struct sched_attr *attr);
+extern bool __checkparam_dl(const struct sched_attr *attr, bool allow_zero_runtime);
extern bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr);
extern int dl_cpuset_cpumask_can_shrink(const struct cpumask *cur, const struct cpumask *trial);
extern int dl_bw_deactivate(int cpu);
@@ -423,6 +423,7 @@ extern void dl_server_init(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq,
struct rq *served_rq,
dl_server_pick_f pick_task);
extern void sched_init_dl_servers(void);
+extern int dl_check_tg(unsigned long total);
extern void dl_init_tg(struct sched_dl_entity *dl_se, u64 rt_runtime, u64 rt_period);
extern void fair_server_init(struct rq *rq);
diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
index 15653840c812..d30aee2e90c4 100644
--- a/kernel/sched/syscalls.c
+++ b/kernel/sched/syscalls.c
@@ -528,7 +528,7 @@ int __sched_setscheduler(struct task_struct *p,
*/
if (attr->sched_priority > MAX_RT_PRIO-1)
return -EINVAL;
- if ((dl_policy(policy) && !__checkparam_dl(attr)) ||
+ if ((dl_policy(policy) && !__checkparam_dl(attr, false)) ||
(rt_policy(policy) != (attr->sched_priority != 0)))
return -EINVAL;
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 16/29] sched/rt: Allow zeroing the runtime of the root control group
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (14 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 15/29] sched/rt: Update rt-cgroup schedulability checks Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 17/29] sched/rt: Remove old RT_GROUP_SCHED data structures Yuri Andriaccio
` (12 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
Allow execution of FIFO/RR tasks in the root cgroup regardless of reserved
bandwidth.
Tasks in the root cgroup use the standard FIFO/RR scheduler.
Allow creation of cgroups with runtime or period zero.
Disallow execution of tasks in zero bandwidth cgroups.
---
In HCBS, the root control group follows the already existing rules for
rt-task scheduling. As such, it does not make use of the deadline servers
to account for runtime, nor of any other HCBS-specific code and features.
While the runtime of SCHED_DEADLINE tasks depends on the global bandwidth
reserved for rt-tasks, the runtime of SCHED_FIFO/SCHED_RR tasks is limited
by the activation of fair-servers (the RT_THROTTLING mechanism has been
removed in their favour), thus their maximum bandwidth depends solely on
the fair-server settings (which are tightly related to the global
bandwidth reserved for rt-tasks) and on the amount of SCHED_OTHER workload
to run (recall that if no SCHED_OTHER tasks are running, FIFO/RR tasks may
fully utilize the CPU).
By design of HCBS, the values of runtime and period in the root cgroup's
cpu controller do not affect the fair-server settings and the like
(consequently they do not affect the scheduling of FIFO/RR tasks in the
root cgroup); they are only used to reserve a portion of the
SCHED_DEADLINE bandwidth for the scheduling of rt-cgroups. These values
only affect child cgroups, their deadline servers and their assigned
FIFO/RR tasks.
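For example (illustrative numbers only): writing period = 1s and
runtime = 400ms into the root group's rt control files reserves 40% of
the SCHED_DEADLINE bandwidth for child rt-cgroups and their deadline
servers; FIFO/RR tasks running directly in the root group are not
affected by these two values and remain bounded only by the fair-server
configuration and the competing SCHED_OTHER load.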
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/rt.c | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index c994447f5b1c..5caddc5c2876 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2085,7 +2085,8 @@ static int tg_rt_schedulable(struct task_group *tg, void *data)
/*
* Ensure we don't starve existing RT tasks if runtime turns zero.
*/
- if (dl_bandwidth_enabled() && !runtime && tg_has_rt_tasks(tg))
+ if (dl_bandwidth_enabled() && tg != &root_task_group &&
+ !runtime && tg_has_rt_tasks(tg))
return -EBUSY;
if (WARN_ON(!rt_group_sched_enabled() && tg != &root_task_group))
@@ -2153,13 +2154,6 @@ static int tg_set_rt_bandwidth(struct task_group *tg,
static DEFINE_MUTEX(rt_constraints_mutex);
int i, err = 0;
- /*
- * Disallowing the root group RT runtime is BAD, it would disallow the
- * kernel creating (and or operating) RT threads.
- */
- if (tg == &root_task_group && rt_runtime == 0)
- return -EINVAL;
-
/*
* Bound quota to defend quota against overflow during bandwidth shift.
*/
@@ -2242,6 +2236,10 @@ static int sched_rt_global_constraints(void)
int sched_rt_can_attach(struct task_group *tg)
{
+ /* Allow executing in the root cgroup regardless of allowed bandwidth */
+ if (tg == &root_task_group)
+ return 1;
+
/* Don't accept real-time tasks when there is no way for them to run */
if (rt_group_sched_enabled() && tg->dl_bandwidth.dl_runtime == 0)
return 0;
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 17/29] sched/rt: Remove old RT_GROUP_SCHED data structures
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (15 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 16/29] sched/rt: Allow zeroing the runtime of the root control group Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 18/29] sched/core: Cgroup v2 support Yuri Andriaccio
` (11 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Completely remove the old RT_GROUP_SCHED's functions and data structures:
- Remove the fields back and my_q from sched_rt_entity.
- Remove the rt_bandwidth data structure.
- Remove the field rt_bandwidth from task_group.
- Remove the rt_bandwidth_enabled function.
- Remove the fields rt_queued, rt_throttled, rt_time, rt_runtime,
rt_runtime_lock and rt_nr_boosted from rt_rq.
All of the removed fields and data are similarly represented by previously
added fields in rq, rt_rq, dl_bandwidth and in the dl servers themselves.
Co-developed-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
---
include/linux/sched.h | 3 ---
kernel/sched/sched.h | 32 --------------------------------
2 files changed, 35 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index ea2e74598b93..2740f043e534 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -628,12 +628,9 @@ struct sched_rt_entity {
unsigned short on_rq;
unsigned short on_list;
- struct sched_rt_entity *back;
#ifdef CONFIG_RT_GROUP_SCHED
/* rq on which this entity is (to be) queued: */
struct rt_rq *rt_rq;
- /* rq "owned" by this entity/group: */
- struct rt_rq *my_q;
#endif
} __randomize_layout;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 78f080275bf0..a4435f107cfe 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -313,15 +313,6 @@ struct rt_prio_array {
struct list_head queue[MAX_RT_PRIO];
};
-struct rt_bandwidth {
- /* nests inside the rq lock: */
- raw_spinlock_t rt_runtime_lock;
- ktime_t rt_period;
- u64 rt_runtime;
- struct hrtimer rt_period_timer;
- unsigned int rt_period_active;
-};
-
struct dl_bandwidth {
raw_spinlock_t dl_runtime_lock;
u64 dl_runtime;
@@ -341,12 +332,6 @@ static inline int dl_bandwidth_enabled(void)
* - cache the fraction of bandwidth that is currently allocated in
* each root domain;
*
- * This is all done in the data structure below. It is similar to the
- * one used for RT-throttling (rt_bandwidth), with the main difference
- * that, since here we are only interested in admission control, we
- * do not decrease any runtime while the group "executes", neither we
- * need a timer to replenish it.
- *
* With respect to SMP, bandwidth is given on a per root domain basis,
* meaning that:
* - bw (< 100%) is the deadline bandwidth of each CPU;
@@ -513,7 +498,6 @@ struct task_group {
struct sched_dl_entity **dl_se;
struct rt_rq **rt_rq;
- struct rt_bandwidth rt_bandwidth;
struct dl_bandwidth dl_bandwidth;
#endif
@@ -831,11 +815,6 @@ struct scx_rq {
};
#endif /* CONFIG_SCHED_CLASS_EXT */
-static inline int rt_bandwidth_enabled(void)
-{
- return 0;
-}
-
/* RT IPI pull logic requires IRQ_WORK */
#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_SMP)
# define HAVE_RT_PUSH_IPI
@@ -853,17 +832,6 @@ struct rt_rq {
bool overloaded;
struct plist_head pushable_tasks;
- int rt_queued;
-
-#ifdef CONFIG_RT_GROUP_SCHED
- int rt_throttled;
- u64 rt_time; /* consumed RT time, goes up in update_curr_rt */
- u64 rt_runtime; /* allotted RT time, "slice" from rt_bandwidth, RT sharing/balancing */
- /* Nests inside the rq lock: */
- raw_spinlock_t rt_runtime_lock;
-
- unsigned int rt_nr_boosted;
-#endif
#ifdef CONFIG_CGROUP_SCHED
struct task_group *tg; /* this tg has "this" rt_rq on given CPU for runnable entities */
#endif
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 18/29] sched/core: Cgroup v2 support
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (16 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 17/29] sched/rt: Remove old RT_GROUP_SCHED data structures Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-05-05 14:59 ` Peter Zijlstra
2026-04-30 21:38 ` [RFC PATCH v5 19/29] sched/rt: Remove support for cgroups-v1 Yuri Andriaccio
` (10 subsequent siblings)
28 siblings, 1 reply; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Make rt_runtime_us and rt_period_us virtual files accessible also to the
cgroup v2 controller, effectively enabling the RT_GROUP_SCHED mechanism to
cgroups v2.
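As an illustration of the resulting interface, a minimal userspace sketch
(the /sys/fs/cgroup/rtgroup path and the chosen values are only examples,
not part of the patch, and the cgroup is assumed to already exist):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a string to a cgroup v2 control file, returning 0 on success. */
static int cg_write(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);
	ssize_t ret;

	if (fd < 0)
		return -1;
	ret = write(fd, val, strlen(val));
	close(fd);
	return ret < 0 ? -1 : 0;
}

int main(void)
{
	/* Reserve 10ms every 100ms for the "rtgroup" rt-cgroup. */
	if (cg_write("/sys/fs/cgroup/rtgroup/cpu.rt_period_us", "100000") ||
	    cg_write("/sys/fs/cgroup/rtgroup/cpu.rt_runtime_us", "10000")) {
		perror("cpu.rt_{runtime,period}_us");
		return 1;
	}
	return 0;
}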
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/core.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0c7032d254ba..3ffe3ac5071d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10245,6 +10245,18 @@ static struct cftype cpu_files[] = {
.write = cpu_uclamp_max_write,
},
#endif /* CONFIG_UCLAMP_TASK_GROUP */
+#ifdef CONFIG_RT_GROUP_SCHED
+ {
+ .name = "rt_runtime_us",
+ .read_s64 = cpu_rt_runtime_read,
+ .write_s64 = cpu_rt_runtime_write,
+ },
+ {
+ .name = "rt_period_us",
+ .read_u64 = cpu_rt_period_read_uint,
+ .write_u64 = cpu_rt_period_write_uint,
+ },
+#endif /* CONFIG_RT_GROUP_SCHED */
{ } /* terminate */
};
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 19/29] sched/rt: Remove support for cgroups-v1
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (17 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 18/29] sched/core: Cgroup v2 support Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-05-05 15:01 ` Peter Zijlstra
2026-04-30 21:38 ` [RFC PATCH v5 20/29] sched/deadline: Allow deeper hierarchies of RT cgroups Yuri Andriaccio
` (9 subsequent siblings)
28 siblings, 1 reply; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
Disable the control files for cgroups-v1, and allow only cgroups-v2.
This should simplify maintaining the code, since cgroups-v1 are deprecated.
Set the default rt-cgroups runtime to zero. This is needed for cgroup-v1
kernels, which would otherwise not be able to start SCHED_DEADLINE tasks.
The bandwidth for rt-cgroups must then be manually assigned after the
kernel boots.
Remove the cpu_rt_group_init function.
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/core.c | 26 +-------------------------
1 file changed, 1 insertion(+), 25 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3ffe3ac5071d..41758824b460 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8649,7 +8649,7 @@ void __init sched_init(void)
#ifdef CONFIG_RT_GROUP_SCHED
init_dl_bandwidth(&root_task_group.dl_bandwidth,
- global_rt_period(), global_rt_runtime());
+ global_rt_period(), 0);
#endif /* CONFIG_RT_GROUP_SCHED */
#ifdef CONFIG_CGROUP_SCHED
@@ -9984,20 +9984,6 @@ static struct cftype cpu_legacy_files[] = {
};
#ifdef CONFIG_RT_GROUP_SCHED
-static struct cftype rt_group_files[] = {
- {
- .name = "rt_runtime_us",
- .read_s64 = cpu_rt_runtime_read,
- .write_s64 = cpu_rt_runtime_write,
- },
- {
- .name = "rt_period_us",
- .read_u64 = cpu_rt_period_read_uint,
- .write_u64 = cpu_rt_period_write_uint,
- },
- { } /* Terminate */
-};
-
# ifdef CONFIG_RT_GROUP_SCHED_DEFAULT_DISABLED
DEFINE_STATIC_KEY_FALSE(rt_group_sched);
# else
@@ -10020,16 +10006,6 @@ static int __init setup_rt_group_sched(char *str)
return 1;
}
__setup("rt_group_sched=", setup_rt_group_sched);
-
-static int __init cpu_rt_group_init(void)
-{
- if (!rt_group_sched_enabled())
- return 0;
-
- WARN_ON(cgroup_add_legacy_cftypes(&cpu_cgrp_subsys, rt_group_files));
- return 0;
-}
-subsys_initcall(cpu_rt_group_init);
#endif /* CONFIG_RT_GROUP_SCHED */
static int cpu_extra_stat_show(struct seq_file *sf,
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 20/29] sched/deadline: Allow deeper hierarchies of RT cgroups
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (18 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 19/29] sched/rt: Remove support for cgroups-v1 Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-05-05 15:15 ` Peter Zijlstra
2026-04-30 21:38 ` [RFC PATCH v5 21/29] sched/rt: Update default bandwidth for real-time tasks to ONE Yuri Andriaccio
` (8 subsequent siblings)
28 siblings, 1 reply; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Allow for cgroup hierarchies with more than two levels.
Introduce the concept of live and active groups:
- A group is live if it is a leaf group or if all its children have zero
runtime.
- A live group with non-zero runtime can be used to schedule tasks.
- An active cgroup is a live group with running tasks.
- A non-live group cannot be used to run tasks; it is only used for
  bandwidth accounting, i.e. the sum of its children's bandwidth must be
  less than or equal to the parent's bandwidth. This change allows cgroups
  to be used for bandwidth management across different users.
- While the root cgroup specifies the total allocatable bandwidth of rt
  cgroups, a further accounting is performed to keep track of the live
  bandwidth, i.e. the sum of the bandwidth of live groups. The hierarchy
  invariant states that the live bandwidth must always be less than or
  equal to the total allocatable bandwidth (see the worked example after
  this list).
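As a worked example (numbers chosen only for this description): consider
root -> A -> {A1, A2}, with A assigned 40% of the rt bandwidth. While A1
and A2 both have zero runtime, A is live and its 40% counts toward the
live bandwidth. As soon as A1 is given, say, 10%, A becomes non-live: its
40% is dropped from the live accounting and only acts as an upper bound
on its children, so A1 + A2 may be assigned at most 40% in total and only
their bandwidth (10% so far) is counted as live.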
Add is_live_sched_group() and sched_group_has_live_siblings() in
deadline.c. These utility functions are used by dl_init_tg to perform
updates only when necessary:
- Only live groups may update the active dl bandwidth of dl entities
(call to dl_rq_change_utilization), while non-live groups must not use
servers, and thus must not change the active dl bandwidth.
- The total bandwidth accounting must be changed to follow the
live/non-live rules:
- When disabling (runtime zero) the last child of a group, the parent
becomes a live group, and so the parent's bw must be accounted back.
- When enabling (runtime non-zero) the first child, the parent becomes a
non-live group, and so the parent's bandwidth must be removed.
Update tg_set_rt_bandwidth() to allow changing the runtime of a group to
a non-zero value only if its parent is inactive, thus forcing the parent
to become non-live if it was previously live (it would have already been
non-live if a sibling cgroup was live). An exception is made for groups
whose parent is the root cgroup.
Update sched_rt_can_attach() to allow attaching tasks only to live groups.
Update dl_init_tg() to take a task_group pointer and a CPU id rather than
a pointer to that CPU's deadline server directly. The task_group pointer
is necessary to check and update the live bandwidth accounting.
Co-developed-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
---
kernel/sched/core.c | 6 ----
kernel/sched/deadline.c | 63 ++++++++++++++++++++++++++++++++++++++---
kernel/sched/rt.c | 17 ++++++++---
kernel/sched/sched.h | 3 +-
4 files changed, 74 insertions(+), 15 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 41758824b460..fd532bb46995 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -9205,12 +9205,6 @@ cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
return &root_task_group.css;
}
- /* Do not allow cpu_cgroup hierachies with depth greater than 2. */
-#ifdef CONFIG_RT_GROUP_SCHED
- if (parent != &root_task_group)
- return ERR_PTR(-EINVAL);
-#endif
-
tg = sched_create_group(parent);
if (IS_ERR(tg))
return ERR_PTR(-ENOMEM);
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 74bff7fb7b92..5967b5350166 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -376,11 +376,46 @@ int dl_check_tg(unsigned long total)
return 1;
}
-void dl_init_tg(struct sched_dl_entity *dl_se, u64 rt_runtime, u64 rt_period)
+/*
+ * A cgroup is deemed live if:
+ * - It is a leaf cgroup.
+ * - All its children have zero runtime.
+ */
+bool is_live_sched_group(struct task_group *tg)
+{
+ struct task_group *child;
+ bool is_active = 1;
+
+ /* if there are no children, this is a leaf group, thus it is live */
+ guard(rcu)();
+ list_for_each_entry_rcu(child, &tg->children, siblings) {
+ if (child->dl_bandwidth.dl_runtime > 0)
+ is_active = 0;
+ }
+ return is_active;
+}
+
+static inline bool sched_group_has_live_siblings(struct task_group *tg)
+{
+ struct task_group *child;
+ bool has_active_siblings = 0;
+
+ guard(rcu)();
+ list_for_each_entry_rcu(child, &tg->parent->children, siblings) {
+ if (child != tg && child->dl_bandwidth.dl_runtime > 0)
+ has_active_siblings = 1;
+ }
+ return has_active_siblings;
+}
+
+void dl_init_tg(struct task_group *tg, int cpu, u64 rt_runtime, u64 rt_period)
{
+ struct sched_dl_entity *dl_se = tg->dl_se[cpu];
struct rq *rq = container_of_const(dl_se->dl_rq, struct rq, dl);
- int is_active;
- u64 new_bw;
+ int is_active, is_live_group;
+ u64 old_runtime, new_bw;
+
+ is_live_group = (int)is_live_sched_group(tg);
guard(raw_spin_rq_lock_irq)(rq);
is_active = dl_se->my_q->rt.rt_nr_running > 0;
@@ -388,8 +423,10 @@ void dl_init_tg(struct sched_dl_entity *dl_se, u64 rt_runtime, u64 rt_period)
update_rq_clock(rq);
dl_server_stop(dl_se);
+ old_runtime = dl_se->dl_runtime;
new_bw = to_ratio(rt_period, rt_runtime);
- dl_rq_change_utilization(rq, dl_se, new_bw);
+ if (is_live_group)
+ dl_rq_change_utilization(rq, dl_se, new_bw);
dl_se->dl_runtime = rt_runtime;
dl_se->dl_deadline = rt_period;
@@ -401,6 +438,24 @@ void dl_init_tg(struct sched_dl_entity *dl_se, u64 rt_runtime, u64 rt_period)
dl_se->dl_bw = new_bw;
dl_se->dl_density = new_bw;
+ /*
+ * Handle parent bandwidth accounting when child runtime changes:
+ * - When disabling the last child, the parent becomes a leaf group,
+ * and so the parent's bandwidth must be accounted back.
+ * - When enabling the first child, the parent becomes a non-leaf group,
+ * and so the parent's bandwidth must be removed.
+ * Only leaf groups (those without active children) have non-zero bandwidth.
+ */
+ if (tg->parent && tg->parent != &root_task_group) {
+ if (rt_runtime == 0 && old_runtime != 0 &&
+ !sched_group_has_live_siblings(tg)) {
+ __add_rq_bw(tg->parent->dl_se[cpu]->dl_bw, dl_se->dl_rq);
+ } else if (rt_runtime != 0 && old_runtime == 0 &&
+ !sched_group_has_live_siblings(tg)) {
+ __sub_rq_bw(tg->parent->dl_se[cpu]->dl_bw, dl_se->dl_rq);
+ }
+ }
+
if (is_active)
dl_server_start(dl_se);
}
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 5caddc5c2876..2be22024e66d 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -101,7 +101,7 @@ void unregister_rt_sched_group(struct task_group *tg)
continue;
if (tg->dl_se[i]->dl_runtime)
- dl_init_tg(tg->dl_se[i], 0, tg->dl_se[i]->dl_period);
+ dl_init_tg(tg, i, 0, tg->dl_se[i]->dl_period);
}
}
@@ -129,7 +129,7 @@ void free_rt_sched_group(struct task_group *tg)
* to 0 immediately before freeing it.
*/
if (tg->dl_se[i]->dl_runtime)
- dl_init_tg(tg->dl_se[i], 0, tg->dl_se[i]->dl_period);
+ dl_init_tg(tg, i, 0, tg->dl_se[i]->dl_period);
raw_spin_rq_lock_irqsave(cpu_rq(i), flags);
hrtimer_cancel(&tg->dl_se[i]->dl_timer);
@@ -2154,6 +2154,14 @@ static int tg_set_rt_bandwidth(struct task_group *tg,
static DEFINE_MUTEX(rt_constraints_mutex);
int i, err = 0;
+ /*
+ * Do not allow to set a RT runtime > 0 if the parent has RT tasks
+ * (and is not the root group)
+ */
+ if (rt_runtime && tg != &root_task_group &&
+ tg->parent != &root_task_group && tg_has_rt_tasks(tg->parent))
+ return -EINVAL;
+
/*
* Bound quota to defend quota against overflow during bandwidth shift.
*/
@@ -2173,7 +2181,7 @@ static int tg_set_rt_bandwidth(struct task_group *tg,
return 0;
for_each_possible_cpu(i) {
- dl_init_tg(tg->dl_se[i], rt_runtime, rt_period);
+ dl_init_tg(tg, i, rt_runtime, rt_period);
}
return 0;
@@ -2244,7 +2252,8 @@ int sched_rt_can_attach(struct task_group *tg)
if (rt_group_sched_enabled() && tg->dl_bandwidth.dl_runtime == 0)
return 0;
- return 1;
+ /* tasks can be attached only if the taskgroup has no live children. */
+ return (int)is_live_sched_group(tg);
}
#else /* !CONFIG_RT_GROUP_SCHED */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index a4435f107cfe..9814be8348cd 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -409,7 +409,8 @@ extern void dl_server_init(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq,
dl_server_pick_f pick_task);
extern void sched_init_dl_servers(void);
extern int dl_check_tg(unsigned long total);
-extern void dl_init_tg(struct sched_dl_entity *dl_se, u64 rt_runtime, u64 rt_period);
+extern void dl_init_tg(struct task_group *tg, int cpu, u64 rt_runtime, u64 rt_period);
+extern bool is_live_sched_group(struct task_group *tg);
extern void fair_server_init(struct rq *rq);
extern void ext_server_init(struct rq *rq);
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 21/29] sched/rt: Update default bandwidth for real-time tasks to ONE
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (19 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 20/29] sched/deadline: Allow deeper hierarchies of RT cgroups Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 22/29] sched/rt: Add rt-cgroup migration functions Yuri Andriaccio
` (7 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
Set the default total bandwidth for SCHED_DEADLINE tasks and servers to ONE.
FIFO/RR tasks are already throttled by fair-servers and ext-servers, and
the sysctl_sched_rt_runtime parameter now only defines the total bandwidth
allowed to deadline entities.
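With this default, global_rt_runtime() == global_rt_period() == 1s, so
the admission ratio to_ratio(period, runtime) evaluates to 1 << BW_SHIFT,
i.e. a bandwidth of one: in principle the whole CPU may be reserved by
deadline entities, while the actual throttling of FIFO/RR tasks is left
to the fair and ext servers.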
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/rt.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 2be22024e66d..db88792787a8 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -19,9 +19,9 @@ int sysctl_sched_rt_period = 1000000;
/*
* part of the period that we allow rt tasks to run in us.
- * default: 0.95s
+ * default: 1s
*/
-int sysctl_sched_rt_runtime = 950000;
+int sysctl_sched_rt_runtime = 1000000;
#ifdef CONFIG_SYSCTL
static int sysctl_sched_rr_timeslice = (MSEC_PER_SEC * RR_TIMESLICE) / HZ;
--
2.53.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [RFC PATCH v5 22/29] sched/rt: Add rt-cgroup migration functions
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (20 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 21/29] sched/rt: Update default bandwidth for real-time tasks to ONE Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-05-05 15:20 ` Peter Zijlstra
2026-05-05 15:24 ` Peter Zijlstra
2026-04-30 21:38 ` [RFC PATCH v5 23/29] sched/rt: Hook HCBS " Yuri Andriaccio
` (6 subsequent siblings)
28 siblings, 2 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Add migration-related functions:
- group_find_lowest_rt_rq
- group_find_lock_lowest_rt_rq
  Find (and lock) the lowest-priority non-root runqueue to which a given
  task can be migrated.
- group_pull_rt_task
  Try to pull a task onto the given non-root runqueue.
- group_push_rt_task
- group_push_rt_tasks
  Try to push tasks away from the given non-root runqueue.
- group_pull_rt_task_callback
- group_push_rt_tasks_callback
- rt_queue_push_from_group
- rt_queue_pull_to_group
  Defer execution of the push and pull functions to the balancing points.
Update struct rq to include fields for deferred balancing of cgroup
runqueues.
---
These functions are only implemented here; they are hooked up later in the patchset.
Co-developed-by: Alessio Balsini <a.balsini@sssup.it>
Signed-off-by: Alessio Balsini <a.balsini@sssup.it>
Co-developed-by: Andrea Parri <parri.andrea@gmail.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Co-developed-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
---
kernel/sched/rt.c | 461 +++++++++++++++++++++++++++++++++++++++++++
kernel/sched/sched.h | 10 +
2 files changed, 471 insertions(+)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index db88792787a8..e1731e01757b 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1,3 +1,4 @@
+#pragma GCC diagnostic ignored "-Wunused-function"
// SPDX-License-Identifier: GPL-2.0
/*
* Real-Time Scheduling Class (mapped to the SCHED_FIFO and SCHED_RR
@@ -84,6 +85,8 @@ void init_rt_rq(struct rt_rq *rt_rq)
plist_head_init(&rt_rq->pushable_tasks);
}
+static void group_pull_rt_task(struct rt_rq *this_rt_rq);
+
#ifdef CONFIG_RT_GROUP_SCHED
void unregister_rt_sched_group(struct task_group *tg)
@@ -345,6 +348,46 @@ static inline void rt_queue_pull_task(struct rt_rq *rt_rq)
queue_balance_callback(rq, &per_cpu(rt_pull_head, rq->cpu), pull_rt_task);
}
+#ifdef CONFIG_RT_GROUP_SCHED
+static DEFINE_PER_CPU(struct balance_callback, rt_group_push_head);
+static DEFINE_PER_CPU(struct balance_callback, rt_group_pull_head);
+static void group_push_rt_tasks_callback(struct rq *);
+static void group_pull_rt_task_callback(struct rq *);
+
+static void rt_queue_push_from_group(struct rt_rq *rt_rq)
+{
+ struct rq *rq = served_rq_of_rt_rq(rt_rq);
+ struct rq *global_rq = cpu_rq(rq->cpu);
+
+ if (global_rq->rq_to_push_from)
+ return;
+
+ if (!has_pushable_tasks(rt_rq))
+ return;
+
+ global_rq->rq_to_push_from = rq;
+ queue_balance_callback(global_rq, &per_cpu(rt_group_push_head, global_rq->cpu),
+ group_push_rt_tasks_callback);
+}
+
+static void rt_queue_pull_to_group(struct rt_rq *rt_rq)
+{
+ struct rq *rq = served_rq_of_rt_rq(rt_rq);
+ struct rq *global_rq = cpu_rq(rq->cpu);
+ struct sched_dl_entity *dl_se = dl_group_of(rt_rq);
+
+ if (dl_se->dl_throttled || global_rq->rq_to_pull_to)
+ return;
+
+ global_rq->rq_to_pull_to = rq;
+ queue_balance_callback(global_rq, &per_cpu(rt_group_pull_head, global_rq->cpu),
+ group_pull_rt_task_callback);
+}
+#else /* !CONFIG_RT_GROUP_SCHED */
+static inline void rt_queue_push_from_group(struct rt_rq *rt_rq) {};
+static inline void rt_queue_pull_to_group(struct rt_rq *rt_rq) {};
+#endif /* CONFIG_RT_GROUP_SCHED */
+
static void enqueue_pushable_task(struct rt_rq *rt_rq, struct task_struct *p)
{
plist_del(&p->pushable_tasks, &rt_rq->pushable_tasks);
@@ -1747,6 +1790,424 @@ static void pull_rt_task(struct rq *this_rq)
resched_curr(this_rq);
}
+#ifdef CONFIG_RT_GROUP_SCHED
+/*
+ * Find the lowest priority runqueue among the runqueues of the same
+ * task group. Unlike find_lowest_rt(), this does not mean that the
+ * lowest priority cpu is running tasks from this runqueue.
+ */
+static int group_find_lowest_rt_rq(struct task_struct *task, struct rt_rq *task_rt_rq)
+{
+ struct sched_domain *sd;
+ struct cpumask lowest_mask;
+ struct sched_dl_entity *dl_se;
+ struct rt_rq *rt_rq;
+ int prio, lowest_prio;
+ int cpu, this_cpu = smp_processor_id();
+
+ if (task->nr_cpus_allowed == 1)
+ return -1; /* No other targets possible */
+
+ lowest_prio = task->prio - 1;
+ cpumask_clear(&lowest_mask);
+ for_each_cpu_and(cpu, cpu_online_mask, task->cpus_ptr) {
+ dl_se = task_rt_rq->tg->dl_se[cpu];
+ rt_rq = &dl_se->my_q->rt;
+ prio = rt_rq->highest_prio.curr;
+
+ /*
+ * If we're on asym system ensure we consider the different capacities
+ * of the CPUs when searching for the lowest_mask.
+ */
+ if (dl_se->dl_throttled || !rt_task_fits_capacity(task, cpu))
+ continue;
+
+ if (prio >= lowest_prio) {
+ if (prio > lowest_prio) {
+ cpumask_clear(&lowest_mask);
+ lowest_prio = prio;
+ }
+
+ cpumask_set_cpu(cpu, &lowest_mask);
+ }
+ }
+
+ if (cpumask_empty(&lowest_mask))
+ return -1;
+
+ /*
+ * At this point we have built a mask of CPUs representing the
+ * lowest priority tasks in the system. Now we want to elect
+ * the best one based on our affinity and topology.
+ *
+ * We prioritize the last CPU that the task executed on since
+ * it is most likely cache-hot in that location.
+ */
+ cpu = task_cpu(task);
+ if (cpumask_test_cpu(cpu, &lowest_mask))
+ return cpu;
+
+ /*
+ * Otherwise, we consult the sched_domains span maps to figure
+ * out which CPU is logically closest to our hot cache data.
+ */
+ if (!cpumask_test_cpu(this_cpu, &lowest_mask))
+ this_cpu = -1; /* Skip this_cpu opt if not among lowest */
+
+ scoped_guard(rcu) {
+ for_each_domain(cpu, sd) {
+ if (sd->flags & SD_WAKE_AFFINE) {
+ int best_cpu;
+
+ /*
+ * "this_cpu" is cheaper to preempt than a
+ * remote processor.
+ */
+ if (this_cpu != -1 &&
+ cpumask_test_cpu(this_cpu, sched_domain_span(sd)))
+ return this_cpu;
+
+ best_cpu = cpumask_any_and_distribute(&lowest_mask,
+ sched_domain_span(sd));
+ if (best_cpu < nr_cpu_ids)
+ return best_cpu;
+ }
+ }
+ }
+
+ /*
+ * And finally, if there were no matches within the domains
+ * just give the caller *something* to work with from the compatible
+ * locations.
+ */
+ if (this_cpu != -1)
+ return this_cpu;
+
+ cpu = cpumask_any_distribute(&lowest_mask);
+ if (cpu < nr_cpu_ids)
+ return cpu;
+
+ return -1;
+}
+
+/*
+ * Find and lock the lowest priority runqueue among the runqueues
+ * of the same task group. Unlike find_lock_lowest_rt(), this does not
+ * mean that the lowest priority cpu is running tasks from this runqueue.
+ */
+static struct rt_rq *group_find_lock_lowest_rt_rq(struct task_struct *task, struct rt_rq *rt_rq)
+{
+ struct rq *rq = rq_of_rt_rq(rt_rq);
+ struct rq *lowest_rq;
+ struct rt_rq *lowest_rt_rq;
+ struct sched_dl_entity *lowest_dl_se;
+ int tries, cpu;
+
+ for (tries = 0; tries < RT_MAX_TRIES; tries++) {
+ cpu = group_find_lowest_rt_rq(task, rt_rq);
+
+ if ((cpu == -1) || (cpu == rq->cpu))
+ return NULL;
+
+ lowest_dl_se = rt_rq->tg->dl_se[cpu];
+ lowest_rt_rq = &lowest_dl_se->my_q->rt;
+ lowest_rq = cpu_rq(cpu);
+
+ if (lowest_rt_rq->highest_prio.curr <= task->prio) {
+ /*
+ * Target rq has tasks of equal or higher priority,
+ * retrying does not release any lock and is unlikely
+ * to yield a different result.
+ */
+ return NULL;
+ }
+
+ /* if the prio of this runqueue changed, try again */
+ if (double_lock_balance(rq, lowest_rq)) {
+ /*
+ * We had to unlock the run queue. In
+ * the mean time, task could have
+ * migrated already or had its affinity changed.
+ * Also make sure that it wasn't scheduled on its rq.
+ * It is possible the task was scheduled, set
+ * "migrate_disabled" and then got preempted, so we must
+ * check the task migration disable flag here too.
+ */
+ if (unlikely(is_migration_disabled(task) ||
+ lowest_dl_se->dl_throttled ||
+ !cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) ||
+ task != pick_next_pushable_task(rt_rq))) {
+
+ double_unlock_balance(rq, lowest_rq);
+ return NULL;
+ }
+ }
+
+ /* If this rq is still suitable use it. */
+ if (lowest_rt_rq->highest_prio.curr > task->prio)
+ return lowest_rt_rq;
+
+ /* try again */
+ double_unlock_balance(rq, lowest_rq);
+ }
+
+ return NULL;
+}
+
+static int group_push_rt_task(struct rt_rq *rt_rq, bool pull)
+{
+ struct rq *rq = rq_of_rt_rq(rt_rq);
+ struct task_struct *next_task;
+ struct rq *lowest_rq;
+ struct rt_rq *lowest_rt_rq;
+ int ret = 0;
+
+ if (!rt_rq->overloaded)
+ return 0;
+
+ next_task = pick_next_pushable_task(rt_rq);
+ if (!next_task)
+ return 0;
+
+retry:
+ if (is_migration_disabled(next_task)) {
+ struct task_struct *push_task = NULL;
+ int cpu;
+
+ if (!pull || rq->push_busy)
+ return 0;
+
+ /*
+ * If the current task does not belong to the same task group
+ * we cannot push it away.
+ */
+ if (rq->donor->sched_task_group != rt_rq->tg)
+ return 0;
+
+ /*
+ * Invoking group_find_lowest_rt_rq() on anything but an RT task doesn't
+ * make sense. Per the above priority check, curr has to
+ * be of higher priority than next_task, so no need to
+ * reschedule when bailing out.
+ *
+ * Note that the stoppers are masqueraded as SCHED_FIFO
+ * (cf. sched_set_stop_task()), so we can't rely on rt_task().
+ */
+ if (rq->donor->sched_class != &rt_sched_class)
+ return 0;
+
+ cpu = group_find_lowest_rt_rq(rq->curr, rt_rq);
+ if (cpu == -1 || cpu == rq->cpu)
+ return 0;
+
+ /*
+ * Given we found a CPU with lower priority than @next_task,
+ * therefore it should be running. However we cannot migrate it
+ * to this other CPU, instead attempt to push the current
+ * running task on this CPU away.
+ */
+ push_task = get_push_task(rq);
+ if (push_task) {
+ preempt_disable();
+ raw_spin_rq_unlock(rq);
+ stop_one_cpu_nowait(rq->cpu, push_cpu_stop,
+ push_task, &rq->push_work);
+ preempt_enable();
+ raw_spin_rq_lock(rq);
+ }
+
+ return 0;
+ }
+
+ if (WARN_ON(next_task == rq->curr))
+ return 0;
+
+ /* We might release rq lock */
+ get_task_struct(next_task);
+
+ /* group_find_lock_lowest_rq locks the rq if found */
+ lowest_rt_rq = group_find_lock_lowest_rt_rq(next_task, rt_rq);
+ if (!lowest_rt_rq) {
+ struct task_struct *task;
+ /*
+ * group_find_lock_lowest_rt_rq releases rq->lock
+ * so it is possible that next_task has migrated.
+ *
+ * We need to make sure that the task is still on the same
+ * run-queue and is also still the next task eligible for
+ * pushing.
+ */
+ task = pick_next_pushable_task(rt_rq);
+ if (task == next_task) {
+ /*
+ * The task hasn't migrated, and is still the next
+ * eligible task, but we failed to find a run-queue
+ * to push it to. Do not retry in this case, since
+ * other CPUs will pull from us when ready.
+ */
+ goto out;
+ }
+
+ if (!task)
+ /* No more tasks, just exit */
+ goto out;
+
+ /*
+ * Something has shifted, try again.
+ */
+ put_task_struct(next_task);
+ next_task = task;
+ goto retry;
+ }
+
+ lowest_rq = rq_of_rt_rq(lowest_rt_rq);
+
+ move_queued_task_locked(rq, lowest_rq, next_task);
+ resched_curr(lowest_rq);
+ ret = 1;
+
+ double_unlock_balance(rq, lowest_rq);
+out:
+ put_task_struct(next_task);
+
+ return ret;
+}
+
+static void group_pull_rt_task(struct rt_rq *this_rt_rq)
+{
+ struct rq *this_rq = rq_of_rt_rq(this_rt_rq);
+ int this_cpu = this_rq->cpu, cpu;
+ bool resched = false;
+ struct task_struct *p, *push_task = NULL;
+ struct rt_rq *src_rt_rq;
+ struct rq *src_rq;
+ struct sched_dl_entity *src_dl_se;
+
+ for_each_online_cpu(cpu) {
+ if (this_cpu == cpu)
+ continue;
+
+ src_dl_se = this_rt_rq->tg->dl_se[cpu];
+ src_rt_rq = &src_dl_se->my_q->rt;
+
+ if (src_rt_rq->rt_nr_running <= 1 && !src_dl_se->dl_throttled)
+ continue;
+
+ src_rq = rq_of_rt_rq(src_rt_rq);
+
+ /*
+ * Don't bother taking the src_rq->lock if the next highest
+ * task is known to be lower-priority than our current task.
+ * This may look racy, but if this value is about to go
+ * logically higher, the src_rq will push this task away.
+ * And if it's going logically lower, we do not care.
+ */
+ if (src_rt_rq->highest_prio.next >=
+ this_rt_rq->highest_prio.curr)
+ continue;
+
+ /*
+ * We can potentially drop this_rq's lock in
+ * double_lock_balance, and another CPU could
+ * alter this_rq
+ */
+ push_task = NULL;
+ double_lock_balance(this_rq, src_rq);
+
+ /*
+ * We can only pull a task that is pushable
+ * on its rq, and no others.
+ */
+ p = pick_highest_pushable_task(src_rt_rq, this_cpu);
+
+ /*
+ * Do we have an RT task that preempts
+ * the to-be-scheduled task?
+ */
+ if (p && (p->prio < this_rt_rq->highest_prio.curr)) {
+ WARN_ON(p == src_rq->curr);
+ WARN_ON(!task_on_rq_queued(p));
+
+ /*
+ * There's a chance that p is higher in priority
+ * than what's currently running on its CPU.
+ * This is just that p is waking up and hasn't
+ * had a chance to schedule. We only pull
+ * p if it is lower in priority than the
+ * current task on the run queue
+ */
+ if (src_rq->donor->sched_task_group == this_rt_rq->tg &&
+ p->prio < src_rq->donor->prio)
+ goto skip;
+
+ if (is_migration_disabled(p)) {
+ /*
+ * If the current task does not belong to the same task group
+ * we cannot push it away.
+ */
+ if (src_rq->donor->sched_task_group != this_rt_rq->tg)
+ goto skip;
+
+ push_task = get_push_task(src_rq);
+ } else {
+ move_queued_task_locked(src_rq, this_rq, p);
+ resched = true;
+ }
+ /*
+ * We continue with the search, just in
+ * case there's an even higher prio task
+ * in another runqueue. (low likelihood
+ * but possible)
+ */
+ }
+skip:
+ double_unlock_balance(this_rq, src_rq);
+
+ if (push_task) {
+ preempt_disable();
+ raw_spin_rq_unlock(this_rq);
+ stop_one_cpu_nowait(src_rq->cpu, push_cpu_stop,
+ push_task, &src_rq->push_work);
+ preempt_enable();
+ raw_spin_rq_lock(this_rq);
+ }
+ }
+
+ if (resched)
+ resched_curr(this_rq);
+}
+
+static void group_push_rt_tasks(struct rt_rq *rt_rq)
+{
+ while (group_push_rt_task(rt_rq, false))
+ ;
+}
+
+static void group_push_rt_tasks_callback(struct rq *global_rq)
+{
+ struct rt_rq *rt_rq = &global_rq->rq_to_push_from->rt;
+
+ if ((rt_rq->rt_nr_running > 1) ||
+ (dl_group_of(rt_rq)->dl_throttled == 1)) {
+
+ group_push_rt_tasks(rt_rq);
+ }
+
+ global_rq->rq_to_push_from = NULL;
+}
+
+static void group_pull_rt_task_callback(struct rq *global_rq)
+{
+ struct rt_rq *rt_rq = &global_rq->rq_to_pull_to->rt;
+
+ group_pull_rt_task(rt_rq);
+ global_rq->rq_to_pull_to = NULL;
+}
+#else /* !CONFIG_RT_GROUP_SCHED */
+static void group_pull_rt_task(struct rt_rq *this_rt_rq) { }
+static void group_push_rt_tasks(struct rt_rq *rt_rq) { }
+#endif /* CONFIG_RT_GROUP_SCHED */
+
/*
* If we are not running and we are not going to reschedule soon, we should
* try to push tasks away now
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 9814be8348cd..6b5bd6270d9a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1330,6 +1330,16 @@ struct rq {
struct list_head cfsb_csd_list;
#endif
+#ifdef CONFIG_RT_GROUP_SCHED
+ /*
+ * Balance callbacks operate only on global runqueues.
+ * These pointers allow referencing cgroup specific runqueues
+ * for balancing operations.
+ */
+ struct rq *rq_to_push_from;
+ struct rq *rq_to_pull_to;
+#endif
+
atomic_t nr_iowait;
} __no_randomize_layout;
--
2.53.0
* [RFC PATCH v5 23/29] sched/rt: Hook HCBS migration functions
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (21 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 22/29] sched/rt: Add rt-cgroup migration functions Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 24/29] sched/core: Execute enqueued balance callbacks when changing allowed CPUs Yuri Andriaccio
` (5 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Hook rt-cgroup migration functions:
- balance_rt
- set_next_task_rt
- task_woken_rt
- switched_from_rt
- switched_to_rt
- prio_changed_rt
Follow the same patterns as for the standard FIFO/RR scheduling, but for
HCBS cgroups.
- put_prev_task_rt
If a server is throttled, put_prev_task_rt is invoked and a push is
necessary so that the task can keep running on another server if possible.
Update select_task_rq_rt to always return the cpu where the task is scheduled.
Update switched_to_rt to keep track of the deadline server that is assigned to
the task switching to FIFO/RR priority.
Co-developed-by: Alessio Balsini <a.balsini@sssup.it>
Signed-off-by: Alessio Balsini <a.balsini@sssup.it>
Co-developed-by: Andrea Parri <parri.andrea@gmail.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Co-developed-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
---
kernel/sched/rt.c | 59 ++++++++++++++++++++++++++++++++++++-----------
1 file changed, 45 insertions(+), 14 deletions(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index e1731e01757b..e6b3efa358d3 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1,4 +1,3 @@
-#pragma GCC diagnostic ignored "-Wunused-function"
// SPDX-License-Identifier: GPL-2.0
/*
* Real-Time Scheduling Class (mapped to the SCHED_FIFO and SCHED_RR
@@ -906,6 +905,11 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
struct rq *rq;
bool test;
+ /* Just return the task_cpu for processes inside task groups */
+ if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) &&
+ is_dl_group(rt_rq_of_se(&p->rt)))
+ goto out;
+
/* For anything but wake ups, just return the task_cpu */
if (!(flags & (WF_TTWU | WF_FORK)))
goto out;
@@ -1005,7 +1009,10 @@ static int balance_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
* not yet started the picking loop.
*/
rq_unpin_lock(rq, rf);
- pull_rt_task(rq);
+ if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq_of_se(&p->rt)))
+ group_pull_rt_task(rt_rq_of_se(&p->rt));
+ else
+ pull_rt_task(rq);
rq_repin_lock(rq, rf);
}
@@ -1120,7 +1127,9 @@ static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool f
if (rq->donor->sched_class != &rt_sched_class)
update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
- if (!IS_ENABLED(CONFIG_RT_GROUP_SCHED) || !is_dl_group(rt_rq))
+ if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq))
+ rt_queue_push_from_group(rt_rq);
+ else
rt_queue_push_tasks(rt_rq);
}
@@ -1174,6 +1183,13 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p, struct task_s
*/
if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1)
enqueue_pushable_task(rt_rq, p);
+
+ if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq)) {
+ struct sched_dl_entity *dl_se = dl_group_of(rt_rq);
+
+ if (dl_se->dl_throttled)
+ rt_queue_push_from_group(rt_rq);
+ }
}
/* Only try algorithms three times */
@@ -2214,6 +2230,7 @@ static void group_push_rt_tasks(struct rt_rq *rt_rq) { }
*/
static void task_woken_rt(struct rq *rq, struct task_struct *p)
{
+ struct rt_rq *rt_rq = rt_rq_of_se(&p->rt);
bool need_to_push = !task_on_cpu(rq, p) &&
!test_tsk_need_resched(rq->curr) &&
p->nr_cpus_allowed > 1 &&
@@ -2221,7 +2238,12 @@ static void task_woken_rt(struct rq *rq, struct task_struct *p)
(rq->curr->nr_cpus_allowed < 2 ||
rq->donor->prio <= p->prio);
- if (need_to_push)
+ if (!need_to_push)
+ return;
+
+ if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq))
+ group_push_rt_tasks(rt_rq);
+ else
push_rt_tasks(rq);
}
@@ -2261,7 +2283,9 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
if (!task_on_rq_queued(p) || rt_rq->rt_nr_running)
return;
- if (!IS_ENABLED(CONFIG_RT_GROUP_SCHED) || !is_dl_group(rt_rq))
+ if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq))
+ rt_queue_pull_to_group(rt_rq);
+ else
rt_queue_pull_task(rt_rq);
}
@@ -2290,6 +2314,13 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
*/
if (task_current(rq, p)) {
update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
+
+ if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq_of_se(&p->rt))) {
+ struct sched_dl_entity *dl_se = dl_group_of(rt_rq_of_se(&p->rt));
+
+ p->dl_server = dl_se;
+ }
+
return;
}
@@ -2299,13 +2330,10 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
* then see if we can move to another run queue.
*/
if (task_on_rq_queued(p)) {
- if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq)) {
- if (p->prio < rq->donor->prio)
- resched_curr(rq);
- } else {
- if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
- rt_queue_push_tasks(rt_rq_of_se(&p->rt));
- }
+ if (!is_dl_group(rt_rq) && p->nr_cpus_allowed > 1 && rq->rt.overloaded)
+ rt_queue_push_tasks(rt_rq);
+ else if (is_dl_group(rt_rq) && rt_rq->overloaded)
+ rt_queue_push_from_group(rt_rq);
if (p->prio < rq->donor->prio && cpu_online(cpu_of(rq)))
resched_curr(rq);
@@ -2332,9 +2360,12 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, u64 oldprio)
* If our priority decreases while running, we
* may need to pull tasks to this runqueue.
*/
- if (oldprio < p->prio)
- if (!IS_ENABLED(CONFIG_RT_GROUP_SCHED) || !is_dl_group(rt_rq))
+ if (oldprio < p->prio) {
+ if (IS_ENABLED(CONFIG_RT_GROUP_SCHED) && is_dl_group(rt_rq))
+ rt_queue_pull_to_group(rt_rq);
+ else
rt_queue_pull_task(rt_rq);
+ }
/*
* If there's a higher priority task waiting to run
--
2.53.0
* [RFC PATCH v5 24/29] sched/core: Execute enqueued balance callbacks when changing allowed CPUs
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (22 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 23/29] sched/rt: Hook HCBS " Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 25/29] sched/rt: Try pull task on empty server pick Yuri Andriaccio
` (4 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
From: luca abeni <luca.abeni@santannapisa.it>
Execute balancing callbacks when setting the affinity of a task, since
the HCBS scheduler may request balancing of throttled dl_servers to fully
utilize the server's bandwidth.
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/core.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fd532bb46995..24ffe933527c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2875,6 +2875,7 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask) ||
(task_current_donor(rq, p) && !task_current(rq, p))) {
struct task_struct *push_task = NULL;
+ struct balance_callback *head;
if ((flags & SCA_MIGRATE_ENABLE) &&
(p->migration_flags & MDF_PUSH) && !rq->push_busy) {
@@ -2893,11 +2894,13 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
}
preempt_disable();
+ head = splice_balance_callbacks(rq);
task_rq_unlock(rq, p, rf);
if (push_task) {
stop_one_cpu_nowait(rq->cpu, push_cpu_stop,
p, &rq->push_work);
}
+ balance_callbacks(rq, head);
preempt_enable();
if (complete)
@@ -2952,6 +2955,8 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
}
if (task_on_cpu(rq, p) || READ_ONCE(p->__state) == TASK_WAKING) {
+ struct balance_callback *head;
+
/*
* MIGRATE_ENABLE gets here because 'p == current', but for
* anything else we cannot do is_migration_disabled(), punt
@@ -2965,16 +2970,19 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
p->migration_flags &= ~MDF_PUSH;
preempt_disable();
+ head = splice_balance_callbacks(rq);
task_rq_unlock(rq, p, rf);
if (!stop_pending) {
stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
&pending->arg, &pending->stop_work);
}
+ balance_callbacks(rq, head);
preempt_enable();
if (flags & SCA_MIGRATE_ENABLE)
return 0;
} else {
+ struct balance_callback *head;
if (!is_migration_disabled(p)) {
if (task_on_rq_queued(p))
@@ -2985,7 +2993,12 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
complete = true;
}
}
+
+ preempt_disable();
+ head = splice_balance_callbacks(rq);
task_rq_unlock(rq, p, rf);
+ balance_callbacks(rq, head);
+ preempt_enable();
if (complete)
complete_all(&pending->done);
--
2.53.0
* [RFC PATCH v5 25/29] sched/rt: Try pull task on empty server pick
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (23 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 24/29] sched/core: Execute enqueued balance callbacks when changing allowed CPUs Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 26/29] sched/core: Execute enqueued balance callbacks after migrate_disable_switch Yuri Andriaccio
` (3 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
Try to pull a task onto a server with an empty runqueue before returning NULL
(and thus shutting the server down).
---
When all the servers of a cgroup are throttled while work is still pending, and
any one of the servers is then replenished, it may happen that its runqueue is
empty and the replenished server is thus immediately shut down.
By trying to pull a task first, the cgroup can start consuming its allocated
runtime as soon as one of its servers is replenished.
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/rt.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index e6b3efa358d3..4553a139398f 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -155,8 +155,14 @@ static struct task_struct *rt_server_pick(struct sched_dl_entity *dl_se, struct
struct rq *rq = rq_of_rt_rq(rt_rq);
struct task_struct *p;
- if (!sched_rt_runnable(dl_se->my_q))
- return NULL;
+ if (!sched_rt_runnable(dl_se->my_q)) {
+ rq_unpin_lock(rq, rf);
+ group_pull_rt_task(rt_rq);
+ rq_repin_lock(rq, rf);
+
+ if (!sched_rt_runnable(dl_se->my_q))
+ return NULL;
+ }
p = rt_task_of(pick_next_rt_entity(rt_rq));
set_next_task_rt(rq, p, true);
--
2.53.0
* [RFC PATCH v5 26/29] sched/core: Execute enqueued balance callbacks after migrate_disable_switch
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (24 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 25/29] sched/rt: Try pull task on empty server pick Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 27/29] Documentation: Update documentation for real-time cgroups Yuri Andriaccio
` (2 subsequent siblings)
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
Execute balance callbacks after migrate_disable_switch.
Balancing may be requested on the __schedule path, in migrate_disable_switch,
when the running task is throttled and then pushed away from its runqueue.
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/core.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 24ffe933527c..03bd86cc8d4f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2352,6 +2352,9 @@ do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx);
static void migrate_disable_switch(struct rq *rq, struct task_struct *p)
{
+ struct rq_flags rf;
+ struct balance_callback *head;
+
struct affinity_context ac = {
.new_mask = cpumask_of(rq->cpu),
.flags = SCA_MIGRATE_DISABLE,
@@ -2363,8 +2366,13 @@ static void migrate_disable_switch(struct rq *rq, struct task_struct *p)
if (p->cpus_ptr != &p->cpus_mask)
return;
- scoped_guard (task_rq_lock, p)
- do_set_cpus_allowed(p, &ac);
+ rq = task_rq_lock(p, &rf);
+
+ do_set_cpus_allowed(p, &ac);
+
+ head = splice_balance_callbacks(rq);
+ task_rq_unlock(rq, p, &rf);
+ balance_callbacks(rq, head);
}
void ___migrate_enable(void)
--
2.53.0
* [RFC PATCH v5 27/29] Documentation: Update documentation for real-time cgroups
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (25 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 26/29] sched/core: Execute enqueued balance callbacks after migrate_disable_switch Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 28/29] sched/rt: Add debug BUG_ONs for pre-migration code Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 29/29] sched/rt: Add debug BUG_ONs in migration code Yuri Andriaccio
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
Update the RT_GROUP_SCHED specific documentation. Give a brief theoretical
background for Hierarchical Constant Bandwidth Server (HCBS). Document how
the HCBS is implemented in the kernel and how the RT_GROUP_SCHED behaves
now compared to the version which this patchset replaces.
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
Documentation/scheduler/sched-rt-group.rst | 504 +++++++++++++++++----
1 file changed, 428 insertions(+), 76 deletions(-)
diff --git a/Documentation/scheduler/sched-rt-group.rst b/Documentation/scheduler/sched-rt-group.rst
index ab464335d320..eb2a9235fb00 100644
--- a/Documentation/scheduler/sched-rt-group.rst
+++ b/Documentation/scheduler/sched-rt-group.rst
@@ -53,9 +53,12 @@ CPU time is divided by means of specifying how much time can be spent running
in a given period. We allocate this "run time" for each real-time group which
the other real-time groups will not be permitted to use.
-Any time not allocated to a real-time group will be used to run normal priority
-tasks (SCHED_OTHER). Any allocated run time not used will also be picked up by
-SCHED_OTHER.
+Each real-time group runs at the same priority as SCHED_DEADLINE, thus groups
+share and contend for the bandwidth allowed to SCHED_DEADLINE. Any time not
+allocated to a real-time group (or to SCHED_DEADLINE tasks) will be used to run
+SCHED_FIFO/SCHED_RR, normal priority (SCHED_OTHER) and SCHED_EXT tasks,
+following the usual priorities. Any allocated run time not used will also be
+picked up by the other scheduling classes, in the same order as before.
Let's consider an example: a frame fixed real-time renderer must deliver 25
frames a second, which yields a period of 0.04s per frame. Now say it will also
@@ -73,10 +76,6 @@ The remaining CPU time will be used for user input and other tasks. Because
real-time tasks have explicitly allocated the CPU time they need to perform
their tasks, buffer underruns in the graphics or audio can be eliminated.
-NOTE: the above example is not fully implemented yet. We still
-lack an EDF scheduler to make non-uniform periods usable.
-
-
2. The Interface
================
@@ -86,40 +85,92 @@ lack an EDF scheduler to make non-uniform periods usable.
The system wide settings are configured under the /proc virtual file system:
-/proc/sys/kernel/sched_rt_period_us:
+``/proc/sys/kernel/sched_rt_period_us``:
The scheduling period that is equivalent to 100% CPU bandwidth.
-/proc/sys/kernel/sched_rt_runtime_us:
- A global limit on how much time real-time scheduling may use. This is always
- less or equal to the period_us, as it denotes the time allocated from the
- period_us for the real-time tasks. Without CONFIG_RT_GROUP_SCHED enabled,
- this only serves for admission control of deadline tasks. With
- CONFIG_RT_GROUP_SCHED=y it also signifies the total bandwidth available to
- all real-time groups.
+``/proc/sys/kernel/sched_rt_runtime_us``:
+ A global limit on how much time real-time scheduling may use (SCHED_DEADLINE
+ tasks + real-time groups). This is always less than or equal to the period_us,
+ as it denotes the time allocated from the period_us for the real-time tasks.
+ Without **CONFIG_RT_GROUP_SCHED** enabled, this only serves for admission
+ control of deadline tasks. With **CONFIG_RT_GROUP_SCHED=y** it also signifies
+ the total bandwidth available to both real-time groups and deadline tasks.
* Time is specified in us because the interface is s32. This gives an
operating range from 1us to about 35 minutes.
- * sched_rt_period_us takes values from 1 to INT_MAX.
- * sched_rt_runtime_us takes values from -1 to sched_rt_period_us.
- * A run time of -1 specifies runtime == period, ie. no limit.
- * sched_rt_runtime_us/sched_rt_period_us > 0.05 inorder to preserve
- bandwidth for fair dl_server. For accurate value check average of
- runtime/period in /sys/kernel/debug/sched/fair_server/cpuX/
-
-
-2.2 Default behaviour
----------------------
-
-The default values for sched_rt_period_us (1000000 or 1s) and
-sched_rt_runtime_us (950000 or 0.95s). This gives 0.05s to be used by
-SCHED_OTHER (non-RT tasks). These defaults were chosen so that a run-away
-real-time tasks will not lock up the machine but leave a little time to recover
-it. By setting runtime to -1 you'd get the old behaviour back.
-
-By default all bandwidth is assigned to the root group and new groups get the
-period from /proc/sys/kernel/sched_rt_period_us and a run time of 0. If you
-want to assign bandwidth to another group, reduce the root group's bandwidth
-and assign some or all of the difference to another group.
+ * ``sched_rt_period_us`` takes values from 1 to INT_MAX.
+ * ``sched_rt_runtime_us`` takes values from -1 to ``sched_rt_period_us``.
+ * A run time of -1 specifies runtime == period, i.e., no limit, but also
+ disables admission tests for SCHED_DEADLINE.
+
+The default value for both ``sched_rt_period_us`` and ``sched_rt_runtime_us``
+is 1000000 (or 1s). The fair-servers and ext-servers have a default runtime of
+50ms and a default period of 1s, which guarantees a minimum of 0.05s per period
+to SCHED_FIFO/SCHED_RR and non-RT tasks (SCHED_OTHER, SCHED_EXT), while at most
+0.95s per period can be used by SCHED_DEADLINE tasks and, if enabled,
+rt-cgroups.
+
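+A minimal, illustrative userspace sketch (not kernel code) that reads the two
+knobs above and prints the fraction of CPU time reserved for SCHED_DEADLINE
+tasks and rt-cgroups could look like this::
+
+    #include <stdio.h>
+
+    static long read_long(const char *path)
+    {
+        long v = -1;
+        FILE *f = fopen(path, "r");
+
+        if (f) {
+            if (fscanf(f, "%ld", &v) != 1)
+                v = -1;
+            fclose(f);
+        }
+        return v;
+    }
+
+    int main(void)
+    {
+        long period  = read_long("/proc/sys/kernel/sched_rt_period_us");
+        long runtime = read_long("/proc/sys/kernel/sched_rt_runtime_us");
+
+        if (period <= 0)
+            return 1;
+        if (runtime < 0)        /* -1 means runtime == period, no limit */
+            runtime = period;
+        printf("rt/dl bandwidth: %ld/%ld us (%.1f%%)\n",
+               runtime, period, 100.0 * runtime / period);
+        return 0;
+    }
+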
+2.2 Cgroup settings
+-------------------
+
+Enabling **CONFIG_RT_GROUP_SCHED** lets you explicitly allocate real CPU
+bandwidth to task groups.
+
+This uses the cgroup virtual file system and the CPU controller for cgroups.
+Enabling the controller for the hierarchy creates two files:
+
+* ``<cgroup>/cpu.rt_period_us``, the scheduling period of the group.
+* ``<cgroup>/cpu.rt_runtime_us``, the maximum runtime each CPU will provide
+ every period.
+
+ .. tip::
+ For more information on working with control groups, you should read
+ *Documentation/admin-guide/cgroup-v1/cgroups.rst* as well.
+ ..
+
+By default the root cgroup has the same period as
+``/proc/sys/kernel/sched_rt_period_us``, which is 1s, and a runtime of zero, so
+that rt-cgroups are *soft-disabled* by default and all the runtime is available
+to SCHED_DEADLINE tasks only. New groups instead get both a period and a
+runtime of zero.
+
+2.3 Cgroup Hierarchy and Behaviours
+-----------------------------------
+
+With HCBS, cgroups may act either as task runners or as bandwidth reservations:
+
+* A bandwidth reservation cgroup (such as the root control group) reserves a
+ portion of the total real-time bandwidth for its sub-tree of groups. A group
+ in this state cannot run SCHED_FIFO/SCHED_RR tasks.
+
+ .. important::
+ The *root control group* behaviour is different from the other cgroups, as
+ its job is to reserve bandwidth for the whole group hierarchy, but it can
+ also run rt tasks. This is an exception: FIFO/RR tasks running in the
+ root cgroup follow the same rules as FIFO/RR tasks in a kernel which has
+ **CONFIG_RT_GROUP_SCHED=n**, and the bandwidth reservation is instead a
+ feature connected to HCBS, that acts on the cgroup tree.
+ ..
+
+* A *live* group instead can be used to run FIFO/RR tasks, with the given
+ bandwidth parameters: each CPU is served a *potentially continuous* runtime of
+ ``<cgroup>/cpu.rt_runtime_us`` every period ``<cgroup>/cpu.rt_period_us``. It
+ is important to notice that increasing the period but leaving the bandwidth
+ constant changes the behaviour of the cgroup's servers, as the bandwidth given
+ overall is the same, but it is given in longer bursts (and longer slices of no
+ bandwidth).
+
+More specifically on *live* and non-*live*:
+
+* A group is deemed *live* if it is a leaf of the groups' hierarchy or all of
+ its children have runtime 0.
+* *Live* groups are the only groups allowed to run real-time tasks. A
+ SCHED_FIFO/SCHED_RR task cannot be migrated into a non-*live* group, nor can
+ a task already inside a non-*live* group change its scheduling policy to
+ SCHED_FIFO/SCHED_RR.
+* Non-*live* groups are only used for bandwidth reservation.
+* Group bandwidths follow this invariant: the sum of the bandwidths of a
+ group's children never exceeds the group's bandwidth (see the sketch below).
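+
+A minimal, illustrative userspace sketch of the bandwidth invariant above (this
+is not the in-kernel admission code; group layout and numbers are hypothetical)
+could look like this::
+
+    #include <stdio.h>
+
+    /* cpu.rt_runtime_us / cpu.rt_period_us of a group */
+    struct grp { long runtime_us, period_us; };
+
+    static double bw(const struct grp *g)
+    {
+        return g->period_us ? (double)g->runtime_us / g->period_us : 0.0;
+    }
+
+    /* A new child is admitted only if all children still fit in the parent. */
+    static int admit_child(const struct grp *parent, const struct grp *siblings,
+                           int n, const struct grp *new_child)
+    {
+        double sum = bw(new_child);
+
+        for (int i = 0; i < n; i++)
+            sum += bw(&siblings[i]);
+
+        return sum <= bw(parent);
+    }
+
+    int main(void)
+    {
+        struct grp parent = { 400000, 1000000 };         /* 40% */
+        struct grp siblings[] = { { 100000, 1000000 } }; /* 10% */
+        struct grp child = { 250000, 500000 };           /* 50%: rejected */
+
+        printf("admitted: %d\n", admit_child(&parent, siblings, 1, &child));
+        return 0;
+    }
+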
Real-time group scheduling means you have to assign a portion of total CPU
bandwidth to the group before it will accept real-time tasks. Therefore you will
@@ -128,63 +179,364 @@ done that, even if the user has the rights to run processes with real-time
priority!
-2.3 Basis for grouping tasks
-----------------------------
+3. Theoretical Background
+=========================
+
+
+ .. BIG FAT WARNING ******************************************************
+
+ .. warning::
+
+ This section contains a (not-thorough) summary on deadline/hierarchical
+ scheduling theory, and how it applies to real-time control groups.
+ The reader can "safely" skip to Section 4 if only interested in seeing
+ how the scheduling policy can be used. Anyway, we strongly recommend
+ to come back here and continue reading (once the urge for testing is
+ satisfied :P) to be sure of fully understanding all technical details.
+
+ .. ************************************************************************
+
+The real-time cgroup scheduler is based upon the **Hierarchical Constant
+Bandwidth Server** (HCBS) [1] *Compositional Scheduling Framework* (CSF). A
+**CSF** is a framework where global (system-level) timing properties can be
+established by composing independently (specified and) analyzed local
+(component-level) timing properties [5].
+
+For HCBS (related to the Linux kernel), the compositional framework consists of
+two parts:
+
+* The *scheduling components*, which are the basic units of the scheduling. In
+ the kernel these are the single cgroups along with the tasks that must be run
+ inside.
+
+* The *scheduling resources*, which are the CPUs of the machine.
+
+HCBS is a *hierarchical scheduling framework*, where the scheduling components
+form a hierarchy and resources are allocated from parent components to their
+child components in the hierarchy.
+
+This chapter is organized as follows: **Section 3.1** gives basic real-time
+theory definitions that are used throughout the whole section. **Section 3.2**
+talks about the HCBS framework, giving a general idea on how this is structured.
+**Section 3.3** introduces the MPR model, one of the many models which may be
+used for the analysis of the scheduling components and the computation of the
+minimum required scheduling resources for a given component. **Section 3.4**
+shows the schedulability test for MPR on the HCBS framework. **Section 3.5**
+shows how to convert a MPR interface to a HCBS compatible resource reservation
+for a component. Finally, **Section 3.6** lists other interesting models which
+could be used for the component analysis in HCBS.
+
+3.1 Basic Definitions
+---------------------
+
+*We borrow the definitions given in the* ``sched_deadline`` *document, very
+briefly summarized here, and add the new definitions needed by the following
+content.*
+
+A typical real-time task is composed of a repetition of computation phases (task
+instances, or jobs) which are activated in a periodic or sporadic fashion. For
+our purposes, real-time tasks are characterized by three parameters:
+
+* Worst Case Execution Time (WCET): the maximum execution time among all jobs.
+* Relative Deadline (D): the time within which each job must be completed,
+ relative to the release time of the job.
+* Inter-Arrival Period (P): the exact/minimum (for periodic/sporadic tasks) time
+ between consecutive jobs.
+
+3.2 Hierarchical Constant Bandwidth Server (HCBS) [1]
+-----------------------------------------------------
+
+As mentioned, HCBS is a *hierarchical scheduling framework*:
+
+* The framework hierarchy follows the same hierarchy of cgroups. Cgroups may
+ have two roles, either bandwidth reservation for children cgroups, or they may
+ be *live*, i.e. run tasks (but not both). The root cgroup, for the kernel's
+ implementation of HCBS, acts only as bandwidth reservation (but, as described
+ elsewhere in this document, it also has other uses outside of the hierarchical
+ framework).
+* The cgroup tree is internally flattened, for ease of scheduling, to a
+ two-level hierarchy, since only the *live* groups are of interest and all the
+ necessary information for their scheduling lies in their interface (there is
+ no need for the reservation components).
+* The hierarchical framework, now on two levels, consists then of a first level
+ of cgroups, and a second level of tasks that are run inside these groups.
+* The scheduling of components is performed using global Earliest Deadline First
+ (gEDF), SCHED_DEADLINE in the kernel, following the bandwidth reservation of
+ each group.
+* Whenever a component is scheduled, a local scheduler picks which of the tasks
+ of the cgroup to run. The scheduling policy is global Fixed Priority (gFP),
+ SCHED_FIFO/SCHED_RR in the kernel.
+
+3.3 Multiprocessor Periodic Resource (MPR) model
+------------------------------------------------
+
+A Multiprocessor Periodic Resource (MPR) model [2] **u = <Pi, Theta, m'>**
+specifies that an identical, unit-capacity multiprocessor platform collectively
+provides **Theta** units of resource every **Pi** time units, where the
+**Theta** time units are supplied with concurrency at most **m'**.
+
+This theoretical model is one of the many models that can abstract the
+interface of our real-time cgroups: let **m'** be the number of CPUs of the
+machine, let **Theta** be **m' * <cgroup>/cpu.rt_runtime_us** and **Pi** be
+**<cgroup>/cpu.rt_period_us**.
-Enabling CONFIG_RT_GROUP_SCHED lets you explicitly allocate real
-CPU bandwidth to task groups.
+Let's introduce the concept of Supply Bound Function (SBF). An SBF is a function
+which outputs a lower bound for the processor supply provided in a given time
+interval, given a resource supply model. For a completely dedicated CPU, the SBF
+function is simply the identity function, as it will always provide **t** units
+of computation for an interval of length **t**. The situation gets slightly more
+complicated for the MPR model or any of the other models listed in Section 3.6.
-This uses the cgroup virtual file system and "<cgroup>/cpu.rt_runtime_us"
-to control the CPU time reserved for each control group.
+The **SBF(t)** for a MPR model **u = <Pi, Theta, m'>** is::
-For more information on working with control groups, you should read
-Documentation/admin-guide/cgroup-v1/cgroups.rst as well.
+              | 0                                         if t' < 0
+              |
+  SBF_u(t) =  | floor(t' / Pi) * Theta
+              |   + max(0, m' * x - (m' * Pi - Theta))    if t' >= 0 and 1 <= x <= y
+              |
+              | floor(t' / Pi) * Theta
+              |   + max(0, m' * x - (m' * Pi - Theta))
+              |   - (m' - beta)                           otherwise
-Group settings are checked against the following limits in order to keep the
-configuration schedulable:
+where::
- \Sum_{i} runtime_{i} / global_period <= global_runtime / global_period
+ alpha = floor(Theta / m')
+ beta = Theta - m' * alpha
+ t' = t - (Pi - ceil(Theta / m'))
+ x = t' - (Pi * floor(t' / Pi))
+ y = Pi - floor(Theta / m')
-For now, this can be simplified to just the following (but see Future plans):
+Briefly, this function models the fact that the server's bandwidth is supplied
+as late as possible, thus describing the worst case for the supplied bandwidth.
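+
+A minimal, illustrative userspace sketch (not kernel code) that evaluates
+SBF_u(t) as defined above, with hypothetical reservation parameters, could look
+like this::
+
+    #include <stdio.h>
+    #include <math.h>
+
+    /* SBF_u(t) for the MPR model u = <Pi, Theta, m'> (all times in us). */
+    static double sbf_mpr(double t, double Pi, double Theta, int m)
+    {
+        double beta = Theta - m * floor(Theta / m);
+        double tp   = t - (Pi - ceil(Theta / m));       /* t' */
+        double k, x, y, supply;
+
+        if (tp < 0)
+            return 0;
+
+        k = floor(tp / Pi);
+        x = tp - Pi * k;
+        y = Pi - floor(Theta / m);
+
+        supply = k * Theta + fmax(0, m * x - (m * Pi - Theta));
+        if (!(x >= 1 && x <= y))
+            supply -= m - beta;                         /* third branch */
+        return supply > 0 ? supply : 0;                 /* supply is never negative */
+    }
+
+    int main(void)
+    {
+        double Pi = 100000, Theta = 90000;              /* 90ms every 100ms in total */
+        int m = 3;                                      /* over 3 CPUs */
+
+        for (double t = 0; t <= 300000; t += 50000)
+            printf("SBF(%6.0f us) = %6.0f us\n", t, sbf_mpr(t, Pi, Theta, m));
+        return 0;
+    }
+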
- \Sum_{i} runtime_{i} <= global_runtime
+3.4 Schedulability for MPR on global Fixed-Priority
+---------------------------------------------------
+Let's introduce the concept of Demand Bound Function (DBF). A DBF is a function
+that, given a taskset, a scheduling algorithm and an interval of time, outputs
+the worst-case resource demand for that interval of time.
-3. Future plans
-===============
+It is easy to see that, given a DBF and an SBF, we can deem a component/taskset
+schedulable if, for every time interval t >= 0, it is possible to demonstrate
+that:
-There is work in progress to make the scheduling period for each group
-("<cgroup>/cpu.rt_period_us") configurable as well.
+ DBF(t) <= SBF(t)
-The constraint on the period is that a subgroup must have a smaller or
-equal period to its parent. But realistically its not very useful _yet_
-as its prone to starvation without deadline scheduling.
+We have the Supply Bound Function for our given MPR model, so we are missing the
+Demand Bound Function for a given taskset that is being scheduled using global
+Fixed Priority.
-Consider two sibling groups A and B; both have 50% bandwidth, but A's
-period is twice the length of B's.
+3.4.1 Schedulability Analysis for global Fixed Priority
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-* group A: period=100000us, runtime=50000us
+Bertogna, Cirinei and Lipari [6] have derived a schedulability test for global
+Fixed Priority (gFP) on multi-processor platforms. In this test (called
+*BCL_gFP* test) we can consider all the CPUs to be dedicated to the scheduling.
- - this runs for 0.05s once every 0.1s
+ A taskset **Tau** is schedulable with gFP on a multiprocessor platform
+ composed of **m'** identical processors if for each task **tau_k in Tau**:
-* group B: period= 50000us, runtime=25000us
+ Sum(for i < k)( min(W_i(D_k), D_k - C_k + 1) ) < m' * (D_k - C_k + 1)
- - this runs for 0.025s twice every 0.1s (or once every 0.05 sec).
+ where **W_i(t)** is the workload of task **tau_i** over a time interval **t**:
-This means that currently a while (1) loop in A will run for the full period of
-B and can starve B's tasks (assuming they are of lower priority) for a whole
-period.
+ W_i(t) = N_i(t) * C_i + min(C_i, t + D_i - C_i - N_i(t) * P_i)
-The next project will be SCHED_EDF (Earliest Deadline First scheduling) to bring
-full deadline scheduling to the linux kernel. Deadline scheduling the above
-groups and treating end of the period as a deadline will ensure that they both
-get their allocated time.
+ and **N_i(t)** is the number of activations of task **tau_i** that complete in
+ a time interval **t**:
-Implementing SCHED_EDF might take a while to complete. Priority Inheritance is
-the biggest challenge as the current linux PI infrastructure is geared towards
-the limited static priority levels 0-99. With deadline scheduling you need to
-do deadline inheritance (since priority is inversely proportional to the
-deadline delta (deadline - now)).
+ N_i(t) = floor( (t + D_i - C_i) / P_i )
+
+ while the **min** term is the contribution of the carried-out job in the
+ interval **t**, i.e. that job that does not completely fit in the interval
+ **t**, but starts inside the interval after all the jobs that complete.
+
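+A minimal, illustrative userspace sketch (not kernel code) of the test above,
+with a hypothetical taskset ordered by decreasing priority, could look like
+this::
+
+    #include <stdio.h>
+
+    struct rt_task { long C, D, P; };   /* WCET, relative deadline, period (us) */
+
+    /* W_i(t): worst-case workload of task i in an interval of length t */
+    static long workload(const struct rt_task *ti, long t)
+    {
+        long N = (t + ti->D - ti->C) / ti->P;           /* N_i(t) */
+        long carry = t + ti->D - ti->C - N * ti->P;
+
+        return N * ti->C + (carry < ti->C ? carry : ti->C);
+    }
+
+    /* Returns 1 if the taskset passes the BCL_gFP test on m processors. */
+    static int bcl_gfp_schedulable(const struct rt_task *tau, int n, int m)
+    {
+        for (int k = 0; k < n; k++) {
+            long slack = tau[k].D - tau[k].C + 1;
+            long interference = 0;
+
+            for (int i = 0; i < k; i++) {
+                long w = workload(&tau[i], tau[k].D);
+
+                interference += (w < slack) ? w : slack;
+            }
+            if (interference >= m * slack)
+                return 0;
+        }
+        return 1;
+    }
+
+    int main(void)
+    {
+        struct rt_task tau[] = {        /* highest priority first */
+            {  2000, 10000, 10000 },
+            {  3000, 20000, 20000 },
+            {  5000, 40000, 40000 },
+        };
+
+        printf("schedulable on 2 CPUs: %d\n", bcl_gfp_schedulable(tau, 3, 2));
+        return 0;
+    }
+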
+3.4.2 From BCL_gFP to the Demand Bound Function
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+We can then derive the DBF from this test:
+
+ DBF_gFP(tau_k) = Sum(for i < k)( min(W_i(D_k), D_k - C_k + 1) ) + m' * (C_k - 1)
+
+Briefly, the first sum component, the same as in the BCL_gFP test, describes
+the maximum interference that higher priority tasks give to the analysed task.
+The workload is upper-bounded by ``(D_k - C_k + 1)`` because we are only
+interested in the interference in the slack time, while for the ``C_k`` time we
+require that all the CPUs are fully available, as the single job needs ``C_k``
+(non-overlapping) time units to run.
+
+The demand bound function from Bertogna et al. is only defined at a single time
+instant (i.e. the deadline of the task under analysis) instead of all possible
+times, as this is sufficient to demonstrate schedulability under global Fixed
+Priority.
+
+3.4.3 Putting it all together
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A component **C**, on **m'** processors, running a taskset **Tau = { tau_1 =
+(C_1, D_1, P_1), ..., tau_n = (C_n, D_n, P_n) }** of **n** sporadic tasks, is
+schedulable under gFP using an MPR model **u = <Pi, Theta, m'>**, if for all
+tasks **tau_k in Tau**:
+
+ DBF_gFP(tau_k) <= SBF_u(D_k)
+
+3.5 From MPR to deadline servers
+--------------------------------
+
+Since there exists no algorithm to schedule MPR interfaces, a technique was
+developed to transform MPR interfaces into periodic tasks, so that a number of
+periodic servers which respect the tasks' requirements can be used for the
+scheduling of the MPR interface and its associated tasks.
+
+Let **u = <Pi, Theta, m'>** be an MPR interface, let **a = Theta - m' * floor(Theta
+/ m')**, and let **k = floor(a)**. Define a transformation from **u** to a periodic
+taskset **Tau_u = { tau_1 = (C_1, D_1, P_1), ..., tau_m' = (C_m', D_m', P_m')
+}**, where:
+
+ **tau_1 = ... = tau_k = (floor(Theta / m') + 1, Pi, Pi)**
+
+ **tau_k+1 = (floor(Theta / m') + a - k * floor(a/k), Pi, Pi)**
+
+ **tau_k+2 = ... = tau_m' = (floor(Theta / m'), Pi, Pi)**
+
+This periodic taskset of servers **Tau_u** can be scheduled on any number of
+processors with concurrency at most **m'**.
+
+For real-time control groups, it is possible to just consider a slightly more
+demanding taskset **Tau_u'**, where each task **tau_i** is defined as follows:
+
+ **tau_i = (ceil(Theta / m'), Pi, Pi)**
+
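+A minimal, illustrative userspace sketch (not kernel code) of the simplified
+transformation **Tau_u'**, deriving per-CPU deadline-server parameters from
+hypothetical cgroup settings, could look like this::
+
+    #include <stdio.h>
+
+    int main(void)
+    {
+        long Pi = 100000;                       /* cpu.rt_period_us  */
+        long runtime = 30000;                   /* cpu.rt_runtime_us */
+        int m = 4;                              /* number of CPUs    */
+        long Theta = (long)m * runtime;         /* collective supply per period */
+        long budget = (Theta + m - 1) / m;      /* ceil(Theta / m')  */
+
+        for (int cpu = 0; cpu < m; cpu++)
+            printf("cpu%d: server runtime=%ld us, period=deadline=%ld us\n",
+                   cpu, budget, Pi);
+        return 0;
+    }
+
+Note that, with **Theta = m' * <cgroup>/cpu.rt_runtime_us**, the budget above
+collapses to exactly ``cpu.rt_runtime_us``: each per-CPU server is simply given
+the group's runtime every period.
+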
+3.6 Other models
+----------------
+
+There exist many other theoretical models in the literature which are used to
+describe a hierarchical scheduling framework on multi-core architectures.
+Notable examples are the Multi Supply Function (MSF) abstraction [3], the
+Parallel Supply Function (PSF) abstraction [4] and the Bounded Delay
+Multipartition (BDM) [7].
+
+3.7 References
+--------------
+ 1 - L. Abeni, A. Balsini, and T. Cucinotta, “Container-based real-time
+ scheduling in the Linux kernel,” SIGBED Rev., vol. 16, no. 3, pp. 33-38,
+ Nov. 2019, doi: 10.1145/3373400.3373405.
+ 2 - A. Easwaran, I. Shin, and I. Lee, “Optimal virtual cluster-based
+ multiprocessor scheduling,” Real-Time Syst, vol. 43, no. 1, pp. 25-59,
+ Sept. 2009, doi: 10.1007/s11241-009-9073-x.
+ 3 - E. Bini, G. Buttazzo, and M. Bertogna, “The Multi Supply Function
+ Abstraction for Multiprocessors,” in 2009 15th IEEE International
+ Conference on Embedded and Real-Time Computing Systems and Applications,
+ Aug. 2009, pp. 294-302. doi: 10.1109/RTCSA.2009.39.
+ 4 - E. Bini, B. Marko, and S. K. Baruah, “The Parallel Supply Function
+ Abstraction for a Virtual Multiprocessor,” in Scheduling, S. Albers, S. K.
+ Baruah, R. H. Möhring, and K. Pruhs, Eds., in Dagstuhl Seminar Proceedings
+ (DagSemProc), vol. 10071. Dagstuhl, Germany: Schloss Dagstuhl -
+ Leibniz-Zentrum für Informatik, 2010, pp. 1-14. doi:
+ 10.4230/DagSemProc.10071.14.
+ 5 - I. Shin and I. Lee, “Compositional real-time scheduling framework,” in
+ 25th IEEE International Real-Time Systems Symposium, Dec. 2004, pp. 57-67.
+ doi: 10.1109/REAL.2004.15.
+ 6 - M. Bertogna, M. Cirinei, and G. Lipari, “Schedulability Analysis of Global
+ Scheduling Algorithms on Multiprocessor Platforms,” IEEE Transactions on
+ Parallel and Distributed Systems, vol. 20, no. 4, pp. 553-566, Apr. 2009,
+ doi: 10.1109/TPDS.2008.129.
+ 7 - G. Lipari and E. Bini, “A Framework for Hierarchical Scheduling on
+ Multiprocessors: From Application Requirements to Run-Time Allocation,” in
+ 2010 31st IEEE Real-Time Systems Symposium, Nov. 2010, pp. 249-258. doi:
+ 10.1109/RTSS.2010.12.
+
+
+4. Using Real-Time cgroups
+==========================
+
+4.1 CGroup Setup
+----------------
-This means the whole PI machinery will have to be reworked - and that is one of
-the most complex pieces of code we have.
+The following is a brief guide to the use of Real-Time Control Groups.
+
+Of course, real-time control groups require the cgroup file system to be
+mounted. We have decided to only support cgroups v2, so make sure you mount the
+cgroup v2 hierarchy.
+
+Additionally, real-time cgroups require the cgroup CPU controller to be
+enabled::
+
+ # Assume the cgroup file system is mounted at /sys/fs/cgroup
+ > echo "+cpu" > /sys/fs/cgroup/cgroup.subtree_control
+
+The CPU controller can only be enabled if there is no SCHED_FIFO/SCHED_RR task
+scheduled in any cgroup other than the root control group.
+
+The root control group has no bandwidth allocated by default, so make sure to
+allocate some bandwidth so that it can be used by the other cgroups. More on
+that in the following section...
+
+4.2 Bandwidth Allocation for groups
+-----------------------------------
+
+Allocating bandwidth to a cgroup is a fundamental step to run real-time
+workloads. The cgroup filesystem exposes two files:
+
+* ``<cgroup>/cpu.rt_runtime_us``: which specifies the cgroup's runtime in
+ microseconds.
+* ``<cgroup>/cpu.rt_period_us``: which specifies the cgroup's period in
+ microseconds.
+
+Both files are readable and writable, and their default value is zero. By
+definition, the specified runtime must always be less than or equal to the
+period. Additionally, an admission test checks if the bandwidth invariant is
+respected (i.e. sum of children's bandwidth <= parent's bandwidth).
+
+The root control group files instead control and reserve the SCHED_DEADLINE
+bandwidth allocated to real-time cgroups, since real-time groups compete and
+share the same bandwidth allocated to SCHED_DEADLINE tasks.
+
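+A minimal, illustrative userspace sketch (not part of the kernel) that
+configures a group through the two files above could look like this; the cgroup
+path and values are hypothetical, and the period is written before the runtime
+so that runtime <= period holds at every step::
+
+    #include <stdio.h>
+
+    static int write_val(const char *path, long val)
+    {
+        FILE *f = fopen(path, "w");
+
+        if (!f)
+            return -1;
+        fprintf(f, "%ld", val);
+        return fclose(f);
+    }
+
+    int main(void)
+    {
+        /* hypothetical group: reserve 10ms every 100ms on each CPU */
+        const char *grp = "/sys/fs/cgroup/rtgroup";
+        char path[256];
+
+        snprintf(path, sizeof(path), "%s/cpu.rt_period_us", grp);
+        if (write_val(path, 100000))
+            return 1;
+        snprintf(path, sizeof(path), "%s/cpu.rt_runtime_us", grp);
+        return write_val(path, 10000) ? 1 : 0;
+    }
+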
+4.3 Running real-time tasks in groups
+-------------------------------------
+
+To run tasks in real-time groups it is just necessary to change a task's
+scheduling policy to SCHED_FIFO/SCHED_RR and migrate it into the group. If the
+group is not allowed to run real-time tasks because of incorrect configuration,
+both migrating a SCHED_FIFO/SCHED_RR task into the group and changing the
+scheduling policy of a task already inside the group will fail::
+
+ # assume there is a task of PID 42 running
+ # change its scheduling policy to SCHED_FIFO, priority 99
+ > chrt -f -p 99 42
+
+ # migrate the task to a cgroup
+ > echo 42 > /sys/fs/cgroup/<my-cgroup>/cgroup.procs
+
+4.4 Special case: the root control group
+----------------------------------------
+
+The root cgroup is special, compared to the other cgroups, as its tasks are not
+managed by the HCBS algorithm, rather they just use the original
+SCHED_FIFO/SCHED_RR policies (as if CONFIG_RT_GROUP_SCHED was disabled). As
+mentioned, its bandwidth files are just used to control how much of the
+SCHED_DEADLINE bandwidth is allocated to cgroups.
+
+4.5 Guarantees and Special Behaviours
+-------------------------------------
+
+Real-time cgroups are run at the same priority level as SCHED_DEADLINE tasks.
+Since this is the highest priority scheduling policy, and since the Constant
+Bandwidth Server (CBS) enforces that the specified bandwidth requirements for
+both groups and tasks cannot be overrun, real-time groups have the same
+guarantees that SCHED_DEADLINE tasks have, i.e. they will necessarily be
+supplied with the amount of bandwidth requested (whenever the admission tests
+pass).
+
+This means that, since SCHED_FIFO/SCHED_RR tasks (scheduled in the root control
+group) are not subject to bandwidth controls, they run at a lower priority
+than their cgroup counterparts. Nonetheless, a minimum amount of bandwidth, if
+reserved, will always be available to run SCHED_FIFO/SCHED_RR workloads in the
+root cgroup, while they will be able to use more runtime if any of the
+SCHED_DEADLINE tasks or servers use less than their specified amount of
+bandwidth. SCHED_OTHER tasks are instead scheduled as normal, at lower priority
+than real-time workloads.
+
+The aforementioned behaviour differs from the preceding RT_GROUP_SCHED
+implementation, but this is necessary to give actual guarantees on the amount
+of bandwidth granted to rt-cgroups.
\ No newline at end of file
--
2.53.0
* [RFC PATCH v5 28/29] sched/rt: Add debug BUG_ONs for pre-migration code
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (26 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 27/29] Documentation: Update documentation for real-time cgroups Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 29/29] sched/rt: Add debug BUG_ONs in migration code Yuri Andriaccio
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
Add debug BUG_ONs in rt_queue_push/pull_task(s).
Can be safely added after all the pre-migration patches.
These are extra asserts which are only useful to debug the kernel code and
are not meant to be part of the final patchset.
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/rt.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 4553a139398f..6cecda2ce812 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -340,6 +340,9 @@ static inline void rt_queue_push_tasks(struct rt_rq *rt_rq)
{
struct rq *rq = served_rq_of_rt_rq(rt_rq);
+ BUG_ON(rt_rq == NULL);
+ BUG_ON(rq != cpu_rq(rq->cpu));
+
if (!has_pushable_tasks(rt_rq))
return;
@@ -350,6 +353,9 @@ static inline void rt_queue_pull_task(struct rt_rq *rt_rq)
{
struct rq *rq = served_rq_of_rt_rq(rt_rq);
+ BUG_ON(rt_rq == NULL);
+ BUG_ON(rq != cpu_rq(rq->cpu));
+
queue_balance_callback(rq, &per_cpu(rt_pull_head, rq->cpu), pull_rt_task);
}
--
2.53.0
* [RFC PATCH v5 29/29] sched/rt: Add debug BUG_ONs in migration code
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
` (27 preceding siblings ...)
2026-04-30 21:38 ` [RFC PATCH v5 28/29] sched/rt: Add debug BUG_ONs for pre-migration code Yuri Andriaccio
@ 2026-04-30 21:38 ` Yuri Andriaccio
28 siblings, 0 replies; 39+ messages in thread
From: Yuri Andriaccio @ 2026-04-30 21:38 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider
Cc: linux-kernel, Luca Abeni, Yuri Andriaccio
Add debug BUG_ONs in group specific migration functions.
Can be safely added after all the migration patches.
These are extra asserts which are only useful to debug the kernel code and
are not meant to be part of the final patchset.
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
---
kernel/sched/rt.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 6cecda2ce812..9f938ce84485 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -370,6 +370,9 @@ static void rt_queue_push_from_group(struct rt_rq *rt_rq)
struct rq *rq = served_rq_of_rt_rq(rt_rq);
struct rq *global_rq = cpu_rq(rq->cpu);
+ BUG_ON(rt_rq == NULL);
+ BUG_ON(rq == global_rq);
+
if (global_rq->rq_to_push_from)
return;
@@ -387,6 +390,10 @@ static void rt_queue_pull_to_group(struct rt_rq *rt_rq)
struct rq *global_rq = cpu_rq(rq->cpu);
struct sched_dl_entity *dl_se = dl_group_of(rt_rq);
+ BUG_ON(rt_rq == NULL);
+ BUG_ON(!is_dl_group(rt_rq));
+ BUG_ON(rq == global_rq);
+
if (dl_se->dl_throttled || global_rq->rq_to_pull_to)
return;
@@ -1408,6 +1415,8 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
*/
static int push_rt_task(struct rq *rq, bool pull)
{
+ BUG_ON(is_dl_group(&rq->rt));
+
struct task_struct *next_task;
struct rq *lowest_rq;
int ret = 0;
@@ -1709,6 +1718,8 @@ void rto_push_irq_work_func(struct irq_work *work)
static void pull_rt_task(struct rq *this_rq)
{
+ BUG_ON(is_dl_group(&this_rq->rt));
+
int this_cpu = this_rq->cpu, cpu;
bool resched = false;
struct task_struct *p, *push_task;
@@ -1833,6 +1844,8 @@ static int group_find_lowest_rt_rq(struct task_struct *task, struct rt_rq *task_
int prio, lowest_prio;
int cpu, this_cpu = smp_processor_id();
+ BUG_ON(task->sched_task_group != task_rt_rq->tg);
+
if (task->nr_cpus_allowed == 1)
return -1; /* No other targets possible */
@@ -1931,6 +1944,8 @@ static struct rt_rq *group_find_lock_lowest_rt_rq(struct task_struct *task, stru
struct sched_dl_entity *lowest_dl_se;
int tries, cpu;
+ BUG_ON(task->sched_task_group != rt_rq->tg);
+
for (tries = 0; tries < RT_MAX_TRIES; tries++) {
cpu = group_find_lowest_rt_rq(task, rt_rq);
@@ -1984,6 +1999,8 @@ static struct rt_rq *group_find_lock_lowest_rt_rq(struct task_struct *task, stru
static int group_push_rt_task(struct rt_rq *rt_rq, bool pull)
{
+ BUG_ON(!is_dl_group(rt_rq));
+
struct rq *rq = rq_of_rt_rq(rt_rq);
struct task_struct *next_task;
struct rq *lowest_rq;
@@ -2103,6 +2120,8 @@ static int group_push_rt_task(struct rt_rq *rt_rq, bool pull)
static void group_pull_rt_task(struct rt_rq *this_rt_rq)
{
+ BUG_ON(!is_dl_group(this_rt_rq));
+
struct rq *this_rq = rq_of_rt_rq(this_rt_rq);
int this_cpu = this_rq->cpu, cpu;
bool resched = false;
@@ -2215,6 +2234,9 @@ static void group_push_rt_tasks_callback(struct rq *global_rq)
{
struct rt_rq *rt_rq = &global_rq->rq_to_push_from->rt;
+ BUG_ON(global_rq->rq_to_push_from == NULL);
+ BUG_ON(served_rq_of_rt_rq(rt_rq) == global_rq);
+
if ((rt_rq->rt_nr_running > 1) ||
(dl_group_of(rt_rq)->dl_throttled == 1)) {
@@ -2228,6 +2250,9 @@ static void group_pull_rt_task_callback(struct rq *global_rq)
{
struct rt_rq *rt_rq = &global_rq->rq_to_pull_to->rt;
+ BUG_ON(global_rq->rq_to_pull_to == NULL);
+ BUG_ON(served_rq_of_rt_rq(rt_rq) == global_rq);
+
group_pull_rt_task(rt_rq);
global_rq->rq_to_pull_to = NULL;
}
--
2.53.0
* Re: [RFC PATCH v5 13/29] sched/rt: Implement dl-server operations for rt-cgroups
2026-04-30 21:38 ` [RFC PATCH v5 13/29] sched/rt: Implement dl-server operations for rt-cgroups Yuri Andriaccio
@ 2026-05-05 13:04 ` Peter Zijlstra
0 siblings, 0 replies; 39+ messages in thread
From: Peter Zijlstra @ 2026-05-05 13:04 UTC (permalink / raw)
To: Yuri Andriaccio
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
linux-kernel, Luca Abeni, Yuri Andriaccio
On Thu, Apr 30, 2026 at 11:38:17PM +0200, Yuri Andriaccio wrote:
> +static struct sched_rt_entity *pick_next_rt_entity(struct rt_rq *rt_rq);
> +static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool first);
> +
> static struct task_struct *rt_server_pick(struct sched_dl_entity *dl_se, struct rq_flags *rf)
> {
> - return NULL;
> + struct rt_rq *rt_rq = &dl_se->my_q->rt;
> + struct rq *rq = rq_of_rt_rq(rt_rq);
> + struct task_struct *p;
> +
> + if (!sched_rt_runnable(dl_se->my_q))
> + return NULL;
> +
> + p = rt_task_of(pick_next_rt_entity(rt_rq));
> + set_next_task_rt(rq, p, true);
> +
> + return p;
> }
set_next_task_rt() should not be needed at this point. There is only a
single ->pick_next_task() implementation left, and that will soon go
away too. All ->pick_task() methods are idempotent.
[ https://lore.kernel.org/r/20260317104343.225156112@infradead.org ]
Notably, see core.c:__pick_next_task(), it will take care of
set_next_task() through put_prev_set_next_task() after calling
class->pick_task().
* Re: [RFC PATCH v5 14/29] sched/rt: Update task event callbacks for HCBS scheduling
2026-04-30 21:38 ` [RFC PATCH v5 14/29] sched/rt: Update task event callbacks for HCBS scheduling Yuri Andriaccio
@ 2026-05-05 13:16 ` Peter Zijlstra
0 siblings, 0 replies; 39+ messages in thread
From: Peter Zijlstra @ 2026-05-05 13:16 UTC (permalink / raw)
To: Yuri Andriaccio
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
linux-kernel, Luca Abeni, Yuri Andriaccio
On Thu, Apr 30, 2026 at 11:38:18PM +0200, Yuri Andriaccio wrote:
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index defb812b0e48..67fbf4bbe461 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -975,7 +975,58 @@ static int balance_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
> static void wakeup_preempt_rt(struct rq *rq, struct task_struct *p, int flags)
> {
> struct task_struct *donor = rq->donor;
> + struct sched_dl_entity *woken_dl_se = NULL;
> + struct sched_dl_entity *donor_dl_se = NULL;
> +
> + if (!rt_group_sched_enabled())
> + goto no_group_sched;
>
> + /*
> + * Preemption checks are different if the waking task and the current task
> + * are running on the global runqueue or in a cgroup. The following rules
> + * apply:
> + * - dl-tasks (and equally dl_servers) always preempt FIFO/RR tasks.
> + * - if curr is a FIFO/RR task inside a cgroup (i.e. run by a
> + * dl_server), or curr is a DEADLINE task and waking is a FIFO/RR task
> + * on the root cgroup, do nothing.
> + * - if waking is inside a cgroup but curr is a FIFO/RR task in the root
> + * cgroup, always reschedule.
> + * - if they are both on the global runqueue, run the standard code.
> + * - if they are both in the same cgroup, check for tasks priorities.
> + * - if they are both in a cgroup, but not the same one, check whether the
> + * woken task's dl_server preempts the current's dl_server.
> + * - if curr is a DEADLINE task and waking is in a cgroup, check whether
> + * the woken task's server preempts curr.
> + */
> + if (is_dl_group(rt_rq_of_se(&p->rt)))
> + woken_dl_se = dl_group_of(rt_rq_of_se(&p->rt));
> + if (is_dl_group(rt_rq_of_se(&donor->rt)))
> + donor_dl_se = dl_group_of(rt_rq_of_se(&donor->rt));
> + else if (task_has_dl_policy(donor))
> + donor_dl_se = &donor->dl;
> +
> + if (woken_dl_se != NULL && donor_dl_se != NULL) {
> + if (woken_dl_se == donor_dl_se) {
> + if (p->prio < donor->prio)
> + resched_curr(rq);
> +
> + return;
This is effectively the traditional test, why not goto no_group_sched at
this point and share that code rather than duplicate?
> + }
> +
> + if (dl_entity_preempt(woken_dl_se, donor_dl_se))
> + resched_curr(rq);
> +
> + return;
> +
> + } else if (woken_dl_se != NULL) {
> + resched_curr(rq);
> + return;
> +
> + } else if (donor_dl_se != NULL) {
> + return;
> + }
> +
> +no_group_sched:
> /*
> * XXX If we're preempted by DL, queue a push?
> */
* Re: [RFC PATCH v5 15/29] sched/rt: Update rt-cgroup schedulability checks
2026-04-30 21:38 ` [RFC PATCH v5 15/29] sched/rt: Update rt-cgroup schedulability checks Yuri Andriaccio
@ 2026-05-05 14:36 ` Peter Zijlstra
0 siblings, 0 replies; 39+ messages in thread
From: Peter Zijlstra @ 2026-05-05 14:36 UTC (permalink / raw)
To: Yuri Andriaccio
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
linux-kernel, Luca Abeni, Yuri Andriaccio
On Thu, Apr 30, 2026 at 11:38:19PM +0200, Yuri Andriaccio wrote:
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -343,7 +343,39 @@ void cancel_inactive_timer(struct sched_dl_entity *dl_se)
> cancel_dl_timer(dl_se, &dl_se->inactive_timer);
> }
>
> +/*
> + * Used for dl_bw check and update, used under sched_rt_handler()::mutex and
> + * sched_domains_mutex.
Please try very hard to express locking constraints in code, rather than
comments. Compilers are very bad at verifying comments ;-)
> + */
> +u64 dl_cookie;
> +
> #ifdef CONFIG_RT_GROUP_SCHED
> +int dl_check_tg(unsigned long total)
> +{
> + int which_cpu;
> + int cap;
> + struct dl_bw *dl_b;
> + u64 gen = ++dl_cookie;
This probably wants to be something like:
lockdep_assert_held(&sched_domains_mutex);
or something like that?
And if it really is sched_domains_mutex _AND_ sched_rt_handler()::mutex,
it might make sense to pull that mutex out from that function to give it
global visibility so we can test for it here.
For bonus points, you'll use __guarded_by() from the context analysis
bits; you'll need to add:
	CONTEXT_ANALYSIS_deadline.o := y
to kernel/sched/Makefile and build the tree with clang-22 or later
(although we'll be raising this to -23 soonish).
> +
> + for_each_possible_cpu(which_cpu) {
> + guard(rcu_sched)();
> +
> + if (!dl_bw_visited(which_cpu, gen)) {
> + cap = dl_bw_capacity(which_cpu);
> + dl_b = dl_bw_of(which_cpu);
> +
> + guard(raw_spinlock_irqsave)(&dl_b->lock);
> +
> + if (dl_b->bw != -1 &&
> + cap_scale(dl_b->bw, cap) < dl_b->total_bw + cap_scale(total, cap))
> + return 0;
> + }
> +
> + }
> +
> + return 1;
> +}
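A minimal sketch of what expressing those constraints in code could look like.
It assumes sched_domains_mutex is the lock meant above; sched_rt_constraints_mutex
is a purely hypothetical name for sched_rt_handler()::mutex once pulled out for
global visibility, and the __guarded_by() line presumes the context-analysis
annotations mentioned above:

	/* With context analysis enabled, the cookie itself can carry the rule: */
	u64 dl_cookie __guarded_by(&sched_domains_mutex);

	int dl_check_tg(unsigned long total)
	{
		u64 gen;

		lockdep_assert_held(&sched_domains_mutex);
		lockdep_assert_held(&sched_rt_constraints_mutex);	/* hypothetical */

		gen = ++dl_cookie;
		/* ... remainder as in the quoted hunk ... */
		return 1;
	}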
* Re: [RFC PATCH v5 18/29] sched/core: Cgroup v2 support
2026-04-30 21:38 ` [RFC PATCH v5 18/29] sched/core: Cgroup v2 support Yuri Andriaccio
@ 2026-05-05 14:59 ` Peter Zijlstra
0 siblings, 0 replies; 39+ messages in thread
From: Peter Zijlstra @ 2026-05-05 14:59 UTC (permalink / raw)
To: Yuri Andriaccio
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
linux-kernel, Luca Abeni, Yuri Andriaccio
On Thu, Apr 30, 2026 at 11:38:22PM +0200, Yuri Andriaccio wrote:
> From: luca abeni <luca.abeni@santannapisa.it>
>
> Make the rt_runtime_us and rt_period_us virtual files also accessible to the
> cgroup v2 controller, effectively enabling the RT_GROUP_SCHED mechanism for
> cgroups v2.
Can we have a blurb about why only strict periodic servers; e.g. why no
sporadic? and such...
> Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
> Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
> ---
> kernel/sched/core.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 0c7032d254ba..3ffe3ac5071d 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -10245,6 +10245,18 @@ static struct cftype cpu_files[] = {
> .write = cpu_uclamp_max_write,
> },
> #endif /* CONFIG_UCLAMP_TASK_GROUP */
> +#ifdef CONFIG_RT_GROUP_SCHED
> + {
> + .name = "rt_runtime_us",
> + .read_s64 = cpu_rt_runtime_read,
> + .write_s64 = cpu_rt_runtime_write,
> + },
> + {
> + .name = "rt_period_us",
> + .read_u64 = cpu_rt_period_read_uint,
> + .write_u64 = cpu_rt_period_write_uint,
> + },
> +#endif /* CONFIG_RT_GROUP_SCHED */
> { } /* terminate */
> };
>
> --
> 2.53.0
>
* Re: [RFC PATCH v5 19/29] sched/rt: Remove support for cgroups-v1
2026-04-30 21:38 ` [RFC PATCH v5 19/29] sched/rt: Remove support for cgroups-v1 Yuri Andriaccio
@ 2026-05-05 15:01 ` Peter Zijlstra
0 siblings, 0 replies; 39+ messages in thread
From: Peter Zijlstra @ 2026-05-05 15:01 UTC (permalink / raw)
To: Yuri Andriaccio
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
linux-kernel, Luca Abeni, Yuri Andriaccio
On Thu, Apr 30, 2026 at 11:38:23PM +0200, Yuri Andriaccio wrote:
> Disable control files for cgroups-v1, and allow only cgroups-v2.
> This should simplify code maintenance, since cgroups-v1 is deprecated.
So while I love seeing all this code go away, I very much doubt we can
pull this off. People might actually be using this.
I think at best we can hide the whole cgroup-v1 thing behind a CONFIG
and eventually remove it once no distro is left using it, or something
like that :/
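A sketch of the CONFIG route described above: keep the v1 control files behind
an opt-in symbol instead of deleting them. CONFIG_RT_GROUP_SCHED_CGROUP_V1 is a
hypothetical name, and the entries simply stay in the existing
cpu_legacy_files[] array:

#ifdef CONFIG_RT_GROUP_SCHED_CGROUP_V1		/* hypothetical opt-in */
	{
		.name = "rt_runtime_us",
		.read_s64 = cpu_rt_runtime_read,
		.write_s64 = cpu_rt_runtime_write,
	},
	{
		.name = "rt_period_us",
		.read_u64 = cpu_rt_period_read_uint,
		.write_u64 = cpu_rt_period_write_uint,
	},
#endif /* CONFIG_RT_GROUP_SCHED_CGROUP_V1 */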
* Re: [RFC PATCH v5 20/29] sched/deadline: Allow deeper hierarchies of RT cgroups
2026-04-30 21:38 ` [RFC PATCH v5 20/29] sched/deadline: Allow deeper hierarchies of RT cgroups Yuri Andriaccio
@ 2026-05-05 15:15 ` Peter Zijlstra
2026-05-05 19:56 ` Tejun Heo
0 siblings, 1 reply; 39+ messages in thread
From: Peter Zijlstra @ 2026-05-05 15:15 UTC (permalink / raw)
To: Yuri Andriaccio
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
linux-kernel, Luca Abeni, Yuri Andriaccio, tj, hannes, mkoutny,
cgroups
On Thu, Apr 30, 2026 at 11:38:24PM +0200, Yuri Andriaccio wrote:
> From: luca abeni <luca.abeni@santannapisa.it>
>
> Allow for cgroup hierarchies with more than two levels.
>
> Introduce the concept of live and active groups:
> - A group is live if it is a leaf group or if all its children have zero
> runtime.
> - A live group with non-zero runtime can be used to schedule tasks.
> - An active cgroup is a live group with running tasks.
> - A non-live group cannot be used to run tasks, but it is only used for
> bandwidth accounting, i.e. the sum of its children's bandwidth must be
> less than or equal to the bandwidth of the parent. This change allows
> cgroups to be used for bandwidth management across different users.
> - While the root cgroup specifies the total allocatable bandwidth of rt
> cgroups, a further accounting is performed to keep track of the live
> bandwidth, i.e. the sum of the bandwidth of live groups. The hierarchy
> invariant states that the live bandwidth must always be less than or
> equal to the total allocatable bw.
>
> Add is_live_sched_group() and sched_group_has_live_siblings() in
> deadline.c. These utility functions are used by dl_init_tg to perform
> updates only when necessary:
> - Only live groups may update the active dl bandwidth of dl entities
> (call to dl_rq_change_utilization), while non-live groups must not use
> servers, and thus must not change the active dl bandwidth.
> - The total bandwidth accounting must be changed to follow the
> live/non-live rules:
> - When disabling (runtime zero) the last child of a group, the parent
> becomes a live group, and so the parent's bw must be accounted back.
> - When enabling (runtime non-zero) the first child, the parent becomes a
> non-live group, and so the parent's bandwidth must be removed.
>
> Update tg_set_rt_bandwidth() to change the runtime of a group to a
> non-zero value only if its parent is inactive, thus forcing it to become
> non-live if it was previously live (it would've already been non-live if a
> sibling cgroup was live). An exception is made for groups which have the
> root cgroup as parent.
>
> Update sched_rt_can_attach() to allow attaching only on live groups.
>
> Update dl_init_tg() to take a task_group pointer and a cpu's id rather
> than passing directly the pointer to the cpu's deadline server. The
> task_group pointer is necessary to check and update the live bandwidth
> accounting.
>
> Co-developed-by: Yuri Andriaccio <yurand2000@gmail.com>
> Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
> Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
This probably wants to have the cgroup folks on Cc (added now) to make
sure the semantics are in line with cgroup-v2 expectations.
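As a side note, the "live" rule from the quoted changelog is simple enough to
sketch; the iteration below uses the real task_group children/siblings lists,
but tg_child_runtime() is a hypothetical accessor standing in for whatever
field the series actually uses:

	/* A group is live iff it is a leaf or all of its children have zero runtime. */
	static bool is_live_sched_group(struct task_group *tg)
	{
		struct task_group *child;

		/* caller is expected to hold the RCU read lock */
		list_for_each_entry_rcu(child, &tg->children, siblings) {
			if (tg_child_runtime(child) > 0)	/* hypothetical accessor */
				return false;
		}

		return true;
	}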
* Re: [RFC PATCH v5 22/29] sched/rt: Add rt-cgroup migration functions
2026-04-30 21:38 ` [RFC PATCH v5 22/29] sched/rt: Add rt-cgroup migration functions Yuri Andriaccio
@ 2026-05-05 15:20 ` Peter Zijlstra
2026-05-05 15:24 ` Peter Zijlstra
1 sibling, 0 replies; 39+ messages in thread
From: Peter Zijlstra @ 2026-05-05 15:20 UTC (permalink / raw)
To: Yuri Andriaccio
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
linux-kernel, Luca Abeni, Yuri Andriaccio
On Thu, Apr 30, 2026 at 11:38:26PM +0200, Yuri Andriaccio wrote:
> +static int group_find_lowest_rt_rq(struct task_struct *task, struct rt_rq *task_rt_rq)
> +{
> + struct sched_domain *sd;
> + struct cpumask lowest_mask;
> + struct sched_dl_entity *dl_se;
> + struct rt_rq *rt_rq;
> + int prio, lowest_prio;
> + int cpu, this_cpu = smp_processor_id();
> +
> + if (task->nr_cpus_allowed == 1)
> + return -1; /* No other targets possible */
> +
> + lowest_prio = task->prio - 1;
> + cpumask_clear(&lowest_mask);
> + for_each_cpu_and(cpu, cpu_online_mask, task->cpus_ptr) {
> + dl_se = task_rt_rq->tg->dl_se[cpu];
> + rt_rq = &dl_se->my_q->rt;
> + prio = rt_rq->highest_prio.curr;
> +
> + /*
> + * If we're on asym system ensure we consider the different capacities
> + * of the CPUs when searching for the lowest_mask.
> + */
> + if (dl_se->dl_throttled || !rt_task_fits_capacity(task, cpu))
> + continue;
> +
> + if (prio >= lowest_prio) {
> + if (prio > lowest_prio) {
> + cpumask_clear(&lowest_mask);
> + lowest_prio = prio;
> + }
> +
> + cpumask_set_cpu(cpu, &lowest_mask);
> + }
> + }
> +
> + if (cpumask_empty(&lowest_mask))
> + return -1;
> +
> + /*
> + * At this point we have built a mask of CPUs representing the
> + * lowest priority tasks in the system. Now we want to elect
> + * the best one based on our affinity and topology.
> + *
> + * We prioritize the last CPU that the task executed on since
> + * it is most likely cache-hot in that location.
> + */
> + cpu = task_cpu(task);
> + if (cpumask_test_cpu(cpu, &lowest_mask))
> + return cpu;
> +
> + /*
> + * Otherwise, we consult the sched_domains span maps to figure
> + * out which CPU is logically closest to our hot cache data.
> + */
> + if (!cpumask_test_cpu(this_cpu, &lowest_mask))
> + this_cpu = -1; /* Skip this_cpu opt if not among lowest */
> +
> + scoped_guard(rcu) {
> + for_each_domain(cpu, sd) {
> + if (sd->flags & SD_WAKE_AFFINE) {
> + int best_cpu;
> +
> + /*
> + * "this_cpu" is cheaper to preempt than a
> + * remote processor.
> + */
> + if (this_cpu != -1 &&
> + cpumask_test_cpu(this_cpu, sched_domain_span(sd)))
> + return this_cpu;
> +
> + best_cpu = cpumask_any_and_distribute(&lowest_mask,
> + sched_domain_span(sd));
> + if (best_cpu < nr_cpu_ids)
> + return best_cpu;
> + }
> + }
> + }
I appreciate you trying to save on indent, but this does violate
coding-style, please indent as normal.
> +
> + /*
> + * And finally, if there were no matches within the domains
> + * just give the caller *something* to work with from the compatible
> + * locations.
> + */
> + if (this_cpu != -1)
> + return this_cpu;
> +
> + cpu = cpumask_any_distribute(&lowest_mask);
> + if (cpu < nr_cpu_ids)
> + return cpu;
> +
> + return -1;
> +}
* Re: [RFC PATCH v5 22/29] sched/rt: Add rt-cgroup migration functions
2026-04-30 21:38 ` [RFC PATCH v5 22/29] sched/rt: Add rt-cgroup migration functions Yuri Andriaccio
2026-05-05 15:20 ` Peter Zijlstra
@ 2026-05-05 15:24 ` Peter Zijlstra
1 sibling, 0 replies; 39+ messages in thread
From: Peter Zijlstra @ 2026-05-05 15:24 UTC (permalink / raw)
To: Yuri Andriaccio
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
linux-kernel, Luca Abeni, Yuri Andriaccio
On Thu, Apr 30, 2026 at 11:38:26PM +0200, Yuri Andriaccio wrote:
> From: luca abeni <luca.abeni@santannapisa.it>
>
> Add migration related functions:
>
> - group_find_lowest_rt_rq
> - group_find_lock_lowest_rt_rq
> Find (and lock) the lowest-priority non-root runqueue to which to migrate
> a given task.
>
> - group_pull_rt_task
> Try to pull a task onto the given non-root runqueue.
>
> - group_push_rt_task
> - group_push_rt_tasks
> Try to push tasks from the given non-root runqueue.
>
> - group_pull_rt_task_callback
> - group_push_rt_tasks_callback
> - rt_queue_push_from_group
> - rt_queue_pull_to_group
> Deferred execution of push and pull functions at balancing points.
>
> Update struct rq to include fields for deferred balancing of cgroup runqueues.
>
> ---
>
> The functions are only implemented here, to be hooked up later in the patchset.
These functions duplicate a ton of existing logic; is there really no
way to share it?
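One way it could perhaps be shared (a sketch with a hypothetical helper name,
not what the series does): the per-group variant differs from find_lowest_rq()
mainly in how lowest_mask is built, so the "pick the best CPU out of
lowest_mask" tail could be factored out and called from both:

	/* Hypothetical shared tail: choose the best CPU from a pre-built mask. */
	static int rt_lowest_mask_best_cpu(struct task_struct *task, struct cpumask *lowest_mask)
	{
		struct sched_domain *sd;
		int this_cpu = smp_processor_id();
		int cpu;

		if (cpumask_empty(lowest_mask))
			return -1;

		/* Prefer the task's previous CPU: most likely cache-hot. */
		cpu = task_cpu(task);
		if (cpumask_test_cpu(cpu, lowest_mask))
			return cpu;

		/* Skip the this_cpu optimisation if it is not among the lowest. */
		if (!cpumask_test_cpu(this_cpu, lowest_mask))
			this_cpu = -1;

		scoped_guard(rcu) {
			for_each_domain(cpu, sd) {
				if (sd->flags & SD_WAKE_AFFINE) {
					int best_cpu;

					/* "this_cpu" is cheaper to preempt than a remote CPU. */
					if (this_cpu != -1 &&
					    cpumask_test_cpu(this_cpu, sched_domain_span(sd)))
						return this_cpu;

					best_cpu = cpumask_any_and_distribute(lowest_mask,
									      sched_domain_span(sd));
					if (best_cpu < nr_cpu_ids)
						return best_cpu;
				}
			}
		}

		if (this_cpu != -1)
			return this_cpu;

		cpu = cpumask_any_distribute(lowest_mask);
		return cpu < nr_cpu_ids ? cpu : -1;
	}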
* Re: [RFC PATCH v5 20/29] sched/deadline: Allow deeper hierarchies of RT cgroups
2026-05-05 15:15 ` Peter Zijlstra
@ 2026-05-05 19:56 ` Tejun Heo
0 siblings, 0 replies; 39+ messages in thread
From: Tejun Heo @ 2026-05-05 19:56 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Yuri Andriaccio, Ingo Molnar, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, linux-kernel, Luca Abeni, Yuri Andriaccio,
hannes, mkoutny, cgroups
Hello,
Some high level comments:
- Please align it with existing cgroup2 interface files. See cpu.max. This
can be e.g. cpu.rt.max with about the same semantics.
- cgroup2 enforces that internal cgroups w/ controllers enabled cannot have
threads in them. No need to enforce that separately.
- However, the cpu controller is a threaded controller, which means that it
can have a threaded sub-hierarchy where the no-internal-process rule doesn't
apply. This was created explicitly for the cpu controller. The proposed change
blocks it, effectively forcing the cpu controller into regular domain
controller behavior subject to the no-internal-process rule. Note these are
enforced at controller granularity, which means that users who use the
threaded mode will be forced to pick between the two.
- This has the same problem as cgroup1's rt cgroup sched support, where
there is no way to have a permissive default configuration. That means
users who don't really care about distributing rt shares hierarchically
would get blocked from running rt processes by default, which basically
forces distros to disable rt cgroup sched support. This is not new, but it'd
be a shame to put in all this work only for most people to end up without
access to the feature.
Here's my suggestion if there is desire for this to become something most
people have easy access to:
- Don't make it impossible to use in conjunction with other resource control
mechanisms, especially not the CPU controller itself. Don't force people to
choose between threaded mode and rt control. Allow them to co-exist in a
reasonable manner.
- The same applies in the wider scope. Don't let it get in the way of people
who don't care about it. Compromising on interface / failure mode is better
than people not being able to use it in most cases.
Thanks.
--
tejun
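If the interface were aligned with cpu.max as suggested, the file could take
the familiar "$MAX $PERIOD" pair in microseconds (e.g. "500000 1000000" for
0.5s of rt runtime every 1s period). A sketch of the cftype entry, mirroring
the existing "max" entry; cpu_rt_max_show() and cpu_rt_max_write() are
hypothetical handler names:

	{
		.name = "rt.max",
		.flags = CFTYPE_NOT_ON_ROOT,
		.seq_show = cpu_rt_max_show,	/* hypothetical */
		.write = cpu_rt_max_write,	/* hypothetical */
	},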
end of thread
Thread overview: 39+ messages
2026-04-30 21:38 [RFC PATCH v5 00/29] Hierarchical Constant Bandwidth Server Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 01/29] sched/deadline: Fix replenishment logic for non-deferred servers Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 02/29] sched/deadline: Do not access dl_se->rq directly Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 03/29] sched/deadline: Distinguish between dl_rq and my_q Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 04/29] sched/rt: Pass an rt_rq instead of an rq where needed Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 05/29] sched/rt: Move functions from rt.c to sched.h Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 06/29] sched/rt: Disable RT_GROUP_SCHED Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 07/29] sched/rt: Remove unnecessary runqueue pointer in struct rt_rq Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 08/29] sched/rt: Introduce HCBS specific structs in task_group Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 09/29] sched/core: Initialize HCBS specific structures Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 10/29] sched/deadline: Add dl_init_tg Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 11/29] sched/rt: Add {alloc/unregister/free}_rt_sched_group Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 12/29] sched/deadline: Account rt-cgroups bandwidth in deadline tasks schedulability tests Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 13/29] sched/rt: Implement dl-server operations for rt-cgroups Yuri Andriaccio
2026-05-05 13:04 ` Peter Zijlstra
2026-04-30 21:38 ` [RFC PATCH v5 14/29] sched/rt: Update task event callbacks for HCBS scheduling Yuri Andriaccio
2026-05-05 13:16 ` Peter Zijlstra
2026-04-30 21:38 ` [RFC PATCH v5 15/29] sched/rt: Update rt-cgroup schedulability checks Yuri Andriaccio
2026-05-05 14:36 ` Peter Zijlstra
2026-04-30 21:38 ` [RFC PATCH v5 16/29] sched/rt: Allow zeroing the runtime of the root control group Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 17/29] sched/rt: Remove old RT_GROUP_SCHED data structures Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 18/29] sched/core: Cgroup v2 support Yuri Andriaccio
2026-05-05 14:59 ` Peter Zijlstra
2026-04-30 21:38 ` [RFC PATCH v5 19/29] sched/rt: Remove support for cgroups-v1 Yuri Andriaccio
2026-05-05 15:01 ` Peter Zijlstra
2026-04-30 21:38 ` [RFC PATCH v5 20/29] sched/deadline: Allow deeper hierarchies of RT cgroups Yuri Andriaccio
2026-05-05 15:15 ` Peter Zijlstra
2026-05-05 19:56 ` Tejun Heo
2026-04-30 21:38 ` [RFC PATCH v5 21/29] sched/rt: Update default bandwidth for real-time tasks to ONE Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 22/29] sched/rt: Add rt-cgroup migration functions Yuri Andriaccio
2026-05-05 15:20 ` Peter Zijlstra
2026-05-05 15:24 ` Peter Zijlstra
2026-04-30 21:38 ` [RFC PATCH v5 23/29] sched/rt: Hook HCBS " Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 24/29] sched/core: Execute enqueued balance callbacks when changing allowed CPUs Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 25/29] sched/rt: Try pull task on empty server pick Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 26/29] sched/core: Execute enqueued balance callbacks after migrate_disable_switch Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 27/29] Documentation: Update documentation for real-time cgroups Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 28/29] sched/rt: Add debug BUG_ONs for pre-migration code Yuri Andriaccio
2026-04-30 21:38 ` [RFC PATCH v5 29/29] sched/rt: Add debug BUG_ONs in migration code Yuri Andriaccio