From: Yuri Andriaccio <yurand2000@gmail.com>
To: Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Juri Lelli <juri.lelli@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
Valentin Schneider <vschneid@redhat.com>
Cc: linux-kernel@vger.kernel.org,
Luca Abeni <luca.abeni@santannapisa.it>,
Yuri Andriaccio <yuri.andriaccio@santannapisa.it>
Subject: [RFC PATCH v5 22/29] sched/rt: Add rt-cgroup migration functions
Date: Thu, 30 Apr 2026 23:38:26 +0200
Message-ID: <20260430213835.62217-23-yurand2000@gmail.com>
In-Reply-To: <20260430213835.62217-1-yurand2000@gmail.com>
From: luca abeni <luca.abeni@santannapisa.it>
Add migration related functions:

- group_find_lowest_rt_rq
- group_find_lock_lowest_rt_rq
  Find (and lock) the lowest-priority non-root runqueue to which a given
  task can be migrated.

- group_pull_rt_task
  Try to pull a task onto the given non-root runqueue.

- group_push_rt_task
- group_push_rt_tasks
  Try to push tasks away from the given non-root runqueue.

- group_pull_rt_task_callback
- group_push_rt_tasks_callback
- rt_queue_push_from_group
- rt_queue_pull_to_group
  Defer execution of the push and pull functions to the balancing points.

Update struct rq with fields for the deferred balancing of cgroup runqueues.
Co-developed-by: Alessio Balsini <a.balsini@sssup.it>
Signed-off-by: Alessio Balsini <a.balsini@sssup.it>
Co-developed-by: Andrea Parri <parri.andrea@gmail.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Co-developed-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: Yuri Andriaccio <yurand2000@gmail.com>
Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
---
These functions are only implemented here; they are hooked up later in
the patchset.
---
kernel/sched/rt.c | 461 +++++++++++++++++++++++++++++++++++++++++++
kernel/sched/sched.h | 10 +
2 files changed, 471 insertions(+)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index db88792787a8..e1731e01757b 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1,3 +1,4 @@
+#pragma GCC diagnostic ignored "-Wunused-function"
// SPDX-License-Identifier: GPL-2.0
/*
* Real-Time Scheduling Class (mapped to the SCHED_FIFO and SCHED_RR
@@ -84,6 +85,8 @@ void init_rt_rq(struct rt_rq *rt_rq)
plist_head_init(&rt_rq->pushable_tasks);
}
+static void group_pull_rt_task(struct rt_rq *this_rt_rq);
+
#ifdef CONFIG_RT_GROUP_SCHED
void unregister_rt_sched_group(struct task_group *tg)
@@ -345,6 +348,46 @@ static inline void rt_queue_pull_task(struct rt_rq *rt_rq)
queue_balance_callback(rq, &per_cpu(rt_pull_head, rq->cpu), pull_rt_task);
}
+#ifdef CONFIG_RT_GROUP_SCHED
+static DEFINE_PER_CPU(struct balance_callback, rt_group_push_head);
+static DEFINE_PER_CPU(struct balance_callback, rt_group_pull_head);
+static void group_push_rt_tasks_callback(struct rq *);
+static void group_pull_rt_task_callback(struct rq *);
+
+static void rt_queue_push_from_group(struct rt_rq *rt_rq)
+{
+ struct rq *rq = served_rq_of_rt_rq(rt_rq);
+ struct rq *global_rq = cpu_rq(rq->cpu);
+
+ if (global_rq->rq_to_push_from)
+ return;
+
+ if (!has_pushable_tasks(rt_rq))
+ return;
+
+ global_rq->rq_to_push_from = rq;
+ queue_balance_callback(global_rq, &per_cpu(rt_group_push_head, global_rq->cpu),
+ group_push_rt_tasks_callback);
+}
+
+static void rt_queue_pull_to_group(struct rt_rq *rt_rq)
+{
+ struct rq *rq = served_rq_of_rt_rq(rt_rq);
+ struct rq *global_rq = cpu_rq(rq->cpu);
+ struct sched_dl_entity *dl_se = dl_group_of(rt_rq);
+
+ if (dl_se->dl_throttled || global_rq->rq_to_pull_to)
+ return;
+
+ global_rq->rq_to_pull_to = rq;
+ queue_balance_callback(global_rq, &per_cpu(rt_group_pull_head, global_rq->cpu),
+ group_pull_rt_task_callback);
+}
+#else /* !CONFIG_RT_GROUP_SCHED */
+static inline void rt_queue_push_from_group(struct rt_rq *rt_rq) { }
+static inline void rt_queue_pull_to_group(struct rt_rq *rt_rq) { }
+#endif /* CONFIG_RT_GROUP_SCHED */
+
static void enqueue_pushable_task(struct rt_rq *rt_rq, struct task_struct *p)
{
plist_del(&p->pushable_tasks, &rt_rq->pushable_tasks);
@@ -1747,6 +1790,424 @@ static void pull_rt_task(struct rq *this_rq)
resched_curr(this_rq);
}
+#ifdef CONFIG_RT_GROUP_SCHED
+/*
+ * Find the lowest priority runqueue among the runqueues of the same
+ * task group. Unlike find_lowest_rt(), this does not mean that the
+ * lowest priority cpu is running tasks from this runqueue.
+ */
+static int group_find_lowest_rt_rq(struct task_struct *task, struct rt_rq *task_rt_rq)
+{
+ struct sched_domain *sd;
+ struct cpumask lowest_mask;
+ struct sched_dl_entity *dl_se;
+ struct rt_rq *rt_rq;
+ int prio, lowest_prio;
+ int cpu, this_cpu = smp_processor_id();
+
+ if (task->nr_cpus_allowed == 1)
+ return -1; /* No other targets possible */
+
+ lowest_prio = task->prio - 1;
+ cpumask_clear(&lowest_mask);
+ for_each_cpu_and(cpu, cpu_online_mask, task->cpus_ptr) {
+ dl_se = task_rt_rq->tg->dl_se[cpu];
+ rt_rq = &dl_se->my_q->rt;
+ prio = rt_rq->highest_prio.curr;
+
+ /*
+ * Skip throttled runqueues. On asymmetric CPU capacity systems,
+ * also ensure that the CPU's capacity fits the task before adding
+ * it to the lowest_mask.
+ */
+ if (dl_se->dl_throttled || !rt_task_fits_capacity(task, cpu))
+ continue;
+
+ if (prio >= lowest_prio) {
+ if (prio > lowest_prio) {
+ cpumask_clear(&lowest_mask);
+ lowest_prio = prio;
+ }
+
+ cpumask_set_cpu(cpu, &lowest_mask);
+ }
+ }
+
+ if (cpumask_empty(&lowest_mask))
+ return -1;
+
+ /*
+ * At this point we have built a mask of CPUs representing the
+ * lowest priority runqueues of the task group. Now we want to elect
+ * the best one based on our affinity and topology.
+ *
+ * We prioritize the last CPU that the task executed on since
+ * it is most likely cache-hot in that location.
+ */
+ cpu = task_cpu(task);
+ if (cpumask_test_cpu(cpu, &lowest_mask))
+ return cpu;
+
+ /*
+ * Otherwise, we consult the sched_domains span maps to figure
+ * out which CPU is logically closest to our hot cache data.
+ */
+ if (!cpumask_test_cpu(this_cpu, &lowest_mask))
+ this_cpu = -1; /* Skip this_cpu opt if not among lowest */
+
+ scoped_guard(rcu) {
+ for_each_domain(cpu, sd) {
+ if (sd->flags & SD_WAKE_AFFINE) {
+ int best_cpu;
+
+ /*
+ * "this_cpu" is cheaper to preempt than a
+ * remote processor.
+ */
+ if (this_cpu != -1 &&
+ cpumask_test_cpu(this_cpu, sched_domain_span(sd)))
+ return this_cpu;
+
+ best_cpu = cpumask_any_and_distribute(&lowest_mask,
+ sched_domain_span(sd));
+ if (best_cpu < nr_cpu_ids)
+ return best_cpu;
+ }
+ }
+ }
+
+ /*
+ * And finally, if there were no matches within the domains
+ * just give the caller *something* to work with from the compatible
+ * locations.
+ */
+ if (this_cpu != -1)
+ return this_cpu;
+
+ cpu = cpumask_any_distribute(&lowest_mask);
+ if (cpu < nr_cpu_ids)
+ return cpu;
+
+ return -1;
+}
+
+/*
+ * Find and lock the lowest priority runqueue among the runqueues
+ * of the same task group. Unlike find_lock_lowest_rq(), this does not
+ * mean that the lowest priority cpu is running tasks from this runqueue.
+ */
+static struct rt_rq *group_find_lock_lowest_rt_rq(struct task_struct *task, struct rt_rq *rt_rq)
+{
+ struct rq *rq = rq_of_rt_rq(rt_rq);
+ struct rq *lowest_rq;
+ struct rt_rq *lowest_rt_rq;
+ struct sched_dl_entity *lowest_dl_se;
+ int tries, cpu;
+
+ for (tries = 0; tries < RT_MAX_TRIES; tries++) {
+ cpu = group_find_lowest_rt_rq(task, rt_rq);
+
+ if ((cpu == -1) || (cpu == rq->cpu))
+ return NULL;
+
+ lowest_dl_se = rt_rq->tg->dl_se[cpu];
+ lowest_rt_rq = &lowest_dl_se->my_q->rt;
+ lowest_rq = cpu_rq(cpu);
+
+ if (lowest_rt_rq->highest_prio.curr <= task->prio) {
+ /*
+ * Target rq has tasks of equal or higher priority,
+ * retrying does not release any lock and is unlikely
+ * to yield a different result.
+ */
+ return NULL;
+ }
+
+ /* if the prio of this runqueue changed, try again */
+ if (double_lock_balance(rq, lowest_rq)) {
+ /*
+ * We had to unlock the run queue. In
+ * the meantime, task could have
+ * migrated already or had its affinity changed.
+ * Also make sure that it wasn't scheduled on its rq.
+ * It is possible the task was scheduled, set
+ * "migrate_disabled" and then got preempted, so we must
+ * check the task migration disable flag here too.
+ */
+ if (unlikely(is_migration_disabled(task) ||
+ lowest_dl_se->dl_throttled ||
+ !cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) ||
+ task != pick_next_pushable_task(rt_rq))) {
+
+ double_unlock_balance(rq, lowest_rq);
+ return NULL;
+ }
+ }
+
+ /* If this rq is still suitable use it. */
+ if (lowest_rt_rq->highest_prio.curr > task->prio)
+ return lowest_rt_rq;
+
+ /* try again */
+ double_unlock_balance(rq, lowest_rq);
+ }
+
+ return NULL;
+}
+
+static int group_push_rt_task(struct rt_rq *rt_rq, bool pull)
+{
+ struct rq *rq = rq_of_rt_rq(rt_rq);
+ struct task_struct *next_task;
+ struct rq *lowest_rq;
+ struct rt_rq *lowest_rt_rq;
+ int ret = 0;
+
+ if (!rt_rq->overloaded)
+ return 0;
+
+ next_task = pick_next_pushable_task(rt_rq);
+ if (!next_task)
+ return 0;
+
+retry:
+ if (is_migration_disabled(next_task)) {
+ struct task_struct *push_task = NULL;
+ int cpu;
+
+ if (!pull || rq->push_busy)
+ return 0;
+
+ /*
+ * If the current task does not belong to the same task group,
+ * we cannot push it away.
+ */
+ if (rq->donor->sched_task_group != rt_rq->tg)
+ return 0;
+
+ /*
+ * Invoking group_find_lowest_rt_rq() on anything but an RT task doesn't
+ * make sense. Per the above priority check, curr has to
+ * be of higher priority than next_task, so no need to
+ * reschedule when bailing out.
+ *
+ * Note that the stoppers are masqueraded as SCHED_FIFO
+ * (cf. sched_set_stop_task()), so we can't rely on rt_task().
+ */
+ if (rq->donor->sched_class != &rt_sched_class)
+ return 0;
+
+ cpu = group_find_lowest_rt_rq(rq->curr, rt_rq);
+ if (cpu == -1 || cpu == rq->cpu)
+ return 0;
+
+ /*
+ * We found a CPU with lower priority than @next_task, so
+ * @next_task should be able to run there. However, @next_task
+ * is migration disabled, so instead attempt to push away the
+ * task currently running on this CPU.
+ */
+ push_task = get_push_task(rq);
+ if (push_task) {
+ preempt_disable();
+ raw_spin_rq_unlock(rq);
+ stop_one_cpu_nowait(rq->cpu, push_cpu_stop,
+ push_task, &rq->push_work);
+ preempt_enable();
+ raw_spin_rq_lock(rq);
+ }
+
+ return 0;
+ }
+
+ if (WARN_ON(next_task == rq->curr))
+ return 0;
+
+ /* We might release rq lock */
+ get_task_struct(next_task);
+
+ /* group_find_lock_lowest_rq locks the rq if found */
+ lowest_rt_rq = group_find_lock_lowest_rt_rq(next_task, rt_rq);
+ if (!lowest_rt_rq) {
+ struct task_struct *task;
+ /*
+ * group_find_lock_lowest_rt_rq releases rq->lock
+ * so it is possible that next_task has migrated.
+ *
+ * We need to make sure that the task is still on the same
+ * run-queue and is also still the next task eligible for
+ * pushing.
+ */
+ task = pick_next_pushable_task(rt_rq);
+ if (task == next_task) {
+ /*
+ * The task hasn't migrated, and is still the next
+ * eligible task, but we failed to find a run-queue
+ * to push it to. Do not retry in this case, since
+ * other CPUs will pull from us when ready.
+ */
+ goto out;
+ }
+
+ if (!task)
+ /* No more tasks, just exit */
+ goto out;
+
+ /*
+ * Something has shifted, try again.
+ */
+ put_task_struct(next_task);
+ next_task = task;
+ goto retry;
+ }
+
+ lowest_rq = rq_of_rt_rq(lowest_rt_rq);
+
+ move_queued_task_locked(rq, lowest_rq, next_task);
+ resched_curr(lowest_rq);
+ ret = 1;
+
+ double_unlock_balance(rq, lowest_rq);
+out:
+ put_task_struct(next_task);
+
+ return ret;
+}
+
+static void group_pull_rt_task(struct rt_rq *this_rt_rq)
+{
+ struct rq *this_rq = rq_of_rt_rq(this_rt_rq);
+ int this_cpu = this_rq->cpu, cpu;
+ bool resched = false;
+ struct task_struct *p, *push_task = NULL;
+ struct rt_rq *src_rt_rq;
+ struct rq *src_rq;
+ struct sched_dl_entity *src_dl_se;
+
+ for_each_online_cpu(cpu) {
+ if (this_cpu == cpu)
+ continue;
+
+ src_dl_se = this_rt_rq->tg->dl_se[cpu];
+ src_rt_rq = &src_dl_se->my_q->rt;
+
+ if (src_rt_rq->rt_nr_running <= 1 && !src_dl_se->dl_throttled)
+ continue;
+
+ src_rq = rq_of_rt_rq(src_rt_rq);
+
+ /*
+ * Don't bother taking the src_rq->lock if the next highest
+ * task is known to be lower-priority than our current task.
+ * This may look racy, but if this value is about to go
+ * logically higher, the src_rq will push this task away.
+ * And if it's going logically lower, we do not care.
+ */
+ if (src_rt_rq->highest_prio.next >=
+ this_rt_rq->highest_prio.curr)
+ continue;
+
+ /*
+ * We can potentially drop this_rq's lock in
+ * double_lock_balance, and another CPU could
+ * alter this_rq
+ */
+ push_task = NULL;
+ double_lock_balance(this_rq, src_rq);
+
+ /*
+ * We can pull only a task, which is pushable
+ * on its rq, and no others.
+ */
+ p = pick_highest_pushable_task(src_rt_rq, this_cpu);
+
+ /*
+ * Do we have an RT task that preempts
+ * the to-be-scheduled task?
+ */
+ if (p && (p->prio < this_rt_rq->highest_prio.curr)) {
+ WARN_ON(p == src_rq->curr);
+ WARN_ON(!task_on_rq_queued(p));
+
+ /*
+ * There's a chance that p is higher in priority
+ * than what's currently running on its CPU.
+ * This is just that p is waking up and hasn't
+ * had a chance to schedule. We only pull
+ * p if it is lower in priority than the
+ * current task on the run queue
+ */
+ if (src_rq->donor->sched_task_group == this_rt_rq->tg &&
+ p->prio < src_rq->donor->prio)
+ goto skip;
+
+ if (is_migration_disabled(p)) {
+ /*
+ * If the current task does not belong to the same task group,
+ * we cannot push it away.
+ */
+ if (src_rq->donor->sched_task_group != this_rt_rq->tg)
+ goto skip;
+
+ push_task = get_push_task(src_rq);
+ } else {
+ move_queued_task_locked(src_rq, this_rq, p);
+ resched = true;
+ }
+ /*
+ * We continue with the search, just in
+ * case there's an even higher prio task
+ * in another runqueue. (low likelihood
+ * but possible)
+ */
+ }
+skip:
+ double_unlock_balance(this_rq, src_rq);
+
+ if (push_task) {
+ preempt_disable();
+ raw_spin_rq_unlock(this_rq);
+ stop_one_cpu_nowait(src_rq->cpu, push_cpu_stop,
+ push_task, &src_rq->push_work);
+ preempt_enable();
+ raw_spin_rq_lock(this_rq);
+ }
+ }
+
+ if (resched)
+ resched_curr(this_rq);
+}
+
+static void group_push_rt_tasks(struct rt_rq *rt_rq)
+{
+ while (group_push_rt_task(rt_rq, false))
+ ;
+}
+
+static void group_push_rt_tasks_callback(struct rq *global_rq)
+{
+ struct rt_rq *rt_rq = &global_rq->rq_to_push_from->rt;
+
+ if ((rt_rq->rt_nr_running > 1) ||
+ (dl_group_of(rt_rq)->dl_throttled == 1)) {
+
+ group_push_rt_tasks(rt_rq);
+ }
+
+ global_rq->rq_to_push_from = NULL;
+}
+
+static void group_pull_rt_task_callback(struct rq *global_rq)
+{
+ struct rt_rq *rt_rq = &global_rq->rq_to_pull_to->rt;
+
+ group_pull_rt_task(rt_rq);
+ global_rq->rq_to_pull_to = NULL;
+}
+#else /* !CONFIG_RT_GROUP_SCHED */
+static void group_pull_rt_task(struct rt_rq *this_rt_rq) { }
+static void group_push_rt_tasks(struct rt_rq *rt_rq) { }
+#endif /* CONFIG_RT_GROUP_SCHED */
+
/*
* If we are not running and we are not going to reschedule soon, we should
* try to push tasks away now
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 9814be8348cd..6b5bd6270d9a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1330,6 +1330,16 @@ struct rq {
struct list_head cfsb_csd_list;
#endif
+#ifdef CONFIG_RT_GROUP_SCHED
+ /*
+ * Balance callbacks operate only on global runqueues.
+ * These pointers allow referencing cgroup specific runqueues
+ * for balancing operations.
+ */
+ struct rq *rq_to_push_from;
+ struct rq *rq_to_pull_to;
+#endif
+
atomic_t nr_iowait;
} __no_randomize_layout;
--
2.53.0