From: Yuri Andriaccio
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider
Cc: linux-kernel@vger.kernel.org, Luca Abeni, Yuri Andriaccio
Subject: [RFC PATCH v5 04/29] sched/rt: Pass an rt_rq instead of an rq where needed
Date: Thu, 30 Apr 2026 23:38:08 +0200
Message-ID: <20260430213835.62217-5-yurand2000@gmail.com>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260430213835.62217-1-yurand2000@gmail.com>
References: <20260430213835.62217-1-yurand2000@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: luca abeni

Make the code in rt.c access the runqueue through the rt_rq data
structure rather than through an rq pointer passed directly. This
allows future patches to define rt_rq data structures which do not
refer only to the global runqueue, but also to local cgroup runqueues
(as rt_rq will not always be equal to &rq->rt). Add checks in
rt_queue_{push/pull}_tasks to make sure that the given rt_rq refers to
a global runqueue and not to a local one.
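A note on the pattern this conversion relies on: recovering the rq from
an rt_rq via container_of_const is only valid when the rt_rq is the one
embedded in the global per-CPU rq, which is why rt_queue_{push/pull}_tasks
must only ever be handed a global rt_rq. Below is a minimal userspace
sketch of that offset arithmetic, using simplified stand-in structs
rather than the real kernel types:

/*
 * Userspace illustration only -- simplified stand-ins, not the real
 * struct rq / struct rt_rq. It shows how container_of() recovers the
 * enclosing structure from a pointer to an embedded member.
 */
#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct rt_rq {
	int highest_prio_next;	/* stand-in for the real fields */
};

struct rq {
	int cpu;
	struct rt_rq rt;	/* the global rt_rq, embedded in the rq */
};

/* Rough equivalent of the rq lookup added to rt_queue_push_tasks() */
static struct rq *rq_of_global_rt_rq(struct rt_rq *rt_rq)
{
	return container_of(rt_rq, struct rq, rt);
}

int main(void)
{
	struct rq rq = { .cpu = 3 };

	/*
	 * Round-trips &rq.rt back to &rq. For an rt_rq that is not
	 * embedded in an rq, this arithmetic would yield a bogus
	 * pointer -- hence the global-runqueue restriction above.
	 */
	printf("cpu = %d\n", rq_of_global_rt_rq(&rq.rt)->cpu);
	return 0;
}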
Signed-off-by: luca abeni
Signed-off-by: Yuri Andriaccio
---
 kernel/sched/rt.c | 99 ++++++++++++++++++++++++++---------------------
 1 file changed, 54 insertions(+), 45 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index f69e1f16d923..597eaba00a20 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -370,9 +370,9 @@ static inline void rt_clear_overload(struct rq *rq)
 	cpumask_clear_cpu(rq->cpu, rq->rd->rto_mask);
 }
 
-static inline int has_pushable_tasks(struct rq *rq)
+static inline int has_pushable_tasks(struct rt_rq *rt_rq)
 {
-	return !plist_head_empty(&rq->rt.pushable_tasks);
+	return !plist_head_empty(&rt_rq->pushable_tasks);
 }
 
 static DEFINE_PER_CPU(struct balance_callback, rt_push_head);
@@ -381,50 +381,54 @@ static DEFINE_PER_CPU(struct balance_callback, rt_pull_head);
 static void push_rt_tasks(struct rq *);
 static void pull_rt_task(struct rq *);
 
-static inline void rt_queue_push_tasks(struct rq *rq)
+static inline void rt_queue_push_tasks(struct rt_rq *rt_rq)
 {
-	if (!has_pushable_tasks(rq))
+	struct rq *rq = container_of_const(rt_rq, struct rq, rt);
+
+	if (!has_pushable_tasks(rt_rq))
 		return;
 
 	queue_balance_callback(rq, &per_cpu(rt_push_head, rq->cpu), push_rt_tasks);
 }
 
-static inline void rt_queue_pull_task(struct rq *rq)
+static inline void rt_queue_pull_task(struct rt_rq *rt_rq)
 {
+	struct rq *rq = container_of_const(rt_rq, struct rq, rt);
+
 	queue_balance_callback(rq, &per_cpu(rt_pull_head, rq->cpu), pull_rt_task);
 }
 
-static void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
+static void enqueue_pushable_task(struct rt_rq *rt_rq, struct task_struct *p)
 {
-	plist_del(&p->pushable_tasks, &rq->rt.pushable_tasks);
+	plist_del(&p->pushable_tasks, &rt_rq->pushable_tasks);
 	plist_node_init(&p->pushable_tasks, p->prio);
-	plist_add(&p->pushable_tasks, &rq->rt.pushable_tasks);
+	plist_add(&p->pushable_tasks, &rt_rq->pushable_tasks);
 
 	/* Update the highest prio pushable task */
-	if (p->prio < rq->rt.highest_prio.next)
-		rq->rt.highest_prio.next = p->prio;
+	if (p->prio < rt_rq->highest_prio.next)
+		rt_rq->highest_prio.next = p->prio;
 
-	if (!rq->rt.overloaded) {
-		rt_set_overload(rq);
-		rq->rt.overloaded = 1;
+	if (!rt_rq->overloaded) {
+		rt_set_overload(rq_of_rt_rq(rt_rq));
+		rt_rq->overloaded = 1;
 	}
 }
 
-static void dequeue_pushable_task(struct rq *rq, struct task_struct *p)
+static void dequeue_pushable_task(struct rt_rq *rt_rq, struct task_struct *p)
 {
-	plist_del(&p->pushable_tasks, &rq->rt.pushable_tasks);
+	plist_del(&p->pushable_tasks, &rt_rq->pushable_tasks);
 
 	/* Update the new highest prio pushable task */
-	if (has_pushable_tasks(rq)) {
-		p = plist_first_entry(&rq->rt.pushable_tasks,
+	if (has_pushable_tasks(rt_rq)) {
+		p = plist_first_entry(&rt_rq->pushable_tasks,
 				      struct task_struct, pushable_tasks);
-		rq->rt.highest_prio.next = p->prio;
+		rt_rq->highest_prio.next = p->prio;
 	} else {
-		rq->rt.highest_prio.next = MAX_RT_PRIO-1;
+		rt_rq->highest_prio.next = MAX_RT_PRIO-1;
 
-		if (rq->rt.overloaded) {
-			rt_clear_overload(rq);
-			rq->rt.overloaded = 0;
+		if (rt_rq->overloaded) {
+			rt_clear_overload(rq_of_rt_rq(rt_rq));
+			rt_rq->overloaded = 0;
 		}
 	}
 }
@@ -1431,6 +1435,7 @@ static void
 enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)
 {
 	struct sched_rt_entity *rt_se = &p->rt;
+	struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
 
 	if (flags & ENQUEUE_WAKEUP)
 		rt_se->timeout = 0;
@@ -1444,17 +1449,18 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)
 		return;
 
 	if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
-		enqueue_pushable_task(rq, p);
+		enqueue_pushable_task(rt_rq, p);
 }
 
 static bool dequeue_task_rt(struct rq *rq, struct task_struct *p, int flags)
 {
 	struct sched_rt_entity *rt_se = &p->rt;
+	struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
 
 	update_curr_rt(rq);
 	dequeue_rt_entity(rt_se, flags);
 
-	dequeue_pushable_task(rq, p);
+	dequeue_pushable_task(rt_rq, p);
 
 	return true;
 }
@@ -1645,14 +1651,14 @@ static void wakeup_preempt_rt(struct rq *rq, struct task_struct *p, int flags)
 static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool first)
 {
 	struct sched_rt_entity *rt_se = &p->rt;
-	struct rt_rq *rt_rq = &rq->rt;
+	struct rt_rq *rt_rq = rt_rq_of_se(&p->rt);
 
 	p->se.exec_start = rq_clock_task(rq);
 	if (on_rt_rq(&p->rt))
 		update_stats_wait_end_rt(rt_rq, rt_se);
 
 	/* The running task is never eligible for pushing */
-	dequeue_pushable_task(rq, p);
+	dequeue_pushable_task(rt_rq, p);
 
 	if (!first)
 		return;
@@ -1665,7 +1671,7 @@ static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool f
 	if (rq->donor->sched_class != &rt_sched_class)
 		update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
 
-	rt_queue_push_tasks(rq);
+	rt_queue_push_tasks(rt_rq);
 }
 
 static struct sched_rt_entity *pick_next_rt_entity(struct rt_rq *rt_rq)
@@ -1716,7 +1722,7 @@ static struct task_struct *pick_task_rt(struct rq *rq, struct rq_flags *rf)
 static void put_prev_task_rt(struct rq *rq, struct task_struct *p, struct task_struct *next)
 {
 	struct sched_rt_entity *rt_se = &p->rt;
-	struct rt_rq *rt_rq = &rq->rt;
+	struct rt_rq *rt_rq = rt_rq_of_se(&p->rt);
 
 	if (on_rt_rq(&p->rt))
 		update_stats_wait_start_rt(rt_rq, rt_se);
@@ -1732,7 +1738,7 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p, struct task_s
 	 * if it is still active
 	 */
 	if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1)
-		enqueue_pushable_task(rq, p);
+		enqueue_pushable_task(rt_rq, p);
 }
 
 /* Only try algorithms three times */
@@ -1742,16 +1748,16 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p, struct task_s
  * Return the highest pushable rq's task, which is suitable to be executed
  * on the CPU, NULL otherwise
  */
-static struct task_struct *pick_highest_pushable_task(struct rq *rq, int cpu)
+static struct task_struct *pick_highest_pushable_task(struct rt_rq *rt_rq, int cpu)
 {
-	struct plist_head *head = &rq->rt.pushable_tasks;
+	struct plist_head *head = &rt_rq->pushable_tasks;
 	struct task_struct *p;
 
-	if (!has_pushable_tasks(rq))
+	if (!has_pushable_tasks(rt_rq))
 		return NULL;
 
 	plist_for_each_entry(p, head, pushable_tasks) {
-		if (task_is_pushable(rq, p, cpu))
+		if (task_is_pushable(rq_of_rt_rq(rt_rq), p, cpu))
 			return p;
 	}
 
@@ -1851,14 +1857,15 @@ static int find_lowest_rq(struct task_struct *task)
 	return -1;
 }
 
-static struct task_struct *pick_next_pushable_task(struct rq *rq)
+static struct task_struct *pick_next_pushable_task(struct rt_rq *rt_rq)
 {
+	struct rq *rq = rq_of_rt_rq(rt_rq);
 	struct task_struct *p;
 
-	if (!has_pushable_tasks(rq))
+	if (!has_pushable_tasks(rt_rq))
 		return NULL;
 
-	p = plist_first_entry(&rq->rt.pushable_tasks,
+	p = plist_first_entry(&rt_rq->pushable_tasks,
 			      struct task_struct, pushable_tasks);
 
 	BUG_ON(rq->cpu != task_cpu(p));
@@ -1911,7 +1918,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
 			 */
 			if (unlikely(is_migration_disabled(task) ||
 				     !cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) ||
-				     task != pick_next_pushable_task(rq))) {
+				     task != pick_next_pushable_task(&rq->rt))) {
 
 				double_unlock_balance(rq, lowest_rq);
 				lowest_rq = NULL;
@@ -1945,7 +1952,7 @@ static int push_rt_task(struct rq *rq, bool pull)
 	if (!rq->rt.overloaded)
 		return 0;
 
-	next_task = pick_next_pushable_task(rq);
+	next_task = pick_next_pushable_task(&rq->rt);
 	if (!next_task)
 		return 0;
 
@@ -2020,7 +2027,7 @@ static int push_rt_task(struct rq *rq, bool pull)
 	 * run-queue and is also still the next task eligible for
 	 * pushing.
	 */
-	task = pick_next_pushable_task(rq);
+	task = pick_next_pushable_task(&rq->rt);
 	if (task == next_task) {
 		/*
 		 * The task hasn't migrated, and is still the next
@@ -2213,7 +2220,7 @@ void rto_push_irq_work_func(struct irq_work *work)
 	 * We do not need to grab the lock to check for has_pushable_tasks.
 	 * When it gets updated, a check is made if a push is possible.
 	 */
-	if (has_pushable_tasks(rq)) {
+	if (has_pushable_tasks(&rq->rt)) {
 		raw_spin_rq_lock(rq);
 		while (push_rt_task(rq, true))
 			;
@@ -2242,6 +2249,7 @@ static void pull_rt_task(struct rq *this_rq)
 	int this_cpu = this_rq->cpu, cpu;
 	bool resched = false;
 	struct task_struct *p, *push_task;
+	struct rt_rq *src_rt_rq;
 	struct rq *src_rq;
 	int rt_overload_count = rt_overloaded(this_rq);
 
@@ -2271,6 +2279,7 @@ static void pull_rt_task(struct rq *this_rq)
 			continue;
 
 		src_rq = cpu_rq(cpu);
+		src_rt_rq = &src_rq->rt;
 
 		/*
 		 * Don't bother taking the src_rq->lock if the next highest
@@ -2279,7 +2288,7 @@ static void pull_rt_task(struct rq *this_rq)
 		 * logically higher, the src_rq will push this task away.
 		 * And if its going logically lower, we do not care
 		 */
-		if (src_rq->rt.highest_prio.next >=
+		if (src_rt_rq->highest_prio.next >=
 		    this_rq->rt.highest_prio.curr)
 			continue;
 
@@ -2295,7 +2304,7 @@ static void pull_rt_task(struct rq *this_rq)
 		 * We can pull only a task, which is pushable
 		 * on its rq, and no others.
 		 */
-		p = pick_highest_pushable_task(src_rq, this_cpu);
+		p = pick_highest_pushable_task(src_rt_rq, this_cpu);
 
 		/*
 		 * Do we have an RT task that preempts
@@ -2401,7 +2410,7 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
 	if (!task_on_rq_queued(p) || rq->rt.rt_nr_running)
 		return;
 
-	rt_queue_pull_task(rq);
+	rt_queue_pull_task(rt_rq_of_se(&p->rt));
 }
 
 void __init init_sched_rt_class(void)
@@ -2437,7 +2446,7 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
 	 */
 	if (task_on_rq_queued(p)) {
 		if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
-			rt_queue_push_tasks(rq);
+			rt_queue_push_tasks(rt_rq_of_se(&p->rt));
 		if (p->prio < rq->donor->prio && cpu_online(cpu_of(rq)))
 			resched_curr(rq);
 	}
@@ -2462,7 +2471,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, u64 oldprio)
 	 * may need to pull tasks to this runqueue.
 	 */
 	if (oldprio < p->prio)
-		rt_queue_pull_task(rq);
+		rt_queue_pull_task(rt_rq_of_se(&p->rt));
 
 	/*
 	 * If there's a higher priority task waiting to run
-- 
2.53.0