From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yuri Andriaccio
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider
Cc: linux-kernel@vger.kernel.org, Luca Abeni, Yuri Andriaccio
Subject: [RFC PATCH v5 22/29] sched/rt: Add rt-cgroup migration functions
Date: Thu, 30 Apr 2026 23:38:26 +0200
Message-ID: <20260430213835.62217-23-yurand2000@gmail.com>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260430213835.62217-1-yurand2000@gmail.com>
References: <20260430213835.62217-1-yurand2000@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: luca abeni

Add migration-related functions:

- group_find_lowest_rt_rq
- group_find_lock_lowest_rt_rq
  Find (and lock) the lowest-priority non-root runqueue to which a given
  task can be migrated.

- group_pull_rt_task
  Try to pull a task onto the given non-root runqueue.

- group_push_rt_task
- group_push_rt_tasks
  Try to push tasks away from the given non-root runqueue.

- group_pull_rt_task_callback
- group_push_rt_tasks_callback
- rt_queue_push_from_group
- rt_queue_pull_to_group
  Defer execution of the push and pull functions to the balancing points.

Also update struct rq with fields for deferred balancing of cgroup
runqueues.

---
The functions are only implemented here; they are hooked up later in the
patchset.
Co-developed-by: Alessio Balsini
Signed-off-by: Alessio Balsini
Co-developed-by: Andrea Parri
Signed-off-by: Andrea Parri
Co-developed-by: Yuri Andriaccio
Signed-off-by: Yuri Andriaccio
Signed-off-by: luca abeni
---
 kernel/sched/rt.c    | 461 +++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h |  10 +
 2 files changed, 471 insertions(+)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index db88792787a8..e1731e01757b 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1,3 +1,4 @@
+#pragma GCC diagnostic ignored "-Wunused-function"
 // SPDX-License-Identifier: GPL-2.0
 /*
  * Real-Time Scheduling Class (mapped to the SCHED_FIFO and SCHED_RR
@@ -84,6 +85,8 @@ void init_rt_rq(struct rt_rq *rt_rq)
 	plist_head_init(&rt_rq->pushable_tasks);
 }
 
+static void group_pull_rt_task(struct rt_rq *this_rt_rq);
+
 #ifdef CONFIG_RT_GROUP_SCHED
 
 void unregister_rt_sched_group(struct task_group *tg)
@@ -345,6 +348,46 @@ static inline void rt_queue_pull_task(struct rt_rq *rt_rq)
 	queue_balance_callback(rq, &per_cpu(rt_pull_head, rq->cpu), pull_rt_task);
 }
 
+#ifdef CONFIG_RT_GROUP_SCHED
+static DEFINE_PER_CPU(struct balance_callback, rt_group_push_head);
+static DEFINE_PER_CPU(struct balance_callback, rt_group_pull_head);
+static void group_push_rt_tasks_callback(struct rq *);
+static void group_pull_rt_task_callback(struct rq *);
+
+static void rt_queue_push_from_group(struct rt_rq *rt_rq)
+{
+	struct rq *rq = served_rq_of_rt_rq(rt_rq);
+	struct rq *global_rq = cpu_rq(rq->cpu);
+
+	if (global_rq->rq_to_push_from)
+		return;
+
+	if (!has_pushable_tasks(rt_rq))
+		return;
+
+	global_rq->rq_to_push_from = rq;
+	queue_balance_callback(global_rq, &per_cpu(rt_group_push_head, global_rq->cpu),
+			       group_push_rt_tasks_callback);
+}
+
+static void rt_queue_pull_to_group(struct rt_rq *rt_rq)
+{
+	struct rq *rq = served_rq_of_rt_rq(rt_rq);
+	struct rq *global_rq = cpu_rq(rq->cpu);
+	struct sched_dl_entity *dl_se = dl_group_of(rt_rq);
+
+	if (dl_se->dl_throttled || global_rq->rq_to_pull_to)
+		return;
+
+	global_rq->rq_to_pull_to = rq;
+	queue_balance_callback(global_rq, &per_cpu(rt_group_pull_head, global_rq->cpu),
+			       group_pull_rt_task_callback);
+}
+#else /* !CONFIG_RT_GROUP_SCHED */
+static inline void rt_queue_push_from_group(struct rt_rq *rt_rq) {}
+static inline void rt_queue_pull_to_group(struct rt_rq *rt_rq) {}
+#endif /* CONFIG_RT_GROUP_SCHED */
+
 static void enqueue_pushable_task(struct rt_rq *rt_rq, struct task_struct *p)
 {
 	plist_del(&p->pushable_tasks, &rt_rq->pushable_tasks);
@@ -1747,6 +1790,424 @@ static void pull_rt_task(struct rq *this_rq)
 		resched_curr(this_rq);
 }
 
+#ifdef CONFIG_RT_GROUP_SCHED
+/*
+ * Find the lowest priority runqueue among the runqueues of the same
+ * task group. Unlike find_lowest_rt(), this does not mean that the
+ * lowest priority cpu is running tasks from this runqueue.
+ */
+static int group_find_lowest_rt_rq(struct task_struct *task, struct rt_rq *task_rt_rq)
+{
+	struct sched_domain *sd;
+	struct cpumask lowest_mask;
+	struct sched_dl_entity *dl_se;
+	struct rt_rq *rt_rq;
+	int prio, lowest_prio;
+	int cpu, this_cpu = smp_processor_id();
+
+	if (task->nr_cpus_allowed == 1)
+		return -1; /* No other targets possible */
+
+	lowest_prio = task->prio - 1;
+	cpumask_clear(&lowest_mask);
+	for_each_cpu_and(cpu, cpu_online_mask, task->cpus_ptr) {
+		dl_se = task_rt_rq->tg->dl_se[cpu];
+		rt_rq = &dl_se->my_q->rt;
+		prio = rt_rq->highest_prio.curr;
+
+		/*
+		 * If we're on an asym system, ensure we consider the
+		 * different capacities of the CPUs when searching for the
+		 * lowest_mask.
+		 */
+		if (dl_se->dl_throttled || !rt_task_fits_capacity(task, cpu))
+			continue;
+
+		if (prio >= lowest_prio) {
+			if (prio > lowest_prio) {
+				cpumask_clear(&lowest_mask);
+				lowest_prio = prio;
+			}
+
+			cpumask_set_cpu(cpu, &lowest_mask);
+		}
+	}
+
+	if (cpumask_empty(&lowest_mask))
+		return -1;
+
+	/*
+	 * At this point we have built a mask of CPUs representing the
+	 * lowest priority tasks in the system.  Now we want to elect
+	 * the best one based on our affinity and topology.
+	 *
+	 * We prioritize the last CPU that the task executed on since
+	 * it is most likely cache-hot in that location.
+	 */
+	cpu = task_cpu(task);
+	if (cpumask_test_cpu(cpu, &lowest_mask))
+		return cpu;
+
+	/*
+	 * Otherwise, we consult the sched_domains span maps to figure
+	 * out which CPU is logically closest to our hot cache data.
+	 */
+	if (!cpumask_test_cpu(this_cpu, &lowest_mask))
+		this_cpu = -1; /* Skip this_cpu opt if not among lowest */
+
+	scoped_guard(rcu) {
+		for_each_domain(cpu, sd) {
+			if (sd->flags & SD_WAKE_AFFINE) {
+				int best_cpu;
+
+				/*
+				 * "this_cpu" is cheaper to preempt than a
+				 * remote processor.
+				 */
+				if (this_cpu != -1 &&
+				    cpumask_test_cpu(this_cpu, sched_domain_span(sd)))
+					return this_cpu;
+
+				best_cpu = cpumask_any_and_distribute(&lowest_mask,
+								      sched_domain_span(sd));
+				if (best_cpu < nr_cpu_ids)
+					return best_cpu;
+			}
+		}
+	}
+
+	/*
+	 * And finally, if there were no matches within the domains
+	 * just give the caller *something* to work with from the compatible
+	 * locations.
+	 */
+	if (this_cpu != -1)
+		return this_cpu;
+
+	cpu = cpumask_any_distribute(&lowest_mask);
+	if (cpu < nr_cpu_ids)
+		return cpu;
+
+	return -1;
+}
+
+/*
+ * Find and lock the lowest priority runqueue among the runqueues
+ * of the same task group. Unlike find_lock_lowest_rt(), this does not
+ * mean that the lowest priority cpu is running tasks from this runqueue.
+ */
+static struct rt_rq *group_find_lock_lowest_rt_rq(struct task_struct *task, struct rt_rq *rt_rq)
+{
+	struct rq *rq = rq_of_rt_rq(rt_rq);
+	struct rq *lowest_rq;
+	struct rt_rq *lowest_rt_rq;
+	struct sched_dl_entity *lowest_dl_se;
+	int tries, cpu;
+
+	for (tries = 0; tries < RT_MAX_TRIES; tries++) {
+		cpu = group_find_lowest_rt_rq(task, rt_rq);
+
+		if ((cpu == -1) || (cpu == rq->cpu))
+			return NULL;
+
+		lowest_dl_se = rt_rq->tg->dl_se[cpu];
+		lowest_rt_rq = &lowest_dl_se->my_q->rt;
+		lowest_rq = cpu_rq(cpu);
+
+		if (lowest_rt_rq->highest_prio.curr <= task->prio) {
+			/*
+			 * Target rq has tasks of equal or higher priority,
+			 * retrying does not release any lock and is unlikely
+			 * to yield a different result.
+			 */
+			return NULL;
+		}
+
+		/* if the prio of this runqueue changed, try again */
+		if (double_lock_balance(rq, lowest_rq)) {
+			/*
+			 * We had to unlock the run queue. In the meantime,
+			 * the task could have migrated already or had its
+			 * affinity changed.
+			 * Also make sure that it wasn't scheduled on its rq.
+			 * It is possible the task was scheduled, set
+			 * "migrate_disabled" and then got preempted, so we must
+			 * check the task migration disable flag here too.
+			 */
+			if (unlikely(is_migration_disabled(task) ||
+				     lowest_dl_se->dl_throttled ||
+				     !cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) ||
+				     task != pick_next_pushable_task(rt_rq))) {
+
+				double_unlock_balance(rq, lowest_rq);
+				return NULL;
+			}
+		}
+
+		/* If this rq is still suitable use it. */
+		if (lowest_rt_rq->highest_prio.curr > task->prio)
+			return lowest_rt_rq;
+
+		/* try again */
+		double_unlock_balance(rq, lowest_rq);
+	}
+
+	return NULL;
+}
+
+static int group_push_rt_task(struct rt_rq *rt_rq, bool pull)
+{
+	struct rq *rq = rq_of_rt_rq(rt_rq);
+	struct task_struct *next_task;
+	struct rq *lowest_rq;
+	struct rt_rq *lowest_rt_rq;
+	int ret = 0;
+
+	if (!rt_rq->overloaded)
+		return 0;
+
+	next_task = pick_next_pushable_task(rt_rq);
+	if (!next_task)
+		return 0;
+
+retry:
+	if (is_migration_disabled(next_task)) {
+		struct task_struct *push_task = NULL;
+		int cpu;
+
+		if (!pull || rq->push_busy)
+			return 0;
+
+		/*
+		 * If the current task does not belong to the same task group
+		 * we cannot push it away.
+		 */
+		if (rq->donor->sched_task_group != rt_rq->tg)
+			return 0;
+
+		/*
+		 * Invoking group_find_lowest_rt_rq() on anything but an RT
+		 * task doesn't make sense. Per the above priority check,
+		 * curr has to be of higher priority than next_task, so no
+		 * need to reschedule when bailing out.
+		 *
+		 * Note that the stoppers are masqueraded as SCHED_FIFO
+		 * (cf. sched_set_stop_task()), so we can't rely on rt_task().
+		 */
+		if (rq->donor->sched_class != &rt_sched_class)
+			return 0;
+
+		cpu = group_find_lowest_rt_rq(rq->curr, rt_rq);
+		if (cpu == -1 || cpu == rq->cpu)
+			return 0;
+
+		/*
+		 * We found a CPU with lower priority than @next_task,
+		 * therefore it should be running. However we cannot migrate
+		 * @next_task to this other CPU; instead attempt to push the
+		 * current running task on this CPU away.
+		 */
+		push_task = get_push_task(rq);
+		if (push_task) {
+			preempt_disable();
+			raw_spin_rq_unlock(rq);
+			stop_one_cpu_nowait(rq->cpu, push_cpu_stop,
+					    push_task, &rq->push_work);
+			preempt_enable();
+			raw_spin_rq_lock(rq);
+		}
+
+		return 0;
+	}
+
+	if (WARN_ON(next_task == rq->curr))
+		return 0;
+
+	/* We might release rq lock */
+	get_task_struct(next_task);
+
+	/* group_find_lock_lowest_rt_rq locks the rq if found */
+	lowest_rt_rq = group_find_lock_lowest_rt_rq(next_task, rt_rq);
+	if (!lowest_rt_rq) {
+		struct task_struct *task;
+		/*
+		 * group_find_lock_lowest_rt_rq releases rq->lock
+		 * so it is possible that next_task has migrated.
+		 *
+		 * We need to make sure that the task is still on the same
+		 * run-queue and is also still the next task eligible for
+		 * pushing.
+		 */
+		task = pick_next_pushable_task(rt_rq);
+		if (task == next_task) {
+			/*
+			 * The task hasn't migrated, and is still the next
+			 * eligible task, but we failed to find a run-queue
+			 * to push it to. Do not retry in this case, since
+			 * other CPUs will pull from us when ready.
+			 */
+			goto out;
+		}
+
+		if (!task)
+			/* No more tasks, just exit */
+			goto out;
+
+		/*
+		 * Something has shifted, try again.
+		 */
+		put_task_struct(next_task);
+		next_task = task;
+		goto retry;
+	}
+
+	lowest_rq = rq_of_rt_rq(lowest_rt_rq);
+
+	move_queued_task_locked(rq, lowest_rq, next_task);
+	resched_curr(lowest_rq);
+	ret = 1;
+
+	double_unlock_balance(rq, lowest_rq);
+out:
+	put_task_struct(next_task);
+
+	return ret;
+}
+
+static void group_pull_rt_task(struct rt_rq *this_rt_rq)
+{
+	struct rq *this_rq = rq_of_rt_rq(this_rt_rq);
+	int this_cpu = this_rq->cpu, cpu;
+	bool resched = false;
+	struct task_struct *p, *push_task = NULL;
+	struct rt_rq *src_rt_rq;
+	struct rq *src_rq;
+	struct sched_dl_entity *src_dl_se;
+
+	for_each_online_cpu(cpu) {
+		if (this_cpu == cpu)
+			continue;
+
+		src_dl_se = this_rt_rq->tg->dl_se[cpu];
+		src_rt_rq = &src_dl_se->my_q->rt;
+
+		if (src_rt_rq->rt_nr_running <= 1 && !src_dl_se->dl_throttled)
+			continue;
+
+		src_rq = rq_of_rt_rq(src_rt_rq);
+
+		/*
+		 * Don't bother taking the src_rq->lock if the next highest
+		 * task is known to be lower-priority than our current task.
+		 * This may look racy, but if this value is about to go
+		 * logically higher, the src_rq will push this task away.
+		 * And if it's going logically lower, we do not care.
+		 */
+		if (src_rt_rq->highest_prio.next >=
+		    this_rt_rq->highest_prio.curr)
+			continue;
+
+		/*
+		 * We can potentially drop this_rq's lock in
+		 * double_lock_balance, and another CPU could
+		 * alter this_rq
+		 */
+		push_task = NULL;
+		double_lock_balance(this_rq, src_rq);
+
+		/*
+		 * We can only pull a task that is pushable on its rq,
+		 * and no others.
+		 */
+		p = pick_highest_pushable_task(src_rt_rq, this_cpu);
+
+		/*
+		 * Do we have an RT task that preempts
+		 * the to-be-scheduled task?
+		 */
+		if (p && (p->prio < this_rt_rq->highest_prio.curr)) {
+			WARN_ON(p == src_rq->curr);
+			WARN_ON(!task_on_rq_queued(p));
+
+			/*
+			 * There's a chance that p is higher in priority
+			 * than what's currently running on its CPU.
+			 * This is just that p is waking up and hasn't
+			 * had a chance to schedule. We only pull
+			 * p if it is lower in priority than the
+			 * current task on the run queue
+			 */
+			if (src_rq->donor->sched_task_group == this_rt_rq->tg &&
+			    p->prio < src_rq->donor->prio)
+				goto skip;
+
+			if (is_migration_disabled(p)) {
+				/*
+				 * If the current task does not belong to the
+				 * same task group we cannot push it away.
+				 */
+				if (src_rq->donor->sched_task_group != this_rt_rq->tg)
+					goto skip;
+
+				push_task = get_push_task(src_rq);
+			} else {
+				move_queued_task_locked(src_rq, this_rq, p);
+				resched = true;
+			}
+			/*
+			 * We continue with the search, just in
+			 * case there's an even higher prio task
+			 * in another runqueue. (low likelihood
+			 * but possible)
+			 */
+		}
+skip:
+		double_unlock_balance(this_rq, src_rq);
+
+		if (push_task) {
+			preempt_disable();
+			raw_spin_rq_unlock(this_rq);
+			stop_one_cpu_nowait(src_rq->cpu, push_cpu_stop,
+					    push_task, &src_rq->push_work);
+			preempt_enable();
+			raw_spin_rq_lock(this_rq);
+		}
+	}
+
+	if (resched)
+		resched_curr(this_rq);
+}
+
+static void group_push_rt_tasks(struct rt_rq *rt_rq)
+{
+	while (group_push_rt_task(rt_rq, false))
+		;
+}
+
+static void group_push_rt_tasks_callback(struct rq *global_rq)
+{
+	struct rt_rq *rt_rq = &global_rq->rq_to_push_from->rt;
+
+	if ((rt_rq->rt_nr_running > 1) ||
+	    (dl_group_of(rt_rq)->dl_throttled == 1)) {
+
+		group_push_rt_tasks(rt_rq);
+	}
+
+	global_rq->rq_to_push_from = NULL;
+}
+
+static void group_pull_rt_task_callback(struct rq *global_rq)
+{
+	struct rt_rq *rt_rq = &global_rq->rq_to_pull_to->rt;
+
+	group_pull_rt_task(rt_rq);
+	global_rq->rq_to_pull_to = NULL;
+}
+#else /* !CONFIG_RT_GROUP_SCHED */
+static void group_pull_rt_task(struct rt_rq *this_rt_rq) { }
+static void group_push_rt_tasks(struct rt_rq *rt_rq) { }
+#endif /* CONFIG_RT_GROUP_SCHED */
+
 /*
  * If we are not running and we are not going to reschedule soon, we should
  * try to push tasks away now
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 9814be8348cd..6b5bd6270d9a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1330,6 +1330,16 @@ struct rq {
 	struct list_head	cfsb_csd_list;
 #endif
 
+#ifdef CONFIG_RT_GROUP_SCHED
+	/*
+	 * Balance callbacks operate only on global runqueues.
+	 * These pointers allow referencing cgroup specific runqueues
+	 * for balancing operations.
+	 */
+	struct rq		*rq_to_push_from;
+	struct rq		*rq_to_pull_to;
+#endif
+
 	atomic_t		nr_iowait;
 } __no_randomize_layout;
-- 
2.53.0