From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yuri Andriaccio
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider
Cc: linux-kernel@vger.kernel.org, Luca Abeni, Yuri Andriaccio
Subject: [RFC PATCH v5 07/29] sched/rt: Remove unnecessary runqueue pointer in struct rt_rq
Date: Thu, 30 Apr 2026 23:38:11 +0200
Message-ID: <20260430213835.62217-8-yurand2000@gmail.com>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260430213835.62217-1-yurand2000@gmail.com>
References: <20260430213835.62217-1-yurand2000@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the rq field from struct rt_rq. The field merely caches a pointer
to the global runqueue of the given rt_rq, and since that runqueue can be
retrieved in other ways, the cached pointer is unnecessary. Introduce
served_rq_of_rt_rq() to retrieve the runqueue that the given rt_rq serves.
Signed-off-by: Yuri Andriaccio
---
 kernel/sched/rt.c    |  7 ++-----
 kernel/sched/sched.h | 21 +++++++++++++--------
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 392212ac90d8..dd4aee5570aa 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -101,10 +101,7 @@ void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
 		      struct sched_rt_entity *rt_se, int cpu,
 		      struct sched_rt_entity *parent)
 {
-	struct rq *rq = cpu_rq(cpu);
-
 	rt_rq->highest_prio.curr = MAX_RT_PRIO-1;
-	rt_rq->rq = rq;
 	rt_rq->tg = tg;
 
 	tg->rt_rq[cpu] = rt_rq;
@@ -184,7 +181,7 @@ static void pull_rt_task(struct rq *);
 
 static inline void rt_queue_push_tasks(struct rt_rq *rt_rq)
 {
-	struct rq *rq = container_of_const(rt_rq, struct rq, rt);
+	struct rq *rq = served_rq_of_rt_rq(rt_rq);
 
 	if (!has_pushable_tasks(rt_rq))
 		return;
@@ -194,7 +191,7 @@ static inline void rt_queue_push_tasks(struct rt_rq *rt_rq)
 
 static inline void rt_queue_pull_task(struct rt_rq *rt_rq)
 {
-	struct rq *rq = container_of_const(rt_rq, struct rq, rt);
+	struct rq *rq = served_rq_of_rt_rq(rt_rq);
 
 	queue_balance_callback(rq, &per_cpu(rt_pull_head, rq->cpu), pull_rt_task);
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 5833905d8eaa..770de5afd3a9 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -850,8 +850,6 @@ struct rt_rq {
 	raw_spinlock_t		rt_runtime_lock;
 
 	unsigned int		rt_nr_boosted;
-
-	struct rq		*rq;	/* this is always top-level rq, cache? */
 #endif
 #ifdef CONFIG_CGROUP_SCHED
 	struct task_group	*tg;	/* this tg has "this" rt_rq on given CPU for runnable entities */
@@ -3308,11 +3306,16 @@ static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
 	return container_of_const(rt_se, struct task_struct, rt);
 }
 
+static inline struct rq *served_rq_of_rt_rq(struct rt_rq *rt_rq)
+{
+	WARN_ON(!rt_group_sched_enabled() && rt_rq->tg != &root_task_group);
+	return container_of_const(rt_rq, struct rq, rt);
+}
+
 static inline struct rq *rq_of_rt_rq(struct rt_rq *rt_rq)
 {
 	/* Cannot fold with non-CONFIG_RT_GROUP_SCHED version, layout */
-	WARN_ON(!rt_group_sched_enabled() && rt_rq->tg != &root_task_group);
-	return rt_rq->rq;
+	return cpu_rq(served_rq_of_rt_rq(rt_rq)->cpu);
 }
 
 static inline struct rt_rq *rt_rq_of_se(struct sched_rt_entity *rt_se)
@@ -3323,10 +3326,7 @@ static inline struct rt_rq *rt_rq_of_se(struct sched_rt_entity *rt_se)
 
 static inline struct rq *rq_of_rt_se(struct sched_rt_entity *rt_se)
 {
-	struct rt_rq *rt_rq = rt_se->rt_rq;
-
-	WARN_ON(!rt_group_sched_enabled() && rt_rq->tg != &root_task_group);
-	return rt_rq->rq;
+	return rq_of_rt_rq(rt_se->rt_rq);
 }
 #else
 static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
@@ -3334,6 +3334,11 @@ static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
 	return container_of_const(rt_se, struct task_struct, rt);
 }
 
+static inline struct rq *served_rq_of_rt_rq(struct rt_rq *rt_rq)
+{
+	return container_of_const(rt_rq, struct rq, rt);
+}
+
 static inline struct rq *rq_of_rt_rq(struct rt_rq *rt_rq)
 {
 	return container_of_const(rt_rq, struct rq, rt);
-- 
2.53.0