From: Yuri Andriaccio
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider
Cc: linux-kernel@vger.kernel.org, Luca Abeni, Yuri Andriaccio
Subject: [RFC PATCH v5 09/29] sched/core: Initialize HCBS specific structures
Date: Thu, 30 Apr 2026 23:38:13 +0200
Message-ID: <20260430213835.62217-10-yurand2000@gmail.com>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260430213835.62217-1-yurand2000@gmail.com>
References: <20260430213835.62217-1-yurand2000@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: luca abeni

Update autogroup creation/destruction to use the new data structures.

Initialize the default bandwidth for rt-cgroups in sched_init().

Initialize the rt-scheduler's specific data structures for the root
control group in sched_init().

Remove init_tg_rt_entry() in favour of setting up the necessary data
structures directly in sched_init().

Add utility functions to check whether an rt_rq is connected to an
rt-cgroup, and to retrieve its scheduling entity if so.
Co-developed-by: Alessio Balsini
Signed-off-by: Alessio Balsini
Co-developed-by: Andrea Parri
Signed-off-by: Andrea Parri
Co-developed-by: Yuri Andriaccio
Signed-off-by: Yuri Andriaccio
Signed-off-by: luca abeni
---
 kernel/sched/autogroup.c |  4 ++--
 kernel/sched/core.c      | 11 +++++++++--
 kernel/sched/deadline.c  |  8 ++++++++
 kernel/sched/rt.c        | 11 -----------
 kernel/sched/sched.h     | 30 +++++++++++++++++++++++++++---
 5 files changed, 46 insertions(+), 18 deletions(-)

diff --git a/kernel/sched/autogroup.c b/kernel/sched/autogroup.c
index e380cf9372bb..2122a0740a19 100644
--- a/kernel/sched/autogroup.c
+++ b/kernel/sched/autogroup.c
@@ -52,7 +52,7 @@ static inline void autogroup_destroy(struct kref *kref)
 #ifdef CONFIG_RT_GROUP_SCHED
 	/* We've redirected RT tasks to the root task group... */
-	ag->tg->rt_se = NULL;
+	ag->tg->dl_se = NULL;
 	ag->tg->rt_rq = NULL;
 #endif
 	sched_release_group(ag->tg);
@@ -109,7 +109,7 @@ static inline struct autogroup *autogroup_create(void)
 	 * the policy change to proceed.
 	 */
 	free_rt_sched_group(tg);
-	tg->rt_se = root_task_group.rt_se;
+	tg->dl_se = root_task_group.dl_se;
 	tg->rt_rq = root_task_group.rt_rq;
 #endif /* CONFIG_RT_GROUP_SCHED */
 	tg->autogroup = ag;

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a203a27fb16d..4e58b4f165ed 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8636,7 +8636,7 @@ void __init sched_init(void)
 	scx_tg_init(&root_task_group);
 #endif /* CONFIG_EXT_GROUP_SCHED */
 #ifdef CONFIG_RT_GROUP_SCHED
-	root_task_group.rt_se = (struct sched_rt_entity **)ptr;
+	root_task_group.dl_se = (struct sched_dl_entity **)ptr;
 	ptr += nr_cpu_ids * sizeof(void **);

 	root_task_group.rt_rq = (struct rt_rq **)ptr;
@@ -8647,6 +8647,11 @@ void __init sched_init(void)

 	init_defrootdomain();

+#ifdef CONFIG_RT_GROUP_SCHED
+	init_dl_bandwidth(&root_task_group.dl_bandwidth,
+			global_rt_period(), global_rt_runtime());
+#endif /* CONFIG_RT_GROUP_SCHED */
+
 #ifdef CONFIG_CGROUP_SCHED
 	task_group_cache = KMEM_CACHE(task_group, 0);
@@ -8698,7 +8703,9 @@ void __init sched_init(void)
 	 * starts working after scheduler_running, which is not the case
 	 * yet.
 	 */
-	init_tg_rt_entry(&root_task_group, &rq->rt, NULL, i, NULL);
+	rq->rt.tg = &root_task_group;
+	root_task_group.rt_rq[i] = &rq->rt;
+	root_task_group.dl_se[i] = NULL;
 #endif
 	rq->next_class = &idle_sched_class;

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 67615a0539fe..7c039d5f3c5d 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -505,6 +505,14 @@ static inline int is_leftmost(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq

 static void init_dl_rq_bw_ratio(struct dl_rq *dl_rq);

+void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime)
+{
+	raw_spin_lock_init(&dl_b->dl_runtime_lock);
+	dl_b->dl_period = period;
+	dl_b->dl_runtime = runtime;
+}
+
+
 void init_dl_bw(struct dl_bw *dl_b)
 {
 	raw_spin_lock_init(&dl_b->lock);

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index dd4aee5570aa..741fac9f57ac 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -97,17 +97,6 @@ void free_rt_sched_group(struct task_group *tg)
 	return;
 }

-void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
-		struct sched_rt_entity *rt_se, int cpu,
-		struct sched_rt_entity *parent)
-{
-	rt_rq->highest_prio.curr = MAX_RT_PRIO-1;
-	rt_rq->tg = tg;
-
-	tg->rt_rq[cpu] = rt_rq;
-	tg->rt_se[cpu] = rt_se;
-}
-
 int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
 {
 	if (!rt_group_sched_enabled())

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1c614e54eba4..e7e263d3cddb 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -604,9 +604,6 @@ extern void start_cfs_bandwidth(struct cfs_bandwidth *cfs_b);
 extern void unthrottle_cfs_rq(struct cfs_rq *cfs_rq);
 extern bool cfs_task_bw_constrained(struct task_struct *p);

-extern void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
-		struct sched_rt_entity *rt_se, int cpu,
-		struct sched_rt_entity *parent);
 extern int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us);
 extern int sched_group_set_rt_period(struct task_group *tg, u64 rt_period_us);
 extern long sched_group_rt_runtime(struct task_group *tg);
@@ -2905,6 +2902,7 @@ extern void resched_curr(struct rq *rq);
 extern void resched_curr_lazy(struct rq *rq);
 extern void resched_cpu(int cpu);

+void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime);
 extern void init_dl_entity(struct sched_dl_entity *dl_se);

 extern void init_cfs_throttle_work(struct task_struct *p);
@@ -3348,6 +3346,22 @@ static inline struct rq *rq_of_rt_se(struct sched_rt_entity *rt_se)
 {
 	return rq_of_rt_rq(rt_se->rt_rq);
 }
+
+static inline int is_dl_group(struct rt_rq *rt_rq)
+{
+	return rt_rq->tg != &root_task_group;
+}
+
+/*
+ * Return the scheduling entity of this group of tasks.
+ */
+static inline struct sched_dl_entity *dl_group_of(struct rt_rq *rt_rq)
+{
+	if (WARN_ON_ONCE(!is_dl_group(rt_rq)))
+		return NULL;
+
+	return rt_rq->tg->dl_se[served_rq_of_rt_rq(rt_rq)->cpu];
+}
 #else
 static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
 {
@@ -3377,6 +3391,16 @@ static inline struct rt_rq *rt_rq_of_se(struct sched_rt_entity *rt_se)
 	return &rq->rt;
 }
+
+static inline int is_dl_group(struct rt_rq *rt_rq)
+{
+	return 0;
+}
+
+static inline struct sched_dl_entity *dl_group_of(struct rt_rq *rt_rq)
+{
+	return NULL;
+}
 #endif

 DEFINE_LOCK_GUARD_2(double_rq_lock, struct rq,
--
2.53.0