From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yuri Andriaccio
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider
Cc: linux-kernel@vger.kernel.org, Luca Abeni, Yuri Andriaccio
Subject: [RFC PATCH v5 11/29] sched/rt: Add {alloc/unregister/free}_rt_sched_group
Date: Thu, 30 Apr 2026 23:38:15 +0200
Message-ID: <20260430213835.62217-12-yurand2000@gmail.com>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260430213835.62217-1-yurand2000@gmail.com>
References: <20260430213835.62217-1-yurand2000@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: luca abeni

Add allocation and deallocation code for rt-cgroups. Declare the
dl_server-specific functions (skeleton only, no implementation yet)
that the deadline servers need to call when trying to schedule.
Co-developed-by: Alessio Balsini
Signed-off-by: Alessio Balsini
Co-developed-by: Andrea Parri
Signed-off-by: Andrea Parri
Co-developed-by: Yuri Andriaccio
Signed-off-by: Yuri Andriaccio
Signed-off-by: luca abeni
---
 kernel/sched/rt.c | 151 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 149 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 741fac9f57ac..3d7f2b2ebe60 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -88,24 +88,171 @@ void init_rt_rq(struct rt_rq *rt_rq)
 
 void unregister_rt_sched_group(struct task_group *tg)
 {
+	int i;
+
+	if (!rt_group_sched_enabled())
+		return;
+
+	if (!tg->dl_se || !tg->rt_rq)
+		return;
+
+	for_each_possible_cpu(i) {
+		if (!tg->dl_se[i] || !tg->rt_rq[i])
+			continue;
+
+		if (tg->dl_se[i]->dl_runtime)
+			dl_init_tg(tg->dl_se[i], 0, tg->dl_se[i]->dl_period);
+	}
 }
 
 void free_rt_sched_group(struct task_group *tg)
 {
+	int i;
+	unsigned long flags;
+
 	if (!rt_group_sched_enabled())
 		return;
+
+	if (!tg->dl_se || !tg->rt_rq)
+		return;
+
+	for_each_possible_cpu(i) {
+		if (!tg->dl_se[i] || !tg->rt_rq[i])
+			continue;
+
+		/*
+		 * Shutdown the dl_server and free it
+		 *
+		 * Since the dl timer is going to be cancelled,
+		 * we risk to never decrease the running bw...
+		 * Fix this issue by changing the group runtime
+		 * to 0 immediately before freeing it.
+		 */
+		if (tg->dl_se[i]->dl_runtime)
+			dl_init_tg(tg->dl_se[i], 0, tg->dl_se[i]->dl_period);
+
+		raw_spin_rq_lock_irqsave(cpu_rq(i), flags);
+		hrtimer_cancel(&tg->dl_se[i]->dl_timer);
+		raw_spin_rq_unlock_irqrestore(cpu_rq(i), flags);
+		kfree(tg->dl_se[i]);
+
+		/* Free the local per-cpu runqueue */
+		kfree(served_rq_of_rt_rq(tg->rt_rq[i]));
+	}
+
+	kfree(tg->rt_rq);
+	kfree(tg->dl_se);
+}
+
+static struct task_struct *rt_server_pick(struct sched_dl_entity *dl_se, struct rq_flags *rf)
+{
+	return NULL;
+}
+
+static inline void __rt_rq_free(struct rt_rq **rt_rq)
+{
+	int i;
+
+	for_each_possible_cpu(i) {
+		kfree(served_rq_of_rt_rq(rt_rq[i]));
+	}
+
+	kfree(rt_rq);
+}
+
+DEFINE_FREE(rt_rq_free, struct rt_rq **, if (_T) __rt_rq_free(_T))
+
+static inline void __dl_se_free(struct sched_dl_entity **dl_se)
+{
+	int i;
+
+	for_each_possible_cpu(i) {
+		kfree(dl_se[i]);
+	}
+
+	kfree(dl_se);
+}
+
+DEFINE_FREE(dl_se_free, struct sched_dl_entity **, if (_T) __dl_se_free(_T))
+
+static int __alloc_rt_sched_group_data(struct task_group *tg)
+{
+	/* Instantiate automatic cleanup in event of kalloc fail */
+	struct rt_rq **tg_rt_rq __free(rt_rq_free) = NULL;
+	struct sched_dl_entity **tg_dl_se __free(dl_se_free) = NULL;
+	struct sched_dl_entity *dl_se __free(kfree) = NULL;
+	struct rq *s_rq __free(kfree) = NULL;
+	int i;
+
+	tg_rt_rq = kcalloc(nr_cpu_ids, sizeof(struct rt_rq *), GFP_KERNEL);
+	if (!tg_rt_rq)
+		return 0;
+
+	tg_dl_se = kcalloc(nr_cpu_ids,
+			   sizeof(struct sched_dl_entity *), GFP_KERNEL);
+	if (!tg_dl_se)
+		return 0;
+
+	for_each_possible_cpu(i) {
+		s_rq = kzalloc_node(sizeof(struct rq),
+				    GFP_KERNEL, cpu_to_node(i));
+		if (!s_rq)
+			return 0;
+
+		dl_se = kzalloc_node(sizeof(struct sched_dl_entity),
+				     GFP_KERNEL, cpu_to_node(i));
+		if (!dl_se)
+			return 0;
+
+		tg_rt_rq[i] = &no_free_ptr(s_rq)->rt;
+		tg_dl_se[i] = no_free_ptr(dl_se);
+	}
+
+	tg->rt_rq = no_free_ptr(tg_rt_rq);
+	tg->dl_se = no_free_ptr(tg_dl_se);
+
+	return 1;
 }
 
 int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
 {
+	struct sched_dl_entity *dl_se;
+	struct rq *s_rq;
+	int i;
+
 	if (!rt_group_sched_enabled())
 		return 1;
 
+	/* Allocate all necessary resources beforehand */
+	if (!__alloc_rt_sched_group_data(tg))
+		return 0;
+
+	/* Initialize the allocated resources now. */
+	init_dl_bandwidth(&tg->dl_bandwidth, 0, 0);
+
+	for_each_possible_cpu(i) {
+		s_rq = served_rq_of_rt_rq(tg->rt_rq[i]);
+		dl_se = tg->dl_se[i];
+
+		init_rt_rq(&s_rq->rt);
+		s_rq->cpu = i;
+		s_rq->rt.tg = tg;
+
+		init_dl_entity(dl_se);
+		dl_se->dl_runtime = tg->dl_bandwidth.dl_runtime;
+		dl_se->dl_deadline = tg->dl_bandwidth.dl_period;
+		dl_se->dl_period = tg->dl_bandwidth.dl_period;
+		dl_se->runtime = 0;
+		dl_se->deadline = 0;
+		dl_se->dl_bw = to_ratio(dl_se->dl_period, dl_se->dl_runtime);
+		dl_se->dl_density = to_ratio(dl_se->dl_deadline, dl_se->dl_runtime);
+		dl_se->dl_server = 1;
+		dl_server_init(dl_se, &cpu_rq(i)->dl, s_rq, rt_server_pick);
+	}
+
 	return 1;
 }
 
-#else /* !CONFIG_RT_GROUP_SCHED: */
+#else /* !CONFIG_RT_GROUP_SCHED */
 void unregister_rt_sched_group(struct task_group *tg) { }
@@ -115,7 +262,7 @@ int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
 {
 	return 1;
 }
-#endif /* !CONFIG_RT_GROUP_SCHED */
+#endif /* CONFIG_RT_GROUP_SCHED */
 
 static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev)
 {
-- 
2.53.0