From: luca abeni <luca.abeni@santannapisa.it>
To: Juri Lelli <juri.lelli@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>, Tejun Heo <tj@kernel.org>,
Yuri Andriaccio <yurand2000@gmail.com>,
Ingo Molnar <mingo@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
Valentin Schneider <vschneid@redhat.com>,
linux-kernel@vger.kernel.org,
Yuri Andriaccio <yuri.andriaccio@santannapisa.it>,
hannes@cmpxchg.org, mkoutny@suse.com, cgroups@vger.kernel.org
Subject: Re: [RFC PATCH v5 20/29] sched/deadline: Allow deeper hierarchies of RT cgroups
Date: Thu, 7 May 2026 18:39:31 +0200 [thread overview]
Message-ID: <20260507183931.3915dc59@nowhere> (raw)
In-Reply-To: <afypzfyH0M7Rcge2@jlelli-thinkpadt14gen4.remote.csb>
Hi,
On Thu, 7 May 2026 17:03:41 +0200
Juri Lelli <juri.lelli@redhat.com> wrote:
> On 07/05/26 12:53, Peter Zijlstra wrote:
> > On Tue, May 05, 2026 at 09:56:58AM -1000, Tejun Heo wrote:
>
> ...
>
> > > - However, the cpu controller is a threaded controller which
> > > means that it can have threaded sub-hierarchy where the
> > > no-internal-process rule doesn't apply. This was created
> > > explicitly for cpu controller. The proposed change blocks it
> > > effectively forcing cpu controller into regular domain controller
> > > behavior subject to no-internal-process rule. Note these are
> > > enforced at controller granularity and this means that users who
> > > use the threaded mode will be forced to pick between the two.
> >
> > Right... this then means we need two controls, one to do
> > hierarchical bandwidth distribution, and one to assign bandwidth to
> > the internal group -- which is then subject to its own bandwidth
> > distribution constraint.
> >
> > This might be a little confusing, but there is no way around that
> > AFAICT.
>
> Just to check if I'm following, you are thinking something like below?
>
> groupA/
> cpu.rt.max = "50 50 100" <- 0.5 from root
> cpu.rt.internal = "20 20 100" <- 0.2 from groupA for threads at
> this level
> + threadA <
> + threadB <
> +- group1/
> cpu.rt.max = "30 30 100" <- 0.3 from groupA
> + threadC
>
> And we still keep it flat, so 2 dl-entities (per CPU), one handles
> threads at groupA level and the other threads inside group1?
An alternative idea I was thinking about: we create 2 dl entities (one
for "groupA" and one for "group1"); we set cpu.rt.max for groupA, and
we subtract group1's utilization from it. So, if groupA's cpu.rt.max is
"50 100" and group1's cpu.rt.max is "30 100", groupA is served by a dl
entity with (50-30, 100) = (20, 100), while group1 is served by a dl
entity with (30, 100).
Basically, with this idea the "internal" reservation is automatically
computed from rt.max and the children cgroups' reservations. A possible
issue is that if the children consume all of groupA's utilization, the
groupA RT tasks are left with 0 runtime (and never execute).
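To make the arithmetic concrete, here is a minimal sketch (plain
Python, not kernel code; the function name and the (runtime, period)
representation are purely illustrative, and it assumes all groups
share a common period):

```python
# Illustrative sketch: derive a group's "internal" dl reservation as
# its rt.max minus the sum of its children's rt.max, where every
# reservation is a (runtime, period) pair with a common period.

def internal_reservation(group_max, children_max):
    """group_max and each children_max[i] are (runtime, period) pairs."""
    runtime, period = group_max
    for child_runtime, child_period in children_max:
        assert child_period == period, "sketch assumes a common period"
        runtime -= child_runtime
    # If the children consume the whole budget, the group's own RT
    # tasks are left with zero runtime and would never execute.
    return (max(runtime, 0), period)

# groupA rt.max = (50, 100), group1 rt.max = (30, 100):
print(internal_reservation((50, 100), [(30, 100)]))  # (20, 100)
# Children using all of groupA's budget -> the starvation case:
print(internal_reservation((50, 100), [(50, 100)]))  # (0, 100)
```

The second call shows the issue mentioned above: the internal
reservation degenerates to zero runtime when the children's
reservations add up to the whole rt.max.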
Luca
Thread overview: 26+ messages
[not found] <20260430213835.62217-1-yurand2000@gmail.com>
[not found] ` <20260430213835.62217-14-yurand2000@gmail.com>
2026-05-05 13:04 ` [RFC PATCH v5 13/29] sched/rt: Implement dl-server operations for rt-cgroups Peter Zijlstra
[not found] ` <20260430213835.62217-15-yurand2000@gmail.com>
2026-05-05 13:16 ` [RFC PATCH v5 14/29] sched/rt: Update task event callbacks for HCBS scheduling Peter Zijlstra
[not found] ` <20260430213835.62217-16-yurand2000@gmail.com>
2026-05-05 14:36 ` [RFC PATCH v5 15/29] sched/rt: Update rt-cgroup schedulability checks Peter Zijlstra
[not found] ` <20260430213835.62217-19-yurand2000@gmail.com>
2026-05-05 14:59 ` [RFC PATCH v5 18/29] sched/core: Cgroup v2 support Peter Zijlstra
2026-05-06 19:58 ` luca abeni
2026-05-07 7:01 ` Peter Zijlstra
2026-05-07 13:30 ` luca abeni
2026-05-07 14:16 ` Peter Zijlstra
[not found] ` <20260430213835.62217-20-yurand2000@gmail.com>
2026-05-05 15:01 ` [RFC PATCH v5 19/29] sched/rt: Remove support for cgroups-v1 Peter Zijlstra
2026-05-07 15:35 ` Juri Lelli
[not found] ` <20260430213835.62217-21-yurand2000@gmail.com>
2026-05-05 15:15 ` [RFC PATCH v5 20/29] sched/deadline: Allow deeper hierarchies of RT cgroups Peter Zijlstra
2026-05-05 19:56 ` Tejun Heo
2026-05-07 10:53 ` Peter Zijlstra
2026-05-07 15:03 ` Juri Lelli
2026-05-07 15:05 ` Peter Zijlstra
2026-05-07 16:39 ` luca abeni [this message]
2026-05-11 9:29 ` Juri Lelli
2026-05-11 17:52 ` Tejun Heo
2026-05-07 16:44 ` luca abeni
2026-05-11 9:40 ` luca abeni
2026-05-11 18:15 ` Tejun Heo
2026-05-11 17:37 ` Tejun Heo
2026-05-07 14:30 ` luca abeni
2026-05-11 18:28 ` Tejun Heo
[not found] ` <20260430213835.62217-23-yurand2000@gmail.com>
2026-05-05 15:20 ` [RFC PATCH v5 22/29] sched/rt: Add rt-cgroup migration functions Peter Zijlstra
2026-05-05 15:24 ` Peter Zijlstra