Linux cgroups development
From: Juri Lelli <juri.lelli@redhat.com>
To: luca abeni <luca.abeni@santannapisa.it>
Cc: Peter Zijlstra <peterz@infradead.org>, Tejun Heo <tj@kernel.org>,
	Yuri Andriaccio <yurand2000@gmail.com>,
	Ingo Molnar <mingo@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Valentin Schneider <vschneid@redhat.com>,
	linux-kernel@vger.kernel.org,
	Yuri Andriaccio <yuri.andriaccio@santannapisa.it>,
	hannes@cmpxchg.org, mkoutny@suse.com, cgroups@vger.kernel.org
Subject: Re: [RFC PATCH v5 20/29] sched/deadline: Allow deeper hierarchies of RT cgroups
Date: Mon, 11 May 2026 11:29:47 +0200	[thread overview]
Message-ID: <agGhi9_SG6vRnDVq@jlelli-thinkpadt14gen4.remote.csb> (raw)
In-Reply-To: <20260507183931.3915dc59@nowhere>

On 07/05/26 18:39, luca abeni wrote:
> Hi,
> 
> On Thu, 7 May 2026 17:03:41 +0200
> Juri Lelli <juri.lelli@redhat.com> wrote:
> 
> > On 07/05/26 12:53, Peter Zijlstra wrote:
> > > On Tue, May 05, 2026 at 09:56:58AM -1000, Tejun Heo wrote:  
> > 
> > ...
> > 
> > > > - However, the cpu controller is a threaded controller, which
> > > > means it can have a threaded sub-hierarchy where the
> > > > no-internal-process rule doesn't apply. This was created
> > > > explicitly for the cpu controller. The proposed change blocks it,
> > > > effectively forcing the cpu controller into regular domain-controller
> > > > behavior, subject to the no-internal-process rule. Note these are
> > > > enforced at controller granularity, which means that users who
> > > > use threaded mode will be forced to pick between the two.  
> > > 
> > > Right... this then means we need two controls, one to do
> > > hierarchical bandwidth distribution, and one to assign bandwidth to
> > > the internal group -- which is then subject to its own bandwidth
> > > distribution constraint.
> > > 
> > > This might be a little confusing, but there is no way around that
> > > AFAICT.  
> > 
> > Just to check if I'm following, you are thinking something like below?
> > 
> > groupA/
> >   cpu.rt.max = "50 50 100"       <- 0.5 from root
> >   cpu.rt.internal = "20 20 100"  <- 0.2 from groupA for threads at this level
> >   + threadA                               <
> >   + threadB                               <
> >   +- group1/
> >        cpu.rt.max = "30 30 100"  <- 0.3 from groupA
> >        + threadC
> > 
> > And we still keep it flat, so 2 dl-entities (per CPU), one handles
> > threads at groupA level and the other threads inside group1?
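For concreteness, the explicit layout above boils down to a small admission
check: the internal reservation plus every child's cpu.rt.max must fit inside
the group's own cpu.rt.max. The sketch below is only a toy model of that
check; the cpu.rt.max/cpu.rt.internal names come from the proposal quoted
above, and reading the values as (runtime, period) pairs is an assumption of
this sketch, not an existing kernel interface.

```python
from fractions import Fraction

def util(runtime, period):
    # Bandwidth as a fraction of one CPU, kept exact with Fraction.
    return Fraction(runtime, period)

def admits(group_max, internal, children):
    """Explicit scheme: the internal reservation plus the children's
    cpu.rt.max reservations must fit inside the group's own cpu.rt.max.
    All arguments are (runtime, period) pairs."""
    need = util(*internal) + sum(util(*c) for c in children)
    return need <= util(*group_max)

# groupA from the example: max 0.5, internal 0.2, child group1 0.3
print(admits((50, 100), (20, 100), [(30, 100)]))  # fits exactly
print(admits((50, 100), (20, 100), [(40, 100)]))  # over budget
```

With an explicit cpu.rt.internal, growing a child reservation fails this
check visibly instead of silently starving the parent's own threads.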
> 
> An alternative idea I was thinking about: we create 2 dl entities (one
> for "groupA" and one for "group1"); we set cpu.rt.max for groupA, and
> we subtract group1's utilization from it (so, if groupA's cpu.rt.max is
> "50 100" and group1's cpu.rt.max is "30 100", groupA is served by a dl
> entity (50-30,100)=(20,100) while group1 is served by a dl entity
> (30,100)).
> 
> Basically, with this idea the "internal" reservation is automatically
> computed based on rt.max and on the children cgroups. A possible issue
> is that if the children consume all the groupA's utilization the groupA
> RT tasks remain with 0 runtime (and never execute).
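Luca's subtraction scheme above can be sketched as follows. This is a toy
model under the simplifying assumption that parent and children share the
same period; it also makes the mentioned pitfall visible: when the children
consume the whole budget, the parent's own dl entity is left with 0 runtime.

```python
def internal_entity(parent_max, children_max):
    """Automatic scheme: the parent's own dl entity gets whatever runtime
    its children leave unused. parent_max and each child are (runtime,
    period) pairs; equal periods are assumed for simplicity."""
    runtime, period = parent_max
    left = runtime - sum(r for r, _ in children_max)
    if left < 0:
        raise ValueError("children exceed the parent's cpu.rt.max")
    return (left, period)

# groupA (50, 100) with child group1 (30, 100) -> internal entity (20, 100)
print(internal_entity((50, 100), [(30, 100)]))
# the pitfall: children consume everything, parent RT tasks get 0 runtime
print(internal_entity((50, 100), [(50, 100)]))
```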

While I like the automatic approach, I also fear that it might be more
difficult to maintain/use from a systemd admin perspective: e.g., I
cannot make a subgroup reservation bigger while threads running in the
parent group consume all the remaining (internal) bandwidth. If we make
the reservation explicit, it seems easier to see where bandwidth is
allocated at all levels.

Peter? Tejun? What do we want to do with this interface?



Thread overview: 17+ messages
     [not found] <20260430213835.62217-1-yurand2000@gmail.com>
     [not found] ` <20260430213835.62217-21-yurand2000@gmail.com>
2026-05-05 15:15   ` [RFC PATCH v5 20/29] sched/deadline: Allow deeper hierarchies of RT cgroups Peter Zijlstra
2026-05-05 19:56     ` Tejun Heo
2026-05-07 10:53       ` Peter Zijlstra
2026-05-07 15:03         ` Juri Lelli
2026-05-07 15:05           ` Peter Zijlstra
2026-05-07 16:39           ` luca abeni
2026-05-11  9:29             ` Juri Lelli [this message]
2026-05-11 17:52               ` Tejun Heo
2026-05-07 16:44         ` luca abeni
2026-05-11  9:40         ` luca abeni
2026-05-11 18:15           ` Tejun Heo
2026-05-11 17:37         ` Tejun Heo
2026-05-07 14:30       ` luca abeni
2026-05-11 18:28         ` Tejun Heo
2026-05-12 17:38           ` Yuri Andriaccio
2026-05-12 18:19             ` Tejun Heo
2026-05-12 18:20               ` Tejun Heo
