public inbox for linux-kernel@vger.kernel.org
From: Aaron Lu <ziqianlu@bytedance.com>
To: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Benjamin Segall <bsegall@google.com>,
	Chengming Zhou <chengming.zhou@linux.dev>,
	Valentin Schneider <vschneid@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Josh Don <joshdon@google.com>, Ingo Molnar <mingo@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Xi Wang <xii@google.com>,
	linux-kernel@vger.kernel.org, Juri Lelli <juri.lelli@redhat.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Mel Gorman <mgorman@suse.de>,
	Chuyi Zhou <zhouchuyi@bytedance.com>,
	Jan Kiszka <jan.kiszka@siemens.com>,
	Florian Bezdeka <florian.bezdeka@siemens.com>
Subject: Re: [PATCH v2 0/5] Defer throttle when task exits to user
Date: Fri, 4 Jul 2025 17:47:50 +0800	[thread overview]
Message-ID: <20250704094750.GA1654@bytedance> (raw)
In-Reply-To: <b4461e54-ab3c-4596-8da2-ffbd4c7d87b6@amd.com>

Hi Prateek,

On Fri, Jul 04, 2025 at 02:18:49PM +0530, K Prateek Nayak wrote:
> Hello Aaron,
> 
> On 7/4/2025 1:24 PM, Aaron Lu wrote:
> > On Fri, Jul 04, 2025 at 10:04:13AM +0530, K Prateek Nayak wrote:
> > > Hello Ben,
> > > 
> > > On 7/3/2025 3:30 AM, Benjamin Segall wrote:
> > > > Aaron Lu <ziqianlu@bytedance.com> writes:
> > > > 
> > > > > For pelt clock, I chose to keep the current behavior to freeze it on
> > > > > cfs_rq's throttle time. The assumption is that tasks running in kernel
> > > > > mode should not last too long, freezing the cfs_rq's pelt clock can keep
> > > > > its load and its corresponding sched_entity's weight. Hopefully, this can
> > > > > result in a stable situation for the remaining running tasks to quickly
> > > > > finish their jobs in kernel mode.
> > > > 
> > > > I suppose the way that this would go wrong would be CPU 1 using up all
> > > > of the quota, and then a task waking up on CPU 2 and trying to run in
> > > > the kernel for a while. I suspect pelt time needs to also keep running
> > > > until all the tasks are asleep (and that's what we have been running at
> > > > google with the version based on separate accounting, so we haven't
> > > > accidentally done a large scale test of letting it pause).
> > > 
> > > Thinking out loud ...
> > > 
> > > One thing this can possibly do is create a lot of:
> > > 
> > >    throttled -> partially unthrottled -> throttled
> > > 
> > > transitions when tasks wakeup on throttled hierarchy, run for a while,
> > > and then go back to sleep. If the PELT clocks aren't frozen, this
> > > either means:
> > > 
> > > 1. Do a full walk_tg_tree_from() placing all the leaf cfs_rq that have
> > >     any load associated back onto the list and allow PELT to progress only
> > >     to then remove them again once tasks go back to sleep. A great many of
> > >     these transitions are possible theoretically which is not ideal.
> > > 
> > 
> > I'm going this route, because it is kind of already the case now :)
> > 
> > I mean throttled cfs_rqs are already added back to the leaf cfs_rq
> > list during enqueue time, to make the assert_list_leaf_cfs_rq(rq); at
> > the bottom of enqueue_task_fair() happy when a task is enqueued to a
> > throttled cfs_rq.
> > 
> > I'm sorry if this is not obvious in this series, I guess I put too many
> > things in patch3.
> > 
> > Below is the diff I cooked on top of this series to keep pelt clock
> > running as long as there is task running in a throttled cfs_rq, does it
> > look sane?
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index d869c8b51c5a6..410b850df2a12 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -5290,8 +5290,15 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> >   	se->on_rq = 1;
> >   	if (cfs_rq->nr_queued == 1) {
> > +		struct rq *rq = rq_of(cfs_rq);
> > +
> >   		check_enqueue_throttle(cfs_rq);
> >   		list_add_leaf_cfs_rq(cfs_rq);
> > +		if (cfs_rq->pelt_clock_throttled) {
> > +			cfs_rq->throttled_clock_pelt_time += rq_clock_pelt(rq) -
> > +				cfs_rq->throttled_clock_pelt;
> > +			cfs_rq->pelt_clock_throttled = 0;
> > +		}
> 
> At this point we've already done a update_load_avg() above in
> enqueue_entity() without unfreezing the PELT clock. Does it make
> sense to do it at the beginning?
>

My thinking is: when the first entity is enqueued, the cfs_rq must have
been throttled, so the update_load_avg() done above should do nothing
since its PELT clock is frozen (no decay can happen). Then after the
entity is added, this throttled cfs_rq's PELT clock is unfrozen and load
can be updated accordingly.

Looking at it another way: even if we moved the unfreezing earlier, the
delta calculated in __update_load_sum() should still be zero, because
cfs_rq->throttled_clock_pelt_time is adjusted such that the "now" value
derived from cfs_rq_clock_pelt() is the same as the last update time.

Does this make sense, or am I misunderstanding something?

> Overall idea seems sane to me. I was thinking if anything can go
> wrong by only unfreezing the PELT for one part of the hierarchy but
> I suppose the other cfs_rq can be considered individually throttled
> and it should turn out fine.
> 

Unfreezing the PELT clock for only one part of the hierarchy seems
problematic: we would have to propagate the load all the way up so that
the root level sched_entity has a correct load and gets its correct
share when it competes for cpu resources. I assume this is why Ben
thinks it is better to keep the PELT clock running as long as there is
a task running.

Note that the PELT clock is only frozen when the cfs_rq has no tasks
running, and gets unfrozen when the first entity is enqueued or at
unthrottle time. So when a new task is enqueued, at each level's
enqueue_entity():
- if it's the first entity of a throttled cfs_rq, its PELT clock will
  be unfrozen;
- if it's not the first entity, the cfs_rq's PELT clock is not frozen
  in the first place.
So in the end, we should not have a partially unfrozen hierarchy. At
least, that's my intention; please let me know if I messed up :)

Thanks for the quick review!

Best regards,
Aaron

Thread overview: 21+ messages
2025-06-18  8:19 [PATCH v2 0/5] Defer throttle when task exits to user Aaron Lu
2025-06-18  8:19 ` [PATCH v2 1/5] sched/fair: Add related data structure for task based throttle Aaron Lu
2025-06-18  8:19 ` [PATCH v2 2/5] sched/fair: Implement throttle task work and related helpers Aaron Lu
2025-06-18  9:03   ` Chengming Zhou
2025-06-18  8:19 ` [PATCH v2 3/5] sched/fair: Switch to task based throttle model Aaron Lu
2025-06-18  9:55   ` Chengming Zhou
2025-06-18 11:19     ` Aaron Lu
2025-06-19 12:02       ` Chengming Zhou
2025-06-18  8:19 ` [PATCH v2 4/5] sched/fair: Task based throttle time accounting Aaron Lu
2025-06-18  8:19 ` [PATCH v2 5/5] sched/fair: Get rid of throttled_lb_pair() Aaron Lu
2025-07-01  8:31 ` [PATCH v2 0/5] Defer throttle when task exits to user Aaron Lu
2025-07-03  7:37   ` Peter Zijlstra
2025-07-03 11:51     ` Aaron Lu
2025-07-02  4:25 ` K Prateek Nayak
2025-07-02  8:51   ` Aaron Lu
2025-07-02 22:00 ` Benjamin Segall
2025-07-03  6:34   ` Aaron Lu
2025-07-04  4:34   ` K Prateek Nayak
2025-07-04  7:54     ` Aaron Lu
2025-07-04  8:48       ` K Prateek Nayak
2025-07-04  9:47         ` Aaron Lu [this message]
