From: Peter Zijlstra <peterz@infradead.org>
To: John Stultz <jstultz@google.com>
Cc: LKML <linux-kernel@vger.kernel.org>,
Joel Fernandes <joelagnelf@nvidia.com>,
Qais Yousef <qyousef@layalina.io>, Ingo Molnar <mingo@redhat.com>,
Juri Lelli <juri.lelli@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Valentin Schneider <vschneid@redhat.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>,
Zimuzo Ezeozue <zezeozue@google.com>,
Mel Gorman <mgorman@suse.de>, Will Deacon <will@kernel.org>,
Waiman Long <longman@redhat.com>,
Boqun Feng <boqun.feng@gmail.com>,
"Paul E. McKenney" <paulmck@kernel.org>,
Metin Kaya <Metin.Kaya@arm.com>,
Xuewen Yan <xuewen.yan94@gmail.com>,
K Prateek Nayak <kprateek.nayak@amd.com>,
Thomas Gleixner <tglx@linutronix.de>,
Daniel Lezcano <daniel.lezcano@linaro.org>,
Suleiman Souhlal <suleiman@google.com>,
kernel-team@android.com
Subject: Re: [PATCH v16 4/7] sched: Fix runtime accounting w/ split exec & sched contexts
Date: Thu, 24 Apr 2025 15:37:07 +0200 [thread overview]
Message-ID: <20250424133707.GB1166@noisy.programming.kicks-ass.net> (raw)
In-Reply-To: <CANDhNCq7SETQ7j6ifUoF_Pwiv42RNfv9V3AV+=OWg_U4+gZVbA@mail.gmail.com>
On Mon, Apr 21, 2025 at 02:00:34PM -0700, John Stultz wrote:
> On Thu, Apr 17, 2025 at 4:12 AM Peter Zijlstra <peterz@infradead.org> wrote:
> > On Fri, Apr 11, 2025 at 11:02:38PM -0700, John Stultz wrote:
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index e43993a4e5807..da8b0970c6655 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -1143,22 +1143,33 @@ static void update_tg_load_avg(struct cfs_rq *cfs_rq)
> > > }
> > > #endif /* CONFIG_SMP */
> > >
> > > -static s64 update_curr_se(struct rq *rq, struct sched_entity *curr)
> > > +static s64 update_se_times(struct rq *rq, struct sched_entity *se)
> >
> > update_se()
>
> Sure thing!
>
> > > {
> > > u64 now = rq_clock_task(rq);
> > > s64 delta_exec;
> > >
> > > - delta_exec = now - curr->exec_start;
> > > + delta_exec = now - se->exec_start;
> > > if (unlikely(delta_exec <= 0))
> > > return delta_exec;
> > >
> > > - curr->exec_start = now;
> > > - curr->sum_exec_runtime += delta_exec;
> > > + se->exec_start = now;
> > > + if (entity_is_task(se)) {
> > > + struct task_struct *running = rq->curr;
> > > + /*
> > > + * If se is a task, we account the time against the running
> > > + * task, as w/ proxy-exec they may not be the same.
> > > + */
> > > + running->se.exec_start = now;
> > > + running->se.sum_exec_runtime += delta_exec;
> > > + } else {
> > > + /* If not task, account the time against se */
> > > + se->sum_exec_runtime += delta_exec;
> > > + }
> >
> >
> > So I am confused; you're accounting runtime to the actual running task,
> > but then accounting the same runtime to the cgroup of the donor.
> >
> > This seems somewhat irregular.
>
> So, apologies, as it's been a bit since I've thought deeply about this.
> In general we want to charge the donor for everything, since it's
> donating its time, etc. However, without this change we got some
> strange behavior in top, etc., because the proxy tasks that actually
> ran didn't seem to gain any exec_runtime. So the split of charging
> everything to the donor, except sum_exec_runtime which goes to the
> actually running task (the proxy), made sense.
>
> Now, for cgroup accounting, it seems like we'd still want to charge
> the donor's cgroup, so whatever restrictions there are in place apply
> to the donor, but it's just when we get to the leaf task we charge the
> proxy instead.
>
> Does that sound reasonable? Or am I making a bad assumption here
> around the cgroup logic?
It's all rather confusing one way or the other, I'm afraid :/

This way, when people go and add up the task times and compare them to
the cgroup totals, things don't match up.

Also, by adding sum_exec_runtime to curr but doing
account_group_exec_runtime() on the donor, the cputimer information is
inconsistent.

Whatever we do, it should be internally consistent, and this ain't it.
> > Please consider all of update_curr_task(), and if they all want to be
> > against rq->curr, rather than rq->donor then more changes are needed.
>
> So I think we are ok here, but it is confusing... see more below.
Yeah, we are okay. I remembered the discussion we had the last time I
tripped over this; I just tripped over it again before remembering :-)
> > > @@ -1213,7 +1224,7 @@ s64 update_curr_common(struct rq *rq)
> > > struct task_struct *donor = rq->donor;
> > > s64 delta_exec;
> > >
> > > - delta_exec = update_curr_se(rq, &donor->se);
> > > + delta_exec = update_se_times(rq, &donor->se);
> > > if (likely(delta_exec > 0))
> > > update_curr_task(donor, delta_exec);
> > >
> > > @@ -1233,7 +1244,7 @@ static void update_curr(struct cfs_rq *cfs_rq)
> > > if (unlikely(!curr))
> > > return;
> > >
> > > - delta_exec = update_curr_se(rq, curr);
> > > + delta_exec = update_se_times(rq, curr);
> > > if (unlikely(delta_exec <= 0))
> > > return;
> >
> > I think I've tripped over this before, on how update_curr_common() uses
> > donor and update_curr() curr. This definitely needs a comment. Because
> > at first glance they're not the same.
>
> I suspect part of the incongruity/dissonance comes from the fact that
> cfs_rq->curr is actually rq->donor (where rq->donor and rq->curr
> are different), as it's what the sched-class picked to run.
>
> Renaming that might clarify things, I think, but I have been hesitant
> to cause too much naming churn in the series; maybe it's the right
> time to do it if it's causing confusion.
>
> My other hesitancy there is around wanting the proxy logic to be
> focused in the core, so the sched-class "curr" can still be what the
> class selected to run; it's just that proxy might pick something else
> to actually run. But the top-level rq->curr not being the cfs_rq->curr
> is prone to confusion, and we already do have rq->donor references in
> fair.c, so it's not like it's perfectly encapsulated and layered.
>
> But I'll take a pass at renaming cfs_rq->curr to cfs_rq->donor, unless
> you object.
I was thinking more of a comment near here to clarify. Not sure
cfs_rq->donor makes much sense.
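Something along these lines, perhaps (wording is only a sketch) above
update_curr():

```c
/*
 * Note: cfs_rq->curr is whatever entity the sched-class picked to run,
 * i.e. it corresponds to rq->donor, not rq->curr.  With proxy execution
 * the task actually running (rq->curr) can differ from the donor whose
 * scheduler context is being charged here.
 */
```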
Thread overview: 28+ messages
2025-04-12 6:02 [PATCH v16 0/7] Single RunQueue Proxy Execution (v16) John Stultz
2025-04-12 6:02 ` [PATCH v16 1/7] sched: Add CONFIG_SCHED_PROXY_EXEC & boot argument to enable/disable John Stultz
2025-04-14 8:50 ` Juri Lelli
2025-04-16 21:24 ` John Stultz
2025-04-29 9:16 ` Juri Lelli
2025-04-12 6:02 ` [PATCH v16 2/7] locking/mutex: Rework task_struct::blocked_on John Stultz
2025-04-14 8:59 ` Juri Lelli
2025-04-16 21:28 ` John Stultz
2025-04-12 6:02 ` [PATCH v16 3/7] locking/mutex: Add p->blocked_on wrappers for correctness checks John Stultz
2025-04-14 9:09 ` Juri Lelli
2025-04-16 22:44 ` John Stultz
2025-04-12 6:02 ` [PATCH v16 4/7] sched: Fix runtime accounting w/ split exec & sched contexts John Stultz
2025-04-14 9:28 ` Juri Lelli
2025-04-16 23:30 ` John Stultz
2025-04-29 9:24 ` Juri Lelli
2025-04-17 11:12 ` Peter Zijlstra
2025-04-21 21:00 ` John Stultz
2025-04-24 13:37 ` Peter Zijlstra [this message]
2025-04-26 3:34 ` John Stultz
2025-04-12 6:02 ` [PATCH v16 5/7] sched: Add an initial sketch of the find_proxy_task() function John Stultz
2025-04-14 9:41 ` Juri Lelli
2025-04-16 22:55 ` John Stultz
2025-04-17 11:18 ` Peter Zijlstra
2025-04-22 21:14 ` John Stultz
2025-04-12 6:02 ` [PATCH v16 6/7] sched: Fix proxy/current (push,pull)ability John Stultz
2025-04-14 3:28 ` K Prateek Nayak
2025-04-16 21:18 ` John Stultz
2025-04-12 6:02 ` [PATCH v16 7/7] sched: Start blocked_on chain processing in find_proxy_task() John Stultz