From: Benjamin Segall <bsegall@google.com>
To: Chuyi Zhou <zhouchuyi@bytedance.com>
Cc: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
rostedt@goodmis.org, mgorman@suse.de, vschneid@redhat.com,
chengming.zhou@linux.dev, linux-kernel@vger.kernel.org,
joshdon@google.com
Subject: Re: [PATCH v2 1/2] sched/fair: Decrease cfs bandwidth usage in task_group destruction
Date: Tue, 23 Jul 2024 18:26:48 -0700 [thread overview]
Message-ID: <xm26le1rzijr.fsf@google.com> (raw)
In-Reply-To: <20240723122006.47053-2-zhouchuyi@bytedance.com> (Chuyi Zhou's message of "Tue, 23 Jul 2024 20:20:05 +0800")
Chuyi Zhou <zhouchuyi@bytedance.com> writes:
> The static key __cfs_bandwidth_used indicates whether bandwidth
> control is enabled in the system. Currently, it is only decreased when a
> task group explicitly disables bandwidth control. This is incorrect: if a
> task group that enabled bandwidth control in the past is destroyed without
> first disabling it, __cfs_bandwidth_used never returns to zero, even when
> no task_group is using bandwidth control anymore.
>
> This patch fixes the issue by decreasing bandwidth usage in
> destroy_cfs_bandwidth(). cfs_bandwidth_usage_dec() calls
> static_key_slow_dec_cpuslocked(), which needs to hold the hotplug lock,
> but cfs bandwidth destruction may run in an RCU callback. Move the call to
> destroy_cfs_bandwidth() from unregister_fair_sched_group() to
> cpu_cgroup_css_free(), which runs in process context.
>
> Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
Reviewed-by: Ben Segall <bsegall@google.com>
> ---
> kernel/sched/core.c | 2 ++
> kernel/sched/fair.c | 13 +++++++------
> kernel/sched/sched.h | 2 ++
> 3 files changed, 11 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 6d35c48239be..7720d34bd71b 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -12992,8 +12995,6 @@ void unregister_fair_sched_group(struct task_group *tg)
> struct rq *rq;
> int cpu;
>
> - destroy_cfs_bandwidth(tg_cfs_bandwidth(tg));
> -
> for_each_possible_cpu(cpu) {
> if (tg->se[cpu])
> remove_entity_load_avg(tg->se[cpu]);
There is a slightly subtle point here: autogroup cannot have a quota
set, so it never needs this destroy path. If there is some shenanigans
way to make that possible, it would need a destroy call as well.
autogroup is already making assumptions anyway, though.