From: Chen Ridong <chenridong@huaweicloud.com>
To: Waiman Long <llong@redhat.com>,
tj@kernel.org, hannes@cmpxchg.org, mkoutny@suse.com
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
lujialin4@huawei.com, chenridong@huawei.com
Subject: Re: [PATCH -next v2] cpuset: Remove unnecessary checks in rebuild_sched_domains_locked
Date: Thu, 27 Nov 2025 09:57:30 +0800
Message-ID: <362d011f-dfc8-44ed-ab6e-8b393dc04619@huaweicloud.com>
In-Reply-To: <518ffa19-fcb2-4131-942d-02aa8328a815@redhat.com>
On 2025/11/27 3:47, Waiman Long wrote:
> On 11/26/25 4:11 AM, Chen Ridong wrote:
>> From: Chen Ridong <chenridong@huawei.com>
>>
>> Commit 406100f3da08 ("cpuset: fix race between hotplug work and later CPU
>> offline") added a check for empty effective_cpus in partitions for cgroup
>> v2. However, this check did not account for remote partitions, which were
>> introduced later.
>>
>> After commit 2125c0034c5d ("cgroup/cpuset: Make cpuset hotplug processing
>> synchronous"), cpuset hotplug handling is now synchronous. This eliminates
>> the race condition with subsequent CPU offline operations that the original
>> check aimed to fix.
>>
>> Instead of extending the check to support remote partitions, this patch
>> removes the redundant effective_cpus checks entirely. Additionally, it
>> adds a check and warning to verify that all generated sched domains
>> consist of active CPUs, preventing partition_sched_domains() from being
>> invoked with offline CPUs.
>>
>> Signed-off-by: Chen Ridong <chenridong@huawei.com>
>> ---
>> kernel/cgroup/cpuset.c | 50 +++++++++++++-----------------------------
>> 1 file changed, 15 insertions(+), 35 deletions(-)
>>
>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>> index 6e6eb09b8db6..fea577b4016a 100644
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -1103,53 +1103,33 @@ void dl_rebuild_rd_accounting(void)
>> */
>> void rebuild_sched_domains_locked(void)
>> {
>> - struct cgroup_subsys_state *pos_css;
>> struct sched_domain_attr *attr;
>> cpumask_var_t *doms;
>> - struct cpuset *cs;
>> int ndoms;
>> + int i;
>> lockdep_assert_cpus_held();
>> lockdep_assert_held(&cpuset_mutex);
>> force_sd_rebuild = false;
>> - /*
>> - * If we have raced with CPU hotplug, return early to avoid
>> - * passing doms with offlined cpu to partition_sched_domains().
>> - * Anyways, cpuset_handle_hotplug() will rebuild sched domains.
>> - *
>> - * With no CPUs in any subpartitions, top_cpuset's effective CPUs
>> - * should be the same as the active CPUs, so checking only top_cpuset
>> - * is enough to detect racing CPU offlines.
>> - */
>> - if (cpumask_empty(subpartitions_cpus) &&
>> - !cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask))
>> - return;
>> + /* Generate domain masks and attrs */
>> + ndoms = generate_sched_domains(&doms, &attr);
>> /*
>> - * With subpartition CPUs, however, the effective CPUs of a partition
>> - * root should be only a subset of the active CPUs. Since a CPU in any
>> - * partition root could be offlined, all must be checked.
>> - */
>> - if (!cpumask_empty(subpartitions_cpus)) {
>> - rcu_read_lock();
>> - cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) {
>> - if (!is_partition_valid(cs)) {
>> - pos_css = css_rightmost_descendant(pos_css);
>> - continue;
>> - }
>> - if (!cpumask_subset(cs->effective_cpus,
>> - cpu_active_mask)) {
>> - rcu_read_unlock();
>> - return;
>> - }
>> - }
>> - rcu_read_unlock();
>> + * cpuset_hotplug_workfn() is now invoked synchronously, so this
>> + * function should not race with CPU hotplug, and the effective CPUs
>> + * must not include any offline CPUs. Passing an offline CPU in
>> + * doms to partition_sched_domains() will trigger a kernel panic.
>> + *
>> + * We perform a final check here: if doms contains any offline
>> + * CPUs, a warning is emitted and we return early to prevent
>> + * the panic.
>> + */
>> + for (i = 0; i < ndoms; ++i) {
>> + if (WARN_ON_ONCE(!cpumask_subset(doms[i], cpu_active_mask)))
>> + return;
>> }
>> - /* Generate domain masks and attrs */
>> - ndoms = generate_sched_domains(&doms, &attr);
>> -
>> /* Have scheduler rebuild the domains */
>> partition_sched_domains(ndoms, doms, attr);
>> }
> Reviewed-by: Waiman Long <longman@redhat.com>
>
> Thanks!
>
Thanks.
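
As a side note for anyone reading along: the check the patch adds boils
down to a subset test of each generated domain mask against the active
mask. Below is a minimal user-space sketch of that pattern, using plain
uint64_t bitmasks as stand-ins for struct cpumask and cpumask_subset();
the values are hypothetical and this is not the kernel code itself:

#include <stdint.h>
#include <stdio.h>

/* Stand-in for cpumask_subset(): true if every bit of a is also in b. */
static int mask_subset(uint64_t a, uint64_t b)
{
	return (a & ~b) == 0;
}

int main(void)
{
	/* Hypothetical domains: CPUs {0,1} and {2,3}; CPU 3 is offline. */
	uint64_t doms[] = { 0x3, 0xc };
	uint64_t active = 0x7;	/* CPUs 0-2 active */
	int ndoms = 2;
	int i;

	for (i = 0; i < ndoms; i++) {
		if (!mask_subset(doms[i], active)) {
			/* Mirrors the WARN_ON_ONCE() + early return. */
			fprintf(stderr, "domain %d has offline CPUs\n", i);
			return 1;
		}
	}
	puts("all domains consist of active CPUs");
	return 0;
}

With these example values the second domain trips the check, which is
exactly the situation the early return in the patch guards against.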
--
Best regards,
Ridong