From: Waiman Long <llong@redhat.com>
To: "Chen Ridong" <chenridong@huaweicloud.com>,
"Waiman Long" <llong@redhat.com>, "Tejun Heo" <tj@kernel.org>,
"Johannes Weiner" <hannes@cmpxchg.org>,
"Michal Koutný" <mkoutny@suse.com>,
"Ingo Molnar" <mingo@redhat.com>,
"Peter Zijlstra" <peterz@infradead.org>,
"Juri Lelli" <juri.lelli@redhat.com>,
"Vincent Guittot" <vincent.guittot@linaro.org>,
"Steven Rostedt" <rostedt@goodmis.org>,
"Ben Segall" <bsegall@google.com>, "Mel Gorman" <mgorman@suse.de>,
"Valentin Schneider" <vschneid@redhat.com>,
"Anna-Maria Behnsen" <anna-maria@linutronix.de>,
"Frederic Weisbecker" <frederic@kernel.org>,
"Thomas Gleixner" <tglx@linutronix.de>,
"Shuah Khan" <shuah@kernel.org>
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-kselftest@vger.kernel.org
Subject: Re: [PATCH/for-next v2 2/2] cgroup/cpuset: Introduce a new top level cpuset_top_mutex
Date: Wed, 4 Feb 2026 15:52:38 -0500
Message-ID: <e9d49c28-dbfe-4145-9030-5b6c8168475d@redhat.com>
In-Reply-To: <1264cf4a-0acd-475b-9f0a-57b816cdd504@huaweicloud.com>
On 2/3/26 8:55 PM, Chen Ridong wrote:
>
> On 2026/2/3 2:29, Waiman Long wrote:
>> On 2/1/26 8:11 PM, Chen Ridong wrote:
>>> On 2026/2/1 7:13, Waiman Long wrote:
>>>> On 1/30/26 9:53 PM, Chen Ridong wrote:
>>>>> On 2026/1/30 23:42, Waiman Long wrote:
>>>>>> The current cpuset partition code is able to dynamically update
>>>>>> the sched domains of a running system and the corresponding
>>>>>> HK_TYPE_DOMAIN housekeeping cpumask to perform what is essentially the
>>>>>> "isolcpus=domain,..." boot command line feature at run time.
>>>>>>
>>>>>> The housekeeping cpumask update requires flushing a number of different
>>>>>> workqueues, which may not be safe with cpus_read_lock() held as the
>>>>>> workqueue flushing code may acquire cpus_read_lock() or acquire locks
>>>>>> that have a locking dependency on cpus_read_lock() down the chain. Below
>>>>>> is an example of such a circular locking problem.
>>>>>>
>>>>>> ======================================================
>>>>>> WARNING: possible circular locking dependency detected
>>>>>> 6.18.0-test+ #2 Tainted: G S
>>>>>> ------------------------------------------------------
>>>>>> test_cpuset_prs/10971 is trying to acquire lock:
>>>>>> ffff888112ba4958 ((wq_completion)sync_wq){+.+.}-{0:0}, at:
>>>>>> touch_wq_lockdep_map+0x7a/0x180
>>>>>>
>>>>>> but task is already holding lock:
>>>>>> ffffffffae47f450 (cpuset_mutex){+.+.}-{4:4}, at:
>>>>>> cpuset_partition_write+0x85/0x130
>>>>>>
>>>>>> which lock already depends on the new lock.
>>>>>>
>>>>>> the existing dependency chain (in reverse order) is:
>>>>>> -> #4 (cpuset_mutex){+.+.}-{4:4}:
>>>>>> -> #3 (cpu_hotplug_lock){++++}-{0:0}:
>>>>>> -> #2 (rtnl_mutex){+.+.}-{4:4}:
>>>>>> -> #1 ((work_completion)(&arg.work)){+.+.}-{0:0}:
>>>>>> -> #0 ((wq_completion)sync_wq){+.+.}-{0:0}:
>>>>>>
>>>>>> Chain exists of:
>>>>>> (wq_completion)sync_wq --> cpu_hotplug_lock --> cpuset_mutex
>>>>>>
>>>>>> 5 locks held by test_cpuset_prs/10971:
>>>>>> #0: ffff88816810e440 (sb_writers#7){.+.+}-{0:0}, at:
>>>>>> ksys_write+0xf9/0x1d0
>>>>>> #1: ffff8891ab620890 (&of->mutex#2){+.+.}-{4:4}, at:
>>>>>> kernfs_fop_write_iter+0x260/0x5f0
>>>>>> #2: ffff8890a78b83e8 (kn->active#187){.+.+}-{0:0}, at:
>>>>>> kernfs_fop_write_iter+0x2b6/0x5f0
>>>>>> #3: ffffffffadf32900 (cpu_hotplug_lock){++++}-{0:0}, at:
>>>>>> cpuset_partition_write+0x77/0x130
>>>>>> #4: ffffffffae47f450 (cpuset_mutex){+.+.}-{4:4}, at:
>>>>>> cpuset_partition_write+0x85/0x130
>>>>>>
>>>>>> Call Trace:
>>>>>> <TASK>
>>>>>> :
>>>>>> touch_wq_lockdep_map+0x93/0x180
>>>>>> __flush_workqueue+0x111/0x10b0
>>>>>> housekeeping_update+0x12d/0x2d0
>>>>>> update_parent_effective_cpumask+0x595/0x2440
>>>>>> update_prstate+0x89d/0xce0
>>>>>> cpuset_partition_write+0xc5/0x130
>>>>>> cgroup_file_write+0x1a5/0x680
>>>>>> kernfs_fop_write_iter+0x3df/0x5f0
>>>>>> vfs_write+0x525/0xfd0
>>>>>> ksys_write+0xf9/0x1d0
>>>>>> do_syscall_64+0x95/0x520
>>>>>> entry_SYSCALL_64_after_hwframe+0x76/0x7e
>>>>>>
>>>>>> To avoid such a circular locking dependency problem, we have to
>>>>>> call housekeeping_update() without holding cpus_read_lock() and
>>>>>> cpuset_mutex. The current set of wq's flushed by housekeeping_update()
>>>>>> may not have work functions that call cpus_read_lock() directly,
>>>>>> but we are likely to extend the list of wq's that are flushed in the
>>>>>> future. Moreover, the current set of work functions may hold locks that
>>>>>> have cpu_hotplug_lock down the dependency chain.
>>>>>>
>>>>>> One way to do that is to introduce a new top-level cpuset_top_mutex
>>>>>> which will be acquired first. This new cpuset_top_mutex will provide
>>>>>> the needed mutual exclusion without the need to hold cpus_read_lock().
>>>>>>
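For illustration, the intended lock ordering on the write side looks
roughly like this (a sketch only; the helper name and the
housekeeping_update() arguments are placeholders, not the actual patch
code):

	static DEFINE_MUTEX(cpuset_top_mutex);

	static void cpuset_write_op(void)
	{
		/* Outermost lock, always taken before cpus_read_lock(). */
		mutex_lock(&cpuset_top_mutex);
		cpus_read_lock();
		mutex_lock(&cpuset_mutex);

		/* ... update cpuset state, compute new HK_TYPE_DOMAIN mask ... */

		mutex_unlock(&cpuset_mutex);
		cpus_read_unlock();

		/*
		 * The workqueue flushes happen only after cpus_read_lock()
		 * has been dropped, while cpuset_top_mutex still serializes
		 * this call against other cpuset writers.
		 */
		housekeeping_update(/* new mask, ... */);

		mutex_unlock(&cpuset_top_mutex);
	}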
>>>>> Introducing a new global lock warrants careful consideration. I wonder if we
>>>>> could make all updates to isolated_cpus asynchronous. If that is feasible, we
>>>>> could avoid adding a global lock altogether. If not, we need to clarify which
>>>>> updates must remain synchronous and which ones can be handled asynchronously.
>>>> Almost all the cpuset code runs with cpuset_mutex held together with either
>>>> cpus_read_lock or cpus_write_lock. So there is no concurrent access/update to
>>>> any of the cpuset internal data. The new cpuset_top_mutex is added to resolve
>>>> the possible deadlock scenarios with the new housekeeping_update() call without
>>>> breaking this model. Allowing parallel concurrent access/update to cpuset data
>>>> will greatly complicate the code and we will likely miss some corner cases that we
>>> I agree with that point. However, we already have paths where isolated_cpus is
>>> updated asynchronously, meaning parallel concurrent access/update is already
>>> happening. Therefore, we cannot entirely avoid such scenarios, so why not keep
>>> the locking simple (make all updates to isolated_cpus asynchronous)?
>> isolated_cpus should only be updated in isolated_cpus_update() where both
>> cpuset_mutex and callback_lock are held. It can be read asynchronously if either
>> cpuset_mutex or callback_lock is held. Can you show me the places where this
>> rule isn't followed?
>>
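Stated as code, the access rule for isolated_cpus is essentially this (a
sketch with illustrative lockdep assertions; the real
isolated_cpus_update() may differ in signature and detail):

	/* Writers must hold both cpuset_mutex and callback_lock. */
	static void isolated_cpus_update_sketch(struct cpumask *add,
						struct cpumask *del)
	{
		lockdep_assert_held(&cpuset_mutex);
		lockdep_assert_held(&callback_lock);

		cpumask_or(isolated_cpus, isolated_cpus, add);
		cpumask_andnot(isolated_cpus, isolated_cpus, del);
	}

	/* Readers may hold either cpuset_mutex or callback_lock. */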
> I was considering that, since the hotplug path calls update_isolation_cpumasks()
> asynchronously, other cpuset paths (such as setting CPUs or partitions) could
> also call update_isolation_cpumasks() asynchronously. If so, the global
> cpuset_top_mutex lock might be unnecessary. Note that isolated_cpus is updated
> synchronously, while housekeeping_update() is invoked asynchronously.
update_isolation_cpumasks() is always called synchronously as
cpuset_mutex will always be held. With the current patchset, the only
asynchronous piece is CPU hotplug vs the housekeeping_update() call,
as it is being called without holding cpus_read_lock(). AFAICS, it
should not be a problem. Please let me know if you are aware of any
potential hazard with the current setup.
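For completeness, patch 1/2 defers the hotplug-side housekeeping_update()
call to a workqueue. One plausible shape of that deferral is sketched
below (the work item and function names are hypothetical, not the
actual patch code):

	static void cpuset_hk_update_fn(struct work_struct *work)
	{
		/*
		 * Runs without cpus_read_lock(); cpuset_top_mutex alone
		 * serializes it against cpuset writers.
		 */
		mutex_lock(&cpuset_top_mutex);
		housekeeping_update(/* new mask, ... */);
		mutex_unlock(&cpuset_top_mutex);
	}

	static DECLARE_WORK(cpuset_hk_update_work, cpuset_hk_update_fn);

	/* From the hotplug path, with cpus_write_lock() held: */
	schedule_work(&cpuset_hk_update_work);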
Cheers,
Longman