From: Qing Wang <wangqing7171@gmail.com>
To: Vlastimil Babka <vbabka@kernel.org>, Harry Yoo <harry@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Hao Li <hao.li@linux.dev>, Christoph Lameter <cl@gentwo.org>,
David Rientjes <rientjes@google.com>,
Roman Gushchin <roman.gushchin@linux.dev>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Qing Wang <wangqing7171@gmail.com>
Subject: [PATCH] mm/slub: hold cpus_read_lock around flush_rcu_sheaves_on_cache()
Date: Fri, 8 May 2026 16:21:49 +0800
Message-ID: <20260508082149.182139-1-wangqing7171@gmail.com>
flush_rcu_sheaves_on_cache() calls queue_work_on() in a
for_each_online_cpu() loop, which requires each CPU in the loop to stay
online. However, kvfree_rcu_barrier_on_cache() does not hold
cpus_read_lock() around this call.
There are two paths that call flush_rcu_sheaves_on_cache():
// has cpus_read_lock()
flush_all_rcu_sheaves()
-> flush_rcu_sheaves_on_cache()
// no cpus_read_lock()
kvfree_rcu_barrier_on_cache()
-> flush_rcu_sheaves_on_cache()
Fix this by holding cpus_read_lock() in kvfree_rcu_barrier_on_cache().
Why not move cpus_read_lock() from flush_all_rcu_sheaves() into
flush_rcu_sheaves_on_cache()? Doing so would introduce a new lock
order (slab_mutex -> cpu_hotplug_lock). The reverse order
(cpu_hotplug_lock -> slab_mutex) is already established by
- cpuhp_setup_state_nocalls(..., slub_cpu_setup, ...)
- kmem_cache_destroy()
The two orders together would form an AB-BA deadlock.
Finally, add lockdep_assert_cpus_held() in flush_rcu_sheaves_on_cache()
to catch the same problem in the future.
Signed-off-by: Qing Wang <wangqing7171@gmail.com>
---
mm/slab_common.c | 6 ++++++
mm/slub.c | 1 +
2 files changed, 7 insertions(+)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index d5a70a831a2a..0ee5a4189453 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -2110,7 +2110,13 @@ EXPORT_SYMBOL_GPL(kvfree_rcu_barrier);
void kvfree_rcu_barrier_on_cache(struct kmem_cache *s)
{
if (cache_has_sheaves(s)) {
+ /*
+ * flush_rcu_sheaves_on_cache() uses queue_work_on(), which must
+ * be called with the CPU hotplug read lock held.
+ */
+ cpus_read_lock();
flush_rcu_sheaves_on_cache(s);
+ cpus_read_unlock();
rcu_barrier();
}
diff --git a/mm/slub.c b/mm/slub.c
index 161079ac5ba1..2a005d1e3a74 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4024,6 +4024,7 @@ void flush_rcu_sheaves_on_cache(struct kmem_cache *s)
struct slub_flush_work *sfw;
unsigned int cpu;
+ lockdep_assert_cpus_held();
mutex_lock(&flush_lock);
for_each_online_cpu(cpu) {
--
2.34.1
Thread overview: 4+ messages
2026-05-08  8:21 Qing Wang [this message]
2026-05-12  2:56 ` [PATCH] mm/slub: hold cpus_read_lock around flush_rcu_sheaves_on_cache() Harry Yoo (Oracle)
2026-05-12  3:46 ` [PATCH v2] " Qing Wang
2026-05-12  3:50 ` [PATCH v3] " Qing Wang