* [PATCH] mm/slub: hold cpus_read_lock around flush_rcu_sheaves_on_cache()
@ 2026-05-08 8:21 Qing Wang
From: Qing Wang @ 2026-05-08 8:21 UTC
To: Vlastimil Babka, Harry Yoo, Andrew Morton, Hao Li,
Christoph Lameter, David Rientjes, Roman Gushchin
Cc: linux-mm, linux-kernel, Qing Wang
flush_rcu_sheaves_on_cache() calls queue_work_on() in a
for_each_online_cpu() loop, which requires the CPUs to stay online.
However, cpus_read_lock() is not held on the kvfree_rcu_barrier_on_cache()
path.
There are two paths that call flush_rcu_sheaves_on_cache():
// has cpus_read_lock()
flush_all_rcu_sheaves()
-> flush_rcu_sheaves_on_cache()
// no cpus_read_lock()
kvfree_rcu_barrier_on_cache()
-> flush_rcu_sheaves_on_cache()
Fix this by holding cpus_read_lock() in kvfree_rcu_barrier_on_cache().
Why not move cpus_read_lock() from flush_all_rcu_sheaves() into
flush_rcu_sheaves_on_cache() instead? Because that would introduce a new
lock order (slab_mutex -> cpu_hotplug_lock), while the reverse order
(cpu_hotplug_lock -> slab_mutex) is already established by
- cpuhp_setup_state_nocalls(..., slub_cpu_setup, ...)
- kmem_cache_destroy()
The two orders together would form an AB-BA deadlock.
Finally, add lockdep_assert_cpus_held() in flush_rcu_sheaves_on_cache()
to catch the same problem in the future.
Signed-off-by: Qing Wang <wangqing7171@gmail.com>
---
mm/slab_common.c | 6 ++++++
mm/slub.c | 1 +
2 files changed, 7 insertions(+)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index d5a70a831a2a..0ee5a4189453 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -2110,7 +2110,13 @@ EXPORT_SYMBOL_GPL(kvfree_rcu_barrier);
void kvfree_rcu_barrier_on_cache(struct kmem_cache *s)
{
if (cache_has_sheaves(s)) {
+ /*
+ * flush_rcu_sheaves_on_cache() uses queue_work_on(), which must be
+ * called with the CPU hotplug read lock held.
+ */
+ cpus_read_lock();
flush_rcu_sheaves_on_cache(s);
+ cpus_read_unlock();
rcu_barrier();
}
diff --git a/mm/slub.c b/mm/slub.c
index 161079ac5ba1..2a005d1e3a74 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4024,6 +4024,7 @@ void flush_rcu_sheaves_on_cache(struct kmem_cache *s)
struct slub_flush_work *sfw;
unsigned int cpu;
+ lockdep_assert_cpus_held();
mutex_lock(&flush_lock);
for_each_online_cpu(cpu) {
--
2.34.1