* [PATCH mm-hotfixes 1/2] mm/page_alloc: return NULL early from alloc_frozen_pages_nolock() in NMI on UP
From: Harry Yoo (Oracle) @ 2026-04-27 5:42 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Shakeel Butt, Alexei Starovoitov,
Harry Yoo
Cc: Harry Yoo, Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
Johannes Weiner, Zi Yan, linux-mm, linux-kernel
On UP kernels (!CONFIG_SMP), spin_trylock() is a no-op that
unconditionally succeeds even when the lock is already held. As a
result, alloc_frozen_pages_nolock() called from NMI context can
re-enter rmqueue() and acquire the zone lock that the interrupted
context is already holding, corrupting the freelists.
With CONFIG_DEBUG_SPINLOCK on UP, the following BUG is triggered with
the slub_kunit test module:
BUG: spinlock trylock failure on UP on CPU#0, kunit_try_catch/243
[...]
Call Trace:
<NMI>
dump_stack_lvl+0x3f/0x60
do_raw_spin_trylock+0x41/0x50
_raw_spin_trylock+0x24/0x50
rmqueue.isra.0+0x2a9/0xa70
get_page_from_freelist+0xeb/0x450
alloc_frozen_pages_nolock_noprof+0x111/0x1e0
allocate_slab+0x42a/0x500
___slab_alloc+0xa7/0x4c0
kmalloc_nolock_noprof+0x164/0x310
[...]
</NMI>
Fix this by returning NULL early when invoked from NMI on a UP kernel.
Link: https://lore.kernel.org/linux-mm/ad_cqe51pvr1WaDg@hyeyoo
Fixes: d7242af86434 ("mm: Introduce alloc_frozen_pages_nolock()")
Signed-off-by: Harry Yoo (Oracle) <harry@kernel.org>
---
mm/page_alloc.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 71859993dd54..23c7298d3be2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7737,6 +7737,11 @@ struct page *alloc_frozen_pages_nolock_noprof(gfp_t gfp_flags, int nid, unsigned
*/
if (IS_ENABLED(CONFIG_PREEMPT_RT) && (in_nmi() || in_hardirq()))
return NULL;
+
+ /* On UP, spin_trylock() always succeeds even when it is locked */
+ if (!IS_ENABLED(CONFIG_SMP) && in_nmi())
+ return NULL;
+
if (!pcp_allowed_order(order))
return NULL;
--
2.43.0
* [PATCH mm-hotfixes 2/2] mm/slub: return NULL early from kmalloc_nolock() in NMI on UP
From: Harry Yoo (Oracle) @ 2026-04-27 5:42 UTC (permalink / raw)
To: Vlastimil Babka, Harry Yoo, Andrew Morton, Alexei Starovoitov
Cc: Harry Yoo, Hao Li, Christoph Lameter, David Rientjes,
Roman Gushchin, linux-mm, linux-kernel
On UP kernels (!CONFIG_SMP), spin_trylock() is a no-op that
unconditionally succeeds even when the lock is already held. As a
result, kmalloc_nolock() called from NMI context can re-enter the slab
allocator and acquire a lock that the interrupted context is already
holding, corrupting slab state.
With CONFIG_DEBUG_SPINLOCK on UP, the following BUG is triggered with
the slub_kunit test module:
BUG: spinlock trylock failure on UP on CPU#0, kunit_try_catch/243
[...]
Call Trace:
<NMI>
dump_stack_lvl+0x3f/0x60
do_raw_spin_trylock+0x41/0x50
_raw_spin_trylock+0x24/0x50
get_from_partial_node+0x120/0x4d0
___slab_alloc+0x8a/0x4c0
kmalloc_nolock_noprof+0x164/0x310
[...]
</NMI>
Fix this by returning NULL early when invoked from NMI on a UP kernel.
Link: https://lore.kernel.org/linux-mm/ad_cqe51pvr1WaDg@hyeyoo
Fixes: af92793e52c3 ("slab: Introduce kmalloc_nolock() and kfree_nolock().")
Signed-off-by: Harry Yoo (Oracle) <harry@kernel.org>
---
mm/slub.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/mm/slub.c b/mm/slub.c
index 92362eeb13e5..b4ec15df92f6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5339,6 +5339,10 @@ void *kmalloc_nolock_noprof(size_t size, gfp_t gfp_flags, int node)
if (IS_ENABLED(CONFIG_PREEMPT_RT) && (in_nmi() || in_hardirq()))
return NULL;
+ /* On UP, spin_trylock() always succeeds even when it is locked */
+ if (!IS_ENABLED(CONFIG_SMP) && in_nmi())
+ return NULL;
+
retry:
if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
return NULL;
--
2.43.0