public inbox for linux-mm@kvack.org
 help / color / mirror / Atom feed
* [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths
@ 2026-03-30  8:36 Ke Zhao
  2026-03-30 16:36 ` Vlastimil Babka (SUSE)
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Ke Zhao @ 2026-03-30  8:36 UTC (permalink / raw)
  To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
	John Hubbard, Brendan Jackman, Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, Ke Zhao, syzbot+2aee6839a252e612ce34

Some page allocation paths call post_alloc_hook() but skip
kmsan_alloc_page(), leaving stale KMSAN shadow on the allocated
pages. Fix this by calling kmsan_alloc_page() explicitly after
these paths successfully obtain new pages.

Reported-by: syzbot+2aee6839a252e612ce34@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=2aee6839a252e612ce34
Signed-off-by: Ke Zhao <ke.zhao.kernel@gmail.com>
---
 mm/page_alloc.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2d4b6f1a554e..6435e8708ef4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5189,6 +5189,10 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 
 		prep_new_page(page, 0, gfp, 0);
 		set_page_refcounted(page);
+
+		trace_mm_page_alloc(page, 0, gfp, ac.migratetype);
+		kmsan_alloc_page(page, 0, gfp);
+
 		page_array[nr_populated++] = page;
 	}
 
@@ -6911,6 +6915,12 @@ static void split_free_frozen_pages(struct list_head *list, gfp_t gfp_mask)
 			int i;
 
 			post_alloc_hook(page, order, gfp_mask);
+			/*
+			 * Initialize KMSAN state right after post_alloc_hook(),
+			 * so that outer callers freeing sub-pages after the
+			 * split see properly initialized shadow.
+			 */
+			kmsan_alloc_page(page, order, gfp_mask);
 			if (!order)
 				continue;
 
@@ -7117,6 +7127,9 @@ int alloc_contig_frozen_range_noprof(unsigned long start, unsigned long end,
 
 		check_new_pages(head, order);
 		prep_new_page(head, order, gfp_mask, 0);
+
+		trace_mm_page_alloc(head, order, gfp_mask, get_pageblock_migratetype(head));
+		kmsan_alloc_page(head, order, gfp_mask);
 	} else {
 		ret = -EINVAL;
 		WARN(true, "PFN range: requested [%lu, %lu), allocated [%lu, %lu)\n",

---
base-commit: bbeb83d3182abe0d245318e274e8531e5dd7a948
change-id: 20260325-fix-kmsan-e291f752a949

Best regards,
-- 
Ke Zhao <ke.zhao.kernel@gmail.com>



^ permalink raw reply related	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2026-03-31 14:22 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-30  8:36 [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths Ke Zhao
2026-03-30 16:36 ` Vlastimil Babka (SUSE)
2026-03-30 20:39 ` Usama Anjum
2026-03-31  2:00   ` Ke Zhao
2026-03-31  7:53     ` Muhammad Usama Anjum
2026-03-31  2:04   ` Ke Zhao
2026-03-31 13:38 ` kernel test robot
2026-03-31 14:22 ` kernel test robot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox