public inbox for linux-mm@kvack.org
 help / color / mirror / Atom feed
* [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths
@ 2026-03-30  8:36 Ke Zhao
  2026-03-30 16:36 ` Vlastimil Babka (SUSE)
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Ke Zhao @ 2026-03-30  8:36 UTC (permalink / raw)
  To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
	John Hubbard, Brendan Jackman, Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, Ke Zhao, syzbot+2aee6839a252e612ce34

Some page allocation paths call post_alloc_hook() but skip
kmsan_alloc_page(), leaving stale KMSAN shadow on allocated pages.
Fix this by explicitly calling kmsan_alloc_page() after they
successfully get new pages.

Reported-by: syzbot+2aee6839a252e612ce34@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=2aee6839a252e612ce34

Signed-off-by: Ke Zhao <ke.zhao.kernel@gmail.com>
---
 mm/page_alloc.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2d4b6f1a554e..6435e8708ef4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5189,6 +5189,10 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 
 		prep_new_page(page, 0, gfp, 0);
 		set_page_refcounted(page);
+
+		trace_mm_page_alloc(page, 0, gfp, ac.migratetype);
+		kmsan_alloc_page(page, 0, gfp);
+
 		page_array[nr_populated++] = page;
 	}
 
@@ -6911,6 +6915,12 @@ static void split_free_frozen_pages(struct list_head *list, gfp_t gfp_mask)
 			int i;
 
 			post_alloc_hook(page, order, gfp_mask);
+			/*
+			 * Initialize KMSAN state right after post_alloc_hook().
+			 * This prepares the pages for subsequent outer callers
+			 * that might free sub-pages after the split.
+			 */
+			kmsan_alloc_page(page, order, gfp_mask);
 			if (!order)
 				continue;
 
@@ -7117,6 +7127,9 @@ int alloc_contig_frozen_range_noprof(unsigned long start, unsigned long end,
 
 		check_new_pages(head, order);
 		prep_new_page(head, order, gfp_mask, 0);
+
+		trace_mm_page_alloc(page, order, gfp_mask, get_pageblock_migratetype(page));
+		kmsan_alloc_page(page, order, gfp_mask);
 	} else {
 		ret = -EINVAL;
 		WARN(true, "PFN range: requested [%lu, %lu), allocated [%lu, %lu)\n",

---
base-commit: bbeb83d3182abe0d245318e274e8531e5dd7a948
change-id: 20260325-fix-kmsan-e291f752a949

Best regards,
-- 
Ke Zhao <ke.zhao.kernel@gmail.com>



^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths
  2026-03-30  8:36 [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths Ke Zhao
@ 2026-03-30 16:36 ` Vlastimil Babka (SUSE)
  2026-03-30 20:39 ` Usama Anjum
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-03-30 16:36 UTC (permalink / raw)
  To: Ke Zhao, Andrew Morton, Suren Baghdasaryan, Michal Hocko,
	John Hubbard, Brendan Jackman, Johannes Weiner, Zi Yan,
	Alexander Potapenko
  Cc: linux-mm, linux-kernel, syzbot+2aee6839a252e612ce34, Marco Elver,
	Dmitry Vyukov, kasan-dev, Muhammad Usama Anjum

+Cc KMSAN folks, please review

On 3/30/26 10:36, Ke Zhao wrote:
> Some page allocation paths call post_alloc_hook() but skip
> kmsan_alloc_page(), leaving stale KMSAN shadow on allocated pages.
> Fix this by explicitly calling kmsan_alloc_page() after they
> successfully get new pages.
> 
> Reported-by: syzbot+2aee6839a252e612ce34@syzkaller.appspotmail.com

FYI the report thread:
https://lore.kernel.org/all/698f1877.a70a0220.2c38d7.00c2.GAE@google.com/

> Closes: https://syzkaller.appspot.com/bug?extid=2aee6839a252e612ce34

Did syzbot confirm it as a fix? I wonder if this submission alone will
trigger that check without some syz test command or whatnot.

> 
> Signed-off-by: Ke Zhao <ke.zhao.kernel@gmail.com>
> ---
>  mm/page_alloc.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2d4b6f1a554e..6435e8708ef4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5189,6 +5189,10 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>  
>  		prep_new_page(page, 0, gfp, 0);
>  		set_page_refcounted(page);
> +
> +		trace_mm_page_alloc(page, 0, gfp, ac.migratetype);

Probably makes sense to add that here, yeah.

> +		kmsan_alloc_page(page, 0, gfp);
> +
>  		page_array[nr_populated++] = page;
>  	}
>  
> @@ -6911,6 +6915,12 @@ static void split_free_frozen_pages(struct list_head *list, gfp_t gfp_mask)
>  			int i;
>  
>  			post_alloc_hook(page, order, gfp_mask);
> +			/*
> +			 * Initialize KMSAN state right after post_alloc_hook().
> +			 * This prepares the pages for subsequent outer callers
> +			 * that might free sub-pages after the split.
> +			 */
> +			kmsan_alloc_page(page, order, gfp_mask);
>  			if (!order)
>  				continue;
>  
> @@ -7117,6 +7127,9 @@ int alloc_contig_frozen_range_noprof(unsigned long start, unsigned long end,
>  
>  		check_new_pages(head, order);
>  		prep_new_page(head, order, gfp_mask, 0);
> +
> +		trace_mm_page_alloc(page, order, gfp_mask, get_pageblock_migratetype(page));

But I'm not sure we want to use this trace event here; at a minimum it would
be inconsistent with the branch above that goes through split_free_frozen_pages().

> +		kmsan_alloc_page(page, order, gfp_mask);
>  	} else {
>  		ret = -EINVAL;
>  		WARN(true, "PFN range: requested [%lu, %lu), allocated [%lu, %lu)\n",
> 
> ---
> base-commit: bbeb83d3182abe0d245318e274e8531e5dd7a948
> change-id: 20260325-fix-kmsan-e291f752a949
> 
> Best regards,




* Re: [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths
  2026-03-30  8:36 [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths Ke Zhao
  2026-03-30 16:36 ` Vlastimil Babka (SUSE)
@ 2026-03-30 20:39 ` Usama Anjum
  2026-03-31  2:00   ` Ke Zhao
  2026-03-31  2:04   ` Ke Zhao
  2026-03-31 13:38 ` kernel test robot
  2026-03-31 14:22 ` kernel test robot
  3 siblings, 2 replies; 8+ messages in thread
From: Usama Anjum @ 2026-03-30 20:39 UTC (permalink / raw)
  To: Ke Zhao, Andrew Morton, Vlastimil Babka, Suren Baghdasaryan,
	Michal Hocko, John Hubbard, Brendan Jackman, Johannes Weiner,
	Zi Yan
  Cc: usama.anjum, linux-mm, linux-kernel, syzbot+2aee6839a252e612ce34

On 30/03/2026 9:36 am, Ke Zhao wrote:
> Some page allocation paths call post_alloc_hook() but skip
> kmsan_alloc_page(), leaving stale KMSAN shadow on allocated pages.
> Fix this by explicitly calling kmsan_alloc_page() after they
> successfully get new pages.
> 
> Reported-by: syzbot+2aee6839a252e612ce34@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=2aee6839a252e612ce34
> 
> Signed-off-by: Ke Zhao <ke.zhao.kernel@gmail.com>
> ---
>  mm/page_alloc.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2d4b6f1a554e..6435e8708ef4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5189,6 +5189,10 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>  
>  		prep_new_page(page, 0, gfp, 0);
>  		set_page_refcounted(page);
> +
> +		trace_mm_page_alloc(page, 0, gfp, ac.migratetype);
> +		kmsan_alloc_page(page, 0, gfp);
> +
>  		page_array[nr_populated++] = page;
>  	}
>  
> @@ -6911,6 +6915,12 @@ static void split_free_frozen_pages(struct list_head *list, gfp_t gfp_mask)
>  			int i;
>  
>  			post_alloc_hook(page, order, gfp_mask);
> +			/*
> +			 * Initialize KMSAN state right after post_alloc_hook().
> +			 * This prepares the pages for subsequent outer callers
> +			 * that might free sub-pages after the split.
> +			 */
> +			kmsan_alloc_page(page, order, gfp_mask);
>  			if (!order)
>  				continue;
>  
> @@ -7117,6 +7127,9 @@ int alloc_contig_frozen_range_noprof(unsigned long start, unsigned long end,
>  
>  		check_new_pages(head, order);
>  		prep_new_page(head, order, gfp_mask, 0);
> +
> +		trace_mm_page_alloc(page, order, gfp_mask, get_pageblock_migratetype(page));
> +		kmsan_alloc_page(page, order, gfp_mask);
There is no 'page' defined in this function. Most probably you wanted
to use 'head' in place of 'page' here.

How did you compile and test this change?

>  	} else {
>  		ret = -EINVAL;
>  		WARN(true, "PFN range: requested [%lu, %lu), allocated [%lu, %lu)\n",
> 
> ---
> base-commit: bbeb83d3182abe0d245318e274e8531e5dd7a948
> change-id: 20260325-fix-kmsan-e291f752a949
> 
> Best regards,

Thanks,
Usama



* Re: [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths
  2026-03-30 20:39 ` Usama Anjum
@ 2026-03-31  2:00   ` Ke Zhao
  2026-03-31  7:53     ` Muhammad Usama Anjum
  2026-03-31  2:04   ` Ke Zhao
  1 sibling, 1 reply; 8+ messages in thread
From: Ke Zhao @ 2026-03-31  2:00 UTC (permalink / raw)
  To: Usama Anjum, Andrew Morton, Vlastimil Babka, Suren Baghdasaryan,
	Michal Hocko, John Hubbard, Brendan Jackman, Johannes Weiner,
	Zi Yan
  Cc: linux-mm, linux-kernel, syzbot+2aee6839a252e612ce34



On 3/31/2026 4:39 AM, Usama Anjum wrote:
> On 30/03/2026 9:36 am, Ke Zhao wrote:
>> Some page allocation paths call post_alloc_hook() but skip
>> kmsan_alloc_page(), leaving stale KMSAN shadow on allocated pages.
>> Fix this by explicitly calling kmsan_alloc_page() after they
>> successfully get new pages.
>>
>> Reported-by: syzbot+2aee6839a252e612ce34@syzkaller.appspotmail.com
>> Closes: https://syzkaller.appspot.com/bug?extid=2aee6839a252e612ce34
>>
>> Signed-off-by: Ke Zhao <ke.zhao.kernel@gmail.com>
>> ---
>>   mm/page_alloc.c | 13 +++++++++++++
>>   1 file changed, 13 insertions(+)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 2d4b6f1a554e..6435e8708ef4 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -5189,6 +5189,10 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>>   
>>   		prep_new_page(page, 0, gfp, 0);
>>   		set_page_refcounted(page);
>> +
>> +		trace_mm_page_alloc(page, 0, gfp, ac.migratetype);
>> +		kmsan_alloc_page(page, 0, gfp);
>> +
>>   		page_array[nr_populated++] = page;
>>   	}
>>   
>> @@ -6911,6 +6915,12 @@ static void split_free_frozen_pages(struct list_head *list, gfp_t gfp_mask)
>>   			int i;
>>   
>>   			post_alloc_hook(page, order, gfp_mask);
>> +			/*
>> +			 * Initialize KMSAN state right after post_alloc_hook().
>> +			 * This prepares the pages for subsequent outer callers
>> +			 * that might free sub-pages after the split.
>> +			 */
>> +			kmsan_alloc_page(page, order, gfp_mask);
>>   			if (!order)
>>   				continue;
>>   
>> @@ -7117,6 +7127,9 @@ int alloc_contig_frozen_range_noprof(unsigned long start, unsigned long end,
>>   
>>   		check_new_pages(head, order);
>>   		prep_new_page(head, order, gfp_mask, 0);
>> +
>> +		trace_mm_page_alloc(page, order, gfp_mask, get_pageblock_migratetype(page));
>> +		kmsan_alloc_page(page, order, gfp_mask);
> There is no 'page' defined in this function. Most probably you wanted
> to use 'head' in place of 'page' here.
>
> How did you compile and test this change?
Sorry, I compiled the change, but added the wrong code to the commit. I
can hardly set up an environment that triggers the same warning here,
and I'm not sure whether I can get syzbot to test this.
>>   	} else {
>>   		ret = -EINVAL;
>>   		WARN(true, "PFN range: requested [%lu, %lu), allocated [%lu, %lu)\n",
>>
>> ---
>> base-commit: bbeb83d3182abe0d245318e274e8531e5dd7a948
>> change-id: 20260325-fix-kmsan-e291f752a949
>>
>> Best regards,
> Thanks,
> Usama



* Re: [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths
  2026-03-30 20:39 ` Usama Anjum
  2026-03-31  2:00   ` Ke Zhao
@ 2026-03-31  2:04   ` Ke Zhao
  1 sibling, 0 replies; 8+ messages in thread
From: Ke Zhao @ 2026-03-31  2:04 UTC (permalink / raw)
  To: Usama Anjum, Andrew Morton, Vlastimil Babka, Suren Baghdasaryan,
	Michal Hocko, John Hubbard, Brendan Jackman, Johannes Weiner,
	Zi Yan
  Cc: linux-mm, linux-kernel, syzbot+2aee6839a252e612ce34



On 3/31/2026 4:39 AM, Usama Anjum wrote:
> On 30/03/2026 9:36 am, Ke Zhao wrote:
>> Some page allocation paths call post_alloc_hook() but skip
>> kmsan_alloc_page(), leaving stale KMSAN shadow on allocated pages.
>> Fix this by explicitly calling kmsan_alloc_page() after they
>> successfully get new pages.
>>
>> Reported-by: syzbot+2aee6839a252e612ce34@syzkaller.appspotmail.com
>> Closes: https://syzkaller.appspot.com/bug?extid=2aee6839a252e612ce34
>>
>> Signed-off-by: Ke Zhao <ke.zhao.kernel@gmail.com>
>> ---
>>   mm/page_alloc.c | 13 +++++++++++++
>>   1 file changed, 13 insertions(+)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 2d4b6f1a554e..6435e8708ef4 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -5189,6 +5189,10 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>>   
>>   		prep_new_page(page, 0, gfp, 0);
>>   		set_page_refcounted(page);
>> +
>> +		trace_mm_page_alloc(page, 0, gfp, ac.migratetype);
>> +		kmsan_alloc_page(page, 0, gfp);
>> +
>>   		page_array[nr_populated++] = page;
>>   	}
>>   
>> @@ -6911,6 +6915,12 @@ static void split_free_frozen_pages(struct list_head *list, gfp_t gfp_mask)
>>   			int i;
>>   
>>   			post_alloc_hook(page, order, gfp_mask);
>> +			/*
>> +			 * Initialize KMSAN state right after post_alloc_hook().
>> +			 * This prepares the pages for subsequent outer callers
>> +			 * that might free sub-pages after the split.
>> +			 */
>> +			kmsan_alloc_page(page, order, gfp_mask);
>>   			if (!order)
>>   				continue;
>>   
>> @@ -7117,6 +7127,9 @@ int alloc_contig_frozen_range_noprof(unsigned long start, unsigned long end,
>>   
>>   		check_new_pages(head, order);
>>   		prep_new_page(head, order, gfp_mask, 0);
>> +
>> +		trace_mm_page_alloc(page, order, gfp_mask, get_pageblock_migratetype(page));
>> +		kmsan_alloc_page(page, order, gfp_mask);
> There is no 'page' defined in this function. Most probably you wanted
> to use 'head' in place of 'page' here.
> 
Sorry, I compiled the change, but added the wrong code to the commit. I
can hardly set up an environment that triggers the same warning here,
and I'm not sure whether I can get syzbot to test this.
> How did you compile and test this change?
> 
>>   	} else {
>>   		ret = -EINVAL;
>>   		WARN(true, "PFN range: requested [%lu, %lu), allocated [%lu, %lu)\n",
>>
>> ---
>> base-commit: bbeb83d3182abe0d245318e274e8531e5dd7a948
>> change-id: 20260325-fix-kmsan-e291f752a949
>>
>> Best regards,
> 
> Thanks,
> Usama




* Re: [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths
  2026-03-31  2:00   ` Ke Zhao
@ 2026-03-31  7:53     ` Muhammad Usama Anjum
  0 siblings, 0 replies; 8+ messages in thread
From: Muhammad Usama Anjum @ 2026-03-31  7:53 UTC (permalink / raw)
  To: Ke Zhao, Andrew Morton, Vlastimil Babka, Suren Baghdasaryan,
	Michal Hocko, John Hubbard, Brendan Jackman, Johannes Weiner,
	Zi Yan
  Cc: usama.anjum, linux-mm, linux-kernel, syzbot+2aee6839a252e612ce34

On 31/03/2026 3:00 am, Ke Zhao wrote:
> 
> On 3/31/2026 4:39 AM, Usama Anjum wrote:
>> On 30/03/2026 9:36 am, Ke Zhao wrote:
>>> Some page allocation paths call post_alloc_hook() but skip
>>> kmsan_alloc_page(), leaving stale KMSAN shadow on allocated pages.
>>> Fix this by explicitly calling kmsan_alloc_page() after they
>>> successfully get new pages.
>>>
>>> Reported-by: syzbot+2aee6839a252e612ce34@syzkaller.appspotmail.com
>>> Closes: https://syzkaller.appspot.com/bug?extid=2aee6839a252e612ce34
>>>
>>> Signed-off-by: Ke Zhao <ke.zhao.kernel@gmail.com>
>>> ---
>>>   mm/page_alloc.c | 13 +++++++++++++
>>>   1 file changed, 13 insertions(+)
>>>
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index 2d4b6f1a554e..6435e8708ef4 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -5189,6 +5189,10 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>>>   
>>>   		prep_new_page(page, 0, gfp, 0);
>>>   		set_page_refcounted(page);
>>> +
>>> +		trace_mm_page_alloc(page, 0, gfp, ac.migratetype);
>>> +		kmsan_alloc_page(page, 0, gfp);
>>> +
>>>   		page_array[nr_populated++] = page;
>>>   	}
>>>   
>>> @@ -6911,6 +6915,12 @@ static void split_free_frozen_pages(struct list_head *list, gfp_t gfp_mask)
>>>   			int i;
>>>   
>>>   			post_alloc_hook(page, order, gfp_mask);
>>> +			/*
>>> +			 * Initialize KMSAN state right after post_alloc_hook().
>>> +			 * This prepares the pages for subsequent outer callers
>>> +			 * that might free sub-pages after the split.
>>> +			 */
>>> +			kmsan_alloc_page(page, order, gfp_mask);
>>>   			if (!order)
>>>   				continue;
>>>   
>>> @@ -7117,6 +7127,9 @@ int alloc_contig_frozen_range_noprof(unsigned long start, unsigned long end,
>>>   
>>>   		check_new_pages(head, order);
>>>   		prep_new_page(head, order, gfp_mask, 0);
>>> +
>>> +		trace_mm_page_alloc(page, order, gfp_mask, get_pageblock_migratetype(page));
>>> +		kmsan_alloc_page(page, order, gfp_mask);
>> There is no 'page' defined in this function. Most probably you wanted
>> to use 'head' in place of 'page' here.
>>
>> How did you compile and test this change?
> Sorry, I compiled the change, but added the wrong code to the commit. I
> can hardly set up an environment that triggers the same warning here,
> and I'm not sure whether I can get syzbot to test this.
Please send another version (v2) of this patch and then follow the
instructions in [1].

[1] https://github.com/google/syzkaller/blob/master/docs/syzbot.md#testing-patches
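For reference, per the syzbot documentation linked in [1], patch testing is
requested by replying to the original syzbot report thread (not to this
patch thread) with a `#syz test` line naming a git tree and a base
commit/branch, followed by the patch inline or attached. With the
base-commit from this series, such a reply body might look like this
(hypothetical sketch; the tree URL is an assumption, use whichever tree the
fix is based on):

```
#syz test: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git bbeb83d3182abe0d245318e274e8531e5dd7a948

<patch inline here>
```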

>>>   	} else {
>>>   		ret = -EINVAL;
>>>   		WARN(true, "PFN range: requested [%lu, %lu), allocated [%lu, %lu)\n",
>>>
>>> ---
>>> base-commit: bbeb83d3182abe0d245318e274e8531e5dd7a948
>>> change-id: 20260325-fix-kmsan-e291f752a949
>>>
>>> Best regards,
>> Thanks,
>> Usama




* Re: [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths
  2026-03-30  8:36 [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths Ke Zhao
  2026-03-30 16:36 ` Vlastimil Babka (SUSE)
  2026-03-30 20:39 ` Usama Anjum
@ 2026-03-31 13:38 ` kernel test robot
  2026-03-31 14:22 ` kernel test robot
  3 siblings, 0 replies; 8+ messages in thread
From: kernel test robot @ 2026-03-31 13:38 UTC (permalink / raw)
  To: Ke Zhao, Andrew Morton, Vlastimil Babka, Suren Baghdasaryan,
	Michal Hocko, John Hubbard, Brendan Jackman, Johannes Weiner,
	Zi Yan
  Cc: oe-kbuild-all, Linux Memory Management List, linux-kernel,
	Ke Zhao, syzbot+2aee6839a252e612ce34

Hi Ke,

kernel test robot noticed the following build errors:

[auto build test ERROR on bbeb83d3182abe0d245318e274e8531e5dd7a948]

url:    https://github.com/intel-lab-lkp/linux/commits/Ke-Zhao/mm-KMSAN-Add-missing-shadow-memory-initialization-in-special-allocation-paths/20260331-050740
base:   bbeb83d3182abe0d245318e274e8531e5dd7a948
patch link:    https://lore.kernel.org/r/20260330-fix-kmsan-v1-1-e9c672a4b9eb%40gmail.com
patch subject: [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths
config: hexagon-randconfig-r073-20260331 (https://download.01.org/0day-ci/archive/20260331/202603312101.BSxDJ969-lkp@intel.com/config)
compiler: clang version 23.0.0git (https://github.com/llvm/llvm-project 2cd67b8b69f78e3f95918204320c3075a74ba16c)
smatch: v0.5.0-9004-gb810ac53
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260331/202603312101.BSxDJ969-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603312101.BSxDJ969-lkp@intel.com/

All errors (new ones prefixed by >>):

>> mm/page_alloc.c:7131:23: error: use of undeclared identifier 'page'
    7131 |                 trace_mm_page_alloc(page, order, gfp_mask, get_pageblock_migratetype(page));
         |                                     ^~~~
   mm/page_alloc.c:7131:72: error: use of undeclared identifier 'page'
    7131 |                 trace_mm_page_alloc(page, order, gfp_mask, get_pageblock_migratetype(page));
         |                                                                                      ^~~~
   mm/page_alloc.c:7131:72: error: use of undeclared identifier 'page'
    7131 |                 trace_mm_page_alloc(page, order, gfp_mask, get_pageblock_migratetype(page));
         |                                                                                      ^~~~
   mm/page_alloc.c:7132:20: error: use of undeclared identifier 'page'
    7132 |                 kmsan_alloc_page(page, order, gfp_mask);
         |                                  ^~~~
   4 errors generated.


vim +/page +7131 mm/page_alloc.c

  6977	
  6978	/**
  6979	 * alloc_contig_frozen_range() -- tries to allocate given range of frozen pages
  6980	 * @start:	start PFN to allocate
  6981	 * @end:	one-past-the-last PFN to allocate
  6982	 * @alloc_flags:	allocation information
  6983	 * @gfp_mask:	GFP mask. Node/zone/placement hints are ignored; only some
  6984	 *		action and reclaim modifiers are supported. Reclaim modifiers
  6985	 *		control allocation behavior during compaction/migration/reclaim.
  6986	 *
  6987	 * The PFN range does not have to be pageblock aligned. The PFN range must
  6988	 * belong to a single zone.
  6989	 *
  6990	 * The first thing this routine does is attempt to MIGRATE_ISOLATE all
  6991	 * pageblocks in the range.  Once isolated, the pageblocks should not
  6992	 * be modified by others.
  6993	 *
  6994	 * All frozen pages which PFN is in [start, end) are allocated for the
  6995	 * caller, and they could be freed with free_contig_frozen_range(),
  6996	 * free_frozen_pages() also could be used to free compound frozen pages
  6997	 * directly.
  6998	 *
  6999	 * Return: zero on success or negative error code.
  7000	 */
  7001	int alloc_contig_frozen_range_noprof(unsigned long start, unsigned long end,
  7002			acr_flags_t alloc_flags, gfp_t gfp_mask)
  7003	{
  7004		const unsigned int order = ilog2(end - start);
  7005		unsigned long outer_start, outer_end;
  7006		int ret = 0;
  7007	
  7008		struct compact_control cc = {
  7009			.nr_migratepages = 0,
  7010			.order = -1,
  7011			.zone = page_zone(pfn_to_page(start)),
  7012			.mode = MIGRATE_SYNC,
  7013			.ignore_skip_hint = true,
  7014			.no_set_skip_hint = true,
  7015			.alloc_contig = true,
  7016		};
  7017		INIT_LIST_HEAD(&cc.migratepages);
  7018		enum pb_isolate_mode mode = (alloc_flags & ACR_FLAGS_CMA) ?
  7019						    PB_ISOLATE_MODE_CMA_ALLOC :
  7020						    PB_ISOLATE_MODE_OTHER;
  7021	
  7022		/*
  7023		 * In contrast to the buddy, we allow for orders here that exceed
  7024		 * MAX_PAGE_ORDER, so we must manually make sure that we are not
  7025		 * exceeding the maximum folio order.
  7026		 */
  7027		if (WARN_ON_ONCE((gfp_mask & __GFP_COMP) && order > MAX_FOLIO_ORDER))
  7028			return -EINVAL;
  7029	
  7030		gfp_mask = current_gfp_context(gfp_mask);
  7031		if (__alloc_contig_verify_gfp_mask(gfp_mask, (gfp_t *)&cc.gfp_mask))
  7032			return -EINVAL;
  7033	
  7034		/*
  7035		 * What we do here is we mark all pageblocks in range as
  7036		 * MIGRATE_ISOLATE.  Because pageblock and max order pages may
  7037		 * have different sizes, and due to the way page allocator
  7038		 * work, start_isolate_page_range() has special handlings for this.
  7039		 *
  7040		 * Once the pageblocks are marked as MIGRATE_ISOLATE, we
  7041		 * migrate the pages from an unaligned range (ie. pages that
  7042		 * we are interested in). This will put all the pages in
  7043		 * range back to page allocator as MIGRATE_ISOLATE.
  7044		 *
  7045		 * When this is done, we take the pages in range from page
  7046		 * allocator removing them from the buddy system.  This way
  7047		 * page allocator will never consider using them.
  7048		 *
  7049		 * This lets us mark the pageblocks back as
  7050		 * MIGRATE_CMA/MIGRATE_MOVABLE so that free pages in the
  7051		 * aligned range but not in the unaligned, original range are
  7052		 * put back to page allocator so that buddy can use them.
  7053		 */
  7054	
  7055		ret = start_isolate_page_range(start, end, mode);
  7056		if (ret)
  7057			goto done;
  7058	
  7059		drain_all_pages(cc.zone);
  7060	
  7061		/*
  7062		 * In case of -EBUSY, we'd like to know which page causes problem.
  7063		 * So, just fall through. test_pages_isolated() has a tracepoint
  7064		 * which will report the busy page.
  7065		 *
  7066		 * It is possible that busy pages could become available before
  7067		 * the call to test_pages_isolated, and the range will actually be
  7068		 * allocated.  So, if we fall through be sure to clear ret so that
  7069		 * -EBUSY is not accidentally used or returned to caller.
  7070		 */
  7071		ret = __alloc_contig_migrate_range(&cc, start, end);
  7072		if (ret && ret != -EBUSY)
  7073			goto done;
  7074	
  7075		/*
  7076		 * When in-use hugetlb pages are migrated, they may simply be released
  7077		 * back into the free hugepage pool instead of being returned to the
  7078		 * buddy system.  After the migration of in-use huge pages is completed,
  7079		 * we will invoke replace_free_hugepage_folios() to ensure that these
  7080		 * hugepages are properly released to the buddy system.
  7081		 */
  7082		ret = replace_free_hugepage_folios(start, end);
  7083		if (ret)
  7084			goto done;
  7085	
  7086		/*
  7087		 * Pages from [start, end) are within a pageblock_nr_pages
  7088		 * aligned blocks that are marked as MIGRATE_ISOLATE.  What's
  7089		 * more, all pages in [start, end) are free in page allocator.
  7090		 * What we are going to do is to allocate all pages from
  7091		 * [start, end) (that is remove them from page allocator).
  7092		 *
  7093		 * The only problem is that pages at the beginning and at the
  7094		 * end of interesting range may be not aligned with pages that
  7095		 * page allocator holds, ie. they can be part of higher order
  7096		 * pages.  Because of this, we reserve the bigger range and
  7097		 * once this is done free the pages we are not interested in.
  7098		 *
  7099		 * We don't have to hold zone->lock here because the pages are
  7100		 * isolated thus they won't get removed from buddy.
  7101		 */
  7102		outer_start = find_large_buddy(start);
  7103	
  7104		/* Make sure the range is really isolated. */
  7105		if (test_pages_isolated(outer_start, end, mode)) {
  7106			ret = -EBUSY;
  7107			goto done;
  7108		}
  7109	
  7110		/* Grab isolated pages from freelists. */
  7111		outer_end = isolate_freepages_range(&cc, outer_start, end);
  7112		if (!outer_end) {
  7113			ret = -EBUSY;
  7114			goto done;
  7115		}
  7116	
  7117		if (!(gfp_mask & __GFP_COMP)) {
  7118			split_free_frozen_pages(cc.freepages, gfp_mask);
  7119	
  7120			/* Free head and tail (if any) */
  7121			if (start != outer_start)
  7122				__free_contig_frozen_range(outer_start, start - outer_start);
  7123			if (end != outer_end)
  7124				__free_contig_frozen_range(end, outer_end - end);
  7125		} else if (start == outer_start && end == outer_end && is_power_of_2(end - start)) {
  7126			struct page *head = pfn_to_page(start);
  7127	
  7128			check_new_pages(head, order);
  7129			prep_new_page(head, order, gfp_mask, 0);
  7130	
> 7131			trace_mm_page_alloc(page, order, gfp_mask, get_pageblock_migratetype(page));
  7132			kmsan_alloc_page(page, order, gfp_mask);
  7133		} else {
  7134			ret = -EINVAL;
  7135			WARN(true, "PFN range: requested [%lu, %lu), allocated [%lu, %lu)\n",
  7136			     start, end, outer_start, outer_end);
  7137		}
  7138	done:
  7139		undo_isolate_page_range(start, end);
  7140		return ret;
  7141	}
  7142	EXPORT_SYMBOL(alloc_contig_frozen_range_noprof);
  7143	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
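Given the errors above, a corrected version of the failing hunk would
presumably use the `head` pointer that is in scope in that branch (a
sketch only, not a compiled or tested kernel patch; whether the trace
event belongs here at all is still under discussion earlier in the
thread):

```c
	} else if (start == outer_start && end == outer_end && is_power_of_2(end - start)) {
		struct page *head = pfn_to_page(start);

		check_new_pages(head, order);
		prep_new_page(head, order, gfp_mask, 0);

		/* 'head', not the undeclared 'page' */
		trace_mm_page_alloc(head, order, gfp_mask, get_pageblock_migratetype(head));
		kmsan_alloc_page(head, order, gfp_mask);
	}
```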



* Re: [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths
  2026-03-30  8:36 [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths Ke Zhao
                   ` (2 preceding siblings ...)
  2026-03-31 13:38 ` kernel test robot
@ 2026-03-31 14:22 ` kernel test robot
  3 siblings, 0 replies; 8+ messages in thread
From: kernel test robot @ 2026-03-31 14:22 UTC (permalink / raw)
  To: Ke Zhao, Andrew Morton, Vlastimil Babka, Suren Baghdasaryan,
	Michal Hocko, John Hubbard, Brendan Jackman, Johannes Weiner,
	Zi Yan
  Cc: oe-kbuild-all, Linux Memory Management List, linux-kernel,
	Ke Zhao, syzbot+2aee6839a252e612ce34

Hi Ke,

kernel test robot noticed the following build errors:

[auto build test ERROR on bbeb83d3182abe0d245318e274e8531e5dd7a948]

url:    https://github.com/intel-lab-lkp/linux/commits/Ke-Zhao/mm-KMSAN-Add-missing-shadow-memory-initialization-in-special-allocation-paths/20260331-050740
base:   bbeb83d3182abe0d245318e274e8531e5dd7a948
patch link:    https://lore.kernel.org/r/20260330-fix-kmsan-v1-1-e9c672a4b9eb%40gmail.com
patch subject: [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths
config: microblaze-defconfig (https://download.01.org/0day-ci/archive/20260331/202603312255.WPPwS69Q-lkp@intel.com/config)
compiler: microblaze-linux-gcc (GCC) 15.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260331/202603312255.WPPwS69Q-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603312255.WPPwS69Q-lkp@intel.com/

All errors (new ones prefixed by >>):

   mm/page_alloc.c: In function 'alloc_contig_frozen_range_noprof':
>> mm/page_alloc.c:7131:37: error: 'page' undeclared (first use in this function)
    7131 |                 trace_mm_page_alloc(page, order, gfp_mask, get_pageblock_migratetype(page));
         |                                     ^~~~
   mm/page_alloc.c:7131:37: note: each undeclared identifier is reported only once for each function it appears in


vim +/page +7131 mm/page_alloc.c

  6977	
  6978	/**
  6979	 * alloc_contig_frozen_range() -- tries to allocate given range of frozen pages
  6980	 * @start:	start PFN to allocate
  6981	 * @end:	one-past-the-last PFN to allocate
  6982	 * @alloc_flags:	allocation information
  6983	 * @gfp_mask:	GFP mask. Node/zone/placement hints are ignored; only some
  6984	 *		action and reclaim modifiers are supported. Reclaim modifiers
  6985	 *		control allocation behavior during compaction/migration/reclaim.
  6986	 *
  6987	 * The PFN range does not have to be pageblock aligned. The PFN range must
  6988	 * belong to a single zone.
  6989	 *
  6990	 * The first thing this routine does is attempt to MIGRATE_ISOLATE all
  6991	 * pageblocks in the range.  Once isolated, the pageblocks should not
  6992	 * be modified by others.
  6993	 *
  6994	 * All frozen pages which PFN is in [start, end) are allocated for the
  6995	 * caller, and they could be freed with free_contig_frozen_range(),
  6996	 * free_frozen_pages() also could be used to free compound frozen pages
  6997	 * directly.
  6998	 *
  6999	 * Return: zero on success or negative error code.
  7000	 */
  7001	int alloc_contig_frozen_range_noprof(unsigned long start, unsigned long end,
  7002			acr_flags_t alloc_flags, gfp_t gfp_mask)
  7003	{
  7004		const unsigned int order = ilog2(end - start);
  7005		unsigned long outer_start, outer_end;
  7006		int ret = 0;
  7007	
  7008		struct compact_control cc = {
  7009			.nr_migratepages = 0,
  7010			.order = -1,
  7011			.zone = page_zone(pfn_to_page(start)),
  7012			.mode = MIGRATE_SYNC,
  7013			.ignore_skip_hint = true,
  7014			.no_set_skip_hint = true,
  7015			.alloc_contig = true,
  7016		};
  7017		INIT_LIST_HEAD(&cc.migratepages);
  7018		enum pb_isolate_mode mode = (alloc_flags & ACR_FLAGS_CMA) ?
  7019						    PB_ISOLATE_MODE_CMA_ALLOC :
  7020						    PB_ISOLATE_MODE_OTHER;
  7021	
  7022		/*
  7023		 * In contrast to the buddy, we allow for orders here that exceed
  7024		 * MAX_PAGE_ORDER, so we must manually make sure that we are not
  7025		 * exceeding the maximum folio order.
  7026		 */
  7027		if (WARN_ON_ONCE((gfp_mask & __GFP_COMP) && order > MAX_FOLIO_ORDER))
  7028			return -EINVAL;
  7029	
  7030		gfp_mask = current_gfp_context(gfp_mask);
  7031		if (__alloc_contig_verify_gfp_mask(gfp_mask, (gfp_t *)&cc.gfp_mask))
  7032			return -EINVAL;
  7033	
  7034	/*
  7035	 * What we do here is we mark all pageblocks in range as
  7036	 * MIGRATE_ISOLATE.  Because pageblock and max order pages may
  7037	 * have different sizes, and due to the way the page allocator
  7038	 * works, start_isolate_page_range() has special handling for this.
  7039		 *
  7040		 * Once the pageblocks are marked as MIGRATE_ISOLATE, we
  7041		 * migrate the pages from an unaligned range (ie. pages that
  7042		 * we are interested in). This will put all the pages in
  7043		 * range back to page allocator as MIGRATE_ISOLATE.
  7044		 *
  7045		 * When this is done, we take the pages in range from page
  7046		 * allocator removing them from the buddy system.  This way
  7047		 * page allocator will never consider using them.
  7048		 *
  7049		 * This lets us mark the pageblocks back as
  7050		 * MIGRATE_CMA/MIGRATE_MOVABLE so that free pages in the
  7051		 * aligned range but not in the unaligned, original range are
  7052		 * put back to page allocator so that buddy can use them.
  7053		 */
  7054	
  7055		ret = start_isolate_page_range(start, end, mode);
  7056		if (ret)
  7057			goto done;
  7058	
  7059		drain_all_pages(cc.zone);
  7060	
  7061		/*
  7062		 * In case of -EBUSY, we'd like to know which page causes problem.
  7063		 * So, just fall through. test_pages_isolated() has a tracepoint
  7064		 * which will report the busy page.
  7065		 *
  7066		 * It is possible that busy pages could become available before
  7067		 * the call to test_pages_isolated, and the range will actually be
  7068		 * allocated.  So, if we fall through be sure to clear ret so that
  7069		 * -EBUSY is not accidentally used or returned to caller.
  7070		 */
  7071		ret = __alloc_contig_migrate_range(&cc, start, end);
  7072		if (ret && ret != -EBUSY)
  7073			goto done;
  7074	
  7075		/*
  7076		 * When in-use hugetlb pages are migrated, they may simply be released
  7077		 * back into the free hugepage pool instead of being returned to the
  7078		 * buddy system.  After the migration of in-use huge pages is completed,
  7079		 * we will invoke replace_free_hugepage_folios() to ensure that these
  7080		 * hugepages are properly released to the buddy system.
  7081		 */
  7082		ret = replace_free_hugepage_folios(start, end);
  7083		if (ret)
  7084			goto done;
  7085	
  7086		/*
  7087		 * Pages from [start, end) are within a pageblock_nr_pages
  7088		 * aligned blocks that are marked as MIGRATE_ISOLATE.  What's
  7089		 * more, all pages in [start, end) are free in page allocator.
  7090		 * What we are going to do is to allocate all pages from
  7091		 * [start, end) (that is remove them from page allocator).
  7092		 *
  7093	 * The only problem is that pages at the beginning and at the
  7094	 * end of the interesting range may not be aligned with pages that
  7095	 * the page allocator holds, i.e. they can be part of higher-order
  7096	 * pages.  Because of this, we reserve a bigger range and, once
  7097	 * this is done, free the pages we are not interested in.
  7098		 *
  7099		 * We don't have to hold zone->lock here because the pages are
  7100		 * isolated thus they won't get removed from buddy.
  7101		 */
  7102		outer_start = find_large_buddy(start);
  7103	
  7104		/* Make sure the range is really isolated. */
  7105		if (test_pages_isolated(outer_start, end, mode)) {
  7106			ret = -EBUSY;
  7107			goto done;
  7108		}
  7109	
  7110		/* Grab isolated pages from freelists. */
  7111		outer_end = isolate_freepages_range(&cc, outer_start, end);
  7112		if (!outer_end) {
  7113			ret = -EBUSY;
  7114			goto done;
  7115		}
  7116	
  7117		if (!(gfp_mask & __GFP_COMP)) {
  7118			split_free_frozen_pages(cc.freepages, gfp_mask);
  7119	
  7120			/* Free head and tail (if any) */
  7121			if (start != outer_start)
  7122				__free_contig_frozen_range(outer_start, start - outer_start);
  7123			if (end != outer_end)
  7124				__free_contig_frozen_range(end, outer_end - end);
  7125		} else if (start == outer_start && end == outer_end && is_power_of_2(end - start)) {
  7126			struct page *head = pfn_to_page(start);
  7127	
  7128			check_new_pages(head, order);
  7129			prep_new_page(head, order, gfp_mask, 0);
  7130	
> 7131			trace_mm_page_alloc(page, order, gfp_mask, get_pageblock_migratetype(page));
  7132			kmsan_alloc_page(page, order, gfp_mask);
  7133		} else {
  7134			ret = -EINVAL;
  7135			WARN(true, "PFN range: requested [%lu, %lu), allocated [%lu, %lu)\n",
  7136			     start, end, outer_start, outer_end);
  7137		}
  7138	done:
  7139		undo_isolate_page_range(start, end);
  7140		return ret;
  7141	}
  7142	EXPORT_SYMBOL(alloc_contig_frozen_range_noprof);
  7143	
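For context on the error above: the failing `__GFP_COMP` branch (lines 7125-7132) declares `struct page *head` at 7126, but the newly added trace and KMSAN calls at 7131-7132 reference `page`, an identifier that only exists in the other allocation paths the patch touches. A likely fix (an untested sketch, not a tested patch) is to use `head` in both calls:

```diff
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7128,8 +7128,8 @@ int alloc_contig_frozen_range_noprof(unsigned long start, unsigned long end,
 		check_new_pages(head, order);
 		prep_new_page(head, order, gfp_mask, 0);
 
-		trace_mm_page_alloc(page, order, gfp_mask, get_pageblock_migratetype(page));
-		kmsan_alloc_page(page, order, gfp_mask);
+		trace_mm_page_alloc(head, order, gfp_mask, get_pageblock_migratetype(head));
+		kmsan_alloc_page(head, order, gfp_mask);
 	} else {
```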

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

