Linux-mm Archive on lore.kernel.org
* [PATCH] lib/test_hmm: Check alloc_page_vma() return value
@ 2026-05-14  3:23 liuqiangneo
  2026-05-15  5:50 ` Alistair Popple
  0 siblings, 1 reply; 2+ messages in thread
From: liuqiangneo @ 2026-05-14  3:23 UTC (permalink / raw)
  To: jgg, leon; +Cc: akpm, linux-mm, linux-kernel, Qiang Liu

From: Qiang Liu <liuqiang@kylinos.cn>

Return VM_FAULT_OOM if page allocation fails, which
avoids a NULL pointer dereference when calling lock_page().

Signed-off-by: Qiang Liu <liuqiang@kylinos.cn>
---
 lib/test_hmm.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 213504915737..f8b43d6eb261 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -1063,6 +1063,8 @@ static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args,
 			/* Try with smaller pages if large allocation fails */
 			if (!dpage && order) {
 				dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
+				if (!dpage)
+					return VM_FAULT_OOM;
 				lock_page(dpage);
 				dst[i] = migrate_pfn(page_to_pfn(dpage));
 				dst_page = pfn_to_page(page_to_pfn(dpage));
-- 
2.43.0




* Re: [PATCH] lib/test_hmm: Check alloc_page_vma() return value
  2026-05-14  3:23 [PATCH] lib/test_hmm: Check alloc_page_vma() return value liuqiangneo
@ 2026-05-15  5:50 ` Alistair Popple
  0 siblings, 0 replies; 2+ messages in thread
From: Alistair Popple @ 2026-05-15  5:50 UTC (permalink / raw)
  To: liuqiangneo; +Cc: jgg, leon, akpm, linux-mm, linux-kernel, Qiang Liu

On 2026-05-14 at 13:23 +1000, liuqiangneo@163.com wrote...
> From: Qiang Liu <liuqiang@kylinos.cn>
> 
> Return VM_FAULT_OOM if page allocation fails, which
> avoids a NULL pointer dereference when calling lock_page().

Thanks, I agree the NULL dereference is a bug and this avoids it, but what
happens to the pages that may have already been allocated and locked in previous
iterations of the loop? I think the subsequent migrate_vma_pages()/finalize()
calls will do the correct thing, but that would lead to a partial migration.

Given that's not what we're explicitly testing here, I think it would be better
to just unlock and free the previously allocated pages before returning.
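
Roughly, that cleanup could look like the sketch below (illustrative only and
untested; it assumes i indexes the current dst[] slot in the surrounding loop
of dmirror_devmem_fault_alloc_and_copy, and uses migrate_pfn_to_page() to
recover the staged pages):

```c
		/* Sketch, not a tested patch: on allocation failure,
		 * unlock and release the device pages already staged in
		 * dst[] during earlier iterations, then fail the fault
		 * cleanly instead of leaving a partial migration.
		 */
		if (!dpage) {
			unsigned long j;

			for (j = 0; j < i; j++) {
				struct page *p = migrate_pfn_to_page(dst[j]);

				if (!p)
					continue;
				unlock_page(p);
				put_page(p);
				dst[j] = 0;
			}
			return VM_FAULT_OOM;
		}
```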

 - Alistair
 
> Signed-off-by: Qiang Liu <liuqiang@kylinos.cn>
> ---
>  lib/test_hmm.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/lib/test_hmm.c b/lib/test_hmm.c
> index 213504915737..f8b43d6eb261 100644
> --- a/lib/test_hmm.c
> +++ b/lib/test_hmm.c
> @@ -1063,6 +1063,8 @@ static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args,
>  			/* Try with smaller pages if large allocation fails */
>  			if (!dpage && order) {
>  				dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
> +				if (!dpage)
> +					return VM_FAULT_OOM;
>  				lock_page(dpage);
>  				dst[i] = migrate_pfn(page_to_pfn(dpage));
>  				dst_page = pfn_to_page(page_to_pfn(dpage));
> -- 
> 2.43.0
> 
> 


