public inbox for linux-mm@kvack.org
* [PATCH v2 0/5] mm/sparse-vmemmap: provide generic vmemmap_set_pmd() and vmemmap_check_pmd()
@ 2026-04-04 12:20 Muchun Song
  2026-04-04 12:20 ` [PATCH v2 1/5] " Muchun Song
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Muchun Song @ 2026-04-04 12:20 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Huacai Chen, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, David S. Miller, Andreas Larsson,
	Andrew Morton, David Hildenbrand
  Cc: linux-mm, Muchun Song, Muchun Song, WANG Xuerui, Alexandre Ghiti,
	Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Ryan Roberts, Kevin Brodsky,
	Dev Jain, Anshuman Khandual, Yang Shi, Chaitanya S Prakash,
	Petr Tesarik, Vishal Moola (Oracle), Junhui Liu, Austin Kim,
	Chengkaitao, Matthew Wilcox (Oracle), Alex Shi, linux-arm-kernel,
	linux-kernel, loongarch, linux-riscv, sparclinux

The two weak functions vmemmap_set_pmd() and vmemmap_check_pmd() are
no-ops by default, forcing each architecture that needs them to
duplicate the same handful of lines. Provide a generic implementation:

- vmemmap_set_pmd() simply sets a huge PMD with PAGE_KERNEL protection.

- vmemmap_check_pmd() verifies that the PMD is present and leaf,
  then calls the existing vmemmap_verify() helper.

Architectures that need special handling can continue to override the
weak symbols; everyone else gets the standard version for free.

This series drops the custom implementations in arm64, riscv, loongarch,
and sparc, replacing them with the generic implementation introduced
in the first patch.
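For readers skimming the cover letter, the generic helpers described
above boil down to roughly the following. This is a userspace-compilable
sketch only: pmd_t, pmd_set_huge(), pmd_leaf(), pmdp_get() and
vmemmap_verify() are minimal stand-ins here, not the kernel definitions
(see patch 1 for the real code).

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-ins for the kernel types/helpers used by the sketch. */
typedef struct { uint64_t val; } pmd_t;
#define PMD_LEAF    0x1ULL
#define PAGE_KERNEL 0x2ULL

static int pmd_set_huge(pmd_t *pmd, uint64_t phys, uint64_t prot)
{
	pmd->val = phys | prot | PMD_LEAF;
	return 1;	/* the kernel helper returns nonzero on success */
}

static int pmd_leaf(pmd_t pmd) { return pmd.val & PMD_LEAF; }
static pmd_t pmdp_get(pmd_t *pmd) { return *pmd; }
static void vmemmap_verify(void) { /* NUMA-locality check in the kernel */ }

/* Shaped like the generic helpers patch 1 adds. */
static void vmemmap_set_pmd(pmd_t *pmd, uint64_t phys)
{
	assert(pmd_set_huge(pmd, phys, PAGE_KERNEL));	/* BUG_ON() in the patch */
}

static int vmemmap_check_pmd(pmd_t *pmd)
{
	if (!pmd_leaf(pmdp_get(pmd)))
		return 0;	/* not a huge mapping: caller falls through to ptes */
	vmemmap_verify();
	return 1;
}
```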

v1 -> v2:
- Fixed a tooling issue in v1 where duplicate/conflicting patches
  were incorrectly sent to the mailing list. No code changes compared
  to the intended v1.


Muchun Song (5):
  mm/sparse-vmemmap: provide generic vmemmap_set_pmd() and
    vmemmap_check_pmd()
  arm64/mm: drop vmemmap_pmd helpers and use generic code
  riscv/mm: drop vmemmap_pmd helpers and use generic code
  loongarch/mm: drop vmemmap_check_pmd helper and use generic code
  sparc/mm: drop vmemmap_check_pmd helper and use generic code

 arch/arm64/mm/mmu.c      | 14 --------------
 arch/loongarch/mm/init.c | 11 -----------
 arch/riscv/mm/init.c     | 13 -------------
 arch/sparc/mm/init_64.c  | 11 -----------
 mm/sparse-vmemmap.c      |  7 ++++++-
 5 files changed, 6 insertions(+), 50 deletions(-)

-- 
2.20.1



^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v2 1/5] mm/sparse-vmemmap: provide generic vmemmap_set_pmd() and vmemmap_check_pmd()
  2026-04-04 12:20 [PATCH v2 0/5] mm/sparse-vmemmap: provide generic vmemmap_set_pmd() and vmemmap_check_pmd() Muchun Song
@ 2026-04-04 12:20 ` Muchun Song
  2026-04-05  7:07   ` Mike Rapoport
  2026-04-04 12:20 ` [PATCH v2 2/5] arm64/mm: drop vmemmap_pmd helpers and use generic code Muchun Song
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 8+ messages in thread
From: Muchun Song @ 2026-04-04 12:20 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand
  Cc: linux-mm, Muchun Song, Muchun Song, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, linux-kernel

The two weak functions are no-ops by default, forcing each
architecture that needs them to duplicate the same handful of
lines.  Provide a generic implementation:

- vmemmap_set_pmd() simply sets a huge PMD with PAGE_KERNEL protection.

- vmemmap_check_pmd() verifies that the PMD is present and leaf,
  then calls the existing vmemmap_verify() helper.

Architectures that need special handling can continue to override the
weak symbols; everyone else gets the standard version for free.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/sparse-vmemmap.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 6eadb9d116e4..1eb990610d50 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -391,12 +391,17 @@ int __meminit vmemmap_populate_hvo(unsigned long addr, unsigned long end,
 void __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
 				      unsigned long addr, unsigned long next)
 {
+	BUG_ON(!pmd_set_huge(pmd, virt_to_phys(p), PAGE_KERNEL));
 }
 
 int __weak __meminit vmemmap_check_pmd(pmd_t *pmd, int node,
 				       unsigned long addr, unsigned long next)
 {
-	return 0;
+	if (!pmd_leaf(pmdp_get(pmd)))
+		return 0;
+	vmemmap_verify((pte_t *)pmd, node, addr, next);
+
+	return 1;
 }
 
 int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
-- 
2.20.1




* [PATCH v2 2/5] arm64/mm: drop vmemmap_pmd helpers and use generic code
  2026-04-04 12:20 [PATCH v2 0/5] mm/sparse-vmemmap: provide generic vmemmap_set_pmd() and vmemmap_check_pmd() Muchun Song
  2026-04-04 12:20 ` [PATCH v2 1/5] " Muchun Song
@ 2026-04-04 12:20 ` Muchun Song
  2026-04-04 12:20 ` [PATCH v2 3/5] riscv/mm: " Muchun Song
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Muchun Song @ 2026-04-04 12:20 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: linux-mm, akpm, Muchun Song, Muchun Song, Ryan Roberts,
	David Hildenbrand, Kevin Brodsky, Dev Jain, Lorenzo Stoakes,
	Anshuman Khandual, Yang Shi, Chaitanya S Prakash,
	linux-arm-kernel, linux-kernel

The generic implementations now suffice; remove the arm64 copies.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/arm64/mm/mmu.c | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ec1c6971a561..b87053452641 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1745,20 +1745,6 @@ static void free_empty_tables(unsigned long addr, unsigned long end,
 }
 #endif
 
-void __meminit vmemmap_set_pmd(pmd_t *pmdp, void *p, int node,
-			       unsigned long addr, unsigned long next)
-{
-	pmd_set_huge(pmdp, __pa(p), __pgprot(PROT_SECT_NORMAL));
-}
-
-int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
-				unsigned long addr, unsigned long next)
-{
-	vmemmap_verify((pte_t *)pmdp, node, addr, next);
-
-	return pmd_sect(READ_ONCE(*pmdp));
-}
-
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
-- 
2.20.1




* [PATCH v2 3/5] riscv/mm: drop vmemmap_pmd helpers and use generic code
  2026-04-04 12:20 [PATCH v2 0/5] mm/sparse-vmemmap: provide generic vmemmap_set_pmd() and vmemmap_check_pmd() Muchun Song
  2026-04-04 12:20 ` [PATCH v2 1/5] " Muchun Song
  2026-04-04 12:20 ` [PATCH v2 2/5] arm64/mm: drop vmemmap_pmd helpers and use generic code Muchun Song
@ 2026-04-04 12:20 ` Muchun Song
  2026-04-04 12:20 ` [PATCH v2 4/5] loongarch/mm: drop vmemmap_check_pmd helper " Muchun Song
  2026-04-04 12:20 ` [PATCH v2 5/5] sparc/mm: " Muchun Song
  4 siblings, 0 replies; 8+ messages in thread
From: Muchun Song @ 2026-04-04 12:20 UTC (permalink / raw)
  To: Paul Walmsley, Palmer Dabbelt, Albert Ou
  Cc: linux-mm, akpm, Muchun Song, Muchun Song, Alexandre Ghiti,
	Mike Rapoport (Microsoft), Kevin Brodsky, Austin Kim,
	Vishal Moola (Oracle), Junhui Liu, linux-riscv, linux-kernel

The generic implementations now suffice; remove the riscv copies.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/riscv/mm/init.c | 13 -------------
 1 file changed, 13 deletions(-)

diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 5142ca80be6f..f7e7d7c2e97f 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -1429,19 +1429,6 @@ void __init misc_mem_init(void)
 }
 
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
-void __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
-			       unsigned long addr, unsigned long next)
-{
-	pmd_set_huge(pmd, virt_to_phys(p), PAGE_KERNEL);
-}
-
-int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
-				unsigned long addr, unsigned long next)
-{
-	vmemmap_verify((pte_t *)pmdp, node, addr, next);
-	return 1;
-}
-
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 			       struct vmem_altmap *altmap)
 {
-- 
2.20.1




* [PATCH v2 4/5] loongarch/mm: drop vmemmap_check_pmd helper and use generic code
  2026-04-04 12:20 [PATCH v2 0/5] mm/sparse-vmemmap: provide generic vmemmap_set_pmd() and vmemmap_check_pmd() Muchun Song
                   ` (2 preceding siblings ...)
  2026-04-04 12:20 ` [PATCH v2 3/5] riscv/mm: " Muchun Song
@ 2026-04-04 12:20 ` Muchun Song
  2026-04-04 12:20 ` [PATCH v2 5/5] sparc/mm: " Muchun Song
  4 siblings, 0 replies; 8+ messages in thread
From: Muchun Song @ 2026-04-04 12:20 UTC (permalink / raw)
  To: Huacai Chen
  Cc: linux-mm, akpm, Muchun Song, Muchun Song, WANG Xuerui,
	Mike Rapoport (Microsoft), Catalin Marinas, Jiaxun Yang,
	Petr Tesarik, loongarch, linux-kernel

The generic implementations now suffice; remove the loongarch copies.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/loongarch/mm/init.c | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
index 00f3822b6e47..7356d4eea140 100644
--- a/arch/loongarch/mm/init.c
+++ b/arch/loongarch/mm/init.c
@@ -110,17 +110,6 @@ void __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
 	set_pmd_at(&init_mm, addr, pmd, entry);
 }
 
-int __meminit vmemmap_check_pmd(pmd_t *pmd, int node,
-				unsigned long addr, unsigned long next)
-{
-	int huge = pmd_val(pmdp_get(pmd)) & _PAGE_HUGE;
-
-	if (huge)
-		vmemmap_verify((pte_t *)pmd, node, addr, next);
-
-	return huge;
-}
-
 int __meminit vmemmap_populate(unsigned long start, unsigned long end,
 			       int node, struct vmem_altmap *altmap)
 {
-- 
2.20.1




* [PATCH v2 5/5] sparc/mm: drop vmemmap_check_pmd helper and use generic code
  2026-04-04 12:20 [PATCH v2 0/5] mm/sparse-vmemmap: provide generic vmemmap_set_pmd() and vmemmap_check_pmd() Muchun Song
                   ` (3 preceding siblings ...)
  2026-04-04 12:20 ` [PATCH v2 4/5] loongarch/mm: drop vmemmap_check_pmd helper " Muchun Song
@ 2026-04-04 12:20 ` Muchun Song
  4 siblings, 0 replies; 8+ messages in thread
From: Muchun Song @ 2026-04-04 12:20 UTC (permalink / raw)
  To: David S. Miller, Andreas Larsson
  Cc: linux-mm, akpm, Muchun Song, Muchun Song,
	Mike Rapoport (Microsoft), Catalin Marinas,
	David Hildenbrand (Arm), Kevin Brodsky, Kees Cook,
	Matthew Wilcox (Oracle), Chengkaitao, Alex Shi, sparclinux,
	linux-kernel

The generic implementations now suffice; remove the sparc copies.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/sparc/mm/init_64.c | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 367c269305e5..4a089da0a490 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2579,17 +2579,6 @@ void __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
 	pmd_val(*pmd) = pte_base | __pa(p);
 }
 
-int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
-				unsigned long addr, unsigned long next)
-{
-	int large = pmd_leaf(*pmdp);
-
-	if (large)
-		vmemmap_verify((pte_t *)pmdp, node, addr, next);
-
-	return large;
-}
-
 int __meminit vmemmap_populate(unsigned long vstart, unsigned long vend,
 			       int node, struct vmem_altmap *altmap)
 {
-- 
2.20.1




* Re: [PATCH v2 1/5] mm/sparse-vmemmap: provide generic vmemmap_set_pmd() and vmemmap_check_pmd()
  2026-04-04 12:20 ` [PATCH v2 1/5] " Muchun Song
@ 2026-04-05  7:07   ` Mike Rapoport
  2026-04-05 14:07     ` Muchun Song
  0 siblings, 1 reply; 8+ messages in thread
From: Mike Rapoport @ 2026-04-05  7:07 UTC (permalink / raw)
  To: Muchun Song
  Cc: Andrew Morton, David Hildenbrand, linux-mm, Muchun Song,
	Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka,
	Suren Baghdasaryan, Michal Hocko, linux-kernel

Hi,

On Sat, Apr 04, 2026 at 08:20:54PM +0800, Muchun Song wrote:
> The two weak functions are currently no-ops on every architecture,
> forcing each platform that needs them to duplicate the same handful
> of lines.  Provide a generic implementation:
> 
> - vmemmap_set_pmd() simply sets a huge PMD with PAGE_KERNEL protection.
> 
> - vmemmap_check_pmd() verifies that the PMD is present and leaf,
>   then calls the existing vmemmap_verify() helper.
> 
> Architectures that need special handling can continue to override the
> weak symbols; everyone else gets the standard version for free.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  mm/sparse-vmemmap.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 6eadb9d116e4..1eb990610d50 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -391,12 +391,17 @@ int __meminit vmemmap_populate_hvo(unsigned long addr, unsigned long end,
>  void __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
>  				      unsigned long addr, unsigned long next)
>  {
> +	BUG_ON(!pmd_set_huge(pmd, virt_to_phys(p), PAGE_KERNEL));

Do we have to crash the kernel here?
Wouldn't be better to make vmemmap_set_pmd() return error and make
vmemmap_populate_hugepages() fall back to base pages in case
vmemmap_set_pmd() errored?

>  }
>  
>  int __weak __meminit vmemmap_check_pmd(pmd_t *pmd, int node,
>  				       unsigned long addr, unsigned long next)
>  {
> -	return 0;
> +	if (!pmd_leaf(pmdp_get(pmd)))
> +		return 0;
> +	vmemmap_verify((pte_t *)pmd, node, addr, next);
> +
> +	return 1;
>  }
>  
>  int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
> -- 
> 2.20.1
> 

-- 
Sincerely yours,
Mike.



* Re: [PATCH v2 1/5] mm/sparse-vmemmap: provide generic vmemmap_set_pmd() and vmemmap_check_pmd()
  2026-04-05  7:07   ` Mike Rapoport
@ 2026-04-05 14:07     ` Muchun Song
  0 siblings, 0 replies; 8+ messages in thread
From: Muchun Song @ 2026-04-05 14:07 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Muchun Song, Andrew Morton, David Hildenbrand, linux-mm,
	Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka,
	Suren Baghdasaryan, Michal Hocko, linux-kernel



> On Apr 5, 2026, at 15:07, Mike Rapoport <rppt@kernel.org> wrote:
> 
> Hi,
> 
> On Sat, Apr 04, 2026 at 08:20:54PM +0800, Muchun Song wrote:
>> The two weak functions are currently no-ops on every architecture,
>> forcing each platform that needs them to duplicate the same handful
>> of lines.  Provide a generic implementation:
>> 
>> - vmemmap_set_pmd() simply sets a huge PMD with PAGE_KERNEL protection.
>> 
>> - vmemmap_check_pmd() verifies that the PMD is present and leaf,
>>  then calls the existing vmemmap_verify() helper.
>> 
>> Architectures that need special handling can continue to override the
>> weak symbols; everyone else gets the standard version for free.
>> 
>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>> ---
>> mm/sparse-vmemmap.c | 7 ++++++-
>> 1 file changed, 6 insertions(+), 1 deletion(-)
>> 
>> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
>> index 6eadb9d116e4..1eb990610d50 100644
>> --- a/mm/sparse-vmemmap.c
>> +++ b/mm/sparse-vmemmap.c
>> @@ -391,12 +391,17 @@ int __meminit vmemmap_populate_hvo(unsigned long addr, unsigned long end,
>> void __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
>>       unsigned long addr, unsigned long next)
>> {
>> + 	BUG_ON(!pmd_set_huge(pmd, virt_to_phys(p), PAGE_KERNEL));
> 
> Do we have to crash the kernel here?
> Wouldn't be better to make vmemmap_set_pmd() return error and make
> vmemmap_populate_hugepages() fall back to base pages in case
> vmemmap_set_pmd() errored?

Hi Mike,

Thanks for the review. Let me explain my original thought process here.

My assumption was that pmd_set_huge() should rarely, if ever, fail for the
kernel virtual address space in this context. Furthermore, the architectures
this patch replaces (e.g., arm64 and riscv) either ignore the return value of
pmd_set_huge() entirely or lack any graceful fallback mechanism anyway.

So, to keep the initial generic implementation as simple as possible, I used
BUG_ON() as a strict assertion.

Do we really need a more flexible, fallback-capable solution at this stage?
Based on the current architecture implementations it is not strictly
necessary; we could keep things simple and add the error handling/fallback
logic later, if more architectures adopt the generic code and actually
need it.

That said, I am open to your suggestion. If you would rather be proactive
and have the generic vmemmap_set_pmd() return an error code, so that
vmemmap_populate_hugepages() can gracefully fall back to base pages from
the start, I am happy to update it in v3.

Please let me know your thoughts.
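For concreteness, a rough userspace sketch of that fallback shape
(stubbed types throughout; vmemmap_populate_basepages() is a
hypothetical stand-in for the kernel's pte-level population path, and
the huge_mappings_allowed flag merely simulates pmd_set_huge() failing):

```c
#include <assert.h>
#include <stdint.h>

typedef struct { uint64_t val; } pmd_t;

static int huge_mappings_allowed = 1;	/* toggled to simulate failure */
static int fell_back_to_basepages;

/* Stand-in for pmd_set_huge(): nonzero on success, 0 on failure. */
static int pmd_set_huge(pmd_t *pmd, uint64_t phys)
{
	if (!huge_mappings_allowed)
		return 0;
	pmd->val = phys | 1;
	return 1;
}

/* Hypothetical stand-in for pte-level population. */
static int vmemmap_populate_basepages(void)
{
	fell_back_to_basepages = 1;
	return 0;
}

/* Error-returning variant instead of BUG_ON(). */
static int vmemmap_set_pmd(pmd_t *pmd, uint64_t phys)
{
	return pmd_set_huge(pmd, phys) ? 0 : -1;
}

static int vmemmap_populate_hugepages(pmd_t *pmd, uint64_t phys)
{
	if (vmemmap_set_pmd(pmd, phys) == 0)
		return 0;
	return vmemmap_populate_basepages();	/* graceful fallback */
}
```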

Thanks,
Muchun

> 
>> }
>> 
>> int __weak __meminit vmemmap_check_pmd(pmd_t *pmd, int node,
>>        unsigned long addr, unsigned long next)
>> {
>> - 	return 0;
>> + 	if (!pmd_leaf(pmdp_get(pmd)))
>> + 		return 0;
>> + 	vmemmap_verify((pte_t *)pmd, node, addr, next);
>> +
>> + 	return 1;
>> }
>> 
>> int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
>> -- 
>> 2.20.1
>> 
> 
> -- 
> Sincerely yours,
> Mike.




