* [PATCH v4 0/9] Merge arm64/riscv hugetlbfs contpte support
@ 2025-01-27 9:35 Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 1/9] riscv: Safely remove huge_pte_offset() when manipulating NAPOT ptes Alexandre Ghiti
` (8 more replies)
0 siblings, 9 replies; 13+ messages in thread
From: Alexandre Ghiti @ 2025-01-27 9:35 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Ryan Roberts, Mark Rutland,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton,
linux-arm-kernel, linux-kernel, linux-riscv, linux-mm
Cc: Alexandre Ghiti
This patchset merges the contiguous pte hugetlbfs implementations of
arm64 and riscv.
Both arm64 and riscv support the use of contiguous ptes to map pages
that are larger than the base page size; those features are
respectively called contpte and svnapot.
The riscv implementation differs from arm64's in that the LSBs of the
pfn of an svnapot pte are used to store the size of the mapping, which
allows future sizes to be added (for now only 64KB is supported). That's
an issue for the core mm code, which expects to find the *real* pfn a
pte points to. Patch 2 fixes that by always returning svnapot ptes with
the real pfn and restoring the size of the mapping right before the pte
is written to a page table.
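For reference, here is a minimal sketch of that encoding, assuming a
4KB base page size (it only restates the existing pte_mknapot() helper
visible in patch 2's diff, nothing new):

  /*
   * Svnapot encodes a mapping of 2^order base pages by setting the N
   * bit (_PAGE_NAPOT) and replacing the pfn LSBs [order-1:0] with the
   * pattern 0b10...0: bit (order - 1) set, the bits below it cleared.
   */
  unsigned int order = 4;         /* 64KB = 16 * 4KB, only size for now */
  pte = pte_mknapot(pte, order);  /* pfn LSBs now read 0b1000, N bit set */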
The following patches merge the two very similar implementations that
currently exist in arm64 and riscv. This paves the way for reusing
Ryan's recent contpte THP work [1] instead of reimplementing the same
support in riscv.
This patchset was tested by running the libhugetlbfs testsuite with 64KB
and 2MB pages on both architectures (on a 4KB base page size arm64 kernel).
[1] https://lore.kernel.org/linux-arm-kernel/20240215103205.2607016-1-ryan.roberts@arm.com/
v3: https://lore.kernel.org/all/20240802151430.99114-1-alexghiti@rivosinc.com/
v2: https://lore.kernel.org/linux-riscv/20240508113419.18620-1-alexghiti@rivosinc.com/
v1: https://lore.kernel.org/linux-riscv/20240301091455.246686-1-alexghiti@rivosinc.com/
Changes in v4:
- Rebase on top of 6.13
Changes in v3:
- Split set_ptes and ptep_get into internal and external API (Ryan)
- Rename ARCH_HAS_CONTPTE into ARCH_WANT_GENERAL_HUGETLB_CONTPTE so that
we split hugetlb functions from contpte functions (the riscv contpte
functions needed to support THP will come in a separate series) (Ryan)
- Rebase on top of 6.11-rc1
Changes in v2:
- Rebase on top of 6.9-rc3
Alexandre Ghiti (9):
riscv: Safely remove huge_pte_offset() when manipulating NAPOT ptes
riscv: Restore the pfn in a NAPOT pte when manipulated by core mm code
mm: Use common huge_ptep_get() function for riscv/arm64
mm: Use common set_huge_pte_at() function for riscv/arm64
mm: Use common huge_pte_clear() function for riscv/arm64
mm: Use common huge_ptep_get_and_clear() function for riscv/arm64
mm: Use common huge_ptep_set_access_flags() function for riscv/arm64
mm: Use common huge_ptep_set_wrprotect() function for riscv/arm64
mm: Use common huge_ptep_clear_flush() function for riscv/arm64
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/hugetlb.h | 22 +--
arch/arm64/include/asm/pgtable.h | 59 +++++-
arch/arm64/mm/hugetlbpage.c | 291 +---------------------------
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/hugetlb.h | 35 +---
arch/riscv/include/asm/pgtable-64.h | 11 ++
arch/riscv/include/asm/pgtable.h | 156 +++++++++++++--
arch/riscv/mm/hugetlbpage.c | 227 ----------------------
arch/riscv/mm/pgtable.c | 6 +-
include/linux/hugetlb_contpte.h | 38 ++++
mm/Kconfig | 3 +
mm/Makefile | 1 +
mm/hugetlb_contpte.c | 271 ++++++++++++++++++++++++++
14 files changed, 527 insertions(+), 595 deletions(-)
create mode 100644 include/linux/hugetlb_contpte.h
create mode 100644 mm/hugetlb_contpte.c
--
2.39.2
* [PATCH v4 1/9] riscv: Safely remove huge_pte_offset() when manipulating NAPOT ptes
2025-01-27 9:35 [PATCH v4 0/9] Merge arm64/riscv hugetlbfs contpte support Alexandre Ghiti
@ 2025-01-27 9:35 ` Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 2/9] riscv: Restore the pfn in a NAPOT pte when manipulated by core mm code Alexandre Ghiti
` (7 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Alexandre Ghiti @ 2025-01-27 9:35 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Ryan Roberts, Mark Rutland,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton,
linux-arm-kernel, linux-kernel, linux-riscv, linux-mm
Cc: Alexandre Ghiti
The pte_t pointer is expected to point to the first entry of the NAPOT
mapping, so there is no need to use huge_pte_offset(), similarly to what
is done on arm64.
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
arch/riscv/mm/hugetlbpage.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index 42314f093922..6b09cd1ef41c 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -276,7 +276,6 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
order = napot_cont_order(pte);
pte_num = napot_pte_num(order);
- ptep = huge_pte_offset(mm, addr, napot_cont_size(order));
orig_pte = get_clear_contig_flush(mm, addr, ptep, pte_num);
if (pte_dirty(orig_pte))
@@ -322,7 +321,6 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
order = napot_cont_order(pte);
pte_num = napot_pte_num(order);
- ptep = huge_pte_offset(mm, addr, napot_cont_size(order));
orig_pte = get_clear_contig_flush(mm, addr, ptep, pte_num);
orig_pte = pte_wrprotect(orig_pte);
--
2.39.2
* [PATCH v4 2/9] riscv: Restore the pfn in a NAPOT pte when manipulated by core mm code
2025-01-27 9:35 [PATCH v4 0/9] Merge arm64/riscv hugetlbfs contpte support Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 1/9] riscv: Safely remove huge_pte_offset() when manipulating NAPOT ptes Alexandre Ghiti
@ 2025-01-27 9:35 ` Alexandre Ghiti
2025-01-27 13:51 ` Matthew Wilcox
2025-01-27 9:35 ` [PATCH v4 3/9] mm: Use common huge_ptep_get() function for riscv/arm64 Alexandre Ghiti
` (6 subsequent siblings)
8 siblings, 1 reply; 13+ messages in thread
From: Alexandre Ghiti @ 2025-01-27 9:35 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Ryan Roberts, Mark Rutland,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton,
linux-arm-kernel, linux-kernel, linux-riscv, linux-mm
Cc: Alexandre Ghiti
The core mm code expects to be able to extract the pfn from a pte. NAPOT
mappings work differently since their ptes actually point to the first
pfn of the mapping, with the pfn LSBs used to encode the size of the
mapping.
So modify ptep_get() so that it returns a pte value that contains the
*real* pfn (which then differs from what the HW expects) and, right
before the ptes are stored to the page table, re-encode the size of the
mapping in the pfn LSBs.
And make sure that all NAPOT mappings are set using set_ptes().
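The intended round-trip, condensed from the diff below (just a sketch
of the flow, not additional code):

  /* Read: ptep_get() keeps the N bit but replaces the size encoding
   * in the pfn LSBs with the *real* pfn of that entry. */
  pte_t pte = ptep_get(ptep);

  /* ... core mm manipulates the pte using the real pfn ... */

  /* Write: set_ptes() re-encodes the mapping size via pte_mknapot()
   * (order = ilog2(nr)) right before storing to the page table. */
  set_ptes(mm, addr, ptep, pte, nr);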
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
arch/riscv/include/asm/pgtable-64.h | 11 ++++
arch/riscv/include/asm/pgtable.h | 91 ++++++++++++++++++++++++++---
arch/riscv/mm/hugetlbpage.c | 9 +--
3 files changed, 96 insertions(+), 15 deletions(-)
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 0897dd99ab8d..cddbe426f618 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -104,6 +104,17 @@ enum napot_cont_order {
#define napot_cont_mask(order) (~(napot_cont_size(order) - 1UL))
#define napot_pte_num(order) BIT(order)
+static inline bool is_napot_order(unsigned int order)
+{
+ unsigned int napot_order;
+
+ for_each_napot_order(napot_order)
+ if (order == napot_order)
+ return true;
+
+ return false;
+}
+
#ifdef CONFIG_RISCV_ISA_SVNAPOT
#define HUGE_MAX_HSTATE (2 + (NAPOT_ORDER_MAX - NAPOT_CONT_ORDER_BASE))
#else
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 050fdc49b5ad..82b264423b25 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -296,6 +296,8 @@ static inline unsigned long pte_napot(pte_t pte)
return pte_val(pte) & _PAGE_NAPOT;
}
+#define pte_valid_napot(pte) (pte_present(pte) && pte_napot(pte))
+
static inline pte_t pte_mknapot(pte_t pte, unsigned int order)
{
int pos = order - 1 + _PAGE_PFN_SHIFT;
@@ -305,6 +307,12 @@ static inline pte_t pte_mknapot(pte_t pte, unsigned int order)
return __pte((pte_val(pte) & napot_mask) | napot_bit | _PAGE_NAPOT);
}
+/* pte at entry must *not* encode the mapping size in the pfn LSBs. */
+static inline pte_t pte_clear_napot(pte_t pte)
+{
+ return __pte(pte_val(pte) & ~_PAGE_NAPOT);
+}
+
#else
static __always_inline bool has_svnapot(void) { return false; }
@@ -314,17 +322,14 @@ static inline unsigned long pte_napot(pte_t pte)
return 0;
}
+#define pte_valid_napot(pte) false
+
#endif /* CONFIG_RISCV_ISA_SVNAPOT */
/* Yields the page frame number (PFN) of a page table entry */
static inline unsigned long pte_pfn(pte_t pte)
{
- unsigned long res = __page_val_to_pfn(pte_val(pte));
-
- if (has_svnapot() && pte_napot(pte))
- res = res & (res - 1UL);
-
- return res;
+ return __page_val_to_pfn(pte_val(pte));
}
#define pte_page(x) pfn_to_page(pte_pfn(x))
@@ -559,8 +564,13 @@ static inline void __set_pte_at(struct mm_struct *mm, pte_t *ptep, pte_t pteval)
#define PFN_PTE_SHIFT _PAGE_PFN_SHIFT
-static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
- pte_t *ptep, pte_t pteval, unsigned int nr)
+static inline pte_t __ptep_get(pte_t *ptep)
+{
+ return READ_ONCE(*ptep);
+}
+
+static inline void __set_ptes(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t pteval, unsigned int nr)
{
page_table_check_ptes_set(mm, ptep, pteval, nr);
@@ -569,10 +579,13 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
if (--nr == 0)
break;
ptep++;
+
+ if (unlikely(pte_valid_napot(pteval)))
+ continue;
+
pte_val(pteval) += 1 << _PAGE_PFN_SHIFT;
}
}
-#define set_ptes set_ptes
static inline void pte_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
@@ -627,6 +640,66 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
return ptep_test_and_clear_young(vma, address, ptep);
}
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t pteval, unsigned int nr)
+{
+ if (unlikely(pte_valid_napot(pteval))) {
+ unsigned int order = ilog2(nr);
+
+ if (!is_napot_order(order)) {
+ /*
+ * Something's weird, we are given a NAPOT pte but the
+ * size of the mapping is not a known NAPOT mapping
+ * size, so clear the NAPOT bit and map this without
+ * NAPOT support: core mm only manipulates pte with the
+ * real pfn so we know the pte is valid without the N
+ * bit.
+ */
+ pr_err("Incorrect NAPOT mapping, resetting.\n");
+ pteval = pte_clear_napot(pteval);
+ } else {
+ /*
+ * NAPOT ptes that arrive here only have the N bit set
+ * and their pfn does not contain the mapping size, so
+ * set that here.
+ */
+ pteval = pte_mknapot(pteval, order);
+ }
+ }
+
+ __set_ptes(mm, addr, ptep, pteval, nr);
+}
+#define set_ptes set_ptes
+
+static inline pte_t ptep_get(pte_t *ptep)
+{
+ pte_t pte = __ptep_get(ptep);
+
+ /*
+ * The pte we load has the N bit set and the size of the mapping in
+ * the pfn LSBs: keep the N bit and replace the mapping size with
+ * the *real* pfn since the core mm code expects to find it there.
+ * The mapping size will be reset just before being written to the
+ * page table in set_ptes().
+ */
+ if (unlikely(pte_valid_napot(pte))) {
+ unsigned int order = napot_cont_order(pte);
+ int pos = order - 1 + _PAGE_PFN_SHIFT;
+ unsigned long napot_mask = ~GENMASK(pos, _PAGE_PFN_SHIFT);
+ pte_t *orig_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * napot_pte_num(order));
+
+ pte = __pte((pte_val(pte) & napot_mask) + ((ptep - orig_ptep) << _PAGE_PFN_SHIFT));
+ }
+
+ return pte;
+}
+#define ptep_get ptep_get
+#else
+#define set_ptes __set_ptes
+#define ptep_get __ptep_get
+#endif /* CONFIG_RISCV_ISA_SVNAPOT */
+
#define pgprot_nx pgprot_nx
static inline pgprot_t pgprot_nx(pgprot_t _prot)
{
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index 6b09cd1ef41c..59ed26ce6857 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -256,8 +256,7 @@ void set_huge_pte_at(struct mm_struct *mm,
clear_flush(mm, addr, ptep, pgsize, pte_num);
- for (i = 0; i < pte_num; i++, ptep++, addr += pgsize)
- set_pte_at(mm, addr, ptep, pte);
+ set_ptes(mm, addr, ptep, pte, pte_num);
}
int huge_ptep_set_access_flags(struct vm_area_struct *vma,
@@ -284,8 +283,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
if (pte_young(orig_pte))
pte = pte_mkyoung(pte);
- for (i = 0; i < pte_num; i++, addr += PAGE_SIZE, ptep++)
- set_pte_at(mm, addr, ptep, pte);
+ set_ptes(mm, addr, ptep, pte, pte_num);
return true;
}
@@ -325,8 +323,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
orig_pte = pte_wrprotect(orig_pte);
- for (i = 0; i < pte_num; i++, addr += PAGE_SIZE, ptep++)
- set_pte_at(mm, addr, ptep, orig_pte);
+ set_ptes(mm, addr, ptep, orig_pte, pte_num);
}
pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
--
2.39.2
* [PATCH v4 3/9] mm: Use common huge_ptep_get() function for riscv/arm64
2025-01-27 9:35 [PATCH v4 0/9] Merge arm64/riscv hugetlbfs contpte support Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 1/9] riscv: Safely remove huge_pte_offset() when manipulating NAPOT ptes Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 2/9] riscv: Restore the pfn in a NAPOT pte when manipulated by core mm code Alexandre Ghiti
@ 2025-01-27 9:35 ` Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 4/9] mm: Use common set_huge_pte_at() " Alexandre Ghiti
` (5 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Alexandre Ghiti @ 2025-01-27 9:35 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Ryan Roberts, Mark Rutland,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton,
linux-arm-kernel, linux-kernel, linux-riscv, linux-mm
Cc: Alexandre Ghiti
After some adjustments, both architectures have the same implementation
so move it to the generic code.
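For context, here is the contract an architecture opts into with this
patch (helper names are taken from the diff below, signatures
simplified; later patches in the series extend the list):

  /* select ARCH_WANT_GENERAL_HUGETLB_CONTPTE, then provide: */
  pte_t __ptep_get(pte_t *ptep);          /* raw read of a page table entry */
  bool pte_cont(pte_t pte);               /* part of a contpte/NAPOT set? */
  int arch_contpte_get_num_contig(pte_t *ptep, unsigned long size,
                                  size_t *pgsize); /* nr of entries + page size */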
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/hugetlb.h | 3 +-
arch/arm64/include/asm/pgtable.h | 48 +++++++++++++++++++++++++---
arch/arm64/mm/hugetlbpage.c | 55 ++------------------------------
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/hugetlb.h | 6 ++--
arch/riscv/include/asm/pgtable.h | 36 +++++++++++++++++++++
arch/riscv/mm/hugetlbpage.c | 45 ++++++--------------------
include/linux/hugetlb_contpte.h | 12 +++++++
mm/Kconfig | 3 ++
mm/Makefile | 1 +
mm/hugetlb_contpte.c | 44 +++++++++++++++++++++++++
12 files changed, 157 insertions(+), 98 deletions(-)
create mode 100644 include/linux/hugetlb_contpte.h
create mode 100644 mm/hugetlb_contpte.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 100570a048c5..fb85d33bfe98 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -111,6 +111,7 @@ config ARM64
select ARCH_WANT_DEFAULT_BPF_JIT
select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
select ARCH_WANT_FRAME_POINTERS
+ select ARCH_WANT_GENERAL_HUGETLB_CONTPTE
select ARCH_WANT_HUGE_PMD_SHARE if ARM64_4K_PAGES || (ARM64_16K_PAGES && !ARM64_VA_BITS_36)
select ARCH_WANT_LD_ORPHAN_WARN
select ARCH_WANTS_EXECMEM_LATE if EXECMEM
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index c6dff3e69539..27d7f4bdd724 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -13,6 +13,7 @@
#include <asm/cacheflush.h>
#include <asm/mte.h>
#include <asm/page.h>
+#include <linux/hugetlb_contpte.h>
#ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
#define arch_hugetlb_migration_supported arch_hugetlb_migration_supported
@@ -53,8 +54,6 @@ extern pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
#define __HAVE_ARCH_HUGE_PTE_CLEAR
extern void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, unsigned long sz);
-#define __HAVE_ARCH_HUGE_PTEP_GET
-extern pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
void __init arm64_hugetlb_cma_reserve(void);
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 6986345b537a..cebbfcfb0e53 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -420,9 +420,10 @@ static inline pte_t pte_advance_pfn(pte_t pte, unsigned long nr)
return pfn_pte(pte_pfn(pte) + nr, pte_pgprot(pte));
}
-static inline void __set_ptes(struct mm_struct *mm,
- unsigned long __always_unused addr,
- pte_t *ptep, pte_t pte, unsigned int nr)
+static inline void ___set_ptes(struct mm_struct *mm,
+ unsigned long __always_unused addr,
+ pte_t *ptep, pte_t pte, unsigned int nr,
+ size_t pgsize)
{
page_table_check_ptes_set(mm, ptep, pte, nr);
__sync_cache_and_tags(pte, nr);
@@ -433,10 +434,15 @@ static inline void __set_ptes(struct mm_struct *mm,
if (--nr == 0)
break;
ptep++;
- pte = pte_advance_pfn(pte, 1);
+ pte = pte_advance_pfn(pte, pgsize >> PAGE_SHIFT);
}
}
+#define __set_ptes(mm, addr, ptep, pte, nr) \
+ ___set_ptes(mm, addr, ptep, pte, nr, PAGE_SIZE)
+
+#define set_contptes ___set_ptes
+
/*
* Hugetlb definitions.
*/
@@ -1825,6 +1831,40 @@ static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
#endif /* CONFIG_ARM64_CONTPTE */
+static inline int arch_contpte_get_num_contig(pte_t *ptep,
+ unsigned long size,
+ size_t *pgsize)
+{
+ int contig_ptes = 0;
+
+ if (pgsize)
+ *pgsize = size;
+
+ switch (size) {
+#ifndef __PAGETABLE_PMD_FOLDED
+ case PUD_SIZE:
+ if (pud_sect_supported())
+ contig_ptes = 1;
+ break;
+#endif
+ case PMD_SIZE:
+ contig_ptes = 1;
+ break;
+ case CONT_PMD_SIZE:
+ if (pgsize)
+ *pgsize = PMD_SIZE;
+ contig_ptes = CONT_PMDS;
+ break;
+ case CONT_PTE_SIZE:
+ if (pgsize)
+ *pgsize = PAGE_SIZE;
+ contig_ptes = CONT_PTES;
+ break;
+ }
+
+ return contig_ptes;
+}
+
#endif /* !__ASSEMBLY__ */
#endif /* __ASM_PGTABLE_H */
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 3215adf48a1b..3458461adb90 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -98,57 +98,6 @@ static int find_num_contig(struct mm_struct *mm, unsigned long addr,
return CONT_PTES;
}
-static inline int num_contig_ptes(unsigned long size, size_t *pgsize)
-{
- int contig_ptes = 0;
-
- *pgsize = size;
-
- switch (size) {
-#ifndef __PAGETABLE_PMD_FOLDED
- case PUD_SIZE:
- if (pud_sect_supported())
- contig_ptes = 1;
- break;
-#endif
- case PMD_SIZE:
- contig_ptes = 1;
- break;
- case CONT_PMD_SIZE:
- *pgsize = PMD_SIZE;
- contig_ptes = CONT_PMDS;
- break;
- case CONT_PTE_SIZE:
- *pgsize = PAGE_SIZE;
- contig_ptes = CONT_PTES;
- break;
- }
-
- return contig_ptes;
-}
-
-pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
-{
- int ncontig, i;
- size_t pgsize;
- pte_t orig_pte = __ptep_get(ptep);
-
- if (!pte_present(orig_pte) || !pte_cont(orig_pte))
- return orig_pte;
-
- ncontig = num_contig_ptes(page_size(pte_page(orig_pte)), &pgsize);
- for (i = 0; i < ncontig; i++, ptep++) {
- pte_t pte = __ptep_get(ptep);
-
- if (pte_dirty(pte))
- orig_pte = pte_mkdirty(orig_pte);
-
- if (pte_young(pte))
- orig_pte = pte_mkyoung(orig_pte);
- }
- return orig_pte;
-}
-
/*
* Changing some bits of contiguous entries requires us to follow a
* Break-Before-Make approach, breaking the whole contiguous set
@@ -229,7 +178,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
unsigned long pfn, dpfn;
pgprot_t hugeprot;
- ncontig = num_contig_ptes(sz, &pgsize);
+ ncontig = arch_contpte_get_num_contig(ptep, sz, &pgsize);
if (!pte_present(pte)) {
for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
@@ -390,7 +339,7 @@ void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
int i, ncontig;
size_t pgsize;
- ncontig = num_contig_ptes(sz, &pgsize);
+ ncontig = arch_contpte_get_num_contig(ptep, sz, &pgsize);
for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
__pte_clear(mm, addr, ptep);
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index d4a7ca0388c0..2fe8c68fba85 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -78,6 +78,7 @@ config RISCV
select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
select ARCH_WANT_FRAME_POINTERS
select ARCH_WANT_GENERAL_HUGETLB if !RISCV_ISA_SVNAPOT
+ select ARCH_WANT_GENERAL_HUGETLB_CONTPTE if RISCV_ISA_SVNAPOT
select ARCH_WANT_HUGE_PMD_SHARE if 64BIT
select ARCH_WANT_LD_ORPHAN_WARN if !XIP_KERNEL
select ARCH_WANT_OPTIMIZE_DAX_VMEMMAP
diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
index faf3624d8057..d9f9bfb84908 100644
--- a/arch/riscv/include/asm/hugetlb.h
+++ b/arch/riscv/include/asm/hugetlb.h
@@ -4,6 +4,9 @@
#include <asm/cacheflush.h>
#include <asm/page.h>
+#ifdef CONFIG_ARCH_WANT_GENERAL_HUGETLB_CONTPTE
+#include <linux/hugetlb_contpte.h>
+#endif
static inline void arch_clear_hugetlb_flags(struct folio *folio)
{
@@ -43,9 +46,6 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep,
pte_t pte, int dirty);
-#define __HAVE_ARCH_HUGE_PTEP_GET
-pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
-
pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags);
#define arch_make_huge_pte arch_make_huge_pte
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 82b264423b25..d4e6427b8ca9 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -296,6 +296,8 @@ static inline unsigned long pte_napot(pte_t pte)
return pte_val(pte) & _PAGE_NAPOT;
}
+#define pte_cont pte_napot
+
#define pte_valid_napot(pte) (pte_present(pte) && pte_napot(pte))
static inline pte_t pte_mknapot(pte_t pte, unsigned int order)
@@ -587,6 +589,38 @@ static inline void __set_ptes(struct mm_struct *mm, unsigned long addr,
}
}
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
+static inline int arch_contpte_get_num_contig(pte_t *ptep, unsigned long size,
+ size_t *pgsize)
+{
+ unsigned long hugepage_shift;
+ pte_t __pte;
+
+ if (size >= PGDIR_SIZE)
+ hugepage_shift = PGDIR_SHIFT;
+ else if (size >= P4D_SIZE)
+ hugepage_shift = P4D_SHIFT;
+ else if (size >= PUD_SIZE)
+ hugepage_shift = PUD_SHIFT;
+ else if (size >= PMD_SIZE)
+ hugepage_shift = PMD_SHIFT;
+ else
+ hugepage_shift = PAGE_SHIFT;
+
+ if (pgsize)
+ *pgsize = BIT(hugepage_shift);
+
+ /* We must read the raw value of the pte to get the size of the mapping */
+ __pte = __ptep_get(ptep);
+
+ /* Make sure __pte is not a swap entry */
+ if (pte_valid_napot(__pte))
+ return napot_pte_num(napot_cont_order(__pte));
+
+ return size >> hugepage_shift;
+}
+#endif
+
static inline void pte_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{
@@ -671,6 +705,8 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
__set_ptes(mm, addr, ptep, pteval, nr);
}
#define set_ptes set_ptes
+#define set_contptes(mm, addr, ptep, pte, nr, pgsize) \
+ set_ptes(mm, addr, ptep, pte, nr)
static inline pte_t ptep_get(pte_t *ptep)
{
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index 59ed26ce6857..d51863824540 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -3,30 +3,6 @@
#include <linux/err.h>
#ifdef CONFIG_RISCV_ISA_SVNAPOT
-pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
-{
- unsigned long pte_num;
- int i;
- pte_t orig_pte = ptep_get(ptep);
-
- if (!pte_present(orig_pte) || !pte_napot(orig_pte))
- return orig_pte;
-
- pte_num = napot_pte_num(napot_cont_order(orig_pte));
-
- for (i = 0; i < pte_num; i++, ptep++) {
- pte_t pte = ptep_get(ptep);
-
- if (pte_dirty(pte))
- orig_pte = pte_mkdirty(orig_pte);
-
- if (pte_young(pte))
- orig_pte = pte_mkyoung(orig_pte);
- }
-
- return orig_pte;
-}
-
pte_t *huge_pte_alloc(struct mm_struct *mm,
struct vm_area_struct *vma,
unsigned long addr,
@@ -266,15 +242,13 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
int dirty)
{
struct mm_struct *mm = vma->vm_mm;
- unsigned long order;
pte_t orig_pte;
- int i, pte_num;
+ int pte_num;
if (!pte_napot(pte))
return ptep_set_access_flags(vma, addr, ptep, pte, dirty);
- order = napot_cont_order(pte);
- pte_num = napot_pte_num(order);
+ pte_num = arch_contpte_get_num_contig(ptep, 0, NULL);
orig_pte = get_clear_contig_flush(mm, addr, ptep, pte_num);
if (pte_dirty(orig_pte))
@@ -298,7 +272,7 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
if (!pte_napot(orig_pte))
return ptep_get_and_clear(mm, addr, ptep);
- pte_num = napot_pte_num(napot_cont_order(orig_pte));
+ pte_num = arch_contpte_get_num_contig(ptep, 0, NULL);
return get_clear_contig(mm, addr, ptep, pte_num);
}
@@ -308,17 +282,15 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
pte_t *ptep)
{
pte_t pte = ptep_get(ptep);
- unsigned long order;
pte_t orig_pte;
- int i, pte_num;
+ int pte_num;
if (!pte_napot(pte)) {
ptep_set_wrprotect(mm, addr, ptep);
return;
}
- order = napot_cont_order(pte);
- pte_num = napot_pte_num(order);
+ pte_num = arch_contpte_get_num_contig(ptep, 0, NULL);
orig_pte = get_clear_contig_flush(mm, addr, ptep, pte_num);
orig_pte = pte_wrprotect(orig_pte);
@@ -336,7 +308,7 @@ pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
if (!pte_napot(pte))
return ptep_clear_flush(vma, addr, ptep);
- pte_num = napot_pte_num(napot_cont_order(pte));
+ pte_num = arch_contpte_get_num_contig(ptep, 0, NULL);
return get_clear_contig_flush(vma->vm_mm, addr, ptep, pte_num);
}
@@ -346,6 +318,7 @@ void huge_pte_clear(struct mm_struct *mm,
pte_t *ptep,
unsigned long sz)
{
+ size_t pgsize;
pte_t pte = ptep_get(ptep);
int i, pte_num;
@@ -354,8 +327,8 @@ void huge_pte_clear(struct mm_struct *mm,
return;
}
- pte_num = napot_pte_num(napot_cont_order(pte));
- for (i = 0; i < pte_num; i++, addr += PAGE_SIZE, ptep++)
+ pte_num = arch_contpte_get_num_contig(ptep, sz, &pgsize);
+ for (i = 0; i < pte_num; i++, addr += pgsize, ptep++)
pte_clear(mm, addr, ptep);
}
diff --git a/include/linux/hugetlb_contpte.h b/include/linux/hugetlb_contpte.h
new file mode 100644
index 000000000000..ec4189cd65b8
--- /dev/null
+++ b/include/linux/hugetlb_contpte.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2024 Rivos Inc.
+ */
+
+#ifndef _LINUX_HUGETLB_CONTPTE_H
+#define _LINUX_HUGETLB_CONTPTE_H
+
+#define __HAVE_ARCH_HUGE_PTEP_GET
+extern pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
+
+#endif /* _LINUX_HUGETLB_CONTPTE_H */
diff --git a/mm/Kconfig b/mm/Kconfig
index 84000b016808..8cd38de612ce 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -810,6 +810,9 @@ config NOMMU_INITIAL_TRIM_EXCESS
config ARCH_WANT_GENERAL_HUGETLB
bool
+config ARCH_WANT_GENERAL_HUGETLB_CONTPTE
+ bool
+
config ARCH_WANTS_THP_SWAP
def_bool n
diff --git a/mm/Makefile b/mm/Makefile
index dba52bb0da8a..1c1250fbb020 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -96,6 +96,7 @@ obj-$(CONFIG_MIGRATION) += migrate.o
obj-$(CONFIG_NUMA) += memory-tiers.o
obj-$(CONFIG_DEVICE_MIGRATION) += migrate_device.o
obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += huge_memory.o khugepaged.o
+obj-$(CONFIG_ARCH_WANT_GENERAL_HUGETLB_CONTPTE) += hugetlb_contpte.o
obj-$(CONFIG_PAGE_COUNTER) += page_counter.o
obj-$(CONFIG_MEMCG_V1) += memcontrol-v1.o
obj-$(CONFIG_MEMCG) += memcontrol.o vmpressure.o
diff --git a/mm/hugetlb_contpte.c b/mm/hugetlb_contpte.c
new file mode 100644
index 000000000000..a03e91d3efb1
--- /dev/null
+++ b/mm/hugetlb_contpte.c
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright 2025 Rivos Inc.
+ */
+
+#include <linux/pgtable.h>
+#include <linux/mm.h>
+#include <linux/hugetlb.h>
+
+/*
+ * Any arch that wants to use that needs to define:
+ * - __ptep_get()
+ * - pte_cont()
+ * - arch_contpte_get_num_contig()
+ */
+
+/*
+ * This file implements the following contpte aware API:
+ * - huge_ptep_get()
+ */
+
+pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+{
+ int ncontig, i;
+ pte_t orig_pte = __ptep_get(ptep);
+
+ if (!pte_present(orig_pte) || !pte_cont(orig_pte))
+ return orig_pte;
+
+ ncontig = arch_contpte_get_num_contig(ptep,
+ page_size(pte_page(orig_pte)),
+ NULL);
+
+ for (i = 0; i < ncontig; i++, ptep++) {
+ pte_t pte = __ptep_get(ptep);
+
+ if (pte_dirty(pte))
+ orig_pte = pte_mkdirty(orig_pte);
+
+ if (pte_young(pte))
+ orig_pte = pte_mkyoung(orig_pte);
+ }
+ return orig_pte;
+}
--
2.39.2
* [PATCH v4 4/9] mm: Use common set_huge_pte_at() function for riscv/arm64
2025-01-27 9:35 [PATCH v4 0/9] Merge arm64/riscv hugetlbfs contpte support Alexandre Ghiti
` (2 preceding siblings ...)
2025-01-27 9:35 ` [PATCH v4 3/9] mm: Use common huge_ptep_get() function for riscv/arm64 Alexandre Ghiti
@ 2025-01-27 9:35 ` Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 5/9] mm: Use common huge_pte_clear() " Alexandre Ghiti
` (4 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Alexandre Ghiti @ 2025-01-27 9:35 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Ryan Roberts, Mark Rutland,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton,
linux-arm-kernel, linux-kernel, linux-riscv, linux-mm
Cc: Alexandre Ghiti
After some adjustments, both architectures have the same implementation
so move it to the generic code.
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
arch/arm64/include/asm/hugetlb.h | 3 --
arch/arm64/mm/hugetlbpage.c | 56 -----------------------------
arch/riscv/include/asm/hugetlb.h | 5 ---
arch/riscv/include/asm/pgtable.h | 8 +++--
arch/riscv/mm/hugetlbpage.c | 62 --------------------------------
include/linux/hugetlb_contpte.h | 5 +++
mm/hugetlb_contpte.c | 59 ++++++++++++++++++++++++++++++
7 files changed, 69 insertions(+), 129 deletions(-)
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 27d7f4bdd724..40d87a563093 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -35,9 +35,6 @@ static inline void arch_clear_hugetlb_flags(struct folio *folio)
pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags);
#define arch_make_huge_pte arch_make_huge_pte
-#define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
-extern void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
- pte_t *ptep, pte_t pte, unsigned long sz);
#define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
extern int huge_ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep,
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 3458461adb90..02de680a6a0d 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -145,62 +145,6 @@ static pte_t get_clear_contig_flush(struct mm_struct *mm,
return orig_pte;
}
-/*
- * Changing some bits of contiguous entries requires us to follow a
- * Break-Before-Make approach, breaking the whole contiguous set
- * before we can change any entries. See ARM DDI 0487A.k_iss10775,
- * "Misprogramming of the Contiguous bit", page D4-1762.
- *
- * This helper performs the break step for use cases where the
- * original pte is not needed.
- */
-static void clear_flush(struct mm_struct *mm,
- unsigned long addr,
- pte_t *ptep,
- unsigned long pgsize,
- unsigned long ncontig)
-{
- struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
- unsigned long i, saddr = addr;
-
- for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
- __ptep_get_and_clear(mm, addr, ptep);
-
- flush_tlb_range(&vma, saddr, addr);
-}
-
-void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
- pte_t *ptep, pte_t pte, unsigned long sz)
-{
- size_t pgsize;
- int i;
- int ncontig;
- unsigned long pfn, dpfn;
- pgprot_t hugeprot;
-
- ncontig = arch_contpte_get_num_contig(ptep, sz, &pgsize);
-
- if (!pte_present(pte)) {
- for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
- __set_ptes(mm, addr, ptep, pte, 1);
- return;
- }
-
- if (!pte_cont(pte)) {
- __set_ptes(mm, addr, ptep, pte, 1);
- return;
- }
-
- pfn = pte_pfn(pte);
- dpfn = pgsize >> PAGE_SHIFT;
- hugeprot = pte_pgprot(pte);
-
- clear_flush(mm, addr, ptep, pgsize, ncontig);
-
- for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
- __set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
-}
-
pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long addr, unsigned long sz)
{
diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
index d9f9bfb84908..28cbf5d761e1 100644
--- a/arch/riscv/include/asm/hugetlb.h
+++ b/arch/riscv/include/asm/hugetlb.h
@@ -24,11 +24,6 @@ bool arch_hugetlb_migration_supported(struct hstate *h);
void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, unsigned long sz);
-#define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
-void set_huge_pte_at(struct mm_struct *mm,
- unsigned long addr, pte_t *ptep, pte_t pte,
- unsigned long sz);
-
#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep);
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index d4e6427b8ca9..74d29d0af172 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -634,9 +634,8 @@ extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long addre
extern int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long address,
pte_t *ptep);
-#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
-static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
- unsigned long address, pte_t *ptep)
+static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
+ unsigned long address, pte_t *ptep)
{
pte_t pte = __pte(atomic_long_xchg((atomic_long_t *)ptep, 0));
@@ -736,6 +735,9 @@ static inline pte_t ptep_get(pte_t *ptep)
#define ptep_get __ptep_get
#endif /* CONFIG_RISCV_ISA_SVNAPOT */
+#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
+#define ptep_get_and_clear __ptep_get_and_clear
+
#define pgprot_nx pgprot_nx
static inline pgprot_t pgprot_nx(pgprot_t _prot)
{
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index d51863824540..0ecb2846c3f0 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -173,68 +173,6 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags)
return entry;
}
-static void clear_flush(struct mm_struct *mm,
- unsigned long addr,
- pte_t *ptep,
- unsigned long pgsize,
- unsigned long ncontig)
-{
- struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
- unsigned long i, saddr = addr;
-
- for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
- ptep_get_and_clear(mm, addr, ptep);
-
- flush_tlb_range(&vma, saddr, addr);
-}
-
-/*
- * When dealing with NAPOT mappings, the privileged specification indicates that
- * "if an update needs to be made, the OS generally should first mark all of the
- * PTEs invalid, then issue SFENCE.VMA instruction(s) covering all 4 KiB regions
- * within the range, [...] then update the PTE(s), as described in Section
- * 4.2.1.". That's the equivalent of the Break-Before-Make approach used by
- * arm64.
- */
-void set_huge_pte_at(struct mm_struct *mm,
- unsigned long addr,
- pte_t *ptep,
- pte_t pte,
- unsigned long sz)
-{
- unsigned long hugepage_shift, pgsize;
- int i, pte_num;
-
- if (sz >= PGDIR_SIZE)
- hugepage_shift = PGDIR_SHIFT;
- else if (sz >= P4D_SIZE)
- hugepage_shift = P4D_SHIFT;
- else if (sz >= PUD_SIZE)
- hugepage_shift = PUD_SHIFT;
- else if (sz >= PMD_SIZE)
- hugepage_shift = PMD_SHIFT;
- else
- hugepage_shift = PAGE_SHIFT;
-
- pte_num = sz >> hugepage_shift;
- pgsize = 1 << hugepage_shift;
-
- if (!pte_present(pte)) {
- for (i = 0; i < pte_num; i++, ptep++, addr += pgsize)
- set_ptes(mm, addr, ptep, pte, 1);
- return;
- }
-
- if (!pte_napot(pte)) {
- set_ptes(mm, addr, ptep, pte, 1);
- return;
- }
-
- clear_flush(mm, addr, ptep, pgsize, pte_num);
-
- set_ptes(mm, addr, ptep, pte, pte_num);
-}
-
int huge_ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long addr,
pte_t *ptep,
diff --git a/include/linux/hugetlb_contpte.h b/include/linux/hugetlb_contpte.h
index ec4189cd65b8..7acd734a75e8 100644
--- a/include/linux/hugetlb_contpte.h
+++ b/include/linux/hugetlb_contpte.h
@@ -9,4 +9,9 @@
#define __HAVE_ARCH_HUGE_PTEP_GET
extern pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
+#define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
+extern void set_huge_pte_at(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep, pte_t pte,
+ unsigned long sz);
+
#endif /* _LINUX_HUGETLB_CONTPTE_H */
diff --git a/mm/hugetlb_contpte.c b/mm/hugetlb_contpte.c
index a03e91d3efb1..677d714fd10d 100644
--- a/mm/hugetlb_contpte.c
+++ b/mm/hugetlb_contpte.c
@@ -10,6 +10,8 @@
/*
* Any arch that wants to use that needs to define:
* - __ptep_get()
+ * - __set_ptes()
+ * - __ptep_get_and_clear()
* - pte_cont()
* - arch_contpte_get_num_contig()
*/
@@ -17,6 +19,7 @@
/*
* This file implements the following contpte aware API:
* - huge_ptep_get()
+ * - set_huge_pte_at()
*/
pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
@@ -42,3 +45,59 @@ pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
}
return orig_pte;
}
+
+/*
+ * ARM64: Changing some bits of contiguous entries requires us to follow a
+ * Break-Before-Make approach, breaking the whole contiguous set
+ * before we can change any entries. See ARM DDI 0487A.k_iss10775,
+ * "Misprogramming of the Contiguous bit", page D4-1762.
+ *
+ * RISCV: When dealing with NAPOT mappings, the privileged specification
+ * indicates that "if an update needs to be made, the OS generally should first
+ * mark all of the PTEs invalid, then issue SFENCE.VMA instruction(s) covering
+ * all 4 KiB regions within the range, [...] then update the PTE(s), as
+ * described in Section 4.2.1.". That's the equivalent of the Break-Before-Make
+ * approach used by arm64.
+ *
+ * This helper performs the break step for use cases where the
+ * original pte is not needed.
+ */
+static void clear_flush(struct mm_struct *mm,
+ unsigned long addr,
+ pte_t *ptep,
+ unsigned long pgsize,
+ unsigned long ncontig)
+{
+ struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+ unsigned long i, saddr = addr;
+
+ for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
+ __ptep_get_and_clear(mm, addr, ptep);
+
+ flush_tlb_range(&vma, saddr, addr);
+}
+
+void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t pte, unsigned long sz)
+{
+ size_t pgsize;
+ int i;
+ int ncontig;
+
+ ncontig = arch_contpte_get_num_contig(ptep, sz, &pgsize);
+
+ if (!pte_present(pte)) {
+ for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
+ __set_ptes(mm, addr, ptep, pte, 1);
+ return;
+ }
+
+ if (!pte_cont(pte)) {
+ __set_ptes(mm, addr, ptep, pte, 1);
+ return;
+ }
+
+ clear_flush(mm, addr, ptep, pgsize, ncontig);
+
+ set_contptes(mm, addr, ptep, pte, ncontig, pgsize);
+}
--
2.39.2
* [PATCH v4 5/9] mm: Use common huge_pte_clear() function for riscv/arm64
2025-01-27 9:35 [PATCH v4 0/9] Merge arm64/riscv hugetlbfs contpte support Alexandre Ghiti
` (3 preceding siblings ...)
2025-01-27 9:35 ` [PATCH v4 4/9] mm: Use common set_huge_pte_at() " Alexandre Ghiti
@ 2025-01-27 9:35 ` Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 6/9] mm: Use common huge_ptep_get_and_clear() " Alexandre Ghiti
` (3 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Alexandre Ghiti @ 2025-01-27 9:35 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Ryan Roberts, Mark Rutland,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton,
linux-arm-kernel, linux-kernel, linux-riscv, linux-mm
Cc: Alexandre Ghiti
Both architectures have the same implementation so move it to generic code.
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
arch/arm64/include/asm/hugetlb.h | 3 ---
arch/arm64/mm/hugetlbpage.c | 12 ------------
arch/riscv/include/asm/hugetlb.h | 4 ----
arch/riscv/include/asm/pgtable.h | 5 +++--
arch/riscv/mm/hugetlbpage.c | 19 -------------------
include/linux/hugetlb_contpte.h | 4 ++++
mm/hugetlb_contpte.c | 14 ++++++++++++++
7 files changed, 21 insertions(+), 40 deletions(-)
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 40d87a563093..e4acaedea149 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -48,9 +48,6 @@ extern void huge_ptep_set_wrprotect(struct mm_struct *mm,
#define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
extern pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep);
-#define __HAVE_ARCH_HUGE_PTE_CLEAR
-extern void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
- pte_t *ptep, unsigned long sz);
void __init arm64_hugetlb_cma_reserve(void);
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 02de680a6a0d..541358f50b64 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -277,18 +277,6 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags)
return entry;
}
-void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
- pte_t *ptep, unsigned long sz)
-{
- int i, ncontig;
- size_t pgsize;
-
- ncontig = arch_contpte_get_num_contig(ptep, sz, &pgsize);
-
- for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
- __pte_clear(mm, addr, ptep);
-}
-
pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{
diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
index 28cbf5d761e1..ca9930cdf2e6 100644
--- a/arch/riscv/include/asm/hugetlb.h
+++ b/arch/riscv/include/asm/hugetlb.h
@@ -20,10 +20,6 @@ bool arch_hugetlb_migration_supported(struct hstate *h);
#endif
#ifdef CONFIG_RISCV_ISA_SVNAPOT
-#define __HAVE_ARCH_HUGE_PTE_CLEAR
-void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
- pte_t *ptep, unsigned long sz);
-
#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep);
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 74d29d0af172..08b24c0a579b 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -621,8 +621,8 @@ static inline int arch_contpte_get_num_contig(pte_t *ptep, unsigned long size,
}
#endif
-static inline void pte_clear(struct mm_struct *mm,
- unsigned long addr, pte_t *ptep)
+static inline void __pte_clear(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep)
{
__set_pte_at(mm, ptep, __pte(0));
}
@@ -737,6 +737,7 @@ static inline pte_t ptep_get(pte_t *ptep)
#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
#define ptep_get_and_clear __ptep_get_and_clear
+#define pte_clear __pte_clear
#define pgprot_nx pgprot_nx
static inline pgprot_t pgprot_nx(pgprot_t _prot)
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index 0ecb2846c3f0..e2093e7266a5 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -251,25 +251,6 @@ pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
return get_clear_contig_flush(vma->vm_mm, addr, ptep, pte_num);
}
-void huge_pte_clear(struct mm_struct *mm,
- unsigned long addr,
- pte_t *ptep,
- unsigned long sz)
-{
- size_t pgsize;
- pte_t pte = ptep_get(ptep);
- int i, pte_num;
-
- if (!pte_napot(pte)) {
- pte_clear(mm, addr, ptep);
- return;
- }
-
- pte_num = arch_contpte_get_num_contig(ptep, sz, &pgsize);
- for (i = 0; i < pte_num; i++, addr += pgsize, ptep++)
- pte_clear(mm, addr, ptep);
-}
-
static bool is_napot_size(unsigned long size)
{
unsigned long order;
diff --git a/include/linux/hugetlb_contpte.h b/include/linux/hugetlb_contpte.h
index 7acd734a75e8..d9892a047b2b 100644
--- a/include/linux/hugetlb_contpte.h
+++ b/include/linux/hugetlb_contpte.h
@@ -14,4 +14,8 @@ extern void set_huge_pte_at(struct mm_struct *mm,
unsigned long addr, pte_t *ptep, pte_t pte,
unsigned long sz);
+#define __HAVE_ARCH_HUGE_PTE_CLEAR
+extern void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, unsigned long sz);
+
#endif /* _LINUX_HUGETLB_CONTPTE_H */
diff --git a/mm/hugetlb_contpte.c b/mm/hugetlb_contpte.c
index 677d714fd10d..c76d6b3d0121 100644
--- a/mm/hugetlb_contpte.c
+++ b/mm/hugetlb_contpte.c
@@ -12,6 +12,7 @@
* - __ptep_get()
* - __set_ptes()
* - __ptep_get_and_clear()
+ * - __pte_clear()
* - pte_cont()
* - arch_contpte_get_num_contig()
*/
@@ -20,6 +21,7 @@
* This file implements the following contpte aware API:
* - huge_ptep_get()
* - set_huge_pte_at()
+ * - huge_pte_clear()
*/
pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
@@ -101,3 +103,15 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
set_contptes(mm, addr, ptep, pte, ncontig, pgsize);
}
+
+void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, unsigned long sz)
+{
+ int i, ncontig;
+ size_t pgsize;
+
+ ncontig = arch_contpte_get_num_contig(ptep, sz, &pgsize);
+
+ for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
+ __pte_clear(mm, addr, ptep);
+}
--
2.39.2
* [PATCH v4 6/9] mm: Use common huge_ptep_get_and_clear() function for riscv/arm64
2025-01-27 9:35 [PATCH v4 0/9] Merge arm64/riscv hugetlbfs contpte support Alexandre Ghiti
` (4 preceding siblings ...)
2025-01-27 9:35 ` [PATCH v4 5/9] mm: Use common huge_pte_clear() " Alexandre Ghiti
@ 2025-01-27 9:35 ` Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 7/9] mm: Use common huge_ptep_set_access_flags() " Alexandre Ghiti
` (2 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Alexandre Ghiti @ 2025-01-27 9:35 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Ryan Roberts, Mark Rutland,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton,
linux-arm-kernel, linux-kernel, linux-riscv, linux-mm
Cc: Alexandre Ghiti
After some adjustments, both architectures have the same implementation
so move it to the generic code.
Note that the get_clear_contig() function is duplicated in the generic
and the arm64 code because it is still used by some arm64 functions
that will, in the next commits, be moved to the generic code. Once they
have all been moved, the arm64 version will be removed.
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
arch/arm64/include/asm/hugetlb.h | 3 --
arch/arm64/include/asm/pgtable.h | 15 ++++++++--
arch/arm64/mm/hugetlbpage.c | 19 ++-----------
arch/riscv/include/asm/hugetlb.h | 4 ---
arch/riscv/include/asm/pgtable.h | 4 ++-
arch/riscv/mm/hugetlbpage.c | 23 ++++-----------
include/linux/hugetlb_contpte.h | 4 +++
mm/hugetlb_contpte.c | 48 ++++++++++++++++++++++++++++++--
8 files changed, 72 insertions(+), 48 deletions(-)
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index e4acaedea149..5c605a0a2017 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -39,9 +39,6 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags);
extern int huge_ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep,
pte_t pte, int dirty);
-#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
-extern pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
- unsigned long addr, pte_t *ptep);
#define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
extern void huge_ptep_set_wrprotect(struct mm_struct *mm,
unsigned long addr, pte_t *ptep);
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index cebbfcfb0e53..c339b568ac51 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1831,12 +1831,23 @@ static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
#endif /* CONFIG_ARM64_CONTPTE */
-static inline int arch_contpte_get_num_contig(pte_t *ptep,
- unsigned long size,
+extern int find_num_contig(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, size_t *pgsize);
+
+static inline int arch_contpte_get_num_contig(struct mm_struct *mm,
+ unsigned long addr,
+ pte_t *ptep, unsigned long size,
size_t *pgsize)
{
int contig_ptes = 0;
+ /*
+ * If the size is not passed, we need to go through the page table to
+ * find out the number of contiguous ptes.
+ */
+ if (size == 0)
+ return find_num_contig(mm, addr, ptep, pgsize);
+
if (pgsize)
*pgsize = size;
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 541358f50b64..0b7a53fee55d 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -79,8 +79,8 @@ bool arch_hugetlb_migration_supported(struct hstate *h)
}
#endif
-static int find_num_contig(struct mm_struct *mm, unsigned long addr,
- pte_t *ptep, size_t *pgsize)
+int find_num_contig(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, size_t *pgsize)
{
pgd_t *pgdp = pgd_offset(mm, addr);
p4d_t *p4dp;
@@ -277,21 +277,6 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags)
return entry;
}
-pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
- unsigned long addr, pte_t *ptep)
-{
- int ncontig;
- size_t pgsize;
- pte_t orig_pte = __ptep_get(ptep);
-
- if (!pte_cont(orig_pte))
- return __ptep_get_and_clear(mm, addr, ptep);
-
- ncontig = find_num_contig(mm, addr, ptep, &pgsize);
-
- return get_clear_contig(mm, addr, ptep, pgsize, ncontig);
-}
-
/*
* huge_ptep_set_access_flags will update access flags (dirty, accesssed)
* and write permission.
diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
index ca9930cdf2e6..0fbb6b19df79 100644
--- a/arch/riscv/include/asm/hugetlb.h
+++ b/arch/riscv/include/asm/hugetlb.h
@@ -20,10 +20,6 @@ bool arch_hugetlb_migration_supported(struct hstate *h);
#endif
#ifdef CONFIG_RISCV_ISA_SVNAPOT
-#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
-pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
- unsigned long addr, pte_t *ptep);
-
#define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep);
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 08b24c0a579b..705d666e014d 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -590,7 +590,9 @@ static inline void __set_ptes(struct mm_struct *mm, unsigned long addr,
}
#ifdef CONFIG_RISCV_ISA_SVNAPOT
-static inline int arch_contpte_get_num_contig(pte_t *ptep, unsigned long size,
+static inline int arch_contpte_get_num_contig(struct mm_struct *mm,
+ unsigned long addr,
+ pte_t *ptep, unsigned long size,
size_t *pgsize)
{
unsigned long hugepage_shift;
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index e2093e7266a5..b44023336fd9 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -186,7 +186,8 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
if (!pte_napot(pte))
return ptep_set_access_flags(vma, addr, ptep, pte, dirty);
- pte_num = arch_contpte_get_num_contig(ptep, 0, NULL);
+ pte_num = arch_contpte_get_num_contig(vma->vm_mm, addr, ptep, 0, NULL);
+
orig_pte = get_clear_contig_flush(mm, addr, ptep, pte_num);
if (pte_dirty(orig_pte))
@@ -200,21 +201,6 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
return true;
}
-pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
- unsigned long addr,
- pte_t *ptep)
-{
- pte_t orig_pte = ptep_get(ptep);
- int pte_num;
-
- if (!pte_napot(orig_pte))
- return ptep_get_and_clear(mm, addr, ptep);
-
- pte_num = arch_contpte_get_num_contig(ptep, 0, NULL);
-
- return get_clear_contig(mm, addr, ptep, pte_num);
-}
-
void huge_ptep_set_wrprotect(struct mm_struct *mm,
unsigned long addr,
pte_t *ptep)
@@ -228,7 +214,8 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
return;
}
- pte_num = arch_contpte_get_num_contig(ptep, 0, NULL);
+ pte_num = arch_contpte_get_num_contig(mm, addr, ptep, 0, NULL);
+
orig_pte = get_clear_contig_flush(mm, addr, ptep, pte_num);
orig_pte = pte_wrprotect(orig_pte);
@@ -246,7 +233,7 @@ pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
if (!pte_napot(pte))
return ptep_clear_flush(vma, addr, ptep);
- pte_num = arch_contpte_get_num_contig(ptep, 0, NULL);
+ pte_num = arch_contpte_get_num_contig(vma->vm_mm, addr, ptep, 0, NULL);
return get_clear_contig_flush(vma->vm_mm, addr, ptep, pte_num);
}
diff --git a/include/linux/hugetlb_contpte.h b/include/linux/hugetlb_contpte.h
index d9892a047b2b..20d3a3e14e14 100644
--- a/include/linux/hugetlb_contpte.h
+++ b/include/linux/hugetlb_contpte.h
@@ -18,4 +18,8 @@ extern void set_huge_pte_at(struct mm_struct *mm,
extern void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, unsigned long sz);
+#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
+extern pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep);
+
#endif /* _LINUX_HUGETLB_CONTPTE_H */
diff --git a/mm/hugetlb_contpte.c b/mm/hugetlb_contpte.c
index c76d6b3d0121..0c86c6f77c29 100644
--- a/mm/hugetlb_contpte.c
+++ b/mm/hugetlb_contpte.c
@@ -22,6 +22,7 @@
* - huge_ptep_get()
* - set_huge_pte_at()
* - huge_pte_clear()
+ * - huge_ptep_get_and_clear()
*/
pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
@@ -32,7 +33,7 @@ pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
if (!pte_present(orig_pte) || !pte_cont(orig_pte))
return orig_pte;
- ncontig = arch_contpte_get_num_contig(ptep,
+ ncontig = arch_contpte_get_num_contig(mm, addr, ptep,
page_size(pte_page(orig_pte)),
NULL);
@@ -86,7 +87,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
int i;
int ncontig;
- ncontig = arch_contpte_get_num_contig(ptep, sz, &pgsize);
+ ncontig = arch_contpte_get_num_contig(mm, addr, ptep, sz, &pgsize);
if (!pte_present(pte)) {
for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
@@ -110,8 +111,49 @@ void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
int i, ncontig;
size_t pgsize;
- ncontig = arch_contpte_get_num_contig(ptep, sz, &pgsize);
+ ncontig = arch_contpte_get_num_contig(mm, addr, ptep, sz, &pgsize);
for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
__pte_clear(mm, addr, ptep);
}
+
+static pte_t get_clear_contig(struct mm_struct *mm,
+ unsigned long addr,
+ pte_t *ptep,
+ unsigned long pgsize,
+ unsigned long ncontig)
+{
+ pte_t orig_pte = __ptep_get(ptep);
+ unsigned long i;
+
+ for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
+ pte_t pte = __ptep_get_and_clear(mm, addr, ptep);
+
+ /*
+ * If HW_AFDBM (arm64) or Svadu (riscv) is enabled, then the HW
+ * could turn on the dirty or accessed bit for any page in the
+ * set, so check them all.
+ */
+ if (pte_dirty(pte))
+ orig_pte = pte_mkdirty(orig_pte);
+
+ if (pte_young(pte))
+ orig_pte = pte_mkyoung(orig_pte);
+ }
+ return orig_pte;
+}
+
+pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep)
+{
+ int ncontig;
+ size_t pgsize;
+ pte_t orig_pte = __ptep_get(ptep);
+
+ if (!pte_cont(orig_pte))
+ return __ptep_get_and_clear(mm, addr, ptep);
+
+ ncontig = arch_contpte_get_num_contig(mm, addr, ptep, 0, &pgsize);
+
+ return get_clear_contig(mm, addr, ptep, pgsize, ncontig);
+}
--
2.39.2
* [PATCH v4 7/9] mm: Use common huge_ptep_set_access_flags() function for riscv/arm64
2025-01-27 9:35 [PATCH v4 0/9] Merge arm64/riscv hugetlbfs contpte support Alexandre Ghiti
` (5 preceding siblings ...)
2025-01-27 9:35 ` [PATCH v4 6/9] mm: Use common huge_ptep_get_and_clear() " Alexandre Ghiti
@ 2025-01-27 9:35 ` Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 8/9] mm: Use common huge_ptep_set_wrprotect() " Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 9/9] mm: Use common huge_ptep_clear_flush() " Alexandre Ghiti
8 siblings, 0 replies; 13+ messages in thread
From: Alexandre Ghiti @ 2025-01-27 9:35 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Ryan Roberts, Mark Rutland,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton,
linux-arm-kernel, linux-kernel, linux-riscv, linux-mm
Cc: Alexandre Ghiti
Both architectures have almost the same implementation:
__cont_access_flags_changed() is also correct on riscv and brings the
same benefits (i.e., do nothing if the flags are unchanged).
As in the previous commit, get_clear_contig_flush() is duplicated in both
the arch and the generic code; it will be removed from the arch code once
the last reference there is moved to the generic code.
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
arch/arm64/include/asm/hugetlb.h | 4 --
arch/arm64/mm/hugetlbpage.c | 65 ---------------------------
arch/riscv/include/asm/hugetlb.h | 5 ---
arch/riscv/include/asm/pgtable.h | 7 +--
arch/riscv/mm/hugetlbpage.c | 28 ------------
arch/riscv/mm/pgtable.c | 6 +--
include/linux/hugetlb_contpte.h | 5 +++
mm/hugetlb_contpte.c | 75 ++++++++++++++++++++++++++++++++
8 files changed, 87 insertions(+), 108 deletions(-)
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 5c605a0a2017..654f5f2f03a3 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -35,10 +35,6 @@ static inline void arch_clear_hugetlb_flags(struct folio *folio)
pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags);
#define arch_make_huge_pte arch_make_huge_pte
-#define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
-extern int huge_ptep_set_access_flags(struct vm_area_struct *vma,
- unsigned long addr, pte_t *ptep,
- pte_t pte, int dirty);
#define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
extern void huge_ptep_set_wrprotect(struct mm_struct *mm,
unsigned long addr, pte_t *ptep);
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 0b7a53fee55d..643ba2043f0f 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -277,71 +277,6 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags)
return entry;
}
-/*
- * huge_ptep_set_access_flags will update access flags (dirty, accesssed)
- * and write permission.
- *
- * For a contiguous huge pte range we need to check whether or not write
- * permission has to change only on the first pte in the set. Then for
- * all the contiguous ptes we need to check whether or not there is a
- * discrepancy between dirty or young.
- */
-static int __cont_access_flags_changed(pte_t *ptep, pte_t pte, int ncontig)
-{
- int i;
-
- if (pte_write(pte) != pte_write(__ptep_get(ptep)))
- return 1;
-
- for (i = 0; i < ncontig; i++) {
- pte_t orig_pte = __ptep_get(ptep + i);
-
- if (pte_dirty(pte) != pte_dirty(orig_pte))
- return 1;
-
- if (pte_young(pte) != pte_young(orig_pte))
- return 1;
- }
-
- return 0;
-}
-
-int huge_ptep_set_access_flags(struct vm_area_struct *vma,
- unsigned long addr, pte_t *ptep,
- pte_t pte, int dirty)
-{
- int ncontig, i;
- size_t pgsize = 0;
- unsigned long pfn = pte_pfn(pte), dpfn;
- struct mm_struct *mm = vma->vm_mm;
- pgprot_t hugeprot;
- pte_t orig_pte;
-
- if (!pte_cont(pte))
- return __ptep_set_access_flags(vma, addr, ptep, pte, dirty);
-
- ncontig = find_num_contig(mm, addr, ptep, &pgsize);
- dpfn = pgsize >> PAGE_SHIFT;
-
- if (!__cont_access_flags_changed(ptep, pte, ncontig))
- return 0;
-
- orig_pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
-
- /* Make sure we don't lose the dirty or young state */
- if (pte_dirty(orig_pte))
- pte = pte_mkdirty(pte);
-
- if (pte_young(orig_pte))
- pte = pte_mkyoung(pte);
-
- hugeprot = pte_pgprot(pte);
- for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
- __set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
-
- return 1;
-}
-
void huge_ptep_set_wrprotect(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{
diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
index 0fbb6b19df79..bf533c2cef84 100644
--- a/arch/riscv/include/asm/hugetlb.h
+++ b/arch/riscv/include/asm/hugetlb.h
@@ -28,11 +28,6 @@ pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
void huge_ptep_set_wrprotect(struct mm_struct *mm,
unsigned long addr, pte_t *ptep);
-#define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
-int huge_ptep_set_access_flags(struct vm_area_struct *vma,
- unsigned long addr, pte_t *ptep,
- pte_t pte, int dirty);
-
pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags);
#define arch_make_huge_pte arch_make_huge_pte
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 705d666e014d..290d5fbfe031 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -629,9 +629,8 @@ static inline void __pte_clear(struct mm_struct *mm,
__set_pte_at(mm, ptep, __pte(0));
}
-#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS /* defined in mm/pgtable.c */
-extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
- pte_t *ptep, pte_t entry, int dirty);
+extern int __ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
+ pte_t *ptep, pte_t entry, int dirty);
#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG /* defined in mm/pgtable.c */
extern int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long address,
pte_t *ptep);
@@ -740,6 +739,8 @@ static inline pte_t ptep_get(pte_t *ptep)
#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
#define ptep_get_and_clear __ptep_get_and_clear
#define pte_clear __pte_clear
+#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
+#define ptep_set_access_flags __ptep_set_access_flags
#define pgprot_nx pgprot_nx
static inline pgprot_t pgprot_nx(pgprot_t _prot)
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index b44023336fd9..0e2ca7327479 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -173,34 +173,6 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags)
return entry;
}
-int huge_ptep_set_access_flags(struct vm_area_struct *vma,
- unsigned long addr,
- pte_t *ptep,
- pte_t pte,
- int dirty)
-{
- struct mm_struct *mm = vma->vm_mm;
- pte_t orig_pte;
- int pte_num;
-
- if (!pte_napot(pte))
- return ptep_set_access_flags(vma, addr, ptep, pte, dirty);
-
- pte_num = arch_contpte_get_num_contig(vma->vm_mm, addr, ptep, 0, NULL);
-
- orig_pte = get_clear_contig_flush(mm, addr, ptep, pte_num);
-
- if (pte_dirty(orig_pte))
- pte = pte_mkdirty(pte);
-
- if (pte_young(orig_pte))
- pte = pte_mkyoung(pte);
-
- set_ptes(mm, addr, ptep, pte, pte_num);
-
- return true;
-}
-
void huge_ptep_set_wrprotect(struct mm_struct *mm,
unsigned long addr,
pte_t *ptep)
diff --git a/arch/riscv/mm/pgtable.c b/arch/riscv/mm/pgtable.c
index 4ae67324f992..af8b3769a349 100644
--- a/arch/riscv/mm/pgtable.c
+++ b/arch/riscv/mm/pgtable.c
@@ -5,9 +5,9 @@
#include <linux/kernel.h>
#include <linux/pgtable.h>
-int ptep_set_access_flags(struct vm_area_struct *vma,
- unsigned long address, pte_t *ptep,
- pte_t entry, int dirty)
+int __ptep_set_access_flags(struct vm_area_struct *vma,
+ unsigned long address, pte_t *ptep,
+ pte_t entry, int dirty)
{
asm goto(ALTERNATIVE("nop", "j %l[svvptc]", 0, RISCV_ISA_EXT_SVVPTC, 1)
: : : : svvptc);
diff --git a/include/linux/hugetlb_contpte.h b/include/linux/hugetlb_contpte.h
index 20d3a3e14e14..fea47035ac38 100644
--- a/include/linux/hugetlb_contpte.h
+++ b/include/linux/hugetlb_contpte.h
@@ -22,4 +22,9 @@ extern void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
extern pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep);
+#define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
+extern int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep,
+ pte_t pte, int dirty);
+
#endif /* _LINUX_HUGETLB_CONTPTE_H */
diff --git a/mm/hugetlb_contpte.c b/mm/hugetlb_contpte.c
index 0c86c6f77c29..49950c1ce615 100644
--- a/mm/hugetlb_contpte.c
+++ b/mm/hugetlb_contpte.c
@@ -13,6 +13,7 @@
* - __set_ptes()
* - __ptep_get_and_clear()
* - __pte_clear()
+ * - __ptep_set_access_flags()
* - pte_cont()
* - arch_contpte_get_num_contig()
*/
@@ -23,6 +24,7 @@
* - set_huge_pte_at()
* - huge_pte_clear()
* - huge_ptep_get_and_clear()
+ * - huge_ptep_set_access_flags()
*/
pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
@@ -157,3 +159,76 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
return get_clear_contig(mm, addr, ptep, pgsize, ncontig);
}
+
+/*
+ * huge_ptep_set_access_flags will update access flags (dirty, accessed)
+ * and write permission.
+ *
+ * For a contiguous huge pte range we need to check whether or not write
+ * permission has to change only on the first pte in the set. Then for
+ * all the contiguous ptes we need to check whether or not there is a
+ * discrepancy between dirty or young.
+ */
+static int __cont_access_flags_changed(pte_t *ptep, pte_t pte, int ncontig)
+{
+ int i;
+
+ if (pte_write(pte) != pte_write(__ptep_get(ptep)))
+ return 1;
+
+ for (i = 0; i < ncontig; i++) {
+ pte_t orig_pte = __ptep_get(ptep + i);
+
+ if (pte_dirty(pte) != pte_dirty(orig_pte))
+ return 1;
+
+ if (pte_young(pte) != pte_young(orig_pte))
+ return 1;
+ }
+
+ return 0;
+}
+
+static pte_t get_clear_contig_flush(struct mm_struct *mm,
+ unsigned long addr,
+ pte_t *ptep,
+ unsigned long pgsize,
+ unsigned long ncontig)
+{
+ pte_t orig_pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
+ struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+
+ flush_tlb_range(&vma, addr, addr + (pgsize * ncontig));
+ return orig_pte;
+}
+
+int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep,
+ pte_t pte, int dirty)
+{
+ int ncontig;
+ size_t pgsize = 0;
+ struct mm_struct *mm = vma->vm_mm;
+ pte_t orig_pte;
+
+ if (!pte_cont(pte))
+ return __ptep_set_access_flags(vma, addr, ptep, pte, dirty);
+
+ ncontig = arch_contpte_get_num_contig(vma->vm_mm, addr, ptep, 0, &pgsize);
+
+ if (!__cont_access_flags_changed(ptep, pte, ncontig))
+ return 0;
+
+ orig_pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
+
+ /* Make sure we don't lose the dirty or young state */
+ if (pte_dirty(orig_pte))
+ pte = pte_mkdirty(pte);
+
+ if (pte_young(orig_pte))
+ pte = pte_mkyoung(pte);
+
+ set_contptes(mm, addr, ptep, pte, ncontig, pgsize);
+
+ return 1;
+}
--
2.39.2
* [PATCH v4 8/9] mm: Use common huge_ptep_set_wrprotect() function for riscv/arm64
2025-01-27 9:35 [PATCH v4 0/9] Merge arm64/riscv hugetlbfs contpte support Alexandre Ghiti
` (6 preceding siblings ...)
2025-01-27 9:35 ` [PATCH v4 7/9] mm: Use common huge_ptep_set_access_flags() " Alexandre Ghiti
@ 2025-01-27 9:35 ` Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 9/9] mm: Use common huge_ptep_clear_flush() " Alexandre Ghiti
8 siblings, 0 replies; 13+ messages in thread
From: Alexandre Ghiti @ 2025-01-27 9:35 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Ryan Roberts, Mark Rutland,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton,
linux-arm-kernel, linux-kernel, linux-riscv, linux-mm
Cc: Alexandre Ghiti
After some adjustments, both architectures have the same implementation,
so move it to the generic code.
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
arch/arm64/include/asm/hugetlb.h | 3 ---
arch/arm64/mm/hugetlbpage.c | 27 ---------------------------
arch/riscv/include/asm/hugetlb.h | 4 ----
arch/riscv/include/asm/pgtable.h | 7 ++++---
arch/riscv/mm/hugetlbpage.c | 22 ----------------------
include/linux/hugetlb_contpte.h | 4 ++++
mm/hugetlb_contpte.c | 22 ++++++++++++++++++++++
7 files changed, 30 insertions(+), 59 deletions(-)
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 654f5f2f03a3..fd1de0caad3f 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -35,9 +35,6 @@ static inline void arch_clear_hugetlb_flags(struct folio *folio)
pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags);
#define arch_make_huge_pte arch_make_huge_pte
-#define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
-extern void huge_ptep_set_wrprotect(struct mm_struct *mm,
- unsigned long addr, pte_t *ptep);
#define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
extern pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep);
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 643ba2043f0f..0430cb41f381 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -277,33 +277,6 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags)
return entry;
}
-void huge_ptep_set_wrprotect(struct mm_struct *mm,
- unsigned long addr, pte_t *ptep)
-{
- unsigned long pfn, dpfn;
- pgprot_t hugeprot;
- int ncontig, i;
- size_t pgsize;
- pte_t pte;
-
- if (!pte_cont(__ptep_get(ptep))) {
- __ptep_set_wrprotect(mm, addr, ptep);
- return;
- }
-
- ncontig = find_num_contig(mm, addr, ptep, &pgsize);
- dpfn = pgsize >> PAGE_SHIFT;
-
- pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
- pte = pte_wrprotect(pte);
-
- hugeprot = pte_pgprot(pte);
- pfn = pte_pfn(pte);
-
- for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
- __set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
-}
-
pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep)
{
diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
index bf533c2cef84..4c692dd82779 100644
--- a/arch/riscv/include/asm/hugetlb.h
+++ b/arch/riscv/include/asm/hugetlb.h
@@ -24,10 +24,6 @@ bool arch_hugetlb_migration_supported(struct hstate *h);
pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep);
-#define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
-void huge_ptep_set_wrprotect(struct mm_struct *mm,
- unsigned long addr, pte_t *ptep);
-
pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags);
#define arch_make_huge_pte arch_make_huge_pte
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 290d5fbfe031..5a29153a4013 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -645,9 +645,8 @@ static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
return pte;
}
-#define __HAVE_ARCH_PTEP_SET_WRPROTECT
-static inline void ptep_set_wrprotect(struct mm_struct *mm,
- unsigned long address, pte_t *ptep)
+static inline void __ptep_set_wrprotect(struct mm_struct *mm,
+ unsigned long address, pte_t *ptep)
{
atomic_long_and(~(unsigned long)_PAGE_WRITE, (atomic_long_t *)ptep);
}
@@ -741,6 +740,8 @@ static inline pte_t ptep_get(pte_t *ptep)
#define pte_clear __pte_clear
#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
#define ptep_set_access_flags __ptep_set_access_flags
+#define __HAVE_ARCH_PTEP_SET_WRPROTECT
+#define ptep_set_wrprotect __ptep_set_wrprotect
#define pgprot_nx pgprot_nx
static inline pgprot_t pgprot_nx(pgprot_t _prot)
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index 0e2ca7327479..8963a4e77742 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -173,28 +173,6 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags)
return entry;
}
-void huge_ptep_set_wrprotect(struct mm_struct *mm,
- unsigned long addr,
- pte_t *ptep)
-{
- pte_t pte = ptep_get(ptep);
- pte_t orig_pte;
- int pte_num;
-
- if (!pte_napot(pte)) {
- ptep_set_wrprotect(mm, addr, ptep);
- return;
- }
-
- pte_num = arch_contpte_get_num_contig(mm, addr, ptep, 0, NULL);
-
- orig_pte = get_clear_contig_flush(mm, addr, ptep, pte_num);
-
- orig_pte = pte_wrprotect(orig_pte);
-
- set_ptes(mm, addr, ptep, orig_pte, pte_num);
-}
-
pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
unsigned long addr,
pte_t *ptep)
diff --git a/include/linux/hugetlb_contpte.h b/include/linux/hugetlb_contpte.h
index fea47035ac38..02bce0ed93d8 100644
--- a/include/linux/hugetlb_contpte.h
+++ b/include/linux/hugetlb_contpte.h
@@ -27,4 +27,8 @@ extern int huge_ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep,
pte_t pte, int dirty);
+#define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
+extern void huge_ptep_set_wrprotect(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep);
+
#endif /* _LINUX_HUGETLB_CONTPTE_H */
diff --git a/mm/hugetlb_contpte.c b/mm/hugetlb_contpte.c
index 49950c1ce615..de505350ef48 100644
--- a/mm/hugetlb_contpte.c
+++ b/mm/hugetlb_contpte.c
@@ -14,6 +14,7 @@
* - __ptep_get_and_clear()
* - __pte_clear()
* - __ptep_set_access_flags()
+ * - __ptep_set_wrprotect()
* - pte_cont()
* - arch_contpte_get_num_contig()
*/
@@ -25,6 +26,7 @@
* - huge_pte_clear()
* - huge_ptep_get_and_clear()
* - huge_ptep_set_access_flags()
+ * - huge_ptep_set_wrprotect()
*/
pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
@@ -232,3 +234,23 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
return 1;
}
+
+void huge_ptep_set_wrprotect(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep)
+{
+ int ncontig;
+ size_t pgsize;
+ pte_t pte;
+
+ if (!pte_cont(__ptep_get(ptep))) {
+ __ptep_set_wrprotect(mm, addr, ptep);
+ return;
+ }
+
+ ncontig = arch_contpte_get_num_contig(mm, addr, ptep, 0, &pgsize);
+
+ pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
+ pte = pte_wrprotect(pte);
+
+ set_contptes(mm, addr, ptep, pte, ncontig, pgsize);
+}
--
2.39.2
* [PATCH v4 9/9] mm: Use common huge_ptep_clear_flush() function for riscv/arm64
2025-01-27 9:35 [PATCH v4 0/9] Merge arm64/riscv hugetlbfs contpte support Alexandre Ghiti
` (7 preceding siblings ...)
2025-01-27 9:35 ` [PATCH v4 8/9] mm: Use common huge_ptep_set_wrprotect() " Alexandre Ghiti
@ 2025-01-27 9:35 ` Alexandre Ghiti
8 siblings, 0 replies; 13+ messages in thread
From: Alexandre Ghiti @ 2025-01-27 9:35 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Ryan Roberts, Mark Rutland,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton,
linux-arm-kernel, linux-kernel, linux-riscv, linux-mm
Cc: Alexandre Ghiti
After some adjustments, both architectures have the same implementation,
so move it to the generic code.
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
arch/arm64/include/asm/hugetlb.h | 3 --
arch/arm64/mm/hugetlbpage.c | 61 --------------------------------
arch/riscv/include/asm/hugetlb.h | 7 +---
arch/riscv/mm/hugetlbpage.c | 51 --------------------------
include/linux/hugetlb_contpte.h | 4 +++
mm/hugetlb_contpte.c | 15 ++++++++
6 files changed, 20 insertions(+), 121 deletions(-)
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index fd1de0caad3f..3f79e4b76711 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -35,9 +35,6 @@ static inline void arch_clear_hugetlb_flags(struct folio *folio)
pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags);
#define arch_make_huge_pte arch_make_huge_pte
-#define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-extern pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
- unsigned long addr, pte_t *ptep);
void __init arm64_hugetlb_cma_reserve(void);
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 0430cb41f381..270e4580e12a 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -98,53 +98,6 @@ int find_num_contig(struct mm_struct *mm, unsigned long addr,
return CONT_PTES;
}
-/*
- * Changing some bits of contiguous entries requires us to follow a
- * Break-Before-Make approach, breaking the whole contiguous set
- * before we can change any entries. See ARM DDI 0487A.k_iss10775,
- * "Misprogramming of the Contiguous bit", page D4-1762.
- *
- * This helper performs the break step.
- */
-static pte_t get_clear_contig(struct mm_struct *mm,
- unsigned long addr,
- pte_t *ptep,
- unsigned long pgsize,
- unsigned long ncontig)
-{
- pte_t orig_pte = __ptep_get(ptep);
- unsigned long i;
-
- for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
- pte_t pte = __ptep_get_and_clear(mm, addr, ptep);
-
- /*
- * If HW_AFDBM is enabled, then the HW could turn on
- * the dirty or accessed bit for any page in the set,
- * so check them all.
- */
- if (pte_dirty(pte))
- orig_pte = pte_mkdirty(orig_pte);
-
- if (pte_young(pte))
- orig_pte = pte_mkyoung(orig_pte);
- }
- return orig_pte;
-}
-
-static pte_t get_clear_contig_flush(struct mm_struct *mm,
- unsigned long addr,
- pte_t *ptep,
- unsigned long pgsize,
- unsigned long ncontig)
-{
- pte_t orig_pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
- struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
-
- flush_tlb_range(&vma, addr, addr + (pgsize * ncontig));
- return orig_pte;
-}
-
pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long addr, unsigned long sz)
{
@@ -277,20 +230,6 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags)
return entry;
}
-pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
- unsigned long addr, pte_t *ptep)
-{
- struct mm_struct *mm = vma->vm_mm;
- size_t pgsize;
- int ncontig;
-
- if (!pte_cont(__ptep_get(ptep)))
- return ptep_clear_flush(vma, addr, ptep);
-
- ncontig = find_num_contig(mm, addr, ptep, &pgsize);
- return get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
-}
-
static int __init hugetlbpage_init(void)
{
if (pud_sect_supported())
diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
index 4c692dd82779..63c7e4fa342a 100644
--- a/arch/riscv/include/asm/hugetlb.h
+++ b/arch/riscv/include/asm/hugetlb.h
@@ -20,14 +20,9 @@ bool arch_hugetlb_migration_supported(struct hstate *h);
#endif
#ifdef CONFIG_RISCV_ISA_SVNAPOT
-#define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
- unsigned long addr, pte_t *ptep);
-
pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags);
#define arch_make_huge_pte arch_make_huge_pte
-
-#endif /*CONFIG_RISCV_ISA_SVNAPOT*/
+#endif /* CONFIG_RISCV_ISA_SVNAPOT */
#include <asm-generic/hugetlb.h>
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index 8963a4e77742..ea1ae3a43d45 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -121,42 +121,6 @@ unsigned long hugetlb_mask_last_page(struct hstate *h)
return 0UL;
}
-static pte_t get_clear_contig(struct mm_struct *mm,
- unsigned long addr,
- pte_t *ptep,
- unsigned long pte_num)
-{
- pte_t orig_pte = ptep_get(ptep);
- unsigned long i;
-
- for (i = 0; i < pte_num; i++, addr += PAGE_SIZE, ptep++) {
- pte_t pte = ptep_get_and_clear(mm, addr, ptep);
-
- if (pte_dirty(pte))
- orig_pte = pte_mkdirty(orig_pte);
-
- if (pte_young(pte))
- orig_pte = pte_mkyoung(orig_pte);
- }
-
- return orig_pte;
-}
-
-static pte_t get_clear_contig_flush(struct mm_struct *mm,
- unsigned long addr,
- pte_t *ptep,
- unsigned long pte_num)
-{
- pte_t orig_pte = get_clear_contig(mm, addr, ptep, pte_num);
- struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
- bool valid = !pte_none(orig_pte);
-
- if (valid)
- flush_tlb_range(&vma, addr, addr + (PAGE_SIZE * pte_num));
-
- return orig_pte;
-}
-
pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags)
{
unsigned long order;
@@ -173,21 +137,6 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags)
return entry;
}
-pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
- unsigned long addr,
- pte_t *ptep)
-{
- pte_t pte = ptep_get(ptep);
- int pte_num;
-
- if (!pte_napot(pte))
- return ptep_clear_flush(vma, addr, ptep);
-
- pte_num = arch_contpte_get_num_contig(vma->vm_mm, addr, ptep, 0, NULL);
-
- return get_clear_contig_flush(vma->vm_mm, addr, ptep, pte_num);
-}
-
static bool is_napot_size(unsigned long size)
{
unsigned long order;
diff --git a/include/linux/hugetlb_contpte.h b/include/linux/hugetlb_contpte.h
index 02bce0ed93d8..911b9cd4aa4d 100644
--- a/include/linux/hugetlb_contpte.h
+++ b/include/linux/hugetlb_contpte.h
@@ -31,4 +31,8 @@ extern int huge_ptep_set_access_flags(struct vm_area_struct *vma,
extern void huge_ptep_set_wrprotect(struct mm_struct *mm,
unsigned long addr, pte_t *ptep);
+#define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
+extern pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep);
+
#endif /* _LINUX_HUGETLB_CONTPTE_H */
diff --git a/mm/hugetlb_contpte.c b/mm/hugetlb_contpte.c
index de505350ef48..d27c7599ce74 100644
--- a/mm/hugetlb_contpte.c
+++ b/mm/hugetlb_contpte.c
@@ -27,6 +27,7 @@
* - huge_ptep_get_and_clear()
* - huge_ptep_set_access_flags()
* - huge_ptep_set_wrprotect()
+ * - huge_ptep_clear_flush()
*/
pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
@@ -254,3 +255,17 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
set_contptes(mm, addr, ptep, pte, ncontig, pgsize);
}
+
+pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ size_t pgsize;
+ int ncontig;
+
+ if (!pte_cont(__ptep_get(ptep)))
+ return ptep_clear_flush(vma, addr, ptep);
+
+ ncontig = arch_contpte_get_num_contig(mm, addr, ptep, 0, &pgsize);
+ return get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
+}
--
2.39.2
* Re: [PATCH v4 2/9] riscv: Restore the pfn in a NAPOT pte when manipulated by core mm code
2025-01-27 9:35 ` [PATCH v4 2/9] riscv: Restore the pfn in a NAPOT pte when manipulated by core mm code Alexandre Ghiti
@ 2025-01-27 13:51 ` Matthew Wilcox
2025-02-14 12:39 ` Alexandre Ghiti
0 siblings, 1 reply; 13+ messages in thread
From: Matthew Wilcox @ 2025-01-27 13:51 UTC (permalink / raw)
To: Alexandre Ghiti
Cc: Catalin Marinas, Will Deacon, Ryan Roberts, Mark Rutland,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton,
linux-arm-kernel, linux-kernel, linux-riscv, linux-mm
On Mon, Jan 27, 2025 at 10:35:23AM +0100, Alexandre Ghiti wrote:
> +#ifdef CONFIG_RISCV_ISA_SVNAPOT
> +static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
> + pte_t *ptep, pte_t pteval, unsigned int nr)
> +{
> + if (unlikely(pte_valid_napot(pteval))) {
> + unsigned int order = ilog2(nr);
> +
> + if (!is_napot_order(order)) {
> + /*
> + * Something's weird, we are given a NAPOT pte but the
No, nothing is weird. This can happen under a lot of different
circumstances. For example, one might mmap() part of a file and the
folio containing the data is only partially mapped. The filesystem /
page cache might choose to use a folio order that isn't one of your
magic hardware orders.
> + * size of the mapping is not a known NAPOT mapping
> + * size, so clear the NAPOT bit and map this without
> + * NAPOT support: core mm only manipulates pte with the
> + * real pfn so we know the pte is valid without the N
> + * bit.
> + */
> + pr_err("Incorrect NAPOT mapping, resetting.\n");
> + pteval = pte_clear_napot(pteval);
> + } else {
> + /*
> + * NAPOT ptes that arrive here only have the N bit set
> + * and their pfn does not contain the mapping size, so
> + * set that here.
> + */
> + pteval = pte_mknapot(pteval, order);
You're assuming that pteval is aligned to the order that you've
calculated, and again that's not true. For example, the user may have
called mmap() on range 0x21000-0x40000 of a file which is covered by
a 128kB folio. You'll be called with a pteval pointing to 0x21000 and
calculate that you can put a 64kB entry there ... no.
I'd suggest you do some testing with fstests and xfs as your underlying
filesystem. It should catch these kinds of mistakes.
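To make the alignment point above concrete, here is a minimal sketch of the
kind of check such an example calls for; is_napot_order() and pte_pfn() are
the helpers used in the patch under review, while napot_promotion_ok() and
its exact shape are assumptions made purely for illustration:
/*
 * Illustration only: a NAPOT entry of the given order is legal only when
 * both the virtual address and the pfn are naturally aligned to that
 * order; otherwise the range has to be mapped with regular ptes.
 */
static inline bool napot_promotion_ok(unsigned long addr, pte_t pteval,
                                      unsigned int order)
{
        unsigned long nr = 1UL << order;

        if (!is_napot_order(order))
                return false;

        /* an mmap() of 0x21000-0x40000 backed by a 128kB folio fails here */
        if ((addr >> PAGE_SHIFT) & (nr - 1))
                return false;

        /* ... and so would a pteval whose pfn is not order-aligned */
        if (pte_pfn(pteval) & (nr - 1))
                return false;

        return true;
}
A set_ptes() along the lines of this patch could then clear the napot bit
whenever such a check fails, instead of assuming that ilog2(nr) being a
napot order is enough.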
* Re: [PATCH v4 2/9] riscv: Restore the pfn in a NAPOT pte when manipulated by core mm code
2025-01-27 13:51 ` Matthew Wilcox
@ 2025-02-14 12:39 ` Alexandre Ghiti
2025-02-24 14:34 ` Alexandre Ghiti
0 siblings, 1 reply; 13+ messages in thread
From: Alexandre Ghiti @ 2025-02-14 12:39 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Catalin Marinas, Will Deacon, Ryan Roberts, Mark Rutland,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton,
linux-arm-kernel, linux-kernel, linux-riscv, linux-mm
Hi Matthew,
Sorry for the very late reply, the flu hit me!
On Mon, Jan 27, 2025 at 2:51 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Mon, Jan 27, 2025 at 10:35:23AM +0100, Alexandre Ghiti wrote:
> > +#ifdef CONFIG_RISCV_ISA_SVNAPOT
> > +static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
> > + pte_t *ptep, pte_t pteval, unsigned int nr)
> > +{
> > + if (unlikely(pte_valid_napot(pteval))) {
> > + unsigned int order = ilog2(nr);
> > +
> > + if (!is_napot_order(order)) {
> > + /*
> > + * Something's weird, we are given a NAPOT pte but the
>
> No, nothing is weird. This can happen under a lot of different
> circumstances. For example, one might mmap() part of a file and the
> folio containing the data is only partially mapped.
I don't see how/when we would mark a PTE as napot if we mmap an
address that is not aligned to a napot mapping size, or a region whose
size is not a napot mapping size.
> The filesystem /
> page cache might choose to use a folio order that isn't one of your
> magic hardware orders.
>
> > + * size of the mapping is not a known NAPOT mapping
> > + * size, so clear the NAPOT bit and map this without
> > + * NAPOT support: core mm only manipulates pte with the
> > + * real pfn so we know the pte is valid without the N
> > + * bit.
> > + */
> > + pr_err("Incorrect NAPOT mapping, resetting.\n");
> > + pteval = pte_clear_napot(pteval);
> > + } else {
> > + /*
> > + * NAPOT ptes that arrive here only have the N bit set
> > + * and their pfn does not contain the mapping size, so
> > + * set that here.
> > + */
> > + pteval = pte_mknapot(pteval, order);
>
> You're assuming that pteval is aligned to the order that you've
> calculated, and again that's not true. For example, the user may have
> called mmap() on range 0x21000-0x40000 of a file which is covered by
> a 128kB folio. You'll be called with a pteval pointing to 0x21000 and
> calculate that you can put a 64kB entry there ... no.
Yes, I agree with this; then we have to go through the list of ptes
and check whether, inside the region we are currently setting, some
subregions correspond to a napot mapping.
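A rough sketch of what such a walk could look like is below; set_pte() and
pte_pfn() come from the riscv headers, pte_mknapot() from this series, and
pte_advance_pfn() from <linux/pgtable.h>, while the function itself, its
name and the hard-coded 64kB order are assumptions made only for
illustration (page_table_check and flush details are ignored):
/*
 * Illustration only, not code from this series: walk the range given to
 * set_ptes() and use a napot mapping only for the subregions that are
 * fully covered and naturally aligned, both in virtual address and in
 * pfn; everything else is mapped with regular ptes.
 */
static void set_ptes_napot_split(struct mm_struct *mm, unsigned long addr,
                                 pte_t *ptep, pte_t pteval, unsigned int nr)
{
        const unsigned int order = 4;           /* 64kB with 4kB base pages */
        const unsigned long step = 1UL << order;

        while (nr) {
                unsigned long pfn = pte_pfn(pteval);
                bool aligned = !((addr >> PAGE_SHIFT) & (step - 1)) &&
                               !(pfn & (step - 1));

                if (nr >= step && aligned) {
                        /* every pte of a napot group carries the same value */
                        pte_t napot = pte_mknapot(pteval, order);
                        unsigned long i;

                        for (i = 0; i < step; i++)
                                set_pte(ptep + i, napot);

                        addr += step * PAGE_SIZE;
                        ptep += step;
                        pteval = pte_advance_pfn(pteval, step);
                        nr -= step;
                } else {
                        set_pte(ptep, pteval);
                        addr += PAGE_SIZE;
                        ptep++;
                        pteval = pte_advance_pfn(pteval, 1);
                        nr--;
                }
        }
}
This is roughly the shape of the split the arm64 contpte code performs for
its contiguous order; it is only meant to show the idea, not to be a
drop-in implementation.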
Thanks for your feedback,
Alex
>
> I'd suggest you do some testing with fstests and xfs as your underlying
> filesystem. It should catch these kinds of mistakes.
* Re: [PATCH v4 2/9] riscv: Restore the pfn in a NAPOT pte when manipulated by core mm code
2025-02-14 12:39 ` Alexandre Ghiti
@ 2025-02-24 14:34 ` Alexandre Ghiti
0 siblings, 0 replies; 13+ messages in thread
From: Alexandre Ghiti @ 2025-02-24 14:34 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Catalin Marinas, Will Deacon, Ryan Roberts, Mark Rutland,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton,
linux-arm-kernel, linux-kernel, linux-riscv, linux-mm
Hi Matthew,
On Fri, Feb 14, 2025 at 1:39 PM Alexandre Ghiti <alexghiti@rivosinc.com> wrote:
>
> Hi Matthew,
>
> Sorry for the very late reply, the flu hit me!
>
> On Mon, Jan 27, 2025 at 2:51 PM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Mon, Jan 27, 2025 at 10:35:23AM +0100, Alexandre Ghiti wrote:
> > > +#ifdef CONFIG_RISCV_ISA_SVNAPOT
> > > +static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
> > > + pte_t *ptep, pte_t pteval, unsigned int nr)
> > > +{
> > > + if (unlikely(pte_valid_napot(pteval))) {
> > > + unsigned int order = ilog2(nr);
> > > +
> > > + if (!is_napot_order(order)) {
> > > + /*
> > > + * Something's weird, we are given a NAPOT pte but the
> >
> > No, nothing is weird. This can happen under a lot of different
> > circumstances. For example, one might mmap() part of a file and the
> > folio containing the data is only partially mapped.
>
> I don't see how/when we would mark a PTE as napot if we mmap an
> address that is not aligned to a napot mapping size, or a region whose
> size is not a napot mapping size.
>
> > The filesystem /
> > page cache might choose to use a folio order that isn't one of your
> > magic hardware orders.
> >
> > > + * size of the mapping is not a known NAPOT mapping
> > > + * size, so clear the NAPOT bit and map this without
> > > + * NAPOT support: core mm only manipulates pte with the
> > > + * real pfn so we know the pte is valid without the N
> > > + * bit.
> > > + */
> > > + pr_err("Incorrect NAPOT mapping, resetting.\n");
> > > + pteval = pte_clear_napot(pteval);
> > > + } else {
> > > + /*
> > > + * NAPOT ptes that arrive here only have the N bit set
> > > + * and their pfn does not contain the mapping size, so
> > > + * set that here.
> > > + */
> > > + pteval = pte_mknapot(pteval, order);
> >
> > You're assuming that pteval is aligned to the order that you've
> > calculated, and again that's not true. For example, the user may have
> > called mmap() on range 0x21000-0x40000 of a file which is covered by
> > a 128kB folio. You'll be called with a pteval pointing to 0x21000 and
> > calculate that you can put a 64kB entry there ... no.
>
> Yes, I agree with this; then we have to go through the list of ptes
> and check whether, inside the region we are currently setting, some
> subregions correspond to a napot mapping.
So I looked at that and I think we are safe with the implementation in
this patch because:
- this patchset only deals with hugetlb, which cannot be partially
mapped (right?)
- when we add support for THP (upcoming series), we'll use the arm64
set_ptes() implementation, which splits the region to map using the
contpte mapping size
(https://elixir.bootlin.com/linux/v6.13.4/source/arch/arm64/mm/contpte.c#L268),
so we can't mark an unaligned region with the contpte bit.
Let me know if I missed something,
Thanks again,
Alex
>
> Thanks for your feedback,
>
> Alex
>
>
> >
> > I'd suggest you do some testing with fstests and xfs as your underlying
> > filesystem. It should catch these kinds of mistakes.
Thread overview: 13+ messages
2025-01-27 9:35 [PATCH v4 0/9] Merge arm64/riscv hugetlbfs contpte support Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 1/9] riscv: Safely remove huge_pte_offset() when manipulating NAPOT ptes Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 2/9] riscv: Restore the pfn in a NAPOT pte when manipulated by core mm code Alexandre Ghiti
2025-01-27 13:51 ` Matthew Wilcox
2025-02-14 12:39 ` Alexandre Ghiti
2025-02-24 14:34 ` Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 3/9] mm: Use common huge_ptep_get() function for riscv/arm64 Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 4/9] mm: Use common set_huge_pte_at() " Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 5/9] mm: Use common huge_pte_clear() " Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 6/9] mm: Use common huge_ptep_get_and_clear() " Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 7/9] mm: Use common huge_ptep_set_access_flags() " Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 8/9] mm: Use common huge_ptep_set_wrprotect() " Alexandre Ghiti
2025-01-27 9:35 ` [PATCH v4 9/9] mm: Use common huge_ptep_clear_flush() " Alexandre Ghiti