* [PATCH v9 0/7] Support page table check
@ 2023-11-30 2:53 Rohan McLure
2023-11-30 2:53 ` [PATCH v9 1/7] powerpc: mm: Replace p{u,m,4}d_is_leaf with p{u,m,4}d_leaf Rohan McLure
` (6 more replies)
0 siblings, 7 replies; 15+ messages in thread
From: Rohan McLure @ 2023-11-30 2:53 UTC
To: linuxppc-dev; +Cc: Rohan McLure
Support the page table check sanitiser on all PowerPC platforms. This
sanitiser works by serialising assignments, reassignments and clears of
page table entries at each level in order to ensure that anonymous
mappings have at most one writable consumer, and likewise that
file-backed mappings are not simultaneously also anonymous mappings.
In order to support this infrastructure, a number of stubs must be
defined for all powerpc platforms. Additionally, set_pte_at() and
set_pte() are separated, to allow for internal, uninstrumented mappings.
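As a rough sketch of the resulting shape (illustrative only, not the
literal patch contents), the API-facing helpers gain the sanitiser hook
while the double-underscore variant stays raw for internal use:

	/* Instrumented, API-facing: */
	void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
		      pte_t pte, unsigned int nr)
	{
		page_table_check_ptes_set(mm, ptep, pte, nr);
		/* ... then write the nr entries via __set_pte_at() ... */
	}
	#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

	/* Uninstrumented, for early-boot / kernel-internal mappings: */
	__set_pte_at(mm, addr, ptep, pte, 0);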
v9:
* Adapt to the new set_ptes() API, using __set_pte_at() where we must
avoid instrumentation.
* Use the logic of *_access_permitted() for implementing
*_user_accessible_page(), which are required routines for page table
check.
* Even though we no longer need p{m,u,4}d_leaf(), still provide default
implementations of these to assist in refactoring out the extant
p{m,u,4}d_is_leaf().
* Add p{m,u}d_pte() stubs where asm-generic does not provide them, as
page table check wants all *_user_accessible_page() variants, and we
would like to default implement the variants in terms of
pte_user_accessible_page().
* Avoid the ugly pmdp_collapse_flush() macro nonsense! Just instrument
its constituent calls instead for radix and hash.
v8:
* Fix linux/page_table_check.h include in asm/pgtable.h breaking
32-bit.
Link: https://lore.kernel.org/linuxppc-dev/20230215231153.2147454-1-rmclure@linux.ibm.com/
v7:
* Remove use of extern in set_pte prototypes
* Clean up pmdp_collapse_flush macro
* Replace set_pte_at with static inline function
* Fix commit message for patch 7
Link: https://lore.kernel.org/linuxppc-dev/20230215020155.1969194-1-rmclure@linux.ibm.com/
v6:
* Support huge pages and p{m,u}d accounting.
* Remove instrumentation from set_pte for kernel-internal pages.
* 64s: Implement pmdp_collapse_flush in terms of __pmdp_collapse_flush
as access to the mm_struct * is required.
Link: https://lore.kernel.org/linuxppc-dev/20230214015939.1853438-1-rmclure@linux.ibm.com/
v5:
Link: https://lore.kernel.org/linuxppc-dev/20221118002146.25979-1-rmclure@linux.ibm.com/
Rohan McLure (7):
powerpc: mm: Replace p{u,m,4}d_is_leaf with p{u,m,4}d_leaf
powerpc: mm: Implement p{m,u,4}d_leaf on all platforms
powerpc: mm: Add common pud_pfn stub for all platforms
powerpc: mm: Default p{m,u}d_pte implementations
powerpc: mm: Implement *_user_accessible_page() for ptes
powerpc: mm: Use __set_pte_at() for early-boot / internal usages
powerpc: mm: Support page table check
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/book3s/32/pgtable.h | 17 ++++-
arch/powerpc/include/asm/book3s/64/pgtable.h | 57 ++++++++++++----
arch/powerpc/include/asm/nohash/64/pgtable.h | 8 +++
arch/powerpc/include/asm/nohash/pgtable.h | 5 ++
arch/powerpc/include/asm/pgtable.h | 68 ++++++++++++++------
arch/powerpc/kvm/book3s_64_mmu_radix.c | 12 ++--
arch/powerpc/mm/book3s64/hash_pgtable.c | 6 +-
arch/powerpc/mm/book3s64/pgtable.c | 17 +++--
arch/powerpc/mm/book3s64/radix_pgtable.c | 27 ++++----
arch/powerpc/mm/nohash/book3e_pgtable.c | 2 +-
arch/powerpc/mm/pgtable.c | 10 ++-
arch/powerpc/mm/pgtable_32.c | 2 +-
arch/powerpc/mm/pgtable_64.c | 6 +-
arch/powerpc/xmon/xmon.c | 6 +-
15 files changed, 173 insertions(+), 71 deletions(-)
--
2.43.0
* [PATCH v9 1/7] powerpc: mm: Replace p{u,m,4}d_is_leaf with p{u,m,4}d_leaf
2023-11-30 2:53 [PATCH v9 0/7] Support page table check Rohan McLure
@ 2023-11-30 2:53 ` Rohan McLure
2023-11-30 2:53 ` [PATCH v9 2/7] powerpc: mm: Implement p{m,u,4}d_leaf on all platforms Rohan McLure
` (5 subsequent siblings)
6 siblings, 0 replies; 15+ messages in thread
From: Rohan McLure @ 2023-11-30 2:53 UTC
To: linuxppc-dev; +Cc: Rohan McLure
Replace occurrences of p{u,m,4}d_is_leaf with p{u,m,4}d_leaf, as the
latter is the name given to checking that a higher-level entry in
multi-level paging contains a page translation entry (pte) throughout
all other archs.
A future patch will implement p{u,m,4}d_leaf stubs on all platforms so
that they may be referenced in generic code.
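For illustration, a typical use in a page table walk looks like the
sketch below (names as in the diffs that follow):

	if (pmd_leaf(*pmdp)) {
		/* The PMD itself maps a huge page; there is no PTE
		 * table below it to descend into. */
		ptep = pmdp_ptep(pmdp);
	} else {
		ptep = pte_offset_kernel(pmdp, ea);
	}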
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
v9: No longer required in order to implement page table check, just a
refactor.
---
arch/powerpc/kvm/book3s_64_mmu_radix.c | 12 ++++++------
arch/powerpc/mm/book3s64/radix_pgtable.c | 14 +++++++-------
arch/powerpc/mm/pgtable.c | 6 +++---
arch/powerpc/mm/pgtable_64.c | 6 +++---
arch/powerpc/xmon/xmon.c | 6 +++---
5 files changed, 22 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 175a8eb2681f..fdb178fe754c 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -498,7 +498,7 @@ static void kvmppc_unmap_free_pmd(struct kvm *kvm, pmd_t *pmd, bool full,
for (im = 0; im < PTRS_PER_PMD; ++im, ++p) {
if (!pmd_present(*p))
continue;
- if (pmd_is_leaf(*p)) {
+ if (pmd_leaf(*p)) {
if (full) {
pmd_clear(p);
} else {
@@ -527,7 +527,7 @@ static void kvmppc_unmap_free_pud(struct kvm *kvm, pud_t *pud,
for (iu = 0; iu < PTRS_PER_PUD; ++iu, ++p) {
if (!pud_present(*p))
continue;
- if (pud_is_leaf(*p)) {
+ if (pud_leaf(*p)) {
pud_clear(p);
} else {
pmd_t *pmd;
@@ -630,12 +630,12 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
new_pud = pud_alloc_one(kvm->mm, gpa);
pmd = NULL;
- if (pud && pud_present(*pud) && !pud_is_leaf(*pud))
+ if (pud && pud_present(*pud) && !pud_leaf(*pud))
pmd = pmd_offset(pud, gpa);
else if (level <= 1)
new_pmd = kvmppc_pmd_alloc();
- if (level == 0 && !(pmd && pmd_present(*pmd) && !pmd_is_leaf(*pmd)))
+ if (level == 0 && !(pmd && pmd_present(*pmd) && !pmd_leaf(*pmd)))
new_ptep = kvmppc_pte_alloc();
/* Check if we might have been invalidated; let the guest retry if so */
@@ -653,7 +653,7 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
new_pud = NULL;
}
pud = pud_offset(p4d, gpa);
- if (pud_is_leaf(*pud)) {
+ if (pud_leaf(*pud)) {
unsigned long hgpa = gpa & PUD_MASK;
/* Check if we raced and someone else has set the same thing */
@@ -704,7 +704,7 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
new_pmd = NULL;
}
pmd = pmd_offset(pud, gpa);
- if (pmd_is_leaf(*pmd)) {
+ if (pmd_leaf(*pmd)) {
unsigned long lgpa = gpa & PMD_MASK;
/* Check if we raced and someone else has set the same thing */
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index c6a4ac766b2b..1f8db10693e3 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -204,14 +204,14 @@ static void radix__change_memory_range(unsigned long start, unsigned long end,
pudp = pud_alloc(&init_mm, p4dp, idx);
if (!pudp)
continue;
- if (pud_is_leaf(*pudp)) {
+ if (pud_leaf(*pudp)) {
ptep = (pte_t *)pudp;
goto update_the_pte;
}
pmdp = pmd_alloc(&init_mm, pudp, idx);
if (!pmdp)
continue;
- if (pmd_is_leaf(*pmdp)) {
+ if (pmd_leaf(*pmdp)) {
ptep = pmdp_ptep(pmdp);
goto update_the_pte;
}
@@ -767,7 +767,7 @@ static void __meminit remove_pmd_table(pmd_t *pmd_start, unsigned long addr,
if (!pmd_present(*pmd))
continue;
- if (pmd_is_leaf(*pmd)) {
+ if (pmd_leaf(*pmd)) {
if (IS_ALIGNED(addr, PMD_SIZE) &&
IS_ALIGNED(next, PMD_SIZE)) {
if (!direct)
@@ -807,7 +807,7 @@ static void __meminit remove_pud_table(pud_t *pud_start, unsigned long addr,
if (!pud_present(*pud))
continue;
- if (pud_is_leaf(*pud)) {
+ if (pud_leaf(*pud)) {
if (!IS_ALIGNED(addr, PUD_SIZE) ||
!IS_ALIGNED(next, PUD_SIZE)) {
WARN_ONCE(1, "%s: unaligned range\n", __func__);
@@ -845,7 +845,7 @@ remove_pagetable(unsigned long start, unsigned long end, bool direct,
if (!p4d_present(*p4d))
continue;
- if (p4d_is_leaf(*p4d)) {
+ if (p4d_leaf(*p4d)) {
if (!IS_ALIGNED(addr, P4D_SIZE) ||
!IS_ALIGNED(next, P4D_SIZE)) {
WARN_ONCE(1, "%s: unaligned range\n", __func__);
@@ -1554,7 +1554,7 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
int pud_clear_huge(pud_t *pud)
{
- if (pud_is_leaf(*pud)) {
+ if (pud_leaf(*pud)) {
pud_clear(pud);
return 1;
}
@@ -1601,7 +1601,7 @@ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
int pmd_clear_huge(pmd_t *pmd)
{
- if (pmd_is_leaf(*pmd)) {
+ if (pmd_leaf(*pmd)) {
pmd_clear(pmd);
return 1;
}
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index a04ae4449a02..e8e0289d7ab0 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -413,7 +413,7 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
if (p4d_none(p4d))
return NULL;
- if (p4d_is_leaf(p4d)) {
+ if (p4d_leaf(p4d)) {
ret_pte = (pte_t *)p4dp;
goto out;
}
@@ -435,7 +435,7 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
if (pud_none(pud))
return NULL;
- if (pud_is_leaf(pud)) {
+ if (pud_leaf(pud)) {
ret_pte = (pte_t *)pudp;
goto out;
}
@@ -474,7 +474,7 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
goto out;
}
- if (pmd_is_leaf(pmd)) {
+ if (pmd_leaf(pmd)) {
ret_pte = (pte_t *)pmdp;
goto out;
}
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index 5ac1fd30341b..0604c80dae66 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -100,7 +100,7 @@ EXPORT_SYMBOL(__pte_frag_size_shift);
/* 4 level page table */
struct page *p4d_page(p4d_t p4d)
{
- if (p4d_is_leaf(p4d)) {
+ if (p4d_leaf(p4d)) {
if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
VM_WARN_ON(!p4d_huge(p4d));
return pte_page(p4d_pte(p4d));
@@ -111,7 +111,7 @@ struct page *p4d_page(p4d_t p4d)
struct page *pud_page(pud_t pud)
{
- if (pud_is_leaf(pud)) {
+ if (pud_leaf(pud)) {
if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
VM_WARN_ON(!pud_huge(pud));
return pte_page(pud_pte(pud));
@@ -125,7 +125,7 @@ struct page *pud_page(pud_t pud)
*/
struct page *pmd_page(pmd_t pmd)
{
- if (pmd_is_leaf(pmd)) {
+ if (pmd_leaf(pmd)) {
/*
* vmalloc_to_page may be called on any vmap address (not only
* vmalloc), and it uses pmd_page() etc., when huge vmap is
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index b3b94cd37713..9669c9925225 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -3342,7 +3342,7 @@ static void show_pte(unsigned long addr)
return;
}
- if (p4d_is_leaf(*p4dp)) {
+ if (p4d_leaf(*p4dp)) {
format_pte(p4dp, p4d_val(*p4dp));
return;
}
@@ -3356,7 +3356,7 @@ static void show_pte(unsigned long addr)
return;
}
- if (pud_is_leaf(*pudp)) {
+ if (pud_leaf(*pudp)) {
format_pte(pudp, pud_val(*pudp));
return;
}
@@ -3370,7 +3370,7 @@ static void show_pte(unsigned long addr)
return;
}
- if (pmd_is_leaf(*pmdp)) {
+ if (pmd_leaf(*pmdp)) {
format_pte(pmdp, pmd_val(*pmdp));
return;
}
--
2.43.0
* [PATCH v9 2/7] powerpc: mm: Implement p{m,u,4}d_leaf on all platforms
2023-11-30 2:53 [PATCH v9 0/7] Support page table check Rohan McLure
2023-11-30 2:53 ` [PATCH v9 1/7] powerpc: mm: Replace p{u,m,4}d_is_leaf with p{u,m,4}d_leaf Rohan McLure
@ 2023-11-30 2:53 ` Rohan McLure
2023-11-30 6:36 ` Christophe Leroy
2023-11-30 2:53 ` [PATCH v9 3/7] powerpc: mm: Add common pud_pfn stub for " Rohan McLure
` (4 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Rohan McLure @ 2023-11-30 2:53 UTC
To: linuxppc-dev; +Cc: Rohan McLure
The check that a higher-level entry in multi-level paging contains a page
translation entry (pte) is performed by p{m,u,4}d_leaf stubs, which may
be specialised for each choice of mmu. In a prior commit, we replaced
uses of the catch-all stubs p{m,u,4}d_is_leaf with p{m,u,4}d_leaf.
Replace the catch-all stub definitions for p{m,u,4}d_is_leaf with
definitions for p{m,u,4}d_leaf. A future patch will assume that
p{m,u,4}d_leaf is defined on all platforms.
In particular, implement pud_leaf for Book3E-64, pmd_leaf for all Book3E
and Book3S-64 platforms, with a catch-all definition for p4d_leaf.
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
v9: No longer required in order to implement page table check, just
refactoring.
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 5 +++++
arch/powerpc/include/asm/book3s/64/pgtable.h | 10 ++++-----
arch/powerpc/include/asm/nohash/64/pgtable.h | 6 ++++++
arch/powerpc/include/asm/pgtable.h | 22 ++------------------
4 files changed, 17 insertions(+), 26 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index 52971ee30717..9cc95a61d2a6 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -223,6 +223,11 @@ static inline void pmd_clear(pmd_t *pmdp)
*pmdp = __pmd(0);
}
+#define pmd_leaf pmd_leaf
+static inline bool pmd_leaf(pmd_t pmd)
+{
+ return false;
+}
/*
* When flushing the tlb entry for a page, we also need to flush the hash
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index cb77eddca54b..8fdb7667c509 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1459,16 +1459,14 @@ static inline bool is_pte_rw_upgrade(unsigned long old_val, unsigned long new_va
/*
* Like pmd_huge() and pmd_large(), but works regardless of config options
*/
-#define pmd_is_leaf pmd_is_leaf
-#define pmd_leaf pmd_is_leaf
-static inline bool pmd_is_leaf(pmd_t pmd)
+#define pmd_leaf pmd_leaf
+static inline bool pmd_leaf(pmd_t pmd)
{
return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
}
-#define pud_is_leaf pud_is_leaf
-#define pud_leaf pud_is_leaf
-static inline bool pud_is_leaf(pud_t pud)
+#define pud_leaf pud_leaf
+static inline bool pud_leaf(pud_t pud)
{
return !!(pud_raw(pud) & cpu_to_be64(_PAGE_PTE));
}
diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
index 2202c78730e8..f58cbebde26e 100644
--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
@@ -116,6 +116,12 @@ static inline void pud_clear(pud_t *pudp)
*pudp = __pud(0);
}
+#define pud_leaf pud_leaf
+static inline bool pud_leaf(pud_t pud)
+{
+ return false;
+}
+
#define pud_none(pud) (!pud_val(pud))
#define pud_bad(pud) (!is_kernel_addr(pud_val(pud)) \
|| (pud_val(pud) & PUD_BAD_BITS))
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 9224f23065ff..db0231afca9d 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -180,29 +180,11 @@ static inline void pte_frag_set(mm_context_t *ctx, void *p)
}
#endif
-#ifndef pmd_is_leaf
-#define pmd_is_leaf pmd_is_leaf
-static inline bool pmd_is_leaf(pmd_t pmd)
+#define p4d_leaf p4d_leaf
+static inline bool p4d_leaf(p4d_t p4d)
{
return false;
}
-#endif
-
-#ifndef pud_is_leaf
-#define pud_is_leaf pud_is_leaf
-static inline bool pud_is_leaf(pud_t pud)
-{
- return false;
-}
-#endif
-
-#ifndef p4d_is_leaf
-#define p4d_is_leaf p4d_is_leaf
-static inline bool p4d_is_leaf(p4d_t p4d)
-{
- return false;
-}
-#endif
#define pmd_pgtable pmd_pgtable
static inline pgtable_t pmd_pgtable(pmd_t pmd)
--
2.43.0
* [PATCH v9 3/7] powerpc: mm: Add common pud_pfn stub for all platforms
2023-11-30 2:53 [PATCH v9 0/7] Support page table check Rohan McLure
2023-11-30 2:53 ` [PATCH v9 1/7] powerpc: mm: Replace p{u,m,4}d_is_leaf with p{u,m,4}d_leaf Rohan McLure
2023-11-30 2:53 ` [PATCH v9 2/7] powerpc: mm: Implement p{m,u,4}d_leaf on all platforms Rohan McLure
@ 2023-11-30 2:53 ` Rohan McLure
2023-11-30 6:43 ` Christophe Leroy
2023-11-30 2:53 ` [PATCH v9 4/7] powerpc: mm: Default p{m,u}d_pte implementations Rohan McLure
` (3 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Rohan McLure @ 2023-11-30 2:53 UTC
To: linuxppc-dev; +Cc: Rohan McLure
Prior to this commit, pud_pfn was implemented with BUILD_BUG as the inline
function for 64-bit Book3S systems but is never included, as its
invocations in generic code are guarded by calls to pud_devmap which return
zero on such systems. A future patch will provide support for page table
checks, the generic code for which depends on a pud_pfn stub being
implemented, even though the patch will not interact with puds directly.
Remove the 64-bit Book3S stub and define pud_pfn to warn on all
platforms. pud_pfn may be defined properly on a per-platform basis
should it grow real usages in future.
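For context, the generic consumer in mm/page_table_check.c (reproduced
from memory and simplified, so treat the exact shape as approximate)
only reaches pud_pfn() behind a pud_user_accessible_page() guard:

	void __page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp, pud_t pud)
	{
		if (&init_mm == mm)
			return;

		__page_table_check_pud_clear(mm, *pudp);
		if (pud_user_accessible_page(pud))
			page_table_check_set(pud_pfn(pud),
					     PUD_SIZE >> PAGE_SHIFT,
					     pud_write(pud));
	}

so the WARN_ONCE stub should never fire on platforms that do not map
user memory with puds.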
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
arch/powerpc/include/asm/pgtable.h | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index db0231afca9d..9c0f2151f08f 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -219,6 +219,20 @@ static inline bool arch_supports_memmap_on_memory(unsigned long vmemmap_size)
#endif /* CONFIG_PPC64 */
+/*
+ * Currently only consumed by page_table_check_pud_{set,clear}. Since clears
+ * and sets to page table entries at any level are done through
+ * page_table_check_pte_{set,clear}, provide stub implementation.
+ */
+#ifndef pud_pfn
+#define pud_pfn pud_pfn
+static inline int pud_pfn(pud_t pud)
+{
+ WARN_ONCE(1, "pud: platform does not use pud entries directly");
+ return 0;
+}
+#endif
+
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_PGTABLE_H */
--
2.43.0
* [PATCH v9 4/7] powerpc: mm: Default p{m,u}d_pte implementations
2023-11-30 2:53 [PATCH v9 0/7] Support page table check Rohan McLure
` (2 preceding siblings ...)
2023-11-30 2:53 ` [PATCH v9 3/7] powerpc: mm: Add common pud_pfn stub for " Rohan McLure
@ 2023-11-30 2:53 ` Rohan McLure
2023-11-30 7:35 ` Christophe Leroy
2023-11-30 2:53 ` [PATCH v9 5/7] powerpc: mm: Implement *_user_accessible_page() for ptes Rohan McLure
` (2 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Rohan McLure @ 2023-11-30 2:53 UTC
To: linuxppc-dev; +Cc: Rohan McLure
For 32-bit systems which have no usecases for p{m,u}d_pte() prior to
page table checking, implement default stubs.
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
v9: New patch
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 3 +++
arch/powerpc/include/asm/nohash/64/pgtable.h | 2 ++
arch/powerpc/include/asm/pgtable.h | 17 +++++++++++++++++
3 files changed, 22 insertions(+)
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 8fdb7667c509..2454174b26cb 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -887,6 +887,8 @@ static inline int pud_present(pud_t pud)
extern struct page *pud_page(pud_t pud);
extern struct page *pmd_page(pmd_t pmd);
+
+#define pud_pte pud_pte
static inline pte_t pud_pte(pud_t pud)
{
return __pte_raw(pud_raw(pud));
@@ -1043,6 +1045,7 @@ static inline void __kernel_map_pages(struct page *page, int numpages, int enabl
}
#endif
+#define pmd_pte pmd_pte
static inline pte_t pmd_pte(pmd_t pmd)
{
return __pte_raw(pmd_raw(pmd));
diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
index f58cbebde26e..09a34fe196ae 100644
--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
@@ -93,6 +93,7 @@ static inline void pmd_clear(pmd_t *pmdp)
*pmdp = __pmd(0);
}
+#define pmd_pte pmd_pte
static inline pte_t pmd_pte(pmd_t pmd)
{
return __pte(pmd_val(pmd));
@@ -134,6 +135,7 @@ static inline pmd_t *pud_pgtable(pud_t pud)
extern struct page *pud_page(pud_t pud);
+#define pud_pte pud_pte
static inline pte_t pud_pte(pud_t pud)
{
return __pte(pud_val(pud));
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 9c0f2151f08f..d7d0f47760d3 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -233,6 +233,23 @@ static inline int pud_pfn(pud_t pud)
}
#endif
+#ifndef pmd_pte
+#define pmd_pte pmd_pte
+static inline pte_t pmd_pte(pmd_t pmd)
+{
+ WARN_ONCE(1, "pmd: platform does not use pmd entries directly");
+ return __pte(pmd_val(pmd));
+}
+#endif
+
+#ifndef pud_pte
+#define pud_pte pud_pte
+static inline pte_t pud_pte(pud_t pud)
+{
+ WARN_ONCE(1, "pud: platform does not use pud entries directly");
+ return __pte(pud_val(pud));
+}
+#endif
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_PGTABLE_H */
--
2.43.0
* [PATCH v9 5/7] powerpc: mm: Implement *_user_accessible_page() for ptes
2023-11-30 2:53 [PATCH v9 0/7] Support page table check Rohan McLure
` (3 preceding siblings ...)
2023-11-30 2:53 ` [PATCH v9 4/7] powerpc: mm: Default p{m,u}d_pte implementations Rohan McLure
@ 2023-11-30 2:53 ` Rohan McLure
2023-11-30 7:01 ` Christophe Leroy
2023-11-30 2:53 ` [PATCH v9 6/7] powerpc: mm: Use __set_pte_at() for early-boot / internal usages Rohan McLure
2023-11-30 2:54 ` [PATCH v9 7/7] powerpc: mm: Support page table check Rohan McLure
6 siblings, 1 reply; 15+ messages in thread
From: Rohan McLure @ 2023-11-30 2:53 UTC
To: linuxppc-dev; +Cc: Rohan McLure
Page table checking depends on architectures providing an
implementation of p{te,md,ud}_user_accessible_page. With
refactorisations made on powerpc/mm, the pte_access_permitted() and
similar methods verify whether a userland page is accessible with the
required permissions.
Since page table checking is the only user of
p{te,md,ud}_user_accessible_page(), implement these for all platforms,
using some of the same preliminary checks taken by pte_access_permitted()
on that platform.
Since commit 8e9bd41e4ce1 ("powerpc/nohash: Replace pte_user() by pte_read()")
pte_user() is no longer required to be present on all platforms as it
may be equivalent to or implied by pte_read(). Hence implementations are
specialised.
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
v9: New implementation
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 5 +++++
arch/powerpc/include/asm/book3s/64/pgtable.h | 5 +++++
arch/powerpc/include/asm/nohash/pgtable.h | 5 +++++
arch/powerpc/include/asm/pgtable.h | 15 +++++++++++++++
4 files changed, 30 insertions(+)
diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index 9cc95a61d2a6..bd6f8cdd25aa 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -441,6 +441,11 @@ static inline bool pte_access_permitted(pte_t pte, bool write)
return true;
}
+static inline bool pte_user_accessible_page(pte_t pte)
+{
+ return pte_present(pte) && pte_read(pte);
+}
+
/* Conversion functions: convert a page and protection to a page entry,
* and a page entry and page directory to the page they refer to.
*
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 2454174b26cb..dd3e7b190ab7 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -544,6 +544,11 @@ static inline bool pte_access_permitted(pte_t pte, bool write)
return arch_pte_access_permitted(pte_val(pte), write, 0);
}
+static inline bool pte_user_accessible_page(pte_t pte)
+{
+ return pte_present(pte) && pte_user(pte) && pte_read(pte);
+}
+
/*
* Conversion functions: convert a page and protection to a page entry,
* and a page entry and page directory to the page they refer to.
diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
index 427db14292c9..33b4a4267f66 100644
--- a/arch/powerpc/include/asm/nohash/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/pgtable.h
@@ -213,6 +213,11 @@ static inline bool pte_access_permitted(pte_t pte, bool write)
return true;
}
+static inline bool pte_user_accessible_page(pte_t pte)
+{
+ return pte_present(pte) && pte_read(pte);
+}
+
/* Conversion functions: convert a page and protection to a page entry,
* and a page entry and page directory to the page they refer to.
*
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index d7d0f47760d3..661bf3afca37 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -250,6 +250,21 @@ static inline pte_t pud_pte(pud_t pud)
return __pte(pud_val(pud));
}
#endif
+
+static inline bool pmd_user_accessible_page(pmd_t pmd)
+{
+ pte_t pte = pmd_pte(pmd);
+
+ return pte_user_accessible_page(pte);
+}
+
+static inline bool pud_user_accessible_page(pud_t pud)
+{
+ pte_t pte = pud_pte(pud);
+
+ return pte_user_accessible_page(pte);
+}
+
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_PGTABLE_H */
--
2.43.0
* [PATCH v9 6/7] powerpc: mm: Use __set_pte_at() for early-boot / internal usages
2023-11-30 2:53 [PATCH v9 0/7] Support page table check Rohan McLure
` (4 preceding siblings ...)
2023-11-30 2:53 ` [PATCH v9 5/7] powerpc: mm: Implement *_user_accessible_page() for ptes Rohan McLure
@ 2023-11-30 2:53 ` Rohan McLure
2023-11-30 7:18 ` Christophe Leroy
2023-11-30 2:54 ` [PATCH v9 7/7] powerpc: mm: Support page table check Rohan McLure
6 siblings, 1 reply; 15+ messages in thread
From: Rohan McLure @ 2023-11-30 2:53 UTC
To: linuxppc-dev; +Cc: Rohan McLure
In the new set_ptes() API, set_pte_at() (a special case of set_ptes())
is intended to be instrumented by the page table check facility. There
are however several other routines that constitute the API for setting
page table entries, including set_pmd_at() among others. Such routines
are themselves implemented in terms of set_pte_at().
A future patch providing support for page table checking on powerpc
must take care to avoid duplicate calls to
page_table_check_p{te,md,ud}_set().
Cause API-facing routines that call set_pte_at() to instead call
__set_pte_at(), which will remain uninstrumented by page table check.
set_ptes() is itself implemented by calls to __set_pte_at(), so this
eliminates redundant code.
Also prefer __set_pte_at() in early-boot usages which should not be
instrumented.
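The intended layering after this patch, roughly (a sketch of the call
graph, not literal code):

	set_pte_at(mm, addr, ptep, pte)
	  -> set_ptes(mm, addr, ptep, pte, 1)	/* instrumented once */
	       -> __set_pte_at(mm, addr, ptep, pte, 0)
	set_pmd_at(mm, addr, pmdp, pmd)
	  -> __set_pte_at(...)			/* instrumented separately by
						   the page table check patch */

Routines converted here call __set_pte_at() directly and so remain
outside the instrumentation.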
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
v9: New patch
---
arch/powerpc/mm/book3s64/hash_pgtable.c | 2 +-
arch/powerpc/mm/book3s64/pgtable.c | 4 ++--
arch/powerpc/mm/book3s64/radix_pgtable.c | 10 +++++-----
arch/powerpc/mm/nohash/book3e_pgtable.c | 2 +-
arch/powerpc/mm/pgtable_32.c | 2 +-
5 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
index 988948d69bc1..ae52c8db45b7 100644
--- a/arch/powerpc/mm/book3s64/hash_pgtable.c
+++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
@@ -165,7 +165,7 @@ int hash__map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot)
ptep = pte_alloc_kernel(pmdp, ea);
if (!ptep)
return -ENOMEM;
- set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT, prot));
+ __set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT, prot), 0);
} else {
/*
* If the mm subsystem is not fully up, we cannot create a
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index be229290a6a7..9a0a2accb261 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -116,7 +116,7 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
WARN_ON(!(pmd_large(pmd)));
#endif
trace_hugepage_set_pmd(addr, pmd_val(pmd));
- return set_pte_at(mm, addr, pmdp_ptep(pmdp), pmd_pte(pmd));
+ return __set_pte_at(mm, addr, pmdp_ptep(pmdp), pmd_pte(pmd), 0);
}
void set_pud_at(struct mm_struct *mm, unsigned long addr,
@@ -539,7 +539,7 @@ void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
if (radix_enabled())
return radix__ptep_modify_prot_commit(vma, addr,
ptep, old_pte, pte);
- set_pte_at(vma->vm_mm, addr, ptep, pte);
+ __set_pte_at(vma->vm_mm, addr, ptep, pte, 0);
}
/*
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 1f8db10693e3..ae4a5f66ccd2 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -109,7 +109,7 @@ static int early_map_kernel_page(unsigned long ea, unsigned long pa,
ptep = pte_offset_kernel(pmdp, ea);
set_the_pte:
- set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags));
+ __set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags), 0);
asm volatile("ptesync": : :"memory");
return 0;
}
@@ -169,7 +169,7 @@ static int __map_kernel_page(unsigned long ea, unsigned long pa,
return -ENOMEM;
set_the_pte:
- set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags));
+ __set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags), 0);
asm volatile("ptesync": : :"memory");
return 0;
}
@@ -1536,7 +1536,7 @@ void radix__ptep_modify_prot_commit(struct vm_area_struct *vma,
(atomic_read(&mm->context.copros) > 0))
radix__flush_tlb_page(vma, addr);
- set_pte_at(mm, addr, ptep, pte);
+ __set_pte_at(mm, addr, ptep, pte, 0);
}
int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
@@ -1547,7 +1547,7 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
if (!radix_enabled())
return 0;
- set_pte_at(&init_mm, 0 /* radix unused */, ptep, new_pud);
+ __set_pte_at(&init_mm, 0 /* radix unused */, ptep, new_pud, 0);
return 1;
}
@@ -1594,7 +1594,7 @@ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
if (!radix_enabled())
return 0;
- set_pte_at(&init_mm, 0 /* radix unused */, ptep, new_pmd);
+ __set_pte_at(&init_mm, 0 /* radix unused */, ptep, new_pmd, 0);
return 1;
}
diff --git a/arch/powerpc/mm/nohash/book3e_pgtable.c b/arch/powerpc/mm/nohash/book3e_pgtable.c
index 1c5e4ecbebeb..4de91b54fd89 100644
--- a/arch/powerpc/mm/nohash/book3e_pgtable.c
+++ b/arch/powerpc/mm/nohash/book3e_pgtable.c
@@ -111,7 +111,7 @@ int __ref map_kernel_page(unsigned long ea, phys_addr_t pa, pgprot_t prot)
}
ptep = pte_offset_kernel(pmdp, ea);
}
- set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT, prot));
+ __set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT, prot), 0);
smp_wmb();
return 0;
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 5c02fd08d61e..4bf3ca6af7cd 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -89,7 +89,7 @@ int __ref map_kernel_page(unsigned long va, phys_addr_t pa, pgprot_t prot)
* hash table
*/
BUG_ON((pte_present(*pg) | pte_hashpte(*pg)) && pgprot_val(prot));
- set_pte_at(&init_mm, va, pg, pfn_pte(pa >> PAGE_SHIFT, prot));
+ __set_pte_at(&init_mm, va, pg, pfn_pte(pa >> PAGE_SHIFT, prot), 0);
}
smp_wmb();
return err;
--
2.43.0
* [PATCH v9 7/7] powerpc: mm: Support page table check
2023-11-30 2:53 [PATCH v9 0/7] Support page table check Rohan McLure
` (5 preceding siblings ...)
2023-11-30 2:53 ` [PATCH v9 6/7] powerpc: mm: Use __set_pte_at() for early-boot / internal usages Rohan McLure
@ 2023-11-30 2:54 ` Rohan McLure
2023-11-30 7:28 ` Christophe Leroy
6 siblings, 1 reply; 15+ messages in thread
From: Rohan McLure @ 2023-11-30 2:54 UTC
To: linuxppc-dev; +Cc: Rohan McLure
On creation and clearing of a page table mapping, instrument such calls
by invoking page_table_check_pte_set and page_table_check_pte_clear
respectively. These calls serve as a sanity check against illegal
mappings.
Enable ARCH_SUPPORTS_PAGE_TABLE_CHECK for all platforms.
See also:
riscv support in commit 3fee229a8eb9 ("riscv/mm: enable
ARCH_SUPPORTS_PAGE_TABLE_CHECK")
arm64 in commit 42b2547137f5 ("arm64/mm: enable
ARCH_SUPPORTS_PAGE_TABLE_CHECK")
x86_64 in commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
check")
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
v9: Updated for new API. Instrument pmdp_collapse_flush's two
constituent calls to avoid header hell
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/book3s/32/pgtable.h | 7 +++-
arch/powerpc/include/asm/book3s/64/pgtable.h | 39 ++++++++++++++++----
arch/powerpc/mm/book3s64/hash_pgtable.c | 4 ++
arch/powerpc/mm/book3s64/pgtable.c | 13 +++++--
arch/powerpc/mm/book3s64/radix_pgtable.c | 3 ++
arch/powerpc/mm/pgtable.c | 4 ++
7 files changed, 58 insertions(+), 13 deletions(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 6f105ee4f3cf..5bd6d367ef40 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -166,6 +166,7 @@ config PPC
select ARCH_STACKWALK
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_DEBUG_PAGEALLOC if PPC_BOOK3S || PPC_8xx || 40x
+ select ARCH_SUPPORTS_PAGE_TABLE_CHECK
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF if PPC64
select ARCH_USE_MEMTEST
diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index bd6f8cdd25aa..48f4e7b98340 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -201,6 +201,7 @@ void unmap_kernel_page(unsigned long va);
#ifndef __ASSEMBLY__
#include <linux/sched.h>
#include <linux/threads.h>
+#include <linux/page_table_check.h>
/* Bits to mask out from a PGD to get to the PUD page */
#define PGD_MASKED_BITS 0
@@ -319,7 +320,11 @@ static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep)
{
- return __pte(pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, 0, 0));
+ pte_t old_pte = __pte(pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, 0, 0));
+
+ page_table_check_pte_clear(mm, old_pte);
+
+ return old_pte;
}
#define __HAVE_ARCH_PTEP_SET_WRPROTECT
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index dd3e7b190ab7..834c997ba657 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -151,6 +151,8 @@
#define PAGE_KERNEL_ROX __pgprot(_PAGE_BASE | _PAGE_KERNEL_ROX)
#ifndef __ASSEMBLY__
+#include <linux/page_table_check.h>
+
/*
* page table defines
*/
@@ -421,8 +423,11 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{
- unsigned long old = pte_update(mm, addr, ptep, ~0UL, 0, 0);
- return __pte(old);
+ pte_t old_pte = __pte(pte_update(mm, addr, ptep, ~0UL, 0, 0));
+
+ page_table_check_pte_clear(mm, old_pte);
+
+ return old_pte;
}
#define __HAVE_ARCH_PTEP_GET_AND_CLEAR_FULL
@@ -431,11 +436,16 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
pte_t *ptep, int full)
{
if (full && radix_enabled()) {
+ pte_t old_pte;
+
/*
* We know that this is a full mm pte clear and
* hence can be sure there is no parallel set_pte.
*/
- return radix__ptep_get_and_clear_full(mm, addr, ptep, full);
+ old_pte = radix__ptep_get_and_clear_full(mm, addr, ptep, full);
+ page_table_check_pte_clear(mm, old_pte);
+
+ return old_pte;
}
return ptep_get_and_clear(mm, addr, ptep);
}
@@ -1339,17 +1349,30 @@ extern int pudp_test_and_clear_young(struct vm_area_struct *vma,
static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
unsigned long addr, pmd_t *pmdp)
{
- if (radix_enabled())
- return radix__pmdp_huge_get_and_clear(mm, addr, pmdp);
- return hash__pmdp_huge_get_and_clear(mm, addr, pmdp);
+ pmd_t old_pmd;
+
+ if (radix_enabled()) {
+ old_pmd = radix__pmdp_huge_get_and_clear(mm, addr, pmdp);
+ } else {
+ old_pmd = hash__pmdp_huge_get_and_clear(mm, addr, pmdp);
+ }
+
+ page_table_check_pmd_clear(mm, old_pmd);
+
+ return old_pmd;
}
#define __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR
static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
unsigned long addr, pud_t *pudp)
{
- if (radix_enabled())
- return radix__pudp_huge_get_and_clear(mm, addr, pudp);
+ pud_t old_pud;
+
+ if (radix_enabled()) {
+ old_pud = radix__pudp_huge_get_and_clear(mm, addr, pudp);
+ page_table_check_pud_clear(mm, old_pud);
+ return old_pud;
+ }
BUG();
return *pudp;
}
diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
index ae52c8db45b7..d6bde756e4a6 100644
--- a/arch/powerpc/mm/book3s64/hash_pgtable.c
+++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
@@ -8,6 +8,7 @@
#include <linux/sched.h>
#include <linux/mm_types.h>
#include <linux/mm.h>
+#include <linux/page_table_check.h>
#include <linux/stop_machine.h>
#include <asm/sections.h>
@@ -231,6 +232,9 @@ pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addres
pmd = *pmdp;
pmd_clear(pmdp);
+
+ page_table_check_pmd_clear(vma->vm_mm, pmd);
+
/*
* Wait for all pending hash_page to finish. This is needed
* in case of subpage collapse. When we collapse normal pages
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 9a0a2accb261..194df0e4a33d 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -10,6 +10,7 @@
#include <linux/pkeys.h>
#include <linux/debugfs.h>
#include <linux/proc_fs.h>
+#include <linux/page_table_check.h>
#include <misc/cxl-base.h>
#include <asm/pgalloc.h>
@@ -116,6 +117,7 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
WARN_ON(!(pmd_large(pmd)));
#endif
trace_hugepage_set_pmd(addr, pmd_val(pmd));
+ page_table_check_pmd_set(mm, pmdp, pmd);
return __set_pte_at(mm, addr, pmdp_ptep(pmdp), pmd_pte(pmd), 0);
}
@@ -133,7 +135,8 @@ void set_pud_at(struct mm_struct *mm, unsigned long addr,
WARN_ON(!(pud_large(pud)));
#endif
trace_hugepage_set_pud(addr, pud_val(pud));
- return set_pte_at(mm, addr, pudp_ptep(pudp), pud_pte(pud));
+ page_table_check_pud_set(mm, pudp, pud);
+ return __set_pte_at(mm, addr, pudp_ptep(pudp), pud_pte(pud), 0);
}
static void do_serialize(void *arg)
@@ -168,11 +171,13 @@ void serialize_against_pte_lookup(struct mm_struct *mm)
pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp)
{
- unsigned long old_pmd;
+ pmd_t old_pmd;
- old_pmd = pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_PRESENT, _PAGE_INVALID);
+ old_pmd = __pmd(pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_PRESENT, _PAGE_INVALID));
flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
- return __pmd(old_pmd);
+ page_table_check_pmd_clear(vma->vm_mm, old_pmd);
+
+ return old_pmd;
}
pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index ae4a5f66ccd2..9ed38466a99a 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -14,6 +14,7 @@
#include <linux/of.h>
#include <linux/of_fdt.h>
#include <linux/mm.h>
+#include <linux/page_table_check.h>
#include <linux/hugetlb.h>
#include <linux/string_helpers.h>
#include <linux/memory.h>
@@ -1404,6 +1405,8 @@ pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addre
pmd = *pmdp;
pmd_clear(pmdp);
+ page_table_check_pmd_clear(vma->vm_mm, pmd);
+
radix__flush_tlb_collapsed_pmd(vma->vm_mm, address);
return pmd;
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index e8e0289d7ab0..bccc96ba2471 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -22,6 +22,7 @@
#include <linux/mm.h>
#include <linux/percpu.h>
#include <linux/hardirq.h>
+#include <linux/page_table_check.h>
#include <linux/hugetlb.h>
#include <asm/tlbflush.h>
#include <asm/tlb.h>
@@ -206,6 +207,9 @@ void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
* and not hw_valid ptes. Hence there is no translation cache flush
* involved that need to be batched.
*/
+
+ page_table_check_ptes_set(mm, ptep, pte, nr);
+
for (;;) {
/*
--
2.43.0
* Re: [PATCH v9 2/7] powerpc: mm: Implement p{m,u,4}d_leaf on all platforms
2023-11-30 2:53 ` [PATCH v9 2/7] powerpc: mm: Implement p{m,u,4}d_leaf on all platforms Rohan McLure
@ 2023-11-30 6:36 ` Christophe Leroy
0 siblings, 0 replies; 15+ messages in thread
From: Christophe Leroy @ 2023-11-30 6:36 UTC
To: Rohan McLure, linuxppc-dev@lists.ozlabs.org
On 30/11/2023 at 03:53, Rohan McLure wrote:
> The check that a higher-level entry in multi-level paging contains a page
> translation entry (pte) is performed by p{m,u,4}d_leaf stubs, which may
> be specialised for each choice of mmu. In a prior commit, we replaced
> uses of the catch-all stubs p{m,u,4}d_is_leaf with p{m,u,4}d_leaf.
>
> Replace the catch-all stub definitions for p{m,u,4}d_is_leaf with
> definitions for p{m,u,4}d_leaf. A future patch will assume that
> p{m,u,4}d_leaf is defined on all platforms.
>
> In particular, implement pud_leaf for Book3E-64, pmd_leaf for all Book3E
> and Book3S-64 platforms, with a catch-all definition for p4d_leaf.
Is that needed? There are fallback definitions of all pXd_leaf() in
include/linux/pgtable.h.
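For reference, those fallbacks look like this (include/linux/pgtable.h
at the time of writing):

#ifndef pmd_leaf
#define pmd_leaf(x)	0
#endif
#ifndef pud_leaf
#define pud_leaf(x)	0
#endif
#ifndef p4d_leaf
#define p4d_leaf(x)	0
#endif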
>
> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
> v9: No longer required in order implement page table check, just
> refactoring.
> ---
> arch/powerpc/include/asm/book3s/32/pgtable.h | 5 +++++
> arch/powerpc/include/asm/book3s/64/pgtable.h | 10 ++++-----
> arch/powerpc/include/asm/nohash/64/pgtable.h | 6 ++++++
> arch/powerpc/include/asm/pgtable.h | 22 ++------------------
> 4 files changed, 17 insertions(+), 26 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
> index 52971ee30717..9cc95a61d2a6 100644
> --- a/arch/powerpc/include/asm/book3s/32/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
> @@ -223,6 +223,11 @@ static inline void pmd_clear(pmd_t *pmdp)
> *pmdp = __pmd(0);
> }
>
> +#define pmd_leaf pmd_leaf
> +static inline bool pmd_leaf(pmd_t pmd)
> +{
> + return false;
> +}
Is that needed? There is a fallback definition of pmd_leaf() in
include/linux/pgtable.h.
>
> /*
> * When flushing the tlb entry for a page, we also need to flush the hash
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index cb77eddca54b..8fdb7667c509 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -1459,16 +1459,14 @@ static inline bool is_pte_rw_upgrade(unsigned long old_val, unsigned long new_va
> /*
> * Like pmd_huge() and pmd_large(), but works regardless of config options
> */
> -#define pmd_is_leaf pmd_is_leaf
> -#define pmd_leaf pmd_is_leaf
> -static inline bool pmd_is_leaf(pmd_t pmd)
> +#define pmd_leaf pmd_leaf
> +static inline bool pmd_leaf(pmd_t pmd)
> {
> return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
> }
>
> -#define pud_is_leaf pud_is_leaf
> -#define pud_leaf pud_is_leaf
> -static inline bool pud_is_leaf(pud_t pud)
> +#define pud_leaf pud_leaf
> +static inline bool pud_leaf(pud_t pud)
> {
> return !!(pud_raw(pud) & cpu_to_be64(_PAGE_PTE));
> }
> diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
> index 2202c78730e8..f58cbebde26e 100644
> --- a/arch/powerpc/include/asm/nohash/64/pgtable.h
> +++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
> @@ -116,6 +116,12 @@ static inline void pud_clear(pud_t *pudp)
> *pudp = __pud(0);
> }
>
> +#define pud_leaf pud_leaf
> +static inline bool pud_leaf(pud_t pud)
> +{
> + return false;
> +}
> +
Is that needed? There is a fallback definition of pud_leaf() in
include/linux/pgtable.h.
> #define pud_none(pud) (!pud_val(pud))
> #define pud_bad(pud) (!is_kernel_addr(pud_val(pud)) \
> || (pud_val(pud) & PUD_BAD_BITS))
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 9224f23065ff..db0231afca9d 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -180,29 +180,11 @@ static inline void pte_frag_set(mm_context_t *ctx, void *p)
> }
> #endif
>
> -#ifndef pmd_is_leaf
> -#define pmd_is_leaf pmd_is_leaf
> -static inline bool pmd_is_leaf(pmd_t pmd)
> +#define p4d_leaf p4d_leaf
> +static inline bool p4d_leaf(p4d_t p4d)
> {
> return false;
> }
Is that needed? There is a fallback definition of p4d_leaf() in
include/linux/pgtable.h.
> -#endif
> -
> -#ifndef pud_is_leaf
> -#define pud_is_leaf pud_is_leaf
> -static inline bool pud_is_leaf(pud_t pud)
> -{
> - return false;
> -}
> -#endif
> -
> -#ifndef p4d_is_leaf
> -#define p4d_is_leaf p4d_is_leaf
> -static inline bool p4d_is_leaf(p4d_t p4d)
> -{
> - return false;
> -}
> -#endif
>
> #define pmd_pgtable pmd_pgtable
> static inline pgtable_t pmd_pgtable(pmd_t pmd)
* Re: [PATCH v9 3/7] powerpc: mm: Add common pud_pfn stub for all platforms
2023-11-30 2:53 ` [PATCH v9 3/7] powerpc: mm: Add common pud_pfn stub for " Rohan McLure
@ 2023-11-30 6:43 ` Christophe Leroy
0 siblings, 0 replies; 15+ messages in thread
From: Christophe Leroy @ 2023-11-30 6:43 UTC
To: Rohan McLure, linuxppc-dev@lists.ozlabs.org
On 30/11/2023 at 03:53, Rohan McLure wrote:
> Prior to this commit, pud_pfn was implemented with BUILD_BUG as the inline
> function for 64-bit Book3S systems but is never included, as its
> invocations in generic code are guarded by calls to pud_devmap which return
> zero on such systems. A future patch will provide support for page table
> checks, the generic code for which depends on a pud_pfn stub being
> implemented, even though the patch will not interact with puds directly.
This is no longer correct; that was changed by commit 27af67f35631
("powerpc/book3s64/mm: enable transparent pud hugepage").
>
> Remove the 64-bit Book3S stub and define pud_pfn to warn on all
> platforms. pud_pfn may be defined properly on a per-platform basis
> should it grow real usages in future.
Your patch removes nothing, it just adds a fallback; is that still correct?
>
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
> arch/powerpc/include/asm/pgtable.h | 14 ++++++++++++++
> 1 file changed, 14 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index db0231afca9d..9c0f2151f08f 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -219,6 +219,20 @@ static inline bool arch_supports_memmap_on_memory(unsigned long vmemmap_size)
>
> #endif /* CONFIG_PPC64 */
>
> +/*
> + * Currently only consumed by page_table_check_pud_{set,clear}. Since clears
> + * and sets to page table entries at any level are done through
> + * page_table_check_pte_{set,clear}, provide stub implementation.
> + */
> +#ifndef pud_pfn
> +#define pud_pfn pud_pfn
> +static inline int pud_pfn(pud_t pud)
> +{
> + WARN_ONCE(1, "pud: platform does not use pud entries directly");
> + return 0;
> +}
> +#endif
> +
> #endif /* __ASSEMBLY__ */
>
> #endif /* _ASM_POWERPC_PGTABLE_H */
* Re: [PATCH v9 5/7] powerpc: mm: Implement *_user_accessible_page() for ptes
2023-11-30 2:53 ` [PATCH v9 5/7] powerpc: mm: Implement *_user_accessible_page() for ptes Rohan McLure
@ 2023-11-30 7:01 ` Christophe Leroy
0 siblings, 0 replies; 15+ messages in thread
From: Christophe Leroy @ 2023-11-30 7:01 UTC
To: Rohan McLure, linuxppc-dev@lists.ozlabs.org
On 30/11/2023 at 03:53, Rohan McLure wrote:
> Page table checking depends on architectures providing an
> implementation of p{te,md,ud}_user_accessible_page. With
> refactorisations made on powerpc/mm, the pte_access_permitted() and
> similar methods verify whether a userland page is accessible with the
> required permissions.
>
> Since page table checking is the only user of
> p{te,md,ud}_user_accessible_page(), implement these for all platforms,
> using some of the same preliminary checks taken by pte_access_permitted()
> on that platform.
pte_access_permitted() returns false on an exec-only page.
As far as I can see, on arm64 pte_user_accessible_page() returns true on
an exec-only page.
In addition, pte_access_permitted() is called only from GUP so it is
guaranteed to be called only for user pages. Do we have the same guarantee
from callers of pte_user_accessible_page()? If not, the address needs to
be checked as well, see commit a78587473642 ("powerpc: Rely on
address instead of pte_user()").
>
> Since commit 8e9bd41e4ce1 ("powerpc/nohash: Replace pte_user() by pte_read()")
> pte_user() is no longer required to be present on all platforms as it
> may be equivalent to or implied by pte_read(). Hence implementations are
> specialised.
pte_user() is neither equivalent to nor implied by pte_read(). On most
platforms it is implied by the address being below TASK_SIZE.
pte_read() will also return true on kernel-readable pages.
>
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
> v9: New implementation
> ---
> arch/powerpc/include/asm/book3s/32/pgtable.h | 5 +++++
> arch/powerpc/include/asm/book3s/64/pgtable.h | 5 +++++
> arch/powerpc/include/asm/nohash/pgtable.h | 5 +++++
> arch/powerpc/include/asm/pgtable.h | 15 +++++++++++++++
> 4 files changed, 30 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
> index 9cc95a61d2a6..bd6f8cdd25aa 100644
> --- a/arch/powerpc/include/asm/book3s/32/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
> @@ -441,6 +441,11 @@ static inline bool pte_access_permitted(pte_t pte, bool write)
> return true;
> }
>
> +static inline bool pte_user_accessible_page(pte_t pte)
> +{
> + return pte_present(pte) && pte_read(pte);
> +}
> +
> /* Conversion functions: convert a page and protection to a page entry,
> * and a page entry and page directory to the page they refer to.
> *
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index 2454174b26cb..dd3e7b190ab7 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -544,6 +544,11 @@ static inline bool pte_access_permitted(pte_t pte, bool write)
> return arch_pte_access_permitted(pte_val(pte), write, 0);
> }
>
> +static inline bool pte_user_accessible_page(pte_t pte)
> +{
> + return pte_present(pte) && pte_user(pte) && pte_read(pte);
> +}
> +
> /*
> * Conversion functions: convert a page and protection to a page entry,
> * and a page entry and page directory to the page they refer to.
> diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
> index 427db14292c9..33b4a4267f66 100644
> --- a/arch/powerpc/include/asm/nohash/pgtable.h
> +++ b/arch/powerpc/include/asm/nohash/pgtable.h
> @@ -213,6 +213,11 @@ static inline bool pte_access_permitted(pte_t pte, bool write)
> return true;
> }
>
> +static inline bool pte_user_accessible_page(pte_t pte)
> +{
> + return pte_present(pte) && pte_read(pte);
> +}
> +
> /* Conversion functions: convert a page and protection to a page entry,
> * and a page entry and page directory to the page they refer to.
> *
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index d7d0f47760d3..661bf3afca37 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -250,6 +250,21 @@ static inline pte_t pud_pte(pud_t pud)
> return __pte(pud_val(pud));
> }
> #endif
> +
> +static inline bool pmd_user_accessible_page(pmd_t pmd)
> +{
> + pte_t pte = pmd_pte(pmd);
> +
> + return pte_user_accessible_page(pte);
No need for that pte local var; this can fit on a single line.
> +}
> +
> +static inline bool pud_user_accessible_page(pud_t pud)
> +{
> + pte_t pte = pud_pte(pud);
> +
> + return pte_user_accessible_page(pte);
Same.
> +}
> +
> #endif /* __ASSEMBLY__ */
>
> #endif /* _ASM_POWERPC_PGTABLE_H */
* Re: [PATCH v9 6/7] powerpc: mm: Use __set_pte_at() for early-boot / internal usages
2023-11-30 2:53 ` [PATCH v9 6/7] powerpc: mm: Use __set_pte_at() for early-boot / internal usages Rohan McLure
@ 2023-11-30 7:18 ` Christophe Leroy
0 siblings, 0 replies; 15+ messages in thread
From: Christophe Leroy @ 2023-11-30 7:18 UTC
To: Rohan McLure, linuxppc-dev@lists.ozlabs.org
On 30/11/2023 at 03:53, Rohan McLure wrote:
> In the new set_ptes() API, set_pte_at() (a special case of set_ptes())
> is intended to be instrumented by the page table check facility. There
> are however several other routines that constitute the API for setting
> page table entries, including set_pmd_at() among others. Such routines
> are themselves implemented in terms of set_pte_at().
>
> A future patch providing support for page table checking on powerpc
> must take care to avoid duplicate calls to
> page_table_check_p{te,md,ud}_set().
>
> Cause API-facing routines that call set_pte_at() to instead call
> __set_pte_at(), which will remain uninstrumented by page table check.
> set_ptes() is itself implemented by calls to __set_pte_at(), so this
> eliminates redundant code.
>
> Also prefer __set_pte_at() in early-boot usages which should not be
> instrumented.
__set_pte_at() lacks the call to set_pte_filter() and is only intended
to be called directly in very specific situations, like early KASAN, where
set_pte_filter() cannot be called but where we absolutely know that
set_pte_filter() is not needed.
Are you really sure that all places you change are safe and do not
require a call to set_pte_filter()?
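For reference, the instrumented path does roughly this (simplified from
arch/powerpc/mm/pgtable.c):

void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
	      pte_t pte, unsigned int nr)
{
	/* Handles e.g. dcache flushing for executable mappings;
	 * calling __set_pte_at() directly skips this step. */
	pte = set_pte_filter(pte, addr);
	/* ... then write the nr entries via __set_pte_at() ... */
}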
>
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
> v9: New patch
> ---
> arch/powerpc/mm/book3s64/hash_pgtable.c | 2 +-
> arch/powerpc/mm/book3s64/pgtable.c | 4 ++--
> arch/powerpc/mm/book3s64/radix_pgtable.c | 10 +++++-----
> arch/powerpc/mm/nohash/book3e_pgtable.c | 2 +-
> arch/powerpc/mm/pgtable_32.c | 2 +-
> 5 files changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
> index 988948d69bc1..ae52c8db45b7 100644
> --- a/arch/powerpc/mm/book3s64/hash_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
> @@ -165,7 +165,7 @@ int hash__map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot)
> ptep = pte_alloc_kernel(pmdp, ea);
> if (!ptep)
> return -ENOMEM;
> - set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT, prot));
> + __set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT, prot), 0);
> } else {
> /*
> * If the mm subsystem is not fully up, we cannot create a
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index be229290a6a7..9a0a2accb261 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -116,7 +116,7 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
> WARN_ON(!(pmd_large(pmd)));
> #endif
> trace_hugepage_set_pmd(addr, pmd_val(pmd));
> - return set_pte_at(mm, addr, pmdp_ptep(pmdp), pmd_pte(pmd));
> + return __set_pte_at(mm, addr, pmdp_ptep(pmdp), pmd_pte(pmd), 0);
> }
>
> void set_pud_at(struct mm_struct *mm, unsigned long addr,
> @@ -539,7 +539,7 @@ void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
> if (radix_enabled())
> return radix__ptep_modify_prot_commit(vma, addr,
> ptep, old_pte, pte);
> - set_pte_at(vma->vm_mm, addr, ptep, pte);
> + __set_pte_at(vma->vm_mm, addr, ptep, pte, 0);
> }
>
> /*
> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index 1f8db10693e3..ae4a5f66ccd2 100644
> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
> @@ -109,7 +109,7 @@ static int early_map_kernel_page(unsigned long ea, unsigned long pa,
> ptep = pte_offset_kernel(pmdp, ea);
>
> set_the_pte:
> - set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags));
> + __set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags), 0);
> asm volatile("ptesync": : :"memory");
> return 0;
> }
> @@ -169,7 +169,7 @@ static int __map_kernel_page(unsigned long ea, unsigned long pa,
> return -ENOMEM;
>
> set_the_pte:
> - set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags));
> + __set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags), 0);
> asm volatile("ptesync": : :"memory");
> return 0;
> }
> @@ -1536,7 +1536,7 @@ void radix__ptep_modify_prot_commit(struct vm_area_struct *vma,
> (atomic_read(&mm->context.copros) > 0))
> radix__flush_tlb_page(vma, addr);
>
> - set_pte_at(mm, addr, ptep, pte);
> + __set_pte_at(mm, addr, ptep, pte, 0);
> }
>
> int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
> @@ -1547,7 +1547,7 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
> if (!radix_enabled())
> return 0;
>
> - set_pte_at(&init_mm, 0 /* radix unused */, ptep, new_pud);
> + __set_pte_at(&init_mm, 0 /* radix unused */, ptep, new_pud, 0);
>
> return 1;
> }
> @@ -1594,7 +1594,7 @@ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
> if (!radix_enabled())
> return 0;
>
> - set_pte_at(&init_mm, 0 /* radix unused */, ptep, new_pmd);
> + __set_pte_at(&init_mm, 0 /* radix unused */, ptep, new_pmd, 0);
>
> return 1;
> }
> diff --git a/arch/powerpc/mm/nohash/book3e_pgtable.c b/arch/powerpc/mm/nohash/book3e_pgtable.c
> index 1c5e4ecbebeb..4de91b54fd89 100644
> --- a/arch/powerpc/mm/nohash/book3e_pgtable.c
> +++ b/arch/powerpc/mm/nohash/book3e_pgtable.c
> @@ -111,7 +111,7 @@ int __ref map_kernel_page(unsigned long ea, phys_addr_t pa, pgprot_t prot)
> }
> ptep = pte_offset_kernel(pmdp, ea);
> }
> - set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT, prot));
> + __set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT, prot), 0);
>
> smp_wmb();
> return 0;
> diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
> index 5c02fd08d61e..4bf3ca6af7cd 100644
> --- a/arch/powerpc/mm/pgtable_32.c
> +++ b/arch/powerpc/mm/pgtable_32.c
> @@ -89,7 +89,7 @@ int __ref map_kernel_page(unsigned long va, phys_addr_t pa, pgprot_t prot)
> * hash table
> */
> BUG_ON((pte_present(*pg) | pte_hashpte(*pg)) && pgprot_val(prot));
> - set_pte_at(&init_mm, va, pg, pfn_pte(pa >> PAGE_SHIFT, prot));
> + __set_pte_at(&init_mm, va, pg, pfn_pte(pa >> PAGE_SHIFT, prot), 0);
> }
> smp_wmb();
> return err;
* Re: [PATCH v9 7/7] powerpc: mm: Support page table check
2023-11-30 2:54 ` [PATCH v9 7/7] powerpc: mm: Support page table check Rohan McLure
@ 2023-11-30 7:28 ` Christophe Leroy
0 siblings, 0 replies; 15+ messages in thread
From: Christophe Leroy @ 2023-11-30 7:28 UTC (permalink / raw)
To: Rohan McLure, linuxppc-dev@lists.ozlabs.org
On 30/11/2023 03:54, Rohan McLure wrote:
> On creation and clearing of a page table mapping, instrument such calls
> by invoking page_table_check_pte_set and page_table_check_pte_clear
> respectively. These calls serve as a sanity check against illegal
> mappings.
>
> Enable ARCH_SUPPORTS_PAGE_TABLE_CHECK for all platforms.
>
> See also:
>
> riscv support in commit 3fee229a8eb9 ("riscv/mm: enable
> ARCH_SUPPORTS_PAGE_TABLE_CHECK")
> arm64 in commit 42b2547137f5 ("arm64/mm: enable
> ARCH_SUPPORTS_PAGE_TABLE_CHECK")
> x86_64 in commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
> check")
>
> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
> v9: Updated for new API. Instrument pmdp_collapse_flush's two
> constituent calls to avoid header hell
> ---
> arch/powerpc/Kconfig | 1 +
> arch/powerpc/include/asm/book3s/32/pgtable.h | 7 +++-
> arch/powerpc/include/asm/book3s/64/pgtable.h | 39 ++++++++++++++++----
> arch/powerpc/mm/book3s64/hash_pgtable.c | 4 ++
> arch/powerpc/mm/book3s64/pgtable.c | 13 +++++--
> arch/powerpc/mm/book3s64/radix_pgtable.c | 3 ++
> arch/powerpc/mm/pgtable.c | 4 ++
> 7 files changed, 58 insertions(+), 13 deletions(-)
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 6f105ee4f3cf..5bd6d367ef40 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -166,6 +166,7 @@ config PPC
> select ARCH_STACKWALK
> select ARCH_SUPPORTS_ATOMIC_RMW
> select ARCH_SUPPORTS_DEBUG_PAGEALLOC if PPC_BOOK3S || PPC_8xx || 40x
> + select ARCH_SUPPORTS_PAGE_TABLE_CHECK
> select ARCH_USE_BUILTIN_BSWAP
> select ARCH_USE_CMPXCHG_LOCKREF if PPC64
> select ARCH_USE_MEMTEST
> diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
> index bd6f8cdd25aa..48f4e7b98340 100644
> --- a/arch/powerpc/include/asm/book3s/32/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
> @@ -201,6 +201,7 @@ void unmap_kernel_page(unsigned long va);
> #ifndef __ASSEMBLY__
> #include <linux/sched.h>
> #include <linux/threads.h>
> +#include <linux/page_table_check.h>
>
> /* Bits to mask out from a PGD to get to the PUD page */
> #define PGD_MASKED_BITS 0
> @@ -319,7 +320,11 @@ static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
> static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
> pte_t *ptep)
> {
> - return __pte(pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, 0, 0));
> + pte_t old_pte = __pte(pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, 0, 0));
> +
> + page_table_check_pte_clear(mm, old_pte);
> +
> + return old_pte;
> }
>
> #define __HAVE_ARCH_PTEP_SET_WRPROTECT
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index dd3e7b190ab7..834c997ba657 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -151,6 +151,8 @@
> #define PAGE_KERNEL_ROX __pgprot(_PAGE_BASE | _PAGE_KERNEL_ROX)
>
> #ifndef __ASSEMBLY__
> +#include <linux/page_table_check.h>
> +
> /*
> * page table defines
> */
> @@ -421,8 +423,11 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
> static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
> unsigned long addr, pte_t *ptep)
> {
> - unsigned long old = pte_update(mm, addr, ptep, ~0UL, 0, 0);
> - return __pte(old);
> + pte_t old_pte = __pte(pte_update(mm, addr, ptep, ~0UL, 0, 0));
> +
> + page_table_check_pte_clear(mm, old_pte);
> +
> + return old_pte;
> }
>
> #define __HAVE_ARCH_PTEP_GET_AND_CLEAR_FULL
> @@ -431,11 +436,16 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
> pte_t *ptep, int full)
> {
> if (full && radix_enabled()) {
> + pte_t old_pte;
> +
> /*
> * We know that this is a full mm pte clear and
> * hence can be sure there is no parallel set_pte.
> */
> - return radix__ptep_get_and_clear_full(mm, addr, ptep, full);
> + old_pte = radix__ptep_get_and_clear_full(mm, addr, ptep, full);
> + page_table_check_pte_clear(mm, old_pte);
> +
> + return old_pte;
> }
> return ptep_get_and_clear(mm, addr, ptep);
> }
> @@ -1339,17 +1349,30 @@ extern int pudp_test_and_clear_young(struct vm_area_struct *vma,
> static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
> unsigned long addr, pmd_t *pmdp)
> {
> - if (radix_enabled())
> - return radix__pmdp_huge_get_and_clear(mm, addr, pmdp);
> - return hash__pmdp_huge_get_and_clear(mm, addr, pmdp);
> + pmd_t old_pmd;
> +
> + if (radix_enabled()) {
> + old_pmd = radix__pmdp_huge_get_and_clear(mm, addr, pmdp);
> + } else {
> + old_pmd = hash__pmdp_huge_get_and_clear(mm, addr, pmdp);
> + }
> +
> + page_table_check_pmd_clear(mm, old_pmd);
> +
> + return old_pmd;
> }
>
> #define __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR
> static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
> unsigned long addr, pud_t *pudp)
> {
> - if (radix_enabled())
> - return radix__pudp_huge_get_and_clear(mm, addr, pudp);
> + pud_t old_pud;
Should go inside the block below.
> +
> + if (radix_enabled()) {
> + old_pud = radix__pudp_huge_get_and_clear(mm, addr, pudp);
> + page_table_check_pud_clear(mm, old_pud);
> + return old_pud;
> + }
Otherwise, it could be implemented as follows in order to be similar to
pmdp_huge_get_and_clear():
{
	pud_t old_pud;

	if (radix_enabled())
		old_pud = radix__pudp_huge_get_and_clear(mm, addr, pudp);
	else
		BUG();

	page_table_check_pud_clear(mm, old_pud);

	return old_pud;
}
> BUG();
> return *pudp;
> }
> diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
> index ae52c8db45b7..d6bde756e4a6 100644
> --- a/arch/powerpc/mm/book3s64/hash_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
> @@ -8,6 +8,7 @@
> #include <linux/sched.h>
> #include <linux/mm_types.h>
> #include <linux/mm.h>
> +#include <linux/page_table_check.h>
> #include <linux/stop_machine.h>
>
> #include <asm/sections.h>
> @@ -231,6 +232,9 @@ pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addres
>
> pmd = *pmdp;
> pmd_clear(pmdp);
> +
> + page_table_check_pmd_clear(vma->vm_mm, pmd);
> +
> /*
> * Wait for all pending hash_page to finish. This is needed
> * in case of subpage collapse. When we collapse normal pages
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index 9a0a2accb261..194df0e4a33d 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -10,6 +10,7 @@
> #include <linux/pkeys.h>
> #include <linux/debugfs.h>
> #include <linux/proc_fs.h>
> +#include <linux/page_table_check.h>
> #include <misc/cxl-base.h>
>
> #include <asm/pgalloc.h>
> @@ -116,6 +117,7 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
> WARN_ON(!(pmd_large(pmd)));
> #endif
> trace_hugepage_set_pmd(addr, pmd_val(pmd));
> + page_table_check_pmd_set(mm, pmdp, pmd);
> return __set_pte_at(mm, addr, pmdp_ptep(pmdp), pmd_pte(pmd), 0);
> }
>
> @@ -133,7 +135,8 @@ void set_pud_at(struct mm_struct *mm, unsigned long addr,
> WARN_ON(!(pud_large(pud)));
> #endif
> trace_hugepage_set_pud(addr, pud_val(pud));
> - return set_pte_at(mm, addr, pudp_ptep(pudp), pud_pte(pud));
> + page_table_check_pud_set(mm, pudp, pud);
> + return __set_pte_at(mm, addr, pudp_ptep(pudp), pud_pte(pud), 0);
Did you miss that in the previous patch?
> }
>
> static void do_serialize(void *arg)
> @@ -168,11 +171,13 @@ void serialize_against_pte_lookup(struct mm_struct *mm)
> pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
> pmd_t *pmdp)
> {
> - unsigned long old_pmd;
> + pmd_t old_pmd;
>
> - old_pmd = pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_PRESENT, _PAGE_INVALID);
> + old_pmd = __pmd(pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_PRESENT, _PAGE_INVALID));
> flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
> - return __pmd(old_pmd);
> + page_table_check_pmd_clear(vma->vm_mm, old_pmd);
> +
> + return old_pmd;
> }
>
> pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index ae4a5f66ccd2..9ed38466a99a 100644
> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
> @@ -14,6 +14,7 @@
> #include <linux/of.h>
> #include <linux/of_fdt.h>
> #include <linux/mm.h>
> +#include <linux/page_table_check.h>
> #include <linux/hugetlb.h>
> #include <linux/string_helpers.h>
> #include <linux/memory.h>
> @@ -1404,6 +1405,8 @@ pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addre
> pmd = *pmdp;
> pmd_clear(pmdp);
>
> + page_table_check_pmd_clear(vma->vm_mm, pmd);
> +
> radix__flush_tlb_collapsed_pmd(vma->vm_mm, address);
>
> return pmd;
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index e8e0289d7ab0..bccc96ba2471 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -22,6 +22,7 @@
> #include <linux/mm.h>
> #include <linux/percpu.h>
> #include <linux/hardirq.h>
> +#include <linux/page_table_check.h>
> #include <linux/hugetlb.h>
> #include <asm/tlbflush.h>
> #include <asm/tlb.h>
> @@ -206,6 +207,9 @@ void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
> * and not hw_valid ptes. Hence there is no translation cache flush
> * involved that need to be batched.
> */
> +
> + page_table_check_ptes_set(mm, ptep, pte, nr);
> +
> for (;;) {
>
> /*
* Re: [PATCH v9 4/7] powerpc: mm: Default p{m,u}d_pte implementations
2023-11-30 2:53 ` [PATCH v9 4/7] powerpc: mm: Default p{m,u}d_pte implementations Rohan McLure
@ 2023-11-30 7:35 ` Christophe Leroy
2023-12-11 2:53 ` Rohan McLure
0 siblings, 1 reply; 15+ messages in thread
From: Christophe Leroy @ 2023-11-30 7:35 UTC (permalink / raw)
To: Rohan McLure, linuxppc-dev@lists.ozlabs.org
On 30/11/2023 03:53, Rohan McLure wrote:
> For 32-bit systems which have no use cases for p{m,u}d_pte() prior to
> page table checking, implement default stubs.
Is that the best solution?
If I understand correctly, it is only needed for
pmd_user_accessible_page(). Why not provide a stub
pmd_user_accessible_page() that returns false on those architectures?
Same for pud_user_accessible_page().
But if you decide to keep it, I think that:
- It should be squashed with the following patch to make it clear it's
needed for that only.
- Remove the WARN_ONCE().
- Only have a special one for book3s/64 and a generic one common to the
3 others.
>
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
> v9: New patch
> ---
> arch/powerpc/include/asm/book3s/64/pgtable.h | 3 +++
> arch/powerpc/include/asm/nohash/64/pgtable.h | 2 ++
> arch/powerpc/include/asm/pgtable.h | 17 +++++++++++++++++
> 3 files changed, 22 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index 8fdb7667c509..2454174b26cb 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -887,6 +887,8 @@ static inline int pud_present(pud_t pud)
>
> extern struct page *pud_page(pud_t pud);
> extern struct page *pmd_page(pmd_t pmd);
> +
> +#define pud_pte pud_pte
> static inline pte_t pud_pte(pud_t pud)
> {
> return __pte_raw(pud_raw(pud));
> @@ -1043,6 +1045,7 @@ static inline void __kernel_map_pages(struct page *page, int numpages, int enabl
> }
> #endif
>
> +#define pmd_pte pmd_pte
> static inline pte_t pmd_pte(pmd_t pmd)
> {
> return __pte_raw(pmd_raw(pmd));
> diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
> index f58cbebde26e..09a34fe196ae 100644
> --- a/arch/powerpc/include/asm/nohash/64/pgtable.h
> +++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
> @@ -93,6 +93,7 @@ static inline void pmd_clear(pmd_t *pmdp)
> *pmdp = __pmd(0);
> }
>
> +#define pmd_pte pmd_pte
> static inline pte_t pmd_pte(pmd_t pmd)
> {
> return __pte(pmd_val(pmd));
> @@ -134,6 +135,7 @@ static inline pmd_t *pud_pgtable(pud_t pud)
>
> extern struct page *pud_page(pud_t pud);
>
> +#define pud_pte pud_pte
> static inline pte_t pud_pte(pud_t pud)
> {
> return __pte(pud_val(pud));
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 9c0f2151f08f..d7d0f47760d3 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -233,6 +233,23 @@ static inline int pud_pfn(pud_t pud)
> }
> #endif
>
> +#ifndef pmd_pte
> +#define pmd_pte pmd_pte
> +static inline pte_t pmd_pte(pmd_t pmd)
> +{
> + WARN_ONCE(1, "pmd: platform does not use pmd entries directly");
> + return __pte(pmd_val(pmd));
> +}
> +#endif
> +
> +#ifndef pud_pte
> +#define pud_pte pud_pte
> +static inline pte_t pud_pte(pud_t pud)
> +{
> + WARN_ONCE(1, "pud: platform does not use pud entries directly");
> + return __pte(pud_val(pud));
> +}
> +#endif
> #endif /* __ASSEMBLY__ */
>
> #endif /* _ASM_POWERPC_PGTABLE_H */
* Re: [PATCH v9 4/7] powerpc: mm: Default p{m,u}d_pte implementations
2023-11-30 7:35 ` Christophe Leroy
@ 2023-12-11 2:53 ` Rohan McLure
0 siblings, 0 replies; 15+ messages in thread
From: Rohan McLure @ 2023-12-11 2:53 UTC (permalink / raw)
To: Christophe Leroy, linuxppc-dev@lists.ozlabs.org
On 11/30/23 18:35, Christophe Leroy wrote:
>
> On 30/11/2023 03:53, Rohan McLure wrote:
>> For 32-bit systems which have no use cases for p{m,u}d_pte() prior to
>> page table checking, implement default stubs.
> Is that the best solution?
>
> If I understand correctly, it is only needed for
> pmd_user_accessible_page(). Why not provide a stub
> pmd_user_accessible_page() that returns false on those architectures?
Yep, this seems reasonable to me.
>
> Same for pud_user_accessible_page().
>
> But if you decide to keep it, I think that:
> - It should be squashed with the following patch to make it clear it's
> needed for that only.
> - Remove the WARN_ONCE().
I might however move those WARN_ONCE() calls to the default, false-returning
p{m,u}d_user_accessible_page() implementations, to be consistent with
pud_pfn().
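A sketch of what that combined proposal could look like (hypothetical
follow-up code, not part of this series; the WARN_ONCE() message
mirrors the one in the patch below):

	/* Generic fallback for platforms that have no user mappings at the
	 * PMD level; book3s/64 would keep its own real implementation. */
	#ifndef pmd_user_accessible_page
	static inline bool pmd_user_accessible_page(pmd_t pmd)
	{
		WARN_ONCE(1, "pmd: platform does not use pmd entries directly");
		return false;
	}
	#endif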
> - Only have a special one for book3s/64 and a generic one common to
> the 3 others.
>
>> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
>> ---
>> v9: New patch
>> ---
>> arch/powerpc/include/asm/book3s/64/pgtable.h | 3 +++
>> arch/powerpc/include/asm/nohash/64/pgtable.h | 2 ++
>> arch/powerpc/include/asm/pgtable.h | 17 +++++++++++++++++
>> 3 files changed, 22 insertions(+)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> index 8fdb7667c509..2454174b26cb 100644
>> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> @@ -887,6 +887,8 @@ static inline int pud_present(pud_t pud)
>>
>> extern struct page *pud_page(pud_t pud);
>> extern struct page *pmd_page(pmd_t pmd);
>> +
>> +#define pud_pte pud_pte
>> static inline pte_t pud_pte(pud_t pud)
>> {
>> return __pte_raw(pud_raw(pud));
>> @@ -1043,6 +1045,7 @@ static inline void __kernel_map_pages(struct page *page, int numpages, int enabl
>> }
>> #endif
>>
>> +#define pmd_pte pmd_pte
>> static inline pte_t pmd_pte(pmd_t pmd)
>> {
>> return __pte_raw(pmd_raw(pmd));
>> diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
>> index f58cbebde26e..09a34fe196ae 100644
>> --- a/arch/powerpc/include/asm/nohash/64/pgtable.h
>> +++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
>> @@ -93,6 +93,7 @@ static inline void pmd_clear(pmd_t *pmdp)
>> *pmdp = __pmd(0);
>> }
>>
>> +#define pmd_pte pmd_pte
>> static inline pte_t pmd_pte(pmd_t pmd)
>> {
>> return __pte(pmd_val(pmd));
>> @@ -134,6 +135,7 @@ static inline pmd_t *pud_pgtable(pud_t pud)
>>
>> extern struct page *pud_page(pud_t pud);
>>
>> +#define pud_pte pud_pte
>> static inline pte_t pud_pte(pud_t pud)
>> {
>> return __pte(pud_val(pud));
>> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
>> index 9c0f2151f08f..d7d0f47760d3 100644
>> --- a/arch/powerpc/include/asm/pgtable.h
>> +++ b/arch/powerpc/include/asm/pgtable.h
>> @@ -233,6 +233,23 @@ static inline int pud_pfn(pud_t pud)
>> }
>> #endif
>>
>> +#ifndef pmd_pte
>> +#define pmd_pte pmd_pte
>> +static inline pte_t pmd_pte(pmd_t pmd)
>> +{
>> + WARN_ONCE(1, "pmd: platform does not use pmd entries directly");
>> + return __pte(pmd_val(pmd));
>> +}
>> +#endif
>> +
>> +#ifndef pud_pte
>> +#define pud_pte pud_pte
>> +static inline pte_t pud_pte(pud_t pud)
>> +{
>> + WARN_ONCE(1, "pud: platform does not use pud entries directly");
>> + return __pte(pud_val(pud));
>> +}
>> +#endif
>> #endif /* __ASSEMBLY__ */
>>
>> #endif /* _ASM_POWERPC_PGTABLE_H */
Thread overview: 15+ messages
2023-11-30 2:53 [PATCH v9 0/7] Support page table check Rohan McLure
2023-11-30 2:53 ` [PATCH v9 1/7] powerpc: mm: Replace p{u,m,4}d_is_leaf with p{u,m,4}_leaf Rohan McLure
2023-11-30 2:53 ` [PATCH v9 2/7] powerpc: mm: Implement p{m,u,4}d_leaf on all platforms Rohan McLure
2023-11-30 6:36 ` Christophe Leroy
2023-11-30 2:53 ` [PATCH v9 3/7] powerpc: mm: Add common pud_pfn stub for " Rohan McLure
2023-11-30 6:43 ` Christophe Leroy
2023-11-30 2:53 ` [PATCH v9 4/7] powerpc: mm: Default p{m,u}d_pte implementations Rohan McLure
2023-11-30 7:35 ` Christophe Leroy
2023-12-11 2:53 ` Rohan McLure
2023-11-30 2:53 ` [PATCH v9 5/7] poweprc: mm: Implement *_user_accessible_page() for ptes Rohan McLure
2023-11-30 7:01 ` Christophe Leroy
2023-11-30 2:53 ` [PATCH v9 6/7] powerpc: mm: Use __set_pte_at() for early-boot / internal usages Rohan McLure
2023-11-30 7:18 ` Christophe Leroy
2023-11-30 2:54 ` [PATCH v9 7/7] powerpc: mm: Support page table check Rohan McLure
2023-11-30 7:28 ` Christophe Leroy