* [PATCH v6 0/7] Support page table check
@ 2023-02-14 1:59 Rohan McLure
2023-02-14 1:59 ` [PATCH v6 1/7] powerpc: mm: Separate set_pte, set_pte_at for internal, external use Rohan McLure
` (6 more replies)
0 siblings, 7 replies; 13+ messages in thread
From: Rohan McLure @ 2023-02-14 1:59 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Rohan McLure
Support the page table check sanitiser on all PowerPC platforms. This
sanitiser works by serialising assignments, reassignments and clears of
page table entries at each level in order to ensure that anonymous
mappings have at most one writable consumer, and likewise that
file-backed mappings are not simultaneously also anonymous mappings.
In order to support this infrastructure, a number of stubs must be
defined for all powerpc platforms. Additionally, separate set_pte_at
from set_pte, so that kernel-internal mappings can remain
uninstrumented.
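The invariant the sanitiser enforces can be illustrated with a small
userspace model (an assumption-laden sketch of the accounting idea, not
the kernel's mm/page_table_check.c implementation): each page tracks its
anonymous and file-backed mappings, a writable anonymous mapping must be
exclusive, and a page may not be anonymous and file-backed at once.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-page mapping counters for the model. */
struct page_track {
	int anon_writers;	/* writable anonymous mappings */
	int anon_readers;	/* read-only anonymous mappings */
	int file_mappers;	/* file-backed mappings */
};

/* Returns false when adding the mapping would violate the invariant. */
static bool track_map(struct page_track *p, bool anon, bool writable)
{
	if (anon) {
		if (p->file_mappers)
			return false;	/* page is already file-backed */
		if (p->anon_writers)
			return false;	/* existing writer must stay exclusive */
		if (writable && p->anon_readers)
			return false;	/* a new writer requires exclusivity */
		if (writable)
			p->anon_writers++;
		else
			p->anon_readers++;
	} else {
		if (p->anon_writers || p->anon_readers)
			return false;	/* page is already anonymous */
		p->file_mappers++;
	}
	return true;
}
```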
v6:
* Support huge pages and p{m,u}d accounting.
* Remove instrumentation from set_pte from kernel internal pages.
* 64s: Implement pmdp_collapse_flush in terms of __pmdp_collapse_flush
as access to the mm_struct * is required.
v5:
Link:
Rohan McLure (7):
powerpc: mm: Separate set_pte, set_pte_at for internal, external use
powerpc/64s: mm: Introduce __pmdp_collapse_flush with mm_struct
argument
powerpc: mm: Replace p{u,m,4}d_is_leaf with p{u,m,4}_leaf
powerpc: mm: Implement p{m,u,4}d_leaf on all platforms
powerpc: mm: Add common pud_pfn stub for all platforms
powerpc: mm: Add p{te,md,ud}_user_accessible_page helpers
powerpc: mm: Support page table check
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/book3s/32/pgtable.h | 17 +++-
arch/powerpc/include/asm/book3s/64/pgtable.h | 88 +++++++++++++-------
arch/powerpc/include/asm/book3s/pgtable.h | 4 +-
arch/powerpc/include/asm/nohash/32/pgtable.h | 12 ++-
arch/powerpc/include/asm/nohash/64/pgtable.h | 24 +++++-
arch/powerpc/include/asm/nohash/pgtable.h | 10 ++-
arch/powerpc/include/asm/pgtable.h | 61 +++++++++-----
arch/powerpc/kvm/book3s_64_mmu_radix.c | 12 +--
arch/powerpc/mm/book3s64/hash_pgtable.c | 2 +-
arch/powerpc/mm/book3s64/pgtable.c | 16 ++--
arch/powerpc/mm/book3s64/radix_pgtable.c | 24 +++---
arch/powerpc/mm/nohash/book3e_pgtable.c | 2 +-
arch/powerpc/mm/pgtable.c | 10 +--
arch/powerpc/mm/pgtable_32.c | 2 +-
arch/powerpc/mm/pgtable_64.c | 6 +-
arch/powerpc/xmon/xmon.c | 6 +-
17 files changed, 204 insertions(+), 93 deletions(-)
--
2.37.2
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH v6 1/7] powerpc: mm: Separate set_pte, set_pte_at for internal, external use
2023-02-14 1:59 [PATCH v6 0/7] Support page table check Rohan McLure
@ 2023-02-14 1:59 ` Rohan McLure
2023-02-14 5:59 ` Christophe Leroy
2023-02-14 1:59 ` [PATCH v6 2/7] powerpc/64s: mm: Introduce __pmdp_collapse_flush with mm_struct argument Rohan McLure
` (5 subsequent siblings)
6 siblings, 1 reply; 13+ messages in thread
From: Rohan McLure @ 2023-02-14 1:59 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Rohan McLure
Produce separate symbols for set_pte, which is to be used in
arch/powerpc for reassignment of PTEs, and set_pte_at, used in generic
code.
The reason for this distinction is to support the Page Table Check
sanitiser. Having this distinction allows set_pte_at to be
instrumented, but set_pte not to be, permitting uninstrumented
internal mappings. This distinction in names is also present in x86.
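The intended end state of this split (set_pte_at gains the sanitiser
hook in a later patch of this series; in this patch it is still a plain
alias) can be sketched as a self-contained userspace model with stub
types — the stub names below are illustrative, not the kernel symbols:

```c
#include <assert.h>

typedef struct { unsigned long val; } pte_t;
struct mm_struct { int id; };

static int pte_set_checks;	/* counts sanitiser invocations */

/* Stand-in for page_table_check_pte_set(), added by a later patch. */
static void page_table_check_pte_set_stub(void)
{
	pte_set_checks++;
}

/* Uninstrumented path: used inside arch/powerpc for internal mappings. */
static void set_pte(struct mm_struct *mm, unsigned long addr,
		    pte_t *ptep, pte_t pte)
{
	(void)mm;
	(void)addr;
	*ptep = pte;		/* the actual PTE store */
}

/* Instrumented path: what generic mm code calls. */
static void set_pte_at(struct mm_struct *mm, unsigned long addr,
		       pte_t *ptep, pte_t pte)
{
	page_table_check_pte_set_stub();
	set_pte(mm, addr, ptep, pte);
}
```

Internal users keep calling set_pte and never hit the check; generic
callers go through set_pte_at and do.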
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
v6: new patch
---
arch/powerpc/include/asm/book3s/pgtable.h | 4 ++--
arch/powerpc/include/asm/nohash/pgtable.h | 4 ++--
arch/powerpc/include/asm/pgtable.h | 1 +
arch/powerpc/mm/pgtable.c | 4 ++--
4 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/pgtable.h b/arch/powerpc/include/asm/book3s/pgtable.h
index d18b748ea3ae..dbcdc2103c59 100644
--- a/arch/powerpc/include/asm/book3s/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/pgtable.h
@@ -12,8 +12,8 @@
/* Insert a PTE, top-level function is out of line. It uses an inline
* low level function in the respective pgtable-* files
*/
-extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
- pte_t pte);
+extern void set_pte(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+ pte_t pte);
#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
index 69c3a050a3d8..ac3e69a18253 100644
--- a/arch/powerpc/include/asm/nohash/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/pgtable.h
@@ -154,8 +154,8 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
/* Insert a PTE, top-level function is out of line. It uses an inline
* low level function in the respective pgtable-* files
*/
-extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
- pte_t pte);
+extern void set_pte(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+ pte_t pte);
/* This low level function performs the actual PTE insertion
* Setting the PTE depends on the MMU type and other factors. It's
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 9972626ddaf6..17d30359d1f4 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -48,6 +48,7 @@ struct mm_struct;
/* Keep these as a macros to avoid include dependency mess */
#define pte_page(x) pfn_to_page(pte_pfn(x))
#define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot))
+#define set_pte_at set_pte
/*
* Select all bits except the pfn
*/
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index cb2dcdb18f8e..e9a464e0d081 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -187,8 +187,8 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma,
/*
* set_pte stores a linux PTE into the linux page table.
*/
-void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
- pte_t pte)
+void set_pte(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+ pte_t pte)
{
/*
* Make sure hardware valid bit is not set. We don't do
--
2.37.2
* [PATCH v6 2/7] powerpc/64s: mm: Introduce __pmdp_collapse_flush with mm_struct argument
2023-02-14 1:59 [PATCH v6 0/7] Support page table check Rohan McLure
2023-02-14 1:59 ` [PATCH v6 1/7] powerpc: mm: Separate set_pte, set_pte_at for internal, external use Rohan McLure
@ 2023-02-14 1:59 ` Rohan McLure
2023-02-14 6:02 ` Christophe Leroy
2023-02-14 1:59 ` [PATCH v6 3/7] powerpc: mm: Replace p{u,m,4}d_is_leaf with p{u,m,4}_leaf Rohan McLure
` (4 subsequent siblings)
6 siblings, 1 reply; 13+ messages in thread
From: Rohan McLure @ 2023-02-14 1:59 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Rohan McLure
pmdp_collapse_flush is called from generic code with just three
parameters, as the mm context is implied by the vm_area_struct
parameter.
Define __pmdp_collapse_flush to accept an additional mm_struct *
parameter, with pmdp_collapse_flush a macro that unpacks the vma and
calls __pmdp_collapse_flush. The mm_struct * parameter is needed in a
future patch providing Page Table Check support, which is defined in
terms of mm context objects.
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
v6: New patch
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 14 +++++++++++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index cb4c67bf45d7..9d8b4e25f5ed 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1244,14 +1244,22 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
return hash__pmdp_huge_get_and_clear(mm, addr, pmdp);
}
-static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
- unsigned long address, pmd_t *pmdp)
+static inline pmd_t __pmdp_collapse_flush(struct vm_area_struct *vma, struct mm_struct *mm,
+ unsigned long address, pmd_t *pmdp)
{
if (radix_enabled())
return radix__pmdp_collapse_flush(vma, address, pmdp);
return hash__pmdp_collapse_flush(vma, address, pmdp);
}
-#define pmdp_collapse_flush pmdp_collapse_flush
+#define pmdp_collapse_flush(vma, addr, pmdp) \
+({ \
+ struct vm_area_struct *_vma = (vma); \
+ pmd_t _r; \
+ \
+ _r = __pmdp_collapse_flush(_vma, _vma->vm_mm, (addr), (pmdp)); \
+ \
+ _r; \
+})
#define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR_FULL
pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
--
2.37.2
* [PATCH v6 3/7] powerpc: mm: Replace p{u,m,4}d_is_leaf with p{u,m,4}_leaf
2023-02-14 1:59 [PATCH v6 0/7] Support page table check Rohan McLure
2023-02-14 1:59 ` [PATCH v6 1/7] powerpc: mm: Separate set_pte, set_pte_at for internal, external use Rohan McLure
2023-02-14 1:59 ` [PATCH v6 2/7] powerpc/64s: mm: Introduce __pmdp_collapse_flush with mm_struct argument Rohan McLure
@ 2023-02-14 1:59 ` Rohan McLure
2023-02-14 1:59 ` [PATCH v6 4/7] powerpc: mm: Implement p{m,u,4}d_leaf on all platforms Rohan McLure
` (3 subsequent siblings)
6 siblings, 0 replies; 13+ messages in thread
From: Rohan McLure @ 2023-02-14 1:59 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Rohan McLure
Replace occurrences of p{u,m,4}d_is_leaf with p{u,m,4}d_leaf, as the
latter is the name used by all other arches for checking that a
higher-level entry in multi-level paging contains a page translation
entry (pte).
A future patch will implement p{u,m,4}d_leaf stubs on all platforms so
that they may be referenced in generic code.
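What "leaf" means here can be modelled in isolation: an entry is a leaf
when it maps a (huge) page directly rather than pointing at a page
table one level down. On Book3S-64 this is signalled by a software PTE
bit; the bit position below is illustrative only, as the real _PAGE_PTE
encoding is MMU-specific.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bit position; the real _PAGE_PTE value varies per MMU. */
#define SIM_PAGE_PTE (1ULL << 62)

/* Models pmd_leaf(): a PMD is a leaf when it maps a huge page
 * directly instead of pointing at a lower-level page table. */
static bool pmd_leaf_sim(uint64_t pmd_raw)
{
	return (pmd_raw & SIM_PAGE_PTE) != 0;
}
```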
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
V4: New patch
V5: Previously replaced stub definition for *_is_leaf with *_leaf. Do
that in a later patch
---
arch/powerpc/kvm/book3s_64_mmu_radix.c | 12 ++++++------
arch/powerpc/mm/book3s64/radix_pgtable.c | 14 +++++++-------
arch/powerpc/mm/pgtable.c | 6 +++---
arch/powerpc/mm/pgtable_64.c | 6 +++---
arch/powerpc/xmon/xmon.c | 6 +++---
5 files changed, 22 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 9d3743ca16d5..0d24fd984d16 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -497,7 +497,7 @@ static void kvmppc_unmap_free_pmd(struct kvm *kvm, pmd_t *pmd, bool full,
for (im = 0; im < PTRS_PER_PMD; ++im, ++p) {
if (!pmd_present(*p))
continue;
- if (pmd_is_leaf(*p)) {
+ if (pmd_leaf(*p)) {
if (full) {
pmd_clear(p);
} else {
@@ -526,7 +526,7 @@ static void kvmppc_unmap_free_pud(struct kvm *kvm, pud_t *pud,
for (iu = 0; iu < PTRS_PER_PUD; ++iu, ++p) {
if (!pud_present(*p))
continue;
- if (pud_is_leaf(*p)) {
+ if (pud_leaf(*p)) {
pud_clear(p);
} else {
pmd_t *pmd;
@@ -629,12 +629,12 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
new_pud = pud_alloc_one(kvm->mm, gpa);
pmd = NULL;
- if (pud && pud_present(*pud) && !pud_is_leaf(*pud))
+ if (pud && pud_present(*pud) && !pud_leaf(*pud))
pmd = pmd_offset(pud, gpa);
else if (level <= 1)
new_pmd = kvmppc_pmd_alloc();
- if (level == 0 && !(pmd && pmd_present(*pmd) && !pmd_is_leaf(*pmd)))
+ if (level == 0 && !(pmd && pmd_present(*pmd) && !pmd_leaf(*pmd)))
new_ptep = kvmppc_pte_alloc();
/* Check if we might have been invalidated; let the guest retry if so */
@@ -652,7 +652,7 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
new_pud = NULL;
}
pud = pud_offset(p4d, gpa);
- if (pud_is_leaf(*pud)) {
+ if (pud_leaf(*pud)) {
unsigned long hgpa = gpa & PUD_MASK;
/* Check if we raced and someone else has set the same thing */
@@ -703,7 +703,7 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
new_pmd = NULL;
}
pmd = pmd_offset(pud, gpa);
- if (pmd_is_leaf(*pmd)) {
+ if (pmd_leaf(*pmd)) {
unsigned long lgpa = gpa & PMD_MASK;
/* Check if we raced and someone else has set the same thing */
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 26245aaf12b8..4e46e001c3c3 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -205,14 +205,14 @@ static void radix__change_memory_range(unsigned long start, unsigned long end,
pudp = pud_alloc(&init_mm, p4dp, idx);
if (!pudp)
continue;
- if (pud_is_leaf(*pudp)) {
+ if (pud_leaf(*pudp)) {
ptep = (pte_t *)pudp;
goto update_the_pte;
}
pmdp = pmd_alloc(&init_mm, pudp, idx);
if (!pmdp)
continue;
- if (pmd_is_leaf(*pmdp)) {
+ if (pmd_leaf(*pmdp)) {
ptep = pmdp_ptep(pmdp);
goto update_the_pte;
}
@@ -786,7 +786,7 @@ static void __meminit remove_pmd_table(pmd_t *pmd_start, unsigned long addr,
if (!pmd_present(*pmd))
continue;
- if (pmd_is_leaf(*pmd)) {
+ if (pmd_leaf(*pmd)) {
if (!IS_ALIGNED(addr, PMD_SIZE) ||
!IS_ALIGNED(next, PMD_SIZE)) {
WARN_ONCE(1, "%s: unaligned range\n", __func__);
@@ -816,7 +816,7 @@ static void __meminit remove_pud_table(pud_t *pud_start, unsigned long addr,
if (!pud_present(*pud))
continue;
- if (pud_is_leaf(*pud)) {
+ if (pud_leaf(*pud)) {
if (!IS_ALIGNED(addr, PUD_SIZE) ||
!IS_ALIGNED(next, PUD_SIZE)) {
WARN_ONCE(1, "%s: unaligned range\n", __func__);
@@ -849,7 +849,7 @@ static void __meminit remove_pagetable(unsigned long start, unsigned long end)
if (!p4d_present(*p4d))
continue;
- if (p4d_is_leaf(*p4d)) {
+ if (p4d_leaf(*p4d)) {
if (!IS_ALIGNED(addr, P4D_SIZE) ||
!IS_ALIGNED(next, P4D_SIZE)) {
WARN_ONCE(1, "%s: unaligned range\n", __func__);
@@ -1112,7 +1112,7 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
int pud_clear_huge(pud_t *pud)
{
- if (pud_is_leaf(*pud)) {
+ if (pud_leaf(*pud)) {
pud_clear(pud);
return 1;
}
@@ -1159,7 +1159,7 @@ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
int pmd_clear_huge(pmd_t *pmd)
{
- if (pmd_is_leaf(*pmd)) {
+ if (pmd_leaf(*pmd)) {
pmd_clear(pmd);
return 1;
}
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index e9a464e0d081..3de0a8184f87 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -387,7 +387,7 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
if (p4d_none(p4d))
return NULL;
- if (p4d_is_leaf(p4d)) {
+ if (p4d_leaf(p4d)) {
ret_pte = (pte_t *)p4dp;
goto out;
}
@@ -409,7 +409,7 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
if (pud_none(pud))
return NULL;
- if (pud_is_leaf(pud)) {
+ if (pud_leaf(pud)) {
ret_pte = (pte_t *)pudp;
goto out;
}
@@ -448,7 +448,7 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
goto out;
}
- if (pmd_is_leaf(pmd)) {
+ if (pmd_leaf(pmd)) {
ret_pte = (pte_t *)pmdp;
goto out;
}
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index 5ac1fd30341b..0604c80dae66 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -100,7 +100,7 @@ EXPORT_SYMBOL(__pte_frag_size_shift);
/* 4 level page table */
struct page *p4d_page(p4d_t p4d)
{
- if (p4d_is_leaf(p4d)) {
+ if (p4d_leaf(p4d)) {
if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
VM_WARN_ON(!p4d_huge(p4d));
return pte_page(p4d_pte(p4d));
@@ -111,7 +111,7 @@ struct page *p4d_page(p4d_t p4d)
struct page *pud_page(pud_t pud)
{
- if (pud_is_leaf(pud)) {
+ if (pud_leaf(pud)) {
if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
VM_WARN_ON(!pud_huge(pud));
return pte_page(pud_pte(pud));
@@ -125,7 +125,7 @@ struct page *pud_page(pud_t pud)
*/
struct page *pmd_page(pmd_t pmd)
{
- if (pmd_is_leaf(pmd)) {
+ if (pmd_leaf(pmd)) {
/*
* vmalloc_to_page may be called on any vmap address (not only
* vmalloc), and it uses pmd_page() etc., when huge vmap is
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 73c620c2a3a1..07346b10f972 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -3339,7 +3339,7 @@ static void show_pte(unsigned long addr)
return;
}
- if (p4d_is_leaf(*p4dp)) {
+ if (p4d_leaf(*p4dp)) {
format_pte(p4dp, p4d_val(*p4dp));
return;
}
@@ -3353,7 +3353,7 @@ static void show_pte(unsigned long addr)
return;
}
- if (pud_is_leaf(*pudp)) {
+ if (pud_leaf(*pudp)) {
format_pte(pudp, pud_val(*pudp));
return;
}
@@ -3367,7 +3367,7 @@ static void show_pte(unsigned long addr)
return;
}
- if (pmd_is_leaf(*pmdp)) {
+ if (pmd_leaf(*pmdp)) {
format_pte(pmdp, pmd_val(*pmdp));
return;
}
--
2.37.2
* [PATCH v6 4/7] powerpc: mm: Implement p{m,u,4}d_leaf on all platforms
2023-02-14 1:59 [PATCH v6 0/7] Support page table check Rohan McLure
` (2 preceding siblings ...)
2023-02-14 1:59 ` [PATCH v6 3/7] powerpc: mm: Replace p{u,m,4}d_is_leaf with p{u,m,4}_leaf Rohan McLure
@ 2023-02-14 1:59 ` Rohan McLure
2023-02-14 1:59 ` [PATCH v6 5/7] powerpc: mm: Add common pud_pfn stub for " Rohan McLure
` (2 subsequent siblings)
6 siblings, 0 replies; 13+ messages in thread
From: Rohan McLure @ 2023-02-14 1:59 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Rohan McLure
The check that a higher-level entry in multi-level paging contains a
page translation entry (pte) is performed by the p{m,u,4}d_leaf stubs,
which may be specialised for each choice of MMU. A prior commit
replaced uses of the catch-all stubs p{m,u,4}d_is_leaf with
p{m,u,4}d_leaf.
Replace the catch-all stub definitions for p{m,u,4}d_is_leaf with
definitions for p{m,u,4}d_leaf. A future patch will assume that
p{m,u,4}d_leaf is defined on all platforms.
In particular, implement pud_leaf for Book3E-64, pmd_leaf for all Book3E
and Book3S-64 platforms, with a catch-all definition for p4d_leaf.
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
v5: Split patch that replaces p{m,u,4}d_is_leaf into two patches, first
replacing callsites and afterward providing generic definition.
Remove ifndef-defines implementing p{m,u}d_leaf in favour of
implementing stubs in headers belonging to the particular platforms
needing them.
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 5 +++++
arch/powerpc/include/asm/book3s/64/pgtable.h | 10 ++++-----
arch/powerpc/include/asm/nohash/64/pgtable.h | 6 ++++++
arch/powerpc/include/asm/nohash/pgtable.h | 6 ++++++
arch/powerpc/include/asm/pgtable.h | 22 ++------------------
5 files changed, 23 insertions(+), 26 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index 75823f39e042..a090cb13a4a0 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -242,6 +242,11 @@ static inline void pmd_clear(pmd_t *pmdp)
*pmdp = __pmd(0);
}
+#define pmd_leaf pmd_leaf
+static inline bool pmd_leaf(pmd_t pmd)
+{
+ return false;
+}
/*
* When flushing the tlb entry for a page, we also need to flush the hash
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 9d8b4e25f5ed..5be0a4c8bf32 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1362,16 +1362,14 @@ static inline bool is_pte_rw_upgrade(unsigned long old_val, unsigned long new_va
/*
* Like pmd_huge() and pmd_large(), but works regardless of config options
*/
-#define pmd_is_leaf pmd_is_leaf
-#define pmd_leaf pmd_is_leaf
-static inline bool pmd_is_leaf(pmd_t pmd)
+#define pmd_leaf pmd_leaf
+static inline bool pmd_leaf(pmd_t pmd)
{
return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
}
-#define pud_is_leaf pud_is_leaf
-#define pud_leaf pud_is_leaf
-static inline bool pud_is_leaf(pud_t pud)
+#define pud_leaf pud_leaf
+static inline bool pud_leaf(pud_t pud)
{
return !!(pud_raw(pud) & cpu_to_be64(_PAGE_PTE));
}
diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
index 879e9a6e5a87..d391a45e0f11 100644
--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
@@ -141,6 +141,12 @@ static inline void pud_clear(pud_t *pudp)
*pudp = __pud(0);
}
+#define pud_leaf pud_leaf
+static inline bool pud_leaf(pud_t pud)
+{
+ return false;
+}
+
#define pud_none(pud) (!pud_val(pud))
#define pud_bad(pud) (!is_kernel_addr(pud_val(pud)) \
|| (pud_val(pud) & PUD_BAD_BITS))
diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
index ac3e69a18253..cc3941a4790f 100644
--- a/arch/powerpc/include/asm/nohash/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/pgtable.h
@@ -60,6 +60,12 @@ static inline bool pte_hw_valid(pte_t pte)
return pte_val(pte) & _PAGE_PRESENT;
}
+#define pmd_leaf pmd_leaf
+static inline bool pmd_leaf(pmd_t pmd)
+{
+ return false;
+}
+
/*
* Don't just check for any non zero bits in __PAGE_USER, since for book3e
* and PTE_64BIT, PAGE_KERNEL_X contains _PAGE_BAP_SR which is also in
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 17d30359d1f4..284408829fa3 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -128,29 +128,11 @@ static inline void pte_frag_set(mm_context_t *ctx, void *p)
}
#endif
-#ifndef pmd_is_leaf
-#define pmd_is_leaf pmd_is_leaf
-static inline bool pmd_is_leaf(pmd_t pmd)
+#define p4d_leaf p4d_leaf
+static inline bool p4d_leaf(p4d_t p4d)
{
return false;
}
-#endif
-
-#ifndef pud_is_leaf
-#define pud_is_leaf pud_is_leaf
-static inline bool pud_is_leaf(pud_t pud)
-{
- return false;
-}
-#endif
-
-#ifndef p4d_is_leaf
-#define p4d_is_leaf p4d_is_leaf
-static inline bool p4d_is_leaf(p4d_t p4d)
-{
- return false;
-}
-#endif
#define pmd_pgtable pmd_pgtable
static inline pgtable_t pmd_pgtable(pmd_t pmd)
--
2.37.2
* [PATCH v6 5/7] powerpc: mm: Add common pud_pfn stub for all platforms
2023-02-14 1:59 [PATCH v6 0/7] Support page table check Rohan McLure
` (3 preceding siblings ...)
2023-02-14 1:59 ` [PATCH v6 4/7] powerpc: mm: Implement p{m,u,4}d_leaf on all platforms Rohan McLure
@ 2023-02-14 1:59 ` Rohan McLure
2023-02-14 1:59 ` [PATCH v6 6/7] powerpc: mm: Add p{te,md,ud}_user_accessible_page helpers Rohan McLure
2023-02-14 1:59 ` [PATCH v6 7/7] powerpc: mm: Support page table check Rohan McLure
6 siblings, 0 replies; 13+ messages in thread
From: Rohan McLure @ 2023-02-14 1:59 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Rohan McLure
Prior to this commit, pud_pfn was implemented with BUILD_BUG() as an
inline function on 64-bit Book3S systems, but was never emitted, as its
invocations in generic code are guarded by calls to pud_devmap, which
returns zero on such systems. A future patch will provide support for
page table checks, the generic code for which depends on a pud_pfn stub
being implemented, even though the patch will not interact with puds
directly.
Remove the 64-bit Book3S stub and define pud_pfn to warn on all
platforms. pud_pfn may be defined properly on a per-platform basis
should it grow real usages in future.
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
V2: Remove conditional BUILD_BUG and BUG. Instead warn on usage.
V3: Replace WARN with WARN_ONCE, which should suffice to demonstrate
misuse of puds.
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 10 ----------
arch/powerpc/include/asm/pgtable.h | 14 ++++++++++++++
2 files changed, 14 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 5be0a4c8bf32..8bbd3e1df93e 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1330,16 +1330,6 @@ static inline int pgd_devmap(pgd_t pgd)
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-static inline int pud_pfn(pud_t pud)
-{
- /*
- * Currently all calls to pud_pfn() are gated around a pud_devmap()
- * check so this should never be used. If it grows another user we
- * want to know about it.
- */
- BUILD_BUG();
- return 0;
-}
#define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
pte_t ptep_modify_prot_start(struct vm_area_struct *, unsigned long, pte_t *);
void ptep_modify_prot_commit(struct vm_area_struct *, unsigned long,
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 284408829fa3..ad0829f816e9 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -153,6 +153,20 @@ struct seq_file;
void arch_report_meminfo(struct seq_file *m);
#endif /* CONFIG_PPC64 */
+/*
+ * Currently only consumed by page_table_check_pud_{set,clear}. Since clears
+ * and sets to page table entries at any level are done through
+ * page_table_check_pte_{set,clear}, provide stub implementation.
+ */
+#ifndef pud_pfn
+#define pud_pfn pud_pfn
+static inline int pud_pfn(pud_t pud)
+{
+ WARN_ONCE(1, "pud: platform does not use pud entries directly");
+ return 0;
+}
+#endif
+
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_PGTABLE_H */
--
2.37.2
* [PATCH v6 6/7] powerpc: mm: Add p{te,md,ud}_user_accessible_page helpers
2023-02-14 1:59 [PATCH v6 0/7] Support page table check Rohan McLure
` (4 preceding siblings ...)
2023-02-14 1:59 ` [PATCH v6 5/7] powerpc: mm: Add common pud_pfn stub for " Rohan McLure
@ 2023-02-14 1:59 ` Rohan McLure
2023-02-14 1:59 ` [PATCH v6 7/7] powerpc: mm: Support page table check Rohan McLure
6 siblings, 0 replies; 13+ messages in thread
From: Rohan McLure @ 2023-02-14 1:59 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Rohan McLure
Add the following helpers for detecting whether a page table entry
is a leaf and is accessible to user space.
* pte_user_accessible_page
* pmd_user_accessible_page
* pud_user_accessible_page
Also implement missing pud_user definitions for both Book3S/nohash 64-bit
systems, and pmd_user for Book3S/nohash 32-bit systems.
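The composition of these helpers can be modelled with plain flag bits
(the bit values below are hypothetical; real encodings differ per
powerpc MMU): a pte is user-accessible when present and user-permitted,
while pmd/pud entries must additionally be leaves.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical flag bits for the model. */
#define F_PRESENT (1u << 0)
#define F_USER    (1u << 1)
#define F_LEAF    (1u << 2)

/* PTEs are always leaves, so presence + user permission suffices. */
static bool pte_user_accessible_sim(uint32_t pte)
{
	return (pte & F_PRESENT) && (pte & F_USER);
}

/* Higher levels must additionally be leaves: a non-leaf PMD points
 * at a page table, not at a user-accessible page. */
static bool pmd_user_accessible_sim(uint32_t pmd)
{
	return (pmd & F_LEAF) && (pmd & F_PRESENT) && (pmd & F_USER);
}
```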
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
V2: Provide missing pud_user implementations, use p{u,m}d_is_leaf.
V3: Provide missing pmd_user implementations as stubs in 32-bit.
V4: Use pmd_leaf, pud_leaf, and define pmd_user for 32 Book3E with
static inline method rather than macro.
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 4 ++++
arch/powerpc/include/asm/book3s/64/pgtable.h | 10 ++++++++++
arch/powerpc/include/asm/nohash/32/pgtable.h | 5 +++++
arch/powerpc/include/asm/nohash/64/pgtable.h | 10 ++++++++++
arch/powerpc/include/asm/pgtable.h | 15 +++++++++++++++
5 files changed, 44 insertions(+)
diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index a090cb13a4a0..afd672e84791 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -516,6 +516,10 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
}
+static inline bool pmd_user(pmd_t pmd)
+{
+ return 0;
+}
/* This low level function performs the actual PTE insertion
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 8bbd3e1df93e..93601ef4c665 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -538,6 +538,16 @@ static inline bool pte_user(pte_t pte)
return !(pte_raw(pte) & cpu_to_be64(_PAGE_PRIVILEGED));
}
+static inline bool pmd_user(pmd_t pmd)
+{
+ return !(pmd_raw(pmd) & cpu_to_be64(_PAGE_PRIVILEGED));
+}
+
+static inline bool pud_user(pud_t pud)
+{
+ return !(pud_raw(pud) & cpu_to_be64(_PAGE_PRIVILEGED));
+}
+
#define pte_access_permitted pte_access_permitted
static inline bool pte_access_permitted(pte_t pte, bool write)
{
diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h
index 70edad44dff6..d953533c56ff 100644
--- a/arch/powerpc/include/asm/nohash/32/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/32/pgtable.h
@@ -209,6 +209,11 @@ static inline void pmd_clear(pmd_t *pmdp)
*pmdp = __pmd(0);
}
+static inline bool pmd_user(pmd_t pmd)
+{
+ return false;
+}
+
/*
* PTE updates. This function is called whenever an existing
* valid PTE is updated. This does -not- include set_pte_at()
diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
index d391a45e0f11..14e69ebad31f 100644
--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
@@ -123,6 +123,11 @@ static inline pte_t pmd_pte(pmd_t pmd)
return __pte(pmd_val(pmd));
}
+static inline bool pmd_user(pmd_t pmd)
+{
+ return (pmd_val(pmd) & _PAGE_USER) == _PAGE_USER;
+}
+
#define pmd_none(pmd) (!pmd_val(pmd))
#define pmd_bad(pmd) (!is_kernel_addr(pmd_val(pmd)) \
|| (pmd_val(pmd) & PMD_BAD_BITS))
@@ -164,6 +169,11 @@ static inline pte_t pud_pte(pud_t pud)
return __pte(pud_val(pud));
}
+static inline bool pud_user(pud_t pud)
+{
+ return (pud_val(pud) & _PAGE_USER) == _PAGE_USER;
+}
+
static inline pud_t pte_pud(pte_t pte)
{
return __pud(pte_val(pte));
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index ad0829f816e9..b76fdb80b6c9 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -167,6 +167,21 @@ static inline int pud_pfn(pud_t pud)
}
#endif
+static inline bool pte_user_accessible_page(pte_t pte)
+{
+ return pte_present(pte) && pte_user(pte);
+}
+
+static inline bool pmd_user_accessible_page(pmd_t pmd)
+{
+ return pmd_leaf(pmd) && pmd_present(pmd) && pmd_user(pmd);
+}
+
+static inline bool pud_user_accessible_page(pud_t pud)
+{
+ return pud_leaf(pud) && pud_present(pud) && pud_user(pud);
+}
+
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_PGTABLE_H */
--
2.37.2
* [PATCH v6 7/7] powerpc: mm: Support page table check
2023-02-14 1:59 [PATCH v6 0/7] Support page table check Rohan McLure
` (5 preceding siblings ...)
2023-02-14 1:59 ` [PATCH v6 6/7] powerpc: mm: Add p{te,md,ud}_user_accessible_page helpers Rohan McLure
@ 2023-02-14 1:59 ` Rohan McLure
2023-02-14 6:14 ` Christophe Leroy
6 siblings, 1 reply; 13+ messages in thread
From: Rohan McLure @ 2023-02-14 1:59 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Rohan McLure
On creation and clearing of a page table mapping, instrument such calls
by invoking page_table_check_pte_set and page_table_check_pte_clear
respectively. These calls serve as a sanity check against illegal
mappings.
Enable ARCH_SUPPORTS_PAGE_TABLE_CHECK for all ppc64, and 32-bit
platforms implementing Book3S.
Change pud_pfn to be a runtime warning rather than a build bug, as it
is now referenced by page_table_check_pud_{clear,set}, even though
those calls are never reached.
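The instrumentation pattern applied throughout the hunks below —
capture the old entry, report it to the checker, return it — can be
sketched as a self-contained userspace model (the stub name is
illustrative, standing in for the kernel's page_table_check_pte_clear):

```c
#include <assert.h>
#include <stdint.h>

typedef struct { uint64_t val; } pte_t;

static int clear_reports;	/* counts reported clears */

/* Stand-in for page_table_check_pte_clear(). */
static void page_table_check_pte_clear_stub(pte_t old)
{
	(void)old;
	clear_reports++;
}

/* Models the instrumented ptep_get_and_clear(): save the old entry,
 * zero the slot, report the clear, and hand the old value back. */
static pte_t ptep_get_and_clear_sim(pte_t *ptep)
{
	pte_t old = *ptep;

	ptep->val = 0;
	page_table_check_pte_clear_stub(old);
	return old;
}
```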
See also:
riscv support in commit 3fee229a8eb9 ("riscv/mm: enable
ARCH_SUPPORTS_PAGE_TABLE_CHECK")
arm64 in commit 42b2547137f5 ("arm64/mm: enable
ARCH_SUPPORTS_PAGE_TABLE_CHECK")
x86_64 in commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
check")
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
V2: Update spacing and types assigned to pte_update calls.
V3: Update one last pte_update call to remove __pte invocation.
V5: Fix 32-bit nohash double set
V6: Omit __set_pte_at instrumentation - should be instrumented by
set_pte_at, with set_pte in between, performing all prior checks.
Instrument pmds. Use set_pte where needed.
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/book3s/32/pgtable.h | 8 +++-
arch/powerpc/include/asm/book3s/64/pgtable.h | 44 ++++++++++++++++----
arch/powerpc/include/asm/nohash/32/pgtable.h | 7 +++-
arch/powerpc/include/asm/nohash/64/pgtable.h | 8 +++-
arch/powerpc/include/asm/pgtable.h | 11 ++++-
arch/powerpc/mm/book3s64/hash_pgtable.c | 2 +-
arch/powerpc/mm/book3s64/pgtable.c | 16 ++++---
arch/powerpc/mm/book3s64/radix_pgtable.c | 10 ++---
arch/powerpc/mm/nohash/book3e_pgtable.c | 2 +-
arch/powerpc/mm/pgtable_32.c | 2 +-
11 files changed, 84 insertions(+), 27 deletions(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 2c9cdf1d8761..2474e2699037 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -154,6 +154,7 @@ config PPC
select ARCH_STACKWALK
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_DEBUG_PAGEALLOC if PPC_BOOK3S || PPC_8xx || 40x
+ select ARCH_SUPPORTS_PAGE_TABLE_CHECK
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF if PPC64
select ARCH_USE_MEMTEST
diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index afd672e84791..8850b4fb22a4 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -53,6 +53,8 @@
#ifndef __ASSEMBLY__
+#include <linux/page_table_check.h>
+
static inline bool pte_user(pte_t pte)
{
return pte_val(pte) & _PAGE_USER;
@@ -338,7 +340,11 @@ static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep)
{
- return __pte(pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, 0, 0));
+ pte_t old_pte = __pte(pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, 0, 0));
+
+ page_table_check_pte_clear(mm, addr, old_pte);
+
+ return old_pte;
}
#define __HAVE_ARCH_PTEP_SET_WRPROTECT
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 93601ef4c665..1835ba2c2309 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -162,6 +162,8 @@
#define PAGE_KERNEL_ROX __pgprot(_PAGE_BASE | _PAGE_KERNEL_ROX)
#ifndef __ASSEMBLY__
+#include <linux/page_table_check.h>
+
/*
* page table defines
*/
@@ -431,8 +433,11 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{
- unsigned long old = pte_update(mm, addr, ptep, ~0UL, 0, 0);
- return __pte(old);
+ pte_t old_pte = __pte(pte_update(mm, addr, ptep, ~0UL, 0, 0));
+
+ page_table_check_pte_clear(mm, addr, old_pte);
+
+ return old_pte;
}
#define __HAVE_ARCH_PTEP_GET_AND_CLEAR_FULL
@@ -441,11 +446,16 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
pte_t *ptep, int full)
{
if (full && radix_enabled()) {
+ pte_t old_pte;
+
/*
* We know that this is a full mm pte clear and
* hence can be sure there is no parallel set_pte.
*/
- return radix__ptep_get_and_clear_full(mm, addr, ptep, full);
+ old_pte = radix__ptep_get_and_clear_full(mm, addr, ptep, full);
+ page_table_check_pte_clear(mm, addr, old_pte);
+
+ return old_pte;
}
return ptep_get_and_clear(mm, addr, ptep);
}
@@ -1249,17 +1259,33 @@ extern int pmdp_test_and_clear_young(struct vm_area_struct *vma,
static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
unsigned long addr, pmd_t *pmdp)
{
- if (radix_enabled())
- return radix__pmdp_huge_get_and_clear(mm, addr, pmdp);
- return hash__pmdp_huge_get_and_clear(mm, addr, pmdp);
+ pmd_t old_pmd;
+
+ if (radix_enabled()) {
+ old_pmd = radix__pmdp_huge_get_and_clear(mm, addr, pmdp);
+ } else {
+ old_pmd = hash__pmdp_huge_get_and_clear(mm, addr, pmdp);
+ }
+
+ page_table_check_pmd_clear(mm, addr, old_pmd);
+
+ return old_pmd;
}
static inline pmd_t __pmdp_collapse_flush(struct vm_area_struct *vma, struct mm_struct *mm,
unsigned long address, pmd_t *pmdp)
{
- if (radix_enabled())
- return radix__pmdp_collapse_flush(vma, address, pmdp);
- return hash__pmdp_collapse_flush(vma, address, pmdp);
+ pmd_t old_pmd;
+
+ if (radix_enabled()) {
+ old_pmd = radix__pmdp_collapse_flush(vma, address, pmdp);
+ } else {
+ old_pmd = hash__pmdp_collapse_flush(vma, address, pmdp);
+ }
+
+ page_table_check_pmd_clear(mm, address, old_pmd);
+
+ return old_pmd;
}
#define pmdp_collapse_flush(vma, addr, pmdp) \
({ \
diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h
index d953533c56ff..e9c77054fe0b 100644
--- a/arch/powerpc/include/asm/nohash/32/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/32/pgtable.h
@@ -166,6 +166,7 @@ void unmap_kernel_page(unsigned long va);
#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_SPECIAL)
#ifndef __ASSEMBLY__
+#include <linux/page_table_check.h>
#define pte_clear(mm, addr, ptep) \
do { pte_update(mm, addr, ptep, ~0, 0, 0); } while (0)
@@ -316,7 +317,11 @@ static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep)
{
- return __pte(pte_update(mm, addr, ptep, ~0, 0, 0));
+ pte_t old_pte = __pte(pte_update(mm, addr, ptep, ~0, 0, 0));
+
+ page_table_check_pte_clear(mm, addr, old_pte);
+
+ return old_pte;
}
#define __HAVE_ARCH_PTEP_SET_WRPROTECT
diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
index 14e69ebad31f..d88b22c753d3 100644
--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
@@ -83,6 +83,7 @@
#define H_PAGE_4K_PFN 0
#ifndef __ASSEMBLY__
+#include <linux/page_table_check.h>
/* pte_clear moved to later in this file */
static inline pte_t pte_mkwrite(pte_t pte)
@@ -259,8 +260,11 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{
- unsigned long old = pte_update(mm, addr, ptep, ~0UL, 0, 0);
- return __pte(old);
+ pte_t old_pte = __pte(pte_update(mm, addr, ptep, ~0UL, 0, 0));
+
+ page_table_check_pte_clear(mm, addr, old_pte);
+
+ return old_pte;
}
static inline void pte_clear(struct mm_struct *mm, unsigned long addr,
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index b76fdb80b6c9..df016a0a3135 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -48,7 +48,16 @@ struct mm_struct;
/* Keep these as a macros to avoid include dependency mess */
#define pte_page(x) pfn_to_page(pte_pfn(x))
#define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot))
-#define set_pte_at set_pte
+#define set_pte_at(mm, addr, ptep, pte) \
+({ \
+ struct mm_struct *_mm = (mm); \
+ unsigned long _addr = (addr); \
+ pte_t *_ptep = (ptep), _pte = (pte); \
+ \
+ page_table_check_pte_set(_mm, _addr, _ptep, _pte); \
+ set_pte(_mm, _addr, _ptep, _pte); \
+})
+
/*
* Select all bits except the pfn
*/
diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
index 51f48984abca..a92a8a7c9199 100644
--- a/arch/powerpc/mm/book3s64/hash_pgtable.c
+++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
@@ -165,7 +165,7 @@ int hash__map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot)
ptep = pte_alloc_kernel(pmdp, ea);
if (!ptep)
return -ENOMEM;
- set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT, prot));
+ set_pte(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT, prot));
} else {
/*
* If the mm subsystem is not fully up, we cannot create a
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 85c84e89e3ea..d95be1d08b79 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -9,6 +9,7 @@
#include <linux/memremap.h>
#include <linux/pkeys.h>
#include <linux/debugfs.h>
+#include <linux/page_table_check.h>
#include <misc/cxl-base.h>
#include <asm/pgalloc.h>
@@ -87,7 +88,10 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
WARN_ON(!(pmd_large(pmd)));
#endif
trace_hugepage_set_pmd(addr, pmd_val(pmd));
- return set_pte_at(mm, addr, pmdp_ptep(pmdp), pmd_pte(pmd));
+
+ page_table_check_pmd_set(mm, addr, pmdp, pmd);
+
+ return set_pte(mm, addr, pmdp_ptep(pmdp), pmd_pte(pmd));
}
static void do_serialize(void *arg)
@@ -122,11 +126,13 @@ void serialize_against_pte_lookup(struct mm_struct *mm)
pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp)
{
- unsigned long old_pmd;
+ pmd_t old_pmd;
- old_pmd = pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_PRESENT, _PAGE_INVALID);
+ old_pmd = __pmd(pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_PRESENT, _PAGE_INVALID));
flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
- return __pmd(old_pmd);
+ page_table_check_pmd_clear(vma->vm_mm, address, old_pmd);
+
+ return old_pmd;
}
pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
@@ -460,7 +466,7 @@ void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
if (radix_enabled())
return radix__ptep_modify_prot_commit(vma, addr,
ptep, old_pte, pte);
- set_pte_at(vma->vm_mm, addr, ptep, pte);
+ set_pte(vma->vm_mm, addr, ptep, pte);
}
/*
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 4e46e001c3c3..9359e3589107 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -110,7 +110,7 @@ static int early_map_kernel_page(unsigned long ea, unsigned long pa,
ptep = pte_offset_kernel(pmdp, ea);
set_the_pte:
- set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags));
+ set_pte(&init_mm, ea, ptep, pfn_pte(pfn, flags));
asm volatile("ptesync": : :"memory");
return 0;
}
@@ -170,7 +170,7 @@ static int __map_kernel_page(unsigned long ea, unsigned long pa,
return -ENOMEM;
set_the_pte:
- set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags));
+ set_pte(&init_mm, ea, ptep, pfn_pte(pfn, flags));
asm volatile("ptesync": : :"memory");
return 0;
}
@@ -1094,7 +1094,7 @@ void radix__ptep_modify_prot_commit(struct vm_area_struct *vma,
(atomic_read(&mm->context.copros) > 0))
radix__flush_tlb_page(vma, addr);
- set_pte_at(mm, addr, ptep, pte);
+ set_pte(mm, addr, ptep, pte);
}
int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
@@ -1105,7 +1105,7 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
if (!radix_enabled())
return 0;
- set_pte_at(&init_mm, 0 /* radix unused */, ptep, new_pud);
+ set_pte(&init_mm, 0 /* radix unused */, ptep, new_pud);
return 1;
}
@@ -1152,7 +1152,7 @@ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
if (!radix_enabled())
return 0;
- set_pte_at(&init_mm, 0 /* radix unused */, ptep, new_pmd);
+ set_pte(&init_mm, 0 /* radix unused */, ptep, new_pmd);
return 1;
}
diff --git a/arch/powerpc/mm/nohash/book3e_pgtable.c b/arch/powerpc/mm/nohash/book3e_pgtable.c
index b80fc4a91a53..e50d22c6f983 100644
--- a/arch/powerpc/mm/nohash/book3e_pgtable.c
+++ b/arch/powerpc/mm/nohash/book3e_pgtable.c
@@ -111,7 +111,7 @@ int __ref map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot)
}
ptep = pte_offset_kernel(pmdp, ea);
}
- set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT, prot));
+ set_pte(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT, prot));
smp_wmb();
return 0;
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 5c02fd08d61e..a86a16be24ea 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -89,7 +89,7 @@ int __ref map_kernel_page(unsigned long va, phys_addr_t pa, pgprot_t prot)
* hash table
*/
BUG_ON((pte_present(*pg) | pte_hashpte(*pg)) && pgprot_val(prot));
- set_pte_at(&init_mm, va, pg, pfn_pte(pa >> PAGE_SHIFT, prot));
+ set_pte(&init_mm, va, pg, pfn_pte(pa >> PAGE_SHIFT, prot));
}
smp_wmb();
return err;
--
2.37.2
^ permalink raw reply related [flat|nested] 13+ messages in thread
* Re: [PATCH v6 1/7] powerpc: mm: Separate set_pte, set_pte_at for internal, external use
2023-02-14 1:59 ` [PATCH v6 1/7] powerpc: mm: Separate set_pte, set_pte_at for internal, external use Rohan McLure
@ 2023-02-14 5:59 ` Christophe Leroy
0 siblings, 0 replies; 13+ messages in thread
From: Christophe Leroy @ 2023-02-14 5:59 UTC (permalink / raw)
To: Rohan McLure, linuxppc-dev@lists.ozlabs.org
Le 14/02/2023 à 02:59, Rohan McLure a écrit :
> Produce separate symbols for set_pte, which is to be used in
> arch/powerpc for reassignment of pte's, and set_pte_at, used in generic
> code.
>
> The reason for this distinction is to support the Page Table Check
> sanitiser. Having this distinction allows for set_pte_at to be
> instrumented, but set_pte not to be, permitting uninstrumented
> internal mappings. This distinction in names is also present in x86.
>
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
> v6: new patch
> ---
> arch/powerpc/include/asm/book3s/pgtable.h | 4 ++--
> arch/powerpc/include/asm/nohash/pgtable.h | 4 ++--
> arch/powerpc/include/asm/pgtable.h | 1 +
> arch/powerpc/mm/pgtable.c | 4 ++--
> 4 files changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/pgtable.h b/arch/powerpc/include/asm/book3s/pgtable.h
> index d18b748ea3ae..dbcdc2103c59 100644
> --- a/arch/powerpc/include/asm/book3s/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/pgtable.h
> @@ -12,8 +12,8 @@
> /* Insert a PTE, top-level function is out of line. It uses an inline
> * low level function in the respective pgtable-* files
> */
> -extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
> - pte_t pte);
> +extern void set_pte(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
> + pte_t pte);
Remove 'extern' keyword, it's pointless and deprecated, checkpatch
--strict is likely complaining about it too.
Then have the prototype fit on a single line.
>
>
> #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
> diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
> index 69c3a050a3d8..ac3e69a18253 100644
> --- a/arch/powerpc/include/asm/nohash/pgtable.h
> +++ b/arch/powerpc/include/asm/nohash/pgtable.h
> @@ -154,8 +154,8 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
> /* Insert a PTE, top-level function is out of line. It uses an inline
> * low level function in the respective pgtable-* files
> */
> -extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
> - pte_t pte);
> +extern void set_pte(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
> + pte_t pte);
Remove 'extern' keyword and have the prototype fit on a single line.
>
> /* This low level function performs the actual PTE insertion
> * Setting the PTE depends on the MMU type and other factors. It's
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 9972626ddaf6..17d30359d1f4 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -48,6 +48,7 @@ struct mm_struct;
> /* Keep these as a macros to avoid include dependency mess */
> #define pte_page(x) pfn_to_page(pte_pfn(x))
> #define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot))
> +#define set_pte_at set_pte
> /*
> * Select all bits except the pfn
> */
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index cb2dcdb18f8e..e9a464e0d081 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -187,8 +187,8 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma,
> /*
> * set_pte stores a linux PTE into the linux page table.
> */
> -void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
> - pte_t pte)
> +void set_pte(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
> + pte_t pte)
Have it fit on a single line.
> {
> /*
> * Make sure hardware valid bit is not set. We don't do
* Re: [PATCH v6 2/7] powerpc/64s: mm: Introduce __pmdp_collapse_flush with mm_struct argument
2023-02-14 1:59 ` [PATCH v6 2/7] powerpc/64s: mm: Introduce __pmdp_collapse_flush with mm_struct argument Rohan McLure
@ 2023-02-14 6:02 ` Christophe Leroy
2023-02-15 0:17 ` Rohan McLure
0 siblings, 1 reply; 13+ messages in thread
From: Christophe Leroy @ 2023-02-14 6:02 UTC (permalink / raw)
To: Rohan McLure, linuxppc-dev@lists.ozlabs.org
Le 14/02/2023 à 02:59, Rohan McLure a écrit :
> pmdp_collapse_flush has references in generic code with just three
> parameters, due to the choice of mm context being implied by the vm_area
> context parameter.
>
> Define __pmdp_collapse_flush to accept an additional mm_struct *
> parameter, with pmdp_collapse_flush a macro that unpacks the vma and
> calls __pmdp_collapse_flush. The mm_struct * parameter is needed in a
> future patch providing Page Table Check support, which is defined in
> terms of mm context objects.
>
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
> v6: New patch
> ---
> arch/powerpc/include/asm/book3s/64/pgtable.h | 14 +++++++++++---
> 1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index cb4c67bf45d7..9d8b4e25f5ed 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -1244,14 +1244,22 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
> return hash__pmdp_huge_get_and_clear(mm, addr, pmdp);
> }
>
> -static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
> - unsigned long address, pmd_t *pmdp)
> +static inline pmd_t __pmdp_collapse_flush(struct vm_area_struct *vma, struct mm_struct *mm,
> + unsigned long address, pmd_t *pmdp)
> {
> if (radix_enabled())
> return radix__pmdp_collapse_flush(vma, address, pmdp);
> return hash__pmdp_collapse_flush(vma, address, pmdp);
> }
> -#define pmdp_collapse_flush pmdp_collapse_flush
> +#define pmdp_collapse_flush(vma, addr, pmdp) \
> +({ \
> + struct vm_area_struct *_vma = (vma); \
> + pmd_t _r; \
> + \
> + _r = __pmdp_collapse_flush(_vma, _vma->vm_mm, (addr), (pmdp)); \
> + \
> + _r; \
> +})
Can you make it a static inline function instead of an ugly macro?
>
> #define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR_FULL
> pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
* Re: [PATCH v6 7/7] powerpc: mm: Support page table check
2023-02-14 1:59 ` [PATCH v6 7/7] powerpc: mm: Support page table check Rohan McLure
@ 2023-02-14 6:14 ` Christophe Leroy
0 siblings, 0 replies; 13+ messages in thread
From: Christophe Leroy @ 2023-02-14 6:14 UTC (permalink / raw)
To: Rohan McLure, linuxppc-dev@lists.ozlabs.org
Le 14/02/2023 à 02:59, Rohan McLure a écrit :
> On creation and clearing of a page table mapping, instrument such calls
> by invoking page_table_check_pte_set and page_table_check_pte_clear
> respectively. These calls serve as a sanity check against illegal
> mappings.
Please also explain the changes around set_pte_at() versus set_pte().
>
> Enable ARCH_SUPPORTS_PAGE_TABLE_CHECK for all ppc64, and 32-bit
> platforms implementing Book3S.
As far as I can see below, it is implemented for all platforms,
including nohash/32.
>
> Change pud_pfn to be a runtime bug rather than a build bug as it is
> consumed by page_table_check_pud_{clear,set} which are not called.
Isn't this done in another patch?
>
> See also:
>
> riscv support in commit 3fee229a8eb9 ("riscv/mm: enable
> ARCH_SUPPORTS_PAGE_TABLE_CHECK")
> arm64 in commit 42b2547137f5 ("arm64/mm: enable
> ARCH_SUPPORTS_PAGE_TABLE_CHECK")
> x86_64 in commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
> check")
>
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
> V2: Update spacing and types assigned to pte_update calls.
> V3: Update one last pte_update call to remove __pte invocation.
> V5: Fix 32-bit nohash double set
> V6: Omit __set_pte_at instrumentation - should be instrumented by
> set_pte_at, with set_pte in between, performing all prior checks.
> Instrument pmds. Use set_pte where needed.
> ---
> arch/powerpc/Kconfig | 1 +
> arch/powerpc/include/asm/book3s/32/pgtable.h | 8 +++-
> arch/powerpc/include/asm/book3s/64/pgtable.h | 44 ++++++++++++++++----
> arch/powerpc/include/asm/nohash/32/pgtable.h | 7 +++-
> arch/powerpc/include/asm/nohash/64/pgtable.h | 8 +++-
> arch/powerpc/include/asm/pgtable.h | 11 ++++-
> arch/powerpc/mm/book3s64/hash_pgtable.c | 2 +-
> arch/powerpc/mm/book3s64/pgtable.c | 16 ++++---
> arch/powerpc/mm/book3s64/radix_pgtable.c | 10 ++---
> arch/powerpc/mm/nohash/book3e_pgtable.c | 2 +-
> arch/powerpc/mm/pgtable_32.c | 2 +-
> 11 files changed, 84 insertions(+), 27 deletions(-)
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 2c9cdf1d8761..2474e2699037 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -154,6 +154,7 @@ config PPC
> select ARCH_STACKWALK
> select ARCH_SUPPORTS_ATOMIC_RMW
> select ARCH_SUPPORTS_DEBUG_PAGEALLOC if PPC_BOOK3S || PPC_8xx || 40x
> + select ARCH_SUPPORTS_PAGE_TABLE_CHECK
> select ARCH_USE_BUILTIN_BSWAP
> select ARCH_USE_CMPXCHG_LOCKREF if PPC64
> select ARCH_USE_MEMTEST
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index b76fdb80b6c9..df016a0a3135 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -48,7 +48,16 @@ struct mm_struct;
> /* Keep these as a macros to avoid include dependency mess */
> #define pte_page(x) pfn_to_page(pte_pfn(x))
> #define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot))
> -#define set_pte_at set_pte
> +#define set_pte_at(mm, addr, ptep, pte) \
> +({ \
> + struct mm_struct *_mm = (mm); \
> + unsigned long _addr = (addr); \
> + pte_t *_ptep = (ptep), _pte = (pte); \
> + \
> + page_table_check_pte_set(_mm, _addr, _ptep, _pte); \
> + set_pte(_mm, _addr, _ptep, _pte); \
> +})
Can you make it a static inline function instead of a macro?
> +
> /*
> * Select all bits except the pfn
> */
* Re: [PATCH v6 2/7] powerpc/64s: mm: Introduce __pmdp_collapse_flush with mm_struct argument
2023-02-14 6:02 ` Christophe Leroy
@ 2023-02-15 0:17 ` Rohan McLure
2023-02-15 0:40 ` Rohan McLure
0 siblings, 1 reply; 13+ messages in thread
From: Rohan McLure @ 2023-02-15 0:17 UTC (permalink / raw)
To: Christophe Leroy; +Cc: linuxppc-dev@lists.ozlabs.org
> On 14 Feb 2023, at 5:02 pm, Christophe Leroy <christophe.leroy@csgroup.eu> wrote:
>
>
>
> Le 14/02/2023 à 02:59, Rohan McLure a écrit :
>> pmdp_collapse_flush has references in generic code with just three
>> parameters, due to the choice of mm context being implied by the vm_area
>> context parameter.
>>
>> Define __pmdp_collapse_flush to accept an additional mm_struct *
>> parameter, with pmdp_collapse_flush a macro that unpacks the vma and
>> calls __pmdp_collapse_flush. The mm_struct * parameter is needed in a
>> future patch providing Page Table Check support, which is defined in
>> terms of mm context objects.
>>
>> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
>> ---
>> v6: New patch
>> ---
>> arch/powerpc/include/asm/book3s/64/pgtable.h | 14 +++++++++++---
>> 1 file changed, 11 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> index cb4c67bf45d7..9d8b4e25f5ed 100644
>> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> @@ -1244,14 +1244,22 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
>> return hash__pmdp_huge_get_and_clear(mm, addr, pmdp);
>> }
>>
>> -static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
>> - unsigned long address, pmd_t *pmdp)
>> +static inline pmd_t __pmdp_collapse_flush(struct vm_area_struct *vma, struct mm_struct *mm,
>> + unsigned long address, pmd_t *pmdp)
>> {
>> if (radix_enabled())
>> return radix__pmdp_collapse_flush(vma, address, pmdp);
>> return hash__pmdp_collapse_flush(vma, address, pmdp);
>> }
>> -#define pmdp_collapse_flush pmdp_collapse_flush
>> +#define pmdp_collapse_flush(vma, addr, pmdp) \
>> +({ \
>> + struct vm_area_struct *_vma = (vma); \
>> + pmd_t _r; \
>> + \
>> + _r = __pmdp_collapse_flush(_vma, _vma->vm_mm, (addr), (pmdp)); \
>> + \
>> + _r; \
>> +})
>
> Can you make it a static inline function instead of an ugly macro?
Due to some header hell, it’s looking like this location only has access to
a prototype for struct vm_area_struct. Might have to remain a macro then.
Probably don’t need to explicitly declare a variable for the macro ‘return’
though.
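The constraint described here is a general C property: a static inline body is type-checked where it is defined, so dereferencing vma->vm_mm requires the complete struct vm_area_struct, whereas a macro body is only checked at each expansion site, where the type is complete. A standalone sketch with hypothetical stand-in types (not kernel code):

```c
#include <assert.h>

struct vm_area_like;	/* incomplete here, as in the pgtable header */

/* A static inline defined at this point could NOT do (v)->mm:
 * the struct is an incomplete type. A macro defers the member
 * access until expansion, where the full definition is visible. */
#define get_mm(v) ((v)->mm)

/* Later, another header supplies the full definitions. */
struct mm_like { int id; };
struct vm_area_like { struct mm_like *mm; };

static struct mm_like *use_site(struct vm_area_like *v)
{
	return get_mm(v);	/* expands where the struct is complete: OK */
}
```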
>
>>
>> #define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR_FULL
>> pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
* Re: [PATCH v6 2/7] powerpc/64s: mm: Introduce __pmdp_collapse_flush with mm_struct argument
2023-02-15 0:17 ` Rohan McLure
@ 2023-02-15 0:40 ` Rohan McLure
0 siblings, 0 replies; 13+ messages in thread
From: Rohan McLure @ 2023-02-15 0:40 UTC (permalink / raw)
To: Christophe Leroy; +Cc: linuxppc-dev@lists.ozlabs.org
> On 15 Feb 2023, at 11:17 am, Rohan McLure <rmclure@linux.ibm.com> wrote:
>
>> On 14 Feb 2023, at 5:02 pm, Christophe Leroy <christophe.leroy@csgroup.eu> wrote:
>>
>>
>>
>> Le 14/02/2023 à 02:59, Rohan McLure a écrit :
>>> pmdp_collapse_flush has references in generic code with just three
>>> parameters, due to the choice of mm context being implied by the vm_area
>>> context parameter.
>>>
>>> Define __pmdp_collapse_flush to accept an additional mm_struct *
>>> parameter, with pmdp_collapse_flush a macro that unpacks the vma and
>>> calls __pmdp_collapse_flush. The mm_struct * parameter is needed in a
>>> future patch providing Page Table Check support, which is defined in
>>> terms of mm context objects.
>>>
>>> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
>>> ---
>>> v6: New patch
>>> ---
>>> arch/powerpc/include/asm/book3s/64/pgtable.h | 14 +++++++++++---
>>> 1 file changed, 11 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
>>> index cb4c67bf45d7..9d8b4e25f5ed 100644
>>> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>>> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>>> @@ -1244,14 +1244,22 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
>>> return hash__pmdp_huge_get_and_clear(mm, addr, pmdp);
>>> }
>>>
>>> -static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
>>> - unsigned long address, pmd_t *pmdp)
>>> +static inline pmd_t __pmdp_collapse_flush(struct vm_area_struct *vma, struct mm_struct *mm,
>>> + unsigned long address, pmd_t *pmdp)
>>> {
>>> if (radix_enabled())
>>> return radix__pmdp_collapse_flush(vma, address, pmdp);
>>> return hash__pmdp_collapse_flush(vma, address, pmdp);
>>> }
>>> -#define pmdp_collapse_flush pmdp_collapse_flush
>>> +#define pmdp_collapse_flush(vma, addr, pmdp) \
>>> +({ \
>>> + struct vm_area_struct *_vma = (vma); \
>>> + pmd_t _r; \
>>> + \
>>> + _r = __pmdp_collapse_flush(_vma, _vma->vm_mm, (addr), (pmdp)); \
>>> + \
>>> + _r; \
>>> +})
>>
>> Can you make it a static inline function instead of an ugly macro?
>
> Due to some header hell, it’s looking like this location only has access to
> a prototype for struct vm_area_struct. Might have to remain a macro then.
>
> Probably don’t need to explicitly declare a variable for the macro ‘return’
> though.
It’s the same solution opted for by ptep_test_and_clear_young.
#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
#define ptep_test_and_clear_young(__vma, __addr, __ptep) \
({ \
__ptep_test_and_clear_young((__vma)->vm_mm, __addr, __ptep); \
})
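The simplification suggested above, omitting the explicit result variable, works because GNU C statement expressions yield the value of their last expression. A standalone sketch with hypothetical stand-in names:

```c
#include <assert.h>

struct vma_like { int mm_id; };

/* Stand-in for __pmdp_collapse_flush(): returns the "old" value. */
static int flush_impl(struct vma_like *v, int mm_id, int addr)
{
	return mm_id + addr;
}

/* Statement-expression macro (GNU extension): the value of the
 * block is its last expression, so no temporary is needed. */
#define collapse_flush(vma, addr) \
({ \
	struct vma_like *_vma = (vma); \
	flush_impl(_vma, _vma->mm_id, (addr)); \
})
```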
>
>>
>>>
>>> #define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR_FULL
>>> pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
end of thread, other threads:[~2023-02-15 0:41 UTC | newest]
Thread overview: 13+ messages
2023-02-14 1:59 [PATCH v6 0/7] Support page table check Rohan McLure
2023-02-14 1:59 ` [PATCH v6 1/7] powerpc: mm: Separate set_pte, set_pte_at for internal, external use Rohan McLure
2023-02-14 5:59 ` Christophe Leroy
2023-02-14 1:59 ` [PATCH v6 2/7] powerpc/64s: mm: Introduce __pmdp_collapse_flush with mm_struct argument Rohan McLure
2023-02-14 6:02 ` Christophe Leroy
2023-02-15 0:17 ` Rohan McLure
2023-02-15 0:40 ` Rohan McLure
2023-02-14 1:59 ` [PATCH v6 3/7] powerpc: mm: Replace p{u,m,4}d_is_leaf with p{u,m,4}_leaf Rohan McLure
2023-02-14 1:59 ` [PATCH v6 4/7] powerpc: mm: Implement p{m,u,4}d_leaf on all platforms Rohan McLure
2023-02-14 1:59 ` [PATCH v6 5/7] powerpc: mm: Add common pud_pfn stub for " Rohan McLure
2023-02-14 1:59 ` [PATCH v6 6/7] powerpc: mm: Add p{te,md,ud}_user_accessible_page helpers Rohan McLure
2023-02-14 1:59 ` [PATCH v6 7/7] powerpc: mm: Support page table check Rohan McLure
2023-02-14 6:14 ` Christophe Leroy