linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH 1/3] powerpc: mm: move pud_pfn stub to common pgtable header
@ 2022-09-12  1:47 Rohan McLure
  2022-09-12  1:47 ` [PATCH 2/3] powerpc: mm: add p{te,md,ud}_user_accessible_page helpers Rohan McLure
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Rohan McLure @ 2022-09-12  1:47 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Rohan McLure

The pud_pfn inline call is only referenced on 64-bit Book3S systems,
but its invocations are gated by pud_devmap() checks, rendering the
body of this function dead code.

As such, this function is readily exportable to all platforms, for
cases where kernel features depend on it at least being defined.
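
The gating referred to above can be modelled in a few lines of user-space C. All types and names below are simplified stand-ins for illustration, not the kernel's real definitions: the point is only that pud_pfn() sits behind a pud_devmap() check, so its body is unreachable when devmap is unsupported.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for the kernel's pud_t. */
typedef unsigned long pud_t;

/* No devmap support on this platform, so the gate is always closed. */
static bool pud_devmap(pud_t pud)
{
    (void)pud;
    return false;
}

/* The "dead" body: never reached while the gate above is closed. */
static int pud_pfn(pud_t pud)
{
    (void)pud;
    return -1;
}

/* Model of a generic-mm caller: pud_pfn() only runs behind pud_devmap(). */
static int follow_devmap_pud(pud_t pud)
{
    if (pud_devmap(pud))
        return pud_pfn(pud);
    return 0;
}
```

Because every caller is shaped like follow_devmap_pud(), the stub only needs to exist so the code compiles.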

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/pgtable.h | 10 ----------
 arch/powerpc/include/asm/pgtable.h           | 12 ++++++++++++
 2 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 392ff48f77df..8874f2a3661d 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1411,16 +1411,6 @@ static inline int pgd_devmap(pgd_t pgd)
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-static inline int pud_pfn(pud_t pud)
-{
-	/*
-	 * Currently all calls to pud_pfn() are gated around a pud_devmap()
-	 * check so this should never be used. If it grows another user we
-	 * want to know about it.
-	 */
-	BUILD_BUG();
-	return 0;
-}
 #define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
 pte_t ptep_modify_prot_start(struct vm_area_struct *, unsigned long, pte_t *);
 void ptep_modify_prot_commit(struct vm_area_struct *, unsigned long,
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 33f4bf8d22b0..522145b16a07 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -158,6 +158,18 @@ struct seq_file;
 void arch_report_meminfo(struct seq_file *m);
 #endif /* CONFIG_PPC64 */
 
+#define pud_pfn pud_pfn
+static inline int pud_pfn(pud_t pud)
+{
+	/*
+	 * Currently all calls to pud_pfn() are gated around a pud_devmap()
+	 * check so this should never be used. If it grows another user we
+	 * want to know about it.
+	 */
+	BUILD_BUG();
+	return 0;
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_PGTABLE_H */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 2/3] powerpc: mm: add p{te,md,ud}_user_accessible_page helpers
  2022-09-12  1:47 [PATCH 1/3] powerpc: mm: move pud_pfn stub to common pgtable header Rohan McLure
@ 2022-09-12  1:47 ` Rohan McLure
  2022-09-12  5:40   ` Christophe Leroy
  2022-09-13 14:12   ` Christophe Leroy
  2022-09-12  1:47 ` [PATCH 3/3] powerpc: mm: support page table check Rohan McLure
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 11+ messages in thread
From: Rohan McLure @ 2022-09-12  1:47 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Rohan McLure

Add the following helpers for detecting whether a page table entry
is a leaf and is accessible to user space.

 * pte_user_accessible_page
 * pmd_user_accessible_page
 * pud_user_accessible_page

The heavy lifting is done by pte_user(), which checks user accessibility
at a per-MMU level and was provided prior to this commit.

On 32-bit systems, provide stub implementations for these methods, which
BUG(), as debug features such as page table check will emit calls to
p{md,ud}_user_accessible_page that must never be reached.
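
The shape of the pte-level helper can be sketched in plain C. The flag values and the pte_val() accessor below are simplified stand-ins, not the kernel's real definitions; only the present-AND-user logic mirrors the patch.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for the kernel's PTE flags and type. */
#define _PAGE_PRESENT 0x1UL
#define _PAGE_USER    0x2UL

typedef unsigned long pte_t;

#define pte_val(pte) (pte)

/* Per-MMU user-accessibility check, as provided prior to this commit. */
static bool pte_user(pte_t pte)
{
    return pte_val(pte) & _PAGE_USER;
}

/* Mirrors the helper added here: user-accessible means present AND user. */
static bool pte_user_accessible_page(pte_t pte)
{
    return (pte_val(pte) & _PAGE_PRESENT) && pte_user(pte);
}
```

The pmd/pud variants add a leaf check in front, but follow the same present-and-user pattern.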

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
 arch/powerpc/include/asm/pgtable.h | 35 ++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 522145b16a07..8c1f5feb9360 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -170,6 +170,41 @@ static inline int pud_pfn(pud_t pud)
 	return 0;
 }
 
+static inline bool pte_user_accessible_page(pte_t pte)
+{
+	return (pte_val(pte) & _PAGE_PRESENT) && pte_user(pte);
+}
+
+#ifdef CONFIG_PPC64
+
+static inline bool pmd_user_accessible_page(pmd_t pmd)
+{
+	return pmd_is_leaf(pmd) && pmd_present(pmd)
+				&& pte_user(pmd_pte(pmd));
+}
+
+static inline bool pud_user_accessible_page(pud_t pud)
+{
+	return pud_is_leaf(pud) && pud_present(pud)
+				&& pte_user(pud_pte(pud));
+}
+
+#else
+
+static inline bool pmd_user_accessible_page(pmd_t pmd)
+{
+	BUG();
+	return false;
+}
+
+static inline bool pud_user_accessible_page(pud_t pud)
+{
+	BUG();
+	return false;
+}
+
+#endif /* CONFIG_PPC64 */
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_PGTABLE_H */
-- 
2.34.1



* [PATCH 3/3] powerpc: mm: support page table check
  2022-09-12  1:47 [PATCH 1/3] powerpc: mm: move pud_pfn stub to common pgtable header Rohan McLure
  2022-09-12  1:47 ` [PATCH 2/3] powerpc: mm: add p{te,md,ud}_user_accessible_page helpers Rohan McLure
@ 2022-09-12  1:47 ` Rohan McLure
  2022-09-12  6:11   ` Christophe Leroy
  2022-09-12  1:51 ` [PATCH 1/3] powerpc: mm: move pud_pfn stub to common pgtable header Rohan McLure
  2022-09-12  5:30 ` Christophe Leroy
  3 siblings, 1 reply; 11+ messages in thread
From: Rohan McLure @ 2022-09-12  1:47 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Rohan McLure

Instrument the creation and clearing of page table mappings by invoking
page_table_check_pte_set and page_table_check_pte_clear respectively.
These calls serve as a sanity check against illegal mappings.

Enable ARCH_SUPPORTS_PAGE_TABLE_CHECK for all ppc64, and 32-bit
platforms implementing Book3S.

Change pud_pfn to be a runtime bug rather than a build bug, as it is
now referenced by page_table_check_pud_{clear,set}, which are never
actually called.
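
The instrumentation pattern is the same at each level: capture the old entry, hand it to the checker, then return it. A user-space model of the clear path (types and signatures simplified; the real hooks also take mm and addr):

```c
#include <assert.h>

/* Simplified stand-in for the kernel's pte_t. */
typedef unsigned long pte_t;

static int check_calls;

/* Stand-in for page_table_check_pte_clear(): record that the old entry
 * was handed to the checker before being discarded. */
static void page_table_check_pte_clear(pte_t old)
{
    (void)old;
    check_calls++;
}

/* Instrumented clear, mirroring the pattern in this patch: capture the
 * old entry, report it to the checker, then return it. */
static pte_t ptep_get_and_clear(pte_t *ptep)
{
    pte_t old = *ptep;

    *ptep = 0;
    page_table_check_pte_clear(old);

    return old;
}

static pte_t demo_entry = 0x5UL;
```

The set path is symmetric: page_table_check_pte_set() runs just before the new entry is written.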

See also:

riscv support in commit 3fee229a8eb9 ("riscv/mm: enable ARCH_SUPPORTS_PAGE_TABLE_CHECK")
arm64 in commit 42b2547137f5 ("arm64/mm: enable ARCH_SUPPORTS_PAGE_TABLE_CHECK")
x86_64 in commit d283d422c6c4 ("x86: mm: add x86_64 support for page table check")

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
 arch/powerpc/Kconfig                         | 1 +
 arch/powerpc/include/asm/book3s/32/pgtable.h | 7 ++++++-
 arch/powerpc/include/asm/book3s/64/pgtable.h | 9 ++++++++-
 arch/powerpc/include/asm/nohash/32/pgtable.h | 5 ++++-
 arch/powerpc/include/asm/nohash/64/pgtable.h | 2 ++
 arch/powerpc/include/asm/nohash/pgtable.h    | 1 +
 arch/powerpc/include/asm/pgtable.h           | 4 ++++
 7 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 4c466acdc70d..6c213ac46a92 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -149,6 +149,7 @@ config PPC
 	select ARCH_STACKWALK
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_DEBUG_PAGEALLOC	if PPC_BOOK3S || PPC_8xx || 40x
+	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF		if PPC64
 	select ARCH_USE_MEMTEST
diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index 40041ac713d9..3d05c8fe4604 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -53,6 +53,8 @@
 
 #ifndef __ASSEMBLY__
 
+#include <linux/page_table_check.h>
+
 static inline bool pte_user(pte_t pte)
 {
 	return pte_val(pte) & _PAGE_USER;
@@ -353,7 +355,9 @@ static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
 				       pte_t *ptep)
 {
-	return __pte(pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, 0, 0));
+	unsigned long old = pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, 0, 0);
+	page_table_check_pte_clear(mm, addr, __pte(old));
+	return __pte(old);
 }
 
 #define __HAVE_ARCH_PTEP_SET_WRPROTECT
@@ -541,6 +545,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 				pte_t *ptep, pte_t pte, int percpu)
 {
+	page_table_check_pte_set(mm, addr, ptep, pte);
 #if defined(CONFIG_SMP) && !defined(CONFIG_PTE_64BIT)
 	/* First case is 32-bit Hash MMU in SMP mode with 32-bit PTEs. We use the
 	 * helper pte_update() which does an atomic update. We need to do that
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 8874f2a3661d..cbb7bd99c897 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -179,6 +179,8 @@
 #define PAGE_AGP		(PAGE_KERNEL_NC)
 
 #ifndef __ASSEMBLY__
+#include <linux/page_table_check.h>
+
 /*
  * page table defines
  */
@@ -483,6 +485,7 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long addr, pte_t *ptep)
 {
 	unsigned long old = pte_update(mm, addr, ptep, ~0UL, 0, 0);
+	page_table_check_pte_clear(mm, addr, __pte(old));
 	return __pte(old);
 }
 
@@ -491,12 +494,15 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 					    unsigned long addr,
 					    pte_t *ptep, int full)
 {
+	pte_t old_pte;
 	if (full && radix_enabled()) {
 		/*
 		 * We know that this is a full mm pte clear and
 		 * hence can be sure there is no parallel set_pte.
 		 */
-		return radix__ptep_get_and_clear_full(mm, addr, ptep, full);
+		old_pte = radix__ptep_get_and_clear_full(mm, addr, ptep, full);
+		page_table_check_pte_clear(mm, addr, old_pte);
+		return old_pte;
 	}
 	return ptep_get_and_clear(mm, addr, ptep);
 }
@@ -872,6 +878,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 	 */
 	pte = __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_PTE));
 
+	page_table_check_pte_set(mm, addr, ptep, pte);
 	if (radix_enabled())
 		return radix__set_pte_at(mm, addr, ptep, pte, percpu);
 	return hash__set_pte_at(mm, addr, ptep, pte, percpu);
diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h
index 9091e4904a6b..a3416cfb75d7 100644
--- a/arch/powerpc/include/asm/nohash/32/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/32/pgtable.h
@@ -166,6 +166,7 @@ void unmap_kernel_page(unsigned long va);
 #define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_SPECIAL)
 
 #ifndef __ASSEMBLY__
+#include <linux/page_table_check.h>
 
 #define pte_clear(mm, addr, ptep) \
 	do { pte_update(mm, addr, ptep, ~0, 0, 0); } while (0)
@@ -305,7 +306,9 @@ static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
 				       pte_t *ptep)
 {
-	return __pte(pte_update(mm, addr, ptep, ~0, 0, 0));
+	unsigned long old = pte_update(mm, addr, ptep, ~0, 0, 0);
+	page_table_check_pte_clear(mm, addr, __pte(old));
+	return __pte(old);
 }
 
 #define __HAVE_ARCH_PTEP_SET_WRPROTECT
diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
index 599921cc257e..5f2e10842a0d 100644
--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
@@ -83,6 +83,7 @@
 #define H_PAGE_4K_PFN 0
 
 #ifndef __ASSEMBLY__
+#include <linux/page_table_check.h>
 /* pte_clear moved to later in this file */
 
 static inline pte_t pte_mkwrite(pte_t pte)
@@ -244,6 +245,7 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long addr, pte_t *ptep)
 {
 	unsigned long old = pte_update(mm, addr, ptep, ~0UL, 0, 0);
+	page_table_check_pte_clear(mm, addr, __pte(old));
 	return __pte(old);
 }
 
diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
index b499da6c1a99..62b221b7cccf 100644
--- a/arch/powerpc/include/asm/nohash/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/pgtable.h
@@ -185,6 +185,7 @@ extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
 static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 				pte_t *ptep, pte_t pte, int percpu)
 {
+	page_table_check_pte_set(mm, addr, ptep, pte);
 	/* Second case is 32-bit with 64-bit PTE.  In this case, we
 	 * can just store as long as we do the two halves in the right order
 	 * with a barrier in between.
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 8c1f5feb9360..5e1032d12499 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -166,7 +166,11 @@ static inline int pud_pfn(pud_t pud)
 	 * check so this should never be used. If it grows another user we
 	 * want to know about it.
 	 */
+#ifndef CONFIG_PAGE_TABLE_CHECK
 	BUILD_BUG();
+#else
+	BUG();
+#endif
 	return 0;
 }
 
-- 
2.34.1



* Re: [PATCH 1/3] powerpc: mm: move pud_pfn stub to common pgtable header
  2022-09-12  1:47 [PATCH 1/3] powerpc: mm: move pud_pfn stub to common pgtable header Rohan McLure
  2022-09-12  1:47 ` [PATCH 2/3] powerpc: mm: add p{te,md,ud}_user_accessible_page helpers Rohan McLure
  2022-09-12  1:47 ` [PATCH 3/3] powerpc: mm: support page table check Rohan McLure
@ 2022-09-12  1:51 ` Rohan McLure
  2022-09-12  5:30 ` Christophe Leroy
  3 siblings, 0 replies; 11+ messages in thread
From: Rohan McLure @ 2022-09-12  1:51 UTC (permalink / raw)
  To: linuxppc-dev

This patch and its successor would be avoidable if architectures could specify
whether they wish to use page_table_check_p{ud,md}_{clear,set}.

> On 12 Sep 2022, at 11:47 am, Rohan McLure <rmclure@linux.ibm.com> wrote:
> 
> The pud_pfn inline call is only referenced on 64-bit Book3S systems,
> but its invocations are gated by pud_devmap() checks, rendering the
> body of this function dead code.
> 
> As such, this function is readily exportable to all platforms, for
> cases where kernel features depend on it at least being defined.
> 
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
> arch/powerpc/include/asm/book3s/64/pgtable.h | 10 ----------
> arch/powerpc/include/asm/pgtable.h           | 12 ++++++++++++
> 2 files changed, 12 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index 392ff48f77df..8874f2a3661d 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -1411,16 +1411,6 @@ static inline int pgd_devmap(pgd_t pgd)
> }
> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> 
> -static inline int pud_pfn(pud_t pud)
> -{
> -	/*
> -	 * Currently all calls to pud_pfn() are gated around a pud_devmap()
> -	 * check so this should never be used. If it grows another user we
> -	 * want to know about it.
> -	 */
> -	BUILD_BUG();
> -	return 0;
> -}
> #define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
> pte_t ptep_modify_prot_start(struct vm_area_struct *, unsigned long, pte_t *);
> void ptep_modify_prot_commit(struct vm_area_struct *, unsigned long,
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 33f4bf8d22b0..522145b16a07 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -158,6 +158,18 @@ struct seq_file;
> void arch_report_meminfo(struct seq_file *m);
> #endif /* CONFIG_PPC64 */
> 
> +#define pud_pfn pud_pfn
> +static inline int pud_pfn(pud_t pud)
> +{
> +	/*
> +	 * Currently all calls to pud_pfn() are gated around a pud_devmap()
> +	 * check so this should never be used. If it grows another user we
> +	 * want to know about it.
> +	 */
> +	BUILD_BUG();
> +	return 0;
> +}
> +
> #endif /* __ASSEMBLY__ */
> 
> #endif /* _ASM_POWERPC_PGTABLE_H */
> -- 
> 2.34.1
> 



* Re: [PATCH 1/3] powerpc: mm: move pud_pfn stub to common pgtable header
  2022-09-12  1:47 [PATCH 1/3] powerpc: mm: move pud_pfn stub to common pgtable header Rohan McLure
                   ` (2 preceding siblings ...)
  2022-09-12  1:51 ` [PATCH 1/3] powerpc: mm: move pud_pfn stub to common pgtable header Rohan McLure
@ 2022-09-12  5:30 ` Christophe Leroy
  3 siblings, 0 replies; 11+ messages in thread
From: Christophe Leroy @ 2022-09-12  5:30 UTC (permalink / raw)
  To: Rohan McLure, linuxppc-dev@lists.ozlabs.org



On 12/09/2022 at 03:47, Rohan McLure wrote:
> The pud_pfn inline call is only referenced on 64-bit Book3S systems,
> but its invocations are gated by pud_devmap() checks, rendering the
> body of this function dead code.
> 
> As such, this function is readily exportable to all platforms, for
> cases where kernel features depend on it at least being defined.

I don't understand. If this function is dead code, why move it to a
larger scope? Shouldn't you remove it instead? Or are you planning to
re-use it on other platforms? In that case, please say so.

> 
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
>   arch/powerpc/include/asm/book3s/64/pgtable.h | 10 ----------
>   arch/powerpc/include/asm/pgtable.h           | 12 ++++++++++++
>   2 files changed, 12 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index 392ff48f77df..8874f2a3661d 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -1411,16 +1411,6 @@ static inline int pgd_devmap(pgd_t pgd)
>   }
>   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>   
> -static inline int pud_pfn(pud_t pud)
> -{
> -	/*
> -	 * Currently all calls to pud_pfn() are gated around a pud_devmap()
> -	 * check so this should never be used. If it grows another user we
> -	 * want to know about it.
> -	 */
> -	BUILD_BUG();
> -	return 0;
> -}
>   #define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
>   pte_t ptep_modify_prot_start(struct vm_area_struct *, unsigned long, pte_t *);
>   void ptep_modify_prot_commit(struct vm_area_struct *, unsigned long,
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 33f4bf8d22b0..522145b16a07 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -158,6 +158,18 @@ struct seq_file;
>   void arch_report_meminfo(struct seq_file *m);
>   #endif /* CONFIG_PPC64 */
>   
> +#define pud_pfn pud_pfn

Why do you need to add that define?
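
For context, the define-to-itself idiom is the usual way for an arch header to tell a generic header that it already provides a function, so the generic fallback can be compiled out with #ifndef. Whether generic code actually tests `#ifndef pud_pfn` here is a separate question; the mechanism itself can be sketched as a simplified user-space model:

```c
#include <assert.h>

/* Simplified stand-in for the kernel's pud_t. */
typedef unsigned long pud_t;

/*
 * "Arch header": provide pud_pfn and define the name to itself so that
 * a generic header can detect the override with #ifndef.
 */
#define pud_pfn pud_pfn
static int pud_pfn(pud_t pud)
{
    (void)pud;
    return -1;                      /* arch-specific version */
}

/*
 * "Generic header": only emit a fallback when no arch override exists.
 * Because pud_pfn is defined above, this block is skipped.
 */
#ifndef pud_pfn
static int pud_pfn(pud_t pud)
{
    return (int)(pud >> 12);        /* hypothetical generic fallback */
}
#endif
```

The macro expands to the function's own name, so call sites are unaffected; only the preprocessor test sees it.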

> +static inline int pud_pfn(pud_t pud)
> +{
> +	/*
> +	 * Currently all calls to pud_pfn() are gated around a pud_devmap()
> +	 * check so this should never be used. If it grows another user we
> +	 * want to know about it.
> +	 */
> +	BUILD_BUG();
> +	return 0;
> +}
> +
>   #endif /* __ASSEMBLY__ */
>   
>   #endif /* _ASM_POWERPC_PGTABLE_H */


* Re: [PATCH 2/3] powerpc: mm: add p{te,md,ud}_user_accessible_page helpers
  2022-09-12  1:47 ` [PATCH 2/3] powerpc: mm: add p{te,md,ud}_user_accessible_page helpers Rohan McLure
@ 2022-09-12  5:40   ` Christophe Leroy
  2022-09-12  8:38     ` Rohan McLure
  2022-09-13 14:12   ` Christophe Leroy
  1 sibling, 1 reply; 11+ messages in thread
From: Christophe Leroy @ 2022-09-12  5:40 UTC (permalink / raw)
  To: Rohan McLure, linuxppc-dev@lists.ozlabs.org



On 12/09/2022 at 03:47, Rohan McLure wrote:
> Add the following helpers for detecting whether a page table entry
> is a leaf and is accessible to user space.
> 
>   * pte_user_accessible_page
>   * pmd_user_accessible_page
>   * pud_user_accessible_page
> 
> The heavy lifting is done by pte_user(), which checks user accessibility
> at a per-MMU level and was provided prior to this commit.

Not in this series it seems, so I guess that commit is already in the
tree. Can you reference it?

> 
> On 32-bit systems, provide stub implementations for these methods, which
> BUG(), as debug features such as page table check will emit calls to
> p{md,ud}_user_accessible_page that must never be reached.

Please please please no new BUG or BUG_ON, please see the following
discussion and especially the position of Linus Torvalds:

https://lore.kernel.org/all/CAHk-=wg40EAZofO16Eviaj7mfqDhZ2gVEbvfsMf6gYzspRjYvw@mail.gmail.com/

> 
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
>   arch/powerpc/include/asm/pgtable.h | 35 ++++++++++++++++++++++++++++
>   1 file changed, 35 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 522145b16a07..8c1f5feb9360 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -170,6 +170,41 @@ static inline int pud_pfn(pud_t pud)
>   	return 0;
>   }
>   
> +static inline bool pte_user_accessible_page(pte_t pte)
> +{
> +	return (pte_val(pte) & _PAGE_PRESENT) && pte_user(pte);
> +}
> +
> +#ifdef CONFIG_PPC64
> +
> +static inline bool pmd_user_accessible_page(pmd_t pmd)
> +{
> +	return pmd_is_leaf(pmd) && pmd_present(pmd)
> +				&& pte_user(pmd_pte(pmd));

The && goes on previous line.

> +}
> +
> +static inline bool pud_user_accessible_page(pud_t pud)
> +{
> +	return pud_is_leaf(pud) && pud_present(pud)
> +				&& pte_user(pud_pte(pud));

Same

> +}
> +
> +#else
> +
> +static inline bool pmd_user_accessible_page(pmd_t pmd)
> +{
> +	BUG();

Can you use BUILD_BUG() instead?

> +	return false;
> +}
> +
> +static inline bool pud_user_accessible_page(pud_t pud)
> +{
> +	BUG();

Same.

> +	return false;
> +}
> +
> +#endif /* CONFIG_PPC64 */
> +
>   #endif /* __ASSEMBLY__ */
>   
>   #endif /* _ASM_POWERPC_PGTABLE_H */

By the way, there is a great tool that can help you:

$ ./arch/powerpc/tools/checkpatch.sh --strict -g HEAD~
CHECK:LOGICAL_CONTINUATIONS: Logical continuations should be on the 
previous line
#43: FILE: arch/powerpc/include/asm/pgtable.h:183:
+	return pmd_is_leaf(pmd) && pmd_present(pmd)
+				&& pte_user(pmd_pte(pmd));

CHECK:LOGICAL_CONTINUATIONS: Logical continuations should be on the 
previous line
#49: FILE: arch/powerpc/include/asm/pgtable.h:189:
+	return pud_is_leaf(pud) && pud_present(pud)
+				&& pte_user(pud_pte(pud));

WARNING:AVOID_BUG: Avoid crashing the kernel - try using WARN_ON & 
recovery code rather than BUG() or BUG_ON()
#56: FILE: arch/powerpc/include/asm/pgtable.h:196:
+	BUG();

WARNING:AVOID_BUG: Avoid crashing the kernel - try using WARN_ON & 
recovery code rather than BUG() or BUG_ON()
#62: FILE: arch/powerpc/include/asm/pgtable.h:202:
+	BUG();


* Re: [PATCH 3/3] powerpc: mm: support page table check
  2022-09-12  1:47 ` [PATCH 3/3] powerpc: mm: support page table check Rohan McLure
@ 2022-09-12  6:11   ` Christophe Leroy
  2022-09-13  1:21     ` Rohan McLure
  0 siblings, 1 reply; 11+ messages in thread
From: Christophe Leroy @ 2022-09-12  6:11 UTC (permalink / raw)
  To: Rohan McLure, linuxppc-dev@lists.ozlabs.org



On 12/09/2022 at 03:47, Rohan McLure wrote:
> On creation and clearing of a page table mapping, instrument such calls
> by invoking page_table_check_pte_set and page_table_check_pte_clear
> respectively. These calls serve as a sanity check against illegal
> mappings.
> 
> Enable ARCH_SUPPORTS_PAGE_TABLE_CHECK for all ppc64, and 32-bit
> platforms implementing Book3S.

Why only Book3S on 32 bits?

> 
> Change pud_pfn to be a runtime bug rather than a build bug as it is
> consumed by page_table_check_pud_{clear,set} which are not called.

A runtime BUG() is to be banned.

> 
> See also:
> 
> riscv support in commit 3fee229a8eb9 ("riscv/mm: enable ARCH_SUPPORTS_PAGE_TABLE_CHECK")

Long line.

Commit references must not be split across lines in Fixes: tags, but in
the rest of the message they should be wrapped.

> arm64 in commit 42b2547137f5 ("arm64/mm: enable ARCH_SUPPORTS_PAGE_TABLE_CHECK")
> x86_64 in commit d283d422c6c4 ("x86: mm: add x86_64 support for page table check")
> 
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
>   arch/powerpc/Kconfig                         | 1 +
>   arch/powerpc/include/asm/book3s/32/pgtable.h | 7 ++++++-
>   arch/powerpc/include/asm/book3s/64/pgtable.h | 9 ++++++++-
>   arch/powerpc/include/asm/nohash/32/pgtable.h | 5 ++++-
>   arch/powerpc/include/asm/nohash/64/pgtable.h | 2 ++
>   arch/powerpc/include/asm/nohash/pgtable.h    | 1 +
>   arch/powerpc/include/asm/pgtable.h           | 4 ++++
>   7 files changed, 26 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 4c466acdc70d..6c213ac46a92 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -149,6 +149,7 @@ config PPC
>   	select ARCH_STACKWALK
>   	select ARCH_SUPPORTS_ATOMIC_RMW
>   	select ARCH_SUPPORTS_DEBUG_PAGEALLOC	if PPC_BOOK3S || PPC_8xx || 40x
> +	select ARCH_SUPPORTS_PAGE_TABLE_CHECK

In the commit message you said only book3s on PPC32.

>   	select ARCH_USE_BUILTIN_BSWAP
>   	select ARCH_USE_CMPXCHG_LOCKREF		if PPC64
>   	select ARCH_USE_MEMTEST
> diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
> index 40041ac713d9..3d05c8fe4604 100644
> --- a/arch/powerpc/include/asm/book3s/32/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
> @@ -53,6 +53,8 @@
>   
>   #ifndef __ASSEMBLY__
>   
> +#include <linux/page_table_check.h>
> +
>   static inline bool pte_user(pte_t pte)
>   {
>   	return pte_val(pte) & _PAGE_USER;
> @@ -353,7 +355,9 @@ static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
>   static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
>   				       pte_t *ptep)
>   {
> -	return __pte(pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, 0, 0));
> +	unsigned long old = pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, 0, 0);

Add a blank line here.


pte_update()'s return type is pte_basic_t, which can be wider than a long
on PPC32 (see asm/page_32.h):

	#ifdef CONFIG_PTE_64BIT
	typedef unsigned long long pte_basic_t;
	#else
	typedef unsigned long pte_basic_t;
	#endif

But why not just use pte_t instead, as you never use 'old' directly?
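
The truncation risk and the suggested shape can be modelled in plain C with simplified stand-in types (these are not the kernel's real definitions): pte_basic_t is made wider than unsigned long, and the old value is kept as pte_t from the start so no intermediate narrowing can occur.

```c
#include <assert.h>

/*
 * Simplified model: on PPC32 with CONFIG_PTE_64BIT, pte_basic_t is
 * wider than unsigned long, so storing the old value in an
 * 'unsigned long' could truncate it. Going through pte_t avoids that.
 */
typedef unsigned long long pte_basic_t;
typedef struct { pte_basic_t pte; } pte_t;

#define __pte(x)   ((pte_t){ (x) })
#define pte_val(p) ((p).pte)

static pte_basic_t pte_update(pte_t *ptep, pte_basic_t clr)
{
    pte_basic_t old = pte_val(*ptep);

    ptep->pte = old & ~clr;
    return old;
}

static void page_table_check_pte_clear(pte_t old)
{
    (void)old;
}

/* Suggested shape: keep the old value as pte_t from the start. */
static pte_t ptep_get_and_clear(pte_t *ptep)
{
    pte_t old_pte = __pte(pte_update(ptep, ~0ULL));

    page_table_check_pte_clear(old_pte);

    return old_pte;
}

/* An entry whose value does not fit in 32 bits. */
static pte_t demo_entry = { 0x100000001ULL };
```

With the value boxed in pte_t throughout, the 33rd bit of the demo entry survives the round trip.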

> +	page_table_check_pte_clear(mm, addr, __pte(old));
> +	return __pte(old);
>   }
>   
>   #define __HAVE_ARCH_PTEP_SET_WRPROTECT
> @@ -541,6 +545,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
>   static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
>   				pte_t *ptep, pte_t pte, int percpu)
>   {
> +	page_table_check_pte_set(mm, addr, ptep, pte);
>   #if defined(CONFIG_SMP) && !defined(CONFIG_PTE_64BIT)
>   	/* First case is 32-bit Hash MMU in SMP mode with 32-bit PTEs. We use the
>   	 * helper pte_update() which does an atomic update. We need to do that
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index 8874f2a3661d..cbb7bd99c897 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -179,6 +179,8 @@
>   #define PAGE_AGP		(PAGE_KERNEL_NC)
>   
>   #ifndef __ASSEMBLY__
> +#include <linux/page_table_check.h>
> +
>   /*
>    * page table defines
>    */
> @@ -483,6 +485,7 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>   				       unsigned long addr, pte_t *ptep)
>   {
>   	unsigned long old = pte_update(mm, addr, ptep, ~0UL, 0, 0);
> +	page_table_check_pte_clear(mm, addr, __pte(old));
>   	return __pte(old);

Same, would be better to rewrite this function and use a 'pte_t old_pte'.

>   }
>   
> @@ -491,12 +494,15 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
>   					    unsigned long addr,
>   					    pte_t *ptep, int full)
>   {
> +	pte_t old_pte;

You don't need that variable at full function scope; put it inside
the { } below.

>   	if (full && radix_enabled()) {
>   		/*
>   		 * We know that this is a full mm pte clear and
>   		 * hence can be sure there is no parallel set_pte.
>   		 */
> -		return radix__ptep_get_and_clear_full(mm, addr, ptep, full);
> +		old_pte = radix__ptep_get_and_clear_full(mm, addr, ptep, full);
> +		page_table_check_pte_clear(mm, addr, old_pte);

A blank line here would increase readability.

> +		return old_pte;
>   	}
>   	return ptep_get_and_clear(mm, addr, ptep);
>   }
> @@ -872,6 +878,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
>   	 */
>   	pte = __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_PTE));
>   
> +	page_table_check_pte_set(mm, addr, ptep, pte);

A blank line here would increase readability as well.

>   	if (radix_enabled())
>   		return radix__set_pte_at(mm, addr, ptep, pte, percpu);
>   	return hash__set_pte_at(mm, addr, ptep, pte, percpu);
> diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h
> index 9091e4904a6b..a3416cfb75d7 100644
> --- a/arch/powerpc/include/asm/nohash/32/pgtable.h
> +++ b/arch/powerpc/include/asm/nohash/32/pgtable.h
> @@ -166,6 +166,7 @@ void unmap_kernel_page(unsigned long va);
>   #define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_SPECIAL)
>   
>   #ifndef __ASSEMBLY__
> +#include <linux/page_table_check.h>
>   
>   #define pte_clear(mm, addr, ptep) \
>   	do { pte_update(mm, addr, ptep, ~0, 0, 0); } while (0)
> @@ -305,7 +306,9 @@ static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
>   static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
>   				       pte_t *ptep)
>   {
> -	return __pte(pte_update(mm, addr, ptep, ~0, 0, 0));
> +	unsigned long old = pte_update(mm, addr, ptep, ~0, 0, 0);

Blank line required here.

> +	page_table_check_pte_clear(mm, addr, __pte(old));

A blank line here would increase readability.

> +	return __pte(old);
>   }
>   
>   #define __HAVE_ARCH_PTEP_SET_WRPROTECT
> diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
> index 599921cc257e..5f2e10842a0d 100644
> --- a/arch/powerpc/include/asm/nohash/64/pgtable.h
> +++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
> @@ -83,6 +83,7 @@
>   #define H_PAGE_4K_PFN 0
>   
>   #ifndef __ASSEMBLY__
> +#include <linux/page_table_check.h>
>   /* pte_clear moved to later in this file */
>   
>   static inline pte_t pte_mkwrite(pte_t pte)
> @@ -244,6 +245,7 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>   				       unsigned long addr, pte_t *ptep)
>   {
>   	unsigned long old = pte_update(mm, addr, ptep, ~0UL, 0, 0);

Blank line.

> +	page_table_check_pte_clear(mm, addr, __pte(old));
>   	return __pte(old);
>   }
>   
> diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
> index b499da6c1a99..62b221b7cccf 100644
> --- a/arch/powerpc/include/asm/nohash/pgtable.h
> +++ b/arch/powerpc/include/asm/nohash/pgtable.h
> @@ -185,6 +185,7 @@ extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
>   static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
>   				pte_t *ptep, pte_t pte, int percpu)
>   {
> +	page_table_check_pte_set(mm, addr, ptep, pte);

Blank line.

>   	/* Second case is 32-bit with 64-bit PTE.  In this case, we
>   	 * can just store as long as we do the two halves in the right order
>   	 * with a barrier in between.
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 8c1f5feb9360..5e1032d12499 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -166,7 +166,11 @@ static inline int pud_pfn(pud_t pud)
>   	 * check so this should never be used. If it grows another user we
>   	 * want to know about it.
>   	 */
> +#ifndef CONFIG_PAGE_TABLE_CHECK
>   	BUILD_BUG();
> +#else
> +	BUG();
> +#endif

Awful.

>   	return 0;
>   }
>   


* Re: [PATCH 2/3] powerpc: mm: add p{te,md,ud}_user_accessible_page helpers
  2022-09-12  5:40   ` Christophe Leroy
@ 2022-09-12  8:38     ` Rohan McLure
  0 siblings, 0 replies; 11+ messages in thread
From: Rohan McLure @ 2022-09-12  8:38 UTC (permalink / raw)
  To: Christophe Leroy; +Cc: linuxppc-dev@lists.ozlabs.org

>> +static inline bool pmd_user_accessible_page(pmd_t pmd)
>> +{
>> +	return pmd_is_leaf(pmd) && pmd_present(pmd)
>> +				&& pte_user(pmd_pte(pmd));
> 
> The && goes on previous line.

> By the way, there is a great tool that can help you :
> 
> $ ./arch/powerpc/tools/checkpatch.sh --strict -g HEAD~

Thank you. Yes I should be using --strict with checkpatch.

> 
>> +}
>> +
>> +#else
>> +
>> +static inline bool pmd_user_accessible_page(pmd_t pmd)
>> +{
>> +	BUG();
> 
> Can you use BUILD_BUG() instead ?

As it stands, p{m,u}d_user_accessible_page is invoked by
__page_table_check_p{m,u}d_{set,clear}, so unfortunately these
functions have to compile. Their bodies could instead WARN() and
return false on 32-bit.
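
For illustration, a minimal userspace sketch of what those relaxed stubs might look like. The pmd_t/pud_t types and the warning mechanism are mocked here; the real stubs would use the kernel's types and something like WARN_ON_ONCE():

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Userspace sketch (not kernel code) of the relaxed 32-bit stubs:
 * __page_table_check_p{m,u}d_{set,clear} still emit calls to these
 * helpers, so the bodies must compile; instead of BUG(), warn once
 * and report the page as not user-accessible. */
typedef struct { unsigned long val; } pmd_t;
typedef struct { unsigned long val; } pud_t;

static int ptc_warned;
/* Mock stand-in for the kernel's WARN_ON_ONCE() */
#define PTC_WARN() do { if (!ptc_warned++) fprintf(stderr, "page table check: unexpected call\n"); } while (0)

static inline bool pmd_user_accessible_page(pmd_t pmd)
{
	(void)pmd;
	PTC_WARN();
	return false;
}

static inline bool pud_user_accessible_page(pud_t pud)
{
	(void)pud;
	PTC_WARN();
	return false;
}
```

The point is only the shape: the helpers remain callable from the emitted page-table-check code, but they never report a 32-bit entry as user-accessible.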

* Re: [PATCH 3/3] powerpc: mm: support page table check
  2022-09-12  6:11   ` Christophe Leroy
@ 2022-09-13  1:21     ` Rohan McLure
  2022-09-13 14:36       ` Christophe Leroy
  0 siblings, 1 reply; 11+ messages in thread
From: Rohan McLure @ 2022-09-13  1:21 UTC (permalink / raw)
  To: Christophe Leroy; +Cc: linuxppc-dev@lists.ozlabs.org



> On 12 Sep 2022, at 4:11 pm, Christophe Leroy <christophe.leroy@csgroup.eu> wrote:
> 
> 
> 
> Le 12/09/2022 à 03:47, Rohan McLure a écrit :
>> On creation and clearing of a page table mapping, instrument such calls
>> by invoking page_table_check_pte_set and page_table_check_pte_clear
>> respectively. These calls serve as a sanity check against illegal
>> mappings.
>> 
>> Enable ARCH_SUPPORTS_PAGE_TABLE_CHECK for all ppc64, and 32-bit
>> platforms implementing Book3S.
> 
> Why only book3s on 32 bits ?

Sorry, I failed to update that commit message. This patch actually
supports page table checks on all platforms: I began writing this
series targeting just Book3S, and later extended it to cover all
platforms. The only barrier to doing so was the need for the pud_pfn
and page_table_check_pud_{clear,set} bloat.

>> +++ b/arch/powerpc/include/asm/pgtable.h
>> @@ -166,7 +166,11 @@ static inline int pud_pfn(pud_t pud)
>>  	 * check so this should never be used. If it grows another user we
>>  	 * want to know about it.
>>  	 */
>> +#ifndef CONFIG_PAGE_TABLE_CHECK
>>  	BUILD_BUG();
>> +#else
>> +	BUG();
>> +#endif
> 
> Awful.

Quite right. I suspect you can infer the intention here, which is to enforce
that this dead code must not be included anywhere in generic code, but rather
be gated by pud_devmap. I will relax this to a WARN().


* Re: [PATCH 2/3] powerpc: mm: add p{te,md,ud}_user_accessible_page helpers
  2022-09-12  1:47 ` [PATCH 2/3] powerpc: mm: add p{te,md,ud}_user_accessible_page helpers Rohan McLure
  2022-09-12  5:40   ` Christophe Leroy
@ 2022-09-13 14:12   ` Christophe Leroy
  1 sibling, 0 replies; 11+ messages in thread
From: Christophe Leroy @ 2022-09-13 14:12 UTC (permalink / raw)
  To: Rohan McLure, linuxppc-dev@lists.ozlabs.org



Le 12/09/2022 à 03:47, Rohan McLure a écrit :
> Add the following helpers for detecting whether a page table entry
> is a leaf and is accessible to user space.
> 
>   * pte_user_accessible_page
>   * pmd_user_accessible_page
>   * pud_user_accessible_page
> 
> The heavy lifting is done by pte_user, which checks user accessibility
> on a per-mmu level, provided prior to this commit.
> 
> On 32-bit systems, provide stub implementations for these methods, with
> BUG(), as debug features such as page table checks will emit functions
> that call p{md,ud}_user_accessible_page but must not be used.

Why "must not be used" ?



> 
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
>   arch/powerpc/include/asm/pgtable.h | 35 ++++++++++++++++++++++++++++
>   1 file changed, 35 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 522145b16a07..8c1f5feb9360 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -170,6 +170,41 @@ static inline int pud_pfn(pud_t pud)
>   	return 0;
>   }
>   
> +static inline bool pte_user_accessible_page(pte_t pte)
> +{
> +	return (pte_val(pte) & _PAGE_PRESENT) && pte_user(pte);

Should use pte_present() instead ?
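
To illustrate the suggestion, a minimal userspace mock of the helper using pte_present() instead of open-coding the _PAGE_PRESENT test. The bit values here are hypothetical, not the real powerpc definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace mock of the relevant PTE bits, for illustration only;
 * the real values live in the arch/powerpc headers and differ per MMU. */
typedef struct { unsigned long val; } pte_t;
#define _PAGE_PRESENT 0x1UL
#define _PAGE_USER    0x2UL

static inline unsigned long pte_val(pte_t pte) { return pte.val; }
static inline bool pte_present(pte_t pte) { return pte_val(pte) & _PAGE_PRESENT; }
static inline bool pte_user(pte_t pte)    { return pte_val(pte) & _PAGE_USER; }

/* The helper in the suggested shape: go through pte_present() rather
 * than testing _PAGE_PRESENT directly. */
static inline bool pte_user_accessible_page(pte_t pte)
{
	return pte_present(pte) && pte_user(pte);
}
```

Using the accessor keeps the helper correct even on MMUs where "present" is not a single-bit test.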

> +}
> +
> +#ifdef CONFIG_PPC64

I think the functions could be valid for PPC32 as well with the changes below:

> +
> +static inline bool pmd_user_accessible_page(pmd_t pmd)
> +{
> +	return pmd_is_leaf(pmd) && pmd_present(pmd)
> +				&& pte_user(pmd_pte(pmd));

Define and use pmd_user()

> +}
> +
> +static inline bool pud_user_accessible_page(pud_t pud)
> +{
> +	return pud_is_leaf(pud) && pud_present(pud)
> +				&& pte_user(pud_pte(pud));

Use pud_user().

For ppc64 you have to define it in 
arch/powerpc/include/asm/book3s/64/pgtable.h and 
arch/powerpc/include/asm/nohash/64/pgtable.h
For ppc32 it is already defined in include/asm-generic/pgtable-nopmd.h

And use pud_leaf() instead of pud_is_leaf(). For ppc32 it is already 
defined in include/asm-generic/pgtable-nopmd.h
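
Putting those suggestions together, a hypothetical userspace mock of how the helpers might read with per-level pmd_user()/pud_user() and pud_leaf(). The bit encodings are invented purely for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace mock: the suggested refactor, with per-level user/present/leaf
 * accessors so both helpers read identically. Bit values are illustrative. */
typedef struct { unsigned long val; } pmd_t;
typedef struct { unsigned long val; } pud_t;
#define _PAGE_PRESENT 0x1UL
#define _PAGE_USER    0x2UL
#define _PAGE_PTE     0x4UL   /* hypothetical "this entry is a leaf" marker */

static inline bool pmd_present(pmd_t pmd) { return pmd.val & _PAGE_PRESENT; }
static inline bool pmd_user(pmd_t pmd)    { return pmd.val & _PAGE_USER; }
static inline bool pmd_leaf(pmd_t pmd)    { return pmd.val & _PAGE_PTE; }
static inline bool pud_present(pud_t pud) { return pud.val & _PAGE_PRESENT; }
static inline bool pud_user(pud_t pud)    { return pud.val & _PAGE_USER; }
static inline bool pud_leaf(pud_t pud)    { return pud.val & _PAGE_PTE; }

static inline bool pmd_user_accessible_page(pmd_t pmd)
{
	return pmd_leaf(pmd) && pmd_present(pmd) && pmd_user(pmd);
}

static inline bool pud_user_accessible_page(pud_t pud)
{
	return pud_leaf(pud) && pud_present(pud) && pud_user(pud);
}
```

With pmd_user()/pud_user() defined per platform, the same helper bodies work on both ppc32 and ppc64.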

> +}
> +
> +#else
> +
> +static inline bool pmd_user_accessible_page(pmd_t pmd)
> +{
> +	BUG();
> +	return false;
> +}
> +
> +static inline bool pud_user_accessible_page(pud_t pud)
> +{
> +	BUG();
> +	return false;
> +}
> +
> +#endif /* CONFIG_PPC64 */
> +
>   #endif /* __ASSEMBLY__ */
>   
>   #endif /* _ASM_POWERPC_PGTABLE_H */

* Re: [PATCH 3/3] powerpc: mm: support page table check
  2022-09-13  1:21     ` Rohan McLure
@ 2022-09-13 14:36       ` Christophe Leroy
  0 siblings, 0 replies; 11+ messages in thread
From: Christophe Leroy @ 2022-09-13 14:36 UTC (permalink / raw)
  To: Rohan McLure; +Cc: linuxppc-dev@lists.ozlabs.org



Le 13/09/2022 à 03:21, Rohan McLure a écrit :
> 
> 
>> On 12 Sep 2022, at 4:11 pm, Christophe Leroy <christophe.leroy@csgroup.eu> wrote:
>>
>>
>>
>> Le 12/09/2022 à 03:47, Rohan McLure a écrit :
>>> On creation and clearing of a page table mapping, instrument such calls
>>> by invoking page_table_check_pte_set and page_table_check_pte_clear
>>> respectively. These calls serve as a sanity check against illegal
>>> mappings.
>>>
>>> Enable ARCH_SUPPORTS_PAGE_TABLE_CHECK for all ppc64, and 32-bit
>>> platforms implementing Book3S.
>>
>> Why only book3s on 32 bits ?
> 
> Sorry, I failed to update that commit message. This patch actually
> supports page table checks on all platforms: I began writing this
> series targeting just Book3S, and later extended it to cover all
> platforms. The only barrier to doing so was the need for the pud_pfn
> and page_table_check_pud_{clear,set} bloat.
> 
>>> +++ b/arch/powerpc/include/asm/pgtable.h
>>> @@ -166,7 +166,11 @@ static inline int pud_pfn(pud_t pud)
>>>   	 * check so this should never be used. If it grows another user we
>>>   	 * want to know about it.
>>>   	 */
>>> +#ifndef CONFIG_PAGE_TABLE_CHECK
>>>   	BUILD_BUG();
>>> +#else
>>> +	BUG();
>>> +#endif
>>
>> Awful.
> 
> Quite right. I suspect you can infer the intention here, which is to enforce
> that this dead code must not be included anywhere in generic code, but rather
> be gated by pud_devmap. I will relax this to a WARN().
> 

That's the #ifndef/#else/#endif that I find awful, more than the BUG() 
itself. Either this is not supposed to be used at all, and the 
BUILD_BUG() should be the way, or it is supposed to be used, and then it 
should really be implemented.

By the way, the comment should also be updated, because it is about
more than devmap now.

However, as pud_pfn() is starting to get real users now, maybe it is
time to implement it for real, isn't it?

The other solution is to have pud_user_accessible_page() always
return false.
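
For the first option, a rough userspace sketch of the shape a real pud_pfn() might take, mirroring how the pmd level extracts a PFN: mask off the non-address bits and shift by PAGE_SHIFT. The mask and shift values here are illustrative only, not the actual powerpc ones:

```c
#include <assert.h>

/* Userspace sketch of a "real" pud_pfn(): mask off the status/software
 * bits and shift down to the page frame number. Illustrative values. */
typedef struct { unsigned long val; } pud_t;

#define PAGE_SHIFT   12
#define PUD_RPN_MASK (~((1UL << PAGE_SHIFT) - 1))  /* hypothetical PFN bits */

static inline unsigned long pud_val(pud_t pud) { return pud.val; }

static inline unsigned long pud_pfn(pud_t pud)
{
	return (pud_val(pud) & PUD_RPN_MASK) >> PAGE_SHIFT;
}
```

The real implementation would of course use the platform's actual RPN mask rather than a simple page-alignment mask.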

Christophe

Thread overview: 11+ messages
2022-09-12  1:47 [PATCH 1/3] powerpc: mm: move pud_pfn stub to common pgtable header Rohan McLure
2022-09-12  1:47 ` [PATCH 2/3] powerpc: mm: add p{te,md,ud}_user_accessible_page helpers Rohan McLure
2022-09-12  5:40   ` Christophe Leroy
2022-09-12  8:38     ` Rohan McLure
2022-09-13 14:12   ` Christophe Leroy
2022-09-12  1:47 ` [PATCH 3/3] powerpc: mm: support page table check Rohan McLure
2022-09-12  6:11   ` Christophe Leroy
2022-09-13  1:21     ` Rohan McLure
2022-09-13 14:36       ` Christophe Leroy
2022-09-12  1:51 ` [PATCH 1/3] powerpc: mm: move pud_pfn stub to common pgtable header Rohan McLure
2022-09-12  5:30 ` Christophe Leroy
