* [PATCH 1/6] mm/mmap: Restrict generic protection_map[] array visibility
2022-06-03 10:14 [PATCH 0/6] mm/mmap: Enable more platforms with ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
@ 2022-06-03 10:14 ` Anshuman Khandual
2022-06-03 12:18 ` Christophe Leroy
2022-06-03 10:14 ` [PATCH 2/6] s390/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
` (4 subsequent siblings)
5 siblings, 1 reply; 15+ messages in thread
From: Anshuman Khandual @ 2022-06-03 10:14 UTC (permalink / raw)
To: linux-mm
Cc: Catalin Marinas, linux-kernel, Paul Mackerras, sparclinux,
Will Deacon, Jonas Bonn, linux-s390, x86, linux-csky, Ingo Molnar,
Geert Uytterhoeven, Vasily Gorbik, Anshuman Khandual,
Heiko Carstens, openrisc, Thomas Gleixner, linux-arm-kernel,
Thomas Bogendoerfer, linux-mips, Dinh Nguyen, Andrew Morton,
linuxppc-dev, David S. Miller
Restrict the generic protection_map[] array's visibility to platforms which
do not enable ARCH_HAS_VM_GET_PAGE_PROT. Platforms that define their own
vm_get_page_prot() by enabling ARCH_HAS_VM_GET_PAGE_PROT can still keep a
private static protection_map[] implementing the array lookup. These private
protection_map[] arrays no longer need the __PXXX/__SXXX macros, making them
redundant, so they can be dropped.
But platforms which do not define a custom vm_get_page_prot() by enabling
ARCH_HAS_VM_GET_PAGE_PROT will still have to provide the __PXXX/__SXXX
macros. This now provides a method for other willing platforms to drop the
__PXXX/__SXXX macros completely.
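An illustrative sketch of the resulting structure (just a summary for this
log, not part of the diff; the guarded half mirrors mm/mmap.c below):

#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
/* mm/mmap.c: generic table and lookup for non-subscribing platforms */
pgprot_t protection_map[16] __ro_after_init = {
	[VM_NONE] = __P000,
	/* ... through [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __S111 */
};

pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
	return protection_map[vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)];
}
EXPORT_SYMBOL(vm_get_page_prot);
#endif

/*
 * Platforms selecting ARCH_HAS_VM_GET_PAGE_PROT instead keep a private
 * static protection_map[] (or equivalent) behind their own exported
 * vm_get_page_prot(), which is why they can drop the __PXXX/__SXXX macros.
 */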
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: x86@kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: sparclinux@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/arm64/include/asm/pgtable-prot.h | 18 ------------------
arch/arm64/mm/mmap.c | 21 +++++++++++++++++++++
arch/powerpc/include/asm/pgtable.h | 2 ++
arch/powerpc/mm/book3s64/pgtable.c | 20 ++++++++++++++++++++
arch/sparc/include/asm/pgtable_32.h | 2 ++
arch/sparc/include/asm/pgtable_64.h | 19 -------------------
arch/sparc/mm/init_64.c | 20 ++++++++++++++++++++
arch/x86/include/asm/pgtable_types.h | 19 -------------------
arch/x86/mm/pgprot.c | 19 +++++++++++++++++++
include/linux/mm.h | 2 ++
mm/mmap.c | 2 +-
11 files changed, 87 insertions(+), 57 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 62e0ebeed720..9b165117a454 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -89,24 +89,6 @@ extern bool arm64_use_ng_mappings;
#define PAGE_READONLY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN)
#define PAGE_EXECONLY __pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PTE_PXN)
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_READONLY
-#define __P011 PAGE_READONLY
-#define __P100 PAGE_READONLY_EXEC /* PAGE_EXECONLY if Enhanced PAN */
-#define __P101 PAGE_READONLY_EXEC
-#define __P110 PAGE_READONLY_EXEC
-#define __P111 PAGE_READONLY_EXEC
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY_EXEC /* PAGE_EXECONLY if Enhanced PAN */
-#define __S101 PAGE_READONLY_EXEC
-#define __S110 PAGE_SHARED_EXEC
-#define __S111 PAGE_SHARED_EXEC
-
#endif /* __ASSEMBLY__ */
#endif /* __ASM_PGTABLE_PROT_H */
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 78e9490f748d..8f5b7ce857ed 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -13,6 +13,27 @@
#include <asm/cpufeature.h>
#include <asm/page.h>
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY,
+ [VM_WRITE] = PAGE_READONLY,
+ [VM_WRITE | VM_READ] = PAGE_READONLY,
+ /* PAGE_EXECONLY if Enhanced PAN */
+ [VM_EXEC] = PAGE_READONLY_EXEC,
+ [VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+ [VM_EXEC | VM_WRITE] = PAGE_READONLY_EXEC,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_READONLY_EXEC,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
+ /* PAGE_EXECONLY if Enhanced PAN */
+ [VM_SHARED | VM_EXEC] = PAGE_READONLY_EXEC,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED_EXEC,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_EXEC
+};
+
/*
* You really shouldn't be using read() or write() on /dev/mem. This might go
* away in the future.
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index d564d0ecd4cd..8ed2a80c896e 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -21,6 +21,7 @@ struct mm_struct;
#endif /* !CONFIG_PPC_BOOK3S */
/* Note due to the way vm flags are laid out, the bits are XWR */
+#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
#define __P000 PAGE_NONE
#define __P001 PAGE_READONLY
#define __P010 PAGE_COPY
@@ -38,6 +39,7 @@ struct mm_struct;
#define __S101 PAGE_READONLY_X
#define __S110 PAGE_SHARED_X
#define __S111 PAGE_SHARED_X
+#endif
#ifndef __ASSEMBLY__
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 7b9966402b25..2cf10a17c0a9 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -551,6 +551,26 @@ unsigned long memremap_compat_align(void)
EXPORT_SYMBOL_GPL(memremap_compat_align);
#endif
+/* Note due to the way vm flags are laid out, the bits are XWR */
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY,
+ [VM_WRITE] = PAGE_COPY,
+ [VM_WRITE | VM_READ] = PAGE_COPY,
+ [VM_EXEC] = PAGE_READONLY_X,
+ [VM_EXEC | VM_READ] = PAGE_READONLY_X,
+ [VM_EXEC | VM_WRITE] = PAGE_COPY_X,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_X,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC] = PAGE_READONLY_X,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_X,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED_X,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_X
+};
+
pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
unsigned long prot = pgprot_val(protection_map[vm_flags &
diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h
index 4866625da314..bca98b280fdd 100644
--- a/arch/sparc/include/asm/pgtable_32.h
+++ b/arch/sparc/include/asm/pgtable_32.h
@@ -65,6 +65,7 @@ void paging_init(void);
extern unsigned long ptr_in_current_pgd;
/* xwr */
+#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
#define __P000 PAGE_NONE
#define __P001 PAGE_READONLY
#define __P010 PAGE_COPY
@@ -82,6 +83,7 @@ extern unsigned long ptr_in_current_pgd;
#define __S101 PAGE_READONLY
#define __S110 PAGE_SHARED
#define __S111 PAGE_SHARED
+#endif
/* First physical page can be anywhere, the following is needed so that
* va-->pa and vice versa conversions work properly without performance
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 4679e45c8348..a779418ceba9 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -187,25 +187,6 @@ bool kern_addr_valid(unsigned long addr);
#define _PAGE_SZHUGE_4U _PAGE_SZ4MB_4U
#define _PAGE_SZHUGE_4V _PAGE_SZ4MB_4V
-/* These are actually filled in at boot time by sun4{u,v}_pgprot_init() */
-#define __P000 __pgprot(0)
-#define __P001 __pgprot(0)
-#define __P010 __pgprot(0)
-#define __P011 __pgprot(0)
-#define __P100 __pgprot(0)
-#define __P101 __pgprot(0)
-#define __P110 __pgprot(0)
-#define __P111 __pgprot(0)
-
-#define __S000 __pgprot(0)
-#define __S001 __pgprot(0)
-#define __S010 __pgprot(0)
-#define __S011 __pgprot(0)
-#define __S100 __pgprot(0)
-#define __S101 __pgprot(0)
-#define __S110 __pgprot(0)
-#define __S111 __pgprot(0)
-
#ifndef __ASSEMBLY__
pte_t mk_pte_io(unsigned long, pgprot_t, int, unsigned long);
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index f6174df2d5af..6edc2a68b73c 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2634,6 +2634,26 @@ void vmemmap_free(unsigned long start, unsigned long end,
}
#endif /* CONFIG_SPARSEMEM_VMEMMAP */
+/* These are actually filled in at boot time by sun4{u,v}_pgprot_init() */
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = __pgprot(0),
+ [VM_READ] = __pgprot(0),
+ [VM_WRITE] = __pgprot(0),
+ [VM_WRITE | VM_READ] = __pgprot(0),
+ [VM_EXEC] = __pgprot(0),
+ [VM_EXEC | VM_READ] = __pgprot(0),
+ [VM_EXEC | VM_WRITE] = __pgprot(0),
+ [VM_EXEC | VM_WRITE | VM_READ] = __pgprot(0),
+ [VM_SHARED] = __pgprot(0),
+ [VM_SHARED | VM_READ] = __pgprot(0),
+ [VM_SHARED | VM_WRITE] = __pgprot(0),
+ [VM_SHARED | VM_WRITE | VM_READ] = __pgprot(0),
+ [VM_SHARED | VM_EXEC] = __pgprot(0),
+ [VM_SHARED | VM_EXEC | VM_READ] = __pgprot(0),
+ [VM_SHARED | VM_EXEC | VM_WRITE] = __pgprot(0),
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __pgprot(0)
+};
+
static void prot_init_common(unsigned long page_none,
unsigned long page_shared,
unsigned long page_copy,
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index bdaf8391e2e0..aa174fed3a71 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -230,25 +230,6 @@ enum page_cache_mode {
#endif /* __ASSEMBLY__ */
-/* xwr */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_READONLY_EXEC
-#define __P101 PAGE_READONLY_EXEC
-#define __P110 PAGE_COPY_EXEC
-#define __P111 PAGE_COPY_EXEC
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY_EXEC
-#define __S101 PAGE_READONLY_EXEC
-#define __S110 PAGE_SHARED_EXEC
-#define __S111 PAGE_SHARED_EXEC
-
/*
* early identity mapping pte attrib macros.
*/
diff --git a/arch/x86/mm/pgprot.c b/arch/x86/mm/pgprot.c
index 763742782286..7eca1b009af6 100644
--- a/arch/x86/mm/pgprot.c
+++ b/arch/x86/mm/pgprot.c
@@ -4,6 +4,25 @@
#include <linux/mm.h>
#include <asm/pgtable.h>
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY,
+ [VM_WRITE] = PAGE_COPY,
+ [VM_WRITE | VM_READ] = PAGE_COPY,
+ [VM_EXEC] = PAGE_READONLY_EXEC,
+ [VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+ [VM_EXEC | VM_WRITE] = PAGE_COPY_EXEC,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_EXEC,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC] = PAGE_READONLY_EXEC,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED_EXEC,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_EXEC
+};
+
pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
unsigned long val = pgprot_val(protection_map[vm_flags &
diff --git a/include/linux/mm.h b/include/linux/mm.h
index bc8f326be0ce..2254c1980c8e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -420,11 +420,13 @@ extern unsigned int kobjsize(const void *objp);
#endif
#define VM_FLAGS_CLEAR (ARCH_VM_PKEY_FLAGS | VM_ARCH_CLEAR)
+#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
/*
* mapping from the currently active vm_flags protection bits (the
* low four bits) to a page protection mask..
*/
extern pgprot_t protection_map[16];
+#endif
/*
* The default fault flags that should be used by most of the
diff --git a/mm/mmap.c b/mm/mmap.c
index 61e6135c54ef..e66920414945 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -101,6 +101,7 @@ static void unmap_region(struct mm_struct *mm,
* w: (no) no
* x: (yes) yes
*/
+#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
pgprot_t protection_map[16] __ro_after_init = {
[VM_NONE] = __P000,
[VM_READ] = __P001,
@@ -120,7 +121,6 @@ pgprot_t protection_map[16] __ro_after_init = {
[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __S111
};
-#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
return protection_map[vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)];
--
2.25.1
* Re: [PATCH 1/6] mm/mmap: Restrict generic protection_map[] array visibility
2022-06-03 10:14 ` [PATCH 1/6] mm/mmap: Restrict generic protection_map[] array visibility Anshuman Khandual
@ 2022-06-03 12:18 ` Christophe Leroy
2022-06-05 10:19 ` Anshuman Khandual
0 siblings, 1 reply; 15+ messages in thread
From: Christophe Leroy @ 2022-06-03 12:18 UTC (permalink / raw)
To: Anshuman Khandual, linux-mm@kvack.org
Cc: Catalin Marinas, linux-mips@vger.kernel.org, Paul Mackerras,
sparclinux@vger.kernel.org, Will Deacon, Jonas Bonn,
linux-s390@vger.kernel.org, x86@kernel.org,
linux-csky@vger.kernel.org, Ingo Molnar, Geert Uytterhoeven,
Vasily Gorbik, Heiko Carstens, openrisc@lists.librecores.org,
Thomas Gleixner, linux-arm-kernel@lists.infradead.org,
Thomas Bogendoerfer, linux-kernel@vger.kernel.org, Dinh Nguyen,
Andrew Morton, linuxppc-dev@lists.ozlabs.org, David S. Miller
On 03/06/2022 at 12:14, Anshuman Khandual wrote:
> Restrict the generic protection_map[] array's visibility to platforms which
> do not enable ARCH_HAS_VM_GET_PAGE_PROT. Platforms that define their own
> vm_get_page_prot() by enabling ARCH_HAS_VM_GET_PAGE_PROT can still keep a
> private static protection_map[] implementing the array lookup. These private
> protection_map[] arrays no longer need the __PXXX/__SXXX macros, making them
> redundant, so they can be dropped.
>
> But platforms which do not define a custom vm_get_page_prot() by enabling
> ARCH_HAS_VM_GET_PAGE_PROT will still have to provide the __PXXX/__SXXX
> macros. This now provides a method for other willing platforms to drop the
> __PXXX/__SXXX macros completely.
>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: x86@kernel.org
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: sparclinux@vger.kernel.org
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
> arch/arm64/include/asm/pgtable-prot.h | 18 ------------------
> arch/arm64/mm/mmap.c | 21 +++++++++++++++++++++
> arch/powerpc/include/asm/pgtable.h | 2 ++
> arch/powerpc/mm/book3s64/pgtable.c | 20 ++++++++++++++++++++
> arch/sparc/include/asm/pgtable_32.h | 2 ++
> arch/sparc/include/asm/pgtable_64.h | 19 -------------------
> arch/sparc/mm/init_64.c | 20 ++++++++++++++++++++
> arch/x86/include/asm/pgtable_types.h | 19 -------------------
> arch/x86/mm/pgprot.c | 19 +++++++++++++++++++
> include/linux/mm.h | 2 ++
> mm/mmap.c | 2 +-
> 11 files changed, 87 insertions(+), 57 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index d564d0ecd4cd..8ed2a80c896e 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -21,6 +21,7 @@ struct mm_struct;
> #endif /* !CONFIG_PPC_BOOK3S */
>
> /* Note due to the way vm flags are laid out, the bits are XWR */
> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
Ok, so until now it was common to all powerpc platforms. Now you define
it differently depending on whether it is PPC_BOOK3S_64 or another platform?
What's the point?
> #define __P000 PAGE_NONE
> #define __P001 PAGE_READONLY
> #define __P010 PAGE_COPY
> @@ -38,6 +39,7 @@ struct mm_struct;
> #define __S101 PAGE_READONLY_X
> #define __S110 PAGE_SHARED_X
> #define __S111 PAGE_SHARED_X
> +#endif
>
> #ifndef __ASSEMBLY__
>
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index 7b9966402b25..2cf10a17c0a9 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -551,6 +551,26 @@ unsigned long memremap_compat_align(void)
> EXPORT_SYMBOL_GPL(memremap_compat_align);
> #endif
>
> +/* Note due to the way vm flags are laid out, the bits are XWR */
> +static pgprot_t protection_map[16] __ro_after_init = {
I don't think powerpc modifies that at all. Could be const instead of
ro_after_init.
> + [VM_NONE] = PAGE_NONE,
> + [VM_READ] = PAGE_READONLY,
> + [VM_WRITE] = PAGE_COPY,
> + [VM_WRITE | VM_READ] = PAGE_COPY,
> + [VM_EXEC] = PAGE_READONLY_X,
> + [VM_EXEC | VM_READ] = PAGE_READONLY_X,
> + [VM_EXEC | VM_WRITE] = PAGE_COPY_X,
> + [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_X,
> + [VM_SHARED] = PAGE_NONE,
> + [VM_SHARED | VM_READ] = PAGE_READONLY,
> + [VM_SHARED | VM_WRITE] = PAGE_SHARED,
> + [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
> + [VM_SHARED | VM_EXEC] = PAGE_READONLY_X,
> + [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_X,
> + [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED_X,
> + [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_X
> +};
That's nice but it could apply to all powerpc platforms. Why restrict it
to book3s/64 ?
> +
> pgprot_t vm_get_page_prot(unsigned long vm_flags)
> {
> unsigned long prot = pgprot_val(protection_map[vm_flags &
> diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h
> index 4866625da314..bca98b280fdd 100644
> --- a/arch/sparc/include/asm/pgtable_32.h
> +++ b/arch/sparc/include/asm/pgtable_32.h
> @@ -65,6 +65,7 @@ void paging_init(void);
> extern unsigned long ptr_in_current_pgd;
>
> /* xwr */
> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
CONFIG_ARCH_HAS_VM_GET_PAGE_PROT is selected by sparc64 only; is that
ifdef needed at all?
> #define __P000 PAGE_NONE
> #define __P001 PAGE_READONLY
> #define __P010 PAGE_COPY
> @@ -82,6 +83,7 @@ extern unsigned long ptr_in_current_pgd;
> #define __S101 PAGE_READONLY
> #define __S110 PAGE_SHARED
> #define __S111 PAGE_SHARED
> +#endif
>
> /* First physical page can be anywhere, the following is needed so that
> * va-->pa and vice versa conversions work properly without performance
> diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> index 4679e45c8348..a779418ceba9 100644
> --- a/arch/sparc/include/asm/pgtable_64.h
> +++ b/arch/sparc/include/asm/pgtable_64.h
> @@ -187,25 +187,6 @@ bool kern_addr_valid(unsigned long addr);
> #define _PAGE_SZHUGE_4U _PAGE_SZ4MB_4U
> #define _PAGE_SZHUGE_4V _PAGE_SZ4MB_4V
>
> -/* These are actually filled in at boot time by sun4{u,v}_pgprot_init() */
> -#define __P000 __pgprot(0)
> -#define __P001 __pgprot(0)
> -#define __P010 __pgprot(0)
> -#define __P011 __pgprot(0)
> -#define __P100 __pgprot(0)
> -#define __P101 __pgprot(0)
> -#define __P110 __pgprot(0)
> -#define __P111 __pgprot(0)
> -
> -#define __S000 __pgprot(0)
> -#define __S001 __pgprot(0)
> -#define __S010 __pgprot(0)
> -#define __S011 __pgprot(0)
> -#define __S100 __pgprot(0)
> -#define __S101 __pgprot(0)
> -#define __S110 __pgprot(0)
> -#define __S111 __pgprot(0)
> -
> #ifndef __ASSEMBLY__
>
> pte_t mk_pte_io(unsigned long, pgprot_t, int, unsigned long);
> diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
> index f6174df2d5af..6edc2a68b73c 100644
> --- a/arch/sparc/mm/init_64.c
> +++ b/arch/sparc/mm/init_64.c
> @@ -2634,6 +2634,26 @@ void vmemmap_free(unsigned long start, unsigned long end,
> }
> #endif /* CONFIG_SPARSEMEM_VMEMMAP */
>
> +/* These are actually filled in at boot time by sun4{u,v}_pgprot_init() */
> +static pgprot_t protection_map[16] __ro_after_init = {
> + [VM_NONE] = __pgprot(0),
> + [VM_READ] = __pgprot(0),
> + [VM_WRITE] = __pgprot(0),
> + [VM_WRITE | VM_READ] = __pgprot(0),
> + [VM_EXEC] = __pgprot(0),
> + [VM_EXEC | VM_READ] = __pgprot(0),
> + [VM_EXEC | VM_WRITE] = __pgprot(0),
> + [VM_EXEC | VM_WRITE | VM_READ] = __pgprot(0),
> + [VM_SHARED] = __pgprot(0),
> + [VM_SHARED | VM_READ] = __pgprot(0),
> + [VM_SHARED | VM_WRITE] = __pgprot(0),
> + [VM_SHARED | VM_WRITE | VM_READ] = __pgprot(0),
> + [VM_SHARED | VM_EXEC] = __pgprot(0),
> + [VM_SHARED | VM_EXEC | VM_READ] = __pgprot(0),
> + [VM_SHARED | VM_EXEC | VM_WRITE] = __pgprot(0),
> + [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __pgprot(0)
> +};
__pgprot(0) is 0, so you don't need to initialise the fields at all; they
are zeroed at startup like any other static storage.
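So the whole initialiser list can go; a sketch of the simplified declaration
(assuming nothing reads the table before sun4{u,v}_pgprot_init() fills it in):

/* These are actually filled in at boot time by sun4{u,v}_pgprot_init() */
static pgprot_t protection_map[16] __ro_after_init;

Static storage is guaranteed to start out zeroed, so this is equivalent to
spelling out sixteen __pgprot(0) entries.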
> +
> static void prot_init_common(unsigned long page_none,
> unsigned long page_shared,
> unsigned long page_copy,
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index bc8f326be0ce..2254c1980c8e 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -420,11 +420,13 @@ extern unsigned int kobjsize(const void *objp);
> #endif
> #define VM_FLAGS_CLEAR (ARCH_VM_PKEY_FLAGS | VM_ARCH_CLEAR)
>
> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
> /*
> * mapping from the currently active vm_flags protection bits (the
> * low four bits) to a page protection mask..
> */
> extern pgprot_t protection_map[16];
> +#endif
>
> /*
> * The default fault flags that should be used by most of the
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 61e6135c54ef..e66920414945 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -101,6 +101,7 @@ static void unmap_region(struct mm_struct *mm,
> * w: (no) no
> * x: (yes) yes
> */
> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
> pgprot_t protection_map[16] __ro_after_init = {
> [VM_NONE] = __P000,
> [VM_READ] = __P001,
> @@ -120,7 +121,6 @@ pgprot_t protection_map[16] __ro_after_init = {
> [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __S111
> };
>
> -#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
Why not let architectures provide their protection_map[] and keep that
function?
> pgprot_t vm_get_page_prot(unsigned long vm_flags)
> {
> return protection_map[vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)];
* Re: [PATCH 1/6] mm/mmap: Restrict generic protection_map[] array visibility
2022-06-03 12:18 ` Christophe Leroy
@ 2022-06-05 10:19 ` Anshuman Khandual
0 siblings, 0 replies; 15+ messages in thread
From: Anshuman Khandual @ 2022-06-05 10:19 UTC (permalink / raw)
To: Christophe Leroy, linux-mm@kvack.org
Cc: Catalin Marinas, linux-mips@vger.kernel.org, Paul Mackerras,
sparclinux@vger.kernel.org, Will Deacon, Jonas Bonn,
linux-s390@vger.kernel.org, x86@kernel.org,
linux-csky@vger.kernel.org, Ingo Molnar, Geert Uytterhoeven,
Vasily Gorbik, Heiko Carstens, openrisc@lists.librecores.org,
Thomas Gleixner, linux-arm-kernel@lists.infradead.org,
Thomas Bogendoerfer, linux-kernel@vger.kernel.org, Dinh Nguyen,
Andrew Morton, linuxppc-dev@lists.ozlabs.org, David S. Miller
On 6/3/22 17:48, Christophe Leroy wrote:
>
>
> On 03/06/2022 at 12:14, Anshuman Khandual wrote:
>> Restrict the generic protection_map[] array's visibility to platforms which
>> do not enable ARCH_HAS_VM_GET_PAGE_PROT. Platforms that define their own
>> vm_get_page_prot() by enabling ARCH_HAS_VM_GET_PAGE_PROT can still keep a
>> private static protection_map[] implementing the array lookup. These private
>> protection_map[] arrays no longer need the __PXXX/__SXXX macros, making them
>> redundant, so they can be dropped.
>>
>> But platforms which do not define a custom vm_get_page_prot() by enabling
>> ARCH_HAS_VM_GET_PAGE_PROT will still have to provide the __PXXX/__SXXX
>> macros. This now provides a method for other willing platforms to drop the
>> __PXXX/__SXXX macros completely.
>>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>> Cc: Paul Mackerras <paulus@samba.org>
>> Cc: "David S. Miller" <davem@davemloft.net>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: x86@kernel.org
>> Cc: linux-arm-kernel@lists.infradead.org
>> Cc: linuxppc-dev@lists.ozlabs.org
>> Cc: sparclinux@vger.kernel.org
>> Cc: linux-mm@kvack.org
>> Cc: linux-kernel@vger.kernel.org
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> ---
>> arch/arm64/include/asm/pgtable-prot.h | 18 ------------------
>> arch/arm64/mm/mmap.c | 21 +++++++++++++++++++++
>> arch/powerpc/include/asm/pgtable.h | 2 ++
>> arch/powerpc/mm/book3s64/pgtable.c | 20 ++++++++++++++++++++
>> arch/sparc/include/asm/pgtable_32.h | 2 ++
>> arch/sparc/include/asm/pgtable_64.h | 19 -------------------
>> arch/sparc/mm/init_64.c | 20 ++++++++++++++++++++
>> arch/x86/include/asm/pgtable_types.h | 19 -------------------
>> arch/x86/mm/pgprot.c | 19 +++++++++++++++++++
>> include/linux/mm.h | 2 ++
>> mm/mmap.c | 2 +-
>> 11 files changed, 87 insertions(+), 57 deletions(-)
>>
>
>> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
>> index d564d0ecd4cd..8ed2a80c896e 100644
>> --- a/arch/powerpc/include/asm/pgtable.h
>> +++ b/arch/powerpc/include/asm/pgtable.h
>> @@ -21,6 +21,7 @@ struct mm_struct;
>> #endif /* !CONFIG_PPC_BOOK3S */
>>
>> /* Note due to the way vm flags are laid out, the bits are XWR */
>> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>
> Ok, so until now it was common to all powerpc platforms. Now you define
> it differently depending on whether it is PPC_BOOK3S_64 or another platform?
> What's the point?
On powerpc,
select ARCH_HAS_VM_GET_PAGE_PROT if PPC_BOOK3S_64
Currently protection_map[], which requires the __PXXX/__SXXX macros,
is applicable on all platforms, irrespective of whether they enable
ARCH_HAS_VM_GET_PAGE_PROT or not. But because protection_map[]
is being made private for ARCH_HAS_VM_GET_PAGE_PROT enabling
platforms, they will not require the __PXXX/__SXXX macros anymore.
In this case, PPC_BOOK3S_64 does not require the macros anymore,
whereas other powerpc platforms will still require them as they
depend on the generic protection_map[].
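To illustrate, the select line quoted above combined with this patch gives
(placement of the Kconfig bit in arch/powerpc/Kconfig is an assumption here):

config PPC
	select ARCH_HAS_VM_GET_PAGE_PROT	if PPC_BOOK3S_64

# PPC_BOOK3S_64 builds: the __PXXX/__SXXX block in asm/pgtable.h is
# compiled out, and the private protection_map[] + vm_get_page_prot()
# in arch/powerpc/mm/book3s64/pgtable.c are used instead.
#
# All other powerpc builds: __PXXX/__SXXX stay visible, and the generic
# protection_map[] + vm_get_page_prot() in mm/mmap.c are used.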
>
>> #define __P000 PAGE_NONE
>> #define __P001 PAGE_READONLY
>> #define __P010 PAGE_COPY
>> @@ -38,6 +39,7 @@ struct mm_struct;
>> #define __S101 PAGE_READONLY_X
>> #define __S110 PAGE_SHARED_X
>> #define __S111 PAGE_SHARED_X
>> +#endif
>>
>> #ifndef __ASSEMBLY__
>>
>> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
>> index 7b9966402b25..2cf10a17c0a9 100644
>> --- a/arch/powerpc/mm/book3s64/pgtable.c
>> +++ b/arch/powerpc/mm/book3s64/pgtable.c
>> @@ -551,6 +551,26 @@ unsigned long memremap_compat_align(void)
>> EXPORT_SYMBOL_GPL(memremap_compat_align);
>> #endif
>>
>> +/* Note due to the way vm flags are laid out, the bits are XWR */
>> +static pgprot_t protection_map[16] __ro_after_init = {
>
> I don't think powerpc modifies that at all. Could be const instead of
> ro_after_init.
Sure, will change that.
>
>> + [VM_NONE] = PAGE_NONE,
>> + [VM_READ] = PAGE_READONLY,
>> + [VM_WRITE] = PAGE_COPY,
>> + [VM_WRITE | VM_READ] = PAGE_COPY,
>> + [VM_EXEC] = PAGE_READONLY_X,
>> + [VM_EXEC | VM_READ] = PAGE_READONLY_X,
>> + [VM_EXEC | VM_WRITE] = PAGE_COPY_X,
>> + [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_X,
>> + [VM_SHARED] = PAGE_NONE,
>> + [VM_SHARED | VM_READ] = PAGE_READONLY,
>> + [VM_SHARED | VM_WRITE] = PAGE_SHARED,
>> + [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
>> + [VM_SHARED | VM_EXEC] = PAGE_READONLY_X,
>> + [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_X,
>> + [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED_X,
>> + [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_X
>> +};
>
> That's nice but it could apply to all powerpc platforms. Why restrict it
> to book3s/64 ?
Because as mentioned earlier, others powerpc platforms do not
enable ARCH_HAS_VM_GET_PAGE_PROT.
>
>> +
>> pgprot_t vm_get_page_prot(unsigned long vm_flags)
>> {
>> unsigned long prot = pgprot_val(protection_map[vm_flags &
>> diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h
>> index 4866625da314..bca98b280fdd 100644
>> --- a/arch/sparc/include/asm/pgtable_32.h
>> +++ b/arch/sparc/include/asm/pgtable_32.h
>> @@ -65,6 +65,7 @@ void paging_init(void);
>> extern unsigned long ptr_in_current_pgd;
>>
>> /* xwr */
>> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>
> CONFIG_ARCH_HAS_VM_GET_PAGE_PROT is selected by sparc64 only; is that
> ifdef needed at all?
Not really necessary, but added just to tighten up.
>
>> #define __P000 PAGE_NONE
>> #define __P001 PAGE_READONLY
>> #define __P010 PAGE_COPY
>> @@ -82,6 +83,7 @@ extern unsigned long ptr_in_current_pgd;
>> #define __S101 PAGE_READONLY
>> #define __S110 PAGE_SHARED
>> #define __S111 PAGE_SHARED
>> +#endif
>>
>> /* First physical page can be anywhere, the following is needed so that
>> * va-->pa and vice versa conversions work properly without performance
>> diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
>> index 4679e45c8348..a779418ceba9 100644
>> --- a/arch/sparc/include/asm/pgtable_64.h
>> +++ b/arch/sparc/include/asm/pgtable_64.h
>> @@ -187,25 +187,6 @@ bool kern_addr_valid(unsigned long addr);
>> #define _PAGE_SZHUGE_4U _PAGE_SZ4MB_4U
>> #define _PAGE_SZHUGE_4V _PAGE_SZ4MB_4V
>>
>> -/* These are actually filled in at boot time by sun4{u,v}_pgprot_init() */
>> -#define __P000 __pgprot(0)
>> -#define __P001 __pgprot(0)
>> -#define __P010 __pgprot(0)
>> -#define __P011 __pgprot(0)
>> -#define __P100 __pgprot(0)
>> -#define __P101 __pgprot(0)
>> -#define __P110 __pgprot(0)
>> -#define __P111 __pgprot(0)
>> -
>> -#define __S000 __pgprot(0)
>> -#define __S001 __pgprot(0)
>> -#define __S010 __pgprot(0)
>> -#define __S011 __pgprot(0)
>> -#define __S100 __pgprot(0)
>> -#define __S101 __pgprot(0)
>> -#define __S110 __pgprot(0)
>> -#define __S111 __pgprot(0)
>> -
>> #ifndef __ASSEMBLY__
>>
>> pte_t mk_pte_io(unsigned long, pgprot_t, int, unsigned long);
>> diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
>> index f6174df2d5af..6edc2a68b73c 100644
>> --- a/arch/sparc/mm/init_64.c
>> +++ b/arch/sparc/mm/init_64.c
>> @@ -2634,6 +2634,26 @@ void vmemmap_free(unsigned long start, unsigned long end,
>> }
>> #endif /* CONFIG_SPARSEMEM_VMEMMAP */
>>
>> +/* These are actually filled in at boot time by sun4{u,v}_pgprot_init() */
>> +static pgprot_t protection_map[16] __ro_after_init = {
>> + [VM_NONE] = __pgprot(0),
>> + [VM_READ] = __pgprot(0),
>> + [VM_WRITE] = __pgprot(0),
>> + [VM_WRITE | VM_READ] = __pgprot(0),
>> + [VM_EXEC] = __pgprot(0),
>> + [VM_EXEC | VM_READ] = __pgprot(0),
>> + [VM_EXEC | VM_WRITE] = __pgprot(0),
>> + [VM_EXEC | VM_WRITE | VM_READ] = __pgprot(0),
>> + [VM_SHARED] = __pgprot(0),
>> + [VM_SHARED | VM_READ] = __pgprot(0),
>> + [VM_SHARED | VM_WRITE] = __pgprot(0),
>> + [VM_SHARED | VM_WRITE | VM_READ] = __pgprot(0),
>> + [VM_SHARED | VM_EXEC] = __pgprot(0),
>> + [VM_SHARED | VM_EXEC | VM_READ] = __pgprot(0),
>> + [VM_SHARED | VM_EXEC | VM_WRITE] = __pgprot(0),
>> + [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __pgprot(0)
>> +};
>
> __pgprot(0) is 0, so you don't need to initialise the fields at all; they
> are zeroed at startup like any other static storage.
Sure, will change.
>
>> +
>> static void prot_init_common(unsigned long page_none,
>> unsigned long page_shared,
>> unsigned long page_copy,
>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index bc8f326be0ce..2254c1980c8e 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -420,11 +420,13 @@ extern unsigned int kobjsize(const void *objp);
>> #endif
>> #define VM_FLAGS_CLEAR (ARCH_VM_PKEY_FLAGS | VM_ARCH_CLEAR)
>>
>> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>> /*
>> * mapping from the currently active vm_flags protection bits (the
>> * low four bits) to a page protection mask..
>> */
>> extern pgprot_t protection_map[16];
>> +#endif
>>
>> /*
>> * The default fault flags that should be used by most of the
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 61e6135c54ef..e66920414945 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -101,6 +101,7 @@ static void unmap_region(struct mm_struct *mm,
>> * w: (no) no
>> * x: (yes) yes
>> */
>> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>> pgprot_t protection_map[16] __ro_after_init = {
>> [VM_NONE] = __P000,
>> [VM_READ] = __P001,
>> @@ -120,7 +121,6 @@ pgprot_t protection_map[16] __ro_after_init = {
>> [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __S111
>> };
>>
>> -#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>
> Why not let architectures provide their protection_map[] and keep that
> function?
Just to understand this correctly:
All platforms provide their private protection_map[] array, drop the
__SXXX/__PXXX macros which will not be required anymore, and depend on the
generic vm_get_page_prot() array lookup, unless they need a custom function
via ARCH_HAS_VM_GET_PAGE_PROT?
>
>> pgprot_t vm_get_page_prot(unsigned long vm_flags)
>> {
>> return protection_map[vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)];
* [PATCH 2/6] s390/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-03 10:14 [PATCH 0/6] mm/mmap: Enable more platforms with ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
2022-06-03 10:14 ` [PATCH 1/6] mm/mmap: Restrict generic protection_map[] array visibility Anshuman Khandual
@ 2022-06-03 10:14 ` Anshuman Khandual
2022-06-03 12:25 ` Christophe Leroy
2022-06-03 10:14 ` [PATCH 3/6] mips/mm: " Anshuman Khandual
` (3 subsequent siblings)
5 siblings, 1 reply; 15+ messages in thread
From: Anshuman Khandual @ 2022-06-03 10:14 UTC (permalink / raw)
To: linux-mm
Cc: Catalin Marinas, linux-kernel, Paul Mackerras, sparclinux,
Alexander Gordeev, Will Deacon, Jonas Bonn, linux-s390, x86,
linux-csky, Ingo Molnar, Geert Uytterhoeven, Vasily Gorbik,
Anshuman Khandual, Heiko Carstens, openrisc, Thomas Gleixner,
linux-arm-kernel, Thomas Bogendoerfer, linux-mips, Dinh Nguyen,
Sven Schnelle, Andrew Morton, linuxppc-dev, David S. Miller
This defines and exports a platform specific custom vm_get_page_prot() via
subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all __SXXX and __PXXX
macros, which are no longer needed, can be dropped.
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: linux-s390@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Sven Schnelle <svens@linux.ibm.com>
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/s390/Kconfig | 1 +
arch/s390/include/asm/pgtable.h | 17 -----------------
arch/s390/mm/mmap.c | 33 +++++++++++++++++++++++++++++++++
3 files changed, 34 insertions(+), 17 deletions(-)
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index b17239ae7bd4..cdcf678deab1 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -81,6 +81,7 @@ config S390
select ARCH_HAS_SYSCALL_WRAPPER
select ARCH_HAS_UBSAN_SANITIZE_ALL
select ARCH_HAS_VDSO_DATA
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_INLINE_READ_LOCK
select ARCH_INLINE_READ_LOCK_BH
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index a397b072a580..c63a05b5368a 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -424,23 +424,6 @@ static inline int is_module_addr(void *addr)
* implies read permission.
*/
/*xwr*/
-#define __P000 PAGE_NONE
-#define __P001 PAGE_RO
-#define __P010 PAGE_RO
-#define __P011 PAGE_RO
-#define __P100 PAGE_RX
-#define __P101 PAGE_RX
-#define __P110 PAGE_RX
-#define __P111 PAGE_RX
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_RO
-#define __S010 PAGE_RW
-#define __S011 PAGE_RW
-#define __S100 PAGE_RX
-#define __S101 PAGE_RX
-#define __S110 PAGE_RWX
-#define __S111 PAGE_RWX
/*
* Segment entry (large page) protection definitions.
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index d545f5c39f7e..11d75b8d5ec0 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -188,3 +188,36 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
}
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ case VM_WRITE:
+ case VM_WRITE | VM_READ:
+ return PAGE_RO;
+ case VM_EXEC:
+ case VM_EXEC | VM_READ:
+ case VM_EXEC | VM_WRITE:
+ case VM_EXEC | VM_WRITE | VM_READ:
+ return PAGE_RX;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_RO;
+ case VM_SHARED | VM_WRITE:
+ case VM_SHARED | VM_WRITE | VM_READ:
+ return PAGE_RW;
+ case VM_SHARED | VM_EXEC:
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_RX;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ case VM_SHARED | VM_EXEC | VM_WRITE | VM_READ:
+ return PAGE_RWX;
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1
* Re: [PATCH 2/6] s390/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-03 10:14 ` [PATCH 2/6] s390/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
@ 2022-06-03 12:25 ` Christophe Leroy
2022-06-05 9:58 ` Anshuman Khandual
0 siblings, 1 reply; 15+ messages in thread
From: Christophe Leroy @ 2022-06-03 12:25 UTC (permalink / raw)
To: Anshuman Khandual, linux-mm@kvack.org
Cc: Catalin Marinas, linux-mips@vger.kernel.org, Paul Mackerras,
sparclinux@vger.kernel.org, Alexander Gordeev, Will Deacon,
Jonas Bonn, linux-s390@vger.kernel.org, x86@kernel.org,
linux-csky@vger.kernel.org, Ingo Molnar, Geert Uytterhoeven,
Vasily Gorbik, Heiko Carstens, openrisc@lists.librecores.org,
Thomas Gleixner, linux-arm-kernel@lists.infradead.org,
Thomas Bogendoerfer, linux-kernel@vger.kernel.org, Dinh Nguyen,
Sven Schnelle, Andrew Morton
On 03/06/2022 at 12:14, Anshuman Khandual wrote:
> This defines and exports a platform specific custom vm_get_page_prot() via
> subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all __SXXX and __PXXX
> macros, which are no longer needed, can be dropped.
>
> Cc: Heiko Carstens <hca@linux.ibm.com>
> Cc: Vasily Gorbik <gor@linux.ibm.com>
> Cc: linux-s390@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Acked-by: Sven Schnelle <svens@linux.ibm.com>
> Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
> arch/s390/Kconfig | 1 +
> arch/s390/include/asm/pgtable.h | 17 -----------------
> arch/s390/mm/mmap.c | 33 +++++++++++++++++++++++++++++++++
> 3 files changed, 34 insertions(+), 17 deletions(-)
>
> diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
> index b17239ae7bd4..cdcf678deab1 100644
> --- a/arch/s390/Kconfig
> +++ b/arch/s390/Kconfig
> @@ -81,6 +81,7 @@ config S390
> select ARCH_HAS_SYSCALL_WRAPPER
> select ARCH_HAS_UBSAN_SANITIZE_ALL
> select ARCH_HAS_VDSO_DATA
> + select ARCH_HAS_VM_GET_PAGE_PROT
> select ARCH_HAVE_NMI_SAFE_CMPXCHG
> select ARCH_INLINE_READ_LOCK
> select ARCH_INLINE_READ_LOCK_BH
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index a397b072a580..c63a05b5368a 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s390/include/asm/pgtable.h
> @@ -424,23 +424,6 @@ static inline int is_module_addr(void *addr)
> * implies read permission.
> */
> /*xwr*/
> -#define __P000 PAGE_NONE
> -#define __P001 PAGE_RO
> -#define __P010 PAGE_RO
> -#define __P011 PAGE_RO
> -#define __P100 PAGE_RX
> -#define __P101 PAGE_RX
> -#define __P110 PAGE_RX
> -#define __P111 PAGE_RX
> -
> -#define __S000 PAGE_NONE
> -#define __S001 PAGE_RO
> -#define __S010 PAGE_RW
> -#define __S011 PAGE_RW
> -#define __S100 PAGE_RX
> -#define __S101 PAGE_RX
> -#define __S110 PAGE_RWX
> -#define __S111 PAGE_RWX
>
> /*
> * Segment entry (large page) protection definitions.
> diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
> index d545f5c39f7e..11d75b8d5ec0 100644
> --- a/arch/s390/mm/mmap.c
> +++ b/arch/s390/mm/mmap.c
> @@ -188,3 +188,36 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
> mm->get_unmapped_area = arch_get_unmapped_area_topdown;
> }
> }
> +
> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
> +{
> + switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
> + case VM_NONE:
> + return PAGE_NONE;
> + case VM_READ:
> + case VM_WRITE:
> + case VM_WRITE | VM_READ:
> + return PAGE_RO;
> + case VM_EXEC:
> + case VM_EXEC | VM_READ:
> + case VM_EXEC | VM_WRITE:
> + case VM_EXEC | VM_WRITE | VM_READ:
> + return PAGE_RX;
> + case VM_SHARED:
> + return PAGE_NONE;
> + case VM_SHARED | VM_READ:
> + return PAGE_RO;
> + case VM_SHARED | VM_WRITE:
> + case VM_SHARED | VM_WRITE | VM_READ:
> + return PAGE_RW;
> + case VM_SHARED | VM_EXEC:
> + case VM_SHARED | VM_EXEC | VM_READ:
> + return PAGE_RX;
> + case VM_SHARED | VM_EXEC | VM_WRITE:
> + case VM_SHARED | VM_EXEC | VM_WRITE | VM_READ:
> + return PAGE_RWX;
> + default:
> + BUILD_BUG();
> + }
> +}
> +EXPORT_SYMBOL(vm_get_page_prot);
Wasn't it demonstrated in previous discussions that a switch/case is
suboptimal compared to a table cell read?
In order to get rid of the _Sxxx/_Pxxx macros, my preference would go to
having architectures provide their own protection_map[] table, and keep
the generic vm_get_page_prot() for the architectures which don't need a
specific version of it.
This comment applies to all following patches as well.
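For illustration, a minimal sketch of that alternative for this s390 patch
(the table entries just transcribe the switch quoted above; how mm/mmap.c
would then skip its own definition is left open in this sketch):

/* arch/s390/mm/mmap.c -- sketch; no custom vm_get_page_prot() needed */
pgprot_t protection_map[16] __ro_after_init = {
	[VM_NONE]					= PAGE_NONE,
	[VM_READ]					= PAGE_RO,
	[VM_WRITE]					= PAGE_RO,
	[VM_WRITE | VM_READ]				= PAGE_RO,
	[VM_EXEC]					= PAGE_RX,
	[VM_EXEC | VM_READ]				= PAGE_RX,
	[VM_EXEC | VM_WRITE]				= PAGE_RX,
	[VM_EXEC | VM_WRITE | VM_READ]			= PAGE_RX,
	[VM_SHARED]					= PAGE_NONE,
	[VM_SHARED | VM_READ]				= PAGE_RO,
	[VM_SHARED | VM_WRITE]				= PAGE_RW,
	[VM_SHARED | VM_WRITE | VM_READ]		= PAGE_RW,
	[VM_SHARED | VM_EXEC]				= PAGE_RX,
	[VM_SHARED | VM_EXEC | VM_READ]			= PAGE_RX,
	[VM_SHARED | VM_EXEC | VM_WRITE]		= PAGE_RWX,
	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= PAGE_RWX
};

/* mm/mmap.c would then keep the single generic lookup for everyone:
 *	return protection_map[vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)];
 */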
* Re: [PATCH 2/6] s390/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-03 12:25 ` Christophe Leroy
@ 2022-06-05 9:58 ` Anshuman Khandual
0 siblings, 0 replies; 15+ messages in thread
From: Anshuman Khandual @ 2022-06-05 9:58 UTC (permalink / raw)
To: Christophe Leroy, linux-mm@kvack.org
Cc: Catalin Marinas, linux-mips@vger.kernel.org, Paul Mackerras,
sparclinux@vger.kernel.org, Alexander Gordeev, Will Deacon,
Jonas Bonn, linux-s390@vger.kernel.org, x86@kernel.org,
linux-csky@vger.kernel.org, Ingo Molnar, Geert Uytterhoeven,
Vasily Gorbik, Heiko Carstens, openrisc@lists.librecores.org,
Thomas Gleixner, linux-arm-kernel@lists.infradead.org,
Thomas Bogendoerfer, linux-kernel@vger.kernel.org, Dinh Nguyen,
Sven Schnelle, Andrew Morton
On 6/3/22 17:55, Christophe Leroy wrote:
>
>
> On 03/06/2022 at 12:14, Anshuman Khandual wrote:
>> This defines and exports a platform specific custom vm_get_page_prot() via
>> subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all __SXXX and __PXXX
>> macros, which are no longer needed, can be dropped.
>>
>> Cc: Heiko Carstens <hca@linux.ibm.com>
>> Cc: Vasily Gorbik <gor@linux.ibm.com>
>> Cc: linux-s390@vger.kernel.org
>> Cc: linux-kernel@vger.kernel.org
>> Acked-by: Sven Schnelle <svens@linux.ibm.com>
>> Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> ---
>> arch/s390/Kconfig | 1 +
>> arch/s390/include/asm/pgtable.h | 17 -----------------
>> arch/s390/mm/mmap.c | 33 +++++++++++++++++++++++++++++++++
>> 3 files changed, 34 insertions(+), 17 deletions(-)
>>
>> diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
>> index b17239ae7bd4..cdcf678deab1 100644
>> --- a/arch/s390/Kconfig
>> +++ b/arch/s390/Kconfig
>> @@ -81,6 +81,7 @@ config S390
>> select ARCH_HAS_SYSCALL_WRAPPER
>> select ARCH_HAS_UBSAN_SANITIZE_ALL
>> select ARCH_HAS_VDSO_DATA
>> + select ARCH_HAS_VM_GET_PAGE_PROT
>> select ARCH_HAVE_NMI_SAFE_CMPXCHG
>> select ARCH_INLINE_READ_LOCK
>> select ARCH_INLINE_READ_LOCK_BH
>> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
>> index a397b072a580..c63a05b5368a 100644
>> --- a/arch/s390/include/asm/pgtable.h
>> +++ b/arch/s390/include/asm/pgtable.h
>> @@ -424,23 +424,6 @@ static inline int is_module_addr(void *addr)
>> * implies read permission.
>> */
>> /*xwr*/
>> -#define __P000 PAGE_NONE
>> -#define __P001 PAGE_RO
>> -#define __P010 PAGE_RO
>> -#define __P011 PAGE_RO
>> -#define __P100 PAGE_RX
>> -#define __P101 PAGE_RX
>> -#define __P110 PAGE_RX
>> -#define __P111 PAGE_RX
>> -
>> -#define __S000 PAGE_NONE
>> -#define __S001 PAGE_RO
>> -#define __S010 PAGE_RW
>> -#define __S011 PAGE_RW
>> -#define __S100 PAGE_RX
>> -#define __S101 PAGE_RX
>> -#define __S110 PAGE_RWX
>> -#define __S111 PAGE_RWX
>>
>> /*
>> * Segment entry (large page) protection definitions.
>> diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
>> index d545f5c39f7e..11d75b8d5ec0 100644
>> --- a/arch/s390/mm/mmap.c
>> +++ b/arch/s390/mm/mmap.c
>> @@ -188,3 +188,36 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
>> mm->get_unmapped_area = arch_get_unmapped_area_topdown;
>> }
>> }
>> +
>> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
>> +{
>> + switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
>> + case VM_NONE:
>> + return PAGE_NONE;
>> + case VM_READ:
>> + case VM_WRITE:
>> + case VM_WRITE | VM_READ:
>> + return PAGE_RO;
>> + case VM_EXEC:
>> + case VM_EXEC | VM_READ:
>> + case VM_EXEC | VM_WRITE:
>> + case VM_EXEC | VM_WRITE | VM_READ:
>> + return PAGE_RX;
>> + case VM_SHARED:
>> + return PAGE_NONE;
>> + case VM_SHARED | VM_READ:
>> + return PAGE_RO;
>> + case VM_SHARED | VM_WRITE:
>> + case VM_SHARED | VM_WRITE | VM_READ:
>> + return PAGE_RW;
>> + case VM_SHARED | VM_EXEC:
>> + case VM_SHARED | VM_EXEC | VM_READ:
>> + return PAGE_RX;
>> + case VM_SHARED | VM_EXEC | VM_WRITE:
>> + case VM_SHARED | VM_EXEC | VM_WRITE | VM_READ:
>> + return PAGE_RWX;
>> + default:
>> + BUILD_BUG();
>> + }
>> +}
>> +EXPORT_SYMBOL(vm_get_page_prot);
>
> Wasn't it demonstrated in previous discussions that a switch/case is
> suboptimal compared to a table cell read?
Right, but all these platform patches here were acked by the respective
platform folks. I assumed that they might have valued the simplicity
of switch case statements, while also dropping off the __SXXX/__PXXX
macros, which is the final objective. Looks like that assumption was
not accurate.
>
> In order to get rid of the _Sxxx/_Pxxx macros, my preference would go to
> having architectures provide their own protection_map[] table, and keep
> the generic vm_get_page_prot() for the architectures which don't need a
> specific version of it.
I will try and rework the patches as suggested.
>
> This comment applies to all following patches as well.
* [PATCH 3/6] mips/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-03 10:14 [PATCH 0/6] mm/mmap: Enable more platforms with ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
2022-06-03 10:14 ` [PATCH 1/6] mm/mmap: Restrict generic protection_map[] array visibility Anshuman Khandual
2022-06-03 10:14 ` [PATCH 2/6] s390/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
@ 2022-06-03 10:14 ` Anshuman Khandual
2022-06-03 10:14 ` [PATCH 4/6] csky/mm: " Anshuman Khandual
` (2 subsequent siblings)
5 siblings, 0 replies; 15+ messages in thread
From: Anshuman Khandual @ 2022-06-03 10:14 UTC (permalink / raw)
To: linux-mm
Cc: Catalin Marinas, linux-kernel, Paul Mackerras, sparclinux,
Will Deacon, Jonas Bonn, linux-s390, x86, linux-csky, Ingo Molnar,
Geert Uytterhoeven, Vasily Gorbik, Anshuman Khandual,
Heiko Carstens, openrisc, Thomas Gleixner, linux-arm-kernel,
Thomas Bogendoerfer, linux-mips, Dinh Nguyen, Andrew Morton,
linuxppc-dev, David S. Miller
This defines and exports a platform specific custom vm_get_page_prot() via
subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all __SXXX and __PXXX
macros, which are no longer needed, can be dropped.
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: linux-mips@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/mips/Kconfig | 1 +
arch/mips/include/asm/pgtable.h | 22 ------------
arch/mips/mm/cache.c | 60 +++++++++++++++++++--------------
3 files changed, 36 insertions(+), 47 deletions(-)
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index db09d45d59ec..d0b7eb11ec81 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -14,6 +14,7 @@ config MIPS
select ARCH_HAS_STRNLEN_USER
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UBSAN_SANITIZE_ALL
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_KEEP_MEMBLOCK
select ARCH_SUPPORTS_UPROBES
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 374c6322775d..6caec386ad2f 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -41,28 +41,6 @@ struct vm_area_struct;
* by reasonable means..
*/
-/*
- * Dummy values to fill the table in mmap.c
- * The real values will be generated at runtime
- */
-#define __P000 __pgprot(0)
-#define __P001 __pgprot(0)
-#define __P010 __pgprot(0)
-#define __P011 __pgprot(0)
-#define __P100 __pgprot(0)
-#define __P101 __pgprot(0)
-#define __P110 __pgprot(0)
-#define __P111 __pgprot(0)
-
-#define __S000 __pgprot(0)
-#define __S001 __pgprot(0)
-#define __S010 __pgprot(0)
-#define __S011 __pgprot(0)
-#define __S100 __pgprot(0)
-#define __S101 __pgprot(0)
-#define __S110 __pgprot(0)
-#define __S111 __pgprot(0)
-
extern unsigned long _page_cachable_default;
extern void __update_cache(unsigned long address, pte_t pte);
diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 7be7240f7703..012862004431 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -159,30 +159,6 @@ EXPORT_SYMBOL(_page_cachable_default);
#define PM(p) __pgprot(_page_cachable_default | (p))
-static inline void setup_protection_map(void)
-{
- protection_map[0] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
- protection_map[1] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC);
- protection_map[2] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
- protection_map[3] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC);
- protection_map[4] = PM(_PAGE_PRESENT);
- protection_map[5] = PM(_PAGE_PRESENT);
- protection_map[6] = PM(_PAGE_PRESENT);
- protection_map[7] = PM(_PAGE_PRESENT);
-
- protection_map[8] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
- protection_map[9] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC);
- protection_map[10] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE |
- _PAGE_NO_READ);
- protection_map[11] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE);
- protection_map[12] = PM(_PAGE_PRESENT);
- protection_map[13] = PM(_PAGE_PRESENT);
- protection_map[14] = PM(_PAGE_PRESENT | _PAGE_WRITE);
- protection_map[15] = PM(_PAGE_PRESENT | _PAGE_WRITE);
-}
-
-#undef PM
-
void cpu_cache_init(void)
{
if (cpu_has_3k_cache) {
@@ -201,6 +177,40 @@ void cpu_cache_init(void)
octeon_cache_init();
}
-
- setup_protection_map();
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
+ case VM_READ:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC);
+ case VM_WRITE:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
+ case VM_WRITE | VM_READ:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC);
+ case VM_EXEC:
+ case VM_EXEC | VM_READ:
+ case VM_EXEC | VM_WRITE:
+ case VM_EXEC | VM_WRITE | VM_READ:
+ return PM(_PAGE_PRESENT);
+ case VM_SHARED:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
+ case VM_SHARED | VM_READ:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC);
+ case VM_SHARED | VM_WRITE:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE | _PAGE_NO_READ);
+ case VM_SHARED | VM_WRITE | VM_READ:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE);
+ case VM_SHARED | VM_EXEC:
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PM(_PAGE_PRESENT);
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ case VM_SHARED | VM_EXEC | VM_WRITE | VM_READ:
+ return PM(_PAGE_PRESENT | _PAGE_WRITE);
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1
* [PATCH 4/6] csky/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-03 10:14 [PATCH 0/6] mm/mmap: Enable more platforms with ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
` (2 preceding siblings ...)
2022-06-03 10:14 ` [PATCH 3/6] mips/mm: " Anshuman Khandual
@ 2022-06-03 10:14 ` Anshuman Khandual
2022-06-04 12:13 ` Guo Ren
2022-06-03 10:14 ` [PATCH 5/6] nios2/mm: " Anshuman Khandual
2022-06-03 10:14 ` [PATCH 6/6] openrisc/mm: " Anshuman Khandual
5 siblings, 1 reply; 15+ messages in thread
From: Anshuman Khandual @ 2022-06-03 10:14 UTC (permalink / raw)
To: linux-mm
Cc: Catalin Marinas, linux-kernel, Paul Mackerras, sparclinux,
Will Deacon, Jonas Bonn, linux-s390, x86, linux-csky, Ingo Molnar,
Geert Uytterhoeven, Vasily Gorbik, Anshuman Khandual,
Heiko Carstens, openrisc, Thomas Gleixner, linux-arm-kernel,
Thomas Bogendoerfer, linux-mips, Dinh Nguyen, Guo Ren,
Andrew Morton, linuxppc-dev, David S. Miller
This defines and exports a platform specific custom vm_get_page_prot() via
subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all __SXXX and __PXXX
macros, which are no longer needed, can be dropped.
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: linux-csky@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Guo Ren <guoren@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/csky/Kconfig | 1 +
arch/csky/include/asm/pgtable.h | 18 ------------------
arch/csky/mm/init.c | 32 ++++++++++++++++++++++++++++++++
3 files changed, 33 insertions(+), 18 deletions(-)
diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
index 21d72b078eef..588b8a9c68ed 100644
--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -6,6 +6,7 @@ config CSKY
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && $(cc-option,-mbacktrace)
diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
index bbe245117777..229a5f4ad7fc 100644
--- a/arch/csky/include/asm/pgtable.h
+++ b/arch/csky/include/asm/pgtable.h
@@ -77,24 +77,6 @@
#define MAX_SWAPFILES_CHECK() \
BUILD_BUG_ON(MAX_SWAPFILES_SHIFT != 5)
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READ
-#define __P010 PAGE_READ
-#define __P011 PAGE_READ
-#define __P100 PAGE_READ
-#define __P101 PAGE_READ
-#define __P110 PAGE_READ
-#define __P111 PAGE_READ
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READ
-#define __S010 PAGE_WRITE
-#define __S011 PAGE_WRITE
-#define __S100 PAGE_READ
-#define __S101 PAGE_READ
-#define __S110 PAGE_WRITE
-#define __S111 PAGE_WRITE
-
extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
diff --git a/arch/csky/mm/init.c b/arch/csky/mm/init.c
index bf2004aa811a..f9babbed17d4 100644
--- a/arch/csky/mm/init.c
+++ b/arch/csky/mm/init.c
@@ -197,3 +197,35 @@ void __init fixaddr_init(void)
vaddr = __fix_to_virt(__end_of_fixed_addresses - 1) & PMD_MASK;
fixrange_init(vaddr, vaddr + PMD_SIZE, swapper_pg_dir);
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ case VM_WRITE:
+ case VM_WRITE | VM_READ:
+ case VM_EXEC:
+ case VM_EXEC | VM_READ:
+ case VM_EXEC | VM_WRITE:
+ case VM_EXEC | VM_WRITE | VM_READ:
+ return PAGE_READ;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READ;
+ case VM_SHARED | VM_WRITE:
+ case VM_SHARED | VM_WRITE | VM_READ:
+ return PAGE_WRITE;
+ case VM_SHARED | VM_EXEC:
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_READ;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ case VM_SHARED | VM_EXEC | VM_WRITE | VM_READ:
+ return PAGE_WRITE;
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1
* Re: [PATCH 4/6] csky/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-03 10:14 ` [PATCH 4/6] csky/mm: " Anshuman Khandual
@ 2022-06-04 12:13 ` Guo Ren
2022-06-05 9:50 ` Anshuman Khandual
0 siblings, 1 reply; 15+ messages in thread
From: Guo Ren @ 2022-06-04 12:13 UTC (permalink / raw)
To: Anshuman Khandual
Cc: Catalin Marinas, Linux Kernel Mailing List, Linux-MM,
Paul Mackerras, sparclinux, Will Deacon, Jonas Bonn, linux-s390,
the arch/x86 maintainers, linux-csky, Ingo Molnar,
Geert Uytterhoeven, Vasily Gorbik, Heiko Carstens, Openrisc,
Thomas Gleixner, Linux ARM, Thomas Bogendoerfer,
open list:BROADCOM NVRAM DRIVER, Dinh Nguyen, Andrew Morton,
linuxppc-dev, David S. Miller
Acked-by: Guo Ren <guoren@kernel.org>
On Fri, Jun 3, 2022 at 6:15 PM Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
> This defines and exports a platform-specific custom vm_get_page_prot() by
> subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and
> __PXXX macros, which are no longer needed, can be dropped.
>
> Cc: Geert Uytterhoeven <geert@linux-m68k.org>
> Cc: linux-csky@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Acked-by: Guo Ren <guoren@kernel.org>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> [...]
--
Best Regards
Guo Ren
ML: https://lore.kernel.org/linux-csky/
* Re: [PATCH 4/6] csky/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-04 12:13 ` Guo Ren
@ 2022-06-05 9:50 ` Anshuman Khandual
0 siblings, 0 replies; 15+ messages in thread
From: Anshuman Khandual @ 2022-06-05 9:50 UTC (permalink / raw)
To: Guo Ren
Cc: Catalin Marinas, Linux Kernel Mailing List, Linux-MM,
Paul Mackerras, sparclinux, Will Deacon, Jonas Bonn, linux-s390,
the arch/x86 maintainers, linux-csky, Ingo Molnar,
Geert Uytterhoeven, Vasily Gorbik, Heiko Carstens, Openrisc,
Thomas Gleixner, Linux ARM, Thomas Bogendoerfer,
open list:BROADCOM NVRAM DRIVER, Dinh Nguyen, Andrew Morton,
linuxppc-dev, David S. Miller
On 6/4/22 17:43, Guo Ren wrote:
> Acked-by: Guo Ren <guoren@kernel.org>
I will resend this series with the suggested changes.
>
> On Fri, Jun 3, 2022 at 6:15 PM Anshuman Khandual
> <anshuman.khandual@arm.com> wrote:
>> [...]
* [PATCH 5/6] nios2/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-03 10:14 [PATCH 0/6] mm/mmap: Enable more platforms with ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
` (3 preceding siblings ...)
2022-06-03 10:14 ` [PATCH 4/6] csky/mm: " Anshuman Khandual
@ 2022-06-03 10:14 ` Anshuman Khandual
2022-06-03 10:14 ` [PATCH 6/6] openrisc/mm: " Anshuman Khandual
5 siblings, 0 replies; 15+ messages in thread
From: Anshuman Khandual @ 2022-06-03 10:14 UTC (permalink / raw)
To: linux-mm
Cc: Catalin Marinas, linux-kernel, Paul Mackerras, sparclinux,
Will Deacon, Jonas Bonn, linux-s390, x86, linux-csky, Ingo Molnar,
Geert Uytterhoeven, Vasily Gorbik, Anshuman Khandual,
Heiko Carstens, openrisc, Thomas Gleixner, linux-arm-kernel,
Thomas Bogendoerfer, linux-mips, Dinh Nguyen, Andrew Morton,
linuxppc-dev, David S. Miller
This defines and exports a platform-specific custom vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.
Cc: Dinh Nguyen <dinguyen@kernel.org>
Cc: linux-kernel@vger.kernel.org
Acked-by: Dinh Nguyen <dinguyen@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/nios2/Kconfig | 1 +
arch/nios2/include/asm/pgtable.h | 24 ----------------
arch/nios2/mm/init.c | 47 ++++++++++++++++++++++++++++++++
3 files changed, 48 insertions(+), 24 deletions(-)
diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig
index 4167f1eb4cd8..e0459dffd218 100644
--- a/arch/nios2/Kconfig
+++ b/arch/nios2/Kconfig
@@ -6,6 +6,7 @@ config NIOS2
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select ARCH_HAS_DMA_SET_UNCACHED
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_SWAP
select COMMON_CLK
select TIMER_OF
diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
index 262d0609268c..3c9f83c22733 100644
--- a/arch/nios2/include/asm/pgtable.h
+++ b/arch/nios2/include/asm/pgtable.h
@@ -34,30 +34,6 @@ struct mm_struct;
((x) ? _PAGE_EXEC : 0) | \
((r) ? _PAGE_READ : 0) | \
((w) ? _PAGE_WRITE : 0))
-/*
- * These are the macros that generic kernel code needs
- * (to populate protection_map[])
- */
-
-/* Remove W bit on private pages for COW support */
-#define __P000 MKP(0, 0, 0)
-#define __P001 MKP(0, 0, 1)
-#define __P010 MKP(0, 0, 0) /* COW */
-#define __P011 MKP(0, 0, 1) /* COW */
-#define __P100 MKP(1, 0, 0)
-#define __P101 MKP(1, 0, 1)
-#define __P110 MKP(1, 0, 0) /* COW */
-#define __P111 MKP(1, 0, 1) /* COW */
-
-/* Shared pages can have exact HW mapping */
-#define __S000 MKP(0, 0, 0)
-#define __S001 MKP(0, 0, 1)
-#define __S010 MKP(0, 1, 0)
-#define __S011 MKP(0, 1, 1)
-#define __S100 MKP(1, 0, 0)
-#define __S101 MKP(1, 0, 1)
-#define __S110 MKP(1, 1, 0)
-#define __S111 MKP(1, 1, 1)
/* Used all over the kernel */
#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_CACHED | _PAGE_READ | \
diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c
index 613fcaa5988a..e867f5d85580 100644
--- a/arch/nios2/mm/init.c
+++ b/arch/nios2/mm/init.c
@@ -124,3 +124,50 @@ const char *arch_vma_name(struct vm_area_struct *vma)
{
return (vma->vm_start == KUSER_BASE) ? "[kuser]" : NULL;
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ /* Remove W bit on private pages for COW support */
+ case VM_NONE:
+ return MKP(0, 0, 0);
+ case VM_READ:
+ return MKP(0, 0, 1);
+ /* COW */
+ case VM_WRITE:
+ return MKP(0, 0, 0);
+ /* COW */
+ case VM_WRITE | VM_READ:
+ return MKP(0, 0, 1);
+ case VM_EXEC:
+ return MKP(1, 0, 0);
+ case VM_EXEC | VM_READ:
+ return MKP(1, 0, 1);
+ /* COW */
+ case VM_EXEC | VM_WRITE:
+ return MKP(1, 0, 0);
+ /* COW */
+ case VM_EXEC | VM_WRITE | VM_READ:
+ return MKP(1, 0, 1);
+ /* Shared pages can have exact HW mapping */
+ case VM_SHARED:
+ return MKP(0, 0, 0);
+ case VM_SHARED | VM_READ:
+ return MKP(0, 0, 1);
+ case VM_SHARED | VM_WRITE:
+ return MKP(0, 1, 0);
+ case VM_SHARED | VM_WRITE | VM_READ:
+ return MKP(0, 1, 1);
+ case VM_SHARED | VM_EXEC:
+ return MKP(1, 0, 0);
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return MKP(1, 0, 1);
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return MKP(1, 1, 0);
+ case VM_SHARED | VM_EXEC | VM_WRITE | VM_READ:
+ return MKP(1, 1, 1);
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1
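[Editor's note: because MKP(x, w, r) builds the protection bits
directly, the sixteen cases above can also be expressed as a single
computation. A minimal sketch, assuming the same MKP() macro and the
COW rule stated in the comments; this compact form is not part of the
posted patch:

	pgprot_t vm_get_page_prot(unsigned long vm_flags)
	{
		/* Private mappings never get the W bit: writes must
		 * fault so generic mm can resolve them as COW.
		 */
		bool w = (vm_flags & VM_SHARED) && (vm_flags & VM_WRITE);

		return MKP(!!(vm_flags & VM_EXEC), w,
			   !!(vm_flags & VM_READ));
	}

Whether the explicit switch or the computed form reads better is a
style call; both return the same protections for all sixteen flag
combinations.]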
* [PATCH 6/6] openrisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-03 10:14 [PATCH 0/6] mm/mmap: Enable more platforms with ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
` (4 preceding siblings ...)
2022-06-03 10:14 ` [PATCH 5/6] nios2/mm: " Anshuman Khandual
@ 2022-06-03 10:14 ` Anshuman Khandual
2022-06-05 6:07 ` Stafford Horne
5 siblings, 1 reply; 15+ messages in thread
From: Anshuman Khandual @ 2022-06-03 10:14 UTC (permalink / raw)
To: linux-mm
Cc: Catalin Marinas, linux-kernel, Paul Mackerras, sparclinux,
Will Deacon, Stafford Horne, Jonas Bonn, linux-s390, x86,
linux-csky, Ingo Molnar, Geert Uytterhoeven, Vasily Gorbik,
Anshuman Khandual, Heiko Carstens, openrisc, Thomas Gleixner,
linux-arm-kernel, Thomas Bogendoerfer, linux-mips, Dinh Nguyen,
Andrew Morton, linuxppc-dev, David S. Miller
This defines and exports a platform-specific custom vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.
Cc: Jonas Bonn <jonas@southpole.se>
Cc: openrisc@lists.librecores.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/openrisc/Kconfig | 1 +
arch/openrisc/include/asm/pgtable.h | 18 -------------
arch/openrisc/mm/init.c | 41 +++++++++++++++++++++++++++++
3 files changed, 42 insertions(+), 18 deletions(-)
diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
index e814df4c483c..fe0dfb50eb86 100644
--- a/arch/openrisc/Kconfig
+++ b/arch/openrisc/Kconfig
@@ -10,6 +10,7 @@ config OPENRISC
select ARCH_HAS_DMA_SET_UNCACHED
select ARCH_HAS_DMA_CLEAR_UNCACHED
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+ select ARCH_HAS_VM_GET_PAGE_PROT
select COMMON_CLK
select OF
select OF_EARLY_FLATTREE
diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h
index c3abbf71e09f..dcae8aea132f 100644
--- a/arch/openrisc/include/asm/pgtable.h
+++ b/arch/openrisc/include/asm/pgtable.h
@@ -176,24 +176,6 @@ extern void paging_init(void);
__pgprot(_PAGE_ALL | _PAGE_SRE | _PAGE_SWE \
| _PAGE_SHARED | _PAGE_DIRTY | _PAGE_EXEC | _PAGE_CI)
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY_X
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY_X
-#define __P100 PAGE_READONLY
-#define __P101 PAGE_READONLY_X
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY_X
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY_X
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED_X
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY_X
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED_X
-
/* zero page used for uninitialized stuff */
extern unsigned long empty_zero_page[2048];
#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
index 3a021ab6f1ae..266dc68c32e5 100644
--- a/arch/openrisc/mm/init.c
+++ b/arch/openrisc/mm/init.c
@@ -208,3 +208,44 @@ void __init mem_init(void)
mem_init_done = 1;
return;
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_READONLY_X;
+ case VM_WRITE:
+ return PAGE_COPY;
+ case VM_WRITE | VM_READ:
+ return PAGE_COPY_X;
+ case VM_EXEC:
+ return PAGE_READONLY;
+ case VM_EXEC | VM_READ:
+ return PAGE_READONLY_X;
+ case VM_EXEC | VM_WRITE:
+ return PAGE_COPY;
+ case VM_EXEC | VM_WRITE | VM_READ:
+ return PAGE_COPY_X;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READONLY_X;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_WRITE | VM_READ:
+ return PAGE_SHARED_X;
+ case VM_SHARED | VM_EXEC:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_READONLY_X;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC | VM_WRITE | VM_READ:
+ return PAGE_SHARED_X;
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1
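[Editor's note: for context, generic mm consumes this hook whenever a
VMA's protections are computed. Simplified from the mm/mmap.c of this
era, the call site in mmap_region() is essentially:

	vma->vm_flags = vm_flags;
	vma->vm_page_prot = vm_get_page_prot(vm_flags);

so every mmap() resolves through either the generic protection_map[]
lookup or, with ARCH_HAS_VM_GET_PAGE_PROT selected, a per-arch
function like the ones added in this series.]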
* Re: [PATCH 6/6] openrisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-03 10:14 ` [PATCH 6/6] openrisc/mm: " Anshuman Khandual
@ 2022-06-05 6:07 ` Stafford Horne
2022-06-05 9:48 ` Anshuman Khandual
0 siblings, 1 reply; 15+ messages in thread
From: Stafford Horne @ 2022-06-05 6:07 UTC (permalink / raw)
To: Anshuman Khandual
Cc: Catalin Marinas, linux-kernel, linux-mm, Paul Mackerras,
sparclinux, Will Deacon, Jonas Bonn, linux-s390, x86, linux-csky,
Ingo Molnar, Geert Uytterhoeven, Vasily Gorbik, Heiko Carstens,
openrisc, Thomas Gleixner, linux-arm-kernel, Thomas Bogendoerfer,
linux-mips, Dinh Nguyen, Andrew Morton, linuxppc-dev,
David S. Miller
On Fri, Jun 03, 2022 at 03:44:11PM +0530, Anshuman Khandual wrote:
> This defines and exports a platform-specific custom vm_get_page_prot() by
> subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and
> __PXXX macros, which are no longer needed, can be dropped.
>
> Cc: Jonas Bonn <jonas@southpole.se>
> Cc: openrisc@lists.librecores.org
> Cc: linux-kernel@vger.kernel.org
> Acked-by: Stafford Horne <shorne@gmail.com>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Is it possible to retract my Acked-by? I have been following the discussion
about this new function actually being suboptimal, so as far as I am
concerned all these architecture patches should be NAK'ed.
-Stafford
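[Editor's note: the suboptimality concern, as the follow-up below also
suggests, is one of code generation: the generic implementation is a
single masked table lookup, while the per-arch switch is control flow
that the compiler must both optimize and prove exhaustive, since the
BUILD_BUG() in the default case only compiles away when every one of
the 16 masked values has an explicit case. A sketch of the two shapes,
assuming the definitions used in this series (illustrative only):

	/* generic shape: one masked load from a 16-entry table */
	prot = protection_map[vm_flags &
			(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];

	/* per-arch shape: a switch; if any masked value lacked a
	 * case, the default's BUILD_BUG() would survive optimization
	 * and trip a build error
	 */
	prot = vm_get_page_prot(vm_flags);
]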
* Re: [PATCH 6/6] openrisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-05 6:07 ` Stafford Horne
@ 2022-06-05 9:48 ` Anshuman Khandual
0 siblings, 0 replies; 15+ messages in thread
From: Anshuman Khandual @ 2022-06-05 9:48 UTC (permalink / raw)
To: Stafford Horne
Cc: Catalin Marinas, linux-kernel, linux-mm, Paul Mackerras,
sparclinux, Will Deacon, Jonas Bonn, linux-s390, x86, linux-csky,
Ingo Molnar, Geert Uytterhoeven, Vasily Gorbik, Heiko Carstens,
openrisc, Thomas Gleixner, linux-arm-kernel, Thomas Bogendoerfer,
linux-mips, Dinh Nguyen, Andrew Morton, linuxppc-dev,
David S. Miller
On 6/5/22 11:37, Stafford Horne wrote:
> On Fri, Jun 03, 2022 at 03:44:11PM +0530, Anshuman Khandual wrote:
>> This defines and exports a platform-specific custom vm_get_page_prot() by
>> subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and
>> __PXXX macros, which are no longer needed, can be dropped.
>>
>> Cc: Jonas Bonn <jonas@southpole.se>
>> Cc: openrisc@lists.librecores.org
>> Cc: linux-kernel@vger.kernel.org
>> Acked-by: Stafford Horne <shorne@gmail.com>
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>
> Is it possible to retract my Acked-by? I have been following the discussion
> about this new function actually being suboptimal, so as far as I am
> concerned all these architecture patches should be NAK'ed.
Sure, alright. I am planning to redo these arch patches by making
the protection_map[] array private to the platforms, possibly with
a common lookup function as Christophe had suggested earlier.