* [PATCH V4 01/26] mm/mmap: Build protect protection_map[] with __P000
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 5:08 ` Christoph Hellwig
2022-06-27 5:47 ` Christophe Leroy
2022-06-24 4:55 ` [PATCH V4 02/26] mm/mmap: Define DECLARE_VM_GET_PAGE_PROT Anshuman Khandual
` (25 subsequent siblings)
26 siblings, 2 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel
Build protect generic protection_map[] array with __P000, so that it can be
moved inside all the platforms one after the other. Otherwise there will be
build failures during this process. CONFIG_ARCH_HAS_VM_GET_PAGE_PROT cannot
be used for this purpose as only certain platforms enable this config now.
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
include/linux/mm.h | 2 ++
mm/mmap.c | 2 ++
2 files changed, 4 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index bc8f326be0ce..47bfe038d46e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -424,7 +424,9 @@ extern unsigned int kobjsize(const void *objp);
* mapping from the currently active vm_flags protection bits (the
* low four bits) to a page protection mask..
*/
+#ifdef __P000
extern pgprot_t protection_map[16];
+#endif
/*
* The default fault flags that should be used by most of the
diff --git a/mm/mmap.c b/mm/mmap.c
index 61e6135c54ef..b01f0280bda2 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -101,6 +101,7 @@ static void unmap_region(struct mm_struct *mm,
* w: (no) no
* x: (yes) yes
*/
+#ifdef __P000
pgprot_t protection_map[16] __ro_after_init = {
[VM_NONE] = __P000,
[VM_READ] = __P001,
@@ -119,6 +120,7 @@ pgprot_t protection_map[16] __ro_after_init = {
[VM_SHARED | VM_EXEC | VM_WRITE] = __S110,
[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __S111
};
+#endif
#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
pgprot_t vm_get_page_prot(unsigned long vm_flags)
--
2.25.1
^ permalink raw reply related	[flat|nested] 52+ messages in thread
* Re: [PATCH V4 01/26] mm/mmap: Build protect protection_map[] with __P000
2022-06-24 4:55 ` [PATCH V4 01/26] mm/mmap: Build protect protection_map[] with __P000 Anshuman Khandual
@ 2022-06-24 5:08 ` Christoph Hellwig
2022-06-27 5:47 ` Christophe Leroy
1 sibling, 0 replies; 52+ messages in thread
From: Christoph Hellwig @ 2022-06-24 5:08 UTC (permalink / raw)
To: Anshuman Khandual
Cc: linux-mm, hch, christophe.leroy, Andrew Morton, linuxppc-dev,
sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH V4 01/26] mm/mmap: Build protect protection_map[] with __P000
2022-06-24 4:55 ` [PATCH V4 01/26] mm/mmap: Build protect protection_map[] with __P000 Anshuman Khandual
2022-06-24 5:08 ` Christoph Hellwig
@ 2022-06-27 5:47 ` Christophe Leroy
1 sibling, 0 replies; 52+ messages in thread
From: Christophe Leroy @ 2022-06-27 5:47 UTC (permalink / raw)
To: Anshuman Khandual, linux-mm@kvack.org
Cc: hch@infradead.org, Andrew Morton, linuxppc-dev@lists.ozlabs.org,
sparclinux@vger.kernel.org, x86@kernel.org,
openrisc@lists.librecores.org, linux-xtensa@linux-xtensa.org,
linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-alpha@vger.kernel.org,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org,
linux-m68k@lists.linux-m68k.org,
linux-snps-arc@lists.infradead.org,
linux-arm-kernel@lists.infradead.org,
linux-um@lists.infradead.org, linux-sh@vger.kernel.org,
linux-kernel@vger.kernel.org
On 24/06/2022 at 06:43, Anshuman Khandual wrote:
> Build protect generic protection_map[] array with __P000, so that it can be
> moved inside all the platforms one after the other. Otherwise there will be
> build failures during this process. CONFIG_ARCH_HAS_VM_GET_PAGE_PROT cannot
> be used for this purpose as only certain platforms enable this config now.
> 
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>

Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>

> ---
>   include/linux/mm.h | 2 ++
>   mm/mmap.c          | 2 ++
>   2 files changed, 4 insertions(+)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index bc8f326be0ce..47bfe038d46e 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -424,7 +424,9 @@ extern unsigned int kobjsize(const void *objp);
>    * mapping from the currently active vm_flags protection bits (the
>    * low four bits) to a page protection mask..
>    */
> +#ifdef __P000
>   extern pgprot_t protection_map[16];
> +#endif
>   
>   /*
>    * The default fault flags that should be used by most of the
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 61e6135c54ef..b01f0280bda2 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -101,6 +101,7 @@ static void unmap_region(struct mm_struct *mm,
>    *								w: (no) no
>    *								x: (yes) yes
>    */
> +#ifdef __P000
>   pgprot_t protection_map[16] __ro_after_init = {
>   	[VM_NONE]					= __P000,
>   	[VM_READ]					= __P001,
> @@ -119,6 +120,7 @@ pgprot_t protection_map[16] __ro_after_init = {
>   	[VM_SHARED | VM_EXEC | VM_WRITE]		= __S110,
>   	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= __S111
>   };
> +#endif
>   
>   #ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>   pgprot_t vm_get_page_prot(unsigned long vm_flags)
^ permalink raw reply [flat|nested] 52+ messages in thread
* [PATCH V4 02/26] mm/mmap: Define DECLARE_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 01/26] mm/mmap: Build protect protection_map[] with __P000 Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 5:09 ` Christoph Hellwig
` (2 more replies)
2022-06-24 4:55 ` [PATCH V4 03/26] powerpc/mm: Move protection_map[] inside the platform Anshuman Khandual
` (24 subsequent siblings)
26 siblings, 3 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel
This just converts the generic vm_get_page_prot() implementation into a new
macro, i.e. DECLARE_VM_GET_PAGE_PROT, which can later be used across platforms
when enabling them with ARCH_HAS_VM_GET_PAGE_PROT. This does not create any
functional change.
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
include/linux/mm.h | 8 ++++++++
mm/mmap.c | 6 +-----
2 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 47bfe038d46e..237828c2bae2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -428,6 +428,14 @@ extern unsigned int kobjsize(const void *objp);
extern pgprot_t protection_map[16];
#endif
+#define DECLARE_VM_GET_PAGE_PROT \
+pgprot_t vm_get_page_prot(unsigned long vm_flags) \
+{ \
+ return protection_map[vm_flags & \
+ (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)]; \
+} \
+EXPORT_SYMBOL(vm_get_page_prot);
+
/*
* The default fault flags that should be used by most of the
* arch-specific page fault handlers.
diff --git a/mm/mmap.c b/mm/mmap.c
index b01f0280bda2..55c30aee3999 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -123,11 +123,7 @@ pgprot_t protection_map[16] __ro_after_init = {
#endif
#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
-pgprot_t vm_get_page_prot(unsigned long vm_flags)
-{
- return protection_map[vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)];
-}
-EXPORT_SYMBOL(vm_get_page_prot);
+DECLARE_VM_GET_PAGE_PROT
#endif /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
--
2.25.1
^ permalink raw reply related	[flat|nested] 52+ messages in thread
* Re: [PATCH V4 02/26] mm/mmap: Define DECLARE_VM_GET_PAGE_PROT
2022-06-24 4:55 ` [PATCH V4 02/26] mm/mmap: Define DECLARE_VM_GET_PAGE_PROT Anshuman Khandual
@ 2022-06-24 5:09 ` Christoph Hellwig
2022-06-24 6:21 ` Anshuman Khandual
2022-06-24 5:28 ` Christophe Leroy
2022-06-27 5:47 ` Christophe Leroy
2 siblings, 1 reply; 52+ messages in thread
From: Christoph Hellwig @ 2022-06-24 5:09 UTC (permalink / raw)
To: Anshuman Khandual
Cc: linux-mm, hch, christophe.leroy, Andrew Morton, linuxppc-dev,
sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel
On Fri, Jun 24, 2022 at 10:13:15AM +0530, Anshuman Khandual wrote:
> This just converts the generic vm_get_page_prot() implementation into a new
> macro i.e DECLARE_VM_GET_PAGE_PROT which later can be used across platforms
> when enabling them with ARCH_HAS_VM_GET_PAGE_PROT. This does not create any
> functional change.
mm.h is a huge header included by almost everything in the kernel.
I'd rather have it in something only included in a few files. If we
can't find anything suitable it might even be worth adding a header
just for this.
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH V4 02/26] mm/mmap: Define DECLARE_VM_GET_PAGE_PROT
2022-06-24 5:09 ` Christoph Hellwig
@ 2022-06-24 6:21 ` Anshuman Khandual
0 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 6:21 UTC (permalink / raw)
To: Christoph Hellwig
Cc: linux-mm, christophe.leroy, Andrew Morton, linuxppc-dev,
sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel
On 6/24/22 10:39, Christoph Hellwig wrote:
> On Fri, Jun 24, 2022 at 10:13:15AM +0530, Anshuman Khandual wrote:
>> This just converts the generic vm_get_page_prot() implementation into a new
>> macro i.e DECLARE_VM_GET_PAGE_PROT which later can be used across platforms
>> when enabling them with ARCH_HAS_VM_GET_PAGE_PROT. This does not create any
>> functional change.
>
> mm.h is a huge header included by almost everything in the kernel.
> I'd rather have it in something only included in a few files. If we
> can't find anything suitable it might even be worth adding a header
> just for this.
I guess <linux/pgtable.h> should be better ?
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH V4 02/26] mm/mmap: Define DECLARE_VM_GET_PAGE_PROT
2022-06-24 4:55 ` [PATCH V4 02/26] mm/mmap: Define DECLARE_VM_GET_PAGE_PROT Anshuman Khandual
2022-06-24 5:09 ` Christoph Hellwig
@ 2022-06-24 5:28 ` Christophe Leroy
2022-06-24 6:23 ` Anshuman Khandual
2022-06-27 5:47 ` Christophe Leroy
2 siblings, 1 reply; 52+ messages in thread
From: Christophe Leroy @ 2022-06-24 5:28 UTC (permalink / raw)
To: Anshuman Khandual, linux-mm@kvack.org
Cc: hch@infradead.org, Andrew Morton, linuxppc-dev@lists.ozlabs.org,
sparclinux@vger.kernel.org, x86@kernel.org,
openrisc@lists.librecores.org, linux-xtensa@linux-xtensa.org,
linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-alpha@vger.kernel.org,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org,
linux-m68k@lists.linux-m68k.org,
linux-snps-arc@lists.infradead.org,
linux-arm-kernel@lists.infradead.org,
linux-um@lists.infradead.org, linux-sh@vger.kernel.org,
linux-kernel@vger.kernel.org
On 24/06/2022 at 06:43, Anshuman Khandual wrote:
> This just converts the generic vm_get_page_prot() implementation into a new
> macro i.e DECLARE_VM_GET_PAGE_PROT which later can be used across platforms
> when enabling them with ARCH_HAS_VM_GET_PAGE_PROT. This does not create any
> functional change.
> 
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Suggested-by: Christoph Hellwig <hch@infradead.org>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
>   include/linux/mm.h | 8 ++++++++
>   mm/mmap.c          | 6 +-----
>   2 files changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 47bfe038d46e..237828c2bae2 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -428,6 +428,14 @@ extern unsigned int kobjsize(const void *objp);
>   extern pgprot_t protection_map[16];
>   #endif
>   

I think the comment above protection_map[16] in mm/mmap.c should be 
moved here.

> +#define DECLARE_VM_GET_PAGE_PROT					\
> +pgprot_t vm_get_page_prot(unsigned long vm_flags)			\
> +{									\
> +		return protection_map[vm_flags &			\
> +			(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];	\
> +}									\
> +EXPORT_SYMBOL(vm_get_page_prot);
> +
>   /*
>    * The default fault flags that should be used by most of the
>    * arch-specific page fault handlers.
> diff --git a/mm/mmap.c b/mm/mmap.c
> index b01f0280bda2..55c30aee3999 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -123,11 +123,7 @@ pgprot_t protection_map[16] __ro_after_init = {
>   #endif
>   
>   #ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
> -pgprot_t vm_get_page_prot(unsigned long vm_flags)
> -{
> -	return protection_map[vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)];
> -}
> -EXPORT_SYMBOL(vm_get_page_prot);
> +DECLARE_VM_GET_PAGE_PROT
>   #endif	/* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
>   
>   static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH V4 02/26] mm/mmap: Define DECLARE_VM_GET_PAGE_PROT
2022-06-24 5:28 ` Christophe Leroy
@ 2022-06-24 6:23 ` Anshuman Khandual
0 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 6:23 UTC (permalink / raw)
To: Christophe Leroy, linux-mm@kvack.org
Cc: hch@infradead.org, Andrew Morton, linuxppc-dev@lists.ozlabs.org,
sparclinux@vger.kernel.org, x86@kernel.org,
openrisc@lists.librecores.org, linux-xtensa@linux-xtensa.org,
linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-alpha@vger.kernel.org,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org,
linux-m68k@lists.linux-m68k.org,
linux-snps-arc@lists.infradead.org,
linux-arm-kernel@lists.infradead.org,
linux-um@lists.infradead.org, linux-sh@vger.kernel.org,
linux-kernel@vger.kernel.org
On 6/24/22 10:58, Christophe Leroy wrote:
>
> Le 24/06/2022 à 06:43, Anshuman Khandual a écrit :
>> This just converts the generic vm_get_page_prot() implementation into a new
>> macro i.e DECLARE_VM_GET_PAGE_PROT which later can be used across platforms
>> when enabling them with ARCH_HAS_VM_GET_PAGE_PROT. This does not create any
>> functional change.
>>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: linux-mm@kvack.org
>> Cc: linux-kernel@vger.kernel.org
>> Suggested-by: Christoph Hellwig <hch@infradead.org>
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> ---
>> include/linux/mm.h | 8 ++++++++
>> mm/mmap.c | 6 +-----
>> 2 files changed, 9 insertions(+), 5 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 47bfe038d46e..237828c2bae2 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -428,6 +428,14 @@ extern unsigned int kobjsize(const void *objp);
>> extern pgprot_t protection_map[16];
>> #endif
>>
> I think the comment above protection_map[16] in mm/mmap.c should be
> moved here.
Sure, makes sense. Will move.
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH V4 02/26] mm/mmap: Define DECLARE_VM_GET_PAGE_PROT
2022-06-24 4:55 ` [PATCH V4 02/26] mm/mmap: Define DECLARE_VM_GET_PAGE_PROT Anshuman Khandual
2022-06-24 5:09 ` Christoph Hellwig
2022-06-24 5:28 ` Christophe Leroy
@ 2022-06-27 5:47 ` Christophe Leroy
2 siblings, 0 replies; 52+ messages in thread
From: Christophe Leroy @ 2022-06-27 5:47 UTC (permalink / raw)
To: Anshuman Khandual, linux-mm@kvack.org
Cc: hch@infradead.org, Andrew Morton, linuxppc-dev@lists.ozlabs.org,
sparclinux@vger.kernel.org, x86@kernel.org,
openrisc@lists.librecores.org, linux-xtensa@linux-xtensa.org,
linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-alpha@vger.kernel.org,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org,
linux-m68k@lists.linux-m68k.org,
linux-snps-arc@lists.infradead.org,
linux-arm-kernel@lists.infradead.org,
linux-um@lists.infradead.org, linux-sh@vger.kernel.org,
linux-kernel@vger.kernel.org
On 24/06/2022 at 06:43, Anshuman Khandual wrote:
> This just converts the generic vm_get_page_prot() implementation into a new
> macro i.e DECLARE_VM_GET_PAGE_PROT which later can be used across platforms
> when enabling them with ARCH_HAS_VM_GET_PAGE_PROT. This does not create any
> functional change.
> 
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Suggested-by: Christoph Hellwig <hch@infradead.org>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>

Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>

> ---
>   include/linux/mm.h | 8 ++++++++
>   mm/mmap.c          | 6 +-----
>   2 files changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 47bfe038d46e..237828c2bae2 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -428,6 +428,14 @@ extern unsigned int kobjsize(const void *objp);
>   extern pgprot_t protection_map[16];
>   #endif
>   
> +#define DECLARE_VM_GET_PAGE_PROT					\
> +pgprot_t vm_get_page_prot(unsigned long vm_flags)			\
> +{									\
> +		return protection_map[vm_flags &			\
> +			(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];	\
> +}									\
> +EXPORT_SYMBOL(vm_get_page_prot);
> +
>   /*
>    * The default fault flags that should be used by most of the
>    * arch-specific page fault handlers.
> diff --git a/mm/mmap.c b/mm/mmap.c
> index b01f0280bda2..55c30aee3999 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -123,11 +123,7 @@ pgprot_t protection_map[16] __ro_after_init = {
>   #endif
>   
>   #ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
> -pgprot_t vm_get_page_prot(unsigned long vm_flags)
> -{
> -	return protection_map[vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)];
> -}
> -EXPORT_SYMBOL(vm_get_page_prot);
> +DECLARE_VM_GET_PAGE_PROT
>   #endif	/* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
>   
>   static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
^ permalink raw reply [flat|nested] 52+ messages in thread
* [PATCH V4 03/26] powerpc/mm: Move protection_map[] inside the platform
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 01/26] mm/mmap: Build protect protection_map[] with __P000 Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 02/26] mm/mmap: Define DECLARE_VM_GET_PAGE_PROT Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 5:18 ` Christophe Leroy
2022-06-27 5:48 ` Christophe Leroy
2022-06-24 4:55 ` [PATCH V4 04/26] sparc/mm: " Anshuman Khandual
` (23 subsequent siblings)
26 siblings, 2 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel,
Michael Ellerman, Paul Mackerras, Nicholas Piggin
This moves protection_map[] inside the platform and while here, also enable
ARCH_HAS_VM_GET_PAGE_PROT on 32 bit platforms via DECLARE_VM_GET_PAGE_PROT.
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/powerpc/Kconfig | 2 +-
arch/powerpc/include/asm/pgtable.h | 20 +-------------------
arch/powerpc/mm/pgtable.c | 24 ++++++++++++++++++++++++
3 files changed, 26 insertions(+), 20 deletions(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index c2ce2e60c8f0..1035d172c7dd 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -140,7 +140,7 @@ config PPC
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UACCESS_FLUSHCACHE
select ARCH_HAS_UBSAN_SANITIZE_ALL
- select ARCH_HAS_VM_GET_PAGE_PROT if PPC_BOOK3S_64
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_KEEP_MEMBLOCK
select ARCH_MIGHT_HAVE_PC_PARPORT
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index d564d0ecd4cd..bf98db844579 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -20,25 +20,6 @@ struct mm_struct;
#include <asm/nohash/pgtable.h>
#endif /* !CONFIG_PPC_BOOK3S */
-/* Note due to the way vm flags are laid out, the bits are XWR */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_READONLY_X
-#define __P101 PAGE_READONLY_X
-#define __P110 PAGE_COPY_X
-#define __P111 PAGE_COPY_X
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY_X
-#define __S101 PAGE_READONLY_X
-#define __S110 PAGE_SHARED_X
-#define __S111 PAGE_SHARED_X
-
#ifndef __ASSEMBLY__
#ifndef MAX_PTRS_PER_PGD
@@ -79,6 +60,7 @@ extern void paging_init(void);
void poking_init(void);
extern unsigned long ioremap_bot;
+extern pgprot_t protection_map[16] __ro_after_init;
/*
* kern_addr_valid is intended to indicate whether an address is a valid
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index e6166b71d36d..618f30d35b17 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -472,3 +472,27 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
return ret_pte;
}
EXPORT_SYMBOL_GPL(__find_linux_pte);
+
+/* Note due to the way vm flags are laid out, the bits are XWR */
+pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY,
+ [VM_WRITE] = PAGE_COPY,
+ [VM_WRITE | VM_READ] = PAGE_COPY,
+ [VM_EXEC] = PAGE_READONLY_X,
+ [VM_EXEC | VM_READ] = PAGE_READONLY_X,
+ [VM_EXEC | VM_WRITE] = PAGE_COPY_X,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_X,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC] = PAGE_READONLY_X,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_X,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED_X,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_X
+};
+
+#ifndef CONFIG_PPC_BOOK3S_64
+DECLARE_VM_GET_PAGE_PROT
+#endif
--
2.25.1
^ permalink raw reply related	[flat|nested] 52+ messages in thread
* Re: [PATCH V4 03/26] powerpc/mm: Move protection_map[] inside the platform
2022-06-24 4:55 ` [PATCH V4 03/26] powerpc/mm: Move protection_map[] inside the platform Anshuman Khandual
@ 2022-06-24 5:18 ` Christophe Leroy
2022-06-24 5:35 ` Anshuman Khandual
2022-06-27 5:48 ` Christophe Leroy
1 sibling, 1 reply; 52+ messages in thread
From: Christophe Leroy @ 2022-06-24 5:18 UTC (permalink / raw)
To: Anshuman Khandual, linux-mm@kvack.org
Cc: hch@infradead.org, Andrew Morton, linuxppc-dev@lists.ozlabs.org,
sparclinux@vger.kernel.org, x86@kernel.org,
openrisc@lists.librecores.org, linux-xtensa@linux-xtensa.org,
linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-alpha@vger.kernel.org,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org,
linux-m68k@lists.linux-m68k.org,
linux-snps-arc@lists.infradead.org,
linux-arm-kernel@lists.infradead.org,
linux-um@lists.infradead.org, linux-sh@vger.kernel.org,
linux-kernel@vger.kernel.org, Michael Ellerman, Paul Mackerras,
Nicholas Piggin
On 24/06/2022 at 06:43, Anshuman Khandual wrote:
> This moves protection_map[] inside the platform and while here, also enable
> ARCH_HAS_VM_GET_PAGE_PROT on 32 bit platforms via DECLARE_VM_GET_PAGE_PROT.

Not only 32 bit platforms, also nohash 64 (aka book3e/64)

> 
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
>   arch/powerpc/Kconfig               |  2 +-
>   arch/powerpc/include/asm/pgtable.h | 20 +-------------------
>   arch/powerpc/mm/pgtable.c          | 24 ++++++++++++++++++++++++
>   3 files changed, 26 insertions(+), 20 deletions(-)
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index c2ce2e60c8f0..1035d172c7dd 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -140,7 +140,7 @@ config PPC
>   	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
>   	select ARCH_HAS_UACCESS_FLUSHCACHE
>   	select ARCH_HAS_UBSAN_SANITIZE_ALL
> -	select ARCH_HAS_VM_GET_PAGE_PROT	if PPC_BOOK3S_64
> +	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_HAVE_NMI_SAFE_CMPXCHG
>   	select ARCH_KEEP_MEMBLOCK
>   	select ARCH_MIGHT_HAVE_PC_PARPORT
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index d564d0ecd4cd..bf98db844579 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -20,25 +20,6 @@ struct mm_struct;
>   #include <asm/nohash/pgtable.h>
>   #endif /* !CONFIG_PPC_BOOK3S */
>   
> -/* Note due to the way vm flags are laid out, the bits are XWR */
> -#define __P000	PAGE_NONE
> -#define __P001	PAGE_READONLY
> -#define __P010	PAGE_COPY
> -#define __P011	PAGE_COPY
> -#define __P100	PAGE_READONLY_X
> -#define __P101	PAGE_READONLY_X
> -#define __P110	PAGE_COPY_X
> -#define __P111	PAGE_COPY_X
> -
> -#define __S000	PAGE_NONE
> -#define __S001	PAGE_READONLY
> -#define __S010	PAGE_SHARED
> -#define __S011	PAGE_SHARED
> -#define __S100	PAGE_READONLY_X
> -#define __S101	PAGE_READONLY_X
> -#define __S110	PAGE_SHARED_X
> -#define __S111	PAGE_SHARED_X
> -
>   #ifndef __ASSEMBLY__
>   
>   #ifndef MAX_PTRS_PER_PGD
> @@ -79,6 +60,7 @@ extern void paging_init(void);
>   void poking_init(void);
>   
>   extern unsigned long ioremap_bot;
> +extern pgprot_t protection_map[16] __ro_after_init;
>   
>   /*
>    * kern_addr_valid is intended to indicate whether an address is a valid
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index e6166b71d36d..618f30d35b17 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -472,3 +472,27 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
>   	return ret_pte;
>   }
>   EXPORT_SYMBOL_GPL(__find_linux_pte);
> +
> +/* Note due to the way vm flags are laid out, the bits are XWR */
> +pgprot_t protection_map[16] __ro_after_init = {

I can't see any place where protection_map[] gets modified. This could 
be made const.

> +	[VM_NONE]					= PAGE_NONE,
> +	[VM_READ]					= PAGE_READONLY,
> +	[VM_WRITE]					= PAGE_COPY,
> +	[VM_WRITE | VM_READ]				= PAGE_COPY,
> +	[VM_EXEC]					= PAGE_READONLY_X,
> +	[VM_EXEC | VM_READ]				= PAGE_READONLY_X,
> +	[VM_EXEC | VM_WRITE]				= PAGE_COPY_X,
> +	[VM_EXEC | VM_WRITE | VM_READ]			= PAGE_COPY_X,
> +	[VM_SHARED]					= PAGE_NONE,
> +	[VM_SHARED | VM_READ]				= PAGE_READONLY,
> +	[VM_SHARED | VM_WRITE]				= PAGE_SHARED,
> +	[VM_SHARED | VM_WRITE | VM_READ]		= PAGE_SHARED,
> +	[VM_SHARED | VM_EXEC]				= PAGE_READONLY_X,
> +	[VM_SHARED | VM_EXEC | VM_READ]			= PAGE_READONLY_X,
> +	[VM_SHARED | VM_EXEC | VM_WRITE]		= PAGE_SHARED_X,
> +	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= PAGE_SHARED_X
> +};
> +
> +#ifndef CONFIG_PPC_BOOK3S_64
> +DECLARE_VM_GET_PAGE_PROT
> +#endif
^ permalink raw reply [flat|nested] 52+ messages in thread
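[Editorial aside for readers outside the kernel: the lookup that this series centralizes behind DECLARE_VM_GET_PAGE_PROT can be sketched in plain, self-contained C. Everything below is a mock for illustration — pgprot_t, the MOCK_* values, and the mock_* names are invented here; only the VM_* bit values and the masked-index idiom come from the patches in this thread.]

```c
#include <assert.h>

/* Mock of the kernel's vm_flags -> protection_map[] indexing.
 * pgprot_t is reduced to a plain integer so the table can be
 * exercised outside the kernel. */
typedef unsigned long pgprot_t;

/* VM_* bit values as in include/linux/mm.h: low four bits, XWR + SHARED */
#define VM_NONE   0x0UL
#define VM_READ   0x1UL
#define VM_WRITE  0x2UL
#define VM_EXEC   0x4UL
#define VM_SHARED 0x8UL

/* Invented stand-ins for PAGE_NONE, PAGE_READONLY, PAGE_COPY, PAGE_SHARED */
enum { MOCK_NONE, MOCK_RDONLY, MOCK_COPY, MOCK_SHARED_PROT };

static const pgprot_t mock_protection_map[16] = {
	[VM_NONE]					= MOCK_NONE,
	[VM_READ]					= MOCK_RDONLY,
	[VM_WRITE]					= MOCK_COPY,
	[VM_WRITE | VM_READ]				= MOCK_COPY,
	[VM_EXEC]					= MOCK_RDONLY,
	[VM_EXEC | VM_READ]				= MOCK_RDONLY,
	[VM_EXEC | VM_WRITE]				= MOCK_COPY,
	[VM_EXEC | VM_WRITE | VM_READ]			= MOCK_COPY,
	[VM_SHARED]					= MOCK_NONE,
	[VM_SHARED | VM_READ]				= MOCK_RDONLY,
	[VM_SHARED | VM_WRITE]				= MOCK_SHARED_PROT,
	[VM_SHARED | VM_WRITE | VM_READ]		= MOCK_SHARED_PROT,
	[VM_SHARED | VM_EXEC]				= MOCK_RDONLY,
	[VM_SHARED | VM_EXEC | VM_READ]			= MOCK_RDONLY,
	[VM_SHARED | VM_EXEC | VM_WRITE]		= MOCK_SHARED_PROT,
	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= MOCK_SHARED_PROT
};

/* Same shape as the accessor DECLARE_VM_GET_PAGE_PROT generates: mask off
 * everything but the low four vm_flags bits, then index the table. */
static pgprot_t mock_vm_get_page_prot(unsigned long vm_flags)
{
	return mock_protection_map[vm_flags &
			(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];
}
```

Note how a private (non-VM_SHARED) writable mapping resolves to the copy-on-write entry while the shared one resolves to the shared entry — the distinction the old __PXXX/__SXXX macro pairs encoded positionally.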
* Re: [PATCH V4 03/26] powerpc/mm: Move protection_map[] inside the platform
2022-06-24 5:18 ` Christophe Leroy
@ 2022-06-24 5:35 ` Anshuman Khandual
0 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 5:35 UTC (permalink / raw)
To: Christophe Leroy, linux-mm@kvack.org
Cc: hch@infradead.org, Andrew Morton, linuxppc-dev@lists.ozlabs.org,
sparclinux@vger.kernel.org, x86@kernel.org,
openrisc@lists.librecores.org, linux-xtensa@linux-xtensa.org,
linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-alpha@vger.kernel.org,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org,
linux-m68k@lists.linux-m68k.org,
linux-snps-arc@lists.infradead.org,
linux-arm-kernel@lists.infradead.org,
linux-um@lists.infradead.org, linux-sh@vger.kernel.org,
linux-kernel@vger.kernel.org, Michael Ellerman, Paul Mackerras,
Nicholas Piggin
On 6/24/22 10:48, Christophe Leroy wrote:
>
>
> On 24/06/2022 at 06:43, Anshuman Khandual wrote:
>> This moves protection_map[] inside the platform and while here, also enable
>> ARCH_HAS_VM_GET_PAGE_PROT on 32 bit platforms via DECLARE_VM_GET_PAGE_PROT.
>
> Not only 32 bit platforms, also nohash 64 (aka book3e/64)
Sure, will update the commit message.
>
>>
>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>> Cc: Paul Mackerras <paulus@samba.org>
>> Cc: Nicholas Piggin <npiggin@gmail.com>
>> Cc: linuxppc-dev@lists.ozlabs.org
>> Cc: linux-kernel@vger.kernel.org
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> ---
>> arch/powerpc/Kconfig | 2 +-
>> arch/powerpc/include/asm/pgtable.h | 20 +-------------------
>> arch/powerpc/mm/pgtable.c | 24 ++++++++++++++++++++++++
>> 3 files changed, 26 insertions(+), 20 deletions(-)
>>
>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>> index c2ce2e60c8f0..1035d172c7dd 100644
>> --- a/arch/powerpc/Kconfig
>> +++ b/arch/powerpc/Kconfig
>> @@ -140,7 +140,7 @@ config PPC
>> select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
>> select ARCH_HAS_UACCESS_FLUSHCACHE
>> select ARCH_HAS_UBSAN_SANITIZE_ALL
>> - select ARCH_HAS_VM_GET_PAGE_PROT if PPC_BOOK3S_64
>> + select ARCH_HAS_VM_GET_PAGE_PROT
>> select ARCH_HAVE_NMI_SAFE_CMPXCHG
>> select ARCH_KEEP_MEMBLOCK
>> select ARCH_MIGHT_HAVE_PC_PARPORT
>> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
>> index d564d0ecd4cd..bf98db844579 100644
>> --- a/arch/powerpc/include/asm/pgtable.h
>> +++ b/arch/powerpc/include/asm/pgtable.h
>> @@ -20,25 +20,6 @@ struct mm_struct;
>> #include <asm/nohash/pgtable.h>
>> #endif /* !CONFIG_PPC_BOOK3S */
>>
>> -/* Note due to the way vm flags are laid out, the bits are XWR */
>> -#define __P000 PAGE_NONE
>> -#define __P001 PAGE_READONLY
>> -#define __P010 PAGE_COPY
>> -#define __P011 PAGE_COPY
>> -#define __P100 PAGE_READONLY_X
>> -#define __P101 PAGE_READONLY_X
>> -#define __P110 PAGE_COPY_X
>> -#define __P111 PAGE_COPY_X
>> -
>> -#define __S000 PAGE_NONE
>> -#define __S001 PAGE_READONLY
>> -#define __S010 PAGE_SHARED
>> -#define __S011 PAGE_SHARED
>> -#define __S100 PAGE_READONLY_X
>> -#define __S101 PAGE_READONLY_X
>> -#define __S110 PAGE_SHARED_X
>> -#define __S111 PAGE_SHARED_X
>> -
>> #ifndef __ASSEMBLY__
>>
>> #ifndef MAX_PTRS_PER_PGD
>> @@ -79,6 +60,7 @@ extern void paging_init(void);
>> void poking_init(void);
>>
>> extern unsigned long ioremap_bot;
>> +extern pgprot_t protection_map[16] __ro_after_init;
>>
>> /*
>> * kern_addr_valid is intended to indicate whether an address is a valid
>> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
>> index e6166b71d36d..618f30d35b17 100644
>> --- a/arch/powerpc/mm/pgtable.c
>> +++ b/arch/powerpc/mm/pgtable.c
>> @@ -472,3 +472,27 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
>> return ret_pte;
>> }
>> EXPORT_SYMBOL_GPL(__find_linux_pte);
>> +
>> +/* Note due to the way vm flags are laid out, the bits are XWR */
>> +pgprot_t protection_map[16] __ro_after_init = {
>
> I can't see any place where protection_map[] gets modified. This could
> be made const.
Sure, will make it const, as is the case for many other platforms as well.
>
>> + [VM_NONE] = PAGE_NONE,
>> + [VM_READ] = PAGE_READONLY,
>> + [VM_WRITE] = PAGE_COPY,
>> + [VM_WRITE | VM_READ] = PAGE_COPY,
>> + [VM_EXEC] = PAGE_READONLY_X,
>> + [VM_EXEC | VM_READ] = PAGE_READONLY_X,
>> + [VM_EXEC | VM_WRITE] = PAGE_COPY_X,
>> + [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_X,
>> + [VM_SHARED] = PAGE_NONE,
>> + [VM_SHARED | VM_READ] = PAGE_READONLY,
>> + [VM_SHARED | VM_WRITE] = PAGE_SHARED,
>> + [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
>> + [VM_SHARED | VM_EXEC] = PAGE_READONLY_X,
>> + [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_X,
>> + [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED_X,
>> + [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_X
>> +};
>> +
>> +#ifndef CONFIG_PPC_BOOK3S_64
>> +DECLARE_VM_GET_PAGE_PROT
>> +#endif
* Re: [PATCH V4 03/26] powerpc/mm: Move protection_map[] inside the platform
2022-06-24 4:55 ` [PATCH V4 03/26] powerpc/mm: Move protection_map[] inside the platform Anshuman Khandual
2022-06-24 5:18 ` Christophe Leroy
@ 2022-06-27 5:48 ` Christophe Leroy
1 sibling, 0 replies; 52+ messages in thread
From: Christophe Leroy @ 2022-06-27 5:48 UTC (permalink / raw)
To: Anshuman Khandual, linux-mm@kvack.org
Cc: hch@infradead.org, Andrew Morton, linuxppc-dev@lists.ozlabs.org,
sparclinux@vger.kernel.org, x86@kernel.org,
openrisc@lists.librecores.org, linux-xtensa@linux-xtensa.org,
linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-alpha@vger.kernel.org,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org,
linux-m68k@lists.linux-m68k.org,
linux-snps-arc@lists.infradead.org,
linux-arm-kernel@lists.infradead.org,
linux-um@lists.infradead.org, linux-sh@vger.kernel.org,
linux-kernel@vger.kernel.org, Michael Ellerman, Paul Mackerras,
Nicholas Piggin
On 24/06/2022 at 06:43, Anshuman Khandual wrote:
> This moves protection_map[] inside the platform and while here, also enable
> ARCH_HAS_VM_GET_PAGE_PROT on 32 bit platforms via DECLARE_VM_GET_PAGE_PROT.
>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>

Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>

> ---
>   arch/powerpc/Kconfig               |  2 +-
>   arch/powerpc/include/asm/pgtable.h | 20 +-------------------
>   arch/powerpc/mm/pgtable.c          | 24 ++++++++++++++++++++++++
>   3 files changed, 26 insertions(+), 20 deletions(-)
* [PATCH V4 04/26] sparc/mm: Move protection_map[] inside the platform
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (2 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 03/26] powerpc/mm: Move protection_map[] inside the platform Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 05/26] arm64/mm: " Anshuman Khandual
` (22 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel,
David S. Miller
This moves protection_map[] inside the platform and, while here, also enables
ARCH_HAS_VM_GET_PAGE_PROT on 32-bit platforms via DECLARE_VM_GET_PAGE_PROT.
Cc: "David S. Miller" <davem@davemloft.net>
Cc: sparclinux@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/sparc/Kconfig | 2 +-
arch/sparc/include/asm/pgtable_32.h | 19 -------------------
arch/sparc/include/asm/pgtable_64.h | 19 -------------------
arch/sparc/mm/init_32.c | 20 ++++++++++++++++++++
arch/sparc/mm/init_64.c | 3 +++
5 files changed, 24 insertions(+), 39 deletions(-)
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index ba449c47effd..09f868613a4d 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -13,6 +13,7 @@ config 64BIT
config SPARC
bool
default y
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_MIGHT_HAVE_PC_PARPORT if SPARC64 && PCI
select ARCH_MIGHT_HAVE_PC_SERIO
select DMA_OPS
@@ -84,7 +85,6 @@ config SPARC64
select PERF_USE_VMALLOC
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select HAVE_C_RECORDMCOUNT
- select ARCH_HAS_VM_GET_PAGE_PROT
select HAVE_ARCH_AUDITSYSCALL
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_DEBUG_PAGEALLOC
diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h
index 4866625da314..8ff549004fac 100644
--- a/arch/sparc/include/asm/pgtable_32.h
+++ b/arch/sparc/include/asm/pgtable_32.h
@@ -64,25 +64,6 @@ void paging_init(void);
extern unsigned long ptr_in_current_pgd;
-/* xwr */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_READONLY
-#define __P101 PAGE_READONLY
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED
-
/* First physical page can be anywhere, the following is needed so that
* va-->pa and vice versa conversions work properly without performance
* hit for all __pa()/__va() operations.
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 4679e45c8348..a779418ceba9 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -187,25 +187,6 @@ bool kern_addr_valid(unsigned long addr);
#define _PAGE_SZHUGE_4U _PAGE_SZ4MB_4U
#define _PAGE_SZHUGE_4V _PAGE_SZ4MB_4V
-/* These are actually filled in at boot time by sun4{u,v}_pgprot_init() */
-#define __P000 __pgprot(0)
-#define __P001 __pgprot(0)
-#define __P010 __pgprot(0)
-#define __P011 __pgprot(0)
-#define __P100 __pgprot(0)
-#define __P101 __pgprot(0)
-#define __P110 __pgprot(0)
-#define __P111 __pgprot(0)
-
-#define __S000 __pgprot(0)
-#define __S001 __pgprot(0)
-#define __S010 __pgprot(0)
-#define __S011 __pgprot(0)
-#define __S100 __pgprot(0)
-#define __S101 __pgprot(0)
-#define __S110 __pgprot(0)
-#define __S111 __pgprot(0)
-
#ifndef __ASSEMBLY__
pte_t mk_pte_io(unsigned long, pgprot_t, int, unsigned long);
diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c
index 1e9f577f084d..8693e4e28b86 100644
--- a/arch/sparc/mm/init_32.c
+++ b/arch/sparc/mm/init_32.c
@@ -302,3 +302,23 @@ void sparc_flush_page_to_ram(struct page *page)
__flush_page_to_ram(vaddr);
}
EXPORT_SYMBOL(sparc_flush_page_to_ram);
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY,
+ [VM_WRITE] = PAGE_COPY,
+ [VM_WRITE | VM_READ] = PAGE_COPY,
+ [VM_EXEC] = PAGE_READONLY,
+ [VM_EXEC | VM_READ] = PAGE_READONLY,
+ [VM_EXEC | VM_WRITE] = PAGE_COPY,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC] = PAGE_READONLY,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED
+};
+DECLARE_VM_GET_PAGE_PROT
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index f6174df2d5af..d6faee23c77d 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2634,6 +2634,9 @@ void vmemmap_free(unsigned long start, unsigned long end,
}
#endif /* CONFIG_SPARSEMEM_VMEMMAP */
+/* These are actually filled in at boot time by sun4{u,v}_pgprot_init() */
+static pgprot_t protection_map[16] __ro_after_init;
+
static void prot_init_common(unsigned long page_none,
unsigned long page_shared,
unsigned long page_copy,
--
2.25.1
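[Editorial aside: the conversion in this patch relies on the old __P000..__S111 macros having filled protection_map[] positionally, while the replacement uses designated initializers keyed by VM_* bits. Because the bits are laid out XWR with VM_SHARED as bit 3, both spellings produce an identical 16-entry table. A small self-contained check, using the sparc32 values from the patch above; the VM_* values match the kernel's, the P_* tokens are mock stand-ins.]

```c
#include <assert.h>
#include <string.h>

#define VM_READ   0x1
#define VM_WRITE  0x2
#define VM_EXEC   0x4
#define VM_SHARED 0x8

/* Mock stand-ins for PAGE_NONE, PAGE_READONLY, PAGE_COPY, PAGE_SHARED */
enum { P_NONE, P_RDONLY, P_COPY, P_SHARED };

/* Old style: positional, __P000..__P111 then __S000..__S111 */
static const int old_style[16] = {
	P_NONE, P_RDONLY, P_COPY, P_COPY,
	P_RDONLY, P_RDONLY, P_COPY, P_COPY,
	P_NONE, P_RDONLY, P_SHARED, P_SHARED,
	P_RDONLY, P_RDONLY, P_SHARED, P_SHARED
};

/* New style: designated initializers keyed by VM_* bits */
static const int new_style[16] = {
	[0]						= P_NONE,
	[VM_READ]					= P_RDONLY,
	[VM_WRITE]					= P_COPY,
	[VM_WRITE | VM_READ]				= P_COPY,
	[VM_EXEC]					= P_RDONLY,
	[VM_EXEC | VM_READ]				= P_RDONLY,
	[VM_EXEC | VM_WRITE]				= P_COPY,
	[VM_EXEC | VM_WRITE | VM_READ]			= P_COPY,
	[VM_SHARED]					= P_NONE,
	[VM_SHARED | VM_READ]				= P_RDONLY,
	[VM_SHARED | VM_WRITE]				= P_SHARED,
	[VM_SHARED | VM_WRITE | VM_READ]		= P_SHARED,
	[VM_SHARED | VM_EXEC]				= P_RDONLY,
	[VM_SHARED | VM_EXEC | VM_READ]			= P_RDONLY,
	[VM_SHARED | VM_EXEC | VM_WRITE]		= P_SHARED,
	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= P_SHARED
};
```

Comparing the two arrays byte for byte confirms the rewrite is purely a notational change, which is why the series can do it one architecture at a time.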
* [PATCH V4 05/26] arm64/mm: Move protection_map[] inside the platform
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (3 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 04/26] sparc/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 12:24 ` Catalin Marinas
2022-06-24 4:55 ` [PATCH V4 06/26] x86/mm: " Anshuman Khandual
` (21 subsequent siblings)
26 siblings, 1 reply; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel,
Catalin Marinas, Will Deacon
This moves protection_map[] inside the platform and makes it static.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/arm64/include/asm/pgtable-prot.h | 18 ------------------
arch/arm64/mm/mmap.c | 21 +++++++++++++++++++++
2 files changed, 21 insertions(+), 18 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 62e0ebeed720..9b165117a454 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -89,24 +89,6 @@ extern bool arm64_use_ng_mappings;
#define PAGE_READONLY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN)
#define PAGE_EXECONLY __pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PTE_PXN)
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_READONLY
-#define __P011 PAGE_READONLY
-#define __P100 PAGE_READONLY_EXEC /* PAGE_EXECONLY if Enhanced PAN */
-#define __P101 PAGE_READONLY_EXEC
-#define __P110 PAGE_READONLY_EXEC
-#define __P111 PAGE_READONLY_EXEC
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY_EXEC /* PAGE_EXECONLY if Enhanced PAN */
-#define __S101 PAGE_READONLY_EXEC
-#define __S110 PAGE_SHARED_EXEC
-#define __S111 PAGE_SHARED_EXEC
-
#endif /* __ASSEMBLY__ */
#endif /* __ASM_PGTABLE_PROT_H */
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 78e9490f748d..8f5b7ce857ed 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -13,6 +13,27 @@
#include <asm/cpufeature.h>
#include <asm/page.h>
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY,
+ [VM_WRITE] = PAGE_READONLY,
+ [VM_WRITE | VM_READ] = PAGE_READONLY,
+ /* PAGE_EXECONLY if Enhanced PAN */
+ [VM_EXEC] = PAGE_READONLY_EXEC,
+ [VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+ [VM_EXEC | VM_WRITE] = PAGE_READONLY_EXEC,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_READONLY_EXEC,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
+ /* PAGE_EXECONLY if Enhanced PAN */
+ [VM_SHARED | VM_EXEC] = PAGE_READONLY_EXEC,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED_EXEC,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_EXEC
+};
+
/*
* You really shouldn't be using read() or write() on /dev/mem. This might go
* away in the future.
--
2.25.1
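[Editorial aside: the two /* PAGE_EXECONLY if Enhanced PAN */ comments hint at why this table is __ro_after_init rather than const — on EPAN-capable CPUs the kernel rewrites the execute-only entries once during boot. A rough illustrative model of that one-shot override; all names and values below are invented, and the real override lives in arm64 boot code not shown in this patch.]

```c
#include <assert.h>

/* Invented stand-ins for PAGE_READONLY_EXEC and PAGE_EXECONLY */
enum { P_RDONLY_EXEC = 1, P_EXECONLY = 2 };

/* Only the two entries of interest are populated in this mock;
 * indices 4 and 12 correspond to VM_EXEC and VM_SHARED | VM_EXEC. */
static int mock_map[16] = {
	[4 /* VM_EXEC */]		= P_RDONLY_EXEC,
	[12 /* VM_SHARED|VM_EXEC */]	= P_RDONLY_EXEC,
};

/* One-shot boot-time fixup: with Enhanced PAN, no-read executable
 * mappings can become true execute-only. After this runs, the table
 * is never written again — hence __ro_after_init, not const. */
static void mock_epan_fixup(int have_epan)
{
	if (!have_epan)
		return;
	mock_map[4] = P_EXECONLY;
	mock_map[12] = P_EXECONLY;
}
```

This is also the asymmetry with platforms like powerpc in this thread, where nothing ever writes the table and const would be possible.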
* Re: [PATCH V4 05/26] arm64/mm: Move protection_map[] inside the platform
2022-06-24 4:55 ` [PATCH V4 05/26] arm64/mm: " Anshuman Khandual
@ 2022-06-24 12:24 ` Catalin Marinas
0 siblings, 0 replies; 52+ messages in thread
From: Catalin Marinas @ 2022-06-24 12:24 UTC (permalink / raw)
To: Anshuman Khandual
Cc: linux-mm, hch, christophe.leroy, Andrew Morton, linuxppc-dev,
sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel, Will Deacon
On Fri, Jun 24, 2022 at 10:13:18AM +0530, Anshuman Khandual wrote:
> This moves protection_map[] inside the platform and makes it static.
>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
* [PATCH V4 06/26] x86/mm: Move protection_map[] inside the platform
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (4 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 05/26] arm64/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 5:10 ` Christoph Hellwig
2022-06-24 4:55 ` [PATCH V4 07/26] mm/mmap: Build protect protection_map[] with ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
` (20 subsequent siblings)
26 siblings, 1 reply; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel,
Thomas Gleixner, Ingo Molnar
This moves protection_map[] inside the platform and makes it static. This
also defines a helper function add_encrypt_protection_map() that can update
the protection_map[] array with pgprot_encrypted().
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/x86/include/asm/pgtable_types.h | 19 -------------------
arch/x86/mm/mem_encrypt_amd.c | 7 +++----
arch/x86/mm/pgprot.c | 27 +++++++++++++++++++++++++++
3 files changed, 30 insertions(+), 23 deletions(-)
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index bdaf8391e2e0..aa174fed3a71 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -230,25 +230,6 @@ enum page_cache_mode {
#endif /* __ASSEMBLY__ */
-/* xwr */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_READONLY_EXEC
-#define __P101 PAGE_READONLY_EXEC
-#define __P110 PAGE_COPY_EXEC
-#define __P111 PAGE_COPY_EXEC
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY_EXEC
-#define __S101 PAGE_READONLY_EXEC
-#define __S110 PAGE_SHARED_EXEC
-#define __S111 PAGE_SHARED_EXEC
-
/*
* early identity mapping pte attrib macros.
*/
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index f6d038e2cd8e..4b3ec87e8c7d 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -484,10 +484,10 @@ void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, boo
enc_dec_hypercall(vaddr, npages, enc);
}
+void add_encrypt_protection_map(void);
+
void __init sme_early_init(void)
{
- unsigned int i;
-
if (!sme_me_mask)
return;
@@ -496,8 +496,7 @@ void __init sme_early_init(void)
__supported_pte_mask = __sme_set(__supported_pte_mask);
/* Update the protection map with memory encryption mask */
- for (i = 0; i < ARRAY_SIZE(protection_map); i++)
- protection_map[i] = pgprot_encrypted(protection_map[i]);
+ add_encrypt_protection_map();
x86_platform.guest.enc_status_change_prepare = amd_enc_status_change_prepare;
x86_platform.guest.enc_status_change_finish = amd_enc_status_change_finish;
diff --git a/arch/x86/mm/pgprot.c b/arch/x86/mm/pgprot.c
index 763742782286..b867839b16aa 100644
--- a/arch/x86/mm/pgprot.c
+++ b/arch/x86/mm/pgprot.c
@@ -4,6 +4,33 @@
#include <linux/mm.h>
#include <asm/pgtable.h>
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY,
+ [VM_WRITE] = PAGE_COPY,
+ [VM_WRITE | VM_READ] = PAGE_COPY,
+ [VM_EXEC] = PAGE_READONLY_EXEC,
+ [VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+ [VM_EXEC | VM_WRITE] = PAGE_COPY_EXEC,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_EXEC,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC] = PAGE_READONLY_EXEC,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED_EXEC,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_EXEC
+};
+
+void add_encrypt_protection_map(void)
+{
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(protection_map); i++)
+ protection_map[i] = pgprot_encrypted(protection_map[i]);
+}
+
pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
unsigned long val = pgprot_val(protection_map[vm_flags &
--
2.25.1
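[Editorial aside: what add_encrypt_protection_map() does can be mimicked in userspace — pgprot_encrypted() essentially folds the SME memory-encryption mask into a protection value, and the helper applies that to every table entry. A hedged mock: the mask value, the pgprot representation, and the mock_* names are invented for this sketch.]

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned long pgprot_t;

/* Invented stand-in for sme_me_mask (the encryption "C-bit") */
#define MOCK_SME_MASK (1UL << 47)

/* Mock of pgprot_encrypted(): OR the encryption mask into the entry */
#define mock_pgprot_encrypted(p) ((p) | MOCK_SME_MASK)

/* A few arbitrary mock protection values; the rest default to 0 */
static pgprot_t mock_map[16] = { 0x1, 0x3, 0x7 };

/* Same loop shape as the helper in the patch: rewrite every entry once,
 * early in boot, so all subsequent lookups return encrypted protections. */
static void mock_add_encrypt_protection_map(void)
{
	size_t i;

	for (i = 0; i < sizeof(mock_map) / sizeof(mock_map[0]); i++)
		mock_map[i] = mock_pgprot_encrypted(mock_map[i]);
}
```

This boot-time rewrite is the reason x86, like arm64 and unlike powerpc or sparc32 in this series, cannot make its table const.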
* Re: [PATCH V4 06/26] x86/mm: Move protection_map[] inside the platform
2022-06-24 4:55 ` [PATCH V4 06/26] x86/mm: " Anshuman Khandual
@ 2022-06-24 5:10 ` Christoph Hellwig
0 siblings, 0 replies; 52+ messages in thread
From: Christoph Hellwig @ 2022-06-24 5:10 UTC (permalink / raw)
To: Anshuman Khandual
Cc: linux-mm, hch, christophe.leroy, Andrew Morton, linuxppc-dev,
sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel,
Thomas Gleixner, Ingo Molnar
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* [PATCH V4 07/26] mm/mmap: Build protect protection_map[] with ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (5 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 06/26] x86/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 5:22 ` Christophe Leroy
2022-06-27 5:48 ` Christophe Leroy
2022-06-24 4:55 ` [PATCH V4 08/26] microblaze/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
` (19 subsequent siblings)
26 siblings, 2 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel
protection_map[] has already been moved inside those platforms which enable
ARCH_HAS_VM_GET_PAGE_PROT. Hence generic protection_map[] array now can be
protected with CONFIG_ARCH_HAS_VM_GET_PAGE_PROT instead of __P000.
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
include/linux/mm.h | 2 +-
mm/mmap.c | 5 +----
2 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 237828c2bae2..70d900f6df43 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -424,7 +424,7 @@ extern unsigned int kobjsize(const void *objp);
* mapping from the currently active vm_flags protection bits (the
* low four bits) to a page protection mask..
*/
-#ifdef __P000
+#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
extern pgprot_t protection_map[16];
#endif
diff --git a/mm/mmap.c b/mm/mmap.c
index 55c30aee3999..43db3bd49071 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -101,7 +101,7 @@ static void unmap_region(struct mm_struct *mm,
* w: (no) no
* x: (yes) yes
*/
-#ifdef __P000
+#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
pgprot_t protection_map[16] __ro_after_init = {
[VM_NONE] = __P000,
[VM_READ] = __P001,
@@ -120,9 +120,6 @@ pgprot_t protection_map[16] __ro_after_init = {
[VM_SHARED | VM_EXEC | VM_WRITE] = __S110,
[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __S111
};
-#endif
-
-#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
DECLARE_VM_GET_PAGE_PROT
#endif /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
--
2.25.1
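[Editorial aside: the effect of the new guard can be shown with a toy translation unit — whichever side of the #ifndef survives decides whether the generic table in mm/mmap.c or a per-arch copy is compiled in. Defining the CONFIG macro below stands in for an architecture selecting ARCH_HAS_VM_GET_PAGE_PROT in Kconfig; the marker variable is invented for the sketch.]

```c
#include <assert.h>
#include <string.h>

/* Stand-in for "this arch selects ARCH_HAS_VM_GET_PAGE_PROT" */
#define CONFIG_ARCH_HAS_VM_GET_PAGE_PROT 1

#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
/* Generic fallback table, as kept in mm/mmap.c for unconverted arches */
static const char *protection_map_owner = "generic";
#else
/* Per-arch table, as moved into arch/<arch>/mm by the earlier patches */
static const char *protection_map_owner = "arch";
#endif
```

Once every architecture selects the config option (the goal of the remaining patches), the #ifndef branch — and with it the __P000/__S000 macros — can be deleted entirely.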
* Re: [PATCH V4 07/26] mm/mmap: Build protect protection_map[] with ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 ` [PATCH V4 07/26] mm/mmap: Build protect protection_map[] with ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
@ 2022-06-24 5:22 ` Christophe Leroy
2022-06-24 5:48 ` Anshuman Khandual
2022-06-27 5:48 ` Christophe Leroy
1 sibling, 1 reply; 52+ messages in thread
From: Christophe Leroy @ 2022-06-24 5:22 UTC (permalink / raw)
To: Anshuman Khandual, linux-mm@kvack.org
Cc: hch@infradead.org, Andrew Morton, linuxppc-dev@lists.ozlabs.org,
sparclinux@vger.kernel.org, x86@kernel.org,
openrisc@lists.librecores.org, linux-xtensa@linux-xtensa.org,
linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-alpha@vger.kernel.org,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org,
linux-m68k@lists.linux-m68k.org,
linux-snps-arc@lists.infradead.org,
linux-arm-kernel@lists.infradead.org,
linux-um@lists.infradead.org, linux-sh@vger.kernel.org,
linux-kernel@vger.kernel.org
On 24/06/2022 at 06:43, Anshuman Khandual wrote:
> protection_map[] has already been moved inside those platforms which enable

Usually "already" means before your series.

Your series is the one that moves protection_map[] so I would have just
said "Now that protection_map[] has been moved inside those platforms
which enable ...."

> ARCH_HAS_VM_GET_PAGE_PROT. Hence generic protection_map[] array now can be
> protected with CONFIG_ARCH_HAS_VM_GET_PAGE_PROT instead of __P000.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
>   include/linux/mm.h | 2 +-
>   mm/mmap.c          | 5 +----
>   2 files changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 237828c2bae2..70d900f6df43 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -424,7 +424,7 @@ extern unsigned int kobjsize(const void *objp);
>    * mapping from the currently active vm_flags protection bits (the
>    * low four bits) to a page protection mask..
>    */
> -#ifdef __P000
> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>   extern pgprot_t protection_map[16];

Is this declaration still needed ? I have the feeling that
protection_map[] is only used in mm/mmap.c now.

>   #endif
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 55c30aee3999..43db3bd49071 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -101,7 +101,7 @@ static void unmap_region(struct mm_struct *mm,
>    *								w: (no) no
>    *								x: (yes) yes
>    */
> -#ifdef __P000
> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>   pgprot_t protection_map[16] __ro_after_init = {

Should this be static, as it seems to now be used only in this file ?
And it could also be 'const' instead of __ro_after_init.

>   	[VM_NONE]					= __P000,
>   	[VM_READ]					= __P001,
> @@ -120,9 +120,6 @@ pgprot_t protection_map[16] __ro_after_init = {
>   	[VM_SHARED | VM_EXEC | VM_WRITE]		= __S110,
>   	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= __S111
>   };
> -#endif
> -
> -#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>   DECLARE_VM_GET_PAGE_PROT
>   #endif	/* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
* Re: [PATCH V4 07/26] mm/mmap: Build protect protection_map[] with ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 5:22 ` Christophe Leroy
@ 2022-06-24 5:48 ` Anshuman Khandual
0 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 5:48 UTC (permalink / raw)
To: Christophe Leroy, linux-mm@kvack.org
Cc: hch@infradead.org, Andrew Morton, linuxppc-dev@lists.ozlabs.org,
sparclinux@vger.kernel.org, x86@kernel.org,
openrisc@lists.librecores.org, linux-xtensa@linux-xtensa.org,
linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-alpha@vger.kernel.org,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org,
linux-m68k@lists.linux-m68k.org,
linux-snps-arc@lists.infradead.org,
linux-arm-kernel@lists.infradead.org,
linux-um@lists.infradead.org, linux-sh@vger.kernel.org,
linux-kernel@vger.kernel.org
On 6/24/22 10:52, Christophe Leroy wrote:
>
>
> On 24/06/2022 at 06:43, Anshuman Khandual wrote:
>> protection_map[] has already been moved inside those platforms which enable
>
> Usually "already" means before your series.
>
> Your series is the one that moves protection_map[] so I would have just
> said "Now that protection_map[] has been moved inside those platforms
> which enable ...."
Got it, will update the commit message.
>
>> ARCH_HAS_VM_GET_PAGE_PROT. Hence generic protection_map[] array now can be
>> protected with CONFIG_ARCH_HAS_VM_GET_PAGE_PROT instead of __P000.
>>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: linux-mm@kvack.org
>> Cc: linux-kernel@vger.kernel.org
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> ---
>> include/linux/mm.h | 2 +-
>> mm/mmap.c | 5 +----
>> 2 files changed, 2 insertions(+), 5 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 237828c2bae2..70d900f6df43 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -424,7 +424,7 @@ extern unsigned int kobjsize(const void *objp);
>> * mapping from the currently active vm_flags protection bits (the
>> * low four bits) to a page protection mask..
>> */
>> -#ifdef __P000
>> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>> extern pgprot_t protection_map[16];
>
> Is this declaration still needed ? I have the feeling that
> protection_map[] is only used in mm/mmap.c now.
At this point the generic protection_map[] array is still being used via
this declaration on many (!ARCH_HAS_VM_GET_PAGE_PROT) platforms such
as mips, m68k, arm etc.
>
>> #endif
>>
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 55c30aee3999..43db3bd49071 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -101,7 +101,7 @@ static void unmap_region(struct mm_struct *mm,
>> * w: (no) no
>> * x: (yes) yes
>> */
>> -#ifdef __P000
>> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>> pgprot_t protection_map[16] __ro_after_init = {
>
> Should this be static, as it seems to now be used only in this file ?
This is being used on some platforms as mentioned before.
> And it could also be 'const' instead of __ro_after_init.
Then it should be possible to make it 'const' for the mips, m68k, arm etc.
platforms. But should this even be changed, if it is going to be dropped
eventually ?
>
>> [VM_NONE] = __P000,
>> [VM_READ] = __P001,
>> @@ -120,9 +120,6 @@ pgprot_t protection_map[16] __ro_after_init = {
>> [VM_SHARED | VM_EXEC | VM_WRITE] = __S110,
>> [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __S111
>> };
>> -#endif
>> -
>> -#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>> DECLARE_VM_GET_PAGE_PROT
>> #endif /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
>>
* Re: [PATCH V4 07/26] mm/mmap: Build protect protection_map[] with ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 ` [PATCH V4 07/26] mm/mmap: Build protect protection_map[] with ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
2022-06-24 5:22 ` Christophe Leroy
@ 2022-06-27 5:48 ` Christophe Leroy
1 sibling, 0 replies; 52+ messages in thread
From: Christophe Leroy @ 2022-06-27 5:48 UTC (permalink / raw)
To: Anshuman Khandual, linux-mm@kvack.org
Cc: hch@infradead.org, Andrew Morton, linuxppc-dev@lists.ozlabs.org,
sparclinux@vger.kernel.org, x86@kernel.org,
openrisc@lists.librecores.org, linux-xtensa@linux-xtensa.org,
linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-alpha@vger.kernel.org,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org,
linux-m68k@lists.linux-m68k.org,
linux-snps-arc@lists.infradead.org,
linux-arm-kernel@lists.infradead.org,
linux-um@lists.infradead.org, linux-sh@vger.kernel.org,
linux-kernel@vger.kernel.org
On 24/06/2022 at 06:43, Anshuman Khandual wrote:
> protection_map[] has already been moved inside those platforms which enable
> ARCH_HAS_VM_GET_PAGE_PROT. Hence generic protection_map[] array now can be
> protected with CONFIG_ARCH_HAS_VM_GET_PAGE_PROT instead of __P000.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>

Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>

> ---
>   include/linux/mm.h | 2 +-
>   mm/mmap.c          | 5 +----
>   2 files changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 237828c2bae2..70d900f6df43 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -424,7 +424,7 @@ extern unsigned int kobjsize(const void *objp);
>    * mapping from the currently active vm_flags protection bits (the
>    * low four bits) to a page protection mask..
>    */
> -#ifdef __P000
> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>   extern pgprot_t protection_map[16];
>   #endif
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 55c30aee3999..43db3bd49071 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -101,7 +101,7 @@ static void unmap_region(struct mm_struct *mm,
>    *								w: (no) no
>    *								x: (yes) yes
>    */
> -#ifdef __P000
> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>   pgprot_t protection_map[16] __ro_after_init = {
>   	[VM_NONE]					= __P000,
>   	[VM_READ]					= __P001,
> @@ -120,9 +120,6 @@ pgprot_t protection_map[16] __ro_after_init = {
>   	[VM_SHARED | VM_EXEC | VM_WRITE]		= __S110,
>   	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= __S111
>   };
> -#endif
> -
> -#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>   DECLARE_VM_GET_PAGE_PROT
>   #endif	/* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
* [PATCH V4 08/26] microblaze/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (6 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 07/26] mm/mmap: Build protect protection_map[] with ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 5:24 ` Christophe Leroy
2022-06-24 4:55 ` [PATCH V4 09/26] loongarch/mm: " Anshuman Khandual
` (18 subsequent siblings)
26 siblings, 1 reply; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel, Michal Simek
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all __SXXX and
__PXXX macros, which are no longer needed, can be dropped.
Cc: Michal Simek <monstr@monstr.eu>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/microblaze/Kconfig | 1 +
arch/microblaze/include/asm/pgtable.h | 17 -----------------
arch/microblaze/mm/init.c | 20 ++++++++++++++++++++
3 files changed, 21 insertions(+), 17 deletions(-)
diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig
index 8cf429ad1c84..15f91ba8a0c4 100644
--- a/arch/microblaze/Kconfig
+++ b/arch/microblaze/Kconfig
@@ -7,6 +7,7 @@ config MICROBLAZE
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_WANT_IPC_PARSE_VERSION
select BUILDTIME_TABLE_SORT
diff --git a/arch/microblaze/include/asm/pgtable.h b/arch/microblaze/include/asm/pgtable.h
index 0c72646370e1..ba348e997dbb 100644
--- a/arch/microblaze/include/asm/pgtable.h
+++ b/arch/microblaze/include/asm/pgtable.h
@@ -204,23 +204,6 @@ extern pte_t *va_to_pte(unsigned long address);
* We consider execute permission the same as read.
* Also, write permissions imply read permissions.
*/
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY_X
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY_X
-#define __P100 PAGE_READONLY
-#define __P101 PAGE_READONLY_X
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY_X
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY_X
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED_X
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY_X
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED_X
#ifndef __ASSEMBLY__
/*
diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c
index f4e503461d24..315fd5024f00 100644
--- a/arch/microblaze/mm/init.c
+++ b/arch/microblaze/mm/init.c
@@ -285,3 +285,23 @@ void * __ref zalloc_maybe_bootmem(size_t size, gfp_t mask)
return p;
}
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY_X,
+ [VM_WRITE] = PAGE_COPY,
+ [VM_WRITE | VM_READ] = PAGE_COPY_X,
+ [VM_EXEC] = PAGE_READONLY,
+ [VM_EXEC | VM_READ] = PAGE_READONLY_X,
+ [VM_EXEC | VM_WRITE] = PAGE_COPY,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_X,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READONLY_X,
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED_X,
+ [VM_SHARED | VM_EXEC] = PAGE_READONLY,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_X,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_X
+};
+DECLARE_VM_GET_PAGE_PROT
--
2.25.1
* Re: [PATCH V4 08/26] microblaze/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 ` [PATCH V4 08/26] microblaze/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
@ 2022-06-24 5:24 ` Christophe Leroy
0 siblings, 0 replies; 52+ messages in thread
From: Christophe Leroy @ 2022-06-24 5:24 UTC (permalink / raw)
To: Anshuman Khandual, linux-mm@kvack.org
Cc: hch@infradead.org, Andrew Morton, linuxppc-dev@lists.ozlabs.org,
sparclinux@vger.kernel.org, x86@kernel.org,
openrisc@lists.librecores.org, linux-xtensa@linux-xtensa.org,
linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-alpha@vger.kernel.org,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org,
linux-m68k@lists.linux-m68k.org,
linux-snps-arc@lists.infradead.org,
linux-arm-kernel@lists.infradead.org,
linux-um@lists.infradead.org, linux-sh@vger.kernel.org,
linux-kernel@vger.kernel.org, Michal Simek
On 24/06/2022 at 06:43, Anshuman Khandual wrote:
> This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
> vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
> up a private and static protection_map[] array. Subsequently all __SXXX and
> __PXXX macros can be dropped which are no longer needed.

In this patch and all following ones, can't protection_map[] be const
instead of __ro_after_init ?

> Cc: Michal Simek <monstr@monstr.eu>
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
>   arch/microblaze/Kconfig               |  1 +
>   arch/microblaze/include/asm/pgtable.h | 17 -----------------
>   arch/microblaze/mm/init.c             | 20 ++++++++++++++++++++
>   3 files changed, 21 insertions(+), 17 deletions(-)
>
> diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig
> index 8cf429ad1c84..15f91ba8a0c4 100644
> --- a/arch/microblaze/Kconfig
> +++ b/arch/microblaze/Kconfig
> @@ -7,6 +7,7 @@ config MICROBLAZE
>   	select ARCH_HAS_GCOV_PROFILE_ALL
>   	select ARCH_HAS_SYNC_DMA_FOR_CPU
>   	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> +	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_MIGHT_HAVE_PC_PARPORT
>   	select ARCH_WANT_IPC_PARSE_VERSION
>   	select BUILDTIME_TABLE_SORT
> diff --git a/arch/microblaze/include/asm/pgtable.h b/arch/microblaze/include/asm/pgtable.h
> index 0c72646370e1..ba348e997dbb 100644
> --- a/arch/microblaze/include/asm/pgtable.h
> +++ b/arch/microblaze/include/asm/pgtable.h
> @@ -204,23 +204,6 @@ extern pte_t *va_to_pte(unsigned long address);
>    * We consider execute permission the same as read.
>    * Also, write permissions imply read permissions.
>    */
> -#define __P000	PAGE_NONE
> -#define __P001	PAGE_READONLY_X
> -#define __P010	PAGE_COPY
> -#define __P011	PAGE_COPY_X
> -#define __P100	PAGE_READONLY
> -#define __P101	PAGE_READONLY_X
> -#define __P110	PAGE_COPY
> -#define __P111	PAGE_COPY_X
> -
> -#define __S000	PAGE_NONE
> -#define __S001	PAGE_READONLY_X
> -#define __S010	PAGE_SHARED
> -#define __S011	PAGE_SHARED_X
> -#define __S100	PAGE_READONLY
> -#define __S101	PAGE_READONLY_X
> -#define __S110	PAGE_SHARED
> -#define __S111	PAGE_SHARED_X
>
>   #ifndef __ASSEMBLY__
>   /*
> diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c
> index f4e503461d24..315fd5024f00 100644
> --- a/arch/microblaze/mm/init.c
> +++ b/arch/microblaze/mm/init.c
> @@ -285,3 +285,23 @@ void * __ref zalloc_maybe_bootmem(size_t size, gfp_t mask)
>
>   	return p;
>   }
> +
> +static pgprot_t protection_map[16] __ro_after_init = {
> +	[VM_NONE]					= PAGE_NONE,
> +	[VM_READ]					= PAGE_READONLY_X,
> +	[VM_WRITE]					= PAGE_COPY,
> +	[VM_WRITE | VM_READ]				= PAGE_COPY_X,
> +	[VM_EXEC]					= PAGE_READONLY,
> +	[VM_EXEC | VM_READ]				= PAGE_READONLY_X,
> +	[VM_EXEC | VM_WRITE]				= PAGE_COPY,
> +	[VM_EXEC | VM_WRITE | VM_READ]			= PAGE_COPY_X,
> +	[VM_SHARED]					= PAGE_NONE,
> +	[VM_SHARED | VM_READ]				= PAGE_READONLY_X,
> +	[VM_SHARED | VM_WRITE]				= PAGE_SHARED,
> +	[VM_SHARED | VM_WRITE | VM_READ]		= PAGE_SHARED_X,
> +	[VM_SHARED | VM_EXEC]				= PAGE_READONLY,
> +	[VM_SHARED | VM_EXEC | VM_READ]			= PAGE_READONLY_X,
> +	[VM_SHARED | VM_EXEC | VM_WRITE]		= PAGE_SHARED,
> +	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= PAGE_SHARED_X
> +};
> +DECLARE_VM_GET_PAGE_PROT
* [PATCH V4 09/26] loongarch/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (7 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 08/26] microblaze/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 10/26] openrisc/mm: " Anshuman Khandual
` (17 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel, Huacai Chen,
WANG Xuerui
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all __SXXX and
__PXXX macros, which are no longer needed, can be dropped.
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: WANG Xuerui <kernel@xen0n.name>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/loongarch/Kconfig | 1 +
arch/loongarch/include/asm/pgtable-bits.h | 19 ----------
arch/loongarch/mm/cache.c | 46 +++++++++++++++++++++++
3 files changed, 47 insertions(+), 19 deletions(-)
diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 1920d52653b4..fd07b8e760ee 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -9,6 +9,7 @@ config LOONGARCH
select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
select ARCH_HAS_PHYS_TO_DMA
select ARCH_HAS_PTE_SPECIAL
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_INLINE_READ_LOCK if !PREEMPTION
select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION
diff --git a/arch/loongarch/include/asm/pgtable-bits.h b/arch/loongarch/include/asm/pgtable-bits.h
index 3badd112d9ab..9ca147a29bab 100644
--- a/arch/loongarch/include/asm/pgtable-bits.h
+++ b/arch/loongarch/include/asm/pgtable-bits.h
@@ -83,25 +83,6 @@
_PAGE_GLOBAL | _PAGE_KERN | _CACHE_SUC)
#define PAGE_KERNEL_WUC __pgprot(_PAGE_PRESENT | __READABLE | __WRITEABLE | \
_PAGE_GLOBAL | _PAGE_KERN | _CACHE_WUC)
-
-#define __P000 __pgprot(_CACHE_CC | _PAGE_USER | _PAGE_PROTNONE | _PAGE_NO_EXEC | _PAGE_NO_READ)
-#define __P001 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_NO_EXEC)
-#define __P010 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_NO_EXEC)
-#define __P011 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_NO_EXEC)
-#define __P100 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT)
-#define __P101 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT)
-#define __P110 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT)
-#define __P111 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT)
-
-#define __S000 __pgprot(_CACHE_CC | _PAGE_USER | _PAGE_PROTNONE | _PAGE_NO_EXEC | _PAGE_NO_READ)
-#define __S001 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_NO_EXEC)
-#define __S010 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE)
-#define __S011 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE)
-#define __S100 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT)
-#define __S101 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT)
-#define __S110 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_WRITE)
-#define __S111 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_WRITE)
-
#ifndef __ASSEMBLY__
#define pgprot_noncached pgprot_noncached
diff --git a/arch/loongarch/mm/cache.c b/arch/loongarch/mm/cache.c
index 9e5ce5aa73f7..aa4ea357ea44 100644
--- a/arch/loongarch/mm/cache.c
+++ b/arch/loongarch/mm/cache.c
@@ -139,3 +139,49 @@ void cpu_cache_init(void)
shm_align_mask = PAGE_SIZE - 1;
}
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = __pgprot(_CACHE_CC | _PAGE_USER |
+ _PAGE_PROTNONE | _PAGE_NO_EXEC |
+ _PAGE_NO_READ),
+ [VM_READ] = __pgprot(_CACHE_CC | _PAGE_VALID |
+ _PAGE_USER | _PAGE_PRESENT |
+ _PAGE_NO_EXEC),
+ [VM_WRITE] = __pgprot(_CACHE_CC | _PAGE_VALID |
+ _PAGE_USER | _PAGE_PRESENT |
+ _PAGE_NO_EXEC),
+ [VM_WRITE | VM_READ] = __pgprot(_CACHE_CC | _PAGE_VALID |
+ _PAGE_USER | _PAGE_PRESENT |
+ _PAGE_NO_EXEC),
+ [VM_EXEC] = __pgprot(_CACHE_CC | _PAGE_VALID |
+ _PAGE_USER | _PAGE_PRESENT),
+ [VM_EXEC | VM_READ] = __pgprot(_CACHE_CC | _PAGE_VALID |
+ _PAGE_USER | _PAGE_PRESENT),
+ [VM_EXEC | VM_WRITE] = __pgprot(_CACHE_CC | _PAGE_VALID |
+ _PAGE_USER | _PAGE_PRESENT),
+ [VM_EXEC | VM_WRITE | VM_READ] = __pgprot(_CACHE_CC | _PAGE_VALID |
+ _PAGE_USER | _PAGE_PRESENT),
+ [VM_SHARED] = __pgprot(_CACHE_CC | _PAGE_USER |
+ _PAGE_PROTNONE | _PAGE_NO_EXEC |
+ _PAGE_NO_READ),
+ [VM_SHARED | VM_READ] = __pgprot(_CACHE_CC | _PAGE_VALID |
+ _PAGE_USER | _PAGE_PRESENT |
+ _PAGE_NO_EXEC),
+ [VM_SHARED | VM_WRITE] = __pgprot(_CACHE_CC | _PAGE_VALID |
+ _PAGE_USER | _PAGE_PRESENT |
+ _PAGE_NO_EXEC | _PAGE_WRITE),
+ [VM_SHARED | VM_WRITE | VM_READ] = __pgprot(_CACHE_CC | _PAGE_VALID |
+ _PAGE_USER | _PAGE_PRESENT |
+ _PAGE_NO_EXEC | _PAGE_WRITE),
+ [VM_SHARED | VM_EXEC] = __pgprot(_CACHE_CC | _PAGE_VALID |
+ _PAGE_USER | _PAGE_PRESENT),
+ [VM_SHARED | VM_EXEC | VM_READ] = __pgprot(_CACHE_CC | _PAGE_VALID |
+ _PAGE_USER | _PAGE_PRESENT),
+ [VM_SHARED | VM_EXEC | VM_WRITE] = __pgprot(_CACHE_CC | _PAGE_VALID |
+ _PAGE_USER | _PAGE_PRESENT |
+ _PAGE_WRITE),
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __pgprot(_CACHE_CC | _PAGE_VALID |
+ _PAGE_USER | _PAGE_PRESENT |
+ _PAGE_WRITE)
+};
+DECLARE_VM_GET_PAGE_PROT
--
2.25.1
* [PATCH V4 10/26] openrisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (8 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 09/26] loongarch/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-25 4:04 ` Stafford Horne
2022-06-24 4:55 ` [PATCH V4 11/26] extensa/mm: " Anshuman Khandual
` (16 subsequent siblings)
26 siblings, 1 reply; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel, Jonas Bonn
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all __SXXX and
__PXXX macros, which are no longer needed, can be dropped.
Cc: Jonas Bonn <jonas@southpole.se>
Cc: openrisc@lists.librecores.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/openrisc/Kconfig | 1 +
arch/openrisc/include/asm/pgtable.h | 18 ------------------
arch/openrisc/mm/init.c | 20 ++++++++++++++++++++
3 files changed, 21 insertions(+), 18 deletions(-)
diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
index e814df4c483c..fe0dfb50eb86 100644
--- a/arch/openrisc/Kconfig
+++ b/arch/openrisc/Kconfig
@@ -10,6 +10,7 @@ config OPENRISC
select ARCH_HAS_DMA_SET_UNCACHED
select ARCH_HAS_DMA_CLEAR_UNCACHED
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+ select ARCH_HAS_VM_GET_PAGE_PROT
select COMMON_CLK
select OF
select OF_EARLY_FLATTREE
diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h
index c3abbf71e09f..dcae8aea132f 100644
--- a/arch/openrisc/include/asm/pgtable.h
+++ b/arch/openrisc/include/asm/pgtable.h
@@ -176,24 +176,6 @@ extern void paging_init(void);
__pgprot(_PAGE_ALL | _PAGE_SRE | _PAGE_SWE \
| _PAGE_SHARED | _PAGE_DIRTY | _PAGE_EXEC | _PAGE_CI)
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY_X
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY_X
-#define __P100 PAGE_READONLY
-#define __P101 PAGE_READONLY_X
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY_X
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY_X
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED_X
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY_X
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED_X
-
/* zero page used for uninitialized stuff */
extern unsigned long empty_zero_page[2048];
#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
index 3a021ab6f1ae..a654b9dcba91 100644
--- a/arch/openrisc/mm/init.c
+++ b/arch/openrisc/mm/init.c
@@ -208,3 +208,23 @@ void __init mem_init(void)
mem_init_done = 1;
return;
}
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY_X,
+ [VM_WRITE] = PAGE_COPY,
+ [VM_WRITE | VM_READ] = PAGE_COPY_X,
+ [VM_EXEC] = PAGE_READONLY,
+ [VM_EXEC | VM_READ] = PAGE_READONLY_X,
+ [VM_EXEC | VM_WRITE] = PAGE_COPY,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_X,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READONLY_X,
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED_X,
+ [VM_SHARED | VM_EXEC] = PAGE_READONLY,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_X,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_X
+};
+DECLARE_VM_GET_PAGE_PROT
--
2.25.1
^ permalink raw reply related [flat|nested] 52+ messages in thread* Re: [PATCH V4 10/26] openrisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 ` [PATCH V4 10/26] openrisc/mm: " Anshuman Khandual
@ 2022-06-25 4:04 ` Stafford Horne
0 siblings, 0 replies; 52+ messages in thread
From: Stafford Horne @ 2022-06-25 4:04 UTC (permalink / raw)
To: Anshuman Khandual
Cc: linux-mm, linux-ia64, linux-sh, linux-kernel, linux-csky,
sparclinux, linux-riscv, Jonas Bonn, linux-s390, linux-hexagon,
x86, christophe.leroy, hch, linux-snps-arc, linux-xtensa,
linux-um, linux-m68k, openrisc, linux-arm-kernel, linux-parisc,
linux-mips, linux-alpha, Andrew Morton, linuxppc-dev
On Fri, Jun 24, 2022 at 10:13:23AM +0530, Anshuman Khandual wrote:
> This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
> vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
> up a private and static protection_map[] array. Subsequently all __SXXX and
> __PXXX macros can be dropped which are no longer needed.
>
> Cc: Jonas Bonn <jonas@southpole.se>
> Cc: openrisc@lists.librecores.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
> arch/openrisc/Kconfig | 1 +
> arch/openrisc/include/asm/pgtable.h | 18 ------------------
> arch/openrisc/mm/init.c | 20 ++++++++++++++++++++
> 3 files changed, 21 insertions(+), 18 deletions(-)
>
> diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
> index e814df4c483c..fe0dfb50eb86 100644
> --- a/arch/openrisc/Kconfig
> +++ b/arch/openrisc/Kconfig
> @@ -10,6 +10,7 @@ config OPENRISC
> select ARCH_HAS_DMA_SET_UNCACHED
> select ARCH_HAS_DMA_CLEAR_UNCACHED
> select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> + select ARCH_HAS_VM_GET_PAGE_PROT
> select COMMON_CLK
> select OF
> select OF_EARLY_FLATTREE
> diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h
> index c3abbf71e09f..dcae8aea132f 100644
> --- a/arch/openrisc/include/asm/pgtable.h
> +++ b/arch/openrisc/include/asm/pgtable.h
> @@ -176,24 +176,6 @@ extern void paging_init(void);
> __pgprot(_PAGE_ALL | _PAGE_SRE | _PAGE_SWE \
> | _PAGE_SHARED | _PAGE_DIRTY | _PAGE_EXEC | _PAGE_CI)
>
> -#define __P000 PAGE_NONE
> -#define __P001 PAGE_READONLY_X
> -#define __P010 PAGE_COPY
> -#define __P011 PAGE_COPY_X
> -#define __P100 PAGE_READONLY
> -#define __P101 PAGE_READONLY_X
> -#define __P110 PAGE_COPY
> -#define __P111 PAGE_COPY_X
> -
> -#define __S000 PAGE_NONE
> -#define __S001 PAGE_READONLY_X
> -#define __S010 PAGE_SHARED
> -#define __S011 PAGE_SHARED_X
> -#define __S100 PAGE_READONLY
> -#define __S101 PAGE_READONLY_X
> -#define __S110 PAGE_SHARED
> -#define __S111 PAGE_SHARED_X
> -
> /* zero page used for uninitialized stuff */
> extern unsigned long empty_zero_page[2048];
> #define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
> diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
> index 3a021ab6f1ae..a654b9dcba91 100644
> --- a/arch/openrisc/mm/init.c
> +++ b/arch/openrisc/mm/init.c
> @@ -208,3 +208,23 @@ void __init mem_init(void)
> mem_init_done = 1;
> return;
> }
> +
> +static pgprot_t protection_map[16] __ro_after_init = {
> + [VM_NONE] = PAGE_NONE,
> + [VM_READ] = PAGE_READONLY_X,
> + [VM_WRITE] = PAGE_COPY,
> + [VM_WRITE | VM_READ] = PAGE_COPY_X,
> + [VM_EXEC] = PAGE_READONLY,
> + [VM_EXEC | VM_READ] = PAGE_READONLY_X,
> + [VM_EXEC | VM_WRITE] = PAGE_COPY,
> + [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_X,
> + [VM_SHARED] = PAGE_NONE,
> + [VM_SHARED | VM_READ] = PAGE_READONLY_X,
> + [VM_SHARED | VM_WRITE] = PAGE_SHARED,
> + [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED_X,
> + [VM_SHARED | VM_EXEC] = PAGE_READONLY,
> + [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_X,
> + [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED,
> + [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_X
> +};
> +DECLARE_VM_GET_PAGE_PROT
Looks good.
Acked-by: Stafford Horne <shorne@gmail.com>
^ permalink raw reply [flat|nested] 52+ messages in thread
* [PATCH V4 11/26] xtensa/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (9 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 10/26] openrisc/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 12/26] hexagon/mm: " Anshuman Khandual
` (15 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel, Chris Zankel,
Guo Ren
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros, which are no longer needed, can be
dropped.
Cc: Chris Zankel <chris@zankel.net>
Cc: Guo Ren <guoren@kernel.org>
Cc: linux-xtensa@linux-xtensa.org
Cc: linux-csky@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/xtensa/Kconfig | 1 +
arch/xtensa/include/asm/pgtable.h | 18 ------------------
arch/xtensa/mm/init.c | 20 ++++++++++++++++++++
3 files changed, 21 insertions(+), 18 deletions(-)
diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index 0b0f0172cced..4c0d83520ff1 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -11,6 +11,7 @@ config XTENSA
select ARCH_HAS_DMA_SET_UNCACHED if MMU
select ARCH_HAS_STRNCPY_FROM_USER if !KASAN
select ARCH_HAS_STRNLEN_USER
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_USE_MEMTEST
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_USE_QUEUED_SPINLOCKS
diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index 0a91376131c5..e0d5531ae00d 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -200,24 +200,6 @@
* What follows is the closest we can get by reasonable means..
* See linux/mm/mmap.c for protection_map[] array that uses these definitions.
*/
-#define __P000 PAGE_NONE /* private --- */
-#define __P001 PAGE_READONLY /* private --r */
-#define __P010 PAGE_COPY /* private -w- */
-#define __P011 PAGE_COPY /* private -wr */
-#define __P100 PAGE_READONLY_EXEC /* private x-- */
-#define __P101 PAGE_READONLY_EXEC /* private x-r */
-#define __P110 PAGE_COPY_EXEC /* private xw- */
-#define __P111 PAGE_COPY_EXEC /* private xwr */
-
-#define __S000 PAGE_NONE /* shared --- */
-#define __S001 PAGE_READONLY /* shared --r */
-#define __S010 PAGE_SHARED /* shared -w- */
-#define __S011 PAGE_SHARED /* shared -wr */
-#define __S100 PAGE_READONLY_EXEC /* shared x-- */
-#define __S101 PAGE_READONLY_EXEC /* shared x-r */
-#define __S110 PAGE_SHARED_EXEC /* shared xw- */
-#define __S111 PAGE_SHARED_EXEC /* shared xwr */
-
#ifndef __ASSEMBLY__
#define pte_ERROR(e) \
diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index 6a32b2cf2718..7d5ac1b049c3 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -216,3 +216,23 @@ static int __init parse_memmap_opt(char *str)
return 0;
}
early_param("memmap", parse_memmap_opt);
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY,
+ [VM_WRITE] = PAGE_COPY,
+ [VM_WRITE | VM_READ] = PAGE_COPY,
+ [VM_EXEC] = PAGE_READONLY_EXEC,
+ [VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+ [VM_EXEC | VM_WRITE] = PAGE_COPY_EXEC,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_EXEC,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC] = PAGE_READONLY_EXEC,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED_EXEC,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_EXEC
+};
+DECLARE_VM_GET_PAGE_PROT
--
2.25.1
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH V4 12/26] hexagon/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (10 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 11/26] xtensa/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 13/26] parisc/mm: " Anshuman Khandual
` (14 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel, Brian Cain
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros, which are no longer needed, can be
dropped.
Cc: Brian Cain <bcain@codeaurora.org>
Cc: linux-hexagon@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/hexagon/Kconfig | 1 +
arch/hexagon/include/asm/pgtable.h | 27 -------------------
arch/hexagon/mm/init.c | 42 ++++++++++++++++++++++++++++++
3 files changed, 43 insertions(+), 27 deletions(-)
diff --git a/arch/hexagon/Kconfig b/arch/hexagon/Kconfig
index 54eadf265178..bc4ceecd0588 100644
--- a/arch/hexagon/Kconfig
+++ b/arch/hexagon/Kconfig
@@ -6,6 +6,7 @@ config HEXAGON
def_bool y
select ARCH_32BIT_OFF_T
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_PREEMPT
select DMA_GLOBAL_POOL
# Other pending projects/to-do items.
diff --git a/arch/hexagon/include/asm/pgtable.h b/arch/hexagon/include/asm/pgtable.h
index 0610724d6a28..f7048c18b6f9 100644
--- a/arch/hexagon/include/asm/pgtable.h
+++ b/arch/hexagon/include/asm/pgtable.h
@@ -126,33 +126,6 @@ extern unsigned long _dflt_cache_att;
*/
#define CACHEDEF (CACHE_DEFAULT << 6)
-/* Private (copy-on-write) page protections. */
-#define __P000 __pgprot(_PAGE_PRESENT | _PAGE_USER | CACHEDEF)
-#define __P001 __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_READ | CACHEDEF)
-#define __P010 __P000 /* Write-only copy-on-write */
-#define __P011 __P001 /* Read/Write copy-on-write */
-#define __P100 __pgprot(_PAGE_PRESENT | _PAGE_USER | \
- _PAGE_EXECUTE | CACHEDEF)
-#define __P101 __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_EXECUTE | \
- _PAGE_READ | CACHEDEF)
-#define __P110 __P100 /* Write/execute copy-on-write */
-#define __P111 __P101 /* Read/Write/Execute, copy-on-write */
-
-/* Shared page protections. */
-#define __S000 __P000
-#define __S001 __P001
-#define __S010 __pgprot(_PAGE_PRESENT | _PAGE_USER | \
- _PAGE_WRITE | CACHEDEF)
-#define __S011 __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_READ | \
- _PAGE_WRITE | CACHEDEF)
-#define __S100 __pgprot(_PAGE_PRESENT | _PAGE_USER | \
- _PAGE_EXECUTE | CACHEDEF)
-#define __S101 __P101
-#define __S110 __pgprot(_PAGE_PRESENT | _PAGE_USER | \
- _PAGE_EXECUTE | _PAGE_WRITE | CACHEDEF)
-#define __S111 __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_READ | \
- _PAGE_EXECUTE | _PAGE_WRITE | CACHEDEF)
-
extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; /* located in head.S */
/* HUGETLB not working currently */
diff --git a/arch/hexagon/mm/init.c b/arch/hexagon/mm/init.c
index 3167a3b5c97b..5d4a44a48ad0 100644
--- a/arch/hexagon/mm/init.c
+++ b/arch/hexagon/mm/init.c
@@ -234,3 +234,45 @@ void __init setup_arch_memory(void)
* which is called by start_kernel() later on in the process
*/
}
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ CACHEDEF),
+ [VM_READ] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ _PAGE_READ | CACHEDEF),
+ [VM_WRITE] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ CACHEDEF),
+ [VM_WRITE | VM_READ] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ _PAGE_READ | CACHEDEF),
+ [VM_EXEC] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ _PAGE_EXECUTE | CACHEDEF),
+ [VM_EXEC | VM_READ] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ _PAGE_EXECUTE | _PAGE_READ |
+ CACHEDEF),
+ [VM_EXEC | VM_WRITE] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ _PAGE_EXECUTE | CACHEDEF),
+ [VM_EXEC | VM_WRITE | VM_READ] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ _PAGE_EXECUTE | _PAGE_READ |
+ CACHEDEF),
+ [VM_SHARED] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ CACHEDEF),
+ [VM_SHARED | VM_READ] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ _PAGE_READ | CACHEDEF),
+ [VM_SHARED | VM_WRITE] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ _PAGE_WRITE | CACHEDEF),
+ [VM_SHARED | VM_WRITE | VM_READ] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ _PAGE_READ | _PAGE_WRITE |
+ CACHEDEF),
+ [VM_SHARED | VM_EXEC] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ _PAGE_EXECUTE | CACHEDEF),
+ [VM_SHARED | VM_EXEC | VM_READ] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ _PAGE_EXECUTE | _PAGE_READ |
+ CACHEDEF),
+ [VM_SHARED | VM_EXEC | VM_WRITE] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ _PAGE_EXECUTE | _PAGE_WRITE |
+ CACHEDEF),
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __pgprot(_PAGE_PRESENT | _PAGE_USER |
+ _PAGE_READ | _PAGE_EXECUTE |
+ _PAGE_WRITE | CACHEDEF)
+};
+DECLARE_VM_GET_PAGE_PROT
--
2.25.1
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH V4 13/26] parisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (11 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 12/26] hexagon/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 14/26] alpha/mm: " Anshuman Khandual
` (13 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel,
James E.J. Bottomley
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros, which are no longer needed, can be
dropped.
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: linux-parisc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/parisc/Kconfig | 1 +
arch/parisc/include/asm/pgtable.h | 18 ------------------
arch/parisc/mm/init.c | 20 ++++++++++++++++++++
3 files changed, 21 insertions(+), 18 deletions(-)
diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
index 5f2448dc5a2b..90eabc846f81 100644
--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -11,6 +11,7 @@ config PARISC
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_HAS_UBSAN_SANITIZE_ALL
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_PTE_SPECIAL
select ARCH_NO_SG_CHAIN
select ARCH_SUPPORTS_HUGETLBFS if PA20
diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
index 69765a6dbe89..6a1899a9b420 100644
--- a/arch/parisc/include/asm/pgtable.h
+++ b/arch/parisc/include/asm/pgtable.h
@@ -271,24 +271,6 @@ extern void __update_cache(pte_t pte);
*/
/*xwr*/
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 __P000 /* copy on write */
-#define __P011 __P001 /* copy on write */
-#define __P100 PAGE_EXECREAD
-#define __P101 PAGE_EXECREAD
-#define __P110 __P100 /* copy on write */
-#define __P111 __P101 /* copy on write */
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_WRITEONLY
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_EXECREAD
-#define __S101 PAGE_EXECREAD
-#define __S110 PAGE_RWX
-#define __S111 PAGE_RWX
-
extern pgd_t swapper_pg_dir[]; /* declared in init_task.c */
diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
index 0a81499dd35e..451f20f87711 100644
--- a/arch/parisc/mm/init.c
+++ b/arch/parisc/mm/init.c
@@ -871,3 +871,23 @@ void flush_tlb_all(void)
spin_unlock(&sid_lock);
}
#endif
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY,
+ [VM_WRITE] = PAGE_NONE,
+ [VM_WRITE | VM_READ] = PAGE_READONLY,
+ [VM_EXEC] = PAGE_EXECREAD,
+ [VM_EXEC | VM_READ] = PAGE_EXECREAD,
+ [VM_EXEC | VM_WRITE] = PAGE_EXECREAD,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_EXECREAD,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_WRITE] = PAGE_WRITEONLY,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC] = PAGE_EXECREAD,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_EXECREAD,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_RWX,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_RWX
+};
+DECLARE_VM_GET_PAGE_PROT
--
2.25.1
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH V4 14/26] alpha/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (12 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 13/26] parisc/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 15/26] nios2/mm: " Anshuman Khandual
` (12 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel,
Richard Henderson
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros, which are no longer needed, can be
dropped.
Cc: Richard Henderson <rth@twiddle.net>
Cc: linux-alpha@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/alpha/Kconfig | 1 +
arch/alpha/include/asm/pgtable.h | 17 -----------------
arch/alpha/mm/init.c | 22 ++++++++++++++++++++++
3 files changed, 23 insertions(+), 17 deletions(-)
diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
index 7d0d26b5b3f5..db1c8b329461 100644
--- a/arch/alpha/Kconfig
+++ b/arch/alpha/Kconfig
@@ -2,6 +2,7 @@
config ALPHA
bool
default y
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_32BIT_USTAT_F_TINODE
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index 170451fde043..3ea9661c09ff 100644
--- a/arch/alpha/include/asm/pgtable.h
+++ b/arch/alpha/include/asm/pgtable.h
@@ -116,23 +116,6 @@ struct vm_area_struct;
* arch/alpha/mm/fault.c)
*/
/* xwr */
-#define __P000 _PAGE_P(_PAGE_FOE | _PAGE_FOW | _PAGE_FOR)
-#define __P001 _PAGE_P(_PAGE_FOE | _PAGE_FOW)
-#define __P010 _PAGE_P(_PAGE_FOE)
-#define __P011 _PAGE_P(_PAGE_FOE)
-#define __P100 _PAGE_P(_PAGE_FOW | _PAGE_FOR)
-#define __P101 _PAGE_P(_PAGE_FOW)
-#define __P110 _PAGE_P(0)
-#define __P111 _PAGE_P(0)
-
-#define __S000 _PAGE_S(_PAGE_FOE | _PAGE_FOW | _PAGE_FOR)
-#define __S001 _PAGE_S(_PAGE_FOE | _PAGE_FOW)
-#define __S010 _PAGE_S(_PAGE_FOE)
-#define __S011 _PAGE_S(_PAGE_FOE)
-#define __S100 _PAGE_S(_PAGE_FOW | _PAGE_FOR)
-#define __S101 _PAGE_S(_PAGE_FOW)
-#define __S110 _PAGE_S(0)
-#define __S111 _PAGE_S(0)
/*
* pgprot_noncached() is only for infiniband pci support, and a real
diff --git a/arch/alpha/mm/init.c b/arch/alpha/mm/init.c
index 7511723b7669..a2350b2f44d0 100644
--- a/arch/alpha/mm/init.c
+++ b/arch/alpha/mm/init.c
@@ -280,3 +280,25 @@ mem_init(void)
high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
memblock_free_all();
}
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = _PAGE_P(_PAGE_FOE | _PAGE_FOW |
+ _PAGE_FOR),
+ [VM_READ] = _PAGE_P(_PAGE_FOE | _PAGE_FOW),
+ [VM_WRITE] = _PAGE_P(_PAGE_FOE),
+ [VM_WRITE | VM_READ] = _PAGE_P(_PAGE_FOE),
+ [VM_EXEC] = _PAGE_P(_PAGE_FOW | _PAGE_FOR),
+ [VM_EXEC | VM_READ] = _PAGE_P(_PAGE_FOW),
+ [VM_EXEC | VM_WRITE] = _PAGE_P(0),
+ [VM_EXEC | VM_WRITE | VM_READ] = _PAGE_P(0),
+ [VM_SHARED] = _PAGE_S(_PAGE_FOE | _PAGE_FOW |
+ _PAGE_FOR),
+ [VM_SHARED | VM_READ] = _PAGE_S(_PAGE_FOE | _PAGE_FOW),
+ [VM_SHARED | VM_WRITE] = _PAGE_S(_PAGE_FOE),
+ [VM_SHARED | VM_WRITE | VM_READ] = _PAGE_S(_PAGE_FOE),
+ [VM_SHARED | VM_EXEC] = _PAGE_S(_PAGE_FOW | _PAGE_FOR),
+ [VM_SHARED | VM_EXEC | VM_READ] = _PAGE_S(_PAGE_FOW),
+ [VM_SHARED | VM_EXEC | VM_WRITE] = _PAGE_S(0),
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = _PAGE_S(0)
+};
+DECLARE_VM_GET_PAGE_PROT
--
2.25.1
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH V4 15/26] nios2/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (13 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 14/26] alpha/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 16/26] riscv/mm: " Anshuman Khandual
` (11 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel, Dinh Nguyen
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros, which are no longer needed, can be
dropped.
Cc: Dinh Nguyen <dinguyen@kernel.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/nios2/Kconfig | 1 +
arch/nios2/include/asm/pgtable.h | 16 ----------------
arch/nios2/mm/init.c | 20 ++++++++++++++++++++
3 files changed, 21 insertions(+), 16 deletions(-)
diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig
index 4167f1eb4cd8..e0459dffd218 100644
--- a/arch/nios2/Kconfig
+++ b/arch/nios2/Kconfig
@@ -6,6 +6,7 @@ config NIOS2
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select ARCH_HAS_DMA_SET_UNCACHED
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_SWAP
select COMMON_CLK
select TIMER_OF
diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
index 262d0609268c..470516d4555e 100644
--- a/arch/nios2/include/asm/pgtable.h
+++ b/arch/nios2/include/asm/pgtable.h
@@ -40,24 +40,8 @@ struct mm_struct;
*/
/* Remove W bit on private pages for COW support */
-#define __P000 MKP(0, 0, 0)
-#define __P001 MKP(0, 0, 1)
-#define __P010 MKP(0, 0, 0) /* COW */
-#define __P011 MKP(0, 0, 1) /* COW */
-#define __P100 MKP(1, 0, 0)
-#define __P101 MKP(1, 0, 1)
-#define __P110 MKP(1, 0, 0) /* COW */
-#define __P111 MKP(1, 0, 1) /* COW */
/* Shared pages can have exact HW mapping */
-#define __S000 MKP(0, 0, 0)
-#define __S001 MKP(0, 0, 1)
-#define __S010 MKP(0, 1, 0)
-#define __S011 MKP(0, 1, 1)
-#define __S100 MKP(1, 0, 0)
-#define __S101 MKP(1, 0, 1)
-#define __S110 MKP(1, 1, 0)
-#define __S111 MKP(1, 1, 1)
/* Used all over the kernel */
#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_CACHED | _PAGE_READ | \
diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c
index 613fcaa5988a..9a3dd4c80d70 100644
--- a/arch/nios2/mm/init.c
+++ b/arch/nios2/mm/init.c
@@ -124,3 +124,23 @@ const char *arch_vma_name(struct vm_area_struct *vma)
{
return (vma->vm_start == KUSER_BASE) ? "[kuser]" : NULL;
}
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = MKP(0, 0, 0),
+ [VM_READ] = MKP(0, 0, 1),
+ [VM_WRITE] = MKP(0, 0, 0),
+ [VM_WRITE | VM_READ] = MKP(0, 0, 1),
+ [VM_EXEC] = MKP(1, 0, 0),
+ [VM_EXEC | VM_READ] = MKP(1, 0, 1),
+ [VM_EXEC | VM_WRITE] = MKP(1, 0, 0),
+ [VM_EXEC | VM_WRITE | VM_READ] = MKP(1, 0, 1),
+ [VM_SHARED] = MKP(0, 0, 0),
+ [VM_SHARED | VM_READ] = MKP(0, 0, 1),
+ [VM_SHARED | VM_WRITE] = MKP(0, 1, 0),
+ [VM_SHARED | VM_WRITE | VM_READ] = MKP(0, 1, 1),
+ [VM_SHARED | VM_EXEC] = MKP(1, 0, 0),
+ [VM_SHARED | VM_EXEC | VM_READ] = MKP(1, 0, 1),
+ [VM_SHARED | VM_EXEC | VM_WRITE] = MKP(1, 1, 0),
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = MKP(1, 1, 1)
+};
+DECLARE_VM_GET_PAGE_PROT
--
2.25.1
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH V4 16/26] riscv/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (14 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 15/26] nios2/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 5:10 ` Christoph Hellwig
2022-06-24 4:55 ` [PATCH V4 17/26] csky/mm: " Anshuman Khandual
` (10 subsequent siblings)
26 siblings, 1 reply; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel, Paul Walmsley,
Palmer Dabbelt
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros, which are no longer needed, can be
dropped.
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: linux-riscv@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/pgtable.h | 20 --------------------
arch/riscv/mm/init.c | 20 ++++++++++++++++++++
3 files changed, 21 insertions(+), 20 deletions(-)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 32ffef9f6e5b..583389d4e43a 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -32,6 +32,7 @@ config RISCV
select ARCH_HAS_STRICT_MODULE_RWX if MMU && !XIP_KERNEL
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UBSAN_SANITIZE_ALL
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
select ARCH_STACKWALK
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 1d1be9d9419c..23e643db6575 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -186,26 +186,6 @@ extern struct pt_alloc_ops pt_ops __initdata;
extern pgd_t swapper_pg_dir[];
-/* MAP_PRIVATE permissions: xwr (copy-on-write) */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READ
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_EXEC
-#define __P101 PAGE_READ_EXEC
-#define __P110 PAGE_COPY_EXEC
-#define __P111 PAGE_COPY_READ_EXEC
-
-/* MAP_SHARED permissions: xwr */
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READ
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_EXEC
-#define __S101 PAGE_READ_EXEC
-#define __S110 PAGE_SHARED_EXEC
-#define __S111 PAGE_SHARED_EXEC
-
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
static inline int pmd_present(pmd_t pmd)
{
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index d466ec670e1f..f976580500b1 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -288,6 +288,26 @@ static pmd_t __maybe_unused early_dtb_pmd[PTRS_PER_PMD] __initdata __aligned(PAG
#define early_pg_dir ((pgd_t *)XIP_FIXUP(early_pg_dir))
#endif /* CONFIG_XIP_KERNEL */
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READ,
+ [VM_WRITE] = PAGE_COPY,
+ [VM_WRITE | VM_READ] = PAGE_COPY,
+ [VM_EXEC] = PAGE_EXEC,
+ [VM_EXEC | VM_READ] = PAGE_READ_EXEC,
+ [VM_EXEC | VM_WRITE] = PAGE_COPY_EXEC,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_READ_EXEC,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READ,
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC] = PAGE_EXEC,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READ_EXEC,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED_EXEC,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_EXEC
+};
+DECLARE_VM_GET_PAGE_PROT
+
void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
{
unsigned long addr = __fix_to_virt(idx);
--
2.25.1
^ permalink raw reply related [flat|nested] 52+ messages in thread
* Re: [PATCH V4 16/26] riscv/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 ` [PATCH V4 16/26] riscv/mm: " Anshuman Khandual
@ 2022-06-24 5:10 ` Christoph Hellwig
0 siblings, 0 replies; 52+ messages in thread
From: Christoph Hellwig @ 2022-06-24 5:10 UTC (permalink / raw)
To: Anshuman Khandual
Cc: linux-mm, hch, christophe.leroy, Andrew Morton, linuxppc-dev,
sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel, Paul Walmsley,
Palmer Dabbelt
On Fri, Jun 24, 2022 at 10:13:29AM +0530, Anshuman Khandual wrote:
index d466ec670e1f..f976580500b1 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -288,6 +288,26 @@ static pmd_t __maybe_unused early_dtb_pmd[PTRS_PER_PMD] __initdata __aligned(PAG
> #define early_pg_dir ((pgd_t *)XIP_FIXUP(early_pg_dir))
> #endif /* CONFIG_XIP_KERNEL */
>
> +static pgprot_t protection_map[16] __ro_after_init = {
Can't this be marked const now?
^ permalink raw reply [flat|nested] 52+ messages in thread
* [PATCH V4 17/26] csky/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (15 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 16/26] riscv/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-25 3:07 ` Guo Ren
2022-06-24 4:55 ` [PATCH V4 18/26] s390/mm: " Anshuman Khandual
` (9 subsequent siblings)
26 siblings, 1 reply; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel,
Geert Uytterhoeven
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros, which are no longer needed, can be
dropped.
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: linux-csky@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/csky/Kconfig | 1 +
arch/csky/include/asm/pgtable.h | 18 ------------------
arch/csky/mm/init.c | 20 ++++++++++++++++++++
3 files changed, 21 insertions(+), 18 deletions(-)
diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
index 21d72b078eef..588b8a9c68ed 100644
--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -6,6 +6,7 @@ config CSKY
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && $(cc-option,-mbacktrace)
diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
index bbe245117777..229a5f4ad7fc 100644
--- a/arch/csky/include/asm/pgtable.h
+++ b/arch/csky/include/asm/pgtable.h
@@ -77,24 +77,6 @@
#define MAX_SWAPFILES_CHECK() \
BUILD_BUG_ON(MAX_SWAPFILES_SHIFT != 5)
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READ
-#define __P010 PAGE_READ
-#define __P011 PAGE_READ
-#define __P100 PAGE_READ
-#define __P101 PAGE_READ
-#define __P110 PAGE_READ
-#define __P111 PAGE_READ
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READ
-#define __S010 PAGE_WRITE
-#define __S011 PAGE_WRITE
-#define __S100 PAGE_READ
-#define __S101 PAGE_READ
-#define __S110 PAGE_WRITE
-#define __S111 PAGE_WRITE
-
extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
diff --git a/arch/csky/mm/init.c b/arch/csky/mm/init.c
index bf2004aa811a..1bf7b2a748fd 100644
--- a/arch/csky/mm/init.c
+++ b/arch/csky/mm/init.c
@@ -197,3 +197,23 @@ void __init fixaddr_init(void)
vaddr = __fix_to_virt(__end_of_fixed_addresses - 1) & PMD_MASK;
fixrange_init(vaddr, vaddr + PMD_SIZE, swapper_pg_dir);
}
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READ,
+ [VM_WRITE] = PAGE_READ,
+ [VM_WRITE | VM_READ] = PAGE_READ,
+ [VM_EXEC] = PAGE_READ,
+ [VM_EXEC | VM_READ] = PAGE_READ,
+ [VM_EXEC | VM_WRITE] = PAGE_READ,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_READ,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READ,
+ [VM_SHARED | VM_WRITE] = PAGE_WRITE,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_WRITE,
+ [VM_SHARED | VM_EXEC] = PAGE_READ,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READ,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_WRITE,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_WRITE
+};
+DECLARE_VM_GET_PAGE_PROT
--
2.25.1
^ permalink raw reply related [flat|nested] 52+ messages in thread

* Re: [PATCH V4 17/26] csky/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 ` [PATCH V4 17/26] csky/mm: " Anshuman Khandual
@ 2022-06-25 3:07 ` Guo Ren
0 siblings, 0 replies; 52+ messages in thread
From: Guo Ren @ 2022-06-25 3:07 UTC (permalink / raw)
To: Anshuman Khandual
Cc: Linux-MM, Christoph Hellwig, Christophe Leroy, Andrew Morton,
linuxppc-dev, sparclinux, the arch/x86 maintainers, Openrisc,
open list:TENSILICA XTENSA PORT (xtensa), linux-csky,
linux-hexagon, Parisc List, linux-alpha, linux-riscv, linux-s390,
linux-ia64, open list:BROADCOM NVRAM DRIVER, linux-m68k,
linux-snps-arc, Linux ARM, linux-um, linux-sh,
Linux Kernel Mailing List, Geert Uytterhoeven
For csky part.
Acked-by: Guo Ren <guoren@kernel.org>
On Fri, Jun 24, 2022 at 12:48 PM Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
> This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
> vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
> up a private and static protection_map[] array. Subsequently all __SXXX and
> __PXXX macros can be dropped which are no longer needed.
>
> Cc: Geert Uytterhoeven <geert@linux-m68k.org>
> Cc: linux-csky@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
> arch/csky/Kconfig | 1 +
> arch/csky/include/asm/pgtable.h | 18 ------------------
> arch/csky/mm/init.c | 20 ++++++++++++++++++++
> 3 files changed, 21 insertions(+), 18 deletions(-)
>
> diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
> index 21d72b078eef..588b8a9c68ed 100644
> --- a/arch/csky/Kconfig
> +++ b/arch/csky/Kconfig
> @@ -6,6 +6,7 @@ config CSKY
> select ARCH_HAS_GCOV_PROFILE_ALL
> select ARCH_HAS_SYNC_DMA_FOR_CPU
> select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> + select ARCH_HAS_VM_GET_PAGE_PROT
> select ARCH_USE_BUILTIN_BSWAP
> select ARCH_USE_QUEUED_RWLOCKS
> select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && $(cc-option,-mbacktrace)
> diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
> index bbe245117777..229a5f4ad7fc 100644
> --- a/arch/csky/include/asm/pgtable.h
> +++ b/arch/csky/include/asm/pgtable.h
> @@ -77,24 +77,6 @@
> #define MAX_SWAPFILES_CHECK() \
> BUILD_BUG_ON(MAX_SWAPFILES_SHIFT != 5)
>
> -#define __P000 PAGE_NONE
> -#define __P001 PAGE_READ
> -#define __P010 PAGE_READ
> -#define __P011 PAGE_READ
> -#define __P100 PAGE_READ
> -#define __P101 PAGE_READ
> -#define __P110 PAGE_READ
> -#define __P111 PAGE_READ
> -
> -#define __S000 PAGE_NONE
> -#define __S001 PAGE_READ
> -#define __S010 PAGE_WRITE
> -#define __S011 PAGE_WRITE
> -#define __S100 PAGE_READ
> -#define __S101 PAGE_READ
> -#define __S110 PAGE_WRITE
> -#define __S111 PAGE_WRITE
> -
> extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
> #define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
>
> diff --git a/arch/csky/mm/init.c b/arch/csky/mm/init.c
> index bf2004aa811a..1bf7b2a748fd 100644
> --- a/arch/csky/mm/init.c
> +++ b/arch/csky/mm/init.c
> @@ -197,3 +197,23 @@ void __init fixaddr_init(void)
> vaddr = __fix_to_virt(__end_of_fixed_addresses - 1) & PMD_MASK;
> fixrange_init(vaddr, vaddr + PMD_SIZE, swapper_pg_dir);
> }
> +
> +static pgprot_t protection_map[16] __ro_after_init = {
> + [VM_NONE] = PAGE_NONE,
> + [VM_READ] = PAGE_READ,
> + [VM_WRITE] = PAGE_READ,
> + [VM_WRITE | VM_READ] = PAGE_READ,
> + [VM_EXEC] = PAGE_READ,
> + [VM_EXEC | VM_READ] = PAGE_READ,
> + [VM_EXEC | VM_WRITE] = PAGE_READ,
> + [VM_EXEC | VM_WRITE | VM_READ] = PAGE_READ,
> + [VM_SHARED] = PAGE_NONE,
> + [VM_SHARED | VM_READ] = PAGE_READ,
> + [VM_SHARED | VM_WRITE] = PAGE_WRITE,
> + [VM_SHARED | VM_WRITE | VM_READ] = PAGE_WRITE,
> + [VM_SHARED | VM_EXEC] = PAGE_READ,
> + [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READ,
> + [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_WRITE,
> + [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_WRITE
> +};
> +DECLARE_VM_GET_PAGE_PROT
> --
> 2.25.1
>
--
Best Regards
Guo Ren
ML: https://lore.kernel.org/linux-csky/
* [PATCH V4 18/26] s390/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (16 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 17/26] csky/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 19/26] ia64/mm: " Anshuman Khandual
` (8 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel,
Heiko Carstens, Vasily Gorbik
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: linux-s390@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/s390/Kconfig | 1 +
arch/s390/include/asm/pgtable.h | 17 -----------------
arch/s390/mm/mmap.c | 20 ++++++++++++++++++++
3 files changed, 21 insertions(+), 17 deletions(-)
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 91c0b80a8bf0..c4481377ca83 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -81,6 +81,7 @@ config S390
select ARCH_HAS_SYSCALL_WRAPPER
select ARCH_HAS_UBSAN_SANITIZE_ALL
select ARCH_HAS_VDSO_DATA
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_INLINE_READ_LOCK
select ARCH_INLINE_READ_LOCK_BH
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index a397b072a580..c63a05b5368a 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -424,23 +424,6 @@ static inline int is_module_addr(void *addr)
* implies read permission.
*/
/*xwr*/
-#define __P000 PAGE_NONE
-#define __P001 PAGE_RO
-#define __P010 PAGE_RO
-#define __P011 PAGE_RO
-#define __P100 PAGE_RX
-#define __P101 PAGE_RX
-#define __P110 PAGE_RX
-#define __P111 PAGE_RX
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_RO
-#define __S010 PAGE_RW
-#define __S011 PAGE_RW
-#define __S100 PAGE_RX
-#define __S101 PAGE_RX
-#define __S110 PAGE_RWX
-#define __S111 PAGE_RWX
/*
* Segment entry (large page) protection definitions.
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index d545f5c39f7e..c745b545012b 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -188,3 +188,23 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
}
}
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_RO,
+ [VM_WRITE] = PAGE_RO,
+ [VM_WRITE | VM_READ] = PAGE_RO,
+ [VM_EXEC] = PAGE_RX,
+ [VM_EXEC | VM_READ] = PAGE_RX,
+ [VM_EXEC | VM_WRITE] = PAGE_RX,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_RX,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_RO,
+ [VM_SHARED | VM_WRITE] = PAGE_RW,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_RW,
+ [VM_SHARED | VM_EXEC] = PAGE_RX,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_RX,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_RWX,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_RWX
+};
+DECLARE_VM_GET_PAGE_PROT
--
2.25.1
* [PATCH V4 19/26] ia64/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (17 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 18/26] s390/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 20/26] mips/mm: " Anshuman Khandual
` (7 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.
Cc: linux-ia64@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/ia64/Kconfig | 1 +
arch/ia64/include/asm/pgtable.h | 18 ------------------
arch/ia64/mm/init.c | 28 +++++++++++++++++++++++++++-
3 files changed, 28 insertions(+), 19 deletions(-)
diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index cb93769a9f2a..0510a5737711 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -12,6 +12,7 @@ config IA64
select ARCH_HAS_DMA_MARK_CLEAN
select ARCH_HAS_STRNCPY_FROM_USER
select ARCH_HAS_STRNLEN_USER
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
select ACPI
diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
index 7aa8f2330fb1..6925e28ae61d 100644
--- a/arch/ia64/include/asm/pgtable.h
+++ b/arch/ia64/include/asm/pgtable.h
@@ -161,24 +161,6 @@
* attempts to write to the page.
*/
/* xwr */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_READONLY /* write to priv pg -> copy & make writable */
-#define __P011 PAGE_READONLY /* ditto */
-#define __P100 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_X_RX)
-#define __P101 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX)
-#define __P110 PAGE_COPY_EXEC
-#define __P111 PAGE_COPY_EXEC
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED /* we don't have (and don't need) write-only */
-#define __S011 PAGE_SHARED
-#define __S100 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_X_RX)
-#define __S101 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX)
-#define __S110 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RWX)
-#define __S111 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RWX)
-
#define pgd_ERROR(e) printk("%s:%d: bad pgd %016lx.\n", __FILE__, __LINE__, pgd_val(e))
#if CONFIG_PGTABLE_LEVELS == 4
#define pud_ERROR(e) printk("%s:%d: bad pud %016lx.\n", __FILE__, __LINE__, pud_val(e))
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 855d949d81df..9c91df243d62 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -273,7 +273,7 @@ static int __init gate_vma_init(void)
gate_vma.vm_start = FIXADDR_USER_START;
gate_vma.vm_end = FIXADDR_USER_END;
gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;
- gate_vma.vm_page_prot = __P101;
+ gate_vma.vm_page_prot = __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX);
return 0;
}
@@ -490,3 +490,29 @@ void arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
__remove_pages(start_pfn, nr_pages, altmap);
}
#endif
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY,
+ [VM_WRITE] = PAGE_READONLY,
+ [VM_WRITE | VM_READ] = PAGE_READONLY,
+ [VM_EXEC] = __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+ _PAGE_AR_X_RX),
+ [VM_EXEC | VM_READ] = __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+ _PAGE_AR_RX),
+ [VM_EXEC | VM_WRITE] = PAGE_COPY_EXEC,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_EXEC,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC] = __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+ _PAGE_AR_X_RX),
+ [VM_SHARED | VM_EXEC | VM_READ] = __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+ _PAGE_AR_RX),
+ [VM_SHARED | VM_EXEC | VM_WRITE] = __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+ _PAGE_AR_RWX),
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+ _PAGE_AR_RWX)
+};
+DECLARE_VM_GET_PAGE_PROT
--
2.25.1
* [PATCH V4 20/26] mips/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (18 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 19/26] ia64/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 21/26] m68k/mm: " Anshuman Khandual
` (6 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel,
Thomas Bogendoerfer
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: linux-mips@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/mips/Kconfig | 1 +
arch/mips/include/asm/pgtable.h | 22 ----------------------
arch/mips/mm/cache.c | 3 +++
3 files changed, 4 insertions(+), 22 deletions(-)
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index db09d45d59ec..d0b7eb11ec81 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -14,6 +14,7 @@ config MIPS
select ARCH_HAS_STRNLEN_USER
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UBSAN_SANITIZE_ALL
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_KEEP_MEMBLOCK
select ARCH_SUPPORTS_UPROBES
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 374c6322775d..6caec386ad2f 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -41,28 +41,6 @@ struct vm_area_struct;
* by reasonable means..
*/
-/*
- * Dummy values to fill the table in mmap.c
- * The real values will be generated at runtime
- */
-#define __P000 __pgprot(0)
-#define __P001 __pgprot(0)
-#define __P010 __pgprot(0)
-#define __P011 __pgprot(0)
-#define __P100 __pgprot(0)
-#define __P101 __pgprot(0)
-#define __P110 __pgprot(0)
-#define __P111 __pgprot(0)
-
-#define __S000 __pgprot(0)
-#define __S001 __pgprot(0)
-#define __S010 __pgprot(0)
-#define __S011 __pgprot(0)
-#define __S100 __pgprot(0)
-#define __S101 __pgprot(0)
-#define __S110 __pgprot(0)
-#define __S111 __pgprot(0)
-
extern unsigned long _page_cachable_default;
extern void __update_cache(unsigned long address, pte_t pte);
diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 7be7240f7703..11b3e7ddafd5 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -159,6 +159,9 @@ EXPORT_SYMBOL(_page_cachable_default);
#define PM(p) __pgprot(_page_cachable_default | (p))
+static pgprot_t protection_map[16] __ro_after_init;
+DECLARE_VM_GET_PAGE_PROT
+
static inline void setup_protection_map(void)
{
protection_map[0] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
--
2.25.1
* [PATCH V4 21/26] m68k/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (19 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 20/26] mips/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 22/26] arc/mm: " Anshuman Khandual
` (5 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel,
Thomas Bogendoerfer
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: linux-m68k@lists.linux-m68k.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/m68k/Kconfig | 1 +
arch/m68k/include/asm/mcf_pgtable.h | 54 -----------------------
arch/m68k/include/asm/motorola_pgtable.h | 22 ----------
arch/m68k/include/asm/sun3_pgtable.h | 17 --------
arch/m68k/mm/mcfmmu.c | 55 ++++++++++++++++++++++++
arch/m68k/mm/motorola.c | 20 +++++++++
arch/m68k/mm/sun3mmu.c | 20 +++++++++
7 files changed, 96 insertions(+), 93 deletions(-)
diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig
index 936cce42ae9a..49aa0cf13e96 100644
--- a/arch/m68k/Kconfig
+++ b/arch/m68k/Kconfig
@@ -7,6 +7,7 @@ config M68K
select ARCH_HAS_CURRENT_STACK_POINTER
select ARCH_HAS_DMA_PREP_COHERENT if HAS_DMA && MMU && !COLDFIRE
select ARCH_HAS_SYNC_DMA_FOR_DEVICE if HAS_DMA
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG if RMW_INSNS
select ARCH_MIGHT_HAVE_PC_PARPORT if ISA
select ARCH_NO_PREEMPT if !COLDFIRE
diff --git a/arch/m68k/include/asm/mcf_pgtable.h b/arch/m68k/include/asm/mcf_pgtable.h
index 94f38d76e278..0e9c1b28dcab 100644
--- a/arch/m68k/include/asm/mcf_pgtable.h
+++ b/arch/m68k/include/asm/mcf_pgtable.h
@@ -91,60 +91,6 @@
* for use. In general, the bit positions are xwr, and P-items are
* private, the S-items are shared.
*/
-#define __P000 PAGE_NONE
-#define __P001 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_READABLE)
-#define __P010 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_WRITABLE)
-#define __P011 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_READABLE \
- | CF_PAGE_WRITABLE)
-#define __P100 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_EXEC)
-#define __P101 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_READABLE \
- | CF_PAGE_EXEC)
-#define __P110 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_WRITABLE \
- | CF_PAGE_EXEC)
-#define __P111 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_READABLE \
- | CF_PAGE_WRITABLE \
- | CF_PAGE_EXEC)
-
-#define __S000 PAGE_NONE
-#define __S001 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_READABLE)
-#define __S010 PAGE_SHARED
-#define __S011 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_SHARED \
- | CF_PAGE_READABLE)
-#define __S100 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_EXEC)
-#define __S101 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_READABLE \
- | CF_PAGE_EXEC)
-#define __S110 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_SHARED \
- | CF_PAGE_EXEC)
-#define __S111 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_SHARED \
- | CF_PAGE_READABLE \
- | CF_PAGE_EXEC)
-
#define PTE_MASK PAGE_MASK
#define CF_PAGE_CHG_MASK (PTE_MASK | CF_PAGE_ACCESSED | CF_PAGE_DIRTY)
diff --git a/arch/m68k/include/asm/motorola_pgtable.h b/arch/m68k/include/asm/motorola_pgtable.h
index 7c9b56e2a750..63aaece0722f 100644
--- a/arch/m68k/include/asm/motorola_pgtable.h
+++ b/arch/m68k/include/asm/motorola_pgtable.h
@@ -83,28 +83,6 @@ extern unsigned long mm_cachebits;
#define PAGE_COPY_C __pgprot(_PAGE_PRESENT | _PAGE_RONLY | _PAGE_ACCESSED)
#define PAGE_READONLY_C __pgprot(_PAGE_PRESENT | _PAGE_RONLY | _PAGE_ACCESSED)
-/*
- * The m68k can't do page protection for execute, and considers that the same are read.
- * Also, write permissions imply read permissions. This is the closest we can get..
- */
-#define __P000 PAGE_NONE_C
-#define __P001 PAGE_READONLY_C
-#define __P010 PAGE_COPY_C
-#define __P011 PAGE_COPY_C
-#define __P100 PAGE_READONLY_C
-#define __P101 PAGE_READONLY_C
-#define __P110 PAGE_COPY_C
-#define __P111 PAGE_COPY_C
-
-#define __S000 PAGE_NONE_C
-#define __S001 PAGE_READONLY_C
-#define __S010 PAGE_SHARED_C
-#define __S011 PAGE_SHARED_C
-#define __S100 PAGE_READONLY_C
-#define __S101 PAGE_READONLY_C
-#define __S110 PAGE_SHARED_C
-#define __S111 PAGE_SHARED_C
-
#define pmd_pgtable(pmd) ((pgtable_t)pmd_page_vaddr(pmd))
/*
diff --git a/arch/m68k/include/asm/sun3_pgtable.h b/arch/m68k/include/asm/sun3_pgtable.h
index 5e4e753f0d24..9d919491765b 100644
--- a/arch/m68k/include/asm/sun3_pgtable.h
+++ b/arch/m68k/include/asm/sun3_pgtable.h
@@ -71,23 +71,6 @@
* protection settings, valid (implying read and execute) and writeable. These
* are as close as we can get...
*/
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_READONLY
-#define __P101 PAGE_READONLY
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED
/* Use these fake page-protections on PMDs. */
#define SUN3_PMD_VALID (0x00000001)
diff --git a/arch/m68k/mm/mcfmmu.c b/arch/m68k/mm/mcfmmu.c
index 6f1f25125294..45d1964301f4 100644
--- a/arch/m68k/mm/mcfmmu.c
+++ b/arch/m68k/mm/mcfmmu.c
@@ -234,3 +234,58 @@ void steal_context(void)
destroy_context(mm);
}
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = __pgprot(CF_PAGE_VALID |
+ CF_PAGE_ACCESSED |
+ CF_PAGE_READABLE),
+ [VM_WRITE] = __pgprot(CF_PAGE_VALID |
+ CF_PAGE_ACCESSED |
+ CF_PAGE_WRITABLE),
+ [VM_WRITE | VM_READ] = __pgprot(CF_PAGE_VALID |
+ CF_PAGE_ACCESSED |
+ CF_PAGE_READABLE |
+ CF_PAGE_WRITABLE),
+ [VM_EXEC] = __pgprot(CF_PAGE_VALID |
+ CF_PAGE_ACCESSED |
+ CF_PAGE_EXEC),
+ [VM_EXEC | VM_READ] = __pgprot(CF_PAGE_VALID |
+ CF_PAGE_ACCESSED |
+ CF_PAGE_READABLE |
+ CF_PAGE_EXEC),
+ [VM_EXEC | VM_WRITE] = __pgprot(CF_PAGE_VALID |
+ CF_PAGE_ACCESSED |
+ CF_PAGE_WRITABLE |
+ CF_PAGE_EXEC),
+ [VM_EXEC | VM_WRITE | VM_READ] = __pgprot(CF_PAGE_VALID |
+ CF_PAGE_ACCESSED |
+ CF_PAGE_READABLE |
+ CF_PAGE_WRITABLE |
+ CF_PAGE_EXEC),
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = __pgprot(CF_PAGE_VALID |
+ CF_PAGE_ACCESSED |
+ CF_PAGE_READABLE),
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = __pgprot(CF_PAGE_VALID |
+ CF_PAGE_ACCESSED |
+ CF_PAGE_READABLE |
+ CF_PAGE_SHARED),
+ [VM_SHARED | VM_EXEC] = __pgprot(CF_PAGE_VALID |
+ CF_PAGE_ACCESSED |
+ CF_PAGE_EXEC),
+ [VM_SHARED | VM_EXEC | VM_READ] = __pgprot(CF_PAGE_VALID |
+ CF_PAGE_ACCESSED |
+ CF_PAGE_READABLE |
+ CF_PAGE_EXEC),
+ [VM_SHARED | VM_EXEC | VM_WRITE] = __pgprot(CF_PAGE_VALID |
+ CF_PAGE_ACCESSED |
+ CF_PAGE_SHARED |
+ CF_PAGE_EXEC),
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __pgprot(CF_PAGE_VALID |
+ CF_PAGE_ACCESSED |
+ CF_PAGE_READABLE |
+ CF_PAGE_SHARED |
+ CF_PAGE_EXEC)
+};
+DECLARE_VM_GET_PAGE_PROT
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index df7f797c908a..31fba64bc530 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -382,6 +382,26 @@ static void __init map_node(int node)
#endif
}
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE_C,
+ [VM_READ] = PAGE_READONLY_C,
+ [VM_WRITE] = PAGE_COPY_C,
+ [VM_WRITE | VM_READ] = PAGE_COPY_C,
+ [VM_EXEC] = PAGE_READONLY_C,
+ [VM_EXEC | VM_READ] = PAGE_READONLY_C,
+ [VM_EXEC | VM_WRITE] = PAGE_COPY_C,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_C,
+ [VM_SHARED] = PAGE_NONE_C,
+ [VM_SHARED | VM_READ] = PAGE_READONLY_C,
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED_C,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED_C,
+ [VM_SHARED | VM_EXEC] = PAGE_READONLY_C,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_C,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED_C,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_C
+};
+DECLARE_VM_GET_PAGE_PROT
+
/*
* paging_init() continues the virtual memory environment setup which
* was begun by the code in arch/head.S.
diff --git a/arch/m68k/mm/sun3mmu.c b/arch/m68k/mm/sun3mmu.c
index dad494224497..ef0baea67ea2 100644
--- a/arch/m68k/mm/sun3mmu.c
+++ b/arch/m68k/mm/sun3mmu.c
@@ -95,3 +95,23 @@ void __init paging_init(void)
}
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY,
+ [VM_WRITE] = PAGE_COPY,
+ [VM_WRITE | VM_READ] = PAGE_COPY,
+ [VM_EXEC] = PAGE_READONLY,
+ [VM_EXEC | VM_READ] = PAGE_READONLY,
+ [VM_EXEC | VM_WRITE] = PAGE_COPY,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC] = PAGE_READONLY,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED
+};
+DECLARE_VM_GET_PAGE_PROT
--
2.25.1
* [PATCH V4 22/26] arc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (20 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 21/26] m68k/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 23/26] arm/mm: " Anshuman Khandual
` (4 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel, Vineet Gupta
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.
Cc: Vineet Gupta <vgupta@kernel.org>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/arc/Kconfig | 1 +
arch/arc/include/asm/pgtable-bits-arcv2.h | 18 ------------------
arch/arc/mm/mmap.c | 20 ++++++++++++++++++++
3 files changed, 21 insertions(+), 18 deletions(-)
diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 9e3653253ef2..8be56a5d8a9b 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -13,6 +13,7 @@ config ARC
select ARCH_HAS_SETUP_DMA_OPS
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_SUPPORTS_ATOMIC_RMW if ARC_HAS_LLSC
select ARCH_32BIT_OFF_T
select BUILDTIME_TABLE_SORT
diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h
index 183d23bc1e00..b23be557403e 100644
--- a/arch/arc/include/asm/pgtable-bits-arcv2.h
+++ b/arch/arc/include/asm/pgtable-bits-arcv2.h
@@ -72,24 +72,6 @@
* This is to enable COW mechanism
*/
/* xwr */
-#define __P000 PAGE_U_NONE
-#define __P001 PAGE_U_R
-#define __P010 PAGE_U_R /* Pvt-W => !W */
-#define __P011 PAGE_U_R /* Pvt-W => !W */
-#define __P100 PAGE_U_X_R /* X => R */
-#define __P101 PAGE_U_X_R
-#define __P110 PAGE_U_X_R /* Pvt-W => !W and X => R */
-#define __P111 PAGE_U_X_R /* Pvt-W => !W */
-
-#define __S000 PAGE_U_NONE
-#define __S001 PAGE_U_R
-#define __S010 PAGE_U_W_R /* W => R */
-#define __S011 PAGE_U_W_R
-#define __S100 PAGE_U_X_R /* X => R */
-#define __S101 PAGE_U_X_R
-#define __S110 PAGE_U_X_W_R /* X => R */
-#define __S111 PAGE_U_X_W_R
-
#ifndef __ASSEMBLY__
#define pte_write(pte) (pte_val(pte) & _PAGE_WRITE)
diff --git a/arch/arc/mm/mmap.c b/arch/arc/mm/mmap.c
index 722d26b94307..7dd50b66f266 100644
--- a/arch/arc/mm/mmap.c
+++ b/arch/arc/mm/mmap.c
@@ -74,3 +74,23 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
info.align_offset = pgoff << PAGE_SHIFT;
return vm_unmapped_area(&info);
}
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_U_NONE,
+ [VM_READ] = PAGE_U_R,
+ [VM_WRITE] = PAGE_U_R,
+ [VM_WRITE | VM_READ] = PAGE_U_R,
+ [VM_EXEC] = PAGE_U_X_R,
+ [VM_EXEC | VM_READ] = PAGE_U_X_R,
+ [VM_EXEC | VM_WRITE] = PAGE_U_X_R,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_U_X_R,
+ [VM_SHARED] = PAGE_U_NONE,
+ [VM_SHARED | VM_READ] = PAGE_U_R,
+ [VM_SHARED | VM_WRITE] = PAGE_U_W_R,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_U_W_R,
+ [VM_SHARED | VM_EXEC] = PAGE_U_X_R,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_U_X_R,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_U_X_W_R,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_U_X_W_R
+};
+DECLARE_VM_GET_PAGE_PROT
--
2.25.1
* [PATCH V4 23/26] arm/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (21 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 22/26] arc/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 24/26] um/mm: " Anshuman Khandual
` (3 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel, Russell King,
Arnd Bergmann
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.
Cc: Russell King <linux@armlinux.org.uk>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/arm/Kconfig | 1 +
arch/arm/include/asm/pgtable.h | 17 -----------------
arch/arm/lib/uaccess_with_memcpy.c | 2 +-
arch/arm/mm/mmu.c | 20 ++++++++++++++++++++
4 files changed, 22 insertions(+), 18 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 7630ba9cb6cc..e153b6d4fc5b 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -24,6 +24,7 @@ config ARM
select ARCH_HAS_SYNC_DMA_FOR_CPU if SWIOTLB || !MMU
select ARCH_HAS_TEARDOWN_DMA_OPS if MMU
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_CUSTOM_GPIO_H
select ARCH_HAVE_NMI_SAFE_CMPXCHG if CPU_V7 || CPU_V7M || CPU_V6K
select ARCH_HAS_GCOV_PROFILE_ALL
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index cd1f84bb40ae..78a532068fec 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -137,23 +137,6 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
* 2) If we could do execute protection, then read is implied
* 3) write implies read permissions
*/
-#define __P000 __PAGE_NONE
-#define __P001 __PAGE_READONLY
-#define __P010 __PAGE_COPY
-#define __P011 __PAGE_COPY
-#define __P100 __PAGE_READONLY_EXEC
-#define __P101 __PAGE_READONLY_EXEC
-#define __P110 __PAGE_COPY_EXEC
-#define __P111 __PAGE_COPY_EXEC
-
-#define __S000 __PAGE_NONE
-#define __S001 __PAGE_READONLY
-#define __S010 __PAGE_SHARED
-#define __S011 __PAGE_SHARED
-#define __S100 __PAGE_READONLY_EXEC
-#define __S101 __PAGE_READONLY_EXEC
-#define __S110 __PAGE_SHARED_EXEC
-#define __S111 __PAGE_SHARED_EXEC
#ifndef __ASSEMBLY__
/*
diff --git a/arch/arm/lib/uaccess_with_memcpy.c b/arch/arm/lib/uaccess_with_memcpy.c
index c30b689bec2e..14eecaaf295f 100644
--- a/arch/arm/lib/uaccess_with_memcpy.c
+++ b/arch/arm/lib/uaccess_with_memcpy.c
@@ -237,7 +237,7 @@ static int __init test_size_treshold(void)
if (!dst_page)
goto no_dst;
kernel_ptr = page_address(src_page);
- user_ptr = vmap(&dst_page, 1, VM_IOREMAP, __pgprot(__P010));
+ user_ptr = vmap(&dst_page, 1, VM_IOREMAP, __pgprot(__PAGE_COPY));
if (!user_ptr)
goto no_vmap;
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 5e2be37a198e..2722abddd725 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -405,6 +405,26 @@ void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE);
}
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = __PAGE_NONE,
+ [VM_READ] = __PAGE_READONLY,
+ [VM_WRITE] = __PAGE_COPY,
+ [VM_WRITE | VM_READ] = __PAGE_COPY,
+ [VM_EXEC] = __PAGE_READONLY_EXEC,
+ [VM_EXEC | VM_READ] = __PAGE_READONLY_EXEC,
+ [VM_EXEC | VM_WRITE] = __PAGE_COPY_EXEC,
+ [VM_EXEC | VM_WRITE | VM_READ] = __PAGE_COPY_EXEC,
+ [VM_SHARED] = __PAGE_NONE,
+ [VM_SHARED | VM_READ] = __PAGE_READONLY,
+ [VM_SHARED | VM_WRITE] = __PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = __PAGE_SHARED,
+ [VM_SHARED | VM_EXEC] = __PAGE_READONLY_EXEC,
+ [VM_SHARED | VM_EXEC | VM_READ] = __PAGE_READONLY_EXEC,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = __PAGE_SHARED_EXEC,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __PAGE_SHARED_EXEC
+};
+DECLARE_VM_GET_PAGE_PROT
+
/*
* Adjust the PMD section entries according to the CPU in use.
*/
--
2.25.1
* [PATCH V4 24/26] um/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (22 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 23/26] arm/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 25/26] sh/mm: " Anshuman Khandual
` (2 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel, Jeff Dike
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private, static protection_map[] array. All the __SXXX
and __PXXX macros, which are no longer needed, can subsequently be dropped.
Cc: Jeff Dike <jdike@addtoit.com>
Cc: linux-um@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/um/Kconfig | 1 +
arch/um/include/asm/pgtable.h | 17 -----------------
arch/um/kernel/mem.c | 20 ++++++++++++++++++++
arch/x86/um/mem_32.c | 2 +-
4 files changed, 22 insertions(+), 18 deletions(-)
diff --git a/arch/um/Kconfig b/arch/um/Kconfig
index 4ec22e156a2e..7fb43654e5b5 100644
--- a/arch/um/Kconfig
+++ b/arch/um/Kconfig
@@ -10,6 +10,7 @@ config UML
select ARCH_HAS_KCOV
select ARCH_HAS_STRNCPY_FROM_USER
select ARCH_HAS_STRNLEN_USER
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_PREEMPT
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h
index 167e236d9bb8..66bc3f99d9be 100644
--- a/arch/um/include/asm/pgtable.h
+++ b/arch/um/include/asm/pgtable.h
@@ -68,23 +68,6 @@ extern unsigned long end_iomem;
* Also, write permissions imply read permissions. This is the closest we can
* get..
*/
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_READONLY
-#define __P101 PAGE_READONLY
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED
/*
* ZERO_PAGE is a global shared page that is always zero: used
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index 15295c3237a0..26ef8a77be59 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -197,3 +197,23 @@ void *uml_kmalloc(int size, int flags)
{
return kmalloc(size, flags);
}
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY,
+ [VM_WRITE] = PAGE_COPY,
+ [VM_WRITE | VM_READ] = PAGE_COPY,
+ [VM_EXEC] = PAGE_READONLY,
+ [VM_EXEC | VM_READ] = PAGE_READONLY,
+ [VM_EXEC | VM_WRITE] = PAGE_COPY,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC] = PAGE_READONLY,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED
+};
+DECLARE_VM_GET_PAGE_PROT
diff --git a/arch/x86/um/mem_32.c b/arch/x86/um/mem_32.c
index 19c5dbd46770..cafd01f730da 100644
--- a/arch/x86/um/mem_32.c
+++ b/arch/x86/um/mem_32.c
@@ -17,7 +17,7 @@ static int __init gate_vma_init(void)
gate_vma.vm_start = FIXADDR_USER_START;
gate_vma.vm_end = FIXADDR_USER_END;
gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;
- gate_vma.vm_page_prot = __P101;
+ gate_vma.vm_page_prot = PAGE_READONLY;
return 0;
}
--
2.25.1
* [PATCH V4 25/26] sh/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (23 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 24/26] um/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 4:55 ` [PATCH V4 26/26] mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
2022-06-24 5:12 ` [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Christoph Hellwig
26 siblings, 0 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel,
Yoshinori Sato, Rich Felker
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private, static protection_map[] array. All the __SXXX
and __PXXX macros, which are no longer needed, can subsequently be dropped.
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Rich Felker <dalias@libc.org>
Cc: linux-sh@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/sh/Kconfig | 1 +
arch/sh/include/asm/pgtable.h | 17 -----------------
arch/sh/mm/mmap.c | 20 ++++++++++++++++++++
3 files changed, 21 insertions(+), 17 deletions(-)
diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index 5f220e903e5a..91f3ea325388 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -12,6 +12,7 @@ config SUPERH
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HIBERNATION_POSSIBLE if MMU
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_WANT_IPC_PARSE_VERSION
diff --git a/arch/sh/include/asm/pgtable.h b/arch/sh/include/asm/pgtable.h
index d7ddb1ec86a0..6fb9ec54cf9b 100644
--- a/arch/sh/include/asm/pgtable.h
+++ b/arch/sh/include/asm/pgtable.h
@@ -89,23 +89,6 @@ static inline unsigned long phys_addr_mask(void)
* completely separate permission bits for user and kernel space.
*/
/*xwr*/
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_EXECREAD
-#define __P101 PAGE_EXECREAD
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_WRITEONLY
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_EXECREAD
-#define __S101 PAGE_EXECREAD
-#define __S110 PAGE_RWX
-#define __S111 PAGE_RWX
typedef pte_t *pte_addr_t;
diff --git a/arch/sh/mm/mmap.c b/arch/sh/mm/mmap.c
index 6a1a1297baae..0a61ce6950bb 100644
--- a/arch/sh/mm/mmap.c
+++ b/arch/sh/mm/mmap.c
@@ -162,3 +162,23 @@ int valid_mmap_phys_addr_range(unsigned long pfn, size_t size)
{
return 1;
}
+
+static pgprot_t protection_map[16] __ro_after_init = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY,
+ [VM_WRITE] = PAGE_COPY,
+ [VM_WRITE | VM_READ] = PAGE_COPY,
+ [VM_EXEC] = PAGE_EXECREAD,
+ [VM_EXEC | VM_READ] = PAGE_EXECREAD,
+ [VM_EXEC | VM_WRITE] = PAGE_COPY,
+ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY,
+ [VM_SHARED] = PAGE_NONE,
+ [VM_SHARED | VM_READ] = PAGE_READONLY,
+ [VM_SHARED | VM_WRITE] = PAGE_WRITEONLY,
+ [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
+ [VM_SHARED | VM_EXEC] = PAGE_EXECREAD,
+ [VM_SHARED | VM_EXEC | VM_READ] = PAGE_EXECREAD,
+ [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_RWX,
+ [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_RWX
+};
+DECLARE_VM_GET_PAGE_PROT
--
2.25.1
* [PATCH V4 26/26] mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (24 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 25/26] sh/mm: " Anshuman Khandual
@ 2022-06-24 4:55 ` Anshuman Khandual
2022-06-24 5:11 ` Christoph Hellwig
` (2 more replies)
2022-06-24 5:12 ` [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Christoph Hellwig
26 siblings, 3 replies; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 4:55 UTC (permalink / raw)
To: linux-mm
Cc: hch, christophe.leroy, Anshuman Khandual, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel
Now all the platforms enable ARCH_HAS_VM_GET_PAGE_PROT. They each define and
export their own vm_get_page_prot(), whether custom or via the standard
DECLARE_VM_GET_PAGE_PROT. Hence there is no need for a default generic
fallback for vm_get_page_prot(). Just drop this fallback along with the
ARCH_HAS_VM_GET_PAGE_PROT mechanism itself.
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/alpha/Kconfig | 1 -
arch/arc/Kconfig | 1 -
arch/arm/Kconfig | 1 -
arch/arm64/Kconfig | 1 -
arch/csky/Kconfig | 1 -
arch/hexagon/Kconfig | 1 -
arch/ia64/Kconfig | 1 -
arch/loongarch/Kconfig | 1 -
arch/m68k/Kconfig | 1 -
arch/microblaze/Kconfig | 1 -
arch/mips/Kconfig | 1 -
arch/nios2/Kconfig | 1 -
arch/openrisc/Kconfig | 1 -
arch/parisc/Kconfig | 1 -
arch/powerpc/Kconfig | 1 -
arch/riscv/Kconfig | 1 -
arch/s390/Kconfig | 1 -
arch/sh/Kconfig | 1 -
arch/sparc/Kconfig | 1 -
arch/um/Kconfig | 1 -
arch/x86/Kconfig | 1 -
arch/xtensa/Kconfig | 1 -
include/linux/mm.h | 3 ---
mm/Kconfig | 3 ---
mm/mmap.c | 22 ----------------------
25 files changed, 50 deletions(-)
diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
index db1c8b329461..7d0d26b5b3f5 100644
--- a/arch/alpha/Kconfig
+++ b/arch/alpha/Kconfig
@@ -2,7 +2,6 @@
config ALPHA
bool
default y
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_32BIT_USTAT_F_TINODE
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 8be56a5d8a9b..9e3653253ef2 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -13,7 +13,6 @@ config ARC
select ARCH_HAS_SETUP_DMA_OPS
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_SUPPORTS_ATOMIC_RMW if ARC_HAS_LLSC
select ARCH_32BIT_OFF_T
select BUILDTIME_TABLE_SORT
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index e153b6d4fc5b..7630ba9cb6cc 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -24,7 +24,6 @@ config ARM
select ARCH_HAS_SYNC_DMA_FOR_CPU if SWIOTLB || !MMU
select ARCH_HAS_TEARDOWN_DMA_OPS if MMU
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_CUSTOM_GPIO_H
select ARCH_HAVE_NMI_SAFE_CMPXCHG if CPU_V7 || CPU_V7M || CPU_V6K
select ARCH_HAS_GCOV_PROFILE_ALL
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1652a9800ebe..7030bf3f8d6f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -45,7 +45,6 @@ config ARM64
select ARCH_HAS_SYSCALL_WRAPPER
select ARCH_HAS_TEARDOWN_DMA_OPS if IOMMU_SUPPORT
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_ZONE_DMA_SET if EXPERT
select ARCH_HAVE_ELF_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
index 588b8a9c68ed..21d72b078eef 100644
--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -6,7 +6,6 @@ config CSKY
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && $(cc-option,-mbacktrace)
diff --git a/arch/hexagon/Kconfig b/arch/hexagon/Kconfig
index bc4ceecd0588..54eadf265178 100644
--- a/arch/hexagon/Kconfig
+++ b/arch/hexagon/Kconfig
@@ -6,7 +6,6 @@ config HEXAGON
def_bool y
select ARCH_32BIT_OFF_T
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_PREEMPT
select DMA_GLOBAL_POOL
# Other pending projects/to-do items.
diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 0510a5737711..cb93769a9f2a 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -12,7 +12,6 @@ config IA64
select ARCH_HAS_DMA_MARK_CLEAN
select ARCH_HAS_STRNCPY_FROM_USER
select ARCH_HAS_STRNLEN_USER
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
select ACPI
diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index fd07b8e760ee..1920d52653b4 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -9,7 +9,6 @@ config LOONGARCH
select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
select ARCH_HAS_PHYS_TO_DMA
select ARCH_HAS_PTE_SPECIAL
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_INLINE_READ_LOCK if !PREEMPTION
select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION
diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig
index 49aa0cf13e96..936cce42ae9a 100644
--- a/arch/m68k/Kconfig
+++ b/arch/m68k/Kconfig
@@ -7,7 +7,6 @@ config M68K
select ARCH_HAS_CURRENT_STACK_POINTER
select ARCH_HAS_DMA_PREP_COHERENT if HAS_DMA && MMU && !COLDFIRE
select ARCH_HAS_SYNC_DMA_FOR_DEVICE if HAS_DMA
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG if RMW_INSNS
select ARCH_MIGHT_HAVE_PC_PARPORT if ISA
select ARCH_NO_PREEMPT if !COLDFIRE
diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig
index 15f91ba8a0c4..8cf429ad1c84 100644
--- a/arch/microblaze/Kconfig
+++ b/arch/microblaze/Kconfig
@@ -7,7 +7,6 @@ config MICROBLAZE
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_WANT_IPC_PARSE_VERSION
select BUILDTIME_TABLE_SORT
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index d0b7eb11ec81..db09d45d59ec 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -14,7 +14,6 @@ config MIPS
select ARCH_HAS_STRNLEN_USER
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UBSAN_SANITIZE_ALL
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_KEEP_MEMBLOCK
select ARCH_SUPPORTS_UPROBES
diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig
index e0459dffd218..4167f1eb4cd8 100644
--- a/arch/nios2/Kconfig
+++ b/arch/nios2/Kconfig
@@ -6,7 +6,6 @@ config NIOS2
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select ARCH_HAS_DMA_SET_UNCACHED
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_SWAP
select COMMON_CLK
select TIMER_OF
diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
index fe0dfb50eb86..e814df4c483c 100644
--- a/arch/openrisc/Kconfig
+++ b/arch/openrisc/Kconfig
@@ -10,7 +10,6 @@ config OPENRISC
select ARCH_HAS_DMA_SET_UNCACHED
select ARCH_HAS_DMA_CLEAR_UNCACHED
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
- select ARCH_HAS_VM_GET_PAGE_PROT
select COMMON_CLK
select OF
select OF_EARLY_FLATTREE
diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
index 90eabc846f81..5f2448dc5a2b 100644
--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -11,7 +11,6 @@ config PARISC
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_HAS_UBSAN_SANITIZE_ALL
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_PTE_SPECIAL
select ARCH_NO_SG_CHAIN
select ARCH_SUPPORTS_HUGETLBFS if PA20
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1035d172c7dd..250b8658b2d4 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -140,7 +140,6 @@ config PPC
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UACCESS_FLUSHCACHE
select ARCH_HAS_UBSAN_SANITIZE_ALL
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_KEEP_MEMBLOCK
select ARCH_MIGHT_HAVE_PC_PARPORT
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 583389d4e43a..32ffef9f6e5b 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -32,7 +32,6 @@ config RISCV
select ARCH_HAS_STRICT_MODULE_RWX if MMU && !XIP_KERNEL
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UBSAN_SANITIZE_ALL
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
select ARCH_STACKWALK
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index c4481377ca83..91c0b80a8bf0 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -81,7 +81,6 @@ config S390
select ARCH_HAS_SYSCALL_WRAPPER
select ARCH_HAS_UBSAN_SANITIZE_ALL
select ARCH_HAS_VDSO_DATA
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_INLINE_READ_LOCK
select ARCH_INLINE_READ_LOCK_BH
diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index 91f3ea325388..5f220e903e5a 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -12,7 +12,6 @@ config SUPERH
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HIBERNATION_POSSIBLE if MMU
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_WANT_IPC_PARSE_VERSION
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 09f868613a4d..9c1cce74953a 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -13,7 +13,6 @@ config 64BIT
config SPARC
bool
default y
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_MIGHT_HAVE_PC_PARPORT if SPARC64 && PCI
select ARCH_MIGHT_HAVE_PC_SERIO
select DMA_OPS
diff --git a/arch/um/Kconfig b/arch/um/Kconfig
index 7fb43654e5b5..4ec22e156a2e 100644
--- a/arch/um/Kconfig
+++ b/arch/um/Kconfig
@@ -10,7 +10,6 @@ config UML
select ARCH_HAS_KCOV
select ARCH_HAS_STRNCPY_FROM_USER
select ARCH_HAS_STRNLEN_USER
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_PREEMPT
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index be0b95e51df6..841e4843d0c4 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -94,7 +94,6 @@ config X86
select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
select ARCH_HAS_SYSCALL_WRAPPER
select ARCH_HAS_UBSAN_SANITIZE_ALL
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_DEBUG_WX
select ARCH_HAS_ZONE_DMA_SET if EXPERT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index 4c0d83520ff1..0b0f0172cced 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -11,7 +11,6 @@ config XTENSA
select ARCH_HAS_DMA_SET_UNCACHED if MMU
select ARCH_HAS_STRNCPY_FROM_USER if !KASAN
select ARCH_HAS_STRNLEN_USER
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_USE_MEMTEST
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_USE_QUEUED_SPINLOCKS
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 70d900f6df43..d776a86be1d4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -424,9 +424,6 @@ extern unsigned int kobjsize(const void *objp);
* mapping from the currently active vm_flags protection bits (the
* low four bits) to a page protection mask..
*/
-#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
-extern pgprot_t protection_map[16];
-#endif
#define DECLARE_VM_GET_PAGE_PROT \
pgprot_t vm_get_page_prot(unsigned long vm_flags) \
diff --git a/mm/Kconfig b/mm/Kconfig
index 169e64192e48..f47d257a053b 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -951,9 +951,6 @@ config ARCH_HAS_CURRENT_STACK_POINTER
register alias named "current_stack_pointer", this config can be
selected.
-config ARCH_HAS_VM_GET_PAGE_PROT
- bool
-
config ARCH_HAS_PTE_DEVMAP
bool
diff --git a/mm/mmap.c b/mm/mmap.c
index 43db3bd49071..3557fe83d124 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -101,28 +101,6 @@ static void unmap_region(struct mm_struct *mm,
* w: (no) no
* x: (yes) yes
*/
-#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
-pgprot_t protection_map[16] __ro_after_init = {
- [VM_NONE] = __P000,
- [VM_READ] = __P001,
- [VM_WRITE] = __P010,
- [VM_WRITE | VM_READ] = __P011,
- [VM_EXEC] = __P100,
- [VM_EXEC | VM_READ] = __P101,
- [VM_EXEC | VM_WRITE] = __P110,
- [VM_EXEC | VM_WRITE | VM_READ] = __P111,
- [VM_SHARED] = __S000,
- [VM_SHARED | VM_READ] = __S001,
- [VM_SHARED | VM_WRITE] = __S010,
- [VM_SHARED | VM_WRITE | VM_READ] = __S011,
- [VM_SHARED | VM_EXEC] = __S100,
- [VM_SHARED | VM_EXEC | VM_READ] = __S101,
- [VM_SHARED | VM_EXEC | VM_WRITE] = __S110,
- [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __S111
-};
-DECLARE_VM_GET_PAGE_PROT
-#endif /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
-
static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
{
return pgprot_modify(oldprot, vm_get_page_prot(vm_flags));
--
2.25.1
* Re: [PATCH V4 26/26] mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 ` [PATCH V4 26/26] mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
@ 2022-06-24 5:11 ` Christoph Hellwig
2022-06-24 5:29 ` Christophe Leroy
2022-06-27 5:48 ` Christophe Leroy
2 siblings, 0 replies; 52+ messages in thread
From: Christoph Hellwig @ 2022-06-24 5:11 UTC (permalink / raw)
To: Anshuman Khandual
Cc: linux-mm, hch, christophe.leroy, Andrew Morton, linuxppc-dev,
sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH V4 26/26] mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 ` [PATCH V4 26/26] mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
2022-06-24 5:11 ` Christoph Hellwig
@ 2022-06-24 5:29 ` Christophe Leroy
2022-06-27 5:48 ` Christophe Leroy
2 siblings, 0 replies; 52+ messages in thread
From: Christophe Leroy @ 2022-06-24 5:29 UTC (permalink / raw)
To: Anshuman Khandual, linux-mm@kvack.org
Cc: hch@infradead.org, Andrew Morton, linuxppc-dev@lists.ozlabs.org,
sparclinux@vger.kernel.org, x86@kernel.org,
openrisc@lists.librecores.org, linux-xtensa@linux-xtensa.org,
linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-alpha@vger.kernel.org,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org,
linux-m68k@lists.linux-m68k.org,
linux-snps-arc@lists.infradead.org,
linux-arm-kernel@lists.infradead.org,
linux-um@lists.infradead.org, linux-sh@vger.kernel.org,
linux-kernel@vger.kernel.org
On 24/06/2022 at 06:43, Anshuman Khandual wrote:
> Now all the platforms enable ARCH_HAS_GET_PAGE_PROT. They define and export
> own vm_get_page_prot() whether custom or standard DECLARE_VM_GET_PAGE_PROT.
> Hence there is no need for default generic fallback for vm_get_page_prot().
> Just drop this fallback and also ARCH_HAS_GET_PAGE_PROT mechanism.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
> [...]
>   25 files changed, 50 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 43db3bd49071..3557fe83d124 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -101,28 +101,6 @@ static void unmap_region(struct mm_struct *mm,
>    *								w: (no) no
>    *								x: (yes) yes
>    */

The above comment is not orphaned. I think it should go in linux/mm.h

> -#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
> -pgprot_t protection_map[16] __ro_after_init = {
> [...]
> -};
> -DECLARE_VM_GET_PAGE_PROT
> -#endif	/* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
> -
>   static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
>   {
>   	return pgprot_modify(oldprot, vm_get_page_prot(vm_flags));
* Re: [PATCH V4 26/26] mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT
2022-06-24 4:55 ` [PATCH V4 26/26] mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
2022-06-24 5:11 ` Christoph Hellwig
2022-06-24 5:29 ` Christophe Leroy
@ 2022-06-27 5:48 ` Christophe Leroy
2 siblings, 0 replies; 52+ messages in thread
From: Christophe Leroy @ 2022-06-27 5:48 UTC (permalink / raw)
To: Anshuman Khandual, linux-mm@kvack.org
Cc: hch@infradead.org, Andrew Morton, linuxppc-dev@lists.ozlabs.org,
sparclinux@vger.kernel.org, x86@kernel.org,
openrisc@lists.librecores.org, linux-xtensa@linux-xtensa.org,
linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-alpha@vger.kernel.org,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org,
linux-m68k@lists.linux-m68k.org,
linux-snps-arc@lists.infradead.org,
linux-arm-kernel@lists.infradead.org,
linux-um@lists.infradead.org, linux-sh@vger.kernel.org,
linux-kernel@vger.kernel.org
On 24/06/2022 at 06:43, Anshuman Khandual wrote:
> Now all the platforms enable ARCH_HAS_VM_GET_PAGE_PROT. They define and
> export their own vm_get_page_prot(), whether custom or via the standard
> DECLARE_VM_GET_PAGE_PROT. Hence there is no need for a default generic
> fallback for vm_get_page_prot(). Just drop this fallback and also the
> ARCH_HAS_VM_GET_PAGE_PROT mechanism.
> 
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>

Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>

> ---
>   arch/alpha/Kconfig      |  1 -
>   arch/arc/Kconfig        |  1 -
>   arch/arm/Kconfig        |  1 -
>   arch/arm64/Kconfig      |  1 -
>   arch/csky/Kconfig       |  1 -
>   arch/hexagon/Kconfig    |  1 -
>   arch/ia64/Kconfig       |  1 -
>   arch/loongarch/Kconfig  |  1 -
>   arch/m68k/Kconfig       |  1 -
>   arch/microblaze/Kconfig |  1 -
>   arch/mips/Kconfig       |  1 -
>   arch/nios2/Kconfig      |  1 -
>   arch/openrisc/Kconfig   |  1 -
>   arch/parisc/Kconfig     |  1 -
>   arch/powerpc/Kconfig    |  1 -
>   arch/riscv/Kconfig      |  1 -
>   arch/s390/Kconfig       |  1 -
>   arch/sh/Kconfig         |  1 -
>   arch/sparc/Kconfig      |  1 -
>   arch/um/Kconfig         |  1 -
>   arch/x86/Kconfig        |  1 -
>   arch/xtensa/Kconfig     |  1 -
>   include/linux/mm.h      |  3 ---
>   mm/Kconfig              |  3 ---
>   mm/mmap.c               | 22 ----------------------
>   25 files changed, 50 deletions(-)
> 
> diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
> index db1c8b329461..7d0d26b5b3f5 100644
> --- a/arch/alpha/Kconfig
> +++ b/arch/alpha/Kconfig
> @@ -2,7 +2,6 @@
>   config ALPHA
>   	bool
>   	default y
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_32BIT_USTAT_F_TINODE
>   	select ARCH_MIGHT_HAVE_PC_PARPORT
>   	select ARCH_MIGHT_HAVE_PC_SERIO
> diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
> index 8be56a5d8a9b..9e3653253ef2 100644
> --- a/arch/arc/Kconfig
> +++ b/arch/arc/Kconfig
> @@ -13,7 +13,6 @@ config ARC
>   	select ARCH_HAS_SETUP_DMA_OPS
>   	select ARCH_HAS_SYNC_DMA_FOR_CPU
>   	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_SUPPORTS_ATOMIC_RMW if ARC_HAS_LLSC
>   	select ARCH_32BIT_OFF_T
>   	select BUILDTIME_TABLE_SORT
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index e153b6d4fc5b..7630ba9cb6cc 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -24,7 +24,6 @@ config ARM
>   	select ARCH_HAS_SYNC_DMA_FOR_CPU if SWIOTLB || !MMU
>   	select ARCH_HAS_TEARDOWN_DMA_OPS if MMU
>   	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_HAVE_CUSTOM_GPIO_H
>   	select ARCH_HAVE_NMI_SAFE_CMPXCHG if CPU_V7 || CPU_V7M || CPU_V6K
>   	select ARCH_HAS_GCOV_PROFILE_ALL
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 1652a9800ebe..7030bf3f8d6f 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -45,7 +45,6 @@ config ARM64
>   	select ARCH_HAS_SYSCALL_WRAPPER
>   	select ARCH_HAS_TEARDOWN_DMA_OPS if IOMMU_SUPPORT
>   	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_HAS_ZONE_DMA_SET if EXPERT
>   	select ARCH_HAVE_ELF_PROT
>   	select ARCH_HAVE_NMI_SAFE_CMPXCHG
> diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
> index 588b8a9c68ed..21d72b078eef 100644
> --- a/arch/csky/Kconfig
> +++ b/arch/csky/Kconfig
> @@ -6,7 +6,6 @@ config CSKY
>   	select ARCH_HAS_GCOV_PROFILE_ALL
>   	select ARCH_HAS_SYNC_DMA_FOR_CPU
>   	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_USE_BUILTIN_BSWAP
>   	select ARCH_USE_QUEUED_RWLOCKS
>   	select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && $(cc-option,-mbacktrace)
> diff --git a/arch/hexagon/Kconfig b/arch/hexagon/Kconfig
> index bc4ceecd0588..54eadf265178 100644
> --- a/arch/hexagon/Kconfig
> +++ b/arch/hexagon/Kconfig
> @@ -6,7 +6,6 @@ config HEXAGON
>   	def_bool y
>   	select ARCH_32BIT_OFF_T
>   	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_NO_PREEMPT
>   	select DMA_GLOBAL_POOL
>   	# Other pending projects/to-do items.
> diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
> index 0510a5737711..cb93769a9f2a 100644
> --- a/arch/ia64/Kconfig
> +++ b/arch/ia64/Kconfig
> @@ -12,7 +12,6 @@ config IA64
>   	select ARCH_HAS_DMA_MARK_CLEAN
>   	select ARCH_HAS_STRNCPY_FROM_USER
>   	select ARCH_HAS_STRNLEN_USER
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_MIGHT_HAVE_PC_PARPORT
>   	select ARCH_MIGHT_HAVE_PC_SERIO
>   	select ACPI
> diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
> index fd07b8e760ee..1920d52653b4 100644
> --- a/arch/loongarch/Kconfig
> +++ b/arch/loongarch/Kconfig
> @@ -9,7 +9,6 @@ config LOONGARCH
>   	select ARCH_HAS_ACPI_TABLE_UPGRADE	if ACPI
>   	select ARCH_HAS_PHYS_TO_DMA
>   	select ARCH_HAS_PTE_SPECIAL
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
>   	select ARCH_INLINE_READ_LOCK if !PREEMPTION
>   	select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION
> diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig
> index 49aa0cf13e96..936cce42ae9a 100644
> --- a/arch/m68k/Kconfig
> +++ b/arch/m68k/Kconfig
> @@ -7,7 +7,6 @@ config M68K
>   	select ARCH_HAS_CURRENT_STACK_POINTER
>   	select ARCH_HAS_DMA_PREP_COHERENT if HAS_DMA && MMU && !COLDFIRE
>   	select ARCH_HAS_SYNC_DMA_FOR_DEVICE if HAS_DMA
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_HAVE_NMI_SAFE_CMPXCHG if RMW_INSNS
>   	select ARCH_MIGHT_HAVE_PC_PARPORT if ISA
>   	select ARCH_NO_PREEMPT if !COLDFIRE
> diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig
> index 15f91ba8a0c4..8cf429ad1c84 100644
> --- a/arch/microblaze/Kconfig
> +++ b/arch/microblaze/Kconfig
> @@ -7,7 +7,6 @@ config MICROBLAZE
>   	select ARCH_HAS_GCOV_PROFILE_ALL
>   	select ARCH_HAS_SYNC_DMA_FOR_CPU
>   	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_MIGHT_HAVE_PC_PARPORT
>   	select ARCH_WANT_IPC_PARSE_VERSION
>   	select BUILDTIME_TABLE_SORT
> diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
> index d0b7eb11ec81..db09d45d59ec 100644
> --- a/arch/mips/Kconfig
> +++ b/arch/mips/Kconfig
> @@ -14,7 +14,6 @@ config MIPS
>   	select ARCH_HAS_STRNLEN_USER
>   	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
>   	select ARCH_HAS_UBSAN_SANITIZE_ALL
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_HAS_GCOV_PROFILE_ALL
>   	select ARCH_KEEP_MEMBLOCK
>   	select ARCH_SUPPORTS_UPROBES
> diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig
> index e0459dffd218..4167f1eb4cd8 100644
> --- a/arch/nios2/Kconfig
> +++ b/arch/nios2/Kconfig
> @@ -6,7 +6,6 @@ config NIOS2
>   	select ARCH_HAS_SYNC_DMA_FOR_CPU
>   	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
>   	select ARCH_HAS_DMA_SET_UNCACHED
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_NO_SWAP
>   	select COMMON_CLK
>   	select TIMER_OF
> diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
> index fe0dfb50eb86..e814df4c483c 100644
> --- a/arch/openrisc/Kconfig
> +++ b/arch/openrisc/Kconfig
> @@ -10,7 +10,6 @@ config OPENRISC
>   	select ARCH_HAS_DMA_SET_UNCACHED
>   	select ARCH_HAS_DMA_CLEAR_UNCACHED
>   	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select COMMON_CLK
>   	select OF
>   	select OF_EARLY_FLATTREE
> diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
> index 90eabc846f81..5f2448dc5a2b 100644
> --- a/arch/parisc/Kconfig
> +++ b/arch/parisc/Kconfig
> @@ -11,7 +11,6 @@ config PARISC
>   	select ARCH_HAS_ELF_RANDOMIZE
>   	select ARCH_HAS_STRICT_KERNEL_RWX
>   	select ARCH_HAS_UBSAN_SANITIZE_ALL
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_HAS_PTE_SPECIAL
>   	select ARCH_NO_SG_CHAIN
>   	select ARCH_SUPPORTS_HUGETLBFS if PA20
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 1035d172c7dd..250b8658b2d4 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -140,7 +140,6 @@ config PPC
>   	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
>   	select ARCH_HAS_UACCESS_FLUSHCACHE
>   	select ARCH_HAS_UBSAN_SANITIZE_ALL
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_HAVE_NMI_SAFE_CMPXCHG
>   	select ARCH_KEEP_MEMBLOCK
>   	select ARCH_MIGHT_HAVE_PC_PARPORT
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 583389d4e43a..32ffef9f6e5b 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -32,7 +32,6 @@ config RISCV
>   	select ARCH_HAS_STRICT_MODULE_RWX if MMU && !XIP_KERNEL
>   	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
>   	select ARCH_HAS_UBSAN_SANITIZE_ALL
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
>   	select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
>   	select ARCH_STACKWALK
> diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
> index c4481377ca83..91c0b80a8bf0 100644
> --- a/arch/s390/Kconfig
> +++ b/arch/s390/Kconfig
> @@ -81,7 +81,6 @@ config S390
>   	select ARCH_HAS_SYSCALL_WRAPPER
>   	select ARCH_HAS_UBSAN_SANITIZE_ALL
>   	select ARCH_HAS_VDSO_DATA
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_HAVE_NMI_SAFE_CMPXCHG
>   	select ARCH_INLINE_READ_LOCK
>   	select ARCH_INLINE_READ_LOCK_BH
> diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
> index 91f3ea325388..5f220e903e5a 100644
> --- a/arch/sh/Kconfig
> +++ b/arch/sh/Kconfig
> @@ -12,7 +12,6 @@ config SUPERH
>   	select ARCH_HAS_GCOV_PROFILE_ALL
>   	select ARCH_HAS_PTE_SPECIAL
>   	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_HIBERNATION_POSSIBLE if MMU
>   	select ARCH_MIGHT_HAVE_PC_PARPORT
>   	select ARCH_WANT_IPC_PARSE_VERSION
> diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
> index 09f868613a4d..9c1cce74953a 100644
> --- a/arch/sparc/Kconfig
> +++ b/arch/sparc/Kconfig
> @@ -13,7 +13,6 @@ config 64BIT
>   config SPARC
>   	bool
>   	default y
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_MIGHT_HAVE_PC_PARPORT if SPARC64 && PCI
>   	select ARCH_MIGHT_HAVE_PC_SERIO
>   	select DMA_OPS
> diff --git a/arch/um/Kconfig b/arch/um/Kconfig
> index 7fb43654e5b5..4ec22e156a2e 100644
> --- a/arch/um/Kconfig
> +++ b/arch/um/Kconfig
> @@ -10,7 +10,6 @@ config UML
>   	select ARCH_HAS_KCOV
>   	select ARCH_HAS_STRNCPY_FROM_USER
>   	select ARCH_HAS_STRNLEN_USER
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_NO_PREEMPT
>   	select HAVE_ARCH_AUDITSYSCALL
>   	select HAVE_ARCH_SECCOMP_FILTER
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index be0b95e51df6..841e4843d0c4 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -94,7 +94,6 @@ config X86
>   	select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
>   	select ARCH_HAS_SYSCALL_WRAPPER
>   	select ARCH_HAS_UBSAN_SANITIZE_ALL
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_HAS_DEBUG_WX
>   	select ARCH_HAS_ZONE_DMA_SET if EXPERT
>   	select ARCH_HAVE_NMI_SAFE_CMPXCHG
> diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
> index 4c0d83520ff1..0b0f0172cced 100644
> --- a/arch/xtensa/Kconfig
> +++ b/arch/xtensa/Kconfig
> @@ -11,7 +11,6 @@ config XTENSA
>   	select ARCH_HAS_DMA_SET_UNCACHED if MMU
>   	select ARCH_HAS_STRNCPY_FROM_USER if !KASAN
>   	select ARCH_HAS_STRNLEN_USER
> -	select ARCH_HAS_VM_GET_PAGE_PROT
>   	select ARCH_USE_MEMTEST
>   	select ARCH_USE_QUEUED_RWLOCKS
>   	select ARCH_USE_QUEUED_SPINLOCKS
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 70d900f6df43..d776a86be1d4 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -424,9 +424,6 @@ extern unsigned int kobjsize(const void *objp);
>    * mapping from the currently active vm_flags protection bits (the
>    * low four bits) to a page protection mask..
>    */
> -#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
> -extern pgprot_t protection_map[16];
> -#endif
>   
>   #define DECLARE_VM_GET_PAGE_PROT					\
>   pgprot_t vm_get_page_prot(unsigned long vm_flags)			\
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 169e64192e48..f47d257a053b 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -951,9 +951,6 @@ config ARCH_HAS_CURRENT_STACK_POINTER
>   	  register alias named "current_stack_pointer", this config can be
>   	  selected.
>   
> -config ARCH_HAS_VM_GET_PAGE_PROT
> -	bool
> -
>   config ARCH_HAS_PTE_DEVMAP
>   	bool
>   
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 43db3bd49071..3557fe83d124 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -101,28 +101,6 @@ static void unmap_region(struct mm_struct *mm,
>    *								w: (no) no
>    *								x: (yes) yes
>    */
> -#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
> -pgprot_t protection_map[16] __ro_after_init = {
> -	[VM_NONE]					= __P000,
> -	[VM_READ]					= __P001,
> -	[VM_WRITE]					= __P010,
> -	[VM_WRITE | VM_READ]				= __P011,
> -	[VM_EXEC]					= __P100,
> -	[VM_EXEC | VM_READ]				= __P101,
> -	[VM_EXEC | VM_WRITE]				= __P110,
> -	[VM_EXEC | VM_WRITE | VM_READ]			= __P111,
> -	[VM_SHARED]					= __S000,
> -	[VM_SHARED | VM_READ]				= __S001,
> -	[VM_SHARED | VM_WRITE]				= __S010,
> -	[VM_SHARED | VM_WRITE | VM_READ]		= __S011,
> -	[VM_SHARED | VM_EXEC]				= __S100,
> -	[VM_SHARED | VM_EXEC | VM_READ]			= __S101,
> -	[VM_SHARED | VM_EXEC | VM_WRITE]		= __S110,
> -	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= __S111
> -};
> -DECLARE_VM_GET_PAGE_PROT
> -#endif	/* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
> -
>   static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
>   {
>   	return pgprot_modify(oldprot, vm_get_page_prot(vm_flags));
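[Editor's note] The generic fallback removed above is just a table lookup: the low four vm_flags bits index a 16-entry protection table, which is what DECLARE_VM_GET_PAGE_PROT boils down to per arch. A userspace model of that lookup, with placeholder numbers standing in for the arch-specific __PXXX/__SXXX pgprot encodings (the flag values below mirror the kernel's, but the table contents are illustrative only):

```c
/* Userspace model (not kernel code) of the protection_map[] lookup. */
#define VM_NONE    0x0UL
#define VM_READ    0x1UL
#define VM_WRITE   0x2UL
#define VM_EXEC    0x4UL
#define VM_SHARED  0x8UL

/* Placeholder values standing in for the arch __PXXX/__SXXX encodings. */
static const unsigned long protection_map[16] = {
	[VM_NONE]                                  = 0x000, /* __P000 */
	[VM_READ]                                  = 0x001, /* __P001 */
	[VM_WRITE]                                 = 0x010, /* __P010 */
	[VM_WRITE | VM_READ]                       = 0x011, /* __P011 */
	[VM_EXEC]                                  = 0x100, /* __P100 */
	[VM_EXEC | VM_READ]                        = 0x101, /* __P101 */
	[VM_EXEC | VM_WRITE]                       = 0x110, /* __P110 */
	[VM_EXEC | VM_WRITE | VM_READ]             = 0x111, /* __P111 */
	[VM_SHARED]                                = 0x800, /* __S000 */
	[VM_SHARED | VM_READ]                      = 0x801, /* __S001 */
	[VM_SHARED | VM_WRITE]                     = 0x810, /* __S010 */
	[VM_SHARED | VM_WRITE | VM_READ]           = 0x811, /* __S011 */
	[VM_SHARED | VM_EXEC]                      = 0x900, /* __S100 */
	[VM_SHARED | VM_EXEC | VM_READ]            = 0x901, /* __S101 */
	[VM_SHARED | VM_EXEC | VM_WRITE]           = 0x910, /* __S110 */
	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = 0x911, /* __S111 */
};

/* The lookup DECLARE_VM_GET_PAGE_PROT generates: mask the low bits, index. */
static unsigned long vm_get_page_prot(unsigned long vm_flags)
{
	return protection_map[vm_flags &
			      (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];
}
```

Once every arch carries this table privately (and a few customize it), the shared extern array and its Kconfig gate have no remaining users, which is what the patch deletes.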
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms
2022-06-24 4:55 [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Anshuman Khandual
` (25 preceding siblings ...)
2022-06-24 4:55 ` [PATCH V4 26/26] mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
@ 2022-06-24 5:12 ` Christoph Hellwig
2022-06-24 5:32 ` Anshuman Khandual
26 siblings, 1 reply; 52+ messages in thread
From: Christoph Hellwig @ 2022-06-24 5:12 UTC (permalink / raw)
To: Anshuman Khandual
Cc: linux-mm, hch, christophe.leroy, Andrew Morton, linuxppc-dev,
sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel
On Fri, Jun 24, 2022 at 10:13:13AM +0530, Anshuman Khandual wrote:
> vm_get_page_prot(), in order for it to be reused on platforms that do not
> require custom implementation. Finally, ARCH_HAS_VM_GET_PAGE_PROT can just
> be dropped, as all platforms now define and export vm_get_page_prot(), via
> looking up a private and static protection_map[] array. protection_map[]
> data type is the following for all platforms without deviation (except the
> powerpc one which is shared between 32 and 64 bit platforms), keeping it
> unchanged for now.
>
> static pgprot_t protection_map[16] __ro_after_init
On most architectures this should be const now, only very few ever
modify it.
* Re: [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms
2022-06-24 5:12 ` [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms Christoph Hellwig
@ 2022-06-24 5:32 ` Anshuman Khandual
2022-06-24 5:21 ` Christoph Hellwig
0 siblings, 1 reply; 52+ messages in thread
From: Anshuman Khandual @ 2022-06-24 5:32 UTC (permalink / raw)
To: Christoph Hellwig
Cc: linux-mm, christophe.leroy, Andrew Morton, linuxppc-dev,
sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel
On 6/24/22 10:42, Christoph Hellwig wrote:
> On Fri, Jun 24, 2022 at 10:13:13AM +0530, Anshuman Khandual wrote:
>> vm_get_page_prot(), in order for it to be reused on platforms that do not
>> require custom implementation. Finally, ARCH_HAS_VM_GET_PAGE_PROT can just
>> be dropped, as all platforms now define and export vm_get_page_prot(), via
>> looking up a private and static protection_map[] array. protection_map[]
>> data type is the following for all platforms without deviation (except the
>> powerpc one which is shared between 32 and 64 bit platforms), keeping it
>> unchanged for now.
>>
>> static pgprot_t protection_map[16] __ro_after_init
>
> On most architectures this should be const now, only very few ever
> modify it.
Will make it a 'static const pgprot_t protection_map[16] __ro_after_init'
on platforms that do not change the protection_map[] even during boot.
* Re: [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms
2022-06-24 5:32 ` Anshuman Khandual
@ 2022-06-24 5:21 ` Christoph Hellwig
0 siblings, 0 replies; 52+ messages in thread
From: Christoph Hellwig @ 2022-06-24 5:21 UTC (permalink / raw)
To: Anshuman Khandual
Cc: Christoph Hellwig, linux-mm, christophe.leroy, Andrew Morton,
linuxppc-dev, sparclinux, x86, openrisc, linux-xtensa, linux-csky,
linux-hexagon, linux-parisc, linux-alpha, linux-riscv, linux-s390,
linux-ia64, linux-mips, linux-m68k, linux-snps-arc,
linux-arm-kernel, linux-um, linux-sh, linux-kernel
On Fri, Jun 24, 2022 at 10:50:33AM +0530, Anshuman Khandual wrote:
> > On most architectures this should be const now, only very few ever
> > modify it.
>
> Will make it a 'static const pgprot_t protection_map[16] __ro_after_init'
> on platforms that do not change the protection_map[] even during boot.
No need for __ro_after_init when it is already declared const.