Date: Thu, 7 Dec 2023 15:10:58 +0000
From: Catalin Marinas
To: Joey Gouly
Cc: linux-arm-kernel@lists.infradead.org, akpm@linux-foundation.org,
	aneesh.kumar@linux.ibm.com, broonie@kernel.org,
	dave.hansen@linux.intel.com, maz@kernel.org, oliver.upton@linux.dev,
	shuah@kernel.org, will@kernel.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org, James Morse, Suzuki K Poulose,
	Zenghui Yu
Subject: Re: [PATCH v3 09/25] arm64: define VM_PKEY_BIT* for arm64
References: <20231124163510.1835740-1-joey.gouly@arm.com>
	<20231124163510.1835740-10-joey.gouly@arm.com>
In-Reply-To: <20231124163510.1835740-10-joey.gouly@arm.com>

On Fri, Nov 24, 2023 at 04:34:54PM +0000, Joey Gouly wrote:
>  arch/arm64/include/asm/mman.h   |  8 +++++++-
>  arch/arm64/include/asm/page.h   | 10 ++++++++++
>  arch/arm64/mm/mmap.c            |  9 +++++++++
>  arch/powerpc/include/asm/page.h | 11 +++++++++++
>  arch/x86/include/asm/page.h     | 10 ++++++++++
>  fs/proc/task_mmu.c              |  2 ++
>  include/linux/mm.h              | 13 -------------
>  7 files changed, 49 insertions(+), 14 deletions(-)

It might be worth splitting out the powerpc/x86/generic parts into a
separate patch.

> diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
> index 5966ee4a6154..ecb2d18dc4d7 100644
> --- a/arch/arm64/include/asm/mman.h
> +++ b/arch/arm64/include/asm/mman.h
> @@ -7,7 +7,7 @@
>  #include 
> 
>  static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
> -	unsigned long pkey __always_unused)
> +	unsigned long pkey)
>  {
>  	unsigned long ret = 0;
> 
> @@ -17,6 +17,12 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
>  	if (system_supports_mte() && (prot & PROT_MTE))
>  		ret |= VM_MTE;
> 
> +#if defined(CONFIG_ARCH_HAS_PKEYS)
> +	ret |= pkey & 0x1 ? VM_PKEY_BIT0 : 0;
> +	ret |= pkey & 0x2 ? VM_PKEY_BIT1 : 0;
> +	ret |= pkey & 0x4 ? VM_PKEY_BIT2 : 0;
> +#endif

Is there anywhere that rejects pkey & 8 on arm64? Because with 128-bit
PTEs, if we ever support them, we can have 4-bit pkeys.

>  #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)

> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
> index 2312e6ee595f..aabfda2516d2 100644
> --- a/arch/arm64/include/asm/page.h
> +++ b/arch/arm64/include/asm/page.h
> @@ -49,6 +49,16 @@ int pfn_is_map_memory(unsigned long pfn);
> 
>  #define VM_DATA_DEFAULT_FLAGS	(VM_DATA_FLAGS_TSK_EXEC | VM_MTE_ALLOWED)
> 
> +#if defined(CONFIG_ARCH_HAS_PKEYS)
> +/* A protection key is a 3-bit value */
> +# define VM_PKEY_SHIFT	VM_HIGH_ARCH_BIT_2
> +# define VM_PKEY_BIT0	VM_HIGH_ARCH_2
> +# define VM_PKEY_BIT1	VM_HIGH_ARCH_3
> +# define VM_PKEY_BIT2	VM_HIGH_ARCH_4
> +# define VM_PKEY_BIT3	0
> +# define VM_PKEY_BIT4	0
> +#endif

I think we should start from VM_HIGH_ARCH_BIT_0 and just move VM_MTE
and VM_MTE_ALLOWED to VM_HIGH_ARCH_BIT_{4,5}.

> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index e5fcc79b5bfb..a5e75ec333ad 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -330,6 +330,17 @@ static inline unsigned long kaslr_offset(void)
>  }
> 
>  #include 
> +
> +#if defined(CONFIG_ARCH_HAS_PKEYS)
> +# define VM_PKEY_SHIFT	VM_HIGH_ARCH_BIT_0
> +/* A protection key is a 5-bit value */
> +# define VM_PKEY_BIT0	VM_HIGH_ARCH_0
> +# define VM_PKEY_BIT1	VM_HIGH_ARCH_1
> +# define VM_PKEY_BIT2	VM_HIGH_ARCH_2
> +# define VM_PKEY_BIT3	VM_HIGH_ARCH_3
> +# define VM_PKEY_BIT4	VM_HIGH_ARCH_4
> +#endif /* CONFIG_ARCH_HAS_PKEYS */
> +
>  #endif /* __ASSEMBLY__ */
> 
>  #endif /* _ASM_POWERPC_PAGE_H */

> diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
> index d18e5c332cb9..b770db1a21e7 100644
> --- a/arch/x86/include/asm/page.h
> +++ b/arch/x86/include/asm/page.h
> @@ -87,5 +87,15 @@ static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)
> 
>  #define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
> 
> +#if defined(CONFIG_ARCH_HAS_PKEYS)
> +# define VM_PKEY_SHIFT	VM_HIGH_ARCH_BIT_0
> +/* A protection key is a 4-bit value */
> +# define VM_PKEY_BIT0	VM_HIGH_ARCH_0
> +# define VM_PKEY_BIT1	VM_HIGH_ARCH_1
> +# define VM_PKEY_BIT2	VM_HIGH_ARCH_2
> +# define VM_PKEY_BIT3	VM_HIGH_ARCH_3
> +# define VM_PKEY_BIT4	0
> +#endif /* CONFIG_ARCH_HAS_PKEYS */

Rather than moving these to arch code, we could instead keep the generic
definitions and add some CONFIG_ARCH_HAS_PKEYS_{3,4,5}BIT options
selected from the arch code.
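Something along these lines in include/linux/mm.h, completely untested
and with the CONFIG_ARCH_HAS_PKEYS_{4,5}BIT names only as placeholders
(the 3-bit case can just be the plain ARCH_HAS_PKEYS, and it assumes
the arm64 VM_MTE* flags move out of VM_HIGH_ARCH_{0,1} as above):

#ifdef CONFIG_ARCH_HAS_PKEYS
# define VM_PKEY_SHIFT	VM_HIGH_ARCH_BIT_0
/* every pkeys architecture has at least a 3-bit key */
# define VM_PKEY_BIT0	VM_HIGH_ARCH_0
# define VM_PKEY_BIT1	VM_HIGH_ARCH_1
# define VM_PKEY_BIT2	VM_HIGH_ARCH_2
/* placeholder option name, would be selected by x86 and powerpc */
# ifdef CONFIG_ARCH_HAS_PKEYS_4BIT
#  define VM_PKEY_BIT3	VM_HIGH_ARCH_3
# else
#  define VM_PKEY_BIT3	0
# endif
/* placeholder option name, would be selected by powerpc only */
# ifdef CONFIG_ARCH_HAS_PKEYS_5BIT
#  define VM_PKEY_BIT4	VM_HIGH_ARCH_4
# else
#  define VM_PKEY_BIT4	0
# endif
#endif /* CONFIG_ARCH_HAS_PKEYS */

That way arm64 selects only ARCH_HAS_PKEYS, x86 additionally the 4-bit
option and powerpc the 5-bit one, and none of the arch page.h headers
need to carry these definitions.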
-- 
Catalin