Date: Thu, 24 Aug 2023 14:09:53 +0100
From: Joey Gouly
To: Ard Biesheuvel
Cc: linux-arm-kernel@lists.infradead.org, nd@arm.com, broonie@kernel.org,
	catalin.marinas@arm.com, james.morse@arm.com, mark.rutland@arm.com,
	maz@kernel.org, oliver.upton@linux.dev, shuah@kernel.org,
	suzuki.poulose@arm.com, will@kernel.org, yuzenghui@huawei.com
Subject: Re: [PATCH v4 13/20] arm64: reorganise PAGE_/PROT_ macros
Message-ID: <20230824130953.GA1814347@e124191.cambridge.arm.com>
References: <20230606145859.697944-1-joey.gouly@arm.com>
 <20230606145859.697944-14-joey.gouly@arm.com>
 <20230824101416.GA1740263@e124191.cambridge.arm.com>

On Thu, Aug 24, 2023 at 12:18:51PM +0200, Ard Biesheuvel wrote:
> On Thu, 24 Aug 2023 at 12:16, Joey Gouly wrote:
> >
> > Hi Ard,
> >
> > On Tue, Aug 22, 2023 at 04:10:35PM +0200, Ard Biesheuvel wrote:
> > > On Tue, 6 Jun 2023 at 17:00, Joey Gouly wrote:
> > > >
> > > > Make these macros available to assembly code, so they can be re-used by the
> > > > PIE initialisation code.
> > > >
> > > > This involves adding some extra macros, prepended with _ that are the raw
> > > > values not `pgprot` values.
> > > >
> > > > A dummy value for PTE_MAYBE_NG is also provided, for use in assembly.
> > > ...
> > > > +
> > > > +#ifdef __ASSEMBLY__
> > > > +#define PTE_MAYBE_NG 0
> > > > +#endif
> > > > +
> > >
> > > I am struggling a bit to understand why this is ok. I get that the PIE
> > > index macros mask off the nG bit even if it is set, but this exposes a
> > > definition of PROT_DEFAULT and everything based on it to asm code that
> > > deviates from the one observed by C code.
> >
> > Yes, it's a bit of a hack to share as much as possible, and it's "ok" because,
> > as you said PIE masks that bit out.
> > >
> > > I am running into this because I am adding PTE_MAYBE_SHARED for LPA2
> > > support (which repurposes the shareability bits as output address
> > > bits), and I could just #define it to 0x0 as well for assembly, but I
> > > am not sure this is the right approach.
> >
> > Happy to do this differently, if there is a better approach.
> >
> > I reverted this patch (fa4cdccaa582), and applied something like (just compile tested):
> >
> > diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
> > index c7d77333ce1e..8fceeb111ad1 100644
> > --- a/arch/arm64/include/asm/pgtable-prot.h
> > +++ b/arch/arm64/include/asm/pgtable-prot.h
> > @@ -20,6 +20,17 @@
> >  #define PTE_DEVMAP (_AT(pteval_t, 1) << 57)
> >  #define PTE_PROT_NONE (_AT(pteval_t, 1) << 58) /* only when !PTE_VALID */
> >
> > +#define PIE_PAGE_SHARED (PTE_USER | PTE_PXN | PTE_UXN | PTE_WRITE)
> > +#define PIE_PAGE_SHARED_EXEC (PTE_USER | PTE_PXN | PTE_WRITE)
> > +#define PIE_PAGE_READONLY (PTE_USER | PTE_PXN | PTE_UXN)
> > +#define PIE_PAGE_READONLY_EXEC (PTE_USER | PTE_PXN)
> > +#define PIE_PAGE_EXECONLY (PTE_PXN)
> > +
> > +#define PIE_PAGE_KERNEL (PTE_PXN | PTE_UXN | PTE_WRITE)
> > +#define PIE_PAGE_KERNEL_RO (PTE_PXN | PTE_UXN)
> > +#define PIE_PAGE_KERNEL_ROX (PTE_UXN)
> > +#define PIE_PAGE_KERNEL_EXEC (PTE_UXN | PTE_WRITE)
> > +
> >  /*
> >   * This bit indicates that the entry is present i.e. pmd_page()
> >   * still points to a valid huge page in memory even if the pmd
> > @@ -83,11 +94,11 @@ extern bool arm64_use_ng_mappings;
> >
> >  #define PAGE_NONE __pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
> >  /* shared+writable pages are clean by default, hence PTE_RDONLY|PTE_WRITE */
> > -#define PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
> > -#define PAGE_SHARED_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_WRITE)
> > -#define PAGE_READONLY __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
> > -#define PAGE_READONLY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN)
> > -#define PAGE_EXECONLY __pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PTE_PXN)
> > +#define PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PIE_PAGE_SHARED)
> > +#define PAGE_SHARED_EXEC __pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PIE_PAGE_SHARED_EXEC)
> > +#define PAGE_READONLY __pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PIE_PAGE_READONLY)
> > +#define PAGE_READONLY_EXEC __pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PIE_PAGE_READONLY_EXEC)
> > +#define PAGE_EXECONLY __pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PIE_PAGE_EXECONLY)
> >
> >  #endif /* __ASSEMBLY__ */
> >
> > @@ -124,21 +135,21 @@
> >  /* f: PAGE_SHARED PTE_UXN | PTE_PXN | PTE_WRITE | PTE_USER */
> >
> >  #define PIE_E0 ( \
> > -	PIRx_ELx_PERM(pte_pi_index(_PAGE_EXECONLY), PIE_X_O) | \
> > -	PIRx_ELx_PERM(pte_pi_index(_PAGE_READONLY_EXEC), PIE_RX) | \
> > -	PIRx_ELx_PERM(pte_pi_index(_PAGE_SHARED_EXEC), PIE_RWX) | \
> > -	PIRx_ELx_PERM(pte_pi_index(_PAGE_READONLY), PIE_R) | \
> > -	PIRx_ELx_PERM(pte_pi_index(_PAGE_SHARED), PIE_RW))
> > +	PIRx_ELx_PERM(pte_pi_index(PIE_PAGE_EXECONLY), PIE_X_O) | \
> > +	PIRx_ELx_PERM(pte_pi_index(PIE_PAGE_READONLY_EXEC), PIE_RX) | \
> > +	PIRx_ELx_PERM(pte_pi_index(PIE_PAGE_SHARED_EXEC), PIE_RWX) | \
> > +	PIRx_ELx_PERM(pte_pi_index(PIE_PAGE_READONLY), PIE_R) | \
> > +	PIRx_ELx_PERM(pte_pi_index(PIE_PAGE_SHARED), PIE_RW))
> >
> >  #define PIE_E1 ( \
> > -	PIRx_ELx_PERM(pte_pi_index(_PAGE_EXECONLY), PIE_NONE_O) | \
> > -	PIRx_ELx_PERM(pte_pi_index(_PAGE_READONLY_EXEC), PIE_R) | \
> > -	PIRx_ELx_PERM(pte_pi_index(_PAGE_SHARED_EXEC), PIE_RW) | \
> > -	PIRx_ELx_PERM(pte_pi_index(_PAGE_READONLY), PIE_R) | \
> > -	PIRx_ELx_PERM(pte_pi_index(_PAGE_SHARED), PIE_RW) | \
> > -	PIRx_ELx_PERM(pte_pi_index(_PAGE_KERNEL_ROX), PIE_RX) | \
> > -	PIRx_ELx_PERM(pte_pi_index(_PAGE_KERNEL_EXEC), PIE_RWX) | \
> > -	PIRx_ELx_PERM(pte_pi_index(_PAGE_KERNEL_RO), PIE_R) | \
> > -	PIRx_ELx_PERM(pte_pi_index(_PAGE_KERNEL), PIE_RW))
> > +	PIRx_ELx_PERM(pte_pi_index(PIE_PAGE_EXECONLY), PIE_NONE_O) | \
> > +	PIRx_ELx_PERM(pte_pi_index(PIE_PAGE_READONLY_EXEC), PIE_R) | \
> > +	PIRx_ELx_PERM(pte_pi_index(PIE_PAGE_SHARED_EXEC), PIE_RW) | \
> > +	PIRx_ELx_PERM(pte_pi_index(PIE_PAGE_READONLY), PIE_R) | \
> > +	PIRx_ELx_PERM(pte_pi_index(PIE_PAGE_SHARED), PIE_RW) | \
> > +	PIRx_ELx_PERM(pte_pi_index(PIE_PAGE_KERNEL_ROX), PIE_RX) | \
> > +	PIRx_ELx_PERM(pte_pi_index(PIE_PAGE_KERNEL_EXEC), PIE_RWX) | \
> > +	PIRx_ELx_PERM(pte_pi_index(PIE_PAGE_KERNEL_RO), PIE_R) | \
> > +	PIRx_ELx_PERM(pte_pi_index(PIE_PAGE_KERNEL), PIE_RW))
> >
> >  #endif /* __ASM_PGTABLE_PROT_H */
> >
> >
> > The PAGE_KERNEL bits are harder to share, because they are based on
> > PROT_NORMAL. But maybe this bit of duplication is better than the #define 0x0
> > hack I had. Could maybe add a BUILD_BUG_ON somewhere to check that PIE_PAGE_KERNEL*
> > and PAGE_KERNEL have matching bits?
> >
>
> That seems rather invasive. I was about to send out a patch that does
> the below instead. Would that work for you?
>
> diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
> index eed814b00a389..282e0ba658f03 100644
> --- a/arch/arm64/include/asm/pgtable-prot.h
> +++ b/arch/arm64/include/asm/pgtable-prot.h
> @@ -57,10 +57,6 @@
>  #define _PAGE_READONLY_EXEC (_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN)
>  #define _PAGE_EXECONLY (_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PTE_PXN)
>
> -#ifdef __ASSEMBLY__
> -#define PTE_MAYBE_NG 0
> -#endif
> -
>  #ifndef __ASSEMBLY__
>
>  #include
>
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 22201066749e9..069265a8c4384 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -457,11 +457,24 @@ alternative_else_nop_endif
>  	ubfx	x1, x1, #ID_AA64MMFR3_EL1_S1PIE_SHIFT, #4
>  	cbz	x1, .Lskip_indirection
>
> +	/*
> +	 * The PROT_* macros describing the various memory types may resolve to
> +	 * C expressions if they include the PTE_MAYBE_* macros, and so they
> +	 * can only be used from C code. The PIE_E* constants below are also
> +	 * defined in terms of those macros, but will mask out those
> +	 * PTE_MAYBE_* constants, whether they are set or not. So #define them
> +	 * as 0x0 here so we can evaluate the PIE_E* constants in asm context.
> +	 */
> +
> +#define PTE_MAYBE_NG 0
> +
>  	mov_q	x0, PIE_E0
>  	msr	REG_PIRE0_EL1, x0
>  	mov_q	x0, PIE_E1
>  	msr	REG_PIR_EL1, x0
>
> +#undef PTE_MAYBE_NG
> +
>  	mov	x0, TCR2_EL1x_PIE
>  	msr	REG_TCR2_EL1, x0
>

Seems like a way to localise the 'hack', I'm fine with it. We can always take a
similar approach to what I suggested, if it becomes a problem later.

Thanks,
Joey