Date: Sun, 26 Jun 2022 10:31:32 +0100
From: Mark Rutland
To: Ard Biesheuvel
Cc: linux-arm-kernel@lists.infradead.org, Marc Zyngier, Will Deacon, Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual
Subject: Re: [PATCH v5 02/21] arm64: mm: make vabits_actual a build time constant if possible
References: <20220624150651.1358849-1-ardb@kernel.org> <20220624150651.1358849-3-ardb@kernel.org>
In-Reply-To: <20220624150651.1358849-3-ardb@kernel.org>

On Fri, Jun 24, 2022 at 05:06:32PM +0200, Ard Biesheuvel wrote:
> Currently, we only support 52-bit virtual addressing on 64k pages
> configurations, and in all other cases, vabits_actual is guaranteed to
> equal VA_BITS (== VA_BITS_MIN). So get rid of the variable entirely in
> that case.
>
> While at it, move the assignment out of the asm entry code - it has no
> need to be there.
>
> Signed-off-by: Ard Biesheuvel

I see the patch itself checks VA_BITS rather than PAGE_SIZE (and the
former is the right thing to do for FEAT_LPA2), so FWIW:

Mark Rutland

Mark.
> ---
>  arch/arm64/include/asm/memory.h |  4 ++++
>  arch/arm64/kernel/head.S        | 15 +--------------
>  arch/arm64/mm/init.c            | 15 ++++++++++++++-
>  arch/arm64/mm/mmu.c             |  4 +++-
>  4 files changed, 22 insertions(+), 16 deletions(-)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 0af70d9abede..c751cd9b94f8 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -174,7 +174,11 @@
>  #include
>  #include
>
> +#if VA_BITS > 48
>  extern u64 vabits_actual;
> +#else
> +#define vabits_actual	((u64)VA_BITS)
> +#endif
>
>  extern s64 memstart_addr;
>  /* PHYS_OFFSET - the physical address of the start of memory. */
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 1cdecce552bb..dc07858eb673 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -293,19 +293,6 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
>  	adrp	x0, idmap_pg_dir
>  	adrp	x3, __idmap_text_start		// __pa(__idmap_text_start)
>
> -#ifdef CONFIG_ARM64_VA_BITS_52
> -	mrs_s	x6, SYS_ID_AA64MMFR2_EL1
> -	and	x6, x6, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
> -	mov	x5, #52
> -	cbnz	x6, 1f
> -#endif
> -	mov	x5, #VA_BITS_MIN
> -1:
> -	adr_l	x6, vabits_actual
> -	str	x5, [x6]
> -	dmb	sy
> -	dc	ivac, x6		// Invalidate potentially stale cache line
> -
>  	/*
>  	 * VA_BITS may be too small to allow for an ID mapping to be created
>  	 * that covers system RAM if that is located sufficiently high in the
> @@ -713,7 +700,7 @@ SYM_FUNC_START(__enable_mmu)
>  SYM_FUNC_END(__enable_mmu)
>
>  SYM_FUNC_START(__cpu_secondary_check52bitva)
> -#ifdef CONFIG_ARM64_VA_BITS_52
> +#if VA_BITS > 48
>  	ldr_l	x0, vabits_actual
>  	cmp	x0, #52
>  	b.ne	2f
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 339ee84e5a61..1faa6760895e 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -265,7 +265,20 @@ early_param("mem", early_mem);
>
>  void __init arm64_memblock_init(void)
>  {
> -	s64 linear_region_size = PAGE_END - _PAGE_OFFSET(vabits_actual);
> +	s64 linear_region_size;
> +
> +#if VA_BITS > 48
> +	if (cpuid_feature_extract_unsigned_field(
> +				read_sysreg_s(SYS_ID_AA64MMFR2_EL1),
> +				ID_AA64MMFR2_LVA_SHIFT))
> +		vabits_actual = VA_BITS;
> +
> +	/* make the variable visible to secondaries with the MMU off */
> +	dcache_clean_inval_poc((u64)&vabits_actual,
> +			       (u64)&vabits_actual + sizeof(vabits_actual));
> +#endif
> +
> +	linear_region_size = PAGE_END - _PAGE_OFFSET(vabits_actual);
>
>  	/*
>  	 * Corner case: 52-bit VA capable systems running KVM in nVHE mode may
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 7148928e3932..a6392656d589 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -46,8 +46,10 @@
>  u64 idmap_t0sz = TCR_T0SZ(VA_BITS_MIN);
>  u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
>
> -u64 __section(".mmuoff.data.write") vabits_actual;
> +#if VA_BITS > 48
> +u64 vabits_actual __ro_after_init = VA_BITS_MIN;
>  EXPORT_SYMBOL(vabits_actual);
> +#endif
>
>  u64 kimage_vaddr __ro_after_init = (u64)&_text;
>  EXPORT_SYMBOL(kimage_vaddr);
> --
> 2.35.1

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel