Date: Thu, 1 Dec 2022 11:13:59 +0000
From: Mark Rutland
To: Catalin Marinas
Cc: Ard Biesheuvel, linux-arm-kernel@lists.infradead.org, Will Deacon,
 Anshuman Khandual, Joey Gouly
Subject: Re: [PATCH 2/3] arm64: mm: Handle LVA support as a CPU feature
References: <20221115143824.2798908-1-ardb@kernel.org>
 <20221115143824.2798908-3-ardb@kernel.org>

On Wed, Nov 30, 2022 at 04:40:11PM +0000, Catalin Marinas wrote:
> On Wed, Nov 30, 2022 at 05:29:55PM +0100, Ard Biesheuvel wrote:
> > On Wed, 30 Nov 2022 at 17:28, Catalin Marinas wrote:
> > > On Wed, Nov 30, 2022 at 03:56:26PM +0100, Ard Biesheuvel wrote:
> > > > On Wed, 30 Nov 2022 at 15:50, Catalin Marinas wrote:
> > > > > On Tue, Nov 15, 2022 at 03:38:23PM +0100, Ard Biesheuvel wrote:
> > > > > > Currently, we detect CPU support for 52-bit virtual addressing
> > > > > > (LVA) extremely early, before creating the kernel page tables
> > > > > > or enabling the MMU. We cannot override the feature this early,
> > > > > > and so large virtual addressing is always enabled on CPUs that
> > > > > > implement support for it if the software support for it was
> > > > > > enabled at build time. It also means we rely on non-trivial
> > > > > > code in asm to deal with this feature.
> > > > > >
> > > > > > Given that both the ID map and the TTBR1 mapping of the kernel
> > > > > > image are guaranteed to be 48-bit addressable, it is not
> > > > > > actually necessary to enable support this early, and instead,
> > > > > > we can model it as a CPU feature. That way, we can rely on code
> > > > > > patching to get the correct TCR.T1SZ values programmed on
> > > > > > secondary boot and resume from suspend.
> > > > > >
> > > > > > On the primary boot path, we simply enable the MMU with 48-bit
> > > > > > virtual addressing initially, and update TCR.T1SZ from C code
> > > > > > if LVA is supported, right before creating the kernel mapping.
> > > > > > Given that TTBR1 still points to reserved_pg_dir at this point,
> > > > > > updating TCR.T1SZ should be safe without the need for explicit
> > > > > > TLB maintenance.
> > > > >
> > > > > I'm not sure we can skip the TLBI here. There's some weird rule
> > > > > in the ARM ARM that if you change any of the fields that may be
> > > > > cached in the TLB, maintenance is needed even if the MMU is off.
> > > > > From the latest version (I.a, I didn't dig into H.a):
> > > > >
> > > > >   R_VNRFW:
> > > > >     When a System register field is modified and that field is
> > > > >     permitted to be cached in a TLB, software is required to
> > > > >     invalidate all TLB entries that might be affected by the
> > > > >     field, at any address translation stage in the translation
> > > > >     regime even if the translation stage is disabled, using the
> > > > >     appropriate VMID and ASID, after any required System register
> > > > >     synchronization.
> > > >
> > > > Don't we already rely on this in cpu_set_default_tcr_t0sz() /
> > > > cpu_set_idmap_tcr_t0sz()?
> > >
> > > Yeah, we do this and, depending on how you read the above rule, we
> > > may need to move the local_flush_tlb_all() line after the T0SZ
> > > setting.
> >
> > OK, so wouldn't this mean that we cannot change TxSZ at all without
> > going through an MMU off/on cycle?
>
> I don't think so.
> The way I see it is that the change is not guaranteed to take effect
> until we invalidate the TLBs. We don't risk fetching random stuff into
> the TLB since we have the reserved TTBR0 at that point. If the
> subsequent cpu_switch_mm() changed the ASID, in some interpretation of
> the ARM ARM we could have skipped the TLBI, but that's not the case
> anyway.
>
> Looking for Mark R's opinion, as I recall he talked to the architects
> in the past about this (though I think it was in the context of
> SCTLR_EL1).

The architecture is unfortunately vague here.

From prior discussions with architects, the general rule was "if it's
permitted to be cached in a TLB, an update must be followed by a TLBI
for that update to take effect, regardless of SCTLR_ELx.M". The
architecture isn't very specific about what scope of maintenance is
required (e.g. whether certain fields are tagged by ASID/VMID), but I
believe it's sufficient to use a (VM)ALL invalidation for the current
translation regime (which might be stronger than necessary).

So for this case, my understanding is:

1) When changing TxSZ, we need a subsequent invalidation before the
   change is guaranteed to take effect. So I believe that what we do
   today isn't quite right.

2) During the window between writing to TxSZ and completing the
   invalidation, I'm not sure how the MMU is permitted to behave w.r.t.
   intermediate walk entries. I could imagine that those become
   (CONSTRAINED) UNPREDICTABLE, and that we might need to ensure those
   are invalidated (or inactive, with prior walks completed) before we
   write to TCR_EL1.

I can go chase this up with the architects; in the meantime my thinking
is that we should retain the existing maintenance. There's almost
certainly more stuff that we'd need to fix in this area.

Thanks,
Mark.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel