Date: Tue, 20 Aug 2024 14:36:37 +0100
From: Mark Rutland
To: Andre Przywara
Cc: linux-arm-kernel@lists.infradead.org, akos.denke@arm.com,
	luca.fancellu@arm.com, maz@kernel.org
Subject: Re: [BOOT-WRAPPER v2 06/10] aarch32: Always enter kernel via exception return
References: <20240812101555.3558589-1-mark.rutland@arm.com>
	<20240812101555.3558589-7-mark.rutland@arm.com>
	<20240819182241.36d15eb1@donnerap.manchester.arm.com>
	<20240820135944.0b43f393@donnerap.manchester.arm.com>
In-Reply-To: <20240820135944.0b43f393@donnerap.manchester.arm.com>

On Tue, Aug 20, 2024 at 01:59:44PM +0100, Andre Przywara wrote:
> On Tue, 20 Aug 2024 12:43:18 +0100
> Mark Rutland wrote:
> > On Mon, Aug 19, 2024 at 06:22:41PM +0100, Andre Przywara wrote:
> > > On Mon, 12 Aug 2024 11:15:51 +0100
> > > Mark Rutland wrote:
> > > > @@ -111,23 +108,28 @@ ASM_FUNC(jump_kernel)
> > > >  	bl	find_logical_id
> > > >  	bl	setup_stack
> > > >  
> > > > -	ldr	lr, [r5], #4
> > > > -	ldm	r5, {r0 - r2}
> > > > -
> > > > -	ldr	r4, =flag_no_el3
> > > > -	ldr	r4, [r4]
> > > > -	cmp	r4, #1
> > > > -	bxeq	lr			@ no EL3
> > > > +	mov	r0, r5
> > > > +	mov	r1, r6
> > > > +	mov	r2, r7
> > > > +	ldr	r3, =SPSR_KERNEL
> > > >  
> > > > -	ldr	r4, =SPSR_KERNEL
> > > >  	/* Return in thumb2 mode when bit 0 of address is 1 */
> > > > -	tst	lr, #1
> > > > -	orrne	r4, #PSR_T
> > > > +	tst	r4, #1
> > > > +	orrne	r3, #PSR_T
> > > > +
> > > > +	mrs	r5, cpsr
> > > > +	and	r5, #PSR_MODE_MASK
> > > > +	cmp	r5, #PSR_MON
> > > > +	beq	eret_at_mon
> > > > +	cmp	r5, #PSR_HYP
> > > > +	beq	eret_at_hyp
> > > > +	b	.
> > > >  
> > > > -	msr	spsr_cxf, r4
> > > > +eret_at_mon:
> > > > +	mov	lr, r4
> > > > +	msr	spsr_cxf, r3
> > > >  	movs	pc, lr
> > > 
> > > Reading "B9.1 General restrictions on system instructions" in the ARMv7 ARM
> > > I don't immediately see why an eret wouldn't be possible here.
> > > 
> > > If there is a restriction I missed, I guess either a comment here or in
> > > the commit message would be helpful.
> > 
> > We can use ERET here; IIRC that was added in the ARMv7 virtualization
> > extensions, but the boot-wrapper requires that and really it's ARMv8+
> 
> Is that so? I mean in all practicality we will indeed use the bootwrapper
> on ARMv8 only these days, but I don't think we need to artificially limit
> this. Also I consider the boot-wrapper one of the more reliable sources
> for ARMv7 boot code, so not sure we should drop this aspect.
> There is one ARMv7 compile time check, to avoid "sevl", so we have some
> support, at least.

What I was trying to say here was "the minimum bound is ARMv7 +
virtualization extensions", which is already required by the
".arch_extension virt" directive that's been in this file since it was
introduced.

Practically speaking, I don't think that we should care about ARMv7
here, but if that happens to work, great!

> > anyway. I had opted to stick with "movs pc, lr" because it was a
> > (trivially) smaller change, and kept the cases distinct, but I'm happy
> > to use ERET.
> > 
> > However, beware that in AArch32 ERET is a bit odd: in Hyp mode it takes
> > the return address from ELR_HYP, while in all other modes it takes it
> > from the LR (as only hyp has an ELR).
> 
> Yeah, I saw this yesterday, and am even more grateful for the ARMv8
> exception model now ;-)
> 
> So I am fine with "movs pc, lr", if that's the more canonical way on
> 32-bit/ARMv7.
> On the other hand your revised sequence below looks
> intriguingly simple ...

> > > > -
> > > > -	.section .data
> > > > -	.align	2
> > > > -flag_no_el3:
> > > > -	.long	0
> > > > +eret_at_hyp:
> > > > +	msr	elr_hyp, r4
> > > > +	msr	spsr_cxf, r3
> > > 
> > > Shouldn't that be spsr_hyp?
> > 
> > It can be, but doesn't need to be. This is the SPSR_<fields> encoding,
> 
> So I didn't know about this until yesterday, and it's not easy to find,
> since it seems not to be mentioned as such in the ARM ARM (at least not
> "cxf"). binutils seems to disassemble this to SPSR_fxc, but I guess we
> should indeed move to SPSR_fsxc (if we keep this at all).
> 
> > which writes to the SPSR owned by the active mode, though it skips
> > bits<23:16>, which we probably should initialise.
> > 
> > If I change that all to:
> > 
> > | eret_at_mon:
> > | 	mov	lr, r4
> > | 	msr	spsr_mon, r3
> > | 	eret
> > | eret_at_hyp:
> > | 	msr	elr_hyp, r4
> > | 	msr	spsr_hyp, r3
> > | 	eret
> > 
> > ... do you think that's clear enough, or do you think we need a comment
> > about the "LR" vs "ELR_HYP" distinction?
> 
> Oh, that certainly looks the clearest, but indeed a comment on LR vs. ELR
> situation looks indicated.

Considering the earlier comments I'm going to make this:

| eret_at_mon:
| 	mov	lr, r4
| 	msr	spsr_mon, r3
| 	movs	pc, lr
| eret_at_hyp:
| 	msr	elr_hyp, r4
| 	msr	spsr_hyp, r3
| 	eret

Using 'spsr_mon' and 'spsr_hyp' means we initialize *all* of the SPSR
bits, so that's a bug fix in addition to being clearer.

Using 'movs pc, lr' for the 'eret_at_mon' case is the standard way to do
exception returns in AArch32 generally, and then that clearly doesn't
depend on the virtualization extensions, so if we ever want to handle a
CPU without hyp in future all we'll need to do is mess with the SPSR
value. I'm not going to bother with a comment given that's standard
AArch32 behaviour.

Mark.
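Putting the pieces of the thread together, the agreed tail of jump_kernel
with the points discussed above spelled out in comments (a sketch based
on this discussion, not necessarily the committed patch):

	@ CPSR/SPSR field suffixes: c = bits<7:0>, x = bits<15:8>,
	@ s = bits<23:16>, f = bits<31:24>. The old 'spsr_cxf' form
	@ skipped the 's' field; the mode-specific 'spsr_mon'/'spsr_hyp'
	@ forms write all of the SPSR bits.

eret_at_mon:
	mov	lr, r4			@ Monitor mode has no ELR; the return
	msr	spsr_mon, r3		@ address goes via the banked LR, and
	movs	pc, lr			@ 'movs pc, lr' consumes it.

eret_at_hyp:
	msr	elr_hyp, r4		@ In Hyp mode, ERET returns to
	msr	spsr_hyp, r3		@ ELR_HYP, not LR.
	eret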