Date: Tue, 20 Aug 2024 14:50:54 +0100
From: Andre Przywara
To: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org, akos.denke@arm.com,
 luca.fancellu@arm.com, maz@kernel.org
Subject: Re: [BOOT-WRAPPER v2 06/10] aarch32: Always enter kernel via exception return
Message-ID: <20240820145054.2b6dd911@donnerap.manchester.arm.com>
References: <20240812101555.3558589-1-mark.rutland@arm.com>
 <20240812101555.3558589-7-mark.rutland@arm.com>
 <20240819182241.36d15eb1@donnerap.manchester.arm.com>
 <20240820135944.0b43f393@donnerap.manchester.arm.com>

On Tue, 20 Aug 2024 14:36:37 +0100
Mark Rutland wrote:

Hi Mark,

> On Tue, Aug 20, 2024 at 01:59:44PM +0100, Andre Przywara wrote:
> > On Tue, 20 Aug 2024 12:43:18 +0100
> > Mark Rutland wrote:
> > > On Mon, Aug 19, 2024 at 06:22:41PM +0100, Andre Przywara wrote:
> > > > On Mon, 12 Aug 2024 11:15:51 +0100
> > > > Mark Rutland wrote:
> > > > > @@ -111,23 +108,28 @@ ASM_FUNC(jump_kernel)
> > > > >  	bl	find_logical_id
> > > > >  	bl	setup_stack
> > > > >
> > > > > -	ldr	lr, [r5], #4
> > > > > -	ldm	r5, {r0 - r2}
> > > > > -
> > > > > -	ldr	r4, =flag_no_el3
> > > > > -	ldr	r4, [r4]
> > > > > -	cmp	r4, #1
> > > > > -	bxeq	lr	@ no EL3
> > > > > +	mov	r0, r5
> > > > > +	mov	r1, r6
> > > > > +	mov	r2, r7
> > > > > +	ldr	r3, =SPSR_KERNEL
> > > > >
> > > > > -	ldr	r4, =SPSR_KERNEL
> > > > >  	/* Return in thumb2 mode when bit 0 of address is 1 */
> > > > > -	tst	lr, #1
> > > > > -	orrne	r4, #PSR_T
> > > > > +	tst	r4, #1
> > > > > +	orrne	r3, #PSR_T
> > > > > +
> > > > > +	mrs	r5, cpsr
> > > > > +	and	r5, #PSR_MODE_MASK
> > > > > +	cmp	r5, #PSR_MON
> > > > > +	beq	eret_at_mon
> > > > > +	cmp	r5, #PSR_HYP
> > > > > +	beq	eret_at_hyp
> > > > > +	b	.
> > > > >
> > > > > -	msr	spsr_cxf, r4
> > > > > +eret_at_mon:
> > > > > +	mov	lr, r4
> > > > > +	msr	spsr_cxf, r3
> > > > >  	movs	pc, lr
> > > >
> > > > Reading "B9.1 General restrictions on system instructions" in the ARMv7 ARM
> > > > I don't immediately see why an eret wouldn't be possible here.
> > > >
> > > > If there is a restriction I missed, I guess either a comment here or in
> > > > the commit message would be helpful.
> > >
> > > We can use ERET here; IIRC that was added in the ARMv7 virtualization
> > > extensions, but the boot-wrapper requires that and really it's ARMv8+
> >
> > Is that so? I mean in all practicality we will indeed use the bootwrapper
> > on ARMv8 only these days, but I don't think we need to artificially limit
> > this. Also I consider the boot-wrapper one of the more reliable sources
> > for ARMv7 boot code, so not sure we should drop this aspect.
> > There is one ARMv7 compile time check, to avoid "sevl", so we have some
> > support, at least.
>
> What I was trying to say here was "the minimum bound is ARMv7 +
> virtualization extensions", which is already required by the
> ".arch_extension virt" directive that's been in this file since it was
> introduced.
>
> Practically speaking, I don't think that we should care about ARMv7
> here, but if that happens to work, great!

Ah, no, I meant "armv7ve". Given that we either drop to HYP or stay in
HYP, I don't think supporting something before that makes much sense
here ;-)

> > > anyway. I had opted to stick with "movs pc, lr" because it was a
> > > (trivially) smaller change, and kept the cases distinct, but I'm happy
> > > to use ERET.
> > >
> > > However, beware that in AArch32 ERET is a bit odd: in Hyp mode it takes
> > > the return address from ELR_HYP, while in all other modes it takes it
> > > from the LR (as only hyp has an ELR).
> >
> > Yeah, I saw this yesterday, and am even more grateful for the ARMv8
> > exception model now ;-)
> >
> > So I am fine with "movs pc, lr", if that's the more canonical way on
> > 32-bit/ARMv7. On the other hand your revised sequence below looks
> > intriguingly simple ...
> >
> > > > > -
> > > > > -	.section .data
> > > > > -	.align	2
> > > > > -flag_no_el3:
> > > > > -	.long	0
> > > > > +eret_at_hyp:
> > > > > +	msr	elr_hyp, r4
> > > > > +	msr	spsr_cxf, r3
> > > >
> > > > Shouldn't that be spsr_hyp?
> > >
> > > It can be, but doesn't need to be. This is the SPSR_<fields> encoding,
> >
> > So I didn't know about this until yesterday, and it's not easy to find,
> > since it seems not to be mentioned as such in the ARM ARM (at least not
> > "cxf"). binutils seems to disassemble this to SPSR_fxc, but I guess we
> > should indeed move to SPSR_fsxc (if we keep this at all).
> >
> > > which writes to the SPSR owned by the active mode, though it skips
> > > bits<23:16>, which we probably should initialise.
> > >
> > > If I change that all to:
> > >
> > > | eret_at_mon:
> > > |	mov	lr, r4
> > > |	msr	spsr_mon, r3
> > > |	eret
> > > | eret_at_hyp:
> > > |	msr	elr_hyp, r4
> > > |	msr	spsr_hyp, r3
> > > |
> > >
> > > ... do you think that's clear enough, or do you think we need a comment
> > > about the "LR" vs "ELR_HYP" distinction?
> >
> > Oh, that certainly looks the clearest, but indeed a comment on LR vs. ELR
> > situation looks indicated.
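[Editor's note: the "cxf" vs "fsxc" discussion above comes down to which byte
lanes of the SPSR an MSR writes. A small sketch of the lane arithmetic, using
the ARMv7-A field definitions (the helper name is ours, for illustration only):]

```python
# Byte lanes selected by the letters in an AArch32 "msr spsr_<fields>"
# instruction, per the ARMv7-A definitions:
#   c = control   -> bits [7:0]
#   x = extension -> bits [15:8]
#   s = status    -> bits [23:16]
#   f = flags     -> bits [31:24]
FIELD_MASKS = {"c": 0x000000FF, "x": 0x0000FF00,
               "s": 0x00FF0000, "f": 0xFF000000}

def msr_write_mask(fields: str) -> int:
    """Bits of the SPSR actually written by 'msr spsr_<fields>'."""
    mask = 0
    for letter in fields:
        mask |= FIELD_MASKS[letter]
    return mask

# 'spsr_cxf' skips the status byte, i.e. leaves bits [23:16] untouched:
assert msr_write_mask("cxf") == 0xFF00FFFF
# 'spsr_fsxc' covers the whole register, so every SPSR bit is initialised:
assert msr_write_mask("fsxc") == 0xFFFFFFFF
```

[This is why writing spsr_mon/spsr_hyp (equivalent to naming all four fields)
initialises all of the SPSR, while spsr_cxf does not.]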
>
> Considering the earlier comments I'm going to make this:
>
> | eret_at_mon:
> |	mov	lr, r4
> |	msr	spsr_mon, r3
> |	movs	pc, lr
> | eret_at_hyp:
> |	msr	elr_hyp, r4
> |	msr	spsr_hyp, r3
> |	eret
>
> Using 'spsr_mon' and 'spsr_hyp' means we initialize *all* of the SPSR
> bits, so that's a bug fix in addition to being clearer.
>
> Using 'movs pc, lr' for the 'eret_at_mon' case is the standard way to do
> exception returns in AArch32 generally, and then that clearly doesn't
> depend on the virtualization extensions, so if we ever want to handle a
> CPU without hyp in future all we'll need to do is mess with the SPSR
> value.
>
> I'm not going to bother with a comment given that's standard AArch32
> behaviour.

Many thanks, that looks absolutely fine to me and makes the most sense!

Cheers,
Andre.
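[Editor's note: the jump_kernel dispatch settled on in this thread can be
modelled in a few lines. The helper name below is hypothetical; the mode
encodings (Monitor = 0x16, Hyp = 0x1A) and the T bit position are the
architectural CPSR values from the ARMv7-A ARM, and the SPSR_KERNEL value in
the example is an assumed placeholder:]

```python
# Model of the dispatch: choose the exception-return instruction from the
# current CPSR mode, and set the Thumb bit in the target SPSR when bit 0
# of the kernel entry address is 1.
PSR_MODE_MASK = 0x1F
PSR_MON = 0x16    # Monitor mode (Security Extensions)
PSR_HYP = 0x1A    # Hyp mode (Virtualization Extensions)
PSR_T = 1 << 5    # Thumb execution state bit

def plan_kernel_entry(cpsr: int, entry_addr: int, spsr_kernel: int):
    """Return (exception-return instruction, SPSR value) for jump_kernel."""
    spsr = spsr_kernel | (PSR_T if entry_addr & 1 else 0)
    mode = cpsr & PSR_MODE_MASK
    if mode == PSR_MON:
        return ("movs pc, lr", spsr)  # MON: return address goes in LR
    if mode == PSR_HYP:
        return ("eret", spsr)         # HYP: return address goes in ELR_HYP
    raise ValueError("boot-wrapper expects to be in MON or HYP here")

# Thumb-mode kernel entered from Monitor mode (0x1D3 is an assumed
# SPSR_KERNEL value: SVC mode with A/I/F masked):
insn, spsr = plan_kernel_entry(0x1D6, 0x80000001, 0x1D3)
assert insn == "movs pc, lr" and spsr & PSR_T
```

[The two branches also document the LR vs ELR_HYP asymmetry discussed above:
only Hyp mode has an ELR, so only the eret path uses elr_hyp.]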