Date: Tue, 25 May 2021 11:38:42 +0100
From: Andre Przywara
To: Jaxson Han
Cc: mark.rutland@arm.com, linux-arm-kernel@lists.infradead.org, wei.chen@arm.com
Subject: Re: [boot-wrapper PATCH v3 8/8] aarch64: Introduce EL2 boot code for Armv8-R AArch64
Message-ID: <20210525113842.6657365e@slackpad.fritz.box>
In-Reply-To: <20210525062509.201464-9-jaxson.han@arm.com>
References: <20210525062509.201464-1-jaxson.han@arm.com>
 <20210525062509.201464-9-jaxson.han@arm.com>
Organization: Arm Ltd.
On Tue, 25 May 2021 14:25:09 +0800
Jaxson Han wrote:

Hi,

> The Armv8-R AArch64 profile does not support the EL3 exception level.
> The Armv8-R AArch64 profile allows for an (optional) VMSAv8-64 MMU
> at EL1, which allows running off-the-shelf Linux. However EL2 only
> supports a PMSA, which is not supported by Linux, so we need to drop
> into EL1 before entering the kernel.
>
> We add a new err_invalid_arch symbol as a dead loop. If we detect that
> the current Armv8-R AArch64 CPU only supports a PMSA, meaning we cannot
> boot Linux, then we jump to err_invalid_arch.
>
> During Armv8-R AArch64 init, to make sure nothing unexpected traps into
> EL2, we auto-detect and configure FIEN and EnSCXT in HCR_EL2.
>
> The boot sequence is:
> If CurrentEL == EL3, then goto EL3 initialisation and drop to a lower
> EL before entering the kernel.
> If CurrentEL == EL2 && id_aa64mmfr0_el1.MSA == 0xf (Armv8-R AArch64),
>   if id_aa64mmfr0_el1.MSA_frac == 0x2,
>     then goto Armv8-R AArch64 initialisation and drop to EL1 before
>     entering the kernel.
>   else, VMSA is unsupported and we cannot boot Linux, so
>     goto err_invalid_arch (dead loop).
> Else, no initialisation and keep the current EL before entering the
> kernel.

thanks for the changes, that looks good now to me. I checked the CPU
features and HCR_EL2 bits against the manuals (both v8-A and v8-R).
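As a cross-check while reviewing, the decision tree above boils down to the C sketch below. The enum and function name are mine, purely illustrative; the inputs correspond to CurrentEL and the raw ID_AA64MMFR0_EL1 value:

```c
#include <stdint.h>

enum boot_path { INIT_EL3, INIT_EL2_V8R, ERR_INVALID_ARCH, KEEP_EL };

/* current_el: exception level we booted in (1..3)
 * mmfr0:      raw ID_AA64MMFR0_EL1 value
 */
enum boot_path select_boot_path(unsigned int current_el, uint64_t mmfr0)
{
	unsigned int msa      = (mmfr0 >> 48) & 0xf;  /* MSA,      bits [51:48] */
	unsigned int msa_frac = (mmfr0 >> 52) & 0xf;  /* MSA_frac, bits [55:52] */

	if (current_el == 3)
		return INIT_EL3;                /* el3_init, drop to lower EL */
	if (current_el == 2 && msa == 0xf) {    /* Armv8-R AArch64 */
		/* >= 2 mirrors the blt: ID fields only ever gain features */
		return msa_frac >= 2 ? INIT_EL2_V8R : ERR_INVALID_ARCH;
	}
	return KEEP_EL;                         /* v8-A EL2 or lower EL */
}
```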
Reviewed-by: Andre Przywara

Cheers,
Andre

>
> Signed-off-by: Jaxson Han
> ---
>  arch/aarch64/boot.S            | 92 +++++++++++++++++++++++++++++++++-
>  arch/aarch64/include/asm/cpu.h |  2 +
>  2 files changed, 92 insertions(+), 2 deletions(-)
>
> diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> index 14fd9cf..461e927 100644
> --- a/arch/aarch64/boot.S
> +++ b/arch/aarch64/boot.S
> @@ -25,16 +25,24 @@ _start:
>  	 * Boot sequence
>  	 * If CurrentEL == EL3, then goto EL3 initialisation and drop to
>  	 * lower EL before entering the kernel.
> +	 * If CurrentEL == EL2 && id_aa64mmfr0_el1.MSA == 0xf, then
> +	 *   If id_aa64mmfr0_el1.MSA_frac == 0x2, then goto
> +	 *     Armv8-R AArch64 initialisation and drop to EL1 before
> +	 *     entering the kernel.
> +	 *   Else, which means VMSA unsupported and cannot boot Linux,
> +	 *     goto err_invalid_arch (dead loop).
>  	 * Else, no initialisation and keep the current EL before
>  	 * entering the kernel.
>  	 */
>  	mrs	x0, CurrentEL
> -	cmp	x0, #CURRENTEL_EL3
> -	beq	el3_init
> +	cmp	x0, #CURRENTEL_EL2
> +	bgt	el3_init
> +	beq	el2_init
>
>  	/*
>  	 * We stay in the current EL for entering the kernel
>  	 */
> +keep_el:
>  	mov	w0, #1
>  	ldr	x1, =flag_keep_el
>  	str	w0, [x1]
> @@ -127,6 +135,85 @@ el3_init:
>  	str	w0, [x1]
>  	b	el_max_init
>
> +	/*
> +	 * EL2 Armv8-R AArch64 initialisation
> +	 */
> +el2_init:
> +	/* Detect Armv8-R AArch64 */
> +	mrs	x1, id_aa64mmfr0_el1
> +	/*
> +	 * Check MSA, bits [51:48]:
> +	 * 0xf means Armv8-R AArch64.
> +	 * If not 0xf, proceed in Armv8-A EL2.
> +	 */
> +	ubfx	x0, x1, #48, #4		// MSA
> +	cmp	x0, 0xf
> +	bne	keep_el
> +	/*
> +	 * Check MSA_frac, bits [55:52]:
> +	 * 0x2 means EL1&0 translation regime also supports VMSAv8-64.
> +	 */
> +	ubfx	x0, x1, #52, #4		// MSA_frac
> +	cmp	x0, 0x2
> +	/*
> +	 * If not 0x2, no VMSA, so cannot boot Linux and dead loop.
> +	 * Also, since the architecture guarantees that those CPUID
> +	 * fields never lose features when the value in a field
> +	 * increases, we use blt to cover it.
> +	 */
> +	blt	err_invalid_arch
> +
> +	mrs	x0, midr_el1
> +	msr	vpidr_el2, x0
> +
> +	mrs	x0, mpidr_el1
> +	msr	vmpidr_el2, x0
> +
> +	mov	x0, #(1 << 31)		// VTCR_MSA: VMSAv8-64 support
> +	msr	vtcr_el2, x0
> +
> +	/* Init HCR_EL2 */
> +	mov	x0, #(1 << 31)		// RES1: Armv8-R AArch64 only
> +
> +	mrs	x1, id_aa64pfr0_el1
> +	ubfx	x2, x1, #56, 4		// ID_AA64PFR0_EL1.CSV2
> +	cmp	x2, 0x2
> +	b.lt	1f
> +	/*
> +	 * Disable trap when accessing SCTXNUM_EL0 or SCTXNUM_EL1
> +	 * if FEAT_CSV2.
> +	 */
> +	orr	x0, x0, #(1 << 53)	// HCR_EL2.EnSCXT
> +
> +1:	ubfx	x2, x1, #28, 4		// ID_AA64PFR0_EL1.RAS
> +	cmp	x2, 0x2
> +	b.lt	1f
> +	/* Disable trap when accessing ERXPFGCDN_EL1 if FEAT_RASv1p1. */
> +	orr	x0, x0, #(1 << 47)	// HCR_EL2.FIEN
> +
> +	/* Enable pointer authentication if present */
> +1:	mrs	x1, id_aa64isar1_el1
> +	/*
> +	 * If ID_AA64ISAR1_EL1.{GPI, GPA, API, APA} == {0000, 0000, 0000, 0000}
> +	 * then HCR_EL2.APK and HCR_EL2.API are RES0.
> +	 * Else
> +	 *   set HCR_EL2.APK and HCR_EL2.API.
> +	 */
> +	ldr	x2, =(((0xff) << 24) | (0xff << 4))
> +	and	x1, x1, x2
> +	cbz	x1, 1f
> +
> +	orr	x0, x0, #(1 << 40)	// HCR_EL2.APK
> +	orr	x0, x0, #(1 << 41)	// HCR_EL2.API
> +
> +1:	msr	hcr_el2, x0
> +	isb
> +
> +	mov	w0, #SPSR_KERNEL_EL1
> +	ldr	x1, =spsr_to_elx
> +	str	w0, [x1]
> +	// fall through
> +
>  el_max_init:
>  	ldr	x0, =CNTFRQ
>  	msr	cntfrq_el0, x0
> @@ -136,6 +223,7 @@ el_max_init:
>  	b	start_el_max
>
>  err_invalid_id:
> +err_invalid_arch:
>  	b	.
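One note purely for the archives: the HCR_EL2 value the el2_init code assembles can be cross-checked with this C sketch. The function name and structure are mine, not from the patch; the bit positions match what the diff uses:

```c
#include <stdint.h>

/* Compute the HCR_EL2 value el2_init builds, from the raw ID registers.
 * pfr0:  ID_AA64PFR0_EL1, isar1: ID_AA64ISAR1_EL1. Illustrative only. */
uint64_t compute_hcr_el2(uint64_t pfr0, uint64_t isar1)
{
	uint64_t hcr = 1ULL << 31;		/* RES1 on Armv8-R AArch64 */

	if (((pfr0 >> 56) & 0xf) >= 2)		/* CSV2: SCXTNUM_ELx present */
		hcr |= 1ULL << 53;		/* EnSCXT: don't trap SCXTNUM */
	if (((pfr0 >> 28) & 0xf) >= 2)		/* RAS >= 2: FEAT_RASv1p1 */
		hcr |= 1ULL << 47;		/* FIEN: don't trap ERXPFGCDN_EL1 */
	/* Any of ISAR1.{GPI, GPA, API, APA} non-zero: PAuth implemented */
	if (isar1 & ((0xffULL << 24) | (0xffULL << 4)))
		hcr |= (1ULL << 40) | (1ULL << 41);	/* APK | API */

	return hcr;
}
```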
>
>  /*
> diff --git a/arch/aarch64/include/asm/cpu.h b/arch/aarch64/include/asm/cpu.h
> index 3c1ba4b..2b3a0a4 100644
> --- a/arch/aarch64/include/asm/cpu.h
> +++ b/arch/aarch64/include/asm/cpu.h
> @@ -25,6 +25,7 @@
>  #define SPSR_I		(1 << 7)	/* IRQ masked */
>  #define SPSR_F		(1 << 6)	/* FIQ masked */
>  #define SPSR_T		(1 << 5)	/* Thumb */
> +#define SPSR_EL1H	(5 << 0)	/* EL1 Handler mode */
>  #define SPSR_EL2H	(9 << 0)	/* EL2 Handler mode */
>  #define SPSR_HYP	(0x1a << 0)	/* M[3:0] = hyp, M[4] = AArch32 */
>
> @@ -43,6 +44,7 @@
>  #else
>  #define SCTLR_EL1_RESET	SCTLR_EL1_RES1
>  #define SPSR_KERNEL	(SPSR_A | SPSR_D | SPSR_I | SPSR_F | SPSR_EL2H)
> +#define SPSR_KERNEL_EL1	(SPSR_A | SPSR_D | SPSR_I | SPSR_F | SPSR_EL1H)
>  #endif
>
>  #ifndef __ASSEMBLY__