From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 11 Oct 2024 11:52:13 +0100
From: Mark Rutland
To: Luca Fancellu
Cc: andre.przywara@arm.com, linux-arm-kernel@lists.infradead.org
Subject: Re: [boot-wrapper v3 2/4] aarch64: Enable Armv8-R EL2 boot
References: <20240731141103.2559706-1-luca.fancellu@arm.com>
 <20240731141103.2559706-3-luca.fancellu@arm.com>
In-Reply-To: <20240731141103.2559706-3-luca.fancellu@arm.com>

Hi Luca,

Overall this looks good, I just have a few structural comments below.

On Wed, Jul 31, 2024 at 03:11:01PM +0100, Luca Fancellu wrote:
> When booting on Armv8-R, EL3 is not implemented and the Linux
> kernel needs to be booted in EL1, so initialise EL2 and start
> the kernel in the expected exception level.
>
> To do so, introduce the 'aarch64-r' argument for the
> --with-bw-arch parameter.
>
> PSCI is not yet implemented for aarch64-r.
>
> Signed-off-by: Luca Fancellu
> ---
> Changes from v2:
> - Major rework, reason in the cover letter.
> ---
>  Makefile.am                    |  4 +++
>  arch/aarch64/boot.S            |  5 ++++
>  arch/aarch64/include/asm/cpu.h | 24 +++++++++++++++
>  arch/aarch64/init.c            | 55 ++++++++++++++++++++++++++++++++++
>  configure.ac                   |  8 +++--
>  5 files changed, 93 insertions(+), 3 deletions(-)
>
> diff --git a/Makefile.am b/Makefile.am
> index 6ebece25b230..bf97b989d5d7 100644
> --- a/Makefile.am
> +++ b/Makefile.am
> @@ -25,6 +25,10 @@ DEFINES += $(if $(SYSREGS_BASE), -DSYSREGS_BASE=$(SYSREGS_BASE), )
>  DEFINES += -DUART_BASE=$(UART_BASE)
>  DEFINES += -DSTACK_SIZE=256
>  
> +if BOOTWRAPPER_64R
> +DEFINES += -DBOOTWRAPPER_64R
> +endif
> +
>  if KERNEL_32
>  DEFINES += -DKERNEL_32
>  PSCI_CPU_ON := 0x84000003
> diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> index f8fc8082fbee..0b4f6824045a 100644
> --- a/arch/aarch64/boot.S
> +++ b/arch/aarch64/boot.S
> @@ -100,7 +100,12 @@ ASM_FUNC(jump_kernel)
>  	mov	x1, x21
>  	mov	x2, x22
>  	mov	x3, x23
> +#if defined(BOOTWRAPPER_64R)
> +	// On Armv8-R Linux needs to be booted at EL1
> +	mov	x4, #SPSR_KERNEL_EL1
> +#else
>  	mov	x4, #SPSR_KERNEL
> +#endif
>  
>  	mrs	x5, CurrentEL
>  	cmp	x5, #CURRENTEL_EL3
> diff --git a/arch/aarch64/include/asm/cpu.h b/arch/aarch64/include/asm/cpu.h
> index a5744e16e4b2..6686161c7c0b 100644
> --- a/arch/aarch64/include/asm/cpu.h
> +++ b/arch/aarch64/include/asm/cpu.h
> @@ -32,10 +32,15 @@
>  	(BIT(29) | BIT(28) | BIT(23) | BIT(22) | BIT(18) | BIT(16) | \
>  	 BIT(11) | BIT(5) | BIT(4))
>  
> +#define CPTR_EL2_NO_E2H_RES1 \
> +	(BITS(13,12) | BIT(9) | BITS(7,0))
> +
>  #define SCTLR_EL2_RES1 \
>  	(BIT(29) | BIT(28) | BIT(23) | BIT(22) | BIT(18) | BIT(16) | \
>  	 BIT(11) | BIT(5) | BIT(4))
>  
> +#define VSTCR_EL2_RES1	(BIT(31))
> +
>  #define SCTLR_EL1_RES1 \
>  	(BIT(29) | BIT(28) | BIT(23) | BIT(22) | BIT(20) | BIT(11) | \
>  	 BIT(8) | BIT(7) | BIT(4))
> @@ -64,7 +69,13 @@
>  #define SCR_EL3_PIEN		BIT(45)
>  #define SCR_EL3_D128En		BIT(47)
>  
> +#define VTCR_EL2_MSA		BIT(31)
> +
>  #define HCR_EL2_RES1		BIT(1)
> +#define HCR_EL2_APK_NOTRAP	BIT(40)
> +#define HCR_EL2_API_NOTRAP	BIT(41)
> +#define HCR_EL2_FIEN_NOTRAP	BIT(47)
> +#define HCR_EL2_ENSCXT_NOTRAP	BIT(53)

These can drop the '_NOTRAP' suffix; that's only used on some of the
MDSCR definitions because it's an enumerated value for a field (which
the architecture doesn't name), whereas these should just be the
architected names as with their SCR_EL3 counterparts, i.e.

	#define HCR_EL2_APK		BIT(40)
	#define HCR_EL2_API		BIT(41)
	#define HCR_EL2_FIEN		BIT(47)
	#define HCR_EL2_EnSCXT		BIT(53)

That said, we don't set SCR_EL3.{FIEN,EnSCXT} today for A-class; and
Linux doesn't currently care -- is that something you're actually using
on R-class?

>  #define ID_AA64DFR0_EL1_PMSVER		BITS(35, 32)
>  #define ID_AA64DFR0_EL1_TRACEBUFFER	BITS(47, 44)
> @@ -81,6 +92,8 @@
>  #define ID_AA64ISAR2_EL1_GPA3		BITS(11, 8)
>  #define ID_AA64ISAR2_EL1_APA3		BITS(15, 12)
>  
> +#define ID_AA64MMFR0_EL1_MSA		BITS(51, 48)
> +#define ID_AA64MMFR0_EL1_MSA_frac	BITS(55, 52)
>  #define ID_AA64MMFR0_EL1_FGT		BITS(59, 56)
>  #define ID_AA64MMFR0_EL1_ECV		BITS(63, 60)
>  
> @@ -96,8 +109,12 @@
>  
>  #define ID_AA64PFR1_EL1_MTE		BITS(11, 8)
>  #define ID_AA64PFR1_EL1_SME		BITS(27, 24)
> +#define ID_AA64PFR1_EL1_CSV2_frac	BITS(35, 32)
>  #define ID_AA64PFR1_EL1_THE		BITS(51, 48)
> +
> +#define ID_AA64PFR0_EL1_RAS		BITS(31, 28)
>  #define ID_AA64PFR0_EL1_SVE		BITS(35, 32)
> +#define ID_AA64PFR0_EL1_CSV2		BITS(59, 56)
>  
>  #define ID_AA64SMFR0_EL1		s3_0_c0_c4_5
>  #define ID_AA64SMFR0_EL1_FA64		BIT(63)
> @@ -106,7 +123,9 @@
>   * Initial register values required for the boot-wrapper to run out-of-reset.
>   */
>  #define SCTLR_EL3_RESET		SCTLR_EL3_RES1
> +#define CPTR_EL2_RESET		CPTR_EL2_NO_E2H_RES1
>  #define SCTLR_EL2_RESET		SCTLR_EL2_RES1
> +#define VSTCR_EL2_RESET		VSTCR_EL2_RES1
>  #define SCTLR_EL1_RESET		SCTLR_EL1_RES1
>  #define HCR_EL2_RESET		HCR_EL2_RES1
>  
> @@ -123,6 +142,7 @@
>  #define SPSR_I		(1 << 7)	/* IRQ masked */
>  #define SPSR_F		(1 << 6)	/* FIQ masked */
>  #define SPSR_T		(1 << 5)	/* Thumb */
> +#define SPSR_EL1H	(5 << 0)	/* EL1 Handler mode */
>  #define SPSR_EL2H	(9 << 0)	/* EL2 Handler mode */
>  #define SPSR_HYP	(0x1a << 0)	/* M[3:0] = hyp, M[4] = AArch32 */
>  
> @@ -135,6 +155,9 @@
>  #define ICC_CTLR_EL3			S3_6_C12_C12_4
>  #define ICC_PMR_EL1			S3_0_C4_C6_0
>  
> +#define VSTCR_EL2			s3_4_c2_c6_2
> +#define VSCTLR_EL2			s3_4_c2_c0_0
> +
>  #define ZCR_EL3				s3_6_c1_c2_0
>  #define ZCR_EL3_LEN_MAX			0xf
>  
> @@ -162,6 +185,7 @@
>  #else
>  #define SCTLR_EL1_KERNEL	SCTLR_EL1_RES1
>  #define SPSR_KERNEL		(SPSR_A | SPSR_D | SPSR_I | SPSR_F | SPSR_EL2H)
> +#define SPSR_KERNEL_EL1		(SPSR_A | SPSR_D | SPSR_I | SPSR_F | SPSR_EL1H)
>  #endif
>  
>  #ifndef __ASSEMBLY__
> diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
> index 81fe33ab80ac..b0abde11ebd6 100644
> --- a/arch/aarch64/init.c
> +++ b/arch/aarch64/init.c
> @@ -54,6 +54,18 @@ static inline bool cpu_has_permission_indirection(void)
>  	return mrs(ID_AA64MMFR3_EL1) & mask;
>  }
>  
> +static bool cpu_has_scxt(void)
> +{
> +	unsigned long csv2 = mrs_field(ID_AA64PFR0_EL1, CSV2);
> +
> +	if (csv2 >= 2)
> +		return true;
> +	if (csv2 < 1)
> +		return false;
> +
> +	return mrs_field(ID_AA64PFR1_EL1, CSV2_frac) >= 2;
> +}
> +
>  static void cpu_init_el3(void)
>  {
>  	unsigned long scr = SCR_EL3_RES1 | SCR_EL3_NS | SCR_EL3_HCE;
> @@ -157,6 +169,43 @@ static void cpu_init_el3(void)
>  	}
>  }
>  
> +void cpu_init_el2_armv8r(void)
> +{
> +	unsigned long hcr = mrs(hcr_el2);
> +
> +	/* On Armv8-R ID_AA64MMFR0_EL1[51:48] == 0xF */
> +	if ((mrs_field(ID_AA64MMFR0_EL1, MSA) != 0xF) ||
> +	    (mrs_field(ID_AA64MMFR0_EL1, MSA_frac) < 2))
> +	{
> +		print_string("Initialization failed, Armv8-R not detected or VMSA not supported at EL1.\r\n");
> +		while(1) {}
> +		unreachable();
> +	}

I'd suggest we make this:

	if (mrs_field(ID_AA64MMFR0_EL1, MSA) != 0xF) {
		print_string("ID_AA64MMFR0_EL1.MSA != 0xF, not R-class\r\n");
		while (1)
			wfe();
	}

	if (mrs_field(ID_AA64MMFR0_EL1, MSA_frac) < 2) {
		print_string("ID_AA64MMFR0_EL1.MSA_frac < 2, EL1&0 VMSA not supported\r\n");
		while (1)
			wfe();
	}

> +
> +	msr(vpidr_el2, mrs(midr_el1));
> +	msr(vmpidr_el2, mrs(mpidr_el1));
> +
> +	msr(VSCTLR_EL2, 0);
> +	msr(VSTCR_EL2, VSTCR_EL2_RESET);
> +	msr(vtcr_el2, VTCR_EL2_MSA);
> +
> +	msr(cntvoff_el2, 0);
> +	msr(cptr_el2, CPTR_EL2_RESET);
> +	msr(mdcr_el2, 0);
> +
> +	if (cpu_has_scxt())
> +		hcr |= HCR_EL2_ENSCXT_NOTRAP;
> +
> +	if (mrs_field(ID_AA64PFR0_EL1, RAS) >= 2)
> +		hcr |= HCR_EL2_FIEN_NOTRAP;
> +
> +	if (cpu_has_pauth())
> +		hcr |= HCR_EL2_APK_NOTRAP | HCR_EL2_API_NOTRAP;
> +
> +	msr(hcr_el2, hcr);
> +	isb();
> +}
> +
>  #ifdef PSCI
>  extern char psci_vectors[];
>  
> @@ -181,6 +230,12 @@ void cpu_init_arch(unsigned int cpu)
>  		gic_secure_init();
>  	}
>  
> +#if defined(BOOTWRAPPER_64R)
> +	if (mrs(CurrentEL) == CURRENTEL_EL2) {
> +		cpu_init_el2_armv8r();
> +	}
> +#endif

We generally try to avoid ifdeffery within functions (e.g. see how we
use kernel_is_32_bit()). Can you add a bootwrapper_is_r_class() above,
e.g.

	static inline bool bootwrapper_is_r_class(void)
	{
	#ifdef BOOTWRAPPER_64R
		return true;
	#else
		return false;
	#endif
	}

That way here we can have:

	if (!bootwrapper_is_r_class() && mrs(CurrentEL) == CURRENTEL_EL3) {
		...
	}

	if (bootwrapper_is_r_class() && mrs(CurrentEL) == CURRENTEL_EL2) {
		...
	}

... and it'll be easier to restructure that in future if necessary
(e.g. to handle A-class starting at EL2).
> +
>  	cpu_init_psci_arch(cpu);
>  
>  	msr(CNTFRQ_EL0, COUNTER_FREQ);
> diff --git a/configure.ac b/configure.ac
> index bba42fa1dba8..88dbf9ba4f08 100644
> --- a/configure.ac
> +++ b/configure.ac
> @@ -9,14 +9,16 @@ AC_INIT([boot-wrapper], [v0.2])
>  
>  AM_INIT_AUTOMAKE([foreign])
>  
> -# Allow a user to pass --with-bw-arch={aarch64-a,aarch32-a}
> +# Allow a user to pass --with-bw-arch={aarch64-a,aarch32-a,aarch64-r}
>  AC_ARG_WITH([bw-arch],
> -	AS_HELP_STRING([--with-bw-arch], [aarch64-a selects AArch64-A (default), aarch32-a selects AArch32-A]),
> +	AS_HELP_STRING([--with-bw-arch], [aarch64-a selects AArch64-A (default), aarch32-a selects AArch32-A, aarch64-r selects AArch64-R]),

As on the first patch, please simplify this to remove the repetition.

Mark.