public inbox for linux-arm-kernel@lists.infradead.org
 help / color / mirror / Atom feed
* [PATCH v2 0/6] Add Armv8-R AArch64 support
@ 2024-07-16 14:29 Luca Fancellu
  2024-07-16 14:29 ` [PATCH v2 1/6] aarch64: Rename labels and prepare for lower EL booting Luca Fancellu
                   ` (5 more replies)
  0 siblings, 6 replies; 16+ messages in thread
From: Luca Fancellu @ 2024-07-16 14:29 UTC (permalink / raw)
  To: andre.przywara, mark.rutland; +Cc: linux-arm-kernel

Currently, we cannot boot Linux with the boot-wrapper on Armv8-R AArch64:
1. The Armv8-R AArch64 profile does not support EL3.
2. On Armv8-R AArch64, EL2 only supports a PMSA, which Linux does not
support, so it is necessary to drop into EL1 before entering the kernel.
3. The boot-wrapper has no EL2 booting code for Armv8-R AArch64 and no
configuration for dropping to EL1.

These patches enable the boot-wrapper to boot Linux on Armv8-R AArch64.

This is a rework and rebase of a series already posted upstream [1], apart
from patch 3, which addresses a small bug, and patches 5 and 6, which
introduce support for PSCI boot through the hvc conduit and for Xen boot on
Armv8-R AArch64.

[1] https://patchwork.kernel.org/project/linux-arm-kernel/cover/20210525062509.201464-1-jaxson.han@arm.com/

Changes from v1:
 - Dropped patch 4 regarding GIC changes, it's not needed anymore.

Luca Fancellu (6):
  aarch64: Rename labels and prepare for lower EL booting
  aarch64: Prepare for lower EL booting
  aarch64: Remove TSCXT bit set from SCTLR_EL2_RESET
  aarch64: Introduce EL2 boot code for Armv8-R AArch64
  aarch64: Support PSCI for Armv8-R AArch64
  aarch64: Start Xen on Armv8-R at EL2

 Makefile.am                    |  6 ++-
 arch/aarch64/boot.S            | 75 ++++++++++++++++++++++++++++++----
 arch/aarch64/include/asm/cpu.h | 15 ++++++-
 arch/aarch64/init.c            | 44 ++++++++++++++++++--
 configure.ac                   | 16 +++++---
 5 files changed, 138 insertions(+), 18 deletions(-)

-- 
2.34.1



^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v2 1/6] aarch64: Rename labels and prepare for lower EL booting
  2024-07-16 14:29 [PATCH v2 0/6] Add Armv8-R AArch64 support Luca Fancellu
@ 2024-07-16 14:29 ` Luca Fancellu
  2024-07-16 14:29 ` [PATCH v2 2/6] aarch64: Prepare " Luca Fancellu
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 16+ messages in thread
From: Luca Fancellu @ 2024-07-16 14:29 UTC (permalink / raw)
  To: andre.przywara, mark.rutland; +Cc: linux-arm-kernel

The current code can boot from a lower EL than EL3, but the flag
'flag_no_el3' has the meaning of "don't drop to a lower EL", so
rename the flag to 'flag_keep_el'.
This is preparation work for booting on Armv8-R AArch64, which has
no EL3.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
---
v2 changes:
 - Add Andre R-by
---
 arch/aarch64/boot.S | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index da5fa6548b65..7727475925c1 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -92,7 +92,7 @@ reset_no_el3:
 	bl	setup_stack
 
 	mov	w0, #1
-	ldr	x1, =flag_no_el3
+	ldr	x1, =flag_keep_el
 	str	w0, [x1]
 
 	bl	cpu_init_bootwrapper
@@ -124,7 +124,7 @@ ASM_FUNC(jump_kernel)
 	bl	find_logical_id
 	bl	setup_stack		// Reset stack pointer
 
-	ldr	w0, flag_no_el3
+	ldr	w0, flag_keep_el
 	cmp	w0, #0			// Prepare Z flag
 
 	mov	x0, x20
@@ -133,7 +133,7 @@ ASM_FUNC(jump_kernel)
 	mov	x3, x23
 
 	b.eq	1f
-	br	x19			// No EL3
+	br	x19			// Keep EL
 
 1:	mov	x4, #SPSR_KERNEL
 
@@ -151,5 +151,5 @@ ASM_FUNC(jump_kernel)
 
 	.data
 	.align 3
-flag_no_el3:
+flag_keep_el:
 	.long 0
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v2 2/6] aarch64: Prepare for lower EL booting
  2024-07-16 14:29 [PATCH v2 0/6] Add Armv8-R AArch64 support Luca Fancellu
  2024-07-16 14:29 ` [PATCH v2 1/6] aarch64: Rename labels and prepare for lower EL booting Luca Fancellu
@ 2024-07-16 14:29 ` Luca Fancellu
  2024-07-16 14:29 ` [PATCH v2 3/6] aarch64: Remove TSCXT bit set from SCTLR_EL2_RESET Luca Fancellu
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 16+ messages in thread
From: Luca Fancellu @ 2024-07-16 14:29 UTC (permalink / raw)
  To: andre.przywara, mark.rutland; +Cc: linux-arm-kernel

Store the value of the initial SPSR into a variable during
EL3 initialisation and load it from that variable before dropping
EL. This is preparation work to be able to boot from a
different exception level.
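
A minimal C model of the resulting flow (a hypothetical illustration, not code from the patch; the value in the test below is a placeholder, not necessarily the real SPSR_KERNEL):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch: the EL3 entry path stores the target SPSR into a variable,
 * and jump_kernel reads it back instead of using a build-time
 * immediate. A later entry path can then store a different SPSR
 * (e.g. one targeting EL1) without touching jump_kernel.
 */
static uint32_t spsr_to_elx;

/* corresponds to the new store added in reset_at_el3 */
static void set_target_spsr(uint32_t spsr)
{
	spsr_to_elx = spsr;
}

/* corresponds to "ldr w4, spsr_to_elx" in jump_kernel */
static uint32_t get_target_spsr(void)
{
	return spsr_to_elx;
}
```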

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
---
v2 changes:
 - add Andre R-by
---
 arch/aarch64/boot.S | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index 7727475925c1..211077af17c8 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -51,6 +51,10 @@ reset_at_el3:
 	b.eq	err_invalid_id
 	bl	setup_stack
 
+	mov	w0, #SPSR_KERNEL
+	ldr	x1, =spsr_to_elx
+	str	w0, [x1]
+
 	bl	cpu_init_bootwrapper
 
 	bl	cpu_init_el3
@@ -135,7 +139,7 @@ ASM_FUNC(jump_kernel)
 	b.eq	1f
 	br	x19			// Keep EL
 
-1:	mov	x4, #SPSR_KERNEL
+1:	ldr	w4, spsr_to_elx
 
 	/*
 	 * If bit 0 of the kernel address is set, we're entering in AArch32
@@ -153,3 +157,5 @@ ASM_FUNC(jump_kernel)
 	.align 3
 flag_keep_el:
 	.long 0
+spsr_to_elx:
+	.long 0
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v2 3/6] aarch64: Remove TSCXT bit set from SCTLR_EL2_RESET
  2024-07-16 14:29 [PATCH v2 0/6] Add Armv8-R AArch64 support Luca Fancellu
  2024-07-16 14:29 ` [PATCH v2 1/6] aarch64: Rename labels and prepare for lower EL booting Luca Fancellu
  2024-07-16 14:29 ` [PATCH v2 2/6] aarch64: Prepare " Luca Fancellu
@ 2024-07-16 14:29 ` Luca Fancellu
  2024-07-19 10:05   ` Mark Rutland
  2024-07-16 14:29 ` [PATCH v2 4/6] aarch64: Introduce EL2 boot code for Armv8-R AArch64 Luca Fancellu
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 16+ messages in thread
From: Luca Fancellu @ 2024-07-16 14:29 UTC (permalink / raw)
  To: andre.przywara, mark.rutland; +Cc: linux-arm-kernel

From the specification, SCTLR_EL2.TSCXT is RES1 only "When
FEAT_CSV2_2 is not implemented, FEAT_CSV2_1p2 is not
implemented, HCR_EL2.E2H == 1 and HCR_EL2.TGE == 1". Given
that the boot-wrapper has already set HCR_EL2.E2H to zero,
that condition can never hold, and per the specification the
bit is RES0.

Fix the macro by removing the bit.
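
The RES1 condition can be written out as a small truth function (illustrative sketch only, not part of the patch):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * SCTLR_EL2.TSCXT is RES1 only when all four conditions below hold
 * (per the quoted specification text). The boot-wrapper programs
 * HCR_EL2 with E2H == 0, so the conjunction is always false for it
 * and the bit is RES0.
 */
static bool tscxt_is_res1(bool feat_csv2_2, bool feat_csv2_1p2,
			  bool hcr_e2h, bool hcr_tge)
{
	return !feat_csv2_2 && !feat_csv2_1p2 && hcr_e2h && hcr_tge;
}
```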

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
---
v2 changes:
 - Add Andre R-by
---
 arch/aarch64/include/asm/cpu.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/aarch64/include/asm/cpu.h b/arch/aarch64/include/asm/cpu.h
index 124ef916ddfc..846b89f8405d 100644
--- a/arch/aarch64/include/asm/cpu.h
+++ b/arch/aarch64/include/asm/cpu.h
@@ -30,8 +30,8 @@
 	 BIT(11) | BIT(5) | BIT(4))
 
 #define SCTLR_EL2_RES1							\
-	(BIT(29) | BIT(28) | BIT(23) | BIT(22) | BIT(20) | BIT(18) |	\
-	 BIT(16) | BIT(11) | BIT(5) | BIT(4))
+	(BIT(29) | BIT(28) | BIT(23) | BIT(22) | BIT(18) | BIT(16) |	\
+	 BIT(11) | BIT(5) | BIT(4))
 
 #define SCTLR_EL1_RES1							\
 	(BIT(29) | BIT(28) | BIT(23) | BIT(22) | BIT(20) | BIT(11) |	\
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v2 4/6] aarch64: Introduce EL2 boot code for Armv8-R AArch64
  2024-07-16 14:29 [PATCH v2 0/6] Add Armv8-R AArch64 support Luca Fancellu
                   ` (2 preceding siblings ...)
  2024-07-16 14:29 ` [PATCH v2 3/6] aarch64: Remove TSCXT bit set from SCTLR_EL2_RESET Luca Fancellu
@ 2024-07-16 14:29 ` Luca Fancellu
  2024-07-29 15:01   ` Mark Rutland
  2024-07-16 14:29 ` [PATCH v2 5/6] aarch64: Support PSCI " Luca Fancellu
  2024-07-16 14:29 ` [PATCH v2 6/6] aarch64: Start Xen on Armv8-R at EL2 Luca Fancellu
  5 siblings, 1 reply; 16+ messages in thread
From: Luca Fancellu @ 2024-07-16 14:29 UTC (permalink / raw)
  To: andre.przywara, mark.rutland; +Cc: linux-arm-kernel

The Armv8-R AArch64 profile does not support the EL3 exception level.
It allows for an (optional) VMSAv8-64 MMU at EL1, which makes it
possible to run off-the-shelf Linux. However, EL2 only supports a
PMSA, which Linux does not support, so we need to drop into EL1
before entering the kernel.

We add a new err_invalid_arch symbol as a dead loop. If we detect
that the current Armv8-R AArch64 implementation only supports a PMSA,
meaning we cannot boot Linux, we jump to err_invalid_arch.

During Armv8-R AArch64 init, to make sure nothing unexpected traps
into EL2, we auto-detect and configure FIEN and EnSCXT in HCR_EL2.

The boot sequence is:
If CurrentEL == EL3, then goto EL3 initialisation and drop to lower EL
  before entering the kernel.
If CurrentEL == EL2 && id_aa64mmfr0_el1.MSA == 0xf (Armv8-R AArch64),
  if id_aa64mmfr0_el1.MSA_frac == 0x2,
    then goto Armv8-R AArch64 initialisation and drop to EL1 before
    entering the kernel.
  else, which means VMSA unsupported and cannot boot Linux,
    goto err_invalid_arch (dead loop).
Else, no initialisation and keep the current EL before entering the
  kernel.
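
The EL2 decision above can be sketched in C as follows (an illustrative model of the checks done in boot.S, with invented enum names; field offsets follow the commit message):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the MSA/MSA_frac decision made at EL2 entry, operating on
 * a raw ID_AA64MMFR0_EL1 value: MSA is bits [51:48], MSA_frac is bits
 * [55:52]. The ">=" mirrors the "blt" in the patch, relying on ID
 * fields never losing features as their value increases.
 */
enum boot_action {
	BOOT_ARMV8A_EL2,	/* not Armv8-R: proceed in Armv8-A EL2 */
	BOOT_ARMV8R_DROP_TO_EL1,/* Armv8-R with VMSAv8-64 at EL1&0 */
	BOOT_INVALID_ARCH	/* PMSA only: cannot boot Linux */
};

static enum boot_action classify(uint64_t id_aa64mmfr0_el1)
{
	unsigned int msa      = (id_aa64mmfr0_el1 >> 48) & 0xf;
	unsigned int msa_frac = (id_aa64mmfr0_el1 >> 52) & 0xf;

	if (msa != 0xf)
		return BOOT_ARMV8A_EL2;
	if (msa_frac >= 0x2)
		return BOOT_ARMV8R_DROP_TO_EL1;
	return BOOT_INVALID_ARCH;
}
```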

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
v2 changes:
 - when booting from aarch64 armv8-r EL2, jump to reset_no_el3 to
   avoid code duplication.
 - codestyle fixes
 - write into HCR_EL2.ENSCXT unconditionally inside cpu_init_armv8r_el2
---
 arch/aarch64/boot.S            | 57 ++++++++++++++++++++++++++++++++--
 arch/aarch64/include/asm/cpu.h | 11 +++++++
 arch/aarch64/init.c            | 29 +++++++++++++++++
 3 files changed, 95 insertions(+), 2 deletions(-)

diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index 211077af17c8..2a8234f7a17d 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -22,7 +22,8 @@
 	 *   EL2 must be implemented.
 	 *
 	 * - EL2 (Non-secure)
-	 *   Entering at EL2 is partially supported.
+	 *   Entering at EL2 is partially supported for Armv8-A.
+	 *   Entering at EL2 is supported for Armv8-R.
 	 *   PSCI is not supported when entered in this exception level.
 	 */
 ASM_FUNC(_start)
@@ -76,6 +77,39 @@ reset_at_el2:
 	msr	sctlr_el2, x0
 	isb
 
+	/* Detect Armv8-R AArch64 */
+	mrs	x1, id_aa64mmfr0_el1
+	/*
+	 * Check MSA, bits [51:48]:
+	 * 0xf means Armv8-R AArch64.
+	 * If not 0xf, proceed in Armv8-A EL2.
+	 */
+	ubfx	x0, x1, #48, #4			// MSA
+	cmp	x0, 0xf
+	bne	reset_no_el3
+
+	/*
+	 * Armv8-R AArch64 is found, check if Linux can be booted.
+	 * Check MSA_frac, bits [55:52]:
+	 * 0x2 means EL1&0 translation regime also supports VMSAv8-64.
+	 */
+	ubfx	x0, x1, #52, #4			// MSA_frac
+	cmp	x0, 0x2
+	/*
+	 * If not 0x2, no VMSA, so cannot boot Linux and dead loop.
+	 * Also, since the architecture guarantees that those CPUID
+	 * fields never lose features when the value in a field
+	 * increases, we use blt to cover it.
+	 */
+	blt	err_invalid_arch
+
+	/* Start Armv8-R Linux at EL1 */
+	mov	w0, #SPSR_KERNEL_EL1
+	ldr	x1, =spsr_to_elx
+	str	w0, [x1]
+
+	bl	cpu_init_armv8r_el2
+
 	b	reset_no_el3
 
 	/*
@@ -95,15 +129,22 @@ reset_no_el3:
 	b.eq	err_invalid_id
 	bl	setup_stack
 
+	ldr	w1, spsr_to_elx
+	and	w0, w1, 0xf
+	cmp	w0, #SPSR_EL1H
+	b.eq	drop_el
+
 	mov	w0, #1
 	ldr	x1, =flag_keep_el
 	str	w0, [x1]
 
+drop_el:
 	bl	cpu_init_bootwrapper
 
 	b	start_bootmethod
 
 err_invalid_id:
+err_invalid_arch:
 	b	.
 
 	/*
@@ -121,10 +162,14 @@ ASM_FUNC(jump_kernel)
 	ldr	x0, =SCTLR_EL1_KERNEL
 	msr	sctlr_el1, x0
 
+	mrs	x5, CurrentEL
+	cmp	x5, #CURRENTEL_EL2
+	b.eq	1f
+
 	ldr	x0, =SCTLR_EL2_KERNEL
 	msr	sctlr_el2, x0
 
-	cpuid	x0, x1
+1:	cpuid	x0, x1
 	bl	find_logical_id
 	bl	setup_stack		// Reset stack pointer
 
@@ -147,10 +192,18 @@ ASM_FUNC(jump_kernel)
 	 */
 	bfi	x4, x19, #5, #1
 
+	mrs	x5, CurrentEL
+	cmp	x5, #CURRENTEL_EL2
+	b.eq	1f
+
 	msr	elr_el3, x19
 	msr	spsr_el3, x4
 	eret
 
+1:	msr	elr_el2, x19
+	msr	spsr_el2, x4
+	eret
+
 	.ltorg
 
 	.data
diff --git a/arch/aarch64/include/asm/cpu.h b/arch/aarch64/include/asm/cpu.h
index 846b89f8405d..280f488f267d 100644
--- a/arch/aarch64/include/asm/cpu.h
+++ b/arch/aarch64/include/asm/cpu.h
@@ -58,7 +58,13 @@
 #define SCR_EL3_TCR2EN			BIT(43)
 #define SCR_EL3_PIEN			BIT(45)
 
+#define VTCR_EL2_MSA			BIT(31)
+
 #define HCR_EL2_RES1			BIT(1)
+#define HCR_EL2_APK_NOTRAP		BIT(40)
+#define HCR_EL2_API_NOTRAP		BIT(41)
+#define HCR_EL2_FIEN_NOTRAP		BIT(47)
+#define HCR_EL2_ENSCXT_NOTRAP		BIT(53)
 
 #define ID_AA64DFR0_EL1_PMSVER		BITS(35, 32)
 #define ID_AA64DFR0_EL1_TRACEBUFFER	BITS(47, 44)
@@ -88,7 +94,10 @@
 
 #define ID_AA64PFR1_EL1_MTE		BITS(11, 8)
 #define ID_AA64PFR1_EL1_SME		BITS(27, 24)
+#define ID_AA64PFR1_EL1_CSV2_frac	BITS(35, 32)
+#define ID_AA64PFR0_EL1_RAS		BITS(31, 28)
 #define ID_AA64PFR0_EL1_SVE		BITS(35, 32)
+#define ID_AA64PFR0_EL1_CSV2		BITS(59, 56)
 
 #define ID_AA64SMFR0_EL1		s3_0_c0_c4_5
 #define ID_AA64SMFR0_EL1_FA64		BIT(63)
@@ -114,6 +123,7 @@
 #define SPSR_I			(1 << 7)	/* IRQ masked */
 #define SPSR_F			(1 << 6)	/* FIQ masked */
 #define SPSR_T			(1 << 5)	/* Thumb */
+#define SPSR_EL1H		(5 << 0)	/* EL1 Handler mode */
 #define SPSR_EL2H		(9 << 0)	/* EL2 Handler mode */
 #define SPSR_HYP		(0x1a << 0)	/* M[3:0] = hyp, M[4] = AArch32 */
 
@@ -153,6 +163,7 @@
 #else
 #define SCTLR_EL1_KERNEL	SCTLR_EL1_RES1
 #define SPSR_KERNEL		(SPSR_A | SPSR_D | SPSR_I | SPSR_F | SPSR_EL2H)
+#define SPSR_KERNEL_EL1		(SPSR_A | SPSR_D | SPSR_I | SPSR_F | SPSR_EL1H)
 #endif
 
 #ifndef __ASSEMBLY__
diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
index 37cb45fde446..9402a01b9dca 100644
--- a/arch/aarch64/init.c
+++ b/arch/aarch64/init.c
@@ -145,6 +145,35 @@ void cpu_init_el3(void)
 	msr(CNTFRQ_EL0, COUNTER_FREQ);
 }
 
+void cpu_init_armv8r_el2(void)
+{
+	unsigned long hcr = mrs(hcr_el2);
+
+	msr(vpidr_el2, mrs(midr_el1));
+	msr(vmpidr_el2, mrs(mpidr_el1));
+
+	/* VTCR_MSA: VMSAv8-64 support */
+	msr(vtcr_el2, VTCR_EL2_MSA);
+
+	/*
+	 * HCR_EL2.ENSCXT is written unconditionally even if in some cases it's
+	 * RES0 (when FEAT_CSV2_2 or FEAT_CSV2_1p2 are not implemented) in order
+	 * to simplify the code, but it's safe in this case as the write would be
+	 * ignored when not implemented and would remove the trap otherwise.
+	 */
+	hcr |= HCR_EL2_ENSCXT_NOTRAP;
+
+	if (mrs_field(ID_AA64PFR0_EL1, RAS) >= 2)
+		hcr |= HCR_EL2_FIEN_NOTRAP;
+
+	if (cpu_has_pauth())
+		hcr |= HCR_EL2_APK_NOTRAP | HCR_EL2_API_NOTRAP;
+
+	msr(hcr_el2, hcr);
+	isb();
+	msr(CNTFRQ_EL0, COUNTER_FREQ);
+}
+
 #ifdef PSCI
 extern char psci_vectors[];
 
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v2 5/6] aarch64: Support PSCI for Armv8-R AArch64
  2024-07-16 14:29 [PATCH v2 0/6] Add Armv8-R AArch64 support Luca Fancellu
                   ` (3 preceding siblings ...)
  2024-07-16 14:29 ` [PATCH v2 4/6] aarch64: Introduce EL2 boot code for Armv8-R AArch64 Luca Fancellu
@ 2024-07-16 14:29 ` Luca Fancellu
  2024-07-29 16:09   ` Mark Rutland
  2024-07-16 14:29 ` [PATCH v2 6/6] aarch64: Start Xen on Armv8-R at EL2 Luca Fancellu
  5 siblings, 1 reply; 16+ messages in thread
From: Luca Fancellu @ 2024-07-16 14:29 UTC (permalink / raw)
  To: andre.przywara, mark.rutland; +Cc: linux-arm-kernel

Add support for PSCI when booting Linux on Armv8-R AArch64:
allow the autoconf parameter --enable-psci to take an argument
selecting the conduit to be used. It can be empty or 'smc' to
select the smc conduit, or 'hvc' to select the hvc conduit.

Depending on the selected conduit, the vector table will be
installed in the VBAR_EL3 or VBAR_EL2 register.
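
The resulting dispatch in cpu_init_psci_arch() can be modelled as a pure function (a sketch, not the actual boot-wrapper code; the enum names and the psci_hvc flag standing in for the PSCI_HVC build option are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>

enum el { EL1 = 1, EL2 = 2, EL3 = 3 };
enum vbar { VBAR_NONE, VBAR_EL2_REG, VBAR_EL3_REG };

/*
 * Which vector base register the PSCI vectors are installed in:
 * the smc conduit requires entry at EL3 (VBAR_EL3), the hvc conduit
 * requires entry at EL2 (VBAR_EL2); any other combination makes
 * cpu_init_psci_arch() return false.
 */
static enum vbar psci_vbar_for(enum el current_el, bool psci_hvc)
{
	if (!psci_hvc && current_el == EL3)
		return VBAR_EL3_REG;
	if (psci_hvc && current_el == EL2)
		return VBAR_EL2_REG;
	return VBAR_NONE;
}
```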

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
---
v2 changes:
 - Add Andre R-by
---
 Makefile.am         |  5 ++++-
 arch/aarch64/init.c | 15 ++++++++++++---
 configure.ac        | 16 +++++++++++-----
 3 files changed, 27 insertions(+), 9 deletions(-)

diff --git a/Makefile.am b/Makefile.am
index 6ebece25b230..34fbfb1f4ff8 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -49,11 +49,14 @@ endif
 
 if PSCI
 DEFINES		+= -DPSCI
+if PSCI_HVC
+DEFINES		+= -DPSCI_HVC
+endif
 ARCH_OBJ	+= psci.o
 COMMON_OBJ	+= psci.o
 PSCI_NODE	:= psci {				\
 			compatible = \"arm,psci\";	\
-			method = \"smc\";		\
+			method = \"$(PSCI_METHOD)\";	\
 			cpu_on = <$(PSCI_CPU_ON)>;	\
 			cpu_off = <$(PSCI_CPU_OFF)>;	\
 		   };
diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
index 9402a01b9dca..9b8bd8723dba 100644
--- a/arch/aarch64/init.c
+++ b/arch/aarch64/init.c
@@ -179,10 +179,19 @@ extern char psci_vectors[];
 
 bool cpu_init_psci_arch(void)
 {
-	if (mrs(CurrentEL) != CURRENTEL_EL3)
+	switch (mrs(CurrentEL)) {
+#if !defined(PSCI_HVC)
+	case CURRENTEL_EL3:
+		msr(VBAR_EL3, (unsigned long)psci_vectors);
+		break;
+#else
+	case CURRENTEL_EL2:
+		msr(VBAR_EL2, (unsigned long)psci_vectors);
+		break;
+#endif
+	default:
 		return false;
-
-	msr(VBAR_EL3, (unsigned long)psci_vectors);
+	}
 	isb();
 
 	return true;
diff --git a/configure.ac b/configure.ac
index 9e3b7226cd69..44459a4c849e 100644
--- a/configure.ac
+++ b/configure.ac
@@ -83,13 +83,19 @@ AS_IF([test "x$X_IMAGE" != "x"],
 # Allow a user to pass --enable-psci
 AC_ARG_ENABLE([psci],
 	AS_HELP_STRING([--disable-psci], [disable the psci boot method]),
-	[USE_PSCI=$enableval], [USE_PSCI="yes"])
-AM_CONDITIONAL([PSCI], [test "x$USE_PSCI" = "xyes"])
-AS_IF([test "x$USE_PSCI" = "xyes"], [], [USE_PSCI=no])
-
-AS_IF([test "x$USE_PSCI" != "xyes" -a "x$KERNEL_ES" = "x32"],
+	[case "${enableval}" in
+		yes|smc) USE_PSCI=smc ;;
+		hvc) USE_PSCI=hvc ;;
+		no) ;;
+		*) AC_MSG_ERROR([Bad value "${enableval}" for --enable-psci. Use "smc" or "hvc"]) ;;
+	esac])
+AM_CONDITIONAL([PSCI], [test "x$USE_PSCI" = "xyes" -o "x$USE_PSCI" = "xsmc" -o "x$USE_PSCI" = "xhvc"])
+AM_CONDITIONAL([PSCI_HVC], [test "x$USE_PSCI" = "xhvc"])
+
+AS_IF([test "x$USE_PSCI" = "xno" -a "x$KERNEL_ES" = "x32"],
 	[AC_MSG_ERROR([With an AArch32 kernel, boot method must be PSCI.])]
 )
+AC_SUBST([PSCI_METHOD], [$USE_PSCI])
 
 # Allow a user to pass --with-initrd
 AC_ARG_WITH([initrd],
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v2 6/6] aarch64: Start Xen on Armv8-R at EL2
  2024-07-16 14:29 [PATCH v2 0/6] Add Armv8-R AArch64 support Luca Fancellu
                   ` (4 preceding siblings ...)
  2024-07-16 14:29 ` [PATCH v2 5/6] aarch64: Support PSCI " Luca Fancellu
@ 2024-07-16 14:29 ` Luca Fancellu
  2024-07-23 12:27   ` Mark Rutland
  5 siblings, 1 reply; 16+ messages in thread
From: Luca Fancellu @ 2024-07-16 14:29 UTC (permalink / raw)
  To: andre.przywara, mark.rutland; +Cc: linux-arm-kernel

When the boot-wrapper is compiled with Xen support and is started
at EL2 on Armv8-R AArch64, keep the current EL and jump to the
Xen image.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
v2 changes:
 - Don't write 1 to flag_keep_el, since the logic now jumps to
   reset_no_el3 and writes it there.
 - Removed the check for the smc conduit when Xen is booted; changed
   the commit message.
---
 Makefile.am         | 1 +
 arch/aarch64/boot.S | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/Makefile.am b/Makefile.am
index 34fbfb1f4ff8..bafce34682c3 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -112,6 +112,7 @@ XEN_CHOSEN	:= xen,xen-bootargs = \"$(XEN_CMDLINE)\";		\
 			compatible = \"xen,linux-zimage\", \"xen,multiboot-module\"; \
 			reg = <0x0 $(DOM0_OFFSET) 0x0 $(KERNEL_SIZE)>;	\
 		   };
+DEFINES		+= -DXEN
 endif
 
 if INITRD
diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index 2a8234f7a17d..38ed4ed48985 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -88,6 +88,7 @@ reset_at_el2:
 	cmp	x0, 0xf
 	bne	reset_no_el3
 
+#if !defined(XEN)
 	/*
 	 * Armv8-R AArch64 is found, check if Linux can be booted.
 	 * Check MSA_frac, bits [55:52]:
@@ -107,6 +108,7 @@ reset_at_el2:
 	mov	w0, #SPSR_KERNEL_EL1
 	ldr	x1, =spsr_to_elx
 	str	w0, [x1]
+#endif
 
 	bl	cpu_init_armv8r_el2
 
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 3/6] aarch64: Remove TSCXT bit set from SCTLR_EL2_RESET
  2024-07-16 14:29 ` [PATCH v2 3/6] aarch64: Remove TSCXT bit set from SCTLR_EL2_RESET Luca Fancellu
@ 2024-07-19 10:05   ` Mark Rutland
  0 siblings, 0 replies; 16+ messages in thread
From: Mark Rutland @ 2024-07-19 10:05 UTC (permalink / raw)
  To: Luca Fancellu; +Cc: andre.przywara, linux-arm-kernel

On Tue, Jul 16, 2024 at 03:29:03PM +0100, Luca Fancellu wrote:
> From the specification, SCTLR_EL2.TSCXT is RES1 only "When
> FEAT_CSV2_2 is not implemented, FEAT_CSV2_1p2 is not
> implemented, HCR_EL2.E2H == 1 and HCR_EL2.TGE == 1". Given
> that the boot-wrapper has already set HCR_EL2.E2H to zero,
> that condition can never hold, and per the specification the
> bit is RES0.
> 
> Fix the macro by removing the bit.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Since this is a fix independent of the rest of the series, I've applied
it on its own and pushed it out.

I'll chew through the rest of the series shortly.

Mark.

> ---
> v2 changes:
>  - Add Andre R-by
> ---
>  arch/aarch64/include/asm/cpu.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/aarch64/include/asm/cpu.h b/arch/aarch64/include/asm/cpu.h
> index 124ef916ddfc..846b89f8405d 100644
> --- a/arch/aarch64/include/asm/cpu.h
> +++ b/arch/aarch64/include/asm/cpu.h
> @@ -30,8 +30,8 @@
>  	 BIT(11) | BIT(5) | BIT(4))
>  
>  #define SCTLR_EL2_RES1							\
> -	(BIT(29) | BIT(28) | BIT(23) | BIT(22) | BIT(20) | BIT(18) |	\
> -	 BIT(16) | BIT(11) | BIT(5) | BIT(4))
> +	(BIT(29) | BIT(28) | BIT(23) | BIT(22) | BIT(18) | BIT(16) |	\
> +	 BIT(11) | BIT(5) | BIT(4))
>  
>  #define SCTLR_EL1_RES1							\
>  	(BIT(29) | BIT(28) | BIT(23) | BIT(22) | BIT(20) | BIT(11) |	\
> -- 
> 2.34.1
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 6/6] aarch64: Start Xen on Armv8-R at EL2
  2024-07-16 14:29 ` [PATCH v2 6/6] aarch64: Start Xen on Armv8-R at EL2 Luca Fancellu
@ 2024-07-23 12:27   ` Mark Rutland
  2024-07-23 12:35     ` Luca Fancellu
  0 siblings, 1 reply; 16+ messages in thread
From: Mark Rutland @ 2024-07-23 12:27 UTC (permalink / raw)
  To: Luca Fancellu; +Cc: andre.przywara, linux-arm-kernel

On Tue, Jul 16, 2024 at 03:29:06PM +0100, Luca Fancellu wrote:
> When the boot-wrapper is compiled with Xen support and is started
> at EL2 on Armv8-R AArch64, keep the current EL and jump to the
> Xen image.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Just to check, this requires using spin-table, right?

I don't expect there to be interworking between the boot-wrapper and
Xen.

Mark.

> ---
> v2 changes:
>  - Don't write 1 to flag_keep_el, since the logic now jumps to
>    reset_no_el3 and writes it there.
>  - Removed the check for the smc conduit when Xen is booted; changed
>    the commit message.
> ---
>  Makefile.am         | 1 +
>  arch/aarch64/boot.S | 2 ++
>  2 files changed, 3 insertions(+)
> 
> diff --git a/Makefile.am b/Makefile.am
> index 34fbfb1f4ff8..bafce34682c3 100644
> --- a/Makefile.am
> +++ b/Makefile.am
> @@ -112,6 +112,7 @@ XEN_CHOSEN	:= xen,xen-bootargs = \"$(XEN_CMDLINE)\";		\
>  			compatible = \"xen,linux-zimage\", \"xen,multiboot-module\"; \
>  			reg = <0x0 $(DOM0_OFFSET) 0x0 $(KERNEL_SIZE)>;	\
>  		   };
> +DEFINES		+= -DXEN
>  endif
>  
>  if INITRD
> diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> index 2a8234f7a17d..38ed4ed48985 100644
> --- a/arch/aarch64/boot.S
> +++ b/arch/aarch64/boot.S
> @@ -88,6 +88,7 @@ reset_at_el2:
>  	cmp	x0, 0xf
>  	bne	reset_no_el3
>  
> +#if !defined(XEN)
>  	/*
>  	 * Armv8-R AArch64 is found, check if Linux can be booted.
>  	 * Check MSA_frac, bits [55:52]:
> @@ -107,6 +108,7 @@ reset_at_el2:
>  	mov	w0, #SPSR_KERNEL_EL1
>  	ldr	x1, =spsr_to_elx
>  	str	w0, [x1]
> +#endif
>  
>  	bl	cpu_init_armv8r_el2
>  
> -- 
> 2.34.1
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 6/6] aarch64: Start Xen on Armv8-R at EL2
  2024-07-23 12:27   ` Mark Rutland
@ 2024-07-23 12:35     ` Luca Fancellu
  0 siblings, 0 replies; 16+ messages in thread
From: Luca Fancellu @ 2024-07-23 12:35 UTC (permalink / raw)
  To: Mark Rutland; +Cc: Andre Przywara, linux-arm-kernel@lists.infradead.org

Hi Mark,

> On 23 Jul 2024, at 13:27, Mark Rutland <mark.rutland@arm.com> wrote:
> 
> On Tue, Jul 16, 2024 at 03:29:06PM +0100, Luca Fancellu wrote:
>> When the boot-wrapper is compiled with Xen support and is started
>> at EL2 on Armv8-R AArch64, keep the current EL and jump to the
>> Xen image.
>> 
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> 
> Just to check, this requires using spin-table, right?
> 
> I don't expect there to be interworking between the boot-wrapper and
> Xen.

Yes, Xen on Armv8-R AArch64 requires spin-table.

Cheers,
Luca



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 4/6] aarch64: Introduce EL2 boot code for Armv8-R AArch64
  2024-07-16 14:29 ` [PATCH v2 4/6] aarch64: Introduce EL2 boot code for Armv8-R AArch64 Luca Fancellu
@ 2024-07-29 15:01   ` Mark Rutland
  2024-07-29 15:27     ` Luca Fancellu
  0 siblings, 1 reply; 16+ messages in thread
From: Mark Rutland @ 2024-07-29 15:01 UTC (permalink / raw)
  To: Luca Fancellu; +Cc: andre.przywara, linux-arm-kernel

Hi Luca,

On Tue, Jul 16, 2024 at 03:29:04PM +0100, Luca Fancellu wrote:
> The Armv8-R AArch64 profile does not support the EL3 exception level.
> It allows for an (optional) VMSAv8-64 MMU at EL1, which makes it
> possible to run off-the-shelf Linux. However, EL2 only supports a
> PMSA, which Linux does not support, so we need to drop into EL1
> before entering the kernel.
> 
> We add a new err_invalid_arch symbol as a dead loop. If we detect
> that the current Armv8-R AArch64 implementation only supports a PMSA,
> meaning we cannot boot Linux, we jump to err_invalid_arch.
> 
> During Armv8-R AArch64 init, to make sure nothing unexpected traps
> into EL2, we auto-detect and configure FIEN and EnSCXT in HCR_EL2.
> 
> The boot sequence is:
> If CurrentEL == EL3, then goto EL3 initialisation and drop to lower EL
>   before entering the kernel.
> If CurrentEL == EL2 && id_aa64mmfr0_el1.MSA == 0xf (Armv8-R AArch64),
>   if id_aa64mmfr0_el1.MSA_frac == 0x2,
>     then goto Armv8-R AArch64 initialisation and drop to EL1 before
>     entering the kernel.
>   else, which means VMSA unsupported and cannot boot Linux,
>     goto err_invalid_arch (dead loop).
> Else, no initialisation and keep the current EL before entering the
>   kernel.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> v2 changes:
>  - when booting from aarch64 armv8-r EL2, jump to reset_no_el3 to
>    avoid code duplication.
>  - codestyle fixes
>  - write into HCR_EL2.ENSCXT unconditionally inside cpu_init_armv8r_el2
> ---
>  arch/aarch64/boot.S            | 57 ++++++++++++++++++++++++++++++++--
>  arch/aarch64/include/asm/cpu.h | 11 +++++++
>  arch/aarch64/init.c            | 29 +++++++++++++++++
>  3 files changed, 95 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> index 211077af17c8..2a8234f7a17d 100644
> --- a/arch/aarch64/boot.S
> +++ b/arch/aarch64/boot.S
> @@ -22,7 +22,8 @@
>  	 *   EL2 must be implemented.
>  	 *
>  	 * - EL2 (Non-secure)
> -	 *   Entering at EL2 is partially supported.
> +	 *   Entering at EL2 is partially supported for Armv8-A.
> +	 *   Entering at EL2 is supported for Armv8-R.

Nit: IIUC ARMv8-R is Secure-only, so this isn't quite right.

>  	 *   PSCI is not supported when entered in this exception level.
>  	 */
>  ASM_FUNC(_start)
> @@ -76,6 +77,39 @@ reset_at_el2:
>  	msr	sctlr_el2, x0
>  	isb
>  
> +	/* Detect Armv8-R AArch64 */
> +	mrs	x1, id_aa64mmfr0_el1
> +	/*
> +	 * Check MSA, bits [51:48]:
> +	 * 0xf means Armv8-R AArch64.
> +	 * If not 0xf, proceed in Armv8-A EL2.
> +	 */
> +	ubfx	x0, x1, #48, #4			// MSA
> +	cmp	x0, 0xf
> +	bne	reset_no_el3
> +
> +	/*
> +	 * Armv8-R AArch64 is found, check if Linux can be booted.
> +	 * Check MSA_frac, bits [55:52]:
> +	 * 0x2 means EL1&0 translation regime also supports VMSAv8-64.
> +	 */
> +	ubfx	x0, x1, #52, #4			// MSA_frac
> +	cmp	x0, 0x2
> +	/*
> +	 * If not 0x2, no VMSA, so cannot boot Linux and dead loop.
> +	 * Also, since the architecture guarantees that those CPUID
> +	 * fields never lose features when the value in a field
> +	 * increases, we use blt to cover it.
> +	 */
> +	blt	err_invalid_arch
> +
> +	/* Start Armv8-R Linux at EL1 */
> +	mov	w0, #SPSR_KERNEL_EL1
> +	ldr	x1, =spsr_to_elx
> +	str	w0, [x1]

I'd prefer if we could do this in C code. I'll post a series shortly
where we'll have consistent cpu_init_arch() hook that we can do this
under.

> +
> +	bl	cpu_init_armv8r_el2
> +
>  	b	reset_no_el3
>  
>  	/*
> @@ -95,15 +129,22 @@ reset_no_el3:
>  	b.eq	err_invalid_id
>  	bl	setup_stack
>  
> +	ldr	w1, spsr_to_elx
> +	and	w0, w1, 0xf
> +	cmp	w0, #SPSR_EL1H
> +	b.eq	drop_el
> +
>  	mov	w0, #1
>  	ldr	x1, =flag_keep_el
>  	str	w0, [x1]
>  
> +drop_el:
>  	bl	cpu_init_bootwrapper
>  
>  	b	start_bootmethod
>  
>  err_invalid_id:
> +err_invalid_arch:
>  	b	.
>  
>  	/*
> @@ -121,10 +162,14 @@ ASM_FUNC(jump_kernel)
>  	ldr	x0, =SCTLR_EL1_KERNEL
>  	msr	sctlr_el1, x0
>  
> +	mrs	x5, CurrentEL
> +	cmp	x5, #CURRENTEL_EL2
> +	b.eq	1f
> +
>  	ldr	x0, =SCTLR_EL2_KERNEL
>  	msr	sctlr_el2, x0
>  
> -	cpuid	x0, x1
> +1:	cpuid	x0, x1
>  	bl	find_logical_id
>  	bl	setup_stack		// Reset stack pointer
>  
> @@ -147,10 +192,18 @@ ASM_FUNC(jump_kernel)
>  	 */
>  	bfi	x4, x19, #5, #1
>  
> +	mrs	x5, CurrentEL
> +	cmp	x5, #CURRENTEL_EL2
> +	b.eq	1f
> +
>  	msr	elr_el3, x19
>  	msr	spsr_el3, x4
>  	eret
>  
> +1:	msr	elr_el2, x19
> +	msr	spsr_el2, x4
> +	eret
> +
>  	.ltorg
>  
>  	.data
> diff --git a/arch/aarch64/include/asm/cpu.h b/arch/aarch64/include/asm/cpu.h
> index 846b89f8405d..280f488f267d 100644
> --- a/arch/aarch64/include/asm/cpu.h
> +++ b/arch/aarch64/include/asm/cpu.h
> @@ -58,7 +58,13 @@
>  #define SCR_EL3_TCR2EN			BIT(43)
>  #define SCR_EL3_PIEN			BIT(45)
>  
> +#define VTCR_EL2_MSA			BIT(31)
> +
>  #define HCR_EL2_RES1			BIT(1)
> +#define HCR_EL2_APK_NOTRAP		BIT(40)
> +#define HCR_EL2_API_NOTRAP		BIT(41)
> +#define HCR_EL2_FIEN_NOTRAP		BIT(47)
> +#define HCR_EL2_ENSCXT_NOTRAP		BIT(53)
>  
>  #define ID_AA64DFR0_EL1_PMSVER		BITS(35, 32)
>  #define ID_AA64DFR0_EL1_TRACEBUFFER	BITS(47, 44)
> @@ -88,7 +94,10 @@
>  
>  #define ID_AA64PFR1_EL1_MTE		BITS(11, 8)
>  #define ID_AA64PFR1_EL1_SME		BITS(27, 24)
> +#define ID_AA64PFR1_EL1_CSV2_frac	BITS(35, 32)
> +#define ID_AA64PFR0_EL1_RAS		BITS(31, 28)
>  #define ID_AA64PFR0_EL1_SVE		BITS(35, 32)
> +#define ID_AA64PFR0_EL1_CSV2		BITS(59, 56)
>  
>  #define ID_AA64SMFR0_EL1		s3_0_c0_c4_5
>  #define ID_AA64SMFR0_EL1_FA64		BIT(63)
> @@ -114,6 +123,7 @@
>  #define SPSR_I			(1 << 7)	/* IRQ masked */
>  #define SPSR_F			(1 << 6)	/* FIQ masked */
>  #define SPSR_T			(1 << 5)	/* Thumb */
> +#define SPSR_EL1H		(5 << 0)	/* EL1 Handler mode */
>  #define SPSR_EL2H		(9 << 0)	/* EL2 Handler mode */
>  #define SPSR_HYP		(0x1a << 0)	/* M[3:0] = hyp, M[4] = AArch32 */
>  
> @@ -153,6 +163,7 @@
>  #else
>  #define SCTLR_EL1_KERNEL	SCTLR_EL1_RES1
>  #define SPSR_KERNEL		(SPSR_A | SPSR_D | SPSR_I | SPSR_F | SPSR_EL2H)
> +#define SPSR_KERNEL_EL1		(SPSR_A | SPSR_D | SPSR_I | SPSR_F | SPSR_EL1H)
>  #endif
>  
>  #ifndef __ASSEMBLY__
> diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
> index 37cb45fde446..9402a01b9dca 100644
> --- a/arch/aarch64/init.c
> +++ b/arch/aarch64/init.c
> @@ -145,6 +145,35 @@ void cpu_init_el3(void)
>  	msr(CNTFRQ_EL0, COUNTER_FREQ);
>  }
>  
> +void cpu_init_armv8r_el2(void)
> +{
> +	unsigned long hcr = mrs(hcr_el2);
> +
> +	msr(vpidr_el2, mrs(midr_el1));
> +	msr(vmpidr_el2, mrs(mpidr_el1));
> +
> +	/* VTCR_MSA: VMSAv8-64 support */
> +	msr(vtcr_el2, VTCR_EL2_MSA);

I suspect we also need to initialize VSTCR_EL2?

... and don't we also need to initialize VSCTLR_EL2 to give all CPUs the
same VMID? Otherwise barriers won't work at EL1 and below...

> +
> +	/*
> +	 * HCR_EL2.ENSCXT is written unconditionally even if in some cases it's
> +	 * RES0 (when FEAT_CSV2_2 or FEAT_CSV2_1p2 are not implemented) in order
> +	 * to simplify the code, but it's safe in this case as the write would be
> +	 * ignored when not implemented and would remove the trap otherwise.
> +	 */
> +	hcr |= HCR_EL2_ENSCXT_NOTRAP;

I'd prefer if we can do the necessary checks. IIUC we can do this with a
helper, e.g.

	static bool cpu_has_scxt(void)
	{
		unsigned long csv2 = mrs_field(ID_AA64PFR0_EL1, CSV2);
		if (csv2 >= 2)
			return true;
		if (csv2 < 1)
			return false;
		return mrs_field(ID_AA64PFR1_EL1, CSV2_frac) >= 2;
	}

... then here we can have:

	if (cpu_has_scxt())
		 hcr |= HCR_EL2_ENSCXT_NOTRAP;

> +
> +	if (mrs_field(ID_AA64PFR0_EL1, RAS) >= 2)
> +		hcr |= HCR_EL2_FIEN_NOTRAP;
> +
> +	if (cpu_has_pauth())
> +		hcr |= HCR_EL2_APK_NOTRAP | HCR_EL2_API_NOTRAP;
> +
> +	msr(hcr_el2, hcr);
> +	isb();
> +	msr(CNTFRQ_EL0, COUNTER_FREQ);
> +}

I believe we also need to initialize:
	
* CNTVOFF_EL2 (for timers to work correctly)
* CNTHCTL_EL2 (for timers to not trap)
* CPTR_EL2 (for FP to not trap)
* MDCR_EL2 (for PMU & debug to not trap)

Mark.

> +
>  #ifdef PSCI
>  extern char psci_vectors[];
>  
> -- 
> 2.34.1
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 4/6] aarch64: Introduce EL2 boot code for Armv8-R AArch64
  2024-07-29 15:01   ` Mark Rutland
@ 2024-07-29 15:27     ` Luca Fancellu
  2024-07-29 16:14       ` Mark Rutland
  0 siblings, 1 reply; 16+ messages in thread
From: Luca Fancellu @ 2024-07-29 15:27 UTC (permalink / raw)
  To: Mark Rutland; +Cc: Andre Przywara, linux-arm-kernel@lists.infradead.org

Hi Mark,

>> * - EL2 (Non-secure)
>> - *   Entering at EL2 is partially supported.
>> + *   Entering at EL2 is partially supported for Armv8-A.
>> + *   Entering at EL2 is supported for Armv8-R.
> 
> Nit: IIUC ARMv8-R is Secure-only, so this isn't quite right.

Ok I’ll drop this change

> 
>> *   PSCI is not supported when entered in this exception level.
>> */
>> ASM_FUNC(_start)
>> @@ -76,6 +77,39 @@ reset_at_el2:
>> msr sctlr_el2, x0
>> isb
>> 
>> + /* Detect Armv8-R AArch64 */
>> + mrs x1, id_aa64mmfr0_el1
>> + /*
>> + * Check MSA, bits [51:48]:
>> + * 0xf means Armv8-R AArch64.
>> + * If not 0xf, proceed in Armv8-A EL2.
>> + */
>> + ubfx x0, x1, #48, #4 // MSA
>> + cmp x0, 0xf
>> + bne reset_no_el3
>> +
>> + /*
>> + * Armv8-R AArch64 is found, check if Linux can be booted.
>> + * Check MSA_frac, bits [55:52]:
>> + * 0x2 means EL1&0 translation regime also supports VMSAv8-64.
>> + */
>> + ubfx x0, x1, #52, #4 // MSA_frac
>> + cmp x0, 0x2
>> + /*
>> + * If not 0x2, no VMSA, so cannot boot Linux and dead loop.
>> + * Also, since the architecture guarantees that those CPUID
>> + * fields never lose features when the value in a field
>> + * increases, we use blt to cover it.
>> + */
>> + blt err_invalid_arch
>> +
>> + /* Start Armv8-R Linux at EL1 */
>> + mov w0, #SPSR_KERNEL_EL1
>> + ldr x1, =spsr_to_elx
>> + str w0, [x1]
> 
> I'd prefer if we could do this in C code. I'll post a series shortly
> where we'll have consistent cpu_init_arch() hook that we can do this
> under.

Ok, are you suggesting to base this series on the one you’ll push?


>> 
>> 
>> +void cpu_init_armv8r_el2(void)
>> +{
>> + unsigned long hcr = mrs(hcr_el2);
>> +
>> + msr(vpidr_el2, mrs(midr_el1));
>> + msr(vmpidr_el2, mrs(mpidr_el1));
>> +
>> + /* VTCR_MSA: VMSAv8-64 support */
>> + msr(vtcr_el2, VTCR_EL2_MSA);
> 
> I suspect we also need to initialize VSTCR_EL2?

Ok, I’ve booted Linux and it seems to work fine; is this register considered at all when HCR_EL2.VM is off?
Anyway, I’ll initialise it; I noticed it’s not done in TF-A.

> 
> ... and don't we also need to initialize VSCTLR_EL2 to give all CPUs the
> same VMID? Otherwise barriers won't work at EL1 and below...

I can see TF-A is initialising it so I’ll do the same

> 
>> +
>> + /*
>> + * HCR_EL2.ENSCXT is written unconditionally even if in some cases it's
>> + * RES0 (when FEAT_CSV2_2 or FEAT_CSV2_1p2 are not implemented) in order
>> + * to simplify the code, but it's safe in this case as the write would be
>> + * ignored when not implemented and would remove the trap otherwise.
>> + */
>> + hcr |= HCR_EL2_ENSCXT_NOTRAP;
> 
> I'd prefer if we can do the necessary checks. IIUC we can do this with a
> helper, e.g.
> 
> static bool cpu_has_scxt(void)
> {
> 	unsigned long csv2 = mrs_field(ID_AA64PFR0_EL1, CSV2);
> 	if (csv2 >= 2)
> 		return true;
> 	if (csv2 < 1)
> 		return false;
> 	return mrs_field(ID_AA64PFR1_EL1, CSV2_frac) >= 2;
> }
> 
> ... then here we can have:
> 
> 	if (cpu_has_scxt())
> 		hcr |= HCR_EL2_ENSCXT_NOTRAP;

Ok I’ll do

> 
>> +
>> + if (mrs_field(ID_AA64PFR0_EL1, RAS) >= 2)
>> + hcr |= HCR_EL2_FIEN_NOTRAP;
>> +
>> + if (cpu_has_pauth())
>> + hcr |= HCR_EL2_APK_NOTRAP | HCR_EL2_API_NOTRAP;
>> +
>> + msr(hcr_el2, hcr);
>> + isb();
>> + msr(CNTFRQ_EL0, COUNTER_FREQ);
>> +}
> 
> I believe we also need to initialize:
> 
> * CNTVOFF_EL2 (for timers to work correctly)
> * CNTHCTL_EL2 (for timers to not trap)
> * CPTR_EL2 (for FP to not trap)
> * MDCR_EL2 (for PMU & debug to not trap)

Sure, I’ll reset them like in TF-A.

Thanks for your review!

Cheers,
Luca





* Re: [PATCH v2 5/6] aarch64: Support PSCI for Armv8-R AArch64
  2024-07-16 14:29 ` [PATCH v2 5/6] aarch64: Support PSCI " Luca Fancellu
@ 2024-07-29 16:09   ` Mark Rutland
  2024-07-30 11:31     ` Luca Fancellu
  0 siblings, 1 reply; 16+ messages in thread
From: Mark Rutland @ 2024-07-29 16:09 UTC (permalink / raw)
  To: Luca Fancellu; +Cc: andre.przywara, linux-arm-kernel

On Tue, Jul 16, 2024 at 03:29:05PM +0100, Luca Fancellu wrote:
> Add support for PSCI when booting Linux on Armv8-R AArch64,
> allow the autoconf parameter --enable-psci to take an argument
> which is the conduit to be used, it can be empty or 'smc' to
> select the smc conduit, it can be 'hvc' for the hvc conduit.
> 
> Depending on the selected conduit, the vector table will be
> installed on the VBAR_EL3 or VBAR_EL2 register.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Reviewed-by: Andre Przywara <andre.przywara@arm.com>
> ---
> v2 changes:
>  - Add Andre R-by
> ---
>  Makefile.am         |  5 ++++-
>  arch/aarch64/init.c | 15 ++++++++++++---
>  configure.ac        | 16 +++++++++++-----
>  3 files changed, 27 insertions(+), 9 deletions(-)
> 
> diff --git a/Makefile.am b/Makefile.am
> index 6ebece25b230..34fbfb1f4ff8 100644
> --- a/Makefile.am
> +++ b/Makefile.am
> @@ -49,11 +49,14 @@ endif
>  
>  if PSCI
>  DEFINES		+= -DPSCI
> +if PSCI_HVC
> +DEFINES		+= -DPSCI_HVC
> +endif
>  ARCH_OBJ	+= psci.o
>  COMMON_OBJ	+= psci.o
>  PSCI_NODE	:= psci {				\
>  			compatible = \"arm,psci\";	\
> -			method = \"smc\";		\
> +			method = \"$(PSCI_METHOD)\";	\
>  			cpu_on = <$(PSCI_CPU_ON)>;	\
>  			cpu_off = <$(PSCI_CPU_OFF)>;	\
>  		   };
> diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
> index 9402a01b9dca..9b8bd8723dba 100644
> --- a/arch/aarch64/init.c
> +++ b/arch/aarch64/init.c
> @@ -179,10 +179,19 @@ extern char psci_vectors[];
>  
>  bool cpu_init_psci_arch(void)
>  {
> -	if (mrs(CurrentEL) != CURRENTEL_EL3)
> +	switch (mrs(CurrentEL)) {
> +#if !defined(PSCI_HVC)
> +	case CURRENTEL_EL3:
> +		msr(VBAR_EL3, (unsigned long)psci_vectors);
> +		break;
> +#else
> +	case CURRENTEL_EL2:
> +		msr(VBAR_EL2, (unsigned long)psci_vectors);
> +		break;
> +#endif
> +	default:
>  		return false;
> -
> -	msr(VBAR_EL3, (unsigned long)psci_vectors);
> +	}
>  	isb();
>  
>  	return true;
> diff --git a/configure.ac b/configure.ac
> index 9e3b7226cd69..44459a4c849e 100644
> --- a/configure.ac
> +++ b/configure.ac
> @@ -83,13 +83,19 @@ AS_IF([test "x$X_IMAGE" != "x"],
>  # Allow a user to pass --enable-psci
>  AC_ARG_ENABLE([psci],
>  	AS_HELP_STRING([--disable-psci], [disable the psci boot method]),
> -	[USE_PSCI=$enableval], [USE_PSCI="yes"])
> -AM_CONDITIONAL([PSCI], [test "x$USE_PSCI" = "xyes"])
> -AS_IF([test "x$USE_PSCI" = "xyes"], [], [USE_PSCI=no])
> -
> -AS_IF([test "x$USE_PSCI" != "xyes" -a "x$KERNEL_ES" = "x32"],
> +	[case "${enableval}" in
> +		yes|smc) USE_PSCI=smc ;;
> +		hvc) USE_PSCI=hvc ;;
> +		no) ;;
> +		*) AC_MSG_ERROR([Bad value "${enableval}" for --enable-psci. Use "smc" or "hvc"]) ;;
> +	esac])
> +AM_CONDITIONAL([PSCI], [test "x$USE_PSCI" = "xyes" -o "x$USE_PSCI" = "xsmc" -o "x$USE_PSCI" = "xhvc"])
> +AM_CONDITIONAL([PSCI_HVC], [test "x$USE_PSCI" = "xhvc"])
> +
> +AS_IF([test "x$USE_PSCI" = "xno" -a "x$KERNEL_ES" = "x32"],
>  	[AC_MSG_ERROR([With an AArch32 kernel, boot method must be PSCI.])]
>  )
> +AC_SUBST([PSCI_METHOD], [$USE_PSCI])

As of this patch, if I build with --enable-psci=hvc, and boot on
ARMv8-A, it'll fail at boot time, since the boot-wrapper won't fix up
the SPSR (and will enter the kernel at EL2), and HVC will go to that
kernel.

I think that we either need to add support for dropping to EL1 in
ARMv8-A, or we should have an option to build for ARMv8-R specifically,
where we can automatically fix up the PSCI conduit.
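For reference, the value mapping that the configure.ac hunk implements can
be sketched as plain shell outside autoconf (illustrative only; the real
logic lives in the AC_ARG_ENABLE case statement):

```shell
#!/bin/sh
# Map an --enable-psci value to the PSCI conduit, as the configure.ac
# case does: "yes" and "smc" select smc, "hvc" selects hvc, "no" leaves
# PSCI disabled (no output here), anything else is an error.
psci_method() {
	case "$1" in
		yes|smc) echo smc ;;
		hvc)     echo hvc ;;
		no)      : ;;
		*)       echo "bad value '$1' for --enable-psci" >&2; return 1 ;;
	esac
}

psci_method yes   # -> smc
psci_method hvc   # -> hvc
```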

Mark.



* Re: [PATCH v2 4/6] aarch64: Introduce EL2 boot code for Armv8-R AArch64
  2024-07-29 15:27     ` Luca Fancellu
@ 2024-07-29 16:14       ` Mark Rutland
  0 siblings, 0 replies; 16+ messages in thread
From: Mark Rutland @ 2024-07-29 16:14 UTC (permalink / raw)
  To: Luca Fancellu; +Cc: Andre Przywara, linux-arm-kernel@lists.infradead.org

On Mon, Jul 29, 2024 at 04:27:37PM +0100, Luca Fancellu wrote:
> Hi Mark,
> 
> >> * - EL2 (Non-secure)
> >> - *   Entering at EL2 is partially supported.
> >> + *   Entering at EL2 is partially supported for Armv8-A.
> >> + *   Entering at EL2 is supported for Armv8-R.
> >
> > Nit: IIUC ARMv8-R is Secure-only, so this isn't quite right.
> 
> Ok I’ll drop this change
> 
> >
> >> *   PSCI is not supported when entered in this exception level.
> >> */
> >> ASM_FUNC(_start)
> >> @@ -76,6 +77,39 @@ reset_at_el2:
> >> msr sctlr_el2, x0
> >> isb
> >>
> >> + /* Detect Armv8-R AArch64 */
> >> + mrs x1, id_aa64mmfr0_el1
> >> + /*
> >> + * Check MSA, bits [51:48]:
> >> + * 0xf means Armv8-R AArch64.
> >> + * If not 0xf, proceed in Armv8-A EL2.
> >> + */
> >> + ubfx x0, x1, #48, #4 // MSA
> >> + cmp x0, 0xf
> >> + bne reset_no_el3
> >> +
> >> + /*
> >> + * Armv8-R AArch64 is found, check if Linux can be booted.
> >> + * Check MSA_frac, bits [55:52]:
> >> + * 0x2 means EL1&0 translation regime also supports VMSAv8-64.
> >> + */
> >> + ubfx x0, x1, #52, #4 // MSA_frac
> >> + cmp x0, 0x2
> >> + /*
> >> + * If not 0x2, no VMSA, so cannot boot Linux and dead loop.
> >> + * Also, since the architecture guarantees that those CPUID
> >> + * fields never lose features when the value in a field
> >> + * increases, we use blt to cover it.
> >> + */
> >> + blt err_invalid_arch
> >> +
> >> + /* Start Armv8-R Linux at EL1 */
> >> + mov w0, #SPSR_KERNEL_EL1
> >> + ldr x1, =spsr_to_elx
> >> + str w0, [x1]
> >
> > I'd prefer if we could do this in C code. I'll post a series shortly
> > where we'll have consistent cpu_init_arch() hook that we can do this
> > under.
> 
> Ok, are you suggesting to base this series on the one you’ll push?

Sorry; yes -- I'll send that out shortly, and I'd like to take that as a
base.

> >> +void cpu_init_armv8r_el2(void)
> >> +{
> >> + unsigned long hcr = mrs(hcr_el2);
> >> +
> >> + msr(vpidr_el2, mrs(midr_el1));
> >> + msr(vmpidr_el2, mrs(mpidr_el1));
> >> +
> >> + /* VTCR_MSA: VMSAv8-64 support */
> >> + msr(vtcr_el2, VTCR_EL2_MSA);
> >
> > I suspect we also need to initialize VSTCR_EL2?
> 
> Ok, I’ve booted Linux and it seems to work fine; is this register considered at all when HCR_EL2.VM is off?
> Anyway, I’ll initialise it; I noticed it’s not done in TF-A.

I don't know; the ARMv8-R manual (ARM DDI 0600B.a) says in E1.2.3 DSB:

| The ordering requirements of Data Synchronization Barrier instruction is as
| follows:
| * EL1 and EL0 memory accesses are ordered only with respect to memory accesses
|   using the same VMID.
| * EL2 memory accesses are ordered only with respect to other EL2 memory
|   accesses.

... which seems to apply regardless of HCR_EL2.VM?

It's probably worth clarifying with the relevant architects.

> > ... and don't we also need to initialize VSCTLR_EL2 to give all CPUs the
> > same VMID? Otherwise barriers won't work at EL1 and below...
> 
> I can see TF-A is initialising it so I’ll do the same

Great; thanks!


> 
> >
> >> +
> >> + /*
> >> + * HCR_EL2.ENSCXT is written unconditionally even if in some cases it's
> >> + * RES0 (when FEAT_CSV2_2 or FEAT_CSV2_1p2 are not implemented) in order
> >> + * to simplify the code, but it's safe in this case as the write would be
> >> + * ignored when not implemented and would remove the trap otherwise.
> >> + */
> >> + hcr |= HCR_EL2_ENSCXT_NOTRAP;
> >
> > I'd prefer if we can do the necessary checks. IIUC we can do this with a
> > helper, e.g.
> >
> > static bool cpu_has_scxt(void)
> > {
> > 	unsigned long csv2 = mrs_field(ID_AA64PFR0_EL1, CSV2);
> > 	if (csv2 >= 2)
> > 		return true;
> > 	if (csv2 < 1)
> > 		return false;
> > 	return mrs_field(ID_AA64PFR1_EL1, CSV2_frac) >= 2;
> > }
> >
> > ... then here we can have:
> >
> > 	if (cpu_has_scxt())
> > 		hcr |= HCR_EL2_ENSCXT_NOTRAP;
> 
> Ok I’ll do
> 
> >
> >> +
> >> + if (mrs_field(ID_AA64PFR0_EL1, RAS) >= 2)
> >> + hcr |= HCR_EL2_FIEN_NOTRAP;
> >> +
> >> + if (cpu_has_pauth())
> >> + hcr |= HCR_EL2_APK_NOTRAP | HCR_EL2_API_NOTRAP;
> >> +
> >> + msr(hcr_el2, hcr);
> >> + isb();
> >> + msr(CNTFRQ_EL0, COUNTER_FREQ);
> >> +}
> >
> > I believe we also need to initialize:
> >
> > * CNTVOFF_EL2 (for timers to work correctly)
> > * CNTHCTL_EL2 (for timers to not trap)
> > * CPTR_EL2 (for FP to not trap)
> > * MDCR_EL2 (for PMU & debug to not trap)
> 
> Sure, I’ll reset them like in TF-A.

Perfect!

Mark.



* Re: [PATCH v2 5/6] aarch64: Support PSCI for Armv8-R AArch64
  2024-07-29 16:09   ` Mark Rutland
@ 2024-07-30 11:31     ` Luca Fancellu
  2024-07-30 12:55       ` Mark Rutland
  0 siblings, 1 reply; 16+ messages in thread
From: Luca Fancellu @ 2024-07-30 11:31 UTC (permalink / raw)
  To: Mark Rutland; +Cc: Andre Przywara, linux-arm-kernel@lists.infradead.org

Hi Mark,

>> -
>> -AS_IF([test "x$USE_PSCI" != "xyes" -a "x$KERNEL_ES" = "x32"],
>> + [case "${enableval}" in
>> + yes|smc) USE_PSCI=smc ;;
>> + hvc) USE_PSCI=hvc ;;
>> + no) ;;
>> + *) AC_MSG_ERROR([Bad value "${enableval}" for --enable-psci. Use "smc" or "hvc"]) ;;
>> + esac])
>> +AM_CONDITIONAL([PSCI], [test "x$USE_PSCI" = "xyes" -o "x$USE_PSCI" = "xsmc" -o "x$USE_PSCI" = "xhvc"])
>> +AM_CONDITIONAL([PSCI_HVC], [test "x$USE_PSCI" = "xhvc"])
>> +
>> +AS_IF([test "x$USE_PSCI" = "xno" -a "x$KERNEL_ES" = "x32"],
>> [AC_MSG_ERROR([With an AArch32 kernel, boot method must be PSCI.])]
>> )
>> +AC_SUBST([PSCI_METHOD], [$USE_PSCI])
> 
> As of this patch, if I build with --enable-psci=hvc, and boot on
> ARMv8-A, it'll fail at boot time, since the boot-wrapper won't fix up
> the SPSR (and will enter the kernel at EL2), and HVC will go to that
> kernel.
> 
> I think that we either need to add support for dropping to EL1 in
> ARMv8-A, or we should have an option to build for ARMv8-R specifically,
> where we can automatically fix up the PSCI conduit.

True, maybe the best option is to have a flag to build for armv8r?
Would --armv8r64 be ok?

The behaviour would be:
 --armv8r64 without XEN -> starting at EL2 only, setting conduit to hvc, psci_vector to VBAR_EL2, drop to EL1 and start kernel
 --armv8r64 with XEN -> starting at EL2 only, psci not supported, keep xen at EL2

Cheers,
Luca




* Re: [PATCH v2 5/6] aarch64: Support PSCI for Armv8-R AArch64
  2024-07-30 11:31     ` Luca Fancellu
@ 2024-07-30 12:55       ` Mark Rutland
  0 siblings, 0 replies; 16+ messages in thread
From: Mark Rutland @ 2024-07-30 12:55 UTC (permalink / raw)
  To: Luca Fancellu; +Cc: Andre Przywara, linux-arm-kernel@lists.infradead.org

On Tue, Jul 30, 2024 at 12:31:14PM +0100, Luca Fancellu wrote:
> Hi Mark,
> 
> >> -
> >> -AS_IF([test "x$USE_PSCI" != "xyes" -a "x$KERNEL_ES" = "x32"],
> >> + [case "${enableval}" in
> >> + yes|smc) USE_PSCI=smc ;;
> >> + hvc) USE_PSCI=hvc ;;
> >> + no) ;;
> >> + *) AC_MSG_ERROR([Bad value "${enableval}" for --enable-psci. Use "smc" or "hvc"]) ;;
> >> + esac])
> >> +AM_CONDITIONAL([PSCI], [test "x$USE_PSCI" = "xyes" -o "x$USE_PSCI" = "xsmc" -o "x$USE_PSCI" = "xhvc"])
> >> +AM_CONDITIONAL([PSCI_HVC], [test "x$USE_PSCI" = "xhvc"])
> >> +
> >> +AS_IF([test "x$USE_PSCI" = "xno" -a "x$KERNEL_ES" = "x32"],
> >> [AC_MSG_ERROR([With an AArch32 kernel, boot method must be PSCI.])]
> >> )
> >> +AC_SUBST([PSCI_METHOD], [$USE_PSCI])
> >
> > As of this patch, if I build with --enable-psci=hvc, and boot on
> > ARMv8-A, it'll fail at boot time, since the boot-wrapper won't fix up
> > the SPSR (and will enter the kernel at EL2), and HVC will go to that
> > kernel.
> >
> > I think that we either need to add support for dropping to EL1 in
> > ARMv8-A, or we should have an option to build for ARMv8-R specifically,
> > where we can automatically fix up the PSCI conduit.
> 
> True, maybe the best option is to have a flag to build for armv8r?

I think so.

> Would --armv8r64 be ok?
> 
> The behaviour would be:
>  --armv8r64 without XEN -> starting at EL2 only, setting conduit to hvc, psci_vector to VBAR_EL2, drop to EL1 and start kernel
>  --armv8r64 with XEN -> starting at EL2 only, psci not supported, keep xen at EL2

That sounds good. If you can sort out the logic for that, we can change
the option name later if we want.

That will have to interact with --enable-aarch32-bw and
--enable-aarch32-kernel, so maybe it's worth having a single option to
select the boot-wrapper architecture, e.g. --bw-arch=${PARAM} that
takes:

* "aarch64-a"	// default today
* "aarch32-a"	// replaces --aarch32-bw
* "aarch64-r"	// For armv8r64

Note I've used "aarch64" since we already have ARMv9-A bits.

Mark.

> 
> Cheers,
> Luca
> 



end of thread, other threads:[~2024-07-30 12:56 UTC | newest]

Thread overview: 16+ messages
2024-07-16 14:29 [PATCH v2 0/6] Add Armv8-R AArch64 support Luca Fancellu
2024-07-16 14:29 ` [PATCH v2 1/6] aarch64: Rename labels and prepare for lower EL booting Luca Fancellu
2024-07-16 14:29 ` [PATCH v2 2/6] aarch64: Prepare " Luca Fancellu
2024-07-16 14:29 ` [PATCH v2 3/6] aarch64: Remove TSCXT bit set from SCTLR_EL2_RESET Luca Fancellu
2024-07-19 10:05   ` Mark Rutland
2024-07-16 14:29 ` [PATCH v2 4/6] aarch64: Introduce EL2 boot code for Armv8-R AArch64 Luca Fancellu
2024-07-29 15:01   ` Mark Rutland
2024-07-29 15:27     ` Luca Fancellu
2024-07-29 16:14       ` Mark Rutland
2024-07-16 14:29 ` [PATCH v2 5/6] aarch64: Support PSCI " Luca Fancellu
2024-07-29 16:09   ` Mark Rutland
2024-07-30 11:31     ` Luca Fancellu
2024-07-30 12:55       ` Mark Rutland
2024-07-16 14:29 ` [PATCH v2 6/6] aarch64: Start Xen on Armv8-R at EL2 Luca Fancellu
2024-07-23 12:27   ` Mark Rutland
2024-07-23 12:35     ` Luca Fancellu
