linux-arm-kernel.lists.infradead.org archive mirror
* [BOOT-WRAPPER v2 00/10] Cleanup initialization
@ 2024-08-12 10:15 Mark Rutland
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 01/10] aarch64: Remove redundant EL1 entry logic Mark Rutland
                   ` (9 more replies)
  0 siblings, 10 replies; 30+ messages in thread
From: Mark Rutland @ 2024-08-12 10:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz

These patches cleanup the boot-wrapper initialization logic to make it
more consistent and easier to extend in C code in future. The big
changes are:

* The kernel is always entered via an exception return. This allows us
  to initialize PSTATE consistently, and will allow us to adjust the
  SPSR dynamically in C code in future if necessary.

* Regardless of the entered exception level, CPU state is initialized
  under cpu_init_arch(), allowing for logic to be shared regardless of
  which exception level was entered.

* CPUs are initialized sequentially, which allows for better
  logging within the boot-wrapper.

Since v1 [1]:
* Retain SPSR.T bit handling; necessary for PSCI
* Comment on handling of CNTFRQ_EL0 when not entered at the highest EL
* Fix missing braces in cpu_init_psci_arch()
* Add Marc's Acked-by tag series-wide

[1] https://lore.kernel.org/linux-arm-kernel/20240729161501.1806271-1-mark.rutland@arm.com/

Mark.

Mark Rutland (10):
  aarch64: Remove redundant EL1 entry logic
  aarch64: Implement cpu_init_arch()
  aarch64: Always enter kernel via exception return
  aarch32: Refactor initial entry
  aarch32: Implement cpu_init_arch()
  aarch32: Always enter kernel via exception return
  Unify assembly setup paths
  Simplify spin logic
  Add printing functions
  Boot CPUs sequentially

 arch/aarch32/boot.S                          | 95 +++++++++++---------
 arch/aarch32/include/asm/{gic-v3.h => gic.h} |  2 +-
 arch/aarch32/init.c                          | 30 +++++--
 arch/aarch64/boot.S                          | 62 ++++---------
 arch/aarch64/include/asm/{gic-v3.h => gic.h} |  2 +-
 arch/aarch64/init.c                          | 30 +++++--
 arch/aarch64/spin.S                          | 14 +--
 common/boot.c                                | 20 ++---
 common/gic-v3.c                              |  2 +-
 common/gic.c                                 |  2 +-
 common/init.c                                | 50 +++++++++--
 common/platform.c                            | 35 ++++++++
 common/psci.c                                | 16 +---
 include/boot.h                               |  8 +-
 include/gic.h                                | 16 ++++
 include/platform.h                           |  4 +
 16 files changed, 224 insertions(+), 164 deletions(-)
 rename arch/aarch32/include/asm/{gic-v3.h => gic.h} (91%)
 rename arch/aarch64/include/asm/{gic-v3.h => gic.h} (92%)
 create mode 100644 include/gic.h

-- 
2.30.2



^ permalink raw reply	[flat|nested] 30+ messages in thread

* [BOOT-WRAPPER v2 01/10] aarch64: Remove redundant EL1 entry logic
  2024-08-12 10:15 [BOOT-WRAPPER v2 00/10] Cleanup initialization Mark Rutland
@ 2024-08-12 10:15 ` Mark Rutland
  2024-08-12 17:37   ` Andre Przywara
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 02/10] aarch64: Implement cpu_init_arch() Mark Rutland
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 30+ messages in thread
From: Mark Rutland @ 2024-08-12 10:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz

For historical reasons the boot-wrapper has code to handle being entered
at Non-secure EL1, but currently this is unsupported and cannot be used
to boot a kernel as jump_kernel() unconditionally writes to SCTLR_EL2,
which will UNDEF.

Remove the logic for handling Non-secure EL1.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
---
 arch/aarch64/boot.S | 17 +----------------
 1 file changed, 1 insertion(+), 16 deletions(-)

diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index da5fa65..73ddcd0 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -31,10 +31,8 @@ ASM_FUNC(_start)
 	b.eq	reset_at_el3
 	cmp	x0, #CURRENTEL_EL2
 	b.eq	reset_at_el2
-	cmp	x0, #CURRENTEL_EL1
-	b.eq	reset_at_el1
 
-	/* Booting at EL0 is not supported */
+	/* Booting at EL1 or EL0 is not supported */
 	b	.
 
 	/*
@@ -72,19 +70,6 @@ reset_at_el2:
 	msr	sctlr_el2, x0
 	isb
 
-	b	reset_no_el3
-
-	/*
-	 * EL1 initialization
-	 */
-reset_at_el1:
-	mov_64	x0, SCTLR_EL1_RESET
-	msr	sctlr_el1, x0
-	isb
-
-	b	reset_no_el3
-
-reset_no_el3:
 	cpuid	x0, x1
 	bl	find_logical_id
 	cmp	x0, #MPIDR_INVALID
-- 
2.30.2




* [BOOT-WRAPPER v2 02/10] aarch64: Implement cpu_init_arch()
  2024-08-12 10:15 [BOOT-WRAPPER v2 00/10] Cleanup initialization Mark Rutland
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 01/10] aarch64: Remove redundant EL1 entry logic Mark Rutland
@ 2024-08-12 10:15 ` Mark Rutland
  2024-08-13 17:13   ` Andre Przywara
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 03/10] aarch64: Always enter kernel via exception return Mark Rutland
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 30+ messages in thread
From: Mark Rutland @ 2024-08-12 10:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz

When the boot-wrapper is entered at EL2 it does not initialize
CNTFRQ_EL0, and in future it may need to initialize other CPU state
regardless of the exception level it was entered at.

Use a common cpu_init_arch() function to initialize CPU state regardless
of the exception level the boot-wrapper was entered at.

This change means that the boot-wrapper can only be used when entered
at the highest implemented exception level, as accesses to CNTFRQ_EL0
will be UNDEFINED at lower exception levels. However, the boot-wrapper
only supports being booted at the highest implemented exception level,
as the comment at the top of boot.S describes:

| The boot-wrapper must be entered from the reset vector at the
| highest implemented exception level.

... so this should not adversely affect any supported configuration.
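
As an aside, the CurrentEL check this patch introduces relies on the
register encoding the exception level in bits [3:2]. A minimal C model of
that decoding (the CURRENTEL_* values mirror the architectural encoding;
this is an illustrative sketch, not code from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* CurrentEL holds the exception level in bits [3:2] */
#define CURRENTEL_EL1	(1 << 2)
#define CURRENTEL_EL2	(2 << 2)
#define CURRENTEL_EL3	(3 << 2)

/* Extract the numeric exception level from a CurrentEL value */
static unsigned int current_el(uint64_t currentel)
{
	return (currentel >> 2) & 0x3;
}
```

This is why `mrs(CurrentEL) == CURRENTEL_EL3` in cpu_init_arch() compares
against the shifted constant rather than the bare level number.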

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
---
 arch/aarch64/boot.S |  4 +++-
 arch/aarch64/init.c | 12 +++++++++---
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index 73ddcd0..52c617d 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -51,7 +51,7 @@ reset_at_el3:
 
 	bl	cpu_init_bootwrapper
 
-	bl	cpu_init_el3
+	bl	cpu_init_arch
 
 	bl	gic_secure_init
 
@@ -82,6 +82,8 @@ reset_at_el2:
 
 	bl	cpu_init_bootwrapper
 
+	bl	cpu_init_arch
+
 	b	start_bootmethod
 
 err_invalid_id:
diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
index c9fc7f1..49abdf7 100644
--- a/arch/aarch64/init.c
+++ b/arch/aarch64/init.c
@@ -52,7 +52,7 @@ static inline bool cpu_has_permission_indirection(void)
 	return mrs(ID_AA64MMFR3_EL1) & mask;
 }
 
-void cpu_init_el3(void)
+static void cpu_init_el3(void)
 {
 	unsigned long scr = SCR_EL3_RES1 | SCR_EL3_NS | SCR_EL3_HCE;
 	unsigned long mdcr = 0;
@@ -153,8 +153,6 @@ void cpu_init_el3(void)
 
 		msr(SMCR_EL3, smcr);
 	}
-
-	msr(CNTFRQ_EL0, COUNTER_FREQ);
 }
 
 #ifdef PSCI
@@ -171,3 +169,11 @@ bool cpu_init_psci_arch(void)
 	return true;
 }
 #endif
+
+void cpu_init_arch(void)
+{
+	if (mrs(CurrentEL) == CURRENTEL_EL3)
+		cpu_init_el3();
+
+	msr(CNTFRQ_EL0, COUNTER_FREQ);
+}
-- 
2.30.2




* [BOOT-WRAPPER v2 03/10] aarch64: Always enter kernel via exception return
  2024-08-12 10:15 [BOOT-WRAPPER v2 00/10] Cleanup initialization Mark Rutland
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 01/10] aarch64: Remove redundant EL1 entry logic Mark Rutland
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 02/10] aarch64: Implement cpu_init_arch() Mark Rutland
@ 2024-08-12 10:15 ` Mark Rutland
  2024-08-13 17:14   ` Andre Przywara
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 04/10] aarch32: Refactor initial entry Mark Rutland
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 30+ messages in thread
From: Mark Rutland @ 2024-08-12 10:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz

When the boot-wrapper is entered at EL3 it will enter the kernel via
ERET, and when entered at EL2 it will branch to the kernel directly.
This is an artifact of the way the boot-wrapper was originally written
in assembly, and it would be preferable to always enter the kernel via
ERET so that PSTATE is always initialized to a known-good value.

Rework jump_kernel() to always enter the kernel via ERET.
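
For reference, the `bfi x4, x19, #5, #1` in the reworked jump_kernel()
copies bit 0 of the kernel address into SPSR.T (bit 5), so an odd entry
address yields an AArch32 Thumb-state entry. A C model of that bitfield
insert (the sample SPSR values below are illustrative, not taken from the
boot-wrapper's SPSR_KERNEL definition):

```c
#include <assert.h>
#include <stdint.h>

#define SPSR_T_BIT	5

/* Model of `bfi x4, x19, #5, #1`: insert bit 0 of addr at bit 5 of spsr */
static uint64_t spsr_set_t_from_addr(uint64_t spsr, uint64_t addr)
{
	return (spsr & ~(1UL << SPSR_T_BIT)) | ((addr & 1) << SPSR_T_BIT);
}
```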

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
---
 arch/aarch64/boot.S | 30 +++++++++++++-----------------
 1 file changed, 13 insertions(+), 17 deletions(-)

diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index 52c617d..0ac0c98 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -76,10 +76,6 @@ reset_at_el2:
 	b.eq	err_invalid_id
 	bl	setup_stack
 
-	mov	w0, #1
-	ldr	x1, =flag_no_el3
-	str	w0, [x1]
-
 	bl	cpu_init_bootwrapper
 
 	bl	cpu_init_arch
@@ -111,18 +107,11 @@ ASM_FUNC(jump_kernel)
 	bl	find_logical_id
 	bl	setup_stack		// Reset stack pointer
 
-	ldr	w0, flag_no_el3
-	cmp	w0, #0			// Prepare Z flag
-
 	mov	x0, x20
 	mov	x1, x21
 	mov	x2, x22
 	mov	x3, x23
-
-	b.eq	1f
-	br	x19			// No EL3
-
-1:	mov	x4, #SPSR_KERNEL
+	mov	x4, #SPSR_KERNEL
 
 	/*
 	 * If bit 0 of the kernel address is set, we're entering in AArch32
@@ -130,13 +119,20 @@ ASM_FUNC(jump_kernel)
 	 */
 	bfi	x4, x19, #5, #1
 
+	mrs	x5, CurrentEL
+	cmp	x5, #CURRENTEL_EL3
+	b.eq	eret_at_el3
+	cmp	x5, #CURRENTEL_EL2
+	b.eq	eret_at_el2
+	b	.			// Not possible
+
+eret_at_el3:
 	msr	elr_el3, x19
 	msr	spsr_el3, x4
 	eret
+eret_at_el2:
+	msr	elr_el2, x19
+	msr	spsr_el2, x4
+	eret
 
 	.ltorg
-
-	.data
-	.align 3
-flag_no_el3:
-	.long 0
-- 
2.30.2




* [BOOT-WRAPPER v2 04/10] aarch32: Refactor initial entry
  2024-08-12 10:15 [BOOT-WRAPPER v2 00/10] Cleanup initialization Mark Rutland
                   ` (2 preceding siblings ...)
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 03/10] aarch64: Always enter kernel via exception return Mark Rutland
@ 2024-08-12 10:15 ` Mark Rutland
  2024-08-19 17:21   ` Andre Przywara
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 05/10] aarch32: Implement cpu_init_arch() Mark Rutland
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 30+ messages in thread
From: Mark Rutland @ 2024-08-12 10:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz

For historical reasons the early AArch32 code is structured differently
from the early AArch64 code, with some common code (including stack
setup) performed before we identify the mode we were entered in.

Align the structure of the early AArch32 code with that of the early
AArch64 code. This will make subsequent refactoring easier.
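
The new entry dispatch keys off CPSR.M, the mode field in the low five
bits of CPSR. A small C model of the check (the PSR_* values follow the
architectural AArch32 mode encodings; the boot-wrapper's own constants
live in its headers):

```c
#include <assert.h>

#define PSR_MODE_MASK	0x1f
#define PSR_SVC		0x13
#define PSR_MON		0x16
#define PSR_HYP		0x1a

/* Returns nonzero if the mode is one the boot-wrapper accepts at reset */
static int reset_mode_supported(unsigned int cpsr)
{
	unsigned int mode = cpsr & PSR_MODE_MASK;

	return mode == PSR_SVC || mode == PSR_HYP;
}
```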

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
---
 arch/aarch32/boot.S | 55 ++++++++++++++++++++++++++++-----------------
 1 file changed, 34 insertions(+), 21 deletions(-)

diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
index 4d16c9c..cf83e55 100644
--- a/arch/aarch32/boot.S
+++ b/arch/aarch32/boot.S
@@ -31,7 +31,28 @@
 	 *   PSCI is not supported when entered in this mode.
 	 */
 ASM_FUNC(_start)
-	/* Stack initialisation */
+	mrs	r0, cpsr
+	and	r0, #PSR_MODE_MASK
+	cmp	r0, #PSR_SVC
+	beq	reset_at_svc
+	cmp	r0, #PSR_HYP
+	beq	reset_at_hyp
+
+	/* Booting at other modes is not supported */
+	b	.
+
+reset_at_svc:
+	/*
+	 * When entered in Secure SVC mode we must switch to monitor mode to
+	 * configure SCR.NS. Switch to monitor mode ASAP to simplify later
+	 * code.
+	 */
+	adr	lr, reset_at_mon
+	ldr	r0, =(PSR_A | PSR_I | PSR_F | PSR_MON)
+	msr	spsr, r0
+	movs	pc, lr
+
+reset_at_mon:
 	cpuid	r0, r1
 	bl	find_logical_id
 	cmp	r0, #MPIDR_INVALID
@@ -39,36 +60,28 @@ ASM_FUNC(_start)
 
 	bl	setup_stack
 
-	mrs	r0, cpsr
-	and	r0, #PSR_MODE_MASK
-
-	cmp	r0, #PSR_HYP
-	bne	_switch_monitor
+	bl	cpu_init_bootwrapper
 
-	mov	r0, #1
-	ldr	r1, =flag_no_el3
-	str	r0, [r1]
+	bl	cpu_init_secure_pl1
 
-	bl	cpu_init_bootwrapper
+	bl	gic_secure_init
 
 	b	start_bootmethod
 
-_switch_monitor:
-	adr	lr, _monitor
-	ldr	r0, =(PSR_A | PSR_I | PSR_F | PSR_MON)
-	msr	spsr, r0
-	movs	pc, lr
+reset_at_hyp:
+	cpuid	r0, r1
+	bl	find_logical_id
+	cmp	r0, #MPIDR_INVALID
+	beq	err_invalid_id
 
-_monitor:
-	/* Move the stack to Monitor mode*/
-	mrs	sp, sp_svc
+	bl	setup_stack
 
-	bl	cpu_init_secure_pl1
+	mov	r0, #1
+	ldr	r1, =flag_no_el3
+	str	r0, [r1]
 
 	bl	cpu_init_bootwrapper
 
-	bl	gic_secure_init
-
 	b	start_bootmethod
 
 err_invalid_id:
-- 
2.30.2




* [BOOT-WRAPPER v2 05/10] aarch32: Implement cpu_init_arch()
  2024-08-12 10:15 [BOOT-WRAPPER v2 00/10] Cleanup initialization Mark Rutland
                   ` (3 preceding siblings ...)
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 04/10] aarch32: Refactor initial entry Mark Rutland
@ 2024-08-12 10:15 ` Mark Rutland
  2024-08-19 17:21   ` Andre Przywara
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 06/10] aarch32: Always enter kernel via exception return Mark Rutland
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 30+ messages in thread
From: Mark Rutland @ 2024-08-12 10:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz

When the boot-wrapper is entered at EL2/Hyp it does not initialize
CNTFRQ, and in future it may need to initialize other CPU state
regardless of the exception level it was entered at.

Use a common cpu_init_arch() function to initialize CPU state regardless
of the exception level the boot-wrapper was entered at. For clarity
cpu_init_secure_pl1() is renamed to cpu_init_monitor(), which better
matches PSR_MON and will allow for the addition of cpu_init_hyp() and
cpu_init_svc() in future.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
---
 arch/aarch32/boot.S |  4 +++-
 arch/aarch32/init.c | 12 +++++++++---
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
index cf83e55..f21f89a 100644
--- a/arch/aarch32/boot.S
+++ b/arch/aarch32/boot.S
@@ -62,7 +62,7 @@ reset_at_mon:
 
 	bl	cpu_init_bootwrapper
 
-	bl	cpu_init_secure_pl1
+	bl	cpu_init_arch
 
 	bl	gic_secure_init
 
@@ -82,6 +82,8 @@ reset_at_hyp:
 
 	bl	cpu_init_bootwrapper
 
+	bl	cpu_init_arch
+
 	b	start_bootmethod
 
 err_invalid_id:
diff --git a/arch/aarch32/init.c b/arch/aarch32/init.c
index e25f0c7..35da37c 100644
--- a/arch/aarch32/init.c
+++ b/arch/aarch32/init.c
@@ -29,7 +29,7 @@ void announce_arch(void)
 	print_string("\r\n");
 }
 
-void cpu_init_secure_pl1(void)
+static void cpu_init_monitor(void)
 {
 	unsigned long scr = SCR_NS | SCR_HCE;
 	unsigned long nsacr = NSACR_CP10 | NSACR_CP11;
@@ -37,8 +37,6 @@ void cpu_init_secure_pl1(void)
 	mcr(SCR, scr);
 
 	mcr(NSACR, nsacr);
-
-	mcr(CNTFRQ, COUNTER_FREQ);
 }
 
 #ifdef PSCI
@@ -55,3 +53,11 @@ bool cpu_init_psci_arch(void)
 	return true;
 }
 #endif
+
+void cpu_init_arch(void)
+{
+	if (read_cpsr_mode() == PSR_MON)
+		cpu_init_monitor();
+
+	mcr(CNTFRQ, COUNTER_FREQ);
+}
-- 
2.30.2




* [BOOT-WRAPPER v2 06/10] aarch32: Always enter kernel via exception return
  2024-08-12 10:15 [BOOT-WRAPPER v2 00/10] Cleanup initialization Mark Rutland
                   ` (4 preceding siblings ...)
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 05/10] aarch32: Implement cpu_init_arch() Mark Rutland
@ 2024-08-12 10:15 ` Mark Rutland
  2024-08-19 17:22   ` Andre Przywara
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 07/10] Unify assembly setup paths Mark Rutland
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 30+ messages in thread
From: Mark Rutland @ 2024-08-12 10:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz

When the boot-wrapper is entered at Secure PL1 it will enter the kernel
via an exception return, ERET, and when entered at Hyp it will branch to
the kernel directly. This is an artifact of the way the boot-wrapper was
originally written in assembly, and it would be preferable to always
enter the kernel via an exception return so that PSTATE is always
initialized to a known-good value.

Rework jump_kernel() to always enter the kernel via ERET, matching the
style of the AArch64 version of jump_kernel().
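
As on AArch64, the AArch32 path derives the entry state from bit 0 of the
kernel address (`tst r4, #1; orrne r3, #PSR_T` in the new code). A C
sketch of that logic (PSR_T is bit 5 of the PSR; the sample values are
illustrative):

```c
#include <assert.h>

#define PSR_T	(1u << 5)

/* Enter in Thumb state when bit 0 of the entry address is set */
static unsigned int spsr_for_entry(unsigned int spsr_kernel,
				   unsigned int addr)
{
	return (addr & 1) ? (spsr_kernel | PSR_T) : spsr_kernel;
}
```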

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
---
 arch/aarch32/boot.S | 48 +++++++++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 23 deletions(-)

diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
index f21f89a..78d19a0 100644
--- a/arch/aarch32/boot.S
+++ b/arch/aarch32/boot.S
@@ -76,10 +76,6 @@ reset_at_hyp:
 
 	bl	setup_stack
 
-	mov	r0, #1
-	ldr	r1, =flag_no_el3
-	str	r0, [r1]
-
 	bl	cpu_init_bootwrapper
 
 	bl	cpu_init_arch
@@ -96,9 +92,10 @@ err_invalid_id:
 	 * r1-r3, sp[0]: kernel arguments
 	 */
 ASM_FUNC(jump_kernel)
-	sub	sp, #4				@ Ignore fourth argument
-	push	{r0 - r3}
-	mov	r5, sp
+	mov	r4, r0
+	mov	r5, r1
+	mov	r6, r2
+	mov	r7, r3
 
 	ldr	r0, =HSCTLR_KERNEL
 	mcr	p15, 4, r0, c1, c0, 0		@ HSCTLR
@@ -111,23 +108,28 @@ ASM_FUNC(jump_kernel)
 	bl	find_logical_id
 	bl	setup_stack
 
-	ldr	lr, [r5], #4
-	ldm	r5, {r0 - r2}
-
-	ldr	r4, =flag_no_el3
-	ldr	r4, [r4]
-	cmp	r4, #1
-	bxeq	lr				@ no EL3
+	mov	r0, r5
+	mov	r1, r6
+	mov	r2, r7
+	ldr	r3, =SPSR_KERNEL
 
-	ldr	r4, =SPSR_KERNEL
 	/* Return in thumb2 mode when bit 0 of address is 1 */
-	tst	lr, #1
-	orrne	r4, #PSR_T
+	tst	r4, #1
+	orrne	r3, #PSR_T
+
+	mrs	r5, cpsr
+	and	r5, #PSR_MODE_MASK
+	cmp	r5, #PSR_MON
+	beq	eret_at_mon
+	cmp	r5, #PSR_HYP
+	beq	eret_at_hyp
+	b	.
 
-	msr	spsr_cxf, r4
+eret_at_mon:
+	mov	lr, r4
+	msr	spsr_cxf, r3
 	movs	pc, lr
-
-	.section .data
-	.align 2
-flag_no_el3:
-	.long 0
+eret_at_hyp:
+	msr	elr_hyp, r4
+	msr	spsr_cxf, r3
+	eret
-- 
2.30.2




* [BOOT-WRAPPER v2 07/10] Unify assembly setup paths
  2024-08-12 10:15 [BOOT-WRAPPER v2 00/10] Cleanup initialization Mark Rutland
                   ` (5 preceding siblings ...)
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 06/10] aarch32: Always enter kernel via exception return Mark Rutland
@ 2024-08-12 10:15 ` Mark Rutland
  2024-08-21 16:54   ` Andre Przywara
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 08/10] Simplify spin logic Mark Rutland
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 30+ messages in thread
From: Mark Rutland @ 2024-08-12 10:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz

The early assembly paths for EL3/Secure-PL1 and EL2/Hyp are almost
identical aside from the EL3/Secure-PL1 paths calling gic_secure_init().

Simplify the early assembly paths by conditionally calling
gic_secure_init() from cpu_init_arch(), allowing the EL3/Secure-PL1 and
EL2/Hyp paths to be unified.

In order to call gic_secure_init() from C code we need to expose a
prototype for gic_secure_init(), requiring a new <gic.h> header. For
clarity the existing <asm/gic-v3.h> headers are renamed to <asm/gic.h>
and are included through the common header.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
---
 arch/aarch32/boot.S                          | 20 ++++++--------------
 arch/aarch32/include/asm/{gic-v3.h => gic.h} |  2 +-
 arch/aarch32/init.c                          |  5 ++++-
 arch/aarch64/boot.S                          | 17 ++++-------------
 arch/aarch64/include/asm/{gic-v3.h => gic.h} |  2 +-
 arch/aarch64/init.c                          |  5 ++++-
 common/gic-v3.c                              |  2 +-
 common/gic.c                                 |  2 +-
 include/gic.h                                | 16 ++++++++++++++++
 9 files changed, 38 insertions(+), 33 deletions(-)
 rename arch/aarch32/include/asm/{gic-v3.h => gic.h} (91%)
 rename arch/aarch64/include/asm/{gic-v3.h => gic.h} (92%)
 create mode 100644 include/gic.h

diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
index 78d19a0..8b6ffac 100644
--- a/arch/aarch32/boot.S
+++ b/arch/aarch32/boot.S
@@ -53,22 +53,14 @@ reset_at_svc:
 	movs	pc, lr
 
 reset_at_mon:
-	cpuid	r0, r1
-	bl	find_logical_id
-	cmp	r0, #MPIDR_INVALID
-	beq	err_invalid_id
-
-	bl	setup_stack
-
-	bl	cpu_init_bootwrapper
-
-	bl	cpu_init_arch
-
-	bl	gic_secure_init
-
-	b	start_bootmethod
+	// TODO initialise monitor state
+	b	reset_common
 
 reset_at_hyp:
+	// TODO initialise hyp state
+	b	reset_common
+
+reset_common:
 	cpuid	r0, r1
 	bl	find_logical_id
 	cmp	r0, #MPIDR_INVALID
diff --git a/arch/aarch32/include/asm/gic-v3.h b/arch/aarch32/include/asm/gic.h
similarity index 91%
rename from arch/aarch32/include/asm/gic-v3.h
rename to arch/aarch32/include/asm/gic.h
index b28136a..0b9425d 100644
--- a/arch/aarch32/include/asm/gic-v3.h
+++ b/arch/aarch32/include/asm/gic.h
@@ -1,5 +1,5 @@
 /*
- * arch/aarch32/include/asm/gic-v3.h
+ * arch/aarch32/include/asm/gic.h
  *
  * Copyright (C) 2015 ARM Limited. All rights reserved.
  *
diff --git a/arch/aarch32/init.c b/arch/aarch32/init.c
index 35da37c..cb67bf6 100644
--- a/arch/aarch32/init.c
+++ b/arch/aarch32/init.c
@@ -7,6 +7,7 @@
  * found in the LICENSE.txt file.
  */
 #include <cpu.h>
+#include <gic.h>
 #include <platform.h>
 #include <stdbool.h>
 
@@ -56,8 +57,10 @@ bool cpu_init_psci_arch(void)
 
 void cpu_init_arch(void)
 {
-	if (read_cpsr_mode() == PSR_MON)
+	if (read_cpsr_mode() == PSR_MON) {
 		cpu_init_monitor();
+		gic_secure_init();
+	}
 
 	mcr(CNTFRQ, COUNTER_FREQ);
 }
diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index 0ac0c98..98ae77d 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -43,19 +43,7 @@ reset_at_el3:
 	msr	sctlr_el3, x0
 	isb
 
-	cpuid	x0, x1
-	bl	find_logical_id
-	cmp	x0, #MPIDR_INVALID
-	b.eq	err_invalid_id
-	bl	setup_stack
-
-	bl	cpu_init_bootwrapper
-
-	bl	cpu_init_arch
-
-	bl	gic_secure_init
-
-	b	start_bootmethod
+	b	reset_common
 
 	/*
 	 * EL2 initialization
@@ -70,6 +58,9 @@ reset_at_el2:
 	msr	sctlr_el2, x0
 	isb
 
+	b	reset_common
+
+reset_common:
 	cpuid	x0, x1
 	bl	find_logical_id
 	cmp	x0, #MPIDR_INVALID
diff --git a/arch/aarch64/include/asm/gic-v3.h b/arch/aarch64/include/asm/gic.h
similarity index 92%
rename from arch/aarch64/include/asm/gic-v3.h
rename to arch/aarch64/include/asm/gic.h
index 2447480..9d716f6 100644
--- a/arch/aarch64/include/asm/gic-v3.h
+++ b/arch/aarch64/include/asm/gic.h
@@ -1,5 +1,5 @@
 /*
- * arch/aarch64/include/asm/gic-v3.h
+ * arch/aarch64/include/asm/gic.h
  *
  * Copyright (C) 2015 ARM Limited. All rights reserved.
  *
diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
index 49abdf7..68c220b 100644
--- a/arch/aarch64/init.c
+++ b/arch/aarch64/init.c
@@ -7,6 +7,7 @@
  * found in the LICENSE.txt file.
  */
 #include <cpu.h>
+#include <gic.h>
 #include <platform.h>
 #include <stdbool.h>
 
@@ -172,8 +173,10 @@ bool cpu_init_psci_arch(void)
 
 void cpu_init_arch(void)
 {
-	if (mrs(CurrentEL) == CURRENTEL_EL3)
+	if (mrs(CurrentEL) == CURRENTEL_EL3) {
 		cpu_init_el3();
+		gic_secure_init();
+	}
 
 	msr(CNTFRQ_EL0, COUNTER_FREQ);
 }
diff --git a/common/gic-v3.c b/common/gic-v3.c
index 6207007..4d8e620 100644
--- a/common/gic-v3.c
+++ b/common/gic-v3.c
@@ -10,7 +10,7 @@
 #include <stdint.h>
 
 #include <cpu.h>
-#include <asm/gic-v3.h>
+#include <gic.h>
 #include <asm/io.h>
 
 #define GICD_CTLR			0x0
diff --git a/common/gic.c b/common/gic.c
index 04d4289..15a3410 100644
--- a/common/gic.c
+++ b/common/gic.c
@@ -10,7 +10,7 @@
 #include <stdint.h>
 
 #include <cpu.h>
-#include <asm/gic-v3.h>
+#include <gic.h>
 #include <asm/io.h>
 
 #define GICD_CTLR			0x0
diff --git a/include/gic.h b/include/gic.h
new file mode 100644
index 0000000..127f82b
--- /dev/null
+++ b/include/gic.h
@@ -0,0 +1,16 @@
+/*
+ * include/gic.h
+ *
+ * Copyright (C) 2024 ARM Limited. All rights reserved.
+ *
+ * Use of this source code is governed by a BSD-style license that can be
+ * found in the LICENSE.txt file.
+ */
+#ifndef __GIC_H
+#define __GIC_H
+
+#include <asm/gic.h>
+
+void gic_secure_init(void);
+
+#endif
-- 
2.30.2




* [BOOT-WRAPPER v2 08/10] Simplify spin logic
  2024-08-12 10:15 [BOOT-WRAPPER v2 00/10] Cleanup initialization Mark Rutland
                   ` (6 preceding siblings ...)
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 07/10] Unify assembly setup paths Mark Rutland
@ 2024-08-12 10:15 ` Mark Rutland
  2024-08-21 16:55   ` Andre Przywara
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 09/10] Add printing functions Mark Rutland
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 10/10] Boot CPUs sequentially Mark Rutland
  9 siblings, 1 reply; 30+ messages in thread
From: Mark Rutland @ 2024-08-12 10:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz

The logic for initial boot is more complicated than it needs to be,
with first_spin() having a special case for CPU0 that requires an
additional argument to be passed to spin().

Simplify this by moving the special-case logic for CPU0 into
first_spin(). This removes the need to provide a dummy mailbox for CPU0
to spin on, and simplifies callers of first_spin() and spin(), which no
longer need to pass a dummy mailbox or 'is_entry' for CPU0.
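
The spin() loop itself is simple: poll the mailbox until it holds
something other than the invalid marker, then use that value as the entry
address. A single-threaded C model of the wait (wfe()/sevl() elided; in
the test below the mailbox is pre-filled so the loop exits immediately):

```c
#include <assert.h>

/* Wait for a valid address to appear in the mailbox */
static unsigned long spin_wait(volatile unsigned long *mbox,
			       unsigned long invalid)
{
	unsigned long addr = invalid;

	/* The real loop executes wfe() between polls */
	while (addr == invalid)
		addr = *mbox;

	return addr;
}
```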

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
---
 arch/aarch64/spin.S | 11 +----------
 common/boot.c       | 20 ++++++++------------
 common/psci.c       |  2 +-
 include/boot.h      |  2 +-
 4 files changed, 11 insertions(+), 24 deletions(-)

diff --git a/arch/aarch64/spin.S b/arch/aarch64/spin.S
index 375f732..a7879d4 100644
--- a/arch/aarch64/spin.S
+++ b/arch/aarch64/spin.S
@@ -23,15 +23,6 @@ ASM_FUNC(start_bootmethod)
 	 * Primary CPU (x0 = 0) jumps to kernel, the other ones wait for an
 	 * address to appear in mbox
 	 */
-	adr	x3, mbox
-	adr	x4, kernel_address
-	cmp	x0, #0
-	csel	x1, x3, x4, ne
+	adr	x1, mbox
 	mov	x2, #0
 	bl	first_spin
-
-	.align 3
-kernel_address:
-	.long 0
-
-	.ltorg
diff --git a/common/boot.c b/common/boot.c
index 29d53a4..4417649 100644
--- a/common/boot.c
+++ b/common/boot.c
@@ -27,7 +27,7 @@ const unsigned long id_table[] = { CPU_IDS };
  * @invalid: value of an invalid address, 0 or -1 depending on the boot method
  * @is_entry: when true, pass boot parameters to the kernel, instead of 0
  */
-void __noreturn spin(unsigned long *mbox, unsigned long invalid, int is_entry)
+void __noreturn spin(unsigned long *mbox, unsigned long invalid)
 {
 	unsigned long addr = invalid;
 
@@ -36,13 +36,6 @@ void __noreturn spin(unsigned long *mbox, unsigned long invalid, int is_entry)
 		addr = *mbox;
 	}
 
-	if (is_entry)
-#ifdef KERNEL_32
-		jump_kernel(addr, 0, ~0, (unsigned long)&dtb, 0);
-#else
-		jump_kernel(addr, (unsigned long)&dtb, 0, 0, 0);
-#endif
-
 	jump_kernel(addr, 0, 0, 0, 0);
 
 	unreachable();
@@ -60,12 +53,15 @@ void __noreturn first_spin(unsigned int cpu, unsigned long *mbox,
 			   unsigned long invalid)
 {
 	if (cpu == 0) {
-		*mbox = (unsigned long)&entrypoint;
-		sevl();
-		spin(mbox, invalid, 1);
+		unsigned long addr = (unsigned long)&entrypoint;
+#ifdef KERNEL_32
+		jump_kernel(addr, 0, ~0, (unsigned long)&dtb, 0);
+#else
+		jump_kernel(addr, (unsigned long)&dtb, 0, 0, 0);
+#endif
 	} else {
 		*mbox = invalid;
-		spin(mbox, invalid, 0);
+		spin(mbox, invalid);
 	}
 
 	unreachable();
diff --git a/common/psci.c b/common/psci.c
index 5ae4255..19cc315 100644
--- a/common/psci.c
+++ b/common/psci.c
@@ -57,7 +57,7 @@ static int psci_cpu_off(void)
 
 	branch_table[cpu] = PSCI_ADDR_INVALID;
 
-	spin(branch_table + cpu, PSCI_ADDR_INVALID, 0);
+	spin(branch_table + cpu, PSCI_ADDR_INVALID);
 
 	unreachable();
 }
diff --git a/include/boot.h b/include/boot.h
index 459d1d5..18b805d 100644
--- a/include/boot.h
+++ b/include/boot.h
@@ -12,7 +12,7 @@
 #include <compiler.h>
 #include <stdbool.h>
 
-void __noreturn spin(unsigned long *mbox, unsigned long invalid, int is_entry);
+void __noreturn spin(unsigned long *mbox, unsigned long invalid);
 
 void __noreturn first_spin(unsigned int cpu, unsigned long *mbox,
 			   unsigned long invalid_addr);
-- 
2.30.2




* [BOOT-WRAPPER v2 09/10] Add printing functions
  2024-08-12 10:15 [BOOT-WRAPPER v2 00/10] Cleanup initialization Mark Rutland
                   ` (7 preceding siblings ...)
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 08/10] Simplify spin logic Mark Rutland
@ 2024-08-12 10:15 ` Mark Rutland
  2024-08-21 16:55   ` Andre Przywara
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 10/10] Boot CPUs sequentially Mark Rutland
  9 siblings, 1 reply; 30+ messages in thread
From: Mark Rutland @ 2024-08-12 10:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz

In subsequent patches we'll want to log messages from specific CPUs. Add
helpers to make this simpler.
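
The print_uint_dec() added below converts by repeated division, buffering
digits and emitting them most-significant first. The same algorithm,
reworked into a hypothetical buffer-filling helper so it can be exercised
off target (uint_to_dec() is a test harness, not part of the patch):

```c
#include <assert.h>
#include <string.h>

/* 2^32 - 1 is 4,294,967,295: at most 10 decimal digits */
#define DEC_CHARS_PER_UINT	10

/* Write the decimal representation of val into out, NUL-terminated */
static void uint_to_dec(unsigned int val, char *out)
{
	char digits[DEC_CHARS_PER_UINT];
	int d = 0;

	do {
		digits[d++] = '0' + val % 10;
		val /= 10;
	} while (val);

	while (d--)
		*out++ = digits[d];
	*out = '\0';
}
```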

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
---
 common/platform.c  | 35 +++++++++++++++++++++++++++++++++++
 include/platform.h |  4 ++++
 2 files changed, 39 insertions(+)

diff --git a/common/platform.c b/common/platform.c
index 1607ee6..4e390e1 100644
--- a/common/platform.c
+++ b/common/platform.c
@@ -65,6 +65,41 @@ void print_ulong_hex(unsigned long val)
 	}
 }
 
+// 2^32 is 4,294,967,296
+#define DEC_CHARS_PER_UINT	10
+
+void print_uint_dec(unsigned int val)
+{
+	char digits[DEC_CHARS_PER_UINT];
+	int d = 0;
+
+	do {
+		digits[d] = val % 10;
+		val /= 10;
+		d++;
+	} while (val);
+
+	while (d--) {
+		print_char('0' + digits[d]);
+	}
+}
+
+void print_cpu_warn(unsigned int cpu, const char *str)
+{
+	print_string("CPU");
+	print_uint_dec(cpu);
+	print_string(" WARNING: ");
+	print_string(str);
+}
+
+void print_cpu_msg(unsigned int cpu, const char *str)
+{
+	print_string("CPU");
+	print_uint_dec(cpu);
+	print_string(": ");
+	print_string(str);
+}
+
 void init_uart(void)
 {
 	/*
diff --git a/include/platform.h b/include/platform.h
index c88e124..09712b1 100644
--- a/include/platform.h
+++ b/include/platform.h
@@ -12,6 +12,10 @@
 void print_char(char c);
 void print_string(const char *str);
 void print_ulong_hex(unsigned long val);
+void print_uint_dec(unsigned int val);
+
+void print_cpu_warn(unsigned int cpu, const char *str);
+void print_cpu_msg(unsigned int cpu, const char *str);
 
 void init_uart(void);
 
-- 
2.30.2



^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [BOOT-WRAPPER v2 10/10] Boot CPUs sequentially
  2024-08-12 10:15 [BOOT-WRAPPER v2 00/10] Cleanup initialization Mark Rutland
                   ` (8 preceding siblings ...)
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 09/10] Add printing functions Mark Rutland
@ 2024-08-12 10:15 ` Mark Rutland
  2024-08-21 17:49   ` Andre Przywara
  9 siblings, 1 reply; 30+ messages in thread
From: Mark Rutland @ 2024-08-12 10:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz

Currently the boot-wrapper initializes all CPUs in parallel. This means
that we cannot log errors as they happen, as this would mean multiple
CPUs concurrently writing to the UART, producing garbled output. To
produce meaningful output we have to special-case errors on the boot CPU
and hope CPUs have been configured consistently.

To make it easier to handle errors, boot CPUs sequentially so that errors
can be logged as they happen. With this change it's pretty clear that
the cpu_init_bootmethod() abstraction isn't helpful, and so this is
removed with cpu_init_arch() directly initializing PSCI where necessary.

When things go well this looks like:

| Boot-wrapper v0.2
| Entered at EL3
| Memory layout:
| [0000000080000000..0000000080001f90] => boot-wrapper
| [000000008000fff8..0000000080010000] => mbox
| [0000000080200000..0000000082cbaa00] => kernel
| [0000000088000000..0000000088002df1] => dtb
| CPU0: (MPIDR 0000000000000000) initializing...
| CPU0: Done
| CPU1: (MPIDR 0000000000000100) initializing...
| CPU1: Done
| CPU2: (MPIDR 0000000000000200) initializing...
| CPU2: Done
| CPU3: (MPIDR 0000000000000300) initializing...
| CPU3: Done
| CPU4: (MPIDR 0000000000010000) initializing...
| CPU4: Done
| CPU5: (MPIDR 0000000000010100) initializing...
| CPU5: Done
| CPU6: (MPIDR 0000000000010200) initializing...
| CPU6: Done
| CPU7: (MPIDR 0000000000010300) initializing...
| CPU7: Done
| All CPUs initialized. Entering kernel...
|
| [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd0f0]

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
---
 arch/aarch32/boot.S |  2 --
 arch/aarch32/init.c | 17 +++++++++------
 arch/aarch64/boot.S |  2 --
 arch/aarch64/init.c | 17 +++++++++------
 arch/aarch64/spin.S |  3 ---
 common/init.c       | 50 +++++++++++++++++++++++++++++++++++++--------
 common/psci.c       | 14 -------------
 include/boot.h      |  6 +-----
 8 files changed, 64 insertions(+), 47 deletions(-)

diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
index 8b6ffac..08029ff 100644
--- a/arch/aarch32/boot.S
+++ b/arch/aarch32/boot.S
@@ -70,8 +70,6 @@ reset_common:
 
 	bl	cpu_init_bootwrapper
 
-	bl	cpu_init_arch
-
 	b	start_bootmethod
 
 err_invalid_id:
diff --git a/arch/aarch32/init.c b/arch/aarch32/init.c
index cb67bf6..d08ac83 100644
--- a/arch/aarch32/init.c
+++ b/arch/aarch32/init.c
@@ -6,6 +6,7 @@
  * Use of this source code is governed by a BSD-style license that can be
  * found in the LICENSE.txt file.
  */
+#include <boot.h>
 #include <cpu.h>
 #include <gic.h>
 #include <platform.h>
@@ -43,24 +44,28 @@ static void cpu_init_monitor(void)
 #ifdef PSCI
 extern char psci_vectors[];
 
-bool cpu_init_psci_arch(void)
+static void cpu_init_psci_arch(unsigned int cpu)
 {
-	if (read_cpsr_mode() != PSR_MON)
-		return false;
+	if (read_cpsr_mode() != PSR_MON) {
+		print_cpu_warn(cpu, "PSCI could not be initialized (not booted at PL1).\r\n");
+		return;
+	}
 
 	mcr(MVBAR, (unsigned long)psci_vectors);
 	isb();
-
-	return true;
 }
+#else
+static void cpu_init_psci_arch(unsigned int cpu) { }
 #endif
 
-void cpu_init_arch(void)
+void cpu_init_arch(unsigned int cpu)
 {
 	if (read_cpsr_mode() == PSR_MON) {
 		cpu_init_monitor();
 		gic_secure_init();
 	}
 
+	cpu_init_psci_arch(cpu);
+
 	mcr(CNTFRQ, COUNTER_FREQ);
 }
diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index 98ae77d..6bec836 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -69,8 +69,6 @@ reset_common:
 
 	bl	cpu_init_bootwrapper
 
-	bl	cpu_init_arch
-
 	b	start_bootmethod
 
 err_invalid_id:
diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
index 68c220b..63fa949 100644
--- a/arch/aarch64/init.c
+++ b/arch/aarch64/init.c
@@ -6,6 +6,7 @@
  * Use of this source code is governed by a BSD-style license that can be
  * found in the LICENSE.txt file.
  */
+#include <boot.h>
 #include <cpu.h>
 #include <gic.h>
 #include <platform.h>
@@ -159,24 +160,28 @@ static void cpu_init_el3(void)
 #ifdef PSCI
 extern char psci_vectors[];
 
-bool cpu_init_psci_arch(void)
+static void cpu_init_psci_arch(unsigned int cpu)
 {
-	if (mrs(CurrentEL) != CURRENTEL_EL3)
-		return false;
+	if (mrs(CurrentEL) != CURRENTEL_EL3) {
+		print_cpu_warn(cpu, "PSCI could not be initialized (not booted at EL3).\r\n");
+		return;
+	}
 
 	msr(VBAR_EL3, (unsigned long)psci_vectors);
 	isb();
-
-	return true;
 }
+#else
+static void cpu_init_psci_arch(unsigned int cpu) { }
 #endif
 
-void cpu_init_arch(void)
+void cpu_init_arch(unsigned int cpu)
 {
 	if (mrs(CurrentEL) == CURRENTEL_EL3) {
 		cpu_init_el3();
 		gic_secure_init();
 	}
 
+	cpu_init_psci_arch(cpu);
+
 	msr(CNTFRQ_EL0, COUNTER_FREQ);
 }
diff --git a/arch/aarch64/spin.S b/arch/aarch64/spin.S
index a7879d4..81d7ee4 100644
--- a/arch/aarch64/spin.S
+++ b/arch/aarch64/spin.S
@@ -12,9 +12,6 @@
 
 	.text
 
-ASM_FUNC(cpu_init_bootmethod)
-	ret
-
 ASM_FUNC(start_bootmethod)
 	cpuid	x0, x1
 	bl	find_logical_id
diff --git a/common/init.c b/common/init.c
index 3c05ac3..6197ead 100644
--- a/common/init.c
+++ b/common/init.c
@@ -43,18 +43,50 @@ static void announce_objects(void)
 
 void announce_arch(void);
 
+static void init_bootwrapper(void)
+{
+	init_uart();
+	announce_bootwrapper();
+	announce_arch();
+	announce_objects();
+	init_platform();
+}
+
+static void cpu_init_self(unsigned int cpu)
+{
+	print_string("CPU");
+	print_uint_dec(cpu);
+	print_string(": (MPIDR ");
+	print_ulong_hex(read_mpidr());
+	print_string(") initializing...\r\n");
+
+	cpu_init_arch(cpu);
+
+	print_cpu_msg(cpu, "Done\r\n");
+}
+
 void cpu_init_bootwrapper(void)
 {
+	static volatile unsigned int cpu_next = 0;
 	unsigned int cpu = this_cpu_logical_id();
 
-	if (cpu == 0) {
-		init_uart();
-		announce_bootwrapper();
-		announce_arch();
-		announce_objects();
-		print_string("\r\n");
-		init_platform();
-	}
+	if (cpu == 0)
+		init_bootwrapper();
+
+	while (cpu_next != cpu)
+		wfe();
+
+	cpu_init_self(cpu);
+
+	cpu_next = cpu + 1;
+	dsb(sy);
+	sev();
+
+	if (cpu != 0)
+		return;
+
+	while (cpu_next != NR_CPUS)
+		wfe();
 
-	cpu_init_bootmethod(cpu);
+	print_string("All CPUs initialized. Entering kernel...\r\n\r\n");
 }
diff --git a/common/psci.c b/common/psci.c
index 19cc315..5fe8999 100644
--- a/common/psci.c
+++ b/common/psci.c
@@ -87,17 +87,3 @@ void __noreturn psci_first_spin(void)
 
 	unreachable();
 }
-
-void cpu_init_bootmethod(unsigned int cpu)
-{
-	if (cpu_init_psci_arch())
-		return;
-
-	if (cpu == 0) {
-		print_string("WARNING: PSCI could not be initialized. Boot may fail\r\n\r\n");
-		return;
-	}
-
-	while (1)
-		wfe();
-}
diff --git a/include/boot.h b/include/boot.h
index 18b805d..12c9c5c 100644
--- a/include/boot.h
+++ b/include/boot.h
@@ -17,10 +17,6 @@ void __noreturn spin(unsigned long *mbox, unsigned long invalid);
 void __noreturn first_spin(unsigned int cpu, unsigned long *mbox,
 			   unsigned long invalid_addr);
 
-void cpu_init_bootmethod(unsigned int cpu);
-
-#ifdef PSCI
-bool cpu_init_psci_arch(void);
-#endif
+void cpu_init_arch(unsigned int cpu);
 
 #endif
-- 
2.30.2



^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 01/10] aarch64: Remove redundant EL1 entry logic
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 01/10] aarch64: Remove redundant EL1 entry logic Mark Rutland
@ 2024-08-12 17:37   ` Andre Przywara
  0 siblings, 0 replies; 30+ messages in thread
From: Andre Przywara @ 2024-08-12 17:37 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Mon, 12 Aug 2024 11:15:46 +0100
Mark Rutland <mark.rutland@arm.com> wrote:

Hi,

> For historical reasons the boot-wrapper has code to handle being entered
> at Non-secure EL1, but currently this is unsupported and cannot be used
> to boot a kernel as jump_kernel() unconditionally writes to SCTLR_EL2,
> which will UNDEF.
> 
> Remove the logic for handling Non-secure EL1.

Indeed it hangs late when entered in EL1 atm, so we are not losing any
functionality. It would be nice to support this, but it seems this could be
added much more easily after this series, if people really need that.

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Acked-by: Marc Zyngier <maz@kernel.org>
> Cc: Akos Denke <akos.denke@arm.com>
> Cc: Andre Przywara <andre.przywara@arm.com>
> Cc: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  arch/aarch64/boot.S | 17 +----------------
>  1 file changed, 1 insertion(+), 16 deletions(-)
> 
> diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> index da5fa65..73ddcd0 100644
> --- a/arch/aarch64/boot.S
> +++ b/arch/aarch64/boot.S
> @@ -31,10 +31,8 @@ ASM_FUNC(_start)
>  	b.eq	reset_at_el3
>  	cmp	x0, #CURRENTEL_EL2
>  	b.eq	reset_at_el2
> -	cmp	x0, #CURRENTEL_EL1
> -	b.eq	reset_at_el1
>  
> -	/* Booting at EL0 is not supported */
> +	/* Booting at EL1 or EL0 is not supported */
>  	b	.
>  
>  	/*
> @@ -72,19 +70,6 @@ reset_at_el2:
>  	msr	sctlr_el2, x0
>  	isb
>  
> -	b	reset_no_el3
> -
> -	/*
> -	 * EL1 initialization
> -	 */
> -reset_at_el1:
> -	mov_64	x0, SCTLR_EL1_RESET
> -	msr	sctlr_el1, x0
> -	isb
> -
> -	b	reset_no_el3
> -
> -reset_no_el3:
>  	cpuid	x0, x1
>  	bl	find_logical_id
>  	cmp	x0, #MPIDR_INVALID



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 02/10] aarch64: Implement cpu_init_arch()
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 02/10] aarch64: Implement cpu_init_arch() Mark Rutland
@ 2024-08-13 17:13   ` Andre Przywara
  0 siblings, 0 replies; 30+ messages in thread
From: Andre Przywara @ 2024-08-13 17:13 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Mon, 12 Aug 2024 11:15:47 +0100
Mark Rutland <mark.rutland@arm.com> wrote:

> When the boot-wrapper is entered at EL2 it does not initialise
> CNTFRQ_EL0, and in future it may need to initialize other CPU state
> regardless of the exception level it was entered at.
> 
> Use a common cpu_init_arch() function to initialize CPU state regardless
> of the exception level the boot-wrapper was entered at.
> 
> This change means that the boot-wrapper can only be used when entered
> at the highest implemented exception level, as accesses to CNTFRQ_EL0
> will be UNDEFINED at lower exception levels. However, the boot-wrapper
> only supports being booted at the highest implemented exception level,
> as the comment at the top of boot.S describes:
> 
> | The boot-wrapper must be entered from the reset vector at the
> | highest implemented exception level.
> 
> ... so this should not adversely affect any supported configuration.

Yes, makes sense, and fixes the panic from the kernel when entered in
EL2, where it cannot determine the arch timer frequency:

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre

> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Acked-by: Marc Zyngier <maz@kernel.org>
> Cc: Akos Denke <akos.denke@arm.com>
> Cc: Andre Przywara <andre.przywara@arm.com>
> Cc: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  arch/aarch64/boot.S |  4 +++-
>  arch/aarch64/init.c | 12 +++++++++---
>  2 files changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> index 73ddcd0..52c617d 100644
> --- a/arch/aarch64/boot.S
> +++ b/arch/aarch64/boot.S
> @@ -51,7 +51,7 @@ reset_at_el3:
>  
>  	bl	cpu_init_bootwrapper
>  
> -	bl	cpu_init_el3
> +	bl	cpu_init_arch
>  
>  	bl	gic_secure_init
>  
> @@ -82,6 +82,8 @@ reset_at_el2:
>  
>  	bl	cpu_init_bootwrapper
>  
> +	bl	cpu_init_arch
> +
>  	b	start_bootmethod
>  
>  err_invalid_id:
> diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
> index c9fc7f1..49abdf7 100644
> --- a/arch/aarch64/init.c
> +++ b/arch/aarch64/init.c
> @@ -52,7 +52,7 @@ static inline bool cpu_has_permission_indirection(void)
>  	return mrs(ID_AA64MMFR3_EL1) & mask;
>  }
>  
> -void cpu_init_el3(void)
> +static void cpu_init_el3(void)
>  {
>  	unsigned long scr = SCR_EL3_RES1 | SCR_EL3_NS | SCR_EL3_HCE;
>  	unsigned long mdcr = 0;
> @@ -153,8 +153,6 @@ void cpu_init_el3(void)
>  
>  		msr(SMCR_EL3, smcr);
>  	}
> -
> -	msr(CNTFRQ_EL0, COUNTER_FREQ);
>  }
>  
>  #ifdef PSCI
> @@ -171,3 +169,11 @@ bool cpu_init_psci_arch(void)
>  	return true;
>  }
>  #endif
> +
> +void cpu_init_arch(void)
> +{
> +	if (mrs(CurrentEL) == CURRENTEL_EL3)
> +		cpu_init_el3();
> +
> +	msr(CNTFRQ_EL0, COUNTER_FREQ);
> +}



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 03/10] aarch64: Always enter kernel via exception return
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 03/10] aarch64: Always enter kernel via exception return Mark Rutland
@ 2024-08-13 17:14   ` Andre Przywara
  0 siblings, 0 replies; 30+ messages in thread
From: Andre Przywara @ 2024-08-13 17:14 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Mon, 12 Aug 2024 11:15:48 +0100
Mark Rutland <mark.rutland@arm.com> wrote:

> When the boot-wrapper is entered at EL3 it will enter the kernel via
> ERET, and when entered at EL2 it will branch to the kernel directly.
> This is an artifact of the way the boot-wrapper was originally written
> in assembly, and it would be preferable to always enter the kernel via
> ERET so that PSTATE is always initialized to a known-good value.
> 
> Rework jump_kernel() to always enter the kernel via ERET.

That looks fine and still boots when entered in EL2 or EL3. Just one nit
below, with that:

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Acked-by: Marc Zyngier <maz@kernel.org>
> Cc: Akos Denke <akos.denke@arm.com>
> Cc: Andre Przywara <andre.przywara@arm.com>
> Cc: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  arch/aarch64/boot.S | 30 +++++++++++++-----------------
>  1 file changed, 13 insertions(+), 17 deletions(-)
> 
> diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> index 52c617d..0ac0c98 100644
> --- a/arch/aarch64/boot.S
> +++ b/arch/aarch64/boot.S
> @@ -76,10 +76,6 @@ reset_at_el2:
>  	b.eq	err_invalid_id
>  	bl	setup_stack
>  
> -	mov	w0, #1
> -	ldr	x1, =flag_no_el3
> -	str	w0, [x1]
> -
>  	bl	cpu_init_bootwrapper
>  
>  	bl	cpu_init_arch
> @@ -111,18 +107,11 @@ ASM_FUNC(jump_kernel)
>  	bl	find_logical_id
>  	bl	setup_stack		// Reset stack pointer
>  
> -	ldr	w0, flag_no_el3
> -	cmp	w0, #0			// Prepare Z flag
> -
>  	mov	x0, x20
>  	mov	x1, x21
>  	mov	x2, x22
>  	mov	x3, x23
> -
> -	b.eq	1f
> -	br	x19			// No EL3
> -
> -1:	mov	x4, #SPSR_KERNEL
> +	mov	x4, #SPSR_KERNEL
>  
>  	/*
>  	 * If bit 0 of the kernel address is set, we're entering in AArch32
> @@ -130,13 +119,20 @@ ASM_FUNC(jump_kernel)
>  	 */
>  	bfi	x4, x19, #5, #1
>  
> +	mrs	x5, CurrentEL
> +	cmp	x5, #CURRENTEL_EL3
> +	b.eq	eret_at_el3
> +	cmp	x5, #CURRENTEL_EL2
> +	b.eq	eret_at_el2
> +	b	.			// Not possible
> +
> +eret_at_el3:
>  	msr	elr_el3, x19
>  	msr	spsr_el3, x4
>  	eret
> +eret_at_el2:
> +	msr	elr_el2, x19
> +	msr	spsr_el2, x4
> +	eret
>  
>  	.ltorg

Looks like we don't need this directive anymore.

Cheers,
Andre

> -
> -	.data
> -	.align 3
> -flag_no_el3:
> -	.long 0



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 04/10] aarch32: Refactor inital entry
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 04/10] aarch32: Refactor inital entry Mark Rutland
@ 2024-08-19 17:21   ` Andre Przywara
  0 siblings, 0 replies; 30+ messages in thread
From: Andre Przywara @ 2024-08-19 17:21 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Mon, 12 Aug 2024 11:15:49 +0100
Mark Rutland <mark.rutland@arm.com> wrote:

> For historical reasons the early AArch32 code is structured differently
> from the early AArch64 code, with some common code (including stack
> setup) performed before we identify the mode we were entered in.
> 
> Align the structure of the early AArch32 code with that of the early
> AArch64 code. This will make subsequent refactoring easier.

Right, the result of this patch looks like it can be further optimised,
but this is happening later in the series. As such the changes look
alright, they should not change behaviour.

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre

> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Acked-by: Marc Zyngier <maz@kernel.org>
> Cc: Akos Denke <akos.denke@arm.com>
> Cc: Andre Przywara <andre.przywara@arm.com>
> Cc: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  arch/aarch32/boot.S | 55 ++++++++++++++++++++++++++++-----------------
>  1 file changed, 34 insertions(+), 21 deletions(-)
> 
> diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
> index 4d16c9c..cf83e55 100644
> --- a/arch/aarch32/boot.S
> +++ b/arch/aarch32/boot.S
> @@ -31,7 +31,28 @@
>  	 *   PSCI is not supported when entered in this mode.
>  	 */
>  ASM_FUNC(_start)
> -	/* Stack initialisation */
> +	mrs	r0, cpsr
> +	and	r0, #PSR_MODE_MASK
> +	cmp	r0, #PSR_SVC
> +	beq	reset_at_svc
> +	cmp	r0, #PSR_HYP
> +	beq	reset_at_hyp
> +
> +	/* Booting at other modes is not supported */
> +	b	.
> +
> +reset_at_svc:
> +	/*
> +	 * When entered in Secure SVC mode we must switch to monitor mode to
> +	 * configure SCR.NS. Switch to monitor mode ASAP to simplify later
> +	 * code.
> +	 */
> +	adr	lr, reset_at_mon
> +	ldr	r0, =(PSR_A | PSR_I | PSR_F | PSR_MON)
> +	msr	spsr, r0
> +	movs	pc, lr
> +
> +reset_at_mon:
>  	cpuid	r0, r1
>  	bl	find_logical_id
>  	cmp	r0, #MPIDR_INVALID
> @@ -39,36 +60,28 @@ ASM_FUNC(_start)
>  
>  	bl	setup_stack
>  
> -	mrs	r0, cpsr
> -	and	r0, #PSR_MODE_MASK
> -
> -	cmp	r0, #PSR_HYP
> -	bne	_switch_monitor
> +	bl	cpu_init_bootwrapper
>  
> -	mov	r0, #1
> -	ldr	r1, =flag_no_el3
> -	str	r0, [r1]
> +	bl	cpu_init_secure_pl1
>  
> -	bl	cpu_init_bootwrapper
> +	bl	gic_secure_init
>  
>  	b	start_bootmethod
>  
> -_switch_monitor:
> -	adr	lr, _monitor
> -	ldr	r0, =(PSR_A | PSR_I | PSR_F | PSR_MON)
> -	msr	spsr, r0
> -	movs	pc, lr
> +reset_at_hyp:
> +	cpuid	r0, r1
> +	bl	find_logical_id
> +	cmp	r0, #MPIDR_INVALID
> +	beq	err_invalid_id
>  
> -_monitor:
> -	/* Move the stack to Monitor mode*/
> -	mrs	sp, sp_svc
> +	bl	setup_stack
>  
> -	bl	cpu_init_secure_pl1
> +	mov	r0, #1
> +	ldr	r1, =flag_no_el3
> +	str	r0, [r1]
>  
>  	bl	cpu_init_bootwrapper
>  
> -	bl	gic_secure_init
> -
>  	b	start_bootmethod
>  
>  err_invalid_id:



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 05/10] aarch32: Implement cpu_init_arch()
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 05/10] aarch32: Implement cpu_init_arch() Mark Rutland
@ 2024-08-19 17:21   ` Andre Przywara
  0 siblings, 0 replies; 30+ messages in thread
From: Andre Przywara @ 2024-08-19 17:21 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Mon, 12 Aug 2024 11:15:50 +0100
Mark Rutland <mark.rutland@arm.com> wrote:

> When the boot-wrapper is entered at EL2/Hyp it does not initialise
> CNTFRQ, and in future it may need to initialize other CPU state
> regardless of the exception level it was entered at.
> 
> Use a common cpu_init_arch() function to initialize CPU state regardless
> of the exception level the boot-wrapper was entered at. For clarity
> cpu_init_secure_pl1() is renamed to cpu_init_monitor(), which better
> matches PSR_MON and will allow for the addition of cpu_init_hyp() and
> cpu_init_svc() in future.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Acked-by: Marc Zyngier <maz@kernel.org>
> Cc: Akos Denke <akos.denke@arm.com>
> Cc: Andre Przywara <andre.przywara@arm.com>
> Cc: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre

> ---
>  arch/aarch32/boot.S |  4 +++-
>  arch/aarch32/init.c | 12 +++++++++---
>  2 files changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
> index cf83e55..f21f89a 100644
> --- a/arch/aarch32/boot.S
> +++ b/arch/aarch32/boot.S
> @@ -62,7 +62,7 @@ reset_at_mon:
>  
>  	bl	cpu_init_bootwrapper
>  
> -	bl	cpu_init_secure_pl1
> +	bl	cpu_init_arch
>  
>  	bl	gic_secure_init
>  
> @@ -82,6 +82,8 @@ reset_at_hyp:
>  
>  	bl	cpu_init_bootwrapper
>  
> +	bl	cpu_init_arch
> +
>  	b	start_bootmethod
>  
>  err_invalid_id:
> diff --git a/arch/aarch32/init.c b/arch/aarch32/init.c
> index e25f0c7..35da37c 100644
> --- a/arch/aarch32/init.c
> +++ b/arch/aarch32/init.c
> @@ -29,7 +29,7 @@ void announce_arch(void)
>  	print_string("\r\n");
>  }
>  
> -void cpu_init_secure_pl1(void)
> +static void cpu_init_monitor(void)
>  {
>  	unsigned long scr = SCR_NS | SCR_HCE;
>  	unsigned long nsacr = NSACR_CP10 | NSACR_CP11;
> @@ -37,8 +37,6 @@ void cpu_init_secure_pl1(void)
>  	mcr(SCR, scr);
>  
>  	mcr(NSACR, nsacr);
> -
> -	mcr(CNTFRQ, COUNTER_FREQ);
>  }
>  
>  #ifdef PSCI
> @@ -55,3 +53,11 @@ bool cpu_init_psci_arch(void)
>  	return true;
>  }
>  #endif
> +
> +void cpu_init_arch(void)
> +{
> +	if (read_cpsr_mode() == PSR_MON)
> +		cpu_init_monitor();
> +
> +	mcr(CNTFRQ, COUNTER_FREQ);
> +}



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 06/10] aarch32: Always enter kernel via exception return
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 06/10] aarch32: Always enter kernel via exception return Mark Rutland
@ 2024-08-19 17:22   ` Andre Przywara
  2024-08-20 11:43     ` Mark Rutland
  2024-08-20 11:47     ` Mark Rutland
  0 siblings, 2 replies; 30+ messages in thread
From: Andre Przywara @ 2024-08-19 17:22 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Mon, 12 Aug 2024 11:15:51 +0100
Mark Rutland <mark.rutland@arm.com> wrote:

Hi Mark,

> When the boot-wrapper is entered at Seculre PL1 it will enter the kernel

				      Secure

> via an exception return, ERET, and when entered at Hyp it will branch to
> the kernel directly. This is an artifact of the way the boot-wrapper was
> originally written in assembly, and it would be preferable to always
> enter the kernel via an exception return so that PSTATE is always
> initialized to a known-good value.
> 
> Rework jump_kernel() to always enter the kernel via ERET, matching the
> stype of the AArch64 version of jump_kernel()

  type

> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Acked-by: Marc Zyngier <maz@kernel.org>
> Cc: Akos Denke <akos.denke@arm.com>
> Cc: Andre Przywara <andre.przywara@arm.com>
> Cc: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  arch/aarch32/boot.S | 48 +++++++++++++++++++++++----------------------
>  1 file changed, 25 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
> index f21f89a..78d19a0 100644
> --- a/arch/aarch32/boot.S
> +++ b/arch/aarch32/boot.S
> @@ -76,10 +76,6 @@ reset_at_hyp:
>  
>  	bl	setup_stack
>  
> -	mov	r0, #1
> -	ldr	r1, =flag_no_el3
> -	str	r0, [r1]
> -
>  	bl	cpu_init_bootwrapper
>  
>  	bl	cpu_init_arch
> @@ -96,9 +92,10 @@ err_invalid_id:
>  	 * r1-r3, sp[0]: kernel arguments
>  	 */
>  ASM_FUNC(jump_kernel)
> -	sub	sp, #4				@ Ignore fourth argument

Can we maybe keep the comment, to avoid confusion? The comment above
explicitly mentions sp[0], but we never use it.

> -	push	{r0 - r3}
> -	mov	r5, sp
> +	mov	r4, r0
> +	mov	r5, r1
> +	mov	r6, r2
> +	mov	r7, r3
>  
>  	ldr	r0, =HSCTLR_KERNEL
>  	mcr	p15, 4, r0, c1, c0, 0		@ HSCTLR
> @@ -111,23 +108,28 @@ ASM_FUNC(jump_kernel)
>  	bl	find_logical_id
>  	bl	setup_stack
>  
> -	ldr	lr, [r5], #4
> -	ldm	r5, {r0 - r2}
> -
> -	ldr	r4, =flag_no_el3
> -	ldr	r4, [r4]
> -	cmp	r4, #1
> -	bxeq	lr				@ no EL3
> +	mov	r0, r5
> +	mov	r1, r6
> +	mov	r2, r7
> +	ldr	r3, =SPSR_KERNEL
>  
> -	ldr	r4, =SPSR_KERNEL
>  	/* Return in thumb2 mode when bit 0 of address is 1 */
> -	tst	lr, #1
> -	orrne	r4, #PSR_T
> +	tst	r4, #1
> +	orrne	r3, #PSR_T
> +
> +	mrs	r5, cpsr
> +	and	r5, #PSR_MODE_MASK
> +	cmp	r5, #PSR_MON
> +	beq	eret_at_mon
> +	cmp	r5, #PSR_HYP
> +	beq	eret_at_hyp
> +	b	.
>  
> -	msr	spsr_cxf, r4
> +eret_at_mon:
> +	mov	lr, r4
> +	msr	spsr_cxf, r3
>  	movs	pc, lr

Maybe that's just a wording confusion between "exception return
instruction" and "eret", but both the commit message and the label
promise an eret, and we have a "movs pc" here.
Reading "B9.1 General restrictions on system instructions" in the ARMv7 ARM
I don't immediately see why an eret wouldn't be possible here.

If there is a restriction I missed, I guess either a comment here or in
the commit message would be helpful.

> -
> -	.section .data
> -	.align 2
> -flag_no_el3:
> -	.long 0
> +eret_at_hyp:
> +	msr	elr_hyp, r4
> +	msr	spsr_cxf, r3

Shouldn't that be spsr_hyp?

Cheers,
Andre


> +	eret



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 06/10] aarch32: Always enter kernel via exception return
  2024-08-19 17:22   ` Andre Przywara
@ 2024-08-20 11:43     ` Mark Rutland
  2024-08-20 12:59       ` Andre Przywara
  2024-08-20 11:47     ` Mark Rutland
  1 sibling, 1 reply; 30+ messages in thread
From: Mark Rutland @ 2024-08-20 11:43 UTC (permalink / raw)
  To: Andre Przywara; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Mon, Aug 19, 2024 at 06:22:41PM +0100, Andre Przywara wrote:
> On Mon, 12 Aug 2024 11:15:51 +0100
> Mark Rutland <mark.rutland@arm.com> wrote:
> 
> Hi Mark,
> 
> > When the boot-wrapper is entered at Seculre PL1 it will enter the kernel
> 
> 				      Secure
> 
> > via an exception return, ERET, and when entered at Hyp it will branch to

FWIW, that "ERET, " here was meant to go too (which I think addresses a
later concern below). I had taken the commit message wording from the
early AArch64 commit and adjusted it to suit AArch32, but clearly I had
done that in a rush and made a number of mistakes.

> > the kernel directly. This is an artifact of the way the boot-wrapper was
> > originally written in assembly, and it would be preferable to always
> > enter the kernel via an exception return so that PSTATE is always
> > initialized to a known-good value.
> > 
> > Rework jump_kernel() to always enter the kernel via ERET, matching the
> > stype of the AArch64 version of jump_kernel()
> 
>   type
> 
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Acked-by: Marc Zyngier <maz@kernel.org>
> > Cc: Akos Denke <akos.denke@arm.com>
> > Cc: Andre Przywara <andre.przywara@arm.com>
> > Cc: Luca Fancellu <luca.fancellu@arm.com>
> > ---
> >  arch/aarch32/boot.S | 48 +++++++++++++++++++++++----------------------
> >  1 file changed, 25 insertions(+), 23 deletions(-)
> > 
> > diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
> > index f21f89a..78d19a0 100644
> > --- a/arch/aarch32/boot.S
> > +++ b/arch/aarch32/boot.S
> > @@ -76,10 +76,6 @@ reset_at_hyp:
> >  
> >  	bl	setup_stack
> >  
> > -	mov	r0, #1
> > -	ldr	r1, =flag_no_el3
> > -	str	r0, [r1]
> > -
> >  	bl	cpu_init_bootwrapper
> >  
> >  	bl	cpu_init_arch
> > @@ -96,9 +92,10 @@ err_invalid_id:
> >  	 * r1-r3, sp[0]: kernel arguments
> >  	 */
> >  ASM_FUNC(jump_kernel)
> > -	sub	sp, #4				@ Ignore fourth argument
> 
> Can we maybe keep the comment, to avoid confusion? The comment above
> explicitly mentions sp[0], but we never use it.
> 
> > -	push	{r0 - r3}
> > -	mov	r5, sp
> > +	mov	r4, r0
> > +	mov	r5, r1
> > +	mov	r6, r2
> > +	mov	r7, r3
> >  
> >  	ldr	r0, =HSCTLR_KERNEL
> >  	mcr	p15, 4, r0, c1, c0, 0		@ HSCTLR
> > @@ -111,23 +108,28 @@ ASM_FUNC(jump_kernel)
> >  	bl	find_logical_id
> >  	bl	setup_stack
> >  
> > -	ldr	lr, [r5], #4
> > -	ldm	r5, {r0 - r2}
> > -
> > -	ldr	r4, =flag_no_el3
> > -	ldr	r4, [r4]
> > -	cmp	r4, #1
> > -	bxeq	lr				@ no EL3
> > +	mov	r0, r5
> > +	mov	r1, r6
> > +	mov	r2, r7
> > +	ldr	r3, =SPSR_KERNEL
> >  
> > -	ldr	r4, =SPSR_KERNEL
> >  	/* Return in thumb2 mode when bit 0 of address is 1 */
> > -	tst	lr, #1
> > -	orrne	r4, #PSR_T
> > +	tst	r4, #1
> > +	orrne	r3, #PSR_T
> > +
> > +	mrs	r5, cpsr
> > +	and	r5, #PSR_MODE_MASK
> > +	cmp	r5, #PSR_MON
> > +	beq	eret_at_mon
> > +	cmp	r5, #PSR_HYP
> > +	beq	eret_at_hyp
> > +	b	.
> >  
> > -	msr	spsr_cxf, r4
> > +eret_at_mon:
> > +	mov	lr, r4
> > +	msr	spsr_cxf, r3
> >  	movs	pc, lr
> 
> Maybe that's just a wording confusion between "exception return
> instruction" and "eret", but both the commit message and the label
> promise an eret, and we have a "movs pc" here.

Wording-wise, "ERET" was spurious, and the commit message was intended
to say "via an exception return", with the "movs pc, lr" being the
exception return.

> Reading "B9.1 General restrictions on system instructions" in the ARMv7 ARM
> I don't immediately see why an eret wouldn't be possible here.
> 
> If there is a restriction I missed, I guess either a comment here or in
> the commit message would be helpful.

We can use ERET here; IIRC that was added in the ARMv7 virtualization
extensions, but the boot-wrapper requires that and really it's ARMv8+
anyway. I had opted to stick with "movs pc, lr" because it was a
(trivially) smaller change, and kept the cases distinct, but I'm happy
to use ERET.

However, beware that in AArch32 ERET is a bit odd: in Hyp mode it takes the
return address from ELR_HYP, while in all other modes it takes it from
the LR (as only hyp has an ELR).

> > -
> > -	.section .data
> > -	.align 2
> > -flag_no_el3:
> > -	.long 0
> > +eret_at_hyp:
> > +	msr	elr_hyp, r4
> > +	msr	spsr_cxf, r3
> 
> Shouldn't that be spsr_hyp?

It can be, but doesn't need to be. This is the SPSR_<fields> encoding,
which writes to the SPSR owned by the active mode, though it skips
bits<23:16>, which we probably should initialise.

If I change that all to:

| eret_at_mon:
| 	mov	lr, r4
| 	msr	spsr_mon, r3
| 	eret
| eret_at_hyp:
| 	msr     elr_hyp, r4
| 	msr     spsr_hyp, r3
|

... do you think that's clear enough, or do you think we need a comment
about the "LR" vs "ELR_HYP" distinction?

Mark.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 06/10] aarch32: Always enter kernel via exception return
  2024-08-19 17:22   ` Andre Przywara
  2024-08-20 11:43     ` Mark Rutland
@ 2024-08-20 11:47     ` Mark Rutland
  2024-08-20 12:23       ` Andre Przywara
  1 sibling, 1 reply; 30+ messages in thread
From: Mark Rutland @ 2024-08-20 11:47 UTC (permalink / raw)
  To: Andre Przywara; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Mon, Aug 19, 2024 at 06:22:41PM +0100, Andre Przywara wrote:
> On Mon, 12 Aug 2024 11:15:51 +0100
> Mark Rutland <mark.rutland@arm.com> wrote:
> > Rework jump_kernel() to always enter the kernel via ERET, matching the
> > stype of the AArch64 version of jump_kernel()
> 
>   type

That's actually meant to be "style", and "ERET" should be "an exception
return", i.e.

| Rework jump_kernel() to always enter the kernel via an exception
| return, matching the style of the AArch64 version of jump_kernel()

Mark.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 06/10] aarch32: Always enter kernel via exception return
  2024-08-20 11:47     ` Mark Rutland
@ 2024-08-20 12:23       ` Andre Przywara
  0 siblings, 0 replies; 30+ messages in thread
From: Andre Przywara @ 2024-08-20 12:23 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Tue, 20 Aug 2024 12:47:59 +0100
Mark Rutland <mark.rutland@arm.com> wrote:

> On Mon, Aug 19, 2024 at 06:22:41PM +0100, Andre Przywara wrote:
> > On Mon, 12 Aug 2024 11:15:51 +0100
> > Mark Rutland <mark.rutland@arm.com> wrote:  
> > > Rework jump_kernel() to always enter the kernel via ERET, matching the
> > > stype of the AArch64 version of jump_kernel()  
> > 
> >   type  
> 
> That's actually meant to be "style", and "ERET" should be "an exception
> return", i.e.

Ah, but three tries is not bad for a Wordle ;-)

Cheers,
Andre

> 
> | Rework jump_kernel() to always enter the kernel via an exception
> | return, matching the style of the AArch64 version of jump_kernel()
> 
> Mark.



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 06/10] aarch32: Always enter kernel via exception return
  2024-08-20 11:43     ` Mark Rutland
@ 2024-08-20 12:59       ` Andre Przywara
  2024-08-20 13:36         ` Mark Rutland
  0 siblings, 1 reply; 30+ messages in thread
From: Andre Przywara @ 2024-08-20 12:59 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Tue, 20 Aug 2024 12:43:18 +0100
Mark Rutland <mark.rutland@arm.com> wrote:

Hi,

> On Mon, Aug 19, 2024 at 06:22:41PM +0100, Andre Przywara wrote:
> > On Mon, 12 Aug 2024 11:15:51 +0100
> > Mark Rutland <mark.rutland@arm.com> wrote:
> > 
> > Hi Mark,
> >   
> > > When the boot-wrapper is entered at Seculre PL1 it will enter the kernel  
> > 
> > 				      Secure
> >   
> > > via an exception return, ERET, and when entered at Hyp it will branch to  
> 
> FWIW, that "ERET, " here was meant to go too (which I think addresses a
> later concern below). I had taken the commit message wording from the
> early AArch64 commit and adjusted it to suit AArch32, but clearly I had
> done that in a rush and made a number of mistakes.

Yeah, no worries, I was just getting a bit confused, especially when I
learned more about the subtleties of exception returns on ARMv7 than I
ever wanted to know yesterday.

> > > the kernel directly. This is an artifact of the way the boot-wrapper was
> > > originally written in assembly, and it would be preferable to always
> > > enter the kernel via an exception return so that PSTATE is always
> > > initialized to a known-good value.
> > > 
> > > Rework jump_kernel() to always enter the kernel via ERET, matching the
> > > stype of the AArch64 version of jump_kernel()  
> > 
> >   type
> >   
> > > 
> > > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > > Acked-by: Marc Zyngier <maz@kernel.org>
> > > Cc: Akos Denke <akos.denke@arm.com>
> > > Cc: Andre Przywara <andre.przywara@arm.com>
> > > Cc: Luca Fancellu <luca.fancellu@arm.com>
> > > ---
> > >  arch/aarch32/boot.S | 48 +++++++++++++++++++++++----------------------
> > >  1 file changed, 25 insertions(+), 23 deletions(-)
> > > 
> > > diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
> > > index f21f89a..78d19a0 100644
> > > --- a/arch/aarch32/boot.S
> > > +++ b/arch/aarch32/boot.S
> > > @@ -76,10 +76,6 @@ reset_at_hyp:
> > >  
> > >  	bl	setup_stack
> > >  
> > > -	mov	r0, #1
> > > -	ldr	r1, =flag_no_el3
> > > -	str	r0, [r1]
> > > -
> > >  	bl	cpu_init_bootwrapper
> > >  
> > >  	bl	cpu_init_arch
> > > @@ -96,9 +92,10 @@ err_invalid_id:
> > >  	 * r1-r3, sp[0]: kernel arguments
> > >  	 */
> > >  ASM_FUNC(jump_kernel)
> > > -	sub	sp, #4				@ Ignore fourth argument  
> > 
> > Can we maybe keep the comment, to avoid confusion? The comment above
> > explicitly mentions sp[0], but we never use it.
> >   
> > > -	push	{r0 - r3}
> > > -	mov	r5, sp
> > > +	mov	r4, r0
> > > +	mov	r5, r1
> > > +	mov	r6, r2
> > > +	mov	r7, r3
> > >  
> > >  	ldr	r0, =HSCTLR_KERNEL
> > >  	mcr	p15, 4, r0, c1, c0, 0		@ HSCTLR
> > > @@ -111,23 +108,28 @@ ASM_FUNC(jump_kernel)
> > >  	bl	find_logical_id
> > >  	bl	setup_stack
> > >  
> > > -	ldr	lr, [r5], #4
> > > -	ldm	r5, {r0 - r2}
> > > -
> > > -	ldr	r4, =flag_no_el3
> > > -	ldr	r4, [r4]
> > > -	cmp	r4, #1
> > > -	bxeq	lr				@ no EL3
> > > +	mov	r0, r5
> > > +	mov	r1, r6
> > > +	mov	r2, r7
> > > +	ldr	r3, =SPSR_KERNEL
> > >  
> > > -	ldr	r4, =SPSR_KERNEL
> > >  	/* Return in thumb2 mode when bit 0 of address is 1 */
> > > -	tst	lr, #1
> > > -	orrne	r4, #PSR_T
> > > +	tst	r4, #1
> > > +	orrne	r3, #PSR_T
> > > +
> > > +	mrs	r5, cpsr
> > > +	and	r5, #PSR_MODE_MASK
> > > +	cmp	r5, #PSR_MON
> > > +	beq	eret_at_mon
> > > +	cmp	r5, #PSR_HYP
> > > +	beq	eret_at_hyp
> > > +	b	.
> > >  
> > > -	msr	spsr_cxf, r4
> > > +eret_at_mon:
> > > +	mov	lr, r4
> > > +	msr	spsr_cxf, r3
> > >  	movs	pc, lr  
> > 
> > Maybe that's just a wording confusion between "exception return
> > instruction" and "eret", but both the commit message and the label
> > promise an eret, and we have a "movs pc" here.  
> 
> Wording-wise, "ERET" was spurious, and the commit message was intended
> to say "via an exception return", with the "movs pc, lr" being the
> exception return.

Ah thanks, that was one of the two possibilities I considered.

> > Reading "B9.1 General restrictions on system instructions" in the ARMv7 ARM
> > I don't immediately see why an eret wouldn't be possible here.
> > 
> > If there is a restriction I missed, I guess either a comment here or in
> > the commit message would be helpful.  
> 
> We can use ERET here; IIRC that was added in the ARMv7 virtualization
> extensions, but the boot-wrapper requires that and really it's ARMv8+

Is that so? I mean in all practicality we will indeed use the bootwrapper
on ARMv8 only these days, but I don't think we need to artificially limit
this. Also I consider the boot-wrapper one of the more reliable sources
for ARMv7 boot code, so not sure we should drop this aspect.
There is one ARMv7 compile time check, to avoid "sevl", so we have some
support, at least.

> anyway. I had opted to stick with "movs pc, lr" because it was a
> (trivially) smaller change, and kept the cases distinct, but I'm happy
> to use ERET.
> 
> However, beware that in AArch32 ERET is a bit odd: in Hyp mode it takes
> the return address from ELR_HYP, while in all other modes it takes it
> from the LR (as only hyp has an ELR).

Yeah, I saw this yesterday, and am even more grateful for the ARMv8
exception model now ;-)

So I am fine with "movs pc, lr", if that's the more canonical way on
32-bit/ARMv7. On the other hand your revised sequence below looks
intriguingly simple ...

> 
> > > -
> > > -	.section .data
> > > -	.align 2
> > > -flag_no_el3:
> > > -	.long 0
> > > +eret_at_hyp:
> > > +	msr	elr_hyp, r4
> > > +	msr	spsr_cxf, r3  
> > 
> > Shouldn't that be spsr_hyp?  
> 
> It can be, but doesn't need to be. This is the SPSR_<fields> encoding,

So I didn't know about this until yesterday, and it's not easy to find,
since it seems not to be mentioned as such in the ARM ARM (at least not
"cxf"). binutils seems to disassemble this to SPSR_fxc, but I guess we
should indeed move to SPSR_fsxc (if we keep this at all).

> which writes to the SPSR owned by the active mode, though it skips
> bits<23:16>, which we probably should initialise.
> 
> If I change that all to:
> 
> | eret_at_mon:
> | 	mov	lr, r4
> | 	msr	spsr_mon, r3
> | 	eret
> | eret_at_hyp:
> | 	msr     elr_hyp, r4
> | 	msr     spsr_hyp, r3
> |
> 
> ... do you think that's clear enough, or do you think we need a comment
> about the "LR" vs "ELR_HYP" distinction?

Oh, that certainly looks the clearest, but indeed a comment on LR vs. ELR
situation looks indicated.

Thanks,
Andre

> 
> Mark.



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 06/10] aarch32: Always enter kernel via exception return
  2024-08-20 12:59       ` Andre Przywara
@ 2024-08-20 13:36         ` Mark Rutland
  2024-08-20 13:50           ` Andre Przywara
  0 siblings, 1 reply; 30+ messages in thread
From: Mark Rutland @ 2024-08-20 13:36 UTC (permalink / raw)
  To: Andre Przywara; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Tue, Aug 20, 2024 at 01:59:44PM +0100, Andre Przywara wrote:
> On Tue, 20 Aug 2024 12:43:18 +0100
> Mark Rutland <mark.rutland@arm.com> wrote:
> > On Mon, Aug 19, 2024 at 06:22:41PM +0100, Andre Przywara wrote:
> > > On Mon, 12 Aug 2024 11:15:51 +0100
> > > Mark Rutland <mark.rutland@arm.com> wrote:

> > > > @@ -111,23 +108,28 @@ ASM_FUNC(jump_kernel)
> > > >  	bl	find_logical_id
> > > >  	bl	setup_stack
> > > >  
> > > > -	ldr	lr, [r5], #4
> > > > -	ldm	r5, {r0 - r2}
> > > > -
> > > > -	ldr	r4, =flag_no_el3
> > > > -	ldr	r4, [r4]
> > > > -	cmp	r4, #1
> > > > -	bxeq	lr				@ no EL3
> > > > +	mov	r0, r5
> > > > +	mov	r1, r6
> > > > +	mov	r2, r7
> > > > +	ldr	r3, =SPSR_KERNEL
> > > >  
> > > > -	ldr	r4, =SPSR_KERNEL
> > > >  	/* Return in thumb2 mode when bit 0 of address is 1 */
> > > > -	tst	lr, #1
> > > > -	orrne	r4, #PSR_T
> > > > +	tst	r4, #1
> > > > +	orrne	r3, #PSR_T
> > > > +
> > > > +	mrs	r5, cpsr
> > > > +	and	r5, #PSR_MODE_MASK
> > > > +	cmp	r5, #PSR_MON
> > > > +	beq	eret_at_mon
> > > > +	cmp	r5, #PSR_HYP
> > > > +	beq	eret_at_hyp
> > > > +	b	.
> > > >  
> > > > -	msr	spsr_cxf, r4
> > > > +eret_at_mon:
> > > > +	mov	lr, r4
> > > > +	msr	spsr_cxf, r3
> > > >  	movs	pc, lr  

> > > Reading "B9.1 General restrictions on system instructions" in the ARMv7 ARM
> > > I don't immediately see why an eret wouldn't be possible here.
> > > 
> > > If there is a restriction I missed, I guess either a comment here or in
> > > the commit message would be helpful.  
> > 
> > We can use ERET here; IIRC that was added in the ARMv7 virtualization
> > extensions, but the boot-wrapper requires that and really it's ARMv8+
> 
> Is that so? I mean in all practicality we will indeed use the bootwrapper
> on ARMv8 only these days, but I don't think we need to artificially limit
> this. Also I consider the boot-wrapper one of the more reliable sources
> for ARMv7 boot code, so not sure we should drop this aspect.
> There is one ARMv7 compile time check, to avoid "sevl", so we have some
> support, at least.

What I was trying to say here was "the minimum bound is ARMv7 +
virtualization extensions", which is already required by the
".arch_extension virt" directive that's been in this file since it was
introduced.

Practically speaking, I don't think that we should care about ARMv7
here, but if that happens to work, great!

> > anyway. I had opted to stick with "movs pc, lr" because it was a
> > (trivially) smaller change, and kept the cases distinct, but I'm happy
> > to use ERET.
> > 
> > However, beware that in AArch32 ERET is a bit odd: in Hyp mode it
> > takes the return address from ELR_HYP, while in all other modes it
> > takes it from the LR (as only hyp has an ELR).
> 
> Yeah, I saw this yesterday, and am even more grateful for the ARMv8
> exception model now ;-)
> 
> So I am fine with "movs pc, lr", if that's the more canonical way on
> 32-bit/ARMv7. On the other hand your revised sequence below looks
> intriguingly simple ...
> 
> > 
> > > > -
> > > > -	.section .data
> > > > -	.align 2
> > > > -flag_no_el3:
> > > > -	.long 0
> > > > +eret_at_hyp:
> > > > +	msr	elr_hyp, r4
> > > > +	msr	spsr_cxf, r3  
> > > 
> > > Shouldn't that be spsr_hyp?  
> > 
> > It can be, but doesn't need to be. This is the SPSR_<fields> encoding,
> 
> So I didn't know about this until yesterday, and it's not easy to find,
> since it seems not to be mentioned as such in the ARM ARM (at least not
> "cxf"). binutils seems to disassemble this to SPSR_fxc, but I guess we
> should indeed move to SPSR_fsxc (if we keep this at all).
> 
> > which writes to the SPSR owned by the active mode, though it skips
> > bits<23:16>, which we probably should initialise.
> > 
> > If I change that all to:
> > 
> > | eret_at_mon:
> > | 	mov	lr, r4
> > | 	msr	spsr_mon, r3
> > | 	eret
> > | eret_at_hyp:
> > | 	msr     elr_hyp, r4
> > | 	msr     spsr_hyp, r3
> > |
> > 
> > ... do you think that's clear enough, or do you think we need a comment
> > about the "LR" vs "ELR_HYP" distinction?
> 
> Oh, that certainly looks the clearest, but indeed a comment on LR vs. ELR
> situation looks indicated.

Considering the earlier comments I'm going to make this:

| eret_at_mon:
| 	mov	lr, r4
| 	msr	spsr_mon, r3
| 	movs	pc, lr
| eret_at_hyp:
| 	msr	elr_hyp, r4
| 	msr	spsr_hyp, r3
| 	eret

Using 'spsr_mon' and 'spsr_hyp' means we initialize *all* of the SPSR
bits, so that's a bug fix in addition to being clearer.

Using 'movs pc, lr' for the 'eret_at_mon' case is the standard way to do
exception returns in AArch32 generally, and that clearly doesn't
depend on the virtualization extensions, so if we ever want to handle a
CPU without hyp in future all we'll need to do is mess with the SPSR
value.

I'm not going to bother with a comment given that's standard AArch32
behaviour.

Mark.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 06/10] aarch32: Always enter kernel via exception return
  2024-08-20 13:36         ` Mark Rutland
@ 2024-08-20 13:50           ` Andre Przywara
  0 siblings, 0 replies; 30+ messages in thread
From: Andre Przywara @ 2024-08-20 13:50 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Tue, 20 Aug 2024 14:36:37 +0100
Mark Rutland <mark.rutland@arm.com> wrote:

Hi Mark,

> On Tue, Aug 20, 2024 at 01:59:44PM +0100, Andre Przywara wrote:
> > On Tue, 20 Aug 2024 12:43:18 +0100
> > Mark Rutland <mark.rutland@arm.com> wrote:  
> > > On Mon, Aug 19, 2024 at 06:22:41PM +0100, Andre Przywara wrote:  
> > > > On Mon, 12 Aug 2024 11:15:51 +0100
> > > > Mark Rutland <mark.rutland@arm.com> wrote:  
> 
> > > > > @@ -111,23 +108,28 @@ ASM_FUNC(jump_kernel)
> > > > >  	bl	find_logical_id
> > > > >  	bl	setup_stack
> > > > >  
> > > > > -	ldr	lr, [r5], #4
> > > > > -	ldm	r5, {r0 - r2}
> > > > > -
> > > > > -	ldr	r4, =flag_no_el3
> > > > > -	ldr	r4, [r4]
> > > > > -	cmp	r4, #1
> > > > > -	bxeq	lr				@ no EL3
> > > > > +	mov	r0, r5
> > > > > +	mov	r1, r6
> > > > > +	mov	r2, r7
> > > > > +	ldr	r3, =SPSR_KERNEL
> > > > >  
> > > > > -	ldr	r4, =SPSR_KERNEL
> > > > >  	/* Return in thumb2 mode when bit 0 of address is 1 */
> > > > > -	tst	lr, #1
> > > > > -	orrne	r4, #PSR_T
> > > > > +	tst	r4, #1
> > > > > +	orrne	r3, #PSR_T
> > > > > +
> > > > > +	mrs	r5, cpsr
> > > > > +	and	r5, #PSR_MODE_MASK
> > > > > +	cmp	r5, #PSR_MON
> > > > > +	beq	eret_at_mon
> > > > > +	cmp	r5, #PSR_HYP
> > > > > +	beq	eret_at_hyp
> > > > > +	b	.
> > > > >  
> > > > > -	msr	spsr_cxf, r4
> > > > > +eret_at_mon:
> > > > > +	mov	lr, r4
> > > > > +	msr	spsr_cxf, r3
> > > > >  	movs	pc, lr    
> 
> > > > Reading "B9.1 General restrictions on system instructions" in the ARMv7 ARM
> > > > I don't immediately see why an eret wouldn't be possible here.
> > > > 
> > > > If there is a restriction I missed, I guess either a comment here or in
> > > > the commit message would be helpful.    
> > > 
> > > We can use ERET here; IIRC that was added in the ARMv7 virtualization
> > > extensions, but the boot-wrapper requires that and really it's ARMv8+  
> > 
> > Is that so? I mean in all practicality we will indeed use the bootwrapper
> > on ARMv8 only these days, but I don't think we need to artificially limit
> > this. Also I consider the boot-wrapper one of the more reliable sources
> > for ARMv7 boot code, so not sure we should drop this aspect.
> > There is one ARMv7 compile time check, to avoid "sevl", so we have some
> > support, at least.  
> 
> What I was trying to say here was "the minimum bound is ARMv7 +
> virtualization extensions", which is already required by the
> ".arch_extension virt" directive that's been in this file since it was
> introduced.
> 
> Practically speaking, I don't think that we should care about ARMv7
> here, but if that happens to work, great!

Ah, no, I meant "armv7ve". Given that we either drop to HYP or stay in
HYP, I don't think supporting something before that makes much sense here
;-)

> > > anyway. I had opted to stick with "movs pc, lr" because it was a
> > > (trivially) smaller change, and kept the cases distinct, but I'm happy
> > > to use ERET.
> > > 
> > > However, beware that in AArch32 ERET is a bit odd: in Hyp mode it
> > > takes the return address from ELR_HYP, while in all other modes it
> > > takes it from the LR (as only hyp has an ELR).
> > 
> > Yeah, I saw this yesterday, and am even more grateful for the ARMv8
> > exception model now ;-)
> > 
> > So I am fine with "movs pc, lr", if that's the more canonical way on
> > 32-bit/ARMv7. On the other hand your revised sequence below looks
> > intriguingly simple ...
> >   
> > >   
> > > > > -
> > > > > -	.section .data
> > > > > -	.align 2
> > > > > -flag_no_el3:
> > > > > -	.long 0
> > > > > +eret_at_hyp:
> > > > > +	msr	elr_hyp, r4
> > > > > +	msr	spsr_cxf, r3    
> > > > 
> > > > Shouldn't that be spsr_hyp?    
> > > 
> > > It can be, but doesn't need to be. This is the SPSR_<fields> encoding,  
> > 
> > So I didn't know about this until yesterday, and it's not easy to find,
> > since it seems not to be mentioned as such in the ARM ARM (at least not
> > "cxf"). binutils seems to disassemble this to SPSR_fxc, but I guess we
> > should indeed move to SPSR_fsxc (if we keep this at all).
> >   
> > > which writes to the SPSR owned by the active mode, though it skips
> > > bits<23:16>, which we probably should initialise.
> > > 
> > > If I change that all to:
> > > 
> > > | eret_at_mon:
> > > | 	mov	lr, r4
> > > | 	msr	spsr_mon, r3
> > > | 	eret
> > > | eret_at_hyp:
> > > | 	msr     elr_hyp, r4
> > > | 	msr     spsr_hyp, r3
> > > |
> > > 
> > > ... do you think that's clear enough, or do you think we need a comment
> > > about the "LR" vs "ELR_HYP" distinction?  
> > 
> > Oh, that certainly looks the clearest, but indeed a comment on LR vs. ELR
> > situation looks indicated.  
> 
> Considering the earlier comments I'm going to make this:
> 
> | eret_at_mon:
> | 	mov	lr, r4
> | 	msr	spsr_mon, r3
> | 	movs	pc, lr
> | eret_at_hyp:
> | 	msr	elr_hyp, r4
> | 	msr	spsr_hyp, r3
> | 	eret
> 
> Using 'spsr_mon' and 'spsr_hyp' means we initialize *all* of the SPSR
> bits, so that's a bug fix in addition to being clearer.
> 
> Using 'movs pc, lr' for the 'eret_at_mon' case is the standard way to do
> exception returns in AArch32 generally, and that clearly doesn't
> depend on the virtualization extensions, so if we ever want to handle a
> CPU without hyp in future all we'll need to do is mess with the SPSR
> value.
> 
> I'm not going to bother with a comment given that's standard AArch32
> behaviour.

Many thanks, that looks absolutely fine to me and makes the most sense!

Cheers,
Andre.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 07/10] Unify assembly setup paths
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 07/10] Unify assembly setup paths Mark Rutland
@ 2024-08-21 16:54   ` Andre Przywara
  2024-08-22  9:50     ` Mark Rutland
  0 siblings, 1 reply; 30+ messages in thread
From: Andre Przywara @ 2024-08-21 16:54 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Mon, 12 Aug 2024 11:15:52 +0100
Mark Rutland <mark.rutland@arm.com> wrote:

> The early assembly paths for EL3/Secure-PL1 and EL2/Hyp are almost
> identical aside from the EL3/Secure-PL1 paths calling gic_secure_init().
> 
> Simplify the early assembly paths by conditionally calling
> gic_secure_init() from cpu_init_arch(), allowing the EL3/Secure-PL1 and
> EL2/Hyp paths to be unified.
> 
> In order to call gic_secure_init() from C code we need to expose a
> prototype for gic_secure_init(), requiring a new <gic.h> header. For
> clarity the existing <asm/gic-v3.h> headers are renamed to <asm/gic.h>
> and are included through the common header.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Acked-by: Marc Zyngier <maz@kernel.org>
> Cc: Akos Denke <akos.denke@arm.com>
> Cc: Andre Przywara <andre.przywara@arm.com>
> Cc: Luca Fancellu <luca.fancellu@arm.com>

Thanks, that seems like a nice cleanup and refactoring, and looks good to
me:

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Just not sure what to do about the TODO comments in there?

Cheers,
Andre

> ---
>  arch/aarch32/boot.S                          | 20 ++++++--------------
>  arch/aarch32/include/asm/{gic-v3.h => gic.h} |  2 +-
>  arch/aarch32/init.c                          |  5 ++++-
>  arch/aarch64/boot.S                          | 17 ++++-------------
>  arch/aarch64/include/asm/{gic-v3.h => gic.h} |  2 +-
>  arch/aarch64/init.c                          |  5 ++++-
>  common/gic-v3.c                              |  2 +-
>  common/gic.c                                 |  2 +-
>  include/gic.h                                | 16 ++++++++++++++++
>  9 files changed, 38 insertions(+), 33 deletions(-)
>  rename arch/aarch32/include/asm/{gic-v3.h => gic.h} (91%)
>  rename arch/aarch64/include/asm/{gic-v3.h => gic.h} (92%)
>  create mode 100644 include/gic.h
> 
> diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
> index 78d19a0..8b6ffac 100644
> --- a/arch/aarch32/boot.S
> +++ b/arch/aarch32/boot.S
> @@ -53,22 +53,14 @@ reset_at_svc:
>  	movs	pc, lr
>  
>  reset_at_mon:
> -	cpuid	r0, r1
> -	bl	find_logical_id
> -	cmp	r0, #MPIDR_INVALID
> -	beq	err_invalid_id
> -
> -	bl	setup_stack
> -
> -	bl	cpu_init_bootwrapper
> -
> -	bl	cpu_init_arch
> -
> -	bl	gic_secure_init
> -
> -	b	start_bootmethod
> +	// TODO initialise monitor state
> +	b	reset_common
>  
>  reset_at_hyp:
> +	// TODO initialise hyp state
> +	b	reset_common
> +
> +reset_common:
>  	cpuid	r0, r1
>  	bl	find_logical_id
>  	cmp	r0, #MPIDR_INVALID
> diff --git a/arch/aarch32/include/asm/gic-v3.h b/arch/aarch32/include/asm/gic.h
> similarity index 91%
> rename from arch/aarch32/include/asm/gic-v3.h
> rename to arch/aarch32/include/asm/gic.h
> index b28136a..0b9425d 100644
> --- a/arch/aarch32/include/asm/gic-v3.h
> +++ b/arch/aarch32/include/asm/gic.h
> @@ -1,5 +1,5 @@
>  /*
> - * arch/aarch32/include/asm/gic-v3.h
> + * arch/aarch32/include/asm/gic.h
>   *
>   * Copyright (C) 2015 ARM Limited. All rights reserved.
>   *
> diff --git a/arch/aarch32/init.c b/arch/aarch32/init.c
> index 35da37c..cb67bf6 100644
> --- a/arch/aarch32/init.c
> +++ b/arch/aarch32/init.c
> @@ -7,6 +7,7 @@
>   * found in the LICENSE.txt file.
>   */
>  #include <cpu.h>
> +#include <gic.h>
>  #include <platform.h>
>  #include <stdbool.h>
>  
> @@ -56,8 +57,10 @@ bool cpu_init_psci_arch(void)
>  
>  void cpu_init_arch(void)
>  {
> -	if (read_cpsr_mode() == PSR_MON)
> +	if (read_cpsr_mode() == PSR_MON) {
>  		cpu_init_monitor();
> +		gic_secure_init();
> +	}
>  
>  	mcr(CNTFRQ, COUNTER_FREQ);
>  }
> diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> index 0ac0c98..98ae77d 100644
> --- a/arch/aarch64/boot.S
> +++ b/arch/aarch64/boot.S
> @@ -43,19 +43,7 @@ reset_at_el3:
>  	msr	sctlr_el3, x0
>  	isb
>  
> -	cpuid	x0, x1
> -	bl	find_logical_id
> -	cmp	x0, #MPIDR_INVALID
> -	b.eq	err_invalid_id
> -	bl	setup_stack
> -
> -	bl	cpu_init_bootwrapper
> -
> -	bl	cpu_init_arch
> -
> -	bl	gic_secure_init
> -
> -	b	start_bootmethod
> +	b	reset_common
>  
>  	/*
>  	 * EL2 initialization
> @@ -70,6 +58,9 @@ reset_at_el2:
>  	msr	sctlr_el2, x0
>  	isb
>  
> +	b	reset_common
> +
> +reset_common:
>  	cpuid	x0, x1
>  	bl	find_logical_id
>  	cmp	x0, #MPIDR_INVALID
> diff --git a/arch/aarch64/include/asm/gic-v3.h b/arch/aarch64/include/asm/gic.h
> similarity index 92%
> rename from arch/aarch64/include/asm/gic-v3.h
> rename to arch/aarch64/include/asm/gic.h
> index 2447480..9d716f6 100644
> --- a/arch/aarch64/include/asm/gic-v3.h
> +++ b/arch/aarch64/include/asm/gic.h
> @@ -1,5 +1,5 @@
>  /*
> - * arch/aarch64/include/asm/gic-v3.h
> + * arch/aarch64/include/asm/gic.h
>   *
>   * Copyright (C) 2015 ARM Limited. All rights reserved.
>   *
> diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
> index 49abdf7..68c220b 100644
> --- a/arch/aarch64/init.c
> +++ b/arch/aarch64/init.c
> @@ -7,6 +7,7 @@
>   * found in the LICENSE.txt file.
>   */
>  #include <cpu.h>
> +#include <gic.h>
>  #include <platform.h>
>  #include <stdbool.h>
>  
> @@ -172,8 +173,10 @@ bool cpu_init_psci_arch(void)
>  
>  void cpu_init_arch(void)
>  {
> -	if (mrs(CurrentEL) == CURRENTEL_EL3)
> +	if (mrs(CurrentEL) == CURRENTEL_EL3) {
>  		cpu_init_el3();
> +		gic_secure_init();
> +	}
>  
>  	msr(CNTFRQ_EL0, COUNTER_FREQ);
>  }
> diff --git a/common/gic-v3.c b/common/gic-v3.c
> index 6207007..4d8e620 100644
> --- a/common/gic-v3.c
> +++ b/common/gic-v3.c
> @@ -10,7 +10,7 @@
>  #include <stdint.h>
>  
>  #include <cpu.h>
> -#include <asm/gic-v3.h>
> +#include <gic.h>
>  #include <asm/io.h>
>  
>  #define GICD_CTLR			0x0
> diff --git a/common/gic.c b/common/gic.c
> index 04d4289..15a3410 100644
> --- a/common/gic.c
> +++ b/common/gic.c
> @@ -10,7 +10,7 @@
>  #include <stdint.h>
>  
>  #include <cpu.h>
> -#include <asm/gic-v3.h>
> +#include <gic.h>
>  #include <asm/io.h>
>  
>  #define GICD_CTLR			0x0
> diff --git a/include/gic.h b/include/gic.h
> new file mode 100644
> index 0000000..127f82b
> --- /dev/null
> +++ b/include/gic.h
> @@ -0,0 +1,16 @@
> +/*
> + * include/gic.h
> + *
> + * Copyright (C) 2024 ARM Limited. All rights reserved.
> + *
> + * Use of this source code is governed by a BSD-style license that can be
> + * found in the LICENSE.txt file.
> + */
> +#ifndef __GIC_H
> +#define __GIC_H
> +
> +#include <asm/gic.h>
> +
> +void gic_secure_init(void);
> +
> +#endif



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 08/10] Simplify spin logic
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 08/10] Simplify spin logic Mark Rutland
@ 2024-08-21 16:55   ` Andre Przywara
  2024-08-22  9:54     ` Mark Rutland
  0 siblings, 1 reply; 30+ messages in thread
From: Andre Przywara @ 2024-08-21 16:55 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Mon, 12 Aug 2024 11:15:53 +0100
Mark Rutland <mark.rutland@arm.com> wrote:

> The logic for initial boot is more complicated than it needs to be,
> with first_spin() having a special case for CPU0 that requires an
> additional argument to be passed to spin().
> 
> Simplify this by moving the special-case logic for CPU0 into
> first_spin(). This removes the need to provide a dummy mailbox for CPU0
> to spin on, and simplifies callers of first_spin() and spin(), which no
> longer need to pass a dummy mailbox or 'is_entry' for CPU0.

Ah, that's a nice one! Indeed this looks much cleaner now. The change
looks good to me, just some small thing below:

> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Acked-by: Marc Zyngier <maz@kernel.org>
> Cc: Akos Denke <akos.denke@arm.com>
> Cc: Andre Przywara <andre.przywara@arm.com>
> Cc: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  arch/aarch64/spin.S | 11 +----------
>  common/boot.c       | 20 ++++++++------------
>  common/psci.c       |  2 +-
>  include/boot.h      |  2 +-
>  4 files changed, 11 insertions(+), 24 deletions(-)
> 
> diff --git a/arch/aarch64/spin.S b/arch/aarch64/spin.S
> index 375f732..a7879d4 100644
> --- a/arch/aarch64/spin.S
> +++ b/arch/aarch64/spin.S
> @@ -23,15 +23,6 @@ ASM_FUNC(start_bootmethod)
>  	 * Primary CPU (x0 = 0) jumps to kernel, the other ones wait for an
>  	 * address to appear in mbox

I think this comment is a bit out of place now, either remove it entirely,
or hint that first_spin will take care of the difference between mbox and
kernel entrypoint.

Regardless:

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre

>  	 */
> -	adr	x3, mbox
> -	adr	x4, kernel_address
> -	cmp	x0, #0
> -	csel	x1, x3, x4, ne
> +	adr	x1, mbox
>  	mov	x2, #0
>  	bl	first_spin
> -
> -	.align 3
> -kernel_address:
> -	.long 0
> -
> -	.ltorg
> diff --git a/common/boot.c b/common/boot.c
> index 29d53a4..4417649 100644
> --- a/common/boot.c
> +++ b/common/boot.c
> @@ -27,7 +27,7 @@ const unsigned long id_table[] = { CPU_IDS };
>   * @invalid: value of an invalid address, 0 or -1 depending on the boot method
>   * @is_entry: when true, pass boot parameters to the kernel, instead of 0
>   */
> -void __noreturn spin(unsigned long *mbox, unsigned long invalid, int is_entry)
> +void __noreturn spin(unsigned long *mbox, unsigned long invalid)
>  {
>  	unsigned long addr = invalid;
>  
> @@ -36,13 +36,6 @@ void __noreturn spin(unsigned long *mbox, unsigned long invalid, int is_entry)
>  		addr = *mbox;
>  	}
>  
> -	if (is_entry)
> -#ifdef KERNEL_32
> -		jump_kernel(addr, 0, ~0, (unsigned long)&dtb, 0);
> -#else
> -		jump_kernel(addr, (unsigned long)&dtb, 0, 0, 0);
> -#endif
> -
>  	jump_kernel(addr, 0, 0, 0, 0);
>  
>  	unreachable();
> @@ -60,12 +53,15 @@ void __noreturn first_spin(unsigned int cpu, unsigned long *mbox,
>  			   unsigned long invalid)
>  {
>  	if (cpu == 0) {
> -		*mbox = (unsigned long)&entrypoint;
> -		sevl();
> -		spin(mbox, invalid, 1);
> +		unsigned long addr = (unsigned long)&entrypoint;
> +#ifdef KERNEL_32
> +		jump_kernel(addr, 0, ~0, (unsigned long)&dtb, 0);
> +#else
> +		jump_kernel(addr, (unsigned long)&dtb, 0, 0, 0);
> +#endif
>  	} else {
>  		*mbox = invalid;
> -		spin(mbox, invalid, 0);
> +		spin(mbox, invalid);
>  	}
>  
>  	unreachable();
> diff --git a/common/psci.c b/common/psci.c
> index 5ae4255..19cc315 100644
> --- a/common/psci.c
> +++ b/common/psci.c
> @@ -57,7 +57,7 @@ static int psci_cpu_off(void)
>  
>  	branch_table[cpu] = PSCI_ADDR_INVALID;
>  
> -	spin(branch_table + cpu, PSCI_ADDR_INVALID, 0);
> +	spin(branch_table + cpu, PSCI_ADDR_INVALID);
>  
>  	unreachable();
>  }
> diff --git a/include/boot.h b/include/boot.h
> index 459d1d5..18b805d 100644
> --- a/include/boot.h
> +++ b/include/boot.h
> @@ -12,7 +12,7 @@
>  #include <compiler.h>
>  #include <stdbool.h>
>  
> -void __noreturn spin(unsigned long *mbox, unsigned long invalid, int is_entry);
> +void __noreturn spin(unsigned long *mbox, unsigned long invalid);
>  
>  void __noreturn first_spin(unsigned int cpu, unsigned long *mbox,
>  			   unsigned long invalid_addr);



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 09/10] Add printing functions
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 09/10] Add printing functions Mark Rutland
@ 2024-08-21 16:55   ` Andre Przywara
  0 siblings, 0 replies; 30+ messages in thread
From: Andre Przywara @ 2024-08-21 16:55 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Mon, 12 Aug 2024 11:15:54 +0100
Mark Rutland <mark.rutland@arm.com> wrote:

Hi,

> In subsequent patches we'll want to log messages from specific CPUs. Add
> helpers to make this simpler.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Acked-by: Marc Zyngier <maz@kernel.org>
> Cc: Akos Denke <akos.denke@arm.com>
> Cc: Andre Przywara <andre.przywara@arm.com>
> Cc: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre

> ---
>  common/platform.c  | 35 +++++++++++++++++++++++++++++++++++
>  include/platform.h |  4 ++++
>  2 files changed, 39 insertions(+)
> 
> diff --git a/common/platform.c b/common/platform.c
> index 1607ee6..4e390e1 100644
> --- a/common/platform.c
> +++ b/common/platform.c
> @@ -65,6 +65,41 @@ void print_ulong_hex(unsigned long val)
>  	}
>  }
>  
> +// 2^32 is 4,294,967,296
> +#define DEC_CHARS_PER_UINT	10
> +
> +void print_uint_dec(unsigned int val)
> +{
> +	char digits[DEC_CHARS_PER_UINT];
> +	int d = 0;
> +
> +	do {
> +		digits[d] = val % 10;
> +		val /= 10;
> +		d++;
> +	} while (val);
> +
> +	while (d--) {
> +		print_char('0' + digits[d]);
> +	}
> +}
> +
> +void print_cpu_warn(unsigned int cpu, const char *str)
> +{
> +	print_string("CPU");
> +	print_uint_dec(cpu);
> +	print_string(" WARNING: ");
> +	print_string(str);
> +}
> +
> +void print_cpu_msg(unsigned int cpu, const char *str)
> +{
> +	print_string("CPU");
> +	print_uint_dec(cpu);
> +	print_string(": ");
> +	print_string(str);
> +}
> +
>  void init_uart(void)
>  {
>  	/*
> diff --git a/include/platform.h b/include/platform.h
> index c88e124..09712b1 100644
> --- a/include/platform.h
> +++ b/include/platform.h
> @@ -12,6 +12,10 @@
>  void print_char(char c);
>  void print_string(const char *str);
>  void print_ulong_hex(unsigned long val);
> +void print_uint_dec(unsigned int val);
> +
> +void print_cpu_warn(unsigned int cpu, const char *str);
> +void print_cpu_msg(unsigned int cpu, const char *str);
>  
>  void init_uart(void);
>  



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 10/10] Boot CPUs sequentially
  2024-08-12 10:15 ` [BOOT-WRAPPER v2 10/10] Boot CPUs sequentially Mark Rutland
@ 2024-08-21 17:49   ` Andre Przywara
  2024-08-22 10:02     ` Mark Rutland
  0 siblings, 1 reply; 30+ messages in thread
From: Andre Przywara @ 2024-08-21 17:49 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Mon, 12 Aug 2024 11:15:55 +0100
Mark Rutland <mark.rutland@arm.com> wrote:

> Currently the boot-wrapper initializes all CPUs in parallel. This means
> that we cannot log errors as they happen, as this would mean multiple
> CPUs concurrently writing to the UART, producing garbled output. To
> produce meaningful output we have to special-case errors on the boot CPU
> and hope CPUs have been configured consistently.
> 
> To make it easier to handle errors, boot CPUs sequentially so that errors
> can be logged as they happen. With this change it's pretty clear that
> the cpu_init_bootmethod() abstraction isn't helpful, and so this is
> removed with cpu_init_arch() directly initializing PSCI where necessary.

Yeah, makes sense, though the change is a bit hard to follow in the diff
below. But I convinced myself that it's correct. Just one small comment
inside:

> When things go well this looks like:
> 
> | Boot-wrapper v0.2
> | Entered at EL3
> | Memory layout:
> | [0000000080000000..0000000080001f90] => boot-wrapper
> | [000000008000fff8..0000000080010000] => mbox
> | [0000000080200000..0000000082cbaa00] => kernel
> | [0000000088000000..0000000088002df1] => dtb
> | CPU0: (MPIDR 0000000000000000) initializing...
> | CPU0: Done
> | CPU1: (MPIDR 0000000000000100) initializing...
> | CPU1: Done
> | CPU2: (MPIDR 0000000000000200) initializing...
> | CPU2: Done
> | CPU3: (MPIDR 0000000000000300) initializing...
> | CPU3: Done
> | CPU4: (MPIDR 0000000000010000) initializing...
> | CPU4: Done
> | CPU5: (MPIDR 0000000000010100) initializing...
> | CPU5: Done
> | CPU6: (MPIDR 0000000000010200) initializing...
> | CPU6: Done
> | CPU7: (MPIDR 0000000000010300) initializing...
> | CPU7: Done
> | All CPUs initialized. Entering kernel...
> |
> | [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd0f0]
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Acked-by: Marc Zyngier <maz@kernel.org>
> Cc: Akos Denke <akos.denke@arm.com>
> Cc: Andre Przywara <andre.przywara@arm.com>
> Cc: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  arch/aarch32/boot.S |  2 --
>  arch/aarch32/init.c | 17 +++++++++------
>  arch/aarch64/boot.S |  2 --
>  arch/aarch64/init.c | 17 +++++++++------
>  arch/aarch64/spin.S |  3 ---
>  common/init.c       | 50 +++++++++++++++++++++++++++++++++++++--------
>  common/psci.c       | 14 -------------
>  include/boot.h      |  6 +-----
>  8 files changed, 64 insertions(+), 47 deletions(-)
> 
> diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
> index 8b6ffac..08029ff 100644
> --- a/arch/aarch32/boot.S
> +++ b/arch/aarch32/boot.S
> @@ -70,8 +70,6 @@ reset_common:
>  
>  	bl	cpu_init_bootwrapper
>  
> -	bl	cpu_init_arch
> -
>  	b	start_bootmethod
>  
>  err_invalid_id:
> diff --git a/arch/aarch32/init.c b/arch/aarch32/init.c
> index cb67bf6..d08ac83 100644
> --- a/arch/aarch32/init.c
> +++ b/arch/aarch32/init.c
> @@ -6,6 +6,7 @@
>   * Use of this source code is governed by a BSD-style license that can be
>   * found in the LICENSE.txt file.
>   */
> +#include <boot.h>
>  #include <cpu.h>
>  #include <gic.h>
>  #include <platform.h>
> @@ -43,24 +44,28 @@ static void cpu_init_monitor(void)
>  #ifdef PSCI
>  extern char psci_vectors[];
>  
> -bool cpu_init_psci_arch(void)
> +static void cpu_init_psci_arch(unsigned int cpu)
>  {
> -	if (read_cpsr_mode() != PSR_MON)
> -		return false;
> +	if (read_cpsr_mode() != PSR_MON) {
> +		print_cpu_warn(cpu, "PSCI could not be initialized (not booted at PL1).\r\n");
> +		return;
> +	}
>  
>  	mcr(MVBAR, (unsigned long)psci_vectors);
>  	isb();
> -
> -	return true;
>  }
> +#else
> +static void cpu_init_psci_arch(unsigned int cpu) { }
>  #endif
>  
> -void cpu_init_arch(void)
> +void cpu_init_arch(unsigned int cpu)
>  {
>  	if (read_cpsr_mode() == PSR_MON) {
>  		cpu_init_monitor();
>  		gic_secure_init();
>  	}
>  
> +	cpu_init_psci_arch(cpu);
> +
>  	mcr(CNTFRQ, COUNTER_FREQ);
>  }
> diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> index 98ae77d..6bec836 100644
> --- a/arch/aarch64/boot.S
> +++ b/arch/aarch64/boot.S
> @@ -69,8 +69,6 @@ reset_common:
>  
>  	bl	cpu_init_bootwrapper
>  
> -	bl	cpu_init_arch
> -
>  	b	start_bootmethod
>  
>  err_invalid_id:
> diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
> index 68c220b..63fa949 100644
> --- a/arch/aarch64/init.c
> +++ b/arch/aarch64/init.c
> @@ -6,6 +6,7 @@
>   * Use of this source code is governed by a BSD-style license that can be
>   * found in the LICENSE.txt file.
>   */
> +#include <boot.h>
>  #include <cpu.h>
>  #include <gic.h>
>  #include <platform.h>
> @@ -159,24 +160,28 @@ static void cpu_init_el3(void)
>  #ifdef PSCI
>  extern char psci_vectors[];
>  
> -bool cpu_init_psci_arch(void)
> +static void cpu_init_psci_arch(unsigned int cpu)
>  {
> -	if (mrs(CurrentEL) != CURRENTEL_EL3)
> -		return false;
> +	if (mrs(CurrentEL) != CURRENTEL_EL3) {
> +		print_cpu_warn(cpu, "PSCI could not be initialized (not booted at EL3).\r\n");
> +		return;
> +	}
>  
>  	msr(VBAR_EL3, (unsigned long)psci_vectors);
>  	isb();
> -
> -	return true;
>  }
> +#else
> +static void cpu_init_psci_arch(unsigned int cpu) { }
>  #endif
>  
> -void cpu_init_arch(void)
> +void cpu_init_arch(unsigned int cpu)
>  {
>  	if (mrs(CurrentEL) == CURRENTEL_EL3) {
>  		cpu_init_el3();
>  		gic_secure_init();
>  	}
>  
> +	cpu_init_psci_arch(cpu);
> +
>  	msr(CNTFRQ_EL0, COUNTER_FREQ);
>  }
> diff --git a/arch/aarch64/spin.S b/arch/aarch64/spin.S
> index a7879d4..81d7ee4 100644
> --- a/arch/aarch64/spin.S
> +++ b/arch/aarch64/spin.S
> @@ -12,9 +12,6 @@
>  
>  	.text
>  
> -ASM_FUNC(cpu_init_bootmethod)
> -	ret
> -
>  ASM_FUNC(start_bootmethod)
>  	cpuid	x0, x1
>  	bl	find_logical_id
> diff --git a/common/init.c b/common/init.c
> index 3c05ac3..6197ead 100644
> --- a/common/init.c
> +++ b/common/init.c
> @@ -43,18 +43,50 @@ static void announce_objects(void)
>  
>  void announce_arch(void);
>  
> +static void init_bootwrapper(void)
> +{
> +	init_uart();
> +	announce_bootwrapper();
> +	announce_arch();
> +	announce_objects();
> +	init_platform();
> +}
> +
> +static void cpu_init_self(unsigned int cpu)
> +{
> +	print_string("CPU");
> +	print_uint_dec(cpu);
> +	print_string(": (MPIDR ");
> +	print_ulong_hex(read_mpidr());
> +	print_string(") initializing...\r\n");

It feels like using two lines per core for something that rarely fails
is a bit wasteful. Can we get rid of the EOL here, and just put the "Done"
right behind it, in the same line? That reduces the noise on the terminal.

Cheers,
Andre

> +
> +	cpu_init_arch(cpu);
> +
> +	print_cpu_msg(cpu, "Done\r\n");
> +}
> +
>  void cpu_init_bootwrapper(void)
>  {
> +	static volatile unsigned int cpu_next = 0;
>  	unsigned int cpu = this_cpu_logical_id();
>  
> -	if (cpu == 0) {
> -		init_uart();
> -		announce_bootwrapper();
> -		announce_arch();
> -		announce_objects();
> -		print_string("\r\n");
> -		init_platform();
> -	}
> +	if (cpu == 0)
> +		init_bootwrapper();
> +
> +	while (cpu_next != cpu)
> +		wfe();
> +
> +	cpu_init_self(cpu);
> +
> +	cpu_next = cpu + 1;
> +	dsb(sy);
> +	sev();
> +
> +	if (cpu != 0)
> +		return;
> +
> +	while (cpu_next != NR_CPUS)
> +		wfe();
>  
> -	cpu_init_bootmethod(cpu);
> +	print_string("All CPUs initialized. Entering kernel...\r\n\r\n");
>  }
> diff --git a/common/psci.c b/common/psci.c
> index 19cc315..5fe8999 100644
> --- a/common/psci.c
> +++ b/common/psci.c
> @@ -87,17 +87,3 @@ void __noreturn psci_first_spin(void)
>  
>  	unreachable();
>  }
> -
> -void cpu_init_bootmethod(unsigned int cpu)
> -{
> -	if (cpu_init_psci_arch())
> -		return;
> -
> -	if (cpu == 0) {
> -		print_string("WARNING: PSCI could not be initialized. Boot may fail\r\n\r\n");
> -		return;
> -	}
> -
> -	while (1)
> -		wfe();
> -}
> diff --git a/include/boot.h b/include/boot.h
> index 18b805d..12c9c5c 100644
> --- a/include/boot.h
> +++ b/include/boot.h
> @@ -17,10 +17,6 @@ void __noreturn spin(unsigned long *mbox, unsigned long invalid);
>  void __noreturn first_spin(unsigned int cpu, unsigned long *mbox,
>  			   unsigned long invalid_addr);
>  
> -void cpu_init_bootmethod(unsigned int cpu);
> -
> -#ifdef PSCI
> -bool cpu_init_psci_arch(void);
> -#endif
> +void cpu_init_arch(unsigned int cpu);
>  
>  #endif



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 07/10] Unify assembly setup paths
  2024-08-21 16:54   ` Andre Przywara
@ 2024-08-22  9:50     ` Mark Rutland
  0 siblings, 0 replies; 30+ messages in thread
From: Mark Rutland @ 2024-08-22  9:50 UTC (permalink / raw)
  To: Andre Przywara; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Wed, Aug 21, 2024 at 05:54:45PM +0100, Andre Przywara wrote:
> On Mon, 12 Aug 2024 11:15:52 +0100
> Mark Rutland <mark.rutland@arm.com> wrote:
> 
> > The early assembly paths for EL3/Secure-PL1 and EL2/Hyp are almost
> > identical aside from the EL3/Secure-PL1 paths calling gic_secure_init().
> > 
> > Simplify the easlery assembly paths by conditionally calling

Whoops; I'll s/easlery/early/ here.

> > gic_secure_init() from cpu_init_arch(), allowing the EL3/Secure-PL1 and
> > EL2/Hyp paths to be unified.
> > 
> > In order to call gic_secure_init() from C code we need to expose a
> > prototype for gic_secure_init(), requiring a new <gic.h> header. For
> > clarity the existing <asm/gic-v3.h> headers are renamed to <asm/gic.h>
> > and are included through the common header.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Acked-by: Marc Zyngier <maz@kernel.org>
> > Cc: Akos Denke <akos.denke@arm.com>
> > Cc: Andre Przywara <andre.przywara@arm.com>
> > Cc: Luca Fancellu <luca.fancellu@arm.com>
> 
> Thanks, that seems like a nice cleanup and refactoring, and looks good to
> me:
> 
> Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Cheers!

> Just not sure what to do about the TODO comments in there?

They're a note that the AArch32 code doesn't currently initialize
registers which are UNKNOWN at reset, and without the notes it looks a
little odd to have those labels at all rather than branching directly to
reset_common() above.

I'd prefer to keep those for now to indicate that there are known issues
here.

Mark.

> 
> Cheers,
> Andre
> 
> > ---
> >  arch/aarch32/boot.S                          | 20 ++++++--------------
> >  arch/aarch32/include/asm/{gic-v3.h => gic.h} |  2 +-
> >  arch/aarch32/init.c                          |  5 ++++-
> >  arch/aarch64/boot.S                          | 17 ++++-------------
> >  arch/aarch64/include/asm/{gic-v3.h => gic.h} |  2 +-
> >  arch/aarch64/init.c                          |  5 ++++-
> >  common/gic-v3.c                              |  2 +-
> >  common/gic.c                                 |  2 +-
> >  include/gic.h                                | 16 ++++++++++++++++
> >  9 files changed, 38 insertions(+), 33 deletions(-)
> >  rename arch/aarch32/include/asm/{gic-v3.h => gic.h} (91%)
> >  rename arch/aarch64/include/asm/{gic-v3.h => gic.h} (92%)
> >  create mode 100644 include/gic.h
> > 
> > diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
> > index 78d19a0..8b6ffac 100644
> > --- a/arch/aarch32/boot.S
> > +++ b/arch/aarch32/boot.S
> > @@ -53,22 +53,14 @@ reset_at_svc:
> >  	movs	pc, lr
> >  
> >  reset_at_mon:
> > -	cpuid	r0, r1
> > -	bl	find_logical_id
> > -	cmp	r0, #MPIDR_INVALID
> > -	beq	err_invalid_id
> > -
> > -	bl	setup_stack
> > -
> > -	bl	cpu_init_bootwrapper
> > -
> > -	bl	cpu_init_arch
> > -
> > -	bl	gic_secure_init
> > -
> > -	b	start_bootmethod
> > +	// TODO initialise monitor state
> > +	b	reset_common
> >  
> >  reset_at_hyp:
> > +	// TODO initialise hyp state
> > +	b	reset_common
> > +
> > +reset_common:
> >  	cpuid	r0, r1
> >  	bl	find_logical_id
> >  	cmp	r0, #MPIDR_INVALID
> > diff --git a/arch/aarch32/include/asm/gic-v3.h b/arch/aarch32/include/asm/gic.h
> > similarity index 91%
> > rename from arch/aarch32/include/asm/gic-v3.h
> > rename to arch/aarch32/include/asm/gic.h
> > index b28136a..0b9425d 100644
> > --- a/arch/aarch32/include/asm/gic-v3.h
> > +++ b/arch/aarch32/include/asm/gic.h
> > @@ -1,5 +1,5 @@
> >  /*
> > - * arch/aarch32/include/asm/gic-v3.h
> > + * arch/aarch32/include/asm/gic.h
> >   *
> >   * Copyright (C) 2015 ARM Limited. All rights reserved.
> >   *
> > diff --git a/arch/aarch32/init.c b/arch/aarch32/init.c
> > index 35da37c..cb67bf6 100644
> > --- a/arch/aarch32/init.c
> > +++ b/arch/aarch32/init.c
> > @@ -7,6 +7,7 @@
> >   * found in the LICENSE.txt file.
> >   */
> >  #include <cpu.h>
> > +#include <gic.h>
> >  #include <platform.h>
> >  #include <stdbool.h>
> >  
> > @@ -56,8 +57,10 @@ bool cpu_init_psci_arch(void)
> >  
> >  void cpu_init_arch(void)
> >  {
> > -	if (read_cpsr_mode() == PSR_MON)
> > +	if (read_cpsr_mode() == PSR_MON) {
> >  		cpu_init_monitor();
> > +		gic_secure_init();
> > +	}
> >  
> >  	mcr(CNTFRQ, COUNTER_FREQ);
> >  }
> > diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> > index 0ac0c98..98ae77d 100644
> > --- a/arch/aarch64/boot.S
> > +++ b/arch/aarch64/boot.S
> > @@ -43,19 +43,7 @@ reset_at_el3:
> >  	msr	sctlr_el3, x0
> >  	isb
> >  
> > -	cpuid	x0, x1
> > -	bl	find_logical_id
> > -	cmp	x0, #MPIDR_INVALID
> > -	b.eq	err_invalid_id
> > -	bl	setup_stack
> > -
> > -	bl	cpu_init_bootwrapper
> > -
> > -	bl	cpu_init_arch
> > -
> > -	bl	gic_secure_init
> > -
> > -	b	start_bootmethod
> > +	b	reset_common
> >  
> >  	/*
> >  	 * EL2 initialization
> > @@ -70,6 +58,9 @@ reset_at_el2:
> >  	msr	sctlr_el2, x0
> >  	isb
> >  
> > +	b	reset_common
> > +
> > +reset_common:
> >  	cpuid	x0, x1
> >  	bl	find_logical_id
> >  	cmp	x0, #MPIDR_INVALID
> > diff --git a/arch/aarch64/include/asm/gic-v3.h b/arch/aarch64/include/asm/gic.h
> > similarity index 92%
> > rename from arch/aarch64/include/asm/gic-v3.h
> > rename to arch/aarch64/include/asm/gic.h
> > index 2447480..9d716f6 100644
> > --- a/arch/aarch64/include/asm/gic-v3.h
> > +++ b/arch/aarch64/include/asm/gic.h
> > @@ -1,5 +1,5 @@
> >  /*
> > - * arch/aarch64/include/asm/gic-v3.h
> > + * arch/aarch64/include/asm/gic.h
> >   *
> >   * Copyright (C) 2015 ARM Limited. All rights reserved.
> >   *
> > diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
> > index 49abdf7..68c220b 100644
> > --- a/arch/aarch64/init.c
> > +++ b/arch/aarch64/init.c
> > @@ -7,6 +7,7 @@
> >   * found in the LICENSE.txt file.
> >   */
> >  #include <cpu.h>
> > +#include <gic.h>
> >  #include <platform.h>
> >  #include <stdbool.h>
> >  
> > @@ -172,8 +173,10 @@ bool cpu_init_psci_arch(void)
> >  
> >  void cpu_init_arch(void)
> >  {
> > -	if (mrs(CurrentEL) == CURRENTEL_EL3)
> > +	if (mrs(CurrentEL) == CURRENTEL_EL3) {
> >  		cpu_init_el3();
> > +		gic_secure_init();
> > +	}
> >  
> >  	msr(CNTFRQ_EL0, COUNTER_FREQ);
> >  }
> > diff --git a/common/gic-v3.c b/common/gic-v3.c
> > index 6207007..4d8e620 100644
> > --- a/common/gic-v3.c
> > +++ b/common/gic-v3.c
> > @@ -10,7 +10,7 @@
> >  #include <stdint.h>
> >  
> >  #include <cpu.h>
> > -#include <asm/gic-v3.h>
> > +#include <gic.h>
> >  #include <asm/io.h>
> >  
> >  #define GICD_CTLR			0x0
> > diff --git a/common/gic.c b/common/gic.c
> > index 04d4289..15a3410 100644
> > --- a/common/gic.c
> > +++ b/common/gic.c
> > @@ -10,7 +10,7 @@
> >  #include <stdint.h>
> >  
> >  #include <cpu.h>
> > -#include <asm/gic-v3.h>
> > +#include <gic.h>
> >  #include <asm/io.h>
> >  
> >  #define GICD_CTLR			0x0
> > diff --git a/include/gic.h b/include/gic.h
> > new file mode 100644
> > index 0000000..127f82b
> > --- /dev/null
> > +++ b/include/gic.h
> > @@ -0,0 +1,16 @@
> > +/*
> > + * include/gic.h
> > + *
> > + * Copyright (C) 2024 ARM Limited. All rights reserved.
> > + *
> > + * Use of this source code is governed by a BSD-style license that can be
> > + * found in the LICENSE.txt file.
> > + */
> > +#ifndef __GIC_H
> > +#define __GIC_H
> > +
> > +#include <asm/gic.h>
> > +
> > +void gic_secure_init(void);
> > +
> > +#endif
> 


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 08/10] Simplify spin logic
  2024-08-21 16:55   ` Andre Przywara
@ 2024-08-22  9:54     ` Mark Rutland
  0 siblings, 0 replies; 30+ messages in thread
From: Mark Rutland @ 2024-08-22  9:54 UTC (permalink / raw)
  To: Andre Przywara; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Wed, Aug 21, 2024 at 05:55:25PM +0100, Andre Przywara wrote:
> On Mon, 12 Aug 2024 11:15:53 +0100
> Mark Rutland <mark.rutland@arm.com> wrote:
> 
> > The logic for initial boot is more complicated than it needs to be,
> > with first_spin() having a special case for CPU0 that requires an
> > additional argument to be passed to spin().
> > 
> > Simplify this by moving the special-case logic for CPU0 into
> > first_spin(). This removes the need to provide a dummy mailbox for CPU0
> > to spin on, simplfiies callers of first_spin() and spin(), which no
> > longer need to pass a dummy mailbox or 'is_entry' for CPU0.
> 
> Ah, that's a nice one! Indeed this looks much cleaner now. The change
> looks good to me, just some small thing below:
> 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Acked-by: Marc Zyngier <maz@kernel.org>
> > Cc: Akos Denke <akos.denke@arm.com>
> > Cc: Andre Przywara <andre.przywara@arm.com>
> > Cc: Luca Fancellu <luca.fancellu@arm.com>
> > ---
> >  arch/aarch64/spin.S | 11 +----------
> >  common/boot.c       | 20 ++++++++------------
> >  common/psci.c       |  2 +-
> >  include/boot.h      |  2 +-
> >  4 files changed, 11 insertions(+), 24 deletions(-)
> > 
> > diff --git a/arch/aarch64/spin.S b/arch/aarch64/spin.S
> > index 375f732..a7879d4 100644
> > --- a/arch/aarch64/spin.S
> > +++ b/arch/aarch64/spin.S
> > @@ -23,15 +23,6 @@ ASM_FUNC(start_bootmethod)
> >  	 * Primary CPU (x0 = 0) jumps to kernel, the other ones wait for an
> >  	 * address to appear in mbox
> 
> I think this comment is a bit out of place now, either remove it entirely,
> or hint that first_spin will take care of the difference between mbox and
> kernel entrypoint.

I'll delete it.

> Regardless:
> 
> Reviewed-by: Andre Przywara <andre.przywara@arm.com>

Thanks!

Mark.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [BOOT-WRAPPER v2 10/10] Boot CPUs sequentially
  2024-08-21 17:49   ` Andre Przywara
@ 2024-08-22 10:02     ` Mark Rutland
  0 siblings, 0 replies; 30+ messages in thread
From: Mark Rutland @ 2024-08-22 10:02 UTC (permalink / raw)
  To: Andre Przywara; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz

On Wed, Aug 21, 2024 at 06:49:23PM +0100, Andre Przywara wrote:
> On Mon, 12 Aug 2024 11:15:55 +0100
> Mark Rutland <mark.rutland@arm.com> wrote:
> 
> > Currently the boot-wrapper initializes all CPUs in parallel. This means
> > that we cannot log errors as they happen, as this would mean multiple
> > CPUs concurrently writing to the UART, producing garbled output. To
> > produce meaningful output we have to special-case errors on the boot CPU
> > and hope CPUs have been configured consistently.
> > 
> > To make it easier to handle errors, boot CPUs sequentially so that errors
> > can be logged as they happen. With this change it's pretty clear that
> > the cpu_init_bootmethod() abstraction isn't helpful, and so this is
> > removed with cpu_init_arch() directly initializing PSCI where necessary.
> 
> Yeah, makes sense, though the change is a bit hard to follow in the diff
> below. But I convinced myself that it's correct. Just one small comment
> inside:
> 
> > When things go well this looks like:
> > 
> > | Boot-wrapper v0.2
> > | Entered at EL3
> > | Memory layout:
> > | [0000000080000000..0000000080001f90] => boot-wrapper
> > | [000000008000fff8..0000000080010000] => mbox
> > | [0000000080200000..0000000082cbaa00] => kernel
> > | [0000000088000000..0000000088002df1] => dtb
> > | CPU0: (MPIDR 0000000000000000) initializing...
> > | CPU0: Done
> > | CPU1: (MPIDR 0000000000000100) initializing...
> > | CPU1: Done
> > | CPU2: (MPIDR 0000000000000200) initializing...
> > | CPU2: Done
> > | CPU3: (MPIDR 0000000000000300) initializing...
> > | CPU3: Done
> > | CPU4: (MPIDR 0000000000010000) initializing...
> > | CPU4: Done
> > | CPU5: (MPIDR 0000000000010100) initializing...
> > | CPU5: Done
> > | CPU6: (MPIDR 0000000000010200) initializing...
> > | CPU6: Done
> > | CPU7: (MPIDR 0000000000010300) initializing...
> > | CPU7: Done
> > | All CPUs initialized. Entering kernel...
> > |
> > | [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd0f0]

> > +static void cpu_init_self(unsigned int cpu)
> > +{
> > +	print_string("CPU");
> > +	print_uint_dec(cpu);
> > +	print_string(": (MPIDR ");
> > +	print_ulong_hex(read_mpidr());
> > +	print_string(") initializing...\r\n");
> 
> It feels like using two lines per core for something that rarely fails
> is a bit wasteful. Can we get rid of the EOL here, and just put the "Done"
> right behind it, in the same line? That reduces the noise on the terminal.

That'll interact poorly with logging errors, as the first will get
appended to the "initializing..." line.

Instead, I'll get rid of the "Done" line, so that the output ends up as:

| CPU0: (MPIDR 0000000000000000) initializing...
| CPU1: (MPIDR 0000000000000100) initializing...
| CPU2: (MPIDR 0000000000000200) initializing...
| CPU3: (MPIDR 0000000000000300) initializing...
| CPU4: (MPIDR 0000000000010000) initializing...
| CPU5: (MPIDR 0000000000010100) initializing...
| CPU6: (MPIDR 0000000000010200) initializing...
| CPU7: (MPIDR 0000000000010300) initializing...
| All CPUs initialized. Entering kernel...

... and if any error occurs it'll still be clear which CPU it belongs
to given the next line will either be the next CPU being announced or
the final "All CPUs initialized." message.

Mark.


^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2024-08-22 10:03 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-12 10:15 [BOOT-WRAPPER v2 00/10] Cleanup initialization Mark Rutland
2024-08-12 10:15 ` [BOOT-WRAPPER v2 01/10] aarch64: Remove redundant EL1 entry logic Mark Rutland
2024-08-12 17:37   ` Andre Przywara
2024-08-12 10:15 ` [BOOT-WRAPPER v2 02/10] aarch64: Implement cpu_init_arch() Mark Rutland
2024-08-13 17:13   ` Andre Przywara
2024-08-12 10:15 ` [BOOT-WRAPPER v2 03/10] aarch64: Always enter kernel via exception return Mark Rutland
2024-08-13 17:14   ` Andre Przywara
2024-08-12 10:15 ` [BOOT-WRAPPER v2 04/10] aarch32: Refactor inital entry Mark Rutland
2024-08-19 17:21   ` Andre Przywara
2024-08-12 10:15 ` [BOOT-WRAPPER v2 05/10] aarch32: Implement cpu_init_arch() Mark Rutland
2024-08-19 17:21   ` Andre Przywara
2024-08-12 10:15 ` [BOOT-WRAPPER v2 06/10] aarch32: Always enter kernel via exception return Mark Rutland
2024-08-19 17:22   ` Andre Przywara
2024-08-20 11:43     ` Mark Rutland
2024-08-20 12:59       ` Andre Przywara
2024-08-20 13:36         ` Mark Rutland
2024-08-20 13:50           ` Andre Przywara
2024-08-20 11:47     ` Mark Rutland
2024-08-20 12:23       ` Andre Przywara
2024-08-12 10:15 ` [BOOT-WRAPPER v2 07/10] Unify assembly setup paths Mark Rutland
2024-08-21 16:54   ` Andre Przywara
2024-08-22  9:50     ` Mark Rutland
2024-08-12 10:15 ` [BOOT-WRAPPER v2 08/10] Simplify spin logic Mark Rutland
2024-08-21 16:55   ` Andre Przywara
2024-08-22  9:54     ` Mark Rutland
2024-08-12 10:15 ` [BOOT-WRAPPER v2 09/10] Add printing functions Mark Rutland
2024-08-21 16:55   ` Andre Przywara
2024-08-12 10:15 ` [BOOT-WRAPPER v2 10/10] Boot CPUs sequentially Mark Rutland
2024-08-21 17:49   ` Andre Przywara
2024-08-22 10:02     ` Mark Rutland
