* [BOOT-WRAPPER 00/11] Cleanup initialization
@ 2024-07-29 16:14 Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 01/11] Always enter AArch32 kernels in ARM mode Mark Rutland
` (11 more replies)
0 siblings, 12 replies; 19+ messages in thread
From: Mark Rutland @ 2024-07-29 16:14 UTC (permalink / raw)
To: linux-arm-kernel
Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz
These patches clean up the boot-wrapper initialization logic to make it
more consistent and easier to extend in C code in future. The big
changes are:
* The kernel is always entered via an exception return. This allows us
to initialize PSTATE consistently, and will allow us to adjust the
SPSR dynamically in C code in future if necessary.
* Regardless of the entered exception level, CPU state is initialized
under cpu_init_arch(), allowing for logic to be shared regardless of
which exception level was entered.
* CPUs are initialized sequentially, which allows for better logging
within the boot-wrapper.
Mark.
Mark Rutland (11):
Always enter AArch32 kernels in ARM mode
aarch64: Remove redundant EL1 entry logic
aarch64: Implement cpu_init_arch()
aarch64: Always enter kernel via exception return
aarch32: Refactor initial entry
aarch32: Implement cpu_init_arch()
aarch32: Always enter kernel via exception return
Unify assembly setup paths
Simplify spin logic
Add printing functions
Boot CPUs sequentially
arch/aarch32/boot.S | 95 ++++++++++----------
arch/aarch32/include/asm/{gic-v3.h => gic.h} | 2 +-
arch/aarch32/init.c | 30 +++++--
arch/aarch64/boot.S | 66 ++++----------
arch/aarch64/include/asm/{gic-v3.h => gic.h} | 2 +-
arch/aarch64/init.c | 30 +++++--
arch/aarch64/spin.S | 14 +--
common/boot.c | 20 ++---
common/gic-v3.c | 2 +-
common/gic.c | 2 +-
common/init.c | 50 +++++++++--
common/platform.c | 35 ++++++++
common/psci.c | 16 +---
include/boot.h | 8 +-
include/gic.h | 16 ++++
include/platform.h | 4 +
16 files changed, 221 insertions(+), 171 deletions(-)
rename arch/aarch32/include/asm/{gic-v3.h => gic.h} (91%)
rename arch/aarch64/include/asm/{gic-v3.h => gic.h} (92%)
create mode 100644 include/gic.h
--
2.30.2
^ permalink raw reply [flat|nested] 19+ messages in thread
* [BOOT-WRAPPER 01/11] Always enter AArch32 kernels in ARM mode
2024-07-29 16:14 [BOOT-WRAPPER 00/11] Cleanup initialization Mark Rutland
@ 2024-07-29 16:14 ` Mark Rutland
2024-08-02 11:26 ` Andre Przywara
2024-07-29 16:14 ` [BOOT-WRAPPER 02/11] aarch64: Remove redundant EL1 entry logic Mark Rutland
` (10 subsequent siblings)
11 siblings, 1 reply; 19+ messages in thread
From: Mark Rutland @ 2024-07-29 16:14 UTC (permalink / raw)
To: linux-arm-kernel
Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz
Currently we try to support entering AArch32 kernels in Thumb mode, but
this is unnecessary, and the code is never exercised.
Per Linux's Documentation/arch/arm/booting.txt, AArch32 kernels
supported by the AArch64 boot-wrapper should always be entered in ARM
mode:
| The boot loader is expected to call the kernel image by jumping
| directly to the first instruction of the kernel image.
|
| On CPUs supporting the ARM instruction set, the entry must be
| made in ARM state, even for a Thumb-2 kernel.
|
| On CPUs supporting only the Thumb instruction set such as
| Cortex-M class CPUs, the entry must be made in Thumb state.
Additionally, the kernel__start symbol that we use as the kernel
entrypoint is always PHYS_OFFSET + KERNEL_OFFSET, which doesn't take
into account any ARM/Thumb distinction in the AArch32 kernel image, and
hence we'll never try to set the Thumb bit in the SPSR.
Remove the redundant code.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
---
arch/aarch32/boot.S | 4 ----
arch/aarch64/boot.S | 7 -------
2 files changed, 11 deletions(-)
diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
index 4d16c9c..5c2a183 100644
--- a/arch/aarch32/boot.S
+++ b/arch/aarch32/boot.S
@@ -105,10 +105,6 @@ ASM_FUNC(jump_kernel)
bxeq lr @ no EL3
ldr r4, =SPSR_KERNEL
- /* Return in thumb2 mode when bit 0 of address is 1 */
- tst lr, #1
- orrne r4, #PSR_T
-
msr spsr_cxf, r4
movs pc, lr
diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index da5fa65..b889137 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -136,13 +136,6 @@ ASM_FUNC(jump_kernel)
br x19 // No EL3
1: mov x4, #SPSR_KERNEL
-
- /*
- * If bit 0 of the kernel address is set, we're entering in AArch32
- * thumb mode. Set SPSR.T accordingly.
- */
- bfi x4, x19, #5, #1
-
msr elr_el3, x19
msr spsr_el3, x4
eret
--
2.30.2
* [BOOT-WRAPPER 02/11] aarch64: Remove redundant EL1 entry logic
2024-07-29 16:14 [BOOT-WRAPPER 00/11] Cleanup initialization Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 01/11] Always enter AArch32 kernels in ARM mode Mark Rutland
@ 2024-07-29 16:14 ` Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 03/11] aarch64: Implement cpu_init_arch() Mark Rutland
` (9 subsequent siblings)
11 siblings, 0 replies; 19+ messages in thread
From: Mark Rutland @ 2024-07-29 16:14 UTC (permalink / raw)
To: linux-arm-kernel
Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz
For historical reasons the boot-wrapper has code to handle being entered
at Non-secure EL1, but currently this is unsupported and cannot be used
to boot a kernel as jump_kernel() unconditionally writes to SCTLR_EL2,
which will UNDEF.
Remove the logic for handling Non-secure EL1.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
---
arch/aarch64/boot.S | 17 +----------------
1 file changed, 1 insertion(+), 16 deletions(-)
diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index b889137..51ef41b 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -31,10 +31,8 @@ ASM_FUNC(_start)
b.eq reset_at_el3
cmp x0, #CURRENTEL_EL2
b.eq reset_at_el2
- cmp x0, #CURRENTEL_EL1
- b.eq reset_at_el1
- /* Booting at EL0 is not supported */
+ /* Booting at EL1 or EL0 is not supported */
b .
/*
@@ -72,19 +70,6 @@ reset_at_el2:
msr sctlr_el2, x0
isb
- b reset_no_el3
-
- /*
- * EL1 initialization
- */
-reset_at_el1:
- mov_64 x0, SCTLR_EL1_RESET
- msr sctlr_el1, x0
- isb
-
- b reset_no_el3
-
-reset_no_el3:
cpuid x0, x1
bl find_logical_id
cmp x0, #MPIDR_INVALID
--
2.30.2
* [BOOT-WRAPPER 03/11] aarch64: Implement cpu_init_arch()
2024-07-29 16:14 [BOOT-WRAPPER 00/11] Cleanup initialization Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 01/11] Always enter AArch32 kernels in ARM mode Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 02/11] aarch64: Remove redundant EL1 entry logic Mark Rutland
@ 2024-07-29 16:14 ` Mark Rutland
2024-08-02 9:29 ` Marc Zyngier
2024-07-29 16:14 ` [BOOT-WRAPPER 04/11] aarch64: Always enter kernel via exception return Mark Rutland
` (8 subsequent siblings)
11 siblings, 1 reply; 19+ messages in thread
From: Mark Rutland @ 2024-07-29 16:14 UTC (permalink / raw)
To: linux-arm-kernel
Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz
When the boot-wrapper is entered at EL2 it does not initialize
CNTFRQ_EL0, and in future it may need to initialize other CPU state
regardless of the exception level it was entered at.
Use a common cpu_init_arch() function to initialize CPU state regardless
of the exception level the boot-wrapper was entered at.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
---
arch/aarch64/boot.S | 4 +++-
arch/aarch64/init.c | 12 +++++++++---
2 files changed, 12 insertions(+), 4 deletions(-)
diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index 51ef41b..d8d38dd 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -51,7 +51,7 @@ reset_at_el3:
bl cpu_init_bootwrapper
- bl cpu_init_el3
+ bl cpu_init_arch
bl gic_secure_init
@@ -82,6 +82,8 @@ reset_at_el2:
bl cpu_init_bootwrapper
+ bl cpu_init_arch
+
b start_bootmethod
err_invalid_id:
diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
index c9fc7f1..49abdf7 100644
--- a/arch/aarch64/init.c
+++ b/arch/aarch64/init.c
@@ -52,7 +52,7 @@ static inline bool cpu_has_permission_indirection(void)
return mrs(ID_AA64MMFR3_EL1) & mask;
}
-void cpu_init_el3(void)
+static void cpu_init_el3(void)
{
unsigned long scr = SCR_EL3_RES1 | SCR_EL3_NS | SCR_EL3_HCE;
unsigned long mdcr = 0;
@@ -153,8 +153,6 @@ void cpu_init_el3(void)
msr(SMCR_EL3, smcr);
}
-
- msr(CNTFRQ_EL0, COUNTER_FREQ);
}
#ifdef PSCI
@@ -171,3 +169,11 @@ bool cpu_init_psci_arch(void)
return true;
}
#endif
+
+void cpu_init_arch(void)
+{
+ if (mrs(CurrentEL) == CURRENTEL_EL3)
+ cpu_init_el3();
+
+ msr(CNTFRQ_EL0, COUNTER_FREQ);
+}
--
2.30.2
* [BOOT-WRAPPER 04/11] aarch64: Always enter kernel via exception return
2024-07-29 16:14 [BOOT-WRAPPER 00/11] Cleanup initialization Mark Rutland
` (2 preceding siblings ...)
2024-07-29 16:14 ` [BOOT-WRAPPER 03/11] aarch64: Implement cpu_init_arch() Mark Rutland
@ 2024-07-29 16:14 ` Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 05/11] aarch32: Refactor initial entry Mark Rutland
` (7 subsequent siblings)
11 siblings, 0 replies; 19+ messages in thread
From: Mark Rutland @ 2024-07-29 16:14 UTC (permalink / raw)
To: linux-arm-kernel
Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz
When the boot-wrapper is entered at EL3 it will enter the kernel via
ERET, and when entered at EL2 it will branch to the kernel directly.
This is an artifact of the way the boot-wrapper was originally written
in assembly, and it would be preferable to always enter the kernel via
ERET so that PSTATE is always initialized to a known-good value.
Rework jump_kernel() to always enter the kernel via ERET.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
---
arch/aarch64/boot.S | 27 ++++++++++++---------------
1 file changed, 12 insertions(+), 15 deletions(-)
diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index d8d38dd..3dbf85a 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -76,10 +76,6 @@ reset_at_el2:
b.eq err_invalid_id
bl setup_stack
- mov w0, #1
- ldr x1, =flag_no_el3
- str w0, [x1]
-
bl cpu_init_bootwrapper
bl cpu_init_arch
@@ -111,25 +107,26 @@ ASM_FUNC(jump_kernel)
bl find_logical_id
bl setup_stack // Reset stack pointer
- ldr w0, flag_no_el3
- cmp w0, #0 // Prepare Z flag
-
mov x0, x20
mov x1, x21
mov x2, x22
mov x3, x23
+ mov x4, #SPSR_KERNEL
- b.eq 1f
- br x19 // No EL3
+ mrs x5, CurrentEL
+ cmp x5, #CURRENTEL_EL3
+ b.eq eret_at_el3
+ cmp x5, #CURRENTEL_EL2
+ b.eq eret_at_el2
+ b . // Not possible
-1: mov x4, #SPSR_KERNEL
+eret_at_el3:
msr elr_el3, x19
msr spsr_el3, x4
eret
+eret_at_el2:
+ msr elr_el2, x19
+ msr spsr_el2, x4
+ eret
.ltorg
-
- .data
- .align 3
-flag_no_el3:
- .long 0
--
2.30.2
* [BOOT-WRAPPER 05/11] aarch32: Refactor initial entry
2024-07-29 16:14 [BOOT-WRAPPER 00/11] Cleanup initialization Mark Rutland
` (3 preceding siblings ...)
2024-07-29 16:14 ` [BOOT-WRAPPER 04/11] aarch64: Always enter kernel via exception return Mark Rutland
@ 2024-07-29 16:14 ` Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 06/11] aarch32: Implement cpu_init_arch() Mark Rutland
` (6 subsequent siblings)
11 siblings, 0 replies; 19+ messages in thread
From: Mark Rutland @ 2024-07-29 16:14 UTC (permalink / raw)
To: linux-arm-kernel
Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz
For historical reasons the early AArch32 code is structured differently
from the early AArch64 code, with some common code (including stack
setup) performed before we identify the mode we were entered in.
Align the structure of the early AArch32 code with that of the early
AArch64 code. This will make subsequent refactoring easier.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
---
arch/aarch32/boot.S | 55 ++++++++++++++++++++++++++++-----------------
1 file changed, 34 insertions(+), 21 deletions(-)
diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
index 5c2a183..43dce75 100644
--- a/arch/aarch32/boot.S
+++ b/arch/aarch32/boot.S
@@ -31,7 +31,28 @@
* PSCI is not supported when entered in this mode.
*/
ASM_FUNC(_start)
- /* Stack initialisation */
+ mrs r0, cpsr
+ and r0, #PSR_MODE_MASK
+ cmp r0, #PSR_SVC
+ beq reset_at_svc
+ cmp r0, #PSR_HYP
+ beq reset_at_hyp
+
+ /* Booting at other modes is not supported */
+ b .
+
+reset_at_svc:
+ /*
+ * When entered in Secure SVC mode we must switch to monitor mode to
+ * configure SCR.NS. Switch to monitor mode ASAP to simplify later
+ * code.
+ */
+ adr lr, reset_at_mon
+ ldr r0, =(PSR_A | PSR_I | PSR_F | PSR_MON)
+ msr spsr, r0
+ movs pc, lr
+
+reset_at_mon:
cpuid r0, r1
bl find_logical_id
cmp r0, #MPIDR_INVALID
@@ -39,36 +60,28 @@ ASM_FUNC(_start)
bl setup_stack
- mrs r0, cpsr
- and r0, #PSR_MODE_MASK
-
- cmp r0, #PSR_HYP
- bne _switch_monitor
+ bl cpu_init_bootwrapper
- mov r0, #1
- ldr r1, =flag_no_el3
- str r0, [r1]
+ bl cpu_init_secure_pl1
- bl cpu_init_bootwrapper
+ bl gic_secure_init
b start_bootmethod
-_switch_monitor:
- adr lr, _monitor
- ldr r0, =(PSR_A | PSR_I | PSR_F | PSR_MON)
- msr spsr, r0
- movs pc, lr
+reset_at_hyp:
+ cpuid r0, r1
+ bl find_logical_id
+ cmp r0, #MPIDR_INVALID
+ beq err_invalid_id
-_monitor:
- /* Move the stack to Monitor mode*/
- mrs sp, sp_svc
+ bl setup_stack
- bl cpu_init_secure_pl1
+ mov r0, #1
+ ldr r1, =flag_no_el3
+ str r0, [r1]
bl cpu_init_bootwrapper
- bl gic_secure_init
-
b start_bootmethod
err_invalid_id:
--
2.30.2
* [BOOT-WRAPPER 06/11] aarch32: Implement cpu_init_arch()
2024-07-29 16:14 [BOOT-WRAPPER 00/11] Cleanup initialization Mark Rutland
` (4 preceding siblings ...)
2024-07-29 16:14 ` [BOOT-WRAPPER 05/11] aarch32: Refactor initial entry Mark Rutland
@ 2024-07-29 16:14 ` Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 07/11] aarch32: Always enter kernel via exception return Mark Rutland
` (5 subsequent siblings)
11 siblings, 0 replies; 19+ messages in thread
From: Mark Rutland @ 2024-07-29 16:14 UTC (permalink / raw)
To: linux-arm-kernel
Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz
When the boot-wrapper is entered at EL2/Hyp it does not initialize
CNTFRQ, and in future it may need to initialize other CPU state
regardless of the exception level it was entered at.
Use a common cpu_init_arch() function to initialize CPU state regardless
of the exception level the boot-wrapper was entered at. For clarity
cpu_init_secure_pl1() is renamed to cpu_init_monitor(), which better
matches PSR_MON and will allow for the addition of cppu_init_hyp() and
cpu_init_svc() in future.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
---
arch/aarch32/boot.S | 4 +++-
arch/aarch32/init.c | 12 +++++++++---
2 files changed, 12 insertions(+), 4 deletions(-)
diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
index 43dce75..a6f0751 100644
--- a/arch/aarch32/boot.S
+++ b/arch/aarch32/boot.S
@@ -62,7 +62,7 @@ reset_at_mon:
bl cpu_init_bootwrapper
- bl cpu_init_secure_pl1
+ bl cpu_init_arch
bl gic_secure_init
@@ -82,6 +82,8 @@ reset_at_hyp:
bl cpu_init_bootwrapper
+ bl cpu_init_arch
+
b start_bootmethod
err_invalid_id:
diff --git a/arch/aarch32/init.c b/arch/aarch32/init.c
index e25f0c7..35da37c 100644
--- a/arch/aarch32/init.c
+++ b/arch/aarch32/init.c
@@ -29,7 +29,7 @@ void announce_arch(void)
print_string("\r\n");
}
-void cpu_init_secure_pl1(void)
+static void cpu_init_monitor(void)
{
unsigned long scr = SCR_NS | SCR_HCE;
unsigned long nsacr = NSACR_CP10 | NSACR_CP11;
@@ -37,8 +37,6 @@ void cpu_init_secure_pl1(void)
mcr(SCR, scr);
mcr(NSACR, nsacr);
-
- mcr(CNTFRQ, COUNTER_FREQ);
}
#ifdef PSCI
@@ -55,3 +53,11 @@ bool cpu_init_psci_arch(void)
return true;
}
#endif
+
+void cpu_init_arch(void)
+{
+ if (read_cpsr_mode() == PSR_MON)
+ cpu_init_monitor();
+
+ mcr(CNTFRQ, COUNTER_FREQ);
+}
--
2.30.2
* [BOOT-WRAPPER 07/11] aarch32: Always enter kernel via exception return
2024-07-29 16:14 [BOOT-WRAPPER 00/11] Cleanup initialization Mark Rutland
` (5 preceding siblings ...)
2024-07-29 16:14 ` [BOOT-WRAPPER 06/11] aarch32: Implement cpu_init_arch() Mark Rutland
@ 2024-07-29 16:14 ` Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 08/11] Unify assembly setup paths Mark Rutland
` (4 subsequent siblings)
11 siblings, 0 replies; 19+ messages in thread
From: Mark Rutland @ 2024-07-29 16:14 UTC (permalink / raw)
To: linux-arm-kernel
Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz
When the boot-wrapper is entered at Secure PL1 it will enter the kernel
via an exception return, ERET, and when entered at Hyp it will branch to
the kernel directly. This is an artifact of the way the boot-wrapper was
originally written in assembly, and it would be preferable to always
enter the kernel via an exception return so that PSTATE is always
initialized to a known-good value.
Rework jump_kernel() to always enter the kernel via ERET, matching the
style of the AArch64 version of jump_kernel().
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
---
arch/aarch32/boot.S | 44 +++++++++++++++++++++++---------------------
1 file changed, 23 insertions(+), 21 deletions(-)
diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
index a6f0751..11fd7aa 100644
--- a/arch/aarch32/boot.S
+++ b/arch/aarch32/boot.S
@@ -76,10 +76,6 @@ reset_at_hyp:
bl setup_stack
- mov r0, #1
- ldr r1, =flag_no_el3
- str r0, [r1]
-
bl cpu_init_bootwrapper
bl cpu_init_arch
@@ -96,9 +92,10 @@ err_invalid_id:
* r1-r3, sp[0]: kernel arguments
*/
ASM_FUNC(jump_kernel)
- sub sp, #4 @ Ignore fourth argument
- push {r0 - r3}
- mov r5, sp
+ mov r4, r0
+ mov r5, r1
+ mov r6, r2
+ mov r7, r3
ldr r0, =HSCTLR_KERNEL
mcr p15, 4, r0, c1, c0, 0 @ HSCTLR
@@ -111,19 +108,24 @@ ASM_FUNC(jump_kernel)
bl find_logical_id
bl setup_stack
- ldr lr, [r5], #4
- ldm r5, {r0 - r2}
-
- ldr r4, =flag_no_el3
- ldr r4, [r4]
- cmp r4, #1
- bxeq lr @ no EL3
+ mov r0, r5
+ mov r1, r6
+ mov r2, r7
+ ldr r3, =SPSR_KERNEL
+
+ mrs r5, cpsr
+ and r5, #PSR_MODE_MASK
+ cmp r5, #PSR_MON
+ beq eret_at_mon
+ cmp r5, #PSR_HYP
+ beq eret_at_hyp
+ b .
- ldr r4, =SPSR_KERNEL
- msr spsr_cxf, r4
+eret_at_mon:
+ mov lr, r4
+ msr spsr_cxf, r3
movs pc, lr
-
- .section .data
- .align 2
-flag_no_el3:
- .long 0
+eret_at_hyp:
+ msr elr_hyp, r4
+ msr spsr_cxf, r3
+ eret
--
2.30.2
* [BOOT-WRAPPER 08/11] Unify assembly setup paths
2024-07-29 16:14 [BOOT-WRAPPER 00/11] Cleanup initialization Mark Rutland
` (6 preceding siblings ...)
2024-07-29 16:14 ` [BOOT-WRAPPER 07/11] aarch32: Always enter kernel via exception return Mark Rutland
@ 2024-07-29 16:14 ` Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 09/11] Simplify spin logic Mark Rutland
` (3 subsequent siblings)
11 siblings, 0 replies; 19+ messages in thread
From: Mark Rutland @ 2024-07-29 16:14 UTC (permalink / raw)
To: linux-arm-kernel
Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz
The early assembly paths for EL3/Secure-PL1 and EL2/Hyp are almost
identical aside from the EL3/Secure-PL1 paths calling gic_secure_init().
Simplify the early assembly paths by conditionally calling
gic_secure_init() from cpu_init_arch(), allowing the EL3/Secure-PL1 and
EL2/Hyp paths to be unified.
In order to call gic_secure_init() from C code we need to expose a
prototype for gic_secure_init(), requiring a new <gic.h> header. For
clarity the existing <asm/gic-v3.h> headers are renamed to <asm/gic.h>
and are included through the common header.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
---
arch/aarch32/boot.S | 20 ++++++--------------
arch/aarch32/include/asm/{gic-v3.h => gic.h} | 2 +-
arch/aarch32/init.c | 5 ++++-
arch/aarch64/boot.S | 17 ++++-------------
arch/aarch64/include/asm/{gic-v3.h => gic.h} | 2 +-
arch/aarch64/init.c | 5 ++++-
common/gic-v3.c | 2 +-
common/gic.c | 2 +-
include/gic.h | 16 ++++++++++++++++
9 files changed, 38 insertions(+), 33 deletions(-)
rename arch/aarch32/include/asm/{gic-v3.h => gic.h} (91%)
rename arch/aarch64/include/asm/{gic-v3.h => gic.h} (92%)
create mode 100644 include/gic.h
diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
index 11fd7aa..3a51cb0 100644
--- a/arch/aarch32/boot.S
+++ b/arch/aarch32/boot.S
@@ -53,22 +53,14 @@ reset_at_svc:
movs pc, lr
reset_at_mon:
- cpuid r0, r1
- bl find_logical_id
- cmp r0, #MPIDR_INVALID
- beq err_invalid_id
-
- bl setup_stack
-
- bl cpu_init_bootwrapper
-
- bl cpu_init_arch
-
- bl gic_secure_init
-
- b start_bootmethod
+ // TODO initialise monitor state
+ b reset_common
reset_at_hyp:
+ // TODO initialise hyp state
+ b reset_common
+
+reset_common:
cpuid r0, r1
bl find_logical_id
cmp r0, #MPIDR_INVALID
diff --git a/arch/aarch32/include/asm/gic-v3.h b/arch/aarch32/include/asm/gic.h
similarity index 91%
rename from arch/aarch32/include/asm/gic-v3.h
rename to arch/aarch32/include/asm/gic.h
index b28136a..0b9425d 100644
--- a/arch/aarch32/include/asm/gic-v3.h
+++ b/arch/aarch32/include/asm/gic.h
@@ -1,5 +1,5 @@
/*
- * arch/aarch32/include/asm/gic-v3.h
+ * arch/aarch32/include/asm/gic.h
*
* Copyright (C) 2015 ARM Limited. All rights reserved.
*
diff --git a/arch/aarch32/init.c b/arch/aarch32/init.c
index 35da37c..cb67bf6 100644
--- a/arch/aarch32/init.c
+++ b/arch/aarch32/init.c
@@ -7,6 +7,7 @@
* found in the LICENSE.txt file.
*/
#include <cpu.h>
+#include <gic.h>
#include <platform.h>
#include <stdbool.h>
@@ -56,8 +57,10 @@ bool cpu_init_psci_arch(void)
void cpu_init_arch(void)
{
- if (read_cpsr_mode() == PSR_MON)
+ if (read_cpsr_mode() == PSR_MON) {
cpu_init_monitor();
+ gic_secure_init();
+ }
mcr(CNTFRQ, COUNTER_FREQ);
}
diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index 3dbf85a..4cc41bb 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -43,19 +43,7 @@ reset_at_el3:
msr sctlr_el3, x0
isb
- cpuid x0, x1
- bl find_logical_id
- cmp x0, #MPIDR_INVALID
- b.eq err_invalid_id
- bl setup_stack
-
- bl cpu_init_bootwrapper
-
- bl cpu_init_arch
-
- bl gic_secure_init
-
- b start_bootmethod
+ b reset_common
/*
* EL2 initialization
@@ -70,6 +58,9 @@ reset_at_el2:
msr sctlr_el2, x0
isb
+ b reset_common
+
+reset_common:
cpuid x0, x1
bl find_logical_id
cmp x0, #MPIDR_INVALID
diff --git a/arch/aarch64/include/asm/gic-v3.h b/arch/aarch64/include/asm/gic.h
similarity index 92%
rename from arch/aarch64/include/asm/gic-v3.h
rename to arch/aarch64/include/asm/gic.h
index 2447480..9d716f6 100644
--- a/arch/aarch64/include/asm/gic-v3.h
+++ b/arch/aarch64/include/asm/gic.h
@@ -1,5 +1,5 @@
/*
- * arch/aarch64/include/asm/gic-v3.h
+ * arch/aarch64/include/asm/gic.h
*
* Copyright (C) 2015 ARM Limited. All rights reserved.
*
diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
index 49abdf7..68c220b 100644
--- a/arch/aarch64/init.c
+++ b/arch/aarch64/init.c
@@ -7,6 +7,7 @@
* found in the LICENSE.txt file.
*/
#include <cpu.h>
+#include <gic.h>
#include <platform.h>
#include <stdbool.h>
@@ -172,8 +173,10 @@ bool cpu_init_psci_arch(void)
void cpu_init_arch(void)
{
- if (mrs(CurrentEL) == CURRENTEL_EL3)
+ if (mrs(CurrentEL) == CURRENTEL_EL3) {
cpu_init_el3();
+ gic_secure_init();
+ }
msr(CNTFRQ_EL0, COUNTER_FREQ);
}
diff --git a/common/gic-v3.c b/common/gic-v3.c
index 6207007..4d8e620 100644
--- a/common/gic-v3.c
+++ b/common/gic-v3.c
@@ -10,7 +10,7 @@
#include <stdint.h>
#include <cpu.h>
-#include <asm/gic-v3.h>
+#include <gic.h>
#include <asm/io.h>
#define GICD_CTLR 0x0
diff --git a/common/gic.c b/common/gic.c
index 04d4289..15a3410 100644
--- a/common/gic.c
+++ b/common/gic.c
@@ -10,7 +10,7 @@
#include <stdint.h>
#include <cpu.h>
-#include <asm/gic-v3.h>
+#include <gic.h>
#include <asm/io.h>
#define GICD_CTLR 0x0
diff --git a/include/gic.h b/include/gic.h
new file mode 100644
index 0000000..127f82b
--- /dev/null
+++ b/include/gic.h
@@ -0,0 +1,16 @@
+/*
+ * include/gic.h
+ *
+ * Copyright (C) 2024 ARM Limited. All rights reserved.
+ *
+ * Use of this source code is governed by a BSD-style license that can be
+ * found in the LICENSE.txt file.
+ */
+#ifndef __GIC_H
+#define __GIC_H
+
+#include <asm/gic.h>
+
+void gic_secure_init(void);
+
+#endif
--
2.30.2
* [BOOT-WRAPPER 09/11] Simplify spin logic
2024-07-29 16:14 [BOOT-WRAPPER 00/11] Cleanup initialization Mark Rutland
` (7 preceding siblings ...)
2024-07-29 16:14 ` [BOOT-WRAPPER 08/11] Unify assembly setup paths Mark Rutland
@ 2024-07-29 16:14 ` Mark Rutland
2024-07-29 16:15 ` [BOOT-WRAPPER 10/11] Add printing functions Mark Rutland
` (2 subsequent siblings)
11 siblings, 0 replies; 19+ messages in thread
From: Mark Rutland @ 2024-07-29 16:14 UTC (permalink / raw)
To: linux-arm-kernel
Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz
The logic for initial boot is more complicated than it needs to be,
with first_spin() having a special case for CPU0 that requires an
additional argument to be passed to spin().
Simplify this by moving the special-case logic for CPU0 into
first_spin(). This removes the need to provide a dummy mailbox for CPU0
to spin on, simplfiies callers of first_spin() and spin(), which no
longer need to pass a dummy mailbox or 'is_entry' for CPU0.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
---
arch/aarch64/spin.S | 11 +----------
common/boot.c | 20 ++++++++------------
common/psci.c | 2 +-
include/boot.h | 2 +-
4 files changed, 11 insertions(+), 24 deletions(-)
diff --git a/arch/aarch64/spin.S b/arch/aarch64/spin.S
index 375f732..a7879d4 100644
--- a/arch/aarch64/spin.S
+++ b/arch/aarch64/spin.S
@@ -23,15 +23,6 @@ ASM_FUNC(start_bootmethod)
* Primary CPU (x0 = 0) jumps to kernel, the other ones wait for an
* address to appear in mbox
*/
- adr x3, mbox
- adr x4, kernel_address
- cmp x0, #0
- csel x1, x3, x4, ne
+ adr x1, mbox
mov x2, #0
bl first_spin
-
- .align 3
-kernel_address:
- .long 0
-
- .ltorg
diff --git a/common/boot.c b/common/boot.c
index 29d53a4..4417649 100644
--- a/common/boot.c
+++ b/common/boot.c
@@ -27,7 +27,7 @@ const unsigned long id_table[] = { CPU_IDS };
* @invalid: value of an invalid address, 0 or -1 depending on the boot method
* @is_entry: when true, pass boot parameters to the kernel, instead of 0
*/
-void __noreturn spin(unsigned long *mbox, unsigned long invalid, int is_entry)
+void __noreturn spin(unsigned long *mbox, unsigned long invalid)
{
unsigned long addr = invalid;
@@ -36,13 +36,6 @@ void __noreturn spin(unsigned long *mbox, unsigned long invalid, int is_entry)
addr = *mbox;
}
- if (is_entry)
-#ifdef KERNEL_32
- jump_kernel(addr, 0, ~0, (unsigned long)&dtb, 0);
-#else
- jump_kernel(addr, (unsigned long)&dtb, 0, 0, 0);
-#endif
-
jump_kernel(addr, 0, 0, 0, 0);
unreachable();
@@ -60,12 +53,15 @@ void __noreturn first_spin(unsigned int cpu, unsigned long *mbox,
unsigned long invalid)
{
if (cpu == 0) {
- *mbox = (unsigned long)&entrypoint;
- sevl();
- spin(mbox, invalid, 1);
+ unsigned long addr = (unsigned long)&entrypoint;
+#ifdef KERNEL_32
+ jump_kernel(addr, 0, ~0, (unsigned long)&dtb, 0);
+#else
+ jump_kernel(addr, (unsigned long)&dtb, 0, 0, 0);
+#endif
} else {
*mbox = invalid;
- spin(mbox, invalid, 0);
+ spin(mbox, invalid);
}
unreachable();
diff --git a/common/psci.c b/common/psci.c
index 5ae4255..19cc315 100644
--- a/common/psci.c
+++ b/common/psci.c
@@ -57,7 +57,7 @@ static int psci_cpu_off(void)
branch_table[cpu] = PSCI_ADDR_INVALID;
- spin(branch_table + cpu, PSCI_ADDR_INVALID, 0);
+ spin(branch_table + cpu, PSCI_ADDR_INVALID);
unreachable();
}
diff --git a/include/boot.h b/include/boot.h
index 459d1d5..18b805d 100644
--- a/include/boot.h
+++ b/include/boot.h
@@ -12,7 +12,7 @@
#include <compiler.h>
#include <stdbool.h>
-void __noreturn spin(unsigned long *mbox, unsigned long invalid, int is_entry);
+void __noreturn spin(unsigned long *mbox, unsigned long invalid);
void __noreturn first_spin(unsigned int cpu, unsigned long *mbox,
unsigned long invalid_addr);
--
2.30.2
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [BOOT-WRAPPER 10/11] Add printing functions
2024-07-29 16:14 [BOOT-WRAPPER 00/11] Cleanup initialization Mark Rutland
` (8 preceding siblings ...)
2024-07-29 16:14 ` [BOOT-WRAPPER 09/11] Simplify spin logic Mark Rutland
@ 2024-07-29 16:15 ` Mark Rutland
2024-07-29 16:15 ` [BOOT-WRAPPER 11/11] Boot CPUs sequentially Mark Rutland
2024-08-02 10:18 ` [BOOT-WRAPPER 00/11] Cleanup initialization Marc Zyngier
11 siblings, 0 replies; 19+ messages in thread
From: Mark Rutland @ 2024-07-29 16:15 UTC (permalink / raw)
To: linux-arm-kernel
Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz
In subsequent patches we'll want to log messages from specific CPUs. Add
helpers to make this simpler.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
---
common/platform.c | 35 +++++++++++++++++++++++++++++++++++
include/platform.h | 4 ++++
2 files changed, 39 insertions(+)
diff --git a/common/platform.c b/common/platform.c
index 1607ee6..4e390e1 100644
--- a/common/platform.c
+++ b/common/platform.c
@@ -65,6 +65,41 @@ void print_ulong_hex(unsigned long val)
}
}
+// 2^32 is 4,294,967,296
+#define DEC_CHARS_PER_UINT 10
+
+void print_uint_dec(unsigned int val)
+{
+ char digits[DEC_CHARS_PER_UINT];
+ int d = 0;
+
+ do {
+ digits[d] = val % 10;
+ val /= 10;
+ d++;
+ } while (val);
+
+ while (d--) {
+ print_char('0' + digits[d]);
+ }
+}
+
+void print_cpu_warn(unsigned int cpu, const char *str)
+{
+ print_string("CPU");
+ print_uint_dec(cpu);
+ print_string(" WARNING: ");
+ print_string(str);
+}
+
+void print_cpu_msg(unsigned int cpu, const char *str)
+{
+ print_string("CPU");
+ print_uint_dec(cpu);
+ print_string(": ");
+ print_string(str);
+}
+
void init_uart(void)
{
/*
diff --git a/include/platform.h b/include/platform.h
index c88e124..09712b1 100644
--- a/include/platform.h
+++ b/include/platform.h
@@ -12,6 +12,10 @@
void print_char(char c);
void print_string(const char *str);
void print_ulong_hex(unsigned long val);
+void print_uint_dec(unsigned int val);
+
+void print_cpu_warn(unsigned int cpu, const char *str);
+void print_cpu_msg(unsigned int cpu, const char *str);
void init_uart(void);
--
2.30.2
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [BOOT-WRAPPER 11/11] Boot CPUs sequentially
2024-07-29 16:14 [BOOT-WRAPPER 00/11] Cleanup initialization Mark Rutland
` (9 preceding siblings ...)
2024-07-29 16:15 ` [BOOT-WRAPPER 10/11] Add printing functions Mark Rutland
@ 2024-07-29 16:15 ` Mark Rutland
2024-07-31 9:57 ` Luca Fancellu
2024-08-02 10:18 ` [BOOT-WRAPPER 00/11] Cleanup initialization Marc Zyngier
11 siblings, 1 reply; 19+ messages in thread
From: Mark Rutland @ 2024-07-29 16:15 UTC (permalink / raw)
To: linux-arm-kernel
Cc: akos.denke, andre.przywara, luca.fancellu, mark.rutland, maz
Currently the boot-wrapper initializes all CPUs in parallel. This means
that we cannot log errors as they happen, as this would mean multiple
CPUs concurrently writing to the UART, producing garbled output. To
produce meaningful output we have to special-case errors on the boot CPU
and hope CPUs have been configured consistently.
To make it easier to handle errors, boot CPUs sequentially so that errors
can be logged as they happen. With this change it's pretty clear that
the cpu_init_bootmethod() abstraction isn't helpful, and so this is
removed with cpu_init_arch() directly initializing PSCI where necessary.
When things go well this looks like:
| Boot-wrapper v0.2
| Entered at EL3
| Memory layout:
| [0000000080000000..0000000080001f90] => boot-wrapper
| [000000008000fff8..0000000080010000] => mbox
| [0000000080200000..0000000082cbaa00] => kernel
| [0000000088000000..0000000088002df1] => dtb
| CPU0: (MPIDR 0000000000000000) initializing...
| CPU0: Done
| CPU1: (MPIDR 0000000000000100) initializing...
| CPU1: Done
| CPU2: (MPIDR 0000000000000200) initializing...
| CPU2: Done
| CPU3: (MPIDR 0000000000000300) initializing...
| CPU3: Done
| CPU4: (MPIDR 0000000000010000) initializing...
| CPU4: Done
| CPU5: (MPIDR 0000000000010100) initializing...
| CPU5: Done
| CPU6: (MPIDR 0000000000010200) initializing...
| CPU6: Done
| CPU7: (MPIDR 0000000000010300) initializing...
| CPU7: Done
| All CPUs initialized. Entering kernel...
|
| [ 0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd0f0]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
---
arch/aarch32/boot.S | 2 --
arch/aarch32/init.c | 17 +++++++++------
arch/aarch64/boot.S | 2 --
arch/aarch64/init.c | 17 +++++++++------
arch/aarch64/spin.S | 3 ---
common/init.c | 50 +++++++++++++++++++++++++++++++++++++--------
common/psci.c | 14 -------------
include/boot.h | 6 +-----
8 files changed, 64 insertions(+), 47 deletions(-)
diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
index 3a51cb0..679e28a 100644
--- a/arch/aarch32/boot.S
+++ b/arch/aarch32/boot.S
@@ -70,8 +70,6 @@ reset_common:
bl cpu_init_bootwrapper
- bl cpu_init_arch
-
b start_bootmethod
err_invalid_id:
diff --git a/arch/aarch32/init.c b/arch/aarch32/init.c
index cb67bf6..d08ac83 100644
--- a/arch/aarch32/init.c
+++ b/arch/aarch32/init.c
@@ -6,6 +6,7 @@
* Use of this source code is governed by a BSD-style license that can be
* found in the LICENSE.txt file.
*/
+#include <boot.h>
#include <cpu.h>
#include <gic.h>
#include <platform.h>
@@ -43,24 +44,28 @@ static void cpu_init_monitor(void)
#ifdef PSCI
extern char psci_vectors[];
-bool cpu_init_psci_arch(void)
+static void cpu_init_psci_arch(unsigned int cpu)
{
- if (read_cpsr_mode() != PSR_MON)
- return false;
+ if (read_cpsr_mode() != PSR_MON) {
+ print_cpu_warn(cpu, "PSCI could not be initialized (not booted at PL1).\r\n");
+ return;
+ }
mcr(MVBAR, (unsigned long)psci_vectors);
isb();
-
- return true;
}
+#else
+static void cpu_init_psci_arch(unsigned int cpu) { }
#endif
-void cpu_init_arch(void)
+void cpu_init_arch(unsigned int cpu)
{
if (read_cpsr_mode() == PSR_MON) {
cpu_init_monitor();
gic_secure_init();
}
+ cpu_init_psci_arch(cpu);
+
mcr(CNTFRQ, COUNTER_FREQ);
}
diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index 4cc41bb..f8fc808 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -69,8 +69,6 @@ reset_common:
bl cpu_init_bootwrapper
- bl cpu_init_arch
-
b start_bootmethod
err_invalid_id:
diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
index 68c220b..81fe33a 100644
--- a/arch/aarch64/init.c
+++ b/arch/aarch64/init.c
@@ -6,6 +6,7 @@
* Use of this source code is governed by a BSD-style license that can be
* found in the LICENSE.txt file.
*/
+#include <boot.h>
#include <cpu.h>
#include <gic.h>
#include <platform.h>
@@ -159,24 +160,28 @@ static void cpu_init_el3(void)
#ifdef PSCI
extern char psci_vectors[];
-bool cpu_init_psci_arch(void)
+static void cpu_init_psci_arch(unsigned int cpu)
{
- if (mrs(CurrentEL) != CURRENTEL_EL3)
- return false;
+ if (mrs(CurrentEL) != CURRENTEL_EL3) {
+ print_cpu_warn(cpu, "PSCI could not be initialized (not booted at EL3).\r\n");
+ return;
+ }
msr(VBAR_EL3, (unsigned long)psci_vectors);
isb();
-
- return true;
}
+#else
+static void cpu_init_psci_arch(unsigned int cpu);
#endif
-void cpu_init_arch(void)
+void cpu_init_arch(unsigned int cpu)
{
if (mrs(CurrentEL) == CURRENTEL_EL3) {
cpu_init_el3();
gic_secure_init();
}
+ cpu_init_psci_arch(cpu);
+
msr(CNTFRQ_EL0, COUNTER_FREQ);
}
diff --git a/arch/aarch64/spin.S b/arch/aarch64/spin.S
index a7879d4..81d7ee4 100644
--- a/arch/aarch64/spin.S
+++ b/arch/aarch64/spin.S
@@ -12,9 +12,6 @@
.text
-ASM_FUNC(cpu_init_bootmethod)
- ret
-
ASM_FUNC(start_bootmethod)
cpuid x0, x1
bl find_logical_id
diff --git a/common/init.c b/common/init.c
index 3c05ac3..6197ead 100644
--- a/common/init.c
+++ b/common/init.c
@@ -43,18 +43,50 @@ static void announce_objects(void)
void announce_arch(void);
+static void init_bootwrapper(void)
+{
+ init_uart();
+ announce_bootwrapper();
+ announce_arch();
+ announce_objects();
+ init_platform();
+}
+
+static void cpu_init_self(unsigned int cpu)
+{
+ print_string("CPU");
+ print_uint_dec(cpu);
+ print_string(": (MPIDR ");
+ print_ulong_hex(read_mpidr());
+ print_string(") initializing...\r\n");
+
+ cpu_init_arch(cpu);
+
+ print_cpu_msg(cpu, "Done\r\n");
+}
+
void cpu_init_bootwrapper(void)
{
+ static volatile unsigned int cpu_next = 0;
unsigned int cpu = this_cpu_logical_id();
- if (cpu == 0) {
- init_uart();
- announce_bootwrapper();
- announce_arch();
- announce_objects();
- print_string("\r\n");
- init_platform();
- }
+ if (cpu == 0)
+ init_bootwrapper();
+
+ while (cpu_next != cpu)
+ wfe();
+
+ cpu_init_self(cpu);
+
+ cpu_next = cpu + 1;
+ dsb(sy);
+ sev();
+
+ if (cpu != 0)
+ return;
+
+ while (cpu_next != NR_CPUS)
+ wfe();
- cpu_init_bootmethod(cpu);
+ print_string("All CPUs initialized. Entering kernel...\r\n\r\n");
}
diff --git a/common/psci.c b/common/psci.c
index 19cc315..5fe8999 100644
--- a/common/psci.c
+++ b/common/psci.c
@@ -87,17 +87,3 @@ void __noreturn psci_first_spin(void)
unreachable();
}
-
-void cpu_init_bootmethod(unsigned int cpu)
-{
- if (cpu_init_psci_arch())
- return;
-
- if (cpu == 0) {
- print_string("WARNING: PSCI could not be initialized. Boot may fail\r\n\r\n");
- return;
- }
-
- while (1)
- wfe();
-}
diff --git a/include/boot.h b/include/boot.h
index 18b805d..12c9c5c 100644
--- a/include/boot.h
+++ b/include/boot.h
@@ -17,10 +17,6 @@ void __noreturn spin(unsigned long *mbox, unsigned long invalid);
void __noreturn first_spin(unsigned int cpu, unsigned long *mbox,
unsigned long invalid_addr);
-void cpu_init_bootmethod(unsigned int cpu);
-
-#ifdef PSCI
-bool cpu_init_psci_arch(void);
-#endif
+void cpu_init_arch(unsigned int cpu);
#endif
--
2.30.2
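For reviewers who want to convince themselves of the ticket ordering in cpu_init_bootwrapper() above, the hand-off can be modelled off-target with POSIX threads standing in for CPUs. This is purely an illustrative sketch, not part of the patch: the names (`NR_CPUS`, `cpu_next`) follow the patch, but wfe()/dsb()/sev() are approximated by a busy-wait on a C11 atomic, which is all the ticket scheme needs for ordering.

```c
/* Model of the sequential hand-off: each "CPU" spins until the previous
 * one publishes its ticket, runs its init step alone, then passes the
 * ticket on. The spin stands in for wfe(); the atomic store for
 * dsb(); sev(). */
#include <pthread.h>
#include <stdatomic.h>

#define NR_CPUS 4

static atomic_uint cpu_next;
static unsigned int boot_order[NR_CPUS];
static atomic_uint order_idx;

static void *cpu_thread(void *arg)
{
	unsigned int cpu = (unsigned int)(unsigned long)arg;

	/* Wait for our turn (stands in for the wfe() loop). */
	while (atomic_load(&cpu_next) != cpu)
		;

	/* Exclusive section: only one CPU "logs" at a time. */
	boot_order[atomic_fetch_add(&order_idx, 1)] = cpu;

	/* Pass the ticket on (stands in for dsb(sy); sev()). */
	atomic_store(&cpu_next, cpu + 1);
	return NULL;
}

static void run_sequential_boot(void)
{
	pthread_t threads[NR_CPUS];

	for (unsigned long i = 0; i < NR_CPUS; i++)
		pthread_create(&threads[i], NULL, cpu_thread, (void *)i);
	for (unsigned int i = 0; i < NR_CPUS; i++)
		pthread_join(threads[i], NULL);
}
```

Regardless of thread scheduling, boot_order[] always comes out as 0, 1, 2, 3: the ticket serializes the init steps just as the patch serializes CPU bring-up.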
* Re: [BOOT-WRAPPER 11/11] Boot CPUs sequentially
2024-07-29 16:15 ` [BOOT-WRAPPER 11/11] Boot CPUs sequentially Mark Rutland
@ 2024-07-31 9:57 ` Luca Fancellu
2024-08-03 10:57 ` Mark Rutland
0 siblings, 1 reply; 19+ messages in thread
From: Luca Fancellu @ 2024-07-31 9:57 UTC (permalink / raw)
To: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org, Akos Denke, Andre Przywara,
maz@kernel.org
Hi Mark,
> diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
> index 68c220b..81fe33a 100644
> --- a/arch/aarch64/init.c
> +++ b/arch/aarch64/init.c
> @@ -6,6 +6,7 @@
> * Use of this source code is governed by a BSD-style license that can be
> * found in the LICENSE.txt file.
> */
> +#include <boot.h>
> #include <cpu.h>
> #include <gic.h>
> #include <platform.h>
> @@ -159,24 +160,28 @@ static void cpu_init_el3(void)
> #ifdef PSCI
> extern char psci_vectors[];
>
> -bool cpu_init_psci_arch(void)
> +static void cpu_init_psci_arch(unsigned int cpu)
> {
> - if (mrs(CurrentEL) != CURRENTEL_EL3)
> - return false;
> + if (mrs(CurrentEL) != CURRENTEL_EL3) {
> + print_cpu_warn(cpu, "PSCI could not be initialized (not booted at EL3).\r\n");
> + return;
> + }
>
> msr(VBAR_EL3, (unsigned long)psci_vectors);
> isb();
> -
> - return true;
> }
> +#else
> +static void cpu_init_psci_arch(unsigned int cpu);
This needs {} otherwise I’m getting:
| aarch64-poky-linux-ld: arch/aarch64/init.o: in function `cpu_init_arch’:
| /usr/src/debug/boot-wrapper-aarch64/gitAUTOINC+cd7fe8a88e-r0/arch/aarch64/init.c:246: undefined reference to `cpu_init_psci_arch’
| aarch64-poky-linux-ld: /usr/src/debug/boot-wrapper-aarch64/gitAUTOINC+cd7fe8a88e-r0/arch/aarch64/init.c:246: undefined reference to `cpu_init_psci_arch’
When compiling without PSCI.
Cheers,
Luca
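For reference, the shape of the fix Luca's report calls for: a bare `static` function declaration compiles fine in isolation, but once cpu_init_arch() calls it, the missing definition only surfaces as an undefined reference at link time. The `#else` stub therefore needs an empty body. A minimal sketch (the counter is a hypothetical instrumentation added here purely so the stub's behaviour can be observed; the real stub's body is empty):

```c
/* Empty-bodied stub (the fix): `static void f(unsigned int);` with no
 * definition would compile here but fail to link once the function is
 * called, which is exactly the reported failure. */
static unsigned int psci_init_calls;

static void cpu_init_psci_arch(unsigned int cpu)
{
	(void)cpu;
	psci_init_calls++;	/* sketch-only instrumentation */
}

void cpu_init_arch_sketch(unsigned int cpu)
{
	cpu_init_psci_arch(cpu);	/* links because the stub has a body */
}
```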
* Re: [BOOT-WRAPPER 03/11] aarch64: Implement cpu_init_arch()
2024-07-29 16:14 ` [BOOT-WRAPPER 03/11] aarch64: Implement cpu_init_arch() Mark Rutland
@ 2024-08-02 9:29 ` Marc Zyngier
2024-08-02 9:38 ` Mark Rutland
0 siblings, 1 reply; 19+ messages in thread
From: Marc Zyngier @ 2024-08-02 9:29 UTC (permalink / raw)
To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, andre.przywara, luca.fancellu
On Mon, 29 Jul 2024 17:14:53 +0100,
Mark Rutland <mark.rutland@arm.com> wrote:
>
> When the boot-wrapper is entered at EL2 it does not initialise
> CNTFRQ_EL0, and in future it may need to initialize other CPU state
> regardless of the exception level it was entered at.
>
> Use a common cpu_init_arch() function to initialize CPU state regardless
> of the exception level the boot-wrapper was entered at.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Akos Denke <akos.denke@arm.com>
> Cc: Andre Przywara <andre.przywara@arm.com>
> Cc: Luca Fancellu <luca.fancellu@arm.com>
> Cc: Marc Zyngier <maz@kernel.org>
> ---
> arch/aarch64/boot.S | 4 +++-
> arch/aarch64/init.c | 12 +++++++++---
> 2 files changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> index 51ef41b..d8d38dd 100644
> --- a/arch/aarch64/boot.S
> +++ b/arch/aarch64/boot.S
> @@ -51,7 +51,7 @@ reset_at_el3:
>
> bl cpu_init_bootwrapper
>
> - bl cpu_init_el3
> + bl cpu_init_arch
>
> bl gic_secure_init
>
> @@ -82,6 +82,8 @@ reset_at_el2:
>
> bl cpu_init_bootwrapper
>
> + bl cpu_init_arch
> +
> b start_bootmethod
>
> err_invalid_id:
> diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
> index c9fc7f1..49abdf7 100644
> --- a/arch/aarch64/init.c
> +++ b/arch/aarch64/init.c
> @@ -52,7 +52,7 @@ static inline bool cpu_has_permission_indirection(void)
> return mrs(ID_AA64MMFR3_EL1) & mask;
> }
>
> -void cpu_init_el3(void)
> +static void cpu_init_el3(void)
> {
> unsigned long scr = SCR_EL3_RES1 | SCR_EL3_NS | SCR_EL3_HCE;
> unsigned long mdcr = 0;
> @@ -153,8 +153,6 @@ void cpu_init_el3(void)
>
> msr(SMCR_EL3, smcr);
> }
> -
> - msr(CNTFRQ_EL0, COUNTER_FREQ);
> }
>
> #ifdef PSCI
> @@ -171,3 +169,11 @@ bool cpu_init_psci_arch(void)
> return true;
> }
> #endif
> +
> +void cpu_init_arch(void)
> +{
> + if (mrs(CurrentEL) == CURRENTEL_EL3)
> + cpu_init_el3();
> +
> + msr(CNTFRQ_EL0, COUNTER_FREQ);
> +}
Hmmm. This means that you cannot use the BW on a system where EL3 is
implemented, but where you decide to enter at EL2 anyway (the write to
CNTFRQ_EL0 will UNDEF).
I don't care much (I always want the BW to be the first piece of SW to
run), but this is a rather subtle change in behaviour, and we'd better
capture it in the commit message.
With that,
Acked-by: Marc Zyngier <maz@kernel.org>
M.
--
Without deviation from the norm, progress is not possible.
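The hazard Marc describes can be modelled abstractly: writes to CNTFRQ_EL0 are only permitted from the highest implemented exception level, so a boot-wrapper entered at EL2 on an EL3-capable system UNDEFs on the now-unconditional write. A toy model of that behaviour change (all names hypothetical; real hardware takes an Undefined Instruction exception rather than returning an error):

```c
/* Toy model: write_cntfrq() "succeeds" only when running at the highest
 * implemented EL, mirroring the architectural restriction on CNTFRQ_EL0. */
#include <stdbool.h>

enum { EL1 = 1, EL2 = 2, EL3 = 3 };

static bool write_cntfrq(int current_el, int highest_impl_el)
{
	return current_el == highest_impl_el;	/* otherwise the MSR UNDEFs */
}

static bool cpu_init_arch_model(int current_el, int highest_impl_el)
{
	/* cpu_init_arch() now writes CNTFRQ_EL0 unconditionally. */
	return write_cntfrq(current_el, highest_impl_el);
}
```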
* Re: [BOOT-WRAPPER 03/11] aarch64: Implement cpu_init_arch()
2024-08-02 9:29 ` Marc Zyngier
@ 2024-08-02 9:38 ` Mark Rutland
0 siblings, 0 replies; 19+ messages in thread
From: Mark Rutland @ 2024-08-02 9:38 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, akos.denke, andre.przywara, luca.fancellu
On Fri, Aug 02, 2024 at 10:29:36AM +0100, Marc Zyngier wrote:
> On Mon, 29 Jul 2024 17:14:53 +0100,
> Mark Rutland <mark.rutland@arm.com> wrote:
> >
> > When the boot-wrapper is entered at EL2 it does not initialise
> > CNTFRQ_EL0, and in future it may need to initialize other CPU state
> > regardless of the exception level it was entered at.
> >
> > Use a common cpu_init_arch() function to initialize CPU state regardless
> > of the exception level the boot-wrapper was entered at.
> >
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Akos Denke <akos.denke@arm.com>
> > Cc: Andre Przywara <andre.przywara@arm.com>
> > Cc: Luca Fancellu <luca.fancellu@arm.com>
> > Cc: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/aarch64/boot.S | 4 +++-
> > arch/aarch64/init.c | 12 +++++++++---
> > 2 files changed, 12 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> > index 51ef41b..d8d38dd 100644
> > --- a/arch/aarch64/boot.S
> > +++ b/arch/aarch64/boot.S
> > @@ -51,7 +51,7 @@ reset_at_el3:
> >
> > bl cpu_init_bootwrapper
> >
> > - bl cpu_init_el3
> > + bl cpu_init_arch
> >
> > bl gic_secure_init
> >
> > @@ -82,6 +82,8 @@ reset_at_el2:
> >
> > bl cpu_init_bootwrapper
> >
> > + bl cpu_init_arch
> > +
> > b start_bootmethod
> >
> > err_invalid_id:
> > diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
> > index c9fc7f1..49abdf7 100644
> > --- a/arch/aarch64/init.c
> > +++ b/arch/aarch64/init.c
> > @@ -52,7 +52,7 @@ static inline bool cpu_has_permission_indirection(void)
> > return mrs(ID_AA64MMFR3_EL1) & mask;
> > }
> >
> > -void cpu_init_el3(void)
> > +static void cpu_init_el3(void)
> > {
> > unsigned long scr = SCR_EL3_RES1 | SCR_EL3_NS | SCR_EL3_HCE;
> > unsigned long mdcr = 0;
> > @@ -153,8 +153,6 @@ void cpu_init_el3(void)
> >
> > msr(SMCR_EL3, smcr);
> > }
> > -
> > - msr(CNTFRQ_EL0, COUNTER_FREQ);
> > }
> >
> > #ifdef PSCI
> > @@ -171,3 +169,11 @@ bool cpu_init_psci_arch(void)
> > return true;
> > }
> > #endif
> > +
> > +void cpu_init_arch(void)
> > +{
> > + if (mrs(CurrentEL) == CURRENTEL_EL3)
> > + cpu_init_el3();
> > +
> > + msr(CNTFRQ_EL0, COUNTER_FREQ);
> > +}
>
> Hmmm. This means that you cannot use the BW on a system where EL3 is
> implemented, but where you decide to enter at EL2 anyway (the write to
> CNTFRQ_EL0 will UNDEF).
>
> I don't care much (I always want the BW to be the first piece of SW to
> run), but this is a rather subtle change in behaviour, and we'd better
> capture it in the commit message.
Booting in that way has been explicitly documented as not supported since
January 2022 in commit:
286b8ecc86393c26 ("Document entry requirements")
... which added documentation to the start of boot.S which says:
| The boot-wrapper must be entered from the reset vector at the
| highest implemented exception level.
... but regardless I can add some text to the commit message to say that
violating this requirement will now cause a silent failure.
> With that,
>
> Acked-by: Marc Zyngier <maz@kernel.org>
Thanks!
Mark.
* Re: [BOOT-WRAPPER 00/11] Cleanup initialization
2024-07-29 16:14 [BOOT-WRAPPER 00/11] Cleanup initialization Mark Rutland
` (10 preceding siblings ...)
2024-07-29 16:15 ` [BOOT-WRAPPER 11/11] Boot CPUs sequentially Mark Rutland
@ 2024-08-02 10:18 ` Marc Zyngier
11 siblings, 0 replies; 19+ messages in thread
From: Marc Zyngier @ 2024-08-02 10:18 UTC (permalink / raw)
To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, andre.przywara, luca.fancellu
On Mon, 29 Jul 2024 17:14:50 +0100,
Mark Rutland <mark.rutland@arm.com> wrote:
>
> These patches cleanup the boot-wrapper initialization logic to make it
> more consistent and easier to extend in C code in future. The big
> changes are:
>
> * The kernel is always entered via an exception return. This allows us
> to initialize PSTATE consistently, and will allow us to adjust the
> SPSR dynamically in C code in future if necessary.
>
> * Regardless of the entered exception level, CPU state is initialized
> under cpu_init_arch(), allowing for logic to be shared regardless of
> which exception level was entered.
>
> * CPUs are initialized sequentially, which allows for better
> logging within the boot-wrapper.
>
> Mark.
>
> Mark Rutland (11):
> Always enter AArch32 kernels in ARM mode
> aarch64: Remove redundant EL1 entry logic
> aarch64: Implement cpu_init_arch()
> aarch64: Always enter kernel via exception return
> aarch32: Refactor initial entry
> aarch32: Implement cpu_init_arch()
> aarch32: Always enter kernel via exception return
> Unify assembly setup paths
> Simplify spin logic
> Add printing functions
> Boot CPUs sequentially
>
> arch/aarch32/boot.S | 95 ++++++++++----------
> arch/aarch32/include/asm/{gic-v3.h => gic.h} | 2 +-
> arch/aarch32/init.c | 30 +++++--
> arch/aarch64/boot.S | 66 ++++----------
> arch/aarch64/include/asm/{gic-v3.h => gic.h} | 2 +-
> arch/aarch64/init.c | 30 +++++--
> arch/aarch64/spin.S | 14 +--
> common/boot.c | 20 ++---
> common/gic-v3.c | 2 +-
> common/gic.c | 2 +-
> common/init.c | 50 +++++++++--
> common/platform.c | 35 ++++++++
> common/psci.c | 16 +---
> include/boot.h | 8 +-
> include/gic.h | 16 ++++
> include/platform.h | 4 +
> 16 files changed, 221 insertions(+), 171 deletions(-)
> rename arch/aarch32/include/asm/{gic-v3.h => gic.h} (91%)
> rename arch/aarch64/include/asm/{gic-v3.h => gic.h} (92%)
> create mode 100644 include/gic.h
This looks good to me. With the issue reported by Lucas fixed and an
extra comment in patch #3:
Acked-by: Marc Zyngier <maz@kernel.org>
M.
--
Without deviation from the norm, progress is not possible.
* Re: [BOOT-WRAPPER 01/11] Always enter AArch32 kernels in ARM mode
2024-07-29 16:14 ` [BOOT-WRAPPER 01/11] Always enter AArch32 kernels in ARM mode Mark Rutland
@ 2024-08-02 11:26 ` Andre Przywara
2024-08-03 10:53 ` Mark Rutland
0 siblings, 1 reply; 19+ messages in thread
From: Andre Przywara @ 2024-08-02 11:26 UTC (permalink / raw)
To: Mark Rutland; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz
On Mon, 29 Jul 2024 17:14:51 +0100
Mark Rutland <mark.rutland@arm.com> wrote:
> Currently we try to support entering AArch32 kernels, but this is
I think you are missing "in Thumb mode" here? The read is a bit confusing
otherwise.
> unnecessary, and the code is never exercised.
>
> Per Linux's Documentation/arch/arm/booting.txt, AArch32 kernels
> supported by the AArch64 boot-wrapper should always be entered in ARM
> mode:
>
> | The boot loader is expected to call the kernel image by jumping
> | directly to the first instruction of the kernel image.
> |
> | On CPUs supporting the ARM instruction set, the entry must be
> | made in ARM state, even for a Thumb-2 kernel.
> |
> | On CPUs supporting only the Thumb instruction set such as
> | Cortex-M class CPUs, the entry must be made in Thumb state.
>
> Additionally, the kernel__start symbol that we use as the kernel
> entrypoint is always PHYS_OFFSET + KERNEL_OFFSET, which doesn't take
> into account any ARM/Thumb distinction in the AArch32 kernel image, and
> hence we'll never try to set the Thumb bit in the SPSR.
Is that true? I see the first_spin code path for CPU 0 using those values,
which indeed never have bit 0 set, but the address could come from *mbox
as well, given by the live kernel in the PSCI code path, and we don't have
any control over that.
Or do I miss anything here?
I think the patch is still valid, but we might need to relax the commit
message here a bit?
Cheers,
Andre
>
> Remove the redundant code.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Akos Denke <akos.denke@arm.com>
> Cc: Andre Przywara <andre.przywara@arm.com>
> Cc: Luca Fancellu <luca.fancellu@arm.com>
> Cc: Marc Zyngier <maz@kernel.org>
> ---
> arch/aarch32/boot.S | 4 ----
> arch/aarch64/boot.S | 7 -------
> 2 files changed, 11 deletions(-)
>
> diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
> index 4d16c9c..5c2a183 100644
> --- a/arch/aarch32/boot.S
> +++ b/arch/aarch32/boot.S
> @@ -105,10 +105,6 @@ ASM_FUNC(jump_kernel)
> bxeq lr @ no EL3
>
> ldr r4, =SPSR_KERNEL
> - /* Return in thumb2 mode when bit 0 of address is 1 */
> - tst lr, #1
> - orrne r4, #PSR_T
> -
> msr spsr_cxf, r4
> movs pc, lr
>
> diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> index da5fa65..b889137 100644
> --- a/arch/aarch64/boot.S
> +++ b/arch/aarch64/boot.S
> @@ -136,13 +136,6 @@ ASM_FUNC(jump_kernel)
> br x19 // No EL3
>
> 1: mov x4, #SPSR_KERNEL
> -
> - /*
> - * If bit 0 of the kernel address is set, we're entering in AArch32
> - * thumb mode. Set SPSR.T accordingly.
> - */
> - bfi x4, x19, #5, #1
> -
> msr elr_el3, x19
> msr spsr_el3, x4
> eret
* Re: [BOOT-WRAPPER 01/11] Always enter AArch32 kernels in ARM mode
2024-08-02 11:26 ` Andre Przywara
@ 2024-08-03 10:53 ` Mark Rutland
0 siblings, 0 replies; 19+ messages in thread
From: Mark Rutland @ 2024-08-03 10:53 UTC (permalink / raw)
To: Andre Przywara; +Cc: linux-arm-kernel, akos.denke, luca.fancellu, maz
On Fri, Aug 02, 2024 at 12:26:39PM +0100, Andre Przywara wrote:
> On Mon, 29 Jul 2024 17:14:51 +0100
> Mark Rutland <mark.rutland@arm.com> wrote:
>
> > Currently we try to support entering AArch32 kernels, but this is
>
> I think you are missing "in Thumb mode" here? The read is a bit confusing
> otherwise.
>
> > unnecessary, and the code is never exercised.
> >
> > Per Linux's Documentation/arch/arm/booting.txt, AArch32 kernels
> > supported by the AArch64 boot-wrapper should always be entered in ARM
> > mode:
> >
> > | The boot loader is expected to call the kernel image by jumping
> > | directly to the first instruction of the kernel image.
> > |
> > | On CPUs supporting the ARM instruction set, the entry must be
> > | made in ARM state, even for a Thumb-2 kernel.
> > |
> > | On CPUs supporting only the Thumb instruction set such as
> > | Cortex-M class CPUs, the entry must be made in Thumb state.
> >
> > Additionally, the kernel__start symbol that we use as the kernel
> > entrypoint is always PHYS_OFFSET + KERNEL_OFFSET, which doesn't take
> > into account any ARM/Thumb distinction in the AArch32 kernel image, and
> > hence we'll never try to set the Thumb bit in the SPSR.
>
> Is that true? I see the first_spin code path for CPU 0 using those values,
> which indeed never have bit 0 set, but the address could come from *mbox
> as well, given by the live kernel in the PSCI code path, and we don't have
> any control over that.
> Or do I miss anything here?
You're right, and PSCI explicitly describes that bit 0 in the entry
address results in the T bit being set in CPSR.
I will drop this patch, and adjust subsequent patches accordingly.
Mark.
> I think the patch is still valid, but we might need to relax the commit
> message here a bit?
>
> Cheers,
> Andre
>
> >
> > Remove the redundant code.
> >
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Akos Denke <akos.denke@arm.com>
> > Cc: Andre Przywara <andre.przywara@arm.com>
> > Cc: Luca Fancellu <luca.fancellu@arm.com>
> > Cc: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/aarch32/boot.S | 4 ----
> > arch/aarch64/boot.S | 7 -------
> > 2 files changed, 11 deletions(-)
> >
> > diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
> > index 4d16c9c..5c2a183 100644
> > --- a/arch/aarch32/boot.S
> > +++ b/arch/aarch32/boot.S
> > @@ -105,10 +105,6 @@ ASM_FUNC(jump_kernel)
> > bxeq lr @ no EL3
> >
> > ldr r4, =SPSR_KERNEL
> > - /* Return in thumb2 mode when bit 0 of address is 1 */
> > - tst lr, #1
> > - orrne r4, #PSR_T
> > -
> > msr spsr_cxf, r4
> > movs pc, lr
> >
> > diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> > index da5fa65..b889137 100644
> > --- a/arch/aarch64/boot.S
> > +++ b/arch/aarch64/boot.S
> > @@ -136,13 +136,6 @@ ASM_FUNC(jump_kernel)
> > br x19 // No EL3
> >
> > 1: mov x4, #SPSR_KERNEL
> > -
> > - /*
> > - * If bit 0 of the kernel address is set, we're entering in AArch32
> > - * thumb mode. Set SPSR.T accordingly.
> > - */
> > - bfi x4, x19, #5, #1
> > -
> > msr elr_el3, x19
> > msr spsr_el3, x4
> > eret
>
* Re: [BOOT-WRAPPER 11/11] Boot CPUs sequentially
2024-07-31 9:57 ` Luca Fancellu
@ 2024-08-03 10:57 ` Mark Rutland
0 siblings, 0 replies; 19+ messages in thread
From: Mark Rutland @ 2024-08-03 10:57 UTC (permalink / raw)
To: Luca Fancellu
Cc: linux-arm-kernel@lists.infradead.org, Akos Denke, Andre Przywara,
maz@kernel.org
On Wed, Jul 31, 2024 at 10:57:34AM +0100, Luca Fancellu wrote:
> Hi Mark,
>
> > diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
> > index 68c220b..81fe33a 100644
> > --- a/arch/aarch64/init.c
> > +++ b/arch/aarch64/init.c
> > @@ -6,6 +6,7 @@
> > * Use of this source code is governed by a BSD-style license that can be
> > * found in the LICENSE.txt file.
> > */
> > +#include <boot.h>
> > #include <cpu.h>
> > #include <gic.h>
> > #include <platform.h>
> > @@ -159,24 +160,28 @@ static void cpu_init_el3(void)
> > #ifdef PSCI
> > extern char psci_vectors[];
> >
> > -bool cpu_init_psci_arch(void)
> > +static void cpu_init_psci_arch(unsigned int cpu)
> > {
> > - if (mrs(CurrentEL) != CURRENTEL_EL3)
> > - return false;
> > + if (mrs(CurrentEL) != CURRENTEL_EL3) {
> > + print_cpu_warn(cpu, "PSCI could not be initialized (not booted at EL3).\r\n");
> > + return;
> > + }
> >
> > msr(VBAR_EL3, (unsigned long)psci_vectors);
> > isb();
> > -
> > - return true;
> > }
> > +#else
> > +static void cpu_init_psci_arch(unsigned int cpu);
>
> This needs {} otherwise I’m getting:
>
> | aarch64-poky-linux-ld: arch/aarch64/init.o: in function `cpu_init_arch’:
> | /usr/src/debug/boot-wrapper-aarch64/gitAUTOINC+cd7fe8a88e-r0/arch/aarch64/init.c:246: undefined reference to `cpu_init_psci_arch’
> | aarch64-poky-linux-ld: /usr/src/debug/boot-wrapper-aarch64/gitAUTOINC+cd7fe8a88e-r0/arch/aarch64/init.c:246: undefined reference to `cpu_init_psci_arch’
>
> When compiling without PSCI.
Whoops; I've fixed that up locally. Thanks for the report.
Mark.
2024-07-29 16:14 [BOOT-WRAPPER 00/11] Cleanup initialization Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 01/11] Always enter AArch32 kernels in ARM mode Mark Rutland
2024-08-02 11:26 ` Andre Przywara
2024-08-03 10:53 ` Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 02/11] aarch64: Remove redundant EL1 entry logic Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 03/11] aarch64: Implement cpu_init_arch() Mark Rutland
2024-08-02 9:29 ` Marc Zyngier
2024-08-02 9:38 ` Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 04/11] aarch64: Always enter kernel via exception return Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 05/11] aarch32: Refactor initial entry Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 06/11] aarch32: Implement cpu_init_arch() Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 07/11] aarch32: Always enter kernel via exception return Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 08/11] Unify assembly setup paths Mark Rutland
2024-07-29 16:14 ` [BOOT-WRAPPER 09/11] Simplify spin logic Mark Rutland
2024-07-29 16:15 ` [BOOT-WRAPPER 10/11] Add printing functions Mark Rutland
2024-07-29 16:15 ` [BOOT-WRAPPER 11/11] Boot CPUs sequentially Mark Rutland
2024-07-31 9:57 ` Luca Fancellu
2024-08-03 10:57 ` Mark Rutland
2024-08-02 10:18 ` [BOOT-WRAPPER 00/11] Cleanup initialization Marc Zyngier