linux-pm.vger.kernel.org archive mirror
* [PATCH v3 0/5] initialize SCTLR2_ELx
@ 2025-08-13 12:01 Yeoreum Yun
  2025-08-13 12:01 ` [PATCH v3 1/5] arm64: make SCTLR2_EL1 accessible Yeoreum Yun
                   ` (5 more replies)
  0 siblings, 6 replies; 19+ messages in thread
From: Yeoreum Yun @ 2025-08-13 12:01 UTC (permalink / raw)
  To: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, Dave.Martin,
	ahmed.genidi, kevin.brodsky, scott, mbenes, james.clark, frederic,
	rafael, pavel, ryan.roberts, suzuki.poulose
  Cc: linux-arm-kernel, linux-kernel, linux-pm, kvmarm, Yeoreum Yun

This series introduces initial support for the SCTLR2_ELx registers in Linux.
The feature is optional starting from ARMv8.8/ARMv9.3,
and becomes mandatory from ARMv8.9/ARMv9.4.

Currently, Linux has no strict need to modify SCTLR2_ELx,
at least as long as firmware initializes
these registers to reasonable defaults.

However, several upcoming architectural features will require configuring
control bits in these registers.
Notable examples include FEAT_PAuth_LR and FEAT_CPA2.

Patch History
==============
from v2 to v3:
  - rewrite commit messages.
  - fix missing SCTLR2_EL2 synchronisation at boot.
  - https://lore.kernel.org/all/20250811163340.1561893-1-yeoreum.yun@arm.com/

from v1 to v2:
  - rebase to v6.17-rc1
  - https://lore.kernel.org/all/20250804121724.3681531-1-yeoreum.yun@arm.com/

Yeoreum Yun (5):
  arm64: make SCTLR2_EL1 accessible
  arm64: initialise SCTLR2_ELx register at boot time
  arm64: save/restore SCTLR2_EL1 when cpu_suspend()/resume()
  arm64: initialise SCTLR2_EL1 at cpu_soft_restart()
  arm64: make the per-task SCTLR2_EL1

 arch/arm64/include/asm/assembler.h   | 22 ++++++++++++++++++++++
 arch/arm64/include/asm/el2_setup.h   | 14 +++++++++++++-
 arch/arm64/include/asm/processor.h   |  5 +++++
 arch/arm64/include/asm/suspend.h     |  2 +-
 arch/arm64/include/asm/sysreg.h      |  5 +++++
 arch/arm64/kernel/cpu-reset.S        |  6 ++++++
 arch/arm64/kernel/head.S             |  5 +++++
 arch/arm64/kernel/hyp-stub.S         | 10 ++++++++++
 arch/arm64/kernel/process.c          |  9 +++++++++
 arch/arm64/kvm/hyp/nvhe/psci-relay.c |  3 +++
 arch/arm64/mm/proc.S                 | 26 ++++++++++++++++++--------
 11 files changed, 97 insertions(+), 10 deletions(-)


base-commit: 8f5ae30d69d7543eee0d70083daf4de8fe15d585
--



* [PATCH v3 1/5] arm64: make SCTLR2_EL1 accessible
  2025-08-13 12:01 [PATCH v3 0/5] initialize SCTLR2_ELx Yeoreum Yun
@ 2025-08-13 12:01 ` Yeoreum Yun
  2025-08-20 15:10   ` Dave Martin
  2025-08-13 12:01 ` [PATCH v3 2/5] arm64: initialise SCTLR2_ELx register at boot time Yeoreum Yun
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 19+ messages in thread
From: Yeoreum Yun @ 2025-08-13 12:01 UTC (permalink / raw)
  To: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, Dave.Martin,
	ahmed.genidi, kevin.brodsky, scott, mbenes, james.clark, frederic,
	rafael, pavel, ryan.roberts, suzuki.poulose
  Cc: linux-arm-kernel, linux-kernel, linux-pm, kvmarm, Yeoreum Yun,
	Marc Zyngier

When the kernel runs at EL1, and yet is booted at EL2,
HCRX_EL2.SCTLR2En must be set to avoid trapping SCTLR2_EL1 accesses
from EL1 to EL2.

Ensure this bit is set at the point of initialising EL2.

Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/el2_setup.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index 46033027510c..d755b4d46d77 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -57,9 +57,15 @@
         /* Enable GCS if supported */
 	mrs_s	x1, SYS_ID_AA64PFR1_EL1
 	ubfx	x1, x1, #ID_AA64PFR1_EL1_GCS_SHIFT, #4
-	cbz	x1, .Lset_hcrx_\@
+	cbz	x1, .Lskip_hcrx_GCSEn_\@
 	orr	x0, x0, #HCRX_EL2_GCSEn
 
+.Lskip_hcrx_GCSEn_\@:
+	mrs_s	x1, SYS_ID_AA64MMFR3_EL1
+	ubfx	x1, x1, #ID_AA64MMFR3_EL1_SCTLRX_SHIFT, #4
+	cbz	x1, .Lset_hcrx_\@
+	orr	x0, x0, HCRX_EL2_SCTLR2En
+
 .Lset_hcrx_\@:
 	msr_s	SYS_HCRX_EL2, x0
 .Lskip_hcrx_\@:
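For readers less familiar with this gating pattern: ubfx extracts the
4-bit SCTLRX field from ID_AA64MMFR3_EL1, and a nonzero value means
SCTLR2 is implemented, so the enable bit is OR-ed into the HCRX_EL2
image before it is written. A rough C sketch of the same logic (the
shift and bit positions below are illustrative placeholders, not the
architectural values):

```c
#include <stdint.h>

/* Illustrative placeholders -- not the architectural bit positions. */
#define ID_AA64MMFR3_EL1_SCTLRX_SHIFT	4
#define HCRX_EL2_SCTLR2En		(1ULL << 6)

/* Models: ubfx tmp, id, #SHIFT, #4 ; cbz tmp, skip ; orr x0, x0, #bit */
static uint64_t maybe_set_sctlr2en(uint64_t hcrx, uint64_t id_aa64mmfr3)
{
	uint64_t sctlrx = (id_aa64mmfr3 >> ID_AA64MMFR3_EL1_SCTLRX_SHIFT) & 0xf;

	if (sctlrx != 0)	/* any nonzero field value: feature present */
		hcrx |= HCRX_EL2_SCTLR2En;
	return hcrx;
}
```

As in the assembly, a zero field leaves the HCRX_EL2 image untouched and
the trap stays enabled.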
-- 



* [PATCH v3 2/5] arm64: initialise SCTLR2_ELx register at boot time
  2025-08-13 12:01 [PATCH v3 0/5] initialize SCTLR2_ELx Yeoreum Yun
  2025-08-13 12:01 ` [PATCH v3 1/5] arm64: make SCTLR2_EL1 accessible Yeoreum Yun
@ 2025-08-13 12:01 ` Yeoreum Yun
  2025-08-20 15:10   ` Dave Martin
  2025-08-13 12:01 ` [PATCH v3 3/5] arm64: save/restore SCTLR2_EL1 when cpu_suspend()/resume() Yeoreum Yun
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 19+ messages in thread
From: Yeoreum Yun @ 2025-08-13 12:01 UTC (permalink / raw)
  To: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, Dave.Martin,
	ahmed.genidi, kevin.brodsky, scott, mbenes, james.clark, frederic,
	rafael, pavel, ryan.roberts, suzuki.poulose
  Cc: linux-arm-kernel, linux-kernel, linux-pm, kvmarm, Yeoreum Yun

The SCTLR2_ELx registers are optional starting from ARMv8.8/ARMv9.3,
become mandatory from ARMv8.9/ARMv9.4, and several architectural
features are controlled by bits in these registers.

Their value is UNKNOWN out of reset. No initialisation would be needed
if firmware always set these registers up properly, but in case it does
not, initialise the SCTLR2_ELx registers when a CPU/vCPU boots so that
an improper SCTLR2_ELx value cannot cause unexpected system behaviour.

Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
 arch/arm64/include/asm/assembler.h   | 22 ++++++++++++++++++++++
 arch/arm64/include/asm/el2_setup.h   |  6 ++++++
 arch/arm64/include/asm/sysreg.h      |  5 +++++
 arch/arm64/kernel/head.S             |  5 +++++
 arch/arm64/kernel/hyp-stub.S         | 10 ++++++++++
 arch/arm64/kvm/hyp/nvhe/psci-relay.c |  3 +++
 6 files changed, 51 insertions(+)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 23be85d93348..eef169c105f0 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -738,6 +738,28 @@ alternative_endif
 	set_sctlr sctlr_el2, \reg
 .endm

+	/*
+	 * Set SCTLR2_ELx to the @reg value.
+	 */
+	.macro __set_sctlr2_elx, el, reg, tmp
+	mrs_s	\tmp, SYS_ID_AA64MMFR3_EL1
+	ubfx	\tmp, \tmp, #ID_AA64MMFR3_EL1_SCTLRX_SHIFT, #4
+	cbz	\tmp, .Lskip_sctlr2_\@
+	.if	\el == 2
+	msr_s	SYS_SCTLR_EL2, \reg
+	.elseif	\el == 12
+	msr_s	SYS_SCTLR_EL12, \reg
+	.else
+	msr_s	SYS_SCTLR_EL1, \reg
+	.endif
+.Lskip_sctlr2_\@:
+	.endm
+
+	.macro set_sctlr2_elx, el, reg, tmp
+	__set_sctlr2_elx	\el, \reg, \tmp
+	isb
+	.endm
+
 	/*
 	 * Check whether asm code should yield as soon as it is able. This is
 	 * the case if we are currently running in task context, and the
diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index d755b4d46d77..c03cabd45fcf 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -48,6 +48,11 @@
 	isb
 .endm

+.macro __init_sctlr2_el2
+	mov_q	x0, INIT_SCTLR2_EL2
+	set_sctlr2_elx	2, x0, x1
+.endm
+
 .macro __init_el2_hcrx
 	mrs	x0, id_aa64mmfr1_el1
 	ubfx	x0, x0, #ID_AA64MMFR1_EL1_HCX_SHIFT, #4
@@ -411,6 +416,7 @@
  */
 .macro init_el2_state
 	__init_el2_sctlr
+	__init_sctlr2_el2
 	__init_el2_hcrx
 	__init_el2_timers
 	__init_el2_debug
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index d5b5f2ae1afa..0431b357b87b 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -868,6 +868,8 @@
 #define INIT_SCTLR_EL2_MMU_OFF \
 	(SCTLR_EL2_RES1 | ENDIAN_SET_EL2)

+#define INIT_SCTLR2_EL2			UL(0)
+
 /* SCTLR_EL1 specific flags. */
 #ifdef CONFIG_CPU_BIG_ENDIAN
 #define ENDIAN_SET_EL1		(SCTLR_EL1_E0E | SCTLR_ELx_EE)
@@ -888,6 +890,8 @@
 	 SCTLR_EL1_LSMAOE | SCTLR_EL1_nTLSMD | SCTLR_EL1_EIS   | \
 	 SCTLR_EL1_TSCXT  | SCTLR_EL1_EOS)

+#define INIT_SCTLR2_EL1			UL(0)
+
 /* MAIR_ELx memory attributes (used by Linux) */
 #define MAIR_ATTR_DEVICE_nGnRnE		UL(0x00)
 #define MAIR_ATTR_DEVICE_nGnRE		UL(0x04)
@@ -1164,6 +1168,7 @@
 	msr	hcr_el2, \reg
 #endif
 	.endm
+
 #else

 #include <linux/bitfield.h>
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index ca04b338cb0d..c41015675eae 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -276,6 +276,8 @@ SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
 	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
 	pre_disable_mmu_workaround
 	msr	sctlr_el1, x0
+	mov_q	x0, INIT_SCTLR2_EL1
+	__set_sctlr2_elx	1, x0, x1
 	isb
 	mov_q	x0, INIT_PSTATE_EL1
 	msr	spsr_el1, x0
@@ -308,6 +310,7 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
 	isb

 	mov_q	x1, INIT_SCTLR_EL1_MMU_OFF
+	mov_q	x2, INIT_SCTLR2_EL1

 	mrs	x0, hcr_el2
 	and	x0, x0, #HCR_E2H
@@ -315,11 +318,13 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)

 	/* Set a sane SCTLR_EL1, the VHE way */
 	msr_s	SYS_SCTLR_EL12, x1
+	__set_sctlr2_elx	12, x2, x0
 	mov	x2, #BOOT_CPU_FLAG_E2H
 	b	3f

 2:
 	msr	sctlr_el1, x1
+	__set_sctlr2_elx	1, x2, x0
 	mov	x2, xzr
 3:
 	mov	x0, #INIT_PSTATE_EL1
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index 36e2d26b54f5..ac12f1b4f8e2 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -144,7 +144,17 @@ SYM_CODE_START_LOCAL(__finalise_el2)

 .Lskip_indirection:
 .Lskip_tcr2:
+	mrs_s	x1, SYS_ID_AA64MMFR3_EL1
+	ubfx	x1, x1, #ID_AA64MMFR3_EL1_SCTLRX_SHIFT, #4
+	cbz	x1, .Lskip_sctlr2
+	mrs_s	x1, SYS_SCTLR2_EL12
+	msr_s	SYS_SCTLR2_EL1, x1

+	// clean SCTLR2_EL1
+	mov_q	x1, INIT_SCTLR2_EL1
+	msr_s	SYS_SCTLR2_EL12, x1
+
+.Lskip_sctlr2:
 	isb

 	// Hack the exception return to stay at EL2
diff --git a/arch/arm64/kvm/hyp/nvhe/psci-relay.c b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
index c3e196fb8b18..4ed4b7fa57c2 100644
--- a/arch/arm64/kvm/hyp/nvhe/psci-relay.c
+++ b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
@@ -4,6 +4,7 @@
  * Author: David Brazdil <dbrazdil@google.com>
  */

+#include <asm/alternative.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
@@ -219,6 +220,8 @@ asmlinkage void __noreturn __kvm_host_psci_cpu_entry(bool is_cpu_on)
 		release_boot_args(boot_args);

 	write_sysreg_el1(INIT_SCTLR_EL1_MMU_OFF, SYS_SCTLR);
+	if (alternative_has_cap_unlikely(ARM64_HAS_SCTLR2))
+		write_sysreg_el1(INIT_SCTLR2_EL1, SYS_SCTLR2);
 	write_sysreg(INIT_PSTATE_EL1, SPSR_EL2);

 	__host_enter(host_ctxt);
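The C hunk above guards the SCTLR2_EL1 reset behind the CPU capability.
A minimal userspace sketch of that guard, with the system register and
the alternative-patched capability check modelled by plain variables
(the mock_* names are stand-ins, not kernel APIs):

```c
#include <stdbool.h>
#include <stdint.h>

#define INIT_SCTLR2_EL1 0ULL

/* Models the register's UNKNOWN value out of reset. */
static uint64_t mock_sctlr2_el1 = 0xdeadbeefULL;

/* Mirrors the guarded write in __kvm_host_psci_cpu_entry(). */
static void psci_cpu_entry_init(bool has_sctlr2)
{
	/* has_sctlr2 models alternative_has_cap_unlikely(ARM64_HAS_SCTLR2) */
	if (has_sctlr2)
		mock_sctlr2_el1 = INIT_SCTLR2_EL1;
}
```

Without the capability the register is never touched, so no undefined
access can occur on CPUs that lack SCTLR2.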
--



* [PATCH v3 3/5] arm64: save/restore SCTLR2_EL1 when cpu_suspend()/resume()
  2025-08-13 12:01 [PATCH v3 0/5] initialize SCTLR2_ELx Yeoreum Yun
  2025-08-13 12:01 ` [PATCH v3 1/5] arm64: make SCTLR2_EL1 accessible Yeoreum Yun
  2025-08-13 12:01 ` [PATCH v3 2/5] arm64: initialise SCTLR2_ELx register at boot time Yeoreum Yun
@ 2025-08-13 12:01 ` Yeoreum Yun
  2025-08-20 15:11   ` Dave Martin
  2025-08-13 12:01 ` [PATCH v3 4/5] arm64: initialise SCTLR2_EL1 at cpu_soft_restart() Yeoreum Yun
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 19+ messages in thread
From: Yeoreum Yun @ 2025-08-13 12:01 UTC (permalink / raw)
  To: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, Dave.Martin,
	ahmed.genidi, kevin.brodsky, scott, mbenes, james.clark, frederic,
	rafael, pavel, ryan.roberts, suzuki.poulose
  Cc: linux-arm-kernel, linux-kernel, linux-pm, kvmarm, Yeoreum Yun

The SCTLR2_EL1 register is optional starting from ARMv8.8/ARMv9.3,
becomes mandatory from ARMv8.9/ARMv9.4, and several architectural
features are controlled by bits in this register, e.g. FEAT_PAuth_LR
and FEAT_CPA/CPA2.

Save and restore SCTLR2_EL1 in cpu_suspend()/resume() so that the
configured value stays consistent across suspend and resume.

Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
 arch/arm64/include/asm/suspend.h |  2 +-
 arch/arm64/mm/proc.S             | 26 ++++++++++++++++++--------
 2 files changed, 19 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/suspend.h b/arch/arm64/include/asm/suspend.h
index 0cde2f473971..eb60c9735553 100644
--- a/arch/arm64/include/asm/suspend.h
+++ b/arch/arm64/include/asm/suspend.h
@@ -2,7 +2,7 @@
 #ifndef __ASM_SUSPEND_H
 #define __ASM_SUSPEND_H
 
-#define NR_CTX_REGS 13
+#define NR_CTX_REGS 14
 #define NR_CALLEE_SAVED_REGS 12
 
 /*
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 8c75965afc9e..f297bea7103b 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -87,8 +87,14 @@ SYM_FUNC_START(cpu_do_suspend)
 	mrs	x9, mdscr_el1
 	mrs	x10, oslsr_el1
 	mrs	x11, sctlr_el1
-	get_this_cpu_offset x12
-	mrs	x13, sp_el0
+alternative_if_not ARM64_HAS_SCTLR2
+	mov	x12, xzr
+alternative_else
+	mrs_s	x12, SYS_SCTLR2_EL1
+alternative_endif
+	get_this_cpu_offset x13
+	mrs	x14, sp_el0
+
 	stp	x2, x3, [x0]
 	stp	x4, x5, [x0, #16]
 	stp	x6, x7, [x0, #32]
@@ -99,7 +105,7 @@ SYM_FUNC_START(cpu_do_suspend)
 	 * Save x18 as it may be used as a platform register, e.g. by shadow
 	 * call stack.
 	 */
-	str	x18, [x0, #96]
+	stp	x14, x18, [x0, #96]
 	ret
 SYM_FUNC_END(cpu_do_suspend)
 
@@ -120,8 +126,8 @@ SYM_FUNC_START(cpu_do_resume)
 	 * the buffer to minimize the risk of exposure when used for shadow
 	 * call stack.
 	 */
-	ldr	x18, [x0, #96]
-	str	xzr, [x0, #96]
+	ldp	x15, x18, [x0, #96]
+	str	xzr, [x0, #104]
 	msr	tpidr_el0, x2
 	msr	tpidrro_el0, x3
 	msr	contextidr_el1, x4
@@ -136,8 +142,12 @@ SYM_FUNC_START(cpu_do_resume)
 	msr	mdscr_el1, x10
 
 	msr	sctlr_el1, x12
-	set_this_cpu_offset x13
-	msr	sp_el0, x14
+alternative_if ARM64_HAS_SCTLR2
+	msr_s	SYS_SCTLR2_EL1, x13
+alternative_else_nop_endif
+
+	set_this_cpu_offset x14
+	msr	sp_el0, x15
 	/*
 	 * Restore oslsr_el1 by writing oslar_el1
 	 */
@@ -151,7 +161,7 @@ alternative_if ARM64_HAS_RAS_EXTN
 	msr_s	SYS_DISR_EL1, xzr
 alternative_else_nop_endif
 
-	ptrauth_keys_install_kernel_nosync x14, x1, x2, x3
+	ptrauth_keys_install_kernel_nosync x15, x1, x2, x3
 	isb
 	ret
 SYM_FUNC_END(cpu_do_resume)
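To see why NR_CTX_REGS grows from 13 to 14, it can help to model the
suspend context as an array of 64-bit slots: the stp/ldp immediates in
the hunks above are byte offsets into this buffer, and the new pair at
offset #96 holds sp_el0 and x18. A sketch (the slot assignment is
inferred from this patch, so treat it as illustrative):

```c
#include <stddef.h>
#include <stdint.h>

#define NR_CTX_REGS 14	/* was 13 before SCTLR2_EL1 joined the context */

/* One 64-bit slot per saved register; regs[12] sits at byte offset 96,
 * matching "stp x14, x18, [x0, #96]" in cpu_do_suspend(). */
struct sleep_stack_ctx {
	uint64_t regs[NR_CTX_REGS];
};
```

The resume path then clears only the second slot of that pair
(offset #104), since x18 may back the shadow call stack.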
-- 



* [PATCH v3 4/5] arm64: initialise SCTLR2_EL1 at cpu_soft_restart()
  2025-08-13 12:01 [PATCH v3 0/5] initialize SCTLR2_ELx Yeoreum Yun
                   ` (2 preceding siblings ...)
  2025-08-13 12:01 ` [PATCH v3 3/5] arm64: save/restore SCTLR2_EL1 when cpu_suspend()/resume() Yeoreum Yun
@ 2025-08-13 12:01 ` Yeoreum Yun
  2025-08-20 15:11   ` Dave Martin
  2025-08-13 12:01 ` [PATCH v3 5/5] arm64: make the per-task SCTLR2_EL1 Yeoreum Yun
  2025-08-20 15:11 ` [PATCH v3 0/5] initialize SCTLR2_ELx Dave Martin
  5 siblings, 1 reply; 19+ messages in thread
From: Yeoreum Yun @ 2025-08-13 12:01 UTC (permalink / raw)
  To: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, Dave.Martin,
	ahmed.genidi, kevin.brodsky, scott, mbenes, james.clark, frederic,
	rafael, pavel, ryan.roberts, suzuki.poulose
  Cc: linux-arm-kernel, linux-kernel, linux-pm, kvmarm, Yeoreum Yun

The SCTLR2_EL1 register is optional starting from ARMv8.8/ARMv9.3,
becomes mandatory from ARMv8.9/ARMv9.4, and several architectural
features are controlled by bits in this register.

Before launching a new kernel via kexec, initialise SCTLR2_EL1
explicitly.

Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
 arch/arm64/kernel/cpu-reset.S | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S
index c87445dde674..123564af345b 100644
--- a/arch/arm64/kernel/cpu-reset.S
+++ b/arch/arm64/kernel/cpu-reset.S
@@ -37,6 +37,12 @@ SYM_TYPED_FUNC_START(cpu_soft_restart)
 	 * regime if HCR_EL2.E2H == 1
 	 */
 	msr	sctlr_el1, x12
+
+alternative_if ARM64_HAS_SCTLR2
+	mov_q	x12, INIT_SCTLR2_EL1
+	msr_s	SYS_SCTLR2_EL1, x12
+alternative_else_nop_endif
+
 	isb
 
 	cbz	x0, 1f				// el2_switch?
-- 



* [PATCH v3 5/5] arm64: make the per-task SCTLR2_EL1
  2025-08-13 12:01 [PATCH v3 0/5] initialize SCTLR2_ELx Yeoreum Yun
                   ` (3 preceding siblings ...)
  2025-08-13 12:01 ` [PATCH v3 4/5] arm64: initialise SCTLR2_EL1 at cpu_soft_restart() Yeoreum Yun
@ 2025-08-13 12:01 ` Yeoreum Yun
  2025-08-20 15:11   ` Dave Martin
  2025-08-20 15:11 ` [PATCH v3 0/5] initialize SCTLR2_ELx Dave Martin
  5 siblings, 1 reply; 19+ messages in thread
From: Yeoreum Yun @ 2025-08-13 12:01 UTC (permalink / raw)
  To: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, Dave.Martin,
	ahmed.genidi, kevin.brodsky, scott, mbenes, james.clark, frederic,
	rafael, pavel, ryan.roberts, suzuki.poulose
  Cc: linux-arm-kernel, linux-kernel, linux-pm, kvmarm, Yeoreum Yun

The SCTLR2_EL1 register is optional starting from ARMv8.8/ARMv9.3,
becomes mandatory from ARMv8.9/ARMv9.4, and several architectural
features are controlled by bits in this register. Some of these bits
may need to be configured per task rather than globally, e.g. the
FEAT_CPA2 related fields.

For future use of these fields, make SCTLR2_EL1 per-task.

Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
 arch/arm64/include/asm/processor.h | 5 +++++
 arch/arm64/kernel/process.c        | 9 +++++++++
 2 files changed, 14 insertions(+)

diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 61d62bfd5a7b..2c962816de70 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -184,6 +184,7 @@ struct thread_struct {
 	u64			mte_ctrl;
 #endif
 	u64			sctlr_user;
+	u64			sctlr2_user;
 	u64			svcr;
 	u64			tpidr2_el0;
 	u64			por_el0;
@@ -258,6 +259,9 @@ static inline void task_set_sve_vl_onexec(struct task_struct *task,
 	(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | SCTLR_ELx_ENDA | SCTLR_ELx_ENDB |   \
 	 SCTLR_EL1_TCF0_MASK)
 
+#define SCTLR2_USER_MASK	\
+	(SCTLR2_EL1_EnPACM0 | SCTLR2_EL1_CPTA0 | SCTLR2_EL1_CPTM0)
+
 static inline void arch_thread_struct_whitelist(unsigned long *offset,
 						unsigned long *size)
 {
@@ -370,6 +374,7 @@ struct task_struct;
 unsigned long __get_wchan(struct task_struct *p);
 
 void update_sctlr_el1(u64 sctlr);
+void update_sctlr2_el1(u64 sctlr2);
 
 /* Thread switching */
 extern struct task_struct *cpu_switch_to(struct task_struct *prev,
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 96482a1412c6..9191180c4875 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -698,6 +698,11 @@ void update_sctlr_el1(u64 sctlr)
 	isb();
 }
 
+void update_sctlr2_el1(u64 sctlr2)
+{
+	sysreg_clear_set_s(SYS_SCTLR2_EL1, SCTLR2_USER_MASK, sctlr2);
+}
+
 /*
  * Thread switching.
  */
@@ -737,6 +742,10 @@ struct task_struct *__switch_to(struct task_struct *prev,
 	if (prev->thread.sctlr_user != next->thread.sctlr_user)
 		update_sctlr_el1(next->thread.sctlr_user);
 
+	if (alternative_has_cap_unlikely(ARM64_HAS_SCTLR2) &&
+	    prev->thread.sctlr2_user != next->thread.sctlr2_user)
+		update_sctlr2_el1(next->thread.sctlr2_user);
+
 	/* the actual thread switch */
 	last = cpu_switch_to(prev, next);
 
-- 
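The sysreg_clear_set_s() call in this patch follows the usual
clear-then-set pattern: the user-controllable bits are cleared first,
then the next task's value is OR-ed in, leaving all other bits of the
register alone. A hedged C model (the mask value below is a
placeholder, not the real SCTLR2_EL1 bit layout):

```c
#include <stdint.h>

/* Placeholder for SCTLR2_EL1_EnPACM0 | SCTLR2_EL1_CPTA0 | SCTLR2_EL1_CPTM0. */
#define SCTLR2_USER_MASK 0x7ULL

static uint64_t mock_sctlr2_el1;	/* models the hardware register */

/* Models sysreg_clear_set_s(SYS_SCTLR2_EL1, SCTLR2_USER_MASK, sctlr2):
 * clear the per-task bits, then apply the incoming task's value. */
static void mock_update_sctlr2_el1(uint64_t sctlr2)
{
	mock_sctlr2_el1 = (mock_sctlr2_el1 & ~SCTLR2_USER_MASK) | sctlr2;
}
```

Bits outside the mask (e.g. anything owned globally by the kernel)
survive a task switch unchanged.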



* Re: [PATCH v3 1/5] arm64: make SCTLR2_EL1 accessible
  2025-08-13 12:01 ` [PATCH v3 1/5] arm64: make SCTLR2_EL1 accessible Yeoreum Yun
@ 2025-08-20 15:10   ` Dave Martin
  2025-08-20 18:51     ` Yeoreum Yun
  0 siblings, 1 reply; 19+ messages in thread
From: Dave Martin @ 2025-08-20 15:10 UTC (permalink / raw)
  To: Yeoreum Yun
  Cc: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, ahmed.genidi,
	kevin.brodsky, scott, mbenes, james.clark, frederic, rafael,
	pavel, ryan.roberts, suzuki.poulose, linux-arm-kernel,
	linux-kernel, linux-pm, kvmarm, Marc Zyngier

Hi,

On Wed, Aug 13, 2025 at 01:01:14PM +0100, Yeoreum Yun wrote:
> When the kernel runs at EL1, and yet is booted at EL2,
> HCRX_EL2.SCTLR2En must be set to avoid trapping SCTLR2_EL1 accesses
> from EL1 to EL2.
> 
> Ensure this bit is set at the point of initialising EL2.
> 
> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> Reviewed-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/el2_setup.h | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
> index 46033027510c..d755b4d46d77 100644
> --- a/arch/arm64/include/asm/el2_setup.h
> +++ b/arch/arm64/include/asm/el2_setup.h
> @@ -57,9 +57,15 @@
>          /* Enable GCS if supported */
>  	mrs_s	x1, SYS_ID_AA64PFR1_EL1
>  	ubfx	x1, x1, #ID_AA64PFR1_EL1_GCS_SHIFT, #4
> -	cbz	x1, .Lset_hcrx_\@
> +	cbz	x1, .Lskip_hcrx_GCSEn_\@
>  	orr	x0, x0, #HCRX_EL2_GCSEn
>  
> +.Lskip_hcrx_GCSEn_\@:
> +	mrs_s	x1, SYS_ID_AA64MMFR3_EL1
> +	ubfx	x1, x1, #ID_AA64MMFR3_EL1_SCTLRX_SHIFT, #4
> +	cbz	x1, .Lset_hcrx_\@
> +	orr	x0, x0, HCRX_EL2_SCTLR2En

Nit: prefix immediate operands with # please -- see usage elsewhere in
this file.

(This comes from the legacy AArch32 syntax and has never been required
by AArch64 assemblers, but it has become a tradition in the Linux arch
code...)

The only exceptions to this rule are macros (mov_q, mrs_s etc. --
frequently they have an underscore in the name; "real" instructions
never do.)

> +
>  .Lset_hcrx_\@:

Maybe rename this label to .Lskip_hcrx_SCTLR2En_\@, so that people
don't have to keep renaming an existing label whenever they add
another block here.

>  	msr_s	SYS_HCRX_EL2, x0
>  .Lskip_hcrx_\@:

[...]

Cheers
---Dave


* Re: [PATCH v3 2/5] arm64: initialise SCTLR2_ELx register at boot time
  2025-08-13 12:01 ` [PATCH v3 2/5] arm64: initialise SCTLR2_ELx register at boot time Yeoreum Yun
@ 2025-08-20 15:10   ` Dave Martin
  2025-08-20 17:18     ` Yeoreum Yun
  0 siblings, 1 reply; 19+ messages in thread
From: Dave Martin @ 2025-08-20 15:10 UTC (permalink / raw)
  To: Yeoreum Yun
  Cc: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, ahmed.genidi,
	kevin.brodsky, scott, mbenes, james.clark, frederic, rafael,
	pavel, ryan.roberts, suzuki.poulose, linux-arm-kernel,
	linux-kernel, linux-pm, kvmarm

Hi,

On Wed, Aug 13, 2025 at 01:01:15PM +0100, Yeoreum Yun wrote:
> SCTLR2_ELx register is optional starting from ARMv8.8/ARMv9.3,
> and becomes mandatory from ARMv8.9/ARMv9.4
> and serveral architectural feature are controled by bits in
> these registers.
> 
> These register's value is UNKNOWN when it was reset
> It wasn't need to be initialised if firmware initilises these registers
> properly.
> But for the case not initialised properly, initialise SCTLR2_ELx
> registers at bootin cpu/vcpu so that
> unexpected system behavior couldn't happen for
> improper SCTLR2_ELx value.
> 
> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> ---

Please note significant changes under the tearoff line or in the cover
letter.

(Merging the __kvm_host_psci_cpu_entry() changes into this patch is a
significant change.)

>  arch/arm64/include/asm/assembler.h   | 22 ++++++++++++++++++++++
>  arch/arm64/include/asm/el2_setup.h   |  6 ++++++
>  arch/arm64/include/asm/sysreg.h      |  5 +++++
>  arch/arm64/kernel/head.S             |  5 +++++
>  arch/arm64/kernel/hyp-stub.S         | 10 ++++++++++
>  arch/arm64/kvm/hyp/nvhe/psci-relay.c |  3 +++
>  6 files changed, 51 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 23be85d93348..eef169c105f0 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -738,6 +738,28 @@ alternative_endif
>  	set_sctlr sctlr_el2, \reg
>  .endm
> 
> +	/*
> +	 * Set SCTLR2_ELx to the @reg value.
> +	 */

One-line comment preferred:

	/* Set SCTLR2_ELx to the @reg value. */

(when the comment is short enough to fit).


Nit: can you follow the same indentation pattern as the "set_sctlr"
macros that appear above?

(I know, cond_yield etc. do things differently.  But the sctlr
accessors feel like they should belong together and look similar (as
far as possible).)


> +	.macro __set_sctlr2_elx, el, reg, tmp
> +	mrs_s	\tmp, SYS_ID_AA64MMFR3_EL1
> +	ubfx	\tmp, \tmp, #ID_AA64MMFR3_EL1_SCTLRX_SHIFT, #4
> +	cbz	\tmp, .Lskip_sctlr2_\@
> +	.if	\el == 2
> +	msr_s	SYS_SCTLR_EL2, \reg

Er, should this be SYS_SCTLR2_EL2 ?!

To make the code more readable at the call site, would it make sense
to have:

	msr_s	SYS_SCTLR2_\el, \reg

...?

This would mean that you can get rid of the lengthy .if...else, and the
assembler will report an error if the el argument is junk or missing.
(The argument would need to have an explicit EL prefix at the call
site.)

(At the moment, the code silently uses SCTLR(2)_EL1 when the el
argument is junk, which is not ideal.  Where it is easy to detect
stupid errors by the caller at build time, we should try to do it.)

> +	.elseif	\el == 12
> +	msr_s	SYS_SCTLR_EL12, \reg
> +	.else
> +	msr_s	SYS_SCTLR_EL1, \reg
> +	.endif
> +.Lskip_sctlr2_\@:
> +	.endm
> +
> +	.macro set_sctlr2_elx, el, reg, tmp
> +	__set_sctlr2_elx	\el, \reg, \tmp
> +	isb
> +	.endm
> +

Maybe we don't need two macros here?

__set_sctlr2_elx seems only to be used in one place.

The isb is needed when setting SCTLR2_ELx for the current EL, but 
the code at the call site knows whether this is the case.  It is
hard for the macro to know, and it feels overcomplicated to
explicitly check CurrentEL here.

I think it may be better to open-code the isb in the places where it
matters, including inside the __init_sctlr2_el2 macro.  (Compare with
the other __init_foo macros here.)

>  	/*
>  	 * Check whether asm code should yield as soon as it is able. This is
>  	 * the case if we are currently running in task context, and the
> diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
> index d755b4d46d77..c03cabd45fcf 100644
> --- a/arch/arm64/include/asm/el2_setup.h
> +++ b/arch/arm64/include/asm/el2_setup.h
> @@ -48,6 +48,11 @@
>  	isb
>  .endm
> 
> +.macro __init_sctlr2_el2
> +	mov_q	x0, INIT_SCTLR2_EL2
> +	set_sctlr2_elx	2, x0, x1
> +.endm
> +
>  .macro __init_el2_hcrx
>  	mrs	x0, id_aa64mmfr1_el1
>  	ubfx	x0, x0, #ID_AA64MMFR1_EL1_HCX_SHIFT, #4
> @@ -411,6 +416,7 @@
>   */
>  .macro init_el2_state
>  	__init_el2_sctlr
> +	__init_sctlr2_el2
>  	__init_el2_hcrx
>  	__init_el2_timers
>  	__init_el2_debug
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index d5b5f2ae1afa..0431b357b87b 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -868,6 +868,8 @@
>  #define INIT_SCTLR_EL2_MMU_OFF \
>  	(SCTLR_EL2_RES1 | ENDIAN_SET_EL2)
> 
> +#define INIT_SCTLR2_EL2			UL(0)
> +
>  /* SCTLR_EL1 specific flags. */
>  #ifdef CONFIG_CPU_BIG_ENDIAN
>  #define ENDIAN_SET_EL1		(SCTLR_EL1_E0E | SCTLR_ELx_EE)
> @@ -888,6 +890,8 @@
>  	 SCTLR_EL1_LSMAOE | SCTLR_EL1_nTLSMD | SCTLR_EL1_EIS   | \
>  	 SCTLR_EL1_TSCXT  | SCTLR_EL1_EOS)
> 
> +#define INIT_SCTLR2_EL1			UL(0)
> +
>  /* MAIR_ELx memory attributes (used by Linux) */
>  #define MAIR_ATTR_DEVICE_nGnRnE		UL(0x00)
>  #define MAIR_ATTR_DEVICE_nGnRE		UL(0x04)
> @@ -1164,6 +1168,7 @@
>  	msr	hcr_el2, \reg
>  #endif
>  	.endm
> +
>  #else
> 
>  #include <linux/bitfield.h>
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index ca04b338cb0d..c41015675eae 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -276,6 +276,8 @@ SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
>  	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
>  	pre_disable_mmu_workaround
>  	msr	sctlr_el1, x0
> +	mov_q	x0, INIT_SCTLR2_EL1
> +	__set_sctlr2_elx	1, x0, x1
>  	isb
>  	mov_q	x0, INIT_PSTATE_EL1
>  	msr	spsr_el1, x0
> @@ -308,6 +310,7 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
>  	isb
> 
>  	mov_q	x1, INIT_SCTLR_EL1_MMU_OFF
> +	mov_q	x2, INIT_SCTLR2_EL1
> 
>  	mrs	x0, hcr_el2
>  	and	x0, x0, #HCR_E2H
> @@ -315,11 +318,13 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
> 
>  	/* Set a sane SCTLR_EL1, the VHE way */
>  	msr_s	SYS_SCTLR_EL12, x1
> +	__set_sctlr2_elx	12, x2, x0
>  	mov	x2, #BOOT_CPU_FLAG_E2H
>  	b	3f
> 
>  2:
>  	msr	sctlr_el1, x1
> +	__set_sctlr2_elx	1, x2, x0
>  	mov	x2, xzr
>  3:
>  	mov	x0, #INIT_PSTATE_EL1
> diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
> index 36e2d26b54f5..ac12f1b4f8e2 100644
> --- a/arch/arm64/kernel/hyp-stub.S
> +++ b/arch/arm64/kernel/hyp-stub.S
> @@ -144,7 +144,17 @@ SYM_CODE_START_LOCAL(__finalise_el2)
> 
>  .Lskip_indirection:
>  .Lskip_tcr2:
> +	mrs_s	x1, SYS_ID_AA64MMFR3_EL1
> +	ubfx	x1, x1, #ID_AA64MMFR3_EL1_SCTLRX_SHIFT, #4
> +	cbz	x1, .Lskip_sctlr2
> +	mrs_s	x1, SYS_SCTLR2_EL12
> +	msr_s	SYS_SCTLR2_EL1, x1

Is this code only applicable to the VHE case?  (I think this is
probably true, but it's not completely obvious just from the context in
this patch...)

Anyway, the pattern in this function is to transfer the "same EL"
controls over from the kernel into EL2, which is what the block above
does.


The next bit of code looks strange, though.

If we have committed to running the kernel at EL2, why does the code
reinitialise SCTLR2_EL1 here:

> +	// clean SCTLR2_EL1
> +	mov_q	x1, INIT_SCTLR2_EL1
> +	msr_s	SYS_SCTLR2_EL12, x1
> +
> +.Lskip_sctlr2:
>  	isb
> 
>  	// Hack the exception return to stay at EL2
> diff --git a/arch/arm64/kvm/hyp/nvhe/psci-relay.c b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
> index c3e196fb8b18..4ed4b7fa57c2 100644
> --- a/arch/arm64/kvm/hyp/nvhe/psci-relay.c
> +++ b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
> @@ -4,6 +4,7 @@
>   * Author: David Brazdil <dbrazdil@google.com>
>   */
> 
> +#include <asm/alternative.h>
>  #include <asm/kvm_asm.h>
>  #include <asm/kvm_hyp.h>
>  #include <asm/kvm_mmu.h>
> @@ -219,6 +220,8 @@ asmlinkage void __noreturn __kvm_host_psci_cpu_entry(bool is_cpu_on)
>  		release_boot_args(boot_args);
> 
>  	write_sysreg_el1(INIT_SCTLR_EL1_MMU_OFF, SYS_SCTLR);
> +	if (alternative_has_cap_unlikely(ARM64_HAS_SCTLR2))
> +		write_sysreg_el1(INIT_SCTLR2_EL1, SYS_SCTLR2);

Maybe drop the "unlikely" in this case?

Compared with context switch, this is a slow path -- so keeping the
code as simple as possible is desirable so long as the overall effect
on performance is not significant.

(It works either way, though.)

Cheers
---Dave


* Re: [PATCH v3 3/5] arm64: save/restore SCTLR2_EL1 when cpu_suspend()/resume()
  2025-08-13 12:01 ` [PATCH v3 3/5] arm64: save/restore SCTLR2_EL1 when cpu_suspend()/resume() Yeoreum Yun
@ 2025-08-20 15:11   ` Dave Martin
  2025-08-20 17:22     ` Yeoreum Yun
  0 siblings, 1 reply; 19+ messages in thread
From: Dave Martin @ 2025-08-20 15:11 UTC (permalink / raw)
  To: Yeoreum Yun
  Cc: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, ahmed.genidi,
	kevin.brodsky, scott, mbenes, james.clark, frederic, rafael,
	pavel, ryan.roberts, suzuki.poulose, linux-arm-kernel,
	linux-kernel, linux-pm, kvmarm

Hi,

On Wed, Aug 13, 2025 at 01:01:16PM +0100, Yeoreum Yun wrote:
> SCTLR2_EL1 register is optional starting from ARMv8.8/ARMv9.3,
> and becomes mandatory from ARMv8.9/ARMv9.4
> and serveral architectural feature are controled by bits in
> these registers (i.e) FEAT_PAuth_LR or FEAT_CPA/CPA2
> 
> Save and restore SCTLR2_EL1 in cpu_suspend()/resume() so that the
> configured value remains consistent across suspend and resume.
> 
> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> ---
>  arch/arm64/include/asm/suspend.h |  2 +-
>  arch/arm64/mm/proc.S             | 26 ++++++++++++++++++--------
>  2 files changed, 19 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/suspend.h b/arch/arm64/include/asm/suspend.h
> index 0cde2f473971..eb60c9735553 100644
> --- a/arch/arm64/include/asm/suspend.h
> +++ b/arch/arm64/include/asm/suspend.h
> @@ -2,7 +2,7 @@
>  #ifndef __ASM_SUSPEND_H
>  #define __ASM_SUSPEND_H
>  
> -#define NR_CTX_REGS 13
> +#define NR_CTX_REGS 14
>  #define NR_CALLEE_SAVED_REGS 12
>  
>  /*
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 8c75965afc9e..f297bea7103b 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -87,8 +87,14 @@ SYM_FUNC_START(cpu_do_suspend)
>  	mrs	x9, mdscr_el1
>  	mrs	x10, oslsr_el1
>  	mrs	x11, sctlr_el1
> -	get_this_cpu_offset x12
> -	mrs	x13, sp_el0
> +alternative_if_not ARM64_HAS_SCTLR2
> +	mov	x12, xzr

Looking at this, maybe it can just be a nop for the !ARM64_HAS_SCTLR2
case.

(So, alternative_if ... alternative_else_nop_endif, similarly to
what you have in cpu_do_resume.)

The memory used to save this state should not be accessible to anything
less privileged than the kernel anyway, so leaking whatever was in x12
does not really feel like a concern...
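Concretely, the suggestion would make the save side look something like this (untested sketch; x12 is simply left unwritten when the capability is absent):

```asm
	mrs	x11, sctlr_el1
alternative_if ARM64_HAS_SCTLR2
	mrs_s	x12, SYS_SCTLR2_EL1
alternative_else_nop_endif
	get_this_cpu_offset x13
	mrs	x14, sp_el0
```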


> +alternative_else
> +	mrs_s	x12, SYS_SCTLR2_EL1
> +alternative_endif
> +	get_this_cpu_offset x13
> +	mrs	x14, sp_el0
> +
>  	stp	x2, x3, [x0]
>  	stp	x4, x5, [x0, #16]
>  	stp	x6, x7, [x0, #32]
> @@ -99,7 +105,7 @@ SYM_FUNC_START(cpu_do_suspend)
>  	 * Save x18 as it may be used as a platform register, e.g. by shadow
>  	 * call stack.
>  	 */
> -	str	x18, [x0, #96]
> +	stp	x14, x18, [x0, #96]
>  	ret
>  SYM_FUNC_END(cpu_do_suspend)
>  
> @@ -120,8 +126,8 @@ SYM_FUNC_START(cpu_do_resume)
>  	 * the buffer to minimize the risk of exposure when used for shadow
>  	 * call stack.
>  	 */
> -	ldr	x18, [x0, #96]
> -	str	xzr, [x0, #96]
> +	ldp	x15, x18, [x0, #96]
> +	str	xzr, [x0, #104]
>  	msr	tpidr_el0, x2
>  	msr	tpidrro_el0, x3
>  	msr	contextidr_el1, x4
> @@ -136,8 +142,12 @@ SYM_FUNC_START(cpu_do_resume)
>  	msr	mdscr_el1, x10
>  
>  	msr	sctlr_el1, x12
> -	set_this_cpu_offset x13
> -	msr	sp_el0, x14
> +alternative_if ARM64_HAS_SCTLR2
> +	msr_s	SYS_SCTLR2_EL1, x13
> +alternative_else_nop_endif
> +
> +	set_this_cpu_offset x14
> +	msr	sp_el0, x15
>  	/*
>  	 * Restore oslsr_el1 by writing oslar_el1
>  	 */
> @@ -151,7 +161,7 @@ alternative_if ARM64_HAS_RAS_EXTN
>  	msr_s	SYS_DISR_EL0, xzr
>  alternative_else_nop_endif
>  
> -	ptrauth_keys_install_kernel_nosync x14, x1, x2, x3
> +	ptrauth_keys_install_kernel_nosync x15, x1, x2, x3
>  	isb
>  	ret
>  SYM_FUNC_END(cpu_do_resume)
> -- 
> LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}

Otherwise, this looks OK to me.

Cheers
---Dave

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 4/5] arm64: initialise SCTLR2_EL1 at cpu_soft_restart()
  2025-08-13 12:01 ` [PATCH v3 4/5] arm64: initialise SCTLR2_EL1 at cpu_soft_restart() Yeoreum Yun
@ 2025-08-20 15:11   ` Dave Martin
  2025-08-20 17:32     ` Yeoreum Yun
  2025-08-21 10:14     ` Yeoreum Yun
  0 siblings, 2 replies; 19+ messages in thread
From: Dave Martin @ 2025-08-20 15:11 UTC (permalink / raw)
  To: Yeoreum Yun
  Cc: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, ahmed.genidi,
	kevin.brodsky, scott, mbenes, james.clark, frederic, rafael,
	pavel, ryan.roberts, suzuki.poulose, linux-arm-kernel,
	linux-kernel, linux-pm, kvmarm

Hi,

On Wed, Aug 13, 2025 at 01:01:17PM +0100, Yeoreum Yun wrote:
> The SCTLR2_EL1 register is optional starting from ARMv8.8/ARMv9.3
> and becomes mandatory from ARMv8.9/ARMv9.4,
> and several architectural features are controlled by bits in
> this register.
>
> Before launching a new kernel via kexec, initialise SCTLR2_EL1
> explicitly.
> 
> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> ---
>  arch/arm64/kernel/cpu-reset.S | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S
> index c87445dde674..123564af345b 100644
> --- a/arch/arm64/kernel/cpu-reset.S
> +++ b/arch/arm64/kernel/cpu-reset.S
> @@ -37,6 +37,12 @@ SYM_TYPED_FUNC_START(cpu_soft_restart)
>  	 * regime if HCR_EL2.E2H == 1
>  	 */
>  	msr	sctlr_el1, x12
> +
> +alternative_if ARM64_HAS_SCTLR2
> +	mov_q	x12, INIT_SCTLR2_EL1
> +	msr_s	SYS_SCTLR2_EL1, x12
> +alternative_else_nop_endif
> +

It would be better to do this based on the ID regs.

Although the previous kernel _shouldn't_ have used SCTLR2 if the
capability ARM64_HAS_SCTLR2 did not get enabled, it would be better to
enforce a clean state here for the new kernel.

If so, maybe one of the macros that you already defined can be used
here?  (But it's also fine to open-code it.)
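The open-coded check amounts to extracting a 4-bit ID register field and testing it for zero. In C terms (userspace sketch; the shift value is taken from the patch above, so treat the field position as an assumption):

```c
#include <stdint.h>

/* Field position as used by the patch's ubfx; an assumption here. */
#define ID_AA64MMFR3_EL1_SCTLRX_SHIFT	4

/* Equivalent of: ubfx tmp, mmfr3, #SCTLRX_SHIFT, #4 ; cbz tmp, skip */
static int cpu_has_sctlr2(uint64_t mmfr3)
{
	return ((mmfr3 >> ID_AA64MMFR3_EL1_SCTLRX_SHIFT) & 0xf) != 0;
}
```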

>  	isb
>  
>  	cbz	x0, 1f				// el2_switch?

[...]

In the case where the el2_switch argument is non-zero, don't we also
need to do something to reinitialise SCTLR2_EL2 after switching back
to EL2, in the HVC_SOFT_RESTART handler?

Maybe I missed something.

Cheers
---Dave

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 5/5] arm64: make the per-task SCTLR2_EL1
  2025-08-13 12:01 ` [PATCH v3 5/5] arm64: make the per-task SCTLR2_EL1 Yeoreum Yun
@ 2025-08-20 15:11   ` Dave Martin
  2025-08-20 17:34     ` Yeoreum Yun
  0 siblings, 1 reply; 19+ messages in thread
From: Dave Martin @ 2025-08-20 15:11 UTC (permalink / raw)
  To: Yeoreum Yun
  Cc: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, ahmed.genidi,
	kevin.brodsky, scott, mbenes, james.clark, frederic, rafael,
	pavel, ryan.roberts, suzuki.poulose, linux-arm-kernel,
	linux-kernel, linux-pm, kvmarm

Hi,

On Wed, Aug 13, 2025 at 01:01:18PM +0100, Yeoreum Yun wrote:
> The SCTLR2_EL1 register is optional starting from ARMv8.8/ARMv9.3
> and becomes mandatory from ARMv8.9/ARMv9.4,
> and several architectural features are controlled by bits in
> this register; some of those bits may need to be configured per task
> rather than globally (e.g. FEAT_CPA2-related fields).
>
> For future use of these fields, make SCTLR2_EL1 per-task.

It is worth pointing out the impact of this: for platforms without
FEAT_SCTLR2 support, there is no functional change and minimal
performance overhead.

> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> ---
>  arch/arm64/include/asm/processor.h | 5 +++++
>  arch/arm64/kernel/process.c        | 9 +++++++++
>  2 files changed, 14 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 61d62bfd5a7b..2c962816de70 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -184,6 +184,7 @@ struct thread_struct {
>  	u64			mte_ctrl;
>  #endif
>  	u64			sctlr_user;
> +	u64			sctlr2_user;
>  	u64			svcr;
>  	u64			tpidr2_el0;
>  	u64			por_el0;
> @@ -258,6 +259,9 @@ static inline void task_set_sve_vl_onexec(struct task_struct *task,
>  	(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | SCTLR_ELx_ENDA | SCTLR_ELx_ENDB |   \
>  	 SCTLR_EL1_TCF0_MASK)
>  
> +#define SCTLR2_USER_MASK	\
> +	(SCTLR2_EL1_EnPACM0 | SCTLR2_EL1_CPTA0 | SCTLR2_EL1_CPTM0)
> +

The kernel doesn't know about any of these features, yet.

It's probably better to make this 0 for this patch series, and add bits
to this mask only when they are needed / used.

>  static inline void arch_thread_struct_whitelist(unsigned long *offset,
>  						unsigned long *size)
>  {
> @@ -370,6 +374,7 @@ struct task_struct;
>  unsigned long __get_wchan(struct task_struct *p);
>  
>  void update_sctlr_el1(u64 sctlr);
> +void update_sctlr2_el1(u64 sctlr2);

Is this function used outside process.c yet?  If not, you can drop this
declaration and [... below ...]
>  
>  /* Thread switching */
>  extern struct task_struct *cpu_switch_to(struct task_struct *prev,
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 96482a1412c6..9191180c4875 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -698,6 +698,11 @@ void update_sctlr_el1(u64 sctlr)
>  	isb();
>  }
>  
> +void update_sctlr2_el1(u64 sctlr2)

[...] make the function static here.


> +{
> +	sysreg_clear_set_s(SYS_SCTLR2_EL1, SCTLR2_USER_MASK, sctlr2);
> +}
> +
>  /*
>   * Thread switching.
>   */
> @@ -737,6 +742,10 @@ struct task_struct *__switch_to(struct task_struct *prev,
>  	if (prev->thread.sctlr_user != next->thread.sctlr_user)
>  		update_sctlr_el1(next->thread.sctlr_user);
>  
> +	if (alternative_has_cap_unlikely(ARM64_HAS_SCTLR2) &&
> +	    prev->thread.sctlr2_user != next->thread.sctlr2_user)
> +		update_sctlr2_el1(next->thread.sctlr2_user);
> +
>  	/* the actual thread switch */
>  	last = cpu_switch_to(prev, next);

[...]

Otherwise, I guess this looks OK.

Cheers
---Dave

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 0/5] initialize SCTRL2_ELx
  2025-08-13 12:01 [PATCH v3 0/5] initialize SCTRL2_ELx Yeoreum Yun
                   ` (4 preceding siblings ...)
  2025-08-13 12:01 ` [PATCH v3 5/5] arm64: make the per-task SCTLR2_EL1 Yeoreum Yun
@ 2025-08-20 15:11 ` Dave Martin
  5 siblings, 0 replies; 19+ messages in thread
From: Dave Martin @ 2025-08-20 15:11 UTC (permalink / raw)
  To: Yeoreum Yun
  Cc: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, ahmed.genidi,
	kevin.brodsky, scott, mbenes, james.clark, frederic, rafael,
	pavel, ryan.roberts, suzuki.poulose, linux-arm-kernel,
	linux-kernel, linux-pm, kvmarm

Hi,

On Wed, Aug 13, 2025 at 01:01:13PM +0100, Yeoreum Yun wrote:
> This series introduces initial support for the SCTLR2_ELx registers in Linux.
> The feature is optional starting from ARMv8.8/ARMv9.3,
> and becomes mandatory from ARMv8.9/ARMv9.4.
> 
> Currently, Linux has no strict need to modify SCTLR2_ELx—
> at least assuming that firmware initializes
> these registers to reasonable defaults.
> 
> However, several upcoming architectural features will require configuring
> control bits in these registers.
> Notable examples include FEAT_PAuth_LR and FEAT_CPA2.

This looks OK overall to me, apart from some confusion between
SCTLR_ELx and SCTLR2_ELx in patch 2 (see my comments there).

This code will need to be tested somehow.  Do you have any thoughts on
this?  Hacking the bootwrapper, KVM and/or kvmtool may provide a way of
checking what happens when the kernel is entered with "wrong" initial
values in relevant bits (HCRX_EL2.SCTLR2En, SCTLR2_EL{1,2}).


Regarding the patch history:

> Patch History
> ==============
> from v2 to v3:
>   - rewrite commit messages.
>   - fix missing SCTLR2_EL2 synchronization at boot.
>   - https://lore.kernel.org/all/20250811163340.1561893-1-yeoreum.yun@arm.com/
> 
> from v1 to v2:
>   - rebase to v6.17-rc1
>   - https://lore.kernel.org/all/20250804121724.3681531-1-yeoreum.yun@arm.com/

Going forwards, can you try to make sure that significant changes to
the series are all mentioned here?

I didn't have time to look at v2, and in the meantime half of the
series disappeared and a chunk of code was moved from one patch to
another.  It saves reviewers some time and effort if they do not have to
dig through previous review conversations in order to understand what
changed between versions of a series.

(No need to mention every trivial change, though.)

Cheers
---Dave

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 2/5] arm64: initialise SCTLR2_ELx register at boot time
  2025-08-20 15:10   ` Dave Martin
@ 2025-08-20 17:18     ` Yeoreum Yun
  0 siblings, 0 replies; 19+ messages in thread
From: Yeoreum Yun @ 2025-08-20 17:18 UTC (permalink / raw)
  To: Dave Martin
  Cc: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, ahmed.genidi,
	kevin.brodsky, scott, mbenes, james.clark, frederic, rafael,
	pavel, ryan.roberts, suzuki.poulose, linux-arm-kernel,
	linux-kernel, linux-pm, kvmarm

Hi Dave,

>
> On Wed, Aug 13, 2025 at 01:01:15PM +0100, Yeoreum Yun wrote:
> > The SCTLR2_ELx registers are optional starting from ARMv8.8/ARMv9.3
> > and become mandatory from ARMv8.9/ARMv9.4,
> > and several architectural features are controlled by bits in
> > these registers.
> >
> > These registers' values are UNKNOWN out of reset.
> > They would not need to be initialised if firmware initialised them
> > properly; but to cover the case where it does not, initialise the
> > SCTLR2_ELx registers when booting a cpu/vcpu so that an improper
> > SCTLR2_ELx value cannot cause unexpected system behaviour.
> >
> > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > ---
>
> Please note significant changes under the tearoff line or in the cover
> letter.
>
> (Merging the __kvm_host_psci_cpu_entry() changes into this patch is a
> significant change.)
>
> >  arch/arm64/include/asm/assembler.h   | 22 ++++++++++++++++++++++
> >  arch/arm64/include/asm/el2_setup.h   |  6 ++++++
> >  arch/arm64/include/asm/sysreg.h      |  5 +++++
> >  arch/arm64/kernel/head.S             |  5 +++++
> >  arch/arm64/kernel/hyp-stub.S         | 10 ++++++++++
> >  arch/arm64/kvm/hyp/nvhe/psci-relay.c |  3 +++
> >  6 files changed, 51 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> > index 23be85d93348..eef169c105f0 100644
> > --- a/arch/arm64/include/asm/assembler.h
> > +++ b/arch/arm64/include/asm/assembler.h
> > @@ -738,6 +738,28 @@ alternative_endif
> >  	set_sctlr sctlr_el2, \reg
> >  .endm
> >
> > +	/*
> > +	 * Set SCTLR2_ELx to the @reg value.
> > +	 */
>
> One-line comment preferred:
>
> 	/* Set SCTLR2_ELx to the @reg value. */
>
> (when the comment is short enough to fit).
>
>
> Nit: can you follow the same indentation pattern as the "set_sctlr"
> macros that appear above?
>
> (I know, cond_yield etc. do things differently.  But the sctlr
> accessors feel like they should belong together and look similar (as
> far as possible).)

Thanks :) I'll change it.

>
>
> > +	.macro __set_sctlr2_elx, el, reg, tmp
> > +	mrs_s	\tmp, SYS_ID_AA64MMFR3_EL1
> > +	ubfx	\tmp, \tmp, #ID_AA64MMFR3_EL1_SCTLRX_SHIFT, #4
> > +	cbz	\tmp, .Lskip_sctlr2_\@
> > +	.if	\el == 2
> > +	msr_s	SYS_SCTLR_EL2, \reg
>
> Er, should this be SYS_SCTLR2_EL2 ?!

Oops. I just got lucky when I tested it. Thanks for catching this!

>
> To make the code more readable at the call site, would it make sense
> to have:
>
> 	msr_s	SYS_SCTLR2_\el, \reg
>
> ...?
>
> This would mean that you can get rid of the lengthy .if...else, and the
> assembler will report an error if the el argument is junk or missing.
> (The argument would need to have an explicit EL prefix at the call
> site.)
>
> (At the moment, the code silently uses SCTLR(2)_EL1 when the el
> argument is junk, which is not ideal.  Where it is easy to detect
> stupid errors by the caller at build time, we should try to do it.)


That is what I tried first. However, msr_s is also a macro.
Unfortunately, if I apply your suggestion, SYS_SCTLR2_EL2 doesn't expand.
IOW, the macro expands like:
  emit(SYS_SCTLR2_EL2) // -> not the encoded hex of SYS_SCTLR2_EL2

So I've used this approach instead.
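To spell out the problem: SYS_SCTLR2_EL2 is a C preprocessor define, but \el is only substituted later by the assembler, so cpp never sees the composed name (illustration only; the macro name and encoding are simplified):

```asm
/* In sysreg.h (simplified): #define SYS_SCTLR2_EL2 sys_reg(...) */

.macro set_sctlr2 el, reg
	/* cpp has already run by the time the assembler substitutes \el,
	 * so msr_s is handed the literal token SYS_SCTLR2_EL2 with no
	 * encoding behind it, and cannot emit the instruction. */
	msr_s	SYS_SCTLR2_\el, \reg
.endm
```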

>
> > +	.elseif	\el == 12
> > +	msr_s	SYS_SCTLR_EL12, \reg
> > +	.else
> > +	msr_s	SYS_SCTLR_EL1, \reg
> > +	.endif
> > +.Lskip_sctlr2_\@:
> > +	.endm
> > +
> > +	.macro set_sctlr2_elx, el, reg, tmp
> > +	__set_sctlr2_elx	\el, \reg, \tmp
> > +	isb
> > +	.endm
> > +
>
> Maybe we don't need two macros here?
>
> __set_sctlr2_elx seems only to be used in one place.
>
> The isb is needed when setting SCTLR2_ELx for the current EL, but
> the code at the call site knows whether this is the case.  It is
> hard for the macro to know, and it feels overcomplicated to
> explicitly check CurrentEL here.
>
> I think it may be better to open-code the isb in the places where it
> matters, including inside the __init_sctlr2_el2 macro.  (Compare with
> the other __init_foo macros here.)
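Following that suggestion, __init_sctlr2_el2 could carry its own isb, in the same shape as the patch's other macros (untested sketch; the skip label name is illustrative):

```asm
.macro __init_sctlr2_el2
	mrs_s	x1, SYS_ID_AA64MMFR3_EL1
	ubfx	x1, x1, #ID_AA64MMFR3_EL1_SCTLRX_SHIFT, #4
	cbz	x1, .Lskip_init_sctlr2_\@
	mov_q	x0, INIT_SCTLR2_EL2
	msr_s	SYS_SCTLR2_EL2, x0
	isb
.Lskip_init_sctlr2_\@:
.endm
```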

I see. Thanks for clarification!

>
> >  	/*
> >  	 * Check whether asm code should yield as soon as it is able. This is
> >  	 * the case if we are currently running in task context, and the
> > diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
> > index d755b4d46d77..c03cabd45fcf 100644
> > --- a/arch/arm64/include/asm/el2_setup.h
> > +++ b/arch/arm64/include/asm/el2_setup.h
> > @@ -48,6 +48,11 @@
> >  	isb
> >  .endm
> >
> > +.macro __init_sctlr2_el2
> > +	mov_q	x0, INIT_SCTLR2_EL2
> > +	set_sctlr2_elx	2, x0, x1
> > +.endm
> > +
> >  .macro __init_el2_hcrx
> >  	mrs	x0, id_aa64mmfr1_el1
> >  	ubfx	x0, x0, #ID_AA64MMFR1_EL1_HCX_SHIFT, #4
> > @@ -411,6 +416,7 @@
> >   */
> >  .macro init_el2_state
> >  	__init_el2_sctlr
> > +	__init_sctlr2_el2
> >  	__init_el2_hcrx
> >  	__init_el2_timers
> >  	__init_el2_debug
> > diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> > index d5b5f2ae1afa..0431b357b87b 100644
> > --- a/arch/arm64/include/asm/sysreg.h
> > +++ b/arch/arm64/include/asm/sysreg.h
> > @@ -868,6 +868,8 @@
> >  #define INIT_SCTLR_EL2_MMU_OFF \
> >  	(SCTLR_EL2_RES1 | ENDIAN_SET_EL2)
> >
> > +#define INIT_SCTLR2_EL2			UL(0)
> > +
> >  /* SCTLR_EL1 specific flags. */
> >  #ifdef CONFIG_CPU_BIG_ENDIAN
> >  #define ENDIAN_SET_EL1		(SCTLR_EL1_E0E | SCTLR_ELx_EE)
> > @@ -888,6 +890,8 @@
> >  	 SCTLR_EL1_LSMAOE | SCTLR_EL1_nTLSMD | SCTLR_EL1_EIS   | \
> >  	 SCTLR_EL1_TSCXT  | SCTLR_EL1_EOS)
> >
> > +#define INIT_SCTLR2_EL1			UL(0)
> > +
> >  /* MAIR_ELx memory attributes (used by Linux) */
> >  #define MAIR_ATTR_DEVICE_nGnRnE		UL(0x00)
> >  #define MAIR_ATTR_DEVICE_nGnRE		UL(0x04)
> > @@ -1164,6 +1168,7 @@
> >  	msr	hcr_el2, \reg
> >  #endif
> >  	.endm
> > +
> >  #else
> >
> >  #include <linux/bitfield.h>
> > diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> > index ca04b338cb0d..c41015675eae 100644
> > --- a/arch/arm64/kernel/head.S
> > +++ b/arch/arm64/kernel/head.S
> > @@ -276,6 +276,8 @@ SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
> >  	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
> >  	pre_disable_mmu_workaround
> >  	msr	sctlr_el1, x0
> > +	mov_q	x0, INIT_SCTLR2_EL1
> > +	__set_sctlr2_elx	1, x0, x1
> >  	isb
> >  	mov_q	x0, INIT_PSTATE_EL1
> >  	msr	spsr_el1, x0
> > @@ -308,6 +310,7 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
> >  	isb
> >
> >  	mov_q	x1, INIT_SCTLR_EL1_MMU_OFF
> > +	mov_q	x2, INIT_SCTLR2_EL1
> >
> >  	mrs	x0, hcr_el2
> >  	and	x0, x0, #HCR_E2H
> > @@ -315,11 +318,13 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
> >
> >  	/* Set a sane SCTLR_EL1, the VHE way */
> >  	msr_s	SYS_SCTLR_EL12, x1
> > +	__set_sctlr2_elx	12, x2, x0
> >  	mov	x2, #BOOT_CPU_FLAG_E2H
> >  	b	3f
> >
> >  2:
> >  	msr	sctlr_el1, x1
> > +	__set_sctlr2_elx	1, x2, x0
> >  	mov	x2, xzr
> >  3:
> >  	mov	x0, #INIT_PSTATE_EL1
> > diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
> > index 36e2d26b54f5..ac12f1b4f8e2 100644
> > --- a/arch/arm64/kernel/hyp-stub.S
> > +++ b/arch/arm64/kernel/hyp-stub.S
> > @@ -144,7 +144,17 @@ SYM_CODE_START_LOCAL(__finalise_el2)
> >
> >  .Lskip_indirection:
> >  .Lskip_tcr2:
> > +	mrs_s	x1, SYS_ID_AA64MMFR3_EL1
> > +	ubfx	x1, x1, #ID_AA64MMFR3_EL1_SCTLRX_SHIFT, #4
> > +	cbz	x1, .Lskip_sctlr2
> > +	mrs_s	x1, SYS_SCTLR2_EL12
> > +	msr_s	SYS_SCTLR2_EL1, x1
>
> Is this code only applicable to the VHE case?  (I think this is
> probably true, but it's not completely obvious just from the context in
> this patch...)
>
> Anyway, the pattern in this function is to transfer the "same EL"
> controls over from the kernel into EL2, which is what the block above
> does.
>
>
> The next bit of code looks strange, though.
>
> If we have committed to running the kernel at EL2, why does the code
> reinitialise SCTLR2_EL1 here:

When init_el2 returns, the kernel runs at EL1 regardless of VHE, via:

  mov  x0, #INIT_PSTATE_EL1
  msr  spsr_el2, x0

SCTLR_EL1 is then initialised by the subsequent code, and that value is
synced over at finalise_el2. The same will apply to SCTLR2_EL1 (nothing
modifies SCTLR2_EL1 yet, but something will in the future).

So, in the VHE case, finalise_el2 takes the SCTLR2_EL1 value set by the
earlier code and writes it to SCTLR2_EL2. Since SCTLR2_EL1 has now been
consumed to initialise SCTLR2_EL2, it should be cleared back to its
initial value.

>
> > +	// clean SCTLR2_EL1
> > +	mov_q	x1, INIT_SCTLR2_EL1
> > +	msr_s	SYS_SCTLR2_EL12, x1
> > +
> > +.Lskip_sctlr2:
> >  	isb
> >
> >  	// Hack the exception return to stay at EL2
> > diff --git a/arch/arm64/kvm/hyp/nvhe/psci-relay.c b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
> > index c3e196fb8b18..4ed4b7fa57c2 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/psci-relay.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
> > @@ -4,6 +4,7 @@
> >   * Author: David Brazdil <dbrazdil@google.com>
> >   */
> >
> > +#include <asm/alternative.h>
> >  #include <asm/kvm_asm.h>
> >  #include <asm/kvm_hyp.h>
> >  #include <asm/kvm_mmu.h>
> > @@ -219,6 +220,8 @@ asmlinkage void __noreturn __kvm_host_psci_cpu_entry(bool is_cpu_on)
> >  		release_boot_args(boot_args);
> >
> >  	write_sysreg_el1(INIT_SCTLR_EL1_MMU_OFF, SYS_SCTLR);
> > +	if (alternative_has_cap_unlikely(ARM64_HAS_SCTLR2))
> > +		write_sysreg_el1(INIT_SCTLR2_EL1, SYS_SCTLR2);
>
> Maybe drop the "unlikely" in this case?
>
> Compared with context switch, this is a slow path -- so keeping the
> code as simple as possible is desirable so long as the overall effect
> on performance is not significant.
>
> (It works either way, though.)

Okay. I'll use the likely variant.

Thanks!
>
> Cheers
> ---Dave

--
Sincerely,
Yeoreum Yun

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 3/5] arm64: save/restore SCTLR2_EL1 when cpu_suspend()/resume()
  2025-08-20 15:11   ` Dave Martin
@ 2025-08-20 17:22     ` Yeoreum Yun
  0 siblings, 0 replies; 19+ messages in thread
From: Yeoreum Yun @ 2025-08-20 17:22 UTC (permalink / raw)
  To: Dave Martin
  Cc: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, ahmed.genidi,
	kevin.brodsky, scott, mbenes, james.clark, frederic, rafael,
	pavel, ryan.roberts, suzuki.poulose, linux-arm-kernel,
	linux-kernel, linux-pm, kvmarm

Hi Dave,

> On Wed, Aug 13, 2025 at 01:01:16PM +0100, Yeoreum Yun wrote:
> > The SCTLR2_EL1 register is optional starting from ARMv8.8/ARMv9.3
> > and becomes mandatory from ARMv8.9/ARMv9.4,
> > and several architectural features are controlled by bits in
> > this register (e.g. FEAT_PAuth_LR or FEAT_CPA/CPA2).
> >
> > Save and restore SCTLR2_EL1 in cpu_suspend()/resume() so that the
> > configured value remains consistent across suspend and resume.
> >
> > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > ---
> >  arch/arm64/include/asm/suspend.h |  2 +-
> >  arch/arm64/mm/proc.S             | 26 ++++++++++++++++++--------
> >  2 files changed, 19 insertions(+), 9 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/suspend.h b/arch/arm64/include/asm/suspend.h
> > index 0cde2f473971..eb60c9735553 100644
> > --- a/arch/arm64/include/asm/suspend.h
> > +++ b/arch/arm64/include/asm/suspend.h
> > @@ -2,7 +2,7 @@
> >  #ifndef __ASM_SUSPEND_H
> >  #define __ASM_SUSPEND_H
> >
> > -#define NR_CTX_REGS 13
> > +#define NR_CTX_REGS 14
> >  #define NR_CALLEE_SAVED_REGS 12
> >
> >  /*
> > diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> > index 8c75965afc9e..f297bea7103b 100644
> > --- a/arch/arm64/mm/proc.S
> > +++ b/arch/arm64/mm/proc.S
> > @@ -87,8 +87,14 @@ SYM_FUNC_START(cpu_do_suspend)
> >  	mrs	x9, mdscr_el1
> >  	mrs	x10, oslsr_el1
> >  	mrs	x11, sctlr_el1
> > -	get_this_cpu_offset x12
> > -	mrs	x13, sp_el0
> > +alternative_if_not ARM64_HAS_SCTLR2
> > +	mov	x12, xzr
>
> Looking at this, maybe it can just be a nop for the !ARM64_HAS_SCTLR2
> case.
>
> (So, alternative_if ... alternative_else_nop_endif, similarly to
> what you have in cpu_do_resume.)
>
> The memory used to save this state should not be accessible to anything
> less privileged than the kernel anyway, so leaking whatever was in x12
> does not really feel like a concern...

Right. I'll change this.

Thanks!

>
>
> > +alternative_else
> > +	mrs_s	x12, SYS_SCTLR2_EL1
> > +alternative_endif
> > +	get_this_cpu_offset x13
> > +	mrs	x14, sp_el0
> > +
> >  	stp	x2, x3, [x0]
> >  	stp	x4, x5, [x0, #16]
> >  	stp	x6, x7, [x0, #32]
> > @@ -99,7 +105,7 @@ SYM_FUNC_START(cpu_do_suspend)
> >  	 * Save x18 as it may be used as a platform register, e.g. by shadow
> >  	 * call stack.
> >  	 */
> > -	str	x18, [x0, #96]
> > +	stp	x14, x18, [x0, #96]
> >  	ret
> >  SYM_FUNC_END(cpu_do_suspend)
> >
> > @@ -120,8 +126,8 @@ SYM_FUNC_START(cpu_do_resume)
> >  	 * the buffer to minimize the risk of exposure when used for shadow
> >  	 * call stack.
> >  	 */
> > -	ldr	x18, [x0, #96]
> > -	str	xzr, [x0, #96]
> > +	ldp	x15, x18, [x0, #96]
> > +	str	xzr, [x0, #104]
> >  	msr	tpidr_el0, x2
> >  	msr	tpidrro_el0, x3
> >  	msr	contextidr_el1, x4
> > @@ -136,8 +142,12 @@ SYM_FUNC_START(cpu_do_resume)
> >  	msr	mdscr_el1, x10
> >
> >  	msr	sctlr_el1, x12
> > -	set_this_cpu_offset x13
> > -	msr	sp_el0, x14
> > +alternative_if ARM64_HAS_SCTLR2
> > +	msr_s	SYS_SCTLR2_EL1, x13
> > +alternative_else_nop_endif
> > +
> > +	set_this_cpu_offset x14
> > +	msr	sp_el0, x15
> >  	/*
> >  	 * Restore oslsr_el1 by writing oslar_el1
> >  	 */
> > @@ -151,7 +161,7 @@ alternative_if ARM64_HAS_RAS_EXTN
> >  	msr_s	SYS_DISR_EL0, xzr
> >  alternative_else_nop_endif
> >
> > -	ptrauth_keys_install_kernel_nosync x14, x1, x2, x3
> > +	ptrauth_keys_install_kernel_nosync x15, x1, x2, x3
> >  	isb
> >  	ret
> >  SYM_FUNC_END(cpu_do_resume)
> > --
> > LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
>
> Otherwise, this looks OK to me.
>
> Cheers
> ---Dave

--
Sincerely,
Yeoreum Yun

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 4/5] arm64: initialise SCTLR2_EL1 at cpu_soft_restart()
  2025-08-20 15:11   ` Dave Martin
@ 2025-08-20 17:32     ` Yeoreum Yun
  2025-09-01 15:01       ` Dave Martin
  2025-08-21 10:14     ` Yeoreum Yun
  1 sibling, 1 reply; 19+ messages in thread
From: Yeoreum Yun @ 2025-08-20 17:32 UTC (permalink / raw)
  To: Dave Martin
  Cc: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, ahmed.genidi,
	kevin.brodsky, scott, mbenes, james.clark, frederic, rafael,
	pavel, ryan.roberts, suzuki.poulose, linux-arm-kernel,
	linux-kernel, linux-pm, kvmarm

Hi Dave,

> On Wed, Aug 13, 2025 at 01:01:17PM +0100, Yeoreum Yun wrote:
> > The SCTLR2_EL1 register is optional starting from ARMv8.8/ARMv9.3
> > and becomes mandatory from ARMv8.9/ARMv9.4,
> > and several architectural features are controlled by bits in
> > this register.
> >
> > Before launching a new kernel via kexec, initialise SCTLR2_EL1
> > explicitly.
> >
> > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > ---
> >  arch/arm64/kernel/cpu-reset.S | 6 ++++++
> >  1 file changed, 6 insertions(+)
> >
> > diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S
> > index c87445dde674..123564af345b 100644
> > --- a/arch/arm64/kernel/cpu-reset.S
> > +++ b/arch/arm64/kernel/cpu-reset.S
> > @@ -37,6 +37,12 @@ SYM_TYPED_FUNC_START(cpu_soft_restart)
> >  	 * regime if HCR_EL2.E2H == 1
> >  	 */
> >  	msr	sctlr_el1, x12
> > +
> > +alternative_if ARM64_HAS_SCTLR2
> > +	mov_q	x12, INIT_SCTLR2_EL1
> > +	msr_s	SYS_SCTLR2_EL1, x12
> > +alternative_else_nop_endif
> > +
>
> It would be better to do this based on the ID regs.
>
> Although the previous kernel _shouldn't_ have used SCTLR2 if the
> capability ARM64_HAS_SCTLR2 did not get enabled, it would be better to
> enforce a clean state here for the new kernel.
>
> If so, maybe one of the macros that you already defined can be used
> here?  (But it's also fine to open-code it.)

But can cpu_soft_restart() be called before the capability is enabled?
I think this function is only called after capability setup, so
wouldn't using an alternative be as good as checking the ID register?

>
> >  	isb
> >
> >  	cbz	x0, 1f				// el2_switch?
>
> [...]
>
> In the case where the el2_switch argument is non-zero, don't we also
> need to do something to reinitialise SCTLR2_EL2 after switching back
> to EL2, in the HVC_SOFT_RESTART handler?
>
> Maybe I missed something.

No, you didn't miss anything. I am missing an init in nVHE's
HVC_SOFT_RESTART handler to clear SCTLR2_EL2.

Thanks!

>
> Cheers
> ---Dave

--
Sincerely,
Yeoreum Yun

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 5/5] arm64: make the per-task SCTLR2_EL1
  2025-08-20 15:11   ` Dave Martin
@ 2025-08-20 17:34     ` Yeoreum Yun
  0 siblings, 0 replies; 19+ messages in thread
From: Yeoreum Yun @ 2025-08-20 17:34 UTC (permalink / raw)
  To: Dave Martin
  Cc: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, ahmed.genidi,
	kevin.brodsky, scott, mbenes, james.clark, frederic, rafael,
	pavel, ryan.roberts, suzuki.poulose, linux-arm-kernel,
	linux-kernel, linux-pm, kvmarm

Hi Dave,

> On Wed, Aug 13, 2025 at 01:01:18PM +0100, Yeoreum Yun wrote:
> > The SCTLR2_EL1 register is optional starting from ARMv8.8/ARMv9.3
> > and becomes mandatory from ARMv8.9/ARMv9.4,
> > and several architectural features are controlled by bits in
> > this register; some of those bits may need to be configured per task
> > rather than globally (e.g. FEAT_CPA2-related fields).
> >
> > For future use of these fields, make SCTLR2_EL1 per-task.
>
> It is worth pointing out the impact of this: for platforms without
> FEAT_SCTLR2 support, there is no functional change and minimal
> performance overhead.

Thanks for your suggestion :)

>
> > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > ---
> >  arch/arm64/include/asm/processor.h | 5 +++++
> >  arch/arm64/kernel/process.c        | 9 +++++++++
> >  2 files changed, 14 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> > index 61d62bfd5a7b..2c962816de70 100644
> > --- a/arch/arm64/include/asm/processor.h
> > +++ b/arch/arm64/include/asm/processor.h
> > @@ -184,6 +184,7 @@ struct thread_struct {
> >  	u64			mte_ctrl;
> >  #endif
> >  	u64			sctlr_user;
> > +	u64			sctlr2_user;
> >  	u64			svcr;
> >  	u64			tpidr2_el0;
> >  	u64			por_el0;
> > @@ -258,6 +259,9 @@ static inline void task_set_sve_vl_onexec(struct task_struct *task,
> >  	(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | SCTLR_ELx_ENDA | SCTLR_ELx_ENDB |   \
> >  	 SCTLR_EL1_TCF0_MASK)
> >
> > +#define SCTLR2_USER_MASK	\
> > +	(SCTLR2_EL1_EnPACM0 | SCTLR2_EL1_CPTA0 | SCTLR2_EL1_CPTM0)
> > +
>
> The kernel doesn't know about any of these features, yet.
>
> It's probably better to make this 0 for this patch series, and add bits
> to this mask only when they are needed / used.

I see. Thanks!

>
> >  static inline void arch_thread_struct_whitelist(unsigned long *offset,
> >  						unsigned long *size)
> >  {
> > @@ -370,6 +374,7 @@ struct task_struct;
> >  unsigned long __get_wchan(struct task_struct *p);
> >
> >  void update_sctlr_el1(u64 sctlr);
> > +void update_sctlr2_el1(u64 sctlr2);
>
> Is this function used outside process.c yet?  If not, you can drop this
> declaration and [... below ...]
> >
> >  /* Thread switching */
> >  extern struct task_struct *cpu_switch_to(struct task_struct *prev,
> > diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> > index 96482a1412c6..9191180c4875 100644
> > --- a/arch/arm64/kernel/process.c
> > +++ b/arch/arm64/kernel/process.c
> > @@ -698,6 +698,11 @@ void update_sctlr_el1(u64 sctlr)
> >  	isb();
> >  }
> >
> > +void update_sctlr2_el1(u64 sctlr2)
>
> [...] make the function static here.

You're right, there's no usage anywhere except here.
I'll change it.

Thanks!

>
>
> > +{
> > +	sysreg_clear_set_s(SYS_SCTLR2_EL1, SCTLR2_USER_MASK, sctlr2);
> > +}
> > +
> >  /*
> >   * Thread switching.
> >   */
> > @@ -737,6 +742,10 @@ struct task_struct *__switch_to(struct task_struct *prev,
> >  	if (prev->thread.sctlr_user != next->thread.sctlr_user)
> >  		update_sctlr_el1(next->thread.sctlr_user);
> >
> > +	if (alternative_has_cap_unlikely(ARM64_HAS_SCTLR2) &&
> > +	    prev->thread.sctlr2_user != next->thread.sctlr2_user)
> > +		update_sctlr2_el1(next->thread.sctlr2_user);
> > +
> >  	/* the actual thread switch */
> >  	last = cpu_switch_to(prev, next);
>
> [...]
>
> Otherwise, I guess this looks OK.
>
> Cheers
> ---Dave

--
Sincerely,
Yeoreum Yun

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 1/5] arm64: make SCTLR2_EL1 accessible
  2025-08-20 15:10   ` Dave Martin
@ 2025-08-20 18:51     ` Yeoreum Yun
  0 siblings, 0 replies; 19+ messages in thread
From: Yeoreum Yun @ 2025-08-20 18:51 UTC (permalink / raw)
  To: Dave Martin
  Cc: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, ahmed.genidi,
	kevin.brodsky, scott, mbenes, james.clark, frederic, rafael,
	pavel, ryan.roberts, suzuki.poulose, linux-arm-kernel,
	linux-kernel, linux-pm, kvmarm, Marc Zyngier

Hi Dave,

> > When the kernel runs at EL1, and yet is booted at EL2,
> > HCRX_EL2.SCTLR2En must be set to avoid trapping SCTLR2_EL1 accesses
> > from EL1 to EL2.
> >
> > Ensure this bit is set at the point of initialising EL2.
> >
> > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > Reviewed-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/el2_setup.h | 8 +++++++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
> > index 46033027510c..d755b4d46d77 100644
> > --- a/arch/arm64/include/asm/el2_setup.h
> > +++ b/arch/arm64/include/asm/el2_setup.h
> > @@ -57,9 +57,15 @@
> >          /* Enable GCS if supported */
> >  	mrs_s	x1, SYS_ID_AA64PFR1_EL1
> >  	ubfx	x1, x1, #ID_AA64PFR1_EL1_GCS_SHIFT, #4
> > -	cbz	x1, .Lset_hcrx_\@
> > +	cbz	x1, .Lskip_hcrx_GCSEn_\@
> >  	orr	x0, x0, #HCRX_EL2_GCSEn
> >
> > +.Lskip_hcrx_GCSEn_\@:
> > +	mrs_s	x1, SYS_ID_AA64MMFR3_EL1
> > +	ubfx	x1, x1, #ID_AA64MMFR3_EL1_SCTLRX_SHIFT, #4
> > +	cbz	x1, .Lset_hcrx_\@
> > +	orr	x0, x0, HCRX_EL2_SCTLR2En
>
> Nit: prefix immediate operands with # please -- see usage elsewhere in
> this file.
>
> (This comes from the legacy AArch32 syntax and has never been required
> by AArch64 assemblers, but it has become a tradition in the Linux arch
> code...)
>
> The only exceptions to this rule are macros (mov_q, mrs_s etc. --
> frequently they have an underscore in the name; "real" instructions
> never do.)

Grr.. My fat finger.. Sorry to bother you.
I'll fix it and thanks for the great comment :)

>
> > +
> >  .Lset_hcrx_\@:
>
> Maybe rename this label to .Lskip_hcrx_SCTLR2En_\@, so that people
> don't have to keep renaming an existing label whenever they add
> another block here.

Okay. I'll change it.

Thanks!

--
Sincerely,
Yeoreum Yun


* Re: [PATCH v3 4/5] arm64: initialise SCTLR2_EL1 at cpu_soft_restart()
  2025-08-20 15:11   ` Dave Martin
  2025-08-20 17:32     ` Yeoreum Yun
@ 2025-08-21 10:14     ` Yeoreum Yun
  1 sibling, 0 replies; 19+ messages in thread
From: Yeoreum Yun @ 2025-08-21 10:14 UTC (permalink / raw)
  To: Dave Martin
  Cc: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, ahmed.genidi,
	kevin.brodsky, scott, mbenes, james.clark, frederic, rafael,
	pavel, ryan.roberts, suzuki.poulose, linux-arm-kernel,
	linux-kernel, linux-pm, kvmarm

Hi Dave,
> Hi,
>
> On Wed, Aug 13, 2025 at 01:01:17PM +0100, Yeoreum Yun wrote:
> > The SCTLR2_EL1 register is optional starting from ARMv8.8/ARMv9.3,
> > becomes mandatory from ARMv8.9/ARMv9.4,
> > and several architectural features are controlled by bits in
> > these registers.
> >
> > Before launching a new kernel via kexec, initialise SCTLR2_EL1 explicitly.
> >
> > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > ---
> >  arch/arm64/kernel/cpu-reset.S | 6 ++++++
> >  1 file changed, 6 insertions(+)
> >
> > diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S
> > index c87445dde674..123564af345b 100644
> > --- a/arch/arm64/kernel/cpu-reset.S
> > +++ b/arch/arm64/kernel/cpu-reset.S
> > @@ -37,6 +37,12 @@ SYM_TYPED_FUNC_START(cpu_soft_restart)
> >  	 * regime if HCR_EL2.E2H == 1
> >  	 */
> >  	msr	sctlr_el1, x12
> > +
> > +alternative_if ARM64_HAS_SCTLR2
> > +	mov_q	x12, INIT_SCTLR2_EL1
> > +	msr_s	SYS_SCTLR2_EL1, x12
> > +alternative_else_nop_endif
> > +
>
> It would be better to do this based on the ID regs.
>
> Although the previous kernel _shouldn't_ have used SCTLR2 if the
> capability ARM64_HAS_SCTLR2 did not get enabled, it would be better to
> enforce a clean state here for the new kernel.
>
> If so, maybe one of the macros that you already defined can be used
> here?  (But it's also fine to open-code it.)

Ah, got it. I'll change it as you suggest.
Thanks!

[...]

--
Sincerely,
Yeoreum Yun


* Re: [PATCH v3 4/5] arm64: initialise SCTLR2_EL1 at cpu_soft_restart()
  2025-08-20 17:32     ` Yeoreum Yun
@ 2025-09-01 15:01       ` Dave Martin
  0 siblings, 0 replies; 19+ messages in thread
From: Dave Martin @ 2025-09-01 15:01 UTC (permalink / raw)
  To: Yeoreum Yun
  Cc: catalin.marinas, will, broonie, oliver.upton, anshuman.khandual,
	robh, james.morse, mark.rutland, joey.gouly, ahmed.genidi,
	kevin.brodsky, scott, mbenes, james.clark, frederic, rafael,
	pavel, ryan.roberts, suzuki.poulose, linux-arm-kernel,
	linux-kernel, linux-pm, kvmarm

Hi,

On Wed, Aug 20, 2025 at 06:32:34PM +0100, Yeoreum Yun wrote:
> Hi Dave,
> 
> > On Wed, Aug 13, 2025 at 01:01:17PM +0100, Yeoreum Yun wrote:
> > > The SCTLR2_EL1 register is optional starting from ARMv8.8/ARMv9.3,
> > > becomes mandatory from ARMv8.9/ARMv9.4,
> > > and several architectural features are controlled by bits in
> > > these registers.
> > >
> > > Before launching a new kernel via kexec, initialise SCTLR2_EL1 explicitly.
> > >
> > > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > > ---
> > >  arch/arm64/kernel/cpu-reset.S | 6 ++++++
> > >  1 file changed, 6 insertions(+)
> > >
> > > diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S
> > > index c87445dde674..123564af345b 100644
> > > --- a/arch/arm64/kernel/cpu-reset.S
> > > +++ b/arch/arm64/kernel/cpu-reset.S
> > > @@ -37,6 +37,12 @@ SYM_TYPED_FUNC_START(cpu_soft_restart)
> > >  	 * regime if HCR_EL2.E2H == 1
> > >  	 */
> > >  	msr	sctlr_el1, x12
> > > +
> > > +alternative_if ARM64_HAS_SCTLR2
> > > +	mov_q	x12, INIT_SCTLR2_EL1
> > > +	msr_s	SYS_SCTLR2_EL1, x12
> > > +alternative_else_nop_endif
> > > +
> >
> > It would be better to do this based on the ID regs.
> >
> > Although the previous kernel _shouldn't_ have used SCTLR2 if the
> > capability ARM64_HAS_SCTLR2 did not get enabled, it would be better to
> > enforce a clean state here for the new kernel.
> >
> > If so, maybe one of the macros that you already defined can be used
> > here?  (But it's also fine to open-code it.)
> 
> But can cpu_soft_restart() be called before the capability is enabled?
> I think this function is called after the "capability" setup,
> so is it better to use an alternative rather than checking the ID register?

What I meant is that we should reset SCTLR2_EL1 here even if the
ARM64_HAS_SCTLR2 capability is not set.

The cpu_soft_restart() code has responsibilities similar to those of a
bootloader.  We want to put the CPU into a known state, irrespective of
how the current kernel has been using the CPU.

For one thing, we come through this path when booting a crash kernel if
the current kernel panicked.  So we should avoid making too many
assumptions about anything being in a sensible state here.

(Your rewrite of this in v4 looks fine.)

> > >  	isb
> > >
> > >  	cbz	x0, 1f				// el2_switch?
> >
> > [...]
> >
> > In the case where the el2_switch argument in non-zero, don't we also
> > need to do something to reinitialise SCTLR2_EL2 after switching back
> > to EL2, in the HVC_SOFT_RESTART handler?
> >
> > Maybe I missed something.
> 
> No. I missed initialising SCTLR2_EL2 in the nVHE HVC_SOFT_RESTART handler.
> 
> Thanks!

I'll take a look at this in v4.

Cheers
---Dave


end of thread, other threads:[~2025-09-01 15:01 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-08-13 12:01 [PATCH v3 0/5] initialize SCTRL2_ELx Yeoreum Yun
2025-08-13 12:01 ` [PATCH v3 1/5] arm64: make SCTLR2_EL1 accessible Yeoreum Yun
2025-08-20 15:10   ` Dave Martin
2025-08-20 18:51     ` Yeoreum Yun
2025-08-13 12:01 ` [PATCH v3 2/5] arm64: initialise SCTLR2_ELx register at boot time Yeoreum Yun
2025-08-20 15:10   ` Dave Martin
2025-08-20 17:18     ` Yeoreum Yun
2025-08-13 12:01 ` [PATCH v3 3/5] arm64: save/restore SCTLR2_EL1 when cpu_suspend()/resume() Yeoreum Yun
2025-08-20 15:11   ` Dave Martin
2025-08-20 17:22     ` Yeoreum Yun
2025-08-13 12:01 ` [PATCH v3 4/5] arm64: initialise SCTLR2_EL1 at cpu_soft_restart() Yeoreum Yun
2025-08-20 15:11   ` Dave Martin
2025-08-20 17:32     ` Yeoreum Yun
2025-09-01 15:01       ` Dave Martin
2025-08-21 10:14     ` Yeoreum Yun
2025-08-13 12:01 ` [PATCH v3 5/5] arm64: make the per-task SCTLR2_EL1 Yeoreum Yun
2025-08-20 15:11   ` Dave Martin
2025-08-20 17:34     ` Yeoreum Yun
2025-08-20 15:11 ` [PATCH v3 0/5] initialize SCTRL2_ELx Dave Martin
