* [PATCH 0/3] arm64: Unconditionally compile LSE/PAN/EPAN support
@ 2026-01-07 18:06 Marc Zyngier
2026-01-07 18:06 ` [PATCH 1/3] arm64: Unconditionally enable LSE support Marc Zyngier
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Marc Zyngier @ 2026-01-07 18:06 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm
Cc: Will Deacon, Catalin Marinas, Mark Rutland, Joey Gouly,
Suzuki K Poulose, Oliver Upton, Zenghui Yu
FEAT_LSE and FEAT_PAN have been around for a *very* long time (ARMv8.1
was published 11 years ago), and it is about time we enable these by
default. The additional text is very small, the advantages pretty
large in terms of performance (LSE) and security (PAN), and it is very
hard to find a semi-modern machine that doesn't have these (even the
RPi5 is ARMv8.2...).
On top of that, FEAT_PAN3 (aka EPAN) is a very nice thing to have, and
naturally complements PAN for exec-only mappings.
Drop the configuration symbols for these three extensions, and let the
automatic detection of features do its job.
Only very lightly tested, but what could possibly go wrong? ;-)
Marc Zyngier (3):
arm64: Unconditionally enable LSE support
arm64: Unconditionally enable PAN support
arm64: Unconditionally enable EPAN support
arch/arm64/Kconfig | 46 -----------------------------
arch/arm64/configs/hardening.config | 3 --
arch/arm64/include/asm/cpucaps.h | 4 ---
arch/arm64/include/asm/insn.h | 23 ---------------
arch/arm64/include/asm/lse.h | 9 ------
arch/arm64/include/asm/uaccess.h | 6 ++--
arch/arm64/kernel/cpufeature.c | 8 -----
arch/arm64/kvm/at.c | 7 -----
arch/arm64/kvm/hyp/entry.S | 2 +-
arch/arm64/lib/insn.c | 2 --
arch/arm64/net/bpf_jit_comp.c | 7 -----
11 files changed, 3 insertions(+), 114 deletions(-)
--
2.47.3
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH 1/3] arm64: Unconditionally enable LSE support
2026-01-07 18:06 [PATCH 0/3] arm64: Unconditionally compile LSE/PAN/EPAN support Marc Zyngier
@ 2026-01-07 18:06 ` Marc Zyngier
2026-01-07 18:07 ` [PATCH 2/3] arm64: Unconditionally enable PAN support Marc Zyngier
` (2 subsequent siblings)
3 siblings, 0 replies; 9+ messages in thread
From: Marc Zyngier @ 2026-01-07 18:06 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm
Cc: Will Deacon, Catalin Marinas, Mark Rutland, Joey Gouly,
Suzuki K Poulose, Oliver Upton, Zenghui Yu
LSE atomics have been in the architecture since ARMv8.1 (released in
2014), and are hopefully supported by all modern toolchains.
Drop the optional nature of LSE support in the kernel, and always
compile the support in, as this really is very little code. LL/SC
still is the default, and the switch to LSE is done dynamically.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/Kconfig | 16 ----------------
arch/arm64/include/asm/insn.h | 23 -----------------------
arch/arm64/include/asm/lse.h | 9 ---------
arch/arm64/kernel/cpufeature.c | 2 --
arch/arm64/kvm/at.c | 7 -------
arch/arm64/lib/insn.c | 2 --
arch/arm64/net/bpf_jit_comp.c | 7 -------
7 files changed, 66 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 93173f0a09c7d..b6f57cc1e4df8 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1873,22 +1873,6 @@ config ARM64_PAN
The feature is detected at runtime, and will remain as a 'nop'
instruction if the cpu does not implement the feature.
-config ARM64_LSE_ATOMICS
- bool
- default ARM64_USE_LSE_ATOMICS
-
-config ARM64_USE_LSE_ATOMICS
- bool "Atomic instructions"
- default y
- help
- As part of the Large System Extensions, ARMv8.1 introduces new
- atomic instructions that are designed specifically to scale in
- very large systems.
-
- Say Y here to make use of these instructions for the in-kernel
- atomic routines. This incurs a small overhead on CPUs that do
- not support these instructions.
-
endmenu # "ARMv8.1 architectural features"
menu "ARMv8.2 architectural features"
diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index e1d30ba99d016..f463a654a2bbd 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -671,7 +671,6 @@ u32 aarch64_insn_gen_extr(enum aarch64_insn_variant variant,
enum aarch64_insn_register Rn,
enum aarch64_insn_register Rd,
u8 lsb);
-#ifdef CONFIG_ARM64_LSE_ATOMICS
u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result,
enum aarch64_insn_register address,
enum aarch64_insn_register value,
@@ -683,28 +682,6 @@ u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
enum aarch64_insn_register value,
enum aarch64_insn_size_type size,
enum aarch64_insn_mem_order_type order);
-#else
-static inline
-u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result,
- enum aarch64_insn_register address,
- enum aarch64_insn_register value,
- enum aarch64_insn_size_type size,
- enum aarch64_insn_mem_atomic_op op,
- enum aarch64_insn_mem_order_type order)
-{
- return AARCH64_BREAK_FAULT;
-}
-
-static inline
-u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
- enum aarch64_insn_register address,
- enum aarch64_insn_register value,
- enum aarch64_insn_size_type size,
- enum aarch64_insn_mem_order_type order)
-{
- return AARCH64_BREAK_FAULT;
-}
-#endif
u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type);
u32 aarch64_insn_gen_dsb(enum aarch64_insn_mb_type type);
u32 aarch64_insn_gen_mrs(enum aarch64_insn_register result,
diff --git a/arch/arm64/include/asm/lse.h b/arch/arm64/include/asm/lse.h
index 3129a5819d0e0..1e77c45bb0a83 100644
--- a/arch/arm64/include/asm/lse.h
+++ b/arch/arm64/include/asm/lse.h
@@ -4,8 +4,6 @@
#include <asm/atomic_ll_sc.h>
-#ifdef CONFIG_ARM64_LSE_ATOMICS
-
#define __LSE_PREAMBLE ".arch_extension lse\n"
#include <linux/compiler_types.h>
@@ -27,11 +25,4 @@
#define ARM64_LSE_ATOMIC_INSN(llsc, lse) \
ALTERNATIVE(llsc, __LSE_PREAMBLE lse, ARM64_HAS_LSE_ATOMICS)
-#else /* CONFIG_ARM64_LSE_ATOMICS */
-
-#define __lse_ll_sc_body(op, ...) __ll_sc_##op(__VA_ARGS__)
-
-#define ARM64_LSE_ATOMIC_INSN(llsc, lse) llsc
-
-#endif /* CONFIG_ARM64_LSE_ATOMICS */
#endif /* __ASM_LSE_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c840a93b9ef95..547ccf28f2893 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2560,7 +2560,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, PAN, PAN3)
},
#endif /* CONFIG_ARM64_EPAN */
-#ifdef CONFIG_ARM64_LSE_ATOMICS
{
.desc = "LSE atomic instructions",
.capability = ARM64_HAS_LSE_ATOMICS,
@@ -2568,7 +2567,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
ARM64_CPUID_FIELDS(ID_AA64ISAR0_EL1, ATOMIC, IMP)
},
-#endif /* CONFIG_ARM64_LSE_ATOMICS */
{
.desc = "Virtualization Host Extensions",
.capability = ARM64_HAS_VIRT_HOST_EXTN,
diff --git a/arch/arm64/kvm/at.c b/arch/arm64/kvm/at.c
index 53bf70126f81d..6cbcec041a9dd 100644
--- a/arch/arm64/kvm/at.c
+++ b/arch/arm64/kvm/at.c
@@ -1700,7 +1700,6 @@ int __kvm_find_s1_desc_level(struct kvm_vcpu *vcpu, u64 va, u64 ipa, int *level)
}
}
-#ifdef CONFIG_ARM64_LSE_ATOMICS
static int __lse_swap_desc(u64 __user *ptep, u64 old, u64 new)
{
u64 tmp = old;
@@ -1725,12 +1724,6 @@ static int __lse_swap_desc(u64 __user *ptep, u64 old, u64 new)
return ret;
}
-#else
-static int __lse_swap_desc(u64 __user *ptep, u64 old, u64 new)
-{
- return -EINVAL;
-}
-#endif
static int __llsc_swap_desc(u64 __user *ptep, u64 old, u64 new)
{
diff --git a/arch/arm64/lib/insn.c b/arch/arm64/lib/insn.c
index 4e298baddc2e5..cc5b40917d0dd 100644
--- a/arch/arm64/lib/insn.c
+++ b/arch/arm64/lib/insn.c
@@ -611,7 +611,6 @@ u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
state);
}
-#ifdef CONFIG_ARM64_LSE_ATOMICS
static u32 aarch64_insn_encode_ldst_order(enum aarch64_insn_mem_order_type type,
u32 insn)
{
@@ -755,7 +754,6 @@ u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RS, insn,
value);
}
-#endif
u32 aarch64_insn_gen_add_sub_imm(enum aarch64_insn_register dst,
enum aarch64_insn_register src,
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index b6eb7a465ad24..5ce82edc508e4 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -776,7 +776,6 @@ static int emit_atomic_ld_st(const struct bpf_insn *insn, struct jit_ctx *ctx)
return 0;
}
-#ifdef CONFIG_ARM64_LSE_ATOMICS
static int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
{
const u8 code = insn->code;
@@ -843,12 +842,6 @@ static int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
return 0;
}
-#else
-static inline int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
-{
- return -EINVAL;
-}
-#endif
static int emit_ll_sc_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
{
--
2.47.3
* [PATCH 2/3] arm64: Unconditionally enable PAN support
2026-01-07 18:06 [PATCH 0/3] arm64: Unconditionally compile LSE/PAN/EPAN support Marc Zyngier
2026-01-07 18:06 ` [PATCH 1/3] arm64: Unconditionally enable LSE support Marc Zyngier
@ 2026-01-07 18:07 ` Marc Zyngier
2026-01-22 11:21 ` Marc Zyngier
2026-01-07 18:07 ` [PATCH 3/3] arm64: Unconditionally enable EPAN support Marc Zyngier
2026-01-22 16:59 ` [PATCH 0/3] arm64: Unconditionally compile LSE/PAN/EPAN support Will Deacon
3 siblings, 1 reply; 9+ messages in thread
From: Marc Zyngier @ 2026-01-07 18:07 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm
Cc: Will Deacon, Catalin Marinas, Mark Rutland, Joey Gouly,
Suzuki K Poulose, Oliver Upton, Zenghui Yu
FEAT_PAN has been around since ARMv8.1 (over 11 years ago), has no compiler
dependency (we have our own accessors), and is a great security benefit.
Drop CONFIG_ARM64_PAN, and make the support unconditional.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/Kconfig | 17 -----------------
arch/arm64/include/asm/cpucaps.h | 2 --
arch/arm64/include/asm/uaccess.h | 6 ++----
arch/arm64/kernel/cpufeature.c | 4 ----
arch/arm64/kvm/hyp/entry.S | 2 +-
5 files changed, 3 insertions(+), 28 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b6f57cc1e4df8..fcfb62ec4bae8 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1680,7 +1680,6 @@ config MITIGATE_SPECTRE_BRANCH_HISTORY
config ARM64_SW_TTBR0_PAN
bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
depends on !KCSAN
- select ARM64_PAN
help
Enabling this option prevents the kernel from accessing
user-space memory directly by pointing TTBR0_EL1 to a reserved
@@ -1859,20 +1858,6 @@ config ARM64_HW_AFDBM
to work on pre-ARMv8.1 hardware and the performance impact is
minimal. If unsure, say Y.
-config ARM64_PAN
- bool "Enable support for Privileged Access Never (PAN)"
- default y
- help
- Privileged Access Never (PAN; part of the ARMv8.1 Extensions)
- prevents the kernel or hypervisor from accessing user-space (EL0)
- memory directly.
-
- Choosing this option will cause any unprotected (not using
- copy_to_user et al) memory access to fail with a permission fault.
-
- The feature is detected at runtime, and will remain as a 'nop'
- instruction if the cpu does not implement the feature.
-
endmenu # "ARMv8.1 architectural features"
menu "ARMv8.2 architectural features"
@@ -2109,7 +2094,6 @@ config ARM64_MTE
depends on ARM64_AS_HAS_MTE && ARM64_TAGGED_ADDR_ABI
depends on AS_HAS_ARMV8_5
# Required for tag checking in the uaccess routines
- select ARM64_PAN
select ARCH_HAS_SUBPAGE_FAULTS
select ARCH_USES_HIGH_VMA_FLAGS
select ARCH_USES_PG_ARCH_2
@@ -2141,7 +2125,6 @@ menu "ARMv8.7 architectural features"
config ARM64_EPAN
bool "Enable support for Enhanced Privileged Access Never (EPAN)"
default y
- depends on ARM64_PAN
help
Enhanced Privileged Access Never (EPAN) allows Privileged
Access Never to be used with Execute-only mappings.
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 2c8029472ad45..177c691914f87 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -19,8 +19,6 @@ cpucap_is_possible(const unsigned int cap)
"cap must be < ARM64_NCAPS");
switch (cap) {
- case ARM64_HAS_PAN:
- return IS_ENABLED(CONFIG_ARM64_PAN);
case ARM64_HAS_EPAN:
return IS_ENABLED(CONFIG_ARM64_EPAN);
case ARM64_SVE:
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 6490930deef84..9810106a3f664 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -124,14 +124,12 @@ static inline bool uaccess_ttbr0_enable(void)
static inline void __uaccess_disable_hw_pan(void)
{
- asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
- CONFIG_ARM64_PAN));
+ asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN));
}
static inline void __uaccess_enable_hw_pan(void)
{
- asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
- CONFIG_ARM64_PAN));
+ asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN));
}
static inline void uaccess_disable_privileged(void)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 547ccf28f2893..716440d147a2d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2164,7 +2164,6 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
return cpu_supports_bbml2_noabort();
}
-#ifdef CONFIG_ARM64_PAN
static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
{
/*
@@ -2176,7 +2175,6 @@ static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
sysreg_clear_set(sctlr_el1, SCTLR_EL1_SPAN, 0);
set_pstate_pan(1);
}
-#endif /* CONFIG_ARM64_PAN */
#ifdef CONFIG_ARM64_RAS_EXTN
static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
@@ -2541,7 +2539,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, ECV, CNTPOFF)
},
-#ifdef CONFIG_ARM64_PAN
{
.desc = "Privileged Access Never",
.capability = ARM64_HAS_PAN,
@@ -2550,7 +2547,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.cpu_enable = cpu_enable_pan,
ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, PAN, IMP)
},
-#endif /* CONFIG_ARM64_PAN */
#ifdef CONFIG_ARM64_EPAN
{
.desc = "Enhanced Privileged Access Never",
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 9f4e8d68ab505..11a10d8f5beb2 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -126,7 +126,7 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
add x1, x1, #VCPU_CONTEXT
- ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
+ ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN)
// Store the guest regs x2 and x3
stp x2, x3, [x1, #CPU_XREG_OFFSET(2)]
--
2.47.3
* [PATCH 3/3] arm64: Unconditionally enable EPAN support
2026-01-07 18:06 [PATCH 0/3] arm64: Unconditionally compile LSE/PAN/EPAN support Marc Zyngier
2026-01-07 18:06 ` [PATCH 1/3] arm64: Unconditionally enable LSE support Marc Zyngier
2026-01-07 18:07 ` [PATCH 2/3] arm64: Unconditionally enable PAN support Marc Zyngier
@ 2026-01-07 18:07 ` Marc Zyngier
2026-01-22 10:15 ` Will Deacon
2026-01-22 16:59 ` [PATCH 0/3] arm64: Unconditionally compile LSE/PAN/EPAN support Will Deacon
3 siblings, 1 reply; 9+ messages in thread
From: Marc Zyngier @ 2026-01-07 18:07 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm
Cc: Will Deacon, Catalin Marinas, Mark Rutland, Joey Gouly,
Suzuki K Poulose, Oliver Upton, Zenghui Yu
While FEAT_PAN3 is pretty recent, having it permanently enabled costs
exactly nothing, and does help with exec-only mappings on these fancy
ARMv9.2 machines that are rumoured to exist.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/Kconfig | 13 -------------
arch/arm64/configs/hardening.config | 3 ---
arch/arm64/include/asm/cpucaps.h | 2 --
arch/arm64/kernel/cpufeature.c | 2 --
4 files changed, 20 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index fcfb62ec4bae8..c31079f4b611a 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2120,19 +2120,6 @@ config ARM64_MTE
endmenu # "ARMv8.5 architectural features"
-menu "ARMv8.7 architectural features"
-
-config ARM64_EPAN
- bool "Enable support for Enhanced Privileged Access Never (EPAN)"
- default y
- help
- Enhanced Privileged Access Never (EPAN) allows Privileged
- Access Never to be used with Execute-only mappings.
-
- The feature is detected at runtime, and will remain disabled
- if the cpu does not implement the feature.
-endmenu # "ARMv8.7 architectural features"
-
config AS_HAS_MOPS
def_bool $(as-instr,.arch_extension mops)
diff --git a/arch/arm64/configs/hardening.config b/arch/arm64/configs/hardening.config
index 24179722927e1..e59034e7af256 100644
--- a/arch/arm64/configs/hardening.config
+++ b/arch/arm64/configs/hardening.config
@@ -18,6 +18,3 @@ CONFIG_ARM64_BTI_KERNEL=y
CONFIG_ARM64_MTE=y
CONFIG_KASAN_HW_TAGS=y
CONFIG_ARM64_E0PD=y
-
-# Available in ARMv8.7 and later.
-CONFIG_ARM64_EPAN=y
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 177c691914f87..13c0fa54ea19f 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -19,8 +19,6 @@ cpucap_is_possible(const unsigned int cap)
"cap must be < ARM64_NCAPS");
switch (cap) {
- case ARM64_HAS_EPAN:
- return IS_ENABLED(CONFIG_ARM64_EPAN);
case ARM64_SVE:
return IS_ENABLED(CONFIG_ARM64_SVE);
case ARM64_SME:
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 716440d147a2d..30eea68178c87 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2547,7 +2547,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.cpu_enable = cpu_enable_pan,
ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, PAN, IMP)
},
-#ifdef CONFIG_ARM64_EPAN
{
.desc = "Enhanced Privileged Access Never",
.capability = ARM64_HAS_EPAN,
@@ -2555,7 +2554,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, PAN, PAN3)
},
-#endif /* CONFIG_ARM64_EPAN */
{
.desc = "LSE atomic instructions",
.capability = ARM64_HAS_LSE_ATOMICS,
--
2.47.3
* Re: [PATCH 3/3] arm64: Unconditionally enable EPAN support
2026-01-07 18:07 ` [PATCH 3/3] arm64: Unconditionally enable EPAN support Marc Zyngier
@ 2026-01-22 10:15 ` Will Deacon
2026-01-22 11:06 ` Marc Zyngier
0 siblings, 1 reply; 9+ messages in thread
From: Will Deacon @ 2026-01-22 10:15 UTC (permalink / raw)
To: Marc Zyngier
Cc: linux-arm-kernel, kvmarm, Catalin Marinas, Mark Rutland,
Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu
On Wed, Jan 07, 2026 at 06:07:01PM +0000, Marc Zyngier wrote:
> While FEAT_PAN3 is pretty recent, having it permanently enabled costs
> exactly nothing, and does help with exec-only mappings on these fancy
> ARMv9.2 machines that are rumoured to exist.
I'm not sure it's _entirely_ accurate to say this one costs us "exactly
nothing":
> diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
> index 177c691914f87..13c0fa54ea19f 100644
> --- a/arch/arm64/include/asm/cpucaps.h
> +++ b/arch/arm64/include/asm/cpucaps.h
> @@ -19,8 +19,6 @@ cpucap_is_possible(const unsigned int cap)
> "cap must be < ARM64_NCAPS");
>
> switch (cap) {
> - case ARM64_HAS_EPAN:
> - return IS_ENABLED(CONFIG_ARM64_EPAN);
> case ARM64_SVE:
> return IS_ENABLED(CONFIG_ARM64_SVE);
> case ARM64_SME:
as this means cpus_have_cap(EPAN) always ends up doing a test_bit(). It's
not exactly expensive, but it feels a little premature when compared to
PAN and LSE so I'll probably just take those changes for now.
Will
* Re: [PATCH 3/3] arm64: Unconditionally enable EPAN support
2026-01-22 10:15 ` Will Deacon
@ 2026-01-22 11:06 ` Marc Zyngier
0 siblings, 0 replies; 9+ messages in thread
From: Marc Zyngier @ 2026-01-22 11:06 UTC (permalink / raw)
To: Will Deacon
Cc: linux-arm-kernel, kvmarm, Catalin Marinas, Mark Rutland,
Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu
On Thu, 22 Jan 2026 10:15:56 +0000,
Will Deacon <will@kernel.org> wrote:
>
> On Wed, Jan 07, 2026 at 06:07:01PM +0000, Marc Zyngier wrote:
> > While FEAT_PAN3 is pretty recent, having it permanently enabled costs
> > exactly nothing, and does help with exec-only mappings on these fancy
> > ARMv9.2 machines that are rumoured to exist.
>
> I'm not sure it's _entirely_ accurate to say this one costs us "exactly
> nothing":
>
> > diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
> > index 177c691914f87..13c0fa54ea19f 100644
> > --- a/arch/arm64/include/asm/cpucaps.h
> > +++ b/arch/arm64/include/asm/cpucaps.h
> > @@ -19,8 +19,6 @@ cpucap_is_possible(const unsigned int cap)
> > "cap must be < ARM64_NCAPS");
> >
> > switch (cap) {
> > - case ARM64_HAS_EPAN:
> > - return IS_ENABLED(CONFIG_ARM64_EPAN);
> > case ARM64_SVE:
> > return IS_ENABLED(CONFIG_ARM64_SVE);
> > case ARM64_SME:
>
> as this means cpus_have_cap(EPAN) always ends up doing a test_bit(). It's
> not exactly expensive, but it feels a little premature when compared to
> PAN and LSE so I'll probably just take those changes for now.
Ah, fair enough. It can probably wait another few years then!
Cheers,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH 2/3] arm64: Unconditionally enable PAN support
2026-01-07 18:07 ` [PATCH 2/3] arm64: Unconditionally enable PAN support Marc Zyngier
@ 2026-01-22 11:21 ` Marc Zyngier
2026-01-22 17:02 ` Will Deacon
0 siblings, 1 reply; 9+ messages in thread
From: Marc Zyngier @ 2026-01-22 11:21 UTC (permalink / raw)
To: Will Deacon
Cc: linux-arm-kernel, kvmarm, Catalin Marinas, Mark Rutland,
Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu
Hi Will,
On Wed, 07 Jan 2026 18:07:00 +0000,
Marc Zyngier <maz@kernel.org> wrote:
>
> FEAT_PAN has been around since ARMv8.1 (over 11 years ago), has no compiler
> dependency (we have our own accessors), and is a great security benefit.
>
> Drop CONFIG_ARM64_PAN, and make the support unconditional.
Since you mentioned that you were planning to merge this patch, I'd
like to point out that a related change[1] is on its way to Linus via
the KVM tree for 6.19. It is also in -next due to dependencies.
Could you please place these patches on a branch that I can pull in
the kvmarm tree to resolve the conflict? It really amounts to
reverting this patch now that we are guaranteed to have PAN on a
PAN- machine.
Thanks,
M.
[1] https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git/commit/?h=fixes&id=86364832ba6f2777db98391060b2d7f69938ad9b
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH 0/3] arm64: Unconditionally compile LSE/PAN/EPAN support
2026-01-07 18:06 [PATCH 0/3] arm64: Unconditionally compile LSE/PAN/EPAN support Marc Zyngier
` (2 preceding siblings ...)
2026-01-07 18:07 ` [PATCH 3/3] arm64: Unconditionally enable EPAN support Marc Zyngier
@ 2026-01-22 16:59 ` Will Deacon
3 siblings, 0 replies; 9+ messages in thread
From: Will Deacon @ 2026-01-22 16:59 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, Marc Zyngier
Cc: catalin.marinas, kernel-team, Will Deacon, Mark Rutland,
Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu
On Wed, 07 Jan 2026 18:06:58 +0000, Marc Zyngier wrote:
> FEAT_LSE and FEAT_PAN have been around for a *very* long time (ARMv8.1
> was published 11 years ago), and it is about time we enable these by
> default. The additional text is very small, the advantages pretty
> large in terms of performance (LSE) and security (PAN), and it is very
> hard to find a semi-modern machine that doesn't have these (even the
> RPi5 is ARMv8.2...).
>
> [...]
Applied first two to arm64 (for-next/cpufeature), thanks!
[1/3] arm64: Unconditionally enable LSE support
https://git.kernel.org/arm64/c/6191b25d8bd9
[2/3] arm64: Unconditionally enable PAN support
https://git.kernel.org/arm64/c/018a231b0260
Cheers,
--
Will
https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev
* Re: [PATCH 2/3] arm64: Unconditionally enable PAN support
2026-01-22 11:21 ` Marc Zyngier
@ 2026-01-22 17:02 ` Will Deacon
0 siblings, 0 replies; 9+ messages in thread
From: Will Deacon @ 2026-01-22 17:02 UTC (permalink / raw)
To: Marc Zyngier
Cc: linux-arm-kernel, kvmarm, Catalin Marinas, Mark Rutland,
Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu
Hey Marc,
On Thu, Jan 22, 2026 at 11:21:01AM +0000, Marc Zyngier wrote:
> On Wed, 07 Jan 2026 18:07:00 +0000,
> Marc Zyngier <maz@kernel.org> wrote:
> >
> > FEAT_PAN has been around since ARMv8.1 (over 11 years ago), has no compiler
> > dependency (we have our own accessors), and is a great security benefit.
> >
> > Drop CONFIG_ARM64_PAN, and make the support unconditional.
>
> Since you mentioned that you were planning to merge this patch, I'd
> like to point out that a related change[1] is on its way to Linus via
> the KVM tree for 6.19. It is also in -next due to dependencies.
>
> Could you please place these patches on a branch that I can pull in
> the kvmarm tree to resolve the conflict? It really amounts to
> reverting this patch now that we are guaranteed to have PAN on an
> PAN- machine.
You can use for-next/cpufeature. It has these plus the LS64 stuff that
you might also want to pull into the kvm/arm tree.
Please yell if you have any issues.
Cheers,
Will