* [PATCH v11 RESEND 0/9] support FEAT_LSUI
@ 2025-12-14 11:22 Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 1/9] arm64: cpufeature: add FEAT_LSUI Yeoreum Yun
` (9 more replies)
0 siblings, 10 replies; 37+ messages in thread
From: Yeoreum Yun @ 2025-12-14 11:22 UTC (permalink / raw)
To: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah,
mark.rutland, arnd
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
Yeoreum Yun
Since Armv9.6, FEAT_LSUI supplies load/store instructions that allow
privileged code to access user memory without clearing the PSTATE.PAN
bit.
This patchset supports FEAT_LSUI and applies it to the futex atomic
operations and to user_swpX emulation, replacing the ldxr/st{l}xr pair
implementations that clear PSTATE.PAN with the corresponding
unprivileged load/store atomic operations that leave PSTATE.PAN set.
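The contract that both implementations must satisfy can be illustrated
with a plain-C model (a sketch of the semantics only; the function name
is illustrative, not kernel code):

```c
#include <stdint.h>

/*
 * Userspace model of the contract that both the LL/SC and the LSUI
 * implementations satisfy: apply the operation to *uaddr and report
 * the previous value through *oval. The kernel variants additionally
 * handle faulting user accesses, returning -EFAULT instead of 0.
 */
static int model_futex_add(int oparg, uint32_t *uaddr, int *oval)
{
	*oval = (int)*uaddr;		/* previous value for the caller */
	*uaddr += (uint32_t)oparg;	/* the FUTEX_OP_ADD update */
	return 0;
}
```

What the series changes is only how this contract is implemented on
arm64: LL/SC sequences with PSTATE.PAN toggling today, single LSUI
instructions when the feature is present.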
This patchset is based on v6.19-rc1.
Patch Sequences
================
Patch #1 adds the cpufeature for FEAT_LSUI
Patches #2-#3 expose FEAT_LSUI to the guest
Patch #4 adds the Kconfig option for FEAT_LSUI
Patches #5-#6 support the futex atomic operations with FEAT_LSUI
Patches #7-#9 support user_swpX emulation with FEAT_LSUI
Patch History
==============
from v10 to v11:
- rebase to v6.19-rc1
- use cast instruction to emulate deprecated swpb instruction
- https://lore.kernel.org/all/20251103163224.818353-1-yeoreum.yun@arm.com/
from v9 to v10:
- apply FEAT_LSUI to user_swpX emulation.
- add test coverage for LSUI bit in ID_AA64ISAR3_EL1
- rebase to v6.18-rc4
- https://lore.kernel.org/all/20250922102244.2068414-1-yeoreum.yun@arm.com/
from v8 to v9:
- refactoring __lsui_cmpxchg64()
- rebase to v6.17-rc7
- https://lore.kernel.org/all/20250917110838.917281-1-yeoreum.yun@arm.com/
from v7 to v8:
- implement futex_atomic_eor() and futex_atomic_cmpxchg() with casalt
via a C helper.
- Drop the small optimisation on the ll/sc futex_atomic_set operation.
- modify some commit messages.
- https://lore.kernel.org/all/20250816151929.197589-1-yeoreum.yun@arm.com/
from v6 to v7:
- wrap FEAT_LSUI with CONFIG_AS_HAS_LSUI in cpufeature
- remove unnecessary addition of indentation.
- remove unnecessary mte_tco_enable()/disable() on LSUI operation.
- https://lore.kernel.org/all/20250811163635.1562145-1-yeoreum.yun@arm.com/
from v5 to v6:
- rebase to v6.17-rc1
- https://lore.kernel.org/all/20250722121956.1509403-1-yeoreum.yun@arm.com/
from v4 to v5:
- remove futex_ll_sc.h, futex_lsui.h and lsui.h and move their contents into futex.h
- reorganize the patches.
- https://lore.kernel.org/all/20250721083618.2743569-1-yeoreum.yun@arm.com/
from v3 to v4:
- rebase to v6.16-rc7
- modify some patch titles.
- https://lore.kernel.org/all/20250617183635.1266015-1-yeoreum.yun@arm.com/
from v2 to v3:
- expose FEAT_LSUI to the guest
- add a help section to the LSUI Kconfig entry
- https://lore.kernel.org/all/20250611151154.46362-1-yeoreum.yun@arm.com/
from v1 to v2:
- remove empty v9.6 menu entry
- locate HAS_LSUI in cpucaps in order
- https://lore.kernel.org/all/20250611104916.10636-1-yeoreum.yun@arm.com/
Yeoreum Yun (9):
arm64: cpufeature: add FEAT_LSUI
KVM: arm64: expose FEAT_LSUI to guest
KVM: arm64: kselftest: set_id_regs: add test for FEAT_LSUI
arm64: Kconfig: Detect toolchain support for LSUI
arm64: futex: refactor futex atomic operation
arm64: futex: support futex with FEAT_LSUI
arm64: separate common LSUI definitions into lsui.h
arm64: armv8_deprecated: convert user_swpX to inline function
arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
arch/arm64/Kconfig | 5 +
arch/arm64/include/asm/futex.h | 291 +++++++++++++++---
arch/arm64/include/asm/lsui.h | 25 ++
arch/arm64/kernel/armv8_deprecated.c | 111 +++++--
arch/arm64/kernel/cpufeature.c | 10 +
arch/arm64/kvm/sys_regs.c | 3 +-
arch/arm64/tools/cpucaps | 1 +
.../testing/selftests/kvm/arm64/set_id_regs.c | 1 +
8 files changed, 381 insertions(+), 66 deletions(-)
create mode 100644 arch/arm64/include/asm/lsui.h
base-commit: 8f0b4cce4481fb22653697cced8d0d04027cb1e8
--
* [PATCH v11 RESEND 1/9] arm64: cpufeature: add FEAT_LSUI
2025-12-14 11:22 [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun
@ 2025-12-14 11:22 ` Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 2/9] KVM: arm64: expose FEAT_LSUI to guest Yeoreum Yun
` (8 subsequent siblings)
9 siblings, 0 replies; 37+ messages in thread
From: Yeoreum Yun @ 2025-12-14 11:22 UTC (permalink / raw)
To: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah,
mark.rutland, arnd
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
Yeoreum Yun
Since Armv9.6, FEAT_LSUI provides load/store instructions
that allow privileged code to access user memory without
clearing the PSTATE.PAN bit.
Add CPU feature detection for FEAT_LSUI.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
arch/arm64/kernel/cpufeature.c | 10 ++++++++++
arch/arm64/tools/cpucaps | 1 +
2 files changed, 11 insertions(+)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c840a93b9ef9..4c75220e53a1 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -280,6 +280,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
static const struct arm64_ftr_bits ftr_id_aa64isar3[] = {
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR3_EL1_FPRCVT_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR3_EL1_LSUI_SHIFT, 4, ID_AA64ISAR3_EL1_LSUI_NI),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR3_EL1_LSFE_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR3_EL1_FAMINMAX_SHIFT, 4, 0),
ARM64_FTR_END,
@@ -3148,6 +3149,15 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, XNX, IMP)
},
+#ifdef CONFIG_AS_HAS_LSUI
+ {
+ .desc = "Unprivileged Load Store Instructions (LSUI)",
+ .capability = ARM64_HAS_LSUI,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .matches = has_cpuid_feature,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR3_EL1, LSUI, IMP)
+ },
+#endif
{},
};
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 0fac75f01534..4b2f7f3f2b80 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -46,6 +46,7 @@ HAS_HCX
HAS_LDAPR
HAS_LPA2
HAS_LSE_ATOMICS
+HAS_LSUI
HAS_MOPS
HAS_NESTED_VIRT
HAS_BBML2_NOABORT
--
* [PATCH v11 RESEND 2/9] KVM: arm64: expose FEAT_LSUI to guest
2025-12-14 11:22 [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 1/9] arm64: cpufeature: add FEAT_LSUI Yeoreum Yun
@ 2025-12-14 11:22 ` Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 3/9] KVM: arm64: kselftest: set_id_regs: add test for FEAT_LSUI Yeoreum Yun
` (7 subsequent siblings)
9 siblings, 0 replies; 37+ messages in thread
From: Yeoreum Yun @ 2025-12-14 11:22 UTC (permalink / raw)
To: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah,
mark.rutland, arnd
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
Yeoreum Yun
Expose FEAT_LSUI to the guest.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
arch/arm64/kvm/sys_regs.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c8fd7c6a12a1..fa34910b22ae 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1805,7 +1805,7 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
break;
case SYS_ID_AA64ISAR3_EL1:
val &= ID_AA64ISAR3_EL1_FPRCVT | ID_AA64ISAR3_EL1_LSFE |
- ID_AA64ISAR3_EL1_FAMINMAX;
+ ID_AA64ISAR3_EL1_FAMINMAX | ID_AA64ISAR3_EL1_LSUI;
break;
case SYS_ID_AA64MMFR2_EL1:
val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
@@ -3249,6 +3249,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
ID_AA64ISAR2_EL1_GPA3)),
ID_WRITABLE(ID_AA64ISAR3_EL1, (ID_AA64ISAR3_EL1_FPRCVT |
ID_AA64ISAR3_EL1_LSFE |
+ ID_AA64ISAR3_EL1_LSUI |
ID_AA64ISAR3_EL1_FAMINMAX)),
ID_UNALLOCATED(6,4),
ID_UNALLOCATED(6,5),
--
* [PATCH v11 RESEND 3/9] KVM: arm64: kselftest: set_id_regs: add test for FEAT_LSUI
2025-12-14 11:22 [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 1/9] arm64: cpufeature: add FEAT_LSUI Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 2/9] KVM: arm64: expose FEAT_LSUI to guest Yeoreum Yun
@ 2025-12-14 11:22 ` Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 4/9] arm64: Kconfig: Detect toolchain support for LSUI Yeoreum Yun
` (6 subsequent siblings)
9 siblings, 0 replies; 37+ messages in thread
From: Yeoreum Yun @ 2025-12-14 11:22 UTC (permalink / raw)
To: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah,
mark.rutland, arnd
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
Yeoreum Yun
Add test coverage for FEAT_LSUI.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
---
tools/testing/selftests/kvm/arm64/set_id_regs.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
index c4815d365816..0b1714aa127c 100644
--- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
@@ -125,6 +125,7 @@ static const struct reg_ftr_bits ftr_id_aa64isar2_el1[] = {
static const struct reg_ftr_bits ftr_id_aa64isar3_el1[] = {
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR3_EL1, FPRCVT, 0),
+ REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR3_EL1, LSUI, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR3_EL1, LSFE, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR3_EL1, FAMINMAX, 0),
REG_FTR_END,
--
* [PATCH v11 RESEND 4/9] arm64: Kconfig: Detect toolchain support for LSUI
2025-12-14 11:22 [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun
` (2 preceding siblings ...)
2025-12-14 11:22 ` [PATCH v11 RESEND 3/9] KVM: arm64: kselftest: set_id_regs: add test for FEAT_LSUI Yeoreum Yun
@ 2025-12-14 11:22 ` Yeoreum Yun
2026-01-19 15:50 ` Will Deacon
2025-12-14 11:22 ` [PATCH v11 RESEND 5/9] arm64: futex: refactor futex atomic operation Yeoreum Yun
` (5 subsequent siblings)
9 siblings, 1 reply; 37+ messages in thread
From: Yeoreum Yun @ 2025-12-14 11:22 UTC (permalink / raw)
To: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah,
mark.rutland, arnd
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
Yeoreum Yun
Since Armv9.6, FEAT_LSUI supplies load/store instructions that allow
privileged code to access user memory without clearing the PSTATE.PAN
bit.
It is enough to add CONFIG_AS_HAS_LSUI alone, because the LSUI code
uses individual .arch_extension directives.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
arch/arm64/Kconfig | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 93173f0a09c7..36e87a1a1b5c 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2227,6 +2227,11 @@ config ARM64_GCS
endmenu # "ARMv9.4 architectural features"
+config AS_HAS_LSUI
+ def_bool $(as-instr,.arch_extension lsui)
+ help
+ Supported by LLVM 20+ and binutils 2.45+.
+
config ARM64_SVE
bool "ARM Scalable Vector Extension support"
default y
--
* [PATCH v11 RESEND 5/9] arm64: futex: refactor futex atomic operation
2025-12-14 11:22 [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun
` (3 preceding siblings ...)
2025-12-14 11:22 ` [PATCH v11 RESEND 4/9] arm64: Kconfig: Detect toolchain support for LSUI Yeoreum Yun
@ 2025-12-14 11:22 ` Yeoreum Yun
2026-01-19 15:57 ` Will Deacon
2025-12-14 11:22 ` [PATCH v11 RESEND 6/9] arm64: futex: support futex with FEAT_LSUI Yeoreum Yun
` (4 subsequent siblings)
9 siblings, 1 reply; 37+ messages in thread
From: Yeoreum Yun @ 2025-12-14 11:22 UTC (permalink / raw)
To: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah,
mark.rutland, arnd
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
Yeoreum Yun
Refactor the futex atomic operations, which are implemented with LL/SC
sequences that clear PSTATE.PAN, in preparation for applying FEAT_LSUI
to them.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
arch/arm64/include/asm/futex.h | 128 +++++++++++++++++++++------------
1 file changed, 82 insertions(+), 46 deletions(-)
diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index bc06691d2062..f8cb674bdb3f 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -7,17 +7,21 @@
#include <linux/futex.h>
#include <linux/uaccess.h>
+#include <linux/stringify.h>
#include <asm/errno.h>
#define FUTEX_MAX_LOOPS 128 /* What's the largest number you can think of? */
-#define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg) \
-do { \
+#define LLSC_FUTEX_ATOMIC_OP(op, insn) \
+static __always_inline int \
+__llsc_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
+{ \
unsigned int loops = FUTEX_MAX_LOOPS; \
+ int ret, oldval, tmp; \
\
uaccess_enable_privileged(); \
- asm volatile( \
+ asm volatile("// __llsc_futex_atomic_" #op "\n" \
" prfm pstl1strm, %2\n" \
"1: ldxr %w1, %2\n" \
insn "\n" \
@@ -35,45 +39,103 @@ do { \
: "r" (oparg), "Ir" (-EAGAIN) \
: "memory"); \
uaccess_disable_privileged(); \
-} while (0)
+ \
+ if (!ret) \
+ *oval = oldval; \
+ \
+ return ret; \
+}
+
+LLSC_FUTEX_ATOMIC_OP(add, "add %w3, %w1, %w5")
+LLSC_FUTEX_ATOMIC_OP(or, "orr %w3, %w1, %w5")
+LLSC_FUTEX_ATOMIC_OP(and, "and %w3, %w1, %w5")
+LLSC_FUTEX_ATOMIC_OP(eor, "eor %w3, %w1, %w5")
+LLSC_FUTEX_ATOMIC_OP(set, "mov %w3, %w5")
+
+static __always_inline int
+__llsc_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
+{
+ int ret = 0;
+ unsigned int loops = FUTEX_MAX_LOOPS;
+ u32 val, tmp;
+
+ uaccess_enable_privileged();
+ asm volatile("//__llsc_futex_cmpxchg\n"
+" prfm pstl1strm, %2\n"
+"1: ldxr %w1, %2\n"
+" eor %w3, %w1, %w5\n"
+" cbnz %w3, 4f\n"
+"2: stlxr %w3, %w6, %2\n"
+" cbz %w3, 3f\n"
+" sub %w4, %w4, %w3\n"
+" cbnz %w4, 1b\n"
+" mov %w0, %w7\n"
+"3:\n"
+" dmb ish\n"
+"4:\n"
+ _ASM_EXTABLE_UACCESS_ERR(1b, 4b, %w0)
+ _ASM_EXTABLE_UACCESS_ERR(2b, 4b, %w0)
+ : "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
+ : "r" (oldval), "r" (newval), "Ir" (-EAGAIN)
+ : "memory");
+ uaccess_disable_privileged();
+
+ if (!ret)
+ *oval = val;
+
+ return ret;
+}
+
+#define FUTEX_ATOMIC_OP(op) \
+static __always_inline int \
+__futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
+{ \
+ return __llsc_futex_atomic_##op(oparg, uaddr, oval); \
+}
+
+FUTEX_ATOMIC_OP(add)
+FUTEX_ATOMIC_OP(or)
+FUTEX_ATOMIC_OP(and)
+FUTEX_ATOMIC_OP(eor)
+FUTEX_ATOMIC_OP(set)
+
+static __always_inline int
+__futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
+{
+ return __llsc_futex_cmpxchg(uaddr, oldval, newval, oval);
+}
static inline int
arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr)
{
- int oldval = 0, ret, tmp;
- u32 __user *uaddr = __uaccess_mask_ptr(_uaddr);
+ int ret;
+ u32 __user *uaddr;
if (!access_ok(_uaddr, sizeof(u32)))
return -EFAULT;
+ uaddr = __uaccess_mask_ptr(_uaddr);
+
switch (op) {
case FUTEX_OP_SET:
- __futex_atomic_op("mov %w3, %w5",
- ret, oldval, uaddr, tmp, oparg);
+ ret = __futex_atomic_set(oparg, uaddr, oval);
break;
case FUTEX_OP_ADD:
- __futex_atomic_op("add %w3, %w1, %w5",
- ret, oldval, uaddr, tmp, oparg);
+ ret = __futex_atomic_add(oparg, uaddr, oval);
break;
case FUTEX_OP_OR:
- __futex_atomic_op("orr %w3, %w1, %w5",
- ret, oldval, uaddr, tmp, oparg);
+ ret = __futex_atomic_or(oparg, uaddr, oval);
break;
case FUTEX_OP_ANDN:
- __futex_atomic_op("and %w3, %w1, %w5",
- ret, oldval, uaddr, tmp, ~oparg);
+ ret = __futex_atomic_and(~oparg, uaddr, oval);
break;
case FUTEX_OP_XOR:
- __futex_atomic_op("eor %w3, %w1, %w5",
- ret, oldval, uaddr, tmp, oparg);
+ ret = __futex_atomic_eor(oparg, uaddr, oval);
break;
default:
ret = -ENOSYS;
}
- if (!ret)
- *oval = oldval;
-
return ret;
}
@@ -81,40 +143,14 @@ static inline int
futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
u32 oldval, u32 newval)
{
- int ret = 0;
- unsigned int loops = FUTEX_MAX_LOOPS;
- u32 val, tmp;
u32 __user *uaddr;
if (!access_ok(_uaddr, sizeof(u32)))
return -EFAULT;
uaddr = __uaccess_mask_ptr(_uaddr);
- uaccess_enable_privileged();
- asm volatile("// futex_atomic_cmpxchg_inatomic\n"
-" prfm pstl1strm, %2\n"
-"1: ldxr %w1, %2\n"
-" sub %w3, %w1, %w5\n"
-" cbnz %w3, 4f\n"
-"2: stlxr %w3, %w6, %2\n"
-" cbz %w3, 3f\n"
-" sub %w4, %w4, %w3\n"
-" cbnz %w4, 1b\n"
-" mov %w0, %w7\n"
-"3:\n"
-" dmb ish\n"
-"4:\n"
- _ASM_EXTABLE_UACCESS_ERR(1b, 4b, %w0)
- _ASM_EXTABLE_UACCESS_ERR(2b, 4b, %w0)
- : "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
- : "r" (oldval), "r" (newval), "Ir" (-EAGAIN)
- : "memory");
- uaccess_disable_privileged();
-
- if (!ret)
- *uval = val;
- return ret;
+ return __futex_cmpxchg(uaddr, oldval, newval, uval);
}
#endif /* __ASM_FUTEX_H */
--
* [PATCH v11 RESEND 6/9] arm64: futex: support futex with FEAT_LSUI
2025-12-14 11:22 [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun
` (4 preceding siblings ...)
2025-12-14 11:22 ` [PATCH v11 RESEND 5/9] arm64: futex: refactor futex atomic operation Yeoreum Yun
@ 2025-12-14 11:22 ` Yeoreum Yun
2026-01-19 16:37 ` Will Deacon
2025-12-14 11:22 ` [PATCH v11 RESEND 7/9] arm64: separate common LSUI definitions into lsui.h Yeoreum Yun
` (3 subsequent siblings)
9 siblings, 1 reply; 37+ messages in thread
From: Yeoreum Yun @ 2025-12-14 11:22 UTC (permalink / raw)
To: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah,
mark.rutland, arnd
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
Yeoreum Yun
The current futex atomic operations are implemented with LL/SC
instructions and by clearing PSTATE.PAN.
Since Armv9.6, FEAT_LSUI supplies not only load/store instructions but
also atomic operations for accessing user memory from the kernel, so
the PSTATE.PAN bit no longer needs to be cleared.
With these instructions, some futex atomic operations no longer need an
ldxr/stlxr pair; they can instead be implemented with a single atomic
operation supplied by FEAT_LSUI.
However, some futex atomic operations have no matching instruction
(i.e. eor, or cmpxchg at word size). For those operations, use cas{al}t
to implement them.
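The fallback strategy for the missing eor instruction is a
compare-and-swap retry loop. The same idea can be sketched in portable
C with the GCC/Clang atomic builtins (an illustrative userspace
analogue under those assumptions, not the kernel implementation):

```c
#include <stdint.h>

/*
 * Emulate an atomic fetch-xor with a CAS loop, mirroring how the
 * series builds the eor operation on top of cas{al}t: read the old
 * value, compute old ^ arg, and retry the compare-and-swap until no
 * concurrent writer has raced with us.
 */
static uint32_t fetch_xor_via_cas(uint32_t *addr, uint32_t arg)
{
	uint32_t old = __atomic_load_n(addr, __ATOMIC_RELAXED);

	while (!__atomic_compare_exchange_n(addr, &old, old ^ arg,
					    1 /* weak */,
					    __ATOMIC_ACQ_REL,
					    __ATOMIC_RELAXED)) {
		/* 'old' was refreshed by the failed CAS; just retry. */
	}
	return old;	/* previous value, as the futex op must report */
}
```

Unlike this unbounded loop, the kernel version caps the number of
retries (FUTEX_MAX_LOOPS) and returns -EAGAIN to guarantee forward
progress.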
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
arch/arm64/include/asm/futex.h | 180 ++++++++++++++++++++++++++++++++-
1 file changed, 178 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index f8cb674bdb3f..6778ff7e1c0e 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -9,6 +9,8 @@
#include <linux/uaccess.h>
#include <linux/stringify.h>
+#include <asm/alternative.h>
+#include <asm/alternative-macros.h>
#include <asm/errno.h>
#define FUTEX_MAX_LOOPS 128 /* What's the largest number you can think of? */
@@ -86,11 +88,185 @@ __llsc_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
return ret;
}
+#ifdef CONFIG_AS_HAS_LSUI
+
+/*
+ * When the LSUI feature is present, the CPU also implements PAN, because
+ * FEAT_PAN has been mandatory since Armv8.1. Therefore, there is no need to
+ * call uaccess_ttbr0_enable()/uaccess_ttbr0_disable() around each LSUI
+ * operation.
+ */
+
+#define __LSUI_PREAMBLE ".arch_extension lsui\n"
+
+#define LSUI_FUTEX_ATOMIC_OP(op, asm_op, mb) \
+static __always_inline int \
+__lsui_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
+{ \
+ int ret = 0; \
+ int oldval; \
+ \
+ asm volatile("// __lsui_futex_atomic_" #op "\n" \
+ __LSUI_PREAMBLE \
+"1: " #asm_op #mb " %w3, %w2, %1\n" \
+"2:\n" \
+ _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0) \
+ : "+r" (ret), "+Q" (*uaddr), "=r" (oldval) \
+ : "r" (oparg) \
+ : "memory"); \
+ \
+ if (!ret) \
+ *oval = oldval; \
+ \
+ return ret; \
+}
+
+LSUI_FUTEX_ATOMIC_OP(add, ldtadd, al)
+LSUI_FUTEX_ATOMIC_OP(or, ldtset, al)
+LSUI_FUTEX_ATOMIC_OP(andnot, ldtclr, al)
+LSUI_FUTEX_ATOMIC_OP(set, swpt, al)
+
+static __always_inline int
+__lsui_cmpxchg64(u64 __user *uaddr, u64 *oldval, u64 newval)
+{
+ int ret = 0;
+
+ asm volatile("// __lsui_cmpxchg64\n"
+ __LSUI_PREAMBLE
+"1: casalt %x2, %x3, %1\n"
+"2:\n"
+ _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
+ : "+r" (ret), "+Q" (*uaddr), "+r" (*oldval)
+ : "r" (newval)
+ : "memory");
+
+ return ret;
+}
+
+static __always_inline int
+__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
+{
+ u64 __user *uaddr64;
+ bool futex_on_lo;
+ int ret = -EAGAIN, i;
+ u32 other, orig_other;
+ union {
+ struct futex_on_lo {
+ u32 val;
+ u32 other;
+ } lo_futex;
+
+ struct futex_on_hi {
+ u32 other;
+ u32 val;
+ } hi_futex;
+
+ u64 raw;
+ } oval64, orig64, nval64;
+
+ uaddr64 = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
+ futex_on_lo = (IS_ALIGNED((unsigned long)uaddr, sizeof(u64)) ==
+ IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN));
+
+ for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
+ if (get_user(oval64.raw, uaddr64))
+ return -EFAULT;
+
+ nval64.raw = oval64.raw;
+
+ if (futex_on_lo) {
+ oval64.lo_futex.val = oldval;
+ nval64.lo_futex.val = newval;
+ } else {
+ oval64.hi_futex.val = oldval;
+ nval64.hi_futex.val = newval;
+ }
+
+ orig64.raw = oval64.raw;
+
+ if (__lsui_cmpxchg64(uaddr64, &oval64.raw, nval64.raw))
+ return -EFAULT;
+
+ if (futex_on_lo) {
+ oldval = oval64.lo_futex.val;
+ other = oval64.lo_futex.other;
+ orig_other = orig64.lo_futex.other;
+ } else {
+ oldval = oval64.hi_futex.val;
+ other = oval64.hi_futex.other;
+ orig_other = orig64.hi_futex.other;
+ }
+
+ if (other == orig_other) {
+ ret = 0;
+ break;
+ }
+ }
+
+ if (!ret)
+ *oval = oldval;
+
+ return ret;
+}
+
+static __always_inline int
+__lsui_futex_atomic_and(int oparg, u32 __user *uaddr, int *oval)
+{
+ return __lsui_futex_atomic_andnot(~oparg, uaddr, oval);
+}
+
+static __always_inline int
+__lsui_futex_atomic_eor(int oparg, u32 __user *uaddr, int *oval)
+{
+ u32 oldval, newval, val;
+ int ret, i;
+
+ /*
+ * there are no ldteor/stteor instructions...
+ */
+ for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
+ if (get_user(oldval, uaddr))
+ return -EFAULT;
+
+ newval = oldval ^ oparg;
+
+ ret = __lsui_cmpxchg32(uaddr, oldval, newval, &val);
+ if (ret)
+ return ret;
+
+ if (val == oldval) {
+ *oval = val;
+ return 0;
+ }
+ }
+
+ return -EAGAIN;
+}
+
+static __always_inline int
+__lsui_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
+{
+ return __lsui_cmpxchg32(uaddr, oldval, newval, oval);
+}
+
+#define __lsui_llsc_body(op, ...) \
+({ \
+ alternative_has_cap_likely(ARM64_HAS_LSUI) ? \
+ __lsui_##op(__VA_ARGS__) : __llsc_##op(__VA_ARGS__); \
+})
+
+#else /* CONFIG_AS_HAS_LSUI */
+
+#define __lsui_llsc_body(op, ...) __llsc_##op(__VA_ARGS__)
+
+#endif /* CONFIG_AS_HAS_LSUI */
+
+
#define FUTEX_ATOMIC_OP(op) \
static __always_inline int \
__futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
{ \
- return __llsc_futex_atomic_##op(oparg, uaddr, oval); \
+ return __lsui_llsc_body(futex_atomic_##op, oparg, uaddr, oval); \
}
FUTEX_ATOMIC_OP(add)
@@ -102,7 +278,7 @@ FUTEX_ATOMIC_OP(set)
static __always_inline int
__futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
{
- return __llsc_futex_cmpxchg(uaddr, oldval, newval, oval);
+ return __lsui_llsc_body(futex_cmpxchg, uaddr, oldval, newval, oval);
}
static inline int
--
* [PATCH v11 RESEND 7/9] arm64: separate common LSUI definitions into lsui.h
2025-12-14 11:22 [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun
` (5 preceding siblings ...)
2025-12-14 11:22 ` [PATCH v11 RESEND 6/9] arm64: futex: support futex with FEAT_LSUI Yeoreum Yun
@ 2025-12-14 11:22 ` Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 8/9] arm64: armv8_deprecated: convert user_swpX to inline function Yeoreum Yun
` (2 subsequent siblings)
9 siblings, 0 replies; 37+ messages in thread
From: Yeoreum Yun @ 2025-12-14 11:22 UTC (permalink / raw)
To: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah,
mark.rutland, arnd
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
Yeoreum Yun
This patch prepares for applying LSUI to the armv8_deprecated
SWP instruction.
Some LSUI-related definitions can be reused by armv8_deprecated.c,
so move the common definitions into a separate header file, lsui.h.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
arch/arm64/include/asm/futex.h | 15 +--------------
arch/arm64/include/asm/lsui.h | 25 +++++++++++++++++++++++++
2 files changed, 26 insertions(+), 14 deletions(-)
create mode 100644 arch/arm64/include/asm/lsui.h
diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index 6778ff7e1c0e..ca9487219102 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -9,9 +9,8 @@
#include <linux/uaccess.h>
#include <linux/stringify.h>
-#include <asm/alternative.h>
-#include <asm/alternative-macros.h>
#include <asm/errno.h>
+#include <asm/lsui.h>
#define FUTEX_MAX_LOOPS 128 /* What's the largest number you can think of? */
@@ -97,8 +96,6 @@ __llsc_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
* operation.
*/
-#define __LSUI_PREAMBLE ".arch_extension lsui\n"
-
#define LSUI_FUTEX_ATOMIC_OP(op, asm_op, mb) \
static __always_inline int \
__lsui_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
@@ -249,16 +246,6 @@ __lsui_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
return __lsui_cmpxchg32(uaddr, oldval, newval, oval);
}
-#define __lsui_llsc_body(op, ...) \
-({ \
- alternative_has_cap_likely(ARM64_HAS_LSUI) ? \
- __lsui_##op(__VA_ARGS__) : __llsc_##op(__VA_ARGS__); \
-})
-
-#else /* CONFIG_AS_HAS_LSUI */
-
-#define __lsui_llsc_body(op, ...) __llsc_##op(__VA_ARGS__)
-
#endif /* CONFIG_AS_HAS_LSUI */
diff --git a/arch/arm64/include/asm/lsui.h b/arch/arm64/include/asm/lsui.h
new file mode 100644
index 000000000000..1a2ad408a47b
--- /dev/null
+++ b/arch/arm64/include/asm/lsui.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_LSUI_H
+#define __ASM_LSUI_H
+
+#ifdef CONFIG_AS_HAS_LSUI
+
+#define __LSUI_PREAMBLE ".arch_extension lsui\n"
+
+#include <linux/stringify.h>
+#include <asm/alternative.h>
+#include <asm/alternative-macros.h>
+#include <asm/cpucaps.h>
+
+#define __lsui_llsc_body(op, ...) \
+({ \
+ alternative_has_cap_likely(ARM64_HAS_LSUI) ? \
+ __lsui_##op(__VA_ARGS__) : __llsc_##op(__VA_ARGS__); \
+})
+
+#else /* CONFIG_AS_HAS_LSUI */
+
+#define __lsui_llsc_body(op, ...) __llsc_##op(__VA_ARGS__)
+
+#endif /* CONFIG_AS_HAS_LSUI */
+#endif /* __ASM_LSUI_H */
--
* [PATCH v11 RESEND 8/9] arm64: armv8_deprecated: convert user_swpX to inline function
2025-12-14 11:22 [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun
` (6 preceding siblings ...)
2025-12-14 11:22 ` [PATCH v11 RESEND 7/9] arm64: separate common LSUI definitions into lsui.h Yeoreum Yun
@ 2025-12-14 11:22 ` Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation Yeoreum Yun
2025-12-31 10:07 ` [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun
9 siblings, 0 replies; 37+ messages in thread
From: Yeoreum Yun @ 2025-12-14 11:22 UTC (permalink / raw)
To: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah,
mark.rutland, arnd
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
Yeoreum Yun
This is a preparatory patch for applying FEAT_LSUI to the user_swpX
operations.
For this, convert the user_swpX macros into inline functions.
No functional change.
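The macro-to-inline conversion pattern can be sketched generically
(hypothetical names below; the series generates the
__llsc_user_swp{,b}_asm variants this way):

```c
/*
 * Generate type-checked inline functions from one macro, instead of a
 * statement macro that writes through caller-supplied variables.
 * Token pasting (##) stamps out one function per suffix, as
 * LLSC_USER_SWPX() does for the "" and "b" swp variants.
 */
#define DEFINE_SWAP(suffix, type)                                      \
static inline int do_swap_##suffix(type *data, type *mem)              \
{                                                                      \
	type old = *mem;                                               \
	*mem = *data;                                                  \
	*data = old;	/* caller receives the previous value */       \
	return 0;	/* the kernel version returns 0 or -EFAULT */  \
}

DEFINE_SWAP(word, unsigned int)
DEFINE_SWAP(byte, unsigned char)
```

Each generated function has a real prototype, so argument types and the
return value are checked by the compiler, which the old statement macro
could not offer.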
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
arch/arm64/kernel/armv8_deprecated.c | 38 +++++++++++++++++-----------
1 file changed, 23 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index e737c6295ec7..d15e35f1075c 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -93,13 +93,18 @@ static unsigned int __maybe_unused aarch32_check_condition(u32 opcode, u32 psr)
/* Arbitrary constant to ensure forward-progress of the LL/SC loop */
#define __SWP_LL_SC_LOOPS 4
-#define __user_swpX_asm(data, addr, res, temp, temp2, B) \
-do { \
+#define LLSC_USER_SWPX(B) \
+static __always_inline int \
+__llsc_user_swp##B##_asm(unsigned int *data, unsigned int addr) \
+{ \
+ int err = 0; \
+ unsigned int temp, temp2; \
+ \
uaccess_enable_privileged(); \
__asm__ __volatile__( \
" mov %w3, %w6\n" \
- "0: ldxr"B" %w2, [%4]\n" \
- "1: stxr"B" %w0, %w1, [%4]\n" \
+ "0: ldxr"#B" %w2, [%4]\n" \
+ "1: stxr"#B" %w0, %w1, [%4]\n" \
" cbz %w0, 2f\n" \
" sub %w3, %w3, #1\n" \
" cbnz %w3, 0b\n" \
@@ -110,17 +115,22 @@ do { \
"3:\n" \
_ASM_EXTABLE_UACCESS_ERR(0b, 3b, %w0) \
_ASM_EXTABLE_UACCESS_ERR(1b, 3b, %w0) \
- : "=&r" (res), "+r" (data), "=&r" (temp), "=&r" (temp2) \
+ : "=&r" (err), "+r" (*data), "=&r" (temp), "=&r" (temp2)\
: "r" ((unsigned long)addr), "i" (-EAGAIN), \
"i" (__SWP_LL_SC_LOOPS) \
: "memory"); \
uaccess_disable_privileged(); \
-} while (0)
+ \
+ return err; \
+}
+
+LLSC_USER_SWPX()
+LLSC_USER_SWPX(b)
-#define __user_swp_asm(data, addr, res, temp, temp2) \
- __user_swpX_asm(data, addr, res, temp, temp2, "")
-#define __user_swpb_asm(data, addr, res, temp, temp2) \
- __user_swpX_asm(data, addr, res, temp, temp2, "b")
+#define __user_swp_asm(data, addr) \
+ __llsc_user_swp_asm(data, addr)
+#define __user_swpb_asm(data, addr) \
+ __llsc_user_swpb_asm(data, addr)
/*
* Bit 22 of the instruction encoding distinguishes between
@@ -131,7 +141,7 @@ do { \
static int emulate_swpX(unsigned int address, unsigned int *data,
unsigned int type)
{
- unsigned int res = 0;
+ unsigned int res;
if ((type != TYPE_SWPB) && (address & 0x3)) {
/* SWP to unaligned address not permitted */
@@ -140,12 +150,10 @@ static int emulate_swpX(unsigned int address, unsigned int *data,
}
while (1) {
- unsigned long temp, temp2;
-
if (type == TYPE_SWPB)
- __user_swpb_asm(*data, address, res, temp, temp2);
+ res = __user_swpb_asm(data, address);
else
- __user_swp_asm(*data, address, res, temp, temp2);
+ res = __user_swp_asm(data, address);
if (likely(res != -EAGAIN) || signal_pending(current))
break;
--
LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
^ permalink raw reply related [flat|nested] 37+ messages in thread
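The LLSC_USER_SWPX() helper above bounds its ldxr/stxr retry loop by __SWP_LL_SC_LOOPS and bails out with -EAGAIN so the caller can check for pending signals before retrying. A rough plain-C model of that control flow (illustrative only, not the kernel code; a C11 compare-exchange stands in for the exclusive load/store pair):

```c
#include <assert.h>
#include <errno.h>
#include <stdatomic.h>

/* Arbitrary constant to ensure forward-progress, as in the patch */
#define SWP_LL_SC_LOOPS	4

/*
 * Model of the bounded swap loop: attempt the swap a few times, then
 * return -EAGAIN so the caller can check signal_pending() and retry.
 */
static int user_swp_model(_Atomic unsigned int *addr, unsigned int *data)
{
	unsigned int newval = *data;

	for (int i = 0; i < SWP_LL_SC_LOOPS; i++) {
		unsigned int old = atomic_load(addr);	/* "ldxr" */
		/* a failed compare-exchange models a failed "stxr" */
		if (atomic_compare_exchange_strong(addr, &old, newval)) {
			*data = old;	/* SWP returns the previous value */
			return 0;
		}
	}
	return -EAGAIN;
}
```

The caller-side loop in emulate_swpX() then simply retries the whole operation while the result is -EAGAIN and no signal is pending.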
* [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2025-12-14 11:22 [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun
` (7 preceding siblings ...)
2025-12-14 11:22 ` [PATCH v11 RESEND 8/9] arm64: armv8_deprecated: convert user_swpX to inline function Yeoreum Yun
@ 2025-12-14 11:22 ` Yeoreum Yun
2025-12-15 9:33 ` Marc Zyngier
2025-12-31 10:07 ` [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun
9 siblings, 1 reply; 37+ messages in thread
From: Yeoreum Yun @ 2025-12-14 11:22 UTC (permalink / raw)
To: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah,
mark.rutland, arnd
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
Yeoreum Yun
Apply the FEAT_LSUI instruction to emulate the deprecated swpX
instruction, so that toggling of the PSTATE.PAN bit can be removed when
LSUI-related instructions are used.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
arch/arm64/kernel/armv8_deprecated.c | 77 +++++++++++++++++++++++++---
1 file changed, 71 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index d15e35f1075c..b8e6d71f766d 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -13,6 +13,7 @@
#include <linux/uaccess.h>
#include <asm/cpufeature.h>
+#include <asm/lsui.h>
#include <asm/insn.h>
#include <asm/sysreg.h>
#include <asm/system_misc.h>
@@ -86,13 +87,77 @@ static unsigned int __maybe_unused aarch32_check_condition(u32 opcode, u32 psr)
* Rn = address
*/
+/* Arbitrary constant to ensure forward-progress of the loop */
+#define __SWP_LOOPS 4
+
+#ifdef CONFIG_AS_HAS_LSUI
+static __always_inline int
+__lsui_user_swp_asm(unsigned int *data, unsigned int addr)
+{
+ int err = 0;
+ unsigned int temp;
+
+ asm volatile("// __lsui_user_swp_asm\n"
+ __LSUI_PREAMBLE
+ "1: swpt %w1, %w2, [%3]\n"
+ " mov %w1, %w2\n"
+ "2:\n"
+ _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
+ : "+r" (err), "+r" (*data), "=&r" (temp)
+ : "r" ((unsigned long)addr)
+ : "memory");
+
+ return err;
+}
+
+static __always_inline int
+__lsui_user_swpb_asm(unsigned int *data, unsigned int addr)
+{
+ u8 i, idx;
+ int err = -EAGAIN;
+ u64 __user *addr_al;
+ u64 oldval;
+ union {
+ u64 var;
+ u8 raw[sizeof(u64)];
+ } newval, curval;
+
+ idx = addr & (sizeof(u64) - 1);
+ addr_al = (u64 __user *)ALIGN_DOWN(addr, sizeof(u64));
+
+ for (i = 0; i < __SWP_LOOPS; i++) {
+ if (get_user(oldval, addr_al))
+ return -EFAULT;
+
+ curval.var = newval.var = oldval;
+ newval.raw[idx] = *data;
+
+ asm volatile("// __lsui_user_swpb_asm\n"
+ __LSUI_PREAMBLE
+ "1: cast %x2, %x3, %1\n"
+ "2:\n"
+ _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
+ : "+r" (err), "+Q" (*addr_al), "+r" (curval.var)
+ : "r" (newval.var)
+ : "memory");
+
+ if (curval.var == oldval) {
+ err = 0;
+ break;
+ }
+ }
+
+ if (!err)
+ *data = curval.raw[idx];
+
+ return err;
+}
+#endif /* CONFIG_AS_HAS_LSUI */
+
/*
* Error-checking SWP macros implemented using ldxr{b}/stxr{b}
*/
-/* Arbitrary constant to ensure forward-progress of the LL/SC loop */
-#define __SWP_LL_SC_LOOPS 4
-
#define LLSC_USER_SWPX(B) \
static __always_inline int \
__llsc_user_swp##B##_asm(unsigned int *data, unsigned int addr) \
@@ -117,7 +182,7 @@ __llsc_user_swp##B##_asm(unsigned int *data, unsigned int addr) \
_ASM_EXTABLE_UACCESS_ERR(1b, 3b, %w0) \
: "=&r" (err), "+r" (*data), "=&r" (temp), "=&r" (temp2)\
: "r" ((unsigned long)addr), "i" (-EAGAIN), \
- "i" (__SWP_LL_SC_LOOPS) \
+ "i" (__SWP_LOOPS) \
: "memory"); \
uaccess_disable_privileged(); \
\
@@ -128,9 +193,9 @@ LLSC_USER_SWPX()
LLSC_USER_SWPX(b)
#define __user_swp_asm(data, addr) \
- __llsc_user_swp_asm(data, addr)
+ __lsui_llsc_body(user_swp_asm, data, addr)
#define __user_swpb_asm(data, addr) \
- __llsc_user_swpb_asm(data, addr)
+ __lsui_llsc_body(user_swpb_asm, data, addr)
/*
* Bit 22 of the instruction encoding distinguishes between
--
^ permalink raw reply related [flat|nested] 37+ messages in thread
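The __lsui_user_swpb_asm() helper above emulates a byte swap with the unprivileged CAST instruction operating on the aligned 64-bit word that contains the byte. A rough little-endian plain-C model of the same idea (illustrative only, not the kernel code; a C11 compare-exchange on ordinary memory stands in for CAST on user memory):

```c
#include <assert.h>
#include <errno.h>
#include <stdatomic.h>
#include <stdint.h>

union dword {
	uint64_t var;
	uint8_t raw[sizeof(uint64_t)];
};

/*
 * Model of the byte swap: CAS the containing aligned doubleword with
 * only the target byte changed, retrying if a neighbouring byte raced.
 */
static int user_swpb_model(uint8_t *addr, uint8_t *data)
{
	uintptr_t a = (uintptr_t)addr;
	uint8_t idx = a & (sizeof(uint64_t) - 1);	/* little-endian byte index */
	_Atomic uint64_t *addr_al =
		(_Atomic uint64_t *)(a & ~(uintptr_t)(sizeof(uint64_t) - 1));

	for (int i = 0; i < 4; i++) {
		uint64_t oldval = atomic_load(addr_al);	/* get_user() */
		union dword newval = { .var = oldval };

		newval.raw[idx] = *data;	/* splice in the new byte */
		if (atomic_compare_exchange_strong(addr_al, &oldval,
						   newval.var)) {
			union dword cur = { .var = oldval };

			*data = cur.raw[idx];	/* return the old byte */
			return 0;
		}
		/* a neighbouring byte changed under us: retry */
	}
	return -EAGAIN;
}
```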
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2025-12-14 11:22 ` [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation Yeoreum Yun
@ 2025-12-15 9:33 ` Marc Zyngier
2025-12-15 9:56 ` Yeoreum Yun
0 siblings, 1 reply; 37+ messages in thread
From: Marc Zyngier @ 2025-12-15 9:33 UTC (permalink / raw)
To: Yeoreum Yun
Cc: catalin.marinas, will, broonie, oliver.upton, miko.lenczewski,
kevin.brodsky, ardb, suzuki.poulose, lpieralisi, yangyicong,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
On Sun, 14 Dec 2025 11:22:48 +0000,
Yeoreum Yun <yeoreum.yun@arm.com> wrote:
>
> Apply the FEAT_LSUI instruction to emulate the deprecated swpX
> instruction, so that toggling of the PSTATE.PAN bit can be removed when
> LSUI-related instructions are used.
>
> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
It really begs the question: what are the odds of ever seeing a CPU
that implements both LSUI and AArch32?
This seems extremely unlikely to me.
M.
--
Without deviation from the norm, progress is not possible.
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2025-12-15 9:33 ` Marc Zyngier
@ 2025-12-15 9:56 ` Yeoreum Yun
2026-01-19 15:34 ` Will Deacon
0 siblings, 1 reply; 37+ messages in thread
From: Yeoreum Yun @ 2025-12-15 9:56 UTC (permalink / raw)
To: Marc Zyngier
Cc: catalin.marinas, will, broonie, oliver.upton, miko.lenczewski,
kevin.brodsky, ardb, suzuki.poulose, lpieralisi, yangyicong,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
Hi,
> On Sun, 14 Dec 2025 11:22:48 +0000,
> Yeoreum Yun <yeoreum.yun@arm.com> wrote:
> >
> > Apply the FEAT_LSUI instruction to emulate the deprecated swpX
> > instruction, so that toggling of the PSTATE.PAN bit can be removed when
> > LSUI-related instructions are used.
> >
> > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
>
> It really begs the question: what are the odds of ever seeing a CPU
> that implements both LSUI and AArch32?
>
> This seems extremely unlikely to me.
Well, I'm not sure how many CPUs will have both the
ID_AA64PFR0_EL1.EL0 field set to 0b0010 and FEAT_LSUI (except the FVP
currently) -- at least among the CPUs I've seen, most of them set
ID_AA64PFR0_EL1.EL0 to 0b0010.
If this seems useless to you, I don't have any strong opinion on
whether to drop the patches related to the deprecated swp instruction
(patches 8-9 only) or not.
(But I'd like to leave this decision to the maintainers...)
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.
--
Sincerely,
Yeoreum Yun
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v11 RESEND 0/9] support FEAT_LSUI
2025-12-14 11:22 [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun
` (8 preceding siblings ...)
2025-12-14 11:22 ` [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation Yeoreum Yun
@ 2025-12-31 10:07 ` Yeoreum Yun
9 siblings, 0 replies; 37+ messages in thread
From: Yeoreum Yun @ 2025-12-31 10:07 UTC (permalink / raw)
To: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah,
mark.rutland, arnd
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
Gentle ping in case this was forgotten.
On Sun, Dec 14, 2025 at 11:22:39AM +0000, Yeoreum Yun wrote:
> Since Armv9.6, FEAT_LSUI supplies load/store instructions that allow
> privileged code to access user memory without clearing the
> PSTATE.PAN bit.
>
> This patchset supports FEAT_LSUI and applies it to the futex atomic
> operations and user_swpX emulation, replacing the ldxr/st{l}xr pair
> implementation, which clears the PSTATE.PAN bit, with the corresponding
> unprivileged load/store atomic operations that leave PSTATE.PAN set.
>
> This patchset is based on v6.19-rc1
>
>
> Patch Sequences
> ================
>
> Patch #1 adds cpufeature for FEAT_LSUI
>
> Patch #2-#3 expose FEAT_LSUI to guest
>
> Patch #4 adds Kconfig for FEAT_LSUI
>
> Patch #5-#6 support futex atomic-op with FEAT_LSUI
>
> Patch #7-#9 support user_swpX emulation with FEAT_LSUI
>
>
> Patch History
> ==============
> from v10 to v11:
> - rebase to v6.19-rc1
> - use cast instruction to emulate deprecated swpb instruction
> - https://lore.kernel.org/all/20251103163224.818353-1-yeoreum.yun@arm.com/
>
> from v9 to v10:
> - apply FEAT_LSUI to user_swpX emulation.
> - add test coverage for LSUI bit in ID_AA64ISAR3_EL1
> - rebase to v6.18-rc4
> - https://lore.kernel.org/all/20250922102244.2068414-1-yeoreum.yun@arm.com/
>
> from v8 to v9:
> - refactoring __lsui_cmpxchg64()
> - rebase to v6.17-rc7
> - https://lore.kernel.org/all/20250917110838.917281-1-yeoreum.yun@arm.com/
>
> from v7 to v8:
> - implements futex_atomic_eor() and futex_atomic_cmpxchg() with casalt
> with C helper.
> - Drop the small optimisation on ll/sc futex_atomic_set operation.
> - modify some commit message.
> - https://lore.kernel.org/all/20250816151929.197589-1-yeoreum.yun@arm.com/
>
> from v6 to v7:
> - wrap FEAT_LSUI with CONFIG_AS_HAS_LSUI in cpufeature
> - remove unnecessary addition of indentation.
> - remove unnecessary mte_tco_enable()/disable() on LSUI operation.
> - https://lore.kernel.org/all/20250811163635.1562145-1-yeoreum.yun@arm.com/
>
> from v5 to v6:
> - rebase to v6.17-rc1
> - https://lore.kernel.org/all/20250722121956.1509403-1-yeoreum.yun@arm.com/
>
> from v4 to v5:
> - remove futex_ll_sc.h futext_lsui and lsui.h and move them to futex.h
> - reorganize the patches.
> - https://lore.kernel.org/all/20250721083618.2743569-1-yeoreum.yun@arm.com/
>
> from v3 to v4:
> - rebase to v6.16-rc7
> - modify some patch's title.
> - https://lore.kernel.org/all/20250617183635.1266015-1-yeoreum.yun@arm.com/
>
> from v2 to v3:
> - expose FEAT_LSUI to guest
> - add help section for LSUI Kconfig
> - https://lore.kernel.org/all/20250611151154.46362-1-yeoreum.yun@arm.com/
>
> from v1 to v2:
> - remove empty v9.6 menu entry
> - locate HAS_LSUI in cpucaps in order
> - https://lore.kernel.org/all/20250611104916.10636-1-yeoreum.yun@arm.com/
>
>
> Yeoreum Yun (9):
> arm64: cpufeature: add FEAT_LSUI
> KVM: arm64: expose FEAT_LSUI to guest
> KVM: arm64: kselftest: set_id_regs: add test for FEAT_LSUI
> arm64: Kconfig: Detect toolchain support for LSUI
> arm64: futex: refactor futex atomic operation
> arm64: futex: support futex with FEAT_LSUI
> arm64: separate common LSUI definitions into lsui.h
> arm64: armv8_deprecated: convert user_swpX to inline function
> arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
>
> arch/arm64/Kconfig | 5 +
> arch/arm64/include/asm/futex.h | 291 +++++++++++++++---
> arch/arm64/include/asm/lsui.h | 25 ++
> arch/arm64/kernel/armv8_deprecated.c | 111 +++++--
> arch/arm64/kernel/cpufeature.c | 10 +
> arch/arm64/kvm/sys_regs.c | 3 +-
> arch/arm64/tools/cpucaps | 1 +
> .../testing/selftests/kvm/arm64/set_id_regs.c | 1 +
> 8 files changed, 381 insertions(+), 66 deletions(-)
> create mode 100644 arch/arm64/include/asm/lsui.h
>
>
> base-commit: 8f0b4cce4481fb22653697cced8d0d04027cb1e8
> --
>
--
Sincerely,
Yeoreum Yun
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2025-12-15 9:56 ` Yeoreum Yun
@ 2026-01-19 15:34 ` Will Deacon
2026-01-19 22:32 ` Yeoreum Yun
0 siblings, 1 reply; 37+ messages in thread
From: Will Deacon @ 2026-01-19 15:34 UTC (permalink / raw)
To: Yeoreum Yun
Cc: Marc Zyngier, catalin.marinas, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah,
mark.rutland, arnd, linux-arm-kernel, linux-kernel, kvmarm, kvm,
linux-kselftest
On Mon, Dec 15, 2025 at 09:56:04AM +0000, Yeoreum Yun wrote:
> Hi,
>
> > On Sun, 14 Dec 2025 11:22:48 +0000,
> > Yeoreum Yun <yeoreum.yun@arm.com> wrote:
> > >
> > > Apply the FEAT_LSUI instruction to emulate the deprecated swpX
> > > instruction, so that toggling of the PSTATE.PAN bit can be removed when
> > > LSUI-related instructions are used.
> > >
> > > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> >
> > It really begs the question: what are the odds of ever seeing a CPU
> > that implements both LSUI and AArch32?
> >
> > This seems extremely unlikely to me.
>
> Well, I'm not sure how many CPUs will have both the
> ID_AA64PFR0_EL1.EL0 field set to 0b0010 and FEAT_LSUI (except the FVP
> currently) -- at least among the CPUs I've seen, most of them set
> ID_AA64PFR0_EL1.EL0 to 0b0010.
Just to make sure I understand you, you're saying that you have seen
a real CPU that implements both 32-bit EL0 *and* FEAT_LSUI?
> If you this seems useless, I don't have any strong comments
> whether drop patches related to deprecated swp instruction parts
> (patch 8-9 only) or not.
> (But, I hope to pass this decision to maintaining perspective...)
I think it depends on whether or not the hardware exists. Marc thinks
that it's extremely unlikely whereas you appear to have seen some (but
please confirm).
Will
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v11 RESEND 4/9] arm64: Kconfig: Detect toolchain support for LSUI
2025-12-14 11:22 ` [PATCH v11 RESEND 4/9] arm64: Kconfig: Detect toolchain support for LSUI Yeoreum Yun
@ 2026-01-19 15:50 ` Will Deacon
2026-01-19 15:54 ` Mark Brown
0 siblings, 1 reply; 37+ messages in thread
From: Will Deacon @ 2026-01-19 15:50 UTC (permalink / raw)
To: Yeoreum Yun
Cc: catalin.marinas, maz, broonie, oliver.upton, miko.lenczewski,
kevin.brodsky, ardb, suzuki.poulose, lpieralisi, yangyicong,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
On Sun, Dec 14, 2025 at 11:22:43AM +0000, Yeoreum Yun wrote:
> Since Armv9.6, FEAT_LSUI supplies load/store instructions that allow
> privileged code to access user memory without clearing the
> PSTATE.PAN bit.
> It's enough to add only CONFIG_AS_HAS_LSUI because the code for LSUI uses
> individual `.arch_extension` entries.
>
> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> ---
> arch/arm64/Kconfig | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 93173f0a09c7..36e87a1a1b5c 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -2227,6 +2227,11 @@ config ARM64_GCS
>
> endmenu # "ARMv9.4 architectural features"
>
> +config AS_HAS_LSUI
> + def_bool $(as-instr,.arch_extension lsui)
> + help
> + Supported by LLVM 20+ and binutils 2.45+.
This is an internal Kconfig variable so please drop the help text.
Will
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v11 RESEND 4/9] arm64: Kconfig: Detect toolchain support for LSUI
2026-01-19 15:50 ` Will Deacon
@ 2026-01-19 15:54 ` Mark Brown
2026-01-20 11:35 ` Yeoreum Yun
0 siblings, 1 reply; 37+ messages in thread
From: Mark Brown @ 2026-01-19 15:54 UTC (permalink / raw)
To: Will Deacon
Cc: Yeoreum Yun, catalin.marinas, maz, oliver.upton, miko.lenczewski,
kevin.brodsky, ardb, suzuki.poulose, lpieralisi, yangyicong,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
On Mon, Jan 19, 2026 at 03:50:43PM +0000, Will Deacon wrote:
> On Sun, Dec 14, 2025 at 11:22:43AM +0000, Yeoreum Yun wrote:
> > +config AS_HAS_LSUI
> > + def_bool $(as-instr,.arch_extension lsui)
> > + help
> > + Supported by LLVM 20+ and binutils 2.45+.
> This is an internal Kconfig variable so please drop the help text.
It would be useful to keep the information about supported compilers as
a comment though (as is done for some of the other toolchain feature
tests).
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v11 RESEND 5/9] arm64: futex: refactor futex atomic operation
2025-12-14 11:22 ` [PATCH v11 RESEND 5/9] arm64: futex: refactor futex atomic operation Yeoreum Yun
@ 2026-01-19 15:57 ` Will Deacon
2026-01-19 22:19 ` Yeoreum Yun
0 siblings, 1 reply; 37+ messages in thread
From: Will Deacon @ 2026-01-19 15:57 UTC (permalink / raw)
To: Yeoreum Yun
Cc: catalin.marinas, maz, broonie, oliver.upton, miko.lenczewski,
kevin.brodsky, ardb, suzuki.poulose, lpieralisi, yangyicong,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
On Sun, Dec 14, 2025 at 11:22:44AM +0000, Yeoreum Yun wrote:
> Refactor the futex atomic operations that use the ll/sc method and
> clear PSTATE.PAN, in preparation for applying FEAT_LSUI to them.
>
> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> ---
> arch/arm64/include/asm/futex.h | 128 +++++++++++++++++++++------------
> 1 file changed, 82 insertions(+), 46 deletions(-)
>
> diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
> index bc06691d2062..f8cb674bdb3f 100644
> --- a/arch/arm64/include/asm/futex.h
> +++ b/arch/arm64/include/asm/futex.h
> @@ -7,17 +7,21 @@
>
> #include <linux/futex.h>
> #include <linux/uaccess.h>
> +#include <linux/stringify.h>
>
> #include <asm/errno.h>
>
> #define FUTEX_MAX_LOOPS 128 /* What's the largest number you can think of? */
>
> -#define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg) \
> -do { \
> +#define LLSC_FUTEX_ATOMIC_OP(op, insn) \
> +static __always_inline int \
> +__llsc_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
> +{ \
> unsigned int loops = FUTEX_MAX_LOOPS; \
> + int ret, oldval, tmp; \
> \
> uaccess_enable_privileged(); \
> - asm volatile( \
> + asm volatile("// __llsc_futex_atomic_" #op "\n" \
> " prfm pstl1strm, %2\n" \
> "1: ldxr %w1, %2\n" \
> insn "\n" \
> @@ -35,45 +39,103 @@ do { \
> : "r" (oparg), "Ir" (-EAGAIN) \
> : "memory"); \
> uaccess_disable_privileged(); \
> -} while (0)
> + \
> + if (!ret) \
> + *oval = oldval; \
> + \
> + return ret; \
> +}
> +
> +LLSC_FUTEX_ATOMIC_OP(add, "add %w3, %w1, %w5")
> +LLSC_FUTEX_ATOMIC_OP(or, "orr %w3, %w1, %w5")
> +LLSC_FUTEX_ATOMIC_OP(and, "and %w3, %w1, %w5")
> +LLSC_FUTEX_ATOMIC_OP(eor, "eor %w3, %w1, %w5")
> +LLSC_FUTEX_ATOMIC_OP(set, "mov %w3, %w5")
Since you're reworking this code, how about we take the opportunity to
use named arguments instead of the numbers?
Will
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v11 RESEND 6/9] arm64: futex: support futex with FEAT_LSUI
2025-12-14 11:22 ` [PATCH v11 RESEND 6/9] arm64: futex: support futex with FEAT_LSUI Yeoreum Yun
@ 2026-01-19 16:37 ` Will Deacon
2026-01-19 22:17 ` Yeoreum Yun
0 siblings, 1 reply; 37+ messages in thread
From: Will Deacon @ 2026-01-19 16:37 UTC (permalink / raw)
To: Yeoreum Yun
Cc: catalin.marinas, maz, broonie, oliver.upton, miko.lenczewski,
kevin.brodsky, ardb, suzuki.poulose, lpieralisi, yangyicong,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
On Sun, Dec 14, 2025 at 11:22:45AM +0000, Yeoreum Yun wrote:
> The current futex atomic operations are implemented with ll/sc
> instructions and by clearing PSTATE.PAN.
>
> Since Armv9.6, FEAT_LSUI supplies not only load/store instructions but
> also atomic operations for user memory access from the kernel, so it no
> longer needs to clear the PSTATE.PAN bit.
>
> With these instructions, some of the futex atomic operations no longer
> need to be implemented with a ldxr/stlxr pair; instead they can be
> implemented with a single atomic operation supplied by FEAT_LSUI.
>
> However, some of the futex atomic operations have no matching
> instruction, e.g. eor or cmpxchg at word size.
> For those operations, use cas{al}t to implement them.
>
> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> ---
> arch/arm64/include/asm/futex.h | 180 ++++++++++++++++++++++++++++++++-
> 1 file changed, 178 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
> index f8cb674bdb3f..6778ff7e1c0e 100644
> --- a/arch/arm64/include/asm/futex.h
> +++ b/arch/arm64/include/asm/futex.h
> @@ -9,6 +9,8 @@
> #include <linux/uaccess.h>
> #include <linux/stringify.h>
>
> +#include <asm/alternative.h>
> +#include <asm/alternative-macros.h>
> #include <asm/errno.h>
>
> #define FUTEX_MAX_LOOPS 128 /* What's the largest number you can think of? */
> @@ -86,11 +88,185 @@ __llsc_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> return ret;
> }
>
> +#ifdef CONFIG_AS_HAS_LSUI
> +
> +/*
> + * When the LSUI feature is present, the CPU also implements PAN, because
> + * FEAT_PAN has been mandatory since Armv8.1. Therefore, there is no need to
> + * call uaccess_ttbr0_enable()/uaccess_ttbr0_disable() around each LSUI
> + * operation.
> + */
I'd prefer not to rely on these sorts of properties because:
- CPU bugs happen all the time
- Virtualisation and idreg overrides mean illegal feature combinations
can show up
- The architects sometimes change their mind
So let's either drop the assumption that we have PAN if LSUI *or* actually
test that someplace during feature initialisation.
> +
> +#define __LSUI_PREAMBLE ".arch_extension lsui\n"
> +
> +#define LSUI_FUTEX_ATOMIC_OP(op, asm_op, mb) \
> +static __always_inline int \
> +__lsui_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
> +{ \
> + int ret = 0; \
> + int oldval; \
> + \
> + asm volatile("// __lsui_futex_atomic_" #op "\n" \
> + __LSUI_PREAMBLE \
> +"1: " #asm_op #mb " %w3, %w2, %1\n" \
What's the point in separating the barrier suffix from the rest of the
instruction mnemonic? All the callers use -AL.
> +"2:\n" \
> + _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0) \
> + : "+r" (ret), "+Q" (*uaddr), "=r" (oldval) \
> + : "r" (oparg) \
> + : "memory"); \
> + \
> + if (!ret) \
> + *oval = oldval; \
> + \
> + return ret; \
> +}
> +
> +LSUI_FUTEX_ATOMIC_OP(add, ldtadd, al)
> +LSUI_FUTEX_ATOMIC_OP(or, ldtset, al)
> +LSUI_FUTEX_ATOMIC_OP(andnot, ldtclr, al)
> +LSUI_FUTEX_ATOMIC_OP(set, swpt, al)
> +
> +static __always_inline int
> +__lsui_cmpxchg64(u64 __user *uaddr, u64 *oldval, u64 newval)
> +{
> + int ret = 0;
> +
> + asm volatile("// __lsui_cmpxchg64\n"
> + __LSUI_PREAMBLE
> +"1: casalt %x2, %x3, %1\n"
How bizarre, they changed the order of the AL and T compared to SWPTAL.
Fair enough...
Also, I don't think you need the 'x' prefix on the 64-bit variables.
> +"2:\n"
> + _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
> + : "+r" (ret), "+Q" (*uaddr), "+r" (*oldval)
> + : "r" (newval)
> + : "memory");
Don't you need to update *oldval here if the CAS didn't fault?
> +
> + return ret;
> +}
> +
> +static __always_inline int
> +__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> +{
> + u64 __user *uaddr64;
> + bool futex_on_lo;
> + int ret = -EAGAIN, i;
> + u32 other, orig_other;
> + union {
> + struct futex_on_lo {
> + u32 val;
> + u32 other;
> + } lo_futex;
> +
> + struct futex_on_hi {
> + u32 other;
> + u32 val;
> + } hi_futex;
> +
> + u64 raw;
> + } oval64, orig64, nval64;
> +
> + uaddr64 = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> + futex_on_lo = (IS_ALIGNED((unsigned long)uaddr, sizeof(u64)) ==
> + IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN));
Just make LSUI depend on !CPU_BIG_ENDIAN in Kconfig. The latter already
depends on BROKEN and so we'll probably drop it soon anyway. There's
certainly no need to care about it for new features and it should simplify
the code you have here if you can assume little-endian.
> +
> + for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
> + if (get_user(oval64.raw, uaddr64))
> + return -EFAULT;
Since oldval is passed to us as an argument, can we get away with a
32-bit get_user() here?
> +
> + nval64.raw = oval64.raw;
> +
> + if (futex_on_lo) {
> + oval64.lo_futex.val = oldval;
> + nval64.lo_futex.val = newval;
> + } else {
> + oval64.hi_futex.val = oldval;
> + nval64.hi_futex.val = newval;
> + }
> +
> + orig64.raw = oval64.raw;
> +
> + if (__lsui_cmpxchg64(uaddr64, &oval64.raw, nval64.raw))
> + return -EFAULT;
> +
> + if (futex_on_lo) {
> + oldval = oval64.lo_futex.val;
> + other = oval64.lo_futex.other;
> + orig_other = orig64.lo_futex.other;
> + } else {
> + oldval = oval64.hi_futex.val;
> + other = oval64.hi_futex.other;
> + orig_other = orig64.hi_futex.other;
> + }
> +
> + if (other == orig_other) {
> + ret = 0;
> + break;
> + }
> + }
> +
> + if (!ret)
> + *oval = oldval;
Shouldn't we set *oval to the value we got back from the CAS?
> +
> + return ret;
> +}
> +
> +static __always_inline int
> +__lsui_futex_atomic_and(int oparg, u32 __user *uaddr, int *oval)
> +{
> + return __lsui_futex_atomic_andnot(~oparg, uaddr, oval);
> Please add a comment about the bitwise negation of oparg here as we're undoing
the one from the caller.
> +}
> +
> +static __always_inline int
> +__lsui_futex_atomic_eor(int oparg, u32 __user *uaddr, int *oval)
> +{
> + u32 oldval, newval, val;
> + int ret, i;
> +
> + /*
> + * there are no ldteor/stteor instructions...
> + */
> + for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
> + if (get_user(oldval, uaddr))
> + return -EFAULT;
> +
> + newval = oldval ^ oparg;
> +
> + ret = __lsui_cmpxchg32(uaddr, oldval, newval, &val);
> + if (ret)
> + return ret;
> +
> + if (val == oldval) {
> + *oval = val;
> + return 0;
> + }
> + }
> +
> + return -EAGAIN;
> +}
> +
> +static __always_inline int
> +__lsui_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> +{
> + return __lsui_cmpxchg32(uaddr, oldval, newval, oval);
> +}
> +
> +#define __lsui_llsc_body(op, ...) \
> +({ \
> + alternative_has_cap_likely(ARM64_HAS_LSUI) ? \
This doesn't seem like it should be the "likely" case just yet?
Will
^ permalink raw reply [flat|nested] 37+ messages in thread
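The 32-bit-compare-exchange-via-64-bit-casalt scheme discussed above can be modeled in plain C. An illustrative little-endian sketch with the futex word assumed to be in the low half of the doubleword (a C11 compare-exchange stands in for casalt; this is not the kernel code):

```c
#include <assert.h>
#include <errno.h>
#include <stdatomic.h>
#include <stdint.h>

/*
 * Model of a 32-bit cmpxchg built from a 64-bit CAS on the containing
 * aligned doubleword.  The 64-bit CAS also compares the neighbouring
 * 32 bits, so retry (up to a budget) when only the neighbour changed.
 */
static int cmpxchg32_model(_Atomic uint64_t *addr64, uint32_t oldval,
			   uint32_t newval, uint32_t *oval)
{
	for (int i = 0; i < 128; i++) {
		uint64_t other = atomic_load(addr64) & 0xffffffff00000000ull;
		uint64_t expect = other | oldval;
		uint64_t desire = other | newval;

		atomic_compare_exchange_strong(addr64, &expect, desire);
		/* 'expect' now holds the value the CAS observed */
		if ((expect & 0xffffffff00000000ull) == other) {
			/* neighbour unchanged: the 32-bit result stands */
			*oval = (uint32_t)expect;
			return 0;
		}
		/* only the neighbouring half raced: try again */
	}
	return -EAGAIN;
}
```

On success the observed low half equals oldval by construction, which is why the patch can return either value, as discussed downthread.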
* Re: [PATCH v11 RESEND 6/9] arm64: futex: support futex with FEAT_LSUI
2026-01-19 16:37 ` Will Deacon
@ 2026-01-19 22:17 ` Yeoreum Yun
2026-01-20 15:44 ` Yeoreum Yun
2026-01-21 13:48 ` Will Deacon
0 siblings, 2 replies; 37+ messages in thread
From: Yeoreum Yun @ 2026-01-19 22:17 UTC (permalink / raw)
To: Will Deacon
Cc: catalin.marinas, maz, broonie, oliver.upton, miko.lenczewski,
kevin.brodsky, ardb, suzuki.poulose, lpieralisi, yangyicong,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
Hi Will,
> On Sun, Dec 14, 2025 at 11:22:45AM +0000, Yeoreum Yun wrote:
> > The current futex atomic operations are implemented with ll/sc
> > instructions and by clearing PSTATE.PAN.
> >
> > Since Armv9.6, FEAT_LSUI supplies not only load/store instructions but
> > also atomic operations for user memory access from the kernel, so it no
> > longer needs to clear the PSTATE.PAN bit.
> >
> > With these instructions, some of the futex atomic operations no longer
> > need to be implemented with a ldxr/stlxr pair; instead they can be
> > implemented with a single atomic operation supplied by FEAT_LSUI.
> >
> > However, some of the futex atomic operations have no matching
> > instruction, e.g. eor or cmpxchg at word size.
> >
> > For those operations, use cas{al}t to implement them.
> > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > ---
> > arch/arm64/include/asm/futex.h | 180 ++++++++++++++++++++++++++++++++-
> > 1 file changed, 178 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
> > index f8cb674bdb3f..6778ff7e1c0e 100644
> > --- a/arch/arm64/include/asm/futex.h
> > +++ b/arch/arm64/include/asm/futex.h
> > @@ -9,6 +9,8 @@
> > #include <linux/uaccess.h>
> > #include <linux/stringify.h>
> >
> > +#include <asm/alternative.h>
> > +#include <asm/alternative-macros.h>
> > #include <asm/errno.h>
> >
> > #define FUTEX_MAX_LOOPS 128 /* What's the largest number you can think of? */
> > @@ -86,11 +88,185 @@ __llsc_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> > return ret;
> > }
> >
> > +#ifdef CONFIG_AS_HAS_LSUI
> > +
> > +/*
> > + * When the LSUI feature is present, the CPU also implements PAN, because
> > + * FEAT_PAN has been mandatory since Armv8.1. Therefore, there is no need to
> > + * call uaccess_ttbr0_enable()/uaccess_ttbr0_disable() around each LSUI
> > + * operation.
> > + */
>
> I'd prefer not to rely on these sorts of properties because:
>
> - CPU bugs happen all the time
> - Virtualisation and idreg overrides mean illegal feature combinations
> can show up
> - The architects sometimes change their mind
>
> So let's either drop the assumption that we have PAN if LSUI *or* actually
> test that someplace during feature initialisation.
Thanks for the detailed explanation. I'll drop my silly assumption and
call uaccess_ttbr0_enable()/disable() then.
>
> > +
> > +#define __LSUI_PREAMBLE ".arch_extension lsui\n"
> > +
> > +#define LSUI_FUTEX_ATOMIC_OP(op, asm_op, mb) \
> > +static __always_inline int \
> > +__lsui_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
> > +{ \
> > + int ret = 0; \
> > + int oldval; \
> > + \
> > + asm volatile("// __lsui_futex_atomic_" #op "\n" \
> > + __LSUI_PREAMBLE \
> > +"1: " #asm_op #mb " %w3, %w2, %1\n" \
>
> What's the point in separating the barrier suffix from the rest of the
> instruction mnemonic? All the callers use -AL.
Agree. I'll remove this.
>
> > +"2:\n" \
> > + _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0) \
> > + : "+r" (ret), "+Q" (*uaddr), "=r" (oldval) \
> > + : "r" (oparg) \
> > + : "memory"); \
> > + \
> > + if (!ret) \
> > + *oval = oldval; \
> > + \
> > + return ret; \
> > +}
> > +
> > +LSUI_FUTEX_ATOMIC_OP(add, ldtadd, al)
> > +LSUI_FUTEX_ATOMIC_OP(or, ldtset, al)
> > +LSUI_FUTEX_ATOMIC_OP(andnot, ldtclr, al)
> > +LSUI_FUTEX_ATOMIC_OP(set, swpt, al)
> > +
> > +static __always_inline int
> > +__lsui_cmpxchg64(u64 __user *uaddr, u64 *oldval, u64 newval)
> > +{
> > + int ret = 0;
> > +
> > + asm volatile("// __lsui_cmpxchg64\n"
> > + __LSUI_PREAMBLE
> > +"1: casalt %x2, %x3, %1\n"
>
>
> How bizarre, they changed the order of the AL and T compared to SWPTAL.
> Fair enough...
>
> Also, I don't think you need the 'x' prefix on the 64-bit variables.
Right. I'll remove useless prefix.
>
> > +"2:\n"
> > + _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
> > + : "+r" (ret), "+Q" (*uaddr), "+r" (*oldval)
> > + : "r" (newval)
> > + : "memory");
>
> Don't you need to update *oldval here if the CAS didn't fault?
No. If the CAS doesn't fault, oldval is already updated by the instruction.
>
> > +
> > + return ret;
> > +}
> > +
> > +static __always_inline int
> > +__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> > +{
> > + u64 __user *uaddr64;
> > + bool futex_on_lo;
> > + int ret = -EAGAIN, i;
> > + u32 other, orig_other;
> > + union {
> > + struct futex_on_lo {
> > + u32 val;
> > + u32 other;
> > + } lo_futex;
> > +
> > + struct futex_on_hi {
> > + u32 other;
> > + u32 val;
> > + } hi_futex;
> > +
> > + u64 raw;
> > + } oval64, orig64, nval64;
> > +
> > + uaddr64 = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> > + futex_on_lo = (IS_ALIGNED((unsigned long)uaddr, sizeof(u64)) ==
> > + IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN));
>
> Just make LSUI depend on !CPU_BIG_ENDIAN in Kconfig. The latter already
> depends on BROKEN and so we'll probably drop it soon anyway. There's
> certainly no need to care about it for new features and it should simplify
> the code you have here if you can assume little-endian.
Thanks. Then I'll enable the LSUI feature only for !CPU_BIG_ENDIAN.
>
> > +
> > + for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
> > + if (get_user(oval64.raw, uaddr64))
> > + return -EFAULT;
>
> Since oldval is passed to us as an argument, can we get away with a
> 32-bit get_user() here?
It's not a problem, but is there any significant difference?
>
> > +
> > + nval64.raw = oval64.raw;
> > +
> > + if (futex_on_lo) {
> > + oval64.lo_futex.val = oldval;
> > + nval64.lo_futex.val = newval;
> > + } else {
> > + oval64.hi_futex.val = oldval;
> > + nval64.hi_futex.val = newval;
> > + }
> > +
> > + orig64.raw = oval64.raw;
> > +
> > + if (__lsui_cmpxchg64(uaddr64, &oval64.raw, nval64.raw))
> > + return -EFAULT;
> > +
> > + if (futex_on_lo) {
> > + oldval = oval64.lo_futex.val;
> > + other = oval64.lo_futex.other;
> > + orig_other = orig64.lo_futex.other;
> > + } else {
> > + oldval = oval64.hi_futex.val;
> > + other = oval64.hi_futex.other;
> > + orig_other = orig64.hi_futex.other;
> > + }
> > +
> > + if (other == orig_other) {
> > + ret = 0;
> > + break;
> > + }
> > + }
> > +
> > + if (!ret)
> > + *oval = oldval;
>
> Shouldn't we set *oval to the value we got back from the CAS?
Since it's a "success" case, the value returned by the CAS and oldval must be the same,
so it doesn't matter whether we use the value we got back from the CAS.
Otherwise, it returns an error and *oval doesn't matter to
futex_atomic_cmpxchg_inatomic().
>
> > +
> > + return ret;
> > +}
> > +
> > +static __always_inline int
> > +__lsui_futex_atomic_and(int oparg, u32 __user *uaddr, int *oval)
> > +{
> > + return __lsui_futex_atomic_andnot(~oparg, uaddr, oval);
>
> Please a comment about the bitwise negation of oparg here as we're undoing
> the one from the caller.
I see. Thanks!
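The double negation being discussed can be sketched in plain C. This is a model only (ldtclr itself is the atomic bit-clear instruction); it just shows why inverting oparg again yields a plain AND, since the two negations cancel.

```c
#include <assert.h>
#include <stdint.h>

/* What ldtclr computes: clear the bits set in oparg. */
static uint32_t atomic_andnot_model(uint32_t old, uint32_t oparg)
{
	return old & ~oparg;
}

/* and(oparg) == andnot(~oparg): undoing the caller's bitwise negation. */
static uint32_t atomic_and_model(uint32_t old, uint32_t oparg)
{
	return atomic_andnot_model(old, ~oparg);
}
```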
>
> > +}
> > +
> > +static __always_inline int
> > +__lsui_futex_atomic_eor(int oparg, u32 __user *uaddr, int *oval)
> > +{
> > + u32 oldval, newval, val;
> > + int ret, i;
> > +
> > + /*
> > + * there are no ldteor/stteor instructions...
> > + */
> > + for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
> > + if (get_user(oldval, uaddr))
> > + return -EFAULT;
> > +
> > + newval = oldval ^ oparg;
> > +
> > + ret = __lsui_cmpxchg32(uaddr, oldval, newval, &val);
> > + if (ret)
> > + return ret;
> > +
> > + if (val == oldval) {
> > + *oval = val;
> > + return 0;
> > + }
> > + }
> > +
> > + return -EAGAIN;
> > +}
> > +
> > +static __always_inline int
> > +__lsui_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> > +{
> > + return __lsui_cmpxchg32(uaddr, oldval, newval, oval);
> > +}
> > +
> > +#define __lsui_llsc_body(op, ...) \
> > +({ \
> > + alternative_has_cap_likely(ARM64_HAS_LSUI) ? \
>
> This doesn't seem like it should be the "likely" case just yet?
Okay. I'll change it to "unlikely".
--
Sincerely,
Yeoreum Yun
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v11 RESEND 5/9] arm64: futex: refactor futex atomic operation
2026-01-19 15:57 ` Will Deacon
@ 2026-01-19 22:19 ` Yeoreum Yun
0 siblings, 0 replies; 37+ messages in thread
From: Yeoreum Yun @ 2026-01-19 22:19 UTC (permalink / raw)
To: Will Deacon
Cc: catalin.marinas, maz, broonie, oliver.upton, miko.lenczewski,
kevin.brodsky, ardb, suzuki.poulose, lpieralisi, yangyicong,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
Hi Will,
> On Sun, Dec 14, 2025 at 11:22:44AM +0000, Yeoreum Yun wrote:
> > Refactor futex atomic operations using ll/sc method with
> > clearing PSTATE.PAN to prepare to apply FEAT_LSUI on them.
> >
> > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> > ---
> > arch/arm64/include/asm/futex.h | 128 +++++++++++++++++++++------------
> > 1 file changed, 82 insertions(+), 46 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
> > index bc06691d2062..f8cb674bdb3f 100644
> > --- a/arch/arm64/include/asm/futex.h
> > +++ b/arch/arm64/include/asm/futex.h
> > @@ -7,17 +7,21 @@
> >
> > #include <linux/futex.h>
> > #include <linux/uaccess.h>
> > +#include <linux/stringify.h>
> >
> > #include <asm/errno.h>
> >
> > #define FUTEX_MAX_LOOPS 128 /* What's the largest number you can think of? */
> >
> > -#define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg) \
> > -do { \
> > +#define LLSC_FUTEX_ATOMIC_OP(op, insn) \
> > +static __always_inline int \
> > +__llsc_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
> > +{ \
> > unsigned int loops = FUTEX_MAX_LOOPS; \
> > + int ret, oldval, tmp; \
> > \
> > uaccess_enable_privileged(); \
> > - asm volatile( \
> > + asm volatile("// __llsc_futex_atomic_" #op "\n" \
> > " prfm pstl1strm, %2\n" \
> > "1: ldxr %w1, %2\n" \
> > insn "\n" \
> > @@ -35,45 +39,103 @@ do { \
> > : "r" (oparg), "Ir" (-EAGAIN) \
> > : "memory"); \
> > uaccess_disable_privileged(); \
> > -} while (0)
> > + \
> > + if (!ret) \
> > + *oval = oldval; \
> > + \
> > + return ret; \
> > +}
> > +
> > +LLSC_FUTEX_ATOMIC_OP(add, "add %w3, %w1, %w5")
> > +LLSC_FUTEX_ATOMIC_OP(or, "orr %w3, %w1, %w5")
> > +LLSC_FUTEX_ATOMIC_OP(and, "and %w3, %w1, %w5")
> > +LLSC_FUTEX_ATOMIC_OP(eor, "eor %w3, %w1, %w5")
> > +LLSC_FUTEX_ATOMIC_OP(set, "mov %w3, %w5")
>
> Since you're reworking this code, how about we take the opportunity to
> use named arguments instead of the numbers?
Okay. Let me try this.
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2026-01-19 15:34 ` Will Deacon
@ 2026-01-19 22:32 ` Yeoreum Yun
2026-01-20 9:32 ` Yeoreum Yun
2026-01-20 9:46 ` Mark Rutland
0 siblings, 2 replies; 37+ messages in thread
From: Yeoreum Yun @ 2026-01-19 22:32 UTC (permalink / raw)
To: Will Deacon
Cc: Marc Zyngier, catalin.marinas, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah,
mark.rutland, arnd, linux-arm-kernel, linux-kernel, kvmarm, kvm,
linux-kselftest
Hi Will,
> On Mon, Dec 15, 2025 at 09:56:04AM +0000, Yeoreum Yun wrote:
> > Hi,
> >
> > > On Sun, 14 Dec 2025 11:22:48 +0000,
> > > Yeoreum Yun <yeoreum.yun@arm.com> wrote:
> > > >
> > > > Apply the FEAT_LSUI instruction to emulate the deprecated swpX
> > > > instruction, so that toggling of the PSTATE.PAN bit can be removed when
> > > > LSUI-related instructions are used.
> > > >
> > > > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > >
> > > It really begs the question: what are the odds of ever seeing a CPU
> > > that implements both LSUI and AArch32?
> > >
> > > This seems extremely unlikely to me.
> >
> > Well, I'm not sure how many CPU will have
> > both ID_AA64PFR0_EL1.EL0 bit as 0b0010 and FEAT_LSUI
> > (except FVP currently) -- at least the CPU what I saw,
> > most of them set ID_AA64PFR0_EL1.EL0 as 0b0010.
>
> Just to make sure I understand you, you're saying that you have seen
> a real CPU that implements both 32-bit EL0 *and* FEAT_LSUI?
>
> > If you this seems useless, I don't have any strong comments
> > whether drop patches related to deprecated swp instruction parts
> > (patch 8-9 only) or not.
> > (But, I hope to pass this decision to maintaining perspective...)
>
> I think it depends on whether or not the hardware exists. Marc thinks
> that it's extremely unlikely whereas you appear to have seen some (but
> please confirm).
>
What I meant was not a 32-bit CPU with LSUI, but a CPU that supports
32-bit EL0 compatibility (i.e. ID_AA64PFR0_EL1.EL0 = 0b0010).
My point was that if CPUs implementing LSUI do appear, most of them will likely
continue to support the existing 32-bit EL0 compatibility that
the majority of current CPUs already have.
For that reason, I think it also makes sense to apply LSUI to SWPx emulation.
That said, since there are currently no real CPUs that actually implement LSUI,
it is hard to say that this is particularly useful right now.
I do not have a strong opinion on whether this patch should be applied or
dropped at this point.
Personally, given that most CPUs released so far support 32-bit compatibility,
I expect that future CPUs with LSUI will also retain 32-bit compatibility.
However, it is difficult to say with certainty which approach
is correct at this time.
I would therefore like to defer the final decision on this to the maintainers.
Am I missing something?
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2026-01-19 22:32 ` Yeoreum Yun
@ 2026-01-20 9:32 ` Yeoreum Yun
2026-01-20 9:46 ` Mark Rutland
1 sibling, 0 replies; 37+ messages in thread
From: Yeoreum Yun @ 2026-01-20 9:32 UTC (permalink / raw)
To: Will Deacon
Cc: Marc Zyngier, catalin.marinas, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah,
mark.rutland, arnd, linux-arm-kernel, linux-kernel, kvmarm, kvm,
linux-kselftest
Hi,
> Hi Will,
>
> > On Mon, Dec 15, 2025 at 09:56:04AM +0000, Yeoreum Yun wrote:
> > > Hi,
> > >
> > > > On Sun, 14 Dec 2025 11:22:48 +0000,
> > > > Yeoreum Yun <yeoreum.yun@arm.com> wrote:
> > > > >
> > > > > Apply the FEAT_LSUI instruction to emulate the deprecated swpX
> > > > > instruction, so that toggling of the PSTATE.PAN bit can be removed when
> > > > > LSUI-related instructions are used.
> > > > >
> > > > > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > > >
> > > > It really begs the question: what are the odds of ever seeing a CPU
> > > > that implements both LSUI and AArch32?
> > > >
> > > > This seems extremely unlikely to me.
> > >
> > > Well, I'm not sure how many CPU will have
> > > both ID_AA64PFR0_EL1.EL0 bit as 0b0010 and FEAT_LSUI
> > > (except FVP currently) -- at least the CPU what I saw,
> > > most of them set ID_AA64PFR0_EL1.EL0 as 0b0010.
> >
> > Just to make sure I understand you, you're saying that you have seen
> > a real CPU that implements both 32-bit EL0 *and* FEAT_LSUI?
> >
> > > If you this seems useless, I don't have any strong comments
> > > whether drop patches related to deprecated swp instruction parts
> > > (patch 8-9 only) or not.
> > > (But, I hope to pass this decision to maintaining perspective...)
> >
> > I think it depends on whether or not the hardware exists. Marc thinks
> > that it's extremely unlikely whereas you appear to have seen some (but
> > please confirm).
> >
>
> What I meant was not a 32-bit CPU with LSUI, but a CPU that supports
> 32-bit EL0 compatibility (i.e. ID_AA64PFR0_EL1.EL0 = 0b0010).
> My point was that if CPUs implementing LSUI do appear, most of them will likely
> continue to support the existing 32-bit EL0 compatibility that
> the majority of current CPUs already have.
>
> For that reason, I think it also makes sense to apply LSUI to SWPx emulation.
> That said, since there are currently no real CPUs that actually implement LSUI,
> it is hard to say that this is particularly useful right now.
> I do not have a strong opinion on whether this patch should be applied or
> dropped at this point.
> Personally, given that most CPUs released so far support 32-bit compatibility,
> I expect that future CPUs with LSUI will also retain 32-bit compatibility.
> However, it is difficult to say with certainty which approach
> is correct at this time.
>
> I would therefore like to defer the final decision on this to the maintainers
>
> Am I missing something?
Ah, Marc's view was right. So I think the swpX changes can be dropped.
Thanks.
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2026-01-19 22:32 ` Yeoreum Yun
2026-01-20 9:32 ` Yeoreum Yun
@ 2026-01-20 9:46 ` Mark Rutland
2026-01-20 10:07 ` Yeoreum Yun
1 sibling, 1 reply; 37+ messages in thread
From: Mark Rutland @ 2026-01-20 9:46 UTC (permalink / raw)
To: Yeoreum Yun
Cc: Will Deacon, Marc Zyngier, catalin.marinas, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah, arnd,
linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
On Mon, Jan 19, 2026 at 10:32:11PM +0000, Yeoreum Yun wrote:
> > On Mon, Dec 15, 2025 at 09:56:04AM +0000, Yeoreum Yun wrote:
> > > > On Sun, 14 Dec 2025 11:22:48 +0000,
> > > > Yeoreum Yun <yeoreum.yun@arm.com> wrote:
> > > > >
> > > > > Apply the FEAT_LSUI instruction to emulate the deprecated swpX
> > > > > instruction, so that toggling of the PSTATE.PAN bit can be removed when
> > > > > LSUI-related instructions are used.
> > > > >
> > > > > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > > >
> > > > It really begs the question: what are the odds of ever seeing a CPU
> > > > that implements both LSUI and AArch32?
> > > >
> > > > This seems extremely unlikely to me.
> > >
> > > Well, I'm not sure how many CPU will have
> > > both ID_AA64PFR0_EL1.EL0 bit as 0b0010 and FEAT_LSUI
> > > (except FVP currently) -- at least the CPU what I saw,
> > > most of them set ID_AA64PFR0_EL1.EL0 as 0b0010.
> >
> > Just to make sure I understand you, you're saying that you have seen
> > a real CPU that implements both 32-bit EL0 *and* FEAT_LSUI?
> >
> > > If you this seems useless, I don't have any strong comments
> > > whether drop patches related to deprecated swp instruction parts
> > > (patch 8-9 only) or not.
> > > (But, I hope to pass this decision to maintaining perspective...)
> >
> > I think it depends on whether or not the hardware exists. Marc thinks
> > that it's extremely unlikely whereas you appear to have seen some (but
> > please confirm).
>
> What I meant was not a 32-bit CPU with LSUI, but a CPU that supports
> 32-bit EL0 compatibility (i.e. ID_AA64PFR0_EL1.EL0 = 0b0010).
> My point was that if CPUs implementing LSUI do appear, most of them will likely
> continue to support the existing 32-bit EL0 compatibility that
> the majority of current CPUs already have.
That doesn't really answer Will's question. Will asked:
Just to make sure I understand you, you're saying that you have seen a
real CPU that implements both 32-bit EL0 *and* FEAT_LSUI?
IIUC you have NOT seen any specific real CPU that supports this, and you
have been testing on an FVP AEM model (which can be configured to
support this combination of features). Can you please confirm?
I don't believe it's likely that we'll see hardware that supports
both FEAT_LSUI and AArch32 (at EL0).
Mark.
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2026-01-20 9:46 ` Mark Rutland
@ 2026-01-20 10:07 ` Yeoreum Yun
2026-01-20 11:50 ` Will Deacon
0 siblings, 1 reply; 37+ messages in thread
From: Yeoreum Yun @ 2026-01-20 10:07 UTC (permalink / raw)
To: Mark Rutland
Cc: Will Deacon, Marc Zyngier, catalin.marinas, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
yangyicong, scott, joey.gouly, yuzenghui, pbonzini, shuah, arnd,
linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
Hi Mark,
> On Mon, Jan 19, 2026 at 10:32:11PM +0000, Yeoreum Yun wrote:
> > > On Mon, Dec 15, 2025 at 09:56:04AM +0000, Yeoreum Yun wrote:
> > > > > On Sun, 14 Dec 2025 11:22:48 +0000,
> > > > > Yeoreum Yun <yeoreum.yun@arm.com> wrote:
> > > > > >
> > > > > > Apply the FEAT_LSUI instruction to emulate the deprecated swpX
> > > > > > instruction, so that toggling of the PSTATE.PAN bit can be removed when
> > > > > > LSUI-related instructions are used.
> > > > > >
> > > > > > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > > > >
> > > > > It really begs the question: what are the odds of ever seeing a CPU
> > > > > that implements both LSUI and AArch32?
> > > > >
> > > > > This seems extremely unlikely to me.
> > > >
> > > > Well, I'm not sure how many CPU will have
> > > > both ID_AA64PFR0_EL1.EL0 bit as 0b0010 and FEAT_LSUI
> > > > (except FVP currently) -- at least the CPU what I saw,
> > > > most of them set ID_AA64PFR0_EL1.EL0 as 0b0010.
> > >
> > > Just to make sure I understand you, you're saying that you have seen
> > > a real CPU that implements both 32-bit EL0 *and* FEAT_LSUI?
> > >
> > > > If you this seems useless, I don't have any strong comments
> > > > whether drop patches related to deprecated swp instruction parts
> > > > (patch 8-9 only) or not.
> > > > (But, I hope to pass this decision to maintaining perspective...)
> > >
> > > I think it depends on whether or not the hardware exists. Marc thinks
> > > that it's extremely unlikely whereas you appear to have seen some (but
> > > please confirm).
> >
> > What I meant was not a 32-bit CPU with LSUI, but a CPU that supports
> > 32-bit EL0 compatibility (i.e. ID_AA64PFR0_EL1.EL0 = 0b0010).
> > My point was that if CPUs implementing LSUI do appear, most of them will likely
> > continue to support the existing 32-bit EL0 compatibility that
> > the majority of current CPUs already have.
>
> That doesn't really answer Will's question. Will asked:
>
> Just to make sure I understand you, you're saying that you have seen a
> real CPU that implements both 32-bit EL0 *and* FEAT_LSUI?
>
> IIUC you have NOT seen any specific real CPU that supports this, and you
> have been testing on an FVP AEM model (which can be configured to
> support this combination of features). Can you please confirm?
>
> I don't beleive it's likely that we'll see hardware that supports
> both FEAT_LSUI and AArch32 (at EL0).
Yes, I've tested on the FVP model, and as my latest reply said,
I confirmed that Marc's and your view was right.
Thanks.
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v11 RESEND 4/9] arm64: Kconfig: Detect toolchain support for LSUI
2026-01-19 15:54 ` Mark Brown
@ 2026-01-20 11:35 ` Yeoreum Yun
0 siblings, 0 replies; 37+ messages in thread
From: Yeoreum Yun @ 2026-01-20 11:35 UTC (permalink / raw)
To: Mark Brown
Cc: Will Deacon, catalin.marinas, maz, oliver.upton, miko.lenczewski,
kevin.brodsky, ardb, suzuki.poulose, lpieralisi, yangyicong,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
> On Mon, Jan 19, 2026 at 03:50:43PM +0000, Will Deacon wrote:
> > On Sun, Dec 14, 2025 at 11:22:43AM +0000, Yeoreum Yun wrote:
>
> > > +config AS_HAS_LSUI
> > > + def_bool $(as-instr,.arch_extension lsui)
> > > + help
> > > + Supported by LLVM 20+ and binutils 2.45+.
>
> > This is an internal Kconfig variable so please drop the help text.
>
> It would be useful to keep the information about supported compilers as
> a comment though (as is done for some of the other toolchain feature
> tests).
With some comments from the other thread, what about this?
+menu "ARMv9.6 architectural features"
+
+config ARM64_LSUI
+ bool "Support Unprivileged Load Store Instructions (LSUI)"
+ default y
+ depends on AS_HAS_LSUI && !CPU_BIG_ENDIAN
+ help
+ The Unprivileged Load Store Instructions (LSUI) feature provides
+ variants of load/store instructions that access user-space memory
+ from the kernel without clearing the PSTATE.PAN bit.
+
+ This feature is supported by LLVM 20+ and binutils 2.45+.
+
+endmenu # "ARMv9.6 architectural features"
+
config AS_HAS_LSUI
def_bool $(as-instr,.arch_extension lsui)
- help
- Supported by LLVM 20+ and binutils 2.45+.
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2026-01-20 10:07 ` Yeoreum Yun
@ 2026-01-20 11:50 ` Will Deacon
2026-01-20 12:14 ` Yeoreum Yun
2026-01-20 17:59 ` Yeoreum Yun
0 siblings, 2 replies; 37+ messages in thread
From: Will Deacon @ 2026-01-20 11:50 UTC (permalink / raw)
To: Yeoreum Yun
Cc: Mark Rutland, Marc Zyngier, catalin.marinas, broonie,
oliver.upton, miko.lenczewski, kevin.brodsky, ardb,
suzuki.poulose, lpieralisi, yangyicong, scott, joey.gouly,
yuzenghui, pbonzini, shuah, arnd, linux-arm-kernel, linux-kernel,
kvmarm, kvm, linux-kselftest
On Tue, Jan 20, 2026 at 10:07:33AM +0000, Yeoreum Yun wrote:
> Hi Mark,
>
> > On Mon, Jan 19, 2026 at 10:32:11PM +0000, Yeoreum Yun wrote:
> > > > On Mon, Dec 15, 2025 at 09:56:04AM +0000, Yeoreum Yun wrote:
> > > > > > On Sun, 14 Dec 2025 11:22:48 +0000,
> > > > > > Yeoreum Yun <yeoreum.yun@arm.com> wrote:
> > > > > > >
> > > > > > > Apply the FEAT_LSUI instruction to emulate the deprecated swpX
> > > > > > > instruction, so that toggling of the PSTATE.PAN bit can be removed when
> > > > > > > LSUI-related instructions are used.
> > > > > > >
> > > > > > > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > > > > >
> > > > > > It really begs the question: what are the odds of ever seeing a CPU
> > > > > > that implements both LSUI and AArch32?
> > > > > >
> > > > > > This seems extremely unlikely to me.
> > > > >
> > > > > Well, I'm not sure how many CPU will have
> > > > > both ID_AA64PFR0_EL1.EL0 bit as 0b0010 and FEAT_LSUI
> > > > > (except FVP currently) -- at least the CPU what I saw,
> > > > > most of them set ID_AA64PFR0_EL1.EL0 as 0b0010.
> > > >
> > > > Just to make sure I understand you, you're saying that you have seen
> > > > a real CPU that implements both 32-bit EL0 *and* FEAT_LSUI?
> > > >
> > > > > If you this seems useless, I don't have any strong comments
> > > > > whether drop patches related to deprecated swp instruction parts
> > > > > (patch 8-9 only) or not.
> > > > > (But, I hope to pass this decision to maintaining perspective...)
> > > >
> > > > I think it depends on whether or not the hardware exists. Marc thinks
> > > > that it's extremely unlikely whereas you appear to have seen some (but
> > > > please confirm).
> > >
> > > What I meant was not a 32-bit CPU with LSUI, but a CPU that supports
> > > 32-bit EL0 compatibility (i.e. ID_AA64PFR0_EL1.EL0 = 0b0010).
> > > My point was that if CPUs implementing LSUI do appear, most of them will likely
> > > continue to support the existing 32-bit EL0 compatibility that
> > > the majority of current CPUs already have.
> >
> > That doesn't really answer Will's question. Will asked:
> >
> > Just to make sure I understand you, you're saying that you have seen a
> > real CPU that implements both 32-bit EL0 *and* FEAT_LSUI?
> >
> > IIUC you have NOT seen any specific real CPU that supports this, and you
> > have been testing on an FVP AEM model (which can be configured to
> > support this combination of features). Can you please confirm?
> >
> > I don't beleive it's likely that we'll see hardware that supports
> > both FEAT_LSUI and AArch32 (at EL0).
>
> Yes. I've tested in FVP model. and the latest of my reply said
> I confirmed that Marc's and your view was right.
It's probably still worth adding something to the cpufeature stuff to
WARN() if we spot both LSUI and support for AArch32.
Will
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2026-01-20 11:50 ` Will Deacon
@ 2026-01-20 12:14 ` Yeoreum Yun
2026-01-20 17:59 ` Yeoreum Yun
1 sibling, 0 replies; 37+ messages in thread
From: Yeoreum Yun @ 2026-01-20 12:14 UTC (permalink / raw)
To: Will Deacon
Cc: Mark Rutland, Marc Zyngier, catalin.marinas, broonie,
oliver.upton, miko.lenczewski, kevin.brodsky, ardb,
suzuki.poulose, lpieralisi, yangyicong, scott, joey.gouly,
yuzenghui, pbonzini, shuah, arnd, linux-arm-kernel, linux-kernel,
kvmarm, kvm, linux-kselftest
> On Tue, Jan 20, 2026 at 10:07:33AM +0000, Yeoreum Yun wrote:
> > Hi Mark,
> >
> > > On Mon, Jan 19, 2026 at 10:32:11PM +0000, Yeoreum Yun wrote:
> > > > > On Mon, Dec 15, 2025 at 09:56:04AM +0000, Yeoreum Yun wrote:
> > > > > > > On Sun, 14 Dec 2025 11:22:48 +0000,
> > > > > > > Yeoreum Yun <yeoreum.yun@arm.com> wrote:
> > > > > > > >
> > > > > > > > Apply the FEAT_LSUI instruction to emulate the deprecated swpX
> > > > > > > > instruction, so that toggling of the PSTATE.PAN bit can be removed when
> > > > > > > > LSUI-related instructions are used.
> > > > > > > >
> > > > > > > > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > > > > > >
> > > > > > > It really begs the question: what are the odds of ever seeing a CPU
> > > > > > > that implements both LSUI and AArch32?
> > > > > > >
> > > > > > > This seems extremely unlikely to me.
> > > > > >
> > > > > > Well, I'm not sure how many CPU will have
> > > > > > both ID_AA64PFR0_EL1.EL0 bit as 0b0010 and FEAT_LSUI
> > > > > > (except FVP currently) -- at least the CPU what I saw,
> > > > > > most of them set ID_AA64PFR0_EL1.EL0 as 0b0010.
> > > > >
> > > > > Just to make sure I understand you, you're saying that you have seen
> > > > > a real CPU that implements both 32-bit EL0 *and* FEAT_LSUI?
> > > > >
> > > > > > If you this seems useless, I don't have any strong comments
> > > > > > whether drop patches related to deprecated swp instruction parts
> > > > > > (patch 8-9 only) or not.
> > > > > > (But, I hope to pass this decision to maintaining perspective...)
> > > > >
> > > > > I think it depends on whether or not the hardware exists. Marc thinks
> > > > > that it's extremely unlikely whereas you appear to have seen some (but
> > > > > please confirm).
> > > >
> > > > What I meant was not a 32-bit CPU with LSUI, but a CPU that supports
> > > > 32-bit EL0 compatibility (i.e. ID_AA64PFR0_EL1.EL0 = 0b0010).
> > > > My point was that if CPUs implementing LSUI do appear, most of them will likely
> > > > continue to support the existing 32-bit EL0 compatibility that
> > > > the majority of current CPUs already have.
> > >
> > > That doesn't really answer Will's question. Will asked:
> > >
> > > Just to make sure I understand you, you're saying that you have seen a
> > > real CPU that implements both 32-bit EL0 *and* FEAT_LSUI?
> > >
> > > IIUC you have NOT seen any specific real CPU that supports this, and you
> > > have been testing on an FVP AEM model (which can be configured to
> > > support this combination of features). Can you please confirm?
> > >
> > > I don't beleive it's likely that we'll see hardware that supports
> > > both FEAT_LSUI and AArch32 (at EL0).
> >
> > Yes. I've tested in FVP model. and the latest of my reply said
> > I confirmed that Marc's and your view was right.
>
> It's probably still worth adding something to the cpufeature stuff to
> WARN() if we spot both LSUI and support for AArch32.
>
> Will
If we add such a check to the cpufeature code, I think it would also
be good to include the PAN check you mentioned in the other thread.
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v11 RESEND 6/9] arm64: futex: support futex with FEAT_LSUI
2026-01-19 22:17 ` Yeoreum Yun
@ 2026-01-20 15:44 ` Yeoreum Yun
2026-01-21 13:48 ` Will Deacon
1 sibling, 0 replies; 37+ messages in thread
From: Yeoreum Yun @ 2026-01-20 15:44 UTC (permalink / raw)
To: Will Deacon
Cc: catalin.marinas, maz, broonie, oliver.upton, miko.lenczewski,
kevin.brodsky, ardb, suzuki.poulose, lpieralisi, yangyicong,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
> Hi Will,
>
> > On Sun, Dec 14, 2025 at 11:22:45AM +0000, Yeoreum Yun wrote:
> > > Current futex atomic operations are implemented with ll/sc instructions
> > > and clearing PSTATE.PAN.
> > >
> > > Since Armv9.6, FEAT_LSUI supplies not only load/store instructions but
> > > also atomic operation for user memory access in kernel it doesn't need
> > > to clear PSTATE.PAN bit anymore.
> > >
> > > With theses instructions some of futex atomic operations don't need to
> > > be implmented with ldxr/stlxr pair instead can be implmented with
> > > one atomic operation supplied by FEAT_LSUI.
> > >
> > > However, some of futex atomic operation don't have matched
> > > instructuion i.e) eor or cmpxchg with word size.
> > > For those operation, uses cas{al}t to implement them.
> > >
> > > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > > ---
> > > arch/arm64/include/asm/futex.h | 180 ++++++++++++++++++++++++++++++++-
> > > 1 file changed, 178 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
> > > index f8cb674bdb3f..6778ff7e1c0e 100644
> > > --- a/arch/arm64/include/asm/futex.h
> > > +++ b/arch/arm64/include/asm/futex.h
> > > @@ -9,6 +9,8 @@
> > > #include <linux/uaccess.h>
> > > #include <linux/stringify.h>
> > >
> > > +#include <asm/alternative.h>
> > > +#include <asm/alternative-macros.h>
> > > #include <asm/errno.h>
> > >
> > > #define FUTEX_MAX_LOOPS 128 /* What's the largest number you can think of? */
> > > @@ -86,11 +88,185 @@ __llsc_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> > > return ret;
> > > }
> > >
> > > +#ifdef CONFIG_AS_HAS_LSUI
> > > +
> > > +/*
> > > + * When the LSUI feature is present, the CPU also implements PAN, because
> > > + * FEAT_PAN has been mandatory since Armv8.1. Therefore, there is no need to
> > > + * call uaccess_ttbr0_enable()/uaccess_ttbr0_disable() around each LSUI
> > > + * operation.
> > > + */
> >
> > I'd prefer not to rely on these sorts of properties because:
> >
> > - CPU bugs happen all the time
> > - Virtualisation and idreg overrides mean illegal feature combinations
> > can show up
> > - The architects sometimes change their mind
> >
> > So let's either drop the assumption that we have PAN if LSUI *or* actually
> > test that someplace during feature initialisation.
>
> Thanks for detail explain. I'll drop the my silly assumption and
> call the uaccess_ttbr0_enable()/disable() then.
>
> >
> > > +
> > > +#define __LSUI_PREAMBLE ".arch_extension lsui\n"
> > > +
> > > +#define LSUI_FUTEX_ATOMIC_OP(op, asm_op, mb) \
> > > +static __always_inline int \
> > > +__lsui_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
> > > +{ \
> > > + int ret = 0; \
> > > + int oldval; \
> > > + \
> > > + asm volatile("// __lsui_futex_atomic_" #op "\n" \
> > > + __LSUI_PREAMBLE \
> > > +"1: " #asm_op #mb " %w3, %w2, %1\n" \
> >
> > What's the point in separating the barrier suffix from the rest of the
> > instruction mnemonic? All the callers use -AL.
>
> Agree. I'll remove this.
>
> >
> > > +"2:\n" \
> > > + _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0) \
> > > + : "+r" (ret), "+Q" (*uaddr), "=r" (oldval) \
> > > + : "r" (oparg) \
> > > + : "memory"); \
> > > + \
> > > + if (!ret) \
> > > + *oval = oldval; \
> > > + \
> > > + return ret; \
> > > +}
> > > +
> > > +LSUI_FUTEX_ATOMIC_OP(add, ldtadd, al)
> > > +LSUI_FUTEX_ATOMIC_OP(or, ldtset, al)
> > > +LSUI_FUTEX_ATOMIC_OP(andnot, ldtclr, al)
> > > +LSUI_FUTEX_ATOMIC_OP(set, swpt, al)
> > > +
> > > +static __always_inline int
> > > +__lsui_cmpxchg64(u64 __user *uaddr, u64 *oldval, u64 newval)
> > > +{
> > > + int ret = 0;
> > > +
> > > + asm volatile("// __lsui_cmpxchg64\n"
> > > + __LSUI_PREAMBLE
> > > +"1: casalt %x2, %x3, %1\n"
> >
> >
> > How bizarre, they changed the order of the AL and T compared to SWPTAL.
> > Fair enough...
> >
> > Also, I don't think you need the 'x' prefix on the 64-bit variables.
>
> Right. I'll remove useless prefix.
>
> >
> > > +"2:\n"
> > > + _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
> > > + : "+r" (ret), "+Q" (*uaddr), "+r" (*oldval)
> > > + : "r" (newval)
> > > + : "memory");
> >
> > Don't you need to update *oldval here if the CAS didn't fault?
>
> No. if CAS doesn't make fault the oldval update already.
>
> >
> > > +
> > > + return ret;
> > > +}
> > > +
> > > +static __always_inline int
> > > +__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> > > +{
> > > + u64 __user *uaddr64;
> > > + bool futex_on_lo;
> > > + int ret = -EAGAIN, i;
> > > + u32 other, orig_other;
> > > + union {
> > > + struct futex_on_lo {
> > > + u32 val;
> > > + u32 other;
> > > + } lo_futex;
> > > +
> > > + struct futex_on_hi {
> > > + u32 other;
> > > + u32 val;
> > > + } hi_futex;
> > > +
> > > + u64 raw;
> > > + } oval64, orig64, nval64;
> > > +
> > > + uaddr64 = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> > > + futex_on_lo = (IS_ALIGNED((unsigned long)uaddr, sizeof(u64)) ==
> > > + IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN));
> >
> > Just make LSUI depend on !CPU_BIG_ENDIAN in Kconfig. The latter already
> > depends on BROKEN and so we'll probably drop it soon anyway. There's
> > certainly no need to care about it for new features and it should simplify
> > the code you have here if you can assume little-endian.
>
> Thanks. then I'll enable LSUI feature only !CPU_BIG_ENDIAN.
>
> >
> > > +
> > > + for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
> > > + if (get_user(oval64.raw, uaddr64))
> > > + return -EFAULT;
> >
> > Since oldval is passed to us as an argument, can we get away with a
> > 32-bit get_user() here?
>
> It's not a probelm. but is there any sigificant difference?
Also, regarding a 32-bit get_user(): I believe this read is to fetch the
*other* 32 bits before calling __lsui_cmpxchg64().
However, that would mean calling get_user() with a different address
depending on uaddr (uaddr - 1 or uaddr + 1).
So, unless there is a significant difference between a 32-bit and a 64-bit
get_user(), I hope to keep this as it is.
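[Editorial note: the 64-bit-container scheme being discussed can be sketched in portable userspace C. This is only an illustration, not the kernel code: __atomic_compare_exchange_n stands in for the CASALT-based __lsui_cmpxchg64(), faults are not modelled, and a little-endian host is assumed, in line with the Kconfig suggestion above.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of a 32-bit cmpxchg emulated via a 64-bit CAS on the
 * containing naturally-aligned word.  Names and constants are
 * illustrative; FUTEX_MAX_LOOPS is approximated by 128.
 */
static int cmpxchg32_via_cas64(uint32_t *uaddr, uint32_t oldval,
                               uint32_t newval, uint32_t *oval)
{
	/* Round down to the containing 8-byte-aligned 64-bit word. */
	uint64_t *uaddr64 = (uint64_t *)((uintptr_t)uaddr & ~(uintptr_t)7);
	/* Little-endian: an 8-byte-aligned futex sits in the low half. */
	bool futex_on_lo = (((uintptr_t)uaddr & 7) == 0);
	int shift = futex_on_lo ? 0 : 32;
	uint64_t mask = (uint64_t)0xffffffff << shift;

	for (int i = 0; i < 128; i++) {
		uint64_t cur = __atomic_load_n(uaddr64, __ATOMIC_RELAXED);
		/* Splice the expected/new 32-bit values into the word. */
		uint64_t expected = (cur & ~mask) | ((uint64_t)oldval << shift);
		uint64_t desired  = (cur & ~mask) | ((uint64_t)newval << shift);

		if (__atomic_compare_exchange_n(uaddr64, &expected, desired,
						false, __ATOMIC_SEQ_CST,
						__ATOMIC_SEQ_CST)) {
			*oval = oldval;
			return 0;
		}
		/*
		 * CAS failed: 'expected' now holds the observed word.  If
		 * the futex half differs, report it; if only the *other*
		 * half changed under us, retry.
		 */
		uint32_t seen = (uint32_t)(expected >> shift);
		if (seen != oldval) {
			*oval = seen;
			return 0;
		}
	}
	return -11; /* -EAGAIN */
}
```

Note that on a comparison failure the CAS itself hands back the observed 64-bit word, which is why a separate reload of the value is redundant in the retry path.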
[...]
--
Sincerely,
Yeoreum Yun
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2026-01-20 11:50 ` Will Deacon
2026-01-20 12:14 ` Yeoreum Yun
@ 2026-01-20 17:59 ` Yeoreum Yun
2026-01-21 13:56 ` Will Deacon
1 sibling, 1 reply; 37+ messages in thread
From: Yeoreum Yun @ 2026-01-20 17:59 UTC (permalink / raw)
To: Will Deacon
Cc: Mark Rutland, Marc Zyngier, catalin.marinas, broonie,
oliver.upton, miko.lenczewski, kevin.brodsky, ardb,
suzuki.poulose, lpieralisi, yangyicong, scott, joey.gouly,
yuzenghui, pbonzini, shuah, arnd, linux-arm-kernel, linux-kernel,
kvmarm, kvm, linux-kselftest
> On Tue, Jan 20, 2026 at 10:07:33AM +0000, Yeoreum Yun wrote:
> > Hi Mark,
> >
> > > On Mon, Jan 19, 2026 at 10:32:11PM +0000, Yeoreum Yun wrote:
> > > > > On Mon, Dec 15, 2025 at 09:56:04AM +0000, Yeoreum Yun wrote:
> > > > > > > On Sun, 14 Dec 2025 11:22:48 +0000,
> > > > > > > Yeoreum Yun <yeoreum.yun@arm.com> wrote:
> > > > > > > >
> > > > > > > > Apply the FEAT_LSUI instruction to emulate the deprecated swpX
> > > > > > > > instruction, so that toggling of the PSTATE.PAN bit can be removed when
> > > > > > > > LSUI-related instructions are used.
> > > > > > > >
> > > > > > > > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > > > > > >
> > > > > > > It really begs the question: what are the odds of ever seeing a CPU
> > > > > > > that implements both LSUI and AArch32?
> > > > > > >
> > > > > > > This seems extremely unlikely to me.
> > > > > >
> > > > > > Well, I'm not sure how many CPU will have
> > > > > > both ID_AA64PFR0_EL1.EL0 bit as 0b0010 and FEAT_LSUI
> > > > > > (except FVP currently) -- at least the CPU what I saw,
> > > > > > most of them set ID_AA64PFR0_EL1.EL0 as 0b0010.
> > > > >
> > > > > Just to make sure I understand you, you're saying that you have seen
> > > > > a real CPU that implements both 32-bit EL0 *and* FEAT_LSUI?
> > > > >
> > > > > > If you this seems useless, I don't have any strong comments
> > > > > > whether drop patches related to deprecated swp instruction parts
> > > > > > (patch 8-9 only) or not.
> > > > > > (But, I hope to pass this decision to maintaining perspective...)
> > > > >
> > > > > I think it depends on whether or not the hardware exists. Marc thinks
> > > > > that it's extremely unlikely whereas you appear to have seen some (but
> > > > > please confirm).
> > > >
> > > > What I meant was not a 32-bit CPU with LSUI, but a CPU that supports
> > > > 32-bit EL0 compatibility (i.e. ID_AA64PFR0_EL1.EL0 = 0b0010).
> > > > My point was that if CPUs implementing LSUI do appear, most of them will likely
> > > > continue to support the existing 32-bit EL0 compatibility that
> > > > the majority of current CPUs already have.
> > >
> > > That doesn't really answer Will's question. Will asked:
> > >
> > > Just to make sure I understand you, you're saying that you have seen a
> > > real CPU that implements both 32-bit EL0 *and* FEAT_LSUI?
> > >
> > > IIUC you have NOT seen any specific real CPU that supports this, and you
> > > have been testing on an FVP AEM model (which can be configured to
> > > support this combination of features). Can you please confirm?
> > >
> > > I don't beleive it's likely that we'll see hardware that supports
> > > both FEAT_LSUI and AArch32 (at EL0).
> >
> > Yes. I've tested in FVP model. and the latest of my reply said
> > I confirmed that Marc's and your view was right.
>
> It's probably still worth adding something to the cpufeature stuff to
> WARN() if we spot both LSUI and support for AArch32.
>
On second thought, while a CPU that implements LSUI is unlikely to
support AArch32 compatibility,
I don't think LSUI requires the absence of AArch32.
These two are independent features (and in fact our FVP reports/supports both).
Given that, I'm not sure a WARN is really necessary.
Would it be sufficient to just drop the patch for swpX instead?
Thanks!
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v11 RESEND 6/9] arm64: futex: support futex with FEAT_LSUI
2026-01-19 22:17 ` Yeoreum Yun
2026-01-20 15:44 ` Yeoreum Yun
@ 2026-01-21 13:48 ` Will Deacon
2026-01-21 14:16 ` Yeoreum Yun
1 sibling, 1 reply; 37+ messages in thread
From: Will Deacon @ 2026-01-21 13:48 UTC (permalink / raw)
To: Yeoreum Yun
Cc: catalin.marinas, maz, broonie, oliver.upton, miko.lenczewski,
kevin.brodsky, ardb, suzuki.poulose, lpieralisi, yangyicong,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
On Mon, Jan 19, 2026 at 10:17:47PM +0000, Yeoreum Yun wrote:
> > > +"2:\n"
> > > + _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
> > > + : "+r" (ret), "+Q" (*uaddr), "+r" (*oldval)
> > > + : "r" (newval)
> > > + : "memory");
> >
> > Don't you need to update *oldval here if the CAS didn't fault?
>
> No. if CAS doesn't make fault the oldval update already.
Sorry, it was the "+r" constraint with a pointer dereference that threw
me but you have the "memory" clobber so it looks like this will work.
> > > +
> > > + for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
> > > + if (get_user(oval64.raw, uaddr64))
> > > + return -EFAULT;
> >
> > Since oldval is passed to us as an argument, can we get away with a
> > 32-bit get_user() here?
>
> It's not a probelm. but is there any sigificant difference?
I think the code would be clearer if you only read what you actually
use.
> > > + nval64.raw = oval64.raw;
> > > +
> > > + if (futex_on_lo) {
> > > + oval64.lo_futex.val = oldval;
> > > + nval64.lo_futex.val = newval;
> > > + } else {
> > > + oval64.hi_futex.val = oldval;
> > > + nval64.hi_futex.val = newval;
> > > + }
> > > +
> > > + orig64.raw = oval64.raw;
> > > +
> > > + if (__lsui_cmpxchg64(uaddr64, &oval64.raw, nval64.raw))
> > > + return -EFAULT;
> > > +
> > > + if (futex_on_lo) {
> > > + oldval = oval64.lo_futex.val;
> > > + other = oval64.lo_futex.other;
> > > + orig_other = orig64.lo_futex.other;
> > > + } else {
> > > + oldval = oval64.hi_futex.val;
> > > + other = oval64.hi_futex.other;
> > > + orig_other = orig64.hi_futex.other;
> > > + }
> > > +
> > > + if (other == orig_other) {
> > > + ret = 0;
> > > + break;
> > > + }
> > > + }
> > > +
> > > + if (!ret)
> > > + *oval = oldval;
> >
> > Shouldn't we set *oval to the value we got back from the CAS?
>
> Since it's a "success" case, the CAS return and oldval must be the same.
> That's why it doesn't matter to use got back from the CAS.
> Otherwise, it returns error and *oval doesn't matter for
> futex_atomic_cmpxchg_inatomic().
Got it, but then the caller you have is very weird because e.g.
__lsui_futex_atomic_eor() goes and does another get_user() on the next
iteration instead of using the value returned by the CAS.
It would probably be clearer if you restructured your CAS helper to look
more like try_cmpxchg() and then the loop around it would be minimal.
You might need to distinguish the faulting case from the comparison
failure case with e.g. -EFAULT vs -EAGAIN.
Will
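
[Editorial note: the restructuring suggested here might look roughly as follows. Every name is illustrative, __atomic_compare_exchange_n stands in for the CASALT-based uaccess helper, and the faulting (-EFAULT) case is not modelled.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * try_cmpxchg()-style helper: on comparison failure it updates *old
 * with the observed value, so callers can loop without re-reading
 * memory.  Stands in for a CASALT-based user-access helper.
 */
static bool try_cmpxchg64_user(uint64_t *uaddr, uint64_t *old, uint64_t new)
{
	return __atomic_compare_exchange_n(uaddr, old, new, false,
					   __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}

/* The caller's loop becomes minimal: one read up front, then reuse *old. */
static int atomic_or_user(uint64_t *uaddr, uint64_t bits, uint64_t *oldval)
{
	uint64_t old = __atomic_load_n(uaddr, __ATOMIC_RELAXED);

	while (!try_cmpxchg64_user(uaddr, &old, old | bits)) {
		/* 'old' already holds the latest value; no extra read. */
	}
	*oldval = old;
	return 0;
}
```

The point is that, like try_cmpxchg(), the helper hands back the observed value on failure, so the surrounding loop never needs another get_user().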
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2026-01-20 17:59 ` Yeoreum Yun
@ 2026-01-21 13:56 ` Will Deacon
2026-01-21 14:51 ` Yeoreum Yun
0 siblings, 1 reply; 37+ messages in thread
From: Will Deacon @ 2026-01-21 13:56 UTC (permalink / raw)
To: Yeoreum Yun
Cc: Mark Rutland, Marc Zyngier, catalin.marinas, broonie,
oliver.upton, miko.lenczewski, kevin.brodsky, ardb,
suzuki.poulose, lpieralisi, yangyicong, scott, joey.gouly,
yuzenghui, pbonzini, shuah, arnd, linux-arm-kernel, linux-kernel,
kvmarm, kvm, linux-kselftest
On Tue, Jan 20, 2026 at 05:59:47PM +0000, Yeoreum Yun wrote:
> On second thought, while a CPU that implements LSUI is unlikely to
> support AArch32 compatibility,
> I don't think LSUI requires the absence of AArch32.
> These two are independent features (and in fact our FVP reports/supports both).
Did you have to configure the FVP specially for this, or is that a "default"
configuration?
> Given that, I'm not sure a WARN is really necessary.
> Would it be sufficient to just drop the patch for swpX instead?
Given that the whole point of LSUI is to remove the PAN toggling, I think
we should make an effort to make sure that we don't retain PAN toggling
paths at runtime that could potentially be targeted by attackers. If we
drop the SWP emulation patch and then see that we have AArch32 at runtime,
we should forcefully disable the SWP emulation but, since we don't actually
think we're going to see this in practice, the WARN seemed simpler.
Will
* Re: [PATCH v11 RESEND 6/9] arm64: futex: support futex with FEAT_LSUI
2026-01-21 13:48 ` Will Deacon
@ 2026-01-21 14:16 ` Yeoreum Yun
0 siblings, 0 replies; 37+ messages in thread
From: Yeoreum Yun @ 2026-01-21 14:16 UTC (permalink / raw)
To: Will Deacon
Cc: catalin.marinas, maz, broonie, oliver.upton, miko.lenczewski,
kevin.brodsky, ardb, suzuki.poulose, lpieralisi, yangyicong,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
Hi Will,
> On Mon, Jan 19, 2026 at 10:17:47PM +0000, Yeoreum Yun wrote:
> > > > +"2:\n"
> > > > + _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
> > > > + : "+r" (ret), "+Q" (*uaddr), "+r" (*oldval)
> > > > + : "r" (newval)
> > > > + : "memory");
> > >
> > > Don't you need to update *oldval here if the CAS didn't fault?
> >
> > No. if CAS doesn't make fault the oldval update already.
>
> Sorry, it was the "+r" constraint with a pointer dereference that threw
> me but you have the "memory" clobber so it looks like this will work.
>
> > > > +
> > > > + for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
> > > > + if (get_user(oval64.raw, uaddr64))
> > > > + return -EFAULT;
> > >
> > > Since oldval is passed to us as an argument, can we get away with a
> > > 32-bit get_user() here?
> >
> > It's not a probelm. but is there any sigificant difference?
>
> I think the code would be clearer if you only read what you actually
> use.
>
> > > > + nval64.raw = oval64.raw;
> > > > +
> > > > + if (futex_on_lo) {
> > > > + oval64.lo_futex.val = oldval;
> > > > + nval64.lo_futex.val = newval;
> > > > + } else {
> > > > + oval64.hi_futex.val = oldval;
> > > > + nval64.hi_futex.val = newval;
> > > > + }
> > > > +
> > > > + orig64.raw = oval64.raw;
> > > > +
> > > > + if (__lsui_cmpxchg64(uaddr64, &oval64.raw, nval64.raw))
> > > > + return -EFAULT;
> > > > +
> > > > + if (futex_on_lo) {
> > > > + oldval = oval64.lo_futex.val;
> > > > + other = oval64.lo_futex.other;
> > > > + orig_other = orig64.lo_futex.other;
> > > > + } else {
> > > > + oldval = oval64.hi_futex.val;
> > > > + other = oval64.hi_futex.other;
> > > > + orig_other = orig64.hi_futex.other;
> > > > + }
> > > > +
> > > > + if (other == orig_other) {
> > > > + ret = 0;
> > > > + break;
> > > > + }
> > > > + }
> > > > +
> > > > + if (!ret)
> > > > + *oval = oldval;
> > >
> > > Shouldn't we set *oval to the value we got back from the CAS?
> >
> > Since it's a "success" case, the CAS return and oldval must be the same.
> > That's why it doesn't matter to use got back from the CAS.
> > Otherwise, it returns error and *oval doesn't matter for
> > futex_atomic_cmpxchg_inatomic().
>
> Got it, but then the caller you have is very weird because e.g.
> __lsui_futex_atomic_eor() goes and does another get_user() on the next
> iteration instead of using the value returned by the CAS.
>
> It would probably be clearer if you restructured your CAS helper to look
> more like try_cmpxchg() and then the loop around it would be minimal.
> You might need to distinguish the faulting case from the comparison
> failure case with e.g. -EFAULT vs -EAGAIN.
Oh, thanks for pointing this out. I understand your point clearly now.
Yes, I’ll respin the patch accordingly. Thanks again!
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2026-01-21 13:56 ` Will Deacon
@ 2026-01-21 14:51 ` Yeoreum Yun
2026-01-21 16:20 ` Will Deacon
0 siblings, 1 reply; 37+ messages in thread
From: Yeoreum Yun @ 2026-01-21 14:51 UTC (permalink / raw)
To: Will Deacon
Cc: Mark Rutland, Marc Zyngier, catalin.marinas, broonie,
oliver.upton, miko.lenczewski, kevin.brodsky, ardb,
suzuki.poulose, lpieralisi, yangyicong, scott, joey.gouly,
yuzenghui, pbonzini, shuah, arnd, linux-arm-kernel, linux-kernel,
kvmarm, kvm, linux-kselftest
> On Tue, Jan 20, 2026 at 05:59:47PM +0000, Yeoreum Yun wrote:
> > On second thought, while a CPU that implements LSUI is unlikely to
> > support AArch32 compatibility,
> > I don't think LSUI requires the absence of AArch32.
> > These two are independent features (and in fact our FVP reports/supports both).
>
> Did you have to configure the FVP specially for this or that a "default"
> configuration?
>
> > Given that, I'm not sure a WARN is really necessary.
> > Would it be sufficient to just drop the patch for swpX instead?
>
> Given that the whole point of LSUI is to remove the PAN toggling, I think
> we should make an effort to make sure that we don't retain PAN toggling
> paths at runtime that could potentially be targetted by attackers. If we
> drop the SWP emulation patch and then see that we have AArch32 at runtime,
> we should forcefully disable the SWP emulation but, since we don't actually
> think we're going to see this in practice, the WARN seemed simpler.
TBH, I missed the FVP configuration option clusterX.max_32bit_el, which
can disable AArch32 support by setting it to -1 (default: 3).
Given this, I think it’s reasonable to emit a WARN when LSUI is enabled and
drop the SWP emulation path under that condition.
Thanks!
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2026-01-21 14:51 ` Yeoreum Yun
@ 2026-01-21 16:20 ` Will Deacon
2026-01-21 16:31 ` Yeoreum Yun
0 siblings, 1 reply; 37+ messages in thread
From: Will Deacon @ 2026-01-21 16:20 UTC (permalink / raw)
To: Yeoreum Yun
Cc: Mark Rutland, Marc Zyngier, catalin.marinas, broonie,
oliver.upton, miko.lenczewski, kevin.brodsky, ardb,
suzuki.poulose, lpieralisi, yangyicong, scott, joey.gouly,
yuzenghui, pbonzini, shuah, arnd, linux-arm-kernel, linux-kernel,
kvmarm, kvm, linux-kselftest
On Wed, Jan 21, 2026 at 02:51:10PM +0000, Yeoreum Yun wrote:
> > On Tue, Jan 20, 2026 at 05:59:47PM +0000, Yeoreum Yun wrote:
> > > On second thought, while a CPU that implements LSUI is unlikely to
> > > support AArch32 compatibility,
> > > I don't think LSUI requires the absence of AArch32.
> > > These two are independent features (and in fact our FVP reports/supports both).
> >
> > Did you have to configure the FVP specially for this or that a "default"
> > configuration?
> >
> > > Given that, I'm not sure a WARN is really necessary.
> > > Would it be sufficient to just drop the patch for swpX instead?
> >
> > Given that the whole point of LSUI is to remove the PAN toggling, I think
> > we should make an effort to make sure that we don't retain PAN toggling
> > paths at runtime that could potentially be targetted by attackers. If we
> > drop the SWP emulation patch and then see that we have AArch32 at runtime,
> > we should forcefully disable the SWP emulation but, since we don't actually
> > think we're going to see this in practice, the WARN seemed simpler.
>
> TBH, I missed the FVP configuration option clusterX.max_32bit_el, which
> can disable AArch32 support by setting it to -1 (default: 3).
> Given this, I think it’s reasonable to emit a WARN when LSUI is enabled and
> drop the SWP emulation path under that condition.
I'm asking about the default value.
If Arm are going to provide models that default to having both LSUI and
AArch32 EL0 supported, then the WARN is just going to annoy people.
Please can you find out whether or not that's the case?
Will
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2026-01-21 16:20 ` Will Deacon
@ 2026-01-21 16:31 ` Yeoreum Yun
2026-01-21 16:36 ` Will Deacon
0 siblings, 1 reply; 37+ messages in thread
From: Yeoreum Yun @ 2026-01-21 16:31 UTC (permalink / raw)
To: Will Deacon
Cc: Mark Rutland, Marc Zyngier, catalin.marinas, broonie,
oliver.upton, miko.lenczewski, kevin.brodsky, ardb,
suzuki.poulose, lpieralisi, yangyicong, scott, joey.gouly,
yuzenghui, pbonzini, shuah, arnd, linux-arm-kernel, linux-kernel,
kvmarm, kvm, linux-kselftest
On Wed, Jan 21, 2026 at 04:20:36PM +0000, Will Deacon wrote:
> On Wed, Jan 21, 2026 at 02:51:10PM +0000, Yeoreum Yun wrote:
> > > On Tue, Jan 20, 2026 at 05:59:47PM +0000, Yeoreum Yun wrote:
> > > > On second thought, while a CPU that implements LSUI is unlikely to
> > > > support AArch32 compatibility,
> > > > I don't think LSUI requires the absence of AArch32.
> > > > These two are independent features (and in fact our FVP reports/supports both).
> > >
> > > Did you have to configure the FVP specially for this or that a "default"
> > > configuration?
> > >
> > > > Given that, I'm not sure a WARN is really necessary.
> > > > Would it be sufficient to just drop the patch for swpX instead?
> > >
> > > Given that the whole point of LSUI is to remove the PAN toggling, I think
> > > we should make an effort to make sure that we don't retain PAN toggling
> > > paths at runtime that could potentially be targetted by attackers. If we
> > > drop the SWP emulation patch and then see that we have AArch32 at runtime,
> > > we should forcefully disable the SWP emulation but, since we don't actually
> > > think we're going to see this in practice, the WARN seemed simpler.
> >
> > TBH, I missed the FVP configuration option clusterX.max_32bit_el, which
> > can disable AArch32 support by setting it to -1 (default: 3).
> > Given this, I think it’s reasonable to emit a WARN when LSUI is enabled and
> > drop the SWP emulation path under that condition.
>
> I'm asking about the default value.
>
> If Arm are going to provide models that default to having both LSUI and
> AArch32 EL0 supported, then the WARN is just going to annoy people.
>
> Please can you find out whether or not that's the case?
Yes. I said the default == 3, which means 32-bit execution is allowed
from EL0 to EL3 (IOW, ID_AA64PFR0_EL1.EL0 == 0b0010)
-- but sorry for the lack of explanation.
Checking the latest model's default option values related to this,
based on FVP version 11.30
(https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms/Arm%20Architecture%20FVPs):
- cluster0.has_lsui default = '0x1' : Implement additional load and store unprivileged instructions (FEAT_LSUI).
- cluster0.max_32bit_el default = '0x3' : Maximum exception level supporting AArch32 modes. -1: No support for A32 at any EL, x:[0:3] - all levels at or below ELx support A32 : [0xffffffffffffffff:0x3]
So it would be annoying to people.
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2026-01-21 16:31 ` Yeoreum Yun
@ 2026-01-21 16:36 ` Will Deacon
2026-01-21 16:51 ` Yeoreum Yun
0 siblings, 1 reply; 37+ messages in thread
From: Will Deacon @ 2026-01-21 16:36 UTC (permalink / raw)
To: Yeoreum Yun
Cc: Mark Rutland, Marc Zyngier, catalin.marinas, broonie,
oliver.upton, miko.lenczewski, kevin.brodsky, ardb,
suzuki.poulose, lpieralisi, yangyicong, scott, joey.gouly,
yuzenghui, pbonzini, shuah, arnd, linux-arm-kernel, linux-kernel,
kvmarm, kvm, linux-kselftest
On Wed, Jan 21, 2026 at 04:31:28PM +0000, Yeoreum Yun wrote:
> On Wed, Jan 21, 2026 at 04:20:36PM +0000, Will Deacon wrote:
> > On Wed, Jan 21, 2026 at 02:51:10PM +0000, Yeoreum Yun wrote:
> > > > On Tue, Jan 20, 2026 at 05:59:47PM +0000, Yeoreum Yun wrote:
> > > > > On second thought, while a CPU that implements LSUI is unlikely to
> > > > > support AArch32 compatibility,
> > > > > I don't think LSUI requires the absence of AArch32.
> > > > > These two are independent features (and in fact our FVP reports/supports both).
> > > >
> > > > Did you have to configure the FVP specially for this or that a "default"
> > > > configuration?
> > > >
> > > > > Given that, I'm not sure a WARN is really necessary.
> > > > > Would it be sufficient to just drop the patch for swpX instead?
> > > >
> > > > Given that the whole point of LSUI is to remove the PAN toggling, I think
> > > > we should make an effort to make sure that we don't retain PAN toggling
> > > > paths at runtime that could potentially be targetted by attackers. If we
> > > > drop the SWP emulation patch and then see that we have AArch32 at runtime,
> > > > we should forcefully disable the SWP emulation but, since we don't actually
> > > > think we're going to see this in practice, the WARN seemed simpler.
> > >
> > > TBH, I missed the FVP configuration option clusterX.max_32bit_el, which
> > > can disable AArch32 support by setting it to -1 (default: 3).
> > > Given this, I think it’s reasonable to emit a WARN when LSUI is enabled and
> > > drop the SWP emulation path under that condition.
> >
> > I'm asking about the default value.
> >
> > If Arm are going to provide models that default to having both LSUI and
> > AArch32 EL0 supported, then the WARN is just going to annoy people.
> >
> > Please can you find out whether or not that's the case?
>
> Yes. I said the deafult == 3 which means that allow to execute
> 32-bit in EL0 to EL3 (IOW, ID_AA64PFR0_EL1.EL0 == 0b0010)
> -- but sorry for lack of explanation.
>
> When I check the latest model's default option value related for this
> based on FVP version 11.30
> (https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms/Arm%20Architecture%20FVPs),
>
> - cluster0.has_lsui=1 default = '0x1' : Implement additional load and store unprivileged instructions (FEAT_LSUI).
> - cluster0.max_32bit_el=3 default = '0x3' : Maximum exception level supporting AArch32 modes. -1: No Support for A32 at any EL, x:[0:3] - All the levels below supplied ELx supports A32 : [0xffffffffffffffff:0x3]
>
> So it would be a annoying to people.
Right, so you can probably do something like setting the 'status'
field of 'insn_swp' to INSN_UNAVAILABLE if we detect LSUI.
Will
* Re: [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation.
2026-01-21 16:36 ` Will Deacon
@ 2026-01-21 16:51 ` Yeoreum Yun
0 siblings, 0 replies; 37+ messages in thread
From: Yeoreum Yun @ 2026-01-21 16:51 UTC (permalink / raw)
To: Will Deacon
Cc: Mark Rutland, Marc Zyngier, catalin.marinas, broonie,
oliver.upton, miko.lenczewski, kevin.brodsky, ardb,
suzuki.poulose, lpieralisi, yangyicong, scott, joey.gouly,
yuzenghui, pbonzini, shuah, arnd, linux-arm-kernel, linux-kernel,
kvmarm, kvm, linux-kselftest
On Wed, Jan 21, 2026 at 04:36:26PM +0000, Will Deacon wrote:
> On Wed, Jan 21, 2026 at 04:31:28PM +0000, Yeoreum Yun wrote:
> > On Wed, Jan 21, 2026 at 04:20:36PM +0000, Will Deacon wrote:
> > > On Wed, Jan 21, 2026 at 02:51:10PM +0000, Yeoreum Yun wrote:
> > > > > On Tue, Jan 20, 2026 at 05:59:47PM +0000, Yeoreum Yun wrote:
> > > > > > On second thought, while a CPU that implements LSUI is unlikely to
> > > > > > support AArch32 compatibility,
> > > > > > I don't think LSUI requires the absence of AArch32.
> > > > > > These two are independent features (and in fact our FVP reports/supports both).
> > > > >
> > > > > Did you have to configure the FVP specially for this or that a "default"
> > > > > configuration?
> > > > >
> > > > > > Given that, I'm not sure a WARN is really necessary.
> > > > > > Would it be sufficient to just drop the patch for swpX instead?
> > > > >
> > > > > Given that the whole point of LSUI is to remove the PAN toggling, I think
> > > > > we should make an effort to make sure that we don't retain PAN toggling
> > > > > paths at runtime that could potentially be targetted by attackers. If we
> > > > > drop the SWP emulation patch and then see that we have AArch32 at runtime,
> > > > > we should forcefully disable the SWP emulation but, since we don't actually
> > > > > think we're going to see this in practice, the WARN seemed simpler.
> > > >
> > > > TBH, I missed the FVP configuration option clusterX.max_32bit_el, which
> > > > can disable AArch32 support by setting it to -1 (default: 3).
> > > > Given this, I think it’s reasonable to emit a WARN when LSUI is enabled and
> > > > drop the SWP emulation path under that condition.
> > >
> > > I'm asking about the default value.
> > >
> > > If Arm are going to provide models that default to having both LSUI and
> > > AArch32 EL0 supported, then the WARN is just going to annoy people.
> > >
> > > Please can you find out whether or not that's the case?
> >
> > Yes. I said the deafult == 3 which means that allow to execute
> > 32-bit in EL0 to EL3 (IOW, ID_AA64PFR0_EL1.EL0 == 0b0010)
> > -- but sorry for lack of explanation.
> >
> > When I check the latest model's default option value related for this
> > based on FVP version 11.30
> > (https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms/Arm%20Architecture%20FVPs),
> >
> > - cluster0.has_lsui=1 default = '0x1' : Implement additional load and store unprivileged instructions (FEAT_LSUI).
> > - cluster0.max_32bit_el=3 default = '0x3' : Maximum exception level supporting AArch32 modes. -1: No Support for A32 at any EL, x:[0:3] - All the levels below supplied ELx supports A32 : [0xffffffffffffffff:0x3]
> >
> > So it would be a annoying to people.
>
> Right, so you can probably do something like setting the 'status'
> field of 'insn_swp' to INSN_UNAVAILABLE if we detect LSUI.
Thanks for your suggestion. That would be good.
--
Sincerely,
Yeoreum Yun
end of thread, other threads:[~2026-01-21 16:53 UTC | newest]
Thread overview: 37+ messages
2025-12-14 11:22 [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 1/9] arm64: cpufeature: add FEAT_LSUI Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 2/9] KVM: arm64: expose FEAT_LSUI to guest Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 3/9] KVM: arm64: kselftest: set_id_regs: add test for FEAT_LSUI Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 4/9] arm64: Kconfig: Detect toolchain support for LSUI Yeoreum Yun
2026-01-19 15:50 ` Will Deacon
2026-01-19 15:54 ` Mark Brown
2026-01-20 11:35 ` Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 5/9] arm64: futex: refactor futex atomic operation Yeoreum Yun
2026-01-19 15:57 ` Will Deacon
2026-01-19 22:19 ` Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 6/9] arm64: futex: support futex with FEAT_LSUI Yeoreum Yun
2026-01-19 16:37 ` Will Deacon
2026-01-19 22:17 ` Yeoreum Yun
2026-01-20 15:44 ` Yeoreum Yun
2026-01-21 13:48 ` Will Deacon
2026-01-21 14:16 ` Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 7/9] arm64: separate common LSUI definitions into lsui.h Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 8/9] arm64: armv8_deprecated: convert user_swpX to inline function Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation Yeoreum Yun
2025-12-15 9:33 ` Marc Zyngier
2025-12-15 9:56 ` Yeoreum Yun
2026-01-19 15:34 ` Will Deacon
2026-01-19 22:32 ` Yeoreum Yun
2026-01-20 9:32 ` Yeoreum Yun
2026-01-20 9:46 ` Mark Rutland
2026-01-20 10:07 ` Yeoreum Yun
2026-01-20 11:50 ` Will Deacon
2026-01-20 12:14 ` Yeoreum Yun
2026-01-20 17:59 ` Yeoreum Yun
2026-01-21 13:56 ` Will Deacon
2026-01-21 14:51 ` Yeoreum Yun
2026-01-21 16:20 ` Will Deacon
2026-01-21 16:31 ` Yeoreum Yun
2026-01-21 16:36 ` Will Deacon
2026-01-21 16:51 ` Yeoreum Yun
2025-12-31 10:07 ` [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun