public inbox for linux-arm-kernel@lists.infradead.org
* [PATCH v8 0/5] support FEAT_LSUI and apply it on futex atomic ops
@ 2025-09-17 11:08 Yeoreum Yun
  2025-09-17 11:08 ` [PATCH v8 1/5] arm64: cpufeature: add FEAT_LSUI Yeoreum Yun
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Yeoreum Yun @ 2025-09-17 11:08 UTC (permalink / raw)
  To: catalin.marinas, will, broonie, maz, oliver.upton, joey.gouly,
	james.morse, ardb, scott, suzuki.poulose, yuzenghui, mark.rutland
  Cc: linux-arm-kernel, kvmarm, linux-kernel, Yeoreum Yun

Since Armv9.6, FEAT_LSUI provides load/store instructions that allow
privileged code to access user memory without clearing the PSTATE.PAN
bit.

This patchset adds support for FEAT_LSUI and applies it to the futex
atomic operations, replacing the ldxr/stlxr implementation that clears
the PSTATE.PAN bit with the corresponding unprivileged load/store atomic
operations, which do not require touching PSTATE.PAN.
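
To make the before/after concrete, here is a rough illustrative sketch
(not taken from the patches; register and variable names are made up and
fault handling via the extable is omitted):

	/*
	 * before: ll/sc loop with PSTATE.PAN cleared around the access
	 *
	 *	uaccess_enable_privileged();
	 * 1:	ldxr	w1, [uaddr]
	 *	add	w3, w1, w_oparg
	 *	stlxr	w0, w3, [uaddr]
	 *	cbnz	w0, 1b
	 *	uaccess_disable_privileged();
	 *
	 * after: a single LSUI unprivileged atomic, PSTATE.PAN left set
	 *
	 *	.arch_extension lsui
	 *	ldtaddal w_oparg, w_old, [uaddr]
	 */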

Patch Sequences
================

Patch #1 adds the cpufeature for FEAT_LSUI

Patch #2 exposes FEAT_LSUI to the guest

Patch #3 adds the Kconfig option for FEAT_LSUI

Patch #4 refactors the former futex atomic-op implementation with ll/sc &
         clearing PSTATE.PAN

Patch #5 supports futex atomic ops with FEAT_LSUI

Patch History
==============
from v7 to v8:
  - implement futex_atomic_eor() and futex_atomic_cmpxchg() with casalt
    via a C helper.
  - Drop the small optimisation on the ll/sc futex_atomic_set operation.
  - modify some commit messages.
  - https://lore.kernel.org/all/20250816151929.197589-1-yeoreum.yun@arm.com/

from v6 to v7:
  - wrap FEAT_LSUI with CONFIG_AS_HAS_LSUI in cpufeature
  - remove unnecessary addition of indentation.
  - remove unnecessary mte_tco_enable()/disable() on LSUI operation.
  - https://lore.kernel.org/all/20250811163635.1562145-1-yeoreum.yun@arm.com/

from v5 to v6:
  - rebase to v6.17-rc1
  - https://lore.kernel.org/all/20250722121956.1509403-1-yeoreum.yun@arm.com/

from v4 to v5:
  - remove futex_ll_sc.h, futex_lsui.h and lsui.h and move them to futex.h
  - reorganize the patches.
  - https://lore.kernel.org/all/20250721083618.2743569-1-yeoreum.yun@arm.com/

from v3 to v4:
  - rebase to v6.16-rc7
  - modify some patch titles.
  - https://lore.kernel.org/all/20250617183635.1266015-1-yeoreum.yun@arm.com/

from v2 to v3:
  - expose FEAT_LSUI to the guest
  - add a help section for the LSUI Kconfig entry
  - https://lore.kernel.org/all/20250611151154.46362-1-yeoreum.yun@arm.com/

from v1 to v2:
  - remove empty v9.6 menu entry
  - locate HAS_LSUI in cpucaps in the correct order
  - https://lore.kernel.org/all/20250611104916.10636-1-yeoreum.yun@arm.com/


Yeoreum Yun (5):
  arm64: cpufeature: add FEAT_LSUI
  KVM: arm64: expose FEAT_LSUI to guest
  arm64: Kconfig: Detect toolchain support for LSUI
  arm64: futex: refactor futex atomic operation
  arm64: futex: support futex with FEAT_LSUI

 arch/arm64/Kconfig             |   5 +
 arch/arm64/include/asm/futex.h | 262 +++++++++++++++++++++++++++------
 arch/arm64/kernel/cpufeature.c |  10 ++
 arch/arm64/kvm/sys_regs.c      |   5 +-
 arch/arm64/tools/cpucaps       |   1 +
 5 files changed, 235 insertions(+), 48 deletions(-)

--




* [PATCH v8 1/5] arm64: cpufeature: add FEAT_LSUI
  2025-09-17 11:08 [PATCH v8 0/5] support FEAT_LSUI and apply it on futex atomic ops Yeoreum Yun
@ 2025-09-17 11:08 ` Yeoreum Yun
  2025-09-17 11:08 ` [PATCH v8 2/5] KVM: arm64: expose FEAT_LSUI to guest Yeoreum Yun
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Yeoreum Yun @ 2025-09-17 11:08 UTC (permalink / raw)
  To: catalin.marinas, will, broonie, maz, oliver.upton, joey.gouly,
	james.morse, ardb, scott, suzuki.poulose, yuzenghui, mark.rutland
  Cc: linux-arm-kernel, kvmarm, linux-kernel, Yeoreum Yun

Since Armv9.6, FEAT_LSUI provides load/store instructions
that allow privileged code to access user memory without
clearing the PSTATE.PAN bit.

Add CPU feature detection for FEAT_LSUI.

Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/kernel/cpufeature.c | 10 ++++++++++
 arch/arm64/tools/cpucaps       |  1 +
 2 files changed, 11 insertions(+)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index ef269a5a37e1..9f83d3fe941b 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -279,6 +279,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
 
 static const struct arm64_ftr_bits ftr_id_aa64isar3[] = {
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR3_EL1_FPRCVT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR3_EL1_LSUI_SHIFT, 4, ID_AA64ISAR3_EL1_LSUI_NI),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR3_EL1_FAMINMAX_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
@@ -3156,6 +3157,15 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64PFR2_EL1, GCIE, IMP)
 	},
+#ifdef CONFIG_AS_HAS_LSUI
+	{
+		.desc = "Unprivileged Load Store Instructions (LSUI)",
+		.capability = ARM64_HAS_LSUI,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64ISAR3_EL1, LSUI, IMP)
+	},
+#endif
 	{},
 };
 
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 9ff5cdbd2759..b728e5a2e05e 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -44,6 +44,7 @@ HAS_HCX
 HAS_LDAPR
 HAS_LPA2
 HAS_LSE_ATOMICS
+HAS_LSUI
 HAS_MOPS
 HAS_NESTED_VIRT
 HAS_BBML2_NOABORT
-- 




* [PATCH v8 2/5] KVM: arm64: expose FEAT_LSUI to guest
  2025-09-17 11:08 [PATCH v8 0/5] support FEAT_LSUI and apply it on futex atomic ops Yeoreum Yun
  2025-09-17 11:08 ` [PATCH v8 1/5] arm64: cpufeature: add FEAT_LSUI Yeoreum Yun
@ 2025-09-17 11:08 ` Yeoreum Yun
  2025-09-17 11:08 ` [PATCH v8 3/5] arm64: Kconfig: Detect toolchain support for LSUI Yeoreum Yun
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Yeoreum Yun @ 2025-09-17 11:08 UTC (permalink / raw)
  To: catalin.marinas, will, broonie, maz, oliver.upton, joey.gouly,
	james.morse, ardb, scott, suzuki.poulose, yuzenghui, mark.rutland
  Cc: linux-arm-kernel, kvmarm, linux-kernel, Yeoreum Yun

Expose FEAT_LSUI to the guest.

Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/kvm/sys_regs.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index b29f72478a50..abdf19ae250e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1757,7 +1757,8 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
 			val &= ~ID_AA64ISAR2_EL1_WFxT;
 		break;
 	case SYS_ID_AA64ISAR3_EL1:
-		val &= ID_AA64ISAR3_EL1_FPRCVT | ID_AA64ISAR3_EL1_FAMINMAX;
+		val &= ID_AA64ISAR3_EL1_FPRCVT | ID_AA64ISAR3_EL1_FAMINMAX |
+		       ID_AA64ISAR3_EL1_LSUI;
 		break;
 	case SYS_ID_AA64MMFR2_EL1:
 		val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
@@ -3141,7 +3142,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 					ID_AA64ISAR2_EL1_APA3 |
 					ID_AA64ISAR2_EL1_GPA3)),
 	ID_WRITABLE(ID_AA64ISAR3_EL1, (ID_AA64ISAR3_EL1_FPRCVT |
-				       ID_AA64ISAR3_EL1_FAMINMAX)),
+				       ID_AA64ISAR3_EL1_FAMINMAX | ID_AA64ISAR3_EL1_LSUI)),
 	ID_UNALLOCATED(6,4),
 	ID_UNALLOCATED(6,5),
 	ID_UNALLOCATED(6,6),
-- 




* [PATCH v8 3/5] arm64: Kconfig: Detect toolchain support for LSUI
  2025-09-17 11:08 [PATCH v8 0/5] support FEAT_LSUI and apply it on futex atomic ops Yeoreum Yun
  2025-09-17 11:08 ` [PATCH v8 1/5] arm64: cpufeature: add FEAT_LSUI Yeoreum Yun
  2025-09-17 11:08 ` [PATCH v8 2/5] KVM: arm64: expose FEAT_LSUI to guest Yeoreum Yun
@ 2025-09-17 11:08 ` Yeoreum Yun
  2025-09-17 11:08 ` [PATCH v8 4/5] arm64: futex: refactor futex atomic operation Yeoreum Yun
  2025-09-17 11:08 ` [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI Yeoreum Yun
  4 siblings, 0 replies; 13+ messages in thread
From: Yeoreum Yun @ 2025-09-17 11:08 UTC (permalink / raw)
  To: catalin.marinas, will, broonie, maz, oliver.upton, joey.gouly,
	james.morse, ardb, scott, suzuki.poulose, yuzenghui, mark.rutland
  Cc: linux-arm-kernel, kvmarm, linux-kernel, Yeoreum Yun

Since Armv9.6, FEAT_LSUI provides load/store instructions that allow
privileged code to access user memory without clearing the PSTATE.PAN
bit.

Adding only CONFIG_AS_HAS_LSUI is enough because the LSUI code uses
individual `.arch_extension` directives.
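
To illustrate why the assembler check alone is enough (a rough sketch,
not code from this patch; operand names are made up), each LSUI user
enables the extension locally inside its own asm block instead of
relying on a global -march option:

	asm volatile(
	"	.arch_extension lsui\n"
	"	ldtaddal	%w[arg], %w[old], %[addr]\n"
	: [old] "=r" (oldval), [addr] "+Q" (*uaddr)
	: [arg] "r" (oparg)
	: "memory");

so the only build-time requirement is an assembler that accepts the
".arch_extension lsui" directive, which is what $(as-instr) probes here.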

Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/Kconfig | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e9bbfacc35a6..89a1a3771ed5 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2239,6 +2239,11 @@ config ARM64_GCS
 
 endmenu # "v9.4 architectural features"
 
+config AS_HAS_LSUI
+	def_bool $(as-instr,.arch_extension lsui)
+	help
+	  Supported by LLVM 20+ and binutils 2.45+.
+
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y
-- 




* [PATCH v8 4/5] arm64: futex: refactor futex atomic operation
  2025-09-17 11:08 [PATCH v8 0/5] support FEAT_LSUI and apply it on futex atomic ops Yeoreum Yun
                   ` (2 preceding siblings ...)
  2025-09-17 11:08 ` [PATCH v8 3/5] arm64: Kconfig: Detect toolchain support for LSUI Yeoreum Yun
@ 2025-09-17 11:08 ` Yeoreum Yun
  2025-09-17 11:08 ` [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI Yeoreum Yun
  4 siblings, 0 replies; 13+ messages in thread
From: Yeoreum Yun @ 2025-09-17 11:08 UTC (permalink / raw)
  To: catalin.marinas, will, broonie, maz, oliver.upton, joey.gouly,
	james.morse, ardb, scott, suzuki.poulose, yuzenghui, mark.rutland
  Cc: linux-arm-kernel, kvmarm, linux-kernel, Yeoreum Yun

Refactor the futex atomic operations, which use the ll/sc method with
PSTATE.PAN cleared, in preparation for applying FEAT_LSUI to them.

Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/include/asm/futex.h | 128 +++++++++++++++++++++------------
 1 file changed, 82 insertions(+), 46 deletions(-)

diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index bc06691d2062..f8cb674bdb3f 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -7,17 +7,21 @@
 
 #include <linux/futex.h>
 #include <linux/uaccess.h>
+#include <linux/stringify.h>
 
 #include <asm/errno.h>
 
 #define FUTEX_MAX_LOOPS	128 /* What's the largest number you can think of? */
 
-#define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg)		\
-do {									\
+#define LLSC_FUTEX_ATOMIC_OP(op, insn)					\
+static __always_inline int						\
+__llsc_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval)	\
+{									\
 	unsigned int loops = FUTEX_MAX_LOOPS;				\
+	int ret, oldval, tmp;						\
 									\
 	uaccess_enable_privileged();					\
-	asm volatile(							\
+	asm volatile("// __llsc_futex_atomic_" #op "\n"			\
 "	prfm	pstl1strm, %2\n"					\
 "1:	ldxr	%w1, %2\n"						\
 	insn "\n"							\
@@ -35,45 +39,103 @@ do {									\
 	: "r" (oparg), "Ir" (-EAGAIN)					\
 	: "memory");							\
 	uaccess_disable_privileged();					\
-} while (0)
+									\
+	if (!ret)							\
+		*oval = oldval;						\
+									\
+	return ret;							\
+}
+
+LLSC_FUTEX_ATOMIC_OP(add, "add	%w3, %w1, %w5")
+LLSC_FUTEX_ATOMIC_OP(or,  "orr	%w3, %w1, %w5")
+LLSC_FUTEX_ATOMIC_OP(and, "and	%w3, %w1, %w5")
+LLSC_FUTEX_ATOMIC_OP(eor, "eor	%w3, %w1, %w5")
+LLSC_FUTEX_ATOMIC_OP(set, "mov	%w3, %w5")
+
+static __always_inline int
+__llsc_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
+{
+	int ret = 0;
+	unsigned int loops = FUTEX_MAX_LOOPS;
+	u32 val, tmp;
+
+	uaccess_enable_privileged();
+	asm volatile("//__llsc_futex_cmpxchg\n"
+"	prfm	pstl1strm, %2\n"
+"1:	ldxr	%w1, %2\n"
+"	eor	%w3, %w1, %w5\n"
+"	cbnz	%w3, 4f\n"
+"2:	stlxr	%w3, %w6, %2\n"
+"	cbz	%w3, 3f\n"
+"	sub	%w4, %w4, %w3\n"
+"	cbnz	%w4, 1b\n"
+"	mov	%w0, %w7\n"
+"3:\n"
+"	dmb	ish\n"
+"4:\n"
+	_ASM_EXTABLE_UACCESS_ERR(1b, 4b, %w0)
+	_ASM_EXTABLE_UACCESS_ERR(2b, 4b, %w0)
+	: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
+	: "r" (oldval), "r" (newval), "Ir" (-EAGAIN)
+	: "memory");
+	uaccess_disable_privileged();
+
+	if (!ret)
+		*oval = val;
+
+	return ret;
+}
+
+#define FUTEX_ATOMIC_OP(op)						\
+static __always_inline int						\
+__futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval)		\
+{									\
+	return __llsc_futex_atomic_##op(oparg, uaddr, oval);		\
+}
+
+FUTEX_ATOMIC_OP(add)
+FUTEX_ATOMIC_OP(or)
+FUTEX_ATOMIC_OP(and)
+FUTEX_ATOMIC_OP(eor)
+FUTEX_ATOMIC_OP(set)
+
+static __always_inline int
+__futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
+{
+	return __llsc_futex_cmpxchg(uaddr, oldval, newval, oval);
+}
 
 static inline int
 arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr)
 {
-	int oldval = 0, ret, tmp;
-	u32 __user *uaddr = __uaccess_mask_ptr(_uaddr);
+	int ret;
+	u32 __user *uaddr;
 
 	if (!access_ok(_uaddr, sizeof(u32)))
 		return -EFAULT;
 
+	uaddr = __uaccess_mask_ptr(_uaddr);
+
 	switch (op) {
 	case FUTEX_OP_SET:
-		__futex_atomic_op("mov	%w3, %w5",
-				  ret, oldval, uaddr, tmp, oparg);
+		ret = __futex_atomic_set(oparg, uaddr, oval);
 		break;
 	case FUTEX_OP_ADD:
-		__futex_atomic_op("add	%w3, %w1, %w5",
-				  ret, oldval, uaddr, tmp, oparg);
+		ret = __futex_atomic_add(oparg, uaddr, oval);
 		break;
 	case FUTEX_OP_OR:
-		__futex_atomic_op("orr	%w3, %w1, %w5",
-				  ret, oldval, uaddr, tmp, oparg);
+		ret = __futex_atomic_or(oparg, uaddr, oval);
 		break;
 	case FUTEX_OP_ANDN:
-		__futex_atomic_op("and	%w3, %w1, %w5",
-				  ret, oldval, uaddr, tmp, ~oparg);
+		ret = __futex_atomic_and(~oparg, uaddr, oval);
 		break;
 	case FUTEX_OP_XOR:
-		__futex_atomic_op("eor	%w3, %w1, %w5",
-				  ret, oldval, uaddr, tmp, oparg);
+		ret = __futex_atomic_eor(oparg, uaddr, oval);
 		break;
 	default:
 		ret = -ENOSYS;
 	}
 
-	if (!ret)
-		*oval = oldval;
-
 	return ret;
 }
 
@@ -81,40 +143,14 @@ static inline int
 futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
 			      u32 oldval, u32 newval)
 {
-	int ret = 0;
-	unsigned int loops = FUTEX_MAX_LOOPS;
-	u32 val, tmp;
 	u32 __user *uaddr;
 
 	if (!access_ok(_uaddr, sizeof(u32)))
 		return -EFAULT;
 
 	uaddr = __uaccess_mask_ptr(_uaddr);
-	uaccess_enable_privileged();
-	asm volatile("// futex_atomic_cmpxchg_inatomic\n"
-"	prfm	pstl1strm, %2\n"
-"1:	ldxr	%w1, %2\n"
-"	sub	%w3, %w1, %w5\n"
-"	cbnz	%w3, 4f\n"
-"2:	stlxr	%w3, %w6, %2\n"
-"	cbz	%w3, 3f\n"
-"	sub	%w4, %w4, %w3\n"
-"	cbnz	%w4, 1b\n"
-"	mov	%w0, %w7\n"
-"3:\n"
-"	dmb	ish\n"
-"4:\n"
-	_ASM_EXTABLE_UACCESS_ERR(1b, 4b, %w0)
-	_ASM_EXTABLE_UACCESS_ERR(2b, 4b, %w0)
-	: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
-	: "r" (oldval), "r" (newval), "Ir" (-EAGAIN)
-	: "memory");
-	uaccess_disable_privileged();
-
-	if (!ret)
-		*uval = val;
 
-	return ret;
+	return __futex_cmpxchg(uaddr, oldval, newval, uval);
 }
 
 #endif /* __ASM_FUTEX_H */
-- 




* [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI
  2025-09-17 11:08 [PATCH v8 0/5] support FEAT_LSUI and apply it on futex atomic ops Yeoreum Yun
                   ` (3 preceding siblings ...)
  2025-09-17 11:08 ` [PATCH v8 4/5] arm64: futex: refactor futex atomic operation Yeoreum Yun
@ 2025-09-17 11:08 ` Yeoreum Yun
  2025-09-17 12:57   ` Yeoreum Yun
  2025-09-17 13:04   ` Mark Rutland
  4 siblings, 2 replies; 13+ messages in thread
From: Yeoreum Yun @ 2025-09-17 11:08 UTC (permalink / raw)
  To: catalin.marinas, will, broonie, maz, oliver.upton, joey.gouly,
	james.morse, ardb, scott, suzuki.poulose, yuzenghui, mark.rutland
  Cc: linux-arm-kernel, kvmarm, linux-kernel, Yeoreum Yun

The current futex atomic operations are implemented with ll/sc
instructions and by clearing PSTATE.PAN.

Since Armv9.6, FEAT_LSUI supplies not only load/store instructions but
also atomic operations for user memory access from the kernel, so the
PSTATE.PAN bit no longer needs to be cleared.

With these instructions, some of the futex atomic operations no longer
need an ldxr/stlxr pair and can instead be implemented with a single
atomic operation supplied by FEAT_LSUI.

However, some futex atomic operations have no matching instruction,
e.g. eor, or cmpxchg at word size. Implement those with cas{al}t.

Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
 arch/arm64/include/asm/futex.h | 136 ++++++++++++++++++++++++++++++++-
 1 file changed, 135 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index f8cb674bdb3f..683291700ff5 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -9,6 +9,8 @@
 #include <linux/uaccess.h>
 #include <linux/stringify.h>
 
+#include <asm/alternative.h>
+#include <asm/alternative-macros.h>
 #include <asm/errno.h>
 
 #define FUTEX_MAX_LOOPS	128 /* What's the largest number you can think of? */
@@ -86,11 +88,143 @@ __llsc_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
 	return ret;
 }
 
+#ifdef CONFIG_AS_HAS_LSUI
+
+/*
+ * When the LSUI feature is present, the CPU also implements PAN, because
+ * FEAT_PAN has been mandatory since Armv8.1. Therefore, there is no need to
+ * call uaccess_ttbr0_enable()/uaccess_ttbr0_disable() around each LSUI
+ * operation.
+ */
+
+#define __LSUI_PREAMBLE	".arch_extension lsui\n"
+
+#define LSUI_FUTEX_ATOMIC_OP(op, asm_op, mb)				\
+static __always_inline int						\
+__lsui_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval)	\
+{									\
+	int ret = 0;							\
+	int oldval;							\
+									\
+	asm volatile("// __lsui_futex_atomic_" #op "\n"			\
+	__LSUI_PREAMBLE							\
+"1:	" #asm_op #mb "	%w3, %w2, %1\n"					\
+"2:\n"									\
+	_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)				\
+	: "+r" (ret), "+Q" (*uaddr), "=r" (oldval)			\
+	: "r" (oparg)							\
+	: "memory");							\
+									\
+	if (!ret)							\
+		*oval = oldval;						\
+									\
+	return ret;							\
+}
+
+LSUI_FUTEX_ATOMIC_OP(add, ldtadd, al)
+LSUI_FUTEX_ATOMIC_OP(or, ldtset, al)
+LSUI_FUTEX_ATOMIC_OP(andnot, ldtclr, al)
+LSUI_FUTEX_ATOMIC_OP(set, swpt, al)
+
+static __always_inline int
+__lsui_cmpxchg64(u64 __user *uaddr, u64 *oldval, u64 newval)
+{
+	int ret = 0;
+
+	asm volatile("// __lsui_cmpxchg64\n"
+	__LSUI_PREAMBLE
+"1:	casalt	%x2, %x3, %1\n"
+"2:\n"
+	_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
+	: "+r" (ret), "+Q" (*uaddr), "+r" (*oldval)
+	: "r" (newval)
+	: "memory");
+
+	return ret;
+}
+
+static __always_inline int
+__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
+{
+	u64 __user *uaddr_al;
+	u64 oval64, nval64, tmp;
+	static const u64 hi_mask = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ?
+		GENMASK_U64(63, 32): GENMASK_U64(31, 0);
+	static const u8 hi_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 32 : 0;
+	static const u8 lo_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 0 : 32;
+
+	uaddr_al = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
+	if (get_user(oval64, uaddr_al))
+		return -EFAULT;
+
+	if ((u32 __user *)uaddr_al != uaddr) {
+		nval64 = ((oval64 & ~hi_mask) | ((u64)newval << hi_shift));
+		oval64 = ((oval64 & ~hi_mask) | ((u64)oldval << hi_shift));
+	} else {
+		nval64 = ((oval64 & hi_mask) | ((u64)newval << lo_shift));
+		oval64 = ((oval64 & hi_mask) | ((u64)oldval << lo_shift));
+	}
+
+	tmp = oval64;
+
+	if (__lsui_cmpxchg64(uaddr_al, &oval64, nval64))
+		return -EFAULT;
+
+	if (tmp != oval64)
+		return -EAGAIN;
+
+	*oval = oldval;
+
+	return 0;
+}
+
+static __always_inline int
+__lsui_futex_atomic_and(int oparg, u32 __user *uaddr, int *oval)
+{
+	return __lsui_futex_atomic_andnot(~oparg, uaddr, oval);
+}
+
+static __always_inline int
+__lsui_futex_atomic_eor(int oparg, u32 __user *uaddr, int *oval)
+{
+	u32 oldval, newval;
+
+	/*
+	 * there are no ldteor/stteor instructions...
+	 */
+	if (get_user(oldval, uaddr))
+		return -EFAULT;
+
+	newval = oldval ^ oparg;
+
+	return __lsui_cmpxchg32(uaddr, oldval, newval, oval);
+
+}
+
+static __always_inline int
+__lsui_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
+{
+	return __lsui_cmpxchg32(uaddr, oldval, newval, oval);
+}
+
+#define __lsui_llsc_body(op, ...)					\
+({									\
+	alternative_has_cap_likely(ARM64_HAS_LSUI) ?			\
+		__lsui_##op(__VA_ARGS__) : __llsc_##op(__VA_ARGS__);	\
+})
+
+#else	/* CONFIG_AS_HAS_LSUI */
+
+#define __lsui_llsc_body(op, ...)	__llsc_##op(__VA_ARGS__)
+
+#endif	/* CONFIG_AS_HAS_LSUI */
+
+
 #define FUTEX_ATOMIC_OP(op)						\
 static __always_inline int						\
 __futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval)		\
 {									\
-	return __llsc_futex_atomic_##op(oparg, uaddr, oval);		\
+	return __lsui_llsc_body(futex_atomic_##op, oparg, uaddr, oval);	\
 }
 
 FUTEX_ATOMIC_OP(add)
-- 




* Re: [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI
  2025-09-17 11:08 ` [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI Yeoreum Yun
@ 2025-09-17 12:57   ` Yeoreum Yun
  2025-09-17 13:04   ` Mark Rutland
  1 sibling, 0 replies; 13+ messages in thread
From: Yeoreum Yun @ 2025-09-17 12:57 UTC (permalink / raw)
  To: catalin.marinas, will, broonie, maz, oliver.upton, joey.gouly,
	james.morse, ardb, scott, suzuki.poulose, yuzenghui, mark.rutland
  Cc: linux-arm-kernel, kvmarm, linux-kernel

Hi,

> +LSUI_FUTEX_ATOMIC_OP(add, ldtadd, al)
> +LSUI_FUTEX_ATOMIC_OP(or, ldtset, al)
> +LSUI_FUTEX_ATOMIC_OP(andnot, ldtclr, al)
> +LSUI_FUTEX_ATOMIC_OP(set, swpt, al)
> +
> +static __always_inline int
> +__lsui_cmpxchg64(u64 __user *uaddr, u64 *oldval, u64 newval)
> +{
> +	int ret = 0;
> +
> +	asm volatile("// __lsui_cmpxchg64\n"
> +	__LSUI_PREAMBLE
> +"1:	casalt	%x2, %x3, %1\n"
> +"2:\n"
> +	_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
> +	: "+r" (ret), "+Q" (*uaddr), "+r" (*oldval)
> +	: "r" (newval)
> +	: "memory");
> +
> +	return ret;
> +}
> +
> +static __always_inline int
> +__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> +{
> +	u64 __user *uaddr_al;
> +	u64 oval64, nval64, tmp;
> +	static const u64 hi_mask = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ?
> +		GENMASK_U64(63, 32): GENMASK_U64(31, 0);
> +	static const u8 hi_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 32 : 0;
> +	static const u8 lo_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 0 : 32;
> +
> +	uaddr_al = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> +	if (get_user(oval64, uaddr_al))
> +		return -EFAULT;
> +
> +	if ((u32 __user *)uaddr_al != uaddr) {
> +		nval64 = ((oval64 & ~hi_mask) | ((u64)newval << hi_shift));
> +		oval64 = ((oval64 & ~hi_mask) | ((u64)oldval << hi_shift));
> +	} else {
> +		nval64 = ((oval64 & hi_mask) | ((u64)newval << lo_shift));
> +		oval64 = ((oval64 & hi_mask) | ((u64)oldval << lo_shift));
> +	}
> +
> +	tmp = oval64;
> +
> +	if (__lsui_cmpxchg64(uaddr_al, &oval64, nval64))
> +		return -EFAULT;
> +
> +	if (tmp != oval64)
> +		return -EAGAIN;
> +
> +	*oval = oldval;
> +
> +	return 0;
> +}
> +

While looking at the code, I couldn't shake off some suspicion
because of the questions below:

1. Suppose there is structure:

struct s_test {
  u32 futex;
  u32 others;
};

Before CPU0 executes casalt on futex, CPU1 executes store32_rel() on
others. Can CPU0 observe CPU1's store32_rel(), given that casalt
operates on &futex while CPU1 operates on &others?

CPU0                                          CPU1
...                                           store32_rel(&s_test->others);

/// can this see CPU1's modification?
casalt(..., ..., &s_test->futex);


2. Suppose there is structure:

struct s_test {
  u32 others;
  u32 futex;
};

Then, can the "ldtr" below be reordered after the casalt?


  ldtr(&s_test->futex);
  ...
  casalt(..., ..., &s_test->others);


I think both cases can unintentionally break memory consistency from
the user's point of view...

Well, a dmb ish before the casalt could solve the above problem.
However, it might be better to go back to the former ll/sc method...?

Thanks!


--
Sincerely,
Yeoreum Yun



* Re: [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI
  2025-09-17 11:08 ` [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI Yeoreum Yun
  2025-09-17 12:57   ` Yeoreum Yun
@ 2025-09-17 13:04   ` Mark Rutland
  2025-09-17 13:35     ` Yeoreum Yun
  2025-09-17 13:42     ` Yeoreum Yun
  1 sibling, 2 replies; 13+ messages in thread
From: Mark Rutland @ 2025-09-17 13:04 UTC (permalink / raw)
  To: Yeoreum Yun
  Cc: catalin.marinas, will, broonie, maz, oliver.upton, joey.gouly,
	james.morse, ardb, scott, suzuki.poulose, yuzenghui,
	linux-arm-kernel, kvmarm, linux-kernel

On Wed, Sep 17, 2025 at 12:08:38PM +0100, Yeoreum Yun wrote:
> +static __always_inline int
> +__lsui_cmpxchg64(u64 __user *uaddr, u64 *oldval, u64 newval)
> +{
> +	int ret = 0;
> +
> +	asm volatile("// __lsui_cmpxchg64\n"
> +	__LSUI_PREAMBLE
> +"1:	casalt	%x2, %x3, %1\n"
> +"2:\n"
> +	_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
> +	: "+r" (ret), "+Q" (*uaddr), "+r" (*oldval)
> +	: "r" (newval)
> +	: "memory");
> +
> +	return ret;
> +}
> +
> +static __always_inline int
> +__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> +{
> +	u64 __user *uaddr_al;

Please use 'uaddr64' to match the other 64-bit variables.

I assume that the '_al' suffix is meant to be short for 'aligned', but I
think using '64' is more consistent and clearer.

> +	u64 oval64, nval64, tmp;

Likewise, 'orig64' would be clearer than 'tmp' here.

> +	static const u64 hi_mask = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ?
> +		GENMASK_U64(63, 32): GENMASK_U64(31, 0);
> +	static const u8 hi_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 32 : 0;
> +	static const u8 lo_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 0 : 32;
> +
> +	uaddr_al = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> +	if (get_user(oval64, uaddr_al))
> +		return -EFAULT;
> +
> +	if ((u32 __user *)uaddr_al != uaddr) {
> +		nval64 = ((oval64 & ~hi_mask) | ((u64)newval << hi_shift));
> +		oval64 = ((oval64 & ~hi_mask) | ((u64)oldval << hi_shift));
> +	} else {
> +		nval64 = ((oval64 & hi_mask) | ((u64)newval << lo_shift));
> +		oval64 = ((oval64 & hi_mask) | ((u64)oldval << lo_shift));
> +	}
> +
> +	tmp = oval64;
> +
> +	if (__lsui_cmpxchg64(uaddr_al, &oval64, nval64))
> +		return -EFAULT;
> +
> +	if (tmp != oval64)
> +		return -EAGAIN;

This means that we'll immediately return -EAGAIN upon a spurious failure
(where the adjacent 4 bytes have changed), whereas the LL/SC ops would
retry FUTEX_MAX_LOOPS before returning -EAGAIN.

I suspect we want to retry here (or in the immediate caller).

> +
> +	*oval = oldval;
> +
> +	return 0;
> +}

Aside from the retry issue, I *think* you can simplify this to something
like:

static __always_inline int
__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
{
	uaddr64 = (u64 __user *)PTR_ALIGN_DOWN(uaddr, sizeof(u64));
	u64 oval64, nval64, orig64;

	if (get_user(oval64, uaddr64)
		return -EFAULT;
	
	if (IS_ALIGNED(addr, sizeof(u64)) == IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))  {
		FIELD_MODIFY(GENMASK_U64(31, 0), &oval64, oldval);
		FIELD_MODIFY(GENMASK_U64(31, 0), &nval64, newval);
	} else {
		FIELD_MODIFY(GENMASK_U64(63, 32), &oval64, oldval);
		FIELD_MODIFY(GENMASK_U64(63, 32), &nval64, newval);
	}
	orig64 = oval64;

	if (__lsui_cmpxchg64(uaddr_al, &oval64, nval64))
		return -EFAULT;
	
	if (oval64 != orig64)
		return -EAGAIN;

	*oval = oldval;
	return 0;
}

Mark.

> +
> +static __always_inline int
> +__lsui_futex_atomic_and(int oparg, u32 __user *uaddr, int *oval)
> +{
> +	return __lsui_futex_atomic_andnot(~oparg, uaddr, oval);
> +}
> +
> +static __always_inline int
> +__lsui_futex_atomic_eor(int oparg, u32 __user *uaddr, int *oval)
> +{
> +	u32 oldval, newval;
> +
> +	/*
> +	 * there are no ldteor/stteor instructions...
> +	 */
> +	if (get_user(oldval, uaddr))
> +		return -EFAULT;
> +
> +	newval = oldval ^ oparg;
> +
> +	return __lsui_cmpxchg32(uaddr, oldval, newval, oval);
> +
> +}
> +
> +static __always_inline int
> +__lsui_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> +{
> +	return __lsui_cmpxchg32(uaddr, oldval, newval, oval);
> +}
> +
> +#define __lsui_llsc_body(op, ...)					\
> +({									\
> +	alternative_has_cap_likely(ARM64_HAS_LSUI) ?			\
> +		__lsui_##op(__VA_ARGS__) : __llsc_##op(__VA_ARGS__);	\
> +})
> +
> +#else	/* CONFIG_AS_HAS_LSUI */
> +
> +#define __lsui_llsc_body(op, ...)	__llsc_##op(__VA_ARGS__)
> +
> +#endif	/* CONFIG_AS_HAS_LSUI */
> +
> +
>  #define FUTEX_ATOMIC_OP(op)						\
>  static __always_inline int						\
>  __futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval)		\
>  {									\
> -	return __llsc_futex_atomic_##op(oparg, uaddr, oval);		\
> +	return __lsui_llsc_body(futex_atomic_##op, oparg, uaddr, oval);	\
>  }
>  
>  FUTEX_ATOMIC_OP(add)
> -- 
> 



* Re: [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI
  2025-09-17 13:04   ` Mark Rutland
@ 2025-09-17 13:35     ` Yeoreum Yun
  2025-09-17 13:57       ` Mark Rutland
  2025-09-18  9:11       ` Yeoreum Yun
  2025-09-17 13:42     ` Yeoreum Yun
  1 sibling, 2 replies; 13+ messages in thread
From: Yeoreum Yun @ 2025-09-17 13:35 UTC (permalink / raw)
  To: Mark Rutland
  Cc: catalin.marinas, will, broonie, maz, oliver.upton, joey.gouly,
	james.morse, ardb, scott, suzuki.poulose, yuzenghui,
	linux-arm-kernel, kvmarm, linux-kernel

Hi Mark,

> On Wed, Sep 17, 2025 at 12:08:38PM +0100, Yeoreum Yun wrote:
> > +static __always_inline int
> > +__lsui_cmpxchg64(u64 __user *uaddr, u64 *oldval, u64 newval)
> > +{
> > +	int ret = 0;
> > +
> > +	asm volatile("// __lsui_cmpxchg64\n"
> > +	__LSUI_PREAMBLE
> > +"1:	casalt	%x2, %x3, %1\n"
> > +"2:\n"
> > +	_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
> > +	: "+r" (ret), "+Q" (*uaddr), "+r" (*oldval)
> > +	: "r" (newval)
> > +	: "memory");
> > +
> > +	return ret;
> > +}
> > +
> > +static __always_inline int
> > +__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> > +{
> > +	u64 __user *uaddr_al;
>
> Please use 'uaddr64' to match the other 64-bit variables.
>
> I assume that the '_al' suffix is meant to be short for 'aligned', but I
> think using '64' is more consistent and clearer.
>
> > +	u64 oval64, nval64, tmp;
>
> Likewise, 'orig64' would be clearer than 'tmp' here.

Thanks for your suggestion.
>
> > +	static const u64 hi_mask = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ?
> > +		GENMASK_U64(63, 32): GENMASK_U64(31, 0);
> > +	static const u8 hi_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 32 : 0;
> > +	static const u8 lo_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 0 : 32;
> > +
> > +	uaddr_al = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> > +	if (get_user(oval64, uaddr_al))
> > +		return -EFAULT;
> > +
> > +	if ((u32 __user *)uaddr_al != uaddr) {
> > +		nval64 = ((oval64 & ~hi_mask) | ((u64)newval << hi_shift));
> > +		oval64 = ((oval64 & ~hi_mask) | ((u64)oldval << hi_shift));
> > +	} else {
> > +		nval64 = ((oval64 & hi_mask) | ((u64)newval << lo_shift));
> > +		oval64 = ((oval64 & hi_mask) | ((u64)oldval << lo_shift));
> > +	}
> > +
> > +	tmp = oval64;
> > +
> > +	if (__lsui_cmpxchg64(uaddr_al, &oval64, nval64))
> > +		return -EFAULT;
> > +
> > +	if (tmp != oval64)
> > +		return -EAGAIN;
>
> This means that we'll immediately return -EAGAIN upon a spurious failure
> (where the adjacent 4 bytes have changed), whereas the LL/SC ops would
> retry FUTEX_MAX_LOOPS before returning -EAGAIN.
>
> I suspect we want to retry here (or in the immediate caller).

Right. I've thought about it but at the time of writing,
I return -EAGAIN immediately. Let's wait for other people's comments.

>
> > +
> > +	*oval = oldval;
> > +
> > +	return 0;
> > +}
>
> Aside from the retry issue, I *think* you can simplify this to something
> like:
>
> static __always_inline int
> __lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> {
> 	uaddr64 = (u64 __user *)PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> 	u64 oval64, nval64, orig64;
>
> 	if (get_user(oval64, uaddr64)
> 		return -EFAULT;
>
> 	if (IS_ALIGNED(addr, sizeof(u64)) == IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))  {
> 		FIELD_MODIFY(GENMASK_U64(31, 0), &oval64, oldval);
> 		FIELD_MODIFY(GENMASK_U64(31, 0), &nval64, newval);
> 	} else {
> 		FIELD_MODIFY(GENMASK_U64(63, 32), &oval64, oldval);
> 		FIELD_MODIFY(GENMASK_U64(63, 32), &nval64, newval);
> 	}
> 	orig64 = oval64;
>
> 	if (__lsui_cmpxchg64(uaddr_al, &oval64, nval64))
> 		return -EFAULT;
>
> 	if (oval64 != orig64)
> 		return -EAGAIN;
>
> 	*oval = oldval;
> 	return 0;
> }

Hmm, I think this wouldn't cover the case below when big-endianness is used.

struct {
  u32 others = 0x55667788;
  u32 futex = 0x11223344;
};

In this case, memory layout would be:

  55 66 77 88 11 22 33 44

So, the value of fetched oval64 is 0x5566778811223344;

So, it should modify the GENMASK_U64(31, 0) fields.
But, it tries to modify GENMASK_U64(63, 32) fields.

Thanks!

[...]
--
Sincerely,
Yeoreum Yun



* Re: [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI
  2025-09-17 13:04   ` Mark Rutland
  2025-09-17 13:35     ` Yeoreum Yun
@ 2025-09-17 13:42     ` Yeoreum Yun
  1 sibling, 0 replies; 13+ messages in thread
From: Yeoreum Yun @ 2025-09-17 13:42 UTC (permalink / raw)
  To: Mark Rutland
  Cc: catalin.marinas, will, broonie, maz, oliver.upton, joey.gouly,
	james.morse, ardb, scott, suzuki.poulose, yuzenghui,
	linux-arm-kernel, kvmarm, linux-kernel

[...]
> static __always_inline int
> __lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> {
> 	uaddr64 = (u64 __user *)PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> 	u64 oval64, nval64, orig64;
>
> 	if (get_user(oval64, uaddr64)
> 		return -EFAULT;
>
> 	if (IS_ALIGNED(addr, sizeof(u64)) == IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))  {
> 		FIELD_MODIFY(GENMASK_U64(31, 0), &oval64, oldval);
> 		FIELD_MODIFY(GENMASK_U64(31, 0), &nval64, newval);
> 	} else {
> 		FIELD_MODIFY(GENMASK_U64(63, 32), &oval64, oldval);
> 		FIELD_MODIFY(GENMASK_U64(63, 32), &nval64, newval);
> 	}
> 	orig64 = oval64;
>
> 	if (__lsui_cmpxchg64(uaddr_al, &oval64, nval64))
> 		return -EFAULT;
>
> 	if (oval64 != orig64)
> 		return -EAGAIN;
>
> 	*oval = oldval;
> 	return 0;
> }

Oh, I misread the condition. Thanks for your suggestion.
Please ignore my previous email.

--
Sincerely,
Yeoreum Yun



* Re: [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI
  2025-09-17 13:35     ` Yeoreum Yun
@ 2025-09-17 13:57       ` Mark Rutland
  2025-09-17 14:07         ` Yeoreum Yun
  2025-09-18  9:11       ` Yeoreum Yun
  1 sibling, 1 reply; 13+ messages in thread
From: Mark Rutland @ 2025-09-17 13:57 UTC (permalink / raw)
  To: Yeoreum Yun
  Cc: catalin.marinas, will, broonie, maz, oliver.upton, joey.gouly,
	james.morse, ardb, scott, suzuki.poulose, yuzenghui,
	linux-arm-kernel, kvmarm, linux-kernel

On Wed, Sep 17, 2025 at 02:35:09PM +0100, Yeoreum Yun wrote:
> Hi Mark,

Hi Levi,

Please can you keep the relevant reply headers (i.e. the bit that says
"On ${DATE} ${PERSON} wrote:")? You kept yours from your first reply,
but dropped mine from the reply you're replying to, which is a bit
awkward for anyone following the thread.

> > Aside from the retry issue, I *think* you can simplify this to something
> > like:
> >
> > static __always_inline int
> > __lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> > {
> > 	uaddr64 = (u64 __user *)PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> > 	u64 oval64, nval64, orig64;
> >
> > 	if (get_user(oval64, uaddr64)
> > 		return -EFAULT;
> >
> > 	if (IS_ALIGNED(addr, sizeof(u64)) == IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))  {

Note: typo here, this should be 'uaddr', not 'addr'. Importantly it is
*NOT* 'uaddr64'

> > 		FIELD_MODIFY(GENMASK_U64(31, 0), &oval64, oldval);
> > 		FIELD_MODIFY(GENMASK_U64(31, 0), &nval64, newval);
> > 	} else {
> > 		FIELD_MODIFY(GENMASK_U64(63, 32), &oval64, oldval);
> > 		FIELD_MODIFY(GENMASK_U64(63, 32), &nval64, newval);
> > 	}
> > 	orig64 = oval64;
> >
> > 	if (__lsui_cmpxchg64(uaddr_al, &oval64, nval64))
> > 		return -EFAULT;
> >
> > 	if (oval64 != orig64)
> > 		return -EAGAIN;
> >
> > 	*oval = oldval;
> > 	return 0;
> > }
> 
> Hmm, I think this wouldn't cover the case below when big-endianness is used.
> 
> struct {
>   u32 others = 0x55667788;
>   u32 futex = 0x11223344;
> };
> 
> In this case, memory layout would be:
> 
>   55 66 77 88 11 22 33 44
> 
> So, the value of fetched oval64 is 0x5566778811223344;

Ok, so the entire struct is aligned to 8 bytes, and the 'futex' field is
4 bytes after that (and not itself aligned to 8 bytes). In that case:

	IS_ALIGNED(uaddr, sizeof(u64)) is false, because 'futex' is not
	aligned to 8 bytes.

	IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) is false, since this is
	big-endian.

... so the condition becomes:

	if (false == false)

... which is true, and hence we execute the first branch:

	FIELD_MODIFY(GENMASK_U64(31, 0), &oval64, oldval);
	FIELD_MODIFY(GENMASK_U64(31, 0), &nval64, newval);

> So, it should modify the GENMASK_U64(31, 0) fields.
> But, it tries to modify GENMASK_U64(63, 32) fields.

As above, I think the code does the right thing in this case, but the
typo didn't help -- sorry about that.

Mark.



* Re: [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI
  2025-09-17 13:57       ` Mark Rutland
@ 2025-09-17 14:07         ` Yeoreum Yun
  0 siblings, 0 replies; 13+ messages in thread
From: Yeoreum Yun @ 2025-09-17 14:07 UTC (permalink / raw)
  To: Mark Rutland
  Cc: catalin.marinas, will, broonie, maz, oliver.upton, joey.gouly,
	james.morse, ardb, scott, suzuki.poulose, yuzenghui,
	linux-arm-kernel, kvmarm, linux-kernel

Hi Mark,

[...]

> > > 		FIELD_MODIFY(GENMASK_U64(31, 0), &oval64, oldval);
> > > 		FIELD_MODIFY(GENMASK_U64(31, 0), &nval64, newval);
> > > 	} else {
> > > 		FIELD_MODIFY(GENMASK_U64(63, 32), &oval64, oldval);
> > > 		FIELD_MODIFY(GENMASK_U64(63, 32), &nval64, newval);
> > > 	}
> > > 	orig64 = oval64;
> > >
> > > 	if (__lsui_cmpxchg64(uaddr_al, &oval64, nval64))
> > > 		return -EFAULT;
> > >
> > > 	if (oval64 != orig64)
> > > 		return -EAGAIN;
> > >
> > > 	*oval = oldval;
> > > 	return 0;
> > > }
> >
> > Hmm, I think this wouldn't cover the case below when big-endianness is used.
> >
> > struct {
> >   u32 others = 0x55667788;
> >   u32 futex = 0x11223344;
> > };
> >
> > In this case, memory layout would be:
> >
> >   55 66 77 88 11 22 33 44
> >
> > So, the value of fetched oval64 is 0x5566778811223344;
>
> Ok, so the entire struct is aligned to 8 bytes, and the 'futex' field is
> 4 bytes after that (and not itself aligned to 8 bytes). In that case:
[...]

Sorry, and thanks for the explanation. I had misread the condition as

  if (IS_ALIGNED(uaddr, sizeof(u64)) && IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))

Again, sorry for my bad eyes :\


[...]

--
Sincerely,
Yeoreum Yun



* Re: [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI
  2025-09-17 13:35     ` Yeoreum Yun
  2025-09-17 13:57       ` Mark Rutland
@ 2025-09-18  9:11       ` Yeoreum Yun
  1 sibling, 0 replies; 13+ messages in thread
From: Yeoreum Yun @ 2025-09-18  9:11 UTC (permalink / raw)
  To: Mark Rutland
  Cc: catalin.marinas, will, broonie, maz, oliver.upton, joey.gouly,
	james.morse, ardb, scott, suzuki.poulose, yuzenghui,
	linux-arm-kernel, kvmarm, linux-kernel

Hi Mark,

[...]
> > > +	static const u64 hi_mask = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ?
> > > +		GENMASK_U64(63, 32): GENMASK_U64(31, 0);
> > > +	static const u8 hi_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 32 : 0;
> > > +	static const u8 lo_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 0 : 32;
> > > +
> > > +	uaddr_al = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> > > +	if (get_user(oval64, uaddr_al))
> > > +		return -EFAULT;
> > > +
> > > +	if ((u32 __user *)uaddr_al != uaddr) {
> > > +		nval64 = ((oval64 & ~hi_mask) | ((u64)newval << hi_shift));
> > > +		oval64 = ((oval64 & ~hi_mask) | ((u64)oldval << hi_shift));
> > > +	} else {
> > > +		nval64 = ((oval64 & hi_mask) | ((u64)newval << lo_shift));
> > > +		oval64 = ((oval64 & hi_mask) | ((u64)oldval << lo_shift));
> > > +	}
> > > +
> > > +	tmp = oval64;
> > > +
> > > +	if (__lsui_cmpxchg64(uaddr_al, &oval64, nval64))
> > > +		return -EFAULT;
> > > +
> > > +	if (tmp != oval64)
> > > +		return -EAGAIN;
> >
> > This means that we'll immediately return -EAGAIN upon a spurious failure
> > (where the adjacent 4 bytes have changed), whereas the LL/SC ops would
> > retry FUTEX_MAX_LOOPS before returning -EAGAIN.
> >
> > I suspect we want to retry here (or in the immediate caller).
>
> Right. I've thought about it but at the time of writing,
> I return -EAGAIN immediately. Let's wait for other people's comments.

Taking a step back, I realised my thought was wrong, as you pointed out.

So, what about this?

static __always_inline int
__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
{
	u64 __user *uaddr64;
	bool futex_on_lo;
	int ret = -EAGAIN, i;
	u32 other, orig_other;
	union {
		struct futex_on_lo {
			u32 val;
			u32 other;
		} lo_futex;

		struct futex_on_hi {
			u32 other;
			u32 val;
		} hi_futex;

		u64 raw;
	} oval64, orig64, nval64;

	uaddr64 = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
	futex_on_lo = (IS_ALIGNED((unsigned long)uaddr, sizeof(u64)) ==
			IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN));

	for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
		if (get_user(oval64.raw, uaddr64))
			return -EFAULT;

		nval64.raw = oval64.raw;

		if (futex_on_lo) {
			oval64.lo_futex.val = oldval;
			nval64.lo_futex.val = newval;
		} else {
			oval64.hi_futex.val = oldval;
			nval64.hi_futex.val = newval;
		}

		orig64.raw = oval64.raw;

		if (__lsui_cmpxchg64(uaddr64, &oval64.raw, nval64.raw))
			return -EFAULT;

		if (futex_on_lo) {
			oldval = oval64.lo_futex.val;
			other = oval64.lo_futex.other;
			orig_other = orig64.lo_futex.other;
		} else {
			oldval = oval64.hi_futex.val;
			other = oval64.hi_futex.other;
			orig_other = orig64.hi_futex.other;
		}

		if (other == orig_other) {
			ret = 0;
			break;
		}
	}

	if (!ret)
		*oval = oldval;

	return ret;
}

Unfortunately, if there is high contention on "other",
I think returning -EAGAIN is the best we can do.

Am I missing something?

Thanks.

--
Sincerely,
Yeoreum Yun



Thread overview: 13+ messages
2025-09-17 11:08 [PATCH v8 0/5] support FEAT_LSUI and apply it on futex atomic ops Yeoreum Yun
2025-09-17 11:08 ` [PATCH v8 1/5] arm64: cpufeature: add FEAT_LSUI Yeoreum Yun
2025-09-17 11:08 ` [PATCH v8 2/5] KVM: arm64: expose FEAT_LSUI to guest Yeoreum Yun
2025-09-17 11:08 ` [PATCH v8 3/5] arm64: Kconfig: Detect toolchain support for LSUI Yeoreum Yun
2025-09-17 11:08 ` [PATCH v8 4/5] arm64: futex: refactor futex atomic operation Yeoreum Yun
2025-09-17 11:08 ` [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI Yeoreum Yun
2025-09-17 12:57   ` Yeoreum Yun
2025-09-17 13:04   ` Mark Rutland
2025-09-17 13:35     ` Yeoreum Yun
2025-09-17 13:57       ` Mark Rutland
2025-09-17 14:07         ` Yeoreum Yun
2025-09-18  9:11       ` Yeoreum Yun
2025-09-17 13:42     ` Yeoreum Yun
