public inbox for linux-arm-kernel@lists.infradead.org
From: Yeoreum Yun <yeoreum.yun@arm.com>
To: Will Deacon <will@kernel.org>
Cc: catalin.marinas@arm.com, maz@kernel.org, broonie@kernel.org,
	oliver.upton@linux.dev, miko.lenczewski@arm.com,
	kevin.brodsky@arm.com, ardb@kernel.org, suzuki.poulose@arm.com,
	lpieralisi@kernel.org, yangyicong@hisilicon.com,
	scott@os.amperecomputing.com, joey.gouly@arm.com,
	yuzenghui@huawei.com, pbonzini@redhat.com, shuah@kernel.org,
	mark.rutland@arm.com, arnd@arndb.de,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev,
	kvm@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: Re: [PATCH v11 RESEND 6/9] arm64: futex: support futex with FEAT_LSUI
Date: Mon, 19 Jan 2026 22:17:47 +0000	[thread overview]
Message-ID: <aW6tix/GeqgXpTUN@e129823.arm.com> (raw)
In-Reply-To: <aW5dzb0ldp8u8Rdm@willie-the-truck>

Hi Will,

> On Sun, Dec 14, 2025 at 11:22:45AM +0000, Yeoreum Yun wrote:
> > Current futex atomic operations are implemented with ll/sc instructions
> > and clearing PSTATE.PAN.
> >
> > Since Armv9.6, FEAT_LSUI supplies not only load/store instructions but
> > also atomic operations for user memory access from the kernel, so it no
> > longer needs to clear the PSTATE.PAN bit.
> >
> > With these instructions, some of the futex atomic operations no longer
> > need to be implemented with an ldxr/stlxr pair and can instead be
> > implemented with a single atomic operation supplied by FEAT_LSUI.
> >
> > However, some futex atomic operations have no matching instruction,
> > e.g. eor or cmpxchg at word size. Those operations are implemented
> > with cas{al}t.
> >
> > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > ---
> >  arch/arm64/include/asm/futex.h | 180 ++++++++++++++++++++++++++++++++-
> >  1 file changed, 178 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
> > index f8cb674bdb3f..6778ff7e1c0e 100644
> > --- a/arch/arm64/include/asm/futex.h
> > +++ b/arch/arm64/include/asm/futex.h
> > @@ -9,6 +9,8 @@
> >  #include <linux/uaccess.h>
> >  #include <linux/stringify.h>
> >
> > +#include <asm/alternative.h>
> > +#include <asm/alternative-macros.h>
> >  #include <asm/errno.h>
> >
> >  #define FUTEX_MAX_LOOPS	128 /* What's the largest number you can think of? */
> > @@ -86,11 +88,185 @@ __llsc_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> >  	return ret;
> >  }
> >
> > +#ifdef CONFIG_AS_HAS_LSUI
> > +
> > +/*
> > + * When the LSUI feature is present, the CPU also implements PAN, because
> > + * FEAT_PAN has been mandatory since Armv8.1. Therefore, there is no need to
> > + * call uaccess_ttbr0_enable()/uaccess_ttbr0_disable() around each LSUI
> > + * operation.
> > + */
>
> I'd prefer not to rely on these sorts of properties because:
>
>   - CPU bugs happen all the time
>   - Virtualisation and idreg overrides mean illegal feature combinations
>     can show up
>   - The architects sometimes change their mind
>
> So let's either drop the assumption that we have PAN if LSUI *or* actually
> test that someplace during feature initialisation.

Thanks for the detailed explanation. I'll drop that assumption and
call uaccess_ttbr0_enable()/disable() instead.

>
> > +
> > +#define __LSUI_PREAMBLE	".arch_extension lsui\n"
> > +
> > +#define LSUI_FUTEX_ATOMIC_OP(op, asm_op, mb)				\
> > +static __always_inline int						\
> > +__lsui_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval)	\
> > +{									\
> > +	int ret = 0;							\
> > +	int oldval;							\
> > +									\
> > +	asm volatile("// __lsui_futex_atomic_" #op "\n"			\
> > +	__LSUI_PREAMBLE							\
> > +"1:	" #asm_op #mb "	%w3, %w2, %1\n"					\
>
> What's the point in separating the barrier suffix from the rest of the
> instruction mnemonic? All the callers use -AL.

Agreed. I'll remove this.

>
> > +"2:\n"									\
> > +	_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)				\
> > +	: "+r" (ret), "+Q" (*uaddr), "=r" (oldval)			\
> > +	: "r" (oparg)							\
> > +	: "memory");							\
> > +									\
> > +	if (!ret)							\
> > +		*oval = oldval;						\
> > +									\
> > +	return ret;							\
> > +}
> > +
> > +LSUI_FUTEX_ATOMIC_OP(add, ldtadd, al)
> > +LSUI_FUTEX_ATOMIC_OP(or, ldtset, al)
> > +LSUI_FUTEX_ATOMIC_OP(andnot, ldtclr, al)
> > +LSUI_FUTEX_ATOMIC_OP(set, swpt, al)
> > +
> > +static __always_inline int
> > +__lsui_cmpxchg64(u64 __user *uaddr, u64 *oldval, u64 newval)
> > +{
> > +	int ret = 0;
> > +
> > +	asm volatile("// __lsui_cmpxchg64\n"
> > +	__LSUI_PREAMBLE
> > +"1:	casalt	%x2, %x3, %1\n"
>
>
> How bizarre, they changed the order of the AL and T compared to SWPTAL.
> Fair enough...
>
> Also, I don't think you need the 'x' prefix on the 64-bit variables.

Right. I'll remove the useless prefix.

>
> > +"2:\n"
> > +	_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
> > +	: "+r" (ret), "+Q" (*uaddr), "+r" (*oldval)
> > +	: "r" (newval)
> > +	: "memory");
>
> Don't you need to update *oldval here if the CAS didn't fault?

No. If the CAS doesn't fault, oldval is already updated (casalt writes
the observed value back into it).

>
> > +
> > +	return ret;
> > +}
> > +
> > +static __always_inline int
> > +__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> > +{
> > +	u64 __user *uaddr64;
> > +	bool futex_on_lo;
> > +	int ret = -EAGAIN, i;
> > +	u32 other, orig_other;
> > +	union {
> > +		struct futex_on_lo {
> > +			u32 val;
> > +			u32 other;
> > +		} lo_futex;
> > +
> > +		struct futex_on_hi {
> > +			u32 other;
> > +			u32 val;
> > +		} hi_futex;
> > +
> > +		u64 raw;
> > +	} oval64, orig64, nval64;
> > +
> > +	uaddr64 = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> > +	futex_on_lo = (IS_ALIGNED((unsigned long)uaddr, sizeof(u64)) ==
> > +			IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN));
>
> Just make LSUI depend on !CPU_BIG_ENDIAN in Kconfig. The latter already
> depends on BROKEN and so we'll probably drop it soon anyway. There's
> certainly no need to care about it for new features and it should simplify
> the code you have here if you can assume little-endian.

Thanks. Then I'll enable the LSUI feature only when !CPU_BIG_ENDIAN.
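For reference, the futex_on_lo computation being simplified here can be modelled in plain C. This is only a sketch of the logic under discussion, not kernel code; futex_on_low_half() is a hypothetical name:

```c
#include <stdint.h>

/*
 * A 32-bit futex lives in an 8-byte container word. On a little-endian
 * CPU, an 8-byte-aligned futex occupies the low 32 bits of that u64;
 * at addr+4 it occupies the high 32 bits. On big-endian the mapping
 * flips, which is exactly the case the Kconfig dependency removes.
 * Returns 1 when the futex occupies the low half of the container.
 */
static int futex_on_low_half(uintptr_t uaddr, int little_endian)
{
	int aligned8 = (uaddr % 8) == 0;

	return aligned8 == little_endian;
}
```

With LSUI restricted to !CPU_BIG_ENDIAN, little_endian is always 1 and the helper collapses to a plain alignment check.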

>
> > +
> > +	for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
> > +		if (get_user(oval64.raw, uaddr64))
> > +			return -EFAULT;
>
> Since oldval is passed to us as an argument, can we get away with a
> 32-bit get_user() here?

It's not a problem, but is there any significant difference?

>
> > +
> > +		nval64.raw = oval64.raw;
> > +
> > +		if (futex_on_lo) {
> > +			oval64.lo_futex.val = oldval;
> > +			nval64.lo_futex.val = newval;
> > +		} else {
> > +			oval64.hi_futex.val = oldval;
> > +			nval64.hi_futex.val = newval;
> > +		}
> > +
> > +		orig64.raw = oval64.raw;
> > +
> > +		if (__lsui_cmpxchg64(uaddr64, &oval64.raw, nval64.raw))
> > +			return -EFAULT;
> > +
> > +		if (futex_on_lo) {
> > +			oldval = oval64.lo_futex.val;
> > +			other = oval64.lo_futex.other;
> > +			orig_other = orig64.lo_futex.other;
> > +		} else {
> > +			oldval = oval64.hi_futex.val;
> > +			other = oval64.hi_futex.other;
> > +			orig_other = orig64.hi_futex.other;
> > +		}
> > +
> > +		if (other == orig_other) {
> > +			ret = 0;
> > +			break;
> > +		}
> > +	}
> > +
> > +	if (!ret)
> > +		*oval = oldval;
>
> Shouldn't we set *oval to the value we got back from the CAS?

Since this is the success case, the value returned by the CAS and
oldval must be the same, so it makes no difference whether we use the
value we got back from the CAS. Otherwise, an error is returned and
*oval doesn't matter to futex_atomic_cmpxchg_inatomic().

>
> > +
> > +	return ret;
> > +}
> > +
> > +static __always_inline int
> > +__lsui_futex_atomic_and(int oparg, u32 __user *uaddr, int *oval)
> > +{
> > +	return __lsui_futex_atomic_andnot(~oparg, uaddr, oval);
>
> Please a comment about the bitwise negation of oparg here as we're undoing
> the one from the caller.

I see. Thanks!
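The double negation being asked about rests on a simple identity: since ldtclr performs an AND-NOT, a plain AND falls out of val & oparg == val & ~(~oparg). A minimal plain-C sketch (hypothetical helper names, not the kernel functions):

```c
#include <stdint.h>

/* What ldtclr computes: clear the bits set in mask. */
static uint32_t andnot32(uint32_t val, uint32_t mask)
{
	return val & ~mask;
}

/* The futex core negates oparg once for FUTEX_OP_ANDN; negating it
 * again before the AND-NOT primitive yields a plain AND. */
static uint32_t and_via_andnot(uint32_t val, uint32_t oparg)
{
	return andnot32(val, ~oparg);
}
```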

>
> > +}
> > +
> > +static __always_inline int
> > +__lsui_futex_atomic_eor(int oparg, u32 __user *uaddr, int *oval)
> > +{
> > +	u32 oldval, newval, val;
> > +	int ret, i;
> > +
> > +	/*
> > +	 * there are no ldteor/stteor instructions...
> > +	 */
> > +	for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
> > +		if (get_user(oldval, uaddr))
> > +			return -EFAULT;
> > +
> > +		newval = oldval ^ oparg;
> > +
> > +		ret = __lsui_cmpxchg32(uaddr, oldval, newval, &val);
> > +		if (ret)
> > +			return ret;
> > +
> > +		if (val == oldval) {
> > +			*oval = val;
> > +			return 0;
> > +		}
> > +	}
> > +
> > +	return -EAGAIN;
> > +}
> > +
> > +static __always_inline int
> > +__lsui_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> > +{
> > +	return __lsui_cmpxchg32(uaddr, oldval, newval, oval);
> > +}
> > +
> > +#define __lsui_llsc_body(op, ...)					\
> > +({									\
> > +	alternative_has_cap_likely(ARM64_HAS_LSUI) ?			\
>
> This doesn't seem like it should be the "likely" case just yet?

Okay. I'll change it to "unlikely".

--
Sincerely,
Yeoreum Yun



Thread overview: 37+ messages
2025-12-14 11:22 [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 1/9] arm64: cpufeature: add FEAT_LSUI Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 2/9] KVM: arm64: expose FEAT_LSUI to guest Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 3/9] KVM: arm64: kselftest: set_id_regs: add test for FEAT_LSUI Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 4/9] arm64: Kconfig: Detect toolchain support for LSUI Yeoreum Yun
2026-01-19 15:50   ` Will Deacon
2026-01-19 15:54     ` Mark Brown
2026-01-20 11:35       ` Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 5/9] arm64: futex: refactor futex atomic operation Yeoreum Yun
2026-01-19 15:57   ` Will Deacon
2026-01-19 22:19     ` Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 6/9] arm64: futex: support futex with FEAT_LSUI Yeoreum Yun
2026-01-19 16:37   ` Will Deacon
2026-01-19 22:17     ` Yeoreum Yun [this message]
2026-01-20 15:44       ` Yeoreum Yun
2026-01-21 13:48       ` Will Deacon
2026-01-21 14:16         ` Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 7/9] arm64: separate common LSUI definitions into lsui.h Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 8/9] arm64: armv8_deprecated: convert user_swpX to inline function Yeoreum Yun
2025-12-14 11:22 ` [PATCH v11 RESEND 9/9] arm64: armv8_deprecated: apply FEAT_LSUI for swpX emulation Yeoreum Yun
2025-12-15  9:33   ` Marc Zyngier
2025-12-15  9:56     ` Yeoreum Yun
2026-01-19 15:34       ` Will Deacon
2026-01-19 22:32         ` Yeoreum Yun
2026-01-20  9:32           ` Yeoreum Yun
2026-01-20  9:46           ` Mark Rutland
2026-01-20 10:07             ` Yeoreum Yun
2026-01-20 11:50               ` Will Deacon
2026-01-20 12:14                 ` Yeoreum Yun
2026-01-20 17:59                 ` Yeoreum Yun
2026-01-21 13:56                   ` Will Deacon
2026-01-21 14:51                     ` Yeoreum Yun
2026-01-21 16:20                       ` Will Deacon
2026-01-21 16:31                         ` Yeoreum Yun
2026-01-21 16:36                           ` Will Deacon
2026-01-21 16:51                             ` Yeoreum Yun
2025-12-31 10:07 ` [PATCH v11 RESEND 0/9] support FEAT_LSUI Yeoreum Yun
