From: Yeoreum Yun <yeoreum.yun@arm.com>
To: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
broonie@kernel.org, maz@kernel.org, oliver.upton@linux.dev,
joey.gouly@arm.com, james.morse@arm.com, ardb@kernel.org,
scott@os.amperecomputing.com, suzuki.poulose@arm.com,
yuzenghui@huawei.com, linux-arm-kernel@lists.infradead.org,
kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org
Subject: Re: [PATCH RESEND v7 4/6] arm64: futex: refactor futex atomic operation
Date: Tue, 16 Sep 2025 14:27:37 +0100 [thread overview]
Message-ID: <aMllyaObyciHEEFX@e129823.arm.com> (raw)
In-Reply-To: <aMlcf5oUNZU65u_I@J2N7QTR9R3.cambridge.arm.com>
Hi Mark,
[...]
> > diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
> > index 1d6d9f856ac5..0aeda7ced2c0 100644
> > --- a/arch/arm64/include/asm/futex.h
> > +++ b/arch/arm64/include/asm/futex.h
> > @@ -126,6 +126,60 @@ LSUI_FUTEX_ATOMIC_OP(or, ldtset, al)
> > LSUI_FUTEX_ATOMIC_OP(andnot, ldtclr, al)
> > LSUI_FUTEX_ATOMIC_OP(set, swpt, al)
> >
> > +
> > +#define LSUI_CMPXCHG_HELPER(suffix, start_bit) \
> > +static __always_inline int \
> > +__lsui_cmpxchg_helper_##suffix(u64 __user *uaddr, u32 oldval, u32 newval) \
> > +{ \
> > + int ret = 0; \
> > + u64 oval, nval, tmp; \
> > + \
> > + asm volatile("//__lsui_cmpxchg_helper_" #suffix "\n" \
> > + __LSUI_PREAMBLE \
> > +" prfm pstl1strm, %2\n" \
> > +"1: ldtr %x1, %2\n" \
> > +" mov %x3, %x1\n" \
> > +" bfi %x1, %x5, #" #start_bit ", #32\n" \
> > +" bfi %x3, %x6, #" #start_bit ", #32\n" \
> > +" mov %x4, %x1\n" \
> > +"2: caslt %x1, %x3, %2\n" \
> > +" sub %x1, %x1, %x4\n" \
> > +" cbz %x1, 3f\n" \
> > +" mov %w0, %w7\n" \
> > +"3:\n" \
> > +" dmb ish\n" \
> > +"4:\n" \
> > + _ASM_EXTABLE_UACCESS_ERR(1b, 4b, %w0) \
> > + _ASM_EXTABLE_UACCESS_ERR(2b, 4b, %w0) \
> > + : "+r" (ret), "=&r" (oval), "+Q" (*uaddr), "=&r" (nval), "=&r" (tmp) \
> > + : "r" (oldval), "r" (newval), "Ir" (-EAGAIN) \
> > + : "memory"); \
> > + \
> > + return ret; \
> > +}
> > +
> > +LSUI_CMPXCHG_HELPER(lo, 0)
> > +LSUI_CMPXCHG_HELPER(hi, 32)
> > +
> > +static __always_inline int
> > +__lsui_cmpxchg_helper(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> > +{
> > + int ret;
> > + unsigned long uaddr_al;
> > +
> > + uaddr_al = ALIGN_DOWN((unsigned long)uaddr, sizeof(u64));
> > +
> > + if (uaddr_al != (unsigned long)uaddr)
> > + ret = __lsui_cmpxchg_helper_hi((u64 __user *)uaddr_al, oldval, newval);
> > + else
> > + ret = __lsui_cmpxchg_helper_lo((u64 __user *)uaddr_al, oldval, newval);
> > +
> > + if (!ret)
> > + *oval = oldval;
> > +
> > + return ret;
> > +}
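(As a side note for other readers: the bfi-based lane insertion in the helpers
above amounts to the following bit manipulation in C. This is only an
illustrative user-space sketch, not kernel code; `start_bit` is 0 for the
aligned case and 32 for the misaligned one, matching the lo/hi helpers.)

```c
#include <stdint.h>

/*
 * C equivalent of: bfi %xd, %xs, #start_bit, #32
 * Insert the low 32 bits of val into dst at bit position start_bit,
 * leaving the other 32-bit lane of dst untouched.
 */
static uint64_t bfi32(uint64_t dst, uint64_t val, unsigned int start_bit)
{
	uint64_t mask = 0xffffffffULL << start_bit;

	return (dst & ~mask) | ((val & 0xffffffffULL) << start_bit);
}
```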
>
> I think Will expects that you do more of this in C, e.g. have a basic
> user cmpxchg on a 64-bit type, e.g.
>
> /*
> * NOTE: *oldp is NOT updated if a fault is taken.
> */
> static __always_inline int
> user_cmpxchg64_release(u64 __user *addr, u64 *oldp, u64 new)
> {
> int err = 0;
>
> asm volatile(
> __LSUI_PREAMBLE
> "1: caslt %x[old], %x[new], %[addr]\n"
> "2:\n"
> _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w[err])
> : [err] "+r" (err),
> [addr] "+Q" (*addr),
> [old] "+r" (*oldp)
> : [new] "r" (new)
> : "memory"
> );
>
> return err;
> }
>
> That should be the *only* assembly you need to implement.
>
> Atop that, have a wrapper that uses get_user() and that helper above to
> implement the 32-bit user cmpxchg, with all the bit manipulation in C:
Thanks for your suggestion, but one small question:
I think it's enough to use unsafe_get_user() instead of get_user() here,
since when FEAT_LSUI is enabled there's no need to call
uaccess_ttbr0_enable()/disable().
Or am I missing something?
>
> /*
> * NOTE: *oldp is NOT updated if a fault is taken.
> */
> static int user_cmpxchg32_release(u32 __user *addr, u32 *oldp, u32 new)
> {
> u64 __user *addr64 = PTR_ALIGN_DOWN(addr, sizeof(u64));
> u64 old64, new64;
> int ret;
>
> if (get_user(old64, addr64))
> return -EFAULT;
>
> /*
> * TODO: merge '*oldp' into 'old64', and 'new' into 'new64',
> * taking into account endianness and alignment
> */
>
> ret = user_cmpxchg64_release(addr64, &old64, new64);
> if (ret)
> return ret;
>
> /*
> * TODO: extract '*oldp' from 'old64', taking into account
> * endianness and alignment.
> */
I think the extraction isn't needed, since the futex cmpxchg op passes the
"oldval" argument, which has already been fetched. So on success it seems
enough to return the value that was passed in.
But I'm not sure which is better: splitting the cmpxchg64 and cmpxchg32
interfaces, or keeping a single helper -- user_cmpxchg_release
(__lsui_cmpxchg_helper).
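For what it's worth, the merge/extract steps from the TODOs might look
roughly like this in plain C. This is a little-endian user-space sketch
only: __atomic_compare_exchange_n stands in for the caslt-based 64-bit
primitive, plain loads stand in for get_user(), and names like `shift`
and `mask` are mine, not from the patch.

```c
#include <stdint.h>

/*
 * User-space stand-in for the 64-bit user cmpxchg (caslt in the kernel).
 * On mismatch, *oldp is updated with the observed value. Always returns 0
 * here, since a user-space sketch cannot fault.
 */
static int cmpxchg64_release(uint64_t *addr, uint64_t *oldp, uint64_t new)
{
	__atomic_compare_exchange_n(addr, oldp, new, 0,
				    __ATOMIC_RELEASE, __ATOMIC_RELAXED);
	return 0;
}

/*
 * 32-bit cmpxchg built on the 64-bit primitive, with all the bit
 * manipulation in C. Assumes a naturally aligned u32 on little-endian.
 * The caller distinguishes success from mismatch by comparing *oldp
 * with the value it passed in, as the futex code does.
 */
static int cmpxchg32_release(uint32_t *addr, uint32_t *oldp, uint32_t new)
{
	uint64_t *addr64 = (uint64_t *)((uintptr_t)addr & ~(uintptr_t)7);
	/* Little-endian: a 4-byte-offset u32 lives in the high lane. */
	unsigned int shift = ((uintptr_t)addr & 4) * 8;
	uint64_t mask = 0xffffffffULL << shift;
	uint64_t old64, new64;
	int ret;

	old64 = *addr64;	/* get_user() in the kernel version */

	/* Merge the expected and new 32-bit values into their lane. */
	old64 = (old64 & ~mask) | ((uint64_t)*oldp << shift);
	new64 = (old64 & ~mask) | ((uint64_t)new << shift);

	ret = cmpxchg64_release(addr64, &old64, new64);
	if (ret)
		return ret;

	/* Extract the observed 32-bit value back out. */
	*oldp = (uint32_t)(old64 >> shift);
	return 0;
}
```

Note the subtlety: if the *other* 32-bit half changes between the load and
the cmpxchg, the 64-bit compare fails even though the target word still
matches, so the kernel version has to retry (the -EAGAIN path in the patch).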
[...]
Thanks.
--
Sincerely,
Yeoreum Yun