From: Mark Rutland <mark.rutland@arm.com>
To: Yeoreum Yun <yeoreum.yun@arm.com>
Cc: catalin.marinas@arm.com, will@kernel.org, broonie@kernel.org,
oliver.upton@linux.dev, ardb@kernel.org, frederic@kernel.org,
james.morse@arm.com, joey.gouly@arm.com,
scott@os.amperecomputing.com, maz@kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 4/7] arm64/futex: move futex atomic logic with clearing PAN bit
Date: Mon, 21 Jul 2025 11:56:48 +0100 [thread overview]
Message-ID: <aH4c8J3-qp9guE__@J2N7QTR9R3> (raw)
In-Reply-To: <20250721083618.2743569-5-yeoreum.yun@arm.com>
On Mon, Jul 21, 2025 at 09:36:15AM +0100, Yeoreum Yun wrote:
> Move the current futex atomic logic, which uses the LL/SC method with
> clearing of PSTATE.PAN, to a separate file (futex_ll_sc_u.h) so that
> the former method is used only when FEAT_LSUI isn't supported.
This isn't moving logic, this is *duplicating* the existing logic. As of
this patch, this logic in the <asm/futex_ll_sc_u.h> header is unused,
and the existing logic in <asm/futex.h> is still used as-is.
Please refactor the existing logic first. The deletion of the existing
code should happen at the same time as this addition. That way it's
possible to see that the deleted logic corresponds to what is being
added in the header, and it's generally nicer for bisection.
Mark.
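(For readers unfamiliar with the pattern being moved: the quoted
FUTEX_ATOMIC_OP() loop below retries a load/modify/store-conditional
sequence a bounded number of times and gives up with -EAGAIN under
sustained contention. A rough userspace analogue, using C11 atomics as a
stand-in for LDXR/STLXR — the helper name and MAX_LOOPS constant are
invented for this sketch, and the kernel version additionally toggles
PSTATE.PAN around the user access:)

```c
/*
 * Userspace sketch of the bounded LL/SC retry loop in the quoted
 * FUTEX_ATOMIC_OP() macro. Illustrative only; not the kernel code.
 */
#include <errno.h>
#include <stdatomic.h>

#define MAX_LOOPS 128	/* stands in for LL_SC_MAX_LOOPS */

static int futex_atomic_add_sketch(int oparg, _Atomic unsigned int *uaddr,
				   int *oval)
{
	unsigned int loops = MAX_LOOPS;
	unsigned int old;

	do {
		old = atomic_load_explicit(uaddr, memory_order_relaxed);
		/* compare-exchange stands in for the LDXR/STLXR pair */
		if (atomic_compare_exchange_weak(uaddr, &old, old + oparg)) {
			*oval = (int)old;	/* report the previous value */
			return 0;
		}
	} while (--loops);

	return -EAGAIN;		/* give up after bounded retries */
}
```

The bounded loop matters: an unbounded LL/SC retry under contention could
stall in the kernel, so the real code caps the retries and lets the caller
decide what to do about -EAGAIN.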
>
> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> ---
> arch/arm64/include/asm/futex_ll_sc_u.h | 115 +++++++++++++++++++++++++
> 1 file changed, 115 insertions(+)
> create mode 100644 arch/arm64/include/asm/futex_ll_sc_u.h
>
> diff --git a/arch/arm64/include/asm/futex_ll_sc_u.h b/arch/arm64/include/asm/futex_ll_sc_u.h
> new file mode 100644
> index 000000000000..6702ba66f1b2
> --- /dev/null
> +++ b/arch/arm64/include/asm/futex_ll_sc_u.h
> @@ -0,0 +1,115 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (C) 2025 Arm Ltd.
> + */
> +#ifndef __ASM_FUTEX_LL_SC_U_H
> +#define __ASM_FUTEX_LL_SC_U_H
> +
> +#include <linux/uaccess.h>
> +#include <linux/stringify.h>
> +
> +#define FUTEX_ATOMIC_OP(op, asm_op) \
> +static __always_inline int \
> +__ll_sc_u_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
> +{ \
> + unsigned int loops = LL_SC_MAX_LOOPS; \
> + int ret, val, tmp; \
> + \
> + uaccess_enable_privileged(); \
> + asm volatile("// __ll_sc_u_futex_atomic_" #op "\n" \
> + " prfm pstl1strm, %2\n" \
> + "1: ldxr %w1, %2\n" \
> + " " #asm_op " %w3, %w1, %w5\n" \
> + "2: stlxr %w0, %w3, %2\n" \
> + " cbz %w0, 3f\n" \
> + " sub %w4, %w4, %w0\n" \
> + " cbnz %w4, 1b\n" \
> + " mov %w0, %w6\n" \
> + "3:\n" \
> + " dmb ish\n" \
> + _ASM_EXTABLE_UACCESS_ERR(1b, 3b, %w0) \
> + _ASM_EXTABLE_UACCESS_ERR(2b, 3b, %w0) \
> + : "=&r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), \
> + "+r" (loops) \
> + : "r" (oparg), "Ir" (-EAGAIN) \
> + : "memory"); \
> + uaccess_disable_privileged(); \
> + \
> + if (!ret) \
> + *oval = val; \
> + \
> + return ret; \
> +}
> +
> +FUTEX_ATOMIC_OP(add, add)
> +FUTEX_ATOMIC_OP(or, orr)
> +FUTEX_ATOMIC_OP(and, and)
> +FUTEX_ATOMIC_OP(eor, eor)
> +
> +#undef FUTEX_ATOMIC_OP
> +
> +static __always_inline int
> +__ll_sc_u_futex_atomic_set(int oparg, u32 __user *uaddr, int *oval)
> +{
> + unsigned int loops = LL_SC_MAX_LOOPS;
> + int ret, val;
> +
> + uaccess_enable_privileged();
> +	asm volatile("// __ll_sc_u_futex_atomic_set\n"
> +	"	prfm	pstl1strm, %2\n"
> +	"1:	ldxr	%w1, %2\n"
> +	"2:	stlxr	%w0, %w4, %2\n"
> +	"	cbz	%w0, 3f\n"
> + " sub %w3, %w3, %w0\n"
> + " cbnz %w3, 1b\n"
> + " mov %w0, %w5\n"
> + "3:\n"
> + " dmb ish\n"
> + _ASM_EXTABLE_UACCESS_ERR(1b, 3b, %w0)
> + _ASM_EXTABLE_UACCESS_ERR(2b, 3b, %w0)
> + : "=&r" (ret), "=&r" (val), "+Q" (*uaddr), "+r" (loops)
> + : "r" (oparg), "Ir" (-EAGAIN)
> + : "memory");
> + uaccess_disable_privileged();
> +
> + if (!ret)
> + *oval = val;
> +
> + return ret;
> +}
> +
> +static __always_inline int
> +__ll_sc_u_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> +{
> + int ret = 0;
> + unsigned int loops = LL_SC_MAX_LOOPS;
> + u32 val, tmp;
> +
> + uaccess_enable_privileged();
> + asm volatile("//__ll_sc_u_futex_cmpxchg\n"
> + " prfm pstl1strm, %2\n"
> + "1: ldxr %w1, %2\n"
> + " eor %w3, %w1, %w5\n"
> + " cbnz %w3, 4f\n"
> + "2: stlxr %w3, %w6, %2\n"
> + " cbz %w3, 3f\n"
> + " sub %w4, %w4, %w3\n"
> + " cbnz %w4, 1b\n"
> + " mov %w0, %w7\n"
> + "3:\n"
> + " dmb ish\n"
> + "4:\n"
> + _ASM_EXTABLE_UACCESS_ERR(1b, 4b, %w0)
> + _ASM_EXTABLE_UACCESS_ERR(2b, 4b, %w0)
> + : "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
> + : "r" (oldval), "r" (newval), "Ir" (-EAGAIN)
> + : "memory");
> + uaccess_disable_privileged();
> +
> + if (!ret)
> + *oval = val;
> +
> + return ret;
> +}
> +
> +#endif /* __ASM_FUTEX_LL_SC_U_H */
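(Similarly, the quoted __ll_sc_u_futex_cmpxchg() above installs newval
only if *uaddr still equals oldval, retrying the reservation a bounded
number of times; on a mismatch it returns 0 and reports the value it
observed. A userspace sketch of those semantics — the helper name is
invented here, and the kernel version runs with PSTATE.PAN cleared around
the user access:)

```c
/*
 * Userspace sketch of the bounded futex cmpxchg semantics.
 * Illustrative only; not the kernel code.
 */
#include <errno.h>
#include <stdatomic.h>

#define MAX_LOOPS 128	/* stands in for LL_SC_MAX_LOOPS */

static int futex_cmpxchg_sketch(_Atomic unsigned int *uaddr,
				unsigned int oldval, unsigned int newval,
				unsigned int *oval)
{
	unsigned int loops = MAX_LOOPS;
	unsigned int cur;

	do {
		cur = atomic_load_explicit(uaddr, memory_order_relaxed);
		if (cur != oldval) {
			/* mismatch: report what we saw, store nothing */
			*oval = cur;
			return 0;
		}
		/* compare-exchange stands in for the LDXR/STLXR pair */
		if (atomic_compare_exchange_weak(uaddr, &cur, newval)) {
			*oval = cur;
			return 0;
		}
	} while (--loops);

	return -EAGAIN;		/* reservation kept failing */
}
```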
Thread overview: 14+ messages
2025-07-21 8:36 [PATCH v4 0/7] support FEAT_LSUI and apply it on futex atomic ops Yeoreum Yun
2025-07-21 8:36 ` [PATCH v4 1/7] arm64: cpufeature: add FEAT_LSUI Yeoreum Yun
2025-07-21 8:36 ` [PATCH v4 2/7] KVM/arm64: expose FEAT_LSUI to guest Yeoreum Yun
2025-07-21 8:36 ` [PATCH v4 3/7] arm64/Kconfig: add LSUI Kconfig Yeoreum Yun
2025-07-21 10:52 ` Mark Rutland
2025-07-22 8:17 ` Yeoreum Yun
2025-07-21 8:36 ` [PATCH v4 4/7] arm64/futex: move futex atomic logic with clearing PAN bit Yeoreum Yun
2025-07-21 10:56 ` Mark Rutland [this message]
2025-07-22 8:21 ` Yeoreum Yun
2025-07-21 8:36 ` [PATCH v4 5/7] arm64/futex: add futex atomic operation with FEAT_LSUI Yeoreum Yun
2025-07-21 11:03 ` Mark Rutland
2025-07-22 8:34 ` Yeoreum Yun
2025-07-21 8:36 ` [PATCH v4 6/7] arm64/asm: introduce lsui.h Yeoreum Yun
2025-07-21 8:36 ` [PATCH v4 7/7] arm64/futex: support futex with FEAT_LSUI Yeoreum Yun