* [PATCH v12 1/7] arm64: Kconfig: add support for LSUI
2026-01-21 19:06 [PATCH v12 0/7] support FEAT_LSUI Yeoreum Yun
@ 2026-01-21 19:06 ` Yeoreum Yun
2026-02-06 18:36 ` Catalin Marinas
2026-01-21 19:06 ` [PATCH v12 2/7] arm64: cpufeature: add FEAT_LSUI Yeoreum Yun
` (6 subsequent siblings)
7 siblings, 1 reply; 24+ messages in thread
From: Yeoreum Yun @ 2026-01-21 19:06 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
Cc: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
Yeoreum Yun
Since Armv9.6, FEAT_LSUI supplies the load/store instructions for
privileged code to access user memory without clearing the
PSTATE.PAN bit.
Add Kconfig option entry for FEAT_LSUI.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
arch/arm64/Kconfig | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 93173f0a09c7..af70778e966c 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2227,6 +2227,26 @@ config ARM64_GCS
endmenu # "ARMv9.4 architectural features"
+config AS_HAS_LSUI
+	def_bool $(as-instr,.arch_extension lsui)
+	help
+	  Supported by LLVM 20+ and binutils 2.45+.
+
+menu "ARMv9.6 architectural features"
+
+config ARM64_LSUI
+	bool "Support Unprivileged Load Store Instructions (LSUI)"
+	default y
+	depends on AS_HAS_LSUI && !CPU_BIG_ENDIAN
+	help
+	  The Unprivileged Load Store Instructions (LSUI) extension provides
+	  variants of load/store instructions that access user-space memory
+	  from the kernel without clearing the PSTATE.PAN bit.
+
+	  This feature is supported by LLVM 20+ and binutils 2.45+.
+
+endmenu # "ARMv9.6 architectural features"
+
config ARM64_SVE
bool "ARM Scalable Vector Extension support"
default y
--
LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
^ permalink raw reply related [flat|nested] 24+ messages in thread

* Re: [PATCH v12 1/7] arm64: Kconfig: add support for LSUI
2026-01-21 19:06 ` [PATCH v12 1/7] arm64: Kconfig: add support for LSUI Yeoreum Yun
@ 2026-02-06 18:36 ` Catalin Marinas
2026-02-10 9:56 ` Yeoreum Yun
0 siblings, 1 reply; 24+ messages in thread
From: Catalin Marinas @ 2026-02-06 18:36 UTC (permalink / raw)
To: Yeoreum Yun
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
will, maz, broonie, oliver.upton, miko.lenczewski, kevin.brodsky,
ardb, suzuki.poulose, lpieralisi, scott, joey.gouly, yuzenghui,
pbonzini, shuah, mark.rutland, arnd
On Wed, Jan 21, 2026 at 07:06:16PM +0000, Yeoreum Yun wrote:
> Since Armv9.6, FEAT_LSUI supplies the load/store instructions for
> privileged code to access user memory without clearing the
> PSTATE.PAN bit.
>
> Add Kconfig option entry for FEAT_LSUI.
>
> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
In general we should move the Kconfig addition last for bisectability,
unless all the other patches introduced are ok on their own.
--
Catalin
* Re: [PATCH v12 1/7] arm64: Kconfig: add support for LSUI
2026-02-06 18:36 ` Catalin Marinas
@ 2026-02-10 9:56 ` Yeoreum Yun
0 siblings, 0 replies; 24+ messages in thread
From: Yeoreum Yun @ 2026-02-10 9:56 UTC (permalink / raw)
To: Catalin Marinas
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
will, maz, broonie, oliver.upton, miko.lenczewski, kevin.brodsky,
ardb, suzuki.poulose, lpieralisi, scott, joey.gouly, yuzenghui,
pbonzini, shuah, mark.rutland, arnd
Hi Catalin,
> On Wed, Jan 21, 2026 at 07:06:16PM +0000, Yeoreum Yun wrote:
> > Since Armv9.6, FEAT_LSUI supplies the load/store instructions for
> > privileged code to access user memory without clearing the
> > PSTATE.PAN bit.
> >
> > Add Kconfig option entry for FEAT_LSUI.
> >
> > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Thanks!
>
> In general we should move the Kconfig addition last for bisectability,
> unless all the other patches introduced are ok on their own.
Oops... I'll move this patch to the end of the series in the next
round, following your suggestion.
Thanks
--
Sincerely,
Yeoreum Yun
* [PATCH v12 2/7] arm64: cpufeature: add FEAT_LSUI
2026-01-21 19:06 [PATCH v12 0/7] support FEAT_LSUI Yeoreum Yun
2026-01-21 19:06 ` [PATCH v12 1/7] arm64: Kconfig: add support for LSUI Yeoreum Yun
@ 2026-01-21 19:06 ` Yeoreum Yun
2026-02-06 18:42 ` Catalin Marinas
2026-01-21 19:06 ` [PATCH v12 3/7] KVM: arm64: expose FEAT_LSUI to guest Yeoreum Yun
` (5 subsequent siblings)
7 siblings, 1 reply; 24+ messages in thread
From: Yeoreum Yun @ 2026-01-21 19:06 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
Cc: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
Yeoreum Yun
Since Armv9.6, FEAT_LSUI introduces load/store instructions that allow
privileged code to access user memory without clearing the PSTATE.PAN bit.
Add CPU feature detection for FEAT_LSUI and enable its use only
when FEAT_PAN is present, which removes the need for SW_PAN handling
when using LSUI instructions.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
arch/arm64/kernel/cpufeature.c | 27 +++++++++++++++++++++++++++
arch/arm64/tools/cpucaps | 1 +
2 files changed, 28 insertions(+)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c840a93b9ef9..b41ea479c868 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -280,6 +280,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
static const struct arm64_ftr_bits ftr_id_aa64isar3[] = {
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR3_EL1_FPRCVT_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR3_EL1_LSUI_SHIFT, 4, ID_AA64ISAR3_EL1_LSUI_NI),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR3_EL1_LSFE_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR3_EL1_FAMINMAX_SHIFT, 4, 0),
ARM64_FTR_END,
@@ -2509,6 +2510,23 @@ test_has_gicv5_legacy(const struct arm64_cpu_capabilities *entry, int scope)
return !!(read_sysreg_s(SYS_ICC_IDR0_EL1) & ICC_IDR0_EL1_GCIE_LEGACY);
}
+#ifdef CONFIG_ARM64_LSUI
+static bool has_lsui(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	if (!has_cpuid_feature(entry, scope))
+		return false;
+
+	/*
+	 * A CPU that supports LSUI should also support FEAT_PAN,
+	 * so that SW_PAN handling is not required.
+	 */
+	if (WARN_ON(!__system_matches_cap(ARM64_HAS_PAN)))
+		return false;
+
+	return true;
+}
+#endif
+
static const struct arm64_cpu_capabilities arm64_features[] = {
{
.capability = ARM64_ALWAYS_BOOT,
@@ -3148,6 +3166,15 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, XNX, IMP)
},
+#ifdef CONFIG_ARM64_LSUI
+	{
+		.desc = "Unprivileged Load Store Instructions (LSUI)",
+		.capability = ARM64_HAS_LSUI,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_lsui,
+		ARM64_CPUID_FIELDS(ID_AA64ISAR3_EL1, LSUI, IMP)
+	},
+#endif
{},
};
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 0fac75f01534..4b2f7f3f2b80 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -46,6 +46,7 @@ HAS_HCX
HAS_LDAPR
HAS_LPA2
HAS_LSE_ATOMICS
+HAS_LSUI
HAS_MOPS
HAS_NESTED_VIRT
HAS_BBML2_NOABORT
--
* Re: [PATCH v12 2/7] arm64: cpufeature: add FEAT_LSUI
2026-01-21 19:06 ` [PATCH v12 2/7] arm64: cpufeature: add FEAT_LSUI Yeoreum Yun
@ 2026-02-06 18:42 ` Catalin Marinas
2026-02-09 18:57 ` Catalin Marinas
0 siblings, 1 reply; 24+ messages in thread
From: Catalin Marinas @ 2026-02-06 18:42 UTC (permalink / raw)
To: Yeoreum Yun
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
will, maz, broonie, oliver.upton, miko.lenczewski, kevin.brodsky,
ardb, suzuki.poulose, lpieralisi, scott, joey.gouly, yuzenghui,
pbonzini, shuah, mark.rutland, arnd
On Wed, Jan 21, 2026 at 07:06:17PM +0000, Yeoreum Yun wrote:
> +#ifdef CONFIG_ARM64_LSUI
> +static bool has_lsui(const struct arm64_cpu_capabilities *entry, int scope)
> +{
> + if (!has_cpuid_feature(entry, scope))
> + return false;
> +
> + /*
> + * A CPU that supports LSUI should also support FEAT_PAN,
> + * so that SW_PAN handling is not required.
> + */
> + if (WARN_ON(!__system_matches_cap(ARM64_HAS_PAN)))
> + return false;
> +
> + return true;
> +}
> +#endif
I still find this artificial dependency a bit strange. Maybe one doesn't
want any PAN at all (software or hardware) and won't get LSUI either
(it's unlikely but possible).
We have the uaccess_ttbr0_*() calls already for !LSUI, so maybe
structuring the macros in a way that they also take effect with LSUI.
For futex, we could add some new functions like uaccess_enable_futex()
which wouldn't do anything if LSUI is enabled with hw PAN.
--
Catalin
* Re: [PATCH v12 2/7] arm64: cpufeature: add FEAT_LSUI
2026-02-06 18:42 ` Catalin Marinas
@ 2026-02-09 18:57 ` Catalin Marinas
2026-02-10 9:54 ` Yeoreum Yun
0 siblings, 1 reply; 24+ messages in thread
From: Catalin Marinas @ 2026-02-09 18:57 UTC (permalink / raw)
To: Yeoreum Yun
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
will, maz, broonie, oliver.upton, miko.lenczewski, kevin.brodsky,
ardb, suzuki.poulose, lpieralisi, scott, joey.gouly, yuzenghui,
pbonzini, shuah, mark.rutland, arnd
On Fri, Feb 06, 2026 at 06:42:19PM +0000, Catalin Marinas wrote:
> On Wed, Jan 21, 2026 at 07:06:17PM +0000, Yeoreum Yun wrote:
> > +#ifdef CONFIG_ARM64_LSUI
> > +static bool has_lsui(const struct arm64_cpu_capabilities *entry, int scope)
> > +{
> > + if (!has_cpuid_feature(entry, scope))
> > + return false;
> > +
> > + /*
> > + * A CPU that supports LSUI should also support FEAT_PAN,
> > + * so that SW_PAN handling is not required.
> > + */
> > + if (WARN_ON(!__system_matches_cap(ARM64_HAS_PAN)))
> > + return false;
> > +
> > + return true;
> > +}
> > +#endif
>
> I still find this artificial dependency a bit strange. Maybe one doesn't
> want any PAN at all (software or hardware) and won't get LSUI either
> (it's unlikely but possible).
>
> We have the uaccess_ttbr0_*() calls already for !LSUI, so maybe
> structuring the macros in a way that they also take effect with LSUI.
> For futex, we could add some new functions like uaccess_enable_futex()
> which wouldn't do anything if LSUI is enabled with hw PAN.
Hmm, I forgot that we removed CONFIG_ARM64_PAN for 7.0, so it makes it
harder to disable. Give it a try, but if the macros are too complicated, we
can live with the additional check in has_lsui().
However, for completeness, we need to check the equivalent of
!system_uses_ttbr0_pan() but probing early, something like:
	if (IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
	    !__system_matches_cap(ARM64_HAS_PAN)) {
		pr_info_once("TTBR0 PAN incompatible with FEAT_LSUI; disabling FEAT_LSUI");
		return false;
	}
--
Catalin
* Re: [PATCH v12 2/7] arm64: cpufeature: add FEAT_LSUI
2026-02-09 18:57 ` Catalin Marinas
@ 2026-02-10 9:54 ` Yeoreum Yun
2026-02-10 16:14 ` Catalin Marinas
0 siblings, 1 reply; 24+ messages in thread
From: Yeoreum Yun @ 2026-02-10 9:54 UTC (permalink / raw)
To: Catalin Marinas
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
will, maz, broonie, oliver.upton, miko.lenczewski, kevin.brodsky,
ardb, suzuki.poulose, lpieralisi, scott, joey.gouly, yuzenghui,
pbonzini, shuah, mark.rutland, arnd
Hi Catalin,
> On Fri, Feb 06, 2026 at 06:42:19PM +0000, Catalin Marinas wrote:
> > On Wed, Jan 21, 2026 at 07:06:17PM +0000, Yeoreum Yun wrote:
> > > +#ifdef CONFIG_ARM64_LSUI
> > > +static bool has_lsui(const struct arm64_cpu_capabilities *entry, int scope)
> > > +{
> > > + if (!has_cpuid_feature(entry, scope))
> > > + return false;
> > > +
> > > + /*
> > > + * A CPU that supports LSUI should also support FEAT_PAN,
> > > + * so that SW_PAN handling is not required.
> > > + */
> > > + if (WARN_ON(!__system_matches_cap(ARM64_HAS_PAN)))
> > > + return false;
> > > +
> > > + return true;
> > > +}
> > > +#endif
> >
> > I still find this artificial dependency a bit strange. Maybe one doesn't
> > want any PAN at all (software or hardware) and won't get LSUI either
> > (it's unlikely but possible).
> > We have the uaccess_ttbr0_*() calls already for !LSUI, so maybe
> > structuring the macros in a way that they also take effect with LSUI.
> > For futex, we could add some new functions like uaccess_enable_futex()
> > which wouldn't do anything if LSUI is enabled with hw PAN.
>
> Hmm, I forgot that we removed CONFIG_ARM64_PAN for 7.0, so it makes it
> harder to disable. Give it a try, but if the macros are too complicated, we
> can live with the additional check in has_lsui().
>
> However, for completeness, we need to check the equivalent of
> !system_uses_ttbr0_pan() but probing early, something like:
>
> if (IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
> !__system_matches_cap(ARM64_HAS_PAN)) {
> pr_info_once("TTBR0 PAN incompatible with FEAT_LSUI; disabling FEAT_LSUI");
> return false;
> }
>
> --
TBH, I'm not sure whether this is an artificial dependency.
AFAIK, FEAT_PAN is mandatory from Armv8.1, and FEAT_LSUI seems to be
implemented based on the presence of FEAT_PAN.

So hardware which doesn't have FEAT_PAN but has FEAT_LSUI sounds like
"wrong" hardware, and I'm not sure whether it's right to enable
FEAT_LSUI in this case.
The SW_PAN case is the same problem, since if the system uses SW_PAN,
that means the hardware doesn't have FEAT_PAN.

So this question seems to ultimately boil down to whether it is
appropriate to allow the use of FEAT_LSUI even when FEAT_PAN is not
supported.
That's why I think the purpose of has_lsui() is not an artificial
dependency but to disable the feature for the unlikely case of
!FEAT_PAN with FEAT_LSUI, and IMHO it's enough to check only
ARM64_HAS_PAN instead of adding a new function like
uaccess_enable_futex().
Am I missing something?
Thanks.
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v12 2/7] arm64: cpufeature: add FEAT_LSUI
2026-02-10 9:54 ` Yeoreum Yun
@ 2026-02-10 16:14 ` Catalin Marinas
2026-02-10 17:01 ` Yeoreum Yun
0 siblings, 1 reply; 24+ messages in thread
From: Catalin Marinas @ 2026-02-10 16:14 UTC (permalink / raw)
To: Yeoreum Yun
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
will, maz, broonie, oliver.upton, miko.lenczewski, kevin.brodsky,
ardb, suzuki.poulose, lpieralisi, scott, joey.gouly, yuzenghui,
pbonzini, shuah, mark.rutland, arnd
Hi Levi,
On Tue, Feb 10, 2026 at 09:54:49AM +0000, Yeoreum Yun wrote:
> > On Fri, Feb 06, 2026 at 06:42:19PM +0000, Catalin Marinas wrote:
> > > On Wed, Jan 21, 2026 at 07:06:17PM +0000, Yeoreum Yun wrote:
> > > > +#ifdef CONFIG_ARM64_LSUI
> > > > +static bool has_lsui(const struct arm64_cpu_capabilities *entry, int scope)
> > > > +{
> > > > + if (!has_cpuid_feature(entry, scope))
> > > > + return false;
> > > > +
> > > > + /*
> > > > + * A CPU that supports LSUI should also support FEAT_PAN,
> > > > + * so that SW_PAN handling is not required.
> > > > + */
> > > > + if (WARN_ON(!__system_matches_cap(ARM64_HAS_PAN)))
> > > > + return false;
> > > > +
> > > > + return true;
> > > > +}
> > > > +#endif
> > >
> > > I still find this artificial dependency a bit strange. Maybe one doesn't
> > > want any PAN at all (software or hardware) and won't get LSUI either
> > > (it's unlikely but possible).
> > > We have the uaccess_ttbr0_*() calls already for !LSUI, so maybe
> > > structuring the macros in a way that they also take effect with LSUI.
> > > For futex, we could add some new functions like uaccess_enable_futex()
> > > which wouldn't do anything if LSUI is enabled with hw PAN.
> >
> > Hmm, I forgot that we removed CONFIG_ARM64_PAN for 7.0, so it makes it
> > harder to disable. Give it a try, but if the macros are too complicated, we
> > can live with the additional check in has_lsui().
> >
> > However, for completeness, we need to check the equivalent of
> > !system_uses_ttbr0_pan() but probing early, something like:
> >
> > if (IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
> > !__system_matches_cap(ARM64_HAS_PAN)) {
> > pr_info_once("TTBR0 PAN incompatible with FEAT_LSUI; disabling FEAT_LSUI");
> > return false;
> > }
>
> TBH, I'm not sure whether this is an artificial dependency.
> AFAIK, FEAT_PAN is mandatory from Armv8.1, and FEAT_LSUI seems to be
> implemented based on the presence of FEAT_PAN.
>
> So hardware which doesn't have FEAT_PAN but has FEAT_LSUI sounds like
> "wrong" hardware, and I'm not sure whether it's right to enable
> FEAT_LSUI in this case.
In principle we shouldn't have such hardware but, as Will pointed out,
we might see such a combination for other reasons, like virtualisation
or an id reg override.
It's not that FEAT_LSUI requires FEAT_PAN but rather that the way you
implemented it, the FEAT_LSUI futex code is incompatible with SW_PAN
because you no longer call uaccess_enable_privileged(). So I suggested a
small tweak above to make this more obvious. I would also remove the
WARN_ON, or at least make it WARN_ON_ONCE() if you still want the stack
dump.
However...
> The SW_PAN case is the same problem, since if the system uses SW_PAN,
> that means the hardware doesn't have FEAT_PAN.
>
> So this question seems to ultimately boil down to whether it is
> appropriate to allow the use of FEAT_LSUI even when FEAT_PAN is not
> supported.
>
> That's why I think the purpose of has_lsui() is not an artificial
> dependency but to disable the feature for the unlikely case of
> !FEAT_PAN with FEAT_LSUI, and IMHO it's enough to check only
> ARM64_HAS_PAN instead of adding a new function like
> uaccess_enable_futex().
Why not keep uaccess_enable_privileged() in
arch_futex_atomic_op_inuser() and cmpxchg for all cases and make it a
no-op if FEAT_LSUI is implemented together with FEAT_PAN? A quick grep
shows a recent addition in __lse_swap_desc() (and the llsc equivalent)
but this one can also use CAST with FEAT_LSUI.
BTW, with the removal of uaccess_enable_privileged(), we now get MTE tag
checks for the futex operations. I think that's good as it matches the
other uaccess ops, though it's a slight ABI change. If we want to
preserve the old behaviour, we definitely need
uaccess_enable_privileged() that only does mte_enable_tco().
--
Catalin
* Re: [PATCH v12 2/7] arm64: cpufeature: add FEAT_LSUI
2026-02-10 16:14 ` Catalin Marinas
@ 2026-02-10 17:01 ` Yeoreum Yun
2026-02-16 18:24 ` Catalin Marinas
0 siblings, 1 reply; 24+ messages in thread
From: Yeoreum Yun @ 2026-02-10 17:01 UTC (permalink / raw)
To: Catalin Marinas
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
will, maz, broonie, oliver.upton, miko.lenczewski, kevin.brodsky,
ardb, suzuki.poulose, lpieralisi, scott, joey.gouly, yuzenghui,
pbonzini, shuah, mark.rutland, arnd
Hi Catalin,
> Hi Levi,
>
> On Tue, Feb 10, 2026 at 09:54:49AM +0000, Yeoreum Yun wrote:
> > > On Fri, Feb 06, 2026 at 06:42:19PM +0000, Catalin Marinas wrote:
> > > > On Wed, Jan 21, 2026 at 07:06:17PM +0000, Yeoreum Yun wrote:
> > > > > +#ifdef CONFIG_ARM64_LSUI
> > > > > +static bool has_lsui(const struct arm64_cpu_capabilities *entry, int scope)
> > > > > +{
> > > > > + if (!has_cpuid_feature(entry, scope))
> > > > > + return false;
> > > > > +
> > > > > + /*
> > > > > + * A CPU that supports LSUI should also support FEAT_PAN,
> > > > > + * so that SW_PAN handling is not required.
> > > > > + */
> > > > > + if (WARN_ON(!__system_matches_cap(ARM64_HAS_PAN)))
> > > > > + return false;
> > > > > +
> > > > > + return true;
> > > > > +}
> > > > > +#endif
> > > >
> > > > I still find this artificial dependency a bit strange. Maybe one doesn't
> > > > want any PAN at all (software or hardware) and won't get LSUI either
> > > > (it's unlikely but possible).
> > > > We have the uaccess_ttbr0_*() calls already for !LSUI, so maybe
> > > > structuring the macros in a way that they also take effect with LSUI.
> > > > For futex, we could add some new functions like uaccess_enable_futex()
> > > > which wouldn't do anything if LSUI is enabled with hw PAN.
> > >
> > > Hmm, I forgot that we removed CONFIG_ARM64_PAN for 7.0, so it makes it
> > > harder to disable. Give it a try, but if the macros are too complicated, we
> > > can live with the additional check in has_lsui().
> > >
> > > However, for completeness, we need to check the equivalent of
> > > !system_uses_ttbr0_pan() but probing early, something like:
> > >
> > > if (IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
> > > !__system_matches_cap(ARM64_HAS_PAN)) {
> > > pr_info_once("TTBR0 PAN incompatible with FEAT_LSUI; disabling FEAT_LSUI");
> > > return false;
> > > }
> >
> > TBH, I'm not sure whether this is an artificial dependency.
> > AFAIK, FEAT_PAN is mandatory from Armv8.1, and FEAT_LSUI seems to be
> > implemented based on the presence of FEAT_PAN.
> >
> > So hardware which doesn't have FEAT_PAN but has FEAT_LSUI sounds like
> > "wrong" hardware, and I'm not sure whether it's right to enable
> > FEAT_LSUI in this case.
>
> In principle we shouldn't have such hardware but, as Will pointed out,
> we might have such combination due to other reasons like virtualisation,
> id reg override.
>
> It's not that FEAT_LSUI requires FEAT_PAN but rather that the way you
> implemented it, the FEAT_LSUI futex code is incompatible with SW_PAN
> because you no longer call uaccess_enable_privileged(). So I suggested a
> small tweak above to make this more obvious. I would also remove the
> WARN_ON, or at least make it WARN_ON_ONCE() if you still want the stack
> dump.
>
> However...
>
> > The SW_PAN case is the same problem, since if the system uses SW_PAN,
> > that means the hardware doesn't have FEAT_PAN.
> > So this question seems to ultimately boil down to whether it is
> > appropriate to allow the use of FEAT_LSUI even when FEAT_PAN is not
> > supported.
> >
> > That's why I think the purpose of has_lsui() is not an artificial
> > dependency but to disable the feature for the unlikely case of
> > !FEAT_PAN with FEAT_LSUI, and IMHO it's enough to check only
> > ARM64_HAS_PAN instead of adding a new function like
> > uaccess_enable_futex().
>
> Why not keep uaccess_enable_privileged() in
> arch_futex_atomic_op_inuser() and cmpxchg for all cases and make it a
> no-op if FEAT_LSUI is implemented together with FEAT_PAN?
This is because I assumed FEAT_PAN must be present when FEAT_LSUI is
present, which didn't consider the virtualisation case. When FEAT_PAN
is present, uaccess_ttbr0_enable() becomes a nop, and following the
feedback you gave - https://lore.kernel.org/all/aJ9oIes7LLF3Nsp1@arm.com/ -
and the reason you mention last, it doesn't need to call
mte_enable_tco(). That's why I thought it didn't need to call
uaccess_enable_privileged().

But for compatibility with SW_PAN, I think we can simply put
uaccess_ttbr0_enable() in arch_futex_atomic_op_inuser() and cmpxchg
instead of adding a new API like uaccess_enable_futex(), and by doing
this I think has_lsui() can be removed along with its WARN.

Am I missing something?
> A quick grep shows a recent addition in __lse_swap_desc() (and the llsc equivalent)
> but this one can also use CAST with FEAT_LSUI.
Thanks. I'll apply this with FEAT_LSUI in the next round.
>
> BTW, with the removal of uaccess_enable_privileged(), we now get MTE tag
> checks for the futex operations. I think that's good as it matches the
> other uaccess ops, though it's a slight ABI change. If we want to
> preserve the old behaviour, we definitely need
> uaccess_enable_privileged() that only does mte_enable_tco().
I think we don't need to preserve the old behaviour, so we can skip
mte_enable_tco() when FEAT_LSUI is present.
Thanks.
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v12 2/7] arm64: cpufeature: add FEAT_LSUI
2026-02-10 17:01 ` Yeoreum Yun
@ 2026-02-16 18:24 ` Catalin Marinas
2026-02-23 15:54 ` Yeoreum Yun
0 siblings, 1 reply; 24+ messages in thread
From: Catalin Marinas @ 2026-02-16 18:24 UTC (permalink / raw)
To: Yeoreum Yun
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
will, maz, broonie, oliver.upton, miko.lenczewski, kevin.brodsky,
ardb, suzuki.poulose, lpieralisi, scott, joey.gouly, yuzenghui,
pbonzini, shuah, mark.rutland, arnd
On Tue, Feb 10, 2026 at 05:01:33PM +0000, Yeoreum Yun wrote:
> > Why not keep uaccess_enable_privileged() in
> > arch_futex_atomic_op_inuser() and cmpxchg for all cases and make it a
> > no-op if FEAT_LSUI is implemented together with FEAT_PAN?
>
> This is because I had a assumption FEAT_PAN must be present
> when FEAT_LSUI is presented and this was not considering the virtualisation case.
> and FEAT_PAN is present uaccess_ttbr0_enable() becomes nop and
> following feedback you gave - https://lore.kernel.org/all/aJ9oIes7LLF3Nsp1@arm.com/
> and the reason you mention last, It doesn't need to call mte_enable_tco().
>
> That's why I thought it doesn't need to call uaccess_enable_privileged().
>
> But for a compatibility with SW_PAN, I think we can put only
> uaccess_ttbr0_enable() in arch_futex_atomic_op_inuser() and cmpxchg simply
> instead of adding a new APIs uaccess_enable_futex() and
> by doing this I think has_lsui() can be removed with its WRAN.
Yes, I think you can use uaccess_ttbr0_enable() when we take the
FEAT_LSUI path. What I meant above was for uaccess_enable_privileged()
to avoid PAN disabling if we have FEAT_LSUI as we know all cases would
be executed with user privileges.
Either way, we don't need a new uaccess_enable_futex().
> > BTW, with the removal of uaccess_enable_privileged(), we now get MTE tag
> > checks for the futex operations. I think that's good as it matches the
> > other uaccess ops, though it's a slight ABI change. If we want to
> > preserve the old behaviour, we definitely need
> > uaccess_enable_privileged() that only does mte_enable_tco().
>
> I think we don't need to preserve the old behaviour. so we can skip
> mte_enable_tco() in case of FEAT_LSUI is presented.
Just spell it out in the commit log that we have a slight ABI change. I
don't think we'll have a problem but it needs at least checking with
some user-space (libc, Android) people.
--
Catalin
* Re: [PATCH v12 2/7] arm64: cpufeature: add FEAT_LSUI
2026-02-16 18:24 ` Catalin Marinas
@ 2026-02-23 15:54 ` Yeoreum Yun
0 siblings, 0 replies; 24+ messages in thread
From: Yeoreum Yun @ 2026-02-23 15:54 UTC (permalink / raw)
To: Catalin Marinas
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
will, maz, broonie, oliver.upton, miko.lenczewski, kevin.brodsky,
ardb, suzuki.poulose, lpieralisi, scott, joey.gouly, yuzenghui,
pbonzini, shuah, mark.rutland, arnd
Hi Catalin,
> On Tue, Feb 10, 2026 at 05:01:33PM +0000, Yeoreum Yun wrote:
> > > Why not keep uaccess_enable_privileged() in
> > > arch_futex_atomic_op_inuser() and cmpxchg for all cases and make it a
> > > no-op if FEAT_LSUI is implemented together with FEAT_PAN?
> >
> > This is because I had a assumption FEAT_PAN must be present
> > when FEAT_LSUI is presented and this was not considering the virtualisation case.
> > and FEAT_PAN is present uaccess_ttbr0_enable() becomes nop and
> > following feedback you gave - https://lore.kernel.org/all/aJ9oIes7LLF3Nsp1@arm.com/
> > and the reason you mention last, It doesn't need to call mte_enable_tco().
> >
> > That's why I thought it doesn't need to call uaccess_enable_privileged().
> >
> > But for a compatibility with SW_PAN, I think we can put only
> > uaccess_ttbr0_enable() in arch_futex_atomic_op_inuser() and cmpxchg simply
> > instead of adding a new APIs uaccess_enable_futex() and
> > by doing this I think has_lsui() can be removed with its WRAN.
>
> Yes, I think you can use uaccess_ttbr0_enable() when we take the
> FEAT_LSUI path. What I meant above was for uaccess_enable_privileged()
> to avoid PAN disabling if we have FEAT_LSUI as we know all cases would
> be executed with user privileges.
>
> Either way, we don't need a new uaccess_enable_futex().
Yes. But like raw_copy_from/to_user(), which uses ldtr*/sttr* when
MOPS isn't enabled, it seems better to use uaccess_ttbr0_enable()
instead of adding special handling for LSUI in
uaccess_enable_privileged(). This seems more consistent, since
ldtr*/sttr* serve a similar function to LSUI (they don't disable PAN)
and don't enable TCO.
>
> > > BTW, with the removal of uaccess_enable_privileged(), we now get MTE tag
> > > checks for the futex operations. I think that's good as it matches the
> > > other uaccess ops, though it's a slight ABI change. If we want to
> > > preserve the old behaviour, we definitely need
> > > uaccess_enable_privileged() that only does mte_enable_tco().
> >
> > I think we don't need to preserve the old behaviour. so we can skip
> > mte_enable_tco() in case of FEAT_LSUI is presented.
>
> Just spell it out in the commit log that we have a slight ABI change. I
> don't think we'll have a problem but it needs at least checking with
> some user-space (libc, Android) people.
I see. Thanks!
>
> --
> Catalin
--
Sincerely,
Yeoreum Yun
* [PATCH v12 3/7] KVM: arm64: expose FEAT_LSUI to guest
2026-01-21 19:06 [PATCH v12 0/7] support FEAT_LSUI Yeoreum Yun
2026-01-21 19:06 ` [PATCH v12 1/7] arm64: Kconfig: add support for LSUI Yeoreum Yun
2026-01-21 19:06 ` [PATCH v12 2/7] arm64: cpufeature: add FEAT_LSUI Yeoreum Yun
@ 2026-01-21 19:06 ` Yeoreum Yun
2026-01-21 19:06 ` [PATCH v12 4/7] KVM: arm64: kselftest: set_id_regs: add test for FEAT_LSUI Yeoreum Yun
` (4 subsequent siblings)
7 siblings, 0 replies; 24+ messages in thread
From: Yeoreum Yun @ 2026-01-21 19:06 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
Cc: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
Yeoreum Yun
Expose FEAT_LSUI to the guest.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
arch/arm64/kvm/sys_regs.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c8fd7c6a12a1..fa34910b22ae 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1805,7 +1805,7 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
break;
case SYS_ID_AA64ISAR3_EL1:
val &= ID_AA64ISAR3_EL1_FPRCVT | ID_AA64ISAR3_EL1_LSFE |
- ID_AA64ISAR3_EL1_FAMINMAX;
+ ID_AA64ISAR3_EL1_FAMINMAX | ID_AA64ISAR3_EL1_LSUI;
break;
case SYS_ID_AA64MMFR2_EL1:
val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
@@ -3249,6 +3249,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
ID_AA64ISAR2_EL1_GPA3)),
ID_WRITABLE(ID_AA64ISAR3_EL1, (ID_AA64ISAR3_EL1_FPRCVT |
ID_AA64ISAR3_EL1_LSFE |
+ ID_AA64ISAR3_EL1_LSUI |
ID_AA64ISAR3_EL1_FAMINMAX)),
ID_UNALLOCATED(6,4),
ID_UNALLOCATED(6,5),
--
* [PATCH v12 4/7] KVM: arm64: kselftest: set_id_regs: add test for FEAT_LSUI
2026-01-21 19:06 [PATCH v12 0/7] support FEAT_LSUI Yeoreum Yun
` (2 preceding siblings ...)
2026-01-21 19:06 ` [PATCH v12 3/7] KVM: arm64: expose FEAT_LSUI to guest Yeoreum Yun
@ 2026-01-21 19:06 ` Yeoreum Yun
2026-01-21 19:06 ` [PATCH v12 5/7] arm64: futex: refactor futex atomic operation Yeoreum Yun
` (3 subsequent siblings)
7 siblings, 0 replies; 24+ messages in thread
From: Yeoreum Yun @ 2026-01-21 19:06 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
Cc: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
Yeoreum Yun
Add test coverage for FEAT_LSUI.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
---
tools/testing/selftests/kvm/arm64/set_id_regs.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
index c4815d365816..0b1714aa127c 100644
--- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
@@ -125,6 +125,7 @@ static const struct reg_ftr_bits ftr_id_aa64isar2_el1[] = {
static const struct reg_ftr_bits ftr_id_aa64isar3_el1[] = {
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR3_EL1, FPRCVT, 0),
+ REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR3_EL1, LSUI, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR3_EL1, LSFE, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR3_EL1, FAMINMAX, 0),
REG_FTR_END,
--
LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH v12 5/7] arm64: futex: refactor futex atomic operation
2026-01-21 19:06 [PATCH v12 0/7] support FEAT_LSUI Yeoreum Yun
` (3 preceding siblings ...)
2026-01-21 19:06 ` [PATCH v12 4/7] KVM: arm64: kselftest: set_id_regs: add test for FEAT_LSUI Yeoreum Yun
@ 2026-01-21 19:06 ` Yeoreum Yun
2026-01-21 19:06 ` [PATCH v12 6/7] arm64: futex: support futex with FEAT_LSUI Yeoreum Yun
` (2 subsequent siblings)
7 siblings, 0 replies; 24+ messages in thread
From: Yeoreum Yun @ 2026-01-21 19:06 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
Cc: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
Yeoreum Yun
Refactor the futex atomic operations, currently implemented as LL/SC
sequences that clear PSTATE.PAN, to prepare for applying FEAT_LSUI to
them.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
arch/arm64/include/asm/futex.h | 137 +++++++++++++++++++++------------
1 file changed, 87 insertions(+), 50 deletions(-)
diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index bc06691d2062..9a0efed50743 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -7,21 +7,25 @@
#include <linux/futex.h>
#include <linux/uaccess.h>
+#include <linux/stringify.h>
#include <asm/errno.h>
#define FUTEX_MAX_LOOPS 128 /* What's the largest number you can think of? */
-#define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg) \
-do { \
+#define LLSC_FUTEX_ATOMIC_OP(op, insn) \
+static __always_inline int \
+__llsc_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
+{ \
unsigned int loops = FUTEX_MAX_LOOPS; \
+ int ret, oldval, newval; \
\
uaccess_enable_privileged(); \
- asm volatile( \
+ asm volatile("// __llsc_futex_atomic_" #op "\n" \
" prfm pstl1strm, %2\n" \
-"1: ldxr %w1, %2\n" \
+"1: ldxr %w[oldval], %2\n" \
insn "\n" \
-"2: stlxr %w0, %w3, %2\n" \
+"2: stlxr %w0, %w[newval], %2\n" \
" cbz %w0, 3f\n" \
" sub %w4, %w4, %w0\n" \
" cbnz %w4, 1b\n" \
@@ -30,50 +34,109 @@ do { \
" dmb ish\n" \
_ASM_EXTABLE_UACCESS_ERR(1b, 3b, %w0) \
_ASM_EXTABLE_UACCESS_ERR(2b, 3b, %w0) \
- : "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp), \
+ : "=&r" (ret), [oldval] "=&r" (oldval), "+Q" (*uaddr), \
+ [newval] "=&r" (newval), \
"+r" (loops) \
- : "r" (oparg), "Ir" (-EAGAIN) \
+ : [oparg] "r" (oparg), "Ir" (-EAGAIN) \
: "memory"); \
uaccess_disable_privileged(); \
-} while (0)
+ \
+ if (!ret) \
+ *oval = oldval; \
+ \
+ return ret; \
+}
+
+LLSC_FUTEX_ATOMIC_OP(add, "add %w[newval], %w[oldval], %w[oparg]")
+LLSC_FUTEX_ATOMIC_OP(or, "orr %w[newval], %w[oldval], %w[oparg]")
+LLSC_FUTEX_ATOMIC_OP(and, "and %w[newval], %w[oldval], %w[oparg]")
+LLSC_FUTEX_ATOMIC_OP(eor, "eor %w[newval], %w[oldval], %w[oparg]")
+LLSC_FUTEX_ATOMIC_OP(set, "mov %w[newval], %w[oparg]")
+
+static __always_inline int
+__llsc_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
+{
+ int ret = 0;
+ unsigned int loops = FUTEX_MAX_LOOPS;
+ u32 val, tmp;
+
+ uaccess_enable_privileged();
+ asm volatile("//__llsc_futex_cmpxchg\n"
+" prfm pstl1strm, %2\n"
+"1: ldxr %w1, %2\n"
+" eor %w3, %w1, %w5\n"
+" cbnz %w3, 4f\n"
+"2: stlxr %w3, %w6, %2\n"
+" cbz %w3, 3f\n"
+" sub %w4, %w4, %w3\n"
+" cbnz %w4, 1b\n"
+" mov %w0, %w7\n"
+"3:\n"
+" dmb ish\n"
+"4:\n"
+ _ASM_EXTABLE_UACCESS_ERR(1b, 4b, %w0)
+ _ASM_EXTABLE_UACCESS_ERR(2b, 4b, %w0)
+ : "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
+ : "r" (oldval), "r" (newval), "Ir" (-EAGAIN)
+ : "memory");
+ uaccess_disable_privileged();
+
+ if (!ret)
+ *oval = val;
+
+ return ret;
+}
+
+#define FUTEX_ATOMIC_OP(op) \
+static __always_inline int \
+__futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
+{ \
+ return __llsc_futex_atomic_##op(oparg, uaddr, oval); \
+}
+
+FUTEX_ATOMIC_OP(add)
+FUTEX_ATOMIC_OP(or)
+FUTEX_ATOMIC_OP(and)
+FUTEX_ATOMIC_OP(eor)
+FUTEX_ATOMIC_OP(set)
+
+static __always_inline int
+__futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
+{
+ return __llsc_futex_cmpxchg(uaddr, oldval, newval, oval);
+}
static inline int
arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr)
{
- int oldval = 0, ret, tmp;
- u32 __user *uaddr = __uaccess_mask_ptr(_uaddr);
+ int ret;
+ u32 __user *uaddr;
if (!access_ok(_uaddr, sizeof(u32)))
return -EFAULT;
+ uaddr = __uaccess_mask_ptr(_uaddr);
+
switch (op) {
case FUTEX_OP_SET:
- __futex_atomic_op("mov %w3, %w5",
- ret, oldval, uaddr, tmp, oparg);
+ ret = __futex_atomic_set(oparg, uaddr, oval);
break;
case FUTEX_OP_ADD:
- __futex_atomic_op("add %w3, %w1, %w5",
- ret, oldval, uaddr, tmp, oparg);
+ ret = __futex_atomic_add(oparg, uaddr, oval);
break;
case FUTEX_OP_OR:
- __futex_atomic_op("orr %w3, %w1, %w5",
- ret, oldval, uaddr, tmp, oparg);
+ ret = __futex_atomic_or(oparg, uaddr, oval);
break;
case FUTEX_OP_ANDN:
- __futex_atomic_op("and %w3, %w1, %w5",
- ret, oldval, uaddr, tmp, ~oparg);
+ ret = __futex_atomic_and(~oparg, uaddr, oval);
break;
case FUTEX_OP_XOR:
- __futex_atomic_op("eor %w3, %w1, %w5",
- ret, oldval, uaddr, tmp, oparg);
+ ret = __futex_atomic_eor(oparg, uaddr, oval);
break;
default:
ret = -ENOSYS;
}
- if (!ret)
- *oval = oldval;
-
return ret;
}
@@ -81,40 +144,14 @@ static inline int
futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
u32 oldval, u32 newval)
{
- int ret = 0;
- unsigned int loops = FUTEX_MAX_LOOPS;
- u32 val, tmp;
u32 __user *uaddr;
if (!access_ok(_uaddr, sizeof(u32)))
return -EFAULT;
uaddr = __uaccess_mask_ptr(_uaddr);
- uaccess_enable_privileged();
- asm volatile("// futex_atomic_cmpxchg_inatomic\n"
-" prfm pstl1strm, %2\n"
-"1: ldxr %w1, %2\n"
-" sub %w3, %w1, %w5\n"
-" cbnz %w3, 4f\n"
-"2: stlxr %w3, %w6, %2\n"
-" cbz %w3, 3f\n"
-" sub %w4, %w4, %w3\n"
-" cbnz %w4, 1b\n"
-" mov %w0, %w7\n"
-"3:\n"
-" dmb ish\n"
-"4:\n"
- _ASM_EXTABLE_UACCESS_ERR(1b, 4b, %w0)
- _ASM_EXTABLE_UACCESS_ERR(2b, 4b, %w0)
- : "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
- : "r" (oldval), "r" (newval), "Ir" (-EAGAIN)
- : "memory");
- uaccess_disable_privileged();
-
- if (!ret)
- *uval = val;
- return ret;
+ return __futex_cmpxchg(uaddr, oldval, newval, uval);
}
#endif /* __ASM_FUTEX_H */
--
LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH v12 6/7] arm64: futex: support futex with FEAT_LSUI
2026-01-21 19:06 [PATCH v12 0/7] support FEAT_LSUI Yeoreum Yun
` (4 preceding siblings ...)
2026-01-21 19:06 ` [PATCH v12 5/7] arm64: futex: refactor futex atomic operation Yeoreum Yun
@ 2026-01-21 19:06 ` Yeoreum Yun
2026-02-10 16:45 ` Catalin Marinas
2026-01-21 19:06 ` [PATCH v12 7/7] arm64: armv8_deprecated: disable swp emulation when FEAT_LSUI present Yeoreum Yun
2026-02-06 9:04 ` [PATCH v12 0/7] support FEAT_LSUI Yeoreum Yun
7 siblings, 1 reply; 24+ messages in thread
From: Yeoreum Yun @ 2026-01-21 19:06 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
Cc: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
Yeoreum Yun
Current futex atomic operations are implemented with LL/SC
instructions and by clearing PSTATE.PAN.
Since Armv9.6, FEAT_LSUI supplies not only load/store instructions
but also atomic operations for user memory access from the kernel,
so the kernel no longer needs to clear the PSTATE.PAN bit.
With these instructions, some of the futex atomic operations no
longer need to be implemented with an ldxr/stlxr pair and can
instead be implemented with a single atomic operation supplied by
FEAT_LSUI.
However, some futex atomic operations have no matching instruction,
e.g. eor, or cmpxchg at word size.
For those operations, use cas{al}t to implement them.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
arch/arm64/include/asm/futex.h | 189 ++++++++++++++++++++++++++++++++-
1 file changed, 187 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index 9a0efed50743..568583982875 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -9,6 +9,8 @@
#include <linux/uaccess.h>
#include <linux/stringify.h>
+#include <asm/alternative.h>
+#include <asm/alternative-macros.h>
#include <asm/errno.h>
#define FUTEX_MAX_LOOPS 128 /* What's the largest number you can think of? */
@@ -87,11 +89,194 @@ __llsc_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
return ret;
}
+#ifdef CONFIG_ARM64_LSUI
+
+/*
+ * When the LSUI feature is present, the CPU also implements PAN, because
+ * FEAT_PAN has been mandatory since Armv8.1. Therefore, there is no need to
+ * call uaccess_ttbr0_enable()/uaccess_ttbr0_disable() around each LSUI
+ * operation.
+ */
+
+#define __LSUI_PREAMBLE ".arch_extension lsui\n"
+
+#define LSUI_FUTEX_ATOMIC_OP(op, asm_op) \
+static __always_inline int \
+__lsui_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
+{ \
+ int ret = 0; \
+ int oldval; \
+ \
+ asm volatile("// __lsui_futex_atomic_" #op "\n" \
+ __LSUI_PREAMBLE \
+"1: " #asm_op "al %w3, %w2, %1\n" \
+"2:\n" \
+ _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0) \
+ : "+r" (ret), "+Q" (*uaddr), "=r" (oldval) \
+ : "r" (oparg) \
+ : "memory"); \
+ \
+ if (!ret) \
+ *oval = oldval; \
+ \
+ return ret; \
+}
+
+LSUI_FUTEX_ATOMIC_OP(add, ldtadd)
+LSUI_FUTEX_ATOMIC_OP(or, ldtset)
+LSUI_FUTEX_ATOMIC_OP(andnot, ldtclr)
+LSUI_FUTEX_ATOMIC_OP(set, swpt)
+
+static __always_inline int
+__lsui_cmpxchg64(u64 __user *uaddr, u64 *oldval, u64 newval)
+{
+ int ret = 0;
+
+ asm volatile("// __lsui_cmpxchg64\n"
+ __LSUI_PREAMBLE
+"1: casalt %2, %3, %1\n"
+"2:\n"
+ _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
+ : "+r" (ret), "+Q" (*uaddr), "+r" (*oldval)
+ : "r" (newval)
+ : "memory");
+
+ return ret;
+}
+
+static __always_inline int
+__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
+{
+ u64 __user *uaddr64;
+ bool futex_on_lo;
+ int ret, i;
+ u32 other, orig_other;
+ union {
+ struct futex_on_lo {
+ u32 val;
+ u32 other;
+ } lo_futex;
+
+ struct futex_on_hi {
+ u32 other;
+ u32 val;
+ } hi_futex;
+
+ u64 raw;
+ } oval64, orig64, nval64;
+
+ uaddr64 = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
+ futex_on_lo = IS_ALIGNED((unsigned long)uaddr, sizeof(u64));
+
+ if (futex_on_lo) {
+ oval64.lo_futex.val = oldval;
+ ret = get_user(oval64.lo_futex.other, uaddr + 1);
+ } else {
+ oval64.hi_futex.val = oldval;
+ ret = get_user(oval64.hi_futex.other, uaddr - 1);
+ }
+
+ if (ret)
+ return -EFAULT;
+
+ ret = -EAGAIN;
+ for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
+ orig64.raw = nval64.raw = oval64.raw;
+
+ if (futex_on_lo)
+ nval64.lo_futex.val = newval;
+ else
+ nval64.hi_futex.val = newval;
+
+ if (__lsui_cmpxchg64(uaddr64, &oval64.raw, nval64.raw))
+ return -EFAULT;
+
+ if (futex_on_lo) {
+ oldval = oval64.lo_futex.val;
+ other = oval64.lo_futex.other;
+ orig_other = orig64.lo_futex.other;
+ } else {
+ oldval = oval64.hi_futex.val;
+ other = oval64.hi_futex.other;
+ orig_other = orig64.hi_futex.other;
+ }
+
+ if (other == orig_other) {
+ ret = 0;
+ break;
+ }
+ }
+
+ if (!ret)
+ *oval = oldval;
+
+ return ret;
+}
+
+static __always_inline int
+__lsui_futex_atomic_and(int oparg, u32 __user *uaddr, int *oval)
+{
+ /*
+ * Undo the bitwise negation applied to the oparg passed from
+ * arch_futex_atomic_op_inuser() with FUTEX_OP_ANDN.
+ */
+ return __lsui_futex_atomic_andnot(~oparg, uaddr, oval);
+}
+
+static __always_inline int
+__lsui_futex_atomic_eor(int oparg, u32 __user *uaddr, int *oval)
+{
+ u32 oldval, newval, val;
+ int ret, i;
+
+ if (get_user(oldval, uaddr))
+ return -EFAULT;
+
+ /*
+ * there are no ldteor/stteor instructions...
+ */
+ for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
+ newval = oldval ^ oparg;
+
+ ret = __lsui_cmpxchg32(uaddr, oldval, newval, &val);
+ if (ret)
+ return ret;
+
+ if (val == oldval) {
+ *oval = val;
+ return 0;
+ }
+
+ oldval = val;
+ }
+
+ return -EAGAIN;
+}
+
+static __always_inline int
+__lsui_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
+{
+ return __lsui_cmpxchg32(uaddr, oldval, newval, oval);
+}
+
+#define __lsui_llsc_body(op, ...) \
+({ \
+ alternative_has_cap_unlikely(ARM64_HAS_LSUI) ? \
+ __lsui_##op(__VA_ARGS__) : __llsc_##op(__VA_ARGS__); \
+})
+
+#else /* CONFIG_ARM64_LSUI */
+
+#define __lsui_llsc_body(op, ...) __llsc_##op(__VA_ARGS__)
+
+#endif /* CONFIG_ARM64_LSUI */
+
+
#define FUTEX_ATOMIC_OP(op) \
static __always_inline int \
__futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
{ \
- return __llsc_futex_atomic_##op(oparg, uaddr, oval); \
+ return __lsui_llsc_body(futex_atomic_##op, oparg, uaddr, oval); \
}
FUTEX_ATOMIC_OP(add)
@@ -103,7 +288,7 @@ FUTEX_ATOMIC_OP(set)
static __always_inline int
__futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
{
- return __llsc_futex_cmpxchg(uaddr, oldval, newval, oval);
+ return __lsui_llsc_body(futex_cmpxchg, uaddr, oldval, newval, oval);
}
static inline int
--
LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH v12 6/7] arm64: futex: support futex with FEAT_LSUI
2026-01-21 19:06 ` [PATCH v12 6/7] arm64: futex: support futex with FEAT_LSUI Yeoreum Yun
@ 2026-02-10 16:45 ` Catalin Marinas
2026-02-10 17:17 ` Yeoreum Yun
0 siblings, 1 reply; 24+ messages in thread
From: Catalin Marinas @ 2026-02-10 16:45 UTC (permalink / raw)
To: Yeoreum Yun
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
will, maz, broonie, oliver.upton, miko.lenczewski, kevin.brodsky,
ardb, suzuki.poulose, lpieralisi, scott, joey.gouly, yuzenghui,
pbonzini, shuah, mark.rutland, arnd
I wonder whether we can shorten this function a bit. Not sure it would
be more readable but it would be shorter.
On Wed, Jan 21, 2026 at 07:06:21PM +0000, Yeoreum Yun wrote:
> +static __always_inline int
> +__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> +{
> + u64 __user *uaddr64;
> + bool futex_on_lo;
> + int ret, i;
> + u32 other, orig_other;
> + union {
> + struct futex_on_lo {
> + u32 val;
> + u32 other;
> + } lo_futex;
> +
> + struct futex_on_hi {
> + u32 other;
> + u32 val;
> + } hi_futex;
> +
> + u64 raw;
> + } oval64, orig64, nval64;
union {
u32 futex[2];
u64 raw;
}
> +
> + uaddr64 = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> + futex_on_lo = IS_ALIGNED((unsigned long)uaddr, sizeof(u64));
futex_pos = (unsigned long)uaddr & 4 ? 1 : 0;
> +
> + if (futex_on_lo) {
> + oval64.lo_futex.val = oldval;
> + ret = get_user(oval64.lo_futex.other, uaddr + 1);
> + } else {
> + oval64.hi_futex.val = oldval;
> + ret = get_user(oval64.hi_futex.other, uaddr - 1);
> + }
and here use
get_user(oval64.raw, uaddr64);
futex[futex_pos] = oldval;
> +
> + if (ret)
> + return -EFAULT;
> +
> + ret = -EAGAIN;
> + for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
> + orig64.raw = nval64.raw = oval64.raw;
> +
> + if (futex_on_lo)
> + nval64.lo_futex.val = newval;
> + else
> + nval64.hi_futex.val = newval;
> +
> + if (__lsui_cmpxchg64(uaddr64, &oval64.raw, nval64.raw))
> + return -EFAULT;
> +
> + if (futex_on_lo) {
> + oldval = oval64.lo_futex.val;
> + other = oval64.lo_futex.other;
> + orig_other = orig64.lo_futex.other;
> + } else {
> + oldval = oval64.hi_futex.val;
> + other = oval64.hi_futex.other;
> + orig_other = orig64.hi_futex.other;
> + }
Something similar here to use futex[futex_pos].
We probably also need to check that the user pointer is 32-bit aligned
and return -EFAULT if not.
--
Catalin
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 6/7] arm64: futex: support futex with FEAT_LSUI
2026-02-10 16:45 ` Catalin Marinas
@ 2026-02-10 17:17 ` Yeoreum Yun
2026-02-16 18:04 ` Catalin Marinas
0 siblings, 1 reply; 24+ messages in thread
From: Yeoreum Yun @ 2026-02-10 17:17 UTC (permalink / raw)
To: Catalin Marinas
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
will, maz, broonie, oliver.upton, miko.lenczewski, kevin.brodsky,
ardb, suzuki.poulose, lpieralisi, scott, joey.gouly, yuzenghui,
pbonzini, shuah, mark.rutland, arnd
Hi Catalin,
Thanks for your suggestion.
> I wonder whether we can shorten this function a bit. Not sure it would
> be more readable but it would be shorter.
>
> On Wed, Jan 21, 2026 at 07:06:21PM +0000, Yeoreum Yun wrote:
> > +static __always_inline int
> > +__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> > +{
> > + u64 __user *uaddr64;
> > + bool futex_on_lo;
> > + int ret, i;
> > + u32 other, orig_other;
> > + union {
> > + struct futex_on_lo {
> > + u32 val;
> > + u32 other;
> > + } lo_futex;
> > +
> > + struct futex_on_hi {
> > + u32 other;
> > + u32 val;
> > + } hi_futex;
> > +
> > + u64 raw;
> > + } oval64, orig64, nval64;
>
> union {
> u32 futex[2];
> u64 raw;
> }
>
> > +
> > + uaddr64 = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> > + futex_on_lo = IS_ALIGNED((unsigned long)uaddr, sizeof(u64));
>
> futex_pos = (unsigned long)uaddr & 4 ? 1 : 0;
Okay. I'll try.
>
> > +
> > + if (futex_on_lo) {
> > + oval64.lo_futex.val = oldval;
> > + ret = get_user(oval64.lo_futex.other, uaddr + 1);
> > + } else {
> > + oval64.hi_futex.val = oldval;
> > + ret = get_user(oval64.hi_futex.other, uaddr - 1);
> > + }
>
> and here use
>
> get_user(oval64.raw, uaddr64);
> futex[futex_pos] = oldval;
But there was other feedback about this
(though my first version was similar to your suggestion -- using oval64.raw):
https://lore.kernel.org/all/aXDZGhFQDvoSwdc_@willie-the-truck/
>
> > +
> > + if (ret)
> > + return -EFAULT;
> > +
> > + ret = -EAGAIN;
> > + for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
> > + orig64.raw = nval64.raw = oval64.raw;
> > +
> > + if (futex_on_lo)
> > + nval64.lo_futex.val = newval;
> > + else
> > + nval64.hi_futex.val = newval;
> > +
> > + if (__lsui_cmpxchg64(uaddr64, &oval64.raw, nval64.raw))
> > + return -EFAULT;
> > +
> > + if (futex_on_lo) {
> > + oldval = oval64.lo_futex.val;
> > + other = oval64.lo_futex.other;
> > + orig_other = orig64.lo_futex.other;
> > + } else {
> > + oldval = oval64.hi_futex.val;
> > + other = oval64.hi_futex.other;
> > + orig_other = orig64.hi_futex.other;
> > + }
>
> Something similar here to use futex[futex_pos].
>
> We probably also need to check that the user pointer is 32-bit aligned
> and return -EFAULT if not.
Thanks. I'll respin it again ;)
--
Sincerely,
Yeoreum Yun
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 6/7] arm64: futex: support futex with FEAT_LSUI
2026-02-10 17:17 ` Yeoreum Yun
@ 2026-02-16 18:04 ` Catalin Marinas
2026-02-17 9:56 ` Yeoreum Yun
0 siblings, 1 reply; 24+ messages in thread
From: Catalin Marinas @ 2026-02-16 18:04 UTC (permalink / raw)
To: Yeoreum Yun
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
will, maz, broonie, oliver.upton, miko.lenczewski, kevin.brodsky,
ardb, suzuki.poulose, lpieralisi, scott, joey.gouly, yuzenghui,
pbonzini, shuah, mark.rutland, arnd
On Tue, Feb 10, 2026 at 05:17:46PM +0000, Yeoreum Yun wrote:
> > On Wed, Jan 21, 2026 at 07:06:21PM +0000, Yeoreum Yun wrote:
> > > +
> > > + if (futex_on_lo) {
> > > + oval64.lo_futex.val = oldval;
> > > + ret = get_user(oval64.lo_futex.other, uaddr + 1);
> > > + } else {
> > > + oval64.hi_futex.val = oldval;
> > > + ret = get_user(oval64.hi_futex.other, uaddr - 1);
> > > + }
> >
> > and here use
> >
> > get_user(oval64.raw, uaddr64);
> > futex[futex_pos] = oldval;
>
> But there was other feedback about this
> (though my first version was similar to your suggestion -- using oval64.raw):
> https://lore.kernel.org/all/aXDZGhFQDvoSwdc_@willie-the-truck/
Do you mean the 64-bit read? You can do a 32-bit uaccess, something
like:
int other_pos = futex_pos ^ 1;
get_user(futex[other_pos], (u32 __user *)uaddr64 + other_pos);
--
Catalin
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 6/7] arm64: futex: support futex with FEAT_LSUI
2026-02-16 18:04 ` Catalin Marinas
@ 2026-02-17 9:56 ` Yeoreum Yun
0 siblings, 0 replies; 24+ messages in thread
From: Yeoreum Yun @ 2026-02-17 9:56 UTC (permalink / raw)
To: Catalin Marinas
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
will, maz, broonie, oliver.upton, miko.lenczewski, kevin.brodsky,
ardb, suzuki.poulose, lpieralisi, scott, joey.gouly, yuzenghui,
pbonzini, shuah, mark.rutland, arnd
Hi Catalin,
> On Tue, Feb 10, 2026 at 05:17:46PM +0000, Yeoreum Yun wrote:
> > > On Wed, Jan 21, 2026 at 07:06:21PM +0000, Yeoreum Yun wrote:
> > > > +
> > > > + if (futex_on_lo) {
> > > > + oval64.lo_futex.val = oldval;
> > > > + ret = get_user(oval64.lo_futex.other, uaddr + 1);
> > > > + } else {
> > > > + oval64.hi_futex.val = oldval;
> > > > + ret = get_user(oval64.hi_futex.other, uaddr - 1);
> > > > + }
> > >
> > > and here use
> > >
> > > get_user(oval64.raw, uaddr64);
> > > futex[futex_pos] = oldval;
> >
> > But there is another feedback about this
> > (though I did first similarly with your suggestion -- use oval64.raw):
> > https://lore.kernel.org/all/aXDZGhFQDvoSwdc_@willie-the-truck/
>
> Do you mean the 64-bit read? You can do a 32-bit uaccess, something
> like:
>
> int other_pos = futex_pos ^ 1;
> get_user(futex[other_pos], (u32 __user *)uaddr64 + other_pos);
Oh, my question was whether we should use a 64-bit get_user() or
a 32-bit get_user(), and which of the two is better.
TBH, I don't think there would be much of a difference, but I want
to check again whether anything was overlooked beyond what Will
pointed out.
--
Sincerely,
Yeoreum Yun
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v12 7/7] arm64: armv8_deprecated: disable swp emulation when FEAT_LSUI present
2026-01-21 19:06 [PATCH v12 0/7] support FEAT_LSUI Yeoreum Yun
` (5 preceding siblings ...)
2026-01-21 19:06 ` [PATCH v12 6/7] arm64: futex: support futex with FEAT_LSUI Yeoreum Yun
@ 2026-01-21 19:06 ` Yeoreum Yun
2026-02-06 9:04 ` [PATCH v12 0/7] support FEAT_LSUI Yeoreum Yun
7 siblings, 0 replies; 24+ messages in thread
From: Yeoreum Yun @ 2026-01-21 19:06 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
Cc: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd,
Yeoreum Yun
The purpose of supporting LSUI is to eliminate PAN toggling.
CPUs that support LSUI are unlikely to support a 32-bit runtime.
Since environments that support both LSUI and a 32-bit runtime are
expected to be extremely rare, do not emulate the SWP instruction
using LSUI instructions to remove the PAN toggling; instead, simply
disable SWP emulation.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
arch/arm64/kernel/armv8_deprecated.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index e737c6295ec7..049754f7da36 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -610,6 +610,22 @@ static int __init armv8_deprecated_init(void)
}
#endif
+
+#ifdef CONFIG_SWP_EMULATION
+ /*
+ * The purpose of supporting LSUI is to eliminate PAN toggling.
+ * CPUs that support LSUI are unlikely to support a 32-bit runtime.
+ * Since environments that support both LSUI and a 32-bit runtime
+ * are expected to be extremely rare, we choose not to emulate
+ * the SWP instruction using LSUI instructions in order to remove PAN toggling,
+ * and instead simply disable SWP emulation.
+ */
+ if (cpus_have_final_cap(ARM64_HAS_LSUI)) {
+ insn_swp.status = INSN_UNAVAILABLE;
+ pr_info("swp/swpb instruction emulation is not supported on this system\n");
+ }
+#endif
+
for (int i = 0; i < ARRAY_SIZE(insn_emulations); i++) {
struct insn_emulation *ie = insn_emulations[i];
--
LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH v12 0/7] support FEAT_LSUI
2026-01-21 19:06 [PATCH v12 0/7] support FEAT_LSUI Yeoreum Yun
` (6 preceding siblings ...)
2026-01-21 19:06 ` [PATCH v12 7/7] arm64: armv8_deprecated: disable swp emulation when FEAT_LSUI present Yeoreum Yun
@ 2026-02-06 9:04 ` Yeoreum Yun
2026-02-06 18:35 ` Catalin Marinas
7 siblings, 1 reply; 24+ messages in thread
From: Yeoreum Yun @ 2026-02-06 9:04 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest
Cc: catalin.marinas, will, maz, broonie, oliver.upton,
miko.lenczewski, kevin.brodsky, ardb, suzuki.poulose, lpieralisi,
scott, joey.gouly, yuzenghui, pbonzini, shuah, mark.rutland, arnd
Gentle ping in case this has been forgotten.
On Wed, Jan 21, 2026 at 07:06:15PM +0000, Yeoreum Yun wrote:
> Since Armv9.6, FEAT_LSUI supplies load/store instructions for the
> privileged level to access user memory without clearing the
> PSTATE.PAN bit.
>
> This patchset supports FEAT_LSUI and applies it to the futex atomic
> operations and to user_swpX emulation, where the ldxr/st{l}xr pair
> implementation that clears the PSTATE.PAN bit can be replaced with
> the corresponding unprivileged load/store atomic operations that
> leave PSTATE.PAN set.
>
> This patchset is based on v6.19-rc6
>
> Patch History
> ==============
> from v11 to v12:
> - rebase to v6.19-rc6
> - add CONFIG_ARM64_LSUI
> - enable LSUI when !CPU_BIG_ENDIAN and PAN is present.
> - drop the swp emulation with LSUI insns; instead, disable it
> when LSUI is present.
> - some small fixes (removal of useless prefixes and suffixes, etc.).
> - https://lore.kernel.org/all/20251214112248.901769-1-yeoreum.yun@arm.com/
>
> from v10 to v11:
> - rebase to v6.19-rc1
> - use the cas{al}t instruction to emulate the deprecated swpb instruction
> - https://lore.kernel.org/all/20251103163224.818353-1-yeoreum.yun@arm.com/
>
> from v9 to v10:
> - apply FEAT_LSUI to user_swpX emulation.
> - add test coverage for LSUI bit in ID_AA64ISAR3_EL1
> - rebase to v6.18-rc4
> - https://lore.kernel.org/all/20250922102244.2068414-1-yeoreum.yun@arm.com/
>
> from v8 to v9:
> - refactoring __lsui_cmpxchg64()
> - rebase to v6.17-rc7
> - https://lore.kernel.org/all/20250917110838.917281-1-yeoreum.yun@arm.com/
>
> from v7 to v8:
> - implement futex_atomic_eor() and futex_atomic_cmpxchg() with casalt
> and a C helper.
> - Drop the small optimisation on ll/sc futex_atomic_set operation.
> - modify some commit message.
> - https://lore.kernel.org/all/20250816151929.197589-1-yeoreum.yun@arm.com/
>
> from v6 to v7:
> - wrap FEAT_LSUI with CONFIG_AS_HAS_LSUI in cpufeature
> - remove unnecessary addition of indentation.
> - remove unnecessary mte_tco_enable()/disable() on LSUI operation.
> - https://lore.kernel.org/all/20250811163635.1562145-1-yeoreum.yun@arm.com/
>
> from v5 to v6:
> - rebase to v6.17-rc1
> - https://lore.kernel.org/all/20250722121956.1509403-1-yeoreum.yun@arm.com/
>
> from v4 to v5:
> - remove futex_ll_sc.h futext_lsui and lsui.h and move them to futex.h
> - reorganize the patches.
> - https://lore.kernel.org/all/20250721083618.2743569-1-yeoreum.yun@arm.com/
>
> from v3 to v4:
> - rebase to v6.16-rc7
> - modify some patch's title.
> - https://lore.kernel.org/all/20250617183635.1266015-1-yeoreum.yun@arm.com/
>
> from v2 to v3:
> - expose FEAT_LSUI to guest
> - add help section for LSUI Kconfig
> - https://lore.kernel.org/all/20250611151154.46362-1-yeoreum.yun@arm.com/
>
> from v1 to v2:
> - remove empty v9.6 menu entry
> - locate HAS_LSUI in cpucaps in order
> - https://lore.kernel.org/all/20250611104916.10636-1-yeoreum.yun@arm.com/
>
>
> Yeoreum Yun (7):
> arm64: Kconfig: add support for LSUI
> arm64: cpufeature: add FEAT_LSUI
> KVM: arm64: expose FEAT_LSUI to guest
> KVM: arm64: kselftest: set_id_regs: add test for FEAT_LSUI
> arm64: futex: refactor futex atomic operation
> arm64: futex: support futex with FEAT_LSUI
> arm64: armv8_deprecated: disable swp emulation when FEAT_LSUI present
>
> arch/arm64/Kconfig | 20 ++
> arch/arm64/include/asm/futex.h | 322 +++++++++++++++---
> arch/arm64/kernel/armv8_deprecated.c | 16 +
> arch/arm64/kernel/cpufeature.c | 27 ++
> arch/arm64/kvm/sys_regs.c | 3 +-
> arch/arm64/tools/cpucaps | 1 +
> .../testing/selftests/kvm/arm64/set_id_regs.c | 1 +
> 7 files changed, 339 insertions(+), 51 deletions(-)
>
> --
> LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
>
--
Sincerely,
Yeoreum Yun
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 0/7] support FEAT_LSUI
2026-02-06 9:04 ` [PATCH v12 0/7] support FEAT_LSUI Yeoreum Yun
@ 2026-02-06 18:35 ` Catalin Marinas
2026-02-12 8:08 ` Yeoreum Yun
0 siblings, 1 reply; 24+ messages in thread
From: Catalin Marinas @ 2026-02-06 18:35 UTC (permalink / raw)
To: Yeoreum Yun
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
will, maz, broonie, oliver.upton, miko.lenczewski, kevin.brodsky,
ardb, suzuki.poulose, lpieralisi, scott, joey.gouly, yuzenghui,
pbonzini, shuah, mark.rutland, arnd
On Fri, Feb 06, 2026 at 09:04:41AM +0000, Yeoreum Yun wrote:
> Gentle ping in case this has been forgotten.
Not forgotten, but it is lower priority given that the merge window is
about to open this Sunday. The LSUI series is now aimed at the next cycle.
--
Catalin
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v12 0/7] support FEAT_LSUI
2026-02-06 18:35 ` Catalin Marinas
@ 2026-02-12 8:08 ` Yeoreum Yun
0 siblings, 0 replies; 24+ messages in thread
From: Yeoreum Yun @ 2026-02-12 8:08 UTC (permalink / raw)
To: Catalin Marinas
Cc: linux-arm-kernel, linux-kernel, kvmarm, kvm, linux-kselftest,
will, maz, broonie, oliver.upton, miko.lenczewski, kevin.brodsky,
ardb, suzuki.poulose, lpieralisi, scott, joey.gouly, yuzenghui,
pbonzini, shuah, mark.rutland, arnd
> On Fri, Feb 06, 2026 at 09:04:41AM +0000, Yeoreum Yun wrote:
> > Gentle ping in case this has been forgotten.
>
> Not forgotten but lower priority given that the merging window is about
> to open this Sunday. The LSUI series is now aimed at the next cycle.
Thanks for keeping this series in mind :D
--
Sincerely,
Yeoreum Yun
^ permalink raw reply [flat|nested] 24+ messages in thread