From: Mark Rutland <mark.rutland@arm.com>
To: Tian Tao <tiantao6@hisilicon.com>
Cc: catalin.marinas@arm.com, will@kernel.org,
jonathan.cameron@huawei.com,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linuxarm@huawei.com
Subject: Re: [PATCH] arm64: Add ARM64_HAS_LSE2 CPU capability
Date: Fri, 6 Sep 2024 10:44:41 +0100
Message-ID: <ZtrPCVhqj5qLrQVY@J2N7QTR9R3>
In-Reply-To: <20240906090812.249473-1-tiantao6@hisilicon.com>
On Fri, Sep 06, 2024 at 05:08:12PM +0800, Tian Tao wrote:
> When FEAT_LSE2 is implemented, bit 6 of SCTLR_ELx is nAA, short for
> "not-aligned access". The nAA bit has two values:
> 0b0 Unaligned accesses by the specified instructions generate an
> Alignment fault.
> 0b1 Unaligned accesses by the specified instructions do not generate
> an Alignment fault.
>
> This patch sets the nAA bit to 1, so the following instructions will
> not generate an Alignment fault if all bytes being accessed lie within
> a single 16-byte quantity, aligned to 16 bytes:
> • LDAPR, LDAPRH, LDAPUR, LDAPURH, LDAPURSH, LDAPURSW, LDAR, LDARH,
> LDLAR, LDLARH
> • STLLR, STLLRH, STLR, STLRH, STLUR, and STLURH
>
> Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
What is going to depend on this? Nothing in the kernel depends on being
able to make unaligned accesses with these instructions, and (since you
haven't added a HWCAP), userspace has no idea that these accesses won't
generate an alignment fault.
Mark.
> ---
> arch/arm64/Kconfig | 10 ++++++++++
> arch/arm64/include/asm/sysreg.h | 1 +
> arch/arm64/kernel/cpufeature.c | 18 ++++++++++++++++++
> arch/arm64/tools/cpucaps | 1 +
> 4 files changed, 30 insertions(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 77d7ef0b16c2..7afe73ebcd79 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -2023,6 +2023,16 @@ config ARM64_TLB_RANGE
> The feature introduces new assembly instructions, and they were
> support when binutils >= 2.30.
>
> +config ARM64_LSE2_NAA
> + bool "Enable support for not-aligned access"
> + depends on AS_HAS_ARMV8_4
> + help
> + LSE2 is an extension to the original LSE (Large System Extensions) feature,
> + introduced in ARMv8.4.
> +
> + When enabled, unaligned accesses by the affected instructions do not
> + generate an Alignment fault if they lie within a single 16-byte quantity.
> +
> endmenu # "ARMv8.4 architectural features"
>
> menu "ARMv8.5 architectural features"
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 8cced8aa75a9..42e3a1959aa8 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -854,6 +854,7 @@
> #define SCTLR_ELx_ENDB (BIT(13))
> #define SCTLR_ELx_I (BIT(12))
> #define SCTLR_ELx_EOS (BIT(11))
> +#define SCTLR_ELx_nAA (BIT(6))
> #define SCTLR_ELx_SA (BIT(3))
> #define SCTLR_ELx_C (BIT(2))
> #define SCTLR_ELx_A (BIT(1))
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 646ecd3069fd..558869a7c7f0 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2299,6 +2299,14 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
> }
> #endif /* CONFIG_ARM64_MTE */
>
> +#ifdef CONFIG_ARM64_LSE2_NAA
> +static void cpu_enable_lse2(const struct arm64_cpu_capabilities *__unused)
> +{
> + sysreg_clear_set(sctlr_el2, SCTLR_ELx_nAA, SCTLR_ELx_nAA);
> + isb();
> +}
> +#endif
> +
> static void user_feature_fixup(void)
> {
> if (cpus_have_cap(ARM64_WORKAROUND_2658417)) {
> @@ -2427,6 +2435,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
> ARM64_CPUID_FIELDS(ID_AA64ISAR0_EL1, ATOMIC, IMP)
> },
> #endif /* CONFIG_ARM64_LSE_ATOMICS */
> +#ifdef CONFIG_ARM64_LSE2_NAA
> + {
> + .desc = "Support for not-aligned access",
> + .capability = ARM64_HAS_LSE2,
> + .type = ARM64_CPUCAP_SYSTEM_FEATURE,
> + .matches = has_cpuid_feature,
> + .cpu_enable = cpu_enable_lse2,
> + ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, AT, IMP)
> + },
> +#endif
> {
> .desc = "Virtualization Host Extensions",
> .capability = ARM64_HAS_VIRT_HOST_EXTN,
> diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
> index ac3429d892b9..0c7c0a293574 100644
> --- a/arch/arm64/tools/cpucaps
> +++ b/arch/arm64/tools/cpucaps
> @@ -41,6 +41,7 @@ HAS_HCX
> HAS_LDAPR
> HAS_LPA2
> HAS_LSE_ATOMICS
> +HAS_LSE2
> HAS_MOPS
> HAS_NESTED_VIRT
> HAS_PAN
> --
> 2.33.0
>
>
Thread overview: 8+ messages
2024-09-06 9:08 [PATCH] arm64: Add ARM64_HAS_LSE2 CPU capability Tian Tao
2024-09-06 9:44 ` Mark Rutland [this message]
2024-09-06 10:58 ` tiantao (H)
2024-09-06 11:09 ` Mark Rutland
2024-09-06 11:18 ` tiantao (H)
2024-09-06 11:42 ` Mark Rutland
2024-09-06 12:20 ` tiantao (H)
2024-09-06 13:05 ` Mark Rutland