From: Dave Martin <Dave.Martin@arm.com>
To: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>,
Kees Cook <keescook@chromium.org>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Mark Brown <broonie@kernel.org>,
James Morse <james.morse@arm.com>,
Vincenzo Frascino <Vincenzo.Frascino@arm.com>,
Will Deacon <will@kernel.org>,
linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v4 2/4] arm64: cpufeature: Modify address authentication cpufeature to exact
Date: Wed, 29 Jul 2020 11:36:48 +0100
Message-ID: <20200729103648.GC21941@arm.com>
In-Reply-To: <1594368010-4419-3-git-send-email-amit.kachhap@arm.com>
On Fri, Jul 10, 2020 at 01:30:08PM +0530, Amit Daniel Kachhap wrote:
> The current address authentication cpufeature levels are set as LOWER_SAFE,
> which is not compatible with the different configurations added for the
> Armv8.3 ptrauth enhancements, as the different levels have different
> behaviour and there is no tunable to enable the lower safe versions. This
> is rectified by setting those cpufeature types as EXACT.
>
> The current cpufeature framework also does not interfere with the booting
> of mismatched secondary cpus, but rather marks them as tainted. As a
> workaround, this is fixed by replacing the generic match handler with a
> new handler specific to ptrauth.
>
> After this change, if the ptrauth configuration of a secondary cpu differs
> from that of the boot cpu, the mismatched cpu is parked in an infinite
> loop.
>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> [Suzuki: Introduce new matching function for address authentication]
> Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
> Changes since v3:
> * Commit logs cleanup.
>
> arch/arm64/kernel/cpufeature.c | 56 ++++++++++++++++++++++++++++------
> 1 file changed, 47 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 9fae0efc80c1..8ac8c18f70c9 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -210,9 +210,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
> ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> - FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
> + FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> - FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> + FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
> ARM64_FTR_END,
> };
> @@ -1605,11 +1605,49 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
> #endif /* CONFIG_ARM64_RAS_EXTN */
>
> #ifdef CONFIG_ARM64_PTR_AUTH
> -static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
> - int __unused)
> +static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
> {
> - return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
> - __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
> + int local_cpu_auth;
> + static int boot_cpu_auth_arch;
> + static int boot_cpu_auth_imp_def;
I guess having static variables here is probably safe, provided that
calls to this function are strictly serialised with sufficiently
heavyweight synchronisation.
Might be worth a comment.
Coming up with a cleaner approach using locks or atomics might be
overkill, but other folks might take a stronger view on this.
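For instance, something along these lines might be enough (just a sketch,
and the exact wording would need checking against what the cpufeature
framework actually guarantees about serialising these calls):

	/*
	 * Cached boot CPU values for the address auth ID fields.  These
	 * are written only once, from the boot CPU (SCOPE_BOOT_CPU), and
	 * only read later during secondary CPU bring-up.  We rely on the
	 * cpufeature code to serialise those calls, so no explicit
	 * locking is used here.
	 */
	static int boot_cpu_auth_arch;
	static int boot_cpu_auth_imp_def;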
> +
> + /* We don't expect to be called with SCOPE_SYSTEM */
> + WARN_ON(scope == SCOPE_SYSTEM);
> +
> + local_cpu_auth = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
> + entry->field_pos, entry->sign);
> + /*
> + * The ptr-auth feature levels are not intercompatible with
> + * lower levels. Hence we must match all the CPUs with that
> + * of the boot CPU. So cache the level of boot CPU and compare
> + * it against the secondary CPUs.
> + */
> + if (scope & SCOPE_BOOT_CPU) {
> + if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
> + boot_cpu_auth_imp_def = local_cpu_auth;
> + return boot_cpu_auth_imp_def >= entry->min_field_value;
> + } else if (entry->capability == ARM64_HAS_ADDRESS_AUTH_ARCH) {
> + boot_cpu_auth_arch = local_cpu_auth;
> + return boot_cpu_auth_arch >= entry->min_field_value;
> + }
> + } else if (scope & SCOPE_LOCAL_CPU) {
> + if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
> + return (local_cpu_auth >= entry->min_field_value) &&
> + (local_cpu_auth == boot_cpu_auth_imp_def);
> + }
> + if (entry->capability == ARM64_HAS_ADDRESS_AUTH_ARCH) {
> + return (local_cpu_auth >= entry->min_field_value) &&
> + (local_cpu_auth == boot_cpu_auth_arch);
> + }
> + }
This looks complex. I guess this is something to do with the change to
FTR_EXACT, but won't the cpufeatures code do the right thing anyway with
mismatched features?
I may be missing something here.
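In other words, I'd naively have expected something closer to the generic
matcher to suffice, along these lines (sketch only -- this assumes the
framework's FTR_EXACT handling is strong enough for a late CPU, which is
exactly what I'm unsure about):

static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry,
				    int scope)
{
	/*
	 * Rely on the generic ID-register check and let the FTR_EXACT
	 * ftr_bits entries deal with mismatched secondaries.
	 */
	return has_cpuid_feature(entry, scope);
}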
> + return false;
> +}
> +
> +static bool has_address_auth_metacap(const struct arm64_cpu_capabilities *entry,
> + int scope)
> +{
> + return has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH], scope) ||
> + has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF], scope);
> }
>
> static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
> @@ -1958,7 +1996,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
> .sign = FTR_UNSIGNED,
> .field_pos = ID_AA64ISAR1_APA_SHIFT,
> .min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
> - .matches = has_cpuid_feature,
> + .matches = has_address_auth_cpucap,
> },
> {
> .desc = "Address authentication (IMP DEF algorithm)",
> @@ -1968,12 +2006,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
> .sign = FTR_UNSIGNED,
> .field_pos = ID_AA64ISAR1_API_SHIFT,
> .min_field_value = ID_AA64ISAR1_API_IMP_DEF,
> - .matches = has_cpuid_feature,
> + .matches = has_address_auth_cpucap,
> },
> {
> .capability = ARM64_HAS_ADDRESS_AUTH,
> .type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
> - .matches = has_address_auth,
> + .matches = has_address_auth_metacap,
> },
> {
> .desc = "Generic authentication (architected algorithm)",
[...]
Cheers
---Dave