linux-arm-kernel.lists.infradead.org archive mirror
From: Dave Martin <Dave.Martin@arm.com>
To: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: mark.rutland@arm.com, keescook@chromium.org,
	catalin.marinas@arm.com, broonie@kernel.org, james.morse@arm.com,
	amit.kachhap@arm.com, Vincenzo.Frascino@arm.com, will@kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v4 2/4] arm64: cpufeature: Modify address authentication cpufeature to exact
Date: Wed, 29 Jul 2020 15:31:02 +0100	[thread overview]
Message-ID: <20200729143101.GG21941@arm.com> (raw)
In-Reply-To: <66cc667d-7397-bbc7-bde1-c562964a7672@arm.com>

On Wed, Jul 29, 2020 at 01:28:06PM +0100, Suzuki K Poulose wrote:
> On 07/29/2020 11:36 AM, Dave Martin wrote:
> >On Fri, Jul 10, 2020 at 01:30:08PM +0530, Amit Daniel Kachhap wrote:
> >>The current address authentication cpufeature levels are set as
> >>LOWER_SAFE, which is not compatible with the different configurations
> >>added for the Armv8.3 ptrauth enhancements: the levels have different
> >>behaviour and there is no tunable to enable the lower safe versions.
> >>Rectify this by setting the type of those cpufeatures to EXACT.
> >>
> >>The current cpufeature framework also does not prevent non-exact
> >>secondary cpus from booting, but merely marks the kernel as tainted.
> >>Fix this by replacing the generic match handler with a new handler
> >>specific to ptrauth.
> >>
> >>After this change, if a secondary cpu's ptrauth configuration differs
> >>from the boot cpu's, the mismatched cpu is parked in an infinite loop.
> >>
> >>Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >>[Suzuki: Introduce new matching function for address authentication]
> >>Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> >>---
> >>Changes since v3:
> >>  * Commit logs cleanup.
> >>
> >>  arch/arm64/kernel/cpufeature.c | 56 ++++++++++++++++++++++++++++------
> >>  1 file changed, 47 insertions(+), 9 deletions(-)
> >>
> >>diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> >>index 9fae0efc80c1..8ac8c18f70c9 100644
> >>--- a/arch/arm64/kernel/cpufeature.c
> >>+++ b/arch/arm64/kernel/cpufeature.c
> >>@@ -210,9 +210,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
> >>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
> >>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
> >>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> >>-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
> >>+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
> >>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> >>-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> >>+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> >>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
> >>  	ARM64_FTR_END,
> >>  };
> >>@@ -1605,11 +1605,49 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
> >>  #endif /* CONFIG_ARM64_RAS_EXTN */
> >>  #ifdef CONFIG_ARM64_PTR_AUTH
> >>-static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
> >>-			     int __unused)
> >>+static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
> >>  {
> >>-	return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
> >>-	       __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
> >>+	int local_cpu_auth;
> >>+	static int boot_cpu_auth_arch;
> >>+	static int boot_cpu_auth_imp_def;
> >
> >I guess having static variables here is probably safe, provided that
> >calls to this function are strictly serialised with sufficiently
> >heavyweight synchronisation.
> >
> 
> These are initialised on the primary boot CPU, and the function is
> later called during secondary CPU bring-up, so the calls are
> serialized, except when this_cpu_has_cap() is called. But that is
> fine, as it is a read-only operation.

To guarantee that any update to those variables is visible to a booting
CPU, we'd probably need a DSB and/or DMB somewhere.  For now I don't see
anything explicit, though it's likely we're getting them as a side
effect of something else.

In any case, I guess we've been managing for some time without this
being strictly correct.
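
To illustrate, one possible pairing might look like this (an untested
sketch of where the ordering could be made explicit, showing only the
IMP_DEF case; smp_wmb()/smp_rmb() expand to DMBs on arm64, and this is
not a claim about what the patch must do):

	if (scope & SCOPE_BOOT_CPU) {
		boot_cpu_auth_imp_def = local_cpu_auth;
		/* Publish the cached level before any secondary reads it */
		smp_wmb();
		return boot_cpu_auth_imp_def >= entry->min_field_value;
	} else if (scope & SCOPE_LOCAL_CPU) {
		/* Pairs with the boot CPU's smp_wmb() above */
		smp_rmb();
		return (local_cpu_auth >= entry->min_field_value) &&
		       (local_cpu_auth == boot_cpu_auth_imp_def);
	}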


> >Might be worth a comment.
> >
> >Coming up with a cleaner approach using locks or atomics might be
> >overkill, but other folks might take a stronger view on this.
> >
> >>+
> >>+	/* We don't expect to be called with SCOPE_SYSTEM */
> >>+	WARN_ON(scope == SCOPE_SYSTEM);
> >>+
> >>+	local_cpu_auth = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
> >>+						     entry->field_pos, entry->sign);
> >>+	/*
> >>+	 * The ptr-auth feature levels are not compatible with lower
> >>+	 * levels. Hence we must match the level of each CPU with that
> >>+	 * of the boot CPU. So cache the boot CPU's level and compare
> >>+	 * it against that of the secondary CPUs.
> >>+	 */
> >>+	if (scope & SCOPE_BOOT_CPU) {
> >>+		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
> >>+			boot_cpu_auth_imp_def = local_cpu_auth;

Actually, can we make use of read_sanitised_ftr_reg() here rather than
having to cache the values in static variables?
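
E.g. something along these lines (untested sketch; it assumes the
sanitised copy of entry->sys_reg already reflects the boot CPU's value
by the time the secondaries are verified):

	int boot_cpu_auth = cpuid_feature_extract_field(
				read_sanitised_ftr_reg(entry->sys_reg),
				entry->field_pos, entry->sign);

	if (scope & SCOPE_BOOT_CPU)
		return local_cpu_auth >= entry->min_field_value;

	/* Secondaries must match the boot CPU's level exactly */
	return (local_cpu_auth >= entry->min_field_value) &&
	       (local_cpu_auth == boot_cpu_auth);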

> >>+			return boot_cpu_auth_imp_def >= entry->min_field_value;
> >>+		} else if (entry->capability == ARM64_HAS_ADDRESS_AUTH_ARCH) {
> >>+			boot_cpu_auth_arch = local_cpu_auth;
> >>+			return boot_cpu_auth_arch >= entry->min_field_value;
> >>+		}
> >>+	} else if (scope & SCOPE_LOCAL_CPU) {
> >>+		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_IMP_DEF) {
> >>+			return (local_cpu_auth >= entry->min_field_value) &&
> >>+			       (local_cpu_auth == boot_cpu_auth_imp_def);
> >>+		}
> >>+		if (entry->capability == ARM64_HAS_ADDRESS_AUTH_ARCH) {
> >>+			return (local_cpu_auth >= entry->min_field_value) &&
> >>+			       (local_cpu_auth == boot_cpu_auth_arch);
> >>+		}
> >>+	}
> >
> >This looks complex.  I guess this is something to do with the change to
> >FTR_EXACT, but won't the cpufeatures code do the right thing anyway with
> >mismatched features?
> 
> Part of the issue is that the capability is SCOPE_BOOT_CPU. So,
> we need to verify the features on the secondary CPUs, making sure that
> they match the "level" as detected by the boot CPU. Otherwise, we could
> end up with mismatched CPUs, where all of them support different levels
> of AUTH.
> 
> Also, this implies that the checks are performed before the cpufeature
> register gets updated with the secondary CPUs' values. Any conflict
> means the secondary CPU ends up parked.
> 
> So, the "FTR_EXACT" change only has the effect of keeping the cpufeature
> infrastructure sanitised in the absence of CONFIG_ARM64_PTR_AUTH.

I think the framework could be improved to handle this kind of thing
more gracefully, but I don't see an obviously better way to do this for
now, especially if this is just a one-off case.
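
For the record, the path a mismatched secondary ends up taking is
roughly this (a much-simplified sketch of the framework's verification
step, not the literal code; arm64_features is the capability table in
cpufeature.c):

	const struct arm64_cpu_capabilities *caps;

	for (caps = arm64_features; caps->matches; caps++) {
		if (!(caps->type & SCOPE_BOOT_CPU))
			continue;
		if (cpus_have_cap(caps->capability) &&
		    !caps->matches(caps, SCOPE_LOCAL_CPU))
			/* Mismatch: park this CPU before it goes online */
			cpu_die_early();
	}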

(I certainly hope we don't get more features that behave this way...)

Cheers
---Dave


Thread overview: 15+ messages
2020-07-10  8:00 [PATCH v4 0/4] arm64: add Armv8.3 pointer authentication enhancements Amit Daniel Kachhap
2020-07-10  8:00 ` [PATCH v4 1/4] arm64: ptrauth: add Armv8.3 pointer authentication enhanced features Amit Daniel Kachhap
2020-07-29 11:07   ` Dave Martin
2020-08-03 10:13     ` Amit Kachhap
2020-07-10  8:00 ` [PATCH v4 2/4] arm64: cpufeature: Modify address authentication cpufeature to exact Amit Daniel Kachhap
2020-07-29 10:36   ` Dave Martin
2020-07-29 12:28     ` Suzuki K Poulose
2020-07-29 14:31       ` Dave Martin [this message]
2020-07-29 17:09         ` Suzuki K Poulose
2020-08-05  9:16           ` Amit Kachhap
2020-07-10  8:00 ` [PATCH v4 3/4] arm64: kprobe: disable probe of fault prone ptrauth instruction Amit Daniel Kachhap
2020-07-29 10:43   ` Dave Martin
2020-08-03 10:16     ` Amit Kachhap
2020-07-10  8:00 ` [PATCH v4 4/4] arm64: kprobe: clarify the comment of steppable hint instructions Amit Daniel Kachhap
2020-07-29 10:44   ` Dave Martin
