From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 10 Nov 2020 12:05:20 +0000
From: Catalin Marinas
To: Suzuki K Poulose
Subject: Re: [PATCH] arm64: errata: Fix handling of 1418040 with late CPU onlining
Message-ID:
References: <20201106114952.10032-1-will@kernel.org>
 <20201106124400.GF29329@gaia>
 <20201110103957.GA1104295@ewhatever.cambridge.arm.com>
In-Reply-To: <20201110103957.GA1104295@ewhatever.cambridge.arm.com>
Cc: Mark Rutland, Sai Prakash Ranjan, Will Deacon, Stephen Boyd, Marc Zyngier,
 kernel-team@android.com, linux-arm-kernel@lists.infradead.org

On Tue, Nov 10, 2020 at 10:39:57AM +0000, Suzuki K Poulose wrote:
> Catalin
>
> On Fri, Nov 06, 2020 at 12:44:00PM +0000, Catalin Marinas wrote:
> > On Fri, Nov 06, 2020 at 12:18:32PM +0000, Suzuki K Poulose wrote:
> > > On 11/6/20 11:49 AM, Will Deacon wrote:
> > +/*
> > + * Test for a capability with a runtime check. This is an alias for
> > + * cpus_have_cap() but with the name chosen to emphasize the applicability to
> > + * late capability setting.
> > + */
> > +static __always_inline bool cpus_have_late_cap(int num)
> > +{
> > +	return cpus_have_cap(num);
> > +}
> > +
>
> It is quite easy for people to misuse the ubiquitous cpus_have_const_cap().
> So, unless we make sure that we fix that to handle these "caps" we might
> see subtle bugs. How about the following fix on top of your patch, to
> add another (sigh!) set of keys to detect if we can use the const caps
> or not.

I thought about this as well but decided it's not worth the additional code.

> Untested...
>
> ---8>---
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 849b2691fadd..84dd43f1f322 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -379,6 +379,7 @@ cpucap_multi_entry_cap_matches(const struct arm64_cpu_capabilities *entry,
>
>  extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
>  extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
> +extern struct static_key_false cpu_hwcap_const_key[ARM64_NCAPS];
>  extern struct static_key_false arm64_const_caps_ready;
>
>  /* ARM64 CAPS + alternative_cb */
> @@ -414,6 +415,14 @@ static inline bool cpus_have_cap(unsigned int num)
>  	return test_bit(num, cpu_hwcaps);
>  }
>
> +static __always_inline bool cpucap_has_const_key(int num)
> +{
> +	if (num >= ARM64_NCAPS)
> +		return false;
> +
> +	return static_branch_unlikely(&cpu_hwcap_const_key[num]);
> +}
> +
>  /*
>   * Test for a capability without a runtime check.
>   *
> @@ -424,7 +433,7 @@ static inline bool cpus_have_cap(unsigned int num)
>   */
>  static __always_inline bool __cpus_have_const_cap(int num)
>  {
> -	if (num >= ARM64_NCAPS)
> +	if (num >= ARM64_NCAPS && WARN_ON(!cpucap_has_const_key(num)))
>  		return false;
>  	return static_branch_unlikely(&cpu_hwcap_keys[num]);
>  }
> @@ -439,7 +448,7 @@ static __always_inline bool __cpus_have_const_cap(int num)
>   */
>  static __always_inline bool cpus_have_const_cap(int num)
>  {
> -	if (system_capabilities_finalized())
> +	if (cpucap_has_const_key(num) && system_capabilities_finalized())
>  		return __cpus_have_const_cap(num);
>  	else
>  		return cpus_have_cap(num);

I don't particularly like that we end up with three static key checks here
but, OTOH, this has the advantage that we don't need a new
cpus_have_late_cap().

Going back to the original problem, we want to allow late cpucap setting
after the system caps have been finalised and, in addition, we want to
detect API misuse. We solve the former by setting the corresponding
cpu_hwcaps bit late, based on a new capability flag. For the API, we can
use your approach with another static key; that's pretty foolproof since
it gives us warnings.
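
To make the distinction concrete, a minimal call-site sketch (the wrapper
functions below are purely illustrative and not part of either patch; only
the behaviour of the two existing accessors is being shown):

#include <asm/cpufeature.h>

/*
 * Fine for a capability that a late CPU may set: cpus_have_cap() reads
 * the cpu_hwcaps bitmap on every call, so it observes a bit that was
 * set after the system capabilities were finalised.
 */
static bool erratum_1418040_check_ok(void)
{
	return cpus_have_cap(ARM64_WORKAROUND_1418040);
}

/*
 * The subtle bug being discussed: without extra handling, once the
 * system capabilities are finalised this compiles down to a static
 * branch that was never enabled for a capability detected only on a
 * late CPU, so it keeps returning false.
 */
static bool erratum_1418040_check_broken(void)
{
	return cpus_have_const_cap(ARM64_WORKAROUND_1418040);
}
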
Alternatively (not as foolproof), we could skip the static key
initialisation entirely for a late feature even if we detect it early;
that would allow one to (hopefully) figure out the wrong API use
(cpus_have_const_cap() never returning true):

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e346938e9a37..531b7787ee8e 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2436,7 +2436,8 @@ static void __init enable_cpu_capabilities(u16 scope_mask)
 			continue;
 
 		/* Ensure cpus_have_const_cap(num) works */
-		static_branch_enable(&cpu_hwcap_keys[num]);
+		if (!cpucap_set_for_late_cpu(caps))
+			static_branch_enable(&cpu_hwcap_keys[num]);
 
 		if (boot_scope && caps->cpu_enable)
 			/*

Anyway, either option is fine by me.

-- 
Catalin

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel