Date: Mon, 16 Oct 2023
17:11:15 +0100
From: Catalin Marinas
To: Mark Brown
Cc: Mark Rutland, linux-arm-kernel@lists.infradead.org, ardb@kernel.org,
	bertrand.marquis@arm.com, boris.ostrovsky@oracle.com,
	daniel.lezcano@linaro.org, james.morse@arm.com, jgross@suse.com,
	kristina.martsenko@arm.com, maz@kernel.org, oliver.upton@linux.dev,
	pcc@google.com, sstabellini@kernel.org, suzuki.poulose@arm.com,
	tglx@linutronix.de, vladimir.murzin@arm.com, will@kernel.org
Subject: Re: [PATCH v4 10/38] arm64: Explicitly save/restore CPACR when probing SVE and SME
References: <20231016102501.3643901-1-mark.rutland@arm.com>
	<20231016102501.3643901-11-mark.rutland@arm.com>

On Mon, Oct 16, 2023 at 01:02:13PM +0100, Mark Brown wrote:
> On Mon, Oct 16, 2023 at 11:24:33AM +0100, Mark Rutland wrote:
> > When a CPU is onlined we first probe for supported features and
> > properties, and then we subsequently enable features that have been
> > detected. This is a little problematic for SVE and SME, as some
> > properties (e.g. vector lengths) cannot be probed while they are
> > disabled. Due to this, the code probing for SVE properties has to enable
>
> Reviewed-by: Mark Brown

Thanks, Mark, for reviewing. Could you please also check my conflict
resolution? Mark R's patches conflict with your patches on
for-next/sve-remove-pseudo-regs (maybe I should have applied them on
top; hopefully git rerere remembers it correctly).
diff --cc arch/arm64/include/asm/fpsimd.h
index 9e5d3a0812b6,c43ae9c013ec..50e5f25d3024
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@@ -123,11 -149,13 +149,12 @@@ extern void sme_save_state(void *state
  extern void sme_load_state(void const *state, int zt);
  
  struct arm64_cpu_capabilities;
- extern void sve_kernel_enable(const struct arm64_cpu_capabilities *__unused);
- extern void sme_kernel_enable(const struct arm64_cpu_capabilities *__unused);
- extern void sme2_kernel_enable(const struct arm64_cpu_capabilities *__unused);
- extern void fa64_kernel_enable(const struct arm64_cpu_capabilities *__unused);
+ extern void cpu_enable_fpsimd(const struct arm64_cpu_capabilities *__unused);
+ extern void cpu_enable_sve(const struct arm64_cpu_capabilities *__unused);
+ extern void cpu_enable_sme(const struct arm64_cpu_capabilities *__unused);
+ extern void cpu_enable_sme2(const struct arm64_cpu_capabilities *__unused);
+ extern void cpu_enable_fa64(const struct arm64_cpu_capabilities *__unused);
 -extern u64 read_zcr_features(void);
  extern u64 read_smcr_features(void);
  
  /*
diff --cc arch/arm64/kernel/cpufeature.c
index 55a3bc719d46,ad7ec30d3bd3..397a1bbf4fba
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@@ -1026,21 -1040,30 +1026,26 @@@ void __init init_cpu_features(struct cp
  	if (IS_ENABLED(CONFIG_ARM64_SVE) &&
  	    id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
- 		sve_kernel_enable(NULL);
+ 		unsigned long cpacr = cpacr_save_enable_kernel_sve();
+ 
 -		info->reg_zcr = read_zcr_features();
 -		init_cpu_ftr_reg(SYS_ZCR_EL1, info->reg_zcr);
  		vec_init_vq_map(ARM64_VEC_SVE);
+ 
+ 		cpacr_restore(cpacr);
  	}
  
  	if (IS_ENABLED(CONFIG_ARM64_SME) &&
  	    id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
- 		sme_kernel_enable(NULL);
+ 		unsigned long cpacr = cpacr_save_enable_kernel_sme();
  
 -		info->reg_smcr = read_smcr_features();
  		/*
  		 * We mask out SMPS since even if the hardware
  		 * supports priorities the kernel does not at present
  		 * and we block access to them.
  		 */
  		info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
- 		init_cpu_ftr_reg(SYS_SMCR_EL1, info->reg_smcr);
  		vec_init_vq_map(ARM64_VEC_SME);
+ 
+ 		cpacr_restore(cpacr);
  	}
  
  	if (id_aa64pfr1_mte(info->reg_id_aa64pfr1))
@@@ -1274,29 -1297,40 +1279,35 @@@ void update_cpu_features(int cpu
  	taint |= check_update_ftr_reg(SYS_ID_AA64SMFR0_EL1, cpu,
  				      info->reg_id_aa64smfr0, boot->reg_id_aa64smfr0);
  
+ 	/* Probe vector lengths */
  	if (IS_ENABLED(CONFIG_ARM64_SVE) &&
  	    id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
- 		unsigned long cpacr = cpacr_save_enable_kernel_sve();
+ 		if (!system_capabilities_finalized()) {
 -		sve_kernel_enable(NULL);
++			unsigned long cpacr = cpacr_save_enable_kernel_sve();
+ 
 -		info->reg_zcr = read_zcr_features();
 -		taint |= check_update_ftr_reg(SYS_ZCR_EL1, cpu,
 -					      info->reg_zcr, boot->reg_zcr);
 -
 -		/* Probe vector lengths */
 -		if (!system_capabilities_finalized())
  			vec_update_vq_map(ARM64_VEC_SVE);
+ 
- 		cpacr_restore(cpacr);
++			cpacr_restore(cpacr);
+ 		}
  	}
  
  	if (IS_ENABLED(CONFIG_ARM64_SME) &&
  	    id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
 -		sme_kernel_enable(NULL);
- 		unsigned long cpacr = cpacr_save_enable_kernel_sme();
--
 -		info->reg_smcr = read_smcr_features();
  		/*
  		 * We mask out SMPS since even if the hardware
  		 * supports priorities the kernel does not at present
  		 * and we block access to them.
  		 */
  		info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
 -		taint |= check_update_ftr_reg(SYS_SMCR_EL1, cpu,
 -					      info->reg_smcr, boot->reg_smcr);
  
  		/* Probe vector lengths */
--		if (!system_capabilities_finalized())
++		if (!system_capabilities_finalized()) {
++			unsigned long cpacr = cpacr_save_enable_kernel_sme();
++
  			vec_update_vq_map(ARM64_VEC_SME);
+ 
- 		cpacr_restore(cpacr);
++			cpacr_restore(cpacr);
++		}
  	}
  
  	/*
@@@ -3138,7 -3182,15 +3162,9 @@@ static void verify_local_elf_hwcaps(voi
  static void verify_sve_features(void)
  {
+ 	unsigned long cpacr = cpacr_save_enable_kernel_sve();
+ 
 -	u64 safe_zcr = read_sanitised_ftr_reg(SYS_ZCR_EL1);
 -	u64 zcr = read_zcr_features();
 -
 -	unsigned int safe_len = safe_zcr & ZCR_ELx_LEN_MASK;
 -	unsigned int len = zcr & ZCR_ELx_LEN_MASK;
 -
 -	if (len < safe_len || vec_verify_vq_map(ARM64_VEC_SVE)) {
+ 	if (vec_verify_vq_map(ARM64_VEC_SVE)) {
  		pr_crit("CPU%d: SVE: vector length support mismatch\n",
  			smp_processor_id());
  		cpu_die_early();
@@@ -3147,7 -3201,15 +3175,9 @@@
  static void verify_sme_features(void)
  {
+ 	unsigned long cpacr = cpacr_save_enable_kernel_sme();
+ 
 -	u64 safe_smcr = read_sanitised_ftr_reg(SYS_SMCR_EL1);
 -	u64 smcr = read_smcr_features();
 -
 -	unsigned int safe_len = safe_smcr & SMCR_ELx_LEN_MASK;
 -	unsigned int len = smcr & SMCR_ELx_LEN_MASK;
 -
 -	if (len < safe_len || vec_verify_vq_map(ARM64_VEC_SME)) {
+ 	if (vec_verify_vq_map(ARM64_VEC_SME)) {
  		pr_crit("CPU%d: SME: vector length support mismatch\n",
  			smp_processor_id());
  		cpu_die_early();
diff --cc arch/arm64/kernel/fpsimd.c
index 04c801001767,d0d28bc069d2..5ddc246f1482
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@@ -1173,11 -1169,30 +1169,11 @@@ void cpu_enable_sve(const struct arm64_
  void __init sve_setup(void)
  {
  	struct vl_info *info = &vl_info[ARM64_VEC_SVE];
 -	u64 zcr;
  	DECLARE_BITMAP(tmp_map, SVE_VQ_MAX);
  	unsigned long b;
+ 	int max_bit;
  
 -	if (!system_supports_sve())
 +	if (!cpus_have_cap(ARM64_SVE))
  		return;
  
  	/*
@@@ -1307,9 -1329,29 +1301,9 @@@ void cpu_enable_fa64(const struct arm64
  void __init sme_setup(void)
  {
  	struct vl_info *info = &vl_info[ARM64_VEC_SME];
 -	u64 smcr;
 -	int min_bit;
+ 	int min_bit, max_bit;
  
 -	if (!system_supports_sme())
 +	if (!cpus_have_cap(ARM64_SME))
  		return;
  
  	/*

-- 
Catalin