From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mark Brown
Date: Tue, 02 Sep 2025 12:36:28 +0100
Subject: [PATCH v8 25/29] KVM: arm64: Handle SME exceptions
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20250902-kvm-arm64-sme-v8-25-2cb2199c656c@kernel.org>
References: <20250902-kvm-arm64-sme-v8-0-2cb2199c656c@kernel.org>
In-Reply-To: <20250902-kvm-arm64-sme-v8-0-2cb2199c656c@kernel.org>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Paolo Bonzini, Jonathan Corbet,
	Shuah Khan
Cc: Dave Martin, Fuad Tabba, Mark Rutland,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Peter Maydell, Eric Auger, Mark Brown
X-Mailer: b4 0.15-dev-cff91

The access controls for SME follow the same structure as those for the
base FP and SVE extensions: CPACR_ELx.SMEN and CPTR_EL2.TSM mirror the
equivalent FPSIMD and SVE controls in those registers. Add handling
for these controls and exceptions, mirroring the existing handling for
FPSIMD and SVE.

Signed-off-by: Mark Brown
---
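For reference, the nVHE trap polarity introduced below can be shown as
a small standalone model. This is only an illustrative sketch that
builds outside the kernel: the CPTR_EL2_* bit positions here are
placeholder values rather than the <asm/kvm_arm.h> definitions, and
cptr_traps() is a hypothetical stand-in for the patched
__activate_cptr_traps_nvhe().

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholder trap bits; not the kernel's definitions. */
#define CPTR_EL2_TZ	(UINT64_C(1) << 8)	/* trap SVE accesses */
#define CPTR_EL2_TSM	(UINT64_C(1) << 12)	/* trap SME accesses */

/*
 * Trap a vector extension unless the vCPU has it and the guest
 * currently owns the FP registers, as in the patch below.
 */
static uint64_t cptr_traps(bool has_sve, bool has_sme, bool guest_owns_fp)
{
	uint64_t val = 0;

	if (!has_sme || !guest_owns_fp)
		val |= CPTR_EL2_TSM;

	if (!has_sve || !guest_owns_fp)
		val |= CPTR_EL2_TZ;

	return val;
}

int main(void)
{
	/* Guest owns the FP regs: no traps, the guest runs undisturbed. */
	printf("owned:     %#llx\n",
	       (unsigned long long)cptr_traps(true, true, true));
	/* Host owns them: both traps set, first use faults to the hyp. */
	printf("not owned: %#llx\n",
	       (unsigned long long)cptr_traps(true, true, false));
	return 0;
}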
 arch/arm64/kvm/handle_exit.c            | 14 ++++++++++++++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 11 ++++++-----
 arch/arm64/kvm/hyp/nvhe/switch.c        |  4 +++-
 arch/arm64/kvm/hyp/vhe/switch.c         | 17 ++++++++++++-----
 4 files changed, 35 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index a598072f36d2..d96f3a585d70 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -232,6 +232,19 @@ static int handle_sve(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+/*
+ * Guest access to SME registers should be routed to this handler only
+ * when the system doesn't support SME.
+ */
+static int handle_sme(struct kvm_vcpu *vcpu)
+{
+	if (guest_hyp_sme_traps_enabled(vcpu))
+		return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+
+	kvm_inject_undefined(vcpu);
+	return 1;
+}
+
 /*
  * Two possibilities to handle a trapping ptrauth instruction:
  *
@@ -385,6 +398,7 @@ static exit_handle_fn arm_exit_handlers[] = {
 	[ESR_ELx_EC_SVC64]	= handle_svc,
 	[ESR_ELx_EC_SYS64]	= kvm_handle_sys_reg,
 	[ESR_ELx_EC_SVE]	= handle_sve,
+	[ESR_ELx_EC_SME]	= handle_sme,
 	[ESR_ELx_EC_ERET]	= kvm_handle_eret,
 	[ESR_ELx_EC_IABT_LOW]	= kvm_handle_guest_abort,
 	[ESR_ELx_EC_DABT_LOW]	= kvm_handle_guest_abort,
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index c128b4d25a2d..9375afa96b71 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -69,11 +69,8 @@ static inline void __activate_cptr_traps_nvhe(struct kvm_vcpu *vcpu)
 {
 	u64 val = CPTR_NVHE_EL2_RES1 | CPTR_EL2_TAM | CPTR_EL2_TTA;
 
-	/*
-	 * Always trap SME since it's not supported in KVM.
-	 * TSM is RES1 if SME isn't implemented.
-	 */
-	val |= CPTR_EL2_TSM;
+	if (!vcpu_has_sme(vcpu) || !guest_owns_fp_regs())
+		val |= CPTR_EL2_TSM;
 
 	if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs())
 		val |= CPTR_EL2_TZ;
@@ -101,6 +98,8 @@ static inline void __activate_cptr_traps_vhe(struct kvm_vcpu *vcpu)
 		val |= CPACR_EL1_FPEN;
 		if (vcpu_has_sve(vcpu))
 			val |= CPACR_EL1_ZEN;
+		if (vcpu_has_sme(vcpu))
+			val |= CPACR_EL1_SMEN;
 	}
 
 	if (!vcpu_has_nv(vcpu))
@@ -142,6 +141,8 @@ static inline void __activate_cptr_traps_vhe(struct kvm_vcpu *vcpu)
 		val &= ~CPACR_EL1_FPEN;
 	if (!(SYS_FIELD_GET(CPACR_EL1, ZEN, cptr) & BIT(0)))
 		val &= ~CPACR_EL1_ZEN;
+	if (!(SYS_FIELD_GET(CPACR_EL1, SMEN, cptr) & BIT(0)))
+		val &= ~CPACR_EL1_SMEN;
 
 	if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP))
 		val |= cptr & CPACR_EL1_E0POE;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index ccd575d5f6de..79a3e5c290f9 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -175,6 +175,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
 	[ESR_ELx_EC_CP15_32]		= kvm_hyp_handle_cp15_32,
 	[ESR_ELx_EC_SYS64]		= kvm_hyp_handle_sysreg,
 	[ESR_ELx_EC_SVE]		= kvm_hyp_handle_fpsimd,
+	[ESR_ELx_EC_SME]		= kvm_hyp_handle_fpsimd,
 	[ESR_ELx_EC_FP_ASIMD]		= kvm_hyp_handle_fpsimd,
 	[ESR_ELx_EC_IABT_LOW]		= kvm_hyp_handle_iabt_low,
 	[ESR_ELx_EC_DABT_LOW]		= kvm_hyp_handle_dabt_low,
@@ -186,7 +187,8 @@ static const exit_handler_fn pvm_exit_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]		= NULL,
 	[ESR_ELx_EC_SYS64]		= kvm_handle_pvm_sys64,
 	[ESR_ELx_EC_SVE]		= kvm_handle_pvm_restricted,
-	[ESR_ELx_EC_FP_ASIMD]		= kvm_hyp_handle_fpsimd,
+	[ESR_ELx_EC_SME]		= kvm_handle_pvm_restricted,
+	[ESR_ELx_EC_FP_ASIMD]		= kvm_handle_pvm_restricted,
 	[ESR_ELx_EC_IABT_LOW]		= kvm_hyp_handle_iabt_low,
 	[ESR_ELx_EC_DABT_LOW]		= kvm_hyp_handle_dabt_low,
 	[ESR_ELx_EC_WATCHPT_LOW]	= kvm_hyp_handle_watchpt_low,
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index e482181c6632..86a892966a18 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -448,22 +448,28 @@ static bool kvm_hyp_handle_cpacr_el1(struct kvm_vcpu *vcpu, u64 *exit_code)
 	return true;
 }
 
-static bool kvm_hyp_handle_zcr_el2(struct kvm_vcpu *vcpu, u64 *exit_code)
+static bool kvm_hyp_handle_vec_cr_el2(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
 
 	if (!vcpu_has_nv(vcpu))
 		return false;
 
-	if (sysreg != SYS_ZCR_EL2)
+	switch (sysreg) {
+	case SYS_ZCR_EL2:
+	case SYS_SMCR_EL2:
+		break;
+	default:
 		return false;
+	}
 
 	if (guest_owns_fp_regs())
 		return false;
 
 	/*
-	 * ZCR_EL2 traps are handled in the slow path, with the expectation
-	 * that the guest's FP context has already been loaded onto the CPU.
+	 * ZCR_EL2 and SMCR_EL2 traps are handled in the slow path,
+	 * with the expectation that the guest's FP context has
+	 * already been loaded onto the CPU.
 	 *
 	 * Load the guest's FP context and unconditionally forward to the
 	 * slow path for handling (i.e. return false).
@@ -483,7 +489,7 @@ static bool kvm_hyp_handle_sysreg_vhe(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (kvm_hyp_handle_cpacr_el1(vcpu, exit_code))
 		return true;
 
-	if (kvm_hyp_handle_zcr_el2(vcpu, exit_code))
+	if (kvm_hyp_handle_vec_cr_el2(vcpu, exit_code))
 		return true;
 
 	return kvm_hyp_handle_sysreg(vcpu, exit_code);
@@ -512,6 +518,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]		= NULL,
 	[ESR_ELx_EC_CP15_32]		= kvm_hyp_handle_cp15_32,
 	[ESR_ELx_EC_SYS64]		= kvm_hyp_handle_sysreg_vhe,
+	[ESR_ELx_EC_SME]		= kvm_hyp_handle_fpsimd,
 	[ESR_ELx_EC_SVE]		= kvm_hyp_handle_fpsimd,
 	[ESR_ELx_EC_FP_ASIMD]		= kvm_hyp_handle_fpsimd,
 	[ESR_ELx_EC_IABT_LOW]		= kvm_hyp_handle_iabt_low,

-- 
2.39.5