From: Sasha Levin <sashal@kernel.org>
To: stable@vger.kernel.org
Cc: Mark Brown <broonie@kernel.org>, Sasha Levin <sashal@kernel.org>
Subject: Re: [PATCH 6.13 v2 8/8] KVM: arm64: Eagerly switch ZCR_EL{1,2}
Date: Fri, 21 Mar 2025 13:26:52 -0400
Message-ID: <20250321132313-863ae2f236f2561b@stable.kernel.org>
In-Reply-To: <20250321-stable-sve-6-13-v2-8-3150e3370c40@kernel.org>
[ Sasha's backport helper bot ]
Hi,
✅ All tests passed successfully. No issues detected.
No action required from the submitter.
The upstream commit SHA1 provided is correct: 59419f10045bc955d2229819c7cf7a8b0b9c5b59
WARNING: Author mismatch between patch and upstream commit:
Backport author: Mark Brown <broonie@kernel.org>
Commit author: Mark Rutland <mark.rutland@arm.com>
Note: The patch differs from the upstream commit:
---
1: 59419f10045bc ! 1: 7fd4a8f975638 KVM: arm64: Eagerly switch ZCR_EL{1,2}
@@ Metadata
## Commit message ##
KVM: arm64: Eagerly switch ZCR_EL{1,2}
+ [ Upstream commit 59419f10045bc955d2229819c7cf7a8b0b9c5b59 ]
+
In non-protected KVM modes, while the guest FPSIMD/SVE/SME state is live on the
CPU, the host's active SVE VL may differ from the guest's maximum SVE VL:
@@ Commit message
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20250210195226.1215254-9-mark.rutland@arm.com
Signed-off-by: Marc Zyngier <maz@kernel.org>
+ Signed-off-by: Mark Brown <broonie@kernel.org>
## arch/arm64/kvm/fpsimd.c ##
@@ arch/arm64/kvm/fpsimd.c: void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
@@ arch/arm64/kvm/hyp/nvhe/hyp-main.c
#include <asm/pgtable-types.h>
#include <asm/kvm_asm.h>
@@ arch/arm64/kvm/hyp/nvhe/hyp-main.c: static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
-
- sync_hyp_vcpu(hyp_vcpu);
+ pkvm_put_hyp_vcpu(hyp_vcpu);
} else {
-+ struct kvm_vcpu *vcpu = kern_hyp_va(host_vcpu);
-+
/* The host is fully trusted, run its vCPU directly. */
-- ret = __kvm_vcpu_run(kern_hyp_va(host_vcpu));
-+ fpsimd_lazy_switch_to_guest(vcpu);
-+ ret = __kvm_vcpu_run(vcpu);
-+ fpsimd_lazy_switch_to_host(vcpu);
++ fpsimd_lazy_switch_to_guest(host_vcpu);
+ ret = __kvm_vcpu_run(host_vcpu);
++ fpsimd_lazy_switch_to_host(host_vcpu);
}
+
out:
- cpu_reg(host_ctxt, 1) = ret;
@@ arch/arm64/kvm/hyp/nvhe/hyp-main.c: void handle_trap(struct kvm_cpu_context *host_ctxt)
case ESR_ELx_EC_SMC64:
handle_host_smc(host_ctxt);
break;
- case ESR_ELx_EC_SVE:
-- cpacr_clear_set(0, CPACR_EL1_ZEN);
+- cpacr_clear_set(0, CPACR_ELx_ZEN);
- isb();
- sve_cond_update_zcr_vq(sve_vq_from_vl(kvm_host_sve_max_vl) - 1,
- SYS_ZCR_EL2);
@@ arch/arm64/kvm/hyp/nvhe/hyp-main.c: void handle_trap(struct kvm_cpu_context *hos
## arch/arm64/kvm/hyp/nvhe/switch.c ##
@@ arch/arm64/kvm/hyp/nvhe/switch.c: static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
-
- static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
{
-- struct kvm *kvm = kern_hyp_va(vcpu->kvm);
--
+ u64 val = CPTR_EL2_TAM; /* Same bit irrespective of E2H */
+
++ if (!guest_owns_fp_regs())
++ __activate_traps_fpsimd32(vcpu);
++
if (has_hvhe()) {
- u64 val = CPACR_EL1_FPEN;
+ val |= CPACR_ELx_TTA;
-- if (!kvm_has_sve(kvm) || !guest_owns_fp_regs())
-+ if (cpus_have_final_cap(ARM64_SVE))
- val |= CPACR_EL1_ZEN;
- if (cpus_have_final_cap(ARM64_SME))
- val |= CPACR_EL1_SMEN;
-@@ arch/arm64/kvm/hyp/nvhe/switch.c: static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
+@@ arch/arm64/kvm/hyp/nvhe/switch.c: static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
+ if (vcpu_has_sve(vcpu))
+ val |= CPACR_ELx_ZEN;
+ }
++
++ write_sysreg(val, cpacr_el1);
} else {
- u64 val = CPTR_NVHE_EL2_RES1;
+ val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1;
-- if (kvm_has_sve(kvm) && guest_owns_fp_regs())
+@@ arch/arm64/kvm/hyp/nvhe/switch.c: static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
+
+ if (!guest_owns_fp_regs())
+ val |= CPTR_EL2_TFP;
++
++ write_sysreg(val, cptr_el2);
+ }
++}
+
+- if (!guest_owns_fp_regs())
+- __activate_traps_fpsimd32(vcpu);
++static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
++{
++ if (has_hvhe()) {
++ u64 val = CPACR_ELx_FPEN;
++
++ if (cpus_have_final_cap(ARM64_SVE))
++ val |= CPACR_ELx_ZEN;
++ if (cpus_have_final_cap(ARM64_SME))
++ val |= CPACR_ELx_SMEN;
++
++ write_sysreg(val, cpacr_el1);
++ } else {
++ u64 val = CPTR_NVHE_EL2_RES1;
++
+ if (!cpus_have_final_cap(ARM64_SVE))
- val |= CPTR_EL2_TZ;
- if (!cpus_have_final_cap(ARM64_SME))
- val |= CPTR_EL2_TSM;
++ val |= CPTR_EL2_TZ;
++ if (!cpus_have_final_cap(ARM64_SME))
++ val |= CPTR_EL2_TSM;
+
+- kvm_write_cptr_el2(val);
++ write_sysreg(val, cptr_el2);
++ }
+ }
+
+ static void __activate_traps(struct kvm_vcpu *vcpu)
+@@ arch/arm64/kvm/hyp/nvhe/switch.c: static void __deactivate_traps(struct kvm_vcpu *vcpu)
+
+ write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
+
+- kvm_reset_cptr_el2(vcpu);
++ __deactivate_cptr_traps(vcpu);
+ write_sysreg(__kvm_hyp_host_vector, vbar_el2);
+ }
+
## arch/arm64/kvm/hyp/vhe/switch.c ##
@@ arch/arm64/kvm/hyp/vhe/switch.c: static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
---
Results of testing on various branches:
| Branch | Patch Apply | Build Test |
|---------------------------|-------------|------------|
| stable/linux-5.4.y | Success | Success |
Thread overview: 17+ messages
2025-03-21 0:10 [PATCH 6.13 v2 0/8] KVM: arm64: Backport of SVE fixes to v6.13 Mark Brown
2025-03-21 0:10 ` [PATCH 6.13 v2 1/8] KVM: arm64: Calculate cptr_el2 traps on activating traps Mark Brown
2025-03-21 17:28 ` Sasha Levin
2025-03-21 0:10 ` [PATCH 6.13 v2 2/8] KVM: arm64: Unconditionally save+flush host FPSIMD/SVE/SME state Mark Brown
2025-03-21 17:30 ` Sasha Levin
2025-03-21 0:10 ` [PATCH 6.13 v2 3/8] KVM: arm64: Remove host FPSIMD saving for non-protected KVM Mark Brown
2025-03-21 17:30 ` Sasha Levin
2025-03-21 0:10 ` [PATCH 6.13 v2 4/8] KVM: arm64: Remove VHE host restore of CPACR_EL1.ZEN Mark Brown
2025-03-21 17:27 ` Sasha Levin
2025-03-21 0:10 ` [PATCH 6.13 v2 5/8] KVM: arm64: Remove VHE host restore of CPACR_EL1.SMEN Mark Brown
2025-03-21 17:25 ` Sasha Levin
2025-03-21 0:10 ` [PATCH 6.13 v2 6/8] KVM: arm64: Refactor exit handlers Mark Brown
2025-03-21 17:28 ` Sasha Levin
2025-03-21 0:10 ` [PATCH 6.13 v2 7/8] KVM: arm64: Mark some header functions as inline Mark Brown
2025-03-21 17:29 ` Sasha Levin
2025-03-21 0:10 ` [PATCH 6.13 v2 8/8] KVM: arm64: Eagerly switch ZCR_EL{1,2} Mark Brown
2025-03-21 17:26 ` Sasha Levin [this message]