* [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
@ 2023-11-20 13:09 Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 01/43] arm64: cpufeatures: Restrict NV support to FEAT_NV2 Marc Zyngier
` (46 more replies)
0 siblings, 47 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
This is the 5th drop of NV support on arm64 for this year, and most
probably the last one on this side of Christmas.
For the previous episodes, see [1].
What's changed:
- Drop support for the original FEAT_NV. No existing hardware supports
it without FEAT_NV2, and the architecture is deprecating the former
entirely. This results in fewer patches, and a slightly simpler
model overall.
- Reorganise the series to make it a bit more logical now that FEAT_NV
is gone.
- Apply the NV idreg restrictions on the VM's first run rather than on
each access.
- Make the nested vgic shadow CPU interface a per-CPU structure rather
than per-vcpu.
- Fix the EL0 timer fastpath.
- Work around the architecture deficiencies when trapping WFI from an
L2 guest.
- Fix sampling of nested vgic state (MISR, ELRSR, EISR).
- Drop the patches that have already been merged (NV trap forwarding,
per-MMU VTCR).
- Rebased on top of 6.7-rc2 + the FEAT_E2H0 support [2].
The branch containing these patches (and more) is at [3]. As with the
previous rounds, my intention is to take a prefix of this series into
6.8, provided that it gets enough review.
[1] https://lore.kernel.org/r/20230515173103.1017669-1-maz@kernel.org
[2] https://lore.kernel.org/r/20231120123721.851738-1-maz@kernel.org
[3] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/log/?h=kvm-arm64/nv-6.8-nv2-only
Andre Przywara (1):
KVM: arm64: nv: vgic: Allow userland to set VGIC maintenance IRQ
Christoffer Dall (2):
KVM: arm64: nv: Implement nested Stage-2 page table walk logic
KVM: arm64: nv: Unmap/flush shadow stage 2 page tables
Jintack Lim (3):
KVM: arm64: nv: Respect virtual HCR_EL2.TWX setting
KVM: arm64: nv: Respect virtual CPTR_EL2.{TFP,FPEN} settings
KVM: arm64: nv: Trap and emulate TLBI instructions from virtual EL2
Marc Zyngier (37):
arm64: cpufeatures: Restrict NV support to FEAT_NV2
KVM: arm64: nv: Hoist vcpu_has_nv() into is_hyp_ctxt()
KVM: arm64: nv: Compute NV view of idregs as a one-off
KVM: arm64: nv: Drop EL12 register traps that are redirected to VNCR
KVM: arm64: nv: Add non-VHE-EL2->EL1 translation helpers
KVM: arm64: nv: Add include containing the VNCR_EL2 offsets
KVM: arm64: Introduce a bad_trap() primitive for unexpected trap
handling
KVM: arm64: nv: Add EL2_REG_VNCR()/EL2_REG_REDIR() sysreg helpers
KVM: arm64: nv: Map VNCR-capable registers to a separate page
KVM: arm64: nv: Handle virtual EL2 registers in
vcpu_read/write_sys_reg()
KVM: arm64: nv: Handle HCR_EL2.E2H specially
KVM: arm64: nv: Handle CNTHCTL_EL2 specially
KVM: arm64: nv: Save/Restore vEL2 sysregs
KVM: arm64: nv: Configure HCR_EL2 for FEAT_NV2
KVM: arm64: nv: Support multiple nested Stage-2 mmu structures
KVM: arm64: nv: Handle shadow stage 2 page faults
KVM: arm64: nv: Restrict S2 RD/WR permissions to match the guest's
KVM: arm64: nv: Set a handler for the system instruction traps
KVM: arm64: nv: Trap and emulate AT instructions from virtual EL2
KVM: arm64: nv: Hide RAS from nested guests
KVM: arm64: nv: Add handling of EL2-specific timer registers
KVM: arm64: nv: Sync nested timer state with FEAT_NV2
KVM: arm64: nv: Publish emulated timer interrupt state in the
in-memory state
KVM: arm64: nv: Load timer before the GIC
KVM: arm64: nv: Nested GICv3 Support
KVM: arm64: nv: Don't block in WFI from nested state
KVM: arm64: nv: Fold GICv3 host trapping requirements into guest setup
KVM: arm64: nv: Deal with broken VGIC on maintenance interrupt
delivery
KVM: arm64: nv: Add handling of FEAT_TTL TLB invalidation
KVM: arm64: nv: Invalidate TLBs based on shadow S2 TTL-like
information
KVM: arm64: nv: Tag shadow S2 entries with nested level
KVM: arm64: nv: Allocate VNCR page when required
KVM: arm64: nv: Fast-track 'InHost' exception returns
KVM: arm64: nv: Fast-track EL1 TLBIs for VHE guests
KVM: arm64: nv: Use FEAT_ECV to trap access to EL0 timers
KVM: arm64: nv: Accelerate EL0 timer read accesses when FEAT_ECV is on
KVM: arm64: nv: Allow userspace to request KVM_ARM_VCPU_NESTED_VIRT
.../virt/kvm/devices/arm-vgic-v3.rst | 12 +-
arch/arm64/include/asm/esr.h | 1 +
arch/arm64/include/asm/kvm_arm.h | 3 +
arch/arm64/include/asm/kvm_asm.h | 4 +
arch/arm64/include/asm/kvm_emulate.h | 53 +-
arch/arm64/include/asm/kvm_host.h | 223 +++-
arch/arm64/include/asm/kvm_hyp.h | 2 +
arch/arm64/include/asm/kvm_mmu.h | 12 +
arch/arm64/include/asm/kvm_nested.h | 130 ++-
arch/arm64/include/asm/sysreg.h | 7 +
arch/arm64/include/asm/vncr_mapping.h | 102 ++
arch/arm64/include/uapi/asm/kvm.h | 1 +
arch/arm64/kernel/cpufeature.c | 22 +-
arch/arm64/kvm/Makefile | 4 +-
arch/arm64/kvm/arch_timer.c | 115 +-
arch/arm64/kvm/arm.c | 46 +-
arch/arm64/kvm/at.c | 219 ++++
arch/arm64/kvm/emulate-nested.c | 48 +-
arch/arm64/kvm/handle_exit.c | 29 +-
arch/arm64/kvm/hyp/include/hyp/switch.h | 8 +-
arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 5 +-
arch/arm64/kvm/hyp/nvhe/switch.c | 2 +-
arch/arm64/kvm/hyp/nvhe/sysreg-sr.c | 2 +-
arch/arm64/kvm/hyp/vgic-v3-sr.c | 6 +-
arch/arm64/kvm/hyp/vhe/switch.c | 211 +++-
arch/arm64/kvm/hyp/vhe/sysreg-sr.c | 133 ++-
arch/arm64/kvm/hyp/vhe/tlb.c | 83 ++
arch/arm64/kvm/mmu.c | 248 ++++-
arch/arm64/kvm/nested.c | 813 ++++++++++++++-
arch/arm64/kvm/reset.c | 7 +
arch/arm64/kvm/sys_regs.c | 978 ++++++++++++++++--
arch/arm64/kvm/vgic/vgic-init.c | 35 +
arch/arm64/kvm/vgic/vgic-kvm-device.c | 29 +-
arch/arm64/kvm/vgic/vgic-v3-nested.c | 270 +++++
arch/arm64/kvm/vgic/vgic-v3.c | 35 +-
arch/arm64/kvm/vgic/vgic.c | 29 +
arch/arm64/kvm/vgic/vgic.h | 10 +
arch/arm64/tools/cpucaps | 2 +
include/clocksource/arm_arch_timer.h | 4 +
include/kvm/arm_arch_timer.h | 19 +
include/kvm/arm_vgic.h | 16 +
include/uapi/linux/kvm.h | 1 +
tools/arch/arm/include/uapi/asm/kvm.h | 1 +
43 files changed, 3725 insertions(+), 255 deletions(-)
create mode 100644 arch/arm64/include/asm/vncr_mapping.h
create mode 100644 arch/arm64/kvm/at.c
create mode 100644 arch/arm64/kvm/vgic/vgic-v3-nested.c
--
2.39.2
* [PATCH v11 01/43] arm64: cpufeatures: Restrict NV support to FEAT_NV2
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
@ 2023-11-20 13:09 ` Marc Zyngier
2023-11-21 9:07 ` Ganapatrao Kulkarni
2023-11-20 13:09 ` [PATCH v11 02/43] KVM: arm64: nv: Hoist vcpu_has_nv() into is_hyp_ctxt() Marc Zyngier
` (45 subsequent siblings)
46 siblings, 1 reply; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
To anyone who has played with FEAT_NV, it is obvious that the level
of performance is rather low due to the trap amplification that it
imposes on the host hypervisor. FEAT_NV2 solves a number of the
problems that FEAT_NV had.
It also turns out that all the existing hardware that has FEAT_NV
also has FEAT_NV2. Finally, it is now allowed by the architecture
to build FEAT_NV2 *only* (as denoted by ID_AA64MMFR4_EL1.NV_frac),
which effectively seals the fate of FEAT_NV.
Restrict the NV support to NV2, and be done with it. Nobody will
cry over the old crap.
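For reference, none of this is reachable unless KVM itself was booted
in nested mode, which the hunk below checks via kvm_get_mode(). That
mode is selected with the existing command-line option:

	kvm-arm.mode=nested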
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kernel/cpufeature.c | 22 +++++++++++++++-------
arch/arm64/tools/cpucaps | 2 ++
2 files changed, 17 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7dcda39537f8..95a677cf8c04 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -439,6 +439,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr3[] = {
static const struct arm64_ftr_bits ftr_id_aa64mmfr4[] = {
S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR4_EL1_E2H0_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64MMFR4_EL1_NV_frac_SHIFT, 4, 0),
ARM64_FTR_END,
};
@@ -2080,12 +2081,8 @@ static bool has_nested_virt_support(const struct arm64_cpu_capabilities *cap,
if (kvm_get_mode() != KVM_MODE_NV)
return false;
- if (!has_cpuid_feature(cap, scope)) {
- pr_warn("unavailable: %s\n", cap->desc);
- return false;
- }
-
- return true;
+ return (__system_matches_cap(ARM64_HAS_NV2) |
+ __system_matches_cap(ARM64_HAS_NV2_ONLY));
}
static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
@@ -2391,12 +2388,23 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = runs_at_el2,
.cpu_enable = cpu_copy_el2regs,
},
+ {
+ .capability = ARM64_HAS_NV2,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .matches = has_cpuid_feature,
+ ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, NV, NV2)
+ },
+ {
+ .capability = ARM64_HAS_NV2_ONLY,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .matches = has_cpuid_feature,
+ ARM64_CPUID_FIELDS(ID_AA64MMFR4_EL1, NV_frac, NV2_ONLY)
+ },
{
.desc = "Nested Virtualization Support",
.capability = ARM64_HAS_NESTED_VIRT,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_nested_virt_support,
- ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, NV, IMP)
},
{
.capability = ARM64_HAS_32BIT_EL0_DO_NOT_USE,
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index fea24bcd6252..480de648cd03 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -41,6 +41,8 @@ HAS_LSE_ATOMICS
HAS_MOPS
HAS_NESTED_VIRT
HAS_NO_HW_PREFETCH
+HAS_NV2
+HAS_NV2_ONLY
HAS_PAN
HAS_S1PIE
HAS_RAS_EXTN
--
2.39.2
* [PATCH v11 02/43] KVM: arm64: nv: Hoist vcpu_has_nv() into is_hyp_ctxt()
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 01/43] arm64: cpufeatures: Restrict NV support to FEAT_NV2 Marc Zyngier
@ 2023-11-20 13:09 ` Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 03/43] KVM: arm64: nv: Compute NV view of idregs as a one-off Marc Zyngier
` (44 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
A rather common idiom when writing NV code as part of KVM is
to have things such as:
if (vcpu_has_nv(vcpu) && is_hyp_ctxt(vcpu)) {
[...]
}
to check that we are in a hyp-related context. The second part of
the conjunction would be enough, but the first one contains a
static key that allows the rest of the checks to be elided when
in a non-NV environment.
Rewrite is_hyp_ctxt() to directly use vcpu_has_nv(). The result
is the same, and the code is easier to read. The one occurrence of
this that is already merged is rewritten in the process.
In order to avoid nasty circular dependencies between kvm_emulate.h
and kvm_nested.h, vcpu_has_feature() is itself hoisted into kvm_host.h,
at the cost of some #ifdeffery...
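For context, vcpu_has_nv() is roughly shaped as below (a simplified
sketch; the real definition lives in kvm_nested.h):

	static inline bool vcpu_has_nv(const struct kvm_vcpu *vcpu)
	{
		/* cpus_have_final_cap() compiles down to a static branch */
		return (!__is_defined(__KVM_NVHE_HYPERVISOR__) &&
			cpus_have_final_cap(ARM64_HAS_NESTED_VIRT) &&
			vcpu_has_feature(vcpu, KVM_ARM_VCPU_HAS_EL2));
	}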
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_emulate.h | 8 ++------
arch/arm64/include/asm/kvm_host.h | 7 +++++++
arch/arm64/kvm/arch_timer.c | 3 +--
3 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 7b10a44189d0..ca9168d23cd6 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -17,6 +17,7 @@
#include <asm/esr.h>
#include <asm/kvm_arm.h>
#include <asm/kvm_hyp.h>
+#include <asm/kvm_nested.h>
#include <asm/ptrace.h>
#include <asm/cputype.h>
#include <asm/virt.h>
@@ -54,11 +55,6 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu);
int kvm_inject_nested_sync(struct kvm_vcpu *vcpu, u64 esr_el2);
int kvm_inject_nested_irq(struct kvm_vcpu *vcpu);
-static inline bool vcpu_has_feature(const struct kvm_vcpu *vcpu, int feature)
-{
- return test_bit(feature, vcpu->kvm->arch.vcpu_features);
-}
-
#if defined(__KVM_VHE_HYPERVISOR__) || defined(__KVM_NVHE_HYPERVISOR__)
static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
{
@@ -249,7 +245,7 @@ static inline bool __is_hyp_ctxt(const struct kvm_cpu_context *ctxt)
static inline bool is_hyp_ctxt(const struct kvm_vcpu *vcpu)
{
- return __is_hyp_ctxt(&vcpu->arch.ctxt);
+ return vcpu_has_nv(vcpu) && __is_hyp_ctxt(&vcpu->arch.ctxt);
}
/*
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 824f29f04916..4103a12ecaaf 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1177,6 +1177,13 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
#define kvm_vm_has_ran_once(kvm) \
(test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &(kvm)->arch.flags))
+static inline bool __vcpu_has_feature(const struct kvm_arch *ka, int feature)
+{
+ return test_bit(feature, ka->vcpu_features);
+}
+
+#define vcpu_has_feature(v, f) __vcpu_has_feature(&(v)->kvm->arch, (f))
+
int kvm_trng_call(struct kvm_vcpu *vcpu);
#ifdef CONFIG_KVM
extern phys_addr_t hyp_mem_base;
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index 13ba691b848f..9dec8c419bf4 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -295,8 +295,7 @@ static u64 wfit_delay_ns(struct kvm_vcpu *vcpu)
u64 val = vcpu_get_reg(vcpu, kvm_vcpu_sys_get_rt(vcpu));
struct arch_timer_context *ctx;
- ctx = (vcpu_has_nv(vcpu) && is_hyp_ctxt(vcpu)) ? vcpu_hvtimer(vcpu)
- : vcpu_vtimer(vcpu);
+ ctx = is_hyp_ctxt(vcpu) ? vcpu_hvtimer(vcpu) : vcpu_vtimer(vcpu);
return kvm_counter_compute_delta(ctx, val);
}
--
2.39.2
* [PATCH v11 03/43] KVM: arm64: nv: Compute NV view of idregs as a one-off
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 01/43] arm64: cpufeatures: Restrict NV support to FEAT_NV2 Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 02/43] KVM: arm64: nv: Hoist vcpu_has_nv() into is_hyp_ctxt() Marc Zyngier
@ 2023-11-20 13:09 ` Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 04/43] KVM: arm64: nv: Drop EL12 register traps that are redirected to VNCR Marc Zyngier
` (43 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Now that we have a full copy of the idregs for each VM, there is
no point in repainting the sysregs on each access. Instead, we
can simply perform the transformation as a one-off and be done
with it.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 1 +
arch/arm64/include/asm/kvm_nested.h | 6 +-----
arch/arm64/kvm/arm.c | 6 ++++++
arch/arm64/kvm/nested.c | 22 +++++++++++++++-------
arch/arm64/kvm/sys_regs.c | 2 --
5 files changed, 23 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4103a12ecaaf..fce2e5f583a7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -306,6 +306,7 @@ struct kvm_arch {
* Atomic access to multiple idregs are guarded by kvm_arch.config_lock.
*/
#define IDREG_IDX(id) (((sys_reg_CRm(id) - 1) << 3) | sys_reg_Op2(id))
+#define IDX_IDREG(idx) sys_reg(3, 0, 0, ((idx) >> 3) + 1, (idx) & Op2_mask)
#define IDREG(kvm, id) ((kvm)->arch.id_regs[IDREG_IDX(id)])
#define KVM_ARM_ID_REG_NUM (IDREG_IDX(sys_reg(3, 0, 0, 7, 7)) + 1)
u64 id_regs[KVM_ARM_ID_REG_NUM];
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 6cec8e9c6c91..249b03fc2cce 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -14,10 +14,6 @@ static inline bool vcpu_has_nv(const struct kvm_vcpu *vcpu)
extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
-struct sys_reg_params;
-struct sys_reg_desc;
-
-void access_nested_id_reg(struct kvm_vcpu *v, struct sys_reg_params *p,
- const struct sys_reg_desc *r);
+int kvm_init_nv_sysregs(struct kvm *kvm);
#endif /* __ARM64_KVM_NESTED_H */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e5f75f1f1085..b65df612b41b 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -669,6 +669,12 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
return ret;
}
+ if (vcpu_has_nv(vcpu)) {
+ ret = kvm_init_nv_sysregs(vcpu->kvm);
+ if (ret)
+ return ret;
+ }
+
ret = kvm_timer_enable(vcpu);
if (ret)
return ret;
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 3885f1c93979..66d05f5d39a2 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -23,13 +23,9 @@
* This list should get updated as new features get added to the NV
* support, and new extension to the architecture.
*/
-void access_nested_id_reg(struct kvm_vcpu *v, struct sys_reg_params *p,
- const struct sys_reg_desc *r)
+static u64 limit_nv_id_reg(u32 id, u64 val)
{
- u32 id = reg_to_encoding(r);
- u64 val, tmp;
-
- val = p->regval;
+ u64 tmp;
switch (id) {
case SYS_ID_AA64ISAR0_EL1:
@@ -162,5 +158,17 @@ void access_nested_id_reg(struct kvm_vcpu *v, struct sys_reg_params *p,
break;
}
- p->regval = val;
+ return val;
+}
+int kvm_init_nv_sysregs(struct kvm *kvm)
+{
+ mutex_lock(&kvm->arch.config_lock);
+
+ for (int i = 0; i < KVM_ARM_ID_REG_NUM; i++)
+ kvm->arch.id_regs[i] = limit_nv_id_reg(IDX_IDREG(i),
+ kvm->arch.id_regs[i]);
+
+ mutex_unlock(&kvm->arch.config_lock);
+
+ return 0;
}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9e1e3da2ed4a..4aacce494ee2 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1505,8 +1505,6 @@ static bool access_id_reg(struct kvm_vcpu *vcpu,
return write_to_read_only(vcpu, p, r);
p->regval = read_id_reg(vcpu, r);
- if (vcpu_has_nv(vcpu))
- access_nested_id_reg(vcpu, p, r);
return true;
}
--
2.39.2
* [PATCH v11 04/43] KVM: arm64: nv: Drop EL12 register traps that are redirected to VNCR
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (2 preceding siblings ...)
2023-11-20 13:09 ` [PATCH v11 03/43] KVM: arm64: nv: Compute NV view of idregs as a one-off Marc Zyngier
@ 2023-11-20 13:09 ` Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 05/43] KVM: arm64: nv: Add non-VHE-EL2->EL1 translation helpers Marc Zyngier
` (42 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
With FEAT_NV2, a bunch of system register writes are turned into
memory writes. This is especially the fate of the EL12 registers
that the guest hypervisor manipulates out of context.
Remove the trap descriptors for those, as they are never going
to be used again.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 15 ---------------
1 file changed, 15 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 4aacce494ee2..6405d9ebc28a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2577,21 +2577,6 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG(CNTVOFF_EL2, access_rw, reset_val, 0),
EL2_REG(CNTHCTL_EL2, access_rw, reset_val, 0),
- EL12_REG(SCTLR, access_vm_reg, reset_val, 0x00C50078),
- EL12_REG(CPACR, access_rw, reset_val, 0),
- EL12_REG(TTBR0, access_vm_reg, reset_unknown, 0),
- EL12_REG(TTBR1, access_vm_reg, reset_unknown, 0),
- EL12_REG(TCR, access_vm_reg, reset_val, 0),
- { SYS_DESC(SYS_SPSR_EL12), access_spsr},
- { SYS_DESC(SYS_ELR_EL12), access_elr},
- EL12_REG(AFSR0, access_vm_reg, reset_unknown, 0),
- EL12_REG(AFSR1, access_vm_reg, reset_unknown, 0),
- EL12_REG(ESR, access_vm_reg, reset_unknown, 0),
- EL12_REG(FAR, access_vm_reg, reset_unknown, 0),
- EL12_REG(MAIR, access_vm_reg, reset_unknown, 0),
- EL12_REG(AMAIR, access_vm_reg, reset_amair_el1, 0),
- EL12_REG(VBAR, access_rw, reset_val, 0),
- EL12_REG(CONTEXTIDR, access_vm_reg, reset_val, 0),
EL12_REG(CNTKCTL, access_rw, reset_val, 0),
EL2_REG(SP_EL2, NULL, reset_unknown, 0),
--
2.39.2
* [PATCH v11 05/43] KVM: arm64: nv: Add non-VHE-EL2->EL1 translation helpers
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (3 preceding siblings ...)
2023-11-20 13:09 ` [PATCH v11 04/43] KVM: arm64: nv: Drop EL12 register traps that are redirected to VNCR Marc Zyngier
@ 2023-11-20 13:09 ` Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 06/43] KVM: arm64: nv: Add include containing the VNCR_EL2 offsets Marc Zyngier
` (41 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Some EL2 system registers immediately affect the current execution
of the system, so we need to use their respective EL1 counterparts.
For this we need to define a mapping between the two. In general,
this only affects non-VHE guest hypervisors, as VHE system registers
are compatible with the EL1 counterparts.
These helpers will get used in subsequent patches.
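As a rough usage sketch (assuming a non-VHE guest hypervisor whose
state is being loaded; the real call sites come in later patches):

	u64 tcr = __vcpu_sys_reg(vcpu, TCR_EL2);

	/* Feed the nEL2 view of TCR into the real EL1 register */
	write_sysreg_el1(translate_tcr_el2_to_tcr_el1(tcr), SYS_TCR);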
Co-developed-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_nested.h | 50 ++++++++++++++++++++++++++++-
1 file changed, 49 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 249b03fc2cce..4882905357f4 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -2,8 +2,9 @@
#ifndef __ARM64_KVM_NESTED_H
#define __ARM64_KVM_NESTED_H
-#include <asm/kvm_emulate.h>
+#include <linux/bitfield.h>
#include <linux/kvm_host.h>
+#include <asm/kvm_emulate.h>
static inline bool vcpu_has_nv(const struct kvm_vcpu *vcpu)
{
@@ -12,6 +13,53 @@ static inline bool vcpu_has_nv(const struct kvm_vcpu *vcpu)
vcpu_has_feature(vcpu, KVM_ARM_VCPU_HAS_EL2));
}
+/* Translation helpers from non-VHE EL2 to EL1 */
+static inline u64 tcr_el2_ps_to_tcr_el1_ips(u64 tcr_el2)
+{
+ return (u64)FIELD_GET(TCR_EL2_PS_MASK, tcr_el2) << TCR_IPS_SHIFT;
+}
+
+static inline u64 translate_tcr_el2_to_tcr_el1(u64 tcr)
+{
+ return TCR_EPD1_MASK | /* disable TTBR1_EL1 */
+ ((tcr & TCR_EL2_TBI) ? TCR_TBI0 : 0) |
+ tcr_el2_ps_to_tcr_el1_ips(tcr) |
+ (tcr & TCR_EL2_TG0_MASK) |
+ (tcr & TCR_EL2_ORGN0_MASK) |
+ (tcr & TCR_EL2_IRGN0_MASK) |
+ (tcr & TCR_EL2_T0SZ_MASK);
+}
+
+static inline u64 translate_cptr_el2_to_cpacr_el1(u64 cptr_el2)
+{
+ u64 cpacr_el1 = 0;
+
+ if (cptr_el2 & CPTR_EL2_TTA)
+ cpacr_el1 |= CPACR_ELx_TTA;
+ if (!(cptr_el2 & CPTR_EL2_TFP))
+ cpacr_el1 |= CPACR_ELx_FPEN;
+ if (!(cptr_el2 & CPTR_EL2_TZ))
+ cpacr_el1 |= CPACR_ELx_ZEN;
+
+ return cpacr_el1;
+}
+
+static inline u64 translate_sctlr_el2_to_sctlr_el1(u64 val)
+{
+ /* Only preserve the minimal set of bits we support */
+ val &= (SCTLR_ELx_M | SCTLR_ELx_A | SCTLR_ELx_C | SCTLR_ELx_SA |
+ SCTLR_ELx_I | SCTLR_ELx_IESB | SCTLR_ELx_WXN | SCTLR_ELx_EE);
+ val |= SCTLR_EL1_RES1;
+
+ return val;
+}
+
+static inline u64 translate_ttbr0_el2_to_ttbr0_el1(u64 ttbr0)
+{
+ /* Clear the ASID field */
+ return ttbr0 & ~GENMASK_ULL(63, 48);
+}
+
extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
int kvm_init_nv_sysregs(struct kvm *kvm);
--
2.39.2
* [PATCH v11 06/43] KVM: arm64: nv: Add include containing the VNCR_EL2 offsets
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (4 preceding siblings ...)
2023-11-20 13:09 ` [PATCH v11 05/43] KVM: arm64: nv: Add non-VHE-EL2->EL1 translation helpers Marc Zyngier
@ 2023-11-20 13:09 ` Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 07/43] KVM: arm64: Introduce a bad_trap() primitive for unexpected trap handling Marc Zyngier
` (40 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
VNCR_EL2 points to a page containing a number of system registers
accessed by a guest hypervisor when ARMv8.4-NV is enabled.
Let's document the offsets in that page, as we are going to use
this layout.
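As an illustration (a sketch only, not part of this patch), accessing
a VNCR-backed register is nothing more than a byte-offset load from
that page:

	/* Offsets such as VNCR_HCR_EL2 are byte displacements */
	static inline u64 vncr_read(u64 *vncr_page, unsigned int offset)
	{
		return vncr_page[offset / 8]; /* all entries are 8 bytes */
	}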
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/vncr_mapping.h | 102 ++++++++++++++++++++++++++
1 file changed, 102 insertions(+)
create mode 100644 arch/arm64/include/asm/vncr_mapping.h
diff --git a/arch/arm64/include/asm/vncr_mapping.h b/arch/arm64/include/asm/vncr_mapping.h
new file mode 100644
index 000000000000..497d37780d15
--- /dev/null
+++ b/arch/arm64/include/asm/vncr_mapping.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * System register offsets in the VNCR page
+ * All offsets are *byte* displacements!
+ */
+
+#ifndef __ARM64_VNCR_MAPPING_H__
+#define __ARM64_VNCR_MAPPING_H__
+
+#define VNCR_VTTBR_EL2 0x020
+#define VNCR_VTCR_EL2 0x040
+#define VNCR_VMPIDR_EL2 0x050
+#define VNCR_CNTVOFF_EL2 0x060
+#define VNCR_HCR_EL2 0x078
+#define VNCR_HSTR_EL2 0x080
+#define VNCR_VPIDR_EL2 0x088
+#define VNCR_TPIDR_EL2 0x090
+#define VNCR_HCRX_EL2 0x0A0
+#define VNCR_VNCR_EL2 0x0B0
+#define VNCR_CPACR_EL1 0x100
+#define VNCR_CONTEXTIDR_EL1 0x108
+#define VNCR_SCTLR_EL1 0x110
+#define VNCR_ACTLR_EL1 0x118
+#define VNCR_TCR_EL1 0x120
+#define VNCR_AFSR0_EL1 0x128
+#define VNCR_AFSR1_EL1 0x130
+#define VNCR_ESR_EL1 0x138
+#define VNCR_MAIR_EL1 0x140
+#define VNCR_AMAIR_EL1 0x148
+#define VNCR_MDSCR_EL1 0x158
+#define VNCR_SPSR_EL1 0x160
+#define VNCR_CNTV_CVAL_EL0 0x168
+#define VNCR_CNTV_CTL_EL0 0x170
+#define VNCR_CNTP_CVAL_EL0 0x178
+#define VNCR_CNTP_CTL_EL0 0x180
+#define VNCR_SCXTNUM_EL1 0x188
+#define VNCR_TFSR_EL1 0x190
+#define VNCR_HFGRTR_EL2 0x1B8
+#define VNCR_HFGWTR_EL2 0x1C0
+#define VNCR_HFGITR_EL2 0x1C8
+#define VNCR_HDFGRTR_EL2 0x1D0
+#define VNCR_HDFGWTR_EL2 0x1D8
+#define VNCR_ZCR_EL1 0x1E0
+#define VNCR_TTBR0_EL1 0x200
+#define VNCR_TTBR1_EL1 0x210
+#define VNCR_FAR_EL1 0x220
+#define VNCR_ELR_EL1 0x230
+#define VNCR_SP_EL1 0x240
+#define VNCR_VBAR_EL1 0x250
+#define VNCR_TCR2_EL1 0x270
+#define VNCR_PIRE0_EL1 0x290
+#define VNCR_PIRE0_EL2 0x298
+#define VNCR_PIR_EL1 0x2A0
+#define VNCR_ICH_LR0_EL2 0x400
+#define VNCR_ICH_LR1_EL2 0x408
+#define VNCR_ICH_LR2_EL2 0x410
+#define VNCR_ICH_LR3_EL2 0x418
+#define VNCR_ICH_LR4_EL2 0x420
+#define VNCR_ICH_LR5_EL2 0x428
+#define VNCR_ICH_LR6_EL2 0x430
+#define VNCR_ICH_LR7_EL2 0x438
+#define VNCR_ICH_LR8_EL2 0x440
+#define VNCR_ICH_LR9_EL2 0x448
+#define VNCR_ICH_LR10_EL2 0x450
+#define VNCR_ICH_LR11_EL2 0x458
+#define VNCR_ICH_LR12_EL2 0x460
+#define VNCR_ICH_LR13_EL2 0x468
+#define VNCR_ICH_LR14_EL2 0x470
+#define VNCR_ICH_LR15_EL2 0x478
+#define VNCR_ICH_AP0R0_EL2 0x480
+#define VNCR_ICH_AP0R1_EL2 0x488
+#define VNCR_ICH_AP0R2_EL2 0x490
+#define VNCR_ICH_AP0R3_EL2 0x498
+#define VNCR_ICH_AP1R0_EL2 0x4A0
+#define VNCR_ICH_AP1R1_EL2 0x4A8
+#define VNCR_ICH_AP1R2_EL2 0x4B0
+#define VNCR_ICH_AP1R3_EL2 0x4B8
+#define VNCR_ICH_HCR_EL2 0x4C0
+#define VNCR_ICH_VMCR_EL2 0x4C8
+#define VNCR_VDISR_EL2 0x500
+#define VNCR_PMBLIMITR_EL1 0x800
+#define VNCR_PMBPTR_EL1 0x810
+#define VNCR_PMBSR_EL1 0x820
+#define VNCR_PMSCR_EL1 0x828
+#define VNCR_PMSEVFR_EL1 0x830
+#define VNCR_PMSICR_EL1 0x838
+#define VNCR_PMSIRR_EL1 0x840
+#define VNCR_PMSLATFR_EL1 0x848
+#define VNCR_TRFCR_EL1 0x880
+#define VNCR_MPAM1_EL1 0x900
+#define VNCR_MPAMHCR_EL2 0x930
+#define VNCR_MPAMVPMV_EL2 0x938
+#define VNCR_MPAMVPM0_EL2 0x940
+#define VNCR_MPAMVPM1_EL2 0x948
+#define VNCR_MPAMVPM2_EL2 0x950
+#define VNCR_MPAMVPM3_EL2 0x958
+#define VNCR_MPAMVPM4_EL2 0x960
+#define VNCR_MPAMVPM5_EL2 0x968
+#define VNCR_MPAMVPM6_EL2 0x970
+#define VNCR_MPAMVPM7_EL2 0x978
+
+#endif /* __ARM64_VNCR_MAPPING_H__ */
--
2.39.2
* [PATCH v11 07/43] KVM: arm64: Introduce a bad_trap() primitive for unexpected trap handling
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (5 preceding siblings ...)
2023-11-20 13:09 ` [PATCH v11 06/43] KVM: arm64: nv: Add include containing the VNCR_EL2 offsets Marc Zyngier
@ 2023-11-20 13:09 ` Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 08/43] KVM: arm64: nv: Add EL2_REG_VNCR()/EL2_REG_REDIR() sysreg helpers Marc Zyngier
` (39 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
In order to ease the debugging of NV, it is helpful to have the kernel
shout at you when an unexpected trap is handled. We already have this
in a couple of cases. Make this a more generic infrastructure that we
will make use of very shortly.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 23 +++++++++++++++--------
1 file changed, 15 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 6405d9ebc28a..a529ce5ba987 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -45,24 +45,31 @@ static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
u64 val);
-static bool read_from_write_only(struct kvm_vcpu *vcpu,
- struct sys_reg_params *params,
- const struct sys_reg_desc *r)
+static bool bad_trap(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *params,
+ const struct sys_reg_desc *r,
+ const char *msg)
{
- WARN_ONCE(1, "Unexpected sys_reg read to write-only register\n");
+ WARN_ONCE(1, "Unexpected %s\n", msg);
print_sys_reg_instr(params);
kvm_inject_undefined(vcpu);
return false;
}
+static bool read_from_write_only(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *params,
+ const struct sys_reg_desc *r)
+{
+ return bad_trap(vcpu, params, r,
+ "sys_reg read to write-only register");
+}
+
static bool write_to_read_only(struct kvm_vcpu *vcpu,
struct sys_reg_params *params,
const struct sys_reg_desc *r)
{
- WARN_ONCE(1, "Unexpected sys_reg write to read-only register\n");
- print_sys_reg_instr(params);
- kvm_inject_undefined(vcpu);
- return false;
+ return bad_trap(vcpu, params, r,
+ "sys_reg write to read-only register");
}
u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
--
2.39.2
* [PATCH v11 08/43] KVM: arm64: nv: Add EL2_REG_VNCR()/EL2_REG_REDIR() sysreg helpers
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (6 preceding siblings ...)
2023-11-20 13:09 ` [PATCH v11 07/43] KVM: arm64: Introduce a bad_trap() primitive for unexpected trap handling Marc Zyngier
@ 2023-11-20 13:09 ` Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 09/43] KVM: arm64: nv: Map VNCR-capable registers to a separate page Marc Zyngier
` (38 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Add two helpers to deal with EL2 registers that are either redirected
to the VNCR page, or that are redirected to their EL1 counterpart.
In either case, no trap is expected.
The relevant register descriptors are repainted accordingly.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 65 ++++++++++++++++++++++++++++-----------
1 file changed, 47 insertions(+), 18 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a529ce5ba987..c31fddc1591d 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1891,6 +1891,32 @@ static unsigned int el2_visibility(const struct kvm_vcpu *vcpu,
return REG_HIDDEN;
}
+static bool bad_vncr_trap(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ /*
+ * We really shouldn't be here, and this is likely the result
+ * of a misconfigured trap, as this register should target the
+ * VNCR page, and nothing else.
+ */
+ return bad_trap(vcpu, p, r,
+ "trap of VNCR-backed register");
+}
+
+static bool bad_redir_trap(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ /*
+ * We really shouldn't be here, and this is likely the result
+ * of a misconfigured trap, as this register should target the
+ * corresponding EL1, and nothing else.
+ */
+ return bad_trap(vcpu, p, r,
+ "trap of EL2 register redirected to EL1");
+}
+
#define EL2_REG(name, acc, rst, v) { \
SYS_DESC(SYS_##name), \
.access = acc, \
@@ -1900,6 +1926,9 @@ static unsigned int el2_visibility(const struct kvm_vcpu *vcpu,
.val = v, \
}
+#define EL2_REG_VNCR(name, rst, v) EL2_REG(name, bad_vncr_trap, rst, v)
+#define EL2_REG_REDIR(name, rst, v) EL2_REG(name, bad_redir_trap, rst, v)
+
/*
* EL{0,1}2 registers are the EL2 view on an EL0 or EL1 register when
* HCR_EL2.E2H==1, and only in the sysreg table for convenience of
@@ -2524,32 +2553,32 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ PMU_SYS_REG(PMCCFILTR_EL0), .access = access_pmu_evtyper,
.reset = reset_val, .reg = PMCCFILTR_EL0, .val = 0 },
- EL2_REG(VPIDR_EL2, access_rw, reset_unknown, 0),
- EL2_REG(VMPIDR_EL2, access_rw, reset_unknown, 0),
+ EL2_REG_VNCR(VPIDR_EL2, reset_unknown, 0),
+ EL2_REG_VNCR(VMPIDR_EL2, reset_unknown, 0),
EL2_REG(SCTLR_EL2, access_rw, reset_val, SCTLR_EL2_RES1),
EL2_REG(ACTLR_EL2, access_rw, reset_val, 0),
- EL2_REG(HCR_EL2, access_rw, reset_hcr, 0),
+ EL2_REG_VNCR(HCR_EL2, reset_hcr, 0),
EL2_REG(MDCR_EL2, access_rw, reset_val, 0),
EL2_REG(CPTR_EL2, access_rw, reset_val, CPTR_NVHE_EL2_RES1),
- EL2_REG(HSTR_EL2, access_rw, reset_val, 0),
- EL2_REG(HFGRTR_EL2, access_rw, reset_val, 0),
- EL2_REG(HFGWTR_EL2, access_rw, reset_val, 0),
- EL2_REG(HFGITR_EL2, access_rw, reset_val, 0),
- EL2_REG(HACR_EL2, access_rw, reset_val, 0),
+ EL2_REG_VNCR(HSTR_EL2, reset_val, 0),
+ EL2_REG_VNCR(HFGRTR_EL2, reset_val, 0),
+ EL2_REG_VNCR(HFGWTR_EL2, reset_val, 0),
+ EL2_REG_VNCR(HFGITR_EL2, reset_val, 0),
+ EL2_REG_VNCR(HACR_EL2, reset_val, 0),
- EL2_REG(HCRX_EL2, access_rw, reset_val, 0),
+ EL2_REG_VNCR(HCRX_EL2, reset_val, 0),
EL2_REG(TTBR0_EL2, access_rw, reset_val, 0),
EL2_REG(TTBR1_EL2, access_rw, reset_val, 0),
EL2_REG(TCR_EL2, access_rw, reset_val, TCR_EL2_RES1),
- EL2_REG(VTTBR_EL2, access_rw, reset_val, 0),
- EL2_REG(VTCR_EL2, access_rw, reset_val, 0),
+ EL2_REG_VNCR(VTTBR_EL2, reset_val, 0),
+ EL2_REG_VNCR(VTCR_EL2, reset_val, 0),
{ SYS_DESC(SYS_DACR32_EL2), trap_undef, reset_unknown, DACR32_EL2 },
- EL2_REG(HDFGRTR_EL2, access_rw, reset_val, 0),
- EL2_REG(HDFGWTR_EL2, access_rw, reset_val, 0),
- EL2_REG(SPSR_EL2, access_rw, reset_val, 0),
- EL2_REG(ELR_EL2, access_rw, reset_val, 0),
+ EL2_REG_VNCR(HDFGRTR_EL2, reset_val, 0),
+ EL2_REG_VNCR(HDFGWTR_EL2, reset_val, 0),
+ EL2_REG_REDIR(SPSR_EL2, reset_val, 0),
+ EL2_REG_REDIR(ELR_EL2, reset_val, 0),
{ SYS_DESC(SYS_SP_EL1), access_sp_el1},
/* AArch32 SPSR_* are RES0 if trapped from a NV guest */
@@ -2565,10 +2594,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_IFSR32_EL2), trap_undef, reset_unknown, IFSR32_EL2 },
EL2_REG(AFSR0_EL2, access_rw, reset_val, 0),
EL2_REG(AFSR1_EL2, access_rw, reset_val, 0),
- EL2_REG(ESR_EL2, access_rw, reset_val, 0),
+ EL2_REG_REDIR(ESR_EL2, reset_val, 0),
{ SYS_DESC(SYS_FPEXC32_EL2), trap_undef, reset_val, FPEXC32_EL2, 0x700 },
- EL2_REG(FAR_EL2, access_rw, reset_val, 0),
+ EL2_REG_REDIR(FAR_EL2, reset_val, 0),
EL2_REG(HPFAR_EL2, access_rw, reset_val, 0),
EL2_REG(MAIR_EL2, access_rw, reset_val, 0),
@@ -2581,7 +2610,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG(CONTEXTIDR_EL2, access_rw, reset_val, 0),
EL2_REG(TPIDR_EL2, access_rw, reset_val, 0),
- EL2_REG(CNTVOFF_EL2, access_rw, reset_val, 0),
+ EL2_REG_VNCR(CNTVOFF_EL2, reset_val, 0),
EL2_REG(CNTHCTL_EL2, access_rw, reset_val, 0),
EL12_REG(CNTKCTL, access_rw, reset_val, 0),
--
2.39.2
* [PATCH v11 09/43] KVM: arm64: nv: Map VNCR-capable registers to a separate page
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (7 preceding siblings ...)
2023-11-20 13:09 ` [PATCH v11 08/43] KVM: arm64: nv: Add EL2_REG_VNCR()/EL2_REG_REDIR() sysreg helpers Marc Zyngier
@ 2023-11-20 13:09 ` Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 10/43] KVM: arm64: nv: Handle virtual EL2 registers in vcpu_read/write_sys_reg() Marc Zyngier
` (37 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
With ARMv8.4-NV, registers that can be directly accessed in memory
by the guest have to live at architected offsets in a special page.
Let's annotate the sysreg enum to reflect the offset at which they
are in this page, with a little twist:
If running on HW that doesn't have the ARMv8.4-NV feature, or even
a VM that doesn't use NV, we store all the system registers in the
usual sys_regs array. The only difference with the pre-8.4
situation is that VNCR-capable registers are at a "similar" offset
as in the VNCR page (we can compute the actual offset at compile
time), and that the sys_regs array is both bigger and sparse.
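To make the hack below concrete: VNCR(TPIDR_EL2), with VNCR_TPIDR_EL2
being 0x090, expands to

	__before_TPIDR_EL2,			/* previous value + 1 */
	TPIDR_EL2 = __VNCR_START__ + 0x090 / 8,	/* fixed slot */
	__after_TPIDR_EL2 = __MAX__(__before_TPIDR_EL2 - 1, TPIDR_EL2),

so the next enumerator resumes from whichever of the natural position
and the VNCR slot is larger, which is what keeps NR_SYS_REGS big
enough to cover the highest VNCR offset.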
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 127 +++++++++++++++++++-----------
1 file changed, 81 insertions(+), 46 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index fce2e5f583a7..9e8cd2bb95c3 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -27,6 +27,7 @@
#include <asm/fpsimd.h>
#include <asm/kvm.h>
#include <asm/kvm_asm.h>
+#include <asm/vncr_mapping.h>
#define __KVM_HAVE_ARCH_INTC_INITIALIZED
@@ -325,33 +326,33 @@ struct kvm_vcpu_fault_info {
u64 disr_el1; /* Deferred [SError] Status Register */
};
+/*
+ * VNCR() just places the VNCR_capable registers in the enum after
+ * __VNCR_START__, and forces the value (after correction) to be an 8-byte
+ * from the VNCR base. As we don't require the enum to be otherwise ordered,
+ * we need the terrible hack below to ensure that we correctly size the
+ * sys_regs array, no matter what.
+ *
+ * The __MAX__ macro has been lifted from Sean Eron Anderson's wonderful
+ * treasure trove of bit hacks:
+ * https://graphics.stanford.edu/~seander/bithacks.html#IntegerMinOrMax
+ */
+#define __MAX__(x,y) ((x) ^ (((x) ^ (y)) & -((x) < (y))))
+#define VNCR(r) \
+ __before_##r, \
+ r = __VNCR_START__ + ((VNCR_ ## r) / 8), \
+ __after_##r = __MAX__(__before_##r - 1, r)
+
enum vcpu_sysreg {
__INVALID_SYSREG__, /* 0 is reserved as an invalid value */
MPIDR_EL1, /* MultiProcessor Affinity Register */
CLIDR_EL1, /* Cache Level ID Register */
CSSELR_EL1, /* Cache Size Selection Register */
- SCTLR_EL1, /* System Control Register */
- ACTLR_EL1, /* Auxiliary Control Register */
- CPACR_EL1, /* Coprocessor Access Control */
- ZCR_EL1, /* SVE Control */
- TTBR0_EL1, /* Translation Table Base Register 0 */
- TTBR1_EL1, /* Translation Table Base Register 1 */
- TCR_EL1, /* Translation Control Register */
- TCR2_EL1, /* Extended Translation Control Register */
- ESR_EL1, /* Exception Syndrome Register */
- AFSR0_EL1, /* Auxiliary Fault Status Register 0 */
- AFSR1_EL1, /* Auxiliary Fault Status Register 1 */
- FAR_EL1, /* Fault Address Register */
- MAIR_EL1, /* Memory Attribute Indirection Register */
- VBAR_EL1, /* Vector Base Address Register */
- CONTEXTIDR_EL1, /* Context ID Register */
TPIDR_EL0, /* Thread ID, User R/W */
TPIDRRO_EL0, /* Thread ID, User R/O */
TPIDR_EL1, /* Thread ID, Privileged */
- AMAIR_EL1, /* Aux Memory Attribute Indirection Register */
CNTKCTL_EL1, /* Timer Control Register (EL1) */
PAR_EL1, /* Physical Address Register */
- MDSCR_EL1, /* Monitor Debug System Control Register */
MDCCINT_EL1, /* Monitor Debug Comms Channel Interrupt Enable Reg */
OSLSR_EL1, /* OS Lock Status Register */
DISR_EL1, /* Deferred Interrupt Status Register */
@@ -382,26 +383,11 @@ enum vcpu_sysreg {
APGAKEYLO_EL1,
APGAKEYHI_EL1,
- ELR_EL1,
- SP_EL1,
- SPSR_EL1,
-
- CNTVOFF_EL2,
- CNTV_CVAL_EL0,
- CNTV_CTL_EL0,
- CNTP_CVAL_EL0,
- CNTP_CTL_EL0,
-
/* Memory Tagging Extension registers */
RGSR_EL1, /* Random Allocation Tag Seed Register */
GCR_EL1, /* Tag Control Register */
- TFSR_EL1, /* Tag Fault Status Register (EL1) */
TFSRE0_EL1, /* Tag Fault Status Register (EL0) */
- /* Permission Indirection Extension registers */
- PIR_EL1, /* Permission Indirection Register 1 (EL1) */
- PIRE0_EL1, /* Permission Indirection Register 0 (EL1) */
-
/* 32bit specific registers. */
DACR32_EL2, /* Domain Access Control Register */
IFSR32_EL2, /* Instruction Fault Status Register */
@@ -409,21 +395,14 @@ enum vcpu_sysreg {
DBGVCR32_EL2, /* Debug Vector Catch Register */
/* EL2 registers */
- VPIDR_EL2, /* Virtualization Processor ID Register */
- VMPIDR_EL2, /* Virtualization Multiprocessor ID Register */
SCTLR_EL2, /* System Control Register (EL2) */
ACTLR_EL2, /* Auxiliary Control Register (EL2) */
- HCR_EL2, /* Hypervisor Configuration Register */
MDCR_EL2, /* Monitor Debug Configuration Register (EL2) */
CPTR_EL2, /* Architectural Feature Trap Register (EL2) */
- HSTR_EL2, /* Hypervisor System Trap Register */
HACR_EL2, /* Hypervisor Auxiliary Control Register */
- HCRX_EL2, /* Extended Hypervisor Configuration Register */
TTBR0_EL2, /* Translation Table Base Register 0 (EL2) */
TTBR1_EL2, /* Translation Table Base Register 1 (EL2) */
TCR_EL2, /* Translation Control Register (EL2) */
- VTTBR_EL2, /* Virtualization Translation Table Base Register */
- VTCR_EL2, /* Virtualization Translation Control Register */
SPSR_EL2, /* EL2 saved program status register */
ELR_EL2, /* EL2 exception link register */
AFSR0_EL2, /* Auxiliary Fault Status Register 0 (EL2) */
@@ -436,19 +415,61 @@ enum vcpu_sysreg {
VBAR_EL2, /* Vector Base Address Register (EL2) */
RVBAR_EL2, /* Reset Vector Base Address Register */
CONTEXTIDR_EL2, /* Context ID Register (EL2) */
- TPIDR_EL2, /* EL2 Software Thread ID Register */
CNTHCTL_EL2, /* Counter-timer Hypervisor Control register */
SP_EL2, /* EL2 Stack Pointer */
- HFGRTR_EL2,
- HFGWTR_EL2,
- HFGITR_EL2,
- HDFGRTR_EL2,
- HDFGWTR_EL2,
CNTHP_CTL_EL2,
CNTHP_CVAL_EL2,
CNTHV_CTL_EL2,
CNTHV_CVAL_EL2,
+ __VNCR_START__, /* Any VNCR-capable reg goes after this point */
+
+ VNCR(SCTLR_EL1),/* System Control Register */
+ VNCR(ACTLR_EL1),/* Auxiliary Control Register */
+ VNCR(CPACR_EL1),/* Coprocessor Access Control */
+ VNCR(ZCR_EL1), /* SVE Control */
+ VNCR(TTBR0_EL1),/* Translation Table Base Register 0 */
+ VNCR(TTBR1_EL1),/* Translation Table Base Register 1 */
+ VNCR(TCR_EL1), /* Translation Control Register */
+ VNCR(TCR2_EL1), /* Extended Translation Control Register */
+ VNCR(ESR_EL1), /* Exception Syndrome Register */
+ VNCR(AFSR0_EL1),/* Auxiliary Fault Status Register 0 */
+ VNCR(AFSR1_EL1),/* Auxiliary Fault Status Register 1 */
+ VNCR(FAR_EL1), /* Fault Address Register */
+ VNCR(MAIR_EL1), /* Memory Attribute Indirection Register */
+ VNCR(VBAR_EL1), /* Vector Base Address Register */
+ VNCR(CONTEXTIDR_EL1), /* Context ID Register */
+ VNCR(AMAIR_EL1),/* Aux Memory Attribute Indirection Register */
+ VNCR(MDSCR_EL1),/* Monitor Debug System Control Register */
+ VNCR(ELR_EL1),
+ VNCR(SP_EL1),
+ VNCR(SPSR_EL1),
+ VNCR(TFSR_EL1), /* Tag Fault Status Register (EL1) */
+ VNCR(VPIDR_EL2),/* Virtualization Processor ID Register */
+ VNCR(VMPIDR_EL2),/* Virtualization Multiprocessor ID Register */
+ VNCR(HCR_EL2), /* Hypervisor Configuration Register */
+ VNCR(HSTR_EL2), /* Hypervisor System Trap Register */
+ VNCR(VTTBR_EL2),/* Virtualization Translation Table Base Register */
+ VNCR(VTCR_EL2), /* Virtualization Translation Control Register */
+ VNCR(TPIDR_EL2),/* EL2 Software Thread ID Register */
+ VNCR(HCRX_EL2), /* Extended Hypervisor Configuration Register */
+
+ /* Permission Indirection Extension registers */
+ VNCR(PIR_EL1), /* Permission Indirection Register 1 (EL1) */
+ VNCR(PIRE0_EL1), /* Permission Indirection Register 0 (EL1) */
+
+ VNCR(HFGRTR_EL2),
+ VNCR(HFGWTR_EL2),
+ VNCR(HFGITR_EL2),
+ VNCR(HDFGRTR_EL2),
+ VNCR(HDFGWTR_EL2),
+
+ VNCR(CNTVOFF_EL2),
+ VNCR(CNTV_CVAL_EL0),
+ VNCR(CNTV_CTL_EL0),
+ VNCR(CNTP_CVAL_EL0),
+ VNCR(CNTP_CTL_EL0),
+
NR_SYS_REGS /* Nothing after this line! */
};
@@ -465,6 +486,9 @@ struct kvm_cpu_context {
u64 sys_regs[NR_SYS_REGS];
struct kvm_vcpu *__hyp_running_vcpu;
+
+ /* This pointer has to be 4kB aligned. */
+ u64 *vncr_array;
};
struct kvm_host_data {
@@ -827,8 +851,19 @@ struct kvm_vcpu_arch {
* accessed by a running VCPU. For example, for userspace access or
* for system registers that are never context switched, but only
* emulated.
+ *
+ * Don't bother with VNCR-based accesses in the nVHE code, it has no
+ * business dealing with NV.
*/
-#define __ctxt_sys_reg(c,r) (&(c)->sys_regs[(r)])
+static inline u64 *__ctxt_sys_reg(const struct kvm_cpu_context *ctxt, int r)
+{
+#if !defined (__KVM_NVHE_HYPERVISOR__)
+ if (unlikely(cpus_have_final_cap(ARM64_HAS_NESTED_VIRT) &&
+ r >= __VNCR_START__ && ctxt->vncr_array))
+ return &ctxt->vncr_array[r - __VNCR_START__];
+#endif
+ return (u64 *)&ctxt->sys_regs[r];
+}
#define ctxt_sys_reg(c,r) (*__ctxt_sys_reg(c,r))
--
2.39.2
* [PATCH v11 10/43] KVM: arm64: nv: Handle virtual EL2 registers in vcpu_read/write_sys_reg()
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (8 preceding siblings ...)
2023-11-20 13:09 ` [PATCH v11 09/43] KVM: arm64: nv: Map VNCR-capable registers to a separate page Marc Zyngier
@ 2023-11-20 13:09 ` Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 11/43] KVM: arm64: nv: Handle HCR_EL2.E2H specially Marc Zyngier
` (36 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
KVM internally uses accessor functions when reading or writing the
guest's system registers. This takes care of accessing either the stored
copy or using the "live" EL1 system registers when the host uses VHE.
With the introduction of virtual EL2 we add a bunch of EL2 system
registers, which now must also be taken care of:
- If the guest is running in vEL2, and we access an EL1 sysreg, we must
revert to the stored version of that, and not use the CPU's copy.
- If the guest is running in vEL1, and we access an EL2 sysreg, we must
also use the stored version, since the CPU carries the EL1 copy.
- Some EL2 system registers are supposed to affect the current execution
of the system, so we need to put them into their respective EL1
counterparts. For this we need to define a mapping between the two.
- Some EL2 system registers have a different format than their EL1
counterpart, so we need to translate them before writing them to the
CPU. This is done using an (optional) translate function in the map.
All of these cases are now wrapped into the existing accessor functions,
so KVM users wouldn't need to care whether they access EL2 or EL1
registers and also which state the guest is in.
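Condensed, the read side boils down to the following decision tree (a
simplified sketch with made-up helper names; the complete logic,
including the translation step, is in the hunk below):

	/* sketch: where does the current value of 'reg' live? */
	if (!sysregs_loaded_on_cpu(vcpu))
		return stored_copy(vcpu, reg);
	if (is_hyp_ctxt(vcpu))			/* guest is in vEL2 */
		return has_el1_mapping(reg) ? read_el1_counterpart(reg)
					    : stored_copy(vcpu, reg);
	/* guest is in vEL1: EL2 state only exists in memory */
	return is_el2_reg(reg) ? stored_copy(vcpu, reg)
			       : read_from_cpu(reg);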
Reviewed-by: Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Co-developed-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 2 +
arch/arm64/kvm/sys_regs.c | 129 ++++++++++++++++++++++++++++--
2 files changed, 126 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9e8cd2bb95c3..f17fb7c42973 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -907,6 +907,7 @@ static inline bool __vcpu_read_sys_reg_from_cpu(int reg, u64 *val)
case AMAIR_EL1: *val = read_sysreg_s(SYS_AMAIR_EL12); break;
case CNTKCTL_EL1: *val = read_sysreg_s(SYS_CNTKCTL_EL12); break;
case ELR_EL1: *val = read_sysreg_s(SYS_ELR_EL12); break;
+ case SPSR_EL1: *val = read_sysreg_s(SYS_SPSR_EL12); break;
case PAR_EL1: *val = read_sysreg_par(); break;
case DACR32_EL2: *val = read_sysreg_s(SYS_DACR32_EL2); break;
case IFSR32_EL2: *val = read_sysreg_s(SYS_IFSR32_EL2); break;
@@ -951,6 +952,7 @@ static inline bool __vcpu_write_sys_reg_to_cpu(u64 val, int reg)
case AMAIR_EL1: write_sysreg_s(val, SYS_AMAIR_EL12); break;
case CNTKCTL_EL1: write_sysreg_s(val, SYS_CNTKCTL_EL12); break;
case ELR_EL1: write_sysreg_s(val, SYS_ELR_EL12); break;
+ case SPSR_EL1: write_sysreg_s(val, SYS_SPSR_EL12); break;
case PAR_EL1: write_sysreg_s(val, SYS_PAR_EL1); break;
case DACR32_EL2: write_sysreg_s(val, SYS_DACR32_EL2); break;
case IFSR32_EL2: write_sysreg_s(val, SYS_IFSR32_EL2); break;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c31fddc1591d..92bb91e520a8 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -72,24 +72,143 @@ static bool write_to_read_only(struct kvm_vcpu *vcpu,
"sys_reg write to read-only register");
}
+#define PURE_EL2_SYSREG(el2) \
+ case el2: { \
+ *el1r = el2; \
+ return true; \
+ }
+
+#define MAPPED_EL2_SYSREG(el2, el1, fn) \
+ case el2: { \
+ *xlate = fn; \
+ *el1r = el1; \
+ return true; \
+ }
+
+static bool get_el2_to_el1_mapping(unsigned int reg,
+ unsigned int *el1r, u64 (**xlate)(u64))
+{
+ switch (reg) {
+ PURE_EL2_SYSREG( VPIDR_EL2 );
+ PURE_EL2_SYSREG( VMPIDR_EL2 );
+ PURE_EL2_SYSREG( ACTLR_EL2 );
+ PURE_EL2_SYSREG( HCR_EL2 );
+ PURE_EL2_SYSREG( MDCR_EL2 );
+ PURE_EL2_SYSREG( HSTR_EL2 );
+ PURE_EL2_SYSREG( HACR_EL2 );
+ PURE_EL2_SYSREG( VTTBR_EL2 );
+ PURE_EL2_SYSREG( VTCR_EL2 );
+ PURE_EL2_SYSREG( RVBAR_EL2 );
+ PURE_EL2_SYSREG( TPIDR_EL2 );
+ PURE_EL2_SYSREG( HPFAR_EL2 );
+ PURE_EL2_SYSREG( CNTHCTL_EL2 );
+ MAPPED_EL2_SYSREG(SCTLR_EL2, SCTLR_EL1,
+ translate_sctlr_el2_to_sctlr_el1 );
+ MAPPED_EL2_SYSREG(CPTR_EL2, CPACR_EL1,
+ translate_cptr_el2_to_cpacr_el1 );
+ MAPPED_EL2_SYSREG(TTBR0_EL2, TTBR0_EL1,
+ translate_ttbr0_el2_to_ttbr0_el1 );
+ MAPPED_EL2_SYSREG(TTBR1_EL2, TTBR1_EL1, NULL );
+ MAPPED_EL2_SYSREG(TCR_EL2, TCR_EL1,
+ translate_tcr_el2_to_tcr_el1 );
+ MAPPED_EL2_SYSREG(VBAR_EL2, VBAR_EL1, NULL );
+ MAPPED_EL2_SYSREG(AFSR0_EL2, AFSR0_EL1, NULL );
+ MAPPED_EL2_SYSREG(AFSR1_EL2, AFSR1_EL1, NULL );
+ MAPPED_EL2_SYSREG(ESR_EL2, ESR_EL1, NULL );
+ MAPPED_EL2_SYSREG(FAR_EL2, FAR_EL1, NULL );
+ MAPPED_EL2_SYSREG(MAIR_EL2, MAIR_EL1, NULL );
+ MAPPED_EL2_SYSREG(AMAIR_EL2, AMAIR_EL1, NULL );
+ MAPPED_EL2_SYSREG(ELR_EL2, ELR_EL1, NULL );
+ MAPPED_EL2_SYSREG(SPSR_EL2, SPSR_EL1, NULL );
+ default:
+ return false;
+ }
+}
+
u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
{
u64 val = 0x8badf00d8badf00d;
+ u64 (*xlate)(u64) = NULL;
+ unsigned int el1r;
+
+ if (!vcpu_get_flag(vcpu, SYSREGS_ON_CPU))
+ goto memory_read;
- if (vcpu_get_flag(vcpu, SYSREGS_ON_CPU) &&
- __vcpu_read_sys_reg_from_cpu(reg, &val))
+ if (unlikely(get_el2_to_el1_mapping(reg, &el1r, &xlate))) {
+ if (!is_hyp_ctxt(vcpu))
+ goto memory_read;
+
+ /*
+ * If this register does not have an EL1 counterpart,
+ * then read the stored EL2 version.
+ */
+ if (reg == el1r)
+ goto memory_read;
+
+ /*
+ * If we have a non-VHE guest and the sysreg
+ * requires translation to be used at EL1, use the
+ * in-memory copy instead.
+ */
+ if (!vcpu_el2_e2h_is_set(vcpu) && xlate)
+ goto memory_read;
+
+ /* Get the current version of the EL1 counterpart. */
+ WARN_ON(!__vcpu_read_sys_reg_from_cpu(el1r, &val));
+ return val;
+ }
+
+ /* EL1 register can't be on the CPU if the guest is in vEL2. */
+ if (unlikely(is_hyp_ctxt(vcpu)))
+ goto memory_read;
+
+ if (__vcpu_read_sys_reg_from_cpu(reg, &val))
return val;
+memory_read:
return __vcpu_sys_reg(vcpu, reg);
}
void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
{
- if (vcpu_get_flag(vcpu, SYSREGS_ON_CPU) &&
- __vcpu_write_sys_reg_to_cpu(val, reg))
+ u64 (*xlate)(u64) = NULL;
+ unsigned int el1r;
+
+ if (!vcpu_get_flag(vcpu, SYSREGS_ON_CPU))
+ goto memory_write;
+
+ if (unlikely(get_el2_to_el1_mapping(reg, &el1r, &xlate))) {
+ if (!is_hyp_ctxt(vcpu))
+ goto memory_write;
+
+ /*
+ * Always store a copy of the write to memory to avoid having
+ * to reverse-translate virtual EL2 system registers for a
+ * non-VHE guest hypervisor.
+ */
+ __vcpu_sys_reg(vcpu, reg) = val;
+
+ /* No EL1 counterpart? We're done here. */
+ if (reg == el1r)
+ return;
+
+ if (!vcpu_el2_e2h_is_set(vcpu) && xlate)
+ val = xlate(val);
+
+ /* Redirect this to the EL1 version of the register. */
+ WARN_ON(!__vcpu_write_sys_reg_to_cpu(val, el1r));
+ return;
+ }
+
+ /* EL1 register can't be on the CPU if the guest is in vEL2. */
+ if (unlikely(is_hyp_ctxt(vcpu)))
+ goto memory_write;
+
+ if (__vcpu_write_sys_reg_to_cpu(val, reg))
return;
- __vcpu_sys_reg(vcpu, reg) = val;
+memory_write:
+ __vcpu_sys_reg(vcpu, reg) = val;
}
/* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */
--
2.39.2
* [PATCH v11 11/43] KVM: arm64: nv: Handle HCR_EL2.E2H specially
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (9 preceding siblings ...)
2023-11-20 13:09 ` [PATCH v11 10/43] KVM: arm64: nv: Handle virtual EL2 registers in vcpu_read/write_sys_reg() Marc Zyngier
@ 2023-11-20 13:09 ` Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 12/43] KVM: arm64: nv: Handle CNTHCTL_EL2 specially Marc Zyngier
` (35 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
HCR_EL2.E2H is nasty, as a flip of this bit completely changes the way
we deal with a lot of the state. So when the guest flips this bit
while the sysregs are live on the CPU, do the put/load dance so that we
end up with a consistent state.
Yes, this is slow. Don't do it.
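As a condensed, non-authoritative sketch of the hunk below (all
identifiers are taken from the diff itself):

	need_put_load = (!cpus_have_final_cap(ARM64_HCR_NV1_RES0) &&
			 (reg == HCR_EL2) &&
			 vcpu_el2_e2h_is_set(vcpu) != !!(val & HCR_E2H));

	if (need_put_load) {
		preempt_disable();
		kvm_arch_vcpu_put(vcpu);	/* sync the live sysregs to memory */
	}

	__vcpu_sys_reg(vcpu, reg) = val;	/* commit the new HCR_EL2, E2H included */

	if (need_put_load) {
		kvm_arch_vcpu_load(vcpu, smp_processor_id());
		preempt_enable();		/* reloaded with the new E2H view */
	}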
Suggested-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 92bb91e520a8..d5c0f29c121f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -178,9 +178,25 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
goto memory_write;
if (unlikely(get_el2_to_el1_mapping(reg, &el1r, &xlate))) {
+ bool need_put_load;
+
if (!is_hyp_ctxt(vcpu))
goto memory_write;
+ /*
+ * HCR_EL2.E2H is nasty: it changes the way we interpret a
+	 * lot of the EL2 state, so treat it as a full state
+ * transition.
+ */
+ need_put_load = (!cpus_have_final_cap(ARM64_HCR_NV1_RES0) &&
+ (reg == HCR_EL2) &&
+ vcpu_el2_e2h_is_set(vcpu) != !!(val & HCR_E2H));
+
+ if (need_put_load) {
+ preempt_disable();
+ kvm_arch_vcpu_put(vcpu);
+ }
+
/*
* Always store a copy of the write to memory to avoid having
* to reverse-translate virtual EL2 system registers for a
@@ -188,6 +204,11 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
*/
__vcpu_sys_reg(vcpu, reg) = val;
+ if (need_put_load) {
+ kvm_arch_vcpu_load(vcpu, smp_processor_id());
+ preempt_enable();
+ }
+
/* No EL1 counterpart? We're done here. */
if (reg == el1r)
return;
--
2.39.2
* [PATCH v11 12/43] KVM: arm64: nv: Handle CNTHCTL_EL2 specially
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (10 preceding siblings ...)
2023-11-20 13:09 ` [PATCH v11 11/43] KVM: arm64: nv: Handle HCR_EL2.E2H specially Marc Zyngier
@ 2023-11-20 13:09 ` Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 13/43] KVM: arm64: nv: Save/Restore vEL2 sysregs Marc Zyngier
` (34 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Accessing CNTHCTL_EL2 is fraught with danger if running with
HCR_EL2.E2H=1: half of the bits are held in CNTKCTL_EL1, and
thus can be changed behind our back, while the rest lives
in the CNTHCTL_EL2 shadow copy that is memory-based.
Yes, this is a lot of fun!
Make sure that we merge the two on read access, while writes can
simply be propagated to CNTKCTL_EL1.
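Condensed from the hunks below (identifiers from the diff; not an
authoritative implementation), the E2H=1 handling amounts to:

	/* Read: merge the live CNTKCTL_EL1 bits with the in-memory shadow */
	val = read_sysreg_el1(SYS_CNTKCTL) & CNTKCTL_VALID_BITS;
	val |= __vcpu_sys_reg(vcpu, CNTHCTL_EL2) & ~CNTKCTL_VALID_BITS;

	/* Write: the CNTKCTL_EL1-backed bits can go straight to the CPU */
	write_sysreg_el1(val, SYS_CNTKCTL);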
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 28 ++++++++++++++++++++++++++++
include/kvm/arm_arch_timer.h | 3 +++
2 files changed, 31 insertions(+)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d5c0f29c121f..f42f3ed3724c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -138,6 +138,21 @@ u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
if (!is_hyp_ctxt(vcpu))
goto memory_read;
+ /*
+ * CNTHCTL_EL2 requires some special treatment to
+ * account for the bits that can be set via CNTKCTL_EL1.
+ */
+ switch (reg) {
+ case CNTHCTL_EL2:
+ if (vcpu_el2_e2h_is_set(vcpu)) {
+ val = read_sysreg_el1(SYS_CNTKCTL);
+ val &= CNTKCTL_VALID_BITS;
+ val |= __vcpu_sys_reg(vcpu, reg) & ~CNTKCTL_VALID_BITS;
+ return val;
+ }
+ break;
+ }
+
/*
* If this register does not have an EL1 counterpart,
* then read the stored EL2 version.
@@ -209,6 +224,19 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
preempt_enable();
}
+ switch (reg) {
+ case CNTHCTL_EL2:
+ /*
+		 * If E2H=0, CNTHCTL_EL2 is a pure shadow register.
+ * Otherwise, some of the bits are backed by
+ * CNTKCTL_EL1, while the rest is kept in memory.
+ * Yes, this is fun stuff.
+ */
+ if (vcpu_el2_e2h_is_set(vcpu))
+ write_sysreg_el1(val, SYS_CNTKCTL);
+ return;
+ }
+
/* No EL1 counterpart? We're done here. */
if (reg == el1r)
return;
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index c819c5d16613..fd650a8789b9 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -147,6 +147,9 @@ u64 timer_get_cval(struct arch_timer_context *ctxt);
void kvm_timer_cpu_up(void);
void kvm_timer_cpu_down(void);
+/* CNTKCTL_EL1 valid bits as of DDI0487J.a */
+#define CNTKCTL_VALID_BITS (BIT(17) | GENMASK_ULL(9, 0))
+
static inline bool has_cntpoff(void)
{
return (has_vhe() && cpus_have_final_cap(ARM64_HAS_ECV_CNTPOFF));
--
2.39.2
* [PATCH v11 13/43] KVM: arm64: nv: Save/Restore vEL2 sysregs
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (11 preceding siblings ...)
2023-11-20 13:09 ` [PATCH v11 12/43] KVM: arm64: nv: Handle CNTHCTL_EL2 specially Marc Zyngier
@ 2023-11-20 13:09 ` Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 14/43] KVM: arm64: nv: Respect virtual HCR_EL2.TWX setting Marc Zyngier
` (33 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Whenever we need to restore the guest's system registers to the CPU, we
now need to take care of the EL2 system registers as well. Most of them
are accessed via traps only, but some have an immediate effect, and a
guest running in VHE mode would also expect them to be accessible via their
EL1 encoding, which we do not trap.
For vEL2 we write the virtual EL2 registers that have an identical format
directly into their EL1 counterparts, and translate the few registers that
have a different format so that they have the same effect on execution
when running a non-VHE guest hypervisor.
Based on an initial patch from Andre Przywara, rewritten many times
since.
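The gist of the restore path, as a sketch using TCR_EL2 as an example
(see the full diff below for the rest):

	if (__vcpu_el2_e2h_is_set(ctxt)) {
		/* VHE guest hypervisor: formats are identical, restore as-is */
		write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL2), SYS_TCR);
	} else {
		/* nVHE guest hypervisor: translate the register format first */
		write_sysreg_el1(translate_tcr_el2_to_tcr_el1(ctxt_sys_reg(ctxt, TCR_EL2)),
				 SYS_TCR);
	}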
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 5 +-
arch/arm64/kvm/hyp/nvhe/sysreg-sr.c | 2 +-
arch/arm64/kvm/hyp/vhe/sysreg-sr.c | 133 ++++++++++++++++++++-
3 files changed, 135 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index bb6b571ec627..3472eb6628e6 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -97,9 +97,10 @@ static inline void __sysreg_restore_user_state(struct kvm_cpu_context *ctxt)
write_sysreg(ctxt_sys_reg(ctxt, TPIDRRO_EL0), tpidrro_el0);
}
-static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
+static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt,
+ u64 mpidr)
{
- write_sysreg(ctxt_sys_reg(ctxt, MPIDR_EL1), vmpidr_el2);
+ write_sysreg(mpidr, vmpidr_el2);
if (has_vhe() ||
!cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
diff --git a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c
index 29305022bc04..dba101565de3 100644
--- a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c
@@ -28,7 +28,7 @@ void __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt)
void __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt)
{
- __sysreg_restore_el1_state(ctxt);
+ __sysreg_restore_el1_state(ctxt, ctxt_sys_reg(ctxt, MPIDR_EL1));
__sysreg_restore_common_state(ctxt);
__sysreg_restore_user_state(ctxt);
__sysreg_restore_el2_return_state(ctxt);
diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index 8e1e0d5033b6..d33f64cd0745 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -15,6 +15,104 @@
#include <asm/kvm_hyp.h>
#include <asm/kvm_nested.h>
+static void __sysreg_save_vel2_state(struct kvm_cpu_context *ctxt)
+{
+ /* These registers are common with EL1 */
+ ctxt_sys_reg(ctxt, PAR_EL1) = read_sysreg(par_el1);
+ ctxt_sys_reg(ctxt, TPIDR_EL1) = read_sysreg(tpidr_el1);
+
+ ctxt_sys_reg(ctxt, ESR_EL2) = read_sysreg_el1(SYS_ESR);
+ ctxt_sys_reg(ctxt, AFSR0_EL2) = read_sysreg_el1(SYS_AFSR0);
+ ctxt_sys_reg(ctxt, AFSR1_EL2) = read_sysreg_el1(SYS_AFSR1);
+ ctxt_sys_reg(ctxt, FAR_EL2) = read_sysreg_el1(SYS_FAR);
+ ctxt_sys_reg(ctxt, MAIR_EL2) = read_sysreg_el1(SYS_MAIR);
+ ctxt_sys_reg(ctxt, VBAR_EL2) = read_sysreg_el1(SYS_VBAR);
+ ctxt_sys_reg(ctxt, CONTEXTIDR_EL2) = read_sysreg_el1(SYS_CONTEXTIDR);
+ ctxt_sys_reg(ctxt, AMAIR_EL2) = read_sysreg_el1(SYS_AMAIR);
+
+ /*
+ * In VHE mode those registers are compatible between EL1 and EL2,
+ * and the guest uses the _EL1 versions on the CPU naturally.
+ * So we save them into their _EL2 versions here.
+ * For nVHE mode we trap accesses to those registers, so our
+ * _EL2 copy in sys_regs[] is always up-to-date and we don't need
+ * to save anything here.
+ */
+ if (__vcpu_el2_e2h_is_set(ctxt)) {
+ u64 val;
+
+ ctxt_sys_reg(ctxt, SCTLR_EL2) = read_sysreg_el1(SYS_SCTLR);
+ ctxt_sys_reg(ctxt, CPTR_EL2) = read_sysreg_el1(SYS_CPACR);
+ ctxt_sys_reg(ctxt, TTBR0_EL2) = read_sysreg_el1(SYS_TTBR0);
+ ctxt_sys_reg(ctxt, TTBR1_EL2) = read_sysreg_el1(SYS_TTBR1);
+ ctxt_sys_reg(ctxt, TCR_EL2) = read_sysreg_el1(SYS_TCR);
+
+ /*
+ * The EL1 view of CNTKCTL_EL1 has a bunch of RES0 bits where
+ * the interesting CNTHCTL_EL2 bits live. So preserve these
+ * bits when reading back the guest-visible value.
+ */
+ val = read_sysreg_el1(SYS_CNTKCTL);
+ val &= CNTKCTL_VALID_BITS;
+ ctxt_sys_reg(ctxt, CNTHCTL_EL2) &= ~CNTKCTL_VALID_BITS;
+ ctxt_sys_reg(ctxt, CNTHCTL_EL2) |= val;
+ }
+
+ ctxt_sys_reg(ctxt, SP_EL2) = read_sysreg(sp_el1);
+ ctxt_sys_reg(ctxt, ELR_EL2) = read_sysreg_el1(SYS_ELR);
+ ctxt_sys_reg(ctxt, SPSR_EL2) = read_sysreg_el1(SYS_SPSR);
+}
+
+static void __sysreg_restore_vel2_state(struct kvm_cpu_context *ctxt)
+{
+ u64 val;
+
+ /* These registers are common with EL1 */
+ write_sysreg(ctxt_sys_reg(ctxt, PAR_EL1), par_el1);
+ write_sysreg(ctxt_sys_reg(ctxt, TPIDR_EL1), tpidr_el1);
+
+ write_sysreg(read_cpuid_id(), vpidr_el2);
+ write_sysreg(ctxt_sys_reg(ctxt, MPIDR_EL1), vmpidr_el2);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, MAIR_EL2), SYS_MAIR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, VBAR_EL2), SYS_VBAR);
+	write_sysreg_el1(ctxt_sys_reg(ctxt, CONTEXTIDR_EL2), SYS_CONTEXTIDR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, AMAIR_EL2), SYS_AMAIR);
+
+ if (__vcpu_el2_e2h_is_set(ctxt)) {
+ /*
+ * In VHE mode those registers are compatible between
+ * EL1 and EL2.
+ */
+ write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL2), SYS_SCTLR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, CPTR_EL2), SYS_CPACR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR0_EL2), SYS_TTBR0);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR1_EL2), SYS_TTBR1);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL2), SYS_TCR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, CNTHCTL_EL2), SYS_CNTKCTL);
+ } else {
+ /*
+ * CNTHCTL_EL2 only affects EL1 when running nVHE, so
+ * no need to restore it.
+ */
+ val = translate_sctlr_el2_to_sctlr_el1(ctxt_sys_reg(ctxt, SCTLR_EL2));
+ write_sysreg_el1(val, SYS_SCTLR);
+ val = translate_cptr_el2_to_cpacr_el1(ctxt_sys_reg(ctxt, CPTR_EL2));
+ write_sysreg_el1(val, SYS_CPACR);
+ val = translate_ttbr0_el2_to_ttbr0_el1(ctxt_sys_reg(ctxt, TTBR0_EL2));
+ write_sysreg_el1(val, SYS_TTBR0);
+ val = translate_tcr_el2_to_tcr_el1(ctxt_sys_reg(ctxt, TCR_EL2));
+ write_sysreg_el1(val, SYS_TCR);
+ }
+
+ write_sysreg_el1(ctxt_sys_reg(ctxt, ESR_EL2), SYS_ESR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, AFSR0_EL2), SYS_AFSR0);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, AFSR1_EL2), SYS_AFSR1);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, FAR_EL2), SYS_FAR);
+ write_sysreg(ctxt_sys_reg(ctxt, SP_EL2), sp_el1);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, ELR_EL2), SYS_ELR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, SPSR_EL2), SYS_SPSR);
+}
+
/*
* VHE: Host and guest must save mdscr_el1 and sp_el0 (and the PC and
* pstate, which are handled as part of the el2 return state) on every
@@ -66,6 +164,7 @@ void __vcpu_load_switch_sysregs(struct kvm_vcpu *vcpu)
{
struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
struct kvm_cpu_context *host_ctxt;
+ u64 mpidr;
host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
__sysreg_save_user_state(host_ctxt);
@@ -89,7 +188,29 @@ void __vcpu_load_switch_sysregs(struct kvm_vcpu *vcpu)
*/
__sysreg32_restore_state(vcpu);
__sysreg_restore_user_state(guest_ctxt);
- __sysreg_restore_el1_state(guest_ctxt);
+
+ if (unlikely(__is_hyp_ctxt(guest_ctxt))) {
+ __sysreg_restore_vel2_state(guest_ctxt);
+ } else {
+ if (vcpu_has_nv(vcpu)) {
+ /*
+ * Only set VPIDR_EL2 for nested VMs, as this is the
+ * only time it changes. We'll restore the MIDR_EL1
+ * view on put.
+ */
+ write_sysreg(ctxt_sys_reg(guest_ctxt, VPIDR_EL2), vpidr_el2);
+
+ /*
+ * As we're restoring a nested guest, set the value
+ * provided by the guest hypervisor.
+ */
+ mpidr = ctxt_sys_reg(guest_ctxt, VMPIDR_EL2);
+ } else {
+ mpidr = ctxt_sys_reg(guest_ctxt, MPIDR_EL1);
+ }
+
+ __sysreg_restore_el1_state(guest_ctxt, mpidr);
+ }
vcpu_set_flag(vcpu, SYSREGS_ON_CPU);
}
@@ -112,12 +233,20 @@ void __vcpu_put_switch_sysregs(struct kvm_vcpu *vcpu)
host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
- __sysreg_save_el1_state(guest_ctxt);
+ if (unlikely(__is_hyp_ctxt(guest_ctxt)))
+ __sysreg_save_vel2_state(guest_ctxt);
+ else
+ __sysreg_save_el1_state(guest_ctxt);
+
__sysreg_save_user_state(guest_ctxt);
__sysreg32_save_state(vcpu);
/* Restore host user state */
__sysreg_restore_user_state(host_ctxt);
+ /* If leaving a nesting guest, restore MPIDR_EL1 default view */
+ if (vcpu_has_nv(vcpu))
+ write_sysreg(read_cpuid_id(), vpidr_el2);
+
vcpu_clear_flag(vcpu, SYSREGS_ON_CPU);
}
--
2.39.2
* [PATCH v11 14/43] KVM: arm64: nv: Respect virtual HCR_EL2.TWX setting
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (12 preceding siblings ...)
2023-11-20 13:09 ` [PATCH v11 13/43] KVM: arm64: nv: Save/Restore vEL2 sysregs Marc Zyngier
@ 2023-11-20 13:09 ` Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 15/43] KVM: arm64: nv: Respect virtual CPTR_EL2.{TFP,FPEN} settings Marc Zyngier
` (32 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
From: Jintack Lim <jintack.lim@linaro.org>
Forward exceptions due to WFI or WFE instructions to the virtual EL2 if
they are not coming from the virtual EL2 and virtual HCR_EL2.TWX is set.
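In other words, the helper added below boils down to this check (sketch
only; is_wfe is derived from the trapped ESR):

	if (vcpu_has_nv(vcpu) && !vcpu_is_el2(vcpu) &&
	    (__vcpu_sys_reg(vcpu, HCR_EL2) & (is_wfe ? HCR_TWE : HCR_TWI)))
		return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));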
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_emulate.h | 13 +++++++++++++
arch/arm64/kvm/handle_exit.c | 6 +++++-
2 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index ca9168d23cd6..c767f79651c3 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -299,6 +299,19 @@ static __always_inline u64 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu)
return vcpu->arch.fault.esr_el2;
}
+static inline bool guest_hyp_wfx_traps_enabled(const struct kvm_vcpu *vcpu)
+{
+ u64 esr = kvm_vcpu_get_esr(vcpu);
+ bool is_wfe = !!(esr & ESR_ELx_WFx_ISS_WFE);
+ u64 hcr_el2 = __vcpu_sys_reg(vcpu, HCR_EL2);
+
+ if (!vcpu_has_nv(vcpu) || vcpu_is_el2(vcpu))
+ return false;
+
+ return ((is_wfe && (hcr_el2 & HCR_TWE)) ||
+ (!is_wfe && (hcr_el2 & HCR_TWI)));
+}
+
static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
{
u64 esr = kvm_vcpu_get_esr(vcpu);
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 617ae6dea5d5..90d0b690a59d 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -114,8 +114,12 @@ static int handle_no_fpsimd(struct kvm_vcpu *vcpu)
static int kvm_handle_wfx(struct kvm_vcpu *vcpu)
{
u64 esr = kvm_vcpu_get_esr(vcpu);
+ bool is_wfe = !!(esr & ESR_ELx_WFx_ISS_WFE);
- if (esr & ESR_ELx_WFx_ISS_WFE) {
+ if (guest_hyp_wfx_traps_enabled(vcpu))
+ return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+
+ if (is_wfe) {
trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true);
vcpu->stat.wfe_exit_stat++;
} else {
--
2.39.2
* [PATCH v11 15/43] KVM: arm64: nv: Respect virtual CPTR_EL2.{TFP,FPEN} settings
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (13 preceding siblings ...)
2023-11-20 13:09 ` [PATCH v11 14/43] KVM: arm64: nv: Respect virtual HCR_EL2.TWX setting Marc Zyngier
@ 2023-11-20 13:09 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 16/43] KVM: arm64: nv: Configure HCR_EL2 for FEAT_NV2 Marc Zyngier
` (31 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:09 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
From: Jintack Lim <jintack.lim@linaro.org>
Forward traps due to FP/ASIMD register accesses to the virtual EL2
if virtual CPTR_EL2.TFP is set (with HCR_EL2.E2H == 0) or
CPTR_EL2.FPEN is configured to do so (with HCR_EL2.E2H == 1).
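With E2H=1, CPTR_EL2 uses the CPACR_EL1 format, so the decision table
implemented below looks roughly like this (sketch):

	switch (FIELD_GET(CPACR_ELx_FPEN, vcpu_read_sys_reg(vcpu, CPTR_EL2))) {
	case 0b00:
	case 0b10:	/* FP/ASIMD trapped at all exception levels */
		return true;
	case 0b01:	/* trapped at EL0 only: forward when at vEL0 with TGE set */
		return vcpu_el2_tge_is_set(vcpu) && !vcpu_is_el2(vcpu);
	default:	/* 0b11: no trapping */
		return false;
	}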
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
[maz: account for HCR_EL2.E2H when testing for TFP/FPEN, with
all the hard work actually being done by Chase Conklin]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_emulate.h | 25 +++++++++++++++++++++++++
arch/arm64/kvm/handle_exit.c | 16 ++++++++++++----
arch/arm64/kvm/hyp/include/hyp/switch.h | 4 ++++
3 files changed, 41 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index c767f79651c3..8ccf8a1d37ff 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -11,6 +11,7 @@
#ifndef __ARM64_KVM_EMULATE_H__
#define __ARM64_KVM_EMULATE_H__
+#include <linux/bitfield.h>
#include <linux/kvm_host.h>
#include <asm/debug-monitors.h>
@@ -294,6 +295,30 @@ static inline bool vcpu_mode_priv(const struct kvm_vcpu *vcpu)
return mode != PSR_MODE_EL0t;
}
+static inline bool guest_hyp_fpsimd_traps_enabled(const struct kvm_vcpu *vcpu)
+{
+ u64 val;
+
+ if (!vcpu_has_nv(vcpu))
+ return false;
+
+ val = vcpu_read_sys_reg(vcpu, CPTR_EL2);
+
+ if (!vcpu_el2_e2h_is_set(vcpu))
+ return (val & CPTR_EL2_TFP);
+
+ switch (FIELD_GET(CPACR_ELx_FPEN, val)) {
+ case 0b00:
+ case 0b10:
+ return true;
+ case 0b01:
+ return vcpu_el2_tge_is_set(vcpu) && !vcpu_is_el2(vcpu);
+ case 0b11:
+ default: /* GCC is dumb */
+ return false;
+ }
+}
+
static __always_inline u64 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu)
{
return vcpu->arch.fault.esr_el2;
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 90d0b690a59d..aab052bed102 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -87,11 +87,19 @@ static int handle_smc(struct kvm_vcpu *vcpu)
}
/*
- * Guest access to FP/ASIMD registers are routed to this handler only
- * when the system doesn't support FP/ASIMD.
+ * This handles the cases where the system does not support FP/ASIMD or when
+ * we are running nested virtualization and the guest hypervisor is trapping
+ * FP/ASIMD accesses by its own guest.
+ *
+ * All other handling of guest vs. host FP/ASIMD register state is handled in
+ * fixup_guest_exit().
*/
-static int handle_no_fpsimd(struct kvm_vcpu *vcpu)
+static int kvm_handle_fpasimd(struct kvm_vcpu *vcpu)
{
+ if (guest_hyp_fpsimd_traps_enabled(vcpu))
+ return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+
+ /* This is the case when the system doesn't support FP/ASIMD. */
kvm_inject_undefined(vcpu);
return 1;
}
@@ -280,7 +288,7 @@ static exit_handle_fn arm_exit_handlers[] = {
[ESR_ELx_EC_BREAKPT_LOW]= kvm_handle_guest_debug,
[ESR_ELx_EC_BKPT32] = kvm_handle_guest_debug,
[ESR_ELx_EC_BRK64] = kvm_handle_guest_debug,
- [ESR_ELx_EC_FP_ASIMD] = handle_no_fpsimd,
+ [ESR_ELx_EC_FP_ASIMD] = kvm_handle_fpasimd,
[ESR_ELx_EC_PAC] = kvm_handle_ptrauth,
};
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index f99d8af0b9af..9a76f3a75a43 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -305,6 +305,10 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
if (!system_supports_fpsimd())
return false;
+ /* Forward traps to the guest hypervisor as required */
+ if (guest_hyp_fpsimd_traps_enabled(vcpu))
+ return false;
+
sve_guest = vcpu_has_sve(vcpu);
esr_ec = kvm_vcpu_trap_get_class(vcpu);
--
2.39.2
* [PATCH v11 16/43] KVM: arm64: nv: Configure HCR_EL2 for FEAT_NV2
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (14 preceding siblings ...)
2023-11-20 13:09 ` [PATCH v11 15/43] KVM: arm64: nv: Respect virtual CPTR_EL2.{TFP,FPEN} settings Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 17/43] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures Marc Zyngier
` (30 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Add the HCR_EL2 configuration for FEAT_NV2:
- when running a guest hypervisor, completely ignore the guest's
HCR_EL2 (although this will eventually be relaxed as we improve
the NV support)
- when running a L2 guest, fold in a limited number of the L1
hypervisor's HCR_EL2 properties
Non-NV guests are completely unaffected by this.
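Sketch of the resulting computation (names as in the diff below; not
authoritative):

	u64 hcr = vcpu->arch.hcr_el2;			/* KVM's own baseline */
	u64 mask;

	if (is_hyp_ctxt(vcpu))
		mask = HCR_GUEST_NV_INHOST_FILTER;	/* drop everything, for now */
	else
		mask = HCR_GUEST_NV_INGUEST_FILTER;	/* drop only the unsafe bits */

	hcr |= __vcpu_sys_reg(vcpu, HCR_EL2) & ~mask;	/* fold in what survives */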
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_nested.h | 1 +
arch/arm64/include/asm/sysreg.h | 1 +
arch/arm64/kvm/emulate-nested.c | 27 +++++++++++++
arch/arm64/kvm/handle_exit.c | 7 ++++
arch/arm64/kvm/hyp/include/hyp/switch.h | 4 +-
arch/arm64/kvm/hyp/nvhe/switch.c | 2 +-
arch/arm64/kvm/hyp/vhe/switch.c | 53 ++++++++++++++++++++++++-
7 files changed, 90 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 4882905357f4..aa085f2f1947 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -60,6 +60,7 @@ static inline u64 translate_ttbr0_el2_to_ttbr0_el1(u64 ttbr0)
return ttbr0 & ~GENMASK_ULL(63, 48);
}
+extern bool forward_smc_trap(struct kvm_vcpu *vcpu);
extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
int kvm_init_nv_sysregs(struct kvm *kvm);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 3cb18c7a1ef0..a54464c415fc 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -498,6 +498,7 @@
#define SYS_TCR_EL2 sys_reg(3, 4, 2, 0, 2)
#define SYS_VTTBR_EL2 sys_reg(3, 4, 2, 1, 0)
#define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2)
+#define SYS_VNCR_EL2 sys_reg(3, 4, 2, 2, 0)
#define SYS_TRFCR_EL2 sys_reg(3, 4, 1, 2, 1)
#define SYS_VNCR_EL2 sys_reg(3, 4, 2, 2, 0)
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 06185216a297..61721f870be0 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1933,6 +1933,26 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
return true;
}
+static bool forward_traps(struct kvm_vcpu *vcpu, u64 control_bit)
+{
+ bool control_bit_set;
+
+ if (!vcpu_has_nv(vcpu))
+ return false;
+
+ control_bit_set = __vcpu_sys_reg(vcpu, HCR_EL2) & control_bit;
+ if (!vcpu_is_el2(vcpu) && control_bit_set) {
+ kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+ return true;
+ }
+ return false;
+}
+
+bool forward_smc_trap(struct kvm_vcpu *vcpu)
+{
+ return forward_traps(vcpu, HCR_TSC);
+}
+
static u64 kvm_check_illegal_exception_return(struct kvm_vcpu *vcpu, u64 spsr)
{
u64 mode = spsr & PSR_MODE_MASK;
@@ -1971,6 +1991,13 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
u64 spsr, elr, mode;
bool direct_eret;
+ /*
+ * Forward this trap to the virtual EL2 if the virtual
+ * HCR_EL2.NV bit is set and this is coming from !EL2.
+ */
+ if (forward_traps(vcpu, HCR_NV))
+ return;
+
/*
* Going through the whole put/load motions is a waste of time
* if this is a VHE guest hypervisor returning to its own
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index aab052bed102..e7bc52047ff3 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -55,6 +55,13 @@ static int handle_hvc(struct kvm_vcpu *vcpu)
static int handle_smc(struct kvm_vcpu *vcpu)
{
+ /*
+ * Forward this trapped smc instruction to the virtual EL2 if
+ * the guest has asked for it.
+ */
+ if (forward_smc_trap(vcpu))
+ return 1;
+
/*
* "If an SMC instruction executed at Non-secure EL1 is
* trapped to EL2 because HCR_EL2.TSC is 1, the exception is a
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 9a76f3a75a43..aed2ea35082c 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -234,10 +234,8 @@ static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
__deactivate_traps_hfgxtr(vcpu);
}
-static inline void ___activate_traps(struct kvm_vcpu *vcpu)
+static inline void ___activate_traps(struct kvm_vcpu *vcpu, u64 hcr)
{
- u64 hcr = vcpu->arch.hcr_el2;
-
if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM))
hcr |= HCR_TVM;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index c50f8459e4fc..4103625e46c5 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -40,7 +40,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
{
u64 val;
- ___activate_traps(vcpu);
+ ___activate_traps(vcpu, vcpu->arch.hcr_el2);
__activate_traps_common(vcpu);
val = vcpu->arch.cptr_el2;
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 1581df6aec87..0926011deae7 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -33,11 +33,62 @@ DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
+/*
+ * Common set of HCR_EL2 bits that we do not want to set while running
+ * a NV guest, irrespective of the context the guest is in.
+ */
+#define __HCR_GUEST_NV_FILTER \
+ (HCR_TGE | HCR_ATA | HCR_API | HCR_APK | HCR_FIEN | \
+ HCR_BSU | HCR_NV | HCR_NV1 | HCR_NV2)
+
+/*
+ * Running as a host? Drop HCR_EL2 settings that should not affect the
+ * execution of EL2, or that are guaranteed to be enforced by the L0
+ * host anyway. For the time being, we don't allow anything. At all.
+ */
+#define HCR_GUEST_NV_INHOST_FILTER	(~(0ULL) | __HCR_GUEST_NV_FILTER)
+
+/*
+ * Running as a guest? Drop HCR_EL2 settings that should not affect the
+ * execution of EL0/EL1, or that are guaranteed to be enforced by the
+ * L0 host anyway.
+ */
+#define HCR_GUEST_NV_INGUEST_FILTER __HCR_GUEST_NV_FILTER
+
+static u64 __compute_hcr(struct kvm_vcpu *vcpu)
+{
+ u64 hcr, vhcr_el2, mask;
+
+ hcr = vcpu->arch.hcr_el2;
+
+ if (!vcpu_has_nv(vcpu))
+ return hcr;
+
+ vhcr_el2 = __vcpu_sys_reg(vcpu, HCR_EL2);
+
+ if (is_hyp_ctxt(vcpu)) {
+ hcr |= HCR_NV | HCR_NV2 | HCR_AT | HCR_TTLB;
+
+ if (!vcpu_el2_e2h_is_set(vcpu))
+ hcr |= HCR_NV1;
+
+ mask = HCR_GUEST_NV_INHOST_FILTER;
+
+ write_sysreg_s(vcpu->arch.ctxt.vncr_array, SYS_VNCR_EL2);
+ } else {
+ mask = HCR_GUEST_NV_INGUEST_FILTER;
+ }
+
+ hcr |= vhcr_el2 & ~mask;
+
+ return hcr;
+}
+
static void __activate_traps(struct kvm_vcpu *vcpu)
{
u64 val;
- ___activate_traps(vcpu);
+ ___activate_traps(vcpu, __compute_hcr(vcpu));
if (has_cntpoff()) {
struct timer_map map;
--
2.39.2
* [PATCH v11 17/43] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (15 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 16/43] KVM: arm64: nv: Configure HCR_EL2 for FEAT_NV2 Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2024-01-23 9:55 ` Ganapatrao Kulkarni
2023-11-20 13:10 ` [PATCH v11 18/43] KVM: arm64: nv: Implement nested Stage-2 page table walk logic Marc Zyngier
` (29 subsequent siblings)
46 siblings, 1 reply; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Add Stage-2 mmu data structures for virtual EL2 and for nested guests.
We don't yet populate shadow Stage-2 page tables, but we now have a
framework for getting to a shadow Stage-2 pgd.
We allocate twice as many Stage-2 mmu structures as vcpus because
that's sufficient for each vcpu running two translation regimes without
having to flush the Stage-2 page tables.
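The lookup key works as follows (sketch of the lookup_s2_mmu() logic
added below):

	if (nested_stage2_enabled)	/* virtual HCR_EL2.VM == 1 */
		match = (vttbr == mmu->tlb_vttbr) && (vtcr == mmu->tlb_vtcr);
	else				/* TLBs are VMID-tagged even with S2 off */
		match = get_vmid(vttbr) == get_vmid(mmu->tlb_vttbr);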
Co-developed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 41 ++++++
arch/arm64/include/asm/kvm_mmu.h | 9 ++
arch/arm64/include/asm/kvm_nested.h | 7 +
arch/arm64/kvm/arm.c | 12 ++
arch/arm64/kvm/mmu.c | 78 ++++++++---
arch/arm64/kvm/nested.c | 207 ++++++++++++++++++++++++++++
arch/arm64/kvm/reset.c | 6 +
7 files changed, 338 insertions(+), 22 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f17fb7c42973..eb96fe9b686e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -188,8 +188,40 @@ struct kvm_s2_mmu {
uint64_t split_page_chunk_size;
struct kvm_arch *arch;
+
+ /*
+ * For a shadow stage-2 MMU, the virtual vttbr used by the
+ * host to parse the guest S2.
+ * This either contains:
+ * - the virtual VTTBR programmed by the guest hypervisor with
+ * CnP cleared
+ * - The value 1 (VMID=0, BADDR=0, CnP=1) if invalid
+ *
+ * We also cache the full VTCR which gets used for TLB invalidation,
+ * taking the ARM ARM's "Any of the bits in VTCR_EL2 are permitted
+ * to be cached in a TLB" to the letter.
+ */
+ u64 tlb_vttbr;
+ u64 tlb_vtcr;
+
+ /*
+ * true when this represents a nested context where virtual
+ * HCR_EL2.VM == 1
+ */
+ bool nested_stage2_enabled;
+
+ /*
+ * 0: Nobody is currently using this, check vttbr for validity
+ * >0: Somebody is actively using this.
+ */
+ atomic_t refcnt;
};
+static inline bool kvm_s2_mmu_valid(struct kvm_s2_mmu *mmu)
+{
+ return !(mmu->tlb_vttbr & 1);
+}
+
struct kvm_arch_memory_slot {
};
@@ -241,6 +273,14 @@ static inline u16 kvm_mpidr_index(struct kvm_mpidr_data *data, u64 mpidr)
struct kvm_arch {
struct kvm_s2_mmu mmu;
+ /*
+ * Stage 2 paging state for VMs with nested S2 using a virtual
+ * VMID.
+ */
+ struct kvm_s2_mmu *nested_mmus;
+ size_t nested_mmus_size;
+ int nested_mmus_next;
+
/* Interrupt controller */
struct vgic_dist vgic;
@@ -1186,6 +1226,7 @@ void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu);
void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu);
int __init kvm_set_ipa_limit(void);
+u32 kvm_get_pa_bits(struct kvm *kvm);
#define __KVM_HAVE_ARCH_VM_ALLOC
struct kvm *kvm_arch_alloc_vm(void);
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 49e0d4b36bd0..5c6fb2fb8287 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -119,6 +119,7 @@ alternative_cb_end
#include <asm/mmu_context.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_host.h>
+#include <asm/kvm_nested.h>
void kvm_update_va_mask(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst);
@@ -171,6 +172,7 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
int create_hyp_stack(phys_addr_t phys_addr, unsigned long *haddr);
void __init free_hyp_pgds(void);
+void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size);
void stage2_unmap_vm(struct kvm *kvm);
int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type);
void kvm_uninit_stage2_mmu(struct kvm *kvm);
@@ -339,5 +341,12 @@ static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
{
return container_of(mmu->arch, struct kvm, arch);
}
+
+static inline u64 get_vmid(u64 vttbr)
+{
+ return (vttbr & VTTBR_VMID_MASK(kvm_get_vmid_bits())) >>
+ VTTBR_VMID_SHIFT;
+}
+
#endif /* __ASSEMBLY__ */
#endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index aa085f2f1947..f421ad294e68 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -60,6 +60,13 @@ static inline u64 translate_ttbr0_el2_to_ttbr0_el1(u64 ttbr0)
return ttbr0 & ~GENMASK_ULL(63, 48);
}
+extern void kvm_init_nested(struct kvm *kvm);
+extern int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu);
+extern void kvm_init_nested_s2_mmu(struct kvm_s2_mmu *mmu);
+extern struct kvm_s2_mmu *lookup_s2_mmu(struct kvm_vcpu *vcpu);
+extern void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu);
+extern void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu);
+
extern bool forward_smc_trap(struct kvm_vcpu *vcpu);
extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index b65df612b41b..2e76892c1a56 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -147,6 +147,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
mutex_unlock(&kvm->lock);
#endif
+ kvm_init_nested(kvm);
+
ret = kvm_share_hyp(kvm, kvm + 1);
if (ret)
return ret;
@@ -429,6 +431,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
struct kvm_s2_mmu *mmu;
int *last_ran;
+ if (vcpu_has_nv(vcpu))
+ kvm_vcpu_load_hw_mmu(vcpu);
+
mmu = vcpu->arch.hw_mmu;
last_ran = this_cpu_ptr(mmu->last_vcpu_ran);
@@ -479,9 +484,12 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
kvm_timer_vcpu_put(vcpu);
kvm_vgic_put(vcpu);
kvm_vcpu_pmu_restore_host(vcpu);
+ if (vcpu_has_nv(vcpu))
+ kvm_vcpu_put_hw_mmu(vcpu);
kvm_arm_vmid_clear_active();
vcpu_clear_on_unsupported_cpu(vcpu);
+
vcpu->cpu = -1;
}
@@ -1336,6 +1344,10 @@ static int kvm_setup_vcpu(struct kvm_vcpu *vcpu)
if (kvm_vcpu_has_pmu(vcpu) && !kvm->arch.arm_pmu)
ret = kvm_arm_set_default_pmu(kvm);
+ /* Prepare for nested if required */
+ if (!ret)
+ ret = kvm_vcpu_init_nested(vcpu);
+
return ret;
}
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d87c8fcc4c24..588ce46c0ad0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -305,7 +305,7 @@ static void invalidate_icache_guest_page(void *va, size_t size)
* does.
*/
/**
- * unmap_stage2_range -- Clear stage2 page table entries to unmap a range
+ * __unmap_stage2_range -- Clear stage2 page table entries to unmap a range
* @mmu: The KVM stage-2 MMU pointer
* @start: The intermediate physical base address of the range to unmap
* @size: The size of the area to unmap
@@ -328,7 +328,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
may_block));
}
-static void unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
+void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
{
__unmap_stage2_range(mmu, start, size, true);
}
@@ -853,21 +853,9 @@ static struct kvm_pgtable_mm_ops kvm_s2_mm_ops = {
.icache_inval_pou = invalidate_icache_guest_page,
};
-/**
- * kvm_init_stage2_mmu - Initialise a S2 MMU structure
- * @kvm: The pointer to the KVM structure
- * @mmu: The pointer to the s2 MMU structure
- * @type: The machine type of the virtual machine
- *
- * Allocates only the stage-2 HW PGD level table(s).
- * Note we don't need locking here as this is only called when the VM is
- * created, which can only be done once.
- */
-int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type)
+static int kvm_init_ipa_range(struct kvm_s2_mmu *mmu, unsigned long type)
{
u32 kvm_ipa_limit = get_kvm_ipa_limit();
- int cpu, err;
- struct kvm_pgtable *pgt;
u64 mmfr0, mmfr1;
u32 phys_shift;
@@ -894,11 +882,58 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
mmu->vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift);
+ return 0;
+}
+
+/**
+ * kvm_init_stage2_mmu - Initialise a S2 MMU structure
+ * @kvm: The pointer to the KVM structure
+ * @mmu: The pointer to the s2 MMU structure
+ * @type: The machine type of the virtual machine
+ *
+ * Allocates only the stage-2 HW PGD level table(s).
+ * Note we don't need locking here as this is only called in two cases:
+ *
+ * - when the VM is created, which can't race against anything
+ *
+ * - when secondary kvm_s2_mmu structures are initialised for NV
+ * guests, and the caller must hold kvm->lock as this is called on a
+ * per-vcpu basis.
+ */
+int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type)
+{
+ int cpu, err;
+ struct kvm_pgtable *pgt;
+
+ /*
+	 * If we already have our page tables in place, and the
+ * MMU context is the canonical one, we have a bug somewhere,
+ * as this is only supposed to ever happen once per VM.
+ *
+ * Otherwise, we're building nested page tables, and that's
+ * probably because userspace called KVM_ARM_VCPU_INIT more
+ * than once on the same vcpu. Since that's actually legal,
+	 * don't kick up a fuss and leave gracefully.
+ */
if (mmu->pgt != NULL) {
+ if (&kvm->arch.mmu != mmu)
+ return 0;
+
kvm_err("kvm_arch already initialized?\n");
return -EINVAL;
}
+ /*
+ * We only initialise the IPA range on the canonical MMU, so
+ * the type is meaningless in all other situations.
+ */
+ if (&kvm->arch.mmu != mmu)
+ type = kvm_get_pa_bits(kvm);
+
+ err = kvm_init_ipa_range(mmu, type);
+ if (err)
+ return err;
+
pgt = kzalloc(sizeof(*pgt), GFP_KERNEL_ACCOUNT);
if (!pgt)
return -ENOMEM;
@@ -923,6 +958,10 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
mmu->pgt = pgt;
mmu->pgd_phys = __pa(pgt->pgd);
+
+ if (&kvm->arch.mmu != mmu)
+ kvm_init_nested_s2_mmu(mmu);
+
return 0;
out_destroy_pgtable:
@@ -974,7 +1013,7 @@ static void stage2_unmap_memslot(struct kvm *kvm,
if (!(vma->vm_flags & VM_PFNMAP)) {
gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
- unmap_stage2_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
+ kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
}
hva = vm_end;
} while (hva < reg_end);
@@ -2054,11 +2093,6 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
{
}
-void kvm_arch_flush_shadow_all(struct kvm *kvm)
-{
- kvm_uninit_stage2_mmu(kvm);
-}
-
void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
struct kvm_memory_slot *slot)
{
@@ -2066,7 +2100,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
phys_addr_t size = slot->npages << PAGE_SHIFT;
write_lock(&kvm->mmu_lock);
- unmap_stage2_range(&kvm->arch.mmu, gpa, size);
+ kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, size);
write_unlock(&kvm->mmu_lock);
}
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 66d05f5d39a2..c5752ab8c3fe 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -7,7 +7,9 @@
#include <linux/kvm.h>
#include <linux/kvm_host.h>
+#include <asm/kvm_arm.h>
#include <asm/kvm_emulate.h>
+#include <asm/kvm_mmu.h>
#include <asm/kvm_nested.h>
#include <asm/sysreg.h>
@@ -16,6 +18,211 @@
/* Protection against the sysreg repainting madness... */
#define NV_FTR(r, f) ID_AA64##r##_EL1_##f
+void kvm_init_nested(struct kvm *kvm)
+{
+ kvm->arch.nested_mmus = NULL;
+ kvm->arch.nested_mmus_size = 0;
+}
+
+int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = vcpu->kvm;
+ struct kvm_s2_mmu *tmp;
+ int num_mmus;
+ int ret = -ENOMEM;
+
+ if (!test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->kvm->arch.vcpu_features))
+ return 0;
+
+ if (!cpus_have_final_cap(ARM64_HAS_NESTED_VIRT))
+ return -EINVAL;
+
+ /*
+ * Let's treat memory allocation failures as benign: If we fail to
+ * allocate anything, return an error and keep the allocated array
+	 * alive. Userspace may try to recover by initializing the vcpu
+ * again, and there is no reason to affect the whole VM for this.
+ */
+ num_mmus = atomic_read(&kvm->online_vcpus) * 2;
+ tmp = krealloc(kvm->arch.nested_mmus,
+ num_mmus * sizeof(*kvm->arch.nested_mmus),
+ GFP_KERNEL_ACCOUNT | __GFP_ZERO);
+ if (tmp) {
+ /*
+		 * If we went through a reallocation, adjust the MMU
+ * back-pointers in the previously initialised
+ * pg_table structures.
+ */
+ if (kvm->arch.nested_mmus != tmp) {
+ int i;
+
+ for (i = 0; i < num_mmus - 2; i++)
+ tmp[i].pgt->mmu = &tmp[i];
+ }
+
+ if (kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 1], 0) ||
+ kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2], 0)) {
+ kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
+ kvm_free_stage2_pgd(&tmp[num_mmus - 2]);
+ } else {
+ kvm->arch.nested_mmus_size = num_mmus;
+ ret = 0;
+ }
+
+ kvm->arch.nested_mmus = tmp;
+ }
+
+ return ret;
+}
+
+/* Must be called with kvm->mmu_lock held */
+struct kvm_s2_mmu *lookup_s2_mmu(struct kvm_vcpu *vcpu)
+{
+ bool nested_stage2_enabled;
+ u64 vttbr, vtcr, hcr;
+ struct kvm *kvm;
+ int i;
+
+ kvm = vcpu->kvm;
+
+ vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
+ vtcr = vcpu_read_sys_reg(vcpu, VTCR_EL2);
+ hcr = vcpu_read_sys_reg(vcpu, HCR_EL2);
+
+ nested_stage2_enabled = hcr & HCR_VM;
+
+ /* Don't consider the CnP bit for the vttbr match */
+ vttbr = vttbr & ~VTTBR_CNP_BIT;
+
+ /*
+ * Two possibilities when looking up a S2 MMU context:
+ *
+ * - either S2 is enabled in the guest, and we need a context that is
+ * S2-enabled and matches the full VTTBR (VMID+BADDR) and VTCR,
+ * which makes it safe from a TLB conflict perspective (a broken
+ * guest won't be able to generate them),
+ *
+ * - or S2 is disabled, and we need a context that is S2-disabled
+ * and matches the VMID only, as all TLBs are tagged by VMID even
+ * if S2 translation is disabled.
+ */
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ if (!kvm_s2_mmu_valid(mmu))
+ continue;
+
+ if (nested_stage2_enabled &&
+ mmu->nested_stage2_enabled &&
+ vttbr == mmu->tlb_vttbr &&
+ vtcr == mmu->tlb_vtcr)
+ return mmu;
+
+ if (!nested_stage2_enabled &&
+ !mmu->nested_stage2_enabled &&
+ get_vmid(vttbr) == get_vmid(mmu->tlb_vttbr))
+ return mmu;
+ }
+ return NULL;
+}
+
+/* Must be called with kvm->mmu_lock held */
+static struct kvm_s2_mmu *get_s2_mmu_nested(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = vcpu->kvm;
+ struct kvm_s2_mmu *s2_mmu;
+ int i;
+
+ s2_mmu = lookup_s2_mmu(vcpu);
+ if (s2_mmu)
+ goto out;
+
+ /*
+ * Make sure we don't always search from the same point, or we
+ * will always reuse a potentially active context, leaving
+ * free contexts unused.
+ */
+ for (i = kvm->arch.nested_mmus_next;
+ i < (kvm->arch.nested_mmus_size + kvm->arch.nested_mmus_next);
+ i++) {
+ s2_mmu = &kvm->arch.nested_mmus[i % kvm->arch.nested_mmus_size];
+
+ if (atomic_read(&s2_mmu->refcnt) == 0)
+ break;
+ }
+ BUG_ON(atomic_read(&s2_mmu->refcnt)); /* We have struct MMUs to spare */
+
+ /* Set the scene for the next search */
+ kvm->arch.nested_mmus_next = (i + 1) % kvm->arch.nested_mmus_size;
+
+ if (kvm_s2_mmu_valid(s2_mmu)) {
+ /* Clear the old state */
+ kvm_unmap_stage2_range(s2_mmu, 0, kvm_phys_size(s2_mmu));
+ if (atomic64_read(&s2_mmu->vmid.id))
+ kvm_call_hyp(__kvm_tlb_flush_vmid, s2_mmu);
+ }
+
+ /*
+ * The virtual VMID (modulo CnP) will be used as a key when matching
+ * an existing kvm_s2_mmu.
+ *
+ * We cache VTCR at allocation time, once and for all. It'd be great
+ * if the guest didn't screw that one up, as this is not very
+ * forgiving...
+ */
+ s2_mmu->tlb_vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2) & ~VTTBR_CNP_BIT;
+ s2_mmu->tlb_vtcr = vcpu_read_sys_reg(vcpu, VTCR_EL2);
+ s2_mmu->nested_stage2_enabled = vcpu_read_sys_reg(vcpu, HCR_EL2) & HCR_VM;
+
+out:
+ atomic_inc(&s2_mmu->refcnt);
+ return s2_mmu;
+}
+
+void kvm_init_nested_s2_mmu(struct kvm_s2_mmu *mmu)
+{
+ mmu->tlb_vttbr = 1;
+ mmu->nested_stage2_enabled = false;
+ atomic_set(&mmu->refcnt, 0);
+}
+
+void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
+{
+ if (is_hyp_ctxt(vcpu)) {
+ vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
+ } else {
+ write_lock(&vcpu->kvm->mmu_lock);
+ vcpu->arch.hw_mmu = get_s2_mmu_nested(vcpu);
+ write_unlock(&vcpu->kvm->mmu_lock);
+ }
+}
+
+void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu)
+{
+ if (vcpu->arch.hw_mmu != &vcpu->kvm->arch.mmu) {
+ atomic_dec(&vcpu->arch.hw_mmu->refcnt);
+ vcpu->arch.hw_mmu = NULL;
+ }
+}
+
+void kvm_arch_flush_shadow_all(struct kvm *kvm)
+{
+ int i;
+
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ WARN_ON(atomic_read(&mmu->refcnt));
+
+ if (!atomic_read(&mmu->refcnt))
+ kvm_free_stage2_pgd(mmu);
+ }
+ kfree(kvm->arch.nested_mmus);
+ kvm->arch.nested_mmus = NULL;
+ kvm->arch.nested_mmus_size = 0;
+ kvm_uninit_stage2_mmu(kvm);
+}
+
/*
* Our emulated CPU doesn't support all the possible features. For the
* sake of simplicity (and probably mental sanity), wipe out a number
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 5bb4de162cab..e106ea01598f 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -266,6 +266,12 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu)
preempt_enable();
}
+u32 kvm_get_pa_bits(struct kvm *kvm)
+{
+ /* Fixed limit until we can configure ID_AA64MMFR0.PARange */
+ return kvm_ipa_limit;
+}
+
u32 get_kvm_ipa_limit(void)
{
return kvm_ipa_limit;
--
2.39.2
* [PATCH v11 18/43] KVM: arm64: nv: Implement nested Stage-2 page table walk logic
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (16 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 17/43] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 19/43] KVM: arm64: nv: Handle shadow stage 2 page faults Marc Zyngier
` (28 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
From: Christoffer Dall <christoffer.dall@linaro.org>
Based on the pseudo-code in the ARM ARM, implement a stage 2 software
page table walker.
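The walker's return convention (0: successful translation, 1: the walk
results in a guest-visible Stage-2 fault described by result->esr, <0:
failure to access the tables) leads to caller code along these lines;
l2_ipa and inject_s2_fault() are purely illustrative:

	struct kvm_s2_trans trans;
	int ret;

	ret = kvm_walk_nested_s2(vcpu, l2_ipa, &trans);
	if (ret < 0)
		return ret;		/* couldn't read the guest's S2 tables */
	if (ret)
		return inject_s2_fault(vcpu, trans.esr);	/* hypothetical */

	/* trans.output now holds the L1 IPA backing l2_ipa */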
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
[maz: heavily reworked for future ARMv8.4-TTL support]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/esr.h | 1 +
arch/arm64/include/asm/kvm_arm.h | 2 +
arch/arm64/include/asm/kvm_nested.h | 13 ++
arch/arm64/kvm/nested.c | 267 ++++++++++++++++++++++++++++
4 files changed, 283 insertions(+)
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index ae35939f395b..7dd4a176d5eb 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -158,6 +158,7 @@
#define ESR_ELx_Xs_MASK (GENMASK_ULL(4, 0))
/* ISS field definitions for exceptions taken in to Hyp */
+#define ESR_ELx_FSC_ADDRSZ (0x00)
#define ESR_ELx_CV (UL(1) << 24)
#define ESR_ELx_COND_SHIFT (20)
#define ESR_ELx_COND_MASK (UL(0xF) << ESR_ELx_COND_SHIFT)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index b85f46a73e21..9c10c88d2fc2 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -286,6 +286,8 @@
#define VTTBR_VMID_SHIFT (UL(48))
#define VTTBR_VMID_MASK(size) (_AT(u64, (1 << size) - 1) << VTTBR_VMID_SHIFT)
+#define SCTLR_EE (UL(1) << 25)
+
/* Hyp System Trap Register */
#define HSTR_EL2_T(x) (1 << x)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index f421ad294e68..a2e719d1cd53 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -67,6 +67,19 @@ extern struct kvm_s2_mmu *lookup_s2_mmu(struct kvm_vcpu *vcpu);
extern void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu);
extern void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu);
+struct kvm_s2_trans {
+ phys_addr_t output;
+ unsigned long block_size;
+ bool writable;
+ bool readable;
+ int level;
+ u32 esr;
+ u64 upper_attr;
+};
+
+extern int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
+ struct kvm_s2_trans *result);
+
extern bool forward_smc_trap(struct kvm_vcpu *vcpu);
extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index c5752ab8c3fe..cd74d8ee93cb 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -75,6 +75,273 @@ int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
return ret;
}
+struct s2_walk_info {
+ int (*read_desc)(phys_addr_t pa, u64 *desc, void *data);
+ void *data;
+ u64 baddr;
+ unsigned int max_pa_bits;
+ unsigned int max_ipa_bits;
+ unsigned int pgshift;
+ unsigned int pgsize;
+ unsigned int ps;
+ unsigned int sl;
+ unsigned int t0sz;
+ bool be;
+};
+
+static unsigned int ps_to_output_size(unsigned int ps)
+{
+ switch (ps) {
+ case 0: return 32;
+ case 1: return 36;
+ case 2: return 40;
+ case 3: return 42;
+ case 4: return 44;
+ case 5:
+ default:
+ return 48;
+ }
+}
+
+static u32 compute_fsc(int level, u32 fsc)
+{
+ return fsc | (level & 0x3);
+}
+
+static int check_base_s2_limits(struct s2_walk_info *wi,
+ int level, int input_size, int stride)
+{
+ int start_size;
+
+ /* Check translation limits */
+ switch (wi->pgsize) {
+ case SZ_64K:
+ if (level == 0 || (level == 1 && wi->max_ipa_bits <= 42))
+ return -EFAULT;
+ break;
+ case SZ_16K:
+ if (level == 0 || (level == 1 && wi->max_ipa_bits <= 40))
+ return -EFAULT;
+ break;
+ case SZ_4K:
+ if (level < 0 || (level == 0 && wi->max_ipa_bits <= 42))
+ return -EFAULT;
+ break;
+ }
+
+ /* Check input size limits */
+ if (input_size > wi->max_ipa_bits)
+ return -EFAULT;
+
+ /* Check number of entries in starting level table */
+ start_size = input_size - ((3 - level) * stride + wi->pgshift);
+ if (start_size < 1 || start_size > stride + 4)
+ return -EFAULT;
+
+ return 0;
+}
+
+/* Check if output is within boundaries */
+static int check_output_size(struct s2_walk_info *wi, phys_addr_t output)
+{
+ unsigned int output_size = ps_to_output_size(wi->ps);
+
+ if (output_size > wi->max_pa_bits)
+ output_size = wi->max_pa_bits;
+
+ if (output_size != 48 && (output & GENMASK_ULL(47, output_size)))
+ return -1;
+
+ return 0;
+}
+
+/*
+ * This is essentially a C-version of the pseudo code from the ARM ARM
+ * AArch64.TranslationTableWalk function. I strongly recommend looking at
+ * that pseudocode when trying to understand this.
+ *
+ * Must be called with the kvm->srcu read lock held
+ */
+static int walk_nested_s2_pgd(phys_addr_t ipa,
+ struct s2_walk_info *wi, struct kvm_s2_trans *out)
+{
+ int first_block_level, level, stride, input_size, base_lower_bound;
+ phys_addr_t base_addr;
+ unsigned int addr_top, addr_bottom;
+ u64 desc; /* page table entry */
+ int ret;
+ phys_addr_t paddr;
+
+ switch (wi->pgsize) {
+ case SZ_64K:
+ case SZ_16K:
+ level = 3 - wi->sl;
+ first_block_level = 2;
+ break;
+ case SZ_4K:
+ level = 2 - wi->sl;
+ first_block_level = 1;
+ break;
+ default:
+ /* GCC is braindead */
+ unreachable();
+ }
+
+ stride = wi->pgshift - 3;
+ input_size = 64 - wi->t0sz;
+ if (input_size > 48 || input_size < 25)
+ return -EFAULT;
+
+ ret = check_base_s2_limits(wi, level, input_size, stride);
+ if (WARN_ON(ret))
+ return ret;
+
+ base_lower_bound = 3 + input_size - ((3 - level) * stride +
+ wi->pgshift);
+ base_addr = wi->baddr & GENMASK_ULL(47, base_lower_bound);
+
+ if (check_output_size(wi, base_addr)) {
+ out->esr = compute_fsc(level, ESR_ELx_FSC_ADDRSZ);
+ return 1;
+ }
+
+ addr_top = input_size - 1;
+
+ while (1) {
+ phys_addr_t index;
+
+ addr_bottom = (3 - level) * stride + wi->pgshift;
+ index = (ipa & GENMASK_ULL(addr_top, addr_bottom))
+ >> (addr_bottom - 3);
+
+ paddr = base_addr | index;
+ ret = wi->read_desc(paddr, &desc, wi->data);
+ if (ret < 0)
+ return ret;
+
+ /*
+		 * Handle reversed descriptors if the endianness differs between the
+ * host and the guest hypervisor.
+ */
+ if (wi->be)
+ desc = be64_to_cpu(desc);
+ else
+ desc = le64_to_cpu(desc);
+
+ /* Check for valid descriptor at this point */
+ if (!(desc & 1) || ((desc & 3) == 1 && level == 3)) {
+ out->esr = compute_fsc(level, ESR_ELx_FSC_FAULT);
+ out->upper_attr = desc;
+ return 1;
+ }
+
+ /* We're at the final level or block translation level */
+ if ((desc & 3) == 1 || level == 3)
+ break;
+
+ if (check_output_size(wi, desc)) {
+ out->esr = compute_fsc(level, ESR_ELx_FSC_ADDRSZ);
+ out->upper_attr = desc;
+ return 1;
+ }
+
+ base_addr = desc & GENMASK_ULL(47, wi->pgshift);
+
+ level += 1;
+ addr_top = addr_bottom - 1;
+ }
+
+ if (level < first_block_level) {
+ out->esr = compute_fsc(level, ESR_ELx_FSC_FAULT);
+ out->upper_attr = desc;
+ return 1;
+ }
+
+ /*
+ * We don't use the contiguous bit in the stage-2 ptes, so skip check
+ * for misprogramming of the contiguous bit.
+ */
+
+ if (check_output_size(wi, desc)) {
+ out->esr = compute_fsc(level, ESR_ELx_FSC_ADDRSZ);
+ out->upper_attr = desc;
+ return 1;
+ }
+
+ if (!(desc & BIT(10))) {
+ out->esr = compute_fsc(level, ESR_ELx_FSC_ACCESS);
+ out->upper_attr = desc;
+ return 1;
+ }
+
+ /* Calculate and return the result */
+ paddr = (desc & GENMASK_ULL(47, addr_bottom)) |
+ (ipa & GENMASK_ULL(addr_bottom - 1, 0));
+ out->output = paddr;
+ out->block_size = 1UL << ((3 - level) * stride + wi->pgshift);
+ out->readable = desc & (0b01 << 6);
+ out->writable = desc & (0b10 << 6);
+ out->level = level;
+ out->upper_attr = desc & GENMASK_ULL(63, 52);
+ return 0;
+}
+
+static int read_guest_s2_desc(phys_addr_t pa, u64 *desc, void *data)
+{
+ struct kvm_vcpu *vcpu = data;
+
+ return kvm_read_guest(vcpu->kvm, pa, desc, sizeof(*desc));
+}
+
+static void vtcr_to_walk_info(u64 vtcr, struct s2_walk_info *wi)
+{
+ wi->t0sz = vtcr & TCR_EL2_T0SZ_MASK;
+
+ switch (vtcr & VTCR_EL2_TG0_MASK) {
+ case VTCR_EL2_TG0_4K:
+ wi->pgshift = 12; break;
+ case VTCR_EL2_TG0_16K:
+ wi->pgshift = 14; break;
+ case VTCR_EL2_TG0_64K:
+ default:
+ wi->pgshift = 16; break;
+ }
+
+ wi->pgsize = BIT(wi->pgshift);
+ wi->ps = FIELD_GET(VTCR_EL2_PS_MASK, vtcr);
+ wi->sl = FIELD_GET(VTCR_EL2_SL0_MASK, vtcr);
+ wi->max_ipa_bits = VTCR_EL2_IPA(vtcr);
+ /* Global limit for now, should eventually be per-VM */
+ wi->max_pa_bits = get_kvm_ipa_limit();
+}
+
+int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
+ struct kvm_s2_trans *result)
+{
+ u64 vtcr = vcpu_read_sys_reg(vcpu, VTCR_EL2);
+ struct s2_walk_info wi;
+ int ret;
+
+ result->esr = 0;
+
+ if (!vcpu_has_nv(vcpu))
+ return 0;
+
+ wi.read_desc = read_guest_s2_desc;
+ wi.data = vcpu;
+ wi.baddr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
+
+ vtcr_to_walk_info(vtcr, &wi);
+
+	wi.be = vcpu_read_sys_reg(vcpu, SCTLR_EL2) & SCTLR_ELx_EE;
+
+ ret = walk_nested_s2_pgd(gipa, &wi, result);
+ if (ret)
+ result->esr |= (kvm_vcpu_get_esr(vcpu) & ~ESR_ELx_FSC);
+
+ return ret;
+}
+
/* Must be called with kvm->mmu_lock held */
struct kvm_s2_mmu *lookup_s2_mmu(struct kvm_vcpu *vcpu)
{
--
2.39.2
* [PATCH v11 19/43] KVM: arm64: nv: Handle shadow stage 2 page faults
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (17 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 18/43] KVM: arm64: nv: Implement nested Stage-2 page table walk logic Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2024-01-17 14:53 ` Joey Gouly
2023-11-20 13:10 ` [PATCH v11 20/43] KVM: arm64: nv: Restrict S2 RD/WR permissions to match the guest's Marc Zyngier
` (27 subsequent siblings)
46 siblings, 1 reply; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
If we are faulting on a shadow stage 2 translation, we first walk the
guest hypervisor's stage 2 page table to see if it has a mapping. If
not, we inject a stage 2 page fault to the virtual EL2. Otherwise, we
create a mapping in the shadow stage 2 page table.
Note that we have to deal with two IPAs when we get a shadow stage 2
page fault: one is the address we faulted on, which is in the L2 guest
physical address space; the other comes from the guest stage-2 page
table walk, and is in the L1 guest physical address space. To
differentiate them, we rename variables so that fault_ipa is used for
the former and ipa for the latter.
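As a condensed illustration of that flow (a sketch only, not the patch
itself; map_l1_ipa() is a made-up stand-in for the memslot lookup and
user_mem_abort() path):

	static int resolve_shadow_s2_fault(struct kvm_vcpu *vcpu,
					   phys_addr_t fault_ipa)
	{
		struct kvm_s2_trans trans;
		int ret;

		/* fault_ipa is in the L2 guest physical address space */
		ret = kvm_walk_nested_s2(vcpu, fault_ipa, &trans);
		if (ret)
			/* The guest hypervisor's S2 has no mapping: inject */
			return kvm_inject_s2_fault(vcpu,
						   kvm_s2_trans_esr(&trans));

		/* trans.output is the same address in the L1 IPA space */
		return map_l1_ipa(vcpu, fault_ipa, kvm_s2_trans_output(&trans));
	}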
Co-developed-by: Christoffer Dall <christoffer.dall@linaro.org>
Co-developed-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
[maz: rewrote this multiple times...]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_emulate.h | 7 +++
arch/arm64/include/asm/kvm_nested.h | 19 ++++++
arch/arm64/kvm/mmu.c | 89 ++++++++++++++++++++++++----
arch/arm64/kvm/nested.c | 48 +++++++++++++++
4 files changed, 153 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 8ccf8a1d37ff..5173f8cf2904 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -649,4 +649,11 @@ static __always_inline void kvm_reset_cptr_el2(struct kvm_vcpu *vcpu)
kvm_write_cptr_el2(val);
}
+
+static inline bool kvm_is_shadow_s2_fault(struct kvm_vcpu *vcpu)
+{
+ return (vcpu->arch.hw_mmu != &vcpu->kvm->arch.mmu &&
+ vcpu->arch.hw_mmu->nested_stage2_enabled);
+}
+
#endif /* __ARM64_KVM_EMULATE_H__ */
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index a2e719d1cd53..a9aec29bf7a1 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -77,8 +77,27 @@ struct kvm_s2_trans {
u64 upper_attr;
};
+static inline phys_addr_t kvm_s2_trans_output(struct kvm_s2_trans *trans)
+{
+ return trans->output;
+}
+
+static inline unsigned long kvm_s2_trans_size(struct kvm_s2_trans *trans)
+{
+ return trans->block_size;
+}
+
+static inline u32 kvm_s2_trans_esr(struct kvm_s2_trans *trans)
+{
+ return trans->esr;
+}
+
extern int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
struct kvm_s2_trans *result);
+extern int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu,
+ struct kvm_s2_trans *trans);
+extern int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2);
+int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe);
extern bool forward_smc_trap(struct kvm_vcpu *vcpu);
extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 588ce46c0ad0..41de7616b735 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1412,14 +1412,16 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
}
static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
- struct kvm_memory_slot *memslot, unsigned long hva,
- unsigned long fault_status)
+ struct kvm_s2_trans *nested,
+ struct kvm_memory_slot *memslot,
+ unsigned long hva, unsigned long fault_status)
{
int ret = 0;
bool write_fault, writable, force_pte = false;
bool exec_fault, mte_allowed;
bool device = false;
unsigned long mmu_seq;
+ phys_addr_t ipa = fault_ipa;
struct kvm *kvm = vcpu->kvm;
struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
struct vm_area_struct *vma;
@@ -1504,10 +1506,38 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
}
vma_pagesize = 1UL << vma_shift;
+
+ if (nested) {
+ unsigned long max_map_size;
+
+		max_map_size = force_pte ? PAGE_SIZE : PUD_SIZE;
+
+ ipa = kvm_s2_trans_output(nested);
+
+ /*
+ * If we're about to create a shadow stage 2 entry, then we
+ * can only create a block mapping if the guest stage 2 page
+ * table uses at least as big a mapping.
+ */
+ max_map_size = min(kvm_s2_trans_size(nested), max_map_size);
+
+ /*
+ * Be careful that if the mapping size falls between
+ * two host sizes, take the smallest of the two.
+ */
+ if (max_map_size >= PMD_SIZE && max_map_size < PUD_SIZE)
+ max_map_size = PMD_SIZE;
+ else if (max_map_size >= PAGE_SIZE && max_map_size < PMD_SIZE)
+ max_map_size = PAGE_SIZE;
+
+ force_pte = (max_map_size == PAGE_SIZE);
+ vma_pagesize = min(vma_pagesize, (long)max_map_size);
+ }
+
if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
fault_ipa &= ~(vma_pagesize - 1);
- gfn = fault_ipa >> PAGE_SHIFT;
+ gfn = ipa >> PAGE_SHIFT;
mte_allowed = kvm_vma_mte_allowed(vma);
/* Don't use the VMA after the unlock -- it may have vanished */
@@ -1657,8 +1687,10 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
*/
int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
{
+ struct kvm_s2_trans nested_trans, *nested = NULL;
unsigned long fault_status;
- phys_addr_t fault_ipa;
+ phys_addr_t fault_ipa; /* The address we faulted on */
+ phys_addr_t ipa; /* Always the IPA in the L1 guest phys space */
struct kvm_memory_slot *memslot;
unsigned long hva;
bool is_iabt, write_fault, writable;
@@ -1667,7 +1699,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
fault_status = kvm_vcpu_trap_get_fault_type(vcpu);
- fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
+ ipa = fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
if (fault_status == ESR_ELx_FSC_FAULT) {
@@ -1708,6 +1740,12 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
if (fault_status != ESR_ELx_FSC_FAULT &&
fault_status != ESR_ELx_FSC_PERM &&
fault_status != ESR_ELx_FSC_ACCESS) {
+ /*
+ * We must never see an address size fault on shadow stage 2
+ * page table walk, because we would have injected an addr
+		 * size fault when we walked the nested s2 page table and not
+		 * created the shadow entry.
+ */
kvm_err("Unsupported FSC: EC=%#x xFSC=%#lx ESR_EL2=%#lx\n",
kvm_vcpu_trap_get_class(vcpu),
(unsigned long)kvm_vcpu_trap_get_fault(vcpu),
@@ -1717,7 +1755,37 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
idx = srcu_read_lock(&vcpu->kvm->srcu);
- gfn = fault_ipa >> PAGE_SHIFT;
+ /*
+ * We may have faulted on a shadow stage 2 page table if we are
+ * running a nested guest. In this case, we have to resolve the L2
+ * IPA to the L1 IPA first, before knowing what kind of memory should
+ * back the L1 IPA.
+ *
+ * If the shadow stage 2 page table walk faults, then we simply inject
+ * this to the guest and carry on.
+ */
+ if (kvm_is_shadow_s2_fault(vcpu)) {
+ u32 esr;
+
+ ret = kvm_walk_nested_s2(vcpu, fault_ipa, &nested_trans);
+ if (ret) {
+ esr = kvm_s2_trans_esr(&nested_trans);
+ kvm_inject_s2_fault(vcpu, esr);
+ goto out_unlock;
+ }
+
+ ret = kvm_s2_handle_perm_fault(vcpu, &nested_trans);
+ if (ret) {
+ esr = kvm_s2_trans_esr(&nested_trans);
+ kvm_inject_s2_fault(vcpu, esr);
+ goto out_unlock;
+ }
+
+ ipa = kvm_s2_trans_output(&nested_trans);
+ nested = &nested_trans;
+ }
+
+ gfn = ipa >> PAGE_SHIFT;
memslot = gfn_to_memslot(vcpu->kvm, gfn);
hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
write_fault = kvm_is_write_fault(vcpu);
@@ -1761,13 +1829,13 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
* faulting VA. This is always 12 bits, irrespective
* of the page size.
*/
- fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
- ret = io_mem_abort(vcpu, fault_ipa);
+ ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
+ ret = io_mem_abort(vcpu, ipa);
goto out_unlock;
}
/* Userspace should not be able to register out-of-bounds IPAs */
- VM_BUG_ON(fault_ipa >= kvm_phys_size(vcpu->arch.hw_mmu));
+ VM_BUG_ON(ipa >= kvm_phys_size(vcpu->arch.hw_mmu));
if (fault_status == ESR_ELx_FSC_ACCESS) {
handle_access_fault(vcpu, fault_ipa);
@@ -1775,7 +1843,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
goto out_unlock;
}
- ret = user_mem_abort(vcpu, fault_ipa, memslot, hva, fault_status);
+ ret = user_mem_abort(vcpu, fault_ipa, nested,
+ memslot, hva, fault_status);
if (ret == 0)
ret = 1;
out:
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index cd74d8ee93cb..f4014ae0f901 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -108,6 +108,15 @@ static u32 compute_fsc(int level, u32 fsc)
return fsc | (level & 0x3);
}
+static int esr_s2_fault(struct kvm_vcpu *vcpu, int level, u32 fsc)
+{
+ u32 esr;
+
+ esr = kvm_vcpu_get_esr(vcpu) & ~ESR_ELx_FSC;
+ esr |= compute_fsc(level, fsc);
+ return esr;
+}
+
static int check_base_s2_limits(struct s2_walk_info *wi,
int level, int input_size, int stride)
{
@@ -472,6 +481,45 @@ void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu)
}
}
+/*
+ * Returns non-zero if permission fault is handled by injecting it to the next
+ * level hypervisor.
+ */
+int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu, struct kvm_s2_trans *trans)
+{
+ unsigned long fault_status = kvm_vcpu_trap_get_fault_type(vcpu);
+ bool forward_fault = false;
+
+ trans->esr = 0;
+
+ if (fault_status != ESR_ELx_FSC_PERM)
+ return 0;
+
+ if (kvm_vcpu_trap_is_iabt(vcpu)) {
+ forward_fault = (trans->upper_attr & BIT(54));
+ } else {
+ bool write_fault = kvm_is_write_fault(vcpu);
+
+ forward_fault = ((write_fault && !trans->writable) ||
+ (!write_fault && !trans->readable));
+ }
+
+ if (forward_fault) {
+ trans->esr = esr_s2_fault(vcpu, trans->level, ESR_ELx_FSC_PERM);
+ return 1;
+ }
+
+ return 0;
+}
+
+int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2)
+{
+ vcpu_write_sys_reg(vcpu, vcpu->arch.fault.far_el2, FAR_EL2);
+ vcpu_write_sys_reg(vcpu, vcpu->arch.fault.hpfar_el2, HPFAR_EL2);
+
+ return kvm_inject_nested_sync(vcpu, esr_el2);
+}
+
void kvm_arch_flush_shadow_all(struct kvm *kvm)
{
int i;
--
2.39.2
* [PATCH v11 20/43] KVM: arm64: nv: Restrict S2 RD/WR permissions to match the guest's
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (18 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 19/43] KVM: arm64: nv: Handle shadow stage 2 page faults Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 21/43] KVM: arm64: nv: Unmap/flush shadow stage 2 page tables Marc Zyngier
` (26 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
When mapping a page in a shadow stage-2, special care must be
taken not to be more permissive than the guest's own stage-2
(e.g. not to map a page as writable or readable when the guest
hasn't granted that permission).
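In other words, the shadow entry ends up with the intersection of the
permissions the host would grant and the permissions the guest's own
stage-2 grants. A minimal sketch of that rule, using the helpers this
patch introduces (illustrative only, not the actual code path):

	static enum kvm_pgtable_prot shadow_prot(enum kvm_pgtable_prot prot,
						 struct kvm_s2_trans *nested)
	{
		if (!kvm_s2_trans_writable(nested))
			prot &= ~KVM_PGTABLE_PROT_W;
		if (!kvm_s2_trans_readable(nested))
			prot &= ~KVM_PGTABLE_PROT_R;
		if (!kvm_s2_trans_executable(nested))
			prot &= ~KVM_PGTABLE_PROT_X;

		return prot;
	}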
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_nested.h | 15 +++++++++++++++
arch/arm64/kvm/mmu.c | 14 +++++++++++++-
arch/arm64/kvm/nested.c | 2 +-
3 files changed, 29 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index a9aec29bf7a1..cbcddc2e8379 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -92,6 +92,21 @@ static inline u32 kvm_s2_trans_esr(struct kvm_s2_trans *trans)
return trans->esr;
}
+static inline bool kvm_s2_trans_readable(struct kvm_s2_trans *trans)
+{
+ return trans->readable;
+}
+
+static inline bool kvm_s2_trans_writable(struct kvm_s2_trans *trans)
+{
+ return trans->writable;
+}
+
+static inline bool kvm_s2_trans_executable(struct kvm_s2_trans *trans)
+{
+ return !(trans->upper_attr & BIT(54));
+}
+
extern int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
struct kvm_s2_trans *result);
extern int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu,
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 41de7616b735..b885a02200a1 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1586,6 +1586,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (exec_fault && device)
return -ENOEXEC;
+ /*
+ * Potentially reduce shadow S2 permissions to match the guest's own
+ * S2. For exec faults, we'd only reach this point if the guest
+ * actually allowed it (see kvm_s2_handle_perm_fault).
+ */
+ if (nested) {
+ writable &= kvm_s2_trans_writable(nested);
+ if (!kvm_s2_trans_readable(nested))
+ prot &= ~KVM_PGTABLE_PROT_R;
+ }
+
read_lock(&kvm->mmu_lock);
pgt = vcpu->arch.hw_mmu->pgt;
if (mmu_invalidate_retry(kvm, mmu_seq))
@@ -1628,7 +1639,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (device)
prot |= KVM_PGTABLE_PROT_DEVICE;
- else if (cpus_have_final_cap(ARM64_HAS_CACHE_DIC))
+ else if (cpus_have_final_cap(ARM64_HAS_CACHE_DIC) &&
+ (!nested || kvm_s2_trans_executable(nested)))
prot |= KVM_PGTABLE_PROT_X;
/*
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index f4014ae0f901..e4203d106b71 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -496,7 +496,7 @@ int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu, struct kvm_s2_trans *trans)
return 0;
if (kvm_vcpu_trap_is_iabt(vcpu)) {
- forward_fault = (trans->upper_attr & BIT(54));
+ forward_fault = !kvm_s2_trans_executable(trans);
} else {
bool write_fault = kvm_is_write_fault(vcpu);
--
2.39.2
* [PATCH v11 21/43] KVM: arm64: nv: Unmap/flush shadow stage 2 page tables
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (19 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 20/43] KVM: arm64: nv: Restrict S2 RD/WR permissions to match the guest's Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 22/43] KVM: arm64: nv: Set a handler for the system instruction traps Marc Zyngier
` (25 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
From: Christoffer Dall <christoffer.dall@linaro.org>
Unmap/flush shadow stage 2 page tables for the nested VMs as well as the
stage 2 page table for the guest hypervisor.
Note: A bunch of the code in mmu.c relating to MMU notifiers is
currently dealt with in an extremely abrupt way, for example by clearing
out an entire shadow stage-2 table. This will be handled in a more
efficient way using the reverse mapping feature in a later version of
the patch series.
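The pattern the mmu.c hunks below adopt is to mirror each operation on
the canonical stage 2 across all valid shadow stage 2s, with the
mmu_lock held across both sweeps. Schematically (a sketch, not a
verbatim excerpt):

	write_lock(&kvm->mmu_lock);
	kvm_stage2_wp_range(&kvm->arch.mmu, start, end);  /* canonical S2 */
	kvm_nested_s2_wp(kvm);                            /* shadow S2s */
	write_unlock(&kvm->mmu_lock);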
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_mmu.h | 3 +++
arch/arm64/include/asm/kvm_nested.h | 3 +++
arch/arm64/kvm/mmu.c | 30 ++++++++++++++++++----
arch/arm64/kvm/nested.c | 39 +++++++++++++++++++++++++++++
4 files changed, 70 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 5c6fb2fb8287..6017645312c5 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -170,6 +170,8 @@ int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
void **haddr);
int create_hyp_stack(phys_addr_t phys_addr, unsigned long *haddr);
+void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu,
+ phys_addr_t addr, phys_addr_t end);
void __init free_hyp_pgds(void);
void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size);
@@ -179,6 +181,7 @@ void kvm_uninit_stage2_mmu(struct kvm *kvm);
void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu);
int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
phys_addr_t pa, unsigned long size, bool writable);
+void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end);
int kvm_handle_guest_abort(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index cbcddc2e8379..0b3f764f44e8 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -112,6 +112,9 @@ extern int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
extern int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu,
struct kvm_s2_trans *trans);
extern int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2);
+extern void kvm_nested_s2_wp(struct kvm *kvm);
+extern void kvm_nested_s2_unmap(struct kvm *kvm);
+extern void kvm_nested_s2_flush(struct kvm *kvm);
int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe);
extern bool forward_smc_trap(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index b885a02200a1..35c196a69e3b 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -333,13 +333,19 @@ void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
__unmap_stage2_range(mmu, start, size, true);
}
+void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu,
+ phys_addr_t addr, phys_addr_t end)
+{
+ stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_flush);
+}
+
static void stage2_flush_memslot(struct kvm *kvm,
struct kvm_memory_slot *memslot)
{
phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
phys_addr_t end = addr + PAGE_SIZE * memslot->npages;
- stage2_apply_range_resched(&kvm->arch.mmu, addr, end, kvm_pgtable_stage2_flush);
+ kvm_stage2_flush_range(&kvm->arch.mmu, addr, end);
}
/**
@@ -362,6 +368,8 @@ static void stage2_flush_vm(struct kvm *kvm)
kvm_for_each_memslot(memslot, bkt, slots)
stage2_flush_memslot(kvm, memslot);
+ kvm_nested_s2_flush(kvm);
+
write_unlock(&kvm->mmu_lock);
srcu_read_unlock(&kvm->srcu, idx);
}
@@ -1040,6 +1048,8 @@ void stage2_unmap_vm(struct kvm *kvm)
kvm_for_each_memslot(memslot, bkt, slots)
stage2_unmap_memslot(kvm, memslot);
+ kvm_nested_s2_unmap(kvm);
+
write_unlock(&kvm->mmu_lock);
mmap_read_unlock(current->mm);
srcu_read_unlock(&kvm->srcu, idx);
@@ -1139,12 +1149,12 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
}
/**
- * stage2_wp_range() - write protect stage2 memory region range
+ * kvm_stage2_wp_range() - write protect stage2 memory region range
* @mmu: The KVM stage-2 MMU pointer
* @addr: Start address of range
* @end: End address of range
*/
-static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
+void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
{
stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_wrprotect);
}
@@ -1175,7 +1185,8 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
write_lock(&kvm->mmu_lock);
- stage2_wp_range(&kvm->arch.mmu, start, end);
+ kvm_stage2_wp_range(&kvm->arch.mmu, start, end);
+ kvm_nested_s2_wp(kvm);
write_unlock(&kvm->mmu_lock);
kvm_flush_remote_tlbs_memslot(kvm, memslot);
}
@@ -1229,7 +1240,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
lockdep_assert_held_write(&kvm->mmu_lock);
- stage2_wp_range(&kvm->arch.mmu, start, end);
+ kvm_stage2_wp_range(&kvm->arch.mmu, start, end);
/*
* Eager-splitting is done when manual-protect is set. We
@@ -1241,6 +1252,8 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
*/
if (kvm_dirty_log_manual_protect_and_init_set(kvm))
kvm_mmu_split_huge_pages(kvm, start, end);
+
+ kvm_nested_s2_wp(kvm);
}
static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
@@ -1878,6 +1891,7 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
(range->end - range->start) << PAGE_SHIFT,
range->may_block);
+ kvm_nested_s2_unmap(kvm);
return false;
}
@@ -1912,6 +1926,7 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
PAGE_SIZE, __pfn_to_phys(pfn),
KVM_PGTABLE_PROT_R, NULL, 0);
+ kvm_nested_s2_unmap(kvm);
return false;
}
@@ -1925,6 +1940,10 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
range->start << PAGE_SHIFT,
size, true);
+ /*
+ * TODO: Handle nested_mmu structures here using the reverse mapping in
+ * a later version of patch series.
+ */
}
bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
@@ -2182,6 +2201,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
write_lock(&kvm->mmu_lock);
kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, size);
+ kvm_nested_s2_unmap(kvm);
write_unlock(&kvm->mmu_lock);
}
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index e4203d106b71..58e8a3dc5fef 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -520,6 +520,45 @@ int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2)
return kvm_inject_nested_sync(vcpu, esr_el2);
}
+/* expects kvm->mmu_lock to be held */
+void kvm_nested_s2_wp(struct kvm *kvm)
+{
+ int i;
+
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ if (kvm_s2_mmu_valid(mmu))
+ kvm_stage2_wp_range(mmu, 0, kvm_phys_size(mmu));
+ }
+}
+
+/* expects kvm->mmu_lock to be held */
+void kvm_nested_s2_unmap(struct kvm *kvm)
+{
+ int i;
+
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ if (kvm_s2_mmu_valid(mmu))
+ kvm_unmap_stage2_range(mmu, 0, kvm_phys_size(mmu));
+ }
+}
+
+/* expects kvm->mmu_lock to be held */
+void kvm_nested_s2_flush(struct kvm *kvm)
+{
+ int i;
+
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ if (kvm_s2_mmu_valid(mmu))
+ kvm_stage2_flush_range(mmu, 0, kvm_phys_size(mmu));
+ }
+}
+
void kvm_arch_flush_shadow_all(struct kvm *kvm)
{
int i;
--
2.39.2
* [PATCH v11 22/43] KVM: arm64: nv: Set a handler for the system instruction traps
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (20 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 21/43] KVM: arm64: nv: Unmap/flush shadow stage 2 page tables Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 23/43] KVM: arm64: nv: Trap and emulate AT instructions from virtual EL2 Marc Zyngier
` (24 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
When the HCR_EL2.NV bit is set, execution of the EL2 translation regime
address translation instructions and TLB maintenance instructions is
trapped to EL2. In addition, execution of the EL1 translation regime
address translation instructions and TLB maintenance instructions that
are only accessible from EL2 and above is trapped to EL2. In these
cases, ESR_EL2.EC will be set to 0x18.
Rework the system instruction emulation framework to potentially handle
all system instruction traps other than MSR/MRS instructions. These
system instructions are the AT and TLBI instructions, controlled by the
HCR_EL2 NV, AT, and TTLB bits.
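For reference, the Op0 field of the trapped encoding is what tells the
two classes apart: Op0 == 2 or 3 denotes an MRS/MSR system register
access, while Op0 == 0 or 1 covers hints, PSTATE updates and system
instructions such as AT and TLBI. A sketch of the resulting dispatch
(mirroring the kvm_handle_sys_reg() change below):

	struct sys_reg_params params = esr_sys64_to_params(esr);

	if (params.Op0 == 2 || params.Op0 == 3)
		emulate_sys_reg(vcpu, &params);		/* MRS/MSR */
	else
		emulate_sys_instr(vcpu, &params);	/* AT, TLBI, ... */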
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
[maz: squashed two patches together, redispatched various bits around]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 59 +++++++++++++++++++++++++++++----------
1 file changed, 44 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f42f3ed3724c..49d00f0cdda0 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2246,16 +2246,6 @@ static u64 reset_hcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
* guest...
*/
static const struct sys_reg_desc sys_reg_descs[] = {
- { SYS_DESC(SYS_DC_ISW), access_dcsw },
- { SYS_DESC(SYS_DC_IGSW), access_dcgsw },
- { SYS_DESC(SYS_DC_IGDSW), access_dcgsw },
- { SYS_DESC(SYS_DC_CSW), access_dcsw },
- { SYS_DESC(SYS_DC_CGSW), access_dcgsw },
- { SYS_DESC(SYS_DC_CGDSW), access_dcgsw },
- { SYS_DESC(SYS_DC_CISW), access_dcsw },
- { SYS_DESC(SYS_DC_CIGSW), access_dcgsw },
- { SYS_DESC(SYS_DC_CIGDSW), access_dcgsw },
-
DBG_BCR_BVR_WCR_WVR_EL1(0),
DBG_BCR_BVR_WCR_WVR_EL1(1),
{ SYS_DESC(SYS_MDCCINT_EL1), trap_debug_regs, reset_val, MDCCINT_EL1, 0 },
@@ -2786,6 +2776,18 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG(SP_EL2, NULL, reset_unknown, 0),
};
+static struct sys_reg_desc sys_insn_descs[] = {
+ { SYS_DESC(SYS_DC_ISW), access_dcsw },
+ { SYS_DESC(SYS_DC_IGSW), access_dcgsw },
+ { SYS_DESC(SYS_DC_IGDSW), access_dcgsw },
+ { SYS_DESC(SYS_DC_CSW), access_dcsw },
+ { SYS_DESC(SYS_DC_CGSW), access_dcgsw },
+ { SYS_DESC(SYS_DC_CGDSW), access_dcgsw },
+ { SYS_DESC(SYS_DC_CISW), access_dcsw },
+ { SYS_DESC(SYS_DC_CIGSW), access_dcgsw },
+ { SYS_DESC(SYS_DC_CIGDSW), access_dcgsw },
+};
+
static const struct sys_reg_desc *first_idreg;
static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
@@ -3479,6 +3481,24 @@ static bool emulate_sys_reg(struct kvm_vcpu *vcpu,
return false;
}
+static int emulate_sys_instr(struct kvm_vcpu *vcpu, struct sys_reg_params *p)
+{
+ const struct sys_reg_desc *r;
+
+ /* Search from the system instruction table. */
+ r = find_reg(p, sys_insn_descs, ARRAY_SIZE(sys_insn_descs));
+
+ if (likely(r)) {
+ perform_access(vcpu, p, r);
+ } else {
+ kvm_err("Unsupported guest sys instruction at: %lx\n",
+ *vcpu_pc(vcpu));
+ print_sys_reg_instr(p);
+ kvm_inject_undefined(vcpu);
+ }
+ return 1;
+}
+
static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
{
const struct sys_reg_desc *idreg = first_idreg;
@@ -3526,7 +3546,8 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
}
/**
- * kvm_handle_sys_reg -- handles a mrs/msr trap on a guest sys_reg access
+ * kvm_handle_sys_reg -- handles a system instruction or mrs/msr instruction
+ * trap on a guest execution
* @vcpu: The VCPU pointer
*/
int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
@@ -3543,12 +3564,19 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
params = esr_sys64_to_params(esr);
params.regval = vcpu_get_reg(vcpu, Rt);
-	if (!emulate_sys_reg(vcpu, &params))
+ /* System register? */
+ if (params.Op0 == 2 || params.Op0 == 3) {
+		if (!emulate_sys_reg(vcpu, &params))
+ return 1;
+
+ if (!params.is_write)
+ vcpu_set_reg(vcpu, Rt, params.regval);
+
return 1;
+ }
- if (!params.is_write)
- vcpu_set_reg(vcpu, Rt, params.regval);
- return 1;
+ /* Hints, PSTATE (Op0 == 0) and System instructions (Op0 == 1) */
+	return emulate_sys_instr(vcpu, &params);
}
/******************************************************************************
@@ -4002,6 +4030,7 @@ int __init kvm_sys_reg_table_init(void)
valid &= check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs), true);
valid &= check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs), true);
valid &= check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs), false);
+ valid &= check_sysreg_table(sys_insn_descs, ARRAY_SIZE(sys_insn_descs), false);
if (!valid)
return -EINVAL;
--
2.39.2
* [PATCH v11 23/43] KVM: arm64: nv: Trap and emulate AT instructions from virtual EL2
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (21 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 22/43] KVM: arm64: nv: Set a handler for the system instruction traps Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 24/43] KVM: arm64: nv: Trap and emulate TLBI " Marc Zyngier
` (23 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
When supporting nested virtualization, AT instructions executed by a
guest hypervisor must be trapped and emulated by the host hypervisor,
because untrapped AT instructions operating on S1E1 would use the wrong
translation regime (the one used to emulate virtual EL2 in EL1 instead
of virtual EL1), and AT instructions operating on S12 do not work from
EL1.
This patch does several things.
1. List and define all AT system instructions to emulate and document
the emulation design.
2. Implement AT instruction handling logic in EL2. This will be used to
emulate AT instructions executed in the virtual EL2.
AT instruction emulation works by loading the proper processor
context, which depends on the trapped instruction and the virtual
HCR_EL2, to the EL1 virtual memory control registers and executing AT
instructions. Note that ctxt->hw_sys_regs is expected to have the
proper processor context before calling the handling
function (__kvm_at_insn) implemented in this patch.
3. Emulate AT S1E[01] instructions by issuing the same instructions in
EL2. We set the physical EL1 registers, NV and NV1 bits as described in
the AT instruction emulation overview.
4. Emulate AT S12E[01] instructions in two steps: first, do the stage-1
translation by reusing the existing AT emulation functions. Second, do
the stage-2 translation by walking the guest hypervisor's stage-2 page
table in software. Record the translation result to PAR_EL1.
5. Emulate AT S1E2 instructions by issuing the corresponding S1E1
instructions in EL2. We set the physical EL1 registers and the HCR_EL2
register as described in the AT instruction emulation overview.
6. Forward system instruction traps to the virtual EL2 if the corresponding
virtual AT bit is set in the virtual HCR_EL2.
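Condensed, the S12E[01] emulation is a stage-1 AT replay followed by a
software stage-2 walk. A sketch of the read path (handle_s12() in the
diff below has the real thing, including PAR_EL1 reporting on failure):

	static void sketch_s12e1r(struct kvm_vcpu *vcpu, u64 va)
	{
		struct kvm_s2_trans out = {};
		u64 par, ipa;

		__kvm_at_s1e01(vcpu, OP_AT_S1E1R, va);
		par = vcpu_read_sys_reg(vcpu, PAR_EL1);
		if (par & SYS_PAR_EL1_F)
			return;	/* stage 1 aborted, PAR_EL1 already set */

		/* Stage 2: walk the guest hypervisor's S2 in software */
		ipa = (par & GENMASK_ULL(47, 12)) | (va & GENMASK_ULL(11, 0));
		kvm_walk_nested_s2(vcpu, ipa, &out);
	}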
[ Much logic above has been reworked by Marc Zyngier ]
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
---
arch/arm64/include/asm/kvm_arm.h | 1 +
arch/arm64/include/asm/kvm_asm.h | 2 +
arch/arm64/kvm/Makefile | 2 +-
arch/arm64/kvm/at.c | 219 +++++++++++++++++++++++++++++++
arch/arm64/kvm/sys_regs.c | 217 ++++++++++++++++++++++++++++++
5 files changed, 440 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/kvm/at.c
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 9c10c88d2fc2..d0b5ba7ecccf 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -132,6 +132,7 @@
#define VTCR_EL2_TG0_16K TCR_TG0_16K
#define VTCR_EL2_TG0_64K TCR_TG0_64K
#define VTCR_EL2_SH0_MASK TCR_SH0_MASK
+#define VTCR_EL2_SH0_SHIFT TCR_SH0_SHIFT
#define VTCR_EL2_SH0_INNER TCR_SH0_INNER
#define VTCR_EL2_ORGN0_MASK TCR_ORGN0_MASK
#define VTCR_EL2_ORGN0_WBWA TCR_ORGN0_WBWA
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 24b5e6b23417..ee50ed8a2a47 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -235,6 +235,8 @@ extern void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
extern void __kvm_timer_set_cntvoff(u64 cntvoff);
+extern void __kvm_at_s1e01(struct kvm_vcpu *vcpu, u32 op, u64 vaddr);
+extern void __kvm_at_s1e2(struct kvm_vcpu *vcpu, u32 op, u64 vaddr);
extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index c0c050e53157..c2717a8f12f5 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -14,7 +14,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
inject_fault.o va_layout.o handle_exit.o \
guest.o debug.o reset.o sys_regs.o stacktrace.o \
vgic-sys-reg-v3.o fpsimd.o pkvm.o \
- arch_timer.o trng.o vmid.o emulate-nested.o nested.o \
+ arch_timer.o trng.o vmid.o emulate-nested.o nested.o at.o \
vgic/vgic.o vgic/vgic-init.o \
vgic/vgic-irqfd.o vgic/vgic-v2.o \
vgic/vgic-v3.o vgic/vgic-v4.o \
diff --git a/arch/arm64/kvm/at.c b/arch/arm64/kvm/at.c
new file mode 100644
index 000000000000..6d47dd409384
--- /dev/null
+++ b/arch/arm64/kvm/at.c
@@ -0,0 +1,219 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2017 - Linaro Ltd
+ * Author: Jintack Lim <jintack.lim@linaro.org>
+ */
+
+#include <asm/kvm_hyp.h>
+#include <asm/kvm_mmu.h>
+
+struct mmu_config {
+ u64 ttbr0;
+ u64 ttbr1;
+ u64 tcr;
+ u64 sctlr;
+ u64 vttbr;
+ u64 vtcr;
+ u64 hcr;
+};
+
+static void __mmu_config_save(struct mmu_config *config)
+{
+ config->ttbr0 = read_sysreg_el1(SYS_TTBR0);
+ config->ttbr1 = read_sysreg_el1(SYS_TTBR1);
+ config->tcr = read_sysreg_el1(SYS_TCR);
+ config->sctlr = read_sysreg_el1(SYS_SCTLR);
+ config->vttbr = read_sysreg(vttbr_el2);
+ config->vtcr = read_sysreg(vtcr_el2);
+ config->hcr = read_sysreg(hcr_el2);
+}
+
+static void __mmu_config_restore(struct mmu_config *config)
+{
+ write_sysreg_el1(config->ttbr0, SYS_TTBR0);
+ write_sysreg_el1(config->ttbr1, SYS_TTBR1);
+ write_sysreg_el1(config->tcr, SYS_TCR);
+ write_sysreg_el1(config->sctlr, SYS_SCTLR);
+ write_sysreg(config->vttbr, vttbr_el2);
+ write_sysreg(config->vtcr, vtcr_el2);
+ write_sysreg(config->hcr, hcr_el2);
+
+ isb();
+}
+
+void __kvm_at_s1e01(struct kvm_vcpu *vcpu, u32 op, u64 vaddr)
+{
+ struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
+ struct mmu_config config;
+ struct kvm_s2_mmu *mmu;
+ bool fail;
+
+ write_lock(&vcpu->kvm->mmu_lock);
+
+ /*
+ * If HCR_EL2.{E2H,TGE} == {1,1}, the MMU context is already
+ * the right one (as we trapped from vEL2).
+ */
+ if (vcpu_el2_e2h_is_set(vcpu) && vcpu_el2_tge_is_set(vcpu))
+ goto skip_mmu_switch;
+
+ /*
+ * FIXME: Obtaining the S2 MMU for a L2 is horribly racy, and
+ * we may not find it (recycled by another vcpu, for example).
+ * See the other FIXME comment below about the need for a SW
+ * PTW in this case.
+ */
+ mmu = lookup_s2_mmu(vcpu);
+ if (WARN_ON(!mmu))
+ goto out;
+
+ /* We've trapped, so everything is live on the CPU. */
+ __mmu_config_save(&config);
+
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR0_EL1), SYS_TTBR0);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR1_EL1), SYS_TTBR1);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL1), SYS_TCR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL1), SYS_SCTLR);
+ write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
+ /*
+	 * REVISIT: do we need anything from the guest's VTCR_EL2? It
+	 * looks like keeping the host's configuration is the right
+	 * thing to do at this stage (and we could avoid saving/restoring
+	 * it). Keep the host's version for now.
+ */
+ write_sysreg((config.hcr & ~HCR_TGE) | HCR_VM, hcr_el2);
+
+ isb();
+
+skip_mmu_switch:
+
+ switch (op) {
+ case OP_AT_S1E1R:
+ case OP_AT_S1E1RP:
+ fail = __kvm_at("s1e1r", vaddr);
+ break;
+ case OP_AT_S1E1W:
+ case OP_AT_S1E1WP:
+ fail = __kvm_at("s1e1w", vaddr);
+ break;
+ case OP_AT_S1E0R:
+ fail = __kvm_at("s1e0r", vaddr);
+ break;
+ case OP_AT_S1E0W:
+ fail = __kvm_at("s1e0w", vaddr);
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ break;
+ }
+
+ if (!fail)
+ ctxt_sys_reg(ctxt, PAR_EL1) = read_sysreg(par_el1);
+ else
+ ctxt_sys_reg(ctxt, PAR_EL1) = SYS_PAR_EL1_F;
+
+ /*
+ * Failed? let's leave the building now.
+ *
+ * FIXME: how about a failed translation because the shadow S2
+ * wasn't populated? We may need to perform a SW PTW,
+ * populating our shadow S2 and retry the instruction.
+ */
+ if (ctxt_sys_reg(ctxt, PAR_EL1) & SYS_PAR_EL1_F)
+ goto nopan;
+
+ /* No PAN? No problem. */
+ if (!vcpu_el2_e2h_is_set(vcpu) || !(*vcpu_cpsr(vcpu) & PSR_PAN_BIT))
+ goto nopan;
+
+ /*
+ * For PAN-involved AT operations, perform the same
+ * translation, using EL0 this time.
+ */
+ switch (op) {
+ case OP_AT_S1E1RP:
+ fail = __kvm_at("s1e0r", vaddr);
+ break;
+ case OP_AT_S1E1WP:
+ fail = __kvm_at("s1e0w", vaddr);
+ break;
+ default:
+ goto nopan;
+ }
+
+ /*
+ * If the EL0 translation has succeeded, we need to pretend
+ * the AT operation has failed, as the PAN setting forbids
+ * such a translation.
+ *
+ * FIXME: we hardcode a Level-3 permission fault. We really
+ * should return the real fault level.
+ */
+ if (fail || !(read_sysreg(par_el1) & SYS_PAR_EL1_F))
+ ctxt_sys_reg(ctxt, PAR_EL1) = (0xf << 1) | SYS_PAR_EL1_F;
+
+nopan:
+ if (!(vcpu_el2_e2h_is_set(vcpu) && vcpu_el2_tge_is_set(vcpu)))
+ __mmu_config_restore(&config);
+
+out:
+ write_unlock(&vcpu->kvm->mmu_lock);
+}
+
+void __kvm_at_s1e2(struct kvm_vcpu *vcpu, u32 op, u64 vaddr)
+{
+ struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
+ struct mmu_config config;
+ struct kvm_s2_mmu *mmu;
+ u64 val;
+
+ write_lock(&vcpu->kvm->mmu_lock);
+
+ mmu = &vcpu->kvm->arch.mmu;
+
+ /* We've trapped, so everything is live on the CPU. */
+ __mmu_config_save(&config);
+
+ if (vcpu_el2_e2h_is_set(vcpu)) {
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR0_EL2), SYS_TTBR0);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR1_EL2), SYS_TTBR1);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL2), SYS_TCR);
+ write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL2), SYS_SCTLR);
+
+ val = config.hcr;
+ } else {
+ write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR0_EL2), SYS_TTBR0);
+ val = translate_tcr_el2_to_tcr_el1(ctxt_sys_reg(ctxt, TCR_EL2));
+ write_sysreg_el1(val, SYS_TCR);
+ val = translate_sctlr_el2_to_sctlr_el1(ctxt_sys_reg(ctxt, SCTLR_EL2));
+ write_sysreg_el1(val, SYS_SCTLR);
+
+ val = config.hcr | HCR_NV | HCR_NV1;
+ }
+
+ write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
+ /* FIXME: write S2 MMU VTCR_EL2? */
+ write_sysreg((val & ~HCR_TGE) | HCR_VM, hcr_el2);
+
+ isb();
+
+ switch (op) {
+ case OP_AT_S1E2R:
+ asm volatile("at s1e1r, %0" : : "r" (vaddr));
+ break;
+ case OP_AT_S1E2W:
+ asm volatile("at s1e1w, %0" : : "r" (vaddr));
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ break;
+ }
+
+ isb();
+
+ /* FIXME: handle failed translation due to shadow S2 */
+ ctxt_sys_reg(ctxt, PAR_EL1) = read_sysreg(par_el1);
+
+ __mmu_config_restore(&config);
+ write_unlock(&vcpu->kvm->mmu_lock);
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 49d00f0cdda0..475e245cd653 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2776,16 +2776,233 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG(SP_EL2, NULL, reset_unknown, 0),
};
+static bool handle_s1e01(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ int sys_encoding = sys_insn(p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
+
+ __kvm_at_s1e01(vcpu, sys_encoding, p->regval);
+
+ return true;
+}
+
+static bool handle_s1e2(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ int sys_encoding = sys_insn(p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
+
+ __kvm_at_s1e2(vcpu, sys_encoding, p->regval);
+
+ return true;
+}
+
+static u64 setup_par_aborted(u32 esr)
+{
+ u64 par = 0;
+
+ /* S [9]: fault in the stage 2 translation */
+ par |= (1 << 9);
+ /* FST [6:1]: Fault status code */
+ par |= (esr << 1);
+ /* F [0]: translation is aborted */
+ par |= 1;
+
+ return par;
+}
+
+static u64 setup_par_completed(struct kvm_vcpu *vcpu, struct kvm_s2_trans *out)
+{
+ u64 par, vtcr_sh0;
+
+ /* F [0]: Translation is completed successfully */
+ par = 0;
+ /* ATTR [63:56] */
+ par |= out->upper_attr;
+ /* PA [47:12] */
+	par |= out->output & GENMASK_ULL(47, 12);
+ /* RES1 [11] */
+ par |= (1UL << 11);
+ /* SH [8:7]: Shareability attribute */
+ vtcr_sh0 = vcpu_read_sys_reg(vcpu, VTCR_EL2) & VTCR_EL2_SH0_MASK;
+ par |= (vtcr_sh0 >> VTCR_EL2_SH0_SHIFT) << 7;
+
+ return par;
+}
+
+static bool handle_s12(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r, bool write)
+{
+ u64 par, va;
+ u32 esr, op;
+ phys_addr_t ipa;
+ struct kvm_s2_trans out;
+ int ret;
+
+ /* Do the stage-1 translation */
+ va = p->regval;
+ op = sys_insn(p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
+ switch (op) {
+ case OP_AT_S12E1R:
+ op = OP_AT_S1E1R;
+ break;
+ case OP_AT_S12E1W:
+ op = OP_AT_S1E1W;
+ break;
+ case OP_AT_S12E0R:
+ op = OP_AT_S1E0R;
+ break;
+ case OP_AT_S12E0W:
+ op = OP_AT_S1E0W;
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ return true;
+ }
+
+ __kvm_at_s1e01(vcpu, op, va);
+ par = vcpu_read_sys_reg(vcpu, PAR_EL1);
+ if (par & 1) {
+ /* The stage-1 translation aborted */
+ return true;
+ }
+
+ /* Do the stage-2 translation */
+ ipa = (par & GENMASK_ULL(47, 12)) | (va & GENMASK_ULL(11, 0));
+ out.esr = 0;
+ ret = kvm_walk_nested_s2(vcpu, ipa, &out);
+ if (ret < 0)
+ return false;
+
+ /* Check if the stage-2 PTW is aborted */
+ if (out.esr) {
+ esr = out.esr;
+ goto s2_trans_abort;
+ }
+
+ /* Check the access permission */
+ if ((!write && !out.readable) || (write && !out.writable)) {
+ esr = ESR_ELx_FSC_PERM;
+ esr |= out.level & 0x3;
+ goto s2_trans_abort;
+ }
+
+ vcpu_write_sys_reg(vcpu, setup_par_completed(vcpu, &out), PAR_EL1);
+ return true;
+
+s2_trans_abort:
+ vcpu_write_sys_reg(vcpu, setup_par_aborted(esr), PAR_EL1);
+ return true;
+}
+
+static bool handle_s12r(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ return handle_s12(vcpu, p, r, false);
+}
+
+static bool handle_s12w(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ return handle_s12(vcpu, p, r, true);
+}
+
+/*
+ * AT instruction emulation
+ *
+ * We emulate AT instructions executed in the virtual EL2.
+ * Basic strategy for the stage-1 translation emulation is to load proper
+ * context, which depends on the trapped instruction and the virtual HCR_EL2,
+ * to the EL1 virtual memory control registers and execute S1E[01] instructions
+ * in EL2. See below for more detail.
+ *
+ * For the stage-2 translation, which is necessary for S12E[01] emulation,
+ * we walk the guest hypervisor's stage-2 page table in software.
+ *
+ * The stage-1 translation emulations can be divided into two groups depending
+ * on the translation regime.
+ *
+ * 1. EL2 AT instructions: S1E2x
+ * +-----------------------------------------------------------------------+
+ * | | Setting for the emulation |
+ * | Virtual HCR_EL2.E2H on trap |-----------------------------------------+
+ * | | Phys EL1 regs | Phys NV, NV1 | Phys TGE |
+ * |-----------------------------------------------------------------------|
+ * | 0 | vEL2 | (1, 1) | 0 |
+ * | 1 | vEL2 | (0, 0) | 0 |
+ * +-----------------------------------------------------------------------+
+ *
+ * We emulate the EL2 AT instructions by loading virtual EL2 context
+ * to the EL1 virtual memory control registers and executing corresponding
+ * EL1 AT instructions.
+ *
+ * We set physical NV and NV1 bits to use EL2 page table format for non-VHE
+ * guest hypervisor (i.e. HCR_EL2.E2H == 0). As a VHE guest hypervisor uses the
+ * EL1 page table format, we don't set those bits.
+ *
+ * We should clear physical TGE bit not to use the EL2 translation regime when
+ * the host uses the VHE feature.
+ *
+ *
+ * 2. EL0/EL1 AT instructions: S1E[01]x, S12E1x
+ * +----------------------------------------------------------------------+
+ * | Virtual HCR_EL2 on trap | Setting for the emulation |
+ * |----------------------------------------------------------------------+
+ * | (vE2H, vTGE) | (vNV, vNV1) | Phys EL1 regs | Phys NV, NV1 | Phys TGE |
+ * |----------------------------------------------------------------------|
+ * | (0, 0)* | (0, 0) | vEL1 | (0, 0) | 0 |
+ * | (0, 0) | (1, 1) | vEL1 | (1, 1) | 0 |
+ * | (1, 1) | (0, 0) | vEL2 | (0, 0) | 0 |
+ * | (1, 1) | (1, 1) | vEL2 | (1, 1) | 0 |
+ * +----------------------------------------------------------------------+
+ *
+ * *For (0, 0) in the 'Virtual HCR_EL2 on trap' column, it actually means
+ * (1, 1). Keep them (0, 0) just for the readability.
+ *
+ * We set physical EL1 virtual memory control registers depending on
+ * (vE2H, vTGE) pair. When the pair is (0, 0) where AT instructions are
+ * supposed to use EL0/EL1 translation regime, we load the EL1 registers with
+ * the virtual EL1 registers (i.e. EL1 registers from the guest hypervisor's
+ * point of view). When the pair is (1, 1), however, AT instructions are defined
+ * to apply EL2 translation regime. To emulate this behavior, we load the EL1
+ * registers with the virtual EL2 context. (i.e the shadow registers)
+ *
+ * We respect the virtual NV and NV1 bit for the emulation. When those bits are
+ * set, it means that a guest hypervisor would like to use EL2 page table format
+ * for the EL1 translation regime. We emulate this by setting the physical
+ * NV and NV1 bits.
+ */
+
+#define SYS_INSN(insn, access_fn) \
+ { \
+ SYS_DESC(OP_##insn), \
+ .access = (access_fn), \
+ }
+
static struct sys_reg_desc sys_insn_descs[] = {
{ SYS_DESC(SYS_DC_ISW), access_dcsw },
{ SYS_DESC(SYS_DC_IGSW), access_dcgsw },
{ SYS_DESC(SYS_DC_IGDSW), access_dcgsw },
+
+ SYS_INSN(AT_S1E1R, handle_s1e01),
+ SYS_INSN(AT_S1E1W, handle_s1e01),
+ SYS_INSN(AT_S1E0R, handle_s1e01),
+ SYS_INSN(AT_S1E0W, handle_s1e01),
+ SYS_INSN(AT_S1E1RP, handle_s1e01),
+ SYS_INSN(AT_S1E1WP, handle_s1e01),
+
{ SYS_DESC(SYS_DC_CSW), access_dcsw },
{ SYS_DESC(SYS_DC_CGSW), access_dcgsw },
{ SYS_DESC(SYS_DC_CGDSW), access_dcgsw },
{ SYS_DESC(SYS_DC_CISW), access_dcsw },
{ SYS_DESC(SYS_DC_CIGSW), access_dcgsw },
{ SYS_DESC(SYS_DC_CIGDSW), access_dcgsw },
+
+ SYS_INSN(AT_S1E2R, handle_s1e2),
+ SYS_INSN(AT_S1E2W, handle_s1e2),
+ SYS_INSN(AT_S12E1R, handle_s12r),
+ SYS_INSN(AT_S12E1W, handle_s12w),
+ SYS_INSN(AT_S12E0R, handle_s12r),
+ SYS_INSN(AT_S12E0W, handle_s12w),
};
static const struct sys_reg_desc *first_idreg;
--
2.39.2
* [PATCH v11 24/43] KVM: arm64: nv: Trap and emulate TLBI instructions from virtual EL2
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (22 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 23/43] KVM: arm64: nv: Trap and emulate AT instructions from virtual EL2 Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 25/43] KVM: arm64: nv: Hide RAS from nested guests Marc Zyngier
` (22 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
From: Jintack Lim <jintack.lim@linaro.org>
When supporting nested virtualization, TLBI instructions executed by a
guest hypervisor must be trapped and emulated by the host hypervisor,
because the guest hypervisor can only affect physical TLB entries
relating to its own execution environment (virtual EL2 in EL1), not
those of the nested guests, as the semantics of the instructions
require. TLBI instructions might also result in updates (invalidations)
to the shadow page tables.
This patch does several things.
1. Emulate the TLBI ALLE2(IS) instruction executed in the virtual EL2. Since
we emulate the virtual EL2 in EL1, we invalidate EL1&0 regime stage
1 TLB entries after setting vttbr_el2 to the VMID of the virtual EL2.
2. Emulate TLBI VAE2* instructions executed in the virtual EL2. Based on the
same principle as for the TLBI ALLE2 instruction, we can simply emulate
those instructions by executing the corresponding VAE1* instructions with
the virtual EL2's VMID assigned by the host hypervisor.
Note that we could emulate TLBI ALLE2IS precisely by only
invalidating stage 1 TLB entries via the TLBI VMALLE1IS instruction, but to
keep it simple, we reuse the existing function, __kvm_tlb_flush_vmid(),
which invalidates both stage 1 and stage 2 TLB entries.
3. The TLBI ALLE1(IS) instruction invalidates all EL1&0 regime stage 1 and 2
TLB entries (on all PEs in the same Inner Shareable domain). To emulate
these instructions, we first need to clear all the mappings in the
shadow page tables since executing those instructions implies the change
of mappings in the stage 2 page tables maintained by the guest
hypervisor. We then need to invalidate all EL1&0 regime stage 1 and 2
TLB entries of all VMIDs, which are assigned by the host hypervisor, for
this VM.
4. Based on the same principle as TLBI ALLE1(IS) emulation, we clear the
mappings in the shadow stage-2 page tables and invalidate TLB entries.
But this time we do it only for the current VMID from the guest
hypervisor's perspective, not for all VMIDs.
5. Based on the same principle as TLBI ALLE1(IS) and TLBI VMALLS12E1(IS)
emulation, we clear the mappings in the shadow stage-2 page tables and
invalidate TLB entries. But this time we do it only for a single mapping
for the current VMID from the guest hypervisor's view.
6. Even though a guest hypervisor can execute TLBI instructions that are
accessible at EL1 without being trapped, this would be wrong: all those
TLBI instructions operate on the current VMID, and when running a guest
hypervisor the current VMID is the one allocated to the guest hypervisor
itself, not the one from the virtual vttbr_el2. Letting a guest
hypervisor execute those TLBI instructions thus invalidates its own TLB
entries and leaves stale TLB entries unhandled.
Therefore we trap and emulate those TLBI instructions. The emulation is
simple; we find a shadow VMID mapped to the virtual vttbr_el2, set it in
the physical vttbr_el2, then execute the same instruction in EL2.
We don't set HCR_EL2.TTLB bit yet.
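Condensed, the EL2 TLBI-by-VA case boils down to replaying the
instruction at EL1 under the VMID the host assigned to the virtual EL2,
upgraded to the Inner Shareable domain. A sketch (the actual call site
lives in the sys_regs.c hunks of this patch):

	/*
	 * A vEL2 "tlbi vae2is, Xt" ends up as "tlbi vae1is" executed
	 * in __kvm_tlb_vae2is() below.
	 */
	kvm_call_hyp(__kvm_tlb_vae2is, &vcpu->kvm->arch.mmu,
		     p->regval, sys_encoding);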
[ Changes performed by Marc Zyngier:
The TLBI handling code more or less directly execute the same
instruction that has been trapped (with an EL2->EL1 conversion
in the case of an EL2 TLBI), but that's unfortunately not enough:
- TLBIs must be upgraded to the Inner Shareable domain to account
for vcpu migration, just like we already have with HCR_EL2.FB.
- The DSB instruction that synchronises these must thus be on
the Inner Shareable domain as well.
- Prior to executing the TLBI, we need another DSB ISHST to make
sure that the update to the page tables is now visible.
Ordering of system instructions fixed
- The current TLB invalidation code is pretty buggy, as it assumes a
page mapping. In reality, a TLB invalidation is likely to
cover more than a single page, and the size should be decided
by the guest's configuration (and not the host's).
Since we don't cache the guest mapping sizes in the shadow PT yet,
let's assume the worst case (a block mapping) and invalidate that.
Take this opportunity to fix the decoding of the parameter (it
isn't a straight IPA).
- In general, we always emulate local TLB invalidations as being
upgraded to the Inner Shareable domain so that we can easily
deal with vcpu migration. This is consistent with the fact that
we set HCR_EL2.FB when running non-nested VMs.
So let's emulate TLBI ALLE2 as ALLE2IS.
]
[ Changes performed by Christoffer Dall:
Sometimes when we are invalidating the TLB for a certain S2 MMU
context, this context can also have EL2 context associated with it
and we have to invalidate this too.
]
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_asm.h | 2 +
arch/arm64/include/asm/kvm_nested.h | 7 +
arch/arm64/include/asm/sysreg.h | 4 +
arch/arm64/kvm/hyp/vhe/tlb.c | 81 ++++++++++
arch/arm64/kvm/mmu.c | 21 ++-
arch/arm64/kvm/nested.c | 35 ++++
arch/arm64/kvm/sys_regs.c | 238 ++++++++++++++++++++++++++++
7 files changed, 385 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ee50ed8a2a47..5719fd64e64d 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -233,6 +233,8 @@ extern void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu,
extern void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
phys_addr_t start, unsigned long pages);
extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
+extern void __kvm_tlb_vae2is(struct kvm_s2_mmu *mmu, u64 va, u64 sys_encoding);
+extern void __kvm_tlb_el1_instr(struct kvm_s2_mmu *mmu, u64 val, u64 sys_encoding);
extern void __kvm_timer_set_cntvoff(u64 cntvoff);
extern void __kvm_at_s1e01(struct kvm_vcpu *vcpu, u32 op, u64 vaddr);
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 0b3f764f44e8..2f427348506a 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -64,6 +64,13 @@ extern void kvm_init_nested(struct kvm *kvm);
extern int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu);
extern void kvm_init_nested_s2_mmu(struct kvm_s2_mmu *mmu);
extern struct kvm_s2_mmu *lookup_s2_mmu(struct kvm_vcpu *vcpu);
+
+union tlbi_info;
+
+extern void kvm_s2_mmu_iterate_by_vmid(struct kvm *kvm, u16 vmid,
+ const union tlbi_info *info,
+ void (*)(struct kvm_s2_mmu *,
+ const union tlbi_info *));
extern void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu);
extern void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index a54464c415fc..30952f1ac997 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -654,6 +654,10 @@
#define OP_AT_S12E0W sys_insn(AT_Op0, 4, AT_CRn, 8, 7)
/* TLBI instructions */
+#define TLBI_Op0 1
+#define TLBI_Op1_EL1 0 /* Accessible from EL1 or higher */
+#define TLBI_Op1_EL2 4 /* Accessible from EL2 or higher */
+
#define OP_TLBI_VMALLE1OS sys_insn(1, 0, 8, 1, 0)
#define OP_TLBI_VAE1OS sys_insn(1, 0, 8, 1, 1)
#define OP_TLBI_ASIDE1OS sys_insn(1, 0, 8, 1, 2)
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index b636b4111dbf..737ea0591b54 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -231,3 +231,84 @@ void __kvm_flush_vm_context(void)
dsb(ish);
}
+
+void __kvm_tlb_vae2is(struct kvm_s2_mmu *mmu, u64 va, u64 sys_encoding)
+{
+ struct tlb_inv_context cxt;
+
+ dsb(ishst);
+
+ /* Switch to requested VMID */
+ __tlb_switch_to_guest(mmu, &cxt);
+
+ /*
+ * Execute the EL1 version of TLBI VAE2* instruction, forcing
+ * an upgrade to the Inner Shareable domain in order to
+ * perform the invalidation on all CPUs.
+ */
+ switch (sys_encoding) {
+ case OP_TLBI_VAE2:
+ case OP_TLBI_VAE2IS:
+ __tlbi(vae1is, va);
+ break;
+ case OP_TLBI_VALE2:
+ case OP_TLBI_VALE2IS:
+ __tlbi(vale1is, va);
+ break;
+ default:
+ break;
+ }
+ dsb(ish);
+ isb();
+
+ __tlb_switch_to_host(&cxt);
+}
+
+void __kvm_tlb_el1_instr(struct kvm_s2_mmu *mmu, u64 val, u64 sys_encoding)
+{
+ struct tlb_inv_context cxt;
+
+ dsb(ishst);
+
+ /* Switch to requested VMID */
+ __tlb_switch_to_guest(mmu, &cxt);
+
+ /*
+ * Execute the same instruction as the guest hypervisor did,
+ * expanding the scope of local TLB invalidations to the Inner
+ * Shareable domain so that it takes place on all CPUs. This
+ * is equivalent to having HCR_EL2.FB set.
+ */
+ switch (sys_encoding) {
+ case OP_TLBI_VMALLE1:
+ case OP_TLBI_VMALLE1IS:
+ __tlbi(vmalle1is);
+ break;
+ case OP_TLBI_VAE1:
+ case OP_TLBI_VAE1IS:
+ __tlbi(vae1is, val);
+ break;
+ case OP_TLBI_ASIDE1:
+ case OP_TLBI_ASIDE1IS:
+ __tlbi(aside1is, val);
+ break;
+ case OP_TLBI_VAAE1:
+ case OP_TLBI_VAAE1IS:
+ __tlbi(vaae1is, val);
+ break;
+ case OP_TLBI_VALE1:
+ case OP_TLBI_VALE1IS:
+ __tlbi(vale1is, val);
+ break;
+ case OP_TLBI_VAALE1:
+ case OP_TLBI_VAALE1IS:
+ __tlbi(vaale1is, val);
+ break;
+ default:
+ break;
+ }
+ dsb(ish);
+ isb();
+
+ __tlb_switch_to_host(&cxt);
+}
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 35c196a69e3b..8c77547f5582 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -173,10 +173,25 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
}
int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
- gfn_t gfn, u64 nr_pages)
+ gfn_t gfn, u64 nr_pages)
{
- kvm_tlb_flush_vmid_range(&kvm->arch.mmu,
- gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
+ if (!kvm->arch.nested_mmus) {
+ /*
+ * For a normal (i.e. non-nested) guest, flush entries for the
+ * given VMID.
+ */
+ kvm_tlb_flush_vmid_range(&kvm->arch.mmu,
+ gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
+ } else {
+ /*
+ * When supporting nested virtualization, we can have multiple
+ * VMIDs in play for each VCPU in the VM, so it's really not
+ * worth it to try to quiesce the system and flush all the
+ * VMIDs that may be in use; instead, just nuke the whole thing.
+ */
+ kvm_call_hyp(__kvm_flush_vm_context);
+ }
+
return 0;
}
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 58e8a3dc5fef..61af8a42389d 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -351,6 +351,41 @@ int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
return ret;
}
+/*
+ * We can have multiple *different* MMU contexts with the same VMID:
+ *
+ * - S2 being enabled or not, hence differing by the HCR_EL2.VM bit
+ *
+ * - Multiple vcpus using private S2s (huh huh...), hence differing by the
+ * VTTBR_EL2.BADDR address
+ *
+ * - A combination of the above...
+ *
+ * We can always identify which MMU context to pick at run-time. However,
+ * TLB invalidation involving a VMID must take action on all the TLBs using
+ * this particular VMID. This translates into applying the same invalidation
+ * operation to all the contexts that are using this VMID. Moar phun!
+ */
+void kvm_s2_mmu_iterate_by_vmid(struct kvm *kvm, u16 vmid,
+ const union tlbi_info *info,
+ void (*tlbi_callback)(struct kvm_s2_mmu *,
+ const union tlbi_info *))
+{
+ write_lock(&kvm->mmu_lock);
+
+ for (int i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ if (!kvm_s2_mmu_valid(mmu))
+ continue;
+
+ if (vmid == get_vmid(mmu->tlb_vttbr))
+ tlbi_callback(mmu, info);
+ }
+
+ write_unlock(&kvm->mmu_lock);
+}
+
/* Must be called with kvm->mmu_lock held */
struct kvm_s2_mmu *lookup_s2_mmu(struct kvm_vcpu *vcpu)
{
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 475e245cd653..4c5bb883773d 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2906,6 +2906,216 @@ static bool handle_s12w(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
return handle_s12(vcpu, p, r, true);
}
+static bool handle_alle2is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ /*
+ * To emulate invalidating all EL2 regime stage 1 TLB entries for all
+ * PEs, executing TLBI VMALLE1IS is enough. But we reuse the existing
+ * interface for simplicity; invalidating stage 2 entries doesn't
+ * affect correctness.
+ */
+ __kvm_tlb_flush_vmid(&vcpu->kvm->arch.mmu);
+ return true;
+}
+
+static bool handle_vae2is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ int sys_encoding = sys_insn(p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
+
+ /*
+ * Based on the same principle as TLBI ALLE2 instruction
+ * emulation, we emulate TLBI VAE2* instructions by executing
+ * corresponding TLBI VAE1* instructions with the virtual
+ * EL2's VMID assigned by the host hypervisor.
+ */
+ __kvm_tlb_vae2is(&vcpu->kvm->arch.mmu, p->regval, sys_encoding);
+ return true;
+}
+
+static bool handle_alle1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ struct kvm_s2_mmu *mmu = &vcpu->kvm->arch.mmu;
+
+ write_lock(&vcpu->kvm->mmu_lock);
+
+ /*
+ * Clear all mappings in the shadow page tables and invalidate the stage
+ * 1 and 2 TLB entries via kvm_tlb_flush_vmid_ipa().
+ */
+ kvm_nested_s2_unmap(vcpu->kvm);
+
+ if (atomic64_read(&mmu->vmid.id)) {
+ /*
+ * Invalidate the stage 1 and 2 TLB entries for the host OS
+ * in a VM only if there is one.
+ */
+ __kvm_tlb_flush_vmid(mmu);
+ }
+
+ write_unlock(&vcpu->kvm->mmu_lock);
+
+ return true;
+}
+
+/* Only defined here as this is an internal "abstraction" */
+union tlbi_info {
+ struct {
+ u64 start;
+ u64 size;
+ } range;
+
+ struct {
+ u64 addr;
+ } ipa;
+
+ struct {
+ u64 addr;
+ u32 encoding;
+ } va;
+};
+
+static void s2_mmu_unmap_stage2_range(struct kvm_s2_mmu *mmu,
+ const union tlbi_info *info)
+{
+ kvm_unmap_stage2_range(mmu, info->range.start, info->range.size);
+}
+
+static bool handle_vmalls12e1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ u64 limit, vttbr;
+
+ vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
+ limit = BIT_ULL(kvm_get_pa_bits(vcpu->kvm));
+
+ kvm_s2_mmu_iterate_by_vmid(vcpu->kvm, get_vmid(vttbr),
+ &(union tlbi_info) {
+ .range = {
+ .start = 0,
+ .size = limit,
+ },
+ },
+ s2_mmu_unmap_stage2_range);
+
+ return true;
+}
+
+static void s2_mmu_unmap_stage2_ipa(struct kvm_s2_mmu *mmu,
+ const union tlbi_info *info)
+{
+ unsigned long max_size;
+ u64 base_addr;
+
+ /*
+ * We drop a number of things from the supplied value:
+ *
+ * - NS bit: we're non-secure only.
+ *
+ * - TTL field: We already have the granule size from the
+ * VTCR_EL2.TG0 field, and the level is only relevant to the
+ * guest's S2PT.
+ *
+ * - IPA[51:48]: We don't support 52bit IPA just yet...
+ *
+ * And of course, adjust the IPA to be on an actual address.
+ */
+ base_addr = (info->ipa.addr & GENMASK_ULL(35, 0)) << 12;
+
+ /* Compute the maximum extent of the invalidation */
+ switch (mmu->tlb_vtcr & VTCR_EL2_TG0_MASK) {
+ case VTCR_EL2_TG0_4K:
+ max_size = SZ_1G;
+ break;
+ case VTCR_EL2_TG0_16K:
+ max_size = SZ_32M;
+ break;
+ case VTCR_EL2_TG0_64K:
+ /*
+ * No, we do not support 52bit IPA in nested yet. Once
+ * we do, this should be 4TB.
+ */
+ /* FIXME: remove the 52bit PA support from the IDregs */
+ max_size = SZ_512M;
+ break;
+ default:
+ BUG();
+ }
+
+ base_addr &= ~(max_size - 1);
+
+ kvm_unmap_stage2_range(mmu, base_addr, max_size);
+}
+
+static bool handle_ipas2e1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ u64 vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
+
+ kvm_s2_mmu_iterate_by_vmid(vcpu->kvm, get_vmid(vttbr),
+ &(union tlbi_info) {
+ .ipa = {
+ .addr = p->regval,
+ },
+ },
+ s2_mmu_unmap_stage2_ipa);
+
+ return true;
+}
+
+static void s2_mmu_unmap_stage2_va(struct kvm_s2_mmu *mmu,
+ const union tlbi_info *info)
+{
+ __kvm_tlb_el1_instr(mmu, info->va.addr, info->va.encoding);
+}
+
+static bool handle_tlbi_el1(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ u32 sys_encoding = sys_insn(p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
+ u64 vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
+
+ /*
+ * If we're here, this is because we've trapped on an EL1 TLBI
+ * instruction that affects the EL1 translation regime while
+ * we're running in a context that doesn't allow us to let the
+ * HW do its thing (aka vEL2):
+ *
+ * - HCR_EL2.E2H == 0 : a non-VHE guest
+ * - HCR_EL2.{E2H,TGE} == { 1, 0 } : a VHE guest in guest mode
+ *
+ * We don't expect these helpers to ever be called when running
+ * in a vEL1 context.
+ */
+
+ WARN_ON(!vcpu_is_el2(vcpu));
+
+ if ((__vcpu_sys_reg(vcpu, HCR_EL2) & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
+ mutex_lock(&vcpu->kvm->lock);
+ /*
+ * ARMv8.4-NV allows the guest to change TGE behind
+ * our back, so we always trap EL1 TLBIs from vEL2...
+ */
+ __kvm_tlb_el1_instr(&vcpu->kvm->arch.mmu, p->regval, sys_encoding);
+ mutex_unlock(&vcpu->kvm->lock);
+
+ return true;
+ }
+
+ kvm_s2_mmu_iterate_by_vmid(vcpu->kvm, get_vmid(vttbr),
+ &(union tlbi_info) {
+ .va = {
+ .addr = p->regval,
+ .encoding = sys_encoding,
+ },
+ },
+ s2_mmu_unmap_stage2_va);
+
+ return true;
+}
+
/*
* AT instruction emulation
*
@@ -2997,12 +3207,40 @@ static struct sys_reg_desc sys_insn_descs[] = {
{ SYS_DESC(SYS_DC_CIGSW), access_dcgsw },
{ SYS_DESC(SYS_DC_CIGDSW), access_dcgsw },
+ SYS_INSN(TLBI_VMALLE1IS, handle_tlbi_el1),
+ SYS_INSN(TLBI_VAE1IS, handle_tlbi_el1),
+ SYS_INSN(TLBI_ASIDE1IS, handle_tlbi_el1),
+ SYS_INSN(TLBI_VAAE1IS, handle_tlbi_el1),
+ SYS_INSN(TLBI_VALE1IS, handle_tlbi_el1),
+ SYS_INSN(TLBI_VAALE1IS, handle_tlbi_el1),
+ SYS_INSN(TLBI_VMALLE1, handle_tlbi_el1),
+ SYS_INSN(TLBI_VAE1, handle_tlbi_el1),
+ SYS_INSN(TLBI_ASIDE1, handle_tlbi_el1),
+ SYS_INSN(TLBI_VAAE1, handle_tlbi_el1),
+ SYS_INSN(TLBI_VALE1, handle_tlbi_el1),
+ SYS_INSN(TLBI_VAALE1, handle_tlbi_el1),
+
SYS_INSN(AT_S1E2R, handle_s1e2),
SYS_INSN(AT_S1E2W, handle_s1e2),
SYS_INSN(AT_S12E1R, handle_s12r),
SYS_INSN(AT_S12E1W, handle_s12w),
SYS_INSN(AT_S12E0R, handle_s12r),
SYS_INSN(AT_S12E0W, handle_s12w),
+
+ SYS_INSN(TLBI_IPAS2E1IS, handle_ipas2e1is),
+ SYS_INSN(TLBI_IPAS2LE1IS, handle_ipas2e1is),
+ SYS_INSN(TLBI_ALLE2IS, handle_alle2is),
+ SYS_INSN(TLBI_VAE2IS, handle_vae2is),
+ SYS_INSN(TLBI_ALLE1IS, handle_alle1is),
+ SYS_INSN(TLBI_VALE2IS, handle_vae2is),
+ SYS_INSN(TLBI_VMALLS12E1IS, handle_vmalls12e1is),
+ SYS_INSN(TLBI_IPAS2E1, handle_ipas2e1is),
+ SYS_INSN(TLBI_IPAS2LE1, handle_ipas2e1is),
+ SYS_INSN(TLBI_ALLE2, handle_alle2is),
+ SYS_INSN(TLBI_VAE2, handle_vae2is),
+ SYS_INSN(TLBI_ALLE1, handle_alle1is),
+ SYS_INSN(TLBI_VALE2, handle_vae2is),
+ SYS_INSN(TLBI_VMALLS12E1, handle_vmalls12e1is),
};
static const struct sys_reg_desc *first_idreg;
--
2.39.2
* [PATCH v11 25/43] KVM: arm64: nv: Hide RAS from nested guests
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (23 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 24/43] KVM: arm64: nv: Trap and emulate TLBI " Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 26/43] KVM: arm64: nv: Add handling of EL2-specific timer registers Marc Zyngier
` (21 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
We don't want to expose complicated features to guests until we have
a good grasp on the basic CPU emulation. So let's pretend that RAS
doesn't exist in a nested guest. We already hide the feature bits;
let's now make sure VDISR_EL2 will UNDEF.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 4c5bb883773d..7405053a6dc8 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2764,6 +2764,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG(VBAR_EL2, access_rw, reset_val, 0),
EL2_REG(RVBAR_EL2, access_rw, reset_val, 0),
{ SYS_DESC(SYS_RMR_EL2), trap_undef },
+ { SYS_DESC(SYS_VDISR_EL2), trap_undef },
EL2_REG(CONTEXTIDR_EL2, access_rw, reset_val, 0),
EL2_REG(TPIDR_EL2, access_rw, reset_val, 0),
--
2.39.2
* [PATCH v11 26/43] KVM: arm64: nv: Add handling of EL2-specific timer registers
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (24 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 25/43] KVM: arm64: nv: Hide RAS from nested guests Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 27/43] KVM: arm64: nv: Sync nested timer state with FEAT_NV2 Marc Zyngier
` (20 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Add the required handling for EL2 and EL02 registers, as
well as EL1 registers used in the E2H context. This includes
handling the virtual timer accesses when CNTHCTL_EL2.EL1TVT is set.
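The gist of the redirection is that, in a VHE (HCR_EL2.E2H==1) hyp
context, the EL0 accessors really address the EL2 timers. A minimal
standalone sketch of that selection logic (the helper name is made
up for illustration; the real decode lives in access_arch_timer()):

#include <stdbool.h>
#include <stdio.h>

enum which_timer { TIMER_PTIMER, TIMER_VTIMER, TIMER_HPTIMER, TIMER_HVTIMER };

/* Hypothetical helper mirroring the selection in the patch: a
 * CNTP_*_EL0 access from a VHE guest hypervisor context lands on
 * the EL2 physical timer. */
static enum which_timer select_ptimer(bool hyp_ctxt, bool e2h)
{
	return (hyp_ctxt && e2h) ? TIMER_HPTIMER : TIMER_PTIMER;
}

int main(void)
{
	/* vEL2 with E2H=1: CNTP_CTL_EL0 really means CNTHP_CTL_EL2 */
	printf("%d\n", select_ptimer(true, true) == TIMER_HPTIMER);
	/* vEL1 (or E2H=0): the plain EL1 physical timer */
	printf("%d\n", select_ptimer(false, true) == TIMER_PTIMER);
	return 0;
}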
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/sysreg.h | 2 +
arch/arm64/kvm/sys_regs.c | 123 ++++++++++++++++++++++++++++++++
2 files changed, 125 insertions(+)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 30952f1ac997..e36abbcbf4fa 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -457,6 +457,7 @@
#define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0)
#define SYS_CNTPCT_EL0 sys_reg(3, 3, 14, 0, 1)
+#define SYS_CNTVCT_EL0 sys_reg(3, 3, 14, 0, 2)
#define SYS_CNTPCTSS_EL0 sys_reg(3, 3, 14, 0, 5)
#define SYS_CNTVCTSS_EL0 sys_reg(3, 3, 14, 0, 6)
@@ -464,6 +465,7 @@
#define SYS_CNTP_CTL_EL0 sys_reg(3, 3, 14, 2, 1)
#define SYS_CNTP_CVAL_EL0 sys_reg(3, 3, 14, 2, 2)
+#define SYS_CNTV_TVAL_EL0 sys_reg(3, 3, 14, 3, 0)
#define SYS_CNTV_CTL_EL0 sys_reg(3, 3, 14, 3, 1)
#define SYS_CNTV_CVAL_EL0 sys_reg(3, 3, 14, 3, 2)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 7405053a6dc8..a24feb4b2839 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1424,26 +1424,130 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
switch (reg) {
case SYS_CNTP_TVAL_EL0:
+ if (is_hyp_ctxt(vcpu) && vcpu_el2_e2h_is_set(vcpu))
+ tmr = TIMER_HPTIMER;
+ else
+ tmr = TIMER_PTIMER;
+ treg = TIMER_REG_TVAL;
+ break;
+
+ case SYS_CNTV_TVAL_EL0:
+ if (is_hyp_ctxt(vcpu) && vcpu_el2_e2h_is_set(vcpu))
+ tmr = TIMER_HVTIMER;
+ else
+ tmr = TIMER_VTIMER;
+ treg = TIMER_REG_TVAL;
+ break;
+
case SYS_AARCH32_CNTP_TVAL:
+ case SYS_CNTP_TVAL_EL02:
tmr = TIMER_PTIMER;
treg = TIMER_REG_TVAL;
break;
+
+ case SYS_CNTV_TVAL_EL02:
+ tmr = TIMER_VTIMER;
+ treg = TIMER_REG_TVAL;
+ break;
+
+ case SYS_CNTHP_TVAL_EL2:
+ tmr = TIMER_HPTIMER;
+ treg = TIMER_REG_TVAL;
+ break;
+
+ case SYS_CNTHV_TVAL_EL2:
+ tmr = TIMER_HVTIMER;
+ treg = TIMER_REG_TVAL;
+ break;
+
case SYS_CNTP_CTL_EL0:
+ if (is_hyp_ctxt(vcpu) && vcpu_el2_e2h_is_set(vcpu))
+ tmr = TIMER_HPTIMER;
+ else
+ tmr = TIMER_PTIMER;
+ treg = TIMER_REG_CTL;
+ break;
+
+ case SYS_CNTV_CTL_EL0:
+ if (is_hyp_ctxt(vcpu) && vcpu_el2_e2h_is_set(vcpu))
+ tmr = TIMER_HVTIMER;
+ else
+ tmr = TIMER_VTIMER;
+ treg = TIMER_REG_CTL;
+ break;
+
case SYS_AARCH32_CNTP_CTL:
+ case SYS_CNTP_CTL_EL02:
tmr = TIMER_PTIMER;
treg = TIMER_REG_CTL;
break;
+
+ case SYS_CNTV_CTL_EL02:
+ tmr = TIMER_VTIMER;
+ treg = TIMER_REG_CTL;
+ break;
+
+ case SYS_CNTHP_CTL_EL2:
+ tmr = TIMER_HPTIMER;
+ treg = TIMER_REG_CTL;
+ break;
+
+ case SYS_CNTHV_CTL_EL2:
+ tmr = TIMER_HVTIMER;
+ treg = TIMER_REG_CTL;
+ break;
+
case SYS_CNTP_CVAL_EL0:
+ if (is_hyp_ctxt(vcpu) && vcpu_el2_e2h_is_set(vcpu))
+ tmr = TIMER_HPTIMER;
+ else
+ tmr = TIMER_PTIMER;
+ treg = TIMER_REG_CVAL;
+ break;
+
+ case SYS_CNTV_CVAL_EL0:
+ if (is_hyp_ctxt(vcpu) && vcpu_el2_e2h_is_set(vcpu))
+ tmr = TIMER_HVTIMER;
+ else
+ tmr = TIMER_VTIMER;
+ treg = TIMER_REG_CVAL;
+ break;
+
case SYS_AARCH32_CNTP_CVAL:
+ case SYS_CNTP_CVAL_EL02:
tmr = TIMER_PTIMER;
treg = TIMER_REG_CVAL;
break;
+
+ case SYS_CNTV_CVAL_EL02:
+ tmr = TIMER_VTIMER;
+ treg = TIMER_REG_CVAL;
+ break;
+
+ case SYS_CNTHP_CVAL_EL2:
+ tmr = TIMER_HPTIMER;
+ treg = TIMER_REG_CVAL;
+ break;
+
+ case SYS_CNTHV_CVAL_EL2:
+ tmr = TIMER_HVTIMER;
+ treg = TIMER_REG_CVAL;
+ break;
+
case SYS_CNTPCT_EL0:
case SYS_CNTPCTSS_EL0:
+ if (is_hyp_ctxt(vcpu))
+ tmr = TIMER_HPTIMER;
+ else
+ tmr = TIMER_PTIMER;
+ treg = TIMER_REG_CNT;
+ break;
+
case SYS_AARCH32_CNTPCT:
tmr = TIMER_PTIMER;
treg = TIMER_REG_CNT;
break;
+
default:
print_sys_reg_msg(p, "%s", "Unhandled trapped timer register");
kvm_inject_undefined(vcpu);
@@ -2640,6 +2744,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_CNTP_CTL_EL0), access_arch_timer },
{ SYS_DESC(SYS_CNTP_CVAL_EL0), access_arch_timer },
+ { SYS_DESC(SYS_CNTV_TVAL_EL0), access_arch_timer },
+ { SYS_DESC(SYS_CNTV_CTL_EL0), access_arch_timer },
+ { SYS_DESC(SYS_CNTV_CVAL_EL0), access_arch_timer },
+
/* PMEVCNTRn_EL0 */
PMU_PMEVCNTR_EL0(0),
PMU_PMEVCNTR_EL0(1),
@@ -2771,9 +2879,24 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG_VNCR(CNTVOFF_EL2, reset_val, 0),
EL2_REG(CNTHCTL_EL2, access_rw, reset_val, 0),
+ { SYS_DESC(SYS_CNTHP_TVAL_EL2), access_arch_timer },
+ EL2_REG(CNTHP_CTL_EL2, access_arch_timer, reset_val, 0),
+ EL2_REG(CNTHP_CVAL_EL2, access_arch_timer, reset_val, 0),
+
+ { SYS_DESC(SYS_CNTHV_TVAL_EL2), access_arch_timer },
+ EL2_REG(CNTHV_CTL_EL2, access_arch_timer, reset_val, 0),
+ EL2_REG(CNTHV_CVAL_EL2, access_arch_timer, reset_val, 0),
EL12_REG(CNTKCTL, access_rw, reset_val, 0),
+ { SYS_DESC(SYS_CNTP_TVAL_EL02), access_arch_timer },
+ { SYS_DESC(SYS_CNTP_CTL_EL02), access_arch_timer },
+ { SYS_DESC(SYS_CNTP_CVAL_EL02), access_arch_timer },
+
+ { SYS_DESC(SYS_CNTV_TVAL_EL02), access_arch_timer },
+ { SYS_DESC(SYS_CNTV_CTL_EL02), access_arch_timer },
+ { SYS_DESC(SYS_CNTV_CVAL_EL02), access_arch_timer },
+
EL2_REG(SP_EL2, NULL, reset_unknown, 0),
};
--
2.39.2
* [PATCH v11 27/43] KVM: arm64: nv: Sync nested timer state with FEAT_NV2
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (25 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 26/43] KVM: arm64: nv: Add handling of EL2-specific timer registers Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 28/43] KVM: arm64: nv: Publish emulated timer interrupt state in the in-memory state Marc Zyngier
` (19 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Emulating the timers with FEAT_NV2 is a bit odd, as the timers
can be reconfigured behind our back without the host hypervisor even
noticing. In the VHE case, that's an actual regression in the
architecture...
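A toy model of the redirection at play here (the struct layout and
names are purely illustrative): guest writes land in the VNCR page
and only reach the real timer state when KVM regains control on the
next exit:

#include <stdint.h>
#include <string.h>

/* Illustrative subset of the timer state that NV2 redirects to
 * memory; the actual VNCR page layout is architecturally defined. */
struct vncr_timer_state {
	uint64_t cntv_ctl, cntv_cval;
	uint64_t cntp_ctl, cntp_cval;
};

static struct vncr_timer_state hw_timers; /* stand-in for EL0 timer HW */

/* On exit from a non-VHE guest hypervisor, propagate whatever it
 * last wrote to the VNCR page into the real registers; anything it
 * did between exits was invisible to us until this point. */
static void sync_timers_on_exit(const struct vncr_timer_state *vncr)
{
	memcpy(&hw_timers, vncr, sizeof(hw_timers));
}

int main(void)
{
	struct vncr_timer_state vncr = { .cntv_ctl = 1 /* ENABLE */ };
	sync_timers_on_exit(&vncr);	/* only now does the "HW" see it */
	return (int)(hw_timers.cntv_ctl - 1);
}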
Co-developed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/arch_timer.c | 44 ++++++++++++++++++++++++++++++++++++
arch/arm64/kvm/arm.c | 3 +++
include/kvm/arm_arch_timer.h | 1 +
3 files changed, 48 insertions(+)
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index 9dec8c419bf4..e3f6feef2c83 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -906,6 +906,50 @@ void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)
kvm_timer_blocking(vcpu);
}
+void kvm_timer_sync_nested(struct kvm_vcpu *vcpu)
+{
+ /*
+ * When NV2 is on, guest hypervisors have their EL0 timer register
+ * accesses redirected to the VNCR page. Any guest action taken on
+ * the timer is postponed until the next exit, leading to a very
+ * poor quality of emulation.
+ */
+ if (!is_hyp_ctxt(vcpu))
+ return;
+
+ if (!vcpu_el2_e2h_is_set(vcpu)) {
+ /*
+ * A non-VHE guest hypervisor doesn't have any direct access
+ * to its timers: the EL2 registers trap (and the HW is
+ * fully emulated), while the EL0 registers access memory
+ * despite the access being notionally direct. Boo.
+ *
+ * We update the hardware timer registers with the
+ * latest value written by the guest to the VNCR page
+ * and let the hardware take care of the rest.
+ */
+ write_sysreg_el0(__vcpu_sys_reg(vcpu, CNTV_CTL_EL0), SYS_CNTV_CTL);
+ write_sysreg_el0(__vcpu_sys_reg(vcpu, CNTV_CVAL_EL0), SYS_CNTV_CVAL);
+ write_sysreg_el0(__vcpu_sys_reg(vcpu, CNTP_CTL_EL0), SYS_CNTP_CTL);
+ write_sysreg_el0(__vcpu_sys_reg(vcpu, CNTP_CVAL_EL0), SYS_CNTP_CVAL);
+ } else {
+ /*
+ * For a VHE guest hypervisor, the EL2 state is directly
+ * stored in the host EL0 timers, while the emulated EL0
+ * state is stored in the VNCR page. The latter could have
+ * been updated behind our back, and we must reset the
+ * emulation of the timers.
+ */
+ struct timer_map map;
+ get_timer_map(vcpu, &map);
+
+ soft_timer_cancel(&map.emul_vtimer->hrtimer);
+ soft_timer_cancel(&map.emul_ptimer->hrtimer);
+ timer_emulate(map.emul_vtimer);
+ timer_emulate(map.emul_ptimer);
+ }
+}
+
/*
* With a userspace irqchip we have to check if the guest de-asserted the
* timer and if so, unmask the timer irq signal on the host interrupt
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 2e76892c1a56..35f079c3026c 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1093,6 +1093,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
if (static_branch_unlikely(&userspace_irqchip_in_use))
kvm_timer_sync_user(vcpu);
+ if (vcpu_has_nv(vcpu))
+ kvm_timer_sync_nested(vcpu);
+
kvm_arch_vcpu_ctxsync_fp(vcpu);
/*
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index fd650a8789b9..6e3f6b7ff2b2 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -98,6 +98,7 @@ int __init kvm_timer_hyp_init(bool has_gic);
int kvm_timer_enable(struct kvm_vcpu *vcpu);
void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu);
void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
+void kvm_timer_sync_nested(struct kvm_vcpu *vcpu);
void kvm_timer_sync_user(struct kvm_vcpu *vcpu);
bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu);
void kvm_timer_update_run(struct kvm_vcpu *vcpu);
--
2.39.2
* [PATCH v11 28/43] KVM: arm64: nv: Publish emulated timer interrupt state in the in-memory state
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (26 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 27/43] KVM: arm64: nv: Sync nested timer state with FEAT_NV2 Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 29/43] KVM: arm64: nv: Load timer before the GIC Marc Zyngier
` (18 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
With FEAT_NV2, the EL0 timer state is entirely stored in memory,
meaning that the hypervisor can only provide a very poor emulation.
The only thing we can really do is to publish the interrupt state
in the guest view of CNT{P,V}_CTL_EL0, and defer everything else
to the next exit.
Only FEAT_ECV will allow us to fix it, at the cost of extra trapping.
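The bit manipulation itself is simple. A standalone sketch of how
the interrupt status gets published into the guest-visible control
value (ISTATUS is bit 2 of CNT{P,V}_CTL per the architecture):

#include <stdbool.h>
#include <stdint.h>

#define ARCH_TIMER_CTRL_IT_STAT	(1U << 2)	/* ISTATUS */

/* Mirror of the logic added to kvm_timer_update_irq(): reflect the
 * computed interrupt line level in the in-memory CTL register, so a
 * NV2 guest hypervisor polling the VNCR page sees a consistent view. */
static uint32_t publish_istatus(uint32_t ctl, bool level)
{
	if (level)
		ctl |= ARCH_TIMER_CTRL_IT_STAT;
	else
		ctl &= ~ARCH_TIMER_CTRL_IT_STAT;
	return ctl;
}

int main(void)
{
	uint32_t ctl = 1;			/* ENABLE */
	ctl = publish_istatus(ctl, true);
	return !(ctl & ARCH_TIMER_CTRL_IT_STAT);
}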
Suggested-by: Chase Conklin <chase.conklin@arm.com>
Suggested-by: Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/arch_timer.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index e3f6feef2c83..dba92bbe4617 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -447,6 +447,25 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
{
int ret;
+ /*
+ * Paper over NV2 brokenness by publishing the interrupt status
+ * bit. This still results in a poor quality of emulation (guest
+ * writes will have no effect until the next exit).
+ *
+ * But hey, it's fast, right?
+ */
+ if (is_hyp_ctxt(vcpu) &&
+ (timer_ctx == vcpu_vtimer(vcpu) || timer_ctx == vcpu_ptimer(vcpu))) {
+ u32 ctl = timer_get_ctl(timer_ctx);
+
+ if (new_level)
+ ctl |= ARCH_TIMER_CTRL_IT_STAT;
+ else
+ ctl &= ~ARCH_TIMER_CTRL_IT_STAT;
+
+ timer_set_ctl(timer_ctx, ctl);
+ }
+
timer_ctx->irq.level = new_level;
trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_irq(timer_ctx),
timer_ctx->irq.level);
--
2.39.2
* [PATCH v11 29/43] KVM: arm64: nv: Load timer before the GIC
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (27 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 28/43] KVM: arm64: nv: Publish emulated timer interrupt state in the in-memory state Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 30/43] KVM: arm64: nv: Nested GICv3 Support Marc Zyngier
` (17 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
In order for vgic_v3_load_nested to be able to observe which timer
interrupts have the HW bit set for the current context, the timers
must have been loaded in the new mode, with the right timers mapped
to their corresponding HW IRQs.
At the moment, we load the GIC first, meaning that timer interrupts
injected into an L2 guest will never have the HW bit set (we see the
old configuration).
Swapping the two loads solves this particular problem.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/arm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 35f079c3026c..683a2e6ec799 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -453,8 +453,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
vcpu->cpu = cpu;
- kvm_vgic_load(vcpu);
kvm_timer_vcpu_load(vcpu);
+ kvm_vgic_load(vcpu);
if (has_vhe())
kvm_vcpu_load_vhe(vcpu);
kvm_arch_vcpu_load_fp(vcpu);
--
2.39.2
* [PATCH v11 30/43] KVM: arm64: nv: Nested GICv3 Support
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (28 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 29/43] KVM: arm64: nv: Load timer before the GIC Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 31/43] KVM: arm64: nv: Don't block in WFI from nested state Marc Zyngier
` (16 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
When entering a nested VM, we set up the hypervisor control interface
based on what the guest hypervisor has set. In particular, we inspect
each list register written by the guest hypervisor to find out whether
its HW bit is set. If so, we translate the hardware IRQ number from
the guest's point of view into the real hardware IRQ number, provided
there is a mapping.
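As a standalone illustration of that translation (the bit positions
are assumed from the GICv3 list register layout as used by the
kernel's arm-gic-v3.h: HW at bit 61, a 10-bit pINTID field at bit 32):

#include <stdint.h>
#include <stdio.h>

#define ICH_LR_HW		(1ULL << 61)
#define ICH_LR_PHYS_ID_SHIFT	32
#define ICH_LR_PHYS_ID_MASK	(0x3ffULL << ICH_LR_PHYS_ID_SHIFT)

/* Rewrite the pINTID field of a guest-provided list register with
 * the real host interrupt number, as the shadow-LR code does. */
static uint64_t translate_lr(uint64_t lr, uint32_t host_intid)
{
	if (!(lr & ICH_LR_HW))
		return lr;	/* purely virtual: nothing to translate */
	lr &= ~ICH_LR_PHYS_ID_MASK;
	lr |= (uint64_t)host_intid << ICH_LR_PHYS_ID_SHIFT;
	return lr;
}

int main(void)
{
	/* The guest hyp programmed vINTID 27 as the physical ID;
	 * remap it to an (assumed) host INTID of 23. */
	uint64_t lr = ICH_LR_HW | (27ULL << ICH_LR_PHYS_ID_SHIFT);
	printf("%llx\n", (unsigned long long)translate_lr(lr, 23));
	return 0;
}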
Co-developed-by: Jintack Lim <jintack@cs.columbia.edu>
Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
[Christoffer: Redesigned execution flow around vcpu load/put]
Co-developed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
[maz: Rewritten to support GICv3 instead of GICv2, NV2 support]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 43 ++++-
arch/arm64/include/asm/kvm_hyp.h | 2 +
arch/arm64/include/asm/kvm_nested.h | 1 +
arch/arm64/kvm/Makefile | 2 +-
arch/arm64/kvm/arm.c | 11 ++
arch/arm64/kvm/hyp/vgic-v3-sr.c | 6 +-
arch/arm64/kvm/nested.c | 16 ++
arch/arm64/kvm/sys_regs.c | 93 +++++++++-
arch/arm64/kvm/vgic/vgic-init.c | 35 ++++
arch/arm64/kvm/vgic/vgic-v3-nested.c | 253 +++++++++++++++++++++++++++
arch/arm64/kvm/vgic/vgic-v3.c | 35 +++-
arch/arm64/kvm/vgic/vgic.c | 29 +++
arch/arm64/kvm/vgic/vgic.h | 10 ++
include/kvm/arm_vgic.h | 16 ++
14 files changed, 537 insertions(+), 15 deletions(-)
create mode 100644 arch/arm64/kvm/vgic/vgic-v3-nested.c
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index eb96fe9b686e..f96fc5a3dde0 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -44,13 +44,14 @@
#define KVM_REQ_SLEEP \
KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
-#define KVM_REQ_IRQ_PENDING KVM_ARCH_REQ(1)
-#define KVM_REQ_VCPU_RESET KVM_ARCH_REQ(2)
-#define KVM_REQ_RECORD_STEAL KVM_ARCH_REQ(3)
-#define KVM_REQ_RELOAD_GICv4 KVM_ARCH_REQ(4)
-#define KVM_REQ_RELOAD_PMU KVM_ARCH_REQ(5)
-#define KVM_REQ_SUSPEND KVM_ARCH_REQ(6)
-#define KVM_REQ_RESYNC_PMU_EL0 KVM_ARCH_REQ(7)
+#define KVM_REQ_IRQ_PENDING KVM_ARCH_REQ(1)
+#define KVM_REQ_VCPU_RESET KVM_ARCH_REQ(2)
+#define KVM_REQ_RECORD_STEAL KVM_ARCH_REQ(3)
+#define KVM_REQ_RELOAD_GICv4 KVM_ARCH_REQ(4)
+#define KVM_REQ_RELOAD_PMU KVM_ARCH_REQ(5)
+#define KVM_REQ_SUSPEND KVM_ARCH_REQ(6)
+#define KVM_REQ_RESYNC_PMU_EL0 KVM_ARCH_REQ(7)
+#define KVM_REQ_GUEST_HYP_IRQ_PENDING KVM_ARCH_REQ(8)
#define KVM_DIRTY_LOG_MANUAL_CAPS (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
KVM_DIRTY_LOG_INITIALLY_SET)
@@ -510,6 +511,34 @@ enum vcpu_sysreg {
VNCR(CNTP_CVAL_EL0),
VNCR(CNTP_CTL_EL0),
+ VNCR(ICH_LR0_EL2),
+ VNCR(ICH_LR1_EL2),
+ VNCR(ICH_LR2_EL2),
+ VNCR(ICH_LR3_EL2),
+ VNCR(ICH_LR4_EL2),
+ VNCR(ICH_LR5_EL2),
+ VNCR(ICH_LR6_EL2),
+ VNCR(ICH_LR7_EL2),
+ VNCR(ICH_LR8_EL2),
+ VNCR(ICH_LR9_EL2),
+ VNCR(ICH_LR10_EL2),
+ VNCR(ICH_LR11_EL2),
+ VNCR(ICH_LR12_EL2),
+ VNCR(ICH_LR13_EL2),
+ VNCR(ICH_LR14_EL2),
+ VNCR(ICH_LR15_EL2),
+
+ VNCR(ICH_AP0R0_EL2),
+ VNCR(ICH_AP0R1_EL2),
+ VNCR(ICH_AP0R2_EL2),
+ VNCR(ICH_AP0R3_EL2),
+ VNCR(ICH_AP1R0_EL2),
+ VNCR(ICH_AP1R1_EL2),
+ VNCR(ICH_AP1R2_EL2),
+ VNCR(ICH_AP1R3_EL2),
+ VNCR(ICH_HCR_EL2),
+ VNCR(ICH_VMCR_EL2),
+
NR_SYS_REGS /* Nothing after this line! */
};
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 145ce73fc16c..5b270f20b84f 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -76,6 +76,8 @@ DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu);
+u64 __gic_v3_get_lr(unsigned int lr);
+
void __vgic_v3_save_state(struct vgic_v3_cpu_if *cpu_if);
void __vgic_v3_restore_state(struct vgic_v3_cpu_if *cpu_if);
void __vgic_v3_activate_traps(struct vgic_v3_cpu_if *cpu_if);
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 2f427348506a..4f94aef8a750 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -73,6 +73,7 @@ extern void kvm_s2_mmu_iterate_by_vmid(struct kvm *kvm, u16 vmid,
const union tlbi_info *));
extern void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu);
extern void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu);
+extern void check_nested_vcpu_requests(struct kvm_vcpu *vcpu);
struct kvm_s2_trans {
phys_addr_t output;
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index c2717a8f12f5..4462bede5c60 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -20,7 +20,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
vgic/vgic-v3.o vgic/vgic-v4.o \
vgic/vgic-mmio.o vgic/vgic-mmio-v2.o \
vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \
- vgic/vgic-its.o vgic/vgic-debug.o
+ vgic/vgic-its.o vgic/vgic-debug.o vgic/vgic-v3-nested.o
kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu.o
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 683a2e6ec799..95760ed448bf 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -681,6 +681,10 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
ret = kvm_init_nv_sysregs(vcpu->kvm);
if (ret)
return ret;
+
+ ret = kvm_vgic_vcpu_nv_init(vcpu);
+ if (ret)
+ return ret;
}
ret = kvm_timer_enable(vcpu);
@@ -881,6 +885,8 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
if (kvm_dirty_ring_check_request(vcpu))
return 0;
+
+ check_nested_vcpu_requests(vcpu);
}
return 1;
@@ -1018,6 +1024,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
* preserved on VMID roll-over if the task was preempted,
* making a thread's VMID inactive. So we need to call
* kvm_arm_vmid_update() in non-premptible context.
+ *
+ * Note that this must happen after the check_vcpu_request()
+ * call to pick the correct s2_mmu structure, as a pending
+ * nested exception (IRQ, for example) can trigger a change
+ * in translation regime.
*/
if (kvm_arm_vmid_update(&vcpu->arch.hw_mmu->vmid) &&
has_vhe())
diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
index 6cb638b184b1..aaaea35099e5 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -18,7 +18,7 @@
#define vtr_to_nr_pre_bits(v) ((((u32)(v) >> 26) & 7) + 1)
#define vtr_to_nr_apr_regs(v) (1 << (vtr_to_nr_pre_bits(v) - 5))
-static u64 __gic_v3_get_lr(unsigned int lr)
+u64 __gic_v3_get_lr(unsigned int lr)
{
switch (lr & 0xf) {
case 0:
@@ -484,7 +484,7 @@ static int __vgic_v3_get_group(struct kvm_vcpu *vcpu)
static int __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, u32 vmcr,
u64 *lr_val)
{
- unsigned int used_lrs = vcpu->arch.vgic_cpu.vgic_v3.used_lrs;
+ unsigned int used_lrs = kern_hyp_va(vcpu->arch.vgic_cpu.current_cpu_if)->used_lrs;
u8 priority = GICv3_IDLE_PRIORITY;
int i, lr = -1;
@@ -523,7 +523,7 @@ static int __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, u32 vmcr,
static int __vgic_v3_find_active_lr(struct kvm_vcpu *vcpu, int intid,
u64 *lr_val)
{
- unsigned int used_lrs = vcpu->arch.vgic_cpu.vgic_v3.used_lrs;
+ unsigned int used_lrs = kern_hyp_va(vcpu->arch.vgic_cpu.current_cpu_if)->used_lrs;
int i;
for (i = 0; i < used_lrs; i++) {
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 61af8a42389d..ad1df851997d 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -612,6 +612,22 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm)
kvm_uninit_stage2_mmu(kvm);
}
+bool vgic_state_is_nested(struct kvm_vcpu *vcpu)
+{
+ bool imo = __vcpu_sys_reg(vcpu, HCR_EL2) & HCR_IMO;
+ bool fmo = __vcpu_sys_reg(vcpu, HCR_EL2) & HCR_FMO;
+
+ WARN_ONCE(imo != fmo, "Separate virtual IRQ/FIQ settings not supported\n");
+
+ return vcpu_has_nv(vcpu) && imo && fmo && !is_hyp_ctxt(vcpu);
+}
+
+void check_nested_vcpu_requests(struct kvm_vcpu *vcpu)
+{
+ if (kvm_check_request(KVM_REQ_GUEST_HYP_IRQ_PENDING, vcpu))
+ kvm_inject_nested_irq(vcpu);
+}
+
/*
* Our emulated CPU doesn't support all the possible features. For the
* sake of simplicity (and probably mental sanity), wipe out a number
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a24feb4b2839..c9e55c7697d5 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -17,6 +17,8 @@
#include <linux/printk.h>
#include <linux/uaccess.h>
+#include <linux/irqchip/arm-gic-v3.h>
+
#include <asm/cacheflush.h>
#include <asm/cputype.h>
#include <asm/debug-monitors.h>
@@ -522,7 +524,13 @@ static bool access_gic_sre(struct kvm_vcpu *vcpu,
if (p->is_write)
return ignore_write(vcpu, p);
- p->regval = vcpu->arch.vgic_cpu.vgic_v3.vgic_sre;
+ if (p->Op1 == 4) { /* ICC_SRE_EL2 */
+ p->regval = (ICC_SRE_EL2_ENABLE | ICC_SRE_EL2_SRE |
+ ICC_SRE_EL1_DIB | ICC_SRE_EL1_DFB);
+ } else { /* ICC_SRE_EL1 */
+ p->regval = vcpu->arch.vgic_cpu.vgic_v3.vgic_sre;
+ }
+
return true;
}
@@ -2338,6 +2346,54 @@ static u64 reset_hcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
return __vcpu_sys_reg(vcpu, r->reg) = val;
}
+static bool access_gic_vtr(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ if (p->is_write)
+ return write_to_read_only(vcpu, p, r);
+
+ p->regval = kvm_vgic_global_state.ich_vtr_el2;
+
+ return true;
+}
+
+static bool access_gic_misr(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ if (p->is_write)
+ return write_to_read_only(vcpu, p, r);
+
+ p->regval = vgic_v3_get_misr(vcpu);
+
+ return true;
+}
+
+static bool access_gic_eisr(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ if (p->is_write)
+ return write_to_read_only(vcpu, p, r);
+
+ p->regval = vgic_v3_get_eisr(vcpu);
+
+ return true;
+}
+
+static bool access_gic_elrsr(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ if (p->is_write)
+ return write_to_read_only(vcpu, p, r);
+
+ p->regval = vgic_v3_get_elrsr(vcpu);
+
+ return true;
+}
+
/*
* Architected system registers.
* Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@@ -2874,6 +2930,41 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_RMR_EL2), trap_undef },
{ SYS_DESC(SYS_VDISR_EL2), trap_undef },
+ EL2_REG_VNCR(ICH_AP0R0_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_AP0R1_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_AP0R2_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_AP0R3_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_AP1R0_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_AP1R1_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_AP1R2_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_AP1R3_EL2, reset_val, 0),
+
+ { SYS_DESC(SYS_ICC_SRE_EL2), access_gic_sre },
+
+ EL2_REG_VNCR(ICH_HCR_EL2, reset_val, 0),
+ { SYS_DESC(SYS_ICH_VTR_EL2), access_gic_vtr },
+ { SYS_DESC(SYS_ICH_MISR_EL2), access_gic_misr },
+ { SYS_DESC(SYS_ICH_EISR_EL2), access_gic_eisr },
+ { SYS_DESC(SYS_ICH_ELRSR_EL2), access_gic_elrsr },
+ EL2_REG_VNCR(ICH_VMCR_EL2, reset_val, 0),
+
+ EL2_REG_VNCR(ICH_LR0_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_LR1_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_LR2_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_LR3_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_LR4_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_LR5_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_LR6_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_LR7_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_LR8_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_LR9_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_LR10_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_LR11_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_LR12_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_LR13_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_LR14_EL2, reset_val, 0),
+ EL2_REG_VNCR(ICH_LR15_EL2, reset_val, 0),
+
EL2_REG(CONTEXTIDR_EL2, access_rw, reset_val, 0),
EL2_REG(TPIDR_EL2, access_rw, reset_val, 0),
diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
index c8c3cb812783..9cc7809c9c5e 100644
--- a/arch/arm64/kvm/vgic/vgic-init.c
+++ b/arch/arm64/kvm/vgic/vgic-init.c
@@ -6,10 +6,12 @@
#include <linux/uaccess.h>
#include <linux/interrupt.h>
#include <linux/cpu.h>
+#include <linux/irq.h>
#include <linux/kvm_host.h>
#include <kvm/arm_vgic.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_mmu.h>
+#include <asm/kvm_nested.h>
#include "vgic.h"
/*
@@ -182,6 +184,18 @@ static int kvm_vgic_dist_init(struct kvm *kvm, unsigned int nr_spis)
return 0;
}
+int kvm_vgic_vcpu_nv_init(struct kvm_vcpu *vcpu)
+{
+ int ret;
+
+ /* Cope with vintage userspace. Maybe we should fail instead */
+ if (vcpu->kvm->arch.vgic.maint_irq == 0)
+ vcpu->kvm->arch.vgic.maint_irq = kvm_vgic_global_state.maint_irq;
+ ret = kvm_vgic_set_owner(vcpu, vcpu->kvm->arch.vgic.maint_irq, vcpu);
+
+ return ret;
+}
+
/**
* kvm_vgic_vcpu_init() - Initialize static VGIC VCPU data
* structures and register VCPU-specific KVM iodevs
@@ -506,12 +520,23 @@ void kvm_vgic_cpu_down(void)
static irqreturn_t vgic_maintenance_handler(int irq, void *data)
{
+ struct kvm_vcpu *vcpu = *(struct kvm_vcpu **)data;
+
/*
* We cannot rely on the vgic maintenance interrupt to be
* delivered synchronously. This means we can only use it to
* exit the VM, and we perform the handling of EOIed
* interrupts on the exit path (see vgic_fold_lr_state).
*/
+
+ /* If not nested, deactivate */
+ if (!vcpu || !vgic_state_is_nested(vcpu)) {
+ irq_set_irqchip_state(irq, IRQCHIP_STATE_ACTIVE, false);
+ return IRQ_HANDLED;
+ }
+
+ /* Assume nested from now */
+ vgic_v3_handle_nested_maint_irq(vcpu);
return IRQ_HANDLED;
}
@@ -610,6 +635,16 @@ int kvm_vgic_hyp_init(void)
return ret;
}
+ if (has_mask) {
+ ret = irq_set_vcpu_affinity(kvm_vgic_global_state.maint_irq,
+ kvm_get_running_vcpus());
+ if (ret) {
+ kvm_err("Error setting vcpu affinity\n");
+ free_percpu_irq(kvm_vgic_global_state.maint_irq, kvm_get_running_vcpus());
+ return ret;
+ }
+ }
+
kvm_info("vgic interrupt IRQ%d\n", kvm_vgic_global_state.maint_irq);
return 0;
}
diff --git a/arch/arm64/kvm/vgic/vgic-v3-nested.c b/arch/arm64/kvm/vgic/vgic-v3-nested.c
new file mode 100644
index 000000000000..e4919cc82daf
--- /dev/null
+++ b/arch/arm64/kvm/vgic/vgic-v3-nested.c
@@ -0,0 +1,253 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/cpu.h>
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/uaccess.h>
+
+#include <linux/irqchip/arm-gic-v3.h>
+
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_arm.h>
+#include <kvm/arm_vgic.h>
+
+#include "vgic.h"
+
+/*
+ * The shadow registers loaded to the hardware when running a L2 guest
+ * with the virtual IMO/FMO bits set.
+ */
+static DEFINE_PER_CPU(struct vgic_v3_cpu_if, shadow_cpuif);
+
+static inline struct vgic_v3_cpu_if *vcpu_shadow_if(struct kvm_vcpu *vcpu)
+{
+ return this_cpu_ptr(&shadow_cpuif);
+}
+
+static inline bool lr_triggers_eoi(u64 lr)
+{
+ return !(lr & (ICH_LR_STATE | ICH_LR_HW)) && (lr & ICH_LR_EOI);
+}
+
+u16 vgic_v3_get_eisr(struct kvm_vcpu *vcpu)
+{
+ u16 reg = 0;
+ int i;
+
+ if (vgic_state_is_nested(vcpu))
+ return read_sysreg_s(SYS_ICH_EISR_EL2);
+
+ for (i = 0; i < kvm_vgic_global_state.nr_lr; i++) {
+ if (lr_triggers_eoi(__vcpu_sys_reg(vcpu, ICH_LRN(i))))
+ reg |= BIT(i);
+ }
+
+ return reg;
+}
+
+u16 vgic_v3_get_elrsr(struct kvm_vcpu *vcpu)
+{
+ u16 reg = 0;
+ int i;
+
+ if (vgic_state_is_nested(vcpu))
+ return read_sysreg_s(SYS_ICH_ELRSR_EL2);
+
+ for (i = 0; i < kvm_vgic_global_state.nr_lr; i++) {
+ if (!(__vcpu_sys_reg(vcpu, ICH_LRN(i)) & ICH_LR_STATE))
+ reg |= BIT(i);
+ }
+
+ return reg;
+}
+
+u64 vgic_v3_get_misr(struct kvm_vcpu *vcpu)
+{
+ int nr_lr = kvm_vgic_global_state.nr_lr;
+ u64 reg = 0;
+
+ if (vgic_state_is_nested(vcpu))
+ return read_sysreg_s(SYS_ICH_MISR_EL2);
+
+ if (vgic_v3_get_eisr(vcpu))
+ reg |= ICH_MISR_EOI;
+
+ if (__vcpu_sys_reg(vcpu, ICH_HCR_EL2) & ICH_HCR_UIE) {
+ int used_lrs;
+
+ used_lrs = nr_lr - hweight16(vgic_v3_get_elrsr(vcpu));
+ if (used_lrs <= 1)
+ reg |= ICH_MISR_U;
+ }
+
+ /* TODO: Support remaining bits in this register */
+ return reg;
+}
+
+/*
+ * For LRs which have HW bit set such as timer interrupts, we modify them to
+ * have the host hardware interrupt number instead of the virtual one programmed
+ * by the guest hypervisor.
+ */
+static void vgic_v3_create_shadow_lr(struct kvm_vcpu *vcpu)
+{
+ struct vgic_v3_cpu_if *s_cpu_if = vcpu_shadow_if(vcpu);
+ struct vgic_irq *irq;
+ int i, used_lrs = 0;
+
+ for (i = 0; i < kvm_vgic_global_state.nr_lr; i++) {
+ u64 lr = __vcpu_sys_reg(vcpu, ICH_LRN(i));
+ int l1_irq;
+
+ if (!(lr & ICH_LR_STATE))
+ lr = 0;
+
+ if (!(lr & ICH_LR_HW))
+ goto next;
+
+ /* We have the HW bit set */
+ l1_irq = (lr & ICH_LR_PHYS_ID_MASK) >> ICH_LR_PHYS_ID_SHIFT;
+ irq = vgic_get_irq(vcpu->kvm, vcpu, l1_irq);
+
+ if (!irq || !irq->hw) {
+ /* There was no real mapping, so nuke the HW bit */
+ lr &= ~ICH_LR_HW;
+ if (irq)
+ vgic_put_irq(vcpu->kvm, irq);
+ goto next;
+ }
+
+ /* Translate the virtual mapping to the real one */
+ lr &= ~ICH_LR_EOI; /* Why? */
+ lr &= ~ICH_LR_PHYS_ID_MASK;
+ lr |= (u64)irq->hwintid << ICH_LR_PHYS_ID_SHIFT;
+ vgic_put_irq(vcpu->kvm, irq);
+
+next:
+ s_cpu_if->vgic_lr[i] = lr;
+ used_lrs = i + 1;
+ }
+
+ s_cpu_if->used_lrs = used_lrs;
+}
+
+void vgic_v3_sync_nested(struct kvm_vcpu *vcpu)
+{
+ struct vgic_v3_cpu_if *s_cpu_if = vcpu_shadow_if(vcpu);
+ struct vgic_irq *irq;
+ int i;
+
+ for (i = 0; i < s_cpu_if->used_lrs; i++) {
+ u64 lr = __vcpu_sys_reg(vcpu, ICH_LRN(i));
+ int l1_irq;
+
+ if (!(lr & ICH_LR_HW) || !(lr & ICH_LR_STATE))
+ continue;
+
+ /*
+ * If we had a HW lr programmed by the guest hypervisor, we
+ * need to emulate the HW effect between the guest hypervisor
+ * and the nested guest.
+ */
+ l1_irq = (lr & ICH_LR_PHYS_ID_MASK) >> ICH_LR_PHYS_ID_SHIFT;
+ irq = vgic_get_irq(vcpu->kvm, vcpu, l1_irq);
+ if (!irq)
+ continue; /* oh well, the guest hyp is broken */
+
+ lr = __gic_v3_get_lr(i);
+ if (!(lr & ICH_LR_STATE))
+ irq->active = false;
+
+ vgic_put_irq(vcpu->kvm, irq);
+ }
+}
+
+void vgic_v3_create_shadow_state(struct kvm_vcpu *vcpu)
+{
+ struct vgic_v3_cpu_if *s_cpu_if = vcpu_shadow_if(vcpu);
+ struct vgic_v3_cpu_if *host_if = &vcpu->arch.vgic_cpu.vgic_v3;
+ int i;
+
+ s_cpu_if->vgic_hcr = __vcpu_sys_reg(vcpu, ICH_HCR_EL2);
+ s_cpu_if->vgic_vmcr = __vcpu_sys_reg(vcpu, ICH_VMCR_EL2);
+ s_cpu_if->vgic_sre = host_if->vgic_sre;
+
+ for (i = 0; i < 4; i++) {
+ s_cpu_if->vgic_ap0r[i] = __vcpu_sys_reg(vcpu, ICH_AP0RN(i));
+ s_cpu_if->vgic_ap1r[i] = __vcpu_sys_reg(vcpu, ICH_AP1RN(i));
+ }
+
+ vgic_v3_create_shadow_lr(vcpu);
+
+ vcpu->arch.vgic_cpu.current_cpu_if = s_cpu_if;
+}
+
+void vgic_v3_load_nested(struct kvm_vcpu *vcpu)
+{
+ struct vgic_irq *irq;
+ unsigned long flags;
+
+ __vgic_v3_restore_state(vcpu_shadow_if(vcpu));
+
+ irq = vgic_get_irq(vcpu->kvm, vcpu, vcpu->kvm->arch.vgic.maint_irq);
+ raw_spin_lock_irqsave(&irq->irq_lock, flags);
+ if (irq->line_level || irq->active)
+ irq_set_irqchip_state(kvm_vgic_global_state.maint_irq,
+ IRQCHIP_STATE_ACTIVE, true);
+ raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+ vgic_put_irq(vcpu->kvm, irq);
+}
+
+void vgic_v3_put_nested(struct kvm_vcpu *vcpu)
+{
+ struct vgic_v3_cpu_if *s_cpu_if = vcpu_shadow_if(vcpu);
+ int i;
+
+ __vgic_v3_save_state(s_cpu_if);
+
+ /*
+ * Translate the shadow state HW fields back to the virtual ones
+ * before copying the shadow struct back to the nested one.
+ */
+ __vcpu_sys_reg(vcpu, ICH_HCR_EL2) = s_cpu_if->vgic_hcr;
+ __vcpu_sys_reg(vcpu, ICH_VMCR_EL2) = s_cpu_if->vgic_vmcr;
+
+ for (i = 0; i < 4; i++) {
+ __vcpu_sys_reg(vcpu, ICH_AP0RN(i)) = s_cpu_if->vgic_ap0r[i];
+ __vcpu_sys_reg(vcpu, ICH_AP1RN(i)) = s_cpu_if->vgic_ap1r[i];
+ }
+
+ for (i = 0; i < s_cpu_if->used_lrs; i++) {
+ u64 val = __vcpu_sys_reg(vcpu, ICH_LRN(i));
+
+ val &= ~ICH_LR_STATE;
+ val |= s_cpu_if->vgic_lr[i] & ICH_LR_STATE;
+
+ __vcpu_sys_reg(vcpu, ICH_LRN(i)) = val;
+ s_cpu_if->vgic_lr[i] = 0;
+ }
+
+ irq_set_irqchip_state(kvm_vgic_global_state.maint_irq,
+ IRQCHIP_STATE_ACTIVE, false);
+}
+
+void vgic_v3_handle_nested_maint_irq(struct kvm_vcpu *vcpu)
+{
+ /*
+ * If we exit a nested VM with a pending maintenance interrupt from the
+ * GIC, then we need to forward this to the guest hypervisor so that it
+ * can re-sync the appropriate LRs and sample level triggered interrupts
+ * again.
+ */
+ if (vgic_state_is_nested(vcpu)) {
+ bool state;
+
+ state = __vcpu_sys_reg(vcpu, ICH_HCR_EL2) & ICH_HCR_EN;
+ state &= vgic_v3_get_misr(vcpu);
+
+ kvm_vgic_inject_irq(vcpu->kvm, vcpu,
+ vcpu->kvm->arch.vgic.maint_irq, state, vcpu);
+ }
+}
diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index 9465d3706ab9..2a6a864c29ee 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -9,6 +9,7 @@
#include <kvm/arm_vgic.h>
#include <asm/kvm_hyp.h>
#include <asm/kvm_mmu.h>
+#include <asm/kvm_nested.h>
#include <asm/kvm_asm.h>
#include "vgic.h"
@@ -721,6 +722,19 @@ void vgic_v3_load(struct kvm_vcpu *vcpu)
{
struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+ vcpu->arch.vgic_cpu.current_cpu_if = cpu_if;
+
+ /*
+ * If the vgic is in nested state, populate the shadow state
+ * from the guest's nested state. As vgic_v3_load_nested()
+ * will only load LRs, let's deal with the rest of the state
+ * here as if it was a non-nested state. Cunning.
+ */
+ if (vgic_state_is_nested(vcpu)) {
+ vgic_v3_create_shadow_state(vcpu);
+ cpu_if = vcpu->arch.vgic_cpu.current_cpu_if;
+ }
+
/*
* If dealing with a GICv2 emulation on GICv3, VMCR_EL2.VFIQen
* is dependent on ICC_SRE_EL1.SRE, and we have to perform the
@@ -734,12 +748,15 @@ void vgic_v3_load(struct kvm_vcpu *vcpu)
if (has_vhe())
__vgic_v3_activate_traps(cpu_if);
- WARN_ON(vgic_v4_load(vcpu));
+ if (vgic_state_is_nested(vcpu))
+ vgic_v3_load_nested(vcpu);
+ else
+ WARN_ON(vgic_v4_load(vcpu));
}
void vgic_v3_vmcr_sync(struct kvm_vcpu *vcpu)
{
- struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+ struct vgic_v3_cpu_if *cpu_if = vcpu->arch.vgic_cpu.current_cpu_if;
if (likely(cpu_if->vgic_sre))
cpu_if->vgic_vmcr = kvm_call_hyp_ret(__vgic_v3_read_vmcr);
@@ -747,8 +764,14 @@ void vgic_v3_vmcr_sync(struct kvm_vcpu *vcpu)
void vgic_v3_put(struct kvm_vcpu *vcpu)
{
- struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+ struct vgic_v3_cpu_if *cpu_if = vcpu->arch.vgic_cpu.current_cpu_if;
+ /*
+ * vgic_v4_put will do nothing if we were not resident. This
+ * covers both the cases where we've blocked (we already have
+ * done a vgic_v4_put) and when running a nested guest (the
+ * vPE was never resident in order to generate a doorbell).
+ */
WARN_ON(vgic_v4_put(vcpu));
vgic_v3_vmcr_sync(vcpu);
@@ -757,4 +780,10 @@ void vgic_v3_put(struct kvm_vcpu *vcpu)
if (has_vhe())
__vgic_v3_deactivate_traps(cpu_if);
+
+ if (vgic_state_is_nested(vcpu))
+ vgic_v3_put_nested(vcpu);
+
+ /* Default back to the non-nested state for sanity */
+ vcpu->arch.vgic_cpu.current_cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
}
diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c
index db2a95762b1b..b40b3c3694b3 100644
--- a/arch/arm64/kvm/vgic/vgic.c
+++ b/arch/arm64/kvm/vgic/vgic.c
@@ -876,6 +876,12 @@ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
{
int used_lrs;
+ /* If nesting, emulate the HW effect from L0 to L1 */
+ if (vgic_state_is_nested(vcpu)) {
+ vgic_v3_sync_nested(vcpu);
+ return;
+ }
+
/* An empty ap_list_head implies used_lrs == 0 */
if (list_empty(&vcpu->arch.vgic_cpu.ap_list_head))
return;
@@ -904,6 +910,29 @@ static inline void vgic_restore_state(struct kvm_vcpu *vcpu)
/* Flush our emulation state into the GIC hardware before entering the guest. */
void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
{
+ /*
+ * If in a nested state, we must return early. Two possibilities:
+ *
+ * - If we have any pending IRQ for the guest and the guest
+ * expects IRQs to be handled in its virtual EL2 mode (the
+ * virtual IMO bit is set) and it is not already running in
+ * virtual EL2 mode, then we have to emulate an IRQ
+ * exception to virtual EL2.
+ *
+ * We do that by placing a request to ourselves which will
+ * abort the entry procedure and inject the exception at the
+ * beginning of the run loop.
+ *
+ * - Otherwise, do exactly *NOTHING*. The guest state is
+ * already loaded, and we can carry on with running it.
+ */
+ if (vgic_state_is_nested(vcpu)) {
+ if (kvm_vgic_vcpu_pending_irq(vcpu))
+ kvm_make_request(KVM_REQ_GUEST_HYP_IRQ_PENDING, vcpu);
+
+ return;
+ }
+
/*
* If there are no virtual interrupts active or pending for this
* VCPU, then there is no work to do and we can bail out without
diff --git a/arch/arm64/kvm/vgic/vgic.h b/arch/arm64/kvm/vgic/vgic.h
index 0ab09b0d4440..4e2624e2f3a8 100644
--- a/arch/arm64/kvm/vgic/vgic.h
+++ b/arch/arm64/kvm/vgic/vgic.h
@@ -342,4 +342,14 @@ void vgic_v4_configure_vsgis(struct kvm *kvm);
void vgic_v4_get_vlpi_state(struct vgic_irq *irq, bool *val);
int vgic_v4_request_vpe_irq(struct kvm_vcpu *vcpu, int irq);
+void vgic_v3_sync_nested(struct kvm_vcpu *vcpu);
+void vgic_v3_create_shadow_state(struct kvm_vcpu *vcpu);
+void vgic_v3_load_nested(struct kvm_vcpu *vcpu);
+void vgic_v3_put_nested(struct kvm_vcpu *vcpu);
+void vgic_v3_handle_nested_maint_irq(struct kvm_vcpu *vcpu);
+
+#define ICH_LRN(n) (ICH_LR0_EL2 + (n))
+#define ICH_AP0RN(n) (ICH_AP0R0_EL2 + (n))
+#define ICH_AP1RN(n) (ICH_AP1R0_EL2 + (n))
+
#endif
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 8cc38e836f54..01da24d3d76c 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -243,6 +243,9 @@ struct vgic_dist {
int nr_spis;
+ /* The GIC maintenance IRQ for nested hypervisors. */
+ u32 maint_irq;
+
/* base addresses in guest physical address space: */
gpa_t vgic_dist_base; /* distributor */
union {
@@ -329,6 +332,12 @@ struct vgic_cpu {
struct vgic_v3_cpu_if vgic_v3;
};
+ /*
+ * Pointer to the live CPU vif state. Normally set to vgic_v3,
+ * but will be set to the per-CPU state when running a L2 guest.
+ */
+ struct vgic_v3_cpu_if *current_cpu_if;
+
struct vgic_irq private_irqs[VGIC_NR_PRIVATE_IRQS];
raw_spinlock_t ap_list_lock; /* Protects the ap_list */
@@ -368,6 +377,7 @@ extern struct static_key_false vgic_v3_cpuif_trap;
int kvm_set_legacy_vgic_v2_addr(struct kvm *kvm, struct kvm_arm_device_addr *dev_addr);
void kvm_vgic_early_init(struct kvm *kvm);
int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu);
+int kvm_vgic_vcpu_nv_init(struct kvm_vcpu *vcpu);
int kvm_vgic_create(struct kvm *kvm, u32 type);
void kvm_vgic_destroy(struct kvm *kvm);
void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu);
@@ -389,6 +399,10 @@ void kvm_vgic_load(struct kvm_vcpu *vcpu);
void kvm_vgic_put(struct kvm_vcpu *vcpu);
void kvm_vgic_vmcr_sync(struct kvm_vcpu *vcpu);
+u16 vgic_v3_get_eisr(struct kvm_vcpu *vcpu);
+u16 vgic_v3_get_elrsr(struct kvm_vcpu *vcpu);
+u64 vgic_v3_get_misr(struct kvm_vcpu *vcpu);
+
#define irqchip_in_kernel(k) (!!((k)->arch.vgic.in_kernel))
#define vgic_initialized(k) ((k)->arch.vgic.initialized)
#define vgic_ready(k) ((k)->arch.vgic.ready)
@@ -433,6 +447,8 @@ int vgic_v4_load(struct kvm_vcpu *vcpu);
void vgic_v4_commit(struct kvm_vcpu *vcpu);
int vgic_v4_put(struct kvm_vcpu *vcpu);
+bool vgic_state_is_nested(struct kvm_vcpu *vcpu);
+
/* CPU HP callbacks */
void kvm_vgic_cpu_up(void);
void kvm_vgic_cpu_down(void);
--
2.39.2
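For context, here is where the two hooks above sit in the vcpu run
loop. This is a minimal sketch: only the kvm_vgic_*_hwstate() entry
points are real; the surrounding function and enter_guest() are
illustrative placeholders, not the actual arm.c code.

        /* Illustrative run-loop ordering, assuming the hunks above. */
        static int run_vcpu_once_sketch(struct kvm_vcpu *vcpu)
        {
                /*
                 * Nested case: may raise KVM_REQ_GUEST_HYP_IRQ_PENDING
                 * and return early, so that an IRQ exception is injected
                 * into virtual EL2 at the top of the run loop.
                 */
                kvm_vgic_flush_hwstate(vcpu);

                enter_guest(vcpu);      /* placeholder for the real entry */

                /*
                 * Nested case: vgic_v3_sync_nested() propagates the HW
                 * state of the shadow LRs back into the L1-visible
                 * ICH_LR*_EL2 registers.
                 */
                kvm_vgic_sync_hwstate(vcpu);

                return 0;
        }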
* [PATCH v11 31/43] KVM: arm64: nv: Don't block in WFI from nested state
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (29 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 30/43] KVM: arm64: nv: Nested GICv3 Support Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 32/43] KVM: arm64: nv: vgic: Allow userland to set VGIC maintenance IRQ Marc Zyngier
` (15 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
If we trap WFI from an L2 guest while L1 hasn't asked for such a trap,
it is very hard to decide when to unblock the vcpu, as we only have a
very partial view of the guest's nested interrupt state (the L1
hypervisor knows about it, but L0 doesn't).
In such a case, we're better off returning to the L2 guest immediately.
This isn't wrong from an architectural perspective.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/arm.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 95760ed448bf..d684a2af3406 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -781,6 +781,15 @@ static void kvm_vcpu_sleep(struct kvm_vcpu *vcpu)
*/
void kvm_vcpu_wfi(struct kvm_vcpu *vcpu)
{
+ /*
+ * If we're in nested state and the guest hypervisor does not trap
+ * WFI, we're in a bit of trouble, as we don't have a good handle
+ * on the interrupts that are pending for the guest yet. Revisit
+ * this at some point.
+ */
+ if (vgic_state_is_nested(vcpu))
+ return;
+
/*
* Sync back the state of the GIC CPU interface so that we have
* the latest PMR and group enables. This ensures that
--
2.39.2
* [PATCH v11 32/43] KVM: arm64: nv: vgic: Allow userland to set VGIC maintenance IRQ
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (30 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 31/43] KVM: arm64: nv: Don't block in WFI from nested state Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 33/43] KVM: arm64: nv: Fold GICv3 host trapping requirements into guest setup Marc Zyngier
` (14 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
From: Andre Przywara <andre.przywara@arm.com>
The VGIC maintenance IRQ signals various conditions about the LRs when
the GIC's virtualization extension is used.
So far we didn't need it, but nested virtualization needs to know about
this interrupt, so add a userland interface to set up the IRQ number
(see the usage sketch after the patch).
The architecture mandates that it must be a PPI; on top of that, this
code only exposes a per-device option, so the PPI is the same on all
VCPUs.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[added some bits of documentation]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
.../virt/kvm/devices/arm-vgic-v3.rst | 12 +++++++-
arch/arm64/include/uapi/asm/kvm.h | 1 +
arch/arm64/kvm/vgic/vgic-kvm-device.c | 29 +++++++++++++++++--
tools/arch/arm/include/uapi/asm/kvm.h | 1 +
4 files changed, 40 insertions(+), 3 deletions(-)
diff --git a/Documentation/virt/kvm/devices/arm-vgic-v3.rst b/Documentation/virt/kvm/devices/arm-vgic-v3.rst
index 5817edb4e046..e860498b1e35 100644
--- a/Documentation/virt/kvm/devices/arm-vgic-v3.rst
+++ b/Documentation/virt/kvm/devices/arm-vgic-v3.rst
@@ -291,8 +291,18 @@ Groups:
| Aff3 | Aff2 | Aff1 | Aff0 |
Errors:

  =======  =============================================
  -EINVAL  vINTID is not multiple of 32 or info field is
           not VGIC_LEVEL_INFO_LINE_LEVEL
  =======  =============================================
+
+ KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ
+ Attributes:
+
+ The attr field of kvm_device_attr encodes the following values:
+
+ bits:   | 31 .... 5 | 4 .... 0 |
+ values: |   RES0    |  vINTID  |
+
+ The vINTID specifies which interrupt is generated when the vGIC
+ must generate a maintenance interrupt. This must be a PPI.
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 89d2fc872d9f..4e0ab0e84ca9 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -403,6 +403,7 @@ enum {
#define KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS 6
#define KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO 7
#define KVM_DEV_ARM_VGIC_GRP_ITS_REGS 8
+#define KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ 9
#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT 10
#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_MASK \
(0x3fffffULL << KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT)
diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
index f48b8dab8b3d..1cefcaadd479 100644
--- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
+++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
@@ -298,6 +298,12 @@ static int vgic_get_common_attr(struct kvm_device *dev,
VGIC_NR_PRIVATE_IRQS, uaddr);
break;
}
+ case KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ: {
+ u32 __user *uaddr = (u32 __user *)(long)attr->addr;
+
+ r = put_user(dev->kvm->arch.vgic.maint_irq, uaddr);
+ break;
+ }
}
return r;
@@ -512,7 +518,7 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
struct vgic_reg_attr reg_attr;
gpa_t addr;
struct kvm_vcpu *vcpu;
- bool uaccess;
+ bool uaccess, post_init = true;
u32 val;
int ret;
@@ -528,6 +534,9 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
/* Sysregs uaccess is performed by the sysreg handling code */
uaccess = false;
break;
+ case KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ:
+ post_init = false;
+ fallthrough;
default:
uaccess = true;
}
@@ -547,7 +556,7 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
mutex_lock(&dev->kvm->arch.config_lock);
- if (unlikely(!vgic_initialized(dev->kvm))) {
+ if (post_init != vgic_initialized(dev->kvm)) {
ret = -EBUSY;
goto out;
}
@@ -577,6 +586,19 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
}
break;
}
+ case KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ:
+ if (!is_write) {
+ val = dev->kvm->arch.vgic.maint_irq;
+ ret = 0;
+ break;
+ }
+
+ ret = -EINVAL;
+ if ((val < VGIC_NR_PRIVATE_IRQS) && (val >= VGIC_NR_SGIS)) {
+ dev->kvm->arch.vgic.maint_irq = val;
+ ret = 0;
+ }
+ break;
default:
ret = -EINVAL;
break;
@@ -603,6 +625,7 @@ static int vgic_v3_set_attr(struct kvm_device *dev,
case KVM_DEV_ARM_VGIC_GRP_REDIST_REGS:
case KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS:
case KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO:
+ case KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ:
return vgic_v3_attr_regs_access(dev, attr, true);
default:
return vgic_set_common_attr(dev, attr);
@@ -617,6 +640,7 @@ static int vgic_v3_get_attr(struct kvm_device *dev,
case KVM_DEV_ARM_VGIC_GRP_REDIST_REGS:
case KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS:
case KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO:
+ case KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ:
return vgic_v3_attr_regs_access(dev, attr, false);
default:
return vgic_get_common_attr(dev, attr);
@@ -640,6 +664,7 @@ static int vgic_v3_has_attr(struct kvm_device *dev,
case KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS:
return vgic_v3_has_attr_regs(dev, attr);
case KVM_DEV_ARM_VGIC_GRP_NR_IRQS:
+ case KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ:
return 0;
case KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO: {
if (((attr->attr & KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_MASK) >>
diff --git a/tools/arch/arm/include/uapi/asm/kvm.h b/tools/arch/arm/include/uapi/asm/kvm.h
index 03cd7c19a683..d5dd96902817 100644
--- a/tools/arch/arm/include/uapi/asm/kvm.h
+++ b/tools/arch/arm/include/uapi/asm/kvm.h
@@ -246,6 +246,7 @@ struct kvm_vcpu_events {
#define KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS 6
#define KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO 7
#define KVM_DEV_ARM_VGIC_GRP_ITS_REGS 8
+#define KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ 9
#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT 10
#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_MASK \
(0x3fffffULL << KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT)
--
2.39.2
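As a usage illustration, here is a hedged userspace sketch of
programming the maintenance IRQ. It follows the kernel code rather than
the documentation hunk: the value travels through kvm_device_attr.addr
(the usual vgic uaccess pattern, as the put_user() in
vgic_get_common_attr() shows), vgic_fd is assumed to come from
KVM_CREATE_DEVICE, and the call must happen before the vgic is
initialized, per the post_init = false handling.

        #include <linux/kvm.h>
        #include <sys/ioctl.h>

        /* Hypothetical helper: use PPI @intid as the maintenance IRQ. */
        static int set_maint_irq(int vgic_fd, __u32 intid)
        {
                struct kvm_device_attr attr = {
                        .group = KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ,
                        .attr  = 0,
                        .addr  = (__u64)(unsigned long)&intid,
                };

                /* -EINVAL unless VGIC_NR_SGIS <= intid < VGIC_NR_PRIVATE_IRQS */
                return ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);
        }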
* [PATCH v11 33/43] KVM: arm64: nv: Fold GICv3 host trapping requirements into guest setup
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (31 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 32/43] KVM: arm64: nv: vgic: Allow userland to set VGIC maintenance IRQ Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 34/43] KVM: arm64: nv: Deal with broken VGIC on maintenance interrupt delivery Marc Zyngier
` (13 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Popular HW that is able to use NV also has a broken vgic implementation
that requires trapping.
On such HW, propagate the host trap bits into the guest's shadow
ICH_HCR_EL2 register, making sure we don't allow an L2 guest to bring
the system down.
This involves a bit of tweaking so that the emulation code correctly
picks up the shadow state as needed, and so that ICH_HCR_EL2 is only
partially synced back with the guest state in order to capture EOIcount.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/vgic/vgic-v3-nested.c | 20 +++++++++++++++++---
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kvm/vgic/vgic-v3-nested.c b/arch/arm64/kvm/vgic/vgic-v3-nested.c
index e4919cc82daf..b8f4dd39676c 100644
--- a/arch/arm64/kvm/vgic/vgic-v3-nested.c
+++ b/arch/arm64/kvm/vgic/vgic-v3-nested.c
@@ -168,9 +168,19 @@ void vgic_v3_create_shadow_state(struct kvm_vcpu *vcpu)
{
struct vgic_v3_cpu_if *s_cpu_if = vcpu_shadow_if(vcpu);
struct vgic_v3_cpu_if *host_if = &vcpu->arch.vgic_cpu.vgic_v3;
+ u64 val = 0;
int i;
- s_cpu_if->vgic_hcr = __vcpu_sys_reg(vcpu, ICH_HCR_EL2);
+ /*
+ * If we're on a system with a broken vgic that requires
+ * trapping, propagate the trapping requirements.
+ *
+ * Ah, the smell of rotten fruits...
+ */
+ if (static_branch_unlikely(&vgic_v3_cpuif_trap))
+ val = host_if->vgic_hcr & (ICH_HCR_TALL0 | ICH_HCR_TALL1 |
+ ICH_HCR_TC | ICH_HCR_TDIR);
+ s_cpu_if->vgic_hcr = __vcpu_sys_reg(vcpu, ICH_HCR_EL2) | val;
s_cpu_if->vgic_vmcr = __vcpu_sys_reg(vcpu, ICH_VMCR_EL2);
s_cpu_if->vgic_sre = host_if->vgic_sre;
@@ -203,6 +213,7 @@ void vgic_v3_load_nested(struct kvm_vcpu *vcpu)
void vgic_v3_put_nested(struct kvm_vcpu *vcpu)
{
struct vgic_v3_cpu_if *s_cpu_if = vcpu_shadow_if(vcpu);
+ u64 val;
int i;
__vgic_v3_save_state(s_cpu_if);
@@ -211,7 +222,10 @@ void vgic_v3_put_nested(struct kvm_vcpu *vcpu)
* Translate the shadow state HW fields back to the virtual ones
* before copying the shadow struct back to the nested one.
*/
- __vcpu_sys_reg(vcpu, ICH_HCR_EL2) = s_cpu_if->vgic_hcr;
+ val = __vcpu_sys_reg(vcpu, ICH_HCR_EL2);
+ val &= ~ICH_HCR_EOIcount_MASK;
+ val |= (s_cpu_if->vgic_hcr & ICH_HCR_EOIcount_MASK);
+ __vcpu_sys_reg(vcpu, ICH_HCR_EL2) = val;
__vcpu_sys_reg(vcpu, ICH_VMCR_EL2) = s_cpu_if->vgic_vmcr;
for (i = 0; i < 4; i++) {
@@ -220,7 +234,7 @@ void vgic_v3_put_nested(struct kvm_vcpu *vcpu)
}
for (i = 0; i < s_cpu_if->used_lrs; i++) {
- u64 val = __vcpu_sys_reg(vcpu, ICH_LRN(i));
+ val = __vcpu_sys_reg(vcpu, ICH_LRN(i));
val &= ~ICH_LR_STATE;
val |= s_cpu_if->vgic_lr[i] & ICH_LR_STATE;
--
2.39.2
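The partial sync deserves a one-line justification:
vgic_v3_create_shadow_state() may have ORed the host's trap bits into
the shadow vgic_hcr, and copying that register back wholesale would
leak those bits into the L1-visible ICH_HCR_EL2. A minimal sketch of
the merge this hunk performs (an illustrative helper, not in-tree
code):

        /*
         * Preserve L1's view of ICH_HCR_EL2, importing only the
         * HW-updated EOIcount field from the shadow copy.
         */
        static u64 merge_ich_hcr(u64 guest_hcr, u64 shadow_hcr)
        {
                guest_hcr &= ~ICH_HCR_EOIcount_MASK;
                guest_hcr |= shadow_hcr & ICH_HCR_EOIcount_MASK;

                return guest_hcr;
        }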
* [PATCH v11 34/43] KVM: arm64: nv: Deal with broken VGIC on maintenance interrupt delivery
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (32 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 33/43] KVM: arm64: nv: Fold GICv3 host trapping requirements into guest setup Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 35/43] KVM: arm64: nv: Add handling of FEAT_TTL TLB invalidation Marc Zyngier
` (12 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Normal, non-nesting KVM deals with the maintenance interrupt in a very
simple way: we don't even try to handle it, and just turn it off
as soon as we exit, long before the kernel gets a chance to handle it.
However, with NV, we rely on the actual handling of the interrupt
to leave it active and pass it down to the L1 guest hypervisor
(we effectively treat it as an assigned interrupt, just like the
timer).
This doesn't work with something like the Apple M2, which doesn't
have an active state that allows the interrupt to be masked.
Instead, just disable the vgic after having taken the interrupt and
injected a virtual interrupt. This is enough for the guest to make
forward progress, but will limit its ability to handle further
interrupts until it next exits (IAR will always report "spurious").
Oh well.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/vgic/vgic-v3-nested.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/arm64/kvm/vgic/vgic-v3-nested.c b/arch/arm64/kvm/vgic/vgic-v3-nested.c
index b8f4dd39676c..ea76b1f7285c 100644
--- a/arch/arm64/kvm/vgic/vgic-v3-nested.c
+++ b/arch/arm64/kvm/vgic/vgic-v3-nested.c
@@ -264,4 +264,7 @@ void vgic_v3_handle_nested_maint_irq(struct kvm_vcpu *vcpu)
kvm_vgic_inject_irq(vcpu->kvm, vcpu,
vcpu->kvm->arch.vgic.maint_irq, state, vcpu);
}
+
+ if (unlikely(kvm_vgic_global_state.no_hw_deactivation))
+ sysreg_clear_set_s(SYS_ICH_HCR_EL2, ICH_HCR_EN, 0);
}
--
2.39.2
* [PATCH v11 35/43] KVM: arm64: nv: Add handling of FEAT_TTL TLB invalidation
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (33 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 34/43] KVM: arm64: nv: Deal with broken VGIC on maintenance interrupt delivery Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 36/43] KVM: arm64: nv: Invalidate TLBs based on shadow S2 TTL-like information Marc Zyngier
` (11 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Support guest-provided TTL information to determine the range of the
required invalidation.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_nested.h | 2 +
arch/arm64/kvm/nested.c | 90 +++++++++++++++++++++++++++++
arch/arm64/kvm/sys_regs.c | 26 +--------
3 files changed, 93 insertions(+), 25 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 4f94aef8a750..b878577bc2ce 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -128,6 +128,8 @@ int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe);
extern bool forward_smc_trap(struct kvm_vcpu *vcpu);
extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
+unsigned long compute_tlb_inval_range(struct kvm_s2_mmu *mmu, u64 val);
+
int kvm_init_nv_sysregs(struct kvm *kvm);
#endif /* __ARM64_KVM_NESTED_H */
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index ad1df851997d..6f574602b7df 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -351,6 +351,96 @@ int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
return ret;
}
+static unsigned int ttl_to_size(u8 ttl)
+{
+ int level = ttl & 3;
+ int gran = (ttl >> 2) & 3;
+ unsigned int max_size = 0;
+
+ switch (gran) {
+ case TLBI_TTL_TG_4K:
+ switch (level) {
+ case 0:
+ break;
+ case 1:
+ max_size = SZ_1G;
+ break;
+ case 2:
+ max_size = SZ_2M;
+ break;
+ case 3:
+ max_size = SZ_4K;
+ break;
+ }
+ break;
+ case TLBI_TTL_TG_16K:
+ switch (level) {
+ case 0:
+ case 1:
+ break;
+ case 2:
+ max_size = SZ_32M;
+ break;
+ case 3:
+ max_size = SZ_16K;
+ break;
+ }
+ break;
+ case TLBI_TTL_TG_64K:
+ switch (level) {
+ case 0:
+ case 1:
+ /* No 52bit IPA support */
+ break;
+ case 2:
+ max_size = SZ_512M;
+ break;
+ case 3:
+ max_size = SZ_64K;
+ break;
+ }
+ break;
+ default: /* No size information */
+ break;
+ }
+
+ return max_size;
+}
+
+unsigned long compute_tlb_inval_range(struct kvm_s2_mmu *mmu, u64 val)
+{
+ unsigned long max_size;
+ u8 ttl;
+
+ ttl = FIELD_GET(GENMASK_ULL(47, 44), val);
+
+ max_size = ttl_to_size(ttl);
+
+ if (!max_size) {
+ /* Compute the maximum extent of the invalidation */
+ switch (mmu->tlb_vtcr & VTCR_EL2_TG0_MASK) {
+ case VTCR_EL2_TG0_4K:
+ max_size = SZ_1G;
+ break;
+ case VTCR_EL2_TG0_16K:
+ max_size = SZ_32M;
+ break;
+ case VTCR_EL2_TG0_64K:
+ /*
+ * No, we do not support 52bit IPA in nested yet. Once
+ * we do, this should be 4TB.
+ */
+ max_size = SZ_512M;
+ break;
+ default:
+ BUG();
+ }
+ }
+
+ WARN_ON(!max_size);
+ return max_size;
+}
+
/*
* We can have multiple *different* MMU contexts with the same VMID:
*
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c9e55c7697d5..9a82f42b45ed 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3229,36 +3229,12 @@ static void s2_mmu_unmap_stage2_ipa(struct kvm_s2_mmu *mmu,
*
* - NS bit: we're non-secure only.
*
- * - TTL field: We already have the granule size from the
- * VTCR_EL2.TG0 field, and the level is only relevant to the
- * guest's S2PT.
- *
* - IPA[51:48]: We don't support 52bit IPA just yet...
*
* And of course, adjust the IPA to be on an actual address.
*/
base_addr = (info->ipa.addr & GENMASK_ULL(35, 0)) << 12;
-
- /* Compute the maximum extent of the invalidation */
- switch (mmu->tlb_vtcr & VTCR_EL2_TG0_MASK) {
- case VTCR_EL2_TG0_4K:
- max_size = SZ_1G;
- break;
- case VTCR_EL2_TG0_16K:
- max_size = SZ_32M;
- break;
- case VTCR_EL2_TG0_64K:
- /*
- * No, we do not support 52bit IPA in nested yet. Once
- * we do, this should be 4TB.
- */
- /* FIXME: remove the 52bit PA support from the IDregs */
- max_size = SZ_512M;
- break;
- default:
- BUG();
- }
-
+ max_size = compute_tlb_inval_range(mmu, info->ipa.addr);
base_addr &= ~(max_size - 1);
kvm_unmap_stage2_range(mmu, base_addr, max_size);
--
2.39.2
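A quick worked example of the decoding above, assuming the kernel's
TLBI_TTL_TG_4K == 1 encoding: a TTL of (1 << 2) | 2 describes a
level-2 entry in a 4K granule, so ttl_to_size() yields SZ_2M and a
single 2MiB-aligned block gets invalidated instead of the 1GiB worst
case.

        u8 ttl = (TLBI_TTL_TG_4K << 2) | 2;     /* 4K granule, level 2 */
        unsigned int range = ttl_to_size(ttl);  /* SZ_2M */

        /* base_addr &= ~(range - 1) then covers exactly one 2MiB block */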
* [PATCH v11 36/43] KVM: arm64: nv: Invalidate TLBs based on shadow S2 TTL-like information
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (34 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 35/43] KVM: arm64: nv: Add handling of FEAT_TTL TLB invalidation Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 37/43] KVM: arm64: nv: Tag shadow S2 entries with nested level Marc Zyngier
` (10 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
In order to make S2 TLB invalidations more performant on NV, let's
use a scheme derived from the ARMv8.4 TTL extension: if bits [56:55]
in the descriptor are non-zero, they indicate the level of the
mapping, which can then be used to bound the invalidation range.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_nested.h | 2 +
arch/arm64/kvm/nested.c | 81 +++++++++++++++++++++++++++++
2 files changed, 83 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index b878577bc2ce..128c1d8281af 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -132,4 +132,6 @@ unsigned long compute_tlb_inval_range(struct kvm_s2_mmu *mmu, u64 val);
int kvm_init_nv_sysregs(struct kvm *kvm);
+#define KVM_NV_GUEST_MAP_SZ (KVM_PGTABLE_PROT_SW1 | KVM_PGTABLE_PROT_SW0)
+
#endif /* __ARM64_KVM_NESTED_H */
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 6f574602b7df..4a90ec0268e4 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -4,6 +4,7 @@
* Author: Jintack Lim <jintack.lim@linaro.org>
*/
+#include <linux/bitfield.h>
#include <linux/kvm.h>
#include <linux/kvm_host.h>
@@ -407,6 +408,81 @@ static unsigned int ttl_to_size(u8 ttl)
return max_size;
}
+/*
+ * Compute the equivalent of the TTL field by parsing the shadow PT. The
+ * granule size is extracted from the cached VTCR_EL2.TG0 while the level is
+ * retrieved from the first entry carrying the level as a tag.
+ */
+static u8 get_guest_mapping_ttl(struct kvm_s2_mmu *mmu, u64 addr)
+{
+ u64 tmp, sz = 0, vtcr = mmu->tlb_vtcr;
+ kvm_pte_t pte;
+ u8 ttl, level;
+
+ switch (vtcr & VTCR_EL2_TG0_MASK) {
+ case VTCR_EL2_TG0_4K:
+ ttl = (1 << 2);
+ break;
+ case VTCR_EL2_TG0_16K:
+ ttl = (2 << 2);
+ break;
+ case VTCR_EL2_TG0_64K:
+ ttl = (3 << 2);
+ break;
+ default:
+ BUG();
+ }
+
+ tmp = addr;
+
+again:
+ /* Iteratively compute the block sizes for a particular granule size */
+ switch (vtcr & VTCR_EL2_TG0_MASK) {
+ case VTCR_EL2_TG0_4K:
+ if (sz < SZ_4K) sz = SZ_4K;
+ else if (sz < SZ_2M) sz = SZ_2M;
+ else if (sz < SZ_1G) sz = SZ_1G;
+ else sz = 0;
+ break;
+ case VTCR_EL2_TG0_16K:
+ if (sz < SZ_16K) sz = SZ_16K;
+ else if (sz < SZ_32M) sz = SZ_32M;
+ else sz = 0;
+ break;
+ case VTCR_EL2_TG0_64K:
+ if (sz < SZ_64K) sz = SZ_64K;
+ else if (sz < SZ_512M) sz = SZ_512M;
+ else sz = 0;
+ break;
+ default:
+ BUG();
+ }
+
+ if (sz == 0)
+ return 0;
+
+ tmp &= ~(sz - 1);
+ if (kvm_pgtable_get_leaf(mmu->pgt, tmp, &pte, NULL))
+ goto again;
+ if (!(pte & PTE_VALID))
+ goto again;
+ level = FIELD_GET(KVM_NV_GUEST_MAP_SZ, pte);
+ if (!level)
+ goto again;
+
+ ttl |= level;
+
+ /*
+ * We now have found some level information in the shadow S2. Check
+ * that the resulting range is actually including the original IPA.
+ */
+ sz = ttl_to_size(ttl);
+ if (addr < (tmp + sz))
+ return ttl;
+
+ return 0;
+}
+
unsigned long compute_tlb_inval_range(struct kvm_s2_mmu *mmu, u64 val)
{
unsigned long max_size;
@@ -414,6 +490,11 @@ unsigned long compute_tlb_inval_range(struct kvm_s2_mmu *mmu, u64 val)
ttl = FIELD_GET(GENMASK_ULL(47, 44), val);
+ if (!(cpus_have_final_cap(ARM64_HAS_ARMv8_4_TTL) && ttl)) {
+ u64 addr = (val & GENMASK_ULL(35, 0)) << 12;
+ ttl = get_guest_mapping_ttl(mmu, addr);
+ }
+
max_size = ttl_to_size(ttl);
if (!max_size) {
--
2.39.2
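To make the gain concrete, a hedged before/after example for a 4K
granule shadow S2, assuming the leaf has been tagged with its level as
done by the next patch: without any TTL information,
compute_tlb_inval_range() falls back to SZ_1G; with the shadow walk
finding a tagged level-3 leaf covering the IPA, it shrinks to a single
page.

        u8 ttl = get_guest_mapping_ttl(mmu, ipa);  /* e.g. (1 << 2) | 3 */
        unsigned long sz = ttl_to_size(ttl);       /* SZ_4K, not the SZ_1G fallback */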
* [PATCH v11 37/43] KVM: arm64: nv: Tag shadow S2 entries with nested level
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (35 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 36/43] KVM: arm64: nv: Invalidate TLBs based on shadow S2 TTL-like information Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 38/43] KVM: arm64: nv: Allocate VNCR page when required Marc Zyngier
` (9 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Populate bits [56:55] of the leaf entry with the level provided
by the guest's S2 translation.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_nested.h | 6 ++++++
arch/arm64/kvm/mmu.c | 16 ++++++++++++++--
2 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 128c1d8281af..da7ebd2f6e24 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -5,6 +5,7 @@
#include <linux/bitfield.h>
#include <linux/kvm_host.h>
#include <asm/kvm_emulate.h>
+#include <asm/kvm_pgtable.h>
static inline bool vcpu_has_nv(const struct kvm_vcpu *vcpu)
{
@@ -134,4 +135,9 @@ int kvm_init_nv_sysregs(struct kvm *kvm);
#define KVM_NV_GUEST_MAP_SZ (KVM_PGTABLE_PROT_SW1 | KVM_PGTABLE_PROT_SW0)
+static inline u64 kvm_encode_nested_level(struct kvm_s2_trans *trans)
+{
+ return FIELD_PREP(KVM_NV_GUEST_MAP_SZ, trans->level);
+}
+
#endif /* __ARM64_KVM_NESTED_H */
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 8c77547f5582..61bdd8798f83 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1618,11 +1618,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
* Potentially reduce shadow S2 permissions to match the guest's own
* S2. For exec faults, we'd only reach this point if the guest
* actually allowed it (see kvm_s2_handle_perm_fault).
+ *
+ * Also encode the level of the nested translation in the SW bits of
+ * the PTE/PMD/PUD. This will be retrieved on TLB invalidation from
+ * the guest.
*/
if (nested) {
writable &= kvm_s2_trans_writable(nested);
if (!kvm_s2_trans_readable(nested))
prot &= ~KVM_PGTABLE_PROT_R;
+
+ prot |= kvm_encode_nested_level(nested);
}
read_lock(&kvm->mmu_lock);
@@ -1676,14 +1682,20 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
* permissions only if vma_pagesize equals fault_granule. Otherwise,
* kvm_pgtable_stage2_map() should be called to change block size.
*/
- if (fault_status == ESR_ELx_FSC_PERM && vma_pagesize == fault_granule)
+ if (fault_status == ESR_ELx_FSC_PERM && vma_pagesize == fault_granule) {
+ /*
+ * Drop the SW bits in favour of those stored in the
+ * PTE, which will be preserved.
+ */
+ prot &= ~KVM_NV_GUEST_MAP_SZ;
ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
- else
+ } else {
ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
__pfn_to_phys(pfn), prot,
memcache,
KVM_PGTABLE_WALK_HANDLE_FAULT |
KVM_PGTABLE_WALK_SHARED);
+ }
/* Mark the page dirty only if the fault is handled successfully */
if (writable && !ret) {
--
2.39.2
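Putting this and the previous patch together, the round trip looks as
follows. This is a sketch under the stated assumption that
KVM_PGTABLE_PROT_SW0/SW1 land in descriptor bits 55/56; nested->level
and pte stand in for the real local variables.

        /* At fault time (user_mem_abort()): stash the guest S2 level. */
        prot |= FIELD_PREP(KVM_NV_GUEST_MAP_SZ, nested->level); /* e.g. 2 */

        /* At TLBI time (get_guest_mapping_ttl()): recover it. */
        u8 level = FIELD_GET(KVM_NV_GUEST_MAP_SZ, pte); /* == 2 */
        u8 ttl   = (1 << 2) | level;    /* 4K granule: ttl_to_size() == SZ_2M */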
* [PATCH v11 38/43] KVM: arm64: nv: Allocate VNCR page when required
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (36 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 37/43] KVM: arm64: nv: Tag shadow S2 entries with nested level Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 39/43] KVM: arm64: nv: Fast-track 'InHost' exception returns Marc Zyngier
` (8 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
If running an NV guest on an ARMv8.4-NV capable system, let's
allocate an additional page that will be used by the hypervisor
to fulfill system register accesses.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/nested.c | 8 ++++++++
arch/arm64/kvm/reset.c | 1 +
2 files changed, 9 insertions(+)
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 4a90ec0268e4..e07960c77526 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -38,6 +38,12 @@ int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
if (!cpus_have_final_cap(ARM64_HAS_NESTED_VIRT))
return -EINVAL;
+ if (!vcpu->arch.ctxt.vncr_array)
+ vcpu->arch.ctxt.vncr_array = (u64 *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
+
+ if (!vcpu->arch.ctxt.vncr_array)
+ return -ENOMEM;
+
/*
* Let's treat memory allocation failures as benign: If we fail to
* allocate anything, return an error and keep the allocated array
@@ -65,6 +71,8 @@ int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2], 0)) {
kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
kvm_free_stage2_pgd(&tmp[num_mmus - 2]);
+ free_page((unsigned long)vcpu->arch.ctxt.vncr_array);
+ vcpu->arch.ctxt.vncr_array = NULL;
} else {
kvm->arch.nested_mmus_size = num_mmus;
ret = 0;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index e106ea01598f..699000cc505b 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -156,6 +156,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu)
if (sve_state)
kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu));
kfree(sve_state);
+ free_page((unsigned long)vcpu->arch.ctxt.vncr_array);
kfree(vcpu->arch.ccsidr);
}
--
2.39.2
* [PATCH v11 39/43] KVM: arm64: nv: Fast-track 'InHost' exception returns
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (37 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 38/43] KVM: arm64: nv: Allocate VNCR page when required Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 40/43] KVM: arm64: nv: Fast-track EL1 TLBIs for VHE guests Marc Zyngier
` (7 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
A significant part of the ARMv8.3-NV extension is to trap ERET
instructions so that the hypervisor gets a chance to switch
from a vEL2 L1 guest to an EL1 L2 guest.
But this also has the unfortunate consequence of trapping ERET in
circumstances where no world switch is needed, such as staying at vEL2
(interrupt handling while being in the guest hypervisor), or returning
to host userspace in the case of a VHE guest.
Although we already make some effort to handle these ERETs more quickly
by not doing the put/load dance, it is still way too far down the line
for it to be efficient enough.
For these cases, it would be ideal to ERET directly, no questions asked.
Of course, we can't do that. But the next best thing is to do it as
early as possible, in fixup_guest_exit(), much as we would handle
FPSIMD exceptions.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/emulate-nested.c | 29 +++-------------------
arch/arm64/kvm/hyp/vhe/switch.c | 44 +++++++++++++++++++++++++++++++++
2 files changed, 47 insertions(+), 26 deletions(-)
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 61721f870be0..9a454ce1fbba 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1988,8 +1988,7 @@ static u64 kvm_check_illegal_exception_return(struct kvm_vcpu *vcpu, u64 spsr)
void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
{
- u64 spsr, elr, mode;
- bool direct_eret;
+ u64 spsr, elr;
/*
* Forward this trap to the virtual EL2 if the virtual
@@ -1998,33 +1997,11 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
if (forward_traps(vcpu, HCR_NV))
return;
- /*
- * Going through the whole put/load motions is a waste of time
- * if this is a VHE guest hypervisor returning to its own
- * userspace, or the hypervisor performing a local exception
- * return. No need to save/restore registers, no need to
- * switch S2 MMU. Just do the canonical ERET.
- */
- spsr = vcpu_read_sys_reg(vcpu, SPSR_EL2);
- spsr = kvm_check_illegal_exception_return(vcpu, spsr);
-
- mode = spsr & (PSR_MODE_MASK | PSR_MODE32_BIT);
-
- direct_eret = (mode == PSR_MODE_EL0t &&
- vcpu_el2_e2h_is_set(vcpu) &&
- vcpu_el2_tge_is_set(vcpu));
- direct_eret |= (mode == PSR_MODE_EL2h || mode == PSR_MODE_EL2t);
-
- if (direct_eret) {
- *vcpu_pc(vcpu) = vcpu_read_sys_reg(vcpu, ELR_EL2);
- *vcpu_cpsr(vcpu) = spsr;
- trace_kvm_nested_eret(vcpu, *vcpu_pc(vcpu), spsr);
- return;
- }
-
preempt_disable();
kvm_arch_vcpu_put(vcpu);
+ spsr = __vcpu_sys_reg(vcpu, SPSR_EL2);
+ spsr = kvm_check_illegal_exception_return(vcpu, spsr);
elr = __vcpu_sys_reg(vcpu, ELR_EL2);
trace_kvm_nested_eret(vcpu, elr, spsr);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 0926011deae7..85db519ea811 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -224,6 +224,49 @@ void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu)
__vcpu_put_switch_sysregs(vcpu);
}
+static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+ u64 spsr, mode;
+
+ /*
+ * Going through the whole put/load motions is a waste of time
+ * if this is a VHE guest hypervisor returning to its own
+ * userspace, or the hypervisor performing a local exception
+ * return. No need to save/restore registers, no need to
+ * switch S2 MMU. Just do the canonical ERET.
+ *
+ * Unless the trap has to be forwarded further down the line,
+ * of course...
+ */
+ if (__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV)
+ return false;
+
+ spsr = read_sysreg_el1(SYS_SPSR);
+ mode = spsr & (PSR_MODE_MASK | PSR_MODE32_BIT);
+
+ switch (mode) {
+ case PSR_MODE_EL0t:
+ if (!(vcpu_el2_e2h_is_set(vcpu) && vcpu_el2_tge_is_set(vcpu)))
+ return false;
+ break;
+ case PSR_MODE_EL2t:
+ mode = PSR_MODE_EL1t;
+ break;
+ case PSR_MODE_EL2h:
+ mode = PSR_MODE_EL1h;
+ break;
+ default:
+ return false;
+ }
+
+ spsr = (spsr & ~(PSR_MODE_MASK | PSR_MODE32_BIT)) | mode;
+
+ write_sysreg_el2(spsr, SYS_SPSR);
+ write_sysreg_el2(read_sysreg_el1(SYS_ELR), SYS_ELR);
+
+ return true;
+}
+
static const exit_handler_fn hyp_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX] = NULL,
[ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32,
@@ -234,6 +277,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
[ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low,
[ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low,
[ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth,
+ [ESR_ELx_EC_ERET] = kvm_hyp_handle_eret,
[ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops,
};
--
2.39.2
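One step is worth spelling out: PSR_MODE_EL2{t,h} is rewritten to
PSR_MODE_EL1{t,h} because vEL2 executes at real EL1, so an ERET that
stays "at EL2" from the guest's point of view must make the hardware
return to EL1, where the vEL2 context actually runs. A minimal sketch
of that translation (an illustrative helper, not the in-tree code):

        /* Map a vEL2 target mode to the real EL1 mode to ERET to. */
        static bool vel2_fast_eret_mode(u64 *spsr)
        {
                u64 mode = *spsr & (PSR_MODE_MASK | PSR_MODE32_BIT);

                switch (mode) {
                case PSR_MODE_EL2t:
                        mode = PSR_MODE_EL1t;
                        break;
                case PSR_MODE_EL2h:
                        mode = PSR_MODE_EL1h;
                        break;
                default:
                        return false;   /* not a local exception return */
                }

                *spsr = (*spsr & ~(PSR_MODE_MASK | PSR_MODE32_BIT)) | mode;
                return true;
        }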
* [PATCH v11 40/43] KVM: arm64: nv: Fast-track EL1 TLBIs for VHE guests
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (38 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 39/43] KVM: arm64: nv: Fast-track 'InHost' exception returns Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 41/43] KVM: arm64: nv: Use FEAT_ECV to trap access to EL0 timers Marc Zyngier
` (6 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Due to the way ARMv8.4-NV suppresses traps when accessing EL2
system registers, we can't track when the guest changes its
HCR_EL2.TGE setting. This means we always trap EL1 TLBIs,
even if they don't affect any guest.
This obviously has a huge impact on performance, as we handle
TLBI traps as a normal exit, and a normal VHE host issues
thousands of TLBIs when booting (and quite a few when running
userspace).
A cheap way to reduce the overhead is to handle the limited
case of {E2H,TGE}=={1,1} as a guest fixup, as we already have
the right mmu configuration in place. Just execute the decoded
instruction right away and return to the guest.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/hyp/vhe/switch.c | 44 ++++++++++++++++++++++++++++++++-
arch/arm64/kvm/hyp/vhe/tlb.c | 6 +++--
arch/arm64/kvm/sys_regs.c | 12 ---------
3 files changed, 47 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 85db519ea811..360328aaaf7c 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -224,6 +224,48 @@ void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu)
__vcpu_put_switch_sysregs(vcpu);
}
+static bool kvm_hyp_handle_tlbi_el1(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+ u32 instr;
+ u64 val;
+
+ /*
+ * Ideally, we would never trap on EL1 TLB invalidations when the
+ * guest's HCR_EL2.{E2H,TGE} == {1,1}. But "thanks" to ARMv8.4, we
+ * don't trap writes to HCR_EL2, meaning that we can't track
+ * changes to the virtual TGE bit. So we leave HCR_EL2.TTLB set on
+ * the host. Oopsie...
+ *
+ * In order to speed-up EL1 TLBIs from the vEL2 guest when TGE is
+ * set, try and handle these invalidation as quickly as possible,
+ * without fully exiting. Note that we don't need to consider
+ * any forwarding here, as having E2H+TGE set is the very definition
+ * of being InHost.
+ */
+ if (!vcpu_has_nv(vcpu) || !vcpu_is_el2(vcpu) ||
+ !(vcpu_el2_e2h_is_set(vcpu) && vcpu_el2_tge_is_set(vcpu)))
+ return false;
+
+ instr = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
+ if (sys_reg_Op0(instr) != TLBI_Op0 ||
+ sys_reg_Op1(instr) != TLBI_Op1_EL1)
+ return false;
+
+ val = vcpu_get_reg(vcpu, kvm_vcpu_sys_get_rt(vcpu));
+ __kvm_tlb_el1_instr(NULL, val, instr);
+ __kvm_skip_instr(vcpu);
+
+ return true;
+}
+
+static bool kvm_hyp_handle_sysreg_vhe(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+ if (kvm_hyp_handle_tlbi_el1(vcpu, exit_code))
+ return true;
+
+ return kvm_hyp_handle_sysreg(vcpu, exit_code);
+}
+
static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
{
u64 spsr, mode;
@@ -270,7 +312,7 @@ static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
static const exit_handler_fn hyp_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX] = NULL,
[ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32,
- [ESR_ELx_EC_SYS64] = kvm_hyp_handle_sysreg,
+ [ESR_ELx_EC_SYS64] = kvm_hyp_handle_sysreg_vhe,
[ESR_ELx_EC_SVE] = kvm_hyp_handle_fpsimd,
[ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd,
[ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low,
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 737ea0591b54..bf7ab30522e9 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -271,7 +271,8 @@ void __kvm_tlb_el1_instr(struct kvm_s2_mmu *mmu, u64 val, u64 sys_encoding)
dsb(ishst);
/* Switch to requested VMID */
- __tlb_switch_to_guest(mmu, &cxt);
+ if (mmu)
+ __tlb_switch_to_guest(mmu, &cxt);
/*
* Execute the same instruction as the guest hypervisor did,
@@ -310,5 +311,6 @@ void __kvm_tlb_el1_instr(struct kvm_s2_mmu *mmu, u64 val, u64 sys_encoding)
dsb(ish);
isb();
- __tlb_switch_to_host(&cxt);
+ if (mmu)
+ __tlb_switch_to_host(&cxt);
}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9a82f42b45ed..e53bc33a23cc 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3283,18 +3283,6 @@ static bool handle_tlbi_el1(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
WARN_ON(!vcpu_is_el2(vcpu));
- if ((__vcpu_sys_reg(vcpu, HCR_EL2) & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
- mutex_lock(&vcpu->kvm->lock);
- /*
- * ARMv8.4-NV allows the guest to change TGE behind
- * our back, so we always trap EL1 TLBIs from vEL2...
- */
- __kvm_tlb_el1_instr(&vcpu->kvm->arch.mmu, p->regval, sys_encoding);
- mutex_unlock(&vcpu->kvm->lock);
-
- return true;
- }
-
kvm_s2_mmu_iterate_by_vmid(vcpu->kvm, get_vmid(vttbr),
&(union tlbi_info) {
.va = {
--
2.39.2
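For a concrete instance of the Op0/Op1 filter above, consider TLBI
VAE1IS, which (assuming the usual sysreg encoding) is
sys_insn(1, 0, 8, 3, 1): Op0 matches TLBI_Op0 (1) and Op1 matches
TLBI_Op1_EL1 (0). With {E2H,TGE} == {1,1} in the vEL2 guest, the
instruction is then replayed on the spot:

        instr = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
        val   = vcpu_get_reg(vcpu, kvm_vcpu_sys_get_rt(vcpu)); /* VA/ASID */
        __kvm_tlb_el1_instr(NULL, val, instr);  /* NULL mmu: no VMID switch */
        __kvm_skip_instr(vcpu);                 /* no full exit needed */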
* [PATCH v11 41/43] KVM: arm64: nv: Use FEAT_ECV to trap access to EL0 timers
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (39 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 40/43] KVM: arm64: nv: Fast-track EL1 TLBIs for VHE guests Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 42/43] KVM: arm64: nv: Accelerate EL0 timer read accesses when FEAT_ECV is on Marc Zyngier
` (5 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Although FEAT_NV2 makes most things fast, it also makes it impossible
to correctly emulate the timers, as the sysreg accesses are redirected
to memory.
FEAT_ECV addresses this by giving a hypervisor the ability to trap
the EL02 sysregs as well as the virtual timer.
Add the required trap setting to make use of the feature, allowing
us to elide the ugly resync in the middle of the run loop.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/arch_timer.c | 36 +++++++++++++++++++++++++---
include/clocksource/arm_arch_timer.h | 4 ++++
2 files changed, 37 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index dba92bbe4617..860f6e190e63 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -782,7 +782,7 @@ static void kvm_timer_vcpu_load_nested_switch(struct kvm_vcpu *vcpu,
static void timer_set_traps(struct kvm_vcpu *vcpu, struct timer_map *map)
{
- bool tpt, tpc;
+ bool tvt, tpt, tvc, tpc, tvt02, tpt02;
u64 clr, set;
/*
@@ -797,7 +797,29 @@ static void timer_set_traps(struct kvm_vcpu *vcpu, struct timer_map *map)
* within this function, reality kicks in and we start adding
* traps based on emulation requirements.
*/
- tpt = tpc = false;
+ tvt = tpt = tvc = tpc = false;
+ tvt02 = tpt02 = false;
+
+ /*
+ * NV2 badly breaks the timer semantics by redirecting accesses to
+ * the EL0 timer state to memory, so let's call ECV to the rescue if
+ * available: we trap all CNT{P,V}_{CTL,CVAL,TVAL}_EL0 accesses.
+ *
+ * The treatment varies slightly depending on whether we run an nVHE
+ * or a VHE guest: an nVHE guest uses the _EL0 registers directly,
+ * while a VHE guest uses the _EL02 accessors. This translates into
+ * different trap bits.
+ *
+ * None of this trapping is required when running in non-HYP context,
+ * unless it is required by the L1 hypervisor settings once we
+ * advertise ECV+NV to the guest, or we need trapping for other reasons.
+ */
+ if (cpus_have_final_cap(ARM64_HAS_ECV) && is_hyp_ctxt(vcpu)) {
+ if (vcpu_el2_e2h_is_set(vcpu))
+ tvt02 = tpt02 = true;
+ else
+ tvt = tpt = true;
+ }
/*
* We have two possibility to deal with a physical offset:
@@ -837,6 +859,10 @@ static void timer_set_traps(struct kvm_vcpu *vcpu, struct timer_map *map)
assign_clear_set_bit(tpt, CNTHCTL_EL1PCEN << 10, set, clr);
assign_clear_set_bit(tpc, CNTHCTL_EL1PCTEN << 10, set, clr);
+ assign_clear_set_bit(tvt, CNTHCTL_EL1TVT, clr, set);
+ assign_clear_set_bit(tvc, CNTHCTL_EL1TVCT, clr, set);
+ assign_clear_set_bit(tvt02, CNTHCTL_EL1NVVCT, clr, set);
+ assign_clear_set_bit(tpt02, CNTHCTL_EL1NVPCT, clr, set);
/* This only happens on VHE, so use the CNTHCTL_EL2 accessor. */
sysreg_clear_set(cnthctl_el2, clr, set);
@@ -932,8 +958,12 @@ void kvm_timer_sync_nested(struct kvm_vcpu *vcpu)
* accesses redirected to the VNCR page. Any guest action taken on
* the timer is postponed until the next exit, leading to a very
* poor quality of emulation.
+ *
+ * This is an unmitigated disaster, only papered over by FEAT_ECV,
+ * which allows trapping of the timer registers even with NV2.
+ * Even then, this is still worse than FEAT_NV on its own. Meh.
*/
- if (!is_hyp_ctxt(vcpu))
+ if (cpus_have_final_cap(ARM64_HAS_ECV) || !is_hyp_ctxt(vcpu))
return;
if (!vcpu_el2_e2h_is_set(vcpu)) {
diff --git a/include/clocksource/arm_arch_timer.h b/include/clocksource/arm_arch_timer.h
index cbbc9a6dc571..c62811fb4130 100644
--- a/include/clocksource/arm_arch_timer.h
+++ b/include/clocksource/arm_arch_timer.h
@@ -22,6 +22,10 @@
#define CNTHCTL_EVNTDIR (1 << 3)
#define CNTHCTL_EVNTI (0xF << 4)
#define CNTHCTL_ECV (1 << 12)
+#define CNTHCTL_EL1TVT (1 << 13)
+#define CNTHCTL_EL1TVCT (1 << 14)
+#define CNTHCTL_EL1NVPCT (1 << 15)
+#define CNTHCTL_EL1NVVCT (1 << 16)
enum arch_timer_reg {
ARCH_TIMER_REG_CTRL,
--
2.39.2
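Two bit-layout conventions meet in the trap-setting hunk, which is
worth unpacking: with HCR_EL2.E2H == 1, CNTHCTL_EL2 uses the VHE
layout, where the legacy EL1PCTEN/EL1PCEN *enable* bits sit at
positions [11:10] (hence the '<< 10'), while the new ECV *trap* bits
live at fixed positions [16:13]. This is also why the clr/set
arguments are swapped between the two groups of assign_clear_set_bit()
calls. Open-coding one of each (a sketch, assuming the macro ORs the
bit into its last argument when the predicate is true):

        if (tpt)                                /* trap physical timer... */
                clr |= CNTHCTL_EL1PCEN << 10;   /* ...by clearing the enable */
        else
                set |= CNTHCTL_EL1PCEN << 10;

        if (tpt02)                              /* trap the _EL02 accessors... */
                set |= CNTHCTL_EL1NVPCT;        /* ...by setting the trap bit */
        else
                clr |= CNTHCTL_EL1NVPCT;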
* [PATCH v11 42/43] KVM: arm64: nv: Accelerate EL0 timer read accesses when FEAT_ECV is on
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (40 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 41/43] KVM: arm64: nv: Use FEAT_ECV to trap access to EL0 timers Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 43/43] KVM: arm64: nv: Allow userspace to request KVM_ARM_VCPU_NESTED_VIRT Marc Zyngier
` (4 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Although FEAT_ECV allows us to correctly emulate the timers, it also
hurts performance pretty badly (an L2 guest doing a lot of virtio,
emulated in L1 userspace, sees a 30% degradation).
Mitigate this by emulating the CTL/CVAL register reads in the
inner run loop, without returning to the general kernel. This halves
the overhead described above.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/arch_timer.c | 15 -------
arch/arm64/kvm/hyp/vhe/switch.c | 70 +++++++++++++++++++++++++++++++++
include/kvm/arm_arch_timer.h | 15 +++++++
3 files changed, 85 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index 860f6e190e63..1ee1ede23607 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -101,21 +101,6 @@ u64 timer_get_cval(struct arch_timer_context *ctxt)
}
}
-static u64 timer_get_offset(struct arch_timer_context *ctxt)
-{
- u64 offset = 0;
-
- if (!ctxt)
- return 0;
-
- if (ctxt->offset.vm_offset)
- offset += *ctxt->offset.vm_offset;
- if (ctxt->offset.vcpu_offset)
- offset += *ctxt->offset.vcpu_offset;
-
- return offset;
-}
-
static void timer_set_ctl(struct arch_timer_context *ctxt, u32 ctl)
{
struct kvm_vcpu *vcpu = ctxt->vcpu;
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 360328aaaf7c..8d1e9d1adabe 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -258,11 +258,81 @@ static bool kvm_hyp_handle_tlbi_el1(struct kvm_vcpu *vcpu, u64 *exit_code)
return true;
}
+static bool kvm_hyp_handle_timer(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+ u64 esr, val;
+
+ /*
+ * Having FEAT_ECV allows for a better quality of timer emulation.
+ * However, this comes at a huge cost in terms of traps. Try and
+ * satisfy the reads without returning to the kernel if we can.
+ */
+ if (!is_hyp_ctxt(vcpu))
+ return false;
+
+ esr = kvm_vcpu_get_esr(vcpu);
+ if ((esr & ESR_ELx_SYS64_ISS_DIR_MASK) != ESR_ELx_SYS64_ISS_DIR_READ)
+ return false;
+
+ switch (esr_sys64_to_sysreg(esr)) {
+ case SYS_CNTP_CTL_EL02:
+ val = __vcpu_sys_reg(vcpu, CNTP_CTL_EL0);
+ break;
+ case SYS_CNTP_CTL_EL0:
+ if (vcpu_el2_e2h_is_set(vcpu))
+ val = read_sysreg_el0(SYS_CNTP_CTL);
+ else
+ val = __vcpu_sys_reg(vcpu, CNTP_CTL_EL0);
+ break;
+ case SYS_CNTP_CVAL_EL02:
+ val = __vcpu_sys_reg(vcpu, CNTP_CVAL_EL0);
+ break;
+ case SYS_CNTP_CVAL_EL0:
+ if (vcpu_el2_e2h_is_set(vcpu)) {
+ val = read_sysreg_el0(SYS_CNTP_CVAL);
+
+ if (!has_cntpoff())
+ val -= timer_get_offset(vcpu_hptimer(vcpu));
+ } else {
+ val = __vcpu_sys_reg(vcpu, CNTP_CVAL_EL0);
+ }
+ break;
+ case SYS_CNTV_CTL_EL02:
+ val = __vcpu_sys_reg(vcpu, CNTV_CTL_EL0);
+ break;
+ case SYS_CNTV_CTL_EL0:
+ if (vcpu_el2_e2h_is_set(vcpu))
+ val = read_sysreg_el0(SYS_CNTV_CTL);
+ else
+ val = __vcpu_sys_reg(vcpu, CNTV_CTL_EL0);
+ break;
+ case SYS_CNTV_CVAL_EL02:
+ val = __vcpu_sys_reg(vcpu, CNTV_CVAL_EL0);
+ break;
+ case SYS_CNTV_CVAL_EL0:
+ if (vcpu_el2_e2h_is_set(vcpu))
+ val = read_sysreg_el0(SYS_CNTV_CVAL);
+ else
+ val = __vcpu_sys_reg(vcpu, CNTV_CVAL_EL0);
+ break;
+ default:
+ return false;
+ }
+
+ vcpu_set_reg(vcpu, kvm_vcpu_sys_get_rt(vcpu), val);
+ __kvm_skip_instr(vcpu);
+
+ return true;
+}
+
static bool kvm_hyp_handle_sysreg_vhe(struct kvm_vcpu *vcpu, u64 *exit_code)
{
if (kvm_hyp_handle_tlbi_el1(vcpu, exit_code))
return true;
+ if (kvm_hyp_handle_timer(vcpu, exit_code))
+ return true;
+
return kvm_hyp_handle_sysreg(vcpu, exit_code);
}
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index 6e3f6b7ff2b2..c1ba31fab6f5 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -156,4 +156,19 @@ static inline bool has_cntpoff(void)
return (has_vhe() && cpus_have_final_cap(ARM64_HAS_ECV_CNTPOFF));
}
+static inline u64 timer_get_offset(struct arch_timer_context *ctxt)
+{
+ u64 offset = 0;
+
+ if (!ctxt)
+ return 0;
+
+ if (ctxt->offset.vm_offset)
+ offset += *ctxt->offset.vm_offset;
+ if (ctxt->offset.vcpu_offset)
+ offset += *ctxt->offset.vcpu_offset;
+
+ return offset;
+}
+
#endif
--
2.39.2
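To make the saving concrete, here is the path taken when an InHost
(E2H == 1) L1 guest hypervisor reads CNTV_CTL_EL02, hedged against the
exact call chain:

        /*
         * Guest reads CNTV_CTL_EL02
         *   -> trapped via CNTHCTL_EL2.EL1NVVCT (previous patch)
         *   -> fixup_guest_exit() -> kvm_hyp_handle_sysreg_vhe()
         *   -> kvm_hyp_handle_timer(): val = __vcpu_sys_reg(vcpu, CNTV_CTL_EL0)
         *   -> vcpu_set_reg() + __kvm_skip_instr(): back into the guest
         *      without ever leaving hyp context.
         */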
* [PATCH v11 43/43] KVM: arm64: nv: Allow userspace to request KVM_ARM_VCPU_NESTED_VIRT
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (41 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 42/43] KVM: arm64: nv: Accelerate EL0 timer read accesses when FEAT_ECV is on Marc Zyngier
@ 2023-11-20 13:10 ` Marc Zyngier
2023-11-21 8:51 ` [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Ganapatrao Kulkarni
` (3 subsequent siblings)
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-20 13:10 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Since we're (almost) feature complete, let's allow userspace to
request KVM_ARM_VCPU_NESTED_VIRT by bumping KVM_VCPU_MAX_FEATURES up.
We also now advertise the feature to userspace with a new capability,
KVM_CAP_ARM_EL2 (see the usage sketch after the patch).
It's going to be great...
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 2 +-
arch/arm64/kvm/arm.c | 3 +++
include/uapi/linux/kvm.h | 1 +
3 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f96fc5a3dde0..74dcae972d77 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -39,7 +39,7 @@
#define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
-#define KVM_VCPU_MAX_FEATURES 7
+#define KVM_VCPU_MAX_FEATURES 8
#define KVM_VCPU_VALID_FEATURES (BIT(KVM_VCPU_MAX_FEATURES) - 1)
#define KVM_REQ_SLEEP \
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index d684a2af3406..2ce02d9ef11d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -289,6 +289,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_ARM_EL1_32BIT:
r = cpus_have_final_cap(ARM64_HAS_32BIT_EL1);
break;
+ case KVM_CAP_ARM_EL2:
+ r = cpus_have_final_cap(ARM64_HAS_NESTED_VIRT);
+ break;
case KVM_CAP_GUEST_DEBUG_HW_BPS:
r = get_num_brps();
break;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 211b86de35ac..6cd6a677163a 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1201,6 +1201,7 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE 228
#define KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES 229
#define KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES 230
+#define KVM_CAP_ARM_EL2 231
#ifdef KVM_CAP_IRQ_ROUTING
--
2.39.2
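For completeness, a hedged userspace sketch of probing and requesting
the feature. It assumes KVM_ARM_VCPU_NESTED_VIRT is the feature bit
(7) implied by the KVM_VCPU_MAX_FEATURES bump, and that vm_fd/vcpu_fd
come from KVM_CREATE_VM/KVM_CREATE_VCPU.

        #include <linux/kvm.h>
        #include <sys/ioctl.h>

        static int vcpu_init_with_el2(int vm_fd, int vcpu_fd)
        {
                struct kvm_vcpu_init init;

                if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_EL2) <= 0)
                        return -1;      /* NV not supported on this host */

                if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init))
                        return -1;

                init.features[0] |= 1U << KVM_ARM_VCPU_NESTED_VIRT;

                return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
        }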
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (42 preceding siblings ...)
2023-11-20 13:10 ` [PATCH v11 43/43] KVM: arm64: nv: Allow userspace to request KVM_ARM_VCPU_NESTED_VIRT Marc Zyngier
@ 2023-11-21 8:51 ` Ganapatrao Kulkarni
2023-11-21 9:08 ` Marc Zyngier
2023-11-21 16:49 ` Miguel Luis
` (2 subsequent siblings)
46 siblings, 1 reply; 79+ messages in thread
From: Ganapatrao Kulkarni @ 2023-11-21 8:51 UTC (permalink / raw)
To: Marc Zyngier, kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Darren Hart, Jintack Lim, Russell King, Miguel Luis, James Morse,
Suzuki K Poulose, Oliver Upton, Zenghui Yu
Hi Marc,
On 20-11-2023 06:39 pm, Marc Zyngier wrote:
> [...]
The v11 series is not booting on the Ampere platform (I have yet to
debug this). With lkvm, it gets stuck at a very early stage, with no
early boot prints/logs.
Are there any changes needed in kvmtool for v11?
Thanks,
Ganapat
* Re: [PATCH v11 01/43] arm64: cpufeatures: Restrict NV support to FEAT_NV2
2023-11-20 13:09 ` [PATCH v11 01/43] arm64: cpufeatures: Restrict NV support to FEAT_NV2 Marc Zyngier
@ 2023-11-21 9:07 ` Ganapatrao Kulkarni
2023-11-21 9:27 ` Marc Zyngier
0 siblings, 1 reply; 79+ messages in thread
From: Ganapatrao Kulkarni @ 2023-11-21 9:07 UTC (permalink / raw)
To: Marc Zyngier, kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Darren Hart, Jintack Lim, Russell King, Miguel Luis, James Morse,
Suzuki K Poulose, Oliver Upton, Zenghui Yu
On 20-11-2023 06:39 pm, Marc Zyngier wrote:
> To anyone who has played with FEAT_NV, it is obvious that the level
> of performance is rather low due to the trap amplification that it
> imposes on the host hypervisor. FEAT_NV2 solves a number of the
> problems that FEAT_NV had.
>
> It also turns out that all the existing hardware that has FEAT_NV
> also has FEAT_NV2. Finally, it is now allowed by the architecture
> to build FEAT_NV2 *only* (as denoted by ID_AA64MMFR4_EL1.NV_frac),
> which effectively seals the fate of FEAT_NV.
>
> Restrict the NV support to NV2, and be done with it. Nobody will
> cry over the old crap.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kernel/cpufeature.c | 22 +++++++++++++++-------
> arch/arm64/tools/cpucaps | 2 ++
> 2 files changed, 17 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 7dcda39537f8..95a677cf8c04 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -439,6 +439,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr3[] = {
>
> static const struct arm64_ftr_bits ftr_id_aa64mmfr4[] = {
> S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR4_EL1_E2H0_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64MMFR4_EL1_NV_frac_SHIFT, 4, 0),
> ARM64_FTR_END,
> };
>
> @@ -2080,12 +2081,8 @@ static bool has_nested_virt_support(const struct arm64_cpu_capabilities *cap,
> if (kvm_get_mode() != KVM_MODE_NV)
> return false;
>
> - if (!has_cpuid_feature(cap, scope)) {
> - pr_warn("unavailable: %s\n", cap->desc);
> - return false;
> - }
> -
> - return true;
> + return (__system_matches_cap(ARM64_HAS_NV2) |
> + __system_matches_cap(ARM64_HAS_NV2_ONLY));
This seems to be a typo; should it be a logical OR?
> }
>
> static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
> @@ -2391,12 +2388,23 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
> .matches = runs_at_el2,
> .cpu_enable = cpu_copy_el2regs,
> },
> + {
> + .capability = ARM64_HAS_NV2,
> + .type = ARM64_CPUCAP_SYSTEM_FEATURE,
> + .matches = has_cpuid_feature,
> + ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, NV, NV2)
> + },
> + {
> + .capability = ARM64_HAS_NV2_ONLY,
> + .type = ARM64_CPUCAP_SYSTEM_FEATURE,
> + .matches = has_cpuid_feature,
> + ARM64_CPUID_FIELDS(ID_AA64MMFR4_EL1, NV_frac, NV2_ONLY)
> + },
> {
> .desc = "Nested Virtualization Support",
> .capability = ARM64_HAS_NESTED_VIRT,
> .type = ARM64_CPUCAP_SYSTEM_FEATURE,
> .matches = has_nested_virt_support,
> - ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, NV, IMP)
Since only NV2 is supported, would it be more appropriate to have the
description read "Enhanced Nested Virtualization Support"?
> },
> {
> .capability = ARM64_HAS_32BIT_EL0_DO_NOT_USE,
> diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
> index fea24bcd6252..480de648cd03 100644
> --- a/arch/arm64/tools/cpucaps
> +++ b/arch/arm64/tools/cpucaps
> @@ -41,6 +41,8 @@ HAS_LSE_ATOMICS
> HAS_MOPS
> HAS_NESTED_VIRT
> HAS_NO_HW_PREFETCH
> +HAS_NV2
> +HAS_NV2_ONLY
> HAS_PAN
> HAS_S1PIE
> HAS_RAS_EXTN
Thanks,
Ganapat
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-21 8:51 ` [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Ganapatrao Kulkarni
@ 2023-11-21 9:08 ` Marc Zyngier
2023-11-21 9:26 ` Ganapatrao Kulkarni
0 siblings, 1 reply; 79+ messages in thread
From: Marc Zyngier @ 2023-11-21 9:08 UTC (permalink / raw)
To: Ganapatrao Kulkarni
Cc: kvmarm, kvm, linux-arm-kernel, Alexandru Elisei, Andre Przywara,
Chase Conklin, Christoffer Dall, Darren Hart, Jintack Lim,
Russell King, Miguel Luis, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On Tue, 21 Nov 2023 08:51:35 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>
>
> Hi Marc,
>
> On 20-11-2023 06:39 pm, Marc Zyngier wrote:
> > [...]
>
> The v11 series is not booting on the Ampere platform (I have yet to
> debug this). With lkvm, it gets stuck at a very early stage, with no
> early boot prints/logs.
>
> Are there any changes needed in kvmtool for v11?
Not really, I'm still using the version I had built for 6.5. Is the
problem with L1 or L2?
However, this looks like a problem I've been chasing, and which I
thought was only an M2 issue. In some situations, I'm getting interrupt
storms when L1 gets a level interrupt while in L2.
Can you cherry-pick [1] from my tree, and let me know if this helps?
This isn't a proper fix, but if L2 starts booting with this, I would
know this is a common issue.
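Concretely, something like this should do it (the commit id being the
one from [1] below):
	git fetch https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git kvm-arm64/nv-6.8-nv2-only
	git cherry-pick 759d2e18f8954f4c76eb1772f38301df6ed8fa5d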
Now, if your problem is with L1, I really have no idea.
Thanks,
M.
[1] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/commit/?h=kvm-arm64/nv-6.8-nv2-only&id=759d2e18f8954f4c76eb1772f38301df6ed8fa5d
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-21 9:08 ` Marc Zyngier
@ 2023-11-21 9:26 ` Ganapatrao Kulkarni
2023-11-21 9:41 ` Marc Zyngier
0 siblings, 1 reply; 79+ messages in thread
From: Ganapatrao Kulkarni @ 2023-11-21 9:26 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Alexandru Elisei, Andre Przywara,
Chase Conklin, Christoffer Dall, Darren Hart, Jintack Lim,
Russell King, Miguel Luis, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On 21-11-2023 02:38 pm, Marc Zyngier wrote:
> On Tue, 21 Nov 2023 08:51:35 +0000,
> Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>>
>>
>> Hi Marc,
>>
>> On 20-11-2023 06:39 pm, Marc Zyngier wrote:
>>> [...]
>>
>> The v11 series is not booting on the Ampere platform (I have yet to
>> debug this). With lkvm, it gets stuck at a very early stage, with no
>> early boot prints/logs.
>>
>> Are there any changes needed in kvmtool for v11?
>
> Not really, I'm still using the version I had built for 6.5. Is the
> problem with L1 or L2?
Stuck in L1 itself.
I am using kvmtool from
https://git.kernel.org/pub/scm/linux/kernel/git/maz/kvmtool.git/log/?h=arm64/nv-5.16
>
> However, this looks like a problem I've been chasing, and which I
> though was only a M2 issue. In some situations, I'm getting interrupt
> storms when L1 gets a level interrupt while in L2.
>
> Can you cherry-pick [1] from my tree, and let me know if this helps?
> This isn't a proper fix, but if L2 starts booting with this, I would
> know this is a common issue.
>
> Now, if your problem is with L1, I really have no idea.
>
> Thanks,
>
> M.
>
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/commit/?h=kvm-arm64/nv-6.8-nv2-only&id=759d2e18f8954f4c76eb1772f38301df6ed8fa5d
>
Thanks,
Ganapat
* Re: [PATCH v11 01/43] arm64: cpufeatures: Restrict NV support to FEAT_NV2
2023-11-21 9:07 ` Ganapatrao Kulkarni
@ 2023-11-21 9:27 ` Marc Zyngier
0 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-21 9:27 UTC (permalink / raw)
To: Ganapatrao Kulkarni
Cc: kvmarm, kvm, linux-arm-kernel, Alexandru Elisei, Andre Przywara,
Chase Conklin, Christoffer Dall, Darren Hart, Jintack Lim,
Russell King, Miguel Luis, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On Tue, 21 Nov 2023 09:07:30 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>
>
>
> On 20-11-2023 06:39 pm, Marc Zyngier wrote:
> > To anyone who has played with FEAT_NV, it is obvious that the level
> > of performance is rather low due to the trap amplification that it
> > imposes on the host hypervisor. FEAT_NV2 solves a number of the
> > problems that FEAT_NV had.
> >
> > It also turns out that all the existing hardware that has FEAT_NV
> > also has FEAT_NV2. Finally, it is now allowed by the architecture
> > to build FEAT_NV2 *only* (as denoted by ID_AA64MMFR4_EL1.NV_frac),
> > which effectively seals the fate of FEAT_NV.
> >
> > Restrict the NV support to NV2, and be done with it. Nobody will
> > cry over the old crap.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/arm64/kernel/cpufeature.c | 22 +++++++++++++++-------
> > arch/arm64/tools/cpucaps | 2 ++
> > 2 files changed, 17 insertions(+), 7 deletions(-)
> >
> > diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> > index 7dcda39537f8..95a677cf8c04 100644
> > --- a/arch/arm64/kernel/cpufeature.c
> > +++ b/arch/arm64/kernel/cpufeature.c
> > @@ -439,6 +439,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr3[] = {
> > static const struct arm64_ftr_bits ftr_id_aa64mmfr4[] = {
> > S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR4_EL1_E2H0_SHIFT, 4, 0),
> > + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64MMFR4_EL1_NV_frac_SHIFT, 4, 0),
> > ARM64_FTR_END,
> > };
> > @@ -2080,12 +2081,8 @@ static bool has_nested_virt_support(const
> > struct arm64_cpu_capabilities *cap,
> > if (kvm_get_mode() != KVM_MODE_NV)
> > return false;
> > - if (!has_cpuid_feature(cap, scope)) {
> > - pr_warn("unavailable: %s\n", cap->desc);
> > - return false;
> > - }
> > -
> > - return true;
> > + return (__system_matches_cap(ARM64_HAS_NV2) |
> > + __system_matches_cap(ARM64_HAS_NV2_ONLY));
>
> This seems to be a typo; should it be a logical OR?
Indeed, this is a bug. Not that it will have any effect as
__system_matches_cap() returns a bool, so | and || are strictly
equivalent.
Worth addressing though.
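That is, the fixed return would simply read:
	return __system_matches_cap(ARM64_HAS_NV2) ||
	       __system_matches_cap(ARM64_HAS_NV2_ONLY);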
>
> > }
> > static bool hvhe_possible(const struct arm64_cpu_capabilities
> > *entry,
> > @@ -2391,12 +2388,23 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
> > .matches = runs_at_el2,
> > .cpu_enable = cpu_copy_el2regs,
> > },
> > + {
> > + .capability = ARM64_HAS_NV2,
> > + .type = ARM64_CPUCAP_SYSTEM_FEATURE,
> > + .matches = has_cpuid_feature,
> > + ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, NV, NV2)
> > + },
> > + {
> > + .capability = ARM64_HAS_NV2_ONLY,
> > + .type = ARM64_CPUCAP_SYSTEM_FEATURE,
> > + .matches = has_cpuid_feature,
> > + ARM64_CPUID_FIELDS(ID_AA64MMFR4_EL1, NV_frac, NV2_ONLY)
> > + },
> > {
> > .desc = "Nested Virtualization Support",
> > .capability = ARM64_HAS_NESTED_VIRT,
> > .type = ARM64_CPUCAP_SYSTEM_FEATURE,
> > .matches = has_nested_virt_support,
> > - ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, NV, IMP)
>
> Since only NV2 is supported, would it be more appropriate to have
> the description read "Enhanced Nested Virtualization Support"?
Nah. There is nothing 'enhanced' about NV2. It is NV that should have
been named "Unusable Nested Virt"... So I'm perfectly happy to leave
it as is.
And to be honest, I'd rather we display FEAT_* rather than some
interpretation of it, but I'm not going to repaint cpufeature.c.
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-21 9:26 ` Ganapatrao Kulkarni
@ 2023-11-21 9:41 ` Marc Zyngier
2023-11-22 11:10 ` Ganapatrao Kulkarni
0 siblings, 1 reply; 79+ messages in thread
From: Marc Zyngier @ 2023-11-21 9:41 UTC (permalink / raw)
To: Ganapatrao Kulkarni
Cc: kvmarm, kvm, linux-arm-kernel, Alexandru Elisei, Andre Przywara,
Chase Conklin, Christoffer Dall, Darren Hart, Jintack Lim,
Russell King, Miguel Luis, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On Tue, 21 Nov 2023 09:26:22 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>
>
>
> On 21-11-2023 02:38 pm, Marc Zyngier wrote:
> > On Tue, 21 Nov 2023 08:51:35 +0000,
> > Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
> >>
> >>
> >> Hi Marc,
> >>
> >> On 20-11-2023 06:39 pm, Marc Zyngier wrote:
> >>> [...]
> >>
> >> The v11 series is not booting on the Ampere platform (I have yet
> >> to debug this). With lkvm, it gets stuck at a very early stage,
> >> with no early boot prints/logs.
> >>
> >> Are there any changes needed in kvmtool for v11?
> >
> > Not really, I'm still using the version I had built for 6.5. Is the
> > problem with L1 or L2?
>
> Stuck in L1 itself.
>
> I am using kvmtool from
> https://git.kernel.org/pub/scm/linux/kernel/git/maz/kvmtool.git/log/?h=arm64/nv-5.16
Huh. That's positively ancient. Yet, you shouldn't get into a
situation where the L1 guest locks up.
I have pushed out my kvmtool branch[1]. Please give it a go.
Thanks,
M.
[1] https://git.kernel.org/pub/scm/linux/kernel/git/maz/kvmtool.git/log/?h=arm64/nv-6.5
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (43 preceding siblings ...)
2023-11-21 8:51 ` [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Ganapatrao Kulkarni
@ 2023-11-21 16:49 ` Miguel Luis
2023-11-21 19:02 ` Marc Zyngier
2023-12-18 12:39 ` Marc Zyngier
2023-12-19 10:32 ` (subset) " Marc Zyngier
46 siblings, 1 reply; 79+ messages in thread
From: Miguel Luis @ 2023-11-21 16:49 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu
Hi Marc,
> On 20 Nov 2023, at 12:09, Marc Zyngier <maz@kernel.org> wrote:
>
> [...]
>
While I was testing this with kvmtool for 5.16, I noted the following in dmesg:
[ 803.014258] kvm [19040]: Unsupported guest sys_reg access at: 8129fa50 [600003c9]
{ Op0( 3), Op1( 5), CRn( 1), CRm( 0), Op2( 2), func_read },
This is CPACR_EL12. I still need to debug this.
As for QEMU, it is having issues enabling the _EL2 feature although EL2
is reported as supported via KVM_CAP_ARM_EL2; I have yet to debug this as well.
Thanks
Miguel
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-21 16:49 ` Miguel Luis
@ 2023-11-21 19:02 ` Marc Zyngier
2023-11-23 16:21 ` Miguel Luis
0 siblings, 1 reply; 79+ messages in thread
From: Marc Zyngier @ 2023-11-21 19:02 UTC (permalink / raw)
To: Miguel Luis
Cc: kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu
On Tue, 21 Nov 2023 16:49:52 +0000,
Miguel Luis <miguel.luis@oracle.com> wrote:
>
> Hi Marc,
>
> > On 20 Nov 2023, at 12:09, Marc Zyngier <maz@kernel.org> wrote:
> >
> > [...]
>
> While I was testing this with kvmtool for 5.16, I noted the following in dmesg:
>
> [ 803.014258] kvm [19040]: Unsupported guest sys_reg access at: 8129fa50 [600003c9]
> { Op0( 3), Op1( 5), CRn( 1), CRm( 0), Op2( 2), func_read },
>
> This is CPACR_EL12.
CPACR_EL12 is redirected to VNCR[0x100]. It really shouldn't trap...
> I still need to debug this.
Can you disassemble the guest around the offending PC?
> As for QEMU, it is having issues enabling the _EL2 feature although EL2
> is reported as supported via KVM_CAP_ARM_EL2; I have yet to debug this as well.
The capability number changes at each release. Make sure you resync
your includes.
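For reference, the check on the userspace side boils down to
something like this (a sketch, assuming uapi headers regenerated from
this series so that KVM_CAP_ARM_EL2 carries the right value):
	int kvm_fd = open("/dev/kvm", O_RDWR);
	int has_el2 = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_EL2);
	/* has_el2 > 0 means KVM_ARM_VCPU_NESTED_VIRT can be requested */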
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-21 9:41 ` Marc Zyngier
@ 2023-11-22 11:10 ` Ganapatrao Kulkarni
2023-11-22 11:39 ` Marc Zyngier
0 siblings, 1 reply; 79+ messages in thread
From: Ganapatrao Kulkarni @ 2023-11-22 11:10 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Alexandru Elisei, Andre Przywara,
Chase Conklin, Christoffer Dall, Darren Hart, Jintack Lim,
Russell King, Miguel Luis, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On 21-11-2023 03:11 pm, Marc Zyngier wrote:
> On Tue, 21 Nov 2023 09:26:22 +0000,
> Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>>
>>
>>
>> On 21-11-2023 02:38 pm, Marc Zyngier wrote:
>>> On Tue, 21 Nov 2023 08:51:35 +0000,
>>> Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>>>>
>>>>
>>>> Hi Marc,
>>>>
>>>> On 20-11-2023 06:39 pm, Marc Zyngier wrote:
>>>>> [...]
>>>>
>>>> The v11 series is not booting on the Ampere platform (I have yet
>>>> to debug this). With lkvm, it gets stuck at a very early stage,
>>>> with no early boot prints/logs.
>>>>
>>>> Are there any changes needed in kvmtool for v11?
>>>
>>> Not really, I'm still using the version I had built for 6.5. Is the
>>> problem with L1 or L2?
>>
>> Stuck in L1 itself.
>>
>> I am using kvmtool from
>> https://git.kernel.org/pub/scm/linux/kernel/git/maz/kvmtool.git/log/?h=arm64/nv-5.16
>
> Huh. That's positively ancient. Yet, you shouldn't get into a
> situation where the L1 guest locks up.
>
> I have pushed out my kvmtool branch[1]. Please give it a go.
>
No change, L1 still hangs. I captured an ftrace, and L1 keeps
looping/faulting around the same addresses across kvm_entry/kvm_exit.
It is weird behavior: L1 keeps faulting and looping around the
MDCR_EL2 and ID_AA64MMFR3_EL1 accesses in __finalise_el2.
asm:
ffffffc080528a58: d53dc000 mrs x0, vbar_el12
ffffffc080528a5c: d518c000 msr vbar_el1, x0
ffffffc080528a60: d53c1120 mrs x0, mdcr_el2
ffffffc080528a64: 9272f400 and x0, x0, #0xffffffffffffcfff
ffffffc080528a68: 9266f400 and x0, x0, #0xfffffffffcffffff
ffffffc080528a6c: d51c1120 msr mdcr_el2, x0
ffffffc080528a70: d53d2040 mrs x0, tcr_el12
ffffffc080528a74: d5182040 msr tcr_el1, x0
ffffffc080528a78: d53d2000 mrs x0, ttbr0_el12
ffffffc080528a7c: d5182000 msr ttbr0_el1, x0
ffffffc080528a80: d53d2020 mrs x0, ttbr1_el12
ffffffc080528a84: d5182020 msr ttbr1_el1, x0
ffffffc080528a88: d53da200 mrs x0, mair_el12
ffffffc080528a8c: d518a200 msr mair_el1, x0
ffffffc080528a90: d5380761 mrs x1, s3_0_c0_c7_3
ffffffc080528a94: d3400c21 ubfx x1, x1, #0, #4
ffffffc080528a98: b4000141 cbz x1, ffffffc080528ac0
<__finalise_el2+0x270>
ffffffc080528a9c: d53d2060 mrs x0, s3_5_c2_c0_3
ftrace:
kvm-vcpu-0-88776 [001] ...1. 6076.581774: kvm_exit: TRAP:
HSR_EC: 0x0018 (SYS64), PC: 0x0000000080528a6c
kvm-vcpu-0-88776 [001] d..1. 6076.581774: kvm_entry: PC:
0x0000000080528a6c
kvm-vcpu-0-88776 [001] ...1. 6076.581775: kvm_exit: TRAP:
HSR_EC: 0x0018 (SYS64), PC: 0x0000000080528a90
kvm-vcpu-0-88776 [001] d..1. 6076.581776: kvm_entry: PC:
0x0000000080528a90
kvm-vcpu-0-88776 [001] ...1. 6076.581778: kvm_exit: TRAP:
HSR_EC: 0x0018 (SYS64), PC: 0x0000000080528a60
kvm-vcpu-0-88776 [001] d..1. 6076.581778: kvm_entry: PC:
0x0000000080528a60
kvm-vcpu-0-88776 [001] ...1. 6076.581779: kvm_exit: TRAP:
HSR_EC: 0x0018 (SYS64), PC: 0x0000000080528a6c
kvm-vcpu-0-88776 [001] d..1. 6076.581779: kvm_entry: PC:
0x0000000080528a6c
kvm-vcpu-0-88776 [001] ...1. 6076.581780: kvm_exit: TRAP:
HSR_EC: 0x0018 (SYS64), PC: 0x0000000080528a90
kvm-vcpu-0-88776 [001] d..1. 6076.581781: kvm_entry: PC:
0x0000000080528a90
kvm-vcpu-0-88776 [001] ...1. 6076.581783: kvm_exit: TRAP:
HSR_EC: 0x0018 (SYS64), PC: 0x0000000080528a60
kvm-vcpu-0-88776 [001] d..1. 6076.581783: kvm_entry: PC:
0x0000000080528a60
kvm-vcpu-0-88776 [001] ...1. 6076.581784: kvm_exit: TRAP:
HSR_EC: 0x0018 (SYS64), PC: 0x0000000080528a6c
kvm-vcpu-0-88776 [001] d..1. 6076.581784: kvm_entry: PC:
0x0000000080528a6c
kvm-vcpu-0-88776 [001] ...1. 6076.581785: kvm_exit: TRAP:
HSR_EC: 0x0018 (SYS64), PC: 0x0000000080528a90
kvm-vcpu-0-88776 [001] d..1. 6076.581786: kvm_entry: PC:
0x0000000080528a90
kvm-vcpu-0-88776 [001] ...1. 6076.581788: kvm_exit: TRAP:
HSR_EC: 0x0018 (SYS64), PC: 0x0000000080528a60
kvm-vcpu-0-88776 [001] d..1. 6076.581788: kvm_entry: PC:
0x0000000080528a60
kvm-vcpu-0-88776 [001] ...1. 6076.581789: kvm_exit: TRAP:
HSR_EC: 0x0018 (SYS64), PC: 0x0000000080528a6c
kvm-vcpu-0-88776 [001] d..1. 6076.581789: kvm_entry: PC:
0x0000000080528a6c
kvm-vcpu-0-88776 [001] ...1. 6076.581790: kvm_exit: TRAP:
HSR_EC: 0x0018 (SYS64), PC: 0x0000000080528a90
Thanks,
Ganapat
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-22 11:10 ` Ganapatrao Kulkarni
@ 2023-11-22 11:39 ` Marc Zyngier
0 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-22 11:39 UTC (permalink / raw)
To: Ganapatrao Kulkarni
Cc: kvmarm, kvm, linux-arm-kernel, Alexandru Elisei, Andre Przywara,
Chase Conklin, Christoffer Dall, Darren Hart, Jintack Lim,
Russell King, Miguel Luis, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On Wed, 22 Nov 2023 11:10:10 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>
>
> No change, L1 still hangs. I captured an ftrace, and L1 keeps
> looping/faulting around the same addresses across kvm_entry/kvm_exit.
>
> It is weird behavior: L1 keeps faulting and looping around the
> MDCR_EL2 and ID_AA64MMFR3_EL1 accesses in __finalise_el2.
I really can't see how this happens. There are no backward branches,
and we don't seem to reach the ERET either. So something must affect
the state after the trap of ID_AA64MMFR3_EL1.
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-21 19:02 ` Marc Zyngier
@ 2023-11-23 16:21 ` Miguel Luis
2023-11-23 16:44 ` Marc Zyngier
0 siblings, 1 reply; 79+ messages in thread
From: Miguel Luis @ 2023-11-23 16:21 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu
Hi Marc,
On 21/11/2023 18:02, Marc Zyngier wrote:
> On Tue, 21 Nov 2023 16:49:52 +0000,
> Miguel Luis <miguel.luis@oracle.com> wrote:
>> Hi Marc,
>>
>>> On 20 Nov 2023, at 12:09, Marc Zyngier <maz@kernel.org> wrote:
>>>
>>> [...]
>> While I was testing this with kvmtool for 5.16, I noted the following in dmesg:
>>
>> [ 803.014258] kvm [19040]: Unsupported guest sys_reg access at: 8129fa50 [600003c9]
>> { Op0( 3), Op1( 5), CRn( 1), CRm( 0), Op2( 2), func_read },
>>
>> This is CPACR_EL12.
> CPACR_EL12 is redirected to VNCR[0x100]. It really shouldn't trap...
>
>> I still need to debug this.
> Can you disassemble the guest around the offending PC?
[ 1248.686350] kvm [7013]: Unsupported guest sys_reg access at: 812baa50 [600003c9]
{ Op0( 3), Op1( 5), CRn( 1), CRm( 0), Op2( 2), func_read },
12baa00: 14000008 b 0x12baa20
12baa04: d000d501 adrp x1, 0x2d5c000
12baa08: 91154021 add x1, x1, #0x550
12baa0c: f9400022 ldr x2, [x1]
12baa10: f9400421 ldr x1, [x1, #8]
12baa14: 8a010042 and x2, x2, x1
12baa18: d3441c42 ubfx x2, x2, #4, #4
12baa1c: b4000082 cbz x2, 0x12baa2c
12baa20: d2a175a0 mov x0, #0xbad0000 // #195887104
12baa24: f2994220 movk x0, #0xca11
12baa28: d69f03e0 eret
12baa2c: d2c00080 mov x0, #0x400000000 // #17179869184
12baa30: f2b10000 movk x0, #0x8800, lsl #16
12baa34: f2800000 movk x0, #0x0
12baa38: d51c1100 msr hcr_el2, x0
12baa3c: d5033fdf isb
12baa40: d53c4100 mrs x0, sp_el1
12baa44: 9100001f mov sp, x0
12baa48: d538d080 mrs x0, tpidr_el1
12baa4c: d51cd040 msr tpidr_el2, x0
12baa50: d53d1040 mrs x0, cpacr_el12
12baa54: d5181040 msr cpacr_el1, x0
12baa58: d53dc000 mrs x0, vbar_el12
12baa5c: d518c000 msr vbar_el1, x0
12baa60: d53c1120 mrs x0, mdcr_el2
12baa64: 9272f400 and x0, x0, #0xffffffffffffcfff
12baa68: 9266f400 and x0, x0, #0xfffffffffcffffff
12baa6c: d51c1120 msr mdcr_el2, x0
12baa70: d53d2040 mrs x0, tcr_el12
12baa74: d5182040 msr tcr_el1, x0
12baa78: d53d2000 mrs x0, ttbr0_el12
12baa7c: d5182000 msr ttbr0_el1, x0
12baa80: d53d2020 mrs x0, ttbr1_el12
12baa84: d5182020 msr ttbr1_el1, x0
12baa88: d53da200 mrs x0, mair_el12
12baa8c: d518a200 msr mair_el1, x0
12baa90: d5380761 mrs x1, s3_0_c0_c7_3
12baa94: d3400c21 ubfx x1, x1, #0, #4
12baa98: b4000141 cbz x1, 0x12baac0
12baa9c: d53d2060 mrs x0, s3_5_c2_c0_3
>> As for QEMU, it is having issues enabling the _EL2 feature although EL2
>> is reported as supported via KVM_CAP_ARM_EL2; I have yet to debug this as well.
> The capability number changes at each release. Make sure you resync
> your includes.
Been there, but it seems to be a different problem this time.
Thank you
Miguel
> M.
>
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-23 16:21 ` Miguel Luis
@ 2023-11-23 16:44 ` Marc Zyngier
2023-11-24 9:50 ` Ganapatrao Kulkarni
0 siblings, 1 reply; 79+ messages in thread
From: Marc Zyngier @ 2023-11-23 16:44 UTC (permalink / raw)
To: Miguel Luis
Cc: kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu
On Thu, 23 Nov 2023 16:21:48 +0000,
Miguel Luis <miguel.luis@oracle.com> wrote:
>
> Hi Marc,
>
> On 21/11/2023 18:02, Marc Zyngier wrote:
> > On Tue, 21 Nov 2023 16:49:52 +0000,
> > Miguel Luis <miguel.luis@oracle.com> wrote:
> >> Hi Marc,
> >>
> >>> On 20 Nov 2023, at 12:09, Marc Zyngier <maz@kernel.org> wrote:
> >>>
> >>> [...]
> >> While I was testing this with kvmtool for 5.16, I noted the following in dmesg:
> >>
> >> [ 803.014258] kvm [19040]: Unsupported guest sys_reg access at: 8129fa50 [600003c9]
> >> { Op0( 3), Op1( 5), CRn( 1), CRm( 0), Op2( 2), func_read },
> >>
> >> This is CPACR_EL12.
> > CPACR_EL12 is redirected to VNCR[0x100]. It really shouldn't trap...
> >
> >> I still need to debug this.
> > Can you disassemble the guest around the offending PC?
>
> [ 1248.686350] kvm [7013]: Unsupported guest sys_reg access at: 812baa50 [600003c9]
> { Op0( 3), Op1( 5), CRn( 1), CRm( 0), Op2( 2), func_read },
>
> [...]
> 12baa50: d53d1040 mrs x0, cpacr_el12
> [...]
OK, this is suspiciously close to the location Ganapatrao was having
issues with. Are you running on the same hardware?
In any case, we should never take a trap for this access. Can you dump
HCR_EL2 at the point where the guest traps (in switch.c)?
> >> As for QEMU, it is having issues enabling the _EL2 feature although EL2
> >> is reported as supported via KVM_CAP_ARM_EL2; I have yet to debug this as well.
> > The capability number changes at each release. Make sure you resync
> > your includes.
>
> Been there but it seems a different problem this time.
Creating the VM with SVE? NV doesn't support it yet (and it has been
the case for a long while).
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-23 16:44 ` Marc Zyngier
@ 2023-11-24 9:50 ` Ganapatrao Kulkarni
2023-11-24 10:19 ` Marc Zyngier
0 siblings, 1 reply; 79+ messages in thread
From: Ganapatrao Kulkarni @ 2023-11-24 9:50 UTC (permalink / raw)
To: Marc Zyngier, Miguel Luis
Cc: kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall, Darren Hart,
Jintack Lim, Russell King, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On 23-11-2023 10:14 pm, Marc Zyngier wrote:
> On Thu, 23 Nov 2023 16:21:48 +0000,
> Miguel Luis <miguel.luis@oracle.com> wrote:
>>
>> Hi Marc,
>>
>> On 21/11/2023 18:02, Marc Zyngier wrote:
>>> On Tue, 21 Nov 2023 16:49:52 +0000,
>>> Miguel Luis <miguel.luis@oracle.com> wrote:
>>>> Hi Marc,
>>>>
>>>>> On 20 Nov 2023, at 12:09, Marc Zyngier <maz@kernel.org> wrote:
>>>>>
>>>>> [...]
>>>> While I was testing this with kvmtool for 5.16, I noted the following in dmesg:
>>>>
>>>> [ 803.014258] kvm [19040]: Unsupported guest sys_reg access at: 8129fa50 [600003c9]
>>>> { Op0( 3), Op1( 5), CRn( 1), CRm( 0), Op2( 2), func_read },
>>>>
>>>> This is CPACR_EL12.
>>> CPACR_EL12 is redirected to VNCR[0x100]. It really shouldn't trap...
>>>
>>>> Still need yet to debug.
>>> Can you disassemble the guest around the offending PC?
>>
>> [ 1248.686350] kvm [7013]: Unsupported guest sys_reg access at: 812baa50 [600003c9]
>> { Op0( 3), Op1( 5), CRn( 1), CRm( 0), Op2( 2), func_read },
>>
>> 12baa00: 14000008 b 0x12baa20
>> 12baa04: d000d501 adrp x1, 0x2d5c000
>> 12baa08: 91154021 add x1, x1, #0x550
>> 12baa0c: f9400022 ldr x2, [x1]
>> 12baa10: f9400421 ldr x1, [x1, #8]
>> 12baa14: 8a010042 and x2, x2, x1
>> 12baa18: d3441c42 ubfx x2, x2, #4, #4
>> 12baa1c: b4000082 cbz x2, 0x12baa2c
>> 12baa20: d2a175a0 mov x0, #0xbad0000 // #195887104
>> 12baa24: f2994220 movk x0, #0xca11
>> 12baa28: d69f03e0 eret
>> 12baa2c: d2c00080 mov x0, #0x400000000 // #17179869184
>> 12baa30: f2b10000 movk x0, #0x8800, lsl #16
>> 12baa34: f2800000 movk x0, #0x0
>> 12baa38: d51c1100 msr hcr_el2, x0
>> 12baa3c: d5033fdf isb
>> 12baa40: d53c4100 mrs x0, sp_el1
>> 12baa44: 9100001f mov sp, x0
>> 12baa48: d538d080 mrs x0, tpidr_el1
>> 12baa4c: d51cd040 msr tpidr_el2, x0
>> 12baa50: d53d1040 mrs x0, cpacr_el12
>> 12baa54: d5181040 msr cpacr_el1, x0
>> 12baa58: d53dc000 mrs x0, vbar_el12
>> 12baa5c: d518c000 msr vbar_el1, x0
>> 12baa60: d53c1120 mrs x0, mdcr_el2
>> 12baa64: 9272f400 and x0, x0, #0xffffffffffffcfff
>> 12baa68: 9266f400 and x0, x0, #0xfffffffffcffffff
>> 12baa6c: d51c1120 msr mdcr_el2, x0
>> 12baa70: d53d2040 mrs x0, tcr_el12
>> 12baa74: d5182040 msr tcr_el1, x0
>> 12baa78: d53d2000 mrs x0, ttbr0_el12
>> 12baa7c: d5182000 msr ttbr0_el1, x0
>> 12baa80: d53d2020 mrs x0, ttbr1_el12
>> 12baa84: d5182020 msr ttbr1_el1, x0
>> 12baa88: d53da200 mrs x0, mair_el12
>> 12baa8c: d518a200 msr mair_el1, x0
>> 12baa90: d5380761 mrs x1, s3_0_c0_c7_3
>> 12baa94: d3400c21 ubfx x1, x1, #0, #4
>> 12baa98: b4000141 cbz x1, 0x12baac0
>> 12baa9c: d53d2060 mrs x0, s3_5_c2_c0_3
>
> OK, this is suspiciously close to the location Ganapatrao was having
> issues with. Are you running on the same hardware?
>
> In any case, we should never take a trap for this access. Can you dump
> HCR_EL2 at the point where the guest traps (in switch.c)?
>
I have dumped HCR_EL2 before entry to L1 on both V11 and V10:
on V10: HCR_EL2=0x2743c827c263f
on V11: HCR_EL2=0x27c3c827c263f
The delta is bit 43, i.e. HCR_EL2.NV1: on V11 the function
vcpu_el2_e2h_is_set(vcpu) returns false, resulting in the NV1 bit being
set along with NV and NV2. AFAIK, for L1 to run as VHE, NV1 should be
zero and NV=NV2=1.
I can boot L1 and then L2 if I hack vcpu_el2_e2h_is_set() to return
true. Could there be a bug in V11 or the E2H0 patchset that makes
vcpu_el2_e2h_is_set() return false?
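To spell out the invariant I am assuming (a rough sketch, not the
series' exact code):

	/* expected shadow HCR_EL2 bits for a FEAT_NV2 guest hypervisor,
	 * assuming vcpu_el2_e2h_is_set() tracks the guest's HCR_EL2.E2H */
	static u64 nv_hcr_bits(struct kvm_vcpu *vcpu)
	{
		u64 hcr = HCR_NV | HCR_NV2;	/* always set for NV2 */

		if (!vcpu_el2_e2h_is_set(vcpu))
			hcr |= HCR_NV1;		/* nVHE L1 only */

		/* a VHE L1 must therefore run with NV=NV2=1, NV1=0 */
		return hcr;
	}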
Thanks,
Ganapat
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-24 9:50 ` Ganapatrao Kulkarni
@ 2023-11-24 10:19 ` Marc Zyngier
2023-11-24 12:34 ` Ganapatrao Kulkarni
0 siblings, 1 reply; 79+ messages in thread
From: Marc Zyngier @ 2023-11-24 10:19 UTC (permalink / raw)
To: Ganapatrao Kulkarni
Cc: Miguel Luis, kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall, Darren Hart,
Jintack Lim, Russell King, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On Fri, 24 Nov 2023 09:50:33 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>
>
>
> On 23-11-2023 10:14 pm, Marc Zyngier wrote:
> > On Thu, 23 Nov 2023 16:21:48 +0000,
> > Miguel Luis <miguel.luis@oracle.com> wrote:
> >>
> >> Hi Marc,
> >>
> >> On 21/11/2023 18:02, Marc Zyngier wrote:
> >>> On Tue, 21 Nov 2023 16:49:52 +0000,
> >>> Miguel Luis <miguel.luis@oracle.com> wrote:
> >>>> Hi Marc,
> >>>>
> >>>>> On 20 Nov 2023, at 12:09, Marc Zyngier <maz@kernel.org> wrote:
> >>>>>
> >>>>> [cover letter trimmed]
> >>>>>
> >>>> While I was testing this with kvmtool for 5.16 I noted the following on dmesg:
> >>>>
> >>>> [ 803.014258] kvm [19040]: Unsupported guest sys_reg access at: 8129fa50 [600003c9]
> >>>> { Op0( 3), Op1( 5), CRn( 1), CRm( 0), Op2( 2), func_read },
> >>>>
> >>>> This is CPACR_EL12.
> >>> CPACR_EL12 is redirected to VNCR[0x100]. It really shouldn't trap...
> >>>
> >>>> Still need yet to debug.
> >>> Can you disassemble the guest around the offending PC?
> >>
> >> [ 1248.686350] kvm [7013]: Unsupported guest sys_reg access at: 812baa50 [600003c9]
> >> { Op0( 3), Op1( 5), CRn( 1), CRm( 0), Op2( 2), func_read },
> >>
> >> [disassembly trimmed]
> >
> > OK, this is suspiciously close to the location Ganapatrao was having
> > issues with. Are you running on the same hardware?
> >
> > In any case, we should never take a trap for this access. Can you dump
> > HCR_EL2 at the point where the guest traps (in switch.c)?
> >
>
> I have dumped HCR_EL2 before entry to L1 on both V11 and V10:
> on V10: HCR_EL2=0x2743c827c263f
> on V11: HCR_EL2=0x27c3c827c263f
>
> The delta is bit 43, i.e. HCR_EL2.NV1: on V11 the function
> vcpu_el2_e2h_is_set(vcpu) returns false, resulting in the NV1 bit
> being set along with NV and NV2. AFAIK, for L1 to run as VHE, NV1
> should be zero and NV=NV2=1.
>
> I can boot L1 and then L2 if I hack vcpu_el2_e2h_is_set() to return
> true. Could there be a bug in V11 or the E2H0 patchset that makes
> vcpu_el2_e2h_is_set() return false?
The E2H0 series should only force vcpu_el2_e2h_is_set() to return
true, but not set it to false. Can you dump the *guest's* version of
HCR_EL2 at this point?
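Something along these lines would do (a throwaway debug hack, assuming
the VNCR-backed HCR_EL2 slot from this series):

	/* in __activate_traps() on the VHE side: dump both the shadow
	 * HCR_EL2 that KVM computed and the guest's memory-backed copy */
	if (vcpu_has_nv(vcpu) && is_hyp_ctxt(vcpu))
		trace_printk("hcr=%llx vhcr=%llx\n",
			     vcpu->arch.hcr_el2,
			     __vcpu_sys_reg(vcpu, HCR_EL2));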
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-24 10:19 ` Marc Zyngier
@ 2023-11-24 12:34 ` Ganapatrao Kulkarni
2023-11-24 12:51 ` Marc Zyngier
0 siblings, 1 reply; 79+ messages in thread
From: Ganapatrao Kulkarni @ 2023-11-24 12:34 UTC (permalink / raw)
To: Marc Zyngier
Cc: Miguel Luis, kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall, Darren Hart,
Jintack Lim, Russell King, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On 24-11-2023 03:49 pm, Marc Zyngier wrote:
> On Fri, 24 Nov 2023 09:50:33 +0000,
> Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>> [...]
>
> The E2H0 series should only force vcpu_el2_e2h_is_set() to return
> true, but not set it to false. Can you dump the *guest's* version of
> HCR_EL2 at this point?
>
with V11: vhcr_el2=0x100030080000000 mask=0x100af00ffffffff
with V10: vhcr_el2=0x488000000
with hack+V11: vhcr_el2=0x488000000 mask=0x100af00ffffffff
Thanks,
Ganapat
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-24 12:34 ` Ganapatrao Kulkarni
@ 2023-11-24 12:51 ` Marc Zyngier
2023-11-24 13:22 ` Ganapatrao Kulkarni
0 siblings, 1 reply; 79+ messages in thread
From: Marc Zyngier @ 2023-11-24 12:51 UTC (permalink / raw)
To: Ganapatrao Kulkarni
Cc: Miguel Luis, kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall, Darren Hart,
Jintack Lim, Russell King, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On Fri, 24 Nov 2023 12:34:41 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>
>
>
> On 24-11-2023 03:49 pm, Marc Zyngier wrote:
> >>>> [...]
> >>>> 12baa2c: d2c00080 mov x0, #0x400000000 // #17179869184
> >>>> 12baa30: f2b10000 movk x0, #0x8800, lsl #16
> >>>> 12baa34: f2800000 movk x0, #0x0
> >>>> 12baa38: d51c1100 msr hcr_el2, x0
> >>>> 12baa3c: d5033fdf isb
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This.
> [...]
> > The E2H0 series should only force vcpu_el2_e2h_is_set() to return
> > true, but not set it to false. Can you dump the *guest's* version of
> > HCR_EL2 at this point?
> >
>
> with V11: vhcr_el2=0x100030080000000 mask=0x100af00ffffffff
How is this value possible if the write to HCR_EL2 has taken place?
(The disassembled sequence writes 0x488000000, i.e. RW|TGE|E2H.)
When do you sample this?
> with V10: vhcr_el2=0x488000000
> with hack+V11: vhcr_el2=0x488000000 mask=0x100af00ffffffff
Well, of course, if you constrain the value of HCR_EL2...
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-24 12:51 ` Marc Zyngier
@ 2023-11-24 13:22 ` Ganapatrao Kulkarni
2023-11-24 14:32 ` Marc Zyngier
0 siblings, 1 reply; 79+ messages in thread
From: Ganapatrao Kulkarni @ 2023-11-24 13:22 UTC (permalink / raw)
To: Marc Zyngier
Cc: Miguel Luis, kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall, Darren Hart,
Jintack Lim, Russell King, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On 24-11-2023 06:21 pm, Marc Zyngier wrote:
> On Fri, 24 Nov 2023 12:34:41 +0000,
> Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>> [...]
>> with V11: vhcr_el2=0x100030080000000 mask=0x100af00ffffffff
>
> How is this value possible if the write to HCR_EL2 has taken place?
> (The disassembled sequence writes 0x488000000, i.e. RW|TGE|E2H.)
> When do you sample this?
I am not sure how and where it got set; whatever it is, I think it is
due to vcpu_el2_e2h_is_set() returning false. I still need to
understand/debug this.
The vhcr_el2 value I shared is traced along with hcr in
__activate_traps()/__compute_hcr().
Thanks,
Ganapat
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-24 13:22 ` Ganapatrao Kulkarni
@ 2023-11-24 14:32 ` Marc Zyngier
2023-11-27 7:26 ` Ganapatrao Kulkarni
0 siblings, 1 reply; 79+ messages in thread
From: Marc Zyngier @ 2023-11-24 14:32 UTC (permalink / raw)
To: Ganapatrao Kulkarni
Cc: Miguel Luis, kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall, Darren Hart,
Jintack Lim, Russell King, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On Fri, 24 Nov 2023 13:22:22 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>
> > How is this value possible if the write to HCR_EL2 has taken place?
> > When do you sample this?
>
> I am not sure how and where it got set. I think, whatever it is set,
> it is due to false return of vcpu_el2_e2h_is_set(). Need to
> understand/debug.
> The vhcr_el2 value I have shared is traced along with hcr in function
> __activate_traps/__compute_hcr.
Here's my hunch:
The guest boots with E2H=0, because we don't advertise anything else
on your HW. So we run with NV1=1 until we try to *upgrade* to VHE. NV2
means that HCR_EL2 is writable (to memory) without a trap. But we're
still running with NV1=1.
Subsequently, we access a sysreg that should never trap for a VHE
guest, but we're with the wrong config. Bad things happen.
Unfortunately, NV2 is pretty much incompatible with E2H being updated,
because it cannot perform the changes that this would result in at
the point where they should happen. We can try to do a best-effort
handling, but you can always trick it.
Anyway, can you see if the hack below helps? I'm not keen on it at
all, but this would be a good data point.
M.
From c4b856221661393b884cbf673d100faaa8dc018a Mon Sep 17 00:00:00 2001
From: Marc Zyngier <maz@kernel.org>
Date: Fri, 26 May 2023 12:16:05 +0100
Subject: [PATCH] KVM: arm64: Opportunistically track HCR_EL2.E2H being flipped
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 9 +++++++--
arch/arm64/kvm/hyp/include/hyp/switch.h | 13 +++++++++++++
arch/arm64/kvm/hyp/vhe/switch.c | 10 ++++++++--
3 files changed, 28 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c91f607e989d..d45ef41de5fb 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -655,6 +655,9 @@ struct kvm_vcpu_arch {
/* State flags for kernel bookkeeping, unused by the hypervisor code */
u8 sflags;
+ /* Bookkeeping flags for NV */
+ u8 nvflags;
+
/*
* Don't run the guest (internal implementation need).
*
@@ -858,8 +861,6 @@ struct kvm_vcpu_arch {
#define DEBUG_STATE_SAVE_SPE __vcpu_single_flag(iflags, BIT(5))
/* Save TRBE context if active */
#define DEBUG_STATE_SAVE_TRBE __vcpu_single_flag(iflags, BIT(6))
-/* vcpu running in HYP context */
-#define VCPU_HYP_CONTEXT __vcpu_single_flag(iflags, BIT(7))
/* SVE enabled for host EL0 */
#define HOST_SVE_ENABLED __vcpu_single_flag(sflags, BIT(0))
@@ -878,6 +879,10 @@ struct kvm_vcpu_arch {
/* WFI instruction trapped */
#define IN_WFI __vcpu_single_flag(sflags, BIT(7))
+/* vcpu running in HYP context */
+#define VCPU_HYP_CONTEXT __vcpu_single_flag(nvflags, BIT(0))
+/* vcpu entered with HCR_EL2.E2H set */
+#define VCPU_HCR_E2H __vcpu_single_flag(nvflags, BIT(1))
/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index aed2ea35082c..9c1346116d61 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -669,6 +669,19 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
*/
synchronize_vcpu_pstate(vcpu, exit_code);
+ if (vcpu_has_nv(vcpu) &&
+ (!!vcpu_get_flag(vcpu, VCPU_HCR_E2H) ^ vcpu_el2_e2h_is_set(vcpu))) {
+ if (vcpu_el2_e2h_is_set(vcpu)) {
+ sysreg_clear_set(hcr_el2, HCR_NV1, 0);
+ vcpu_set_flag(vcpu, VCPU_HCR_E2H);
+ } else {
+ sysreg_clear_set(hcr_el2, 0, HCR_NV1);
+ vcpu_clear_flag(vcpu, VCPU_HCR_E2H);
+ }
+
+ return true;
+ }
+
/*
* Check whether we want to repaint the state one way or
* another.
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 8d1e9d1adabe..395aaa06f358 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -447,10 +447,16 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
sysreg_restore_guest_state_vhe(guest_ctxt);
__debug_switch_to_guest(vcpu);
- if (is_hyp_ctxt(vcpu))
+ if (is_hyp_ctxt(vcpu)) {
+ if (vcpu_el2_e2h_is_set(vcpu))
+ vcpu_set_flag(vcpu, VCPU_HCR_E2H);
+ else
+ vcpu_clear_flag(vcpu, VCPU_HCR_E2H);
+
vcpu_set_flag(vcpu, VCPU_HYP_CONTEXT);
- else
+ } else {
vcpu_clear_flag(vcpu, VCPU_HYP_CONTEXT);
+ }
do {
/* Jump in the fire! */
--
2.39.2
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-24 14:32 ` Marc Zyngier
@ 2023-11-27 7:26 ` Ganapatrao Kulkarni
2023-11-27 9:22 ` Marc Zyngier
0 siblings, 1 reply; 79+ messages in thread
From: Ganapatrao Kulkarni @ 2023-11-27 7:26 UTC (permalink / raw)
To: Marc Zyngier
Cc: Miguel Luis, kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall, Darren Hart,
Jintack Lim, Russell King, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On 24-11-2023 08:02 pm, Marc Zyngier wrote:
> On Fri, 24 Nov 2023 13:22:22 +0000,
> Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>>
>>> How is this value possible if the write to HCR_EL2 has taken place?
>>> When do you sample this?
>>
>> I am not sure how and where it got set. I think, whatever it is set,
>> it is due to false return of vcpu_el2_e2h_is_set(). Need to
>> understand/debug.
>> The vhcr_el2 value I have shared is traced along with hcr in function
>> __activate_traps/__compute_hcr.
>
> Here's my hunch:
>
> The guest boots with E2H=0, because we don't advertise anything else
> on your HW. So we run with NV1=1 until we try to *upgrade* to VHE. NV2
> means that HCR_EL2 is writable (to memory) without a trap. But we're
> still running with NV1=1.
>
> Subsequently, we access a sysreg that should never trap for a VHE
> guest, but we're with the wrong config. Bad things happen.
>
> Unfortunately, NV2 is pretty much incompatible with E2H being updated,
> because it cannot perform the changes that this would result into at
> the point where they should happen. We can try and do a best effort
> handling, but you can always trick it.
>
> Anyway, can you see if the hack below helps? I'm not keen on it at
> all, but this would be a good data point.
Thanks Marc, this diff fixes the issue.
Just wondering, what changed in the L1 handling from V10 to V11 that
requires this trick?
Also, why was this not seen on your platform? Does it have FEAT_E2H0?
> [patch trimmed]
Thanks,
Ganapat
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-27 7:26 ` Ganapatrao Kulkarni
@ 2023-11-27 9:22 ` Marc Zyngier
2023-11-27 10:59 ` Ganapatrao Kulkarni
0 siblings, 1 reply; 79+ messages in thread
From: Marc Zyngier @ 2023-11-27 9:22 UTC (permalink / raw)
To: Ganapatrao Kulkarni
Cc: Miguel Luis, kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall, Darren Hart,
Jintack Lim, Russell King, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On Mon, 27 Nov 2023 07:26:58 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>
>
>
> On 24-11-2023 08:02 pm, Marc Zyngier wrote:
> > [...]
>
> Thanks Marc, this diff fixes the issue.
> Just wondering, what changed in the L1 handling from V10 to V11 that
> requires this trick?
Not completely sure. Before v11, anything that would trap would be
silently handled by the FEAT_NV code. Now, a trap for something that
is supposed to be redirected to VNCR results in an UNDEF exception.
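i.e. the handlers for the VNCR-backed registers now have roughly this
shape (a hypothetical sketch along the lines of the bad_trap()
primitive from this series):

	/* a VNCR-redirected EL12 register is supposed to be a plain
	 * memory access via VNCR_EL2 and must never trap; if it traps
	 * anyway, treat it as an unexpected access and inject an UNDEF */
	static bool bad_vncr_trap(struct kvm_vcpu *vcpu,
				  struct sys_reg_params *p,
				  const struct sys_reg_desc *r)
	{
		kvm_inject_undefined(vcpu);
		return false;
	}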
I suspect that the exception is handled again as a call to
__finalise_el2(), probably because the write to VBAR_EL1 didn't do
what it was supposed to do.
> Also why this was not seen on your platform, is it E2H0 enabled?
It doesn't have FEAT_E2H0, and that's the whole point. No E2H0, no
problems, as the guest cannot trick the host into losing track of the
state (which I'm pretty sure can happen even with this ugly hack).
I will probably completely disable NV1 support in the next drop, and
make NV support only VHE guests. Which is the only mode that makes any
sense anyway.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-27 9:22 ` Marc Zyngier
@ 2023-11-27 10:59 ` Ganapatrao Kulkarni
2023-11-27 11:45 ` Marc Zyngier
0 siblings, 1 reply; 79+ messages in thread
From: Ganapatrao Kulkarni @ 2023-11-27 10:59 UTC (permalink / raw)
To: Marc Zyngier
Cc: Miguel Luis, kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall, Darren Hart,
Jintack Lim, Russell King, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On 27-11-2023 02:52 pm, Marc Zyngier wrote:
> On Mon, 27 Nov 2023 07:26:58 +0000,
> Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>> [...]
>>
>> Thanks Marc, this diff fixes the issue.
>> Just wondering, what changed in the L1 handling from V10 to V11 that
>> requires this trick?
>
> Not completely sure. Before v11, anything that would trap would be
> silently handled by the FEAT_NV code. Now, a trap for something that
> is supposed to be redirected to VNCR results in an UNDEF exception.
>
> I suspect that the exception is handled again as a call to
> __finalise_el2(), probably because the write to VBAR_EL1 didn't do
> what it was supposed to do.
>
>> Also why this was not seen on your platform, is it E2H0 enabled?
>
> It doesn't have FEAT_E2H0, and that's the whole point. No E2H0, no
> problems, as the guest cannot trick the host into losing track of the
> state (which I'm pretty sure can happen even with this ugly hack).
>
> I will probably completely disable NV1 support in the next drop, and
> make NV support only VHE guests. Which is the only mode that makes any
> sense anyway.
>
Thanks, it absolutely makes sense to have *VHE-only* L1; looking
forward to the next drop.
Thanks,
Ganapat
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-27 10:59 ` Ganapatrao Kulkarni
@ 2023-11-27 11:45 ` Marc Zyngier
2023-11-27 12:18 ` Ganapatrao Kulkarni
0 siblings, 1 reply; 79+ messages in thread
From: Marc Zyngier @ 2023-11-27 11:45 UTC (permalink / raw)
To: Ganapatrao Kulkarni
Cc: Miguel Luis, kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall, Darren Hart,
Jintack Lim, Russell King, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On Mon, 27 Nov 2023 10:59:36 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>
>
>
> On 27-11-2023 02:52 pm, Marc Zyngier wrote:
> > On Mon, 27 Nov 2023 07:26:58 +0000,
> > Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
> > [...]
> >
> > I will probably completely disable NV1 support in the next drop, and
> > make NV support only VHE guests. Which is the only mode that makes any
> > sense anyway.
> >
>
> Thanks, it absolutely makes sense to have *VHE-only* L1; looking
> forward to the next drop.
Note that this won't be restricted to L1, but will affect *everything*.
No non-VHE guest will be supported at any level whatsoever, and NV
will always expose ID_AA64MMFR4_EL1.E2H0=0b1110, indicating that
HCR_EL2.NV1 is RES0, on top of ID_AA64MMFR4_EL1.NV_frac=1 (NV2 only).
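Guest-side, the check then boils down to something like this (a
sketch; SYS_ID_AA64MMFR4_EL1 and the E2H0 field definition are assumed
from the FEAT_E2H0 series, E2H0 being a signed field where 0b1110 is -2):

	static inline bool hcr_e2h_is_res1(void)
	{
		u64 mmfr4 = read_sysreg_s(SYS_ID_AA64MMFR4_EL1);

		/* negative E2H0 => E2H is RES1; -2 also makes NV1 RES0 */
		return cpuid_feature_extract_signed_field(mmfr4,
				ID_AA64MMFR4_EL1_E2H0_SHIFT) < 0;
	}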
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-27 11:45 ` Marc Zyngier
@ 2023-11-27 12:18 ` Ganapatrao Kulkarni
2023-11-27 13:57 ` Marc Zyngier
0 siblings, 1 reply; 79+ messages in thread
From: Ganapatrao Kulkarni @ 2023-11-27 12:18 UTC (permalink / raw)
To: Marc Zyngier
Cc: Miguel Luis, kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall, Darren Hart,
Jintack Lim, Russell King, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On 27-11-2023 05:15 pm, Marc Zyngier wrote:
> On Mon, 27 Nov 2023 10:59:36 +0000,
> Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>> [...]
>> Thanks, it absolutely makes sense to have *VHE-only* L1; looking
>> forward to the next drop.
>
> Note that this won't be restricted to L1, but will affect *everything*.
>
Ok.
> No non-VHE guest will be supported at any level whatsoever, and NV
> will always expose ID_AA64MMFR4_EL1.E2H0=0b1110, indicating that
> HCR_EL2.NV1 is RES0, on top of ID_AA64MMFR4_EL1.NV_frac=1 (NV2 only).
OK. Even I was thinking the same, instead of the work-around/trick;
but then I felt the trick is still needed, since L1 may be any distro
kernel, which may not have code to interpret/decode
ID_AA64MMFR4_EL1.E2H0.
Thanks,
Ganapat
>
> M.
>
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-27 12:18 ` Ganapatrao Kulkarni
@ 2023-11-27 13:57 ` Marc Zyngier
0 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-11-27 13:57 UTC (permalink / raw)
To: Ganapatrao Kulkarni
Cc: Miguel Luis, kvmarm@lists.linux.dev, kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, Alexandru Elisei,
Andre Przywara, Chase Conklin, Christoffer Dall, Darren Hart,
Jintack Lim, Russell King, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On Mon, 27 Nov 2023 12:18:45 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>
> >>> I will probably completely disable NV1 support in the next drop, and
> >>> make NV support only VHE guests. Which is the only mode that makes any
> >>> sense anyway.
> >>>
> >>
> >> Thanks, absolutely makes sense to have a *VHE-only* L1, looking forward
> >> to the next drop.
> >
> > Note that this won't be restricted to L1, but will affect *everything*.
> >
> Ok.
>
> > No non-VHE guest will be supported at any level whatsoever, and NV
> > will always expose ID_AA64MMFR4_EL1.E2H0=0b1110, indicating that
> > HCR_EL2.NV1 is RES0, on top of ID_AA64MMFR4_EL1.NV_frac=1 (NV2 only).
>
> OK. I was thinking of doing the same instead of the work-around/trick,
> but then I felt the trick is still needed, since L1 may be any distro
> kernel and may not have code to interpret/decode ID_AA64MMFR4_EL1.E2H0.
The problem is that we can't make that reliable. Current Linux kernels
will work with E2H RES1, as you experienced by hacking KVM. Old
crap won't work, but I can't say I care.
My point is: KVM NV support will be compliant with the architecture.
SW that needs to run with NV will have to understand the version of
the architecture that KVM exposes. If you need distros to be upgraded,
then you can work with them to update their stuff.
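For illustration only (this is not code from the series): assuming the
sysreg definitions that come with the FEAT_E2H0 support, the whole check
boils down to treating E2H0 as a signed field, where a negative value
(0b1110 or 0b1111) means HCR_EL2.E2H is RES1. A rough sketch, with
init_el2_with_vhe() standing in for whatever the boot code decides:

	u64 mmfr4 = read_sysreg_s(SYS_ID_AA64MMFR4_EL1);
	int e2h0 = cpuid_feature_extract_signed_field(mmfr4,
						      ID_AA64MMFR4_EL1_E2H0_SHIFT);

	/* Negative E2H0: HCR_EL2.E2H is RES1, nVHE is not an option */
	if (e2h0 < 0)
		init_el2_with_vhe();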
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (44 preceding siblings ...)
2023-11-21 16:49 ` Miguel Luis
@ 2023-12-18 12:39 ` Marc Zyngier
2023-12-18 19:51 ` Oliver Upton
2023-12-19 10:32 ` (subset) " Marc Zyngier
46 siblings, 1 reply; 79+ messages in thread
From: Marc Zyngier @ 2023-12-18 12:39 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Ganapatrao Kulkarni, Darren Hart, Jintack Lim, Russell King,
Miguel Luis, James Morse, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
On Mon, 20 Nov 2023 13:09:44 +0000,
Marc Zyngier <maz@kernel.org> wrote:
>
> This is the 5th drop of NV support on arm64 for this year, and most
> probably the last one for this side of Christmas.
Unless someone objects, I'm planning to take the first 10 patches of
this series into 6.8 (with the dependency on ID_AA64MMFR4_EL1.NV_frac
in patch #1 removed).
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-12-18 12:39 ` Marc Zyngier
@ 2023-12-18 19:51 ` Oliver Upton
0 siblings, 0 replies; 79+ messages in thread
From: Oliver Upton @ 2023-12-18 19:51 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Alexandru Elisei, Andre Przywara,
Chase Conklin, Christoffer Dall, Ganapatrao Kulkarni, Darren Hart,
Jintack Lim, Russell King, Miguel Luis, James Morse,
Suzuki K Poulose, Zenghui Yu
On Mon, Dec 18, 2023 at 12:39:26PM +0000, Marc Zyngier wrote:
> On Mon, 20 Nov 2023 13:09:44 +0000,
> Marc Zyngier <maz@kernel.org> wrote:
> >
> > This is the 5th drop of NV support on arm64 for this year, and most
> > probably the last one for this side of Christmas.
>
> Unless someone objects, I'm planning to take the first 10 patches of
> this series into 6.8 (with the dependency on ID_AA64MMFR4_EL1.NV_frac
> in patch #1 removed).
For the first 10 patches:
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
--
Thanks,
Oliver
* Re: (subset) [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only)
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
` (45 preceding siblings ...)
2023-12-18 12:39 ` Marc Zyngier
@ 2023-12-19 10:32 ` Marc Zyngier
46 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2023-12-19 10:32 UTC (permalink / raw)
To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm
Cc: Russell King, Christoffer Dall, Oliver Upton, James Morse,
Zenghui Yu, Darren Hart, Andre Przywara, Alexandru Elisei,
Ganapatrao Kulkarni, Jintack Lim, Suzuki K Poulose, Chase Conklin,
Miguel Luis
On Mon, 20 Nov 2023 13:09:44 +0000, Marc Zyngier wrote:
> This is the 5th drop of NV support on arm64 for this year, and most
> probably the last one for this side of Christmas.
>
> For the previous episodes, see [1].
>
> What's changed:
>
> [...]
Applied to next, thanks!
[01/43] arm64: cpufeatures: Restrict NV support to FEAT_NV2
commit: 2bfc654b89c4dd1c372bb2cbba6b5a0eb578d214
[02/43] KVM: arm64: nv: Hoist vcpu_has_nv() into is_hyp_ctxt()
commit: 111903d1f5b9334d1100e1c6ee08e740fa374d91
[03/43] KVM: arm64: nv: Compute NV view of idregs as a one-off
commit: 3ed0b5123cd5a2a4f1fe4e594e7bf319e9eaf1da
[04/43] KVM: arm64: nv: Drop EL12 register traps that are redirected to VNCR
commit: 4d4f52052ba8357f1591cb9bc3086541070711af
[05/43] KVM: arm64: nv: Add non-VHE-EL2->EL1 translation helpers
commit: 3606e0b2e462164bced151dbb54ccfe42ac6c35b
[06/43] KVM: arm64: nv: Add include containing the VNCR_EL2 offsets
commit: 60ce16cc122aad999129d23061fa35f63d5b1e9b
[07/43] KVM: arm64: Introduce a bad_trap() primitive for unexpected trap handling
commit: 2733dd10701abc6ab23d65a732f58fbeb80bd203
[08/43] KVM: arm64: nv: Add EL2_REG_VNCR()/EL2_REG_REDIR() sysreg helpers
commit: 9b9cce60be85e6807bdb0eaa2f520e78dbab0659
[09/43] KVM: arm64: nv: Map VNCR-capable registers to a separate page
commit: d8bd48e3f0ee9e1fdba2a2e453155a5354e48a8d
[10/43] KVM: arm64: nv: Handle virtual EL2 registers in vcpu_read/write_sys_reg()
commit: fedc612314acfebf506e071bf3a941076aa56d10
Cheers,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 19/43] KVM: arm64: nv: Handle shadow stage 2 page faults
2023-11-20 13:10 ` [PATCH v11 19/43] KVM: arm64: nv: Handle shadow stage 2 page faults Marc Zyngier
@ 2024-01-17 14:53 ` Joey Gouly
2024-01-17 15:53 ` Marc Zyngier
0 siblings, 1 reply; 79+ messages in thread
From: Joey Gouly @ 2024-01-17 14:53 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Alexandru Elisei, Andre Przywara,
Chase Conklin, Christoffer Dall, Ganapatrao Kulkarni, Darren Hart,
Jintack Lim, Russell King, Miguel Luis, James Morse,
Suzuki K Poulose, Oliver Upton, Zenghui Yu
Hi Marc,
Drive-by thing I spotted.
On Mon, Nov 20, 2023 at 01:10:03PM +0000, Marc Zyngier wrote:
> If we are faulting on a shadow stage 2 translation, we first walk the
> guest hypervisor's stage 2 page table to see if it has a mapping. If
> not, we inject a stage 2 page fault to the virtual EL2. Otherwise, we
> create a mapping in the shadow stage 2 page table.
>
> Note that we have to deal with two IPAs when we get a shadow stage 2
> page fault. One is the address we faulted on, and is in the L2 guest
> phys space. The other is from the guest stage-2 page table walk, and is
> in the L1 guest phys space. To differentiate them, we rename variables
> so that fault_ipa is used for the former and ipa is used for the latter.
>
> Co-developed-by: Christoffer Dall <christoffer.dall@linaro.org>
> Co-developed-by: Jintack Lim <jintack.lim@linaro.org>
> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
> [maz: rewrote this multiple times...]
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/include/asm/kvm_emulate.h | 7 +++
> arch/arm64/include/asm/kvm_nested.h | 19 ++++++
> arch/arm64/kvm/mmu.c | 89 ++++++++++++++++++++++++----
> arch/arm64/kvm/nested.c | 48 +++++++++++++++
> 4 files changed, 153 insertions(+), 10 deletions(-)
>
[.. snip ..]
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 588ce46c0ad0..41de7616b735 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1412,14 +1412,16 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
> }
>
> static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> - struct kvm_memory_slot *memslot, unsigned long hva,
> - unsigned long fault_status)
> + struct kvm_s2_trans *nested,
> + struct kvm_memory_slot *memslot,
> + unsigned long hva, unsigned long fault_status)
> {
> int ret = 0;
> bool write_fault, writable, force_pte = false;
> bool exec_fault, mte_allowed;
> bool device = false;
> unsigned long mmu_seq;
> + phys_addr_t ipa = fault_ipa;
> struct kvm *kvm = vcpu->kvm;
> struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
> struct vm_area_struct *vma;
> @@ -1504,10 +1506,38 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> }
>
> vma_pagesize = 1UL << vma_shift;
> +
> + if (nested) {
> + unsigned long max_map_size;
> +
> + max_map_size = force_pte ? PUD_SIZE : PAGE_SIZE;
This seems like the wrong way around, presumably you want PAGE_SIZE for force_pte?
> +
> + ipa = kvm_s2_trans_output(nested);
> +
> + /*
> + * If we're about to create a shadow stage 2 entry, then we
> + * can only create a block mapping if the guest stage 2 page
> + * table uses at least as big a mapping.
> + */
> + max_map_size = min(kvm_s2_trans_size(nested), max_map_size);
> +
> + /*
> + * Be careful that if the mapping size falls between
> + * two host sizes, take the smallest of the two.
> + */
> + if (max_map_size >= PMD_SIZE && max_map_size < PUD_SIZE)
> + max_map_size = PMD_SIZE;
> + else if (max_map_size >= PAGE_SIZE && max_map_size < PMD_SIZE)
> + max_map_size = PAGE_SIZE;
> +
> + force_pte = (max_map_size == PAGE_SIZE);
> + vma_pagesize = min(vma_pagesize, (long)max_map_size);
> + }
> +
> if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
> fault_ipa &= ~(vma_pagesize - 1);
>
> - gfn = fault_ipa >> PAGE_SHIFT;
> + gfn = ipa >> PAGE_SHIFT;
> mte_allowed = kvm_vma_mte_allowed(vma);
>
> /* Don't use the VMA after the unlock -- it may have vanished */
[.. snip ..]
Thanks,
Joey
* Re: [PATCH v11 19/43] KVM: arm64: nv: Handle shadow stage 2 page faults
2024-01-17 14:53 ` Joey Gouly
@ 2024-01-17 15:53 ` Marc Zyngier
0 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2024-01-17 15:53 UTC (permalink / raw)
To: Joey Gouly
Cc: kvmarm, kvm, linux-arm-kernel, Alexandru Elisei, Andre Przywara,
Chase Conklin, Christoffer Dall, Ganapatrao Kulkarni, Darren Hart,
Jintack Lim, Russell King, Miguel Luis, James Morse,
Suzuki K Poulose, Oliver Upton, Zenghui Yu
On Wed, 17 Jan 2024 14:53:16 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
>
> Hi Marc,
>
> Drive-by thing I spotted.
>
> On Mon, Nov 20, 2023 at 01:10:03PM +0000, Marc Zyngier wrote:
> > If we are faulting on a shadow stage 2 translation, we first walk the
> > guest hypervisor's stage 2 page table to see if it has a mapping. If
> > not, we inject a stage 2 page fault to the virtual EL2. Otherwise, we
> > create a mapping in the shadow stage 2 page table.
> >
> > Note that we have to deal with two IPAs when we get a shadow stage 2
> > page fault. One is the address we faulted on, and is in the L2 guest
> > phys space. The other is from the guest stage-2 page table walk, and is
> > in the L1 guest phys space. To differentiate them, we rename variables
> > so that fault_ipa is used for the former and ipa is used for the latter.
> >
> > Co-developed-by: Christoffer Dall <christoffer.dall@linaro.org>
> > Co-developed-by: Jintack Lim <jintack.lim@linaro.org>
> > Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> > Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
> > [maz: rewrote this multiple times...]
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/arm64/include/asm/kvm_emulate.h | 7 +++
> > arch/arm64/include/asm/kvm_nested.h | 19 ++++++
> > arch/arm64/kvm/mmu.c | 89 ++++++++++++++++++++++++----
> > arch/arm64/kvm/nested.c | 48 +++++++++++++++
> > 4 files changed, 153 insertions(+), 10 deletions(-)
> >
> [.. snip ..]
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index 588ce46c0ad0..41de7616b735 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -1412,14 +1412,16 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
> > }
> >
> > static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > - struct kvm_memory_slot *memslot, unsigned long hva,
> > - unsigned long fault_status)
> > + struct kvm_s2_trans *nested,
> > + struct kvm_memory_slot *memslot,
> > + unsigned long hva, unsigned long fault_status)
> > {
> > int ret = 0;
> > bool write_fault, writable, force_pte = false;
> > bool exec_fault, mte_allowed;
> > bool device = false;
> > unsigned long mmu_seq;
> > + phys_addr_t ipa = fault_ipa;
> > struct kvm *kvm = vcpu->kvm;
> > struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
> > struct vm_area_struct *vma;
> > @@ -1504,10 +1506,38 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > }
> >
> > vma_pagesize = 1UL << vma_shift;
> > +
> > + if (nested) {
> > + unsigned long max_map_size;
> > +
> > + max_map_size = force_pte ? PUD_SIZE : PAGE_SIZE;
>
> This seems like the wrong way around, presumably you want PAGE_SIZE for force_pte?
This is hilarious. I really shouldn't write code these days.
Thanks a lot for spotting this one, I'll fix that right away!
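(Presumably the fix is simply to swap the ternary arms:

	max_map_size = force_pte ? PAGE_SIZE : PUD_SIZE;

so that force_pte caps the mapping at a single page, and PUD_SIZE stays
the upper bound otherwise.)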
Cheers,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 17/43] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures
2023-11-20 13:10 ` [PATCH v11 17/43] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures Marc Zyngier
@ 2024-01-23 9:55 ` Ganapatrao Kulkarni
2024-01-23 14:26 ` Marc Zyngier
0 siblings, 1 reply; 79+ messages in thread
From: Ganapatrao Kulkarni @ 2024-01-23 9:55 UTC (permalink / raw)
To: Marc Zyngier, kvmarm, kvm, linux-arm-kernel
Cc: Alexandru Elisei, Andre Przywara, Chase Conklin, Christoffer Dall,
Darren Hart, Jintack Lim, Russell King, Miguel Luis, James Morse,
Suzuki K Poulose, Oliver Upton, Zenghui Yu, D Scott Phillips
Hi Marc,
On 20-11-2023 06:40 pm, Marc Zyngier wrote:
> Add Stage-2 mmu data structures for virtual EL2 and for nested guests.
> We don't yet populate shadow Stage-2 page tables, but we now have a
> framework for getting to a shadow Stage-2 pgd.
>
> We allocate twice the number of vcpus as Stage-2 mmu structures because
> that's sufficient for each vcpu running two translation regimes without
> having to flush the Stage-2 page tables.
>
> Co-developed-by: Christoffer Dall <christoffer.dall@arm.com>
> Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/include/asm/kvm_host.h | 41 ++++++
> arch/arm64/include/asm/kvm_mmu.h | 9 ++
> arch/arm64/include/asm/kvm_nested.h | 7 +
> arch/arm64/kvm/arm.c | 12 ++
> arch/arm64/kvm/mmu.c | 78 ++++++++---
> arch/arm64/kvm/nested.c | 207 ++++++++++++++++++++++++++++
> arch/arm64/kvm/reset.c | 6 +
> 7 files changed, 338 insertions(+), 22 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index f17fb7c42973..eb96fe9b686e 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -188,8 +188,40 @@ struct kvm_s2_mmu {
> uint64_t split_page_chunk_size;
>
> struct kvm_arch *arch;
> +
> + /*
> + * For a shadow stage-2 MMU, the virtual vttbr used by the
> + * host to parse the guest S2.
> + * This either contains:
> + * - the virtual VTTBR programmed by the guest hypervisor with
> + * CnP cleared
> + * - The value 1 (VMID=0, BADDR=0, CnP=1) if invalid
> + *
> + * We also cache the full VTCR which gets used for TLB invalidation,
> + * taking the ARM ARM's "Any of the bits in VTCR_EL2 are permitted
> + * to be cached in a TLB" to the letter.
> + */
> + u64 tlb_vttbr;
> + u64 tlb_vtcr;
> +
> + /*
> + * true when this represents a nested context where virtual
> + * HCR_EL2.VM == 1
> + */
> + bool nested_stage2_enabled;
> +
> + /*
> + * 0: Nobody is currently using this, check vttbr for validity
> + * >0: Somebody is actively using this.
> + */
> + atomic_t refcnt;
> };
>
> +static inline bool kvm_s2_mmu_valid(struct kvm_s2_mmu *mmu)
> +{
> + return !(mmu->tlb_vttbr & 1);
> +}
> +
> struct kvm_arch_memory_slot {
> };
>
> @@ -241,6 +273,14 @@ static inline u16 kvm_mpidr_index(struct kvm_mpidr_data *data, u64 mpidr)
> struct kvm_arch {
> struct kvm_s2_mmu mmu;
>
> + /*
> + * Stage 2 paging state for VMs with nested S2 using a virtual
> + * VMID.
> + */
> + struct kvm_s2_mmu *nested_mmus;
> + size_t nested_mmus_size;
> + int nested_mmus_next;
> +
> /* Interrupt controller */
> struct vgic_dist vgic;
>
> @@ -1186,6 +1226,7 @@ void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu);
> void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu);
>
> int __init kvm_set_ipa_limit(void);
> +u32 kvm_get_pa_bits(struct kvm *kvm);
>
> #define __KVM_HAVE_ARCH_VM_ALLOC
> struct kvm *kvm_arch_alloc_vm(void);
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 49e0d4b36bd0..5c6fb2fb8287 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -119,6 +119,7 @@ alternative_cb_end
> #include <asm/mmu_context.h>
> #include <asm/kvm_emulate.h>
> #include <asm/kvm_host.h>
> +#include <asm/kvm_nested.h>
>
> void kvm_update_va_mask(struct alt_instr *alt,
> __le32 *origptr, __le32 *updptr, int nr_inst);
> @@ -171,6 +172,7 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
> int create_hyp_stack(phys_addr_t phys_addr, unsigned long *haddr);
> void __init free_hyp_pgds(void);
>
> +void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size);
> void stage2_unmap_vm(struct kvm *kvm);
> int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type);
> void kvm_uninit_stage2_mmu(struct kvm *kvm);
> @@ -339,5 +341,12 @@ static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
> {
> return container_of(mmu->arch, struct kvm, arch);
> }
> +
> +static inline u64 get_vmid(u64 vttbr)
> +{
> + return (vttbr & VTTBR_VMID_MASK(kvm_get_vmid_bits())) >>
> + VTTBR_VMID_SHIFT;
> +}
> +
> #endif /* __ASSEMBLY__ */
> #endif /* __ARM64_KVM_MMU_H__ */
> diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
> index aa085f2f1947..f421ad294e68 100644
> --- a/arch/arm64/include/asm/kvm_nested.h
> +++ b/arch/arm64/include/asm/kvm_nested.h
> @@ -60,6 +60,13 @@ static inline u64 translate_ttbr0_el2_to_ttbr0_el1(u64 ttbr0)
> return ttbr0 & ~GENMASK_ULL(63, 48);
> }
>
> +extern void kvm_init_nested(struct kvm *kvm);
> +extern int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu);
> +extern void kvm_init_nested_s2_mmu(struct kvm_s2_mmu *mmu);
> +extern struct kvm_s2_mmu *lookup_s2_mmu(struct kvm_vcpu *vcpu);
> +extern void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu);
> +extern void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu);
> +
> extern bool forward_smc_trap(struct kvm_vcpu *vcpu);
> extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
>
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index b65df612b41b..2e76892c1a56 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -147,6 +147,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> mutex_unlock(&kvm->lock);
> #endif
>
> + kvm_init_nested(kvm);
> +
> ret = kvm_share_hyp(kvm, kvm + 1);
> if (ret)
> return ret;
> @@ -429,6 +431,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> struct kvm_s2_mmu *mmu;
> int *last_ran;
>
> + if (vcpu_has_nv(vcpu))
> + kvm_vcpu_load_hw_mmu(vcpu);
> +
> mmu = vcpu->arch.hw_mmu;
> last_ran = this_cpu_ptr(mmu->last_vcpu_ran);
>
> @@ -479,9 +484,12 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
> kvm_timer_vcpu_put(vcpu);
> kvm_vgic_put(vcpu);
> kvm_vcpu_pmu_restore_host(vcpu);
> + if (vcpu_has_nv(vcpu))
> + kvm_vcpu_put_hw_mmu(vcpu);
> kvm_arm_vmid_clear_active();
>
> vcpu_clear_on_unsupported_cpu(vcpu);
> +
> vcpu->cpu = -1;
> }
>
> @@ -1336,6 +1344,10 @@ static int kvm_setup_vcpu(struct kvm_vcpu *vcpu)
> if (kvm_vcpu_has_pmu(vcpu) && !kvm->arch.arm_pmu)
> ret = kvm_arm_set_default_pmu(kvm);
>
> + /* Prepare for nested if required */
> + if (!ret)
> + ret = kvm_vcpu_init_nested(vcpu);
> +
> return ret;
> }
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index d87c8fcc4c24..588ce46c0ad0 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -305,7 +305,7 @@ static void invalidate_icache_guest_page(void *va, size_t size)
> * does.
> */
> /**
> - * unmap_stage2_range -- Clear stage2 page table entries to unmap a range
> + * __unmap_stage2_range -- Clear stage2 page table entries to unmap a range
> * @mmu: The KVM stage-2 MMU pointer
> * @start: The intermediate physical base address of the range to unmap
> * @size: The size of the area to unmap
> @@ -328,7 +328,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
> may_block));
> }
>
> -static void unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
> +void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
> {
> __unmap_stage2_range(mmu, start, size, true);
> }
> @@ -853,21 +853,9 @@ static struct kvm_pgtable_mm_ops kvm_s2_mm_ops = {
> .icache_inval_pou = invalidate_icache_guest_page,
> };
>
> -/**
> - * kvm_init_stage2_mmu - Initialise a S2 MMU structure
> - * @kvm: The pointer to the KVM structure
> - * @mmu: The pointer to the s2 MMU structure
> - * @type: The machine type of the virtual machine
> - *
> - * Allocates only the stage-2 HW PGD level table(s).
> - * Note we don't need locking here as this is only called when the VM is
> - * created, which can only be done once.
> - */
> -int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type)
> +static int kvm_init_ipa_range(struct kvm_s2_mmu *mmu, unsigned long type)
> {
> u32 kvm_ipa_limit = get_kvm_ipa_limit();
> - int cpu, err;
> - struct kvm_pgtable *pgt;
> u64 mmfr0, mmfr1;
> u32 phys_shift;
>
> @@ -894,11 +882,58 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
> mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
> mmu->vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift);
>
> + return 0;
> +}
> +
> +/**
> + * kvm_init_stage2_mmu - Initialise a S2 MMU structure
> + * @kvm: The pointer to the KVM structure
> + * @mmu: The pointer to the s2 MMU structure
> + * @type: The machine type of the virtual machine
> + *
> + * Allocates only the stage-2 HW PGD level table(s).
> + * Note we don't need locking here as this is only called in two cases:
> + *
> + * - when the VM is created, which can't race against anything
> + *
> + * - when secondary kvm_s2_mmu structures are initialised for NV
> + * guests, and the caller must hold kvm->lock as this is called on a
> + * per-vcpu basis.
> + */
> +int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type)
> +{
> + int cpu, err;
> + struct kvm_pgtable *pgt;
> +
> + /*
> + * If we already have our page tables in place, and the
> + * MMU context is the canonical one, we have a bug somewhere,
> + * as this is only supposed to ever happen once per VM.
> + *
> + * Otherwise, we're building nested page tables, and that's
> + * probably because userspace called KVM_ARM_VCPU_INIT more
> + * than once on the same vcpu. Since that's actually legal,
> + * don't kick up a fuss and leave gracefully.
> + */
> if (mmu->pgt != NULL) {
> + if (&kvm->arch.mmu != mmu)
> + return 0;
> +
> kvm_err("kvm_arch already initialized?\n");
> return -EINVAL;
> }
>
> + /*
> + * We only initialise the IPA range on the canonical MMU, so
> + * the type is meaningless in all other situations.
> + */
> + if (&kvm->arch.mmu != mmu)
> + type = kvm_get_pa_bits(kvm);
> +
> + err = kvm_init_ipa_range(mmu, type);
> + if (err)
> + return err;
> +
> pgt = kzalloc(sizeof(*pgt), GFP_KERNEL_ACCOUNT);
> if (!pgt)
> return -ENOMEM;
> @@ -923,6 +958,10 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
>
> mmu->pgt = pgt;
> mmu->pgd_phys = __pa(pgt->pgd);
> +
> + if (&kvm->arch.mmu != mmu)
> + kvm_init_nested_s2_mmu(mmu);
> +
> return 0;
>
> out_destroy_pgtable:
> @@ -974,7 +1013,7 @@ static void stage2_unmap_memslot(struct kvm *kvm,
>
> if (!(vma->vm_flags & VM_PFNMAP)) {
> gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
> - unmap_stage2_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
> + kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
> }
> hva = vm_end;
> } while (hva < reg_end);
> @@ -2054,11 +2093,6 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
> {
> }
>
> -void kvm_arch_flush_shadow_all(struct kvm *kvm)
> -{
> - kvm_uninit_stage2_mmu(kvm);
> -}
> -
> void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
> struct kvm_memory_slot *slot)
> {
> @@ -2066,7 +2100,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
> phys_addr_t size = slot->npages << PAGE_SHIFT;
>
> write_lock(&kvm->mmu_lock);
> - unmap_stage2_range(&kvm->arch.mmu, gpa, size);
> + kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, size);
> write_unlock(&kvm->mmu_lock);
> }
>
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index 66d05f5d39a2..c5752ab8c3fe 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -7,7 +7,9 @@
> #include <linux/kvm.h>
> #include <linux/kvm_host.h>
>
> +#include <asm/kvm_arm.h>
> #include <asm/kvm_emulate.h>
> +#include <asm/kvm_mmu.h>
> #include <asm/kvm_nested.h>
> #include <asm/sysreg.h>
>
> @@ -16,6 +18,211 @@
> /* Protection against the sysreg repainting madness... */
> #define NV_FTR(r, f) ID_AA64##r##_EL1_##f
>
> +void kvm_init_nested(struct kvm *kvm)
> +{
> + kvm->arch.nested_mmus = NULL;
> + kvm->arch.nested_mmus_size = 0;
> +}
> +
> +int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
> +{
> + struct kvm *kvm = vcpu->kvm;
> + struct kvm_s2_mmu *tmp;
> + int num_mmus;
> + int ret = -ENOMEM;
> +
> + if (!test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->kvm->arch.vcpu_features))
> + return 0;
> +
> + if (!cpus_have_final_cap(ARM64_HAS_NESTED_VIRT))
> + return -EINVAL;
> +
> + /*
> + * Let's treat memory allocation failures as benign: If we fail to
> + * allocate anything, return an error and keep the allocated array
> + * alive. Userspace may try to recover by initializing the vcpu
> + * again, and there is no reason to affect the whole VM for this.
> + */
> + num_mmus = atomic_read(&kvm->online_vcpus) * 2;
> + tmp = krealloc(kvm->arch.nested_mmus,
> + num_mmus * sizeof(*kvm->arch.nested_mmus),
> + GFP_KERNEL_ACCOUNT | __GFP_ZERO);
> + if (tmp) {
> + /*
> + * If we went through a reallocation, adjust the MMU
> + * back-pointers in the previously initialised
> + * pgtable structures.
> + */
> + if (kvm->arch.nested_mmus != tmp) {
> + int i;
> +
> + for (i = 0; i < num_mmus - 2; i++)
> + tmp[i].pgt->mmu = &tmp[i];
> + }
> +
> + if (kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 1], 0) ||
> + kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2], 0)) {
> + kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
> + kvm_free_stage2_pgd(&tmp[num_mmus - 2]);
> + } else {
> + kvm->arch.nested_mmus_size = num_mmus;
> + ret = 0;
> + }
> +
> + kvm->arch.nested_mmus = tmp;
> + }
> +
> + return ret;
> +}
> +
> +/* Must be called with kvm->mmu_lock held */
> +struct kvm_s2_mmu *lookup_s2_mmu(struct kvm_vcpu *vcpu)
> +{
> + bool nested_stage2_enabled;
> + u64 vttbr, vtcr, hcr;
> + struct kvm *kvm;
> + int i;
> +
> + kvm = vcpu->kvm;
> +
> + vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
> + vtcr = vcpu_read_sys_reg(vcpu, VTCR_EL2);
> + hcr = vcpu_read_sys_reg(vcpu, HCR_EL2);
> +
> + nested_stage2_enabled = hcr & HCR_VM;
> +
> + /* Don't consider the CnP bit for the vttbr match */
> + vttbr = vttbr & ~VTTBR_CNP_BIT;
> +
> + /*
> + * Two possibilities when looking up a S2 MMU context:
> + *
> + * - either S2 is enabled in the guest, and we need a context that is
> + * S2-enabled and matches the full VTTBR (VMID+BADDR) and VTCR,
> + * which makes it safe from a TLB conflict perspective (a broken
> + * guest won't be able to generate them),
> + *
> + * - or S2 is disabled, and we need a context that is S2-disabled
> + * and matches the VMID only, as all TLBs are tagged by VMID even
> + * if S2 translation is disabled.
> + */
> + for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
> + struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
> +
> + if (!kvm_s2_mmu_valid(mmu))
> + continue;
> +
> + if (nested_stage2_enabled &&
> + mmu->nested_stage2_enabled &&
> + vttbr == mmu->tlb_vttbr &&
> + vtcr == mmu->tlb_vtcr)
> + return mmu;
> +
> + if (!nested_stage2_enabled &&
> + !mmu->nested_stage2_enabled &&
> + get_vmid(vttbr) == get_vmid(mmu->tlb_vttbr))
> + return mmu;
> + }
> + return NULL;
> +}
> +
> +/* Must be called with kvm->mmu_lock held */
> +static struct kvm_s2_mmu *get_s2_mmu_nested(struct kvm_vcpu *vcpu)
> +{
> + struct kvm *kvm = vcpu->kvm;
> + struct kvm_s2_mmu *s2_mmu;
> + int i;
> +
> + s2_mmu = lookup_s2_mmu(vcpu);
> + if (s2_mmu)
> + goto out;
> +
> + /*
> + * Make sure we don't always search from the same point, or we
> + * will always reuse a potentially active context, leaving
> + * free contexts unused.
> + */
> + for (i = kvm->arch.nested_mmus_next;
> + i < (kvm->arch.nested_mmus_size + kvm->arch.nested_mmus_next);
> + i++) {
> + s2_mmu = &kvm->arch.nested_mmus[i % kvm->arch.nested_mmus_size];
> +
> + if (atomic_read(&s2_mmu->refcnt) == 0)
> + break;
> + }
> + BUG_ON(atomic_read(&s2_mmu->refcnt)); /* We have struct MMUs to spare */
> +
> + /* Set the scene for the next search */
> + kvm->arch.nested_mmus_next = (i + 1) % kvm->arch.nested_mmus_size;
> +
> + if (kvm_s2_mmu_valid(s2_mmu)) {
> + /* Clear the old state */
> + kvm_unmap_stage2_range(s2_mmu, 0, kvm_phys_size(s2_mmu));
> + if (atomic64_read(&s2_mmu->vmid.id))
> + kvm_call_hyp(__kvm_tlb_flush_vmid, s2_mmu);
> + }
> +
> + /*
> + * The virtual VMID (modulo CnP) will be used as a key when matching
> + * an existing kvm_s2_mmu.
> + *
> + * We cache VTCR at allocation time, once and for all. It'd be great
> + * if the guest didn't screw that one up, as this is not very
> + * forgiving...
> + */
> + s2_mmu->tlb_vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2) & ~VTTBR_CNP_BIT;
> + s2_mmu->tlb_vtcr = vcpu_read_sys_reg(vcpu, VTCR_EL2);
> + s2_mmu->nested_stage2_enabled = vcpu_read_sys_reg(vcpu, HCR_EL2) & HCR_VM;
> +
> +out:
> + atomic_inc(&s2_mmu->refcnt);
> + return s2_mmu;
> +}
> +
> +void kvm_init_nested_s2_mmu(struct kvm_s2_mmu *mmu)
> +{
> + mmu->tlb_vttbr = 1;
> + mmu->nested_stage2_enabled = false;
> + atomic_set(&mmu->refcnt, 0);
> +}
> +
> +void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
> +{
> + if (is_hyp_ctxt(vcpu)) {
> + vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
> + } else {
> + write_lock(&vcpu->kvm->mmu_lock);
> + vcpu->arch.hw_mmu = get_s2_mmu_nested(vcpu);
> + write_unlock(&vcpu->kvm->mmu_lock);
> + }
Due to a race, a non-existent L2 mmu table gets loaded for some of the
vCPUs while booting L1 (noticed when booting L1 with a large number of
vCPUs). This happens because, at this early stage, E2H (hyp context) is
not yet set, so the trap on the ERET of L1's boot-strap code results in
a context switch as if it were returning to L2 (guest enter), loading an
uninitialized mmu table on those vCPUs and causing unrecoverable traps
and aborts.
Adding code to check that stage 2 is enabled (the diff below fixes the
issue) avoids the false ERET and continues with L1's mmu context.
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 340e2710cdda..1901dd19d770 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -759,7 +759,12 @@ void kvm_init_nested_s2_mmu(struct kvm_s2_mmu *mmu)
void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
{
- if (is_hyp_ctxt(vcpu)) {
+ bool nested_stage2_enabled = vcpu_read_sys_reg(vcpu, HCR_EL2) & HCR_VM;
+
+ /* Load L2 mmu only if nested_stage2_enabled, avoid mmu
+ * load due to false ERET trap.
+ */
+ if (is_hyp_ctxt(vcpu) || !nested_stage2_enabled) {
vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
} else {
write_lock(&vcpu->kvm->mmu_lock);
Hoping we don't hit this when we move to a completely NV2-based
implementation and E2H is always set?
> +}
> +
> +void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu)
> +{
> + if (vcpu->arch.hw_mmu != &vcpu->kvm->arch.mmu) {
> + atomic_dec(&vcpu->arch.hw_mmu->refcnt);
> + vcpu->arch.hw_mmu = NULL;
> + }
> +}
> +
> +void kvm_arch_flush_shadow_all(struct kvm *kvm)
> +{
> + int i;
> +
> + for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
> + struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
> +
> + WARN_ON(atomic_read(&mmu->refcnt));
> +
> + if (!atomic_read(&mmu->refcnt))
> + kvm_free_stage2_pgd(mmu);
> + }
> + kfree(kvm->arch.nested_mmus);
> + kvm->arch.nested_mmus = NULL;
> + kvm->arch.nested_mmus_size = 0;
> + kvm_uninit_stage2_mmu(kvm);
> +}
> +
> /*
> * Our emulated CPU doesn't support all the possible features. For the
> * sake of simplicity (and probably mental sanity), wipe out a number
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index 5bb4de162cab..e106ea01598f 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -266,6 +266,12 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> preempt_enable();
> }
>
> +u32 kvm_get_pa_bits(struct kvm *kvm)
> +{
> + /* Fixed limit until we can configure ID_AA64MMFR0.PARange */
> + return kvm_ipa_limit;
> +}
> +
> u32 get_kvm_ipa_limit(void)
> {
> return kvm_ipa_limit;
Thanks,
Ganapat
* Re: [PATCH v11 17/43] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures
2024-01-23 9:55 ` Ganapatrao Kulkarni
@ 2024-01-23 14:26 ` Marc Zyngier
2024-01-25 8:14 ` Ganapatrao Kulkarni
0 siblings, 1 reply; 79+ messages in thread
From: Marc Zyngier @ 2024-01-23 14:26 UTC (permalink / raw)
To: Ganapatrao Kulkarni
Cc: kvmarm, kvm, linux-arm-kernel, Alexandru Elisei, Andre Przywara,
Chase Conklin, Christoffer Dall, Darren Hart, Jintack Lim,
Russell King, Miguel Luis, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, D Scott Phillips
Hi Ganapatrao,
On Tue, 23 Jan 2024 09:55:32 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>
> Hi Marc,
>
> > +void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
> > +{
> > + if (is_hyp_ctxt(vcpu)) {
> > + vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
> > + } else {
> > + write_lock(&vcpu->kvm->mmu_lock);
> > + vcpu->arch.hw_mmu = get_s2_mmu_nested(vcpu);
> > + write_unlock(&vcpu->kvm->mmu_lock);
> > + }
>
> Due to a race, a non-existent L2 mmu table gets loaded for some of the
> vCPUs while booting L1 (noticed when booting L1 with a large number of
> vCPUs). This happens because, at this early stage, E2H (hyp context) is
> not yet set, so the trap on the ERET of L1's boot-strap code results in
> a context switch as if it were returning to L2 (guest enter), loading an
> uninitialized mmu table on those vCPUs and causing unrecoverable traps
> and aborts.
I'm not sure I understand the problem you're describing here.
What is the race exactly? Why isn't the shadow S2 good enough? Not
having HCR_EL2.VM set doesn't mean we can use the same S2, as the TLBs
are tagged by a different VMID, so staying on the canonical S2 seems
wrong.
My expectations are that the L1 ERET from EL2 to EL1 is trapped, and
that we pick an empty S2 and start populating it. What fails in this
process?
> Adding code to check that stage 2 is enabled (the diff below fixes the
> issue) avoids the false ERET and continues with L1's mmu context.
>
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index 340e2710cdda..1901dd19d770 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -759,7 +759,12 @@ void kvm_init_nested_s2_mmu(struct kvm_s2_mmu *mmu)
>
> void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
> {
> - if (is_hyp_ctxt(vcpu)) {
> + bool nested_stage2_enabled = vcpu_read_sys_reg(vcpu, HCR_EL2) & HCR_VM;
> +
> + /* Load L2 mmu only if nested_stage2_enabled, avoid mmu
> + * load due to false ERET trap.
> + */
> + if (is_hyp_ctxt(vcpu) || !nested_stage2_enabled) {
> vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
> } else {
> write_lock(&vcpu->kvm->mmu_lock);
As I said above, this doesn't look right.
> Hoping we don't hit this when we move to a completely NV2-based
> implementation and E2H is always set?
No, the same constraints apply. I don't see why this would change.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 17/43] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures
2024-01-23 14:26 ` Marc Zyngier
@ 2024-01-25 8:14 ` Ganapatrao Kulkarni
2024-01-25 8:58 ` Marc Zyngier
0 siblings, 1 reply; 79+ messages in thread
From: Ganapatrao Kulkarni @ 2024-01-25 8:14 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Alexandru Elisei, Andre Przywara,
Chase Conklin, Christoffer Dall, Darren Hart, Jintack Lim,
Russell King, Miguel Luis, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, D Scott Phillips
Hi Marc,
On 23-01-2024 07:56 pm, Marc Zyngier wrote:
> Hi Ganapatrao,
>
> On Tue, 23 Jan 2024 09:55:32 +0000,
> Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>>
>> Hi Marc,
>>
>>> +void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
>>> +{
>>> + if (is_hyp_ctxt(vcpu)) {
>>> + vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
>>> + } else {
>>> + write_lock(&vcpu->kvm->mmu_lock);
>>> + vcpu->arch.hw_mmu = get_s2_mmu_nested(vcpu);
>>> + write_unlock(&vcpu->kvm->mmu_lock);
>>> + }
>>
>> Due to a race, a non-existent L2 mmu table gets loaded for some of the
>> vCPUs while booting L1 (noticed when booting L1 with a large number of
>> vCPUs). This happens because, at this early stage, E2H (hyp context) is
>> not yet set, so the trap on the ERET of L1's boot-strap code results in
>> a context switch as if it were returning to L2 (guest enter), loading an
>> uninitialized mmu table on those vCPUs and causing unrecoverable traps
>> and aborts.
>
> I'm not sure I understand the problem you're describing here.
>
IIUC, when the S2 fault happens, the faulting vCPU gets the pages from
the qemu process, maps them in S2, and copies the code to the allocated
memory. Meanwhile, the other vCPUs racing to come online switch over to
the dummy S2, find the mapping, and return to L1; subsequent execution
then does not fault, but instead fetches from memory where no code
exists yet (for some), generates a stage 1 instruction abort, and jumps
to the abort handler, where no code exists either, so they keep
aborting. This happens on random vCPUs (no pattern).
> What is the race exactly? Why isn't the shadow S2 good enough? Not
> having HCR_EL2.VM set doesn't mean we can use the same S2, as the TLBs
> are tagged by a different VMID, so staying on the canonical S2 seems
> wrong.
IMO, it is unnecessary to switch over on the first ERET while L1 is
booting and to repeat the faults and page allocation, which is anyway
dummy once L1 switches to E2H.
Let L1 always use its S2, which is created by L0. We should even
consider avoiding the entry created for L1 in the array of S2-MMUs (the
first entry in the array) and avoid unnecessary iteration/lookup while
unmapping NestedVMs.
I am anticipating that this unwanted switch-over won't happen when we
have NV2-only support in v12?
>
> My expectations are that the L1 ERET from EL2 to EL1 is trapped, and
> that we pick an empty S2 and start populating it. What fails in this
> process?
>
>> Adding code to check that stage 2 is enabled (the diff below fixes the
>> issue) avoids the false ERET and continues with L1's mmu context.
>>
>> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
>> index 340e2710cdda..1901dd19d770 100644
>> --- a/arch/arm64/kvm/nested.c
>> +++ b/arch/arm64/kvm/nested.c
>> @@ -759,7 +759,12 @@ void kvm_init_nested_s2_mmu(struct kvm_s2_mmu *mmu)
>>
>> void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
>> {
>> - if (is_hyp_ctxt(vcpu)) {
>> + bool nested_stage2_enabled = vcpu_read_sys_reg(vcpu, HCR_EL2) & HCR_VM;
>> +
>> + /* Load L2 mmu only if nested_stage2_enabled, avoid mmu
>> + * load due to false ERET trap.
>> + */
>> + if (is_hyp_ctxt(vcpu) || !nested_stage2_enabled) {
>> vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
>> } else {
>> write_lock(&vcpu->kvm->mmu_lock);
>
> As I said above, this doesn't look right.
>
>> Hoping we don't hit this when we move to a completely NV2-based
>> implementation and E2H is always set?
>
> No, the same constraints apply. I don't see why this would change.
>
> Thanks,
>
> M.
>
Thanks,
Ganapat
* Re: [PATCH v11 17/43] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures
2024-01-25 8:14 ` Ganapatrao Kulkarni
@ 2024-01-25 8:58 ` Marc Zyngier
2024-01-31 9:39 ` Ganapatrao Kulkarni
0 siblings, 1 reply; 79+ messages in thread
From: Marc Zyngier @ 2024-01-25 8:58 UTC (permalink / raw)
To: Ganapatrao Kulkarni
Cc: kvmarm, kvm, linux-arm-kernel, Alexandru Elisei, Andre Przywara,
Chase Conklin, Christoffer Dall, Darren Hart, Jintack Lim,
Russell King, Miguel Luis, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, D Scott Phillips
On Thu, 25 Jan 2024 08:14:32 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>
>
> Hi Marc,
>
> On 23-01-2024 07:56 pm, Marc Zyngier wrote:
> > Hi Ganapatrao,
> >
> > On Tue, 23 Jan 2024 09:55:32 +0000,
> > Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
> >>
> >> Hi Marc,
> >>
> >>> +void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
> >>> +{
> >>> + if (is_hyp_ctxt(vcpu)) {
> >>> + vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
> >>> + } else {
> >>> + write_lock(&vcpu->kvm->mmu_lock);
> >>> + vcpu->arch.hw_mmu = get_s2_mmu_nested(vcpu);
> >>> + write_unlock(&vcpu->kvm->mmu_lock);
> >>> + }
> >>
> >> Due to a race, a non-existent L2 mmu table gets loaded for some of the
> >> vCPUs while booting L1 (noticed when booting L1 with a large number of
> >> vCPUs). This happens because, at this early stage, E2H (hyp context) is
> >> not yet set, so the trap on the ERET of L1's boot-strap code results in
> >> a context switch as if it were returning to L2 (guest enter), loading an
> >> uninitialized mmu table on those vCPUs and causing unrecoverable traps
> >> and aborts.
> >
> > I'm not sure I understand the problem you're describing here.
> >
>
> IIUC, when the S2 fault happens, the faulting vCPU gets the pages from
> the qemu process, maps them in S2, and copies the code to the allocated
> memory. Meanwhile, the other vCPUs racing to come online switch over to
> the dummy S2, find the mapping, and return to L1; subsequent execution
> then does not fault, but instead fetches from memory where no code
> exists yet (for some), generates a stage 1 instruction abort, and jumps
> to the abort handler, where no code exists either, so they keep
> aborting. This happens on random vCPUs (no pattern).
Why is that any different from the way we handle faults in the
non-nested case? If there is a case where we can map the PTE at S2
before the data is available, this is a generic bug that can trigger
irrespective of NV.
>
> > What is the race exactly? Why isn't the shadow S2 good enough? Not
> > having HCR_EL2.VM set doesn't mean we can use the same S2, as the TLBs
> > are tagged by a different VMID, so staying on the canonical S2 seems
> > wrong.
>
> IMO, it is unnecessary to switch over on the first ERET while L1 is
> booting and to repeat the faults and page allocation, which is anyway
> dummy once L1 switches to E2H.
It is mandated by the architecture. EL1 is, by definition, a different
translation regime from EL2. So we *must* have a different S2, because
that defines the boundaries of TLB creation and invalidation. The
fact that these are the same pages is totally irrelevant.
> Let L1 always use its S2, which is created by L0. We should even
> consider avoiding the entry created for L1 in the array of S2-MMUs (the
> first entry in the array) and avoid unnecessary iteration/lookup while
> unmapping NestedVMs.
I'm sorry, but this is just wrong. You are merging the EL1 and EL2
translation regimes, which is not acceptable.
> I am anticipating that this unwanted switch-over won't happen when we
> have NV2-only support in v12?
V11 is already NV2 only, so I really don't get what you mean here.
Everything stays the same, and there is nothing to change here.
What you describe looks like a terrible bug somewhere on the
page-fault path that has the potential to impact non-NV, and I'd like
to focus on that.
I've been booting my L1 with a fairly large number of vcpus (32 vcpu
for 6 physical CPUs), and I don't see this.
Since you seem to have a way to trigger it on your HW, can you please
pinpoint the situation where we map the page without having the
corresponding data?
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v11 17/43] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures
2024-01-25 8:58 ` Marc Zyngier
@ 2024-01-31 9:39 ` Ganapatrao Kulkarni
2024-01-31 13:50 ` Marc Zyngier
0 siblings, 1 reply; 79+ messages in thread
From: Ganapatrao Kulkarni @ 2024-01-31 9:39 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Alexandru Elisei, Andre Przywara,
Chase Conklin, Christoffer Dall, Darren Hart, Jintack Lim,
Russell King, Miguel Luis, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, D Scott Phillips
Hi Marc,
On 25-01-2024 02:28 pm, Marc Zyngier wrote:
> On Thu, 25 Jan 2024 08:14:32 +0000,
> Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>>
>>
>> Hi Marc,
>>
>> On 23-01-2024 07:56 pm, Marc Zyngier wrote:
>>> Hi Ganapatrao,
>>>
>>> On Tue, 23 Jan 2024 09:55:32 +0000,
>>> Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>>>>
>>>> Hi Marc,
>>>>
>>>>> +void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
>>>>> +{
>>>>> + if (is_hyp_ctxt(vcpu)) {
>>>>> + vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
>>>>> + } else {
>>>>> + write_lock(&vcpu->kvm->mmu_lock);
>>>>> + vcpu->arch.hw_mmu = get_s2_mmu_nested(vcpu);
>>>>> + write_unlock(&vcpu->kvm->mmu_lock);
>>>>> + }
>>>>
>>>> Due to a race, a non-existent L2 mmu table gets loaded for some of the
>>>> vCPUs while booting L1 (noticed when booting L1 with a large number of
>>>> vCPUs). This happens because, at this early stage, E2H (hyp context) is
>>>> not yet set, so the trap on the ERET of L1's boot-strap code results in
>>>> a context switch as if it were returning to L2 (guest enter), loading an
>>>> uninitialized mmu table on those vCPUs and causing unrecoverable traps
>>>> and aborts.
>>>
>>> I'm not sure I understand the problem you're describing here.
>>>
>>
>> IIUC, when the S2 fault happens, the faulting vCPU gets the pages from
>> the qemu process, maps them in S2, and copies the code to the allocated
>> memory. Meanwhile, the other vCPUs racing to come online switch over to
>> the dummy S2, find the mapping, and return to L1; subsequent execution
>> then does not fault, but instead fetches from memory where no code
>> exists yet (for some), generates a stage 1 instruction abort, and jumps
>> to the abort handler, where no code exists either, so they keep
>> aborting. This happens on random vCPUs (no pattern).
>
> Why is that any different from the way we handle faults in the
> non-nested case? If there is a case where we can map the PTE at S2
> before the data is available, this is a generic bug that can trigger
> irrespective of NV.
>
>>
>>> What is the race exactly? Why isn't the shadow S2 good enough? Not
>>> having HCR_EL2.VM set doesn't mean we can use the same S2, as the TLBs
>>> are tagged by a different VMID, so staying on the canonical S2 seems
>>> wrong.
>>
>> IMO, it is unnecessary to switch over on the first ERET while L1 is
>> booting and to repeat the faults and page allocation, which is anyway
>> dummy once L1 switches to E2H.
>
> It is mandated by the architecture. EL1 is, by definition, a different
> translation regime from EL2. So we *must* have a different S2, because
> that defines the boundaries of TLB creation and invalidation. The
> fact that these are the same pages is totally irrelevant.
>
>> Let L1 always use its S2, which is created by L0. We should even
>> consider avoiding the entry created for L1 in the array of S2-MMUs (the
>> first entry in the array) and avoid unnecessary iteration/lookup while
>> unmapping NestedVMs.
>
> I'm sorry, but this is just wrong. You are merging the EL1 and EL2
> translation regimes, which is not acceptable.
>
>> I am anticipating that this unwanted switch-over won't happen when we
>> have NV2-only support in v12?
>
> V11 is already NV2 only, so I really don't get what you mean here.
> Everything stays the same, and there is nothing to change here.
>
I am still using v10, since v11 (and also v12/nv-6.9-sr-enforcement) has
issues booting with QEMU.
I tried v11 with my local branch of QEMU, which is 7.2 based, and also
with Eric's QEMU [1], which is rebased on 8.2. The issue is that QEMU
crashes at the very beginning. Not sure about the cause; yet to debug.
[1] https://github.com/eauger/qemu/tree/v8.2-nv
> What you describe looks like a terrible bug somewhere on the
> page-fault path that has the potential to impact non-NV, and I'd like
> to focus on that.
I found the bug/issue and fixed it.
The problem was quite random and happened when trying to boot L1 with a
large number of cores (200 to 300+).
I have implemented (yet to send to the ML for review) a fix for the
performance issue [2] due to the unmapping of shadow tables, by
implementing a lookup table to unmap only the mapped shadow IPAs instead
of unmapping the complete shadow S2 of all active NestedVMs.
This lookup table was not adding the mappings created for L1 when it is
using the shadow S2-MMU (my bad, I missed noticing that L1 hops between
vEL2 and EL1 at the booting stage); hence, when there is a page
migration, the unmap was not getting done for those pages, resulting in
access to stale pages/memory by some of the vCPUs of L1.
I have modified the check performed while adding a Shadow-IPA to PA
mapping to the lookup table, to check whether the page is getting mapped
for a NestedVM or for L1 while it is using a shadow S2.
[2] https://www.spinics.net/lists/kvm/msg326638.html
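Roughly, the added bookkeeping looks like this (an illustrative sketch
only; record_shadow_ipa() and shadow_ipa_table_add() are placeholder
names for my local patches):

static void record_shadow_ipa(struct kvm_vcpu *vcpu, phys_addr_t ipa,
			      phys_addr_t pa)
{
	struct kvm_s2_mmu *mmu = vcpu->arch.hw_mmu;

	/* Nothing to track for the canonical S2 */
	if (mmu == &vcpu->kvm->arch.mmu)
		return;

	/*
	 * Any shadow S2 counts: either a NestedVM, or L1 itself while
	 * it hops between vEL2 and EL1 during boot. Track the mapping
	 * so that a later migration unmaps exactly these shadow IPAs.
	 */
	shadow_ipa_table_add(mmu, ipa, pa);
}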
>
> I've been booting my L1 with a fairly large number of vcpus (32 vcpu
> for 6 physical CPUs), and I don't see this.
>
> Since you seem to have a way to trigger it on your HW, can you please
> pinpoint the situation where we map the page without having the
> corresponding data?
>
> Thanks,
>
> M.
>
Thanks,
Ganapat
* Re: [PATCH v11 17/43] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures
2024-01-31 9:39 ` Ganapatrao Kulkarni
@ 2024-01-31 13:50 ` Marc Zyngier
0 siblings, 0 replies; 79+ messages in thread
From: Marc Zyngier @ 2024-01-31 13:50 UTC (permalink / raw)
To: Ganapatrao Kulkarni
Cc: kvmarm, kvm, linux-arm-kernel, Alexandru Elisei, Andre Przywara,
Chase Conklin, Christoffer Dall, Darren Hart, Jintack Lim,
Russell King, Miguel Luis, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, D Scott Phillips
On Wed, 31 Jan 2024 09:39:34 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>
>
> Hi Marc,
>
> On 25-01-2024 02:28 pm, Marc Zyngier wrote:
> > On Thu, 25 Jan 2024 08:14:32 +0000,
> > Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
> >>
> >>
> >> Hi Marc,
> >>
> >> On 23-01-2024 07:56 pm, Marc Zyngier wrote:
> >>> Hi Ganapatrao,
> >>>
> >>> On Tue, 23 Jan 2024 09:55:32 +0000,
> >>> Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
> >>>>
> >>>> Hi Marc,
> >>>>
> >>>>> +void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu)
> >>>>> +{
> >>>>> + if (is_hyp_ctxt(vcpu)) {
> >>>>> + vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
> >>>>> + } else {
> >>>>> + write_lock(&vcpu->kvm->mmu_lock);
> >>>>> + vcpu->arch.hw_mmu = get_s2_mmu_nested(vcpu);
> >>>>> + write_unlock(&vcpu->kvm->mmu_lock);
> >>>>> + }
> >>>>
> >>>> Due to a race, a non-existent L2 mmu table gets loaded for some of the
> >>>> vCPUs while booting L1 (noticed when booting L1 with a large number of
> >>>> vCPUs). This happens because, at this early stage, E2H (hyp context) is
> >>>> not yet set, so the trap on the ERET of L1's boot-strap code results in
> >>>> a context switch as if it were returning to L2 (guest enter), loading an
> >>>> uninitialized mmu table on those vCPUs and causing unrecoverable traps
> >>>> and aborts.
> >>>
> >>> I'm not sure I understand the problem you're describing here.
> >>>
> >>
> >> IIUC, when the S2 fault happens, the faulting vCPU gets the pages from
> >> the qemu process, maps them in S2, and copies the code to the allocated
> >> memory. Meanwhile, the other vCPUs racing to come online switch over to
> >> the dummy S2, find the mapping, and return to L1; subsequent execution
> >> then does not fault, but instead fetches from memory where no code
> >> exists yet (for some), generates a stage 1 instruction abort, and jumps
> >> to the abort handler, where no code exists either, so they keep
> >> aborting. This happens on random vCPUs (no pattern).
> >
> > Why is that any different from the way we handle faults in the
> > non-nested case? If there is a case where we can map the PTE at S2
> > before the data is available, this is a generic bug that can trigger
> > irrespective of NV.
> >
> >>
> >>> What is the race exactly? Why isn't the shadow S2 good enough? Not
> >>> having HCR_EL2.VM set doesn't mean we can use the same S2, as the TLBs
> >>> are tagged by a different VMID, so staying on the canonical S2 seems
> >>> wrong.
> >>
> >> IMO, it is unnecessary to switch over on the first ERET while L1 is
> >> booting and to repeat the faults and page allocation, which is anyway
> >> dummy once L1 switches to E2H.
> >
> > It is mandated by the architecture. EL1 is, by definition, a different
> > translation regime from EL2. So we *must* have a different S2, because
> > that defines the boundaries of TLB creation and invalidation. The
> > fact that these are the same pages is totally irrelevant.
> >
> >> Let L1 always use its S2, which is created by L0. We should even
> >> consider avoiding the entry created for L1 in the array of S2-MMUs (the
> >> first entry in the array) and avoid unnecessary iteration/lookup while
> >> unmapping NestedVMs.
> >
> > I'm sorry, but this is just wrong. You are merging the EL1 and EL2
> > translation regimes, which is not acceptable.
> >
> >> I am anticipating that this unwanted switch-over won't happen when we
> >> have NV2-only support in v12?
> >
> > V11 is already NV2 only, so I really don't get what you mean here.
> > Everything stays the same, and there is nothing to change here.
> >
>
> I am still using v10, since v11 (and also v12/nv-6.9-sr-enforcement)
> has issues booting with QEMU.
Let's be clear: I have no interest in reports against a version that
is older than the current one. If you still use V10, then
congratulations, you are the maintainer of that version.
> I tried V11 with my local branch of QEMU, which is 7.2 based, and
> also with Eric's QEMU [1], which is rebased on 8.2. In both cases
> QEMU crashes at the very beginning. I am not sure what the issue is
> and have yet to debug it.
>
> [1] https://github.com/eauger/qemu/tree/v8.2-nv
I have already reported that QEMU was doing some horrible things
behind the kernel's back, and I don't think it is working correctly.
>
> > What you describe looks like a terrible bug somewhere on the
> > page-fault path that has the potential to impact non-NV, and I'd like
> > to focus on that.
>
> I found the bug and fixed it. The problem was quite random and showed
> up when booting L1 with a large number of cores (200 to 300+).
> 
> To fix the performance issue [2] caused by the unmapping of shadow
> tables, I have implemented (yet to be sent to the ML for review) a
> lookup table that unmaps only the shadow IPAs that were actually
> mapped, instead of unmapping the complete shadow S2 of every active
> NestedVM.
Again, this is irrelevant:
- you develop against an unmaintained version
- you waste time prematurely optimising code that is clearly
advertised as throw-away
>
> This lookup table was not recording the mappings created for L1 while
> it is using the shadow S2 MMU (my bad, I missed that L1 hops between
> vEL2 and EL1 during boot). Hence, when a page migration happened, the
> unmap was not done for those pages, and some of L1's vCPUs ended up
> accessing stale pages/memory.
>
> I have modified the check performed when adding a shadow-IPA to PA
> mapping to the lookup table, so that it covers a page being mapped
> either for a NestedVM or for L1 while L1 is running on a shadow S2
> (see the sketch below).
>
> [2] https://www.spinics.net/lists/kvm/msg326638.html
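
Since those patches are not public, the following is only a hedged
sketch of the scheme as described, with invented names: a reverse map
from host pfn to the shadow-S2 entries mapping it, populated for any
vcpu running on a shadow S2 (NestedVM faults and L1 faults taken in
the vEL1 regime alike), so a host-side unmap walks only the affected
shadow IPAs.

/* Hypothetical reverse-map entry: one per installed shadow-S2 PTE. */
struct shadow_rmap_entry {
	struct kvm_s2_mmu *mmu;	/* shadow S2 holding the mapping */
	u64 shadow_ipa;		/* IPA that maps this host pfn */
	struct list_head link;	/* chained off a pfn-indexed table */
};

/* Called wherever a shadow-S2 PTE is installed -- for L2, and also
 * for L1 while it still runs in the vEL1 regime (the missed case). */
static void shadow_rmap_add(struct kvm *kvm, u64 pfn,
			    struct kvm_s2_mmu *mmu, u64 shadow_ipa)
{
	/* allocate an entry and link it into pfn's list, under mmu_lock */
}

/* mmu_notifier path: unmap only the recorded shadow IPAs instead of
 * the complete shadow S2 of every active NestedVM. */
static void shadow_rmap_unmap_pfn(struct kvm *kvm, u64 pfn)
{
	/* for each entry on pfn's list, under mmu_lock:
	 *     unmap entry->shadow_ipa from entry->mmu, then free it */
}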
Do I read it correctly that I wasted hours trying to reproduce
something that only exists on an obsolete series combined with private
patches?
M.
--
Without deviation from the norm, progress is not possible.
Thread overview: 79+ messages
2023-11-20 13:09 [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 01/43] arm64: cpufeatures: Restrict NV support to FEAT_NV2 Marc Zyngier
2023-11-21 9:07 ` Ganapatrao Kulkarni
2023-11-21 9:27 ` Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 02/43] KVM: arm64: nv: Hoist vcpu_has_nv() into is_hyp_ctxt() Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 03/43] KVM: arm64: nv: Compute NV view of idregs as a one-off Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 04/43] KVM: arm64: nv: Drop EL12 register traps that are redirected to VNCR Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 05/43] KVM: arm64: nv: Add non-VHE-EL2->EL1 translation helpers Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 06/43] KVM: arm64: nv: Add include containing the VNCR_EL2 offsets Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 07/43] KVM: arm64: Introduce a bad_trap() primitive for unexpected trap handling Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 08/43] KVM: arm64: nv: Add EL2_REG_VNCR()/EL2_REG_REDIR() sysreg helpers Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 09/43] KVM: arm64: nv: Map VNCR-capable registers to a separate page Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 10/43] KVM: arm64: nv: Handle virtual EL2 registers in vcpu_read/write_sys_reg() Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 11/43] KVM: arm64: nv: Handle HCR_EL2.E2H specially Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 12/43] KVM: arm64: nv: Handle CNTHCTL_EL2 specially Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 13/43] KVM: arm64: nv: Save/Restore vEL2 sysregs Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 14/43] KVM: arm64: nv: Respect virtual HCR_EL2.TWX setting Marc Zyngier
2023-11-20 13:09 ` [PATCH v11 15/43] KVM: arm64: nv: Respect virtual CPTR_EL2.{TFP,FPEN} settings Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 16/43] KVM: arm64: nv: Configure HCR_EL2 for FEAT_NV2 Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 17/43] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures Marc Zyngier
2024-01-23 9:55 ` Ganapatrao Kulkarni
2024-01-23 14:26 ` Marc Zyngier
2024-01-25 8:14 ` Ganapatrao Kulkarni
2024-01-25 8:58 ` Marc Zyngier
2024-01-31 9:39 ` Ganapatrao Kulkarni
2024-01-31 13:50 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 18/43] KVM: arm64: nv: Implement nested Stage-2 page table walk logic Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 19/43] KVM: arm64: nv: Handle shadow stage 2 page faults Marc Zyngier
2024-01-17 14:53 ` Joey Gouly
2024-01-17 15:53 ` Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 20/43] KVM: arm64: nv: Restrict S2 RD/WR permissions to match the guest's Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 21/43] KVM: arm64: nv: Unmap/flush shadow stage 2 page tables Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 22/43] KVM: arm64: nv: Set a handler for the system instruction traps Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 23/43] KVM: arm64: nv: Trap and emulate AT instructions from virtual EL2 Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 24/43] KVM: arm64: nv: Trap and emulate TLBI " Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 25/43] KVM: arm64: nv: Hide RAS from nested guests Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 26/43] KVM: arm64: nv: Add handling of EL2-specific timer registers Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 27/43] KVM: arm64: nv: Sync nested timer state with FEAT_NV2 Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 28/43] KVM: arm64: nv: Publish emulated timer interrupt state in the in-memory state Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 29/43] KVM: arm64: nv: Load timer before the GIC Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 30/43] KVM: arm64: nv: Nested GICv3 Support Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 31/43] KVM: arm64: nv: Don't block in WFI from nested state Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 32/43] KVM: arm64: nv: vgic: Allow userland to set VGIC maintenance IRQ Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 33/43] KVM: arm64: nv: Fold GICv3 host trapping requirements into guest setup Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 34/43] KVM: arm64: nv: Deal with broken VGIC on maintenance interrupt delivery Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 35/43] KVM: arm64: nv: Add handling of FEAT_TTL TLB invalidation Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 36/43] KVM: arm64: nv: Invalidate TLBs based on shadow S2 TTL-like information Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 37/43] KVM: arm64: nv: Tag shadow S2 entries with nested level Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 38/43] KVM: arm64: nv: Allocate VNCR page when required Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 39/43] KVM: arm64: nv: Fast-track 'InHost' exception returns Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 40/43] KVM: arm64: nv: Fast-track EL1 TLBIs for VHE guests Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 41/43] KVM: arm64: nv: Use FEAT_ECV to trap access to EL0 timers Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 42/43] KVM: arm64: nv: Accelerate EL0 timer read accesses when FEAT_ECV is on Marc Zyngier
2023-11-20 13:10 ` [PATCH v11 43/43] KVM: arm64: nv: Allow userspace to request KVM_ARM_VCPU_NESTED_VIRT Marc Zyngier
2023-11-21 8:51 ` [PATCH v11 00/43] KVM: arm64: Nested Virtualization support (FEAT_NV2 only) Ganapatrao Kulkarni
2023-11-21 9:08 ` Marc Zyngier
2023-11-21 9:26 ` Ganapatrao Kulkarni
2023-11-21 9:41 ` Marc Zyngier
2023-11-22 11:10 ` Ganapatrao Kulkarni
2023-11-22 11:39 ` Marc Zyngier
2023-11-21 16:49 ` Miguel Luis
2023-11-21 19:02 ` Marc Zyngier
2023-11-23 16:21 ` Miguel Luis
2023-11-23 16:44 ` Marc Zyngier
2023-11-24 9:50 ` Ganapatrao Kulkarni
2023-11-24 10:19 ` Marc Zyngier
2023-11-24 12:34 ` Ganapatrao Kulkarni
2023-11-24 12:51 ` Marc Zyngier
2023-11-24 13:22 ` Ganapatrao Kulkarni
2023-11-24 14:32 ` Marc Zyngier
2023-11-27 7:26 ` Ganapatrao Kulkarni
2023-11-27 9:22 ` Marc Zyngier
2023-11-27 10:59 ` Ganapatrao Kulkarni
2023-11-27 11:45 ` Marc Zyngier
2023-11-27 12:18 ` Ganapatrao Kulkarni
2023-11-27 13:57 ` Marc Zyngier
2023-12-18 12:39 ` Marc Zyngier
2023-12-18 19:51 ` Oliver Upton
2023-12-19 10:32 ` (subset) " Marc Zyngier