* [PATCH v1 0/5] Support the FEAT_HDBSS introduced in Armv9.5
@ 2025-03-11 4:03 Zhenyu Ye
2025-03-11 4:03 ` [PATCH v1 1/5] arm64/sysreg: add HDBSS related register information Zhenyu Ye
` (4 more replies)
0 siblings, 5 replies; 12+ messages in thread
From: Zhenyu Ye @ 2025-03-11 4:03 UTC (permalink / raw)
To: maz, yuzenghui, will, oliver.upton, catalin.marinas, joey.gouly
Cc: linux-kernel, yezhenyu2, xiexiangyou, zhengchuan, wangzhou1,
linux-arm-kernel, kvm, kvmarm
From: eillon <yezhenyu2@huawei.com>
This series of patches adds support for the Hardware Dirty state tracking
Structure (HDBSS) feature, introduced by the ARM architecture in the
DDI0601 (ID121123) release.
The HDBSS feature is an architectural extension, identified as FEAT_HDBSS,
that enhances tracking of translation table descriptors' dirty state. Its
goal is to reduce the cost of scanning for dirtied granules, with minimal
effect on the cost of recording when a granule is dirtied.
The purpose of this feature is to lower the execution overhead of live
migration for both the guest and the host, compared to existing
approaches (write-protection or scanning the stage-2 tables).
With these patches, userspace (such as QEMU) can use the
KVM_CAP_ARM_HW_DIRTY_STATE_TRACK ioctl to enable the HDBSS feature before
a live migration and disable it again afterwards.
See the patches for details. Thanks.
eillon (5):
arm64/sysreg: add HDBSS related register information
arm64/kvm: support set the DBM attr during memory abort
arm64/kvm: using ioctl to enable/disable the HDBSS feature
arm64/kvm: support to handle the HDBSSF event
arm64/config: add config to control whether enable HDBSS feature
arch/arm64/Kconfig | 19 +++++++
arch/arm64/Makefile | 4 +-
arch/arm64/include/asm/cpufeature.h | 15 +++++
arch/arm64/include/asm/esr.h | 2 +
arch/arm64/include/asm/kvm_arm.h | 1 +
arch/arm64/include/asm/kvm_host.h | 6 ++
arch/arm64/include/asm/kvm_mmu.h | 12 ++++
arch/arm64/include/asm/kvm_pgtable.h | 3 +
arch/arm64/include/asm/sysreg.h | 16 ++++++
arch/arm64/kvm/arm.c | 80 +++++++++++++++++++++++++++
arch/arm64/kvm/handle_exit.c | 47 ++++++++++++++++
arch/arm64/kvm/hyp/pgtable.c | 6 ++
arch/arm64/kvm/hyp/vhe/switch.c | 1 +
arch/arm64/kvm/mmu.c | 10 ++++
arch/arm64/kvm/reset.c | 7 +++
arch/arm64/tools/sysreg | 28 ++++++++++
include/linux/kvm_host.h | 1 +
include/uapi/linux/kvm.h | 1 +
tools/arch/arm64/include/asm/sysreg.h | 4 ++
tools/include/uapi/linux/kvm.h | 1 +
20 files changed, 263 insertions(+), 1 deletion(-)
--
2.39.3
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v1 1/5] arm64/sysreg: add HDBSS related register information
2025-03-11 4:03 [PATCH v1 0/5] Support the FEAT_HDBSS introduced in Armv9.5 Zhenyu Ye
@ 2025-03-11 4:03 ` Zhenyu Ye
2025-03-11 9:41 ` Marc Zyngier
2025-03-11 4:03 ` [PATCH v1 2/5] arm64/kvm: support set the DBM attr during memory abort Zhenyu Ye
` (3 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: Zhenyu Ye @ 2025-03-11 4:03 UTC (permalink / raw)
To: maz, yuzenghui, will, oliver.upton, catalin.marinas, joey.gouly
Cc: linux-kernel, yezhenyu2, xiexiangyou, zhengchuan, wangzhou1,
linux-arm-kernel, kvm, kvmarm
From: eillon <yezhenyu2@huawei.com>
The ARM architecture added the HDBSS feature and descriptions of the
related registers (HDBSSBR_EL2/HDBSSPROD_EL2) in the DDI0601 (ID121123)
release. Add them to Linux.
Signed-off-by: eillon <yezhenyu2@huawei.com>
---
arch/arm64/include/asm/esr.h | 2 ++
arch/arm64/include/asm/kvm_arm.h | 1 +
arch/arm64/include/asm/sysreg.h | 4 ++++
arch/arm64/tools/sysreg | 28 +++++++++++++++++++++++++++
tools/arch/arm64/include/asm/sysreg.h | 4 ++++
5 files changed, 39 insertions(+)
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index d1b1a33f9a8b..a33befe0999a 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -147,6 +147,8 @@
#define ESR_ELx_CM (UL(1) << ESR_ELx_CM_SHIFT)
/* ISS2 field definitions for Data Aborts */
+#define ESR_ELx_HDBSSF_SHIFT (11)
+#define ESR_ELx_HDBSSF (UL(1) << ESR_ELx_HDBSSF_SHIFT)
#define ESR_ELx_TnD_SHIFT (10)
#define ESR_ELx_TnD (UL(1) << ESR_ELx_TnD_SHIFT)
#define ESR_ELx_TagAccess_SHIFT (9)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index c2417a424b98..80793ef57f8b 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -122,6 +122,7 @@
TCR_EL2_ORGN0_MASK | TCR_EL2_IRGN0_MASK)
/* VTCR_EL2 Registers bits */
+#define VTCR_EL2_HDBSS (1UL << 45)
#define VTCR_EL2_DS TCR_EL2_DS
#define VTCR_EL2_RES1 (1U << 31)
#define VTCR_EL2_HD (1 << 22)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 05ea5223d2d5..b727772c06fb 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -522,6 +522,10 @@
#define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2)
#define SYS_VNCR_EL2 sys_reg(3, 4, 2, 2, 0)
+
+#define SYS_HDBSSBR_EL2 sys_reg(3, 4, 2, 3, 2)
+#define SYS_HDBSSPROD_EL2 sys_reg(3, 4, 2, 3, 3)
+
#define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6)
#define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0)
#define SYS_ELR_EL2 sys_reg(3, 4, 4, 0, 1)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 762ee084b37c..c2aea1e7fd22 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -2876,6 +2876,34 @@ Sysreg GCSPR_EL2 3 4 2 5 1
Fields GCSPR_ELx
EndSysreg
+Sysreg HDBSSBR_EL2 3 4 2 3 2
+Res0 63:56
+Field 55:12 BADDR
+Res0 11:4
+Enum 3:0 SZ
+ 0b0001 8KB
+ 0b0010 16KB
+ 0b0011 32KB
+ 0b0100 64KB
+ 0b0101 128KB
+ 0b0110 256KB
+ 0b0111 512KB
+ 0b1000 1MB
+ 0b1001 2MB
+EndEnum
+EndSysreg
+
+Sysreg HDBSSPROD_EL2 3 4 2 3 3
+Res0 63:32
+Enum 31:26 FSC
+ 0b000000 OK
+ 0b010000 ExternalAbort
+ 0b101000 GPF
+EndEnum
+Res0 25:19
+Field 18:0 INDEX
+EndSysreg
+
Sysreg DACR32_EL2 3 4 3 0 0
Res0 63:32
Field 31:30 D15
diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h
index 150416682e2c..95fc6a4ee655 100644
--- a/tools/arch/arm64/include/asm/sysreg.h
+++ b/tools/arch/arm64/include/asm/sysreg.h
@@ -518,6 +518,10 @@
#define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2)
#define SYS_VNCR_EL2 sys_reg(3, 4, 2, 2, 0)
+
+#define SYS_HDBSSBR_EL2 sys_reg(3, 4, 2, 3, 2)
+#define SYS_HDBSSPROD_EL2 sys_reg(3, 4, 2, 3, 3)
+
#define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6)
#define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0)
#define SYS_ELR_EL2 sys_reg(3, 4, 4, 0, 1)
--
2.39.3
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v1 2/5] arm64/kvm: support set the DBM attr during memory abort
2025-03-11 4:03 [PATCH v1 0/5] Support the FEAT_HDBSS introduced in Armv9.5 Zhenyu Ye
2025-03-11 4:03 ` [PATCH v1 1/5] arm64/sysreg: add HDBSS related register information Zhenyu Ye
@ 2025-03-11 4:03 ` Zhenyu Ye
2025-03-11 9:47 ` Marc Zyngier
2025-03-11 4:03 ` [PATCH v1 3/5] arm64/kvm: using ioctl to enable/disable the HDBSS feature Zhenyu Ye
` (2 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: Zhenyu Ye @ 2025-03-11 4:03 UTC (permalink / raw)
To: maz, yuzenghui, will, oliver.upton, catalin.marinas, joey.gouly
Cc: linux-kernel, yezhenyu2, xiexiangyou, zhengchuan, wangzhou1,
linux-arm-kernel, kvm, kvmarm
From: eillon <yezhenyu2@huawei.com>
Since ARMv8.1 (FEAT_HAFDBS), page table entries have supported the DBM
attribute. Support setting this attribute during user_mem_abort().
Signed-off-by: eillon <yezhenyu2@huawei.com>
---
arch/arm64/include/asm/kvm_pgtable.h | 3 +++
arch/arm64/kvm/hyp/pgtable.c | 6 ++++++
2 files changed, 9 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 6b9d274052c7..35648d7f08f5 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -86,6 +86,8 @@ typedef u64 kvm_pte_t;
#define KVM_PTE_LEAF_ATTR_HI_S2_XN BIT(54)
+#define KVM_PTE_LEAF_ATTR_HI_S2_DBM BIT(51)
+
#define KVM_PTE_LEAF_ATTR_HI_S1_GP BIT(50)
#define KVM_PTE_LEAF_ATTR_S2_PERMS (KVM_PTE_LEAF_ATTR_LO_S2_S2AP_R | \
@@ -252,6 +254,7 @@ enum kvm_pgtable_prot {
KVM_PGTABLE_PROT_DEVICE = BIT(3),
KVM_PGTABLE_PROT_NORMAL_NC = BIT(4),
+ KVM_PGTABLE_PROT_DBM = BIT(5),
KVM_PGTABLE_PROT_SW0 = BIT(55),
KVM_PGTABLE_PROT_SW1 = BIT(56),
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index df5cc74a7dd0..3ea6bdbc02a0 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -700,6 +700,9 @@ static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot p
if (prot & KVM_PGTABLE_PROT_W)
attr |= KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W;
+ if (prot & KVM_PGTABLE_PROT_DBM)
+ attr |= KVM_PTE_LEAF_ATTR_HI_S2_DBM;
+
if (!kvm_lpa2_is_enabled())
attr |= FIELD_PREP(KVM_PTE_LEAF_ATTR_LO_S2_SH, sh);
@@ -1309,6 +1312,9 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
if (prot & KVM_PGTABLE_PROT_W)
set |= KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W;
+ if (prot & KVM_PGTABLE_PROT_DBM)
+ set |= KVM_PTE_LEAF_ATTR_HI_S2_DBM;
+
if (prot & KVM_PGTABLE_PROT_X)
clr |= KVM_PTE_LEAF_ATTR_HI_S2_XN;
--
2.39.3
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v1 3/5] arm64/kvm: using ioctl to enable/disable the HDBSS feature
2025-03-11 4:03 [PATCH v1 0/5] Support the FEAT_HDBSS introduced in Armv9.5 Zhenyu Ye
2025-03-11 4:03 ` [PATCH v1 1/5] arm64/sysreg: add HDBSS related register information Zhenyu Ye
2025-03-11 4:03 ` [PATCH v1 2/5] arm64/kvm: support set the DBM attr during memory abort Zhenyu Ye
@ 2025-03-11 4:03 ` Zhenyu Ye
2025-03-11 8:05 ` Oliver Upton
2025-03-11 9:59 ` Marc Zyngier
2025-03-11 4:03 ` [PATCH v1 4/5] arm64/kvm: support to handle the HDBSSF event Zhenyu Ye
2025-03-11 4:03 ` [PATCH v1 5/5] arm64/config: add config to control whether enable HDBSS feature Zhenyu Ye
4 siblings, 2 replies; 12+ messages in thread
From: Zhenyu Ye @ 2025-03-11 4:03 UTC (permalink / raw)
To: maz, yuzenghui, will, oliver.upton, catalin.marinas, joey.gouly
Cc: linux-kernel, yezhenyu2, xiexiangyou, zhengchuan, wangzhou1,
linux-arm-kernel, kvm, kvmarm
From: eillon <yezhenyu2@huawei.com>
On arm64, the buffer size used by the HDBSS feature is configurable.
We therefore cannot enable the HDBSS feature during KVM initialization;
instead, it is enabled when a live migration is triggered, at which
point the buffer size can be configured by the user.
Add the KVM_CAP_ARM_HW_DIRTY_STATE_TRACK ioctl to enable/disable
this feature. Userspace (such as QEMU) can invoke the ioctl to enable
HDBSS at the beginning of the migration, and disable the feature by
invoking the ioctl again at the end of the migration with the size set to 0.
Signed-off-by: eillon <yezhenyu2@huawei.com>
---
arch/arm64/include/asm/cpufeature.h | 12 +++++
arch/arm64/include/asm/kvm_host.h | 6 +++
arch/arm64/include/asm/kvm_mmu.h | 12 +++++
arch/arm64/include/asm/sysreg.h | 12 +++++
arch/arm64/kvm/arm.c | 70 +++++++++++++++++++++++++++++
arch/arm64/kvm/hyp/vhe/switch.c | 1 +
arch/arm64/kvm/mmu.c | 3 ++
arch/arm64/kvm/reset.c | 7 +++
include/linux/kvm_host.h | 1 +
include/uapi/linux/kvm.h | 1 +
tools/include/uapi/linux/kvm.h | 1 +
11 files changed, 126 insertions(+)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index e0e4478f5fb5..c76d51506562 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -743,6 +743,18 @@ static __always_inline bool system_supports_fpsimd(void)
return alternative_has_cap_likely(ARM64_HAS_FPSIMD);
}
+static inline bool system_supports_hdbss(void)
+{
+ u64 mmfr1;
+ u32 val;
+
+ mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+ val = cpuid_feature_extract_unsigned_field(mmfr1,
+ ID_AA64MMFR1_EL1_HAFDBS_SHIFT);
+
+ return val == ID_AA64MMFR1_EL1_HAFDBS_HDBSS;
+}
+
static inline bool system_uses_hw_pan(void)
{
return alternative_has_cap_unlikely(ARM64_HAS_PAN);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d919557af5e5..bd73ee92b12c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -787,6 +787,12 @@ struct kvm_vcpu_arch {
/* Per-vcpu CCSIDR override or NULL */
u32 *ccsidr;
+
+ /* HDBSS registers info */
+ struct {
+ u64 br_el2;
+ u64 prod_el2;
+ } hdbss;
};
/*
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index b98ac6aa631f..ed5b68c2085e 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -330,6 +330,18 @@ static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu,
asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
}
+static __always_inline void __load_hdbss(struct kvm_vcpu *vcpu)
+{
+ if (!vcpu->kvm->enable_hdbss)
+ return;
+
+ write_sysreg_s(vcpu->arch.hdbss.br_el2, SYS_HDBSSBR_EL2);
+ write_sysreg_s(vcpu->arch.hdbss.prod_el2, SYS_HDBSSPROD_EL2);
+
+ dsb(sy);
+ isb();
+}
+
static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
{
return container_of(mmu->arch, struct kvm, arch);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index b727772c06fb..3040eac74f8c 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -1105,6 +1105,18 @@
#define GCS_CAP(x) ((((unsigned long)x) & GCS_CAP_ADDR_MASK) | \
GCS_CAP_VALID_TOKEN)
+/*
+ * Definitions for the HDBSS feature
+ */
+#define HDBSS_MAX_SIZE HDBSSBR_EL2_SZ_2MB
+
+#define HDBSSBR_EL2(baddr, sz) (((baddr) & GENMASK(55, 12 + sz)) | \
+ ((sz) << HDBSSBR_EL2_SZ_SHIFT))
+#define HDBSSBR_BADDR(br) ((br) & GENMASK(55, (12 + HDBSSBR_SZ(br))))
+#define HDBSSBR_SZ(br) (((br) & HDBSSBR_EL2_SZ_MASK) >> HDBSSBR_EL2_SZ_SHIFT)
+
+#define HDBSSPROD_IDX(prod) (((prod) & HDBSSPROD_EL2_INDEX_MASK) >> HDBSSPROD_EL2_INDEX_SHIFT)
+
#define ARM64_FEATURE_FIELD_BITS 4
/* Defined for compatibility only, do not add new users. */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 0160b4924351..825cfef3b1c2 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -80,6 +80,70 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
}
+static int kvm_cap_arm_enable_hdbss(struct kvm *kvm,
+ struct kvm_enable_cap *cap)
+{
+ unsigned long i;
+ struct kvm_vcpu *vcpu;
+ struct page *hdbss_pg;
+ int size = cap->args[0];
+
+ if (!system_supports_hdbss()) {
+ kvm_err("This system does not support HDBSS!\n");
+ return -EINVAL;
+ }
+
+ if (size < 0 || size > HDBSS_MAX_SIZE) {
+ kvm_err("Invalid HDBSS buffer size: %d!\n", size);
+ return -EINVAL;
+ }
+
+ /* Enable the HDBSS feature if size > 0, otherwise disable it. */
+ if (size) {
+ kvm->enable_hdbss = true;
+ kvm->arch.mmu.vtcr |= VTCR_EL2_HD | VTCR_EL2_HDBSS;
+
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ hdbss_pg = alloc_pages(GFP_KERNEL, size);
+ if (!hdbss_pg) {
+ kvm_err("Alloc HDBSS buffer failed!\n");
+ return -EINVAL;
+ }
+
+ vcpu->arch.hdbss.br_el2 = HDBSSBR_EL2(page_to_phys(hdbss_pg), size);
+ vcpu->arch.hdbss.prod_el2 = 0;
+
+ /*
+ * We should kick vcpus out of guest mode here to
+ * load new vtcr value to vtcr_el2 register when
+ * re-enter guest mode.
+ */
+ kvm_vcpu_kick(vcpu);
+ }
+
+ kvm_info("Enable HDBSS success, HDBSS buffer size: %d\n", size);
+ } else if (kvm->enable_hdbss) {
+ kvm->arch.mmu.vtcr &= ~(VTCR_EL2_HD | VTCR_EL2_HDBSS);
+
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ /* Kick vcpus to flush hdbss buffer. */
+ kvm_vcpu_kick(vcpu);
+
+ hdbss_pg = phys_to_page(HDBSSBR_BADDR(vcpu->arch.hdbss.br_el2));
+ if (hdbss_pg)
+ __free_pages(hdbss_pg, HDBSSBR_SZ(vcpu->arch.hdbss.br_el2));
+
+ vcpu->arch.hdbss.br_el2 = 0;
+ vcpu->arch.hdbss.prod_el2 = 0;
+ }
+
+ kvm->enable_hdbss = false;
+ kvm_info("Disable HDBSS success\n");
+ }
+
+ return 0;
+}
+
int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
struct kvm_enable_cap *cap)
{
@@ -125,6 +189,9 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
}
mutex_unlock(&kvm->slots_lock);
break;
+ case KVM_CAP_ARM_HW_DIRTY_STATE_TRACK:
+ r = kvm_cap_arm_enable_hdbss(kvm, cap);
+ break;
default:
break;
}
@@ -393,6 +460,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES:
r = BIT(0);
break;
+ case KVM_CAP_ARM_HW_DIRTY_STATE_TRACK:
+ r = system_supports_hdbss();
+ break;
default:
r = 0;
}
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 647737d6e8d0..6b633a219e4d 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -256,6 +256,7 @@ void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu)
__vcpu_load_switch_sysregs(vcpu);
__vcpu_load_activate_traps(vcpu);
__load_stage2(vcpu->arch.hw_mmu, vcpu->arch.hw_mmu->arch);
+ __load_hdbss(vcpu);
}
void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1f55b0c7b11d..9c11e2292b1e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1703,6 +1703,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (writable)
prot |= KVM_PGTABLE_PROT_W;
+ if (kvm->enable_hdbss && logging_active)
+ prot |= KVM_PGTABLE_PROT_DBM;
+
if (exec_fault)
prot |= KVM_PGTABLE_PROT_X;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 803e11b0dc8f..4e518f9a3df0 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -153,12 +153,19 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu)
{
void *sve_state = vcpu->arch.sve_state;
+ struct page *hdbss_pg;
kvm_unshare_hyp(vcpu, vcpu + 1);
if (sve_state)
kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu));
kfree(sve_state);
kfree(vcpu->arch.ccsidr);
+
+ if (vcpu->arch.hdbss.br_el2) {
+ hdbss_pg = phys_to_page(HDBSSBR_BADDR(vcpu->arch.hdbss.br_el2));
+ if (hdbss_pg)
+ __free_pages(hdbss_pg, HDBSSBR_SZ(vcpu->arch.hdbss.br_el2));
+ }
}
static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f34f4cfaa513..aae37141c4a6 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -862,6 +862,7 @@ struct kvm {
struct xarray mem_attr_array;
#endif
char stats_id[KVM_STATS_NAME_SIZE];
+ bool enable_hdbss;
};
#define kvm_err(fmt, ...) \
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 45e6d8fca9b9..748891902426 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -929,6 +929,7 @@ struct kvm_enable_cap {
#define KVM_CAP_PRE_FAULT_MEMORY 236
#define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
#define KVM_CAP_X86_GUEST_MODE 238
+#define KVM_CAP_ARM_HW_DIRTY_STATE_TRACK 239
struct kvm_irq_routing_irqchip {
__u32 irqchip;
diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index 502ea63b5d2e..27d58b751e77 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -933,6 +933,7 @@ struct kvm_enable_cap {
#define KVM_CAP_PRE_FAULT_MEMORY 236
#define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
#define KVM_CAP_X86_GUEST_MODE 238
+#define KVM_CAP_ARM_HW_DIRTY_STATE_TRACK 239
struct kvm_irq_routing_irqchip {
__u32 irqchip;
--
2.39.3
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v1 4/5] arm64/kvm: support to handle the HDBSSF event
2025-03-11 4:03 [PATCH v1 0/5] Support the FEAT_HDBSS introduced in Armv9.5 Zhenyu Ye
` (2 preceding siblings ...)
2025-03-11 4:03 ` [PATCH v1 3/5] arm64/kvm: using ioctl to enable/disable the HDBSS feature Zhenyu Ye
@ 2025-03-11 4:03 ` Zhenyu Ye
2025-03-11 10:34 ` Marc Zyngier
2025-03-11 4:03 ` [PATCH v1 5/5] arm64/config: add config to control whether enable HDBSS feature Zhenyu Ye
4 siblings, 1 reply; 12+ messages in thread
From: Zhenyu Ye @ 2025-03-11 4:03 UTC (permalink / raw)
To: maz, yuzenghui, will, oliver.upton, catalin.marinas, joey.gouly
Cc: linux-kernel, yezhenyu2, xiexiangyou, zhengchuan, wangzhou1,
linux-arm-kernel, kvm, kvmarm
From: eillon <yezhenyu2@huawei.com>
Update the dirty bitmap based on the HDBSS buffer. Similar to the
implementation of the x86 PML feature, KVM flushes the buffer on
all VM exits, so we only need to kick running vCPUs to force a
VM exit.
Signed-off-by: eillon <yezhenyu2@huawei.com>
---
arch/arm64/kvm/arm.c | 10 ++++++++
arch/arm64/kvm/handle_exit.c | 47 ++++++++++++++++++++++++++++++++++++
arch/arm64/kvm/mmu.c | 7 ++++++
3 files changed, 64 insertions(+)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 825cfef3b1c2..fceceeead011 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1845,7 +1845,17 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
{
+ /*
+ * Flush all CPUs' dirty log buffers to the dirty_bitmap. Called
+ * before reporting dirty_bitmap to userspace. KVM flushes the buffers
+ * on all VM-Exits, thus we only need to kick running vCPUs to force a
+ * VM-Exit.
+ */
+ struct kvm_vcpu *vcpu;
+ unsigned long i;
+ kvm_for_each_vcpu(i, vcpu, kvm)
+ kvm_vcpu_kick(vcpu);
}
static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 512d152233ff..db9d7e1f72bf 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -330,6 +330,50 @@ static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
return arm_exit_handlers[esr_ec];
}
+#define HDBSS_ENTRY_VALID_SHIFT 0
+#define HDBSS_ENTRY_VALID_MASK (1UL << HDBSS_ENTRY_VALID_SHIFT)
+#define HDBSS_ENTRY_IPA_SHIFT 12
+#define HDBSS_ENTRY_IPA_MASK GENMASK_ULL(55, HDBSS_ENTRY_IPA_SHIFT)
+
+static void kvm_flush_hdbss_buffer(struct kvm_vcpu *vcpu)
+{
+ int idx, curr_idx;
+ u64 *hdbss_buf;
+
+ if (!vcpu->kvm->enable_hdbss)
+ return;
+
+ dsb(sy);
+ isb();
+ curr_idx = HDBSSPROD_IDX(read_sysreg_s(SYS_HDBSSPROD_EL2));
+
+ /* Do nothing if HDBSS buffer is empty or br_el2 is NULL */
+ if (curr_idx == 0 || vcpu->arch.hdbss.br_el2 == 0)
+ return;
+
+ hdbss_buf = page_address(phys_to_page(HDBSSBR_BADDR(vcpu->arch.hdbss.br_el2)));
+ if (!hdbss_buf) {
+ kvm_err("Enter flush hdbss buffer with buffer == NULL!");
+ return;
+ }
+
+ for (idx = 0; idx < curr_idx; idx++) {
+ u64 gpa;
+
+ gpa = hdbss_buf[idx];
+ if (!(gpa & HDBSS_ENTRY_VALID_MASK))
+ continue;
+
+ gpa = gpa & HDBSS_ENTRY_IPA_MASK;
+ kvm_vcpu_mark_page_dirty(vcpu, gpa >> PAGE_SHIFT);
+ }
+
+ /* reset HDBSS index */
+ write_sysreg_s(0, SYS_HDBSSPROD_EL2);
+ dsb(sy);
+ isb();
+}
+
/*
* We may be single-stepping an emulated instruction. If the emulation
* has been completed in the kernel, we can return to userspace with a
@@ -365,6 +409,9 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
{
struct kvm_run *run = vcpu->run;
+ if (vcpu->kvm->enable_hdbss)
+ kvm_flush_hdbss_buffer(vcpu);
+
if (ARM_SERROR_PENDING(exception_index)) {
/*
* The SError is handled by handle_exit_early(). If the guest
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 9c11e2292b1e..3e0781ae0ae1 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1790,6 +1790,13 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
ipa = fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
+ /*
+ * HDBSS buffer already flushed when enter handle_trap_exceptions().
+ * Nothing to do here.
+ */
+ if (ESR_ELx_ISS2(esr) & ESR_ELx_HDBSSF)
+ return 1;
+
if (esr_fsc_is_translation_fault(esr)) {
/* Beyond sanitised PARange (which is the IPA limit) */
if (fault_ipa >= BIT_ULL(get_kvm_ipa_limit())) {
--
2.39.3
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v1 5/5] arm64/config: add config to control whether enable HDBSS feature
2025-03-11 4:03 [PATCH v1 0/5] Support the FEAT_HDBSS introduced in Armv9.5 Zhenyu Ye
` (3 preceding siblings ...)
2025-03-11 4:03 ` [PATCH v1 4/5] arm64/kvm: support to handle the HDBSSF event Zhenyu Ye
@ 2025-03-11 4:03 ` Zhenyu Ye
2025-03-11 9:53 ` Marc Zyngier
4 siblings, 1 reply; 12+ messages in thread
From: Zhenyu Ye @ 2025-03-11 4:03 UTC (permalink / raw)
To: maz, yuzenghui, will, oliver.upton, catalin.marinas, joey.gouly
Cc: linux-kernel, yezhenyu2, xiexiangyou, zhengchuan, wangzhou1,
linux-arm-kernel, kvm, kvmarm
From: eillon <yezhenyu2@huawei.com>
The HDBSS feature introduces new system registers
(HDBSSBR_EL2 and HDBSSPROD_EL2), which require armv9.5-a
assembler support. Add the ARM64_HDBSS config option to control
whether the HDBSS feature is enabled.
Signed-off-by: eillon <yezhenyu2@huawei.com>
---
arch/arm64/Kconfig | 19 +++++++++++++++++++
arch/arm64/Makefile | 4 +++-
arch/arm64/include/asm/cpufeature.h | 3 +++
3 files changed, 25 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 940343beb3d4..3458261eb14b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2237,6 +2237,25 @@ config ARM64_GCS
endmenu # "v9.4 architectural features"
+menu "ARMv9.5 architectural features"
+
+config ARM64_HDBSS
+ bool "Enable support for Hardware Dirty state tracking Structure (HDBSS)"
+ default y
+ depends on AS_HAS_ARMV9_5
+ help
+ Hardware Dirty state tracking Structure (HDBSS) enhances tracking
+ of translation table descriptors' dirty state to reduce the cost of
+ surveying for dirtied granules.
+
+ The feature introduces new system registers (HDBSSBR_EL2 and
+ HDBSSPROD_EL2), which depend on AS_HAS_ARMV9_5.
+
+config AS_HAS_ARMV9_5
+ def_bool $(cc-option,-Wa$(comma)-march=armv9.5-a)
+
+endmenu # "ARMv9.5 architectural features"
+
config ARM64_SVE
bool "ARM Scalable Vector Extension support"
default y
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 2b25d671365f..f22507fb09b9 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -103,7 +103,9 @@ endif
# freely generate instructions which are not supported by earlier architecture
# versions, which would prevent a single kernel image from working on earlier
# hardware.
-ifeq ($(CONFIG_AS_HAS_ARMV8_5), y)
+ifeq ($(CONFIG_AS_HAS_ARMV9_5), y)
+ asm-arch := armv9.5-a
+else ifeq ($(CONFIG_AS_HAS_ARMV8_5), y)
asm-arch := armv8.5-a
else ifeq ($(CONFIG_AS_HAS_ARMV8_4), y)
asm-arch := armv8.4-a
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index c76d51506562..32e432827934 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -748,6 +748,9 @@ static inline bool system_supports_hdbss(void)
u64 mmfr1;
u32 val;
+ if (!IS_ENABLED(CONFIG_ARM64_HDBSS))
+ return false;
+
mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
val = cpuid_feature_extract_unsigned_field(mmfr1,
ID_AA64MMFR1_EL1_HAFDBS_SHIFT);
--
2.39.3
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH v1 3/5] arm64/kvm: using ioctl to enable/disable the HDBSS feature
2025-03-11 4:03 ` [PATCH v1 3/5] arm64/kvm: using ioctl to enable/disable the HDBSS feature Zhenyu Ye
@ 2025-03-11 8:05 ` Oliver Upton
2025-03-11 9:59 ` Marc Zyngier
1 sibling, 0 replies; 12+ messages in thread
From: Oliver Upton @ 2025-03-11 8:05 UTC (permalink / raw)
To: Zhenyu Ye
Cc: maz, yuzenghui, will, catalin.marinas, joey.gouly, linux-kernel,
xiexiangyou, zhengchuan, wangzhou1, linux-arm-kernel, kvm, kvmarm
On Tue, Mar 11, 2025 at 12:03:19PM +0800, Zhenyu Ye wrote:
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index d919557af5e5..bd73ee92b12c 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -787,6 +787,12 @@ struct kvm_vcpu_arch {
>
> /* Per-vcpu CCSIDR override or NULL */
> u32 *ccsidr;
> +
> + /* HDBSS registers info */
> + struct {
> + u64 br_el2;
> + u64 prod_el2;
> + } hdbss;
I'm not a fan of storing the raw system register values in the vCPU
struct. I'd rather we kept track of the buffer base address, size, and
index as three separate fields.
> };
>
> /*
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index b98ac6aa631f..ed5b68c2085e 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -330,6 +330,18 @@ static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu,
> asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
> }
>
> +static __always_inline void __load_hdbss(struct kvm_vcpu *vcpu)
> +{
> + if (!vcpu->kvm->enable_hdbss)
> + return;
> +
> + write_sysreg_s(vcpu->arch.hdbss.br_el2, SYS_HDBSSBR_EL2);
> + write_sysreg_s(vcpu->arch.hdbss.prod_el2, SYS_HDBSSPROD_EL2);
> +
> + dsb(sy);
> + isb();
What are you synchronizing against here? dsb(sy) is a *huge* hammer. A
dsb() in this context would only make sense if there were pending stores
to the dirty tracking structure, which ought not be the case at load.
Also keep in mind the EL1&0 regime is out of context...
> +}
> +
> static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
> {
> return container_of(mmu->arch, struct kvm, arch);
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index b727772c06fb..3040eac74f8c 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -1105,6 +1105,18 @@
> #define GCS_CAP(x) ((((unsigned long)x) & GCS_CAP_ADDR_MASK) | \
> GCS_CAP_VALID_TOKEN)
>
> +/*
> + * Definitions for the HDBSS feature
> + */
> +#define HDBSS_MAX_SIZE HDBSSBR_EL2_SZ_2MB
> +
> +#define HDBSSBR_EL2(baddr, sz) (((baddr) & GENMASK(55, 12 + sz)) | \
> + ((sz) << HDBSSBR_EL2_SZ_SHIFT))
> +#define HDBSSBR_BADDR(br) ((br) & GENMASK(55, (12 + HDBSSBR_SZ(br))))
> +#define HDBSSBR_SZ(br) (((br) & HDBSSBR_EL2_SZ_MASK) >> HDBSSBR_EL2_SZ_SHIFT)
> +
> +#define HDBSSPROD_IDX(prod) (((prod) & HDBSSPROD_EL2_INDEX_MASK) >> HDBSSPROD_EL2_INDEX_SHIFT)
> +
> #define ARM64_FEATURE_FIELD_BITS 4
>
> /* Defined for compatibility only, do not add new users. */
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 0160b4924351..825cfef3b1c2 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -80,6 +80,70 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
> return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
> }
>
> +static int kvm_cap_arm_enable_hdbss(struct kvm *kvm,
> + struct kvm_enable_cap *cap)
> +{
> + unsigned long i;
> + struct kvm_vcpu *vcpu;
> + struct page *hdbss_pg;
> + int size = cap->args[0];
> +
> + if (!system_supports_hdbss()) {
> + kvm_err("This system does not support HDBSS!\n");
> + return -EINVAL;
> + }
> +
> + if (size < 0 || size > HDBSS_MAX_SIZE) {
> + kvm_err("Invalid HDBSS buffer size: %d!\n", size);
> + return -EINVAL;
> + }
> +
> + /* Enable the HDBSS feature if size > 0, otherwise disable it. */
> + if (size) {
> + kvm->enable_hdbss = true;
> + kvm->arch.mmu.vtcr |= VTCR_EL2_HD | VTCR_EL2_HDBSS;
Nothing prevents a vCPU from using a VTCR value with HDBSS enabled
before a tracking structure has been allocated.
> + kvm_for_each_vcpu(i, vcpu, kvm) {
> + hdbss_pg = alloc_pages(GFP_KERNEL, size);
GFP_KERNEL_ACCOUNT
> + if (!hdbss_pg) {
> + kvm_err("Alloc HDBSS buffer failed!\n");
> + return -EINVAL;
> + }
enable_hdbss and vtcr aren't cleaned up in this case, and EINVAL is an
inappropriate return for a failed memory allocation.
> + vcpu->arch.hdbss.br_el2 = HDBSSBR_EL2(page_to_phys(hdbss_pg), size);
> + vcpu->arch.hdbss.prod_el2 = 0;
> +
> + /*
> + * We should kick vcpus out of guest mode here to
> + * load new vtcr value to vtcr_el2 register when
> + * re-enter guest mode.
> + */
> + kvm_vcpu_kick(vcpu);
VTCR_EL2 is configured on vcpu_load() for VHE. How is this expected to
work?
> + }
> +
> + kvm_info("Enable HDBSS success, HDBSS buffer size: %d\n", size);
Drop the debugging printks.
> + } else if (kvm->enable_hdbss) {
> + kvm->arch.mmu.vtcr &= ~(VTCR_EL2_HD | VTCR_EL2_HDBSS);
> +
> + kvm_for_each_vcpu(i, vcpu, kvm) {
> + /* Kick vcpus to flush hdbss buffer. */
> + kvm_vcpu_kick(vcpu);
> +
> + hdbss_pg = phys_to_page(HDBSSBR_BADDR(vcpu->arch.hdbss.br_el2));
> + if (hdbss_pg)
> + __free_pages(hdbss_pg, HDBSSBR_SZ(vcpu->arch.hdbss.br_el2));
> +
> + vcpu->arch.hdbss.br_el2 = 0;
> + vcpu->arch.hdbss.prod_el2 = 0;
> + }
> +
> + kvm->enable_hdbss = false;
> + kvm_info("Disable HDBSS success\n");
> + }
> +
> + return 0;
> +}
> +
> int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
> struct kvm_enable_cap *cap)
> {
> @@ -125,6 +189,9 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
> }
> mutex_unlock(&kvm->slots_lock);
> break;
> + case KVM_CAP_ARM_HW_DIRTY_STATE_TRACK:
> + r = kvm_cap_arm_enable_hdbss(kvm, cap);
> + break;
> default:
> break;
> }
> @@ -393,6 +460,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> case KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES:
> r = BIT(0);
> break;
> + case KVM_CAP_ARM_HW_DIRTY_STATE_TRACK:
> + r = system_supports_hdbss();
> + break;
I'm not sure this is creating the right abstraction for userspace. At
least for the dirty bitmap, this is exposing an implementation detail to
the VMM.
You could, perhaps, associate the dirty tracking structure with a
similar concept (e.g. vCPU dirty rings) available to userspace.
> default:
> r = 0;
> }
> diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> index 647737d6e8d0..6b633a219e4d 100644
> --- a/arch/arm64/kvm/hyp/vhe/switch.c
> +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> @@ -256,6 +256,7 @@ void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu)
> __vcpu_load_switch_sysregs(vcpu);
> __vcpu_load_activate_traps(vcpu);
> __load_stage2(vcpu->arch.hw_mmu, vcpu->arch.hw_mmu->arch);
> + __load_hdbss(vcpu);
> }
>
> void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 1f55b0c7b11d..9c11e2292b1e 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1703,6 +1703,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> if (writable)
> prot |= KVM_PGTABLE_PROT_W;
>
> + if (kvm->enable_hdbss && logging_active)
> + prot |= KVM_PGTABLE_PROT_DBM;
> +
We should set DBM if the mapping is PTE sized. That way you can
potentially avoid faults for pages that precede the dirty tracking
enable.
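Concretely, the prot computation could gate DBM on both writability and a PTE-sized mapping. A standalone sketch with illustrative flag values (the real KVM_PGTABLE_PROT_* encodings differ, and the exact condition is an assumption about what the check should look like):

```c
#include <stdbool.h>

#define PAGE_SIZE	4096UL

/* Illustrative prot flags, not the actual KVM_PGTABLE_PROT_* values. */
#define PROT_R		(1UL << 0)
#define PROT_W		(1UL << 1)
#define PROT_DBM	(1UL << 5)

static unsigned long stage2_prot(bool writable, bool logging_active,
				 bool hdbss, unsigned long map_size)
{
	unsigned long prot = PROT_R;

	if (writable)
		prot |= PROT_W;

	/*
	 * Only tag PTE-sized writable mappings with DBM: hardware dirty
	 * tracking records dirtied granules at page granularity, and DBM
	 * on a read-only mapping would make it effectively writable.
	 */
	if (hdbss && logging_active && writable && map_size == PAGE_SIZE)
		prot |= PROT_DBM;

	return prot;
}
```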
Thanks,
Oliver
* Re: [PATCH v1 1/5] arm64/sysreg: add HDBSS related register information
2025-03-11 4:03 ` [PATCH v1 1/5] arm64/sysreg: add HDBSS related register information Zhenyu Ye
@ 2025-03-11 9:41 ` Marc Zyngier
0 siblings, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2025-03-11 9:41 UTC (permalink / raw)
To: Zhenyu Ye
Cc: yuzenghui, will, oliver.upton, catalin.marinas, joey.gouly,
linux-kernel, xiexiangyou, zhengchuan, wangzhou1,
linux-arm-kernel, kvm, kvmarm
On Tue, 11 Mar 2025 04:03:17 +0000,
Zhenyu Ye <yezhenyu2@huawei.com> wrote:
>
> From: eillon <yezhenyu2@huawei.com>
>
> The ARM architecture added the HDBSS feature and descriptions of
> related registers (HDBSSBR/HDBSSPROD) in the DDI0601(ID121123) version,
> add them to Linux.
>
> Signed-off-by: eillon <yezhenyu2@huawei.com>
> ---
> arch/arm64/include/asm/esr.h | 2 ++
> arch/arm64/include/asm/kvm_arm.h | 1 +
> arch/arm64/include/asm/sysreg.h | 4 ++++
> arch/arm64/tools/sysreg | 28 +++++++++++++++++++++++++++
> tools/arch/arm64/include/asm/sysreg.h | 4 ++++
> 5 files changed, 39 insertions(+)
>
> diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
> index d1b1a33f9a8b..a33befe0999a 100644
> --- a/arch/arm64/include/asm/esr.h
> +++ b/arch/arm64/include/asm/esr.h
> @@ -147,6 +147,8 @@
> #define ESR_ELx_CM (UL(1) << ESR_ELx_CM_SHIFT)
>
> /* ISS2 field definitions for Data Aborts */
> +#define ESR_ELx_HDBSSF_SHIFT (11)
> +#define ESR_ELx_HDBSSF (UL(1) << ESR_ELx_HDBSSF_SHIFT)
> #define ESR_ELx_TnD_SHIFT (10)
> #define ESR_ELx_TnD (UL(1) << ESR_ELx_TnD_SHIFT)
> #define ESR_ELx_TagAccess_SHIFT (9)
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index c2417a424b98..80793ef57f8b 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -122,6 +122,7 @@
> TCR_EL2_ORGN0_MASK | TCR_EL2_IRGN0_MASK)
>
> /* VTCR_EL2 Registers bits */
> +#define VTCR_EL2_HDBSS (1UL << 45)
> #define VTCR_EL2_DS TCR_EL2_DS
> #define VTCR_EL2_RES1 (1U << 31)
> #define VTCR_EL2_HD (1 << 22)
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 05ea5223d2d5..b727772c06fb 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -522,6 +522,10 @@
> #define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2)
>
> #define SYS_VNCR_EL2 sys_reg(3, 4, 2, 2, 0)
> +
> +#define SYS_HDBSSBR_EL2 sys_reg(3, 4, 2, 3, 2)
> +#define SYS_HDBSSPROD_EL2 sys_reg(3, 4, 2, 3, 3)
> +
Why do you add this here? You have added these two new register to the
sysreg file, which should be enough.
> #define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6)
> #define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0)
> #define SYS_ELR_EL2 sys_reg(3, 4, 4, 0, 1)
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index 762ee084b37c..c2aea1e7fd22 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -2876,6 +2876,34 @@ Sysreg GCSPR_EL2 3 4 2 5 1
> Fields GCSPR_ELx
> EndSysreg
>
> +Sysreg HDBSSBR_EL2 3 4 2 3 2
> +Res0 63:56
> +Field 55:12 BADDR
> +Res0 11:4
> +Enum 3:0 SZ
> + 0b0001 8KB
> + 0b0010 16KB
> + 0b0011 32KB
> + 0b0100 64KB
> + 0b0101 128KB
> + 0b0110 256KB
> + 0b0111 512KB
> + 0b1000 1MB
> + 0b1001 2MB
> +EndEnum
> +EndSysreg
> +
> +Sysreg HDBSSPROD_EL2 3 4 2 3 3
> +Res0 63:32
> +Enum 31:26 FSC
> + 0b000000 OK
> + 0b010000 ExternalAbort
> + 0b101000 GPF
> +EndEnum
> +Res0 25:19
> +Field 18:0 INDEX
> +EndSysreg
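As an aside, the HDBSSBR_EL2.SZ encoding above is a simple power-of-two: each step doubles the buffer, starting at 8KB for 0b0001. A sketch of the decode (the formula is inferred from the enum values quoted, not from separate documentation):

```c
#include <stdint.h>

/*
 * HDBSSBR_EL2.SZ encodes the buffer size: 1 -> 8KB, 2 -> 16KB, ...
 * 9 -> 2MB, i.e. 4KB << SZ bytes (matching the enum quoted above).
 */
static uint64_t hdbss_buf_bytes(unsigned int sz)
{
	return UINT64_C(4096) << sz;
}
```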
> +
> Sysreg DACR32_EL2 3 4 3 0 0
> Res0 63:32
> Field 31:30 D15
> diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h
> index 150416682e2c..95fc6a4ee655 100644
> --- a/tools/arch/arm64/include/asm/sysreg.h
> +++ b/tools/arch/arm64/include/asm/sysreg.h
> @@ -518,6 +518,10 @@
> #define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2)
>
> #define SYS_VNCR_EL2 sys_reg(3, 4, 2, 2, 0)
> +
> +#define SYS_HDBSSBR_EL2 sys_reg(3, 4, 2, 3, 2)
> +#define SYS_HDBSSPROD_EL2 sys_reg(3, 4, 2, 3, 3)
> +
Same thing here.
> #define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6)
> #define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0)
> #define SYS_ELR_EL2 sys_reg(3, 4, 4, 0, 1)
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v1 2/5] arm64/kvm: support set the DBM attr during memory abort
2025-03-11 4:03 ` [PATCH v1 2/5] arm64/kvm: support set the DBM attr during memory abort Zhenyu Ye
@ 2025-03-11 9:47 ` Marc Zyngier
0 siblings, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2025-03-11 9:47 UTC (permalink / raw)
To: Zhenyu Ye
Cc: yuzenghui, will, oliver.upton, catalin.marinas, joey.gouly,
linux-kernel, xiexiangyou, zhengchuan, wangzhou1,
linux-arm-kernel, kvm, kvmarm
Please follow the current convention for the subject of your patch (if
you do a git log --oneline on arch/arm64/kvm, all commits should have
the same style).
On Tue, 11 Mar 2025 04:03:18 +0000,
Zhenyu Ye <yezhenyu2@huawei.com> wrote:
>
> From: eillon <yezhenyu2@huawei.com>
>
> Since the ARMv8, the page entry has supported the DBM attribute.
> Support set the attr during user_mem_abort().
Not quite. ARMv8.1 added DBM, and that is still, to this day, an
optional functionality, including in ARMv9.5.
>
> Signed-off-by: eillon <yezhenyu2@huawei.com>
> ---
> arch/arm64/include/asm/kvm_pgtable.h | 3 +++
> arch/arm64/kvm/hyp/pgtable.c | 6 ++++++
> 2 files changed, 9 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index 6b9d274052c7..35648d7f08f5 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -86,6 +86,8 @@ typedef u64 kvm_pte_t;
>
> #define KVM_PTE_LEAF_ATTR_HI_S2_XN BIT(54)
>
> +#define KVM_PTE_LEAF_ATTR_HI_S2_DBM BIT(51)
> +
> #define KVM_PTE_LEAF_ATTR_HI_S1_GP BIT(50)
>
> #define KVM_PTE_LEAF_ATTR_S2_PERMS (KVM_PTE_LEAF_ATTR_LO_S2_S2AP_R | \
> @@ -252,6 +254,7 @@ enum kvm_pgtable_prot {
>
> KVM_PGTABLE_PROT_DEVICE = BIT(3),
> KVM_PGTABLE_PROT_NORMAL_NC = BIT(4),
> + KVM_PGTABLE_PROT_DBM = BIT(5),
>
> KVM_PGTABLE_PROT_SW0 = BIT(55),
> KVM_PGTABLE_PROT_SW1 = BIT(56),
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index df5cc74a7dd0..3ea6bdbc02a0 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -700,6 +700,9 @@ static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot p
> if (prot & KVM_PGTABLE_PROT_W)
> attr |= KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W;
>
> + if (prot & KVM_PGTABLE_PROT_DBM)
> + attr |= KVM_PTE_LEAF_ATTR_HI_S2_DBM;
> +
> if (!kvm_lpa2_is_enabled())
> attr |= FIELD_PREP(KVM_PTE_LEAF_ATTR_LO_S2_SH, sh);
>
> @@ -1309,6 +1312,9 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
> if (prot & KVM_PGTABLE_PROT_W)
> set |= KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W;
>
> + if (prot & KVM_PGTABLE_PROT_DBM)
> + set |= KVM_PTE_LEAF_ATTR_HI_S2_DBM;
> +
Why isn't that exclusive of PROT_W?
> if (prot & KVM_PGTABLE_PROT_X)
> clr |= KVM_PTE_LEAF_ATTR_HI_S2_XN;
>
What is driving this KVM_PGTABLE_PROT_DBM bit?
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v1 5/5] arm64/config: add config to control whether enable HDBSS feature
2025-03-11 4:03 ` [PATCH v1 5/5] arm64/config: add config to control whether enable HDBSS feature Zhenyu Ye
@ 2025-03-11 9:53 ` Marc Zyngier
0 siblings, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2025-03-11 9:53 UTC (permalink / raw)
To: Zhenyu Ye
Cc: yuzenghui, will, oliver.upton, catalin.marinas, joey.gouly,
linux-kernel, xiexiangyou, zhengchuan, wangzhou1,
linux-arm-kernel, kvm, kvmarm
On Tue, 11 Mar 2025 04:03:21 +0000,
Zhenyu Ye <yezhenyu2@huawei.com> wrote:
>
> From: eillon <yezhenyu2@huawei.com>
>
> The HDBSS feature introduces new assembly registers
> (HDBSSBR_EL2 and HDBSSPROD_EL2), which depends on the armv9.5-a
> compilation support. So add ARM64_HDBSS config to control whether
> enable the HDBSS feature.
>
> Signed-off-by: eillon <yezhenyu2@huawei.com>
> ---
> arch/arm64/Kconfig | 19 +++++++++++++++++++
> arch/arm64/Makefile | 4 +++-
> arch/arm64/include/asm/cpufeature.h | 3 +++
> 3 files changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 940343beb3d4..3458261eb14b 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -2237,6 +2237,25 @@ config ARM64_GCS
>
> endmenu # "v9.4 architectural features"
>
> +menu "ARMv9.5 architectural features"
> +
> +config ARM64_HDBSS
> + bool "Enable support for Hardware Dirty state tracking Structure (HDBSS)"
> + default y
> + depends on AS_HAS_ARMV9_5
> + help
> + Hardware Dirty state tracking Structure(HDBSS) enhances tracking
> + translation table descriptors’ dirty state to reduce the cost of
> + surveying for dirtied granules.
> +
> + The feature introduces new assembly registers (HDBSSBR_EL2 and
> + HDBSSPROD_EL2), which depends on AS_HAS_ARMV9_5.
Why? You seem to be using the generated accessors everywhere, and I
can't see a need for this compiler dependency.
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v1 3/5] arm64/kvm: using ioctl to enable/disable the HDBSS feature
2025-03-11 4:03 ` [PATCH v1 3/5] arm64/kvm: using ioctl to enable/disable the HDBSS feature Zhenyu Ye
2025-03-11 8:05 ` Oliver Upton
@ 2025-03-11 9:59 ` Marc Zyngier
1 sibling, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2025-03-11 9:59 UTC (permalink / raw)
To: Zhenyu Ye
Cc: yuzenghui, will, oliver.upton, catalin.marinas, joey.gouly,
linux-kernel, xiexiangyou, zhengchuan, wangzhou1,
linux-arm-kernel, kvm, kvmarm
+1 on everything Oliver said. Additionally:
On Tue, 11 Mar 2025 04:03:19 +0000,
Zhenyu Ye <yezhenyu2@huawei.com> wrote:
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 1f55b0c7b11d..9c11e2292b1e 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1703,6 +1703,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> if (writable)
> prot |= KVM_PGTABLE_PROT_W;
>
> + if (kvm->enable_hdbss && logging_active)
> + prot |= KVM_PGTABLE_PROT_DBM;
> +
This looks totally wrong. If the page is defined as R/O
(KVM_PGTABLE_PROT_W not being set), setting the DBM flag makes it
writable anyway (the W bit is the Dirty bit). Hello, memory
corruption?
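The point follows from the DBM semantics: when bit 51 is set and hardware dirty-state management is enabled, the stage-2 AP write bit doubles as the dirty bit and hardware may set it on a write instead of faulting, so a "read-only" descriptor with DBM is effectively writable. A minimal sketch (bit positions taken from the patch above and the usual stage-2 descriptor layout; treat them as assumptions):

```c
#include <stdbool.h>
#include <stdint.h>

#define S2_DBM		(UINT64_C(1) << 51)	/* KVM_PTE_LEAF_ATTR_HI_S2_DBM */
#define S2AP_W		(UINT64_C(1) << 7)	/* stage-2 S2AP write permission */

/*
 * With DBM set, hardware may promote S2AP_W on a write access rather
 * than raising a permission fault, so for permission purposes a DBM
 * descriptor must be treated as writable even when S2AP_W is clear.
 */
static bool stage2_effectively_writable(uint64_t pte)
{
	return (pte & S2AP_W) || (pte & S2_DBM);
}
```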
Overall, this patch is a total mess, and needs to be split to have the
runtime logic on one side, and the userspace API on the other. Don't
mix the two.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v1 4/5] arm64/kvm: support to handle the HDBSSF event
2025-03-11 4:03 ` [PATCH v1 4/5] arm64/kvm: support to handle the HDBSSF event Zhenyu Ye
@ 2025-03-11 10:34 ` Marc Zyngier
0 siblings, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2025-03-11 10:34 UTC (permalink / raw)
To: Zhenyu Ye
Cc: yuzenghui, will, oliver.upton, catalin.marinas, joey.gouly,
linux-kernel, xiexiangyou, zhengchuan, wangzhou1,
linux-arm-kernel, kvm, kvmarm
On Tue, 11 Mar 2025 04:03:20 +0000,
Zhenyu Ye <yezhenyu2@huawei.com> wrote:
>
> From: eillon <yezhenyu2@huawei.com>
>
> Updating the dirty bitmap based on the HDBSS buffer. Similar
> to the implementation of the x86 pml feature, KVM flushes the
> buffers on all VM-Exits, thus we only need to kick running
> vCPUs to force a VM-Exit.
>
> Signed-off-by: eillon <yezhenyu2@huawei.com>
> ---
> arch/arm64/kvm/arm.c | 10 ++++++++
> arch/arm64/kvm/handle_exit.c | 47 ++++++++++++++++++++++++++++++++++++
> arch/arm64/kvm/mmu.c | 7 ++++++
> 3 files changed, 64 insertions(+)
>
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 825cfef3b1c2..fceceeead011 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1845,7 +1845,17 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>
> void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
> {
> + /*
> + * Flush all CPUs' dirty log buffers to the dirty_bitmap. Called
> + * before reporting dirty_bitmap to userspace. KVM flushes the buffers
> + * on all VM-Exits, thus we only need to kick running vCPUs to force a
> + * VM-Exit.
> + */
> + struct kvm_vcpu *vcpu;
> + unsigned long i;
>
> + kvm_for_each_vcpu(i, vcpu, kvm)
> + kvm_vcpu_kick(vcpu);
We don't need this outside of HDBSS. Why impose it on everyone else?
I'm also perplexed by the requirement to flush on all exits. Why can't
this be deferred to vcpu_put() only? Specially given that I don't see
any use of this stuff outside of a VHE system.
> }
>
> static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 512d152233ff..db9d7e1f72bf 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -330,6 +330,50 @@ static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
> return arm_exit_handlers[esr_ec];
> }
>
> +#define HDBSS_ENTRY_VALID_SHIFT 0
> +#define HDBSS_ENTRY_VALID_MASK (1UL << HDBSS_ENTRY_VALID_SHIFT)
> +#define HDBSS_ENTRY_IPA_SHIFT 12
> +#define HDBSS_ENTRY_IPA_MASK GENMASK_ULL(55, HDBSS_ENTRY_IPA_SHIFT)
This has no place here. Move this stuff somewhere else. And rewrite in
a more concise way:
#define HDBSS_ENTRY_VALID BIT(0)
#define HDBSS_ENTRY_IPA GENMASK(55, 12)
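With the single-macro form, the decode in the flush loop reduces to plain bit tests and a mask. A self-contained sketch using userspace stand-ins for the kernel's BIT()/GENMASK_ULL() helpers (the helper name `hdbss_entry_ipa` is illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel's BIT()/GENMASK_ULL() helpers. */
#define BIT_ULL(n)		(UINT64_C(1) << (n))
#define GENMASK_ULL(h, l)	(((~UINT64_C(0)) >> (63 - (h))) & ~(BIT_ULL(l) - 1))

#define HDBSS_ENTRY_VALID	BIT_ULL(0)
#define HDBSS_ENTRY_IPA		GENMASK_ULL(55, 12)

/* Extract the dirtied IPA from one HDBSS entry; false if the entry is invalid. */
static bool hdbss_entry_ipa(uint64_t entry, uint64_t *ipa)
{
	if (!(entry & HDBSS_ENTRY_VALID))
		return false;

	*ipa = entry & HDBSS_ENTRY_IPA;
	return true;
}
```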
> +
> +static void kvm_flush_hdbss_buffer(struct kvm_vcpu *vcpu)
> +{
> + int idx, curr_idx;
> + u64 *hdbss_buf;
> +
> + if (!vcpu->kvm->enable_hdbss)
This control is odd. You track the logging per-VM, but dump the
buffers per-vcpu.
> + return;
> +
> + dsb(sy);
> + isb();
> + curr_idx = HDBSSPROD_IDX(read_sysreg_s(SYS_HDBSSPROD_EL2));
> +
> + /* Do nothing if HDBSS buffer is empty or br_el2 is NULL */
> + if (curr_idx == 0 || vcpu->arch.hdbss.br_el2 == 0)
> + return;
> +
> + hdbss_buf = page_address(phys_to_page(HDBSSBR_BADDR(vcpu->arch.hdbss.br_el2)));
Do you see why it is silly to keep the raw value of the register? It'd
be far better to just keep the VA (and maybe the PA as well), and
build the register value as required.
> + if (!hdbss_buf) {
> + kvm_err("Enter flush hdbss buffer with buffer == NULL!");
> + return;
> + }
> +
> + for (idx = 0; idx < curr_idx; idx++) {
> + u64 gpa;
> +
> + gpa = hdbss_buf[idx];
> + if (!(gpa & HDBSS_ENTRY_VALID_MASK))
> + continue;
> +
> + gpa = gpa & HDBSS_ENTRY_IPA_MASK;
> + kvm_vcpu_mark_page_dirty(vcpu, gpa >> PAGE_SHIFT);
Isn't there a requirement to hold a lock of some sort here?
> + }
> +
> + /* reset HDBSS index */
> + write_sysreg_s(0, SYS_HDBSSPROD_EL2);
> + dsb(sy);
Where is the DSB(SY) requirement coming from if the logging is
per-vcpu and that each vcpu gets its own buffer?
> + isb();
And you want to do that on each exit? How will userspace intercept
this? Frankly, this should be moved to put-time, and only be
guaranteed to be visible to userspace when the vcpus are outside of
the kernel.
> +}
> +
> /*
> * We may be single-stepping an emulated instruction. If the emulation
> * has been completed in the kernel, we can return to userspace with a
> @@ -365,6 +409,9 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
> {
> struct kvm_run *run = vcpu->run;
>
> + if (vcpu->kvm->enable_hdbss)
> + kvm_flush_hdbss_buffer(vcpu);
> +
> if (ARM_SERROR_PENDING(exception_index)) {
> /*
> * The SError is handled by handle_exit_early(). If the guest
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 9c11e2292b1e..3e0781ae0ae1 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1790,6 +1790,13 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
> ipa = fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
> is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
>
> + /*
> + * HDBSS buffer already flushed when enter handle_trap_exceptions().
> + * Nothing to do here.
> + */
> + if (ESR_ELx_ISS2(esr) & ESR_ELx_HDBSSF)
> + return 1;
> +
Can this happen on an instruction abort? Also, you seem to be ignoring
any type of *faults*. Nothing can fail at all?
M.
--
Without deviation from the norm, progress is not possible.