* [PATCH v4 0/7] KVM: SVM: Add Page Modification Logging (PML) support
@ 2025-10-13 6:25 Nikunj A Dadhania
2025-10-13 6:25 ` [PATCH v4 1/7] KVM: x86: Carve out PML flush routine Nikunj A Dadhania
` (6 more replies)
0 siblings, 7 replies; 25+ messages in thread
From: Nikunj A Dadhania @ 2025-10-13 6:25 UTC (permalink / raw)
To: seanjc, pbonzini
Cc: kvm, thomas.lendacky, santosh.shukla, bp, joao.m.martins, nikunj,
kai.huang
This series implements Page Modification Logging (PML) for guests, bringing
hardware-assisted dirty logging support. PML is designed to track
guest-modified memory pages, enabling the hypervisor to identify which pages
in a guest's memory have been modified since the last checkpoint or during
live migration.
The PML feature uses two new VMCB fields (PML_ADDR and PML_INDEX) and
generates a VMEXIT when the 4KB log buffer becomes full.
The feature is enabled by default when hardware support is detected and
can be disabled via the 'pml' module parameter.
Changelog:
v4:
* Added a couple of patches to move enable_pml and the nested CPU dirty
logging logic to common code (Kai Huang)
* Rebased to latest kvm/next
v3:
* Update comments with nested details (Kai Huang)
* Added nested.update_vmcb01_cpu_dirty_logging to update L1 PML (Kai Huang)
* Added patch to use BIT_ULL() instead of BIT() for 64-bit nested_ctl
v2: https://lore.kernel.org/kvm/20250915085938.639049-1-nikunj@amd.com/
* Rebased on latest kvm/next
* Added patch to move pml_pg field from struct vcpu_vmx to struct kvm_vcpu_arch
to share the PML page. (Kai Huang)
* Dropped the SNP safe allocation optimization patch, will submit it separately.
* Updated commit message to explicitly mention that AMD PML follows VMX
behavior (Kai Huang)
* Updated SNP erratum comment to include PML buffer alongside VMCB, VMSA, and
AVIC pages. (Kai Huang)
RFC: https://lore.kernel.org/kvm/20250825152009.3512-1-nikunj@amd.com/
Kai Huang (1):
KVM: x86: Move nested CPU dirty logging logic to common code
Nikunj A Dadhania (6):
KVM: x86: Carve out PML flush routine
KVM: x86: Move PML page to common vcpu arch structure
KVM: x86: Move enable_pml variable to common x86 code
x86/cpufeatures: Add Page modification logging
KVM: SVM: Use BIT_ULL for 64-bit nested_ctl bit definitions
KVM: SVM: Add Page modification logging support
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/kvm_host.h | 6 ++-
arch/x86/include/asm/svm.h | 12 +++--
arch/x86/include/uapi/asm/svm.h | 2 +
arch/x86/kernel/cpu/scattered.c | 1 +
arch/x86/kvm/kvm_cache_regs.h | 7 +++
arch/x86/kvm/svm/nested.c | 9 +++-
arch/x86/kvm/svm/sev.c | 2 +-
arch/x86/kvm/svm/svm.c | 84 +++++++++++++++++++++++++++++-
arch/x86/kvm/svm/svm.h | 2 +
arch/x86/kvm/vmx/main.c | 4 +-
arch/x86/kvm/vmx/nested.c | 5 --
arch/x86/kvm/vmx/vmx.c | 72 ++++++-------------------
arch/x86/kvm/vmx/vmx.h | 10 +---
arch/x86/kvm/vmx/x86_ops.h | 2 +-
arch/x86/kvm/x86.c | 56 +++++++++++++++++++-
arch/x86/kvm/x86.h | 8 +++
17 files changed, 201 insertions(+), 82 deletions(-)
base-commit: 6b36119b94d0b2bb8cea9d512017efafd461d6ac
--
2.48.1
* [PATCH v4 1/7] KVM: x86: Carve out PML flush routine
From: Nikunj A Dadhania @ 2025-10-13 6:25 UTC (permalink / raw)
To: seanjc, pbonzini
Cc: kvm, thomas.lendacky, santosh.shukla, bp, joao.m.martins, nikunj,
kai.huang
Move the PML (Page Modification Logging) buffer flushing logic from
VMX-specific code to common x86 KVM code to enable reuse by SVM and avoid
code duplication.
The AMD SVM PML implementation shares the same behavior as VMX PML:
1) The PML buffer is a 4K page with 512 entries
2) Hardware records dirty GPAs in reverse order (from index 511 to 0)
3) Hardware clears bits 11:0 when recording GPAs
The PML constants (PML_LOG_NR_ENTRIES and PML_HEAD_INDEX) are moved from
vmx.h to x86.h to make them available to both VMX and SVM.
No functional change intended for VMX, except toning down the WARN_ON() to
WARN_ON_ONCE() for the page alignment check. If hardware exhibits this
behavior once, it's likely to occur repeatedly, so use WARN_ON_ONCE() to
avoid log flooding while still capturing the unexpected condition.
The refactoring prepares for SVM to leverage the same PML flushing
implementation.
Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
---
arch/x86/kvm/vmx/vmx.c | 26 ++------------------------
arch/x86/kvm/vmx/vmx.h | 5 -----
arch/x86/kvm/x86.c | 31 +++++++++++++++++++++++++++++++
arch/x86/kvm/x86.h | 8 ++++++++
4 files changed, 41 insertions(+), 29 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 546272a5d34d..db1379cffbcb 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6206,37 +6206,15 @@ static void vmx_destroy_pml_buffer(struct vcpu_vmx *vmx)
static void vmx_flush_pml_buffer(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
- u16 pml_idx, pml_tail_index;
- u64 *pml_buf;
- int i;
+ u16 pml_idx;
pml_idx = vmcs_read16(GUEST_PML_INDEX);
/* Do nothing if PML buffer is empty */
if (pml_idx == PML_HEAD_INDEX)
return;
- /*
- * PML index always points to the next available PML buffer entity
- * unless PML log has just overflowed.
- */
- pml_tail_index = (pml_idx >= PML_LOG_NR_ENTRIES) ? 0 : pml_idx + 1;
- /*
- * PML log is written backwards: the CPU first writes the entry 511
- * then the entry 510, and so on.
- *
- * Read the entries in the same order they were written, to ensure that
- * the dirty ring is filled in the same order the CPU wrote them.
- */
- pml_buf = page_address(vmx->pml_pg);
-
- for (i = PML_HEAD_INDEX; i >= pml_tail_index; i--) {
- u64 gpa;
-
- gpa = pml_buf[i];
- WARN_ON(gpa & (PAGE_SIZE - 1));
- kvm_vcpu_mark_page_dirty(vcpu, gpa >> PAGE_SHIFT);
- }
+ kvm_flush_pml_buffer(vcpu, vmx->pml_pg, pml_idx);
/* reset PML index */
vmcs_write16(GUEST_PML_INDEX, PML_HEAD_INDEX);
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index ea93121029f9..fe9d2b10f4be 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -272,11 +272,6 @@ struct vcpu_vmx {
unsigned int ple_window;
bool ple_window_dirty;
- /* Support for PML */
-#define PML_LOG_NR_ENTRIES 512
- /* PML is written backwards: this is the first entry written by the CPU */
-#define PML_HEAD_INDEX (PML_LOG_NR_ENTRIES-1)
-
struct page *pml_pg;
/* apic deadline value in host tsc */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4b8138bd4857..732d8a4b7dff 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6737,6 +6737,37 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
kvm_vcpu_kick(vcpu);
}
+void kvm_flush_pml_buffer(struct kvm_vcpu *vcpu, struct page *pml_page, u16 pml_idx)
+{
+ u16 pml_tail_index;
+ u64 *pml_buf;
+ int i;
+
+ /*
+ * PML index always points to the next available PML buffer entity
+ * unless PML log has just overflowed.
+ */
+ pml_tail_index = (pml_idx >= PML_LOG_NR_ENTRIES) ? 0 : pml_idx + 1;
+
+ /*
+ * PML log is written backwards: the CPU first writes the entry 511
+ * then the entry 510, and so on.
+ *
+ * Read the entries in the same order they were written, to ensure that
+ * the dirty ring is filled in the same order the CPU wrote them.
+ */
+ pml_buf = page_address(pml_page);
+
+ for (i = PML_HEAD_INDEX; i >= pml_tail_index; i--) {
+ u64 gpa;
+
+ gpa = pml_buf[i];
+ WARN_ON_ONCE(gpa & (PAGE_SIZE - 1));
+ kvm_vcpu_mark_page_dirty(vcpu, gpa >> PAGE_SHIFT);
+ }
+}
+EXPORT_SYMBOL_GPL(kvm_flush_pml_buffer);
+
int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
struct kvm_enable_cap *cap)
{
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index f3dc77f006f9..199d39492df8 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -737,4 +737,12 @@ static inline bool kvm_is_valid_u_s_cet(struct kvm_vcpu *vcpu, u64 data)
return true;
}
+
+/* Support for PML */
+#define PML_LOG_NR_ENTRIES 512
+/* PML is written backwards: this is the first entry written by the CPU */
+#define PML_HEAD_INDEX (PML_LOG_NR_ENTRIES-1)
+
+void kvm_flush_pml_buffer(struct kvm_vcpu *vcpu, struct page *pml_pg, u16 pml_idx);
+
#endif
--
2.48.1
* [PATCH v4 2/7] KVM: x86: Move PML page to common vcpu arch structure
From: Nikunj A Dadhania @ 2025-10-13 6:25 UTC (permalink / raw)
To: seanjc, pbonzini
Cc: kvm, thomas.lendacky, santosh.shukla, bp, joao.m.martins, nikunj,
kai.huang
Move the PML page pointer from VMX-specific vcpu_vmx structure to the
common kvm_vcpu_arch structure to enable sharing between VMX and SVM
implementations. Only the page pointer is moved to x86 common code while
keeping allocation logic vendor-specific, since AMD requires
snp_safe_alloc_page() for PML buffer allocation.
Update all VMX references accordingly, and simplify the
kvm_flush_pml_buffer() interface by removing the page parameter since it
can now access the page directly from the vcpu structure.
No functional change, restructuring to prepare for SVM PML support.
Suggested-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/vmx/vmx.c | 24 ++++++++++++------------
arch/x86/kvm/vmx/vmx.h | 2 --
arch/x86/kvm/x86.c | 4 ++--
arch/x86/kvm/x86.h | 2 +-
5 files changed, 17 insertions(+), 17 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 48598d017d6f..7e5dceb4530e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -861,6 +861,8 @@ struct kvm_vcpu_arch {
*/
struct kvm_mmu_memory_cache mmu_external_spt_cache;
+ struct page *pml_page;
+
/*
* QEMU userspace and the guest each have their own FPU state.
* In vcpu_run, we switch between the user and guest FPU contexts.
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index db1379cffbcb..aa1ba8db6392 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4677,7 +4677,8 @@ int vmx_vcpu_precreate(struct kvm *kvm)
static void init_vmcs(struct vcpu_vmx *vmx)
{
- struct kvm *kvm = vmx->vcpu.kvm;
+ struct kvm_vcpu *vcpu = &vmx->vcpu;
+ struct kvm *kvm = vcpu->kvm;
struct kvm_vmx *kvm_vmx = to_kvm_vmx(kvm);
if (nested)
@@ -4768,7 +4769,7 @@ static void init_vmcs(struct vcpu_vmx *vmx)
vmcs_write64(XSS_EXIT_BITMAP, VMX_XSS_EXIT_BITMAP);
if (enable_pml) {
- vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
+ vmcs_write64(PML_ADDRESS, page_to_phys(vcpu->arch.pml_page));
vmcs_write16(GUEST_PML_INDEX, PML_HEAD_INDEX);
}
@@ -6195,17 +6196,16 @@ void vmx_get_entry_info(struct kvm_vcpu *vcpu, u32 *intr_info, u32 *error_code)
*error_code = 0;
}
-static void vmx_destroy_pml_buffer(struct vcpu_vmx *vmx)
+static void vmx_destroy_pml_buffer(struct kvm_vcpu *vcpu)
{
- if (vmx->pml_pg) {
- __free_page(vmx->pml_pg);
- vmx->pml_pg = NULL;
+ if (vcpu->arch.pml_page) {
+ __free_page(vcpu->arch.pml_page);
+ vcpu->arch.pml_page = NULL;
}
}
static void vmx_flush_pml_buffer(struct kvm_vcpu *vcpu)
{
- struct vcpu_vmx *vmx = to_vmx(vcpu);
u16 pml_idx;
pml_idx = vmcs_read16(GUEST_PML_INDEX);
@@ -6214,7 +6214,7 @@ static void vmx_flush_pml_buffer(struct kvm_vcpu *vcpu)
if (pml_idx == PML_HEAD_INDEX)
return;
- kvm_flush_pml_buffer(vcpu, vmx->pml_pg, pml_idx);
+ kvm_flush_pml_buffer(vcpu, pml_idx);
/* reset PML index */
vmcs_write16(GUEST_PML_INDEX, PML_HEAD_INDEX);
@@ -7502,7 +7502,7 @@ void vmx_vcpu_free(struct kvm_vcpu *vcpu)
struct vcpu_vmx *vmx = to_vmx(vcpu);
if (enable_pml)
- vmx_destroy_pml_buffer(vmx);
+ vmx_destroy_pml_buffer(vcpu);
free_vpid(vmx->vpid);
nested_vmx_free_vcpu(vcpu);
free_loaded_vmcs(vmx->loaded_vmcs);
@@ -7531,8 +7531,8 @@ int vmx_vcpu_create(struct kvm_vcpu *vcpu)
* for the guest), etc.
*/
if (enable_pml) {
- vmx->pml_pg = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
- if (!vmx->pml_pg)
+ vcpu->arch.pml_page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
+ if (!vcpu->arch.pml_page)
goto free_vpid;
}
@@ -7603,7 +7603,7 @@ int vmx_vcpu_create(struct kvm_vcpu *vcpu)
free_vmcs:
free_loaded_vmcs(vmx->loaded_vmcs);
free_pml:
- vmx_destroy_pml_buffer(vmx);
+ vmx_destroy_pml_buffer(vcpu);
free_vpid:
free_vpid(vmx->vpid);
return err;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index fe9d2b10f4be..d2dd63194ee2 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -272,8 +272,6 @@ struct vcpu_vmx {
unsigned int ple_window;
bool ple_window_dirty;
- struct page *pml_pg;
-
/* apic deadline value in host tsc */
u64 hv_deadline_tsc;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 732d8a4b7dff..be8483d20fbc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6737,7 +6737,7 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
kvm_vcpu_kick(vcpu);
}
-void kvm_flush_pml_buffer(struct kvm_vcpu *vcpu, struct page *pml_page, u16 pml_idx)
+void kvm_flush_pml_buffer(struct kvm_vcpu *vcpu, u16 pml_idx)
{
u16 pml_tail_index;
u64 *pml_buf;
@@ -6756,7 +6756,7 @@ void kvm_flush_pml_buffer(struct kvm_vcpu *vcpu, struct page *pml_page, u16 pml_
* Read the entries in the same order they were written, to ensure that
* the dirty ring is filled in the same order the CPU wrote them.
*/
- pml_buf = page_address(pml_page);
+ pml_buf = page_address(vcpu->arch.pml_page);
for (i = PML_HEAD_INDEX; i >= pml_tail_index; i--) {
u64 gpa;
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 199d39492df8..6bf6645c4fe4 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -743,6 +743,6 @@ static inline bool kvm_is_valid_u_s_cet(struct kvm_vcpu *vcpu, u64 data)
/* PML is written backwards: this is the first entry written by the CPU */
#define PML_HEAD_INDEX (PML_LOG_NR_ENTRIES-1)
-void kvm_flush_pml_buffer(struct kvm_vcpu *vcpu, struct page *pml_pg, u16 pml_idx);
+void kvm_flush_pml_buffer(struct kvm_vcpu *vcpu, u16 pml_idx);
#endif
--
2.48.1
* [PATCH v4 3/7] KVM: x86: Move enable_pml variable to common x86 code
From: Nikunj A Dadhania @ 2025-10-13 6:25 UTC (permalink / raw)
To: seanjc, pbonzini
Cc: kvm, thomas.lendacky, santosh.shukla, bp, joao.m.martins, nikunj,
kai.huang
Move the enable_pml module parameter from VMX-specific code to common x86
KVM code. This allows both VMX and SVM implementations to access the same
PML enable/disable control.
No functional change, just code reorganization to support shared PML
infrastructure.
Suggested-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/vmx/vmx.c | 1 -
arch/x86/kvm/x86.c | 3 +++
3 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7e5dceb4530e..73b16cecc06d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1991,6 +1991,7 @@ extern bool __read_mostly allow_smaller_maxphyaddr;
extern bool __read_mostly enable_apicv;
extern bool __read_mostly enable_ipiv;
extern bool __read_mostly enable_device_posted_irqs;
+extern bool __read_mostly enable_pml;
extern struct kvm_x86_ops kvm_x86_ops;
#define kvm_x86_call(func) static_call(kvm_x86_##func)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index aa1ba8db6392..81216deb3959 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -127,7 +127,6 @@ module_param(enable_device_posted_irqs, bool, 0444);
static bool __read_mostly nested = 1;
module_param(nested, bool, 0444);
-bool __read_mostly enable_pml = 1;
module_param_named(pml, enable_pml, bool, 0444);
static bool __read_mostly error_on_inconsistent_vmcs_config = true;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index be8483d20fbc..2b23d7721444 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -241,6 +241,9 @@ EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_ipiv);
bool __read_mostly enable_device_posted_irqs = true;
EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_device_posted_irqs);
+bool __read_mostly enable_pml = true;
+EXPORT_SYMBOL_GPL(enable_pml);
+
const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
KVM_GENERIC_VM_STATS(),
STATS_DESC_COUNTER(VM, mmu_shadow_zapped),
--
2.48.1
* [PATCH v4 4/7] KVM: x86: Move nested CPU dirty logging logic to common code
From: Nikunj A Dadhania @ 2025-10-13 6:25 UTC (permalink / raw)
To: seanjc, pbonzini
Cc: kvm, thomas.lendacky, santosh.shukla, bp, joao.m.martins, nikunj,
kai.huang
From: Kai Huang <kai.huang@intel.com>
Move nested PML dirty logging update logic from VMX-specific code to common
x86 infrastructure. Both VMX and SVM share identical logic: defer CPU dirty
logging updates when running in L2, then process pending updates when
exiting to L1.
No functional change.
Signed-off-by: Kai Huang <kai.huang@intel.com>
Co-developed-by: Nikunj A Dadhania <nikunj@amd.com>
Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
---
arch/x86/include/asm/kvm_host.h | 3 ++-
arch/x86/kvm/kvm_cache_regs.h | 7 +++++++
arch/x86/kvm/vmx/main.c | 4 ++--
arch/x86/kvm/vmx/nested.c | 5 -----
arch/x86/kvm/vmx/vmx.c | 23 ++++-------------------
arch/x86/kvm/vmx/vmx.h | 3 +--
arch/x86/kvm/vmx/x86_ops.h | 2 +-
arch/x86/kvm/x86.c | 22 +++++++++++++++++++++-
8 files changed, 38 insertions(+), 31 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 73b16cecc06d..ca5def4f3585 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -862,6 +862,7 @@ struct kvm_vcpu_arch {
struct kvm_mmu_memory_cache mmu_external_spt_cache;
struct page *pml_page;
+ bool update_cpu_dirty_logging_pending;
/*
* QEMU userspace and the guest each have their own FPU state.
@@ -1884,7 +1885,7 @@ struct kvm_x86_ops {
struct x86_exception *exception);
void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu);
- void (*update_cpu_dirty_logging)(struct kvm_vcpu *vcpu);
+ void (*update_cpu_dirty_logging)(struct kvm_vcpu *vcpu, bool enable);
const struct kvm_x86_nested_ops *nested_ops;
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 8ddb01191d6f..0c4a832a9dab 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -238,6 +238,13 @@ static inline void leave_guest_mode(struct kvm_vcpu *vcpu)
kvm_make_request(KVM_REQ_LOAD_EOI_EXITMAP, vcpu);
}
+ /* Also see kvm_vcpu_update_cpu_dirty_logging() */
+ if (vcpu->arch.update_cpu_dirty_logging_pending) {
+ vcpu->arch.update_cpu_dirty_logging_pending = false;
+ kvm_x86_call(update_cpu_dirty_logging)(vcpu,
+ atomic_read(&vcpu->kvm->nr_memslots_dirty_logging));
+ }
+
vcpu->stat.guest_mode = 0;
}
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 0eb2773b2ae2..6fb97f6ce48e 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -103,7 +103,7 @@ static void vt_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
vmx_vcpu_load(vcpu, cpu);
}
-static void vt_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
+static void vt_update_cpu_dirty_logging(struct kvm_vcpu *vcpu, bool enable)
{
/*
* Basic TDX does not support feature PML. KVM does not enable PML in
@@ -112,7 +112,7 @@ static void vt_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
if (WARN_ON_ONCE(is_td_vcpu(vcpu)))
return;
- vmx_update_cpu_dirty_logging(vcpu);
+ vmx_update_cpu_dirty_logging(vcpu, enable);
}
static void vt_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 76271962cb70..0093fc389eae 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -5202,11 +5202,6 @@ void __nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
vmx_set_virtual_apic_mode(vcpu);
}
- if (vmx->nested.update_vmcs01_cpu_dirty_logging) {
- vmx->nested.update_vmcs01_cpu_dirty_logging = false;
- vmx_update_cpu_dirty_logging(vcpu);
- }
-
nested_put_vmcs12_pages(vcpu);
if (vmx->nested.reload_vmcs01_apic_access_page) {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 81216deb3959..ede5aaf24278 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8194,27 +8194,12 @@ void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu)
}
#endif
-void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
+void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu, bool enable)
{
- struct vcpu_vmx *vmx = to_vmx(vcpu);
-
- if (WARN_ON_ONCE(!enable_pml))
- return;
-
- if (is_guest_mode(vcpu)) {
- vmx->nested.update_vmcs01_cpu_dirty_logging = true;
- return;
- }
-
- /*
- * Note, nr_memslots_dirty_logging can be changed concurrent with this
- * code, but in that case another update request will be made and so
- * the guest will never run with a stale PML value.
- */
- if (atomic_read(&vcpu->kvm->nr_memslots_dirty_logging))
- secondary_exec_controls_setbit(vmx, SECONDARY_EXEC_ENABLE_PML);
+ if (enable)
+ secondary_exec_controls_setbit(to_vmx(vcpu), SECONDARY_EXEC_ENABLE_PML);
else
- secondary_exec_controls_clearbit(vmx, SECONDARY_EXEC_ENABLE_PML);
+ secondary_exec_controls_clearbit(to_vmx(vcpu), SECONDARY_EXEC_ENABLE_PML);
}
void vmx_setup_mce(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index d2dd63194ee2..22bf8860add4 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -133,7 +133,6 @@ struct nested_vmx {
bool change_vmcs01_virtual_apic_mode;
bool reload_vmcs01_apic_access_page;
- bool update_vmcs01_cpu_dirty_logging;
bool update_vmcs01_apicv_status;
bool update_vmcs01_hwapic_isr;
@@ -401,7 +400,7 @@ u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu);
gva_t vmx_get_untagged_addr(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags);
-void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu);
+void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu, bool enable);
u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated);
bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data, bool host_initiated);
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 9697368d65b3..1ae01fa592cd 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -113,7 +113,7 @@ u64 vmx_get_l2_tsc_offset(struct kvm_vcpu *vcpu);
u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu);
void vmx_write_tsc_offset(struct kvm_vcpu *vcpu);
void vmx_write_tsc_multiplier(struct kvm_vcpu *vcpu);
-void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu);
+void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu, bool enable);
#ifdef CONFIG_X86_64
int vmx_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
bool *expired);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2b23d7721444..42479fcda688 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -149,6 +149,7 @@ struct kvm_x86_ops kvm_x86_ops __read_mostly;
#include <asm/kvm-x86-ops.h>
EXPORT_STATIC_CALL_GPL(kvm_x86_get_cs_db_l_bits);
EXPORT_STATIC_CALL_GPL(kvm_x86_cache_reg);
+EXPORT_STATIC_CALL_GPL(kvm_x86_update_cpu_dirty_logging);
static bool __read_mostly ignore_msrs = 0;
module_param(ignore_msrs, bool, 0644);
@@ -11055,6 +11056,25 @@ static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
kvm_x86_call(set_apic_access_page_addr)(vcpu);
}
+static void kvm_vcpu_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
+{
+ if (WARN_ON_ONCE(!enable_pml))
+ return;
+
+ if (is_guest_mode(vcpu)) {
+ vcpu->arch.update_cpu_dirty_logging_pending = true;
+ return;
+ }
+
+ /*
+ * Note, nr_memslots_dirty_logging can be changed concurrently with this
+ * code, but in that case another update request will be made and so the
+ * guest will never run with a stale PML value.
+ */
+ kvm_x86_call(update_cpu_dirty_logging)(vcpu,
+ atomic_read(&vcpu->kvm->nr_memslots_dirty_logging));
+}
+
/*
* Called within kvm->srcu read side.
* Returns 1 to let vcpu_run() continue the guest execution loop without
@@ -11221,7 +11241,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
kvm_x86_call(recalc_intercepts)(vcpu);
if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
- kvm_x86_call(update_cpu_dirty_logging)(vcpu);
+ kvm_vcpu_update_cpu_dirty_logging(vcpu);
if (kvm_check_request(KVM_REQ_UPDATE_PROTECTED_GUEST_STATE, vcpu)) {
kvm_vcpu_reset(vcpu, true);
--
2.48.1
* [PATCH v4 5/7] x86/cpufeatures: Add Page modification logging
From: Nikunj A Dadhania @ 2025-10-13 6:25 UTC (permalink / raw)
To: seanjc, pbonzini
Cc: kvm, thomas.lendacky, santosh.shukla, bp, joao.m.martins, nikunj,
kai.huang
Page Modification Logging (PML) is a hardware feature designed to track
guest-modified memory pages. PML enables the hypervisor to identify which
pages in a guest's memory have been changed since the last checkpoint or
during live migration.
The PML feature is advertised via CPUID leaf 0x8000000A, ECX bit 4.
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kernel/cpu/scattered.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index f1a9f40622cd..66db158caa13 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -228,6 +228,7 @@
#define X86_FEATURE_PVUNLOCK ( 8*32+20) /* PV unlock function */
#define X86_FEATURE_VCPUPREEMPT ( 8*32+21) /* PV vcpu_is_preempted function */
#define X86_FEATURE_TDX_GUEST ( 8*32+22) /* "tdx_guest" Intel Trust Domain Extensions Guest */
+#define X86_FEATURE_PML ( 8*32+23) /* AMD Page Modification logging */
/* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */
#define X86_FEATURE_FSGSBASE ( 9*32+ 0) /* "fsgsbase" RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/
diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
index cf4ae822bcc0..1706b2f1ca4a 100644
--- a/arch/x86/kernel/cpu/scattered.c
+++ b/arch/x86/kernel/cpu/scattered.c
@@ -49,6 +49,7 @@ static const struct cpuid_bit cpuid_bits[] = {
{ X86_FEATURE_PROC_FEEDBACK, CPUID_EDX, 11, 0x80000007, 0 },
{ X86_FEATURE_AMD_FAST_CPPC, CPUID_EDX, 15, 0x80000007, 0 },
{ X86_FEATURE_MBA, CPUID_EBX, 6, 0x80000008, 0 },
+ { X86_FEATURE_PML, CPUID_ECX, 4, 0x8000000A, 0 },
{ X86_FEATURE_COHERENCY_SFW_NO, CPUID_EBX, 31, 0x8000001f, 0 },
{ X86_FEATURE_SMBA, CPUID_EBX, 2, 0x80000020, 0 },
{ X86_FEATURE_BMEC, CPUID_EBX, 3, 0x80000020, 0 },
--
2.48.1
* [PATCH v4 6/7] KVM: SVM: Use BIT_ULL for 64-bit nested_ctl bit definitions
From: Nikunj A Dadhania @ 2025-10-13 6:25 UTC (permalink / raw)
To: seanjc, pbonzini
Cc: kvm, thomas.lendacky, santosh.shukla, bp, joao.m.martins, nikunj,
kai.huang
Replace BIT() with BIT_ULL() for SVM nested control bit definitions
since nested_ctl is a 64-bit field in the VMCB control area structure.
No functional change intended.
Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
---
arch/x86/include/asm/svm.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 17f6c3fedeee..d2f1a495691c 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -236,9 +236,9 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define SVM_IOIO_SIZE_MASK (7 << SVM_IOIO_SIZE_SHIFT)
#define SVM_IOIO_ASIZE_MASK (7 << SVM_IOIO_ASIZE_SHIFT)
-#define SVM_NESTED_CTL_NP_ENABLE BIT(0)
-#define SVM_NESTED_CTL_SEV_ENABLE BIT(1)
-#define SVM_NESTED_CTL_SEV_ES_ENABLE BIT(2)
+#define SVM_NESTED_CTL_NP_ENABLE BIT_ULL(0)
+#define SVM_NESTED_CTL_SEV_ENABLE BIT_ULL(1)
+#define SVM_NESTED_CTL_SEV_ES_ENABLE BIT_ULL(2)
#define SVM_TSC_RATIO_RSVD 0xffffff0000000000ULL
--
2.48.1
* [PATCH v4 7/7] KVM: SVM: Add Page modification logging support
From: Nikunj A Dadhania @ 2025-10-13 6:25 UTC (permalink / raw)
To: seanjc, pbonzini
Cc: kvm, thomas.lendacky, santosh.shukla, bp, joao.m.martins, nikunj,
kai.huang
Currently, dirty logging relies on write protecting guest memory and
marking dirty GFNs during subsequent write faults. This method works but
incurs overhead due to additional write faults for each dirty GFN.
Implement support for the Page Modification Logging (PML) feature, a
hardware-assisted method for efficient dirty logging. PML automatically
logs dirty GPA[51:12] to a 4K buffer when the CPU sets NPT D-bits. Two new
VMCB fields are utilized: PML_ADDR and PML_INDEX. The PML_INDEX is
initialized to 511 (8 bytes per GPA entry), and the CPU decreases the
PML_INDEX after logging each GPA. When the PML buffer is full, a
VMEXIT(PML_FULL) with exit code 0x407 is generated.
Disable PML for nested guests.
PML is enabled by default when supported and can be disabled via the 'pml'
module parameter.
Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
---
arch/x86/include/asm/svm.h | 6 ++-
arch/x86/include/uapi/asm/svm.h | 2 +
arch/x86/kvm/svm/nested.c | 9 +++-
arch/x86/kvm/svm/sev.c | 2 +-
arch/x86/kvm/svm/svm.c | 84 ++++++++++++++++++++++++++++++++-
arch/x86/kvm/svm/svm.h | 2 +
6 files changed, 100 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index d2f1a495691c..caf6cb09f983 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -165,7 +165,10 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
u8 reserved_9[22];
u64 allowed_sev_features; /* Offset 0x138 */
u64 guest_sev_features; /* Offset 0x140 */
- u8 reserved_10[664];
+ u8 reserved_10[128];
+ u64 pml_addr; /* Offset 0x1c8 */
+ u16 pml_index; /* Offset 0x1d0 */
+ u8 reserved_11[526];
/*
* Offset 0x3e0, 32 bytes reserved
* for use by hypervisor/software.
@@ -239,6 +242,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define SVM_NESTED_CTL_NP_ENABLE BIT_ULL(0)
#define SVM_NESTED_CTL_SEV_ENABLE BIT_ULL(1)
#define SVM_NESTED_CTL_SEV_ES_ENABLE BIT_ULL(2)
+#define SVM_NESTED_CTL_PML_ENABLE BIT_ULL(11)
#define SVM_TSC_RATIO_RSVD 0xffffff0000000000ULL
diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index 9c640a521a67..f329dca167de 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -101,6 +101,7 @@
#define SVM_EXIT_AVIC_INCOMPLETE_IPI 0x401
#define SVM_EXIT_AVIC_UNACCELERATED_ACCESS 0x402
#define SVM_EXIT_VMGEXIT 0x403
+#define SVM_EXIT_PML_FULL 0x407
/* SEV-ES software-defined VMGEXIT events */
#define SVM_VMGEXIT_MMIO_READ 0x80000001
@@ -232,6 +233,7 @@
{ SVM_EXIT_AVIC_INCOMPLETE_IPI, "avic_incomplete_ipi" }, \
{ SVM_EXIT_AVIC_UNACCELERATED_ACCESS, "avic_unaccelerated_access" }, \
{ SVM_EXIT_VMGEXIT, "vmgexit" }, \
+ { SVM_EXIT_PML_FULL, "pml_full" }, \
{ SVM_VMGEXIT_MMIO_READ, "vmgexit_mmio_read" }, \
{ SVM_VMGEXIT_MMIO_WRITE, "vmgexit_mmio_write" }, \
{ SVM_VMGEXIT_NMI_COMPLETE, "vmgexit_nmi_complete" }, \
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index a6443feab252..1f6cc5a6da63 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -748,11 +748,18 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
V_NMI_BLOCKING_MASK);
}
- /* Copied from vmcb01. msrpm_base can be overwritten later. */
+ /* Copied from vmcb01. msrpm_base/nested_ctl can be overwritten later. */
vmcb02->control.nested_ctl = vmcb01->control.nested_ctl;
vmcb02->control.iopm_base_pa = vmcb01->control.iopm_base_pa;
vmcb02->control.msrpm_base_pa = vmcb01->control.msrpm_base_pa;
+ /* Disable PML for nested guest as the A/D update is emulated by MMU */
+ if (enable_pml) {
+ vmcb02->control.nested_ctl &= ~SVM_NESTED_CTL_PML_ENABLE;
+ vmcb02->control.pml_addr = 0;
+ vmcb02->control.pml_index = -1;
+ }
+
/*
* Stash vmcb02's counter if the guest hasn't moved past the guilty
* instruction; otherwise, reset the counter to '0'.
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 0835c664fbfd..080a9a72545e 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -4774,7 +4774,7 @@ struct page *snp_safe_alloc_page_node(int node, gfp_t gfp)
* Allocate an SNP-safe page to workaround the SNP erratum where
* the CPU will incorrectly signal an RMP violation #PF if a
* hugepage (2MB or 1GB) collides with the RMP entry of a
- * 2MB-aligned VMCB, VMSA, or AVIC backing page.
+ * 2MB-aligned VMCB, VMSA, PML or AVIC backing page.
*
* Allocate one extra page, choose a page which is not
* 2MB-aligned, and free the other.
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 153c12dbf3eb..fc7147024123 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -170,6 +170,8 @@ module_param(intercept_smi, bool, 0444);
bool vnmi = true;
module_param(vnmi, bool, 0444);
+module_param_named(pml, enable_pml, bool, 0444);
+
static bool svm_gp_erratum_intercept = true;
static u8 rsm_ins_bytes[] = "\x0f\xaa";
@@ -1162,6 +1164,16 @@ static void init_vmcb(struct kvm_vcpu *vcpu, bool init_event)
if (vcpu->kvm->arch.bus_lock_detection_enabled)
svm_set_intercept(svm, INTERCEPT_BUSLOCK);
+ if (enable_pml) {
+ /*
+ * Populate the page address and index here, PML is enabled
+ * when dirty logging is enabled on the memslot through
+ * svm_update_cpu_dirty_logging()
+ */
+ control->pml_addr = (u64)__sme_set(page_to_phys(vcpu->arch.pml_page));
+ control->pml_index = PML_HEAD_INDEX;
+ }
+
if (sev_guest(vcpu->kvm))
sev_init_vmcb(svm, init_event);
@@ -1221,9 +1233,15 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
if (!vmcb01_page)
goto out;
+ if (enable_pml) {
+ vcpu->arch.pml_page = snp_safe_alloc_page();
+ if (!vcpu->arch.pml_page)
+ goto error_free_vmcb_page;
+ }
+
err = sev_vcpu_create(vcpu);
if (err)
- goto error_free_vmcb_page;
+ goto error_free_pml_page;
err = avic_init_vcpu(svm);
if (err)
@@ -1247,6 +1265,9 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
error_free_sev:
sev_free_vcpu(vcpu);
+error_free_pml_page:
+ if (vcpu->arch.pml_page)
+ __free_page(vcpu->arch.pml_page);
error_free_vmcb_page:
__free_page(vmcb01_page);
out:
@@ -1264,6 +1285,9 @@ static void svm_vcpu_free(struct kvm_vcpu *vcpu)
sev_free_vcpu(vcpu);
+ if (enable_pml)
+ __free_page(vcpu->arch.pml_page);
+
__free_page(__sme_pa_to_page(svm->vmcb01.pa));
svm_vcpu_free_msrpm(svm->msrpm);
}
@@ -3151,6 +3175,42 @@ static int bus_lock_exit(struct kvm_vcpu *vcpu)
return 0;
}
+void svm_update_cpu_dirty_logging(struct kvm_vcpu *vcpu, bool enable)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+
+ if (enable)
+ svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_PML_ENABLE;
+ else
+ svm->vmcb->control.nested_ctl &= ~SVM_NESTED_CTL_PML_ENABLE;
+}
+
+static void svm_flush_pml_buffer(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+ struct vmcb_control_area *control = &svm->vmcb->control;
+
+ /* Do nothing if PML buffer is empty */
+ if (control->pml_index == PML_HEAD_INDEX)
+ return;
+
+ kvm_flush_pml_buffer(vcpu, control->pml_index);
+
+ /* Reset the PML index */
+ control->pml_index = PML_HEAD_INDEX;
+}
+
+static int pml_full_interception(struct kvm_vcpu *vcpu)
+{
+ trace_kvm_pml_full(vcpu->vcpu_id);
+
+ /*
+ * PML buffer is already flushed at the beginning of svm_handle_exit().
+ * Nothing to do here.
+ */
+ return 1;
+}
+
static int (*const svm_exit_handlers[])(struct kvm_vcpu *vcpu) = {
[SVM_EXIT_READ_CR0] = cr_interception,
[SVM_EXIT_READ_CR3] = cr_interception,
@@ -3227,6 +3287,7 @@ static int (*const svm_exit_handlers[])(struct kvm_vcpu *vcpu) = {
#ifdef CONFIG_KVM_AMD_SEV
[SVM_EXIT_VMGEXIT] = sev_handle_vmgexit,
#endif
+ [SVM_EXIT_PML_FULL] = pml_full_interception,
};
static void dump_vmcb(struct kvm_vcpu *vcpu)
@@ -3275,8 +3336,10 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
pr_err("%-20s%016llx\n", "exit_info2:", control->exit_info_2);
pr_err("%-20s%08x\n", "exit_int_info:", control->exit_int_info);
pr_err("%-20s%08x\n", "exit_int_info_err:", control->exit_int_info_err);
- pr_err("%-20s%lld\n", "nested_ctl:", control->nested_ctl);
+ pr_err("%-20s%llx\n", "nested_ctl:", control->nested_ctl);
pr_err("%-20s%016llx\n", "nested_cr3:", control->nested_cr3);
+ pr_err("%-20s%016llx\n", "pml_addr:", control->pml_addr);
+ pr_err("%-20s%04x\n", "pml_index:", control->pml_index);
pr_err("%-20s%016llx\n", "avic_vapic_bar:", control->avic_vapic_bar);
pr_err("%-20s%016llx\n", "ghcb:", control->ghcb_gpa);
pr_err("%-20s%08x\n", "event_inj:", control->event_inj);
@@ -3518,6 +3581,14 @@ static int svm_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
struct kvm_run *kvm_run = vcpu->run;
u32 exit_code = svm->vmcb->control.exit_code;
+ /*
+ * Opportunistically flush the PML buffer on VM exit. This keeps the
+ * dirty bitmap current by processing logged GPAs rather than waiting for
+ * PML_FULL exit.
+ */
+ if (enable_pml && !is_guest_mode(vcpu))
+ svm_flush_pml_buffer(vcpu);
+
/* SEV-ES guests must use the CR write traps to track CR registers. */
if (!sev_es_guest(vcpu->kvm)) {
if (!svm_is_intercept(svm, INTERCEPT_CR0_WRITE))
@@ -4991,6 +5062,9 @@ static int svm_vm_init(struct kvm *kvm)
return ret;
}
+ if (enable_pml)
+ kvm->arch.cpu_dirty_log_size = PML_LOG_NR_ENTRIES;
+
svm_srso_vm_init();
return 0;
}
@@ -5144,6 +5218,8 @@ struct kvm_x86_ops svm_x86_ops __initdata = {
.gmem_prepare = sev_gmem_prepare,
.gmem_invalidate = sev_gmem_invalidate,
.gmem_max_mapping_level = sev_gmem_max_mapping_level,
+
+ .update_cpu_dirty_logging = svm_update_cpu_dirty_logging,
};
/*
@@ -5365,6 +5441,10 @@ static __init int svm_hardware_setup(void)
nrips = nrips && boot_cpu_has(X86_FEATURE_NRIPS);
+ enable_pml = enable_pml && npt_enabled && cpu_feature_enabled(X86_FEATURE_PML);
+ if (enable_pml)
+ pr_info("Page modification logging supported\n");
+
if (lbrv) {
if (!boot_cpu_has(X86_FEATURE_LBRV))
lbrv = false;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index e4b04f435b3d..522b557106cb 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -720,6 +720,8 @@ static inline void svm_enable_intercept_for_msr(struct kvm_vcpu *vcpu,
svm_set_intercept_for_msr(vcpu, msr, type, true);
}
+void svm_update_cpu_dirty_logging(struct kvm_vcpu *vcpu, bool enable);
+
/* nested.c */
#define NESTED_EXIT_HOST 0 /* Exit handled on host level */
--
2.48.1
* Re: [PATCH v4 3/7] KVM: x86: Move enable_pml variable to common x86 code
2025-10-13 6:25 ` [PATCH v4 3/7] KVM: x86: Move enable_pml variable to common x86 code Nikunj A Dadhania
@ 2025-10-14 11:24 ` Huang, Kai
2025-10-14 19:22 ` Sean Christopherson
0 siblings, 1 reply; 25+ messages in thread
From: Huang, Kai @ 2025-10-14 11:24 UTC (permalink / raw)
To: pbonzini@redhat.com, seanjc@google.com, nikunj@amd.com
Cc: thomas.lendacky@amd.com, kvm@vger.kernel.org,
joao.m.martins@oracle.com, santosh.shukla@amd.com, bp@alien8.de
On Mon, 2025-10-13 at 06:25 +0000, Nikunj A Dadhania wrote:
> Move the enable_pml module parameter from VMX-specific code to common x86
> KVM code. This allows both VMX and SVM implementations to access the same
> PML enable/disable control.
>
> No functional change, just code reorganization to support shared PML
> infrastructure.
>
> Suggested-by: Kai Huang <kai.huang@intel.com>
For the record :-)
When I moved the 'enable_pml' from VMX to x86 in the diff I attached to v6
was purely because vmx_update_cpu_dirty_logging() checks 'enable_pml' and
after it got moved to x86 the new kvm_vcpu_update_cpu_dirty_logging() also
needed to use it (for the sake of just moving code).
I didn't mean to suggest using a common boolean in x86 and letting SVM/VMX
code access it, since the downside is that we need to export it. But I
think it's not a bad idea either.
> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
>
[...]
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -241,6 +241,9 @@ EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_ipiv);
> bool __read_mostly enable_device_posted_irqs = true;
> EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_device_posted_irqs);
>
> +bool __read_mostly enable_pml = true;
> +EXPORT_SYMBOL_GPL(enable_pml);
Now you need to use EXPORT_SYMBOL_FOR_KVM_INTERNAL() instead.
* Re: [PATCH v4 4/7] KVM: x86: Move nested CPU dirty logging logic to common code
2025-10-13 6:25 ` [PATCH v4 4/7] KVM: x86: Move nested CPU dirty logging logic to common code Nikunj A Dadhania
@ 2025-10-14 11:34 ` Huang, Kai
2025-10-14 20:40 ` Huang, Kai
0 siblings, 1 reply; 25+ messages in thread
From: Huang, Kai @ 2025-10-14 11:34 UTC (permalink / raw)
To: pbonzini@redhat.com, seanjc@google.com, nikunj@amd.com
Cc: thomas.lendacky@amd.com, kvm@vger.kernel.org,
joao.m.martins@oracle.com, santosh.shukla@amd.com, bp@alien8.de
>
> -void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
> +void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu, bool enable)
> {
> - struct vcpu_vmx *vmx = to_vmx(vcpu);
> -
> - if (WARN_ON_ONCE(!enable_pml))
> - return;
> -
> - if (is_guest_mode(vcpu)) {
> - vmx->nested.update_vmcs01_cpu_dirty_logging = true;
> - return;
> - }
> -
> - /*
> - * Note, nr_memslots_dirty_logging can be changed concurrent with this
> - * code, but in that case another update request will be made and so
> - * the guest will never run with a stale PML value.
> - */
> - if (atomic_read(&vcpu->kvm->nr_memslots_dirty_logging))
> - secondary_exec_controls_setbit(vmx, SECONDARY_EXEC_ENABLE_PML);
> + if (enable)
> + secondary_exec_controls_setbit(to_vmx(vcpu), SECONDARY_EXEC_ENABLE_PML);
> else
> - secondary_exec_controls_clearbit(vmx, SECONDARY_EXEC_ENABLE_PML);
> + secondary_exec_controls_clearbit(to_vmx(vcpu), SECONDARY_EXEC_ENABLE_PML);
Side topic:
Hmm, I wish we could also move the code which programs the PML buffer and
index into the VMCS here, to make the code clearer. SVM can do the same
thing, I believe.
But we can do this separately, I think, if needed.
> }
[...]
>
> +static void kvm_vcpu_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
> +{
> + if (WARN_ON_ONCE(!enable_pml))
> + return;
Nit:
Since kvm_mmu_update_cpu_dirty_logging() checks kvm->arch.cpu_dirty_log_size
to determine whether PML is enabled, maybe it's better to check
vcpu->kvm.arch.cpu_dirty_log_size here too to make them consistent.
Anyway, the intention of this patch is moving code out of VMX to x86, so
if needed, perhaps we can do the change in another patch.
Btw, now with 'enable_pml' also being moved to x86 common, both
'enable_pml' and 'kvm->arch.cpu_dirty_log_size' can be used to determine
whether KVM has enabled PML. It's kinda redundant, but I guess it's fine.
> +
> + if (is_guest_mode(vcpu)) {
> + vcpu->arch.update_cpu_dirty_logging_pending = true;
> + return;
> + }
> +
> + /*
> + * Note, nr_memslots_dirty_logging can be changed concurrently with this
> + * code, but in that case another update request will be made and so the
> + * guest will never run with a stale PML value.
> + */
> + kvm_x86_call(update_cpu_dirty_logging)(vcpu,
> + atomic_read(&vcpu->kvm->nr_memslots_dirty_logging));
> +}
> +
> /*
> * Called within kvm->srcu read side.
> * Returns 1 to let vcpu_run() continue the guest execution loop without
> @@ -11221,7 +11241,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> kvm_x86_call(recalc_intercepts)(vcpu);
>
> if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
> - kvm_x86_call(update_cpu_dirty_logging)(vcpu);
> + kvm_vcpu_update_cpu_dirty_logging(vcpu);
>
> if (kvm_check_request(KVM_REQ_UPDATE_PROTECTED_GUEST_STATE, vcpu)) {
> kvm_vcpu_reset(vcpu, true);
* Re: [PATCH v4 3/7] KVM: x86: Move enable_pml variable to common x86 code
2025-10-14 11:24 ` Huang, Kai
@ 2025-10-14 19:22 ` Sean Christopherson
2025-10-14 20:47 ` Huang, Kai
0 siblings, 1 reply; 25+ messages in thread
From: Sean Christopherson @ 2025-10-14 19:22 UTC (permalink / raw)
To: Kai Huang
Cc: pbonzini@redhat.com, nikunj@amd.com, thomas.lendacky@amd.com,
kvm@vger.kernel.org, joao.m.martins@oracle.com,
santosh.shukla@amd.com, bp@alien8.de
On Tue, Oct 14, 2025, Kai Huang wrote:
> On Mon, 2025-10-13 at 06:25 +0000, Nikunj A Dadhania wrote:
> > Move the enable_pml module parameter from VMX-specific code to common x86
> > KVM code. This allows both VMX and SVM implementations to access the same
> > PML enable/disable control.
> >
> > No functional change, just code reorganization to support shared PML
> > infrastructure.
> >
> > Suggested-by: Kai Huang <kai.huang@intel.com>
>
> For the record :-)
>
> When I moved the 'enable_pml' from VMX to x86 in the diff I attached to v6
> was purely because vmx_update_cpu_dirty_logging() checks 'enable_pml' and
> after it got moved to x86 the new kvm_vcpu_update_cpu_dirty_logging() also
> needed to use it (for the sake of just moving code).
>
> I didn't mean to suggest to use a common boolean in x86 and let SVM/VMX
> code to access it, since the downside is we need to export it. But I
> think it's not a bad idea either.
Ya. At some point it might make sense to define "struct kvm_params", a la
"kvm_caps" and "kvm_host_values", so that we don't need a pile of one-off exports.
I'm not sure I'm entirely in favor of that idea though, as I think it'd be a net
negative for overall code readability. And with EXPORT_SYMBOL_FOR_KVM_INTERNAL,
exports feel a lot less gross :-)
* Re: [PATCH v4 4/7] KVM: x86: Move nested CPU dirty logging logic to common code
2025-10-14 11:34 ` Huang, Kai
@ 2025-10-14 20:40 ` Huang, Kai
2025-10-14 21:24 ` Sean Christopherson
0 siblings, 1 reply; 25+ messages in thread
From: Huang, Kai @ 2025-10-14 20:40 UTC (permalink / raw)
To: pbonzini@redhat.com, seanjc@google.com, nikunj@amd.com
Cc: thomas.lendacky@amd.com, kvm@vger.kernel.org,
joao.m.martins@oracle.com, santosh.shukla@amd.com, bp@alien8.de
On Tue, 2025-10-14 at 11:34 +0000, Huang, Kai wrote:
> >
> > +static void kvm_vcpu_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
> > +{
> > + if (WARN_ON_ONCE(!enable_pml))
> > + return;
>
> Nit:
>
> Since kvm_mmu_update_cpu_dirty_logging() checks kvm-
> > arch.cpu_dirty_log_size to determine whether PML is enabled, maybe it's
> better to check vcpu->kvm.arch.cpu_dirty_log_size here too to make them
> consistent.
After second thought, I think we should just change to checking
vcpu->kvm.arch.cpu_dirty_log_size.
The reason is this cpu_dirty_log_size is per-VM and 'enable_pml' is KVM
global, but now for TDX guests even when 'enable_pml = 1' we don't set
cpu_dirty_log_size (we made it per-VM for TDX actually).
So while the current vmx_update_cpu_dirty_logging() checks 'enable_pml'
(which is fine since it's only for VMX guests), checking 'enable_pml' in
kvm_vcpu_update_cpu_dirty_logging() in x86 common logically isn't correct
because it doesn't cover all types of VMs.
I am not sure whether PML works for SEV/SEV-SNP guests on the AMD side.
Maybe they need similar treatment to TDX guests when setting up the per-VM
cpu_dirty_log_size.
Anyway, I think checking the per-VM cpu_dirty_log_size is more correct
here.
>
> Anyway, the intention of this patch is moving code out of VMX to x86, so
> if needed, perhaps we can do the change in another patch.
>
> Btw, now with 'enable_pml' also being moved to x86 common, both
> 'enable_pml' and 'kvm->arch.cpu_dirty_log_size' can be used to determine
> whether KVM has enabled PML. It's kinda redundant, but I guess it's fine.
If we change to check cpu_dirty_log_size here, the x86 common code won't
use 'enable_pml' anymore and I think we can just get rid of that patch.
Sean, do you have any preference?
>
> > +
> > + if (is_guest_mode(vcpu)) {
> > + vcpu->arch.update_cpu_dirty_logging_pending = true;
> > + return;
> > + }
> > +
> > + /*
> > + * Note, nr_memslots_dirty_logging can be changed concurrently with this
> > + * code, but in that case another update request will be made and so the
> > + * guest will never run with a stale PML value.
> > + */
> > + kvm_x86_call(update_cpu_dirty_logging)(vcpu,
> > + atomic_read(&vcpu->kvm->nr_memslots_dirty_logging));
> > +}
* Re: [PATCH v4 3/7] KVM: x86: Move enable_pml variable to common x86 code
2025-10-14 19:22 ` Sean Christopherson
@ 2025-10-14 20:47 ` Huang, Kai
2025-10-15 4:39 ` Nikunj A. Dadhania
0 siblings, 1 reply; 25+ messages in thread
From: Huang, Kai @ 2025-10-14 20:47 UTC (permalink / raw)
To: seanjc@google.com
Cc: thomas.lendacky@amd.com, kvm@vger.kernel.org, pbonzini@redhat.com,
joao.m.martins@oracle.com, bp@alien8.de, nikunj@amd.com,
santosh.shukla@amd.com
On Tue, 2025-10-14 at 12:22 -0700, Sean Christopherson wrote:
> On Tue, Oct 14, 2025, Kai Huang wrote:
> > On Mon, 2025-10-13 at 06:25 +0000, Nikunj A Dadhania wrote:
> > > Move the enable_pml module parameter from VMX-specific code to common x86
> > > KVM code. This allows both VMX and SVM implementations to access the same
> > > PML enable/disable control.
> > >
> > > No functional change, just code reorganization to support shared PML
> > > infrastructure.
> > >
> > > Suggested-by: Kai Huang <kai.huang@intel.com>
> >
> > For the record :-)
> >
> > When I moved the 'enable_pml' from VMX to x86 in the diff I attached to v6
> > was purely because vmx_update_cpu_dirty_logging() checks 'enable_pml' and
> > after it got moved to x86 the new kvm_vcpu_update_cpu_dirty_logging() also
> > needed to use it (for the sake of just moving code).
> >
> > I didn't mean to suggest to use a common boolean in x86 and let SVM/VMX
> > code to access it, since the downside is we need to export it. But I
> > think it's not a bad idea either.
>
> Ya. At some point it might makes sense to define "struct kvm_params", a la
> "kvm_caps" and "kvm_host_values", so that we don't need a pile of one-off exports.
> I'm not sure I'm entirely in favor of that idea though, as I think it'd be a net
> negative for overall code readability. And with EXPORT_SYMBOL_FOR_KVM_INTERNAL,
> exports feel a lot less gross :-)
Yeah I like EXPORT_SYMBOL_FOR_KVM_INTERNAL().
Btw maybe we can avoid this patch? Please see my second reply to the next
patch:
https://lore.kernel.org/kvm/8b1f31fec081c7e570ddec93477dd719638cc363.camel@intel.com/
* Re: [PATCH v4 4/7] KVM: x86: Move nested CPU dirty logging logic to common code
2025-10-14 20:40 ` Huang, Kai
@ 2025-10-14 21:24 ` Sean Christopherson
2025-10-14 21:37 ` Huang, Kai
0 siblings, 1 reply; 25+ messages in thread
From: Sean Christopherson @ 2025-10-14 21:24 UTC (permalink / raw)
To: Kai Huang
Cc: pbonzini@redhat.com, nikunj@amd.com, thomas.lendacky@amd.com,
kvm@vger.kernel.org, joao.m.martins@oracle.com,
santosh.shukla@amd.com, bp@alien8.de
On Tue, Oct 14, 2025, Kai Huang wrote:
> On Tue, 2025-10-14 at 11:34 +0000, Huang, Kai wrote:
> > >
> > > +static void kvm_vcpu_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
> > > +{
> > > + if (WARN_ON_ONCE(!enable_pml))
> > > + return;
> >
> > Nit:
> >
> > Since kvm_mmu_update_cpu_dirty_logging() checks kvm-
> > > arch.cpu_dirty_log_size to determine whether PML is enabled, maybe it's
> > better to check vcpu->kvm.arch.cpu_dirty_log_size here too to make them
> > consistent.
>
> After second thought, I think we should just change to checking the vcpu-
> >kvm.arch.cpu_dirty_log_size.
+1
> > Anyway, the intention of this patch is moving code out of VMX to x86, so
> > if needed, perhaps we can do the change in another patch.
> >
> > Btw, now with 'enable_pml' also being moved to x86 common, both
> > 'enable_pml' and 'kvm->arch.cpu_dirty_log_size' can be used to determine
> > whether KVM has enabled PML. It's kinda redundant, but I guess it's fine.
>
> If we change to check cpu_dirty_log_size here, the x86 common code won't
> use 'enable_pml' anymore and I think we can just get rid of that patch.
>
> Sean, do you have any preference?
Definitely check cpu_dirty_log_size. It's more precise (TDX doesn't (yet) support
PML), avoids the export, and hopefully will yield more consistent code.
* Re: [PATCH v4 4/7] KVM: x86: Move nested CPU dirty logging logic to common code
2025-10-14 21:24 ` Sean Christopherson
@ 2025-10-14 21:37 ` Huang, Kai
2025-10-15 4:43 ` Nikunj A. Dadhania
0 siblings, 1 reply; 25+ messages in thread
From: Huang, Kai @ 2025-10-14 21:37 UTC (permalink / raw)
To: seanjc@google.com
Cc: thomas.lendacky@amd.com, kvm@vger.kernel.org, pbonzini@redhat.com,
joao.m.martins@oracle.com, bp@alien8.de, nikunj@amd.com,
santosh.shukla@amd.com
On Tue, 2025-10-14 at 14:24 -0700, Sean Christopherson wrote:
> On Tue, Oct 14, 2025, Kai Huang wrote:
> > On Tue, 2025-10-14 at 11:34 +0000, Huang, Kai wrote:
> > > >
> > > > +static void kvm_vcpu_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
> > > > +{
> > > > + if (WARN_ON_ONCE(!enable_pml))
> > > > + return;
> > >
> > > Nit:
> > >
> > > Since kvm_mmu_update_cpu_dirty_logging() checks kvm-
> > > > arch.cpu_dirty_log_size to determine whether PML is enabled, maybe it's
> > > better to check vcpu->kvm.arch.cpu_dirty_log_size here too to make them
> > > consistent.
> >
> > After second thought, I think we should just change to checking the vcpu-
> > > kvm.arch.cpu_dirty_log_size.
>
> +1
>
> > > Anyway, the intention of this patch is moving code out of VMX to x86, so
> > > if needed, perhaps we can do the change in another patch.
> > >
> > > Btw, now with 'enable_pml' also being moved to x86 common, both
> > > 'enable_pml' and 'kvm->arch.cpu_dirty_log_size' can be used to determine
> > > whether KVM has enabled PML. It's kinda redundant, but I guess it's fine.
> >
> > If we change to check cpu_dirty_log_size here, the x86 common code won't
> > use 'enable_pml' anymore and I think we can just get rid of that patch.
> >
> > Sean, do you have any preference?
>
> Definitely check cpu_dirty_log_size. It's more precise (TDX doesn't (yet) support
> PML), avoids the export, and hopefully will yield more consistent code.
Yeah completely agree. Thanks for the feedback.
* Re: [PATCH v4 1/7] KVM: x86: Carve out PML flush routine
2025-10-13 6:25 ` [PATCH v4 1/7] KVM: x86: Carve out PML flush routine Nikunj A Dadhania
@ 2025-10-14 22:04 ` Huang, Kai
2025-10-15 4:32 ` Nikunj A. Dadhania
0 siblings, 1 reply; 25+ messages in thread
From: Huang, Kai @ 2025-10-14 22:04 UTC (permalink / raw)
To: pbonzini@redhat.com, seanjc@google.com, nikunj@amd.com
Cc: thomas.lendacky@amd.com, kvm@vger.kernel.org,
joao.m.martins@oracle.com, santosh.shukla@amd.com, bp@alien8.de
On Mon, 2025-10-13 at 06:25 +0000, Nikunj A Dadhania wrote:
> +EXPORT_SYMBOL_GPL(kvm_flush_pml_buffer);
Ditto, please use EXPORT_SYMBOL_FOR_KVM_INTERNAL().
* Re: [PATCH v4 1/7] KVM: x86: Carve out PML flush routine
2025-10-14 22:04 ` Huang, Kai
@ 2025-10-15 4:32 ` Nikunj A. Dadhania
0 siblings, 0 replies; 25+ messages in thread
From: Nikunj A. Dadhania @ 2025-10-15 4:32 UTC (permalink / raw)
To: Huang, Kai, pbonzini@redhat.com, seanjc@google.com
Cc: thomas.lendacky@amd.com, kvm@vger.kernel.org,
joao.m.martins@oracle.com, santosh.shukla@amd.com, bp@alien8.de
On 10/15/2025 3:34 AM, Huang, Kai wrote:
> On Mon, 2025-10-13 at 06:25 +0000, Nikunj A Dadhania wrote:
>> +EXPORT_SYMBOL_GPL(kvm_flush_pml_buffer);
>
> Ditto, please use EXPORT_SYMBOL_FOR_KVM_INTERNAL().
Sure
* Re: [PATCH v4 3/7] KVM: x86: Move enable_pml variable to common x86 code
2025-10-14 20:47 ` Huang, Kai
@ 2025-10-15 4:39 ` Nikunj A. Dadhania
0 siblings, 0 replies; 25+ messages in thread
From: Nikunj A. Dadhania @ 2025-10-15 4:39 UTC (permalink / raw)
To: Huang, Kai, seanjc@google.com
Cc: thomas.lendacky@amd.com, kvm@vger.kernel.org, pbonzini@redhat.com,
joao.m.martins@oracle.com, bp@alien8.de, santosh.shukla@amd.com
On 10/15/2025 2:17 AM, Huang, Kai wrote:
> On Tue, 2025-10-14 at 12:22 -0700, Sean Christopherson wrote:
>> On Tue, Oct 14, 2025, Kai Huang wrote:
>>> On Mon, 2025-10-13 at 06:25 +0000, Nikunj A Dadhania wrote:
>>>> Move the enable_pml module parameter from VMX-specific code to common x86
>>>> KVM code. This allows both VMX and SVM implementations to access the same
>>>> PML enable/disable control.
>>>>
>>>> No functional change, just code reorganization to support shared PML
>>>> infrastructure.
>>>>
>>>> Suggested-by: Kai Huang <kai.huang@intel.com>
>>>
>>> For the record :-)
>>>
>>> When I moved the 'enable_pml' from VMX to x86 in the diff I attached to v6
>>> was purely because vmx_update_cpu_dirty_logging() checks 'enable_pml' and
>>> after it got moved to x86 the new kvm_vcpu_update_cpu_dirty_logging() also
>>> needed to use it (for the sake of just moving code).
>>>
>>> I didn't mean to suggest to use a common boolean in x86 and let SVM/VMX
>>> code to access it, since the downside is we need to export it. But I
>>> think it's not a bad idea either.
As per the comment on v3 [1], I had implemented the common boolean like enable_apic
>>
>> Ya. At some point it might make sense to define "struct kvm_params", a la
>> "kvm_caps" and "kvm_host_values", so that we don't need a pile of one-off exports.
>> I'm not sure I'm entirely in favor of that idea though, as I think it'd be a net
>> negative for overall code readability. And with EXPORT_SYMBOL_FOR_KVM_INTERNAL,
>> exports feel a lot less gross :-)
>
> Yeah I like EXPORT_SYMBOL_FOR_KVM_INTERNAL().
>
> Btw maybe we can avoid this patch? Please see my second reply to the next
> patch:
>
> https://lore.kernel.org/kvm/8b1f31fec081c7e570ddec93477dd719638cc363.camel@intel.com/
Agreed; with kvm_mmu_update_cpu_dirty_logging() checking kvm->arch.cpu_dirty_log_size,
there is no need to export enable_pml.
Regards
Nikunj
1. https://lore.kernel.org/kvm/0d1baaecc56de2b77f82ab3af9c75a12be91d6b2.camel@intel.com/
* Re: [PATCH v4 4/7] KVM: x86: Move nested CPU dirty logging logic to common code
2025-10-14 21:37 ` Huang, Kai
@ 2025-10-15 4:43 ` Nikunj A. Dadhania
2025-10-15 5:27 ` Huang, Kai
0 siblings, 1 reply; 25+ messages in thread
From: Nikunj A. Dadhania @ 2025-10-15 4:43 UTC (permalink / raw)
To: Huang, Kai, seanjc@google.com
Cc: thomas.lendacky@amd.com, kvm@vger.kernel.org, pbonzini@redhat.com,
joao.m.martins@oracle.com, bp@alien8.de, santosh.shukla@amd.com
On 10/15/2025 3:07 AM, Huang, Kai wrote:
> On Tue, 2025-10-14 at 14:24 -0700, Sean Christopherson wrote:
>> On Tue, Oct 14, 2025, Kai Huang wrote:
>>> On Tue, 2025-10-14 at 11:34 +0000, Huang, Kai wrote:
>>>>>
>>>>> +static void kvm_vcpu_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
>>>>> +{
>>>>> + if (WARN_ON_ONCE(!enable_pml))
>>>>> + return;
>>>>
>>>> Nit:
>>>>
>>>> Since kvm_mmu_update_cpu_dirty_logging() checks kvm-
>>>>> arch.cpu_dirty_log_size to determine whether PML is enabled, maybe it's
>>>> better to check vcpu->kvm.arch.cpu_dirty_log_size here too to make them
>>>> consistent.
>>>
>>> After second thought, I think we should just change to checking the vcpu-
>>>> kvm.arch.cpu_dirty_log_size.
>>
>> +1
>>
>>>> Anyway, the intention of this patch is moving code out of VMX to x86, so
>>>> if needed, perhaps we can do the change in another patch.
I will add this as a pre-patch; does it need a Fixes tag?
>>>>
>>>> Btw, now with 'enable_pml' also being moved to x86 common, both
>>>> 'enable_pml' and 'kvm->arch.cpu_dirty_log_size' can be used to determine
>>>> whether KVM has enabled PML. It's kinda redundant, but I guess it's fine.
>>>
>>> If we change to check cpu_dirty_log_size here, the x86 common code won't
>>> use 'enable_pml' anymore and I think we can just get rid of that patch.
>>>
>>> Sean, do you have any preference?
>>
>> Definitely check cpu_dirty_log_size. It's more precise (TDX doesn't (yet) support
>> PML), avoids the export, and hopefully will yield more consistent code.>
> Yeah completely agree. Thanks for the feedback.
Sure, thanks for the feedback.
Regards
Nikunj
* Re: [PATCH v4 4/7] KVM: x86: Move nested CPU dirty logging logic to common code
2025-10-15 4:43 ` Nikunj A. Dadhania
@ 2025-10-15 5:27 ` Huang, Kai
2025-10-15 9:06 ` Nikunj A. Dadhania
0 siblings, 1 reply; 25+ messages in thread
From: Huang, Kai @ 2025-10-15 5:27 UTC (permalink / raw)
To: seanjc@google.com, nikunj@amd.com
Cc: thomas.lendacky@amd.com, kvm@vger.kernel.org, pbonzini@redhat.com,
joao.m.martins@oracle.com, bp@alien8.de, santosh.shukla@amd.com
On Wed, 2025-10-15 at 10:13 +0530, Nikunj A. Dadhania wrote:
>
> On 10/15/2025 3:07 AM, Huang, Kai wrote:
> > On Tue, 2025-10-14 at 14:24 -0700, Sean Christopherson wrote:
> > > On Tue, Oct 14, 2025, Kai Huang wrote:
> > > > On Tue, 2025-10-14 at 11:34 +0000, Huang, Kai wrote:
> > > > > >
> > > > > > +static void kvm_vcpu_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
> > > > > > +{
> > > > > > + if (WARN_ON_ONCE(!enable_pml))
> > > > > > + return;
> > > > >
> > > > > Nit:
> > > > >
> > > > > Since kvm_mmu_update_cpu_dirty_logging() checks kvm-
> > > > > > arch.cpu_dirty_log_size to determine whether PML is enabled, maybe it's
> > > > > better to check vcpu->kvm.arch.cpu_dirty_log_size here too to make them
> > > > > consistent.
> > > >
> > > > After second thought, I think we should just change to checking the vcpu-
> > > > > kvm.arch.cpu_dirty_log_size.
> > >
> > > +1
> > >
> > > > > Anyway, the intention of this patch is moving code out of VMX to x86, so
> > > > > if needed, perhaps we can do the change in another patch.
>
> I will add this as a pre-patch, does it need a fixes tag ?
No I don't think there's any bug in the existing upstream code, since for
VMX guests, checking 'enable_pml' and the per-VM 'cpu_dirty_log_size' is
basically the same thing.
I won't stop you from doing this in a separate patch, but I think we can
just change this patch to check cpu_dirty_log_size with some justification
in changelog (e.g., below, a bit lengthy but not sure how to trim down):
Note KVM common code has a per-VM kvm->arch.cpu_dirty_log_size to
indicate whether PML is enabled on a per-VM basis. It exists to
support running both VMX guests and TDX guests with a single KVM
global 'enable_pml' module parameter -- TDX doesn't (yet) support
PML, so 'enable_pml' alone cannot express that PML is on for VMX
guests but off for TDX guests in the same KVM instance.
Currently vmx_update_cpu_dirty_logging() has a WARN() on
'!enable_pml' to make sure it is not mistakenly called when PML is
not enabled in KVM. That is correct for VMX guests. However, such a
check is no longer correct in x86 common code since it doesn't apply
to TDX guests.
Therefore, change to check the per-VM cpu_dirty_log_size while moving
vmx_update_cpu_dirty_logging() to x86 common code. This is also more
consistent with kvm_mmu_update_cpu_dirty_logging(), which checks
cpu_dirty_log_size.
Or perhaps Sean has some preference?
* Re: [PATCH v4 4/7] KVM: x86: Move nested CPU dirty logging logic to common code
2025-10-15 5:27 ` Huang, Kai
@ 2025-10-15 9:06 ` Nikunj A. Dadhania
2025-10-15 21:37 ` Huang, Kai
0 siblings, 1 reply; 25+ messages in thread
From: Nikunj A. Dadhania @ 2025-10-15 9:06 UTC (permalink / raw)
To: Huang, Kai, seanjc@google.com
Cc: thomas.lendacky@amd.com, kvm@vger.kernel.org, pbonzini@redhat.com,
joao.m.martins@oracle.com, bp@alien8.de, santosh.shukla@amd.com
On 10/15/2025 10:57 AM, Huang, Kai wrote:
> On Wed, 2025-10-15 at 10:13 +0530, Nikunj A. Dadhania wrote:
>>
>> On 10/15/2025 3:07 AM, Huang, Kai wrote:
>>> On Tue, 2025-10-14 at 14:24 -0700, Sean Christopherson wrote:
>>>> On Tue, Oct 14, 2025, Kai Huang wrote:
>>>>> On Tue, 2025-10-14 at 11:34 +0000, Huang, Kai wrote:
>>>>>>>
>>>>>>> +static void kvm_vcpu_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
>>>>>>> +{
>>>>>>> + if (WARN_ON_ONCE(!enable_pml))
>>>>>>> + return;
>>>>>>
>>>>>> Nit:
>>>>>>
>>>>>> Since kvm_mmu_update_cpu_dirty_logging() checks kvm-
>>>>>>> arch.cpu_dirty_log_size to determine whether PML is enabled, maybe it's
>>>>>> better to check vcpu->kvm.arch.cpu_dirty_log_size here too to make them
>>>>>> consistent.
>>>>>
>>>>> After second thought, I think we should just change to checking the vcpu-
>>>>>> kvm.arch.cpu_dirty_log_size.
>>>>
>>>> +1
>>>>
>>>>>> Anyway, the intention of this patch is moving code out of VMX to x86, so
>>>>>> if needed, perhaps we can do the change in another patch.
>>
>> I will add this as a pre-patch, does it need a fixes tag ?
>
> No I don't think there's any bug in the existing upstream code, since for
> VMX guests, checking 'enable_pml' and the per-VM 'cpu_dirty_log_size' is
> basically the same thing.
>
> I won't stop you from doing this in a separate patch, but I think we can
> just change this patch to check cpu_dirty_log_size with some justification
> in changelog (e.g., below, a bit lengthy but not sure how to trim down):
>
> Note KVM common code has a per-VM kvm->arch.cpu_dirty_log_size to
> indicate whether PML is enabled on VM basis. It's for supporting
> running both VMX guests and TDX guests with a KVM global 'enable_pml'
> module parameter -- TDX doesn't (yet) support PML, and 'enable_pml' is
> used to enable PML for VMX guests while supporting TDX guests.
>
> Currently vmx_update_cpu_dirty_logging() has a WARN() to check
> '!enable_pml' to make sure it is not mistakenly called when PML is
> not enabled in KVM. It's correct for VMX guests. However such check
> is no longer correct in x86 common since it doesn't apply to TDX guests.
>
> Therefore change to check the per VM cpu_dirty_log_size while moving
> vmx_update_cpu_dirty_logging() to x86 common. And it's also more
> consistent with kvm_mmu_update_cpu_dirty_logging(), which checks the
> cpu_dirty_log_size.
I have something like this as a separate patch in my stack:
From 21c3b91ad53dfc2682c01663fe65d60c9424318d Mon Sep 17 00:00:00 2001
From: Nikunj A Dadhania <nikunj@amd.com>
Date: Tue, 30 Sep 2025 05:10:15 +0000
Subject: [PATCH] KVM: VMX: Use cpu_dirty_log_size instead of enable_pml for
PML checks
Replace the enable_pml check with cpu_dirty_log_size in VMX PML code
to determine whether PML is enabled on a per-VM basis. The enable_pml
module parameter is a global setting that doesn't reflect per-VM
capabilities, whereas cpu_dirty_log_size accurately indicates whether
a specific VM has PML enabled.
For example, TDX VMs don't yet support PML. Using cpu_dirty_log_size
ensures the check correctly reflects this, while enable_pml would
incorrectly indicate PML is available.
This also improves consistency with kvm_mmu_update_cpu_dirty_logging(),
which already uses cpu_dirty_log_size to determine PML enablement.
Suggested-by: Kai Huang <kai.huang@intel.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
---
arch/x86/kvm/vmx/vmx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index aa1ba8db6392..9e0c0e29d47e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8199,7 +8199,7 @@ void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	if (WARN_ON_ONCE(!enable_pml))
+	if (WARN_ON_ONCE(!vcpu->kvm->arch.cpu_dirty_log_size))
 		return;
 
 	if (is_guest_mode(vcpu)) {
2.48.1
>
> Or perhaps Sean has some preference?
* Re: [PATCH v4 4/7] KVM: x86: Move nested CPU dirty logging logic to common code
2025-10-15 9:06 ` Nikunj A. Dadhania
@ 2025-10-15 21:37 ` Huang, Kai
2025-10-16 9:23 ` Nikunj A. Dadhania
0 siblings, 1 reply; 25+ messages in thread
From: Huang, Kai @ 2025-10-15 21:37 UTC (permalink / raw)
To: seanjc@google.com, nikunj@amd.com
Cc: thomas.lendacky@amd.com, kvm@vger.kernel.org, pbonzini@redhat.com,
joao.m.martins@oracle.com, bp@alien8.de, santosh.shukla@amd.com
>
> I have something like this as a separate patch in my stack:
>
> From 21c3b91ad53dfc2682c01663fe65d60c9424318d Mon Sep 17 00:00:00 2001
> From: Nikunj A Dadhania <nikunj@amd.com>
> Date: Tue, 30 Sep 2025 05:10:15 +0000
> Subject: [PATCH] KVM: VMX: Use cpu_dirty_log_size instead of enable_pml for
> PML checks
>
> Replace the enable_pml check with cpu_dirty_log_size in VMX PML code
> to determine whether PML is enabled on a per-VM basis. The enable_pml
> module parameter is a global setting that doesn't reflect per-VM
> capabilities, whereas cpu_dirty_log_size accurately indicates whether
> a specific VM has PML enabled.
>
> For example, TDX VMs don't yet support PML. Using cpu_dirty_log_size
> ensures the check correctly reflects this, while enable_pml would
> incorrectly indicate PML is available.
>
> This also improves consistency with kvm_mmu_update_cpu_dirty_logging(),
> which already uses cpu_dirty_log_size to determine PML enablement.
I would add that this is preparation for moving this code out to x86
common code to share with AMD PML. Otherwise it's not a mandatory
change, although it is slightly better in terms of code consistency.
>
> Suggested-by: Kai Huang <kai.huang@intel.com>
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
Thanks for doing this:
Reviewed-by: Kai Huang <kai.huang@intel.com>
> ---
> arch/x86/kvm/vmx/vmx.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index aa1ba8db6392..9e0c0e29d47e 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -8199,7 +8199,7 @@ void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> - if (WARN_ON_ONCE(!enable_pml))
> + if (WARN_ON_ONCE(!vcpu->kvm->arch.cpu_dirty_log_size))
> return;
>
> if (is_guest_mode(vcpu)) {
* Re: [PATCH v4 4/7] KVM: x86: Move nested CPU dirty logging logic to common code
2025-10-15 21:37 ` Huang, Kai
@ 2025-10-16 9:23 ` Nikunj A. Dadhania
0 siblings, 0 replies; 25+ messages in thread
From: Nikunj A. Dadhania @ 2025-10-16 9:23 UTC (permalink / raw)
To: Huang, Kai, seanjc@google.com
Cc: thomas.lendacky@amd.com, kvm@vger.kernel.org, pbonzini@redhat.com,
joao.m.martins@oracle.com, bp@alien8.de, santosh.shukla@amd.com
On 10/16/2025 3:07 AM, Huang, Kai wrote:
>>
>> I have something like this as a separate patch in my stack:
>>
>> From 21c3b91ad53dfc2682c01663fe65d60c9424318d Mon Sep 17 00:00:00 2001
>> From: Nikunj A Dadhania <nikunj@amd.com>
>> Date: Tue, 30 Sep 2025 05:10:15 +0000
>> Subject: [PATCH] KVM: VMX: Use cpu_dirty_log_size instead of enable_pml for
>> PML checks
>>
>> Replace the enable_pml check with cpu_dirty_log_size in VMX PML code
>> to determine whether PML is enabled on a per-VM basis. The enable_pml
>> module parameter is a global setting that doesn't reflect per-VM
>> capabilities, whereas cpu_dirty_log_size accurately indicates whether
>> a specific VM has PML enabled.
>>
>> For example, TDX VMs don't yet support PML. Using cpu_dirty_log_size
>> ensures the check correctly reflects this, while enable_pml would
>> incorrectly indicate PML is available.
>>
>> This also improves consistency with kvm_mmu_update_cpu_dirty_logging(),
>> which already uses cpu_dirty_log_size to determine PML enablement.
>
> I would add this is a preparation for moving this code out to x86 common
> to share with AMD PML. Otherwise it's not a mandatory change, albeit it
> is slightly better in terms of code consistency.
This change is not specific to AMD PML; it's a generic
future-proofing change for both VMX and SVM.
>
>>
>> Suggested-by: Kai Huang <kai.huang@intel.com>
>> Suggested-by: Sean Christopherson <seanjc@google.com>
>> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
>
> Thanks for doing this:
>
> Reviewed-by: Kai Huang <kai.huang@intel.com>
Thanks
Nikunj
* Re: [PATCH v4 7/7] KVM: SVM: Add Page modification logging support
2025-10-13 6:25 ` [PATCH v4 7/7] KVM: SVM: Add Page modification logging support Nikunj A Dadhania
@ 2025-10-17 5:13 ` Huang, Kai
2025-11-06 9:28 ` Nikunj A. Dadhania
0 siblings, 1 reply; 25+ messages in thread
From: Huang, Kai @ 2025-10-17 5:13 UTC (permalink / raw)
To: pbonzini@redhat.com, seanjc@google.com, nikunj@amd.com
Cc: thomas.lendacky@amd.com, kvm@vger.kernel.org,
joao.m.martins@oracle.com, santosh.shukla@amd.com, bp@alien8.de
On Mon, 2025-10-13 at 06:25 +0000, Nikunj A Dadhania wrote:
> @@ -1162,6 +1164,16 @@ static void init_vmcb(struct kvm_vcpu *vcpu, bool init_event)
> if (vcpu->kvm->arch.bus_lock_detection_enabled)
> svm_set_intercept(svm, INTERCEPT_BUSLOCK);
>
> + if (enable_pml) {
> + /*
> + * Populate the page address and index here, PML is enabled
> + * when dirty logging is enabled on the memslot through
> + * svm_update_cpu_dirty_logging()
> + */
> + control->pml_addr = (u64)__sme_set(page_to_phys(vcpu->arch.pml_page));
> + control->pml_index = PML_HEAD_INDEX;
> + }
> +
> if (sev_guest(vcpu->kvm))
> sev_init_vmcb(svm, init_event);
Hi Nikunj,
As asked in another reply, does PML work with SEV* guests?
In whatever case, it would be helpful to describe this in the changelog.
* Re: [PATCH v4 7/7] KVM: SVM: Add Page modification logging support
2025-10-17 5:13 ` Huang, Kai
@ 2025-11-06 9:28 ` Nikunj A. Dadhania
0 siblings, 0 replies; 25+ messages in thread
From: Nikunj A. Dadhania @ 2025-11-06 9:28 UTC (permalink / raw)
To: Huang, Kai, pbonzini@redhat.com, seanjc@google.com
Cc: thomas.lendacky@amd.com, kvm@vger.kernel.org,
joao.m.martins@oracle.com, santosh.shukla@amd.com, bp@alien8.de
On 10/17/2025 10:43 AM, Huang, Kai wrote:
> On Mon, 2025-10-13 at 06:25 +0000, Nikunj A Dadhania wrote:
>> if (sev_guest(vcpu->kvm))
>> sev_init_vmcb(svm, init_event);
>
Hi Kai,
My apologies for the delay in responding.
> Hi Nikunj,
>
> As asked in another reply, does PML work with SEV* guests?
Yes, PML is supported with SEV* guests. Since SEV guest migration is
not currently supported, I verified PML functionality using an SEV
kselftest instead.
The test has an SEV guest vCPU continuously write to random pages within
a 1GB memory region while the main thread repeatedly collects the dirty
log. Below is the bpftrace output showing the PML buffer being drained:
sudo bpftrace -e 'kprobe:kvm_vcpu_mark_page_dirty { @[kstack] = count(); }'
Attaching 1 probe...
@[
kvm_vcpu_mark_page_dirty+5
kvm_flush_pml_buffer+120
svm_handle_exit+490
kvm_arch_vcpu_ioctl_run+1993
kvm_vcpu_ioctl+303
...
]: 1241163
>
> In whatever case, it would be helpful to describe this in the changelog.
I will clarify this in the v5 changelog and add a selftest for SEV
with PML.
Regards
Nikunj
end of thread, other threads:[~2025-11-06 9:28 UTC | newest]
Thread overview: 25+ messages
2025-10-13 6:25 [PATCH v4 0/7] KVM: SVM: Add Page Modification Logging (PML) support Nikunj A Dadhania
2025-10-13 6:25 ` [PATCH v4 1/7] KVM: x86: Carve out PML flush routine Nikunj A Dadhania
2025-10-14 22:04 ` Huang, Kai
2025-10-15 4:32 ` Nikunj A. Dadhania
2025-10-13 6:25 ` [PATCH v4 2/7] KVM: x86: Move PML page to common vcpu arch structure Nikunj A Dadhania
2025-10-13 6:25 ` [PATCH v4 3/7] KVM: x86: Move enable_pml variable to common x86 code Nikunj A Dadhania
2025-10-14 11:24 ` Huang, Kai
2025-10-14 19:22 ` Sean Christopherson
2025-10-14 20:47 ` Huang, Kai
2025-10-15 4:39 ` Nikunj A. Dadhania
2025-10-13 6:25 ` [PATCH v4 4/7] KVM: x86: Move nested CPU dirty logging logic to common code Nikunj A Dadhania
2025-10-14 11:34 ` Huang, Kai
2025-10-14 20:40 ` Huang, Kai
2025-10-14 21:24 ` Sean Christopherson
2025-10-14 21:37 ` Huang, Kai
2025-10-15 4:43 ` Nikunj A. Dadhania
2025-10-15 5:27 ` Huang, Kai
2025-10-15 9:06 ` Nikunj A. Dadhania
2025-10-15 21:37 ` Huang, Kai
2025-10-16 9:23 ` Nikunj A. Dadhania
2025-10-13 6:25 ` [PATCH v4 5/7] x86/cpufeatures: Add Page modification logging Nikunj A Dadhania
2025-10-13 6:25 ` [PATCH v4 6/7] KVM: SVM: Use BIT_ULL for 64-bit nested_ctl bit definitions Nikunj A Dadhania
2025-10-13 6:25 ` [PATCH v4 7/7] KVM: SVM: Add Page modification logging support Nikunj A Dadhania
2025-10-17 5:13 ` Huang, Kai
2025-11-06 9:28 ` Nikunj A. Dadhania