* [PATCH v6 1/8] KVM: TDX: Use kvm_arch_vcpu.host_debugctl to restore the host's DEBUGCTL
2025-06-10 23:20 [PATCH v6 0/8] KVM: VMX: Preserve host's DEBUGCTL.FREEZE_IN_SMM Sean Christopherson
@ 2025-06-10 23:20 ` Sean Christopherson
2025-06-11 9:29 ` Adrian Hunter
2025-06-10 23:20 ` [PATCH v6 2/8] KVM: x86: Convert vcpu_run()'s immediate exit param into a generic bitmap Sean Christopherson
` (7 subsequent siblings)
8 siblings, 1 reply; 17+ messages in thread
From: Sean Christopherson @ 2025-06-10 23:20 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Adrian Hunter, Maxim Levitsky
Use the kvm_arch_vcpu.host_debugctl snapshot to restore DEBUGCTL after
running a TD vCPU. The final TDX series rebase was mishandled, likely due
to commit fb71c7959356 ("KVM: x86: Snapshot the host's DEBUGCTL in common
x86") deleting the same line of code from vmx.h, i.e. creating a semantic
conflict of sorts, but no syntactic conflict.
Using the version in kvm_vcpu_arch picks up the ulong => u64 fix (which
isn't relevant to TDX) as well as the IRQ fix from commit 189ecdb3e112
("KVM: x86: Snapshot the host's DEBUGCTL after disabling IRQs").
Link: https://lore.kernel.org/all/20250307212053.2948340-10-pbonzini@redhat.com
Cc: Adrian Hunter <adrian.hunter@intel.com>
Fixes: 8af099037527 ("KVM: TDX: Save and restore IA32_DEBUGCTL")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
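For readers skimming the diff, a minimal sketch of the intended flow after
this patch (illustrative only, not standalone code; the field and helper
names match the series):

	/* vcpu_enter_guest(), after IRQs are disabled: snapshot the host value. */
	vcpu->arch.host_debugctl = get_debugctlmsr();

	/* ... tdx_vcpu_enter_exit() runs the TD vCPU ... */

	/* tdx_vcpu_run(): restore from the common snapshot, not from vcpu_vt. */
	if (vcpu->arch.host_debugctl & ~TDX_DEBUGCTL_PRESERVED)
		update_debugctlmsr(vcpu->arch.host_debugctl);
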
arch/x86/kvm/vmx/common.h | 2 --
arch/x86/kvm/vmx/tdx.c | 6 ++----
2 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index a0c5e8781c33..bc5ece76533a 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -53,8 +53,6 @@ struct vcpu_vt {
#ifdef CONFIG_X86_64
u64 msr_host_kernel_gs_base;
#endif
-
- unsigned long host_debugctlmsr;
};
#ifdef CONFIG_KVM_INTEL_TDX
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index b952bc673271..3cfe89aad68e 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -778,8 +778,6 @@ void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
else
vt->msr_host_kernel_gs_base = read_msr(MSR_KERNEL_GS_BASE);
- vt->host_debugctlmsr = get_debugctlmsr();
-
vt->guest_state_loaded = true;
}
@@ -1055,8 +1053,8 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
tdx_vcpu_enter_exit(vcpu);
- if (vt->host_debugctlmsr & ~TDX_DEBUGCTL_PRESERVED)
- update_debugctlmsr(vt->host_debugctlmsr);
+ if (vcpu->arch.host_debugctl & ~TDX_DEBUGCTL_PRESERVED)
+ update_debugctlmsr(vcpu->arch.host_debugctl);
tdx_load_host_xsave_state(vcpu);
tdx->guest_entered = true;
--
2.50.0.rc0.642.g800a2b2222-goog
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH v6 1/8] KVM: TDX: Use kvm_arch_vcpu.host_debugctl to restore the host's DEBUGCTL
2025-06-10 23:20 ` [PATCH v6 1/8] KVM: TDX: Use kvm_arch_vcpu.host_debugctl to restore the host's DEBUGCTL Sean Christopherson
@ 2025-06-11 9:29 ` Adrian Hunter
0 siblings, 0 replies; 17+ messages in thread
From: Adrian Hunter @ 2025-06-11 9:29 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel, Maxim Levitsky
On 11/06/2025 02:20, Sean Christopherson wrote:
> Use the kvm_arch_vcpu.host_debugctl snapshot to restore DEBUGCTL after
> running a TD vCPU. The final TDX series rebase was mishandled, likely due
> to commit fb71c7959356 ("KVM: x86: Snapshot the host's DEBUGCTL in common
> x86") deleting the same line of code from vmx.h, i.e. creating a semantic
> conflict of sorts, but no syntactic conflict.
>
> Using the version in kvm_vcpu_arch picks up the ulong => u64 fix (which
> isn't relevant to TDX) as well as the IRQ fix from commit 189ecdb3e112
> ("KVM: x86: Snapshot the host's DEBUGCTL after disabling IRQs").
>
> Link: https://lore.kernel.org/all/20250307212053.2948340-10-pbonzini@redhat.com
> Cc: Adrian Hunter <adrian.hunter@intel.com>
> Fixes: 8af099037527 ("KVM: TDX: Save and restore IA32_DEBUGCTL")
> Signed-off-by: Sean Christopherson <seanjc@google.com>
Thanks for fixing this up!
Reviewed-by: Adrian Hunter <adrian.hunter@intel.com>
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v6 2/8] KVM: x86: Convert vcpu_run()'s immediate exit param into a generic bitmap
2025-06-10 23:20 [PATCH v6 0/8] KVM: VMX: Preserve host's DEBUGCTL.FREEZE_IN_SMM Sean Christopherson
2025-06-10 23:20 ` [PATCH v6 1/8] KVM: TDX: Use kvm_arch_vcpu.host_debugctl to restore the host's DEBUGCTL Sean Christopherson
@ 2025-06-10 23:20 ` Sean Christopherson
2025-06-10 23:20 ` [PATCH v6 3/8] KVM: x86: Drop kvm_x86_ops.set_dr6() in favor of a new KVM_RUN flag Sean Christopherson
` (6 subsequent siblings)
8 siblings, 0 replies; 17+ messages in thread
From: Sean Christopherson @ 2025-06-10 23:20 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Adrian Hunter, Maxim Levitsky
Convert kvm_x86_ops.vcpu_run()'s "force_immediate_exit" boolean parameter
into a generic bitmap so that similar "take action" information can be
passed to vendor code without creating a pile of boolean parameters.
This will allow dropping kvm_x86_ops.set_dr6() in favor of a new flag, and
will also allow for adding similar functionality for re-loading debugctl
in the active VMCS.
Opportunistically massage the TDX WARN and comment to prepare for adding
more run_flags, all of which are expected to be mutually exclusive with
TDX, i.e. should be WARNed on.
No functional change intended.
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
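As an aside, the bool-to-bitmap idiom in isolation, as a standalone,
compilable sketch (not KVM code; the names only mirror this patch):

	#include <stdio.h>
	#include <stdint.h>

	#define BIT(n)	(1ULL << (n))

	enum example_run_flags {
		EXAMPLE_RUN_FORCE_IMMEDIATE_EXIT = BIT(0),
		/* Future "take action" bits slot in without new parameters. */
	};

	static void example_vcpu_run(uint64_t run_flags)
	{
		int force_immediate_exit = !!(run_flags & EXAMPLE_RUN_FORCE_IMMEDIATE_EXIT);

		printf("force_immediate_exit = %d\n", force_immediate_exit);
	}

	int main(void)
	{
		uint64_t run_flags = 0;

		if (1 /* stand-in for req_immediate_exit */)
			run_flags |= EXAMPLE_RUN_FORCE_IMMEDIATE_EXIT;

		example_vcpu_run(run_flags);
		return 0;
	}
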
arch/x86/include/asm/kvm_host.h | 6 +++++-
arch/x86/kvm/svm/svm.c | 4 ++--
arch/x86/kvm/vmx/main.c | 6 +++---
arch/x86/kvm/vmx/tdx.c | 18 +++++++++---------
arch/x86/kvm/vmx/vmx.c | 3 ++-
arch/x86/kvm/vmx/x86_ops.h | 4 ++--
arch/x86/kvm/x86.c | 11 ++++++++---
7 files changed, 31 insertions(+), 21 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 330cdcbed1a6..3b5871ebd7e4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1673,6 +1673,10 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
return dest_mode_logical ? APIC_DEST_LOGICAL : APIC_DEST_PHYSICAL;
}
+enum kvm_x86_run_flags {
+ KVM_RUN_FORCE_IMMEDIATE_EXIT = BIT(0),
+};
+
struct kvm_x86_ops {
const char *name;
@@ -1754,7 +1758,7 @@ struct kvm_x86_ops {
int (*vcpu_pre_run)(struct kvm_vcpu *vcpu);
enum exit_fastpath_completion (*vcpu_run)(struct kvm_vcpu *vcpu,
- bool force_immediate_exit);
+ u64 run_flags);
int (*handle_exit)(struct kvm_vcpu *vcpu,
enum exit_fastpath_completion exit_fastpath);
int (*skip_emulated_instruction)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 0ad1a6d4fb6d..00d78090de3d 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4402,9 +4402,9 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in
guest_state_exit_irqoff();
}
-static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
- bool force_immediate_exit)
+static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
{
+ bool force_immediate_exit = run_flags & KVM_RUN_FORCE_IMMEDIATE_EXIT;
struct vcpu_svm *svm = to_svm(vcpu);
bool spec_ctrl_intercepted = msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL);
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index d1e02e567b57..fef3e3803707 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -175,12 +175,12 @@ static int vt_vcpu_pre_run(struct kvm_vcpu *vcpu)
return vmx_vcpu_pre_run(vcpu);
}
-static fastpath_t vt_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
+static fastpath_t vt_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
{
if (is_td_vcpu(vcpu))
- return tdx_vcpu_run(vcpu, force_immediate_exit);
+ return tdx_vcpu_run(vcpu, run_flags);
- return vmx_vcpu_run(vcpu, force_immediate_exit);
+ return vmx_vcpu_run(vcpu, run_flags);
}
static int vt_handle_exit(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 3cfe89aad68e..9a758d8b38ea 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1018,20 +1018,20 @@ static void tdx_load_host_xsave_state(struct kvm_vcpu *vcpu)
DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI | \
DEBUGCTLMSR_FREEZE_IN_SMM)
-fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
+fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
{
struct vcpu_tdx *tdx = to_tdx(vcpu);
struct vcpu_vt *vt = to_vt(vcpu);
/*
- * force_immediate_exit requires vCPU entering for events injection with
- * an immediately exit followed. But The TDX module doesn't guarantee
- * entry, it's already possible for KVM to _think_ it completely entry
- * to the guest without actually having done so.
- * Since KVM never needs to force an immediate exit for TDX, and can't
- * do direct injection, just warn on force_immediate_exit.
+ * WARN if KVM wants to force an immediate exit, as the TDX module does
+ * not guarantee entry into the guest, i.e. it's possible for KVM to
+ * _think_ it completed entry to the guest and forced an immediate exit
+ * without actually having done so. Luckily, KVM never needs to force
+ * an immediate exit for TDX (KVM can't do direct event injection), so
+ * just WARN and continue on.
*/
- WARN_ON_ONCE(force_immediate_exit);
+ WARN_ON_ONCE(run_flags);
/*
* Wait until retry of SEPT-zap-related SEAMCALL completes before
@@ -1041,7 +1041,7 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
if (unlikely(READ_ONCE(to_kvm_tdx(vcpu->kvm)->wait_for_sept_zap)))
return EXIT_FASTPATH_EXIT_HANDLED;
- trace_kvm_entry(vcpu, force_immediate_exit);
+ trace_kvm_entry(vcpu, run_flags & KVM_RUN_FORCE_IMMEDIATE_EXIT);
if (pi_test_on(&vt->pi_desc)) {
apic->send_IPI_self(POSTED_INTR_VECTOR);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9ff00ae9f05a..e66f5ffa8716 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7317,8 +7317,9 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
guest_state_exit_irqoff();
}
-fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
+fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
{
+ bool force_immediate_exit = run_flags & KVM_RUN_FORCE_IMMEDIATE_EXIT;
struct vcpu_vmx *vmx = to_vmx(vcpu);
unsigned long cr3, cr4;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index b4596f651232..0b4f5c5558d0 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -21,7 +21,7 @@ void vmx_vm_destroy(struct kvm *kvm);
int vmx_vcpu_precreate(struct kvm *kvm);
int vmx_vcpu_create(struct kvm_vcpu *vcpu);
int vmx_vcpu_pre_run(struct kvm_vcpu *vcpu);
-fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit);
+fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags);
void vmx_vcpu_free(struct kvm_vcpu *vcpu);
void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
@@ -133,7 +133,7 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
void tdx_vcpu_free(struct kvm_vcpu *vcpu);
void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
int tdx_vcpu_pre_run(struct kvm_vcpu *vcpu);
-fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit);
+fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags);
void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
void tdx_vcpu_put(struct kvm_vcpu *vcpu);
bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index dd34a2ec854c..d4a51b263d6b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10779,6 +10779,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
dm_request_for_irq_injection(vcpu) &&
kvm_cpu_accept_dm_intr(vcpu);
fastpath_t exit_fastpath;
+ u64 run_flags;
bool req_immediate_exit = false;
@@ -11023,8 +11024,11 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
goto cancel_injection;
}
- if (req_immediate_exit)
+ run_flags = 0;
+ if (req_immediate_exit) {
+ run_flags |= KVM_RUN_FORCE_IMMEDIATE_EXIT;
kvm_make_request(KVM_REQ_EVENT, vcpu);
+ }
fpregs_assert_state_consistent();
if (test_thread_flag(TIF_NEED_FPU_LOAD))
@@ -11061,8 +11065,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
WARN_ON_ONCE((kvm_vcpu_apicv_activated(vcpu) != kvm_vcpu_apicv_active(vcpu)) &&
(kvm_get_apic_mode(vcpu) != LAPIC_MODE_DISABLED));
- exit_fastpath = kvm_x86_call(vcpu_run)(vcpu,
- req_immediate_exit);
+ exit_fastpath = kvm_x86_call(vcpu_run)(vcpu, run_flags);
if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
break;
@@ -11074,6 +11077,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
break;
}
+ run_flags = 0;
+
/* Note, VM-Exits that go down the "slow" path are accounted below. */
++vcpu->stat.exits;
}
--
2.50.0.rc0.642.g800a2b2222-goog
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v6 3/8] KVM: x86: Drop kvm_x86_ops.set_dr6() in favor of a new KVM_RUN flag
2025-06-10 23:20 [PATCH v6 0/8] KVM: VMX: Preserve host's DEBUGCTL.FREEZE_IN_SMM Sean Christopherson
2025-06-10 23:20 ` [PATCH v6 1/8] KVM: TDX: Use kvm_arch_vcpu.host_debugctl to restore the host's DEBUGCTL Sean Christopherson
2025-06-10 23:20 ` [PATCH v6 2/8] KVM: x86: Convert vcpu_run()'s immediate exit param into a generic bitmap Sean Christopherson
@ 2025-06-10 23:20 ` Sean Christopherson
2025-06-10 23:20 ` [PATCH v6 4/8] KVM: VMX: Allow guest to set DEBUGCTL.RTM_DEBUG if RTM is supported Sean Christopherson
` (5 subsequent siblings)
8 siblings, 0 replies; 17+ messages in thread
From: Sean Christopherson @ 2025-06-10 23:20 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Adrian Hunter, Maxim Levitsky
Instruct vendor code to load the guest's DR6 into hardware via a new
KVM_RUN flag, and remove kvm_x86_ops.set_dr6(), whose sole purpose was to
load vcpu->arch.dr6 into hardware when DR6 can be read/written directly
by the guest.
Note, TDX already WARNs on any run_flag being set, i.e. will yell if KVM
thinks DR6 needs to be reloaded. TDX vCPUs force KVM_DEBUGREG_AUTO_SWITCH
and never clear the flag, i.e. should never observe KVM_RUN_LOAD_GUEST_DR6.
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
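For reference, a condensed sketch of how the pieces fit after this patch
(illustrative only; it simply stitches together the x86.c and vmx.c hunks
below):

	/* Common x86: request the DR6 reload via a run flag, not a callback. */
	if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))
		run_flags |= KVM_RUN_LOAD_GUEST_DR6;

	/* Vendor code (VMX shown): honor the flag right before VM-Enter. */
	if (run_flags & KVM_RUN_LOAD_GUEST_DR6)
		set_debugreg(vcpu->arch.dr6, 6);
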
arch/x86/include/asm/kvm-x86-ops.h | 1 -
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/kvm/svm/svm.c | 10 ++++++----
arch/x86/kvm/vmx/main.c | 9 ---------
arch/x86/kvm/vmx/vmx.c | 9 +++------
arch/x86/kvm/x86.c | 2 +-
6 files changed, 11 insertions(+), 22 deletions(-)
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 8d50e3e0a19b..9e0c37ea267e 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -49,7 +49,6 @@ KVM_X86_OP(set_idt)
KVM_X86_OP(get_gdt)
KVM_X86_OP(set_gdt)
KVM_X86_OP(sync_dirty_debug_regs)
-KVM_X86_OP(set_dr6)
KVM_X86_OP(set_dr7)
KVM_X86_OP(cache_reg)
KVM_X86_OP(get_rflags)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3b5871ebd7e4..3d6325369a4b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1675,6 +1675,7 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
enum kvm_x86_run_flags {
KVM_RUN_FORCE_IMMEDIATE_EXIT = BIT(0),
+ KVM_RUN_LOAD_GUEST_DR6 = BIT(1),
};
struct kvm_x86_ops {
@@ -1727,7 +1728,6 @@ struct kvm_x86_ops {
void (*get_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
void (*set_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
void (*sync_dirty_debug_regs)(struct kvm_vcpu *vcpu);
- void (*set_dr6)(struct kvm_vcpu *vcpu, unsigned long value);
void (*set_dr7)(struct kvm_vcpu *vcpu, unsigned long value);
void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 00d78090de3d..171e5be16802 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4451,10 +4451,13 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
svm_hv_update_vp_id(svm->vmcb, vcpu);
/*
- * Run with all-zero DR6 unless needed, so that we can get the exact cause
- * of a #DB.
+ * Run with all-zero DR6 unless the guest can write DR6 freely, so that
+ * KVM can get the exact cause of a #DB. Note, loading guest DR6 from
+ * KVM's snapshot is only necessary when DR accesses won't exit.
*/
- if (likely(!(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)))
+ if (unlikely(run_flags & KVM_RUN_LOAD_GUEST_DR6))
+ svm_set_dr6(vcpu, vcpu->arch.dr6);
+ else if (likely(!(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)))
svm_set_dr6(vcpu, DR6_ACTIVE_LOW);
clgi();
@@ -5265,7 +5268,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.set_idt = svm_set_idt,
.get_gdt = svm_get_gdt,
.set_gdt = svm_set_gdt,
- .set_dr6 = svm_set_dr6,
.set_dr7 = svm_set_dr7,
.sync_dirty_debug_regs = svm_sync_dirty_debug_regs,
.cache_reg = svm_cache_reg,
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index fef3e3803707..c85cbce6d2f6 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -489,14 +489,6 @@ static void vt_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
vmx_set_gdt(vcpu, dt);
}
-static void vt_set_dr6(struct kvm_vcpu *vcpu, unsigned long val)
-{
- if (is_td_vcpu(vcpu))
- return;
-
- vmx_set_dr6(vcpu, val);
-}
-
static void vt_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
{
if (is_td_vcpu(vcpu))
@@ -943,7 +935,6 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.set_idt = vt_op(set_idt),
.get_gdt = vt_op(get_gdt),
.set_gdt = vt_op(set_gdt),
- .set_dr6 = vt_op(set_dr6),
.set_dr7 = vt_op(set_dr7),
.sync_dirty_debug_regs = vt_op(sync_dirty_debug_regs),
.cache_reg = vt_op(cache_reg),
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e66f5ffa8716..49bf58ca9ffd 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5604,12 +5604,6 @@ void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
set_debugreg(DR6_RESERVED, 6);
}
-void vmx_set_dr6(struct kvm_vcpu *vcpu, unsigned long val)
-{
- lockdep_assert_irqs_disabled();
- set_debugreg(vcpu->arch.dr6, 6);
-}
-
void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
{
vmcs_writel(GUEST_DR7, val);
@@ -7364,6 +7358,9 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
vmcs_writel(GUEST_RIP, vcpu->arch.regs[VCPU_REGS_RIP]);
vcpu->arch.regs_dirty = 0;
+ if (run_flags & KVM_RUN_LOAD_GUEST_DR6)
+ set_debugreg(vcpu->arch.dr6, 6);
+
/*
* Refresh vmcs.HOST_CR3 if necessary. This must be done immediately
* prior to VM-Enter, as the kernel may load a new ASID (PCID) any time
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d4a51b263d6b..6742eb556d91 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11046,7 +11046,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
set_debugreg(vcpu->arch.eff_db[3], 3);
/* When KVM_DEBUGREG_WONT_EXIT, dr6 is accessible in guest. */
if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))
- kvm_x86_call(set_dr6)(vcpu, vcpu->arch.dr6);
+ run_flags |= KVM_RUN_LOAD_GUEST_DR6;
} else if (unlikely(hw_breakpoint_active())) {
set_debugreg(0, 7);
}
--
2.50.0.rc0.642.g800a2b2222-goog
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v6 4/8] KVM: VMX: Allow guest to set DEBUGCTL.RTM_DEBUG if RTM is supported
2025-06-10 23:20 [PATCH v6 0/8] KVM: VMX: Preserve host's DEBUGCTL.FREEZE_IN_SMM Sean Christopherson
` (2 preceding siblings ...)
2025-06-10 23:20 ` [PATCH v6 3/8] KVM: x86: Drop kvm_x86_ops.set_dr6() in favor of a new KVM_RUN flag Sean Christopherson
@ 2025-06-10 23:20 ` Sean Christopherson
2025-06-10 23:20 ` [PATCH v6 5/8] KVM: VMX: Extract checking of guest's DEBUGCTL into helper Sean Christopherson
` (4 subsequent siblings)
8 siblings, 0 replies; 17+ messages in thread
From: Sean Christopherson @ 2025-06-10 23:20 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Adrian Hunter, Maxim Levitsky
Let the guest set DEBUGCTL.RTM_DEBUG if RTM is supported according to the
guest CPUID model, as debug support is supposed to be available if RTM is
supported, and there are no known downsides to letting the guest debug RTM
aborts.
Note, there are no known bug reports related to RTM_DEBUG; the primary
motivation is to reduce the probability of breaking existing guests when a
future change adds a missing consistency check on vmcs12.GUEST_DEBUGCTL
(KVM currently lets L2 run with whatever hardware supports; whoops).
Note #2, KVM already emulates DR6.RTM, and doesn't restrict access to
DR7.RTM.
Fixes: 83c529151ab0 ("KVM: x86: expose Intel cpu new features (HLE, RTM) to guest")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/msr-index.h | 1 +
arch/x86/kvm/vmx/vmx.c | 4 ++++
2 files changed, 5 insertions(+)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index e7d2f460fcc6..1229396a059f 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -419,6 +419,7 @@
#define DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI (1UL << 12)
#define DEBUGCTLMSR_FREEZE_IN_SMM_BIT 14
#define DEBUGCTLMSR_FREEZE_IN_SMM (1UL << DEBUGCTLMSR_FREEZE_IN_SMM_BIT)
+#define DEBUGCTLMSR_RTM_DEBUG BIT(15)
#define MSR_PEBS_FRONTEND 0x000003f7
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 49bf58ca9ffd..ab5c742db140 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2184,6 +2184,10 @@ static u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated
(host_initiated || intel_pmu_lbr_is_enabled(vcpu)))
debugctl |= DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI;
+ if (boot_cpu_has(X86_FEATURE_RTM) &&
+ (host_initiated || guest_cpu_cap_has(vcpu, X86_FEATURE_RTM)))
+ debugctl |= DEBUGCTLMSR_RTM_DEBUG;
+
return debugctl;
}
--
2.50.0.rc0.642.g800a2b2222-goog
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v6 5/8] KVM: VMX: Extract checking of guest's DEBUGCTL into helper
2025-06-10 23:20 [PATCH v6 0/8] KVM: VMX: Preserve host's DEBUGCTL.FREEZE_IN_SMM Sean Christopherson
` (3 preceding siblings ...)
2025-06-10 23:20 ` [PATCH v6 4/8] KVM: VMX: Allow guest to set DEBUGCTL.RTM_DEBUG if RTM is supported Sean Christopherson
@ 2025-06-10 23:20 ` Sean Christopherson
2025-06-11 8:55 ` Mi, Dapeng
2025-06-10 23:20 ` [PATCH v6 6/8] KVM: nVMX: Check vmcs12->guest_ia32_debugctl on nested VM-Enter Sean Christopherson
` (3 subsequent siblings)
8 siblings, 1 reply; 17+ messages in thread
From: Sean Christopherson @ 2025-06-10 23:20 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Adrian Hunter, Maxim Levitsky
Move VMX's logic to check DEBUGCTL values into a standalone helper so that
the code can be used by nested VM-Enter to apply the same logic to the
value being loaded from vmcs12.
KVM needs to explicitly check vmcs12->guest_ia32_debugctl on nested
VM-Enter, as hardware may support features that KVM does not, i.e. relying
on hardware to detect invalid guest state will result in false negatives.
Unfortunately, that means applying KVM's funky suppression of BTF and LBR
to vmcs12 so as not to break existing guests.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/vmx/vmx.c | 29 +++++++++++++++++------------
1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ab5c742db140..358c7036272a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2191,6 +2191,19 @@ static u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated
return debugctl;
}
+static bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data,
+ bool host_initiated)
+{
+ u64 invalid;
+
+ invalid = data & ~vmx_get_supported_debugctl(vcpu, host_initiated);
+ if (invalid & (DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR)) {
+ kvm_pr_unimpl_wrmsr(vcpu, MSR_IA32_DEBUGCTLMSR, data);
+ invalid &= ~(DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR);
+ }
+ return !invalid;
+}
+
/*
* Writes msr value into the appropriate "register".
* Returns 0 on success, non-0 otherwise.
@@ -2259,19 +2272,12 @@ int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
}
vmcs_writel(GUEST_SYSENTER_ESP, data);
break;
- case MSR_IA32_DEBUGCTLMSR: {
- u64 invalid;
-
- invalid = data & ~vmx_get_supported_debugctl(vcpu, msr_info->host_initiated);
- if (invalid & (DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR)) {
- kvm_pr_unimpl_wrmsr(vcpu, msr_index, data);
- data &= ~(DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR);
- invalid &= ~(DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR);
- }
-
- if (invalid)
+ case MSR_IA32_DEBUGCTLMSR:
+ if (!vmx_is_valid_debugctl(vcpu, data, msr_info->host_initiated))
return 1;
+ data &= vmx_get_supported_debugctl(vcpu, msr_info->host_initiated);
+
if (is_guest_mode(vcpu) && get_vmcs12(vcpu)->vm_exit_controls &
VM_EXIT_SAVE_DEBUG_CONTROLS)
get_vmcs12(vcpu)->guest_ia32_debugctl = data;
@@ -2281,7 +2287,6 @@ int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
(data & DEBUGCTLMSR_LBR))
intel_pmu_create_guest_lbr_event(vcpu);
return 0;
- }
case MSR_IA32_BNDCFGS:
if (!kvm_mpx_supported() ||
(!msr_info->host_initiated &&
--
2.50.0.rc0.642.g800a2b2222-goog
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH v6 5/8] KVM: VMX: Extract checking of guest's DEBUGCTL into helper
2025-06-10 23:20 ` [PATCH v6 5/8] KVM: VMX: Extract checking of guest's DEBUGCTL into helper Sean Christopherson
@ 2025-06-11 8:55 ` Mi, Dapeng
0 siblings, 0 replies; 17+ messages in thread
From: Mi, Dapeng @ 2025-06-11 8:55 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Adrian Hunter, Maxim Levitsky
On 6/11/2025 7:20 AM, Sean Christopherson wrote:
> Move VMX's logic to check DEBUGCTL values into a standalone helper so that
> the code can be used by nested VM-Enter to apply the same logic to the
> value being loaded from vmcs12.
>
> KVM needs to explicitly check vmcs12->guest_ia32_debugctl on nested
> VM-Enter, as hardware may support features that KVM does not, i.e. relying
> on hardware to detect invalid guest state will result in false negatives.
> Unfortunately, that means applying KVM's funky suppression of BTF and LBR
> to vmcs12 so as not to break existing guests.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> arch/x86/kvm/vmx/vmx.c | 29 +++++++++++++++++------------
> 1 file changed, 17 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index ab5c742db140..358c7036272a 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -2191,6 +2191,19 @@ static u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated
> return debugctl;
> }
>
> +static bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data,
> + bool host_initiated)
> +{
> + u64 invalid;
> +
> + invalid = data & ~vmx_get_supported_debugctl(vcpu, host_initiated);
> + if (invalid & (DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR)) {
> + kvm_pr_unimpl_wrmsr(vcpu, MSR_IA32_DEBUGCTLMSR, data);
> + invalid &= ~(DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR);
nit: add spaces around "|".
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> + }
> + return !invalid;
> +}
> +
> /*
> * Writes msr value into the appropriate "register".
> * Returns 0 on success, non-0 otherwise.
> @@ -2259,19 +2272,12 @@ int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> }
> vmcs_writel(GUEST_SYSENTER_ESP, data);
> break;
> - case MSR_IA32_DEBUGCTLMSR: {
> - u64 invalid;
> -
> - invalid = data & ~vmx_get_supported_debugctl(vcpu, msr_info->host_initiated);
> - if (invalid & (DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR)) {
> - kvm_pr_unimpl_wrmsr(vcpu, msr_index, data);
> - data &= ~(DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR);
> - invalid &= ~(DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR);
> - }
> -
> - if (invalid)
> + case MSR_IA32_DEBUGCTLMSR:
> + if (!vmx_is_valid_debugctl(vcpu, data, msr_info->host_initiated))
> return 1;
>
> + data &= vmx_get_supported_debugctl(vcpu, msr_info->host_initiated);
> +
> if (is_guest_mode(vcpu) && get_vmcs12(vcpu)->vm_exit_controls &
> VM_EXIT_SAVE_DEBUG_CONTROLS)
> get_vmcs12(vcpu)->guest_ia32_debugctl = data;
> @@ -2281,7 +2287,6 @@ int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> (data & DEBUGCTLMSR_LBR))
> intel_pmu_create_guest_lbr_event(vcpu);
> return 0;
> - }
> case MSR_IA32_BNDCFGS:
> if (!kvm_mpx_supported() ||
> (!msr_info->host_initiated &&
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v6 6/8] KVM: nVMX: Check vmcs12->guest_ia32_debugctl on nested VM-Enter
2025-06-10 23:20 [PATCH v6 0/8] KVM: VMX: Preserve host's DEBUGCTL.FREEZE_IN_SMM Sean Christopherson
` (4 preceding siblings ...)
2025-06-10 23:20 ` [PATCH v6 5/8] KVM: VMX: Extract checking of guest's DEBUGCTL into helper Sean Christopherson
@ 2025-06-10 23:20 ` Sean Christopherson
2025-06-10 23:20 ` [PATCH v6 7/8] KVM: VMX: Wrap all accesses to IA32_DEBUGCTL with getter/setter APIs Sean Christopherson
` (2 subsequent siblings)
8 siblings, 0 replies; 17+ messages in thread
From: Sean Christopherson @ 2025-06-10 23:20 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Adrian Hunter, Maxim Levitsky
From: Maxim Levitsky <mlevitsk@redhat.com>
Add a consistency check for L2's guest_ia32_debugctl, as KVM only supports
a subset of hardware functionality, i.e. KVM can't rely on hardware to
detect illegal/unsupported values. Failure to check the vmcs12 value
would allow the guest to load any hardware-supported value while running L2.
Take care to exempt BTF and LBR from the validity check in order to match
KVM's behavior for writes via WRMSR, but without clobbering vmcs12. Even
if VM_EXIT_SAVE_DEBUG_CONTROLS is set in vmcs12, L1 can reasonably expect
that vmcs12->guest_ia32_debugctl will not be modified if writes to the MSR
are being intercepted.
Arguably, KVM _should_ update vmcs12 if VM_EXIT_SAVE_DEBUG_CONTROLS is set
*and* writes to MSR_IA32_DEBUGCTLMSR are not being intercepted by L1, but
that would incur non-trivial complexity and wouldn't change the fact that
KVM's handling of DEBUGCTL is blatantly broken. I.e. the extra complexity
is not worth carrying.
Cc: stable@vger.kernel.org
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
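To make the BTF/LBR carve-out concrete, a hedged sketch of what the new check
rejects versus what actually gets loaded (it mirrors the nested.c hunks below):

	/* Consistency check: unsupported bits other than BTF/LBR fail VM-Enter. */
	if (CC(!vmx_is_valid_debugctl(vcpu, vmcs12->guest_ia32_debugctl, false)))
		return -EINVAL;

	/*
	 * Load: unsupported bits (including BTF/LBR when they aren't supported)
	 * are silently dropped, without writing the sanitized value into vmcs12.
	 */
	vmcs_write64(GUEST_IA32_DEBUGCTL, vmcs12->guest_ia32_debugctl &
					  vmx_get_supported_debugctl(vcpu, false));
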
arch/x86/kvm/vmx/nested.c | 12 ++++++++++--
arch/x86/kvm/vmx/vmx.c | 5 ++---
arch/x86/kvm/vmx/vmx.h | 3 +++
3 files changed, 15 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index b50f9c894ab8..5a6c636954eb 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2662,7 +2662,8 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
if (vmx->nested.nested_run_pending &&
(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS)) {
kvm_set_dr(vcpu, 7, vmcs12->guest_dr7);
- vmcs_write64(GUEST_IA32_DEBUGCTL, vmcs12->guest_ia32_debugctl);
+ vmcs_write64(GUEST_IA32_DEBUGCTL, vmcs12->guest_ia32_debugctl &
+ vmx_get_supported_debugctl(vcpu, false));
} else {
kvm_set_dr(vcpu, 7, vcpu->arch.dr7);
vmcs_write64(GUEST_IA32_DEBUGCTL, vmx->nested.pre_vmenter_debugctl);
@@ -3155,7 +3156,8 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
return -EINVAL;
if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS) &&
- CC(!kvm_dr7_valid(vmcs12->guest_dr7)))
+ (CC(!kvm_dr7_valid(vmcs12->guest_dr7)) ||
+ CC(!vmx_is_valid_debugctl(vcpu, vmcs12->guest_ia32_debugctl, false))))
return -EINVAL;
if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PAT) &&
@@ -4607,6 +4609,12 @@ static void sync_vmcs02_to_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
(vmcs12->vm_entry_controls & ~VM_ENTRY_IA32E_MODE) |
(vm_entry_controls_get(to_vmx(vcpu)) & VM_ENTRY_IA32E_MODE);
+ /*
+ * Note! Save DR7, but intentionally don't grab DEBUGCTL from vmcs02.
+ * Writes to DEBUGCTL that aren't intercepted by L1 are immediately
+ * propagated to vmcs12 (see vmx_set_msr()), as the value loaded into
+ * vmcs02 doesn't strictly track vmcs12.
+ */
if (vmcs12->vm_exit_controls & VM_EXIT_SAVE_DEBUG_CONTROLS)
vmcs12->guest_dr7 = vcpu->arch.dr7;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 358c7036272a..b685e43de4e9 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2172,7 +2172,7 @@ static u64 nested_vmx_truncate_sysenter_addr(struct kvm_vcpu *vcpu,
return (unsigned long)data;
}
-static u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated)
+u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated)
{
u64 debugctl = 0;
@@ -2191,8 +2191,7 @@ static u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated
return debugctl;
}
-static bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data,
- bool host_initiated)
+bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data, bool host_initiated)
{
u64 invalid;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index b5758c33c60f..392e66c7e5fe 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -414,6 +414,9 @@ static inline void vmx_set_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr,
void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu);
+u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated);
+bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data, bool host_initiated);
+
/*
* Note, early Intel manuals have the write-low and read-high bitmap offsets
* the wrong way round. The bitmaps control MSRs 0x00000000-0x00001fff and
--
2.50.0.rc0.642.g800a2b2222-goog
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v6 7/8] KVM: VMX: Wrap all accesses to IA32_DEBUGCTL with getter/setter APIs
2025-06-10 23:20 [PATCH v6 0/8] KVM: VMX: Preserve host's DEBUGCTL.FREEZE_IN_SMM Sean Christopherson
` (5 preceding siblings ...)
2025-06-10 23:20 ` [PATCH v6 6/8] KVM: nVMX: Check vmcs12->guest_ia32_debugctl on nested VM-Enter Sean Christopherson
@ 2025-06-10 23:20 ` Sean Christopherson
2025-06-11 8:58 ` Mi, Dapeng
2025-06-10 23:20 ` [PATCH v6 8/8] KVM: VMX: Preserve host's DEBUGCTLMSR_FREEZE_IN_SMM while running the guest Sean Christopherson
2025-06-24 19:38 ` [PATCH v6 0/8] KVM: VMX: Preserve host's DEBUGCTL.FREEZE_IN_SMM Sean Christopherson
8 siblings, 1 reply; 17+ messages in thread
From: Sean Christopherson @ 2025-06-10 23:20 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Adrian Hunter, Maxim Levitsky
From: Maxim Levitsky <mlevitsk@redhat.com>
Introduce vmx_guest_debugctl_{read,write}() to handle all accesses to
vmcs.GUEST_IA32_DEBUGCTL. This will allow stuffing FREEZE_IN_SMM into
GUEST_IA32_DEBUGCTL based on the host setting without bleeding the state
into the guest, and without needing to copy+paste the FREEZE_IN_SMM
logic into every patch that accesses GUEST_IA32_DEBUGCTL.
No functional change intended.
Cc: stable@vger.kernel.org
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
[sean: massage changelog, make inline, use in all prepare_vmcs02() cases]
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/vmx/nested.c | 10 +++++-----
arch/x86/kvm/vmx/pmu_intel.c | 8 ++++----
arch/x86/kvm/vmx/vmx.c | 8 +++++---
arch/x86/kvm/vmx/vmx.h | 10 ++++++++++
4 files changed, 24 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 5a6c636954eb..9edce9f411a3 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2662,11 +2662,11 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
if (vmx->nested.nested_run_pending &&
(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS)) {
kvm_set_dr(vcpu, 7, vmcs12->guest_dr7);
- vmcs_write64(GUEST_IA32_DEBUGCTL, vmcs12->guest_ia32_debugctl &
- vmx_get_supported_debugctl(vcpu, false));
+ vmx_guest_debugctl_write(vcpu, vmcs12->guest_ia32_debugctl &
+ vmx_get_supported_debugctl(vcpu, false));
} else {
kvm_set_dr(vcpu, 7, vcpu->arch.dr7);
- vmcs_write64(GUEST_IA32_DEBUGCTL, vmx->nested.pre_vmenter_debugctl);
+ vmx_guest_debugctl_write(vcpu, vmx->nested.pre_vmenter_debugctl);
}
if (kvm_mpx_supported() && (!vmx->nested.nested_run_pending ||
!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
@@ -3531,7 +3531,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
if (!vmx->nested.nested_run_pending ||
!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS))
- vmx->nested.pre_vmenter_debugctl = vmcs_read64(GUEST_IA32_DEBUGCTL);
+ vmx->nested.pre_vmenter_debugctl = vmx_guest_debugctl_read();
if (kvm_mpx_supported() &&
(!vmx->nested.nested_run_pending ||
!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
@@ -4805,7 +4805,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
__vmx_set_segment(vcpu, &seg, VCPU_SREG_LDTR);
kvm_set_dr(vcpu, 7, 0x400);
- vmcs_write64(GUEST_IA32_DEBUGCTL, 0);
+ vmx_guest_debugctl_write(vcpu, 0);
if (nested_vmx_load_msr(vcpu, vmcs12->vm_exit_msr_load_addr,
vmcs12->vm_exit_msr_load_count))
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 8a94b52c5731..578b4ef58260 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -652,11 +652,11 @@ static void intel_pmu_reset(struct kvm_vcpu *vcpu)
*/
static void intel_pmu_legacy_freezing_lbrs_on_pmi(struct kvm_vcpu *vcpu)
{
- u64 data = vmcs_read64(GUEST_IA32_DEBUGCTL);
+ u64 data = vmx_guest_debugctl_read();
if (data & DEBUGCTLMSR_FREEZE_LBRS_ON_PMI) {
data &= ~DEBUGCTLMSR_LBR;
- vmcs_write64(GUEST_IA32_DEBUGCTL, data);
+ vmx_guest_debugctl_write(vcpu, data);
}
}
@@ -729,7 +729,7 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
if (!lbr_desc->event) {
vmx_disable_lbr_msrs_passthrough(vcpu);
- if (vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR)
+ if (vmx_guest_debugctl_read() & DEBUGCTLMSR_LBR)
goto warn;
if (test_bit(INTEL_PMC_IDX_FIXED_VLBR, pmu->pmc_in_use))
goto warn;
@@ -751,7 +751,7 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
{
- if (!(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR))
+ if (!(vmx_guest_debugctl_read() & DEBUGCTLMSR_LBR))
intel_pmu_release_guest_lbr_event(vcpu);
}
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b685e43de4e9..196f33d934d3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2147,7 +2147,7 @@ int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
msr_info->data = vmx->pt_desc.guest.addr_a[index / 2];
break;
case MSR_IA32_DEBUGCTLMSR:
- msr_info->data = vmcs_read64(GUEST_IA32_DEBUGCTL);
+ msr_info->data = vmx_guest_debugctl_read();
break;
default:
find_uret_msr:
@@ -2281,7 +2281,8 @@ int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
VM_EXIT_SAVE_DEBUG_CONTROLS)
get_vmcs12(vcpu)->guest_ia32_debugctl = data;
- vmcs_write64(GUEST_IA32_DEBUGCTL, data);
+ vmx_guest_debugctl_write(vcpu, data);
+
if (intel_pmu_lbr_is_enabled(vcpu) && !to_vmx(vcpu)->lbr_desc.event &&
(data & DEBUGCTLMSR_LBR))
intel_pmu_create_guest_lbr_event(vcpu);
@@ -4796,7 +4797,8 @@ static void init_vmcs(struct vcpu_vmx *vmx)
vmcs_write32(GUEST_SYSENTER_CS, 0);
vmcs_writel(GUEST_SYSENTER_ESP, 0);
vmcs_writel(GUEST_SYSENTER_EIP, 0);
- vmcs_write64(GUEST_IA32_DEBUGCTL, 0);
+
+ vmx_guest_debugctl_write(&vmx->vcpu, 0);
if (cpu_has_vmx_tpr_shadow()) {
vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, 0);
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 392e66c7e5fe..c20a4185d10a 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -417,6 +417,16 @@ void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu);
u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated);
bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data, bool host_initiated);
+static inline void vmx_guest_debugctl_write(struct kvm_vcpu *vcpu, u64 val)
+{
+ vmcs_write64(GUEST_IA32_DEBUGCTL, val);
+}
+
+static inline u64 vmx_guest_debugctl_read(void)
+{
+ return vmcs_read64(GUEST_IA32_DEBUGCTL);
+}
+
/*
* Note, early Intel manuals have the write-low and read-high bitmap offsets
* the wrong way round. The bitmaps control MSRs 0x00000000-0x00001fff and
--
2.50.0.rc0.642.g800a2b2222-goog
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH v6 7/8] KVM: VMX: Wrap all accesses to IA32_DEBUGCTL with getter/setter APIs
2025-06-10 23:20 ` [PATCH v6 7/8] KVM: VMX: Wrap all accesses to IA32_DEBUGCTL with getter/setter APIs Sean Christopherson
@ 2025-06-11 8:58 ` Mi, Dapeng
0 siblings, 0 replies; 17+ messages in thread
From: Mi, Dapeng @ 2025-06-11 8:58 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Adrian Hunter, Maxim Levitsky
On 6/11/2025 7:20 AM, Sean Christopherson wrote:
> From: Maxim Levitsky <mlevitsk@redhat.com>
>
> Introduce vmx_guest_debugctl_{read,write}() to handle all accesses to
> vmcs.GUEST_IA32_DEBUGCTL. This will allow stuffing FREEZE_IN_SMM into
> GUEST_IA32_DEBUGCTL based on the host setting without bleeding the state
> into the guest, and without needing to copy+paste the FREEZE_IN_SMM
> logic into every patch that accesses GUEST_IA32_DEBUGCTL.
>
> No functional change intended.
>
> Cc: stable@vger.kernel.org
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> [sean: massage changelog, make inline, use in all prepare_vmcs02() cases]
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> arch/x86/kvm/vmx/nested.c | 10 +++++-----
> arch/x86/kvm/vmx/pmu_intel.c | 8 ++++----
> arch/x86/kvm/vmx/vmx.c | 8 +++++---
> arch/x86/kvm/vmx/vmx.h | 10 ++++++++++
> 4 files changed, 24 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 5a6c636954eb..9edce9f411a3 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -2662,11 +2662,11 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
> if (vmx->nested.nested_run_pending &&
> (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS)) {
> kvm_set_dr(vcpu, 7, vmcs12->guest_dr7);
> - vmcs_write64(GUEST_IA32_DEBUGCTL, vmcs12->guest_ia32_debugctl &
> - vmx_get_supported_debugctl(vcpu, false));
> + vmx_guest_debugctl_write(vcpu, vmcs12->guest_ia32_debugctl &
> + vmx_get_supported_debugctl(vcpu, false));
> } else {
> kvm_set_dr(vcpu, 7, vcpu->arch.dr7);
> - vmcs_write64(GUEST_IA32_DEBUGCTL, vmx->nested.pre_vmenter_debugctl);
> + vmx_guest_debugctl_write(vcpu, vmx->nested.pre_vmenter_debugctl);
> }
> if (kvm_mpx_supported() && (!vmx->nested.nested_run_pending ||
> !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
> @@ -3531,7 +3531,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
>
> if (!vmx->nested.nested_run_pending ||
> !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS))
> - vmx->nested.pre_vmenter_debugctl = vmcs_read64(GUEST_IA32_DEBUGCTL);
> + vmx->nested.pre_vmenter_debugctl = vmx_guest_debugctl_read();
> if (kvm_mpx_supported() &&
> (!vmx->nested.nested_run_pending ||
> !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
> @@ -4805,7 +4805,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
> __vmx_set_segment(vcpu, &seg, VCPU_SREG_LDTR);
>
> kvm_set_dr(vcpu, 7, 0x400);
> - vmcs_write64(GUEST_IA32_DEBUGCTL, 0);
> + vmx_guest_debugctl_write(vcpu, 0);
>
> if (nested_vmx_load_msr(vcpu, vmcs12->vm_exit_msr_load_addr,
> vmcs12->vm_exit_msr_load_count))
> diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
> index 8a94b52c5731..578b4ef58260 100644
> --- a/arch/x86/kvm/vmx/pmu_intel.c
> +++ b/arch/x86/kvm/vmx/pmu_intel.c
> @@ -652,11 +652,11 @@ static void intel_pmu_reset(struct kvm_vcpu *vcpu)
> */
> static void intel_pmu_legacy_freezing_lbrs_on_pmi(struct kvm_vcpu *vcpu)
> {
> - u64 data = vmcs_read64(GUEST_IA32_DEBUGCTL);
> + u64 data = vmx_guest_debugctl_read();
>
> if (data & DEBUGCTLMSR_FREEZE_LBRS_ON_PMI) {
> data &= ~DEBUGCTLMSR_LBR;
> - vmcs_write64(GUEST_IA32_DEBUGCTL, data);
> + vmx_guest_debugctl_write(vcpu, data);
> }
> }
>
> @@ -729,7 +729,7 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
>
> if (!lbr_desc->event) {
> vmx_disable_lbr_msrs_passthrough(vcpu);
> - if (vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR)
> + if (vmx_guest_debugctl_read() & DEBUGCTLMSR_LBR)
> goto warn;
> if (test_bit(INTEL_PMC_IDX_FIXED_VLBR, pmu->pmc_in_use))
> goto warn;
> @@ -751,7 +751,7 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
>
> static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
> {
> - if (!(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR))
> + if (!(vmx_guest_debugctl_read() & DEBUGCTLMSR_LBR))
> intel_pmu_release_guest_lbr_event(vcpu);
> }
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index b685e43de4e9..196f33d934d3 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -2147,7 +2147,7 @@ int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> msr_info->data = vmx->pt_desc.guest.addr_a[index / 2];
> break;
> case MSR_IA32_DEBUGCTLMSR:
> - msr_info->data = vmcs_read64(GUEST_IA32_DEBUGCTL);
> + msr_info->data = vmx_guest_debugctl_read();
> break;
> default:
> find_uret_msr:
> @@ -2281,7 +2281,8 @@ int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> VM_EXIT_SAVE_DEBUG_CONTROLS)
> get_vmcs12(vcpu)->guest_ia32_debugctl = data;
>
> - vmcs_write64(GUEST_IA32_DEBUGCTL, data);
> + vmx_guest_debugctl_write(vcpu, data);
> +
> if (intel_pmu_lbr_is_enabled(vcpu) && !to_vmx(vcpu)->lbr_desc.event &&
> (data & DEBUGCTLMSR_LBR))
> intel_pmu_create_guest_lbr_event(vcpu);
> @@ -4796,7 +4797,8 @@ static void init_vmcs(struct vcpu_vmx *vmx)
> vmcs_write32(GUEST_SYSENTER_CS, 0);
> vmcs_writel(GUEST_SYSENTER_ESP, 0);
> vmcs_writel(GUEST_SYSENTER_EIP, 0);
> - vmcs_write64(GUEST_IA32_DEBUGCTL, 0);
> +
> + vmx_guest_debugctl_write(&vmx->vcpu, 0);
>
> if (cpu_has_vmx_tpr_shadow()) {
> vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, 0);
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 392e66c7e5fe..c20a4185d10a 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -417,6 +417,16 @@ void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu);
> u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated);
> bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data, bool host_initiated);
>
> +static inline void vmx_guest_debugctl_write(struct kvm_vcpu *vcpu, u64 val)
> +{
> + vmcs_write64(GUEST_IA32_DEBUGCTL, val);
> +}
> +
> +static inline u64 vmx_guest_debugctl_read(void)
> +{
> + return vmcs_read64(GUEST_IA32_DEBUGCTL);
> +}
> +
> /*
> * Note, early Intel manuals have the write-low and read-high bitmap offsets
> * the wrong way round. The bitmaps control MSRs 0x00000000-0x00001fff and
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v6 8/8] KVM: VMX: Preserve host's DEBUGCTLMSR_FREEZE_IN_SMM while running the guest
2025-06-10 23:20 [PATCH v6 0/8] KVM: VMX: Preserve host's DEBUGCTL.FREEZE_IN_SMM Sean Christopherson
` (6 preceding siblings ...)
2025-06-10 23:20 ` [PATCH v6 7/8] KVM: VMX: Wrap all accesses to IA32_DEBUGCTL with getter/setter APIs Sean Christopherson
@ 2025-06-10 23:20 ` Sean Christopherson
2025-06-24 19:59 ` mlevitsk
2025-06-24 19:38 ` [PATCH v6 0/8] KVM: VMX: Preserve host's DEBUGCTL.FREEZE_IN_SMM Sean Christopherson
8 siblings, 1 reply; 17+ messages in thread
From: Sean Christopherson @ 2025-06-10 23:20 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Adrian Hunter, Maxim Levitsky
From: Maxim Levitsky <mlevitsk@redhat.com>
Set/clear DEBUGCTLMSR_FREEZE_IN_SMM in GUEST_IA32_DEBUGCTL based on the
host's pre-VM-Enter value, i.e. preserve the host's FREEZE_IN_SMM setting
while running the guest. When running with the "default treatment of SMIs"
in effect (the only mode KVM supports), SMIs do not generate a VM-Exit that
is visible to host (non-SMM) software; instead, the CPU transitions directly
from VMX non-root to SMM. And critically, DEBUGCTL isn't context switched
by hardware on SMI or RSM, i.e. SMM will run with whatever value was
resident in hardware at the time of the SMI.
Failure to preserve FREEZE_IN_SMM results in the PMU unexpectedly counting
events while the CPU is executing in SMM, which can pollute profiling and
potentially leak information into the guest.
Check for changes in FREEZE_IN_SMM prior to every entry into KVM's inner
run loop, as the bit can be toggled in IRQ context via IPI callback (SMP
function call), by way of /sys/devices/cpu/freeze_on_smi.
Add a field in kvm_x86_ops to communicate which DEBUGCTL bits need to be
preserved, as FREEZE_IN_SMM is only supported and defined for Intel CPUs,
i.e. explicitly checking FREEZE_IN_SMM in common x86 is at best weird, and
at worst could lead to undesirable behavior in the future if AMD CPUs ever
happened to pick up a collision with the bit.
Exempt TDX vCPUs, i.e. protected guests, from the check, as the TDX Module
owns and controls GUEST_IA32_DEBUGCTL.
WARN in SVM if KVM_RUN_LOAD_DEBUGCTL is set, mostly to document that the
lack of handling isn't a KVM bug (TDX already WARNs on any run_flag).
Lastly, explicitly reload GUEST_IA32_DEBUGCTL on a VM-Fail that is missed
by KVM but detected by hardware, i.e. in nested_vmx_restore_host_state().
Doing so avoids the need to track host_debugctl on a per-VMCS basis, as
GUEST_IA32_DEBUGCTL is unconditionally written by prepare_vmcs02() and
load_vmcs12_host_state(). For the VM-Fail case, even though KVM won't
have actually entered the guest, vcpu_enter_guest() will have run with
vmcs02 active and thus could result in vmcs01 being run with a stale value.
Cc: stable@vger.kernel.org
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
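For readers following along, a condensed sketch of the end-to-end flow added
by this patch (illustrative only, not standalone code; it stitches together
the x86.c, vmx.c and vmx.h hunks below):

	/* x86.c, with IRQs disabled: detect changes to host-owned DEBUGCTL bits. */
	debug_ctl = get_debugctlmsr();
	if ((debug_ctl ^ vcpu->arch.host_debugctl) & kvm_x86_ops.HOST_OWNED_DEBUGCTL &&
	    !vcpu->arch.guest_state_protected)
		run_flags |= KVM_RUN_LOAD_DEBUGCTL;
	vcpu->arch.host_debugctl = debug_ctl;

	/* vmx.c: refresh vmcs.GUEST_IA32_DEBUGCTL only when the flag is set ... */
	if (run_flags & KVM_RUN_LOAD_DEBUGCTL)
		vmx_reload_guest_debugctl(vcpu);

	/* vmx.h: ... which re-stuffs the host's FREEZE_IN_SMM into the guest value. */
	val |= vcpu->arch.host_debugctl & DEBUGCTLMSR_FREEZE_IN_SMM;
	vmcs_write64(GUEST_IA32_DEBUGCTL, val);
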
arch/x86/include/asm/kvm_host.h | 7 +++++++
arch/x86/kvm/vmx/main.c | 2 ++
arch/x86/kvm/vmx/nested.c | 3 +++
arch/x86/kvm/vmx/vmx.c | 3 +++
arch/x86/kvm/vmx/vmx.h | 15 ++++++++++++++-
arch/x86/kvm/x86.c | 14 ++++++++++++--
6 files changed, 41 insertions(+), 3 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3d6325369a4b..e59527dd5a0b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1676,6 +1676,7 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
enum kvm_x86_run_flags {
KVM_RUN_FORCE_IMMEDIATE_EXIT = BIT(0),
KVM_RUN_LOAD_GUEST_DR6 = BIT(1),
+ KVM_RUN_LOAD_DEBUGCTL = BIT(2),
};
struct kvm_x86_ops {
@@ -1706,6 +1707,12 @@ struct kvm_x86_ops {
void (*vcpu_load)(struct kvm_vcpu *vcpu, int cpu);
void (*vcpu_put)(struct kvm_vcpu *vcpu);
+ /*
+ * Mask of DEBUGCTL bits that are owned by the host, i.e. that need to
+ * match the host's value even while the guest is active.
+ */
+ const u64 HOST_OWNED_DEBUGCTL;
+
void (*update_exception_bitmap)(struct kvm_vcpu *vcpu);
int (*get_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr);
int (*set_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr);
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index c85cbce6d2f6..4a6d4460f947 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -915,6 +915,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.vcpu_load = vt_op(vcpu_load),
.vcpu_put = vt_op(vcpu_put),
+ .HOST_OWNED_DEBUGCTL = DEBUGCTLMSR_FREEZE_IN_SMM,
+
.update_exception_bitmap = vt_op(update_exception_bitmap),
.get_feature_msr = vmx_get_feature_msr,
.get_msr = vt_op(get_msr),
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 9edce9f411a3..756c42e2d038 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4860,6 +4860,9 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
WARN_ON(kvm_set_dr(vcpu, 7, vmcs_readl(GUEST_DR7)));
}
+ /* Reload DEBUGCTL to ensure vmcs01 has a fresh FREEZE_IN_SMM value. */
+ vmx_reload_guest_debugctl(vcpu);
+
/*
* Note that calling vmx_set_{efer,cr0,cr4} is important as they
* handle a variety of side effects to KVM's software model.
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 196f33d934d3..70a115d99530 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7371,6 +7371,9 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
if (run_flags & KVM_RUN_LOAD_GUEST_DR6)
set_debugreg(vcpu->arch.dr6, 6);
+ if (run_flags & KVM_RUN_LOAD_DEBUGCTL)
+ vmx_reload_guest_debugctl(vcpu);
+
/*
* Refresh vmcs.HOST_CR3 if necessary. This must be done immediately
* prior to VM-Enter, as the kernel may load a new ASID (PCID) any time
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index c20a4185d10a..076af78af151 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -419,12 +419,25 @@ bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data, bool host_initiated)
static inline void vmx_guest_debugctl_write(struct kvm_vcpu *vcpu, u64 val)
{
+ WARN_ON_ONCE(val & DEBUGCTLMSR_FREEZE_IN_SMM);
+
+ val |= vcpu->arch.host_debugctl & DEBUGCTLMSR_FREEZE_IN_SMM;
vmcs_write64(GUEST_IA32_DEBUGCTL, val);
}
static inline u64 vmx_guest_debugctl_read(void)
{
- return vmcs_read64(GUEST_IA32_DEBUGCTL);
+ return vmcs_read64(GUEST_IA32_DEBUGCTL) & ~DEBUGCTLMSR_FREEZE_IN_SMM;
+}
+
+static inline void vmx_reload_guest_debugctl(struct kvm_vcpu *vcpu)
+{
+ u64 val = vmcs_read64(GUEST_IA32_DEBUGCTL);
+
+ if (!((val ^ vcpu->arch.host_debugctl) & DEBUGCTLMSR_FREEZE_IN_SMM))
+ return;
+
+ vmx_guest_debugctl_write(vcpu, val & ~DEBUGCTLMSR_FREEZE_IN_SMM);
}
/*
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6742eb556d91..811f4db824ab 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10779,7 +10779,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
dm_request_for_irq_injection(vcpu) &&
kvm_cpu_accept_dm_intr(vcpu);
fastpath_t exit_fastpath;
- u64 run_flags;
+ u64 run_flags, debug_ctl;
bool req_immediate_exit = false;
@@ -11051,7 +11051,17 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
set_debugreg(0, 7);
}
- vcpu->arch.host_debugctl = get_debugctlmsr();
+ /*
+ * Refresh the host DEBUGCTL snapshot after disabling IRQs, as DEBUGCTL
+ * can be modified in IRQ context, e.g. via SMP function calls. Inform
+ * vendor code if any host-owned bits were changed, e.g. so that the
+ * value loaded into hardware while running the guest can be updated.
+ */
+ debug_ctl = get_debugctlmsr();
+ if ((debug_ctl ^ vcpu->arch.host_debugctl) & kvm_x86_ops.HOST_OWNED_DEBUGCTL &&
+ !vcpu->arch.guest_state_protected)
+ run_flags |= KVM_RUN_LOAD_DEBUGCTL;
+ vcpu->arch.host_debugctl = debug_ctl;
guest_timing_enter_irqoff();
--
2.50.0.rc0.642.g800a2b2222-goog
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH v6 8/8] KVM: VMX: Preserve host's DEBUGCTLMSR_FREEZE_IN_SMM while running the guest
2025-06-10 23:20 ` [PATCH v6 8/8] KVM: VMX: Preserve host's DEBUGCTLMSR_FREEZE_IN_SMM while running the guest Sean Christopherson
@ 2025-06-24 19:59 ` mlevitsk
2025-06-26 16:17 ` Sean Christopherson
0 siblings, 1 reply; 17+ messages in thread
From: mlevitsk @ 2025-06-24 19:59 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel, Adrian Hunter
On Tue, 2025-06-10 at 16:20 -0700, Sean Christopherson wrote:
> From: Maxim Levitsky <mlevitsk@redhat.com>
>
> Set/clear DEBUGCTLMSR_FREEZE_IN_SMM in GUEST_IA32_DEBUGCTL based on the
> host's pre-VM-Enter value, i.e. preserve the host's FREEZE_IN_SMM setting
> while running the guest. When running with the "default treatment of SMIs"
> in effect (the only mode KVM supports), SMIs do not generate a VM-Exit that
> is visible to host (non-SMM) software; instead, the CPU transitions directly
> from VMX non-root to SMM. And critically, DEBUGCTL isn't context switched
> by hardware on SMI or RSM, i.e. SMM will run with whatever value was
> resident in hardware at the time of the SMI.
>
> Failure to preserve FREEZE_IN_SMM results in the PMU unexpectedly counting
> events while the CPU is executing in SMM, which can pollute profiling and
> potentially leak information into the guest.
>
> Check for changes in FREEZE_IN_SMM prior to every entry into KVM's inner
> run loop, as the bit can be toggled in IRQ context via IPI callback (SMP
> function call), by way of /sys/devices/cpu/freeze_on_smi.
>
> Add a field in kvm_x86_ops to communicate which DEBUGCTL bits need to be
> preserved, as FREEZE_IN_SMM is only supported and defined for Intel CPUs,
> i.e. explicitly checking FREEZE_IN_SMM in common x86 is at best weird, and
> at worst could lead to undesirable behavior in the future if AMD CPUs ever
> happened to pick up a collision with the bit.
>
> Exempt TDX vCPUs, i.e. protected guests, from the check, as the TDX Module
> owns and controls GUEST_IA32_DEBUGCTL.
>
> WARN in SVM if KVM_RUN_LOAD_DEBUGCTL is set, mostly to document that the
> lack of handling isn't a KVM bug (TDX already WARNs on any run_flag).
>
> Lastly, explicitly reload GUEST_IA32_DEBUGCTL on a VM-Fail that is missed
> by KVM but detected by hardware, i.e. in nested_vmx_restore_host_state().
> Doing so avoids the need to track host_debugctl on a per-VMCS basis, as
> GUEST_IA32_DEBUGCTL is unconditionally written by prepare_vmcs02() and
> load_vmcs12_host_state(). For the VM-Fail case, even though KVM won't
> have actually entered the guest, vcpu_enter_guest() will have run with
> vmcs02 active and thus could result in vmcs01 being run with a stale value.
>
> Cc: stable@vger.kernel.org
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> Co-developed-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> arch/x86/include/asm/kvm_host.h | 7 +++++++
> arch/x86/kvm/vmx/main.c | 2 ++
> arch/x86/kvm/vmx/nested.c | 3 +++
> arch/x86/kvm/vmx/vmx.c | 3 +++
> arch/x86/kvm/vmx/vmx.h | 15 ++++++++++++++-
> arch/x86/kvm/x86.c | 14 ++++++++++++--
> 6 files changed, 41 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 3d6325369a4b..e59527dd5a0b 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1676,6 +1676,7 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
> enum kvm_x86_run_flags {
> KVM_RUN_FORCE_IMMEDIATE_EXIT = BIT(0),
> KVM_RUN_LOAD_GUEST_DR6 = BIT(1),
> + KVM_RUN_LOAD_DEBUGCTL = BIT(2),
> };
>
> struct kvm_x86_ops {
> @@ -1706,6 +1707,12 @@ struct kvm_x86_ops {
> void (*vcpu_load)(struct kvm_vcpu *vcpu, int cpu);
> void (*vcpu_put)(struct kvm_vcpu *vcpu);
>
> + /*
> + * Mask of DEBUGCTL bits that are owned by the host, i.e. that need to
> + * match the host's value even while the guest is active.
> + */
> + const u64 HOST_OWNED_DEBUGCTL;
> +
> void (*update_exception_bitmap)(struct kvm_vcpu *vcpu);
> int (*get_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr);
> int (*set_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr);
> diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> index c85cbce6d2f6..4a6d4460f947 100644
> --- a/arch/x86/kvm/vmx/main.c
> +++ b/arch/x86/kvm/vmx/main.c
> @@ -915,6 +915,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
> .vcpu_load = vt_op(vcpu_load),
> .vcpu_put = vt_op(vcpu_put),
>
> + .HOST_OWNED_DEBUGCTL = DEBUGCTLMSR_FREEZE_IN_SMM,
> +
> .update_exception_bitmap = vt_op(update_exception_bitmap),
> .get_feature_msr = vmx_get_feature_msr,
> .get_msr = vt_op(get_msr),
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 9edce9f411a3..756c42e2d038 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -4860,6 +4860,9 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
> WARN_ON(kvm_set_dr(vcpu, 7, vmcs_readl(GUEST_DR7)));
> }
>
> + /* Reload DEBUGCTL to ensure vmcs01 has a fresh FREEZE_IN_SMM value. */
> + vmx_reload_guest_debugctl(vcpu);
> +
> /*
> * Note that calling vmx_set_{efer,cr0,cr4} is important as they
> * handle a variety of side effects to KVM's software model.
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 196f33d934d3..70a115d99530 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7371,6 +7371,9 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
> if (run_flags & KVM_RUN_LOAD_GUEST_DR6)
> set_debugreg(vcpu->arch.dr6, 6);
>
> + if (run_flags & KVM_RUN_LOAD_DEBUGCTL)
> + vmx_reload_guest_debugctl(vcpu);
> +
> /*
> * Refresh vmcs.HOST_CR3 if necessary. This must be done immediately
> * prior to VM-Enter, as the kernel may load a new ASID (PCID) any time
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index c20a4185d10a..076af78af151 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -419,12 +419,25 @@ bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data, bool host_initiated)
>
> static inline void vmx_guest_debugctl_write(struct kvm_vcpu *vcpu, u64 val)
> {
> + WARN_ON_ONCE(val & DEBUGCTLMSR_FREEZE_IN_SMM);
> +
> + val |= vcpu->arch.host_debugctl & DEBUGCTLMSR_FREEZE_IN_SMM;
> vmcs_write64(GUEST_IA32_DEBUGCTL, val);
> }
>
> static inline u64 vmx_guest_debugctl_read(void)
> {
> - return vmcs_read64(GUEST_IA32_DEBUGCTL);
> + return vmcs_read64(GUEST_IA32_DEBUGCTL) & ~DEBUGCTLMSR_FREEZE_IN_SMM;
> +}
> +
> +static inline void vmx_reload_guest_debugctl(struct kvm_vcpu *vcpu)
> +{
> + u64 val = vmcs_read64(GUEST_IA32_DEBUGCTL);
> +
> + if (!((val ^ vcpu->arch.host_debugctl) & DEBUGCTLMSR_FREEZE_IN_SMM))
> + return;
> +
> + vmx_guest_debugctl_write(vcpu, val & ~DEBUGCTLMSR_FREEZE_IN_SMM);
> }
Wouldn't it be better to use kvm_x86_ops.HOST_OWNED_DEBUGCTL here as well
to avoid logic duplication?
Besides this, everything else looks fine to me.
Best regards,
Maxim Levitsky
>
> /*
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 6742eb556d91..811f4db824ab 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10779,7 +10779,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> dm_request_for_irq_injection(vcpu) &&
> kvm_cpu_accept_dm_intr(vcpu);
> fastpath_t exit_fastpath;
> - u64 run_flags;
> + u64 run_flags, debug_ctl;
>
> bool req_immediate_exit = false;
>
> @@ -11051,7 +11051,17 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> set_debugreg(0, 7);
> }
>
> - vcpu->arch.host_debugctl = get_debugctlmsr();
> + /*
> + * Refresh the host DEBUGCTL snapshot after disabling IRQs, as DEBUGCTL
> + * can be modified in IRQ context, e.g. via SMP function calls. Inform
> + * vendor code if any host-owned bits were changed, e.g. so that the
> + * value loaded into hardware while running the guest can be updated.
> + */
> + debug_ctl = get_debugctlmsr();
> + if ((debug_ctl ^ vcpu->arch.host_debugctl) & kvm_x86_ops.HOST_OWNED_DEBUGCTL &&
> + !vcpu->arch.guest_state_protected)
> + run_flags |= KVM_RUN_LOAD_DEBUGCTL;
> + vcpu->arch.host_debugctl = debug_ctl;
>
> guest_timing_enter_irqoff();
>
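
A literal sketch of Maxim's suggestion above, purely illustrative and not part of any posted patch, assuming kvm_x86_ops is visible to these inlines (the fix ultimately applied in the follow-up below takes a different route via a macro):

static inline void vmx_guest_debugctl_write(struct kvm_vcpu *vcpu, u64 val)
{
	u64 host_owned = kvm_x86_ops.HOST_OWNED_DEBUGCTL;

	WARN_ON_ONCE(val & host_owned);

	val |= vcpu->arch.host_debugctl & host_owned;
	vmcs_write64(GUEST_IA32_DEBUGCTL, val);
}

static inline u64 vmx_guest_debugctl_read(void)
{
	return vmcs_read64(GUEST_IA32_DEBUGCTL) & ~kvm_x86_ops.HOST_OWNED_DEBUGCTL;
}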
* Re: [PATCH v6 8/8] KVM: VMX: Preserve host's DEBUGCTLMSR_FREEZE_IN_SMM while running the guest
2025-06-24 19:59 ` mlevitsk
@ 2025-06-26 16:17 ` Sean Christopherson
2025-06-26 17:07 ` mlevitsk
0 siblings, 1 reply; 17+ messages in thread
From: Sean Christopherson @ 2025-06-26 16:17 UTC (permalink / raw)
To: mlevitsk; +Cc: Paolo Bonzini, kvm, linux-kernel, Adrian Hunter
On Tue, Jun 24, 2025, mlevitsk@redhat.com wrote:
> On Tue, 2025-06-10 at 16:20 -0700, Sean Christopherson wrote:
> > diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> > index c20a4185d10a..076af78af151 100644
> > --- a/arch/x86/kvm/vmx/vmx.h
> > +++ b/arch/x86/kvm/vmx/vmx.h
> > @@ -419,12 +419,25 @@ bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data, bool host_initiated)
> >
> > static inline void vmx_guest_debugctl_write(struct kvm_vcpu *vcpu, u64 val)
> > {
> > + WARN_ON_ONCE(val & DEBUGCTLMSR_FREEZE_IN_SMM);
> > +
> > + val |= vcpu->arch.host_debugctl & DEBUGCTLMSR_FREEZE_IN_SMM;
> > vmcs_write64(GUEST_IA32_DEBUGCTL, val);
> > }
> >
> > static inline u64 vmx_guest_debugctl_read(void)
> > {
> > - return vmcs_read64(GUEST_IA32_DEBUGCTL);
> > + return vmcs_read64(GUEST_IA32_DEBUGCTL) & ~DEBUGCTLMSR_FREEZE_IN_SMM;
> > +}
> > +
> > +static inline void vmx_reload_guest_debugctl(struct kvm_vcpu *vcpu)
> > +{
> > + u64 val = vmcs_read64(GUEST_IA32_DEBUGCTL);
> > +
> > + if (!((val ^ vcpu->arch.host_debugctl) & DEBUGCTLMSR_FREEZE_IN_SMM))
> > + return;
> > +
> > + vmx_guest_debugctl_write(vcpu, val & ~DEBUGCTLMSR_FREEZE_IN_SMM);
> > }
>
>
> Wouldn't it be better to use kvm_x86_ops.HOST_OWNED_DEBUGCTL here as well
> to avoid logic duplication?
Hmm, yeah. I used DEBUGCTLMSR_FREEZE_IN_SMM directly to avoid a memory load
just to get at a constant literal.
What about this? It doesn't completely dedup the logic, but I think it gets us
close enough to a single source of truth.
--
From: Sean Christopherson <seanjc@google.com>
Date: Thu, 26 Jun 2025 09:14:20 -0700
Subject: [PATCH] KVM: VMX: Add a macro to track which DEBUGCTL bits are
host-owned
Add VMX_HOST_OWNED_DEBUGCTL_BITS to track which bits are host-owned, i.e.
need to be preserved when running the guest, to dedup the logic without
having to incur a memory load to get at kvm_x86_ops.HOST_OWNED_DEBUGCTL.
No functional change intended.
Suggested-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/vmx/main.c | 2 +-
arch/x86/kvm/vmx/vmx.h | 12 +++++++-----
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 8c6435fdda18..dbab1c15b0cd 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -883,7 +883,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.vcpu_load = vt_op(vcpu_load),
.vcpu_put = vt_op(vcpu_put),
- .HOST_OWNED_DEBUGCTL = DEBUGCTLMSR_FREEZE_IN_SMM,
+ .HOST_OWNED_DEBUGCTL = VMX_HOST_OWNED_DEBUGCTL_BITS,
.update_exception_bitmap = vt_op(update_exception_bitmap),
.get_feature_msr = vmx_get_feature_msr,
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 87174d961c85..d3389baf3ab3 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -410,27 +410,29 @@ void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu);
u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated);
bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data, bool host_initiated);
+#define VMX_HOST_OWNED_DEBUGCTL_BITS (DEBUGCTLMSR_FREEZE_IN_SMM)
+
static inline void vmx_guest_debugctl_write(struct kvm_vcpu *vcpu, u64 val)
{
- WARN_ON_ONCE(val & DEBUGCTLMSR_FREEZE_IN_SMM);
+ WARN_ON_ONCE(val & VMX_HOST_OWNED_DEBUGCTL_BITS);
- val |= vcpu->arch.host_debugctl & DEBUGCTLMSR_FREEZE_IN_SMM;
+ val |= vcpu->arch.host_debugctl & VMX_HOST_OWNED_DEBUGCTL_BITS;
vmcs_write64(GUEST_IA32_DEBUGCTL, val);
}
static inline u64 vmx_guest_debugctl_read(void)
{
- return vmcs_read64(GUEST_IA32_DEBUGCTL) & ~DEBUGCTLMSR_FREEZE_IN_SMM;
+ return vmcs_read64(GUEST_IA32_DEBUGCTL) & ~VMX_HOST_OWNED_DEBUGCTL_BITS;
}
static inline void vmx_reload_guest_debugctl(struct kvm_vcpu *vcpu)
{
u64 val = vmcs_read64(GUEST_IA32_DEBUGCTL);
- if (!((val ^ vcpu->arch.host_debugctl) & DEBUGCTLMSR_FREEZE_IN_SMM))
+ if (!((val ^ vcpu->arch.host_debugctl) & VMX_HOST_OWNED_DEBUGCTL_BITS))
return;
- vmx_guest_debugctl_write(vcpu, val & ~DEBUGCTLMSR_FREEZE_IN_SMM);
+ vmx_guest_debugctl_write(vcpu, val & ~VMX_HOST_OWNED_DEBUGCTL_BITS);
}
/*
base-commit: 6c7ecd725e503bf2ca69ff52c6cc48bb650b1f11
--
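
To make the intended semantics of the wrappers concrete: the write path stuffs the host-owned bits on top of the guest-requested value, and the guest-visible read masks them back out, so the guest never observes the host's FREEZE_IN_SMM toggling underneath it. A small userspace model of that round trip follows; the names are illustrative stand-ins, not the kernel implementation.

#include <assert.h>
#include <stdint.h>

#define FREEZE_IN_SMM	(UINT64_C(1) << 14)	/* host-owned bit */
#define HOST_OWNED	FREEZE_IN_SMM

static uint64_t vmcs_guest_debugctl;		/* stand-in for GUEST_IA32_DEBUGCTL */
static uint64_t host_debugctl = FREEZE_IN_SMM;	/* host snapshot, bit set */

/* Write path: guest-requested value plus the host-owned bits. */
static void guest_debugctl_write(uint64_t val)
{
	assert(!(val & HOST_OWNED));		/* mirrors the WARN_ON_ONCE() */
	vmcs_guest_debugctl = val | (host_debugctl & HOST_OWNED);
}

/* Read path: hide the host-owned bits from the guest. */
static uint64_t guest_debugctl_read(void)
{
	return vmcs_guest_debugctl & ~HOST_OWNED;
}

int main(void)
{
	uint64_t guest_val = 0x1;		/* e.g. DEBUGCTL.LBR */

	guest_debugctl_write(guest_val);
	assert(vmcs_guest_debugctl == (guest_val | FREEZE_IN_SMM));
	assert(guest_debugctl_read() == guest_val);
	return 0;
}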
* Re: [PATCH v6 8/8] KVM: VMX: Preserve host's DEBUGCTLMSR_FREEZE_IN_SMM while running the guest
2025-06-26 16:17 ` Sean Christopherson
@ 2025-06-26 17:07 ` mlevitsk
2025-07-10 23:11 ` Sean Christopherson
0 siblings, 1 reply; 17+ messages in thread
From: mlevitsk @ 2025-06-26 17:07 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Adrian Hunter
On Thu, 2025-06-26 at 09:17 -0700, Sean Christopherson wrote:
> On Tue, Jun 24, 2025, mlevitsk@redhat.com wrote:
> > On Tue, 2025-06-10 at 16:20 -0700, Sean Christopherson wrote:
> > > diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> > > index c20a4185d10a..076af78af151 100644
> > > --- a/arch/x86/kvm/vmx/vmx.h
> > > +++ b/arch/x86/kvm/vmx/vmx.h
> > > @@ -419,12 +419,25 @@ bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data, bool host_initiated)
> > >
> > > static inline void vmx_guest_debugctl_write(struct kvm_vcpu *vcpu, u64 val)
> > > {
> > > + WARN_ON_ONCE(val & DEBUGCTLMSR_FREEZE_IN_SMM);
> > > +
> > > + val |= vcpu->arch.host_debugctl & DEBUGCTLMSR_FREEZE_IN_SMM;
> > > vmcs_write64(GUEST_IA32_DEBUGCTL, val);
> > > }
> > >
> > > static inline u64 vmx_guest_debugctl_read(void)
> > > {
> > > - return vmcs_read64(GUEST_IA32_DEBUGCTL);
> > > + return vmcs_read64(GUEST_IA32_DEBUGCTL) & ~DEBUGCTLMSR_FREEZE_IN_SMM;
> > > +}
> > > +
> > > +static inline void vmx_reload_guest_debugctl(struct kvm_vcpu *vcpu)
> > > +{
> > > + u64 val = vmcs_read64(GUEST_IA32_DEBUGCTL);
> > > +
> > > + if (!((val ^ vcpu->arch.host_debugctl) & DEBUGCTLMSR_FREEZE_IN_SMM))
> > > + return;
> > > +
> > > + vmx_guest_debugctl_write(vcpu, val & ~DEBUGCTLMSR_FREEZE_IN_SMM);
> > > }
> >
> >
> > Wouldn't it be better to use kvm_x86_ops.HOST_OWNED_DEBUGCTL here as well
> > to avoid logic duplication?
>
> Hmm, yeah. I used DEBUGCTLMSR_FREEZE_IN_SMM directly to avoid a memory load
> just to get at a constant literal.
>
> What about this? It doesn't completely dedup the logic, but I think it gets us
> close enough to a single source of truth.
>
> --
> From: Sean Christopherson <seanjc@google.com>
> Date: Thu, 26 Jun 2025 09:14:20 -0700
> Subject: [PATCH] KVM: VMX: Add a macro to track which DEBUGCTL bits are
> host-owned
>
> Add VMX_HOST_OWNED_DEBUGCTL_BITS to track which bits are host-owned, i.e.
> need to be preserved when running the guest, to dedup the logic without
> having to incur a memory load to get at kvm_x86_ops.HOST_OWNED_DEBUGCTL.
>
> No functional change intended.
>
> Suggested-by: Maxim Levitsky <mlevitsk@redhat.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> arch/x86/kvm/vmx/main.c | 2 +-
> arch/x86/kvm/vmx/vmx.h | 12 +++++++-----
> 2 files changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> index 8c6435fdda18..dbab1c15b0cd 100644
> --- a/arch/x86/kvm/vmx/main.c
> +++ b/arch/x86/kvm/vmx/main.c
> @@ -883,7 +883,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
> .vcpu_load = vt_op(vcpu_load),
> .vcpu_put = vt_op(vcpu_put),
>
> - .HOST_OWNED_DEBUGCTL = DEBUGCTLMSR_FREEZE_IN_SMM,
> + .HOST_OWNED_DEBUGCTL = VMX_HOST_OWNED_DEBUGCTL_BITS,
>
> .update_exception_bitmap = vt_op(update_exception_bitmap),
> .get_feature_msr = vmx_get_feature_msr,
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 87174d961c85..d3389baf3ab3 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -410,27 +410,29 @@ void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu);
> u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated);
> bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data, bool host_initiated);
>
> +#define VMX_HOST_OWNED_DEBUGCTL_BITS (DEBUGCTLMSR_FREEZE_IN_SMM)
> +
> static inline void vmx_guest_debugctl_write(struct kvm_vcpu *vcpu, u64 val)
> {
> - WARN_ON_ONCE(val & DEBUGCTLMSR_FREEZE_IN_SMM);
> + WARN_ON_ONCE(val & VMX_HOST_OWNED_DEBUGCTL_BITS);
>
> - val |= vcpu->arch.host_debugctl & DEBUGCTLMSR_FREEZE_IN_SMM;
> + val |= vcpu->arch.host_debugctl & VMX_HOST_OWNED_DEBUGCTL_BITS;
> vmcs_write64(GUEST_IA32_DEBUGCTL, val);
> }
>
> static inline u64 vmx_guest_debugctl_read(void)
> {
> - return vmcs_read64(GUEST_IA32_DEBUGCTL) & ~DEBUGCTLMSR_FREEZE_IN_SMM;
> + return vmcs_read64(GUEST_IA32_DEBUGCTL) & ~VMX_HOST_OWNED_DEBUGCTL_BITS;
> }
>
> static inline void vmx_reload_guest_debugctl(struct kvm_vcpu *vcpu)
> {
> u64 val = vmcs_read64(GUEST_IA32_DEBUGCTL);
>
> - if (!((val ^ vcpu->arch.host_debugctl) & DEBUGCTLMSR_FREEZE_IN_SMM))
> + if (!((val ^ vcpu->arch.host_debugctl) & VMX_HOST_OWNED_DEBUGCTL_BITS))
> return;
>
> - vmx_guest_debugctl_write(vcpu, val & ~DEBUGCTLMSR_FREEZE_IN_SMM);
> + vmx_guest_debugctl_write(vcpu, val & ~VMX_HOST_OWNED_DEBUGCTL_BITS);
> }
>
> /*
>
> base-commit: 6c7ecd725e503bf2ca69ff52c6cc48bb650b1f11
> --
>
This looks reasonable.
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Best regards,
Maxim Levitsky
* Re: [PATCH v6 8/8] KVM: VMX: Preserve host's DEBUGCTLMSR_FREEZE_IN_SMM while running the guest
2025-06-26 17:07 ` mlevitsk
@ 2025-07-10 23:11 ` Sean Christopherson
0 siblings, 0 replies; 17+ messages in thread
From: Sean Christopherson @ 2025-07-10 23:11 UTC (permalink / raw)
To: mlevitsk; +Cc: Paolo Bonzini, kvm, linux-kernel, Adrian Hunter
On Thu, Jun 26, 2025, mlevitsk@redhat.com wrote:
> On Thu, 2025-06-26 at 09:17 -0700, Sean Christopherson wrote:
> > --
> > From: Sean Christopherson <seanjc@google.com>
> > Date: Thu, 26 Jun 2025 09:14:20 -0700
> > Subject: [PATCH] KVM: VMX: Add a macro to track which DEBUGCTL bits are
> > host-owned
> >
> > Add VMX_HOST_OWNED_DEBUGCTL_BITS to track which bits are host-owned, i.e.
> > need to be preserved when running the guest, to dedup the logic without
> > having to incur a memory load to get at kvm_x86_ops.HOST_OWNED_DEBUGCTL.
> >
> > No functional change intended.
> >
> > Suggested-by: Maxim Levitsky <mlevitsk@redhat.com>
> > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > ---
...
> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Applied to kvm-x86 misc, thanks!
[1/1] KVM: VMX: Add a macro to track which DEBUGCTL bits are host-owned
https://github.com/kvm-x86/linux/commit/e1ef1c57ff70
* Re: [PATCH v6 0/8] KVM: VMX: Preserve host's DEBUGCTL.FREEZE_IN_SMM
2025-06-10 23:20 [PATCH v6 0/8] KVM: VMX: Preserve host's DEBUGCTL.FREEZE_IN_SMM Sean Christopherson
` (7 preceding siblings ...)
2025-06-10 23:20 ` [PATCH v6 8/8] KVM: VMX: Preserve host's DEBUGCTLMSR_FREEZE_IN_SMM while running the guest Sean Christopherson
@ 2025-06-24 19:38 ` Sean Christopherson
8 siblings, 0 replies; 17+ messages in thread
From: Sean Christopherson @ 2025-06-24 19:38 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Adrian Hunter, Maxim Levitsky
On Tue, 10 Jun 2025 16:20:02 -0700, Sean Christopherson wrote:
> Preserve the host's FREEZE_IN_SMM setting by stuffing GUEST_DEBUGCTL, so that
> SMM activity doesn't bleed into PMU events while running the guest.
>
> Along the way, enforce the supported set of DEBUGCTL bits when processing
> vmcs12.GUEST_DEBUGCTL, as KVM can't rely on hardware to reject an MSR value
> that is supported in hardware.
>
> [...]
Applied to kvm-x86 misc, thanks!
[1/8] KVM: TDX: Use kvm_arch_vcpu.host_debugctl to restore the host's DEBUGCTL
https://github.com/kvm-x86/linux/commit/7d390a9da823
[2/8] KVM: x86: Convert vcpu_run()'s immediate exit param into a generic bitmap
https://github.com/kvm-x86/linux/commit/2478b1b220c4
[3/8] KVM: x86: Drop kvm_x86_ops.set_dr6() in favor of a new KVM_RUN flag
https://github.com/kvm-x86/linux/commit/80c64c7afea1
[4/8] KVM: VMX: Allow guest to set DEBUGCTL.RTM_DEBUG if RTM is supported
https://github.com/kvm-x86/linux/commit/17ec2f965344
[5/8] KVM: VMX: Extract checking of guest's DEBUGCTL into helper
https://github.com/kvm-x86/linux/commit/8a4351ac302c
[6/8] KVM: nVMX: Check vmcs12->guest_ia32_debugctl on nested VM-Enter
https://github.com/kvm-x86/linux/commit/095686e6fcb4
[7/8] KVM: VMX: Wrap all accesses to IA32_DEBUGCTL with getter/setter APIs
https://github.com/kvm-x86/linux/commit/7d0cce6cbe71
[8/8] KVM: VMX: Preserve host's DEBUGCTLMSR_FREEZE_IN_SMM while running the guest
https://github.com/kvm-x86/linux/commit/6b1dd26544d0
--
https://github.com/kvm-x86/kvm-unit-tests/tree/next