* [PATCH 0/5] KVM: x86: Fastpath cleanup, fix, and enhancement
From: Sean Christopherson @ 2024-08-02 19:51 UTC
To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel
This series was prompted by observations of HLT-exiting when debugging
a throughput issue related to posted interrupts. When KVM is running in
a nested scenario, a rather surprising number of HLT exits occur with an
unmasked interrupt already pending. I didn't debug too deeply into the
guest side of things, but I suspect what is happening is that it's fairly
easy for L2 to be interrupted (by L1 or L0) between checking if it (the
CPU) should enter an idle state and actually executing STI;HLT.
AFAICT, a non-nested setup doesn't benefit much, if at all. But, I don't
see any downside to checking for a wake event in the fastpath, e.g. it's
basically a "zero" time halt-polling mechanism.
The other patches fix flaws found by inspection when adding HLT-exiting
to the fastpath.
Note, the userspace-exit logic is basically untested, i.e. I probably
need to write a selftest...
Sean Christopherson (5):
KVM: x86: Re-enter guest if WRMSR(X2APIC_ICR) fastpath is successful
KVM: x86: Dedup fastpath MSR post-handling logic
KVM: x86: Exit to userspace if fastpath triggers one on instruction
skip
KVM: x86: Reorganize code in x86.c to co-locate vCPU blocking/running
helpers
KVM: x86: Add fastpath handling of HLT VM-Exits
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/svm/svm.c | 13 +-
arch/x86/kvm/vmx/vmx.c | 2 +
arch/x86/kvm/x86.c | 319 +++++++++++++++++---------------
arch/x86/kvm/x86.h | 1 +
5 files changed, 188 insertions(+), 148 deletions(-)
base-commit: 332d2c1d713e232e163386c35a3ba0c1b90df83f
--
2.46.0.rc2.264.g509ed76dc8-goog
* [PATCH 1/5] KVM: x86: Re-enter guest if WRMSR(X2APIC_ICR) fastpath is successful
From: Sean Christopherson @ 2024-08-02 19:51 UTC
To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel
Re-enter the guest in the fastpath if WRMSR emulation for x2APIC's ICR is
successful, as no additional work is needed, i.e. there is no code unique
for WRMSR exits between the fastpath and the "!= EXIT_FASTPATH_NONE" check
in __vmx_handle_exit().
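For reference, the check in question boils down to the following (paraphrased
sketch, not the verbatim upstream code):

    /*
     * In __vmx_handle_exit(): any fastpath result other than
     * EXIT_FASTPATH_NONE means the exit was fully handled with IRQs
     * disabled, so the full exit handler can be skipped.
     */
    if (exit_fastpath != EXIT_FASTPATH_NONE)
            return 1;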
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/x86.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index af6c8cf6a37a..cf397110953f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2173,7 +2173,7 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
data = kvm_read_edx_eax(vcpu);
if (!handle_fastpath_set_x2apic_icr_irqoff(vcpu, data)) {
kvm_skip_emulated_instruction(vcpu);
- ret = EXIT_FASTPATH_EXIT_HANDLED;
+ ret = EXIT_FASTPATH_REENTER_GUEST;
}
break;
case MSR_IA32_TSC_DEADLINE:
--
2.46.0.rc2.264.g509ed76dc8-goog
* [PATCH 2/5] KVM: x86: Dedup fastpath MSR post-handling logic
From: Sean Christopherson @ 2024-08-02 19:51 UTC
To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel
Now that the WRMSR fastpaths for x2APIC_ICR and TSC_DEADLINE are identical,
ignoring the backend MSR handling, consolidate the common bits of skipping
the instruction and setting the return value.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/x86.c | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cf397110953f..332584476129 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2164,31 +2164,32 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
{
u32 msr = kvm_rcx_read(vcpu);
u64 data;
- fastpath_t ret = EXIT_FASTPATH_NONE;
+ fastpath_t ret;
+ bool handled;
kvm_vcpu_srcu_read_lock(vcpu);
switch (msr) {
case APIC_BASE_MSR + (APIC_ICR >> 4):
data = kvm_read_edx_eax(vcpu);
- if (!handle_fastpath_set_x2apic_icr_irqoff(vcpu, data)) {
- kvm_skip_emulated_instruction(vcpu);
- ret = EXIT_FASTPATH_REENTER_GUEST;
- }
+ handled = !handle_fastpath_set_x2apic_icr_irqoff(vcpu, data);
break;
case MSR_IA32_TSC_DEADLINE:
data = kvm_read_edx_eax(vcpu);
- if (!handle_fastpath_set_tscdeadline(vcpu, data)) {
- kvm_skip_emulated_instruction(vcpu);
- ret = EXIT_FASTPATH_REENTER_GUEST;
- }
+ handled = !handle_fastpath_set_tscdeadline(vcpu, data);
break;
default:
+ handled = false;
break;
}
- if (ret != EXIT_FASTPATH_NONE)
+ if (handled) {
+ kvm_skip_emulated_instruction(vcpu);
+ ret = EXIT_FASTPATH_REENTER_GUEST;
trace_kvm_msr_write(msr, data);
+ } else {
+ ret = EXIT_FASTPATH_NONE;
+ }
kvm_vcpu_srcu_read_unlock(vcpu);
--
2.46.0.rc2.264.g509ed76dc8-goog
* [PATCH 3/5] KVM: x86: Exit to userspace if fastpath triggers one on instruction skip
From: Sean Christopherson @ 2024-08-02 19:51 UTC
To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel
Exit to userspace if a fastpath handler triggers such an exit, which can
happen when skipping the instruction, e.g. due to userspace
single-stepping the guest via KVM_GUESTDBG_SINGLESTEP or because of an
emulation failure.
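For context, the skip can fail because kvm_skip_emulated_instruction()
returns 0 when it queues an exit to userspace, roughly (simplified sketch of
the existing helper, not the verbatim code):

    int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
    {
            unsigned long rflags = kvm_x86_call(get_rflags)(vcpu);
            int r;

            r = kvm_x86_call(skip_emulated_instruction)(vcpu);
            if (unlikely(!r))
                    return 0;       /* skip failed, e.g. emulation failure */

            /*
             * Single-stepping: kvm_vcpu_do_singlestep() fills vcpu->run
             * with KVM_EXIT_DEBUG and returns 0.
             */
            if (unlikely(rflags & X86_EFLAGS_TF))
                    r = kvm_vcpu_do_singlestep(vcpu);
            return r;
    }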
Fixes: 404d5d7bff0d ("KVM: X86: Introduce more exit_fastpath_completion enum values")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/x86.c | 9 +++++++--
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 950a03e0181e..736dda300849 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -211,6 +211,7 @@ enum exit_fastpath_completion {
EXIT_FASTPATH_NONE,
EXIT_FASTPATH_REENTER_GUEST,
EXIT_FASTPATH_EXIT_HANDLED,
+ EXIT_FASTPATH_EXIT_USERSPACE,
};
typedef enum exit_fastpath_completion fastpath_t;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 332584476129..3c54a241696f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2184,8 +2184,10 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
}
if (handled) {
- kvm_skip_emulated_instruction(vcpu);
- ret = EXIT_FASTPATH_REENTER_GUEST;
+ if (!kvm_skip_emulated_instruction(vcpu))
+ ret = EXIT_FASTPATH_EXIT_USERSPACE;
+ else
+ ret = EXIT_FASTPATH_REENTER_GUEST;
trace_kvm_msr_write(msr, data);
} else {
ret = EXIT_FASTPATH_NONE;
@@ -11206,6 +11208,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
if (vcpu->arch.apic_attention)
kvm_lapic_sync_from_vapic(vcpu);
+ if (unlikely(exit_fastpath == EXIT_FASTPATH_EXIT_USERSPACE))
+ return 0;
+
r = kvm_x86_call(handle_exit)(vcpu, exit_fastpath);
return r;
--
2.46.0.rc2.264.g509ed76dc8-goog
* [PATCH 4/5] KVM: x86: Reorganize code in x86.c to co-locate vCPU blocking/running helpers
From: Sean Christopherson @ 2024-08-02 19:51 UTC
To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel
Shuffle code around in x86.c so that the various helpers related to vCPU
blocking/running logic are (a) located near each other and (b) ordered so
that HLT emulation can use kvm_vcpu_has_events() in a future patch.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/x86.c | 264 ++++++++++++++++++++++-----------------------
1 file changed, 132 insertions(+), 132 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3c54a241696f..46686504cd47 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9927,51 +9927,6 @@ void kvm_x86_vendor_exit(void)
}
EXPORT_SYMBOL_GPL(kvm_x86_vendor_exit);
-static int __kvm_emulate_halt(struct kvm_vcpu *vcpu, int state, int reason)
-{
- /*
- * The vCPU has halted, e.g. executed HLT. Update the run state if the
- * local APIC is in-kernel, the run loop will detect the non-runnable
- * state and halt the vCPU. Exit to userspace if the local APIC is
- * managed by userspace, in which case userspace is responsible for
- * handling wake events.
- */
- ++vcpu->stat.halt_exits;
- if (lapic_in_kernel(vcpu)) {
- vcpu->arch.mp_state = state;
- return 1;
- } else {
- vcpu->run->exit_reason = reason;
- return 0;
- }
-}
-
-int kvm_emulate_halt_noskip(struct kvm_vcpu *vcpu)
-{
- return __kvm_emulate_halt(vcpu, KVM_MP_STATE_HALTED, KVM_EXIT_HLT);
-}
-EXPORT_SYMBOL_GPL(kvm_emulate_halt_noskip);
-
-int kvm_emulate_halt(struct kvm_vcpu *vcpu)
-{
- int ret = kvm_skip_emulated_instruction(vcpu);
- /*
- * TODO: we might be squashing a GUESTDBG_SINGLESTEP-triggered
- * KVM_EXIT_DEBUG here.
- */
- return kvm_emulate_halt_noskip(vcpu) && ret;
-}
-EXPORT_SYMBOL_GPL(kvm_emulate_halt);
-
-int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu)
-{
- int ret = kvm_skip_emulated_instruction(vcpu);
-
- return __kvm_emulate_halt(vcpu, KVM_MP_STATE_AP_RESET_HOLD,
- KVM_EXIT_AP_RESET_HOLD) && ret;
-}
-EXPORT_SYMBOL_GPL(kvm_emulate_ap_reset_hold);
-
#ifdef CONFIG_X86_64
static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr,
unsigned long clock_type)
@@ -11224,6 +11179,67 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
return r;
}
+static bool kvm_vcpu_running(struct kvm_vcpu *vcpu)
+{
+ return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
+ !vcpu->arch.apf.halted);
+}
+
+static bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
+{
+ if (!list_empty_careful(&vcpu->async_pf.done))
+ return true;
+
+ if (kvm_apic_has_pending_init_or_sipi(vcpu) &&
+ kvm_apic_init_sipi_allowed(vcpu))
+ return true;
+
+ if (vcpu->arch.pv.pv_unhalted)
+ return true;
+
+ if (kvm_is_exception_pending(vcpu))
+ return true;
+
+ if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
+ (vcpu->arch.nmi_pending &&
+ kvm_x86_call(nmi_allowed)(vcpu, false)))
+ return true;
+
+#ifdef CONFIG_KVM_SMM
+ if (kvm_test_request(KVM_REQ_SMI, vcpu) ||
+ (vcpu->arch.smi_pending &&
+ kvm_x86_call(smi_allowed)(vcpu, false)))
+ return true;
+#endif
+
+ if (kvm_test_request(KVM_REQ_PMI, vcpu))
+ return true;
+
+ if (kvm_test_request(KVM_REQ_UPDATE_PROTECTED_GUEST_STATE, vcpu))
+ return true;
+
+ if (kvm_arch_interrupt_allowed(vcpu) && kvm_cpu_has_interrupt(vcpu))
+ return true;
+
+ if (kvm_hv_has_stimer_pending(vcpu))
+ return true;
+
+ if (is_guest_mode(vcpu) &&
+ kvm_x86_ops.nested_ops->has_events &&
+ kvm_x86_ops.nested_ops->has_events(vcpu, false))
+ return true;
+
+ if (kvm_xen_has_pending_events(vcpu))
+ return true;
+
+ return false;
+}
+
+int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_running(vcpu) || kvm_vcpu_has_events(vcpu);
+}
+
/* Called within kvm->srcu read side. */
static inline int vcpu_block(struct kvm_vcpu *vcpu)
{
@@ -11295,12 +11311,6 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
return 1;
}
-static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu)
-{
- return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
- !vcpu->arch.apf.halted);
-}
-
/* Called within kvm->srcu read side. */
static int vcpu_run(struct kvm_vcpu *vcpu)
{
@@ -11352,6 +11362,77 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
return r;
}
+static int __kvm_emulate_halt(struct kvm_vcpu *vcpu, int state, int reason)
+{
+ /*
+ * The vCPU has halted, e.g. executed HLT. Update the run state if the
+ * local APIC is in-kernel, the run loop will detect the non-runnable
+ * state and halt the vCPU. Exit to userspace if the local APIC is
+ * managed by userspace, in which case userspace is responsible for
+ * handling wake events.
+ */
+ ++vcpu->stat.halt_exits;
+ if (lapic_in_kernel(vcpu)) {
+ vcpu->arch.mp_state = state;
+ return 1;
+ } else {
+ vcpu->run->exit_reason = reason;
+ return 0;
+ }
+}
+
+int kvm_emulate_halt_noskip(struct kvm_vcpu *vcpu)
+{
+ return __kvm_emulate_halt(vcpu, KVM_MP_STATE_HALTED, KVM_EXIT_HLT);
+}
+EXPORT_SYMBOL_GPL(kvm_emulate_halt_noskip);
+
+int kvm_emulate_halt(struct kvm_vcpu *vcpu)
+{
+ int ret = kvm_skip_emulated_instruction(vcpu);
+ /*
+ * TODO: we might be squashing a GUESTDBG_SINGLESTEP-triggered
+ * KVM_EXIT_DEBUG here.
+ */
+ return kvm_emulate_halt_noskip(vcpu) && ret;
+}
+EXPORT_SYMBOL_GPL(kvm_emulate_halt);
+
+int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu)
+{
+ int ret = kvm_skip_emulated_instruction(vcpu);
+
+ return __kvm_emulate_halt(vcpu, KVM_MP_STATE_AP_RESET_HOLD,
+ KVM_EXIT_AP_RESET_HOLD) && ret;
+}
+EXPORT_SYMBOL_GPL(kvm_emulate_ap_reset_hold);
+
+bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_apicv_active(vcpu) &&
+ kvm_x86_call(dy_apicv_has_pending_interrupt)(vcpu);
+}
+
+bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
+{
+ return vcpu->arch.preempted_in_kernel;
+}
+
+bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
+{
+ if (READ_ONCE(vcpu->arch.pv.pv_unhalted))
+ return true;
+
+ if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
+#ifdef CONFIG_KVM_SMM
+ kvm_test_request(KVM_REQ_SMI, vcpu) ||
+#endif
+ kvm_test_request(KVM_REQ_EVENT, vcpu))
+ return true;
+
+ return kvm_arch_dy_has_pending_interrupt(vcpu);
+}
+
static inline int complete_emulated_io(struct kvm_vcpu *vcpu)
{
return kvm_emulate_instruction(vcpu, EMULTYPE_NO_DECODE);
@@ -13163,87 +13244,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
kvm_arch_free_memslot(kvm, old);
}
-static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
-{
- if (!list_empty_careful(&vcpu->async_pf.done))
- return true;
-
- if (kvm_apic_has_pending_init_or_sipi(vcpu) &&
- kvm_apic_init_sipi_allowed(vcpu))
- return true;
-
- if (vcpu->arch.pv.pv_unhalted)
- return true;
-
- if (kvm_is_exception_pending(vcpu))
- return true;
-
- if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
- (vcpu->arch.nmi_pending &&
- kvm_x86_call(nmi_allowed)(vcpu, false)))
- return true;
-
-#ifdef CONFIG_KVM_SMM
- if (kvm_test_request(KVM_REQ_SMI, vcpu) ||
- (vcpu->arch.smi_pending &&
- kvm_x86_call(smi_allowed)(vcpu, false)))
- return true;
-#endif
-
- if (kvm_test_request(KVM_REQ_PMI, vcpu))
- return true;
-
- if (kvm_test_request(KVM_REQ_UPDATE_PROTECTED_GUEST_STATE, vcpu))
- return true;
-
- if (kvm_arch_interrupt_allowed(vcpu) && kvm_cpu_has_interrupt(vcpu))
- return true;
-
- if (kvm_hv_has_stimer_pending(vcpu))
- return true;
-
- if (is_guest_mode(vcpu) &&
- kvm_x86_ops.nested_ops->has_events &&
- kvm_x86_ops.nested_ops->has_events(vcpu, false))
- return true;
-
- if (kvm_xen_has_pending_events(vcpu))
- return true;
-
- return false;
-}
-
-int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
-{
- return kvm_vcpu_running(vcpu) || kvm_vcpu_has_events(vcpu);
-}
-
-bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
-{
- return kvm_vcpu_apicv_active(vcpu) &&
- kvm_x86_call(dy_apicv_has_pending_interrupt)(vcpu);
-}
-
-bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
-{
- return vcpu->arch.preempted_in_kernel;
-}
-
-bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
-{
- if (READ_ONCE(vcpu->arch.pv.pv_unhalted))
- return true;
-
- if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
-#ifdef CONFIG_KVM_SMM
- kvm_test_request(KVM_REQ_SMI, vcpu) ||
-#endif
- kvm_test_request(KVM_REQ_EVENT, vcpu))
- return true;
-
- return kvm_arch_dy_has_pending_interrupt(vcpu);
-}
-
bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
{
if (vcpu->arch.guest_state_protected)
--
2.46.0.rc2.264.g509ed76dc8-goog
* [PATCH 5/5] KVM: x86: Add fastpath handling of HLT VM-Exits
From: Sean Christopherson @ 2024-08-02 19:51 UTC
To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel
Add a fastpath for HLT VM-Exits by immediately re-entering the guest if
it has a pending wake event. When virtual interrupt delivery is enabled,
i.e. when KVM doesn't need to manually inject interrupts, this allows KVM
to stay in the fastpath run loop when a vIRQ arrives between the guest
doing CLI and STI;HLT. Without AMD's Idle HLT-intercept support, the CPU
generates a HLT VM-Exit even though KVM will immediately resume the guest.
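The guest-side race being targeted looks roughly like this (simplified sketch
of a guest kernel's idle entry, not KVM code):

    local_irq_disable();                    /* CLI */
    if (!need_resched())
            /*
             * An IRQ that arrives anywhere in here, including in the
             * STI interrupt shadow, is already pending when the HLT
             * VM-Exit occurs.
             */
            safe_halt();                    /* STI; HLT */
    else
            local_irq_enable();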
Note, on bare metal, it's relatively uncommon for a modern guest kernel to
actually trigger this scenario, as the window between the guest checking
for a wake event and committing to HLT is quite small. But in a nested
environment, the timings change significantly, e.g. rudimentary testing
showed that ~50% of HLT exits where HLT-polling was successful would be
serviced by this fastpath, i.e. ~50% of the time that a nested vCPU gets
a wake event before KVM schedules out the vCPU, the wake event was pending
even before the VM-Exit.
Link: https://lore.kernel.org/all/20240528041926.3989-3-manali.shukla@amd.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/svm.c | 13 +++++++++++--
arch/x86/kvm/vmx/vmx.c | 2 ++
arch/x86/kvm/x86.c | 23 ++++++++++++++++++++++-
arch/x86/kvm/x86.h | 1 +
4 files changed, 36 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c115d26844f7..64381ff63034 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4144,12 +4144,21 @@ static int svm_vcpu_pre_run(struct kvm_vcpu *vcpu)
static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
{
+ struct vcpu_svm *svm = to_svm(vcpu);
+
if (is_guest_mode(vcpu))
return EXIT_FASTPATH_NONE;
- if (to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_MSR &&
- to_svm(vcpu)->vmcb->control.exit_info_1)
+ switch (svm->vmcb->control.exit_code) {
+ case SVM_EXIT_MSR:
+ if (!svm->vmcb->control.exit_info_1)
+ break;
return handle_fastpath_set_msr_irqoff(vcpu);
+ case SVM_EXIT_HLT:
+ return handle_fastpath_hlt(vcpu);
+ default:
+ break;
+ }
return EXIT_FASTPATH_NONE;
}
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f18c2d8c7476..f6382750fbf0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7265,6 +7265,8 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu,
return handle_fastpath_set_msr_irqoff(vcpu);
case EXIT_REASON_PREEMPTION_TIMER:
return handle_fastpath_preemption_timer(vcpu, force_immediate_exit);
+ case EXIT_REASON_HLT:
+ return handle_fastpath_hlt(vcpu);
default:
return EXIT_FASTPATH_NONE;
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 46686504cd47..eb5ea963698f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11373,7 +11373,10 @@ static int __kvm_emulate_halt(struct kvm_vcpu *vcpu, int state, int reason)
*/
++vcpu->stat.halt_exits;
if (lapic_in_kernel(vcpu)) {
- vcpu->arch.mp_state = state;
+ if (kvm_vcpu_has_events(vcpu))
+ vcpu->arch.pv.pv_unhalted = false;
+ else
+ vcpu->arch.mp_state = state;
return 1;
} else {
vcpu->run->exit_reason = reason;
@@ -11398,6 +11401,24 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_emulate_halt);
+fastpath_t handle_fastpath_hlt(struct kvm_vcpu *vcpu)
+{
+ int ret;
+
+ kvm_vcpu_srcu_read_lock(vcpu);
+ ret = kvm_emulate_halt(vcpu);
+ kvm_vcpu_srcu_read_unlock(vcpu);
+
+ if (!ret)
+ return EXIT_FASTPATH_EXIT_USERSPACE;
+
+ if (kvm_vcpu_running(vcpu))
+ return EXIT_FASTPATH_REENTER_GUEST;
+
+ return EXIT_FASTPATH_EXIT_HANDLED;
+}
+EXPORT_SYMBOL_GPL(handle_fastpath_hlt);
+
int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu)
{
int ret = kvm_skip_emulated_instruction(vcpu);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 50596f6f8320..5185ab76fdd2 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -334,6 +334,7 @@ int x86_decode_emulated_instruction(struct kvm_vcpu *vcpu, int emulation_type,
int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
int emulation_type, void *insn, int insn_len);
fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu);
+fastpath_t handle_fastpath_hlt(struct kvm_vcpu *vcpu);
extern struct kvm_caps kvm_caps;
extern struct kvm_host_values kvm_host;
--
2.46.0.rc2.264.g509ed76dc8-goog
* Re: [PATCH 0/5] KVM: x86: Fastpath cleanup, fix, and enhancement
From: Sean Christopherson @ 2024-08-31 0:20 UTC
To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel
On Fri, 02 Aug 2024 12:51:15 -0700, Sean Christopherson wrote:
> This series was prompted by observations of HLT-exiting when debugging
> a throughput issue related to posted interrupts. When KVM is running in
> a nested scenario, a rather surprising number of HLT exits occur with an
> unmasked interrupt already pending. I didn't debug too deeply into the
> guest side of things, but I suspect what is happening is that it's fairly
> easy for L2 to be interrupted (by L1 or L0) between checking if it (the
> CPU) should enter an idle state and actually executing STI;HLT.
>
> [...]
Applied to kvm-x86 misc. I gave myself enough confidence that the fastpath
fix is correct with a selftest update[*] (which I'll get applied next week).
[*] https://lore.kernel.org/all/20240830044448.130449-1-seanjc@google.com
[1/5] KVM: x86: Re-enter guest if WRMSR(X2APIC_ICR) fastpath is successful
https://github.com/kvm-x86/linux/commit/0dd45f2cd8cc
[2/5] KVM: x86: Dedup fastpath MSR post-handling logic
https://github.com/kvm-x86/linux/commit/ea60229af7fb
[3/5] KVM: x86: Exit to userspace if fastpath triggers one on instruction skip
https://github.com/kvm-x86/linux/commit/f7f39c50edb9
[4/5] KVM: x86: Reorganize code in x86.c to co-locate vCPU blocking/running helpers
https://github.com/kvm-x86/linux/commit/70cdd2385106
[5/5] KVM: x86: Add fastpath handling of HLT VM-Exits
https://github.com/kvm-x86/linux/commit/1876dd69dfe8
--
https://github.com/kvm-x86/linux/tree/next
* Re: [PATCH 1/5] KVM: x86: Re-enter guest if WRMSR(X2APIC_ICR) fastpath is successful
From: Paolo Bonzini @ 2024-09-02 9:58 UTC
To: Sean Christopherson; +Cc: kvm, linux-kernel
On Fri, Aug 2, 2024 at 9:51 PM Sean Christopherson <seanjc@google.com> wrote:
> Re-enter the guest in the fastpath if WRMSR emulation for x2APIC's ICR is
> successful, as no additional work is needed, i.e. there is no code unique
> for WRMSR exits between the fastpath and the "!= EXIT_FASTPATH_NONE" check
> in __vmx_handle_exit().
What about if you send an IPI to yourself? Doesn't that return true
for kvm_vcpu_exit_request() if posted interrupts are disabled?
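(For context, a sketch of kvm_vcpu_exit_request() from memory, so the exact
body may differ:

    static bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu)
    {
            return vcpu->mode == EXITING_GUEST_MODE ||
                   kvm_request_pending(vcpu) ||
                   xfer_to_guest_mode_work_pending();
    }

A self-IPI delivered without posted interrupts raises KVM_REQ_EVENT on the
vCPU, which makes kvm_request_pending() return true.)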
Paolo
* Re: [PATCH 0/5] KVM: x86: Fastpath cleanup, fix, and enhancement
From: Paolo Bonzini @ 2024-09-02 10:00 UTC
To: Sean Christopherson; +Cc: kvm, linux-kernel
On Sat, Aug 31, 2024 at 2:21 AM Sean Christopherson <seanjc@google.com> wrote:
> Applied to kvm-x86 misc, I gave myself enough confidence the fastpath fix is
> correct with a selftest update[*] (which I'll get applied next week).
Sorry for not reviewing this before vacation; I've done so belatedly, and
patches 1 and 2 may require another thought (or a revert). Hopefully I'm
wrong.
Paolo
* Re: [PATCH 1/5] KVM: x86: Re-enter guest if WRMSR(X2APIC_ICR) fastpath is successful
From: Sean Christopherson @ 2024-09-03 15:09 UTC
To: Paolo Bonzini; +Cc: kvm, linux-kernel
On Mon, Sep 02, 2024, Paolo Bonzini wrote:
> On Fri, Aug 2, 2024 at 9:51 PM Sean Christopherson <seanjc@google.com> wrote:
> > Re-enter the guest in the fastpath if WRMSR emulation for x2APIC's ICR is
> > successful, as no additional work is needed, i.e. there is no code unique
> > for WRMSR exits between the fastpath and the "!= EXIT_FASTPATH_NONE" check
> > in __vmx_handle_exit().
>
> What about if you send an IPI to yourself? Doesn't that return true
> for kvm_vcpu_exit_request() if posted interrupts are disabled?
Yes, but that doesn't have anything to do with WRMSR itself, as KVM needs to morph
EXIT_FASTPATH_EXIT_HANDLED => EXIT_FASTPATH_REENTER_GUEST if there's a pending
event that requires injection.
Given that kvm_x86_ops.sync_pir_to_irr is likely NULL if virtual interrupt delivery
is enabled, the overhead of trying to re-enter the guest is essentially a few
cycles, e.g. check vcpu->mode and kvm_request_pending().
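For reference, the inner loop in vcpu_enter_guest() handles this roughly as
follows (sketch, trimmed and from memory, so details may differ):

    exit_fastpath = kvm_x86_call(vcpu_run)(vcpu, force_immediate_exit);
    if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
            break;

    /* Pull any pending posted interrupts into the IRR. */
    if (kvm_lapic_enabled(vcpu))
            kvm_x86_call(sync_pir_to_irr)(vcpu);

    /* Re-entry is aborted if a wake/exit request came in meanwhile. */
    if (unlikely(kvm_vcpu_exit_request(vcpu))) {
            exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
            break;
    }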
* Re: [PATCH 1/5] KVM: x86: Re-enter guest if WRMSR(X2APIC_ICR) fastpath is successful
From: Paolo Bonzini @ 2024-09-03 16:49 UTC
To: Sean Christopherson; +Cc: kvm, linux-kernel
On Tue, Sep 3, 2024 at 5:09 PM Sean Christopherson <seanjc@google.com> wrote:
> On Mon, Sep 02, 2024, Paolo Bonzini wrote:
> > On Fri, Aug 2, 2024 at 9:51 PM Sean Christopherson <seanjc@google.com> wrote:
> > > Re-enter the guest in the fastpath if WRMSR emulation for x2APIC's ICR is
> > > successful, as no additional work is needed, i.e. there is no code unique
> > > for WRMSR exits between the fastpath and the "!= EXIT_FASTPATH_NONE" check
> > > in __vmx_handle_exit().
> >
> > What about if you send an IPI to yourself? Doesn't that return true
> > for kvm_vcpu_exit_request() if posted interrupts are disabled?
>
> Yes, but that doesn't have anything to do with WRMSR itself, as KVM needs to morph
> EXIT_FASTPATH_EXIT_HANDLED => EXIT_FASTPATH_REENTER_GUEST if there's a pending
> event that requires injection.
The other way round? i.e. treat EXIT_FASTPATH_REENTER_GUEST as
EXIT_FASTPATH_EXIT_HANDLED to go through event injection.
> Given that kvm_x86_ops.sync_pir_to_irr is likely NULL if virtual interrupt delivery
> is enabled, the overhead of trying to re-enter the guest is essentially a few
> cycles, e.g. check vcpu->mode and kvm_request_pending().
No, I wasn't worried about performance. Probably I misread
if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
break;
as something like
if (likely(exit_fastpath == EXIT_FASTPATH_REENTER_GUEST))
continue;
EXIT_FASTPATH_REENTER_GUEST is exactly what's needed here.
Paolo
* Re: [PATCH 1/5] KVM: x86: Re-enter guest if WRMSR(X2APIC_ICR) fastpath is successful
From: Sean Christopherson @ 2024-09-03 16:58 UTC
To: Paolo Bonzini; +Cc: kvm, linux-kernel
On Tue, Sep 03, 2024, Paolo Bonzini wrote:
> On Tue, Sep 3, 2024 at 5:09 PM Sean Christopherson <seanjc@google.com> wrote:
> > On Mon, Sep 02, 2024, Paolo Bonzini wrote:
> > > On Fri, Aug 2, 2024 at 9:51 PM Sean Christopherson <seanjc@google.com> wrote:
> > > > Re-enter the guest in the fastpath if WRMSR emulation for x2APIC's ICR is
> > > > successful, as no additional work is needed, i.e. there is no code unique
> > > > for WRMSR exits between the fastpath and the "!= EXIT_FASTPATH_NONE" check
> > > > in __vmx_handle_exit().
> > >
> > > What about if you send an IPI to yourself? Doesn't that return true
> > > for kvm_vcpu_exit_request() if posted interrupts are disabled?
> >
> > Yes, but that doesn't have anything to do with WRMSR itself, as KVM needs to morph
> > EXIT_FASTPATH_EXIT_HANDLED => EXIT_FASTPATH_REENTER_GUEST if there's a pending
> > event that requires injection.
>
> The other way round? i.e. treat EXIT_FASTPATH_REENTER_GUEST as
> EXIT_FASTPATH_EXIT_HANDLED to go through event injection.
Doh, yes.
* Re: [PATCH 5/5] KVM: x86: Add fastpath handling of HLT VM-Exits
From: Manali Shukla @ 2024-10-08 5:22 UTC
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Manali Shukla, nikunj
Hi Sean,
On 8/3/2024 1:21 AM, Sean Christopherson wrote:
> Add a fastpath for HLT VM-Exits by immediately re-entering the guest if
> it has a pending wake event. When virtual interrupt delivery is enabled,
> i.e. when KVM doesn't need to manually inject interrupts, this allows KVM
> to stay in the fastpath run loop when a vIRQ arrives between the guest
> doing CLI and STI;HLT. Without AMD's Idle HLT-intercept support, the CPU
> generates a HLT VM-Exit even though KVM will immediately resume the guest.
>
> Note, on bare metal, it's relatively uncommon for a modern guest kernel to
> actually trigger this scenario, as the window between the guest checking
> for a wake event and committing to HLT is quite small. But in a nested
> environment, the timings change significantly, e.g. rudimentary testing
> showed that ~50% of HLT exits where HLT-polling was successful would be
> serviced by this fastpath, i.e. ~50% of the time that a nested vCPU gets
> a wake event before KVM schedules out the vCPU, the wake event was pending
> even before the VM-Exit.
>
Could you please help me with the test case that resulted in an approximately
50% improvement for the nested scenario?
- Manali
* Re: [PATCH 5/5] KVM: x86: Add fastpath handling of HLT VM-Exits
From: Sean Christopherson @ 2024-10-08 17:39 UTC
To: Manali Shukla; +Cc: Paolo Bonzini, kvm, linux-kernel, nikunj
On Tue, Oct 08, 2024, Manali Shukla wrote:
> Hi Sean,
>
> On 8/3/2024 1:21 AM, Sean Christopherson wrote:
> > Add a fastpath for HLT VM-Exits by immediately re-entering the guest if
> > it has a pending wake event. When virtual interrupt delivery is enabled,
> > i.e. when KVM doesn't need to manually inject interrupts, this allows KVM
> > to stay in the fastpath run loop when a vIRQ arrives between the guest
> > doing CLI and STI;HLT. Without AMD's Idle HLT-intercept support, the CPU
> > generates a HLT VM-Exit even though KVM will immediately resume the guest.
> >
> > Note, on bare metal, it's relatively uncommon for a modern guest kernel to
> > actually trigger this scenario, as the window between the guest checking
> > for a wake event and committing to HLT is quite small. But in a nested
> > environment, the timings change significantly, e.g. rudimentary testing
> > showed that ~50% of HLT exits where HLT-polling was successful would be
> > serviced by this fastpath, i.e. ~50% of the time that a nested vCPU gets
> > a wake event before KVM schedules out the vCPU, the wake event was pending
> > even before the VM-Exit.
> >
>
> Could you please help me with the test case that resulted in an approximately
> 50% improvement for the nested scenario?
It's not a 50% improvement; it was simply an observation that ~50% of the time
_that HLT-polling is successful_, the wake event was already pending when the
VM-Exit occurred. That is _wildly_ different from a "50% improvement".
As for the test case, it's simply running a lightly loaded VM as L2.
Thread overview: 14+ messages
2024-08-02 19:51 [PATCH 0/5] KVM: x86: Fastpath cleanup, fix, and enhancement Sean Christopherson
2024-08-02 19:51 ` [PATCH 1/5] KVM: x86: Re-enter guest if WRMSR(X2APIC_ICR) fastpath is successful Sean Christopherson
2024-09-02 9:58 ` Paolo Bonzini
2024-09-03 15:09 ` Sean Christopherson
2024-09-03 16:49 ` Paolo Bonzini
2024-09-03 16:58 ` Sean Christopherson
2024-08-02 19:51 ` [PATCH 2/5] KVM: x86: Dedup fastpath MSR post-handling logic Sean Christopherson
2024-08-02 19:51 ` [PATCH 3/5] KVM: x86: Exit to userspace if fastpath triggers one on instruction skip Sean Christopherson
2024-08-02 19:51 ` [PATCH 4/5] KVM: x86: Reorganize code in x86.c to co-locate vCPU blocking/running helpers Sean Christopherson
2024-08-02 19:51 ` [PATCH 5/5] KVM: x86: Add fastpath handling of HLT VM-Exits Sean Christopherson
2024-10-08 5:22 ` Manali Shukla
2024-10-08 17:39 ` Sean Christopherson
2024-08-31 0:20 ` [PATCH 0/5] KVM: x86: Fastpath cleanup, fix, and enhancement Sean Christopherson
2024-09-02 10:00 ` Paolo Bonzini