public inbox for kvm@vger.kernel.org
* [PATCH v2] KVM: x86: Drop the kvm_has_noapic_vcpu optimization
@ 2024-10-21 10:22 Bernhard Kauer
  2024-12-10  1:22 ` Sean Christopherson
  0 siblings, 1 reply; 7+ messages in thread
From: Bernhard Kauer @ 2024-10-21 10:22 UTC (permalink / raw)
  To: kvm; +Cc: Chao Gao, Bernhard Kauer

The kvm_has_noapic_vcpu optimization used a static key to avoid
loading the lapic pointer from the vcpu->arch structure.  However, in
the common case the load is from a hot cacheline and the CPU should be
able to predict it perfectly.  Thus there is no upside to this
premature optimization.

The downside is that code patching, including an IPI to all CPUs, is
required whenever the first VM without a lapic is created or the last
one is destroyed.

Signed-off-by: Bernhard Kauer <bk@alpico.io>
---

V1->V2: remove spillover from other patch and fix style

 arch/x86/kvm/lapic.c | 10 ++--------
 arch/x86/kvm/lapic.h |  6 +-----
 arch/x86/kvm/x86.c   |  6 ------
 3 files changed, 3 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 2098dc689088..287a43fae041 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -135,8 +135,6 @@ static inline int __apic_test_and_clear_vector(int vec, void *bitmap)
 	return __test_and_clear_bit(VEC_POS(vec), (bitmap) + REG_POS(vec));
 }
 
-__read_mostly DEFINE_STATIC_KEY_FALSE(kvm_has_noapic_vcpu);
-EXPORT_SYMBOL_GPL(kvm_has_noapic_vcpu);
 
 __read_mostly DEFINE_STATIC_KEY_DEFERRED_FALSE(apic_hw_disabled, HZ);
 __read_mostly DEFINE_STATIC_KEY_DEFERRED_FALSE(apic_sw_disabled, HZ);
@@ -2517,10 +2515,8 @@ void kvm_free_lapic(struct kvm_vcpu *vcpu)
 {
 	struct kvm_lapic *apic = vcpu->arch.apic;
 
-	if (!vcpu->arch.apic) {
-		static_branch_dec(&kvm_has_noapic_vcpu);
+	if (!vcpu->arch.apic)
 		return;
-	}
 
 	hrtimer_cancel(&apic->lapic_timer.timer);
 
@@ -2863,10 +2859,8 @@ int kvm_create_lapic(struct kvm_vcpu *vcpu)
 
 	ASSERT(vcpu != NULL);
 
-	if (!irqchip_in_kernel(vcpu->kvm)) {
-		static_branch_inc(&kvm_has_noapic_vcpu);
+	if (!irqchip_in_kernel(vcpu->kvm))
 		return 0;
-	}
 
 	apic = kzalloc(sizeof(*apic), GFP_KERNEL_ACCOUNT);
 	if (!apic)
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 1b8ef9856422..157af18c9fc8 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -179,13 +179,9 @@ static inline u32 kvm_lapic_get_reg(struct kvm_lapic *apic, int reg_off)
 	return __kvm_lapic_get_reg(apic->regs, reg_off);
 }
 
-DECLARE_STATIC_KEY_FALSE(kvm_has_noapic_vcpu);
-
 static inline bool lapic_in_kernel(struct kvm_vcpu *vcpu)
 {
-	if (static_branch_unlikely(&kvm_has_noapic_vcpu))
-		return vcpu->arch.apic;
-	return true;
+	return vcpu->arch.apic;
 }
 
 extern struct static_key_false_deferred apic_hw_disabled;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 83fe0a78146f..88b04355273d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -14015,9 +14015,3 @@ static int __init kvm_x86_init(void)
 	return 0;
 }
 module_init(kvm_x86_init);
-
-static void __exit kvm_x86_exit(void)
-{
-	WARN_ON_ONCE(static_branch_unlikely(&kvm_has_noapic_vcpu));
-}
-module_exit(kvm_x86_exit);
-- 
2.45.2


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH v2] KVM: x86: Drop the kvm_has_noapic_vcpu optimization
  2024-10-21 10:22 [PATCH v2] KVM: x86: Drop the kvm_has_noapic_vcpu optimization Bernhard Kauer
@ 2024-12-10  1:22 ` Sean Christopherson
  2024-12-10  1:40   ` Sean Christopherson
  0 siblings, 1 reply; 7+ messages in thread
From: Sean Christopherson @ 2024-12-10  1:22 UTC (permalink / raw)
  To: Bernhard Kauer; +Cc: kvm, Chao Gao

On Mon, Oct 21, 2024, Bernhard Kauer wrote:
> It used a static key to avoid loading the lapic pointer from
> the vcpu->arch structure.  However, in the common case the load
> is from a hot cacheline and the CPU should be able to perfectly
> predict it. Thus there is no upside of this premature optimization.
> 
> The downside is that code patching including an IPI to all CPUs
> is required whenever the first VM without an lapic is created or
> the last is destroyed.
> 
> Signed-off-by: Bernhard Kauer <bk@alpico.io>
> ---
> 
> V1->V2: remove spillover from other patch and fix style
> 
>  arch/x86/kvm/lapic.c | 10 ++--------
>  arch/x86/kvm/lapic.h |  6 +-----
>  arch/x86/kvm/x86.c   |  6 ------
>  3 files changed, 3 insertions(+), 19 deletions(-)
> 
> diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> index 2098dc689088..287a43fae041 100644
> --- a/arch/x86/kvm/lapic.c
> +++ b/arch/x86/kvm/lapic.c
> @@ -135,8 +135,6 @@ static inline int __apic_test_and_clear_vector(int vec, void *bitmap)
>  	return __test_and_clear_bit(VEC_POS(vec), (bitmap) + REG_POS(vec));
>  }
>  
> -__read_mostly DEFINE_STATIC_KEY_FALSE(kvm_has_noapic_vcpu);
> -EXPORT_SYMBOL_GPL(kvm_has_noapic_vcpu);
>  
>  __read_mostly DEFINE_STATIC_KEY_DEFERRED_FALSE(apic_hw_disabled, HZ);
>  __read_mostly DEFINE_STATIC_KEY_DEFERRED_FALSE(apic_sw_disabled, HZ);

I'm on the fence, slightly leaning towards removing all three of these static keys.

If we remove kvm_has_noapic_vcpu to avoid the text patching, then we should
definitely drop apic_sw_disabled, as vCPUs are practically guaranteed to toggle
the S/W enable bit, e.g. it starts out '0' at RESET.  And if we drop apic_sw_disabled,
then keeping apic_hw_disabled seems rather pointless.

Removing all three keys is measurable, but the impact is so tiny that I have a
hard time believing anyone would notice in practice.

To measure, I tweaked KVM to handle CPUID exits in the fastpath and then ran the
KVM-Unit-Test CPUID microbenchmark (with some minor modifications).  Handling
CPUID in the fastpath makes the kvm_lapic_enabled() call in the innermost run loop
stick out (that helper checks all three keys/conditions).

	for (;;) {
		/*
		 * Assert that vCPU vs. VM APICv state is consistent.  An APICv
		 * update must kick and wait for all vCPUs before toggling the
		 * per-VM state, and responding vCPUs must wait for the update
		 * to complete before servicing KVM_REQ_APICV_UPDATE.
		 */
		WARN_ON_ONCE((kvm_vcpu_apicv_activated(vcpu) != kvm_vcpu_apicv_active(vcpu)) &&
			     (kvm_get_apic_mode(vcpu) != LAPIC_MODE_DISABLED));

		exit_fastpath = kvm_x86_call(vcpu_run)(vcpu,
						       req_immediate_exit);
		if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
			break;

		if (kvm_lapic_enabled(vcpu))
			kvm_x86_call(sync_pir_to_irr)(vcpu);

		if (unlikely(kvm_vcpu_exit_request(vcpu))) {
			exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
			break;
		}

		/* Note, VM-Exits that go down the "slow" path are accounted below. */
		++vcpu->stat.exits;
	}

With a single vCPU pinned to a single pCPU, the average latency for a CPUID exit
goes from 1018 => 1027 cycles, plus or minus a few.  With 8 vCPUs, no pinning
(mostly laziness), the average latency goes from 1034 => 1053.

Other flows that check multiple vCPUs, e.g. kvm_irq_delivery_to_apic(), might be
more affected?  The optimized APIC map should help for common cases, but KVM does
still check if APICs are enabled multiple times when delivering interrupts.  And
that's really my only hesitation: there are checks *everywhere* in KVM.

On the other hand, we lose gobs and gobs of cycles with far less thought.  E.g.
with mitigations on, the latency for a single vCPU jumps all the way to 1600+ cycles.

And while the diff stats are quite nice, the relevant code is low maintenance.

 arch/x86/kvm/lapic.c | 41 ++---------------------------------------
 arch/x86/kvm/lapic.h | 19 +++----------------
 arch/x86/kvm/x86.c   |  4 +---
 3 files changed, 6 insertions(+), 58 deletions(-)

Paolo or anyone else... thoughts?


* Re: [PATCH v2] KVM: x86: Drop the kvm_has_noapic_vcpu optimization
  2024-12-10  1:22 ` Sean Christopherson
@ 2024-12-10  1:40   ` Sean Christopherson
  2024-12-10  8:16     ` Bernhard Kauer
  0 siblings, 1 reply; 7+ messages in thread
From: Sean Christopherson @ 2024-12-10  1:40 UTC (permalink / raw)
  To: Bernhard Kauer; +Cc: kvm, Chao Gao, Paolo Bonzini

+Paolo, I'm pretty sure he still doesn't subscribe to kvm@ :-)

On Mon, Dec 09, 2024, Sean Christopherson wrote:
> On Mon, Oct 21, 2024, Bernhard Kauer wrote:
> > It used a static key to avoid loading the lapic pointer from
> > the vcpu->arch structure.  However, in the common case the load
> > is from a hot cacheline and the CPU should be able to perfectly
> > predict it. Thus there is no upside of this premature optimization.
> > 
> > The downside is that code patching including an IPI to all CPUs
> > is required whenever the first VM without an lapic is created or
> > the last is destroyed.
> > 
> > Signed-off-by: Bernhard Kauer <bk@alpico.io>
> > ---
> > 
> > V1->V2: remove spillover from other patch and fix style
> > 
> >  arch/x86/kvm/lapic.c | 10 ++--------
> >  arch/x86/kvm/lapic.h |  6 +-----
> >  arch/x86/kvm/x86.c   |  6 ------
> >  3 files changed, 3 insertions(+), 19 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> > index 2098dc689088..287a43fae041 100644
> > --- a/arch/x86/kvm/lapic.c
> > +++ b/arch/x86/kvm/lapic.c
> > @@ -135,8 +135,6 @@ static inline int __apic_test_and_clear_vector(int vec, void *bitmap)
> >  	return __test_and_clear_bit(VEC_POS(vec), (bitmap) + REG_POS(vec));
> >  }
> >  
> > -__read_mostly DEFINE_STATIC_KEY_FALSE(kvm_has_noapic_vcpu);
> > -EXPORT_SYMBOL_GPL(kvm_has_noapic_vcpu);
> >  
> >  __read_mostly DEFINE_STATIC_KEY_DEFERRED_FALSE(apic_hw_disabled, HZ);
> >  __read_mostly DEFINE_STATIC_KEY_DEFERRED_FALSE(apic_sw_disabled, HZ);
> 
> I'm on the fence, slightly leaning towards removing all three of these static keys.
> 
> If we remove kvm_has_noapic_vcpu to avoid the text patching, then we should
> definitely drop apic_sw_disabled, as vCPUs are practically guaranteed to toggle
> the S/W enable bit, e.g. it starts out '0' at RESET.  And if we drop apic_sw_disabled,
> then keeping apic_hw_disabled seems rather pointless.
> 
> Removing all three keys is measurable, but the impact is so tiny that I have a
> hard time believing anyone would notice in practice.
> 
> To measure, I tweaked KVM to handle CPUID exits in the fastpath and then ran the
> KVM-Unit-Test CPUID microbenchmark (with some minor modifications).  Handling
> CPUID in the fastpath makes the kvm_lapic_enabled() call in the innermost run loop
> stick out (that helper checks all three keys/conditions).
> 
> 	for (;;) {
> 		/*
> 		 * Assert that vCPU vs. VM APICv state is consistent.  An APICv
> 		 * update must kick and wait for all vCPUs before toggling the
> 		 * per-VM state, and responding vCPUs must wait for the update
> 		 * to complete before servicing KVM_REQ_APICV_UPDATE.
> 		 */
> 		WARN_ON_ONCE((kvm_vcpu_apicv_activated(vcpu) != kvm_vcpu_apicv_active(vcpu)) &&
> 			     (kvm_get_apic_mode(vcpu) != LAPIC_MODE_DISABLED));
> 
> 		exit_fastpath = kvm_x86_call(vcpu_run)(vcpu,
> 						       req_immediate_exit);
> 		if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
> 			break;
> 
> 		if (kvm_lapic_enabled(vcpu))
> 			kvm_x86_call(sync_pir_to_irr)(vcpu);
> 
> 		if (unlikely(kvm_vcpu_exit_request(vcpu))) {
> 			exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
> 			break;
> 		}
> 
> 		/* Note, VM-Exits that go down the "slow" path are accounted below. */
> 		++vcpu->stat.exits;
> 	}
> 
> With a single vCPU pinned to a single pCPU, the average latency for a CPUID exit
> goes from 1018 => 1027 cycles, plus or minus a few.  With 8 vCPUs, no pinning
> (mostly laziness), the average latency goes from 1034 => 1053.
> 
> Other flows that check multiple vCPUs, e.g. kvm_irq_delivery_to_apic(), might be
> more affected?  The optimized APIC map should help for common cases, but KVM does
> still check if APICs are enabled multiple times when delivering interrupts.  And
> that's really my only hesitation: there are checks *everywhere* in KVM.
> 
> On the other hand, we lose gobs and gobs of cycles with far less thought.  E.g.
> with mitigations on, the latency for a single vCPU jumps all the way to 1600+ cycles.
> 
> And while the diff stats are quite nice, the relevant code is low maintenance.
> 
>  arch/x86/kvm/lapic.c | 41 ++---------------------------------------
>  arch/x86/kvm/lapic.h | 19 +++----------------
>  arch/x86/kvm/x86.c   |  4 +---
>  3 files changed, 6 insertions(+), 58 deletions(-)
> 
> Paolo or anyone else... thoughts?


* Re: [PATCH v2] KVM: x86: Drop the kvm_has_noapic_vcpu optimization
  2024-12-10  1:40   ` Sean Christopherson
@ 2024-12-10  8:16     ` Bernhard Kauer
  2024-12-11 17:16       ` Sean Christopherson
  0 siblings, 1 reply; 7+ messages in thread
From: Bernhard Kauer @ 2024-12-10  8:16 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: Bernhard Kauer, kvm, Chao Gao, Paolo Bonzini

On Mon, Dec 09, 2024 at 05:40:48PM -0800, Sean Christopherson wrote:
> On Mon, Dec 09, 2024, Sean Christopherson wrote:
> > On Mon, Oct 21, 2024, Bernhard Kauer wrote:
> > > It used a static key to avoid loading the lapic pointer from
> > > the vcpu->arch structure.  However, in the common case the load
> > > is from a hot cacheline and the CPU should be able to perfectly
> > > predict it. Thus there is no upside of this premature optimization.
> > > 
> > > The downside is that code patching including an IPI to all CPUs
> > > is required whenever the first VM without an lapic is created or
> > > the last is destroyed.
> > >
> > I'm on the fence, slightly leaning towards removing all three of these static keys.

Thanks for continuing this work.


> > With a single vCPU pinned to a single pCPU, the average latency for a CPUID exit
> > goes from 1018 => 1027 cycles, plus or minus a few.  With 8 vCPUs, no pinning
> > (mostly laziness), the average latency goes from 1034 => 1053.

Are these kinds of benchmarks tracked somewhere automatically?  With that,
one could systematically optimize for faster exits.


> > On the other hand, we lose gobs and gobs of cycles with far less thought.  E.g.
> > with mitigations on, the latency for a single vCPU jumps all the way to 1600+ cycles.

In the end it is a tradeoff to be made.  The cost of switching between the
modes is more than a hundred microseconds of unexpected latency.  On the
other hand one saves 1-2% per exit but has a larger code base.




* Re: [PATCH v2] KVM: x86: Drop the kvm_has_noapic_vcpu optimization
  2024-12-10  8:16     ` Bernhard Kauer
@ 2024-12-11 17:16       ` Sean Christopherson
  2024-12-12 10:19         ` Bernhard Kauer
  0 siblings, 1 reply; 7+ messages in thread
From: Sean Christopherson @ 2024-12-11 17:16 UTC (permalink / raw)
  To: Bernhard Kauer; +Cc: kvm, Chao Gao, Paolo Bonzini

On Tue, Dec 10, 2024, Bernhard Kauer wrote:
> On Mon, Dec 09, 2024 at 05:40:48PM -0800, Sean Christopherson wrote:
> > > With a single vCPU pinned to a single pCPU, the average latency for a CPUID exit
> > > goes from 1018 => 1027 cycles, plus or minus a few.  With 8 vCPUs, no pinning
> > > (mostly laziness), the average latency goes from 1034 => 1053.
> 
> Are these kind of benchmarks tracked somewhere automatically?

I'm not sure what you're asking.  The benchmark is KVM-Unit-Test's[*] CPUID test,
e.g. "./x86/run x86/vmexit.flat -smp 1 -append 'cpuid'".

git@gitlab.com:kvm-unit-tests/kvm-unit-tests.git[*]


* Re: [PATCH v2] KVM: x86: Drop the kvm_has_noapic_vcpu optimization
  2024-12-11 17:16       ` Sean Christopherson
@ 2024-12-12 10:19         ` Bernhard Kauer
  2024-12-12 15:16           ` Sean Christopherson
  0 siblings, 1 reply; 7+ messages in thread
From: Bernhard Kauer @ 2024-12-12 10:19 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: Bernhard Kauer, kvm, Chao Gao, Paolo Bonzini

On Wed, Dec 11, 2024 at 09:16:11AM -0800, Sean Christopherson wrote:
> On Tue, Dec 10, 2024, Bernhard Kauer wrote:
> > On Mon, Dec 09, 2024 at 05:40:48PM -0800, Sean Christopherson wrote:
> > > > With a single vCPU pinned to a single pCPU, the average latency for a CPUID exit
> > > > goes from 1018 => 1027 cycles, plus or minus a few.  With 8 vCPUs, no pinning
> > > > (mostly laziness), the average latency goes from 1034 => 1053.
> > 
> > Are these kind of benchmarks tracked somewhere automatically?
> 
> I'm not sure what you're asking.  The benchmark is KVM-Unit-Test's[*] CPUID test,
> e.g. "./x86/run x86/vmexit.flat -smp 1 -append 'cpuid'".

There are various issues with these benchmarks.

1. The absolute numbers depend on the particular CPU. My results
   can't be compared to your absolute results.

2. They have a 1% accuracy when warming up and pinning to a CPU.
   Thus one has to do multiple runs.

      1 cpuid 1087
      1 cpuid 1092
      5 cpuid 1093
      4 cpuid 1094
      3 cpuid 1095
     11 cpuid 1096
      8 cpuid 1097
     24 cpuid 1098
     11 cpuid 1099
     17 cpuid 1100
      8 cpuid 1101
      1 cpuid 1102
      4 cpuid 1103
      1 cpuid 1104
      1 cpuid 1110

3. Dynamic frequency scaling makes it even more inaccurate.  A previously idle
   CPU can be as low as 1072 cycles, and without pinning even 1050 cycles.
   That is 2.4% and 4.6% faster than the 1098 median, respectively.

4. Patches that do not seem worth checking, or whose impact is smaller
   than the measurement uncertainty, might make the system gradually
   slower.


Most of this goes away if a dedicated machine tracks performance numbers
continuously.


* Re: [PATCH v2] KVM: x86: Drop the kvm_has_noapic_vcpu optimization
  2024-12-12 10:19         ` Bernhard Kauer
@ 2024-12-12 15:16           ` Sean Christopherson
  0 siblings, 0 replies; 7+ messages in thread
From: Sean Christopherson @ 2024-12-12 15:16 UTC (permalink / raw)
  To: Bernhard Kauer; +Cc: kvm, Chao Gao, Paolo Bonzini

On Thu, Dec 12, 2024, Bernhard Kauer wrote:
> On Wed, Dec 11, 2024 at 09:16:11AM -0800, Sean Christopherson wrote:
> > On Tue, Dec 10, 2024, Bernhard Kauer wrote:
> > > On Mon, Dec 09, 2024 at 05:40:48PM -0800, Sean Christopherson wrote:
> > > > > With a single vCPU pinned to a single pCPU, the average latency for a CPUID exit
> > > > > goes from 1018 => 1027 cycles, plus or minus a few.  With 8 vCPUs, no pinning
> > > > > (mostly laziness), the average latency goes from 1034 => 1053.
> > > 
> > > Are these kind of benchmarks tracked somewhere automatically?
> > 
> > I'm not sure what you're asking.  The benchmark is KVM-Unit-Test's[*] CPUID test,
> > e.g. "./x86/run x86/vmexit.flat -smp 1 -append 'cpuid'".
> 
> There are various issues with these benchmarks.

LOL, yes, they are far, far from perfect.  But they are good enough for developers
to detect egregious bugs, trends across multiple kernels, etc.

> 1. The absolute numbers depend on the particular CPU. My results
>    can't be compared to your absolute results.
> 
> 2. They have a 1% accuracy when warming up and pinning to a CPU.
>    Thus one has to do multiple runs.
> 
>       1 cpuid 1087
>       1 cpuid 1092
>       5 cpuid 1093
>       4 cpuid 1094
>       3 cpuid 1095
>      11 cpuid 1096
>       8 cpuid 1097
>      24 cpuid 1098
>      11 cpuid 1099
>      17 cpuid 1100
>       8 cpuid 1101
>       1 cpuid 1102
>       4 cpuid 1103
>       1 cpuid 1104
>       1 cpuid 1110
> 
> 3. Dynamic Frequency scaling makes it even more inaccurate.  A previously idle
>    CPU can be as low as 1072 cycles and without pinning even 1050 cycles. 
>    This 2.4% and 4.6% faster than the 1098 median.
> 
> 4. Patches that seem not to be worth checking for or where the impact is
>    smaller than measurement uncertainties might make the system slowly
>    slower.
> 
> 
> Most of this goes away if a dedicated machine tracks performance numbers
> continously.

I don't disagree, but I also don't see this happening anytime soon, at least not
for upstream kernels.  We don't even have meaningful CI testing for upstream
kernels, for a variety of reasons (some good, some bad).  Getting an entire mini-
fleet[*] of systems just for KVM performance testing of upstream kernels would be
wonderful, but for me it's a very distant second after getting testing in place.
Which I also don't see happening anytime soon, unfortunately.

[*] Performance (and regular) testing requires multiple machines to cover Intel
    vs. AMD, and the variety of hardware features/capabilities that KVM utilizes.
    E.g. adding support for new features can and does introduce overhead in the
    entry/exit flows.


end of thread, other threads:[~2024-12-12 15:16 UTC | newest]

Thread overview: 7+ messages
2024-10-21 10:22 [PATCH v2] KVM: x86: Drop the kvm_has_noapic_vcpu optimization Bernhard Kauer
2024-12-10  1:22 ` Sean Christopherson
2024-12-10  1:40   ` Sean Christopherson
2024-12-10  8:16     ` Bernhard Kauer
2024-12-11 17:16       ` Sean Christopherson
2024-12-12 10:19         ` Bernhard Kauer
2024-12-12 15:16           ` Sean Christopherson
