From: Sean Christopherson <seanjc@google.com>
To: Bernhard Kauer <bk@alpico.io>
Cc: kvm@vger.kernel.org, Chao Gao <chao.gao@intel.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [PATCH v2] KVM: x86: Drop the kvm_has_noapic_vcpu optimization
Date: Thu, 12 Dec 2024 07:16:06 -0800 [thread overview]
Message-ID: <Z1r-Nh0JAQdL_L8n@google.com> (raw)
In-Reply-To: <Z1q4vxmEmZbkOiqC@mias.mediconcil.de>

On Thu, Dec 12, 2024, Bernhard Kauer wrote:
> On Wed, Dec 11, 2024 at 09:16:11AM -0800, Sean Christopherson wrote:
> > On Tue, Dec 10, 2024, Bernhard Kauer wrote:
> > > On Mon, Dec 09, 2024 at 05:40:48PM -0800, Sean Christopherson wrote:
> > > > > With a single vCPU pinned to a single pCPU, the average latency for a CPUID exit
> > > > > goes from 1018 => 1027 cycles, plus or minus a few. With 8 vCPUs, no pinning
> > > > > (mostly laziness), the average latency goes from 1034 => 1053.
> > >
> > > Are these kinds of benchmarks tracked somewhere automatically?
> >
> > I'm not sure what you're asking. The benchmark is KVM-Unit-Test's[*] CPUID test,
> > e.g. "./x86/run x86/vmexit.flat -smp 1 -append 'cpuid'".
>
> There are various issues with these benchmarks.

LOL, yes, they are far, far from perfect.  But they are good enough for developers
to detect egregious bugs, trends across multiple kernels, etc.

> 1. The absolute numbers depend on the particular CPU. My results
> can't be compared to your absolute results.
>
> 2. They have only about 1% accuracy, even when warming up and pinning to a
>    CPU.  Thus one has to do multiple runs.
>
> 1 cpuid 1087
> 1 cpuid 1092
> 5 cpuid 1093
> 4 cpuid 1094
> 3 cpuid 1095
> 11 cpuid 1096
> 8 cpuid 1097
> 24 cpuid 1098
> 11 cpuid 1099
> 17 cpuid 1100
> 8 cpuid 1101
> 1 cpuid 1102
> 4 cpuid 1103
> 1 cpuid 1104
> 1 cpuid 1110
>
> 3. Dynamic frequency scaling makes it even more inaccurate.  A previously idle
>    CPU can be as low as 1072 cycles, and without pinning even 1050 cycles.
>    That is 2.4% and 4.6% faster than the 1098 median.
>
> 4. Patches that do not seem worth benchmarking, or whose impact is smaller
>    than the measurement uncertainty, might make the system slowly slower.
>
>
> Most of this goes away if a dedicated machine tracks performance numbers
> continuously.
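[Editor's note: the median and percentages in point 3 can be reproduced from the
distribution quoted in point 2.  A quick sketch, not from the original thread,
with the samples transcribed as the `count value` pairs listed above:]

```python
import statistics

# (count, cycles) pairs transcribed from the quoted distribution above.
dist = [(1, 1087), (1, 1092), (5, 1093), (4, 1094), (3, 1095),
        (11, 1096), (8, 1097), (24, 1098), (11, 1099), (17, 1100),
        (8, 1101), (1, 1102), (4, 1103), (1, 1104), (1, 1110)]

# Expand to the individual runs: 100 samples in total.
runs = [cycles for count, cycles in dist for _ in range(count)]
median = statistics.median(runs)   # 1098 across these 100 runs

def faster_than_median(cycles):
    """Percent by which a result beats the median, relative to the result."""
    return round((median - cycles) / cycles * 100, 1)

# An idle CPU at 1072 cycles comes out 2.4% faster than the 1098 median,
# and an unpinned run at 1050 cycles comes out 4.6% faster.
```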

I don't disagree, but I also don't see this happening anytime soon, at least not
for upstream kernels.  We don't even have meaningful CI testing for upstream
kernels, for a variety of reasons (some good, some bad).  Getting an entire
mini-fleet[*] of systems just for KVM performance testing of upstream kernels
would be wonderful, but for me it's a very distant second after getting testing
in place.  Which I also don't see happening anytime soon, unfortunately.

[*] Performance (and regular) testing requires multiple machines to cover Intel
vs. AMD, and the variety of hardware features/capabilities that KVM utilizes.
E.g. adding support for new features can and does introduce overhead in the
entry/exit flows.
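[Editor's note: the quoted distribution has the shape of `sort -n | uniq -c`
applied to the per-run results.  A minimal Python sketch of that tallying, not
from the original thread, assuming each run emits a `name cycles` line such as
`cpuid 1098`; the real vmexit.flat output contains other lines as well, which
this helper simply skips:]

```python
from collections import Counter

def tally(lines):
    """Group 'name cycles' result lines into (count, name, cycles) rows,
    sorted by latency -- the same shape as `sort -n | uniq -c` produces."""
    counts = Counter()
    for line in lines:
        parts = line.split()
        # Keep only lines of the form '<test-name> <integer cycles>'.
        if len(parts) == 2 and parts[1].isdigit():
            counts[(parts[0], int(parts[1]))] += 1
    return [(n, name, cycles)
            for (name, cycles), n in sorted(counts.items())]
```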
Thread overview: 7+ messages
2024-10-21 10:22 [PATCH v2] KVM: x86: Drop the kvm_has_noapic_vcpu optimization Bernhard Kauer
2024-12-10 1:22 ` Sean Christopherson
2024-12-10 1:40 ` Sean Christopherson
2024-12-10 8:16 ` Bernhard Kauer
2024-12-11 17:16 ` Sean Christopherson
2024-12-12 10:19 ` Bernhard Kauer
2024-12-12 15:16 ` Sean Christopherson [this message]