From: Avi Kivity <avi@redhat.com>
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
Marcelo Tosatti <mtosatti@redhat.com>
Subject: Re: [PATCH 0/3] KVM: VMX: Support hosted VMM coexistence.
Date: Tue, 23 Mar 2010 10:58:26 +0200 [thread overview]
Message-ID: <4BA882B2.8050303@redhat.com> (raw)
In-Reply-To: <D5AB6E638E5A3E4B8F4406B113A5A19A1D599CF6@shsmsx501.ccr.corp.intel.com>
On 03/23/2010 10:33 AM, Xu, Dongxiao wrote:
>
>> Did you measure workloads that exit to userspace very often?
>>
>> Also, what about future processors? My understanding is that the
>> manual recommends keeping things cached, the above description is for
>> sleep states.
>>
> I measured the performance with a kernel build in the guest. I launched 6
> guests; 5 of them and the host ran a while(1) loop, and the remaining guest
> did the kernel build. The CPU overcommitment is 7:1, and the vcpu schedule
> frequency is about 15k/sec. I tested this with the new Intel processors on
> hand, and the performance difference is small.
>
The 15k/sec context switches are distributed among 7 entities, so we
have about 2k/sec for the guest you are measuring. If the cost is 1
microsecond, then the impact would be 0.2% on the kernel build. But 1
microsecond is way too high for some workloads.
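The estimate above can be checked with a quick back-of-envelope sketch (the numbers are the ones quoted in this thread; the 1 microsecond per-switch cost is the assumed figure, not a measurement):

```python
# Back-of-envelope check of the context-switch overhead estimate.
# Inputs come from the thread; cost_us is an assumed figure.
switches_per_sec = 15_000   # total vcpu schedule frequency reported
entities = 7                # 6 guests + host sharing the CPU (7:1 overcommit)
cost_us = 1.0               # assumed extra cost per switch, in microseconds

per_guest = switches_per_sec / entities        # switches/sec hitting one guest
overhead = per_guest * cost_us * 1e-6          # fraction of CPU time lost

print(f"{per_guest:.0f} switches/sec, {overhead:.2%} overhead")
```

This reproduces the ~2k switches/sec and ~0.2% figures; a workload with a much higher exit or switch rate would scale the overhead proportionally.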
Can you measure the impact directly? kvm/user/test/x86/vmexit.c has a
test called inl_pmtimer that measures exit-to-userspace costs. Please
run it with and without the patch.
btw, what about VPID? That's a global resource. How do you ensure no
VPID conflicts?
>>>> Is that the only motivation? It seems like an odd use-case. If
>>>> there was no performance impact (current or future), I wouldn't
>>>> mind, but the design of VMPTRLD/VMCLEAR/VMXON/VMXOFF seems to
>>>> indicate that we want to keep a VMCS loaded as much as possible on
>>>> the processor.
>>>>
>>>>
>>> I just used KVM and VMware Workstation 7 for testing this patchset.
>>>
>>> Through this new usage of VMPTRLD/VMCLEAR/VMXON/VMXOFF,
>>> we could make hosted VMMs work separately and do not impact each
>>> other.
>>>
>>>
>> What I am questioning is whether a significant number of users want to
>> run kvm in parallel with another hypervisor.
>>
> At least this approach gives users an option to run VMMs in parallel without
> significant performance loss. Consider this scenario: a server has already
> deployed VMware software, but some new customers want to use KVM;
> this patch could help them meet their requirements.
>
For server workloads VMware users will run ESX, on which you can't run
kvm. If someone wants to evaluate kvm or VMware on a workstation, they
can shut down the other product. I simply don't see a scenario where
running both concurrently would be worth even a small performance loss.
--
error compiling committee.c: too many arguments to function
Thread overview: 10+ messages
2010-03-18 9:49 [PATCH 0/3] KVM: VMX: Support hosted VMM coexistence Xu, Dongxiao
2010-03-18 10:36 ` Alexander Graf
2010-03-18 12:55 ` Avi Kivity
2010-03-23 4:01 ` Xu, Dongxiao
2010-03-23 7:39 ` Avi Kivity
2010-03-23 8:33 ` Xu, Dongxiao
2010-03-23 8:58 ` Avi Kivity [this message]
2010-03-23 9:12 ` Alexander Graf
2010-03-18 13:51 ` Avi Kivity
2010-03-18 14:27 ` Avi Kivity