public inbox for kvm@vger.kernel.org
* comparisons with VMware and Xen
@ 2008-07-07 20:09 Sukanto Ghosh
  2008-07-07 20:13 ` Javier Guerra
  2008-07-07 20:39 ` Anthony Liguori
  0 siblings, 2 replies; 4+ messages in thread
From: Sukanto Ghosh @ 2008-07-07 20:09 UTC (permalink / raw)
  To: kvm

1. Is the maximum number of vcpus in a particular guest limited to the
number of host cpus? (I guess not.)

2. Is there any attempt made to co-schedule all of a guest's vcpus, in
order to avoid the spinlock-holder preemption problem?

3. Are there any means to do content-based page sharing between guests,
as VMware does?

4. Does kvm make any attempt to avoid TLB flushes during vm-exits and
vm-entries, the way Xen reserves a memory hole?
(My guess is that it doesn't need to, as kvm is mapped into a guest's
address space and the pages are protected with the help of the Linux VM.
Am I right?)


Slightly different ones,

5. How useful is a balloon driver for kvm, which doesn't make any
hard partitions of available physical memory between the guests?
Shouldn't the Linux VM's knowledge be superior in this case to the
guest VM's?

6. What is coalesced mmio?



-- 
Regards,
Sukanto Ghosh

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: comparisons with VMware and Xen
  2008-07-07 20:09 comparisons with VMware and Xen Sukanto Ghosh
@ 2008-07-07 20:13 ` Javier Guerra
  2008-07-07 20:39 ` Anthony Liguori
  1 sibling, 0 replies; 4+ messages in thread
From: Javier Guerra @ 2008-07-07 20:13 UTC (permalink / raw)
  To: Sukanto Ghosh; +Cc: kvm

On Mon, Jul 7, 2008 at 3:09 PM, Sukanto Ghosh
<sukanto.cse.iitb@gmail.com> wrote:

> 3. Are there any means to do content-based page sharing between guests,
> as VMware does?

Is it VMware or NetApp that does this? Or do you mean RAM page
sharing? If so, it sounds like a big performance tradeoff for a little
extra (cheap) RAM.


-- 
Javier


* Re: comparisons with VMware and Xen
  2008-07-07 20:09 comparisons with VMware and Xen Sukanto Ghosh
  2008-07-07 20:13 ` Javier Guerra
@ 2008-07-07 20:39 ` Anthony Liguori
  2008-07-08  7:23   ` Sukanto Ghosh
  1 sibling, 1 reply; 4+ messages in thread
From: Anthony Liguori @ 2008-07-07 20:39 UTC (permalink / raw)
  To: Sukanto Ghosh; +Cc: kvm

Sukanto Ghosh wrote:
> 1. Is the maximum number of vcpus in a particular guest limited to the
> number of host cpus? (I guess not.)
>   

No.

> 2. Is there any attempt made to co-schedule all of a guest's vcpus, in
> order to avoid the spinlock-holder preemption problem?
>   

Right now, no.  There is some discussion of gang scheduling within Linux 
but I don't know that it's going anywhere.  Recently, Jeremy 
Fitzhardinge posted some paravirtual spinlock patches for Linux that 
would at least allow for spinlock yielding.
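
To illustrate what spinlock yielding buys you, here is a toy sketch (not
Jeremy's actual patches; the class name, threshold, and the run_holder
callback are all made up for illustration). A vcpu that spins too long on
a ticket lock gives the hypervisor a chance to run the preempted holder
instead of burning its whole timeslice:

```python
# Toy model of a paravirtual ticket spinlock: after spinning "too long",
# a waiting vcpu yields to the hypervisor (modelled by run_holder) rather
# than busy-waiting for a lock holder that may have been preempted.

SPIN_THRESHOLD = 1000  # made-up number of spins before yielding

class PVTicketLock:
    def __init__(self):
        self.next_ticket = 0   # next ticket to hand out
        self.now_serving = 0   # ticket currently allowed to hold the lock
        self.yields = 0        # how many times waiters gave up spinning

    def lock(self, run_holder):
        """run_holder stands in for a hypercall that schedules the holder."""
        my_ticket = self.next_ticket
        self.next_ticket += 1
        spins = 0
        while self.now_serving != my_ticket:
            spins += 1
            if spins >= SPIN_THRESHOLD:
                self.yields += 1     # real code: yield-to-holder hypercall
                run_holder(self)
                spins = 0

    def unlock(self):
        self.now_serving += 1

# Uncontended case: the lock is taken without ever yielding.
lock = PVTicketLock()
lock.lock(run_holder=lambda l: None)
lock.unlock()

# Contended case: pretend another vcpu took ticket 0 and was preempted
# mid-critical-section; our yield lets it run and release the lock.
lock2 = PVTicketLock()
lock2.next_ticket = 1
lock2.lock(run_holder=lambda l: l.unlock())
```

The point of the threshold is that short critical sections still use the
cheap pure-spin path; only a waiter stuck behind a preempted holder pays
for the yield.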

> 3. Are there any means to do content-based page sharing between guests,
> as VMware does?
>   

Yes, KSM.

> 4. Does kvm make any attempt to avoid TLB flushes during vm-exits and
> vm-entries, the way Xen reserves a memory hole?
> (My guess is that it doesn't need to, as kvm is mapped into a guest's
> address space and the pages are protected with the help of the Linux VM.
> Am I right?)
>   

A TLB flush is mandatory when using hardware virtualization support 
(even with Xen--you are thinking of Xen PV).  However, both Intel and 
AMD now support hardware TLB tagging which reduces the pain of this TLB 
flush.
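
The effect of TLB tagging can be shown with a toy model (a simulation of
the idea behind Intel's VPID / AMD's ASID, not real hardware behaviour):
because each entry is keyed by a tag as well as the virtual address, a
world switch just changes the active tag and the other world's entries
stay warm.

```python
# Toy model of a tagged TLB: entries are keyed by (tag, vaddr). Switching
# between host and guest changes the active tag instead of flushing, so
# both worlds' translations coexist in the TLB.

class TaggedTLB:
    def __init__(self):
        self.entries = {}   # (tag, vaddr) -> paddr
        self.flushes = 0

    def fill(self, tag, vaddr, paddr):
        self.entries[(tag, vaddr)] = paddr

    def lookup(self, tag, vaddr):
        return self.entries.get((tag, vaddr))  # None means a TLB miss

    def flush_all(self):
        # What an *untagged* TLB is forced to do on every world switch.
        self.entries.clear()
        self.flushes += 1

HOST, GUEST = 0, 1
tlb = TaggedTLB()
tlb.fill(GUEST, 0x1000, 0xAA000)   # guest translation cached
tlb.fill(HOST, 0x1000, 0xBB000)    # same vaddr, different world

# vm-exit: run with the host tag; the guest entry is untouched.
host_pa = tlb.lookup(HOST, 0x1000)
# vm-entry: the guest entry is still warm -- no flush, no refill.
guest_pa = tlb.lookup(GUEST, 0x1000)
```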

> Slightly different ones,
>
> 5. How useful is a balloon driver for kvm, which doesn't make any
> hard partitions of available physical memory between the guests?
> Shouldn't the Linux VM's knowledge be superior in this case to the
> guest VM's?
>   

Ballooning can be very useful when doing very large changes to the 
amount of guest memory.
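
A toy sketch of the mechanism (accounting only; the class and method names
are made up, and a real balloon driver hands page frames back to the host
rather than tracking a set): the host sets a target, and the guest's
balloon driver pins pages to shrink, or releases them to grow, what the
guest OS can use, without rebooting the guest.

```python
# Toy model of a balloon driver: the host sets a target amount of usable
# guest memory; the driver "inflates" by pinning guest pages (which the
# host may then reclaim) and "deflates" by releasing them back to the guest.

class BalloonGuest:
    def __init__(self, total_pages):
        self.total_pages = total_pages
        self.ballooned = set()        # pages pinned by the balloon driver

    def usable_pages(self):
        return self.total_pages - len(self.ballooned)

    def set_target(self, target_pages):
        while self.usable_pages() > target_pages:   # inflate
            pfn = self.usable_pages() - 1           # grab a free page
            self.ballooned.add(pfn)                 # host can reclaim it now
        while self.usable_pages() < target_pages and self.ballooned:
            self.ballooned.pop()                    # deflate: give it back

g = BalloonGuest(total_pages=1024)
g.set_target(512)     # host wants half the memory back
half = g.usable_pages()
g.set_target(1024)    # the guest needs it again
full = g.usable_pages()
```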

> 6. What is coalesced mmio?
>   

There are a number of cases where a large number of MMIO writes occur 
back-to-back: think VGA planar updates, writes of a network packet to 
on-chip memory, etc.  Instead of exiting to userspace for each of these 
writes, we buffer a certain number of them and send the whole batch 
down to QEMU at once.  The real win from this is turning N-1 of the 
buffered writes into light-weight exits instead of heavy-weight exits.
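
In toy form (a simulation of the batching idea, not KVM's actual ring ABI;
the batch size and names are invented), the N-1 buffered writes never
leave the kernel, and only the write that fills the ring pays for a trip
to userspace:

```python
# Toy model of coalesced MMIO: guest MMIO writes are appended to a ring
# shared with userspace; only the write that fills the batch costs a
# heavy-weight exit to QEMU, which then drains the whole ring at once.

BATCH = 8   # made-up ring size; the real KVM ring is page-sized

class CoalescedMMIO:
    def __init__(self):
        self.ring = []            # buffered (phys_addr, data) writes
        self.light_exits = 0      # handled entirely in the kernel, cheap
        self.heavy_exits = 0      # trips out to userspace, expensive

    def guest_write(self, phys_addr, data):
        self.ring.append((phys_addr, data))
        if len(self.ring) < BATCH:
            self.light_exits += 1   # buffered; no switch to userspace
        else:
            self.flush()

    def flush(self):
        self.heavy_exits += 1       # one userspace exit drains the batch
        drained, self.ring = self.ring, []
        return drained

dev = CoalescedMMIO()
for i in range(BATCH):              # e.g. a burst of VGA planar updates
    dev.guest_write(0xA0000 + i, i)
```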

Regards,

Anthony Liguori




* Re: comparisons with VMware and Xen
  2008-07-07 20:39 ` Anthony Liguori
@ 2008-07-08  7:23   ` Sukanto Ghosh
  0 siblings, 0 replies; 4+ messages in thread
From: Sukanto Ghosh @ 2008-07-08  7:23 UTC (permalink / raw)
  To: Anthony Liguori, izike; +Cc: kvm

KSM related queries

>> 3. Are there any means to do content-based page sharing between guests,
>> as VMware does?
>>
>
> Yes, KSM.
>

How does KSM offer its services?  I saw in the archives that there is
some /dev/ksm device.
I mean, what are the major steps involved:

Is this feature turned on from the beginning?  I.e., whenever a guest
page is created, is it scanned for similarity with existing pages and
shared?  Or is it triggered by some events?  If so, when is it
triggered?

Is sharing done only between pages that have been registered via
KSM_REGISTER_MEMORY_REGION?

What are KSM_CREATE_SHARED_MEMORY_AREA and KSM_CREATE_SCAN for?

How much overhead does this involve?


-- 
Regards,
Sukanto Ghosh

