public inbox for kvm@vger.kernel.org
* ESXi, KVM or Xen?
@ 2010-07-03  3:55 Emmanuel Noobadmin
  2010-07-03  4:05 ` Peter Chacko
  2010-07-03  4:35 ` Javier Guerra Giraldez
  0 siblings, 2 replies; 6+ messages in thread
From: Emmanuel Noobadmin @ 2010-07-03  3:55 UTC (permalink / raw)
  To: kvm

Which of these would be the recommended virtualization platform for
mainly CentOS guests on a CentOS host, especially for running a
virtualized mail server? From what I've read, objectively it seems
that VMware is still the way to go, although I would have liked to go
with Xen or KVM as a matter of subjective preference.


VMware's offering seems to have the best support and tools, and is
likely the most mature of the options. Also, given their market
dominance, they are unlikely to just up and die in the near future.

Xen would have been a possible option, except Red Hat appears to be
focusing on KVM as their virtualization platform of choice to compete
with VMware and Citrix, so Xen support may be killed off before long.
Plus, the modified Xen kernel apparently conflicts with certain
software, at least based on previous incidents where I'd been advised
not to use the CentOS Xen kernel if not using Xen virtualization.


KVM would be ideal since it's open source and will be supported in
CentOS for the reasonably foreseeable future. However, looking at
available resources online, it seems to have these key disadvantages:

1. Poorer performance under load.
http://wiki.xensource.com/xenwiki/Open_Topics_For_Discussion?action=AttachFile&do=get&target=Quantitative+Comparison+of+Xen+and+KVM.pdf
This 2008 XenSummit paper indicates that KVM falls over under heavy
network load, as well as when more than a few VMs are doing heavy
processing at the same time. But that was two years ago, and it seems
they weren't using paravirtual drivers.

http://vmstudy.blogspot.com/2010/04/network-performance-test-xenkvm-vt-d.html
This blog tested Xen against KVM fairly recently. While the loads are
less drastic, and so is the difference, it still shows KVM lagging
behind by about 10%.

This is a concern since I plan to put storage on the network, and the
heaviest load the client has is basically the email server, due to the
volume plus the inline antivirus and anti-spam scanning to be done on
those emails. Admittedly, they won't be seeing as many emails as, say,
a webhost, but most of their emails come with relatively large
attachments.


2. Security
Some sites point out that KVM VMs run in userspace as threads, so a
compromised guest OS would give an intruder access to the host system
as well as the other VMs.

Should I really be concerned, or do these worries apply only to
extreme situations, with KVM viable for normal production use?
Are there other things I should be aware of?

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: ESXi, KVM or Xen?
  2010-07-03  3:55 ESXi, KVM or Xen? Emmanuel Noobadmin
@ 2010-07-03  4:05 ` Peter Chacko
  2010-07-03  7:32   ` Jan Kiszka
  2010-07-03  4:35 ` Javier Guerra Giraldez
  1 sibling, 1 reply; 6+ messages in thread
From: Peter Chacko @ 2010-07-03  4:05 UTC (permalink / raw)
  To: Emmanuel Noobadmin; +Cc: kvm

Did you consider VirtualBox? Of course VMware is the market leader
NOW, but if you plan to invest in future open source platforms, you
should choose KVM (which is now Linux-native) or Xen (it's unlikely
to be killed). KVM still lags behind in terms of enterprise-class
features, but count on it for future investment. So I think you
should just start off with Xen or VirtualBox, with a migration plan
to KVM in the future.

peter chacko.

On Sat, Jul 3, 2010 at 9:25 AM, Emmanuel Noobadmin
<centos.admin@gmail.com> wrote:
> [...]


* Re: ESXi, KVM or Xen?
  2010-07-03  3:55 ESXi, KVM or Xen? Emmanuel Noobadmin
  2010-07-03  4:05 ` Peter Chacko
@ 2010-07-03  4:35 ` Javier Guerra Giraldez
  2010-07-03  5:48   ` Emmanuel Noobadmin
  1 sibling, 1 reply; 6+ messages in thread
From: Javier Guerra Giraldez @ 2010-07-03  4:35 UTC (permalink / raw)
  To: Emmanuel Noobadmin; +Cc: kvm

On Fri, Jul 2, 2010 at 10:55 PM, Emmanuel Noobadmin
<centos.admin@gmail.com> wrote:
> This is a concern since I plan to put storage on the network, and the
> heaviest load the client has is basically the email server, due to the
> volume plus the inline antivirus and anti-spam scanning to be done on
> those emails. Admittedly, they won't be seeing as many emails as, say,
> a webhost, but most of their emails come with relatively large
> attachments.

if by 'put storage on the network' you mean using a block-level
protocol (iSCSI, FCoE, AoE, NBD, DRBD...), then you should by all
means initiate on the host OS (Dom0 in Xen) and present to the VM as
if it were local storage.  it's far faster and more stable that way.
in that case, storage wouldn't add to the VM's network load, which
might or might not make those (old) scenarios irrelevant
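to make that concrete, a sketch of the host-initiated approach on a
KVM host (the portal address, iSCSI target name, resulting device
path and domain name below are all made up for illustration):

```shell
# on the host: discover and log in to the iSCSI target, so the LUN
# appears as a local block device (say /dev/sdb)
iscsiadm -m discovery -t sendtargets -p 192.168.0.10
iscsiadm -m node -T iqn.2010-07.example:mailstore -p 192.168.0.10 --login

# then hand the block device to the guest as local storage; a vdX
# target name gets the guest a paravirtual virtio disk, and the guest
# never touches the storage network itself
virsh attach-disk mailvm /dev/sdb vdb
```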


> 2. Security
> Some sites point out that KVM VMs run in userspace as threads, so a
> compromised guest OS would give an intruder access to the host system
> as well as the other VMs.

in any case, if your base OS (the host on KVM, Dom0 on Xen) is
compromised, it's game over.  also, KVM guests are not 'userspace
threads'; they're processes as far as the scheduler is concerned, and
each guest's RAM mapping is managed as a separate process address
space.  no more, no less separation than usual among processes on a
server.  of course, the guest's processes are isolated inside that VM
and there's no way out of there (unless there's a security bug, and
those are few and far between given the hardware-assisted
virtualization).
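you can see this on any KVM host: each running guest is just one
ordinary process (the binary name varies by distro; qemu-kvm is an
assumption here), subject to the usual UID and permission isolation:

```shell
# each guest is a plain process on the host; its RAM is simply
# that process's address space, scheduled like any other process
ps -C qemu-kvm -o pid,user,rss,comm
```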


in any case, yes; Xen does have more maturity in big hosting
deployments.  but most third parties are betting on KVM for the
future.  the biggest examples are Red Hat, Canonical, libvirt (which
is sponsored by Red Hat), and Eucalyptus (which reimplements Amazon's
EC2 with either Xen or KVM, focusing on the latter), so the gap is
closing.

regarding performance, KVM is still somewhat behind; but the design is
cleaner and more scalable (don't believe too much of the 'type 1 vs
type 2' hype; most people invoking it don't really understand the
issues).  the evolving virtio backends hold a lot of promise, and
periodically there are proof-of-concept tests that blow everything
else out of the water.

and finally, even if right now the 'best' deployment on Xen definitely
outperforms KVM by a measurable margin, when things are not optimal
Xen degrades a lot quicker than KVM, in part because the Xen core
scheduler is far from the maturity of the Linux kernel's scheduler.

-- 
Javier


* Re: ESXi, KVM or Xen?
  2010-07-03  4:35 ` Javier Guerra Giraldez
@ 2010-07-03  5:48   ` Emmanuel Noobadmin
  2010-07-03  7:34     ` Jan Kiszka
  0 siblings, 1 reply; 6+ messages in thread
From: Emmanuel Noobadmin @ 2010-07-03  5:48 UTC (permalink / raw)
  To: kvm

> if by 'put storage on the network' you mean using a block-level
> protocol (iSCSI, FCoE, AoE, NBD, DRBD...), then you should by all
> means initiate on the host OS (Dom0 in Xen) and present to the VM as
> if it were local storage.  it's far faster and more stable that way.
> in that case, storage wouldn't add to the VM's network load, which
> might or might not make those (old) scenarios irrelevant

Thanks for that tip :)

> in any case, yes; Xen does have more maturity in big hosting
> deployments.  but most third parties are betting on KVM for the
> future.  the biggest examples are Red Hat, Canonical, libvirt (which
> is sponsored by Red Hat), and Eucalyptus (which reimplements Amazon's
> EC2 with either Xen or KVM, focusing on the latter), so the gap is
> closing.

This is what I figured too, hence it's not a straightforward choice. I
don't need top-notch performance for most of the servers targeted for
virtualization. Loads are usually low except on the mail servers, and
often only when there's a mail loop problem. So if the performance hit
in worst-case situations is only 10~20%, it's something I can live
with, especially since the intended VM servers (i5/i7) will be
significantly faster than the current ones (P4/C2D) I'm basing my
estimates on.

But I need to do my due diligence and have justification ready to
show that current performance/reliability/security is at least "good
enough", instead of "I like where KVM is going and think it'll be the
platform of choice in the years to come". Bosses and clients tend to
frown on that kind of thing :D

> and finally, even if right now the 'best' deployment on Xen definitely
> outperforms KVM by a measurable margin, when things are not optimal
> Xen degrades a lot quicker than KVM, in part because the Xen core
> scheduler is far from the maturity of the Linux kernel's scheduler.

The problem is finding stats to back that up if my clients/boss ask
about it. So far, most of the available comparisons/data seem rather
dated, mostly from 2007 and 2008. The most "professional"-looking one,
the PDF I linked to, seems to indicate the opposite, i.e. KVM degrades
faster when things go south. The graph with the Apache problem is
especially damning, because our primary products/services are
web-based applications, with infrastructure as a supplementary
service/product.

In addition, I remember reading a thread on this list where an Intel
developer pointed out that the Linux scheduler causes a performance
hit, about 8x~10x slower, when the physical processors are heavily
loaded and there are more vCPUs than pCPUs, because it puts the same
VM's vCPUs onto the same physical core.
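From what I understand, that particular stacking could be worked
around by pinning each vCPU to a distinct physical core, along these
lines (the domain name is made up, and I haven't verified this helps
in the exact scenario the Intel developer described):

```shell
# pin vCPU 0 of the guest to physical CPU 0 and vCPU 1 to CPU 1,
# so two vCPUs of the same VM never compete for one core
virsh vcpupin mailvm 0 0
virsh vcpupin mailvm 1 1
```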

So I am a little worried, since 8~10x is a massive difference,
especially if some process goes awry, starts chewing up processor
cycles, and the VM starts to lag because of it: a vicious cycle that
makes it even harder to fix things without killing the VM.

Of course, if I could honestly tell my clients/boss "This, this and
this are rare situations we will almost never encounter...", then it
would be a different thing. Hence asking about this here :)


* Re: ESXi, KVM or Xen?
  2010-07-03  4:05 ` Peter Chacko
@ 2010-07-03  7:32   ` Jan Kiszka
  0 siblings, 0 replies; 6+ messages in thread
From: Jan Kiszka @ 2010-07-03  7:32 UTC (permalink / raw)
  To: Peter Chacko, Emmanuel Noobadmin; +Cc: kvm


Peter Chacko wrote:
> Did you consider VirtualBox? Of course VMware is the market leader
> NOW, but if you plan to invest in future open source platforms, you
> should choose KVM (which is now Linux-native) or Xen (it's unlikely
> to be killed). KVM still lags behind in terms of enterprise-class
> features, but count on it for future investment. So I think you
> should just start off with Xen or VirtualBox, with a migration plan
> to KVM in the future.

VBox surely has its strengths on non-Linux hosts and on hosts without
virtualization acceleration. But I would carefully evaluate its
performance under relevant load:

http://permalink.gmane.org/gmane.comp.emulators.virtualbox.devel/2796


I recently learned from someone doing Xen consulting that it's still
troublesome to get it running on non-certified hardware. This may have
an impact on the hardware choice.


For hosting Linux-on-Linux, I would also consider containers, e.g. lxc
or OpenVZ. Performance-wise, that's generally the most efficient approach.

Jan




* Re: ESXi, KVM or Xen?
  2010-07-03  5:48   ` Emmanuel Noobadmin
@ 2010-07-03  7:34     ` Jan Kiszka
  0 siblings, 0 replies; 6+ messages in thread
From: Jan Kiszka @ 2010-07-03  7:34 UTC (permalink / raw)
  To: Emmanuel Noobadmin; +Cc: kvm


Emmanuel Noobadmin wrote:
>> if by 'put storage on the network' you mean using a block-level
>> protocol (iSCSI, FCoE, AoE, NBD, DRBD...), then you should by all
>> means initiate on the host OS (Dom0 in Xen) and present to the VM as
>> if it were local storage.  it's far faster and more stable that way.
>> in that case, storage wouldn't add to the VM's network load, which
>> might or might not make those (old) scenarios irrelevant
> 
> Thanks for that tip :)
> 
>> in any case, yes; Xen does have more maturity in big hosting
>> deployments.  but most third parties are betting on KVM for the
>> future.  the biggest examples are Red Hat, Canonical, libvirt (which
>> is sponsored by Red Hat), and Eucalyptus (which reimplements Amazon's
>> EC2 with either Xen or KVM, focusing on the latter), so the gap is
>> closing.
> 
> This is what I figured too, hence it's not a straightforward choice. I
> don't need top-notch performance for most of the servers targeted for
> virtualization. Loads are usually low except on the mail servers, and
> often only when there's a mail loop problem. So if the performance hit
> in worst-case situations is only 10~20%, it's something I can live
> with, especially since the intended VM servers (i5/i7) will be
> significantly faster than the current ones (P4/C2D) I'm basing my
> estimates on.
> 
> But I need to do my due diligence and have justification ready to
> show that current performance/reliability/security is at least "good
> enough", instead of "I like where KVM is going and think it'll be the
> platform of choice in the years to come". Bosses and clients tend to
> frown on that kind of thing :D

How much customization will you apply to your virtualization
infrastructure? If you can manage to do the majority via a proper
hypervisor abstraction, specifically libvirt, you will actually have
considerable freedom in choosing the platform. If not, I would look
very carefully at the management interfaces of all those hypervisors:
how much they conform to standard administration procedures, and what
specialties they require on both the host and guest side.
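For example, the same libvirt tooling can drive either hypervisor
just by switching the connection URI (local-host URIs shown; adapt as
needed):

```shell
# same command, different hypervisor underneath
virsh -c qemu:///system list --all   # KVM on the local host
virsh -c xen:/// list --all          # Xen Dom0 on the local host
```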

> 
>> and finally, even if right now the 'best' deployment on Xen definitely
>> outperforms KVM by a measurable margin, when things are not optimal
>> Xen degrades a lot quicker than KVM, in part because the Xen core
>> scheduler is far from the maturity of the Linux kernel's scheduler.
> 
> The problem is finding stats to back that up if my clients/boss ask
> about it. So far, most of the available comparisons/data seem rather
> dated, mostly from 2007 and 2008. The most "professional"-looking one,
> the PDF I linked to, seems to indicate the opposite, i.e. KVM degrades
> faster when things go south. The graph with the Apache problem is
> especially damning, because our primary products/services are
> web-based applications, with infrastructure as a supplementary
> service/product.
> 
> In addition, I remember reading a thread on this list where an Intel
> developer pointed out that the Linux scheduler causes a performance
> hit, about 8x~10x slower, when the physical processors are heavily
> loaded and there are more vCPUs than pCPUs, because it puts the same
> VM's vCPUs onto the same physical core.

That's only relevant if you run SMP guests on over-committed hosts.
What will your guests look like?

> 
> So I am a little worried, since 8~10x is a massive difference,
> especially if some process goes awry, starts chewing up processor
> cycles, and the VM starts to lag because of it: a vicious cycle that
> makes it even harder to fix things without killing the VM.
> 
> Of course, if I could honestly tell my clients/boss "This, this and
> this are rare situations we will almost never encounter...", then it
> would be a different thing. Hence asking about this here :)

All solutions have weak points. The point is indeed to estimate
whether your use cases will trigger them. Even then, the question
remains whether a given weakness is inherent to the solution's design
or likely to be fixed before you actually hit it. And weaknesses are
not only about performance.

Jan



