public inbox for kvm@vger.kernel.org
From: infernix <infernix@infernix.net>
To: kvm@vger.kernel.org
Subject: Re: KVM incompatible with multipath?
Date: Sun, 28 Jun 2009 00:42:26 +0200	[thread overview]
Message-ID: <4A46A052.3030408@infernix.net> (raw)
In-Reply-To: <4A466290.2010804@codemonkey.ws>

Anthony Liguori wrote:
> You need to create a partition table and setup grub in order to be able 
> to use something as -hda.  You don't get that automatically with 
> debootstrap.

Although I didn't include that in my mail, I did configure partitions 
and debootstrapped Lenny onto partition 1. But that's not what's 
important anymore; this post has grown into a performance test of 
virtio block and net for Xen 3.2.1 and kvm-87, specifically when using 
multipath I/O to an Equallogic iSCSI box.

I got this working as a Xen 3.2.1 domU on the same box and ran some 
performance tests. Afterwards I retried KVM and, for reasons unknown, 
did not hit the problems I had before.

I boot the same multipathed disk that I used for Xen, with kernel 
2.6.27.25 (plus the kvm-87 modules in the initrd).
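For reference, the invocation was along these lines (a sketch only: the memory size, SMP count, tap name and VNC display are assumptions, not a verbatim copy of the original command; the multipath device name is the one used throughout this mail):

```shell
# Hypothetical kvm-87 invocation booting the whole multipathed LUN
# (the guest's root is part1 inside it); -m, -smp, tap0 and :1 are
# placeholder values, not taken from the original setup.
kvm -m 1024 -smp 2 \
    -drive file=/dev/mapper/36090a0383049a2ac41a4643f000070c2,if=virtio \
    -net nic,model=virtio \
    -net tap,ifname=tap0,script=no \
    -vnc :1
```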


It actually boots now and I have no problems whatsoever. Note that the 
kernel I built works as a Xen domU, natively on Linux, and as a KVM guest.

I ran some bonnie++ tests; see below for the results.
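The runs were along these lines (the exact flags weren't included in the original message, so this is an assumption; -s should be well above RAM so the page cache can't absorb the working set, and one run further down explicitly uses 2.5x host RAM):

```shell
# Hypothetical bonnie++ invocation; the size (-s, in MB) and target
# directory are placeholders.  -n 0 skips the file-creation tests,
# matching the columns shown in the tables below; -u root is needed
# when running as root.
bonnie++ -d /mnt/test -s 8192 -n 0 -u root
```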

What's interesting in these results is that the KVM guest has much 
lower sequential block output than the host kernel, but much better 
sequential input. The latter is probably due to caching and buffering 
in the KVM host kernel. Setting cache=writeback improves both, but 
block output is still ~75 MB/s slower than on the host.

It seems that KVM guest write performance is CPU-limited. Any advice on 
how to get better write speeds would be highly appreciated.
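The two cache settings compared in this mail differ only in the -drive option (a sketch; writethrough was, as far as I know, the default at the time, and writeback trades safety on host crash for throughput by letting the host page cache absorb writes):

```shell
# cache=writethrough: guest writes reach the storage before completion
# is reported back to the guest
-drive file=/dev/mapper/36090a0383049a2ac41a4643f000070c2,if=virtio,cache=writethrough

# cache=writeback: completions come from the host page cache; faster,
# but buffered data is lost if the host crashes
-drive file=/dev/mapper/36090a0383049a2ac41a4643f000070c2,if=virtio,cache=writeback
```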

In any case, so far I'm unable to reproduce the data corruption. 
Probably some glitch in the matrix.

Here's bonnie++ output with Xen domU (kernel 2.6.27.25) which has the 
multipathed 36090a0383049a2ac41a4643f000070c2-part1 configured as root 
disk xvda1:

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
77918  98 115400  18 56323   8 60034  69 122466   7 422.4   0

Here's bonnie++ output with KVM guest (kernel 2.6.27.25) using virtio:

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
59461  95 97238  17 60831  12 62181  95 205094  25 656.4   2

Here's host performance with Xen dom0 (kernel 2.6.26-2-amd64 from lenny):

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
66519  93 172569  50 85457  35 59671  86 164754  40 451.8   0

And here's native performance with 2.6.27.25 (no xen):

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
71796  98 182818  53 85511  29 61484  79 165302  31 668.4   1

The above tests were performed with blockdev --setra 16384 and MTU 1500.
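Concretely, that tuning amounts to the following (the interface name is a placeholder; the multipath device is the one used throughout):

```shell
# Read-ahead of 16384 sectors (8 MB) on the multipath device
blockdev --setra 16384 /dev/mapper/36090a0383049a2ac41a4643f000070c2

# Standard 1500-byte MTU on the iSCSI interface for these runs;
# the jumbo-frame runs use 9000 instead (eth1 is a placeholder)
ip link set eth1 mtu 1500
```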


Here's native Linux with jumbo frames:

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
73616  99 195577  48 96794  27 68845  84 201899  29 630.3   1

Here's KVM guest performance (with jumbo frames on the host's iSCSI 
interfaces), cache=writethrough:

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
60814  96 93222  15 64166  12 58015  94 258557  31 649.3   2

Here's KVM guest performance (with jumbo frames on the host's iSCSI 
interfaces), cache=writeback and a bonnie++ size of 2.5 times host RAM:

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
52627  95 120630  23 100333  22 61271  94 284889  37 464.6   2


Xen domU with jumbo frames in dom0:

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
76316  97 116028  19 58278   9 60066  71 131953   9 282.8   0



Now for something different: iSCSI multipath inside the Xen domU (i.e. 
the domU gets 3 NICs, each bridged to one of the NICs in dom0):

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
64011  97 196484  49 91937  33 54583  81 160031  33 531.6   0
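The per-NIC bridging for these in-guest multipath tests can be sketched as follows (bridge and interface names are placeholders; the originals weren't given):

```shell
# One bridge per physical iSCSI NIC, so the guest sees three
# independent paths to the Equallogic box
for i in 0 1 2; do
    brctl addbr br$i
    brctl addif br$i eth$i
    ip link set br$i up
done

# Each guest interface (a tap device for KVM, a vif for Xen) is then
# attached to its own bridge, e.g.:
brctl addif br0 tap0
```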

KVM guest with in-guest iSCSI multipath, same bridging setup with 3 tap 
devices, but *NO* jumbo frames:



Thread overview: 4+ messages
2009-06-27 15:57 KVM incompatible with multipath? infernix
2009-06-27 18:18 ` Anthony Liguori
2009-06-27 22:42   ` infernix [this message]
2009-06-27 23:00     ` infernix
