xen-devel.lists.xenproject.org archive mirror
From: Jia Rao <rickenrao@gmail.com>
To: xen-devel@lists.xensource.com
Subject: strange xen disk performance
Date: Mon, 19 Jul 2010 16:13:39 -0400
Message-ID: <AANLkTinUbDd9_Gs4THhRpQ6C-o9ECKODZ0AANpkY8ZeN@mail.gmail.com>



Hi all,

I observed some strange disk I/O performance in a Xen environment.

*The test bed:*
Dom0: 2.6.18.8, one CPU, pinned to a physical core, CFQ disk scheduler.
Xen: 3.3.1
Guest OS: 2.6.18.8; virtual disks are backed by physical partitions
(phy:/dev/sdbX); one virtual CPU per guest, each pinned to a separate core.
Hardware: two quad-core Intel Xeon CPUs.

I ran some tests of disk I/O performance with two VMs sharing the
physical host.

Both VMs run the iozone benchmark, sequentially reading a 1GB file with
a 32K request size. The reads were issued with O_DIRECT, bypassing the
domU's buffer cache.
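For reference, the workload described above corresponds roughly to an
iozone invocation like the following (the test file path, and the initial
write pass needed to create the file, are my assumptions, not taken from
the original runs):

```shell
# Sketch of the benchmark: sequential read (-i 1) of a 1 GB file (-s 1g)
# in 32 KB requests (-r 32k), with O_DIRECT (-I) so reads bypass the
# domU's page cache. The file path is hypothetical.
TESTFILE=/mnt/data/iozone.tmp
CMD="iozone -i 0 -i 1 -s 1g -r 32k -I -f $TESTFILE"
# iozone must write the file (-i 0) before it can read it back (-i 1);
# on repeat runs, adding -w would keep the file between invocations.
echo "$CMD"
```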

To get a reference throughput, I first issued the read from each VM
individually. Each got 56MB/s-61MB/s, which is the limit of the system
for 32K sequential reads.

Then I issued the reads from both VMs simultaneously. Each VM got
22MB/s-24MB/s, around 45MB/s combined for the whole system, which is
pretty normal.

The strangest thing was this: to make sure the above results reflected
pure disk performance and had nothing to do with the buffer cache, I ran
the same test again, but purged the cached data in domU and dom0 before
the test.

sync and echo 3 > /proc/sys/vm/drop_caches were executed in both domU
and dom0. The resulting throughput of each VM increased to
56MB/s-61MB/s, giving a system throughput of around 120MB/s. I ran the
test 10 times; 9 out of 10 showed this strange result, and only one
looked like the original test.
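The purge step is just the two commands above; a small helper like the
following could be run in dom0 and in every domU before each test (the
root check is my addition, since /proc/sys/vm/drop_caches is only
writable by root):

```shell
# Purge cached data before a run; execute in dom0 and in each domU.
drop_caches() {
    sync                                    # flush dirty pages to disk first
    if [ -w /proc/sys/vm/drop_caches ]; then
        # 3 = free page cache plus dentries and inodes
        echo 3 > /proc/sys/vm/drop_caches
    else
        echo "drop_caches: need root to write drop_caches" >&2
    fi
}
drop_caches
```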

This seems impossible to me; it must have something to do with caching.

My question is: does Xen or dom0 cache any disk I/O data from the guest
OS? It seems dom0's buffer cache has nothing to do with this, because I
had already purged everything.

Any ideas?

Thanks a lot.
