From: Jia Rao
Subject: strange xen disk performance
Date: Mon, 19 Jul 2010 16:13:39 -0400
To: xen-devel@lists.xensource.com

Hi all,

I observed some strange disk performance in a Xen environment.

The test bed:
Dom0: 2.6.18.8, one CPU, pinned to a physical core, CFQ disk scheduler.
Xen: 3.3.1
Guest OS: 2.6.18.8, virtual disks backed by physical partitions (phy:/dev/sdbX). One virtual CPU per guest, each pinned to a separate core (rough config sketch below).
Host: two quad-core Intel Xeon CPUs.
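For reference, the relevant part of each guest's config looks roughly like this (the partition, device name and core number below are placeholders rather than my exact values):

  # one domU config (xm syntax)
  vcpus = 1
  cpus  = "2"                          # pin the guest's vcpu to one physical core
  disk  = [ 'phy:/dev/sdb1,xvda1,w' ]  # virtual disk backed by a raw partition

  # dom0's single vcpu is pinned the same way from the command line, e.g.:
  #   xm vcpu-pin Domain-0 0 0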

I did some tests on disk I/O performance when two VMs are sharing the physical host.

Both VMs run the iozone benchmark, doing a sequential read of a 1GB file with a 32K request size. The reads are issued with O_DIRECT, skipping domU's buffer cache.
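The iozone invocation is along these lines (the test file path is just an example; the -i 0 write pass only creates the file that the -i 1 sequential read then uses):

  # -s 1g / -r 32k: 1GB file, 32K records; -I asks for O_DIRECT
  iozone -i 0 -i 1 -s 1g -r 32k -I -f /mnt/data/iozone.tmp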

To get a reference throughput, I issued the read from each VM individually. It turned out to be a throughput of 56MB/s-61MB/s, which is the limit of the system for 32K sequential reads.

Then I issued the read from the two VMs simultaneously. Each VM got 22MB/s-24MB/s, around 45MB/s combined for the whole system, which is pretty normal.

The strangest thing was this: to make sure the above results reflect pure disk performance and have nothing to do with the buffer cache, I ran the same test again, but purged the cached data in domU and dom0 before the test.

sync and echo 3 > /proc/sys/vm/drop_caches were exec= uted both in domU and dom0. The resulted throughput of each VM increased to= 56MB/s-61MB/s, which generates a system throughput around 120MB/s. I ran t= he test for 10 times, 9 out of 10 have this strange result. Only one test l= ooks like the original test.

This seems impossible to me; it must have something to do with caching.

My question is: does Xen or dom0 cache any disk I/O data from the guest OS? It seems dom0's buffer cache has nothing to do with this, because I already purged everything.

Any ideas?

Thanks a lot.