From: Brendan Cully
Subject: Re: domU is causing misaligned disk writes
Date: Tue, 20 Apr 2010 13:41:51 -0700
Message-ID: <20100420204151.GH1708@kremvax.cs.ubc.ca>
References: <20100420080958.GN5660@tracyreed.org> <20100420084955.GV1878@reaktio.net> <20100420200004.GQ5660@tracyreed.org>
In-Reply-To: <20100420200004.GQ5660@tracyreed.org>
To: Tracy Reed, Pasi Kärkkäinen, xen-devel@lists.xensource.com, Aoetools-discuss@lists.sourceforge.net, linux-kernel@vger.kernel.org
List-Id: xen-devel@lists.xenproject.org

On Tuesday, 20 April 2010 at 13:00, Tracy Reed wrote:
> On Tue, Apr 20, 2010 at 11:49:55AM +0300, Pasi Kärkkäinen spake thusly:
> > Are you using filesystems on normal partitions, or LVM in the domU?
> > I'm pretty sure this is a domU partitioning problem.
> 
> Also: What changes in the view of the partitioning between domU and
> dom0? Wouldn't a partitioning error manifest itself in tests in the
> dom0 as well as in the domU?
> 
> BTW: The dd from the last time in my last email finally finished:
> 
> # dd if=/dev/zero of=/dev/xvdg1 bs=4096 count=3000000
> 3000000+0 records in
> 3000000+0 records out
> 12288000000 bytes (12 GB) copied, 734.714 seconds, 16.7 MB/s
> 
> If I run that very same dd as above (the last test in my previous
> email) with the same partition setup again, but this time from the
> dom0:
> 
> # dd if=/dev/zero of=/dev/etherd/e6.1 bs=4096 count=3000000
> 3000000+0 records in
> 3000000+0 records out
> 12288000000 bytes (12 GB) copied, 107.352 seconds, 114 MB/s
> 
> # /sbin/sfdisk -d /dev/etherd/e6.1
> # partition table of /dev/etherd/e6.1
> unit: sectors
> 
> /dev/etherd/e6.1p1 : start=       64, size=566226926, Id=83
> /dev/etherd/e6.1p2 : start=        0, size=        0, Id= 0
> /dev/etherd/e6.1p3 : start=        0, size=        0, Id= 0
> /dev/etherd/e6.1p4 : start=        0, size=        0, Id= 0
> 
> Device:  rrqm/s   wrqm/s   r/s    w/s   rkB/s     wkB/s  avgrq-sz  avgqu-sz   await  svctm   %util
> sda        0.00 17350.80  0.60 275.60   22.40  72540.00    525.43     97.94  344.01   3.62  100.02
> sdb        0.00 17374.80  1.20 256.00   28.00  74848.00    582.24    136.20  527.72   3.89  100.02

You could also be limited by the size of the block request ring (I
believe the ring is normally only one page) -- the ring needs to be
large enough to cover the bandwidth-delay product, and AoE means the
delay is probably higher than normal. Do you get better performance
against a local partition?
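The bandwidth-delay argument can be put in rough numbers. A back-of-envelope sketch, assuming the classic single-page blkif ring holds about 32 outstanding requests of up to 11 segments of 4 KiB each (my recollection of the blkif constants; check BLKIF_MAX_SEGMENTS_PER_REQUEST and the ring size in your tree -- the RTT values below are illustrative, not measured):

```python
# Rough upper bound on blkfront throughput given a one-page request ring.
# Assumed constants (approximate; verify against your kernel's blkif.h):
RING_SLOTS = 32      # requests that fit in one 4 KiB ring page
SEGS_PER_REQ = 11    # segments (pages) per request
PAGE = 4096          # bytes per segment

def max_throughput_mb_s(rtt_s):
    """Best case MB/s if the ring stays full for a whole round trip."""
    in_flight = RING_SLOTS * SEGS_PER_REQ * PAGE  # bytes outstanding at once
    return in_flight / rtt_s / 1e6

# Only ~1.4 MB can be in flight, so throughput falls off quickly with RTT:
for rtt_ms in (1, 5, 10, 50):
    print("RTT %3d ms -> %7.1f MB/s" % (rtt_ms, max_throughput_mb_s(rtt_ms / 1000.0)))
```

If the effective round trip through blkfront/blkback plus the AoE path is tens of milliseconds, this bound alone lands in the same ballpark as the 16.7 MB/s you measured, which is why testing against a local partition would be a useful data point.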