From: Brendan Cully <brendan@cs.ubc.ca>
To: "Tracy Reed" <treed@ultraviolet.org>,
"Pasi Kärkkäinen" <pasik@iki.fi>,
xen-devel@lists.xensource.com,
Aoetools-discuss@lists.sourceforge.net,
linux-kernel@vger.kernel.org
Subject: Re: domU is causing misaligned disk writes
Date: Tue, 20 Apr 2010 13:41:51 -0700
Message-ID: <20100420204151.GH1708@kremvax.cs.ubc.ca>
In-Reply-To: <20100420200004.GQ5660@tracyreed.org>

On Tuesday, 20 April 2010 at 13:00, Tracy Reed wrote:
> On Tue, Apr 20, 2010 at 11:49:55AM +0300, Pasi Kärkkäinen spake thusly:
> > Are you using filesystems on normal partitions, or LVM in the domU?
> > I'm pretty sure this is a domU partitioning problem.
>
> Also: What changes in the view of the partitioning between domU and
> dom0? Wouldn't a partitioning error manifest itself in tests in the
> dom0 as well as in the domU?
>
> BTW: The dd from the last time in my last email finally finished:
>
> # dd if=/dev/zero of=/dev/xvdg1 bs=4096 count=3000000
> 3000000+0 records in
> 3000000+0 records out
> 12288000000 bytes (12 GB) copied, 734.714 seconds, 16.7 MB/s
>
> If I run that very same dd as above (the last test in my previous
> email) with the same partition setup again but this time from the
> dom0:
>
> # dd if=/dev/zero of=/dev/etherd/e6.1 bs=4096 count=3000000
> 3000000+0 records in
> 3000000+0 records out
> 12288000000 bytes (12 GB) copied, 107.352 seconds, 114 MB/s
>
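One thing worth trying (my suggestion, not something from your tests so far): repeat both runs with oflag=direct so the page cache and request merging drop out of the picture, e.g.

  # dd if=/dev/zero of=/dev/xvdg1 bs=4096 count=3000000 oflag=direct

If the O_DIRECT numbers for dom0 and domU converge, the gap is in how buffered writes get batched on each side rather than in partition alignment.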
> # /sbin/sfdisk -d /dev/etherd/e6.1
> # partition table of /dev/etherd/e6.1
> unit: sectors
>
> /dev/etherd/e6.1p1 : start= 64, size=566226926, Id=83
> /dev/etherd/e6.1p2 : start= 0, size= 0, Id= 0
> /dev/etherd/e6.1p3 : start= 0, size= 0, Id= 0
> /dev/etherd/e6.1p4 : start= 0, size= 0, Id= 0
>
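For what it's worth, the arithmetic says partition 1 is 4 KiB-aligned: 64 sectors * 512 bytes = 32768 bytes = 32 KiB, which divides evenly by 4096. If the AoE target sits on RAID with a wider stripe (64 KiB, say -- I'm guessing about your backend), a 32 KiB offset could still split stripe-sized writes, but individual 4 KiB writes shouldn't straddle any boundary.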
> Device:  rrqm/s  wrqm/s    r/s   w/s     rkB/s  wkB/s     avgrq-sz  avgqu-sz  await   svctm  %util
> sda      0.00    17350.80  0.60  275.60  22.40  72540.00  525.43    97.94     344.01  3.62   100.02
> sdb      0.00    17374.80  1.20  256.00  28.00  74848.00  582.24    136.20    527.72  3.89   100.02
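Reading those numbers (avgrq-sz is in sectors, as iostat reports it): 525 sectors * 512 bytes is roughly 260 KiB per request, and ~17000 write merges per second means the elevator is coalescing heavily, so the dom0 path is batching writes just fine.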
You could also be limited by the size of the block request ring (I
believe the ring is normally only one page) -- the ring needs to be
large enough to cover the bandwidth-delay product, and with AoE the
delay is probably higher than normal. Do you get better performance
against a local partition?
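Back of the envelope, assuming the usual single-page blkif ring (32 request slots, up to 11 segments of 4 KiB each -- numbers from memory, worth checking against your kernel):

  32 requests * 11 segments * 4 KiB ~= 1.4 MiB in flight
  throughput ceiling ~= in-flight data / round-trip time
  1.4 MiB / 10 ms    ~= 140 MiB/s
  1.4 MiB / 80 ms    ~=  17 MiB/s

Your 16.7 MB/s would be consistent with something like 80 ms of effective round trip through blkfront plus AoE, while dom0 talks to the AoE device without the ring in the path.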
Thread overview: 14+ messages
2010-04-20 8:09 domU is causing misaligned disk writes Tracy Reed
2010-04-20 8:49 ` Pasi Kärkkäinen
2010-04-20 8:54 ` Pasi Kärkkäinen
2010-04-20 18:40 ` [LKML] " Konrad Rzeszutek Wilk
2010-04-20 22:58 ` [LKML] Re: [Xen-devel] " Tracy Reed
2010-04-20 19:39 ` Tracy Reed
2010-04-20 19:54 ` Pasi Kärkkäinen
2010-04-20 20:03 ` [Xen-devel] " Tracy Reed
2010-04-20 20:00 ` Tracy Reed
2010-04-20 20:25 ` Konrad Rzeszutek Wilk
2010-04-20 21:19 ` Tracy Reed
2010-04-28 19:55 ` [Aoetools-discuss] [Xen-devel] " Gabor Gombas
2010-04-20 20:41 ` Brendan Cully [this message]
2010-04-20 21:58 ` Tracy Reed