From: NeilBrown <neilb@suse.de>
To: "Ramon Schönborn" <RSchoenborn@gmx.net>
Cc: linux-raid@vger.kernel.org
Subject: Re: md device io request split
Date: Wed, 23 Nov 2011 13:31:24 +1100
Message-ID: <20111123133124.2042c1f4@notabene.brown>
In-Reply-To: <20111122093634.105520@gmx.net>
On Tue, 22 Nov 2011 10:36:34 +0100 "Ramon Schönborn" <RSchoenborn@gmx.net>
wrote:
> Hi,
>
> could someone help me understand why md splits io requests into 4k blocks?
> iostat says:
> Device:  rrqm/s   wrqm/s    r/s      w/s  rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm  %util
> ...
> dm-71      4.00  5895.00  31.00  7538.00   0.14  52.54     14.25     94.69  16.41   0.13  96.00
> dm-96      2.00  5883.00  18.00  7670.00   0.07  52.95     14.13    104.84  13.69   0.12  96.00
> md17       0.00     0.00  48.00 13234.00   0.19  51.70      8.00      0.00   0.00   0.00   0.00
>
> md17 is a raid1 with members "dm-71" and "dm-96". IO was generated with something like "dd if=/dev/zero bs=100k of=/dev/md17".
> According to "avgrq-sz", the average size of the requests is 8 times 512b, i.e. 4k.
> I used kernel 3.0.7 and verified the results with a raid5 and older kernel version (2.6.32) too.
> Why do I care about this at all?
> The io requests in my case come from a virtual machine, where the requests have already been merged in a virtual device. Afterwards the requests are split at md level (on the vm host) and later merged again (at dm-71/dm-96). This seems like avoidable overhead, doesn't it?
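
Your arithmetic is right, by the way: avgrq-sz is simply sectors transferred
divided by requests completed, both taken from /proc/diskstats, so 8.00
sectors really does mean 4K per request.  A minimal sketch that computes it
the same way (cumulative totals since boot rather than over an interval, and
the device name is only an example):

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *dev = "md17";	/* example: device to report on */
	char line[256], name[64];
	unsigned long long r, rmg, rsec, rms, w, wmg, wsec, wms;
	FILE *f = fopen("/proc/diskstats", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		/* fields: major minor name  reads reads_merged sectors_read ms
		 *         writes writes_merged sectors_written ms ... */
		if (sscanf(line, "%*d %*d %63s %llu %llu %llu %llu %llu %llu %llu %llu",
			   name, &r, &rmg, &rsec, &rms, &w, &wmg, &wsec, &wms) == 9
		    && strcmp(name, dev) == 0 && r + w > 0) {
			printf("%s: avgrq-sz = %.2f sectors\n",
			       dev, (double)(rsec + wsec) / (double)(r + w));
			break;
		}
	}
	fclose(f);
	return 0;
}
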
Reads to a RAID5 device should be as large as the chunk size.
Writes will always be 4K as they go through the stripe cache which uses 4K
blocks.
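
Schematically the effect is like the following sketch - illustrative only,
not the actual raid5.c code (the function name is made up, but the stripe
cache block size really is one page):

#define STRIPE_SIZE 4096ULL	/* md stripe cache blocks are PAGE_SIZE */

/* Illustrative sketch only - not the actual md/raid5.c code.  Every
 * write is carved into pieces that each fit one 4K stripe-cache block,
 * which is why the members see 4K write requests before the elevator
 * merges them back together.
 */
static void handle_write(unsigned long long offset, unsigned long long len)
{
	while (len) {
		unsigned long long piece = STRIPE_SIZE - (offset % STRIPE_SIZE);

		if (piece > len)
			piece = len;
		/* ...attach the piece to its stripe cache entry, compute
		 * parity, and submit it to the member devices... */
		offset += piece;
		len -= piece;
	}
}
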
These 4K requests should be combined into larger requests by the
elevator/scheduler at a lower level, so the device should see largish writes.
Your iostat output shows exactly that: the high wrqm/s on dm-71/dm-96 is
those 4K writes being merged back together before they reach the disks.

Writing to a RAID5 is always going to be costly due to the need to compute
and write parity, so it isn't clear to me that this is a place where
optimisation is appropriate.

RAID1 will only limit requests to 4K if the device beneath it is
non-contiguous - e.g. a striped array or LVM arrangement where consecutive
blocks might be on different devices.
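
To see why the starting offset matters on such a device: the largest run
that stays on one member disk is just the distance to the next chunk
boundary.  Example numbers below, nothing raid1-specific:

#include <stdio.h>

int main(void)
{
	/* Example numbers only: a 64K-chunk striped lower device. */
	unsigned long long chunk = 64 * 1024;
	unsigned long long offset = 60 * 1024;	/* where the request starts */
	unsigned long long contig = chunk - (offset % chunk);

	/* Only 4K remain before the next chunk boundary, so a larger
	 * request would have to be split across member disks. */
	printf("at offset %lluK, at most %lluK is contiguous\n",
	       offset / 1024, contig / 1024);
	return 0;
}
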
Because of the way request splitting is managed in the block layer, RAID1 is
only allowed to send down a request that will be sure to fit on a single
device. As different devices in the RAID1 could have different alignments it
would be very complex to track exactly how each request must be split at the
top of the stack so as to fit all the way down, and I think it is impossible
to do it in a race-free way.

So whenever that might be the case, RAID1 insists on only receiving 1-page
requests, because it knows they are always allowed to be passed down.
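
Concretely, the device set-up does something like this (paraphrased from
memory, not a verbatim quote of raid1.c; the two queue-limit helpers are
real block-layer functions of this era, while rdev_queue and mddev_queue
stand in for the member's and the array's request queues):

	/* Paraphrase of the idea, not verbatim raid1.c.  A member whose
	 * queue has a merge_bvec_fn may split or refuse requests in ways
	 * we cannot predict from the top of the stack, so clamp the
	 * array's queue to single-page requests - a one-page request is
	 * never in violation further down.
	 */
	if (rdev_queue->merge_bvec_fn) {
		blk_queue_max_segments(mddev_queue, 1);
		blk_queue_segment_boundary(mddev_queue, PAGE_SIZE - 1);
	}
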
NeilBrown
Thread overview: 4+ messages
2011-11-22  9:36 md device io request split "Ramon Schönborn"
2011-11-23  2:31 ` NeilBrown [this message]
2011-11-23 13:22   ` "Ramon Schönborn"
2011-11-23 19:30     ` NeilBrown