From: Bill Davidsen <davidsen@tmr.com>
To: Ian Dall <ian@beware.dropbear.id.au>
Cc: David Greaves <david@dgreaves.com>, Neil Brown <neilb@suse.de>,
linux-raid@vger.kernel.org
Subject: Re: Proposed enhancement to mdadm: Allow "--write-behind=" to be done in grow mode.
Date: Wed, 04 Jul 2007 09:45:52 -0400 [thread overview]
Message-ID: <468BA490.8000806@tmr.com> (raw)
In-Reply-To: <1183516204.17720.16.camel@sibyl.beware.dropbear.id.au>
Ian Dall wrote:
> On Tue, 2007-07-03 at 15:03 +0100, David Greaves wrote:
>
>> Ian Dall wrote:
>>
>>> There doesn't seem to be any designated place to send bug reports and
>>> feature requests for mdadm, so I hope I am doing the right thing by
>>> sending them here.
>>>
>>> I have a small patch to mdadm which allows the write-behind amount to be
>>> set at array grow time (instead of only at create time, as now). I have
>>> tested this fairly extensively on some arrays built out of loopback
>>> devices, and once on a real live array. I haven't lost any data and it
>>> seems to work OK, though it is possible I am missing something.
>>>
>> Sounds like a useful feature...
>>
>> Did you test the bitmap cases you mentioned?
>>
>
> Yes. And I can use mdadm -X to see that the write behind parameter is
> set in the superblock. I don't know any way to monitor how much the
> write behind feature is being used though.
>
> My motivation for doing this was to enable me to experiment and see
> how effective it is. Currently I have a RAID 0 array across 3 very fast
> (15k rpm) SCSI disks. This array is mirrored by a single large vanilla
> ATA (7.2k rpm) disk. I figure that the read performance of the
> combination is basically the read performance of the RAID 0, and the
> sustained write performance is basically that of the ATA disk, giving
> about a 6:1 read-to-write speed ratio. I also typically see about 6
> times as much read traffic as write traffic. So I figure it should be
> close to optimal IF the bursts of write activity are not too long.
>
> Does anyone know how I can monitor the number of pending writes? Where
> are these queued? Are they simply stuck on the block device queue (and I
> could see with iostat) or does the md device maintain its own special
> queue for this?
>
Sort of... you can watch the stats in /proc/diskstats while you do a
write burst. You should be able to see the sectors-written counter for
the fast disks climb ahead of the slow one, and thereby see how far
behind the slow one gets.
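As a sketch of that comparison (assuming the /proc/diskstats field layout
documented in the kernel's Documentation/iostats.txt, and using sda for a
fast SCSI member and hda for the slow ATA mirror -- substitute your own
device names):

```shell
# Print the cumulative sectors-written counter (7th field after the
# device name) and the I/Os currently in flight (9th field after the
# name) for the devices of interest. Run it before and after a write
# burst; the difference in sectors written shows how far the slow
# disk lags the fast ones.
awk '$3 == "sda" || $3 == "hda" {
    printf "%-4s sectors_written=%s in_flight=%s\n", $3, $10, $12
}' /proc/diskstats
```

Repeating this in a loop (e.g. under watch) during a burst makes the lag
visible over time; the in-flight count also gives a rough hint of how many
requests are still queued for the slow device at any instant.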
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
Thread overview: 8+ messages
2007-06-28 13:52 Does "--write-behind=" have to be done at create time? Ian Dall
2007-06-29 15:28 ` Ian Dall
2007-07-03 13:12 ` Proposed enhancement to mdadm: Allow "--write-behind=" to be done in grow mode Ian Dall
2007-07-03 14:03 ` David Greaves
2007-07-04 2:30 ` Ian Dall
2007-07-04 13:45 ` Bill Davidsen [this message]
2007-07-05 15:37 ` Bill Davidsen
2007-07-09 1:30 ` Neil Brown