From: NeilBrown <neilb@suse.de>
To: majianpeng <majianpeng@gmail.com>
Cc: Dan Williams <dan.j.williams@gmail.com>,
Paul Menzel <paulepanter@users.sourceforge.net>,
linux-raid <linux-raid@vger.kernel.org>
Subject: Re: [PATCH V1] raid5: Only move IO_THRESHOLD stripes from delay_list to hold_list at once.
Date: Mon, 16 Jul 2012 17:46:00 +1000 [thread overview]
Message-ID: <20120716174600.25589b7c@notabene.brown> (raw)
In-Reply-To: <201207131831085787372@gmail.com>
On Fri, 13 Jul 2012 18:31:11 +0800 majianpeng <majianpeng@gmail.com> wrote:
> To improve write performance by reducing preread stripes, only move
> IO_THRESHOLD stripes from delay_list to hold_list at once.
>
> Using the following command:
> dd if=/dev/zero of=/dev/md0 bs=2M count=52100.
>
> With default settings: speed is 95MB/s.
> With preread_bypass_threshold set to zero: speed is 105MB/s.
> With this patch: speed is 123MB/s.
>
> Setting preread_bypass_threshold to zero improves performance, but not
> as much as this patch does.
> I think there may be two reasons:
> 1: The bio may be REQ_SYNC.
> 2: In __get_priority_stripe():
> >> } else if (!list_empty(&conf->hold_list) &&
> >> ((conf->bypass_threshold &&
> >> conf->bypass_count > conf->bypass_threshold) ||
> >> atomic_read(&conf->pending_full_writes) == 0)) {
> preread_bypass_threshold is only one of the conditions for taking a
> stripe from hold_list, so controlling the length of hold_list itself
> gives better performance.
>
> Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
> ---
> drivers/md/raid5.c | 3 +++
> 1 files changed, 3 insertions(+), 0 deletions(-)
>
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 04348d7..a6749bb 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -3662,6 +3662,7 @@ finish:
>
> static void raid5_activate_delayed(struct r5conf *conf)
> {
> + int count = 0;
> if (atomic_read(&conf->preread_active_stripes) < IO_THRESHOLD) {
> while (!list_empty(&conf->delayed_list)) {
> struct list_head *l = conf->delayed_list.next;
> @@ -3672,6 +3673,8 @@ static void raid5_activate_delayed(struct r5conf *conf)
> if (!test_and_set_bit(STRIPE_PREREAD_ACTIVE, &sh->state))
> atomic_inc(&conf->preread_active_stripes);
> list_add_tail(&sh->lru, &conf->hold_list);
> + if (++count >= IO_THRESHOLD)
> + break;
> }
> }
> }
I tried this patch - against my current for-next tree - on my own modest
hardware and could not measure any difference in write throughput.
Maybe some other patch has fixed something.
However, it is still reading a lot during a write-only test, and that is not
ideal. It would be nice if we could arrange that it didn't read at all.
NeilBrown
Thread overview: 5+ messages
2012-07-13 10:31 [PATCH V1] raid5: Only move IO_THRESHOLD stripes from delay_list to hold_list at once majianpeng
2012-07-13 23:56 ` Dan Williams
2012-07-16 1:09 ` majianpeng
2012-07-16 7:46 ` NeilBrown [this message]
2012-07-16 8:53 ` majianpeng