From: Shaohua Li <shli@kernel.org>
To: NeilBrown <neilb@suse.de>
Cc: linux-raid@vger.kernel.org
Subject: Re: [patch 1/2]RAID5: make stripe size configurable
Date: Fri, 11 Jul 2014 19:16:56 +0800
Message-ID: <20140711111656.GA4570@kernel.org>
In-Reply-To: <20140710153936.6f042277@notabene.brown>
On Thu, Jul 10, 2014 at 03:39:36PM +1000, NeilBrown wrote:
> On Tue, 8 Jul 2014 09:00:18 +0800 Shaohua Li <shli@kernel.org> wrote:
>
> >
> > Stripe size is 4k by default. A bigger stripe size is considered harmful,
> > because if the IO size is small, a big stripe size can cause a lot of
> > unnecessary IO and parity calculation. But if the upper layer always sends
> > full stripe writes to the RAID5 array, this drawback goes away, and a
> > bigger stripe size can actually improve performance in that case because
> > of bigger IOs and fewer stripes to handle. In my full stripe write test
> > case, a 16k stripe size improves throughput by 40% - 120% depending on
> > the RAID5 configuration.
>
> Hi,
> certainly interesting.
> I'd really like to see more precise numbers though. What config gives 40%,
> what config gives 120% etc.
A 7-disk raid5 array gives 40%, and a 16-disk raid5 array gives 120%. Both
arrays use PCIe SSDs and do full stripe writes. I observed that CPU usage
drops too; for example, in the 7-disk array, CPU utilization drops by about
20%. On the other hand, small write performance drops a lot, which isn't a
surprise.
> I'm not keen on adding a number that has to be tuned though. I'd really
> like to understand exactly where the performance gain comes from.
> Is it that the requests being sent down are larger, or just better managed -
> or is it some per-stripe_head overhead that is being removed.
From perf, I saw that the handle_stripe overhead drops, and some lock
contention is reduced too because we have fewer stripes. From iostat, I saw
that request sizes get bigger.
>
> e.g. if we sorted the stripe_heads and handled them in batches of adjacent
> addresses, might that provide the same speed up?
I tried that before. Increasing the batch size in handle_active_stripes can
increase the request size, but we still have a big overhead in handling the
stripes.
> I'm certain there is room for improving the scheduling of the
> stripe_heads, I'm just not sure what the right approach is though.
> I'd like to explore that more before making the stripe_heads bigger.
>
> Also I really don't like depending on multi-page allocations. If we were
> going to go this way I think I'd want an array of single pages, not a
> multi-page.
Yep, that's easy to fix. I'm using a multi-page allocation in the hope that
the IO segment size will be bigger. Maybe it's not worth it, considering we
have skip_copy?
Thanks,
Shaohua
Thread overview: 3+ messages
2014-07-08 1:00 [patch 1/2]RAID5: make stripe size configurable Shaohua Li
2014-07-10 5:39 ` NeilBrown
2014-07-11 11:16 ` Shaohua Li [this message]