From: Shaohua Li <shli@kernel.org>
To: NeilBrown <neilb@suse.de>
Cc: linux-raid@vger.kernel.org, axboe@kernel.dk
Subject: Re: [patch 0/3 v3] MD: improve raid1/10 write performance for fast storage
Date: Tue, 3 Jul 2012 16:58:58 +0800
Message-ID: <20120703085858.GA829@kernel.org>
In-Reply-To: <20120702073645.GA785@kernel.org>

On Mon, Jul 02, 2012 at 03:36:45PM +0800, Shaohua Li wrote:
> On Fri, Jun 29, 2012 at 02:10:30PM +0800, Shaohua Li wrote:
> > 2012/6/28 NeilBrown <neilb@suse.de>:
> > > On Wed, 13 Jun 2012 17:11:43 +0800 Shaohua Li <shli@kernel.org> wrote:
> > >
> > >> In raid1/10, all write requests are dispatched by a single thread. With fast
> > >> storage, that thread becomes a bottleneck because it cannot dispatch requests
> > >> fast enough. The thread also migrates freely, so the completion CPU does not
> > >> match the submission CPU even when the driver/block layer can keep them
> > >> together, which causes bad cache behavior. Neither issue matters much for
> > >> slow storage.
> > >>
> > >> Switching to per-cpu/per-thread dispatch dramatically increases performance.
> > >> The more RAID disks there are, the bigger the boost. In a 4-disk raid10
> > >> setup, this can double the throughput.
> > >>
> > >> Per-cpu/per-thread dispatch doesn't harm slow storage. It is the same way a
> > >> raw device is accessed, and the block plug is set correctly, which helps with
> > >> request merging and reduces lock contention.
> > >>
> > >> V2->V3:
> > >> rebase to latest tree and fix cpuhotplug issue
> > >>
> > >> V1->V2:
> > >> 1. Dropped the direct dispatch patches. They gave a bigger performance
> > >> improvement, but could not be made correct.
> > >> 2. Added an MD-specific workqueue to do the per-cpu dispatch.
> > >
> > >
> > > Hi.
> > >
> > > I still don't like the per-cpu allocations and the extra work queues.
> > >
> > > The following patch demonstrates how I would like to address this issue.  It
> > > should submit requests from the same thread that initially made the request -
> > > at least in most cases.
> > >
> > > It leverages the plugging code and pushes everything out on unplug, unless the
> > > unplug comes from a scheduler call (which should be uncommon).  In that case it
> > > falls back to passing all the requests to the md thread.
> > >
> > > Obviously if we proceed with this I'll split this up into neat reviewable
> > > patches.  However before that it would help to know if it really helps as I
> > > think it should.
> > >
> > > So would you be able to test it on your SSD hardware and see how it compares
> > > to the current code, and to your code?  Thanks.
> > >
> > > I have only tested it lightly myself so there could still be bugs, but
> > > hopefully not obvious ones.
> > >
> > > A simple "time mkfs" test on very modest hardware shows a 25% reduction in
> > > total time (168s -> 127s).  I guess that's a 33% increase in speed?
> > > However, sequential writes with 'dd' seem a little slower (14MB/s -> 13.6MB/s).
> > >
> > > There are some hacks in there that need to be cleaned up, but I think the
> > > general structure looks good.
> > 
> > Though I did consider this approach before, scheduling from the unplug
> > callback was an issue. Maybe I overlooked something at the time; the
> > from_schedule check looks promising.
> 
> I measured raid1/raid10 performance with this patch (with a similar change for
> raid10, and a plug added in the raid1/10 unplug function for dispatching), and
> the result is ok. The from_schedule check does the trick; the race I mentioned
> before isn't there. I also double-checked how often unplug is called from
> schedule, and the rate is very low.
> 
> Now the only open question is whether the extra bitmap flushes add overhead.
> Our card doesn't show such overhead, so I'm not sure.
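
For reference, a minimal sketch of the plug-based dispatch discussed above, as
I understand it: the write path attaches an unplug callback to the submitter's
plug with blk_check_plugged() and queues bios on it, falling back to the md
thread when no plug is active. The raid1_plug_cb layout and the
queue_write_bio() helper name here are illustrative assumptions, so details may
differ from the real code:

struct raid1_plug_cb {
	struct blk_plug_cb	cb;		/* embedded block-layer callback */
	struct bio_list		pending;	/* writes queued on this plug */
	int			pending_cnt;
};

static void queue_write_bio(struct mddev *mddev, struct r1conf *conf,
			    struct bio *mbio)
{
	struct blk_plug_cb *cb;
	struct raid1_plug_cb *plug = NULL;
	unsigned long flags;

	/* Attach (or find) our unplug callback on the current task's plug. */
	cb = blk_check_plugged(raid1_unplug, mddev, sizeof(*plug));
	if (cb)
		plug = container_of(cb, struct raid1_plug_cb, cb);

	if (plug) {
		/* Plugged: stash the bio; raid1_unplug() submits it later
		 * from this same thread. */
		bio_list_add(&plug->pending, mbio);
		plug->pending_cnt++;
	} else {
		/* No plug available: fall back to the md thread as before. */
		spin_lock_irqsave(&conf->device_lock, flags);
		bio_list_add(&conf->pending_bio_list, mbio);
		conf->pending_count++;
		spin_unlock_irqrestore(&conf->device_lock, flags);
		md_wakeup_thread(mddev->thread);
	}
}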

It looks like you merged the patch into your tree, great! raid1_unplug() still
lacks a blk_start_plug()/blk_finish_plug() pair. Will you add a similar patch
for raid10?
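
To show what I mean, a sketch of raid1_unplug() with the missing plug pair
wrapped around the direct write-out; this is only my assumption about the shape
of the fix (and about the exact field names), not the actual change:

static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)
{
	struct raid1_plug_cb *plug = container_of(cb, struct raid1_plug_cb, cb);
	struct mddev *mddev = plug->cb.data;
	struct r1conf *conf = mddev->private;
	struct blk_plug new_plug;
	struct bio *bio;

	if (from_schedule) {
		/* Unplugged from the scheduler: don't issue I/O here,
		 * hand everything back to the md thread instead. */
		spin_lock_irq(&conf->device_lock);
		bio_list_merge(&conf->pending_bio_list, &plug->pending);
		conf->pending_count += plug->pending_cnt;
		spin_unlock_irq(&conf->device_lock);
		md_wakeup_thread(mddev->thread);
		kfree(plug);
		return;
	}

	/* Normal unplug: flush the bitmap and submit the writes directly
	 * from the thread that originally made the request. */
	bio = bio_list_get(&plug->pending);
	bitmap_unplug(mddev->bitmap);

	blk_start_plug(&new_plug);	/* the missing plug pair */
	while (bio) {
		struct bio *next = bio->bi_next;

		bio->bi_next = NULL;
		generic_make_request(bio);
		bio = next;
	}
	blk_finish_plug(&new_plug);
	kfree(plug);
}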

Thanks,
Shaohua
