linux-raid.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: NeilBrown <neilb@suse.de>
To: Shaohua Li <shli@kernel.org>
Cc: linux-raid@vger.kernel.org, axboe@kernel.dk,
	dan.j.williams@intel.com, shli@fusionio.com
Subject: Re: [patch 6/8] raid5: make_request use batch stripe release
Date: Thu, 7 Jun 2012 17:33:10 +1000	[thread overview]
Message-ID: <20120607173310.2a3f147b@notabene.brown> (raw)
In-Reply-To: <20120607063358.GB779@kernel.org>

[-- Attachment #1: Type: text/plain, Size: 2675 bytes --]

On Thu, 7 Jun 2012 14:33:58 +0800 Shaohua Li <shli@kernel.org> wrote:

> On Thu, Jun 07, 2012 at 11:23:45AM +1000, NeilBrown wrote:
> > On Mon, 04 Jun 2012 16:01:58 +0800 Shaohua Li <shli@kernel.org> wrote:
> > 
> > > make_request() does stripe release for every stripe and the stripe usually has
> > > count 1, which makes previous release_stripe() optimization not work. In my
> > > test, this release_stripe() becomes the heaviest pleace to take
> > > conf->device_lock after previous patches applied.
> > > 
> > > Below patch makes stripe release batch. When maxium strips of a batch reach,
> > > the batch will be flushed out. Another way to do the flush is when unplug is
> > > called.
> > > 
> > > Signed-off-by: Shaohua Li <shli@fusionio.com>
> > 
> > I like the idea of a batched release.
> > I don't like the per-cpu variables... and I don't think it is safe to only
> > allocate them for_each_present_cpu without support cpu-hot-plug.
> > 
> > I would much rather keep a list of stripes (linked on ->lru) in struct
> > md_plug_cb (or maybe in some structure which contains that) and release them
> > all on unplug - and only on unplug.
> > 
> > Maybe pass a size to mddev_check_unplugged, and it allocates that much more
> > space.  Get mddev_check_unplugged to return the md_plug_cb structure.
> > If the new space is NULL, then list_head_init it, and change the cb.callback
> > to a raid5 specific function.
> > Then add any stripe to the md_plug_cb, and in the unplug function, release
> > them all.
> > 
> > Does that make sense?
> > 
> > Also I would rather the batched stripe release code were defined in the same
> > patch that used it.  It isn't big enough to justify a separate patch.
> 
> The stripe->lru need protection of device_lock, so I can't use a list. An array
> is preferred. I really didn't like the idea to allocate memory especially when
> allocating an array. I'll fix the code for cpuhotplug.

You don't need device_lock to use ->lru.
Currently the lru is not used when sh->count is not-zero unless
STRIPE_EXPANDING is set - and we never attach IO requests if STRIPE_EXPANDING
is set.
So when make_request wants to release a stripe_head, ->lru is currently
unused.
So we can use it to put the stripe on a per-thread list without locking.

We need another stripe_head flag to say "is on a per-thread unplug list" to
avoid racing between processes, but we don't need a spinlock for that.
ie.
  if (!test_and_set(STRIPE_ON_UNPLUG_LIST, &sh->state))
           list_add(&plug->list, &sh->lru);

or similar.

Please do waste time on cpuhotplug - it isn't the right solution.

NeilBrown

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 828 bytes --]

  reply	other threads:[~2012-06-07  7:33 UTC|newest]

Thread overview: 34+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2012-06-04  8:01 [patch 0/8] raid5: improve write performance for fast storage Shaohua Li
2012-06-04  8:01 ` [patch 1/8] raid5: add a per-stripe lock Shaohua Li
2012-06-07  0:54   ` NeilBrown
2012-06-07  6:29     ` Shaohua Li
2012-06-07  6:35       ` NeilBrown
2012-06-07  6:52         ` Shaohua Li
2012-06-12 21:02           ` Dan Williams
2012-06-13  4:08             ` Dan Williams
2012-06-13  4:23               ` Shaohua Li
2012-06-12 21:10   ` Dan Williams
2012-06-04  8:01 ` [patch 2/8] raid5: lockless access raid5 overrided bi_phys_segments Shaohua Li
2012-06-07  1:06   ` NeilBrown
2012-06-12 20:41     ` Dan Williams
2012-06-04  8:01 ` [patch 3/8] raid5: remove some device_lock locking places Shaohua Li
2012-06-04  8:01 ` [patch 4/8] raid5: reduce chance release_stripe() taking device_lock Shaohua Li
2012-06-07  0:50   ` NeilBrown
2012-06-04  8:01 ` [patch 5/8] raid5: add batch stripe release Shaohua Li
2012-06-04  8:01 ` [patch 6/8] raid5: make_request use " Shaohua Li
2012-06-07  1:23   ` NeilBrown
2012-06-07  6:33     ` Shaohua Li
2012-06-07  7:33       ` NeilBrown [this message]
2012-06-07  7:58         ` Shaohua Li
2012-06-08  6:16           ` Shaohua Li
2012-06-08  6:42             ` NeilBrown
2012-06-04  8:01 ` [patch 7/8] raid5: raid5d handle stripe in batch way Shaohua Li
2012-06-07  1:32   ` NeilBrown
2012-06-07  6:35     ` Shaohua Li
2012-06-07  7:38       ` NeilBrown
2012-06-04  8:02 ` [patch 8/8] raid5: create multiple threads to handle stripes Shaohua Li
2012-06-07  1:39   ` NeilBrown
2012-06-07  6:45     ` Shaohua Li
2012-06-13  4:08       ` Dan Williams
2012-06-21 10:09         ` Shaohua Li
2012-07-02 20:43           ` Dan Williams

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20120607173310.2a3f147b@notabene.brown \
    --to=neilb@suse.de \
    --cc=axboe@kernel.dk \
    --cc=dan.j.williams@intel.com \
    --cc=linux-raid@vger.kernel.org \
    --cc=shli@fusionio.com \
    --cc=shli@kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).