linux-raid.vger.kernel.org archive mirror
From: Shaohua Li <shli@kernel.org>
To: Tejun Heo <tj@kernel.org>
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
	neilb@suse.de, djbw@fb.com
Subject: Re: [patch 3/3] raid5: only wakeup necessary threads
Date: Tue, 30 Jul 2013 21:24:14 +0800	[thread overview]
Message-ID: <20130730132414.GB30352@kernel.org> (raw)
In-Reply-To: <20130730124655.GB2599@htj.dyndns.org>

On Tue, Jul 30, 2013 at 08:46:55AM -0400, Tejun Heo wrote:
> Hello,
> 
> On Tue, Jul 30, 2013 at 01:52:10PM +0800, shli@kernel.org wrote:
> > If there are not enough stripes to handle, we'd better not always queue all
> > available work_structs. If one worker can only handle a few stripes, or even
> > none, it will hurt request merging and create lock contention.
> > 
> > With this patch, the number of work_structs running will depend on the number
> > of pending stripes. Note that some statistics used in the patch are accessed
> > without locking protection. This shouldn't matter; we just try our best to
> > avoid queueing unnecessary work_structs.
> 
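To make the idea above concrete, a minimal sketch of such a wakeup policy
might look like the following. This is not the actual raid5 code: the
raid_workers struct, its fields, STRIPES_PER_WORKER, and the helper name are
all illustrative assumptions.

  #include <linux/kernel.h>	/* DIV_ROUND_UP, min */
  #include <linux/atomic.h>
  #include <linux/workqueue.h>

  #define STRIPES_PER_WORKER 8	/* stripes one worker handles per run */

  struct raid_workers {			/* hypothetical context */
  	struct workqueue_struct *wq;
  	struct work_struct *works;	/* array of worker_cnt entries */
  	int worker_cnt;
  	atomic_t pending_stripes;	/* stripes awaiting a worker */
  };

  static void wakeup_necessary_workers(struct raid_workers *rw)
  {
  	/* Read without the device lock, as the description says:
  	 * a stale count only means we queue one worker too many
  	 * or too few, which is harmless. */
  	int pending = atomic_read(&rw->pending_stripes);
  	int needed = min(DIV_ROUND_UP(pending, STRIPES_PER_WORKER),
  			 rw->worker_cnt);
  	int i;

  	for (i = 0; i < needed; i++)
  		queue_work(rw->wq, &rw->works[i]);
  }

Queueing a work item that is already pending is a no-op for the workqueue, so
a slightly stale count costs at most one extra or one missing wakeup.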
> I haven't really followed the code but two general comments.
> 
> * Stacking drivers in general should always try to keep the bios
>   passing through in the same order that they are received.  The order
>   of bios is important information to the io scheduler, and io
>   scheduling will suffer badly if the bios are shuffled by the
>   stacking driver.  It'd probably be a good idea to have a mechanism
>   to keep the issue order intact even when multiple workers are
>   employed.

In the raid5 case, it's very hard to keep the order in which the bios were
passed in: we need to read some disks, calculate parity, and write some disks,
and the timing of those steps can break any kind of ordering. That said, each
worker handles 8 stripes at a time, so I suppose some order is preserved where
it exists.
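Roughly, the per-worker batching described here could be sketched as below.
handle_list, device_lock, lru, and handle_stripe() are the real raid5 names;
worker_ctx, raid5_worker_sketch(), and the trimmed struct definitions are
illustrative assumptions, not the code from the series.

  #include <linux/kernel.h>	/* container_of */
  #include <linux/list.h>
  #include <linux/spinlock.h>
  #include <linux/workqueue.h>

  #define MAX_STRIPE_BATCH 8

  struct stripe_head {		/* trimmed to the field used here */
  	struct list_head lru;
  };

  struct r5conf {		/* trimmed to the fields used here */
  	spinlock_t device_lock;
  	struct list_head handle_list;
  };

  void handle_stripe(struct stripe_head *sh);	/* real raid5 handler */

  struct worker_ctx {		/* hypothetical per-worker context */
  	struct work_struct work;
  	struct r5conf *conf;
  };

  static void raid5_worker_sketch(struct work_struct *work)
  {
  	struct worker_ctx *w = container_of(work, struct worker_ctx, work);
  	struct r5conf *conf = w->conf;
  	struct stripe_head *batch[MAX_STRIPE_BATCH];
  	int i, n = 0;

  	/* Pull up to 8 stripes off the list in FIFO order, so bios
  	 * that were queued together are handled together. */
  	spin_lock_irq(&conf->device_lock);
  	while (n < MAX_STRIPE_BATCH && !list_empty(&conf->handle_list)) {
  		struct stripe_head *sh =
  			list_first_entry(&conf->handle_list,
  					 struct stripe_head, lru);
  		list_del_init(&sh->lru);
  		batch[n++] = sh;
  	}
  	spin_unlock_irq(&conf->device_lock);

  	for (i = 0; i < n; i++)
  		handle_stripe(batch[i]);
  }

Because each batch is taken from the list in FIFO order, whatever ordering the
submitter established survives within a batch, even though ordering across
workers is not guaranteed.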
 
> * While limiting the number of work_structs dynamically could be
>   beneficial, and it's up to Neil, it'd be nice if you could accompany
>   it with some numbers so that we can decide whether such an
>   optimization is actually worthwhile.  The same goes for the whole
>   series, I suppose.

Sure, I can add the numbers in the next post. Basically, if I run 8 workers on
a 7-disk raid5 setup, multi-threading is 4x ~ 5x faster.

Thanks,
Shaohua


Thread overview: 15+ messages
2013-07-30  5:52 [patch 0/3] raid5: make stripe handling multi-threading shli
2013-07-30  5:52 ` [patch 1/3] raid5: offload stripe handle to workqueue shli
2013-07-30 11:46   ` Tejun Heo
2013-07-30 12:53   ` Tejun Heo
2013-07-30 13:07     ` Shaohua Li
2013-07-30 13:57       ` Tejun Heo
2013-07-31  1:24         ` Shaohua Li
2013-07-31 10:33           ` Tejun Heo
2013-08-01  2:01             ` Shaohua Li
2013-08-01 12:15               ` Tejun Heo
2013-07-30  5:52 ` [patch 2/3] raid5: sysfs entry to control worker thread number shli
2013-07-30  5:52 ` [patch 3/3] raid5: only wakeup necessary threads shli
2013-07-30 12:46   ` Tejun Heo
2013-07-30 13:24     ` Shaohua Li [this message]
2013-07-30 14:01       ` Tejun Heo
