From: Xiao Ni <xni@redhat.com>
To: Shaohua Li <shli@kernel.org>
Cc: linux-raid@vger.kernel.org, colyli@suse.de, ncroxon@redhat.com
Subject: Re: [MD PATCH v2 1/1] Use a new variable to count in-flight sync requests
Date: Fri, 28 Apr 2017 01:18:37 -0400 (EDT)	[thread overview]
Message-ID: <25286079.2019909.1493356717905.JavaMail.zimbra@redhat.com> (raw)
In-Reply-To: <20170427210516.key5efxmifbxd3sz@kernel.org>



----- Original Message -----
> From: "Shaohua Li" <shli@kernel.org>
> To: "Xiao Ni" <xni@redhat.com>
> Cc: linux-raid@vger.kernel.org, colyli@suse.de, ncroxon@redhat.com
> Sent: Friday, April 28, 2017 5:05:16 AM
> Subject: Re: [MD PATCH v2 1/1] Use a new variable to count in-flight sync requests
> 
> On Thu, Apr 27, 2017 at 01:58:01PM -0700, Shaohua Li wrote:
> > On Thu, Apr 27, 2017 at 04:28:49PM +0800, Xiao Ni wrote:
> > > In the new barrier code, raise_barrier() waits until conf->nr_pending[idx]
> > > drops to zero. Once all the wait conditions are satisfied, the resync
> > > request can proceed, but it then increments conf->nr_pending[idx] again.
> > > The next resync request that hits the same bucket idx has to wait for
> > > the previously submitted resync request to finish, so resync/recovery
> > > performance is degraded. We should therefore use a new variable to
> > > count the sync requests which are in flight.
> > > 
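
For context, the raise_barrier() path being described works roughly like
this (a simplified sketch consistent with the hunk below, not the literal
kernel source; the nr_waiting check and memory barriers are omitted):

	static void raise_barrier(struct r1conf *conf, sector_t sector_nr)
	{
		int idx = sector_to_idx(sector_nr);

		spin_lock_irq(&conf->resync_lock);

		/* Block any new normal I/O on this barrier bucket. */
		atomic_inc(&conf->barrier[idx]);

		/*
		 * Wait until the bucket is usable: not frozen, no pending
		 * I/O, and fewer than RESYNC_DEPTH concurrent resync
		 * requests.  Before this patch, in-flight sync requests
		 * were counted in nr_pending[idx] too, so the atomic_inc()
		 * below forced the next resync request on the same bucket
		 * to wait here until the previous one completed.
		 */
		wait_event_lock_irq(conf->wait_barrier,
				    !conf->array_frozen &&
				    !atomic_read(&conf->nr_pending[idx]) &&
				    atomic_read(&conf->barrier[idx]) < RESYNC_DEPTH,
				    conf->resync_lock);

		atomic_inc(&conf->nr_pending[idx]); /* nr_sync_pending after this patch */
		spin_unlock_irq(&conf->resync_lock);
	}
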
> > > I did a simple test:
> > > 1. Without the patch, create a raid1 with two disks. The resync speed:
> > > Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> > > sdb               0.00     0.00  166.00    0.00    10.38     0.00   128.00     0.03    0.20    0.20    0.00   0.19   3.20
> > > sdc               0.00     0.00    0.00  166.00     0.00    10.38   128.00     0.96    5.77    0.00    5.77   5.75  95.50
> > > 2. With the patch, the result is:
> > > sdb            2214.00     0.00  766.00    0.00   185.69     0.00   496.46     2.80    3.66    3.66    0.00   1.03  79.10
> > > sdc               0.00  2205.00    0.00  769.00     0.00   186.44   496.52     5.25    6.84    0.00    6.84   1.30 100.10
> > > 
> > > Suggested-by: Shaohua Li <shli@kernel.org>
> > > Signed-off-by: Xiao Ni <xni@redhat.com>
> > 
> > applied, thanks!
> > > ---
> > >  drivers/md/raid1.c | 5 +++--
> > >  drivers/md/raid1.h | 1 +
> > >  2 files changed, 4 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> > > index a34f587..ff5ee53 100644
> > > --- a/drivers/md/raid1.c
> > > +++ b/drivers/md/raid1.c
> > > @@ -869,7 +869,7 @@ static void raise_barrier(struct r1conf *conf, sector_t sector_nr)
> > >  			     atomic_read(&conf->barrier[idx]) < RESYNC_DEPTH,
> > >  			    conf->resync_lock);
> > >  
> > > -	atomic_inc(&conf->nr_pending[idx]);
> > > +	atomic_inc(&conf->nr_sync_pending);
> > >  	spin_unlock_irq(&conf->resync_lock);
> > >  }
> > >  
> > > @@ -880,7 +880,7 @@ static void lower_barrier(struct r1conf *conf, sector_t sector_nr)
> > >  	BUG_ON(atomic_read(&conf->barrier[idx]) <= 0);
> > >  
> > >  	atomic_dec(&conf->barrier[idx]);
> > > -	atomic_dec(&conf->nr_pending[idx]);
> > > +	atomic_dec(&conf->nr_sync_pending);
> > >  	wake_up(&conf->wait_barrier);
> > >  }
> > >  
> > > @@ -1017,6 +1017,7 @@ static int get_unqueued_pending(struct r1conf *conf)
> > >  {
> > >  	int idx, ret;
> > >  
> > > +	ret = atomic_read(&conf->nr_sync_pending);
> > >  	for (ret = 0, idx = 0; idx < BARRIER_BUCKETS_NR; idx++)
> 
> actually I deleted the 'ret = 0'

Sorry, I didn't notice this. I need to pay more attention. And thanks
for the modification.

Xiao 

> 
> > >  		ret += atomic_read(&conf->nr_pending[idx]) -
> > >  			atomic_read(&conf->nr_queued[idx]);
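
To spell out the fix applied here: in the v2 hunk above, the for-loop
initializer 'ret = 0' immediately clobbers the value just read from
nr_sync_pending, so in-flight sync requests would never be counted. With
that initializer dropped, the function reads roughly like this (a sketch
of the as-applied code, reconstructed from the hunks in this thread):

	static int get_unqueued_pending(struct r1conf *conf)
	{
		int idx, ret;

		/* Start with the in-flight sync requests... */
		ret = atomic_read(&conf->nr_sync_pending);
		/* ...then add the unqueued normal I/O of every barrier bucket. */
		for (idx = 0; idx < BARRIER_BUCKETS_NR; idx++)
			ret += atomic_read(&conf->nr_pending[idx]) -
				atomic_read(&conf->nr_queued[idx]);
		return ret;
	}
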
> > > diff --git a/drivers/md/raid1.h b/drivers/md/raid1.h
> > > index dd22a37..1668f22 100644
> > > --- a/drivers/md/raid1.h
> > > +++ b/drivers/md/raid1.h
> > > @@ -84,6 +84,7 @@ struct r1conf {
> > >  	 */
> > >  	wait_queue_head_t	wait_barrier;
> > >  	spinlock_t		resync_lock;
> > > +	atomic_t		nr_sync_pending;
> > >  	atomic_t		*nr_pending;
> > >  	atomic_t		*nr_waiting;
> > >  	atomic_t		*nr_queued;
> > > --
> > > 2.7.4
> > > 
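Why the sync requests still need to be counted at all: get_unqueued_pending()
backs freeze_array(), which must wait for every in-flight request, sync
included, to drain. A rough sketch of that caller (paraphrased from memory
of the 4.11-era raid1.c; details such as logging are omitted and may differ):

	static void freeze_array(struct r1conf *conf, int extra)
	{
		spin_lock_irq(&conf->resync_lock);
		conf->array_frozen = 1;
		/*
		 * Wait until only 'extra' unqueued requests remain.  Without
		 * nr_sync_pending, sync requests would vanish from this count
		 * once they stopped inflating nr_pending[idx], and the freeze
		 * could return while resync I/O is still in flight.
		 */
		wait_event_lock_irq_cmd(conf->wait_barrier,
					get_unqueued_pending(conf) == extra,
					conf->resync_lock,
					flush_pending_writes(conf));
		spin_unlock_irq(&conf->resync_lock);
	}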

Thread overview: 5+ messages
2017-04-27  8:28 [MD PATCH v2 1/1] Use a new variable to count in-flight sync requests Xiao Ni
2017-04-27  8:36 ` Coly Li
2017-04-27 20:58 ` Shaohua Li
2017-04-27 21:05   ` Shaohua Li
2017-04-28  5:18     ` Xiao Ni [this message]
