Linux NFS development
From: Jeff Layton <jeff.layton@primarydata.com>
To: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Chris Worley <chris.worley@primarydata.com>, linux-nfs@vger.kernel.org
Subject: Re: [PATCH 1/4] sunrpc: add a rcu_head to svc_rqst and use kfree_rcu to free it
Date: Mon, 1 Dec 2014 18:05:33 -0500
Message-ID: <20141201180533.7c8a7587@tlielax.poochiereds.net>
In-Reply-To: <20141201224407.GD30749@fieldses.org>

On Mon, 1 Dec 2014 17:44:07 -0500
"J. Bruce Fields" <bfields@fieldses.org> wrote:

> On Fri, Nov 21, 2014 at 02:19:28PM -0500, Jeff Layton wrote:
> > ...also make the manipulation of sp_all_threads list use RCU-friendly
> > functions.
> > 
> > Signed-off-by: Jeff Layton <jlayton@primarydata.com>
> > Tested-by: Chris Worley <chris.worley@primarydata.com>
> > ---
> >  include/linux/sunrpc/svc.h    |  2 ++
> >  include/trace/events/sunrpc.h |  3 ++-
> >  net/sunrpc/svc.c              | 10 ++++++----
> >  3 files changed, 10 insertions(+), 5 deletions(-)
> > 
> > diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> > index 5f0ab39bf7c3..7f80a99c59e4 100644
> > --- a/include/linux/sunrpc/svc.h
> > +++ b/include/linux/sunrpc/svc.h
> > @@ -223,6 +223,7 @@ static inline void svc_putu32(struct kvec *iov, __be32 val)
> >  struct svc_rqst {
> >  	struct list_head	rq_list;	/* idle list */
> >  	struct list_head	rq_all;		/* all threads list */
> > +	struct rcu_head		rq_rcu_head;	/* for RCU deferred kfree */
> >  	struct svc_xprt *	rq_xprt;	/* transport ptr */
> >  
> >  	struct sockaddr_storage	rq_addr;	/* peer address */
> > @@ -262,6 +263,7 @@ struct svc_rqst {
> >  #define	RQ_SPLICE_OK	(4)			/* turned off in gss privacy
> >  						 * to prevent encrypting page
> >  						 * cache pages */
> > +#define	RQ_VICTIM	(5)			/* about to be shut down */
> >  	unsigned long		rq_flags;	/* flags field */
> >  
> >  	void *			rq_argp;	/* decoded arguments */
> > diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
> > index 5848fc235869..08a5fed50f34 100644
> > --- a/include/trace/events/sunrpc.h
> > +++ b/include/trace/events/sunrpc.h
> > @@ -418,7 +418,8 @@ TRACE_EVENT(xs_tcp_data_recv,
> >  		{ (1UL << RQ_LOCAL),		"RQ_LOCAL"},		\
> >  		{ (1UL << RQ_USEDEFERRAL),	"RQ_USEDEFERRAL"},	\
> >  		{ (1UL << RQ_DROPME),		"RQ_DROPME"},		\
> > -		{ (1UL << RQ_SPLICE_OK),	"RQ_SPLICE_OK"})
> > +		{ (1UL << RQ_SPLICE_OK),	"RQ_SPLICE_OK"},	\
> > +		{ (1UL << RQ_VICTIM),		"RQ_VICTIM"})
> >  
> >  TRACE_EVENT(svc_recv,
> >  	TP_PROTO(struct svc_rqst *rqst, int status),
> > diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> > index 5d9a443d21f6..4edef32f3b9f 100644
> > --- a/net/sunrpc/svc.c
> > +++ b/net/sunrpc/svc.c
> > @@ -616,7 +616,7 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
> >  	serv->sv_nrthreads++;
> >  	spin_lock_bh(&pool->sp_lock);
> >  	pool->sp_nrthreads++;
> > -	list_add(&rqstp->rq_all, &pool->sp_all_threads);
> > +	list_add_rcu(&rqstp->rq_all, &pool->sp_all_threads);
> >  	spin_unlock_bh(&pool->sp_lock);
> >  	rqstp->rq_server = serv;
> >  	rqstp->rq_pool = pool;
> > @@ -684,7 +684,8 @@ found_pool:
> >  		 * so we don't try to kill it again.
> >  		 */
> >  		rqstp = list_entry(pool->sp_all_threads.next, struct svc_rqst, rq_all);
> > -		list_del_init(&rqstp->rq_all);
> > +		set_bit(RQ_VICTIM, &rqstp->rq_flags);
> > +		list_del_rcu(&rqstp->rq_all);
> >  		task = rqstp->rq_task;
> >  	}
> >  	spin_unlock_bh(&pool->sp_lock);
> > @@ -782,10 +783,11 @@ svc_exit_thread(struct svc_rqst *rqstp)
> >  
> >  	spin_lock_bh(&pool->sp_lock);
> >  	pool->sp_nrthreads--;
> > -	list_del(&rqstp->rq_all);
> > +	if (!test_and_set_bit(RQ_VICTIM, &rqstp->rq_flags))
> > +		list_del_rcu(&rqstp->rq_all);
> 
> Both users of RQ_VICTIM are under the sp_lock, so we don't really need
> an atomic test_and_set_bit, do we?
> 

No, it doesn't really need to be an atomic test_and_set_bit. We could
just as easily do:

if (!test_bit(RQ_VICTIM, &rqstp->rq_flags)) {
	set_bit(RQ_VICTIM, &rqstp->rq_flags);
	list_del_rcu(&rqstp->rq_all);
}

...but this works and I think it makes for easier reading. Is it less
efficient? Maybe, but this is not anywhere near a hot codepath so a
couple of extra cycles really shouldn't matter.

> But I guess svc_exit_thread probably still needs to check for the case
> where it's called on a thread that svc_set_num_threads has already
> chosen, and this works even if it's overkill.  OK, fine.
> 

Right. We can't use list_del_init in choose_victim anymore because we're
switching this list over to RCU, and list_del_init would reinitialize the
entry's pointers out from under concurrent readers. So we need some other
way to record whether the entry has already been removed from the list,
and that's what RQ_VICTIM indicates.

> >  	spin_unlock_bh(&pool->sp_lock);
> >  
> > -	kfree(rqstp);
> > +	kfree_rcu(rqstp, rq_rcu_head);
> >  
> >  	/* Release the server */
> >  	if (serv)
> > -- 
> > 2.1.0
> > 


-- 
Jeff Layton <jlayton@primarydata.com>

