public inbox for linux-nfs@vger.kernel.org
From: Jeff Layton <jlayton@kernel.org>
To: NeilBrown <neilb@suse.de>, Chuck Lever <chuck.lever@oracle.com>
Cc: linux-nfs@vger.kernel.org, Olga Kornievskaia <kolga@netapp.com>,
	Dai Ngo <Dai.Ngo@oracle.com>, Tom Talpey <tom@talpey.com>,
	Steve Dickson <steved@redhat.com>
Subject: Re: [PATCH 05/14] sunrpc: change sp_nrthreads from atomic_t to unsigned int.
Date: Mon, 15 Jul 2024 10:33:05 -0400	[thread overview]
Message-ID: <4b252735acae35757f778772503b21c75565139c.camel@kernel.org> (raw)
In-Reply-To: <cf8d0e7e1ddaa4d8e1923be8274ff0679713e471.camel@kernel.org>

On Mon, 2024-07-15 at 10:12 -0400, Jeff Layton wrote:
> On Mon, 2024-07-15 at 17:14 +1000, NeilBrown wrote:
> > sp_nrthreads is only ever accessed under the service mutex
> >   nlmsvc_mutex nfs_callback_mutex nfsd_mutex
> > so there is no need for it to be an atomic_t.
> > 
> > The fact that all code using it is single-threaded means that we
> > can simplify svc_pool_victim and remove the temporary elevation of
> > sp_nrthreads.
> > 
> > Signed-off-by: NeilBrown <neilb@suse.de>
> > ---
> >  fs/nfsd/nfsctl.c           |  2 +-
> >  fs/nfsd/nfssvc.c           |  2 +-
> >  include/linux/sunrpc/svc.h |  4 ++--
> >  net/sunrpc/svc.c           | 31 +++++++++++--------------------
> >  4 files changed, 15 insertions(+), 24 deletions(-)
> > 
> > diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> > index 5b0f2e0d7ccf..d85b6d1fa31f 100644
> > --- a/fs/nfsd/nfsctl.c
> > +++ b/fs/nfsd/nfsctl.c
> > @@ -1769,7 +1769,7 @@ int nfsd_nl_threads_get_doit(struct sk_buff *skb, struct genl_info *info)
> >  			struct svc_pool *sp = &nn->nfsd_serv->sv_pools[i];
> >  
> >  			err = nla_put_u32(skb, NFSD_A_SERVER_THREADS,
> > -					  atomic_read(&sp->sp_nrthreads));
> > +					  sp->sp_nrthreads);
> >  			if (err)
> >  				goto err_unlock;
> >  		}
> > diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
> > index 4438cdcd4873..7377422a34df 100644
> > --- a/fs/nfsd/nfssvc.c
> > +++ b/fs/nfsd/nfssvc.c
> > @@ -641,7 +641,7 @@ int nfsd_get_nrthreads(int n, int *nthreads, struct net *net)
> >  
> >  	if (serv)
> >  		for (i = 0; i < serv->sv_nrpools && i < n; i++)
> > -			nthreads[i] = atomic_read(&serv->sv_pools[i].sp_nrthreads);
> > +			nthreads[i] = serv->sv_pools[i].sp_nrthreads;
> >  	return 0;
> >  }
> >  
> > diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> > index e4fa25fafa97..99e9345d829e 100644
> > --- a/include/linux/sunrpc/svc.h
> > +++ b/include/linux/sunrpc/svc.h
> > @@ -33,9 +33,9 @@
> >   * node traffic on multi-node NUMA NFS servers.
> >   */
> >  struct svc_pool {
> > -	unsigned int		sp_id;	    	/* pool id; also node id on NUMA */
> > +	unsigned int		sp_id;		/* pool id; also node id on NUMA */
> >  	struct lwq		sp_xprts;	/* pending transports */
> > -	atomic_t		sp_nrthreads;	/* # of threads in pool */
> > +	unsigned int		sp_nrthreads;	/* # of threads in pool */
> >  	struct list_head	sp_all_threads;	/* all server threads */
> >  	struct llist_head	sp_idle_threads; /* idle server threads */
> >  
> > diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> > index 072ad115ae3d..0d8588bc693c 100644
> > --- a/net/sunrpc/svc.c
> > +++ b/net/sunrpc/svc.c
> > @@ -725,7 +725,7 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
> >  	serv->sv_nrthreads += 1;
> >  	spin_unlock_bh(&serv->sv_lock);
> >  
> > -	atomic_inc(&pool->sp_nrthreads);
> > +	pool->sp_nrthreads += 1;
> >  
> >  	/* Protected by whatever lock the service uses when calling
> >  	 * svc_set_num_threads()
> > @@ -780,31 +780,22 @@ svc_pool_victim(struct svc_serv *serv, struct svc_pool *target_pool,
> >  	struct svc_pool *pool;
> >  	unsigned int i;
> >  
> > -retry:
> >  	pool = target_pool;
> >  
> > -	if (pool != NULL) {
> > -		if (atomic_inc_not_zero(&pool->sp_nrthreads))
> > -			goto found_pool;
> > -		return NULL;
> > -	} else {
> > +	if (!pool) {
> >  		for (i = 0; i < serv->sv_nrpools; i++) {
> >  			pool = &serv->sv_pools[--(*state) % serv->sv_nrpools];
> > -			if (atomic_inc_not_zero(&pool->sp_nrthreads))
> > -				goto found_pool;
> > +			if (pool->sp_nrthreads)
> > +				break;
> >  		}
> > -		return NULL;
> >  	}
> >  
> > -found_pool:
> > -	set_bit(SP_VICTIM_REMAINS, &pool->sp_flags);
> > -	set_bit(SP_NEED_VICTIM, &pool->sp_flags);
> > -	if (!atomic_dec_and_test(&pool->sp_nrthreads))
> > +	if (pool && pool->sp_nrthreads) {
> > +		set_bit(SP_VICTIM_REMAINS, &pool->sp_flags);
> > +		set_bit(SP_NEED_VICTIM, &pool->sp_flags);
> >  		return pool;
> > -	/* Nothing left in this pool any more */
> > -	clear_bit(SP_NEED_VICTIM, &pool->sp_flags);
> > -	clear_bit(SP_VICTIM_REMAINS, &pool->sp_flags);
> > -	goto retry;
> > +	}
> > +	return NULL;
> >  }
> >  
> >  static int
> > @@ -883,7 +874,7 @@ svc_set_num_threads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
> >  	if (!pool)
> >  		nrservs -= serv->sv_nrthreads;
> >  	else
> > -		nrservs -= atomic_read(&pool->sp_nrthreads);
> > +		nrservs -= pool->sp_nrthreads;
> >  
> >  	if (nrservs > 0)
> >  		return svc_start_kthreads(serv, pool, nrservs);
> > @@ -953,7 +944,7 @@ svc_exit_thread(struct svc_rqst *rqstp)
> >  
> >  	list_del_rcu(&rqstp->rq_all);
> >  
> > -	atomic_dec(&pool->sp_nrthreads);
> > +	pool->sp_nrthreads -= 1;
> >  
> >  	spin_lock_bh(&serv->sv_lock);
> >  	serv->sv_nrthreads -= 1;
> 
> I don't think svc_exit_thread is called with the nfsd_mutex held,
> so if several threads were exiting at the same time, they could
> race here.
> 


Ok, the changelog on #7 might point out why I'm wrong here.

nfsd() calls svc_exit_thread when exiting, but I missed that this
implies svc_stop_kthreads() is running in another task (and that the
nfsd_mutex is actually held). It also looks like the threads have to
be torn down serially, so there should be no race there after all.

Either way, could I trouble you to add a comment about this above
svc_exit_thread? That's a really subtle interaction and it would be
good to document it.
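
Something along these lines, maybe (the wording below is only a sketch
of my reading of the locking, to be checked against the actual code):

```c
/*
 * This must only be called by the thread that is exiting.  Modifying
 * sp_nrthreads without atomics is safe here because thread shutdown
 * is driven by svc_stop_kthreads(), which runs in another task with
 * the service mutex (e.g. nfsd_mutex) held and reaps the victim
 * threads one at a time.
 */
```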

Thanks,
-- 
Jeff Layton <jlayton@kernel.org>


Thread overview: 37+ messages
2024-07-15  7:14 [PATCH 00/14 RFC] support automatic changes to nfsd thread count NeilBrown
2024-07-15  7:14 ` [PATCH 01/14] lockd: discard nlmsvc_timeout NeilBrown
2024-07-15  7:14 ` [PATCH 02/14] SUNRPC: make various functions static, or not exported NeilBrown
2024-07-15  7:14 ` [PATCH 03/14] nfsd: move nfsd_pool_stats_open into nfsctl.c NeilBrown
2024-07-15  7:14 ` [PATCH 04/14] nfsd: don't allocate the versions array NeilBrown
2024-08-02 21:34   ` Mike Snitzer
2024-08-02 23:04     ` NeilBrown
2024-08-05  4:55       ` NeilBrown
2024-07-15  7:14 ` [PATCH 05/14] sunrpc: change sp_nrthreads from atomic_t to unsigned int NeilBrown
2024-07-15 14:12   ` Jeff Layton
2024-07-15 14:33     ` Jeff Layton [this message]
2024-07-16  1:33     ` NeilBrown
2024-07-24 19:36       ` Chuck Lever
2024-07-15  7:14 ` [PATCH 06/14] sunrpc: don't take ->sv_lock when updating ->sv_nrthreads NeilBrown
2024-07-15  7:14 ` [PATCH 07/14] Change unshare_fs_struct() to never fail NeilBrown
2024-07-15 14:39   ` Jeff Layton
2024-07-16  1:48     ` NeilBrown
2024-07-15  7:14 ` [PATCH 08/14] SUNRPC: move nrthreads counting to start/stop threads NeilBrown
2024-07-15  7:14 ` [PATCH 09/14] nfsd: return hard failure for OP_SETCLIENTID when there are too many clients NeilBrown
2024-07-15 15:21   ` Jeff Layton
2024-07-15  7:14 ` [PATCH 10/14] nfs: dynamically adjust per-client DRC slot limits NeilBrown
2024-07-15  7:14 ` [PATCH 11/14] nfsd: don't use sv_nrthreads in connection limiting calculations NeilBrown
2024-07-15 15:52   ` Jeff Layton
2024-07-16  2:04     ` NeilBrown
2024-07-15  7:14 ` [PATCH 12/14] sunrpc: introduce possibility that requested number of threads is different from actual NeilBrown
2024-07-15 16:00   ` Jeff Layton
2024-07-15  7:14 ` [PATCH 13/14] nfsd: introduce concept of a maximum number of threads NeilBrown
2024-07-15 17:06   ` Jeff Layton
2024-07-16  3:21     ` NeilBrown
2024-07-16 11:00       ` Jeff Layton
2024-07-16 13:31         ` Chuck Lever III
2024-07-16 18:49           ` Tom Talpey
2024-07-17 15:24             ` Chuck Lever III
2024-07-15  7:14 ` [PATCH 14/14] nfsd: adjust number of running nfsd threads NeilBrown
2024-07-15 17:29 ` [PATCH 00/14 RFC] support automatic changes to nfsd thread count Jeff Layton
2024-07-24 19:43 ` Chuck Lever III
2024-07-24 21:25   ` NeilBrown
