public inbox for linux-nfs@vger.kernel.org
From: Chuck Lever <chuck.lever@oracle.com>
To: NeilBrown <neilb@suse.de>, Anna Schumaker <anna@kernel.org>,
	Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: linux-nfs@vger.kernel.org,
	Olga Kornievskaia <okorniev@redhat.com>,
	Dai Ngo <Dai.Ngo@oracle.com>, Tom Talpey <tom@talpey.com>,
	Jeff Layton <jlayton@kernel.org>
Subject: Re: [PATCH 1/2] nfsd: don't use sv_nrthreads in connection limiting calculations.
Date: Mon, 9 Dec 2024 12:25:20 -0500	[thread overview]
Message-ID: <f306d92f-5d5f-4e7d-99b8-19efeda0078b@oracle.com> (raw)
In-Reply-To: <20241209004310.728309-2-neilb@suse.de>

On 12/8/24 7:41 PM, NeilBrown wrote:
> The heuristic for limiting the number of incoming connections to nfsd
> currently uses sv_nrthreads - allowing more connections if more threads
> were configured.
> 
> A future patch will allow number of threads to grow dynamically so that
> there will be no need to configure sv_nrthreads.  So we need a different
> solution for limiting connections.
> 
> It isn't clear what problem is solved by limiting connections (as
> mentioned in a code comment) but the most likely problem is a connection
> storm - many connections that are not doing productive work.  These will
> already be closed after about 6 minutes, but limiting them might help to
> slow down a storm.
> 
> This patch adds a per-connection flag XPT_PEER_VALID which indicates
> that the peer has presented a filehandle for which it has some sort of
> access, i.e. the peer is known to be trusted in some way.  We now only
> count connections which have NOT been determined to be valid.  There
> should be relatively few of these at any given time.
> 
> If the number of non-validated peers exceeds a limit - currently 64 - we
> close the oldest non-validated peer to avoid having too many of these
> useless connections.
> 
> Note that this patch significantly changes the meaning of the various
> configuration parameters for "max connections".  The next patch will
> remove all of these.
> 
> Signed-off-by: NeilBrown <neilb@suse.de>
> ---
>   fs/nfs/callback.c               |  4 ----
>   fs/nfs/callback_xdr.c           |  1 +

I've pulled this one into nfsd-testing, but it would be great if the
fs/nfs/ hunks could be Acked-by a client maintainer.


>   fs/nfsd/netns.h                 |  4 ++--
>   fs/nfsd/nfsfh.c                 |  2 ++
>   include/linux/sunrpc/svc.h      |  2 +-
>   include/linux/sunrpc/svc_xprt.h | 15 +++++++++++++++
>   net/sunrpc/svc_xprt.c           | 33 +++++++++++++++++----------------
>   7 files changed, 38 insertions(+), 23 deletions(-)
> 
> diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
> index 6cf92498a5ac..86bdc7d23fb9 100644
> --- a/fs/nfs/callback.c
> +++ b/fs/nfs/callback.c
> @@ -211,10 +211,6 @@ static struct svc_serv *nfs_callback_create_svc(int minorversion)
>   		return ERR_PTR(-ENOMEM);
>   	}
>   	cb_info->serv = serv;
> -	/* As there is only one thread we need to over-ride the
> -	 * default maximum of 80 connections
> -	 */
> -	serv->sv_maxconn = 1024;
>   	dprintk("nfs_callback_create_svc: service created\n");
>   	return serv;
>   }
> diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
> index fdeb0b34a3d3..4254ba3ee7c5 100644
> --- a/fs/nfs/callback_xdr.c
> +++ b/fs/nfs/callback_xdr.c
> @@ -984,6 +984,7 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp)
>   			nfs_put_client(cps.clp);
>   			goto out_invalidcred;
>   		}
> +		svc_xprt_set_valid(rqstp->rq_xprt);
>   	}
>   
>   	cps.minorversion = hdr_arg.minorversion;
> diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
> index 26f7b34d1a03..a05a45bb1978 100644
> --- a/fs/nfsd/netns.h
> +++ b/fs/nfsd/netns.h
> @@ -129,8 +129,8 @@ struct nfsd_net {
>   	unsigned char writeverf[8];
>   
>   	/*
> -	 * Max number of connections this nfsd container will allow. Defaults
> -	 * to '0' which is means that it bases this on the number of threads.
> +	 * Max number of non-validated connections this nfsd container
> +	 * will allow.  Defaults to '0', which gets mapped to 64.
>   	 */
>   	unsigned int max_connections;
>   
> diff --git a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c
> index 6a831cb242df..bf59f83c6224 100644
> --- a/fs/nfsd/nfsfh.c
> +++ b/fs/nfsd/nfsfh.c
> @@ -382,6 +382,8 @@ __fh_verify(struct svc_rqst *rqstp,
>   	if (error)
>   		goto out;
>   
> +	svc_xprt_set_valid(rqstp->rq_xprt);
> +
>   	/* Finally, check access permissions. */
>   	error = nfsd_permission(cred, exp, dentry, access);
>   out:
> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> index e68fecf6eab5..617ebfff2f30 100644
> --- a/include/linux/sunrpc/svc.h
> +++ b/include/linux/sunrpc/svc.h
> @@ -81,7 +81,7 @@ struct svc_serv {
>   	unsigned int		sv_xdrsize;	/* XDR buffer size */
>   	struct list_head	sv_permsocks;	/* all permanent sockets */
>   	struct list_head	sv_tempsocks;	/* all temporary sockets */
> -	int			sv_tmpcnt;	/* count of temporary sockets */
> +	int			sv_tmpcnt;	/* count of temporary "valid" sockets */
>   	struct timer_list	sv_temptimer;	/* timer for aging temporary sockets */
>   
>   	char *			sv_name;	/* service name */
> diff --git a/include/linux/sunrpc/svc_xprt.h b/include/linux/sunrpc/svc_xprt.h
> index 0981e35a9fed..35929a7727c7 100644
> --- a/include/linux/sunrpc/svc_xprt.h
> +++ b/include/linux/sunrpc/svc_xprt.h
> @@ -99,8 +99,23 @@ enum {
>   	XPT_HANDSHAKE,		/* xprt requests a handshake */
>   	XPT_TLS_SESSION,	/* transport-layer security established */
>   	XPT_PEER_AUTH,		/* peer has been authenticated */
> +	XPT_PEER_VALID,		/* peer has presented a filehandle that
> +				 * it has access to.  It is NOT counted
> +				 * in ->sv_tmpcnt.
> +				 */
>   };
>   
> +static inline void svc_xprt_set_valid(struct svc_xprt *xpt)
> +{
> +	if (test_bit(XPT_TEMP, &xpt->xpt_flags) &&
> +	    !test_and_set_bit(XPT_PEER_VALID, &xpt->xpt_flags)) {
> +		struct svc_serv *serv = xpt->xpt_server;
> +		spin_lock(&serv->sv_lock);
> +		serv->sv_tmpcnt -= 1;
> +		spin_unlock(&serv->sv_lock);
> +	}
> +}
> +
>   static inline void unregister_xpt_user(struct svc_xprt *xpt, struct svc_xpt_user *u)
>   {
>   	spin_lock(&xpt->xpt_lock);
> diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
> index 43c57124de52..ff5b8bb8a88f 100644
> --- a/net/sunrpc/svc_xprt.c
> +++ b/net/sunrpc/svc_xprt.c
> @@ -606,7 +606,8 @@ int svc_port_is_privileged(struct sockaddr *sin)
>   }
>   
>   /*
> - * Make sure that we don't have too many active connections. If we have,
> + * Make sure that we don't have too many connections that have not yet
> + * demonstrated that they have access to the NFS server. If we have,
>    * something must be dropped. It's not clear what will happen if we allow
>    * "too many" connections, but when dealing with network-facing software,
>    * we have to code defensively. Here we do that by imposing hard limits.
> @@ -625,27 +626,26 @@ int svc_port_is_privileged(struct sockaddr *sin)
>    */
>   static void svc_check_conn_limits(struct svc_serv *serv)
>   {
> -	unsigned int limit = serv->sv_maxconn ? serv->sv_maxconn :
> -				(serv->sv_nrthreads+3) * 20;
> +	unsigned int limit = serv->sv_maxconn ? serv->sv_maxconn : 64;
>   
>   	if (serv->sv_tmpcnt > limit) {
> -		struct svc_xprt *xprt = NULL;
> +		struct svc_xprt *xprt = NULL, *xprti;
>   		spin_lock_bh(&serv->sv_lock);
>   		if (!list_empty(&serv->sv_tempsocks)) {
> -			/* Try to help the admin */
> -			net_notice_ratelimited("%s: too many open connections, consider increasing the %s\n",
> -					       serv->sv_name, serv->sv_maxconn ?
> -					       "max number of connections" :
> -					       "number of threads");
>   			/*
>   			 * Always select the oldest connection. It's not fair,
> -			 * but so is life
> +			 * but nor is life.
>   			 */
> -			xprt = list_entry(serv->sv_tempsocks.prev,
> -					  struct svc_xprt,
> -					  xpt_list);
> -			set_bit(XPT_CLOSE, &xprt->xpt_flags);
> -			svc_xprt_get(xprt);
> +			list_for_each_entry_reverse(xprti, &serv->sv_tempsocks,
> +						    xpt_list)
> +			{
> +				if (!test_bit(XPT_PEER_VALID, &xprti->xpt_flags)) {
> +					xprt = xprti;
> +					set_bit(XPT_CLOSE, &xprt->xpt_flags);
> +					svc_xprt_get(xprt);
> +					break;
> +				}
> +			}
>   		}
>   		spin_unlock_bh(&serv->sv_lock);
>   
> @@ -1039,7 +1039,8 @@ static void svc_delete_xprt(struct svc_xprt *xprt)
>   
>   	spin_lock_bh(&serv->sv_lock);
>   	list_del_init(&xprt->xpt_list);
> -	if (test_bit(XPT_TEMP, &xprt->xpt_flags))
> +	if (test_bit(XPT_TEMP, &xprt->xpt_flags) &&
> +	    !test_bit(XPT_PEER_VALID, &xprt->xpt_flags))
>   		serv->sv_tmpcnt--;
>   	spin_unlock_bh(&serv->sv_lock);
>   


-- 
Chuck Lever


Thread overview: 8+ messages
2024-12-09  0:41 [PATCH 0/2] nfsd: don't use sv_nrthreads in connection limiting NeilBrown
2024-12-09  0:41 ` [PATCH 1/2] nfsd: don't use sv_nrthreads in connection limiting calculations NeilBrown
2024-12-09 17:25   ` Chuck Lever [this message]
2024-12-09  0:41 ` [PATCH 2/2] sunrpc: remove all connection limit configuration NeilBrown
2024-12-09 17:27   ` Chuck Lever
2024-12-10 12:29     ` Jeff Layton
2024-12-11  3:48       ` NeilBrown
2024-12-09 14:54 ` [PATCH 0/2] nfsd: don't use sv_nrthreads in connection limiting Jeff Layton
