From: Jeff Layton <jlayton@kernel.org>
To: NeilBrown <neilb@suse.de>, Chuck Lever <chuck.lever@oracle.com>
Cc: linux-nfs@vger.kernel.org, Olga Kornievskaia <kolga@netapp.com>,
Dai Ngo <Dai.Ngo@oracle.com>, Tom Talpey <tom@talpey.com>,
Steve Dickson <steved@redhat.com>
Subject: Re: [PATCH 09/14] nfsd: return hard failure for OP_SETCLIENTID when there are too many clients.
Date: Mon, 15 Jul 2024 11:21:32 -0400 [thread overview]
Message-ID: <c4d862487377da1a8b9a5d48f8cf27b1c9fa95d3.camel@kernel.org> (raw)
In-Reply-To: <20240715074657.18174-10-neilb@suse.de>
On Mon, 2024-07-15 at 17:14 +1000, NeilBrown wrote:
> If there are more non-courteous clients than the calculated limit, we
> should fail the request rather than report a soft failure that
> encourages the client to retry indefinitely.
>
> If there are courteous clients which push us over the limit, then
> expedite their removal.
>
> This is not known to have caused a problem in production use, but
> testing with many clients shows repeated NFS4ERR_DELAY responses,
> which doesn't seem helpful.
>
> Also remove an outdated comment - we do use a slab cache.
>
> Signed-off-by: NeilBrown <neilb@suse.de>
> ---
> fs/nfsd/nfs4state.c | 23 +++++++++++++----------
> 1 file changed, 13 insertions(+), 10 deletions(-)
>
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index a20c2c9d7d45..88936f3189e1 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -2212,21 +2212,20 @@ STALE_CLIENTID(clientid_t *clid, struct nfsd_net *nn)
> return 1;
> }
>
> -/*
> - * XXX Should we use a slab cache ?
> - * This type of memory management is somewhat inefficient, but we use it
> - * anyway since SETCLIENTID is not a common operation.
> - */
> static struct nfs4_client *alloc_client(struct xdr_netobj name,
> struct nfsd_net *nn)
> {
> struct nfs4_client *clp;
> int i;
>
> - if (atomic_read(&nn->nfs4_client_count) >= nn->nfs4_max_clients) {
> + if (atomic_read(&nn->nfs4_client_count) -
> + atomic_read(&nn->nfsd_courtesy_clients) >= nn->nfs4_max_clients)
> + return ERR_PTR(-EREMOTEIO);
> +
nit: I know it gets remapped, but why EREMOTEIO? From nfsd's standpoint
this would seem to imply a problem on the client. Maybe:
#define EUSERS 87 /* Too many users */
...instead?
> + if (atomic_read(&nn->nfs4_client_count) >= nn->nfs4_max_clients &&
> + atomic_read(&nn->nfsd_courtesy_clients) > 0)
> mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
> - return NULL;
> - }
> +
> clp = kmem_cache_zalloc(client_slab, GFP_KERNEL);
> if (clp == NULL)
> return NULL;
> @@ -3115,8 +3114,8 @@ static struct nfs4_client *create_client(struct xdr_netobj name,
> struct dentry *dentries[ARRAY_SIZE(client_files)];
>
> clp = alloc_client(name, nn);
> - if (clp == NULL)
> - return NULL;
> + if (IS_ERR_OR_NULL(clp))
> + return clp;
>
> ret = copy_cred(&clp->cl_cred, &rqstp->rq_cred);
> if (ret) {
> @@ -3498,6 +3497,8 @@ nfsd4_exchange_id(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
> new = create_client(exid->clname, rqstp, &verf);
> if (new == NULL)
> return nfserr_jukebox;
> + if (IS_ERR(new))
> + return nfserr_resource;
> status = copy_impl_id(new, exid);
> if (status)
> goto out_nolock;
> @@ -4416,6 +4417,8 @@ nfsd4_setclientid(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
> new = create_client(clname, rqstp, &clverifier);
> if (new == NULL)
> return nfserr_jukebox;
> + if (IS_ERR(new))
> + return nfserr_resource;
> spin_lock(&nn->client_lock);
> conf = find_confirmed_client_by_name(&clname, nn);
> if (conf && client_has_state(conf)) {
Patch looks fine otherwise though.
Reviewed-by: Jeff Layton <jlayton@kernel.org>