From: Jeff Layton <jlayton@kernel.org>
To: NeilBrown <neilb@suse.de>, Chuck Lever <chuck.lever@oracle.com>
Cc: linux-nfs@vger.kernel.org, okorniev@redhat.com,
Dai Ngo <Dai.Ngo@oracle.com>, Tom Talpey <tom@talpey.com>
Subject: Re: [PATCH v2] nfsd: Don't fail OP_SETCLIENTID when there are too many clients.
Date: Thu, 24 Oct 2024 11:34:37 -0400 [thread overview]
Message-ID: <30f62ce7f3a6aa0d02f07bfd1be71b3f82f83961.camel@kernel.org> (raw)
In-Reply-To: <172972144286.81717.3023946721770566532@noble.neil.brown.name>
On Thu, 2024-10-24 at 09:10 +1100, NeilBrown wrote:
> Failing OP_SETCLIENTID or OP_EXCHANGE_ID should only happen if there is
> memory allocation failure. Putting a hard limit on the number of
> clients is really helpful as it will either happen too early and prevent
"unhelpful" ?
> clients that the server can easily handle, or too late and allow clients
> when the server is swamped.
>
> The calculated limit is still useful for expiring courtesy clients where
> there are "too many" clients, but it shouldn't prevent the creation of
> active clients.
>
> Testing of lots of clients against small-mem servers reports repeated
> NFS4ERR_DELAY responses, which don't seem helpful. There may have been
> reports of similar problems in production use.
>
> Also remove an outdated comment - we do use a slab cache.
>
> Signed-off-by: NeilBrown <neilb@suse.de>
> ---
> fs/nfsd/nfs4state.c | 11 +++--------
> 1 file changed, 3 insertions(+), 8 deletions(-)
>
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index d585c267731b..0791a43b19e6 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -2218,21 +2218,16 @@ STALE_CLIENTID(clientid_t *clid, struct nfsd_net *nn)
> return 1;
> }
>
> -/*
> - * XXX Should we use a slab cache ?
> - * This type of memory management is somewhat inefficient, but we use it
> - * anyway since SETCLIENTID is not a common operation.
> - */
> static struct nfs4_client *alloc_client(struct xdr_netobj name,
> struct nfsd_net *nn)
> {
> struct nfs4_client *clp;
> int i;
>
> - if (atomic_read(&nn->nfs4_client_count) >= nn->nfs4_max_clients) {
> + if (atomic_read(&nn->nfs4_client_count) >= nn->nfs4_max_clients &&
> + atomic_read(&nn->nfsd_courtesy_clients) > 0)
> mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
> - return NULL;
> - }
> +
> clp = kmem_cache_zalloc(client_slab, GFP_KERNEL);
> if (clp == NULL)
> return NULL;
Do we even need to check nn->nfs4_max_clients at all?
Maybe we should just kick the laundromat whenever
nfsd_courtesy_clients > 0. I would suggest just removing
nfs4_max_clients altogether, but it looks like
nfs4_get_client_reaplist() uses it and it's not clear to me what would
better replace it.
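
For what it's worth, here is a rough userspace model of the two behaviors (the struct and the laundromat-kick flag are mocks standing in for the atomics in struct nfsd_net and for mod_delayed_work(); this is an illustration, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Mocked counters standing in for the atomics in struct nfsd_net. */
struct mock_nn {
	int nfs4_client_count;
	int nfs4_max_clients;
	int nfsd_courtesy_clients;
	bool laundromat_kicked;	/* stands in for mod_delayed_work() */
};

/* Pre-patch behavior: refuse the allocation outright at the limit,
 * so the caller ends up returning NFS4ERR_DELAY to the client. */
static bool old_alloc_allowed(struct mock_nn *nn)
{
	if (nn->nfs4_client_count >= nn->nfs4_max_clients) {
		nn->laundromat_kicked = true;
		return false;
	}
	return true;
}

/* Patched behavior: never refuse; just kick the laundromat when over
 * the limit and there are courtesy clients worth expiring. */
static bool new_alloc_allowed(struct mock_nn *nn)
{
	if (nn->nfs4_client_count >= nn->nfs4_max_clients &&
	    nn->nfsd_courtesy_clients > 0)
		nn->laundromat_kicked = true;
	return true;
}
```

Dropping the nfs4_max_clients test from new_alloc_allowed(), as suggested above, would kick the laundromat on every allocation while any courtesy clients exist.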
--
Jeff Layton <jlayton@kernel.org>
Thread overview: 4+ messages
2024-10-23 22:10 [PATCH v2] nfsd: Don't fail OP_SETCLIENTID when there are too many clients NeilBrown
2024-10-24 13:31 ` Chuck Lever
2024-10-24 15:34 ` Jeff Layton [this message]
2024-10-30 18:14 ` cel