public inbox for linux-nfs@vger.kernel.org
* [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info
  2024-11-19  0:41 [PATCH 0/6 RFC v2] nfsd: allocate/free session-based DRC slots on demand NeilBrown
@ 2024-11-19  0:41 ` NeilBrown
  2024-11-19 19:14   ` Chuck Lever
  2024-11-19 19:21   ` Chuck Lever
  0 siblings, 2 replies; 23+ messages in thread
From: NeilBrown @ 2024-11-19  0:41 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

Each client now reports the number of slots allocated in each session.

Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/nfsd/nfs4state.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 3889ba1c653f..31ff9f92a895 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -2642,6 +2642,7 @@ static const char *cb_state2str(int state)
 static int client_info_show(struct seq_file *m, void *v)
 {
 	struct inode *inode = file_inode(m->file);
+	struct nfsd4_session *ses;
 	struct nfs4_client *clp;
 	u64 clid;
 
@@ -2678,6 +2679,13 @@ static int client_info_show(struct seq_file *m, void *v)
 	seq_printf(m, "callback address: \"%pISpc\"\n", &clp->cl_cb_conn.cb_addr);
 	seq_printf(m, "admin-revoked states: %d\n",
 		   atomic_read(&clp->cl_admin_revoked));
+	seq_printf(m, "session slots:");
+	spin_lock(&clp->cl_lock);
+	list_for_each_entry(ses, &clp->cl_sessions, se_perclnt)
+		seq_printf(m, " %u", ses->se_fchannel.maxreqs);
+	spin_unlock(&clp->cl_lock);
+	seq_puts(m, "\n");
+
 	drop_client(clp);
 
 	return 0;
-- 
2.47.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info
  2024-11-19  0:41 ` [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info NeilBrown
@ 2024-11-19 19:14   ` Chuck Lever
  2024-11-19 22:22     ` NeilBrown
  2024-11-19 19:21   ` Chuck Lever
  1 sibling, 1 reply; 23+ messages in thread
From: Chuck Lever @ 2024-11-19 19:14 UTC (permalink / raw)
  To: NeilBrown; +Cc: Jeff Layton, linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Tue, Nov 19, 2024 at 11:41:30AM +1100, NeilBrown wrote:
> Each client now reports the number of slots allocated in each session.

Can this file also report the target slot count? Ie, is the server
matching the client's requested slot count, or is it over or under
by some number?

Would it be useful for a server tester or administrator to poke a
target slot count value into this file and watch the machinery
adjust?


> Signed-off-by: NeilBrown <neilb@suse.de>
> ---
>  fs/nfsd/nfs4state.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 3889ba1c653f..31ff9f92a895 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -2642,6 +2642,7 @@ static const char *cb_state2str(int state)
>  static int client_info_show(struct seq_file *m, void *v)
>  {
>  	struct inode *inode = file_inode(m->file);
> +	struct nfsd4_session *ses;
>  	struct nfs4_client *clp;
>  	u64 clid;
>  
> @@ -2678,6 +2679,13 @@ static int client_info_show(struct seq_file *m, void *v)
>  	seq_printf(m, "callback address: \"%pISpc\"\n", &clp->cl_cb_conn.cb_addr);
>  	seq_printf(m, "admin-revoked states: %d\n",
>  		   atomic_read(&clp->cl_admin_revoked));
> +	seq_printf(m, "session slots:");
> +	spin_lock(&clp->cl_lock);
> +	list_for_each_entry(ses, &clp->cl_sessions, se_perclnt)
> +		seq_printf(m, " %u", ses->se_fchannel.maxreqs);
> +	spin_unlock(&clp->cl_lock);
> +	seq_puts(m, "\n");
> +
>  	drop_client(clp);
>  
>  	return 0;
> -- 
> 2.47.0
> 

-- 
Chuck Lever


* Re: [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info
  2024-11-19  0:41 ` [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info NeilBrown
  2024-11-19 19:14   ` Chuck Lever
@ 2024-11-19 19:21   ` Chuck Lever
  2024-11-19 22:24     ` NeilBrown
  1 sibling, 1 reply; 23+ messages in thread
From: Chuck Lever @ 2024-11-19 19:21 UTC (permalink / raw)
  To: NeilBrown; +Cc: Jeff Layton, linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Tue, Nov 19, 2024 at 11:41:30AM +1100, NeilBrown wrote:
> Each client now reports the number of slots allocated in each session.
> 
> Signed-off-by: NeilBrown <neilb@suse.de>
> ---
>  fs/nfsd/nfs4state.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 3889ba1c653f..31ff9f92a895 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -2642,6 +2642,7 @@ static const char *cb_state2str(int state)
>  static int client_info_show(struct seq_file *m, void *v)
>  {
>  	struct inode *inode = file_inode(m->file);
> +	struct nfsd4_session *ses;
>  	struct nfs4_client *clp;
>  	u64 clid;
>  
> @@ -2678,6 +2679,13 @@ static int client_info_show(struct seq_file *m, void *v)
>  	seq_printf(m, "callback address: \"%pISpc\"\n", &clp->cl_cb_conn.cb_addr);
>  	seq_printf(m, "admin-revoked states: %d\n",
>  		   atomic_read(&clp->cl_admin_revoked));
> +	seq_printf(m, "session slots:");
> +	spin_lock(&clp->cl_lock);
> +	list_for_each_entry(ses, &clp->cl_sessions, se_perclnt)
> +		seq_printf(m, " %u", ses->se_fchannel.maxreqs);
> +	spin_unlock(&clp->cl_lock);
> +	seq_puts(m, "\n");
> +

Also, I wonder if information about the backchannel session can be
surfaced in this way?


>  	drop_client(clp);
>  
>  	return 0;
> -- 
> 2.47.0
> 

-- 
Chuck Lever


* Re: [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info
  2024-11-19 19:14   ` Chuck Lever
@ 2024-11-19 22:22     ` NeilBrown
  2024-11-20  0:21       ` Chuck Lever
  0 siblings, 1 reply; 23+ messages in thread
From: NeilBrown @ 2024-11-19 22:22 UTC (permalink / raw)
  To: Chuck Lever
  Cc: Jeff Layton, linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Wed, 20 Nov 2024, Chuck Lever wrote:
> On Tue, Nov 19, 2024 at 11:41:30AM +1100, NeilBrown wrote:
> > Each client now reports the number of slots allocated in each session.
> 
> Can this file also report the target slot count? Ie, is the server
> matching the client's requested slot count, or is it over or under
> by some number?

I could.  Would you like to suggest a syntax?
Usually the numbers would be the same except for short transition
periods, so I'm not convinced of the value.

Currently if the target is reduced while the client is idle there can be
a longer delay before the slots are actually freed, but I think 2
lease-renewal SEQUENCE ops would do it.  If/when we add use of the
CB_RECALL_SLOT callback the delay should disappear.

> 
> Would it be useful for a server tester or administrator to poke a
> target slot count value into this file and watch the machinery
> adjust?

Maybe.  But echo 3 > drop_caches does a pretty good job.  I don't see
that we need more.

Thanks,
NeilBrown



* Re: [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info
  2024-11-19 19:21   ` Chuck Lever
@ 2024-11-19 22:24     ` NeilBrown
  2024-11-20  0:25       ` Chuck Lever
  0 siblings, 1 reply; 23+ messages in thread
From: NeilBrown @ 2024-11-19 22:24 UTC (permalink / raw)
  To: Chuck Lever
  Cc: Jeff Layton, linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Wed, 20 Nov 2024, Chuck Lever wrote:
> On Tue, Nov 19, 2024 at 11:41:30AM +1100, NeilBrown wrote:
> > Each client now reports the number of slots allocated in each session.
> > 
> > Signed-off-by: NeilBrown <neilb@suse.de>
> > ---
> >  fs/nfsd/nfs4state.c | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> > 
> > diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> > index 3889ba1c653f..31ff9f92a895 100644
> > --- a/fs/nfsd/nfs4state.c
> > +++ b/fs/nfsd/nfs4state.c
> > @@ -2642,6 +2642,7 @@ static const char *cb_state2str(int state)
> >  static int client_info_show(struct seq_file *m, void *v)
> >  {
> >  	struct inode *inode = file_inode(m->file);
> > +	struct nfsd4_session *ses;
> >  	struct nfs4_client *clp;
> >  	u64 clid;
> >  
> > @@ -2678,6 +2679,13 @@ static int client_info_show(struct seq_file *m, void *v)
> >  	seq_printf(m, "callback address: \"%pISpc\"\n", &clp->cl_cb_conn.cb_addr);
> >  	seq_printf(m, "admin-revoked states: %d\n",
> >  		   atomic_read(&clp->cl_admin_revoked));
> > +	seq_printf(m, "session slots:");
> > +	spin_lock(&clp->cl_lock);
> > +	list_for_each_entry(ses, &clp->cl_sessions, se_perclnt)
> > +		seq_printf(m, " %u", ses->se_fchannel.maxreqs);
> > +	spin_unlock(&clp->cl_lock);
> > +	seq_puts(m, "\n");
> > +
> 
> Also, I wonder if information about the backchannel session can be
> surfaced in this way?
> 

Probably makes sense.  Maybe we should invent a syntax for reporting
arbitrary info about each session.

   session %d slots: %d
   session %d cb-slots: %d
   ...

???

NeilBrown


* Re: [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info
  2024-11-19 22:22     ` NeilBrown
@ 2024-11-20  0:21       ` Chuck Lever
  0 siblings, 0 replies; 23+ messages in thread
From: Chuck Lever @ 2024-11-20  0:21 UTC (permalink / raw)
  To: NeilBrown; +Cc: Jeff Layton, linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Wed, Nov 20, 2024 at 09:22:59AM +1100, NeilBrown wrote:
> On Wed, 20 Nov 2024, Chuck Lever wrote:
> > On Tue, Nov 19, 2024 at 11:41:30AM +1100, NeilBrown wrote:
> > > Each client now reports the number of slots allocated in each session.
> > 
> > Can this file also report the target slot count? Ie, is the server
> > matching the client's requested slot count, or is it over or under
> > by some number?
> 
> I could.  Would you like to suggest a syntax?
> Usually the numbers would be the same except for short transition
> periods, so I'm not convinced of the value.

That's precisely the kind of situation I would like to be able to
catch -- the two are unequal for longer than expected.


> Currently if the target is reduced while the client is idle there can be
> a longer delay before the slots are actually freed, but I think 2
> lease-renewal SEQUENCE ops would do it.  If/when we add use of the
> CB_RECALL_SLOT callback the delay should disappear.
> 
> > Would it be useful for a server tester or administrator to poke a
> > target slot count value into this file and watch the machinery
> > adjust?
> 
> Maybe.  But echo 3 > drop_caches does a pretty good job.  I don't see
> that we need more.

Fair enough.

-- 
Chuck Lever


* Re: [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info
  2024-11-19 22:24     ` NeilBrown
@ 2024-11-20  0:25       ` Chuck Lever
  2024-11-21 21:03         ` NeilBrown
  0 siblings, 1 reply; 23+ messages in thread
From: Chuck Lever @ 2024-11-20  0:25 UTC (permalink / raw)
  To: NeilBrown; +Cc: Jeff Layton, linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Wed, Nov 20, 2024 at 09:24:52AM +1100, NeilBrown wrote:
> On Wed, 20 Nov 2024, Chuck Lever wrote:
> > On Tue, Nov 19, 2024 at 11:41:30AM +1100, NeilBrown wrote:
> > > Each client now reports the number of slots allocated in each session.
> > > 
> > > Signed-off-by: NeilBrown <neilb@suse.de>
> > > ---
> > >  fs/nfsd/nfs4state.c | 8 ++++++++
> > >  1 file changed, 8 insertions(+)
> > > 
> > > diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> > > index 3889ba1c653f..31ff9f92a895 100644
> > > --- a/fs/nfsd/nfs4state.c
> > > +++ b/fs/nfsd/nfs4state.c
> > > @@ -2642,6 +2642,7 @@ static const char *cb_state2str(int state)
> > >  static int client_info_show(struct seq_file *m, void *v)
> > >  {
> > >  	struct inode *inode = file_inode(m->file);
> > > +	struct nfsd4_session *ses;
> > >  	struct nfs4_client *clp;
> > >  	u64 clid;
> > >  
> > > @@ -2678,6 +2679,13 @@ static int client_info_show(struct seq_file *m, void *v)
> > >  	seq_printf(m, "callback address: \"%pISpc\"\n", &clp->cl_cb_conn.cb_addr);
> > >  	seq_printf(m, "admin-revoked states: %d\n",
> > >  		   atomic_read(&clp->cl_admin_revoked));
> > > +	seq_printf(m, "session slots:");
> > > +	spin_lock(&clp->cl_lock);
> > > +	list_for_each_entry(ses, &clp->cl_sessions, se_perclnt)
> > > +		seq_printf(m, " %u", ses->se_fchannel.maxreqs);
> > > +	spin_unlock(&clp->cl_lock);
> > > +	seq_puts(m, "\n");
> > > +
> > 
> > Also, I wonder if information about the backchannel session can be
> > surfaced in this way?
> > 
> 
> > Probably makes sense.  Maybe we should invent a syntax for reporting
> arbitrary info about each session.
> 
>    session %d slots: %d
>    session %d cb-slots: %d
>    ...
> 
> ???

If each client has a directory, then it should have a subdirectory
called "sessions". Each subdirectory of "sessions" should be one
session, named by its hex session ID (as it is presented by
Wireshark). Each session directory could have a file for the forward
channel, one for the backchannel, and maybe one for generic
information like when the session was created and how many
connections it has.

We don't need all of that in this patch set, but whatever is
introduced here should be extensible to allow us to add more over
time.

-- 
Chuck Lever


* Re: [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info
  2024-11-20  0:25       ` Chuck Lever
@ 2024-11-21 21:03         ` NeilBrown
  2024-11-21 21:24           ` Chuck Lever III
  0 siblings, 1 reply; 23+ messages in thread
From: NeilBrown @ 2024-11-21 21:03 UTC (permalink / raw)
  To: Chuck Lever
  Cc: Jeff Layton, linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Wed, 20 Nov 2024, Chuck Lever wrote:
> On Wed, Nov 20, 2024 at 09:24:52AM +1100, NeilBrown wrote:
> > On Wed, 20 Nov 2024, Chuck Lever wrote:
> > > On Tue, Nov 19, 2024 at 11:41:30AM +1100, NeilBrown wrote:
> > > > Each client now reports the number of slots allocated in each session.
> > > > 
> > > > Signed-off-by: NeilBrown <neilb@suse.de>
> > > > ---
> > > >  fs/nfsd/nfs4state.c | 8 ++++++++
> > > >  1 file changed, 8 insertions(+)
> > > > 
> > > > diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> > > > index 3889ba1c653f..31ff9f92a895 100644
> > > > --- a/fs/nfsd/nfs4state.c
> > > > +++ b/fs/nfsd/nfs4state.c
> > > > @@ -2642,6 +2642,7 @@ static const char *cb_state2str(int state)
> > > >  static int client_info_show(struct seq_file *m, void *v)
> > > >  {
> > > >  	struct inode *inode = file_inode(m->file);
> > > > +	struct nfsd4_session *ses;
> > > >  	struct nfs4_client *clp;
> > > >  	u64 clid;
> > > >  
> > > > @@ -2678,6 +2679,13 @@ static int client_info_show(struct seq_file *m, void *v)
> > > >  	seq_printf(m, "callback address: \"%pISpc\"\n", &clp->cl_cb_conn.cb_addr);
> > > >  	seq_printf(m, "admin-revoked states: %d\n",
> > > >  		   atomic_read(&clp->cl_admin_revoked));
> > > > +	seq_printf(m, "session slots:");
> > > > +	spin_lock(&clp->cl_lock);
> > > > +	list_for_each_entry(ses, &clp->cl_sessions, se_perclnt)
> > > > +		seq_printf(m, " %u", ses->se_fchannel.maxreqs);
> > > > +	spin_unlock(&clp->cl_lock);
> > > > +	seq_puts(m, "\n");
> > > > +
> > > 
> > > Also, I wonder if information about the backchannel session can be
> > > surfaced in this way?
> > > 
> > 
> > Probably makes sense.  Maybe we should invent a syntax for reporting
> > arbitrary info about each session.
> > 
> >    session %d slots: %d
> >    session %d cb-slots: %d
> >    ...
> > 
> > ???
> 
> If each client has a directory, then it should have a subdirectory
> called "sessions". Each subdirectory of "sessions" should be one
> session, named by its hex session ID (as it is presented by
> Wireshark). Each session directory could have a file for the forward
> channel, one for the backchannel, and maybe one for generic
> information like when the session was created and how many
> connections it has.
> 
> We don't need all of that in this patch set, but whatever is
> introduced here should be extensible to allow us to add more over
> time.

I cannot say I'm excited about the proliferation of tiny files.  Your
suggestion isn't quite as bad as sysfs which claims to want one file per
value, but I think the sysfs approach provided more pain than gain and
you seem to be heading that way.  As evidence I present the rise of
netlink.  Netlink's main advantage is that it allows you to access a
collection of data in a single syscall (or maybe pair of syscalls).  If
we had a standard format for doing that with open/read/close, the
filesystem would be a much nicer interface.  But the sysfs rules prevent
that, so people who care avoid it.

We don't need to impose those same rules on nfsd-fs.

Having separate dirs for the clients makes some sense as the clients are
quite independent.  Sessions aren't - they are just part of the client. 
The *only* way session information is different from other client
information is that there is more structure - an array of sessions each
with detail.  I don't think that justifies a new directory.  It does
justify a carefully designed (or chosen) format for representing
structured data.

Thanks,
NeilBrown

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info
  2024-11-21 21:03         ` NeilBrown
@ 2024-11-21 21:24           ` Chuck Lever III
  0 siblings, 0 replies; 23+ messages in thread
From: Chuck Lever III @ 2024-11-21 21:24 UTC (permalink / raw)
  To: Neil Brown
  Cc: Jeff Layton, Linux NFS Mailing List, Olga Kornievskaia, Dai Ngo,
	Tom Talpey



> On Nov 21, 2024, at 4:03 PM, NeilBrown <neilb@suse.de> wrote:
> 
> On Wed, 20 Nov 2024, Chuck Lever wrote:
>> On Wed, Nov 20, 2024 at 09:24:52AM +1100, NeilBrown wrote:
>>> On Wed, 20 Nov 2024, Chuck Lever wrote:
>>>> On Tue, Nov 19, 2024 at 11:41:30AM +1100, NeilBrown wrote:
>>>>> Each client now reports the number of slots allocated in each session.
>>>>> 
>>>>> Signed-off-by: NeilBrown <neilb@suse.de>
>>>>> ---
>>>>> fs/nfsd/nfs4state.c | 8 ++++++++
>>>>> 1 file changed, 8 insertions(+)
>>>>> 
>>>>> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
>>>>> index 3889ba1c653f..31ff9f92a895 100644
>>>>> --- a/fs/nfsd/nfs4state.c
>>>>> +++ b/fs/nfsd/nfs4state.c
>>>>> @@ -2642,6 +2642,7 @@ static const char *cb_state2str(int state)
>>>>> static int client_info_show(struct seq_file *m, void *v)
>>>>> {
>>>>> struct inode *inode = file_inode(m->file);
>>>>> + struct nfsd4_session *ses;
>>>>> struct nfs4_client *clp;
>>>>> u64 clid;
>>>>> 
>>>>> @@ -2678,6 +2679,13 @@ static int client_info_show(struct seq_file *m, void *v)
>>>>> seq_printf(m, "callback address: \"%pISpc\"\n", &clp->cl_cb_conn.cb_addr);
>>>>> seq_printf(m, "admin-revoked states: %d\n",
>>>>>    atomic_read(&clp->cl_admin_revoked));
>>>>> + seq_printf(m, "session slots:");
>>>>> + spin_lock(&clp->cl_lock);
>>>>> + list_for_each_entry(ses, &clp->cl_sessions, se_perclnt)
>>>>> + seq_printf(m, " %u", ses->se_fchannel.maxreqs);
>>>>> + spin_unlock(&clp->cl_lock);
>>>>> + seq_puts(m, "\n");
>>>>> +
>>>> 
>>>> Also, I wonder if information about the backchannel session can be
>>>> surfaced in this way?
>>>> 
>>> 
>>> Probably makes sense.  Maybe we should invent a syntax for reporting
>>> arbitrary info about each session.
>>> 
>>>   session %d slots: %d
>>>   session %d cb-slots: %d
>>>   ...
>>> 
>>> ???
>> 
>> If each client has a directory, then it should have a subdirectory
>> called "sessions". Each subdirectory of "sessions" should be one
>> session, named by its hex session ID (as it is presented by
>> Wireshark). Each session directory could have a file for the forward
>> channel, one for the backchannel, and maybe one for generic
>> information like when the session was created and how many
>> connections it has.
>> 
>> We don't need all of that in this patch set, but whatever is
>> introduced here should be extensible to allow us to add more over
>> time.
> 
> I cannot say I'm excited about the proliferation of tiny files.  Your
> suggestion isn't quite as bad as sysfs which claims to want one file per
> value, but I think the sysfs approach provided more pain than gain and
> you seem to be heading that way.  As evidence I present the rise of
> netlink.  Netlink's main advantage is that it allows you to access a
> collection of data in a single syscall (or maybe pair of syscalls).  If
> we had a standard format for doing that with open/read/close, the
> filesystem would be a much nicer interface.  But the sysfs rules prevent
> that, so people who care avoid it.

I don't see this set of information as being in a
performance path. Needing multiple open/read/close
iterations doesn't seem like an impediment to me.

The only possible issue is that user space might
want a snapshot of certain related values, and
having to get the values from multiple files means
there's no guarantee that the values are consistent
with each other.


> We don't need to impose those same rules on nfsd-fs.
> 
> Having separate dirs for the clients makes some sense as the clients are
> quite independent.  Sessions aren't - they are just part of the client. 
> The *only* way session information is different from other client
> information is that there is more structure - an array of sessions each
> with detail.  I don't think that justifies a new directory.

Hrm. IMHO a directory is exactly suited to this kind of
information hierarchy. I can't say that I understand your
view; perhaps you feel this way because the client
implementations we are familiar with use only a single
session. For that, of course, a directory is overkill.


> It does
> justify a carefully designed (or chosen) format for representing
> structured data.

That usually means JSON or XML, which also have their haters.

However, I don't feel strongly about this. You asked me
for some thoughts, and here they are, at random.

My bottom line is reasonable extensibility -- the ability to
provide more information in these files in the future without
perturbing current consumers. IME that's been nearly impossible
with designs that have one file full of fields that need to be
parsed.

Should we expose session information via the new NFSD netlink
protocol instead? Or a sessions/ directory with one formatted
file per session? I'm open to discussion.


--
Chuck Lever




* [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info
  2024-12-06  0:43 [PATCH 0/6 v3] nfsd: allocate/free session-based DRC slots on demand NeilBrown
@ 2024-12-06  0:43 ` NeilBrown
  0 siblings, 0 replies; 23+ messages in thread
From: NeilBrown @ 2024-12-06  0:43 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

Each client now reports the number of slots allocated in each session.

Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/nfsd/nfs4state.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 808cb0d897d5..67dfc699e411 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -2643,6 +2643,7 @@ static const char *cb_state2str(int state)
 static int client_info_show(struct seq_file *m, void *v)
 {
 	struct inode *inode = file_inode(m->file);
+	struct nfsd4_session *ses;
 	struct nfs4_client *clp;
 	u64 clid;
 
@@ -2679,6 +2680,13 @@ static int client_info_show(struct seq_file *m, void *v)
 	seq_printf(m, "callback address: \"%pISpc\"\n", &clp->cl_cb_conn.cb_addr);
 	seq_printf(m, "admin-revoked states: %d\n",
 		   atomic_read(&clp->cl_admin_revoked));
+	spin_lock(&clp->cl_lock);
+	seq_printf(m, "session slots:");
+	list_for_each_entry(ses, &clp->cl_sessions, se_perclnt)
+		seq_printf(m, " %u", ses->se_fchannel.maxreqs);
+	spin_unlock(&clp->cl_lock);
+	seq_puts(m, "\n");
+
 	drop_client(clp);
 
 	return 0;
-- 
2.47.0



* [PATCH 0/6 v4] nfsd: allocate/free session-based DRC slots on demand
@ 2024-12-08 22:43 NeilBrown
  2024-12-08 22:43 ` [PATCH 1/6] nfsd: use an xarray to store v4.1 session slots NeilBrown
                   ` (6 more replies)
  0 siblings, 7 replies; 23+ messages in thread
From: NeilBrown @ 2024-12-08 22:43 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

Changes from v3 include:
 - use GFP_NOWAIT more consistently - don't use GFP_ATOMIC
 - document reduce_session_slots()
 - change sl_generation to u16.  As we reduce the number of slots one at
   a time and update se_slot_gen each time, we could cycle a u8 generation
   counter quickly.

Thanks,
NeilBrown

 [PATCH 1/6] nfsd: use an xarray to store v4.1 session slots
 [PATCH 2/6] nfsd: remove artificial limits on the session-based DRC
 [PATCH 3/6] nfsd: add session slot count to
 [PATCH 4/6] nfsd: allocate new session-based DRC slots on demand.
 [PATCH 5/6] nfsd: add support for freeing unused session-DRC slots
 [PATCH 6/6] nfsd: add shrinker to reduce number of slots allocated


* [PATCH 1/6] nfsd: use an xarray to store v4.1 session slots
  2024-12-08 22:43 [PATCH 0/6 v4] nfsd: allocate/free session-based DRC slots on demand NeilBrown
@ 2024-12-08 22:43 ` NeilBrown
  2024-12-09  0:53   ` cel
  2024-12-08 22:43 ` [PATCH 2/6] nfsd: remove artificial limits on the session-based DRC NeilBrown
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 23+ messages in thread
From: NeilBrown @ 2024-12-08 22:43 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

Using an xarray to store session slots makes it easier to change the
number of active slots based on demand, and removes an unnecessary
limit.

To achieve good throughput with a high-latency server it can be helpful
to have hundreds of concurrent writes, which means hundreds of slots.
So increase the limit to 2048 (twice what the Linux client will
currently use).  This limit is only a sanity check, not a hard limit.

Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/nfsd/nfs4state.c | 28 ++++++++++++++++++----------
 fs/nfsd/state.h     |  9 ++++++---
 2 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 741b9449f727..aa4f1293d4d3 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1915,8 +1915,11 @@ free_session_slots(struct nfsd4_session *ses)
 	int i;
 
 	for (i = 0; i < ses->se_fchannel.maxreqs; i++) {
-		free_svc_cred(&ses->se_slots[i]->sl_cred);
-		kfree(ses->se_slots[i]);
+		struct nfsd4_slot *slot = xa_load(&ses->se_slots, i);
+
+		xa_erase(&ses->se_slots, i);
+		free_svc_cred(&slot->sl_cred);
+		kfree(slot);
 	}
 }
 
@@ -1996,17 +1999,20 @@ static struct nfsd4_session *alloc_session(struct nfsd4_channel_attrs *fattrs,
 	struct nfsd4_session *new;
 	int i;
 
-	BUILD_BUG_ON(struct_size(new, se_slots, NFSD_MAX_SLOTS_PER_SESSION)
-		     > PAGE_SIZE);
-
-	new = kzalloc(struct_size(new, se_slots, numslots), GFP_KERNEL);
+	new = kzalloc(sizeof(*new), GFP_KERNEL);
 	if (!new)
 		return NULL;
+	xa_init(&new->se_slots);
 	/* allocate each struct nfsd4_slot and data cache in one piece */
 	for (i = 0; i < numslots; i++) {
-		new->se_slots[i] = kzalloc(slotsize, GFP_KERNEL);
-		if (!new->se_slots[i])
+		struct nfsd4_slot *slot;
+		slot = kzalloc(slotsize, GFP_KERNEL);
+		if (!slot)
 			goto out_free;
+		if (xa_is_err(xa_store(&new->se_slots, i, slot, GFP_KERNEL))) {
+			kfree(slot);
+			goto out_free;
+		}
 	}
 
 	memcpy(&new->se_fchannel, fattrs, sizeof(struct nfsd4_channel_attrs));
@@ -2017,7 +2023,8 @@ static struct nfsd4_session *alloc_session(struct nfsd4_channel_attrs *fattrs,
 	return new;
 out_free:
 	while (i--)
-		kfree(new->se_slots[i]);
+		kfree(xa_load(&new->se_slots, i));
+	xa_destroy(&new->se_slots);
 	kfree(new);
 	return NULL;
 }
@@ -2124,6 +2131,7 @@ static void nfsd4_del_conns(struct nfsd4_session *s)
 static void __free_session(struct nfsd4_session *ses)
 {
 	free_session_slots(ses);
+	xa_destroy(&ses->se_slots);
 	kfree(ses);
 }
 
@@ -4278,7 +4286,7 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 	if (seq->slotid >= session->se_fchannel.maxreqs)
 		goto out_put_session;
 
-	slot = session->se_slots[seq->slotid];
+	slot = xa_load(&session->se_slots, seq->slotid);
 	dprintk("%s: slotid %d\n", __func__, seq->slotid);
 
 	/* We do not negotiate the number of slots yet, so set the
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index e16bb3717fb9..aad547d3ad8b 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -227,8 +227,11 @@ static inline struct nfs4_delegation *delegstateid(struct nfs4_stid *s)
 	return container_of(s, struct nfs4_delegation, dl_stid);
 }
 
-/* Maximum number of slots per session. 160 is useful for long haul TCP */
-#define NFSD_MAX_SLOTS_PER_SESSION     160
+/* Maximum number of slots per session.  This is for sanity-check only.
+ * It could be increased if we had a mechanism to shutdown misbehaving clients.
+ * A large number can be needed to get good throughput on high-latency servers.
+ */
+#define NFSD_MAX_SLOTS_PER_SESSION	2048
 /* Maximum  session per slot cache size */
 #define NFSD_SLOT_CACHE_SIZE		2048
 /* Maximum number of NFSD_SLOT_CACHE_SIZE slots per session */
@@ -327,7 +330,7 @@ struct nfsd4_session {
 	struct nfsd4_cb_sec	se_cb_sec;
 	struct list_head	se_conns;
 	u32			se_cb_seq_nr[NFSD_BC_SLOT_TABLE_SIZE];
-	struct nfsd4_slot	*se_slots[];	/* forward channel slots */
+	struct xarray		se_slots;	/* forward channel slots */
 };
 
 /* formatted contents of nfs4_sessionid */
-- 
2.47.0



* [PATCH 2/6] nfsd: remove artificial limits on the session-based DRC
  2024-12-08 22:43 [PATCH 0/6 v4] nfsd: allocate/free session-based DRC slots on demand NeilBrown
  2024-12-08 22:43 ` [PATCH 1/6] nfsd: use an xarray to store v4.1 session slots NeilBrown
@ 2024-12-08 22:43 ` NeilBrown
  2024-12-08 22:43 ` [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info NeilBrown
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 23+ messages in thread
From: NeilBrown @ 2024-12-08 22:43 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

Rather than guessing how much space it might be safe to use for the DRC,
simply try allocating slots and be prepared to accept failure.

The first slot for each session is allocated with GFP_KERNEL which is
unlikely to fail.  Subsequent slots are allocated with the addition of
__GFP_NORETRY which is expected to fail if there isn't much free memory.

This is probably too aggressive but clears the way for adding a
shrinker interface to free extra slots when memory is tight.

Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/nfsd/nfs4state.c | 94 ++++++++-------------------------------------
 fs/nfsd/nfsd.h      |  3 --
 fs/nfsd/nfssvc.c    | 32 ---------------
 3 files changed, 16 insertions(+), 113 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index aa4f1293d4d3..808cb0d897d5 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1938,65 +1938,13 @@ static inline u32 slot_bytes(struct nfsd4_channel_attrs *ca)
 	return size + sizeof(struct nfsd4_slot);
 }
 
-/*
- * XXX: If we run out of reserved DRC memory we could (up to a point)
- * re-negotiate active sessions and reduce their slot usage to make
- * room for new connections. For now we just fail the create session.
- */
-static u32 nfsd4_get_drc_mem(struct nfsd4_channel_attrs *ca, struct nfsd_net *nn)
-{
-	u32 slotsize = slot_bytes(ca);
-	u32 num = ca->maxreqs;
-	unsigned long avail, total_avail;
-	unsigned int scale_factor;
-
-	spin_lock(&nfsd_drc_lock);
-	if (nfsd_drc_max_mem > nfsd_drc_mem_used)
-		total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;
-	else
-		/* We have handed out more space than we chose in
-		 * set_max_drc() to allow.  That isn't really a
-		 * problem as long as that doesn't make us think we
-		 * have lots more due to integer overflow.
-		 */
-		total_avail = 0;
-	avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION, total_avail);
-	/*
-	 * Never use more than a fraction of the remaining memory,
-	 * unless it's the only way to give this client a slot.
-	 * The chosen fraction is either 1/8 or 1/number of threads,
-	 * whichever is smaller.  This ensures there are adequate
-	 * slots to support multiple clients per thread.
-	 * Give the client one slot even if that would require
-	 * over-allocation--it is better than failure.
-	 */
-	scale_factor = max_t(unsigned int, 8, nn->nfsd_serv->sv_nrthreads);
-
-	avail = clamp_t(unsigned long, avail, slotsize,
-			total_avail/scale_factor);
-	num = min_t(int, num, avail / slotsize);
-	num = max_t(int, num, 1);
-	nfsd_drc_mem_used += num * slotsize;
-	spin_unlock(&nfsd_drc_lock);
-
-	return num;
-}
-
-static void nfsd4_put_drc_mem(struct nfsd4_channel_attrs *ca)
-{
-	int slotsize = slot_bytes(ca);
-
-	spin_lock(&nfsd_drc_lock);
-	nfsd_drc_mem_used -= slotsize * ca->maxreqs;
-	spin_unlock(&nfsd_drc_lock);
-}
-
 static struct nfsd4_session *alloc_session(struct nfsd4_channel_attrs *fattrs,
 					   struct nfsd4_channel_attrs *battrs)
 {
 	int numslots = fattrs->maxreqs;
 	int slotsize = slot_bytes(fattrs);
 	struct nfsd4_session *new;
+	struct nfsd4_slot *slot;
 	int i;
 
 	new = kzalloc(sizeof(*new), GFP_KERNEL);
@@ -2004,17 +1952,21 @@ static struct nfsd4_session *alloc_session(struct nfsd4_channel_attrs *fattrs,
 		return NULL;
 	xa_init(&new->se_slots);
 	/* allocate each struct nfsd4_slot and data cache in one piece */
-	for (i = 0; i < numslots; i++) {
-		struct nfsd4_slot *slot;
-		slot = kzalloc(slotsize, GFP_KERNEL);
+	slot = kzalloc(slotsize, GFP_KERNEL);
+	if (!slot || xa_is_err(xa_store(&new->se_slots, 0, slot, GFP_KERNEL)))
+		goto out_free;
+
+	for (i = 1; i < numslots; i++) {
+		const gfp_t gfp = GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
+		slot = kzalloc(slotsize, gfp);
 		if (!slot)
-			goto out_free;
-		if (xa_is_err(xa_store(&new->se_slots, i, slot, GFP_KERNEL))) {
+			break;
+		if (xa_is_err(xa_store(&new->se_slots, i, slot, gfp))) {
 			kfree(slot);
-			goto out_free;
+			break;
 		}
 	}
-
+	fattrs->maxreqs = i;
 	memcpy(&new->se_fchannel, fattrs, sizeof(struct nfsd4_channel_attrs));
 	new->se_cb_slot_avail = ~0U;
 	new->se_cb_highest_slot = min(battrs->maxreqs - 1,
@@ -2022,8 +1974,7 @@ static struct nfsd4_session *alloc_session(struct nfsd4_channel_attrs *fattrs,
 	spin_lock_init(&new->se_lock);
 	return new;
 out_free:
-	while (i--)
-		kfree(xa_load(&new->se_slots, i));
+	kfree(slot);
 	xa_destroy(&new->se_slots);
 	kfree(new);
 	return NULL;
@@ -2138,7 +2089,6 @@ static void __free_session(struct nfsd4_session *ses)
 static void free_session(struct nfsd4_session *ses)
 {
 	nfsd4_del_conns(ses);
-	nfsd4_put_drc_mem(&ses->se_fchannel);
 	__free_session(ses);
 }
 
@@ -3786,17 +3736,6 @@ static __be32 check_forechannel_attrs(struct nfsd4_channel_attrs *ca, struct nfs
 	ca->maxresp_cached = min_t(u32, ca->maxresp_cached,
 			NFSD_SLOT_CACHE_SIZE + NFSD_MIN_HDR_SEQ_SZ);
 	ca->maxreqs = min_t(u32, ca->maxreqs, NFSD_MAX_SLOTS_PER_SESSION);
-	/*
-	 * Note decreasing slot size below client's request may make it
-	 * difficult for client to function correctly, whereas
-	 * decreasing the number of slots will (just?) affect
-	 * performance.  When short on memory we therefore prefer to
-	 * decrease number of slots instead of their size.  Clients that
-	 * request larger slots than they need will get poor results:
-	 * Note that we always allow at least one slot, because our
-	 * accounting is soft and provides no guarantees either way.
-	 */
-	ca->maxreqs = nfsd4_get_drc_mem(ca, nn);
 
 	return nfs_ok;
 }
@@ -3874,11 +3813,11 @@ nfsd4_create_session(struct svc_rqst *rqstp,
 		return status;
 	status = check_backchannel_attrs(&cr_ses->back_channel);
 	if (status)
-		goto out_release_drc_mem;
+		goto out_err;
 	status = nfserr_jukebox;
 	new = alloc_session(&cr_ses->fore_channel, &cr_ses->back_channel);
 	if (!new)
-		goto out_release_drc_mem;
+		goto out_err;
 	conn = alloc_conn_from_crses(rqstp, cr_ses);
 	if (!conn)
 		goto out_free_session;
@@ -3987,8 +3926,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
 	free_conn(conn);
 out_free_session:
 	__free_session(new);
-out_release_drc_mem:
-	nfsd4_put_drc_mem(&cr_ses->fore_channel);
+out_err:
 	return status;
 }
 
diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
index 4b56ba1e8e48..3eb21e63b921 100644
--- a/fs/nfsd/nfsd.h
+++ b/fs/nfsd/nfsd.h
@@ -88,9 +88,6 @@ struct nfsd_genl_rqstp {
 extern struct svc_program	nfsd_programs[];
 extern const struct svc_version	nfsd_version2, nfsd_version3, nfsd_version4;
 extern struct mutex		nfsd_mutex;
-extern spinlock_t		nfsd_drc_lock;
-extern unsigned long		nfsd_drc_max_mem;
-extern unsigned long		nfsd_drc_mem_used;
 extern atomic_t			nfsd_th_cnt;		/* number of available threads */
 
 extern const struct seq_operations nfs_exports_op;
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 49e2f32102ab..3dbaefc96608 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -70,16 +70,6 @@ static __be32			nfsd_init_request(struct svc_rqst *,
  */
 DEFINE_MUTEX(nfsd_mutex);
 
-/*
- * nfsd_drc_lock protects nfsd_drc_max_pages and nfsd_drc_pages_used.
- * nfsd_drc_max_pages limits the total amount of memory available for
- * version 4.1 DRC caches.
- * nfsd_drc_pages_used tracks the current version 4.1 DRC memory usage.
- */
-DEFINE_SPINLOCK(nfsd_drc_lock);
-unsigned long	nfsd_drc_max_mem;
-unsigned long	nfsd_drc_mem_used;
-
 #if IS_ENABLED(CONFIG_NFS_LOCALIO)
 static const struct svc_version *localio_versions[] = {
 	[1] = &localio_version1,
@@ -575,27 +565,6 @@ void nfsd_reset_versions(struct nfsd_net *nn)
 		}
 }
 
-/*
- * Each session guarantees a negotiated per slot memory cache for replies
- * which in turn consumes memory beyond the v2/v3/v4.0 server. A dedicated
- * NFSv4.1 server might want to use more memory for a DRC than a machine
- * with mutiple services.
- *
- * Impose a hard limit on the number of pages for the DRC which varies
- * according to the machines free pages. This is of course only a default.
- *
- * For now this is a #defined shift which could be under admin control
- * in the future.
- */
-static void set_max_drc(void)
-{
-	#define NFSD_DRC_SIZE_SHIFT	7
-	nfsd_drc_max_mem = (nr_free_buffer_pages()
-					>> NFSD_DRC_SIZE_SHIFT) * PAGE_SIZE;
-	nfsd_drc_mem_used = 0;
-	dprintk("%s nfsd_drc_max_mem %lu \n", __func__, nfsd_drc_max_mem);
-}
-
 static int nfsd_get_default_max_blksize(void)
 {
 	struct sysinfo i;
@@ -678,7 +647,6 @@ int nfsd_create_serv(struct net *net)
 	nn->nfsd_serv = serv;
 	spin_unlock(&nfsd_notifier_lock);
 
-	set_max_drc();
 	/* check if the notifier is already set */
 	if (atomic_inc_return(&nfsd_notifier_refcount) == 1) {
 		register_inetaddr_notifier(&nfsd_inetaddr_notifier);
-- 
2.47.0



* [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info
  2024-12-08 22:43 [PATCH 0/6 v4] nfsd: allocate/free session-based DRC slots on demand NeilBrown
  2024-12-08 22:43 ` [PATCH 1/6] nfsd: use an xarray to store v4.1 session slots NeilBrown
  2024-12-08 22:43 ` [PATCH 2/6] nfsd: remove artificial limits on the session-based DRC NeilBrown
@ 2024-12-08 22:43 ` NeilBrown
  2024-12-08 22:43 ` [PATCH 4/6] nfsd: allocate new session-based DRC slots on demand NeilBrown
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 23+ messages in thread
From: NeilBrown @ 2024-12-08 22:43 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

Each client now reports the number of slots allocated in each session.

Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/nfsd/nfs4state.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 808cb0d897d5..67dfc699e411 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -2643,6 +2643,7 @@ static const char *cb_state2str(int state)
 static int client_info_show(struct seq_file *m, void *v)
 {
 	struct inode *inode = file_inode(m->file);
+	struct nfsd4_session *ses;
 	struct nfs4_client *clp;
 	u64 clid;
 
@@ -2679,6 +2680,13 @@ static int client_info_show(struct seq_file *m, void *v)
 	seq_printf(m, "callback address: \"%pISpc\"\n", &clp->cl_cb_conn.cb_addr);
 	seq_printf(m, "admin-revoked states: %d\n",
 		   atomic_read(&clp->cl_admin_revoked));
+	spin_lock(&clp->cl_lock);
+	seq_printf(m, "session slots:");
+	list_for_each_entry(ses, &clp->cl_sessions, se_perclnt)
+		seq_printf(m, " %u", ses->se_fchannel.maxreqs);
+	spin_unlock(&clp->cl_lock);
+	seq_puts(m, "\n");
+
 	drop_client(clp);
 
 	return 0;
-- 
2.47.0



* [PATCH 4/6] nfsd: allocate new session-based DRC slots on demand.
  2024-12-08 22:43 [PATCH 0/6 v4] nfsd: allocate/free session-based DRC slots on demand NeilBrown
                   ` (2 preceding siblings ...)
  2024-12-08 22:43 ` [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info NeilBrown
@ 2024-12-08 22:43 ` NeilBrown
  2024-12-08 22:43 ` [PATCH 5/6] nfsd: add support for freeing unused session-DRC slots NeilBrown
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 23+ messages in thread
From: NeilBrown @ 2024-12-08 22:43 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

If a client ever uses the highest available slot for a given session,
attempt to allocate more slots so there is room for the client to use
them if wanted.  GFP_NOWAIT is used so that if there is not plenty of
free memory, failure is expected - which is what we want.  It also
allows the allocation while holding a spinlock.

Each time, we increase the number of slots by 20% (rounded up).  This
allows fairly quick growth while avoiding excessive over-shoot.

We would expect to stabilise with around 10% more slots available than
the client actually uses.
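The 20% growth step above can be illustrated with the same arithmetic the patch uses. next_target() is a hypothetical helper, not code from the patch; the 2048 cap is the NFSD_MAX_SLOTS_PER_SESSION value mentioned for patch 1/6 of this series.

```c
#include <assert.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define MAX_SLOTS		2048	/* NFSD_MAX_SLOTS_PER_SESSION after patch 1/6 */

/* Next slot target once the client touches the current highest slot:
 * grow by 20% of the current count, rounded up, capped at the limit. */
static int next_target(int cur)
{
	int grown = cur + DIV_ROUND_UP(cur, 5);

	return grown > MAX_SLOTS ? MAX_SLOTS : grown;
}
```

Starting from one slot this gives 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, ... so a busy client ramps up quickly without large over-shoot.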

Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/nfsd/nfs4state.c | 37 ++++++++++++++++++++++++++++++++-----
 1 file changed, 32 insertions(+), 5 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 67dfc699e411..fd9473d487f3 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -4235,11 +4235,6 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 	slot = xa_load(&session->se_slots, seq->slotid);
 	dprintk("%s: slotid %d\n", __func__, seq->slotid);
 
-	/* We do not negotiate the number of slots yet, so set the
-	 * maxslots to the session maxreqs which is used to encode
-	 * sr_highest_slotid and the sr_target_slot id to maxslots */
-	seq->maxslots = session->se_fchannel.maxreqs;
-
 	trace_nfsd_slot_seqid_sequence(clp, seq, slot);
 	status = check_slot_seqid(seq->seqid, slot->sl_seqid,
 					slot->sl_flags & NFSD4_SLOT_INUSE);
@@ -4289,6 +4284,38 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 	cstate->session = session;
 	cstate->clp = clp;
 
+	/*
+	 * If the client ever uses the highest available slot,
+	 * gently try to allocate another 20%.  This allows
+	 * fairly quick growth without grossly over-shooting what
+	 * the client might use.
+	 */
+	if (seq->slotid == session->se_fchannel.maxreqs - 1 &&
+	    session->se_fchannel.maxreqs < NFSD_MAX_SLOTS_PER_SESSION) {
+		int s = session->se_fchannel.maxreqs;
+		int cnt = DIV_ROUND_UP(s, 5);
+
+		do {
+			/*
+			 * GFP_NOWAIT both allows allocation under a
+			 * spinlock, and only succeeds if there is
+			 * plenty of memory.
+			 */
+			slot = kzalloc(slot_bytes(&session->se_fchannel),
+				       GFP_NOWAIT);
+			if (slot &&
+			    !xa_is_err(xa_store(&session->se_slots, s, slot,
+						GFP_NOWAIT))) {
+				s += 1;
+				session->se_fchannel.maxreqs = s;
+			} else {
+				kfree(slot);
+				slot = NULL;
+			}
+		} while (slot && --cnt > 0);
+	}
+	seq->maxslots = session->se_fchannel.maxreqs;
+
 out:
 	switch (clp->cl_cb_state) {
 	case NFSD4_CB_DOWN:
-- 
2.47.0



* [PATCH 5/6] nfsd: add support for freeing unused session-DRC slots
  2024-12-08 22:43 [PATCH 0/6 v4] nfsd: allocate/free session-based DRC slots on demand NeilBrown
                   ` (3 preceding siblings ...)
  2024-12-08 22:43 ` [PATCH 4/6] nfsd: allocate new session-based DRC slots on demand NeilBrown
@ 2024-12-08 22:43 ` NeilBrown
  2024-12-08 22:43 ` [PATCH 6/6] nfsd: add shrinker to reduce number of slots allocated per session NeilBrown
  2024-12-09 14:49 ` [PATCH 0/6 v4] nfsd: allocate/free session-based DRC slots on demand Jeff Layton
  6 siblings, 0 replies; 23+ messages in thread
From: NeilBrown @ 2024-12-08 22:43 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

Reducing the number of slots in the session slot table requires
confirmation from the client.  This patch adds reduce_session_slots()
which starts the process of getting confirmation, but never calls it.
That will come in a later patch.

Before we can free a slot we need to confirm that the client won't try
to use it again.  This involves returning a lower cr_maxrequests in a
SEQUENCE reply and then seeing a ca_maxrequests on the same slot which
is not larger than the limit we are trying to impose.  So for each slot
we need to remember that we have sent a reduced cr_maxrequests.

To achieve this we introduce a concept of request "generations".  Each
time we decide to reduce cr_maxrequests we increment the generation
number, and record this when we return the lower cr_maxrequests to the
client.  When a slot with the current generation reports a low
ca_maxrequests, we commit to that level and free extra slots.

We use a 16-bit generation number (64 bits seems wasteful) and if it
wraps we iterate over all slots and reset their generation numbers to
avoid false matches.
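The generation check can be sketched as a small predicate. reduction_confirmed() and the trimmed-down struct session below are illustrative only, not the code in the patch:

```c
#include <assert.h>

struct session {
	unsigned int	maxreqs;	/* slots currently allocated */
	unsigned int	target_maxslots;/* what we asked the client to drop to */
	unsigned short	slot_gen;	/* bumped on every reduction request */
};

/* A slot confirms a reduction only when it was stamped with the current
 * generation (proving the client saw the lowered target in a SEQUENCE
 * reply on this slot) and the client's reported maxslots is within the
 * target.  Only then is it safe to free the extra slots. */
static int reduction_confirmed(const struct session *se,
			       unsigned short slot_gen,
			       unsigned int client_maxslots)
{
	return se->target_maxslots < se->maxreqs &&
	       slot_gen == se->slot_gen &&
	       client_maxslots <= se->target_maxslots;
}
```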

When we free a slot we store the seqid in the slot pointer so that it can
be restored when we reactivate the slot.  The RFC can be read as
suggesting that the slot number could restart from one after a slot is
retired and reactivated, but also suggests that retiring slots is not
required.  So when we reactivate a slot we accept either the next seqid
in sequence, or 1.
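Storing the seqid "in the slot pointer" relies on the xarray's value entries. A user-space sketch of the same tagging idea, with mk_value()/is_value()/to_value() mimicking the kernel's xa_mk_value() family, might look like:

```c
#include <assert.h>
#include <stdint.h>

/* Small integers are stored with the low bit set, which can never
 * collide with a properly aligned slot pointer, so one xarray cell can
 * hold either a live slot or the retired slot's last seqid. */
static void *mk_value(unsigned long v)
{
	return (void *)((v << 1) | 1);
}

static int is_value(const void *entry)
{
	return (uintptr_t)entry & 1;
}

static unsigned long to_value(const void *entry)
{
	return (uintptr_t)entry >> 1;
}
```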

When decoding sa_highest_slotid into maxslots we need to add 1 - this
matches how it is encoded for the reply.

se_dead is moved in struct nfsd4_session to remove a hole.

Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/nfsd/nfs4state.c | 94 ++++++++++++++++++++++++++++++++++++++++-----
 fs/nfsd/nfs4xdr.c   |  5 ++-
 fs/nfsd/state.h     |  6 ++-
 fs/nfsd/xdr4.h      |  2 -
 4 files changed, 92 insertions(+), 15 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index fd9473d487f3..a2d1f97b8a0e 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1910,17 +1910,69 @@ gen_sessionid(struct nfsd4_session *ses)
 #define NFSD_MIN_HDR_SEQ_SZ  (24 + 12 + 44)
 
 static void
-free_session_slots(struct nfsd4_session *ses)
+free_session_slots(struct nfsd4_session *ses, int from)
 {
 	int i;
 
-	for (i = 0; i < ses->se_fchannel.maxreqs; i++) {
+	if (from >= ses->se_fchannel.maxreqs)
+		return;
+
+	for (i = from; i < ses->se_fchannel.maxreqs; i++) {
 		struct nfsd4_slot *slot = xa_load(&ses->se_slots, i);
 
-		xa_erase(&ses->se_slots, i);
+		/*
+		 * Save the seqid in case we reactivate this slot.
+		 * This will never require a memory allocation so GFP
+		 * flag is irrelevant
+		 */
+		xa_store(&ses->se_slots, i, xa_mk_value(slot->sl_seqid), 0);
 		free_svc_cred(&slot->sl_cred);
 		kfree(slot);
 	}
+	ses->se_fchannel.maxreqs = from;
+	if (ses->se_target_maxslots > from)
+		ses->se_target_maxslots = from;
+}
+
+/**
+ * reduce_session_slots - reduce the target max-slots of a session if possible
+ * @ses:  The session to affect
+ * @dec:  how much to decrease the target by
+ *
+ * This interface can be used by a shrinker to reduce the target max-slots
+ * for a session so that some slots can eventually be freed.
+ * It uses spin_trylock() as it may be called in a context where another
+ * spinlock is held that has a dependency on client_lock.  As shrinkers are
+ * best-effort, skipping a session if client_lock is already held has no
+ * great cost.
+ *
+ * Return value:
+ *   The number of slots that the target was reduced by.
+ */
+static int __maybe_unused
+reduce_session_slots(struct nfsd4_session *ses, int dec)
+{
+	struct nfsd_net *nn = net_generic(ses->se_client->net,
+					  nfsd_net_id);
+	int ret = 0;
+
+	if (ses->se_target_maxslots <= 1)
+		return ret;
+	if (!spin_trylock(&nn->client_lock))
+		return ret;
+	ret = min(dec, ses->se_target_maxslots-1);
+	ses->se_target_maxslots -= ret;
+	ses->se_slot_gen += 1;
+	if (ses->se_slot_gen == 0) {
+		int i;
+		ses->se_slot_gen = 1;
+		for (i = 0; i < ses->se_fchannel.maxreqs; i++) {
+			struct nfsd4_slot *slot = xa_load(&ses->se_slots, i);
+			slot->sl_generation = 0;
+		}
+	}
+	spin_unlock(&nn->client_lock);
+	return ret;
 }
 
 /*
@@ -1968,6 +2020,7 @@ static struct nfsd4_session *alloc_session(struct nfsd4_channel_attrs *fattrs,
 	}
 	fattrs->maxreqs = i;
 	memcpy(&new->se_fchannel, fattrs, sizeof(struct nfsd4_channel_attrs));
+	new->se_target_maxslots = i;
 	new->se_cb_slot_avail = ~0U;
 	new->se_cb_highest_slot = min(battrs->maxreqs - 1,
 				      NFSD_BC_SLOT_TABLE_SIZE - 1);
@@ -2081,7 +2134,7 @@ static void nfsd4_del_conns(struct nfsd4_session *s)
 
 static void __free_session(struct nfsd4_session *ses)
 {
-	free_session_slots(ses);
+	free_session_slots(ses, 0);
 	xa_destroy(&ses->se_slots);
 	kfree(ses);
 }
@@ -2684,6 +2737,9 @@ static int client_info_show(struct seq_file *m, void *v)
 	seq_printf(m, "session slots:");
 	list_for_each_entry(ses, &clp->cl_sessions, se_perclnt)
 		seq_printf(m, " %u", ses->se_fchannel.maxreqs);
+	seq_printf(m, "\nsession target slots:");
+	list_for_each_entry(ses, &clp->cl_sessions, se_perclnt)
+		seq_printf(m, " %u", ses->se_target_maxslots);
 	spin_unlock(&clp->cl_lock);
 	seq_puts(m, "\n");
 
@@ -3674,10 +3730,10 @@ nfsd4_exchange_id_release(union nfsd4_op_u *u)
 	kfree(exid->server_impl_name);
 }
 
-static __be32 check_slot_seqid(u32 seqid, u32 slot_seqid, bool slot_inuse)
+static __be32 check_slot_seqid(u32 seqid, u32 slot_seqid, u8 flags)
 {
 	/* The slot is in use, and no response has been sent. */
-	if (slot_inuse) {
+	if (flags & NFSD4_SLOT_INUSE) {
 		if (seqid == slot_seqid)
 			return nfserr_jukebox;
 		else
@@ -3686,6 +3742,8 @@ static __be32 check_slot_seqid(u32 seqid, u32 slot_seqid, bool slot_inuse)
 	/* Note unsigned 32-bit arithmetic handles wraparound: */
 	if (likely(seqid == slot_seqid + 1))
 		return nfs_ok;
+	if ((flags & NFSD4_SLOT_REUSED) && seqid == 1)
+		return nfs_ok;
 	if (seqid == slot_seqid)
 		return nfserr_replay_cache;
 	return nfserr_seq_misordered;
@@ -4236,8 +4294,7 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 	dprintk("%s: slotid %d\n", __func__, seq->slotid);
 
 	trace_nfsd_slot_seqid_sequence(clp, seq, slot);
-	status = check_slot_seqid(seq->seqid, slot->sl_seqid,
-					slot->sl_flags & NFSD4_SLOT_INUSE);
+	status = check_slot_seqid(seq->seqid, slot->sl_seqid, slot->sl_flags);
 	if (status == nfserr_replay_cache) {
 		status = nfserr_seq_misordered;
 		if (!(slot->sl_flags & NFSD4_SLOT_INITIALIZED))
@@ -4262,6 +4319,12 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 	if (status)
 		goto out_put_session;
 
+	if (session->se_target_maxslots < session->se_fchannel.maxreqs &&
+	    slot->sl_generation == session->se_slot_gen &&
+	    seq->maxslots <= session->se_target_maxslots)
+		/* Client acknowledged our reduced maxreqs */
+		free_session_slots(session, session->se_target_maxslots);
+
 	buflen = (seq->cachethis) ?
 			session->se_fchannel.maxresp_cached :
 			session->se_fchannel.maxresp_sz;
@@ -4272,9 +4335,11 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 	svc_reserve(rqstp, buflen);
 
 	status = nfs_ok;
-	/* Success! bump slot seqid */
+	/* Success! accept new slot seqid */
 	slot->sl_seqid = seq->seqid;
+	slot->sl_flags &= ~NFSD4_SLOT_REUSED;
 	slot->sl_flags |= NFSD4_SLOT_INUSE;
+	slot->sl_generation = session->se_slot_gen;
 	if (seq->cachethis)
 		slot->sl_flags |= NFSD4_SLOT_CACHETHIS;
 	else
@@ -4291,9 +4356,11 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 	 * the client might use.
 	 */
 	if (seq->slotid == session->se_fchannel.maxreqs - 1 &&
+	    session->se_target_maxslots >= session->se_fchannel.maxreqs &&
 	    session->se_fchannel.maxreqs < NFSD_MAX_SLOTS_PER_SESSION) {
 		int s = session->se_fchannel.maxreqs;
 		int cnt = DIV_ROUND_UP(s, 5);
+		void *prev_slot;
 
 		do {
 			/*
@@ -4303,18 +4370,25 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 			 */
 			slot = kzalloc(slot_bytes(&session->se_fchannel),
 				       GFP_NOWAIT);
+			prev_slot = xa_load(&session->se_slots, s);
+			if (xa_is_value(prev_slot) && slot) {
+				slot->sl_seqid = xa_to_value(prev_slot);
+				slot->sl_flags |= NFSD4_SLOT_REUSED;
+			}
 			if (slot &&
 			    !xa_is_err(xa_store(&session->se_slots, s, slot,
 						GFP_NOWAIT))) {
 				s += 1;
 				session->se_fchannel.maxreqs = s;
+				session->se_target_maxslots = s;
 			} else {
 				kfree(slot);
 				slot = NULL;
 			}
 		} while (slot && --cnt > 0);
 	}
-	seq->maxslots = session->se_fchannel.maxreqs;
+	seq->maxslots = max(session->se_target_maxslots, seq->maxslots);
+	seq->target_maxslots = session->se_target_maxslots;
 
 out:
 	switch (clp->cl_cb_state) {
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index 53fac037611c..4dcb03cd9292 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -1884,7 +1884,8 @@ nfsd4_decode_sequence(struct nfsd4_compoundargs *argp,
 		return nfserr_bad_xdr;
 	seq->seqid = be32_to_cpup(p++);
 	seq->slotid = be32_to_cpup(p++);
-	seq->maxslots = be32_to_cpup(p++);
+	/* sa_highest_slotid counts from 0 but maxslots  counts from 1 ... */
+	seq->maxslots = be32_to_cpup(p++) + 1;
 	seq->cachethis = be32_to_cpup(p);
 
 	seq->status_flags = 0;
@@ -4968,7 +4969,7 @@ nfsd4_encode_sequence(struct nfsd4_compoundres *resp, __be32 nfserr,
 	if (nfserr != nfs_ok)
 		return nfserr;
 	/* sr_target_highest_slotid */
-	nfserr = nfsd4_encode_slotid4(xdr, seq->maxslots - 1);
+	nfserr = nfsd4_encode_slotid4(xdr, seq->target_maxslots - 1);
 	if (nfserr != nfs_ok)
 		return nfserr;
 	/* sr_status_flags */
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index aad547d3ad8b..4251ff3c5ad1 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -245,10 +245,12 @@ struct nfsd4_slot {
 	struct svc_cred sl_cred;
 	u32	sl_datalen;
 	u16	sl_opcnt;
+	u16	sl_generation;
 #define NFSD4_SLOT_INUSE	(1 << 0)
 #define NFSD4_SLOT_CACHETHIS	(1 << 1)
 #define NFSD4_SLOT_INITIALIZED	(1 << 2)
 #define NFSD4_SLOT_CACHED	(1 << 3)
+#define NFSD4_SLOT_REUSED	(1 << 4)
 	u8	sl_flags;
 	char	sl_data[];
 };
@@ -321,7 +323,6 @@ struct nfsd4_session {
 	u32			se_cb_slot_avail; /* bitmap of available slots */
 	u32			se_cb_highest_slot;	/* highest slot client wants */
 	u32			se_cb_prog;
-	bool			se_dead;
 	struct list_head	se_hash;	/* hash by sessionid */
 	struct list_head	se_perclnt;
 	struct nfs4_client	*se_client;
@@ -331,6 +332,9 @@ struct nfsd4_session {
 	struct list_head	se_conns;
 	u32			se_cb_seq_nr[NFSD_BC_SLOT_TABLE_SIZE];
 	struct xarray		se_slots;	/* forward channel slots */
+	u16			se_slot_gen;
+	bool			se_dead;
+	u32			se_target_maxslots;
 };
 
 /* formatted contents of nfs4_sessionid */
diff --git a/fs/nfsd/xdr4.h b/fs/nfsd/xdr4.h
index 382cc1389396..c26ba86dbdfd 100644
--- a/fs/nfsd/xdr4.h
+++ b/fs/nfsd/xdr4.h
@@ -576,9 +576,7 @@ struct nfsd4_sequence {
 	u32			slotid;			/* request/response */
 	u32			maxslots;		/* request/response */
 	u32			cachethis;		/* request */
-#if 0
 	u32			target_maxslots;	/* response */
-#endif /* not yet */
 	u32			status_flags;		/* response */
 };
 
-- 
2.47.0



* [PATCH 6/6] nfsd: add shrinker to reduce number of slots allocated per session
  2024-12-08 22:43 [PATCH 0/6 v4] nfsd: allocate/free session-based DRC slots on demand NeilBrown
                   ` (4 preceding siblings ...)
  2024-12-08 22:43 ` [PATCH 5/6] nfsd: add support for freeing unused session-DRC slots NeilBrown
@ 2024-12-08 22:43 ` NeilBrown
  2024-12-10 21:05   ` Chuck Lever
  2024-12-09 14:49 ` [PATCH 0/6 v4] nfsd: allocate/free session-based DRC slots on demand Jeff Layton
  6 siblings, 1 reply; 23+ messages in thread
From: NeilBrown @ 2024-12-08 22:43 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

Add a shrinker which frees unused slots and may ask the clients to use
fewer slots on each session.

We keep a global count of the number of freeable slots: the sum, over
all sessions in all clients in all net namespaces, of one less than each
session's current "target" slot count.  This number is reported by the
shrinker.

When the shrinker is asked to free some, we call reduce_session_slots()
on each session in round-robin order, asking each to reduce the slot
count by 1.  This will
reduce the "target" so the number reported by the shrinker will reduce
immediately.  The memory will only be freed later when the client
confirms that it is no longer needed.

We use a global list of sessions and move the "head" to after the last
session that we asked to reduce, so the next callback from the shrinker
will move on to the next session.  This pressure should be applied
"evenly" across all sessions over time.

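The round-robin scan described above can be sketched with an index cursor standing in for the kernel's list_move() of the list head. slot_scan() is illustrative only; the array of targets stands in for the per-session se_target_maxslots values.

```c
#include <assert.h>

/* Each call resumes at *cursor, visits at most nr_to_scan sessions,
 * asks each willing session to give up one slot (like
 * reduce_session_slots(ses, 1)), and leaves *cursor at the next
 * session, so pressure is spread evenly across scans. */
static unsigned long slot_scan(int *targets, int nses, int *cursor,
			       unsigned long nr_to_scan)
{
	unsigned long n = nr_to_scan < (unsigned long)nses ?
			  nr_to_scan : (unsigned long)nses;
	unsigned long freed = 0;
	unsigned long k;

	for (k = 0; k < n; k++) {
		int i = (*cursor + k) % nses;

		if (targets[i] > 1) {	/* can never free slot 0 */
			targets[i] -= 1;
			freed += 1;
		}
	}
	*cursor = (int)((*cursor + n) % nses);
	return freed;
}
```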
Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/nfsd/nfs4state.c | 71 ++++++++++++++++++++++++++++++++++++++++++---
 fs/nfsd/state.h     |  1 +
 2 files changed, 68 insertions(+), 4 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index a2d1f97b8a0e..311f67418759 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1909,6 +1909,16 @@ gen_sessionid(struct nfsd4_session *ses)
  */
 #define NFSD_MIN_HDR_SEQ_SZ  (24 + 12 + 44)
 
+static struct shrinker *nfsd_slot_shrinker;
+static DEFINE_SPINLOCK(nfsd_session_list_lock);
+static LIST_HEAD(nfsd_session_list);
+/* The sum of "target_slots-1" on every session.  The shrinker can push this
+ * down, though it can take a little while for the memory to actually
+ * be freed.  The "-1" is because we can never free slot 0 while the
+ * session is active.
+ */
+static atomic_t nfsd_total_target_slots = ATOMIC_INIT(0);
+
 static void
 free_session_slots(struct nfsd4_session *ses, int from)
 {
@@ -1930,8 +1940,11 @@ free_session_slots(struct nfsd4_session *ses, int from)
 		kfree(slot);
 	}
 	ses->se_fchannel.maxreqs = from;
-	if (ses->se_target_maxslots > from)
-		ses->se_target_maxslots = from;
+	if (ses->se_target_maxslots > from) {
+		int new_target = from ?: 1;
+		atomic_sub(ses->se_target_maxslots - new_target, &nfsd_total_target_slots);
+		ses->se_target_maxslots = new_target;
+	}
 }
 
 /**
@@ -1949,7 +1962,7 @@ free_session_slots(struct nfsd4_session *ses, int from)
  * Return value:
  *   The number of slots that the target was reduced by.
  */
-static int __maybe_unused
+static int
 reduce_session_slots(struct nfsd4_session *ses, int dec)
 {
 	struct nfsd_net *nn = net_generic(ses->se_client->net,
@@ -1962,6 +1975,7 @@ reduce_session_slots(struct nfsd4_session *ses, int dec)
 		return ret;
 	ret = min(dec, ses->se_target_maxslots-1);
 	ses->se_target_maxslots -= ret;
+	atomic_sub(ret, &nfsd_total_target_slots);
 	ses->se_slot_gen += 1;
 	if (ses->se_slot_gen == 0) {
 		int i;
@@ -2021,6 +2035,7 @@ static struct nfsd4_session *alloc_session(struct nfsd4_channel_attrs *fattrs,
 	fattrs->maxreqs = i;
 	memcpy(&new->se_fchannel, fattrs, sizeof(struct nfsd4_channel_attrs));
 	new->se_target_maxslots = i;
+	atomic_add(i - 1, &nfsd_total_target_slots);
 	new->se_cb_slot_avail = ~0U;
 	new->se_cb_highest_slot = min(battrs->maxreqs - 1,
 				      NFSD_BC_SLOT_TABLE_SIZE - 1);
@@ -2145,6 +2160,36 @@ static void free_session(struct nfsd4_session *ses)
 	__free_session(ses);
 }
 
+static unsigned long
+nfsd_slot_count(struct shrinker *s, struct shrink_control *sc)
+{
+	unsigned long cnt = atomic_read(&nfsd_total_target_slots);
+
+	return cnt ? cnt : SHRINK_EMPTY;
+}
+
+static unsigned long
+nfsd_slot_scan(struct shrinker *s, struct shrink_control *sc)
+{
+	struct nfsd4_session *ses;
+	unsigned long scanned = 0;
+	unsigned long freed = 0;
+
+	spin_lock(&nfsd_session_list_lock);
+	list_for_each_entry(ses, &nfsd_session_list, se_all_sessions) {
+		freed += reduce_session_slots(ses, 1);
+		scanned += 1;
+		if (scanned >= sc->nr_to_scan) {
+			/* Move starting point for next scan */
+			list_move(&nfsd_session_list, &ses->se_all_sessions);
+			break;
+		}
+	}
+	spin_unlock(&nfsd_session_list_lock);
+	sc->nr_scanned = scanned;
+	return freed;
+}
+
 static void init_session(struct svc_rqst *rqstp, struct nfsd4_session *new, struct nfs4_client *clp, struct nfsd4_create_session *cses)
 {
 	int idx;
@@ -2169,6 +2214,10 @@ static void init_session(struct svc_rqst *rqstp, struct nfsd4_session *new, stru
 	list_add(&new->se_perclnt, &clp->cl_sessions);
 	spin_unlock(&clp->cl_lock);
 
+	spin_lock(&nfsd_session_list_lock);
+	list_add_tail(&new->se_all_sessions, &nfsd_session_list);
+	spin_unlock(&nfsd_session_list_lock);
+
 	{
 		struct sockaddr *sa = svc_addr(rqstp);
 		/*
@@ -2238,6 +2287,9 @@ unhash_session(struct nfsd4_session *ses)
 	spin_lock(&ses->se_client->cl_lock);
 	list_del(&ses->se_perclnt);
 	spin_unlock(&ses->se_client->cl_lock);
+	spin_lock(&nfsd_session_list_lock);
+	list_del(&ses->se_all_sessions);
+	spin_unlock(&nfsd_session_list_lock);
 }
 
 /* SETCLIENTID and SETCLIENTID_CONFIRM Helper functions */
@@ -4380,6 +4432,8 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 						GFP_NOWAIT))) {
 				s += 1;
 				session->se_fchannel.maxreqs = s;
+				atomic_add(s - session->se_target_maxslots,
+					   &nfsd_total_target_slots);
 				session->se_target_maxslots = s;
 			} else {
 				kfree(slot);
@@ -8776,7 +8830,6 @@ nfs4_state_start_net(struct net *net)
 }
 
 /* initialization to perform when the nfsd service is started: */
-
 int
 nfs4_state_start(void)
 {
@@ -8786,6 +8839,15 @@ nfs4_state_start(void)
 	if (ret)
 		return ret;
 
+	nfsd_slot_shrinker = shrinker_alloc(0, "nfsd-DRC-slot");
+	if (!nfsd_slot_shrinker) {
+		rhltable_destroy(&nfs4_file_rhltable);
+		return -ENOMEM;
+	}
+	nfsd_slot_shrinker->count_objects = nfsd_slot_count;
+	nfsd_slot_shrinker->scan_objects = nfsd_slot_scan;
+	shrinker_register(nfsd_slot_shrinker);
+
 	set_max_delegations();
 	return 0;
 }
@@ -8827,6 +8889,7 @@ void
 nfs4_state_shutdown(void)
 {
 	rhltable_destroy(&nfs4_file_rhltable);
+	shrinker_free(nfsd_slot_shrinker);
 }
 
 static void
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index 4251ff3c5ad1..f45aee751a10 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -325,6 +325,7 @@ struct nfsd4_session {
 	u32			se_cb_prog;
 	struct list_head	se_hash;	/* hash by sessionid */
 	struct list_head	se_perclnt;
+	struct list_head	se_all_sessions;/* global list of sessions */
 	struct nfs4_client	*se_client;
 	struct nfs4_sessionid	se_sessionid;
 	struct nfsd4_channel_attrs se_fchannel;
-- 
2.47.0



* Re: [PATCH 1/6] nfsd: use an xarray to store v4.1 session slots
  2024-12-08 22:43 ` [PATCH 1/6] nfsd: use an xarray to store v4.1 session slots NeilBrown
@ 2024-12-09  0:53   ` cel
  0 siblings, 0 replies; 23+ messages in thread
From: cel @ 2024-12-09  0:53 UTC (permalink / raw)
  To: Jeff Layton, NeilBrown
  Cc: Chuck Lever, linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

From: Chuck Lever <chuck.lever@oracle.com>

On Mon, 09 Dec 2024 09:43:12 +1100, NeilBrown wrote:
> Using an xarray to store session slots makes it easier to change the
> number of active slots based on demand, and removes an unnecessary
> limit.
> 
> To achieve good throughput with a high-latency server it can be helpful
> to have hundreds of concurrent writes, which means hundreds of slots.
> So increase the limit to 2048 (twice what the Linux client will
> currently use).  This limit is only a sanity check, not a hard limit.
> 
> [...]

Applied to nfsd-testing for v6.14, thanks!

[1/6] nfsd: use an xarray to store v4.1 session slots
      commit: 2d8efbc3b656b43a5d1b813e3a778c9b9c8810a4
[2/6] nfsd: remove artificial limits on the session-based DRC
      commit: 8233f78fbd970cbfcb9f78c719ac5a3aac4ea053
[3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info
      commit: c1c0d459067dc044dba36779da8a9da69c2053cb
[4/6] nfsd: allocate new session-based DRC slots on demand.
      commit: c2340cd75a0c99bb68cefde70db5893577f100f4
[5/6] nfsd: add support for freeing unused session-DRC slots
      commit: 22b1fbeea695de4efedf4de4db66be21004f134a
[6/6] nfsd: add shrinker to reduce number of slots allocated per session
      commit: 8af8f01a1bb7d84ad2d176ae00112c96647e151f

--
Chuck Lever


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 0/6 v4] nfsd: allocate/free session-based DRC slots on demand
  2024-12-08 22:43 [PATCH 0/6 v4] nfsd: allocate/free session-based DRC slots on demand NeilBrown
                   ` (5 preceding siblings ...)
  2024-12-08 22:43 ` [PATCH 6/6] nfsd: add shrinker to reduce number of slots allocated per session NeilBrown
@ 2024-12-09 14:49 ` Jeff Layton
  6 siblings, 0 replies; 23+ messages in thread
From: Jeff Layton @ 2024-12-09 14:49 UTC (permalink / raw)
  To: NeilBrown, Chuck Lever; +Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Mon, 2024-12-09 at 09:43 +1100, NeilBrown wrote:
> Changes from v3 include:
>  - use GFP_NOWAIT more consistently - don't use GFP_ATOMIC
>  - document reduce_session_slots()
>  - change sl_generation to u16.  As we reduce the number of slots one at
>    a time and update se_slot_gen each time, we could cycle a u8 generation
>    counter quickly.
> 
> Thanks,
> NeilBrown
> 
>  [PATCH 1/6] nfsd: use an xarray to store v4.1 session slots
>  [PATCH 2/6] nfsd: remove artificial limits on the session-based DRC
>  [PATCH 3/6] nfsd: add session slot count to
>  [PATCH 4/6] nfsd: allocate new session-based DRC slots on demand.
>  [PATCH 5/6] nfsd: add support for freeing unused session-DRC slots
>  [PATCH 6/6] nfsd: add shrinker to reduce number of slots allocated

Reviewed-by: Jeff Layton <jlayton@kernel.org>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 6/6] nfsd: add shrinker to reduce number of slots allocated per session
  2024-12-08 22:43 ` [PATCH 6/6] nfsd: add shrinker to reduce number of slots allocated per session NeilBrown
@ 2024-12-10 21:05   ` Chuck Lever
  2024-12-11  3:32     ` NeilBrown
  0 siblings, 1 reply; 23+ messages in thread
From: Chuck Lever @ 2024-12-10 21:05 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Jeff Layton

On 12/8/24 5:43 PM, NeilBrown wrote:
> Add a shrinker which frees unused slots and may ask the clients to use
> fewer slots on each session.
> 
> We keep a global count of the number of freeable slots, which is the sum
> of one less than the current "target" slots in all sessions in all
> clients in all net-namespaces. This number is reported by the shrinker.
> 
> When the shrinker is asked to free some, we call reduce_session_slots()
> on each session in a round-robin fashion, asking each to reduce the slot
> count by 1.  This will reduce the "target" so the number reported by the
> shrinker will reduce immediately.  The memory will only be freed later,
> when the client confirms that it is no longer needed.
> 
> We use a global list of sessions and move the "head" to after the last
> session that we asked to reduce, so the next callback from the shrinker
> will move on to the next session.  This pressure should be applied
> "evenly" across all sessions over time.
> 
> Signed-off-by: NeilBrown <neilb@suse.de>
> ---
>   fs/nfsd/nfs4state.c | 71 ++++++++++++++++++++++++++++++++++++++++++---
>   fs/nfsd/state.h     |  1 +
>   2 files changed, 68 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index a2d1f97b8a0e..311f67418759 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -1909,6 +1909,16 @@ gen_sessionid(struct nfsd4_session *ses)
>    */
>   #define NFSD_MIN_HDR_SEQ_SZ  (24 + 12 + 44)
>   
> +static struct shrinker *nfsd_slot_shrinker;
> +static DEFINE_SPINLOCK(nfsd_session_list_lock);
> +static LIST_HEAD(nfsd_session_list);
> +/* The sum of "target_slots-1" on every session.  The shrinker can push this
> + * down, though it can take a little while for the memory to actually
> + * be freed.  The "-1" is because we can never free slot 0 while the
> + * session is active.
> + */
> +static atomic_t nfsd_total_target_slots = ATOMIC_INIT(0);
> +
>   static void
>   free_session_slots(struct nfsd4_session *ses, int from)
>   {
> @@ -1930,8 +1940,11 @@ free_session_slots(struct nfsd4_session *ses, int from)
>   		kfree(slot);
>   	}
>   	ses->se_fchannel.maxreqs = from;
> -	if (ses->se_target_maxslots > from)
> -		ses->se_target_maxslots = from;
> +	if (ses->se_target_maxslots > from) {
> +		int new_target = from ?: 1;
> +		atomic_sub(ses->se_target_maxslots - new_target, &nfsd_total_target_slots);
> +		ses->se_target_maxslots = new_target;
> +	}
>   }
>   
>   /**
> @@ -1949,7 +1962,7 @@ free_session_slots(struct nfsd4_session *ses, int from)
>    * Return value:
>    *   The number of slots that the target was reduced by.
>    */
> -static int __maybe_unused
> +static int
>   reduce_session_slots(struct nfsd4_session *ses, int dec)
>   {
>   	struct nfsd_net *nn = net_generic(ses->se_client->net,
> @@ -1962,6 +1975,7 @@ reduce_session_slots(struct nfsd4_session *ses, int dec)
>   		return ret;
>   	ret = min(dec, ses->se_target_maxslots-1);
>   	ses->se_target_maxslots -= ret;
> +	atomic_sub(ret, &nfsd_total_target_slots);
>   	ses->se_slot_gen += 1;
>   	if (ses->se_slot_gen == 0) {
>   		int i;
> @@ -2021,6 +2035,7 @@ static struct nfsd4_session *alloc_session(struct nfsd4_channel_attrs *fattrs,
>   	fattrs->maxreqs = i;
>   	memcpy(&new->se_fchannel, fattrs, sizeof(struct nfsd4_channel_attrs));
>   	new->se_target_maxslots = i;
> +	atomic_add(i - 1, &nfsd_total_target_slots);
>   	new->se_cb_slot_avail = ~0U;
>   	new->se_cb_highest_slot = min(battrs->maxreqs - 1,
>   				      NFSD_BC_SLOT_TABLE_SIZE - 1);
> @@ -2145,6 +2160,36 @@ static void free_session(struct nfsd4_session *ses)
>   	__free_session(ses);
>   }
>   
> +static unsigned long
> +nfsd_slot_count(struct shrinker *s, struct shrink_control *sc)
> +{
> +	unsigned long cnt = atomic_read(&nfsd_total_target_slots);
> +
> +	return cnt ? cnt : SHRINK_EMPTY;
> +}
> +
> +static unsigned long
> +nfsd_slot_scan(struct shrinker *s, struct shrink_control *sc)
> +{
> +	struct nfsd4_session *ses;
> +	unsigned long scanned = 0;
> +	unsigned long freed = 0;
> +
> +	spin_lock(&nfsd_session_list_lock);
> +	list_for_each_entry(ses, &nfsd_session_list, se_all_sessions) {
> +		freed += reduce_session_slots(ses, 1);
> +		scanned += 1;
> +		if (scanned >= sc->nr_to_scan) {
> +			/* Move starting point for next scan */
> +			list_move(&nfsd_session_list, &ses->se_all_sessions);
> +			break;
> +		}
> +	}
> +	spin_unlock(&nfsd_session_list_lock);
> +	sc->nr_scanned = scanned;
> +	return freed;
> +}
> +
>   static void init_session(struct svc_rqst *rqstp, struct nfsd4_session *new, struct nfs4_client *clp, struct nfsd4_create_session *cses)
>   {
>   	int idx;
> @@ -2169,6 +2214,10 @@ static void init_session(struct svc_rqst *rqstp, struct nfsd4_session *new, stru
>   	list_add(&new->se_perclnt, &clp->cl_sessions);
>   	spin_unlock(&clp->cl_lock);
>   
> +	spin_lock(&nfsd_session_list_lock);
> +	list_add_tail(&new->se_all_sessions, &nfsd_session_list);
> +	spin_unlock(&nfsd_session_list_lock);
> +
>   	{
>   		struct sockaddr *sa = svc_addr(rqstp);
>   		/*
> @@ -2238,6 +2287,9 @@ unhash_session(struct nfsd4_session *ses)
>   	spin_lock(&ses->se_client->cl_lock);
>   	list_del(&ses->se_perclnt);
>   	spin_unlock(&ses->se_client->cl_lock);
> +	spin_lock(&nfsd_session_list_lock);
> +	list_del(&ses->se_all_sessions);
> +	spin_unlock(&nfsd_session_list_lock);
>   }
>   
>   /* SETCLIENTID and SETCLIENTID_CONFIRM Helper functions */
> @@ -4380,6 +4432,8 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
>   						GFP_NOWAIT))) {
>   				s += 1;
>   				session->se_fchannel.maxreqs = s;
> +				atomic_add(s - session->se_target_maxslots,
> +					   &nfsd_total_target_slots);
>   				session->se_target_maxslots = s;
>   			} else {
>   				kfree(slot);
> @@ -8776,7 +8830,6 @@ nfs4_state_start_net(struct net *net)
>   }
>   
>   /* initialization to perform when the nfsd service is started: */
> -
>   int
>   nfs4_state_start(void)
>   {
> @@ -8786,6 +8839,15 @@ nfs4_state_start(void)
>   	if (ret)
>   		return ret;
>   
> +	nfsd_slot_shrinker = shrinker_alloc(0, "nfsd-DRC-slot");
> +	if (!nfsd_slot_shrinker) {
> +		rhltable_destroy(&nfs4_file_rhltable);
> +		return -ENOMEM;
> +	}
> +	nfsd_slot_shrinker->count_objects = nfsd_slot_count;
> +	nfsd_slot_shrinker->scan_objects = nfsd_slot_scan;
> +	shrinker_register(nfsd_slot_shrinker);
> +
>   	set_max_delegations();
>   	return 0;
>   }
> @@ -8827,6 +8889,7 @@ void
>   nfs4_state_shutdown(void)
>   {
>   	rhltable_destroy(&nfs4_file_rhltable);
> +	shrinker_free(nfsd_slot_shrinker);
>   }
>   
>   static void
> diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
> index 4251ff3c5ad1..f45aee751a10 100644
> --- a/fs/nfsd/state.h
> +++ b/fs/nfsd/state.h
> @@ -325,6 +325,7 @@ struct nfsd4_session {
>   	u32			se_cb_prog;
>   	struct list_head	se_hash;	/* hash by sessionid */
>   	struct list_head	se_perclnt;
> +	struct list_head	se_all_sessions;/* global list of sessions */
>   	struct nfs4_client	*se_client;
>   	struct nfs4_sessionid	se_sessionid;
>   	struct nfsd4_channel_attrs se_fchannel;

Bisected to this patch. Sometime during the pynfs NFSv4.1 server tests,
this list_del corruption splat is triggered:

[   87.768277] list_del corruption. prev->next should be ff388b4606369638, but was 0000000000000000. (prev=ff388b4606368038)
[   87.771492] ------------[ cut here ]------------
[   87.772862] kernel BUG at lib/list_debug.c:62!
[   87.775029] Oops: invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
[   87.777179] CPU: 2 UID: 0 PID: 940 Comm: nfsd Not tainted 6.13.0-rc2-g6139eb164177 #1
[   87.780065] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-2.fc40 04/01/2014
[   87.783143] RIP: 0010:__list_del_entry_valid_or_report.cold+0x4f/0x9f
[   87.785336] Code: c2 48 83 05 43 a7 13 04 01 e8 5e ba f9 ff 0f 0b 48 89 f2 48 89 fe 48 c7 c7 00 07 84 ae 48 83 05 0f a7 13 04 01 e8 42 ba f9 ff <0f> 0b 48 89 fe 48 89 ca 48 c7 c7 c8 06 84 ae 48 83 05 db a6 13 04
[   87.791467] RSP: 0018:ff4e1b1302de3d08 EFLAGS: 00010246
[   87.793251] RAX: 000000000000006d RBX: ff388b4606369600 RCX: 0000000000000000
[   87.795660] RDX: 0000000000000000 RSI: ff388b496fd21900 RDI: ff388b496fd21900
[   87.798066] RBP: ff4e1b1302de3d08 R08: 0000000000000000 R09: 656e3e2d76657270
[   87.800485] R10: 0000000000000029 R11: ff4e1b1302de3aa0 R12: ffffffffb0495580
[   87.802884] R13: ff388b460dcee128 R14: 0000000000000001 R15: ffffffffb0495580
[   87.805301] FS:  0000000000000000(0000) GS:ff388b496fd00000(0000) knlGS:0000000000000000
[   87.807992] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   87.809952] CR2: 00007f7424c42008 CR3: 0000000100f30001 CR4: 0000000000771ef0
[   87.811961] PKRU: 55555554
[   87.812699] Call Trace:
[   87.813380]  <TASK>
[   87.813966]  ? show_regs.cold+0x21/0x36
[   87.814990]  ? __die_body+0x2b/0xa0
[   87.815934]  ? __die+0x3c/0x4e
[   87.816669]  ? die+0x43/0x80
[   87.817297]  ? do_trap+0x11c/0x150
[   87.818008]  ? do_error_trap+0xbc/0x110
[   87.818797]  ? __list_del_entry_valid_or_report.cold+0x4f/0x9f
[   87.819955]  ? exc_invalid_op+0x6e/0x90
[   87.820747]  ? __list_del_entry_valid_or_report.cold+0x4f/0x9f
[   87.821904]  ? asm_exc_invalid_op+0x1f/0x30
[   87.822761]  ? __list_del_entry_valid_or_report.cold+0x4f/0x9f
[   87.823915]  ? __list_del_entry_valid_or_report.cold+0x4f/0x9f
[   87.825069]  nfsd4_destroy_session+0x280/0x430 [nfsd]
[   87.826230]  nfsd4_proc_compound+0x64d/0xcf0 [nfsd]
[   87.827141]  ? nfs4svc_decode_compoundargs+0x367/0x6c0 [nfsd]
[   87.827989]  nfsd_dispatch+0x16b/0x3d0 [nfsd]
[   87.828671]  svc_process_common+0x903/0xc80 [sunrpc]
[   87.829440]  ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[   87.830178]  svc_process+0x166/0x2e0 [sunrpc]
[   87.830868]  svc_recv+0xd65/0x12c0 [sunrpc]
[   87.831529]  ? __pfx_nfsd+0x10/0x10 [nfsd]
[   87.832160]  nfsd+0x10a/0x1b0 [nfsd]
[   87.832734]  kthread+0x149/0x1c0
[   87.833201]  ? __pfx_kthread+0x10/0x10
[   87.833737]  ret_from_fork+0x5e/0x80
[   87.834248]  ? __pfx_kthread+0x10/0x10
[   87.834786]  ret_from_fork_asm+0x1a/0x30
[   87.835349]  </TASK>


-- 
Chuck Lever

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 6/6] nfsd: add shrinker to reduce number of slots allocated per session
  2024-12-10 21:05   ` Chuck Lever
@ 2024-12-11  3:32     ` NeilBrown
  2024-12-11 13:44       ` Chuck Lever
  0 siblings, 1 reply; 23+ messages in thread
From: NeilBrown @ 2024-12-11  3:32 UTC (permalink / raw)
  To: Chuck Lever
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Jeff Layton

On Wed, 11 Dec 2024, Chuck Lever wrote:
> On 12/8/24 5:43 PM, NeilBrown wrote:
> > Add a shrinker which frees unused slots and may ask the clients to use
> > fewer slots on each session.

> 
> Bisected to this patch. Sometime during the pynfs NFSv4.1 server tests,
> this list_del corruption splat is triggered:

Thanks.
This fixes it.  Do you want to squash it in, or should I resend?

Having two places that detach a session from a client seems less than
ideal.  I wonder if I should fix that.

Thanks,
NeilBrown


diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 311f67418759..3b76cfe44b45 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -2425,8 +2425,12 @@ unhash_client_locked(struct nfs4_client *clp)
 	}
 	list_del_init(&clp->cl_lru);
 	spin_lock(&clp->cl_lock);
-	list_for_each_entry(ses, &clp->cl_sessions, se_perclnt)
+	spin_lock(&nfsd_session_list_lock);
+	list_for_each_entry(ses, &clp->cl_sessions, se_perclnt) {
 		list_del_init(&ses->se_hash);
+		list_del_init(&ses->se_all_sessions);
+	}
+	spin_unlock(&nfsd_session_list_lock);
 	spin_unlock(&clp->cl_lock);
 }
 


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH 6/6] nfsd: add shrinker to reduce number of slots allocated per session
  2024-12-11  3:32     ` NeilBrown
@ 2024-12-11 13:44       ` Chuck Lever
  0 siblings, 0 replies; 23+ messages in thread
From: Chuck Lever @ 2024-12-11 13:44 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Jeff Layton

On 12/10/24 10:32 PM, NeilBrown wrote:
> On Wed, 11 Dec 2024, Chuck Lever wrote:
>> On 12/8/24 5:43 PM, NeilBrown wrote:
>>> Add a shrinker which frees unused slots and may ask the clients to use
>>> fewer slots on each session.
> 
>>
>> Bisected to this patch. Sometime during the pynfs NFSv4.1 server tests,
>> this list_del corruption splat is triggered:
> 
> Thanks.
> This fixes it.  Do you want to squash it in, or should I resend?

Resend, thanks!


> Having two places that detach a session from a client seems less than
> ideal.  I wonder if I should fix that.
> 
> Thanks,
> NeilBrown
> 
> 
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 311f67418759..3b76cfe44b45 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -2425,8 +2425,12 @@ unhash_client_locked(struct nfs4_client *clp)
>   	}
>   	list_del_init(&clp->cl_lru);
>   	spin_lock(&clp->cl_lock);
> -	list_for_each_entry(ses, &clp->cl_sessions, se_perclnt)
> +	spin_lock(&nfsd_session_list_lock);
> +	list_for_each_entry(ses, &clp->cl_sessions, se_perclnt) {
>   		list_del_init(&ses->se_hash);
> +		list_del_init(&ses->se_all_sessions);
> +	}
> +	spin_unlock(&nfsd_session_list_lock);
>   	spin_unlock(&clp->cl_lock);
>   }
>   
> 


-- 
Chuck Lever

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info
  2024-12-11 21:47 [PATCH 0/6 v5] " NeilBrown
@ 2024-12-11 21:47 ` NeilBrown
  0 siblings, 0 replies; 23+ messages in thread
From: NeilBrown @ 2024-12-11 21:47 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

Each client now reports the number of slots allocated in each session.

Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/nfsd/nfs4state.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 808cb0d897d5..67dfc699e411 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -2643,6 +2643,7 @@ static const char *cb_state2str(int state)
 static int client_info_show(struct seq_file *m, void *v)
 {
 	struct inode *inode = file_inode(m->file);
+	struct nfsd4_session *ses;
 	struct nfs4_client *clp;
 	u64 clid;
 
@@ -2679,6 +2680,13 @@ static int client_info_show(struct seq_file *m, void *v)
 	seq_printf(m, "callback address: \"%pISpc\"\n", &clp->cl_cb_conn.cb_addr);
 	seq_printf(m, "admin-revoked states: %d\n",
 		   atomic_read(&clp->cl_admin_revoked));
+	spin_lock(&clp->cl_lock);
+	seq_printf(m, "session slots:");
+	list_for_each_entry(ses, &clp->cl_sessions, se_perclnt)
+		seq_printf(m, " %u", ses->se_fchannel.maxreqs);
+	spin_unlock(&clp->cl_lock);
+	seq_puts(m, "\n");
+
 	drop_client(clp);
 
 	return 0;
-- 
2.47.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

Thread overview: 23+ messages
2024-12-08 22:43 [PATCH 0/6 v4] nfsd: allocate/free session-based DRC slots on demand NeilBrown
2024-12-08 22:43 ` [PATCH 1/6] nfsd: use an xarray to store v4.1 session slots NeilBrown
2024-12-09  0:53   ` cel
2024-12-08 22:43 ` [PATCH 2/6] nfsd: remove artificial limits on the session-based DRC NeilBrown
2024-12-08 22:43 ` [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info NeilBrown
2024-12-08 22:43 ` [PATCH 4/6] nfsd: allocate new session-based DRC slots on demand NeilBrown
2024-12-08 22:43 ` [PATCH 5/6] nfsd: add support for freeing unused session-DRC slots NeilBrown
2024-12-08 22:43 ` [PATCH 6/6] nfsd: add shrinker to reduce number of slots allocated per session NeilBrown
2024-12-10 21:05   ` Chuck Lever
2024-12-11  3:32     ` NeilBrown
2024-12-11 13:44       ` Chuck Lever
2024-12-09 14:49 ` [PATCH 0/6 v4] nfsd: allocate/free session-based DRC slots on demand Jeff Layton
  -- strict thread matches above, loose matches on Subject: below --
2024-12-11 21:47 [PATCH 0/6 v5] " NeilBrown
2024-12-11 21:47 ` [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info NeilBrown
2024-12-06  0:43 [PATCH 0/6 v3] nfsd: allocate/free session-based DRC slots on demand NeilBrown
2024-12-06  0:43 ` [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info NeilBrown
2024-11-19  0:41 [PATCH 0/6 RFC v2] nfsd: allocate/free session-based DRC slots on demand NeilBrown
2024-11-19  0:41 ` [PATCH 3/6] nfsd: add session slot count to /proc/fs/nfsd/clients/*/info NeilBrown
2024-11-19 19:14   ` Chuck Lever
2024-11-19 22:22     ` NeilBrown
2024-11-20  0:21       ` Chuck Lever
2024-11-19 19:21   ` Chuck Lever
2024-11-19 22:24     ` NeilBrown
2024-11-20  0:25       ` Chuck Lever
2024-11-21 21:03         ` NeilBrown
2024-11-21 21:24           ` Chuck Lever III
