public inbox for linux-nfs@vger.kernel.org
From: Geert Uytterhoeven <geert@linux-m68k.org>
To: "J. Bruce Fields" <bfields@fieldses.org>,
	Roberto Bergantinos Corpas <rbergant@redhat.com>
Cc: linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] sunrpc: raise kernel RPC channel buffer size
Date: Fri, 23 Oct 2020 11:44:38 +0200 (CEST)	[thread overview]
Message-ID: <alpine.DEB.2.21.2010231141460.29805@ramsan.of.borg> (raw)
In-Reply-To: <20201019132000.GA32403@fieldses.org>

 	Hi Bruce, Roberto,

On Mon, 19 Oct 2020, J. Bruce Fields wrote:
> On Mon, Oct 19, 2020 at 11:33:56AM +0200, Roberto Bergantinos Corpas wrote:
>> It's possible that, using AUTH_SYS and mountd's manage-gids option, a
>> user may hit the 8k RPC channel buffer limit. This has been observed
>> in the field, causing unanswered RPCs on clients after mountd fails to
>> write to the channel:
>>
>> rpc.mountd[11231]: auth_unix_gid: error writing reply
>>
>> Userland nfs-utils uses a buffer size of 32k (RPC_CHAN_BUF_SIZE), so
>> let's match the two.
>
> Thanks, applying.
>
> That should allow about 4000 group memberships.  If that doesn't do it
> then maybe it's time to rethink....
>
> --b.
>
>>
>> Signed-off-by: Roberto Bergantinos Corpas <rbergant@redhat.com>
>> ---
>>  net/sunrpc/cache.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
>> index baef5ee43dbb..08df4c599ab3 100644
>> --- a/net/sunrpc/cache.c
>> +++ b/net/sunrpc/cache.c
>> @@ -908,7 +908,7 @@ static ssize_t cache_do_downcall(char *kaddr, const char __user *buf,
>>  static ssize_t cache_slow_downcall(const char __user *buf,
>>  				   size_t count, struct cache_detail *cd)
>>  {
>> -	static char write_buf[8192]; /* protected by queue_io_mutex */
>> +	static char write_buf[32768]; /* protected by queue_io_mutex */
>>  	ssize_t ret = -EINVAL;
>>
>>  	if (count >= sizeof(write_buf))

This is now commit 27a1e8a0f79e643d ("sunrpc: raise kernel RPC channel
buffer size") upstream, and increases kernel size by 24 KiB, even if
RPC is not used.

Can this buffer be allocated dynamically instead? This code path seems to
be a slow path anyway. If it's performance-critical, perhaps the buffer
can be allocated on first use?

Thanks!

Gr{oetje,eeting}s,

 						Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
 							    -- Linus Torvalds

Thread overview: 6+ messages
2020-10-19  9:33 [PATCH] sunrpc: raise kernel RPC channel buffer size Roberto Bergantinos Corpas
2020-10-19 13:20 ` J. Bruce Fields
2020-10-23  9:44   ` Geert Uytterhoeven [this message]
2020-10-24  0:04     ` J. Bruce Fields
2020-10-24  1:29       ` Roberto Bergantinos Corpas
2020-10-24  2:09         ` J. Bruce Fields
