From: Mike Snitzer <snitzer@kernel.org>
To: NeilBrown <neilb@ownmail.net>
Cc: Jeff Layton <jlayton@kernel.org>,
Chuck Lever <chuck.lever@oracle.com>,
Olga Kornievskaia <okorniev@redhat.com>,
Dai Ngo <Dai.Ngo@oracle.com>, Tom Talpey <tom@talpey.com>,
Trond Myklebust <trondmy@kernel.org>,
Anna Schumaker <anna@kernel.org>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Simon Horman <horms@kernel.org>,
David Howells <dhowells@redhat.com>,
Brandon Adams <brandona@meta.com>,
linux-nfs@vger.kernel.org, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/2] sunrpc: add a slot to rqstp->rq_bvec for TCP record marker
Date: Wed, 8 Oct 2025 20:08:27 -0400
Message-ID: <aOb8-3C6y3wV9sIH@kernel.org>
In-Reply-To: <175996028564.1793333.11431539077389693375@noble.neil.brown.name>

On Thu, Oct 09, 2025 at 08:51:25AM +1100, NeilBrown wrote:
> On Thu, 09 Oct 2025, Jeff Layton wrote:
> > We've seen some occurrences of messages like this in dmesg on some knfsd
> > servers:
> >
> > xdr_buf_to_bvec: bio_vec array overflow
> >
> > Usually followed by messages like this that indicate a short send (note
> > that this message is from an older kernel and the amount that it reports
> > attempting to send is short by 4 bytes):
> >
> > rpc-srv/tcp: nfsd: sent 1048155 when sending 1048152 bytes - shutting down socket
> >
> > svc_tcp_sendmsg() steals a slot in the rq_bvec array for the TCP record
> > marker. If the send is a reply to an unaligned READ, though, there
> > may not be enough slots left in the rq_bvec array.
> >
> > Add a slot to the rq_bvec array, and fix up the array lengths in the
> > callers that care.
> >
> > Fixes: e18e157bb5c8 ("SUNRPC: Send RPC message on TCP with a single sock_sendmsg() call")
> > Tested-by: Brandon Adams <brandona@meta.com>
> > Signed-off-by: Jeff Layton <jlayton@kernel.org>
> > ---
> > fs/nfsd/vfs.c | 6 +++---
> > net/sunrpc/svc.c | 3 ++-
> > net/sunrpc/svcsock.c | 4 ++--
> > 3 files changed, 7 insertions(+), 6 deletions(-)
>
> I can't say that I'm liking this patch.
>
> There are 11 places (in nfsd-testing recently) where
> rq_maxpages is used (as opposed to declared or assigned):
>
> 3 in nfsd/vfs.c
> 4 in sunrpc/svc.c
> 1 in sunrpc/svc_xprt.c
> 2 in sunrpc/svcsock.c
> 1 in xprtrdma/svc_rdma_rc.c
>
> Your patch changes six of those to add 1. I guess the others aren't
> "callers that care". It would help to have it clearly stated why, or
> why not, a caller might care.
>
> But also, what does "rq_maxpages" even mean now?
> The comment in svc.h still says "num of entries in rq_pages"
> which is certainly no longer the case.
> But if it was the case, we should have called it "rq_numpages"
> or similar.
> But maybe it wasn't meant to be the number of pages in the array;
> maybe it was meant to be the maximum number of pages in a request
> or a reply...
> No - that is sv_max_mesg, to which we add 2 and 1.
> So I could ask "why not just add another 1 in svc_serv_maxpages()?"
> Would the callers that might not care be harmed if rq_maxpages were
> one larger than it is?
>
> It seems to me that rq_maxpages is rather confused, and the bug you
> have found which requires this patch is some evidence of that
> confusion. We should fix the confusion, not just the bug.
>
> So simple question to cut through my waffle:
> Would this:
> - return DIV_ROUND_UP(serv->sv_max_mesg, PAGE_SIZE) + 2 + 1;
> + return DIV_ROUND_UP(serv->sv_max_mesg, PAGE_SIZE) + 2 + 1 + 1;
>
> fix the problem? If not, why not? If so, can we just do this, then
> look at renaming rq_maxpages to rq_numpages and audit all the uses
> (and maybe you have already audited...)?

Right, I recently wanted to do the same:

https://lore.kernel.org/linux-nfs/20250909233315.80318-2-snitzer@kernel.org/

Certainly cleaner and preferable to me.
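For reference, folding the extra slot into the central helper would
look roughly like this (a sketch only, assuming the helper's current
static-inline form in svc.h; the per-term comments are my reading of
what each term covers, not anything authoritative):

    static inline unsigned long
    svc_serv_maxpages(const struct svc_serv *serv)
    {
            return DIV_ROUND_UP(serv->sv_max_mesg, PAGE_SIZE)
                    + 2     /* xdr head and tail pages */
                    + 1     /* unaligned payload straddles an extra page */
                    + 1;    /* slot for the TCP record marker in rq_bvec */
    }

Every array sized from it is then large enough by construction, with
no per-caller arithmetic left to get wrong.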
Otherwise, the +1 sprinkled selectively across callers is bound to
become a problem for any new user of rq_maxpages.
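To make the worst case concrete: for a 1MB unaligned READ reply, the
payload alone can straddle one more page than DIV_ROUND_UP() accounts
for, and the xdr head, the tail, and the 4-byte record marker each
need a bvec entry of their own. A standalone sketch of that
accounting (my reading of the layout; illustrative userspace code,
not kernel code):

    #include <stdio.h>

    #define PAGE_SIZE 4096UL
    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        unsigned long payload = 1048576;  /* 1MB READ payload */

        /* An unaligned payload straddles one extra page. */
        unsigned long payload_pages =
                DIV_ROUND_UP(payload, PAGE_SIZE) + 1;

        /* head + payload pages + tail + TCP record marker */
        unsigned long bvecs_needed = 1 + payload_pages + 1 + 1;

        /* Current sizing: DIV_ROUND_UP(sv_max_mesg, PAGE_SIZE) + 2 + 1 */
        unsigned long slots = DIV_ROUND_UP(payload, PAGE_SIZE) + 2 + 1;

        /* Prints "need 260 bvec entries, have 259 slots" */
        printf("need %lu bvec entries, have %lu slots\n",
               bvecs_needed, slots);
        return 0;
    }

That off-by-one is exactly what the "bio_vec array overflow" message
is complaining about.
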
Mike
Thread overview: 11+ messages
2025-10-08 20:23 [PATCH v2 0/2] sunrpc: fix handling of rq_bvec array in svc_rqst Jeff Layton
2025-10-08 20:23 ` [PATCH v2 1/2] sunrpc: account for TCP record marker in rq_bvec array when sending Jeff Layton
2025-10-08 21:32 ` NeilBrown
2025-10-08 20:23 ` [PATCH v2 2/2] sunrpc: add a slot to rqstp->rq_bvec for TCP record marker Jeff Layton
2025-10-08 21:51 ` NeilBrown
2025-10-09 0:08 ` Mike Snitzer [this message]
2025-10-09 12:22 ` Jeff Layton
2025-10-10 0:10 ` NeilBrown
2025-10-10 10:25 ` Jeff Layton
2025-10-10 10:47 ` NeilBrown
2025-10-10 12:47 ` Chuck Lever