* [PATCH v2 0/3] NFSD DRC fixes for v6.7-rc
@ 2023-11-10 16:28 Chuck Lever
2023-11-10 16:28 ` [PATCH v2 1/3] NFSD: Fix "start of NFS reply" pointer passed to nfsd_cache_update() Chuck Lever
` (3 more replies)
0 siblings, 4 replies; 10+ messages in thread
From: Chuck Lever @ 2023-11-10 16:28 UTC (permalink / raw)
To: linux-nfs
I've chased down the reported failures with krb5i/p. Essentially
the DRC was broken for krb5i/p because of the extra crap those
security flavors stick before and after the NFS headers.
These patches do address bugs in stable kernels, but they will need
to be massaged, tested, and applied by hand to get there. Voting now
open on whether it's worth the trouble for us to do it now, or wait
until someone complains about a particular LTS problem.
---
Chuck Lever (3):
NFSD: Fix "start of NFS reply" pointer passed to nfsd_cache_update()
NFSD: Update nfsd_cache_append() to use xdr_stream
NFSD: Fix checksum mismatches in the duplicate reply cache
fs/nfsd/cache.h | 4 +--
fs/nfsd/nfscache.c | 87 +++++++++++++++++++++++++++-------------------
fs/nfsd/nfssvc.c | 14 ++++++--
3 files changed, 65 insertions(+), 40 deletions(-)
--
Chuck Lever
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v2 1/3] NFSD: Fix "start of NFS reply" pointer passed to nfsd_cache_update()
2023-11-10 16:28 [PATCH v2 0/3] NFSD DRC fixes for v6.7-rc Chuck Lever
@ 2023-11-10 16:28 ` Chuck Lever
2023-11-17 14:57 ` Jeff Layton
2023-11-10 16:28 ` [PATCH v2 2/3] NFSD: Update nfsd_cache_append() to use xdr_stream Chuck Lever
` (2 subsequent siblings)
3 siblings, 1 reply; 10+ messages in thread
From: Chuck Lever @ 2023-11-10 16:28 UTC (permalink / raw)
To: linux-nfs
From: Chuck Lever <chuck.lever@oracle.com>
The "statp + 1" pointer that is passed to nfsd_cache_update() is
supposed to point to the start of the egress NFS Reply header. In
fact, it does point there for AUTH_SYS and RPCSEC_GSS_KRB5 requests.
But both krb5i and krb5p add fields between the RPC header's
accept_stat field and the start of the NFS Reply header. In those
cases, "statp + 1" points at the extra fields instead of the Reply.
The result is that nfsd_cache_update() caches what looks to the
client like garbage.
A connection break can occur for a number of reasons, but the most
common reason when using krb5i/p is a GSS sequence number window
underrun. When an underrun is detected, the server is obliged to
drop the RPC and the connection to force a retransmit with a fresh
GSS sequence number. The client presents the same XID, it hits in
the server's DRC, and the server returns the garbage cache entry.
The "statp + 1" argument has been used since the oldest changeset
in the kernel history repo, so it has been in nfsd_dispatch()
literally since before history began. The problem arose only when
the server-side GSS implementation was added twenty years ago.
This particular patch applies cleanly to v6.5 and later, but needs
some context adjustment to apply to earlier kernels. Before v5.16,
nfsd_dispatch() does not use xdr_stream, so saving the NFS header
pointer before calling ->pc_encode is still an appropriate fix
but it needs to be implemented differently.
Cc: <stable@vger.kernel.org> # v5.16+
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
fs/nfsd/nfssvc.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index d6122bb2d167..60aacca2bca6 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -981,6 +981,7 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
const struct svc_procedure *proc = rqstp->rq_procinfo;
__be32 *statp = rqstp->rq_accept_statp;
struct nfsd_cacherep *rp;
+ __be32 *nfs_reply;
/*
* Give the xdr decoder a chance to change this if it wants
@@ -1014,6 +1015,7 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
if (test_bit(RQ_DROPME, &rqstp->rq_flags))
goto out_update_drop;
+ nfs_reply = xdr_inline_decode(&rqstp->rq_res_stream, 0);
if (!proc->pc_encode(rqstp, &rqstp->rq_res_stream))
goto out_encode_err;
@@ -1023,7 +1025,7 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
*/
smp_store_release(&rqstp->rq_status_counter, rqstp->rq_status_counter + 1);
- nfsd_cache_update(rqstp, rp, rqstp->rq_cachetype, statp + 1);
+ nfsd_cache_update(rqstp, rp, rqstp->rq_cachetype, nfs_reply);
out_cached_reply:
return 1;
* [PATCH v2 2/3] NFSD: Update nfsd_cache_append() to use xdr_stream
2023-11-10 16:28 [PATCH v2 0/3] NFSD DRC fixes for v6.7-rc Chuck Lever
2023-11-10 16:28 ` [PATCH v2 1/3] NFSD: Fix "start of NFS reply" pointer passed to nfsd_cache_update() Chuck Lever
@ 2023-11-10 16:28 ` Chuck Lever
2023-11-10 16:28 ` [PATCH v2 3/3] NFSD: Fix checksum mismatches in the duplicate reply cache Chuck Lever
2023-11-11 12:42 ` [PATCH v2 0/3] NFSD DRC fixes for v6.7-rc Jeff Layton
3 siblings, 0 replies; 10+ messages in thread
From: Chuck Lever @ 2023-11-10 16:28 UTC (permalink / raw)
To: linux-nfs
From: Chuck Lever <chuck.lever@oracle.com>
When inserting a DRC-cached response into the reply buffer, ensure
that the buffer's xdr_stream is updated properly. Otherwise the
server will send a garbage response.
This should have been part of last year's series to handle NFS
response encoding using the xdr_stream utilities.
Cc: <stable@vger.kernel.org> # v5.16+
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
fs/nfsd/nfscache.c | 23 ++++++++---------------
1 file changed, 8 insertions(+), 15 deletions(-)
diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
index fd56a52aa5fb..9046331e0e5e 100644
--- a/fs/nfsd/nfscache.c
+++ b/fs/nfsd/nfscache.c
@@ -641,24 +641,17 @@ void nfsd_cache_update(struct svc_rqst *rqstp, struct nfsd_cacherep *rp,
return;
}
-/*
- * Copy cached reply to current reply buffer. Should always fit.
- * FIXME as reply is in a page, we should just attach the page, and
- * keep a refcount....
- */
static int
nfsd_cache_append(struct svc_rqst *rqstp, struct kvec *data)
{
- struct kvec *vec = &rqstp->rq_res.head[0];
-
- if (vec->iov_len + data->iov_len > PAGE_SIZE) {
- printk(KERN_WARNING "nfsd: cached reply too large (%zd).\n",
- data->iov_len);
- return 0;
- }
- memcpy((char*)vec->iov_base + vec->iov_len, data->iov_base, data->iov_len);
- vec->iov_len += data->iov_len;
- return 1;
+ __be32 *p;
+
+ p = xdr_reserve_space(&rqstp->rq_res_stream, data->iov_len);
+ if (unlikely(!p))
+ return false;
+ memcpy(p, data->iov_base, data->iov_len);
+ xdr_commit_encode(&rqstp->rq_res_stream);
+ return true;
}
/*
* [PATCH v2 3/3] NFSD: Fix checksum mismatches in the duplicate reply cache
2023-11-10 16:28 [PATCH v2 0/3] NFSD DRC fixes for v6.7-rc Chuck Lever
2023-11-10 16:28 ` [PATCH v2 1/3] NFSD: Fix "start of NFS reply" pointer passed to nfsd_cache_update() Chuck Lever
2023-11-10 16:28 ` [PATCH v2 2/3] NFSD: Update nfsd_cache_append() to use xdr_stream Chuck Lever
@ 2023-11-10 16:28 ` Chuck Lever
2023-11-11 12:42 ` [PATCH v2 0/3] NFSD DRC fixes for v6.7-rc Jeff Layton
3 siblings, 0 replies; 10+ messages in thread
From: Chuck Lever @ 2023-11-10 16:28 UTC (permalink / raw)
To: linux-nfs
From: Chuck Lever <chuck.lever@oracle.com>
nfsd_cache_csum() currently assumes that the server's RPC layer has
been advancing rq_arg.head[0].iov_base as it decodes an incoming
request, because that's the way it used to work. On entry, it
expects that buf->head[0].iov_base points to the start of the NFS
header, and excludes the already-decoded RPC header.
These days, however, head[0].iov_base points to the start of the
RPC header throughout processing. It no longer points at the NFS
Call header by the time execution arrives at nfsd_cache_csum().
In a retransmitted RPC the XID and the NFS header are supposed to
be the same as the original message, but the contents of the
retransmitted RPC header can be different. For example, for krb5,
the GSS sequence number will be different between the two. Thus if
the RPC header is always included in the DRC checksum computation,
the checksum of the retransmitted message might not match the
checksum of the original message, even though the NFS part of these
messages is identical.
The result is that, even if a matching XID is found in the DRC,
the checksum mismatch causes the server to execute the
retransmitted RPC transaction again.
It might be appropriate to consider applying this fix to kernels
as early as v6.3, if someone can make it apply cleanly and test it
properly.
Cc: <stable@vger.kernel.org> # v6.6+
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
fs/nfsd/cache.h | 4 ++-
fs/nfsd/nfscache.c | 64 +++++++++++++++++++++++++++++++++++-----------------
fs/nfsd/nfssvc.c | 10 +++++++-
3 files changed, 54 insertions(+), 24 deletions(-)
diff --git a/fs/nfsd/cache.h b/fs/nfsd/cache.h
index 929248c6ca84..4cbe0434cbb8 100644
--- a/fs/nfsd/cache.h
+++ b/fs/nfsd/cache.h
@@ -84,8 +84,8 @@ int nfsd_net_reply_cache_init(struct nfsd_net *nn);
void nfsd_net_reply_cache_destroy(struct nfsd_net *nn);
int nfsd_reply_cache_init(struct nfsd_net *);
void nfsd_reply_cache_shutdown(struct nfsd_net *);
-int nfsd_cache_lookup(struct svc_rqst *rqstp,
- struct nfsd_cacherep **cacherep);
+int nfsd_cache_lookup(struct svc_rqst *rqstp, unsigned int start,
+ unsigned int len, struct nfsd_cacherep **cacherep);
void nfsd_cache_update(struct svc_rqst *rqstp, struct nfsd_cacherep *rp,
int cachetype, __be32 *statp);
int nfsd_reply_cache_stats_show(struct seq_file *m, void *v);
diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
index 9046331e0e5e..d3273a396659 100644
--- a/fs/nfsd/nfscache.c
+++ b/fs/nfsd/nfscache.c
@@ -369,33 +369,52 @@ nfsd_reply_cache_scan(struct shrinker *shrink, struct shrink_control *sc)
return freed;
}
-/*
- * Walk an xdr_buf and get a CRC for at most the first RC_CSUMLEN bytes
+/**
+ * nfsd_cache_csum - Checksum incoming NFS Call arguments
+ * @buf: buffer containing a whole RPC Call message
+ * @start: starting byte of the NFS Call header
+ * @remaining: size of the NFS Call header, in bytes
+ *
+ * Compute a weak checksum of the leading bytes of an NFS procedure
+ * call header to help verify that a retransmitted Call matches an
+ * entry in the duplicate reply cache.
+ *
+ * To avoid assumptions about how the RPC message is laid out in
+ * @buf and what else it might contain (eg, a GSS MIC suffix), the
+ * caller passes us the exact location and length of the NFS Call
+ * header.
+ *
+ * Returns a 32-bit checksum value, as defined in RFC 793.
*/
-static __wsum
-nfsd_cache_csum(struct svc_rqst *rqstp)
+static __wsum nfsd_cache_csum(struct xdr_buf *buf, unsigned int start,
+ unsigned int remaining)
{
+ unsigned int base, len;
+ struct xdr_buf subbuf;
+ __wsum csum = 0;
+ void *p;
int idx;
- unsigned int base;
- __wsum csum;
- struct xdr_buf *buf = &rqstp->rq_arg;
- const unsigned char *p = buf->head[0].iov_base;
- size_t csum_len = min_t(size_t, buf->head[0].iov_len + buf->page_len,
- RC_CSUMLEN);
- size_t len = min(buf->head[0].iov_len, csum_len);
+
+ if (remaining > RC_CSUMLEN)
+ remaining = RC_CSUMLEN;
+ if (xdr_buf_subsegment(buf, &subbuf, start, remaining))
+ return csum;
/* rq_arg.head first */
- csum = csum_partial(p, len, 0);
- csum_len -= len;
+ if (subbuf.head[0].iov_len) {
+ len = min_t(unsigned int, subbuf.head[0].iov_len, remaining);
+ csum = csum_partial(subbuf.head[0].iov_base, len, csum);
+ remaining -= len;
+ }
/* Continue into page array */
- idx = buf->page_base / PAGE_SIZE;
- base = buf->page_base & ~PAGE_MASK;
- while (csum_len) {
- p = page_address(buf->pages[idx]) + base;
- len = min_t(size_t, PAGE_SIZE - base, csum_len);
+ idx = subbuf.page_base / PAGE_SIZE;
+ base = subbuf.page_base & ~PAGE_MASK;
+ while (remaining) {
+ p = page_address(subbuf.pages[idx]) + base;
+ len = min_t(unsigned int, PAGE_SIZE - base, remaining);
csum = csum_partial(p, len, csum);
- csum_len -= len;
+ remaining -= len;
base = 0;
++idx;
}
@@ -466,6 +485,8 @@ nfsd_cache_insert(struct nfsd_drc_bucket *b, struct nfsd_cacherep *key,
/**
* nfsd_cache_lookup - Find an entry in the duplicate reply cache
* @rqstp: Incoming Call to find
+ * @start: starting byte in @rqstp->rq_arg of the NFS Call header
+ * @len: size of the NFS Call header, in bytes
* @cacherep: OUT: DRC entry for this request
*
* Try to find an entry matching the current call in the cache. When none
@@ -479,7 +500,8 @@ nfsd_cache_insert(struct nfsd_drc_bucket *b, struct nfsd_cacherep *key,
* %RC_REPLY: Reply from cache
* %RC_DROPIT: Do not process the request further
*/
-int nfsd_cache_lookup(struct svc_rqst *rqstp, struct nfsd_cacherep **cacherep)
+int nfsd_cache_lookup(struct svc_rqst *rqstp, unsigned int start,
+ unsigned int len, struct nfsd_cacherep **cacherep)
{
struct nfsd_net *nn;
struct nfsd_cacherep *rp, *found;
@@ -495,7 +517,7 @@ int nfsd_cache_lookup(struct svc_rqst *rqstp, struct nfsd_cacherep **cacherep)
goto out;
}
- csum = nfsd_cache_csum(rqstp);
+ csum = nfsd_cache_csum(&rqstp->rq_arg, start, len);
/*
* Since the common case is a cache miss followed by an insert,
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 60aacca2bca6..d8eeb760b5d9 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -981,6 +981,7 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
const struct svc_procedure *proc = rqstp->rq_procinfo;
__be32 *statp = rqstp->rq_accept_statp;
struct nfsd_cacherep *rp;
+ unsigned int start, len;
__be32 *nfs_reply;
/*
@@ -989,6 +990,13 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
*/
rqstp->rq_cachetype = proc->pc_cachetype;
+ /*
+ * ->pc_decode advances the argument stream past the NFS
+ * Call header, so grab the header's starting location and
+ * size now for the call to nfsd_cache_lookup().
+ */
+ start = xdr_stream_pos(&rqstp->rq_arg_stream);
+ len = xdr_stream_remaining(&rqstp->rq_arg_stream);
if (!proc->pc_decode(rqstp, &rqstp->rq_arg_stream))
goto out_decode_err;
@@ -1002,7 +1010,7 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
smp_store_release(&rqstp->rq_status_counter, rqstp->rq_status_counter | 1);
rp = NULL;
- switch (nfsd_cache_lookup(rqstp, &rp)) {
+ switch (nfsd_cache_lookup(rqstp, start, len, &rp)) {
case RC_DOIT:
break;
case RC_REPLY:
* Re: [PATCH v2 0/3] NFSD DRC fixes for v6.7-rc
2023-11-10 16:28 [PATCH v2 0/3] NFSD DRC fixes for v6.7-rc Chuck Lever
` (2 preceding siblings ...)
2023-11-10 16:28 ` [PATCH v2 3/3] NFSD: Fix checksum mismatches in the duplicate reply cache Chuck Lever
@ 2023-11-11 12:42 ` Jeff Layton
2023-11-11 16:09 ` Chuck Lever III
3 siblings, 1 reply; 10+ messages in thread
From: Jeff Layton @ 2023-11-11 12:42 UTC (permalink / raw)
To: Chuck Lever, linux-nfs
On Fri, 2023-11-10 at 11:28 -0500, Chuck Lever wrote:
> I've chased down the reported failures with krb5i/p. Essentially
> the DRC was broken for krb5i/p because of the extra crap those
> security flavors stick before and after the NFS headers.
Huh, so the reported failures were due to retransmissions?
>
> These patches do address bugs in stable kernels, but they will need
> to be massaged, tested, and applied by hand to get there. Voting now
> open on whether it's worth the trouble for us to do it now, or wait
> until someone complains about a particular LTS problem.
>
> ---
>
> Chuck Lever (3):
> NFSD: Fix "start of NFS reply" pointer passed to nfsd_cache_update()
> NFSD: Update nfsd_cache_append() to use xdr_stream
> NFSD: Fix checksum mismatches in the duplicate reply cache
>
>
> fs/nfsd/cache.h | 4 +--
> fs/nfsd/nfscache.c | 87 +++++++++++++++++++++++++++-------------------
> fs/nfsd/nfssvc.c | 14 ++++++--
> 3 files changed, 65 insertions(+), 40 deletions(-)
>
> --
> Chuck Lever
>
Wow, nice work tracking those down. I'm sure that wasn't easy!
Reviewed-by: Jeff Layton <jlayton@kernel.org>
* Re: [PATCH v2 0/3] NFSD DRC fixes for v6.7-rc
2023-11-11 12:42 ` [PATCH v2 0/3] NFSD DRC fixes for v6.7-rc Jeff Layton
@ 2023-11-11 16:09 ` Chuck Lever III
0 siblings, 0 replies; 10+ messages in thread
From: Chuck Lever III @ 2023-11-11 16:09 UTC (permalink / raw)
To: Jeff Layton; +Cc: Chuck Lever, Linux NFS Mailing List
> On Nov 11, 2023, at 7:42 AM, Jeff Layton <jlayton@kernel.org> wrote:
>
> On Fri, 2023-11-10 at 11:28 -0500, Chuck Lever wrote:
>> I've chased down the reported failures with krb5i/p. Essentially
>> the DRC was broken for krb5i/p because of the extra crap those
>> security flavors stick before and after the NFS headers.
>
> Huh, so the reported failures were due to retransmissions?
Good question. This cover letter leaves out some of the story.
I noticed test failures when running the git regression suite
with 12 threads on a sec=krb5i mount point, proto=rdma. The
failure was 100% reproducible. I saw some retransmits in the
mountstats error counters.
The initial trigger of the retransmissions is GSS sequence
window underruns, which are rather frequent with this test
set-up. The server is mandated to drop the request and
connection after each detected underrun.
At first I thought that getting rid of the underruns would
fix the issue, but then realized that that's not possible to
do completely. Then I realized that a simple retransmit is not
supposed to result in vastly different application behavior
(at least most of the time) and so dug into that.
The hole I crawled out of at the end was that the DRC was
busted. And not just one bug, but three separate issues.
Once all three are addressed, the test passes on krb5i and
krb5p.
>> These patches do address bugs in stable kernels, but they will need
>> to be massaged, tested, and applied by hand to get there. Voting now
>> open on whether it's worth the trouble for us to do it now, or wait
>> until someone complains about a particular LTS problem.
I haven't been able to successfully build a kernel older
than about v5.18 on my Fedora 38 systems. I'm thinking that,
in general, we could test stable backports using kdevops
with older Fedora vagrant boxes.
But I haven't gotten the full kdevops environment to run
here yet, and thus don't have a git regression workflow. And
I guess we don't have RDMA working in this environment yet
either. (siw might not work well or at all on older kernels).
Maybe the connection losses could be triggered instead with
disconnect injection.
>> ---
>>
>> Chuck Lever (3):
>> NFSD: Fix "start of NFS reply" pointer passed to nfsd_cache_update()
>> NFSD: Update nfsd_cache_append() to use xdr_stream
>> NFSD: Fix checksum mismatches in the duplicate reply cache
>>
>>
>> fs/nfsd/cache.h | 4 +--
>> fs/nfsd/nfscache.c | 87 +++++++++++++++++++++++++++-------------------
>> fs/nfsd/nfssvc.c | 14 ++++++--
>> 3 files changed, 65 insertions(+), 40 deletions(-)
>>
>> --
>> Chuck Lever
>>
>
> Wow, nice work tracking those down. I'm sure that wasn't easy!
The hardest part was not giving up.
> Reviewed-by: Jeff Layton <jlayton@kernel.org>
Much appreciated!
--
Chuck Lever
* Re: [PATCH v2 1/3] NFSD: Fix "start of NFS reply" pointer passed to nfsd_cache_update()
2023-11-10 16:28 ` [PATCH v2 1/3] NFSD: Fix "start of NFS reply" pointer passed to nfsd_cache_update() Chuck Lever
@ 2023-11-17 14:57 ` Jeff Layton
2023-11-17 15:08 ` Chuck Lever
0 siblings, 1 reply; 10+ messages in thread
From: Jeff Layton @ 2023-11-17 14:57 UTC (permalink / raw)
To: Chuck Lever, linux-nfs
On Fri, 2023-11-10 at 11:28 -0500, Chuck Lever wrote:
> From: Chuck Lever <chuck.lever@oracle.com>
>
> The "statp + 1" pointer that is passed to nfsd_cache_update() is
> supposed to point to the start of the egress NFS Reply header. In
> fact, it does point there for AUTH_SYS and RPCSEC_GSS_KRB5 requests.
>
> But both krb5i and krb5p add fields between the RPC header's
> accept_stat field and the start of the NFS Reply header. In those
> cases, "statp + 1" points at the extra fields instead of the Reply.
> The result is that nfsd_cache_update() caches what looks to the
> client like garbage.
>
> A connection break can occur for a number of reasons, but the most
> common reason when using krb5i/p is a GSS sequence number window
> underrun. When an underrun is detected, the server is obliged to
> drop the RPC and the connection to force a retransmit with a fresh
> GSS sequence number. The client presents the same XID, it hits in
> the server's DRC, and the server returns the garbage cache entry.
>
> The "statp + 1" argument has been used since the oldest changeset
> in the kernel history repo, so it has been in nfsd_dispatch()
> literally since before history began. The problem arose only when
> the server-side GSS implementation was added twenty years ago.
>
> This particular patch applies cleanly to v6.5 and later, but needs
> some context adjustment to apply to earlier kernels. Before v5.16,
> nfsd_dispatch() does not use xdr_stream, so saving the NFS header
> pointer before calling ->pc_encode is still an appropriate fix
> but it needs to be implemented differently.
>
> Cc: <stable@vger.kernel.org> # v5.16+
> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> ---
> fs/nfsd/nfssvc.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
> index d6122bb2d167..60aacca2bca6 100644
> --- a/fs/nfsd/nfssvc.c
> +++ b/fs/nfsd/nfssvc.c
> @@ -981,6 +981,7 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
> const struct svc_procedure *proc = rqstp->rq_procinfo;
> __be32 *statp = rqstp->rq_accept_statp;
> struct nfsd_cacherep *rp;
> + __be32 *nfs_reply;
>
> /*
> * Give the xdr decoder a chance to change this if it wants
> @@ -1014,6 +1015,7 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
> if (test_bit(RQ_DROPME, &rqstp->rq_flags))
> goto out_update_drop;
>
> + nfs_reply = xdr_inline_decode(&rqstp->rq_res_stream, 0);
> if (!proc->pc_encode(rqstp, &rqstp->rq_res_stream))
> goto out_encode_err;
>
> @@ -1023,7 +1025,7 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
> */
> smp_store_release(&rqstp->rq_status_counter, rqstp->rq_status_counter + 1);
>
> - nfsd_cache_update(rqstp, rp, rqstp->rq_cachetype, statp + 1);
> + nfsd_cache_update(rqstp, rp, rqstp->rq_cachetype, nfs_reply);
> out_cached_reply:
> return 1;
>
>
>
With this patch, I'm seeing a regression in pynfs RPLY14. In the
attached capture the client sends a replay of an earlier call, and the
server responds (frame #97) with a reply that is truncated just after
the RPC accept state.
--
Jeff Layton <jlayton@kernel.org>
[-- Attachment #2: RPLY14.pcap.gz --]
[-- Type: application/vnd.tcpdump.pcap, Size: 4431 bytes --]
* Re: [PATCH v2 1/3] NFSD: Fix "start of NFS reply" pointer passed to nfsd_cache_update()
2023-11-17 14:57 ` Jeff Layton
@ 2023-11-17 15:08 ` Chuck Lever
2023-11-17 18:58 ` Chuck Lever III
0 siblings, 1 reply; 10+ messages in thread
From: Chuck Lever @ 2023-11-17 15:08 UTC (permalink / raw)
To: Jeff Layton; +Cc: Chuck Lever, linux-nfs
On Fri, Nov 17, 2023 at 09:57:49AM -0500, Jeff Layton wrote:
> On Fri, 2023-11-10 at 11:28 -0500, Chuck Lever wrote:
> > [...]
>
> With this patch, I'm seeing a regression in pynfs RPLY14. In the
> attached capture the client sends a replay of an earlier call, and the
> server responds (frame #97) with a reply that is truncated just after
> the RPC accept state.
I've reproduced it. Looking now.
--
Chuck Lever
* Re: [PATCH v2 1/3] NFSD: Fix "start of NFS reply" pointer passed to nfsd_cache_update()
2023-11-17 15:08 ` Chuck Lever
@ 2023-11-17 18:58 ` Chuck Lever III
2023-11-17 20:11 ` Jeff Layton
0 siblings, 1 reply; 10+ messages in thread
From: Chuck Lever III @ 2023-11-17 18:58 UTC (permalink / raw)
To: Jeff Layton; +Cc: Chuck Lever, Linux NFS Mailing List
> On Nov 17, 2023, at 10:08 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>
> On Fri, Nov 17, 2023 at 09:57:49AM -0500, Jeff Layton wrote:
>> On Fri, 2023-11-10 at 11:28 -0500, Chuck Lever wrote:
>>> [...]
>>
>> With this patch, I'm seeing a regression in pynfs RPLY14. In the
>> attached capture the client sends a replay of an earlier call, and the
>> server responds (frame #97) with a reply that is truncated just after
>> the RPC accept state.
>
> I've reproduced it. Looking now.
One line fix was squashed into "NFSD: Fix "start of NFS reply"
pointer passed to nfsd_cache_update()". The new series is in
the nfsd-fixes branch of my repo on kernel.org.
--
Chuck Lever
* Re: [PATCH v2 1/3] NFSD: Fix "start of NFS reply" pointer passed to nfsd_cache_update()
2023-11-17 18:58 ` Chuck Lever III
@ 2023-11-17 20:11 ` Jeff Layton
0 siblings, 0 replies; 10+ messages in thread
From: Jeff Layton @ 2023-11-17 20:11 UTC (permalink / raw)
To: Chuck Lever III; +Cc: Chuck Lever, Linux NFS Mailing List
On Fri, 2023-11-17 at 18:58 +0000, Chuck Lever III wrote:
>
> > On Nov 17, 2023, at 10:08 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
> >
> > On Fri, Nov 17, 2023 at 09:57:49AM -0500, Jeff Layton wrote:
> > > On Fri, 2023-11-10 at 11:28 -0500, Chuck Lever wrote:
> > > > [...]
> > >
> > > With this patch, I'm seeing a regression in pynfs RPLY14. In the
> > > attached capture the client sends a replay of an earlier call, and the
> > > server responds (frame #97) with a reply that is truncated just after
> > > the RPC accept state.
> >
> > I've reproduced it. Looking now.
>
> One line fix was squashed into "NFSD: Fix "start of NFS reply"
> pointer passed to nfsd_cache_update()". The new series is in
> the nfsd-fixes branch of my repo on kernel.org.
>
LGTM. You can add this to the pile:
Tested-by: Jeff Layton <jlayton@kernel.org>
Thread overview: 10+ messages
2023-11-10 16:28 [PATCH v2 0/3] NFSD DRC fixes for v6.7-rc Chuck Lever
2023-11-10 16:28 ` [PATCH v2 1/3] NFSD: Fix "start of NFS reply" pointer passed to nfsd_cache_update() Chuck Lever
2023-11-17 14:57 ` Jeff Layton
2023-11-17 15:08 ` Chuck Lever
2023-11-17 18:58 ` Chuck Lever III
2023-11-17 20:11 ` Jeff Layton
2023-11-10 16:28 ` [PATCH v2 2/3] NFSD: Update nfsd_cache_append() to use xdr_stream Chuck Lever
2023-11-10 16:28 ` [PATCH v2 3/3] NFSD: Fix checksum mismatches in the duplicate reply cache Chuck Lever
2023-11-11 12:42 ` [PATCH v2 0/3] NFSD DRC fixes for v6.7-rc Jeff Layton
2023-11-11 16:09 ` Chuck Lever III