From: Chuck Lever <cel@kernel.org>
To: Anna Schumaker <anna@kernel.org>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>,
<linux-nfs@vger.kernel.org>, <linux-rdma@vger.kernel.org>,
Chuck Lever <chuck.lever@oracle.com>
Subject: [PATCH v2 4/8] xprtrdma: Close lost-wakeup race in xprt_rdma_alloc_slot
Date: Thu, 5 Mar 2026 09:50:59 -0500
Message-ID: <20260305145054.7096-14-cel@kernel.org>
In-Reply-To: <20260305145054.7096-10-cel@kernel.org>
From: Chuck Lever <chuck.lever@oracle.com>
xprt_rdma_alloc_slot() and xprt_rdma_free_slot() lack serialization
between the buffer pool and the backlog queue. If a buffer is freed
after rpcrdma_buffer_get() has found the pool empty but before
rpc_sleep_on() has placed the task on the backlog, that buffer
returns to the pool with no waiter to wake, and the task can sit on
the backlog indefinitely.
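An illustrative interleaving (labels only; no new interfaces, and
the free_slot column paraphrases the existing xprt_rdma_free_slot()
path):

    alloc_slot                         free_slot
    ----------                         ---------
    rpcrdma_buffer_get()
      -> pool is empty
                                       backlog is empty, so there is
                                         no waiter to wake
                                       buffer is returned to the pool
    rpc_sleep_on(&xprt->backlog)
      task sleeps with no wakeup
      pending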
After joining the backlog, re-check the pool and route any
recovered buffer through xprt_wake_up_backlog(), whose queue lock
serializes with concurrent wakeups and avoids double-assignment
of slots.
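Sketch of the same interleaving with the re-check applied (the
transport.c hunk below is the authoritative code):

    alloc_slot                         free_slot
    ----------                         ---------
    rpcrdma_buffer_get()
      -> pool is empty
                                       no waiter to wake; buffer is
                                         returned to the pool
    rpc_sleep_on(&xprt->backlog)
    rpcrdma_buffer_get()
      -> recovers the buffer
    xprt_wake_up_backlog()
      -> wakes the queued task under
         the backlog queue lock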
Because xprt_rdma_free_slot() does not hold reserve_lock, the
XPRT_CONGESTED double-check in xprt_throttle_congested() is
ineffective: a task can join the backlog through that path after
free_slot has already found the backlog empty and cleared the bit.
Avoid
this by using xprt_add_backlog_noncongested(), which queues the
task without setting XPRT_CONGESTED, so every allocation reaches
xprt_rdma_alloc_slot() and its post-sleep re-check.
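One ordering that slips past the double-check (illustrative only;
the reserving-task column abbreviates xprt_throttle_congested()):

    reserving task                     xprt_rdma_free_slot
    --------------                     -------------------
    sees XPRT_CONGESTED set
    (both checks pass)
                                       finds the backlog empty,
                                       clears XPRT_CONGESTED,
                                       returns the buffer
    joins the backlog; nothing is
      left to wake it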
Fixes: edb41e61a54e ("xprtrdma: Make rpc_rqst part of rpcrdma_req")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 include/linux/sunrpc/xprt.h     |  2 ++
 net/sunrpc/xprt.c               | 16 ++++++++++++++++
 net/sunrpc/xprtrdma/transport.c | 15 ++++++++++++++-
 3 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index f46d1fb8f71a..a82045804d34 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -404,6 +404,8 @@ struct rpc_xprt * xprt_alloc(struct net *net, size_t size,
unsigned int max_req);
void xprt_free(struct rpc_xprt *);
void xprt_add_backlog(struct rpc_xprt *xprt, struct rpc_task *task);
+void xprt_add_backlog_noncongested(struct rpc_xprt *xprt,
+				   struct rpc_task *task);
bool xprt_wake_up_backlog(struct rpc_xprt *xprt, struct rpc_rqst *req);
void xprt_cleanup_ids(void);
diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index 4fbb57a29704..48a3618cbb29 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -1663,6 +1663,22 @@ void xprt_add_backlog(struct rpc_xprt *xprt, struct rpc_task *task)
 }
 EXPORT_SYMBOL_GPL(xprt_add_backlog);
 
+/**
+ * xprt_add_backlog_noncongested - queue task on backlog
+ * @xprt: transport whose backlog queue receives the task
+ * @task: task to queue
+ *
+ * Like xprt_add_backlog, but does not set XPRT_CONGESTED.
+ * For transports whose free_slot path does not synchronize
+ * with xprt_throttle_congested via reserve_lock.
+ */
+void xprt_add_backlog_noncongested(struct rpc_xprt *xprt,
+				   struct rpc_task *task)
+{
+	rpc_sleep_on(&xprt->backlog, task, xprt_complete_request_init);
+}
+EXPORT_SYMBOL_GPL(xprt_add_backlog_noncongested);
+
 static bool __xprt_set_rq(struct rpc_task *task, void *data)
 {
 	struct rpc_rqst *req = data;
diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index ca079439f9cc..61706df5e485 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -511,7 +511,20 @@ xprt_rdma_alloc_slot(struct rpc_xprt *xprt, struct rpc_task *task)
 
 out_sleep:
 	task->tk_status = -EAGAIN;
-	xprt_add_backlog(xprt, task);
+	xprt_add_backlog_noncongested(xprt, task);
+	/* A buffer freed between buffer_get and rpc_sleep_on
+	 * goes back to the pool with no waiter to wake.
+	 * Re-check after joining the backlog to close that gap.
+	 */
+	req = rpcrdma_buffer_get(&r_xprt->rx_buf);
+	if (req) {
+		struct rpc_rqst *rqst = &req->rl_slot;
+
+		if (!xprt_wake_up_backlog(xprt, rqst)) {
+			memset(rqst, 0, sizeof(*rqst));
+			rpcrdma_buffer_put(&r_xprt->rx_buf, req);
+		}
+	}
 }
 
 /**
--
2.53.0
Thread overview: 12+ messages
2026-03-05 14:50 [PATCH v2 0/8] Fix various races in xprtrdma Chuck Lever
2026-03-05 14:50 ` [PATCH v2 1/8] xprtrdma: Decrement re_receiving on the early exit paths Chuck Lever
2026-03-05 14:52 ` Chuck Lever
2026-03-05 14:50 ` [PATCH v2 2/8] xprtrdma: Close sendctx get/put race that can block a transport Chuck Lever
2026-03-05 14:50 ` [PATCH v2 3/8] xprtrdma: Avoid 250 ms delay on backlog wakeup Chuck Lever
2026-03-05 14:50 ` Chuck Lever [this message]
2026-03-05 14:51 ` [PATCH v2 5/8] xprtrdma: Decouple frwr_wp_create from frwr_map Chuck Lever
2026-03-05 14:51 ` [PATCH v2 6/8] xprtrdma: Replace rpcrdma_mr_seg with xdr_buf cursor Chuck Lever
2026-03-06 16:28 ` Anna Schumaker
2026-03-06 16:30 ` Chuck Lever
2026-03-05 14:51 ` [PATCH v2 7/8] xprtrdma: Scale receive batch size with credit window Chuck Lever
2026-03-05 14:51 ` [PATCH v2 8/8] xprtrdma: Post receive buffers after RPC completion Chuck Lever