* [PATCH 1/5] smb: smbdirect: introduce smbdirect_socket.send_io.lcredits.*
2025-10-20 18:35 [PATCH 0/5] smb: smbdirect: introduce local send credits Stefan Metzmacher
@ 2025-10-20 18:35 ` Stefan Metzmacher
2025-10-20 18:35 ` [PATCH 2/5] smb: server: smb_direct_disconnect_rdma_connection() already wakes all waiters on error Stefan Metzmacher
` (4 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Stefan Metzmacher @ 2025-10-20 18:35 UTC
To: linux-cifs, samba-technical
Cc: metze, Steve French, Tom Talpey, Long Li, Namjae Jeon
This will be used to implement logic that makes sure
we don't overflow the send submission queue used by ib_post_send().
We initialize the local credits with the fixed
sp->send_credit_target value, which matches the number
of slots reserved in the submission queue for ib_post_send().
We take a local credit first and then wait for a remote credit;
only if we manage to get both are we allowed to post an
IB_WR_SEND[_WITH_INV]. The local credit is given back to
the pool when we get the local ib_post_send() completion,
while remote credits are granted by the peer.
From reading the git history of the Linux smbdirect
implementations (in client and server) it was seen
that a peer granted more credits than we requested.
I guess that only happened because of bugs in our
implementation, which was active as both client and server.
I guess Windows won't do that.
So the local credits make sure we only use the number
of credits we asked for.
The client already has some logic for this based on
smbdirect_socket.send_io.pending.count, but that
counts in the other direction and makes it complex to
share common logic for the various credit classes.
That logic will be replaced soon.
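To illustrate the intended flow (a minimal sketch, not the literal
patch code; smbdirect_post_send_sketch() is a hypothetical helper and
the waiting/error unwinding is simplified):

	static int smbdirect_post_send_sketch(struct smbdirect_socket *sc,
					      struct ib_send_wr *wr)
	{
		int ret;

		/* 1. take a local credit: a reserved slot in the send queue */
		if (atomic_dec_if_positive(&sc->send_io.lcredits.count) < 0)
			return -EAGAIN; /* real code waits on lcredits.wait_queue */

		/* 2. take a remote credit granted by the peer */
		if (atomic_dec_if_positive(&sc->send_io.credits.count) < 0) {
			ret = -EAGAIN; /* real code waits on credits.wait_queue */
			goto put_lcredit;
		}

		/* 3. with both credits held we may post IB_WR_SEND[_WITH_INV] */
		ret = ib_post_send(sc->ib.qp, wr, NULL);
		if (ret)
			goto put_credit;
		/* on success the local credit comes back in the send completion */
		return 0;

	put_credit:
		atomic_inc(&sc->send_io.credits.count);
	put_lcredit:
		atomic_inc(&sc->send_io.lcredits.count);
		wake_up(&sc->send_io.lcredits.wait_queue);
		return ret;
	}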
Cc: Steve French <smfrench@gmail.com>
Cc: Tom Talpey <tom@talpey.com>
Cc: Long Li <longli@microsoft.com>
Cc: Namjae Jeon <linkinjeon@kernel.org>
Cc: linux-cifs@vger.kernel.org
Cc: samba-technical@lists.samba.org
Signed-off-by: Stefan Metzmacher <metze@samba.org>
---
fs/smb/common/smbdirect/smbdirect_socket.h | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/fs/smb/common/smbdirect/smbdirect_socket.h b/fs/smb/common/smbdirect/smbdirect_socket.h
index 361db7f9f623..ee5a90d691c8 100644
--- a/fs/smb/common/smbdirect/smbdirect_socket.h
+++ b/fs/smb/common/smbdirect/smbdirect_socket.h
@@ -142,7 +142,15 @@ struct smbdirect_socket {
} mem;
/*
- * The credit state for the send side
+ * The local credit state for ib_post_send()
+ */
+ struct {
+ atomic_t count;
+ wait_queue_head_t wait_queue;
+ } lcredits;
+
+ /*
+ * The remote credit state for the send side
*/
struct {
atomic_t count;
@@ -337,6 +345,9 @@ static __always_inline void smbdirect_socket_init(struct smbdirect_socket *sc)
INIT_DELAYED_WORK(&sc->idle.timer_work, __smbdirect_socket_disabled_work);
disable_delayed_work_sync(&sc->idle.timer_work);
+ atomic_set(&sc->send_io.lcredits.count, 0);
+ init_waitqueue_head(&sc->send_io.lcredits.wait_queue);
+
atomic_set(&sc->send_io.credits.count, 0);
init_waitqueue_head(&sc->send_io.credits.wait_queue);
--
2.43.0
* [PATCH 2/5] smb: server: smb_direct_disconnect_rdma_connection() already wakes all waiters on error
2025-10-20 18:35 [PATCH 0/5] smb: smbdirect: introduce local send credits Stefan Metzmacher
2025-10-20 18:35 ` [PATCH 1/5] smb: smbdirect: introduce smbdirect_socket.send_io.lcredits.* Stefan Metzmacher
@ 2025-10-20 18:35 ` Stefan Metzmacher
2025-10-20 18:36 ` [PATCH 3/5] smb: server: simplify sibling_list handling in smb_direct_flush_send_list/send_done Stefan Metzmacher
` (3 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Stefan Metzmacher @ 2025-10-20 18:35 UTC
To: linux-cifs, samba-technical
Cc: metze, Namjae Jeon, Steve French, Tom Talpey, Steve French
There's no need to care about pending or credit counters when we
are already disconnecting,
and all related wait_event conditions already check for a broken
connection too.
This simplifies the code and makes the following changes simpler.
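For context, a sketch of the existing wake-up helper that the
disconnect path ends up calling (its body is visible as unchanged
context in a later hunk of this series):

	static void smb_direct_disconnect_wake_up_all(struct smbdirect_socket *sc)
	{
		/*
		 * Wake up all waiters in all wait queues
		 * in order to notice the broken connection.
		 */
		wake_up_all(&sc->status_wait);
		wake_up_all(&sc->send_io.credits.wait_queue);
		wake_up_all(&sc->send_io.pending.zero_wait_queue);
		wake_up_all(&sc->recv_io.reassembly.wait_queue);
	}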
Cc: Namjae Jeon <linkinjeon@kernel.org>
Cc: Steve French <smfrench@gmail.com>
Cc: Tom Talpey <tom@talpey.com>
Cc: linux-cifs@vger.kernel.org
Cc: samba-technical@lists.samba.org
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Signed-off-by: Steve French <stfrench@microsoft.com>
---
fs/smb/server/transport_rdma.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c
index 019e5f70d7b3..c4df1328342d 100644
--- a/fs/smb/server/transport_rdma.c
+++ b/fs/smb/server/transport_rdma.c
@@ -988,8 +988,6 @@ static int smb_direct_post_send(struct smbdirect_socket *sc,
ret = ib_post_send(sc->ib.qp, wr, NULL);
if (ret) {
pr_err("failed to post send: %d\n", ret);
- if (atomic_dec_and_test(&sc->send_io.pending.count))
- wake_up(&sc->send_io.pending.zero_wait_queue);
smb_direct_disconnect_rdma_connection(sc);
}
return ret;
@@ -1038,8 +1036,6 @@ static int smb_direct_flush_send_list(struct smbdirect_socket *sc,
send_ctx->need_invalidate_rkey,
send_ctx->remote_key);
} else {
- atomic_add(send_ctx->wr_cnt, &sc->send_io.credits.count);
- wake_up(&sc->send_io.credits.wait_queue);
list_for_each_entry_safe(first, last, &send_ctx->msg_list,
sibling_list) {
smb_direct_free_sendmsg(sc, first);
--
2.43.0
* [PATCH 3/5] smb: server: simplify sibling_list handling in smb_direct_flush_send_list/send_done
2025-10-20 18:35 [PATCH 0/5] smb: smbdirect: introduce local send credits Stefan Metzmacher
2025-10-20 18:35 ` [PATCH 1/5] smb: smbdirect: introduce smbdirect_socket.send_io.lcredits.* Stefan Metzmacher
2025-10-20 18:35 ` [PATCH 2/5] smb: server: smb_direct_disconnect_rdma_connection() already wakes all waiters on error Stefan Metzmacher
@ 2025-10-20 18:36 ` Stefan Metzmacher
2025-10-20 18:36 ` [PATCH 4/5] smb: server: make use of smbdirect_socket.send_io.lcredits.* Stefan Metzmacher
` (2 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Stefan Metzmacher @ 2025-10-20 18:36 UTC
To: linux-cifs, samba-technical
Cc: metze, Namjae Jeon, Steve French, Tom Talpey, Steve French
This introduces list handling that is much easier to understand:

1. Before smb_direct_flush_send_list() is called, all
   struct smbdirect_send_io messages are part of
   send_ctx->msg_list.

2. Before smb_direct_flush_send_list() calls
   smb_direct_post_send(), we remove the last
   element in send_ctx->msg_list and move all
   others into last->sibling_list, as only
   last has IB_SEND_SIGNALED set and gets a completion
   via send_done() (see the worked example below).

3. send_done() then has an easy way to free all others
   in sendmsg->sibling_list (if there are any),
   using list_for_each_entry_safe() instead of
   complex custom logic.

This will help us share send_done() in common
code soon, as it will work fine for the client too,
where last->sibling_list is currently always an empty list.
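As a worked example (a hypothetical batch of three messages A, B and C),
the splice in step 2 does:

	/*
	 * before: send_ctx->msg_list: A -> B -> C    (last == C)
	 * after:  send_ctx->msg_list: empty
	 *         last->sibling_list: A -> B
	 */
	list_del_init(&last->sibling_list);
	list_splice_tail_init(&send_ctx->msg_list, &last->sibling_list);

Only C carries IB_SEND_SIGNALED, so send_done() runs once, frees A and B
via C's sibling_list and finally frees C itself.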
Cc: Namjae Jeon <linkinjeon@kernel.org>
Cc: Steve French <smfrench@gmail.com>
Cc: Tom Talpey <tom@talpey.com>
Cc: linux-cifs@vger.kernel.org
Cc: samba-technical@lists.samba.org
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Signed-off-by: Steve French <stfrench@microsoft.com>
---
fs/smb/server/transport_rdma.c | 60 +++++++++++++++++++++-------------
1 file changed, 38 insertions(+), 22 deletions(-)
diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c
index c4df1328342d..e7e2388c00c2 100644
--- a/fs/smb/server/transport_rdma.c
+++ b/fs/smb/server/transport_rdma.c
@@ -524,6 +524,12 @@ static void smb_direct_free_sendmsg(struct smbdirect_socket *sc,
{
int i;
+ /*
+ * The list needs to be empty!
+ * The caller should take care of it.
+ */
+ WARN_ON_ONCE(!list_empty(&msg->sibling_list));
+
if (msg->num_sge > 0) {
ib_dma_unmap_single(sc->ib.dev,
msg->sge[0].addr, msg->sge[0].length,
@@ -909,9 +915,8 @@ static void smb_direct_post_recv_credits(struct work_struct *work)
static void send_done(struct ib_cq *cq, struct ib_wc *wc)
{
- struct smbdirect_send_io *sendmsg, *sibling;
+ struct smbdirect_send_io *sendmsg, *sibling, *next;
struct smbdirect_socket *sc;
- struct list_head *pos, *prev, *end;
sendmsg = container_of(wc->wr_cqe, struct smbdirect_send_io, cqe);
sc = sendmsg->socket;
@@ -920,27 +925,26 @@ static void send_done(struct ib_cq *cq, struct ib_wc *wc)
ib_wc_status_msg(wc->status), wc->status,
wc->opcode);
+ /*
+ * Free possible siblings and then the main send_io
+ */
+ list_for_each_entry_safe(sibling, next, &sendmsg->sibling_list, sibling_list) {
+ list_del_init(&sibling->sibling_list);
+ smb_direct_free_sendmsg(sc, sibling);
+ }
+ /* Note this frees wc->wr_cqe, but not wc */
+ smb_direct_free_sendmsg(sc, sendmsg);
+
if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_SEND) {
pr_err("Send error. status='%s (%d)', opcode=%d\n",
ib_wc_status_msg(wc->status), wc->status,
wc->opcode);
smb_direct_disconnect_rdma_connection(sc);
+ return;
}
if (atomic_dec_and_test(&sc->send_io.pending.count))
wake_up(&sc->send_io.pending.zero_wait_queue);
-
- /* iterate and free the list of messages in reverse. the list's head
- * is invalid.
- */
- for (pos = &sendmsg->sibling_list, prev = pos->prev, end = sendmsg->sibling_list.next;
- prev != end; pos = prev, prev = prev->prev) {
- sibling = container_of(pos, struct smbdirect_send_io, sibling_list);
- smb_direct_free_sendmsg(sc, sibling);
- }
-
- sibling = container_of(pos, struct smbdirect_send_io, sibling_list);
- smb_direct_free_sendmsg(sc, sibling);
}
static int manage_credits_prior_sending(struct smbdirect_socket *sc)
@@ -1030,17 +1034,29 @@ static int smb_direct_flush_send_list(struct smbdirect_socket *sc,
last->wr.send_flags = IB_SEND_SIGNALED;
last->wr.wr_cqe = &last->cqe;
+ /*
+ * Remove last from send_ctx->msg_list
+ * and splice the rest of send_ctx->msg_list
+ * to last->sibling_list.
+ *
+ * send_ctx->msg_list is a valid empty list
+ * at the end.
+ */
+ list_del_init(&last->sibling_list);
+ list_splice_tail_init(&send_ctx->msg_list, &last->sibling_list);
+ send_ctx->wr_cnt = 0;
+
ret = smb_direct_post_send(sc, &first->wr);
- if (!ret) {
- smb_direct_send_ctx_init(send_ctx,
- send_ctx->need_invalidate_rkey,
- send_ctx->remote_key);
- } else {
- list_for_each_entry_safe(first, last, &send_ctx->msg_list,
- sibling_list) {
- smb_direct_free_sendmsg(sc, first);
+ if (ret) {
+ struct smbdirect_send_io *sibling, *next;
+
+ list_for_each_entry_safe(sibling, next, &last->sibling_list, sibling_list) {
+ list_del_init(&sibling->sibling_list);
+ smb_direct_free_sendmsg(sc, sibling);
}
+ smb_direct_free_sendmsg(sc, last);
}
+
return ret;
}
--
2.43.0
* [PATCH 4/5] smb: server: make use of smbdirect_socket.send_io.lcredits.*
2025-10-20 18:35 [PATCH 0/5] smb: smbdirect: introduce local send credits Stefan Metzmacher
` (2 preceding siblings ...)
2025-10-20 18:36 ` [PATCH 3/5] smb: server: simplify sibling_list handling in smb_direct_flush_send_list/send_done Stefan Metzmacher
@ 2025-10-20 18:36 ` Stefan Metzmacher
2025-10-20 18:36 ` [PATCH 5/5] smb: client: " Stefan Metzmacher
2025-10-21 6:51 ` [PATCH 0/5] smb: smbdirect: introduce local send credits Namjae Jeon
5 siblings, 0 replies; 7+ messages in thread
From: Stefan Metzmacher @ 2025-10-20 18:36 UTC
To: linux-cifs, samba-technical; +Cc: metze, Namjae Jeon, Steve French, Tom Talpey
This introduces logic to prevent an overflow of
the send submission queue used by ib_post_send():
we first get a local credit and then a remote credit
before we mark the request as pending.
From reading the git history of the Linux smbdirect
implementations (in client and server) it was seen
that a peer granted more credits than we requested.
I guess that only happened because of bugs in our
implementation, which was active as both client and server.
I guess Windows won't do that.
So the local credits make sure we only use the number
of credits we asked for.
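One detail worth spelling out (my reading of the change, restated from
the hunk below with extra comments): unsignaled sends in a batch only
return their local credits once the final signaled completion runs, so
the batch is flushed before blocking on the last local credit:

	static int wait_for_send_lcredit(struct smbdirect_socket *sc,
					 struct smbdirect_send_batch *send_ctx)
	{
		/*
		 * Down to the last local credit: flush the pending
		 * batch first, as its credits only come back via the
		 * signaled send_done() completion.
		 */
		if (send_ctx && (atomic_read(&sc->send_io.lcredits.count) <= 1)) {
			int ret;

			ret = smb_direct_flush_send_list(sc, send_ctx, false);
			if (ret)
				return ret;
		}

		return wait_for_credits(sc,
					&sc->send_io.lcredits.wait_queue,
					&sc->send_io.lcredits.count,
					1);
	}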
Fixes: 0626e6641f6b ("cifsd: add server handler for central processing and tranport layers")
Cc: Namjae Jeon <linkinjeon@kernel.org>
Cc: Steve French <smfrench@gmail.com>
Cc: Tom Talpey <tom@talpey.com>
Cc: linux-cifs@vger.kernel.org
Cc: samba-technical@lists.samba.org
Signed-off-by: Stefan Metzmacher <metze@samba.org>
---
fs/smb/server/transport_rdma.c | 42 ++++++++++++++++++++++++++++++----
1 file changed, 37 insertions(+), 5 deletions(-)
diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c
index e7e2388c00c2..7d86553fcc7c 100644
--- a/fs/smb/server/transport_rdma.c
+++ b/fs/smb/server/transport_rdma.c
@@ -219,6 +219,7 @@ static void smb_direct_disconnect_wake_up_all(struct smbdirect_socket *sc)
* in order to notice the broken connection.
*/
wake_up_all(&sc->status_wait);
+ wake_up_all(&sc->send_io.lcredits.wait_queue);
wake_up_all(&sc->send_io.credits.wait_queue);
wake_up_all(&sc->send_io.pending.zero_wait_queue);
wake_up_all(&sc->recv_io.reassembly.wait_queue);
@@ -917,6 +918,7 @@ static void send_done(struct ib_cq *cq, struct ib_wc *wc)
{
struct smbdirect_send_io *sendmsg, *sibling, *next;
struct smbdirect_socket *sc;
+ int lcredits = 0;
sendmsg = container_of(wc->wr_cqe, struct smbdirect_send_io, cqe);
sc = sendmsg->socket;
@@ -931,9 +933,11 @@ static void send_done(struct ib_cq *cq, struct ib_wc *wc)
list_for_each_entry_safe(sibling, next, &sendmsg->sibling_list, sibling_list) {
list_del_init(&sibling->sibling_list);
smb_direct_free_sendmsg(sc, sibling);
+ lcredits += 1;
}
/* Note this frees wc->wr_cqe, but not wc */
smb_direct_free_sendmsg(sc, sendmsg);
+ lcredits += 1;
if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_SEND) {
pr_err("Send error. status='%s (%d)', opcode=%d\n",
@@ -943,6 +947,9 @@ static void send_done(struct ib_cq *cq, struct ib_wc *wc)
return;
}
+ atomic_add(lcredits, &sc->send_io.lcredits.count);
+ wake_up(&sc->send_io.lcredits.wait_queue);
+
if (atomic_dec_and_test(&sc->send_io.pending.count))
wake_up(&sc->send_io.pending.zero_wait_queue);
}
@@ -1082,6 +1089,23 @@ static int wait_for_credits(struct smbdirect_socket *sc,
} while (true);
}
+static int wait_for_send_lcredit(struct smbdirect_socket *sc,
+ struct smbdirect_send_batch *send_ctx)
+{
+ if (send_ctx && (atomic_read(&sc->send_io.lcredits.count) <= 1)) {
+ int ret;
+
+ ret = smb_direct_flush_send_list(sc, send_ctx, false);
+ if (ret)
+ return ret;
+ }
+
+ return wait_for_credits(sc,
+ &sc->send_io.lcredits.wait_queue,
+ &sc->send_io.lcredits.count,
+ 1);
+}
+
static int wait_for_send_credits(struct smbdirect_socket *sc,
struct smbdirect_send_batch *send_ctx)
{
@@ -1269,9 +1293,13 @@ static int smb_direct_post_send_data(struct smbdirect_socket *sc,
int data_length;
struct scatterlist sg[SMBDIRECT_SEND_IO_MAX_SGE - 1];
+ ret = wait_for_send_lcredit(sc, send_ctx);
+ if (ret)
+ goto lcredit_failed;
+
ret = wait_for_send_credits(sc, send_ctx);
if (ret)
- return ret;
+ goto credit_failed;
data_length = 0;
for (i = 0; i < niov; i++)
@@ -1279,10 +1307,8 @@ static int smb_direct_post_send_data(struct smbdirect_socket *sc,
ret = smb_direct_create_header(sc, data_length, remaining_data_length,
&msg);
- if (ret) {
- atomic_inc(&sc->send_io.credits.count);
- return ret;
- }
+ if (ret)
+ goto header_failed;
for (i = 0; i < niov; i++) {
struct ib_sge *sge;
@@ -1320,7 +1346,11 @@ static int smb_direct_post_send_data(struct smbdirect_socket *sc,
return 0;
err:
smb_direct_free_sendmsg(sc, msg);
+header_failed:
atomic_inc(&sc->send_io.credits.count);
+credit_failed:
+ atomic_inc(&sc->send_io.lcredits.count);
+lcredit_failed:
return ret;
}
@@ -1897,6 +1927,8 @@ static int smb_direct_init_params(struct smbdirect_socket *sc)
return -EINVAL;
}
+ atomic_set(&sc->send_io.lcredits.count, sp->send_credit_target);
+
maxpages = DIV_ROUND_UP(sp->max_read_write_size, PAGE_SIZE);
sc->rw_io.credits.max = rdma_rw_mr_factor(sc->ib.dev,
sc->rdma.cm_id->port_num,
--
2.43.0
* [PATCH 5/5] smb: client: make use of smbdirect_socket.send_io.lcredits.*
2025-10-20 18:35 [PATCH 0/5] smb: smbdirect: introduce local send credits Stefan Metzmacher
` (3 preceding siblings ...)
2025-10-20 18:36 ` [PATCH 4/5] smb: server: make use of smbdirect_socket.send_io.lcredits.* Stefan Metzmacher
@ 2025-10-20 18:36 ` Stefan Metzmacher
2025-10-21 6:51 ` [PATCH 0/5] smb: smbdirect: introduce local send credits Namjae Jeon
5 siblings, 0 replies; 7+ messages in thread
From: Stefan Metzmacher @ 2025-10-20 18:36 UTC
To: linux-cifs, samba-technical
Cc: metze, Steve French, Tom Talpey, Long Li, Namjae Jeon
This makes the logic that prevents an overflow of
the send submission queue used by ib_post_send() easier:
we first get a local credit and then a remote credit
before we mark the request as pending.
For now we'll keep the logic around smbdirect_socket.send_io.pending.*,
but that will likely change or be removed completely.
The server will get similar logic soon, so
we'll be able to share the send code in the future.
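Condensed, the resulting order in smbd_post_send_iter() is (a sketch
only; take_local_credit()/take_remote_credit() are hypothetical wrappers
around the wait loops, and the labels match the diff below):

	rc = take_local_credit(sc);	/* hypothetical wrapper */
	if (rc)
		goto err_wait_lcredit;

	rc = take_remote_credit(sc);	/* hypothetical wrapper */
	if (rc)
		goto err_wait_credit;

	/* ... allocate and build the request ... */

	/* only with both credits held do we count as pending */
	atomic_inc(&sc->send_io.pending.count);

	rc = smbd_post_send(sc, request);
	if (rc) {
		/* undo pending, then fall through the error path,
		 * which returns the remote and then the local credit
		 */
		if (atomic_dec_and_test(&sc->send_io.pending.count))
			wake_up(&sc->send_io.pending.zero_wait_queue);
		wake_up(&sc->send_io.pending.dec_wait_queue);
	}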
Cc: Steve French <smfrench@gmail.com>
Cc: Tom Talpey <tom@talpey.com>
Cc: Long Li <longli@microsoft.com>
Cc: Namjae Jeon <linkinjeon@kernel.org>
Cc: linux-cifs@vger.kernel.org
Cc: samba-technical@lists.samba.org
Signed-off-by: Stefan Metzmacher <metze@samba.org>
---
fs/smb/client/smbdirect.c | 67 ++++++++++++++++++++++++---------------
1 file changed, 42 insertions(+), 25 deletions(-)
diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
index b1218ea4aa8b..85a4c55b61b8 100644
--- a/fs/smb/client/smbdirect.c
+++ b/fs/smb/client/smbdirect.c
@@ -172,6 +172,7 @@ static void smbd_disconnect_wake_up_all(struct smbdirect_socket *sc)
* in order to notice the broken connection.
*/
wake_up_all(&sc->status_wait);
+ wake_up_all(&sc->send_io.lcredits.wait_queue);
wake_up_all(&sc->send_io.credits.wait_queue);
wake_up_all(&sc->send_io.pending.dec_wait_queue);
wake_up_all(&sc->send_io.pending.zero_wait_queue);
@@ -495,6 +496,7 @@ static void send_done(struct ib_cq *cq, struct ib_wc *wc)
struct smbdirect_send_io *request =
container_of(wc->wr_cqe, struct smbdirect_send_io, cqe);
struct smbdirect_socket *sc = request->socket;
+ int lcredits = 0;
log_rdma_send(INFO, "smbdirect_send_io 0x%p completed wc->status=%s\n",
request, ib_wc_status_msg(wc->status));
@@ -504,22 +506,24 @@ static void send_done(struct ib_cq *cq, struct ib_wc *wc)
request->sge[i].addr,
request->sge[i].length,
DMA_TO_DEVICE);
+ mempool_free(request, sc->send_io.mem.pool);
+ lcredits += 1;
if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_SEND) {
if (wc->status != IB_WC_WR_FLUSH_ERR)
log_rdma_send(ERR, "wc->status=%s wc->opcode=%d\n",
ib_wc_status_msg(wc->status), wc->opcode);
- mempool_free(request, sc->send_io.mem.pool);
smbd_disconnect_rdma_connection(sc);
return;
}
+ atomic_add(lcredits, &sc->send_io.lcredits.count);
+ wake_up(&sc->send_io.lcredits.wait_queue);
+
if (atomic_dec_and_test(&sc->send_io.pending.count))
wake_up(&sc->send_io.pending.zero_wait_queue);
wake_up(&sc->send_io.pending.dec_wait_queue);
-
- mempool_free(request, sc->send_io.mem.pool);
}
static void dump_smbdirect_negotiate_resp(struct smbdirect_negotiate_resp *resp)
@@ -567,6 +571,7 @@ static bool process_negotiation_response(
log_rdma_event(ERR, "error: credits_granted==0\n");
return false;
}
+ atomic_set(&sc->send_io.lcredits.count, sp->send_credit_target);
atomic_set(&sc->send_io.credits.count, le16_to_cpu(packet->credits_granted));
if (le32_to_cpu(packet->preferred_send_size) > sp->max_recv_size) {
@@ -1114,6 +1119,24 @@ static int smbd_post_send_iter(struct smbdirect_socket *sc,
struct smbdirect_data_transfer *packet;
int new_credits = 0;
+wait_lcredit:
+ /* Wait for local send credits */
+ rc = wait_event_interruptible(sc->send_io.lcredits.wait_queue,
+ atomic_read(&sc->send_io.lcredits.count) > 0 ||
+ sc->status != SMBDIRECT_SOCKET_CONNECTED);
+ if (rc)
+ goto err_wait_lcredit;
+
+ if (sc->status != SMBDIRECT_SOCKET_CONNECTED) {
+ log_outgoing(ERR, "disconnected not sending on wait_credit\n");
+ rc = -EAGAIN;
+ goto err_wait_lcredit;
+ }
+ if (unlikely(atomic_dec_return(&sc->send_io.lcredits.count) < 0)) {
+ atomic_inc(&sc->send_io.lcredits.count);
+ goto wait_lcredit;
+ }
+
wait_credit:
/* Wait for send credits. A SMBD packet needs one credit */
rc = wait_event_interruptible(sc->send_io.credits.wait_queue,
@@ -1132,23 +1155,6 @@ static int smbd_post_send_iter(struct smbdirect_socket *sc,
goto wait_credit;
}
-wait_send_queue:
- wait_event(sc->send_io.pending.dec_wait_queue,
- atomic_read(&sc->send_io.pending.count) < sp->send_credit_target ||
- sc->status != SMBDIRECT_SOCKET_CONNECTED);
-
- if (sc->status != SMBDIRECT_SOCKET_CONNECTED) {
- log_outgoing(ERR, "disconnected not sending on wait_send_queue\n");
- rc = -EAGAIN;
- goto err_wait_send_queue;
- }
-
- if (unlikely(atomic_inc_return(&sc->send_io.pending.count) >
- sp->send_credit_target)) {
- atomic_dec(&sc->send_io.pending.count);
- goto wait_send_queue;
- }
-
request = mempool_alloc(sc->send_io.mem.pool, GFP_KERNEL);
if (!request) {
rc = -ENOMEM;
@@ -1229,10 +1235,21 @@ static int smbd_post_send_iter(struct smbdirect_socket *sc,
le32_to_cpu(packet->data_length),
le32_to_cpu(packet->remaining_data_length));
+ /*
+ * Now that we got a local and a remote credit
+ * we add us as pending
+ */
+ atomic_inc(&sc->send_io.pending.count);
+
rc = smbd_post_send(sc, request);
if (!rc)
return 0;
+ if (atomic_dec_and_test(&sc->send_io.pending.count))
+ wake_up(&sc->send_io.pending.zero_wait_queue);
+
+ wake_up(&sc->send_io.pending.dec_wait_queue);
+
err_dma:
for (i = 0; i < request->num_sge; i++)
if (request->sge[i].addr)
@@ -1246,14 +1263,14 @@ static int smbd_post_send_iter(struct smbdirect_socket *sc,
atomic_sub(new_credits, &sc->recv_io.credits.count);
err_alloc:
- if (atomic_dec_and_test(&sc->send_io.pending.count))
- wake_up(&sc->send_io.pending.zero_wait_queue);
-
-err_wait_send_queue:
- /* roll back send credits and pending */
atomic_inc(&sc->send_io.credits.count);
+ wake_up(&sc->send_io.credits.wait_queue);
err_wait_credit:
+ atomic_inc(&sc->send_io.lcredits.count);
+ wake_up(&sc->send_io.lcredits.wait_queue);
+
+err_wait_lcredit:
return rc;
}
--
2.43.0
* Re: [PATCH 0/5] smb: smbdirect: introduce local send credits
2025-10-20 18:35 [PATCH 0/5] smb: smbdirect: introduce local send credits Stefan Metzmacher
` (4 preceding siblings ...)
2025-10-20 18:36 ` [PATCH 5/5] smb: client: " Stefan Metzmacher
@ 2025-10-21 6:51 ` Namjae Jeon
5 siblings, 0 replies; 7+ messages in thread
From: Namjae Jeon @ 2025-10-21 6:51 UTC
To: Stefan Metzmacher; +Cc: linux-cifs, samba-technical
On Tue, Oct 21, 2025 at 3:36 AM Stefan Metzmacher <metze@samba.org> wrote:
>
> Hi,
>
> our client already has some logic to prevent overflows of
> the local submission queue for ib_post_send(), if the peer
> granted more credits than we asked for.
>
> But it's not as easy as it could be.
>
> I guess that won't happen against Windows, but our git
> history indicates this could happen.
>
> Now we have a pool of local credits based on our send_credit_target.
> With that we always try to get a local credit first and then
> get a remote credit. When we have both we are able to
> mark the request as pending in order to keep the
> existing logic based on the pending count working.
> Removing or changing that is a task for another day,
> when all code is in common between client and server.
>
> For the server this is a real bug fix, as such a logic was missing
> before.
>
> For the client it's not strictly required for 6.18, but
> I think we should keep things consistent, as it will reduce
> churn on my 6.19 patchset, which already has about 100 patches
> and brings things into common code. And more is coming there...
>
> Stefan Metzmacher (5):
> smb: smbdirect: introduce smbdirect_socket.send_io.lcredits.*
> smb: server: smb_direct_disconnect_rdma_connection() already wakes all
> waiters on error
> smb: server: simplify sibling_list handling in
> smb_direct_flush_send_list/send_done
> smb: server: make use of smbdirect_socket.send_io.lcredits.*
> smb: client: make use of smbdirect_socket.send_io.lcredits.*
Applied them to #ksmbd-for-next-next.
Thanks!
>
> fs/smb/client/smbdirect.c | 67 ++++++++-----
> fs/smb/common/smbdirect/smbdirect_socket.h | 13 ++-
> fs/smb/server/transport_rdma.c | 106 +++++++++++++++------
> 3 files changed, 129 insertions(+), 57 deletions(-)
>
> --
> 2.43.0
>
>