* [PATCH net-next v1 1/7] net/rds: new extension header: rdma bytes
2026-01-25 7:06 [PATCH net-next v1 0/7] net/rds: RDS-TCP protocol and extension improvements Allison Henderson
@ 2026-01-25 7:06 ` Allison Henderson
2026-01-26 17:46 ` [net-next,v1,1/7] " Simon Horman
2026-01-25 7:06 ` [PATCH net-next v1 2/7] net/rds: Encode cp_index in TCP source port Allison Henderson
` (5 subsequent siblings)
6 siblings, 1 reply; 11+ messages in thread
From: Allison Henderson @ 2026-01-25 7:06 UTC (permalink / raw)
To: netdev
Cc: linux-kselftest, pabeni, edumazet, rds-devel, kuba, horms,
linux-rdma, allison.henderson
From: Shamir Rabinovitch <shamir.rabinovitch@oracle.com>
Introduce a new extension header type RDS_EXTHDR_RDMA_BYTES that allows
an RDMA initiator to exchange RDMA byte counts with its target.
Currently, RDS cannot precisely account for how many bytes a
peer just transferred via RDMA, which limits per-connection statistics
and future policy (e.g., monitoring or rate/cgroup accounting of RDMA
traffic).
In this patch we expand rds_message_add_extension() to accept multiple
extensions and add a new flag to the RDS header, RDS_FLAG_EXTHDR_EXTENSION,
along with a new extension header: rds_ext_header_rdma_bytes.
Signed-off-by: Shamir Rabinovitch <shamir.rabinovitch@oracle.com>
Signed-off-by: Guangyu Sun <guangyu.sun@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
---
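As an aside for reviewers, the packing scheme that the reworked
rds_message_add_extension() walks can be sketched in a few lines of
user-space C. Everything below (the EXT_SPACE/EXT_NONE constants, the
ext_size table, and the ext_add() name) is illustrative and not the
kernel's code; it only mirrors the [type:1][data:size(type)] layout
terminated by RDS_EXTHDR_NONE that the patch describes.

```c
#include <stdint.h>
#include <string.h>

#define EXT_SPACE 16   /* mirrors RDS_HEADER_EXT_SPACE */
#define EXT_NONE  0    /* mirrors RDS_EXTHDR_NONE */

/* illustrative per-type payload sizes; index 0 is the terminator */
static const unsigned ext_size[] = { 0, 4, 8, 8, 5 };

/* append one extension; returns 1 on success, 0 on unknown type/no room */
static int ext_add(uint8_t *space, unsigned type, const void *data)
{
	unsigned ntypes = sizeof(ext_size) / sizeof(ext_size[0]);
	unsigned len = (type < ntypes) ? ext_size[type] : 0;
	unsigned ind = 0;

	if (!len)
		return 0;                        /* unknown type */
	while (ind + 1 + len <= EXT_SPACE) {
		if (space[ind] == EXT_NONE) {    /* free slot found */
			space[ind] = type;
			memcpy(space + ind + 1, data, len);
			return 1;
		}
		ind += 1 + ext_size[space[ind]]; /* skip existing entry */
	}
	return 0;                                /* no room left */
}
```

Note that, as in the patch, a buffer fully packed with entries carries no
trailing terminator byte; the space simply runs out.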
net/rds/ib_send.c | 19 +++++++++++++-
net/rds/message.c | 65 +++++++++++++++++++++++++++++++++++++----------
net/rds/rds.h | 24 +++++++++++++----
net/rds/send.c | 6 ++---
4 files changed, 91 insertions(+), 23 deletions(-)
diff --git a/net/rds/ib_send.c b/net/rds/ib_send.c
index f9d28ddd168d8..8282ff61b0b37 100644
--- a/net/rds/ib_send.c
+++ b/net/rds/ib_send.c
@@ -578,10 +578,27 @@ int rds_ib_xmit(struct rds_connection *conn, struct rds_message *rm,
* used by the peer to release use-once RDMA MRs. */
if (rm->rdma.op_active) {
struct rds_ext_header_rdma ext_hdr;
+ struct rds_ext_header_rdma_bytes rdma_bytes_ext_hdr;
ext_hdr.h_rdma_rkey = cpu_to_be32(rm->rdma.op_rkey);
rds_message_add_extension(&rm->m_inc.i_hdr,
- RDS_EXTHDR_RDMA, &ext_hdr, sizeof(ext_hdr));
+ RDS_EXTHDR_RDMA, &ext_hdr);
+
+ /* prepare the rdma bytes ext header */
+ rdma_bytes_ext_hdr.h_rflags = rm->rdma.op_write ?
+ RDS_FLAG_RDMA_WR_BYTES : RDS_FLAG_RDMA_RD_BYTES;
+ rdma_bytes_ext_hdr.h_rdma_bytes =
+ cpu_to_be32(rm->rdma.op_bytes);
+
+ if (rds_message_add_extension(&rm->m_inc.i_hdr,
+ RDS_EXTHDR_RDMA_BYTES,
+ &rdma_bytes_ext_hdr)) {
+ /* rdma bytes ext header was added successfully,
+ * notify the remote side via flag in header
+ */
+ rm->m_inc.i_hdr.h_flags |=
+ RDS_FLAG_EXTHDR_EXTENSION;
+ }
}
if (rm->m_rdma_cookie) {
rds_message_add_rdma_dest_extension(&rm->m_inc.i_hdr,
diff --git a/net/rds/message.c b/net/rds/message.c
index 199a899a43e9c..591a27c9c62f7 100644
--- a/net/rds/message.c
+++ b/net/rds/message.c
@@ -44,6 +44,7 @@ static unsigned int rds_exthdr_size[__RDS_EXTHDR_MAX] = {
[RDS_EXTHDR_VERSION] = sizeof(struct rds_ext_header_version),
[RDS_EXTHDR_RDMA] = sizeof(struct rds_ext_header_rdma),
[RDS_EXTHDR_RDMA_DEST] = sizeof(struct rds_ext_header_rdma_dest),
+[RDS_EXTHDR_RDMA_BYTES] = sizeof(struct rds_ext_header_rdma_bytes),
[RDS_EXTHDR_NPATHS] = sizeof(__be16),
[RDS_EXTHDR_GEN_NUM] = sizeof(__be32),
};
@@ -191,31 +192,69 @@ void rds_message_populate_header(struct rds_header *hdr, __be16 sport,
hdr->h_sport = sport;
hdr->h_dport = dport;
hdr->h_sequence = cpu_to_be64(seq);
- hdr->h_exthdr[0] = RDS_EXTHDR_NONE;
+ /* see rds_find_next_ext_space for reason why we memset the
+ * ext header
+ */
+ memset(hdr->h_exthdr, RDS_EXTHDR_NONE, RDS_HEADER_EXT_SPACE);
}
EXPORT_SYMBOL_GPL(rds_message_populate_header);
-int rds_message_add_extension(struct rds_header *hdr, unsigned int type,
- const void *data, unsigned int len)
+/*
+ * Find the next place we can add an RDS header extension with
+ * specific length. Extension headers are pushed one after the
+ * other. In the following, the number after the colon is the number
+ * of bytes:
+ *
+ * [ type1:1 dta1:len1 [ type2:1 dta2:len2 ] ... ] RDS_EXTHDR_NONE
+ *
+ * If the extension headers fill the complete extension header space
+ * (16 bytes), the trailing RDS_EXTHDR_NONE is omitted.
+ */
+static int rds_find_next_ext_space(struct rds_header *hdr, unsigned int len,
+ u8 **ext_start)
{
- unsigned int ext_len = sizeof(u8) + len;
- unsigned char *dst;
+ unsigned int ext_len;
+ unsigned int type;
+ int ind = 0;
+
+ while ((ind + 1 + len) <= RDS_HEADER_EXT_SPACE) {
+ if (hdr->h_exthdr[ind] == RDS_EXTHDR_NONE) {
+ *ext_start = hdr->h_exthdr + ind;
+ return 0;
+ }
- /* For now, refuse to add more than one extension header */
- if (hdr->h_exthdr[0] != RDS_EXTHDR_NONE)
- return 0;
+ type = hdr->h_exthdr[ind];
+
+ ext_len = (type < __RDS_EXTHDR_MAX) ? rds_exthdr_size[type] : 0;
+ WARN_ONCE(!ext_len, "Unknown ext hdr type %d\n", type);
+ if (!ext_len)
+ return -EINVAL;
+
+ /* ind points to a valid ext hdr with known length */
+ ind += 1 + ext_len;
+ }
+
+ /* no room for extension */
+ return -ENOSPC;
+}
+
+/* The ext hdr space is prefilled with zero from the kzalloc() */
+int rds_message_add_extension(struct rds_header *hdr,
+ unsigned int type, const void *data)
+{
+ unsigned char *dst;
+ unsigned int len;
- if (type >= __RDS_EXTHDR_MAX || len != rds_exthdr_size[type])
+ len = (type < __RDS_EXTHDR_MAX) ? rds_exthdr_size[type] : 0;
+ if (!len)
return 0;
- if (ext_len >= RDS_HEADER_EXT_SPACE)
+ if (rds_find_next_ext_space(hdr, len, &dst))
return 0;
- dst = hdr->h_exthdr;
*dst++ = type;
memcpy(dst, data, len);
- dst[len] = RDS_EXTHDR_NONE;
return 1;
}
EXPORT_SYMBOL_GPL(rds_message_add_extension);
@@ -272,7 +311,7 @@ int rds_message_add_rdma_dest_extension(struct rds_header *hdr, u32 r_key, u32 o
ext_hdr.h_rdma_rkey = cpu_to_be32(r_key);
ext_hdr.h_rdma_offset = cpu_to_be32(offset);
- return rds_message_add_extension(hdr, RDS_EXTHDR_RDMA_DEST, &ext_hdr, sizeof(ext_hdr));
+ return rds_message_add_extension(hdr, RDS_EXTHDR_RDMA_DEST, &ext_hdr);
}
EXPORT_SYMBOL_GPL(rds_message_add_rdma_dest_extension);
diff --git a/net/rds/rds.h b/net/rds/rds.h
index 8a549fe687ac9..cadfd7ec0ba92 100644
--- a/net/rds/rds.h
+++ b/net/rds/rds.h
@@ -183,10 +183,11 @@ void rds_conn_net_set(struct rds_connection *conn, struct net *net)
write_pnet(&conn->c_net, net);
}
-#define RDS_FLAG_CONG_BITMAP 0x01
-#define RDS_FLAG_ACK_REQUIRED 0x02
-#define RDS_FLAG_RETRANSMITTED 0x04
-#define RDS_MAX_ADV_CREDIT 255
+#define RDS_FLAG_CONG_BITMAP 0x01
+#define RDS_FLAG_ACK_REQUIRED 0x02
+#define RDS_FLAG_RETRANSMITTED 0x04
+#define RDS_FLAG_EXTHDR_EXTENSION 0x20
+#define RDS_MAX_ADV_CREDIT 255
/* RDS_FLAG_PROBE_PORT is the reserved sport used for sending a ping
* probe to exchange control information before establishing a connection.
@@ -258,6 +259,19 @@ struct rds_ext_header_rdma_dest {
__be32 h_rdma_offset;
};
+/*
+ * This extension header tells the peer about delivered RDMA byte count.
+ */
+#define RDS_EXTHDR_RDMA_BYTES 4
+
+struct rds_ext_header_rdma_bytes {
+ __be32 h_rdma_bytes; /* byte count */
+ u8 h_rflags; /* direction of RDMA, write or read */
+};
+
+#define RDS_FLAG_RDMA_WR_BYTES 0x01
+#define RDS_FLAG_RDMA_RD_BYTES 0x02
+
/* Extension header announcing number of paths.
* Implicit length = 2 bytes.
*/
@@ -871,7 +885,7 @@ struct rds_message *rds_message_map_pages(unsigned long *page_addrs, unsigned in
void rds_message_populate_header(struct rds_header *hdr, __be16 sport,
__be16 dport, u64 seq);
int rds_message_add_extension(struct rds_header *hdr,
- unsigned int type, const void *data, unsigned int len);
+ unsigned int type, const void *data);
int rds_message_next_extension(struct rds_header *hdr,
unsigned int *pos, void *buf, unsigned int *buflen);
int rds_message_add_rdma_dest_extension(struct rds_header *hdr, u32 r_key, u32 offset);
diff --git a/net/rds/send.c b/net/rds/send.c
index 3e3d028bc21ee..306785fa7065e 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -1459,12 +1459,10 @@ rds_send_probe(struct rds_conn_path *cp, __be16 sport,
__be32 my_gen_num = cpu_to_be32(cp->cp_conn->c_my_gen_num);
rds_message_add_extension(&rm->m_inc.i_hdr,
- RDS_EXTHDR_NPATHS, &npaths,
- sizeof(npaths));
+ RDS_EXTHDR_NPATHS, &npaths);
rds_message_add_extension(&rm->m_inc.i_hdr,
RDS_EXTHDR_GEN_NUM,
- &my_gen_num,
- sizeof(u32));
+ &my_gen_num);
}
spin_unlock_irqrestore(&cp->cp_lock, flags);
--
2.43.0
* Re: [net-next,v1,1/7] net/rds: new extension header: rdma bytes
2026-01-25 7:06 ` [PATCH net-next v1 1/7] net/rds: new extension header: rdma bytes Allison Henderson
@ 2026-01-26 17:46 ` Simon Horman
2026-01-27 6:16 ` Allison Henderson
0 siblings, 1 reply; 11+ messages in thread
From: Simon Horman @ 2026-01-26 17:46 UTC (permalink / raw)
To: achender
Cc: Simon Horman, pabeni, rds-devel, netdev, allison.henderson,
edumazet, linux-kselftest, kuba, linux-rdma
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
net/rds: new extension header: rdma bytes
This patch introduces a new extension header type RDS_EXTHDR_RDMA_BYTES
to allow RDMA initiators to exchange byte counts with their targets for
statistics and policy purposes. It also expands rds_message_add_extension()
to support multiple extensions per message.
> diff --git a/net/rds/rds.h b/net/rds/rds.h
> index 8a549fe687ac..cadfd7ec0ba9 100644
> --- a/net/rds/rds.h
> +++ b/net/rds/rds.h
[ ... ]
> @@ -258,6 +259,19 @@ struct rds_ext_header_rdma_dest {
> __be32 h_rdma_offset;
> };
>
> +/*
> + * This extension header tells the peer about delivered RDMA byte count.
> + */
> +#define RDS_EXTHDR_RDMA_BYTES 4
> +
> +struct rds_ext_header_rdma_bytes {
> + __be32 h_rdma_bytes; /* byte count */
> + u8 h_rflags; /* direction of RDMA, write or read */
> +};
This struct has 5 bytes of actual data but sizeof() will return 8 bytes
due to alignment padding. The 3 padding bytes after h_rflags are not
initialized when this struct is used in rds_ib_xmit(). Since
rds_message_add_extension() copies sizeof(struct) bytes via memcpy(),
those uninitialized padding bytes get transmitted to the peer.
Should this struct be marked __packed, or should the declaration in
rds_ib_xmit() use a zeroing initializer like = {} to avoid leaking
kernel stack contents over the network?
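The padding arithmetic is easy to check in isolation. The sketch below is
not the kernel struct; it just reproduces the same field pair with local
typedefs to show sizeof() with and without __packed on common ABIs (where
a 4-byte-aligned member pads the struct to a multiple of 4):

```c
#include <stdint.h>

typedef uint32_t be32;  /* stand-in for the kernel's __be32 */
typedef uint8_t  u8;    /* stand-in for the kernel's u8 */

struct rdma_bytes_unpacked {
	be32 h_rdma_bytes;
	u8   h_rflags;
	/* 3 bytes of compiler padding typically land here */
};

struct rdma_bytes_packed {
	be32 h_rdma_bytes;
	u8   h_rflags;
} __attribute__((packed));
```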
> diff --git a/net/rds/ib_send.c b/net/rds/ib_send.c
> index f9d28ddd168d..8282ff61b0b3 100644
> --- a/net/rds/ib_send.c
> +++ b/net/rds/ib_send.c
> @@ -578,10 +578,27 @@ int rds_ib_xmit(struct rds_connection *conn, struct rds_message *rm,
> * used by the peer to release use-once RDMA MRs. */
> if (rm->rdma.op_active) {
> struct rds_ext_header_rdma ext_hdr;
> + struct rds_ext_header_rdma_bytes rdma_bytes_ext_hdr;
>
> ext_hdr.h_rdma_rkey = cpu_to_be32(rm->rdma.op_rkey);
> rds_message_add_extension(&rm->m_inc.i_hdr,
> - RDS_EXTHDR_RDMA, &ext_hdr, sizeof(ext_hdr));
> + RDS_EXTHDR_RDMA, &ext_hdr);
> +
> + /* prepare the rdma bytes ext header */
> + rdma_bytes_ext_hdr.h_rflags = rm->rdma.op_write ?
> + RDS_FLAG_RDMA_WR_BYTES : RDS_FLAG_RDMA_RD_BYTES;
> + rdma_bytes_ext_hdr.h_rdma_bytes =
> + cpu_to_be32(rm->rdma.op_bytes);
Here rdma_bytes_ext_hdr is declared on the stack without initialization.
Only h_rflags and h_rdma_bytes are assigned, leaving the 3 padding bytes
uninitialized. When rds_message_add_extension() calls memcpy() with
len = sizeof(struct rds_ext_header_rdma_bytes) = 8, those uninitialized
bytes are copied into the header and sent to the peer.
[ ... ]
--
pw-bot: cr
* Re: [net-next,v1,1/7] net/rds: new extension header: rdma bytes
2026-01-26 17:46 ` [net-next,v1,1/7] " Simon Horman
@ 2026-01-27 6:16 ` Allison Henderson
0 siblings, 0 replies; 11+ messages in thread
From: Allison Henderson @ 2026-01-27 6:16 UTC (permalink / raw)
To: horms@kernel.org, achender@kernel.org
Cc: linux-rdma@vger.kernel.org, rds-devel@oss.oracle.com,
edumazet@google.com, netdev@vger.kernel.org,
linux-kselftest@vger.kernel.org, pabeni@redhat.com,
kuba@kernel.org
On Mon, 2026-01-26 at 17:46 +0000, Simon Horman wrote:
> This is an AI-generated review of your patch. The human sending this
> email has considered the AI review valid, or at least plausible.
>
> For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
>
>
OK, I've got the reproducer working; I'll post an update when I get it resolved.
Thanks for the reviews!
Allison
> ---
> net/rds: new extension header: rdma bytes
>
> This patch introduces a new extension header type RDS_EXTHDR_RDMA_BYTES
> to allow RDMA initiators to exchange byte counts with their targets for
> statistics and policy purposes. It also expands rds_message_add_extension()
> to support multiple extensions per message.
>
> > diff --git a/net/rds/rds.h b/net/rds/rds.h
> > index 8a549fe687ac..cadfd7ec0ba9 100644
> > --- a/net/rds/rds.h
> > +++ b/net/rds/rds.h
>
> [ ... ]
>
> > @@ -258,6 +259,19 @@ struct rds_ext_header_rdma_dest {
> > __be32 h_rdma_offset;
> > };
> >
> > +/*
> > + * This extension header tells the peer about delivered RDMA byte count.
> > + */
> > +#define RDS_EXTHDR_RDMA_BYTES 4
> > +
> > +struct rds_ext_header_rdma_bytes {
> > + __be32 h_rdma_bytes; /* byte count */
> > + u8 h_rflags; /* direction of RDMA, write or read */
> > +};
>
> This struct has 5 bytes of actual data but sizeof() will return 8 bytes
> due to alignment padding. The 3 padding bytes after h_rflags are not
> initialized when this struct is used in rds_ib_xmit(). Since
> rds_message_add_extension() copies sizeof(struct) bytes via memcpy(),
> those uninitialized padding bytes get transmitted to the peer.
>
> Should this struct be marked __packed, or should the declaration in
> rds_ib_xmit() use a zeroing initializer like = {} to avoid leaking
> kernel stack contents over the network?
>
> > diff --git a/net/rds/ib_send.c b/net/rds/ib_send.c
> > index f9d28ddd168d..8282ff61b0b3 100644
> > --- a/net/rds/ib_send.c
> > +++ b/net/rds/ib_send.c
> > @@ -578,10 +578,27 @@ int rds_ib_xmit(struct rds_connection *conn, struct rds_message *rm,
> > * used by the peer to release use-once RDMA MRs. */
> > if (rm->rdma.op_active) {
> > struct rds_ext_header_rdma ext_hdr;
> > + struct rds_ext_header_rdma_bytes rdma_bytes_ext_hdr;
> >
> > ext_hdr.h_rdma_rkey = cpu_to_be32(rm->rdma.op_rkey);
> > rds_message_add_extension(&rm->m_inc.i_hdr,
> > - RDS_EXTHDR_RDMA, &ext_hdr, sizeof(ext_hdr));
> > + RDS_EXTHDR_RDMA, &ext_hdr);
> > +
> > + /* prepare the rdma bytes ext header */
> > + rdma_bytes_ext_hdr.h_rflags = rm->rdma.op_write ?
> > + RDS_FLAG_RDMA_WR_BYTES : RDS_FLAG_RDMA_RD_BYTES;
> > + rdma_bytes_ext_hdr.h_rdma_bytes =
> > + cpu_to_be32(rm->rdma.op_bytes);
>
> Here rdma_bytes_ext_hdr is declared on the stack without initialization.
> Only h_rflags and h_rdma_bytes are assigned, leaving the 3 padding bytes
> uninitialized. When rds_message_add_extension() calls memcpy() with
> len = sizeof(struct rds_ext_header_rdma_bytes) = 8, those uninitialized
> bytes are copied into the header and sent to the peer.
>
> [ ... ]
* [PATCH net-next v1 2/7] net/rds: Encode cp_index in TCP source port
2026-01-25 7:06 [PATCH net-next v1 0/7] net/rds: RDS-TCP protocol and extension improvements Allison Henderson
2026-01-25 7:06 ` [PATCH net-next v1 1/7] net/rds: new extension header: rdma bytes Allison Henderson
@ 2026-01-25 7:06 ` Allison Henderson
2026-01-25 7:06 ` [PATCH net-next v1 3/7] net/rds: rds_tcp_conn_path_shutdown must not discard messages Allison Henderson
` (4 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Allison Henderson @ 2026-01-25 7:06 UTC (permalink / raw)
To: netdev
Cc: linux-kselftest, pabeni, edumazet, rds-devel, kuba, horms,
linux-rdma, allison.henderson
From: Gerd Rausch <gerd.rausch@oracle.com>
Upon "sendmsg", RDS/TCP selects a backend connection based
on a hash calculated from the source-port ("RDS_MPATH_HASH").
However, "rds_tcp_accept_one" accepts connections
in the order they arrive, which is non-deterministic.
Therefore the mapping of the sender's "cp->cp_index"
to that of the receiver changes if the backend
connections are dropped and reconnected.
However, connection state that's preserved across reconnects
(e.g. "cp_next_rx_seq") relies on that sender<->receiver
mapping to never change.
So we make sure that client and server of the TCP connection
have the exact same "cp->cp_index" across reconnects by
encoding "cp->cp_index" in the lower three bits of the
client's TCP source port.
A new extension "RDS_EXTHDR_SPORT_IDX" is introduced,
that allows the server to tell the difference between
clients that do the "cp->cp_index" encoding, and
legacy clients that pick source ports randomly.
Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
---
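The encoding and its recovery can be sketched outside the kernel in a few
lines. The helper names below are ours, and the constants are assumptions
for illustration (the kernel reads the real ephemeral range via
inet_get_local_port_range() and uses RDS_MPATH_WORKERS); only the
arithmetic mirrors the patch: the client binds to
aligned_low + group * workers + cp_index, and the server recovers the
path index as sport % npaths.

```c
#define MPATH_WORKERS 8  /* assumed stand-in for RDS_MPATH_WORKERS */

/* round port_low up to a multiple of MPATH_WORKERS, like ALIGN() */
static int aligned_low(int port_low)
{
	return (port_low + MPATH_WORKERS - 1) / MPATH_WORKERS * MPATH_WORKERS;
}

/* client side: pick a source port whose low bits carry cp_index */
static int encode_port(int port_low, unsigned group, unsigned cp_index)
{
	return aligned_low(port_low) + (int)(group * MPATH_WORKERS + cp_index);
}

/* server side: recover the path index from the peer's source port */
static unsigned decode_index(int sport, unsigned npaths)
{
	return (unsigned)sport % npaths;
}
```

The round trip only works when npaths on the server equals the worker
count the client encoded with, which is why the RDS_EXTHDR_SPORT_IDX
handshake is needed to distinguish encoding clients from legacy ones.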
net/rds/message.c | 1 +
net/rds/rds.h | 3 +++
net/rds/recv.c | 7 +++++++
net/rds/send.c | 4 ++++
net/rds/tcp.h | 1 +
net/rds/tcp_connect.c | 22 ++++++++++++++++++++-
net/rds/tcp_listen.c | 45 +++++++++++++++++++++++++++++++++++++------
7 files changed, 76 insertions(+), 7 deletions(-)
diff --git a/net/rds/message.c b/net/rds/message.c
index 591a27c9c62f7..54fd000806eab 100644
--- a/net/rds/message.c
+++ b/net/rds/message.c
@@ -47,6 +47,7 @@ static unsigned int rds_exthdr_size[__RDS_EXTHDR_MAX] = {
[RDS_EXTHDR_RDMA_BYTES] = sizeof(struct rds_ext_header_rdma_bytes),
[RDS_EXTHDR_NPATHS] = sizeof(__be16),
[RDS_EXTHDR_GEN_NUM] = sizeof(__be32),
+[RDS_EXTHDR_SPORT_IDX] = 1,
};
void rds_message_addref(struct rds_message *rm)
diff --git a/net/rds/rds.h b/net/rds/rds.h
index cadfd7ec0ba92..d942057b91ee4 100644
--- a/net/rds/rds.h
+++ b/net/rds/rds.h
@@ -147,6 +147,7 @@ struct rds_connection {
c_ping_triggered:1,
c_pad_to_32:29;
int c_npaths;
+ bool c_with_sport_idx;
struct rds_connection *c_passive;
struct rds_transport *c_trans;
@@ -277,8 +278,10 @@ struct rds_ext_header_rdma_bytes {
*/
#define RDS_EXTHDR_NPATHS 5
#define RDS_EXTHDR_GEN_NUM 6
+#define RDS_EXTHDR_SPORT_IDX 8
#define __RDS_EXTHDR_MAX 16 /* for now */
+
#define RDS_RX_MAX_TRACES (RDS_MSG_RX_DGRAM_TRACE_MAX + 1)
#define RDS_MSG_RX_HDR 0
#define RDS_MSG_RX_START 1
diff --git a/net/rds/recv.c b/net/rds/recv.c
index 66680f652e74a..ddf128a023470 100644
--- a/net/rds/recv.c
+++ b/net/rds/recv.c
@@ -204,7 +204,9 @@ static void rds_recv_hs_exthdrs(struct rds_header *hdr,
struct rds_ext_header_version version;
__be16 rds_npaths;
__be32 rds_gen_num;
+ u8 dummy;
} buffer;
+ bool new_with_sport_idx = false;
u32 new_peer_gen_num = 0;
while (1) {
@@ -221,11 +223,16 @@ static void rds_recv_hs_exthdrs(struct rds_header *hdr,
case RDS_EXTHDR_GEN_NUM:
new_peer_gen_num = be32_to_cpu(buffer.rds_gen_num);
break;
+ case RDS_EXTHDR_SPORT_IDX:
+ new_with_sport_idx = true;
+ break;
default:
pr_warn_ratelimited("ignoring unknown exthdr type "
"0x%x\n", type);
}
}
+
+ conn->c_with_sport_idx = new_with_sport_idx;
/* if RDS_EXTHDR_NPATHS was not found, default to a single-path */
conn->c_npaths = max_t(int, conn->c_npaths, 1);
conn->c_ping_triggered = 0;
diff --git a/net/rds/send.c b/net/rds/send.c
index 306785fa7065e..85e1c5352ad80 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -1457,12 +1457,16 @@ rds_send_probe(struct rds_conn_path *cp, __be16 sport,
cp->cp_conn->c_trans->t_mp_capable) {
__be16 npaths = cpu_to_be16(RDS_MPATH_WORKERS);
__be32 my_gen_num = cpu_to_be32(cp->cp_conn->c_my_gen_num);
+ u8 dummy = 0;
rds_message_add_extension(&rm->m_inc.i_hdr,
RDS_EXTHDR_NPATHS, &npaths);
rds_message_add_extension(&rm->m_inc.i_hdr,
RDS_EXTHDR_GEN_NUM,
&my_gen_num);
+ rds_message_add_extension(&rm->m_inc.i_hdr,
+ RDS_EXTHDR_SPORT_IDX,
+ &dummy);
}
spin_unlock_irqrestore(&cp->cp_lock, flags);
diff --git a/net/rds/tcp.h b/net/rds/tcp.h
index 7d07128593b71..7c91974fcde79 100644
--- a/net/rds/tcp.h
+++ b/net/rds/tcp.h
@@ -34,6 +34,7 @@ struct rds_tcp_connection {
*/
struct mutex t_conn_path_lock;
struct socket *t_sock;
+ u32 t_client_port_group;
struct rds_tcp_net *t_rtn;
void *t_orig_write_space;
void *t_orig_data_ready;
diff --git a/net/rds/tcp_connect.c b/net/rds/tcp_connect.c
index 92891b0d224d3..a55a27c05934d 100644
--- a/net/rds/tcp_connect.c
+++ b/net/rds/tcp_connect.c
@@ -93,6 +93,8 @@ int rds_tcp_conn_path_connect(struct rds_conn_path *cp)
struct sockaddr_in6 sin6;
struct sockaddr_in sin;
struct sockaddr *addr;
+ int port_low, port_high, port;
+ int port_groups, groups_left;
int addrlen;
bool isv6;
int ret;
@@ -145,7 +147,25 @@ int rds_tcp_conn_path_connect(struct rds_conn_path *cp)
addrlen = sizeof(sin);
}
- ret = kernel_bind(sock, (struct sockaddr_unsized *)addr, addrlen);
+ /* encode cp->cp_index in lowest bits of source-port */
+ inet_get_local_port_range(rds_conn_net(conn), &port_low, &port_high);
+ port_low = ALIGN(port_low, RDS_MPATH_WORKERS);
+ port_groups = (port_high - port_low + 1) / RDS_MPATH_WORKERS;
+ ret = -EADDRINUSE;
+ groups_left = port_groups;
+ while (groups_left-- > 0 && ret) {
+ if (++tc->t_client_port_group >= port_groups)
+ tc->t_client_port_group = 0;
+ port = port_low +
+ tc->t_client_port_group * RDS_MPATH_WORKERS +
+ cp->cp_index;
+
+ if (isv6)
+ sin6.sin6_port = htons(port);
+ else
+ sin.sin_port = htons(port);
+ ret = kernel_bind(sock, (struct sockaddr_unsized *)addr, addrlen);
+ }
if (ret) {
rdsdebug("bind failed with %d at address %pI6c\n",
ret, &conn->c_laddr);
diff --git a/net/rds/tcp_listen.c b/net/rds/tcp_listen.c
index 551c847f2890a..900d059010a41 100644
--- a/net/rds/tcp_listen.c
+++ b/net/rds/tcp_listen.c
@@ -62,19 +62,52 @@ void rds_tcp_keepalive(struct socket *sock)
* we special case cp_index 0 is to allow the rds probe ping itself to itself
* get through efficiently.
*/
-static
-struct rds_tcp_connection *rds_tcp_accept_one_path(struct rds_connection *conn)
+static struct rds_tcp_connection *
+rds_tcp_accept_one_path(struct rds_connection *conn, struct socket *sock)
{
- int i;
- int npaths = max_t(int, 1, conn->c_npaths);
+ union {
+ struct sockaddr_storage storage;
+ struct sockaddr addr;
+ struct sockaddr_in sin;
+ struct sockaddr_in6 sin6;
+ } saddr;
+ int sport, npaths, i_min, i_max, i;
+
+ if (conn->c_with_sport_idx &&
+ kernel_getpeername(sock, &saddr.addr) == 0) {
+ /* cp->cp_index is encoded in lowest bits of source-port */
+ switch (saddr.addr.sa_family) {
+ case AF_INET:
+ sport = ntohs(saddr.sin.sin_port);
+ break;
+ case AF_INET6:
+ sport = ntohs(saddr.sin6.sin6_port);
+ break;
+ default:
+ sport = -1;
+ }
+ } else {
+ sport = -1;
+ }
+
+ npaths = max_t(int, 1, conn->c_npaths);
- for (i = 0; i < npaths; i++) {
+ if (sport >= 0) {
+ i_min = sport % npaths;
+ i_max = i_min;
+ } else {
+ i_min = 0;
+ i_max = npaths - 1;
+ }
+
+ for (i = i_min; i <= i_max; i++) {
struct rds_conn_path *cp = &conn->c_path[i];
if (rds_conn_path_transition(cp, RDS_CONN_DOWN,
RDS_CONN_CONNECTING))
return cp->cp_transport_data;
}
+
return NULL;
}
@@ -199,7 +232,7 @@ int rds_tcp_accept_one(struct rds_tcp_net *rtn)
* to and discarded by the sender.
* We must not throw those away!
*/
- rs_tcp = rds_tcp_accept_one_path(conn);
+ rs_tcp = rds_tcp_accept_one_path(conn, new_sock);
if (!rs_tcp) {
/* It's okay to stash "new_sock", since
* "rds_tcp_conn_slots_available" triggers
--
2.43.0
* [PATCH net-next v1 3/7] net/rds: rds_tcp_conn_path_shutdown must not discard messages
2026-01-25 7:06 [PATCH net-next v1 0/7] net/rds: RDS-TCP protocol and extension improvements Allison Henderson
2026-01-25 7:06 ` [PATCH net-next v1 1/7] net/rds: new extension header: rdma bytes Allison Henderson
2026-01-25 7:06 ` [PATCH net-next v1 2/7] net/rds: Encode cp_index in TCP source port Allison Henderson
@ 2026-01-25 7:06 ` Allison Henderson
2026-01-25 7:06 ` [PATCH net-next v1 4/7] net/rds: Kick-start TCP receiver after accept Allison Henderson
` (3 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Allison Henderson @ 2026-01-25 7:06 UTC (permalink / raw)
To: netdev
Cc: linux-kselftest, pabeni, edumazet, rds-devel, kuba, horms,
linux-rdma, allison.henderson
From: Gerd Rausch <gerd.rausch@oracle.com>
RDS/TCP differs from RDS/RDMA in that message acknowledgment
is done based on TCP sequence numbers:
As soon as the last byte of a message has been acknowledged by the
TCP stack of a peer, rds_tcp_write_space() goes on to discard
prior messages from the send queue.
Which is fine, for as long as the receiver never throws any messages
away.
The dequeuing of messages in RDS/TCP is done either from the
"sk_data_ready" callback pointing to rds_tcp_data_ready()
(the most common case), or from the receive worker pointing
to rds_tcp_recv_path() which is called for as long as the
connection is "RDS_CONN_UP".
However, as soon as rds_conn_path_drop() is called for whatever reason,
including "DR_USER_RESET", "cp_state" transitions to "RDS_CONN_ERROR",
and rds_tcp_restore_callbacks() ends up restoring the callbacks
and thereby disabling message receipt.
So messages already acknowledged to the sender were dropped.
Furthermore, the "->shutdown" callback was always called
with an invalid parameter ("RCV_SHUTDOWN | SEND_SHUTDOWN == 3"),
instead of the correct pre-increment value ("SHUT_RDWR == 2").
inet_shutdown() returns "-EINVAL" in such cases, rendering
this call a NOOP.
So we change rds_tcp_conn_path_shutdown() to do the proper
"->shutdown(SHUT_WR)" call in order to signal EOF to the peer
and make it transition to "TCP_CLOSE_WAIT" (RFC 793).
This should make the peer also enter rds_tcp_conn_path_shutdown()
and do the same.
This allows us to dequeue all messages already received
and acknowledged to the peer.
We do so, until we know that the receive queue no longer has data
(skb_queue_empty()) and that we couldn't have any data
in flight anymore, because the socket transitioned to
any of the states "CLOSING", "TIME_WAIT", "CLOSE_WAIT",
"LAST_ACK", or "CLOSE" (RFC 793).
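The "no more data can arrive" condition described above reduces to a
small predicate: one of the five post-FIN TCP states plus a drained
receive queue. The sketch below is ours, not the kernel's; the state
values mirror the kernel's enum in include/net/tcp_states.h and are
reproduced only to keep the sketch self-contained.

```c
/* subset of the kernel's TCP state numbering */
enum tcp_state {
	TCP_TIME_WAIT  = 6,
	TCP_CLOSE      = 7,
	TCP_CLOSE_WAIT = 8,
	TCP_LAST_ACK   = 9,
	TCP_CLOSING    = 11,
};

/* true once no further data can arrive and the backlog is drained */
static int rx_side_done(enum tcp_state st, int rx_queue_empty)
{
	switch (st) {
	case TCP_CLOSING:
	case TCP_TIME_WAIT:
	case TCP_CLOSE_WAIT:
	case TCP_LAST_ACK:
	case TCP_CLOSE:
		return rx_queue_empty;  /* peer sent FIN: drain and stop */
	default:
		return 0;               /* more data may still be in flight */
	}
}
```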
However, if we do just that, we suddenly see duplicate RDS
messages being delivered to the application.
So what gives?
Turns out that with MPRDS and its multitude of backend connections,
retransmitted messages ("RDS_FLAG_RETRANSMITTED") can outrace
the dequeuing of their original counterparts.
And the duplicate check implemented in rds_recv_local() only
discards duplicates if flag "RDS_FLAG_RETRANSMITTED" is set.
Rather curious, because a duplicate is a duplicate; it shouldn't
matter which copy is looked at and delivered first.
To avoid this entire situation, we simply make the sender discard
messages from the send-queue right from within
rds_tcp_conn_path_shutdown(). Just like rds_tcp_write_space() would
have done, were it called in time or still called.
This makes sure that we no longer have messages that we know
the receiver already dequeued sitting in our send-queue,
and therefore avoid the entire "RDS_FLAG_RETRANSMITTED" fiasco.
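Discarding acked messages at shutdown leans on the same wrap-safe 32-bit
sequence comparison that rds_tcp_is_acked() (now exported) performs
against snd_una. A minimal user-space sketch of that comparison, with a
helper name of our own choosing:

```c
#include <stdint.h>

/* true if sequence number a is at or before b, modulo 2^32: casting
 * the difference to a signed 32-bit value keeps the comparison
 * correct across TCP sequence-space wrap-around. */
static int seq_before_eq(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) <= 0;
}
```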
That got rid of the duplicate RDS message delivery, but we
still run into cases where RDS messages are dropped.
This time it is due to the delayed setting of the socket-callbacks
in rds_tcp_accept_one() via either rds_tcp_reset_callbacks()
or rds_tcp_set_callbacks().
By the time rds_tcp_accept_one() gets there, the socket
may already have transitioned into state "TCP_CLOSE_WAIT",
but rds_tcp_state_change() was never called.
Subsequently, "->shutdown(SHUT_WR)" did not happen either.
So the peer ends up getting stuck in state "TCP_FIN_WAIT2".
We fix that by checking for states "TCP_CLOSE_WAIT", "TCP_LAST_ACK",
or "TCP_CLOSE" and drop the freshly accepted socket in that case.
This problem is observable by running "rds-stress --reset"
frequently on either of the two sides of a RDS connection,
or both while other "rds-stress" processes are exchanging data.
Those "rds-stress" processes reported out-of-sequence
errors, with the expected sequence number being smaller
than the one actually received (due to the dropped messages).
Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
---
net/rds/tcp.c | 1 +
net/rds/tcp.h | 4 ++++
net/rds/tcp_connect.c | 46 ++++++++++++++++++++++++++++++++++++++++++-
net/rds/tcp_listen.c | 14 +++++++++++++
net/rds/tcp_recv.c | 4 ++++
net/rds/tcp_send.c | 2 +-
6 files changed, 69 insertions(+), 2 deletions(-)
diff --git a/net/rds/tcp.c b/net/rds/tcp.c
index 31e7425e2da9a..45484a93d75fb 100644
--- a/net/rds/tcp.c
+++ b/net/rds/tcp.c
@@ -384,6 +384,7 @@ static int rds_tcp_conn_alloc(struct rds_connection *conn, gfp_t gfp)
tc->t_tinc = NULL;
tc->t_tinc_hdr_rem = sizeof(struct rds_header);
tc->t_tinc_data_rem = 0;
+ init_waitqueue_head(&tc->t_recv_done_waitq);
conn->c_path[i].cp_transport_data = tc;
tc->t_cpath = &conn->c_path[i];
diff --git a/net/rds/tcp.h b/net/rds/tcp.h
index 7c91974fcde79..b36af0865a078 100644
--- a/net/rds/tcp.h
+++ b/net/rds/tcp.h
@@ -55,6 +55,9 @@ struct rds_tcp_connection {
u32 t_last_sent_nxt;
u32 t_last_expected_una;
u32 t_last_seen_una;
+
+ /* for rds_tcp_conn_path_shutdown */
+ wait_queue_head_t t_recv_done_waitq;
};
struct rds_tcp_statistics {
@@ -105,6 +108,7 @@ void rds_tcp_xmit_path_prepare(struct rds_conn_path *cp);
void rds_tcp_xmit_path_complete(struct rds_conn_path *cp);
int rds_tcp_xmit(struct rds_connection *conn, struct rds_message *rm,
unsigned int hdr_off, unsigned int sg, unsigned int off);
+int rds_tcp_is_acked(struct rds_message *rm, uint64_t ack);
void rds_tcp_write_space(struct sock *sk);
/* tcp_stats.c */
diff --git a/net/rds/tcp_connect.c b/net/rds/tcp_connect.c
index a55a27c05934d..f380762f8322b 100644
--- a/net/rds/tcp_connect.c
+++ b/net/rds/tcp_connect.c
@@ -75,8 +75,16 @@ void rds_tcp_state_change(struct sock *sk)
rds_connect_path_complete(cp, RDS_CONN_CONNECTING);
}
break;
+ case TCP_CLOSING:
+ case TCP_TIME_WAIT:
+ if (wq_has_sleeper(&tc->t_recv_done_waitq))
+ wake_up(&tc->t_recv_done_waitq);
+ break;
case TCP_CLOSE_WAIT:
+ case TCP_LAST_ACK:
case TCP_CLOSE:
+ if (wq_has_sleeper(&tc->t_recv_done_waitq))
+ wake_up(&tc->t_recv_done_waitq);
rds_conn_path_drop(cp, false);
break;
default:
@@ -225,6 +233,7 @@ void rds_tcp_conn_path_shutdown(struct rds_conn_path *cp)
{
struct rds_tcp_connection *tc = cp->cp_transport_data;
struct socket *sock = tc->t_sock;
+ unsigned int rounds;
rdsdebug("shutting down conn %p tc %p sock %p\n",
cp->cp_conn, tc, sock);
@@ -232,8 +241,43 @@ void rds_tcp_conn_path_shutdown(struct rds_conn_path *cp)
if (sock) {
if (rds_destroy_pending(cp->cp_conn))
sock_no_linger(sock->sk);
- sock->ops->shutdown(sock, RCV_SHUTDOWN | SEND_SHUTDOWN);
+
+ sock->ops->shutdown(sock, SHUT_WR);
+
+ /* after sending FIN,
+ * wait until we processed all incoming messages
+ * and we're sure that there won't be any more:
+ * i.e. state CLOSING, TIME_WAIT, CLOSE_WAIT,
+ * LAST_ACK, or CLOSE (RFC 793).
+ *
+ * Give up waiting after 5 seconds and allow messages
+ * to theoretically get dropped, if the TCP transition
+ * didn't happen.
+ */
+ rounds = 0;
+ do {
+ /* we need to ensure messages are dequeued here
+ * since "rds_recv_worker" only dispatches messages
+ * while the connection is still in RDS_CONN_UP
+ * and there is no guarantee that "rds_tcp_data_ready"
+ * was called nor that "sk_data_ready" still points to it.
+ */
+ rds_tcp_recv_path(cp);
+ } while (!wait_event_timeout(tc->t_recv_done_waitq,
+ (sock->sk->sk_state == TCP_CLOSING ||
+ sock->sk->sk_state == TCP_TIME_WAIT ||
+ sock->sk->sk_state == TCP_CLOSE_WAIT ||
+ sock->sk->sk_state == TCP_LAST_ACK ||
+ sock->sk->sk_state == TCP_CLOSE) &&
+ skb_queue_empty_lockless(&sock->sk->sk_receive_queue),
+ msecs_to_jiffies(100)) &&
+ ++rounds < 50);
lock_sock(sock->sk);
+
+ /* discard messages that the peer received already */
+ tc->t_last_seen_una = rds_tcp_snd_una(tc);
+ rds_send_path_drop_acked(cp, rds_tcp_snd_una(tc), rds_tcp_is_acked);
+
rds_tcp_restore_callbacks(sock, tc); /* tc->tc_sock = NULL */
release_sock(sock->sk);
diff --git a/net/rds/tcp_listen.c b/net/rds/tcp_listen.c
index 900d059010a41..ec54fc4a69018 100644
--- a/net/rds/tcp_listen.c
+++ b/net/rds/tcp_listen.c
@@ -278,6 +278,20 @@ int rds_tcp_accept_one(struct rds_tcp_net *rtn)
rds_tcp_set_callbacks(new_sock, cp);
rds_connect_path_complete(cp, RDS_CONN_CONNECTING);
}
+
+ /* Since "rds_tcp_set_callbacks" happens this late
+ * the connection may already have been closed without
+ * "rds_tcp_state_change" doing its due diligence.
+ *
+ * If that's the case, we simply drop the path,
+ * knowing that "rds_tcp_conn_path_shutdown" will
+ * dequeue pending messages.
+ */
+ if (new_sock->sk->sk_state == TCP_CLOSE_WAIT ||
+ new_sock->sk->sk_state == TCP_LAST_ACK ||
+ new_sock->sk->sk_state == TCP_CLOSE)
+ rds_conn_path_drop(cp, 0);
+
new_sock = NULL;
ret = 0;
if (conn->c_npaths == 0)
diff --git a/net/rds/tcp_recv.c b/net/rds/tcp_recv.c
index b7cf7f451430d..49f96ee0c40f6 100644
--- a/net/rds/tcp_recv.c
+++ b/net/rds/tcp_recv.c
@@ -278,6 +278,10 @@ static int rds_tcp_read_sock(struct rds_conn_path *cp, gfp_t gfp)
rdsdebug("tcp_read_sock for tc %p gfp 0x%x returned %d\n", tc, gfp,
desc.error);
+ if (skb_queue_empty_lockless(&sock->sk->sk_receive_queue) &&
+ wq_has_sleeper(&tc->t_recv_done_waitq))
+ wake_up(&tc->t_recv_done_waitq);
+
return desc.error;
}
diff --git a/net/rds/tcp_send.c b/net/rds/tcp_send.c
index 4e82c9644aa6a..7c52acc749cf4 100644
--- a/net/rds/tcp_send.c
+++ b/net/rds/tcp_send.c
@@ -169,7 +169,7 @@ int rds_tcp_xmit(struct rds_connection *conn, struct rds_message *rm,
* unacked byte of the TCP sequence space. We have to do very careful
* wrapping 32bit comparisons here.
*/
-static int rds_tcp_is_acked(struct rds_message *rm, uint64_t ack)
+int rds_tcp_is_acked(struct rds_message *rm, uint64_t ack)
{
if (!test_bit(RDS_MSG_HAS_ACK_SEQ, &rm->m_flags))
return 0;
--
2.43.0
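The comment above rds_tcp_is_acked() stresses that comparing a message's ack sequence against "snd_una" needs careful wrapping 32-bit comparisons. A minimal user-space sketch of that idea (hypothetical helper names, not the kernel implementation) using serial-number arithmetic in the style of RFC 1982:

```c
#include <assert.h>
#include <stdint.h>

/* seq32_le(): "a <= b" in modular 32-bit arithmetic. The subtraction
 * wraps in unsigned arithmetic, and the signed cast makes the result
 * negative (or zero) exactly when a is at or before b, even across a
 * sequence-number wrap. */
int seq32_le(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) <= 0;
}

/* A message counts as acked once its last byte sits at or before the
 * peer's snd_una (the first unacknowledged byte of the stream). */
int msg_is_acked(uint32_t msg_last_byte, uint32_t snd_una)
{
	return seq32_le(msg_last_byte, snd_una);
}
```

The wrap case is the one a naive `<=` gets wrong: 0xfffffff0 is "before" 0x10 here, because the sequence space has wrapped.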
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH net-next v1 4/7] net/rds: Kick-start TCP receiver after accept
2026-01-25 7:06 [PATCH net-next v1 0/7] net/rds: RDS-TCP protocol and extension improvements Allison Henderson
` (2 preceding siblings ...)
2026-01-25 7:06 ` [PATCH net-next v1 3/7] net/rds: rds_tcp_conn_path_shutdown must not discard messages Allison Henderson
@ 2026-01-25 7:06 ` Allison Henderson
2026-01-25 7:06 ` [PATCH net-next v1 5/7] net/rds: Clear reconnect pending bit Allison Henderson
` (2 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Allison Henderson @ 2026-01-25 7:06 UTC (permalink / raw)
To: netdev
Cc: linux-kselftest, pabeni, edumazet, rds-devel, kuba, horms,
linux-rdma, allison.henderson
From: Gerd Rausch <gerd.rausch@oracle.com>
In cases where the server (the node with the higher IP-address)
in an RDS/TCP connection is overwhelmed, it is possible that the
socket that was just accepted is chock-full of messages, up to
the limit of what the socket receive buffer permits.
Subsequently, "rds_tcp_data_ready" won't be called anymore,
because there is no more space to receive additional messages.
Nor was it called prior to the point of calling "rds_tcp_set_callbacks",
because the "sk_data_ready" pointer didn't even point to
"rds_tcp_data_ready" yet.
We fix this by simply kick-starting the receive worker
in all cases where the socket state is not "TCP_CLOSE_WAIT",
"TCP_LAST_ACK", or "TCP_CLOSE".
Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
---
net/rds/tcp_listen.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/net/rds/tcp_listen.c b/net/rds/tcp_listen.c
index ec54fc4a69018..c628f62421d4e 100644
--- a/net/rds/tcp_listen.c
+++ b/net/rds/tcp_listen.c
@@ -291,6 +291,8 @@ int rds_tcp_accept_one(struct rds_tcp_net *rtn)
new_sock->sk->sk_state == TCP_LAST_ACK ||
new_sock->sk->sk_state == TCP_CLOSE)
rds_conn_path_drop(cp, 0);
+ else
+ queue_delayed_work(cp->cp_wq, &cp->cp_recv_w, 0);
new_sock = NULL;
ret = 0;
--
2.43.0
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH net-next v1 5/7] net/rds: Clear reconnect pending bit
2026-01-25 7:06 [PATCH net-next v1 0/7] net/rds: RDS-TCP protocol and extension improvements Allison Henderson
` (3 preceding siblings ...)
2026-01-25 7:06 ` [PATCH net-next v1 4/7] net/rds: Kick-start TCP receiver after accept Allison Henderson
@ 2026-01-25 7:06 ` Allison Henderson
2026-01-25 7:06 ` [PATCH net-next v1 6/7] net/rds: Use the first lane until RDS_EXTHDR_NPATHS arrives Allison Henderson
2026-01-25 7:06 ` [PATCH net-next v1 7/7] net/rds: Trigger rds_send_ping() more than once Allison Henderson
6 siblings, 0 replies; 11+ messages in thread
From: Allison Henderson @ 2026-01-25 7:06 UTC (permalink / raw)
To: netdev
Cc: linux-kselftest, pabeni, edumazet, rds-devel, kuba, horms,
linux-rdma, allison.henderson
From: Håkon Bugge <haakon.bugge@oracle.com>
When canceling the reconnect worker, care must be taken to reset the
reconnect-pending bit: if the worker is canceled before it has had a
chance to run, the reconnect-pending bit would otherwise stay set
forever.
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
---
net/rds/connection.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/net/rds/connection.c b/net/rds/connection.c
index 3f26a67f31804..4b7715eb2111c 100644
--- a/net/rds/connection.c
+++ b/net/rds/connection.c
@@ -442,6 +442,8 @@ void rds_conn_shutdown(struct rds_conn_path *cp)
* to the conn hash, so we never trigger a reconnect on this
* conn - the reconnect is always triggered by the active peer. */
cancel_delayed_work_sync(&cp->cp_conn_w);
+
+ clear_bit(RDS_RECONNECT_PENDING, &cp->cp_flags);
rcu_read_lock();
if (!hlist_unhashed(&conn->c_hash_node)) {
rcu_read_unlock();
--
2.43.0
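The pattern fixed here — a work item guarded by a pending bit that must be cleared on every cancel path — can be sketched in user-space C. Names are illustrative; the kernel uses test_and_set_bit()/clear_bit() on cp_flags rather than C11 atomics:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Pending-bit gate: only the caller that flips the bit from 0 to 1
 * actually queues the worker; everyone else sees it already pending. */
static atomic_bool reconnect_pending;

bool try_queue_reconnect(void)
{
	/* mirrors test_and_set_bit(): true only for the one caller
	 * that gets to queue the work */
	return !atomic_exchange(&reconnect_pending, true);
}

void cancel_reconnect(void)
{
	/* mirrors the clear_bit() added by this patch: without it,
	 * a cancel-before-run leaves the gate closed forever */
	atomic_store(&reconnect_pending, false);
}
```

If cancel_reconnect() is ever skipped on a cancel path, every later try_queue_reconnect() fails — which is exactly the stuck state the patch addresses.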
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH net-next v1 6/7] net/rds: Use the first lane until RDS_EXTHDR_NPATHS arrives
2026-01-25 7:06 [PATCH net-next v1 0/7] net/rds: RDS-TCP protocol and extension improvements Allison Henderson
` (4 preceding siblings ...)
2026-01-25 7:06 ` [PATCH net-next v1 5/7] net/rds: Clear reconnect pending bit Allison Henderson
@ 2026-01-25 7:06 ` Allison Henderson
2026-01-25 7:06 ` [PATCH net-next v1 7/7] net/rds: Trigger rds_send_ping() more than once Allison Henderson
6 siblings, 0 replies; 11+ messages in thread
From: Allison Henderson @ 2026-01-25 7:06 UTC (permalink / raw)
To: netdev
Cc: linux-kselftest, pabeni, edumazet, rds-devel, kuba, horms,
linux-rdma, allison.henderson
From: Gerd Rausch <gerd.rausch@oracle.com>
Instead of just blocking the sender until "c_npaths" is known
(it gets updated upon receipt of an MPRDS PONG message),
simply use the first lane (cp_index#0).
But just using the first lane isn't enough.
As soon as we enqueue messages on a different lane, we'd run the risk
of out-of-order delivery of RDS messages.
Earlier messages enqueued on "cp_index == 0" could be delivered later
than more recent messages enqueued on "cp_index > 0", mostly because of
possible head-of-line blocking issues causing the first lane to be
slower.
To avoid that, we simply take a snapshot of "cp_next_tx_seq" at the
time we're about to fan out to more lanes.
Then we delay the transmission of messages enqueued on other lanes
with "cp_index > 0" until cp_index#0 has caught up with the delivery
of new messages (from "cp_send_queue") as well as in-flight
messages (from "cp_retrans") that haven't been acknowledged yet
by the receiver.
We also add a new counter "mprds_catchup_tx0_retries" to keep track
of how many times "rds_send_xmit" had to suspend activities
because it was waiting for the first lane to catch up.
Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
---
net/rds/rds.h | 3 ++
net/rds/recv.c | 23 +++++++++++--
net/rds/send.c | 91 ++++++++++++++++++++++++++++++-------------------
net/rds/stats.c | 1 +
4 files changed, 79 insertions(+), 39 deletions(-)
diff --git a/net/rds/rds.h b/net/rds/rds.h
index d942057b91ee4..f78477912e7c1 100644
--- a/net/rds/rds.h
+++ b/net/rds/rds.h
@@ -170,6 +170,8 @@ struct rds_connection {
u32 c_my_gen_num;
u32 c_peer_gen_num;
+
+ u64 c_cp0_mprds_catchup_tx_seq;
};
static inline
@@ -748,6 +750,7 @@ struct rds_statistics {
uint64_t s_recv_bytes_added_to_socket;
uint64_t s_recv_bytes_removed_from_socket;
uint64_t s_send_stuck_rm;
+ uint64_t s_mprds_catchup_tx0_retries;
};
/* af_rds.c */
diff --git a/net/rds/recv.c b/net/rds/recv.c
index ddf128a023470..889a5b7935e5c 100644
--- a/net/rds/recv.c
+++ b/net/rds/recv.c
@@ -208,6 +208,9 @@ static void rds_recv_hs_exthdrs(struct rds_header *hdr,
} buffer;
bool new_with_sport_idx = false;
u32 new_peer_gen_num = 0;
+ int new_npaths;
+
+ new_npaths = conn->c_npaths;
while (1) {
len = sizeof(buffer);
@@ -217,8 +220,8 @@ static void rds_recv_hs_exthdrs(struct rds_header *hdr,
/* Process extension header here */
switch (type) {
case RDS_EXTHDR_NPATHS:
- conn->c_npaths = min_t(int, RDS_MPATH_WORKERS,
- be16_to_cpu(buffer.rds_npaths));
+ new_npaths = min_t(int, RDS_MPATH_WORKERS,
+ be16_to_cpu(buffer.rds_npaths));
break;
case RDS_EXTHDR_GEN_NUM:
new_peer_gen_num = be32_to_cpu(buffer.rds_gen_num);
@@ -233,8 +236,22 @@ static void rds_recv_hs_exthdrs(struct rds_header *hdr,
}
conn->c_with_sport_idx = new_with_sport_idx;
+
+ if (new_npaths > 1 && new_npaths != conn->c_npaths) {
+ /* We're about to fan-out.
+ * Make sure that messages from cp_index#0
+ * are sent prior to handling other lanes.
+ */
+ struct rds_conn_path *cp0 = conn->c_path;
+ unsigned long flags;
+
+ spin_lock_irqsave(&cp0->cp_lock, flags);
+ conn->c_cp0_mprds_catchup_tx_seq = cp0->cp_next_tx_seq;
+ spin_unlock_irqrestore(&cp0->cp_lock, flags);
+ }
/* if RDS_EXTHDR_NPATHS was not found, default to a single-path */
- conn->c_npaths = max_t(int, conn->c_npaths, 1);
+ conn->c_npaths = max_t(int, new_npaths, 1);
+
conn->c_ping_triggered = 0;
rds_conn_peer_gen_update(conn, new_peer_gen_num);
diff --git a/net/rds/send.c b/net/rds/send.c
index 85e1c5352ad80..ea3b57e9191b1 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -137,6 +137,8 @@ int rds_send_xmit(struct rds_conn_path *cp)
{
struct rds_connection *conn = cp->cp_conn;
struct rds_message *rm;
+ struct rds_conn_path *cp0 = conn->c_path;
+ struct rds_message *rm0;
unsigned long flags;
unsigned int tmp;
struct scatterlist *sg;
@@ -248,6 +250,52 @@ int rds_send_xmit(struct rds_conn_path *cp)
if (batch_count >= send_batch_count)
goto over_batch;
+ if (cp->cp_index > 0) {
+ /* make sure cp_index#0 caught up during fan-out
+ * in order to avoid lane races
+ */
+
+ spin_lock_irqsave(&cp0->cp_lock, flags);
+
+ /* the oldest / first message in the retransmit queue
+ * has to be at or beyond c_cp0_mprds_catchup_tx_seq
+ */
+ if (!list_empty(&cp0->cp_retrans)) {
+ rm0 = list_entry(cp0->cp_retrans.next,
+ struct rds_message,
+ m_conn_item);
+ if (be64_to_cpu(rm0->m_inc.i_hdr.h_sequence) <
+ conn->c_cp0_mprds_catchup_tx_seq) {
+ /* the retransmit queue of cp_index#0 has not
+ * quite caught up yet
+ */
+ spin_unlock_irqrestore(&cp0->cp_lock, flags);
+ rds_stats_inc(s_mprds_catchup_tx0_retries);
+ goto over_batch;
+ }
+ }
+
+ /* the oldest / first message of the send queue
+ * has to be at or beyond c_cp0_mprds_catchup_tx_seq
+ */
+ rm0 = cp0->cp_xmit_rm;
+ if (!rm0 && !list_empty(&cp0->cp_send_queue))
+ rm0 = list_entry(cp0->cp_send_queue.next,
+ struct rds_message,
+ m_conn_item);
+ if (rm0 && be64_to_cpu(rm0->m_inc.i_hdr.h_sequence) <
+ conn->c_cp0_mprds_catchup_tx_seq) {
+ /* the send queue of cp_index#0 has not quite
+ * caught up yet
+ */
+ spin_unlock_irqrestore(&cp0->cp_lock, flags);
+ rds_stats_inc(s_mprds_catchup_tx0_retries);
+ goto over_batch;
+ }
+
+ spin_unlock_irqrestore(&cp0->cp_lock, flags);
+ }
+
spin_lock_irqsave(&cp->cp_lock, flags);
if (!list_empty(&cp->cp_send_queue)) {
@@ -1042,39 +1090,6 @@ static int rds_cmsg_send(struct rds_sock *rs, struct rds_message *rm,
return ret;
}
-static int rds_send_mprds_hash(struct rds_sock *rs,
- struct rds_connection *conn, int nonblock)
-{
- int hash;
-
- if (conn->c_npaths == 0)
- hash = RDS_MPATH_HASH(rs, RDS_MPATH_WORKERS);
- else
- hash = RDS_MPATH_HASH(rs, conn->c_npaths);
- if (conn->c_npaths == 0 && hash != 0) {
- rds_send_ping(conn, 0);
-
- /* The underlying connection is not up yet. Need to wait
- * until it is up to be sure that the non-zero c_path can be
- * used. But if we are interrupted, we have to use the zero
- * c_path in case the connection ends up being non-MP capable.
- */
- if (conn->c_npaths == 0) {
- /* Cannot wait for the connection be made, so just use
- * the base c_path.
- */
- if (nonblock)
- return 0;
- if (wait_event_interruptible(conn->c_hs_waitq,
- conn->c_npaths != 0))
- hash = 0;
- }
- if (conn->c_npaths == 1)
- hash = 0;
- }
- return hash;
-}
-
static int rds_rdma_bytes(struct msghdr *msg, size_t *rdma_bytes)
{
struct rds_rdma_args *args;
@@ -1304,10 +1319,14 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
rs->rs_conn = conn;
}
- if (conn->c_trans->t_mp_capable)
- cpath = &conn->c_path[rds_send_mprds_hash(rs, conn, nonblock)];
- else
+ if (conn->c_trans->t_mp_capable) {
+ /* Use c_path[0] until we learn that
+ * the peer supports more (c_npaths > 1)
+ */
+ cpath = &conn->c_path[RDS_MPATH_HASH(rs, conn->c_npaths ? : 1)];
+ } else {
cpath = &conn->c_path[0];
+ }
rm->m_conn_path = cpath;
diff --git a/net/rds/stats.c b/net/rds/stats.c
index cb2e3d2cdf738..24ee22d09e8cf 100644
--- a/net/rds/stats.c
+++ b/net/rds/stats.c
@@ -79,6 +79,7 @@ static const char *const rds_stat_names[] = {
"recv_bytes_added_to_sock",
"recv_bytes_freed_fromsock",
"send_stuck_rm",
+ "mprds_catchup_tx0_retries",
};
void rds_stats_info_copy(struct rds_info_iterator *iter,
--
2.43.0
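The gate added to rds_send_xmit() boils down to a small predicate: a lane with "cp_index > 0" may transmit only once lane #0's oldest retransmit-queue and send-queue sequences have reached the snapshot taken at fan-out time. A hypothetical user-space sketch (names are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Returns true when lane #0 has caught up to the fan-out snapshot,
 * i.e. no message with a sequence older than catchup_tx_seq remains on
 * its retransmit queue (have_retrans/oldest_retrans_seq) or its send
 * queue (have_send/oldest_send_seq). */
bool lane0_caught_up(bool have_retrans, uint64_t oldest_retrans_seq,
		     bool have_send, uint64_t oldest_send_seq,
		     uint64_t catchup_tx_seq)
{
	if (have_retrans && oldest_retrans_seq < catchup_tx_seq)
		return false;	/* pre-fan-out message still unacked */
	if (have_send && oldest_send_seq < catchup_tx_seq)
		return false;	/* pre-fan-out message still queued */
	return true;
}
```

In the patch, a false result maps to "goto over_batch" plus the new "mprds_catchup_tx0_retries" counter bump, so the non-zero lane simply retries later.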
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH net-next v1 7/7] net/rds: Trigger rds_send_ping() more than once
2026-01-25 7:06 [PATCH net-next v1 0/7] net/rds: RDS-TCP protocol and extension improvements Allison Henderson
` (5 preceding siblings ...)
2026-01-25 7:06 ` [PATCH net-next v1 6/7] net/rds: Use the first lane until RDS_EXTHDR_NPATHS arrives Allison Henderson
@ 2026-01-25 7:06 ` Allison Henderson
2026-01-26 17:47 ` [net-next,v1,7/7] " Simon Horman
6 siblings, 1 reply; 11+ messages in thread
From: Allison Henderson @ 2026-01-25 7:06 UTC (permalink / raw)
To: netdev
Cc: linux-kselftest, pabeni, edumazet, rds-devel, kuba, horms,
linux-rdma, allison.henderson
From: Gerd Rausch <gerd.rausch@oracle.com>
Even though a peer may have received a non-zero
"RDS_EXTHDR_NPATHS" value from a node in the past,
the current peer instance may not have.
Therefore it is important to initiate another rds_send_ping()
after a re-connect to any peer:
at that time it is unknown whether we're still talking to the same
instance of the RDS kernel modules on the other side.
Otherwise, the peer may just operate on a single lane
("c_npaths == 0"), not knowing that more lanes are available.
However, if "c_with_sport_idx" is supported,
we also need to check that the connection we accepted on lane#0
meets the proper source port modulo requirement, as we fan out:
Since the exchange of "RDS_EXTHDR_NPATHS" and "RDS_EXTHDR_SPORT_IDX"
is asynchronous, initially we have no choice but to accept an incoming
connection (via "accept") in the first slot ("cp_index == 0")
for backwards compatibility.
But that very connection may have come from a different lane
with "cp_index != 0", since the peer thought that we already understood
and handled "c_with_sport_idx" properly, as indicated by a previous
exchange before a module was reloaded.
In short:
If a module gets reloaded, we recover from that, but do *not*
allow a downgrade to support fewer lanes.
Downgrades would require us to merge messages from separate lanes,
which is rather tricky with the current RDS design.
Each lane has its own sequence number space and all messages
would need to be re-sequenced as we merge, all while
handling "RDS_FLAG_RETRANSMITTED" and "cp_retrans" properly.
Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
---
net/rds/connection.c | 5 +++-
net/rds/rds.h | 2 +-
net/rds/recv.c | 7 +++++-
net/rds/send.c | 17 ++++++++++++++
net/rds/tcp.h | 2 +-
net/rds/tcp_listen.c | 55 +++++++++++++++++++++++++++++++++-----------
6 files changed, 71 insertions(+), 17 deletions(-)
diff --git a/net/rds/connection.c b/net/rds/connection.c
index 4b7715eb2111c..185f73b016941 100644
--- a/net/rds/connection.c
+++ b/net/rds/connection.c
@@ -447,13 +447,16 @@ void rds_conn_shutdown(struct rds_conn_path *cp)
rcu_read_lock();
if (!hlist_unhashed(&conn->c_hash_node)) {
rcu_read_unlock();
+ if (conn->c_trans->t_mp_capable &&
+ cp->cp_index == 0)
+ rds_send_ping(conn, 0);
rds_queue_reconnect(cp);
} else {
rcu_read_unlock();
}
if (conn->c_trans->conn_slots_available)
- conn->c_trans->conn_slots_available(conn);
+ conn->c_trans->conn_slots_available(conn, false);
}
/* destroy a single rds_conn_path. rds_conn_destroy() iterates over
diff --git a/net/rds/rds.h b/net/rds/rds.h
index f78477912e7c1..eab396ef2c2d0 100644
--- a/net/rds/rds.h
+++ b/net/rds/rds.h
@@ -548,7 +548,7 @@ struct rds_transport {
* messages received on the new socket are not discarded when no
* connection path was available at the time.
*/
- void (*conn_slots_available)(struct rds_connection *conn);
+ void (*conn_slots_available)(struct rds_connection *conn, bool fan_out);
int (*conn_path_connect)(struct rds_conn_path *cp);
/*
diff --git a/net/rds/recv.c b/net/rds/recv.c
index 889a5b7935e5c..4b3f9e4a8bfda 100644
--- a/net/rds/recv.c
+++ b/net/rds/recv.c
@@ -209,6 +209,7 @@ static void rds_recv_hs_exthdrs(struct rds_header *hdr,
bool new_with_sport_idx = false;
u32 new_peer_gen_num = 0;
int new_npaths;
+ bool fan_out;
new_npaths = conn->c_npaths;
@@ -248,7 +249,11 @@ static void rds_recv_hs_exthdrs(struct rds_header *hdr,
spin_lock_irqsave(&cp0->cp_lock, flags);
conn->c_cp0_mprds_catchup_tx_seq = cp0->cp_next_tx_seq;
spin_unlock_irqrestore(&cp0->cp_lock, flags);
+ fan_out = true;
+ } else {
+ fan_out = false;
}
+
/* if RDS_EXTHDR_NPATHS was not found, default to a single-path */
conn->c_npaths = max_t(int, new_npaths, 1);
@@ -257,7 +262,7 @@ static void rds_recv_hs_exthdrs(struct rds_header *hdr,
if (conn->c_npaths > 1 &&
conn->c_trans->conn_slots_available)
- conn->c_trans->conn_slots_available(conn);
+ conn->c_trans->conn_slots_available(conn, fan_out);
}
/* rds_start_mprds() will synchronously start multiple paths when appropriate.
diff --git a/net/rds/send.c b/net/rds/send.c
index ea3b57e9191b1..8e7ece085ff1c 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -1328,6 +1328,23 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
cpath = &conn->c_path[0];
}
+ /* c_npaths == 0 if we have not talked to this peer
+ * before. Initiate a connection request to the
+ * peer right away.
+ */
+ if (conn->c_trans->t_mp_capable &&
+ !rds_conn_path_up(&conn->c_path[0])) {
+ /* Ensures that only one request is queued. And
+ * rds_send_ping() ensures that only one ping is
+ * outstanding.
+ */
+ if (!test_and_set_bit(RDS_RECONNECT_PENDING,
+ &conn->c_path[0].cp_flags))
+ queue_delayed_work(conn->c_path[0].cp_wq,
+ &conn->c_path[0].cp_conn_w, 0);
+ rds_send_ping(conn, 0);
+ }
+
rm->m_conn_path = cpath;
/* Parse any control messages the user may have included. */
diff --git a/net/rds/tcp.h b/net/rds/tcp.h
index b36af0865a078..39c86347188c1 100644
--- a/net/rds/tcp.h
+++ b/net/rds/tcp.h
@@ -90,7 +90,7 @@ void rds_tcp_state_change(struct sock *sk);
struct socket *rds_tcp_listen_init(struct net *net, bool isv6);
void rds_tcp_listen_stop(struct socket *sock, struct work_struct *acceptor);
void rds_tcp_listen_data_ready(struct sock *sk);
-void rds_tcp_conn_slots_available(struct rds_connection *conn);
+void rds_tcp_conn_slots_available(struct rds_connection *conn, bool fan_out);
int rds_tcp_accept_one(struct rds_tcp_net *rtn);
void rds_tcp_keepalive(struct socket *sock);
void *rds_tcp_listen_sock_def_readable(struct net *net);
diff --git a/net/rds/tcp_listen.c b/net/rds/tcp_listen.c
index c628f62421d4e..04fafdf59d722 100644
--- a/net/rds/tcp_listen.c
+++ b/net/rds/tcp_listen.c
@@ -56,14 +56,8 @@ void rds_tcp_keepalive(struct socket *sock)
tcp_sock_set_keepintvl(sock->sk, keepidle);
}
-/* rds_tcp_accept_one_path(): if accepting on cp_index > 0, make sure the
- * client's ipaddr < server's ipaddr. Otherwise, close the accepted
- * socket and force a reconneect from smaller -> larger ip addr. The reason
- * we special case cp_index 0 is to allow the rds probe ping itself to itself
- * get through efficiently.
- */
-static struct rds_tcp_connection *
-rds_tcp_accept_one_path(struct rds_connection *conn, struct socket *sock)
+static int
+rds_tcp_get_peer_sport(struct socket *sock)
{
union {
struct sockaddr_storage storage;
@@ -71,11 +65,9 @@ rds_tcp_accept_one_path(struct rds_connection *conn, struct socket *sock)
struct sockaddr_in sin;
struct sockaddr_in6 sin6;
} saddr;
- int sport, npaths, i_min, i_max, i;
+ int sport;
- if (conn->c_with_sport_idx &&
- kernel_getpeername(sock, &saddr.addr) == 0) {
- /* cp->cp_index is encoded in lowest bits of source-port */
+ if (kernel_getpeername(sock, &saddr.addr) == 0) {
switch (saddr.addr.sa_family) {
case AF_INET:
sport = ntohs(saddr.sin.sin_port);
@@ -90,6 +82,26 @@ rds_tcp_accept_one_path(struct rds_connection *conn, struct socket *sock)
sport = -1;
}
+ return sport;
+}
+
+/* rds_tcp_accept_one_path(): if accepting on cp_index > 0, make sure the
+ * client's ipaddr < server's ipaddr. Otherwise, close the accepted
+ * socket and force a reconnect from smaller -> larger ip addr. The reason
+ * we special case cp_index 0 is to allow the rds probe ping itself to itself
+ * get through efficiently.
+ */
+static struct rds_tcp_connection *
+rds_tcp_accept_one_path(struct rds_connection *conn, struct socket *sock)
+{
+ int sport, npaths, i_min, i_max, i;
+
+ if (conn->c_with_sport_idx)
+ /* cp->cp_index is encoded in lowest bits of source-port */
+ sport = rds_tcp_get_peer_sport(sock);
+ else
+ sport = -1;
+
npaths = max_t(int, 1, conn->c_npaths);
if (sport >= 0) {
@@ -111,10 +123,12 @@ rds_tcp_accept_one_path(struct rds_connection *conn, struct socket *sock)
return NULL;
}
-void rds_tcp_conn_slots_available(struct rds_connection *conn)
+void rds_tcp_conn_slots_available(struct rds_connection *conn, bool fan_out)
{
struct rds_tcp_connection *tc;
struct rds_tcp_net *rtn;
+ struct socket *sock;
+ int sport, npaths;
if (rds_destroy_pending(conn))
return;
@@ -124,6 +138,21 @@ void rds_tcp_conn_slots_available(struct rds_connection *conn)
if (!rtn)
return;
+ sock = tc->t_sock;
+
+ /* During fan-out, check that the connection we already
+ * accepted in slot#0 carried the proper source port modulo.
+ */
+ if (fan_out && conn->c_with_sport_idx && sock &&
+ rds_addr_cmp(&conn->c_laddr, &conn->c_faddr) > 0) {
+ /* cp->cp_index is encoded in lowest bits of source-port */
+ sport = rds_tcp_get_peer_sport(sock);
+ npaths = max_t(int, 1, conn->c_npaths);
+ if (sport >= 0 && sport % npaths != 0)
+ /* peer initiated with a non-#0 lane first */
+ rds_conn_path_drop(conn->c_path, 0);
+ }
+
/* As soon as a connection went down,
* it is safe to schedule a "rds_tcp_accept_one"
* attempt even if there are no connections pending:
--
2.43.0
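The check in rds_tcp_conn_slots_available() relies on the convention (from the "Encode cp_index in TCP source port" patch in this series) that the connecting side's source port is congruent to its cp_index modulo the number of paths. A hypothetical user-space sketch of the decode side (helper names are assumptions, not kernel functions):

```c
#include <assert.h>

/* Recover the peer's lane index from its TCP source port, or -1 when
 * the port is unknown (e.g. getpeername failed), so callers can fall
 * back to the legacy first-free-slot behavior. */
int sport_to_lane(int sport, int npaths)
{
	if (sport < 0 || npaths < 1)
		return -1;
	return sport % npaths;
}

/* The fan-out check keeps slot #0 only for a lane-#0 connection:
 * a known non-zero lane in slot #0 means the peer initiated with a
 * non-#0 lane first, and the path is dropped. */
int slot0_sport_ok(int sport, int npaths)
{
	int lane = sport_to_lane(sport, npaths);

	return lane <= 0;	/* unknown (-1) and lane #0 both pass */
}
```

With npaths == 8, a source port of 40963 decodes to lane 3 and would be rejected in slot #0, while 40960 decodes to lane 0 and is accepted.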
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [net-next,v1,7/7] net/rds: Trigger rds_send_ping() more than once
2026-01-25 7:06 ` [PATCH net-next v1 7/7] net/rds: Trigger rds_send_ping() more than once Allison Henderson
@ 2026-01-26 17:47 ` Simon Horman
0 siblings, 0 replies; 11+ messages in thread
From: Simon Horman @ 2026-01-26 17:47 UTC (permalink / raw)
To: achender
Cc: Simon Horman, allison.henderson, rds-devel, netdev, edumazet,
pabeni, linux-rdma, linux-kselftest, kuba
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
net/rds: Trigger rds_send_ping() more than once
This commit ensures that rds_send_ping() is triggered after reconnect
to properly communicate path capabilities, especially after an RDS
module reload. It also validates source port modulo during fan-out.
> diff --git a/net/rds/send.c b/net/rds/send.c
> index ea3b57e9191b..8e7ece085ff1 100644
> --- a/net/rds/send.c
> +++ b/net/rds/send.c
> @@ -1328,6 +1328,23 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
> cpath = &conn->c_path[0];
> }
>
> + /* c_npaths == 0 if we have not talked to this peer
> + * before. Initiate a connection request to the
> + * peer right away.
> + */
> + if (conn->c_trans->t_mp_capable &&
> + !rds_conn_path_up(&conn->c_path[0])) {
This isn't a bug, but the comment says "c_npaths == 0" while the code
checks "!rds_conn_path_up()". These conditions are different:
c_npaths == 0 means no paths negotiated yet, while !rds_conn_path_up()
is true for any state other than RDS_CONN_UP (DOWN, CONNECTING,
DISCONNECTING, ERROR, RESETTING).
The commit message mentions "after a re-connect to any peer" which
aligns with the code, but the comment specifically mentions c_npaths
which isn't checked. Should the comment be updated to reflect the
actual logic?
> + /* Ensures that only one request is queued. And
> + * rds_send_ping() ensures that only one ping is
> + * outstanding.
> + */
[ ... ]
^ permalink raw reply [flat|nested] 11+ messages in thread