* [PATCH mptcp-next v8 0/7] implement mptcp read_sock
@ 2025-07-09 7:25 Geliang Tang
2025-07-09 7:25 ` [PATCH mptcp-next v8 1/7] mptcp: add eat_recv_skb helper Geliang Tang
` (8 more replies)
0 siblings, 9 replies; 10+ messages in thread
From: Geliang Tang @ 2025-07-09 7:25 UTC (permalink / raw)
To: mptcp, matttbe, hare, pabeni; +Cc: Geliang Tang
From: Geliang Tang <tanggeliang@kylinos.cn>
v8:
- export struct tcp_splice_state and tcp_splice_data_recv() in net/tcp.h.
- add a new helper mptcp_recv_should_stop.
- add mptcp_connect_splice.sh.
- update commit logs.
Based-on: <20250708-selftests-mptcp-connect-modes-v1-0-f8e6f9828fd8@kernel.org>
v7:
- only patch 1 and patch 2 changed.
- add a new helper mptcp_eat_recv_skb.
- invoke skb_peek in mptcp_recv_skb().
- use while ((skb = mptcp_recv_skb(sk)) != NULL) instead of
skb_queue_walk_safe(&sk->sk_receive_queue, skb, tmp).
v6:
- address Paolo's comment for v4, v5 (thanks)
v5:
- extract the common code of __mptcp_recvmsg_mskq() and mptcp_read_sock()
into a new helper __mptcp_recvmsg_desc() to reduce code duplication.
v4:
- v3 doesn't work for the MPTCP fallback tests in mptcp_connect.sh; this
set fixes it.
- invoke __mptcp_move_skbs in mptcp_read_sock.
- use INDIRECT_CALL_INET_1 in __tcp_splice_read.
v3:
- merge the two squash-to patches.
- use sk->sk_rcvbuf instead of INT_MAX as the max len in
mptcp_read_sock().
- add splice io mode for mptcp_connect and drop mptcp_splice.c test.
- the splice test for packetdrill is also added here:
https://github.com/multipath-tcp/packetdrill/pull/162
v2:
- set splice_read of mptcp
- add a splice selftest.
I have good news! I recently added MPTCP support to "NVMe over TCP",
and my RFC patches are under review by NVMe maintainer Hannes.
Switching "NVMe over TCP" to MPTCP is very simple: I used IPPROTO_MPTCP
instead of IPPROTO_TCP to create MPTCP sockets on both the target and
host sides. These sockets are created in kernel space.
nvmet_tcp_add_port:
	ret = sock_create(port->addr.ss_family, SOCK_STREAM,
			  IPPROTO_MPTCP, &port->sock);

nvme_tcp_alloc_queue:
	ret = sock_create_kern(current->nsproxy->net_ns,
			       ctrl->addr.ss_family, SOCK_STREAM,
			       IPPROTO_MPTCP, &queue->sock);
nvme_tcp_try_recv() needs to call the .read_sock interface of struct
proto_ops, but it is not implemented in MPTCP, so I implemented it
with reference to __mptcp_recvmsg_mskq().
Since the NVMe-side patches are still under review, I am only sending
the MPTCP part in this set to the MPTCP ML for your opinions.
Geliang Tang (7):
mptcp: add eat_recv_skb helper
mptcp: implement .read_sock
tcp: export tcp_splice_state
mptcp: add recv_should_stop helper
mptcp: implement .splice_read
selftests: mptcp: add splice io mode
selftests: mptcp: connect: cover splice mode
include/net/tcp.h | 12 +
net/ipv4/tcp.c | 13 +-
net/mptcp/protocol.c | 237 +++++++++++++++---
tools/testing/selftests/net/mptcp/Makefile | 2 +-
.../selftests/net/mptcp/mptcp_connect.c | 63 ++++-
.../net/mptcp/mptcp_connect_splice.sh | 4 +
6 files changed, 285 insertions(+), 46 deletions(-)
create mode 100755 tools/testing/selftests/net/mptcp/mptcp_connect_splice.sh
--
2.48.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH mptcp-next v8 1/7] mptcp: add eat_recv_skb helper
2025-07-09 7:25 [PATCH mptcp-next v8 0/7] implement mptcp read_sock Geliang Tang
@ 2025-07-09 7:25 ` Geliang Tang
2025-07-09 7:25 ` [PATCH mptcp-next v8 2/7] mptcp: implement .read_sock Geliang Tang
` (7 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Geliang Tang @ 2025-07-09 7:25 UTC (permalink / raw)
To: mptcp, matttbe, hare, pabeni; +Cc: Geliang Tang
From: Geliang Tang <tanggeliang@kylinos.cn>
This patch extracts the skb-freeing code in __mptcp_recvmsg_mskq()
into a new helper, mptcp_eat_recv_skb().
Use sk_eat_skb() in this helper instead of open-coding it.
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
---
net/mptcp/protocol.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 0c93b36373b1..48365d54bc06 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -1845,6 +1845,15 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied);
+static void mptcp_eat_recv_skb(struct sock *sk, struct sk_buff *skb)
+{
+ /* avoid the indirect call, we know the destructor is sock_wfree */
+ skb->destructor = NULL;
+ atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
+ sk_mem_uncharge(sk, skb->truesize);
+ sk_eat_skb(sk, skb);
+}
+
static int __mptcp_recvmsg_mskq(struct sock *sk,
struct msghdr *msg,
size_t len, int flags,
@@ -1887,12 +1896,7 @@ static int __mptcp_recvmsg_mskq(struct sock *sk,
}
if (!(flags & MSG_PEEK)) {
- /* avoid the indirect call, we know the destructor is sock_wfree */
- skb->destructor = NULL;
- atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
- sk_mem_uncharge(sk, skb->truesize);
- __skb_unlink(skb, &sk->sk_receive_queue);
- __kfree_skb(skb);
+ mptcp_eat_recv_skb(sk, skb);
msk->bytes_consumed += count;
}
--
2.48.1
* [PATCH mptcp-next v8 2/7] mptcp: implement .read_sock
2025-07-09 7:25 [PATCH mptcp-next v8 0/7] implement mptcp read_sock Geliang Tang
2025-07-09 7:25 ` [PATCH mptcp-next v8 1/7] mptcp: add eat_recv_skb helper Geliang Tang
@ 2025-07-09 7:25 ` Geliang Tang
2025-07-09 7:25 ` [PATCH mptcp-next v8 3/7] tcp: export tcp_splice_state Geliang Tang
` (6 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Geliang Tang @ 2025-07-09 7:25 UTC (permalink / raw)
To: mptcp, matttbe, hare, pabeni; +Cc: Geliang Tang
From: Geliang Tang <tanggeliang@kylinos.cn>
nvme_tcp_try_recv() needs to call the .read_sock interface of struct
proto_ops, but it's not implemented in MPTCP.
This patch implements it with reference to __tcp_read_sock() and
__mptcp_recvmsg_mskq().
Corresponding to tcp_recv_skb(), a new helper for MPTCP named
mptcp_recv_skb() is added to peek a skb from sk->sk_receive_queue.
Compared with __mptcp_recvmsg_mskq(), mptcp_read_sock() uses
sk->sk_rcvbuf as the max read length. The LISTEN status is checked
before the while loop, and mptcp_recv_skb() and mptcp_cleanup_rbuf()
are invoked after the loop. In the loop, all the flag checks done in
__mptcp_recvmsg_mskq() are removed.
Reviewed-by: Hannes Reinecke <hare@kernel.org>
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
---
v2:
- first check the sk_state (Matt), but do not look for the end of a
connection like TCP does in __tcp_read_sock():
if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
break;
This will cause a use-after-free error:
BUG: KASAN: slab-use-after-free in mptcp_read_sock.
v3:
- Use sk->sk_rcvbuf instead of INT_MAX as the max len.
v4:
- invoke __mptcp_move_skbs.
---
net/mptcp/protocol.c | 63 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 63 insertions(+)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 48365d54bc06..fc429d175ede 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -3962,6 +3962,67 @@ static __poll_t mptcp_poll(struct file *file, struct socket *sock,
return mask;
}
+static struct sk_buff *mptcp_recv_skb(struct sock *sk)
+{
+ if (skb_queue_empty(&sk->sk_receive_queue))
+ __mptcp_move_skbs(sk);
+
+ return skb_peek(&sk->sk_receive_queue);
+}
+
+/*
+ * Note:
+ * - It is assumed that the socket was locked by the caller.
+ */
+static int mptcp_read_sock(struct sock *sk, read_descriptor_t *desc,
+ sk_read_actor_t recv_actor)
+{
+ struct mptcp_sock *msk = mptcp_sk(sk);
+ size_t len = sk->sk_rcvbuf;
+ struct sk_buff *skb;
+ int copied = 0;
+
+ if (sk->sk_state == TCP_LISTEN)
+ return -ENOTCONN;
+ while ((skb = mptcp_recv_skb(sk)) != NULL) {
+ u32 offset = MPTCP_SKB_CB(skb)->offset;
+ u32 data_len = skb->len - offset;
+ u32 size = min_t(size_t, len - copied, data_len);
+ int count;
+
+ count = recv_actor(desc, skb, offset, size);
+ if (count <= 0) {
+ if (!copied)
+ copied = count;
+ break;
+ }
+
+ copied += count;
+
+ if (count < data_len) {
+ MPTCP_SKB_CB(skb)->offset += count;
+ MPTCP_SKB_CB(skb)->map_seq += count;
+ msk->bytes_consumed += count;
+ break;
+ }
+
+ mptcp_eat_recv_skb(sk, skb);
+ msk->bytes_consumed += count;
+
+ if (copied >= len)
+ break;
+ }
+
+ mptcp_rcv_space_adjust(msk, copied);
+
+ if (copied > 0) {
+ mptcp_recv_skb(sk);
+ mptcp_cleanup_rbuf(msk, copied);
+ }
+
+ return copied;
+}
+
static const struct proto_ops mptcp_stream_ops = {
.family = PF_INET,
.owner = THIS_MODULE,
@@ -3982,6 +4043,7 @@ static const struct proto_ops mptcp_stream_ops = {
.recvmsg = inet_recvmsg,
.mmap = sock_no_mmap,
.set_rcvlowat = mptcp_set_rcvlowat,
+ .read_sock = mptcp_read_sock,
};
static struct inet_protosw mptcp_protosw = {
@@ -4086,6 +4148,7 @@ static const struct proto_ops mptcp_v6_stream_ops = {
.compat_ioctl = inet6_compat_ioctl,
#endif
.set_rcvlowat = mptcp_set_rcvlowat,
+ .read_sock = mptcp_read_sock,
};
static struct proto mptcp_v6_prot;
--
2.48.1
* [PATCH mptcp-next v8 3/7] tcp: export tcp_splice_state
2025-07-09 7:25 [PATCH mptcp-next v8 0/7] implement mptcp read_sock Geliang Tang
2025-07-09 7:25 ` [PATCH mptcp-next v8 1/7] mptcp: add eat_recv_skb helper Geliang Tang
2025-07-09 7:25 ` [PATCH mptcp-next v8 2/7] mptcp: implement .read_sock Geliang Tang
@ 2025-07-09 7:25 ` Geliang Tang
2025-07-09 7:25 ` [PATCH mptcp-next v8 4/7] mptcp: add recv_should_stop helper Geliang Tang
` (5 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Geliang Tang @ 2025-07-09 7:25 UTC (permalink / raw)
To: mptcp, matttbe, hare, pabeni; +Cc: Geliang Tang
From: Geliang Tang <tanggeliang@kylinos.cn>
Export struct tcp_splice_state and tcp_splice_data_recv() in net/tcp.h so
that they can be used by MPTCP.
Suggested-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
---
include/net/tcp.h | 12 ++++++++++++
net/ipv4/tcp.c | 13 ++-----------
2 files changed, 14 insertions(+), 11 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index bc08de49805c..338b97a91278 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -258,6 +258,18 @@ static_assert((1 << ATO_BITS) > TCP_DELACK_MAX);
*/
#define TFO_SERVER_WO_SOCKOPT1 0x400
+/*
+ * TCP splice context
+ */
+struct tcp_splice_state {
+ struct pipe_inode_info *pipe;
+ size_t len;
+ unsigned int flags;
+};
+
+int tcp_splice_data_recv(read_descriptor_t *rd_desc, struct sk_buff *skb,
+ unsigned int offset, size_t len);
+
/* sysctl variables for tcp */
extern int sysctl_tcp_max_orphans;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 8637dfce8c4a..505c3a123e2d 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -316,15 +316,6 @@ EXPORT_SYMBOL(tcp_have_smc);
struct percpu_counter tcp_sockets_allocated ____cacheline_aligned_in_smp;
EXPORT_IPV6_MOD(tcp_sockets_allocated);
-/*
- * TCP splice context
- */
-struct tcp_splice_state {
- struct pipe_inode_info *pipe;
- size_t len;
- unsigned int flags;
-};
-
/*
* Pressure flag: try to collapse.
* Technical note: it is used by multiple contexts non atomically.
@@ -757,8 +748,8 @@ void tcp_push(struct sock *sk, int flags, int mss_now,
__tcp_push_pending_frames(sk, mss_now, nonagle);
}
-static int tcp_splice_data_recv(read_descriptor_t *rd_desc, struct sk_buff *skb,
- unsigned int offset, size_t len)
+int tcp_splice_data_recv(read_descriptor_t *rd_desc, struct sk_buff *skb,
+ unsigned int offset, size_t len)
{
struct tcp_splice_state *tss = rd_desc->arg.data;
int ret;
--
2.48.1
* [PATCH mptcp-next v8 4/7] mptcp: add recv_should_stop helper
2025-07-09 7:25 [PATCH mptcp-next v8 0/7] implement mptcp read_sock Geliang Tang
` (2 preceding siblings ...)
2025-07-09 7:25 ` [PATCH mptcp-next v8 3/7] tcp: export tcp_splice_state Geliang Tang
@ 2025-07-09 7:25 ` Geliang Tang
2025-07-09 7:25 ` [PATCH mptcp-next v8 5/7] mptcp: implement .splice_read Geliang Tang
` (4 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Geliang Tang @ 2025-07-09 7:25 UTC (permalink / raw)
To: mptcp, matttbe, hare, pabeni; +Cc: Geliang Tang
From: Geliang Tang <tanggeliang@kylinos.cn>
Factor out a new helper mptcp_recv_should_stop() from mptcp_recvmsg() to
check whether to stop receiving. It will be used in mptcp_splice_read()
too.
Suggested-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
---
net/mptcp/protocol.c | 59 ++++++++++++++++++++++++--------------------
1 file changed, 32 insertions(+), 27 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index fc429d175ede..1fe04ec05ebd 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2090,12 +2090,40 @@ static unsigned int mptcp_inq_hint(const struct sock *sk)
return 0;
}
+static int mptcp_recv_should_stop(struct sock *sk, long timeo, int *shutdown)
+{
+ if (sk->sk_err)
+ return sock_error(sk);
+
+ if (sk->sk_shutdown & RCV_SHUTDOWN) {
+ *shutdown = 1;
+ /* race breaker: the shutdown could be after the
+ * previous receive queue check
+ */
+ if (__mptcp_move_skbs(sk))
+ *shutdown = 0;
+ return 0;
+ }
+
+ if (sk->sk_state == TCP_CLOSE)
+ return -ENOTCONN;
+
+ if (!timeo)
+ return -EAGAIN;
+
+ if (signal_pending(current))
+ return sock_intr_errno(timeo);
+
+ return 0;
+}
+
static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
int flags, int *addr_len)
{
struct mptcp_sock *msk = mptcp_sk(sk);
struct scm_timestamping_internal tss;
int copied = 0, cmsg_flags = 0;
+ int shutdown = -1;
int target;
long timeo;
@@ -2146,34 +2174,11 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
signal_pending(current))
break;
} else {
- if (sk->sk_err) {
- copied = sock_error(sk);
+ copied = mptcp_recv_should_stop(sk, timeo, &shutdown);
+ if (copied < 0 || shutdown == 1)
break;
- }
-
- if (sk->sk_shutdown & RCV_SHUTDOWN) {
- /* race breaker: the shutdown could be after the
- * previous receive queue check
- */
- if (__mptcp_move_skbs(sk))
- continue;
- break;
- }
-
- if (sk->sk_state == TCP_CLOSE) {
- copied = -ENOTCONN;
- break;
- }
-
- if (!timeo) {
- copied = -EAGAIN;
- break;
- }
-
- if (signal_pending(current)) {
- copied = sock_intr_errno(timeo);
- break;
- }
+ if (shutdown == 0)
+ continue;
}
pr_debug("block timeout %ld\n", timeo);
--
2.48.1
* [PATCH mptcp-next v8 5/7] mptcp: implement .splice_read
2025-07-09 7:25 [PATCH mptcp-next v8 0/7] implement mptcp read_sock Geliang Tang
` (3 preceding siblings ...)
2025-07-09 7:25 ` [PATCH mptcp-next v8 4/7] mptcp: add recv_should_stop helper Geliang Tang
@ 2025-07-09 7:25 ` Geliang Tang
2025-07-09 7:25 ` [PATCH mptcp-next v8 6/7] selftests: mptcp: add splice io mode Geliang Tang
` (3 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Geliang Tang @ 2025-07-09 7:25 UTC (permalink / raw)
To: mptcp, matttbe, hare, pabeni; +Cc: Geliang Tang
From: Geliang Tang <tanggeliang@kylinos.cn>
This patch implements the .splice_read interface of MPTCP's struct
proto_ops as mptcp_splice_read(), with reference to tcp_splice_read().
Corresponding to __tcp_splice_read(), __mptcp_splice_read() is defined,
invoking mptcp_read_sock() instead of tcp_read_sock().
mptcp_splice_read() is almost the same as tcp_splice_read(), except for
sock_rps_record_flow() and mptcp_recv_should_stop().
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
---
net/mptcp/protocol.c | 99 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 99 insertions(+)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 1fe04ec05ebd..6b3be1d0668d 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -4028,6 +4028,103 @@ static int mptcp_read_sock(struct sock *sk, read_descriptor_t *desc,
return copied;
}
+static int __mptcp_splice_read(struct sock *sk, struct tcp_splice_state *tss)
+{
+ /* Store TCP splice context information in read_descriptor_t. */
+ read_descriptor_t rd_desc = {
+ .arg.data = tss,
+ .count = tss->len,
+ };
+
+ return mptcp_read_sock(sk, &rd_desc, tcp_splice_data_recv);
+}
+
+/**
+ * mptcp_splice_read - splice data from MPTCP socket to a pipe
+ * @sock: socket to splice from
+ * @ppos: position (not valid)
+ * @pipe: pipe to splice to
+ * @len: number of bytes to splice
+ * @flags: splice modifier flags
+ *
+ * Description:
+ * Will read pages from given socket and fill them into a pipe.
+ *
+ **/
+static ssize_t mptcp_splice_read(struct socket *sock, loff_t *ppos,
+ struct pipe_inode_info *pipe, size_t len,
+ unsigned int flags)
+{
+ struct tcp_splice_state tss = {
+ .pipe = pipe,
+ .len = len,
+ .flags = flags,
+ };
+ struct sock *sk = sock->sk;
+ int shutdown = -1;
+ ssize_t spliced;
+ long timeo;
+ int ret;
+
+ /*
+ * We can't seek on a socket input
+ */
+ if (unlikely(*ppos))
+ return -ESPIPE;
+
+ spliced = 0;
+ ret = 0;
+
+ lock_sock(sk);
+
+ timeo = sock_rcvtimeo(sk, sock->file->f_flags & O_NONBLOCK);
+ while (tss.len) {
+ ret = __mptcp_splice_read(sk, &tss);
+ if (ret < 0) {
+ break;
+ } else if (!ret) {
+ if (spliced)
+ break;
+ if (sock_flag(sk, SOCK_DONE))
+ break;
+ ret = mptcp_recv_should_stop(sk, timeo, &shutdown);
+ if (ret < 0 || shutdown == 1)
+ break;
+ if (shutdown == 0)
+ continue;
+ /* if __mptcp_splice_read() got nothing while we have
+ * an skb in receive queue, we do not want to loop.
+ * This might happen with URG data.
+ */
+ if (!skb_queue_empty(&sk->sk_receive_queue))
+ break;
+ ret = sk_wait_data(sk, &timeo, NULL);
+ if (ret < 0)
+ break;
+ continue;
+ }
+ tss.len -= ret;
+ spliced += ret;
+
+ if (!tss.len || !timeo)
+ break;
+ release_sock(sk);
+ lock_sock(sk);
+
+ if (sk->sk_err || sk->sk_state == TCP_CLOSE ||
+ (sk->sk_shutdown & RCV_SHUTDOWN) ||
+ signal_pending(current))
+ break;
+ }
+
+ release_sock(sk);
+
+ if (spliced)
+ return spliced;
+
+ return ret;
+}
+
static const struct proto_ops mptcp_stream_ops = {
.family = PF_INET,
.owner = THIS_MODULE,
@@ -4049,6 +4146,7 @@ static const struct proto_ops mptcp_stream_ops = {
.mmap = sock_no_mmap,
.set_rcvlowat = mptcp_set_rcvlowat,
.read_sock = mptcp_read_sock,
+ .splice_read = mptcp_splice_read,
};
static struct inet_protosw mptcp_protosw = {
@@ -4154,6 +4252,7 @@ static const struct proto_ops mptcp_v6_stream_ops = {
#endif
.set_rcvlowat = mptcp_set_rcvlowat,
.read_sock = mptcp_read_sock,
+ .splice_read = mptcp_splice_read,
};
static struct proto mptcp_v6_prot;
--
2.48.1
* [PATCH mptcp-next v8 6/7] selftests: mptcp: add splice io mode
2025-07-09 7:25 [PATCH mptcp-next v8 0/7] implement mptcp read_sock Geliang Tang
` (4 preceding siblings ...)
2025-07-09 7:25 ` [PATCH mptcp-next v8 5/7] mptcp: implement .splice_read Geliang Tang
@ 2025-07-09 7:25 ` Geliang Tang
2025-07-09 7:25 ` [PATCH mptcp-next v8 7/7] selftests: mptcp: connect: cover splice mode Geliang Tang
` (2 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Geliang Tang @ 2025-07-09 7:25 UTC (permalink / raw)
To: mptcp, matttbe, hare, pabeni; +Cc: Geliang Tang
From: Geliang Tang <tanggeliang@kylinos.cn>
This patch adds a new 'splice' io mode to mptcp_connect to test the
newly added .read_sock() and .splice_read() interfaces of MPTCP.
Usage:
./mptcp_connect.sh -m splice
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
---
.../selftests/net/mptcp/mptcp_connect.c | 63 ++++++++++++++++++-
1 file changed, 62 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
index ac1349c4b9e5..4c219d620398 100644
--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
+++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
@@ -51,6 +51,7 @@ enum cfg_mode {
CFG_MODE_POLL,
CFG_MODE_MMAP,
CFG_MODE_SENDFILE,
+ CFG_MODE_SPLICE,
};
enum cfg_peek {
@@ -123,7 +124,7 @@ static void die_usage(void)
fprintf(stderr, "\t-j -- add additional sleep at connection start and tear down "
"-- for MPJ tests\n");
fprintf(stderr, "\t-l -- listens mode, accepts incoming connection\n");
- fprintf(stderr, "\t-m [poll|mmap|sendfile] -- use poll(default)/mmap+write/sendfile\n");
+ fprintf(stderr, "\t-m [poll|mmap|sendfile|splice] -- use poll(default)/mmap+write/sendfile/splice\n");
fprintf(stderr, "\t-M mark -- set socket packet mark\n");
fprintf(stderr, "\t-o option -- test sockopt <option>\n");
fprintf(stderr, "\t-p num -- use port num\n");
@@ -925,6 +926,55 @@ static int copyfd_io_sendfile(int infd, int peerfd, int outfd,
return err;
}
+static int do_splice(const int infd, const int outfd, const size_t len,
+ struct wstate *winfo)
+{
+ int pipefd[2];
+ ssize_t bytes;
+ int err;
+
+ err = pipe(pipefd);
+ if (err)
+ return err;
+
+ while ((bytes = splice(infd, NULL, pipefd[1], NULL,
+ len - winfo->total_len,
+ SPLICE_F_MOVE | SPLICE_F_MORE)) > 0) {
+ splice(pipefd[0], NULL, outfd, NULL, bytes,
+ SPLICE_F_MOVE | SPLICE_F_MORE);
+ }
+
+ close(pipefd[0]);
+ close(pipefd[1]);
+
+ return 0;
+}
+
+static int copyfd_io_splice(int infd, int peerfd, int outfd, unsigned int size,
+ bool *in_closed_after_out, struct wstate *winfo)
+{
+ int err;
+
+ if (listen_mode) {
+ err = do_splice(peerfd, outfd, size, winfo);
+ if (err)
+ return err;
+
+ err = do_splice(infd, peerfd, size, winfo);
+ } else {
+ err = do_splice(infd, peerfd, size, winfo);
+ if (err)
+ return err;
+
+ shut_wr(peerfd);
+
+ err = do_splice(peerfd, outfd, size, winfo);
+ *in_closed_after_out = true;
+ }
+
+ return err;
+}
+
static int copyfd_io(int infd, int peerfd, int outfd, bool close_peerfd, struct wstate *winfo)
{
bool in_closed_after_out = false;
@@ -957,6 +1007,14 @@ static int copyfd_io(int infd, int peerfd, int outfd, bool close_peerfd, struct
&in_closed_after_out, winfo);
break;
+ case CFG_MODE_SPLICE:
+ file_size = get_infd_size(infd);
+ if (file_size < 0)
+ return file_size;
+ ret = copyfd_io_splice(infd, peerfd, outfd, file_size,
+ &in_closed_after_out, winfo);
+ break;
+
default:
fprintf(stderr, "Invalid mode %d\n", cfg_mode);
@@ -1361,12 +1419,15 @@ int parse_mode(const char *mode)
return CFG_MODE_MMAP;
if (!strcasecmp(mode, "sendfile"))
return CFG_MODE_SENDFILE;
+ if (!strcasecmp(mode, "splice"))
+ return CFG_MODE_SPLICE;
fprintf(stderr, "Unknown test mode: %s\n", mode);
fprintf(stderr, "Supported modes are:\n");
fprintf(stderr, "\t\t\"poll\" - interleaved read/write using poll()\n");
fprintf(stderr, "\t\t\"mmap\" - send entire input file (mmap+write), then read response (-l will read input first)\n");
fprintf(stderr, "\t\t\"sendfile\" - send entire input file (sendfile), then read response (-l will read input first)\n");
+ fprintf(stderr, "\t\t\"splice\" - send entire input file (splice), then read response (-l will read input first)\n");
die_usage();
--
2.48.1
* [PATCH mptcp-next v8 7/7] selftests: mptcp: connect: cover splice mode
2025-07-09 7:25 [PATCH mptcp-next v8 0/7] implement mptcp read_sock Geliang Tang
` (5 preceding siblings ...)
2025-07-09 7:25 ` [PATCH mptcp-next v8 6/7] selftests: mptcp: add splice io mode Geliang Tang
@ 2025-07-09 7:25 ` Geliang Tang
2025-07-09 16:06 ` [PATCH mptcp-next v8 0/7] implement mptcp read_sock MPTCP CI
2025-07-09 17:53 ` MPTCP CI
8 siblings, 0 replies; 10+ messages in thread
From: Geliang Tang @ 2025-07-09 7:25 UTC (permalink / raw)
To: mptcp, matttbe, hare, pabeni; +Cc: Geliang Tang
From: Geliang Tang <tanggeliang@kylinos.cn>
The "splice" alternate mode for mptcp_connect.sh/.c is available now;
this patch adds mptcp_connect_splice.sh to test it in the MPTCP CI by
default.
Suggested-by: Matthieu Baerts <matttbe@kernel.org>
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
---
tools/testing/selftests/net/mptcp/Makefile | 2 +-
tools/testing/selftests/net/mptcp/mptcp_connect_splice.sh | 4 ++++
2 files changed, 5 insertions(+), 1 deletion(-)
create mode 100755 tools/testing/selftests/net/mptcp/mptcp_connect_splice.sh
diff --git a/tools/testing/selftests/net/mptcp/Makefile b/tools/testing/selftests/net/mptcp/Makefile
index 4c7e51336ab2..83c1602fb6bd 100644
--- a/tools/testing/selftests/net/mptcp/Makefile
+++ b/tools/testing/selftests/net/mptcp/Makefile
@@ -5,7 +5,7 @@ top_srcdir = ../../../../..
CFLAGS += -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
TEST_PROGS := mptcp_connect.sh mptcp_connect_mmap.sh mptcp_connect_sendfile.sh \
- mptcp_connect_checksum.sh pm_netlink.sh mptcp_join.sh diag.sh \
+ mptcp_connect_splice.sh mptcp_connect_checksum.sh pm_netlink.sh mptcp_join.sh diag.sh \
simult_flows.sh mptcp_sockopt.sh userspace_pm.sh
TEST_GEN_FILES = mptcp_connect pm_nl_ctl mptcp_sockopt mptcp_inq mptcp_diag
diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect_splice.sh b/tools/testing/selftests/net/mptcp/mptcp_connect_splice.sh
new file mode 100755
index 000000000000..77fe240ecf8c
--- /dev/null
+++ b/tools/testing/selftests/net/mptcp/mptcp_connect_splice.sh
@@ -0,0 +1,4 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+"$(dirname "${0}")/mptcp_connect.sh" -m splice "${@}"
--
2.48.1
* Re: [PATCH mptcp-next v8 0/7] implement mptcp read_sock
2025-07-09 7:25 [PATCH mptcp-next v8 0/7] implement mptcp read_sock Geliang Tang
` (6 preceding siblings ...)
2025-07-09 7:25 ` [PATCH mptcp-next v8 7/7] selftests: mptcp: connect: cover splice mode Geliang Tang
@ 2025-07-09 16:06 ` MPTCP CI
2025-07-09 17:53 ` MPTCP CI
8 siblings, 0 replies; 10+ messages in thread
From: MPTCP CI @ 2025-07-09 16:06 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
Hi Geliang,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Critical: 223 Call Trace(s) - Critical: Global Timeout ❌
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/16172161051
Initiator: Matthieu Baerts (NGI0)
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/aa295276bd34
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=980362
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-normal
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)
* Re: [PATCH mptcp-next v8 0/7] implement mptcp read_sock
2025-07-09 7:25 [PATCH mptcp-next v8 0/7] implement mptcp read_sock Geliang Tang
` (7 preceding siblings ...)
2025-07-09 16:06 ` [PATCH mptcp-next v8 0/7] implement mptcp read_sock MPTCP CI
@ 2025-07-09 17:53 ` MPTCP CI
8 siblings, 0 replies; 10+ messages in thread
From: MPTCP CI @ 2025-07-09 17:53 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
Hi Geliang,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/16172161051
Initiator: Matthieu Baerts (NGI0)
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/aa295276bd34
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=980362
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-normal
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)