From: Geliang Tang <geliang@kernel.org>
To: mptcp@lists.linux.dev
Cc: Geliang Tang <tanggeliang@kylinos.cn>
Subject: [PATCH mptcp-next v14 0/8] implement mptcp read_sock
Date: Mon, 24 Nov 2025 17:12:45 +0800
Message-ID: <cover.1763974740.git.tanggeliang@kylinos.cn>
From: Geliang Tang <tanggeliang@kylinos.cn>
v14:
- Patch 2: add a new helper __mptcp_read_sock() with a 'noack'
parameter. This makes it more similar to __tcp_read_sock() and also
prepares for the use of mptcp_read_sock_noack() in MPTCP KTLS support.
Also invoke msk_owned_by_me() in it to make sure the socket is locked.
- Patch 5: export tcp_splice_data_recv(), as Paolo suggested in v7.
- Patch 6: drop mptcp_splice_data_recv().
v13:
- rebase on "mptcp: introduce backlog processing" v6
- Link: https://patchwork.kernel.org/project/mptcp/cover/cover.1761198660.git.geliang@kernel.org/
v12:
- rebase on "mptcp: receive path improvement" patches 1-7.
- some cleanups.
v11:
- drop "tcp: drop release and lock again in splice_read", and add this
release-and-lock-again logic in mptcp_splice_read too. (Thanks Mat, I
didn't understand the intent of this code before.)
- call mptcp_rps_record_subflows() in mptcp_splice_read() as Mat
suggested.
v10:
- add an offset parameter to mptcp_recv_skb() and make it more like
tcp_recv_skb().
- Link: https://patchwork.kernel.org/project/mptcp/cover/cover.1756780274.git.tanggeliang@kylinos.cn/
v9:
- merge the squash-to patches.
- a new patch "drop release and lock again in splice_read".
- Link: https://patchwork.kernel.org/project/mptcp/cover/cover.1752399660.git.tanggeliang@kylinos.cn/
v8:
- export struct tcp_splice_state and tcp_splice_data_recv() in net/tcp.h.
- add a new helper mptcp_recv_should_stop().
- add mptcp_connect_splice.sh.
- update commit logs.
v7:
- only patch 1 and patch 2 changed.
- add a new helper mptcp_eat_recv_skb().
- invoke skb_peek() in mptcp_recv_skb().
- use while ((skb = mptcp_recv_skb(sk)) != NULL) instead of
skb_queue_walk_safe(&sk->sk_receive_queue, skb, tmp).
v6:
- address Paolo's comments for v4, v5 (thanks)
v5:
- extract the common code of __mptcp_recvmsg_mskq() and mptcp_read_sock()
into a new helper __mptcp_recvmsg_desc() to reduce code duplication.
v4:
- v3 didn't work for the MPTCP fallback tests in mptcp_connect.sh; this
set fixes it.
- invoke __mptcp_move_skbs in mptcp_read_sock.
- use INDIRECT_CALL_INET_1 in __tcp_splice_read.
v3:
- merge the two squash-to patches.
- use sk->sk_rcvbuf instead of INT_MAX as the max len in
mptcp_read_sock().
- add splice io mode for mptcp_connect and drop mptcp_splice.c test.
- the splice test for packetdrill is also added here:
https://github.com/multipath-tcp/packetdrill/pull/162
v2:
- set splice_read of mptcp
- add a splice selftest.
I have good news! I recently added MPTCP support to NVMe over TCP, and
my RFC patches are under review by NVMe maintainer Hannes.
Switching NVMe over TCP to MPTCP is very simple: use IPPROTO_MPTCP
instead of IPPROTO_TCP to create the sockets on both the target and
host sides. These sockets are created in kernel space.
In nvmet_tcp_add_port():

	ret = sock_create(port->addr.ss_family, SOCK_STREAM,
			  IPPROTO_MPTCP, &port->sock);

In nvme_tcp_alloc_queue():

	ret = sock_create_kern(current->nsproxy->net_ns,
			       ctrl->addr.ss_family, SOCK_STREAM,
			       IPPROTO_MPTCP, &queue->sock);
nvme_tcp_try_recv() needs to call the .read_sock interface of struct
proto_ops, but MPTCP did not implement it. So I implemented it with
reference to __mptcp_recvmsg_mskq().
Since the NVMe patches are still under review, I am only sending the
MPTCP patches in this set to the MPTCP ML for your opinions.
Geliang Tang (8):
mptcp: add eat_recv_skb helper
mptcp: implement .read_sock
tcp: add recv_should_stop helper
mptcp: use recv_should_stop helper
tcp: export tcp_splice_state
mptcp: implement .splice_read
selftests: mptcp: add splice io mode
selftests: mptcp: connect: cover splice mode
include/net/tcp.h | 34 +++
net/ipv4/tcp.c | 86 ++-----
net/mptcp/protocol.c | 236 +++++++++++++++---
tools/testing/selftests/net/mptcp/Makefile | 1 +
.../selftests/net/mptcp/mptcp_connect.c | 63 ++++-
.../net/mptcp/mptcp_connect_splice.sh | 5 +
6 files changed, 315 insertions(+), 110 deletions(-)
create mode 100755 tools/testing/selftests/net/mptcp/mptcp_connect_splice.sh
--
2.43.0