* [PATCH mptcp-next 0/3] implement .splice_eof
@ 2026-02-02 9:21 Geliang Tang
2026-02-02 9:21 ` [PATCH mptcp-next 1/3] tcp: export do_tcp_splice_eof Geliang Tang
` (3 more replies)
0 siblings, 4 replies; 10+ messages in thread
From: Geliang Tang @ 2026-02-02 9:21 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
From: Geliang Tang <tanggeliang@kylinos.cn>
This set implements .splice_eof for MPTCP and tests it.
Geliang Tang (3):
tcp: export do_tcp_splice_eof
mptcp: implement .splice_eof
selftests: mptcp: connect: trigger splice_eof
include/net/tcp.h | 1 +
net/ipv4/tcp.c | 8 ++++++--
net/mptcp/protocol.c | 16 ++++++++++++++++
.../testing/selftests/net/mptcp/mptcp_connect.c | 2 +-
4 files changed, 24 insertions(+), 3 deletions(-)
--
2.51.0
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH mptcp-next 1/3] tcp: export do_tcp_splice_eof
2026-02-02 9:21 [PATCH mptcp-next 0/3] implement .splice_eof Geliang Tang
@ 2026-02-02 9:21 ` Geliang Tang
2026-02-02 9:21 ` [PATCH mptcp-next 2/3] mptcp: implement .splice_eof Geliang Tang
` (2 subsequent siblings)
3 siblings, 0 replies; 10+ messages in thread
From: Geliang Tang @ 2026-02-02 9:21 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
From: Geliang Tang <tanggeliang@kylinos.cn>
Extract a do_tcp_splice_eof() helper from tcp_splice_eof() and declare it
in include/net/tcp.h, so that it can be used in MPTCP.
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
---
include/net/tcp.h | 1 +
net/ipv4/tcp.c | 8 ++++++--
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index cecec1a92d5e..9564fd7b73c7 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -370,6 +370,7 @@ int tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size);
int tcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg, int *copied,
size_t size, struct ubuf_info *uarg);
+void do_tcp_splice_eof(struct sock *sk);
void tcp_splice_eof(struct socket *sock);
int tcp_send_mss(struct sock *sk, int *size_goal, int flags);
int tcp_wmem_schedule(struct sock *sk, int copy);
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index ae0000215ff4..80d374fe6543 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1468,9 +1468,8 @@ int tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
}
EXPORT_SYMBOL(tcp_sendmsg);
-void tcp_splice_eof(struct socket *sock)
+void do_tcp_splice_eof(struct sock *sk)
{
- struct sock *sk = sock->sk;
struct tcp_sock *tp = tcp_sk(sk);
int mss_now, size_goal;
@@ -1482,6 +1481,11 @@ void tcp_splice_eof(struct socket *sock)
tcp_push(sk, 0, mss_now, tp->nonagle, size_goal);
release_sock(sk);
}
+
+void tcp_splice_eof(struct socket *sock)
+{
+ do_tcp_splice_eof(sock->sk);
+}
EXPORT_IPV6_MOD_GPL(tcp_splice_eof);
/*
--
2.51.0
* [PATCH mptcp-next 2/3] mptcp: implement .splice_eof
2026-02-02 9:21 [PATCH mptcp-next 0/3] implement .splice_eof Geliang Tang
2026-02-02 9:21 ` [PATCH mptcp-next 1/3] tcp: export do_tcp_splice_eof Geliang Tang
@ 2026-02-02 9:21 ` Geliang Tang
2026-02-02 10:07 ` Matthieu Baerts
2026-02-02 9:21 ` [PATCH mptcp-next 3/3] selftests: mptcp: connect: trigger splice_eof Geliang Tang
2026-02-02 10:41 ` [PATCH mptcp-next 0/3] implement .splice_eof MPTCP CI
3 siblings, 1 reply; 10+ messages in thread
From: Geliang Tang @ 2026-02-02 9:21 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang, Matthieu Baerts
From: Geliang Tang <tanggeliang@kylinos.cn>
This patch implements the .splice_eof interface for MPTCP, namely
mptcp_splice_eof(), which sequentially calls do_tcp_splice_eof() for
each subflow.
Suggested-by: Matthieu Baerts <matttbe@kernel.org>
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
---
net/mptcp/protocol.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index c88882062c40..5635d196cb9f 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -4018,6 +4018,20 @@ static int mptcp_connect(struct sock *sk, struct sockaddr_unsized *uaddr,
return 0;
}
+static void mptcp_splice_eof(struct socket *sock)
+{
+ struct mptcp_subflow_context *subflow;
+ struct sock *sk = sock->sk, *ssk;
+
+ lock_sock(sk);
+ mptcp_for_each_subflow(mptcp_sk(sk), subflow) {
+ ssk = mptcp_subflow_tcp_sock(subflow);
+
+ do_tcp_splice_eof(ssk);
+ }
+ release_sock(sk);
+}
+
static struct proto mptcp_prot = {
.name = "MPTCP",
.owner = THIS_MODULE,
@@ -4049,6 +4063,7 @@ static struct proto mptcp_prot = {
.obj_size = sizeof(struct mptcp_sock),
.slab_flags = SLAB_TYPESAFE_BY_RCU,
.no_autobind = true,
+ .splice_eof = mptcp_splice_eof,
};
static int mptcp_bind(struct socket *sock, struct sockaddr_unsized *uaddr, int addr_len)
@@ -4540,6 +4555,7 @@ static const struct proto_ops mptcp_stream_ops = {
.set_rcvlowat = mptcp_set_rcvlowat,
.read_sock = mptcp_read_sock,
.splice_read = mptcp_splice_read,
+ .splice_eof = inet_splice_eof,
};
static struct inet_protosw mptcp_protosw = {
--
2.51.0
* [PATCH mptcp-next 3/3] selftests: mptcp: connect: trigger splice_eof
2026-02-02 9:21 [PATCH mptcp-next 0/3] implement .splice_eof Geliang Tang
2026-02-02 9:21 ` [PATCH mptcp-next 1/3] tcp: export do_tcp_splice_eof Geliang Tang
2026-02-02 9:21 ` [PATCH mptcp-next 2/3] mptcp: implement .splice_eof Geliang Tang
@ 2026-02-02 9:21 ` Geliang Tang
2026-02-02 10:09 ` Matthieu Baerts
2026-02-02 10:41 ` [PATCH mptcp-next 0/3] implement .splice_eof MPTCP CI
3 siblings, 1 reply; 10+ messages in thread
From: Geliang Tang @ 2026-02-02 9:21 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
From: Geliang Tang <tanggeliang@kylinos.cn>
Increase the sendfile count by one to ensure the transmission size
exceeds the actual data length. This triggers the splice_eof path
in the kernel, allowing the newly implemented MPTCP splice_eof
interface to be exercised during testing.
The change from 'count' to 'count + 1' forces the sendfile operation
to attempt sending one more byte than available, which activates the
end-of-file handling in the splicing logic and ensures coverage of
the related MPTCP code paths.
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
---
tools/testing/selftests/net/mptcp/mptcp_connect.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
index cbe573c4ab3a..2aaf3ed11315 100644
--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
+++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
@@ -870,7 +870,7 @@ static int do_sendfile(int infd, int outfd, unsigned int count,
while (count > 0) {
ssize_t r;
- r = sendfile(outfd, infd, NULL, count);
+ r = sendfile(outfd, infd, NULL, count + 1);
if (r < 0) {
perror("sendfile");
return 3;
--
2.51.0
* Re: [PATCH mptcp-next 2/3] mptcp: implement .splice_eof
2026-02-02 9:21 ` [PATCH mptcp-next 2/3] mptcp: implement .splice_eof Geliang Tang
@ 2026-02-02 10:07 ` Matthieu Baerts
2026-02-03 6:36 ` Geliang Tang
0 siblings, 1 reply; 10+ messages in thread
From: Matthieu Baerts @ 2026-02-02 10:07 UTC (permalink / raw)
To: Geliang Tang, mptcp; +Cc: Geliang Tang
Hi Geliang,
Thank you for looking at that!
On 02/02/2026 10:21, Geliang Tang wrote:
> From: Geliang Tang <tanggeliang@kylinos.cn>
>
> This patch implements the .splice_eof interface for MPTCP, namely
> mptcp_splice_eof(), which sequentially calls do_tcp_splice_eof() for
> each subflow.
Can you please explain what this hook is supposed to do / used for please?
And also why the solution is to call do_tcp_splice_eof() on each subflow?
Also, I'm a bit confused: why is this needed? Does it fix something or
is it a new feature or an optimisation?
> Suggested-by: Matthieu Baerts <matttbe@kernel.org>
> Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
> ---
> net/mptcp/protocol.c | 16 ++++++++++++++++
> 1 file changed, 16 insertions(+)
>
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index c88882062c40..5635d196cb9f 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
> @@ -4018,6 +4018,20 @@ static int mptcp_connect(struct sock *sk, struct sockaddr_unsized *uaddr,
> return 0;
> }
>
> +static void mptcp_splice_eof(struct socket *sock)
> +{
> + struct mptcp_subflow_context *subflow;
> + struct sock *sk = sock->sk, *ssk;
> +
> + lock_sock(sk);
> + mptcp_for_each_subflow(mptcp_sk(sk), subflow) {
> + ssk = mptcp_subflow_tcp_sock(subflow);
> +
> + do_tcp_splice_eof(ssk);
Is it fine to call this on closed subflows? e.g. if the initial subflow
has been closed. (I didn't check, maybe that's OK)
> + }
> + release_sock(sk);
> +}
> +
> static struct proto mptcp_prot = {
> .name = "MPTCP",
> .owner = THIS_MODULE,
> @@ -4049,6 +4063,7 @@ static struct proto mptcp_prot = {
> .obj_size = sizeof(struct mptcp_sock),
> .slab_flags = SLAB_TYPESAFE_BY_RCU,
> .no_autobind = true,
> + .splice_eof = mptcp_splice_eof,
> };
>
> static int mptcp_bind(struct socket *sock, struct sockaddr_unsized *uaddr, int addr_len)
> @@ -4540,6 +4555,7 @@ static const struct proto_ops mptcp_stream_ops = {
> .set_rcvlowat = mptcp_set_rcvlowat,
> .read_sock = mptcp_read_sock,
> .splice_read = mptcp_splice_read,
> + .splice_eof = inet_splice_eof,
Is this line required? Will it not call inet_splice_eof() by default? (I
didn't check)
> };
>
> static struct inet_protosw mptcp_protosw = {
Cheers,
Matt
--
Sponsored by the NGI0 Core fund.
* Re: [PATCH mptcp-next 3/3] selftests: mptcp: connect: trigger splice_eof
2026-02-02 9:21 ` [PATCH mptcp-next 3/3] selftests: mptcp: connect: trigger splice_eof Geliang Tang
@ 2026-02-02 10:09 ` Matthieu Baerts
2026-02-03 6:26 ` Geliang Tang
0 siblings, 1 reply; 10+ messages in thread
From: Matthieu Baerts @ 2026-02-02 10:09 UTC (permalink / raw)
To: Geliang Tang, mptcp; +Cc: Geliang Tang
Hi Geliang,
On 02/02/2026 10:21, Geliang Tang wrote:
> From: Geliang Tang <tanggeliang@kylinos.cn>
>
> Increase the sendfile count by one to ensure the transmission size
> exceeds the actual data length. This triggers the splice_eof path
> in the kernel, allowing the newly implemented MPTCP splice_eof
> interface to be exercised during testing.
>
> The change from 'count' to 'count + 1' forces the sendfile operation
> to attempt sending one more byte than available, which activates the
> end-of-file handling in the splicing logic and ensures coverage of
> the related MPTCP code paths.
I'm a bit confused: is this splice_eof interface not linked to "splice"?
Why is it used with sendfile()?
Also, what's the behaviour without the implementation of "splice_eof()"?
Was it a wrong behaviour or is it the same? What's the differences
between the situation before and after this series?
Cheers,
Matt
--
Sponsored by the NGI0 Core fund.
* Re: [PATCH mptcp-next 0/3] implement .splice_eof
2026-02-02 9:21 [PATCH mptcp-next 0/3] implement .splice_eof Geliang Tang
` (2 preceding siblings ...)
2026-02-02 9:21 ` [PATCH mptcp-next 3/3] selftests: mptcp: connect: trigger splice_eof Geliang Tang
@ 2026-02-02 10:41 ` MPTCP CI
3 siblings, 0 replies; 10+ messages in thread
From: MPTCP CI @ 2026-02-02 10:41 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp
Hi Geliang,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal (except selftest_mptcp_join): Success! ✅
- KVM Validation: normal (only selftest_mptcp_join): Success! ✅
- KVM Validation: debug (except selftest_mptcp_join): Unstable: 1 failed test(s): packetdrill_dss 🔴
- KVM Validation: debug (only selftest_mptcp_join): Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/21584982091
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/b605c7ce18aa
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=1049653
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-normal
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)
* Re: [PATCH mptcp-next 3/3] selftests: mptcp: connect: trigger splice_eof
2026-02-02 10:09 ` Matthieu Baerts
@ 2026-02-03 6:26 ` Geliang Tang
0 siblings, 0 replies; 10+ messages in thread
From: Geliang Tang @ 2026-02-03 6:26 UTC (permalink / raw)
To: Matthieu Baerts, mptcp
Hi Matt,
On Mon, 2026-02-02 at 11:09 +0100, Matthieu Baerts wrote:
> Hi Geliang,
>
> On 02/02/2026 10:21, Geliang Tang wrote:
> > From: Geliang Tang <tanggeliang@kylinos.cn>
> >
> > Increase the sendfile count by one to ensure the transmission size
> > exceeds the actual data length. This triggers the splice_eof path
> > in the kernel, allowing the newly implemented MPTCP splice_eof
> > interface to be exercised during testing.
> >
> > The change from 'count' to 'count + 1' forces the sendfile
> > operation
> > to attempt sending one more byte than available, which activates
> > the
> > end-of-file handling in the splicing logic and ensures coverage of
> > the related MPTCP code paths.
>
> I'm a bit confused: is this splice_eof interface not linked to
> "splice"?
> Why is it used with sendfile()?
The .splice_eof hook is indeed triggered by sendfile(), not by splice():

sendfile -> do_sendfile -> do_splice_direct -> do_splice_direct_actor
  -> splice_direct_to_actor -> do_splice_eof
>
> Also, what's the behaviour without the implementation of
> "splice_eof()"?
> Was it a wrong behaviour or is it the same? What's the differences
> between the situation before and after this series?
Without 'splice_eof', it was not a wrong behaviour, but MPTCP didn't
previously handle the splice EOF notification, while TCP did.
>
> Cheers,
> Matt
* Re: [PATCH mptcp-next 2/3] mptcp: implement .splice_eof
2026-02-02 10:07 ` Matthieu Baerts
@ 2026-02-03 6:36 ` Geliang Tang
2026-02-03 9:07 ` Matthieu Baerts
0 siblings, 1 reply; 10+ messages in thread
From: Geliang Tang @ 2026-02-03 6:36 UTC (permalink / raw)
To: Matthieu Baerts, mptcp
Hi Matt,
On Mon, 2026-02-02 at 11:07 +0100, Matthieu Baerts wrote:
> Hi Geliang,
>
> Thank you for looking at that!
>
> On 02/02/2026 10:21, Geliang Tang wrote:
> > From: Geliang Tang <tanggeliang@kylinos.cn>
> >
> > This patch implements the .splice_eof interface for MPTCP, namely
> > mptcp_splice_eof(), which sequentially calls do_tcp_splice_eof()
> > for
> > each subflow.
>
> Can you please explain what this hook is supposed to do / used for
> please?
do_tcp_splice_eof() ensures that any remaining data in the TCP send
queue is flushed immediately when a sendfile() operation reaches end-
of-file (EOF).
> And also why the solution is to call do_tcp_splice_eof() on each
> subflow?
MPTCP operates over multiple TCP subflows. When splicing data through
an MPTCP socket, each subflow may have pending data in its send buffer
that needs to be properly finalized, so mptcp_splice_eof() calls
do_tcp_splice_eof() on each subflow.
>
> Also, I'm a bit confused: why is this needed? Does it fix something
> or
> is it a new feature or an optimisation?
It is not a fix, but a new feature, to keep consistent with TCP: TCP
handles the splice EOF notification, but MPTCP didn't.
>
> > Suggested-by: Matthieu Baerts <matttbe@kernel.org>
> > Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
> > ---
> > net/mptcp/protocol.c | 16 ++++++++++++++++
> > 1 file changed, 16 insertions(+)
> >
> > diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> > index c88882062c40..5635d196cb9f 100644
> > --- a/net/mptcp/protocol.c
> > +++ b/net/mptcp/protocol.c
> > @@ -4018,6 +4018,20 @@ static int mptcp_connect(struct sock *sk,
> > struct sockaddr_unsized *uaddr,
> > return 0;
> > }
> >
> > +static void mptcp_splice_eof(struct socket *sock)
> > +{
> > + struct mptcp_subflow_context *subflow;
> > + struct sock *sk = sock->sk, *ssk;
> > +
> > + lock_sock(sk);
> > + mptcp_for_each_subflow(mptcp_sk(sk), subflow) {
> > + ssk = mptcp_subflow_tcp_sock(subflow);
> > +
> > + do_tcp_splice_eof(ssk);
>
> Is it fine to call this on closed subflows? e.g. if the initial
> subflow
> has been closed. (I didn't check, maybe that's OK)
Good point. I will add this check in v2:
if (ssk->sk_state == TCP_CLOSE)
continue;
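
For reference, the whole loop would then look something like this (only
a sketch of a possible v2, not the submitted code):

    static void mptcp_splice_eof(struct socket *sock)
    {
    	struct mptcp_subflow_context *subflow;
    	struct sock *sk = sock->sk, *ssk;

    	lock_sock(sk);
    	mptcp_for_each_subflow(mptcp_sk(sk), subflow) {
    		ssk = mptcp_subflow_tcp_sock(subflow);

    		/* nothing to flush on an already closed subflow */
    		if (ssk->sk_state == TCP_CLOSE)
    			continue;

    		do_tcp_splice_eof(ssk);
    	}
    	release_sock(sk);
    }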
>
> > + }
> > + release_sock(sk);
> > +}
> > +
> > static struct proto mptcp_prot = {
> > .name = "MPTCP",
> > .owner = THIS_MODULE,
> > @@ -4049,6 +4063,7 @@ static struct proto mptcp_prot = {
> > .obj_size = sizeof(struct mptcp_sock),
> > .slab_flags = SLAB_TYPESAFE_BY_RCU,
> > .no_autobind = true,
> > + .splice_eof = mptcp_splice_eof,
> > };
> >
> > static int mptcp_bind(struct socket *sock, struct sockaddr_unsized
> > *uaddr, int addr_len)
> > @@ -4540,6 +4555,7 @@ static const struct proto_ops
> > mptcp_stream_ops = {
> > .set_rcvlowat = mptcp_set_rcvlowat,
> > .read_sock = mptcp_read_sock,
> > .splice_read = mptcp_splice_read,
> > + .splice_eof = inet_splice_eof,
>
> Is this line required? Will it not call inet_splice_eof() by default?
> (I
> didn't check)
Yes, this is required.
sock_splice_eof() needs to call the .splice_eof interface from struct
proto_ops. To maintain consistency with regular TCP behavior, the
.splice_eof interface of mptcp_stream_ops is set to inet_splice_eof
too. inet_splice_eof() will switch to the protocol-specific
implementation (sk->sk_prot->splice_eof), which for MPTCP is
mptcp_splice_eof().
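
Roughly, the dispatch looks like this (a simplified paraphrase, with
locking and error handling omitted; not verbatim kernel code):

    void sock_splice_eof(struct socket *sock)
    {
    	/* proto_ops level: mptcp_stream_ops points to inet_splice_eof() */
    	if (sock->ops->splice_eof)
    		sock->ops->splice_eof(sock);
    }

    void inet_splice_eof(struct socket *sock)
    {
    	const struct proto *prot = READ_ONCE(sock->sk->sk_prot);

    	/* proto level: for MPTCP this is mptcp_splice_eof() */
    	if (prot->splice_eof)
    		prot->splice_eof(sock);
    }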
Thanks,
-Geliang
>
> > };
> >
> > static struct inet_protosw mptcp_protosw = {
>
> Cheers,
> Matt
* Re: [PATCH mptcp-next 2/3] mptcp: implement .splice_eof
2026-02-03 6:36 ` Geliang Tang
@ 2026-02-03 9:07 ` Matthieu Baerts
0 siblings, 0 replies; 10+ messages in thread
From: Matthieu Baerts @ 2026-02-03 9:07 UTC (permalink / raw)
To: Geliang Tang, mptcp
Hi Geliang,
Thank you for your reply!
On 03/02/2026 07:36, Geliang Tang wrote:
> Hi Matt,
>
> On Mon, 2026-02-02 at 11:07 +0100, Matthieu Baerts wrote:
>> Hi Geliang,
>>
>> Thank you for looking at that!
>>
>> On 02/02/2026 10:21, Geliang Tang wrote:
>>> From: Geliang Tang <tanggeliang@kylinos.cn>
>>>
>>> This patch implements the .splice_eof interface for MPTCP, namely
>>> mptcp_splice_eof(), which sequentially calls do_tcp_splice_eof()
>>> for
>>> each subflow.
>>
>> Can you please explain what this hook is supposed to do / used for
>> please?
>
> do_tcp_splice_eof() ensures that any remaining data in the TCP send
> queue is flushed immediately when a sendfile() operation reaches end-
> of-file (EOF).
OK, so if I understand correctly, it means that without .splice_eof()
support, the queue is not flushed immediately when a sendfile()
operation reaches end-of-file (EOF). But that's OK: it will be flushed
eventually, perhaps with a small delay, and the most important thing is
that all data will be transferred. Is that correct?
If it is, can you please reflect that in the commit message?
I think it is essential to mention that it is not linked to the
'splice()' syscall, that it is an improvement, and that nothing was
broken before.
>> And also why the solution is to call do_tcp_splice_eof() on each
>> subflow?
>
> MPTCP operates over multiple TCP subflows. When splicing data through
> an MPTCP socket, each subflow may have pending data in its send buffer
> that needs to be properly finalized. So here calls do_tcp_splice_eof()
> on each subflow.
Can you also please add a note about that in the commit message?
>> Also, I'm a bit confused: why is this needed? Does it fix something
>> or
>> is it a new feature or an optimisation?
>
> It is not a fix, but a new feature, to keep consistent with TCP. Since
> TCP handles the splice EOF notification but MPTCP didn't.
>
>>
>>> Suggested-by: Matthieu Baerts <matttbe@kernel.org>
>>> Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
>>> ---
>>> net/mptcp/protocol.c | 16 ++++++++++++++++
>>> 1 file changed, 16 insertions(+)
>>>
>>> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
>>> index c88882062c40..5635d196cb9f 100644
>>> --- a/net/mptcp/protocol.c
>>> +++ b/net/mptcp/protocol.c
>>> @@ -4018,6 +4018,20 @@ static int mptcp_connect(struct sock *sk,
>>> struct sockaddr_unsized *uaddr,
>>> return 0;
>>> }
>>>
>>> +static void mptcp_splice_eof(struct socket *sock)
>>> +{
>>> + struct mptcp_subflow_context *subflow;
>>> + struct sock *sk = sock->sk, *ssk;
>>> +
>>> + lock_sock(sk);
>>> + mptcp_for_each_subflow(mptcp_sk(sk), subflow) {
>>> + ssk = mptcp_subflow_tcp_sock(subflow);
>>> +
>>> + do_tcp_splice_eof(ssk);
>>
>> Is it fine to call this on closed subflows? e.g. if the initial
>> subflow
>> has been closed. (I didn't check, maybe that's OK)
>
> Good point. I will add this check in v2:
>
> if (ssk->sk_state == TCP_CLOSE)
> continue;
>
>>
>>> + }
>>> + release_sock(sk);
>>> +}
>>> +
>>> static struct proto mptcp_prot = {
>>> .name = "MPTCP",
>>> .owner = THIS_MODULE,
>>> @@ -4049,6 +4063,7 @@ static struct proto mptcp_prot = {
>>> .obj_size = sizeof(struct mptcp_sock),
>>> .slab_flags = SLAB_TYPESAFE_BY_RCU,
>>> .no_autobind = true,
>>> + .splice_eof = mptcp_splice_eof,
>>> };
>>>
>>> static int mptcp_bind(struct socket *sock, struct sockaddr_unsized
>>> *uaddr, int addr_len)
>>> @@ -4540,6 +4555,7 @@ static const struct proto_ops
>>> mptcp_stream_ops = {
>>> .set_rcvlowat = mptcp_set_rcvlowat,
>>> .read_sock = mptcp_read_sock,
>>> .splice_read = mptcp_splice_read,
>>> + .splice_eof = inet_splice_eof,
>>
>> Is this line required? Will it not call inet_splice_eof() by default?
>> (I
>> didn't check)
>
> Yes, this is required.
>
> sock_splice_eof() needs to call the .splice_eof interface from struct
> proto_ops. To maintain consistency with regular TCP behavior, the
> .splice_eof interface of mptcp_stream_ops is set to inet_splice_eof
> too. inet_splice_eof() will switch to the protocol-specific
> implementation (sk->sk_prot->splice_eof), which for MPTCP is
> mptcp_splice_eof().
OK. Then be careful that inet_splice_eof() will call inet_send_prepare()
which will call sock_rps_record_flow(sk). mptcp_rps_record_subflows()
should be called on each subflow, probably from mptcp_splice_eof().
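Sketch only, assuming mptcp_rps_record_subflows() can be used there
as-is — something like:

    	/* record RPS for each subflow, as inet_send_prepare() would
    	 * have done for a plain TCP socket
    	 */
    	lock_sock(sk);
    	mptcp_rps_record_subflows(mptcp_sk(sk));
    	mptcp_for_each_subflow(mptcp_sk(sk), subflow) {
    		ssk = mptcp_subflow_tcp_sock(subflow);
    		do_tcp_splice_eof(ssk);
    	}
    	release_sock(sk);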
Cheers,
Matt
--
Sponsored by the NGI0 Core fund.
end of thread, other threads:[~2026-02-03 9:07 UTC | newest]
Thread overview: 10+ messages -- links below jump to the message on this page --
2026-02-02 9:21 [PATCH mptcp-next 0/3] implement .splice_eof Geliang Tang
2026-02-02 9:21 ` [PATCH mptcp-next 1/3] tcp: export do_tcp_splice_eof Geliang Tang
2026-02-02 9:21 ` [PATCH mptcp-next 2/3] mptcp: implement .splice_eof Geliang Tang
2026-02-02 10:07 ` Matthieu Baerts
2026-02-03 6:36 ` Geliang Tang
2026-02-03 9:07 ` Matthieu Baerts
2026-02-02 9:21 ` [PATCH mptcp-next 3/3] selftests: mptcp: connect: trigger splice_eof Geliang Tang
2026-02-02 10:09 ` Matthieu Baerts
2026-02-03 6:26 ` Geliang Tang
2026-02-02 10:41 ` [PATCH mptcp-next 0/3] implement .splice_eof MPTCP CI