From: Matthieu Baerts <matttbe@kernel.org>
To: Geliang Tang <geliang@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>, mptcp@lists.linux.dev
Subject: Re: [PATCH v2 mptcp-next] Squash-to: "mptcp: leverage the backlog for RX packet processing"
Date: Tue, 11 Nov 2025 18:14:34 +0100
Message-ID: <8e2ae2b2-13b4-4615-be82-eb3710ec566f@kernel.org>
In-Reply-To: <2f6f0b1e22690e962d4fff02838d23d4b8a65f07.camel@kernel.org>
Hi Geliang,
On 11/11/2025 08:21, Geliang Tang wrote:
> Hi Paolo,
>
> Thanks for this fix.
>
> On Sun, 2025-11-09 at 14:53 +0100, Paolo Abeni wrote:
>> If a subflow receives data before gaining the memcg while the msk
>> socket lock is held at accept time, or if the PM locks the msk socket
>> while it is still unaccepted and subflows push data to it at the same
>> time, mptcp_graph_subflows() can complete with a non-empty backlog.
>>
>> The msk will try to borrow such memory, but some of the skbs there
>> were not memcg charged. When the msk finally returns such accounted
>> memory, we would hit the same splat as in #597.
>> [even if so far I was unable to replicate this scenario]
>>
>> This patch tries to address such a potential issue by:
>> - preventing the subflow from queueing data into the backlog after
>>   gaining the memcg; this ensures that at the end of the loop all the
>>   skbs in the backlog (if any) are _not_ memory accounted
>> - memcg charging the backlog to the msk
>> - 'restarting' the subflows and spooling any data waiting there.
>>
>> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
>> ---
>> net/mptcp/protocol.c | 46 ++++++++++++++++++++++++++++++++++++++++++--
>> 1 file changed, 44 insertions(+), 2 deletions(-)
>>
>> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
>> index 5e9325c7ea9c..d6b08e1de358 100644
>> --- a/net/mptcp/protocol.c
>> +++ b/net/mptcp/protocol.c
>> @@ -4082,10 +4082,12 @@ static void mptcp_graph_subflows(struct sock *sk)
>> {
>> struct mptcp_subflow_context *subflow;
>> struct mptcp_sock *msk = mptcp_sk(sk);
>> + struct sock *ssk;
>> + int old_amt, amt;
>> + bool slow;
>>
>> mptcp_for_each_subflow(msk, subflow) {
>> - struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
>> - bool slow;
>> + ssk = mptcp_subflow_tcp_sock(subflow);
>>
>> slow = lock_sock_fast(ssk);
>>
>> @@ -4095,8 +4097,48 @@ static void mptcp_graph_subflows(struct sock *sk)
>> if (!ssk->sk_socket)
>> mptcp_sock_graft(ssk, sk->sk_socket);
>>
>> + if (!mem_cgroup_from_sk(sk))
>> + goto unlock;
>
> I think it's better to use "continue" here, just like in v1, so that
> other subflows also have a chance to call mptcp_sock_graft(), but we
> need to call unlock_sock_fast() before "continue".
Mmh, that's what this code is doing: unlock and continue, no?
> Besides, wouldn't it be more appropriate to squash these lines into
> "mptcp: fix memcg accounting for passive sockets"?
From what I understood, that's not strictly needed: such checks are done
in the helpers below; here it is just to avoid doing the same check yet
another time for 'subflow->closing = 1'.
>
>> +
>> __mptcp_inherit_cgrp_data(sk, ssk);
>> __mptcp_inherit_memcg(sk, ssk, GFP_KERNEL);
>> +
>> + /* Prevent subflows from queueing data into the backlog
>> + * as soon as cg is set; note that we can't race
>> + * with __mptcp_close_ssk setting this bit for a really
>> + * closing socket, because we hold the msk socket lock here.
>> + */
>> + subflow->closing = 1;
>> +
>> +unlock:
>> + unlock_sock_fast(ssk, slow);
>> + }
>> +
>> + if (!mem_cgroup_from_sk(sk))
>> + return;
>> +
>> + /* Charge the bl memory, note that __sk_charge accounted for
>> + * fwd memory and rmem only
>> + */
>> + mptcp_data_lock(sk);
>> + old_amt = sk_mem_pages(sk->sk_forward_alloc +
>> + atomic_read(&sk->sk_rmem_alloc));
>> + amt = sk_mem_pages(msk->backlog_len + sk->sk_forward_alloc +
>> + atomic_read(&sk->sk_rmem_alloc));
>
> The code here is not aligned properly.
>
>> + amt -= old_amt;
>> + if (amt)
>> + mem_cgroup_sk_charge(sk, amt, GFP_ATOMIC | __GFP_NOFAIL);
>
> I'm not sure if we need to call kmem_cache_charge() here, just like in
> __sk_charge().
>
> WDYT?
>
> Thanks,
> -Geliang
>
>> + mptcp_data_unlock(sk);
>> +
>> + /* Finally let the subflow restart queuing data. */
>> + mptcp_for_each_subflow(msk, subflow) {
>> + ssk = mptcp_subflow_tcp_sock(subflow);
>> +
>> + slow = lock_sock_fast(ssk);
>> + subflow->closing = 0;
>> +
>> + if (mptcp_subflow_data_available(ssk))
>> + mptcp_data_ready(sk, ssk);
>> unlock_sock_fast(ssk, slow);
>> }
>> }
>
>
Cheers,
Matt
--
Sponsored by the NGI0 Core fund.
Thread overview: 8+ messages
2025-11-09 13:53 [PATCH v2 mptcp-next] Squash-to: "mptcp: leverage the backlog for RX packet processing" Paolo Abeni
2025-11-09 22:01 ` MPTCP CI
2025-11-11 7:21 ` Geliang Tang
2025-11-11 17:14 ` Matthieu Baerts [this message]
2025-11-11 16:21 ` Matthieu Baerts
2025-11-11 17:09 ` Matthieu Baerts
2025-11-12 9:24 ` Paolo Abeni
2025-11-12 9:52 ` Matthieu Baerts