From: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
To: Ratheesh Kannoth <rkannoth@marvell.com>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
pabeni@redhat.com, linyunsheng@huawei.com
Subject: Re: [RFC]Page pool buffers stuck in App's socket queue
Date: Tue, 17 Jun 2025 23:56:53 +0200 [thread overview]
Message-ID: <aFHkpVXoAP5JtCzQ@lore-desk> (raw)
In-Reply-To: <20250616080530.GA279797@maili.marvell.com>
> Hi,
>
> Recently a customer hit a page pool leak issue and kept getting the
> following message on the console:
> "page_pool_release_retry() stalled pool shutdown 1 inflight 60 sec"
>
> The customer runs a "ping" process in the background and then does an interface down/up through the "ip" command.
>
> The Marvell octeontx2 driver destroys all of its resources (including the page pool
> allocated for each queue of the net device) during the interface down event. The page
> pool destruction waits for all buffers allocated from that pool to be returned to it,
> hence the above message (if some buffers are stuck).
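
For reference, a minimal sketch of the teardown step described above
(page_pool_destroy() is the in-tree API; the driver helper and struct
names here are hypothetical, not the actual octeontx2 code):

	#include <net/page_pool/types.h>

	/* called from the driver's interface-down path, simplified */
	static void drv_free_rx_queue(struct drv_rx_queue *rq)
	{
		/* page_pool_destroy() only marks the pool for release;
		 * if pages are still inflight (e.g. sitting in a
		 * socket's receive queue), the core defers the final
		 * free and periodically warns:
		 *   "page_pool_release_retry() stalled pool shutdown ..."
		 */
		page_pool_destroy(rq->page_pool);
	}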
>
> In the customer scenario, the ping app opens both RAW and RAW6 sockets. Even though
> the customer only pings an IPv4 address, the RAW6 socket receives IPv6 Router
> Advertisement messages generated in their network.
>
> [ 41.643448] raw6_local_deliver+0xc0/0x1d8
> [ 41.647539] ip6_protocol_deliver_rcu+0x60/0x490
> [ 41.652149] ip6_input_finish+0x48/0x70
> [ 41.655976] ip6_input+0x44/0xcc
> [ 41.659196] ip6_sublist_rcv_finish+0x48/0x68
> [ 41.663546] ip6_sublist_rcv+0x16c/0x22c
> [ 41.667460] ipv6_list_rcv+0xf4/0x12c
>
> Those packets never get processed, and if the customer then does an interface
> down/up, the page pool warnings show up on the console.
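
Roughly what happens at the end of that call chain (a simplified
sketch of the idea, not the exact in-tree code): the raw6 delivery
path clones the skb for each matching raw socket and queues the
clone, and the queued clone keeps the page_pool-backed frag pages
referenced until user space reads it or the socket is closed:

	/* simplified; the real path goes through rawv6_rcv() */
	clone = skb_clone(skb, GFP_ATOMIC);
	if (clone)
		/* the clone shares the frag pages with the original,
		 * so the pool's pages stay "inflight" while it sits
		 * on sk->sk_receive_queue */
		sock_queue_rcv_skb(sk, clone);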
>
> The customer asked us for a mechanism to drain these sockets, as they don't want to
> kill their apps. The proposal is a debugfs file which shows
> "pid last_processed_skb_time number_of_packets socket_fd/inode_number"
> for each raw4/raw6 socket created in the system, and any write to that debugfs file
> (some specific command) drains the socket.
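
To make the proposal concrete, a minimal sketch of what the drain side
of such a debugfs file could look like, assuming one file per raw
socket with the struct sock stashed in i_private (the handler and
struct names below are hypothetical; skb_queue_purge() is the in-tree
helper):

	#include <linux/debugfs.h>
	#include <linux/skbuff.h>
	#include <net/sock.h>

	static ssize_t raw_sock_drain_write(struct file *file,
					    const char __user *buf,
					    size_t count, loff_t *ppos)
	{
		struct sock *sk = file->f_inode->i_private;

		lock_sock(sk);
		/* frees every queued skb, returning any page_pool
		 * pages they hold so a pending pool shutdown can
		 * complete */
		skb_queue_purge(&sk->sk_receive_queue);
		release_sock(sk);

		return count;
	}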
>
> 1. Could you please comment on the proposal?
> 2. Could you suggest a better way?
>
> -Ratheesh
Hi,
this problem reminds me of an issue I hit in the past with page_pool
and TCP traffic when destroying the pool (not sure if it is still valid):
https://lore.kernel.org/netdev/ZD2HjZZSOjtsnQaf@lore-desk/
Do you have any ongoing TCP flows?
Regards,
Lorenzo