netdev.vger.kernel.org archive mirror
* [RFC]Page pool buffers stuck in App's socket queue
@ 2025-06-16  8:05 Ratheesh Kannoth
  2025-06-17  6:33 ` Yunsheng Lin
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Ratheesh Kannoth @ 2025-06-16  8:05 UTC (permalink / raw)
  To: netdev, linux-kernel; +Cc: davem, edumazet, kuba, pabeni, linyunsheng

Hi,

Recently a customer faced a page pool leak issue and kept getting the following message in the
console:
"page_pool_release_retry() stalled pool shutdown 1 inflight 60 sec"

The customer runs a "ping" process in the background and then does an interface down/up through the "ip" command.

The Marvell octeontx2 driver destroys all resources (including the page pool allocated for each queue of
the net device) during the interface down event. The page pool destruction waits for all buffers
allocated by that pool instance to be returned to the pool, hence the above message (if some buffers
are stuck).
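The shutdown stall can be sketched with a toy userspace model (the class and method names below are illustrative, not the kernel's): the pool cannot be freed while pages it handed out are still held elsewhere, so shutdown is retried and a warning is printed each time it finds pages inflight.

```python
class PagePoolModel:
    """Toy model of page_pool shutdown: the pool cannot be freed
    while pages it allocated are still held elsewhere (inflight)."""

    def __init__(self):
        self.inflight = 0

    def alloc(self):
        self.inflight += 1          # page handed to a consumer (e.g. an skb)

    def release_page(self):
        self.inflight -= 1          # page returned to the pool

    def try_shutdown(self):
        """Mimics the retry check: shutdown completes only when
        every allocated page has been returned."""
        if self.inflight:
            return f"stalled pool shutdown {self.inflight} inflight"
        return "pool freed"

pool = PagePoolModel()
pool.alloc()                        # packet delivered to a socket nobody reads...
print(pool.try_shutdown())          # ...so shutdown stalls: "stalled pool shutdown 1 inflight"
pool.release_page()                 # socket drained or closed
print(pool.try_shutdown())          # "pool freed"
```

In the customer's case the page never comes back because the skb holding it sits in a socket receive queue forever, so the retry fires every 60 seconds.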

In the customer scenario, the ping app opens both RAW and RAW6 sockets. Even though the customer
pings only an IPv4 address, the RAW6 socket receives IPv6 Router Advertisement messages generated
in their network:

[   41.643448]  raw6_local_deliver+0xc0/0x1d8
[   41.647539]  ip6_protocol_deliver_rcu+0x60/0x490
[   41.652149]  ip6_input_finish+0x48/0x70
[   41.655976]  ip6_input+0x44/0xcc
[   41.659196]  ip6_sublist_rcv_finish+0x48/0x68
[   41.663546]  ip6_sublist_rcv+0x16c/0x22c
[   41.667460]  ipv6_list_rcv+0xf4/0x12c

Those packets are never processed, and if the customer then does an interface down/up, the page pool
warnings show up in the console.

The customer asked us for a mechanism to drain these sockets, as they don't want to kill their apps.
The proposal is to add a debugfs file which shows "pid  last_processed_skb_time  number_of_packets  socket_fd/inode_number"
for each raw4/raw6 socket created in the system, and
any write to the debugfs file (with a specific command) drains the corresponding socket.
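What the debugfs write would do in-kernel amounts to a non-blocking read-and-discard loop over the socket's receive queue. A minimal userspace sketch of that drain (using a UDP pair on loopback for the demo, since raw sockets need CAP_NET_RAW; `drain_socket` is a hypothetical helper, not an existing API):

```python
import socket
import time

def drain_socket(sk: socket.socket) -> int:
    """Read and discard everything queued on sk without blocking.
    Userspace equivalent of the proposed debugfs-triggered drain.
    Returns the number of packets discarded."""
    sk.setblocking(False)
    dropped = 0
    while True:
        try:
            sk.recv(65535)
            dropped += 1
        except BlockingIOError:      # queue empty (EAGAIN)
            return dropped

# Demo: queue up datagrams on a socket nobody reads, then drain it.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(5):
    tx.sendto(b"router-advertisement-stand-in", rx.getsockname())
time.sleep(0.1)                      # let loopback delivery complete
print(drain_socket(rx))              # 5: the queued datagrams are discarded
rx.close()
tx.close()
```

In the real proposal the drain would free the queued skbs kernel-side, returning their pages to the pool so page_pool_destroy() can complete.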

1. Could you please comment on the proposal?
2. Could you suggest a better way?

-Ratheesh



Thread overview: 9+ messages
2025-06-16  8:05 [RFC]Page pool buffers stuck in App's socket queue Ratheesh Kannoth
2025-06-17  6:33 ` Yunsheng Lin
2025-06-17  8:38   ` Ratheesh Kannoth
2025-06-17 21:02   ` Mina Almasry
2025-06-18  6:33     ` Yunsheng Lin
2025-06-17 21:00 ` Mina Almasry
2025-06-18  7:28   ` Ratheesh Kannoth
2025-06-17 21:56 ` Lorenzo Bianconi
2025-06-18  6:42   ` Ratheesh Kannoth
