netdev.vger.kernel.org archive mirror
* [PATCH net] xdp: use multi-buff only if receive queue supports page pool
@ 2025-09-24  6:08 Octavian Purdila
  2025-09-25  0:09 ` Jakub Kicinski
  0 siblings, 1 reply; 10+ messages in thread
From: Octavian Purdila @ 2025-09-24  6:08 UTC (permalink / raw)
  To: kuba
  Cc: davem, edumazet, pabeni, horms, ast, daniel, hawk, john.fastabend,
	sdf, uniyu, ahmed.zaki, aleksander.lobakin, toke, lorenzo, netdev,
	bpf, Octavian Purdila, syzbot+ff145014d6b0ce64a173

When a BPF program loaded with BPF_F_XDP_HAS_FRAGS issues
bpf_xdp_adjust_tail() and a large packet is injected via /dev/net/tun,
a crash occurs because a bad page state (a page_pool page leak) is
detected.
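
For reference, a minimal sketch (not the syzbot reproducer) of the
kind of program involved, assuming libbpf's SEC("xdp.frags")
convention for setting BPF_F_XDP_HAS_FRAGS; the shrink amount is
illustrative only:

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	/* Loaded with BPF_F_XDP_HAS_FRAGS via the "xdp.frags" section
	 * and attached in generic mode to the tun device.
	 */
	SEC("xdp.frags")
	int xdp_shrink_tail(struct xdp_md *ctx)
	{
		/* Shrinking a multi-buffer packet releases the tail
		 * fragment; on TUN/TAP that fragment is not returned
		 * through the page pool.
		 */
		bpf_xdp_adjust_tail(ctx, -4096);
		return XDP_PASS;
	}

	char _license[] SEC("license") = "GPL";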

This happens because xdp_buff does not record the memory type and
instead relies on the xdp info of the netdev receive queue. Since the
TUN/TAP driver uses the MEM_TYPE_PAGE_SHARED memory model, shrinking
eventually calls page_frag_free(). However, with the current
multi-buff support for BPF_F_XDP_HAS_FRAGS programs, fragment buffers
are allocated from the system page pool.
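
The release path dispatches on the recorded memory type in
__xdp_return() (net/core/xdp.c); a simplified, approximate sketch of
that dispatch (XSK and error paths omitted, helper arguments and
names approximate, kernel-internal headers assumed):

	/* Simplified sketch, not the exact kernel code. */
	static void xdp_return_sketch(void *data,
				      const struct xdp_mem_info *mem,
				      struct page_pool *pool,
				      bool napi_direct)
	{
		switch (mem->type) {
		case MEM_TYPE_PAGE_POOL:
			/* generic-mode multi-buff frags are allocated
			 * from the page pool and should come back here ...
			 */
			page_pool_put_full_page(pool,
						virt_to_head_page(data),
						napi_direct);
			break;
		case MEM_TYPE_PAGE_SHARED:
			/* ... but tun's rxq reports MEM_TYPE_PAGE_SHARED,
			 * so a shrunk frag is released here and the
			 * page_pool page is later flagged as a bad page
			 * state (leak).
			 */
			page_frag_free(data);
			break;
		case MEM_TYPE_PAGE_ORDER0:
			put_page(virt_to_page(data));
			break;
		default:
			break;
		}
	}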

To fix this issue, check that the receive queue memory type is
MEM_TYPE_PAGE_POOL before using multi-buff support.

Reported-by: syzbot+ff145014d6b0ce64a173@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/netdev/6756c37b.050a0220.a30f1.019a.GAE@google.com/
Fixes: e6d5dbdd20aa ("xdp: add multi-buff support for xdp running in generic mode")
Signed-off-by: Octavian Purdila <tavip@google.com>
---
 net/core/dev.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 8d49b2198d07..b195ee3068c2 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5335,13 +5335,18 @@ static int
 netif_skb_check_for_xdp(struct sk_buff **pskb, const struct bpf_prog *prog)
 {
 	struct sk_buff *skb = *pskb;
+	struct netdev_rx_queue *rxq;
 	int err, hroom, troom;
 
-	local_lock_nested_bh(&system_page_pool.bh_lock);
-	err = skb_cow_data_for_xdp(this_cpu_read(system_page_pool.pool), pskb, prog);
-	local_unlock_nested_bh(&system_page_pool.bh_lock);
-	if (!err)
-		return 0;
+	rxq = netif_get_rxqueue(skb);
+	if (rxq->xdp_rxq.mem.type == MEM_TYPE_PAGE_POOL) {
+		local_lock_nested_bh(&system_page_pool.bh_lock);
+		err = skb_cow_data_for_xdp(this_cpu_read(system_page_pool.pool),
+					   pskb, prog);
+		local_unlock_nested_bh(&system_page_pool.bh_lock);
+		if (!err)
+			return 0;
+	}
 
 	/* In case we have to go down the path and also linearize,
 	 * then lets do the pskb_expand_head() work just once here.
-- 
2.51.0.534.gc79095c0ca-goog




Thread overview: 10+ messages
2025-09-24  6:08 [PATCH net] xdp: use multi-buff only if receive queue supports page pool Octavian Purdila
2025-09-25  0:09 ` Jakub Kicinski
2025-09-25  7:53   ` Octavian Purdila
2025-09-25  9:42     ` Maciej Fijalkowski
2025-09-26  2:12       ` Jakub Kicinski
2025-09-26  7:33         ` Octavian Purdila
2025-09-26 11:24           ` Maciej Fijalkowski
2025-09-26 19:40             ` Jakub Kicinski
2025-09-30  0:01               ` Octavian Purdila
2025-09-30 17:41                 ` Jakub Kicinski
