From: Octavian Purdila <tavip@google.com>
To: Jakub Kicinski <kuba@kernel.org>
Cc: davem@davemloft.net, edumazet@google.com, pabeni@redhat.com,
horms@kernel.org, ast@kernel.org, daniel@iogearbox.net,
hawk@kernel.org, john.fastabend@gmail.com, sdf@fomichev.me,
ahmed.zaki@intel.com, aleksander.lobakin@intel.com,
toke@redhat.com, lorenzo@kernel.org, netdev@vger.kernel.org,
bpf@vger.kernel.org,
syzbot+ff145014d6b0ce64a173@syzkaller.appspotmail.com,
Kuniyuki Iwashima <kuniyu@google.com>
Subject: Re: [PATCH net] xdp: use multi-buff only if receive queue supports page pool
Date: Thu, 25 Sep 2025 00:53:53 -0700
Message-ID: <CAGWr4cQCp4OwF8ESCk4QtEmPUCkhgVXZitp5esDc++rgxUhO8A@mail.gmail.com>
In-Reply-To: <20250924170914.20aac680@kernel.org>

On Wed, Sep 24, 2025 at 5:09 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Wed, 24 Sep 2025 06:08:42 +0000 Octavian Purdila wrote:
> > When a BPF program that supports BPF_F_XDP_HAS_FRAGS issues
> > bpf_xdp_adjust_tail and a large packet is injected via /dev/net/tun,
> > a crash occurs due to a bad page state being detected (page_pool
> > leak).
> >
> > This is because xdp_buff does not record the memory type and instead
> > relies on the netdev receive queue's xdp info. Since the TUN/TAP
> > driver uses the MEM_TYPE_PAGE_SHARED memory model, shrinking will
> > eventually call page_frag_free. But with the current multi-buff
> > support, buffers for BPF_F_XDP_HAS_FRAGS programs are allocated via
> > the page pool.
> >
> > To fix this issue, check that the receive queue memory model is
> > MEM_TYPE_PAGE_POOL before using multi-buffs.
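
To make that concrete, the check boils down to something like the
following (a minimal sketch, not the exact hunk; the guard's exact
placement in the patch may differ):

	/* Sketch: only take the multi-buff path when the receive queue
	 * registered a page_pool memory model, so that frags shrunk
	 * later by bpf_xdp_adjust_tail() are returned to the allocator
	 * that actually produced them.
	 */
	if (rxq->mem.type != MEM_TYPE_PAGE_POOL)
		return false;	/* fall back to the linear, non-frag path */
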
>
> This can also happen on veth, right? And veth re-stamps the Rx queues.

I am not sure the re-stamping would have ill effects.

The allocation and deallocation for this issue happen while
processing the same packet (receive skb -> skb_pp_cow_data ->
page_pool alloc ... __bpf_prog_run -> bpf_xdp_adjust_tail).

IIUC, if veth re-stamps the RX queue to MEM_TYPE_PAGE_POOL,
skb_pp_cow_data will proceed to allocate from the page_pool and
bpf_xdp_adjust_tail will correctly free back to the page_pool.
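
In other words, the rxq-recorded memory type selects the deallocator,
so it has to match whatever allocator produced the pages. A simplified
sketch of the free side, modeled on __xdp_return() (error handling and
the newer netmem plumbing elided; napi_direct is the caller-supplied
recycling hint from that context):

	switch (mem_type) {
	case MEM_TYPE_PAGE_POOL:
		/* matches pages allocated by skb_pp_cow_data() */
		page = virt_to_head_page(data);
		page_pool_put_full_page(page->pp, page, napi_direct);
		break;
	case MEM_TYPE_PAGE_SHARED:
		/* wrong deallocator for page_pool pages: this is the
		 * bad page state splat syzbot hit
		 */
		page_frag_free(data);
		break;
	}

So as long as the re-stamping keeps the recorded type in sync with the
allocator actually used on the receive path, both sides stay
consistent.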