BPF List
From: Jesper Dangaard Brouer <hawk@kernel.org>
To: Dragos Tatulea <dtatulea@nvidia.com>,
	Jakub Kicinski <kuba@kernel.org>,
	Andrew Lunn <andrew+netdev@lunn.ch>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Paolo Abeni <pabeni@redhat.com>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Andrii Nakryiko <andrii@kernel.org>,
	Eduard Zingerman <eddyz87@gmail.com>, Song Liu <song@kernel.org>,
	Yonghong Song <yonghong.song@linux.dev>,
	John Fastabend <john.fastabend@gmail.com>,
	Stanislav Fomichev <sdf@fomichev.me>, Hao Luo <haoluo@google.com>,
	Jiri Olsa <jolsa@kernel.org>, Simon Horman <horms@kernel.org>,
	Toshiaki Makita <toshiaki.makita1@gmail.com>,
	David Ahern <dsahern@kernel.org>,
	Toke Hoiland Jorgensen <toke@redhat.com>
Cc: Tariq Toukan <tariqt@nvidia.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	bpf@vger.kernel.org, Martin KaFai Lau <martin.lau@linux.dev>,
	KP Singh <kpsingh@kernel.org>
Subject: Re: [RFC 2/2] xdp: Delegate fast path return decision to page_pool
Date: Tue, 11 Nov 2025 08:54:37 +0100	[thread overview]
Message-ID: <58b50903-7969-46bd-bd73-60629c00f057@kernel.org> (raw)
In-Reply-To: <4eusyirzvomxwkzib5tqfyrcgjcxoplrsf7jctytvyvrfvi5fr@f3lvd5h2kb2p>



On 10/11/2025 19.51, Dragos Tatulea wrote:
> On Mon, Nov 10, 2025 at 12:06:08PM +0100, Jesper Dangaard Brouer wrote:
>>
>>
>> On 07/11/2025 11.28, Dragos Tatulea wrote:
>>> XDP uses the BPF_RI_F_RF_NO_DIRECT flag to mark contexts where it is not
>>> allowed to do direct recycling, even though the direct flag was set by
>>> the caller. This is confusing and can lead to races which are hard to
>>> detect [1].
>>>
>>> Furthermore, the page_pool already contains an internal
>>> mechanism which checks if it is safe to switch the direct
>>> flag from off to on.
>>>
>>> This patch drops the use of the BPF_RI_F_RF_NO_DIRECT flag and always
>>> calls the page_pool release with the direct flag set to false. The
>>> page_pool will decide if it is safe to do direct recycling. This
>>> is not free, but it is worth it to make the XDP code safer. The
>>> next paragraphs discuss the performance impact.
>>>
>>> Performance wise, there are 3 cases to consider. Looking from
>>> __xdp_return() for MEM_TYPE_PAGE_POOL case:
>>>
>>> 1) napi_direct == false:
>>>     - Before: 1 comparison in __xdp_return() + call of
>>>       page_pool_napi_local() from page_pool_put_unrefed_netmem().
>>>     - After: Only one call to page_pool_napi_local().
>>>
>>> 2) napi_direct == true && BPF_RI_F_RF_NO_DIRECT
>>>     - Before: 2 comparisons in __xdp_return() + call of
>>>       page_pool_napi_local() from page_pool_put_unrefed_netmem().
>>>     - After: Only one call to page_pool_napi_local().
>>>
>>> 3) napi_direct == true && !BPF_RI_F_RF_NO_DIRECT
>>>     - Before: 2 comparisons in __xdp_return().
>>>     - After: One call to page_pool_napi_local().
>>>
>>> Case 1 & 2 are the slower paths and they only have to gain.
>>> But they are slow anyway so the gain is small.
>>>
>>> Case 3 is the fast path and is the one that has to be considered more
>>> closely. The 2 comparisons from __xdp_return() are swapped for the more
>>> expensive page_pool_napi_local() call.
>>>
>>> Using the page_pool benchmark between the fast-path and the
>>> newly-added NAPI aware mode to measure [2] how expensive
>>> page_pool_napi_local() is:
>>>
>>>     bench_page_pool: time_bench_page_pool01_fast_path(): in_serving_softirq fast-path
>>>     bench_page_pool: Type:tasklet_page_pool01_fast_path Per elem: 15 cycles(tsc) 7.537 ns (step:0)
>>>
>>>     bench_page_pool: time_bench_page_pool04_napi_aware(): in_serving_softirq fast-path
>>>     bench_page_pool: Type:tasklet_page_pool04_napi_aware Per elem: 20 cycles(tsc) 10.490 ns (step:0)
>>>
>>
>> IMHO fast-path slowdown is significant.  This fast-path is used for the
>> XDP_DROP use-case in drivers.  The fast-path is competing with the speed
>> of updating a (per-cpu) array and a function-call overhead. The
>> performance target for XDP_DROP is NIC *wirespeed* which at 100Gbit/s is
>> 148Mpps (or 6.72ns between packets).
>>
>> I still want to seriously entertain this idea, because (1) the
>> bug[1] was hard to find, and (2) this is mostly an XDP API optimization
>> that isn't used by drivers (they call page_pool APIs directly for the
>> XDP_DROP case).
>> Drivers can do this because they have access to the page_pool instance.
>>
>> Thus, this isn't an XDP_DROP use-case.
>>   - This is either an XDP_REDIRECT or XDP_TX use-case.
>>
>> The primary change in this patch is, changing the XDP API call
>> xdp_return_frame_rx_napi() effectively to xdp_return_frame().
>>
>> Looking at code users of this call:
>>   (A) Seeing a number of drivers using this to speed up XDP_TX when
>> *completing* packets from TX-ring.
>>   (B) drivers/net/xen-netfront.c use looks incorrect.
>>   (C) drivers/net/virtio_net.c use can easily be removed.
>>   (D) cpumap.c and drivers/net/tun.c should not be using this call.
>>   (E) devmap.c is the main user (with multiple calls)
>>
>> The (A) user will see a performance drop for XDP_TX, but these drivers
>> should be able to instead call the page_pool APIs directly as they
>> should have access to the page_pool instance.
>>
>> Users (B)+(C)+(D) simply need cleanup.
>>
>> User (E): devmap is the most important+problematic user (IIRC this was
>> the cause of bug[1]).  XDP redirecting into devmap and running a new
>> XDP-prog (per target device) was a prime user of this call
>> xdp_return_frame_rx_napi() as it gave us excellent (e.g. XDP_DROP)
>> performance.
>>
> Thanks for the analysis Jesper.

Thanks for working on this! It is long overdue that we clean this up.
I think I spotted another bug in veth, related to
xdp_clear_return_frame_no_direct() and when NAPI exits.

>> Perhaps we should simply measure the impact on devmap + 2nd XDP-prog
>> doing XDP_DROP.  Then, we can see if overhead is acceptable... ?
>>
> Will try. Just to make sure we are on the same page, AFAIU the setup
> would be:
> XDP_REDIRECT NIC1 -> veth ingress side and XDP_DROP veth egress side?

No, this isn't exactly what I meant. But the people who wrote this
blogpost ([1] https://loopholelabs.io/blog/xdp-for-egress-traffic ) do
depend on the performance of that scenario with veth pairs.

When doing redirect-map, you can attach a 2nd XDP-prog per map
target "egress" device.  That 2nd XDP-prog should do an XDP_DROP, as that
will allow us to measure the code path we are talking about. I want the
test to hit this code line [2].
[2] https://elixir.bootlin.com/linux/v6.17.7/source/kernel/bpf/devmap.c#L368

The xdp-bench[3] tool unfortunately doesn't support a program-mode for
the 2nd XDP-prog, so I did this code change:

diff --git a/xdp-bench/xdp_redirect_devmap.bpf.c b/xdp-bench/xdp_redirect_devmap.bpf.c
index 0212e824e2fa..39a24f8834e8 100644
--- a/xdp-bench/xdp_redirect_devmap.bpf.c
+++ b/xdp-bench/xdp_redirect_devmap.bpf.c
@@ -76,6 +76,8 @@ int xdp_redirect_devmap_egress(struct xdp_md *ctx)
         struct ethhdr *eth = data;
         __u64 nh_off;

+       return XDP_DROP;
+
         nh_off = sizeof(*eth);
         if (data + nh_off > data_end)
                 return XDP_DROP;

[3] https://github.com/xdp-project/xdp-tools/tree/main/xdp-bench

And then you can run this command:
  sudo ./xdp-bench redirect-map --load-egress mlx5p1 mlx5p1

Toke (and I) would appreciate it if you added support for this to
xdp-bench, e.g. a --program-mode option like 'redirect-cpu' has.


> 
>>> ... and the slow path for reference:
>>>
>>>     bench_page_pool: time_bench_page_pool02_ptr_ring(): in_serving_softirq fast-path
>>>     bench_page_pool: Type:tasklet_page_pool02_ptr_ring Per elem: 30 cycles(tsc) 15.395 ns (step:0)
>>
>> The devmap user will basically fallback to using this code path.
>>
> Yes, if the page_pool is not NAPI aware.
> 
> Thanks,
> Dragos



Thread overview: 9+ messages
2025-11-07 10:28 [RFC 0/2] xdp: Delegate fast path return decision to page_pool Dragos Tatulea
2025-11-07 10:28 ` [RFC 2/2] " Dragos Tatulea
2025-11-10 11:06   ` Jesper Dangaard Brouer
2025-11-10 18:51     ` Dragos Tatulea
2025-11-11  7:54       ` Jesper Dangaard Brouer [this message]
2025-11-11 18:25         ` Dragos Tatulea
2025-12-01 10:12           ` Dragos Tatulea
2025-12-02 14:00             ` Jesper Dangaard Brouer
2025-12-02 16:29               ` Dragos Tatulea
