From: Dragos Tatulea <dtatulea@nvidia.com>
To: Jesper Dangaard Brouer <hawk@kernel.org>,
Jakub Kicinski <kuba@kernel.org>,
Andrew Lunn <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Paolo Abeni <pabeni@redhat.com>,
Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>,
Eduard Zingerman <eddyz87@gmail.com>, Song Liu <song@kernel.org>,
Yonghong Song <yonghong.song@linux.dev>,
John Fastabend <john.fastabend@gmail.com>,
Stanislav Fomichev <sdf@fomichev.me>, Hao Luo <haoluo@google.com>,
Jiri Olsa <jolsa@kernel.org>, Simon Horman <horms@kernel.org>,
Toshiaki Makita <toshiaki.makita1@gmail.com>,
David Ahern <dsahern@kernel.org>,
Toke Hoiland Jorgensen <toke@redhat.com>
Cc: Tariq Toukan <tariqt@nvidia.com>,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
bpf@vger.kernel.org, Martin KaFai Lau <martin.lau@linux.dev>,
KP Singh <kpsingh@kernel.org>
Subject: Re: [RFC 2/2] xdp: Delegate fast path return decision to page_pool
Date: Mon, 1 Dec 2025 11:12:55 +0100 [thread overview]
Message-ID: <ad6c4448-8fb3-4a5c-91b0-8739f95cf65b@nvidia.com> (raw)
In-Reply-To: <wrhhvaolxu275zw3fxgvykg7tndzp4pl4u3mnw3z4t5yfbkpix@i2abs45et7tr>
[...]
>> And then you can run this command:
>> sudo ./xdp-bench redirect-map --load-egress mlx5p1 mlx5p1
>>
> Ah, yes! I was ignorant about the egress part of the program.
> That did the trick. The drop happens before reaching the tx
> queue of the second netdev and the mentioned code in devmem.c
> is reached.
>
> Sender is xdp-trafficgen with 3 threads pushing enough on one RX queue
> to saturate the CPU.
>
> Here's what I got:
>
> * before:
>
> eth2->eth3 16,153,328 rx/s 16,153,329 err,drop/s 0 xmit/s
> xmit eth2->eth3 0 xmit/s 16,153,329 drop/s 0 drv_err/s 16.00 bulk-avg
> eth2->eth3 16,152,538 rx/s 16,152,546 err,drop/s 0 xmit/s
> xmit eth2->eth3 0 xmit/s 16,152,546 drop/s 0 drv_err/s 16.00 bulk-avg
> eth2->eth3 16,156,331 rx/s 16,156,337 err,drop/s 0 xmit/s
> xmit eth2->eth3 0 xmit/s 16,156,337 drop/s 0 drv_err/s 16.00 bulk-avg
>
> * after:
>
> eth2->eth3 16,105,461 rx/s 16,105,469 err,drop/s 0 xmit/s
> xmit eth2->eth3 0 xmit/s 16,105,469 drop/s 0 drv_err/s 16.00 bulk-avg
> eth2->eth3 16,119,550 rx/s 16,119,541 err,drop/s 0 xmit/s
> xmit eth2->eth3 0 xmit/s 16,119,541 drop/s 0 drv_err/s 16.00 bulk-avg
> eth2->eth3 16,092,145 rx/s 16,092,154 err,drop/s 0 xmit/s
> xmit eth2->eth3 0 xmit/s 16,092,154 drop/s 0 drv_err/s 16.00 bulk-avg
>
> So slightly worse... I don't fully trust the measurements though as I
> saw the inverse situation in other tests as well: higher rate after the
> patch.
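For reference, the setup quoted above can be sketched as below. The interface names and binary paths are illustrative, and the exact xdp-trafficgen option spelling may differ between xdp-tools versions:

```shell
# Sketch of the test setup described above. Interface names and
# binary paths are examples; the xdp-trafficgen flags are an
# assumption and may differ on your xdp-tools build.
RX_IF=eth2
TX_IF=eth3

cmds="sudo ./xdp-trafficgen udp --threads 3 $RX_IF
sudo ./xdp-bench redirect-map --load-egress $RX_IF $TX_IF"

# Print rather than execute: the real run needs root and two
# (here mlx5) netdevs.
printf '%s\n' "$cmds"
```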
I had a chance to re-run this on a more stable system and the conclusion
is the same. Performance is ~2% worse:

* before:

eth2->eth3 13,746,431 rx/s 13,746,471 err,drop/s 0 xmit/s
xmit eth2->eth3 0 xmit/s 13,746,471 drop/s 0 drv_err/s 16.00 bulk-avg

* after:

eth2->eth3 13,437,277 rx/s 13,437,259 err,drop/s 0 xmit/s
xmit eth2->eth3 0 xmit/s 13,437,259 drop/s 0 drv_err/s 16.00 bulk-avg
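As a quick sanity check, the regression works out from the rx/s rates above as:

```shell
# Relative change of the rx/s rate, before vs. after the patch,
# using the numbers from the stable-system run above.
before=13746431
after=13437277
pct=$(awk -v b="$before" -v a="$after" \
    'BEGIN { printf "%.2f", (a - b) / b * 100 }')
printf '%s%%\n' "$pct"   # -2.25%
```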
After this experiment, it doesn't seem worth proceeding in this
direction... I was more optimistic at the start.
>>> Toke (and I) would appreciate it if you added code for this to xdp-bench.
>> Supporting a --program-mode like 'redirect-cpu' does.
>>
>>
> Ok. I will add it.
>
Added it here:
https://github.com/xdp-project/xdp-tools/pull/532
Thanks,
Dragos
Thread overview: 11+ messages
2025-11-07 10:28 [RFC 0/2] xdp: Delegate fast path return decision to page_pool Dragos Tatulea
2025-11-07 10:28 ` [RFC 1/2] page_pool: add benchmarking for napi-based recycling Dragos Tatulea
2025-11-07 11:04 ` bot+bpf-ci
2025-11-07 10:28 ` [RFC 2/2] xdp: Delegate fast path return decision to page_pool Dragos Tatulea
2025-11-10 11:06 ` Jesper Dangaard Brouer
2025-11-10 18:51 ` Dragos Tatulea
2025-11-11 7:54 ` Jesper Dangaard Brouer
2025-11-11 18:25 ` Dragos Tatulea
2025-12-01 10:12 ` Dragos Tatulea [this message]
2025-12-02 14:00 ` Jesper Dangaard Brouer
2025-12-02 16:29 ` Dragos Tatulea