From: Jesper Dangaard Brouer <hawk@kernel.org>
To: Dragos Tatulea <dtatulea@nvidia.com>,
	Jakub Kicinski <kuba@kernel.org>,
	Andrew Lunn <andrew+netdev@lunn.ch>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Paolo Abeni <pabeni@redhat.com>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Andrii Nakryiko <andrii@kernel.org>,
	Eduard Zingerman <eddyz87@gmail.com>, Song Liu <song@kernel.org>,
	Yonghong Song <yonghong.song@linux.dev>,
	John Fastabend <john.fastabend@gmail.com>,
	Stanislav Fomichev <sdf@fomichev.me>, Hao Luo <haoluo@google.com>,
	Jiri Olsa <jolsa@kernel.org>, Simon Horman <horms@kernel.org>,
	Toshiaki Makita <toshiaki.makita1@gmail.com>,
	David Ahern <dsahern@kernel.org>,
	Toke Hoiland Jorgensen <toke@redhat.com>
Cc: Tariq Toukan <tariqt@nvidia.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	bpf@vger.kernel.org, Martin KaFai Lau <martin.lau@linux.dev>,
	KP Singh <kpsingh@kernel.org>,
	kernel-team <kernel-team@cloudflare.com>
Subject: Re: [RFC 2/2] xdp: Delegate fast path return decision to page_pool
Date: Tue, 2 Dec 2025 15:00:40 +0100	[thread overview]
Message-ID: <cfc0a5b0-f906-4b23-a47e-dbf56291915b@kernel.org> (raw)
In-Reply-To: <ad6c4448-8fb3-4a5c-91b0-8739f95cf65b@nvidia.com>




On 01/12/2025 11.12, Dragos Tatulea wrote:
> 
> 
> [...]
>>> And then you can run this command:
>>>   sudo ./xdp-bench redirect-map --load-egress mlx5p1 mlx5p1
>>>
>> Ah, yes! I was ignorant about the egress part of the program.
>> That did the trick. The drop happens before reaching the tx
>> queue of the second netdev and the mentioned code in devmem.c
>> is reached.
>>
>> Sender is xdp-trafficgen with 3 threads pushing enough on one RX queue
>> to saturate the CPU.
>>
>> Here's what I got:
>>
>> * before:
>>
>> eth2->eth3             16,153,328 rx/s         16,153,329 err,drop/s            0 xmit/s
>>    xmit eth2->eth3               0 xmit/s       16,153,329 drop/s                0 drv_err/s         16.00 bulk-avg
>> eth2->eth3             16,152,538 rx/s         16,152,546 err,drop/s            0 xmit/s
>>    xmit eth2->eth3               0 xmit/s       16,152,546 drop/s                0 drv_err/s         16.00 bulk-avg
>> eth2->eth3             16,156,331 rx/s         16,156,337 err,drop/s            0 xmit/s
>>    xmit eth2->eth3               0 xmit/s       16,156,337 drop/s                0 drv_err/s         16.00 bulk-avg
>>
>> * after:
>>
>> eth2->eth3             16,105,461 rx/s         16,105,469 err,drop/s            0 xmit/s
>>    xmit eth2->eth3               0 xmit/s       16,105,469 drop/s                0 drv_err/s         16.00 bulk-avg
>> eth2->eth3             16,119,550 rx/s         16,119,541 err,drop/s            0 xmit/s
>>    xmit eth2->eth3               0 xmit/s       16,119,541 drop/s                0 drv_err/s         16.00 bulk-avg
>> eth2->eth3             16,092,145 rx/s         16,092,154 err,drop/s            0 xmit/s
>>    xmit eth2->eth3               0 xmit/s       16,092,154 drop/s                0 drv_err/s         16.00 bulk-avg
>>
>> So slightly worse... I don't fully trust the measurements though as I
>> saw the inverse situation in other tests as well: higher rate after the
>> patch.

Remember that you are also removing some code (the
xdp_set_return_frame_no_direct() and xdp_clear_return_frame_no_direct()
helpers).  Thus, I was actually hoping we would see a higher rate after
the patch.  This is why I wanted to see this XDP-redirect test instead
of the page_pool micro-benchmark.
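
For context, here is a minimal, self-contained sketch of the pattern those
helpers implement: a per-CPU-style "no direct return" flag that contexts
running XDP outside the page_pool's own RX NAPI (e.g. the devmap egress
program path exercised here) set around frame returns, and that the return
path consults to pick the safe path over the lockless direct cache. This is
a simplified userspace model, not the kernel code; the names
(return_no_direct, frame_return) are made up for illustration. The RFC
removes the need for this set/clear bracketing by letting page_pool itself
make the decision.

/* Simplified model of the no-direct-return flag pattern (not kernel code). */
#include <stdbool.h>
#include <stdio.h>

static _Thread_local bool return_no_direct;	/* stand-in for a per-CPU flag */

static void set_return_frame_no_direct(void)   { return_no_direct = true; }
static void clear_return_frame_no_direct(void) { return_no_direct = false; }

static void frame_return(int frame_id)
{
	if (return_no_direct)
		printf("frame %d: returned via the slower, cross-context-safe path\n",
		       frame_id);
	else
		printf("frame %d: returned via the direct (NAPI) cache\n", frame_id);
}

int main(void)
{
	/* Foreign context: bracket the XDP run with set/clear. */
	set_return_frame_no_direct();
	frame_return(1);
	clear_return_frame_no_direct();

	/* Native RX NAPI context: direct recycling is allowed. */
	frame_return(2);
	return 0;
}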


> I had a chance to re-run this on a more stable system and the conclusion
> is the same. Performance is ~2 % worse:
> 
> * before:
> eth2->eth3        13,746,431 rx/s   13,746,471 err,drop/s 0 xmit/s
>    xmit eth2->eth3          0 xmit/s 13,746,471 drop/s     0 drv_err/s 16.00 bulk-avg
> 
> * after:
> eth2->eth3        13,437,277 rx/s   13,437,259 err,drop/s 0 xmit/s
>    xmit eth2->eth3          0 xmit/s 13,437,259 drop/s     0 drv_err/s 16.00 bulk-avg
> 
> After this experiment it doesn't seem like this direction is worth
> proceeding with... I was more optimistic at the start.

I do think it is worth proceeding.  I will claim that your PPS results
are basically the same. Converting the PPS numbers to nanoseconds per
packet:
  13,746,471 pps = (1/13746471 * 10^9) = 72.75 nanosec
  13,437,259 pps = (1/13437259 * 10^9) = 74.42 nanosec
  Difference     = (74.42 - 72.75)     =  1.67 nanosec

In my experience it is very hard to find a system stable enough to
measure a 2 nanosec difference. As you also note, you had to spend effort
finding a stable system.  Thus, I claim your results show no noticeable
performance impact.

My only concern (based on your perf symbols) is that you might not be
testing the right/expected code path.  If mlx5 is running with a
page_pool memory mode that has an elevated refcnt on the page, then we
will not be exercising the slower page_pool ptr_ring return path as much
as expected.  I guess I will have to do this experiment in my own testlab
on other NIC drivers that don't use elevated refcnt by default.
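
To make the concern concrete: with a fragmented/elevated-refcnt memory
model, most returns only drop a page reference and never reach the
direct-vs-ptr_ring recycle decision; only the last reference does. Below
is a rough, standalone model of that effect (not page_pool code; the names
pp_frag_put/pp_recycle are invented, and the "last reference recycles"
behaviour is the assumption):

/* Rough model: with N fragments per page, only 1 out of N returns hits
 * the recycle decision; the rest are plain refcount drops.
 */
#include <stdbool.h>
#include <stdio.h>

struct pp_page {
	int frag_refs;		/* references handed out as RX fragments */
};

static void pp_recycle(struct pp_page *p, bool allow_direct)
{
	if (allow_direct)
		printf("page %p: recycled into the direct (NAPI) cache\n", (void *)p);
	else
		printf("page %p: recycled via the ptr_ring\n", (void *)p);
}

static void pp_frag_put(struct pp_page *p, bool allow_direct)
{
	if (--p->frag_refs > 0) {
		printf("page %p: ref dropped, %d frags still in flight\n",
		       (void *)p, p->frag_refs);
		return;
	}
	pp_recycle(p, allow_direct);
}

int main(void)
{
	struct pp_page page = { .frag_refs = 4 };
	int i;

	for (i = 0; i < 4; i++)
		pp_frag_put(&page, false);
	return 0;
}

A driver/page_pool setup without this fragmentation (one frame per page)
would hit the recycle decision on every return, which is why repeating the
test on such a NIC should stress the changed path much harder.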


>>>> Toke (and I) would appreciate it if you added code for this to xdp-bench.
>>> Supporting a --program-mode like 'redirect-cpu' does.
>>>
>>>
>> Ok. I will add it.
>>
> Added it here:
> https://github.com/xdp-project/xdp-tools/pull/532
>

Thanks, I'll take a look, and I'm sure Toke will have opinions on the
cmdline options and the missing man-page update.

--Jesper


Thread overview: 11+ messages
2025-11-07 10:28 [RFC 0/2] xdp: Delegate fast path return decision to page_pool Dragos Tatulea
2025-11-07 10:28 ` [RFC 1/2] page_pool: add benchmarking for napi-based recycling Dragos Tatulea
2025-11-07 11:04   ` bot+bpf-ci
2025-11-07 10:28 ` [RFC 2/2] xdp: Delegate fast path return decision to page_pool Dragos Tatulea
2025-11-10 11:06   ` Jesper Dangaard Brouer
2025-11-10 18:51     ` Dragos Tatulea
2025-11-11  7:54       ` Jesper Dangaard Brouer
2025-11-11 18:25         ` Dragos Tatulea
2025-12-01 10:12           ` Dragos Tatulea
2025-12-02 14:00             ` Jesper Dangaard Brouer [this message]
2025-12-02 16:29               ` Dragos Tatulea
