From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Jesper Dangaard Brouer <hawk@kernel.org>
Cc: "Andrew Lunn" <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>,
"Eric Dumazet" <edumazet@google.com>,
"Jakub Kicinski" <kuba@kernel.org>,
"Paolo Abeni" <pabeni@redhat.com>,
"Lorenzo Bianconi" <lorenzo@kernel.org>,
"Daniel Xu" <dxu@dxuuu.xyz>,
"Alexei Starovoitov" <ast@kernel.org>,
"Daniel Borkmann" <daniel@iogearbox.net>,
"Andrii Nakryiko" <andrii@kernel.org>,
"John Fastabend" <john.fastabend@gmail.com>,
"Toke Høiland-Jørgensen" <toke@kernel.org>,
"Martin KaFai Lau" <martin.lau@linux.dev>,
netdev@vger.kernel.org, bpf@vger.kernel.org,
linux-kernel@vger.kernel.org,
"Jesse Brandeburg" <jbrandeburg@cloudflare.com>,
kernel-team <kernel-team@cloudflare.com>
Subject: Re: [PATCH net-next v2 0/8] bpf: cpumap: enable GRO for XDP_PASS frames
Date: Wed, 8 Jan 2025 14:39:04 +0100 [thread overview]
Message-ID: <d37132e7-b8a6-4095-904c-efa85e15f9e7@intel.com> (raw)
In-Reply-To: <5ea87b3d-4fcb-4e20-a348-ff90cd9283d9@kernel.org>
From: Jesper Dangaard Brouer <hawk@kernel.org>
Date: Tue, 7 Jan 2025 18:17:06 +0100
> Awesome work! - some questions below
>
> On 07/01/2025 16.29, Alexander Lobakin wrote:
>> Several months ago, I had been looking through my old XDP hints tree[0]
>> to check whether some patches not directly related to hints can be sent
>> standalone. Roughly at the same time, Daniel appeared and asked[1] about
>> GRO for cpumap from that tree.
>>
>> Currently, cpumap uses its own kthread which processes cpumap-redirected
>> frames by batches of 8, without any weighting (but with rescheduling
>> points). The resulting skbs get passed to the stack via
>> netif_receive_skb_list(), which means no GRO happens.
>> Even though we can't currently pass the checksum status from the drivers,
>> in many cases GRO performs better than listified Rx without aggregation,
>> as confirmed by tests.
>>
>> In order to enable GRO in cpumap, we need to do the following:
>>
>> * patches 1-2: decouple the GRO struct from the NAPI struct and allow
>> using it out of a NAPI entity within the kernel core code;
>> * patch 3: switch cpumap from netif_receive_skb_list() to
>> gro_receive_skb().
>>
>> Additional improvements:
>>
>> * patch 4: optimize XDP_PASS in cpumap by using arrays instead of linked
>> lists;
>> * patch 5-6: introduce and use a function to get skbs from the NAPI percpu
>> caches in bulk, not one at a time;
>> * patch 7-8: use that function in veth as well and remove the function it
>> supersedes.
>>
>> My trafficgen UDP GRO tests, small frame sizes:
>>
>
> How does your trafficgen UDP test manage to get UDP GRO working?
> (Perhaps you can share test?)
I usually test as follows:

* xdp-trafficgen from xdp-tools on the sender;
* on the receiver: ethtool -K <iface> rx-udp-gro-forwarding on

There's no socket on the receiver, but this option enables GRO not only
when forwarding, but also on the LOCAL_IN path when there's simply no
socket. The UDP core then drops the frame during the socket lookup, as
there's no socket to deliver to.
IOW, I get the following:
* GRO gets performed;
* the stack overhead is there, up to the UDP socket lookup;
* the final frame is dropped, so there's no userspace copy overhead.
>
> What is the "small frame" size being used?
xdp-trafficgen currently hardcodes the frame size to 64 bytes. I was
planning to add an option to configure the frame size and send it
upstream, but unfortunately haven't finished it yet.
I realize that with bigger frames, the boost won't be as big, since the
CPU will have to calculate checksums for larger buffers. OTOH, TCP
benchmarks usually send MTU-sized buffers (+ TSO), yet the performance is
still better.
>
> Is the UDP benchmark avoiding (re)calculating the RX checksum?
> (via setting UDP csum to zero)
Oh, I completely forgot about this one. I can imagine even bigger boosts
once the CPU checksumming disappears.
>
>>              GRO off   GRO on
>> baseline     2.7       N/A      Mpps
>> patch 3      2.3       4        Mpps
>> patch 8      2.4       4.7      Mpps
>>
>> 1...3 diff   -17       +48      %
>> 1...8 diff   -11       +74      %
>>
>> Daniel reported from +14%[2] to +18%[3] higher throughput in neper's
>> TCP RR tests. On my system, however, the same test gave me up to +100%.
>>
>
> I can imagine that the TCP throughput tests will yield a huge
> performance boost.
>
>> Note that there's a series from Lorenzo[4] which achieves the same, but
>> in a different way. During the discussions, the approach using a
>> standalone GRO instance was preferred over the threaded NAPI.
>>
>
> It looks like you are keeping the "remote" CPUMAP kthread process design
> intact in this series, right?
Right, the kthread logic remains the same as before.
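For reference, the kthread pattern being kept is roughly the following.
This is a simplified sketch based on the cover letter and patch titles:
the gro_init()/gro_receive_skb()/gro_flush()/gro_cleanup() names and the
rcpu->gro field are assumptions from the patch subjects, and
run_xdp_and_build_skb() is a hypothetical placeholder for the existing
XDP-run + skb-build steps, not an exact kernel function.

```
/* Simplified sketch of the cpumap kthread loop after this series.
 * gro_*() names follow the patch titles and are assumptions, not
 * verified kernel signatures; run_xdp_and_build_skb() is a placeholder.
 */
static int cpu_map_kthread_run(void *data)
{
	struct bpf_cpu_map_entry *rcpu = data;

	gro_init(&rcpu->gro);		/* patches 1-2: GRO outside NAPI */

	while (!kthread_should_stop()) {
		void *frames[8];
		int i, n;

		/* same batch-of-8 consumption as before the series */
		n = ptr_ring_consume_batched(rcpu->queue, frames, 8);
		for (i = 0; i < n; i++) {
			struct sk_buff *skb;

			skb = run_xdp_and_build_skb(rcpu, frames[i]);
			if (skb)	/* XDP_PASS */
				gro_receive_skb(&rcpu->gro, skb);
				/* patch 3: was netif_receive_skb_list() */
		}
		gro_flush(&rcpu->gro);	/* push aggregated skbs up the stack */
		cond_resched();		/* rescheduling point */
	}

	gro_cleanup(&rcpu->gro);
	return 0;
}
```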
>
> I think this design works for our use-case. For our use-case, we want to
> give "remote" CPU-thread higher scheduling priority. It doesn't matter
> if this is a kthread or threaded-NAPI thread, as long as we can see this
> as a PID from userspace (by which we adjust the sched priority).
>
> Great to see this work progressing again :-)))
> --Jesper
Thanks,
Olek