[PATCH net-next v3 0/8] bpf: cpumap: enable GRO for XDP_PASS frames
From: Alexander Lobakin @ 2025-01-15 15:18 UTC
  To: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni
  Cc: Alexander Lobakin, Lorenzo Bianconi, Daniel Xu,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	John Fastabend, Toke Høiland-Jørgensen,
	Jesper Dangaard Brouer, Martin KaFai Lau, netdev, bpf,
	linux-kernel

Several months ago, I was looking through my old XDP hints tree[0]
to check whether some patches not directly related to hints could be
sent standalone. Roughly at the same time, Daniel appeared and asked[1]
about GRO for cpumap from that tree.

Currently, cpumap uses its own kthread which processes cpumap-redirected
frames in batches of 8, without any weighting (but with rescheduling
points). The resulting skbs get passed to the stack via
netif_receive_skb_list(), which means no GRO happens.
Even though we can't currently pass checksum status from the drivers,
tests confirm that in many cases GRO still performs better than
listified Rx without aggregation.
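
For reference, the delivery step in the kthread currently looks roughly
like this (simplified sketch, not verbatim kernel code; `skbs' and `n'
stand for the batch produced by the XDP run):

        LIST_HEAD(list);

        /* Chain the freshly built skbs... */
        for (i = 0; i < n; i++)
                list_add_tail(&skbs[i]->list, &list);

        /* ...and hand them to the stack. Listified delivery, but no
         * GRO aggregation happens on this path.
         */
        netif_receive_skb_list(&list);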

In order to enable GRO in cpumap, we need to do the following:

* patches 1-2: decouple the GRO struct from the NAPI struct and allow
  using it outside of a NAPI entity within the kernel core code;
* patch 3: switch cpumap from netif_receive_skb_list() to
  gro_receive_skb(), as sketched below.
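
Very roughly, the delivery step then becomes the following (the
per-entry `rcpu->gro' field and the flush helper names are my shorthand
here; the exact API is in patches 1-3):

        /* Standalone GRO instance, one per cpumap entry, only ever
         * touched from its own kthread.
         */
        struct gro_node *gro = &rcpu->gro;

        for (i = 0; i < n; i++)
                gro_receive_skb(gro, skbs[i]);

        /* Push out whatever GRO still holds before the kthread goes
         * back to sleep, so packets don't get stuck waiting for the
         * next batch.
         */
        gro_flush(gro, false);
        gro_normal_list(gro);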

Additional improvements:

* patch 4: optimize XDP_PASS in cpumap by using arrays instead of linked
  lists;
* patches 5-6: introduce a function to get skbs from the NAPI percpu
  caches in bulk rather than one at a time (usage sketch below);
* patches 7-8: use that function in veth as well and remove the helper
  it supersedes.
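
A rough usage sketch of the bulk getter in the cpumap kthread (the
return convention and surrounding details are simplified here;
`frames'/`nframes' are the xdp_frame batch, see patches 5-6 for the
real code):

        void *skbs[CPUMAP_BATCH];
        u32 got, i;

        /* Grab up to nframes skbs from the NAPI percpu cache in one
         * go; may return fewer if the cache plus a slab bulk refill
         * fall short, so the caller has to handle a partial batch.
         */
        got = napi_skb_cache_get_bulk(skbs, nframes);

        for (i = 0; i < got; i++)
                skbs[i] = __xdp_build_skb_from_frame(frames[i], skbs[i],
                                                     frames[i]->dev_rx);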

My trafficgen UDP GRO tests, small frame sizes:

                GRO off    GRO on
baseline        2.7        N/A       Mpps
patch 3         2.3        4         Mpps
patch 8         2.4        4.7       Mpps

1...3 diff      -17        +48       %
1...8 diff      -11        +74       %

Daniel reported throughput improvements from +14%[2] to +18%[3] in
neper's TCP RR tests. On my system, however, the same test gave me up
to +100%.

Note that there's a series from Lorenzo[4] which achieves the same goal,
but in a different way. During the discussions, the approach using a
standalone GRO instance was preferred over the threaded NAPI one.

[0] https://github.com/alobakin/linux/tree/xdp_hints
[1] https://lore.kernel.org/bpf/cadda351-6e93-4568-ba26-21a760bf9a57@app.fastmail.com
[2] https://lore.kernel.org/bpf/merfatcdvwpx2lj4j2pahhwp4vihstpidws3jwljwazhh76xkd@t5vsh4gvk4mh
[3] https://lore.kernel.org/bpf/yzda66wro5twmzpmjoxvy4si5zvkehlmgtpi6brheek3sj73tj@o7kd6nurr3o6
[4] https://lore.kernel.org/bpf/20241130-cpumap-gro-v1-0-c1180b1b5758@kernel.org

Alexander Lobakin (8):
  net: gro: decouple GRO from the NAPI layer
  net: gro: expose GRO init/cleanup to use outside of NAPI
  bpf: cpumap: switch to GRO from netif_receive_skb_list()
  bpf: cpumap: reuse skb array instead of a linked list to chain skbs
  net: skbuff: introduce napi_skb_cache_get_bulk()
  bpf: cpumap: switch to napi_skb_cache_get_bulk()
  veth: use napi_skb_cache_get_bulk() instead of xdp_alloc_skb_bulk()
  xdp: remove xdp_alloc_skb_bulk()

 include/linux/netdevice.h                  |  26 ++--
 include/linux/skbuff.h                     |   1 +
 include/net/busy_poll.h                    |  11 +-
 include/net/gro.h                          |  38 ++++--
 include/net/xdp.h                          |   1 -
 drivers/net/ethernet/brocade/bna/bnad.c    |   1 +
 drivers/net/ethernet/cortina/gemini.c      |   1 +
 drivers/net/veth.c                         |   3 +-
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c |   1 +
 kernel/bpf/cpumap.c                        | 145 +++++++++++++--------
 net/core/dev.c                             |  77 +++--------
 net/core/gro.c                             | 101 +++++++++-----
 net/core/skbuff.c                          |  62 +++++++++
 net/core/xdp.c                             |  10 --
 14 files changed, 298 insertions(+), 180 deletions(-)

---
From v2[5]:
* 1: remove the napi_id duplication in both &gro_node and &napi_struct
     by using a tagged struct group (rough illustration below). The most
     efficient approach I've found so far: no additional branches, no
     inline expansion, no tail calls / double calls, and it saves 8 bytes
     of &napi_struct compared with v2 (Jakub, Paolo, me);
* 4: improve and streamline the skb allocation failure handling
     (-1 branch per frame), skip more code for skb-only batches.
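
Rough illustration of the tagged-struct-group trick (struct_group_tagged()
is the existing helper from include/linux/stddef.h; the member list below
is made up, the real layout is in patch 1):

        struct napi_struct {
                /* ... unrelated NAPI fields ... */

                /* Declares the standalone type "struct gro_node" and
                 * embeds it as the "gro" member in one go: napi_id is
                 * stored exactly once, yet reachable both as
                 * napi->napi_id and as napi->gro.napi_id.
                 */
                struct_group_tagged(gro_node, gro,
                        struct list_head        rx_list;
                        u32                     rx_count;
                        u32                     napi_id;
                );

                /* ... the rest of the NAPI fields ... */
        };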

From v1[6]:
* use a standalone GRO instance instead of the threaded NAPI (Jakub);
* rebase and send to net-next as it's now more networking than BPF.

[5] https://lore.kernel.org/netdev/20250107152940.26530-1-aleksander.lobakin@intel.com
[6] https://lore.kernel.org/bpf/20240830162508.1009458-1-aleksander.lobakin@intel.com
-- 
2.48.0



Thread overview: 21+ messages
2025-01-15 15:18 [PATCH net-next v3 0/8] bpf: cpumap: enable GRO for XDP_PASS frames Alexander Lobakin
2025-01-15 15:18 ` [PATCH net-next v3 1/8] net: gro: decouple GRO from the NAPI layer Alexander Lobakin
2025-01-17  1:11   ` Jakub Kicinski
2025-01-17 12:43   ` Toke Høiland-Jørgensen
2025-01-15 15:18 ` [PATCH net-next v3 2/8] net: gro: expose GRO init/cleanup to use outside of NAPI Alexander Lobakin
2025-01-17  1:14   ` Jakub Kicinski
2025-01-17 12:43   ` Toke Høiland-Jørgensen
2025-01-15 15:18 ` [PATCH net-next v3 3/8] bpf: cpumap: switch to GRO from netif_receive_skb_list() Alexander Lobakin
2025-01-15 17:52   ` Jesper Dangaard Brouer
2025-01-17  1:16   ` Jakub Kicinski
2025-01-17 12:45   ` Toke Høiland-Jørgensen
2025-01-15 15:18 ` [PATCH net-next v3 4/8] bpf: cpumap: reuse skb array instead of a linked list to chain skbs Alexander Lobakin
2025-01-17 12:53   ` Toke Høiland-Jørgensen
2025-01-15 15:18 ` [PATCH net-next v3 5/8] net: skbuff: introduce napi_skb_cache_get_bulk() Alexander Lobakin
2025-01-17 12:56   ` Toke Høiland-Jørgensen
2025-01-15 15:18 ` [PATCH net-next v3 6/8] bpf: cpumap: switch to napi_skb_cache_get_bulk() Alexander Lobakin
2025-01-17 12:58   ` Toke Høiland-Jørgensen
2025-01-15 15:19 ` [PATCH net-next v3 7/8] veth: use napi_skb_cache_get_bulk() instead of xdp_alloc_skb_bulk() Alexander Lobakin
2025-01-17 12:58   ` Toke Høiland-Jørgensen
2025-01-15 15:19 ` [PATCH net-next v3 8/8] xdp: remove xdp_alloc_skb_bulk() Alexander Lobakin
2025-01-17 12:58   ` Toke Høiland-Jørgensen
