public inbox for netdev@vger.kernel.org
* [PATCH net-next v4 0/6] net: bcmgenet: add XDP support
@ 2026-03-23 12:05 Nicolai Buchwitz
  2026-03-23 12:05 ` [PATCH net-next v4 1/6] net: bcmgenet: convert RX path to page_pool Nicolai Buchwitz
                   ` (5 more replies)
  0 siblings, 6 replies; 8+ messages in thread
From: Nicolai Buchwitz @ 2026-03-23 12:05 UTC (permalink / raw)
  To: netdev
  Cc: Justin Chen, Simon Horman, Nicolai Buchwitz, Alexei Starovoitov,
	Daniel Borkmann, David S. Miller, Jakub Kicinski,
	Jesper Dangaard Brouer, John Fastabend, Stanislav Fomichev, bpf

Add XDP support to the bcmgenet driver, covering XDP_PASS, XDP_DROP,
XDP_TX, XDP_REDIRECT, and ndo_xdp_xmit.

The first patch converts the RX path from the existing kmalloc-based
allocation to page_pool, which is a prerequisite for XDP. The remaining
patches incrementally add XDP functionality and per-action statistics.

Tested on Raspberry Pi CM4 (BCM2711, bcmgenet, 1Gbps link):
- XDP_PASS: 943 Mbit/s TX, 935 Mbit/s RX (no regression vs baseline)
- XDP_PASS latency: 0.164ms avg, 0% packet loss
- XDP_DROP: all inbound traffic blocked as expected
- XDP_TX: TX counter increments (packet reflection working)
- Link flap with XDP attached: no errors
- Program swap under iperf3 load: no errors
- Upstream XDP selftests (xdp.py): pass_sb, drop_sb, tx_sb passing
- XDP-based EtherCAT master (~37 kHz cycle rate, all packet processing
  in BPF/XDP), stable over multiple days

Changes since v3:
  - Fixed page leak on partial bcmgenet_alloc_rx_buffers() failure:
    free already-allocated rx_cbs before destroying page pool.
    (Simon Horman)
  - Fixed GENET_Q16_TX_BD_CNT, which was defined as 64 instead of 32,
    to match the documented and intended BD allocation. (Simon Horman)
  - Moved XDP TX ring to a separate struct member (xdp_tx_ring)
    instead of expanding tx_rings[] to DESC_INDEX+1. (Justin Chen)
  - Added synchronize_net() before bpf_prog_put() in XDP prog swap
    to ensure NAPI is not still running the old program.
  - Removed goto drop_page inside switch; inlined page_pool_put
    calls in each failure path. (Justin Chen)
  - Removed unnecessary curly braces around case XDP_TX. (Justin Chen)
  - Moved int err hoisting from patch 2 to patch 1 where it belongs.
    (Justin Chen)
  - Kept return type on same line as function name throughout, to
    match existing driver style. (Justin Chen)
    Note: checkpatch flags one alignment CHECK on bcmgenet_xdp_xmit_frame
    as a result; keeping per Justin's preference.
  - Fixed XDP_TX xmit failure path: use xdp_return_frame_rx_napi()
    instead of page_pool_put_full_page() after xdp_convert_buff_to_frame
    to avoid double-free of the backing page.
  - Count XDP TX packets/bytes in TX reclaim so XDP traffic is visible
    in standard network stats (ip -s link show).
  - Added headroom check before TSB prepend in XDP_TX to prevent
    out-of-bounds write when bpf_xdp_adjust_head consumed headroom.

Changes since v2:
  - Fixed xdp_prepare_buff() being called with meta_valid=false, which
    made bcmgenet_xdp_build_skb() compute metasize=UINT_MAX and corrupt
    the skb meta_len. Now passes true. (Simon Horman)
  - Removed bcmgenet_dump_tx_queue() for ring 16 in bcmgenet_timeout().
    Ring 16 has no netdev TX queue, so netdev_get_tx_queue(dev, 16)
    accessed beyond the allocated _tx array. (Simon Horman)
  - Fixed checkpatch alignment warnings in patches 4 and 5.

Changes since v1:
  - Fixed tx_rings[DESC_INDEX] out-of-bounds access. Expanded array
    to DESC_INDEX+1 and initialized ring 16 with dedicated BDs.
  - Use ring 16 (hardware default descriptor ring) for XDP TX,
    isolating from normal SKB TX queues.
  - Piggyback ring 16 TX completion on RX NAPI poll (INTRL2_1 bit
    collision with RX ring 0).
  - Fixed ring 16 TX reclaim: skip INTRL2_1 clear, skip BQL
    completion, use non-destructive reclaim in RX poll path.
  - Prepend zeroed TSB before XDP TX frame data (TBUF_64B_EN requires
    64-byte struct status_64 prefix on all TX buffers).
  - Tested with upstream XDP selftests (xdp.py): pass_sb, drop_sb,
    tx_sb all passing. The multi-buffer tests (pass_mb, drop_mb,
    tx_mb) fail because bcmgenet does not support jumbo frames /
    MTU changes; I plan to add ndo_change_mtu support in a follow-up
    series.

Nicolai Buchwitz (6):
  net: bcmgenet: convert RX path to page_pool
  net: bcmgenet: register xdp_rxq_info for each RX ring
  net: bcmgenet: add basic XDP support (PASS/DROP)
  net: bcmgenet: add XDP_TX support
  net: bcmgenet: add XDP_REDIRECT and ndo_xdp_xmit support
  net: bcmgenet: add XDP statistics counters

 drivers/net/ethernet/broadcom/Kconfig         |   1 +
 .../net/ethernet/broadcom/genet/bcmgenet.c    | 641 +++++++++++++++---
 .../net/ethernet/broadcom/genet/bcmgenet.h    |  19 +
 3 files changed, 564 insertions(+), 97 deletions(-)

--
2.51.0




Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
2026-03-23 12:05 [PATCH net-next v4 0/6] net: bcmgenet: add XDP support Nicolai Buchwitz
2026-03-23 12:05 ` [PATCH net-next v4 1/6] net: bcmgenet: convert RX path to page_pool Nicolai Buchwitz
2026-03-23 12:05 ` [PATCH net-next v4 2/6] net: bcmgenet: register xdp_rxq_info for each RX ring Nicolai Buchwitz
2026-03-23 12:05 ` [PATCH net-next v4 3/6] net: bcmgenet: add basic XDP support (PASS/DROP) Nicolai Buchwitz
2026-03-23 12:05 ` [PATCH net-next v4 4/6] net: bcmgenet: add XDP_TX support Nicolai Buchwitz
2026-03-25  4:03   ` Jakub Kicinski
2026-03-23 12:05 ` [PATCH net-next v4 5/6] net: bcmgenet: add XDP_REDIRECT and ndo_xdp_xmit support Nicolai Buchwitz
2026-03-23 12:05 ` [PATCH net-next v4 6/6] net: bcmgenet: add XDP statistics counters Nicolai Buchwitz

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox