public inbox for netdev@vger.kernel.org
* [PATCH net-next 0/6] net: bcmgenet: add XDP support
@ 2026-03-13  9:20 Nicolai Buchwitz
  2026-03-13  9:20 ` [PATCH net-next 1/6] net: bcmgenet: convert RX path to page_pool Nicolai Buchwitz
                   ` (7 more replies)
  0 siblings, 8 replies; 16+ messages in thread
From: Nicolai Buchwitz @ 2026-03-13  9:20 UTC (permalink / raw)
  To: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Doug Berger, Florian Fainelli
  Cc: Broadcom internal kernel review list, Vikas Gupta,
	Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel, Nicolai Buchwitz

Add XDP support to the bcmgenet driver, covering XDP_PASS, XDP_DROP,
XDP_TX, XDP_REDIRECT, and ndo_xdp_xmit.

The first patch converts the RX path from the existing kmalloc-based
allocation to page_pool, which is a prerequisite for XDP. The remaining
patches incrementally add XDP functionality and per-action statistics.

Tested on Raspberry Pi CM4 (BCM2711, bcmgenet, 1Gbps link):
- XDP_PASS: 943 Mbit/s TX, 935 Mbit/s RX (no regression vs baseline)
- XDP_PASS latency: 0.164ms avg, 0% packet loss
- XDP_DROP: all inbound traffic blocked as expected
- XDP_TX: TX counter increments (packet reflection working)
- Link flap with XDP attached: no errors
- Program swap under iperf3 load: no errors

Nicolai Buchwitz (6):
  net: bcmgenet: convert RX path to page_pool
  net: bcmgenet: register xdp_rxq_info for each RX ring
  net: bcmgenet: add basic XDP support (PASS/DROP)
  net: bcmgenet: add XDP_TX support
  net: bcmgenet: add XDP_REDIRECT and ndo_xdp_xmit support
  net: bcmgenet: add XDP statistics counters

 drivers/net/ethernet/broadcom/Kconfig         |   1 +
 .../net/ethernet/broadcom/genet/bcmgenet.c    | 484 +++++++++++++++---
 .../net/ethernet/broadcom/genet/bcmgenet.h    |  17 +
 3 files changed, 425 insertions(+), 77 deletions(-)

-- 
2.51.0


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH net-next 1/6] net: bcmgenet: convert RX path to page_pool
  2026-03-13  9:20 [PATCH net-next 0/6] net: bcmgenet: add XDP support Nicolai Buchwitz
@ 2026-03-13  9:20 ` Nicolai Buchwitz
  2026-03-13  9:20 ` [PATCH net-next 2/6] net: bcmgenet: register xdp_rxq_info for each RX ring Nicolai Buchwitz
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Nicolai Buchwitz @ 2026-03-13  9:20 UTC (permalink / raw)
  To: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Doug Berger, Florian Fainelli
  Cc: Broadcom internal kernel review list, Vikas Gupta,
	Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel, Nicolai Buchwitz

Replace the per-packet __netdev_alloc_skb() + dma_map_single() in the
RX path with page_pool, which provides efficient page recycling and
DMA mapping management. This is a prerequisite for XDP support (which
requires stable page-backed buffers rather than SKB linear data).

Key changes:
- Create a page_pool per RX ring (PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV)
- bcmgenet_rx_refill() allocates pages via page_pool_alloc_pages()
- bcmgenet_desc_rx() builds SKBs from pages via napi_build_skb() with
  skb_mark_for_recycle() for automatic page_pool return
- Buffer layout reserves XDP_PACKET_HEADROOM (256 bytes) before the HW
  RSB (64 bytes) + alignment pad (2 bytes) for future XDP headroom

Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
---
 drivers/net/ethernet/broadcom/Kconfig         |   1 +
 .../net/ethernet/broadcom/genet/bcmgenet.c    | 210 ++++++++++++------
 .../net/ethernet/broadcom/genet/bcmgenet.h    |   4 +
 3 files changed, 143 insertions(+), 72 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/Kconfig b/drivers/net/ethernet/broadcom/Kconfig
index cd7dddeb91dd..e3b9a5272406 100644
--- a/drivers/net/ethernet/broadcom/Kconfig
+++ b/drivers/net/ethernet/broadcom/Kconfig
@@ -78,6 +78,7 @@ config BCMGENET
 	select BCM7XXX_PHY
 	select MDIO_BCM_UNIMAC
 	select DIMLIB
+	select PAGE_POOL
 	select BROADCOM_PHY if ARCH_BCM2835
 	help
 	  This driver supports the built-in Ethernet MACs found in the
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index 482a31e7b72b..bf3f881108f8 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -52,6 +52,14 @@
 #define RX_BUF_LENGTH		2048
 #define SKB_ALIGNMENT		32
 
+/* Page pool RX buffer layout:
+ * XDP_PACKET_HEADROOM | RSB(64) + pad(2) | frame data | skb_shared_info
+ * The HW writes the 64B RSB + 2B alignment padding before the frame.
+ */
+#define GENET_XDP_HEADROOM	XDP_PACKET_HEADROOM
+#define GENET_RSB_PAD		(sizeof(struct status_64) + 2)
+#define GENET_RX_HEADROOM	(GENET_XDP_HEADROOM + GENET_RSB_PAD)
+
 /* Tx/Rx DMA register offset, skip 256 descriptors */
 #define WORDS_PER_BD(p)		(p->hw_params->words_per_bd)
 #define DMA_DESC_SIZE		(WORDS_PER_BD(priv) * sizeof(u32))
@@ -1895,21 +1903,13 @@ static struct sk_buff *bcmgenet_free_tx_cb(struct device *dev,
 }
 
 /* Simple helper to free a receive control block's resources */
-static struct sk_buff *bcmgenet_free_rx_cb(struct device *dev,
-					   struct enet_cb *cb)
+static void bcmgenet_free_rx_cb(struct enet_cb *cb,
+				struct page_pool *pool)
 {
-	struct sk_buff *skb;
-
-	skb = cb->skb;
-	cb->skb = NULL;
-
-	if (dma_unmap_addr(cb, dma_addr)) {
-		dma_unmap_single(dev, dma_unmap_addr(cb, dma_addr),
-				 dma_unmap_len(cb, dma_len), DMA_FROM_DEVICE);
-		dma_unmap_addr_set(cb, dma_addr, 0);
+	if (cb->rx_page) {
+		page_pool_put_full_page(pool, cb->rx_page, false);
+		cb->rx_page = NULL;
 	}
-
-	return skb;
 }
 
 /* Unlocked version of the reclaim routine */
@@ -2248,46 +2248,30 @@ static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
 	goto out;
 }
 
-static struct sk_buff *bcmgenet_rx_refill(struct bcmgenet_priv *priv,
-					  struct enet_cb *cb)
+static int bcmgenet_rx_refill(struct bcmgenet_rx_ring *ring,
+			      struct enet_cb *cb)
 {
-	struct device *kdev = &priv->pdev->dev;
-	struct sk_buff *skb;
-	struct sk_buff *rx_skb;
+	struct bcmgenet_priv *priv = ring->priv;
 	dma_addr_t mapping;
+	struct page *page;
 
-	/* Allocate a new Rx skb */
-	skb = __netdev_alloc_skb(priv->dev, priv->rx_buf_len + SKB_ALIGNMENT,
-				 GFP_ATOMIC | __GFP_NOWARN);
-	if (!skb) {
+	page = page_pool_alloc_pages(ring->page_pool,
+				     GFP_ATOMIC | __GFP_NOWARN);
+	if (!page) {
 		priv->mib.alloc_rx_buff_failed++;
 		netif_err(priv, rx_err, priv->dev,
-			  "%s: Rx skb allocation failed\n", __func__);
-		return NULL;
-	}
-
-	/* DMA-map the new Rx skb */
-	mapping = dma_map_single(kdev, skb->data, priv->rx_buf_len,
-				 DMA_FROM_DEVICE);
-	if (dma_mapping_error(kdev, mapping)) {
-		priv->mib.rx_dma_failed++;
-		dev_kfree_skb_any(skb);
-		netif_err(priv, rx_err, priv->dev,
-			  "%s: Rx skb DMA mapping failed\n", __func__);
-		return NULL;
+			  "%s: Rx page allocation failed\n", __func__);
+		return -ENOMEM;
 	}
 
-	/* Grab the current Rx skb from the ring and DMA-unmap it */
-	rx_skb = bcmgenet_free_rx_cb(kdev, cb);
+	/* page_pool handles DMA mapping via PP_FLAG_DMA_MAP */
+	mapping = page_pool_get_dma_addr(page) + GENET_XDP_HEADROOM;
 
-	/* Put the new Rx skb on the ring */
-	cb->skb = skb;
-	dma_unmap_addr_set(cb, dma_addr, mapping);
-	dma_unmap_len_set(cb, dma_len, priv->rx_buf_len);
+	cb->rx_page = page;
+	cb->rx_page_offset = GENET_XDP_HEADROOM;
 	dmadesc_set_addr(priv, cb->bd_addr, mapping);
 
-	/* Return the current Rx skb to caller */
-	return rx_skb;
+	return 0;
 }
 
 /* bcmgenet_desc_rx - descriptor based rx process.
@@ -2339,23 +2323,32 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
 	while ((rxpktprocessed < rxpkttoprocess) &&
 	       (rxpktprocessed < budget)) {
 		struct status_64 *status;
+		struct page *rx_page;
+		unsigned int rx_off;
 		__be16 rx_csum;
+		void *hard_start;
 
 		cb = &priv->rx_cbs[ring->read_ptr];
-		skb = bcmgenet_rx_refill(priv, cb);
 
-		if (unlikely(!skb)) {
+		/* Save the received page before refilling */
+		rx_page = cb->rx_page;
+		rx_off = cb->rx_page_offset;
+
+		if (bcmgenet_rx_refill(ring, cb)) {
 			BCMGENET_STATS64_INC(stats, dropped);
 			goto next;
 		}
 
-		status = (struct status_64 *)skb->data;
+		page_pool_dma_sync_for_cpu(ring->page_pool, rx_page, 0,
+					   RX_BUF_LENGTH);
+
+		hard_start = page_address(rx_page) + rx_off;
+		status = (struct status_64 *)hard_start;
 		dma_length_status = status->length_status;
 		if (dev->features & NETIF_F_RXCSUM) {
 			rx_csum = (__force __be16)(status->rx_csum & 0xffff);
 			if (rx_csum) {
-				skb->csum = (__force __wsum)ntohs(rx_csum);
-				skb->ip_summed = CHECKSUM_COMPLETE;
+				/* defer csum setup to after skb is built */
 			}
 		}
 
@@ -2373,7 +2366,8 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
 		if (unlikely(len > RX_BUF_LENGTH)) {
 			netif_err(priv, rx_status, dev, "oversized packet\n");
 			BCMGENET_STATS64_INC(stats, length_errors);
-			dev_kfree_skb_any(skb);
+			page_pool_put_full_page(ring->page_pool, rx_page,
+						true);
 			goto next;
 		}
 
@@ -2381,7 +2375,8 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
 			netif_err(priv, rx_status, dev,
 				  "dropping fragmented packet!\n");
 			BCMGENET_STATS64_INC(stats, fragmented_errors);
-			dev_kfree_skb_any(skb);
+			page_pool_put_full_page(ring->page_pool, rx_page,
+						true);
 			goto next;
 		}
 
@@ -2409,24 +2404,48 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
 						DMA_RX_RXER)) == DMA_RX_RXER)
 				u64_stats_inc(&stats->errors);
 			u64_stats_update_end(&stats->syncp);
-			dev_kfree_skb_any(skb);
+			page_pool_put_full_page(ring->page_pool, rx_page,
+						true);
 			goto next;
 		} /* error packet */
 
-		skb_put(skb, len);
+		/* Build SKB from the page - data starts at hard_start,
+		 * frame begins after RSB(64) + pad(2) = 66 bytes.
+		 */
+		skb = napi_build_skb(hard_start, PAGE_SIZE - GENET_XDP_HEADROOM);
+		if (unlikely(!skb)) {
+			BCMGENET_STATS64_INC(stats, dropped);
+			page_pool_put_full_page(ring->page_pool, rx_page,
+						true);
+			goto next;
+		}
+
+		skb_mark_for_recycle(skb);
 
-		/* remove RSB and hardware 2bytes added for IP alignment */
-		skb_pull(skb, 66);
-		len -= 66;
+		/* Reserve the RSB + pad, then set the data length */
+		skb_reserve(skb, GENET_RSB_PAD);
+		__skb_put(skb, len - GENET_RSB_PAD);
 
 		if (priv->crc_fwd_en) {
-			skb_trim(skb, len - ETH_FCS_LEN);
+			skb_trim(skb, skb->len - ETH_FCS_LEN);
 			len -= ETH_FCS_LEN;
 		}
 
+		/* Set up checksum offload */
+		if (dev->features & NETIF_F_RXCSUM) {
+			rx_csum = (__force __be16)(status->rx_csum & 0xffff);
+			if (rx_csum) {
+				skb->csum = (__force __wsum)ntohs(rx_csum);
+				skb->ip_summed = CHECKSUM_COMPLETE;
+			}
+		}
+
+		len = skb->len;
 		bytes_processed += len;
 
-		/*Finish setting up the received SKB and send it to the kernel*/
+		/* Finish setting up the received SKB and send it to the
+		 * kernel.
+		 */
 		skb->protocol = eth_type_trans(skb, priv->dev);
 
 		u64_stats_update_begin(&stats->syncp);
@@ -2495,12 +2514,11 @@ static void bcmgenet_dim_work(struct work_struct *work)
 	dim->state = DIM_START_MEASURE;
 }
 
-/* Assign skb to RX DMA descriptor. */
+/* Assign page_pool pages to RX DMA descriptors. */
 static int bcmgenet_alloc_rx_buffers(struct bcmgenet_priv *priv,
 				     struct bcmgenet_rx_ring *ring)
 {
 	struct enet_cb *cb;
-	struct sk_buff *skb;
 	int i;
 
 	netif_dbg(priv, hw, priv->dev, "%s\n", __func__);
@@ -2508,10 +2526,7 @@ static int bcmgenet_alloc_rx_buffers(struct bcmgenet_priv *priv,
 	/* loop here for each buffer needing assign */
 	for (i = 0; i < ring->size; i++) {
 		cb = ring->cbs + i;
-		skb = bcmgenet_rx_refill(priv, cb);
-		if (skb)
-			dev_consume_skb_any(skb);
-		if (!cb->skb)
+		if (bcmgenet_rx_refill(ring, cb))
 			return -ENOMEM;
 	}
 
@@ -2520,16 +2535,19 @@ static int bcmgenet_alloc_rx_buffers(struct bcmgenet_priv *priv,
 
 static void bcmgenet_free_rx_buffers(struct bcmgenet_priv *priv)
 {
-	struct sk_buff *skb;
+	struct bcmgenet_rx_ring *ring;
 	struct enet_cb *cb;
-	int i;
-
-	for (i = 0; i < priv->num_rx_bds; i++) {
-		cb = &priv->rx_cbs[i];
+	int q, i;
 
-		skb = bcmgenet_free_rx_cb(&priv->pdev->dev, cb);
-		if (skb)
-			dev_consume_skb_any(skb);
+	for (q = 0; q <= priv->hw_params->rx_queues; q++) {
+		ring = &priv->rx_rings[q == priv->hw_params->rx_queues ?
+				       DESC_INDEX : q];
+		if (!ring->page_pool)
+			continue;
+		for (i = 0; i < ring->size; i++) {
+			cb = ring->cbs + i;
+			bcmgenet_free_rx_cb(cb, ring->page_pool);
+		}
 	}
 }
 
@@ -2747,6 +2765,31 @@ static void bcmgenet_init_tx_ring(struct bcmgenet_priv *priv,
 	netif_napi_add_tx(priv->dev, &ring->napi, bcmgenet_tx_poll);
 }
 
+static int bcmgenet_rx_ring_create_pool(struct bcmgenet_priv *priv,
+					struct bcmgenet_rx_ring *ring)
+{
+	struct page_pool_params pp_params = {
+		.order = 0,
+		.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
+		.pool_size = ring->size,
+		.nid = NUMA_NO_NODE,
+		.dev = &priv->pdev->dev,
+		.dma_dir = DMA_FROM_DEVICE,
+		.offset = GENET_XDP_HEADROOM,
+		.max_len = RX_BUF_LENGTH,
+	};
+
+	ring->page_pool = page_pool_create(&pp_params);
+	if (IS_ERR(ring->page_pool)) {
+		int err = PTR_ERR(ring->page_pool);
+
+		ring->page_pool = NULL;
+		return err;
+	}
+
+	return 0;
+}
+
 /* Initialize a RDMA ring */
 static int bcmgenet_init_rx_ring(struct bcmgenet_priv *priv,
 				 unsigned int index, unsigned int size,
@@ -2765,10 +2808,17 @@ static int bcmgenet_init_rx_ring(struct bcmgenet_priv *priv,
 	ring->cb_ptr = start_ptr;
 	ring->end_ptr = end_ptr - 1;
 
-	ret = bcmgenet_alloc_rx_buffers(priv, ring);
+	ret = bcmgenet_rx_ring_create_pool(priv, ring);
 	if (ret)
 		return ret;
 
+	ret = bcmgenet_alloc_rx_buffers(priv, ring);
+	if (ret) {
+		page_pool_destroy(ring->page_pool);
+		ring->page_pool = NULL;
+		return ret;
+	}
+
 	bcmgenet_init_dim(ring, bcmgenet_dim_work);
 	bcmgenet_init_rx_coalesce(ring);
 
@@ -2961,6 +3011,20 @@ static void bcmgenet_fini_rx_napi(struct bcmgenet_priv *priv)
 	}
 }
 
+static void bcmgenet_destroy_rx_page_pools(struct bcmgenet_priv *priv)
+{
+	struct bcmgenet_rx_ring *ring;
+	unsigned int i;
+
+	for (i = 0; i <= priv->hw_params->rx_queues; ++i) {
+		ring = &priv->rx_rings[i];
+		if (ring->page_pool) {
+			page_pool_destroy(ring->page_pool);
+			ring->page_pool = NULL;
+		}
+	}
+}
+
 /* Initialize Rx queues
  *
  * Queues 0-15 are priority queues. Hardware Filtering Block (HFB) can be
@@ -3032,6 +3096,7 @@ static void bcmgenet_fini_dma(struct bcmgenet_priv *priv)
 	}
 
 	bcmgenet_free_rx_buffers(priv);
+	bcmgenet_destroy_rx_page_pools(priv);
 	kfree(priv->rx_cbs);
 	kfree(priv->tx_cbs);
 }
@@ -3108,6 +3173,7 @@ static int bcmgenet_init_dma(struct bcmgenet_priv *priv, bool flush_rx)
 	if (ret) {
 		netdev_err(priv->dev, "failed to initialize Rx queues\n");
 		bcmgenet_free_rx_buffers(priv);
+		bcmgenet_destroy_rx_page_pools(priv);
 		kfree(priv->rx_cbs);
 		kfree(priv->tx_cbs);
 		return ret;
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
index 9e4110c7fdf6..11a0ec563a89 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
@@ -15,6 +15,7 @@
 #include <linux/phy.h>
 #include <linux/dim.h>
 #include <linux/ethtool.h>
+#include <net/page_pool/helpers.h>
 
 #include "../unimac.h"
 
@@ -469,6 +470,8 @@ struct bcmgenet_rx_stats64 {
 
 struct enet_cb {
 	struct sk_buff      *skb;
+	struct page         *rx_page;
+	unsigned int        rx_page_offset;
 	void __iomem *bd_addr;
 	DEFINE_DMA_UNMAP_ADDR(dma_addr);
 	DEFINE_DMA_UNMAP_LEN(dma_len);
@@ -575,6 +578,7 @@ struct bcmgenet_rx_ring {
 	struct bcmgenet_net_dim dim;
 	u32		rx_max_coalesced_frames;
 	u32		rx_coalesce_usecs;
+	struct page_pool *page_pool;
 	struct bcmgenet_priv *priv;
 };
 
-- 
2.51.0



* [PATCH net-next 2/6] net: bcmgenet: register xdp_rxq_info for each RX ring
  2026-03-13  9:20 [PATCH net-next 0/6] net: bcmgenet: add XDP support Nicolai Buchwitz
  2026-03-13  9:20 ` [PATCH net-next 1/6] net: bcmgenet: convert RX path to page_pool Nicolai Buchwitz
@ 2026-03-13  9:20 ` Nicolai Buchwitz
  2026-03-13  9:20 ` [PATCH net-next 3/6] net: bcmgenet: add basic XDP support (PASS/DROP) Nicolai Buchwitz
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Nicolai Buchwitz @ 2026-03-13  9:20 UTC (permalink / raw)
  To: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Doug Berger, Florian Fainelli
  Cc: Broadcom internal kernel review list, Vikas Gupta,
	Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel, Nicolai Buchwitz

Register an xdp_rxq_info per RX ring and associate it with the ring's
page_pool via MEM_TYPE_PAGE_POOL. This is required infrastructure for
XDP program execution: the XDP framework needs to know the memory model
backing each RX queue for correct page lifecycle management.

No functional change - XDP programs are not yet attached or executed.

Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
---
 .../net/ethernet/broadcom/genet/bcmgenet.c    | 22 +++++++++++++++++--
 .../net/ethernet/broadcom/genet/bcmgenet.h    |  2 ++
 2 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index bf3f881108f8..dd70e5af2b1e 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -2778,16 +2778,32 @@ static int bcmgenet_rx_ring_create_pool(struct bcmgenet_priv *priv,
 		.offset = GENET_XDP_HEADROOM,
 		.max_len = RX_BUF_LENGTH,
 	};
+	int err;
 
 	ring->page_pool = page_pool_create(&pp_params);
 	if (IS_ERR(ring->page_pool)) {
-		int err = PTR_ERR(ring->page_pool);
-
+		err = PTR_ERR(ring->page_pool);
 		ring->page_pool = NULL;
 		return err;
 	}
 
+	err = xdp_rxq_info_reg(&ring->xdp_rxq, priv->dev, ring->index, 0);
+	if (err)
+		goto err_free_pp;
+
+	err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq, MEM_TYPE_PAGE_POOL,
+					 ring->page_pool);
+	if (err)
+		goto err_unreg_rxq;
+
 	return 0;
+
+err_unreg_rxq:
+	xdp_rxq_info_unreg(&ring->xdp_rxq);
+err_free_pp:
+	page_pool_destroy(ring->page_pool);
+	ring->page_pool = NULL;
+	return err;
 }
 
 /* Initialize a RDMA ring */
@@ -2814,6 +2830,7 @@ static int bcmgenet_init_rx_ring(struct bcmgenet_priv *priv,
 
 	ret = bcmgenet_alloc_rx_buffers(priv, ring);
 	if (ret) {
+		xdp_rxq_info_unreg(&ring->xdp_rxq);
 		page_pool_destroy(ring->page_pool);
 		ring->page_pool = NULL;
 		return ret;
@@ -3019,6 +3036,7 @@ static void bcmgenet_destroy_rx_page_pools(struct bcmgenet_priv *priv)
 	for (i = 0; i <= priv->hw_params->rx_queues; ++i) {
 		ring = &priv->rx_rings[i];
 		if (ring->page_pool) {
+			xdp_rxq_info_unreg(&ring->xdp_rxq);
 			page_pool_destroy(ring->page_pool);
 			ring->page_pool = NULL;
 		}
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
index 11a0ec563a89..82a6d29f481d 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
@@ -16,6 +16,7 @@
 #include <linux/dim.h>
 #include <linux/ethtool.h>
 #include <net/page_pool/helpers.h>
+#include <net/xdp.h>
 
 #include "../unimac.h"
 
@@ -579,6 +580,7 @@ struct bcmgenet_rx_ring {
 	u32		rx_max_coalesced_frames;
 	u32		rx_coalesce_usecs;
 	struct page_pool *page_pool;
+	struct xdp_rxq_info xdp_rxq;
 	struct bcmgenet_priv *priv;
 };
 
-- 
2.51.0



* [PATCH net-next 3/6] net: bcmgenet: add basic XDP support (PASS/DROP)
  2026-03-13  9:20 [PATCH net-next 0/6] net: bcmgenet: add XDP support Nicolai Buchwitz
  2026-03-13  9:20 ` [PATCH net-next 1/6] net: bcmgenet: convert RX path to page_pool Nicolai Buchwitz
  2026-03-13  9:20 ` [PATCH net-next 2/6] net: bcmgenet: register xdp_rxq_info for each RX ring Nicolai Buchwitz
@ 2026-03-13  9:20 ` Nicolai Buchwitz
  2026-03-13 22:48   ` Florian Fainelli
  2026-03-13  9:20 ` [PATCH net-next 4/6] net: bcmgenet: add XDP_TX support Nicolai Buchwitz
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Nicolai Buchwitz @ 2026-03-13  9:20 UTC (permalink / raw)
  To: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Doug Berger, Florian Fainelli
  Cc: Broadcom internal kernel review list, Vikas Gupta,
	Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel, Nicolai Buchwitz

Add XDP program attachment via ndo_bpf and execute XDP programs in the
RX path. Supported actions:

  - XDP_PASS: build SKB from the (possibly modified) xdp_buff and pass
    to the network stack, handling xdp_adjust_head/tail correctly
  - XDP_DROP: return the page to page_pool, no SKB allocated
  - XDP_ABORTED: same as DROP with trace_xdp_exception

XDP_TX and XDP_REDIRECT are not yet supported; like any other unhandled
action they fall through to the XDP_ABORTED path (trace_xdp_exception
plus drop).

The XDP hook runs after the HW error checks but before SKB construction,
so dropped packets avoid all SKB allocation overhead.

Advertise NETDEV_XDP_ACT_BASIC in xdp_features.

Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
---
 .../net/ethernet/broadcom/genet/bcmgenet.c    | 153 +++++++++++++++---
 .../net/ethernet/broadcom/genet/bcmgenet.h    |   4 +
 2 files changed, 133 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index dd70e5af2b1e..d43729fc2b1b 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -35,6 +35,8 @@
 #include <linux/ip.h>
 #include <linux/ipv6.h>
 #include <linux/phy.h>
+#include <linux/bpf_trace.h>
+#include <linux/filter.h>
 
 #include <linux/unaligned.h>
 
@@ -2274,6 +2276,53 @@ static int bcmgenet_rx_refill(struct bcmgenet_rx_ring *ring,
 	return 0;
 }
 
+static struct sk_buff *bcmgenet_xdp_build_skb(struct bcmgenet_rx_ring *ring,
+					      struct xdp_buff *xdp,
+					      struct page *rx_page)
+{
+	unsigned int metasize;
+	struct sk_buff *skb;
+
+	skb = napi_build_skb(xdp->data_hard_start, PAGE_SIZE);
+	if (unlikely(!skb))
+		return NULL;
+
+	skb_mark_for_recycle(skb);
+
+	metasize = xdp->data - xdp->data_meta;
+	skb_reserve(skb, xdp->data - xdp->data_hard_start);
+	__skb_put(skb, xdp->data_end - xdp->data);
+
+	if (metasize)
+		skb_metadata_set(skb, metasize);
+
+	return skb;
+}
+
+static unsigned int
+bcmgenet_run_xdp(struct bcmgenet_rx_ring *ring, struct bpf_prog *prog,
+		 struct xdp_buff *xdp, struct page *rx_page)
+{
+	unsigned int act;
+
+	act = bpf_prog_run_xdp(prog, xdp);
+
+	switch (act) {
+	case XDP_PASS:
+		return XDP_PASS;
+	case XDP_DROP:
+		page_pool_put_full_page(ring->page_pool, rx_page, true);
+		return XDP_DROP;
+	default:
+		bpf_warn_invalid_xdp_action(ring->priv->dev, prog, act);
+		fallthrough;
+	case XDP_ABORTED:
+		trace_xdp_exception(ring->priv->dev, prog, act);
+		page_pool_put_full_page(ring->page_pool, rx_page, true);
+		return XDP_ABORTED;
+	}
+}
+
 /* bcmgenet_desc_rx - descriptor based rx process.
  * this could be called from bottom half, or from NAPI polling method.
  */
@@ -2283,6 +2332,7 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
 	struct bcmgenet_rx_stats64 *stats = &ring->stats64;
 	struct bcmgenet_priv *priv = ring->priv;
 	struct net_device *dev = priv->dev;
+	struct bpf_prog *xdp_prog;
 	struct enet_cb *cb;
 	struct sk_buff *skb;
 	u32 dma_length_status;
@@ -2293,6 +2343,8 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
 	unsigned int p_index, mask;
 	unsigned int discards;
 
+	xdp_prog = READ_ONCE(priv->xdp_prog);
+
 	/* Clear status before servicing to reduce spurious interrupts */
 	mask = 1 << (UMAC_IRQ1_RX_INTR_SHIFT + ring->index);
 	bcmgenet_intrl2_1_writel(priv, mask, INTRL2_CPU_CLEAR);
@@ -2345,12 +2397,6 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
 		hard_start = page_address(rx_page) + rx_off;
 		status = (struct status_64 *)hard_start;
 		dma_length_status = status->length_status;
-		if (dev->features & NETIF_F_RXCSUM) {
-			rx_csum = (__force __be16)(status->rx_csum & 0xffff);
-			if (rx_csum) {
-				/* defer csum setup to after skb is built */
-			}
-		}
 
 		/* DMA flags and length are still valid no matter how
 		 * we got the Receive Status Vector (64B RSB or register)
@@ -2409,26 +2455,52 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
 			goto next;
 		} /* error packet */
 
-		/* Build SKB from the page - data starts at hard_start,
-		 * frame begins after RSB(64) + pad(2) = 66 bytes.
-		 */
-		skb = napi_build_skb(hard_start, PAGE_SIZE - GENET_XDP_HEADROOM);
-		if (unlikely(!skb)) {
-			BCMGENET_STATS64_INC(stats, dropped);
-			page_pool_put_full_page(ring->page_pool, rx_page,
-						true);
-			goto next;
-		}
-
-		skb_mark_for_recycle(skb);
+		/* XDP: frame data starts after RSB + pad */
+		if (xdp_prog) {
+			struct xdp_buff xdp;
+			unsigned int xdp_act;
+			int pkt_len;
+
+			pkt_len = len - GENET_RSB_PAD;
+			if (priv->crc_fwd_en)
+				pkt_len -= ETH_FCS_LEN;
+
+			xdp_init_buff(&xdp, PAGE_SIZE, &ring->xdp_rxq);
+			xdp_prepare_buff(&xdp, page_address(rx_page),
+					 GENET_RX_HEADROOM, pkt_len, false);
+
+			xdp_act = bcmgenet_run_xdp(ring, xdp_prog, &xdp,
+						   rx_page);
+			if (xdp_act != XDP_PASS)
+				goto next;
+
+			/* XDP_PASS: build SKB from (possibly modified) xdp */
+			skb = bcmgenet_xdp_build_skb(ring, &xdp, rx_page);
+			if (unlikely(!skb)) {
+				BCMGENET_STATS64_INC(stats, dropped);
+				page_pool_put_full_page(ring->page_pool,
+							rx_page, true);
+				goto next;
+			}
+		} else {
+			/* Build SKB from the page - data starts at
+			 * hard_start, frame begins after RSB(64) + pad(2).
+			 */
+			skb = napi_build_skb(hard_start,
+					     PAGE_SIZE - GENET_XDP_HEADROOM);
+			if (unlikely(!skb)) {
+				BCMGENET_STATS64_INC(stats, dropped);
+				page_pool_put_full_page(ring->page_pool,
+							rx_page, true);
+				goto next;
+			}
 
-		/* Reserve the RSB + pad, then set the data length */
-		skb_reserve(skb, GENET_RSB_PAD);
-		__skb_put(skb, len - GENET_RSB_PAD);
+			skb_mark_for_recycle(skb);
+			skb_reserve(skb, GENET_RSB_PAD);
+			__skb_put(skb, len - GENET_RSB_PAD);
 
-		if (priv->crc_fwd_en) {
-			skb_trim(skb, skb->len - ETH_FCS_LEN);
-			len -= ETH_FCS_LEN;
+			if (priv->crc_fwd_en)
+				skb_trim(skb, skb->len - ETH_FCS_LEN);
 		}
 
 		/* Set up checksum offload */
@@ -3750,6 +3822,37 @@ static int bcmgenet_change_carrier(struct net_device *dev, bool new_carrier)
 	return 0;
 }
 
+static int bcmgenet_xdp_setup(struct net_device *dev,
+			      struct netdev_bpf *xdp)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	struct bpf_prog *old_prog;
+	struct bpf_prog *prog = xdp->prog;
+
+	if (prog && dev->mtu > PAGE_SIZE - GENET_RX_HEADROOM -
+	    SKB_DATA_ALIGN(sizeof(struct skb_shared_info))) {
+		NL_SET_ERR_MSG_MOD(xdp->extack,
+				   "MTU too large for single-page XDP buffer");
+		return -EOPNOTSUPP;
+	}
+
+	old_prog = xchg(&priv->xdp_prog, prog);
+	if (old_prog)
+		bpf_prog_put(old_prog);
+
+	return 0;
+}
+
+static int bcmgenet_xdp(struct net_device *dev, struct netdev_bpf *xdp)
+{
+	switch (xdp->command) {
+	case XDP_SETUP_PROG:
+		return bcmgenet_xdp_setup(dev, xdp);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
 static const struct net_device_ops bcmgenet_netdev_ops = {
 	.ndo_open		= bcmgenet_open,
 	.ndo_stop		= bcmgenet_close,
@@ -3761,6 +3864,7 @@ static const struct net_device_ops bcmgenet_netdev_ops = {
 	.ndo_set_features	= bcmgenet_set_features,
 	.ndo_get_stats64	= bcmgenet_get_stats64,
 	.ndo_change_carrier	= bcmgenet_change_carrier,
+	.ndo_bpf		= bcmgenet_xdp,
 };
 
 /* GENET hardware parameters/characteristics */
@@ -4063,6 +4167,7 @@ static int bcmgenet_probe(struct platform_device *pdev)
 			 NETIF_F_RXCSUM;
 	dev->hw_features |= dev->features;
 	dev->vlan_features |= dev->features;
+	dev->xdp_features = NETDEV_XDP_ACT_BASIC;
 
 	netdev_sw_irq_coalesce_default_on(dev);
 
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
index 82a6d29f481d..1459473ac1b0 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
@@ -16,6 +16,7 @@
 #include <linux/dim.h>
 #include <linux/ethtool.h>
 #include <net/page_pool/helpers.h>
+#include <linux/bpf.h>
 #include <net/xdp.h>
 
 #include "../unimac.h"
@@ -671,6 +672,9 @@ struct bcmgenet_priv {
 	u8 sopass[SOPASS_MAX];
 
 	struct bcmgenet_mib_counters mib;
+
+	/* XDP */
+	struct bpf_prog *xdp_prog;
 };
 
 static inline bool bcmgenet_has_40bits(struct bcmgenet_priv *priv)
-- 
2.51.0



* [PATCH net-next 4/6] net: bcmgenet: add XDP_TX support
  2026-03-13  9:20 [PATCH net-next 0/6] net: bcmgenet: add XDP support Nicolai Buchwitz
                   ` (2 preceding siblings ...)
  2026-03-13  9:20 ` [PATCH net-next 3/6] net: bcmgenet: add basic XDP support (PASS/DROP) Nicolai Buchwitz
@ 2026-03-13  9:20 ` Nicolai Buchwitz
  2026-03-13 11:37   ` Subbaraya Sundeep
  2026-03-13  9:21 ` [PATCH net-next 5/6] net: bcmgenet: add XDP_REDIRECT and ndo_xdp_xmit support Nicolai Buchwitz
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Nicolai Buchwitz @ 2026-03-13  9:20 UTC (permalink / raw)
  To: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Doug Berger, Florian Fainelli
  Cc: Broadcom internal kernel review list, Vikas Gupta,
	Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel, Nicolai Buchwitz

Implement XDP_TX by submitting XDP frames through the default TX ring
(DESC_INDEX). The frame is DMA-mapped and placed into a single TX
descriptor with SOP|EOP|APPEND_CRC flags.

The xdp_frame pointer is stored in the TX control block so that
bcmgenet_free_tx_cb() can call xdp_return_frame() on TX completion,
returning the page to the originating page_pool.

The page_pool DMA direction is changed from DMA_FROM_DEVICE to
DMA_BIDIRECTIONAL to support the TX DMA mapping of received pages.

Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
---
 .../net/ethernet/broadcom/genet/bcmgenet.c    | 73 ++++++++++++++++++-
 .../net/ethernet/broadcom/genet/bcmgenet.h    |  1 +
 2 files changed, 71 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index d43729fc2b1b..373ba5878ca1 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -1893,6 +1893,12 @@ static struct sk_buff *bcmgenet_free_tx_cb(struct device *dev,
 		if (cb == GENET_CB(skb)->last_cb)
 			return skb;
 
+	} else if (cb->xdpf) {
+		dma_unmap_single(dev, dma_unmap_addr(cb, dma_addr),
+				 dma_unmap_len(cb, dma_len), DMA_TO_DEVICE);
+		dma_unmap_addr_set(cb, dma_addr, 0);
+		xdp_return_frame(cb->xdpf);
+		cb->xdpf = NULL;
 	} else if (dma_unmap_addr(cb, dma_addr)) {
 		dma_unmap_page(dev,
 			       dma_unmap_addr(cb, dma_addr),
@@ -2299,10 +2305,62 @@ static struct sk_buff *bcmgenet_xdp_build_skb(struct bcmgenet_rx_ring *ring,
 	return skb;
 }
 
+static bool bcmgenet_xdp_xmit_frame(struct bcmgenet_priv *priv,
+				     struct xdp_frame *xdpf)
+{
+	struct bcmgenet_tx_ring *ring = &priv->tx_rings[DESC_INDEX];
+	struct device *kdev = &priv->pdev->dev;
+	struct enet_cb *tx_cb_ptr;
+	dma_addr_t mapping;
+	u32 len_stat;
+
+	spin_lock(&ring->lock);
+
+	if (ring->free_bds < 1) {
+		spin_unlock(&ring->lock);
+		return false;
+	}
+
+	tx_cb_ptr = bcmgenet_get_txcb(priv, ring);
+
+	mapping = dma_map_single(kdev, xdpf->data, xdpf->len, DMA_TO_DEVICE);
+	if (dma_mapping_error(kdev, mapping)) {
+		tx_cb_ptr->skb = NULL;
+		tx_cb_ptr->xdpf = NULL;
+		bcmgenet_put_txcb(priv, ring);
+		spin_unlock(&ring->lock);
+		return false;
+	}
+
+	dma_unmap_addr_set(tx_cb_ptr, dma_addr, mapping);
+	dma_unmap_len_set(tx_cb_ptr, dma_len, xdpf->len);
+	tx_cb_ptr->skb = NULL;
+	tx_cb_ptr->xdpf = xdpf;
+
+	len_stat = (xdpf->len << DMA_BUFLENGTH_SHIFT) |
+		   (priv->hw_params->qtag_mask << DMA_TX_QTAG_SHIFT) |
+		   DMA_TX_APPEND_CRC | DMA_SOP | DMA_EOP;
+
+	dmadesc_set(priv, tx_cb_ptr->bd_addr, mapping, len_stat);
+
+	ring->free_bds--;
+	ring->prod_index++;
+	ring->prod_index &= DMA_P_INDEX_MASK;
+
+	bcmgenet_tdma_ring_writel(priv, ring->index, ring->prod_index,
+				  TDMA_PROD_INDEX);
+
+	spin_unlock(&ring->lock);
+
+	return true;
+}
+
 static unsigned int
 bcmgenet_run_xdp(struct bcmgenet_rx_ring *ring, struct bpf_prog *prog,
 		 struct xdp_buff *xdp, struct page *rx_page)
 {
+	struct bcmgenet_priv *priv = ring->priv;
+	struct xdp_frame *xdpf;
 	unsigned int act;
 
 	act = bpf_prog_run_xdp(prog, xdp);
@@ -2310,14 +2368,23 @@ bcmgenet_run_xdp(struct bcmgenet_rx_ring *ring, struct bpf_prog *prog,
 	switch (act) {
 	case XDP_PASS:
 		return XDP_PASS;
+	case XDP_TX:
+		xdpf = xdp_convert_buff_to_frame(xdp);
+		if (unlikely(!xdpf) ||
+		    unlikely(!bcmgenet_xdp_xmit_frame(priv, xdpf))) {
+			page_pool_put_full_page(ring->page_pool, rx_page,
+						true);
+			return XDP_DROP;
+		}
+		return XDP_TX;
 	case XDP_DROP:
 		page_pool_put_full_page(ring->page_pool, rx_page, true);
 		return XDP_DROP;
 	default:
-		bpf_warn_invalid_xdp_action(ring->priv->dev, prog, act);
+		bpf_warn_invalid_xdp_action(priv->dev, prog, act);
 		fallthrough;
 	case XDP_ABORTED:
-		trace_xdp_exception(ring->priv->dev, prog, act);
+		trace_xdp_exception(priv->dev, prog, act);
 		page_pool_put_full_page(ring->page_pool, rx_page, true);
 		return XDP_ABORTED;
 	}
@@ -2846,7 +2913,7 @@ static int bcmgenet_rx_ring_create_pool(struct bcmgenet_priv *priv,
 		.pool_size = ring->size,
 		.nid = NUMA_NO_NODE,
 		.dev = &priv->pdev->dev,
-		.dma_dir = DMA_FROM_DEVICE,
+		.dma_dir = DMA_BIDIRECTIONAL,
 		.offset = GENET_XDP_HEADROOM,
 		.max_len = RX_BUF_LENGTH,
 	};
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
index 1459473ac1b0..192db0defbfc 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
@@ -472,6 +472,7 @@ struct bcmgenet_rx_stats64 {
 
 struct enet_cb {
 	struct sk_buff      *skb;
+	struct xdp_frame    *xdpf;
 	struct page         *rx_page;
 	unsigned int        rx_page_offset;
 	void __iomem *bd_addr;
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next 5/6] net: bcmgenet: add XDP_REDIRECT and ndo_xdp_xmit support
  2026-03-13  9:20 [PATCH net-next 0/6] net: bcmgenet: add XDP support Nicolai Buchwitz
                   ` (3 preceding siblings ...)
  2026-03-13  9:20 ` [PATCH net-next 4/6] net: bcmgenet: add XDP_TX support Nicolai Buchwitz
@ 2026-03-13  9:21 ` Nicolai Buchwitz
  2026-03-13  9:21 ` [PATCH net-next 6/6] net: bcmgenet: add XDP statistics counters Nicolai Buchwitz
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Nicolai Buchwitz @ 2026-03-13  9:21 UTC (permalink / raw)
  To: Andrew Lunn, David S . Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Doug Berger, Florian Fainelli
  Cc: Broadcom internal kernel review list, Vikas Gupta,
	Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel, Nicolai Buchwitz

Add XDP_REDIRECT support and implement ndo_xdp_xmit for receiving
redirected frames from other devices.

XDP_REDIRECT uses xdp_do_redirect() in the RX path with xdp_do_flush()
called once per NAPI poll cycle.

ndo_xdp_xmit batches multiple frames into the default TX ring under a
single spinlock acquisition, ringing the doorbell once after all frames
are queued. xdp_features_set_redirect_target() and
xdp_features_clear_redirect_target() are called from the XDP setup path
as a program is attached or detached.

Advertise NETDEV_XDP_ACT_REDIRECT and NETDEV_XDP_ACT_NDO_XMIT in
xdp_features.

Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
---
 .../net/ethernet/broadcom/genet/bcmgenet.c    | 93 +++++++++++++++----
 1 file changed, 76 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index 373ba5878ca1..30181f9cff98 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -2305,21 +2305,21 @@ static struct sk_buff *bcmgenet_xdp_build_skb(struct bcmgenet_rx_ring *ring,
 	return skb;
 }
 
+/* Submit a single XDP frame to the TX ring. Caller must hold ring->lock.
+ * Returns true on success. Does not ring the doorbell - caller must
+ * write TDMA_PROD_INDEX after batching.
+ */
 static bool bcmgenet_xdp_xmit_frame(struct bcmgenet_priv *priv,
+				     struct bcmgenet_tx_ring *ring,
 				     struct xdp_frame *xdpf)
 {
-	struct bcmgenet_tx_ring *ring = &priv->tx_rings[DESC_INDEX];
 	struct device *kdev = &priv->pdev->dev;
 	struct enet_cb *tx_cb_ptr;
 	dma_addr_t mapping;
 	u32 len_stat;
 
-	spin_lock(&ring->lock);
-
-	if (ring->free_bds < 1) {
-		spin_unlock(&ring->lock);
+	if (ring->free_bds < 1)
 		return false;
-	}
 
 	tx_cb_ptr = bcmgenet_get_txcb(priv, ring);
 
@@ -2328,7 +2328,6 @@ static bool bcmgenet_xdp_xmit_frame(struct bcmgenet_priv *priv,
 		tx_cb_ptr->skb = NULL;
 		tx_cb_ptr->xdpf = NULL;
 		bcmgenet_put_txcb(priv, ring);
-		spin_unlock(&ring->lock);
 		return false;
 	}
 
@@ -2347,12 +2346,14 @@ static bool bcmgenet_xdp_xmit_frame(struct bcmgenet_priv *priv,
 	ring->prod_index++;
 	ring->prod_index &= DMA_P_INDEX_MASK;
 
+	return true;
+}
+
+static void bcmgenet_xdp_ring_doorbell(struct bcmgenet_priv *priv,
+					struct bcmgenet_tx_ring *ring)
+{
 	bcmgenet_tdma_ring_writel(priv, ring->index, ring->prod_index,
 				  TDMA_PROD_INDEX);
-
-	spin_unlock(&ring->lock);
-
-	return true;
 }
 
 static unsigned int
@@ -2368,16 +2369,30 @@ bcmgenet_run_xdp(struct bcmgenet_rx_ring *ring, struct bpf_prog *prog,
 	switch (act) {
 	case XDP_PASS:
 		return XDP_PASS;
-	case XDP_TX:
+	case XDP_TX: {
+		struct bcmgenet_tx_ring *tx_ring;
+
+		tx_ring = &priv->tx_rings[DESC_INDEX];
 		xdpf = xdp_convert_buff_to_frame(xdp);
-		if (unlikely(!xdpf) ||
-		    unlikely(!bcmgenet_xdp_xmit_frame(priv, xdpf))) {
-			page_pool_put_full_page(ring->page_pool, rx_page,
-						true);
+		if (unlikely(!xdpf))
+			goto drop_page;
+
+		spin_lock(&tx_ring->lock);
+		if (unlikely(!bcmgenet_xdp_xmit_frame(priv, tx_ring, xdpf))) {
+			spin_unlock(&tx_ring->lock);
+			xdp_return_frame_rx_napi(xdpf);
 			return XDP_DROP;
 		}
+		bcmgenet_xdp_ring_doorbell(priv, tx_ring);
+		spin_unlock(&tx_ring->lock);
 		return XDP_TX;
+	}
+	case XDP_REDIRECT:
+		if (unlikely(xdp_do_redirect(priv->dev, xdp, prog)))
+			goto drop_page;
+		return XDP_REDIRECT;
 	case XDP_DROP:
+drop_page:
 		page_pool_put_full_page(ring->page_pool, rx_page, true);
 		return XDP_DROP;
 	default:
@@ -2400,6 +2415,7 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
 	struct bcmgenet_priv *priv = ring->priv;
 	struct net_device *dev = priv->dev;
 	struct bpf_prog *xdp_prog;
+	bool xdp_flush = false;
 	struct enet_cb *cb;
 	struct sk_buff *skb;
 	u32 dma_length_status;
@@ -2538,6 +2554,8 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
 
 			xdp_act = bcmgenet_run_xdp(ring, xdp_prog, &xdp,
 						   rx_page);
+			if (xdp_act == XDP_REDIRECT)
+				xdp_flush = true;
 			if (xdp_act != XDP_PASS)
 				goto next;
 
@@ -2611,6 +2629,9 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
 		bcmgenet_rdma_ring_writel(priv, ring->index, ring->c_index, RDMA_CONS_INDEX);
 	}
 
+	if (xdp_flush)
+		xdp_do_flush();
+
 	ring->dim.bytes = bytes_processed;
 	ring->dim.packets = rxpktprocessed;
 
@@ -3903,10 +3924,16 @@ static int bcmgenet_xdp_setup(struct net_device *dev,
 		return -EOPNOTSUPP;
 	}
 
+	if (!prog)
+		xdp_features_clear_redirect_target(dev);
+
 	old_prog = xchg(&priv->xdp_prog, prog);
 	if (old_prog)
 		bpf_prog_put(old_prog);
 
+	if (prog)
+		xdp_features_set_redirect_target(dev, false);
+
 	return 0;
 }
 
@@ -3920,6 +3947,36 @@ static int bcmgenet_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 	}
 }
 
+static int bcmgenet_xdp_xmit(struct net_device *dev, int num_frames,
+			      struct xdp_frame **frames, u32 flags)
+{
+	struct bcmgenet_priv *priv = netdev_priv(dev);
+	struct bcmgenet_tx_ring *ring = &priv->tx_rings[DESC_INDEX];
+	int sent = 0;
+	int i;
+
+	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+		return -EINVAL;
+
+	if (unlikely(!netif_running(dev)))
+		return -ENETDOWN;
+
+	spin_lock(&ring->lock);
+
+	for (i = 0; i < num_frames; i++) {
+		if (!bcmgenet_xdp_xmit_frame(priv, ring, frames[i]))
+			break;
+		sent++;
+	}
+
+	if (sent)
+		bcmgenet_xdp_ring_doorbell(priv, ring);
+
+	spin_unlock(&ring->lock);
+
+	return sent;
+}
+
 static const struct net_device_ops bcmgenet_netdev_ops = {
 	.ndo_open		= bcmgenet_open,
 	.ndo_stop		= bcmgenet_close,
@@ -3932,6 +3989,7 @@ static const struct net_device_ops bcmgenet_netdev_ops = {
 	.ndo_get_stats64	= bcmgenet_get_stats64,
 	.ndo_change_carrier	= bcmgenet_change_carrier,
 	.ndo_bpf		= bcmgenet_xdp,
+	.ndo_xdp_xmit		= bcmgenet_xdp_xmit,
 };
 
 /* GENET hardware parameters/characteristics */
@@ -4234,7 +4292,8 @@ static int bcmgenet_probe(struct platform_device *pdev)
 			 NETIF_F_RXCSUM;
 	dev->hw_features |= dev->features;
 	dev->vlan_features |= dev->features;
-	dev->xdp_features = NETDEV_XDP_ACT_BASIC;
+	dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+			    NETDEV_XDP_ACT_NDO_XMIT;
 
 	netdev_sw_irq_coalesce_default_on(dev);
 
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next 6/6] net: bcmgenet: add XDP statistics counters
  2026-03-13  9:20 [PATCH net-next 0/6] net: bcmgenet: add XDP support Nicolai Buchwitz
                   ` (4 preceding siblings ...)
  2026-03-13  9:21 ` [PATCH net-next 5/6] net: bcmgenet: add XDP_REDIRECT and ndo_xdp_xmit support Nicolai Buchwitz
@ 2026-03-13  9:21 ` Nicolai Buchwitz
  2026-03-13 23:01 ` [PATCH net-next 0/6] net: bcmgenet: add XDP support Florian Fainelli
  2026-03-14 15:52 ` Jakub Kicinski
  7 siblings, 0 replies; 16+ messages in thread
From: Nicolai Buchwitz @ 2026-03-13  9:21 UTC (permalink / raw)
  To: Andrew Lunn, David S . Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Doug Berger, Florian Fainelli
  Cc: Broadcom internal kernel review list, Vikas Gupta,
	Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel, Nicolai Buchwitz

Expose per-action XDP counters via ethtool -S: xdp_pass, xdp_drop,
xdp_tx, xdp_tx_err, xdp_redirect, and xdp_redirect_err.

These use the existing soft MIB infrastructure and are incremented in
bcmgenet_run_xdp() alongside the existing driver statistics.

Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
---
 drivers/net/ethernet/broadcom/genet/bcmgenet.c | 17 ++++++++++++++++-
 drivers/net/ethernet/broadcom/genet/bcmgenet.h |  6 ++++++
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index 30181f9cff98..ef499192b325 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -1168,6 +1168,13 @@ static const struct bcmgenet_stats bcmgenet_gstrings_stats[] = {
 	STAT_GENET_SOFT_MIB("tx_realloc_tsb", mib.tx_realloc_tsb),
 	STAT_GENET_SOFT_MIB("tx_realloc_tsb_failed",
 			    mib.tx_realloc_tsb_failed),
+	/* XDP counters */
+	STAT_GENET_SOFT_MIB("xdp_pass", mib.xdp_pass),
+	STAT_GENET_SOFT_MIB("xdp_drop", mib.xdp_drop),
+	STAT_GENET_SOFT_MIB("xdp_tx", mib.xdp_tx),
+	STAT_GENET_SOFT_MIB("xdp_tx_err", mib.xdp_tx_err),
+	STAT_GENET_SOFT_MIB("xdp_redirect", mib.xdp_redirect),
+	STAT_GENET_SOFT_MIB("xdp_redirect_err", mib.xdp_redirect_err),
 	/* Per TX queues */
 	STAT_GENET_Q(0),
 	STAT_GENET_Q(1),
@@ -2368,6 +2375,7 @@ bcmgenet_run_xdp(struct bcmgenet_rx_ring *ring, struct bpf_prog *prog,
 
 	switch (act) {
 	case XDP_PASS:
+		priv->mib.xdp_pass++;
 		return XDP_PASS;
 	case XDP_TX: {
 		struct bcmgenet_tx_ring *tx_ring;
@@ -2381,18 +2389,24 @@ bcmgenet_run_xdp(struct bcmgenet_rx_ring *ring, struct bpf_prog *prog,
 		if (unlikely(!bcmgenet_xdp_xmit_frame(priv, tx_ring, xdpf))) {
 			spin_unlock(&tx_ring->lock);
 			xdp_return_frame_rx_napi(xdpf);
+			priv->mib.xdp_tx_err++;
 			return XDP_DROP;
 		}
 		bcmgenet_xdp_ring_doorbell(priv, tx_ring);
 		spin_unlock(&tx_ring->lock);
+		priv->mib.xdp_tx++;
 		return XDP_TX;
 	}
 	case XDP_REDIRECT:
-		if (unlikely(xdp_do_redirect(priv->dev, xdp, prog)))
+		if (unlikely(xdp_do_redirect(priv->dev, xdp, prog))) {
+			priv->mib.xdp_redirect_err++;
 			goto drop_page;
+		}
+		priv->mib.xdp_redirect++;
 		return XDP_REDIRECT;
 	case XDP_DROP:
 drop_page:
+		priv->mib.xdp_drop++;
 		page_pool_put_full_page(ring->page_pool, rx_page, true);
 		return XDP_DROP;
 	default:
@@ -2400,6 +2414,7 @@ bcmgenet_run_xdp(struct bcmgenet_rx_ring *ring, struct bpf_prog *prog,
 		fallthrough;
 	case XDP_ABORTED:
 		trace_xdp_exception(priv->dev, prog, act);
+		priv->mib.xdp_drop++;
 		page_pool_put_full_page(ring->page_pool, rx_page, true);
 		return XDP_ABORTED;
 	}
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
index 192db0defbfc..b3f1b7ed604a 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
@@ -156,6 +156,12 @@ struct bcmgenet_mib_counters {
 	u32	tx_dma_failed;
 	u32	tx_realloc_tsb;
 	u32	tx_realloc_tsb_failed;
+	u32	xdp_pass;
+	u32	xdp_drop;
+	u32	xdp_tx;
+	u32	xdp_tx_err;
+	u32	xdp_redirect;
+	u32	xdp_redirect_err;
 };
 
 struct bcmgenet_tx_stats64 {
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH net-next 4/6] net: bcmgenet: add XDP_TX support
  2026-03-13  9:20 ` [PATCH net-next 4/6] net: bcmgenet: add XDP_TX support Nicolai Buchwitz
@ 2026-03-13 11:37   ` Subbaraya Sundeep
  2026-03-13 12:45     ` Nicolai Buchwitz
  0 siblings, 1 reply; 16+ messages in thread
From: Subbaraya Sundeep @ 2026-03-13 11:37 UTC (permalink / raw)
  To: Nicolai Buchwitz
  Cc: Andrew Lunn, David S . Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Doug Berger, Florian Fainelli,
	Broadcom internal kernel review list, Vikas Gupta,
	Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel

Hi,

On 2026-03-13 at 14:50:59, Nicolai Buchwitz (nb@tipi-net.de) wrote:
> Implement XDP_TX by submitting XDP frames through the default TX ring
> (DESC_INDEX). The frame is DMA-mapped and placed into a single TX
> descriptor with SOP|EOP|APPEND_CRC flags.
> 
> The xdp_frame pointer is stored in the TX control block so that
> bcmgenet_free_tx_cb() can call xdp_return_frame() on TX completion,
> returning the page to the originating page_pool.
> 
> The page_pool DMA direction is changed from DMA_FROM_DEVICE to
> DMA_BIDIRECTIONAL to support the TX DMA mapping of received pages.
> 
> Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
> ---
>  .../net/ethernet/broadcom/genet/bcmgenet.c    | 73 ++++++++++++++++++-
>  .../net/ethernet/broadcom/genet/bcmgenet.h    |  1 +
>  2 files changed, 71 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
> index d43729fc2b1b..373ba5878ca1 100644
> --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
> +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
> @@ -1893,6 +1893,12 @@ static struct sk_buff *bcmgenet_free_tx_cb(struct device *dev,
>  		if (cb == GENET_CB(skb)->last_cb)
>  			return skb;
>  
> +	} else if (cb->xdpf) {
> +		dma_unmap_single(dev, dma_unmap_addr(cb, dma_addr),
> +				 dma_unmap_len(cb, dma_len), DMA_TO_DEVICE);
> +		dma_unmap_addr_set(cb, dma_addr, 0);
> +		xdp_return_frame(cb->xdpf);
> +		cb->xdpf = NULL;
>  	} else if (dma_unmap_addr(cb, dma_addr)) {
>  		dma_unmap_page(dev,
>  			       dma_unmap_addr(cb, dma_addr),
> @@ -2299,10 +2305,62 @@ static struct sk_buff *bcmgenet_xdp_build_skb(struct bcmgenet_rx_ring *ring,
>  	return skb;
>  }
>  
> +static bool bcmgenet_xdp_xmit_frame(struct bcmgenet_priv *priv,
> +				     struct xdp_frame *xdpf)
> +{
> +	struct bcmgenet_tx_ring *ring = &priv->tx_rings[DESC_INDEX];
> +	struct device *kdev = &priv->pdev->dev;
> +	struct enet_cb *tx_cb_ptr;
> +	dma_addr_t mapping;
> +	u32 len_stat;
> +
> +	spin_lock(&ring->lock);
> +
> +	if (ring->free_bds < 1) {
> +		spin_unlock(&ring->lock);
> +		return false;
> +	}
> +
> +	tx_cb_ptr = bcmgenet_get_txcb(priv, ring);
> +
> +	mapping = dma_map_single(kdev, xdpf->data, xdpf->len, DMA_TO_DEVICE);

AFAIU you are transmitting a frame that was received on an RX queue, which
comes from the page pool and is therefore already DMA-mapped. Do you have
to do the DMA mapping again?

Thanks,
Sundeep

> +	if (dma_mapping_error(kdev, mapping)) {
> +		tx_cb_ptr->skb = NULL;
> +		tx_cb_ptr->xdpf = NULL;
> +		bcmgenet_put_txcb(priv, ring);
> +		spin_unlock(&ring->lock);
> +		return false;
> +	}
> +
> +	dma_unmap_addr_set(tx_cb_ptr, dma_addr, mapping);
> +	dma_unmap_len_set(tx_cb_ptr, dma_len, xdpf->len);
> +	tx_cb_ptr->skb = NULL;
> +	tx_cb_ptr->xdpf = xdpf;
> +
> +	len_stat = (xdpf->len << DMA_BUFLENGTH_SHIFT) |
> +		   (priv->hw_params->qtag_mask << DMA_TX_QTAG_SHIFT) |
> +		   DMA_TX_APPEND_CRC | DMA_SOP | DMA_EOP;
> +
> +	dmadesc_set(priv, tx_cb_ptr->bd_addr, mapping, len_stat);
> +
> +	ring->free_bds--;
> +	ring->prod_index++;
> +	ring->prod_index &= DMA_P_INDEX_MASK;
> +
> +	bcmgenet_tdma_ring_writel(priv, ring->index, ring->prod_index,
> +				  TDMA_PROD_INDEX);
> +
> +	spin_unlock(&ring->lock);
> +
> +	return true;
> +}
> +
>  static unsigned int
>  bcmgenet_run_xdp(struct bcmgenet_rx_ring *ring, struct bpf_prog *prog,
>  		 struct xdp_buff *xdp, struct page *rx_page)
>  {
> +	struct bcmgenet_priv *priv = ring->priv;
> +	struct xdp_frame *xdpf;
>  	unsigned int act;
>  
>  	act = bpf_prog_run_xdp(prog, xdp);
> @@ -2310,14 +2368,23 @@ bcmgenet_run_xdp(struct bcmgenet_rx_ring *ring, struct bpf_prog *prog,
>  	switch (act) {
>  	case XDP_PASS:
>  		return XDP_PASS;
> +	case XDP_TX:
> +		xdpf = xdp_convert_buff_to_frame(xdp);
> +		if (unlikely(!xdpf) ||
> +		    unlikely(!bcmgenet_xdp_xmit_frame(priv, xdpf))) {
> +			page_pool_put_full_page(ring->page_pool, rx_page,
> +						true);
> +			return XDP_DROP;
> +		}
> +		return XDP_TX;
>  	case XDP_DROP:
>  		page_pool_put_full_page(ring->page_pool, rx_page, true);
>  		return XDP_DROP;
>  	default:
> -		bpf_warn_invalid_xdp_action(ring->priv->dev, prog, act);
> +		bpf_warn_invalid_xdp_action(priv->dev, prog, act);
>  		fallthrough;
>  	case XDP_ABORTED:
> -		trace_xdp_exception(ring->priv->dev, prog, act);
> +		trace_xdp_exception(priv->dev, prog, act);
>  		page_pool_put_full_page(ring->page_pool, rx_page, true);
>  		return XDP_ABORTED;
>  	}
> @@ -2846,7 +2913,7 @@ static int bcmgenet_rx_ring_create_pool(struct bcmgenet_priv *priv,
>  		.pool_size = ring->size,
>  		.nid = NUMA_NO_NODE,
>  		.dev = &priv->pdev->dev,
> -		.dma_dir = DMA_FROM_DEVICE,
> +		.dma_dir = DMA_BIDIRECTIONAL,
>  		.offset = GENET_XDP_HEADROOM,
>  		.max_len = RX_BUF_LENGTH,
>  	};
> diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
> index 1459473ac1b0..192db0defbfc 100644
> --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
> +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
> @@ -472,6 +472,7 @@ struct bcmgenet_rx_stats64 {
>  
>  struct enet_cb {
>  	struct sk_buff      *skb;
> +	struct xdp_frame    *xdpf;
>  	struct page         *rx_page;
>  	unsigned int        rx_page_offset;
>  	void __iomem *bd_addr;
> -- 
> 2.51.0
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH net-next 4/6] net: bcmgenet: add XDP_TX support
  2026-03-13 11:37   ` Subbaraya Sundeep
@ 2026-03-13 12:45     ` Nicolai Buchwitz
  0 siblings, 0 replies; 16+ messages in thread
From: Nicolai Buchwitz @ 2026-03-13 12:45 UTC (permalink / raw)
  To: Subbaraya Sundeep
  Cc: Andrew Lunn, David S . Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Doug Berger, Florian Fainelli,
	Broadcom internal kernel review list, Vikas Gupta,
	Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel

On 13.3.2026 12:37, Subbaraya Sundeep wrote:
> Hi,

Hi Sundeep,

> 
> On 2026-03-13 at 14:50:59, Nicolai Buchwitz (nb@tipi-net.de) wrote:
>> Implement XDP_TX by submitting XDP frames through the default TX ring
>> (DESC_INDEX). The frame is DMA-mapped and placed into a single TX
>> descriptor with SOP|EOP|APPEND_CRC flags.
>> 
>> The xdp_frame pointer is stored in the TX control block so that
>> bcmgenet_free_tx_cb() can call xdp_return_frame() on TX completion,
>> returning the page to the originating page_pool.
>> 
>> The page_pool DMA direction is changed from DMA_FROM_DEVICE to
>> DMA_BIDIRECTIONAL to support the TX DMA mapping of received pages.
>> 
>> Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
>> ---
>>  .../net/ethernet/broadcom/genet/bcmgenet.c    | 73 ++++++++++++++++++-
>>  .../net/ethernet/broadcom/genet/bcmgenet.h    |  1 +
>>  2 files changed, 71 insertions(+), 3 deletions(-)
>> 
>> diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
>> index d43729fc2b1b..373ba5878ca1 100644
>> --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
>> +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
>> @@ -1893,6 +1893,12 @@ static struct sk_buff *bcmgenet_free_tx_cb(struct device *dev,
>>  		if (cb == GENET_CB(skb)->last_cb)
>>  			return skb;
>> 
>> +	} else if (cb->xdpf) {
>> +		dma_unmap_single(dev, dma_unmap_addr(cb, dma_addr),
>> +				 dma_unmap_len(cb, dma_len), DMA_TO_DEVICE);
>> +		dma_unmap_addr_set(cb, dma_addr, 0);
>> +		xdp_return_frame(cb->xdpf);
>> +		cb->xdpf = NULL;
>>  	} else if (dma_unmap_addr(cb, dma_addr)) {
>>  		dma_unmap_page(dev,
>>  			       dma_unmap_addr(cb, dma_addr),
>> @@ -2299,10 +2305,62 @@ static struct sk_buff *bcmgenet_xdp_build_skb(struct bcmgenet_rx_ring *ring,
>>  	return skb;
>>  }
>> 
>> +static bool bcmgenet_xdp_xmit_frame(struct bcmgenet_priv *priv,
>> +				     struct xdp_frame *xdpf)
>> +{
>> +	struct bcmgenet_tx_ring *ring = &priv->tx_rings[DESC_INDEX];
>> +	struct device *kdev = &priv->pdev->dev;
>> +	struct enet_cb *tx_cb_ptr;
>> +	dma_addr_t mapping;
>> +	u32 len_stat;
>> +
>> +	spin_lock(&ring->lock);
>> +
>> +	if (ring->free_bds < 1) {
>> +		spin_unlock(&ring->lock);
>> +		return false;
>> +	}
>> +
>> +	tx_cb_ptr = bcmgenet_get_txcb(priv, ring);
>> +
>> +	mapping = dma_map_single(kdev, xdpf->data, xdpf->len, DMA_TO_DEVICE);
> 
> AFAIU you are transmitting the frame received on a RQ which is from the 
> page pool
> and already dma mapped. Do you have to do dma_map again?
> 
> Thanks,
> Sundeep
> 

You're right. Since the page_pool is configured with DMA_BIDIRECTIONAL,
the pages are already mapped and we can reuse the existing mapping for
XDP_TX frames. The initial implementation took the simple route of
mapping everything uniformly, but that's unnecessary overhead for the
local XDP_TX case.

In v2 I'll add a bool dma_map parameter to bcmgenet_xdp_xmit_frame()
(following the mvneta/stmmac pattern): XDP_TX will reuse the page_pool
mapping via page_pool_get_dma_addr() + dma_sync_single_for_device(),
while ndo_xdp_xmit will keep dma_map_single() for foreign frames. The
cleanup path will be split accordingly.

Regards
Nicolai

>> [...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH net-next 3/6] net: bcmgenet: add basic XDP support (PASS/DROP)
  2026-03-13  9:20 ` [PATCH net-next 3/6] net: bcmgenet: add basic XDP support (PASS/DROP) Nicolai Buchwitz
@ 2026-03-13 22:48   ` Florian Fainelli
  2026-03-14 19:48     ` Nicolai Buchwitz
  0 siblings, 1 reply; 16+ messages in thread
From: Florian Fainelli @ 2026-03-13 22:48 UTC (permalink / raw)
  To: Nicolai Buchwitz, Andrew Lunn, David S . Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Doug Berger
  Cc: Broadcom internal kernel review list, Vikas Gupta,
	Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel

On 3/13/26 02:20, Nicolai Buchwitz wrote:
> Add XDP program attachment via ndo_bpf and execute XDP programs in the
> RX path. Supported actions:
> 
>    - XDP_PASS: build SKB from the (possibly modified) xdp_buff and pass
>      to the network stack, handling xdp_adjust_head/tail correctly
>    - XDP_DROP: return the page to page_pool, no SKB allocated
>    - XDP_ABORTED: same as DROP with trace_xdp_exception
> 
> XDP_TX and XDP_REDIRECT are not yet supported and will return
> XDP_ABORTED.
> 
> The XDP hook runs after the HW error checks but before SKB construction,
> so dropped packets avoid all SKB allocation overhead.
> 
> Advertise NETDEV_XDP_ACT_BASIC in xdp_features.
> 
> Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
> ---

[snip]

> @@ -2345,12 +2397,6 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
>   		hard_start = page_address(rx_page) + rx_off;
>   		status = (struct status_64 *)hard_start;
>   		dma_length_status = status->length_status;
> -		if (dev->features & NETIF_F_RXCSUM) {
> -			rx_csum = (__force __be16)(status->rx_csum & 0xffff);
> -			if (rx_csum) {
> -				/* defer csum setup to after skb is built */
> -			}
> -		}

Did you intend for that hunk to be deleted?
-- 
Florian

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH net-next 0/6] net: bcmgenet: add XDP support
  2026-03-13  9:20 [PATCH net-next 0/6] net: bcmgenet: add XDP support Nicolai Buchwitz
                   ` (5 preceding siblings ...)
  2026-03-13  9:21 ` [PATCH net-next 6/6] net: bcmgenet: add XDP statistics counters Nicolai Buchwitz
@ 2026-03-13 23:01 ` Florian Fainelli
  2026-03-14  0:13   ` Florian Fainelli
  2026-03-14 19:51   ` Nicolai Buchwitz
  2026-03-14 15:52 ` Jakub Kicinski
  7 siblings, 2 replies; 16+ messages in thread
From: Florian Fainelli @ 2026-03-13 23:01 UTC (permalink / raw)
  To: Nicolai Buchwitz, Andrew Lunn, David S . Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Doug Berger
  Cc: Broadcom internal kernel review list, Vikas Gupta,
	Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel

On 3/13/26 02:20, Nicolai Buchwitz wrote:
> Add XDP support to the bcmgenet driver, covering XDP_PASS, XDP_DROP,
> XDP_TX, XDP_REDIRECT, and ndo_xdp_xmit.
> 
> The first patch converts the RX path from the existing kmalloc-based
> allocation to page_pool, which is a prerequisite for XDP. The remaining
> patches incrementally add XDP functionality and per-action statistics.
> 
> Tested on Raspberry Pi CM4 (BCM2711, bcmgenet, 1Gbps link):
> - XDP_PASS: 943 Mbit/s TX, 935 Mbit/s RX (no regression vs baseline)
> - XDP_PASS latency: 0.164ms avg, 0% packet loss
> - XDP_DROP: all inbound traffic blocked as expected
> - XDP_TX: TX counter increments (packet reflection working)
> - Link flap with XDP attached: no errors
> - Program swap under iperf3 load: no errors

This is very nice, thanks for doing that work! If the network is brought 
up and there is a background iperf3 client transmitting data, and then 
you issue "reboot -f", you will see the following NPD:

[  176.531216] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000010
[  176.540052] Mem abort info:
[  176.542854]   ESR = 0x0000000096000004
[  176.546614]   EC = 0x25: DABT (current EL), IL = 32 bits
[  176.551938]   SET = 0, FnV = 0
[  176.555000]   EA = 0, S1PTW = 0
[  176.558149]   FSC = 0x04: level 0 translation fault
[  176.563037] Data abort info:
[  176.565924]   ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000
[  176.571421]   CM = 0, WnR = 0, TnD = 0, TagAccess = 0
[  176.576489]   GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
[  176.581813] user pgtable: 4k pages, 48-bit VAs, pgdp=0000000044d02000
[  176.588286] [0000000000000010] pgd=0000000000000000, p4d=0000000000000000
[  176.595101] Internal error: Oops: 0000000096000004 [#1]  SMP
[  176.600774] Modules linked in: bdc udc_core
[  176.604976] CPU: 3 UID: 0 PID: 1575 Comm: reboot Not tainted 7.0.0-rc3-g08ac0b907060 #2 PREEMPTLAZY
[  176.614124] Hardware name: BCM972180HB_V20 (DT)
[  176.618662] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[  176.625636] pc : bcmgenet_free_rx_buffers+0x78/0x148
[  176.630618] lr : bcmgenet_fini_dma+0x104/0x180
[  176.635071] sp : ffff8000836d3910
[  176.638390] x29: ffff8000836d3910 x28: 0000000000000000 x27: 0000000000000038
[  176.648318] x26: 0000000000000001 x25: 0000000000000010 x24: 00000000000003c0
[  176.658238] x23: ffffffffffffffff x22: ffff0000086b4a00 x21: 0000000000000000
[  176.666766] x20: ffff0000086b8600 x19: 0000000000000000 x18: 0000000000000000
[  176.673917] x17: 000000000000fe88 x16: 00000000000733f0 x15: 00000000000005a8
[  176.681067] x14: 000000291895b141 x13: 00000000000733f0 x12: 0000000000000000
[  176.688218] x11: 00000000000000c0 x10: 0000000000000910 x9 : ffff8000809b4ecc
[  176.695368] x8 : ffff0000058531f0 x7 : 0000000000000000 x6 : 0000000000000000
[  176.702518] x5 : 0000000000000000 x4 : ffffffff00000000 x3 : 00000000fffe1db3
[  176.709669] x2 : 0000000000000001 x1 : ffff80008103a108 x0 : 0000000000000001
[  176.716821] Call trace:
[  176.719271]  bcmgenet_free_rx_buffers+0x78/0x148 (P)
[  176.724247]  bcmgenet_fini_dma+0x104/0x180
[  176.728353]  bcmgenet_netif_stop+0x1b4/0x1f8
[  176.732633]  bcmgenet_close+0x38/0xd8
[  176.736304]  __dev_close_many+0xd4/0x1f8
[  176.740237]  netif_close_many+0x8c/0x140
[  176.744169]  unregister_netdevice_many_notify+0x210/0x998
[  176.749578]  unregister_netdevice_queue+0xa0/0xe8
[  176.754291]  unregister_netdev+0x28/0x50
[  176.758221]  bcmgenet_shutdown+0x24/0x48
[  176.762153]  platform_shutdown+0x28/0x40
[  176.766085]  device_shutdown+0x154/0x260
[  176.770015]  kernel_restart+0x48/0xc8
[  176.773688]  __do_sys_reboot+0x154/0x268
[  176.777620]  __arm64_sys_reboot+0x28/0x38
[  176.781638]  invoke_syscall+0x4c/0x118
[  176.785397]  el0_svc_common.constprop.0+0x44/0xe8
[  176.790110]  do_el0_svc+0x20/0x30
[  176.793433]  el0_svc+0x18/0x68
[  176.796495]  el0t_64_sync_handler+0x98/0xe0
[  176.800689]  el0t_64_sync+0x154/0x158
[  176.804362] Code: d280003a d503201f f94d2e93 9b3b4eb3 (f9400a61)
[  176.810467] ---[ end trace 0000000000000000 ]---

That does not happen if you do:

ip link set eth0 down

while there is transmission in progress, FWIW.

pw-bot: cr
-- 
Florian

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH net-next 0/6] net: bcmgenet: add XDP support
  2026-03-13 23:01 ` [PATCH net-next 0/6] net: bcmgenet: add XDP support Florian Fainelli
@ 2026-03-14  0:13   ` Florian Fainelli
  2026-03-14 19:51   ` Nicolai Buchwitz
  1 sibling, 0 replies; 16+ messages in thread
From: Florian Fainelli @ 2026-03-14  0:13 UTC (permalink / raw)
  To: Nicolai Buchwitz, Andrew Lunn, David S . Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Doug Berger
  Cc: Broadcom internal kernel review list, Vikas Gupta,
	Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel

On 3/13/26 16:01, Florian Fainelli wrote:
> On 3/13/26 02:20, Nicolai Buchwitz wrote:
>> Add XDP support to the bcmgenet driver, covering XDP_PASS, XDP_DROP,
>> XDP_TX, XDP_REDIRECT, and ndo_xdp_xmit.
>>
>> The first patch converts the RX path from the existing kmalloc-based
>> allocation to page_pool, which is a prerequisite for XDP. The remaining
>> patches incrementally add XDP functionality and per-action statistics.
>>
>> Tested on Raspberry Pi CM4 (BCM2711, bcmgenet, 1Gbps link):
>> - XDP_PASS: 943 Mbit/s TX, 935 Mbit/s RX (no regression vs baseline)
>> - XDP_PASS latency: 0.164ms avg, 0% packet loss
>> - XDP_DROP: all inbound traffic blocked as expected
>> - XDP_TX: TX counter increments (packet reflection working)
>> - Link flap with XDP attached: no errors
>> - Program swap under iperf3 load: no errors
> 
> This is very nice, thanks for doing that work! If the network is brought 
> up and there is a background iperf3 client transmitting data, and then 
> you issue "reboot -f", you will see the following NPD:

Sorry to flood you with more messages - after a little while I see these 
showing up on the console; is that intended?

# [ 4322.017024] page_pool_release_retry() stalled pool shutdown: id 5, 256 inflight 4289 sec
[ 4331.297004] page_pool_release_retry() stalled pool shutdown: id 6, 256 inflight 60 sec
-- 
Florian

* Re: [PATCH net-next 0/6] net: bcmgenet: add XDP support
  2026-03-13  9:20 [PATCH net-next 0/6] net: bcmgenet: add XDP support Nicolai Buchwitz
                   ` (6 preceding siblings ...)
  2026-03-13 23:01 ` [PATCH net-next 0/6] net: bcmgenet: add XDP support Florian Fainelli
@ 2026-03-14 15:52 ` Jakub Kicinski
  2026-03-14 19:52   ` Nicolai Buchwitz
  7 siblings, 1 reply; 16+ messages in thread
From: Jakub Kicinski @ 2026-03-14 15:52 UTC (permalink / raw)
  To: Nicolai Buchwitz
  Cc: Andrew Lunn, David S . Miller, Eric Dumazet, Paolo Abeni,
	Doug Berger, Florian Fainelli,
	Broadcom internal kernel review list, Vikas Gupta,
	Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel

On Fri, 13 Mar 2026 10:20:55 +0100 Nicolai Buchwitz wrote:
> Add XDP support to the bcmgenet driver, covering XDP_PASS, XDP_DROP,
> XDP_TX, XDP_REDIRECT, and ndo_xdp_xmit.
> 
> The first patch converts the RX path from the existing kmalloc-based
> allocation to page_pool, which is a prerequisite for XDP. The remaining
> patches incrementally add XDP functionality and per-action statistics.
> 
> Tested on Raspberry Pi CM4 (BCM2711, bcmgenet, 1Gbps link):
> - XDP_PASS: 943 Mbit/s TX, 935 Mbit/s RX (no regression vs baseline)
> - XDP_PASS latency: 0.164ms avg, 0% packet loss
> - XDP_DROP: all inbound traffic blocked as expected
> - XDP_TX: TX counter increments (packet reflection working)
> - Link flap with XDP attached: no errors
> - Program swap under iperf3 load: no errors

Have you had a chance to run the XDP tests from 
tools/testing/selftests/drivers/net/hw/
?

* Re: [PATCH net-next 3/6] net: bcmgenet: add basic XDP support (PASS/DROP)
  2026-03-13 22:48   ` Florian Fainelli
@ 2026-03-14 19:48     ` Nicolai Buchwitz
  0 siblings, 0 replies; 16+ messages in thread
From: Nicolai Buchwitz @ 2026-03-14 19:48 UTC (permalink / raw)
  To: Florian Fainelli
  Cc: Andrew Lunn, David S . Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Doug Berger, Broadcom internal kernel review list,
	Vikas Gupta, Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel

On 13.3.2026 23:48, Florian Fainelli wrote:
> On 3/13/26 02:20, Nicolai Buchwitz wrote:
>> Add XDP program attachment via ndo_bpf and execute XDP programs in the
>> RX path. Supported actions:
>> 
>>    - XDP_PASS: build SKB from the (possibly modified) xdp_buff and 
>> pass
>>      to the network stack, handling xdp_adjust_head/tail correctly
>>    - XDP_DROP: return the page to page_pool, no SKB allocated
>>    - XDP_ABORTED: same as DROP with trace_xdp_exception
>> 
>> XDP_TX and XDP_REDIRECT are not yet supported and will return
>> XDP_ABORTED.
>> 
>> The XDP hook runs after the HW error checks but before SKB 
>> construction,
>> so dropped packets avoid all SKB allocation overhead.
>> 
>> Advertise NETDEV_XDP_ACT_BASIC in xdp_features.
>> 
>> Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
>> ---
> 
> [snip]
> 
>> @@ -2345,12 +2397,6 @@ static unsigned int bcmgenet_desc_rx(struct 
>> bcmgenet_rx_ring *ring,
>>   		hard_start = page_address(rx_page) + rx_off;
>>   		status = (struct status_64 *)hard_start;
>>   		dma_length_status = status->length_status;
>> -		if (dev->features & NETIF_F_RXCSUM) {
>> -			rx_csum = (__force __be16)(status->rx_csum & 0xffff);
>> -			if (rx_csum) {
>> -				/* defer csum setup to after skb is built */
>> -			}
>> -		}
> 
> Did you intend for that hunk to be deleted?

Yes, (kinda) intentional. Patch 1 moved the csum setup to after SKB
construction (since there is no SKB yet at that point), but left the
early read behind as a dead no-op - patch 3 then removed it. I'll clean
this up in v2 by dropping the dead block directly in patch 1 instead.

Nicolai

* Re: [PATCH net-next 0/6] net: bcmgenet: add XDP support
  2026-03-13 23:01 ` [PATCH net-next 0/6] net: bcmgenet: add XDP support Florian Fainelli
  2026-03-14  0:13   ` Florian Fainelli
@ 2026-03-14 19:51   ` Nicolai Buchwitz
  1 sibling, 0 replies; 16+ messages in thread
From: Nicolai Buchwitz @ 2026-03-14 19:51 UTC (permalink / raw)
  To: Florian Fainelli
  Cc: Andrew Lunn, David S . Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Doug Berger, Broadcom internal kernel review list,
	Vikas Gupta, Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel

On 14.3.2026 00:01, Florian Fainelli wrote:
> On 3/13/26 02:20, Nicolai Buchwitz wrote:
>> Add XDP support to the bcmgenet driver, covering XDP_PASS, XDP_DROP,
>> XDP_TX, XDP_REDIRECT, and ndo_xdp_xmit.
>> 
>> The first patch converts the RX path from the existing kmalloc-based
>> allocation to page_pool, which is a prerequisite for XDP. The 
>> remaining
>> patches incrementally add XDP functionality and per-action statistics.
>> 
>> Tested on Raspberry Pi CM4 (BCM2711, bcmgenet, 1Gbps link):
>> - XDP_PASS: 943 Mbit/s TX, 935 Mbit/s RX (no regression vs baseline)
>> - XDP_PASS latency: 0.164ms avg, 0% packet loss
>> - XDP_DROP: all inbound traffic blocked as expected
>> - XDP_TX: TX counter increments (packet reflection working)
>> - Link flap with XDP attached: no errors
>> - Program swap under iperf3 load: no errors
> 
> This is very nice, thanks for doing that work! If the network is 
> brought up and there is a background iperf3 client transmitting data, 
> and then you issue "reboot -f", you will see the following NPD:
> 
> [  176.531216] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000010
> [  176.540052] Mem abort info:
> [  176.542854]   ESR = 0x0000000096000004
> [  176.546614]   EC = 0x25: DABT (current EL), IL = 32 bits
> [  176.551938]   SET = 0, FnV = 0
> [  176.555000]   EA = 0, S1PTW = 0
> [  176.558149]   FSC = 0x04: level 0 translation fault
> [  176.563037] Data abort info:
> [  176.565924]   ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000
> [  176.571421]   CM = 0, WnR = 0, TnD = 0, TagAccess = 0
> [  176.576489]   GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
> [  176.581813] user pgtable: 4k pages, 48-bit VAs, pgdp=0000000044d02000
> [  176.588286] [0000000000000010] pgd=0000000000000000, p4d=0000000000000000
> [  176.595101] Internal error: Oops: 0000000096000004 [#1]  SMP
> [  176.600774] Modules linked in: bdc udc_core
> [  176.604976] CPU: 3 UID: 0 PID: 1575 Comm: reboot Not tainted 7.0.0-rc3-g08ac0b907060 #2 PREEMPTLAZY
> [  176.614124] Hardware name: BCM972180HB_V20 (DT)
> [  176.618662] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> [  176.625636] pc : bcmgenet_free_rx_buffers+0x78/0x148
> [  176.630618] lr : bcmgenet_fini_dma+0x104/0x180
> [  176.635071] sp : ffff8000836d3910
> [  176.638390] x29: ffff8000836d3910 x28: 0000000000000000 x27: 0000000000000038
> [  176.648318] x26: 0000000000000001 x25: 0000000000000010 x24: 00000000000003c0
> [  176.658238] x23: ffffffffffffffff x22: ffff0000086b4a00 x21: 0000000000000000
> [  176.666766] x20: ffff0000086b8600 x19: 0000000000000000 x18: 0000000000000000
> [  176.673917] x17: 000000000000fe88 x16: 00000000000733f0 x15: 00000000000005a8
> [  176.681067] x14: 000000291895b141 x13: 00000000000733f0 x12: 0000000000000000
> [  176.688218] x11: 00000000000000c0 x10: 0000000000000910 x9 : ffff8000809b4ecc
> [  176.695368] x8 : ffff0000058531f0 x7 : 0000000000000000 x6 : 0000000000000000
> [  176.702518] x5 : 0000000000000000 x4 : ffffffff00000000 x3 : 00000000fffe1db3
> [  176.709669] x2 : 0000000000000001 x1 : ffff80008103a108 x0 : 0000000000000001
> [  176.716821] Call trace:
> [  176.719271]  bcmgenet_free_rx_buffers+0x78/0x148 (P)
> [  176.724247]  bcmgenet_fini_dma+0x104/0x180
> [  176.728353]  bcmgenet_netif_stop+0x1b4/0x1f8
> [  176.732633]  bcmgenet_close+0x38/0xd8
> [  176.736304]  __dev_close_many+0xd4/0x1f8
> [  176.740237]  netif_close_many+0x8c/0x140
> [  176.744169]  unregister_netdevice_many_notify+0x210/0x998
> [  176.749578]  unregister_netdevice_queue+0xa0/0xe8
> [  176.754291]  unregister_netdev+0x28/0x50
> [  176.758221]  bcmgenet_shutdown+0x24/0x48
> [  176.762153]  platform_shutdown+0x28/0x40
> [  176.766085]  device_shutdown+0x154/0x260
> [  176.770015]  kernel_restart+0x48/0xc8
> [  176.773688]  __do_sys_reboot+0x154/0x268
> [  176.777620]  __arm64_sys_reboot+0x28/0x38
> [  176.781638]  invoke_syscall+0x4c/0x118
> [  176.785397]  el0_svc_common.constprop.0+0x44/0xe8
> [  176.790110]  do_el0_svc+0x20/0x30
> [  176.793433]  el0_svc+0x18/0x68
> [  176.796495]  el0t_64_sync_handler+0x98/0xe0
> [  176.800689]  el0t_64_sync+0x154/0x158
> [  176.804362] Code: d280003a d503201f f94d2e93 9b3b4eb3 (f9400a61)
> [  176.810467] ---[ end trace 0000000000000000 ]---
> 
> That does not happen if you do:
> 
> ip link set eth0 down
> 
> while there is transmission in progress, FWIW.
> 

Thanks for testing!

Both the NPD and the stalled page_pool shutdown are caused by the same
bug: bcmgenet_free_rx_buffers() used a wrong ring index (a DESC_INDEX
remapping that doesn't match init_rx_queues). Already fixed in my v2 WIP.

I will do some more testing (also with the XDP selftests Jakub
mentioned) and then send the v2.

> pw-bot: cr

Thanks,
Nicolai

* Re: [PATCH net-next 0/6] net: bcmgenet: add XDP support
  2026-03-14 15:52 ` Jakub Kicinski
@ 2026-03-14 19:52   ` Nicolai Buchwitz
  0 siblings, 0 replies; 16+ messages in thread
From: Nicolai Buchwitz @ 2026-03-14 19:52 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Andrew Lunn, David S . Miller, Eric Dumazet, Paolo Abeni,
	Doug Berger, Florian Fainelli,
	Broadcom internal kernel review list, Vikas Gupta,
	Bhargava Marreddy, Rajashekar Hudumula, Eric Biggers,
	Heiner Kallweit, Markus Blöchl, Arnd Bergmann, netdev,
	linux-kernel

On 14.3.2026 16:52, Jakub Kicinski wrote:
> On Fri, 13 Mar 2026 10:20:55 +0100 Nicolai Buchwitz wrote:
>> Add XDP support to the bcmgenet driver, covering XDP_PASS, XDP_DROP,
>> XDP_TX, XDP_REDIRECT, and ndo_xdp_xmit.
>> 
>> The first patch converts the RX path from the existing kmalloc-based
>> allocation to page_pool, which is a prerequisite for XDP. The 
>> remaining
>> patches incrementally add XDP functionality and per-action statistics.
>> 
>> Tested on Raspberry Pi CM4 (BCM2711, bcmgenet, 1Gbps link):
>> - XDP_PASS: 943 Mbit/s TX, 935 Mbit/s RX (no regression vs baseline)
>> - XDP_PASS latency: 0.164ms avg, 0% packet loss
>> - XDP_DROP: all inbound traffic blocked as expected
>> - XDP_TX: TX counter increments (packet reflection working)
>> - Link flap with XDP attached: no errors
>> - Program swap under iperf3 load: no errors
> 
> Have you had a chance to run the XDP tests from
> tools/testing/selftests/drivers/net/hw/
> ?

Not yet - thanks for the pointer. I will run them and include the
results in v2.

Nicolai

end of thread, other threads:[~2026-03-14 19:52 UTC | newest]

Thread overview: 16+ messages
2026-03-13  9:20 [PATCH net-next 0/6] net: bcmgenet: add XDP support Nicolai Buchwitz
2026-03-13  9:20 ` [PATCH net-next 1/6] net: bcmgenet: convert RX path to page_pool Nicolai Buchwitz
2026-03-13  9:20 ` [PATCH net-next 2/6] net: bcmgenet: register xdp_rxq_info for each RX ring Nicolai Buchwitz
2026-03-13  9:20 ` [PATCH net-next 3/6] net: bcmgenet: add basic XDP support (PASS/DROP) Nicolai Buchwitz
2026-03-13 22:48   ` Florian Fainelli
2026-03-14 19:48     ` Nicolai Buchwitz
2026-03-13  9:20 ` [PATCH net-next 4/6] net: bcmgenet: add XDP_TX support Nicolai Buchwitz
2026-03-13 11:37   ` Subbaraya Sundeep
2026-03-13 12:45     ` Nicolai Buchwitz
2026-03-13  9:21 ` [PATCH net-next 5/6] net: bcmgenet: add XDP_REDIRECT and ndo_xdp_xmit support Nicolai Buchwitz
2026-03-13  9:21 ` [PATCH net-next 6/6] net: bcmgenet: add XDP statistics counters Nicolai Buchwitz
2026-03-13 23:01 ` [PATCH net-next 0/6] net: bcmgenet: add XDP support Florian Fainelli
2026-03-14  0:13   ` Florian Fainelli
2026-03-14 19:51   ` Nicolai Buchwitz
2026-03-14 15:52 ` Jakub Kicinski
2026-03-14 19:52   ` Nicolai Buchwitz
