* Re: [PATCH net-next v3 1/6] net: bcmgenet: convert RX path to page_pool
From: Simon Horman @ 2026-03-20 17:02 UTC (permalink / raw)
To: nb
Cc: Simon Horman, davem, opendmb, john.fastabend, florian.fainelli,
daniel, arnd, bcm-kernel-feedback-list, vikas.gupta,
bhargava.marreddy, linux-kernel, ast, edumazet, kuba, sdf,
ebiggers, hawk, andrew+netdev, bpf, pabeni, rajashekar.hudumula,
netdev
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
net: bcmgenet: convert RX path to page_pool
Replace the per-packet __netdev_alloc_skb() + dma_map_single() in the
RX path with page_pool, which provides efficient page recycling and
DMA mapping management. This is a prerequisite for XDP support (which
requires stable page-backed buffers rather than SKB linear data).
Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
> diff --git a/drivers/net/ethernet/broadcom/Kconfig b/drivers/net/ethernet/broadcom/Kconfig
> index cd7dddeb91dd..e3b9a5272406 100644
> --- a/drivers/net/ethernet/broadcom/Kconfig
> +++ b/drivers/net/ethernet/broadcom/Kconfig
[ ... ]
> diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
> index 482a31e7b72b..7410034d9bdc 100644
> --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
> +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
[ ... ]
> @@ -2747,6 +2758,31 @@ static void bcmgenet_init_tx_ring(struct bcmgenet_priv *priv,
> netif_napi_add_tx(priv->dev, &ring->napi, bcmgenet_tx_poll);
> }
>
> +static int bcmgenet_rx_ring_create_pool(struct bcmgenet_priv *priv,
> + struct bcmgenet_rx_ring *ring)
> +{
> + struct page_pool_params pp_params = {
> + .order = 0,
> + .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
> + .pool_size = ring->size,
> + .nid = NUMA_NO_NODE,
> + .dev = &priv->pdev->dev,
> + .dma_dir = DMA_FROM_DEVICE,
> + .offset = GENET_XDP_HEADROOM,
> + .max_len = RX_BUF_LENGTH,
> + };
> +
> + ring->page_pool = page_pool_create(&pp_params);
> + if (IS_ERR(ring->page_pool)) {
> + int err = PTR_ERR(ring->page_pool);
> +
> + ring->page_pool = NULL;
> + return err;
> + }
> +
> + return 0;
> +}
> +
> /* Initialize a RDMA ring */
> static int bcmgenet_init_rx_ring(struct bcmgenet_priv *priv,
> unsigned int index, unsigned int size,
> @@ -2765,10 +2801,17 @@ static int bcmgenet_init_rx_ring(struct bcmgenet_priv *priv,
> ring->cb_ptr = start_ptr;
> ring->end_ptr = end_ptr - 1;
>
> - ret = bcmgenet_alloc_rx_buffers(priv, ring);
> + ret = bcmgenet_rx_ring_create_pool(priv, ring);
> if (ret)
> return ret;
>
> + ret = bcmgenet_alloc_rx_buffers(priv, ring);
> + if (ret) {
> + page_pool_destroy(ring->page_pool);
> + ring->page_pool = NULL;
> + return ret;
> + }
> +
Does this leak pages if bcmgenet_alloc_rx_buffers() fails partway through?
If bcmgenet_alloc_rx_buffers() successfully allocates pages for the first
k iterations but page_pool_alloc_pages() then fails on iteration k+1, the
pages already stored in cb->rx_page by those first k iterations are never
returned to the page pool.
When the error path calls page_pool_destroy(), the pool detects outstanding
pages and defers cleanup, triggering periodic 'stalled pool shutdown' kernel
warnings from page_pool_release_retry().
Then bcmgenet_init_dma's error path calls bcmgenet_free_rx_buffers(), which
skips this ring entirely because ring->page_pool is NULL:
	bcmgenet_free_rx_buffers() {
		for (q = 0; q <= priv->hw_params->rx_queues; q++) {
			ring = &priv->rx_rings[q];
			if (!ring->page_pool)
				continue;
			...
		}
	}
Should the error path iterate through the ring's cbs and call
bcmgenet_free_rx_cb() for each one before destroying the page pool?
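One possible shape for that, sketched loosely against the helpers this
patch touches (the per-ring cb iteration is written as pseudocode, since
the surrounding context and the exact bcmgenet_free_rx_cb() arguments are
elided here):

	ret = bcmgenet_alloc_rx_buffers(priv, ring);
	if (ret) {
		/* Return any buffers attached before the failure, so
		 * page_pool_destroy() sees no outstanding pages. */
		for each cb in this ring's range of rx_cbs:
			bcmgenet_free_rx_cb(..., cb);
		page_pool_destroy(ring->page_pool);
		ring->page_pool = NULL;
		return ret;
	}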
The old code didn't have this issue because bcmgenet_free_rx_buffers()
iterated all rx_cbs globally without checking for a per-ring page_pool.
[ ... ]