public inbox for netdev@vger.kernel.org
From: Jakub Kicinski <kuba@kernel.org>
To: pvalerio@redhat.com
Cc: Jakub Kicinski <kuba@kernel.org>,
	netdev@vger.kernel.org, nicolas.ferre@microchip.com,
	claudiu.beznea@tuxon.dev, andrew+netdev@lunn.ch,
	davem@davemloft.net, edumazet@google.com, pabeni@redhat.com,
	lorenzo@kernel.org, theo.lebrun@bootlin.com, nb@tipi-net.de
Subject: Re: [PATCH net-next v6 3/7] net: macb: Add page pool support handle multi-descriptor frame rx
Date: Sun, 29 Mar 2026 12:47:44 -0700
Message-ID: <20260329194744.2781381-1-kuba@kernel.org>
In-Reply-To: <20260323221047.2749577-4-pvalerio@redhat.com>

This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
---
Subject: net: macb: Add page pool support handle multi-descriptor frame rx

This patch adds page pool allocator and skb recycling support for the data
buffers in the Cadence MACB/GEM Ethernet driver. It also adds support for
receiving network frames that span multiple DMA descriptors.

> diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
> --- a/drivers/net/ethernet/cadence/macb_main.c
> +++ b/drivers/net/ethernet/cadence/macb_main.c
[ ... ]
> +static unsigned int gem_total_rx_buffer_size(struct macb *bp)
> +{
> +	return SKB_HEAD_ALIGN(bp->rx_buffer_size + NET_SKB_PAD);
> +}

When calculating the size for napi_build_skb(), does this code account
for the skb_shared_info overhead?

napi_build_skb() places the shared info at the tail of the provided fragment,
at offset frag_size - SKB_DATA_ALIGN(sizeof(struct skb_shared_info)). Without
adding SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) to the total size, the
shared info overlaps the DMA-writable buffer space. Incoming packet data could
then overwrite the shared info, and large frames could trigger an
skb_over_panic() on skb_put().

[ ... ]
> @@ -1500,25 +1543,26 @@ static int gem_rx_refill(struct macb_queue *queue)
>  		desc = macb_rx_desc(queue, entry);
>  
>  		if (!queue->rx_buff[entry]) {
> -			/* allocate sk_buff for this free entry in ring */
> -			skb = netdev_alloc_skb(bp->dev, bp->rx_buffer_size);
> -			if (unlikely(!skb)) {
> -				netdev_err(bp->dev,
> -					   "Unable to allocate sk_buff\n");
> +			gfp_alloc = napi ? GFP_ATOMIC : GFP_KERNEL;
> +			page = page_pool_alloc_frag(queue->page_pool, &offset,
> +						    gem_total_rx_buffer_size(bp),
> +						    gfp_alloc | __GFP_NOWARN);

Can gem_rx_refill() be called concurrently by both the NAPI poll loop and
the error recovery workqueue?

The page pool allocator uses a lockless fast-path that requires execution
from a single context. Looking at the driver, macb_hresp_error_task() is a
workqueue handler that calls gem_init_rings(), which in turn calls
gem_rx_refill().

If the workqueue path calls gem_rx_refill() without first stopping NAPI
(via napi_disable()), does this risk concurrent access and memory corruption
in the page pool ring?

[ ... ]
> @@ -2600,12 +2700,22 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  	return ret;
>  }
>  
> -static void macb_init_rx_buffer_size(struct macb *bp, size_t size)
> +static void macb_init_rx_buffer_size(struct macb *bp, unsigned int mtu)
>  {
> +	unsigned int overhead;
> +	size_t size;
> +
>  	if (!macb_is_gem(bp)) {
>  		bp->rx_buffer_size = MACB_RX_BUFFER_SIZE;
>  	} else {
> -		bp->rx_buffer_size = MIN(size, RX_BUFFER_MAX);
> +		size = mtu + ETH_HLEN + ETH_FCS_LEN;
> +		bp->rx_buffer_size = SKB_DATA_ALIGN(size + bp->rx_ip_align);
> +		if (gem_total_rx_buffer_size(bp) > PAGE_SIZE) {
> +			overhead = bp->rx_headroom +
> +				SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> +			bp->rx_buffer_size = rounddown(PAGE_SIZE - overhead,
> +						       RX_BUFFER_MULTIPLE);
> +		}
>  
>  		if (bp->rx_buffer_size % RX_BUFFER_MULTIPLE) {

Could this calculation result in a page pool allocation request that
exceeds PAGE_SIZE?

Because the check for gem_total_rx_buffer_size(bp) > PAGE_SIZE happens
before bp->rx_buffer_size is rounded up to RX_BUFFER_MULTIPLE (which occurs
just below this block), certain MTU values can pass the initial check while
the rounded-up buffer size pushes the total past PAGE_SIZE.

Since the page pool is initialized with order = 0, requests larger than a
page will unconditionally fail.


Thread overview: 18+ messages
2026-03-23 22:10 [PATCH net-next v6 0/7] net: macb: Add XDP support and page pool integration Paolo Valerio
2026-03-23 22:10 ` [PATCH net-next v6 1/7] net: macb: move Rx buffers alloc from link up to open Paolo Valerio
2026-03-29 19:47   ` Jakub Kicinski
2026-03-23 22:10 ` [PATCH net-next v6 2/7] net: macb: rename rx_skbuff into rx_buff Paolo Valerio
2026-03-29 19:47   ` Jakub Kicinski
2026-03-23 22:10 ` [PATCH net-next v6 3/7] net: macb: Add page pool support handle multi-descriptor frame rx Paolo Valerio
2026-03-25 17:45   ` Simon Horman
2026-03-29 19:47   ` Jakub Kicinski [this message]
2026-03-23 22:10 ` [PATCH net-next v6 4/7] net: macb: make macb_tx_skb generic Paolo Valerio
2026-03-29 19:47   ` Jakub Kicinski
2026-03-23 22:10 ` [PATCH net-next v6 5/7] net: macb: generalize tx buffer handling Paolo Valerio
2026-03-29 19:47   ` Jakub Kicinski
2026-03-23 22:10 ` [PATCH net-next v6 6/7] net: macb: add XDP support for gem Paolo Valerio
2026-03-24  4:57   ` Mohsin Bashir
2026-03-29 19:47   ` Jakub Kicinski
2026-03-29 19:50   ` Jakub Kicinski
2026-03-23 22:10 ` [PATCH net-next v6 7/7] net: macb: introduce ndo_xdp_xmit support Paolo Valerio
2026-03-29 19:47   ` Jakub Kicinski
