BPF List
From: Nicolai Buchwitz <nb@tipi-net.de>
To: sashiko@lists.linux.dev
Cc: bpf@vger.kernel.org
Subject: Re: [PATCH net-next v8 4/7] net: bcmgenet: add XDP_TX support
Date: Thu, 30 Apr 2026 00:28:27 +0200	[thread overview]
Message-ID: <362a57c57af4eecd5214ea49070811f8@tipi-net.de> (raw)
In-Reply-To: <20260429211702.42E86C2BCB4@smtp.kernel.org>

On 29.4.2026 23:17, sashiko-bot@kernel.org wrote:
> Thank you for your contribution! Sashiko AI review found 4 potential
> issue(s) to consider:
> - [High] Out-of-bounds array access in `bcmgenet_tx_reclaim` during
>   interface teardown
> - [High] Incorrect DMA address calculation for local XDP_TX causes
>   information leak and packet corruption
> - [High] Prepending `status_64` block overwrites XDP metadata and
>   causes memory corruption
> - [Medium] XDP TX ring size is smaller than standard NAPI budget
> --
> 
> commit 9add891fa4e7cf62ec26683116338b5877c21662
> Author: Nicolai Buchwitz <nb@tipi-net.de>
> 
> net: bcmgenet: add XDP_TX support
> 
> Implement XDP_TX using ring 16 (DESC_INDEX), the hardware's default
> descriptor ring, dedicating it to XDP TX to keep it isolated from the
> SKB TX queues.
> 
>> diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c 
>> b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
>> index 1b60571446e1..3c3b0c44ea8a 100644
>> --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
>> +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
>> @@ -48,8 +48,10 @@
>> 
>>  #define GENET_Q0_RX_BD_CNT	\
>>  	(TOTAL_DESC - priv->hw_params->rx_queues * 
>> priv->hw_params->rx_bds_per_q)
>> +#define GENET_Q16_TX_BD_CNT	32
> 
> Is it possible to drop packets here during bursts since this TX ring
> is smaller than the standard NAPI budget?

Yes, by design. If the ring fills, frames are dropped, which is standard
XDP backpressure. Same approach as bnxt, mvneta, and other drivers.

> 
> The standard RX NAPI budget is 64, and TX completions are reclaimed
> only once before the RX loop. If the RX loop processes a full budget
> of 64 packets and they all result in an XDP_TX action, the first 32
> will fill the TX ring.
> 
> The remaining 32 packets would fail in bcmgenet_xdp_xmit_frame()
> because ring->free_bds < 1, and be dropped.
> 
> [ ... ]
>> @@ -2043,6 +2069,9 @@ static void bcmgenet_tx_reclaim_all(struct net_device *dev)
>>  	do {
>>  		bcmgenet_tx_reclaim(dev, &priv->tx_rings[i++], true);
>>  	} while (i <= priv->hw_params->tx_queues && 
>> netif_is_multiqueue(dev));
>> +
>> +	/* Also reclaim XDP TX ring */
>> +	bcmgenet_tx_reclaim(dev, &priv->xdp_tx_ring, true);
>>  }
> 
> Can this cause an out-of-bounds array access during interface teardown?
> 
> When bcmgenet_tx_reclaim_all() calls bcmgenet_tx_reclaim() with
> all == true, it unconditionally calls
> netdev_tx_reset_queue(netdev_get_tx_queue(dev, ring->index)).

bcmgenet_tx_reclaim() does not call netdev_tx_reset_queue() or
netdev_get_tx_queue(). The only netdev_get_tx_queue() in the reclaim
path (in __bcmgenet_tx_reclaim, for BQL) is guarded by
"if (ring->index != DESC_INDEX)". The netdev_tx_reset_queue() loop in
bcmgenet_fini_dma() iterates indices 0..tx_queues only.

> 
> The XDP TX ring has an index of 16 (DESC_INDEX), but the network
> device is allocated with only GENET_MAX_MQ_CNT + 1 TX queues
> (typically 5). Accessing index 16 of dev->_tx is out-of-bounds, and
> netdev_tx_reset_queue() will write to this invalid memory.
> 
> [ ... ]
>> @@ -2299,11 +2328,96 @@ static struct sk_buff *bcmgenet_xdp_build_skb(struct bcmgenet_rx_ring *ring,
>>  	return skb;
>>  }
>> 
>> +static bool bcmgenet_xdp_xmit_frame(struct bcmgenet_priv *priv,
>> +				     struct xdp_frame *xdpf, bool dma_map)
>> +{
> [ ... ]
>> +	if (dma_map) {
> [ ... ]
>> +	} else {
>> +		struct page *page = virt_to_page(xdpf->data);
>> +
>> +		/* For local XDP_TX the caller already prepended the TSB
>> +		 * into xdpf->data/len, so dma_len == xdpf->len.
>> +		 */
>> +		dma_len = xdpf->len;
>> +		mapping = page_pool_get_dma_addr(page) +
>> +			  sizeof(*xdpf) + xdpf->headroom;
> 
> Does this calculation result in an incorrect DMA address?
> 
> The calculation implicitly assumes that xdpf resides exactly at the
> beginning of the page. However, the driver configures the RX page
> pool with offset = XDP_PACKET_HEADROOM (typically 256 bytes).
> 
> By omitting the distance from the start of the page to xdpf, the
> calculated DMA address points XDP_PACKET_HEADROOM bytes before the
> actual packet data, which could lead to transmitting uninitialized
> memory.

Correct, xdpf does reside at the start of the page.
xdp_convert_buff_to_frame() stores the xdp_frame at
xdp->data_hard_start, which equals page_address(page) for our
page_pool layout. page_pool_get_dma_addr() returns the DMA address of
the page start (not offset by .offset). sizeof(*xdpf) + xdpf->headroom
equals xdpf->data - data_hard_start, pointing to the prepended TSB.

> 
>> +		dma_sync_single_for_device(kdev, mapping, dma_len,
>> +					   DMA_BIDIRECTIONAL);
>> +	}
> [ ... ]
>> +	case XDP_TX:
>> +		/* Prepend a zeroed TSB (Transmit Status Block).  The GENET
>> +		 * MAC has TBUF_64B_EN set globally, so hardware expects every
>> +		 * TX buffer to begin with a 64-byte struct status_64.  Back
>> +		 * up xdp->data into the RSB area (which is no longer needed
>> +		 * after the BPF program ran) and zero it.
>> +		 */
>> +		if (xdp->data - xdp->data_hard_start <
>> +		    sizeof(struct status_64) + sizeof(struct xdp_frame)) {
>> +			page_pool_put_full_page(ring->page_pool, rx_page,
>> +						true);
>> +			return XDP_DROP;
>> +		}
>> +		xdp->data -= sizeof(struct status_64);
>> +		xdp->data_meta -= sizeof(struct status_64);
>> +		memset(xdp->data, 0, sizeof(struct status_64));
> 
> Could this overwrite XDP metadata and cause memory corruption?

xdp_convert_buff_to_frame() validates (headroom - metasize) >=
sizeof(struct xdp_frame) and returns NULL otherwise; we drop the frame
in that case. The memset writes to [xdp->data - 64, xdp->data] before
the data shift, which was previously the RSB area, not the metadata
area. Both data and data_meta are shifted equally, preserving the
metasize gap.

> 
> The space check does not account for metasize (xdp->data -
> xdp->data_meta). If the BPF program appended metadata, the memset()
> here completely overwrites it.
> 
> Furthermore, if metasize is > 0, the shifted xdp->data_meta pointer
> could underflow and point exactly to or before xdp->data_hard_start.
> When xdp_convert_buff_to_frame() is subsequently called, it writes
> the xdp_frame structure at data_hard_start, which would overlap with
> the shifted data_meta.

Thanks
Nicolai


Thread overview: 17+ messages
2026-04-28 20:58 [PATCH net-next v8 0/7] net: bcmgenet: add XDP support Nicolai Buchwitz
2026-04-28 20:58 ` [PATCH net-next v8 2/7] net: bcmgenet: register xdp_rxq_info for each RX ring Nicolai Buchwitz
2026-04-28 20:58 ` [PATCH net-next v8 3/7] net: bcmgenet: add basic XDP support (PASS/DROP) Nicolai Buchwitz
2026-04-29 21:17   ` sashiko-bot
2026-04-29 22:24     ` Nicolai Buchwitz
2026-04-28 20:58 ` [PATCH net-next v8 4/7] net: bcmgenet: add XDP_TX support Nicolai Buchwitz
2026-04-29 21:17   ` sashiko-bot
2026-04-29 22:28     ` Nicolai Buchwitz [this message]
2026-04-28 20:58 ` [PATCH net-next v8 5/7] net: bcmgenet: add XDP_REDIRECT and ndo_xdp_xmit support Nicolai Buchwitz
2026-04-29 21:17   ` sashiko-bot
2026-04-29 22:50     ` Nicolai Buchwitz
2026-04-28 20:58 ` [PATCH net-next v8 6/7] net: bcmgenet: add XDP statistics counters Nicolai Buchwitz
2026-04-29 21:17   ` sashiko-bot
2026-04-29 22:39     ` Nicolai Buchwitz
2026-04-28 20:58 ` [PATCH net-next v8 7/7] net: bcmgenet: reject MTU changes incompatible with XDP Nicolai Buchwitz
2026-04-29 21:17   ` sashiko-bot
2026-04-29 22:35     ` Nicolai Buchwitz
