From: sashiko-bot@kernel.org
To: "Nicolai Buchwitz" <nb@tipi-net.de>
Cc: bpf@vger.kernel.org
Subject: Re: [PATCH net-next v7 4/7] net: bcmgenet: add XDP_TX support
Date: Thu, 16 Apr 2026 18:54:05 +0000 [thread overview]
Message-ID: <20260416185405.E84FEC2BCAF@smtp.kernel.org> (raw)
In-Reply-To: <20260416054743.1289191-5-nb@tipi-net.de>
Sashiko AI review found 2 potential issues:
- [Critical] Local XDP_TX mishandles XDP metadata, leading to buffer underflow, metadata destruction, and incorrect DMA mappings.
- [High] Unconditional access to `netdev_get_tx_queue(dev, ring->index)` during XDP TX reclaim causes an out-of-bounds array access.
--
commit bdb3ec6dd3fa70f6fc80f2e6b98851c833a05d8a
Author: Nicolai Buchwitz <nb@tipi-net.de>
net: bcmgenet: add XDP_TX support
This commit adds local XDP_TX and ndo_xdp_xmit support to the bcmgenet
driver. It dedicates ring 16 for XDP TX, prepends the required 64-byte
Transmit Status Block (TSB) to the frames, and piggybacks TX completion
reclamation onto the RX NAPI poll loop.
> diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
> --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
> +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
[ ... ]
> @@ -2043,6 +2069,9 @@ static void bcmgenet_tx_reclaim_all(struct net_device *dev)
> do {
> bcmgenet_tx_reclaim(dev, &priv->tx_rings[i++], true);
> } while (i <= priv->hw_params->tx_queues && netif_is_multiqueue(dev));
> +
> + /* Also reclaim XDP TX ring */
> + bcmgenet_tx_reclaim(dev, &priv->xdp_tx_ring, true);
> }
When bcmgenet_tx_reclaim() is called here with all = true, it executes
this cleanup path:
if (all) {
...
txq = netdev_get_tx_queue(ring->priv->dev, ring->index);
netif_tx_wake_queue(txq);
}
Since xdp_tx_ring.index is DESC_INDEX (16) but dev->num_tx_queues is at
most 5, and netdev_get_tx_queue() indexes the dev->_tx array without any
bounds check, doesn't this read past the end of the array when retrieving
the netdev transmit queue?
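A possible shape for the fix, sketched as a self-contained user-space model
(the helper name is hypothetical, and the constants stand in for the
driver's dev->num_tx_queues and DESC_INDEX):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in constants: dev->num_tx_queues is at most 5 on this hardware,
 * while the dedicated XDP TX ring reuses DESC_INDEX (16). */
#define NUM_TX_QUEUES 5
#define DESC_INDEX    16

/* netdev_get_tx_queue() indexes dev->_tx[] without bounds checking, so
 * the reclaim path must only wake queues that actually exist; the XDP
 * ring has no backing netdev queue and must be skipped. */
static bool ring_has_netdev_queue(unsigned int ring_index,
				  unsigned int num_tx_queues)
{
	return ring_index < num_tx_queues;
}
```

In the driver itself this could translate to guarding the
netif_tx_wake_queue() call with a check on ring->index, or giving the XDP
ring its own reclaim helper that never touches a netdev queue.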
[ ... ]
> @@ -2299,11 +2328,96 @@ static struct sk_buff *bcmgenet_xdp_build_skb(struct bcmgenet_rx_ring *ring,
> return skb;
> }
>
> +static bool bcmgenet_xdp_xmit_frame(struct bcmgenet_priv *priv,
> + struct xdp_frame *xdpf, bool dma_map)
> +{
[ ... ]
> + if (dma_map) {
[ ... ]
> + } else {
> + struct page *page = virt_to_page(xdpf->data);
> +
> + /* For local XDP_TX the caller already prepended the TSB
> + * into xdpf->data/len, so dma_len == xdpf->len.
> + */
> + dma_len = xdpf->len;
> + mapping = page_pool_get_dma_addr(page) +
> + sizeof(*xdpf) + xdpf->headroom;
> + dma_sync_single_for_device(kdev, mapping, dma_len,
> + DMA_BIDIRECTIONAL);
> + }
Does this correctly calculate the mapping offset?
sizeof(*xdpf) + xdpf->headroom covers only the span from data_hard_start
to the start of the frame contents, not from the start of the page. The
offset from the page start to data_hard_start (XDP_PACKET_HEADROOM, 256
bytes) appears to be omitted here, potentially causing the hardware to
read from an uninitialized region hundreds of bytes before the actual
packet.
Furthermore, if metadata is present, it looks like this maps data_meta
instead of the TSB at data.
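One way to make the offset immune to both problems is to derive it from
the frame pointer's actual position in the page. A minimal sketch, with
page_base standing in for page_address(page) and frame_data for
xdpf->data (which already includes the prepended TSB):

```c
#include <assert.h>
#include <stdint.h>

/* From include/net/xdp.h; shown here only to make the example concrete. */
#define XDP_PACKET_HEADROOM 256

/* Sketch: compute the DMA offset as the frame's real distance from the
 * start of the page, rather than reconstructing it from xdp_frame
 * bookkeeping fields whose base is data_hard_start, not the page. */
static uintptr_t xdp_tx_dma_offset(uintptr_t page_base, uintptr_t frame_data)
{
	return frame_data - page_base;
}
```

In the driver this would read something like
mapping = page_pool_get_dma_addr(page) + (xdpf->data - page_address(page)),
which starts the DMA region at the TSB regardless of whether metadata
precedes it.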
[ ... ]
> + case XDP_TX:
> + /* Prepend a zeroed TSB (Transmit Status Block). The GENET
> + * MAC has TBUF_64B_EN set globally, so hardware expects every
> + * TX buffer to begin with a 64-byte struct status_64. Back
> + * up xdp->data into the RSB area (which is no longer needed
> + * after the BPF program ran) and zero it.
> + */
> + if (xdp->data - xdp->data_hard_start <
> + sizeof(struct status_64) + sizeof(struct xdp_frame)) {
Does this correctly account for XDP metadata?
If metadata is present, xdp->data_meta will be lower than xdp->data. By
using xdp->data in this check instead of xdp->data_meta, could subtracting
sizeof(struct status_64) push the metadata boundary below
data_hard_start + sizeof(struct xdp_frame) and cause an out-of-bounds
write when xdp_convert_buff_to_frame() is called?
> + page_pool_put_full_page(ring->page_pool, rx_page,
> + true);
> + return XDP_DROP;
> + }
> + xdp->data -= sizeof(struct status_64);
> + xdp->data_meta -= sizeof(struct status_64);
> + memset(xdp->data, 0, sizeof(struct status_64));
Does this memset() overwrite the packet metadata?
Since xdp->data was just decremented by 64 bytes, the region being cleared
overlaps the space the BPF metadata occupied before the shift. The
data_meta pointer was moved down, but the metadata bytes themselves were
never copied, so they are zeroed in place.
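Both problems could be addressed together by checking the headroom against
data_meta and moving the metadata out of the way before zeroing the TSB.
A self-contained sketch with mocked pointers (the struct and sizes are
illustrative; 40 stands in for sizeof(struct xdp_frame)):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define TSB_LEN  64	/* sizeof(struct status_64) */
#define XDPF_LEN 40	/* stand-in for sizeof(struct xdp_frame) */

/* Minimal mock of the xdp_buff pointers involved. */
struct xdp_mock {
	uint8_t *data_hard_start;
	uint8_t *data_meta;
	uint8_t *data;
};

static int prepend_tsb(struct xdp_mock *xdp)
{
	size_t meta_len = xdp->data - xdp->data_meta;

	/* The headroom check must use data_meta, not data, or shifting
	 * by TSB_LEN can push the metadata below the space reserved for
	 * the xdp_frame struct at data_hard_start. */
	if ((size_t)(xdp->data_meta - xdp->data_hard_start) <
	    TSB_LEN + XDPF_LEN)
		return -1;

	/* Move the metadata bytes down before shifting the pointers,
	 * then zero the TSB that now sits just in front of the packet.
	 * The memset no longer touches the (relocated) metadata. */
	memmove(xdp->data_meta - TSB_LEN, xdp->data_meta, meta_len);
	xdp->data_meta -= TSB_LEN;
	xdp->data -= TSB_LEN;
	memset(xdp->data, 0, TSB_LEN);
	return 0;
}
```

The same memmove()-before-memset() ordering should carry over to the
driver's XDP_TX path, with the real struct sizes in place of the stand-in
constants.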
> + xdpf = xdp_convert_buff_to_frame(xdp);
> + if (unlikely(!xdpf)) {
> + page_pool_put_full_page(ring->page_pool, rx_page,
> + true);
> + return XDP_DROP;
> + }
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260416054743.1289191-1-nb@tipi-net.de?part=4
Thread overview:
2026-04-16 5:47 [PATCH net-next v7 0/7] net: bcmgenet: add XDP support Nicolai Buchwitz
2026-04-16 5:47 ` [PATCH net-next v7 2/7] net: bcmgenet: register xdp_rxq_info for each RX ring Nicolai Buchwitz
2026-04-16 5:47 ` [PATCH net-next v7 3/7] net: bcmgenet: add basic XDP support (PASS/DROP) Nicolai Buchwitz
2026-04-16 5:47 ` [PATCH net-next v7 4/7] net: bcmgenet: add XDP_TX support Nicolai Buchwitz
2026-04-16 18:54 ` sashiko-bot [this message]
2026-04-16 5:47 ` [PATCH net-next v7 5/7] net: bcmgenet: add XDP_REDIRECT and ndo_xdp_xmit support Nicolai Buchwitz
2026-04-16 19:46 ` sashiko-bot
2026-04-16 5:47 ` [PATCH net-next v7 6/7] net: bcmgenet: add XDP statistics counters Nicolai Buchwitz
2026-04-16 20:08 ` sashiko-bot
2026-04-16 5:47 ` [PATCH net-next v7 7/7] net: bcmgenet: reject MTU changes incompatible with XDP Nicolai Buchwitz
2026-04-16 20:47 ` sashiko-bot
2026-04-16 8:06 ` [PATCH net-next v7 0/7] net: bcmgenet: add XDP support Paolo Abeni