From: Nicolai Buchwitz <nb@tipi-net.de>
To: sashiko@lists.linux.dev
Cc: bpf@vger.kernel.org
Subject: Re: [PATCH net-next v8 6/7] net: bcmgenet: add XDP statistics counters
Date: Thu, 30 Apr 2026 00:39:21 +0200 [thread overview]
Message-ID: <9846f1275ea211e3d03c6b6e5a6fdbce@tipi-net.de> (raw)
In-Reply-To: <20260429211704.710D9C19425@smtp.kernel.org>
On 29.4.2026 23:17, sashiko-bot@kernel.org wrote:
> Thank you for your contribution! Sashiko AI review found 3 potential
> issue(s) to consider:
> - [High] XDP_TX headroom check ignores metadata size, and TSB insertion
> corrupts metadata state.
> - [Medium] Non-atomic updates to global XDP statistics from concurrent
> NAPI contexts cause data races and cache-line bouncing.
> - [Medium] Missing counter increment when `xdp_convert_buff_to_frame()`
> fails in the `XDP_TX` path.
> --
>
> commit 9febde2d94a02157dc480d909cc304c59196d72f
> Author: Nicolai Buchwitz <nb@tipi-net.de>
>
> net: bcmgenet: add XDP statistics counters
>
> This commit exposes per-action XDP counters via ethtool -S. It uses
> the soft MIB infrastructure and increments the counters in
> bcmgenet_run_xdp().
>
>> diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
>> index 9dd258567824..02ad2f410d6c 100644
>> --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
>> +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
>
> [ ... ]
>
>> @@ -2428,6 +2435,7 @@ static unsigned int bcmgenet_run_xdp(struct bcmgenet_rx_ring *ring,
>>
>> 	switch (act) {
>> 	case XDP_PASS:
>> +		priv->mib.xdp_pass++;
>
> Is it safe to update global priv->mib statistics here?
The XDP counters follow the existing soft MIB convention in this
driver (plain u32, unsynchronized increments). AFAIK a larger ring
refactoring is planned that should address all MIB counters
consistently.
>
> Since bcmgenet_run_xdp() can execute concurrently in multiple
> independent NAPI polling contexts (one per RX ring), non-atomic
> increments to shared global variables can cause data races and lost
> increments.
>
> Continually modifying a shared cacheline on the fast path for every
> packet could also force cacheline bouncing across CPUs and degrade
> XDP performance. Should these statistics be maintained per-queue or
> using per-CPU variables instead?
>
>> 		return XDP_PASS;
>> 	case XDP_TX:
>> 		/* Prepend a zeroed TSB (Transmit Status Block). The GENET
>> 		 * MAC has TBUF_64B_EN set globally, so hardware expects every
>> 		 * TX buffer to begin with a 64-byte struct status_64. Back
>> 		 * up xdp->data into the RSB area (which is no longer needed
>> 		 * after the BPF program ran) and zero it.
>> 		 */
>> 		if (xdp->data - xdp->data_hard_start <
>> 		    sizeof(struct status_64) + sizeof(struct xdp_frame)) {
>> 			page_pool_put_full_page(ring->page_pool, rx_page, true);
>> +			priv->mib.xdp_tx_err++;
>> 			return XDP_DROP;
>> 		}
>> 		xdp->data -= sizeof(struct status_64);
>> 		xdp->data_meta -= sizeof(struct status_64);
>> 		memset(xdp->data, 0, sizeof(struct status_64));
>
> While not introduced by this patch, does this headroom check and TSB
> insertion handle metadata correctly?
Already addressed under patches 4 and 5: xdp_convert_buff_to_frame()
validates headroom against metasize. The memset writes into the former
RSB area, above the metadata; both data and data_meta shift equally so
the metadata position is preserved.
>
> The headroom check doesn't account for metasize, which could cause
> xdp_convert_buff_to_frame() to fail its own bounds check later.
>
> Additionally, if the XDP program added metadata, it would be located
> at [old_data - metasize, old_data]. The memset writes zeroes to
> [new_data, new_data + 64], which is [old_data - 64, old_data],
> potentially overwriting the original metadata.
>
> Since xdp->data_meta is shifted back by 64 bytes, could it now point
> to garbage memory and corrupt the metadata state?
>
>>
>> 		xdpf = xdp_convert_buff_to_frame(xdp);
>> 		if (unlikely(!xdpf)) {
>> 			page_pool_put_full_page(ring->page_pool, rx_page, true);
>> 			return XDP_DROP;
>> 		}
>
> Are we missing a counter increment on this failure path?
Fair point, though xdp_convert_buff_to_frame() failing here
(insufficient headroom for the xdp_frame struct) is an extremely rare
path. I'd prefer to address it in a follow-up.
>
> The preceding headroom check failure explicitly increments
> xdp_tx_err before dropping, but this path returns directly. Since it
> doesn't fall through to the XDP_DROP switch case, the dropped packet
> appears to be unaccounted for in the new statistics.
Thanks,
Nicolai
2026-04-28 20:58 [PATCH net-next v8 0/7] net: bcmgenet: add XDP support Nicolai Buchwitz
2026-04-28 20:58 ` [PATCH net-next v8 2/7] net: bcmgenet: register xdp_rxq_info for each RX ring Nicolai Buchwitz
2026-04-28 20:58 ` [PATCH net-next v8 3/7] net: bcmgenet: add basic XDP support (PASS/DROP) Nicolai Buchwitz
2026-04-29 21:17 ` sashiko-bot
2026-04-29 22:24 ` Nicolai Buchwitz
2026-04-28 20:58 ` [PATCH net-next v8 4/7] net: bcmgenet: add XDP_TX support Nicolai Buchwitz
2026-04-29 21:17 ` sashiko-bot
2026-04-29 22:28 ` Nicolai Buchwitz
2026-04-28 20:58 ` [PATCH net-next v8 5/7] net: bcmgenet: add XDP_REDIRECT and ndo_xdp_xmit support Nicolai Buchwitz
2026-04-29 21:17 ` sashiko-bot
2026-04-29 22:50 ` Nicolai Buchwitz
2026-04-28 20:58 ` [PATCH net-next v8 6/7] net: bcmgenet: add XDP statistics counters Nicolai Buchwitz
2026-04-29 21:17 ` sashiko-bot
2026-04-29 22:39 ` Nicolai Buchwitz [this message]
2026-04-28 20:58 ` [PATCH net-next v8 7/7] net: bcmgenet: reject MTU changes incompatible with XDP Nicolai Buchwitz
2026-04-29 21:17 ` sashiko-bot
2026-04-29 22:35 ` Nicolai Buchwitz