Re: [PATCH net-next v8 6/7] net: bcmgenet: add XDP statistics counters
From: Nicolai Buchwitz @ 2026-04-29 22:39 UTC
To: sashiko; +Cc: bpf
On 29.4.2026 23:17, sashiko-bot@kernel.org wrote:
> Thank you for your contribution! Sashiko AI review found 3 potential
> issue(s) to consider:
> - [High] XDP_TX headroom check ignores metadata size, and TSB insertion
> corrupts metadata state.
> - [Medium] Non-atomic updates to global XDP statistics from concurrent
> NAPI contexts cause data races and cache-line bouncing.
> - [Medium] Missing counter increment when `xdp_convert_buff_to_frame()`
> fails in the `XDP_TX` path.
> --
>
> commit 9febde2d94a02157dc480d909cc304c59196d72f
> Author: Nicolai Buchwitz <nb@tipi-net.de>
>
> net: bcmgenet: add XDP statistics counters
>
> This commit exposes per-action XDP counters via ethtool -S. It uses
> the soft MIB infrastructure and increments the counters in
> bcmgenet_run_xdp().
>
>> diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
>> index 9dd258567824..02ad2f410d6c 100644
>> --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
>> +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
>
> [ ... ]
>
>> @@ -2428,6 +2435,7 @@ static unsigned int bcmgenet_run_xdp(struct bcmgenet_rx_ring *ring,
>>
>>          switch (act) {
>>          case XDP_PASS:
>> +                priv->mib.xdp_pass++;
>
> Is it safe to update global priv->mib statistics here?
The XDP counters follow the existing soft MIB convention in this
driver (plain u32, unsynchronized). AFAIK a larger ring refactoring is
planned that should address all MIB counters consistently.
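For the record, the direction I'd expect that refactoring to take is
per-ring counters synchronized with u64_stats_sync and summed at
ethtool -S time. Rough sketch only -- struct and function names below
are mine, not from this series:

    #include <linux/u64_stats_sync.h>

    /* Hypothetical per-ring XDP counters; one instance per RX ring,
     * so the hot path never touches a cacheline shared across CPUs.
     */
    struct bcmgenet_xdp_stats {
            u64 xdp_pass;
            u64 xdp_tx;
            u64 xdp_tx_err;
            struct u64_stats_sync syncp;
    };

    /* Hot path: called only from the ring's own NAPI context, so
     * there is a single writer per syncp.
     */
    static void bcmgenet_xdp_stat_inc(struct bcmgenet_xdp_stats *s, u64 *ctr)
    {
            u64_stats_update_begin(&s->syncp);
            (*ctr)++;
            u64_stats_update_end(&s->syncp);
    }

    /* ethtool -S path: tearing-free snapshot of one counter; the
     * caller sums the result across all rings.
     */
    static u64 bcmgenet_xdp_stat_read(struct bcmgenet_xdp_stats *s,
                                      const u64 *ctr)
    {
            unsigned int start;
            u64 val;

            do {
                    start = u64_stats_fetch_begin(&s->syncp);
                    val = *ctr;
            } while (u64_stats_fetch_retry(&s->syncp, start));

            return val;
    }

That would address both the lost increments and the cacheline
bouncing, but it touches the whole soft MIB machinery -- hence my
preference to defer it to the refactoring.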
>
> Since bcmgenet_run_xdp() can execute concurrently in multiple independent
> NAPI polling contexts (one per RX ring), non-atomic increments to shared
> global variables can cause data races and lost increments.
>
> Continually modifying a shared cacheline on the fast path for every packet
> could also force cacheline bouncing across CPUs and degrade XDP
> performance. Should these statistics be maintained per-queue or using
> per-CPU variables instead?
>
>>                  return XDP_PASS;
>>          case XDP_TX:
>>                  /* Prepend a zeroed TSB (Transmit Status Block). The GENET
>>                   * MAC has TBUF_64B_EN set globally, so hardware expects every
>>                   * TX buffer to begin with a 64-byte struct status_64. Back
>>                   * up xdp->data into the RSB area (which is no longer needed
>>                   * after the BPF program ran) and zero it.
>>                   */
>>                  if (xdp->data - xdp->data_hard_start <
>>                      sizeof(struct status_64) + sizeof(struct xdp_frame)) {
>>                          page_pool_put_full_page(ring->page_pool, rx_page, true);
>> +                        priv->mib.xdp_tx_err++;
>>                          return XDP_DROP;
>>                  }
>>                  xdp->data -= sizeof(struct status_64);
>>                  xdp->data_meta -= sizeof(struct status_64);
>>                  memset(xdp->data, 0, sizeof(struct status_64));
>
> While not introduced by this patch, do this headroom check and TSB
> insertion handle metadata correctly?
Already addressed in patches 4 and 5: xdp_convert_buff_to_frame()
validates the headroom against metasize. The memset writes into the
former RSB area, above the metadata, and since data and data_meta
shift by the same amount, the metadata position is preserved.
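To spell out the arithmetic: the conversion helper carves the
xdp_frame out of the headroom below data_meta, so metadata counts
against the available space. Roughly (paraphrased from the core
helper, not the driver code):

    unsigned int headroom = xdp->data - xdp->data_hard_start;
    unsigned int metasize = xdp->data - xdp->data_meta;

    /* struct xdp_frame lives at data_hard_start, metadata sits
     * directly below data; both must fit into the headroom.
     */
    if (headroom - metasize < sizeof(struct xdp_frame))
            return NULL;    /* the unlikely(!xdpf) path below */

If we wanted the driver-side check to catch this early as well, it
would just gain the metasize term (sketch, not part of the series):

    if (xdp->data - xdp->data_hard_start <
        sizeof(struct status_64) + sizeof(struct xdp_frame) + metasize)
            /* drop */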
>
> The headroom check doesn't account for metasize, which could cause
> xdp_convert_buff_to_frame() to fail its own bounds check later.
>
> Additionally, if the XDP program added metadata, it would be located at
> [old_data - metasize, old_data]. The memset writes zeroes to
> [new_data, new_data + 64], which is [old_data - 64, old_data], potentially
> overwriting the original metadata.
>
> Since xdp->data_meta is shifted back by 64 bytes, could it now point to
> garbage memory and corrupt the metadata state?
>
>>
>>                  xdpf = xdp_convert_buff_to_frame(xdp);
>>                  if (unlikely(!xdpf)) {
>>                          page_pool_put_full_page(ring->page_pool, rx_page, true);
>>                          return XDP_DROP;
>>                  }
>
> Are we missing a counter increment on this failure path?
Fair point, but an xdp_convert_buff_to_frame() failure is an extremely
rare path (insufficient headroom for the xdp_frame struct). It can be
addressed in a follow-up.
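Concretely, the follow-up would just mirror the headroom-check path
above (sketch):

    xdpf = xdp_convert_buff_to_frame(xdp);
    if (unlikely(!xdpf)) {
            page_pool_put_full_page(ring->page_pool, rx_page, true);
            priv->mib.xdp_tx_err++;     /* account the drop, as above */
            return XDP_DROP;
    }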
>
> The preceding headroom check failure explicitly increments xdp_tx_err
> before dropping, but this path returns directly. Since it doesn't fall
> through to the XDP_DROP switch case, the dropped packet appears to be
> unaccounted for in the new statistics.
Thanks,
Nicolai