* [PATCH net] gve: Fix XDP TX completion handling when counters overflow
@ 2024-07-16 17:10 Praveen Kaligineedi
2024-07-17 7:41 ` Simon Horman
2024-07-18 5:30 ` patchwork-bot+netdevbpf
0 siblings, 2 replies; 3+ messages in thread
From: Praveen Kaligineedi @ 2024-07-16 17:10 UTC (permalink / raw)
To: netdev
Cc: stable, hramamurthy, jfraker, jeroendb, shailend, davem, edumazet,
kuba, pabeni, ast, daniel, hawk, john.fastabend, ziweixiao,
willemb, Joshua Washington, Praveen Kaligineedi
From: Joshua Washington <joshwash@google.com>
In gve_clean_xdp_done, the driver processes the TX completions based on
a 32-bit NIC counter and a 32-bit completion counter stored in the tx
queue.
Fix the for loop so that the counter wraparound is handled correctly.
Fixes: 75eaae158b1b ("gve: Add XDP DROP and TX support for GQI-QPL format")
Signed-off-by: Joshua Washington <joshwash@google.com>
Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
---
drivers/net/ethernet/google/gve/gve_tx.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
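For reviewers, a note on the failure mode, with a minimal userspace sketch
(hypothetical counter values, standalone C, not driver code): with the old
scheme, clean_end = tx->done + to_do can wrap past 2^32 and end up below
tx->done, so the condition tx->done < clean_end is false from the start and
no completions are cleaned. Counting iterations instead lets the 32-bit
counter wrap harmlessly.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical values chosen so that done + to_do wraps past 2^32. */
	uint32_t done = 0xfffffff0u;
	uint32_t to_do = 0x20;

	/* Old scheme: clean_end wraps to a small value below done. */
	uint32_t clean_end = done + to_do;	/* 0x10 after wraparound */
	unsigned int cleaned_old = 0;

	for (uint32_t d = done; d < clean_end; d++)
		cleaned_old++;	/* never runs: done is already >= clean_end */

	/* Fixed scheme: count iterations; the 32-bit counter wraps freely. */
	unsigned int cleaned_new = 0;

	for (uint32_t i = 0; i < to_do; i++) {
		done++;		/* wraps from 0xffffffff to 0 harmlessly */
		cleaned_new++;
	}

	/* Prints "old cleaned 0, new cleaned 32". */
	printf("old cleaned %u, new cleaned %u\n", cleaned_old, cleaned_new);
	return 0;
}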
diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index 24a64ec1073e..e7fb7d6d283d 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -158,15 +158,16 @@ static int gve_clean_xdp_done(struct gve_priv *priv, struct gve_tx_ring *tx,
u32 to_do)
{
struct gve_tx_buffer_state *info;
- u32 clean_end = tx->done + to_do;
u64 pkts = 0, bytes = 0;
size_t space_freed = 0;
u32 xsk_complete = 0;
u32 idx;
+ int i;
- for (; tx->done < clean_end; tx->done++) {
+ for (i = 0; i < to_do; i++) {
idx = tx->done & tx->mask;
info = &tx->info[idx];
+ tx->done++;
if (unlikely(!info->xdp.size))
continue;
--
2.45.2.803.g4e1b14247a-goog
* Re: [PATCH net] gve: Fix XDP TX completion handling when counters overflow
2024-07-16 17:10 [PATCH net] gve: Fix XDP TX completion handling when counters overflow Praveen Kaligineedi
@ 2024-07-17 7:41 ` Simon Horman
2024-07-18 5:30 ` patchwork-bot+netdevbpf
1 sibling, 0 replies; 3+ messages in thread
From: Simon Horman @ 2024-07-17 7:41 UTC (permalink / raw)
To: Praveen Kaligineedi
Cc: netdev, stable, hramamurthy, jfraker, jeroendb, shailend, davem,
edumazet, kuba, pabeni, ast, daniel, hawk, john.fastabend,
ziweixiao, willemb, Joshua Washington
On Tue, Jul 16, 2024 at 10:10:41AM -0700, Praveen Kaligineedi wrote:
> From: Joshua Washington <joshwash@google.com>
>
> In gve_clean_xdp_done, the driver processes the TX completions based on
> a 32-bit NIC counter and a 32-bit completion counter stored in the tx
> queue.
>
> Fix the for loop so that the counter wraparound is handled correctly.
>
> Fixes: 75eaae158b1b ("gve: Add XDP DROP and TX support for GQI-QPL format")
> Signed-off-by: Joshua Washington <joshwash@google.com>
> Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Simon Horman <horms@kernel.org>
> ---
> drivers/net/ethernet/google/gve/gve_tx.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
> index 24a64ec1073e..e7fb7d6d283d 100644
> --- a/drivers/net/ethernet/google/gve/gve_tx.c
> +++ b/drivers/net/ethernet/google/gve/gve_tx.c
> @@ -158,15 +158,16 @@ static int gve_clean_xdp_done(struct gve_priv *priv, struct gve_tx_ring *tx,
> u32 to_do)
> {
> struct gve_tx_buffer_state *info;
> - u32 clean_end = tx->done + to_do;
> u64 pkts = 0, bytes = 0;
> size_t space_freed = 0;
> u32 xsk_complete = 0;
> u32 idx;
> + int i;
>
> - for (; tx->done < clean_end; tx->done++) {
> + for (i = 0; i < to_do; i++) {
I was slightly concerned that, as it is a u32, the value of to_do could
exceed the maximum value of i, which is an int.
But I see that in practice the value of to_do is bounded by an int, budget,
at the call site, gve_xdp_poll(). So I think we are OK.
Perhaps, as a clean-up, the type of the to_do parameter of
gve_clean_xdp_done(), and the local variable in gve_xdp_poll(), could be
updated from u32 to int. But, OTOH, perhaps that doesn't get us anywhere.
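A small standalone sketch of the conversion behaviour behind that concern
(the to_do value here is hypothetical, not taken from the driver): when a
signed int is compared against a u32, the int operand is converted to
unsigned, so the loop condition would keep holding even at INT_MAX and the
next increment of i would be signed-integer overflow. As noted above, to_do
is bounded by the NAPI budget in practice, so this cannot happen here.

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical: a to_do larger than any int counter could reach. */
	uint32_t to_do = (uint32_t)INT_MAX + 1u;
	int i = INT_MAX;

	/*
	 * Usual arithmetic conversions turn "i < to_do" into an unsigned
	 * comparison, so the condition is still true even though i is at
	 * its maximum; one more i++ would be signed-integer overflow.
	 */
	printf("i < to_do at INT_MAX: %s\n", i < to_do ? "true" : "false");
	return 0;
}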
> idx = tx->done & tx->mask;
> info = &tx->info[idx];
> + tx->done++;
>
> if (unlikely(!info->xdp.size))
> continue;
> --
> 2.45.2.803.g4e1b14247a-goog
>
>
* Re: [PATCH net] gve: Fix XDP TX completion handling when counters overflow
2024-07-16 17:10 [PATCH net] gve: Fix XDP TX completion handling when counters overflow Praveen Kaligineedi
2024-07-17 7:41 ` Simon Horman
@ 2024-07-18 5:30 ` patchwork-bot+netdevbpf
1 sibling, 0 replies; 3+ messages in thread
From: patchwork-bot+netdevbpf @ 2024-07-18 5:30 UTC (permalink / raw)
To: Praveen Kaligineedi
Cc: netdev, stable, hramamurthy, jfraker, jeroendb, shailend, davem,
edumazet, kuba, pabeni, ast, daniel, hawk, john.fastabend,
ziweixiao, willemb, joshwash
Hello:
This patch was applied to netdev/net.git (main)
by Jakub Kicinski <kuba@kernel.org>:
On Tue, 16 Jul 2024 10:10:41 -0700 you wrote:
> From: Joshua Washington <joshwash@google.com>
>
> In gve_clean_xdp_done, the driver processes the TX completions based on
> a 32-bit NIC counter and a 32-bit completion counter stored in the tx
> queue.
>
> Fix the for loop so that the counter wraparound is handled correctly.
>
> [...]
Here is the summary with links:
- [net] gve: Fix XDP TX completion handling when counters overflow
https://git.kernel.org/netdev/net/c/03b54bad26f3
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html