public inbox for stable@vger.kernel.org
* [PATCH net v2] gve: fix incorrect buffer cleanup in gve_tx_clean_pending_packets for QPL
@ 2026-02-20 21:53 Joshua Washington
  2026-02-23 18:04 ` Simon Horman
  2026-02-24  1:30 ` patchwork-bot+netdevbpf
  0 siblings, 2 replies; 3+ messages in thread
From: Joshua Washington @ 2026-02-20 21:53 UTC (permalink / raw)
  To: netdev
  Cc: Joshua Washington, Harshitha Ramamurthy, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Willem de Bruijn, Praveen Kaligineedi, Rushil Gupta,
	Bailey Forrest, linux-kernel, Ankit Garg, stable, Jordan Rhee

From: Ankit Garg <nktgrg@google.com>

In DQO-QPL mode, gve_tx_clean_pending_packets() incorrectly uses the RDA
buffer cleanup path. It iterates num_bufs times and attempts to unmap
entries in the dma array.

This leads to two issues:
1. The dma array shares storage with tx_qpl_buf_ids (union).
   Interpreting buffer IDs as DMA addresses results in attempting to
   unmap incorrect memory locations.
2. num_bufs in QPL mode (counting 2K chunks) can significantly exceed
   the size of the dma array, causing out-of-bounds access warnings
   (the trace below is how we noticed this issue).

UBSAN: array-index-out-of-bounds in
drivers/net/ethernet/google/gve/gve_tx_dqo.c:178:5 index 18 is out of
range for type 'dma_addr_t[18]' (aka 'unsigned long long[18]')
Workqueue: gve gve_service_task [gve]
Call Trace:
<TASK>
dump_stack_lvl+0x33/0xa0
__ubsan_handle_out_of_bounds+0xdc/0x110
gve_tx_stop_ring_dqo+0x182/0x200 [gve]
gve_close+0x1be/0x450 [gve]
gve_reset+0x99/0x120 [gve]
gve_service_task+0x61/0x100 [gve]
process_scheduled_works+0x1e9/0x380

Fix this by properly checking for QPL mode and delegating to
gve_free_tx_qpl_bufs() to reclaim the buffers.

Cc: stable@vger.kernel.org
Fixes: a6fb8d5a8b69 ("gve: Tx path for DQO-QPL")
Signed-off-by: Ankit Garg <nktgrg@google.com>
Reviewed-by: Jordan Rhee <jordanrhee@google.com>
Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com>
Signed-off-by: Joshua Washington <joshwash@google.com>
---
Changes in v2:
* Moved gve_unmap_packet up instead of forward declaration
  (Jakub Kicinski)
---
 drivers/net/ethernet/google/gve/gve_tx_dqo.c | 56 ++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 31 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
index 40b89b3e..e5e33966 100644
--- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
@@ -167,6 +167,25 @@ gve_free_pending_packet(struct gve_tx_ring *tx,
 	}
 }
 
+static void gve_unmap_packet(struct device *dev,
+			     struct gve_tx_pending_packet_dqo *pkt)
+{
+	int i;
+
+	if (!pkt->num_bufs)
+		return;
+
+	/* SKB linear portion is guaranteed to be mapped */
+	dma_unmap_single(dev, dma_unmap_addr(pkt, dma[0]),
+			 dma_unmap_len(pkt, len[0]), DMA_TO_DEVICE);
+	for (i = 1; i < pkt->num_bufs; i++) {
+		netmem_dma_unmap_page_attrs(dev, dma_unmap_addr(pkt, dma[i]),
+					    dma_unmap_len(pkt, len[i]),
+					    DMA_TO_DEVICE, 0);
+	}
+	pkt->num_bufs = 0;
+}
+
 /* gve_tx_free_desc - Cleans up all pending tx requests and buffers.
  */
 static void gve_tx_clean_pending_packets(struct gve_tx_ring *tx)
@@ -176,21 +195,12 @@ static void gve_tx_clean_pending_packets(struct gve_tx_ring *tx)
 	for (i = 0; i < tx->dqo.num_pending_packets; i++) {
 		struct gve_tx_pending_packet_dqo *cur_state =
 			&tx->dqo.pending_packets[i];
-		int j;
-
-		for (j = 0; j < cur_state->num_bufs; j++) {
-			if (j == 0) {
-				dma_unmap_single(tx->dev,
-					dma_unmap_addr(cur_state, dma[j]),
-					dma_unmap_len(cur_state, len[j]),
-					DMA_TO_DEVICE);
-			} else {
-				dma_unmap_page(tx->dev,
-					dma_unmap_addr(cur_state, dma[j]),
-					dma_unmap_len(cur_state, len[j]),
-					DMA_TO_DEVICE);
-			}
-		}
+
+		if (tx->dqo.qpl)
+			gve_free_tx_qpl_bufs(tx, cur_state);
+		else
+			gve_unmap_packet(tx->dev, cur_state);
+
 		if (cur_state->skb) {
 			dev_consume_skb_any(cur_state->skb);
 			cur_state->skb = NULL;
@@ -1160,22 +1170,6 @@ static void remove_from_list(struct gve_tx_ring *tx,
 	}
 }
 
-static void gve_unmap_packet(struct device *dev,
-			     struct gve_tx_pending_packet_dqo *pkt)
-{
-	int i;
-
-	/* SKB linear portion is guaranteed to be mapped */
-	dma_unmap_single(dev, dma_unmap_addr(pkt, dma[0]),
-			 dma_unmap_len(pkt, len[0]), DMA_TO_DEVICE);
-	for (i = 1; i < pkt->num_bufs; i++) {
-		netmem_dma_unmap_page_attrs(dev, dma_unmap_addr(pkt, dma[i]),
-					    dma_unmap_len(pkt, len[i]),
-					    DMA_TO_DEVICE, 0);
-	}
-	pkt->num_bufs = 0;
-}
-
 /* Completion types and expected behavior:
  * No Miss compl + Packet compl = Packet completed normally.
  * Miss compl + Re-inject compl = Packet completed normally.
-- 
2.53.0.335.g19a08e0c02-goog



* Re: [PATCH net v2] gve: fix incorrect buffer cleanup in gve_tx_clean_pending_packets for QPL
  2026-02-20 21:53 [PATCH net v2] gve: fix incorrect buffer cleanup in gve_tx_clean_pending_packets for QPL Joshua Washington
@ 2026-02-23 18:04 ` Simon Horman
  2026-02-24  1:30 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: Simon Horman @ 2026-02-23 18:04 UTC (permalink / raw)
  To: Joshua Washington
  Cc: netdev, Harshitha Ramamurthy, Andrew Lunn, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, Willem de Bruijn,
	Praveen Kaligineedi, Rushil Gupta, Bailey Forrest, linux-kernel,
	Ankit Garg, stable, Jordan Rhee

On Fri, Feb 20, 2026 at 01:53:24PM -0800, Joshua Washington wrote:
> From: Ankit Garg <nktgrg@google.com>
> 
> In DQO-QPL mode, gve_tx_clean_pending_packets() incorrectly uses the RDA
> buffer cleanup path. It iterates num_bufs times and attempts to unmap
> entries in the dma array.
> 
> This leads to two issues:
> 1. The dma array shares storage with tx_qpl_buf_ids (union).
>    Interpreting buffer IDs as DMA addresses results in attempting to
>    unmap incorrect memory locations.
> 2. num_bufs in QPL mode (counting 2K chunks) can significantly exceed
>    the size of the dma array, causing out-of-bounds access warnings
>    (the trace below is how we noticed this issue).
> 
> UBSAN: array-index-out-of-bounds in
> drivers/net/ethernet/google/gve/gve_tx_dqo.c:178:5 index 18 is out of
> range for type 'dma_addr_t[18]' (aka 'unsigned long long[18]')
> Workqueue: gve gve_service_task [gve]
> Call Trace:
> <TASK>
> dump_stack_lvl+0x33/0xa0
> __ubsan_handle_out_of_bounds+0xdc/0x110
> gve_tx_stop_ring_dqo+0x182/0x200 [gve]
> gve_close+0x1be/0x450 [gve]
> gve_reset+0x99/0x120 [gve]
> gve_service_task+0x61/0x100 [gve]
> process_scheduled_works+0x1e9/0x380
> 
> Fix this by properly checking for QPL mode and delegating to
> gve_free_tx_qpl_bufs() to reclaim the buffers.
> 
> Cc: stable@vger.kernel.org
> Fixes: a6fb8d5a8b69 ("gve: Tx path for DQO-QPL")
> Signed-off-by: Ankit Garg <nktgrg@google.com>
> Reviewed-by: Jordan Rhee <jordanrhee@google.com>
> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com>
> Signed-off-by: Joshua Washington <joshwash@google.com>
> ---
> Changes in v2:
> * Moved gve_unmap_packet up instead of forward declaration
>   (Jakub Kicinski)

Reviewed-by: Simon Horman <horms@kernel.org>



* Re: [PATCH net v2] gve: fix incorrect buffer cleanup in gve_tx_clean_pending_packets for QPL
  2026-02-20 21:53 [PATCH net v2] gve: fix incorrect buffer cleanup in gve_tx_clean_pending_packets for QPL Joshua Washington
  2026-02-23 18:04 ` Simon Horman
@ 2026-02-24  1:30 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: patchwork-bot+netdevbpf @ 2026-02-24  1:30 UTC (permalink / raw)
  To: Joshua Washington
  Cc: netdev, hramamurthy, andrew+netdev, davem, edumazet, kuba, pabeni,
	willemb, pkaligineedi, rushilg, bcf, linux-kernel, nktgrg, stable,
	jordanrhee

Hello:

This patch was applied to netdev/net.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Fri, 20 Feb 2026 13:53:24 -0800 you wrote:
> From: Ankit Garg <nktgrg@google.com>
> 
> In DQO-QPL mode, gve_tx_clean_pending_packets() incorrectly uses the RDA
> buffer cleanup path. It iterates num_bufs times and attempts to unmap
> entries in the dma array.
> 
> This leads to two issues:
> 1. The dma array shares storage with tx_qpl_buf_ids (union).
>    Interpreting buffer IDs as DMA addresses results in attempting to
>    unmap incorrect memory locations.
> 2. num_bufs in QPL mode (counting 2K chunks) can significantly exceed
>    the size of the dma array, causing out-of-bounds access warnings
>    (the trace below is how we noticed this issue).
> 
> [...]

Here is the summary with links:
  - [net,v2] gve: fix incorrect buffer cleanup in gve_tx_clean_pending_packets for QPL
    https://git.kernel.org/netdev/net/c/fb868db5f4bc

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



