From: Sasha Levin
To: stable@vger.kernel.org
Cc: Ankit Garg, Jordan Rhee, Harshitha Ramamurthy, Joshua Washington,
	Simon Horman, Jakub Kicinski, Sasha Levin
Subject: [PATCH 6.12.y] gve: fix incorrect buffer cleanup in gve_tx_clean_pending_packets for QPL
Date: Mon, 9 Mar 2026 08:47:50 -0400
Message-ID: <20260309124750.861990-1-sashal@kernel.org>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <2026030917-ferment-untamed-144d@gregkh>
References: <2026030917-ferment-untamed-144d@gregkh>
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ankit Garg

[ Upstream commit fb868db5f4bccd7a78219313ab2917429f715cea ]

In DQO-QPL mode, gve_tx_clean_pending_packets() incorrectly uses the
RDA buffer cleanup path. It iterates num_bufs times and attempts to
unmap entries in the dma array. This leads to two issues:

1. The dma array shares storage with tx_qpl_buf_ids (union).
   Interpreting buffer IDs as DMA addresses results in attempting to
   unmap incorrect memory locations.

2. num_bufs in QPL mode (counting 2K chunks) can significantly exceed
   the size of the dma array, causing out-of-bounds access warnings
   (the trace below is how we noticed this issue).

UBSAN: array-index-out-of-bounds in drivers/net/ethernet/google/gve/gve_tx_dqo.c:178:5
index 18 is out of range for type 'dma_addr_t[18]' (aka 'unsigned long long[18]')
Workqueue: gve gve_service_task [gve]
Call Trace:
 dump_stack_lvl+0x33/0xa0
 __ubsan_handle_out_of_bounds+0xdc/0x110
 gve_tx_stop_ring_dqo+0x182/0x200 [gve]
 gve_close+0x1be/0x450 [gve]
 gve_reset+0x99/0x120 [gve]
 gve_service_task+0x61/0x100 [gve]
 process_scheduled_works+0x1e9/0x380

Fix this by properly checking for QPL mode and delegating to
gve_free_tx_qpl_bufs() to reclaim the buffers.
Cc: stable@vger.kernel.org
Fixes: a6fb8d5a8b69 ("gve: Tx path for DQO-QPL")
Signed-off-by: Ankit Garg
Reviewed-by: Jordan Rhee
Reviewed-by: Harshitha Ramamurthy
Signed-off-by: Joshua Washington
Reviewed-by: Simon Horman
Link: https://patch.msgid.link/20260220215324.1631350-1-joshwash@google.com
Signed-off-by: Jakub Kicinski
[ netmem_dma_unmap_page_attrs() => dma_unmap_page() ]
Signed-off-by: Sasha Levin
---
 drivers/net/ethernet/google/gve/gve_tx_dqo.c | 54 +++++++++-----------
 1 file changed, 24 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
index 26053cc85d1c5..62a6df009cda9 100644
--- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
@@ -157,6 +157,24 @@ gve_free_pending_packet(struct gve_tx_ring *tx,
 	}
 }
 
+static void gve_unmap_packet(struct device *dev,
+			     struct gve_tx_pending_packet_dqo *pkt)
+{
+	int i;
+
+	if (!pkt->num_bufs)
+		return;
+
+	/* SKB linear portion is guaranteed to be mapped */
+	dma_unmap_single(dev, dma_unmap_addr(pkt, dma[0]),
+			 dma_unmap_len(pkt, len[0]), DMA_TO_DEVICE);
+	for (i = 1; i < pkt->num_bufs; i++) {
+		dma_unmap_page(dev, dma_unmap_addr(pkt, dma[i]),
+			       dma_unmap_len(pkt, len[i]), DMA_TO_DEVICE);
+	}
+	pkt->num_bufs = 0;
+}
+
 /* gve_tx_free_desc - Cleans up all pending tx requests and buffers.
  */
 static void gve_tx_clean_pending_packets(struct gve_tx_ring *tx)
@@ -166,21 +184,12 @@ static void gve_tx_clean_pending_packets(struct gve_tx_ring *tx)
 	for (i = 0; i < tx->dqo.num_pending_packets; i++) {
 		struct gve_tx_pending_packet_dqo *cur_state =
 			&tx->dqo.pending_packets[i];
-		int j;
-
-		for (j = 0; j < cur_state->num_bufs; j++) {
-			if (j == 0) {
-				dma_unmap_single(tx->dev,
-						 dma_unmap_addr(cur_state, dma[j]),
-						 dma_unmap_len(cur_state, len[j]),
-						 DMA_TO_DEVICE);
-			} else {
-				dma_unmap_page(tx->dev,
-					       dma_unmap_addr(cur_state, dma[j]),
-					       dma_unmap_len(cur_state, len[j]),
-					       DMA_TO_DEVICE);
-			}
-		}
+
+		if (tx->dqo.qpl)
+			gve_free_tx_qpl_bufs(tx, cur_state);
+		else
+			gve_unmap_packet(tx->dev, cur_state);
+
 		if (cur_state->skb) {
 			dev_consume_skb_any(cur_state->skb);
 			cur_state->skb = NULL;
@@ -1039,21 +1048,6 @@ static void remove_from_list(struct gve_tx_ring *tx,
 	}
 }
 
-static void gve_unmap_packet(struct device *dev,
-			     struct gve_tx_pending_packet_dqo *pkt)
-{
-	int i;
-
-	/* SKB linear portion is guaranteed to be mapped */
-	dma_unmap_single(dev, dma_unmap_addr(pkt, dma[0]),
-			 dma_unmap_len(pkt, len[0]), DMA_TO_DEVICE);
-	for (i = 1; i < pkt->num_bufs; i++) {
-		dma_unmap_page(dev, dma_unmap_addr(pkt, dma[i]),
-			       dma_unmap_len(pkt, len[i]), DMA_TO_DEVICE);
-	}
-	pkt->num_bufs = 0;
-}
-
 /* Completion types and expected behavior:
  * No Miss compl + Packet compl = Packet completed normally.
  * Miss compl + Re-inject compl = Packet completed normally.
-- 
2.51.0