Linux-HyperV List
* [PATCH net] net: mana: Fix TOCTOU double-fetch of hwc_msg_id from DMA buffer
@ 2026-05-14 19:41 Erni Sri Satya Vennela
  2026-05-15 19:44 ` sashiko-bot
  0 siblings, 1 reply; 2+ messages in thread
From: Erni Sri Satya Vennela @ 2026-05-14 19:41 UTC (permalink / raw)
  To: kys, haiyangz, wei.liu, decui, longli, andrew+netdev, davem,
	edumazet, kuba, pabeni, dipayanroy, horms, ernis, kees, shacharr,
	stephen, linux-hyperv, netdev, linux-kernel

In mana_hwc_rx_event_handler(), resp->response.hwc_msg_id is read from
DMA-coherent memory and bounds-checked, then mana_hwc_handle_resp()
re-reads the same field from the same DMA buffer for test_bit() and
pointer arithmetic.

DMA-coherent memory is mapped uncacheable on x86 and is shared,
unencrypted, in Confidential VMs (SEV-SNP/TDX), so each load goes
directly to host-visible memory. A malicious host can modify the value
between the check and the use, bypassing the bounds validation.

Fix this by reading hwc_msg_id exactly once using READ_ONCE() into a
stack-local variable in mana_hwc_rx_event_handler(), and passing the
validated value as a parameter to mana_hwc_handle_resp().

Fixes: ca9c54d2d6a5 ("net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)")
Signed-off-by: Erni Sri Satya Vennela <ernis@linux.microsoft.com>
---
 .../net/ethernet/microsoft/mana/hw_channel.c  | 23 +++++++++++--------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/hw_channel.c b/drivers/net/ethernet/microsoft/mana/hw_channel.c
index dbbde0fa57e7..fd8b324d7fb6 100644
--- a/drivers/net/ethernet/microsoft/mana/hw_channel.c
+++ b/drivers/net/ethernet/microsoft/mana/hw_channel.c
@@ -77,21 +77,19 @@ static int mana_hwc_post_rx_wqe(const struct hwc_wq *hwc_rxq,
 }
 
 static void mana_hwc_handle_resp(struct hw_channel_context *hwc, u32 resp_len,
-				 struct hwc_work_request *rx_req)
+				 struct hwc_work_request *rx_req, u16 msg_id)
 {
 	const struct gdma_resp_hdr *resp_msg = rx_req->buf_va;
 	struct hwc_caller_ctx *ctx;
 	int err;
 
-	if (!test_bit(resp_msg->response.hwc_msg_id,
-		      hwc->inflight_msg_res.map)) {
-		dev_err(hwc->dev, "hwc_rx: invalid msg_id = %u\n",
-			resp_msg->response.hwc_msg_id);
+	if (!test_bit(msg_id, hwc->inflight_msg_res.map)) {
+		dev_err(hwc->dev, "hwc_rx: invalid msg_id = %u\n", msg_id);
 		mana_hwc_post_rx_wqe(hwc->rxq, rx_req);
 		return;
 	}
 
-	ctx = hwc->caller_ctx + resp_msg->response.hwc_msg_id;
+	ctx = hwc->caller_ctx + msg_id;
 	err = mana_hwc_verify_resp_msg(ctx, resp_msg, resp_len);
 	if (err)
 		goto out;
@@ -251,6 +249,7 @@ static void mana_hwc_rx_event_handler(void *ctx, u32 gdma_rxq_id,
 	struct gdma_sge *sge;
 	u64 rq_base_addr;
 	u64 rx_req_idx;
+	u16 msg_id;
 	u8 *wqe;
 
 	if (WARN_ON_ONCE(hwc_rxq->gdma_wq->id != gdma_rxq_id))
@@ -269,13 +268,17 @@ static void mana_hwc_rx_event_handler(void *ctx, u32 gdma_rxq_id,
 	rx_req = &hwc_rxq->msg_buf->reqs[rx_req_idx];
 	resp = (struct gdma_resp_hdr *)rx_req->buf_va;
 
-	if (resp->response.hwc_msg_id >= hwc->num_inflight_msg) {
-		dev_err(hwc->dev, "HWC RX: wrong msg_id=%u\n",
-			resp->response.hwc_msg_id);
+	/* Read msg_id once from DMA buffer to prevent TOCTOU:
+	 * DMA memory is shared/unencrypted in CVMs - host can
+	 * modify it between reads.
+	 */
+	msg_id = READ_ONCE(resp->response.hwc_msg_id);
+	if (msg_id >= hwc->num_inflight_msg) {
+		dev_err(hwc->dev, "HWC RX: wrong msg_id=%u\n", msg_id);
 		return;
 	}
 
-	mana_hwc_handle_resp(hwc, rx_oob->tx_oob_data_size, rx_req);
+	mana_hwc_handle_resp(hwc, rx_oob->tx_oob_data_size, rx_req, msg_id);
 
 	/* Can no longer use 'resp', because the buffer is posted to the HW
 	 * in mana_hwc_handle_resp() above.
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 2+ messages in thread

* Re: [PATCH net] net: mana: Fix TOCTOU double-fetch of hwc_msg_id from DMA buffer
  2026-05-14 19:41 [PATCH net] net: mana: Fix TOCTOU double-fetch of hwc_msg_id from DMA buffer Erni Sri Satya Vennela
@ 2026-05-15 19:44 ` sashiko-bot
  0 siblings, 0 replies; 2+ messages in thread
From: sashiko-bot @ 2026-05-15 19:44 UTC (permalink / raw)
  To: Erni Sri Satya Vennela; +Cc: linux-hyperv

Thank you for your contribution! Sashiko AI review found 1 potential issue(s) to consider:
- [Critical] Unvalidated array index `rx_req_idx` derived from untrusted DMA memory allows arbitrary out-of-bounds memory access.
--

commit b79489b09c3449da038fde04cce5dc4d6a1fe538
Author: Erni Sri Satya Vennela <ernis@linux.microsoft.com>

net: mana: Fix TOCTOU double-fetch of hwc_msg_id from DMA buffer

This commit fixes a TOCTOU vulnerability in mana_hwc_rx_event_handler() where
the hwc_msg_id is read twice from DMA-coherent memory. It reads the msg_id once
using READ_ONCE() and passes it to mana_hwc_handle_resp() to prevent a malicious
host in a Confidential VM from bypassing bounds checks.

> diff --git a/drivers/net/ethernet/microsoft/mana/hw_channel.c b/drivers/net/ethernet/microsoft/mana/hw_channel.c
> index dbbde0fa57e71..fd8b324d7fb68 100644
> --- a/drivers/net/ethernet/microsoft/mana/hw_channel.c
> +++ b/drivers/net/ethernet/microsoft/mana/hw_channel.c

[ ... ]

> @@ -269,13 +268,17 @@ static void mana_hwc_rx_event_handler(void *ctx, u32 gdma_rxq_id,
>  	rx_req = &hwc_rxq->msg_buf->reqs[rx_req_idx];

Since this patch addresses host-controlled DMA memory vulnerabilities in
Confidential VMs, could there be another out-of-bounds access just before
the msg_id check?

Looking at how rx_req_idx is calculated earlier in this function:

	sge = (struct gdma_sge *)(wqe + 8 + dma_oob->inline_oob_size_div4 * 4);
	rq_base_addr = hwc_rxq->msg_buf->mem_info.dma_handle;
	rx_req_idx = (sge->address - rq_base_addr) / hwc->max_req_msg_size;

Because sge->address is read directly from the untrusted DMA queue, a malicious
host could provide an arbitrary value, resulting in an out-of-bounds
rx_req_idx.

Using this unvalidated index to access hwc_rxq->msg_buf->reqs[rx_req_idx]
yields an attacker-controlled rx_req pointer. When this corrupted rx_req is
subsequently passed to mana_hwc_post_rx_wqe(), the driver writes to req->sge
and req->wqe_req, which could allow arbitrary writes to guest kernel memory.

Should rx_req_idx be bounds-checked against the size of the reqs array
before it is used to index the array?

>  	resp = (struct gdma_resp_hdr *)rx_req->buf_va;
>  
> -	if (resp->response.hwc_msg_id >= hwc->num_inflight_msg) {
> -		dev_err(hwc->dev, "HWC RX: wrong msg_id=%u\n",
> -			resp->response.hwc_msg_id);
> +	/* Read msg_id once from DMA buffer to prevent TOCTOU:
> +	 * DMA memory is shared/unencrypted in CVMs - host can
> +	 * modify it between reads.
> +	 */
> +	msg_id = READ_ONCE(resp->response.hwc_msg_id);

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260514194156.466823-1-ernis@linux.microsoft.com?part=1


end of thread, other threads:[~2026-05-15 19:44 UTC | newest]

Thread overview: 2+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-05-14 19:41 [PATCH net] net: mana: Fix TOCTOU double-fetch of hwc_msg_id from DMA buffer Erni Sri Satya Vennela
2026-05-15 19:44 ` sashiko-bot
