BPF List
From: sashiko-bot@kernel.org
To: "Dipayaan Roy" <dipayanroy@linux.microsoft.com>
Cc: bpf@vger.kernel.org
Subject: Re: [PATCH net-next] net: mana: Force single RX buffer per page for CVM/encrypted guest memory
Date: Tue, 28 Apr 2026 18:26:22 +0000	[thread overview]
Message-ID: <20260428182622.B7CC2C2BCB6@smtp.kernel.org> (raw)
In-Reply-To: <ae9pxvJfkAZYfKMf@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>

Sashiko AI review found 3 potential issues:
- [Critical] Missing DMA synchronization, causing silent data corruption on swiotlb/non-coherent architectures.
- [High] The implementation violates the commit message's claims regarding DMA alignment and mapping size when XDP or jumbo frames are enabled.
- [High] Double Free and Use-After-Free in the `mana_create_rxq` error path.
--

commit cd9b937476d6872de0ead35dec420877b132e896
Author: Dipayaan Roy <dipayanroy@linux.microsoft.com>

net: mana: Force single RX buffer per page for CVM/encrypted guest memory

This commit forces the mana driver to use a single Rx buffer per page on
Confidential VMs with encrypted memory. It aims to ensure page-aligned DMA
mappings without offsets to work around issues with swiotlb bounce buffers
when using sub-page fragments.

> When page_pool allocates sub-page RX buffer fragments, the bounce buffer
> granularity may not align with these smaller fragment sizes, leading to
> failure in mana driver rx path.

Could the underlying issue with sub-page fragments be a lack of DMA
synchronization rather than bounce buffer granularity?

The mana driver relies on the page pool DMA map flag for sub-page
Rx fragments, but does it call page_pool_dma_sync_for_cpu() or
dma_sync_single_for_cpu() before processing the packet?

On systems using swiotlb or non-coherent DMA, omitting this sync means
the CPU might read stale data since the bounce buffer is never copied
to the original CPU memory.

Forcing a single buffer per page works around this because the single-page
path calls dma_unmap_single(), which implicitly syncs the data. Would
adding the mandatory CPU DMA sync for sub-page fragments be the correct
fix here?

> - Each RX buffer is exactly one PAGE_SIZE.
> - The DMA offset is always 0.
> - SWIOTLB maps full, page-aligned blocks.

Does the implementation actually achieve these claims when XDP or jumbo
frames are enabled?

> diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
> index a654b3699c4c..2d44eaf932a8 100644
> --- a/drivers/net/ethernet/microsoft/mana/mana_en.c
> +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
[ ... ]
> @@ -754,8 +772,7 @@ static void mana_get_rxbuf_cfg(struct mana_port_context *apc,
>  	/* Calculate datasize first (consistent across all cases) */
>  	*datasize = mtu + ETH_HLEN;
>  
> -	/* For xdp and jumbo frames make sure only one packet fits per page */
> -	if (mtu + MANA_RXBUF_PAD > PAGE_SIZE / 2 || mana_xdp_get(apc)) {
> +	if (mana_use_single_rxbuf_per_page(apc, mtu)) {
>  		if (mana_xdp_get(apc)) {
>  			*headroom = XDP_PACKET_HEADROOM;
>  			*alloc_size = PAGE_SIZE;

If XDP is enabled, headroom is set to XDP_PACKET_HEADROOM, which is
typically 256 bytes. When the driver subsequently maps the buffer,
adding this headroom results in a DMA mapping with a 256-byte offset,
rather than 0. The mapped size would also be the data size, not a full page.

Additionally, for jumbo frames, alloc_size is rounded up to a higher-order
allocation, so the buffer is not exactly one page.

If the hardware or the hypervisor strictly relies on 0-offset,
page-aligned DMA addresses to function correctly with swiotlb, will enabling
XDP cause DMA faults here?

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/ae9pxvJfkAZYfKMf@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net?part=1

Thread overview:
2026-04-27 13:51 [PATCH net-next] net: mana: Force single RX buffer per page for CVM/encrypted guest memory Dipayaan Roy
2026-04-27 23:21 ` Jakub Kicinski
2026-04-28 18:26 ` sashiko-bot [this message]
