* [PATCH net-next] net: mana: Force single RX buffer per page for CVM/encrypted guest memory
@ 2026-04-27 13:51 Dipayaan Roy
2026-04-27 23:21 ` Jakub Kicinski
2026-04-28 18:26 ` sashiko-bot
0 siblings, 2 replies; 3+ messages in thread
From: Dipayaan Roy @ 2026-04-27 13:51 UTC (permalink / raw)
To: kys, haiyangz, wei.liu, decui, andrew+netdev, davem, edumazet,
kuba, pabeni, leon, longli, kotaranov, horms, shradhagupta,
ssengar, ernis, shirazsaleem, linux-hyperv, netdev, linux-kernel,
linux-rdma, stephen, jacob.e.keller, dipayanroy, leitao, kees,
john.fastabend, hawk, bpf, daniel, ast, sdf, yury.norov
On Confidential VMs (CVMs) such as AMD SEV-SNP or Intel TDX, the guest
operating system's memory is encrypted, and current hardware lacks
support for TDISP (TEE Device Interface Security Protocol), meaning the
NIC cannot directly access this encrypted VM memory. Consequently, all
DMA operations must go through SWIOTLB bounce buffers.
The MANA driver currently has two distinct paths for DMA
mapping:
1. Without PP_FLAG_DMA_MAP: The driver manually maps full pages for each
packet. This creates standalone, page-aligned mappings where the offset
is always zero.
2. With PP_FLAG_DMA_MAP: Optimizes memory by using page_pool with
sub-page fragments (e.g., multiple RX buffers sharing a single page).
When PP_FLAG_DMA_MAP is enabled, page_pool maps the entire page once.
Subsequent RX buffer allocations use offsets into this pre-mapped area.
When page_pool allocates sub-page RX buffer fragments, the bounce-buffer
granularity may not align with these smaller fragment sizes, leading to
failures in the mana driver's RX path.
Refactor the RX buffer decision into mana_use_single_rxbuf_per_page().
When cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) is true, the driver is
forced to use a single RX buffer per page. This ensures:
- Each RX buffer is exactly one PAGE_SIZE.
- The DMA offset is always 0.
- SWIOTLB maps full, page-aligned blocks.
Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: Dipayaan Roy <dipayanroy@linux.microsoft.com>
---
drivers/net/ethernet/microsoft/mana/mana_en.c | 21 +++++++++++++++++--
1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index a654b3699c4c..2d44eaf932a8 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -12,6 +12,7 @@
#include <linux/pci.h>
#include <linux/export.h>
#include <linux/skbuff.h>
+#include <linux/cc_platform.h>
#include <net/checksum.h>
#include <net/ip6_checksum.h>
@@ -744,6 +745,23 @@ static void *mana_get_rxbuf_pre(struct mana_rxq *rxq, dma_addr_t *da)
return va;
}
+static bool
+mana_use_single_rxbuf_per_page(struct mana_port_context *apc, u32 mtu)
+{
+ /* On confidential VMs with guest memory encryption, all DMA goes
+ * through SWIOTLB bounce buffers. Sub-page RX fragments may not
+ * be properly bounce-buffered, so use full-page buffers.
+ */
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
+ return true;
+
+ /* For xdp and jumbo frames make sure only one packet fits per page. */
+ if (mtu + MANA_RXBUF_PAD > PAGE_SIZE / 2 || mana_xdp_get(apc))
+ return true;
+
+ return false;
+}
+
/* Get RX buffer's data size, alloc size, XDP headroom based on MTU */
static void mana_get_rxbuf_cfg(struct mana_port_context *apc,
int mtu, u32 *datasize, u32 *alloc_size,
@@ -754,8 +772,7 @@ static void mana_get_rxbuf_cfg(struct mana_port_context *apc,
/* Calculate datasize first (consistent across all cases) */
*datasize = mtu + ETH_HLEN;
- /* For xdp and jumbo frames make sure only one packet fits per page */
- if (mtu + MANA_RXBUF_PAD > PAGE_SIZE / 2 || mana_xdp_get(apc)) {
+ if (mana_use_single_rxbuf_per_page(apc, mtu)) {
if (mana_xdp_get(apc)) {
*headroom = XDP_PACKET_HEADROOM;
*alloc_size = PAGE_SIZE;
--
2.43.0
^ permalink raw reply related [flat|nested] 3+ messages in thread
* Re: [PATCH net-next] net: mana: Force single RX buffer per page for CVM/encrypted guest memory
2026-04-27 13:51 [PATCH net-next] net: mana: Force single RX buffer per page for CVM/encrypted guest memory Dipayaan Roy
@ 2026-04-27 23:21 ` Jakub Kicinski
2026-04-28 18:26 ` sashiko-bot
1 sibling, 0 replies; 3+ messages in thread
From: Jakub Kicinski @ 2026-04-27 23:21 UTC (permalink / raw)
To: Dipayaan Roy
Cc: kys, haiyangz, wei.liu, decui, andrew+netdev, davem, edumazet,
pabeni, leon, longli, kotaranov, horms, shradhagupta, ssengar,
ernis, shirazsaleem, linux-hyperv, netdev, linux-kernel,
linux-rdma, stephen, jacob.e.keller, dipayanroy, leitao, kees,
john.fastabend, hawk, bpf, daniel, ast, sdf, yury.norov
On Mon, 27 Apr 2026 06:51:02 -0700 Dipayaan Roy wrote:
> When page_pool allocates sub-page RX buffer fragments, the bounce buffer
> granularity may not align with these smaller fragment sizes, leading to
> failure in mana driver rx path.
>
> Refactor the RX buffer decision into mana_use_single_rxbuf_per_page().
> When cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) is true, the driver is
> forced to use a single RX buffer per page. This ensures:
> - Each RX buffer is exactly one PAGE_SIZE.
> - The DMA offset is always 0.
> - SWIOTLB maps full, page-aligned blocks.
As commented on your RFC - I'm not sure why you need this.
* Re: [PATCH net-next] net: mana: Force single RX buffer per page for CVM/encrypted guest memory
2026-04-27 13:51 [PATCH net-next] net: mana: Force single RX buffer per page for CVM/encrypted guest memory Dipayaan Roy
2026-04-27 23:21 ` Jakub Kicinski
@ 2026-04-28 18:26 ` sashiko-bot
1 sibling, 0 replies; 3+ messages in thread
From: sashiko-bot @ 2026-04-28 18:26 UTC (permalink / raw)
To: Dipayaan Roy; +Cc: bpf
Sashiko AI review found 3 potential issue(s):
- [Critical] Missing DMA Synchronization causing silent data corruption on SWIOTLB/non-coherent architectures.
- [High] The implementation violates the commit message's claims regarding DMA alignment and mapping size when XDP or jumbo frames are enabled.
- [High] Double Free and Use-After-Free in the `mana_create_rxq` error path.
--
commit cd9b937476d6872de0ead35dec420877b132e896
Author: Dipayaan Roy <dipayanroy@linux.microsoft.com>
net: mana: Force single RX buffer per page for CVM/encrypted guest memory
This commit forces the mana driver to use a single Rx buffer per page on
Confidential VMs with encrypted memory. It aims to ensure page-aligned DMA
mappings without offsets to work around issues with swiotlb bounce buffers
when using sub-page fragments.
> When page_pool allocates sub-page RX buffer fragments, the bounce buffer
> granularity may not align with these smaller fragment sizes, leading to
> failure in mana driver rx path.
Could the underlying issue with sub-page fragments be a lack of DMA
synchronization rather than bounce buffer granularity?
The mana driver relies on the page pool DMA map flag for sub-page
Rx fragments, but does it call page_pool_dma_sync_for_cpu() or
dma_sync_single_for_cpu() before processing the packet?
On systems using swiotlb or non-coherent DMA, omitting this sync means
the CPU might read stale data since the bounce buffer is never copied
to the original CPU memory.
Forcing a single buffer per page works around this because the single-page
path calls dma_unmap_single(), which implicitly syncs the data. Would
adding the mandatory CPU DMA sync for sub-page fragments be the correct
fix here?
> - Each RX buffer is exactly one PAGE_SIZE.
> - The DMA offset is always 0.
> - SWIOTLB maps full, page-aligned blocks.
Does the implementation actually achieve these claims when XDP or jumbo
frames are enabled?
> diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
> index a654b3699c4c..2d44eaf932a8 100644
> --- a/drivers/net/ethernet/microsoft/mana/mana_en.c
> +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
[ ... ]
> @@ -754,8 +772,7 @@ static void mana_get_rxbuf_cfg(struct mana_port_context *apc,
> /* Calculate datasize first (consistent across all cases) */
> *datasize = mtu + ETH_HLEN;
>
> - /* For xdp and jumbo frames make sure only one packet fits per page */
> - if (mtu + MANA_RXBUF_PAD > PAGE_SIZE / 2 || mana_xdp_get(apc)) {
> + if (mana_use_single_rxbuf_per_page(apc, mtu)) {
> if (mana_xdp_get(apc)) {
> *headroom = XDP_PACKET_HEADROOM;
> *alloc_size = PAGE_SIZE;
If XDP is enabled, headroom is set to XDP_PACKET_HEADROOM, which is
typically 256 bytes. When the driver subsequently maps the buffer,
adding this headroom results in a DMA mapping with a 256-byte offset,
rather than 0. The mapped size would also be the data size, not a full page.
Additionally, for jumbo frames, alloc_size is elevated to a higher-order
allocation, meaning the buffer is not exactly one page size.
If the hardware or the hypervisor strictly relies on 0-offset,
page-aligned DMA addresses to function correctly with swiotlb, will enabling
XDP cause DMA faults here?
--
Sashiko AI review · https://sashiko.dev/#/patchset/ae9pxvJfkAZYfKMf@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net?part=1