* [RFC PATCH net-next] net: mana: Force single RX buffer per page under SWIOTLB bounce modes
@ 2026-04-27 14:41 Dipayaan Roy
2026-04-27 22:44 ` Jakub Kicinski
0 siblings, 1 reply; 2+ messages in thread
From: Dipayaan Roy @ 2026-04-27 14:41 UTC
To: kys, haiyangz, wei.liu, decui, andrew+netdev, davem, edumazet,
kuba, pabeni, leon, longli, kotaranov, horms, shradhagupta,
ssengar, ernis, shirazsaleem, linux-hyperv, netdev, linux-kernel,
linux-rdma, stephen, jacob.e.keller, dipayanroy, leitao, kees,
john.fastabend, hawk, bpf, daniel, ast, sdf, yury.norov
The MANA driver has two distinct DMA paths for RX buffers:
1. Without PP_FLAG_DMA_MAP: The driver maps full pages manually,
creating page-aligned mappings where the DMA offset is always zero.
2. With PP_FLAG_DMA_MAP: page_pool uses sub-page fragments, where
multiple RX buffers share a single page. The pool maps the whole
page once, and subsequent allocations use offsets into that region.
Path 2 is problematic in two scenarios where DMA must go through
SWIOTLB bounce buffers:
- Confidential VMs (AMD SEV-SNP, Intel TDX): guest memory is encrypted
and the NIC cannot access it directly due to lack of TDISP support.
All DMA must use SWIOTLB bounce buffers.
- Force-bounce mode (swiotlb=force): all DMA is routed through bounce
buffers regardless of whether the system is a CVM.
In both cases, sub-page RX buffer fragments allocated via page_pool may
not interact correctly with SWIOTLB bounce buffering, leading to
failures in the RX path.
Add a check using is_swiotlb_force_bounce() in
mana_use_single_rxbuf_per_page() to detect when force-bounce is active
for the device and force single RX buffer per page allocation.
Note: This issue likely affects any NIC driver using page_pool with
sub-page fragment allocation under SWIOTLB. A more general fix at
the page_pool level may be desirable. Seeking feedback on the
preferred approach.
Signed-off-by: Dipayaan Roy <dipayanroy@linux.microsoft.com>
---
drivers/net/ethernet/microsoft/mana/mana_en.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index 2d44eaf932a8..841421baf0de 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -12,6 +12,7 @@
#include <linux/pci.h>
#include <linux/export.h>
#include <linux/skbuff.h>
+#include <linux/swiotlb.h>
#include <linux/cc_platform.h>
#include <net/checksum.h>
@@ -748,10 +749,15 @@ static void *mana_get_rxbuf_pre(struct mana_rxq *rxq, dma_addr_t *da)
static bool
mana_use_single_rxbuf_per_page(struct mana_port_context *apc, u32 mtu)
{
+ struct gdma_context *gc = apc->ac->gdma_dev->gdma_context;
+
/* On confidential VMs with guest memory encryption, all DMA goes
* through SWIOTLB bounce buffers. Sub-page RX fragments may not
* be properly bounce-buffered, so use full-page buffers.
*/
+ if (is_swiotlb_force_bounce(gc->dev))
+ return true;
+
if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
return true;
--
2.43.0
* Re: [RFC PATCH net-next] net: mana: Force single RX buffer per page under SWIOTLB bounce modes
2026-04-27 14:41 [RFC PATCH net-next] net: mana: Force single RX buffer per page under SWIOTLB bounce modes Dipayaan Roy
@ 2026-04-27 22:44 ` Jakub Kicinski
0 siblings, 0 replies; 2+ messages in thread
From: Jakub Kicinski @ 2026-04-27 22:44 UTC
To: Dipayaan Roy
Cc: kys, haiyangz, wei.liu, decui, andrew+netdev, davem, edumazet,
pabeni, leon, longli, kotaranov, horms, shradhagupta, ssengar,
ernis, shirazsaleem, linux-hyperv, netdev, linux-kernel,
linux-rdma, stephen, jacob.e.keller, dipayanroy, leitao, kees,
john.fastabend, hawk, bpf, daniel, ast, sdf, yury.norov
On Mon, 27 Apr 2026 07:41:11 -0700 Dipayaan Roy wrote:
> In both cases, sub-page RX buffer fragments allocated via page_pool may
> not be compatible with bounce buffering in this mode, leading to failures
> in the RX path.
What does it mean to not be compatible with swiotlb?
Normally that indicates that DMA API is mis-used.