public inbox for linux-kernel@vger.kernel.org
* [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced
@ 2026-01-20  7:01 Aneesh Kumar K.V (Arm)
  2026-01-20  9:25 ` Anshuman Khandual
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Aneesh Kumar K.V (Arm) @ 2026-01-20  7:01 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, iommu
  Cc: Catalin Marinas, Will Deacon, Marek Szyprowski, Robin Murphy,
	suzuki.poulose, steven.price, Aneesh Kumar K.V (Arm)

arm64 reduces the default swiotlb size (for unaligned kmalloc()
bouncing) when it detects that no swiotlb bouncing is needed.

If swiotlb bouncing is explicitly forced via the command line
(swiotlb=force), this heuristic must not apply. Add a swiotlb helper to
query the forced-bounce state and use it to skip the resize when
bouncing is forced.

Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
---
 arch/arm64/mm/init.c    | 3 ++-
 include/linux/swiotlb.h | 7 +++++++
 kernel/dma/swiotlb.c    | 5 +++++
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 524d34a0e921..7046241b47b8 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -345,7 +345,8 @@ void __init arch_mm_preinit(void)
 		flags |= SWIOTLB_FORCE;
 	}
 
-	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) {
+	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) &&
+	    !(swiotlb || force_swiotlb_bounce())) {
 		/*
 		 * If no bouncing needed for ZONE_DMA, reduce the swiotlb
 		 * buffer for kmalloc() bouncing to 1MB per 1GB of RAM.
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 3dae0f592063..513a93dcbdbc 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -185,6 +185,7 @@ bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
 phys_addr_t default_swiotlb_base(void);
 phys_addr_t default_swiotlb_limit(void);
+bool force_swiotlb_bounce(void);
 #else
 static inline void swiotlb_init(bool addressing_limited, unsigned int flags)
 {
@@ -234,6 +235,12 @@ static inline phys_addr_t default_swiotlb_limit(void)
 {
 	return 0;
 }
+
+static inline bool force_swiotlb_bounce(void)
+{
+	return false;
+}
+
 #endif /* CONFIG_SWIOTLB */
 
 phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 0d37da3d95b6..85e31f228cc9 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1646,6 +1646,11 @@ phys_addr_t default_swiotlb_base(void)
 	return io_tlb_default_mem.defpool.start;
 }
 
+bool force_swiotlb_bounce(void)
+{
+	return swiotlb_force_bounce;
+}
+
 /**
  * default_swiotlb_limit() - get the address limit of the default SWIOTLB
  *
-- 
2.43.0




Thread overview: 8+ messages
2026-01-20  7:01 [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced Aneesh Kumar K.V (Arm)
2026-01-20  9:25 ` Anshuman Khandual
2026-01-20 13:20 ` Robin Murphy
2026-01-21  6:10   ` Aneesh Kumar K.V
2026-02-04 18:52 ` Catalin Marinas
2026-02-06  6:11   ` Aneesh Kumar K.V
2026-03-04 10:00     ` Marek Szyprowski
2026-03-17  5:29       ` Aneesh Kumar K.V
