* [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced
@ 2026-01-20 7:01 Aneesh Kumar K.V (Arm)
2026-01-20 9:25 ` Anshuman Khandual
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Aneesh Kumar K.V (Arm) @ 2026-01-20 7:01 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, iommu
Cc: Catalin Marinas, Will Deacon, Marek Szyprowski, Robin Murphy,
suzuki.poulose, steven.price, Aneesh Kumar K.V (Arm)
arm64 reduces the default swiotlb size (for unaligned kmalloc()
bouncing) when it detects that no swiotlb bouncing is needed.
If swiotlb bouncing is explicitly forced via the command line
(swiotlb=force), this heuristic must not apply. Add a swiotlb helper to
query the forced-bounce state and use it to skip the resize when
bouncing is forced.
Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
---
arch/arm64/mm/init.c | 3 ++-
include/linux/swiotlb.h | 7 +++++++
kernel/dma/swiotlb.c | 5 +++++
3 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 524d34a0e921..7046241b47b8 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -345,7 +345,8 @@ void __init arch_mm_preinit(void)
flags |= SWIOTLB_FORCE;
}
- if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) {
+ if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) &&
+ !(swiotlb || force_swiotlb_bounce())) {
/*
* If no bouncing needed for ZONE_DMA, reduce the swiotlb
* buffer for kmalloc() bouncing to 1MB per 1GB of RAM.
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 3dae0f592063..513a93dcbdbc 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -185,6 +185,7 @@ bool is_swiotlb_active(struct device *dev);
void __init swiotlb_adjust_size(unsigned long size);
phys_addr_t default_swiotlb_base(void);
phys_addr_t default_swiotlb_limit(void);
+bool force_swiotlb_bounce(void);
#else
static inline void swiotlb_init(bool addressing_limited, unsigned int flags)
{
@@ -234,6 +235,12 @@ static inline phys_addr_t default_swiotlb_limit(void)
{
return 0;
}
+
+static inline bool force_swiotlb_bounce(void)
+{
+ return false;
+}
+
#endif /* CONFIG_SWIOTLB */
phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 0d37da3d95b6..85e31f228cc9 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1646,6 +1646,11 @@ phys_addr_t default_swiotlb_base(void)
return io_tlb_default_mem.defpool.start;
}
+bool force_swiotlb_bounce(void)
+{
+ return swiotlb_force_bounce;
+}
+
/**
* default_swiotlb_limit() - get the address limit of the default SWIOTLB
*
--
2.43.0
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced
2026-01-20 7:01 [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced Aneesh Kumar K.V (Arm)
@ 2026-01-20 9:25 ` Anshuman Khandual
2026-01-20 13:20 ` Robin Murphy
2026-02-04 18:52 ` Catalin Marinas
2 siblings, 0 replies; 8+ messages in thread
From: Anshuman Khandual @ 2026-01-20 9:25 UTC (permalink / raw)
To: Aneesh Kumar K.V (Arm), linux-arm-kernel, linux-kernel, iommu
Cc: Catalin Marinas, Will Deacon, Marek Szyprowski, Robin Murphy,
suzuki.poulose, steven.price
On 20/01/26 12:31 PM, Aneesh Kumar K.V (Arm) wrote:
> arm64 reduces the default swiotlb size (for unaligned kmalloc()
> bouncing) when it detects that no swiotlb bouncing is needed.
>
> If swiotlb bouncing is explicitly forced via the command line
> (swiotlb=force), this heuristic must not apply. Add a swiotlb helper to
> query the forced-bounce state and use it to skip the resize when
> bouncing is forced.
Makes sense not to reduce the SWIOTLB buffer size when being
forced by the administrator.
>
> Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
> ---
> arch/arm64/mm/init.c | 3 ++-
> include/linux/swiotlb.h | 7 +++++++
> kernel/dma/swiotlb.c | 5 +++++
> 3 files changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 524d34a0e921..7046241b47b8 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -345,7 +345,8 @@ void __init arch_mm_preinit(void)
> flags |= SWIOTLB_FORCE;
> }
>
> - if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) {
> + if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) &&
> + !(swiotlb || force_swiotlb_bounce())) {
> /*
> * If no bouncing needed for ZONE_DMA, reduce the swiotlb
> * buffer for kmalloc() bouncing to 1MB per 1GB of RAM.
Should the comment here be updated as well?
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 3dae0f592063..513a93dcbdbc 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -185,6 +185,7 @@ bool is_swiotlb_active(struct device *dev);
> void __init swiotlb_adjust_size(unsigned long size);
> phys_addr_t default_swiotlb_base(void);
> phys_addr_t default_swiotlb_limit(void);
> +bool force_swiotlb_bounce(void);
> #else
> static inline void swiotlb_init(bool addressing_limited, unsigned int flags)
> {
> @@ -234,6 +235,12 @@ static inline phys_addr_t default_swiotlb_limit(void)
> {
> return 0;
> }
> +
> +static inline bool force_swiotlb_bounce(void)
> +{
> + return false;
> +}
> +
> #endif /* CONFIG_SWIOTLB */
>
> phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 0d37da3d95b6..85e31f228cc9 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -1646,6 +1646,11 @@ phys_addr_t default_swiotlb_base(void)
> return io_tlb_default_mem.defpool.start;
> }
>
> +bool force_swiotlb_bounce(void)
> +{
> + return swiotlb_force_bounce;
> +}
> +
> /**
> * default_swiotlb_limit() - get the address limit of the default SWIOTLB
> *
* Re: [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced
2026-01-20 7:01 [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced Aneesh Kumar K.V (Arm)
2026-01-20 9:25 ` Anshuman Khandual
@ 2026-01-20 13:20 ` Robin Murphy
2026-01-21 6:10 ` Aneesh Kumar K.V
2026-02-04 18:52 ` Catalin Marinas
2 siblings, 1 reply; 8+ messages in thread
From: Robin Murphy @ 2026-01-20 13:20 UTC (permalink / raw)
To: Aneesh Kumar K.V (Arm), linux-arm-kernel, linux-kernel, iommu
Cc: Catalin Marinas, Will Deacon, Marek Szyprowski, suzuki.poulose,
steven.price
On 2026-01-20 7:01 am, Aneesh Kumar K.V (Arm) wrote:
> arm64 reduces the default swiotlb size (for unaligned kmalloc()
> bouncing) when it detects that no swiotlb bouncing is needed.
>
> If swiotlb bouncing is explicitly forced via the command line
> (swiotlb=force), this heuristic must not apply. Add a swiotlb helper to
> query the forced-bounce state and use it to skip the resize when
> bouncing is forced.
This doesn't appear to be an arm64-specific concern though... Since
swiotlb_adjust_size() already prevents resizing if the user requests a
specific size on the command line, it seems logical enough to also not
reduce the size (but I guess still allow it to be enlarged) there if
force is requested.
(Although realistically, anyone requesting force is quite likely to want
to request a larger default size anyway...)
Thanks,
Robin.
> Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
> ---
> arch/arm64/mm/init.c | 3 ++-
> include/linux/swiotlb.h | 7 +++++++
> kernel/dma/swiotlb.c | 5 +++++
> 3 files changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 524d34a0e921..7046241b47b8 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -345,7 +345,8 @@ void __init arch_mm_preinit(void)
> flags |= SWIOTLB_FORCE;
> }
>
> - if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) {
> + if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) &&
> + !(swiotlb || force_swiotlb_bounce())) {
> /*
> * If no bouncing needed for ZONE_DMA, reduce the swiotlb
> * buffer for kmalloc() bouncing to 1MB per 1GB of RAM.
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 3dae0f592063..513a93dcbdbc 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -185,6 +185,7 @@ bool is_swiotlb_active(struct device *dev);
> void __init swiotlb_adjust_size(unsigned long size);
> phys_addr_t default_swiotlb_base(void);
> phys_addr_t default_swiotlb_limit(void);
> +bool force_swiotlb_bounce(void);
> #else
> static inline void swiotlb_init(bool addressing_limited, unsigned int flags)
> {
> @@ -234,6 +235,12 @@ static inline phys_addr_t default_swiotlb_limit(void)
> {
> return 0;
> }
> +
> +static inline bool force_swiotlb_bounce(void)
> +{
> + return false;
> +}
> +
> #endif /* CONFIG_SWIOTLB */
>
> phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 0d37da3d95b6..85e31f228cc9 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -1646,6 +1646,11 @@ phys_addr_t default_swiotlb_base(void)
> return io_tlb_default_mem.defpool.start;
> }
>
> +bool force_swiotlb_bounce(void)
> +{
> + return swiotlb_force_bounce;
> +}
> +
> /**
> * default_swiotlb_limit() - get the address limit of the default SWIOTLB
> *
* Re: [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced
2026-01-20 13:20 ` Robin Murphy
@ 2026-01-21 6:10 ` Aneesh Kumar K.V
0 siblings, 0 replies; 8+ messages in thread
From: Aneesh Kumar K.V @ 2026-01-21 6:10 UTC (permalink / raw)
To: Robin Murphy, linux-arm-kernel, linux-kernel, iommu
Cc: Catalin Marinas, Will Deacon, Marek Szyprowski, suzuki.poulose,
steven.price
Robin Murphy <robin.murphy@arm.com> writes:
> On 2026-01-20 7:01 am, Aneesh Kumar K.V (Arm) wrote:
>> arm64 reduces the default swiotlb size (for unaligned kmalloc()
>> bouncing) when it detects that no swiotlb bouncing is needed.
>>
>> If swiotlb bouncing is explicitly forced via the command line
>> (swiotlb=force), this heuristic must not apply. Add a swiotlb helper to
>> query the forced-bounce state and use it to skip the resize when
>> bouncing is forced.
>
> This doesn't appear to be an arm64-specific concern though... Since
> swiotlb_adjust_size() already prevents resizing if the user requests a
> specific size on the command line, it seems logical enough to also not
> reduce the size (but I guess still allow it to be enlarged) there if
> force is requested.
>
Something like the below? I am wondering whether we are doing more than
what the function name suggests. Not allowing the size to be adjusted
when the kernel parameter specifies a swiotlb size seems fine. However,
I am not sure whether adding the force_bounce check is a good idea. I
only found RISC-V doing a similar size adjustment to arm64. Maybe we can
fix both architectures?
@@ -211,6 +211,8 @@ unsigned long swiotlb_size_or_default(void)
void __init swiotlb_adjust_size(unsigned long size)
{
+ unsigned long nslabs;
+
/*
* If swiotlb parameter has not been specified, give a chance to
* architectures such as those supporting memory encryption to
@@ -220,7 +222,13 @@ void __init swiotlb_adjust_size(unsigned long size)
return;
size = ALIGN(size, IO_TLB_SIZE);
- default_nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
+ nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
+ /*
+ * Don't allow to reduce size if we are forcing swiotlb bounce.
+ */
+ if (swiotlb_force_bounce && nslabs < default_nslabs)
+ return;
+ default_nslabs = nslabs;
if (round_up_default_nslabs())
size = default_nslabs << IO_TLB_SHIFT;
pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20);
* Re: [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced
2026-01-20 7:01 [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced Aneesh Kumar K.V (Arm)
2026-01-20 9:25 ` Anshuman Khandual
2026-01-20 13:20 ` Robin Murphy
@ 2026-02-04 18:52 ` Catalin Marinas
2026-02-06 6:11 ` Aneesh Kumar K.V
2 siblings, 1 reply; 8+ messages in thread
From: Catalin Marinas @ 2026-02-04 18:52 UTC (permalink / raw)
To: Aneesh Kumar K.V (Arm)
Cc: linux-arm-kernel, linux-kernel, iommu, Will Deacon,
Marek Szyprowski, Robin Murphy, suzuki.poulose, steven.price
On Tue, Jan 20, 2026 at 12:31:02PM +0530, Aneesh Kumar K.V (Arm) wrote:
> arm64 reduces the default swiotlb size (for unaligned kmalloc()
> bouncing) when it detects that no swiotlb bouncing is needed.
>
> If swiotlb bouncing is explicitly forced via the command line
> (swiotlb=force), this heuristic must not apply. Add a swiotlb helper to
> query the forced-bounce state and use it to skip the resize when
> bouncing is forced.
I think the logic you proposed in reply to Robin might work better but
have you actually hit a problem that triggered this patch? Do people
passing swiotlb=force expect a specific size for the buffer?
--
Catalin
* Re: [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced
2026-02-04 18:52 ` Catalin Marinas
@ 2026-02-06 6:11 ` Aneesh Kumar K.V
2026-03-04 10:00 ` Marek Szyprowski
0 siblings, 1 reply; 8+ messages in thread
From: Aneesh Kumar K.V @ 2026-02-06 6:11 UTC (permalink / raw)
To: Catalin Marinas
Cc: linux-arm-kernel, linux-kernel, iommu, Will Deacon,
Marek Szyprowski, Robin Murphy, suzuki.poulose, steven.price
Catalin Marinas <catalin.marinas@arm.com> writes:
> On Tue, Jan 20, 2026 at 12:31:02PM +0530, Aneesh Kumar K.V (Arm) wrote:
>> arm64 reduces the default swiotlb size (for unaligned kmalloc()
>> bouncing) when it detects that no swiotlb bouncing is needed.
>>
>> If swiotlb bouncing is explicitly forced via the command line
>> (swiotlb=force), this heuristic must not apply. Add a swiotlb helper to
>> query the forced-bounce state and use it to skip the resize when
>> bouncing is forced.
>
> I think the logic you proposed in reply to Robin might work better but
> have you actually hit a problem that triggered this patch? Do people
> passing swiotlb=force expect a specific size for the buffer?
>
This issue was observed while implementing swiotlb for a trusted device.
I was testing the protected swiotlb space using the swiotlb=force
option, which causes the device to use swiotlb even in protected mode.
As per Robin, an end user using the swiotlb=force option will also
typically specify a custom swiotlb size.
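For example (going by the swiotlb= format documented in
Documentation/admin-guide/kernel-parameters.txt), both can be combined
on the command line; the slab count below is purely illustrative, not a
value anyone in this thread has requested:

	swiotlb=131072,force	# 131072 slabs * 2KB/slab = 256MB, bouncing forced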
-aneesh
* Re: [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced
2026-02-06 6:11 ` Aneesh Kumar K.V
@ 2026-03-04 10:00 ` Marek Szyprowski
2026-03-17 5:29 ` Aneesh Kumar K.V
0 siblings, 1 reply; 8+ messages in thread
From: Marek Szyprowski @ 2026-03-04 10:00 UTC (permalink / raw)
To: Aneesh Kumar K.V, Catalin Marinas
Cc: linux-arm-kernel, linux-kernel, iommu, Will Deacon, Robin Murphy,
suzuki.poulose, steven.price
On 06.02.2026 07:11, Aneesh Kumar K.V wrote:
> Catalin Marinas <catalin.marinas@arm.com> writes:
>> On Tue, Jan 20, 2026 at 12:31:02PM +0530, Aneesh Kumar K.V (Arm) wrote:
>>> arm64 reduces the default swiotlb size (for unaligned kmalloc()
>>> bouncing) when it detects that no swiotlb bouncing is needed.
>>>
>>> If swiotlb bouncing is explicitly forced via the command line
>>> (swiotlb=force), this heuristic must not apply. Add a swiotlb helper to
>>> query the forced-bounce state and use it to skip the resize when
>>> bouncing is forced.
>> I think the logic you proposed in reply to Robin might work better but
>> have you actually hit a problem that triggered this patch? Do people
>> passing swiotlb=force expect a specific size for the buffer?
>>
> This issue was observed while implementing swiotlb for a trusted device.
> I was testing the protected swiotlb space using the swiotlb=force
> option, which causes the device to use swiotlb even in protected mode.
> As per Robin, an end user using the swiotlb=force option will also
> specify a custom swiotlb size
Does the above mean that it works fine when the user provides both
swiotlb=force and a custom swiotlb size, so no changes in the code are
actually needed?
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
* Re: [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced
2026-03-04 10:00 ` Marek Szyprowski
@ 2026-03-17 5:29 ` Aneesh Kumar K.V
0 siblings, 0 replies; 8+ messages in thread
From: Aneesh Kumar K.V @ 2026-03-17 5:29 UTC (permalink / raw)
To: Marek Szyprowski, Catalin Marinas
Cc: linux-arm-kernel, linux-kernel, iommu, Will Deacon, Robin Murphy,
suzuki.poulose, steven.price
Marek Szyprowski <m.szyprowski@samsung.com> writes:
> On 06.02.2026 07:11, Aneesh Kumar K.V wrote:
>> Catalin Marinas <catalin.marinas@arm.com> writes:
>>> On Tue, Jan 20, 2026 at 12:31:02PM +0530, Aneesh Kumar K.V (Arm) wrote:
>>>> arm64 reduces the default swiotlb size (for unaligned kmalloc()
>>>> bouncing) when it detects that no swiotlb bouncing is needed.
>>>>
>>>> If swiotlb bouncing is explicitly forced via the command line
>>>> (swiotlb=force), this heuristic must not apply. Add a swiotlb helper to
>>>> query the forced-bounce state and use it to skip the resize when
>>>> bouncing is forced.
>>> I think the logic you proposed in reply to Robin might work better but
>>> have you actually hit a problem that triggered this patch? Do people
>>> passing swiotlb=force expect a specific size for the buffer?
>>>
>> This issue was observed while implementing swiotlb for a trusted device.
>> I was testing the protected swiotlb space using the swiotlb=force
>> option, which causes the device to use swiotlb even in protected mode.
>> As per Robin, an end user using the swiotlb=force option will also
>> specify a custom swiotlb size
>
> Does the above mean that it works fine when user provides both
> swiotlb=force and custom swiotlb size, so no changes in the code are
> actually needed?
>
swiotlb_adjust_size() checks whether default_nslabs has already been
changed from the built-in default and, if so, returns early, so a
SWIOTLB size that was set elsewhere (e.g. on the command line) is not
overridden:
void __init swiotlb_adjust_size(unsigned long size)
{
/*
* If swiotlb parameter has not been specified, give a chance to
* architectures such as those supporting memory encryption to
* adjust/expand SWIOTLB size for their use.
*/
if (default_nslabs != IO_TLB_DEFAULT_SIZE >> IO_TLB_SHIFT)
return;
To handle swiotlb_force alone, we can do:
modified kernel/dma/swiotlb.c
@@ -209,6 +209,8 @@ unsigned long swiotlb_size_or_default(void)
void __init swiotlb_adjust_size(unsigned long size)
{
+ unsigned long nslabs;
+
/*
* If swiotlb parameter has not been specified, give a chance to
* architectures such as those supporting memory encryption to
@@ -218,7 +220,13 @@ void __init swiotlb_adjust_size(unsigned long size)
return;
size = ALIGN(size, IO_TLB_SIZE);
- default_nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
+ nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
+ /*
+ * Don't allow to reduce size if we are forcing swiotlb bounce.
+ */
+ if (swiotlb_force_bounce && nslabs < default_nslabs)
+ return;
+ default_nslabs = nslabs;
if (round_up_default_nslabs())
size = default_nslabs << IO_TLB_SHIFT;
pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20);