* [PATCH rc 1/5] iommu: Fix loss of errno on map failure for classic ops
2026-05-12 16:46 [PATCH rc 0/5] Fix some iommupt mistakes from Sashiko Jason Gunthorpe
@ 2026-05-12 16:46 ` Jason Gunthorpe
2026-05-13 14:57 ` Mostafa Saleh
` (2 more replies)
2026-05-12 16:46 ` [PATCH rc 2/5] iommu: Fix up map/unmap debugging for iommupt domains Jason Gunthorpe
` (4 subsequent siblings)
5 siblings, 3 replies; 24+ messages in thread
From: Jason Gunthorpe @ 2026-05-12 16:46 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Will Deacon
Cc: Alejandro Jimenez, Lu Baolu, Joerg Roedel, Josua Mayer,
Kevin Tian, Pasha Tatashin, patches, Pranjal Shrivastava,
Samiullah Khawaja, Mostafa Saleh, stable
A typo, likely from a rebase, inverted the condition and caused
errors to be lost. Fix it to be "if (ret)".
This was breaking iommu_create_device_direct_mappings() on drivers
that don't use iommupt and don't fully set up their domain in
alloc_pages() (i.e., SMMUv2). In this case the first call of
iommu_create_device_direct_mappings() should fail due to the
incompletely initialized domain. Since it wrongly returns success,
the second call to iommu_create_device_direct_mappings() doesn't
happen and IOMMU_RESV_DIRECT is never set up.
Cc: stable@vger.kernel.org
Fixes: d6c65b0fd621 ("iommupt: Avoid rewalking during map")
Reported-by: Josua Mayer <josua@solid-run.com>
Closes: https://lore.kernel.org/all/321c2e57-6a17-4aef-ba42-d2ebd577e472@solid-run.com/
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/iommu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 61c12ba782066a..6e53cfad5dc001 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2669,7 +2669,7 @@ int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
return 0;
}
ret = __iommu_map_domain_pgtbl(domain, iova, paddr, size, prot, gfp);
- if (!ret)
+ if (ret)
return ret;
trace_map(iova, paddr, size);
--
2.43.0
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH rc 1/5] iommu: Fix loss of errno on map failure for classic ops
2026-05-12 16:46 ` [PATCH rc 1/5] iommu: Fix loss of errno on map failure for classic ops Jason Gunthorpe
@ 2026-05-13 14:57 ` Mostafa Saleh
2026-05-13 16:32 ` Samiullah Khawaja
2026-05-13 17:42 ` Pranjal Shrivastava
2 siblings, 0 replies; 24+ messages in thread
From: Mostafa Saleh @ 2026-05-13 14:57 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Pranjal Shrivastava, Samiullah Khawaja, stable
On Tue, May 12, 2026 at 01:46:13PM -0300, Jason Gunthorpe wrote:
> A typo, likely from a rebase, inverted the condition and caused
> errors to be lost. Fix it to be "if (ret)".
>
> This was breaking iommu_create_device_direct_mappings() on drivers
> that don't use iommupt and don't fully set up their domain in
> alloc_pages() (i.e., SMMUv2). In this case the first call of
> iommu_create_device_direct_mappings() should fail due to the
> incompletely initialized domain. Since it wrongly returns success,
> the second call to iommu_create_device_direct_mappings() doesn't
> happen and IOMMU_RESV_DIRECT is never set up.
>
> Cc: stable@vger.kernel.org
> Fixes: d6c65b0fd621 ("iommupt: Avoid rewalking during map")
> Reported-by: Josua Mayer <josua@solid-run.com>
> Closes: https://lore.kernel.org/all/321c2e57-6a17-4aef-ba42-d2ebd577e472@solid-run.com/
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Mostafa Saleh <smostafa@google.com>
Thanks,
Mostafa
> ---
> drivers/iommu/iommu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 61c12ba782066a..6e53cfad5dc001 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -2669,7 +2669,7 @@ int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
> return 0;
> }
> ret = __iommu_map_domain_pgtbl(domain, iova, paddr, size, prot, gfp);
> - if (!ret)
> + if (ret)
> return ret;
>
> trace_map(iova, paddr, size);
> --
> 2.43.0
>
* Re: [PATCH rc 1/5] iommu: Fix loss of errno on map failure for classic ops
2026-05-12 16:46 ` [PATCH rc 1/5] iommu: Fix loss of errno on map failure for classic ops Jason Gunthorpe
2026-05-13 14:57 ` Mostafa Saleh
@ 2026-05-13 16:32 ` Samiullah Khawaja
2026-05-13 17:42 ` Pranjal Shrivastava
2 siblings, 0 replies; 24+ messages in thread
From: Samiullah Khawaja @ 2026-05-13 16:32 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Pranjal Shrivastava, Mostafa Saleh, stable
On Tue, May 12, 2026 at 01:46:13PM -0300, Jason Gunthorpe wrote:
>A typo, likely from a rebase, inverted the condition and caused
>errors to be lost. Fix it to be "if (ret)".
>
>This was breaking iommu_create_device_direct_mappings() on drivers
>that don't use iommupt and don't fully set up their domain in
>alloc_pages() (i.e., SMMUv2). In this case the first call of
>iommu_create_device_direct_mappings() should fail due to the
>incompletely initialized domain. Since it wrongly returns success,
>the second call to iommu_create_device_direct_mappings() doesn't
>happen and IOMMU_RESV_DIRECT is never set up.
>
>Cc: stable@vger.kernel.org
>Fixes: d6c65b0fd621 ("iommupt: Avoid rewalking during map")
>Reported-by: Josua Mayer <josua@solid-run.com>
>Closes: https://lore.kernel.org/all/321c2e57-6a17-4aef-ba42-d2ebd577e472@solid-run.com/
>Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
>---
> drivers/iommu/iommu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
>index 61c12ba782066a..6e53cfad5dc001 100644
>--- a/drivers/iommu/iommu.c
>+++ b/drivers/iommu/iommu.c
>@@ -2669,7 +2669,7 @@ int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
> return 0;
> }
> ret = __iommu_map_domain_pgtbl(domain, iova, paddr, size, prot, gfp);
>- if (!ret)
>+ if (ret)
> return ret;
>
> trace_map(iova, paddr, size);
>--
>2.43.0
>
Reviewed-by: Samiullah Khawaja <skhawaja@google.com>
* Re: [PATCH rc 1/5] iommu: Fix loss of errno on map failure for classic ops
2026-05-12 16:46 ` [PATCH rc 1/5] iommu: Fix loss of errno on map failure for classic ops Jason Gunthorpe
2026-05-13 14:57 ` Mostafa Saleh
2026-05-13 16:32 ` Samiullah Khawaja
@ 2026-05-13 17:42 ` Pranjal Shrivastava
2 siblings, 0 replies; 24+ messages in thread
From: Pranjal Shrivastava @ 2026-05-13 17:42 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Samiullah Khawaja, Mostafa Saleh, stable
On Tue, May 12, 2026 at 01:46:13PM -0300, Jason Gunthorpe wrote:
> A typo, likely from a rebase, inverted the condition and caused
> errors to be lost. Fix it to be "if (ret)".
>
> This was breaking iommu_create_device_direct_mappings() on drivers
> that don't use iommupt and don't fully set up their domain in
> alloc_pages() (i.e., SMMUv2). In this case the first call of
> iommu_create_device_direct_mappings() should fail due to the
> incompletely initialized domain. Since it wrongly returns success,
> the second call to iommu_create_device_direct_mappings() doesn't
> happen and IOMMU_RESV_DIRECT is never set up.
>
> Cc: stable@vger.kernel.org
> Fixes: d6c65b0fd621 ("iommupt: Avoid rewalking during map")
> Reported-by: Josua Mayer <josua@solid-run.com>
> Closes: https://lore.kernel.org/all/321c2e57-6a17-4aef-ba42-d2ebd577e472@solid-run.com/
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Pranjal Shrivastava <praan@google.com>
Thanks,
Praan
* [PATCH rc 2/5] iommu: Fix up map/unmap debugging for iommupt domains
2026-05-12 16:46 [PATCH rc 0/5] Fix some iommupt mistakes from Sashiko Jason Gunthorpe
2026-05-12 16:46 ` [PATCH rc 1/5] iommu: Fix loss of errno on map failure for classic ops Jason Gunthorpe
@ 2026-05-12 16:46 ` Jason Gunthorpe
2026-05-13 15:11 ` Mostafa Saleh
` (2 more replies)
2026-05-12 16:46 ` [PATCH rc 3/5] iommu: Handle unmap error when iommu_debug is enabled Jason Gunthorpe
` (3 subsequent siblings)
5 siblings, 3 replies; 24+ messages in thread
From: Jason Gunthorpe @ 2026-05-12 16:46 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Will Deacon
Cc: Alejandro Jimenez, Lu Baolu, Joerg Roedel, Josua Mayer,
Kevin Tian, Pasha Tatashin, patches, Pranjal Shrivastava,
Samiullah Khawaja, Mostafa Saleh, stable
Sashiko noticed a few issues in this path, and a few more were
found on review. Tidy them up further. These are intertwined
because the debug code depends on some of the WARN_ONs to function
right:
Lift into iommu_map_nosync():
- The might_sleep_if()
- 0 pgsize_bitmap WARN_ON
- Promote the illegal domain->type to a WARN_ON
- WARN_ON for illegal gfp flags
Then remove the return 0 since it is now safe to call
iommu_debug_map().
Lift into __iommu_unmap():
- 0 pgsize_bitmap WARN_ON
- Promote the illegal domain->type to a WARN_ON
- iommu_debug_unmap_begin()
This now pairs with the unconditional iommu_debug_map() on the
mapping side. Thus iommu debugging now works for iommupt along
with some of the other debugging features.
Fixes: 99fb8afa16ad ("iommupt: Directly call iommupt's unmap_range()")
Fixes: d6c65b0fd621 ("iommupt: Avoid rewalking during map")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/iommu.c | 43 ++++++++++++++++++++++---------------------
1 file changed, 22 insertions(+), 21 deletions(-)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 6e53cfad5dc001..e334588a2476b4 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2583,19 +2583,9 @@ static int __iommu_map_domain_pgtbl(struct iommu_domain *domain,
size_t orig_size = size;
int ret = 0;
- might_sleep_if(gfpflags_allow_blocking(gfp));
-
- if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
- return -EINVAL;
-
- if (WARN_ON(!ops->map_pages || domain->pgsize_bitmap == 0UL))
+ if (WARN_ON(!ops->map_pages))
return -ENODEV;
- /* Discourage passing strange GFP flags */
- if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
- __GFP_HIGHMEM)))
- return -EINVAL;
-
/* find out the minimum page size supported */
min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
@@ -2657,6 +2647,15 @@ int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
struct pt_iommu *pt = iommupt_from_domain(domain);
int ret;
+ might_sleep_if(gfpflags_allow_blocking(gfp));
+
+ /* Discourage passing strange GFP flags or illegal domains */
+ if (WARN_ON_ONCE(!(domain->type & __IOMMU_DOMAIN_PAGING) ||
+ !domain->pgsize_bitmap ||
+ (gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
+ __GFP_HIGHMEM))))
+ return -EINVAL;
+
if (pt) {
size_t mapped = 0;
@@ -2666,11 +2665,12 @@ int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
iommu_unmap(domain, iova, mapped);
return ret;
}
- return 0;
+ } else {
+ ret = __iommu_map_domain_pgtbl(domain, iova, paddr, size, prot,
+ gfp);
+ if (ret)
+ return ret;
}
- ret = __iommu_map_domain_pgtbl(domain, iova, paddr, size, prot, gfp);
- if (ret)
- return ret;
trace_map(iova, paddr, size);
iommu_debug_map(domain, paddr, size);
@@ -2702,10 +2702,7 @@ __iommu_unmap_domain_pgtbl(struct iommu_domain *domain, unsigned long iova,
size_t unmapped_page, unmapped = 0;
unsigned int min_pagesz;
- if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
- return 0;
-
- if (WARN_ON(!ops->unmap_pages || domain->pgsize_bitmap == 0UL))
+ if (WARN_ON(!ops->unmap_pages))
return 0;
/* find out the minimum page size supported */
@@ -2724,8 +2721,6 @@ __iommu_unmap_domain_pgtbl(struct iommu_domain *domain, unsigned long iova,
pr_debug("unmap this: iova 0x%lx size 0x%zx\n", iova, size);
- iommu_debug_unmap_begin(domain, iova, size);
-
/*
* Keep iterating until we either unmap 'size' bytes (or more)
* or we hit an area that isn't mapped.
@@ -2761,6 +2756,12 @@ static size_t __iommu_unmap(struct iommu_domain *domain, unsigned long iova,
struct pt_iommu *pt = iommupt_from_domain(domain);
size_t unmapped;
+ if (WARN_ON_ONCE(!(domain->type & __IOMMU_DOMAIN_PAGING) ||
+ !domain->pgsize_bitmap))
+ return 0;
+
+ iommu_debug_unmap_begin(domain, iova, size);
+
if (pt)
unmapped = pt->ops->unmap_range(pt, iova, size, iotlb_gather);
else
--
2.43.0
* Re: [PATCH rc 2/5] iommu: Fix up map/unmap debugging for iommupt domains
2026-05-12 16:46 ` [PATCH rc 2/5] iommu: Fix up map/unmap debugging for iommupt domains Jason Gunthorpe
@ 2026-05-13 15:11 ` Mostafa Saleh
2026-05-13 16:45 ` Samiullah Khawaja
2026-05-13 17:44 ` Pranjal Shrivastava
2 siblings, 0 replies; 24+ messages in thread
From: Mostafa Saleh @ 2026-05-13 15:11 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Pranjal Shrivastava, Samiullah Khawaja, stable
On Tue, May 12, 2026 at 01:46:14PM -0300, Jason Gunthorpe wrote:
> Sashiko noticed a few issues in this path, and a few more were
> found on review. Tidy them up further. These are intertwined
> because the debug code depends on some of the WARN_ONs to function
> right:
>
> Lift into iommu_map_nosync():
> - The might_sleep_if()
> - 0 pgsize_bitmap WARN_ON
> - Promote the illegal domain->type to a WARN_ON
> - WARN_ON for illegal gfp flags
>
> Then remove the return 0 since it is now safe to call
> iommu_debug_map().
>
> Lift into __iommu_unmap():
> - 0 pgsize_bitmap WARN_ON
> - Promote the illegal domain->type to a WARN_ON
> - iommu_debug_unmap_begin()
>
> This now pairs with the unconditional iommu_debug_map() on the
> mapping side. Thus iommu debugging now works for iommupt along
> with some of the other debugging features.
>
> Fixes: 99fb8afa16ad ("iommupt: Directly call iommupt's unmap_range()")
> Fixes: d6c65b0fd621 ("iommupt: Avoid rewalking during map")
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Mostafa Saleh <smostafa@google.com>
Thanks,
Mostafa
> ---
> drivers/iommu/iommu.c | 43 ++++++++++++++++++++++---------------------
> 1 file changed, 22 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 6e53cfad5dc001..e334588a2476b4 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -2583,19 +2583,9 @@ static int __iommu_map_domain_pgtbl(struct iommu_domain *domain,
> size_t orig_size = size;
> int ret = 0;
>
> - might_sleep_if(gfpflags_allow_blocking(gfp));
> -
> - if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
> - return -EINVAL;
> -
> - if (WARN_ON(!ops->map_pages || domain->pgsize_bitmap == 0UL))
> + if (WARN_ON(!ops->map_pages))
> return -ENODEV;
>
> - /* Discourage passing strange GFP flags */
> - if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
> - __GFP_HIGHMEM)))
> - return -EINVAL;
> -
> /* find out the minimum page size supported */
> min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
>
> @@ -2657,6 +2647,15 @@ int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
> struct pt_iommu *pt = iommupt_from_domain(domain);
> int ret;
>
> + might_sleep_if(gfpflags_allow_blocking(gfp));
> +
> + /* Discourage passing strange GFP flags or illegal domains */
> + if (WARN_ON_ONCE(!(domain->type & __IOMMU_DOMAIN_PAGING) ||
> + !domain->pgsize_bitmap ||
> + (gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
> + __GFP_HIGHMEM))))
> + return -EINVAL;
> +
> if (pt) {
> size_t mapped = 0;
>
> @@ -2666,11 +2665,12 @@ int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
> iommu_unmap(domain, iova, mapped);
> return ret;
> }
> - return 0;
> + } else {
> + ret = __iommu_map_domain_pgtbl(domain, iova, paddr, size, prot,
> + gfp);
> + if (ret)
> + return ret;
> }
> - ret = __iommu_map_domain_pgtbl(domain, iova, paddr, size, prot, gfp);
> - if (ret)
> - return ret;
>
> trace_map(iova, paddr, size);
> iommu_debug_map(domain, paddr, size);
> @@ -2702,10 +2702,7 @@ __iommu_unmap_domain_pgtbl(struct iommu_domain *domain, unsigned long iova,
> size_t unmapped_page, unmapped = 0;
> unsigned int min_pagesz;
>
> - if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
> - return 0;
> -
> - if (WARN_ON(!ops->unmap_pages || domain->pgsize_bitmap == 0UL))
> + if (WARN_ON(!ops->unmap_pages))
> return 0;
>
> /* find out the minimum page size supported */
> @@ -2724,8 +2721,6 @@ __iommu_unmap_domain_pgtbl(struct iommu_domain *domain, unsigned long iova,
>
> pr_debug("unmap this: iova 0x%lx size 0x%zx\n", iova, size);
>
> - iommu_debug_unmap_begin(domain, iova, size);
> -
> /*
> * Keep iterating until we either unmap 'size' bytes (or more)
> * or we hit an area that isn't mapped.
> @@ -2761,6 +2756,12 @@ static size_t __iommu_unmap(struct iommu_domain *domain, unsigned long iova,
> struct pt_iommu *pt = iommupt_from_domain(domain);
> size_t unmapped;
>
> + if (WARN_ON_ONCE(!(domain->type & __IOMMU_DOMAIN_PAGING) ||
> + !domain->pgsize_bitmap))
> + return 0;
> +
> + iommu_debug_unmap_begin(domain, iova, size);
> +
> if (pt)
> unmapped = pt->ops->unmap_range(pt, iova, size, iotlb_gather);
> else
> --
> 2.43.0
>
* Re: [PATCH rc 2/5] iommu: Fix up map/unmap debugging for iommupt domains
2026-05-12 16:46 ` [PATCH rc 2/5] iommu: Fix up map/unmap debugging for iommupt domains Jason Gunthorpe
2026-05-13 15:11 ` Mostafa Saleh
@ 2026-05-13 16:45 ` Samiullah Khawaja
2026-05-13 17:44 ` Pranjal Shrivastava
2 siblings, 0 replies; 24+ messages in thread
From: Samiullah Khawaja @ 2026-05-13 16:45 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Pranjal Shrivastava, Mostafa Saleh, stable
On Tue, May 12, 2026 at 01:46:14PM -0300, Jason Gunthorpe wrote:
>Sashiko noticed a few issues in this path, and a few more were
>found on review. Tidy them up further. These are intertwined
>because the debug code depends on some of the WARN_ONs to function
>right:
>
>Lift into iommu_map_nosync():
>- The might_sleep_if()
>- 0 pgsize_bitmap WARN_ON
>- Promote the illegal domain->type to a WARN_ON
>- WARN_ON for illegal gfp flags
>
>Then remove the return 0 since it is now safe to call
>iommu_debug_map().
>
>Lift into __iommu_unmap():
>- 0 pgsize_bitmap WARN_ON
>- Promote the illegal domain->type to a WARN_ON
>- iommu_debug_unmap_begin()
>
>This now pairs with the unconditional iommu_debug_map() on the
>mapping side. Thus iommu debugging now works for iommupt along
>with some of the other debugging features.
>
>Fixes: 99fb8afa16ad ("iommupt: Directly call iommupt's unmap_range()")
>Fixes: d6c65b0fd621 ("iommupt: Avoid rewalking during map")
>Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
>---
> drivers/iommu/iommu.c | 43 ++++++++++++++++++++++---------------------
> 1 file changed, 22 insertions(+), 21 deletions(-)
>
Reviewed-by: Samiullah Khawaja <skhawaja@google.com>
* Re: [PATCH rc 2/5] iommu: Fix up map/unmap debugging for iommupt domains
2026-05-12 16:46 ` [PATCH rc 2/5] iommu: Fix up map/unmap debugging for iommupt domains Jason Gunthorpe
2026-05-13 15:11 ` Mostafa Saleh
2026-05-13 16:45 ` Samiullah Khawaja
@ 2026-05-13 17:44 ` Pranjal Shrivastava
2 siblings, 0 replies; 24+ messages in thread
From: Pranjal Shrivastava @ 2026-05-13 17:44 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Samiullah Khawaja, Mostafa Saleh, stable
On Tue, May 12, 2026 at 01:46:14PM -0300, Jason Gunthorpe wrote:
> Sashiko noticed a few issues in this path, and a few more were
> found on review. Tidy them up further. These are intertwined
> because the debug code depends on some of the WARN_ONs to function
> right:
>
> Lift into iommu_map_nosync():
> - The might_sleep_if()
> - 0 pgsize_bitmap WARN_ON
> - Promote the illegal domain->type to a WARN_ON
> - WARN_ON for illegal gfp flags
>
> Then remove the return 0 since it is now safe to call
> iommu_debug_map().
>
> Lift into __iommu_unmap():
> - 0 pgsize_bitmap WARN_ON
> - Promote the illegal domain->type to a WARN_ON
> - iommu_debug_unmap_begin()
>
> This now pairs with the unconditional iommu_debug_map() on the
> mapping side. Thus iommu debugging now works for iommupt along
> with some of the other debugging features.
>
> Fixes: 99fb8afa16ad ("iommupt: Directly call iommupt's unmap_range()")
> Fixes: d6c65b0fd621 ("iommupt: Avoid rewalking during map")
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Pranjal Shrivastava <praan@google.com>
Thanks
* [PATCH rc 3/5] iommu: Handle unmap error when iommu_debug is enabled
2026-05-12 16:46 [PATCH rc 0/5] Fix some iommupt mistakes from Sashiko Jason Gunthorpe
2026-05-12 16:46 ` [PATCH rc 1/5] iommu: Fix loss of errno on map failure for classic ops Jason Gunthorpe
2026-05-12 16:46 ` [PATCH rc 2/5] iommu: Fix up map/unmap debugging for iommupt domains Jason Gunthorpe
@ 2026-05-12 16:46 ` Jason Gunthorpe
2026-05-13 15:13 ` Mostafa Saleh
` (2 more replies)
2026-05-12 16:46 ` [PATCH rc 4/5] iommupt: Check for missing PAGE_SIZE in the pgsize_bitmap Jason Gunthorpe
` (2 subsequent siblings)
5 siblings, 3 replies; 24+ messages in thread
From: Jason Gunthorpe @ 2026-05-12 16:46 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Will Deacon
Cc: Alejandro Jimenez, Lu Baolu, Joerg Roedel, Josua Mayer,
Kevin Tian, Pasha Tatashin, patches, Pranjal Shrivastava,
Samiullah Khawaja, Mostafa Saleh, stable
Sashiko noticed a latent bug: the map error flow called iommu_unmap(),
which calls iommu_debug_unmap_begin()/iommu_debug_unmap_end(). However,
since this is an error path, the map flow never actually established the
original iommu_debug_map() record, so the debug tracking malfunctions.
Lift the unmap error handling into iommu_map_nosync() and reorder it so
that trace_map()/iommu_debug_map() record the partial mapping and then
immediately unmap it. This avoids creating unbalanced tracking and
provides saner tracing, instead of an unmap unmatched to any map.
Fixes: ccc21213f013 ("iommu: Add calls for IOMMU_DEBUG_PAGEALLOC")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/iommu.c | 49 +++++++++++++++++--------------------------
1 file changed, 19 insertions(+), 30 deletions(-)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index e334588a2476b4..e5fa9875900228 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2575,12 +2575,11 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
static int __iommu_map_domain_pgtbl(struct iommu_domain *domain,
unsigned long iova, phys_addr_t paddr,
- size_t size, int prot, gfp_t gfp)
+ size_t size, int prot, gfp_t gfp,
+ size_t *mapped)
{
const struct iommu_domain_ops *ops = domain->ops;
- unsigned long orig_iova = iova;
unsigned int min_pagesz;
- size_t orig_size = size;
int ret = 0;
if (WARN_ON(!ops->map_pages))
@@ -2603,31 +2602,25 @@ static int __iommu_map_domain_pgtbl(struct iommu_domain *domain,
pr_debug("map: iova 0x%lx pa %pa size 0x%zx\n", iova, &paddr, size);
while (size) {
- size_t pgsize, count, mapped = 0;
+ size_t pgsize, count, op_mapped = 0;
pgsize = iommu_pgsize(domain, iova, paddr, size, &count);
pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx count %zu\n",
iova, &paddr, pgsize, count);
ret = ops->map_pages(domain, iova, paddr, pgsize, count, prot,
- gfp, &mapped);
+ gfp, &op_mapped);
/*
* Some pages may have been mapped, even if an error occurred,
* so we should account for those so they can be unmapped.
*/
- size -= mapped;
-
+ *mapped += op_mapped;
if (ret)
- break;
+ return ret;
- iova += mapped;
- paddr += mapped;
- }
-
- /* unroll mapping in case something went wrong */
- if (ret) {
- iommu_unmap(domain, orig_iova, orig_size - size);
- return ret;
+ size -= op_mapped;
+ iova += op_mapped;
+ paddr += op_mapped;
}
return 0;
}
@@ -2645,6 +2638,7 @@ int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
{
struct pt_iommu *pt = iommupt_from_domain(domain);
+ size_t mapped = 0;
int ret;
might_sleep_if(gfpflags_allow_blocking(gfp));
@@ -2656,24 +2650,19 @@ int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
__GFP_HIGHMEM))))
return -EINVAL;
- if (pt) {
- size_t mapped = 0;
-
+ if (pt)
ret = pt->ops->map_range(pt, iova, paddr, size, prot, gfp,
&mapped);
- if (ret) {
- iommu_unmap(domain, iova, mapped);
- return ret;
- }
- } else {
+ else
ret = __iommu_map_domain_pgtbl(domain, iova, paddr, size, prot,
- gfp);
- if (ret)
- return ret;
- }
+ gfp, &mapped);
- trace_map(iova, paddr, size);
- iommu_debug_map(domain, paddr, size);
+ trace_map(iova, paddr, mapped);
+ iommu_debug_map(domain, paddr, mapped);
+ if (ret) {
+ iommu_unmap(domain, iova, mapped);
+ return ret;
+ }
return 0;
}
--
2.43.0
* Re: [PATCH rc 3/5] iommu: Handle unmap error when iommu_debug is enabled
2026-05-12 16:46 ` [PATCH rc 3/5] iommu: Handle unmap error when iommu_debug is enabled Jason Gunthorpe
@ 2026-05-13 15:13 ` Mostafa Saleh
2026-05-13 15:18 ` Jason Gunthorpe
2026-05-13 16:56 ` Samiullah Khawaja
2026-05-13 17:47 ` Pranjal Shrivastava
2 siblings, 1 reply; 24+ messages in thread
From: Mostafa Saleh @ 2026-05-13 15:13 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Pranjal Shrivastava, Samiullah Khawaja, stable
On Tue, May 12, 2026 at 01:46:15PM -0300, Jason Gunthorpe wrote:
> Sashiko noticed a latent bug: the map error flow called iommu_unmap(),
> which calls iommu_debug_unmap_begin()/iommu_debug_unmap_end(). However,
> since this is an error path, the map flow never actually established the
> original iommu_debug_map() record, so the debug tracking malfunctions.
>
> Lift the unmap error handling into iommu_map_nosync() and reorder it so
> that trace_map()/iommu_debug_map() record the partial mapping and then
> immediately unmap it. This avoids creating unbalanced tracking and
> provides saner tracing, instead of an unmap unmatched to any map.
There is usually little coverage in such paths. I have been thinking
of creating some test suites at the IOMMU and io-pgtable/SMMUv3 level
to cover some of the failure cases and some of the tricky page table
operations; the problem is that there are many things to cover (from
crashes/logical issues to the TLB invalidation correctness of some
operations).
I will try to post something when I get some time.
Reviewed-by: Mostafa Saleh <smostafa@google.com>
Thanks,
Mostafa
>
> Fixes: ccc21213f013 ("iommu: Add calls for IOMMU_DEBUG_PAGEALLOC")
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
> drivers/iommu/iommu.c | 49 +++++++++++++++++--------------------------
> 1 file changed, 19 insertions(+), 30 deletions(-)
>
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index e334588a2476b4..e5fa9875900228 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -2575,12 +2575,11 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
>
> static int __iommu_map_domain_pgtbl(struct iommu_domain *domain,
> unsigned long iova, phys_addr_t paddr,
> - size_t size, int prot, gfp_t gfp)
> + size_t size, int prot, gfp_t gfp,
> + size_t *mapped)
> {
> const struct iommu_domain_ops *ops = domain->ops;
> - unsigned long orig_iova = iova;
> unsigned int min_pagesz;
> - size_t orig_size = size;
> int ret = 0;
>
> if (WARN_ON(!ops->map_pages))
> @@ -2603,31 +2602,25 @@ static int __iommu_map_domain_pgtbl(struct iommu_domain *domain,
> pr_debug("map: iova 0x%lx pa %pa size 0x%zx\n", iova, &paddr, size);
>
> while (size) {
> - size_t pgsize, count, mapped = 0;
> + size_t pgsize, count, op_mapped = 0;
>
> pgsize = iommu_pgsize(domain, iova, paddr, size, &count);
>
> pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx count %zu\n",
> iova, &paddr, pgsize, count);
> ret = ops->map_pages(domain, iova, paddr, pgsize, count, prot,
> - gfp, &mapped);
> + gfp, &op_mapped);
> /*
> * Some pages may have been mapped, even if an error occurred,
> * so we should account for those so they can be unmapped.
> */
> - size -= mapped;
> -
> + *mapped += op_mapped;
> if (ret)
> - break;
> + return ret;
>
> - iova += mapped;
> - paddr += mapped;
> - }
> -
> - /* unroll mapping in case something went wrong */
> - if (ret) {
> - iommu_unmap(domain, orig_iova, orig_size - size);
> - return ret;
> + size -= op_mapped;
> + iova += op_mapped;
> + paddr += op_mapped;
> }
> return 0;
> }
> @@ -2645,6 +2638,7 @@ int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
> phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> {
> struct pt_iommu *pt = iommupt_from_domain(domain);
> + size_t mapped = 0;
> int ret;
>
> might_sleep_if(gfpflags_allow_blocking(gfp));
> @@ -2656,24 +2650,19 @@ int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
> __GFP_HIGHMEM))))
> return -EINVAL;
>
> - if (pt) {
> - size_t mapped = 0;
> -
> + if (pt)
> ret = pt->ops->map_range(pt, iova, paddr, size, prot, gfp,
> &mapped);
> - if (ret) {
> - iommu_unmap(domain, iova, mapped);
> - return ret;
> - }
> - } else {
> + else
> ret = __iommu_map_domain_pgtbl(domain, iova, paddr, size, prot,
> - gfp);
> - if (ret)
> - return ret;
> - }
> + gfp, &mapped);
>
> - trace_map(iova, paddr, size);
> - iommu_debug_map(domain, paddr, size);
> + trace_map(iova, paddr, mapped);
> + iommu_debug_map(domain, paddr, mapped);
> + if (ret) {
> + iommu_unmap(domain, iova, mapped);
> + return ret;
> + }
> return 0;
> }
>
> --
> 2.43.0
>
* Re: [PATCH rc 3/5] iommu: Handle unmap error when iommu_debug is enabled
2026-05-13 15:13 ` Mostafa Saleh
@ 2026-05-13 15:18 ` Jason Gunthorpe
0 siblings, 0 replies; 24+ messages in thread
From: Jason Gunthorpe @ 2026-05-13 15:18 UTC (permalink / raw)
To: Mostafa Saleh
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Pranjal Shrivastava, Samiullah Khawaja, stable
On Wed, May 13, 2026 at 03:13:41PM +0000, Mostafa Saleh wrote:
> On Tue, May 12, 2026 at 01:46:15PM -0300, Jason Gunthorpe wrote:
> > Sashiko noticed a latent bug where the map error flow called iommu_unmap()
> > which calls iommu_debug_unmap_begin()/iommu_debug_unmap_end() however
> > since this is an error path the map flow never actually established the
> > original iommu_debug_map() it will malfunction.
> >
> > Lift the unmap error handling into iommu_map_nosync() and reorder it so
> > the trace_map()/iommu_debug_map() records the partial mapping and then
> > immediately unmaps it. This avoid creating the unbalanced tracking and
> > provides saner tracing instead of a unmap unmatched to any map.
>
> There is usually littel coverage in such paths, I have been thinking
> of creating some test-suites on the IOMMU and io-pgtable/SMMUv3 level
> to cover some of the failure cases and some of the tricky page table
> operations, the problem is that there are many things to cover (starting
> from the crashes/logical issues till TLB invalidation correctness of
> some operations).
I think the existing kunit for iommupt would have caught this if it is run
with the debug option. I didn't test the io-pgtable stuff but I'm
working on removing more of that..
Jason
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH rc 3/5] iommu: Handle unmap error when iommu_debug is enabled
2026-05-12 16:46 ` [PATCH rc 3/5] iommu: Handle unmap error when iommu_debug is enabled Jason Gunthorpe
2026-05-13 15:13 ` Mostafa Saleh
@ 2026-05-13 16:56 ` Samiullah Khawaja
2026-05-13 17:47 ` Pranjal Shrivastava
2 siblings, 0 replies; 24+ messages in thread
From: Samiullah Khawaja @ 2026-05-13 16:56 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Pranjal Shrivastava, Mostafa Saleh, stable
On Tue, May 12, 2026 at 01:46:15PM -0300, Jason Gunthorpe wrote:
>Sashiko noticed a latent bug where the map error flow called iommu_unmap(),
>which calls iommu_debug_unmap_begin()/iommu_debug_unmap_end(); however,
>since this is an error path, the map flow never actually established the
>original iommu_debug_map(), so it will malfunction.
>
>Lift the unmap error handling into iommu_map_nosync() and reorder it so
>the trace_map()/iommu_debug_map() records the partial mapping and then
>immediately unmaps it. This avoids creating the unbalanced tracking and
>provides saner tracing instead of an unmap unmatched to any map.
>
>Fixes: ccc21213f013 ("iommu: Add calls for IOMMU_DEBUG_PAGEALLOC")
>Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
>---
> drivers/iommu/iommu.c | 49 +++++++++++++++++--------------------------
> 1 file changed, 19 insertions(+), 30 deletions(-)
>
>diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
>index e334588a2476b4..e5fa9875900228 100644
>--- a/drivers/iommu/iommu.c
>+++ b/drivers/iommu/iommu.c
>@@ -2575,12 +2575,11 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
>
> static int __iommu_map_domain_pgtbl(struct iommu_domain *domain,
> unsigned long iova, phys_addr_t paddr,
>- size_t size, int prot, gfp_t gfp)
>+ size_t size, int prot, gfp_t gfp,
>+ size_t *mapped)
> {
> const struct iommu_domain_ops *ops = domain->ops;
>- unsigned long orig_iova = iova;
> unsigned int min_pagesz;
>- size_t orig_size = size;
> int ret = 0;
>
> if (WARN_ON(!ops->map_pages))
>@@ -2603,31 +2602,25 @@ static int __iommu_map_domain_pgtbl(struct iommu_domain *domain,
> pr_debug("map: iova 0x%lx pa %pa size 0x%zx\n", iova, &paddr, size);
>
> while (size) {
>- size_t pgsize, count, mapped = 0;
>+ size_t pgsize, count, op_mapped = 0;
>
> pgsize = iommu_pgsize(domain, iova, paddr, size, &count);
>
> pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx count %zu\n",
> iova, &paddr, pgsize, count);
> ret = ops->map_pages(domain, iova, paddr, pgsize, count, prot,
>- gfp, &mapped);
>+ gfp, &op_mapped);
> /*
> * Some pages may have been mapped, even if an error occurred,
> * so we should account for those so they can be unmapped.
> */
>- size -= mapped;
>-
>+ *mapped += op_mapped;
> if (ret)
>- break;
>+ return ret;
>
>- iova += mapped;
>- paddr += mapped;
>- }
>-
>- /* unroll mapping in case something went wrong */
>- if (ret) {
>- iommu_unmap(domain, orig_iova, orig_size - size);
>- return ret;
>+ size -= op_mapped;
>+ iova += op_mapped;
>+ paddr += op_mapped;
> }
> return 0;
> }
>@@ -2645,6 +2638,7 @@ int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
> phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> {
> struct pt_iommu *pt = iommupt_from_domain(domain);
>+ size_t mapped = 0;
> int ret;
>
> might_sleep_if(gfpflags_allow_blocking(gfp));
>@@ -2656,24 +2650,19 @@ int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
> __GFP_HIGHMEM))))
> return -EINVAL;
>
>- if (pt) {
>- size_t mapped = 0;
>-
>+ if (pt)
> ret = pt->ops->map_range(pt, iova, paddr, size, prot, gfp,
> &mapped);
>- if (ret) {
>- iommu_unmap(domain, iova, mapped);
>- return ret;
>- }
>- } else {
>+ else
> ret = __iommu_map_domain_pgtbl(domain, iova, paddr, size, prot,
>- gfp);
>- if (ret)
>- return ret;
>- }
>+ gfp, &mapped);
>
>- trace_map(iova, paddr, size);
>- iommu_debug_map(domain, paddr, size);
>+ trace_map(iova, paddr, mapped);
>+ iommu_debug_map(domain, paddr, mapped);
>+ if (ret) {
>+ iommu_unmap(domain, iova, mapped);
>+ return ret;
>+ }
> return 0;
> }
>
>--
>2.43.0
>
Reviewed-by: Samiullah Khawaja <skhawaja@google.com>
^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH rc 3/5] iommu: Handle unmap error when iommu_debug is enabled
2026-05-12 16:46 ` [PATCH rc 3/5] iommu: Handle unmap error when iommu_debug is enabled Jason Gunthorpe
2026-05-13 15:13 ` Mostafa Saleh
2026-05-13 16:56 ` Samiullah Khawaja
@ 2026-05-13 17:47 ` Pranjal Shrivastava
2 siblings, 0 replies; 24+ messages in thread
From: Pranjal Shrivastava @ 2026-05-13 17:47 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Samiullah Khawaja, Mostafa Saleh, stable
On Tue, May 12, 2026 at 01:46:15PM -0300, Jason Gunthorpe wrote:
> Sashiko noticed a latent bug where the map error flow called iommu_unmap(),
> which calls iommu_debug_unmap_begin()/iommu_debug_unmap_end(); however,
> since this is an error path, the map flow never actually established the
> original iommu_debug_map(), so it will malfunction.
>
> Lift the unmap error handling into iommu_map_nosync() and reorder it so
> the trace_map()/iommu_debug_map() records the partial mapping and then
> immediately unmaps it. This avoids creating the unbalanced tracking and
> provides saner tracing instead of an unmap unmatched to any map.
>
> Fixes: ccc21213f013 ("iommu: Add calls for IOMMU_DEBUG_PAGEALLOC")
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Pranjal Shrivastava <praan@google.com>
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH rc 4/5] iommupt: Check for missing PAGE_SIZE in the pgsize_bitmap
2026-05-12 16:46 [PATCH rc 0/5] Fix some iommupt mistakes from Sashiko Jason Gunthorpe
` (2 preceding siblings ...)
2026-05-12 16:46 ` [PATCH rc 3/5] iommu: Handle unmap error when iommu_debug is enabled Jason Gunthorpe
@ 2026-05-12 16:46 ` Jason Gunthorpe
2026-05-13 17:46 ` Samiullah Khawaja
2026-05-13 17:48 ` Pranjal Shrivastava
2026-05-12 16:46 ` [PATCH rc 5/5] iommupt: Fix the end_index calculation in __map_range_leaf() Jason Gunthorpe
2026-05-13 11:08 ` [PATCH rc 0/5] Fix some iommupt mistakes from Sashiko Josua Mayer
5 siblings, 2 replies; 24+ messages in thread
From: Jason Gunthorpe @ 2026-05-12 16:46 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Will Deacon
Cc: Alejandro Jimenez, Lu Baolu, Joerg Roedel, Josua Mayer,
Kevin Tian, Pasha Tatashin, patches, Pranjal Shrivastava,
Samiullah Khawaja, Mostafa Saleh, stable
Sashiko pointed out that the driver could drop PAGE_SIZE from the
pgsize_bitmap. That is technically allowed but nothing does it, and
such an iommu_domain would not be used with the DMA API today.
Still, it is against the design and it is trivial to fix up. Lift
the PT_WARN_ON to the if branch and just skip the fast path.
Fixes: dcd6a011a8d5 ("iommupt: Add map_pages op")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/generic_pt/iommu_pt.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/iommu/generic_pt/iommu_pt.h b/drivers/iommu/generic_pt/iommu_pt.h
index 19b6daf88f2ab1..4877b05291c9d4 100644
--- a/drivers/iommu/generic_pt/iommu_pt.h
+++ b/drivers/iommu/generic_pt/iommu_pt.h
@@ -920,8 +920,8 @@ static int NS(map_range)(struct pt_iommu *iommu_table, dma_addr_t iova,
return ret;
/* Calculate target page size and level for the leaves */
- if (pt_has_system_page_size(common) && len == PAGE_SIZE) {
- PT_WARN_ON(!(pgsize_bitmap & PAGE_SIZE));
+ if (pt_has_system_page_size(common) && len == PAGE_SIZE &&
+ likely(pgsize_bitmap & PAGE_SIZE)) {
if (log2_mod(iova | paddr, PAGE_SHIFT))
return -ENXIO;
map.leaf_pgsize_lg2 = PAGE_SHIFT;
--
2.43.0
^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH rc 4/5] iommupt: Check for missing PAGE_SIZE in the pgsize_bitmap
2026-05-12 16:46 ` [PATCH rc 4/5] iommupt: Check for missing PAGE_SIZE in the pgsize_bitmap Jason Gunthorpe
@ 2026-05-13 17:46 ` Samiullah Khawaja
2026-05-13 17:57 ` Samiullah Khawaja
2026-05-13 17:48 ` Pranjal Shrivastava
1 sibling, 1 reply; 24+ messages in thread
From: Samiullah Khawaja @ 2026-05-13 17:46 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Pranjal Shrivastava, Mostafa Saleh, stable
On Tue, May 12, 2026 at 01:46:16PM -0300, Jason Gunthorpe wrote:
>Sashiko pointed out that the driver could drop PAGE_SIZE from the
>pgsize_bitmap. That is technically allowed but nothing does it, and
>such an iommu_domain would not be used with the DMA API today.
>
>Still, it is against the design and it is trivial to fix up. Lift
>the PT_WARN_ON to the if branch and just skip the fast path.
>
>Fixes: dcd6a011a8d5 ("iommupt: Add map_pages op")
>Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
>---
> drivers/iommu/generic_pt/iommu_pt.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
>diff --git a/drivers/iommu/generic_pt/iommu_pt.h b/drivers/iommu/generic_pt/iommu_pt.h
>index 19b6daf88f2ab1..4877b05291c9d4 100644
>--- a/drivers/iommu/generic_pt/iommu_pt.h
>+++ b/drivers/iommu/generic_pt/iommu_pt.h
>@@ -920,8 +920,8 @@ static int NS(map_range)(struct pt_iommu *iommu_table, dma_addr_t iova,
> return ret;
>
> /* Calculate target page size and level for the leaves */
>- if (pt_has_system_page_size(common) && len == PAGE_SIZE) {
>- PT_WARN_ON(!(pgsize_bitmap & PAGE_SIZE));
>+ if (pt_has_system_page_size(common) && len == PAGE_SIZE &&
>+ likely(pgsize_bitmap & PAGE_SIZE)) {
> if (log2_mod(iova | paddr, PAGE_SHIFT))
> return -ENXIO;
> map.leaf_pgsize_lg2 = PAGE_SHIFT;
>--
>2.43.0
>
Reviewed-by: Samiullah Khawaja <skhawaja@google.com>
^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH rc 4/5] iommupt: Check for missing PAGE_SIZE in the pgsize_bitmap
2026-05-13 17:46 ` Samiullah Khawaja
@ 2026-05-13 17:57 ` Samiullah Khawaja
2026-05-13 18:06 ` Jason Gunthorpe
0 siblings, 1 reply; 24+ messages in thread
From: Samiullah Khawaja @ 2026-05-13 17:57 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Pranjal Shrivastava, Mostafa Saleh, stable
On Wed, May 13, 2026 at 05:46:22PM +0000, Samiullah Khawaja wrote:
>On Tue, May 12, 2026 at 01:46:16PM -0300, Jason Gunthorpe wrote:
>>Sashiko pointed out that the driver could drop PAGE_SIZE from the
>>pgsize_bitmap. That is technically allowed but nothing does it, and
>>such an iommu_domain would not be used with the DMA API today.
>>
>>Still, it is against the design and it is trivial to fix up. Lift
>>the PT_WARN_ON to the if branch and just skip the fast path.
>>
>>Fixes: dcd6a011a8d5 ("iommupt: Add map_pages op")
>>Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
>>---
>>drivers/iommu/generic_pt/iommu_pt.h | 4 ++--
>>1 file changed, 2 insertions(+), 2 deletions(-)
>>
>>diff --git a/drivers/iommu/generic_pt/iommu_pt.h b/drivers/iommu/generic_pt/iommu_pt.h
>>index 19b6daf88f2ab1..4877b05291c9d4 100644
>>--- a/drivers/iommu/generic_pt/iommu_pt.h
>>+++ b/drivers/iommu/generic_pt/iommu_pt.h
>>@@ -920,8 +920,8 @@ static int NS(map_range)(struct pt_iommu *iommu_table, dma_addr_t iova,
>> return ret;
>>
>> /* Calculate target page size and level for the leaves */
>>- if (pt_has_system_page_size(common) && len == PAGE_SIZE) {
>>- PT_WARN_ON(!(pgsize_bitmap & PAGE_SIZE));
>>+ if (pt_has_system_page_size(common) && len == PAGE_SIZE &&
>>+ likely(pgsize_bitmap & PAGE_SIZE)) {
>> if (log2_mod(iova | paddr, PAGE_SHIFT))
>> return -ENXIO;
Afterthought nit:
I wonder if the error handling of iova and paddr alignment should also
be deferred to the non-fast path? Basically, lift the iova and paddr
check into the parent if?
>> map.leaf_pgsize_lg2 = PAGE_SHIFT;
>>--
>>2.43.0
>>
>
>Reviewed-by: Samiullah Khawaja <skhawaja@google.com>
^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH rc 4/5] iommupt: Check for missing PAGE_SIZE in the pgsize_bitmap
2026-05-13 17:57 ` Samiullah Khawaja
@ 2026-05-13 18:06 ` Jason Gunthorpe
2026-05-13 18:48 ` Samiullah Khawaja
0 siblings, 1 reply; 24+ messages in thread
From: Jason Gunthorpe @ 2026-05-13 18:06 UTC (permalink / raw)
To: Samiullah Khawaja
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Pranjal Shrivastava, Mostafa Saleh, stable
On Wed, May 13, 2026 at 05:57:13PM +0000, Samiullah Khawaja wrote:
> On Wed, May 13, 2026 at 05:46:22PM +0000, Samiullah Khawaja wrote:
> > On Tue, May 12, 2026 at 01:46:16PM -0300, Jason Gunthorpe wrote:
> > > Sashiko pointed out that the driver could drop PAGE_SIZE from the
> > > pgsize_bitmap. That is technically allowed but nothing does it, and
> > > such an iommu_domain would not be used with the DMA API today.
> > >
> > > Still, it is against the design and it is trivial to fix up. Lift
> > > the PT_WARN_ON to the if branch and just skip the fast path.
> > >
> > > Fixes: dcd6a011a8d5 ("iommupt: Add map_pages op")
> > > Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> > > ---
> > > drivers/iommu/generic_pt/iommu_pt.h | 4 ++--
> > > 1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/iommu/generic_pt/iommu_pt.h b/drivers/iommu/generic_pt/iommu_pt.h
> > > index 19b6daf88f2ab1..4877b05291c9d4 100644
> > > --- a/drivers/iommu/generic_pt/iommu_pt.h
> > > +++ b/drivers/iommu/generic_pt/iommu_pt.h
> > > @@ -920,8 +920,8 @@ static int NS(map_range)(struct pt_iommu *iommu_table, dma_addr_t iova,
> > > return ret;
> > >
> > > /* Calculate target page size and level for the leaves */
> > > - if (pt_has_system_page_size(common) && len == PAGE_SIZE) {
> > > - PT_WARN_ON(!(pgsize_bitmap & PAGE_SIZE));
> > > + if (pt_has_system_page_size(common) && len == PAGE_SIZE &&
> > > + likely(pgsize_bitmap & PAGE_SIZE)) {
> > > if (log2_mod(iova | paddr, PAGE_SHIFT))
> > > return -ENXIO;
>
> Afterthought nit:
>
> I wonder if the error handling of iova and paddr alignment should also
> be deferred to the non-fast path? Basically, lift the iova and paddr
> check into the parent if?
That would break support for < PAGE_SIZE tables, which I've tried to
keep generic support for. Similar checks already exist in the generic
code in a more general way; probably the first is
pt_compute_best_pgsize().
Thanks,
Jason
^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH rc 4/5] iommupt: Check for missing PAGE_SIZE in the pgsize_bitmap
2026-05-13 18:06 ` Jason Gunthorpe
@ 2026-05-13 18:48 ` Samiullah Khawaja
0 siblings, 0 replies; 24+ messages in thread
From: Samiullah Khawaja @ 2026-05-13 18:48 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Pranjal Shrivastava, Mostafa Saleh, stable
On Wed, May 13, 2026 at 03:06:07PM -0300, Jason Gunthorpe wrote:
>On Wed, May 13, 2026 at 05:57:13PM +0000, Samiullah Khawaja wrote:
>> On Wed, May 13, 2026 at 05:46:22PM +0000, Samiullah Khawaja wrote:
>> > On Tue, May 12, 2026 at 01:46:16PM -0300, Jason Gunthorpe wrote:
>> > > Sashiko pointed out that the driver could drop PAGE_SIZE from the
>> > > pgsize_bitmap. That is technically allowed but nothing does it, and
>> > > such an iommu_domain would not be used with the DMA API today.
>> > >
>> > > Still, it is against the design and it is trivial to fix up. Lift
>> > > the PT_WARN_ON to the if branch and just skip the fast path.
>> > >
>> > > Fixes: dcd6a011a8d5 ("iommupt: Add map_pages op")
>> > > Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
>> > > ---
>> > > drivers/iommu/generic_pt/iommu_pt.h | 4 ++--
>> > > 1 file changed, 2 insertions(+), 2 deletions(-)
>> > >
>> > > diff --git a/drivers/iommu/generic_pt/iommu_pt.h b/drivers/iommu/generic_pt/iommu_pt.h
>> > > index 19b6daf88f2ab1..4877b05291c9d4 100644
>> > > --- a/drivers/iommu/generic_pt/iommu_pt.h
>> > > +++ b/drivers/iommu/generic_pt/iommu_pt.h
>> > > @@ -920,8 +920,8 @@ static int NS(map_range)(struct pt_iommu *iommu_table, dma_addr_t iova,
>> > > return ret;
>> > >
>> > > /* Calculate target page size and level for the leaves */
>> > > - if (pt_has_system_page_size(common) && len == PAGE_SIZE) {
>> > > - PT_WARN_ON(!(pgsize_bitmap & PAGE_SIZE));
>> > > + if (pt_has_system_page_size(common) && len == PAGE_SIZE &&
>> > > + likely(pgsize_bitmap & PAGE_SIZE)) {
>> > > if (log2_mod(iova | paddr, PAGE_SHIFT))
>> > > return -ENXIO;
>>
>> Afterthought nit:
>>
>> I wonder if the error handling of iova and paddr alignment should also
>> be deferred to the non-fast path? Basically, lift the iova and paddr
>> check into the parent if?
>
>That would break support for < PAGE_SIZE tables, which I've tried to
I was also thinking about support for < PAGE_SIZE tables and wondering
whether that support is already broken. For example,
consider the following:
iova = 0x12341800
paddr = 0x56781800
len = PAGE_SIZE (4k)
But pt_has_system_page_size() will be false in such a system.
>keep generic support for. Similar checks already exist in the generic
>code in a more general way; probably the first is
>pt_compute_best_pgsize().
I was suggesting relying on the already existing checks in
pt_compute_best_pgsize() to do the error handling, by only entering the
fast path if iova and paddr are also aligned.
>
>Thanks,
>Jason
No change needed. Putting this here again:
Reviewed-by: Samiullah Khawaja <skhawaja@google.com>
Thanks,
Sami
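As a rough stand-alone sketch of the fast-path gate being discussed: the constants and helpers below are hypothetical, loosely modeled on PAGE_SIZE/PAGE_SHIFT and the kernel's log2_mod(), and this version folds the alignment test into the gate as Sami suggests (the merged patch instead returns -ENXIO on misalignment inside the fast path).

```c
#include <assert.h>

#define TOY_PAGE_SHIFT 12U
#define TOY_PAGE_SIZE  (1UL << TOY_PAGE_SHIFT)

/* toy_log2_mod(x, n) == x % 2^n: the alignment test used in map_range(). */
static unsigned long toy_log2_mod(unsigned long x, unsigned int lg2)
{
	return x & ((1UL << lg2) - 1);
}

/*
 * Returns 1 when the single-page fast path may run: the length is one
 * system page, the pgsize_bitmap advertises PAGE_SIZE, and both iova
 * and paddr are page aligned.
 */
static int fast_path_ok(unsigned long iova, unsigned long paddr,
			unsigned long len, unsigned long pgsize_bitmap)
{
	if (len != TOY_PAGE_SIZE || !(pgsize_bitmap & TOY_PAGE_SIZE))
		return 0;
	return toy_log2_mod(iova | paddr, TOY_PAGE_SHIFT) == 0;
}
```

With these conditions, a bitmap missing PAGE_SIZE or a sub-page-aligned iova/paddr simply falls through to the generic path instead of warning.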
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH rc 4/5] iommupt: Check for missing PAGE_SIZE in the pgsize_bitmap
2026-05-12 16:46 ` [PATCH rc 4/5] iommupt: Check for missing PAGE_SIZE in the pgsize_bitmap Jason Gunthorpe
2026-05-13 17:46 ` Samiullah Khawaja
@ 2026-05-13 17:48 ` Pranjal Shrivastava
1 sibling, 0 replies; 24+ messages in thread
From: Pranjal Shrivastava @ 2026-05-13 17:48 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Samiullah Khawaja, Mostafa Saleh, stable
On Tue, May 12, 2026 at 01:46:16PM -0300, Jason Gunthorpe wrote:
> Sashiko pointed out that the driver could drop PAGE_SIZE from the
> pgsize_bitmap. That is technically allowed but nothing does it, and
> such an iommu_domain would not be used with the DMA API today.
>
> Still, it is against the design and it is trivial to fix up. Lift
> the PT_WARN_ON to the if branch and just skip the fast path.
>
> Fixes: dcd6a011a8d5 ("iommupt: Add map_pages op")
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Pranjal Shrivastava <praan@google.com>
Thanks
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH rc 5/5] iommupt: Fix the end_index calculation in __map_range_leaf()
2026-05-12 16:46 [PATCH rc 0/5] Fix some iommupt mistakes from Sashiko Jason Gunthorpe
` (3 preceding siblings ...)
2026-05-12 16:46 ` [PATCH rc 4/5] iommupt: Check for missing PAGE_SIZE in the pgsize_bitmap Jason Gunthorpe
@ 2026-05-12 16:46 ` Jason Gunthorpe
2026-05-13 17:58 ` Pranjal Shrivastava
2026-05-13 18:53 ` Samiullah Khawaja
2026-05-13 11:08 ` [PATCH rc 0/5] Fix some iommupt mistakes from Sashiko Josua Mayer
5 siblings, 2 replies; 24+ messages in thread
From: Jason Gunthorpe @ 2026-05-12 16:46 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Will Deacon
Cc: Alejandro Jimenez, Lu Baolu, Joerg Roedel, Josua Mayer,
Kevin Tian, Pasha Tatashin, patches, Pranjal Shrivastava,
Samiullah Khawaja, Mostafa Saleh, stable
Sashiko noticed a mismatch of units in this math: num_leaves is
actually the number of leaf *entries* (so a 16-item contiguous leaf
counts as one), while index is in items. The mismatch
causes __map_range_leaf() to exit early instead of efficiently
filling a larger range of contiguous PTEs.
The early exit is caught by the functions above and then
__map_range_leaf() is re-invoked, so there is no functional issue.
Correct the misuse of units by adjusting num_leaves with the leaf
size and avoid the performance cost of looping externally.
There are also some mismatched types for num_leaves; simplify
things to remove the duplicated calculations.
Fixes: d6c65b0fd621 ("iommupt: Avoid rewalking during map")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/generic_pt/iommu_pt.h | 20 +++++++++++++-------
1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/drivers/iommu/generic_pt/iommu_pt.h b/drivers/iommu/generic_pt/iommu_pt.h
index 4877b05291c9d4..dc91fb4e2f61cb 100644
--- a/drivers/iommu/generic_pt/iommu_pt.h
+++ b/drivers/iommu/generic_pt/iommu_pt.h
@@ -534,10 +534,12 @@ static int __map_range_leaf(struct pt_range *range, void *arg,
struct pt_state pts = pt_init(range, level, table);
struct pt_iommu_map_args *map = arg;
unsigned int leaf_pgsize_lg2 = map->leaf_pgsize_lg2;
+ unsigned int leaves_avail;
unsigned int start_index;
pt_oaddr_t oa = map->oa;
- unsigned int num_leaves;
+ pt_vaddr_t num_leaves;
unsigned int orig_end;
+ unsigned int step_lg2;
pt_vaddr_t last_va;
unsigned int step;
bool need_contig;
@@ -546,21 +548,25 @@ static int __map_range_leaf(struct pt_range *range, void *arg,
PT_WARN_ON(map->leaf_level != level);
PT_WARN_ON(!pt_can_have_leaf(&pts));
- step = log2_to_int_t(unsigned int,
- leaf_pgsize_lg2 - pt_table_item_lg2sz(&pts));
- need_contig = leaf_pgsize_lg2 != pt_table_item_lg2sz(&pts);
+ step_lg2 = leaf_pgsize_lg2 - pt_table_item_lg2sz(&pts);
+ step = log2_to_int_t(unsigned int, step_lg2);
+ need_contig = step_lg2 != 0;
_pt_iter_first(&pts);
start_index = pts.index;
orig_end = pts.end_index;
- if (pts.index + map->num_leaves < pts.end_index) {
+ leaves_avail =
+ log2_div_t(unsigned int, pts.end_index - pts.index, step_lg2);
+ if (map->num_leaves <= leaves_avail) {
/* Need to stop in the middle of the table to change sizes */
- pts.end_index = pts.index + map->num_leaves;
+ pts.end_index = pts.index + log2_mul(map->num_leaves, step_lg2);
num_leaves = 0;
} else {
- num_leaves = map->num_leaves - (pts.end_index - pts.index);
+ num_leaves = map->num_leaves - leaves_avail;
}
+ PT_WARN_ON(
+ log2_mod_t(unsigned int, pts.end_index - pts.index, step_lg2));
do {
pts.type = pt_load_entry_raw(&pts);
if (pts.type != PT_ENTRY_EMPTY || need_contig) {
--
2.43.0
^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH rc 5/5] iommupt: Fix the end_index calculation in __map_range_leaf()
2026-05-12 16:46 ` [PATCH rc 5/5] iommupt: Fix the end_index calculation in __map_range_leaf() Jason Gunthorpe
@ 2026-05-13 17:58 ` Pranjal Shrivastava
2026-05-13 18:53 ` Samiullah Khawaja
1 sibling, 0 replies; 24+ messages in thread
From: Pranjal Shrivastava @ 2026-05-13 17:58 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Samiullah Khawaja, Mostafa Saleh, stable
On Tue, May 12, 2026 at 01:46:17PM -0300, Jason Gunthorpe wrote:
> Sashiko noticed a mismatch of units in this math: num_leaves is
> actually the number of leaf *entries* (so a 16-item contiguous leaf
> counts as one), while index is in items. The mismatch
> causes __map_range_leaf() to exit early instead of efficiently
> filling a larger range of contiguous PTEs.
>
> The early exit is caught by the functions above and then
> __map_range_leaf() is re-invoked, so there is no functional issue.
>
> Correct the misuse of units by adjusting num_leaves with the leaf
> size and avoid the performance cost of looping externally.
>
> There are also some mismatched types for num_leaves; simplify
> things to remove the duplicated calculations.
>
> Fixes: d6c65b0fd621 ("iommupt: Avoid rewalking during map")
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
This is an important catch! It means that today, we were redundantly
re-walking the upper page table levels.
Reviewed-by: Pranjal Shrivastava <praan@google.com>
Thanks!
^ permalink raw reply [flat|nested] 24+ messages in thread* Re: [PATCH rc 5/5] iommupt: Fix the end_index calculation in __map_range_leaf()
2026-05-12 16:46 ` [PATCH rc 5/5] iommupt: Fix the end_index calculation in __map_range_leaf() Jason Gunthorpe
2026-05-13 17:58 ` Pranjal Shrivastava
@ 2026-05-13 18:53 ` Samiullah Khawaja
1 sibling, 0 replies; 24+ messages in thread
From: Samiullah Khawaja @ 2026-05-13 18:53 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Will Deacon, Alejandro Jimenez,
Lu Baolu, Joerg Roedel, Josua Mayer, Kevin Tian, Pasha Tatashin,
patches, Pranjal Shrivastava, Mostafa Saleh, stable
On Tue, May 12, 2026 at 01:46:17PM -0300, Jason Gunthorpe wrote:
>Sashiko noticed a mismatch of units in this math: num_leaves is
>actually the number of leaf *entries* (so a 16-item contiguous leaf
>counts as one), while index is in items. The mismatch
>causes __map_range_leaf() to exit early instead of efficiently
>filling a larger range of contiguous PTEs.
>
>The early exit is caught by the functions above and then
>__map_range_leaf() is re-invoked, so there is no functional issue.
>
>Correct the misuse of units by adjusting num_leaves with the leaf
>size and avoid the performance cost of looping externally.
>
>There are also some mismatched types for num_leaves; simplify
>things to remove the duplicated calculations.
>
>Fixes: d6c65b0fd621 ("iommupt: Avoid rewalking during map")
>Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
>---
> drivers/iommu/generic_pt/iommu_pt.h | 20 +++++++++++++-------
> 1 file changed, 13 insertions(+), 7 deletions(-)
>
>diff --git a/drivers/iommu/generic_pt/iommu_pt.h b/drivers/iommu/generic_pt/iommu_pt.h
>index 4877b05291c9d4..dc91fb4e2f61cb 100644
>--- a/drivers/iommu/generic_pt/iommu_pt.h
>+++ b/drivers/iommu/generic_pt/iommu_pt.h
>@@ -534,10 +534,12 @@ static int __map_range_leaf(struct pt_range *range, void *arg,
> struct pt_state pts = pt_init(range, level, table);
> struct pt_iommu_map_args *map = arg;
> unsigned int leaf_pgsize_lg2 = map->leaf_pgsize_lg2;
>+ unsigned int leaves_avail;
> unsigned int start_index;
> pt_oaddr_t oa = map->oa;
>- unsigned int num_leaves;
>+ pt_vaddr_t num_leaves;
> unsigned int orig_end;
>+ unsigned int step_lg2;
> pt_vaddr_t last_va;
> unsigned int step;
> bool need_contig;
>@@ -546,21 +548,25 @@ static int __map_range_leaf(struct pt_range *range, void *arg,
> PT_WARN_ON(map->leaf_level != level);
> PT_WARN_ON(!pt_can_have_leaf(&pts));
>
>- step = log2_to_int_t(unsigned int,
>- leaf_pgsize_lg2 - pt_table_item_lg2sz(&pts));
>- need_contig = leaf_pgsize_lg2 != pt_table_item_lg2sz(&pts);
>+ step_lg2 = leaf_pgsize_lg2 - pt_table_item_lg2sz(&pts);
>+ step = log2_to_int_t(unsigned int, step_lg2);
>+ need_contig = step_lg2 != 0;
>
> _pt_iter_first(&pts);
> start_index = pts.index;
> orig_end = pts.end_index;
>- if (pts.index + map->num_leaves < pts.end_index) {
>+ leaves_avail =
>+ log2_div_t(unsigned int, pts.end_index - pts.index, step_lg2);
>+ if (map->num_leaves <= leaves_avail) {
> /* Need to stop in the middle of the table to change sizes */
>- pts.end_index = pts.index + map->num_leaves;
>+ pts.end_index = pts.index + log2_mul(map->num_leaves, step_lg2);
> num_leaves = 0;
> } else {
>- num_leaves = map->num_leaves - (pts.end_index - pts.index);
>+ num_leaves = map->num_leaves - leaves_avail;
> }
>
>+ PT_WARN_ON(
>+ log2_mod_t(unsigned int, pts.end_index - pts.index, step_lg2));
> do {
> pts.type = pt_load_entry_raw(&pts);
> if (pts.type != PT_ENTRY_EMPTY || need_contig) {
>--
>2.43.0
>
Reviewed-by: Samiullah Khawaja <skhawaja@google.com>
Thanks,
Sami
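The units conversion at the heart of this fix can be illustrated with a small stand-alone sketch; the helper and function names below are hypothetical, loosely modeled on the kernel's log2_div()/log2_mul(), and simplify away the pt_state iterator.

```c
#include <assert.h>

static unsigned int toy_log2_div(unsigned int x, unsigned int lg2)
{
	return x >> lg2;
}

static unsigned int toy_log2_mul(unsigned int x, unsigned int lg2)
{
	return x << lg2;
}

/*
 * With contiguous leaves, one leaf covers 2^step_lg2 table items, so a
 * count of leaves must be converted before it is compared against an
 * item-based index range. Given num_leaves leaves left to map and
 * items_avail items before end_index, return how many items this call
 * should fill; comparing leaves against leaves (instead of leaves
 * against items, as the buggy code did) avoids the early exit.
 */
static unsigned int items_to_fill(unsigned int items_avail,
				  unsigned int num_leaves,
				  unsigned int step_lg2)
{
	unsigned int leaves_avail = toy_log2_div(items_avail, step_lg2);

	if (num_leaves <= leaves_avail)
		return toy_log2_mul(num_leaves, step_lg2);
	return items_avail;     /* fill the table; continue in the next one */
}
```

For example, with 16-item contiguous leaves (step_lg2 = 4) and a 512-item table, 2 remaining leaves fill 32 items, while 40 remaining leaves fill the whole 512-item table, matching the intent of the corrected end_index math.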
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH rc 0/5] Fix some iommupt mistakes from Sashiko
2026-05-12 16:46 [PATCH rc 0/5] Fix some iommupt mistakes from Sashiko Jason Gunthorpe
` (4 preceding siblings ...)
2026-05-12 16:46 ` [PATCH rc 5/5] iommupt: Fix the end_index calculation in __map_range_leaf() Jason Gunthorpe
@ 2026-05-13 11:08 ` Josua Mayer
5 siblings, 0 replies; 24+ messages in thread
From: Josua Mayer @ 2026-05-13 11:08 UTC (permalink / raw)
To: Jason Gunthorpe, iommu@lists.linux.dev, Joerg Roedel,
Robin Murphy, Will Deacon
Cc: Alejandro Jimenez, Lu Baolu, Joerg Roedel, Kevin Tian,
Pasha Tatashin, patches@lists.linux.dev, Pranjal Shrivastava,
Samiullah Khawaja, Mostafa Saleh, stable@vger.kernel.org
Am 12.05.26 um 18:46 schrieb Jason Gunthorpe:
> Josua found there was an errant !ret, so I ran the original series through
> Sashiko, which found some other interesting things: a few mistakes were
> made while rebasing across the iommu_debug_map() series, plus a few other
> interesting remarks.
>
> Jason Gunthorpe (5):
> iommu: Fix loss of errno on map failure for classic ops
> iommu: Fix up map/unmap debugging for iommupt domains
> iommu: Handle unmap error when iommu_debug is enabled
> iommupt: Check for missing PAGE_SIZE in the pgsize_bitmap
> iommupt: Fix the end_index calculation in __map_range_leaf()
>
> drivers/iommu/generic_pt/iommu_pt.h | 24 +++++----
> drivers/iommu/iommu.c | 82 +++++++++++++----------------
> 2 files changed, 51 insertions(+), 55 deletions(-)
>
>
> base-commit: be93d186ae88a92e7aa77e122d4e661fa57b1e39
Tested on top of v7.1-rc2 with LX2160A Clearfog-CX.
Tested-by: Josua Mayer <josua@solid-run.com>
^ permalink raw reply [flat|nested] 24+ messages in thread