From: Samiullah Khawaja <skhawaja@google.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: iommu@lists.linux.dev, Joerg Roedel <joro@8bytes.org>,
Robin Murphy <robin.murphy@arm.com>,
Will Deacon <will@kernel.org>,
Alejandro Jimenez <alejandro.j.jimenez@oracle.com>,
Lu Baolu <baolu.lu@linux.intel.com>,
Joerg Roedel <joerg.roedel@amd.com>,
Josua Mayer <josua@solid-run.com>,
Kevin Tian <kevin.tian@intel.com>,
Pasha Tatashin <pasha.tatashin@soleen.com>,
patches@lists.linux.dev, Pranjal Shrivastava <praan@google.com>,
Mostafa Saleh <smostafa@google.com>,
stable@vger.kernel.org
Subject: Re: [PATCH rc 3/5] iommu: Handle unmap error when iommu_debug is enabled
Date: Wed, 13 May 2026 16:56:33 +0000 [thread overview]
Message-ID: <agSswSq8bTYiI6Uv@google.com> (raw)
In-Reply-To: <3-v1-44b2fef88b25+d3-iommupt_map_rc_jgg@nvidia.com>
On Tue, May 12, 2026 at 01:46:15PM -0300, Jason Gunthorpe wrote:
>Sashiko noticed a latent bug: the map error flow called iommu_unmap(),
>which calls iommu_debug_unmap_begin()/iommu_debug_unmap_end(). However,
>since this is an error path, the map flow never actually established the
>original iommu_debug_map(), so the debug tracking malfunctions.
>
>Lift the unmap error handling into iommu_map_nosync() and reorder it so
>that trace_map()/iommu_debug_map() records the partial mapping and then
>immediately unmaps it. This avoids creating unbalanced tracking and
>provides saner tracing instead of an unmap unmatched to any map.
>
>Fixes: ccc21213f013 ("iommu: Add calls for IOMMU_DEBUG_PAGEALLOC")
>Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
>---
> drivers/iommu/iommu.c | 49 +++++++++++++++++--------------------------
> 1 file changed, 19 insertions(+), 30 deletions(-)
>
>diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
>index e334588a2476b4..e5fa9875900228 100644
>--- a/drivers/iommu/iommu.c
>+++ b/drivers/iommu/iommu.c
>@@ -2575,12 +2575,11 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
>
> static int __iommu_map_domain_pgtbl(struct iommu_domain *domain,
> unsigned long iova, phys_addr_t paddr,
>- size_t size, int prot, gfp_t gfp)
>+ size_t size, int prot, gfp_t gfp,
>+ size_t *mapped)
> {
> const struct iommu_domain_ops *ops = domain->ops;
>- unsigned long orig_iova = iova;
> unsigned int min_pagesz;
>- size_t orig_size = size;
> int ret = 0;
>
> if (WARN_ON(!ops->map_pages))
>@@ -2603,31 +2602,25 @@ static int __iommu_map_domain_pgtbl(struct iommu_domain *domain,
> pr_debug("map: iova 0x%lx pa %pa size 0x%zx\n", iova, &paddr, size);
>
> while (size) {
>- size_t pgsize, count, mapped = 0;
>+ size_t pgsize, count, op_mapped = 0;
>
> pgsize = iommu_pgsize(domain, iova, paddr, size, &count);
>
> pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx count %zu\n",
> iova, &paddr, pgsize, count);
> ret = ops->map_pages(domain, iova, paddr, pgsize, count, prot,
>- gfp, &mapped);
>+ gfp, &op_mapped);
> /*
> * Some pages may have been mapped, even if an error occurred,
> * so we should account for those so they can be unmapped.
> */
>- size -= mapped;
>-
>+ *mapped += op_mapped;
> if (ret)
>- break;
>+ return ret;
>
>- iova += mapped;
>- paddr += mapped;
>- }
>-
>- /* unroll mapping in case something went wrong */
>- if (ret) {
>- iommu_unmap(domain, orig_iova, orig_size - size);
>- return ret;
>+ size -= op_mapped;
>+ iova += op_mapped;
>+ paddr += op_mapped;
> }
> return 0;
> }
>@@ -2645,6 +2638,7 @@ int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
> phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> {
> struct pt_iommu *pt = iommupt_from_domain(domain);
>+ size_t mapped = 0;
> int ret;
>
> might_sleep_if(gfpflags_allow_blocking(gfp));
>@@ -2656,24 +2650,19 @@ int iommu_map_nosync(struct iommu_domain *domain, unsigned long iova,
> __GFP_HIGHMEM))))
> return -EINVAL;
>
>- if (pt) {
>- size_t mapped = 0;
>-
>+ if (pt)
> ret = pt->ops->map_range(pt, iova, paddr, size, prot, gfp,
> &mapped);
>- if (ret) {
>- iommu_unmap(domain, iova, mapped);
>- return ret;
>- }
>- } else {
>+ else
> ret = __iommu_map_domain_pgtbl(domain, iova, paddr, size, prot,
>- gfp);
>- if (ret)
>- return ret;
>- }
>+ gfp, &mapped);
>
>- trace_map(iova, paddr, size);
>- iommu_debug_map(domain, paddr, size);
>+ trace_map(iova, paddr, mapped);
>+ iommu_debug_map(domain, paddr, mapped);
>+ if (ret) {
>+ iommu_unmap(domain, iova, mapped);
>+ return ret;
>+ }
> return 0;
> }
>
>--
>2.43.0
>
Reviewed-by: Samiullah Khawaja <skhawaja@google.com>