From: Jason Gunthorpe <jgg@ziepe.ca>
To: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>, Joerg Roedel <joro@8bytes.org>,
Will Deacon <will@kernel.org>,
Robin Murphy <robin.murphy@arm.com>,
Jean-Philippe Brucker <jean-philippe@linaro.org>,
Nicolin Chen <nicolinc@nvidia.com>, Yi Liu <yi.l.liu@intel.com>,
Jacob Pan <jacob.jun.pan@linux.intel.com>,
Joel Granados <j.granados@samsung.com>,
iommu@lists.linux.dev, virtualization@lists.linux-foundation.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v7 08/10] iommufd: Associate fault object with iommufd_hw_pgtable
Date: Fri, 28 Jun 2024 19:13:21 -0300
Message-ID: <Zn81gdjwJBfjXekJ@ziepe.ca>
In-Reply-To: <20240616061155.169343-9-baolu.lu@linux.intel.com>

On Sun, Jun 16, 2024 at 02:11:53PM +0800, Lu Baolu wrote:
> @@ -308,13 +315,29 @@ int iommufd_hwpt_alloc(struct iommufd_ucmd *ucmd)
>  			goto out_put_pt;
>  		}
>  
> +	if (cmd->flags & IOMMU_HWPT_FAULT_ID_VALID) {
> +		struct iommufd_fault *fault;
> +
> +		fault = iommufd_get_fault(ucmd, cmd->fault_id);
> +		if (IS_ERR(fault)) {
> +			rc = PTR_ERR(fault);
> +			goto out_hwpt;
> +		}
> +		hwpt->fault = fault;
> +		hwpt->domain->iopf_handler = iommufd_fault_iopf_handler;
> +		hwpt->domain->fault_data = hwpt;
This is not the right refcounting for a long-term reference... The PT
handling above shows the pattern:

	pt_obj = iommufd_get_object(ucmd->ictx, cmd->pt_id, IOMMUFD_OBJ_ANY);
	hwpt_paging = iommufd_hwpt_paging_alloc()
	refcount_inc(&ioas->obj.users);
	iommufd_put_object(ucmd->ictx, pt_obj);
Which is to say you need to increment users and then do the put_object.
And iommufd_object_abort_and_destroy() will always destroy the reference
on the fault if hwpt->fault is non-NULL, so the error handling here will
double-free.

fail_nth is intended to catch this, but you have to add enough inputs to
cover the new cases when you add them; it seems like that is missing in
this series. I.e. add a fault object and a hwpt alloc to a fail_nth test
and check that we exercise the iommufd_ucmd_respond() failure path.
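
Untested sketch of the selftest side, to make it concrete. The struct and
ioctl names follow the uAPI this series adds (IOMMU_FAULT_QUEUE_ALLOC,
IOMMU_HWPT_ALLOC with IOMMU_HWPT_FAULT_ID_VALID); the helper itself and
how it gets wired into the fail_nth fixture are made up for illustration:

#include <sys/ioctl.h>
#include <linux/types.h>
#include <linux/iommufd.h>

/* Allocate a fault object, then a fault-capable HWPT that references
 * it. Run this under the fail_nth injection loop so every failure
 * point inside IOMMU_HWPT_ALLOC is exercised, including a failing
 * iommufd_ucmd_respond() after hwpt->fault is already set - the path
 * that currently double-frees. */
static int fault_hwpt_alloc(int iommufd, __u32 dev_id, __u32 pt_id,
			    __u32 *hwpt_id)
{
	struct iommu_fault_alloc fault_cmd = {
		.size = sizeof(fault_cmd),
	};
	struct iommu_hwpt_alloc hwpt_cmd = {
		.size = sizeof(hwpt_cmd),
		.flags = IOMMU_HWPT_FAULT_ID_VALID,
		.dev_id = dev_id,
		.pt_id = pt_id,
	};

	if (ioctl(iommufd, IOMMU_FAULT_QUEUE_ALLOC, &fault_cmd))
		return -1;

	hwpt_cmd.fault_id = fault_cmd.out_fault_id;
	if (ioctl(iommufd, IOMMU_HWPT_ALLOC, &hwpt_cmd))
		return -1;
	*hwpt_id = hwpt_cmd.out_hwpt_id;
	return 0;
}

For the refcounting itself, something like: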
--- a/drivers/iommu/iommufd/hw_pagetable.c
+++ b/drivers/iommu/iommufd/hw_pagetable.c
@@ -14,7 +14,7 @@ static void __iommufd_hwpt_destroy(struct iommufd_hw_pagetable *hwpt)
 		iommu_domain_free(hwpt->domain);
 
 	if (hwpt->fault)
-		iommufd_put_object(hwpt->fault->ictx, &hwpt->fault->obj);
+		refcount_dec(&hwpt->fault->obj.users);
 }
 
 void iommufd_hwpt_paging_destroy(struct iommufd_object *obj)
@@ -326,18 +326,17 @@ int iommufd_hwpt_alloc(struct iommufd_ucmd *ucmd)
 		hwpt->fault = fault;
 		hwpt->domain->iopf_handler = iommufd_fault_iopf_handler;
 		hwpt->domain->fault_data = hwpt;
+		refcount_inc(&fault->obj.users);
+		iommufd_put_object(ucmd->ictx, &fault->obj);
 	}
 
 	cmd->out_hwpt_id = hwpt->obj.id;
 	rc = iommufd_ucmd_respond(ucmd, sizeof(*cmd));
 	if (rc)
-		goto out_put_fault;
+		goto out_hwpt;
 	iommufd_object_finalize(ucmd->ictx, &hwpt->obj);
 	goto out_unlock;
 
-out_put_fault:
-	if (cmd->flags & IOMMU_HWPT_FAULT_ID_VALID)
-		iommufd_put_object(ucmd->ictx, &hwpt->fault->obj);
 out_hwpt:
 	iommufd_object_abort_and_destroy(ucmd->ictx, &hwpt->obj);
 out_unlock:
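
To spell out the reference lifetime this produces - an annotated
walk-through of the relevant lines, not a compilable unit:

	/* iommufd_hwpt_alloc(), with the diff above applied: */
	fault = iommufd_get_fault(ucmd, cmd->fault_id);	/* getter reference */
	hwpt->fault = fault;
	refcount_inc(&fault->obj.users);	/* long-term ref owned by the hwpt */
	iommufd_put_object(ucmd->ictx, &fault->obj);	/* getter ref dropped immediately */

	/* Any later failure goes through iommufd_object_abort_and_destroy()
	 * and normal teardown goes through the hwpt destroy ops; both end
	 * up in __iommufd_hwpt_destroy(), which drops the single long-term
	 * reference exactly once: */
	refcount_dec(&hwpt->fault->obj.users);

So the hwpt owns exactly one reference on the fault object and there is
exactly one place that drops it, no matter which path destroys the hwpt.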