public inbox for kvm@vger.kernel.org
From: Baolu Lu <baolu.lu@linux.intel.com>
To: Nicolin Chen <nicolinc@nvidia.com>,
	joro@8bytes.org, rafael@kernel.org, bhelgaas@google.com,
	alex@shazbot.org, jgg@nvidia.com, kevin.tian@intel.com
Cc: will@kernel.org, robin.murphy@arm.com, lenb@kernel.org,
	linux-arm-kernel@lists.infradead.org, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org,
	linux-pci@vger.kernel.org, kvm@vger.kernel.org,
	patches@lists.linux.dev, pjaroszynski@nvidia.com,
	vsethi@nvidia.com, helgaas@kernel.org, etzhao1900@gmail.com
Subject: Re: [PATCH v5 4/5] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
Date: Wed, 12 Nov 2025 14:18:09 +0800	[thread overview]
Message-ID: <60970315-613f-4e62-8923-e162c29d9362@linux.intel.com> (raw)
In-Reply-To: <28af027371a981a2b4154633e12cdb1e5a11da4a.1762835355.git.nicolinc@nvidia.com>

On 11/11/25 13:12, Nicolin Chen wrote:
> +/**
> + * iommu_dev_reset_prepare() - Block IOMMU to prepare for a device reset
> + * @dev: device that is going to enter a reset routine
> + *
> + * When a device is entering a reset routine, any IOMMU activity must be
> + * blocked during the reset. This includes blocking any translation as well as
> + * cache invalidation (especially the device cache).
> + *
> + * This function attaches all RIDs/PASIDs of the device to IOMMU_DOMAIN_BLOCKED,
> + * allowing any blocked-domain-supporting IOMMU driver to pause translation and
> + * cache invalidation, but leaves the software domain pointers intact so that
> + * iommu_dev_reset_done() can restore everything later.
> + *
> + * Return: 0 on success or negative error code if the preparation failed.
> + *
> + * Callers must pair iommu_dev_reset_prepare() with iommu_dev_reset_done()
> + * before/after the core-level reset routine, so that resetting_domain is unset.
> + *
> + * These two functions are designed to be used by PCI reset functions that will
> + * not invoke any racy iommu_release_device(), since the PCI sysfs node gets
> + * removed before BUS_NOTIFY_REMOVED_DEVICE is notified. When using them in any
> + * other case, callers must ensure there will be no racy iommu_release_device()
> + * call, which would otherwise use-after-free the dev->iommu_group pointer.
> + */
> +int iommu_dev_reset_prepare(struct device *dev)
> +{
> +	struct iommu_group *group = dev->iommu_group;
> +	unsigned long pasid;
> +	void *entry;
> +	int ret = 0;
> +
> +	if (!dev_has_iommu(dev))
> +		return 0;

Nit: This interface is only for the PCI layer, so why not just

	if (WARN_ON(!dev_is_pci(dev)))
		return -EINVAL;
?
> +
> +	guard(mutex)(&group->mutex);
> +
> +	/*
> +	 * Once the resetting_domain is set, any concurrent attachment to this
> +	 * iommu_group will be rejected, which would break the attach routines
> +	 * of the sibling devices in the same iommu_group. So, skip this case.
> +	 */
> +	if (dev_is_pci(dev)) {
> +		struct group_device *gdev;
> +
> +		for_each_group_device(group, gdev) {
> +			if (gdev->dev != dev)
> +				return 0;
> +		}
> +	}

With the above dev_is_pci() check, this can simply be:

	if (list_count_nodes(&group->devices) != 1)
		return 0;

> +
> +	/* Re-entry is not allowed */
> +	if (WARN_ON(group->resetting_domain))
> +		return -EBUSY;
> +
> +	ret = __iommu_group_alloc_blocking_domain(group);
> +	if (ret)
> +		return ret;
> +
> +	/* Stage RID domain at blocking_domain while retaining group->domain */
> +	if (group->domain != group->blocking_domain) {
> +		ret = __iommu_attach_device(group->blocking_domain, dev,
> +					    group->domain);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	/*
> +	 * Stage PASID domains at blocking_domain while retaining pasid_array.
> +	 *
> +	 * The pasid_array is mostly fenced by group->mutex, except one reader
> +	 * in iommu_attach_handle_get(), so it's safe to read without xa_lock.
> +	 */
> +	xa_for_each_start(&group->pasid_array, pasid, entry, 1)
> +		iommu_remove_dev_pasid(dev, pasid,
> +				       pasid_array_entry_to_domain(entry));
> +
> +	group->resetting_domain = group->blocking_domain;
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(iommu_dev_reset_prepare);
> +
> +/**
> + * iommu_dev_reset_done() - Restore IOMMU after a device reset is finished
> + * @dev: device that has finished a reset routine
> + *
> + * When a device has finished a reset routine, it wants to restore its IOMMU
> + * activity, including new translation as well as cache invalidation, by
> + * re-attaching all RIDs/PASIDs of the device back to the domains retained in
> + * the core-level structure.
> + *
> + * Caller must pair it with a successfully returned iommu_dev_reset_prepare().
> + *
> + * Note that, although unlikely, there is a risk that re-attaching domains
> + * might fail due to an unexpected condition such as OOM.
> + */
> +void iommu_dev_reset_done(struct device *dev)
> +{
> +	struct iommu_group *group = dev->iommu_group;
> +	unsigned long pasid;
> +	void *entry;
> +
> +	if (!dev_has_iommu(dev))
> +		return;
> +
> +	guard(mutex)(&group->mutex);
> +
> +	/* iommu_dev_reset_prepare() was bypassed for the device */
> +	if (!group->resetting_domain)
> +		return;
> +
> +	/* iommu_dev_reset_prepare() was not successfully called */
> +	if (WARN_ON(!group->blocking_domain))
> +		return;
> +
> +	/* Re-attach RID domain back to group->domain */
> +	if (group->domain != group->blocking_domain) {
> +		WARN_ON(__iommu_attach_device(group->domain, dev,
> +					      group->blocking_domain));
> +	}
> +
> +	/*
> +	 * Re-attach PASID domains back to the domains retained in pasid_array.
> +	 *
> +	 * The pasid_array is mostly fenced by group->mutex, except one reader
> +	 * in iommu_attach_handle_get(), so it's safe to read without xa_lock.
> +	 */
> +	xa_for_each_start(&group->pasid_array, pasid, entry, 1)
> +		WARN_ON(__iommu_set_group_pasid(
> +			pasid_array_entry_to_domain(entry), group, pasid,
> +			group->blocking_domain));
> +
> +	group->resetting_domain = NULL;
> +}
> +EXPORT_SYMBOL_GPL(iommu_dev_reset_done);
> +
>   #if IS_ENABLED(CONFIG_IRQ_MSI_IOMMU)
>   /**
>    * iommu_dma_prepare_msi() - Map the MSI page in the IOMMU domain

Thanks,
baolu


Thread overview: 37+ messages
2025-11-11  5:12 [PATCH v5 0/5] Disable ATS via iommu during PCI resets Nicolin Chen
2025-11-11  5:12 ` [PATCH v5 1/5] iommu: Lock group->mutex in iommu_deferred_attach() Nicolin Chen
2025-11-12  2:47   ` Baolu Lu
2025-11-11  5:12 ` [PATCH v5 2/5] iommu: Tiny domain for iommu_setup_dma_ops() Nicolin Chen
2025-11-12  5:22   ` Baolu Lu
2025-11-14  9:17   ` Tian, Kevin
2025-11-14  9:18   ` Tian, Kevin
2025-11-11  5:12 ` [PATCH v5 3/5] iommu: Add iommu_driver_get_domain_for_dev() helper Nicolin Chen
2025-11-12  5:58   ` Baolu Lu
2025-11-12 17:41     ` Nicolin Chen
2025-11-18  7:02       ` Nicolin Chen
2025-11-19  2:47         ` Baolu Lu
2025-11-19  2:57           ` Nicolin Chen
2025-11-24 19:16       ` Jason Gunthorpe
2025-11-12  8:52   ` kernel test robot
2025-11-14  9:18   ` Tian, Kevin
2025-11-11  5:12 ` [PATCH v5 4/5] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done() Nicolin Chen
2025-11-12  6:18   ` Baolu Lu [this message]
2025-11-12 17:43     ` Nicolin Chen
2025-11-14  9:37   ` Tian, Kevin
2025-11-14 18:26     ` Nicolin Chen
2025-11-17  4:59   ` Tian, Kevin
2025-11-17 19:27     ` Nicolin Chen
2025-11-17 23:04   ` Bjorn Helgaas
2025-11-11  5:12 ` [PATCH v5 5/5] pci: Suspend iommu function prior to resetting a device Nicolin Chen
2025-11-14  9:45   ` Tian, Kevin
2025-11-14 18:00     ` Nicolin Chen
2025-11-17  4:52       ` Tian, Kevin
2025-11-17 19:26         ` Nicolin Chen
2025-11-18  0:29           ` Tian, Kevin
2025-11-18  1:42             ` Nicolin Chen
2025-11-18  5:38               ` Baolu Lu
2025-11-18  6:53                 ` Nicolin Chen
2025-11-18  7:53               ` Tian, Kevin
2025-11-18  8:17                 ` Nicolin Chen
2025-11-17 22:58   ` Bjorn Helgaas
2025-11-18  8:16     ` Nicolin Chen
