* [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets
@ 2025-06-28 7:42 Nicolin Chen
2025-06-28 7:42 ` [PATCH RFC v2 1/4] iommu: Lock group->mutex in iommu_deferred_attach Nicolin Chen
` (4 more replies)
0 siblings, 5 replies; 26+ messages in thread
From: Nicolin Chen @ 2025-06-28 7:42 UTC (permalink / raw)
To: jgg, joro, will, robin.murphy, rafael, lenb, bhelgaas
Cc: iommu, linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
Hi all,
PCIe permits a device to ignore ATS invalidation TLPs while it is processing
a reset. This creates a problem visible to the OS, where an ATS invalidation
command will time out: e.g. an SVA domain will have no coordination with a
reset event and can racily issue ATS invalidations to a resetting device.
The OS should do something to mitigate this, as we do not want production
systems to be reporting critical ATS failures, especially in a hypervisor
environment. Broadly, the OS could arrange to ignore the timeouts, block
page table mutations to prevent invalidations, or disable and block ATS.
The PCIe spec, in the sec 10.3.1 IMPLEMENTATION NOTE, recommends disabling
and blocking ATS before initiating a Function Level Reset. It also mentions
that other reset methods could have the same vulnerability.
Provide a callback from the PCI subsystem that will enclose the reset and
have the iommu core temporarily change all the attached domains to BLOCKED.
After attaching a BLOCKED domain, IOMMU drivers should fence any incoming
ATS queries, synchronously stop issuing new ATS invalidations, and wait
for all outstanding ATS invalidations to complete. This avoids any ATS
invalidation timeouts.
However, if a domain attachment/replacement happens during an ongoing reset,
ATS might be re-enabled between the two function calls. Introduce a new
pending_reset flag in struct group_device to defer any attachment during a
reset, allowing the iommu core to cache the target domains at the SW level
while bypassing the driver. iommu_dev_reset_done() will then re-attach these
soft-attached domains via __iommu_attach_device/set_group_pasid().
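For reference, the calling pattern in a PCI reset path roughly looks like
the following (a simplified sketch of what patch 4/4 does in pcie_flr();
error handling and the post-reset wait are trimmed):

	/* Dock the RID/PASID domains to BLOCKED; driver stops/fences ATS */
	ret = iommu_dev_reset_prepare(&dev->dev);
	if (ret)
		return ret;

	/* ... trigger the actual reset, e.g. the FLR, and wait for it ... */

	/* Re-attach the retained domains and clear the pending_reset flag */
	iommu_dev_reset_done(&dev->dev);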
Notes:
- This only works for IOMMU drivers that implement ops->blocked_domain
  correctly, i.e. disable ATS via pci_disable_ats().
- This only works for IOMMU drivers that will not issue ATS invalidation
  requests to the device after it is docked at ops->blocked_domain.
Drivers should fix themselves to align with the notes above.
This is on Github:
https://github.com/nicolinc/iommufd/commits/iommu_dev_reset-rfcv2
Changelog
v2
* [iommu] Update kdocs, inline comments, and commit logs
* [iommu] Replace long-holding group->mutex with a pending_reset flag
* [pci] Abort reset routines if iommu_dev_reset_prepare() fails
* [pci] Apply the same vulnerability fix to other reset functions
v1
https://lore.kernel.org/all/cover.1749494161.git.nicolinc@nvidia.com/
Thanks
Nicolin
Nicolin Chen (4):
iommu: Lock group->mutex in iommu_deferred_attach
iommu: Pass in gdev to __iommu_device_set_domain
iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
pci: Suspend iommu function prior to resetting a device
include/linux/iommu.h | 12 +++
drivers/iommu/iommu.c | 180 ++++++++++++++++++++++++++++++++++++++---
drivers/pci/pci-acpi.c | 21 ++++-
drivers/pci/pci.c | 84 +++++++++++++++++--
drivers/pci/quirks.c | 27 ++++++-
5 files changed, 307 insertions(+), 17 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH RFC v2 1/4] iommu: Lock group->mutex in iommu_deferred_attach
2025-06-28 7:42 [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets Nicolin Chen
@ 2025-06-28 7:42 ` Nicolin Chen
2025-07-04 15:22 ` Jason Gunthorpe
2025-06-28 7:42 ` [PATCH RFC v2 2/4] iommu: Pass in gdev to __iommu_device_set_domain Nicolin Chen
` (3 subsequent siblings)
4 siblings, 1 reply; 26+ messages in thread
From: Nicolin Chen @ 2025-06-28 7:42 UTC (permalink / raw)
To: jgg, joro, will, robin.murphy, rafael, lenb, bhelgaas
Cc: iommu, linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
iommu_deferred_attach() is an asynchronous runtime function called from the
iommu-dma code, which can race against other attach functions if it accesses
something in dev->iommu_group.
Grab group->mutex to protect it, like the other callers of
__iommu_attach_device(), since it needs to access dev->iommu_group.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
drivers/iommu/iommu.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index a4b606c591da..08ff7efa8925 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2151,10 +2151,14 @@ EXPORT_SYMBOL_GPL(iommu_attach_device);
int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
{
- if (dev->iommu && dev->iommu->attach_deferred)
- return __iommu_attach_device(domain, dev);
+ struct iommu_group *group = dev->iommu_group;
+ int ret = 0;
- return 0;
+ mutex_lock(&group->mutex);
+ if (dev->iommu && dev->iommu->attach_deferred)
+ ret = __iommu_attach_device(domain, dev);
+ mutex_unlock(&group->mutex);
+ return ret;
}
void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
--
2.43.0
^ permalink raw reply related [flat|nested] 26+ messages in thread
* [PATCH RFC v2 2/4] iommu: Pass in gdev to __iommu_device_set_domain
2025-06-28 7:42 [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets Nicolin Chen
2025-06-28 7:42 ` [PATCH RFC v2 1/4] iommu: Lock group->mutex in iommu_deferred_attach Nicolin Chen
@ 2025-06-28 7:42 ` Nicolin Chen
2025-07-04 15:23 ` Jason Gunthorpe
2025-06-28 7:42 ` [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done() Nicolin Chen
` (2 subsequent siblings)
4 siblings, 1 reply; 26+ messages in thread
From: Nicolin Chen @ 2025-06-28 7:42 UTC (permalink / raw)
To: jgg, joro, will, robin.murphy, rafael, lenb, bhelgaas
Cc: iommu, linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
__iommu_device_set_domain() will need to check a per-gdev property, and the
dev pointer cannot store any private iommu flag for the iommu core to use.
Thus, pass in the gdev pointer instead.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
drivers/iommu/iommu.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 08ff7efa8925..bd3deedcd2de 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -112,7 +112,7 @@ enum {
};
static int __iommu_device_set_domain(struct iommu_group *group,
- struct device *dev,
+ struct group_device *gdev,
struct iommu_domain *new_domain,
unsigned int flags);
static int __iommu_group_set_domain_internal(struct iommu_group *group,
@@ -602,7 +602,7 @@ static int __iommu_probe_device(struct device *dev, struct list_head *group_list
if (group->default_domain)
iommu_create_device_direct_mappings(group->default_domain, dev);
if (group->domain) {
- ret = __iommu_device_set_domain(group, dev, group->domain, 0);
+ ret = __iommu_device_set_domain(group, gdev, group->domain, 0);
if (ret)
goto err_remove_gdev;
} else if (!group->default_domain && !group_list) {
@@ -2267,10 +2267,11 @@ int iommu_attach_group(struct iommu_domain *domain, struct iommu_group *group)
EXPORT_SYMBOL_GPL(iommu_attach_group);
static int __iommu_device_set_domain(struct iommu_group *group,
- struct device *dev,
+ struct group_device *gdev,
struct iommu_domain *new_domain,
unsigned int flags)
{
+ struct device *dev = gdev->dev;
int ret;
/*
@@ -2350,8 +2351,7 @@ static int __iommu_group_set_domain_internal(struct iommu_group *group,
*/
result = 0;
for_each_group_device(group, gdev) {
- ret = __iommu_device_set_domain(group, gdev->dev, new_domain,
- flags);
+ ret = __iommu_device_set_domain(group, gdev, new_domain, flags);
if (ret) {
result = ret;
/*
@@ -2383,7 +2383,7 @@ static int __iommu_group_set_domain_internal(struct iommu_group *group,
*/
if (group->domain)
WARN_ON(__iommu_device_set_domain(
- group, gdev->dev, group->domain,
+ group, gdev, group->domain,
IOMMU_SET_DOMAIN_MUST_SUCCEED));
if (gdev == last_gdev)
break;
--
2.43.0
^ permalink raw reply related [flat|nested] 26+ messages in thread
* [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
2025-06-28 7:42 [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets Nicolin Chen
2025-06-28 7:42 ` [PATCH RFC v2 1/4] iommu: Lock group->mutex in iommu_deferred_attach Nicolin Chen
2025-06-28 7:42 ` [PATCH RFC v2 2/4] iommu: Pass in gdev to __iommu_device_set_domain Nicolin Chen
@ 2025-06-28 7:42 ` Nicolin Chen
2025-06-28 13:28 ` Baolu Lu
2025-07-04 15:43 ` Jason Gunthorpe
2025-06-28 7:42 ` [PATCH RFC v2 4/4] pci: Suspend iommu function prior to resetting a device Nicolin Chen
2025-07-24 6:50 ` [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets Ethan Zhao
4 siblings, 2 replies; 26+ messages in thread
From: Nicolin Chen @ 2025-06-28 7:42 UTC (permalink / raw)
To: jgg, joro, will, robin.murphy, rafael, lenb, bhelgaas
Cc: iommu, linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
PCIe permits a device to ignore ATS invalidation TLPs while it is processing
a reset. This creates a problem visible to the OS, where an ATS invalidation
command will time out: e.g. an SVA domain will have no coordination with a
reset event and can racily issue ATS invalidations to a resetting device.
The OS should do something to mitigate this, as we do not want production
systems to be reporting critical ATS failures, especially in a hypervisor
environment. Broadly, the OS could arrange to ignore the timeouts, block
page table mutations to prevent invalidations, or disable and block ATS.
The PCIe spec, in the sec 10.3.1 IMPLEMENTATION NOTE, recommends disabling
and blocking ATS before initiating a Function Level Reset. It also mentions
that other reset methods could have the same vulnerability.
Provide a callback from the PCI subsystem that will enclose the reset and
have the iommu core temporarily change all the attached domains to BLOCKED.
After attaching a BLOCKED domain, IOMMU drivers should fence any incoming
ATS queries, synchronously stop issuing new ATS invalidations, and wait
for all outstanding ATS invalidations to complete. This avoids any ATS
invalidation timeouts.
However, if a domain attachment/replacement happens during an ongoing reset,
ATS might be re-enabled between the two function calls. Introduce a new
pending_reset flag in struct group_device to defer an attachment during a
reset, allowing the iommu core to cache the target domains at the SW level
while bypassing the driver. iommu_dev_reset_done() will then re-attach these
soft-attached domains via __iommu_attach_device/set_group_pasid().
Notes:
- This only works for IOMMU drivers that implement ops->blocked_domain
  correctly, i.e. disable ATS via pci_disable_ats().
- This only works for IOMMU drivers that will not issue ATS invalidation
  requests to the device after it is docked at ops->blocked_domain.
Drivers should fix themselves to align with the notes above.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
include/linux/iommu.h | 12 ++++
drivers/iommu/iommu.c | 158 ++++++++++++++++++++++++++++++++++++++++++
2 files changed, 170 insertions(+)
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 156732807994..a17161b8625a 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -1123,6 +1123,9 @@ void dev_iommu_priv_set(struct device *dev, void *priv);
extern struct mutex iommu_probe_device_lock;
int iommu_probe_device(struct device *dev);
+int iommu_dev_reset_prepare(struct device *dev);
+void iommu_dev_reset_done(struct device *dev);
+
int iommu_device_use_default_domain(struct device *dev);
void iommu_device_unuse_default_domain(struct device *dev);
@@ -1407,6 +1410,15 @@ static inline int iommu_fwspec_add_ids(struct device *dev, u32 *ids,
return -ENODEV;
}
+static inline int iommu_dev_reset_prepare(struct device *dev)
+{
+ return 0;
+}
+
+static inline void iommu_dev_reset_done(struct device *dev)
+{
+}
+
static inline struct iommu_fwspec *dev_iommu_fwspec_get(struct device *dev)
{
return NULL;
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index bd3deedcd2de..14bfeaa9ac29 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -71,12 +71,29 @@ struct group_device {
struct list_head list;
struct device *dev;
char *name;
+ bool pending_reset : 1;
};
/* Iterate over each struct group_device in a struct iommu_group */
#define for_each_group_device(group, pos) \
list_for_each_entry(pos, &(group)->devices, list)
+/* Callers must hold the dev->iommu_group->mutex */
+static struct group_device *device_to_group_device(struct device *dev)
+{
+ struct iommu_group *group = dev->iommu_group;
+ struct group_device *gdev;
+
+ lockdep_assert_held(&group->mutex);
+
+ /* gdev must be in the list */
+ for_each_group_device(group, gdev) {
+ if (gdev->dev == dev)
+ break;
+ }
+ return gdev;
+}
+
struct iommu_group_attribute {
struct attribute attr;
ssize_t (*show)(struct iommu_group *group, char *buf);
@@ -2155,8 +2172,17 @@ int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
int ret = 0;
mutex_lock(&group->mutex);
+
+ /*
+ * There is a racy attach while the device is resetting. Defer it until
+ * the iommu_dev_reset_done() that attaches the device to group->domain.
+ */
+ if (device_to_group_device(dev)->pending_reset)
+ goto unlock;
+
if (dev->iommu && dev->iommu->attach_deferred)
ret = __iommu_attach_device(domain, dev);
+unlock:
mutex_unlock(&group->mutex);
return ret;
}
@@ -2295,6 +2321,13 @@ static int __iommu_device_set_domain(struct iommu_group *group,
dev->iommu->attach_deferred = 0;
}
+ /*
+ * There is a racy attach while the device is resetting. Defer it until
+ * the iommu_dev_reset_done() that attaches the device to group->domain.
+ */
+ if (gdev->pending_reset)
+ return 0;
+
ret = __iommu_attach_device(new_domain, dev);
if (ret) {
/*
@@ -3378,6 +3411,13 @@ static int __iommu_set_group_pasid(struct iommu_domain *domain,
int ret;
for_each_group_device(group, device) {
+ /*
+ * There is a racy attach while the device is resetting. Defer
+ * it until the iommu_dev_reset_done() that attaches the device
+ * to group->domain.
+ */
+ if (device->pending_reset)
+ continue;
if (device->dev->iommu->max_pasids > 0) {
ret = domain->ops->set_dev_pasid(domain, device->dev,
pasid, old);
@@ -3799,6 +3839,124 @@ int iommu_replace_group_handle(struct iommu_group *group,
}
EXPORT_SYMBOL_NS_GPL(iommu_replace_group_handle, "IOMMUFD_INTERNAL");
+/*
+ * Caller must use iommu_dev_reset_prepare() and iommu_dev_reset_done() together
+ * before/after the core-level reset routine, to clear the pending_reset flag
+ * and to put the iommu_group reference.
+ *
+ * These two functions are designed to be used by PCI reset functions that would
+ * not invoke any racy iommu_release_device() since PCI sysfs node gets removed
+ * before it notifies with a BUS_NOTIFY_REMOVED_DEVICE. When using them in other
+ * cases, callers must ensure there will be no racy iommu_release_device() call,
+ * which otherwise would UAF the dev->iommu_group pointer.
+ */
+int iommu_dev_reset_prepare(struct device *dev)
+{
+ const struct iommu_ops *ops;
+ struct iommu_group *group;
+ unsigned long pasid;
+ void *entry;
+ int ret = 0;
+
+ if (!dev_has_iommu(dev))
+ return 0;
+
+ if (dev->iommu->require_direct) {
+ dev_warn(
+ dev,
+ "Firmware has requested this device have a 1:1 IOMMU mapping, rejecting configuring the device without a 1:1 mapping. Contact your platform vendor.\n");
+ return -EINVAL;
+ }
+
+ /* group will be put in iommu_dev_reset_done() */
+ group = iommu_group_get(dev);
+
+ /* Caller ensures no racy iommu_release_device(), so this won't UAF */
+ mutex_lock(&group->mutex);
+
+ ops = dev_iommu_ops(dev);
+ if (!ops->blocked_domain) {
+ dev_warn(dev,
+ "IOMMU driver doesn't support IOMMU_DOMAIN_BLOCKED\n");
+ ret = -EOPNOTSUPP;
+ goto unlock;
+ }
+
+ device_to_group_device(dev)->pending_reset = true;
+
+ /* Device is already attached to the blocked_domain. Nothing to do */
+ if (group->domain->type == IOMMU_DOMAIN_BLOCKED)
+ goto unlock;
+
+ /* Dock RID domain to blocked_domain while retaining group->domain */
+ ret = __iommu_attach_device(ops->blocked_domain, dev);
+ if (ret)
+ goto unlock;
+
+ /* Dock PASID domains to blocked_domain while retaining pasid_array */
+ xa_lock(&group->pasid_array);
+ xa_for_each_start(&group->pasid_array, pasid, entry, 1)
+ iommu_remove_dev_pasid(dev, pasid,
+ pasid_array_entry_to_domain(entry));
+ xa_unlock(&group->pasid_array);
+
+unlock:
+ mutex_unlock(&group->mutex);
+ if (ret)
+ iommu_group_put(group);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_dev_reset_prepare);
+
+/*
+ * Pairs with a previous iommu_dev_reset_prepare() that returned successfully.
+ *
+ * Note that, although unlikely, there is a risk that re-attaching domains might
+ * fail due to something unexpected like OOM.
+ */
+void iommu_dev_reset_done(struct device *dev)
+{
+ struct iommu_group *group = dev->iommu_group;
+ const struct iommu_ops *ops;
+ struct group_device *gdev;
+ unsigned long pasid;
+ void *entry;
+
+ if (!dev_has_iommu(dev))
+ return;
+
+ mutex_lock(&group->mutex);
+
+ gdev = device_to_group_device(dev);
+
+ ops = dev_iommu_ops(dev);
+ /* iommu_dev_reset_prepare() was not successfully called */
+ if (WARN_ON(!ops->blocked_domain || !gdev->pending_reset)) {
+ mutex_unlock(&group->mutex);
+ return;
+ }
+
+ if (group->domain->type == IOMMU_DOMAIN_BLOCKED)
+ goto done;
+
+ /* Shift RID domain back to group->domain */
+ WARN_ON(__iommu_attach_device(group->domain, dev));
+
+ /* Shift PASID domains back to domains retained in pasid_array */
+ xa_lock(&group->pasid_array);
+ xa_for_each_start(&group->pasid_array, pasid, entry, 1)
+ WARN_ON(__iommu_set_group_pasid(
+ pasid_array_entry_to_domain(entry), group, pasid,
+ ops->blocked_domain));
+ xa_unlock(&group->pasid_array);
+
+done:
+ gdev->pending_reset = false;
+ mutex_unlock(&group->mutex);
+ iommu_group_put(group);
+}
+EXPORT_SYMBOL_GPL(iommu_dev_reset_done);
+
#if IS_ENABLED(CONFIG_IRQ_MSI_IOMMU)
/**
* iommu_dma_prepare_msi() - Map the MSI page in the IOMMU domain
--
2.43.0
^ permalink raw reply related [flat|nested] 26+ messages in thread
* [PATCH RFC v2 4/4] pci: Suspend iommu function prior to resetting a device
2025-06-28 7:42 [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets Nicolin Chen
` (2 preceding siblings ...)
2025-06-28 7:42 ` [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done() Nicolin Chen
@ 2025-06-28 7:42 ` Nicolin Chen
2025-07-24 6:50 ` [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets Ethan Zhao
4 siblings, 0 replies; 26+ messages in thread
From: Nicolin Chen @ 2025-06-28 7:42 UTC (permalink / raw)
To: jgg, joro, will, robin.murphy, rafael, lenb, bhelgaas
Cc: iommu, linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
PCIe permits a device to ignore ATS invalidation TLPs while it is processing
a reset. This creates a problem visible to the OS, where an ATS invalidation
command will time out: e.g. an SVA domain will have no coordination with a
reset event and can racily issue ATS invalidations to a resetting device.
The PCIe spec, in the sec 10.3.1 IMPLEMENTATION NOTE, recommends disabling
and blocking ATS before initiating a Function Level Reset. It also mentions
that other reset methods could have the same vulnerability.
The iommu_dev_reset_prepare/done() helpers have now been introduced for this.
Use them in all the existing reset functions, attaching the device to an
IOMMU_DOMAIN_BLOCKED domain during a reset, so as to allow the IOMMU driver
to:
- invoke pci_disable_ats() and pci_enable_ats() respectively
- wait for all outstanding ATS invalidations to complete
- stop issuing new ATS invalidations
- fence any incoming ATS queries
Add a warning if ATS isn't disabled, in which case the IOMMU driver should
fix itself to disable ATS following the design in iommu_dev_reset_prepare().
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
drivers/pci/pci-acpi.c | 21 ++++++++++-
drivers/pci/pci.c | 84 +++++++++++++++++++++++++++++++++++++++---
drivers/pci/quirks.c | 27 +++++++++++++-
3 files changed, 124 insertions(+), 8 deletions(-)
diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
index b78e0e417324..727957f193ca 100644
--- a/drivers/pci/pci-acpi.c
+++ b/drivers/pci/pci-acpi.c
@@ -9,6 +9,7 @@
#include <linux/delay.h>
#include <linux/init.h>
+#include <linux/iommu.h>
#include <linux/irqdomain.h>
#include <linux/pci.h>
#include <linux/msi.h>
@@ -974,6 +975,7 @@ void pci_set_acpi_fwnode(struct pci_dev *dev)
int pci_dev_acpi_reset(struct pci_dev *dev, bool probe)
{
acpi_handle handle = ACPI_HANDLE(&dev->dev);
+ int ret = 0;
if (!handle || !acpi_has_method(handle, "_RST"))
return -ENOTTY;
@@ -981,12 +983,27 @@ int pci_dev_acpi_reset(struct pci_dev *dev, bool probe)
if (probe)
return 0;
+ /*
+ * Per PCIe r6.3, sec 10.3.1 IMPLEMENTATION NOTE, software disables ATS
+ * before initiating a reset. Notify the iommu driver that enabled ATS.
+ */
+ ret = iommu_dev_reset_prepare(&dev->dev);
+ if (ret) {
+ pci_err(dev, "failed to stop IOMMU\n");
+ return ret;
+ }
+
+ /* Something wrong with the iommu driver that failed to disable ATS */
+ if (dev->ats_enabled)
+ pci_err(dev, "failed to stop ATS. ATS invalidation may time out\n");
+
if (ACPI_FAILURE(acpi_evaluate_object(handle, "_RST", NULL, NULL))) {
pci_warn(dev, "ACPI _RST failed\n");
- return -ENOTTY;
+ ret = -ENOTTY;
}
- return 0;
+ iommu_dev_reset_done(&dev->dev);
+ return ret;
}
bool acpi_pci_power_manageable(struct pci_dev *dev)
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index e9448d55113b..ddb7a10ef500 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -13,6 +13,7 @@
#include <linux/delay.h>
#include <linux/dmi.h>
#include <linux/init.h>
+#include <linux/iommu.h>
#include <linux/msi.h>
#include <linux/of.h>
#include <linux/pci.h>
@@ -4518,13 +4519,30 @@ EXPORT_SYMBOL(pci_wait_for_pending_transaction);
*/
int pcie_flr(struct pci_dev *dev)
{
+ int ret = 0;
+
if (!pci_wait_for_pending_transaction(dev))
pci_err(dev, "timed out waiting for pending transaction; performing function level reset anyway\n");
+ /*
+ * Per PCIe r6.3, sec 10.3.1 IMPLEMENTATION NOTE, software disables ATS
+ * before initiating a reset. Notify the iommu driver that enabled ATS.
+ * Have to call it after waiting for pending DMA transaction.
+ */
+ ret = iommu_dev_reset_prepare(&dev->dev);
+ if (ret) {
+ pci_err(dev, "failed to stop IOMMU\n");
+ return ret;
+ }
+
+ /* Something wrong with the iommu driver that failed to disable ATS */
+ if (dev->ats_enabled)
+ pci_err(dev, "failed to stop ATS. ATS invalidation may time out\n");
+
pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_DEVCTL_BCR_FLR);
if (dev->imm_ready)
- return 0;
+ goto done;
/*
* Per PCIe r4.0, sec 6.6.2, a device must complete an FLR within
@@ -4533,7 +4551,11 @@ int pcie_flr(struct pci_dev *dev)
*/
msleep(100);
- return pci_dev_wait(dev, "FLR", PCIE_RESET_READY_POLL_MS);
+ ret = pci_dev_wait(dev, "FLR", PCIE_RESET_READY_POLL_MS);
+
+done:
+ iommu_dev_reset_done(&dev->dev);
+ return ret;
}
EXPORT_SYMBOL_GPL(pcie_flr);
@@ -4561,6 +4583,7 @@ EXPORT_SYMBOL_GPL(pcie_reset_flr);
static int pci_af_flr(struct pci_dev *dev, bool probe)
{
+ int ret = 0;
int pos;
u8 cap;
@@ -4587,10 +4610,25 @@ static int pci_af_flr(struct pci_dev *dev, bool probe)
PCI_AF_STATUS_TP << 8))
pci_err(dev, "timed out waiting for pending transaction; performing AF function level reset anyway\n");
+ /*
+ * Per PCIe r6.3, sec 10.3.1 IMPLEMENTATION NOTE, software disables ATS
+ * before initiating a reset. Notify the iommu driver that enabled ATS.
+ * Have to call it after waiting for pending DMA transaction.
+ */
+ ret = iommu_dev_reset_prepare(&dev->dev);
+ if (ret) {
+ pci_err(dev, "failed to stop IOMMU\n");
+ return ret;
+ }
+
+ /* Something wrong with the iommu driver that failed to disable ATS */
+ if (dev->ats_enabled)
+ pci_err(dev, "failed to stop ATS. ATS invalidation may time out\n");
+
pci_write_config_byte(dev, pos + PCI_AF_CTRL, PCI_AF_CTRL_FLR);
if (dev->imm_ready)
- return 0;
+ goto done;
/*
* Per Advanced Capabilities for Conventional PCI ECN, 13 April 2006,
@@ -4600,7 +4638,11 @@ static int pci_af_flr(struct pci_dev *dev, bool probe)
*/
msleep(100);
- return pci_dev_wait(dev, "AF_FLR", PCIE_RESET_READY_POLL_MS);
+ ret = pci_dev_wait(dev, "AF_FLR", PCIE_RESET_READY_POLL_MS);
+
+done:
+ iommu_dev_reset_done(&dev->dev);
+ return ret;
}
/**
@@ -4621,6 +4663,7 @@ static int pci_af_flr(struct pci_dev *dev, bool probe)
static int pci_pm_reset(struct pci_dev *dev, bool probe)
{
u16 csr;
+ int ret;
if (!dev->pm_cap || dev->dev_flags & PCI_DEV_FLAGS_NO_PM_RESET)
return -ENOTTY;
@@ -4635,6 +4678,20 @@ static int pci_pm_reset(struct pci_dev *dev, bool probe)
if (dev->current_state != PCI_D0)
return -EINVAL;
+ /*
+ * Per PCIe r6.3, sec 10.3.1 IMPLEMENTATION NOTE, software disables ATS
+ * before initiating a reset. Notify the iommu driver that enabled ATS.
+ */
+ ret = iommu_dev_reset_prepare(&dev->dev);
+ if (ret) {
+ pci_err(dev, "failed to stop IOMMU\n");
+ return ret;
+ }
+
+ /* Something wrong with the iommu driver that failed to disable ATS */
+ if (dev->ats_enabled)
+ pci_err(dev, "failed to stop ATS. ATS invalidation may time out\n");
+
csr &= ~PCI_PM_CTRL_STATE_MASK;
csr |= PCI_D3hot;
pci_write_config_word(dev, dev->pm_cap + PCI_PM_CTRL, csr);
@@ -4645,7 +4702,9 @@ static int pci_pm_reset(struct pci_dev *dev, bool probe)
pci_write_config_word(dev, dev->pm_cap + PCI_PM_CTRL, csr);
pci_dev_d3_sleep(dev);
- return pci_dev_wait(dev, "PM D3hot->D0", PCIE_RESET_READY_POLL_MS);
+ ret = pci_dev_wait(dev, "PM D3hot->D0", PCIE_RESET_READY_POLL_MS);
+ iommu_dev_reset_done(&dev->dev);
+ return ret;
}
/**
@@ -5100,6 +5159,20 @@ static int cxl_reset_bus_function(struct pci_dev *dev, bool probe)
if (rc)
return -ENOTTY;
+ /*
+ * Per PCIe r6.3, sec 10.3.1 IMPLEMENTATION NOTE, software disables ATS
+ * before initiating a reset. Notify the iommu driver that enabled ATS.
+ */
+ rc = iommu_dev_reset_prepare(&dev->dev);
+ if (rc) {
+ pci_err(dev, "failed to stop IOMMU\n");
+ return rc;
+ }
+
+ /* Something wrong with the iommu driver that failed to disable ATS */
+ if (dev->ats_enabled)
+ pci_err(dev, "failed to stop ATS. ATS invalidation may time out\n");
+
if (reg & PCI_DVSEC_CXL_PORT_CTL_UNMASK_SBR) {
val = reg;
} else {
@@ -5114,6 +5187,7 @@ static int cxl_reset_bus_function(struct pci_dev *dev, bool probe)
pci_write_config_word(bridge, dvsec + PCI_DVSEC_CXL_PORT_CTL,
reg);
+ iommu_dev_reset_done(&dev->dev);
return rc;
}
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index d7f4ee634263..7a66c01392d9 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -21,6 +21,7 @@
#include <linux/pci.h>
#include <linux/isa-dma.h> /* isa_dma_bridge_buggy */
#include <linux/init.h>
+#include <linux/iommu.h>
#include <linux/delay.h>
#include <linux/acpi.h>
#include <linux/dmi.h>
@@ -4223,6 +4224,30 @@ static const struct pci_dev_reset_methods pci_dev_reset_methods[] = {
{ 0 }
};
+static int __pci_dev_specific_reset(struct pci_dev *dev, bool probe,
+ const struct pci_dev_reset_methods *i)
+{
+ int ret;
+
+ /*
+ * Per PCIe r6.3, sec 10.3.1 IMPLEMENTATION NOTE, software disables ATS
+ * before initiating a reset. Notify the iommu driver that enabled ATS.
+ */
+ ret = iommu_dev_reset_prepare(&dev->dev);
+ if (ret) {
+ pci_err(dev, "failed to stop IOMMU\n");
+ return ret;
+ }
+
+ /* Something wrong with the iommu driver that failed to disable ATS */
+ if (dev->ats_enabled)
+ pci_err(dev, "failed to stop ATS. ATS invalidation may time out\n");
+
+ ret = i->reset(dev, probe);
+ iommu_dev_reset_done(&dev->dev);
+ return ret;
+}
+
/*
* These device-specific reset methods are here rather than in a driver
* because when a host assigns a device to a guest VM, the host may need
@@ -4237,7 +4262,7 @@ int pci_dev_specific_reset(struct pci_dev *dev, bool probe)
i->vendor == (u16)PCI_ANY_ID) &&
(i->device == dev->device ||
i->device == (u16)PCI_ANY_ID))
- return i->reset(dev, probe);
+ return __pci_dev_specific_reset(dev, probe, i);
}
return -ENOTTY;
--
2.43.0
^ permalink raw reply related [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
2025-06-28 7:42 ` [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done() Nicolin Chen
@ 2025-06-28 13:28 ` Baolu Lu
2025-06-30 12:38 ` Jason Gunthorpe
2025-07-04 15:43 ` Jason Gunthorpe
1 sibling, 1 reply; 26+ messages in thread
From: Baolu Lu @ 2025-06-28 13:28 UTC (permalink / raw)
To: Nicolin Chen, jgg, joro, will, robin.murphy, rafael, lenb,
bhelgaas
Cc: baolu.lu, iommu, linux-kernel, linux-acpi, linux-pci, patches,
pjaroszynski, vsethi, helgaas
On 6/28/2025 3:42 PM, Nicolin Chen wrote:
> PCIe permits a device to ignore ATS invalidation TLPs, while processing a
> reset. This creates a problem visible to the OS where an ATS invalidation
> command will time out: e.g. an SVA domain will have no coordination with a
> reset event and can racily issue ATS invalidations to a resetting device.
>
> The OS should do something to mitigate this as we do not want production
> systems to be reporting critical ATS failures, especially in a hypervisor
> environment. Broadly, OS could arrange to ignore the timeouts, block page
> table mutations to prevent invalidations, or disable and block ATS.
>
> The PCIe spec in sec 10.3.1 IMPLEMENTATION NOTE recommends to disable and
> block ATS before initiating a Function Level Reset. It also mentions that
> other reset methods could have the same vulnerability as well.
>
> Provide a callback from the PCI subsystem that will enclose the reset and
> have the iommu core temporarily change all the attached domain to BLOCKED.
> After attaching a BLOCKED domain, IOMMU drivers should fence any incoming
> ATS queries, synchronously stop issuing new ATS invalidations, and wait
> for all ATS invalidations to complete. This can avoid any ATS invaliation
> timeouts.
>
> However, if there is a domain attachment/replacement happening during an
> ongoing reset, the ATS might be re-enabled between the two function calls.
> Introduce a new pending_reset flag in group_device to defer an attachment
> during a reset, allowing iommu core to cache the target domains in the SW
> level but bypassing the driver. The iommu_dev_reset_done() will re-attach
> these soft-attached domains via __iommu_attach_device/set_group_pasid().
>
> Notes:
> - This only works for IOMMU drivers that implemented ops->blocked_domain
> correctly with pci_disable_ats().
Does this mean the IOMMU driver should disable ATS when
ops->blocked_domain is used? This might not be feasible because
ops->blocked_domain might possibly be attached to a PASID of a device,
while other PASIDs still use ATS for functionality.
> - This only works for IOMMU drivers that will not issue ATS invalidation
> requests to the device, after it's docked at ops->blocked_domain.
> Driver should fix itself to align with the aforementioned notes.
My understanding of the requirements for the iommu drivers is: when all
PASIDs are docked in the blocking DMA state, the IOMMU driver should:
- Flush all outstanding ATS invalidation requests;
- Stop issuing any further ATS invalidations;
- Configure the hardware to reject further ATS translation requests.
>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> ---
> include/linux/iommu.h | 12 ++++
> drivers/iommu/iommu.c | 158 ++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 170 insertions(+)
>
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 156732807994..a17161b8625a 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -1123,6 +1123,9 @@ void dev_iommu_priv_set(struct device *dev, void *priv);
> extern struct mutex iommu_probe_device_lock;
> int iommu_probe_device(struct device *dev);
>
> +int iommu_dev_reset_prepare(struct device *dev);
> +void iommu_dev_reset_done(struct device *dev);
> +
> int iommu_device_use_default_domain(struct device *dev);
> void iommu_device_unuse_default_domain(struct device *dev);
>
> @@ -1407,6 +1410,15 @@ static inline int iommu_fwspec_add_ids(struct device *dev, u32 *ids,
> return -ENODEV;
> }
>
> +static inline int iommu_dev_reset_prepare(struct device *dev)
> +{
> + return 0;
> +}
> +
> +static inline void iommu_dev_reset_done(struct device *dev)
> +{
> +}
> +
> static inline struct iommu_fwspec *dev_iommu_fwspec_get(struct device *dev)
> {
> return NULL;
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index bd3deedcd2de..14bfeaa9ac29 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -71,12 +71,29 @@ struct group_device {
> struct list_head list;
> struct device *dev;
> char *name;
> + bool pending_reset : 1;
> };
>
> /* Iterate over each struct group_device in a struct iommu_group */
> #define for_each_group_device(group, pos) \
> list_for_each_entry(pos, &(group)->devices, list)
>
> +/* Callers must hold the dev->iommu_group->mutex */
> +static struct group_device *device_to_group_device(struct device *dev)
> +{
> + struct iommu_group *group = dev->iommu_group;
> + struct group_device *gdev;
> +
> + lockdep_assert_held(&group->mutex);
> +
> + /* gdev must be in the list */
> + for_each_group_device(group, gdev) {
> + if (gdev->dev == dev)
> + break;
> + }
> + return gdev;
> +}
> +
> struct iommu_group_attribute {
> struct attribute attr;
> ssize_t (*show)(struct iommu_group *group, char *buf);
> @@ -2155,8 +2172,17 @@ int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
> int ret = 0;
>
> mutex_lock(&group->mutex);
> +
> + /*
> + * There is a racy attach while the device is resetting. Defer it until
> + * the iommu_dev_reset_done() that attaches the device to group->domain.
> + */
> + if (device_to_group_device(dev)->pending_reset)
> + goto unlock;
> +
> if (dev->iommu && dev->iommu->attach_deferred)
> ret = __iommu_attach_device(domain, dev);
> +unlock:
> mutex_unlock(&group->mutex);
> return ret;
> }
> @@ -2295,6 +2321,13 @@ static int __iommu_device_set_domain(struct iommu_group *group,
> dev->iommu->attach_deferred = 0;
> }
>
> + /*
> + * There is a racy attach while the device is resetting. Defer it until
> + * the iommu_dev_reset_done() that attaches the device to group->domain.
> + */
> + if (gdev->pending_reset)
> + return 0;
> +
> ret = __iommu_attach_device(new_domain, dev);
> if (ret) {
> /*
> @@ -3378,6 +3411,13 @@ static int __iommu_set_group_pasid(struct iommu_domain *domain,
> int ret;
>
> for_each_group_device(group, device) {
> + /*
> + * There is a racy attach while the device is resetting. Defer
> + * it until the iommu_dev_reset_done() that attaches the device
> + * to group->domain.
> + */
> + if (device->pending_reset)
> + continue;
> if (device->dev->iommu->max_pasids > 0) {
> ret = domain->ops->set_dev_pasid(domain, device->dev,
> pasid, old);
> @@ -3799,6 +3839,124 @@ int iommu_replace_group_handle(struct iommu_group *group,
> }
> EXPORT_SYMBOL_NS_GPL(iommu_replace_group_handle, "IOMMUFD_INTERNAL");
>
> +/*
> + * Caller must use iommu_dev_reset_prepare() and iommu_dev_reset_done() together
> + * before/after the core-level reset routine, to unclear the pending_reset flag
> + * and to put the iommu_group reference.
> + *
> + * These two functions are designed to be used by PCI reset functions that would
> + * not invoke any racy iommu_release_device() since PCI sysfs node gets removed
> + * before it notifies with a BUS_NOTIFY_REMOVED_DEVICE. When using them in other
> + * case, callers must ensure there will be no racy iommu_release_device() call,
> + * which otherwise would UAF the dev->iommu_group pointer.
> + */
Use kdoc style comments.
> +int iommu_dev_reset_prepare(struct device *dev)
> +{
> + const struct iommu_ops *ops;
> + struct iommu_group *group;
> + unsigned long pasid;
> + void *entry;
> + int ret = 0;
> +
> + if (!dev_has_iommu(dev))
> + return 0;
> +
> + if (dev->iommu->require_direct) {
> + dev_warn(
> + dev,
> + "Firmware has requested this device have a 1:1 IOMMU mapping, rejecting configuring the device without a 1:1 mapping. Contact your platform vendor.\n");
> + return -EINVAL;
> + }
> +
> + /* group will be put in iommu_dev_reset_done() */
> + group = iommu_group_get(dev);
> +
> + /* Caller ensures no racy iommu_release_device(), so this won't UAF */
> + mutex_lock(&group->mutex);
> +
> + ops = dev_iommu_ops(dev);
> + if (!ops->blocked_domain) {
> + dev_warn(dev,
> + "IOMMU driver doesn't support IOMMU_DOMAIN_BLOCKED\n");
> + ret = -EOPNOTSUPP;
> + goto unlock;
> + }
> +
> + device_to_group_device(dev)->pending_reset = true;
> +
> + /* Device is already attached to the blocked_domain. Nothing to do */
> + if (group->domain->type == IOMMU_DOMAIN_BLOCKED)
> + goto unlock;
"group->domain->type == IOMMU_DOMAIN_BLOCKED" means that IOMMU_NO_PASID
is docked in the blocking DMA state, but it doesn't imply that other
PASIDs are also in the blocking DMA state. Therefore, we might still
need the following lines to handle other PASIDs.
On the other hand, perhaps we should use "group->domain ==
ops->blocked_domain" instead of "group->domain->type ==
IOMMU_DOMAIN_BLOCKED" to make the code consistent with the commit
message.
> +
> + /* Dock RID domain to blocked_domain while retaining group->domain */
> + ret = __iommu_attach_device(ops->blocked_domain, dev);
> + if (ret)
> + goto unlock;
> +
> + /* Dock PASID domains to blocked_domain while retaining pasid_array */
> + xa_lock(&group->pasid_array);
> + xa_for_each_start(&group->pasid_array, pasid, entry, 1)
> + iommu_remove_dev_pasid(dev, pasid,
> + pasid_array_entry_to_domain(entry));
> + xa_unlock(&group->pasid_array);
> +
> +unlock:
> + mutex_unlock(&group->mutex);
> + if (ret)
> + iommu_group_put(group);
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(iommu_dev_reset_prepare);
> +
> +/*
> + * Pair with a previous iommu_dev_reset_prepare() that was successfully returned
> + *
> + * Note that, although unlikely, there is a risk that re-attaching domains might
> + * fail due to some unexpected happening like OOM.
> + */
> +void iommu_dev_reset_done(struct device *dev)
> +{
> + struct iommu_group *group = dev->iommu_group;
> + const struct iommu_ops *ops;
> + struct group_device *gdev;
> + unsigned long pasid;
> + void *entry;
> +
> + if (!dev_has_iommu(dev))
> + return;
> +
> + mutex_lock(&group->mutex);
> +
> + gdev = device_to_group_device(dev);
> +
> + ops = dev_iommu_ops(dev);
> + /* iommu_dev_reset_prepare() was not successfully called */
> + if (WARN_ON(!ops->blocked_domain || !gdev->pending_reset)) {
> + mutex_unlock(&group->mutex);
> + return;
> + }
> +
> + if (group->domain->type == IOMMU_DOMAIN_BLOCKED)
> + goto done;
The same here?
> +
> + /* Shift RID domain back to group->domain */
> + WARN_ON(__iommu_attach_device(group->domain, dev));
> +
> + /* Shift PASID domains back to domains retained in pasid_array */
> + xa_lock(&group->pasid_array);
> + xa_for_each_start(&group->pasid_array, pasid, entry, 1)
> + WARN_ON(__iommu_set_group_pasid(
> + pasid_array_entry_to_domain(entry), group, pasid,
> + ops->blocked_domain));
> + xa_unlock(&group->pasid_array);
> +
> +done:
> + gdev->pending_reset = false;
> + mutex_unlock(&group->mutex);
> + iommu_group_put(group);
> +}
> +EXPORT_SYMBOL_GPL(iommu_dev_reset_done);
> +
> #if IS_ENABLED(CONFIG_IRQ_MSI_IOMMU)
> /**
> * iommu_dma_prepare_msi() - Map the MSI page in the IOMMU domain
Thanks,
baolu
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
2025-06-28 13:28 ` Baolu Lu
@ 2025-06-30 12:38 ` Jason Gunthorpe
2025-06-30 17:29 ` Nicolin Chen
0 siblings, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2025-06-30 12:38 UTC (permalink / raw)
To: Baolu Lu
Cc: Nicolin Chen, joro, will, robin.murphy, rafael, lenb, bhelgaas,
iommu, linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas
On Sat, Jun 28, 2025 at 09:28:12PM +0800, Baolu Lu wrote:
> Does this mean the IOMMU driver should disable ATS when ops-
> >blocked_domain is used? This might not be feasible because ops-
> >blocked_domain might possibly be attached to a PASID of a device,
> while other PASIDs still use ATS for functionality.
No.. The above should be setting everything, including PASIDs to the
blocked domain.
The driver doesn't have to disable ATS at the device, but ARM does.
It does have to stop issuing invalidations, which is part of the
definition of blocked in the first place.
> > - This only works for IOMMU drivers that will not issue ATS invalidation
> > requests to the device, after it's docked at ops->blocked_domain.
> > Driver should fix itself to align with the aforementioned notes.
>
> My understanding of the requirements for the iommu drivers is: when all
> PASIDs are docked in the blocking DMA state, the IOMMU driver should:
>
> - Flush all outstanding ATS invalidation requests;
Arguably the driver needs to have serialized ATS invalidations
synchronously during the change to the blocked domain. The prior
paging domain could be freed immediately, so lingering invalidations
are probably an existing bug.
> - Stop issuing any further ATS invalidations;
Yes
> - Configure the hardware to reject further ATS translation requests.
Not required. A blocked domain inherently responds to all ATS
translation requests with non-present, which is not allowed to be
cached.
> > +int iommu_dev_reset_prepare(struct device *dev)
> > +{
> > + const struct iommu_ops *ops;
> > + struct iommu_group *group;
> > + unsigned long pasid;
> > + void *entry;
> > + int ret = 0;
> > +
> > + if (!dev_has_iommu(dev))
> > + return 0;
> > +
> > + if (dev->iommu->require_direct) {
> > + dev_warn(
> > + dev,
> > + "Firmware has requested this device have a 1:1 IOMMU mapping, rejecting configuring the device without a 1:1 mapping. Contact your platform vendor.\n");
> > + return -EINVAL;
> > + }
> > +
> > + /* group will be put in iommu_dev_reset_done() */
> > + group = iommu_group_get(dev);
> > +
> > + /* Caller ensures no racy iommu_release_device(), so this won't UAF */
> > + mutex_lock(&group->mutex);
> > +
> > + ops = dev_iommu_ops(dev);
> > + if (!ops->blocked_domain) {
> > + dev_warn(dev,
> > + "IOMMU driver doesn't support IOMMU_DOMAIN_BLOCKED\n");
> > + ret = -EOPNOTSUPP;
> > + goto unlock;
> > + }
> > +
> > + device_to_group_device(dev)->pending_reset = true;
> > +
> > + /* Device is already attached to the blocked_domain. Nothing to do */
> > + if (group->domain->type == IOMMU_DOMAIN_BLOCKED)
> > + goto unlock;
>
> "group->domain->type == IOMMU_DOMAIN_BLOCKED" means that IOMMU_NO_PASID
> is docked in the blocking DMA state, but it doesn't imply that other
> PASIDs are also in the blocking DMA state. Therefore, we might still
> need the following lines to handle other PASIDs.
Yes, we always have to check the xarray.
> On the other hand, perhaps we should use "group->domain == ops-
> >blocked_domain" instead of "group->domain->type ==
> IOMMU_DOMAIN_BLOCKED" to make the code consistent with the commit
> message.
ops->blocked_domain is not good, we support devices without a static
blocking domain. But yes, using DOMAIN_BLOCKED is not great; there is
a group->blocking_domain that should be used, and it will dynamically
create an empty paging domain if needed.
Jason
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
2025-06-30 12:38 ` Jason Gunthorpe
@ 2025-06-30 17:29 ` Nicolin Chen
2025-06-30 22:49 ` Jason Gunthorpe
0 siblings, 1 reply; 26+ messages in thread
From: Nicolin Chen @ 2025-06-30 17:29 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Baolu Lu, joro, will, robin.murphy, rafael, lenb, bhelgaas, iommu,
linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas
On Mon, Jun 30, 2025 at 09:38:14AM -0300, Jason Gunthorpe wrote:
> On Sat, Jun 28, 2025 at 09:28:12PM +0800, Baolu Lu wrote:
>
> > Does this mean the IOMMU driver should disable ATS when ops-
> > >blocked_domain is used? This might not be feasible because ops-
> > >blocked_domain might possibly be attached to a PASID of a device,
> > while other PASIDs still use ATS for functionality.
>
> No.. The above should be setting everything, including PASIDs to the
> blocked domain.
>
> The driver doesn't have to disable ATS at the device, but ARM does.
Oh, the code is expecting a pci_disable_ats() call, as the next
patch will check whether ATS is disabled on the PCI side..
If that's the case, we'd have to leave ATS enabled and only trust
that the iommu driver won't issue any new ATS invalidations?
Or should we make this a "must" for the driver v.s. "doesn't have to"?
> > > + /* Device is already attached to the blocked_domain. Nothing to do */
> > > + if (group->domain->type == IOMMU_DOMAIN_BLOCKED)
> > > + goto unlock;
> >
> > "group->domain->type == IOMMU_DOMAIN_BLOCKED" means that IOMMU_NO_PASID
> > is docked in the blocking DMA state, but it doesn't imply that other
> > PASIDs are also in the blocking DMA state. Therefore, we might still
> > need the following lines to handle other PASIDs.
>
> Yes, we always have to check the xarray.
OK. This check should apply to the RID domain attach only then.
> > On the other hand, perhaps we should use "group->domain == ops-
> > >blocked_domain" instead of "group->domain->type ==
> > IOMMU_DOMAIN_BLOCKED" to make the code consistent with the commit
> > message.
>
> ops->blocked_domain is not good, we support devices without static
> blocking domain. But yes, using DOMAIN_BLOCKED is not greap, there is
> a group->blocked_domain that should be used and will dynamicaly create
> an empty paging domain if needed.
You mean we should use the group->blocking_domain, even if it was
allocated to be a paging domain, as the driver doesn't understand
IOMMU_DOMAIN_BLOCKED yet?
Thanks
Nicolin
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
2025-06-30 17:29 ` Nicolin Chen
@ 2025-06-30 22:49 ` Jason Gunthorpe
0 siblings, 0 replies; 26+ messages in thread
From: Jason Gunthorpe @ 2025-06-30 22:49 UTC (permalink / raw)
To: Nicolin Chen
Cc: Baolu Lu, joro, will, robin.murphy, rafael, lenb, bhelgaas, iommu,
linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas
On Mon, Jun 30, 2025 at 10:29:12AM -0700, Nicolin Chen wrote:
> On Mon, Jun 30, 2025 at 09:38:14AM -0300, Jason Gunthorpe wrote:
> > On Sat, Jun 28, 2025 at 09:28:12PM +0800, Baolu Lu wrote:
> >
> > > Does this mean the IOMMU driver should disable ATS when ops-
> > > >blocked_domain is used? This might not be feasible because ops-
> > > >blocked_domain might possibly be attached to a PASID of a device,
> > > while other PASIDs still use ATS for functionality.
> >
> > No.. The above should be setting everything, including PASIDs to the
> > blocked domain.
> >
> > The driver doesn't have to disable ATS at the device, but ARM does.
>
> Oh, the code is expecting a pci_disable_ats() call, as the next
> patch will check if ats is disabled on the PCI side..
I would not bother, it is a lot more work to fix the AMD and Intel iommu
drivers and I don't think it really buys us anything..
> If that's the case, we'd have to leave the ATS enabled but only
> trust that iommu driver won't issue any new ATS invalidation?
Yes.
> > ops->blocked_domain is not good, we support devices without static
> > blocking domain. But yes, using DOMAIN_BLOCKED is not greap, there is
> > a group->blocked_domain that should be used and will dynamicaly create
> > an empty paging domain if needed.
>
> You mean we should use the group->blocking_domain, even if it was
> allocated to be a paging domain as the driver doesn't understand
> a IOMMU_DOMAIN_BLOCKED yet?
Yes, and you just get a group->blocking_domain to assign for the same reason.
Jason
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 1/4] iommu: Lock group->mutex in iommu_deferred_attach
2025-06-28 7:42 ` [PATCH RFC v2 1/4] iommu: Lock group->mutex in iommu_deferred_attach Nicolin Chen
@ 2025-07-04 15:22 ` Jason Gunthorpe
0 siblings, 0 replies; 26+ messages in thread
From: Jason Gunthorpe @ 2025-07-04 15:22 UTC (permalink / raw)
To: Nicolin Chen
Cc: joro, will, robin.murphy, rafael, lenb, bhelgaas, iommu,
linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
On Sat, Jun 28, 2025 at 12:42:39AM -0700, Nicolin Chen wrote:
> The iommu_deferred_attach() is a runtime asynchronous function called by
> iommu-dma function, which will race against other attach functions if it
> accesses something in the dev->iommu_group.
>
> Grab the lock to protect it like others who call __iommu_attach_device()
> as it will need to access dev->iommu_group.
>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> ---
> drivers/iommu/iommu.c | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
I vaguely recall seeing something like this before.
IIRC it can't actually race, but there is no harm in taking the lock so
lockdep works reliably. It isn't a fast path.
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Jason
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 2/4] iommu: Pass in gdev to __iommu_device_set_domain
2025-06-28 7:42 ` [PATCH RFC v2 2/4] iommu: Pass in gdev to __iommu_device_set_domain Nicolin Chen
@ 2025-07-04 15:23 ` Jason Gunthorpe
0 siblings, 0 replies; 26+ messages in thread
From: Jason Gunthorpe @ 2025-07-04 15:23 UTC (permalink / raw)
To: Nicolin Chen
Cc: joro, will, robin.murphy, rafael, lenb, bhelgaas, iommu,
linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
On Sat, Jun 28, 2025 at 12:42:40AM -0700, Nicolin Chen wrote:
> This will need to check a per gdev property, since the dev pointer cannot
> store any private iommu flag for the iommu code to use. Thus, pass in the
> gdev pointer instead.
>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> ---
> drivers/iommu/iommu.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Jason
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
2025-06-28 7:42 ` [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done() Nicolin Chen
2025-06-28 13:28 ` Baolu Lu
@ 2025-07-04 15:43 ` Jason Gunthorpe
2025-07-22 21:58 ` Nicolin Chen
1 sibling, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2025-07-04 15:43 UTC (permalink / raw)
To: Nicolin Chen
Cc: joro, will, robin.murphy, rafael, lenb, bhelgaas, iommu,
linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
On Sat, Jun 28, 2025 at 12:42:41AM -0700, Nicolin Chen wrote:
> - This only works for IOMMU drivers that implemented ops->blocked_domain
> correctly with pci_disable_ats().
As was said in the thread, it works for everyone. Even if we install an
empty paging domain for blocking, that will still stop the ATS
invalidations from being issued. ATS remains on, but this is not a
problem.
> - This only works for IOMMU drivers that will not issue ATS invalidation
> requests to the device, after it's docked at ops->blocked_domain.
Which should be everyone.. It would be broken and racy with release to
do otherwise.
> @@ -2155,8 +2172,17 @@ int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
> int ret = 0;
>
> mutex_lock(&group->mutex);
> +
> + /*
> + * There is a racy attach while the device is resetting. Defer it until
> + * the iommu_dev_reset_done() that attaches the device to group->domain.
> + */
> + if (device_to_group_device(dev)->pending_reset)
> + goto unlock;
> +
> if (dev->iommu && dev->iommu->attach_deferred)
> ret = __iommu_attach_device(domain, dev);
> +unlock:
> mutex_unlock(&group->mutex);
Actually looking at this some more maybe write it like:
/*
* This is called on the dma mapping fast path so avoid locking. This
* is racy, but we have an expectation that the driver will setup its
* DMAs inside probe while still single threaded to avoid racing.
*/
if (dev->iommu && !READ_ONCE(dev->iommu->attach_deferred))
return 0;
guard(mutex)(&group->mutex);
if (device_to_group_device(dev)->pending_reset)
return 0;
if (!dev->iommu->attach_deferred)
return 0;
return __iommu_attach_device(domain, dev);
And of course it is already quite crazy to be doing FLR during a
device probe so this is not a realistic scenario.
> @@ -2295,6 +2321,13 @@ static int __iommu_device_set_domain(struct iommu_group *group,
> dev->iommu->attach_deferred = 0;
> }
>
> + /*
> + * There is a racy attach while the device is resetting. Defer it until
> + * the iommu_dev_reset_done() that attaches the device to group->domain.
"There is a concurrent attach" here and other places
> +int iommu_dev_reset_prepare(struct device *dev)
> +{
> + const struct iommu_ops *ops;
> + struct iommu_group *group;
> + unsigned long pasid;
> + void *entry;
> + int ret = 0;
> +
> + if (!dev_has_iommu(dev))
> + return 0;
> +
> + if (dev->iommu->require_direct) {
> + dev_warn(
> + dev,
> + "Firmware has requested this device have a 1:1 IOMMU mapping, rejecting configuring the device without a 1:1 mapping. Contact your platform vendor.\n");
> + return -EINVAL;
> + }
I don't think we can do this. E.g. on ARM all devices have RMRs inside
VMs, so this will completely break FLR inside a VM???
Either ignore this condition with the rationale that we are about to
reset it so it doesn't matter, or we need to establish a new paging
domain for isolation purposes that has the RMR setup.
> + /* group will be put in iommu_dev_reset_done() */
> + group = iommu_group_get(dev);
Probably don't need this. If we are already requiring no
iommu_release_device() then we can keep with that.
> + /* Caller ensures no racy iommu_release_device(), so this won't UAF */
> + mutex_lock(&group->mutex);
It is the group_get above that won't UAF, this is fine once we have a
refcount.
> + ops = dev_iommu_ops(dev);
> + if (!ops->blocked_domain) {
> + dev_warn(dev,
> + "IOMMU driver doesn't support IOMMU_DOMAIN_BLOCKED\n");
> + ret = -EOPNOTSUPP;
> + goto unlock;
Not necessary, just use the existing flow to allocate an empty paging
domain to group->blocking_domain
> + device_to_group_device(dev)->pending_reset = true;
> +
> + /* Device is already attached to the blocked_domain. Nothing to do */
> + if (group->domain->type == IOMMU_DOMAIN_BLOCKED)
> + goto unlock;
Should be group->domain == group->blocking_domain
> + /* Dock RID domain to blocked_domain while retaining group->domain */
> + ret = __iommu_attach_device(ops->blocked_domain, dev);
group->blocking_domain
> + if (ret)
> + goto unlock;
> +
> + /* Dock PASID domains to blocked_domain while retaining pasid_array */
> + xa_lock(&group->pasid_array);
Not sure we need this lock? The group mutex already prevents mutation
of the xa list, and I don't think it is allowed to call
iommu_remove_dev_pasid() in an atomic context.
Jason
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
2025-07-04 15:43 ` Jason Gunthorpe
@ 2025-07-22 21:58 ` Nicolin Chen
2025-07-23 2:21 ` Baolu Lu
2025-07-27 16:25 ` Jason Gunthorpe
0 siblings, 2 replies; 26+ messages in thread
From: Nicolin Chen @ 2025-07-22 21:58 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: joro, will, robin.murphy, rafael, lenb, bhelgaas, iommu,
linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
Sorry for the huge delay. I've addressed everything, following your remarks.
Some feedback inline.
On Fri, Jul 04, 2025 at 12:43:42PM -0300, Jason Gunthorpe wrote:
> On Sat, Jun 28, 2025 at 12:42:41AM -0700, Nicolin Chen wrote:
>
> > - This only works for IOMMU drivers that implemented ops->blocked_domain
> > correctly with pci_disable_ats().
>
> As was in the thread, it works for everyone. Even if we install an
> empty paging domain for blocking that still will stop the ATS
> invalidations from being issued. ATS remains on but this is not a
> problem.
OK. And I am dropping this validation in the PCI patch:
/* Something wrong with the iommu driver that failed to disable ATS */
if (dev->ats_enabled)
pci_err(dev, "failed to stop ATS. ATS invalidation may time out\n");
> > @@ -2155,8 +2172,17 @@ int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
> > int ret = 0;
> >
> > mutex_lock(&group->mutex);
> > +
> > + /*
> > + * There is a racy attach while the device is resetting. Defer it until
> > + * the iommu_dev_reset_done() that attaches the device to group->domain.
> > + */
> > + if (device_to_group_device(dev)->pending_reset)
> > + goto unlock;
> > +
> > if (dev->iommu && dev->iommu->attach_deferred)
> > ret = __iommu_attach_device(domain, dev);
> > +unlock:
> > mutex_unlock(&group->mutex);
>
> Actually looking at this some more maybe write it like:
>
> /*
> * This is called on the dma mapping fast path so avoid locking. This
> * is racy, but we have an expectation that the driver will setup its
> * DMAs inside probe while still single threaded to avoid racing.
> */
> if (dev->iommu && !READ_ONCE(dev->iommu->attach_deferred))
This triggers a build error as attach_deferred is a bit-field. So I
am changing it from "u32 attach_deferred:1" to "bool" for this.
And, to keep the original logic, I think it should be:
if (!dev->iommu || !READ_ONCE(dev->iommu->attach_deferred))
> return 0;
>
> guard(mutex)(&group->mutex);
I recall Baolu mentioned that Joerg might not like the guard style
so I am keeping mutex_lock/unlock().
> if (device_to_group_device(dev)->pending_reset)
> return 0;
>
> if (!dev->iommu->attach_deferred)
> return 0;
I think this is redundant since the fast path checked.
> return __iommu_attach_device(domain, dev);
>
> And of course it is already quite crazy to be doing FLR during a
> device probe so this is not a realistic scenario.
Hmm, I am not sure about that, as iommu_deferred_attach() is mostly
invoked by a dma_alloc() or even a dma_map(). So, this might not be
confined to a device probe?
> > + if (dev->iommu->require_direct) {
> > + dev_warn(
> > + dev,
> > + "Firmware has requested this device have a 1:1 IOMMU mapping, rejecting configuring the device without a 1:1 mapping. Contact your platform vendor.\n");
> > + return -EINVAL;
> > + }
>
> I don't think we can do this. eg on ARM all devices have RMRs inside
> VMs so this will completely break FLR inside a vm???
>
> Either ignore this condition with the rational that we are about to
> reset it so it doesn't matter, or we need to establish a new paging
> domain for isolation purposes that has the RMR setup.
Ah, you are right. ARM MSI in a VM uses RMR and sets this.
But does it also raise a question that a VM having RMR can't use
the blocked_domain, as __iommu_device_set_domain() has the exact
same check rejecting blocked_domain? Not sure if there would be
some unintended consequence though...
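(For reference, the check in __iommu_device_set_domain() is roughly along
these lines -- paraphrased from memory, not a verbatim quote:)

	/* Reject a blocking attach when firmware requires a 1:1 mapping */
	if (dev->iommu->require_direct &&
	    (new_domain->type == IOMMU_DOMAIN_BLOCKED ||
	     new_domain == group->blocking_domain))
		return -EINVAL;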
> > + if (ret)
> > + goto unlock;
> > +
> > + /* Dock PASID domains to blocked_domain while retaining pasid_array */
> > + xa_lock(&group->pasid_array);
>
> Not sure we need this lock? The group mutex already prevents mutation
> of the xa list and I dont' think it is allowed to call
> iommu_remove_dev_pasid() in an atomic context.
I see only iommu_attach_handle_get() doesn't use group->mutex. And
it's a reader. So I think it's safe to drop the xa_lock.
I added this:
/*
* Dock PASID domains to blocking_domain while retaining pasid_array.
*
* The pasid_array is mostly fenced by group->mutex, except one reader
* in iommu_attach_handle_get(), so it's safe to read without xa_lock.
*/
Thanks!
Nicolin
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
2025-07-22 21:58 ` Nicolin Chen
@ 2025-07-23 2:21 ` Baolu Lu
2025-07-23 2:53 ` Nicolin Chen
2025-07-27 16:25 ` Jason Gunthorpe
1 sibling, 1 reply; 26+ messages in thread
From: Baolu Lu @ 2025-07-23 2:21 UTC (permalink / raw)
To: Nicolin Chen, Jason Gunthorpe
Cc: joro, will, robin.murphy, rafael, lenb, bhelgaas, iommu,
linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas
On 7/23/25 05:58, Nicolin Chen wrote:
>> return 0;
>>
>> guard(mutex)(&group->mutex);
> I recall Baolu mentioned that Joerg might not like the guard style
> so I am keeping mutex_lock/unlock().
You may be misremembering or mixing something up. I didn't see Joerg
express that opinion. :-)
My understanding is that cleanup.h could be used in new or refactored
code, but people don't like converting existing lock/unlock mechanisms
for no real benefit.
Thanks,
baolu
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
2025-07-23 2:21 ` Baolu Lu
@ 2025-07-23 2:53 ` Nicolin Chen
0 siblings, 0 replies; 26+ messages in thread
From: Nicolin Chen @ 2025-07-23 2:53 UTC (permalink / raw)
To: Baolu Lu
Cc: Jason Gunthorpe, joro, will, robin.murphy, rafael, lenb, bhelgaas,
iommu, linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas
On Wed, Jul 23, 2025 at 10:21:41AM +0800, Baolu Lu wrote:
> On 7/23/25 05:58, Nicolin Chen wrote:
> > > return 0;
> > >
> > > guard(mutex)(&group->mutex);
> > I recall Baolu mentioned that Joerg might not like the guard style
> > so I am keeping mutex_lock/unlock().
>
> You may be misremembering or mixing something up. I didn't see Joerg
> express that opinion. :-)
>
> My understanding is that cleanup.h could be used in new or refactored
> code, but people don't like converting existing lock/unlock mechanisms
> for no real benefit.
Ah, thanks for clarifying. Let's do the guard() way then :)
Nicolin
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets
2025-06-28 7:42 [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets Nicolin Chen
` (3 preceding siblings ...)
2025-06-28 7:42 ` [PATCH RFC v2 4/4] pci: Suspend iommu function prior to resetting a device Nicolin Chen
@ 2025-07-24 6:50 ` Ethan Zhao
2025-07-25 16:41 ` Nicolin Chen
4 siblings, 1 reply; 26+ messages in thread
From: Ethan Zhao @ 2025-07-24 6:50 UTC (permalink / raw)
To: Nicolin Chen, jgg, joro, will, robin.murphy, rafael, lenb,
bhelgaas
Cc: iommu, linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
On 6/28/2025 3:42 PM, Nicolin Chen wrote:
> Hi all,
>
> PCIe permits a device to ignore ATS invalidation TLPs, while processing a
> reset. This creates a problem visible to the OS where an ATS invalidation
> command will time out: e.g. an SVA domain will have no coordination with a
> reset event and can racily issue ATS invalidations to a resetting device.
>
> The OS should do something to mitigate this as we do not want production
> systems to be reporting critical ATS failures, especially in a hypervisor
> environment. Broadly, OS could arrange to ignore the timeouts, block page
> table mutations to prevent invalidations, or disable and block ATS.
>
> The PCIe spec in sec 10.3.1 IMPLEMENTATION NOTE recommends to disable and
> block ATS before initiating a Function Level Reset. It also mentions that
> other reset methods could have the same vulnerability as well.
>
> Provide a callback from the PCI subsystem that will enclose the reset and
> have the iommu core temporarily change all the attached domain to BLOCKED.
> After attaching a BLOCKED domain, IOMMU drivers should fence any incoming
> ATS queries, synchronously stop issuing new ATS invalidations, and wait
> for all ATS invalidations to complete. This can avoid any ATS invaliation
> timeouts.
This approach seems effective for reset operations initiated through
software interface functions, but how would we handle those triggered by
hardware mechanisms? For example, resets caused by PCIe DPC mechanisms,
device firmware, or manual hot-plug operations?
Thanks,
Ethan
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets
2025-07-24 6:50 ` [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets Ethan Zhao
@ 2025-07-25 16:41 ` Nicolin Chen
2025-07-27 12:48 ` Ethan Zhao
0 siblings, 1 reply; 26+ messages in thread
From: Nicolin Chen @ 2025-07-25 16:41 UTC (permalink / raw)
To: Ethan Zhao
Cc: jgg, joro, will, robin.murphy, rafael, lenb, bhelgaas, iommu,
linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
On Thu, Jul 24, 2025 at 02:50:53PM +0800, Ethan Zhao wrote:
> On 6/28/2025 3:42 PM, Nicolin Chen wrote:
> > PCIe permits a device to ignore ATS invalidation TLPs, while processing a
> > reset. This creates a problem visible to the OS where an ATS invalidation
> > command will time out: e.g. an SVA domain will have no coordination with a
> > reset event and can racily issue ATS invalidations to a resetting device.
> >
> > The OS should do something to mitigate this as we do not want production
> > systems to be reporting critical ATS failures, especially in a hypervisor
> > environment. Broadly, OS could arrange to ignore the timeouts, block page
> > table mutations to prevent invalidations, or disable and block ATS.
> >
> > The PCIe spec in sec 10.3.1 IMPLEMENTATION NOTE recommends to disable and
> > block ATS before initiating a Function Level Reset. It also mentions that
> > other reset methods could have the same vulnerability as well.
> >
> > Provide a callback from the PCI subsystem that will enclose the reset and
> > have the iommu core temporarily change all the attached domain to BLOCKED.
> > After attaching a BLOCKED domain, IOMMU drivers should fence any incoming
> > ATS queries, synchronously stop issuing new ATS invalidations, and wait
> > for all ATS invalidations to complete. This can avoid any ATS invaliation
> > timeouts.
>
> This approach seems effective for reset operations initiated through
> software interface functions, but how would we handle those triggered by
> hardware mechanisms? For example, resets caused by PCIe DPC mechanisms,
> device firmware, or manual hot-plug operations?
That's a good point. But I am not sure what SW can do about those.
IIUIC, DPC resets PCI at the HW level, SW only gets a notification
after the HW reset finishes. So, during this HW reset, iommu might
issue ATC invalidations (resulting in invalidation timeout noises)
since at the SW level the device is still actively attached to an
IOMMU instance. Right?
Nicolin
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets
2025-07-25 16:41 ` Nicolin Chen
@ 2025-07-27 12:48 ` Ethan Zhao
2025-07-27 16:20 ` Jason Gunthorpe
0 siblings, 1 reply; 26+ messages in thread
From: Ethan Zhao @ 2025-07-27 12:48 UTC (permalink / raw)
To: Nicolin Chen
Cc: jgg, joro, will, robin.murphy, rafael, lenb, bhelgaas, iommu,
linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
On 7/26/2025 12:41 AM, Nicolin Chen wrote:
> On Thu, Jul 24, 2025 at 02:50:53PM +0800, Ethan Zhao wrote:
>> On 6/28/2025 3:42 PM, Nicolin Chen wrote:
>>> PCIe permits a device to ignore ATS invalidation TLPs, while processing a
>>> reset. This creates a problem visible to the OS where an ATS invalidation
>>> command will time out: e.g. an SVA domain will have no coordination with a
>>> reset event and can racily issue ATS invalidations to a resetting device.
>>>
>>> The OS should do something to mitigate this as we do not want production
>>> systems to be reporting critical ATS failures, especially in a hypervisor
>>> environment. Broadly, OS could arrange to ignore the timeouts, block page
>>> table mutations to prevent invalidations, or disable and block ATS.
>>>
>>> The PCIe spec in sec 10.3.1 IMPLEMENTATION NOTE recommends to disable and
>>> block ATS before initiating a Function Level Reset. It also mentions that
>>> other reset methods could have the same vulnerability as well.
>>>
>>> Provide a callback from the PCI subsystem that will enclose the reset and
>>> have the iommu core temporarily change all the attached domain to BLOCKED.
>>> After attaching a BLOCKED domain, IOMMU drivers should fence any incoming
>>> ATS queries, synchronously stop issuing new ATS invalidations, and wait
>>> for all ATS invalidations to complete. This can avoid any ATS invaliation
>>> timeouts.
>>
>> This approach seems effective for reset operations initiated through
>> software interface functions, but how would we handle those triggered by
>> hardware mechanisms? For example, resets caused by PCIe DPC mechanisms,
>> device firmware, or manual hot-plug operations?
>
> That's a good point. But I am not sure what SW can do about those.
>
> IIUIC, DPC resets PCI at the HW level, SW only gets a notification
> after the HW reset finishes. So, during this HW reset, iommu might
> issue ATC invalidations (resulting in invalidation timeout noises)
> since at the SW level the device is still actively attached to an
> IOMMU instance. Right?
Yup, the situation is this: When the system receives notification of a
DPC event, the reset action triggered by the DPC has already occurred.
At the very least, the software has an opportunity to be notified that a
reset happened – though this notification inevitably lags behind the
actual reset behavior, creating a time window between the reset action
and its notification.
For DPC specifically, there is no notification mechanism before the
reset behavior takes place. Surprise hot-plug events likely operate
under a similar constraint. (For standard hot-plug hardware we do have
a good opportunity to know a hot-plug action is about to happen once
the attention button is pressed, so adding code there is okay for now.)
This becomes particularly thorny if an Address Translation Cache (ATC)
Invalidation request occurs within this time window. Asynchronously
cancelling such requests later would likely be problematic. Is this an
accurate assessment?
At least, we can make some attempt in the DPC and hot-plug drivers, and
then push a hardware specification update to provide pre-reset
notification for DPC & hotplug. Does that make sense?
Thanks,
Ethan
>
> Nicolin
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets
2025-07-27 12:48 ` Ethan Zhao
@ 2025-07-27 16:20 ` Jason Gunthorpe
2025-07-29 6:16 ` Ethan Zhao
0 siblings, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2025-07-27 16:20 UTC (permalink / raw)
To: Ethan Zhao
Cc: Nicolin Chen, joro, will, robin.murphy, rafael, lenb, bhelgaas,
iommu, linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
On Sun, Jul 27, 2025 at 08:48:26PM +0800, Ethan Zhao wrote:
> At least, we can do some attempt in DPC and Hot-plug driver, and then
> push the hardware specification update to provide pre-reset notification for
> DPC & hotplug. does it make sense ?
I think DPC is a different case..
If we get a DPC we should also push the iommu into blocking, disable
ATS and abandon any outstanding ATC invalidations as part of
recovering from the DPC. Once everything is cleaned up we can set the
iommu back up again and allow the driver to recover the device.
I think the current series is a good step along that path, but we'd
also need to improve the drivers to handle abandoning/aborting the
ATC invalidations.
IMHO DPC and SW initiated reset are separate projects.
Jason
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
2025-07-22 21:58 ` Nicolin Chen
2025-07-23 2:21 ` Baolu Lu
@ 2025-07-27 16:25 ` Jason Gunthorpe
2025-07-28 19:07 ` Nicolin Chen
1 sibling, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2025-07-27 16:25 UTC (permalink / raw)
To: Nicolin Chen
Cc: joro, will, robin.murphy, rafael, lenb, bhelgaas, iommu,
linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
On Tue, Jul 22, 2025 at 02:58:21PM -0700, Nicolin Chen wrote:
> > /*
> > * This is called on the dma mapping fast path so avoid locking. This
> > * is racy, but we have an expectation that the driver will setup its
> > * DMAs inside probe while still single threaded to avoid racing.
> > */
> > if (dev->iommu && !READ_ONCE(dev->iommu->attach_deferred))
>
> This triggers a build error as attach_deferred is a bit-field. So I
> am changing it from "u32 attach_deferred:1" to "bool" for this.
Bleck, that seems undesirable.
> And, to keep the original logic, I think it should be:
> if (!dev->iommu || !READ_ONCE(dev->iommu->attach_deferred))
That doesn't seem right, if there is no iommu by the time a driver is
probed there never will be an iommu and this device should be running
in direct mode only.
> > And of course it is already quite crazy to be doing FLR during a
> > device probe so this is not a realistic scenario.
>
> Hmm, I am not sure about that, as I see iommu_deferred_attach() get
> mostly invoked by a dma_alloc() or even a dma_map(). So, this might
> not be confined to a device probe?
Once you do deferred_attach the first time it is done and won't have
any further impact. So long as the dev->iommu->attach_deferred guards
any changes to domains it is unlikely to be racing with FLR.
> > Either ignore this condition with the rational that we are about to
> > reset it so it doesn't matter, or we need to establish a new paging
> > domain for isolation purposes that has the RMR setup.
>
> Ah, you are right. ARM MSI in a VM uses RMR and sets this.
>
> But does it also raise a question that a VM having RMR can't use
> the blocked_domain, as __iommu_device_set_domain() has the exact
> same check rejecting blocked_domain? Not sure if there would be
> some unintended consequnce though...
Sounds like it needs some sorting out.. For the purposes of FLR I
think the blocked domain is OK, so maybe just move some of those
checks around?
Jason
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
2025-07-27 16:25 ` Jason Gunthorpe
@ 2025-07-28 19:07 ` Nicolin Chen
2025-07-29 13:02 ` Jason Gunthorpe
0 siblings, 1 reply; 26+ messages in thread
From: Nicolin Chen @ 2025-07-28 19:07 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: joro, will, robin.murphy, rafael, lenb, bhelgaas, iommu,
linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
On Sun, Jul 27, 2025 at 01:25:01PM -0300, Jason Gunthorpe wrote:
> On Tue, Jul 22, 2025 at 02:58:21PM -0700, Nicolin Chen wrote:
> > > /*
> > > * This is called on the dma mapping fast path so avoid locking. This
> > > * is racy, but we have an expectation that the driver will setup its
> > > * DMAs inside probe while still single threaded to avoid racing.
> > > */
> > > if (dev->iommu && !READ_ONCE(dev->iommu->attach_deferred))
> >
> > This triggers a build error as attach_deferred is a bit-field. So I
> > am changing it from "u32 attach_deferred:1" to "bool" for this.
>
> Bleck, that seems undesirable.
But inevitable for READ_ONCE :(
> > And, to keep the original logic, I think it should be:
> > if (!dev->iommu || !READ_ONCE(dev->iommu->attach_deferred))
>
> That doesn't seem right, if there is no iommu by the time a driver is
> probed there never will be an iommu and this device should be running
> in direct mode only.
Well, the current function does:
if (dev->iommu && dev->iommu->attach_deferred)
return __iommu_attach_device(domain, dev);
return 0;
So, matching to that logic, it would be:
if (!dev->iommu || !dev->iommu->attach_deferred)
return 0;
return __iommu_attach_device(domain, dev);
then add guard(mutex).
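Putting those pieces together, a rough sketch of the result (note that
device_to_group_device() and pending_reset are additions from this
series, so the names here are illustrative):

int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
{
	struct iommu_group *group = dev->iommu_group;

	/* Fast path: nothing to do once the deferred attach has been done */
	if (!dev->iommu || !dev->iommu->attach_deferred)
		return 0;

	guard(mutex)(&group->mutex);

	/* A reset is in flight; iommu_dev_reset_done() re-attaches later */
	if (device_to_group_device(dev)->pending_reset)
		return 0;

	return __iommu_attach_device(domain, dev);
}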
I do see your point. Yet, given that it is an exported function,
I think it'd be safer to have a check. Perhaps it should give a
WARN_ON(!dev->iommu).
> > > And of course it is already quite crazy to be doing FLR during a
> > > device probe so this is not a realistic scenario.
> >
> > Hmm, I am not sure about that, as I see iommu_deferred_attach() get
> > mostly invoked by a dma_alloc() or even a dma_map(). So, this might
> > not be confined to a device probe?
>
> Once you do deferred_attach the first time it is done and won't have
> any further impact. So long as the dev->iommu->attach_deferred guards
> any changes to domains it is unlikely to be racing with FLR.
I see. The existing callers are all in dma-iommu.c. So, we can
assume that iommu_deferred_attach() is already done when a PCI
driver calls any function from dma-iommu.c.
> > > Either ignore this condition with the rational that we are about to
> > > reset it so it doesn't matter, or we need to establish a new paging
> > > domain for isolation purposes that has the RMR setup.
> >
> > Ah, you are right. ARM MSI in a VM uses RMR and sets this.
> >
> > But does it also raise a question that a VM having RMR can't use
> > the blocked_domain, as __iommu_device_set_domain() has the exact
> > same check rejecting blocked_domain? Not sure if there would be
> > some unintended consequnce though...
>
> Sounds like it needs some sorting out.. For the purposes of FLR I
> think the blocked domain is OK, so maybe just move some of those
> checks around?
These two new APIs call the lower-level __iommu_attach_device()
that does not check require_direct. So, we are fine, so long as
we don't check it in the new API as you previously pointed out.
I'm worried about using blocked domains in general.
Thanks
Nicolin
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets
2025-07-27 16:20 ` Jason Gunthorpe
@ 2025-07-29 6:16 ` Ethan Zhao
2025-07-29 12:59 ` Jason Gunthorpe
0 siblings, 1 reply; 26+ messages in thread
From: Ethan Zhao @ 2025-07-29 6:16 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Nicolin Chen, joro, will, robin.murphy, rafael, lenb, bhelgaas,
iommu, linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
On 7/28/2025 12:20 AM, Jason Gunthorpe wrote:
> On Sun, Jul 27, 2025 at 08:48:26PM +0800, Ethan Zhao wrote:
>
>> At least, we can do some attempt in DPC and Hot-plug driver, and then
>> push the hardware specification update to provide pre-reset notification for
>> DPC & hotplug. does it make sense ?
>
> I think DPC is a different case..
A more complex and practical case.
>
> If we get a DPC we should also push the iommu into blocking, disable
> ATS and abandon any outstanding ATC invalidations as part of
> recovering from the DPC. Once everythings is cleaned up we can set the
Yup, even with pure software resets, there might be ATC invalidations
pending (in a software queue or a HW queue).
> iommu back up again and allow the driver to recover the device.
>
> I think the current series is a good step along that path, but we'd
> also need to improve the drivers to handle abandonding/aborting the
> ATC invalidations.
Also, aborting ATC invalidations works as a precondition for the DPC or
hot-plug cases. Agreed, such an improvement seems necessary.
>
> IMHO DPC and SW initiated reset are separate projects.
Of course, Rome wasn't built in a day; I endorse the philosophy of
restricting project scope. The discussion is purely focused on
technical methodology.
Thanks,
Ethan
>
> Jason
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets
2025-07-29 6:16 ` Ethan Zhao
@ 2025-07-29 12:59 ` Jason Gunthorpe
2025-07-31 1:10 ` Ethan Zhao
0 siblings, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2025-07-29 12:59 UTC (permalink / raw)
To: Ethan Zhao
Cc: Nicolin Chen, joro, will, robin.murphy, rafael, lenb, bhelgaas,
iommu, linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
On Tue, Jul 29, 2025 at 02:16:43PM +0800, Ethan Zhao wrote:
>
>
> On 7/28/2025 12:20 AM, Jason Gunthorpe wrote:
> > On Sun, Jul 27, 2025 at 08:48:26PM +0800, Ethan Zhao wrote:
> >
> > > At least, we can do some attempt in DPC and Hot-plug driver, and then
> > > push the hardware specification update to provide pre-reset notification for
> > > DPC & hotplug. does it make sense ?
> >
> > I think DPC is a different case..
> More complex and practical case.
I'm not sure about that, we do FLRs all the time as a normal part of
VFIO and VMM operations. DPC is pretty rare, IMHO.
> > If we get a DPC we should also push the iommu into blocking, disable
> > ATS and abandon any outstanding ATC invalidations as part of
> > recovering from the DPC. Once everythings is cleaned up we can set the
> Yup, even pure software resets, there might be ATC invalidation pending
> (in software queue or HW queue).
The design of this patch series will require the iommu driver to wait
for the in-flight ATC invalidations during the blocking domain
attach. So for the SW initiated resets there should not be pending ATC
invalidations when the FLR is triggered.
We have been talking about DPC internally, and I think it will need a
related, but different flow since DPC can unavoidably trigger ATC
invalidation timeouts/failures and we must sensibly handle them in the
driver.
Jason
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done()
2025-07-28 19:07 ` Nicolin Chen
@ 2025-07-29 13:02 ` Jason Gunthorpe
0 siblings, 0 replies; 26+ messages in thread
From: Jason Gunthorpe @ 2025-07-29 13:02 UTC (permalink / raw)
To: Nicolin Chen
Cc: joro, will, robin.murphy, rafael, lenb, bhelgaas, iommu,
linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
On Mon, Jul 28, 2025 at 12:07:59PM -0700, Nicolin Chen wrote:
> On Sun, Jul 27, 2025 at 01:25:01PM -0300, Jason Gunthorpe wrote:
> > On Tue, Jul 22, 2025 at 02:58:21PM -0700, Nicolin Chen wrote:
> > > > /*
> > > > * This is called on the dma mapping fast path so avoid locking. This
> > > > * is racy, but we have an expectation that the driver will setup its
> > > > * DMAs inside probe while still single threaded to avoid racing.
> > > > */
> > > > if (dev->iommu && !READ_ONCE(dev->iommu->attach_deferred))
> > >
> > > This triggers a build error as attach_deferred is a bit-field. So I
> > > am changing it from "u32 attach_deferred:1" to "bool" for this.
> >
> > Bleck, that seems undesirable.
>
> But inevitable for READ_ONCE :(
I guess drop the READ_ONCE change
> > > And, to keep the original logic, I think it should be:
> > > if (!dev->iommu || !READ_ONCE(dev->iommu->attach_deferred))
> >
> > That doesn't seem right, if there is no iommu by the time a driver is
> > probed there never will be an iommu and this device should be running
> > in direct mode only.
>
> Well, the current function does:
> if (dev->iommu && dev->iommu->attach_deferred)
> return __iommu_attach_device(domain, dev);
> return 0;
>
> So, matching to that logic, it would be:
> if (!dev->iommu || !dev->iommu->attach_deferred)
> return 0;
> return __iommu_attach_device(domain, dev);
> then add guard(mutex).
Yeah Ok
Jason
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets
2025-07-29 12:59 ` Jason Gunthorpe
@ 2025-07-31 1:10 ` Ethan Zhao
2025-07-31 13:47 ` Jason Gunthorpe
0 siblings, 1 reply; 26+ messages in thread
From: Ethan Zhao @ 2025-07-31 1:10 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Nicolin Chen, joro, will, robin.murphy, rafael, lenb, bhelgaas,
iommu, linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
On 7/29/2025 8:59 PM, Jason Gunthorpe wrote:
> On Tue, Jul 29, 2025 at 02:16:43PM +0800, Ethan Zhao wrote:
>>
>>
>> On 7/28/2025 12:20 AM, Jason Gunthorpe wrote:
>>> On Sun, Jul 27, 2025 at 08:48:26PM +0800, Ethan Zhao wrote:
>>>
>>>> At least, we can do some attempt in DPC and Hot-plug driver, and then
>>>> push the hardware specification update to provide pre-reset notification for
>>>> DPC & hotplug. does it make sense ?
>>>
>>> I think DPC is a different case..
>> More complex and practical case.
>
> I'm not sure about that, we do FLRs all the time as a normal part of
> VFIO and VMM operations. DPC is pretty rare, IMHO.
DPC reset could be triggered by simply accessing its control bit, which
is the boring case, while a data-corruption hardware issue is really rare.
>
>>> If we get a DPC we should also push the iommu into blocking, disable
>>> ATS and abandon any outstanding ATC invalidations as part of
>>> recovering from the DPC. Once everythings is cleaned up we can set the
>> Yup, even pure software resets, there might be ATC invalidation pending
>> (in software queue or HW queue).
>
> The design of this patch series will require the iommu driver to wait
> for the in-flight ATC invalidations during the blocking domain
> attach. So for the SW initiated resets there should not be pending ATC
> invalidations when the FLR is triggered.
I see there is pci_wait_for_pending_transaction() before the blocking
domain attachment.
>
> We have been talking about DPC internally, and I think it will need a
> related, but different flow since DPC can unavoidably trigger ATC
> invalidation timeouts/failures and we must sensibly handle them in the
> driver.
There is a race window for software to handle.
And for DPC, which prioritizes containing data corruption, it seems not
rational to notify software first and then do the reset. An alternative
might be async-mode support in the iommu ATC invalidation path?
Thanks,
Ethan
>
> Jason
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets
2025-07-31 1:10 ` Ethan Zhao
@ 2025-07-31 13:47 ` Jason Gunthorpe
0 siblings, 0 replies; 26+ messages in thread
From: Jason Gunthorpe @ 2025-07-31 13:47 UTC (permalink / raw)
To: Ethan Zhao
Cc: Nicolin Chen, joro, will, robin.murphy, rafael, lenb, bhelgaas,
iommu, linux-kernel, linux-acpi, linux-pci, patches, pjaroszynski,
vsethi, helgaas, baolu.lu
On Thu, Jul 31, 2025 at 09:10:59AM +0800, Ethan Zhao wrote:
> > invalidations when the FLR is triggered.
> >
> > We have been talking about DPC internally, and I think it will need a
> > related, but different flow since DPC can unavoidably trigger ATC
> > invalidation timeouts/failures and we must sensibly handle them in the
> There is race window for software to handle.
> And for DPC containing data corruption as priority, seems not rational to
> issue notification to software and then do resetting. alternative
> way might be async modal support in iommu ATC invalidation path ?
DPC would still act in HW to prevent corruption; SW would learn about
it either through a DPC async notify or through an ATC timeout, and
then SW can reprogram the IOMMU to disable ATS.
We can't make the invalidation path async; the invalidation must
succeed, or the iommu itself must fully fence future access to the
now-invalidated memory - most likely by disabling ATS, blocking
acceptance of translated TLPs, and flushing out all previously accepted
translated TLPs.
Once invalidation finishes there must not be any IOMMU access to the
memory that was invalidated, and this cannot fail.
Jason
^ permalink raw reply [flat|nested] 26+ messages in thread
end of thread, newest: 2025-07-31 13:47 UTC
Thread overview: 26+ messages
2025-06-28 7:42 [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets Nicolin Chen
2025-06-28 7:42 ` [PATCH RFC v2 1/4] iommu: Lock group->mutex in iommu_deferred_attach Nicolin Chen
2025-07-04 15:22 ` Jason Gunthorpe
2025-06-28 7:42 ` [PATCH RFC v2 2/4] iommu: Pass in gdev to __iommu_device_set_domain Nicolin Chen
2025-07-04 15:23 ` Jason Gunthorpe
2025-06-28 7:42 ` [PATCH RFC v2 3/4] iommu: Introduce iommu_dev_reset_prepare() and iommu_dev_reset_done() Nicolin Chen
2025-06-28 13:28 ` Baolu Lu
2025-06-30 12:38 ` Jason Gunthorpe
2025-06-30 17:29 ` Nicolin Chen
2025-06-30 22:49 ` Jason Gunthorpe
2025-07-04 15:43 ` Jason Gunthorpe
2025-07-22 21:58 ` Nicolin Chen
2025-07-23 2:21 ` Baolu Lu
2025-07-23 2:53 ` Nicolin Chen
2025-07-27 16:25 ` Jason Gunthorpe
2025-07-28 19:07 ` Nicolin Chen
2025-07-29 13:02 ` Jason Gunthorpe
2025-06-28 7:42 ` [PATCH RFC v2 4/4] pci: Suspend iommu function prior to resetting a device Nicolin Chen
2025-07-24 6:50 ` [PATCH RFC v2 0/4] Disable ATS via iommu during PCI resets Ethan Zhao
2025-07-25 16:41 ` Nicolin Chen
2025-07-27 12:48 ` Ethan Zhao
2025-07-27 16:20 ` Jason Gunthorpe
2025-07-29 6:16 ` Ethan Zhao
2025-07-29 12:59 ` Jason Gunthorpe
2025-07-31 1:10 ` Ethan Zhao
2025-07-31 13:47 ` Jason Gunthorpe