* [PATCH 0/7] [PULL REQUEST] Intel IOMMU updates for v6.20
@ 2026-01-22 1:48 Lu Baolu
2026-01-22 1:48 ` [PATCH 1/7] iommu/vt-d: Skip dev-iotlb flush for inaccessible PCIe device without scalable mode Lu Baolu
` (7 more replies)
0 siblings, 8 replies; 9+ messages in thread
From: Lu Baolu @ 2026-01-22 1:48 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Yi Liu, Dmytro Maluka, Jinhui Guo, iommu, linux-kernel
Hi Joerg,
The following changes have been queued for v6.20-rc1. They consist of
some non-critical fixes, including:
- Skip dev-iotlb flush for inaccessible PCIe device
- Flush cache for PASID table before using it
- Use the right invalidation method for SVA and NESTED domains
- Ensure atomicity in context and PASID entry updates
These patches are based on v6.19-rc6. Please consider them for the
iommu/vt-d branch.
Best regards,
baolu
Dmytro Maluka (1):
iommu/vt-d: Flush cache for PASID table before using it
Jinhui Guo (2):
iommu/vt-d: Skip dev-iotlb flush for inaccessible PCIe device without
scalable mode
iommu/vt-d: Flush dev-IOTLB only when PCIe device is accessible in
scalable mode
Lu Baolu (3):
iommu/vt-d: Clear Present bit before tearing down PASID entry
iommu/vt-d: Clear Present bit before tearing down context entry
iommu/vt-d: Fix race condition during PASID entry replacement
Yi Liu (1):
iommu/vt-d: Flush piotlb for SVM and Nested domain
drivers/iommu/intel/iommu.h | 21 +++-
drivers/iommu/intel/pasid.h | 28 ++---
drivers/iommu/intel/cache.c | 9 +-
drivers/iommu/intel/iommu.c | 33 +++---
drivers/iommu/intel/nested.c | 9 +-
drivers/iommu/intel/pasid.c | 212 ++++-------------------------------
6 files changed, 83 insertions(+), 229 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH 1/7] iommu/vt-d: Skip dev-iotlb flush for inaccessible PCIe device without scalable mode
2026-01-22 1:48 [PATCH 0/7] [PULL REQUEST] Intel IOMMU updates for v6.20 Lu Baolu
@ 2026-01-22 1:48 ` Lu Baolu
2026-01-22 1:48 ` [PATCH 2/7] iommu/vt-d: Flush dev-IOTLB only when PCIe device is accessible in " Lu Baolu
` (6 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Lu Baolu @ 2026-01-22 1:48 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Yi Liu, Dmytro Maluka, Jinhui Guo, iommu, linux-kernel
From: Jinhui Guo <guojinhui.liam@bytedance.com>
PCIe endpoints with ATS enabled and passed through to userspace
(e.g., QEMU, DPDK) can hard-lock the host when their link drops,
either by surprise removal or by a link fault.
Commit 4fc82cd907ac ("iommu/vt-d: Don't issue ATS Invalidation
request when device is disconnected") adds pci_dev_is_disconnected()
to devtlb_invalidation_with_pasid() so ATS invalidation is skipped
only when the device is being safely removed, but it applies only
when Intel IOMMU scalable mode is enabled.
With scalable mode disabled or unsupported, a system hard-lock
occurs when a PCIe endpoint's link drops because the Intel IOMMU
waits indefinitely for an ATS invalidation that cannot complete.
Call Trace:
qi_submit_sync
qi_flush_dev_iotlb
__context_flush_dev_iotlb.part.0
domain_context_clear_one_cb
pci_for_each_dma_alias
device_block_translation
blocking_domain_attach_dev
iommu_deinit_device
__iommu_group_remove_device
iommu_release_device
iommu_bus_notifier
blocking_notifier_call_chain
bus_notify
device_del
pci_remove_bus_device
pci_stop_and_remove_bus_device
pciehp_unconfigure_device
pciehp_disable_slot
pciehp_handle_presence_or_link_change
pciehp_ist
Commit 81e921fd3216 ("iommu/vt-d: Fix NULL domain on device release")
adds intel_pasid_teardown_sm_context() to intel_iommu_release_device(),
which calls qi_flush_dev_iotlb() and can also hard-lock the system
when a PCIe endpoint's link drops.
Call Trace:
qi_submit_sync
qi_flush_dev_iotlb
__context_flush_dev_iotlb.part.0
intel_context_flush_no_pasid
device_pasid_table_teardown
pci_pasid_table_teardown
pci_for_each_dma_alias
intel_pasid_teardown_sm_context
intel_iommu_release_device
iommu_deinit_device
__iommu_group_remove_device
iommu_release_device
iommu_bus_notifier
blocking_notifier_call_chain
bus_notify
device_del
pci_remove_bus_device
pci_stop_and_remove_bus_device
pciehp_unconfigure_device
pciehp_disable_slot
pciehp_handle_presence_or_link_change
pciehp_ist
Sometimes the endpoint loses connection without a link-down event
(e.g., due to a link fault); killing the process (virsh destroy)
then hard-locks the host.
Call Trace:
qi_submit_sync
qi_flush_dev_iotlb
__context_flush_dev_iotlb.part.0
domain_context_clear_one_cb
pci_for_each_dma_alias
device_block_translation
blocking_domain_attach_dev
__iommu_attach_device
__iommu_device_set_domain
__iommu_group_set_domain_internal
iommu_detach_group
vfio_iommu_type1_detach_group
vfio_group_detach_container
vfio_group_fops_release
__fput
pci_dev_is_disconnected() only covers safe-removal paths;
pci_device_is_present() tests accessibility by reading
vendor/device IDs and internally calls pci_dev_is_disconnected().
On a ConnectX-5 (8 GT/s, x2) this costs ~70 µs.
Since __context_flush_dev_iotlb() is only called on
{attach,release}_dev paths (not hot), add pci_device_is_present()
there to skip inaccessible devices and avoid the hard-lock.
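The accessibility test can be illustrated with a minimal userspace sketch. This is not the kernel implementation: read_vendor_id() and the fake_pci_dev structure are stand-ins for a real config-space read, but the principle matches pci_device_is_present(): a PCIe config read to a device whose link is down completes with all-ones, so a Vendor ID of 0xFFFF means the device is inaccessible.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Userspace model of the presence check. On real hardware a failed
 * config read returns ~0, so a Vendor ID of 0xFFFF indicates the
 * device is no longer reachable. read_vendor_id() is a stand-in for
 * an actual config-space access.
 */
struct fake_pci_dev {
	bool link_up;
	uint16_t vendor;
};

static uint16_t read_vendor_id(const struct fake_pci_dev *dev)
{
	/* A read across a dead link completes with all-ones. */
	return dev->link_up ? dev->vendor : 0xFFFF;
}

static bool device_is_present(const struct fake_pci_dev *dev)
{
	return read_vendor_id(dev) != 0xFFFF;
}
```

With this guard in place, the flush path simply returns early for an inaccessible device instead of queueing an ATS invalidation that can never complete.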
Fixes: 37764b952e1b ("iommu/vt-d: Global devTLB flush when present context entry changed")
Fixes: 81e921fd3216 ("iommu/vt-d: Fix NULL domain on device release")
Cc: stable@vger.kernel.org
Signed-off-by: Jinhui Guo <guojinhui.liam@bytedance.com>
Link: https://lore.kernel.org/r/20251211035946.2071-2-guojinhui.liam@bytedance.com
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
drivers/iommu/intel/pasid.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index 3e2255057079..3f6d78180d79 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -1102,6 +1102,14 @@ static void __context_flush_dev_iotlb(struct device_domain_info *info)
if (!info->ats_enabled)
return;
+ /*
+ * Skip dev-IOTLB flush for inaccessible PCIe devices to prevent the
+ * Intel IOMMU from waiting indefinitely for an ATS invalidation that
+ * cannot complete.
+ */
+ if (!pci_device_is_present(to_pci_dev(info->dev)))
+ return;
+
qi_flush_dev_iotlb(info->iommu, PCI_DEVID(info->bus, info->devfn),
info->pfsid, info->ats_qdep, 0, MAX_AGAW_PFN_WIDTH);
--
2.43.0
* [PATCH 2/7] iommu/vt-d: Flush dev-IOTLB only when PCIe device is accessible in scalable mode
2026-01-22 1:48 [PATCH 0/7] [PULL REQUEST] Intel IOMMU updates for v6.20 Lu Baolu
2026-01-22 1:48 ` [PATCH 1/7] iommu/vt-d: Skip dev-iotlb flush for inaccessible PCIe device without scalable mode Lu Baolu
@ 2026-01-22 1:48 ` Lu Baolu
2026-01-22 1:48 ` [PATCH 3/7] iommu/vt-d: Flush cache for PASID table before using it Lu Baolu
` (5 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Lu Baolu @ 2026-01-22 1:48 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Yi Liu, Dmytro Maluka, Jinhui Guo, iommu, linux-kernel
From: Jinhui Guo <guojinhui.liam@bytedance.com>
Commit 4fc82cd907ac ("iommu/vt-d: Don't issue ATS Invalidation
request when device is disconnected") relies on
pci_dev_is_disconnected() to skip ATS invalidation for
safely-removed devices, but it does not cover link-down caused
by faults, which can still hard-lock the system.
For example, if a VM fails to connect to the PCIe device,
"virsh destroy" is executed to release resources and isolate
the fault, but a hard-lockup occurs while releasing the group fd.
Call Trace:
qi_submit_sync
qi_flush_dev_iotlb
intel_pasid_tear_down_entry
device_block_translation
blocking_domain_attach_dev
__iommu_attach_device
__iommu_device_set_domain
__iommu_group_set_domain_internal
iommu_detach_group
vfio_iommu_type1_detach_group
vfio_group_detach_container
vfio_group_fops_release
__fput
Although pci_device_is_present() is slower than
pci_dev_is_disconnected(), it still takes only ~70 µs on a
ConnectX-5 (8 GT/s, x2) and becomes even faster as PCIe speed
and width increase.
Besides, devtlb_invalidation_with_pasid() is called only in the
paths below, which are far less frequent than memory map/unmap.
1. mm-struct release
2. {attach,release}_dev
3. set/remove PASID
4. dirty-tracking setup
The gain in system stability far outweighs the negligible cost
of using pci_device_is_present() instead of pci_dev_is_disconnected()
to decide when to skip ATS invalidation, especially under GDR
high-load conditions.
Fixes: 4fc82cd907ac ("iommu/vt-d: Don't issue ATS Invalidation request when device is disconnected")
Cc: stable@vger.kernel.org
Signed-off-by: Jinhui Guo <guojinhui.liam@bytedance.com>
Link: https://lore.kernel.org/r/20251211035946.2071-3-guojinhui.liam@bytedance.com
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
drivers/iommu/intel/pasid.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index 3f6d78180d79..99692f88b883 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -218,7 +218,7 @@ devtlb_invalidation_with_pasid(struct intel_iommu *iommu,
if (!info || !info->ats_enabled)
return;
- if (pci_dev_is_disconnected(to_pci_dev(dev)))
+ if (!pci_device_is_present(to_pci_dev(dev)))
return;
sid = PCI_DEVID(info->bus, info->devfn);
--
2.43.0
* [PATCH 3/7] iommu/vt-d: Flush cache for PASID table before using it
2026-01-22 1:48 [PATCH 0/7] [PULL REQUEST] Intel IOMMU updates for v6.20 Lu Baolu
2026-01-22 1:48 ` [PATCH 1/7] iommu/vt-d: Skip dev-iotlb flush for inaccessible PCIe device without scalable mode Lu Baolu
2026-01-22 1:48 ` [PATCH 2/7] iommu/vt-d: Flush dev-IOTLB only when PCIe device is accessible in " Lu Baolu
@ 2026-01-22 1:48 ` Lu Baolu
2026-01-22 1:48 ` [PATCH 4/7] iommu/vt-d: Flush piotlb for SVM and Nested domain Lu Baolu
` (4 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Lu Baolu @ 2026-01-22 1:48 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Yi Liu, Dmytro Maluka, Jinhui Guo, iommu, linux-kernel
From: Dmytro Maluka <dmaluka@chromium.org>
When writing the address of a freshly allocated, zero-initialized PASID
table to a PASID directory entry, do so after the CPU cache flush for
this PASID table, not before it. This avoids a time window in which a
non-coherent IOMMU may already be using the PASID table while its
contents in RAM are still stale, uninitialized data.
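The required ordering can be sketched in userspace with the cache maintenance modeled as an event log. flush_range() and publish() are hypothetical stand-ins for clflush_cache_range() and the directory-entry write; the invariant being demonstrated is only that the flush of the zeroed table happens before its address is published.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Event log used to verify the ordering. */
enum { EV_FLUSH_TABLE = 1, EV_PUBLISH = 2 };

static int log_buf[4], log_len;

/* Stand-in for clflush_cache_range(): record that the flush ran. */
static void flush_range(void *p, size_t n)
{
	(void)p; (void)n;
	log_buf[log_len++] = EV_FLUSH_TABLE;
}

/* Stand-in for the PASID directory entry write. */
static void publish(uint64_t *slot, uint64_t addr)
{
	*slot = addr;
	log_buf[log_len++] = EV_PUBLISH;
}

static void install_table(uint64_t *dir_slot, uint8_t *table, size_t sz)
{
	memset(table, 0, sz);    /* zero-init the new PASID table        */
	flush_range(table, sz);  /* make the zeros visible in RAM first  */
	publish(dir_slot, (uint64_t)(uintptr_t)table); /* then expose it */
}
```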
Fixes: 194b3348bdbb ("iommu/vt-d: Fix PASID directory pointer coherency")
Signed-off-by: Dmytro Maluka <dmaluka@chromium.org>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20251221123508.37495-1-dmaluka@chromium.org
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
drivers/iommu/intel/pasid.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index 99692f88b883..6379b211f12b 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -153,6 +153,9 @@ static struct pasid_entry *intel_pasid_get_entry(struct device *dev, u32 pasid)
if (!entries)
return NULL;
+ if (!ecap_coherent(info->iommu->ecap))
+ clflush_cache_range(entries, VTD_PAGE_SIZE);
+
/*
* The pasid directory table entry won't be freed after
* allocation. No worry about the race with free and
@@ -165,10 +168,8 @@ static struct pasid_entry *intel_pasid_get_entry(struct device *dev, u32 pasid)
iommu_free_pages(entries);
goto retry;
}
- if (!ecap_coherent(info->iommu->ecap)) {
- clflush_cache_range(entries, VTD_PAGE_SIZE);
+ if (!ecap_coherent(info->iommu->ecap))
clflush_cache_range(&dir[dir_index].val, sizeof(*dir));
- }
}
return &entries[index];
--
2.43.0
* [PATCH 4/7] iommu/vt-d: Flush piotlb for SVM and Nested domain
2026-01-22 1:48 [PATCH 0/7] [PULL REQUEST] Intel IOMMU updates for v6.20 Lu Baolu
` (2 preceding siblings ...)
2026-01-22 1:48 ` [PATCH 3/7] iommu/vt-d: Flush cache for PASID table before using it Lu Baolu
@ 2026-01-22 1:48 ` Lu Baolu
2026-01-22 1:48 ` [PATCH 5/7] iommu/vt-d: Clear Present bit before tearing down PASID entry Lu Baolu
` (3 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Lu Baolu @ 2026-01-22 1:48 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Yi Liu, Dmytro Maluka, Jinhui Guo, iommu, linux-kernel
From: Yi Liu <yi.l.liu@intel.com>
Besides the paging domains that use first-stage (FS) translation, SVM
and nested domains need to use the piotlb invalidation descriptor as
well.
Fixes: b33125296b50 ("iommu/vt-d: Create unique domain ops for each stage")
Cc: stable@vger.kernel.org
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20251223065824.6164-1-yi.l.liu@intel.com
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
drivers/iommu/intel/cache.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
index 265e7290256b..385ae5cfb30d 100644
--- a/drivers/iommu/intel/cache.c
+++ b/drivers/iommu/intel/cache.c
@@ -363,6 +363,13 @@ static void qi_batch_add_pasid_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16
qi_batch_increment_index(iommu, batch);
}
+static bool intel_domain_use_piotlb(struct dmar_domain *domain)
+{
+ return domain->domain.type == IOMMU_DOMAIN_SVA ||
+ domain->domain.type == IOMMU_DOMAIN_NESTED ||
+ intel_domain_is_fs_paging(domain);
+}
+
static void cache_tag_flush_iotlb(struct dmar_domain *domain, struct cache_tag *tag,
unsigned long addr, unsigned long pages,
unsigned long mask, int ih)
@@ -370,7 +377,7 @@ static void cache_tag_flush_iotlb(struct dmar_domain *domain, struct cache_tag *
struct intel_iommu *iommu = tag->iommu;
u64 type = DMA_TLB_PSI_FLUSH;
- if (intel_domain_is_fs_paging(domain)) {
+ if (intel_domain_use_piotlb(domain)) {
qi_batch_add_piotlb(iommu, tag->domain_id, tag->pasid, addr,
pages, ih, domain->qi_batch);
return;
--
2.43.0
* [PATCH 5/7] iommu/vt-d: Clear Present bit before tearing down PASID entry
2026-01-22 1:48 [PATCH 0/7] [PULL REQUEST] Intel IOMMU updates for v6.20 Lu Baolu
` (3 preceding siblings ...)
2026-01-22 1:48 ` [PATCH 4/7] iommu/vt-d: Flush piotlb for SVM and Nested domain Lu Baolu
@ 2026-01-22 1:48 ` Lu Baolu
2026-01-22 1:48 ` [PATCH 6/7] iommu/vt-d: Clear Present bit before tearing down context entry Lu Baolu
` (2 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Lu Baolu @ 2026-01-22 1:48 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Yi Liu, Dmytro Maluka, Jinhui Guo, iommu, linux-kernel
The Intel VT-d Scalable Mode PASID table entry consists of 512 bits (64
bytes). When tearing down an entry, the current implementation zeros the
entire 64-byte structure immediately using multiple 64-bit writes.
Since the IOMMU hardware may fetch these 64 bytes using multiple
internal transactions (e.g., four 128-bit bursts), updating or zeroing
the entire entry while it is active (P=1) risks a "torn" read. If a
hardware fetch occurs simultaneously with the CPU zeroing the entry, the
hardware could observe an inconsistent state, leading to unpredictable
behavior or spurious faults.
Follow the "Guidance to Software for Invalidations" in the VT-d spec
(Section 6.5.3.3) by implementing the recommended ownership handshake:
1. Clear only the 'Present' (P) bit of the PASID entry.
2. Use a dma_wmb() to ensure the cleared bit is visible to hardware
before proceeding.
3. Execute the required invalidation sequence (PASID cache, IOTLB, and
Device-TLB flush) to ensure the hardware has released all cached
references.
4. Only after the flushes are complete, zero out the remaining fields
of the PASID entry.
Also, add a dma_wmb() in pasid_set_present() to ensure that all other
fields of the PASID entry are visible to the hardware before the Present
bit is set.
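The four steps can be condensed into a userspace sketch. The structure layout, dma_wmb() model, and flush helper below are local stand-ins, not the kernel code; the sketch only shows the order of operations: clear P, barrier, invalidate, then zero the remainder.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* 512-bit (64-byte) scalable-mode PASID table entry. */
struct pasid_entry {
	uint64_t val[8];
};

#define PASID_PRESENT 1ULL

static int flushes_done;

/* Stand-in for dma_wmb(): order the P-bit clear before what follows. */
static void dma_wmb_model(void)
{
	__sync_synchronize();
}

/* Stand-in for the PASID-cache/IOTLB/Device-TLB invalidation sequence. */
static void flush_pasid_iotlb_devtlb(void)
{
	flushes_done = 1;
}

static void teardown_entry(struct pasid_entry *pte)
{
	pte->val[0] &= ~PASID_PRESENT;  /* 1. clear only the P bit       */
	dma_wmb_model();                /* 2. make the clear visible     */
	flush_pasid_iotlb_devtlb();     /* 3. drop cached references     */
	memset(pte, 0, sizeof(*pte));   /* 4. only now zero the rest     */
}
```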
Fixes: 0bbeb01a4faf ("iommu/vt-d: Manage scalalble mode PASID tables")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Dmytro Maluka <dmaluka@chromium.org>
Reviewed-by: Samiullah Khawaja <skhawaja@google.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20260120061816.2132558-2-baolu.lu@linux.intel.com
---
drivers/iommu/intel/pasid.h | 14 ++++++++++++++
drivers/iommu/intel/pasid.c | 6 +++++-
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
index b4c85242dc79..0b303bd0b0c1 100644
--- a/drivers/iommu/intel/pasid.h
+++ b/drivers/iommu/intel/pasid.h
@@ -234,9 +234,23 @@ static inline void pasid_set_wpe(struct pasid_entry *pe)
*/
static inline void pasid_set_present(struct pasid_entry *pe)
{
+ dma_wmb();
pasid_set_bits(&pe->val[0], 1 << 0, 1);
}
+/*
+ * Clear the Present (P) bit (bit 0) of a scalable-mode PASID table entry.
+ * This initiates the transition of the entry's ownership from hardware
+ * to software. The caller is responsible for fulfilling the invalidation
+ * handshake recommended by the VT-d spec, Section 6.5.3.3 (Guidance to
+ * Software for Invalidations).
+ */
+static inline void pasid_clear_present(struct pasid_entry *pe)
+{
+ pasid_set_bits(&pe->val[0], 1 << 0, 0);
+ dma_wmb();
+}
+
/*
* Setup Page Walk Snoop bit (Bit 87) of a scalable mode PASID
* entry.
diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index 6379b211f12b..07e056b24605 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -273,7 +273,7 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
did = pasid_get_domain_id(pte);
pgtt = pasid_pte_get_pgtt(pte);
- intel_pasid_clear_entry(dev, pasid, fault_ignore);
+ pasid_clear_present(pte);
spin_unlock(&iommu->lock);
if (!ecap_coherent(iommu->ecap))
@@ -287,6 +287,10 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
devtlb_invalidation_with_pasid(iommu, dev, pasid);
+ intel_pasid_clear_entry(dev, pasid, fault_ignore);
+ if (!ecap_coherent(iommu->ecap))
+ clflush_cache_range(pte, sizeof(*pte));
+
if (!fault_ignore)
intel_iommu_drain_pasid_prq(dev, pasid);
}
--
2.43.0
* [PATCH 6/7] iommu/vt-d: Clear Present bit before tearing down context entry
2026-01-22 1:48 [PATCH 0/7] [PULL REQUEST] Intel IOMMU updates for v6.20 Lu Baolu
` (4 preceding siblings ...)
2026-01-22 1:48 ` [PATCH 5/7] iommu/vt-d: Clear Present bit before tearing down PASID entry Lu Baolu
@ 2026-01-22 1:48 ` Lu Baolu
2026-01-22 1:48 ` [PATCH 7/7] iommu/vt-d: Fix race condition during PASID entry replacement Lu Baolu
2026-01-22 8:20 ` [PATCH 0/7] [PULL REQUEST] Intel IOMMU updates for v6.20 Joerg Roedel
7 siblings, 0 replies; 9+ messages in thread
From: Lu Baolu @ 2026-01-22 1:48 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Yi Liu, Dmytro Maluka, Jinhui Guo, iommu, linux-kernel
When tearing down a context entry, the current implementation zeros the
entire 128-bit entry using multiple 64-bit writes. This creates a window
where the hardware can fetch a "torn" entry — where some fields are
already zeroed while the 'Present' bit is still set — leading to
unpredictable behavior or spurious faults.
While x86 provides strong write ordering, the compiler may reorder writes
to the two 64-bit halves of the context entry. Even without compiler
reordering, the hardware fetch is not guaranteed to be atomic with
respect to multiple CPU writes.
Align with the "Guidance to Software for Invalidations" in the VT-d spec
(Section 6.5.3.3) by implementing the recommended ownership handshake:
1. Clear only the 'Present' (P) bit of the context entry first to
signal the transition of ownership from hardware to software.
2. Use dma_wmb() to ensure the cleared bit is visible to the IOMMU.
3. Perform the required cache and context-cache invalidation to ensure
hardware no longer has cached references to the entry.
4. Fully zero out the entry only after the invalidation is complete.
Also, add a dma_wmb() to context_set_present() to ensure the entry
is fully initialized before the 'Present' bit becomes visible.
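A minimal userspace model of the two helpers follows. READ_ONCE/WRITE_ONCE are approximated with volatile accesses and dma_wmb() with a compiler barrier, which is enough to show the intent: clear_present() touches only bit 0 of the low qword, and set_present() orders the rest of the entry before the P bit becomes visible. This is a sketch, not the kernel definitions.

```c
#include <assert.h>
#include <stdint.h>

/* 128-bit context entry: two 64-bit halves. */
struct context_entry {
	uint64_t lo, hi;
};

/* Bits h..l set, as in the kernel's GENMASK_ULL(). */
#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

static void clear_present(struct context_entry *ctx)
{
	/* Keep every field except the Present bit (bit 0). */
	uint64_t val = *(volatile uint64_t *)&ctx->lo & GENMASK_ULL(63, 1);

	*(volatile uint64_t *)&ctx->lo = val;
	__asm__ __volatile__("" ::: "memory"); /* stands in for dma_wmb() */
}

static void set_present(struct context_entry *ctx)
{
	/* All other fields must be visible before P is set. */
	__asm__ __volatile__("" ::: "memory"); /* stands in for dma_wmb() */
	*(volatile uint64_t *)&ctx->lo |= 1ULL;
}
```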
Fixes: ba39592764ed2 ("Intel IOMMU: Intel IOMMU driver")
Reported-by: Dmytro Maluka <dmaluka@chromium.org>
Closes: https://lore.kernel.org/all/aTG7gc7I5wExai3S@google.com/
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Dmytro Maluka <dmaluka@chromium.org>
Reviewed-by: Samiullah Khawaja <skhawaja@google.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20260120061816.2132558-3-baolu.lu@linux.intel.com
---
drivers/iommu/intel/iommu.h | 21 ++++++++++++++++++++-
drivers/iommu/intel/iommu.c | 4 +++-
drivers/iommu/intel/pasid.c | 5 ++++-
3 files changed, 27 insertions(+), 3 deletions(-)
diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
index 25c5e22096d4..599913fb65d5 100644
--- a/drivers/iommu/intel/iommu.h
+++ b/drivers/iommu/intel/iommu.h
@@ -900,7 +900,26 @@ static inline int pfn_level_offset(u64 pfn, int level)
static inline void context_set_present(struct context_entry *context)
{
- context->lo |= 1;
+ u64 val;
+
+ dma_wmb();
+ val = READ_ONCE(context->lo) | 1;
+ WRITE_ONCE(context->lo, val);
+}
+
+/*
+ * Clear the Present (P) bit (bit 0) of a context table entry. This initiates
+ * the transition of the entry's ownership from hardware to software. The
+ * caller is responsible for fulfilling the invalidation handshake recommended
+ * by the VT-d spec, Section 6.5.3.3 (Guidance to Software for Invalidations).
+ */
+static inline void context_clear_present(struct context_entry *context)
+{
+ u64 val;
+
+ val = READ_ONCE(context->lo) & GENMASK_ULL(63, 1);
+ WRITE_ONCE(context->lo, val);
+ dma_wmb();
}
static inline void context_set_fault_enable(struct context_entry *context)
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 134302fbcd92..c66cc51f9e51 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1240,10 +1240,12 @@ static void domain_context_clear_one(struct device_domain_info *info, u8 bus, u8
}
did = context_domain_id(context);
- context_clear_entry(context);
+ context_clear_present(context);
__iommu_flush_cache(iommu, context, sizeof(*context));
spin_unlock(&iommu->lock);
intel_context_flush_no_pasid(info, context, did);
+ context_clear_entry(context);
+ __iommu_flush_cache(iommu, context, sizeof(*context));
}
int __domain_setup_first_level(struct intel_iommu *iommu, struct device *dev,
diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index 07e056b24605..f5dfa9b9eb3e 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -1024,7 +1024,7 @@ static int device_pasid_table_setup(struct device *dev, u8 bus, u8 devfn)
}
if (context_copied(iommu, bus, devfn)) {
- context_clear_entry(context);
+ context_clear_present(context);
__iommu_flush_cache(iommu, context, sizeof(*context));
/*
@@ -1044,6 +1044,9 @@ static int device_pasid_table_setup(struct device *dev, u8 bus, u8 devfn)
iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
devtlb_invalidation_with_pasid(iommu, dev, IOMMU_NO_PASID);
+ context_clear_entry(context);
+ __iommu_flush_cache(iommu, context, sizeof(*context));
+
/*
* At this point, the device is supposed to finish reset at
* its driver probe stage, so no in-flight DMA will exist,
--
2.43.0
* [PATCH 7/7] iommu/vt-d: Fix race condition during PASID entry replacement
2026-01-22 1:48 [PATCH 0/7] [PULL REQUEST] Intel IOMMU updates for v6.20 Lu Baolu
` (5 preceding siblings ...)
2026-01-22 1:48 ` [PATCH 6/7] iommu/vt-d: Clear Present bit before tearing down context entry Lu Baolu
@ 2026-01-22 1:48 ` Lu Baolu
2026-01-22 8:20 ` [PATCH 0/7] [PULL REQUEST] Intel IOMMU updates for v6.20 Joerg Roedel
7 siblings, 0 replies; 9+ messages in thread
From: Lu Baolu @ 2026-01-22 1:48 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Yi Liu, Dmytro Maluka, Jinhui Guo, iommu, linux-kernel
The Intel VT-d PASID table entry is 512 bits (64 bytes). When replacing
an active PASID entry (e.g., during domain replacement), the current
implementation calculates a new entry on the stack and copies it to the
table using a single structure assignment.
struct pasid_entry *pte, new_pte;
pte = intel_pasid_get_entry(dev, pasid);
pasid_pte_config_first_level(iommu, &new_pte, ...);
*pte = new_pte;
Because the hardware may fetch the 512-bit PASID entry in multiple
128-bit chunks, updating the entire entry while it is active (Present
bit set) risks a "torn" read. In this scenario, the IOMMU hardware
could observe an inconsistent state — partially new data and partially
old data — leading to unpredictable behavior or spurious faults.
Fix this by removing the unsafe "replace" helpers and following the
"clear-then-update" flow, which ensures the Present bit is cleared and
the required invalidation handshake is completed before the new
configuration is applied.
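The resulting control flow can be sketched as follows. teardown() and setup() are stand-ins for intel_pasid_tear_down_entry() and the intel_pasid_setup_*() helpers; the point shown is that when an old domain exists, the entry is fully torn down (with the invalidation handshake) before the new configuration is written, rather than overwritten in place with a struct assignment.

```c
#include <assert.h>

/* Event log used to verify the ordering. */
enum { EV_TEARDOWN = 1, EV_SETUP = 2 };

static int ev_log[4], ev_len;

/* Stand-in for intel_pasid_tear_down_entry(). */
static void teardown(void)
{
	ev_log[ev_len++] = EV_TEARDOWN;
}

/* Stand-in for an intel_pasid_setup_*() helper. */
static int setup(void)
{
	ev_log[ev_len++] = EV_SETUP;
	return 0;
}

/* Mirrors the shape of __domain_setup_first_level() after this patch:
 * tear down the active entry only when replacing an old domain. */
static int domain_setup(int replacing_old_domain)
{
	if (replacing_old_domain)
		teardown();
	return setup();
}
```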
Fixes: 7543ee63e811 ("iommu/vt-d: Add pasid replace helpers")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Samiullah Khawaja <skhawaja@google.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20260120061816.2132558-4-baolu.lu@linux.intel.com
---
drivers/iommu/intel/pasid.h | 14 ---
drivers/iommu/intel/iommu.c | 29 +++---
drivers/iommu/intel/nested.c | 9 +-
drivers/iommu/intel/pasid.c | 184 -----------------------------------
4 files changed, 16 insertions(+), 220 deletions(-)
diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
index 0b303bd0b0c1..c3c8c907983e 100644
--- a/drivers/iommu/intel/pasid.h
+++ b/drivers/iommu/intel/pasid.h
@@ -316,20 +316,6 @@ int intel_pasid_setup_pass_through(struct intel_iommu *iommu,
struct device *dev, u32 pasid);
int intel_pasid_setup_nested(struct intel_iommu *iommu, struct device *dev,
u32 pasid, struct dmar_domain *domain);
-int intel_pasid_replace_first_level(struct intel_iommu *iommu,
- struct device *dev, phys_addr_t fsptptr,
- u32 pasid, u16 did, u16 old_did, int flags);
-int intel_pasid_replace_second_level(struct intel_iommu *iommu,
- struct dmar_domain *domain,
- struct device *dev, u16 old_did,
- u32 pasid);
-int intel_pasid_replace_pass_through(struct intel_iommu *iommu,
- struct device *dev, u16 old_did,
- u32 pasid);
-int intel_pasid_replace_nested(struct intel_iommu *iommu,
- struct device *dev, u32 pasid,
- u16 old_did, struct dmar_domain *domain);
-
void intel_pasid_tear_down_entry(struct intel_iommu *iommu,
struct device *dev, u32 pasid,
bool fault_ignore);
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index c66cc51f9e51..705828b06e32 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1252,12 +1252,10 @@ int __domain_setup_first_level(struct intel_iommu *iommu, struct device *dev,
ioasid_t pasid, u16 did, phys_addr_t fsptptr,
int flags, struct iommu_domain *old)
{
- if (!old)
- return intel_pasid_setup_first_level(iommu, dev, fsptptr, pasid,
- did, flags);
- return intel_pasid_replace_first_level(iommu, dev, fsptptr, pasid, did,
- iommu_domain_did(old, iommu),
- flags);
+ if (old)
+ intel_pasid_tear_down_entry(iommu, dev, pasid, false);
+
+ return intel_pasid_setup_first_level(iommu, dev, fsptptr, pasid, did, flags);
}
static int domain_setup_second_level(struct intel_iommu *iommu,
@@ -1265,23 +1263,20 @@ static int domain_setup_second_level(struct intel_iommu *iommu,
struct device *dev, ioasid_t pasid,
struct iommu_domain *old)
{
- if (!old)
- return intel_pasid_setup_second_level(iommu, domain,
- dev, pasid);
- return intel_pasid_replace_second_level(iommu, domain, dev,
- iommu_domain_did(old, iommu),
- pasid);
+ if (old)
+ intel_pasid_tear_down_entry(iommu, dev, pasid, false);
+
+ return intel_pasid_setup_second_level(iommu, domain, dev, pasid);
}
static int domain_setup_passthrough(struct intel_iommu *iommu,
struct device *dev, ioasid_t pasid,
struct iommu_domain *old)
{
- if (!old)
- return intel_pasid_setup_pass_through(iommu, dev, pasid);
- return intel_pasid_replace_pass_through(iommu, dev,
- iommu_domain_did(old, iommu),
- pasid);
+ if (old)
+ intel_pasid_tear_down_entry(iommu, dev, pasid, false);
+
+ return intel_pasid_setup_pass_through(iommu, dev, pasid);
}
static int domain_setup_first_level(struct intel_iommu *iommu,
diff --git a/drivers/iommu/intel/nested.c b/drivers/iommu/intel/nested.c
index a3fb8c193ca6..e9a440e9c960 100644
--- a/drivers/iommu/intel/nested.c
+++ b/drivers/iommu/intel/nested.c
@@ -136,11 +136,10 @@ static int domain_setup_nested(struct intel_iommu *iommu,
struct device *dev, ioasid_t pasid,
struct iommu_domain *old)
{
- if (!old)
- return intel_pasid_setup_nested(iommu, dev, pasid, domain);
- return intel_pasid_replace_nested(iommu, dev, pasid,
- iommu_domain_did(old, iommu),
- domain);
+ if (old)
+ intel_pasid_tear_down_entry(iommu, dev, pasid, false);
+
+ return intel_pasid_setup_nested(iommu, dev, pasid, domain);
}
static int intel_nested_set_dev_pasid(struct iommu_domain *domain,
diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index f5dfa9b9eb3e..b63a71904cfb 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -417,50 +417,6 @@ int intel_pasid_setup_first_level(struct intel_iommu *iommu, struct device *dev,
return 0;
}
-int intel_pasid_replace_first_level(struct intel_iommu *iommu,
- struct device *dev, phys_addr_t fsptptr,
- u32 pasid, u16 did, u16 old_did,
- int flags)
-{
- struct pasid_entry *pte, new_pte;
-
- if (!ecap_flts(iommu->ecap)) {
- pr_err("No first level translation support on %s\n",
- iommu->name);
- return -EINVAL;
- }
-
- if ((flags & PASID_FLAG_FL5LP) && !cap_fl5lp_support(iommu->cap)) {
- pr_err("No 5-level paging support for first-level on %s\n",
- iommu->name);
- return -EINVAL;
- }
-
- pasid_pte_config_first_level(iommu, &new_pte, fsptptr, did, flags);
-
- spin_lock(&iommu->lock);
- pte = intel_pasid_get_entry(dev, pasid);
- if (!pte) {
- spin_unlock(&iommu->lock);
- return -ENODEV;
- }
-
- if (!pasid_pte_is_present(pte)) {
- spin_unlock(&iommu->lock);
- return -EINVAL;
- }
-
- WARN_ON(old_did != pasid_get_domain_id(pte));
-
- *pte = new_pte;
- spin_unlock(&iommu->lock);
-
- intel_pasid_flush_present(iommu, dev, pasid, old_did, pte);
- intel_iommu_drain_pasid_prq(dev, pasid);
-
- return 0;
-}
-
/*
* Set up the scalable mode pasid entry for second only translation type.
*/
@@ -527,51 +483,6 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu,
return 0;
}
-int intel_pasid_replace_second_level(struct intel_iommu *iommu,
- struct dmar_domain *domain,
- struct device *dev, u16 old_did,
- u32 pasid)
-{
- struct pasid_entry *pte, new_pte;
- u16 did;
-
- /*
- * If hardware advertises no support for second level
- * translation, return directly.
- */
- if (!ecap_slts(iommu->ecap)) {
- pr_err("No second level translation support on %s\n",
- iommu->name);
- return -EINVAL;
- }
-
- did = domain_id_iommu(domain, iommu);
-
- pasid_pte_config_second_level(iommu, &new_pte, domain, did);
-
- spin_lock(&iommu->lock);
- pte = intel_pasid_get_entry(dev, pasid);
- if (!pte) {
- spin_unlock(&iommu->lock);
- return -ENODEV;
- }
-
- if (!pasid_pte_is_present(pte)) {
- spin_unlock(&iommu->lock);
- return -EINVAL;
- }
-
- WARN_ON(old_did != pasid_get_domain_id(pte));
-
- *pte = new_pte;
- spin_unlock(&iommu->lock);
-
- intel_pasid_flush_present(iommu, dev, pasid, old_did, pte);
- intel_iommu_drain_pasid_prq(dev, pasid);
-
- return 0;
-}
-
/*
* Set up dirty tracking on a second only or nested translation type.
*/
@@ -684,38 +595,6 @@ int intel_pasid_setup_pass_through(struct intel_iommu *iommu,
return 0;
}
-int intel_pasid_replace_pass_through(struct intel_iommu *iommu,
- struct device *dev, u16 old_did,
- u32 pasid)
-{
- struct pasid_entry *pte, new_pte;
- u16 did = FLPT_DEFAULT_DID;
-
- pasid_pte_config_pass_through(iommu, &new_pte, did);
-
- spin_lock(&iommu->lock);
- pte = intel_pasid_get_entry(dev, pasid);
- if (!pte) {
- spin_unlock(&iommu->lock);
- return -ENODEV;
- }
-
- if (!pasid_pte_is_present(pte)) {
- spin_unlock(&iommu->lock);
- return -EINVAL;
- }
-
- WARN_ON(old_did != pasid_get_domain_id(pte));
-
- *pte = new_pte;
- spin_unlock(&iommu->lock);
-
- intel_pasid_flush_present(iommu, dev, pasid, old_did, pte);
- intel_iommu_drain_pasid_prq(dev, pasid);
-
- return 0;
-}
-
/*
* Set the page snoop control for a pasid entry which has been set up.
*/
@@ -849,69 +728,6 @@ int intel_pasid_setup_nested(struct intel_iommu *iommu, struct device *dev,
return 0;
}
-int intel_pasid_replace_nested(struct intel_iommu *iommu,
- struct device *dev, u32 pasid,
- u16 old_did, struct dmar_domain *domain)
-{
- struct iommu_hwpt_vtd_s1 *s1_cfg = &domain->s1_cfg;
- struct dmar_domain *s2_domain = domain->s2_domain;
- u16 did = domain_id_iommu(domain, iommu);
- struct pasid_entry *pte, new_pte;
-
- /* Address width should match the address width supported by hardware */
- switch (s1_cfg->addr_width) {
- case ADDR_WIDTH_4LEVEL:
- break;
- case ADDR_WIDTH_5LEVEL:
- if (!cap_fl5lp_support(iommu->cap)) {
- dev_err_ratelimited(dev,
- "5-level paging not supported\n");
- return -EINVAL;
- }
- break;
- default:
- dev_err_ratelimited(dev, "Invalid stage-1 address width %d\n",
- s1_cfg->addr_width);
- return -EINVAL;
- }
-
- if ((s1_cfg->flags & IOMMU_VTD_S1_SRE) && !ecap_srs(iommu->ecap)) {
- pr_err_ratelimited("No supervisor request support on %s\n",
- iommu->name);
- return -EINVAL;
- }
-
- if ((s1_cfg->flags & IOMMU_VTD_S1_EAFE) && !ecap_eafs(iommu->ecap)) {
- pr_err_ratelimited("No extended access flag support on %s\n",
- iommu->name);
- return -EINVAL;
- }
-
- pasid_pte_config_nestd(iommu, &new_pte, s1_cfg, s2_domain, did);
-
- spin_lock(&iommu->lock);
- pte = intel_pasid_get_entry(dev, pasid);
- if (!pte) {
- spin_unlock(&iommu->lock);
- return -ENODEV;
- }
-
- if (!pasid_pte_is_present(pte)) {
- spin_unlock(&iommu->lock);
- return -EINVAL;
- }
-
- WARN_ON(old_did != pasid_get_domain_id(pte));
-
- *pte = new_pte;
- spin_unlock(&iommu->lock);
-
- intel_pasid_flush_present(iommu, dev, pasid, old_did, pte);
- intel_iommu_drain_pasid_prq(dev, pasid);
-
- return 0;
-}
-
/*
* Interfaces to setup or teardown a pasid table to the scalable-mode
* context table entry:
--
2.43.0
* Re: [PATCH 0/7] [PULL REQUEST] Intel IOMMU updates for v6.20
2026-01-22 1:48 [PATCH 0/7] [PULL REQUEST] Intel IOMMU updates for v6.20 Lu Baolu
` (6 preceding siblings ...)
2026-01-22 1:48 ` [PATCH 7/7] iommu/vt-d: Fix race condition during PASID entry replacement Lu Baolu
@ 2026-01-22 8:20 ` Joerg Roedel
7 siblings, 0 replies; 9+ messages in thread
From: Joerg Roedel @ 2026-01-22 8:20 UTC (permalink / raw)
To: Lu Baolu; +Cc: Yi Liu, Dmytro Maluka, Jinhui Guo, iommu, linux-kernel
On Thu, Jan 22, 2026 at 09:48:49AM +0800, Lu Baolu wrote:
> Hi Joerg,
>
> The following changes have been queued for v6.20-rc1. They are about
> some non-critical fixes, including:
>
> - Skip dev-iotlb flush for inaccessible PCIe device
> - Flush cache for PASID table before using it
> - Use right invalidation method for SVA and NESTED domains
> - Ensure atomicity in context and PASID entry updates
>
> These patches are based on v6.19-rc6. Please consider them for the
> iommu/vt-d branch.
>
> Best regards,
> baolu
>
> Dmytro Maluka (1):
> iommu/vt-d: Flush cache for PASID table before using it
>
> Jinhui Guo (2):
> iommu/vt-d: Skip dev-iotlb flush for inaccessible PCIe device without
> scalable mode
> iommu/vt-d: Flush dev-IOTLB only when PCIe device is accessible in
> scalable mode
>
> Lu Baolu (3):
> iommu/vt-d: Clear Present bit before tearing down PASID entry
> iommu/vt-d: Clear Present bit before tearing down context entry
> iommu/vt-d: Fix race condition during PASID entry replacement
>
> Yi Liu (1):
> iommu/vt-d: Flush piotlb for SVM and Nested domain
>
> drivers/iommu/intel/iommu.h | 21 +++-
> drivers/iommu/intel/pasid.h | 28 ++---
> drivers/iommu/intel/cache.c | 9 +-
> drivers/iommu/intel/iommu.c | 33 +++---
> drivers/iommu/intel/nested.c | 9 +-
> drivers/iommu/intel/pasid.c | 212 ++++-------------------------------
> 6 files changed, 83 insertions(+), 229 deletions(-)
Applied all, thanks Baolu.