* [PATCH v2 0/3] iommu/vt-d: Ensure atomicity in context and PASID entry updates
@ 2026-01-20 6:18 Lu Baolu
2026-01-20 6:18 ` [PATCH v2 1/3] iommu/vt-d: Clear Present bit before tearing down PASID entry Lu Baolu
` (3 more replies)
0 siblings, 4 replies; 17+ messages in thread
From: Lu Baolu @ 2026-01-20 6:18 UTC (permalink / raw)
To: Joerg Roedel, Will Deacon, Robin Murphy, Kevin Tian,
Jason Gunthorpe
Cc: Dmytro Maluka, Samiullah Khawaja, iommu, linux-kernel, Lu Baolu
This is a follow-up to recent discussions on the iommu mailing
list [1] [2] regarding potential race conditions in translation table
entry updates.
The Intel VT-d hardware fetches translation table entries (context
entries and PASID entries) in 128-bit (16-byte) chunks. Currently, the
Linux driver often updates these entries using multiple 64-bit writes.
This creates a race condition where the IOMMU hardware may fetch a
"torn" entry — a mixture of old and new data — during a CPU update. This
can lead to unpredictable hardware behavior, spurious faults, or system
instability.
This series addresses these atomicity issues by following the
translation table entry ownership handshake protocol recommended by the
VT-d specification.
[1] https://lore.kernel.org/linux-iommu/20251227175728.4358-1-dmaluka@chromium.org/
[2] https://lore.kernel.org/linux-iommu/20260107201800.2486137-1-skhawaja@google.com/
Change log:
v2:
- These fixes need to be backported deep into old stable versions, and
the previous solution relied heavily on the x86_64 cmpxchg16b
instruction, which is not backport-friendly and might cause
regressions on early hardware or configurations. This version
therefore uses the simpler dma_wmb() approach.
- Jason proposed the entry-sync framework
(https://lore.kernel.org/linux-iommu/20260113150542.GF812923@nvidia.com/)
which consolidates the details of how to update a translation table
entry into common code shared by the individual drivers, so that an
IOMMU driver can be written without having to consider hitless
vs. non-hitless replacement.
- To make life easier, I decided to split the work into multiple
series. This first series fixes the real problems in a
backport-friendly way; a follow-up series will cover entry-sync for
PASID table entry updates.
v1: https://lore.kernel.org/linux-iommu/20260113030052.977366-1-baolu.lu@linux.intel.com/
Lu Baolu (3):
iommu/vt-d: Clear Present bit before tearing down PASID entry
iommu/vt-d: Clear Present bit before tearing down context entry
iommu/vt-d: Fix race condition during PASID entry replacement
drivers/iommu/intel/iommu.h | 21 +++-
drivers/iommu/intel/pasid.h | 28 +++---
drivers/iommu/intel/iommu.c | 33 +++---
drivers/iommu/intel/nested.c | 9 +-
drivers/iommu/intel/pasid.c | 190 +----------------------------------
5 files changed, 58 insertions(+), 223 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 17+ messages in thread* [PATCH v2 1/3] iommu/vt-d: Clear Present bit before tearing down PASID entry 2026-01-20 6:18 [PATCH v2 0/3] iommu/vt-d: Ensure atomicity in context and PASID entry updates Lu Baolu @ 2026-01-20 6:18 ` Lu Baolu 2026-01-20 13:56 ` Dmytro Maluka 2026-01-21 6:16 ` Tian, Kevin 2026-01-20 6:18 ` [PATCH v2 2/3] iommu/vt-d: Clear Present bit before tearing down context entry Lu Baolu ` (2 subsequent siblings) 3 siblings, 2 replies; 17+ messages in thread From: Lu Baolu @ 2026-01-20 6:18 UTC (permalink / raw) To: Joerg Roedel, Will Deacon, Robin Murphy, Kevin Tian, Jason Gunthorpe Cc: Dmytro Maluka, Samiullah Khawaja, iommu, linux-kernel, Lu Baolu The Intel VT-d Scalable Mode PASID table entry consists of 512 bits (64 bytes). When tearing down an entry, the current implementation zeros the entire 64-byte structure immediately using multiple 64-bit writes. Since the IOMMU hardware may fetch these 64 bytes using multiple internal transactions (e.g., four 128-bit bursts), updating or zeroing the entire entry while it is active (P=1) risks a "torn" read. If a hardware fetch occurs simultaneously with the CPU zeroing the entry, the hardware could observe an inconsistent state, leading to unpredictable behavior or spurious faults. Follow the "Guidance to Software for Invalidations" in the VT-d spec (Section 6.5.3.3) by implementing the recommended ownership handshake: 1. Clear only the 'Present' (P) bit of the PASID entry. 2. Use a dma_wmb() to ensure the cleared bit is visible to hardware before proceeding. 3. Execute the required invalidation sequence (PASID cache, IOTLB, and Device-TLB flush) to ensure the hardware has released all cached references. 4. Only after the flushes are complete, zero out the remaining fields of the PASID entry. Also, add a dma_wmb() in pasid_set_present() to ensure that all other fields of the PASID entry are visible to the hardware before the Present bit is set. 
Fixes: 0bbeb01a4faf ("iommu/vt-d: Manage scalalble mode PASID tables") Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> --- drivers/iommu/intel/pasid.h | 14 ++++++++++++++ drivers/iommu/intel/pasid.c | 6 +++++- 2 files changed, 19 insertions(+), 1 deletion(-) diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h index b4c85242dc79..0b303bd0b0c1 100644 --- a/drivers/iommu/intel/pasid.h +++ b/drivers/iommu/intel/pasid.h @@ -234,9 +234,23 @@ static inline void pasid_set_wpe(struct pasid_entry *pe) */ static inline void pasid_set_present(struct pasid_entry *pe) { + dma_wmb(); pasid_set_bits(&pe->val[0], 1 << 0, 1); } +/* + * Clear the Present (P) bit (bit 0) of a scalable-mode PASID table entry. + * This initiates the transition of the entry's ownership from hardware + * to software. The caller is responsible for fulfilling the invalidation + * handshake recommended by the VT-d spec, Section 6.5.3.3 (Guidance to + * Software for Invalidations). + */ +static inline void pasid_clear_present(struct pasid_entry *pe) +{ + pasid_set_bits(&pe->val[0], 1 << 0, 0); + dma_wmb(); +} + /* * Setup Page Walk Snoop bit (Bit 87) of a scalable mode PASID * entry. 
diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c index 3e2255057079..eb069aefa4fa 100644 --- a/drivers/iommu/intel/pasid.c +++ b/drivers/iommu/intel/pasid.c @@ -272,7 +272,7 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev, did = pasid_get_domain_id(pte); pgtt = pasid_pte_get_pgtt(pte); - intel_pasid_clear_entry(dev, pasid, fault_ignore); + pasid_clear_present(pte); spin_unlock(&iommu->lock); if (!ecap_coherent(iommu->ecap)) @@ -286,6 +286,10 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev, iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH); devtlb_invalidation_with_pasid(iommu, dev, pasid); + intel_pasid_clear_entry(dev, pasid, fault_ignore); + if (!ecap_coherent(iommu->ecap)) + clflush_cache_range(pte, sizeof(*pte)); + if (!fault_ignore) intel_iommu_drain_pasid_prq(dev, pasid); } -- 2.43.0 ^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH v2 1/3] iommu/vt-d: Clear Present bit before tearing down PASID entry 2026-01-20 6:18 ` [PATCH v2 1/3] iommu/vt-d: Clear Present bit before tearing down PASID entry Lu Baolu @ 2026-01-20 13:56 ` Dmytro Maluka 2026-01-20 18:14 ` Samiullah Khawaja 2026-01-21 6:16 ` Tian, Kevin 1 sibling, 1 reply; 17+ messages in thread From: Dmytro Maluka @ 2026-01-20 13:56 UTC (permalink / raw) To: Lu Baolu Cc: Joerg Roedel, Will Deacon, Robin Murphy, Kevin Tian, Jason Gunthorpe, Samiullah Khawaja, iommu, linux-kernel, Vineeth Pillai (Google), Aashish Sharma On Tue, Jan 20, 2026 at 02:18:12PM +0800, Lu Baolu wrote: > The Intel VT-d Scalable Mode PASID table entry consists of 512 bits (64 > bytes). When tearing down an entry, the current implementation zeros the > entire 64-byte structure immediately using multiple 64-bit writes. > > Since the IOMMU hardware may fetch these 64 bytes using multiple > internal transactions (e.g., four 128-bit bursts), updating or zeroing > the entire entry while it is active (P=1) risks a "torn" read. If a > hardware fetch occurs simultaneously with the CPU zeroing the entry, the > hardware could observe an inconsistent state, leading to unpredictable > behavior or spurious faults. > > Follow the "Guidance to Software for Invalidations" in the VT-d spec > (Section 6.5.3.3) by implementing the recommended ownership handshake: > > 1. Clear only the 'Present' (P) bit of the PASID entry. > 2. Use a dma_wmb() to ensure the cleared bit is visible to hardware > before proceeding. > 3. Execute the required invalidation sequence (PASID cache, IOTLB, and > Device-TLB flush) to ensure the hardware has released all cached > references. > 4. Only after the flushes are complete, zero out the remaining fields > of the PASID entry. > > Also, add a dma_wmb() in pasid_set_present() to ensure that all other > fields of the PASID entry are visible to the hardware before the Present > bit is set. 
> > Fixes: 0bbeb01a4faf ("iommu/vt-d: Manage scalalble mode PASID tables") > Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Dmytro Maluka <dmaluka@chromium.org> > --- > drivers/iommu/intel/pasid.h | 14 ++++++++++++++ > drivers/iommu/intel/pasid.c | 6 +++++- > 2 files changed, 19 insertions(+), 1 deletion(-) > > diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h > index b4c85242dc79..0b303bd0b0c1 100644 > --- a/drivers/iommu/intel/pasid.h > +++ b/drivers/iommu/intel/pasid.h > @@ -234,9 +234,23 @@ static inline void pasid_set_wpe(struct pasid_entry *pe) > */ > static inline void pasid_set_present(struct pasid_entry *pe) > { > + dma_wmb(); > pasid_set_bits(&pe->val[0], 1 << 0, 1); > } > > +/* > + * Clear the Present (P) bit (bit 0) of a scalable-mode PASID table entry. > + * This initiates the transition of the entry's ownership from hardware > + * to software. The caller is responsible for fulfilling the invalidation > + * handshake recommended by the VT-d spec, Section 6.5.3.3 (Guidance to > + * Software for Invalidations). > + */ > +static inline void pasid_clear_present(struct pasid_entry *pe) > +{ > + pasid_set_bits(&pe->val[0], 1 << 0, 0); > + dma_wmb(); > +} > + > /* > * Setup Page Walk Snoop bit (Bit 87) of a scalable mode PASID > * entry. 
> diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c > index 3e2255057079..eb069aefa4fa 100644 > --- a/drivers/iommu/intel/pasid.c > +++ b/drivers/iommu/intel/pasid.c > @@ -272,7 +272,7 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev, > > did = pasid_get_domain_id(pte); > pgtt = pasid_pte_get_pgtt(pte); > - intel_pasid_clear_entry(dev, pasid, fault_ignore); > + pasid_clear_present(pte); > spin_unlock(&iommu->lock); > > if (!ecap_coherent(iommu->ecap)) > @@ -286,6 +286,10 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev, > iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH); > > devtlb_invalidation_with_pasid(iommu, dev, pasid); > + intel_pasid_clear_entry(dev, pasid, fault_ignore); > + if (!ecap_coherent(iommu->ecap)) > + clflush_cache_range(pte, sizeof(*pte)); > + > if (!fault_ignore) > intel_iommu_drain_pasid_prq(dev, pasid); > } > -- > 2.43.0 ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v2 1/3] iommu/vt-d: Clear Present bit before tearing down PASID entry 2026-01-20 13:56 ` Dmytro Maluka @ 2026-01-20 18:14 ` Samiullah Khawaja 0 siblings, 0 replies; 17+ messages in thread From: Samiullah Khawaja @ 2026-01-20 18:14 UTC (permalink / raw) To: Dmytro Maluka Cc: Lu Baolu, Joerg Roedel, Will Deacon, Robin Murphy, Kevin Tian, Jason Gunthorpe, iommu, linux-kernel, Vineeth Pillai (Google), Aashish Sharma On Tue, Jan 20, 2026 at 5:56 AM Dmytro Maluka <dmaluka@chromium.org> wrote: > > On Tue, Jan 20, 2026 at 02:18:12PM +0800, Lu Baolu wrote: > > The Intel VT-d Scalable Mode PASID table entry consists of 512 bits (64 > > bytes). When tearing down an entry, the current implementation zeros the > > entire 64-byte structure immediately using multiple 64-bit writes. > > > > Since the IOMMU hardware may fetch these 64 bytes using multiple > > internal transactions (e.g., four 128-bit bursts), updating or zeroing > > the entire entry while it is active (P=1) risks a "torn" read. If a > > hardware fetch occurs simultaneously with the CPU zeroing the entry, the > > hardware could observe an inconsistent state, leading to unpredictable > > behavior or spurious faults. > > > > Follow the "Guidance to Software for Invalidations" in the VT-d spec > > (Section 6.5.3.3) by implementing the recommended ownership handshake: > > > > 1. Clear only the 'Present' (P) bit of the PASID entry. > > 2. Use a dma_wmb() to ensure the cleared bit is visible to hardware > > before proceeding. > > 3. Execute the required invalidation sequence (PASID cache, IOTLB, and > > Device-TLB flush) to ensure the hardware has released all cached > > references. > > 4. Only after the flushes are complete, zero out the remaining fields > > of the PASID entry. > > > > Also, add a dma_wmb() in pasid_set_present() to ensure that all other > > fields of the PASID entry are visible to the hardware before the Present > > bit is set. 
> > > > Fixes: 0bbeb01a4faf ("iommu/vt-d: Manage scalalble mode PASID tables") > > Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> > > Reviewed-by: Dmytro Maluka <dmaluka@chromium.org> > > > --- > > drivers/iommu/intel/pasid.h | 14 ++++++++++++++ > > drivers/iommu/intel/pasid.c | 6 +++++- > > 2 files changed, 19 insertions(+), 1 deletion(-) > > > > diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h > > index b4c85242dc79..0b303bd0b0c1 100644 > > --- a/drivers/iommu/intel/pasid.h > > +++ b/drivers/iommu/intel/pasid.h > > @@ -234,9 +234,23 @@ static inline void pasid_set_wpe(struct pasid_entry *pe) > > */ > > static inline void pasid_set_present(struct pasid_entry *pe) > > { > > + dma_wmb(); > > pasid_set_bits(&pe->val[0], 1 << 0, 1); > > } > > > > +/* > > + * Clear the Present (P) bit (bit 0) of a scalable-mode PASID table entry. > > + * This initiates the transition of the entry's ownership from hardware > > + * to software. The caller is responsible for fulfilling the invalidation > > + * handshake recommended by the VT-d spec, Section 6.5.3.3 (Guidance to > > + * Software for Invalidations). > > + */ > > +static inline void pasid_clear_present(struct pasid_entry *pe) > > +{ > > + pasid_set_bits(&pe->val[0], 1 << 0, 0); > > + dma_wmb(); > > +} > > + > > /* > > * Setup Page Walk Snoop bit (Bit 87) of a scalable mode PASID > > * entry. 
> > diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c > > index 3e2255057079..eb069aefa4fa 100644 > > --- a/drivers/iommu/intel/pasid.c > > +++ b/drivers/iommu/intel/pasid.c > > @@ -272,7 +272,7 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev, > > > > did = pasid_get_domain_id(pte); > > pgtt = pasid_pte_get_pgtt(pte); > > - intel_pasid_clear_entry(dev, pasid, fault_ignore); > > + pasid_clear_present(pte); > > spin_unlock(&iommu->lock); > > > > if (!ecap_coherent(iommu->ecap)) > > @@ -286,6 +286,10 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev, > > iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH); > > > > devtlb_invalidation_with_pasid(iommu, dev, pasid); > > + intel_pasid_clear_entry(dev, pasid, fault_ignore); > > + if (!ecap_coherent(iommu->ecap)) > > + clflush_cache_range(pte, sizeof(*pte)); > > + > > if (!fault_ignore) > > intel_iommu_drain_pasid_prq(dev, pasid); > > } > > -- > > 2.43.0 Reviewed-by: Samiullah Khawaja <skhawaja@google.com> ^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [PATCH v2 1/3] iommu/vt-d: Clear Present bit before tearing down PASID entry 2026-01-20 6:18 ` [PATCH v2 1/3] iommu/vt-d: Clear Present bit before tearing down PASID entry Lu Baolu 2026-01-20 13:56 ` Dmytro Maluka @ 2026-01-21 6:16 ` Tian, Kevin 1 sibling, 0 replies; 17+ messages in thread From: Tian, Kevin @ 2026-01-21 6:16 UTC (permalink / raw) To: Lu Baolu, Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe Cc: Dmytro Maluka, Samiullah Khawaja, iommu@lists.linux.dev, linux-kernel@vger.kernel.org > From: Lu Baolu <baolu.lu@linux.intel.com> > Sent: Tuesday, January 20, 2026 2:18 PM > > The Intel VT-d Scalable Mode PASID table entry consists of 512 bits (64 > bytes). When tearing down an entry, the current implementation zeros the > entire 64-byte structure immediately using multiple 64-bit writes. > > Since the IOMMU hardware may fetch these 64 bytes using multiple > internal transactions (e.g., four 128-bit bursts), updating or zeroing > the entire entry while it is active (P=1) risks a "torn" read. If a > hardware fetch occurs simultaneously with the CPU zeroing the entry, the > hardware could observe an inconsistent state, leading to unpredictable > behavior or spurious faults. > > Follow the "Guidance to Software for Invalidations" in the VT-d spec > (Section 6.5.3.3) by implementing the recommended ownership handshake: > > 1. Clear only the 'Present' (P) bit of the PASID entry. > 2. Use a dma_wmb() to ensure the cleared bit is visible to hardware > before proceeding. > 3. Execute the required invalidation sequence (PASID cache, IOTLB, and > Device-TLB flush) to ensure the hardware has released all cached > references. > 4. Only after the flushes are complete, zero out the remaining fields > of the PASID entry. > > Also, add a dma_wmb() in pasid_set_present() to ensure that all other > fields of the PASID entry are visible to the hardware before the Present > bit is set. 
> > Fixes: 0bbeb01a4faf ("iommu/vt-d: Manage scalalble mode PASID tables") > Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 2/3] iommu/vt-d: Clear Present bit before tearing down context entry 2026-01-20 6:18 [PATCH v2 0/3] iommu/vt-d: Ensure atomicity in context and PASID entry updates Lu Baolu 2026-01-20 6:18 ` [PATCH v2 1/3] iommu/vt-d: Clear Present bit before tearing down PASID entry Lu Baolu @ 2026-01-20 6:18 ` Lu Baolu 2026-01-20 14:07 ` Dmytro Maluka ` (2 more replies) 2026-01-20 6:18 ` [PATCH v2 3/3] iommu/vt-d: Fix race condition during PASID entry replacement Lu Baolu 2026-01-20 13:56 ` [PATCH v2 0/3] iommu/vt-d: Ensure atomicity in context and PASID entry updates Jason Gunthorpe 3 siblings, 3 replies; 17+ messages in thread From: Lu Baolu @ 2026-01-20 6:18 UTC (permalink / raw) To: Joerg Roedel, Will Deacon, Robin Murphy, Kevin Tian, Jason Gunthorpe Cc: Dmytro Maluka, Samiullah Khawaja, iommu, linux-kernel, Lu Baolu When tearing down a context entry, the current implementation zeros the entire 128-bit entry using multiple 64-bit writes. This creates a window where the hardware can fetch a "torn" entry — where some fields are already zeroed while the 'Present' bit is still set — leading to unpredictable behavior or spurious faults. While x86 provides strong write ordering, the compiler may reorder writes to the two 64-bit halves of the context entry. Even without compiler reordering, the hardware fetch is not guaranteed to be atomic with respect to multiple CPU writes. Align with the "Guidance to Software for Invalidations" in the VT-d spec (Section 6.5.3.3) by implementing the recommended ownership handshake: 1. Clear only the 'Present' (P) bit of the context entry first to signal the transition of ownership from hardware to software. 2. Use dma_wmb() to ensure the cleared bit is visible to the IOMMU. 3. Perform the required cache and context-cache invalidation to ensure hardware no longer has cached references to the entry. 4. Fully zero out the entry only after the invalidation is complete. 
Also, add a dma_wmb() to context_set_present() to ensure the entry is fully initialized before the 'Present' bit becomes visible. Fixes: ba39592764ed2 ("Intel IOMMU: Intel IOMMU driver") Reported-by: Dmytro Maluka <dmaluka@chromium.org> Closes: https://lore.kernel.org/all/aTG7gc7I5wExai3S@google.com/ Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> --- drivers/iommu/intel/iommu.h | 21 ++++++++++++++++++++- drivers/iommu/intel/iommu.c | 4 +++- 2 files changed, 23 insertions(+), 2 deletions(-) diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h index 25c5e22096d4..599913fb65d5 100644 --- a/drivers/iommu/intel/iommu.h +++ b/drivers/iommu/intel/iommu.h @@ -900,7 +900,26 @@ static inline int pfn_level_offset(u64 pfn, int level) static inline void context_set_present(struct context_entry *context) { - context->lo |= 1; + u64 val; + + dma_wmb(); + val = READ_ONCE(context->lo) | 1; + WRITE_ONCE(context->lo, val); +} + +/* + * Clear the Present (P) bit (bit 0) of a context table entry. This initiates + * the transition of the entry's ownership from hardware to software. The + * caller is responsible for fulfilling the invalidation handshake recommended + * by the VT-d spec, Section 6.5.3.3 (Guidance to Software for Invalidations). 
+ */ +static inline void context_clear_present(struct context_entry *context) +{ + u64 val; + + val = READ_ONCE(context->lo) & GENMASK_ULL(63, 1); + WRITE_ONCE(context->lo, val); + dma_wmb(); } static inline void context_set_fault_enable(struct context_entry *context) diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c index 134302fbcd92..c66cc51f9e51 100644 --- a/drivers/iommu/intel/iommu.c +++ b/drivers/iommu/intel/iommu.c @@ -1240,10 +1240,12 @@ static void domain_context_clear_one(struct device_domain_info *info, u8 bus, u8 } did = context_domain_id(context); - context_clear_entry(context); + context_clear_present(context); __iommu_flush_cache(iommu, context, sizeof(*context)); spin_unlock(&iommu->lock); intel_context_flush_no_pasid(info, context, did); + context_clear_entry(context); + __iommu_flush_cache(iommu, context, sizeof(*context)); } int __domain_setup_first_level(struct intel_iommu *iommu, struct device *dev, -- 2.43.0 ^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH v2 2/3] iommu/vt-d: Clear Present bit before tearing down context entry 2026-01-20 6:18 ` [PATCH v2 2/3] iommu/vt-d: Clear Present bit before tearing down context entry Lu Baolu @ 2026-01-20 14:07 ` Dmytro Maluka 2026-01-20 18:22 ` Samiullah Khawaja 2026-01-21 6:23 ` Tian, Kevin 2 siblings, 0 replies; 17+ messages in thread From: Dmytro Maluka @ 2026-01-20 14:07 UTC (permalink / raw) To: Lu Baolu Cc: Joerg Roedel, Will Deacon, Robin Murphy, Kevin Tian, Jason Gunthorpe, Samiullah Khawaja, iommu, linux-kernel, Vineeth Pillai (Google), Aashish Sharma On Tue, Jan 20, 2026 at 02:18:13PM +0800, Lu Baolu wrote: > When tearing down a context entry, the current implementation zeros the > entire 128-bit entry using multiple 64-bit writes. This creates a window > where the hardware can fetch a "torn" entry — where some fields are > already zeroed while the 'Present' bit is still set — leading to > unpredictable behavior or spurious faults. > > While x86 provides strong write ordering, the compiler may reorder writes > to the two 64-bit halves of the context entry. Even without compiler > reordering, the hardware fetch is not guaranteed to be atomic with > respect to multiple CPU writes. > > Align with the "Guidance to Software for Invalidations" in the VT-d spec > (Section 6.5.3.3) by implementing the recommended ownership handshake: > > 1. Clear only the 'Present' (P) bit of the context entry first to > signal the transition of ownership from hardware to software. > 2. Use dma_wmb() to ensure the cleared bit is visible to the IOMMU. > 3. Perform the required cache and context-cache invalidation to ensure > hardware no longer has cached references to the entry. > 4. Fully zero out the entry only after the invalidation is complete. > > Also, add a dma_wmb() to context_set_present() to ensure the entry > is fully initialized before the 'Present' bit becomes visible. 
> > Fixes: ba39592764ed2 ("Intel IOMMU: Intel IOMMU driver") > Reported-by: Dmytro Maluka <dmaluka@chromium.org> > Closes: https://lore.kernel.org/all/aTG7gc7I5wExai3S@google.com/ > Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> > --- > drivers/iommu/intel/iommu.h | 21 ++++++++++++++++++++- > drivers/iommu/intel/iommu.c | 4 +++- > 2 files changed, 23 insertions(+), 2 deletions(-) > > diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h > index 25c5e22096d4..599913fb65d5 100644 > --- a/drivers/iommu/intel/iommu.h > +++ b/drivers/iommu/intel/iommu.h > @@ -900,7 +900,26 @@ static inline int pfn_level_offset(u64 pfn, int level) > > static inline void context_set_present(struct context_entry *context) > { > - context->lo |= 1; > + u64 val; > + > + dma_wmb(); > + val = READ_ONCE(context->lo) | 1; As IIRC Jason noted, READ_ONCE is not really necessary? > + WRITE_ONCE(context->lo, val); > +} > + > +/* > + * Clear the Present (P) bit (bit 0) of a context table entry. This initiates > + * the transition of the entry's ownership from hardware to software. The > + * caller is responsible for fulfilling the invalidation handshake recommended > + * by the VT-d spec, Section 6.5.3.3 (Guidance to Software for Invalidations). > + */ > +static inline void context_clear_present(struct context_entry *context) > +{ > + u64 val; > + > + val = READ_ONCE(context->lo) & GENMASK_ULL(63, 1); Maybe "& ~1ULL" would be a bit more readable? (and READ_ONCE not necessary here either?) 
Anyway, Reviewed-by: Dmytro Maluka <dmaluka@chromium.org> > + WRITE_ONCE(context->lo, val); > + dma_wmb(); > } > > static inline void context_set_fault_enable(struct context_entry *context) > diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c > index 134302fbcd92..c66cc51f9e51 100644 > --- a/drivers/iommu/intel/iommu.c > +++ b/drivers/iommu/intel/iommu.c > @@ -1240,10 +1240,12 @@ static void domain_context_clear_one(struct device_domain_info *info, u8 bus, u8 > } > > did = context_domain_id(context); > - context_clear_entry(context); > + context_clear_present(context); > __iommu_flush_cache(iommu, context, sizeof(*context)); > spin_unlock(&iommu->lock); > intel_context_flush_no_pasid(info, context, did); > + context_clear_entry(context); > + __iommu_flush_cache(iommu, context, sizeof(*context)); > } > > int __domain_setup_first_level(struct intel_iommu *iommu, struct device *dev, > -- > 2.43.0 > ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v2 2/3] iommu/vt-d: Clear Present bit before tearing down context entry 2026-01-20 6:18 ` [PATCH v2 2/3] iommu/vt-d: Clear Present bit before tearing down context entry Lu Baolu 2026-01-20 14:07 ` Dmytro Maluka @ 2026-01-20 18:22 ` Samiullah Khawaja 2026-01-21 6:23 ` Tian, Kevin 2 siblings, 0 replies; 17+ messages in thread From: Samiullah Khawaja @ 2026-01-20 18:22 UTC (permalink / raw) To: Lu Baolu Cc: Joerg Roedel, Will Deacon, Robin Murphy, Kevin Tian, Jason Gunthorpe, Dmytro Maluka, iommu, linux-kernel On Mon, Jan 19, 2026 at 10:20 PM Lu Baolu <baolu.lu@linux.intel.com> wrote: > > When tearing down a context entry, the current implementation zeros the > entire 128-bit entry using multiple 64-bit writes. This creates a window > where the hardware can fetch a "torn" entry — where some fields are > already zeroed while the 'Present' bit is still set — leading to > unpredictable behavior or spurious faults. > > While x86 provides strong write ordering, the compiler may reorder writes > to the two 64-bit halves of the context entry. Even without compiler > reordering, the hardware fetch is not guaranteed to be atomic with > respect to multiple CPU writes. > > Align with the "Guidance to Software for Invalidations" in the VT-d spec > (Section 6.5.3.3) by implementing the recommended ownership handshake: > > 1. Clear only the 'Present' (P) bit of the context entry first to > signal the transition of ownership from hardware to software. > 2. Use dma_wmb() to ensure the cleared bit is visible to the IOMMU. > 3. Perform the required cache and context-cache invalidation to ensure > hardware no longer has cached references to the entry. > 4. Fully zero out the entry only after the invalidation is complete. > > Also, add a dma_wmb() to context_set_present() to ensure the entry > is fully initialized before the 'Present' bit becomes visible. 
> > Fixes: ba39592764ed2 ("Intel IOMMU: Intel IOMMU driver") > Reported-by: Dmytro Maluka <dmaluka@chromium.org> > Closes: https://lore.kernel.org/all/aTG7gc7I5wExai3S@google.com/ > Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> > --- > drivers/iommu/intel/iommu.h | 21 ++++++++++++++++++++- > drivers/iommu/intel/iommu.c | 4 +++- > 2 files changed, 23 insertions(+), 2 deletions(-) > > diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h > index 25c5e22096d4..599913fb65d5 100644 > --- a/drivers/iommu/intel/iommu.h > +++ b/drivers/iommu/intel/iommu.h > @@ -900,7 +900,26 @@ static inline int pfn_level_offset(u64 pfn, int level) > > static inline void context_set_present(struct context_entry *context) > { > - context->lo |= 1; > + u64 val; > + > + dma_wmb(); > + val = READ_ONCE(context->lo) | 1; > + WRITE_ONCE(context->lo, val); > +} > + > +/* > + * Clear the Present (P) bit (bit 0) of a context table entry. This initiates > + * the transition of the entry's ownership from hardware to software. The > + * caller is responsible for fulfilling the invalidation handshake recommended > + * by the VT-d spec, Section 6.5.3.3 (Guidance to Software for Invalidations). 
> + */ > +static inline void context_clear_present(struct context_entry *context) > +{ > + u64 val; > + > + val = READ_ONCE(context->lo) & GENMASK_ULL(63, 1); > + WRITE_ONCE(context->lo, val); > + dma_wmb(); > } > > static inline void context_set_fault_enable(struct context_entry *context) > diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c > index 134302fbcd92..c66cc51f9e51 100644 > --- a/drivers/iommu/intel/iommu.c > +++ b/drivers/iommu/intel/iommu.c > @@ -1240,10 +1240,12 @@ static void domain_context_clear_one(struct device_domain_info *info, u8 bus, u8 > } > > did = context_domain_id(context); > - context_clear_entry(context); > + context_clear_present(context); > __iommu_flush_cache(iommu, context, sizeof(*context)); > spin_unlock(&iommu->lock); > intel_context_flush_no_pasid(info, context, did); > + context_clear_entry(context); > + __iommu_flush_cache(iommu, context, sizeof(*context)); > } > > int __domain_setup_first_level(struct intel_iommu *iommu, struct device *dev, > -- > 2.43.0 > Reviewed-by: Samiullah Khawaja <skhawaja@google.com> ^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [PATCH v2 2/3] iommu/vt-d: Clear Present bit before tearing down context entry 2026-01-20 6:18 ` [PATCH v2 2/3] iommu/vt-d: Clear Present bit before tearing down context entry Lu Baolu 2026-01-20 14:07 ` Dmytro Maluka 2026-01-20 18:22 ` Samiullah Khawaja @ 2026-01-21 6:23 ` Tian, Kevin 2026-01-21 7:28 ` Baolu Lu 2 siblings, 1 reply; 17+ messages in thread From: Tian, Kevin @ 2026-01-21 6:23 UTC (permalink / raw) To: Lu Baolu, Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe Cc: Dmytro Maluka, Samiullah Khawaja, iommu@lists.linux.dev, linux-kernel@vger.kernel.org > From: Lu Baolu <baolu.lu@linux.intel.com> > Sent: Tuesday, January 20, 2026 2:18 PM > > When tearing down a context entry, the current implementation zeros the > entire 128-bit entry using multiple 64-bit writes. This creates a window > where the hardware can fetch a "torn" entry — where some fields are > already zeroed while the 'Present' bit is still set — leading to > unpredictable behavior or spurious faults. > > While x86 provides strong write ordering, the compiler may reorder writes > to the two 64-bit halves of the context entry. Even without compiler > reordering, the hardware fetch is not guaranteed to be atomic with > respect to multiple CPU writes. > > Align with the "Guidance to Software for Invalidations" in the VT-d spec > (Section 6.5.3.3) by implementing the recommended ownership handshake: > > 1. Clear only the 'Present' (P) bit of the context entry first to > signal the transition of ownership from hardware to software. > 2. Use dma_wmb() to ensure the cleared bit is visible to the IOMMU. > 3. Perform the required cache and context-cache invalidation to ensure > hardware no longer has cached references to the entry. > 4. Fully zero out the entry only after the invalidation is complete. > > Also, add a dma_wmb() to context_set_present() to ensure the entry > is fully initialized before the 'Present' bit becomes visible. 
> > Fixes: ba39592764ed2 ("Intel IOMMU: Intel IOMMU driver")
> Reported-by: Dmytro Maluka <dmaluka@chromium.org>
> Closes: https://lore.kernel.org/all/aTG7gc7I5wExai3S@google.com/
> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

btw there is a context_clear_entry() for a copied context entry in device_pasid_table_setup(), but this patch doesn't touch that path. It seems to assume that no in-flight DMA will exist at that point:

	if (context_copied(iommu, bus, devfn)) {
		context_clear_entry(context);
		...
		/*
		 * At this point, the device is supposed to finish reset at
		 * its driver probe stage, so no in-flight DMA will exist,
		 * and we don't need to worry anymore hereafter.
		 */
		clear_context_copied(iommu, bus, devfn);

Is that guaranteed by all devices? From a kdump feature p.o.v., if that assumption is broken it just means potential DMA errors in this transition window. But regarding the issue which this patch tries to fix, in-flight DMAs may lead to undesired behaviors including memory corruption etc.

So, should it be fixed too?

^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v2 2/3] iommu/vt-d: Clear Present bit before tearing down context entry 2026-01-21 6:23 ` Tian, Kevin @ 2026-01-21 7:28 ` Baolu Lu 2026-01-21 7:50 ` Tian, Kevin 0 siblings, 1 reply; 17+ messages in thread From: Baolu Lu @ 2026-01-21 7:28 UTC (permalink / raw) To: Tian, Kevin, Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe Cc: Dmytro Maluka, Samiullah Khawaja, iommu@lists.linux.dev, linux-kernel@vger.kernel.org On 1/21/26 14:23, Tian, Kevin wrote: >> From: Lu Baolu <baolu.lu@linux.intel.com> >> Sent: Tuesday, January 20, 2026 2:18 PM >> >> When tearing down a context entry, the current implementation zeros the >> entire 128-bit entry using multiple 64-bit writes. This creates a window >> where the hardware can fetch a "torn" entry — where some fields are >> already zeroed while the 'Present' bit is still set — leading to >> unpredictable behavior or spurious faults. >> >> While x86 provides strong write ordering, the compiler may reorder writes >> to the two 64-bit halves of the context entry. Even without compiler >> reordering, the hardware fetch is not guaranteed to be atomic with >> respect to multiple CPU writes. >> >> Align with the "Guidance to Software for Invalidations" in the VT-d spec >> (Section 6.5.3.3) by implementing the recommended ownership handshake: >> >> 1. Clear only the 'Present' (P) bit of the context entry first to >> signal the transition of ownership from hardware to software. >> 2. Use dma_wmb() to ensure the cleared bit is visible to the IOMMU. >> 3. Perform the required cache and context-cache invalidation to ensure >> hardware no longer has cached references to the entry. >> 4. Fully zero out the entry only after the invalidation is complete. >> >> Also, add a dma_wmb() to context_set_present() to ensure the entry >> is fully initialized before the 'Present' bit becomes visible. 
>> >> Fixes: ba39592764ed2 ("Intel IOMMU: Intel IOMMU driver") >> Reported-by: Dmytro Maluka <dmaluka@chromium.org> >> Closes: https://lore.kernel.org/all/aTG7gc7I5wExai3S@google.com/ >> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> > > Reviewed-by: Kevin Tian <kevin.tian@intel.com> > > btw there is a context_clear_entry() for copied context entry in > device_pasid_table_setup(), but this patch doesn't touch that > path. It seems to assume that no in-flight DMA will exist at that > point: > > if (context_copied(iommu, bus, devfn)) { > context_clear_entry(context); > ... > /* > * At this point, the device is supposed to finish reset at > * its driver probe stage, so no in-flight DMA will exist, > * and we don't need to worry anymore hereafter. > */ > clear_context_copied(iommu, bus, devfn); > > Is that guaranteed by all devices? from kdump feature p.o.v. if > that assumption is broken it just means potential DMA errors > in this transition window. But regarding to the issue which this > patch tries to fix, in-fly DMAs may lead to undesired behaviors > including memory corruption etc. > > So, should it be fixed too? This path is triggered when the device driver has probed the device (ensuring it has been reset) and then calls the kernel DMA API for the first time. At this stage, there should be no in-flight DMAs. We can apply the same logic here to improve code readability, but this is not a bug that requires a fix. Or not? Thanks, baolu ^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [PATCH v2 2/3] iommu/vt-d: Clear Present bit before tearing down context entry 2026-01-21 7:28 ` Baolu Lu @ 2026-01-21 7:50 ` Tian, Kevin 2026-01-21 8:04 ` Baolu Lu 0 siblings, 1 reply; 17+ messages in thread From: Tian, Kevin @ 2026-01-21 7:50 UTC (permalink / raw) To: Baolu Lu, Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe Cc: Dmytro Maluka, Samiullah Khawaja, iommu@lists.linux.dev, linux-kernel@vger.kernel.org > From: Baolu Lu <baolu.lu@linux.intel.com> > Sent: Wednesday, January 21, 2026 3:29 PM > > On 1/21/26 14:23, Tian, Kevin wrote: > >> From: Lu Baolu <baolu.lu@linux.intel.com> > >> Sent: Tuesday, January 20, 2026 2:18 PM > >> > >> When tearing down a context entry, the current implementation zeros > the > >> entire 128-bit entry using multiple 64-bit writes. This creates a window > >> where the hardware can fetch a "torn" entry — where some fields are > >> already zeroed while the 'Present' bit is still set — leading to > >> unpredictable behavior or spurious faults. > >> > >> While x86 provides strong write ordering, the compiler may reorder > writes > >> to the two 64-bit halves of the context entry. Even without compiler > >> reordering, the hardware fetch is not guaranteed to be atomic with > >> respect to multiple CPU writes. > >> > >> Align with the "Guidance to Software for Invalidations" in the VT-d spec > >> (Section 6.5.3.3) by implementing the recommended ownership > handshake: > >> > >> 1. Clear only the 'Present' (P) bit of the context entry first to > >> signal the transition of ownership from hardware to software. > >> 2. Use dma_wmb() to ensure the cleared bit is visible to the IOMMU. > >> 3. Perform the required cache and context-cache invalidation to ensure > >> hardware no longer has cached references to the entry. > >> 4. Fully zero out the entry only after the invalidation is complete. 
> >> > >> Also, add a dma_wmb() to context_set_present() to ensure the entry > >> is fully initialized before the 'Present' bit becomes visible. > >> > >> Fixes: ba39592764ed2 ("Intel IOMMU: Intel IOMMU driver") > >> Reported-by: Dmytro Maluka <dmaluka@chromium.org> > >> Closes: https://lore.kernel.org/all/aTG7gc7I5wExai3S@google.com/ > >> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> > > > > Reviewed-by: Kevin Tian <kevin.tian@intel.com> > > > > btw there is a context_clear_entry() for copied context entry in > > device_pasid_table_setup(), but this patch doesn't touch that > > path. It seems to assume that no in-flight DMA will exist at that > > point: > > > > if (context_copied(iommu, bus, devfn)) { > > context_clear_entry(context); > > ... > > /* > > * At this point, the device is supposed to finish reset at > > * its driver probe stage, so no in-flight DMA will exist, > > * and we don't need to worry anymore hereafter. > > */ > > clear_context_copied(iommu, bus, devfn); > > > > Is that guaranteed by all devices? from kdump feature p.o.v. if > > that assumption is broken it just means potential DMA errors > > in this transition window. But regarding to the issue which this > > patch tries to fix, in-fly DMAs may lead to undesired behaviors > > including memory corruption etc. > > > > So, should it be fixed too? > > This path is triggered when the device driver has probed the device > (ensuring it has been reset) and then calls the kernel DMA API for the > first time. At this stage, there should be no in-flight DMAs. We can > apply the same logic here to improve code readability, but this is not a > bug that requires a fix. Or not? > device could be in whatever state when kdump is triggered. I'm not sure whether all device drivers will reset the device at probe time. Just thought that applying the same due diligence here could prevent any undesired damage just in case. 
Not exactly for backporting, but it's always good to have consistent logic to avoid special case based on subtle assumptions... ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v2 2/3] iommu/vt-d: Clear Present bit before tearing down context entry 2026-01-21 7:50 ` Tian, Kevin @ 2026-01-21 8:04 ` Baolu Lu 2026-01-21 8:12 ` Tian, Kevin 0 siblings, 1 reply; 17+ messages in thread From: Baolu Lu @ 2026-01-21 8:04 UTC (permalink / raw) To: Tian, Kevin, Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe Cc: Dmytro Maluka, Samiullah Khawaja, iommu@lists.linux.dev, linux-kernel@vger.kernel.org On 1/21/26 15:50, Tian, Kevin wrote: >> From: Baolu Lu <baolu.lu@linux.intel.com> >> Sent: Wednesday, January 21, 2026 3:29 PM >> >> On 1/21/26 14:23, Tian, Kevin wrote: >>>> From: Lu Baolu <baolu.lu@linux.intel.com> >>>> Sent: Tuesday, January 20, 2026 2:18 PM >>>> >>>> When tearing down a context entry, the current implementation zeros >> the >>>> entire 128-bit entry using multiple 64-bit writes. This creates a window >>>> where the hardware can fetch a "torn" entry — where some fields are >>>> already zeroed while the 'Present' bit is still set — leading to >>>> unpredictable behavior or spurious faults. >>>> >>>> While x86 provides strong write ordering, the compiler may reorder >> writes >>>> to the two 64-bit halves of the context entry. Even without compiler >>>> reordering, the hardware fetch is not guaranteed to be atomic with >>>> respect to multiple CPU writes. >>>> >>>> Align with the "Guidance to Software for Invalidations" in the VT-d spec >>>> (Section 6.5.3.3) by implementing the recommended ownership >> handshake: >>>> >>>> 1. Clear only the 'Present' (P) bit of the context entry first to >>>> signal the transition of ownership from hardware to software. >>>> 2. Use dma_wmb() to ensure the cleared bit is visible to the IOMMU. >>>> 3. Perform the required cache and context-cache invalidation to ensure >>>> hardware no longer has cached references to the entry. >>>> 4. Fully zero out the entry only after the invalidation is complete. 
>>>> >>>> Also, add a dma_wmb() to context_set_present() to ensure the entry >>>> is fully initialized before the 'Present' bit becomes visible. >>>> >>>> Fixes: ba39592764ed2 ("Intel IOMMU: Intel IOMMU driver") >>>> Reported-by: Dmytro Maluka <dmaluka@chromium.org> >>>> Closes: https://lore.kernel.org/all/aTG7gc7I5wExai3S@google.com/ >>>> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> >>> >>> Reviewed-by: Kevin Tian <kevin.tian@intel.com> >>> >>> btw there is a context_clear_entry() for copied context entry in >>> device_pasid_table_setup(), but this patch doesn't touch that >>> path. It seems to assume that no in-flight DMA will exist at that >>> point: >>> >>> if (context_copied(iommu, bus, devfn)) { >>> context_clear_entry(context); >>> ... >>> /* >>> * At this point, the device is supposed to finish reset at >>> * its driver probe stage, so no in-flight DMA will exist, >>> * and we don't need to worry anymore hereafter. >>> */ >>> clear_context_copied(iommu, bus, devfn); >>> >>> Is that guaranteed by all devices? from kdump feature p.o.v. if >>> that assumption is broken it just means potential DMA errors >>> in this transition window. But regarding to the issue which this >>> patch tries to fix, in-fly DMAs may lead to undesired behaviors >>> including memory corruption etc. >>> >>> So, should it be fixed too? >> >> This path is triggered when the device driver has probed the device >> (ensuring it has been reset) and then calls the kernel DMA API for the >> first time. At this stage, there should be no in-flight DMAs. We can >> apply the same logic here to improve code readability, but this is not a >> bug that requires a fix. Or not? >> > > device could be in whatever state when kdump is triggered. I'm not > sure whether all device drivers will reset the device at probe time. Okay, agreed. > Just thought that applying the same due diligence here could prevent > any undesired damage just in case. 
Not exactly for backporting, but > it's always good to have consistent logic to avoid special case based > on subtle assumptions... So I will add the following additional change: diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c index 34f4af4e9b5c..b63a71904cfb 100644 --- a/drivers/iommu/intel/pasid.c +++ b/drivers/iommu/intel/pasid.c @@ -840,7 +840,7 @@ static int device_pasid_table_setup(struct device *dev, u8 bus, u8 devfn) } if (context_copied(iommu, bus, devfn)) { - context_clear_entry(context); + context_clear_present(context); __iommu_flush_cache(iommu, context, sizeof(*context)); /* @@ -860,6 +860,9 @@ static int device_pasid_table_setup(struct device *dev, u8 bus, u8 devfn) iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH); devtlb_invalidation_with_pasid(iommu, dev, IOMMU_NO_PASID); + context_clear_entry(context); + __iommu_flush_cache(iommu, context, sizeof(*context)); + /* * At this point, the device is supposed to finish reset at * its driver probe stage, so no in-flight DMA will exist, Appears good to you? Thanks, baolu ^ permalink raw reply related [flat|nested] 17+ messages in thread
* RE: [PATCH v2 2/3] iommu/vt-d: Clear Present bit before tearing down context entry 2026-01-21 8:04 ` Baolu Lu @ 2026-01-21 8:12 ` Tian, Kevin 0 siblings, 0 replies; 17+ messages in thread From: Tian, Kevin @ 2026-01-21 8:12 UTC (permalink / raw) To: Baolu Lu, Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe Cc: Dmytro Maluka, Samiullah Khawaja, iommu@lists.linux.dev, linux-kernel@vger.kernel.org > From: Baolu Lu <baolu.lu@linux.intel.com> > Sent: Wednesday, January 21, 2026 4:04 PM > > On 1/21/26 15:50, Tian, Kevin wrote: > >> From: Baolu Lu <baolu.lu@linux.intel.com> > >> Sent: Wednesday, January 21, 2026 3:29 PM > >> > >> On 1/21/26 14:23, Tian, Kevin wrote: > >>>> From: Lu Baolu <baolu.lu@linux.intel.com> > >>>> Sent: Tuesday, January 20, 2026 2:18 PM > >>>> > >>>> When tearing down a context entry, the current implementation zeros > >> the > >>>> entire 128-bit entry using multiple 64-bit writes. This creates a window > >>>> where the hardware can fetch a "torn" entry — where some fields are > >>>> already zeroed while the 'Present' bit is still set — leading to > >>>> unpredictable behavior or spurious faults. > >>>> > >>>> While x86 provides strong write ordering, the compiler may reorder > >> writes > >>>> to the two 64-bit halves of the context entry. Even without compiler > >>>> reordering, the hardware fetch is not guaranteed to be atomic with > >>>> respect to multiple CPU writes. > >>>> > >>>> Align with the "Guidance to Software for Invalidations" in the VT-d spec > >>>> (Section 6.5.3.3) by implementing the recommended ownership > >> handshake: > >>>> > >>>> 1. Clear only the 'Present' (P) bit of the context entry first to > >>>> signal the transition of ownership from hardware to software. > >>>> 2. Use dma_wmb() to ensure the cleared bit is visible to the IOMMU. > >>>> 3. Perform the required cache and context-cache invalidation to ensure > >>>> hardware no longer has cached references to the entry. > >>>> 4. 
Fully zero out the entry only after the invalidation is complete. > >>>> > >>>> Also, add a dma_wmb() to context_set_present() to ensure the entry > >>>> is fully initialized before the 'Present' bit becomes visible. > >>>> > >>>> Fixes: ba39592764ed2 ("Intel IOMMU: Intel IOMMU driver") > >>>> Reported-by: Dmytro Maluka <dmaluka@chromium.org> > >>>> Closes: https://lore.kernel.org/all/aTG7gc7I5wExai3S@google.com/ > >>>> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> > >>> > >>> Reviewed-by: Kevin Tian <kevin.tian@intel.com> > >>> > >>> btw there is a context_clear_entry() for copied context entry in > >>> device_pasid_table_setup(), but this patch doesn't touch that > >>> path. It seems to assume that no in-flight DMA will exist at that > >>> point: > >>> > >>> if (context_copied(iommu, bus, devfn)) { > >>> context_clear_entry(context); > >>> ... > >>> /* > >>> * At this point, the device is supposed to finish reset at > >>> * its driver probe stage, so no in-flight DMA will exist, > >>> * and we don't need to worry anymore hereafter. > >>> */ > >>> clear_context_copied(iommu, bus, devfn); > >>> > >>> Is that guaranteed by all devices? from kdump feature p.o.v. if > >>> that assumption is broken it just means potential DMA errors > >>> in this transition window. But regarding to the issue which this > >>> patch tries to fix, in-fly DMAs may lead to undesired behaviors > >>> including memory corruption etc. > >>> > >>> So, should it be fixed too? > >> > >> This path is triggered when the device driver has probed the device > >> (ensuring it has been reset) and then calls the kernel DMA API for the > >> first time. At this stage, there should be no in-flight DMAs. We can > >> apply the same logic here to improve code readability, but this is not a > >> bug that requires a fix. Or not? > >> > > > > device could be in whatever state when kdump is triggered. I'm not > > sure whether all device drivers will reset the device at probe time. > > Okay, agreed. 
> > > Just thought that applying the same due diligence here could prevent > > any undesired damage just in case. Not exactly for backporting, but > > it's always good to have consistent logic to avoid special case based > > on subtle assumptions... > > So I will add the following additional change: > > diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c > index 34f4af4e9b5c..b63a71904cfb 100644 > --- a/drivers/iommu/intel/pasid.c > +++ b/drivers/iommu/intel/pasid.c > @@ -840,7 +840,7 @@ static int device_pasid_table_setup(struct device > *dev, u8 bus, u8 devfn) > } > > if (context_copied(iommu, bus, devfn)) { > - context_clear_entry(context); > + context_clear_present(context); > __iommu_flush_cache(iommu, context, sizeof(*context)); > > /* > @@ -860,6 +860,9 @@ static int device_pasid_table_setup(struct device > *dev, u8 bus, u8 devfn) > iommu->flush.flush_iotlb(iommu, 0, 0, 0, > DMA_TLB_GLOBAL_FLUSH); > devtlb_invalidation_with_pasid(iommu, dev, IOMMU_NO_PASID); > > + context_clear_entry(context); > + __iommu_flush_cache(iommu, context, sizeof(*context)); > + > /* > * At this point, the device is supposed to finish reset at > * its driver probe stage, so no in-flight DMA will exist, > > Appears good to you? > yes ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 3/3] iommu/vt-d: Fix race condition during PASID entry replacement 2026-01-20 6:18 [PATCH v2 0/3] iommu/vt-d: Ensure atomicity in context and PASID entry updates Lu Baolu 2026-01-20 6:18 ` [PATCH v2 1/3] iommu/vt-d: Clear Present bit before tearing down PASID entry Lu Baolu 2026-01-20 6:18 ` [PATCH v2 2/3] iommu/vt-d: Clear Present bit before tearing down context entry Lu Baolu @ 2026-01-20 6:18 ` Lu Baolu 2026-01-20 18:54 ` Samiullah Khawaja 2026-01-21 6:23 ` Tian, Kevin 2026-01-20 13:56 ` [PATCH v2 0/3] iommu/vt-d: Ensure atomicity in context and PASID entry updates Jason Gunthorpe 3 siblings, 2 replies; 17+ messages in thread From: Lu Baolu @ 2026-01-20 6:18 UTC (permalink / raw) To: Joerg Roedel, Will Deacon, Robin Murphy, Kevin Tian, Jason Gunthorpe Cc: Dmytro Maluka, Samiullah Khawaja, iommu, linux-kernel, Lu Baolu The Intel VT-d PASID table entry is 512 bits (64 bytes). When replacing an active PASID entry (e.g., during domain replacement), the current implementation calculates a new entry on the stack and copies it to the table using a single structure assignment. struct pasid_entry *pte, new_pte; pte = intel_pasid_get_entry(dev, pasid); pasid_pte_config_first_level(iommu, &new_pte, ...); *pte = new_pte; Because the hardware may fetch the 512-bit PASID entry in multiple 128-bit chunks, updating the entire entry while it is active (Present bit set) risks a "torn" read. In this scenario, the IOMMU hardware could observe an inconsistent state — partially new data and partially old data — leading to unpredictable behavior or spurious faults. Fix this by removing the unsafe "replace" helpers and following the "clear-then-update" flow, which ensures the Present bit is cleared and the required invalidation handshake is completed before the new configuration is applied. 
Fixes: 7543ee63e811 ("iommu/vt-d: Add pasid replace helpers") Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> --- drivers/iommu/intel/pasid.h | 14 --- drivers/iommu/intel/iommu.c | 29 +++--- drivers/iommu/intel/nested.c | 9 +- drivers/iommu/intel/pasid.c | 184 ----------------------------------- 4 files changed, 16 insertions(+), 220 deletions(-) diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h index 0b303bd0b0c1..c3c8c907983e 100644 --- a/drivers/iommu/intel/pasid.h +++ b/drivers/iommu/intel/pasid.h @@ -316,20 +316,6 @@ int intel_pasid_setup_pass_through(struct intel_iommu *iommu, struct device *dev, u32 pasid); int intel_pasid_setup_nested(struct intel_iommu *iommu, struct device *dev, u32 pasid, struct dmar_domain *domain); -int intel_pasid_replace_first_level(struct intel_iommu *iommu, - struct device *dev, phys_addr_t fsptptr, - u32 pasid, u16 did, u16 old_did, int flags); -int intel_pasid_replace_second_level(struct intel_iommu *iommu, - struct dmar_domain *domain, - struct device *dev, u16 old_did, - u32 pasid); -int intel_pasid_replace_pass_through(struct intel_iommu *iommu, - struct device *dev, u16 old_did, - u32 pasid); -int intel_pasid_replace_nested(struct intel_iommu *iommu, - struct device *dev, u32 pasid, - u16 old_did, struct dmar_domain *domain); - void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev, u32 pasid, bool fault_ignore); diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c index c66cc51f9e51..705828b06e32 100644 --- a/drivers/iommu/intel/iommu.c +++ b/drivers/iommu/intel/iommu.c @@ -1252,12 +1252,10 @@ int __domain_setup_first_level(struct intel_iommu *iommu, struct device *dev, ioasid_t pasid, u16 did, phys_addr_t fsptptr, int flags, struct iommu_domain *old) { - if (!old) - return intel_pasid_setup_first_level(iommu, dev, fsptptr, pasid, - did, flags); - return intel_pasid_replace_first_level(iommu, dev, fsptptr, pasid, did, - iommu_domain_did(old, iommu), - flags); 
+ if (old) + intel_pasid_tear_down_entry(iommu, dev, pasid, false); + + return intel_pasid_setup_first_level(iommu, dev, fsptptr, pasid, did, flags); } static int domain_setup_second_level(struct intel_iommu *iommu, @@ -1265,23 +1263,20 @@ static int domain_setup_second_level(struct intel_iommu *iommu, struct device *dev, ioasid_t pasid, struct iommu_domain *old) { - if (!old) - return intel_pasid_setup_second_level(iommu, domain, - dev, pasid); - return intel_pasid_replace_second_level(iommu, domain, dev, - iommu_domain_did(old, iommu), - pasid); + if (old) + intel_pasid_tear_down_entry(iommu, dev, pasid, false); + + return intel_pasid_setup_second_level(iommu, domain, dev, pasid); } static int domain_setup_passthrough(struct intel_iommu *iommu, struct device *dev, ioasid_t pasid, struct iommu_domain *old) { - if (!old) - return intel_pasid_setup_pass_through(iommu, dev, pasid); - return intel_pasid_replace_pass_through(iommu, dev, - iommu_domain_did(old, iommu), - pasid); + if (old) + intel_pasid_tear_down_entry(iommu, dev, pasid, false); + + return intel_pasid_setup_pass_through(iommu, dev, pasid); } static int domain_setup_first_level(struct intel_iommu *iommu, diff --git a/drivers/iommu/intel/nested.c b/drivers/iommu/intel/nested.c index a3fb8c193ca6..e9a440e9c960 100644 --- a/drivers/iommu/intel/nested.c +++ b/drivers/iommu/intel/nested.c @@ -136,11 +136,10 @@ static int domain_setup_nested(struct intel_iommu *iommu, struct device *dev, ioasid_t pasid, struct iommu_domain *old) { - if (!old) - return intel_pasid_setup_nested(iommu, dev, pasid, domain); - return intel_pasid_replace_nested(iommu, dev, pasid, - iommu_domain_did(old, iommu), - domain); + if (old) + intel_pasid_tear_down_entry(iommu, dev, pasid, false); + + return intel_pasid_setup_nested(iommu, dev, pasid, domain); } static int intel_nested_set_dev_pasid(struct iommu_domain *domain, diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c index eb069aefa4fa..4b880b9ad49d 100644 --- 
a/drivers/iommu/intel/pasid.c +++ b/drivers/iommu/intel/pasid.c @@ -416,50 +416,6 @@ int intel_pasid_setup_first_level(struct intel_iommu *iommu, struct device *dev, return 0; } -int intel_pasid_replace_first_level(struct intel_iommu *iommu, - struct device *dev, phys_addr_t fsptptr, - u32 pasid, u16 did, u16 old_did, - int flags) -{ - struct pasid_entry *pte, new_pte; - - if (!ecap_flts(iommu->ecap)) { - pr_err("No first level translation support on %s\n", - iommu->name); - return -EINVAL; - } - - if ((flags & PASID_FLAG_FL5LP) && !cap_fl5lp_support(iommu->cap)) { - pr_err("No 5-level paging support for first-level on %s\n", - iommu->name); - return -EINVAL; - } - - pasid_pte_config_first_level(iommu, &new_pte, fsptptr, did, flags); - - spin_lock(&iommu->lock); - pte = intel_pasid_get_entry(dev, pasid); - if (!pte) { - spin_unlock(&iommu->lock); - return -ENODEV; - } - - if (!pasid_pte_is_present(pte)) { - spin_unlock(&iommu->lock); - return -EINVAL; - } - - WARN_ON(old_did != pasid_get_domain_id(pte)); - - *pte = new_pte; - spin_unlock(&iommu->lock); - - intel_pasid_flush_present(iommu, dev, pasid, old_did, pte); - intel_iommu_drain_pasid_prq(dev, pasid); - - return 0; -} - /* * Set up the scalable mode pasid entry for second only translation type. */ @@ -526,51 +482,6 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu, return 0; } -int intel_pasid_replace_second_level(struct intel_iommu *iommu, - struct dmar_domain *domain, - struct device *dev, u16 old_did, - u32 pasid) -{ - struct pasid_entry *pte, new_pte; - u16 did; - - /* - * If hardware advertises no support for second level - * translation, return directly. 
- */ - if (!ecap_slts(iommu->ecap)) { - pr_err("No second level translation support on %s\n", - iommu->name); - return -EINVAL; - } - - did = domain_id_iommu(domain, iommu); - - pasid_pte_config_second_level(iommu, &new_pte, domain, did); - - spin_lock(&iommu->lock); - pte = intel_pasid_get_entry(dev, pasid); - if (!pte) { - spin_unlock(&iommu->lock); - return -ENODEV; - } - - if (!pasid_pte_is_present(pte)) { - spin_unlock(&iommu->lock); - return -EINVAL; - } - - WARN_ON(old_did != pasid_get_domain_id(pte)); - - *pte = new_pte; - spin_unlock(&iommu->lock); - - intel_pasid_flush_present(iommu, dev, pasid, old_did, pte); - intel_iommu_drain_pasid_prq(dev, pasid); - - return 0; -} - /* * Set up dirty tracking on a second only or nested translation type. */ @@ -683,38 +594,6 @@ int intel_pasid_setup_pass_through(struct intel_iommu *iommu, return 0; } -int intel_pasid_replace_pass_through(struct intel_iommu *iommu, - struct device *dev, u16 old_did, - u32 pasid) -{ - struct pasid_entry *pte, new_pte; - u16 did = FLPT_DEFAULT_DID; - - pasid_pte_config_pass_through(iommu, &new_pte, did); - - spin_lock(&iommu->lock); - pte = intel_pasid_get_entry(dev, pasid); - if (!pte) { - spin_unlock(&iommu->lock); - return -ENODEV; - } - - if (!pasid_pte_is_present(pte)) { - spin_unlock(&iommu->lock); - return -EINVAL; - } - - WARN_ON(old_did != pasid_get_domain_id(pte)); - - *pte = new_pte; - spin_unlock(&iommu->lock); - - intel_pasid_flush_present(iommu, dev, pasid, old_did, pte); - intel_iommu_drain_pasid_prq(dev, pasid); - - return 0; -} - /* * Set the page snoop control for a pasid entry which has been set up. 
*/ @@ -848,69 +727,6 @@ int intel_pasid_setup_nested(struct intel_iommu *iommu, struct device *dev, return 0; } -int intel_pasid_replace_nested(struct intel_iommu *iommu, - struct device *dev, u32 pasid, - u16 old_did, struct dmar_domain *domain) -{ - struct iommu_hwpt_vtd_s1 *s1_cfg = &domain->s1_cfg; - struct dmar_domain *s2_domain = domain->s2_domain; - u16 did = domain_id_iommu(domain, iommu); - struct pasid_entry *pte, new_pte; - - /* Address width should match the address width supported by hardware */ - switch (s1_cfg->addr_width) { - case ADDR_WIDTH_4LEVEL: - break; - case ADDR_WIDTH_5LEVEL: - if (!cap_fl5lp_support(iommu->cap)) { - dev_err_ratelimited(dev, - "5-level paging not supported\n"); - return -EINVAL; - } - break; - default: - dev_err_ratelimited(dev, "Invalid stage-1 address width %d\n", - s1_cfg->addr_width); - return -EINVAL; - } - - if ((s1_cfg->flags & IOMMU_VTD_S1_SRE) && !ecap_srs(iommu->ecap)) { - pr_err_ratelimited("No supervisor request support on %s\n", - iommu->name); - return -EINVAL; - } - - if ((s1_cfg->flags & IOMMU_VTD_S1_EAFE) && !ecap_eafs(iommu->ecap)) { - pr_err_ratelimited("No extended access flag support on %s\n", - iommu->name); - return -EINVAL; - } - - pasid_pte_config_nestd(iommu, &new_pte, s1_cfg, s2_domain, did); - - spin_lock(&iommu->lock); - pte = intel_pasid_get_entry(dev, pasid); - if (!pte) { - spin_unlock(&iommu->lock); - return -ENODEV; - } - - if (!pasid_pte_is_present(pte)) { - spin_unlock(&iommu->lock); - return -EINVAL; - } - - WARN_ON(old_did != pasid_get_domain_id(pte)); - - *pte = new_pte; - spin_unlock(&iommu->lock); - - intel_pasid_flush_present(iommu, dev, pasid, old_did, pte); - intel_iommu_drain_pasid_prq(dev, pasid); - - return 0; -} - /* * Interfaces to setup or teardown a pasid table to the scalable-mode * context table entry: -- 2.43.0 ^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH v2 3/3] iommu/vt-d: Fix race condition during PASID entry replacement 2026-01-20 6:18 ` [PATCH v2 3/3] iommu/vt-d: Fix race condition during PASID entry replacement Lu Baolu @ 2026-01-20 18:54 ` Samiullah Khawaja 2026-01-21 6:23 ` Tian, Kevin 1 sibling, 0 replies; 17+ messages in thread From: Samiullah Khawaja @ 2026-01-20 18:54 UTC (permalink / raw) To: Lu Baolu Cc: Joerg Roedel, Will Deacon, Robin Murphy, Kevin Tian, Jason Gunthorpe, Dmytro Maluka, iommu, linux-kernel On Mon, Jan 19, 2026 at 10:20 PM Lu Baolu <baolu.lu@linux.intel.com> wrote: > > The Intel VT-d PASID table entry is 512 bits (64 bytes). When replacing > an active PASID entry (e.g., during domain replacement), the current > implementation calculates a new entry on the stack and copies it to the > table using a single structure assignment. > > struct pasid_entry *pte, new_pte; > > pte = intel_pasid_get_entry(dev, pasid); > pasid_pte_config_first_level(iommu, &new_pte, ...); > *pte = new_pte; > > Because the hardware may fetch the 512-bit PASID entry in multiple > 128-bit chunks, updating the entire entry while it is active (Present > bit set) risks a "torn" read. In this scenario, the IOMMU hardware > could observe an inconsistent state — partially new data and partially > old data — leading to unpredictable behavior or spurious faults. > > Fix this by removing the unsafe "replace" helpers and following the > "clear-then-update" flow, which ensures the Present bit is cleared and > the required invalidation handshake is completed before the new > configuration is applied. 
> > Fixes: 7543ee63e811 ("iommu/vt-d: Add pasid replace helpers") > Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> > --- > drivers/iommu/intel/pasid.h | 14 --- > drivers/iommu/intel/iommu.c | 29 +++--- > drivers/iommu/intel/nested.c | 9 +- > drivers/iommu/intel/pasid.c | 184 ----------------------------------- > 4 files changed, 16 insertions(+), 220 deletions(-) > > diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h > index 0b303bd0b0c1..c3c8c907983e 100644 > --- a/drivers/iommu/intel/pasid.h > +++ b/drivers/iommu/intel/pasid.h > @@ -316,20 +316,6 @@ int intel_pasid_setup_pass_through(struct intel_iommu *iommu, > struct device *dev, u32 pasid); > int intel_pasid_setup_nested(struct intel_iommu *iommu, struct device *dev, > u32 pasid, struct dmar_domain *domain); > -int intel_pasid_replace_first_level(struct intel_iommu *iommu, > - struct device *dev, phys_addr_t fsptptr, > - u32 pasid, u16 did, u16 old_did, int flags); > -int intel_pasid_replace_second_level(struct intel_iommu *iommu, > - struct dmar_domain *domain, > - struct device *dev, u16 old_did, > - u32 pasid); > -int intel_pasid_replace_pass_through(struct intel_iommu *iommu, > - struct device *dev, u16 old_did, > - u32 pasid); > -int intel_pasid_replace_nested(struct intel_iommu *iommu, > - struct device *dev, u32 pasid, > - u16 old_did, struct dmar_domain *domain); > - > void intel_pasid_tear_down_entry(struct intel_iommu *iommu, > struct device *dev, u32 pasid, > bool fault_ignore); > diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c > index c66cc51f9e51..705828b06e32 100644 > --- a/drivers/iommu/intel/iommu.c > +++ b/drivers/iommu/intel/iommu.c > @@ -1252,12 +1252,10 @@ int __domain_setup_first_level(struct intel_iommu *iommu, struct device *dev, > ioasid_t pasid, u16 did, phys_addr_t fsptptr, > int flags, struct iommu_domain *old) > { > - if (!old) > - return intel_pasid_setup_first_level(iommu, dev, fsptptr, pasid, > - did, flags); > - return 
intel_pasid_replace_first_level(iommu, dev, fsptptr, pasid, did, > - iommu_domain_did(old, iommu), > - flags); > + if (old) > + intel_pasid_tear_down_entry(iommu, dev, pasid, false); > + > + return intel_pasid_setup_first_level(iommu, dev, fsptptr, pasid, did, flags); > } > > static int domain_setup_second_level(struct intel_iommu *iommu, > @@ -1265,23 +1263,20 @@ static int domain_setup_second_level(struct intel_iommu *iommu, > struct device *dev, ioasid_t pasid, > struct iommu_domain *old) > { > - if (!old) > - return intel_pasid_setup_second_level(iommu, domain, > - dev, pasid); > - return intel_pasid_replace_second_level(iommu, domain, dev, > - iommu_domain_did(old, iommu), > - pasid); > + if (old) > + intel_pasid_tear_down_entry(iommu, dev, pasid, false); > + > + return intel_pasid_setup_second_level(iommu, domain, dev, pasid); > } > > static int domain_setup_passthrough(struct intel_iommu *iommu, > struct device *dev, ioasid_t pasid, > struct iommu_domain *old) > { > - if (!old) > - return intel_pasid_setup_pass_through(iommu, dev, pasid); > - return intel_pasid_replace_pass_through(iommu, dev, > - iommu_domain_did(old, iommu), > - pasid); > + if (old) > + intel_pasid_tear_down_entry(iommu, dev, pasid, false); > + > + return intel_pasid_setup_pass_through(iommu, dev, pasid); > } > > static int domain_setup_first_level(struct intel_iommu *iommu, > diff --git a/drivers/iommu/intel/nested.c b/drivers/iommu/intel/nested.c > index a3fb8c193ca6..e9a440e9c960 100644 > --- a/drivers/iommu/intel/nested.c > +++ b/drivers/iommu/intel/nested.c > @@ -136,11 +136,10 @@ static int domain_setup_nested(struct intel_iommu *iommu, > struct device *dev, ioasid_t pasid, > struct iommu_domain *old) > { > - if (!old) > - return intel_pasid_setup_nested(iommu, dev, pasid, domain); > - return intel_pasid_replace_nested(iommu, dev, pasid, > - iommu_domain_did(old, iommu), > - domain); > + if (old) > + intel_pasid_tear_down_entry(iommu, dev, pasid, false); > + > + return 
intel_pasid_setup_nested(iommu, dev, pasid, domain); > } > > static int intel_nested_set_dev_pasid(struct iommu_domain *domain, > diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c > index eb069aefa4fa..4b880b9ad49d 100644 > --- a/drivers/iommu/intel/pasid.c > +++ b/drivers/iommu/intel/pasid.c > @@ -416,50 +416,6 @@ int intel_pasid_setup_first_level(struct intel_iommu *iommu, struct device *dev, > return 0; > } > > -int intel_pasid_replace_first_level(struct intel_iommu *iommu, > - struct device *dev, phys_addr_t fsptptr, > - u32 pasid, u16 did, u16 old_did, > - int flags) > -{ > - struct pasid_entry *pte, new_pte; > - > - if (!ecap_flts(iommu->ecap)) { > - pr_err("No first level translation support on %s\n", > - iommu->name); > - return -EINVAL; > - } > - > - if ((flags & PASID_FLAG_FL5LP) && !cap_fl5lp_support(iommu->cap)) { > - pr_err("No 5-level paging support for first-level on %s\n", > - iommu->name); > - return -EINVAL; > - } > - > - pasid_pte_config_first_level(iommu, &new_pte, fsptptr, did, flags); > - > - spin_lock(&iommu->lock); > - pte = intel_pasid_get_entry(dev, pasid); > - if (!pte) { > - spin_unlock(&iommu->lock); > - return -ENODEV; > - } > - > - if (!pasid_pte_is_present(pte)) { > - spin_unlock(&iommu->lock); > - return -EINVAL; > - } > - > - WARN_ON(old_did != pasid_get_domain_id(pte)); > - > - *pte = new_pte; > - spin_unlock(&iommu->lock); > - > - intel_pasid_flush_present(iommu, dev, pasid, old_did, pte); > - intel_iommu_drain_pasid_prq(dev, pasid); > - > - return 0; > -} > - > /* > * Set up the scalable mode pasid entry for second only translation type. 
> */ > @@ -526,51 +482,6 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu, > return 0; > } > > -int intel_pasid_replace_second_level(struct intel_iommu *iommu, > - struct dmar_domain *domain, > - struct device *dev, u16 old_did, > - u32 pasid) > -{ > - struct pasid_entry *pte, new_pte; > - u16 did; > - > - /* > - * If hardware advertises no support for second level > - * translation, return directly. > - */ > - if (!ecap_slts(iommu->ecap)) { > - pr_err("No second level translation support on %s\n", > - iommu->name); > - return -EINVAL; > - } > - > - did = domain_id_iommu(domain, iommu); > - > - pasid_pte_config_second_level(iommu, &new_pte, domain, did); > - > - spin_lock(&iommu->lock); > - pte = intel_pasid_get_entry(dev, pasid); > - if (!pte) { > - spin_unlock(&iommu->lock); > - return -ENODEV; > - } > - > - if (!pasid_pte_is_present(pte)) { > - spin_unlock(&iommu->lock); > - return -EINVAL; > - } > - > - WARN_ON(old_did != pasid_get_domain_id(pte)); > - > - *pte = new_pte; > - spin_unlock(&iommu->lock); > - > - intel_pasid_flush_present(iommu, dev, pasid, old_did, pte); > - intel_iommu_drain_pasid_prq(dev, pasid); > - > - return 0; > -} > - > /* > * Set up dirty tracking on a second only or nested translation type. 
> */ > @@ -683,38 +594,6 @@ int intel_pasid_setup_pass_through(struct intel_iommu *iommu, > return 0; > } > > -int intel_pasid_replace_pass_through(struct intel_iommu *iommu, > - struct device *dev, u16 old_did, > - u32 pasid) > -{ > - struct pasid_entry *pte, new_pte; > - u16 did = FLPT_DEFAULT_DID; > - > - pasid_pte_config_pass_through(iommu, &new_pte, did); > - > - spin_lock(&iommu->lock); > - pte = intel_pasid_get_entry(dev, pasid); > - if (!pte) { > - spin_unlock(&iommu->lock); > - return -ENODEV; > - } > - > - if (!pasid_pte_is_present(pte)) { > - spin_unlock(&iommu->lock); > - return -EINVAL; > - } > - > - WARN_ON(old_did != pasid_get_domain_id(pte)); > - > - *pte = new_pte; > - spin_unlock(&iommu->lock); > - > - intel_pasid_flush_present(iommu, dev, pasid, old_did, pte); > - intel_iommu_drain_pasid_prq(dev, pasid); > - > - return 0; > -} > - > /* > * Set the page snoop control for a pasid entry which has been set up. > */ > @@ -848,69 +727,6 @@ int intel_pasid_setup_nested(struct intel_iommu *iommu, struct device *dev, > return 0; > } > > -int intel_pasid_replace_nested(struct intel_iommu *iommu, > - struct device *dev, u32 pasid, > - u16 old_did, struct dmar_domain *domain) > -{ > - struct iommu_hwpt_vtd_s1 *s1_cfg = &domain->s1_cfg; > - struct dmar_domain *s2_domain = domain->s2_domain; > - u16 did = domain_id_iommu(domain, iommu); > - struct pasid_entry *pte, new_pte; > - > - /* Address width should match the address width supported by hardware */ > - switch (s1_cfg->addr_width) { > - case ADDR_WIDTH_4LEVEL: > - break; > - case ADDR_WIDTH_5LEVEL: > - if (!cap_fl5lp_support(iommu->cap)) { > - dev_err_ratelimited(dev, > - "5-level paging not supported\n"); > - return -EINVAL; > - } > - break; > - default: > - dev_err_ratelimited(dev, "Invalid stage-1 address width %d\n", > - s1_cfg->addr_width); > - return -EINVAL; > - } > - > - if ((s1_cfg->flags & IOMMU_VTD_S1_SRE) && !ecap_srs(iommu->ecap)) { > - pr_err_ratelimited("No supervisor request support on 
%s\n", > - iommu->name); > - return -EINVAL; > - } > - > - if ((s1_cfg->flags & IOMMU_VTD_S1_EAFE) && !ecap_eafs(iommu->ecap)) { > - pr_err_ratelimited("No extended access flag support on %s\n", > - iommu->name); > - return -EINVAL; > - } > - > - pasid_pte_config_nestd(iommu, &new_pte, s1_cfg, s2_domain, did); > - > - spin_lock(&iommu->lock); > - pte = intel_pasid_get_entry(dev, pasid); > - if (!pte) { > - spin_unlock(&iommu->lock); > - return -ENODEV; > - } > - > - if (!pasid_pte_is_present(pte)) { > - spin_unlock(&iommu->lock); > - return -EINVAL; > - } > - > - WARN_ON(old_did != pasid_get_domain_id(pte)); > - > - *pte = new_pte; > - spin_unlock(&iommu->lock); > - > - intel_pasid_flush_present(iommu, dev, pasid, old_did, pte); > - intel_iommu_drain_pasid_prq(dev, pasid); > - > - return 0; > -} > - > /* > * Interfaces to setup or teardown a pasid table to the scalable-mode > * context table entry: > -- > 2.43.0 > Reviewed-by: Samiullah Khawaja <skhawaja@google.com> ^ permalink raw reply [flat|nested] 17+ messages in thread
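The "clear-then-update" flow reviewed above can be sketched as a user-space model. This is an illustrative sketch only: the entry layout and the `model_*` helper names are hypothetical stand-ins for the driver's `intel_pasid_*` functions, and the cache invalidation and PRQ drain that the real driver performs between the two steps are reduced to comments.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PASID_PTE_PRESENT 1ULL

/* 512-bit PASID entry modeled as eight 64-bit words (simplified layout). */
struct model_pasid_entry {
	uint64_t val[8];
};

static int model_pte_is_present(const struct model_pasid_entry *pte)
{
	return (pte->val[0] & PASID_PTE_PRESENT) != 0;
}

/*
 * Unsafe replace, as in the removed helpers: the entry is overwritten
 * while its Present bit is still set. Hardware fetching the entry in
 * 128-bit chunks between the word stores can observe a torn mix of old
 * and new data.
 */
static void model_replace_unsafe(struct model_pasid_entry *pte,
				 const struct model_pasid_entry *repl)
{
	*pte = *repl;	/* multiple stores while Present is set */
}

/*
 * Safe flow per the patch: clear Present first, complete the
 * invalidation handshake, then install the new configuration into a
 * not-present entry.
 */
static void model_replace_safe(struct model_pasid_entry *pte,
			       const struct model_pasid_entry *repl)
{
	pte->val[0] &= ~PASID_PTE_PRESENT;	/* tear down: not-present */
	/* real driver: PASID cache/IOTLB invalidation + PRQ drain here */
	memcpy(pte, repl, sizeof(*pte));	/* set up from scratch */
}
```

In the driver this corresponds to calling intel_pasid_tear_down_entry() before intel_pasid_setup_first_level() and friends, which is exactly what the reworked `__domain_setup_*` paths in this patch do.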
* RE: [PATCH v2 3/3] iommu/vt-d: Fix race condition during PASID entry replacement
2026-01-20 6:18 ` [PATCH v2 3/3] iommu/vt-d: Fix race condition during PASID entry replacement Lu Baolu
2026-01-20 18:54 ` Samiullah Khawaja
@ 2026-01-21 6:23 ` Tian, Kevin
1 sibling, 0 replies; 17+ messages in thread
From: Tian, Kevin @ 2026-01-21 6:23 UTC (permalink / raw)
To: Lu Baolu, Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe
Cc: Dmytro Maluka, Samiullah Khawaja, iommu@lists.linux.dev,
linux-kernel@vger.kernel.org
> From: Lu Baolu <baolu.lu@linux.intel.com>
> Sent: Tuesday, January 20, 2026 2:18 PM
>
> The Intel VT-d PASID table entry is 512 bits (64 bytes). When replacing
> an active PASID entry (e.g., during domain replacement), the current
> implementation calculates a new entry on the stack and copies it to the
> table using a single structure assignment.
>
> struct pasid_entry *pte, new_pte;
>
> pte = intel_pasid_get_entry(dev, pasid);
> pasid_pte_config_first_level(iommu, &new_pte, ...);
> *pte = new_pte;
>
> Because the hardware may fetch the 512-bit PASID entry in multiple
> 128-bit chunks, updating the entire entry while it is active (Present
> bit set) risks a "torn" read. In this scenario, the IOMMU hardware
> could observe an inconsistent state — partially new data and partially
> old data — leading to unpredictable behavior or spurious faults.
>
> Fix this by removing the unsafe "replace" helpers and following the
> "clear-then-update" flow, which ensures the Present bit is cleared and
> the required invalidation handshake is completed before the new
> configuration is applied.
>
> Fixes: 7543ee63e811 ("iommu/vt-d: Add pasid replace helpers")
> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
* Re: [PATCH v2 0/3] iommu/vt-d: Ensure atomicity in context and PASID entry updates
2026-01-20 6:18 [PATCH v2 0/3] iommu/vt-d: Ensure atomicity in context and PASID entry updates Lu Baolu
` (2 preceding siblings ...)
2026-01-20 6:18 ` [PATCH v2 3/3] iommu/vt-d: Fix race condition during PASID entry replacement Lu Baolu
@ 2026-01-20 13:56 ` Jason Gunthorpe
3 siblings, 0 replies; 17+ messages in thread
From: Jason Gunthorpe @ 2026-01-20 13:56 UTC (permalink / raw)
To: Lu Baolu
Cc: Joerg Roedel, Will Deacon, Robin Murphy, Kevin Tian,
Dmytro Maluka, Samiullah Khawaja, iommu, linux-kernel
On Tue, Jan 20, 2026 at 02:18:11PM +0800, Lu Baolu wrote:
> This is a follow-up from recent discussions in the iommu community
> mailing list [1] [2] regarding potential race conditions in table
> entry updates.
>
> The Intel VT-d hardware fetches translation table entries (context
> entries and PASID entries) in 128-bit (16-byte) chunks. Currently, the
> Linux driver often updates these entries using multiple 64-bit writes.
> This creates a race condition where the IOMMU hardware may fetch a
> "torn" entry — a mixture of old and new data — during a CPU update. This
> can lead to unpredictable hardware behavior, spurious faults, or system
> instability.
>
> This addresses these atomicity issues by following the translation table
> entry ownership handshake protocol recommended by the VT-d specification.
This seems like a reasonable first series
Thanks,
Jason
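The cover letter quoted above describes the 128-bit fetch granularity and the dma_wmb() approach chosen for v2. The sketch below models that store ordering in user space: `wmb_model()` and the two-qword context-entry layout are hypothetical simplifications (in the kernel the barrier would be dma_wmb()). The point is the order of the two 64-bit stores: the qword that does not carry the Present bit is written first, so a device-side reader that observes Present also observes the matching other half.

```c
#include <assert.h>
#include <stdint.h>

#define CTX_PRESENT 1ULL

/* 128-bit context entry modeled as two 64-bit qwords; lo carries Present. */
struct model_context_entry {
	uint64_t lo;
	uint64_t hi;
};

/*
 * Stand-in for dma_wmb(): orders the hi-qword store before the lo-qword
 * store that publishes the Present bit. A compiler barrier suffices for
 * this single-threaded model; the real driver needs the DMA barrier so
 * the ordering is visible to the IOMMU.
 */
#define wmb_model() __asm__ __volatile__("" ::: "memory")

static void model_context_set(struct model_context_entry *ce,
			      uint64_t hi, uint64_t lo)
{
	ce->hi = hi;			/* 1. install the half without Present */
	wmb_model();			/* 2. order it before publication */
	ce->lo = lo | CTX_PRESENT;	/* 3. publish: Present written last */
}
```

With this ordering, a torn 128-bit fetch can at worst see the old not-present lo qword with the new hi qword, which the hardware treats as not-present rather than acting on inconsistent data.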