public inbox for linux-kernel@vger.kernel.org
* [PATCH 0/2] iommu/vt-d: Skip dev-iotlb flush for inaccessible PCIe device
@ 2025-12-10 17:14 Jinhui Guo
  2025-12-10 17:14 ` [PATCH 1/2] iommu/vt-d: Skip dev-iotlb flush for inaccessible PCIe device without scalable mode Jinhui Guo
  2025-12-10 17:14 ` [PATCH 2/2] iommu/vt-d: Flush dev-IOTLB only when PCIe device is accessible in " Jinhui Guo
  0 siblings, 2 replies; 6+ messages in thread
From: Jinhui Guo @ 2025-12-10 17:14 UTC (permalink / raw)
  To: dwmw2, baolu.lu, joro, will
  Cc: haifeng.zhao, guojinhui.liam, iommu, linux-kernel

We hit hard lockups when the Intel IOMMU waits indefinitely for an ATS invalidation
that can never complete, especially under GDR high-load conditions.

1. Hard lockup when a passthrough PCIe NIC with ATS enabled goes link-down while
   the Intel IOMMU is in non-scalable mode. Two scenarios exist: link-down with
   an explicit link-down event, and link-down without any event.

   a) NIC link-down with an explicit link-down event.
      Call Trace:
       qi_submit_sync
       qi_flush_dev_iotlb
       __context_flush_dev_iotlb.part.0
       domain_context_clear_one_cb
       pci_for_each_dma_alias
       device_block_translation
       blocking_domain_attach_dev
       iommu_deinit_device
       __iommu_group_remove_device
       iommu_release_device
       iommu_bus_notifier
       blocking_notifier_call_chain
       bus_notify
       device_del
       pci_remove_bus_device
       pci_stop_and_remove_bus_device
       pciehp_unconfigure_device
       pciehp_disable_slot
       pciehp_handle_presence_or_link_change
       pciehp_ist

   b) NIC link-down without an event - hard lockup on VM destroy.
      Call Trace:
       qi_submit_sync
       qi_flush_dev_iotlb
       __context_flush_dev_iotlb.part.0
       domain_context_clear_one_cb
       pci_for_each_dma_alias
       device_block_translation
       blocking_domain_attach_dev
       __iommu_attach_device
       __iommu_device_set_domain
       __iommu_group_set_domain_internal
       iommu_detach_group
       vfio_iommu_type1_detach_group
       vfio_group_detach_container
       vfio_group_fops_release
       __fput

2. Hard lockup when a passthrough PCIe NIC with ATS enabled goes link-down while
   the Intel IOMMU is in scalable mode; the NIC goes link-down without an event
   and the lockup hits on VM destroy.
   Call Trace:
    qi_submit_sync
    qi_flush_dev_iotlb
    intel_pasid_tear_down_entry
    device_block_translation
    blocking_domain_attach_dev
    __iommu_attach_device
    __iommu_device_set_domain
    __iommu_group_set_domain_internal
    iommu_detach_group
    vfio_iommu_type1_detach_group
    vfio_group_detach_container
    vfio_group_fops_release
    __fput

Fix both issues with two patches:
1. Skip dev-IOTLB flush for inaccessible devices in __context_flush_dev_iotlb() using
   pci_device_is_present().
2. Use pci_device_is_present() instead of pci_dev_is_disconnected() to decide when to
   skip ATS invalidation in devtlb_invalidation_with_pasid().

Jinhui Guo (2):
  iommu/vt-d: Skip dev-iotlb flush for inaccessible PCIe device without
    scalable mode
  iommu/vt-d: Flush dev-IOTLB only when PCIe device is accessible in
    scalable mode

 drivers/iommu/intel/pasid.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

-- 
2.20.1


end of thread, other threads:[~2025-12-11  5:04 UTC | newest]

Thread overview: 6+ messages
2025-12-10 17:14 [PATCH 0/2] iommu/vt-d: Skip dev-iotlb flush for inaccessible PCIe device Jinhui Guo
2025-12-10 17:14 ` [PATCH 1/2] iommu/vt-d: Skip dev-iotlb flush for inaccessible PCIe device without scalable mode Jinhui Guo
2025-12-11  2:10   ` Baolu Lu
2025-12-11  4:17     ` Jinhui Guo
2025-12-11  4:59       ` Baolu Lu
2025-12-10 17:14 ` [PATCH 2/2] iommu/vt-d: Flush dev-IOTLB only when PCIe device is accessible in " Jinhui Guo
