From: "Cédric Le Goater" <clg@redhat.com>
To: Jason Gunthorpe <jgg@nvidia.com>,
Bjorn Helgaas <bhelgaas@google.com>,
iommu@lists.linux.dev, Joerg Roedel <joro@8bytes.org>,
linux-pci@vger.kernel.org, Robin Murphy <robin.murphy@arm.com>,
Will Deacon <will@kernel.org>
Cc: Alex Williamson <alex.williamson@redhat.com>,
Lu Baolu <baolu.lu@linux.intel.com>,
Donald Dutile <ddutile@redhat.com>,
galshalom@nvidia.com, Joerg Roedel <jroedel@suse.de>,
Kevin Tian <kevin.tian@intel.com>,
kvm@vger.kernel.org, maorg@nvidia.com, patches@lists.linux.dev,
tdave@nvidia.com, Tony Zhu <tony.zhu@intel.com>
Subject: Re: [PATCH v3 00/11] Fix incorrect iommu_groups with PCIe ACS
Date: Mon, 15 Sep 2025 11:41:44 +0200 [thread overview]
Message-ID: <835a9022-aca1-49ec-a704-578a4b3c5bbd@redhat.com> (raw)
In-Reply-To: <0-v3-8827cc7fc4e0+23f-pcie_switch_groups_jgg@nvidia.com>
On 9/5/25 20:06, Jason Gunthorpe wrote:
> The series patches have extensive descriptions as to the problem and
> solution, but in short the ACS flags are not analyzed according to the
> spec to form the iommu_groups that VFIO is expecting for security.
>
> ACS is an egress control only. For a path, the ACS flags on each hop only
> affect which other devices the TLP is allowed to reach. They do not prevent
> other devices from reaching into this path.
>
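The egress-only semantics above can be sketched with a small user-space check. This is an illustrative sketch, not the series' code: the register bit values match the PCIe spec (and the kernel's pci_regs.h), but the helper names and the stand-alone structure are invented for the example.

```c
#include <stdbool.h>
#include <stdint.h>

/* ACS control register bits, per the PCIe spec / include/uapi/linux/pci_regs.h */
#define PCI_ACS_SV	0x0001	/* Source Validation */
#define PCI_ACS_RR	0x0004	/* P2P Request Redirect */
#define PCI_ACS_CR	0x0008	/* P2P Completion Redirect */
#define PCI_ACS_UF	0x0010	/* Upstream Forwarding */

/* The flag set historically required for isolation (REQ_ACS_FLAGS) */
#define ACS_ISOLATED	(PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF)

/*
 * ACS only controls what TLPs *leaving* this port may reach. A hop is
 * egress-isolating when its controls force peer-to-peer requests up to
 * the TA/IOMMU instead of routing them sideways.
 */
static bool hop_egress_isolated(uint16_t acs_ctrl)
{
	return (acs_ctrl & ACS_ISOLATED) == ACS_ISOLATED;
}

/*
 * Crucially, an isolating path is not enough: an off-path device with
 * non-isolating egress can still send TLPs *into* the path. Both sides
 * must be egress-isolating before two devices may sit in separate groups.
 */
static bool pair_isolated(uint16_t acs_a, uint16_t acs_b)
{
	return hop_egress_isolated(acs_a) && hop_egress_isolated(acs_b);
}
```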
> For VFIO, if device A is permitted to access device B's MMIO then A and B
> must be grouped together. This means that even if a path has isolating ACS
> flags on each hop, off-path devices with non-isolating ACS can still reach
> into that path and must be grouped together.
>
> For switches, a PCIe topology like:
>
>                               -- DSP 02:00.0 -> End Point A
>  Root 00:00.0 -> USP 01:00.0 --|
>                               -- DSP 02:03.0 -> End Point B
>
> Will generate unique single device groups for every device even if ACS is
> not enabled on the two DSP ports. It should at least group A/B together
> because no ACS means A can reach the MMIO of B. This is a serious failure
> for the VFIO security model.
>
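The switch scenario above can be sketched as a toy reachability grouping: when either DSP's egress is non-isolating, the two endpoints become mutually reachable and must land in one group. The topology mirrors the example; the union-find code is an illustration of the required behavior, not the actual iommu_groups implementation.

```c
#include <stdbool.h>

#define NDEV 2	/* 0 = End Point A, 1 = End Point B */

static int group[NDEV];

/* Union-find with path halving: representative element = group identity */
static int find(int x)
{
	while (group[x] != x)
		x = group[x] = group[group[x]];
	return x;
}

static void join(int a, int b)
{
	group[find(a)] = find(b);
}

/*
 * If either DSP lacks isolating ACS, a TLP from one endpoint can be
 * routed sideways to the other, so A and B must share an iommu_group
 * even though each hop on A's own path may look isolating.
 */
static void group_switch(bool dsp_a_acs, bool dsp_b_acs)
{
	int i;

	for (i = 0; i < NDEV; i++)
		group[i] = i;
	if (!dsp_a_acs || !dsp_b_acs)
		join(0, 1);
}
```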
> For multi-function-devices, a PCIe topology like:
>
>                  -- MFD 00:1f.0  ACS not supported
>  Root 00:00.00 --|- MFD 00:1f.2  ACS not supported
>                  |- MFD 00:1f.6  ACS = REQ_ACS_FLAGS
>
> Will group [1f.0, 1f.2] together while 1f.6 gets a single device group.
> However, from a spec perspective each function should get its own group,
> because when ACS is not supported the spec allows assuming that no internal
> loopback is possible.
>
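The MFD rule can be sketched as a per-function predicate. This is again an illustrative sketch, assuming the spec reading described above (no ACS capability on an MFD function implies no internal loopback); the function name is invented for the example.

```c
#include <stdbool.h>
#include <stdint.h>

#define PCI_ACS_SV	0x0001
#define PCI_ACS_RR	0x0004
#define PCI_ACS_CR	0x0008
#define PCI_ACS_UF	0x0010
#define ACS_ISOLATED	(PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF)

/*
 * Per-function isolation decision for a multi-function device:
 *  - no ACS capability: the spec allows assuming no internal loopback,
 *    so the function may be treated as isolated on its own;
 *  - ACS capability present: all required egress controls must be set,
 *    otherwise the function can loop TLPs back to its siblings.
 */
static bool mfd_fn_isolated(bool has_acs_cap, uint16_t acs_ctrl)
{
	if (!has_acs_cap)
		return true;	/* spec: no ACS capability implies no loopback */
	return (acs_ctrl & ACS_ISOLATED) == ACS_ISOLATED;
}
```

Under this predicate all three functions in the example (1f.0, 1f.2 without ACS, and 1f.6 with full flags) come out isolated and get their own groups.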
> For root-ports a PCIe topology like:
>                                        -- Dev 01:00.0
>  Root 00:00.00 --- Root Port 00:01.0 --|
>                |                       -- Dev 01:00.1
>                |- Dev 00:17.0
>
> Previously would group [00:01.0, 01:00.0, 01:00.1] together if there is no
> ACS capability in the root port.
>
> While ACS on root ports is underspecified in the spec, it should still
> function as an egress control and limit access to either the MMIO of the
> root port itself, or perhaps some other devices upstream of the root
> complex - 00:17.0 perhaps in this example.
>
> Historically the grouping in Linux has assumed the root port routes all
> traffic into the TA/IOMMU and never bypasses the TA to go to other
> functions in the root complex. Following the new understanding that an
> ACS capability is required for internal loopback, root ports with no ACS
> capability are now also treated as lacking internal loopback.
>
> There is also some confusing spec language about how ACS and SRIOV works
> which this series does not address.
>
>
> This entire series goes further and makes some additional improvements to
> the ACS validation found while studying this problem. The groups around a
> PCIe to PCI bridge are shrunk to not include the PCIe bridge.
>
> The last patches implement "ACS Enhanced" on top of it. Due to how ACS
> Enhanced was defined as a non-backward compatible feature it is important
> to get SW support out there.
>
> Due to the potential of iommu_groups becoming wider and thus non-usable
> for VFIO this should go to a linux-next tree to give it some more
> exposure.
>
> I have now tested this on a few systems I could get:
>
> - Various Intel client systems:
> * Raptor Lake, with VMD enabled and using the real_dev mechanism
> * 6/7th generation 100 Series/C320
> * 5/6th generation 100 Series/C320 with a NIC MFD quirk
> * Tiger Lake
> * 5/6th generation Sunrise Point
FWIW, I have tested this series on some of the systems I use
for upstream VFIO:
Intel(R) Xeon(R) Silver 4310 CPU @ 2.10GHz
Intel(R) Xeon(R) Silver 4514Y
Intel(R) 12th Gen Core(TM) i7-12700K
Neoverse-N1
I didn't see any regressions on IOMMU grouping like on v2.
Please ping me if you need more info on the PCI topology.
I also booted an IBM/S390 z16 LPAR with VFs to complete the
experiment. All good.
> The 6/7th gen system has a root port without an ACS capability and it
> becomes ungrouped as described above.
>
> All systems have changes: the MFDs in the root complex all become ungrouped.
>
> - NVIDIA Grace system with 5 different PCI switches from two vendors
> Bug fix widening the iommu_groups works as expected here
>
> This is on github: https://github.com/jgunthorpe/linux/commits/pcie_switch_groups
>
> v3:
> - Rebase to v6.17-rc4
> - Drop the quirks related patches
> - Change the MFD logic to process no ACS cap as meaning no internal
> loopback. This avoids creating non-isolated groups for MFD root ports in
> common AMD and Intel systems
> - Fix matching MFDs to ignore SRIOV VFs
> - Fix some kbuild splats
> v2: https://patch.msgid.link/r/0-v2-4a9b9c983431+10e2-pcie_switch_groups_jgg@nvidia.com
> - Revise comments and commit messages
> - Rename struct pci_alias_set to pci_reachable_set
> - Make more sense of the special bus->self = NULL case for SRIOV
> - Add pci_group_alloc_non_isolated() for readability
> - Rename BUS_DATA_PCI_UNISOLATED to BUS_DATA_PCI_NON_ISOLATED
> - Propagate BUS_DATA_PCI_NON_ISOLATED downstream from a MFD in case a MFD
> function is a bridge
> - New patches to add pci_mfd_isolation() to retain more cases of narrow
> groups on MFDs with missing ACS.
> - Redescribe the MFD related change as a bug fix. For a MFD to be
> isolated all functions must have egress control on their P2P.
> v1: https://patch.msgid.link/r/0-v1-74184c5043c6+195-pcie_switch_groups_jgg@nvidia.com
>
> Cc: galshalom@nvidia.com
> Cc: tdave@nvidia.com
> Cc: maorg@nvidia.com
> Cc: kvm@vger.kernel.org
> Cc: Ceric Le Goater" <clg@redhat.com>
Curiously, I didn't get the email. Weird.
Cheers,
C.
> Cc: Donald Dutile <ddutile@redhat.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
>
> Jason Gunthorpe (11):
> PCI: Move REQ_ACS_FLAGS into pci_regs.h as PCI_ACS_ISOLATED
> PCI: Add pci_bus_isolated()
> iommu: Compute iommu_groups properly for PCIe switches
> iommu: Organize iommu_group by member size
> PCI: Add pci_reachable_set()
> iommu: Compute iommu_groups properly for PCIe MFDs
> iommu: Validate that pci_for_each_dma_alias() matches the groups
> PCI: Add the ACS Enhanced Capability definitions
> PCI: Enable ACS Enhanced bits for enable_acs and config_acs
> PCI: Check ACS DSP/USP redirect bits in pci_enable_pasid()
> PCI: Check ACS Extended flags for pci_bus_isolated()
>
> drivers/iommu/iommu.c | 510 +++++++++++++++++++++++-----------
> drivers/pci/ats.c | 4 +-
> drivers/pci/pci.c | 73 ++++-
> drivers/pci/search.c | 274 ++++++++++++++++++
> include/linux/pci.h | 46 +++
> include/uapi/linux/pci_regs.h | 18 ++
> 6 files changed, 759 insertions(+), 166 deletions(-)
>
>
> base-commit: b320789d6883cc00ac78ce83bccbfe7ed58afcf0
Thread overview: 53+ messages
2025-09-05 18:06 [PATCH v3 00/11] Fix incorrect iommu_groups with PCIe ACS Jason Gunthorpe
2025-09-05 18:06 ` [PATCH v3 01/11] PCI: Move REQ_ACS_FLAGS into pci_regs.h as PCI_ACS_ISOLATED Jason Gunthorpe
2025-09-09 4:08 ` Donald Dutile
2025-09-05 18:06 ` [PATCH v3 02/11] PCI: Add pci_bus_isolated() Jason Gunthorpe
2025-09-09 4:09 ` Donald Dutile
2025-09-09 19:54 ` Bjorn Helgaas
2025-09-09 21:21 ` Jason Gunthorpe
2025-09-05 18:06 ` [PATCH v3 03/11] iommu: Compute iommu_groups properly for PCIe switches Jason Gunthorpe
2025-09-09 4:14 ` Donald Dutile
2025-09-09 12:18 ` Jason Gunthorpe
2025-09-09 19:33 ` Donald Dutile
2025-09-09 20:27 ` Bjorn Helgaas
2025-09-09 21:21 ` Jason Gunthorpe
2025-09-05 18:06 ` [PATCH v3 04/11] iommu: Organize iommu_group by member size Jason Gunthorpe
2025-09-09 4:16 ` Donald Dutile
2025-09-05 18:06 ` [PATCH v3 05/11] PCI: Add pci_reachable_set() Jason Gunthorpe
2025-09-09 21:03 ` Bjorn Helgaas
2025-09-10 16:13 ` Jason Gunthorpe
2025-09-11 19:56 ` Donald Dutile
2025-09-15 13:38 ` Jason Gunthorpe
2025-09-15 14:32 ` Donald Dutile
2025-09-05 18:06 ` [PATCH v3 06/11] iommu: Compute iommu_groups properly for PCIe MFDs Jason Gunthorpe
2025-09-09 4:57 ` Donald Dutile
2025-09-09 13:31 ` Jason Gunthorpe
2025-09-09 19:55 ` Donald Dutile
2025-09-09 21:24 ` Bjorn Helgaas
2025-09-09 23:20 ` Jason Gunthorpe
2025-09-10 1:59 ` Donald Dutile
2025-09-10 17:43 ` Jason Gunthorpe
2025-09-05 18:06 ` [PATCH v3 07/11] iommu: Validate that pci_for_each_dma_alias() matches the groups Jason Gunthorpe
2025-09-09 5:00 ` Donald Dutile
2025-09-09 15:35 ` Jason Gunthorpe
2025-09-09 19:58 ` Donald Dutile
2025-09-05 18:06 ` [PATCH v3 08/11] PCI: Add the ACS Enhanced Capability definitions Jason Gunthorpe
2025-09-09 5:01 ` Donald Dutile
2025-09-05 18:06 ` [PATCH v3 09/11] PCI: Enable ACS Enhanced bits for enable_acs and config_acs Jason Gunthorpe
2025-09-09 5:01 ` Donald Dutile
2025-09-05 18:06 ` [PATCH v3 10/11] PCI: Check ACS DSP/USP redirect bits in pci_enable_pasid() Jason Gunthorpe
2025-09-09 5:02 ` Donald Dutile
2025-09-09 21:43 ` Bjorn Helgaas
2025-09-10 17:34 ` Jason Gunthorpe
2025-09-11 19:50 ` Donald Dutile
2026-01-20 18:08 ` Keith Busch
2025-09-05 18:06 ` [PATCH v3 11/11] PCI: Check ACS Extended flags for pci_bus_isolated() Jason Gunthorpe
2025-09-09 5:04 ` Donald Dutile
2025-09-15 9:41 ` Cédric Le Goater [this message]
2025-09-22 22:39 ` [PATCH v3 00/11] Fix incorrect iommu_groups with PCIe ACS Alex Williamson
2025-09-23 1:44 ` Donald Dutile
2025-09-23 2:06 ` Alex Williamson
2025-09-23 2:42 ` Donald Dutile
2025-09-23 22:23 ` Alex Williamson
2025-09-30 15:23 ` Donald Dutile
2025-09-30 16:21 ` Jason Gunthorpe