Linux PCI subsystem development
From: Donald Dutile <ddutile@redhat.com>
To: Alex Williamson <alex.williamson@redhat.com>,
	Jason Gunthorpe <jgg@nvidia.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>,
	iommu@lists.linux.dev, Joerg Roedel <joro@8bytes.org>,
	linux-pci@vger.kernel.org, Robin Murphy <robin.murphy@arm.com>,
	Will Deacon <will@kernel.org>,
	Lu Baolu <baolu.lu@linux.intel.com>,
	galshalom@nvidia.com, Joerg Roedel <jroedel@suse.de>,
	Kevin Tian <kevin.tian@intel.com>,
	kvm@vger.kernel.org, maorg@nvidia.com, patches@lists.linux.dev,
	tdave@nvidia.com, Tony Zhu <tony.zhu@intel.com>
Subject: Re: [PATCH v3 00/11] Fix incorrect iommu_groups with PCIe ACS
Date: Mon, 22 Sep 2025 21:44:27 -0400	[thread overview]
Message-ID: <e9d4f76a-5355-4068-a322-a6d5c081e406@redhat.com> (raw)
In-Reply-To: <20250922163947.5a8304d4.alex.williamson@redhat.com>



On 9/22/25 6:39 PM, Alex Williamson wrote:
> On Fri,  5 Sep 2025 15:06:15 -0300
> Jason Gunthorpe <jgg@nvidia.com> wrote:
> 
>> The series patches have extensive descriptions as to the problem and
>> solution, but in short the ACS flags are not analyzed according to the
>> spec to form the iommu_groups that VFIO is expecting for security.
>>
>> ACS is an egress control only. For a path, the ACS flags on each hop only
>> affect what other devices the TLP is allowed to reach. They do not prevent
>> other devices from reaching into this path.
>>
>> For VFIO, if device A is permitted to access device B's MMIO then A and B
>> must be grouped together. This means that even if a path has isolating ACS
>> flags on each hop, off-path devices with non-isolating ACS can still reach
>> into that path and must be grouped together.
>>
>> For switches, a PCIe topology like:
>>
>>                                 -- DSP 02:00.0 -> End Point A
>>   Root 00:00.0 -> USP 01:00.0 --|
>>                                 -- DSP 02:03.0 -> End Point B
>>
>> Will generate unique single device groups for every device even if ACS is
>> not enabled on the two DSP ports. It should at least group A/B together
>> because no ACS means A can reach the MMIO of B. This is a serious failure
>> for the VFIO security model.
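The switch case quoted above can be sketched as a predicate. The helper name below is illustrative, not the series' actual API; the ACS bit values are the real ones from include/uapi/linux/pci_regs.h.

```c
#include <stdbool.h>

/* ACS control bits, as defined in include/uapi/linux/pci_regs.h */
#define PCI_ACS_SV 0x01 /* Source Validation */
#define PCI_ACS_RR 0x04 /* P2P Request Redirect */
#define PCI_ACS_CR 0x08 /* P2P Completion Redirect */
#define PCI_ACS_UF 0x10 /* Upstream Forwarding */
#define REQ_ACS_FLAGS (PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF)

/*
 * Hypothetical check: endpoints A and B below two downstream ports may
 * only be placed in separate iommu_groups when *both* DSPs enable the
 * isolating egress controls.  Each argument is the enabled ACS control
 * bits of one DSP (0 when ACS is absent or disabled), so missing ACS on
 * either DSP forces A and B into one group.
 */
static bool endpoints_isolated(unsigned int dsp_a_acs, unsigned int dsp_b_acs)
{
	return (dsp_a_acs & REQ_ACS_FLAGS) == REQ_ACS_FLAGS &&
	       (dsp_b_acs & REQ_ACS_FLAGS) == REQ_ACS_FLAGS;
}
```

The pre-series bug was, in effect, skipping this check when the DSPs expose no ACS capability at all.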
>>
>> For multi-function-devices, a PCIe topology like:
>>
>>                    -- MFD 00:1f.0 ACS not supported
>>    Root 00:00.00 --|- MFD 00:1f.2 ACS not supported
>>                    |- MFD 00:1f.6 ACS = REQ_ACS_FLAGS
>>
>> Will group [1f.0, 1f.2] and 1f.6 gets a single device group. However from
>> a spec perspective each device should get its own group, because when ACS
>> is not supported we can assume, per spec, that no loopback is possible.
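Under that spec reading (and the v3 change treating "no ACS cap" as "no internal loopback"), the per-function MFD decision can be sketched as follows; the helper name is hypothetical, and REQ_ACS_FLAGS is the kernel's usual SV|RR|CR|UF set.

```c
#include <stdbool.h>

/* Isolating ACS control bits (SV | RR | CR | UF) */
#define REQ_ACS_FLAGS 0x1d

/*
 * Hypothetical decision for one MFD function: it gets its own
 * iommu_group either when it has no ACS capability at all (reading the
 * spec as "no ACS capability implies no peer-to-peer") or when its
 * enabled ACS control bits include the full isolating set.
 */
static bool mfd_fn_isolated(bool has_acs_cap, unsigned int acs_ctrl)
{
	if (!has_acs_cap)
		return true;
	return (acs_ctrl & REQ_ACS_FLAGS) == REQ_ACS_FLAGS;
}
```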
> 
> I just dug through the thread with Don that I think tries to justify
> this, but I have a lot of concerns about this.  I think the "must be
> implemented by Functions that support peer-to-peer traffic with other
> Functions" language is specifying that IF the device implements an ACS
> capability AND does not implement the specific ACS P2P flag being
> described, then and only then can we assume that form of P2P is not
> supported.  OTOH, we cannot assume anything regarding internal P2P of an
> MFD that does not implement an ACS capability at all.
> 
The first requirement in PCIe spec 7.0, section 6.12.1.2 -- stated without any IF or AND conditions -- is:
"ACS P2P Request Redirect: must be implemented by Functions that support peer-to-peer traffic with other
Functions. This includes SR-IOV Virtual Functions (VFs)."
There is no further statement about control of peer-to-peer traffic, just the ability to do so, or not.

Note: ACS P2P Request Redirect.

Later in that section it says:
ACS P2P Completion Redirect: must be implemented by Functions that implement ACS P2P Request Redirect.

That can be read as: 'IF Request Redirect is implemented, THEN ACS P2P Completion Redirect must be implemented.'
IOW, the Completion Redirect control is required if Request Redirect is implemented, and not necessary if
Request Redirect is omitted.

If ACS P2P Request Redirect isn't implemented, then per the first requirement for MFDs,
the PCIe device does not support peer-to-peer traffic amongst its functions or virtual functions.

It goes on...
ACS Direct Translated P2P: must be implemented if the Function supports Address Translation Services (ATS)
and also peer-to-peer traffic with other Functions.

If an MFD does not do peer-to-peer, and P2P Request Redirect would have to be implemented if it did,
then this ACS control does not have to be implemented either.

Egress control structures are either optional or dependent on the Request Redirect and/or Direct Translated P2P
controls, which have been addressed above as not needed if there is no peer-to-peer between functions in an MFD
(and their VFs).
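The chain of requirements above can be written down as a conformance check. This is a sketch of my reading of 6.12.1.2, not kernel code; the ACS capability bit values are the ones from include/uapi/linux/pci_regs.h.

```c
#include <stdbool.h>

/* ACS capability bits from include/uapi/linux/pci_regs.h */
#define PCI_ACS_RR 0x04 /* P2P Request Redirect */
#define PCI_ACS_CR 0x08 /* P2P Completion Redirect */
#define PCI_ACS_DT 0x40 /* Direct Translated P2P */

/*
 * Sketch of the 6.12.1.2 reading above, applied to an MFD function's
 * ACS Capability register:
 *   - no RR  => the function does not support peer-to-peer
 *   - RR     => CR must also be implemented
 *   - ATS and P2P both supported => DT must be implemented
 * Returns false when the capability violates that reading.
 */
static bool acs_cap_conformant(unsigned int acs_cap, bool has_ats)
{
	bool may_do_p2p = acs_cap & PCI_ACS_RR;

	if ((acs_cap & PCI_ACS_RR) && !(acs_cap & PCI_ACS_CR))
		return false;
	if (has_ats && may_do_p2p && !(acs_cap & PCI_ACS_DT))
		return false;
	return true;
}
```

Note the no-ACS-capability case does not even reach this check: with no capability there is no RR, and so, per the first requirement, no P2P.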


Now, if previous PCIe spec versions (which I didn't read & re-read & re-read like I did section 6.12 of PCIe spec 7.0)
had more IFs and ANDs, then that could be cause for less-than-clear specmanship, enabling vendors of MFDs
to ship a non-PCIe-7.0-conformant MFD wrt ACS structures.
I searched section 6.12.1.2 for if/IF and AND/and, and did not find any conditions not stated above.

> I believe we even reached agreement with some NIC vendors in the early
> days of IOMMU groups that they needed to implement an "empty" ACS
> capability on their multifunction NICs such that they could describe in
> this way that internal P2P is not supported by the device.  Thanks,
> 
In the early days -- gen1->gen3 (2009->2015) -- I could see that happening.
I think time (a decade) has relegated those defaults to less-common quirks.
If 'empty ACS' is how they liked to do it back then, sure.
[A definition of 'empty ACS' may be needed to fully appreciate that statement, though.]
If this patch series needs to support an 'empty ACS' for this older case, let's add it now,
or follow up with another fix.

In summary, I still haven't found the IF and AND you refer to in section 6.12.1.2 for MFDs,
so if you want to quote the sections I mis-read, or whose (subtle?) conditions I mis-interpreted,
then I'm not immovable on the spec interpretation.

- Don

> Alex
> 
>>
>> For root-ports a PCIe topology like:
>>                                           -- Dev 01:00.0
>>    Root  00:00.00 --- Root Port 00:01.0 --|
>>                    |                      -- Dev 01:00.1
>>                    |- Dev 00:17.0
>>
>> Previously would group [00:01.0, 01:00.0, 01:00.1] together if there is no
>> ACS capability in the root port.
>>
>> While ACS on root ports is underspecified in the spec, it should still
>> function as an egress control and limit access to either the MMIO of the
>> root port itself, or perhaps some other devices upstream of the root
>> complex - 00:17.0 perhaps in this example.
>>
>> Historically the grouping in Linux has assumed the root port routes all
>> traffic into the TA/IOMMU and never bypasses the TA to go to other
>> functions in the root complex. Following the new understanding that an ACS
>> capability is required wherever internal loopback is possible, root ports
>> with no ACS capability are now treated as lacking internal loopback as well.
>>
>> There is also some confusing spec language about how ACS and SRIOV work,
>> which this series does not address.
>>
>>
>> This entire series goes further and makes some additional improvements to
>> the ACS validation found while studying this problem. The groups around a
>> PCIe to PCI bridge are shrunk to not include the PCIe bridge.
>>
>> The last patches implement "ACS Enhanced" on top of it. Due to how ACS
>> Enhanced was defined as a non-backward compatible feature it is important
>> to get SW support out there.
>>
>> Due to the potential of iommu_groups becoming wider and thus non-usable
>> for VFIO this should go to a linux-next tree to give it some more
>> exposure.
>>
>> I have now tested this on a few systems I could get:
>>
>>   - Various Intel client systems:
>>     * Raptor Lake, with VMD enabled and using the real_dev mechanism
>>     * 6/7th generation 100 Series/C320
>>     * 5/6th generation 100 Series/C320 with a NIC MFD quirk
>>     * Tiger Lake
>>     * 5/6th generation Sunrise Point
>>
>>    The 6/7th gen system has a root port without an ACS capability and it
>>    becomes ungrouped as described above.
>>
>>    All systems show changes; the MFDs in the root complex all become ungrouped.
>>
>>   - NVIDIA Grace system with 5 different PCI switches from two vendors
>>     Bug fix widening the iommu_groups works as expected here
>>
>> This is on github: https://github.com/jgunthorpe/linux/commits/pcie_switch_groups
>>
>> v3:
>>   - Rebase to v6.17-rc4
>>   - Drop the quirks related patches
>>   - Change the MFD logic to process no ACS cap as meaning no internal
>>     loopback. This avoids creating non-isolated groups for MFD root ports in
>>     common AMD and Intel systems
>>   - Fix matching MFDs to ignore SRIOV VFs
>>   - Fix some kbuild splats
>> v2: https://patch.msgid.link/r/0-v2-4a9b9c983431+10e2-pcie_switch_groups_jgg@nvidia.com
>>   - Revise comments and commit messages
>>   - Rename struct pci_alias_set to pci_reachable_set
>>   - Make more sense of the special bus->self = NULL case for SRIOV
>>   - Add pci_group_alloc_non_isolated() for readability
>>   - Rename BUS_DATA_PCI_UNISOLATED to BUS_DATA_PCI_NON_ISOLATED
>>   - Propagate BUS_DATA_PCI_NON_ISOLATED downstream from a MFD in case a MFD
>>     function is a bridge
>>   - New patches to add pci_mfd_isolation() to retain more cases of narrow
>>     groups on MFDs with missing ACS.
>>   - Redescribe the MFD related change as a bug fix. For a MFD to be
>>     isolated all functions must have egress control on their P2P.
>> v1: https://patch.msgid.link/r/0-v1-74184c5043c6+195-pcie_switch_groups_jgg@nvidia.com
>>
>> Cc: galshalom@nvidia.com
>> Cc: tdave@nvidia.com
>> Cc: maorg@nvidia.com
>> Cc: kvm@vger.kernel.org
>> Cc: Cédric Le Goater <clg@redhat.com>
>> Cc: Donald Dutile <ddutile@redhat.com>
>> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
>>
>> Jason Gunthorpe (11):
>>    PCI: Move REQ_ACS_FLAGS into pci_regs.h as PCI_ACS_ISOLATED
>>    PCI: Add pci_bus_isolated()
>>    iommu: Compute iommu_groups properly for PCIe switches
>>    iommu: Organize iommu_group by member size
>>    PCI: Add pci_reachable_set()
>>    iommu: Compute iommu_groups properly for PCIe MFDs
>>    iommu: Validate that pci_for_each_dma_alias() matches the groups
>>    PCI: Add the ACS Enhanced Capability definitions
>>    PCI: Enable ACS Enhanced bits for enable_acs and config_acs
>>    PCI: Check ACS DSP/USP redirect bits in pci_enable_pasid()
>>    PCI: Check ACS Extended flags for pci_bus_isolated()
>>
>>   drivers/iommu/iommu.c         | 510 +++++++++++++++++++++++-----------
>>   drivers/pci/ats.c             |   4 +-
>>   drivers/pci/pci.c             |  73 ++++-
>>   drivers/pci/search.c          | 274 ++++++++++++++++++
>>   include/linux/pci.h           |  46 +++
>>   include/uapi/linux/pci_regs.h |  18 ++
>>   6 files changed, 759 insertions(+), 166 deletions(-)
>>
>>
>> base-commit: b320789d6883cc00ac78ce83bccbfe7ed58afcf0
> 



Thread overview: 53+ messages
2025-09-05 18:06 [PATCH v3 00/11] Fix incorrect iommu_groups with PCIe ACS Jason Gunthorpe
2025-09-05 18:06 ` [PATCH v3 01/11] PCI: Move REQ_ACS_FLAGS into pci_regs.h as PCI_ACS_ISOLATED Jason Gunthorpe
2025-09-09  4:08   ` Donald Dutile
2025-09-05 18:06 ` [PATCH v3 02/11] PCI: Add pci_bus_isolated() Jason Gunthorpe
2025-09-09  4:09   ` Donald Dutile
2025-09-09 19:54   ` Bjorn Helgaas
2025-09-09 21:21     ` Jason Gunthorpe
2025-09-05 18:06 ` [PATCH v3 03/11] iommu: Compute iommu_groups properly for PCIe switches Jason Gunthorpe
2025-09-09  4:14   ` Donald Dutile
2025-09-09 12:18     ` Jason Gunthorpe
2025-09-09 19:33       ` Donald Dutile
2025-09-09 20:27   ` Bjorn Helgaas
2025-09-09 21:21     ` Jason Gunthorpe
2025-09-05 18:06 ` [PATCH v3 04/11] iommu: Organize iommu_group by member size Jason Gunthorpe
2025-09-09  4:16   ` Donald Dutile
2025-09-05 18:06 ` [PATCH v3 05/11] PCI: Add pci_reachable_set() Jason Gunthorpe
2025-09-09 21:03   ` Bjorn Helgaas
2025-09-10 16:13     ` Jason Gunthorpe
2025-09-11 19:56     ` Donald Dutile
2025-09-15 13:38       ` Jason Gunthorpe
2025-09-15 14:32         ` Donald Dutile
2025-09-05 18:06 ` [PATCH v3 06/11] iommu: Compute iommu_groups properly for PCIe MFDs Jason Gunthorpe
2025-09-09  4:57   ` Donald Dutile
2025-09-09 13:31     ` Jason Gunthorpe
2025-09-09 19:55       ` Donald Dutile
2025-09-09 21:24   ` Bjorn Helgaas
2025-09-09 23:20     ` Jason Gunthorpe
2025-09-10  1:59     ` Donald Dutile
2025-09-10 17:43       ` Jason Gunthorpe
2025-09-05 18:06 ` [PATCH v3 07/11] iommu: Validate that pci_for_each_dma_alias() matches the groups Jason Gunthorpe
2025-09-09  5:00   ` Donald Dutile
2025-09-09 15:35     ` Jason Gunthorpe
2025-09-09 19:58       ` Donald Dutile
2025-09-05 18:06 ` [PATCH v3 08/11] PCI: Add the ACS Enhanced Capability definitions Jason Gunthorpe
2025-09-09  5:01   ` Donald Dutile
2025-09-05 18:06 ` [PATCH v3 09/11] PCI: Enable ACS Enhanced bits for enable_acs and config_acs Jason Gunthorpe
2025-09-09  5:01   ` Donald Dutile
2025-09-05 18:06 ` [PATCH v3 10/11] PCI: Check ACS DSP/USP redirect bits in pci_enable_pasid() Jason Gunthorpe
2025-09-09  5:02   ` Donald Dutile
2025-09-09 21:43   ` Bjorn Helgaas
2025-09-10 17:34     ` Jason Gunthorpe
2025-09-11 19:50       ` Donald Dutile
2026-01-20 18:08   ` Keith Busch
2025-09-05 18:06 ` [PATCH v3 11/11] PCI: Check ACS Extended flags for pci_bus_isolated() Jason Gunthorpe
2025-09-09  5:04   ` Donald Dutile
2025-09-15  9:41 ` [PATCH v3 00/11] Fix incorrect iommu_groups with PCIe ACS Cédric Le Goater
2025-09-22 22:39 ` Alex Williamson
2025-09-23  1:44   ` Donald Dutile [this message]
2025-09-23  2:06     ` Alex Williamson
2025-09-23  2:42       ` Donald Dutile
2025-09-23 22:23         ` Alex Williamson
2025-09-30 15:23           ` Donald Dutile
2025-09-30 16:21             ` Jason Gunthorpe
