Linux PCI subsystem development
From: Alex Williamson <alex.williamson@redhat.com>
To: Donald Dutile <ddutile@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>,
	Bjorn Helgaas <bhelgaas@google.com>,
	iommu@lists.linux.dev, Joerg Roedel <joro@8bytes.org>,
	linux-pci@vger.kernel.org, Robin Murphy <robin.murphy@arm.com>,
	Will Deacon <will@kernel.org>,
	Lu Baolu <baolu.lu@linux.intel.com>,
	galshalom@nvidia.com, Joerg Roedel <jroedel@suse.de>,
	Kevin Tian <kevin.tian@intel.com>,
	kvm@vger.kernel.org, maorg@nvidia.com, patches@lists.linux.dev,
	tdave@nvidia.com, Tony Zhu <tony.zhu@intel.com>
Subject: Re: [PATCH 03/11] iommu: Compute iommu_groups properly for PCIe switches
Date: Mon, 22 Sep 2025 19:17:37 -0600	[thread overview]
Message-ID: <20250922191737.0df0dbed.alex.williamson@redhat.com> (raw)
In-Reply-To: <1845b412-e96d-438a-8c05-680ef70c04e6@redhat.com>

On Mon, 22 Sep 2025 20:51:31 -0400
Donald Dutile <ddutile@redhat.com> wrote:

> On 9/22/25 7:15 PM, Jason Gunthorpe wrote:
> > On Mon, Sep 22, 2025 at 04:32:00PM -0600, Alex Williamson wrote:  
> >> The ACS capability was only introduced in PCIe 2.0 and vendors have
> >> only become more diligent about implementing it as it's become
> >> important for device isolation and assignment.  
> The PCIe 2.0 spec was released in 2007, 18 years ago.
> If hw is on a 3-yr lifecycle, that's 6 generations (7 including this year's releases, assuming
> gen 1 was 2007); assuming a 5-yr hw cycle, that's 4 generations of hardware.
> 
> Maybe a more interesting date is when DC servers implemented device-assignment/SRIOV
> at full scale, and then determine the number of hw generations from that point on as
> 'learning -> devel-changing' years.
> I recall we had it in 'enterprise' customers in 2010, which only shaves one generation
> off the above counts.

I don't see the relevance of these timelines.  A vendor with their head
in the sand still has their head in the sand regardless of time
passing.  Device assignment has a heavy non-enterprise user base.

> > IDK about this, I have very new systems and they still do not have
> > ACS flags according to this interpretation.
> >   
> >> IMO, we can't assume anything at all about a multifunction device
> >> that does not implement ACS.  
> > 
> > Yeah this is all true.
> > 
> > But we are already assuming. Today we assume MFDs without caps must
> > have internal loopback in some cases, and then in other cases we
> > assume they don't.
> > 
> > I've sent and people have tested various different rules - please tell
> > me what you can live with.
> > 
> > Assuming the MFD does not have internal loopback, while not entirely
> > satisfactory, is the one that gives the least practical breakage.
> > 
> > I think it most accurately reflects the majority of real hardware out
> > there.
> > 
> > We can quirk to fix the remainder.
> > 
> > This is the best plan I've got..
> > 
> > Jason
> >   
> 
> +1 to Jason's conclusions.
> We should design the quirk hook to add ACS hooks for MFDs that do
> not adhere to the spec, which should be the minority; that's what
> quirks are supposed to handle -- the odd cases.

Sorry, I can't agree.  I think we're conflating two different things:
taking the lack of a specific ACS p2p capability to imply a lack of
internal p2p, versus the lack of an ACS capability at all.  I don't
believe we can infer anything from the latter.  Thanks,

Alex



Thread overview: 45+ messages
2025-06-30 22:28 [PATCH 00/11] Fix incorrect iommu_groups with PCIe switches Jason Gunthorpe
2025-06-30 22:28 ` [PATCH 01/11] PCI: Move REQ_ACS_FLAGS into pci_regs.h as PCI_ACS_ISOLATED Jason Gunthorpe
2025-06-30 22:28 ` [PATCH 02/11] PCI: Add pci_bus_isolation() Jason Gunthorpe
2025-07-01 19:28   ` Alex Williamson
2025-07-02  1:00     ` Jason Gunthorpe
2025-07-03 15:30     ` Jason Gunthorpe
2025-07-03 22:17       ` Alex Williamson
2025-07-03 23:08         ` Alex Williamson
2025-07-03 23:21           ` Jason Gunthorpe
2025-07-03 23:15         ` Jason Gunthorpe
2025-06-30 22:28 ` [PATCH 03/11] iommu: Compute iommu_groups properly for PCIe switches Jason Gunthorpe
2025-07-01 19:29   ` Alex Williamson
2025-07-02  1:04     ` Jason Gunthorpe
2025-07-17 19:25       ` Donald Dutile
2025-07-17 20:27         ` Jason Gunthorpe
2025-07-18  2:31           ` Donald Dutile
2025-07-18 13:32             ` Jason Gunthorpe
2025-09-22 22:32               ` Alex Williamson
2025-09-22 23:15                 ` Jason Gunthorpe
2025-09-23  0:51                   ` Donald Dutile
2025-09-23  1:17                     ` Alex Williamson [this message]
2025-09-23  1:10                   ` Alex Williamson
2025-09-23  2:26                     ` Donald Dutile
2025-09-23  2:50                       ` Alex Williamson
2025-09-23 12:32                         ` Jason Gunthorpe
2025-09-23 12:58                           ` Alex Williamson
2025-09-23 13:03                     ` Jason Gunthorpe
2025-09-23 21:29                       ` Alex Williamson
2025-09-25 12:20                         ` Jason Gunthorpe
2025-06-30 22:28 ` [PATCH 04/11] iommu: Organize iommu_group by member size Jason Gunthorpe
2025-06-30 22:28 ` [PATCH 05/11] PCI: Add pci_reachable_set() Jason Gunthorpe
2025-06-30 22:28 ` [PATCH 06/11] iommu: Use pci_reachable_set() in pci_device_group() Jason Gunthorpe
2025-06-30 22:28 ` [PATCH 07/11] iommu: Validate that pci_for_each_dma_alias() matches the groups Jason Gunthorpe
2025-06-30 22:28 ` [PATCH 08/11] PCI: Add the ACS Enhanced Capability definitions Jason Gunthorpe
2025-06-30 22:28 ` [PATCH 09/11] PCI: Enable ACS Enhanced bits for enable_acs and config_acs Jason Gunthorpe
2025-06-30 22:28 ` [PATCH 10/11] PCI: Check ACS DSP/USP redirect bits in pci_enable_pasid() Jason Gunthorpe
2025-06-30 22:28 ` [PATCH 11/11] PCI: Check ACS Extended flags for pci_bus_isolated() Jason Gunthorpe
2025-07-01 21:48 ` [PATCH 00/11] Fix incorrect iommu_groups with PCIe switches Alex Williamson
2025-07-02  1:47   ` Jason Gunthorpe
2025-07-04  0:37   ` Jason Gunthorpe
2025-07-11 14:55     ` Alex Williamson
2025-07-11 16:08       ` Jason Gunthorpe
2025-07-08 20:47   ` Jason Gunthorpe
2025-07-11 15:40     ` Alex Williamson
2025-07-11 16:14       ` Jason Gunthorpe
