From: "Christian König" <christian.koenig@amd.com>
To: "Kasireddy, Vivek" <vivek.kasireddy@intel.com>,
Jason Gunthorpe <jgg@nvidia.com>,
Simona Vetter <simona.vetter@ffwll.ch>
Cc: "dri-devel@lists.freedesktop.org"
<dri-devel@lists.freedesktop.org>,
"intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
Bjorn Helgaas <bhelgaas@google.com>,
Logan Gunthorpe <logang@deltatee.com>,
"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>
Subject: Re: [PATCH v4 1/5] PCI/P2PDMA: Don't enforce ACS check for device functions of Intel GPUs
Date: Mon, 22 Sep 2025 13:22:49 +0200
Message-ID: <045c6892-9b15-4f31-aa6a-1f45528500f1@amd.com>
In-Reply-To: <IA0PR11MB718504F59BFA080EC0963E94F812A@IA0PR11MB7185.namprd11.prod.outlook.com>
Hi guys,
On 22.09.25 08:59, Kasireddy, Vivek wrote:
> Hi Jason,
>
>> Subject: Re: [PATCH v4 1/5] PCI/P2PDMA: Don't enforce ACS check for device
>> functions of Intel GPUs
>>
>> On Fri, Sep 19, 2025 at 06:22:45AM +0000, Kasireddy, Vivek wrote:
>>>> In this case messing with ACS is completely wrong. If the intention is
>>>> to convey some kind of "private" address representing the physical
>>>> VRAM then you need to use a DMABUF mechanism to do that, not
>>>> deliver a P2P address that the other side cannot access.
>>
>>> I think using a PCI BAR Address works just fine in this case because the Xe
>>> driver bound to PF on the Host can easily determine that it belongs to one
>>> of the VFs and translate it into VRAM Address.
>>
>> That isn't how the P2P or ACS mechanism works in Linux; it is about
>> the actual address used for DMA.
> Right, but this is not dealing with P2P DMA access between two random,
> unrelated devices. Instead, this is a special situation involving a GPU PF
> trying to access the VRAM of a VF that it provisioned and holds a reference
> on (note that the backing object for VF's VRAM is pinned by Xe on Host
> as part of resource provisioning). But it gets treated as regular P2P DMA
> because the exporters rely on pci_p2pdma_distance() or
> pci_p2pdma_map_type() to determine P2P compatibility.
>
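As a rough sketch of the decision the exporters delegate to pci_p2pdma_map_type(): the real kernel code walks the PCI topology, while this userspace model takes the topology facts as plain inputs (the three booleans and the simplified rules are my assumptions for illustration, not the kernel implementation):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for the kernel's pci_p2pdma_map_type() results. */
enum p2pdma_map_type {
	P2PDMA_NOT_SUPPORTED,		/* the quirk-triggering case above */
	P2PDMA_MAP_BUS_ADDR,		/* peers exchange raw bus addresses */
	P2PDMA_MAP_THRU_HOST_BRIDGE,	/* traffic routed via the root complex */
};

/*
 * Model only: direct_path = peers share a switch path below the host
 * bridge; bridge_whitelisted = host bridge is known to route P2P TLPs;
 * acs_redirects = ACS forces traffic up through the root complex.
 */
static enum p2pdma_map_type
model_map_type(bool direct_path, bool bridge_whitelisted, bool acs_redirects)
{
	if (acs_redirects || !direct_path)
		return bridge_whitelisted ? P2PDMA_MAP_THRU_HOST_BRIDGE :
					    P2PDMA_NOT_SUPPORTED;
	return P2PDMA_MAP_BUS_ADDR;
}
```

On a desktop without a whitelisted host bridge, the last branch is what fails and what the quirk tries to sidestep.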
> In other words, I am trying to look at this problem differently: how can the
> PF be allowed to access the VF's resource that it provisioned, particularly
> when the VF itself requests the PF to access it and when a hardware path
> (via PCIe fabric) is not required/supported or doesn't exist at all?
Well, what exactly is happening here? You have a PF assigned to the host
and a VF passed through to a guest, correct?
And now the PF (from the host side) wants to access a BAR of the VF?
Regards,
Christian.
>
> Furthermore, note that on a server system with a whitelisted PCIe upstream
> bridge, this quirk would not be needed at all as pci_p2pdma_map_type()
> would not have failed and this would have been a purely Xe driver specific
> problem to solve that would have required just the translation logic and no
> further changes anywhere. But my goal is to fix it across systems like
> workstations/desktops that do not typically have whitelisted PCIe upstream
> bridges.
>
>>
>> You can't translate a dma_addr_t to anything in the Xe PF driver
>> anyhow, once it goes through the IOMMU the necessary information is lost.
> Well, I already tested this path (via IOMMU, with your earlier vfio-pci +
> dmabuf patch that used dma_map_resource() and also with Leon's latest
> version) and found that I could still do the translation in the Xe PF driver
> after first calling iommu_iova_to_phys().
>
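The reverse path described there is a two-step lookup: iommu_iova_to_phys() in the kernel walks the IOMMU page tables to recover the physical address behind a DMA address. This userspace model replaces the page-table walk with a tiny static table (all values invented) just to show the arithmetic:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* One mock IOVA -> physical mapping, standing in for the IOMMU tables. */
struct iova_entry { uint64_t iova, phys, len; };

static const struct iova_entry mock_iommu[] = {
	/* invented mapping covering the VF's BAR region */
	{ .iova = 0xfff00000ull, .phys = 0x4000000000ull, .len = 0x100000 },
};

/* Model of iommu_iova_to_phys(): returns 0 when there is no translation. */
static uint64_t mock_iova_to_phys(uint64_t iova)
{
	for (size_t i = 0; i < sizeof(mock_iommu) / sizeof(mock_iommu[0]); i++) {
		const struct iova_entry *e = &mock_iommu[i];
		if (iova >= e->iova && iova < e->iova + e->len)
			return e->phys + (iova - e->iova);
	}
	return 0;
}
```

The recovered physical address would then be fed into the BAR-to-VRAM translation the Xe PF driver performs.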
>> This is a fundamentally broken design to dma map something and
>> then try to reverse engineer the dma_addr_t back to something with
>> meaning.
> IIUC, I don't think this is a new or radical idea. I think the concept is somewhat
> similar to using bounce buffers to address hardware DMA limitations except
> that there are no memory copies and the CPU is not involved. And, I don't see
> any other way to do this because I don't believe the exporter can provide a
> DMA address that the importer can use directly without any translation, which
> seems unavoidable in this case.
>
>>
>>>> Christian told me dmabuf has such a private address mechanism, so
>>>> please figure out a way to use it..
>>>
>>> Even if such as a mechanism exists, we still need a way to prevent
>>> pci_p2pdma_map_type() from failing when invoked by the exporter (vfio-pci).
>>> Does it make sense to move this quirk into the exporter?
>>
>> When you export a private address through dmabuf the VFIO exporter
>> will not call p2pdma paths when generating it.
> I have cc'd Christian and Simona. Hopefully, they can help explain how
> the dmabuf private address mechanism can be used to address my
> use-case. And, I sincerely hope that it will work, otherwise I don't see
> any viable path forward for what I am trying to do other than using this
> quirk and translation. Note that the main reason why I am doing this
> is because I am seeing at least ~35% performance gain when running
> light 3D/Gfx workloads.
>
>>
>>> Also, AFAICS, translating BAR Address to VRAM Address can only be
>>> done by the Xe driver bound to PF because it has access to provisioning
>>> data. In other words, vfio-pci would not be able to share any other
>>> address other than the BAR Address because it wouldn't know how to
>>> translate it to VRAM Address.
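Since the PF provisioned the VF's VRAM, the translation itself is linear arithmetic over the provisioning data. A sketch of that step (struct layout, field names, and base values are all invented for illustration, not Xe code):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-VF provisioning record the PF driver would hold. */
struct vf_provision {
	uint64_t bar_base;	/* VF LMEM BAR base address */
	uint64_t bar_size;
	uint64_t vram_base;	/* device-local VRAM range backing the BAR */
};

/* Map a BAR address onto the backing VRAM; UINT64_MAX if out of range. */
static uint64_t bar_to_vram(const struct vf_provision *vf, uint64_t bar_addr)
{
	if (bar_addr < vf->bar_base || bar_addr - vf->bar_base >= vf->bar_size)
		return UINT64_MAX;
	return vf->vram_base + (bar_addr - vf->bar_base);
}
```

Only the driver holding the provisioning record can do this, which is the point being made about vfio-pci.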
>>
>> If you have a vfio variant driver then the VF vfio driver could call
>> the Xe driver to create a suitable dmabuf using the private
>> addressing. This is probably what is required here if this is what you
>> are trying to do.
> Could this not be done via the vendor agnostic vfio-pci (+ dmabuf) driver
> instead of having to use a separate VF/vfio variant driver?
>
>>
>>>> No, don't, it is completely wrong to mess with ACS flags for the
>>>> problem you are trying to solve.
>>
>>> But I am not messing with any ACS flags here. I am just adding a quirk to
>>> sidestep the ACS enforcement check given that the PF to VF access does
>>> not involve the PCIe fabric in this case.
>>
>> Which is completely wrong. These are all based on fabric capability,
>> not based on code in drivers to wrongly "translate" the dma_addr_t.
> I am not sure why you consider translation to be wrong in this case
> given that it is done by a trusted entity (Xe PF driver) that is bound to
> the GPU PF and provisioned the resource that it is trying to access.
> What limitations do you see with this approach?
>
> Also, the quirk being added in this patch is indeed meant to address a
> specific case (GPU PF to VF access) to work around a potential hardware
> limitation (non-existence of a direct PF to VF DMA access path via the
> PCIe fabric). Isn't that one of the main ideas behind using quirks -- to
> address hardware limitations?
>
> Thanks,
> Vivek
>
>>
>> Jason