From: "Christian König" <christian.koenig@amd.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
"Kasireddy, Vivek" <vivek.kasireddy@intel.com>,
"Jason Gunthorpe" <jgg@nvidia.com>
Cc: "Brost, Matthew" <matthew.brost@intel.com>,
Simona Vetter <simona.vetter@ffwll.ch>,
"dri-devel@lists.freedesktop.org"
<dri-devel@lists.freedesktop.org>,
"intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
Bjorn Helgaas <bhelgaas@google.com>,
Logan Gunthorpe <logang@deltatee.com>,
"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>
Subject: Re: [PATCH v4 1/5] PCI/P2PDMA: Don't enforce ACS check for device functions of Intel GPUs
Date: Thu, 25 Sep 2025 13:28:51 +0200 [thread overview]
Message-ID: <50c946f3-08c5-421e-80bf-61834a58eddf@amd.com> (raw)
In-Reply-To: <ab09c09638c4482f99047672680c247b98c961c9.camel@linux.intel.com>
On 25.09.25 12:51, Thomas Hellström wrote:
>>> In that case I strongly suggest to add a private DMA-buf interface
>>> for the DMA-
>>> bufs exported by vfio-pci which returns which BAR and offset the
>>> DMA-buf
>>> represents.
>
> @Christian, is what you're referring to here the "dma_buf private
> interconnect" we've been discussing previously, now only between vfio-
> pci and any interested importers instead of private to a known exporter
> and importer?
>
> If so I have a POC I can post as an RFC on a way to negotiate such an
> interconnect.
I was just about to write something up as well, but feel free to go ahead if you already have something.
>> Does this private dmabuf interface already exist or does it need to
>> be created
>> from the ground up?
Every driver which supports both exporting and importing DMA-bufs has code to detect when somebody tries to re-import a buffer it previously exported from the same device.
Now some drivers, like amdgpu and I think Xe as well, additionally detect whether the buffer comes from another device handled by the same driver, since such devices potentially have private interconnects (XGMI or similar).
See function amdgpu_dmabuf_is_xgmi_accessible() in amdgpu_dma_buf.c for an example.
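On the importer side it roughly boils down to something like this (just a sketch with made-up names and types, the real thing is amdgpu_dmabuf_is_xgmi_accessible()):

    static bool my_dmabuf_is_local_or_peer(struct my_device *dev,
                                           struct dma_buf *dma_buf)
    {
            struct my_bo *bo;

            /* Not exported by our driver at all -> normal P2P/system import */
            if (dma_buf->ops != &my_dmabuf_ops)
                    return false;

            bo = dma_buf->priv;

            /* Re-import of a buffer we exported ourselves */
            if (bo->dev == dev)
                    return true;

            /*
             * Same driver, but a different device: only usable directly if
             * the two devices share a private interconnect (XGMI etc.).
             */
            return my_devices_share_interconnect(dev, bo->dev);
    }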
>> If it already exists, could you please share an example/reference of
>> how you
>> have used it with amdgpu or other drivers?
Well, what's new is that we need to do this between two drivers unrelated to each other.
As far as I know this was previously all contained within a single vendor's drivers (AMD's, for example), while in this case vfio is a common, vendor-agnostic driver.
So we should probably make sure to get that right and keep it vendor agnostic etc....
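For vfio-pci that could look something like this (completely made-up names, just to illustrate the idea of a query that returns which BAR and offset a DMA-buf represents):

    struct vfio_pci_dma_buf_region {
            struct pci_dev *pdev;    /* exporting (VF) PCI device */
            unsigned int bar;        /* BAR index the buffer lives in */
            resource_size_t offset;  /* offset into that BAR */
            resource_size_t size;    /* size of the exported range */
    };

    /* Fails if the DMA-buf wasn't exported by vfio-pci */
    int vfio_pci_dma_buf_get_region(struct dma_buf *dma_buf,
                                    struct vfio_pci_dma_buf_region *region);

An importer like Xe would then validate that pci_physfn(region.pdev) is a device it feels responsible for, translate the BAR offset into a VRAM offset and scan out from there; iommufd could use the same query for its purposes.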
>> If it doesn't exist, I was wondering if it should be based on any
>> particular best
>> practices/ideas (or design patterns) that already exist in other
>> drivers?
>
> @Vivek, another question: Also on the guest side we're exporting dma-
> mapped addresses that are imported and somehow decoded by the guest
> virtio-gpu driver? Is something similar needed there?
>
> Also how would the guest side VF driver know that what is assumed to be
> a PF on the same device is actually a PF on the same device, and not a
> completely different device with another driver (in which case I assume
> it would want to export a system dma-buf)?
Another question is how lifetime is handled. E.g. does the guest know that a DMA-buf exists for its BAR area?
Regards,
Christian.
>
> Thanks,
> Thomas
>
>
>
>>
>>>
>>> Ideally using the same structure Qemu used to provide the offset to
>>> the vfio-
>>> pci driver, but not a must have.
>>>
>>> This way the driver for the GPU PF (XE) can leverage this
>>> interface, validates
>>> that the DMA-buf comes from a VF it feels responsible for and do
>>> the math to
>>> figure out in which parts of the VRAM needs to be accessed to
>>> scanout the
>>> picture.
>> Sounds good. This is definitely a viable path forward and it looks
>> like we are all
>> in agreement with this idea.
>>
>> I guess we can start exploring how to implement the private dmabuf
>> interface
>> mechanism right away.
>>
>> Thanks,
>> Vivek
>>
>>>
>>> This way this private vfio-pci interface can also be used by
>>> iommufd for
>>> example.
>>>
>>> Regards,
>>> Christian.
>>>
>>>>
>>>> Thanks,
>>>> Vivek
>>>>
>>>>>
>>>>> Regards,
>>>>> Christian.
>>>>>
>>>>>>
>>>>>>> What Simona agreed on is exactly what I proposed as well,
>>>>>>> that you
>>>>>>> get a private interface for exactly that use case.
>>>>>>
>>>>>> A "private" interface to exchange phys_addr_t between at
>>>>>> least
>>>>>> VFIO/KVM/iommufd - sure no complaint with that.
>>>>>>
>>>>>> Jason
>>>>
>>
>