From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: "Kasireddy, Vivek" <vivek.kasireddy@intel.com>,
"Christian König" <christian.koenig@amd.com>,
"Jason Gunthorpe" <jgg@nvidia.com>
Cc: "Brost, Matthew" <matthew.brost@intel.com>,
Simona Vetter <simona.vetter@ffwll.ch>,
"dri-devel@lists.freedesktop.org"
<dri-devel@lists.freedesktop.org>,
"intel-xe@lists.freedesktop.org"
<intel-xe@lists.freedesktop.org>,
Bjorn Helgaas <bhelgaas@google.com>,
Logan Gunthorpe <logang@deltatee.com>,
"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>
Subject: Re: [PATCH v4 1/5] PCI/P2PDMA: Don't enforce ACS check for device functions of Intel GPUs
Date: Thu, 25 Sep 2025 12:51:47 +0200
Message-ID: <ab09c09638c4482f99047672680c247b98c961c9.camel@linux.intel.com>
In-Reply-To: <IA0PR11MB7185239DB2331A899561AA7DF81FA@IA0PR11MB7185.namprd11.prod.outlook.com>
On Thu, 2025-09-25 at 03:56 +0000, Kasireddy, Vivek wrote:
> Hi Christian,
>
> > Subject: Re: [PATCH v4 1/5] PCI/P2PDMA: Don't enforce ACS check for
> > device functions of Intel GPUs
> >
> > > >
> > > > On 23.09.25 15:38, Jason Gunthorpe wrote:
> > > > > On Tue, Sep 23, 2025 at 03:28:53PM +0200, Christian König
> > > > > wrote:
> > > > > > On 23.09.25 15:12, Jason Gunthorpe wrote:
> > > > > > > > When you want to communicate addresses in a device
> > > > > > > > specific address space you need a device specific type
> > > > > > > > for that and not abuse phys_addr_t.
> > > > > > >
> > > > > > > I'm not talking about abusing phys_addr_t, I'm talking
> > > > > > > about putting a legitimate CPU address in there.
> > > > > > >
> > > > > > > You can argue it is a hack in Xe to reverse engineer the
> > > > > > > VRAM offset from a CPU physical address, and I would be
> > > > > > > sympathetic, but it does allow VFIO to be general, not
> > > > > > > specialized to Xe.
> > > > > >
> > > > > > No, exactly that doesn't work for all use cases. That's
> > > > > > why I'm pushing back so hard on using phys_addr_t or CPU
> > > > > > addresses.
> > > > > >
> > > > > > See, the CPU address is only temporarily valid because the
> > > > > > VF BAR is only a window into the device memory.
> > > > >
> > > > > I know, generally yes.
> > > > >
> > > > > But there should be no way that a VFIO VF driver in the
> > > > > hypervisor knows what is currently mapped to the VF's BAR.
> > > > > The only way I can make sense of what Xe is doing here is if
> > > > > the VF BAR is a static aperture of the VRAM.
> > > > >
> > > > > Would be nice to know the details.
> > > >
> > > > Yeah, that's why I asked how VFIO gets the information about
> > > > which parts of its BAR should be part of the DMA-buf.
> > > >
> > > > That would be really interesting to know.
> > > As Jason guessed, we are relying on the GPU VF being a Large BAR
> > > device here. In other words, as you suggested, this will not
> > > work if the VF BAR size is not as big as its actual VRAM portion.
> > > We can certainly add this check, but we have not seen either the
> > > GPU PF or VF getting detected as a Small BAR device in various
> > > test environments.
> > >
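(As an aside, such a check on the exporter side could be as simple as
the sketch below; the BAR index and the vf_lmem_size() helper are
placeholders for however the driver looks up the VF's provisioned VRAM
size, not existing API:)

	/* Hypothetical: reject the BAR-offset scheme for small-BAR VFs. */
	static bool vf_is_large_bar(struct pci_dev *vf)
	{
		/* VRAM BAR index assumed to be 2 here, as on discrete parts. */
		resource_size_t bar_len = pci_resource_len(vf, 2);

		return bar_len >= vf_lmem_size(vf);
	}
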
> > > So, given the above, once a VF device is bound to the vfio-pci
> > > driver and assigned to a Guest VM (launched via Qemu), Qemu's
> > > vfio layer maps all the VF's resources, including the BARs. This
> > > mapping info (specifically the HVA) is leveraged by Qemu to
> > > identify the offset at which the Guest VM's buffer is located in
> > > the BAR, and this info is then provided to the vfio-pci kernel
> > > driver, which finally creates the dmabuf (with BAR addresses).
> >
> > In that case I strongly suggest adding a private DMA-buf interface
> > for the DMA-bufs exported by vfio-pci which returns which BAR and
> > offset the DMA-buf represents.
@Christian, is what you're referring to here the "dma_buf private
interconnect" we've been discussing previously, now between vfio-pci
and any interested importer instead of being private to a known
exporter-importer pair?

If so, I have a POC I can post as an RFC showing a way to negotiate
such an interconnect.
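(To make the question concrete, what I have in mind is roughly the
sketch below; every name in it is made up for illustration, and in
particular there is no get_interconnect hook in dma_buf_ops today:)

	/* Hypothetical description of the BAR region backing a dmabuf. */
	struct dma_buf_interconnect_vfio_bar {
		struct pci_dev *pdev;	/* exporting VF */
		unsigned int bar;	/* BAR index the buffer lives in */
		u64 offset;		/* buffer offset within the BAR */
		u64 size;		/* buffer size in bytes */
	};

	static int xe_query_vfio_bar(struct dma_buf *dmabuf,
				     struct dma_buf_interconnect_vfio_bar *desc)
	{
		/* The importer asks whether the exporter implements
		 * this particular private interconnect. */
		if (!dmabuf->ops->get_interconnect)
			return -EOPNOTSUPP;

		return dmabuf->ops->get_interconnect(dmabuf,
						     DMA_BUF_INTERCONNECT_VFIO_BAR,
						     desc);
	}
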
> Does this private dmabuf interface already exist or does it need to
> be created from the ground up?
>
> If it already exists, could you please share an example/reference of
> how you have used it with amdgpu or other drivers?
>
> If it doesn't exist, I was wondering if it should be based on any
> particular best practices/ideas (or design patterns) that already
> exist in other drivers?
@Vivek, another question: on the guest side we're also exporting
dma-mapped addresses that are imported and somehow decoded by the
guest virtio-gpu driver? Is something similar needed there?

Also, how would the guest-side VF driver know that what it assumes to
be a PF on the same device actually is one, and not a completely
different device with another driver? (In which case I assume it would
want to export a system dma-buf.)
Thanks,
Thomas
>
> >
> > Ideally using the same structure Qemu used to provide the offset
> > to the vfio-pci driver, but that's not a must-have.
> >
> > This way the driver for the GPU PF (Xe) can leverage this
> > interface, validate that the DMA-buf comes from a VF it feels
> > responsible for, and do the math to figure out which parts of the
> > VRAM need to be accessed to scan out the picture.
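(Spelling out that math, under the assumption discussed above that the
VF BAR is a static, linear window into the VF's VRAM slice; the
vf_vram_base() helper is hypothetical, standing in for wherever the PF
driver records the VRAM offset it assigned to the VF at provisioning
time:)

	/* Hypothetical PF-side translation of a VF BAR offset to VRAM. */
	static int xe_vf_bar_to_vram(struct xe_device *xe, struct pci_dev *vf,
				     u64 bar_offset, u64 *vram_addr)
	{
		/* Only accept DMA-bufs from VFs this PF actually owns. */
		if (pci_physfn(vf) != to_pci_dev(xe->drm.dev))
			return -ENODEV;

		*vram_addr = vf_vram_base(xe, vf) + bar_offset;
		return 0;
	}
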
> Sounds good. This is definitely a viable path forward, and it looks
> like we are all in agreement on this idea.
>
> I guess we can start exploring how to implement the private dmabuf
> interface mechanism right away.
>
> Thanks,
> Vivek
>
> >
> > This way this private vfio-pci interface can also be used by
> > iommufd, for example.
> >
> > Regards,
> > Christian.
> >
> > >
> > > Thanks,
> > > Vivek
> > >
> > > >
> > > > Regards,
> > > > Christian.
> > > >
> > > > >
> > > > > > What Simona agreed on is exactly what I proposed as well:
> > > > > > that you get a private interface for exactly that use case.
> > > > >
> > > > > A "private" interface to exchange phys_addr_t between at
> > > > > least VFIO/KVM/iommufd - sure, no complaint with that.
> > > > >
> > > > > Jason
> > >
>