From: Simona Vetter <simona.vetter@ffwll.ch>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: "Christian König" <christian.koenig@amd.com>,
"Christoph Hellwig" <hch@lst.de>,
"Leon Romanovsky" <leonro@nvidia.com>,
"Xu Yilun" <yilun.xu@linux.intel.com>,
kvm@vger.kernel.org, dri-devel@lists.freedesktop.org,
linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org,
sumit.semwal@linaro.org, pbonzini@redhat.com, seanjc@google.com,
alex.williamson@redhat.com, vivek.kasireddy@intel.com,
dan.j.williams@intel.com, aik@amd.com, yilun.xu@intel.com,
linux-coco@lists.linux.dev, linux-kernel@vger.kernel.org,
lukas@wunner.de, yan.y.zhao@intel.com, daniel.vetter@ffwll.ch,
leon@kernel.org, baolu.lu@linux.intel.com,
zhenzhong.duan@intel.com, tao1.su@intel.com
Subject: Re: [RFC PATCH 01/12] dma-buf: Introduce dma_buf_get_pfn_unlocked() kAPI
Date: Wed, 8 Jan 2025 19:44:54 +0100 [thread overview]
Message-ID: <Z37HpvHAfB0g9OQ-@phenom.ffwll.local> (raw)
In-Reply-To: <20250108162227.GT5556@nvidia.com>
On Wed, Jan 08, 2025 at 12:22:27PM -0400, Jason Gunthorpe wrote:
> On Wed, Jan 08, 2025 at 04:25:54PM +0100, Christian König wrote:
> > Am 08.01.25 um 15:58 schrieb Jason Gunthorpe:
> > > I have imagined a staged approach were DMABUF gets a new API that
> > > works with the new DMA API to do importer mapping with "P2P source
> > > information" and a gradual conversion.
> >
> > To make it clear as maintainer of that subsystem I would reject such a step
> > with all I have.
>
> This is unexpected, so you want to just leave dmabuf broken? Do you
> have any plan to fix it, to fix the misuse of the DMA API, and all
> the problems I listed below? This is a big deal, it is causing real
> problems today.
>
> If it going to be like this I think we will stop trying to use dmabuf
> and do something simpler for vfio/kvm/iommufd :(
As the gal who helped edit the original dma-buf spec 13 years ago, I think
adding pfn support isn't a terrible idea. By design, dma-buf is the
"everything is optional" interface. And in the beginning even consistent
locking was optional, but we've managed to fix that by now :-/
Where I do agree with Christian is that stuffing pfn support into the
dma_buf_attachment interfaces feels rather wrong.
> > We have already gone down that road and it didn't worked at all and
> > was a really big pain to pull people back from it.
>
> Nobody has really seriously tried to improve the DMA API before, so I
> don't think this is true at all.
Aside, I really hope this finally happens!
> > > 3) Importing devices need to know if they are working with PCI P2P
> > > addresses during mapping because they need to do things like turn on
> > > ATS on their DMA. As for multi-path we have the same hacks inside mlx5
> > > today that assume DMABUFs are always P2P because we cannot determine
> > > if things are P2P or not after being DMA mapped.
> >
> > Why would you need ATS on PCI P2P and not for system memory accesses?
>
> ATS has a significant performance cost. It is mandatory for PCI P2P,
> but ideally should be avoided for CPU memory.
Huh, I didn't know that. And yeah kinda means we've butchered the pci p2p
stuff a bit I guess ...
> > > 5) iommufd and kvm are both using CPU addresses without DMA. No
> > > exporter mapping is possible
> >
> > We have customers using both KVM and XEN with DMA-buf, so I can clearly
> > confirm that this isn't true.
>
> Today they are mmaping the dma-buf into a VMA and then using KVM's
> follow_pfn() flow to extract the CPU pfn from the PTE. Any mmapable
> dma-buf must have a CPU PFN.
>
> Here Xu implements basically the same path, except without the VMA
> indirection, and it suddenly not OK? Illogical.
So the big difference is that for follow_pfn() you need mmu_notifier since
the mmap might move around, whereas with pfn smashed into
dma_buf_attachment you need dma_resv_lock rules, and the move_notify
callback if you go dynamic.
So I guess my first question is, which locking rules do you want here for
pfn importers?
If mmu notifiers are fine, then I think the current approach of follow_pfn
should be ok. But if you instead want dma_resv_lock rules (or the cpu mmap
itself is somehow an issue), then I think the clean design is to create a
new, separate access mechanism just for that. It would be the 5th or so
(kernel vmap, userspace mmap, dma_buf_attach and driver-private stuff like
virtio_dma_buf.c, where you access your buffer with a uuid), so really not
a big deal.
And for non-contrived exporters we might be able to implement the other
access methods in terms of the pfn method generically, so this wouldn't
even be a terrible maintenance burden going forward. And meanwhile all the
contrived exporters just keep working as-is.
The other part is that cpu mmap is optional, and there are plenty of
strange exporters who don't implement it, even though you can dma map
their attachments into plenty of devices. This tends to mostly be a thing
on SoC devices with some very funky memory. But I guess you don't care
about these use-cases, so that should be ok.
I couldn't come up with a good name for these pfn users, maybe
dma_buf_pfn_attachment? This would _not_ have a struct device, but maybe
carry some of these new p2p source specifiers (or a list of those which
are allowed; no idea yet how this would need to fit into the new dma api).
Cheers, Sima
--
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch