From: Leon Romanovsky <leon@kernel.org>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: "Alex Williamson" <alex.williamson@redhat.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Bjorn Helgaas" <bhelgaas@google.com>,
"Christian König" <christian.koenig@amd.com>,
dri-devel@lists.freedesktop.org, iommu@lists.linux.dev,
"Jens Axboe" <axboe@kernel.dk>, "Joerg Roedel" <joro@8bytes.org>,
kvm@vger.kernel.org, linaro-mm-sig@lists.linaro.org,
linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-media@vger.kernel.org, linux-mm@kvack.org,
linux-pci@vger.kernel.org,
"Logan Gunthorpe" <logang@deltatee.com>,
"Marek Szyprowski" <m.szyprowski@samsung.com>,
"Robin Murphy" <robin.murphy@arm.com>,
"Sumit Semwal" <sumit.semwal@linaro.org>,
"Vivek Kasireddy" <vivek.kasireddy@intel.com>,
"Will Deacon" <will@kernel.org>
Subject: Re: [PATCH v5 9/9] vfio/pci: Add dma-buf export support for MMIO regions
Date: Fri, 17 Oct 2025 08:40:07 +0300
Message-ID: <20251017054007.GB6199@unreal>
In-Reply-To: <20251016235332.GA265079@nvidia.com>
On Thu, Oct 16, 2025 at 08:53:32PM -0300, Jason Gunthorpe wrote:
> On Mon, Oct 13, 2025 at 06:26:11PM +0300, Leon Romanovsky wrote:
> > +
> > +static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
> > + struct dma_buf_attachment *attachment)
> > +{
> > + struct vfio_pci_dma_buf *priv = dmabuf->priv;
> > +
> > + if (!attachment->peer2peer)
> > + return -EOPNOTSUPP;
> > +
> > + if (priv->revoked)
> > + return -ENODEV;
> > +
> > + switch (pci_p2pdma_map_type(priv->provider, attachment->dev)) {
> > + case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
> > + break;
> > + case PCI_P2PDMA_MAP_BUS_ADDR:
> > + /*
> > + * There is no need for an IOVA at all in this flow.
> > + * We rely on attachment->priv == NULL as a marker
> > + * for this mode.
> > + */
> > + return 0;
> > + default:
> > + return -EINVAL;
> > + }
> > +
> > + attachment->priv = kzalloc(sizeof(struct dma_iova_state), GFP_KERNEL);
> > + if (!attachment->priv)
> > + return -ENOMEM;
> > +
> > + dma_iova_try_alloc(attachment->dev, attachment->priv, 0, priv->size);
>
> The lifetime of this isn't good..
>
> > + return 0;
> > +}
> > +
> > +static void vfio_pci_dma_buf_detach(struct dma_buf *dmabuf,
> > + struct dma_buf_attachment *attachment)
> > +{
> > + kfree(attachment->priv);
> > +}
>
> If the caller fails to call map then it leaks the iova.
I'm relying on the dma-buf code and documentation:
926 /**
927 * dma_buf_dynamic_attach - Add the device to dma_buf's attachments list
...
932 *
933 * Returns struct dma_buf_attachment pointer for this attachment. Attachments
934 * must be cleaned up by calling dma_buf_detach().
A successful call to vfio_pci_dma_buf_attach() MUST be accompanied by a call
to vfio_pci_dma_buf_detach(), so as long as the dma-buf implementation follows
that rule, there is no leak.
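To spell out the flow I'm relying on, roughly, from the importer side
(hypothetical importer, only to illustrate the contract; importer_ops and
importer_priv are placeholders):

	attach = dma_buf_dynamic_attach(dmabuf, dev, &importer_ops, importer_priv);
	if (IS_ERR(attach))
		return PTR_ERR(attach);

	/* ... even if the importer never calls dma_buf_map_attachment() ... */

	dma_buf_detach(dmabuf, attach);	/* ... this must still happen */

and dma_buf_detach() is what ends up calling vfio_pci_dma_buf_detach().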
>
> > +static struct sg_table *
> > +vfio_pci_dma_buf_map(struct dma_buf_attachment *attachment,
> > + enum dma_data_direction dir)
> > +{
> [..]
>
>
> > +err_unmap_dma:
> > + if (!i || !state)
> > + ; /* Do nothing */
> > + else if (dma_use_iova(state))
> > + dma_iova_destroy(attachment->dev, state, mapped_len, dir,
> > + attrs);
>
> If we hit this error path then it is freed..
>
> > +static void vfio_pci_dma_buf_unmap(struct dma_buf_attachment *attachment,
> > + struct sg_table *sgt,
> > + enum dma_data_direction dir)
> > +{
> > + struct vfio_pci_dma_buf *priv = attachment->dmabuf->priv;
> > + struct dma_iova_state *state = attachment->priv;
> > + unsigned long attrs = DMA_ATTR_MMIO;
> > + struct scatterlist *sgl;
> > + int i;
> > +
> > + if (!state)
> > + ; /* Do nothing */
> > + else if (dma_use_iova(state))
> > + dma_iova_destroy(attachment->dev, state, priv->size, dir,
> > + attrs);
>
> It is freed here too, but we can call map multiple times. Every time a
> move_notify happens, it can trigger another call to map.
>
> I think just call unlink in those two and put dma_iova_free in detach
Yes, it can work.
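Something along these lines (a rough, untested sketch on top of the v5 code;
the dma_use_iova() check in detach is my assumption to cover the case where
dma_iova_try_alloc() failed and we fell back to the per-page path):

	static void vfio_pci_dma_buf_detach(struct dma_buf *dmabuf,
					    struct dma_buf_attachment *attachment)
	{
		struct dma_iova_state *state = attachment->priv;

		/* Release the IOVA allocated in attach, even if map was never called */
		if (state && dma_use_iova(state))
			dma_iova_free(attachment->dev, state);
		kfree(state);
	}

and in vfio_pci_dma_buf_unmap() and the vfio_pci_dma_buf_map() error path,
replace dma_iova_destroy() with

		dma_iova_unlink(attachment->dev, state, 0, priv->size, dir, attrs);

(mapped_len instead of priv->size in the error path), so the IOVA allocation
survives repeated map/unmap cycles and is only released in detach.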
Thanks
>
> Jason