From: "Cédric Le Goater" <clg@redhat.com>
To: Vivek Kasireddy <vivek.kasireddy@intel.com>, qemu-devel@nongnu.org
Cc: "Marc-André Lureau" <marcandre.lureau@redhat.com>,
"Alex Bennée" <alex.bennee@linaro.org>,
"Akihiko Odaki" <odaki@rsg.ci.i.u-tokyo.ac.jp>,
"Dmitry Osipenko" <dmitry.osipenko@collabora.com>
Subject: Re: [PATCH v1 7/7] virtio-gpu-udmabuf: Create dmabuf for blobs associated with VFIO devices
Date: Mon, 6 Oct 2025 17:59:49 +0200
Message-ID: <cd0b246e-7f75-4df6-b1e7-8ae41834f6d1@redhat.com>
In-Reply-To: <20251003234138.85820-8-vivek.kasireddy@intel.com>
On 10/4/25 01:36, Vivek Kasireddy wrote:
> In addition to memfd, a blob resource can also have its backing
> storage in a VFIO device region. Therefore, we first need to figure
> out if the blob is backed by a VFIO device region or a memfd before
> we can call the right API to get a dmabuf fd created.
>
> So, once we have the ramblock and the associated mr, we rely on
> memory_region_is_ram_device() to tell us where the backing storage
> is located. If the blob resource is VFIO backed, we try to find the
> right VFIO device that contains the blob and then invoke the API
> vfio_device_create_dmabuf().
>
> Note that in virtio_gpu_remap_udmabuf(), we first test whether the
> VFIO dmabuf exporter supports mmap. If it doesn't, we use the VFIO
> device fd directly to create the CPU mapping.
>
> Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
> Cc: Alex Bennée <alex.bennee@linaro.org>
> Cc: Akihiko Odaki <odaki@rsg.ci.i.u-tokyo.ac.jp>
> Cc: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
> ---
>  hw/display/Kconfig              |   5 ++
>  hw/display/virtio-gpu-udmabuf.c | 143 ++++++++++++++++++++++++++++++--
>  2 files changed, 141 insertions(+), 7 deletions(-)
>
> diff --git a/hw/display/Kconfig b/hw/display/Kconfig
> index 1e95ab28ef..0d090f25f5 100644
> --- a/hw/display/Kconfig
> +++ b/hw/display/Kconfig
> @@ -106,6 +106,11 @@ config VIRTIO_VGA
>      depends on VIRTIO_PCI
>      select VGA
> 
> +config VIRTIO_GPU_VFIO_BLOB
> +    bool
> +    default y
> +    depends on VFIO
> +
>  config VHOST_USER_GPU
>      bool
>      default y
> diff --git a/hw/display/virtio-gpu-udmabuf.c b/hw/display/virtio-gpu-udmabuf.c
> index d804f321aa..bd06b4f300 100644
> --- a/hw/display/virtio-gpu-udmabuf.c
> +++ b/hw/display/virtio-gpu-udmabuf.c
> @@ -18,6 +18,7 @@
>  #include "ui/console.h"
>  #include "hw/virtio/virtio-gpu.h"
>  #include "hw/virtio/virtio-gpu-pixman.h"
> +#include "hw/vfio/vfio-device.h"
>  #include "trace.h"
>  #include "system/ramblock.h"
>  #include "system/hostmem.h"
> @@ -27,6 +28,33 @@
>  #include "standard-headers/linux/udmabuf.h"
>  #include "standard-headers/drm/drm_fourcc.h"
> 
> +static void vfio_create_dmabuf(VFIODevice *vdev,
> +                               struct virtio_gpu_simple_resource *res)
> +{
> +#if defined(VIRTIO_GPU_VFIO_BLOB)
> +    res->dmabuf_fd = vfio_device_create_dmabuf(vdev, res->iov, res->iov_cnt);
I didn't realize an fd was returned until this patch. I'd suggest
renaming vfio_device_create_dmabuf() to vfio_device_create_dmabuf_fd(),
or something more explicit, IMO.
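Something like the below, so that the returned fd is obvious at the
call site (just a sketch based on how the function is called here;
the exact parameter types are the ones defined in patch 6):

    int vfio_device_create_dmabuf_fd(VFIODevice *vdev,
                                     struct iovec *iov,
                                     unsigned int iov_cnt);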
> +    if (res->dmabuf_fd < 0) {
> +        qemu_log_mask(LOG_GUEST_ERROR,
> +                      "%s: VFIO_DEVICE_FEATURE_DMA_BUF: %s\n",
> +                      __func__, strerror(errno));
> +    }
> +#endif
> +}
> +
> +static VFIODevice *vfio_device_lookup(MemoryRegion *mr)
> +{
> +#if defined(VIRTIO_GPU_VFIO_BLOB)
> +    VFIODevice *vdev;
> +
> +    QLIST_FOREACH(vdev, &vfio_device_list, next) {
Hmm, I'm not sure we want to expose the VFIODevice list to other
subsystems. I understand the need, and it's faster than iterating
over QOM devices, but I'd prefer that an API be provided for this
purpose.

I had missed how much use of vfio_device_list has proliferated;
that needs a check.
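For instance, something like the below could live in hw/vfio/ so
that vfio_device_list stays internal to VFIO (untested sketch, and
the name is only a suggestion):

    VFIODevice *vfio_device_lookup_by_mr(MemoryRegion *mr)
    {
        VFIODevice *vdev;

        QLIST_FOREACH(vdev, &vfio_device_list, next) {
            if (vdev->dev == mr->dev) {
                return vdev;
            }
        }
        return NULL;
    }

virtio-gpu would then call the helper and never touch the list
directly.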
> +        if (vdev->dev == mr->dev) {
> +            return vdev;
> +        }
> +    }
> +#endif
> +    return NULL;
> +}
> +
>  static void virtio_gpu_create_udmabuf(struct virtio_gpu_simple_resource *res)
>  {
>      struct udmabuf_create_list *list;
> @@ -68,11 +96,73 @@ static void virtio_gpu_create_udmabuf(struct virtio_gpu_simple_resource *res)
>      g_free(list);
>  }
> 
> -static void virtio_gpu_remap_udmabuf(struct virtio_gpu_simple_resource *res)
> +static void *vfio_dmabuf_mmap(struct virtio_gpu_simple_resource *res,
> +                              VFIODevice *vdev)
> +{
> +    struct vfio_region_info *info;
> +    ram_addr_t offset, len = 0;
> +    void *map, *submap;
> +    int i, ret = -1;
> +    RAMBlock *rb;
> +
> +    /*
> +     * We first reserve a contiguous chunk of address space for the entire
> +     * dmabuf, then replace it with smaller mappings that correspond to the
> +     * individual segments of the dmabuf.
> +     */
> +    map = mmap(NULL, res->blob_size, PROT_READ, MAP_SHARED, vdev->fd, 0);
> +    if (map == MAP_FAILED) {
> +        return map;
> +    }
> +
> +    for (i = 0; i < res->iov_cnt; i++) {
> +        rcu_read_lock();
> +        rb = qemu_ram_block_from_host(res->iov[i].iov_base, false, &offset);
> +        rcu_read_unlock();
> +
> +        if (!rb) {
> +            goto err;
> +        }
> +
> +#if defined(VIRTIO_GPU_VFIO_BLOB)
> +        ret = vfio_get_region_index_from_mr(rb->mr);
> +        if (ret < 0) {
> +            goto err;
> +        }
> +
> +        ret = vfio_device_get_region_info(vdev, ret, &info);
> +#endif
> +        if (ret < 0) {
> +            goto err;
> +        }
"hmm" again. Not this patch fault but we lack proper documentation
for the VFIO API. Something to work on. Since this patch is using
vfio_device_get_region_info() could you please add documentation
for it ?
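As a starting point, maybe something along these lines (a rough
sketch; the wording would need to be checked against the actual
implementation):

    /**
     * vfio_device_get_region_info - Get info about a device region
     * @vbasedev: #VFIODevice to query
     * @index: index of the region
     * @info: set to point to the region info on success
     *
     * Returns 0 on success, a negative errno on failure. The info
     * is cached by the device and must not be freed by the caller.
     */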
Thanks,
C.
> +        submap = mmap(map + len, res->iov[i].iov_len, PROT_READ,
> +                      MAP_SHARED | MAP_FIXED, vdev->fd,
> +                      info->offset + offset);
> +        if (submap == MAP_FAILED) {
> +            goto err;
> +        }
> +
> +        len += res->iov[i].iov_len;
> +    }
> +    return map;
> +err:
> +    munmap(map, res->blob_size);
> +    return MAP_FAILED;
> +}
> +
> +static void virtio_gpu_remap_udmabuf(struct virtio_gpu_simple_resource *res,
> +                                     VFIODevice *vdev)
>  {
>      res->remapped = mmap(NULL, res->blob_size, PROT_READ,
>                           MAP_SHARED, res->dmabuf_fd, 0);
>      if (res->remapped == MAP_FAILED) {
> +        if (vdev) {
> +            res->remapped = vfio_dmabuf_mmap(res, vdev);
> +            if (res->remapped != MAP_FAILED) {
> +                return;
> +            }
> +        }
>          warn_report("%s: dmabuf mmap failed: %s", __func__,
>                      strerror(errno));
>          res->remapped = NULL;
> @@ -130,18 +220,59 @@ bool virtio_gpu_have_udmabuf(void)
> 
>  void virtio_gpu_init_udmabuf(struct virtio_gpu_simple_resource *res)
>  {
> +    VFIODevice *vdev = NULL;
>      void *pdata = NULL;
> +    ram_addr_t offset;
> +    RAMBlock *rb;
> 
>      res->dmabuf_fd = -1;
>      if (res->iov_cnt == 1 &&
>          res->iov[0].iov_len < 4096) {
>          pdata = res->iov[0].iov_base;
>      } else {
> -        virtio_gpu_create_udmabuf(res);
> -        if (res->dmabuf_fd < 0) {
> +        rcu_read_lock();
> +        rb = qemu_ram_block_from_host(res->iov[0].iov_base, false, &offset);
> +        rcu_read_unlock();
> +
> +        if (!rb) {
> +            qemu_log_mask(LOG_GUEST_ERROR,
> +                          "%s: Could not find ram block for host address\n",
> +                          __func__);
>              return;
>          }
> -        virtio_gpu_remap_udmabuf(res);
> +
> +        if (memory_region_is_ram_device(rb->mr)) {
> +            vdev = vfio_device_lookup(rb->mr);
> +            if (!vdev) {
> +                qemu_log_mask(LOG_GUEST_ERROR,
> +                              "%s: Could not find device to create dmabuf\n",
> +                              __func__);
> +                return;
> +            }
> +
> +            vfio_create_dmabuf(vdev, res);
> +            if (res->dmabuf_fd < 0) {
> +                qemu_log_mask(LOG_GUEST_ERROR,
> +                              "%s: Could not create dmabuf from vfio device\n",
> +                              __func__);
> +                return;
> +            }
> +        } else if (memory_region_is_ram(rb->mr) && virtio_gpu_have_udmabuf()) {
> +            virtio_gpu_create_udmabuf(res);
> +            if (res->dmabuf_fd < 0) {
> +                qemu_log_mask(LOG_GUEST_ERROR,
> +                              "%s: Could not create dmabuf from memfd\n",
> +                              __func__);
> +                return;
> +            }
> +        } else {
> +            qemu_log_mask(LOG_GUEST_ERROR,
> +                          "%s: memory region cannot be used to create dmabuf\n",
> +                          __func__);
> +            return;
> +        }
> +
> +        virtio_gpu_remap_udmabuf(res, vdev);
>          if (!res->remapped) {
>              return;
>          }
> @@ -153,9 +284,7 @@ void virtio_gpu_init_udmabuf(struct virtio_gpu_simple_resource *res)
> 
>  void virtio_gpu_fini_udmabuf(struct virtio_gpu_simple_resource *res)
>  {
> -    if (res->remapped) {
> -        virtio_gpu_destroy_udmabuf(res);
> -    }
> +    virtio_gpu_destroy_udmabuf(res);
>  }
> 
>  static void virtio_gpu_free_dmabuf(VirtIOGPU *g, VGPUDMABuf *dmabuf)