qemu-devel.nongnu.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v7 0/8] vhost-user: Add SHMEM_MAP/UNMAP requests
@ 2025-08-18 10:03 Albert Esteve
  2025-08-18 10:03 ` [PATCH v7 1/8] vhost-user: Add VirtIO Shared Memory map request Albert Esteve
                   ` (7 more replies)
  0 siblings, 8 replies; 22+ messages in thread
From: Albert Esteve @ 2025-08-18 10:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: david, Michael S. Tsirkin, hi, jasowang, Laurent Vivier, dbassey,
	Stefano Garzarella, Paolo Bonzini, stefanha, stevensd,
	Fabiano Rosas, Alex Bennée, slp, Albert Esteve

Hi all,

v6->v7
- Fixed vhost_user_shmem_object_new to use
  memory_region_init_ram_from_fd as before
v5->v6
- Added intermediate QOM object to manage shared
  MemoryRegion lifecycle with reference counting,
  and automatic cleanup
- Resolved BAR conflict, change from 2 to 3
  to avoid conflict with `modern-pio-notify=on`
- Added SHMEM_CONFIG validation in vhost-user-test
- Changed VirtSharedMemory -> VirtioSharedMemory
- Changed MappedMemoryRegion -> VirtioSharedMemoryMapping
- Changed from heap-allocated MemoryRegion *mr to
  embedded MemoryRegion mr in VirtioSharedMemory
  structure to eliminate memory leaks and
  simplify cleanup
- Fixed VirtioSharedMemory initialization and
  cleanup with memory_region_init() and object_unparent()
- Other minor fixes, typos, and updates.

This patch series implements dynamic fd-backed
memory mapping support for vhost-user backends,
enabling backends to dynamically request memory
mappings and unmappings during runtime through the new
VHOST_USER_BACKEND_SHMEM_MAP/UNMAP protocol messages.

This feature benefits various VIRTIO devices that
require dynamic shared memory management, including
virtiofs (for DAX mappings), virtio-gpu (for resource
sharing), and the recently standardized virtio-media
device.

The implementation introduces a robust QOM-based
architecture for managing shared memory lifecycle:

- VhostUserShmemObject: an intermediate object that
  manages individual memory mappings
- VIRTIO Shared Memory Regions: generic container
  regions declared in VirtIODevice to support any
  vhost-user device type
- Dynamic Mapping: backends can request mappings via
  SHMEM_MAP messages, with the frontend creating
  MemoryRegions from the provided file descriptors and
  adding them as subregions

When a SHMEM_MAP request is received, the frontend:
1. Creates VhostUserShmemObject to manage the mapping
   lifecycle
2. Maps the provided fd with memory_region_init_ram_from_fd()
3. Creates a MemoryRegion backed by the mapped memory
4. Adds it as a subregion of the appropiate VIRTIO
   Shared Memory Region

The QOM reference counting ensures automatic cleanup
when mappings are removed or the device is destroyed.

This patch also includes:
- VHOST_USER_GET_SHMEM_CONFIG: a new frontend request
  allowing generic vhost-user devices to query shared
  memory configuration from backends at device
  initialization, enabling the generic vhost-user-device
  frontend to work with any backend regardless of specific
  shared memory requirements.

The implementation has been tested with rust-vmm based
backends and includes SHMEM_CONFIG QTest validation.

Albert Esteve (8):
  vhost-user: Add VirtIO Shared Memory map request
  vhost_user.rst: Align VhostUserMsg excerpt members
  vhost_user.rst: Add SHMEM_MAP/_UNMAP to spec
  vhost_user: Add frontend get_shmem_config command
  vhost_user.rst: Add GET_SHMEM_CONFIG message
  tests/qtest: Add GET_SHMEM validation test
  qmp: add shmem feature map
  vhost-user-device: Add shared memory BAR

 docs/interop/vhost-user.rst               | 101 +++++++++
 hw/virtio/meson.build                     |   1 +
 hw/virtio/vhost-user-base.c               |  49 ++++-
 hw/virtio/vhost-user-device-pci.c         |  34 ++-
 hw/virtio/vhost-user-shmem.c              | 134 ++++++++++++
 hw/virtio/vhost-user.c                    | 250 +++++++++++++++++++++-
 hw/virtio/virtio-qmp.c                    |   3 +
 hw/virtio/virtio.c                        | 109 ++++++++++
 include/hw/virtio/vhost-backend.h         |  10 +
 include/hw/virtio/vhost-user-shmem.h      |  75 +++++++
 include/hw/virtio/vhost-user.h            |   1 +
 include/hw/virtio/virtio.h                |  95 ++++++++
 subprojects/libvhost-user/libvhost-user.c |  70 ++++++
 subprojects/libvhost-user/libvhost-user.h |  54 +++++
 tests/qtest/vhost-user-test.c             |  91 ++++++++
 15 files changed, 1070 insertions(+), 7 deletions(-)
 create mode 100644 hw/virtio/vhost-user-shmem.c
 create mode 100644 include/hw/virtio/vhost-user-shmem.h

-- 
2.49.0



^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v7 1/8] vhost-user: Add VirtIO Shared Memory map request
  2025-08-18 10:03 [PATCH v7 0/8] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
@ 2025-08-18 10:03 ` Albert Esteve
  2025-08-18 18:58   ` Stefan Hajnoczi
  2025-08-19  9:22   ` David Hildenbrand
  2025-08-18 10:03 ` [PATCH v7 2/8] vhost_user.rst: Align VhostUserMsg excerpt members Albert Esteve
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 22+ messages in thread
From: Albert Esteve @ 2025-08-18 10:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: david, Michael S. Tsirkin, hi, jasowang, Laurent Vivier, dbassey,
	Stefano Garzarella, Paolo Bonzini, stefanha, stevensd,
	Fabiano Rosas, Alex Bennée, slp, Albert Esteve

Add SHMEM_MAP/UNMAP requests to vhost-user for
dynamic management of VIRTIO Shared Memory mappings.

This implementation introduces VhostUserShmemObject
as an intermediate QOM parent for MemoryRegions
created for SHMEM_MAP requests. This object
provides reference-counted lifecycle management
with automatic cleanup.

This request allows backends to dynamically map
file descriptors into a VIRTIO Shared Memory
Regions identified by their shmid. Maps are created
using memory_region_init_ram_device_ptr() with
configurable read/write permissions, and the resulting
MemoryRegions are added as subregions to the shmem
container region. The mapped memory is then advertised
to the guest VIRTIO drivers as a base address plus
offset for reading and writting according
to the requested mmap flags.

The backend can unmap memory ranges within a given
VIRTIO Shared Memory Region to free resources.
Upon receiving this message, the frontend removes
the MemoryRegion as a subregion and automatically
unreferences the associated VhostUserShmemObject,
triggering cleanup if no other references exist.

Error handling has been improved to ensure consistent
behavior across handlers that manage their own
vhost_user_send_resp() calls. Since these handlers
clear the VHOST_USER_NEED_REPLY_MASK flag, explicit
error checking ensures proper connection closure on
failures, maintaining the expected error flow.

Note the memory region commit for these
operations needs to be delayed until after we
respond to the backend to avoid deadlocks.

Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
 hw/virtio/meson.build                     |   1 +
 hw/virtio/vhost-user-shmem.c              | 134 ++++++++++++++
 hw/virtio/vhost-user.c                    | 207 +++++++++++++++++++++-
 hw/virtio/virtio.c                        | 109 ++++++++++++
 include/hw/virtio/vhost-user-shmem.h      |  75 ++++++++
 include/hw/virtio/virtio.h                |  93 ++++++++++
 subprojects/libvhost-user/libvhost-user.c |  70 ++++++++
 subprojects/libvhost-user/libvhost-user.h |  54 ++++++
 8 files changed, 741 insertions(+), 2 deletions(-)
 create mode 100644 hw/virtio/vhost-user-shmem.c
 create mode 100644 include/hw/virtio/vhost-user-shmem.h

diff --git a/hw/virtio/meson.build b/hw/virtio/meson.build
index 3ea7b3cec8..5efcf70b75 100644
--- a/hw/virtio/meson.build
+++ b/hw/virtio/meson.build
@@ -20,6 +20,7 @@ if have_vhost
     # fixme - this really should be generic
     specific_virtio_ss.add(files('vhost-user.c'))
     system_virtio_ss.add(files('vhost-user-base.c'))
+    system_virtio_ss.add(files('vhost-user-shmem.c'))
 
     # MMIO Stubs
     system_virtio_ss.add(files('vhost-user-device.c'))
diff --git a/hw/virtio/vhost-user-shmem.c b/hw/virtio/vhost-user-shmem.c
new file mode 100644
index 0000000000..1d763b56b6
--- /dev/null
+++ b/hw/virtio/vhost-user-shmem.c
@@ -0,0 +1,134 @@
+/*
+ * VHost-user Shared Memory Object
+ *
+ * Copyright Red Hat, Inc. 2025
+ *
+ * Authors:
+ *     Albert Esteve <aesteve@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "hw/virtio/vhost-user-shmem.h"
+#include "system/memory.h"
+#include "qapi/error.h"
+#include "qemu/error-report.h"
+#include "trace.h"
+
+/**
+ * VhostUserShmemObject
+ *
+ * An intermediate QOM object that manages individual shared memory mappings
+ * created by VHOST_USER_BACKEND_SHMEM_MAP requests. It acts as a parent for
+ * MemoryRegion objects, providing proper lifecycle management with reference
+ * counting. When the object is unreferenced and its reference count drops
+ * to zero, it automatically cleans up the MemoryRegion and unmaps the memory.
+ */
+
+static void vhost_user_shmem_object_finalize(Object *obj);
+static void vhost_user_shmem_object_instance_init(Object *obj);
+
+static const TypeInfo vhost_user_shmem_object_info = {
+    .name = TYPE_VHOST_USER_SHMEM_OBJECT,
+    .parent = TYPE_OBJECT,
+    .instance_size = sizeof(VhostUserShmemObject),
+    .instance_init = vhost_user_shmem_object_instance_init,
+    .instance_finalize = vhost_user_shmem_object_finalize,
+};
+
+static void vhost_user_shmem_object_instance_init(Object *obj)
+{
+    VhostUserShmemObject *shmem_obj = VHOST_USER_SHMEM_OBJECT(obj);
+
+    shmem_obj->shmid = 0;
+    shmem_obj->fd = -1;
+    shmem_obj->shm_offset = 0;
+    shmem_obj->len = 0;
+    shmem_obj->mr = NULL;
+}
+
+static void vhost_user_shmem_object_finalize(Object *obj)
+{
+    VhostUserShmemObject *shmem_obj = VHOST_USER_SHMEM_OBJECT(obj);
+
+    /* Clean up MemoryRegion if it exists */
+    if (shmem_obj->mr) {
+        /* Unparent the MemoryRegion to trigger cleanup */
+        object_unparent(OBJECT(shmem_obj->mr));
+        shmem_obj->mr = NULL;
+    }
+
+    /* Close file descriptor */
+    if (shmem_obj->fd >= 0) {
+        close(shmem_obj->fd);
+        shmem_obj->fd = -1;
+    }
+}
+
+VhostUserShmemObject *vhost_user_shmem_object_new(uint8_t shmid,
+                                                   int fd,
+                                                   uint64_t fd_offset,
+                                                   uint64_t shm_offset,
+                                                   uint64_t len,
+                                                   uint16_t flags)
+{
+    VhostUserShmemObject *shmem_obj;
+    MemoryRegion *mr;
+    g_autoptr(GString) mr_name = g_string_new(NULL);
+    uint32_t ram_flags;
+    Error *local_err = NULL;
+
+    if (len == 0) {
+        error_report("Shared memory mapping size cannot be zero");
+        return NULL;
+    }
+
+    fd = dup(fd);
+    if (fd < 0) {
+        error_report("Failed to duplicate fd: %s", strerror(errno));
+        return NULL;
+    }
+
+    /* Determine RAM flags */
+    ram_flags = RAM_SHARED;
+    if (!(flags & VHOST_USER_FLAG_MAP_RW)) {
+        ram_flags |= RAM_READONLY_FD;
+    }
+
+    /* Create the VhostUserShmemObject */
+    shmem_obj = VHOST_USER_SHMEM_OBJECT(
+        object_new(TYPE_VHOST_USER_SHMEM_OBJECT));
+
+    /* Set up object properties */
+    shmem_obj->shmid = shmid;
+    shmem_obj->fd = fd;
+    shmem_obj->shm_offset = shm_offset;
+    shmem_obj->len = len;
+
+    /* Create MemoryRegion as a child of this object */
+    mr = g_new0(MemoryRegion, 1);
+    g_string_printf(mr_name, "vhost-user-shmem-%d-%" PRIx64, shmid, shm_offset);
+
+    /* Initialize MemoryRegion with file descriptor */
+    if (!memory_region_init_ram_from_fd(mr, OBJECT(shmem_obj), mr_name->str,
+                                        len, ram_flags, fd, fd_offset,
+                                        &local_err)) {
+        error_report_err(local_err);
+        g_free(mr);
+        close(fd);
+        object_unref(OBJECT(shmem_obj));
+        return NULL;
+    }
+
+    shmem_obj->mr = mr;
+    return shmem_obj;
+}
+
+static void vhost_user_shmem_register_types(void)
+{
+    type_register_static(&vhost_user_shmem_object_info);
+}
+
+type_init(vhost_user_shmem_register_types)
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 1e1d6b0d6e..eb3ad728b0 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -11,6 +11,7 @@
 #include "qemu/osdep.h"
 #include "qapi/error.h"
 #include "hw/virtio/virtio-dmabuf.h"
+#include "hw/virtio/vhost-user-shmem.h"
 #include "hw/virtio/vhost.h"
 #include "hw/virtio/virtio-crypto.h"
 #include "hw/virtio/vhost-user.h"
@@ -115,6 +116,8 @@ typedef enum VhostUserBackendRequest {
     VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
     VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
     VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
+    VHOST_USER_BACKEND_SHMEM_MAP = 9,
+    VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
     VHOST_USER_BACKEND_MAX
 }  VhostUserBackendRequest;
 
@@ -192,6 +195,23 @@ typedef struct VhostUserShared {
     unsigned char uuid[16];
 } VhostUserShared;
 
+/* For the flags field of VhostUserMMap */
+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
+
+typedef struct {
+    /* VIRTIO Shared Memory Region ID */
+    uint8_t shmid;
+    uint8_t padding[7];
+    /* File offset */
+    uint64_t fd_offset;
+    /* Offset within the VIRTIO Shared Memory Region */
+    uint64_t shm_offset;
+    /* Size of the mapping */
+    uint64_t len;
+    /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
+    uint16_t flags;
+} VhostUserMMap;
+
 typedef struct {
     VhostUserRequest request;
 
@@ -224,6 +244,7 @@ typedef union {
         VhostUserInflight inflight;
         VhostUserShared object;
         VhostUserTransferDeviceState transfer_state;
+        VhostUserMMap mmap;
 } VhostUserPayload;
 
 typedef struct VhostUserMsg {
@@ -1768,6 +1789,172 @@ vhost_user_backend_handle_shared_object_lookup(struct vhost_user *u,
     return 0;
 }
 
+/**
+ * vhost_user_backend_handle_shmem_map() - Handle SHMEM_MAP backend request
+ * @dev: vhost device
+ * @ioc: QIOChannel for communication
+ * @hdr: vhost-user message header
+ * @payload: message payload containing mapping details
+ * @fd: file descriptor for the shared memory region
+ *
+ * Handles VHOST_USER_BACKEND_SHMEM_MAP requests from the backend. Creates
+ * a VhostUserShmemObject to manage the shared memory mapping and adds it
+ * to the appropriate VirtIO shared memory region. The VhostUserShmemObject
+ * serves as an intermediate parent for the MemoryRegion, ensuring proper
+ * lifecycle management with reference counting.
+ *
+ * Returns: 0 on success, negative errno on failure
+ */
+static int
+vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
+                                    QIOChannel *ioc,
+                                    VhostUserHeader *hdr,
+                                    VhostUserPayload *payload,
+                                    int fd)
+{
+    VirtioSharedMemory *shmem;
+    VhostUserMMap *vu_mmap = &payload->mmap;
+    Error *local_err = NULL;
+    g_autoptr(GString) shm_name = g_string_new(NULL);
+
+    if (fd < 0) {
+        error_report("Bad fd for map");
+        return -EBADF;
+    }
+
+    if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
+        error_report("Device has no VIRTIO Shared Memory Regions. "
+                     "Requested ID: %d", vu_mmap->shmid);
+        return -EFAULT;
+    }
+
+    shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
+    if (!shmem) {
+        error_report("VIRTIO Shared Memory Region at "
+                     "ID %d not found or unitialized", vu_mmap->shmid);
+        return -EFAULT;
+    }
+
+    if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
+        (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
+        error_report("Bad offset/len for mmap %" PRIx64 "+%" PRIx64,
+                     vu_mmap->shm_offset, vu_mmap->len);
+        return -EFAULT;
+    }
+
+    g_string_printf(shm_name, "virtio-shm%i-%lu",
+                    vu_mmap->shmid, vu_mmap->shm_offset);
+
+    memory_region_transaction_begin();
+
+    /* Create VhostUserShmemObject as intermediate parent for MemoryRegion */
+    VhostUserShmemObject *shmem_obj = vhost_user_shmem_object_new(
+        vu_mmap->shmid, fd, vu_mmap->fd_offset, vu_mmap->shm_offset,
+        vu_mmap->len, vu_mmap->flags);
+
+    if (!shmem_obj) {
+        memory_region_transaction_commit();
+        return -EFAULT;
+    }
+
+    /* Add the mapping using our VhostUserShmemObject as the parent */
+    if (virtio_add_shmem_map(shmem, shmem_obj) != 0) {
+        error_report("Failed to add shared memory mapping");
+        object_unref(OBJECT(shmem_obj));
+        memory_region_transaction_commit();
+        return -EFAULT;
+    }
+
+    if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
+        payload->u64 = 0;
+        hdr->size = sizeof(payload->u64);
+        vhost_user_send_resp(ioc, hdr, payload, &local_err);
+        if (local_err) {
+            error_report_err(local_err);
+            memory_region_transaction_commit();
+            return -EFAULT;
+        }
+    }
+
+    memory_region_transaction_commit();
+
+    return 0;
+}
+
+/**
+ * vhost_user_backend_handle_shmem_unmap() - Handle SHMEM_UNMAP backend request
+ * @dev: vhost device
+ * @ioc: QIOChannel for communication
+ * @hdr: vhost-user message header
+ * @payload: message payload containing unmapping details
+ *
+ * Handles VHOST_USER_BACKEND_SHMEM_UNMAP requests from the backend. Removes
+ * the specified memory mapping from the VirtIO shared memory region. This
+ * automatically unreferences the associated VhostUserShmemObject, which may
+ * trigger its finalization and cleanup (munmap, close fd) if no other
+ * references exist.
+ *
+ * Returns: 0 on success, negative errno on failure
+ */
+static int
+vhost_user_backend_handle_shmem_unmap(struct vhost_dev *dev,
+                                      QIOChannel *ioc,
+                                      VhostUserHeader *hdr,
+                                      VhostUserPayload *payload)
+{
+    VirtioSharedMemory *shmem;
+    VirtioSharedMemoryMapping *mmap = NULL;
+    VhostUserMMap *vu_mmap = &payload->mmap;
+    Error *local_err = NULL;
+
+    if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
+        error_report("Device has no VIRTIO Shared Memory Regions. "
+                     "Requested ID: %d", vu_mmap->shmid);
+        return -EFAULT;
+    }
+
+    shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
+    if (!shmem) {
+        error_report("VIRTIO Shared Memory Region at "
+                     "ID %d not found or unitialized", vu_mmap->shmid);
+        return -EFAULT;
+    }
+
+    if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
+        (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
+        error_report("Bad offset/len for unmmap %" PRIx64 "+%" PRIx64,
+                     vu_mmap->shm_offset, vu_mmap->len);
+        return -EFAULT;
+    }
+
+    mmap = virtio_find_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
+    if (!mmap) {
+        error_report("Shared memory mapping not found at offset %" PRIx64
+                     " with length %" PRIx64,
+                     vu_mmap->shm_offset, vu_mmap->len);
+        return -EFAULT;
+    }
+
+    memory_region_transaction_begin();
+    memory_region_del_subregion(&shmem->mr, mmap->mem);
+    if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
+        payload->u64 = 0;
+        hdr->size = sizeof(payload->u64);
+        vhost_user_send_resp(ioc, hdr, payload, &local_err);
+        if (local_err) {
+            error_report_err(local_err);
+            memory_region_transaction_commit();
+            return -EFAULT;
+        }
+    }
+    memory_region_transaction_commit();
+
+    /* Free the MemoryRegion only after vhost_commit */
+    virtio_del_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
+
+    return 0;
+}
+
 static void close_backend_channel(struct vhost_user *u)
 {
     g_source_destroy(u->backend_src);
@@ -1833,8 +2020,24 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
                                                              &payload.object);
         break;
     case VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP:
-        ret = vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
-                                                             &hdr, &payload);
+        /* Handler manages its own response, check error and close connection */
+        if (vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
+                                                           &hdr, &payload) < 0) {
+            goto err;
+        }
+        break;
+    case VHOST_USER_BACKEND_SHMEM_MAP:
+        /* Handler manages its own response, check error and close connection */
+        if (vhost_user_backend_handle_shmem_map(dev, ioc, &hdr, &payload,
+                                                fd ? fd[0] : -1) < 0) {
+            goto err;
+        }
+        break;
+    case VHOST_USER_BACKEND_SHMEM_UNMAP:
+        /* Handler manages its own response, check error and close connection */
+        if (vhost_user_backend_handle_shmem_unmap(dev, ioc, &hdr, &payload) < 0) {
+            goto err;
+        }
         break;
     default:
         error_report("Received unexpected msg type: %d.", hdr.request);
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 9a81ad912e..1ead5f653f 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -14,6 +14,7 @@
 #include "qemu/osdep.h"
 #include "qapi/error.h"
 #include "qapi/qapi-commands-virtio.h"
+#include "hw/virtio/vhost-user-shmem.h"
 #include "trace.h"
 #include "qemu/defer-call.h"
 #include "qemu/error-report.h"
@@ -3045,6 +3046,100 @@ int virtio_save(VirtIODevice *vdev, QEMUFile *f)
     return vmstate_save_state(f, &vmstate_virtio, vdev, NULL);
 }
 
+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid)
+{
+    VirtioSharedMemory *elem;
+    g_autofree char *name = NULL;
+
+    elem = g_new0(VirtioSharedMemory, 1);
+    elem->shmid = shmid;
+
+    /* Initialize embedded MemoryRegion as container for shmem mappings */
+    name = g_strdup_printf("virtio-shmem-%d", shmid);
+    memory_region_init(&elem->mr, OBJECT(vdev), name, UINT64_MAX);
+    QTAILQ_INIT(&elem->mmaps);
+    QSIMPLEQ_INSERT_TAIL(&vdev->shmem_list, elem, entry);
+    return QSIMPLEQ_LAST(&vdev->shmem_list, VirtioSharedMemory, entry);
+}
+
+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid)
+{
+    VirtioSharedMemory *shmem, *next;
+    QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
+        if (shmem->shmid == shmid) {
+            return shmem;
+        }
+    }
+    return NULL;
+}
+
+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
+                         VhostUserShmemObject *shmem_obj)
+{
+    VirtioSharedMemoryMapping *mmap;
+    if (!shmem_obj) {
+        error_report("VhostUserShmemObject cannot be NULL");
+        return -1;
+    }
+    if (!shmem_obj->mr) {
+        error_report("VhostUserShmemObject has no MemoryRegion");
+        return -1;
+    }
+
+    /* Validate boundaries against the VIRTIO shared memory region */
+    if (shmem_obj->shm_offset + shmem_obj->len > shmem->mr.size) {
+        error_report("Memory exceeds the shared memory boundaries");
+        return -1;
+    }
+
+    /* Create the VirtioSharedMemoryMapping wrapper */
+    mmap = g_new0(VirtioSharedMemoryMapping, 1);
+    mmap->mem = shmem_obj->mr;
+    mmap->offset = shmem_obj->shm_offset;
+    mmap->shmem_obj = shmem_obj;
+
+    /* Take a reference on the VhostUserShmemObject */
+    object_ref(OBJECT(shmem_obj));
+
+    /* Add as subregion to the VIRTIO shared memory */
+    memory_region_add_subregion(&shmem->mr, mmap->offset, mmap->mem);
+
+    /* Add to the mapped regions list */
+    QTAILQ_INSERT_TAIL(&shmem->mmaps, mmap, link);
+
+    return 0;
+}
+
+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
+                                          hwaddr offset, uint64_t size)
+{
+    VirtioSharedMemoryMapping *mmap;
+    QTAILQ_FOREACH(mmap, &shmem->mmaps, link) {
+        if (mmap->offset == offset && mmap->mem->size == size) {
+            return mmap;
+        }
+    }
+    return NULL;
+}
+
+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
+                          uint64_t size)
+{
+    VirtioSharedMemoryMapping *mmap = virtio_find_shmem_map(shmem, offset, size);
+    if (mmap == NULL) {
+        return;
+    }
+
+    /*
+     * Unref the VhostUserShmemObject which will trigger automatic cleanup
+     * when the reference count reaches zero.
+     */
+    object_unref(OBJECT(mmap->shmem_obj));
+
+    QTAILQ_REMOVE(&shmem->mmaps, mmap, link);
+    g_free(mmap);
+}
+
 /* A wrapper for use as a VMState .put function */
 static int virtio_device_put(QEMUFile *f, void *opaque, size_t size,
                               const VMStateField *field, JSONWriter *vmdesc)
@@ -3521,6 +3616,7 @@ void virtio_init(VirtIODevice *vdev, uint16_t device_id, size_t config_size)
             NULL, virtio_vmstate_change, vdev);
     vdev->device_endian = virtio_default_endian();
     vdev->use_guest_notifier_mask = true;
+    QSIMPLEQ_INIT(&vdev->shmem_list);
 }
 
 /*
@@ -4032,11 +4128,24 @@ static void virtio_device_free_virtqueues(VirtIODevice *vdev)
 static void virtio_device_instance_finalize(Object *obj)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(obj);
+    VirtioSharedMemory *shmem;
 
     virtio_device_free_virtqueues(vdev);
 
     g_free(vdev->config);
     g_free(vdev->vector_queues);
+    while (!QSIMPLEQ_EMPTY(&vdev->shmem_list)) {
+        shmem = QSIMPLEQ_FIRST(&vdev->shmem_list);
+        while (!QTAILQ_EMPTY(&shmem->mmaps)) {
+            VirtioSharedMemoryMapping *mmap_reg = QTAILQ_FIRST(&shmem->mmaps);
+            virtio_del_shmem_map(shmem, mmap_reg->offset, mmap_reg->mem->size);
+        }
+
+        /* Clean up the embedded MemoryRegion */
+        object_unparent(OBJECT(&shmem->mr));
+        QSIMPLEQ_REMOVE_HEAD(&vdev->shmem_list, entry);
+        g_free(shmem);
+    }
 }
 
 static const Property virtio_properties[] = {
diff --git a/include/hw/virtio/vhost-user-shmem.h b/include/hw/virtio/vhost-user-shmem.h
new file mode 100644
index 0000000000..1f8c7bdc1f
--- /dev/null
+++ b/include/hw/virtio/vhost-user-shmem.h
@@ -0,0 +1,75 @@
+/*
+ * VHost-user Shared Memory Object
+ *
+ * Copyright Red Hat, Inc. 2025
+ *
+ * Authors:
+ *     Albert Esteve <aesteve@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef VHOST_USER_SHMEM_H
+#define VHOST_USER_SHMEM_H
+
+#include "qemu/osdep.h"
+#include "qom/object.h"
+#include "system/memory.h"
+#include "qapi/error.h"
+
+/* vhost-user memory mapping flags */
+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
+
+#define TYPE_VHOST_USER_SHMEM_OBJECT "vhost-user-shmem"
+OBJECT_DECLARE_SIMPLE_TYPE(VhostUserShmemObject, VHOST_USER_SHMEM_OBJECT)
+
+/**
+ * VhostUserShmemObject:
+ * @parent: Parent object
+ * @shmid: VIRTIO Shared Memory Region ID
+ * @fd: File descriptor for the shared memory region
+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
+ * @len: Size of the mapping
+ * @mr: MemoryRegion associated with this shared memory mapping
+ *
+ * An intermediate QOM object that manages individual shared memory mappings
+ * created by VHOST_USER_BACKEND_SHMEM_MAP requests. It acts as a parent for
+ * MemoryRegion objects, providing proper lifecycle management with reference
+ * counting. When the object is unreferenced and its reference count drops
+ * to zero, it automatically cleans up the MemoryRegion and unmaps the memory.
+ */
+struct VhostUserShmemObject {
+    Object parent;
+
+    uint8_t shmid;
+    int fd;
+    uint64_t shm_offset;
+    uint64_t len;
+    MemoryRegion *mr;
+};
+
+/**
+ * vhost_user_shmem_object_new() - Create a new VhostUserShmemObject
+ * @shmid: VIRTIO Shared Memory Region ID
+ * @fd: File descriptor for the shared memory
+ * @fd_offset: Offset within the file descriptor
+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
+ * @len: Size of the mapping
+ * @flags: Mapping flags (VHOST_USER_FLAG_MAP_*)
+ *
+ * Creates a new VhostUserShmemObject that manages a shared memory mapping.
+ * The object will create a MemoryRegion using memory_region_init_ram_from_fd()
+ * as a child object. When the object is finalized, it will automatically
+ * clean up the MemoryRegion and close the file descriptor.
+ *
+ * Return: A new VhostUserShmemObject on success, NULL on error.
+ */
+VhostUserShmemObject *vhost_user_shmem_object_new(uint8_t shmid,
+                                                   int fd,
+                                                   uint64_t fd_offset,
+                                                   uint64_t shm_offset,
+                                                   uint64_t len,
+                                                   uint16_t flags);
+
+#endif /* VHOST_USER_SHMEM_H */
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index c594764f23..a563bbac2c 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -98,6 +98,26 @@ enum virtio_device_endian {
     VIRTIO_DEVICE_ENDIAN_BIG,
 };
 
+struct VhostUserShmemObject;
+
+struct VirtioSharedMemoryMapping {
+    MemoryRegion *mem;
+    hwaddr offset;
+    QTAILQ_ENTRY(VirtioSharedMemoryMapping) link;
+    struct VhostUserShmemObject *shmem_obj; /* Intermediate parent object */
+};
+
+typedef struct VirtioSharedMemoryMapping VirtioSharedMemoryMapping;
+
+struct VirtioSharedMemory {
+    uint8_t shmid;
+    MemoryRegion mr;
+    QTAILQ_HEAD(, VirtioSharedMemoryMapping) mmaps;
+    QSIMPLEQ_ENTRY(VirtioSharedMemory) entry;
+};
+
+typedef struct VirtioSharedMemory VirtioSharedMemory;
+
 /**
  * struct VirtIODevice - common VirtIO structure
  * @name: name of the device
@@ -167,6 +187,8 @@ struct VirtIODevice
      */
     EventNotifier config_notifier;
     bool device_iotlb_enabled;
+    /* Shared memory region for mappings. */
+    QSIMPLEQ_HEAD(, VirtioSharedMemory) shmem_list;
 };
 
 struct VirtioDeviceClass {
@@ -295,6 +317,77 @@ void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
 
 int virtio_save(VirtIODevice *vdev, QEMUFile *f);
 
+/**
+ * virtio_new_shmem_region() - Create a new shared memory region
+ * @vdev: VirtIODevice
+ * @shmid: Shared memory ID
+ *
+ * Creates a new VirtioSharedMemory region for the given device and ID.
+ * The returned VirtioSharedMemory is owned by the VirtIODevice and will
+ * be automatically freed when the device is destroyed. The caller
+ * should not free the returned pointer.
+ *
+ * Returns: Pointer to the new VirtioSharedMemory region, or NULL on failure
+ */
+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid);
+
+/**
+ * virtio_find_shmem_region() - Find an existing shared memory region
+ * @vdev: VirtIODevice
+ * @shmid: Shared memory ID to find
+ *
+ * Finds an existing VirtioSharedMemory region by ID. The returned pointer
+ * is owned by the VirtIODevice and should not be freed by the caller.
+ *
+ * Returns: Pointer to the VirtioSharedMemory region, or NULL if not found
+ */
+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid);
+
+/**
+ * virtio_add_shmem_map() - Add a memory mapping to a shared region
+ * @shmem: VirtioSharedMemory region
+ * @shmem_obj: VhostUserShmemObject to add (takes a reference)
+ *
+ * Adds a memory mapping to the shared memory region. The VhostUserShmemObject
+ * is added as a child of the mapping and will be automatically managed through
+ * QOM reference counting. The mapping will be removed when
+ * virtio_del_shmem_map() is called or when the shared memory region is
+ * destroyed.
+ *
+ * Returns: 0 on success, negative errno on failure
+ */
+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
+                         struct VhostUserShmemObject *shmem_obj);
+
+/**
+ * virtio_find_shmem_map() - Find a memory mapping in a shared region
+ * @shmem: VirtioSharedMemory region
+ * @offset: Offset within the shared memory region
+ * @size: Size of the mapping to find
+ *
+ * Finds an existing memory mapping that covers the specified range.
+ * The returned VirtioSharedMemoryMapping is owned by the VirtioSharedMemory
+ * region and should not be freed by the caller.
+ *
+ * Returns: Pointer to the VirtioSharedMemoryMapping, or NULL if not found
+ */
+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
+                                          hwaddr offset, uint64_t size);
+
+/**
+ * virtio_del_shmem_map() - Remove a memory mapping from a shared region
+ * @shmem: VirtioSharedMemory region
+ * @offset: Offset of the mapping to remove
+ * @size: Size of the mapping to remove
+ *
+ * Removes a memory mapping from the shared memory region. This will
+ * automatically unref the associated VhostUserShmemObject, which may
+ * trigger its finalization and cleanup if no other references exist.
+ * The mapping's MemoryRegion will be properly unmapped and cleaned up.
+ */
+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
+                          uint64_t size);
+
 extern const VMStateInfo virtio_vmstate_info;
 
 #define VMSTATE_VIRTIO_DEVICE \
diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index 9c630c2170..034cbfdc3c 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -1592,6 +1592,76 @@ vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN])
     return vu_send_message(dev, &msg);
 }
 
+bool
+vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
+             uint64_t shm_offset, uint64_t len, uint64_t flags, int fd)
+{
+    VhostUserMsg vmsg = {
+        .request = VHOST_USER_BACKEND_SHMEM_MAP,
+        .size = sizeof(vmsg.payload.mmap),
+        .flags = VHOST_USER_VERSION,
+        .payload.mmap = {
+            .shmid = shmid,
+            .fd_offset = fd_offset,
+            .shm_offset = shm_offset,
+            .len = len,
+            .flags = flags,
+        },
+        .fd_num = 1,
+        .fds[0] = fd,
+    };
+
+    if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
+        return false;
+    }
+
+    if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
+        vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
+    }
+
+    pthread_mutex_lock(&dev->backend_mutex);
+    if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
+        pthread_mutex_unlock(&dev->backend_mutex);
+        return false;
+    }
+
+    /* Also unlocks the backend_mutex */
+    return vu_process_message_reply(dev, &vmsg);
+}
+
+bool
+vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset, uint64_t len)
+{
+    VhostUserMsg vmsg = {
+        .request = VHOST_USER_BACKEND_SHMEM_UNMAP,
+        .size = sizeof(vmsg.payload.mmap),
+        .flags = VHOST_USER_VERSION,
+        .payload.mmap = {
+            .shmid = shmid,
+            .fd_offset = 0,
+            .shm_offset = shm_offset,
+            .len = len,
+        },
+    };
+
+    if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
+        return false;
+    }
+
+    if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
+        vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
+    }
+
+    pthread_mutex_lock(&dev->backend_mutex);
+    if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
+        pthread_mutex_unlock(&dev->backend_mutex);
+        return false;
+    }
+
+    /* Also unlocks the backend_mutex */
+    return vu_process_message_reply(dev, &vmsg);
+}
+
 static bool
 vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
 {
diff --git a/subprojects/libvhost-user/libvhost-user.h b/subprojects/libvhost-user/libvhost-user.h
index 2ffc58c11b..26b710c92d 100644
--- a/subprojects/libvhost-user/libvhost-user.h
+++ b/subprojects/libvhost-user/libvhost-user.h
@@ -69,6 +69,8 @@ enum VhostUserProtocolFeature {
     /* Feature 16 is reserved for VHOST_USER_PROTOCOL_F_STATUS. */
     /* Feature 17 reserved for VHOST_USER_PROTOCOL_F_XEN_MMAP. */
     VHOST_USER_PROTOCOL_F_SHARED_OBJECT = 18,
+    /* Feature 19 is reserved for VHOST_USER_PROTOCOL_F_DEVICE_STATE */
+    VHOST_USER_PROTOCOL_F_SHMEM = 20,
     VHOST_USER_PROTOCOL_F_MAX
 };
 
@@ -127,6 +129,8 @@ typedef enum VhostUserBackendRequest {
     VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
     VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
     VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
+    VHOST_USER_BACKEND_SHMEM_MAP = 9,
+    VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
     VHOST_USER_BACKEND_MAX
 }  VhostUserBackendRequest;
 
@@ -186,6 +190,23 @@ typedef struct VhostUserShared {
     unsigned char uuid[UUID_LEN];
 } VhostUserShared;
 
+/* For the flags field of VhostUserMMap */
+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
+
+typedef struct {
+    /* VIRTIO Shared Memory Region ID */
+    uint8_t shmid;
+    uint8_t padding[7];
+    /* File offset */
+    uint64_t fd_offset;
+    /* Offset within the VIRTIO Shared Memory Region */
+    uint64_t shm_offset;
+    /* Size of the mapping */
+    uint64_t len;
+    /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
+    uint16_t flags;
+} VhostUserMMap;
+
 #define VU_PACKED __attribute__((packed))
 
 typedef struct VhostUserMsg {
@@ -210,6 +231,7 @@ typedef struct VhostUserMsg {
         VhostUserVringArea area;
         VhostUserInflight inflight;
         VhostUserShared object;
+        VhostUserMMap mmap;
     } payload;
 
     int fds[VHOST_MEMORY_BASELINE_NREGIONS];
@@ -593,6 +615,38 @@ bool vu_add_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
  */
 bool vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
 
+/**
+ * vu_shmem_map:
+ * @dev: a VuDev context
+ * @shmid: VIRTIO Shared Memory Region ID
+ * @fd_offset: File offset
+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
+ * @len: Size of the mapping
+ * @flags: Flags for the mmap operation
+ * @fd: A file descriptor
+ *
+ * Advertises a new mapping to be made in a given VIRTIO Shared Memory Region.
+ *
+ * Returns: TRUE on success, FALSE on failure.
+ */
+bool vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
+                  uint64_t shm_offset, uint64_t len, uint64_t flags, int fd);
+
+/**
+ * vu_shmem_unmap:
+ * @dev: a VuDev context
+ * @shmid: VIRTIO Shared Memory Region ID
+ * @fd_offset: File offset
+ * @len: Size of the mapping
+ *
+ * The front-end un-mmaps a given range in the VIRTIO Shared Memory Region
+ * with the requested `shmid`.
+ *
+ * Returns: TRUE on success, FALSE on failure.
+ */
+bool vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset,
+                    uint64_t len);
+
 /**
  * vu_queue_set_notification:
  * @dev: a VuDev context
-- 
2.49.0



^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v7 2/8] vhost_user.rst: Align VhostUserMsg excerpt members
  2025-08-18 10:03 [PATCH v7 0/8] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
  2025-08-18 10:03 ` [PATCH v7 1/8] vhost-user: Add VirtIO Shared Memory map request Albert Esteve
@ 2025-08-18 10:03 ` Albert Esteve
  2025-08-18 10:03 ` [PATCH v7 3/8] vhost_user.rst: Add SHMEM_MAP/_UNMAP to spec Albert Esteve
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: Albert Esteve @ 2025-08-18 10:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: david, Michael S. Tsirkin, hi, jasowang, Laurent Vivier, dbassey,
	Stefano Garzarella, Paolo Bonzini, stefanha, stevensd,
	Fabiano Rosas, Alex Bennée, slp, Albert Esteve

Add missing members to the VhostUserMsg excerpt in
the vhost-user spec documentation.

Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
 docs/interop/vhost-user.rst | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 2e50f2ddfa..436a94c0ee 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -366,11 +366,15 @@ In QEMU the vhost-user message is implemented with the following struct:
           struct vhost_vring_state state;
           struct vhost_vring_addr addr;
           VhostUserMemory memory;
+          VhostUserMemRegMsg mem_reg;
           VhostUserLog log;
           struct vhost_iotlb_msg iotlb;
           VhostUserConfig config;
+          VhostUserCryptoSession session;
           VhostUserVringArea area;
           VhostUserInflight inflight;
+          VhostUserShared object;
+          VhostUserTransferDeviceState transfer_state;
       };
   } QEMU_PACKED VhostUserMsg;
 
-- 
2.49.0



^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v7 3/8] vhost_user.rst: Add SHMEM_MAP/_UNMAP to spec
  2025-08-18 10:03 [PATCH v7 0/8] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
  2025-08-18 10:03 ` [PATCH v7 1/8] vhost-user: Add VirtIO Shared Memory map request Albert Esteve
  2025-08-18 10:03 ` [PATCH v7 2/8] vhost_user.rst: Align VhostUserMsg excerpt members Albert Esteve
@ 2025-08-18 10:03 ` Albert Esteve
  2025-08-18 10:03 ` [PATCH v7 4/8] vhost_user: Add frontend get_shmem_config command Albert Esteve
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: Albert Esteve @ 2025-08-18 10:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: david, Michael S. Tsirkin, hi, jasowang, Laurent Vivier, dbassey,
	Stefano Garzarella, Paolo Bonzini, stefanha, stevensd,
	Fabiano Rosas, Alex Bennée, slp, Albert Esteve

Add SHMEM_MAP/_UNMAP request to the vhost-user
spec documentation.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
 docs/interop/vhost-user.rst | 58 +++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 436a94c0ee..dab9f3af42 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -350,6 +350,27 @@ Device state transfer parameters
   In the future, additional phases might be added e.g. to allow
   iterative migration while the device is running.
 
+MMAP request
+^^^^^^^^^^^^
+
++-------+---------+-----------+------------+-----+-------+
+| shmid | padding | fd_offset | shm_offset | len | flags |
++-------+---------+-----------+------------+-----+-------+
+
+:shmid: a 8-bit shared memory region identifier
+
+:fd_offset: a 64-bit offset of this area from the start
+            of the supplied file descriptor
+
+:shm_offset: a 64-bit offset from the start of the
+             pointed shared memory region
+
+:len: a 64-bit size of the memory to map
+
+:flags: a 64-bit value:
+  - 0: Pages are mapped read-only
+  - 1: Pages are mapped read-write
+
 C structure
 -----------
 
@@ -375,6 +396,7 @@ In QEMU the vhost-user message is implemented with the following struct:
           VhostUserInflight inflight;
           VhostUserShared object;
           VhostUserTransferDeviceState transfer_state;
+          VhostUserMMap mmap;
       };
   } QEMU_PACKED VhostUserMsg;
 
@@ -1057,6 +1079,7 @@ Protocol features
   #define VHOST_USER_PROTOCOL_F_XEN_MMAP             17
   #define VHOST_USER_PROTOCOL_F_SHARED_OBJECT        18
   #define VHOST_USER_PROTOCOL_F_DEVICE_STATE         19
+  #define VHOST_USER_PROTOCOL_F_SHMEM                20
 
 Front-end message types
 -----------------------
@@ -1865,6 +1888,41 @@ is sent by the front-end.
   when the operation is successful, or non-zero otherwise. Note that if the
   operation fails, no fd is sent to the backend.
 
+``VHOST_USER_BACKEND_SHMEM_MAP``
+  :id: 9
+  :equivalent ioctl: N/A
+  :request payload: fd and ``struct VhostUserMMap``
+  :reply payload: N/A
+
+  When the ``VHOST_USER_PROTOCOL_F_SHMEM`` protocol feature has been
+  successfully negotiated, this message can be submitted by the backends to
+  advertise a new mapping to be made in a given VIRTIO Shared Memory Region.
+  Upon receiving the message, the front-end will mmap the given fd into the
+  VIRTIO Shared Memory Region with the requested ``shmid``.
+  If``VHOST_USER_PROTOCOL_F_REPLY_ACK`` is negotiated, and
+  back-end set the ``VHOST_USER_NEED_REPLY`` flag, the front-end
+  must respond with zero when operation is successfully completed,
+  or non-zero otherwise.
+
+  Mapping over an already existing map is not allowed and requests shall fail.
+  Therefore, the memory range in the request must correspond with a valid,
+  free region of the VIRTIO Shared Memory Region. Also, note that mappings
+  consume resources and that the request can fail when there are no resources
+  available.
+
+``VHOST_USER_BACKEND_SHMEM_UNMAP``
+  :id: 10
+  :equivalent ioctl: N/A
+  :request payload: ``struct VhostUserMMap``
+  :reply payload: N/A
+
+  When the ``VHOST_USER_PROTOCOL_F_SHMEM`` protocol feature has been
+  successfully negotiated, this message can be submitted by the backends so
+  that the front-end un-mmaps a given range (``shm_offset``, ``len``) in the
+  VIRTIO Shared Memory Region with the requested ``shmid``. Note that the
+  given range shall correspond to the entirety of a valid mapped region.
+  A reply is generated indicating whether unmapping succeeded.
+
 .. _reply_ack:
 
 VHOST_USER_PROTOCOL_F_REPLY_ACK
-- 
2.49.0



^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v7 4/8] vhost_user: Add frontend get_shmem_config command
  2025-08-18 10:03 [PATCH v7 0/8] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
                   ` (2 preceding siblings ...)
  2025-08-18 10:03 ` [PATCH v7 3/8] vhost_user.rst: Add SHMEM_MAP/_UNMAP to spec Albert Esteve
@ 2025-08-18 10:03 ` Albert Esteve
  2025-08-18 10:03 ` [PATCH v7 5/8] vhost_user.rst: Add GET_SHMEM_CONFIG message Albert Esteve
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: Albert Esteve @ 2025-08-18 10:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: david, Michael S. Tsirkin, hi, jasowang, Laurent Vivier, dbassey,
	Stefano Garzarella, Paolo Bonzini, stefanha, stevensd,
	Fabiano Rosas, Alex Bennée, slp, Albert Esteve

The frontend can use this command to retrieve
VirtIO Shared Memory Regions configuration from
the backend. The response contains the number of
shared memory regions, their size, and shmid.

This is useful when the frontend is unaware of
specific backend type and configuration,
for example, in the `vhost-user-device` case.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
 hw/virtio/vhost-user.c            | 43 +++++++++++++++++++++++++++++++
 include/hw/virtio/vhost-backend.h | 10 +++++++
 include/hw/virtio/vhost-user.h    |  1 +
 include/hw/virtio/virtio.h        |  2 ++
 4 files changed, 56 insertions(+)

diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index eb3ad728b0..d1bf162497 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -105,6 +105,7 @@ typedef enum VhostUserRequest {
     VHOST_USER_GET_SHARED_OBJECT = 41,
     VHOST_USER_SET_DEVICE_STATE_FD = 42,
     VHOST_USER_CHECK_DEVICE_STATE = 43,
+    VHOST_USER_GET_SHMEM_CONFIG = 44,
     VHOST_USER_MAX
 } VhostUserRequest;
 
@@ -139,6 +140,12 @@ typedef struct VhostUserMemRegMsg {
     VhostUserMemoryRegion region;
 } VhostUserMemRegMsg;
 
+typedef struct VhostUserShMemConfig {
+    uint32_t nregions;
+    uint32_t padding;
+    uint64_t memory_sizes[VIRTIO_MAX_SHMEM_REGIONS];
+} VhostUserShMemConfig;
+
 typedef struct VhostUserLog {
     uint64_t mmap_size;
     uint64_t mmap_offset;
@@ -245,6 +252,7 @@ typedef union {
         VhostUserShared object;
         VhostUserTransferDeviceState transfer_state;
         VhostUserMMap mmap;
+        VhostUserShMemConfig shmem;
 } VhostUserPayload;
 
 typedef struct VhostUserMsg {
@@ -3213,6 +3221,40 @@ static int vhost_user_check_device_state(struct vhost_dev *dev, Error **errp)
     return 0;
 }
 
+static int vhost_user_get_shmem_config(struct vhost_dev *dev,
+                                       int *nregions,
+                                       uint64_t *memory_sizes,
+                                       Error **errp)
+{
+    int ret;
+    VhostUserMsg msg = {
+        .hdr.request = VHOST_USER_GET_SHMEM_CONFIG,
+        .hdr.flags = VHOST_USER_VERSION,
+    };
+
+    if (!virtio_has_feature(dev->protocol_features,
+                            VHOST_USER_PROTOCOL_F_SHMEM)) {
+        return 0;
+    }
+
+    ret = vhost_user_write(dev, &msg, NULL, 0);
+    if (ret < 0) {
+        return ret;
+    }
+
+    ret = vhost_user_read(dev, &msg);
+    if (ret < 0) {
+        return ret;
+    }
+
+    assert(msg.payload.shmem.nregions <= VIRTIO_MAX_SHMEM_REGIONS);
+    *nregions = msg.payload.shmem.nregions;
+    memcpy(memory_sizes,
+           &msg.payload.shmem.memory_sizes,
+           sizeof(uint64_t) * msg.payload.shmem.nregions);
+    return 0;
+}
+
 const VhostOps user_ops = {
         .backend_type = VHOST_BACKEND_TYPE_USER,
         .vhost_backend_init = vhost_user_backend_init,
@@ -3251,4 +3293,5 @@ const VhostOps user_ops = {
         .vhost_supports_device_state = vhost_user_supports_device_state,
         .vhost_set_device_state_fd = vhost_user_set_device_state_fd,
         .vhost_check_device_state = vhost_user_check_device_state,
+        .vhost_get_shmem_config = vhost_user_get_shmem_config,
 };
diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
index d6df209a2f..42400b276e 100644
--- a/include/hw/virtio/vhost-backend.h
+++ b/include/hw/virtio/vhost-backend.h
@@ -159,6 +159,15 @@ typedef int (*vhost_set_device_state_fd_op)(struct vhost_dev *dev,
                                             int *reply_fd,
                                             Error **errp);
 typedef int (*vhost_check_device_state_op)(struct vhost_dev *dev, Error **errp);
+/*
+ * Max regions is VIRTIO_MAX_SHMEM_REGIONS, so that is the maximum
+ * number of memory_sizes that will be accepted.
+ */
+typedef int (*vhost_get_shmem_config_op)(struct vhost_dev *dev,
+                                         int *nregions,
+                                         uint64_t *memory_sizes,
+                                         Error **errp);
+
 
 typedef struct VhostOps {
     VhostBackendType backend_type;
@@ -214,6 +223,7 @@ typedef struct VhostOps {
     vhost_supports_device_state_op vhost_supports_device_state;
     vhost_set_device_state_fd_op vhost_set_device_state_fd;
     vhost_check_device_state_op vhost_check_device_state;
+    vhost_get_shmem_config_op vhost_get_shmem_config;
 } VhostOps;
 
 int vhost_backend_update_device_iotlb(struct vhost_dev *dev,
diff --git a/include/hw/virtio/vhost-user.h b/include/hw/virtio/vhost-user.h
index 9a3f238b43..bacc7d184c 100644
--- a/include/hw/virtio/vhost-user.h
+++ b/include/hw/virtio/vhost-user.h
@@ -32,6 +32,7 @@ enum VhostUserProtocolFeature {
     /* Feature 17 reserved for VHOST_USER_PROTOCOL_F_XEN_MMAP. */
     VHOST_USER_PROTOCOL_F_SHARED_OBJECT = 18,
     VHOST_USER_PROTOCOL_F_DEVICE_STATE = 19,
+    VHOST_USER_PROTOCOL_F_SHMEM = 20,
     VHOST_USER_PROTOCOL_F_MAX
 };
 
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index a563bbac2c..2109186d7d 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -81,6 +81,8 @@ typedef struct VirtQueueElement
 
 #define VIRTIO_NO_VECTOR 0xffff
 
+#define VIRTIO_MAX_SHMEM_REGIONS 256
+
 /* special index value used internally for config irqs */
 #define VIRTIO_CONFIG_IRQ_IDX -1
 
-- 
2.49.0



^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v7 5/8] vhost_user.rst: Add GET_SHMEM_CONFIG message
  2025-08-18 10:03 [PATCH v7 0/8] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
                   ` (3 preceding siblings ...)
  2025-08-18 10:03 ` [PATCH v7 4/8] vhost_user: Add frontend get_shmem_config command Albert Esteve
@ 2025-08-18 10:03 ` Albert Esteve
  2025-08-18 10:03 ` [PATCH v7 6/8] tests/qtest: Add GET_SHMEM validation test Albert Esteve
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: Albert Esteve @ 2025-08-18 10:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: david, Michael S. Tsirkin, hi, jasowang, Laurent Vivier, dbassey,
	Stefano Garzarella, Paolo Bonzini, stefanha, stevensd,
	Fabiano Rosas, Alex Bennée, slp, Albert Esteve

Add GET_SHMEM_CONFIG vhost-user frontend
message to the spec documentation.

Reviewed-by: Alyssa Ross <hi@alyssa.is>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
 docs/interop/vhost-user.rst | 39 +++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index dab9f3af42..54d64ae75f 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -371,6 +371,20 @@ MMAP request
   - 0: Pages are mapped read-only
   - 1: Pages are mapped read-write
 
+VIRTIO Shared Memory Region configuration
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
++-------------+---------+------------+----+--------------+
+| num regions | padding | mem size 0 | .. | mem size 255 |
++-------------+---------+------------+----+--------------+
+
+:num regions: a 32-bit number of regions
+
+:padding: 32-bit
+
+:mem size: contains ``num regions`` 64-bit fields representing the size of each
+           VIRTIO Shared Memory Region
+
 C structure
 -----------
 
@@ -397,6 +411,7 @@ In QEMU the vhost-user message is implemented with the following struct:
           VhostUserShared object;
           VhostUserTransferDeviceState transfer_state;
           VhostUserMMap mmap;
+          VhostUserShMemConfig shmem;
       };
   } QEMU_PACKED VhostUserMsg;
 
@@ -1754,6 +1769,30 @@ Front-end message types
   Using this function requires prior negotiation of the
   ``VHOST_USER_PROTOCOL_F_DEVICE_STATE`` feature.
 
+``VHOST_USER_GET_SHMEM_CONFIG``
+  :id: 44
+  :equivalent ioctl: N/A
+  :request payload: N/A
+  :reply payload: ``struct VhostUserShMemConfig``
+
+  When the ``VHOST_USER_PROTOCOL_F_SHMEM`` protocol feature has been
+  successfully negotiated, this message can be submitted by the front-end
+  to gather the VIRTIO Shared Memory Region configuration. The back-end will
+  respond with the number of VIRTIO Shared Memory Regions it requires, and
+  each shared memory region size in an array. The shared memory IDs are
+  represented by the array index. The information returned shall comply
+  with the following rules:
+
+  * The shared information will remain valid and unchanged for the entire
+    lifetime of the connection.
+
+  * The Shared Memory Region size must be a multiple of the page size
+    supported by mmap(2).
+
+  * The size may be 0 if the region is unused. This can happen when the
+    device does not support an optional feature but does support a feature
+    that uses a higher shmid.
+
 Back-end message types
 ----------------------
 
-- 
2.49.0



^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v7 6/8] tests/qtest: Add GET_SHMEM validation test
  2025-08-18 10:03 [PATCH v7 0/8] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
                   ` (4 preceding siblings ...)
  2025-08-18 10:03 ` [PATCH v7 5/8] vhost_user.rst: Add GET_SHMEM_CONFIG message Albert Esteve
@ 2025-08-18 10:03 ` Albert Esteve
  2025-08-18 23:14   ` Stefan Hajnoczi
  2025-08-18 10:03 ` [PATCH v7 7/8] qmp: add shmem feature map Albert Esteve
  2025-08-18 10:03 ` [PATCH v7 8/8] vhost-user-device: Add shared memory BAR Albert Esteve
  7 siblings, 1 reply; 22+ messages in thread
From: Albert Esteve @ 2025-08-18 10:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: david, Michael S. Tsirkin, hi, jasowang, Laurent Vivier, dbassey,
	Stefano Garzarella, Paolo Bonzini, stefanha, stevensd,
	Fabiano Rosas, Alex Bennée, slp, Albert Esteve

Improve vhost-user-test to properly validate
VHOST_USER_GET_SHMEM_CONFIG message handling by
directly simulating the message exchange.

The test manually triggers the
VHOST_USER_GET_SHMEM_CONFIG message by calling
chr_read() with a crafted VhostUserMsg, allowing direct
validation of the shmem configuration response handler.

Added TestServerShmem structure to track shmem
configuration state, including nregions_sent and
sizes_sent arrays for comprehensive validation.
The test verifies that the response contains the expected
number of shared memory regions and their corresponding
sizes.

Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
 tests/qtest/vhost-user-test.c | 91 +++++++++++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)

diff --git a/tests/qtest/vhost-user-test.c b/tests/qtest/vhost-user-test.c
index 75cb3e44b2..44a5e90b2e 100644
--- a/tests/qtest/vhost-user-test.c
+++ b/tests/qtest/vhost-user-test.c
@@ -88,6 +88,7 @@ typedef enum VhostUserRequest {
     VHOST_USER_SET_VRING_ENABLE = 18,
     VHOST_USER_GET_CONFIG = 24,
     VHOST_USER_SET_CONFIG = 25,
+    VHOST_USER_GET_SHMEM_CONFIG = 44,
     VHOST_USER_MAX
 } VhostUserRequest;
 
@@ -109,6 +110,20 @@ typedef struct VhostUserLog {
     uint64_t mmap_offset;
 } VhostUserLog;
 
+#define VIRTIO_MAX_SHMEM_REGIONS 256
+
+typedef struct VhostUserShMemConfig {
+    uint32_t nregions;
+    uint32_t padding;
+    uint64_t memory_sizes[VIRTIO_MAX_SHMEM_REGIONS];
+} VhostUserShMemConfig;
+
+typedef struct TestServerShmem {
+    bool test_enabled;
+    uint32_t nregions_sent;
+    uint64_t sizes_sent[VIRTIO_MAX_SHMEM_REGIONS];
+} TestServerShmem;
+
 typedef struct VhostUserMsg {
     VhostUserRequest request;
 
@@ -124,6 +139,7 @@ typedef struct VhostUserMsg {
         struct vhost_vring_addr addr;
         VhostUserMemory memory;
         VhostUserLog log;
+        VhostUserShMemConfig shmem;
     } payload;
 } QEMU_PACKED VhostUserMsg;
 
@@ -170,6 +186,7 @@ typedef struct TestServer {
     bool test_fail;
     int test_flags;
     int queues;
+    TestServerShmem shmem;
     struct vhost_user_ops *vu_ops;
 } TestServer;
 
@@ -513,6 +530,31 @@ static void chr_read(void *opaque, const uint8_t *buf, int size)
         qos_printf("set_vring(%d)=%s\n", msg.payload.state.index,
                    msg.payload.state.num ? "enabled" : "disabled");
         break;
+    
+    case VHOST_USER_GET_SHMEM_CONFIG:
+        if (!s->shmem.test_enabled) {
+            /* Reply with error if shmem feature not enabled */
+            msg.flags |= VHOST_USER_REPLY_MASK;
+            msg.size = sizeof(uint64_t);
+            msg.payload.u64 = -1; /* Error */
+            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
+        } else {
+            /* Reply with test shmem configuration */
+            msg.flags |= VHOST_USER_REPLY_MASK;
+            msg.size = sizeof(VhostUserShMemConfig);
+            msg.payload.shmem.nregions = 2; /* Test with 2 regions */
+            msg.payload.shmem.padding = 0;
+            msg.payload.shmem.memory_sizes[0] = 0x100000; /* 1MB */
+            msg.payload.shmem.memory_sizes[1] = 0x200000; /* 2MB */
+            
+            /* Record what we're sending for test validation */
+            s->shmem.nregions_sent = msg.payload.shmem.nregions;
+            s->shmem.sizes_sent[0] = msg.payload.shmem.memory_sizes[0];
+            s->shmem.sizes_sent[1] = msg.payload.shmem.memory_sizes[1];
+            
+            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
+        }
+        break;
 
     default:
         qos_printf("vhost-user: un-handled message: %d\n", msg.request);
@@ -809,6 +851,22 @@ static void *vhost_user_test_setup_shm(GString *cmd_line, void *arg)
     return server;
 }
 
+static void *vhost_user_test_setup_shmem_config(GString *cmd_line, void *arg)
+{
+    TestServer *server = test_server_new("vhost-user-test", arg);
+    test_server_listen(server);
+
+    /* Enable shmem testing for this server */
+    server->shmem.test_enabled = true;
+
+    append_mem_opts(server, cmd_line, 256, TEST_MEMFD_SHM);
+    server->vu_ops->append_opts(server, cmd_line, "");
+
+    g_test_queue_destroy(vhost_user_test_cleanup, server);
+
+    return server;
+}
+
 static void test_read_guest_mem(void *obj, void *arg, QGuestAllocator *alloc)
 {
     TestServer *server = arg;
@@ -1089,6 +1147,33 @@ static struct vhost_user_ops g_vu_net_ops = {
     .get_protocol_features = vu_net_get_protocol_features,
 };
 
+/* Test function for VHOST_USER_GET_SHMEM_CONFIG message */
+static void test_shmem_config(void *obj, void *arg, QGuestAllocator *alloc)
+{
+    TestServer *s = arg;
+    
+    g_assert_true(s->shmem.test_enabled);
+    
+    g_mutex_lock(&s->data_mutex);
+    s->shmem.nregions_sent = 0;
+    s->shmem.sizes_sent[0] = 0;
+    s->shmem.sizes_sent[1] = 0;
+    g_mutex_unlock(&s->data_mutex);
+    
+    VhostUserMsg msg = {
+        .request = VHOST_USER_GET_SHMEM_CONFIG,
+        .flags = VHOST_USER_VERSION,
+        .size = 0,
+    };
+    chr_read(s, (uint8_t *) &msg, VHOST_USER_HDR_SIZE);
+
+    g_mutex_lock(&s->data_mutex);
+    g_assert_cmpint(s->shmem.nregions_sent, ==, 2);
+    g_assert_cmpint(s->shmem.sizes_sent[0], ==, 0x100000); /* 1MB */
+    g_assert_cmpint(s->shmem.sizes_sent[1], ==, 0x200000); /* 2MB */
+    g_mutex_unlock(&s->data_mutex);
+}
+
 static void register_vhost_user_test(void)
 {
     QOSGraphTestOptions opts = {
@@ -1136,6 +1221,12 @@ static void register_vhost_user_test(void)
     qos_add_test("vhost-user/multiqueue",
                  "virtio-net",
                  test_multiqueue, &opts);
+    
+    opts.before = vhost_user_test_setup_shmem_config;
+    opts.edge.extra_device_opts = "";
+    qos_add_test("vhost-user/shmem-config",
+                 "virtio-net",
+                 test_shmem_config, &opts);
 }
 libqos_init(register_vhost_user_test);
 
-- 
2.49.0



^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v7 7/8] qmp: add shmem feature map
  2025-08-18 10:03 [PATCH v7 0/8] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
                   ` (5 preceding siblings ...)
  2025-08-18 10:03 ` [PATCH v7 6/8] tests/qtest: Add GET_SHMEM validation test Albert Esteve
@ 2025-08-18 10:03 ` Albert Esteve
  2025-08-18 10:03 ` [PATCH v7 8/8] vhost-user-device: Add shared memory BAR Albert Esteve
  7 siblings, 0 replies; 22+ messages in thread
From: Albert Esteve @ 2025-08-18 10:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: david, Michael S. Tsirkin, hi, jasowang, Laurent Vivier, dbassey,
	Stefano Garzarella, Paolo Bonzini, stefanha, stevensd,
	Fabiano Rosas, Alex Bennée, slp, Albert Esteve

Add new vhost-user protocol
VHOST_USER_PROTOCOL_F_SHMEM feature to
feature map.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
 hw/virtio/virtio-qmp.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hw/virtio/virtio-qmp.c b/hw/virtio/virtio-qmp.c
index 3b6377cf0d..8c2cfd0916 100644
--- a/hw/virtio/virtio-qmp.c
+++ b/hw/virtio/virtio-qmp.c
@@ -127,6 +127,9 @@ static const qmp_virtio_feature_map_t vhost_user_protocol_map[] = {
     FEATURE_ENTRY(VHOST_USER_PROTOCOL_F_DEVICE_STATE, \
             "VHOST_USER_PROTOCOL_F_DEVICE_STATE: Backend device state transfer "
             "supported"),
+    FEATURE_ENTRY(VHOST_USER_PROTOCOL_F_SHMEM, \
+                "VHOST_USER_PROTOCOL_F_SHMEM: Backend shared memory mapping "
+                "supported"),
     { -1, "" }
 };
 
-- 
2.49.0



^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v7 8/8] vhost-user-device: Add shared memory BAR
  2025-08-18 10:03 [PATCH v7 0/8] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
                   ` (6 preceding siblings ...)
  2025-08-18 10:03 ` [PATCH v7 7/8] qmp: add shmem feature map Albert Esteve
@ 2025-08-18 10:03 ` Albert Esteve
  2025-08-19 10:42   ` Stefan Hajnoczi
  7 siblings, 1 reply; 22+ messages in thread
From: Albert Esteve @ 2025-08-18 10:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: david, Michael S. Tsirkin, hi, jasowang, Laurent Vivier, dbassey,
	Stefano Garzarella, Paolo Bonzini, stefanha, stevensd,
	Fabiano Rosas, Alex Bennée, slp, Albert Esteve

Add shared memory BAR support to vhost-user-device-pci
to enable direct file mapping for VIRTIO Shared
Memory Regions.

The implementation creates a consolidated shared
memory BAR that contains all VIRTIO Shared
Memory Regions as subregions. Each region is
configured with its proper shmid, size, and
offset within the BAR. The number and size of
regions are retrieved via VHOST_USER_GET_SHMEM_CONFIG
message sent by vhost-user-base during realization
after virtio_init().

Specifiically, it uses BAR 3 to avoid conflicts, as
it is currently unused.

The shared memory BAR is only created when the
backend supports VHOST_USER_PROTOCOL_F_SHMEM and
has configured shared memory regions. This maintains
backward compatibility with backends that do not
support shared memory functionality.

Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
 hw/virtio/vhost-user-base.c       | 49 +++++++++++++++++++++++++++++--
 hw/virtio/vhost-user-device-pci.c | 34 +++++++++++++++++++--
 2 files changed, 78 insertions(+), 5 deletions(-)

diff --git a/hw/virtio/vhost-user-base.c b/hw/virtio/vhost-user-base.c
index ff67a020b4..932f9b5596 100644
--- a/hw/virtio/vhost-user-base.c
+++ b/hw/virtio/vhost-user-base.c
@@ -16,6 +16,7 @@
 #include "hw/virtio/virtio-bus.h"
 #include "hw/virtio/vhost-user-base.h"
 #include "qemu/error-report.h"
+#include "migration/blocker.h"
 
 static void vub_start(VirtIODevice *vdev)
 {
@@ -276,7 +277,9 @@ static void vub_device_realize(DeviceState *dev, Error **errp)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
     VHostUserBase *vub = VHOST_USER_BASE(dev);
-    int ret;
+    uint64_t memory_sizes[VIRTIO_MAX_SHMEM_REGIONS];
+    g_autofree char *name = NULL;
+    int i, ret, nregions;
 
     if (!vub->chardev.chr) {
         error_setg(errp, "vhost-user-base: missing chardev");
@@ -319,7 +322,7 @@ static void vub_device_realize(DeviceState *dev, Error **errp)
 
     /* Allocate queues */
     vub->vqs = g_ptr_array_sized_new(vub->num_vqs);
-    for (int i = 0; i < vub->num_vqs; i++) {
+    for (i = 0; i < vub->num_vqs; i++) {
         g_ptr_array_add(vub->vqs,
                         virtio_add_queue(vdev, vub->vq_size,
                                          vub_handle_output));
@@ -333,11 +336,51 @@ static void vub_device_realize(DeviceState *dev, Error **errp)
                          VHOST_BACKEND_TYPE_USER, 0, errp);
 
     if (ret < 0) {
-        do_vhost_user_cleanup(vdev, vub);
+        goto err;
+    }
+
+    ret = vub->vhost_dev.vhost_ops->vhost_get_shmem_config(&vub->vhost_dev,
+                                                           &nregions,
+                                                           memory_sizes,
+                                                           errp);
+
+    if (ret < 0) {
+        goto err;
+    }
+
+    for (i = 0; i < nregions; i++) {
+        if (memory_sizes[i]) {
+            if (vub->vhost_dev.migration_blocker == NULL) {
+                error_setg(&vub->vhost_dev.migration_blocker,
+                       "Migration disabled: devices with VIRTIO Shared Memory "
+                       "Regions do not support migration yet.");
+                ret = migrate_add_blocker_normal(
+                    &vub->vhost_dev.migration_blocker,
+                    errp);
+
+                if (ret < 0) {
+                    goto err;
+                }
+            }
+
+            if (memory_sizes[i] % qemu_real_host_page_size() != 0) {
+                error_setg(errp, "Shared memory %d size must be a power of 2 "
+                                 "no smaller than the page size", i);
+                goto err;
+            }
+
+            name = g_strdup_printf("vub-shm-%d", i);
+            memory_region_init(&virtio_new_shmem_region(vdev, i)->mr,
+                               OBJECT(vdev), name,
+                               memory_sizes[i]);
+        }
     }
 
     qemu_chr_fe_set_handlers(&vub->chardev, NULL, NULL, vub_event, NULL,
                              dev, NULL, true);
+    return;
+err:
+    do_vhost_user_cleanup(vdev, vub);
 }
 
 static void vub_device_unrealize(DeviceState *dev)
diff --git a/hw/virtio/vhost-user-device-pci.c b/hw/virtio/vhost-user-device-pci.c
index f10bac874e..bac99e7c60 100644
--- a/hw/virtio/vhost-user-device-pci.c
+++ b/hw/virtio/vhost-user-device-pci.c
@@ -8,14 +8,18 @@
  */
 
 #include "qemu/osdep.h"
+#include "qapi/error.h"
 #include "hw/qdev-properties.h"
 #include "hw/virtio/vhost-user-base.h"
 #include "hw/virtio/virtio-pci.h"
 
+#define VIRTIO_DEVICE_PCI_SHMEM_BAR 3
+
 struct VHostUserDevicePCI {
     VirtIOPCIProxy parent_obj;
 
     VHostUserBase vub;
+    MemoryRegion shmembar;
 };
 
 #define TYPE_VHOST_USER_DEVICE_PCI "vhost-user-device-pci-base"
@@ -25,10 +29,36 @@ OBJECT_DECLARE_SIMPLE_TYPE(VHostUserDevicePCI, VHOST_USER_DEVICE_PCI)
 static void vhost_user_device_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
 {
     VHostUserDevicePCI *dev = VHOST_USER_DEVICE_PCI(vpci_dev);
-    DeviceState *vdev = DEVICE(&dev->vub);
+    DeviceState *dev_state = DEVICE(&dev->vub);
+    VirtIODevice *vdev = VIRTIO_DEVICE(dev_state);
+    VirtioSharedMemory *shmem, *next;
+    uint64_t offset = 0, shmem_size = 0;
 
     vpci_dev->nvectors = 1;
-    qdev_realize(vdev, BUS(&vpci_dev->bus), errp);
+    qdev_realize(dev_state, BUS(&vpci_dev->bus), errp);
+
+    QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
+        if (shmem->mr.size > UINT64_MAX - shmem_size) {
+            error_setg(errp, "Total shared memory required overflow");
+            return;
+        }
+        shmem_size = shmem_size + shmem->mr.size;
+    }
+    if (shmem_size) {
+        memory_region_init(&dev->shmembar, OBJECT(vpci_dev),
+                           "vhost-device-pci-shmembar", shmem_size);
+        QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
+            memory_region_add_subregion(&dev->shmembar, offset, &shmem->mr);
+            virtio_pci_add_shm_cap(vpci_dev, VIRTIO_DEVICE_PCI_SHMEM_BAR,
+                                   offset, shmem->mr.size, shmem->shmid);
+            offset = offset + shmem->mr.size;
+        }
+        pci_register_bar(&vpci_dev->pci_dev, VIRTIO_DEVICE_PCI_SHMEM_BAR,
+                        PCI_BASE_ADDRESS_SPACE_MEMORY |
+                        PCI_BASE_ADDRESS_MEM_PREFETCH |
+                        PCI_BASE_ADDRESS_MEM_TYPE_64,
+                        &dev->shmembar);
+    }
 }
 
 static void vhost_user_device_pci_class_init(ObjectClass *klass,
-- 
2.49.0



^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 1/8] vhost-user: Add VirtIO Shared Memory map request
  2025-08-18 10:03 ` [PATCH v7 1/8] vhost-user: Add VirtIO Shared Memory map request Albert Esteve
@ 2025-08-18 18:58   ` Stefan Hajnoczi
  2025-08-19 12:47     ` Albert Esteve
  2025-08-19  9:22   ` David Hildenbrand
  1 sibling, 1 reply; 22+ messages in thread
From: Stefan Hajnoczi @ 2025-08-18 18:58 UTC (permalink / raw)
  To: Albert Esteve
  Cc: qemu-devel, david, Michael S. Tsirkin, hi, jasowang,
	Laurent Vivier, dbassey, Stefano Garzarella, Paolo Bonzini,
	stevensd, Fabiano Rosas, Alex Bennée, slp

[-- Attachment #1: Type: text/plain, Size: 38748 bytes --]

On Mon, Aug 18, 2025 at 12:03:46PM +0200, Albert Esteve wrote:

(I haven't fully reviewed this yet, but here are my current comments.)

> Add SHMEM_MAP/UNMAP requests to vhost-user for
> dynamic management of VIRTIO Shared Memory mappings.
> 
> This implementation introduces VhostUserShmemObject
> as an intermediate QOM parent for MemoryRegions
> created for SHMEM_MAP requests. This object
> provides reference-counted lifecycle management
> with automatic cleanup.
> 
> This request allows backends to dynamically map
> file descriptors into a VIRTIO Shared Memory
> Regions identified by their shmid. Maps are created
> using memory_region_init_ram_device_ptr() with
> configurable read/write permissions, and the resulting
> MemoryRegions are added as subregions to the shmem
> container region. The mapped memory is then advertised
> to the guest VIRTIO drivers as a base address plus
> offset for reading and writting according
> to the requested mmap flags.
> 
> The backend can unmap memory ranges within a given
> VIRTIO Shared Memory Region to free resources.
> Upon receiving this message, the frontend removes
> the MemoryRegion as a subregion and automatically
> unreferences the associated VhostUserShmemObject,
> triggering cleanup if no other references exist.
> 
> Error handling has been improved to ensure consistent
> behavior across handlers that manage their own
> vhost_user_send_resp() calls. Since these handlers
> clear the VHOST_USER_NEED_REPLY_MASK flag, explicit
> error checking ensures proper connection closure on
> failures, maintaining the expected error flow.
> 
> Note the memory region commit for these
> operations needs to be delayed until after we
> respond to the backend to avoid deadlocks.
> 
> Signed-off-by: Albert Esteve <aesteve@redhat.com>
> ---
>  hw/virtio/meson.build                     |   1 +
>  hw/virtio/vhost-user-shmem.c              | 134 ++++++++++++++
>  hw/virtio/vhost-user.c                    | 207 +++++++++++++++++++++-
>  hw/virtio/virtio.c                        | 109 ++++++++++++
>  include/hw/virtio/vhost-user-shmem.h      |  75 ++++++++
>  include/hw/virtio/virtio.h                |  93 ++++++++++
>  subprojects/libvhost-user/libvhost-user.c |  70 ++++++++
>  subprojects/libvhost-user/libvhost-user.h |  54 ++++++
>  8 files changed, 741 insertions(+), 2 deletions(-)
>  create mode 100644 hw/virtio/vhost-user-shmem.c
>  create mode 100644 include/hw/virtio/vhost-user-shmem.h
> 
> diff --git a/hw/virtio/meson.build b/hw/virtio/meson.build
> index 3ea7b3cec8..5efcf70b75 100644
> --- a/hw/virtio/meson.build
> +++ b/hw/virtio/meson.build
> @@ -20,6 +20,7 @@ if have_vhost
>      # fixme - this really should be generic
>      specific_virtio_ss.add(files('vhost-user.c'))
>      system_virtio_ss.add(files('vhost-user-base.c'))
> +    system_virtio_ss.add(files('vhost-user-shmem.c'))
>  
>      # MMIO Stubs
>      system_virtio_ss.add(files('vhost-user-device.c'))
> diff --git a/hw/virtio/vhost-user-shmem.c b/hw/virtio/vhost-user-shmem.c
> new file mode 100644
> index 0000000000..1d763b56b6
> --- /dev/null
> +++ b/hw/virtio/vhost-user-shmem.c
> @@ -0,0 +1,134 @@
> +/*
> + * VHost-user Shared Memory Object
> + *
> + * Copyright Red Hat, Inc. 2025
> + *
> + * Authors:
> + *     Albert Esteve <aesteve@redhat.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "hw/virtio/vhost-user-shmem.h"
> +#include "system/memory.h"
> +#include "qapi/error.h"
> +#include "qemu/error-report.h"
> +#include "trace.h"
> +
> +/**
> + * VhostUserShmemObject
> + *
> + * An intermediate QOM object that manages individual shared memory mappings
> + * created by VHOST_USER_BACKEND_SHMEM_MAP requests. It acts as a parent for
> + * MemoryRegion objects, providing proper lifecycle management with reference
> + * counting. When the object is unreferenced and its reference count drops
> + * to zero, it automatically cleans up the MemoryRegion and unmaps the memory.
> + */
> +
> +static void vhost_user_shmem_object_finalize(Object *obj);
> +static void vhost_user_shmem_object_instance_init(Object *obj);
> +
> +static const TypeInfo vhost_user_shmem_object_info = {
> +    .name = TYPE_VHOST_USER_SHMEM_OBJECT,
> +    .parent = TYPE_OBJECT,
> +    .instance_size = sizeof(VhostUserShmemObject),
> +    .instance_init = vhost_user_shmem_object_instance_init,
> +    .instance_finalize = vhost_user_shmem_object_finalize,
> +};
> +
> +static void vhost_user_shmem_object_instance_init(Object *obj)
> +{
> +    VhostUserShmemObject *shmem_obj = VHOST_USER_SHMEM_OBJECT(obj);
> +
> +    shmem_obj->shmid = 0;
> +    shmem_obj->fd = -1;
> +    shmem_obj->shm_offset = 0;
> +    shmem_obj->len = 0;
> +    shmem_obj->mr = NULL;
> +}
> +
> +static void vhost_user_shmem_object_finalize(Object *obj)
> +{
> +    VhostUserShmemObject *shmem_obj = VHOST_USER_SHMEM_OBJECT(obj);
> +
> +    /* Clean up MemoryRegion if it exists */
> +    if (shmem_obj->mr) {
> +        /* Unparent the MemoryRegion to trigger cleanup */
> +        object_unparent(OBJECT(shmem_obj->mr));
> +        shmem_obj->mr = NULL;
> +    }
> +
> +    /* Close file descriptor */
> +    if (shmem_obj->fd >= 0) {
> +        close(shmem_obj->fd);
> +        shmem_obj->fd = -1;
> +    }
> +}
> +
> +VhostUserShmemObject *vhost_user_shmem_object_new(uint8_t shmid,
> +                                                   int fd,
> +                                                   uint64_t fd_offset,
> +                                                   uint64_t shm_offset,
> +                                                   uint64_t len,
> +                                                   uint16_t flags)
> +{
> +    VhostUserShmemObject *shmem_obj;
> +    MemoryRegion *mr;
> +    g_autoptr(GString) mr_name = g_string_new(NULL);
> +    uint32_t ram_flags;
> +    Error *local_err = NULL;
> +
> +    if (len == 0) {
> +        error_report("Shared memory mapping size cannot be zero");
> +        return NULL;
> +    }
> +
> +    fd = dup(fd);
> +    if (fd < 0) {
> +        error_report("Failed to duplicate fd: %s", strerror(errno));
> +        return NULL;
> +    }
> +
> +    /* Determine RAM flags */
> +    ram_flags = RAM_SHARED;
> +    if (!(flags & VHOST_USER_FLAG_MAP_RW)) {
> +        ram_flags |= RAM_READONLY_FD;
> +    }
> +
> +    /* Create the VhostUserShmemObject */
> +    shmem_obj = VHOST_USER_SHMEM_OBJECT(
> +        object_new(TYPE_VHOST_USER_SHMEM_OBJECT));
> +
> +    /* Set up object properties */
> +    shmem_obj->shmid = shmid;
> +    shmem_obj->fd = fd;
> +    shmem_obj->shm_offset = shm_offset;
> +    shmem_obj->len = len;
> +
> +    /* Create MemoryRegion as a child of this object */
> +    mr = g_new0(MemoryRegion, 1);
> +    g_string_printf(mr_name, "vhost-user-shmem-%d-%" PRIx64, shmid, shm_offset);
> +
> +    /* Initialize MemoryRegion with file descriptor */
> +    if (!memory_region_init_ram_from_fd(mr, OBJECT(shmem_obj), mr_name->str,
> +                                        len, ram_flags, fd, fd_offset,
> +                                        &local_err)) {
> +        error_report_err(local_err);
> +        g_free(mr);
> +        close(fd);
> +        object_unref(OBJECT(shmem_obj));
> +        return NULL;
> +    }
> +
> +    shmem_obj->mr = mr;
> +    return shmem_obj;
> +}
> +
> +static void vhost_user_shmem_register_types(void)
> +{
> +    type_register_static(&vhost_user_shmem_object_info);
> +}
> +
> +type_init(vhost_user_shmem_register_types)
> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> index 1e1d6b0d6e..eb3ad728b0 100644
> --- a/hw/virtio/vhost-user.c
> +++ b/hw/virtio/vhost-user.c
> @@ -11,6 +11,7 @@
>  #include "qemu/osdep.h"
>  #include "qapi/error.h"
>  #include "hw/virtio/virtio-dmabuf.h"
> +#include "hw/virtio/vhost-user-shmem.h"
>  #include "hw/virtio/vhost.h"
>  #include "hw/virtio/virtio-crypto.h"
>  #include "hw/virtio/vhost-user.h"
> @@ -115,6 +116,8 @@ typedef enum VhostUserBackendRequest {
>      VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
>      VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
>      VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> +    VHOST_USER_BACKEND_SHMEM_MAP = 9,
> +    VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
>      VHOST_USER_BACKEND_MAX
>  }  VhostUserBackendRequest;
>  
> @@ -192,6 +195,23 @@ typedef struct VhostUserShared {
>      unsigned char uuid[16];
>  } VhostUserShared;
>  
> +/* For the flags field of VhostUserMMap */
> +#define VHOST_USER_FLAG_MAP_RW (1u << 0)
> +
> +typedef struct {
> +    /* VIRTIO Shared Memory Region ID */
> +    uint8_t shmid;
> +    uint8_t padding[7];
> +    /* File offset */
> +    uint64_t fd_offset;
> +    /* Offset within the VIRTIO Shared Memory Region */
> +    uint64_t shm_offset;
> +    /* Size of the mapping */
> +    uint64_t len;
> +    /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
> +    uint16_t flags;
> +} VhostUserMMap;
> +
>  typedef struct {
>      VhostUserRequest request;
>  
> @@ -224,6 +244,7 @@ typedef union {
>          VhostUserInflight inflight;
>          VhostUserShared object;
>          VhostUserTransferDeviceState transfer_state;
> +        VhostUserMMap mmap;
>  } VhostUserPayload;
>  
>  typedef struct VhostUserMsg {
> @@ -1768,6 +1789,172 @@ vhost_user_backend_handle_shared_object_lookup(struct vhost_user *u,
>      return 0;
>  }
>  
> +/**
> + * vhost_user_backend_handle_shmem_map() - Handle SHMEM_MAP backend request
> + * @dev: vhost device
> + * @ioc: QIOChannel for communication
> + * @hdr: vhost-user message header
> + * @payload: message payload containing mapping details
> + * @fd: file descriptor for the shared memory region
> + *
> + * Handles VHOST_USER_BACKEND_SHMEM_MAP requests from the backend. Creates
> + * a VhostUserShmemObject to manage the shared memory mapping and adds it
> + * to the appropriate VirtIO shared memory region. The VhostUserShmemObject
> + * serves as an intermediate parent for the MemoryRegion, ensuring proper
> + * lifecycle management with reference counting.
> + *
> + * Returns: 0 on success, negative errno on failure
> + */
> +static int
> +vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> +                                    QIOChannel *ioc,
> +                                    VhostUserHeader *hdr,
> +                                    VhostUserPayload *payload,
> +                                    int fd)
> +{
> +    VirtioSharedMemory *shmem;
> +    VhostUserMMap *vu_mmap = &payload->mmap;
> +    Error *local_err = NULL;
> +    g_autoptr(GString) shm_name = g_string_new(NULL);
> +
> +    if (fd < 0) {
> +        error_report("Bad fd for map");
> +        return -EBADF;
> +    }
> +
> +    if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
> +        error_report("Device has no VIRTIO Shared Memory Regions. "
> +                     "Requested ID: %d", vu_mmap->shmid);
> +        return -EFAULT;
> +    }
> +
> +    shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
> +    if (!shmem) {
> +        error_report("VIRTIO Shared Memory Region at "
> +                     "ID %d not found or unitialized", vu_mmap->shmid);
> +        return -EFAULT;
> +    }
> +
> +    if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> +        (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
> +        error_report("Bad offset/len for mmap %" PRIx64 "+%" PRIx64,
> +                     vu_mmap->shm_offset, vu_mmap->len);
> +        return -EFAULT;
> +    }
> +
> +    g_string_printf(shm_name, "virtio-shm%i-%lu",
> +                    vu_mmap->shmid, vu_mmap->shm_offset);
> +
> +    memory_region_transaction_begin();
> +
> +    /* Create VhostUserShmemObject as intermediate parent for MemoryRegion */
> +    VhostUserShmemObject *shmem_obj = vhost_user_shmem_object_new(
> +        vu_mmap->shmid, fd, vu_mmap->fd_offset, vu_mmap->shm_offset,
> +        vu_mmap->len, vu_mmap->flags);
> +
> +    if (!shmem_obj) {
> +        memory_region_transaction_commit();
> +        return -EFAULT;
> +    }
> +
> +    /* Add the mapping using our VhostUserShmemObject as the parent */
> +    if (virtio_add_shmem_map(shmem, shmem_obj) != 0) {
> +        error_report("Failed to add shared memory mapping");
> +        object_unref(OBJECT(shmem_obj));
> +        memory_region_transaction_commit();
> +        return -EFAULT;
> +    }
> +
> +    if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> +        payload->u64 = 0;
> +        hdr->size = sizeof(payload->u64);
> +        vhost_user_send_resp(ioc, hdr, payload, &local_err);
> +        if (local_err) {
> +            error_report_err(local_err);
> +            memory_region_transaction_commit();
> +            return -EFAULT;
> +        }
> +    }
> +
> +    memory_region_transaction_commit();
> +
> +    return 0;
> +}
> +
> +/**
> + * vhost_user_backend_handle_shmem_unmap() - Handle SHMEM_UNMAP backend request
> + * @dev: vhost device
> + * @ioc: QIOChannel for communication
> + * @hdr: vhost-user message header
> + * @payload: message payload containing unmapping details
> + *
> + * Handles VHOST_USER_BACKEND_SHMEM_UNMAP requests from the backend. Removes
> + * the specified memory mapping from the VirtIO shared memory region. This
> + * automatically unreferences the associated VhostUserShmemObject, which may
> + * trigger its finalization and cleanup (munmap, close fd) if no other
> + * references exist.
> + *
> + * Returns: 0 on success, negative errno on failure
> + */
> +static int
> +vhost_user_backend_handle_shmem_unmap(struct vhost_dev *dev,
> +                                      QIOChannel *ioc,
> +                                      VhostUserHeader *hdr,
> +                                      VhostUserPayload *payload)
> +{
> +    VirtioSharedMemory *shmem;
> +    VirtioSharedMemoryMapping *mmap = NULL;
> +    VhostUserMMap *vu_mmap = &payload->mmap;
> +    Error *local_err = NULL;
> +
> +    if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
> +        error_report("Device has no VIRTIO Shared Memory Regions. "
> +                     "Requested ID: %d", vu_mmap->shmid);
> +        return -EFAULT;
> +    }
> +
> +    shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
> +    if (!shmem) {
> +        error_report("VIRTIO Shared Memory Region at "
> +                     "ID %d not found or unitialized", vu_mmap->shmid);
> +        return -EFAULT;
> +    }
> +
> +    if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> +        (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
> +        error_report("Bad offset/len for unmmap %" PRIx64 "+%" PRIx64,
> +                     vu_mmap->shm_offset, vu_mmap->len);
> +        return -EFAULT;
> +    }
> +
> +    mmap = virtio_find_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
> +    if (!mmap) {
> +        error_report("Shared memory mapping not found at offset %" PRIx64
> +                     " with length %" PRIx64,
> +                     vu_mmap->shm_offset, vu_mmap->len);
> +        return -EFAULT;
> +    }
> +
> +    memory_region_transaction_begin();
> +    memory_region_del_subregion(&shmem->mr, mmap->mem);
> +    if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> +        payload->u64 = 0;
> +        hdr->size = sizeof(payload->u64);
> +        vhost_user_send_resp(ioc, hdr, payload, &local_err);
> +        if (local_err) {
> +            error_report_err(local_err);
> +            memory_region_transaction_commit();
> +            return -EFAULT;
> +        }
> +    }
> +    memory_region_transaction_commit();
> +
> +    /* Free the MemoryRegion only after vhost_commit */
> +    virtio_del_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
> +
> +    return 0;
> +}
> +
>  static void close_backend_channel(struct vhost_user *u)
>  {
>      g_source_destroy(u->backend_src);
> @@ -1833,8 +2020,24 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
>                                                               &payload.object);
>          break;
>      case VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP:
> -        ret = vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
> -                                                             &hdr, &payload);
> +        /* Handler manages its own response, check error and close connection */
> +        if (vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
> +                                                           &hdr, &payload) < 0) {
> +            goto err;
> +        }
> +        break;
> +    case VHOST_USER_BACKEND_SHMEM_MAP:
> +        /* Handler manages its own response, check error and close connection */
> +        if (vhost_user_backend_handle_shmem_map(dev, ioc, &hdr, &payload,
> +                                                fd ? fd[0] : -1) < 0) {
> +            goto err;
> +        }
> +        break;
> +    case VHOST_USER_BACKEND_SHMEM_UNMAP:
> +        /* Handler manages its own response, check error and close connection */
> +        if (vhost_user_backend_handle_shmem_unmap(dev, ioc, &hdr, &payload) < 0) {
> +            goto err;
> +        }
>          break;
>      default:
>          error_report("Received unexpected msg type: %d.", hdr.request);
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index 9a81ad912e..1ead5f653f 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -14,6 +14,7 @@
>  #include "qemu/osdep.h"
>  #include "qapi/error.h"
>  #include "qapi/qapi-commands-virtio.h"
> +#include "hw/virtio/vhost-user-shmem.h"
>  #include "trace.h"
>  #include "qemu/defer-call.h"
>  #include "qemu/error-report.h"
> @@ -3045,6 +3046,100 @@ int virtio_save(VirtIODevice *vdev, QEMUFile *f)
>      return vmstate_save_state(f, &vmstate_virtio, vdev, NULL);
>  }
>  
> +VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid)
> +{
> +    VirtioSharedMemory *elem;
> +    g_autofree char *name = NULL;
> +
> +    elem = g_new0(VirtioSharedMemory, 1);
> +    elem->shmid = shmid;
> +
> +    /* Initialize embedded MemoryRegion as container for shmem mappings */
> +    name = g_strdup_printf("virtio-shmem-%d", shmid);
> +    memory_region_init(&elem->mr, OBJECT(vdev), name, UINT64_MAX);
> +    QTAILQ_INIT(&elem->mmaps);
> +    QSIMPLEQ_INSERT_TAIL(&vdev->shmem_list, elem, entry);
> +    return QSIMPLEQ_LAST(&vdev->shmem_list, VirtioSharedMemory, entry);

"return elem;" is simpler.

> +}
> +
> +VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid)
> +{
> +    VirtioSharedMemory *shmem, *next;
> +    QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
> +        if (shmem->shmid == shmid) {
> +            return shmem;
> +        }
> +    }
> +    return NULL;
> +}
> +
> +int virtio_add_shmem_map(VirtioSharedMemory *shmem,
> +                         VhostUserShmemObject *shmem_obj)
> +{
> +    VirtioSharedMemoryMapping *mmap;
> +    if (!shmem_obj) {
> +        error_report("VhostUserShmemObject cannot be NULL");
> +        return -1;
> +    }
> +    if (!shmem_obj->mr) {
> +        error_report("VhostUserShmemObject has no MemoryRegion");
> +        return -1;
> +    }
> +
> +    /* Validate boundaries against the VIRTIO shared memory region */
> +    if (shmem_obj->shm_offset + shmem_obj->len > shmem->mr.size) {

From above:

  memory_region_init(&elem->mr, OBJECT(vdev), name, UINT64_MAX);

shmem->mr's size is UINT64_MAX and this if statement doesn't handle
integer overflow. What is the purpose of this size check?

> +        error_report("Memory exceeds the shared memory boundaries");
> +        return -1;
> +    }
> +
> +    /* Create the VirtioSharedMemoryMapping wrapper */
> +    mmap = g_new0(VirtioSharedMemoryMapping, 1);
> +    mmap->mem = shmem_obj->mr;
> +    mmap->offset = shmem_obj->shm_offset;
> +    mmap->shmem_obj = shmem_obj;
> +
> +    /* Take a reference on the VhostUserShmemObject */
> +    object_ref(OBJECT(shmem_obj));

Why is the reference count incremented here? The caller seems to pass
ownership to this function...at least
vhost_user_backend_handle_shmem_map() doesn't touch shmem_obj afterwards
and doesn't unref it.

> +
> +    /* Add as subregion to the VIRTIO shared memory */
> +    memory_region_add_subregion(&shmem->mr, mmap->offset, mmap->mem);
> +
> +    /* Add to the mapped regions list */
> +    QTAILQ_INSERT_TAIL(&shmem->mmaps, mmap, link);
> +
> +    return 0;
> +}
> +
> +VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
> +                                          hwaddr offset, uint64_t size)
> +{
> +    VirtioSharedMemoryMapping *mmap;
> +    QTAILQ_FOREACH(mmap, &shmem->mmaps, link) {
> +        if (mmap->offset == offset && mmap->mem->size == size) {
> +            return mmap;
> +        }
> +    }
> +    return NULL;
> +}
> +
> +void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
> +                          uint64_t size)
> +{
> +    VirtioSharedMemoryMapping *mmap = virtio_find_shmem_map(shmem, offset, size);
> +    if (mmap == NULL) {
> +        return;
> +    }
> +
> +    /*
> +     * Unref the VhostUserShmemObject which will trigger automatic cleanup
> +     * when the reference count reaches zero.
> +     */
> +    object_unref(OBJECT(mmap->shmem_obj));
> +
> +    QTAILQ_REMOVE(&shmem->mmaps, mmap, link);
> +    g_free(mmap);
> +}
> +
>  /* A wrapper for use as a VMState .put function */
>  static int virtio_device_put(QEMUFile *f, void *opaque, size_t size,
>                                const VMStateField *field, JSONWriter *vmdesc)
> @@ -3521,6 +3616,7 @@ void virtio_init(VirtIODevice *vdev, uint16_t device_id, size_t config_size)
>              NULL, virtio_vmstate_change, vdev);
>      vdev->device_endian = virtio_default_endian();
>      vdev->use_guest_notifier_mask = true;
> +    QSIMPLEQ_INIT(&vdev->shmem_list);
>  }
>  
>  /*
> @@ -4032,11 +4128,24 @@ static void virtio_device_free_virtqueues(VirtIODevice *vdev)
>  static void virtio_device_instance_finalize(Object *obj)
>  {
>      VirtIODevice *vdev = VIRTIO_DEVICE(obj);
> +    VirtioSharedMemory *shmem;
>  
>      virtio_device_free_virtqueues(vdev);
>  
>      g_free(vdev->config);
>      g_free(vdev->vector_queues);
> +    while (!QSIMPLEQ_EMPTY(&vdev->shmem_list)) {
> +        shmem = QSIMPLEQ_FIRST(&vdev->shmem_list);
> +        while (!QTAILQ_EMPTY(&shmem->mmaps)) {
> +            VirtioSharedMemoryMapping *mmap_reg = QTAILQ_FIRST(&shmem->mmaps);
> +            virtio_del_shmem_map(shmem, mmap_reg->offset, mmap_reg->mem->size);
> +        }
> +
> +        /* Clean up the embedded MemoryRegion */
> +        object_unparent(OBJECT(&shmem->mr));
> +        QSIMPLEQ_REMOVE_HEAD(&vdev->shmem_list, entry);
> +        g_free(shmem);
> +    }
>  }
>  
>  static const Property virtio_properties[] = {
> diff --git a/include/hw/virtio/vhost-user-shmem.h b/include/hw/virtio/vhost-user-shmem.h
> new file mode 100644
> index 0000000000..1f8c7bdc1f
> --- /dev/null
> +++ b/include/hw/virtio/vhost-user-shmem.h
> @@ -0,0 +1,75 @@
> +/*
> + * VHost-user Shared Memory Object
> + *
> + * Copyright Red Hat, Inc. 2025
> + *
> + * Authors:
> + *     Albert Esteve <aesteve@redhat.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + */
> +
> +#ifndef VHOST_USER_SHMEM_H
> +#define VHOST_USER_SHMEM_H
> +
> +#include "qemu/osdep.h"
> +#include "qom/object.h"
> +#include "system/memory.h"
> +#include "qapi/error.h"
> +
> +/* vhost-user memory mapping flags */
> +#define VHOST_USER_FLAG_MAP_RW (1u << 0)

This constant is part of the vhost-user protocol. It would be nicer to
keep that all in one file instead of spreading protocol definitions
across multiple files.

In this case you could replace vhost_user_shmem_object_new()'s flags
argument with a bool allow_write argument. That way the vhost-user
protocol parsing happens in vhost-user.c and not vhost-user-shmem.c.

Alternatively, you could move the protocol definitions from vhost-user.c
into a header file and include them from vhost-user-shmem.c.

> +
> +#define TYPE_VHOST_USER_SHMEM_OBJECT "vhost-user-shmem"
> +OBJECT_DECLARE_SIMPLE_TYPE(VhostUserShmemObject, VHOST_USER_SHMEM_OBJECT)
> +
> +/**
> + * VhostUserShmemObject:
> + * @parent: Parent object
> + * @shmid: VIRTIO Shared Memory Region ID
> + * @fd: File descriptor for the shared memory region
> + * @shm_offset: Offset within the VIRTIO Shared Memory Region
> + * @len: Size of the mapping
> + * @mr: MemoryRegion associated with this shared memory mapping
> + *
> + * An intermediate QOM object that manages individual shared memory mappings
> + * created by VHOST_USER_BACKEND_SHMEM_MAP requests. It acts as a parent for
> + * MemoryRegion objects, providing proper lifecycle management with reference
> + * counting. When the object is unreferenced and its reference count drops
> + * to zero, it automatically cleans up the MemoryRegion and unmaps the memory.
> + */
> +struct VhostUserShmemObject {
> +    Object parent;
> +
> +    uint8_t shmid;
> +    int fd;
> +    uint64_t shm_offset;
> +    uint64_t len;
> +    MemoryRegion *mr;
> +};
> +
> +/**
> + * vhost_user_shmem_object_new() - Create a new VhostUserShmemObject
> + * @shmid: VIRTIO Shared Memory Region ID
> + * @fd: File descriptor for the shared memory
> + * @fd_offset: Offset within the file descriptor
> + * @shm_offset: Offset within the VIRTIO Shared Memory Region
> + * @len: Size of the mapping
> + * @flags: Mapping flags (VHOST_USER_FLAG_MAP_*)
> + *
> + * Creates a new VhostUserShmemObject that manages a shared memory mapping.
> + * The object will create a MemoryRegion using memory_region_init_ram_from_fd()
> + * as a child object. When the object is finalized, it will automatically
> + * clean up the MemoryRegion and close the file descriptor.
> + *
> + * Return: A new VhostUserShmemObject on success, NULL on error.
> + */
> +VhostUserShmemObject *vhost_user_shmem_object_new(uint8_t shmid,
> +                                                   int fd,
> +                                                   uint64_t fd_offset,
> +                                                   uint64_t shm_offset,
> +                                                   uint64_t len,
> +                                                   uint16_t flags);
> +
> +#endif /* VHOST_USER_SHMEM_H */
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index c594764f23..a563bbac2c 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -98,6 +98,26 @@ enum virtio_device_endian {
>      VIRTIO_DEVICE_ENDIAN_BIG,
>  };
>  
> +struct VhostUserShmemObject;
> +
> +struct VirtioSharedMemoryMapping {
> +    MemoryRegion *mem;
> +    hwaddr offset;
> +    QTAILQ_ENTRY(VirtioSharedMemoryMapping) link;
> +    struct VhostUserShmemObject *shmem_obj; /* Intermediate parent object */
> +};
> +
> +typedef struct VirtioSharedMemoryMapping VirtioSharedMemoryMapping;
> +
> +struct VirtioSharedMemory {
> +    uint8_t shmid;
> +    MemoryRegion mr;
> +    QTAILQ_HEAD(, VirtioSharedMemoryMapping) mmaps;
> +    QSIMPLEQ_ENTRY(VirtioSharedMemory) entry;
> +};

VirtioSharedMemoryMapping and VirtioSharedMemory duplicate information
from VhostUserShmemObject (shmid, memory region pointers, offsets). This
makes the relationship between VIRTIO and vhost-user code confusing.

I wonder if VhostUserShmemObject is specific to the vhost-user protocol
or if any VIRTIO device implementation that needs a VIRTIO Shared Memory
Region with an fd, offset, etc should be able to use it? If yes, then it
should be renamed and made part of the core hw/virtio/ code rather than
vhost-user.

> +
> +typedef struct VirtioSharedMemory VirtioSharedMemory;
> +
>  /**
>   * struct VirtIODevice - common VirtIO structure
>   * @name: name of the device
> @@ -167,6 +187,8 @@ struct VirtIODevice
>       */
>      EventNotifier config_notifier;
>      bool device_iotlb_enabled;
> +    /* Shared memory region for mappings. */
> +    QSIMPLEQ_HEAD(, VirtioSharedMemory) shmem_list;
>  };
>  
>  struct VirtioDeviceClass {
> @@ -295,6 +317,77 @@ void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
>  
>  int virtio_save(VirtIODevice *vdev, QEMUFile *f);
>  
> +/**
> + * virtio_new_shmem_region() - Create a new shared memory region
> + * @vdev: VirtIODevice
> + * @shmid: Shared memory ID
> + *
> + * Creates a new VirtioSharedMemory region for the given device and ID.
> + * The returned VirtioSharedMemory is owned by the VirtIODevice and will
> + * be automatically freed when the device is destroyed. The caller
> + * should not free the returned pointer.
> + *
> + * Returns: Pointer to the new VirtioSharedMemory region, or NULL on failure
> + */
> +VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid);
> +
> +/**
> + * virtio_find_shmem_region() - Find an existing shared memory region
> + * @vdev: VirtIODevice
> + * @shmid: Shared memory ID to find
> + *
> + * Finds an existing VirtioSharedMemory region by ID. The returned pointer
> + * is owned by the VirtIODevice and should not be freed by the caller.
> + *
> + * Returns: Pointer to the VirtioSharedMemory region, or NULL if not found
> + */
> +VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid);
> +
> +/**
> + * virtio_add_shmem_map() - Add a memory mapping to a shared region
> + * @shmem: VirtioSharedMemory region
> + * @shmem_obj: VhostUserShmemObject to add (takes a reference)
> + *
> + * Adds a memory mapping to the shared memory region. The VhostUserShmemObject
> + * is added as a child of the mapping and will be automatically managed through
> + * QOM reference counting. The mapping will be removed when
> + * virtio_del_shmem_map() is called or when the shared memory region is
> + * destroyed.
> + *
> + * Returns: 0 on success, negative errno on failure
> + */
> +int virtio_add_shmem_map(VirtioSharedMemory *shmem,
> +                         struct VhostUserShmemObject *shmem_obj);

This API suggests the answer to my question above about whether
VhostUserShmemObject is really a core hw/virtio/ concept rather than a
vhost-user protocol concept is "yes". I think VhostUserShmemObject
should be renamed and maybe unified with VirtioSharedMemoryMapping.

> +
> +/**
> + * virtio_find_shmem_map() - Find a memory mapping in a shared region
> + * @shmem: VirtioSharedMemory region
> + * @offset: Offset within the shared memory region
> + * @size: Size of the mapping to find
> + *
> + * Finds an existing memory mapping that covers the specified range.
> + * The returned VirtioSharedMemoryMapping is owned by the VirtioSharedMemory
> + * region and should not be freed by the caller.
> + *
> + * Returns: Pointer to the VirtioSharedMemoryMapping, or NULL if not found
> + */
> +VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
> +                                          hwaddr offset, uint64_t size);
> +
> +/**
> + * virtio_del_shmem_map() - Remove a memory mapping from a shared region
> + * @shmem: VirtioSharedMemory region
> + * @offset: Offset of the mapping to remove
> + * @size: Size of the mapping to remove
> + *
> + * Removes a memory mapping from the shared memory region. This will
> + * automatically unref the associated VhostUserShmemObject, which may
> + * trigger its finalization and cleanup if no other references exist.
> + * The mapping's MemoryRegion will be properly unmapped and cleaned up.
> + */
> +void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
> +                          uint64_t size);
> +
>  extern const VMStateInfo virtio_vmstate_info;
>  
>  #define VMSTATE_VIRTIO_DEVICE \
> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> index 9c630c2170..034cbfdc3c 100644
> --- a/subprojects/libvhost-user/libvhost-user.c
> +++ b/subprojects/libvhost-user/libvhost-user.c
> @@ -1592,6 +1592,76 @@ vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN])
>      return vu_send_message(dev, &msg);
>  }
>  
> +bool
> +vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> +             uint64_t shm_offset, uint64_t len, uint64_t flags, int fd)
> +{
> +    VhostUserMsg vmsg = {
> +        .request = VHOST_USER_BACKEND_SHMEM_MAP,
> +        .size = sizeof(vmsg.payload.mmap),
> +        .flags = VHOST_USER_VERSION,
> +        .payload.mmap = {
> +            .shmid = shmid,
> +            .fd_offset = fd_offset,
> +            .shm_offset = shm_offset,
> +            .len = len,
> +            .flags = flags,
> +        },
> +        .fd_num = 1,
> +        .fds[0] = fd,
> +    };
> +
> +    if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
> +        return false;
> +    }
> +
> +    if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> +        vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> +    }
> +
> +    pthread_mutex_lock(&dev->backend_mutex);
> +    if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> +        pthread_mutex_unlock(&dev->backend_mutex);
> +        return false;
> +    }
> +
> +    /* Also unlocks the backend_mutex */
> +    return vu_process_message_reply(dev, &vmsg);
> +}
> +
> +bool
> +vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset, uint64_t len)
> +{
> +    VhostUserMsg vmsg = {
> +        .request = VHOST_USER_BACKEND_SHMEM_UNMAP,
> +        .size = sizeof(vmsg.payload.mmap),
> +        .flags = VHOST_USER_VERSION,
> +        .payload.mmap = {
> +            .shmid = shmid,
> +            .fd_offset = 0,
> +            .shm_offset = shm_offset,
> +            .len = len,
> +        },
> +    };
> +
> +    if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
> +        return false;
> +    }
> +
> +    if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> +        vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> +    }
> +
> +    pthread_mutex_lock(&dev->backend_mutex);
> +    if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> +        pthread_mutex_unlock(&dev->backend_mutex);
> +        return false;
> +    }
> +
> +    /* Also unlocks the backend_mutex */
> +    return vu_process_message_reply(dev, &vmsg);
> +}
> +
>  static bool
>  vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
>  {
> diff --git a/subprojects/libvhost-user/libvhost-user.h b/subprojects/libvhost-user/libvhost-user.h
> index 2ffc58c11b..26b710c92d 100644
> --- a/subprojects/libvhost-user/libvhost-user.h
> +++ b/subprojects/libvhost-user/libvhost-user.h
> @@ -69,6 +69,8 @@ enum VhostUserProtocolFeature {
>      /* Feature 16 is reserved for VHOST_USER_PROTOCOL_F_STATUS. */
>      /* Feature 17 reserved for VHOST_USER_PROTOCOL_F_XEN_MMAP. */
>      VHOST_USER_PROTOCOL_F_SHARED_OBJECT = 18,
> +    /* Feature 19 is reserved for VHOST_USER_PROTOCOL_F_DEVICE_STATE */
> +    VHOST_USER_PROTOCOL_F_SHMEM = 20,
>      VHOST_USER_PROTOCOL_F_MAX
>  };
>  
> @@ -127,6 +129,8 @@ typedef enum VhostUserBackendRequest {
>      VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
>      VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
>      VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> +    VHOST_USER_BACKEND_SHMEM_MAP = 9,
> +    VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
>      VHOST_USER_BACKEND_MAX
>  }  VhostUserBackendRequest;
>  
> @@ -186,6 +190,23 @@ typedef struct VhostUserShared {
>      unsigned char uuid[UUID_LEN];
>  } VhostUserShared;
>  
> +/* For the flags field of VhostUserMMap */
> +#define VHOST_USER_FLAG_MAP_RW (1u << 0)
> +
> +typedef struct {
> +    /* VIRTIO Shared Memory Region ID */
> +    uint8_t shmid;
> +    uint8_t padding[7];
> +    /* File offset */
> +    uint64_t fd_offset;
> +    /* Offset within the VIRTIO Shared Memory Region */
> +    uint64_t shm_offset;
> +    /* Size of the mapping */
> +    uint64_t len;
> +    /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
> +    uint16_t flags;
> +} VhostUserMMap;
> +
>  #define VU_PACKED __attribute__((packed))
>  
>  typedef struct VhostUserMsg {
> @@ -210,6 +231,7 @@ typedef struct VhostUserMsg {
>          VhostUserVringArea area;
>          VhostUserInflight inflight;
>          VhostUserShared object;
> +        VhostUserMMap mmap;
>      } payload;
>  
>      int fds[VHOST_MEMORY_BASELINE_NREGIONS];
> @@ -593,6 +615,38 @@ bool vu_add_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
>   */
>  bool vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
>  
> +/**
> + * vu_shmem_map:
> + * @dev: a VuDev context
> + * @shmid: VIRTIO Shared Memory Region ID
> + * @fd_offset: File offset
> + * @shm_offset: Offset within the VIRTIO Shared Memory Region
> + * @len: Size of the mapping
> + * @flags: Flags for the mmap operation
> + * @fd: A file descriptor
> + *
> + * Advertises a new mapping to be made in a given VIRTIO Shared Memory Region.
> + *
> + * Returns: TRUE on success, FALSE on failure.
> + */
> +bool vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> +                  uint64_t shm_offset, uint64_t len, uint64_t flags, int fd);
> +
> +/**
> + * vu_shmem_unmap:
> + * @dev: a VuDev context
> + * @shmid: VIRTIO Shared Memory Region ID
> + * @fd_offset: File offset
> + * @len: Size of the mapping
> + *
> + * The front-end un-mmaps a given range in the VIRTIO Shared Memory Region
> + * with the requested `shmid`.
> + *
> + * Returns: TRUE on success, FALSE on failure.
> + */
> +bool vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset,
> +                    uint64_t len);
> +
>  /**
>   * vu_queue_set_notification:
>   * @dev: a VuDev context
> -- 
> 2.49.0
> 

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 6/8] tests/qtest: Add GET_SHMEM validation test
  2025-08-18 10:03 ` [PATCH v7 6/8] tests/qtest: Add GET_SHMEM validation test Albert Esteve
@ 2025-08-18 23:14   ` Stefan Hajnoczi
  2025-08-19 12:16     ` Albert Esteve
  0 siblings, 1 reply; 22+ messages in thread
From: Stefan Hajnoczi @ 2025-08-18 23:14 UTC (permalink / raw)
  To: Albert Esteve
  Cc: qemu-devel, david, Michael S. Tsirkin, hi, jasowang,
	Laurent Vivier, dbassey, Stefano Garzarella, Paolo Bonzini,
	stevensd, Fabiano Rosas, Alex Bennée, slp

[-- Attachment #1: Type: text/plain, Size: 6670 bytes --]

On Mon, Aug 18, 2025 at 12:03:51PM +0200, Albert Esteve wrote:
> Improve vhost-user-test to properly validate
> VHOST_USER_GET_SHMEM_CONFIG message handling by
> directly simulating the message exchange.
> 
> The test manually triggers the
> VHOST_USER_GET_SHMEM_CONFIG message by calling
> chr_read() with a crafted VhostUserMsg, allowing direct
> validation of the shmem configuration response handler.

It looks like this test case invokes its own chr_read() function without
going through QEMU, so I don't understand what this is testing?

> 
> Added TestServerShmem structure to track shmem
> configuration state, including nregions_sent and
> sizes_sent arrays for comprehensive validation.
> The test verifies that the response contains the expected
> number of shared memory regions and their corresponding
> sizes.
> 
> Signed-off-by: Albert Esteve <aesteve@redhat.com>
> ---
>  tests/qtest/vhost-user-test.c | 91 +++++++++++++++++++++++++++++++++++
>  1 file changed, 91 insertions(+)
> 
> diff --git a/tests/qtest/vhost-user-test.c b/tests/qtest/vhost-user-test.c
> index 75cb3e44b2..44a5e90b2e 100644
> --- a/tests/qtest/vhost-user-test.c
> +++ b/tests/qtest/vhost-user-test.c
> @@ -88,6 +88,7 @@ typedef enum VhostUserRequest {
>      VHOST_USER_SET_VRING_ENABLE = 18,
>      VHOST_USER_GET_CONFIG = 24,
>      VHOST_USER_SET_CONFIG = 25,
> +    VHOST_USER_GET_SHMEM_CONFIG = 44,
>      VHOST_USER_MAX
>  } VhostUserRequest;
>  
> @@ -109,6 +110,20 @@ typedef struct VhostUserLog {
>      uint64_t mmap_offset;
>  } VhostUserLog;
>  
> +#define VIRTIO_MAX_SHMEM_REGIONS 256
> +
> +typedef struct VhostUserShMemConfig {
> +    uint32_t nregions;
> +    uint32_t padding;
> +    uint64_t memory_sizes[VIRTIO_MAX_SHMEM_REGIONS];
> +} VhostUserShMemConfig;
> +
> +typedef struct TestServerShmem {
> +    bool test_enabled;
> +    uint32_t nregions_sent;
> +    uint64_t sizes_sent[VIRTIO_MAX_SHMEM_REGIONS];
> +} TestServerShmem;
> +
>  typedef struct VhostUserMsg {
>      VhostUserRequest request;
>  
> @@ -124,6 +139,7 @@ typedef struct VhostUserMsg {
>          struct vhost_vring_addr addr;
>          VhostUserMemory memory;
>          VhostUserLog log;
> +        VhostUserShMemConfig shmem;
>      } payload;
>  } QEMU_PACKED VhostUserMsg;
>  
> @@ -170,6 +186,7 @@ typedef struct TestServer {
>      bool test_fail;
>      int test_flags;
>      int queues;
> +    TestServerShmem shmem;
>      struct vhost_user_ops *vu_ops;
>  } TestServer;
>  
> @@ -513,6 +530,31 @@ static void chr_read(void *opaque, const uint8_t *buf, int size)
>          qos_printf("set_vring(%d)=%s\n", msg.payload.state.index,
>                     msg.payload.state.num ? "enabled" : "disabled");
>          break;
> +    
> +    case VHOST_USER_GET_SHMEM_CONFIG:
> +        if (!s->shmem.test_enabled) {
> +            /* Reply with error if shmem feature not enabled */
> +            msg.flags |= VHOST_USER_REPLY_MASK;
> +            msg.size = sizeof(uint64_t);
> +            msg.payload.u64 = -1; /* Error */
> +            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
> +        } else {
> +            /* Reply with test shmem configuration */
> +            msg.flags |= VHOST_USER_REPLY_MASK;
> +            msg.size = sizeof(VhostUserShMemConfig);
> +            msg.payload.shmem.nregions = 2; /* Test with 2 regions */
> +            msg.payload.shmem.padding = 0;
> +            msg.payload.shmem.memory_sizes[0] = 0x100000; /* 1MB */
> +            msg.payload.shmem.memory_sizes[1] = 0x200000; /* 2MB */
> +            
> +            /* Record what we're sending for test validation */
> +            s->shmem.nregions_sent = msg.payload.shmem.nregions;
> +            s->shmem.sizes_sent[0] = msg.payload.shmem.memory_sizes[0];
> +            s->shmem.sizes_sent[1] = msg.payload.shmem.memory_sizes[1];
> +            
> +            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
> +        }
> +        break;
>  
>      default:
>          qos_printf("vhost-user: un-handled message: %d\n", msg.request);
> @@ -809,6 +851,22 @@ static void *vhost_user_test_setup_shm(GString *cmd_line, void *arg)
>      return server;
>  }
>  
> +static void *vhost_user_test_setup_shmem_config(GString *cmd_line, void *arg)
> +{
> +    TestServer *server = test_server_new("vhost-user-test", arg);
> +    test_server_listen(server);
> +
> +    /* Enable shmem testing for this server */
> +    server->shmem.test_enabled = true;
> +
> +    append_mem_opts(server, cmd_line, 256, TEST_MEMFD_SHM);
> +    server->vu_ops->append_opts(server, cmd_line, "");
> +
> +    g_test_queue_destroy(vhost_user_test_cleanup, server);
> +
> +    return server;
> +}
> +
>  static void test_read_guest_mem(void *obj, void *arg, QGuestAllocator *alloc)
>  {
>      TestServer *server = arg;
> @@ -1089,6 +1147,33 @@ static struct vhost_user_ops g_vu_net_ops = {
>      .get_protocol_features = vu_net_get_protocol_features,
>  };
>  
> +/* Test function for VHOST_USER_GET_SHMEM_CONFIG message */
> +static void test_shmem_config(void *obj, void *arg, QGuestAllocator *alloc)
> +{
> +    TestServer *s = arg;
> +    
> +    g_assert_true(s->shmem.test_enabled);
> +    
> +    g_mutex_lock(&s->data_mutex);
> +    s->shmem.nregions_sent = 0;
> +    s->shmem.sizes_sent[0] = 0;
> +    s->shmem.sizes_sent[1] = 0;
> +    g_mutex_unlock(&s->data_mutex);
> +    
> +    VhostUserMsg msg = {
> +        .request = VHOST_USER_GET_SHMEM_CONFIG,
> +        .flags = VHOST_USER_VERSION,
> +        .size = 0,
> +    };
> +    chr_read(s, (uint8_t *) &msg, VHOST_USER_HDR_SIZE);
> +
> +    g_mutex_lock(&s->data_mutex);
> +    g_assert_cmpint(s->shmem.nregions_sent, ==, 2);
> +    g_assert_cmpint(s->shmem.sizes_sent[0], ==, 0x100000); /* 1MB */
> +    g_assert_cmpint(s->shmem.sizes_sent[1], ==, 0x200000); /* 2MB */
> +    g_mutex_unlock(&s->data_mutex);
> +}
> +
>  static void register_vhost_user_test(void)
>  {
>      QOSGraphTestOptions opts = {
> @@ -1136,6 +1221,12 @@ static void register_vhost_user_test(void)
>      qos_add_test("vhost-user/multiqueue",
>                   "virtio-net",
>                   test_multiqueue, &opts);
> +    
> +    opts.before = vhost_user_test_setup_shmem_config;
> +    opts.edge.extra_device_opts = "";
> +    qos_add_test("vhost-user/shmem-config",
> +                 "virtio-net",
> +                 test_shmem_config, &opts);
>  }
>  libqos_init(register_vhost_user_test);
>  
> -- 
> 2.49.0
> 

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 1/8] vhost-user: Add VirtIO Shared Memory map request
  2025-08-18 10:03 ` [PATCH v7 1/8] vhost-user: Add VirtIO Shared Memory map request Albert Esteve
  2025-08-18 18:58   ` Stefan Hajnoczi
@ 2025-08-19  9:22   ` David Hildenbrand
  1 sibling, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2025-08-19  9:22 UTC (permalink / raw)
  To: Albert Esteve, qemu-devel
  Cc: Michael S. Tsirkin, hi, jasowang, Laurent Vivier, dbassey,
	Stefano Garzarella, Paolo Bonzini, stefanha, stevensd,
	Fabiano Rosas, Alex Bennée, slp

On 18.08.25 12:03, Albert Esteve wrote:
> Add SHMEM_MAP/UNMAP requests to vhost-user for
> dynamic management of VIRTIO Shared Memory mappings.
> 
> This implementation introduces VhostUserShmemObject
> as an intermediate QOM parent for MemoryRegions
> created for SHMEM_MAP requests. This object
> provides reference-counted lifecycle management
> with automatic cleanup.
> 
> This request allows backends to dynamically map
> file descriptors into a VIRTIO Shared Memory
> Regions identified by their shmid. Maps are created
> using memory_region_init_ram_device_ptr() with
> configurable read/write permissions, and the resulting
> MemoryRegions are added as subregions to the shmem
> container region. The mapped memory is then advertised
> to the guest VIRTIO drivers as a base address plus
> offset for reading and writting according
> to the requested mmap flags.
> 
> The backend can unmap memory ranges within a given
> VIRTIO Shared Memory Region to free resources.
> Upon receiving this message, the frontend removes
> the MemoryRegion as a subregion and automatically
> unreferences the associated VhostUserShmemObject,
> triggering cleanup if no other references exist.
> 
> Error handling has been improved to ensure consistent
> behavior across handlers that manage their own
> vhost_user_send_resp() calls. Since these handlers
> clear the VHOST_USER_NEED_REPLY_MASK flag, explicit
> error checking ensures proper connection closure on
> failures, maintaining the expected error flow.
> 
> Note the memory region commit for these
> operations needs to be delayed until after we
> respond to the backend to avoid deadlocks.

Just a general comment: feel free to use up to 72 chars per line. 
Currently you're just a bit over 50.

-- 
Cheers

David / dhildenb



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 8/8] vhost-user-device: Add shared memory BAR
  2025-08-18 10:03 ` [PATCH v7 8/8] vhost-user-device: Add shared memory BAR Albert Esteve
@ 2025-08-19 10:42   ` Stefan Hajnoczi
  2025-08-19 11:41     ` Albert Esteve
  0 siblings, 1 reply; 22+ messages in thread
From: Stefan Hajnoczi @ 2025-08-19 10:42 UTC (permalink / raw)
  To: Albert Esteve
  Cc: qemu-devel, david, Michael S. Tsirkin, hi, jasowang,
	Laurent Vivier, dbassey, Stefano Garzarella, Paolo Bonzini,
	stevensd, Fabiano Rosas, Alex Bennée, slp

[-- Attachment #1: Type: text/plain, Size: 8503 bytes --]

On Mon, Aug 18, 2025 at 12:03:53PM +0200, Albert Esteve wrote:
> Add shared memory BAR support to vhost-user-device-pci
> to enable direct file mapping for VIRTIO Shared
> Memory Regions.
> 
> The implementation creates a consolidated shared
> memory BAR that contains all VIRTIO Shared
> Memory Regions as subregions. Each region is
> configured with its proper shmid, size, and
> offset within the BAR. The number and size of
> regions are retrieved via VHOST_USER_GET_SHMEM_CONFIG
> message sent by vhost-user-base during realization
> after virtio_init().
> 
> Specifiically, it uses BAR 3 to avoid conflicts, as
> it is currently unused.
> 
> The shared memory BAR is only created when the
> backend supports VHOST_USER_PROTOCOL_F_SHMEM and
> has configured shared memory regions. This maintains
> backward compatibility with backends that do not
> support shared memory functionality.
> 
> Signed-off-by: Albert Esteve <aesteve@redhat.com>
> ---
>  hw/virtio/vhost-user-base.c       | 49 +++++++++++++++++++++++++++++--
>  hw/virtio/vhost-user-device-pci.c | 34 +++++++++++++++++++--
>  2 files changed, 78 insertions(+), 5 deletions(-)
> 
> diff --git a/hw/virtio/vhost-user-base.c b/hw/virtio/vhost-user-base.c
> index ff67a020b4..932f9b5596 100644
> --- a/hw/virtio/vhost-user-base.c
> +++ b/hw/virtio/vhost-user-base.c
> @@ -16,6 +16,7 @@
>  #include "hw/virtio/virtio-bus.h"
>  #include "hw/virtio/vhost-user-base.h"
>  #include "qemu/error-report.h"
> +#include "migration/blocker.h"
>  
>  static void vub_start(VirtIODevice *vdev)
>  {
> @@ -276,7 +277,9 @@ static void vub_device_realize(DeviceState *dev, Error **errp)
>  {
>      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
>      VHostUserBase *vub = VHOST_USER_BASE(dev);
> -    int ret;
> +    uint64_t memory_sizes[VIRTIO_MAX_SHMEM_REGIONS];
> +    g_autofree char *name = NULL;
> +    int i, ret, nregions;
>  
>      if (!vub->chardev.chr) {
>          error_setg(errp, "vhost-user-base: missing chardev");
> @@ -319,7 +322,7 @@ static void vub_device_realize(DeviceState *dev, Error **errp)
>  
>      /* Allocate queues */
>      vub->vqs = g_ptr_array_sized_new(vub->num_vqs);
> -    for (int i = 0; i < vub->num_vqs; i++) {
> +    for (i = 0; i < vub->num_vqs; i++) {
>          g_ptr_array_add(vub->vqs,
>                          virtio_add_queue(vdev, vub->vq_size,
>                                           vub_handle_output));
> @@ -333,11 +336,51 @@ static void vub_device_realize(DeviceState *dev, Error **errp)
>                           VHOST_BACKEND_TYPE_USER, 0, errp);
>  
>      if (ret < 0) {
> -        do_vhost_user_cleanup(vdev, vub);
> +        goto err;
> +    }
> +
> +    ret = vub->vhost_dev.vhost_ops->vhost_get_shmem_config(&vub->vhost_dev,
> +                                                           &nregions,
> +                                                           memory_sizes,
> +                                                           errp);
> +
> +    if (ret < 0) {
> +        goto err;
> +    }
> +
> +    for (i = 0; i < nregions; i++) {
> +        if (memory_sizes[i]) {
> +            if (vub->vhost_dev.migration_blocker == NULL) {
> +                error_setg(&vub->vhost_dev.migration_blocker,
> +                       "Migration disabled: devices with VIRTIO Shared Memory "
> +                       "Regions do not support migration yet.");
> +                ret = migrate_add_blocker_normal(
> +                    &vub->vhost_dev.migration_blocker,
> +                    errp);
> +
> +                if (ret < 0) {
> +                    goto err;
> +                }
> +            }
> +
> +            if (memory_sizes[i] % qemu_real_host_page_size() != 0) {
> +                error_setg(errp, "Shared memory %d size must be a power of 2 "
> +                                 "no smaller than the page size", i);
> +                goto err;
> +            }
> +
> +            name = g_strdup_printf("vub-shm-%d", i);

name is leaked because it's scope extends until the end of the function
(after the loop) but a newly allocated string is assigned each time
around the loop. This can be fixed by moving the local variable
declaration inside the if statement body.

> +            memory_region_init(&virtio_new_shmem_region(vdev, i)->mr,
> +                               OBJECT(vdev), name,
> +                               memory_sizes[i]);

->mr is already initialized inside virtio_new_shmem_region(). I suggest
changing the definition of virtio_new_shmem_region() like this:

  void virtio_add_shmem_region(VirtIODevice *vdev, uint8_t shmid,
                               uint64_t size)

and then calling it like this:

  virtio_add_shmem_region(vdev, shmid, memory_sizes[i]);

("new" usually returns a new instance whereas "add" modifies an owner
object/container. I think "add" is more appropriate here.)

> +        }
>      }
>  
>      qemu_chr_fe_set_handlers(&vub->chardev, NULL, NULL, vub_event, NULL,
>                               dev, NULL, true);
> +    return;
> +err:
> +    do_vhost_user_cleanup(vdev, vub);
>  }
>  
>  static void vub_device_unrealize(DeviceState *dev)
> diff --git a/hw/virtio/vhost-user-device-pci.c b/hw/virtio/vhost-user-device-pci.c
> index f10bac874e..bac99e7c60 100644
> --- a/hw/virtio/vhost-user-device-pci.c
> +++ b/hw/virtio/vhost-user-device-pci.c
> @@ -8,14 +8,18 @@
>   */
>  
>  #include "qemu/osdep.h"
> +#include "qapi/error.h"
>  #include "hw/qdev-properties.h"
>  #include "hw/virtio/vhost-user-base.h"
>  #include "hw/virtio/virtio-pci.h"
>  
> +#define VIRTIO_DEVICE_PCI_SHMEM_BAR 3
> +
>  struct VHostUserDevicePCI {
>      VirtIOPCIProxy parent_obj;
>  
>      VHostUserBase vub;
> +    MemoryRegion shmembar;
>  };
>  
>  #define TYPE_VHOST_USER_DEVICE_PCI "vhost-user-device-pci-base"
> @@ -25,10 +29,36 @@ OBJECT_DECLARE_SIMPLE_TYPE(VHostUserDevicePCI, VHOST_USER_DEVICE_PCI)
>  static void vhost_user_device_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
>  {
>      VHostUserDevicePCI *dev = VHOST_USER_DEVICE_PCI(vpci_dev);
> -    DeviceState *vdev = DEVICE(&dev->vub);
> +    DeviceState *dev_state = DEVICE(&dev->vub);
> +    VirtIODevice *vdev = VIRTIO_DEVICE(dev_state);
> +    VirtioSharedMemory *shmem, *next;
> +    uint64_t offset = 0, shmem_size = 0;
>  
>      vpci_dev->nvectors = 1;
> -    qdev_realize(vdev, BUS(&vpci_dev->bus), errp);
> +    qdev_realize(dev_state, BUS(&vpci_dev->bus), errp);
> +
> +    QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {

This is not specific to vhost-user-device-pci.c. All VIRTIO devices with
Shared Memory Regions need PCI BAR setup code. Since vdev->shmem_list is
part of the core hw/virtio/ code, it would make sense to move this into
into hw/virtio/virtio-pci.c.

> +        if (shmem->mr.size > UINT64_MAX - shmem_size) {
> +            error_setg(errp, "Total shared memory required overflow");
> +            return;
> +        }
> +        shmem_size = shmem_size + shmem->mr.size;
> +    }
> +    if (shmem_size) {
> +        memory_region_init(&dev->shmembar, OBJECT(vpci_dev),
> +                           "vhost-device-pci-shmembar", shmem_size);
> +        QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
> +            memory_region_add_subregion(&dev->shmembar, offset, &shmem->mr);
> +            virtio_pci_add_shm_cap(vpci_dev, VIRTIO_DEVICE_PCI_SHMEM_BAR,
> +                                   offset, shmem->mr.size, shmem->shmid);
> +            offset = offset + shmem->mr.size;
> +        }
> +        pci_register_bar(&vpci_dev->pci_dev, VIRTIO_DEVICE_PCI_SHMEM_BAR,
> +                        PCI_BASE_ADDRESS_SPACE_MEMORY |
> +                        PCI_BASE_ADDRESS_MEM_PREFETCH |
> +                        PCI_BASE_ADDRESS_MEM_TYPE_64,
> +                        &dev->shmembar);

This does not follow the same approach as virtio-gpu-pci.c and
virtio-vga.c. They config the VirtIOPCIProxy's BARs
(->modern_io_bar_idx, ->modern_mem_bar_idx, and ->msix_bar_idx) to
control the BAR layout first and then call qdev_realize().

Why does this patch do things differently? It looks like it's assuming
vpci_dev always has a specific BAR layout (it could change).

> +    }

>  }
>  
>  static void vhost_user_device_pci_class_init(ObjectClass *klass,
> -- 
> 2.49.0
> 

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 8/8] vhost-user-device: Add shared memory BAR
  2025-08-19 10:42   ` Stefan Hajnoczi
@ 2025-08-19 11:41     ` Albert Esteve
  0 siblings, 0 replies; 22+ messages in thread
From: Albert Esteve @ 2025-08-19 11:41 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, david, Michael S. Tsirkin, hi, jasowang,
	Laurent Vivier, dbassey, Stefano Garzarella, Paolo Bonzini,
	stevensd, Fabiano Rosas, Alex Bennée, slp

On Tue, Aug 19, 2025 at 12:42 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> On Mon, Aug 18, 2025 at 12:03:53PM +0200, Albert Esteve wrote:
> > Add shared memory BAR support to vhost-user-device-pci
> > to enable direct file mapping for VIRTIO Shared
> > Memory Regions.
> >
> > The implementation creates a consolidated shared
> > memory BAR that contains all VIRTIO Shared
> > Memory Regions as subregions. Each region is
> > configured with its proper shmid, size, and
> > offset within the BAR. The number and size of
> > regions are retrieved via VHOST_USER_GET_SHMEM_CONFIG
> > message sent by vhost-user-base during realization
> > after virtio_init().
> >
> > Specifiically, it uses BAR 3 to avoid conflicts, as
> > it is currently unused.
> >
> > The shared memory BAR is only created when the
> > backend supports VHOST_USER_PROTOCOL_F_SHMEM and
> > has configured shared memory regions. This maintains
> > backward compatibility with backends that do not
> > support shared memory functionality.
> >
> > Signed-off-by: Albert Esteve <aesteve@redhat.com>
> > ---
> >  hw/virtio/vhost-user-base.c       | 49 +++++++++++++++++++++++++++++--
> >  hw/virtio/vhost-user-device-pci.c | 34 +++++++++++++++++++--
> >  2 files changed, 78 insertions(+), 5 deletions(-)
> >
> > diff --git a/hw/virtio/vhost-user-base.c b/hw/virtio/vhost-user-base.c
> > index ff67a020b4..932f9b5596 100644
> > --- a/hw/virtio/vhost-user-base.c
> > +++ b/hw/virtio/vhost-user-base.c
> > @@ -16,6 +16,7 @@
> >  #include "hw/virtio/virtio-bus.h"
> >  #include "hw/virtio/vhost-user-base.h"
> >  #include "qemu/error-report.h"
> > +#include "migration/blocker.h"
> >
> >  static void vub_start(VirtIODevice *vdev)
> >  {
> > @@ -276,7 +277,9 @@ static void vub_device_realize(DeviceState *dev, Error **errp)
> >  {
> >      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> >      VHostUserBase *vub = VHOST_USER_BASE(dev);
> > -    int ret;
> > +    uint64_t memory_sizes[VIRTIO_MAX_SHMEM_REGIONS];
> > +    g_autofree char *name = NULL;
> > +    int i, ret, nregions;
> >
> >      if (!vub->chardev.chr) {
> >          error_setg(errp, "vhost-user-base: missing chardev");
> > @@ -319,7 +322,7 @@ static void vub_device_realize(DeviceState *dev, Error **errp)
> >
> >      /* Allocate queues */
> >      vub->vqs = g_ptr_array_sized_new(vub->num_vqs);
> > -    for (int i = 0; i < vub->num_vqs; i++) {
> > +    for (i = 0; i < vub->num_vqs; i++) {
> >          g_ptr_array_add(vub->vqs,
> >                          virtio_add_queue(vdev, vub->vq_size,
> >                                           vub_handle_output));
> > @@ -333,11 +336,51 @@ static void vub_device_realize(DeviceState *dev, Error **errp)
> >                           VHOST_BACKEND_TYPE_USER, 0, errp);
> >
> >      if (ret < 0) {
> > -        do_vhost_user_cleanup(vdev, vub);
> > +        goto err;
> > +    }
> > +
> > +    ret = vub->vhost_dev.vhost_ops->vhost_get_shmem_config(&vub->vhost_dev,
> > +                                                           &nregions,
> > +                                                           memory_sizes,
> > +                                                           errp);
> > +
> > +    if (ret < 0) {
> > +        goto err;
> > +    }
> > +
> > +    for (i = 0; i < nregions; i++) {
> > +        if (memory_sizes[i]) {
> > +            if (vub->vhost_dev.migration_blocker == NULL) {
> > +                error_setg(&vub->vhost_dev.migration_blocker,
> > +                       "Migration disabled: devices with VIRTIO Shared Memory "
> > +                       "Regions do not support migration yet.");
> > +                ret = migrate_add_blocker_normal(
> > +                    &vub->vhost_dev.migration_blocker,
> > +                    errp);
> > +
> > +                if (ret < 0) {
> > +                    goto err;
> > +                }
> > +            }
> > +
> > +            if (memory_sizes[i] % qemu_real_host_page_size() != 0) {
> > +                error_setg(errp, "Shared memory %d size must be a power of 2 "
> > +                                 "no smaller than the page size", i);
> > +                goto err;
> > +            }
> > +
> > +            name = g_strdup_printf("vub-shm-%d", i);
>
> name is leaked because it's scope extends until the end of the function
> (after the loop) but a newly allocated string is assigned each time
> around the loop. This can be fixed by moving the local variable
> declaration inside the if statement body.
>
> > +            memory_region_init(&virtio_new_shmem_region(vdev, i)->mr,
> > +                               OBJECT(vdev), name,
> > +                               memory_sizes[i]);
>
> ->mr is already initialized inside virtio_new_shmem_region(). I suggest
> changing the definition of virtio_new_shmem_region() like this:
>
>   void virtio_add_shmem_region(VirtIODevice *vdev, uint8_t shmid,
>                                uint64_t size)
>
> and then calling it like this:
>
>   virtio_add_shmem_region(vdev, shmid, memory_sizes[i]);
>
> ("new" usually returns a new instance whereas "add" modifies an owner
> object/container. I think "add" is more appropriate here.)

Yes, I was checking your comment in the first patch and came to this.
I was changing it as you suggested. I messed that up with double init
and max size.

>
> > +        }
> >      }
> >
> >      qemu_chr_fe_set_handlers(&vub->chardev, NULL, NULL, vub_event, NULL,
> >                               dev, NULL, true);
> > +    return;
> > +err:
> > +    do_vhost_user_cleanup(vdev, vub);
> >  }
> >
> >  static void vub_device_unrealize(DeviceState *dev)
> > diff --git a/hw/virtio/vhost-user-device-pci.c b/hw/virtio/vhost-user-device-pci.c
> > index f10bac874e..bac99e7c60 100644
> > --- a/hw/virtio/vhost-user-device-pci.c
> > +++ b/hw/virtio/vhost-user-device-pci.c
> > @@ -8,14 +8,18 @@
> >   */
> >
> >  #include "qemu/osdep.h"
> > +#include "qapi/error.h"
> >  #include "hw/qdev-properties.h"
> >  #include "hw/virtio/vhost-user-base.h"
> >  #include "hw/virtio/virtio-pci.h"
> >
> > +#define VIRTIO_DEVICE_PCI_SHMEM_BAR 3
> > +
> >  struct VHostUserDevicePCI {
> >      VirtIOPCIProxy parent_obj;
> >
> >      VHostUserBase vub;
> > +    MemoryRegion shmembar;
> >  };
> >
> >  #define TYPE_VHOST_USER_DEVICE_PCI "vhost-user-device-pci-base"
> > @@ -25,10 +29,36 @@ OBJECT_DECLARE_SIMPLE_TYPE(VHostUserDevicePCI, VHOST_USER_DEVICE_PCI)
> >  static void vhost_user_device_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
> >  {
> >      VHostUserDevicePCI *dev = VHOST_USER_DEVICE_PCI(vpci_dev);
> > -    DeviceState *vdev = DEVICE(&dev->vub);
> > +    DeviceState *dev_state = DEVICE(&dev->vub);
> > +    VirtIODevice *vdev = VIRTIO_DEVICE(dev_state);
> > +    VirtioSharedMemory *shmem, *next;
> > +    uint64_t offset = 0, shmem_size = 0;
> >
> >      vpci_dev->nvectors = 1;
> > -    qdev_realize(vdev, BUS(&vpci_dev->bus), errp);
> > +    qdev_realize(dev_state, BUS(&vpci_dev->bus), errp);
> > +
> > +    QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
>
> This is not specific to vhost-user-device-pci.c. All VIRTIO devices with
> Shared Memory Regions need PCI BAR setup code. Since vdev->shmem_list is
> part of the core hw/virtio/ code, it would make sense to move this into
> into hw/virtio/virtio-pci.c.
>
> > +        if (shmem->mr.size > UINT64_MAX - shmem_size) {
> > +            error_setg(errp, "Total shared memory required overflow");
> > +            return;
> > +        }
> > +        shmem_size = shmem_size + shmem->mr.size;
> > +    }
> > +    if (shmem_size) {
> > +        memory_region_init(&dev->shmembar, OBJECT(vpci_dev),
> > +                           "vhost-device-pci-shmembar", shmem_size);
> > +        QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
> > +            memory_region_add_subregion(&dev->shmembar, offset, &shmem->mr);
> > +            virtio_pci_add_shm_cap(vpci_dev, VIRTIO_DEVICE_PCI_SHMEM_BAR,
> > +                                   offset, shmem->mr.size, shmem->shmid);
> > +            offset = offset + shmem->mr.size;
> > +        }
> > +        pci_register_bar(&vpci_dev->pci_dev, VIRTIO_DEVICE_PCI_SHMEM_BAR,
> > +                        PCI_BASE_ADDRESS_SPACE_MEMORY |
> > +                        PCI_BASE_ADDRESS_MEM_PREFETCH |
> > +                        PCI_BASE_ADDRESS_MEM_TYPE_64,
> > +                        &dev->shmembar);
>
> This does not follow the same approach as virtio-gpu-pci.c and
> virtio-vga.c. They config the VirtIOPCIProxy's BARs
> (->modern_io_bar_idx, ->modern_mem_bar_idx, and ->msix_bar_idx) to
> control the BAR layout first and then call qdev_realize().
>
> Why does this patch do things differently? It looks like it's assuming
> vpci_dev always has a specific BAR layout (it could change).
>
> > +    }
>
> >  }
> >
> >  static void vhost_user_device_pci_class_init(ObjectClass *klass,
> > --
> > 2.49.0
> >



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 6/8] tests/qtest: Add GET_SHMEM validation test
  2025-08-18 23:14   ` Stefan Hajnoczi
@ 2025-08-19 12:16     ` Albert Esteve
  2025-08-20  8:47       ` Alyssa Ross
                         ` (2 more replies)
  0 siblings, 3 replies; 22+ messages in thread
From: Albert Esteve @ 2025-08-19 12:16 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, david, Michael S. Tsirkin, hi, jasowang,
	Laurent Vivier, dbassey, Stefano Garzarella, Paolo Bonzini,
	stevensd, Fabiano Rosas, Alex Bennée, slp

On Tue, Aug 19, 2025 at 12:42 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> On Mon, Aug 18, 2025 at 12:03:51PM +0200, Albert Esteve wrote:
> > Improve vhost-user-test to properly validate
> > VHOST_USER_GET_SHMEM_CONFIG message handling by
> > directly simulating the message exchange.
> >
> > The test manually triggers the
> > VHOST_USER_GET_SHMEM_CONFIG message by calling
> > chr_read() with a crafted VhostUserMsg, allowing direct
> > validation of the shmem configuration response handler.
>
> It looks like this test case invokes its own chr_read() function without
> going through QEMU, so I don't understand what this is testing?

I spent some time trying to test it, but in the end I could not
instatiate vhost-user-device because it is non user_creatable. I did
not find any test for vhost-user-device anywhere else either. But I
had already added most of the infrastructure here so I fallback to
chr_read() communication to avoid having to delete everything. My
though was that once we have other devices that use shared memory,
they could tweak the test to instantiate the proper device and test
this and the map/unmap operations.

Although after writing this, I think other devices will actually a
specific layout for their shared memory. So
VHOST_USER_GET_SHMEM_CONFIG is only ever going to be used by
vhost-user-device.

In general, trying to test this patch series has been a headache other
than trying with external device code I have. If you have an idea that
I could try to test this, I can try. Otherwise, probably is best to
remove this commit from the series and wait for another vhost-user
device that uses map/unmap to land to be able to test it.



>
> >
> > Added TestServerShmem structure to track shmem
> > configuration state, including nregions_sent and
> > sizes_sent arrays for comprehensive validation.
> > The test verifies that the response contains the expected
> > number of shared memory regions and their corresponding
> > sizes.
> >
> > Signed-off-by: Albert Esteve <aesteve@redhat.com>
> > ---
> >  tests/qtest/vhost-user-test.c | 91 +++++++++++++++++++++++++++++++++++
> >  1 file changed, 91 insertions(+)
> >
> > diff --git a/tests/qtest/vhost-user-test.c b/tests/qtest/vhost-user-test.c
> > index 75cb3e44b2..44a5e90b2e 100644
> > --- a/tests/qtest/vhost-user-test.c
> > +++ b/tests/qtest/vhost-user-test.c
> > @@ -88,6 +88,7 @@ typedef enum VhostUserRequest {
> >      VHOST_USER_SET_VRING_ENABLE = 18,
> >      VHOST_USER_GET_CONFIG = 24,
> >      VHOST_USER_SET_CONFIG = 25,
> > +    VHOST_USER_GET_SHMEM_CONFIG = 44,
> >      VHOST_USER_MAX
> >  } VhostUserRequest;
> >
> > @@ -109,6 +110,20 @@ typedef struct VhostUserLog {
> >      uint64_t mmap_offset;
> >  } VhostUserLog;
> >
> > +#define VIRTIO_MAX_SHMEM_REGIONS 256
> > +
> > +typedef struct VhostUserShMemConfig {
> > +    uint32_t nregions;
> > +    uint32_t padding;
> > +    uint64_t memory_sizes[VIRTIO_MAX_SHMEM_REGIONS];
> > +} VhostUserShMemConfig;
> > +
> > +typedef struct TestServerShmem {
> > +    bool test_enabled;
> > +    uint32_t nregions_sent;
> > +    uint64_t sizes_sent[VIRTIO_MAX_SHMEM_REGIONS];
> > +} TestServerShmem;
> > +
> >  typedef struct VhostUserMsg {
> >      VhostUserRequest request;
> >
> > @@ -124,6 +139,7 @@ typedef struct VhostUserMsg {
> >          struct vhost_vring_addr addr;
> >          VhostUserMemory memory;
> >          VhostUserLog log;
> > +        VhostUserShMemConfig shmem;
> >      } payload;
> >  } QEMU_PACKED VhostUserMsg;
> >
> > @@ -170,6 +186,7 @@ typedef struct TestServer {
> >      bool test_fail;
> >      int test_flags;
> >      int queues;
> > +    TestServerShmem shmem;
> >      struct vhost_user_ops *vu_ops;
> >  } TestServer;
> >
> > @@ -513,6 +530,31 @@ static void chr_read(void *opaque, const uint8_t *buf, int size)
> >          qos_printf("set_vring(%d)=%s\n", msg.payload.state.index,
> >                     msg.payload.state.num ? "enabled" : "disabled");
> >          break;
> > +
> > +    case VHOST_USER_GET_SHMEM_CONFIG:
> > +        if (!s->shmem.test_enabled) {
> > +            /* Reply with error if shmem feature not enabled */
> > +            msg.flags |= VHOST_USER_REPLY_MASK;
> > +            msg.size = sizeof(uint64_t);
> > +            msg.payload.u64 = -1; /* Error */
> > +            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
> > +        } else {
> > +            /* Reply with test shmem configuration */
> > +            msg.flags |= VHOST_USER_REPLY_MASK;
> > +            msg.size = sizeof(VhostUserShMemConfig);
> > +            msg.payload.shmem.nregions = 2; /* Test with 2 regions */
> > +            msg.payload.shmem.padding = 0;
> > +            msg.payload.shmem.memory_sizes[0] = 0x100000; /* 1MB */
> > +            msg.payload.shmem.memory_sizes[1] = 0x200000; /* 2MB */
> > +
> > +            /* Record what we're sending for test validation */
> > +            s->shmem.nregions_sent = msg.payload.shmem.nregions;
> > +            s->shmem.sizes_sent[0] = msg.payload.shmem.memory_sizes[0];
> > +            s->shmem.sizes_sent[1] = msg.payload.shmem.memory_sizes[1];
> > +
> > +            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
> > +        }
> > +        break;
> >
> >      default:
> >          qos_printf("vhost-user: un-handled message: %d\n", msg.request);
> > @@ -809,6 +851,22 @@ static void *vhost_user_test_setup_shm(GString *cmd_line, void *arg)
> >      return server;
> >  }
> >
> > +static void *vhost_user_test_setup_shmem_config(GString *cmd_line, void *arg)
> > +{
> > +    TestServer *server = test_server_new("vhost-user-test", arg);
> > +    test_server_listen(server);
> > +
> > +    /* Enable shmem testing for this server */
> > +    server->shmem.test_enabled = true;
> > +
> > +    append_mem_opts(server, cmd_line, 256, TEST_MEMFD_SHM);
> > +    server->vu_ops->append_opts(server, cmd_line, "");
> > +
> > +    g_test_queue_destroy(vhost_user_test_cleanup, server);
> > +
> > +    return server;
> > +}
> > +
> >  static void test_read_guest_mem(void *obj, void *arg, QGuestAllocator *alloc)
> >  {
> >      TestServer *server = arg;
> > @@ -1089,6 +1147,33 @@ static struct vhost_user_ops g_vu_net_ops = {
> >      .get_protocol_features = vu_net_get_protocol_features,
> >  };
> >
> > +/* Test function for VHOST_USER_GET_SHMEM_CONFIG message */
> > +static void test_shmem_config(void *obj, void *arg, QGuestAllocator *alloc)
> > +{
> > +    TestServer *s = arg;
> > +
> > +    g_assert_true(s->shmem.test_enabled);
> > +
> > +    g_mutex_lock(&s->data_mutex);
> > +    s->shmem.nregions_sent = 0;
> > +    s->shmem.sizes_sent[0] = 0;
> > +    s->shmem.sizes_sent[1] = 0;
> > +    g_mutex_unlock(&s->data_mutex);
> > +
> > +    VhostUserMsg msg = {
> > +        .request = VHOST_USER_GET_SHMEM_CONFIG,
> > +        .flags = VHOST_USER_VERSION,
> > +        .size = 0,
> > +    };
> > +    chr_read(s, (uint8_t *) &msg, VHOST_USER_HDR_SIZE);
> > +
> > +    g_mutex_lock(&s->data_mutex);
> > +    g_assert_cmpint(s->shmem.nregions_sent, ==, 2);
> > +    g_assert_cmpint(s->shmem.sizes_sent[0], ==, 0x100000); /* 1MB */
> > +    g_assert_cmpint(s->shmem.sizes_sent[1], ==, 0x200000); /* 2MB */
> > +    g_mutex_unlock(&s->data_mutex);
> > +}
> > +
> >  static void register_vhost_user_test(void)
> >  {
> >      QOSGraphTestOptions opts = {
> > @@ -1136,6 +1221,12 @@ static void register_vhost_user_test(void)
> >      qos_add_test("vhost-user/multiqueue",
> >                   "virtio-net",
> >                   test_multiqueue, &opts);
> > +
> > +    opts.before = vhost_user_test_setup_shmem_config;
> > +    opts.edge.extra_device_opts = "";
> > +    qos_add_test("vhost-user/shmem-config",
> > +                 "virtio-net",
> > +                 test_shmem_config, &opts);
> >  }
> >  libqos_init(register_vhost_user_test);
> >
> > --
> > 2.49.0
> >



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 1/8] vhost-user: Add VirtIO Shared Memory map request
  2025-08-18 18:58   ` Stefan Hajnoczi
@ 2025-08-19 12:47     ` Albert Esteve
  2025-08-19 12:56       ` Albert Esteve
  0 siblings, 1 reply; 22+ messages in thread
From: Albert Esteve @ 2025-08-19 12:47 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, david, Michael S. Tsirkin, hi, jasowang,
	Laurent Vivier, dbassey, Stefano Garzarella, Paolo Bonzini,
	stevensd, Fabiano Rosas, Alex Bennée, slp

On Mon, Aug 18, 2025 at 10:41 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> On Mon, Aug 18, 2025 at 12:03:46PM +0200, Albert Esteve wrote:
>
> (I haven't fully reviewed this yet, but here are my current comments.)
>
> > Add SHMEM_MAP/UNMAP requests to vhost-user for
> > dynamic management of VIRTIO Shared Memory mappings.
> >
> > This implementation introduces VhostUserShmemObject
> > as an intermediate QOM parent for MemoryRegions
> > created for SHMEM_MAP requests. This object
> > provides reference-counted lifecycle management
> > with automatic cleanup.
> >
> > This request allows backends to dynamically map
> > file descriptors into a VIRTIO Shared Memory
> > Regions identified by their shmid. Maps are created
> > using memory_region_init_ram_device_ptr() with
> > configurable read/write permissions, and the resulting
> > MemoryRegions are added as subregions to the shmem
> > container region. The mapped memory is then advertised
> > to the guest VIRTIO drivers as a base address plus
> > offset for reading and writting according
> > to the requested mmap flags.
> >
> > The backend can unmap memory ranges within a given
> > VIRTIO Shared Memory Region to free resources.
> > Upon receiving this message, the frontend removes
> > the MemoryRegion as a subregion and automatically
> > unreferences the associated VhostUserShmemObject,
> > triggering cleanup if no other references exist.
> >
> > Error handling has been improved to ensure consistent
> > behavior across handlers that manage their own
> > vhost_user_send_resp() calls. Since these handlers
> > clear the VHOST_USER_NEED_REPLY_MASK flag, explicit
> > error checking ensures proper connection closure on
> > failures, maintaining the expected error flow.
> >
> > Note the memory region commit for these
> > operations needs to be delayed until after we
> > respond to the backend to avoid deadlocks.
> >
> > Signed-off-by: Albert Esteve <aesteve@redhat.com>
> > ---
> >  hw/virtio/meson.build                     |   1 +
> >  hw/virtio/vhost-user-shmem.c              | 134 ++++++++++++++
> >  hw/virtio/vhost-user.c                    | 207 +++++++++++++++++++++-
> >  hw/virtio/virtio.c                        | 109 ++++++++++++
> >  include/hw/virtio/vhost-user-shmem.h      |  75 ++++++++
> >  include/hw/virtio/virtio.h                |  93 ++++++++++
> >  subprojects/libvhost-user/libvhost-user.c |  70 ++++++++
> >  subprojects/libvhost-user/libvhost-user.h |  54 ++++++
> >  8 files changed, 741 insertions(+), 2 deletions(-)
> >  create mode 100644 hw/virtio/vhost-user-shmem.c
> >  create mode 100644 include/hw/virtio/vhost-user-shmem.h
> >
> > diff --git a/hw/virtio/meson.build b/hw/virtio/meson.build
> > index 3ea7b3cec8..5efcf70b75 100644
> > --- a/hw/virtio/meson.build
> > +++ b/hw/virtio/meson.build
> > @@ -20,6 +20,7 @@ if have_vhost
> >      # fixme - this really should be generic
> >      specific_virtio_ss.add(files('vhost-user.c'))
> >      system_virtio_ss.add(files('vhost-user-base.c'))
> > +    system_virtio_ss.add(files('vhost-user-shmem.c'))
> >
> >      # MMIO Stubs
> >      system_virtio_ss.add(files('vhost-user-device.c'))
> > diff --git a/hw/virtio/vhost-user-shmem.c b/hw/virtio/vhost-user-shmem.c
> > new file mode 100644
> > index 0000000000..1d763b56b6
> > --- /dev/null
> > +++ b/hw/virtio/vhost-user-shmem.c
> > @@ -0,0 +1,134 @@
> > +/*
> > + * VHost-user Shared Memory Object
> > + *
> > + * Copyright Red Hat, Inc. 2025
> > + *
> > + * Authors:
> > + *     Albert Esteve <aesteve@redhat.com>
> > + *
> > + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> > + * See the COPYING file in the top-level directory.
> > + */
> > +
> > +#include "qemu/osdep.h"
> > +#include "hw/virtio/vhost-user-shmem.h"
> > +#include "system/memory.h"
> > +#include "qapi/error.h"
> > +#include "qemu/error-report.h"
> > +#include "trace.h"
> > +
> > +/**
> > + * VhostUserShmemObject
> > + *
> > + * An intermediate QOM object that manages individual shared memory mappings
> > + * created by VHOST_USER_BACKEND_SHMEM_MAP requests. It acts as a parent for
> > + * MemoryRegion objects, providing proper lifecycle management with reference
> > + * counting. When the object is unreferenced and its reference count drops
> > + * to zero, it automatically cleans up the MemoryRegion and unmaps the memory.
> > + */
> > +
> > +static void vhost_user_shmem_object_finalize(Object *obj);
> > +static void vhost_user_shmem_object_instance_init(Object *obj);
> > +
> > +static const TypeInfo vhost_user_shmem_object_info = {
> > +    .name = TYPE_VHOST_USER_SHMEM_OBJECT,
> > +    .parent = TYPE_OBJECT,
> > +    .instance_size = sizeof(VhostUserShmemObject),
> > +    .instance_init = vhost_user_shmem_object_instance_init,
> > +    .instance_finalize = vhost_user_shmem_object_finalize,
> > +};
> > +
> > +static void vhost_user_shmem_object_instance_init(Object *obj)
> > +{
> > +    VhostUserShmemObject *shmem_obj = VHOST_USER_SHMEM_OBJECT(obj);
> > +
> > +    shmem_obj->shmid = 0;
> > +    shmem_obj->fd = -1;
> > +    shmem_obj->shm_offset = 0;
> > +    shmem_obj->len = 0;
> > +    shmem_obj->mr = NULL;
> > +}
> > +
> > +static void vhost_user_shmem_object_finalize(Object *obj)
> > +{
> > +    VhostUserShmemObject *shmem_obj = VHOST_USER_SHMEM_OBJECT(obj);
> > +
> > +    /* Clean up MemoryRegion if it exists */
> > +    if (shmem_obj->mr) {
> > +        /* Unparent the MemoryRegion to trigger cleanup */
> > +        object_unparent(OBJECT(shmem_obj->mr));
> > +        shmem_obj->mr = NULL;
> > +    }
> > +
> > +    /* Close file descriptor */
> > +    if (shmem_obj->fd >= 0) {
> > +        close(shmem_obj->fd);
> > +        shmem_obj->fd = -1;
> > +    }
> > +}
> > +
> > +VhostUserShmemObject *vhost_user_shmem_object_new(uint8_t shmid,
> > +                                                   int fd,
> > +                                                   uint64_t fd_offset,
> > +                                                   uint64_t shm_offset,
> > +                                                   uint64_t len,
> > +                                                   uint16_t flags)
> > +{
> > +    VhostUserShmemObject *shmem_obj;
> > +    MemoryRegion *mr;
> > +    g_autoptr(GString) mr_name = g_string_new(NULL);
> > +    uint32_t ram_flags;
> > +    Error *local_err = NULL;
> > +
> > +    if (len == 0) {
> > +        error_report("Shared memory mapping size cannot be zero");
> > +        return NULL;
> > +    }
> > +
> > +    fd = dup(fd);
> > +    if (fd < 0) {
> > +        error_report("Failed to duplicate fd: %s", strerror(errno));
> > +        return NULL;
> > +    }
> > +
> > +    /* Determine RAM flags */
> > +    ram_flags = RAM_SHARED;
> > +    if (!(flags & VHOST_USER_FLAG_MAP_RW)) {
> > +        ram_flags |= RAM_READONLY_FD;
> > +    }
> > +
> > +    /* Create the VhostUserShmemObject */
> > +    shmem_obj = VHOST_USER_SHMEM_OBJECT(
> > +        object_new(TYPE_VHOST_USER_SHMEM_OBJECT));
> > +
> > +    /* Set up object properties */
> > +    shmem_obj->shmid = shmid;
> > +    shmem_obj->fd = fd;
> > +    shmem_obj->shm_offset = shm_offset;
> > +    shmem_obj->len = len;
> > +
> > +    /* Create MemoryRegion as a child of this object */
> > +    mr = g_new0(MemoryRegion, 1);
> > +    g_string_printf(mr_name, "vhost-user-shmem-%d-%" PRIx64, shmid, shm_offset);
> > +
> > +    /* Initialize MemoryRegion with file descriptor */
> > +    if (!memory_region_init_ram_from_fd(mr, OBJECT(shmem_obj), mr_name->str,
> > +                                        len, ram_flags, fd, fd_offset,
> > +                                        &local_err)) {
> > +        error_report_err(local_err);
> > +        g_free(mr);
> > +        close(fd);
> > +        object_unref(OBJECT(shmem_obj));
> > +        return NULL;
> > +    }
> > +
> > +    shmem_obj->mr = mr;
> > +    return shmem_obj;
> > +}
> > +
> > +static void vhost_user_shmem_register_types(void)
> > +{
> > +    type_register_static(&vhost_user_shmem_object_info);
> > +}
> > +
> > +type_init(vhost_user_shmem_register_types)
> > diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > index 1e1d6b0d6e..eb3ad728b0 100644
> > --- a/hw/virtio/vhost-user.c
> > +++ b/hw/virtio/vhost-user.c
> > @@ -11,6 +11,7 @@
> >  #include "qemu/osdep.h"
> >  #include "qapi/error.h"
> >  #include "hw/virtio/virtio-dmabuf.h"
> > +#include "hw/virtio/vhost-user-shmem.h"
> >  #include "hw/virtio/vhost.h"
> >  #include "hw/virtio/virtio-crypto.h"
> >  #include "hw/virtio/vhost-user.h"
> > @@ -115,6 +116,8 @@ typedef enum VhostUserBackendRequest {
> >      VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> >      VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> >      VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> > +    VHOST_USER_BACKEND_SHMEM_MAP = 9,
> > +    VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> >      VHOST_USER_BACKEND_MAX
> >  }  VhostUserBackendRequest;
> >
> > @@ -192,6 +195,23 @@ typedef struct VhostUserShared {
> >      unsigned char uuid[16];
> >  } VhostUserShared;
> >
> > +/* For the flags field of VhostUserMMap */
> > +#define VHOST_USER_FLAG_MAP_RW (1u << 0)
> > +
> > +typedef struct {
> > +    /* VIRTIO Shared Memory Region ID */
> > +    uint8_t shmid;
> > +    uint8_t padding[7];
> > +    /* File offset */
> > +    uint64_t fd_offset;
> > +    /* Offset within the VIRTIO Shared Memory Region */
> > +    uint64_t shm_offset;
> > +    /* Size of the mapping */
> > +    uint64_t len;
> > +    /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
> > +    uint16_t flags;
> > +} VhostUserMMap;
> > +
> >  typedef struct {
> >      VhostUserRequest request;
> >
> > @@ -224,6 +244,7 @@ typedef union {
> >          VhostUserInflight inflight;
> >          VhostUserShared object;
> >          VhostUserTransferDeviceState transfer_state;
> > +        VhostUserMMap mmap;
> >  } VhostUserPayload;
> >
> >  typedef struct VhostUserMsg {
> > @@ -1768,6 +1789,172 @@ vhost_user_backend_handle_shared_object_lookup(struct vhost_user *u,
> >      return 0;
> >  }
> >
> > +/**
> > + * vhost_user_backend_handle_shmem_map() - Handle SHMEM_MAP backend request
> > + * @dev: vhost device
> > + * @ioc: QIOChannel for communication
> > + * @hdr: vhost-user message header
> > + * @payload: message payload containing mapping details
> > + * @fd: file descriptor for the shared memory region
> > + *
> > + * Handles VHOST_USER_BACKEND_SHMEM_MAP requests from the backend. Creates
> > + * a VhostUserShmemObject to manage the shared memory mapping and adds it
> > + * to the appropriate VirtIO shared memory region. The VhostUserShmemObject
> > + * serves as an intermediate parent for the MemoryRegion, ensuring proper
> > + * lifecycle management with reference counting.
> > + *
> > + * Returns: 0 on success, negative errno on failure
> > + */
> > +static int
> > +vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> > +                                    QIOChannel *ioc,
> > +                                    VhostUserHeader *hdr,
> > +                                    VhostUserPayload *payload,
> > +                                    int fd)
> > +{
> > +    VirtioSharedMemory *shmem;
> > +    VhostUserMMap *vu_mmap = &payload->mmap;
> > +    Error *local_err = NULL;
> > +    g_autoptr(GString) shm_name = g_string_new(NULL);
> > +
> > +    if (fd < 0) {
> > +        error_report("Bad fd for map");
> > +        return -EBADF;
> > +    }
> > +
> > +    if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
> > +        error_report("Device has no VIRTIO Shared Memory Regions. "
> > +                     "Requested ID: %d", vu_mmap->shmid);
> > +        return -EFAULT;
> > +    }
> > +
> > +    shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
> > +    if (!shmem) {
> > +        error_report("VIRTIO Shared Memory Region at "
> > +                     "ID %d not found or unitialized", vu_mmap->shmid);
> > +        return -EFAULT;
> > +    }
> > +
> > +    if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> > +        (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
> > +        error_report("Bad offset/len for mmap %" PRIx64 "+%" PRIx64,
> > +                     vu_mmap->shm_offset, vu_mmap->len);
> > +        return -EFAULT;
> > +    }
> > +
> > +    g_string_printf(shm_name, "virtio-shm%i-%lu",
> > +                    vu_mmap->shmid, vu_mmap->shm_offset);
> > +
> > +    memory_region_transaction_begin();
> > +
> > +    /* Create VhostUserShmemObject as intermediate parent for MemoryRegion */
> > +    VhostUserShmemObject *shmem_obj = vhost_user_shmem_object_new(
> > +        vu_mmap->shmid, fd, vu_mmap->fd_offset, vu_mmap->shm_offset,
> > +        vu_mmap->len, vu_mmap->flags);
> > +
> > +    if (!shmem_obj) {
> > +        memory_region_transaction_commit();
> > +        return -EFAULT;
> > +    }
> > +
> > +    /* Add the mapping using our VhostUserShmemObject as the parent */
> > +    if (virtio_add_shmem_map(shmem, shmem_obj) != 0) {
> > +        error_report("Failed to add shared memory mapping");
> > +        object_unref(OBJECT(shmem_obj));
> > +        memory_region_transaction_commit();
> > +        return -EFAULT;
> > +    }
> > +
> > +    if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> > +        payload->u64 = 0;
> > +        hdr->size = sizeof(payload->u64);
> > +        vhost_user_send_resp(ioc, hdr, payload, &local_err);
> > +        if (local_err) {
> > +            error_report_err(local_err);
> > +            memory_region_transaction_commit();
> > +            return -EFAULT;
> > +        }
> > +    }
> > +
> > +    memory_region_transaction_commit();
> > +
> > +    return 0;
> > +}
> > +
> > +/**
> > + * vhost_user_backend_handle_shmem_unmap() - Handle SHMEM_UNMAP backend request
> > + * @dev: vhost device
> > + * @ioc: QIOChannel for communication
> > + * @hdr: vhost-user message header
> > + * @payload: message payload containing unmapping details
> > + *
> > + * Handles VHOST_USER_BACKEND_SHMEM_UNMAP requests from the backend. Removes
> > + * the specified memory mapping from the VirtIO shared memory region. This
> > + * automatically unreferences the associated VhostUserShmemObject, which may
> > + * trigger its finalization and cleanup (munmap, close fd) if no other
> > + * references exist.
> > + *
> > + * Returns: 0 on success, negative errno on failure
> > + */
> > +static int
> > +vhost_user_backend_handle_shmem_unmap(struct vhost_dev *dev,
> > +                                      QIOChannel *ioc,
> > +                                      VhostUserHeader *hdr,
> > +                                      VhostUserPayload *payload)
> > +{
> > +    VirtioSharedMemory *shmem;
> > +    VirtioSharedMemoryMapping *mmap = NULL;
> > +    VhostUserMMap *vu_mmap = &payload->mmap;
> > +    Error *local_err = NULL;
> > +
> > +    if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
> > +        error_report("Device has no VIRTIO Shared Memory Regions. "
> > +                     "Requested ID: %d", vu_mmap->shmid);
> > +        return -EFAULT;
> > +    }
> > +
> > +    shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
> > +    if (!shmem) {
> > +        error_report("VIRTIO Shared Memory Region at "
> > +                     "ID %d not found or unitialized", vu_mmap->shmid);
> > +        return -EFAULT;
> > +    }
> > +
> > +    if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> > +        (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
> > +        error_report("Bad offset/len for unmmap %" PRIx64 "+%" PRIx64,
> > +                     vu_mmap->shm_offset, vu_mmap->len);
> > +        return -EFAULT;
> > +    }
> > +
> > +    mmap = virtio_find_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
> > +    if (!mmap) {
> > +        error_report("Shared memory mapping not found at offset %" PRIx64
> > +                     " with length %" PRIx64,
> > +                     vu_mmap->shm_offset, vu_mmap->len);
> > +        return -EFAULT;
> > +    }
> > +
> > +    memory_region_transaction_begin();
> > +    memory_region_del_subregion(&shmem->mr, mmap->mem);
> > +    if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> > +        payload->u64 = 0;
> > +        hdr->size = sizeof(payload->u64);
> > +        vhost_user_send_resp(ioc, hdr, payload, &local_err);
> > +        if (local_err) {
> > +            error_report_err(local_err);
> > +            memory_region_transaction_commit();
> > +            return -EFAULT;
> > +        }
> > +    }
> > +    memory_region_transaction_commit();
> > +
> > +    /* Free the MemoryRegion only after vhost_commit */
> > +    virtio_del_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
> > +
> > +    return 0;
> > +}
> > +
> >  static void close_backend_channel(struct vhost_user *u)
> >  {
> >      g_source_destroy(u->backend_src);
> > @@ -1833,8 +2020,24 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
> >                                                               &payload.object);
> >          break;
> >      case VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP:
> > -        ret = vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
> > -                                                             &hdr, &payload);
> > +        /* Handler manages its own response, check error and close connection */
> > +        if (vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
> > +                                                           &hdr, &payload) < 0) {
> > +            goto err;
> > +        }
> > +        break;
> > +    case VHOST_USER_BACKEND_SHMEM_MAP:
> > +        /* Handler manages its own response, check error and close connection */
> > +        if (vhost_user_backend_handle_shmem_map(dev, ioc, &hdr, &payload,
> > +                                                fd ? fd[0] : -1) < 0) {
> > +            goto err;
> > +        }
> > +        break;
> > +    case VHOST_USER_BACKEND_SHMEM_UNMAP:
> > +        /* Handler manages its own response, check error and close connection */
> > +        if (vhost_user_backend_handle_shmem_unmap(dev, ioc, &hdr, &payload) < 0) {
> > +            goto err;
> > +        }
> >          break;
> >      default:
> >          error_report("Received unexpected msg type: %d.", hdr.request);
> > diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> > index 9a81ad912e..1ead5f653f 100644
> > --- a/hw/virtio/virtio.c
> > +++ b/hw/virtio/virtio.c
> > @@ -14,6 +14,7 @@
> >  #include "qemu/osdep.h"
> >  #include "qapi/error.h"
> >  #include "qapi/qapi-commands-virtio.h"
> > +#include "hw/virtio/vhost-user-shmem.h"
> >  #include "trace.h"
> >  #include "qemu/defer-call.h"
> >  #include "qemu/error-report.h"
> > @@ -3045,6 +3046,100 @@ int virtio_save(VirtIODevice *vdev, QEMUFile *f)
> >      return vmstate_save_state(f, &vmstate_virtio, vdev, NULL);
> >  }
> >
> > +VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid)
> > +{
> > +    VirtioSharedMemory *elem;
> > +    g_autofree char *name = NULL;
> > +
> > +    elem = g_new0(VirtioSharedMemory, 1);
> > +    elem->shmid = shmid;
> > +
> > +    /* Initialize embedded MemoryRegion as container for shmem mappings */
> > +    name = g_strdup_printf("virtio-shmem-%d", shmid);
> > +    memory_region_init(&elem->mr, OBJECT(vdev), name, UINT64_MAX);
> > +    QTAILQ_INIT(&elem->mmaps);
> > +    QSIMPLEQ_INSERT_TAIL(&vdev->shmem_list, elem, entry);
> > +    return QSIMPLEQ_LAST(&vdev->shmem_list, VirtioSharedMemory, entry);
>
> "return elem;" is simpler.

This sounds familiar. I hope is not a change that got lost, but simply
that I changed
it somewhere else but I repeated the pattern here. I will change it.

>
> > +}
> > +
> > +VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid)
> > +{
> > +    VirtioSharedMemory *shmem, *next;
> > +    QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
> > +        if (shmem->shmid == shmid) {
> > +            return shmem;
> > +        }
> > +    }
> > +    return NULL;
> > +}
> > +
> > +int virtio_add_shmem_map(VirtioSharedMemory *shmem,
> > +                         VhostUserShmemObject *shmem_obj)
> > +{
> > +    VirtioSharedMemoryMapping *mmap;
> > +    if (!shmem_obj) {
> > +        error_report("VhostUserShmemObject cannot be NULL");
> > +        return -1;
> > +    }
> > +    if (!shmem_obj->mr) {
> > +        error_report("VhostUserShmemObject has no MemoryRegion");
> > +        return -1;
> > +    }
> > +
> > +    /* Validate boundaries against the VIRTIO shared memory region */
> > +    if (shmem_obj->shm_offset + shmem_obj->len > shmem->mr.size) {
>
> From above:
>
>   memory_region_init(&elem->mr, OBJECT(vdev), name, UINT64_MAX);
>
> shmem->mr's size is UINT64_MAX and this if statement doesn't handle
> integer overflow. What is the purpose of this size check?
>
> > +        error_report("Memory exceeds the shared memory boundaries");
> > +        return -1;
> > +    }
> > +
> > +    /* Create the VirtioSharedMemoryMapping wrapper */
> > +    mmap = g_new0(VirtioSharedMemoryMapping, 1);
> > +    mmap->mem = shmem_obj->mr;
> > +    mmap->offset = shmem_obj->shm_offset;
> > +    mmap->shmem_obj = shmem_obj;
> > +
> > +    /* Take a reference on the VhostUserShmemObject */
> > +    object_ref(OBJECT(shmem_obj));
>
> Why is the reference count incremented here? The caller seems to pass
> ownership to this function...at least
> vhost_user_backend_handle_shmem_map() doesn't touch shmem_obj afterwards
> and doesn't unref it.

You are right. transfering ownership is better and does not require
the count increment. I'll remove this.

>
> > +
> > +    /* Add as subregion to the VIRTIO shared memory */
> > +    memory_region_add_subregion(&shmem->mr, mmap->offset, mmap->mem);
> > +
> > +    /* Add to the mapped regions list */
> > +    QTAILQ_INSERT_TAIL(&shmem->mmaps, mmap, link);
> > +
> > +    return 0;
> > +}
> > +
> > +VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
> > +                                          hwaddr offset, uint64_t size)
> > +{
> > +    VirtioSharedMemoryMapping *mmap;
> > +    QTAILQ_FOREACH(mmap, &shmem->mmaps, link) {
> > +        if (mmap->offset == offset && mmap->mem->size == size) {
> > +            return mmap;
> > +        }
> > +    }
> > +    return NULL;
> > +}
> > +
> > +void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
> > +                          uint64_t size)
> > +{
> > +    VirtioSharedMemoryMapping *mmap = virtio_find_shmem_map(shmem, offset, size);
> > +    if (mmap == NULL) {
> > +        return;
> > +    }
> > +
> > +    /*
> > +     * Unref the VhostUserShmemObject which will trigger automatic cleanup
> > +     * when the reference count reaches zero.
> > +     */
> > +    object_unref(OBJECT(mmap->shmem_obj));
> > +
> > +    QTAILQ_REMOVE(&shmem->mmaps, mmap, link);
> > +    g_free(mmap);
> > +}
> > +
> >  /* A wrapper for use as a VMState .put function */
> >  static int virtio_device_put(QEMUFile *f, void *opaque, size_t size,
> >                                const VMStateField *field, JSONWriter *vmdesc)
> > @@ -3521,6 +3616,7 @@ void virtio_init(VirtIODevice *vdev, uint16_t device_id, size_t config_size)
> >              NULL, virtio_vmstate_change, vdev);
> >      vdev->device_endian = virtio_default_endian();
> >      vdev->use_guest_notifier_mask = true;
> > +    QSIMPLEQ_INIT(&vdev->shmem_list);
> >  }
> >
> >  /*
> > @@ -4032,11 +4128,24 @@ static void virtio_device_free_virtqueues(VirtIODevice *vdev)
> >  static void virtio_device_instance_finalize(Object *obj)
> >  {
> >      VirtIODevice *vdev = VIRTIO_DEVICE(obj);
> > +    VirtioSharedMemory *shmem;
> >
> >      virtio_device_free_virtqueues(vdev);
> >
> >      g_free(vdev->config);
> >      g_free(vdev->vector_queues);
> > +    while (!QSIMPLEQ_EMPTY(&vdev->shmem_list)) {
> > +        shmem = QSIMPLEQ_FIRST(&vdev->shmem_list);
> > +        while (!QTAILQ_EMPTY(&shmem->mmaps)) {
> > +            VirtioSharedMemoryMapping *mmap_reg = QTAILQ_FIRST(&shmem->mmaps);
> > +            virtio_del_shmem_map(shmem, mmap_reg->offset, mmap_reg->mem->size);
> > +        }
> > +
> > +        /* Clean up the embedded MemoryRegion */
> > +        object_unparent(OBJECT(&shmem->mr));
> > +        QSIMPLEQ_REMOVE_HEAD(&vdev->shmem_list, entry);
> > +        g_free(shmem);
> > +    }
> >  }
> >
> >  static const Property virtio_properties[] = {
> > diff --git a/include/hw/virtio/vhost-user-shmem.h b/include/hw/virtio/vhost-user-shmem.h
> > new file mode 100644
> > index 0000000000..1f8c7bdc1f
> > --- /dev/null
> > +++ b/include/hw/virtio/vhost-user-shmem.h
> > @@ -0,0 +1,75 @@
> > +/*
> > + * VHost-user Shared Memory Object
> > + *
> > + * Copyright Red Hat, Inc. 2025
> > + *
> > + * Authors:
> > + *     Albert Esteve <aesteve@redhat.com>
> > + *
> > + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> > + * See the COPYING file in the top-level directory.
> > + */
> > +
> > +#ifndef VHOST_USER_SHMEM_H
> > +#define VHOST_USER_SHMEM_H
> > +
> > +#include "qemu/osdep.h"
> > +#include "qom/object.h"
> > +#include "system/memory.h"
> > +#include "qapi/error.h"
> > +
> > +/* vhost-user memory mapping flags */
> > +#define VHOST_USER_FLAG_MAP_RW (1u << 0)
>
> This constant is part of the vhost-user protocol. It would be nicer to
> keep that all in one file instead of spreading protocol definitions
> across multiple files.
>
> In this case you could replace vhost_user_shmem_object_new()'s flags
> argument with a bool allow_write argument. That way the vhost-user
> protocol parsing happens in vhost-user.c and not vhost-user-shmem.c.

I'll go for this option.

>
> Alternatively, you could move the protocol definitions from vhost-user.c
> into a header file and include them from vhost-user-shmem.c.
>
> > +
> > +#define TYPE_VHOST_USER_SHMEM_OBJECT "vhost-user-shmem"
> > +OBJECT_DECLARE_SIMPLE_TYPE(VhostUserShmemObject, VHOST_USER_SHMEM_OBJECT)
> > +
> > +/**
> > + * VhostUserShmemObject:
> > + * @parent: Parent object
> > + * @shmid: VIRTIO Shared Memory Region ID
> > + * @fd: File descriptor for the shared memory region
> > + * @shm_offset: Offset within the VIRTIO Shared Memory Region
> > + * @len: Size of the mapping
> > + * @mr: MemoryRegion associated with this shared memory mapping
> > + *
> > + * An intermediate QOM object that manages individual shared memory mappings
> > + * created by VHOST_USER_BACKEND_SHMEM_MAP requests. It acts as a parent for
> > + * MemoryRegion objects, providing proper lifecycle management with reference
> > + * counting. When the object is unreferenced and its reference count drops
> > + * to zero, it automatically cleans up the MemoryRegion and unmaps the memory.
> > + */
> > +struct VhostUserShmemObject {
> > +    Object parent;
> > +
> > +    uint8_t shmid;
> > +    int fd;
> > +    uint64_t shm_offset;
> > +    uint64_t len;
> > +    MemoryRegion *mr;
> > +};
> > +
> > +/**
> > + * vhost_user_shmem_object_new() - Create a new VhostUserShmemObject
> > + * @shmid: VIRTIO Shared Memory Region ID
> > + * @fd: File descriptor for the shared memory
> > + * @fd_offset: Offset within the file descriptor
> > + * @shm_offset: Offset within the VIRTIO Shared Memory Region
> > + * @len: Size of the mapping
> > + * @flags: Mapping flags (VHOST_USER_FLAG_MAP_*)
> > + *
> > + * Creates a new VhostUserShmemObject that manages a shared memory mapping.
> > + * The object will create a MemoryRegion using memory_region_init_ram_from_fd()
> > + * as a child object. When the object is finalized, it will automatically
> > + * clean up the MemoryRegion and close the file descriptor.
> > + *
> > + * Return: A new VhostUserShmemObject on success, NULL on error.
> > + */
> > +VhostUserShmemObject *vhost_user_shmem_object_new(uint8_t shmid,
> > +                                                   int fd,
> > +                                                   uint64_t fd_offset,
> > +                                                   uint64_t shm_offset,
> > +                                                   uint64_t len,
> > +                                                   uint16_t flags);
> > +
> > +#endif /* VHOST_USER_SHMEM_H */
> > diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> > index c594764f23..a563bbac2c 100644
> > --- a/include/hw/virtio/virtio.h
> > +++ b/include/hw/virtio/virtio.h
> > @@ -98,6 +98,26 @@ enum virtio_device_endian {
> >      VIRTIO_DEVICE_ENDIAN_BIG,
> >  };
> >
> > +struct VhostUserShmemObject;
> > +
> > +struct VirtioSharedMemoryMapping {
> > +    MemoryRegion *mem;
> > +    hwaddr offset;
> > +    QTAILQ_ENTRY(VirtioSharedMemoryMapping) link;
> > +    struct VhostUserShmemObject *shmem_obj; /* Intermediate parent object */
> > +};
> > +
> > +typedef struct VirtioSharedMemoryMapping VirtioSharedMemoryMapping;
> > +
> > +struct VirtioSharedMemory {
> > +    uint8_t shmid;
> > +    MemoryRegion mr;
> > +    QTAILQ_HEAD(, VirtioSharedMemoryMapping) mmaps;
> > +    QSIMPLEQ_ENTRY(VirtioSharedMemory) entry;
> > +};
>
> VirtioSharedMemoryMapping and VirtioSharedMemory duplicate information
> from VhostUserShmemObject (shmid, memory region pointers, offsets). This
> makes the relationship between VIRTIO and vhost-user code confusing.
>
> I wonder if VhostUserShmemObject is specific to the vhost-user protocol
> or if any VIRTIO device implementation that needs a VIRTIO Shared Memory
> Region with an fd, offset, etc should be able to use it? If yes, then it
> should be renamed and made part of the core hw/virtio/ code rather than
> vhost-user.
>
> > +
> > +typedef struct VirtioSharedMemory VirtioSharedMemory;
> > +
> >  /**
> >   * struct VirtIODevice - common VirtIO structure
> >   * @name: name of the device
> > @@ -167,6 +187,8 @@ struct VirtIODevice
> >       */
> >      EventNotifier config_notifier;
> >      bool device_iotlb_enabled;
> > +    /* Shared memory region for mappings. */
> > +    QSIMPLEQ_HEAD(, VirtioSharedMemory) shmem_list;
> >  };
> >
> >  struct VirtioDeviceClass {
> > @@ -295,6 +317,77 @@ void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
> >
> >  int virtio_save(VirtIODevice *vdev, QEMUFile *f);
> >
> > +/**
> > + * virtio_new_shmem_region() - Create a new shared memory region
> > + * @vdev: VirtIODevice
> > + * @shmid: Shared memory ID
> > + *
> > + * Creates a new VirtioSharedMemory region for the given device and ID.
> > + * The returned VirtioSharedMemory is owned by the VirtIODevice and will
> > + * be automatically freed when the device is destroyed. The caller
> > + * should not free the returned pointer.
> > + *
> > + * Returns: Pointer to the new VirtioSharedMemory region, or NULL on failure
> > + */
> > +VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid);
> > +
> > +/**
> > + * virtio_find_shmem_region() - Find an existing shared memory region
> > + * @vdev: VirtIODevice
> > + * @shmid: Shared memory ID to find
> > + *
> > + * Finds an existing VirtioSharedMemory region by ID. The returned pointer
> > + * is owned by the VirtIODevice and should not be freed by the caller.
> > + *
> > + * Returns: Pointer to the VirtioSharedMemory region, or NULL if not found
> > + */
> > +VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid);
> > +
> > +/**
> > + * virtio_add_shmem_map() - Add a memory mapping to a shared region
> > + * @shmem: VirtioSharedMemory region
> > + * @shmem_obj: VhostUserShmemObject to add (takes a reference)
> > + *
> > + * Adds a memory mapping to the shared memory region. The VhostUserShmemObject
> > + * is added as a child of the mapping and will be automatically managed through
> > + * QOM reference counting. The mapping will be removed when
> > + * virtio_del_shmem_map() is called or when the shared memory region is
> > + * destroyed.
> > + *
> > + * Returns: 0 on success, negative errno on failure
> > + */
> > +int virtio_add_shmem_map(VirtioSharedMemory *shmem,
> > +                         struct VhostUserShmemObject *shmem_obj);
>
> This API suggests the answer to my question above about whether
> VhostUserShmemObject is really a core hw/virtio/ concept rather than a
> vhost-user protocol concept is "yes". I think VhostUserShmemObject
> should be renamed and maybe unified with VirtioSharedMemoryMapping.

The answer would yes, you are right. The messages to map/unmap are
vhost-user-specific, but shared memory is a core virtio concept. I
will rename it.

>
> > +
> > +/**
> > + * virtio_find_shmem_map() - Find a memory mapping in a shared region
> > + * @shmem: VirtioSharedMemory region
> > + * @offset: Offset within the shared memory region
> > + * @size: Size of the mapping to find
> > + *
> > + * Finds an existing memory mapping that covers the specified range.
> > + * The returned VirtioSharedMemoryMapping is owned by the VirtioSharedMemory
> > + * region and should not be freed by the caller.
> > + *
> > + * Returns: Pointer to the VirtioSharedMemoryMapping, or NULL if not found
> > + */
> > +VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
> > +                                          hwaddr offset, uint64_t size);
> > +
> > +/**
> > + * virtio_del_shmem_map() - Remove a memory mapping from a shared region
> > + * @shmem: VirtioSharedMemory region
> > + * @offset: Offset of the mapping to remove
> > + * @size: Size of the mapping to remove
> > + *
> > + * Removes a memory mapping from the shared memory region. This will
> > + * automatically unref the associated VhostUserShmemObject, which may
> > + * trigger its finalization and cleanup if no other references exist.
> > + * The mapping's MemoryRegion will be properly unmapped and cleaned up.
> > + */
> > +void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
> > +                          uint64_t size);
> > +
> >  extern const VMStateInfo virtio_vmstate_info;
> >
> >  #define VMSTATE_VIRTIO_DEVICE \
> > diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> > index 9c630c2170..034cbfdc3c 100644
> > --- a/subprojects/libvhost-user/libvhost-user.c
> > +++ b/subprojects/libvhost-user/libvhost-user.c
> > @@ -1592,6 +1592,76 @@ vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN])
> >      return vu_send_message(dev, &msg);
> >  }
> >
> > +bool
> > +vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> > +             uint64_t shm_offset, uint64_t len, uint64_t flags, int fd)
> > +{
> > +    VhostUserMsg vmsg = {
> > +        .request = VHOST_USER_BACKEND_SHMEM_MAP,
> > +        .size = sizeof(vmsg.payload.mmap),
> > +        .flags = VHOST_USER_VERSION,
> > +        .payload.mmap = {
> > +            .shmid = shmid,
> > +            .fd_offset = fd_offset,
> > +            .shm_offset = shm_offset,
> > +            .len = len,
> > +            .flags = flags,
> > +        },
> > +        .fd_num = 1,
> > +        .fds[0] = fd,
> > +    };
> > +
> > +    if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
> > +        return false;
> > +    }
> > +
> > +    if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> > +        vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> > +    }
> > +
> > +    pthread_mutex_lock(&dev->backend_mutex);
> > +    if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> > +        pthread_mutex_unlock(&dev->backend_mutex);
> > +        return false;
> > +    }
> > +
> > +    /* Also unlocks the backend_mutex */
> > +    return vu_process_message_reply(dev, &vmsg);
> > +}
> > +
> > +bool
> > +vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset, uint64_t len)
> > +{
> > +    VhostUserMsg vmsg = {
> > +        .request = VHOST_USER_BACKEND_SHMEM_UNMAP,
> > +        .size = sizeof(vmsg.payload.mmap),
> > +        .flags = VHOST_USER_VERSION,
> > +        .payload.mmap = {
> > +            .shmid = shmid,
> > +            .fd_offset = 0,
> > +            .shm_offset = shm_offset,
> > +            .len = len,
> > +        },
> > +    };
> > +
> > +    if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
> > +        return false;
> > +    }
> > +
> > +    if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> > +        vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> > +    }
> > +
> > +    pthread_mutex_lock(&dev->backend_mutex);
> > +    if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> > +        pthread_mutex_unlock(&dev->backend_mutex);
> > +        return false;
> > +    }
> > +
> > +    /* Also unlocks the backend_mutex */
> > +    return vu_process_message_reply(dev, &vmsg);
> > +}
> > +
> >  static bool
> >  vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
> >  {
> > diff --git a/subprojects/libvhost-user/libvhost-user.h b/subprojects/libvhost-user/libvhost-user.h
> > index 2ffc58c11b..26b710c92d 100644
> > --- a/subprojects/libvhost-user/libvhost-user.h
> > +++ b/subprojects/libvhost-user/libvhost-user.h
> > @@ -69,6 +69,8 @@ enum VhostUserProtocolFeature {
> >      /* Feature 16 is reserved for VHOST_USER_PROTOCOL_F_STATUS. */
> >      /* Feature 17 reserved for VHOST_USER_PROTOCOL_F_XEN_MMAP. */
> >      VHOST_USER_PROTOCOL_F_SHARED_OBJECT = 18,
> > +    /* Feature 19 is reserved for VHOST_USER_PROTOCOL_F_DEVICE_STATE */
> > +    VHOST_USER_PROTOCOL_F_SHMEM = 20,
> >      VHOST_USER_PROTOCOL_F_MAX
> >  };
> >
> > @@ -127,6 +129,8 @@ typedef enum VhostUserBackendRequest {
> >      VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> >      VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> >      VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> > +    VHOST_USER_BACKEND_SHMEM_MAP = 9,
> > +    VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> >      VHOST_USER_BACKEND_MAX
> >  }  VhostUserBackendRequest;
> >
> > @@ -186,6 +190,23 @@ typedef struct VhostUserShared {
> >      unsigned char uuid[UUID_LEN];
> >  } VhostUserShared;
> >
> > +/* For the flags field of VhostUserMMap */
> > +#define VHOST_USER_FLAG_MAP_RW (1u << 0)
> > +
> > +typedef struct {
> > +    /* VIRTIO Shared Memory Region ID */
> > +    uint8_t shmid;
> > +    uint8_t padding[7];
> > +    /* File offset */
> > +    uint64_t fd_offset;
> > +    /* Offset within the VIRTIO Shared Memory Region */
> > +    uint64_t shm_offset;
> > +    /* Size of the mapping */
> > +    uint64_t len;
> > +    /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
> > +    uint16_t flags;
> > +} VhostUserMMap;
> > +
> >  #define VU_PACKED __attribute__((packed))
> >
> >  typedef struct VhostUserMsg {
> > @@ -210,6 +231,7 @@ typedef struct VhostUserMsg {
> >          VhostUserVringArea area;
> >          VhostUserInflight inflight;
> >          VhostUserShared object;
> > +        VhostUserMMap mmap;
> >      } payload;
> >
> >      int fds[VHOST_MEMORY_BASELINE_NREGIONS];
> > @@ -593,6 +615,38 @@ bool vu_add_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
> >   */
> >  bool vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
> >
> > +/**
> > + * vu_shmem_map:
> > + * @dev: a VuDev context
> > + * @shmid: VIRTIO Shared Memory Region ID
> > + * @fd_offset: File offset
> > + * @shm_offset: Offset within the VIRTIO Shared Memory Region
> > + * @len: Size of the mapping
> > + * @flags: Flags for the mmap operation
> > + * @fd: A file descriptor
> > + *
> > + * Advertises a new mapping to be made in a given VIRTIO Shared Memory Region.
> > + *
> > + * Returns: TRUE on success, FALSE on failure.
> > + */
> > +bool vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> > +                  uint64_t shm_offset, uint64_t len, uint64_t flags, int fd);
> > +
> > +/**
> > + * vu_shmem_unmap:
> > + * @dev: a VuDev context
> > + * @shmid: VIRTIO Shared Memory Region ID
> > + * @fd_offset: File offset
> > + * @len: Size of the mapping
> > + *
> > + * The front-end un-mmaps a given range in the VIRTIO Shared Memory Region
> > + * with the requested `shmid`.
> > + *
> > + * Returns: TRUE on success, FALSE on failure.
> > + */
> > +bool vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset,
> > +                    uint64_t len);
> > +
> >  /**
> >   * vu_queue_set_notification:
> >   * @dev: a VuDev context
> > --
> > 2.49.0
> >



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 1/8] vhost-user: Add VirtIO Shared Memory map request
  2025-08-19 12:47     ` Albert Esteve
@ 2025-08-19 12:56       ` Albert Esteve
  0 siblings, 0 replies; 22+ messages in thread
From: Albert Esteve @ 2025-08-19 12:56 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, david, Michael S. Tsirkin, hi, jasowang,
	Laurent Vivier, dbassey, Stefano Garzarella, Paolo Bonzini,
	stevensd, Fabiano Rosas, Alex Bennée, slp

On Tue, Aug 19, 2025 at 2:47 PM Albert Esteve <aesteve@redhat.com> wrote:
>
> On Mon, Aug 18, 2025 at 10:41 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >
> > On Mon, Aug 18, 2025 at 12:03:46PM +0200, Albert Esteve wrote:
> >
> > (I haven't fully reviewed this yet, but here are my current comments.)
> >
> > > Add SHMEM_MAP/UNMAP requests to vhost-user for
> > > dynamic management of VIRTIO Shared Memory mappings.
> > >
> > > This implementation introduces VhostUserShmemObject
> > > as an intermediate QOM parent for MemoryRegions
> > > created for SHMEM_MAP requests. This object
> > > provides reference-counted lifecycle management
> > > with automatic cleanup.
> > >
> > > This request allows backends to dynamically map
> > > file descriptors into a VIRTIO Shared Memory
> > > Regions identified by their shmid. Maps are created
> > > using memory_region_init_ram_device_ptr() with
> > > configurable read/write permissions, and the resulting
> > > MemoryRegions are added as subregions to the shmem
> > > container region. The mapped memory is then advertised
> > > to the guest VIRTIO drivers as a base address plus
> > > offset for reading and writting according
> > > to the requested mmap flags.
> > >
> > > The backend can unmap memory ranges within a given
> > > VIRTIO Shared Memory Region to free resources.
> > > Upon receiving this message, the frontend removes
> > > the MemoryRegion as a subregion and automatically
> > > unreferences the associated VhostUserShmemObject,
> > > triggering cleanup if no other references exist.
> > >
> > > Error handling has been improved to ensure consistent
> > > behavior across handlers that manage their own
> > > vhost_user_send_resp() calls. Since these handlers
> > > clear the VHOST_USER_NEED_REPLY_MASK flag, explicit
> > > error checking ensures proper connection closure on
> > > failures, maintaining the expected error flow.
> > >
> > > Note the memory region commit for these
> > > operations needs to be delayed until after we
> > > respond to the backend to avoid deadlocks.
> > >
> > > Signed-off-by: Albert Esteve <aesteve@redhat.com>
> > > ---
> > >  hw/virtio/meson.build                     |   1 +
> > >  hw/virtio/vhost-user-shmem.c              | 134 ++++++++++++++
> > >  hw/virtio/vhost-user.c                    | 207 +++++++++++++++++++++-
> > >  hw/virtio/virtio.c                        | 109 ++++++++++++
> > >  include/hw/virtio/vhost-user-shmem.h      |  75 ++++++++
> > >  include/hw/virtio/virtio.h                |  93 ++++++++++
> > >  subprojects/libvhost-user/libvhost-user.c |  70 ++++++++
> > >  subprojects/libvhost-user/libvhost-user.h |  54 ++++++
> > >  8 files changed, 741 insertions(+), 2 deletions(-)
> > >  create mode 100644 hw/virtio/vhost-user-shmem.c
> > >  create mode 100644 include/hw/virtio/vhost-user-shmem.h
> > >
> > > diff --git a/hw/virtio/meson.build b/hw/virtio/meson.build
> > > index 3ea7b3cec8..5efcf70b75 100644
> > > --- a/hw/virtio/meson.build
> > > +++ b/hw/virtio/meson.build
> > > @@ -20,6 +20,7 @@ if have_vhost
> > >      # fixme - this really should be generic
> > >      specific_virtio_ss.add(files('vhost-user.c'))
> > >      system_virtio_ss.add(files('vhost-user-base.c'))
> > > +    system_virtio_ss.add(files('vhost-user-shmem.c'))
> > >
> > >      # MMIO Stubs
> > >      system_virtio_ss.add(files('vhost-user-device.c'))
> > > diff --git a/hw/virtio/vhost-user-shmem.c b/hw/virtio/vhost-user-shmem.c
> > > new file mode 100644
> > > index 0000000000..1d763b56b6
> > > --- /dev/null
> > > +++ b/hw/virtio/vhost-user-shmem.c
> > > @@ -0,0 +1,134 @@
> > > +/*
> > > + * VHost-user Shared Memory Object
> > > + *
> > > + * Copyright Red Hat, Inc. 2025
> > > + *
> > > + * Authors:
> > > + *     Albert Esteve <aesteve@redhat.com>
> > > + *
> > > + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> > > + * See the COPYING file in the top-level directory.
> > > + */
> > > +
> > > +#include "qemu/osdep.h"
> > > +#include "hw/virtio/vhost-user-shmem.h"
> > > +#include "system/memory.h"
> > > +#include "qapi/error.h"
> > > +#include "qemu/error-report.h"
> > > +#include "trace.h"
> > > +
> > > +/**
> > > + * VhostUserShmemObject
> > > + *
> > > + * An intermediate QOM object that manages individual shared memory mappings
> > > + * created by VHOST_USER_BACKEND_SHMEM_MAP requests. It acts as a parent for
> > > + * MemoryRegion objects, providing proper lifecycle management with reference
> > > + * counting. When the object is unreferenced and its reference count drops
> > > + * to zero, it automatically cleans up the MemoryRegion and unmaps the memory.
> > > + */
> > > +
> > > +static void vhost_user_shmem_object_finalize(Object *obj);
> > > +static void vhost_user_shmem_object_instance_init(Object *obj);
> > > +
> > > +static const TypeInfo vhost_user_shmem_object_info = {
> > > +    .name = TYPE_VHOST_USER_SHMEM_OBJECT,
> > > +    .parent = TYPE_OBJECT,
> > > +    .instance_size = sizeof(VhostUserShmemObject),
> > > +    .instance_init = vhost_user_shmem_object_instance_init,
> > > +    .instance_finalize = vhost_user_shmem_object_finalize,
> > > +};
> > > +
> > > +static void vhost_user_shmem_object_instance_init(Object *obj)
> > > +{
> > > +    VhostUserShmemObject *shmem_obj = VHOST_USER_SHMEM_OBJECT(obj);
> > > +
> > > +    shmem_obj->shmid = 0;
> > > +    shmem_obj->fd = -1;
> > > +    shmem_obj->shm_offset = 0;
> > > +    shmem_obj->len = 0;
> > > +    shmem_obj->mr = NULL;
> > > +}
> > > +
> > > +static void vhost_user_shmem_object_finalize(Object *obj)
> > > +{
> > > +    VhostUserShmemObject *shmem_obj = VHOST_USER_SHMEM_OBJECT(obj);
> > > +
> > > +    /* Clean up MemoryRegion if it exists */
> > > +    if (shmem_obj->mr) {
> > > +        /* Unparent the MemoryRegion to trigger cleanup */
> > > +        object_unparent(OBJECT(shmem_obj->mr));
> > > +        shmem_obj->mr = NULL;
> > > +    }
> > > +
> > > +    /* Close file descriptor */
> > > +    if (shmem_obj->fd >= 0) {
> > > +        close(shmem_obj->fd);
> > > +        shmem_obj->fd = -1;
> > > +    }
> > > +}
> > > +
> > > +VhostUserShmemObject *vhost_user_shmem_object_new(uint8_t shmid,
> > > +                                                   int fd,
> > > +                                                   uint64_t fd_offset,
> > > +                                                   uint64_t shm_offset,
> > > +                                                   uint64_t len,
> > > +                                                   uint16_t flags)
> > > +{
> > > +    VhostUserShmemObject *shmem_obj;
> > > +    MemoryRegion *mr;
> > > +    g_autoptr(GString) mr_name = g_string_new(NULL);
> > > +    uint32_t ram_flags;
> > > +    Error *local_err = NULL;
> > > +
> > > +    if (len == 0) {
> > > +        error_report("Shared memory mapping size cannot be zero");
> > > +        return NULL;
> > > +    }
> > > +
> > > +    fd = dup(fd);
> > > +    if (fd < 0) {
> > > +        error_report("Failed to duplicate fd: %s", strerror(errno));
> > > +        return NULL;
> > > +    }
> > > +
> > > +    /* Determine RAM flags */
> > > +    ram_flags = RAM_SHARED;
> > > +    if (!(flags & VHOST_USER_FLAG_MAP_RW)) {
> > > +        ram_flags |= RAM_READONLY_FD;
> > > +    }
> > > +
> > > +    /* Create the VhostUserShmemObject */
> > > +    shmem_obj = VHOST_USER_SHMEM_OBJECT(
> > > +        object_new(TYPE_VHOST_USER_SHMEM_OBJECT));
> > > +
> > > +    /* Set up object properties */
> > > +    shmem_obj->shmid = shmid;
> > > +    shmem_obj->fd = fd;
> > > +    shmem_obj->shm_offset = shm_offset;
> > > +    shmem_obj->len = len;
> > > +
> > > +    /* Create MemoryRegion as a child of this object */
> > > +    mr = g_new0(MemoryRegion, 1);
> > > +    g_string_printf(mr_name, "vhost-user-shmem-%d-%" PRIx64, shmid, shm_offset);
> > > +
> > > +    /* Initialize MemoryRegion with file descriptor */
> > > +    if (!memory_region_init_ram_from_fd(mr, OBJECT(shmem_obj), mr_name->str,
> > > +                                        len, ram_flags, fd, fd_offset,
> > > +                                        &local_err)) {
> > > +        error_report_err(local_err);
> > > +        g_free(mr);
> > > +        close(fd);
> > > +        object_unref(OBJECT(shmem_obj));
> > > +        return NULL;
> > > +    }
> > > +
> > > +    shmem_obj->mr = mr;
> > > +    return shmem_obj;
> > > +}
> > > +
> > > +static void vhost_user_shmem_register_types(void)
> > > +{
> > > +    type_register_static(&vhost_user_shmem_object_info);
> > > +}
> > > +
> > > +type_init(vhost_user_shmem_register_types)
> > > diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > > index 1e1d6b0d6e..eb3ad728b0 100644
> > > --- a/hw/virtio/vhost-user.c
> > > +++ b/hw/virtio/vhost-user.c
> > > @@ -11,6 +11,7 @@
> > >  #include "qemu/osdep.h"
> > >  #include "qapi/error.h"
> > >  #include "hw/virtio/virtio-dmabuf.h"
> > > +#include "hw/virtio/vhost-user-shmem.h"
> > >  #include "hw/virtio/vhost.h"
> > >  #include "hw/virtio/virtio-crypto.h"
> > >  #include "hw/virtio/vhost-user.h"
> > > @@ -115,6 +116,8 @@ typedef enum VhostUserBackendRequest {
> > >      VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> > >      VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> > >      VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> > > +    VHOST_USER_BACKEND_SHMEM_MAP = 9,
> > > +    VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> > >      VHOST_USER_BACKEND_MAX
> > >  }  VhostUserBackendRequest;
> > >
> > > @@ -192,6 +195,23 @@ typedef struct VhostUserShared {
> > >      unsigned char uuid[16];
> > >  } VhostUserShared;
> > >
> > > +/* For the flags field of VhostUserMMap */
> > > +#define VHOST_USER_FLAG_MAP_RW (1u << 0)
> > > +
> > > +typedef struct {
> > > +    /* VIRTIO Shared Memory Region ID */
> > > +    uint8_t shmid;
> > > +    uint8_t padding[7];
> > > +    /* File offset */
> > > +    uint64_t fd_offset;
> > > +    /* Offset within the VIRTIO Shared Memory Region */
> > > +    uint64_t shm_offset;
> > > +    /* Size of the mapping */
> > > +    uint64_t len;
> > > +    /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
> > > +    uint16_t flags;
> > > +} VhostUserMMap;
> > > +
> > >  typedef struct {
> > >      VhostUserRequest request;
> > >
> > > @@ -224,6 +244,7 @@ typedef union {
> > >          VhostUserInflight inflight;
> > >          VhostUserShared object;
> > >          VhostUserTransferDeviceState transfer_state;
> > > +        VhostUserMMap mmap;
> > >  } VhostUserPayload;
> > >
> > >  typedef struct VhostUserMsg {
> > > @@ -1768,6 +1789,172 @@ vhost_user_backend_handle_shared_object_lookup(struct vhost_user *u,
> > >      return 0;
> > >  }
> > >
> > > +/**
> > > + * vhost_user_backend_handle_shmem_map() - Handle SHMEM_MAP backend request
> > > + * @dev: vhost device
> > > + * @ioc: QIOChannel for communication
> > > + * @hdr: vhost-user message header
> > > + * @payload: message payload containing mapping details
> > > + * @fd: file descriptor for the shared memory region
> > > + *
> > > + * Handles VHOST_USER_BACKEND_SHMEM_MAP requests from the backend. Creates
> > > + * a VhostUserShmemObject to manage the shared memory mapping and adds it
> > > + * to the appropriate VirtIO shared memory region. The VhostUserShmemObject
> > > + * serves as an intermediate parent for the MemoryRegion, ensuring proper
> > > + * lifecycle management with reference counting.
> > > + *
> > > + * Returns: 0 on success, negative errno on failure
> > > + */
> > > +static int
> > > +vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> > > +                                    QIOChannel *ioc,
> > > +                                    VhostUserHeader *hdr,
> > > +                                    VhostUserPayload *payload,
> > > +                                    int fd)
> > > +{
> > > +    VirtioSharedMemory *shmem;
> > > +    VhostUserMMap *vu_mmap = &payload->mmap;
> > > +    Error *local_err = NULL;
> > > +    g_autoptr(GString) shm_name = g_string_new(NULL);
> > > +
> > > +    if (fd < 0) {
> > > +        error_report("Bad fd for map");
> > > +        return -EBADF;
> > > +    }
> > > +
> > > +    if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
> > > +        error_report("Device has no VIRTIO Shared Memory Regions. "
> > > +                     "Requested ID: %d", vu_mmap->shmid);
> > > +        return -EFAULT;
> > > +    }
> > > +
> > > +    shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
> > > +    if (!shmem) {
> > > +        error_report("VIRTIO Shared Memory Region at "
> > > +                     "ID %d not found or unitialized", vu_mmap->shmid);
> > > +        return -EFAULT;
> > > +    }
> > > +
> > > +    if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> > > +        (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
> > > +        error_report("Bad offset/len for mmap %" PRIx64 "+%" PRIx64,
> > > +                     vu_mmap->shm_offset, vu_mmap->len);
> > > +        return -EFAULT;
> > > +    }
> > > +
> > > +    g_string_printf(shm_name, "virtio-shm%i-%lu",
> > > +                    vu_mmap->shmid, vu_mmap->shm_offset);
> > > +
> > > +    memory_region_transaction_begin();
> > > +
> > > +    /* Create VhostUserShmemObject as intermediate parent for MemoryRegion */
> > > +    VhostUserShmemObject *shmem_obj = vhost_user_shmem_object_new(
> > > +        vu_mmap->shmid, fd, vu_mmap->fd_offset, vu_mmap->shm_offset,
> > > +        vu_mmap->len, vu_mmap->flags);
> > > +
> > > +    if (!shmem_obj) {
> > > +        memory_region_transaction_commit();
> > > +        return -EFAULT;
> > > +    }
> > > +
> > > +    /* Add the mapping using our VhostUserShmemObject as the parent */
> > > +    if (virtio_add_shmem_map(shmem, shmem_obj) != 0) {
> > > +        error_report("Failed to add shared memory mapping");
> > > +        object_unref(OBJECT(shmem_obj));
> > > +        memory_region_transaction_commit();
> > > +        return -EFAULT;
> > > +    }
> > > +
> > > +    if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> > > +        payload->u64 = 0;
> > > +        hdr->size = sizeof(payload->u64);
> > > +        vhost_user_send_resp(ioc, hdr, payload, &local_err);
> > > +        if (local_err) {
> > > +            error_report_err(local_err);
> > > +            memory_region_transaction_commit();
> > > +            return -EFAULT;
> > > +        }
> > > +    }
> > > +
> > > +    memory_region_transaction_commit();
> > > +
> > > +    return 0;
> > > +}
> > > +
> > > +/**
> > > + * vhost_user_backend_handle_shmem_unmap() - Handle SHMEM_UNMAP backend request
> > > + * @dev: vhost device
> > > + * @ioc: QIOChannel for communication
> > > + * @hdr: vhost-user message header
> > > + * @payload: message payload containing unmapping details
> > > + *
> > > + * Handles VHOST_USER_BACKEND_SHMEM_UNMAP requests from the backend. Removes
> > > + * the specified memory mapping from the VirtIO shared memory region. This
> > > + * automatically unreferences the associated VhostUserShmemObject, which may
> > > + * trigger its finalization and cleanup (munmap, close fd) if no other
> > > + * references exist.
> > > + *
> > > + * Returns: 0 on success, negative errno on failure
> > > + */
> > > +static int
> > > +vhost_user_backend_handle_shmem_unmap(struct vhost_dev *dev,
> > > +                                      QIOChannel *ioc,
> > > +                                      VhostUserHeader *hdr,
> > > +                                      VhostUserPayload *payload)
> > > +{
> > > +    VirtioSharedMemory *shmem;
> > > +    VirtioSharedMemoryMapping *mmap = NULL;
> > > +    VhostUserMMap *vu_mmap = &payload->mmap;
> > > +    Error *local_err = NULL;
> > > +
> > > +    if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
> > > +        error_report("Device has no VIRTIO Shared Memory Regions. "
> > > +                     "Requested ID: %d", vu_mmap->shmid);
> > > +        return -EFAULT;
> > > +    }
> > > +
> > > +    shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
> > > +    if (!shmem) {
> > > +        error_report("VIRTIO Shared Memory Region at "
> > > +                     "ID %d not found or unitialized", vu_mmap->shmid);
> > > +        return -EFAULT;
> > > +    }
> > > +
> > > +    if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> > > +        (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
> > > +        error_report("Bad offset/len for unmmap %" PRIx64 "+%" PRIx64,
> > > +                     vu_mmap->shm_offset, vu_mmap->len);
> > > +        return -EFAULT;
> > > +    }
> > > +
> > > +    mmap = virtio_find_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
> > > +    if (!mmap) {
> > > +        error_report("Shared memory mapping not found at offset %" PRIx64
> > > +                     " with length %" PRIx64,
> > > +                     vu_mmap->shm_offset, vu_mmap->len);
> > > +        return -EFAULT;
> > > +    }
> > > +
> > > +    memory_region_transaction_begin();
> > > +    memory_region_del_subregion(&shmem->mr, mmap->mem);
> > > +    if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> > > +        payload->u64 = 0;
> > > +        hdr->size = sizeof(payload->u64);
> > > +        vhost_user_send_resp(ioc, hdr, payload, &local_err);
> > > +        if (local_err) {
> > > +            error_report_err(local_err);
> > > +            memory_region_transaction_commit();
> > > +            return -EFAULT;
> > > +        }
> > > +    }
> > > +    memory_region_transaction_commit();
> > > +
> > > +    /* Free the MemoryRegion only after vhost_commit */
> > > +    virtio_del_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
> > > +
> > > +    return 0;
> > > +}
> > > +
> > >  static void close_backend_channel(struct vhost_user *u)
> > >  {
> > >      g_source_destroy(u->backend_src);
> > > @@ -1833,8 +2020,24 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
> > >                                                               &payload.object);
> > >          break;
> > >      case VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP:
> > > -        ret = vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
> > > -                                                             &hdr, &payload);
> > > +        /* Handler manages its own response, check error and close connection */
> > > +        if (vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
> > > +                                                           &hdr, &payload) < 0) {
> > > +            goto err;
> > > +        }
> > > +        break;
> > > +    case VHOST_USER_BACKEND_SHMEM_MAP:
> > > +        /* Handler manages its own response, check error and close connection */
> > > +        if (vhost_user_backend_handle_shmem_map(dev, ioc, &hdr, &payload,
> > > +                                                fd ? fd[0] : -1) < 0) {
> > > +            goto err;
> > > +        }
> > > +        break;
> > > +    case VHOST_USER_BACKEND_SHMEM_UNMAP:
> > > +        /* Handler manages its own response, check error and close connection */
> > > +        if (vhost_user_backend_handle_shmem_unmap(dev, ioc, &hdr, &payload) < 0) {
> > > +            goto err;
> > > +        }
> > >          break;
> > >      default:
> > >          error_report("Received unexpected msg type: %d.", hdr.request);
> > > diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> > > index 9a81ad912e..1ead5f653f 100644
> > > --- a/hw/virtio/virtio.c
> > > +++ b/hw/virtio/virtio.c
> > > @@ -14,6 +14,7 @@
> > >  #include "qemu/osdep.h"
> > >  #include "qapi/error.h"
> > >  #include "qapi/qapi-commands-virtio.h"
> > > +#include "hw/virtio/vhost-user-shmem.h"
> > >  #include "trace.h"
> > >  #include "qemu/defer-call.h"
> > >  #include "qemu/error-report.h"
> > > @@ -3045,6 +3046,100 @@ int virtio_save(VirtIODevice *vdev, QEMUFile *f)
> > >      return vmstate_save_state(f, &vmstate_virtio, vdev, NULL);
> > >  }
> > >
> > > +VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid)
> > > +{
> > > +    VirtioSharedMemory *elem;
> > > +    g_autofree char *name = NULL;
> > > +
> > > +    elem = g_new0(VirtioSharedMemory, 1);
> > > +    elem->shmid = shmid;
> > > +
> > > +    /* Initialize embedded MemoryRegion as container for shmem mappings */
> > > +    name = g_strdup_printf("virtio-shmem-%d", shmid);
> > > +    memory_region_init(&elem->mr, OBJECT(vdev), name, UINT64_MAX);
> > > +    QTAILQ_INIT(&elem->mmaps);
> > > +    QSIMPLEQ_INSERT_TAIL(&vdev->shmem_list, elem, entry);
> > > +    return QSIMPLEQ_LAST(&vdev->shmem_list, VirtioSharedMemory, entry);
> >
> > "return elem;" is simpler.
>
> This sounds familiar. I hope is not a change that got lost, but simply
> that I changed
> it somewhere else but I repeated the pattern here. I will change it.
>
> >
> > > +}
> > > +
> > > +VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid)
> > > +{
> > > +    VirtioSharedMemory *shmem, *next;
> > > +    QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
> > > +        if (shmem->shmid == shmid) {
> > > +            return shmem;
> > > +        }
> > > +    }
> > > +    return NULL;
> > > +}
> > > +
> > > +int virtio_add_shmem_map(VirtioSharedMemory *shmem,
> > > +                         VhostUserShmemObject *shmem_obj)
> > > +{
> > > +    VirtioSharedMemoryMapping *mmap;
> > > +    if (!shmem_obj) {
> > > +        error_report("VhostUserShmemObject cannot be NULL");
> > > +        return -1;
> > > +    }
> > > +    if (!shmem_obj->mr) {
> > > +        error_report("VhostUserShmemObject has no MemoryRegion");
> > > +        return -1;
> > > +    }
> > > +
> > > +    /* Validate boundaries against the VIRTIO shared memory region */
> > > +    if (shmem_obj->shm_offset + shmem_obj->len > shmem->mr.size) {
> >
> > From above:
> >
> >   memory_region_init(&elem->mr, OBJECT(vdev), name, UINT64_MAX);
> >
> > shmem->mr's size is UINT64_MAX and this if statement doesn't handle
> > integer overflow. What is the purpose of this size check?
> >
> > > +        error_report("Memory exceeds the shared memory boundaries");
> > > +        return -1;
> > > +    }
> > > +
> > > +    /* Create the VirtioSharedMemoryMapping wrapper */
> > > +    mmap = g_new0(VirtioSharedMemoryMapping, 1);
> > > +    mmap->mem = shmem_obj->mr;
> > > +    mmap->offset = shmem_obj->shm_offset;
> > > +    mmap->shmem_obj = shmem_obj;
> > > +
> > > +    /* Take a reference on the VhostUserShmemObject */
> > > +    object_ref(OBJECT(shmem_obj));
> >
> > Why is the reference count incremented here? The caller seems to pass
> > ownership to this function...at least
> > vhost_user_backend_handle_shmem_map() doesn't touch shmem_obj afterwards
> > and doesn't unref it.
>
> You are right. transfering ownership is better and does not require
> the count increment. I'll remove this.
>
> >
> > > +
> > > +    /* Add as subregion to the VIRTIO shared memory */
> > > +    memory_region_add_subregion(&shmem->mr, mmap->offset, mmap->mem);
> > > +
> > > +    /* Add to the mapped regions list */
> > > +    QTAILQ_INSERT_TAIL(&shmem->mmaps, mmap, link);
> > > +
> > > +    return 0;
> > > +}
> > > +
> > > +VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
> > > +                                          hwaddr offset, uint64_t size)
> > > +{
> > > +    VirtioSharedMemoryMapping *mmap;
> > > +    QTAILQ_FOREACH(mmap, &shmem->mmaps, link) {
> > > +        if (mmap->offset == offset && mmap->mem->size == size) {
> > > +            return mmap;
> > > +        }
> > > +    }
> > > +    return NULL;
> > > +}
> > > +
> > > +void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
> > > +                          uint64_t size)
> > > +{
> > > +    VirtioSharedMemoryMapping *mmap = virtio_find_shmem_map(shmem, offset, size);
> > > +    if (mmap == NULL) {
> > > +        return;
> > > +    }
> > > +
> > > +    /*
> > > +     * Unref the VhostUserShmemObject which will trigger automatic cleanup
> > > +     * when the reference count reaches zero.
> > > +     */
> > > +    object_unref(OBJECT(mmap->shmem_obj));
> > > +
> > > +    QTAILQ_REMOVE(&shmem->mmaps, mmap, link);
> > > +    g_free(mmap);
> > > +}
> > > +
> > >  /* A wrapper for use as a VMState .put function */
> > >  static int virtio_device_put(QEMUFile *f, void *opaque, size_t size,
> > >                                const VMStateField *field, JSONWriter *vmdesc)
> > > @@ -3521,6 +3616,7 @@ void virtio_init(VirtIODevice *vdev, uint16_t device_id, size_t config_size)
> > >              NULL, virtio_vmstate_change, vdev);
> > >      vdev->device_endian = virtio_default_endian();
> > >      vdev->use_guest_notifier_mask = true;
> > > +    QSIMPLEQ_INIT(&vdev->shmem_list);
> > >  }
> > >
> > >  /*
> > > @@ -4032,11 +4128,24 @@ static void virtio_device_free_virtqueues(VirtIODevice *vdev)
> > >  static void virtio_device_instance_finalize(Object *obj)
> > >  {
> > >      VirtIODevice *vdev = VIRTIO_DEVICE(obj);
> > > +    VirtioSharedMemory *shmem;
> > >
> > >      virtio_device_free_virtqueues(vdev);
> > >
> > >      g_free(vdev->config);
> > >      g_free(vdev->vector_queues);
> > > +    while (!QSIMPLEQ_EMPTY(&vdev->shmem_list)) {
> > > +        shmem = QSIMPLEQ_FIRST(&vdev->shmem_list);
> > > +        while (!QTAILQ_EMPTY(&shmem->mmaps)) {
> > > +            VirtioSharedMemoryMapping *mmap_reg = QTAILQ_FIRST(&shmem->mmaps);
> > > +            virtio_del_shmem_map(shmem, mmap_reg->offset, mmap_reg->mem->size);
> > > +        }
> > > +
> > > +        /* Clean up the embedded MemoryRegion */
> > > +        object_unparent(OBJECT(&shmem->mr));
> > > +        QSIMPLEQ_REMOVE_HEAD(&vdev->shmem_list, entry);
> > > +        g_free(shmem);
> > > +    }
> > >  }
> > >
> > >  static const Property virtio_properties[] = {
> > > diff --git a/include/hw/virtio/vhost-user-shmem.h b/include/hw/virtio/vhost-user-shmem.h
> > > new file mode 100644
> > > index 0000000000..1f8c7bdc1f
> > > --- /dev/null
> > > +++ b/include/hw/virtio/vhost-user-shmem.h
> > > @@ -0,0 +1,75 @@
> > > +/*
> > > + * VHost-user Shared Memory Object
> > > + *
> > > + * Copyright Red Hat, Inc. 2025
> > > + *
> > > + * Authors:
> > > + *     Albert Esteve <aesteve@redhat.com>
> > > + *
> > > + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> > > + * See the COPYING file in the top-level directory.
> > > + */
> > > +
> > > +#ifndef VHOST_USER_SHMEM_H
> > > +#define VHOST_USER_SHMEM_H
> > > +
> > > +#include "qemu/osdep.h"
> > > +#include "qom/object.h"
> > > +#include "system/memory.h"
> > > +#include "qapi/error.h"
> > > +
> > > +/* vhost-user memory mapping flags */
> > > +#define VHOST_USER_FLAG_MAP_RW (1u << 0)
> >
> > This constant is part of the vhost-user protocol. It would be nicer to
> > keep that all in one file instead of spreading protocol definitions
> > across multiple files.
> >
> > In this case you could replace vhost_user_shmem_object_new()'s flags
> > argument with a bool allow_write argument. That way the vhost-user
> > protocol parsing happens in vhost-user.c and not vhost-user-shmem.c.
>
> I'll go for this option.
>
> >
> > Alternatively, you could move the protocol definitions from vhost-user.c
> > into a header file and include them from vhost-user-shmem.c.
> >
> > > +
> > > +#define TYPE_VHOST_USER_SHMEM_OBJECT "vhost-user-shmem"
> > > +OBJECT_DECLARE_SIMPLE_TYPE(VhostUserShmemObject, VHOST_USER_SHMEM_OBJECT)
> > > +
> > > +/**
> > > + * VhostUserShmemObject:
> > > + * @parent: Parent object
> > > + * @shmid: VIRTIO Shared Memory Region ID
> > > + * @fd: File descriptor for the shared memory region
> > > + * @shm_offset: Offset within the VIRTIO Shared Memory Region
> > > + * @len: Size of the mapping
> > > + * @mr: MemoryRegion associated with this shared memory mapping
> > > + *
> > > + * An intermediate QOM object that manages individual shared memory mappings
> > > + * created by VHOST_USER_BACKEND_SHMEM_MAP requests. It acts as a parent for
> > > + * MemoryRegion objects, providing proper lifecycle management with reference
> > > + * counting. When the object is unreferenced and its reference count drops
> > > + * to zero, it automatically cleans up the MemoryRegion and unmaps the memory.
> > > + */
> > > +struct VhostUserShmemObject {
> > > +    Object parent;
> > > +
> > > +    uint8_t shmid;
> > > +    int fd;
> > > +    uint64_t shm_offset;
> > > +    uint64_t len;
> > > +    MemoryRegion *mr;
> > > +};
> > > +
> > > +/**
> > > + * vhost_user_shmem_object_new() - Create a new VhostUserShmemObject
> > > + * @shmid: VIRTIO Shared Memory Region ID
> > > + * @fd: File descriptor for the shared memory
> > > + * @fd_offset: Offset within the file descriptor
> > > + * @shm_offset: Offset within the VIRTIO Shared Memory Region
> > > + * @len: Size of the mapping
> > > + * @flags: Mapping flags (VHOST_USER_FLAG_MAP_*)
> > > + *
> > > + * Creates a new VhostUserShmemObject that manages a shared memory mapping.
> > > + * The object will create a MemoryRegion using memory_region_init_ram_from_fd()
> > > + * as a child object. When the object is finalized, it will automatically
> > > + * clean up the MemoryRegion and close the file descriptor.
> > > + *
> > > + * Return: A new VhostUserShmemObject on success, NULL on error.
> > > + */
> > > +VhostUserShmemObject *vhost_user_shmem_object_new(uint8_t shmid,
> > > +                                                   int fd,
> > > +                                                   uint64_t fd_offset,
> > > +                                                   uint64_t shm_offset,
> > > +                                                   uint64_t len,
> > > +                                                   uint16_t flags);
> > > +
> > > +#endif /* VHOST_USER_SHMEM_H */
> > > diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> > > index c594764f23..a563bbac2c 100644
> > > --- a/include/hw/virtio/virtio.h
> > > +++ b/include/hw/virtio/virtio.h
> > > @@ -98,6 +98,26 @@ enum virtio_device_endian {
> > >      VIRTIO_DEVICE_ENDIAN_BIG,
> > >  };
> > >
> > > +struct VhostUserShmemObject;
> > > +
> > > +struct VirtioSharedMemoryMapping {
> > > +    MemoryRegion *mem;
> > > +    hwaddr offset;
> > > +    QTAILQ_ENTRY(VirtioSharedMemoryMapping) link;
> > > +    struct VhostUserShmemObject *shmem_obj; /* Intermediate parent object */
> > > +};
> > > +
> > > +typedef struct VirtioSharedMemoryMapping VirtioSharedMemoryMapping;
> > > +
> > > +struct VirtioSharedMemory {
> > > +    uint8_t shmid;
> > > +    MemoryRegion mr;
> > > +    QTAILQ_HEAD(, VirtioSharedMemoryMapping) mmaps;
> > > +    QSIMPLEQ_ENTRY(VirtioSharedMemory) entry;
> > > +};
> >
> > VirtioSharedMemoryMapping and VirtioSharedMemory duplicate information
> > from VhostUserShmemObject (shmid, memory region pointers, offsets). This
> > makes the relationship between VIRTIO and vhost-user code confusing.
> >
> > I wonder if VhostUserShmemObject is specific to the vhost-user protocol
> > or if any VIRTIO device implementation that needs a VIRTIO Shared Memory
> > Region with an fd, offset, etc should be able to use it? If yes, then it
> > should be renamed and made part of the core hw/virtio/ code rather than
> > vhost-user.
> >
> > > +
> > > +typedef struct VirtioSharedMemory VirtioSharedMemory;
> > > +
> > >  /**
> > >   * struct VirtIODevice - common VirtIO structure
> > >   * @name: name of the device
> > > @@ -167,6 +187,8 @@ struct VirtIODevice
> > >       */
> > >      EventNotifier config_notifier;
> > >      bool device_iotlb_enabled;
> > > +    /* Shared memory region for mappings. */
> > > +    QSIMPLEQ_HEAD(, VirtioSharedMemory) shmem_list;
> > >  };
> > >
> > >  struct VirtioDeviceClass {
> > > @@ -295,6 +317,77 @@ void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
> > >
> > >  int virtio_save(VirtIODevice *vdev, QEMUFile *f);
> > >
> > > +/**
> > > + * virtio_new_shmem_region() - Create a new shared memory region
> > > + * @vdev: VirtIODevice
> > > + * @shmid: Shared memory ID
> > > + *
> > > + * Creates a new VirtioSharedMemory region for the given device and ID.
> > > + * The returned VirtioSharedMemory is owned by the VirtIODevice and will
> > > + * be automatically freed when the device is destroyed. The caller
> > > + * should not free the returned pointer.
> > > + *
> > > + * Returns: Pointer to the new VirtioSharedMemory region, or NULL on failure
> > > + */
> > > +VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid);
> > > +
> > > +/**
> > > + * virtio_find_shmem_region() - Find an existing shared memory region
> > > + * @vdev: VirtIODevice
> > > + * @shmid: Shared memory ID to find
> > > + *
> > > + * Finds an existing VirtioSharedMemory region by ID. The returned pointer
> > > + * is owned by the VirtIODevice and should not be freed by the caller.
> > > + *
> > > + * Returns: Pointer to the VirtioSharedMemory region, or NULL if not found
> > > + */
> > > +VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid);
> > > +
> > > +/**
> > > + * virtio_add_shmem_map() - Add a memory mapping to a shared region
> > > + * @shmem: VirtioSharedMemory region
> > > + * @shmem_obj: VhostUserShmemObject to add (takes a reference)
> > > + *
> > > + * Adds a memory mapping to the shared memory region. The VhostUserShmemObject
> > > + * is added as a child of the mapping and will be automatically managed through
> > > + * QOM reference counting. The mapping will be removed when
> > > + * virtio_del_shmem_map() is called or when the shared memory region is
> > > + * destroyed.
> > > + *
> > > + * Returns: 0 on success, negative errno on failure
> > > + */
> > > +int virtio_add_shmem_map(VirtioSharedMemory *shmem,
> > > +                         struct VhostUserShmemObject *shmem_obj);
> >
> > This API suggests the answer to my question above about whether
> > VhostUserShmemObject is really a core hw/virtio/ concept rather than a
> > vhost-user protocol concept is "yes". I think VhostUserShmemObject
> > should be renamed and maybe unified with VirtioSharedMemoryMapping.
>
> The answer would yes, you are right. The messages to map/unmap are
> vhost-user-specific, but shared memory is a core virtio concept. I
> will rename it.

Actually I will try to unify with VirtioSharedMemoryMapping. I tried
that already before and I was not successful, but I learned a bit more
of the qemu object patterns. And ultimately will be the best solution.

>
> >
> > > +
> > > +/**
> > > + * virtio_find_shmem_map() - Find a memory mapping in a shared region
> > > + * @shmem: VirtioSharedMemory region
> > > + * @offset: Offset within the shared memory region
> > > + * @size: Size of the mapping to find
> > > + *
> > > + * Finds an existing memory mapping that covers the specified range.
> > > + * The returned VirtioSharedMemoryMapping is owned by the VirtioSharedMemory
> > > + * region and should not be freed by the caller.
> > > + *
> > > + * Returns: Pointer to the VirtioSharedMemoryMapping, or NULL if not found
> > > + */
> > > +VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
> > > +                                          hwaddr offset, uint64_t size);
> > > +
> > > +/**
> > > + * virtio_del_shmem_map() - Remove a memory mapping from a shared region
> > > + * @shmem: VirtioSharedMemory region
> > > + * @offset: Offset of the mapping to remove
> > > + * @size: Size of the mapping to remove
> > > + *
> > > + * Removes a memory mapping from the shared memory region. This will
> > > + * automatically unref the associated VhostUserShmemObject, which may
> > > + * trigger its finalization and cleanup if no other references exist.
> > > + * The mapping's MemoryRegion will be properly unmapped and cleaned up.
> > > + */
> > > +void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
> > > +                          uint64_t size);
> > > +
> > >  extern const VMStateInfo virtio_vmstate_info;
> > >
> > >  #define VMSTATE_VIRTIO_DEVICE \
> > > diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> > > index 9c630c2170..034cbfdc3c 100644
> > > --- a/subprojects/libvhost-user/libvhost-user.c
> > > +++ b/subprojects/libvhost-user/libvhost-user.c
> > > @@ -1592,6 +1592,76 @@ vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN])
> > >      return vu_send_message(dev, &msg);
> > >  }
> > >
> > > +bool
> > > +vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> > > +             uint64_t shm_offset, uint64_t len, uint64_t flags, int fd)
> > > +{
> > > +    VhostUserMsg vmsg = {
> > > +        .request = VHOST_USER_BACKEND_SHMEM_MAP,
> > > +        .size = sizeof(vmsg.payload.mmap),
> > > +        .flags = VHOST_USER_VERSION,
> > > +        .payload.mmap = {
> > > +            .shmid = shmid,
> > > +            .fd_offset = fd_offset,
> > > +            .shm_offset = shm_offset,
> > > +            .len = len,
> > > +            .flags = flags,
> > > +        },
> > > +        .fd_num = 1,
> > > +        .fds[0] = fd,
> > > +    };
> > > +
> > > +    if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
> > > +        return false;
> > > +    }
> > > +
> > > +    if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> > > +        vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> > > +    }
> > > +
> > > +    pthread_mutex_lock(&dev->backend_mutex);
> > > +    if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> > > +        pthread_mutex_unlock(&dev->backend_mutex);
> > > +        return false;
> > > +    }
> > > +
> > > +    /* Also unlocks the backend_mutex */
> > > +    return vu_process_message_reply(dev, &vmsg);
> > > +}
> > > +
> > > +bool
> > > +vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset, uint64_t len)
> > > +{
> > > +    VhostUserMsg vmsg = {
> > > +        .request = VHOST_USER_BACKEND_SHMEM_UNMAP,
> > > +        .size = sizeof(vmsg.payload.mmap),
> > > +        .flags = VHOST_USER_VERSION,
> > > +        .payload.mmap = {
> > > +            .shmid = shmid,
> > > +            .fd_offset = 0,
> > > +            .shm_offset = shm_offset,
> > > +            .len = len,
> > > +        },
> > > +    };
> > > +
> > > +    if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
> > > +        return false;
> > > +    }
> > > +
> > > +    if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> > > +        vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> > > +    }
> > > +
> > > +    pthread_mutex_lock(&dev->backend_mutex);
> > > +    if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> > > +        pthread_mutex_unlock(&dev->backend_mutex);
> > > +        return false;
> > > +    }
> > > +
> > > +    /* Also unlocks the backend_mutex */
> > > +    return vu_process_message_reply(dev, &vmsg);
> > > +}
> > > +
> > >  static bool
> > >  vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
> > >  {
> > > diff --git a/subprojects/libvhost-user/libvhost-user.h b/subprojects/libvhost-user/libvhost-user.h
> > > index 2ffc58c11b..26b710c92d 100644
> > > --- a/subprojects/libvhost-user/libvhost-user.h
> > > +++ b/subprojects/libvhost-user/libvhost-user.h
> > > @@ -69,6 +69,8 @@ enum VhostUserProtocolFeature {
> > >      /* Feature 16 is reserved for VHOST_USER_PROTOCOL_F_STATUS. */
> > >      /* Feature 17 reserved for VHOST_USER_PROTOCOL_F_XEN_MMAP. */
> > >      VHOST_USER_PROTOCOL_F_SHARED_OBJECT = 18,
> > > +    /* Feature 19 is reserved for VHOST_USER_PROTOCOL_F_DEVICE_STATE */
> > > +    VHOST_USER_PROTOCOL_F_SHMEM = 20,
> > >      VHOST_USER_PROTOCOL_F_MAX
> > >  };
> > >
> > > @@ -127,6 +129,8 @@ typedef enum VhostUserBackendRequest {
> > >      VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> > >      VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> > >      VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> > > +    VHOST_USER_BACKEND_SHMEM_MAP = 9,
> > > +    VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> > >      VHOST_USER_BACKEND_MAX
> > >  }  VhostUserBackendRequest;
> > >
> > > @@ -186,6 +190,23 @@ typedef struct VhostUserShared {
> > >      unsigned char uuid[UUID_LEN];
> > >  } VhostUserShared;
> > >
> > > +/* For the flags field of VhostUserMMap */
> > > +#define VHOST_USER_FLAG_MAP_RW (1u << 0)
> > > +
> > > +typedef struct {
> > > +    /* VIRTIO Shared Memory Region ID */
> > > +    uint8_t shmid;
> > > +    uint8_t padding[7];
> > > +    /* File offset */
> > > +    uint64_t fd_offset;
> > > +    /* Offset within the VIRTIO Shared Memory Region */
> > > +    uint64_t shm_offset;
> > > +    /* Size of the mapping */
> > > +    uint64_t len;
> > > +    /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
> > > +    uint16_t flags;
> > > +} VhostUserMMap;
> > > +
> > >  #define VU_PACKED __attribute__((packed))
> > >
> > >  typedef struct VhostUserMsg {
> > > @@ -210,6 +231,7 @@ typedef struct VhostUserMsg {
> > >          VhostUserVringArea area;
> > >          VhostUserInflight inflight;
> > >          VhostUserShared object;
> > > +        VhostUserMMap mmap;
> > >      } payload;
> > >
> > >      int fds[VHOST_MEMORY_BASELINE_NREGIONS];
> > > @@ -593,6 +615,38 @@ bool vu_add_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
> > >   */
> > >  bool vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
> > >
> > > +/**
> > > + * vu_shmem_map:
> > > + * @dev: a VuDev context
> > > + * @shmid: VIRTIO Shared Memory Region ID
> > > + * @fd_offset: File offset
> > > + * @shm_offset: Offset within the VIRTIO Shared Memory Region
> > > + * @len: Size of the mapping
> > > + * @flags: Flags for the mmap operation
> > > + * @fd: A file descriptor
> > > + *
> > > + * Advertises a new mapping to be made in a given VIRTIO Shared Memory Region.
> > > + *
> > > + * Returns: TRUE on success, FALSE on failure.
> > > + */
> > > +bool vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> > > +                  uint64_t shm_offset, uint64_t len, uint64_t flags, int fd);
> > > +
> > > +/**
> > > + * vu_shmem_unmap:
> > > + * @dev: a VuDev context
> > > + * @shmid: VIRTIO Shared Memory Region ID
> > > + * @fd_offset: File offset
> > > + * @len: Size of the mapping
> > > + *
> > > + * The front-end un-mmaps a given range in the VIRTIO Shared Memory Region
> > > + * with the requested `shmid`.
> > > + *
> > > + * Returns: TRUE on success, FALSE on failure.
> > > + */
> > > +bool vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset,
> > > +                    uint64_t len);
> > > +
> > >  /**
> > >   * vu_queue_set_notification:
> > >   * @dev: a VuDev context
> > > --
> > > 2.49.0
> > >



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 6/8] tests/qtest: Add GET_SHMEM validation test
  2025-08-19 12:16     ` Albert Esteve
@ 2025-08-20  8:47       ` Alyssa Ross
  2025-08-20 15:42       ` Stefan Hajnoczi
  2025-08-20 20:33       ` Stefan Hajnoczi
  2 siblings, 0 replies; 22+ messages in thread
From: Alyssa Ross @ 2025-08-20  8:47 UTC (permalink / raw)
  To: Albert Esteve, Stefan Hajnoczi
  Cc: qemu-devel, david, Michael S. Tsirkin, jasowang, Laurent Vivier,
	dbassey, Stefano Garzarella, Paolo Bonzini, stevensd,
	Fabiano Rosas, Alex Bennée, slp

[-- Attachment #1: Type: text/plain, Size: 2182 bytes --]

Albert Esteve <aesteve@redhat.com> writes:

> On Tue, Aug 19, 2025 at 12:42 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>>
>> On Mon, Aug 18, 2025 at 12:03:51PM +0200, Albert Esteve wrote:
>> > Improve vhost-user-test to properly validate
>> > VHOST_USER_GET_SHMEM_CONFIG message handling by
>> > directly simulating the message exchange.
>> >
>> > The test manually triggers the
>> > VHOST_USER_GET_SHMEM_CONFIG message by calling
>> > chr_read() with a crafted VhostUserMsg, allowing direct
>> > validation of the shmem configuration response handler.
>>
>> It looks like this test case invokes its own chr_read() function without
>> going through QEMU, so I don't understand what this is testing?
>
> I spent some time trying to test it, but in the end I could not
> instatiate vhost-user-device because it is non user_creatable. I did
> not find any test for vhost-user-device anywhere else either. But I
> had already added most of the infrastructure here so I fallback to
> chr_read() communication to avoid having to delete everything. My
> though was that once we have other devices that use shared memory,
> they could tweak the test to instantiate the proper device and test
> this and the map/unmap operations.
>
> Although after writing this, I think other devices will actually a
> specific layout for their shared memory. So
> VHOST_USER_GET_SHMEM_CONFIG is only ever going to be used by
> vhost-user-device.

FWIW: I'm not so sure — my non-upstream Cloud Hypervisor frontend for
the crosvm vhost-user GPU device[1] uses the equivalent of
VHOST_USER_GET_SHMEM_CONFIG to allow the backend to choose the size of
the shared memory region, and I could imagine that being something other
devices might want to do too?

[1]: https://spectrum-os.org/software/cloud-hypervisor/

> In general, trying to test this patch series has been a headache other
> than trying with external device code I have. If you have an idea that
> I could try to test this, I can try. Otherwise, probably is best to
> remove this commit from the series and wait for another vhost-user
> device that uses map/unmap to land to be able to test it.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 227 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 6/8] tests/qtest: Add GET_SHMEM validation test
  2025-08-19 12:16     ` Albert Esteve
  2025-08-20  8:47       ` Alyssa Ross
@ 2025-08-20 15:42       ` Stefan Hajnoczi
  2025-08-20 20:33       ` Stefan Hajnoczi
  2 siblings, 0 replies; 22+ messages in thread
From: Stefan Hajnoczi @ 2025-08-20 15:42 UTC (permalink / raw)
  To: Albert Esteve
  Cc: qemu-devel, david, Michael S. Tsirkin, hi, jasowang,
	Laurent Vivier, dbassey, Stefano Garzarella, Paolo Bonzini,
	stevensd, Fabiano Rosas, Alex Bennée, slp

[-- Attachment #1: Type: text/plain, Size: 9553 bytes --]

On Tue, Aug 19, 2025 at 02:16:47PM +0200, Albert Esteve wrote:
> On Tue, Aug 19, 2025 at 12:42 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >
> > On Mon, Aug 18, 2025 at 12:03:51PM +0200, Albert Esteve wrote:
> > > Improve vhost-user-test to properly validate
> > > VHOST_USER_GET_SHMEM_CONFIG message handling by
> > > directly simulating the message exchange.
> > >
> > > The test manually triggers the
> > > VHOST_USER_GET_SHMEM_CONFIG message by calling
> > > chr_read() with a crafted VhostUserMsg, allowing direct
> > > validation of the shmem configuration response handler.
> >
> > It looks like this test case invokes its own chr_read() function without
> > going through QEMU, so I don't understand what this is testing?
> 
> I spent some time trying to test it, but in the end I could not
> instatiate vhost-user-device because it is non user_creatable. I did
> not find any test for vhost-user-device anywhere else either. But I

My understanding is that you need to manually comment out user_creatable
= false in the QEMU source code and recompile. This choice was made
because users were confused with the vhost-user-device and this is an
attempt to hide the device. However, it is a useful device for testing.

> had already added most of the infrastructure here so I fallback to
> chr_read() communication to avoid having to delete everything. My
> though was that once we have other devices that use shared memory,
> they could tweak the test to instantiate the proper device and test
> this and the map/unmap operations.
> 
> Although after writing this, I think other devices will actually a
> specific layout for their shared memory. So
> VHOST_USER_GET_SHMEM_CONFIG is only ever going to be used by
> vhost-user-device.
> 
> In general, trying to test this patch series has been a headache other
> than trying with external device code I have. If you have an idea that
> I could try to test this, I can try. Otherwise, probably is best to
> remove this commit from the series and wait for another vhost-user
> device that uses map/unmap to land to be able to test it.

Here is an idea:
- Extend vhost-user-test.c's chr_read() to handle GET_SHMEM_CONFIG.
- Add -device vhost-user-test-device (which inherits from
  vhost-user-device but sets user_creatable to true) to QEMU.
- Add a shmem test case to vhost-user-test.c that instantiates
  vhost-user-test-device, sends MAP/UNMAP messages, and verifies that
  qtest memory qtest_readb()/qtest_writeb() are able to modify the
  shared memory as intended.

That means vhost-user-test.c provides the vhost-user backend and QEMU
uses vhost-user-test-device to connect. The test case uses qtest to
control QEMU and check that the VIRTIO Shared Memory Regions behave as
intended.

Stefan

> 
> 
> 
> >
> > >
> > > Added TestServerShmem structure to track shmem
> > > configuration state, including nregions_sent and
> > > sizes_sent arrays for comprehensive validation.
> > > The test verifies that the response contains the expected
> > > number of shared memory regions and their corresponding
> > > sizes.
> > >
> > > Signed-off-by: Albert Esteve <aesteve@redhat.com>
> > > ---
> > >  tests/qtest/vhost-user-test.c | 91 +++++++++++++++++++++++++++++++++++
> > >  1 file changed, 91 insertions(+)
> > >
> > > diff --git a/tests/qtest/vhost-user-test.c b/tests/qtest/vhost-user-test.c
> > > index 75cb3e44b2..44a5e90b2e 100644
> > > --- a/tests/qtest/vhost-user-test.c
> > > +++ b/tests/qtest/vhost-user-test.c
> > > @@ -88,6 +88,7 @@ typedef enum VhostUserRequest {
> > >      VHOST_USER_SET_VRING_ENABLE = 18,
> > >      VHOST_USER_GET_CONFIG = 24,
> > >      VHOST_USER_SET_CONFIG = 25,
> > > +    VHOST_USER_GET_SHMEM_CONFIG = 44,
> > >      VHOST_USER_MAX
> > >  } VhostUserRequest;
> > >
> > > @@ -109,6 +110,20 @@ typedef struct VhostUserLog {
> > >      uint64_t mmap_offset;
> > >  } VhostUserLog;
> > >
> > > +#define VIRTIO_MAX_SHMEM_REGIONS 256
> > > +
> > > +typedef struct VhostUserShMemConfig {
> > > +    uint32_t nregions;
> > > +    uint32_t padding;
> > > +    uint64_t memory_sizes[VIRTIO_MAX_SHMEM_REGIONS];
> > > +} VhostUserShMemConfig;
> > > +
> > > +typedef struct TestServerShmem {
> > > +    bool test_enabled;
> > > +    uint32_t nregions_sent;
> > > +    uint64_t sizes_sent[VIRTIO_MAX_SHMEM_REGIONS];
> > > +} TestServerShmem;
> > > +
> > >  typedef struct VhostUserMsg {
> > >      VhostUserRequest request;
> > >
> > > @@ -124,6 +139,7 @@ typedef struct VhostUserMsg {
> > >          struct vhost_vring_addr addr;
> > >          VhostUserMemory memory;
> > >          VhostUserLog log;
> > > +        VhostUserShMemConfig shmem;
> > >      } payload;
> > >  } QEMU_PACKED VhostUserMsg;
> > >
> > > @@ -170,6 +186,7 @@ typedef struct TestServer {
> > >      bool test_fail;
> > >      int test_flags;
> > >      int queues;
> > > +    TestServerShmem shmem;
> > >      struct vhost_user_ops *vu_ops;
> > >  } TestServer;
> > >
> > > @@ -513,6 +530,31 @@ static void chr_read(void *opaque, const uint8_t *buf, int size)
> > >          qos_printf("set_vring(%d)=%s\n", msg.payload.state.index,
> > >                     msg.payload.state.num ? "enabled" : "disabled");
> > >          break;
> > > +
> > > +    case VHOST_USER_GET_SHMEM_CONFIG:
> > > +        if (!s->shmem.test_enabled) {
> > > +            /* Reply with error if shmem feature not enabled */
> > > +            msg.flags |= VHOST_USER_REPLY_MASK;
> > > +            msg.size = sizeof(uint64_t);
> > > +            msg.payload.u64 = -1; /* Error */
> > > +            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
> > > +        } else {
> > > +            /* Reply with test shmem configuration */
> > > +            msg.flags |= VHOST_USER_REPLY_MASK;
> > > +            msg.size = sizeof(VhostUserShMemConfig);
> > > +            msg.payload.shmem.nregions = 2; /* Test with 2 regions */
> > > +            msg.payload.shmem.padding = 0;
> > > +            msg.payload.shmem.memory_sizes[0] = 0x100000; /* 1MB */
> > > +            msg.payload.shmem.memory_sizes[1] = 0x200000; /* 2MB */
> > > +
> > > +            /* Record what we're sending for test validation */
> > > +            s->shmem.nregions_sent = msg.payload.shmem.nregions;
> > > +            s->shmem.sizes_sent[0] = msg.payload.shmem.memory_sizes[0];
> > > +            s->shmem.sizes_sent[1] = msg.payload.shmem.memory_sizes[1];
> > > +
> > > +            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
> > > +        }
> > > +        break;
> > >
> > >      default:
> > >          qos_printf("vhost-user: un-handled message: %d\n", msg.request);
> > > @@ -809,6 +851,22 @@ static void *vhost_user_test_setup_shm(GString *cmd_line, void *arg)
> > >      return server;
> > >  }
> > >
> > > +static void *vhost_user_test_setup_shmem_config(GString *cmd_line, void *arg)
> > > +{
> > > +    TestServer *server = test_server_new("vhost-user-test", arg);
> > > +    test_server_listen(server);
> > > +
> > > +    /* Enable shmem testing for this server */
> > > +    server->shmem.test_enabled = true;
> > > +
> > > +    append_mem_opts(server, cmd_line, 256, TEST_MEMFD_SHM);
> > > +    server->vu_ops->append_opts(server, cmd_line, "");
> > > +
> > > +    g_test_queue_destroy(vhost_user_test_cleanup, server);
> > > +
> > > +    return server;
> > > +}
> > > +
> > >  static void test_read_guest_mem(void *obj, void *arg, QGuestAllocator *alloc)
> > >  {
> > >      TestServer *server = arg;
> > > @@ -1089,6 +1147,33 @@ static struct vhost_user_ops g_vu_net_ops = {
> > >      .get_protocol_features = vu_net_get_protocol_features,
> > >  };
> > >
> > > +/* Test function for VHOST_USER_GET_SHMEM_CONFIG message */
> > > +static void test_shmem_config(void *obj, void *arg, QGuestAllocator *alloc)
> > > +{
> > > +    TestServer *s = arg;
> > > +
> > > +    g_assert_true(s->shmem.test_enabled);
> > > +
> > > +    g_mutex_lock(&s->data_mutex);
> > > +    s->shmem.nregions_sent = 0;
> > > +    s->shmem.sizes_sent[0] = 0;
> > > +    s->shmem.sizes_sent[1] = 0;
> > > +    g_mutex_unlock(&s->data_mutex);
> > > +
> > > +    VhostUserMsg msg = {
> > > +        .request = VHOST_USER_GET_SHMEM_CONFIG,
> > > +        .flags = VHOST_USER_VERSION,
> > > +        .size = 0,
> > > +    };
> > > +    chr_read(s, (uint8_t *) &msg, VHOST_USER_HDR_SIZE);
> > > +
> > > +    g_mutex_lock(&s->data_mutex);
> > > +    g_assert_cmpint(s->shmem.nregions_sent, ==, 2);
> > > +    g_assert_cmpint(s->shmem.sizes_sent[0], ==, 0x100000); /* 1MB */
> > > +    g_assert_cmpint(s->shmem.sizes_sent[1], ==, 0x200000); /* 2MB */
> > > +    g_mutex_unlock(&s->data_mutex);
> > > +}
> > > +
> > >  static void register_vhost_user_test(void)
> > >  {
> > >      QOSGraphTestOptions opts = {
> > > @@ -1136,6 +1221,12 @@ static void register_vhost_user_test(void)
> > >      qos_add_test("vhost-user/multiqueue",
> > >                   "virtio-net",
> > >                   test_multiqueue, &opts);
> > > +
> > > +    opts.before = vhost_user_test_setup_shmem_config;
> > > +    opts.edge.extra_device_opts = "";
> > > +    qos_add_test("vhost-user/shmem-config",
> > > +                 "virtio-net",
> > > +                 test_shmem_config, &opts);
> > >  }
> > >  libqos_init(register_vhost_user_test);
> > >
> > > --
> > > 2.49.0
> > >
> 

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 6/8] tests/qtest: Add GET_SHMEM validation test
  2025-08-19 12:16     ` Albert Esteve
  2025-08-20  8:47       ` Alyssa Ross
  2025-08-20 15:42       ` Stefan Hajnoczi
@ 2025-08-20 20:33       ` Stefan Hajnoczi
  2025-08-21  6:50         ` Albert Esteve
  2 siblings, 1 reply; 22+ messages in thread
From: Stefan Hajnoczi @ 2025-08-20 20:33 UTC (permalink / raw)
  To: Albert Esteve
  Cc: qemu-devel, david, Michael S. Tsirkin, hi, jasowang,
	Laurent Vivier, dbassey, Stefano Garzarella, Paolo Bonzini,
	stevensd, Fabiano Rosas, Alex Bennée, slp

[-- Attachment #1: Type: text/plain, Size: 8775 bytes --]

On Tue, Aug 19, 2025 at 02:16:47PM +0200, Albert Esteve wrote:
> On Tue, Aug 19, 2025 at 12:42 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >
> > On Mon, Aug 18, 2025 at 12:03:51PM +0200, Albert Esteve wrote:
> > > Improve vhost-user-test to properly validate
> > > VHOST_USER_GET_SHMEM_CONFIG message handling by
> > > directly simulating the message exchange.
> > >
> > > The test manually triggers the
> > > VHOST_USER_GET_SHMEM_CONFIG message by calling
> > > chr_read() with a crafted VhostUserMsg, allowing direct
> > > validation of the shmem configuration response handler.
> >
> > It looks like this test case invokes its own chr_read() function without
> > going through QEMU, so I don't understand what this is testing?
> 
> I spent some time trying to test it, but in the end I could not
> instatiate vhost-user-device because it is non user_creatable. I did
> not find any test for vhost-user-device anywhere else either. But I
> had already added most of the infrastructure here so I fallback to
> chr_read() communication to avoid having to delete everything. My
> though was that once we have other devices that use shared memory,
> they could tweak the test to instantiate the proper device and test
> this and the map/unmap operations.
> 
> Although after writing this, I think other devices will actually a
> specific layout for their shared memory. So
> VHOST_USER_GET_SHMEM_CONFIG is only ever going to be used by
> vhost-user-device.
> 
> In general, trying to test this patch series has been a headache other
> than trying with external device code I have. If you have an idea that
> I could try to test this, I can try. Otherwise, probably is best to
> remove this commit from the series and wait for another vhost-user
> device that uses map/unmap to land to be able to test it.

Alex Bennee has renamed vhost-user-device to vhost-user-test-device and
set user_creatable = true:
https://lore.kernel.org/qemu-devel/20250820195632.1956795-1-alex.bennee@linaro.org/T/#t

> 
> 
> 
> >
> > >
> > > Added TestServerShmem structure to track shmem
> > > configuration state, including nregions_sent and
> > > sizes_sent arrays for comprehensive validation.
> > > The test verifies that the response contains the expected
> > > number of shared memory regions and their corresponding
> > > sizes.
> > >
> > > Signed-off-by: Albert Esteve <aesteve@redhat.com>
> > > ---
> > >  tests/qtest/vhost-user-test.c | 91 +++++++++++++++++++++++++++++++++++
> > >  1 file changed, 91 insertions(+)
> > >
> > > diff --git a/tests/qtest/vhost-user-test.c b/tests/qtest/vhost-user-test.c
> > > index 75cb3e44b2..44a5e90b2e 100644
> > > --- a/tests/qtest/vhost-user-test.c
> > > +++ b/tests/qtest/vhost-user-test.c
> > > @@ -88,6 +88,7 @@ typedef enum VhostUserRequest {
> > >      VHOST_USER_SET_VRING_ENABLE = 18,
> > >      VHOST_USER_GET_CONFIG = 24,
> > >      VHOST_USER_SET_CONFIG = 25,
> > > +    VHOST_USER_GET_SHMEM_CONFIG = 44,
> > >      VHOST_USER_MAX
> > >  } VhostUserRequest;
> > >
> > > @@ -109,6 +110,20 @@ typedef struct VhostUserLog {
> > >      uint64_t mmap_offset;
> > >  } VhostUserLog;
> > >
> > > +#define VIRTIO_MAX_SHMEM_REGIONS 256
> > > +
> > > +typedef struct VhostUserShMemConfig {
> > > +    uint32_t nregions;
> > > +    uint32_t padding;
> > > +    uint64_t memory_sizes[VIRTIO_MAX_SHMEM_REGIONS];
> > > +} VhostUserShMemConfig;
> > > +
> > > +typedef struct TestServerShmem {
> > > +    bool test_enabled;
> > > +    uint32_t nregions_sent;
> > > +    uint64_t sizes_sent[VIRTIO_MAX_SHMEM_REGIONS];
> > > +} TestServerShmem;
> > > +
> > >  typedef struct VhostUserMsg {
> > >      VhostUserRequest request;
> > >
> > > @@ -124,6 +139,7 @@ typedef struct VhostUserMsg {
> > >          struct vhost_vring_addr addr;
> > >          VhostUserMemory memory;
> > >          VhostUserLog log;
> > > +        VhostUserShMemConfig shmem;
> > >      } payload;
> > >  } QEMU_PACKED VhostUserMsg;
> > >
> > > @@ -170,6 +186,7 @@ typedef struct TestServer {
> > >      bool test_fail;
> > >      int test_flags;
> > >      int queues;
> > > +    TestServerShmem shmem;
> > >      struct vhost_user_ops *vu_ops;
> > >  } TestServer;
> > >
> > > @@ -513,6 +530,31 @@ static void chr_read(void *opaque, const uint8_t *buf, int size)
> > >          qos_printf("set_vring(%d)=%s\n", msg.payload.state.index,
> > >                     msg.payload.state.num ? "enabled" : "disabled");
> > >          break;
> > > +
> > > +    case VHOST_USER_GET_SHMEM_CONFIG:
> > > +        if (!s->shmem.test_enabled) {
> > > +            /* Reply with error if shmem feature not enabled */
> > > +            msg.flags |= VHOST_USER_REPLY_MASK;
> > > +            msg.size = sizeof(uint64_t);
> > > +            msg.payload.u64 = -1; /* Error */
> > > +            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
> > > +        } else {
> > > +            /* Reply with test shmem configuration */
> > > +            msg.flags |= VHOST_USER_REPLY_MASK;
> > > +            msg.size = sizeof(VhostUserShMemConfig);
> > > +            msg.payload.shmem.nregions = 2; /* Test with 2 regions */
> > > +            msg.payload.shmem.padding = 0;
> > > +            msg.payload.shmem.memory_sizes[0] = 0x100000; /* 1MB */
> > > +            msg.payload.shmem.memory_sizes[1] = 0x200000; /* 2MB */
> > > +
> > > +            /* Record what we're sending for test validation */
> > > +            s->shmem.nregions_sent = msg.payload.shmem.nregions;
> > > +            s->shmem.sizes_sent[0] = msg.payload.shmem.memory_sizes[0];
> > > +            s->shmem.sizes_sent[1] = msg.payload.shmem.memory_sizes[1];
> > > +
> > > +            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
> > > +        }
> > > +        break;
> > >
> > >      default:
> > >          qos_printf("vhost-user: un-handled message: %d\n", msg.request);
> > > @@ -809,6 +851,22 @@ static void *vhost_user_test_setup_shm(GString *cmd_line, void *arg)
> > >      return server;
> > >  }
> > >
> > > +static void *vhost_user_test_setup_shmem_config(GString *cmd_line, void *arg)
> > > +{
> > > +    TestServer *server = test_server_new("vhost-user-test", arg);
> > > +    test_server_listen(server);
> > > +
> > > +    /* Enable shmem testing for this server */
> > > +    server->shmem.test_enabled = true;
> > > +
> > > +    append_mem_opts(server, cmd_line, 256, TEST_MEMFD_SHM);
> > > +    server->vu_ops->append_opts(server, cmd_line, "");
> > > +
> > > +    g_test_queue_destroy(vhost_user_test_cleanup, server);
> > > +
> > > +    return server;
> > > +}
> > > +
> > >  static void test_read_guest_mem(void *obj, void *arg, QGuestAllocator *alloc)
> > >  {
> > >      TestServer *server = arg;
> > > @@ -1089,6 +1147,33 @@ static struct vhost_user_ops g_vu_net_ops = {
> > >      .get_protocol_features = vu_net_get_protocol_features,
> > >  };
> > >
> > > +/* Test function for VHOST_USER_GET_SHMEM_CONFIG message */
> > > +static void test_shmem_config(void *obj, void *arg, QGuestAllocator *alloc)
> > > +{
> > > +    TestServer *s = arg;
> > > +
> > > +    g_assert_true(s->shmem.test_enabled);
> > > +
> > > +    g_mutex_lock(&s->data_mutex);
> > > +    s->shmem.nregions_sent = 0;
> > > +    s->shmem.sizes_sent[0] = 0;
> > > +    s->shmem.sizes_sent[1] = 0;
> > > +    g_mutex_unlock(&s->data_mutex);
> > > +
> > > +    VhostUserMsg msg = {
> > > +        .request = VHOST_USER_GET_SHMEM_CONFIG,
> > > +        .flags = VHOST_USER_VERSION,
> > > +        .size = 0,
> > > +    };
> > > +    chr_read(s, (uint8_t *) &msg, VHOST_USER_HDR_SIZE);
> > > +
> > > +    g_mutex_lock(&s->data_mutex);
> > > +    g_assert_cmpint(s->shmem.nregions_sent, ==, 2);
> > > +    g_assert_cmpint(s->shmem.sizes_sent[0], ==, 0x100000); /* 1MB */
> > > +    g_assert_cmpint(s->shmem.sizes_sent[1], ==, 0x200000); /* 2MB */
> > > +    g_mutex_unlock(&s->data_mutex);
> > > +}
> > > +
> > >  static void register_vhost_user_test(void)
> > >  {
> > >      QOSGraphTestOptions opts = {
> > > @@ -1136,6 +1221,12 @@ static void register_vhost_user_test(void)
> > >      qos_add_test("vhost-user/multiqueue",
> > >                   "virtio-net",
> > >                   test_multiqueue, &opts);
> > > +
> > > +    opts.before = vhost_user_test_setup_shmem_config;
> > > +    opts.edge.extra_device_opts = "";
> > > +    qos_add_test("vhost-user/shmem-config",
> > > +                 "virtio-net",
> > > +                 test_shmem_config, &opts);
> > >  }
> > >  libqos_init(register_vhost_user_test);
> > >
> > > --
> > > 2.49.0
> > >
> 

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 6/8] tests/qtest: Add GET_SHMEM validation test
  2025-08-20 20:33       ` Stefan Hajnoczi
@ 2025-08-21  6:50         ` Albert Esteve
  2025-09-10 11:10           ` Albert Esteve
  0 siblings, 1 reply; 22+ messages in thread
From: Albert Esteve @ 2025-08-21  6:50 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, david, Michael S. Tsirkin, hi, jasowang,
	Laurent Vivier, dbassey, Stefano Garzarella, Paolo Bonzini,
	stevensd, Fabiano Rosas, Alex Bennée, slp

On Wed, Aug 20, 2025 at 10:33 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> On Tue, Aug 19, 2025 at 02:16:47PM +0200, Albert Esteve wrote:
> > On Tue, Aug 19, 2025 at 12:42 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > >
> > > On Mon, Aug 18, 2025 at 12:03:51PM +0200, Albert Esteve wrote:
> > > > Improve vhost-user-test to properly validate
> > > > VHOST_USER_GET_SHMEM_CONFIG message handling by
> > > > directly simulating the message exchange.
> > > >
> > > > The test manually triggers the
> > > > VHOST_USER_GET_SHMEM_CONFIG message by calling
> > > > chr_read() with a crafted VhostUserMsg, allowing direct
> > > > validation of the shmem configuration response handler.
> > >
> > > It looks like this test case invokes its own chr_read() function without
> > > going through QEMU, so I don't understand what this is testing?
> >
> > I spent some time trying to test it, but in the end I could not
> > instatiate vhost-user-device because it is non user_creatable. I did
> > not find any test for vhost-user-device anywhere else either. But I
> > had already added most of the infrastructure here so I fallback to
> > chr_read() communication to avoid having to delete everything. My
> > though was that once we have other devices that use shared memory,
> > they could tweak the test to instantiate the proper device and test
> > this and the map/unmap operations.
> >
> > Although after writing this, I think other devices will actually a
> > specific layout for their shared memory. So
> > VHOST_USER_GET_SHMEM_CONFIG is only ever going to be used by
> > vhost-user-device.
> >
> > In general, trying to test this patch series has been a headache other
> > than trying with external device code I have. If you have an idea that
> > I could try to test this, I can try. Otherwise, probably is best to
> > remove this commit from the series and wait for another vhost-user
> > device that uses map/unmap to land to be able to test it.
>
> Alex Bennee has renamed vhost-user-device to vhost-user-test-device and
> set user_creatable = true:
> https://lore.kernel.org/qemu-devel/20250820195632.1956795-1-alex.bennee@linaro.org/T/#t

Oh, great! Thanks for letting me know.

That allows having a QTest with the vhost-user-test-device available
and run it in piplines if necessary, without manually
changing/recompiling. I'll try to add it to the test again in this
commit.

Thank you, Stefan and Alyssa, for the hints.

>
> >
> >
> >
> > >
> > > >
> > > > Added TestServerShmem structure to track shmem
> > > > configuration state, including nregions_sent and
> > > > sizes_sent arrays for comprehensive validation.
> > > > The test verifies that the response contains the expected
> > > > number of shared memory regions and their corresponding
> > > > sizes.
> > > >
> > > > Signed-off-by: Albert Esteve <aesteve@redhat.com>
> > > > ---
> > > >  tests/qtest/vhost-user-test.c | 91 +++++++++++++++++++++++++++++++++++
> > > >  1 file changed, 91 insertions(+)
> > > >
> > > > diff --git a/tests/qtest/vhost-user-test.c b/tests/qtest/vhost-user-test.c
> > > > index 75cb3e44b2..44a5e90b2e 100644
> > > > --- a/tests/qtest/vhost-user-test.c
> > > > +++ b/tests/qtest/vhost-user-test.c
> > > > @@ -88,6 +88,7 @@ typedef enum VhostUserRequest {
> > > >      VHOST_USER_SET_VRING_ENABLE = 18,
> > > >      VHOST_USER_GET_CONFIG = 24,
> > > >      VHOST_USER_SET_CONFIG = 25,
> > > > +    VHOST_USER_GET_SHMEM_CONFIG = 44,
> > > >      VHOST_USER_MAX
> > > >  } VhostUserRequest;
> > > >
> > > > @@ -109,6 +110,20 @@ typedef struct VhostUserLog {
> > > >      uint64_t mmap_offset;
> > > >  } VhostUserLog;
> > > >
> > > > +#define VIRTIO_MAX_SHMEM_REGIONS 256
> > > > +
> > > > +typedef struct VhostUserShMemConfig {
> > > > +    uint32_t nregions;
> > > > +    uint32_t padding;
> > > > +    uint64_t memory_sizes[VIRTIO_MAX_SHMEM_REGIONS];
> > > > +} VhostUserShMemConfig;
> > > > +
> > > > +typedef struct TestServerShmem {
> > > > +    bool test_enabled;
> > > > +    uint32_t nregions_sent;
> > > > +    uint64_t sizes_sent[VIRTIO_MAX_SHMEM_REGIONS];
> > > > +} TestServerShmem;
> > > > +
> > > >  typedef struct VhostUserMsg {
> > > >      VhostUserRequest request;
> > > >
> > > > @@ -124,6 +139,7 @@ typedef struct VhostUserMsg {
> > > >          struct vhost_vring_addr addr;
> > > >          VhostUserMemory memory;
> > > >          VhostUserLog log;
> > > > +        VhostUserShMemConfig shmem;
> > > >      } payload;
> > > >  } QEMU_PACKED VhostUserMsg;
> > > >
> > > > @@ -170,6 +186,7 @@ typedef struct TestServer {
> > > >      bool test_fail;
> > > >      int test_flags;
> > > >      int queues;
> > > > +    TestServerShmem shmem;
> > > >      struct vhost_user_ops *vu_ops;
> > > >  } TestServer;
> > > >
> > > > @@ -513,6 +530,31 @@ static void chr_read(void *opaque, const uint8_t *buf, int size)
> > > >          qos_printf("set_vring(%d)=%s\n", msg.payload.state.index,
> > > >                     msg.payload.state.num ? "enabled" : "disabled");
> > > >          break;
> > > > +
> > > > +    case VHOST_USER_GET_SHMEM_CONFIG:
> > > > +        if (!s->shmem.test_enabled) {
> > > > +            /* Reply with error if shmem feature not enabled */
> > > > +            msg.flags |= VHOST_USER_REPLY_MASK;
> > > > +            msg.size = sizeof(uint64_t);
> > > > +            msg.payload.u64 = -1; /* Error */
> > > > +            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
> > > > +        } else {
> > > > +            /* Reply with test shmem configuration */
> > > > +            msg.flags |= VHOST_USER_REPLY_MASK;
> > > > +            msg.size = sizeof(VhostUserShMemConfig);
> > > > +            msg.payload.shmem.nregions = 2; /* Test with 2 regions */
> > > > +            msg.payload.shmem.padding = 0;
> > > > +            msg.payload.shmem.memory_sizes[0] = 0x100000; /* 1MB */
> > > > +            msg.payload.shmem.memory_sizes[1] = 0x200000; /* 2MB */
> > > > +
> > > > +            /* Record what we're sending for test validation */
> > > > +            s->shmem.nregions_sent = msg.payload.shmem.nregions;
> > > > +            s->shmem.sizes_sent[0] = msg.payload.shmem.memory_sizes[0];
> > > > +            s->shmem.sizes_sent[1] = msg.payload.shmem.memory_sizes[1];
> > > > +
> > > > +            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
> > > > +        }
> > > > +        break;
> > > >
> > > >      default:
> > > >          qos_printf("vhost-user: un-handled message: %d\n", msg.request);
> > > > @@ -809,6 +851,22 @@ static void *vhost_user_test_setup_shm(GString *cmd_line, void *arg)
> > > >      return server;
> > > >  }
> > > >
> > > > +static void *vhost_user_test_setup_shmem_config(GString *cmd_line, void *arg)
> > > > +{
> > > > +    TestServer *server = test_server_new("vhost-user-test", arg);
> > > > +    test_server_listen(server);
> > > > +
> > > > +    /* Enable shmem testing for this server */
> > > > +    server->shmem.test_enabled = true;
> > > > +
> > > > +    append_mem_opts(server, cmd_line, 256, TEST_MEMFD_SHM);
> > > > +    server->vu_ops->append_opts(server, cmd_line, "");
> > > > +
> > > > +    g_test_queue_destroy(vhost_user_test_cleanup, server);
> > > > +
> > > > +    return server;
> > > > +}
> > > > +
> > > >  static void test_read_guest_mem(void *obj, void *arg, QGuestAllocator *alloc)
> > > >  {
> > > >      TestServer *server = arg;
> > > > @@ -1089,6 +1147,33 @@ static struct vhost_user_ops g_vu_net_ops = {
> > > >      .get_protocol_features = vu_net_get_protocol_features,
> > > >  };
> > > >
> > > > +/* Test function for VHOST_USER_GET_SHMEM_CONFIG message */
> > > > +static void test_shmem_config(void *obj, void *arg, QGuestAllocator *alloc)
> > > > +{
> > > > +    TestServer *s = arg;
> > > > +
> > > > +    g_assert_true(s->shmem.test_enabled);
> > > > +
> > > > +    g_mutex_lock(&s->data_mutex);
> > > > +    s->shmem.nregions_sent = 0;
> > > > +    s->shmem.sizes_sent[0] = 0;
> > > > +    s->shmem.sizes_sent[1] = 0;
> > > > +    g_mutex_unlock(&s->data_mutex);
> > > > +
> > > > +    VhostUserMsg msg = {
> > > > +        .request = VHOST_USER_GET_SHMEM_CONFIG,
> > > > +        .flags = VHOST_USER_VERSION,
> > > > +        .size = 0,
> > > > +    };
> > > > +    chr_read(s, (uint8_t *) &msg, VHOST_USER_HDR_SIZE);
> > > > +
> > > > +    g_mutex_lock(&s->data_mutex);
> > > > +    g_assert_cmpint(s->shmem.nregions_sent, ==, 2);
> > > > +    g_assert_cmpint(s->shmem.sizes_sent[0], ==, 0x100000); /* 1MB */
> > > > +    g_assert_cmpint(s->shmem.sizes_sent[1], ==, 0x200000); /* 2MB */
> > > > +    g_mutex_unlock(&s->data_mutex);
> > > > +}
> > > > +
> > > >  static void register_vhost_user_test(void)
> > > >  {
> > > >      QOSGraphTestOptions opts = {
> > > > @@ -1136,6 +1221,12 @@ static void register_vhost_user_test(void)
> > > >      qos_add_test("vhost-user/multiqueue",
> > > >                   "virtio-net",
> > > >                   test_multiqueue, &opts);
> > > > +
> > > > +    opts.before = vhost_user_test_setup_shmem_config;
> > > > +    opts.edge.extra_device_opts = "";
> > > > +    qos_add_test("vhost-user/shmem-config",
> > > > +                 "virtio-net",
> > > > +                 test_shmem_config, &opts);
> > > >  }
> > > >  libqos_init(register_vhost_user_test);
> > > >
> > > > --
> > > > 2.49.0
> > > >
> >



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 6/8] tests/qtest: Add GET_SHMEM validation test
  2025-08-21  6:50         ` Albert Esteve
@ 2025-09-10 11:10           ` Albert Esteve
  0 siblings, 0 replies; 22+ messages in thread
From: Albert Esteve @ 2025-09-10 11:10 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, david, Michael S. Tsirkin, hi, jasowang,
	Laurent Vivier, dbassey, Stefano Garzarella, Paolo Bonzini,
	stevensd, Fabiano Rosas, Alex Bennée, slp

On Thu, Aug 21, 2025 at 8:50 AM Albert Esteve <aesteve@redhat.com> wrote:
>
> On Wed, Aug 20, 2025 at 10:33 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >
> > On Tue, Aug 19, 2025 at 02:16:47PM +0200, Albert Esteve wrote:
> > > On Tue, Aug 19, 2025 at 12:42 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > > >
> > > > On Mon, Aug 18, 2025 at 12:03:51PM +0200, Albert Esteve wrote:
> > > > > Improve vhost-user-test to properly validate
> > > > > VHOST_USER_GET_SHMEM_CONFIG message handling by
> > > > > directly simulating the message exchange.
> > > > >
> > > > > The test manually triggers the
> > > > > VHOST_USER_GET_SHMEM_CONFIG message by calling
> > > > > chr_read() with a crafted VhostUserMsg, allowing direct
> > > > > validation of the shmem configuration response handler.
> > > >
> > > > It looks like this test case invokes its own chr_read() function without
> > > > going through QEMU, so I don't understand what this is testing?
> > >
> > > I spent some time trying to test it, but in the end I could not
> > > instatiate vhost-user-device because it is non user_creatable. I did
> > > not find any test for vhost-user-device anywhere else either. But I
> > > had already added most of the infrastructure here so I fallback to
> > > chr_read() communication to avoid having to delete everything. My
> > > though was that once we have other devices that use shared memory,
> > > they could tweak the test to instantiate the proper device and test
> > > this and the map/unmap operations.
> > >
> > > Although after writing this, I think other devices will actually a
> > > specific layout for their shared memory. So
> > > VHOST_USER_GET_SHMEM_CONFIG is only ever going to be used by
> > > vhost-user-device.
> > >
> > > In general, trying to test this patch series has been a headache other
> > > than trying with external device code I have. If you have an idea that
> > > I could try to test this, I can try. Otherwise, probably is best to
> > > remove this commit from the series and wait for another vhost-user
> > > device that uses map/unmap to land to be able to test it.
> >
> > Alex Bennee has renamed vhost-user-device to vhost-user-test-device and
> > set user_creatable = true:
> > https://lore.kernel.org/qemu-devel/20250820195632.1956795-1-alex.bennee@linaro.org/T/#t
>
> Oh, great! Thanks for letting me know.
>
> That allows having a QTest with the vhost-user-test-device available
> and run it in piplines if necessary, without manually
> changing/recompiling. I'll try to add it to the test again in this
> commit.
>
> Thank you, Stefan and Alyssa, for the hints.

Hi,

I wanted to make a note before sending the next version. I have been
trying to test it by forcing user_creatable to true locally while the
other PATCH lands. But it will need more work, and I do not want to
delay the new version much further. Thus, I will remove this commit
from the next version and keep working locally.

>
> >
> > >
> > >
> > >
> > > >
> > > > >
> > > > > Added TestServerShmem structure to track shmem
> > > > > configuration state, including nregions_sent and
> > > > > sizes_sent arrays for comprehensive validation.
> > > > > The test verifies that the response contains the expected
> > > > > number of shared memory regions and their corresponding
> > > > > sizes.
> > > > >
> > > > > Signed-off-by: Albert Esteve <aesteve@redhat.com>
> > > > > ---
> > > > >  tests/qtest/vhost-user-test.c | 91 +++++++++++++++++++++++++++++++++++
> > > > >  1 file changed, 91 insertions(+)
> > > > >
> > > > > diff --git a/tests/qtest/vhost-user-test.c b/tests/qtest/vhost-user-test.c
> > > > > index 75cb3e44b2..44a5e90b2e 100644
> > > > > --- a/tests/qtest/vhost-user-test.c
> > > > > +++ b/tests/qtest/vhost-user-test.c
> > > > > @@ -88,6 +88,7 @@ typedef enum VhostUserRequest {
> > > > >      VHOST_USER_SET_VRING_ENABLE = 18,
> > > > >      VHOST_USER_GET_CONFIG = 24,
> > > > >      VHOST_USER_SET_CONFIG = 25,
> > > > > +    VHOST_USER_GET_SHMEM_CONFIG = 44,
> > > > >      VHOST_USER_MAX
> > > > >  } VhostUserRequest;
> > > > >
> > > > > @@ -109,6 +110,20 @@ typedef struct VhostUserLog {
> > > > >      uint64_t mmap_offset;
> > > > >  } VhostUserLog;
> > > > >
> > > > > +#define VIRTIO_MAX_SHMEM_REGIONS 256
> > > > > +
> > > > > +typedef struct VhostUserShMemConfig {
> > > > > +    uint32_t nregions;
> > > > > +    uint32_t padding;
> > > > > +    uint64_t memory_sizes[VIRTIO_MAX_SHMEM_REGIONS];
> > > > > +} VhostUserShMemConfig;
> > > > > +
> > > > > +typedef struct TestServerShmem {
> > > > > +    bool test_enabled;
> > > > > +    uint32_t nregions_sent;
> > > > > +    uint64_t sizes_sent[VIRTIO_MAX_SHMEM_REGIONS];
> > > > > +} TestServerShmem;
> > > > > +
> > > > >  typedef struct VhostUserMsg {
> > > > >      VhostUserRequest request;
> > > > >
> > > > > @@ -124,6 +139,7 @@ typedef struct VhostUserMsg {
> > > > >          struct vhost_vring_addr addr;
> > > > >          VhostUserMemory memory;
> > > > >          VhostUserLog log;
> > > > > +        VhostUserShMemConfig shmem;
> > > > >      } payload;
> > > > >  } QEMU_PACKED VhostUserMsg;
> > > > >
> > > > > @@ -170,6 +186,7 @@ typedef struct TestServer {
> > > > >      bool test_fail;
> > > > >      int test_flags;
> > > > >      int queues;
> > > > > +    TestServerShmem shmem;
> > > > >      struct vhost_user_ops *vu_ops;
> > > > >  } TestServer;
> > > > >
> > > > > @@ -513,6 +530,31 @@ static void chr_read(void *opaque, const uint8_t *buf, int size)
> > > > >          qos_printf("set_vring(%d)=%s\n", msg.payload.state.index,
> > > > >                     msg.payload.state.num ? "enabled" : "disabled");
> > > > >          break;
> > > > > +
> > > > > +    case VHOST_USER_GET_SHMEM_CONFIG:
> > > > > +        if (!s->shmem.test_enabled) {
> > > > > +            /* Reply with error if shmem feature not enabled */
> > > > > +            msg.flags |= VHOST_USER_REPLY_MASK;
> > > > > +            msg.size = sizeof(uint64_t);
> > > > > +            msg.payload.u64 = -1; /* Error */
> > > > > +            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
> > > > > +        } else {
> > > > > +            /* Reply with test shmem configuration */
> > > > > +            msg.flags |= VHOST_USER_REPLY_MASK;
> > > > > +            msg.size = sizeof(VhostUserShMemConfig);
> > > > > +            msg.payload.shmem.nregions = 2; /* Test with 2 regions */
> > > > > +            msg.payload.shmem.padding = 0;
> > > > > +            msg.payload.shmem.memory_sizes[0] = 0x100000; /* 1MB */
> > > > > +            msg.payload.shmem.memory_sizes[1] = 0x200000; /* 2MB */
> > > > > +
> > > > > +            /* Record what we're sending for test validation */
> > > > > +            s->shmem.nregions_sent = msg.payload.shmem.nregions;
> > > > > +            s->shmem.sizes_sent[0] = msg.payload.shmem.memory_sizes[0];
> > > > > +            s->shmem.sizes_sent[1] = msg.payload.shmem.memory_sizes[1];
> > > > > +
> > > > > +            qemu_chr_fe_write_all(chr, (uint8_t *) &msg, VHOST_USER_HDR_SIZE + msg.size);
> > > > > +        }
> > > > > +        break;
> > > > >
> > > > >      default:
> > > > >          qos_printf("vhost-user: un-handled message: %d\n", msg.request);
> > > > > @@ -809,6 +851,22 @@ static void *vhost_user_test_setup_shm(GString *cmd_line, void *arg)
> > > > >      return server;
> > > > >  }
> > > > >
> > > > > +static void *vhost_user_test_setup_shmem_config(GString *cmd_line, void *arg)
> > > > > +{
> > > > > +    TestServer *server = test_server_new("vhost-user-test", arg);
> > > > > +    test_server_listen(server);
> > > > > +
> > > > > +    /* Enable shmem testing for this server */
> > > > > +    server->shmem.test_enabled = true;
> > > > > +
> > > > > +    append_mem_opts(server, cmd_line, 256, TEST_MEMFD_SHM);
> > > > > +    server->vu_ops->append_opts(server, cmd_line, "");
> > > > > +
> > > > > +    g_test_queue_destroy(vhost_user_test_cleanup, server);
> > > > > +
> > > > > +    return server;
> > > > > +}
> > > > > +
> > > > >  static void test_read_guest_mem(void *obj, void *arg, QGuestAllocator *alloc)
> > > > >  {
> > > > >      TestServer *server = arg;
> > > > > @@ -1089,6 +1147,33 @@ static struct vhost_user_ops g_vu_net_ops = {
> > > > >      .get_protocol_features = vu_net_get_protocol_features,
> > > > >  };
> > > > >
> > > > > +/* Test function for VHOST_USER_GET_SHMEM_CONFIG message */
> > > > > +static void test_shmem_config(void *obj, void *arg, QGuestAllocator *alloc)
> > > > > +{
> > > > > +    TestServer *s = arg;
> > > > > +
> > > > > +    g_assert_true(s->shmem.test_enabled);
> > > > > +
> > > > > +    g_mutex_lock(&s->data_mutex);
> > > > > +    s->shmem.nregions_sent = 0;
> > > > > +    s->shmem.sizes_sent[0] = 0;
> > > > > +    s->shmem.sizes_sent[1] = 0;
> > > > > +    g_mutex_unlock(&s->data_mutex);
> > > > > +
> > > > > +    VhostUserMsg msg = {
> > > > > +        .request = VHOST_USER_GET_SHMEM_CONFIG,
> > > > > +        .flags = VHOST_USER_VERSION,
> > > > > +        .size = 0,
> > > > > +    };
> > > > > +    chr_read(s, (uint8_t *) &msg, VHOST_USER_HDR_SIZE);
> > > > > +
> > > > > +    g_mutex_lock(&s->data_mutex);
> > > > > +    g_assert_cmpint(s->shmem.nregions_sent, ==, 2);
> > > > > +    g_assert_cmpint(s->shmem.sizes_sent[0], ==, 0x100000); /* 1MB */
> > > > > +    g_assert_cmpint(s->shmem.sizes_sent[1], ==, 0x200000); /* 2MB */
> > > > > +    g_mutex_unlock(&s->data_mutex);
> > > > > +}
> > > > > +
> > > > >  static void register_vhost_user_test(void)
> > > > >  {
> > > > >      QOSGraphTestOptions opts = {
> > > > > @@ -1136,6 +1221,12 @@ static void register_vhost_user_test(void)
> > > > >      qos_add_test("vhost-user/multiqueue",
> > > > >                   "virtio-net",
> > > > >                   test_multiqueue, &opts);
> > > > > +
> > > > > +    opts.before = vhost_user_test_setup_shmem_config;
> > > > > +    opts.edge.extra_device_opts = "";
> > > > > +    qos_add_test("vhost-user/shmem-config",
> > > > > +                 "virtio-net",
> > > > > +                 test_shmem_config, &opts);
> > > > >  }
> > > > >  libqos_init(register_vhost_user_test);
> > > > >
> > > > > --
> > > > > 2.49.0
> > > > >
> > >



^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2025-09-10 11:11 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-08-18 10:03 [PATCH v7 0/8] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
2025-08-18 10:03 ` [PATCH v7 1/8] vhost-user: Add VirtIO Shared Memory map request Albert Esteve
2025-08-18 18:58   ` Stefan Hajnoczi
2025-08-19 12:47     ` Albert Esteve
2025-08-19 12:56       ` Albert Esteve
2025-08-19  9:22   ` David Hildenbrand
2025-08-18 10:03 ` [PATCH v7 2/8] vhost_user.rst: Align VhostUserMsg excerpt members Albert Esteve
2025-08-18 10:03 ` [PATCH v7 3/8] vhost_user.rst: Add SHMEM_MAP/_UNMAP to spec Albert Esteve
2025-08-18 10:03 ` [PATCH v7 4/8] vhost_user: Add frontend get_shmem_config command Albert Esteve
2025-08-18 10:03 ` [PATCH v7 5/8] vhost_user.rst: Add GET_SHMEM_CONFIG message Albert Esteve
2025-08-18 10:03 ` [PATCH v7 6/8] tests/qtest: Add GET_SHMEM validation test Albert Esteve
2025-08-18 23:14   ` Stefan Hajnoczi
2025-08-19 12:16     ` Albert Esteve
2025-08-20  8:47       ` Alyssa Ross
2025-08-20 15:42       ` Stefan Hajnoczi
2025-08-20 20:33       ` Stefan Hajnoczi
2025-08-21  6:50         ` Albert Esteve
2025-09-10 11:10           ` Albert Esteve
2025-08-18 10:03 ` [PATCH v7 7/8] qmp: add shmem feature map Albert Esteve
2025-08-18 10:03 ` [PATCH v7 8/8] vhost-user-device: Add shared memory BAR Albert Esteve
2025-08-19 10:42   ` Stefan Hajnoczi
2025-08-19 11:41     ` Albert Esteve

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).