* [PATCH v9 0/7] vhost-user: Add SHMEM_MAP/UNMAP requests
@ 2025-10-15 8:59 Albert Esteve
2025-10-15 8:59 ` [PATCH v9 1/7] vhost-user: Add VirtIO Shared Memory map request Albert Esteve
` (6 more replies)
0 siblings, 7 replies; 17+ messages in thread
From: Albert Esteve @ 2025-10-15 8:59 UTC (permalink / raw)
To: qemu-devel
Cc: stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, manos.pitsidianakis, Fabiano Rosas,
Michael S. Tsirkin, stefanha, Stefano Garzarella, slp, dbassey,
Albert Esteve
Hi all,
v8->v9
- Fixed the new vhost-user handlers to ensure they always reply
- Made the MMAP request flags field u64 everywhere
- Fixed double memory_region_del_subregion() on UNMAP
- Added mapping cleanup on virtio_reset()
- Fixed some small typos
- Fixed virtio pci bar mapping for vhost-user-test-device
v7->v8
- Unified VhostUserShmemObject and VirtioSharedMemoryMapping
- Refined shmem_obj lifecycle by transferring ownership
- Other small improvements
This patch series implements dynamic fd-backed memory mapping support
for vhost-user backends, enabling backends to dynamically request memory
mappings and unmappings during runtime through the new
VHOST_USER_BACKEND_SHMEM_MAP/UNMAP protocol messages.
This feature benefits various VIRTIO devices that require dynamic shared
memory management, including virtiofs (for DAX mappings), virtio-gpu
(for resource sharing), and the recently standardized virtio-media.
The implementation introduces a QOM-based architecture for managing
shared memory lifecycle:
- VirtioSharedMemoryMapping: an intermediate object that manages
individual memory mappings by acting as a generic container for regions
declared in any vhost-user device type
- Dynamic Mapping: backends can request mappings via SHMEM_MAP messages,
with the frontend creating MemoryRegions from the provided file
descriptors and adding them as subregions
When a SHMEM_MAP request is received, the frontend:
1. Creates VirtioSharedMemoryMapping to manage the mapping lifecycle
2. Maps the provided fd with memory_region_init_ram_from_fd()
3. Creates a MemoryRegion backed by the mapped memory
4. Adds it as a subregion of the appropriate VIRTIO Shared Memory Region
The QOM reference counting ensures automatic cleanup when mappings are
removed or the device is destroyed.
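For illustration, the handling flow above can be sketched with stand-in types. This is a minimal, self-contained model (plain structs and a manual refcount, not QEMU's QOM objects or MemoryRegion API); the `range_valid()` check mirrors the overflow-safe bounds validation the SHMEM_MAP handler performs:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Stand-in types for illustration only; the real code uses QEMU's
 * VirtioSharedMemoryMapping QOM object and MemoryRegion subregions. */
typedef struct Mapping {
    uint64_t shm_offset;
    uint64_t len;
    unsigned refcount;          /* models QOM reference counting */
} Mapping;

typedef struct ShmemRegion {
    uint64_t size;
    Mapping *maps[16];
    size_t n_maps;
} ShmemRegion;

/* Overflow-safe range check mirroring the one in the SHMEM_MAP handler:
 * reject if shm_offset + len wraps around or exceeds the region size. */
static bool range_valid(const ShmemRegion *r, uint64_t off, uint64_t len)
{
    return off + len >= len && off + len <= r->size;
}

static Mapping *region_map(ShmemRegion *r, uint64_t off, uint64_t len)
{
    if (!range_valid(r, off, len) || r->n_maps == 16) {
        return NULL;            /* real handler replies with an error */
    }
    Mapping *m = calloc(1, sizeof(*m));
    m->shm_offset = off;
    m->len = len;
    m->refcount = 1;            /* the region holds the only reference */
    r->maps[r->n_maps++] = m;
    return m;
}

static void mapping_unref(Mapping *m)
{
    if (--m->refcount == 0) {
        free(m);                /* real code munmaps and closes the fd */
    }
}
```

Unmap then reduces to finding the mapping and dropping the region's reference, which frees the backing resources once no other reference remains.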
This patch also includes:
- VHOST_USER_GET_SHMEM_CONFIG: a new frontend request allowing generic
vhost-user devices to query shared memory configuration from backends
at device initialization, enabling the generic vhost-user-device
frontend to work with any backend regardless of specific shared memory
requirements.
The implementation has been tested with rust-vmm based backends.
Albert Esteve (7):
vhost-user: Add VirtIO Shared Memory map request
vhost_user.rst: Align VhostUserMsg excerpt members
vhost_user.rst: Add SHMEM_MAP/_UNMAP to spec
vhost_user: Add frontend get_shmem_config command
vhost_user.rst: Add GET_SHMEM_CONFIG message
qmp: add shmem feature map
vhost-user-device: Add shared memory BAR
docs/interop/vhost-user.rst | 101 +++++++++
hw/virtio/vhost-user-base.c | 47 +++-
hw/virtio/vhost-user-test-device-pci.c | 35 ++-
hw/virtio/vhost-user.c | 257 ++++++++++++++++++++++
hw/virtio/virtio-qmp.c | 3 +
hw/virtio/virtio.c | 207 +++++++++++++++++
include/hw/virtio/vhost-backend.h | 10 +
include/hw/virtio/vhost-user.h | 1 +
include/hw/virtio/virtio.h | 138 ++++++++++++
subprojects/libvhost-user/libvhost-user.c | 70 ++++++
subprojects/libvhost-user/libvhost-user.h | 54 +++++
11 files changed, 918 insertions(+), 5 deletions(-)
--
2.49.0
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v9 1/7] vhost-user: Add VirtIO Shared Memory map request
2025-10-15 8:59 [PATCH v9 0/7] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
@ 2025-10-15 8:59 ` Albert Esteve
2025-10-15 9:47 ` Manos Pitsidianakis
2025-10-15 15:27 ` Stefan Hajnoczi
2025-10-15 8:59 ` [PATCH v9 2/7] vhost_user.rst: Align VhostUserMsg excerpt members Albert Esteve
` (5 subsequent siblings)
6 siblings, 2 replies; 17+ messages in thread
From: Albert Esteve @ 2025-10-15 8:59 UTC (permalink / raw)
To: qemu-devel
Cc: stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, manos.pitsidianakis, Fabiano Rosas,
Michael S. Tsirkin, stefanha, Stefano Garzarella, slp, dbassey,
Albert Esteve
Add SHMEM_MAP/UNMAP requests to vhost-user for dynamic management of
VIRTIO Shared Memory mappings.
This implementation introduces VirtioSharedMemoryMapping as a unified
QOM object that manages both the mapping metadata and MemoryRegion
lifecycle. This object provides reference-counted lifecycle management
with automatic cleanup of file descriptors and memory regions
through QOM finalization.
This request allows backends to dynamically map file descriptors into a
VIRTIO Shared Memory Region identified by its shmid. Maps are created
using memory_region_init_ram_from_fd() with configurable read/write
permissions, and the resulting MemoryRegions are added as subregions to
the shmem container region. The mapped memory is then advertised to the
guest VIRTIO drivers as a base address plus offset for reading and
writing, according to the requested mmap flags.
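The wire layout of the new payload, as defined later in this patch, can be checked with a small standalone program (assuming natural alignment on the host; the authoritative definitions live in hw/virtio/vhost-user.c and libvhost-user.h):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* VhostUserMMap payload introduced by this patch: a 40-byte message
 * body (1 byte shmid, 7 bytes padding, then four u64 fields). */
typedef struct {
    uint8_t shmid;          /* VIRTIO Shared Memory Region ID */
    uint8_t padding[7];
    uint64_t fd_offset;     /* file offset */
    uint64_t shm_offset;    /* offset within the shared memory region */
    uint64_t len;           /* size of the mapping */
    uint64_t flags;         /* VHOST_USER_FLAG_MAP_* */
} VhostUserMMap;

#define VHOST_USER_FLAG_MAP_RW (1u << 0)

/* The explicit padding keeps the u64 fields naturally aligned, so the
 * struct needs no compiler packing to have a stable 40-byte layout. */
_Static_assert(sizeof(VhostUserMMap) == 40, "wire size");
_Static_assert(offsetof(VhostUserMMap, fd_offset) == 8, "u64s aligned");
```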
The backend can unmap memory ranges within a given VIRTIO Shared Memory
Region to free resources. Upon receiving this message, the frontend
removes the MemoryRegion as a subregion and automatically unreferences
the VirtioSharedMemoryMapping object, triggering cleanup if no other
references exist.
Error handling has been improved to ensure consistent behavior across
handlers that manage their own vhost_user_send_resp() calls. Since
these handlers clear the VHOST_USER_NEED_REPLY_MASK flag, explicit
error checking ensures proper connection closure on failures,
maintaining the expected error flow.
Note that the memory region commit for these operations needs to be delayed
until after we reply to the backend to avoid deadlocks. Otherwise,
the MemoryListener would send a VHOST_USER_SET_MEM_TABLE message
before the reply.
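The required ordering can be sketched with illustrative stubs (not QEMU APIs) that record the sequence of operations:

```c
#include <assert.h>
#include <string.h>

/* Illustrative stubs that only record the order of operations, to show
 * why the commit must happen after the reply is sent. */
static char trace[64];

static void transaction_begin(void)  { strcat(trace, "begin,"); }
static void send_reply(void)         { strcat(trace, "reply,"); }
static void transaction_commit(void) { strcat(trace, "commit,"); }

/* Mirrors the handler's control flow: if the commit ran before the
 * reply, the MemoryListener would emit VHOST_USER_SET_MEM_TABLE first
 * and frontend and backend would block waiting on each other. */
static void handle_shmem_map(void)
{
    transaction_begin();
    /* ... map the fd and add the subregion here ... */
    send_reply();
    transaction_commit();   /* deferred until after the reply */
}
```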
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
hw/virtio/vhost-user.c | 213 ++++++++++++++++++++++
hw/virtio/virtio.c | 207 +++++++++++++++++++++
include/hw/virtio/virtio.h | 136 ++++++++++++++
subprojects/libvhost-user/libvhost-user.c | 70 +++++++
subprojects/libvhost-user/libvhost-user.h | 54 ++++++
5 files changed, 680 insertions(+)
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 36c9c2e04d..8a4385a5ad 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -115,6 +115,8 @@ typedef enum VhostUserBackendRequest {
VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
+ VHOST_USER_BACKEND_SHMEM_MAP = 9,
+ VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
VHOST_USER_BACKEND_MAX
} VhostUserBackendRequest;
@@ -192,6 +194,23 @@ typedef struct VhostUserShared {
unsigned char uuid[16];
} VhostUserShared;
+/* For the flags field of VhostUserMMap */
+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
+
+typedef struct {
+ /* VIRTIO Shared Memory Region ID */
+ uint8_t shmid;
+ uint8_t padding[7];
+ /* File offset */
+ uint64_t fd_offset;
+ /* Offset within the VIRTIO Shared Memory Region */
+ uint64_t shm_offset;
+ /* Size of the mapping */
+ uint64_t len;
+ /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
+ uint64_t flags;
+} VhostUserMMap;
+
typedef struct {
VhostUserRequest request;
@@ -224,6 +243,7 @@ typedef union {
VhostUserInflight inflight;
VhostUserShared object;
VhostUserTransferDeviceState transfer_state;
+ VhostUserMMap mmap;
} VhostUserPayload;
typedef struct VhostUserMsg {
@@ -1768,6 +1788,186 @@ vhost_user_backend_handle_shared_object_lookup(struct vhost_user *u,
return 0;
}
+/**
+ * vhost_user_backend_handle_shmem_map() - Handle SHMEM_MAP backend request
+ * @dev: vhost device
+ * @ioc: QIOChannel for communication
+ * @hdr: vhost-user message header
+ * @payload: message payload containing mapping details
+ * @fd: file descriptor for the shared memory region
+ *
+ * Handles VHOST_USER_BACKEND_SHMEM_MAP requests from the backend. Creates
+ * a VirtioSharedMemoryMapping to manage the shared memory mapping and adds
+ * it to the appropriate VirtIO shared memory region. The mapping object
+ * serves as an intermediate parent for the MemoryRegion, ensuring proper
+ * lifecycle management with reference counting.
+ *
+ * Returns: 0 on success, negative errno on failure
+ */
+static int
+vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
+ QIOChannel *ioc,
+ VhostUserHeader *hdr,
+ VhostUserPayload *payload,
+ int fd)
+{
+ VirtioSharedMemory *shmem;
+ VhostUserMMap *vu_mmap = &payload->mmap;
+ VirtioSharedMemoryMapping *existing;
+ Error *local_err = NULL;
+ int ret = 0;
+
+ /* Begin the transaction up front so every error path that jumps to
+ * send_reply reaches a commit with a matching begin. */
+ memory_region_transaction_begin();
+
+ if (fd < 0) {
+ error_report("Bad fd for map");
+ ret = -EBADF;
+ goto send_reply;
+ }
+
+ if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
+ error_report("Device has no VIRTIO Shared Memory Regions. "
+ "Requested ID: %d", vu_mmap->shmid);
+ ret = -EFAULT;
+ goto send_reply;
+ }
+
+ shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
+ if (!shmem) {
+ error_report("VIRTIO Shared Memory Region at "
+ "ID %d not found or uninitialized", vu_mmap->shmid);
+ ret = -EFAULT;
+ goto send_reply;
+ }
+
+ if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
+ (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
+ error_report("Bad offset/len for mmap %" PRIx64 "+%" PRIx64,
+ vu_mmap->shm_offset, vu_mmap->len);
+ ret = -EFAULT;
+ goto send_reply;
+ }
+
+ QTAILQ_FOREACH(existing, &shmem->mmaps, link) {
+ if (ranges_overlap(existing->offset, existing->len,
+ vu_mmap->shm_offset, vu_mmap->len)) {
+ error_report("VIRTIO Shared Memory mapping overlap");
+ ret = -EFAULT;
+ goto send_reply;
+ }
+ }
+
+ /* Create VirtioSharedMemoryMapping object */
+ VirtioSharedMemoryMapping *mapping = virtio_shared_memory_mapping_new(
+ vu_mmap->shmid, fd, vu_mmap->fd_offset, vu_mmap->shm_offset,
+ vu_mmap->len, vu_mmap->flags & VHOST_USER_FLAG_MAP_RW);
+
+ if (!mapping) {
+ ret = -EFAULT;
+ goto send_reply;
+ }
+
+ /* Add the mapping to the shared memory region */
+ if (virtio_add_shmem_map(shmem, mapping) != 0) {
+ error_report("Failed to add shared memory mapping");
+ object_unref(OBJECT(mapping));
+ ret = -EFAULT;
+ goto send_reply;
+ }
+
+send_reply:
+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
+ payload->u64 = !!ret;
+ hdr->size = sizeof(payload->u64);
+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
+ error_report_err(local_err);
+ memory_region_transaction_commit();
+ return -EFAULT;
+ }
+ }
+
+ memory_region_transaction_commit();
+
+ return 0;
+}
+
+/**
+ * vhost_user_backend_handle_shmem_unmap() - Handle SHMEM_UNMAP backend request
+ * @dev: vhost device
+ * @ioc: QIOChannel for communication
+ * @hdr: vhost-user message header
+ * @payload: message payload containing unmapping details
+ *
+ * Handles VHOST_USER_BACKEND_SHMEM_UNMAP requests from the backend. Removes
+ * the specified memory mapping from the VirtIO shared memory region. This
+ * automatically unreferences the associated VirtioSharedMemoryMapping, which may
+ * trigger its finalization and cleanup (munmap, close fd) if no other
+ * references exist.
+ *
+ * Returns: 0 on success, negative errno on failure
+ */
+static int
+vhost_user_backend_handle_shmem_unmap(struct vhost_dev *dev,
+ QIOChannel *ioc,
+ VhostUserHeader *hdr,
+ VhostUserPayload *payload)
+{
+ VirtioSharedMemory *shmem;
+ VirtioSharedMemoryMapping *mmap = NULL;
+ VhostUserMMap *vu_mmap = &payload->mmap;
+ Error *local_err = NULL;
+ int ret = 0;
+
+ if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
+ error_report("Device has no VIRTIO Shared Memory Regions. "
+ "Requested ID: %d", vu_mmap->shmid);
+ ret = -EFAULT;
+ goto send_reply;
+ }
+
+ shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
+ if (!shmem) {
+ error_report("VIRTIO Shared Memory Region at "
+ "ID %d not found or uninitialized", vu_mmap->shmid);
+ ret = -EFAULT;
+ goto send_reply;
+ }
+
+ if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
+ (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
+ error_report("Bad offset/len for unmap %" PRIx64 "+%" PRIx64,
+ vu_mmap->shm_offset, vu_mmap->len);
+ ret = -EFAULT;
+ goto send_reply;
+ }
+
+ mmap = virtio_find_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
+ if (!mmap) {
+ error_report("Shared memory mapping not found at offset %" PRIx64
+ " with length %" PRIx64,
+ vu_mmap->shm_offset, vu_mmap->len);
+ ret = -EFAULT;
+ goto send_reply;
+ }
+
+send_reply:
+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
+ payload->u64 = !!ret;
+ hdr->size = sizeof(payload->u64);
+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
+ error_report_err(local_err);
+ return -EFAULT;
+ }
+ }
+
+ if (!ret && mmap) {
+ /* Free the MemoryRegion only after reply */
+ virtio_del_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
+ }
+
+ return 0;
+}
+
static void close_backend_channel(struct vhost_user *u)
{
g_source_destroy(u->backend_src);
@@ -1836,6 +2036,19 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
ret = vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
&hdr, &payload);
break;
+ case VHOST_USER_BACKEND_SHMEM_MAP:
+ /* Handler manages its own response, check error and close connection */
+ if (vhost_user_backend_handle_shmem_map(dev, ioc, &hdr, &payload,
+ fd ? fd[0] : -1) < 0) {
+ goto err;
+ }
+ break;
+ case VHOST_USER_BACKEND_SHMEM_UNMAP:
+ /* Handler manages its own response, check error and close connection */
+ if (vhost_user_backend_handle_shmem_unmap(dev, ioc, &hdr, &payload) < 0) {
+ goto err;
+ }
+ break;
default:
error_report("Received unexpected msg type: %d.", hdr.request);
ret = -EINVAL;
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 153ee0a0cf..37070e7951 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3086,6 +3086,181 @@ int virtio_save(VirtIODevice *vdev, QEMUFile *f)
return vmstate_save_state(f, &vmstate_virtio, vdev, NULL, &error_fatal);
}
+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid, uint64_t size)
+{
+ VirtioSharedMemory *elem;
+ g_autofree char *name = NULL;
+
+ elem = g_new0(VirtioSharedMemory, 1);
+ elem->shmid = shmid;
+
+ /* Initialize embedded MemoryRegion as container for shmem mappings */
+ name = g_strdup_printf("virtio-shmem-%d", shmid);
+ memory_region_init(&elem->mr, OBJECT(vdev), name, size);
+ QTAILQ_INIT(&elem->mmaps);
+ QSIMPLEQ_INSERT_TAIL(&vdev->shmem_list, elem, entry);
+ return elem;
+}
+
+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid)
+{
+ VirtioSharedMemory *shmem, *next;
+ QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
+ if (shmem->shmid == shmid) {
+ return shmem;
+ }
+ }
+ return NULL;
+}
+
+static void virtio_shared_memory_mapping_instance_init(Object *obj)
+{
+ VirtioSharedMemoryMapping *mapping = VIRTIO_SHARED_MEMORY_MAPPING(obj);
+
+ mapping->shmid = 0;
+ mapping->fd = -1;
+ mapping->offset = 0;
+ mapping->len = 0;
+ mapping->mr = NULL;
+}
+
+static void virtio_shared_memory_mapping_instance_finalize(Object *obj)
+{
+ VirtioSharedMemoryMapping *mapping = VIRTIO_SHARED_MEMORY_MAPPING(obj);
+
+ /* Clean up MemoryRegion if it exists */
+ if (mapping->mr) {
+ /* Unparent the MemoryRegion to trigger cleanup */
+ object_unparent(OBJECT(mapping->mr));
+ mapping->mr = NULL;
+ }
+
+ /* Close file descriptor */
+ if (mapping->fd >= 0) {
+ close(mapping->fd);
+ mapping->fd = -1;
+ }
+}
+
+VirtioSharedMemoryMapping *virtio_shared_memory_mapping_new(uint8_t shmid,
+ int fd,
+ uint64_t fd_offset,
+ uint64_t shm_offset,
+ uint64_t len,
+ bool allow_write)
+{
+ VirtioSharedMemoryMapping *mapping;
+ MemoryRegion *mr;
+ g_autoptr(GString) mr_name = g_string_new(NULL);
+ uint32_t ram_flags;
+ Error *local_err = NULL;
+
+ if (len == 0) {
+ error_report("Shared memory mapping size cannot be zero");
+ return NULL;
+ }
+
+ fd = dup(fd);
+ if (fd < 0) {
+ error_report("Failed to duplicate fd: %s", strerror(errno));
+ return NULL;
+ }
+
+ /* Determine RAM flags */
+ ram_flags = RAM_SHARED;
+ if (!allow_write) {
+ ram_flags |= RAM_READONLY_FD;
+ }
+
+ /* Create the VirtioSharedMemoryMapping */
+ mapping = VIRTIO_SHARED_MEMORY_MAPPING(
+ object_new(TYPE_VIRTIO_SHARED_MEMORY_MAPPING));
+
+ /* Set up object properties */
+ mapping->shmid = shmid;
+ mapping->fd = fd;
+ mapping->offset = shm_offset;
+ mapping->len = len;
+
+ /* Create MemoryRegion as a child of this object */
+ mr = g_new0(MemoryRegion, 1);
+ g_string_printf(mr_name, "virtio-shmem-%d-%" PRIx64, shmid, shm_offset);
+
+ /* Initialize MemoryRegion with file descriptor */
+ if (!memory_region_init_ram_from_fd(mr, OBJECT(mapping), mr_name->str,
+ len, ram_flags, fd, fd_offset,
+ &local_err)) {
+ error_report_err(local_err);
+ g_free(mr);
+ /* The dup'ed fd is closed by instance_finalize() via object_unref() */
+ object_unref(OBJECT(mapping));
+ return NULL;
+ }
+
+ mapping->mr = mr;
+ return mapping;
+}
+
+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
+ VirtioSharedMemoryMapping *mapping)
+{
+ if (!mapping) {
+ error_report("VirtioSharedMemoryMapping cannot be NULL");
+ return -1;
+ }
+ if (!mapping->mr) {
+ error_report("VirtioSharedMemoryMapping has no MemoryRegion");
+ return -1;
+ }
+
+ /* Validate boundaries against the VIRTIO shared memory region */
+ if (mapping->offset + mapping->len > shmem->mr.size) {
+ error_report("Memory exceeds the shared memory boundaries");
+ return -1;
+ }
+
+ /* Add as subregion to the VIRTIO shared memory */
+ memory_region_add_subregion(&shmem->mr, mapping->offset, mapping->mr);
+
+ /* Add to the mapped regions list */
+ QTAILQ_INSERT_TAIL(&shmem->mmaps, mapping, link);
+
+ return 0;
+}
+
+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
+ hwaddr offset, uint64_t size)
+{
+ VirtioSharedMemoryMapping *mapping;
+ QTAILQ_FOREACH(mapping, &shmem->mmaps, link) {
+ if (mapping->offset == offset && mapping->mr->size == size) {
+ return mapping;
+ }
+ }
+ return NULL;
+}
+
+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
+ uint64_t size)
+{
+ VirtioSharedMemoryMapping *mapping = virtio_find_shmem_map(shmem, offset, size);
+ if (mapping == NULL) {
+ return;
+ }
+
+ /*
+ * Remove from memory region first
+ */
+ memory_region_del_subregion(&shmem->mr, mapping->mr);
+
+ /*
+ * Remove from list and unref the mapping which will trigger automatic cleanup
+ * when the reference count reaches zero.
+ */
+ QTAILQ_REMOVE(&shmem->mmaps, mapping, link);
+ object_unref(OBJECT(mapping));
+}
+
/* A wrapper for use as a VMState .put function */
static int virtio_device_put(QEMUFile *f, void *opaque, size_t size,
const VMStateField *field, JSONWriter *vmdesc)
@@ -3212,6 +3387,7 @@ void virtio_reset(void *opaque)
{
VirtIODevice *vdev = opaque;
VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
+ VirtioSharedMemory *shmem;
uint64_t features[VIRTIO_FEATURES_NU64S];
int i;
@@ -3251,6 +3427,14 @@ void virtio_reset(void *opaque)
for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
__virtio_queue_reset(vdev, i);
}
+
+ /* Mappings are removed to prevent stale fds from remaining open. */
+ QSIMPLEQ_FOREACH(shmem, &vdev->shmem_list, entry) {
+ while (!QTAILQ_EMPTY(&shmem->mmaps)) {
+ VirtioSharedMemoryMapping *mapping = QTAILQ_FIRST(&shmem->mmaps);
+ virtio_del_shmem_map(shmem, mapping->offset, mapping->mr->size);
+ }
+ }
}
static void virtio_device_check_notification_compatibility(VirtIODevice *vdev,
@@ -3574,6 +3758,7 @@ void virtio_init(VirtIODevice *vdev, uint16_t device_id, size_t config_size)
NULL, virtio_vmstate_change, vdev);
vdev->device_endian = virtio_default_endian();
vdev->use_guest_notifier_mask = true;
+ QSIMPLEQ_INIT(&vdev->shmem_list);
}
/*
@@ -4085,11 +4270,24 @@ static void virtio_device_free_virtqueues(VirtIODevice *vdev)
static void virtio_device_instance_finalize(Object *obj)
{
VirtIODevice *vdev = VIRTIO_DEVICE(obj);
+ VirtioSharedMemory *shmem;
virtio_device_free_virtqueues(vdev);
g_free(vdev->config);
g_free(vdev->vector_queues);
+ while (!QSIMPLEQ_EMPTY(&vdev->shmem_list)) {
+ shmem = QSIMPLEQ_FIRST(&vdev->shmem_list);
+ while (!QTAILQ_EMPTY(&shmem->mmaps)) {
+ VirtioSharedMemoryMapping *mapping = QTAILQ_FIRST(&shmem->mmaps);
+ virtio_del_shmem_map(shmem, mapping->offset, mapping->mr->size);
+ }
+
+ /* Clean up the embedded MemoryRegion */
+ object_unparent(OBJECT(&shmem->mr));
+ QSIMPLEQ_REMOVE_HEAD(&vdev->shmem_list, entry);
+ g_free(shmem);
+ }
}
static const Property virtio_properties[] = {
@@ -4455,9 +4653,18 @@ static const TypeInfo virtio_device_info = {
.class_size = sizeof(VirtioDeviceClass),
};
+static const TypeInfo virtio_shared_memory_mapping_info = {
+ .name = TYPE_VIRTIO_SHARED_MEMORY_MAPPING,
+ .parent = TYPE_OBJECT,
+ .instance_size = sizeof(VirtioSharedMemoryMapping),
+ .instance_init = virtio_shared_memory_mapping_instance_init,
+ .instance_finalize = virtio_shared_memory_mapping_instance_finalize,
+};
+
static void virtio_register_types(void)
{
type_register_static(&virtio_device_info);
+ type_register_static(&virtio_shared_memory_mapping_info);
}
type_init(virtio_register_types)
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index d97529c3f1..cee807e236 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -99,6 +99,46 @@ enum virtio_device_endian {
VIRTIO_DEVICE_ENDIAN_BIG,
};
+#define TYPE_VIRTIO_SHARED_MEMORY_MAPPING "virtio-shared-memory-mapping"
+OBJECT_DECLARE_SIMPLE_TYPE(VirtioSharedMemoryMapping, VIRTIO_SHARED_MEMORY_MAPPING)
+
+/**
+ * VirtioSharedMemoryMapping:
+ * @parent: Parent QOM object
+ * @shmid: VIRTIO Shared Memory Region ID
+ * @fd: File descriptor for the shared memory region
+ * @offset: Offset within the VIRTIO Shared Memory Region
+ * @len: Size of the mapping
+ * @mr: MemoryRegion associated with this shared memory mapping
+ * @link: List entry for the shared memory region's mapping list
+ *
+ * A QOM object that represents an individual file descriptor-based shared
+ * memory mapping within a VIRTIO Shared Memory Region. It manages the
+ * MemoryRegion lifecycle and file descriptor cleanup through QOM reference
+ * counting. When the object is unreferenced and its reference count drops
+ * to zero, it automatically cleans up the MemoryRegion and closes the file
+ * descriptor.
+ */
+struct VirtioSharedMemoryMapping {
+ Object parent;
+
+ uint8_t shmid;
+ int fd;
+ hwaddr offset;
+ uint64_t len;
+ MemoryRegion *mr;
+ QTAILQ_ENTRY(VirtioSharedMemoryMapping) link;
+};
+
+struct VirtioSharedMemory {
+ uint8_t shmid;
+ MemoryRegion mr;
+ QTAILQ_HEAD(, VirtioSharedMemoryMapping) mmaps;
+ QSIMPLEQ_ENTRY(VirtioSharedMemory) entry;
+};
+
+typedef struct VirtioSharedMemory VirtioSharedMemory;
+
/**
* struct VirtIODevice - common VirtIO structure
* @name: name of the device
@@ -168,6 +208,8 @@ struct VirtIODevice
*/
EventNotifier config_notifier;
bool device_iotlb_enabled;
+ /* Shared memory region for mappings. */
+ QSIMPLEQ_HEAD(, VirtioSharedMemory) shmem_list;
};
struct VirtioDeviceClass {
@@ -298,6 +340,100 @@ void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
int virtio_save(VirtIODevice *vdev, QEMUFile *f);
+/**
+ * virtio_new_shmem_region() - Create a new shared memory region
+ * @vdev: VirtIODevice
+ * @shmid: Shared memory ID
+ * @size: Size of the shared memory region
+ *
+ * Creates a new VirtioSharedMemory region for the given device and ID.
+ * The returned VirtioSharedMemory is owned by the VirtIODevice and will
+ * be automatically freed when the device is destroyed. The caller
+ * should not free the returned pointer.
+ *
+ * Returns: Pointer to the new VirtioSharedMemory region, or NULL on failure
+ */
+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid, uint64_t size);
+
+/**
+ * virtio_find_shmem_region() - Find an existing shared memory region
+ * @vdev: VirtIODevice
+ * @shmid: Shared memory ID to find
+ *
+ * Finds an existing VirtioSharedMemory region by ID. The returned pointer
+ * is owned by the VirtIODevice and should not be freed by the caller.
+ *
+ * Returns: Pointer to the VirtioSharedMemory region, or NULL if not found
+ */
+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid);
+
+/**
+ * virtio_shared_memory_mapping_new() - Create a new VirtioSharedMemoryMapping
+ * @shmid: VIRTIO Shared Memory Region ID
+ * @fd: File descriptor for the shared memory
+ * @fd_offset: Offset within the file descriptor
+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
+ * @len: Size of the mapping
+ * @allow_write: Whether to allow write access to the mapping
+ *
+ * Creates a new VirtioSharedMemoryMapping that manages a shared memory mapping.
+ * The object will create a MemoryRegion using memory_region_init_ram_from_fd()
+ * as a child object. When the object is finalized, it will automatically
+ * clean up the MemoryRegion and close the file descriptor.
+ *
+ * Return: A new VirtioSharedMemoryMapping on success, NULL on error.
+ */
+VirtioSharedMemoryMapping *virtio_shared_memory_mapping_new(uint8_t shmid,
+ int fd,
+ uint64_t fd_offset,
+ uint64_t shm_offset,
+ uint64_t len,
+ bool allow_write);
+
+/**
+ * virtio_add_shmem_map() - Add a memory mapping to a shared region
+ * @shmem: VirtioSharedMemory region
+ * @mapping: VirtioSharedMemoryMapping to add (transfers ownership)
+ *
+ * Adds a memory mapping to the shared memory region. The VirtioSharedMemoryMapping
+ * ownership is transferred to the shared memory region and will be automatically
+ * cleaned up through QOM reference counting when virtio_del_shmem_map() is
+ * called or when the shared memory region is destroyed.
+ *
+ * Returns: 0 on success, negative errno on failure
+ */
+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
+ VirtioSharedMemoryMapping *mapping);
+
+/**
+ * virtio_find_shmem_map() - Find a memory mapping in a shared region
+ * @shmem: VirtioSharedMemory region
+ * @offset: Offset within the shared memory region
+ * @size: Size of the mapping to find
+ *
+ * Finds an existing memory mapping that exactly matches the specified
+ * offset and size.
+ * The returned VirtioSharedMemoryMapping is owned by the VirtioSharedMemory
+ * region and should not be freed by the caller.
+ *
+ * Returns: Pointer to the VirtioSharedMemoryMapping, or NULL if not found
+ */
+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
+ hwaddr offset, uint64_t size);
+
+/**
+ * virtio_del_shmem_map() - Remove a memory mapping from a shared region
+ * @shmem: VirtioSharedMemory region
+ * @offset: Offset of the mapping to remove
+ * @size: Size of the mapping to remove
+ *
+ * Removes a memory mapping from the shared memory region. This will
+ * automatically unref the associated VirtioSharedMemoryMapping, which may
+ * trigger its finalization and cleanup if no other references exist.
+ * The mapping's MemoryRegion will be properly unmapped and cleaned up.
+ */
+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
+ uint64_t size);
+
extern const VMStateInfo virtio_vmstate_info;
#define VMSTATE_VIRTIO_DEVICE \
diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index 9c630c2170..034cbfdc3c 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -1592,6 +1592,76 @@ vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN])
return vu_send_message(dev, &msg);
}
+bool
+vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
+ uint64_t shm_offset, uint64_t len, uint64_t flags, int fd)
+{
+ VhostUserMsg vmsg = {
+ .request = VHOST_USER_BACKEND_SHMEM_MAP,
+ .size = sizeof(vmsg.payload.mmap),
+ .flags = VHOST_USER_VERSION,
+ .payload.mmap = {
+ .shmid = shmid,
+ .fd_offset = fd_offset,
+ .shm_offset = shm_offset,
+ .len = len,
+ .flags = flags,
+ },
+ .fd_num = 1,
+ .fds[0] = fd,
+ };
+
+ if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
+ return false;
+ }
+
+ if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
+ vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
+ }
+
+ pthread_mutex_lock(&dev->backend_mutex);
+ if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
+ pthread_mutex_unlock(&dev->backend_mutex);
+ return false;
+ }
+
+ /* Also unlocks the backend_mutex */
+ return vu_process_message_reply(dev, &vmsg);
+}
+
+bool
+vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset, uint64_t len)
+{
+ VhostUserMsg vmsg = {
+ .request = VHOST_USER_BACKEND_SHMEM_UNMAP,
+ .size = sizeof(vmsg.payload.mmap),
+ .flags = VHOST_USER_VERSION,
+ .payload.mmap = {
+ .shmid = shmid,
+ .fd_offset = 0,
+ .shm_offset = shm_offset,
+ .len = len,
+ },
+ };
+
+ if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
+ return false;
+ }
+
+ if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
+ vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
+ }
+
+ pthread_mutex_lock(&dev->backend_mutex);
+ if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
+ pthread_mutex_unlock(&dev->backend_mutex);
+ return false;
+ }
+
+ /* Also unlocks the backend_mutex */
+ return vu_process_message_reply(dev, &vmsg);
+}
+
static bool
vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
{
diff --git a/subprojects/libvhost-user/libvhost-user.h b/subprojects/libvhost-user/libvhost-user.h
index 2ffc58c11b..6a2d0f9fae 100644
--- a/subprojects/libvhost-user/libvhost-user.h
+++ b/subprojects/libvhost-user/libvhost-user.h
@@ -69,6 +69,8 @@ enum VhostUserProtocolFeature {
/* Feature 16 is reserved for VHOST_USER_PROTOCOL_F_STATUS. */
/* Feature 17 reserved for VHOST_USER_PROTOCOL_F_XEN_MMAP. */
VHOST_USER_PROTOCOL_F_SHARED_OBJECT = 18,
+ /* Feature 19 is reserved for VHOST_USER_PROTOCOL_F_DEVICE_STATE */
+ VHOST_USER_PROTOCOL_F_SHMEM = 20,
VHOST_USER_PROTOCOL_F_MAX
};
@@ -127,6 +129,8 @@ typedef enum VhostUserBackendRequest {
VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
+ VHOST_USER_BACKEND_SHMEM_MAP = 9,
+ VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
VHOST_USER_BACKEND_MAX
} VhostUserBackendRequest;
@@ -186,6 +190,23 @@ typedef struct VhostUserShared {
unsigned char uuid[UUID_LEN];
} VhostUserShared;
+/* For the flags field of VhostUserMMap */
+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
+
+typedef struct {
+ /* VIRTIO Shared Memory Region ID */
+ uint8_t shmid;
+ uint8_t padding[7];
+ /* File offset */
+ uint64_t fd_offset;
+ /* Offset within the VIRTIO Shared Memory Region */
+ uint64_t shm_offset;
+ /* Size of the mapping */
+ uint64_t len;
+ /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
+ uint64_t flags;
+} VhostUserMMap;
+
#define VU_PACKED __attribute__((packed))
typedef struct VhostUserMsg {
@@ -210,6 +231,7 @@ typedef struct VhostUserMsg {
VhostUserVringArea area;
VhostUserInflight inflight;
VhostUserShared object;
+ VhostUserMMap mmap;
} payload;
int fds[VHOST_MEMORY_BASELINE_NREGIONS];
@@ -593,6 +615,38 @@ bool vu_add_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
*/
bool vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
+/**
+ * vu_shmem_map:
+ * @dev: a VuDev context
+ * @shmid: VIRTIO Shared Memory Region ID
+ * @fd_offset: File offset
+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
+ * @len: Size of the mapping
+ * @flags: Flags for the mmap operation
+ * @fd: A file descriptor
+ *
+ * Advertises a new mapping to be made in a given VIRTIO Shared Memory Region.
+ *
+ * Returns: TRUE on success, FALSE on failure.
+ */
+bool vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
+ uint64_t shm_offset, uint64_t len, uint64_t flags, int fd);
+
+/**
+ * vu_shmem_unmap:
+ * @dev: a VuDev context
+ * @shmid: VIRTIO Shared Memory Region ID
+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
+ * @len: Size of the mapping
+ *
+ * Requests that the front-end un-mmap a given range in the VIRTIO Shared
+ * Memory Region with the requested `shmid`.
+ *
+ * Returns: TRUE on success, FALSE on failure.
+ */
+bool vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset,
+ uint64_t len);
+
/**
* vu_queue_set_notification:
* @dev: a VuDev context
--
2.49.0
* [PATCH v9 2/7] vhost_user.rst: Align VhostUserMsg excerpt members
2025-10-15 8:59 [PATCH v9 0/7] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
2025-10-15 8:59 ` [PATCH v9 1/7] vhost-user: Add VirtIO Shared Memory map request Albert Esteve
@ 2025-10-15 8:59 ` Albert Esteve
2025-10-15 8:59 ` [PATCH v9 3/7] vhost_user.rst: Add SHMEM_MAP/_UNMAP to spec Albert Esteve
` (4 subsequent siblings)
6 siblings, 0 replies; 17+ messages in thread
From: Albert Esteve @ 2025-10-15 8:59 UTC (permalink / raw)
To: qemu-devel
Cc: stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, manos.pitsidianakis, Fabiano Rosas,
Michael S. Tsirkin, stefanha, Stefano Garzarella, slp, dbassey,
Albert Esteve
Add missing members to the VhostUserMsg excerpt in
the vhost-user spec documentation.
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
docs/interop/vhost-user.rst | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 2e50f2ddfa..436a94c0ee 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -366,11 +366,15 @@ In QEMU the vhost-user message is implemented with the following struct:
struct vhost_vring_state state;
struct vhost_vring_addr addr;
VhostUserMemory memory;
+ VhostUserMemRegMsg mem_reg;
VhostUserLog log;
struct vhost_iotlb_msg iotlb;
VhostUserConfig config;
+ VhostUserCryptoSession session;
VhostUserVringArea area;
VhostUserInflight inflight;
+ VhostUserShared object;
+ VhostUserTransferDeviceState transfer_state;
};
} QEMU_PACKED VhostUserMsg;
--
2.49.0
* [PATCH v9 3/7] vhost_user.rst: Add SHMEM_MAP/_UNMAP to spec
2025-10-15 8:59 [PATCH v9 0/7] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
2025-10-15 8:59 ` [PATCH v9 1/7] vhost-user: Add VirtIO Shared Memory map request Albert Esteve
2025-10-15 8:59 ` [PATCH v9 2/7] vhost_user.rst: Align VhostUserMsg excerpt members Albert Esteve
@ 2025-10-15 8:59 ` Albert Esteve
2025-10-15 9:29 ` Manos Pitsidianakis
2025-10-15 8:59 ` [PATCH v9 4/7] vhost_user: Add frontend get_shmem_config command Albert Esteve
` (3 subsequent siblings)
6 siblings, 1 reply; 17+ messages in thread
From: Albert Esteve @ 2025-10-15 8:59 UTC (permalink / raw)
To: qemu-devel
Cc: stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, manos.pitsidianakis, Fabiano Rosas,
Michael S. Tsirkin, stefanha, Stefano Garzarella, slp, dbassey,
Albert Esteve
Add SHMEM_MAP/_UNMAP requests to the vhost-user
spec documentation.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
docs/interop/vhost-user.rst | 58 +++++++++++++++++++++++++++++++++++++
1 file changed, 58 insertions(+)
diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 436a94c0ee..dab9f3af42 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -350,6 +350,27 @@ Device state transfer parameters
In the future, additional phases might be added e.g. to allow
iterative migration while the device is running.
+MMAP request
+^^^^^^^^^^^^
+
++-------+---------+-----------+------------+-----+-------+
+| shmid | padding | fd_offset | shm_offset | len | flags |
++-------+---------+-----------+------------+-----+-------+
+
+:shmid: an 8-bit shared memory region identifier
+
+:fd_offset: a 64-bit offset of this area from the start
+ of the supplied file descriptor
+
+:shm_offset: a 64-bit offset from the start of the
+ shared memory region identified by ``shmid``
+
+:len: a 64-bit size of the memory to map
+
+:flags: a 64-bit value:
+ - 0: Pages are mapped read-only
+ - 1: Pages are mapped read-write
+
C structure
-----------
@@ -375,6 +396,7 @@ In QEMU the vhost-user message is implemented with the following struct:
VhostUserInflight inflight;
VhostUserShared object;
VhostUserTransferDeviceState transfer_state;
+ VhostUserMMap mmap;
};
} QEMU_PACKED VhostUserMsg;
@@ -1057,6 +1079,7 @@ Protocol features
#define VHOST_USER_PROTOCOL_F_XEN_MMAP 17
#define VHOST_USER_PROTOCOL_F_SHARED_OBJECT 18
#define VHOST_USER_PROTOCOL_F_DEVICE_STATE 19
+ #define VHOST_USER_PROTOCOL_F_SHMEM 20
Front-end message types
-----------------------
@@ -1865,6 +1888,41 @@ is sent by the front-end.
when the operation is successful, or non-zero otherwise. Note that if the
operation fails, no fd is sent to the backend.
+``VHOST_USER_BACKEND_SHMEM_MAP``
+ :id: 9
+ :equivalent ioctl: N/A
+ :request payload: fd and ``struct VhostUserMMap``
+ :reply payload: N/A
+
+ When the ``VHOST_USER_PROTOCOL_F_SHMEM`` protocol feature has been
+ successfully negotiated, this message can be submitted by the backends to
+ advertise a new mapping to be made in a given VIRTIO Shared Memory Region.
+ Upon receiving the message, the front-end will mmap the given fd into the
+ VIRTIO Shared Memory Region with the requested ``shmid``.
+ If ``VHOST_USER_PROTOCOL_F_REPLY_ACK`` is negotiated, and the
+ back-end sets the ``VHOST_USER_NEED_REPLY`` flag, the front-end
+ must respond with zero when the operation is successfully completed,
+ or non-zero otherwise.
+
+ Mapping over an already existing map is not allowed and requests shall fail.
+ Therefore, the memory range in the request must correspond with a valid,
+ free region of the VIRTIO Shared Memory Region. Also, note that mappings
+ consume resources and that the request can fail when there are no resources
+ available.
+
+``VHOST_USER_BACKEND_SHMEM_UNMAP``
+ :id: 10
+ :equivalent ioctl: N/A
+ :request payload: ``struct VhostUserMMap``
+ :reply payload: N/A
+
+ When the ``VHOST_USER_PROTOCOL_F_SHMEM`` protocol feature has been
+ successfully negotiated, this message can be submitted by the backends so
+ that the front-end un-mmaps a given range (``shm_offset``, ``len``) in the
+ VIRTIO Shared Memory Region with the requested ``shmid``. Note that the
+ given range shall correspond to the entirety of a valid mapped region.
+ A reply is generated indicating whether unmapping succeeded.
+
.. _reply_ack:
VHOST_USER_PROTOCOL_F_REPLY_ACK
--
2.49.0
* [PATCH v9 4/7] vhost_user: Add frontend get_shmem_config command
2025-10-15 8:59 [PATCH v9 0/7] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
` (2 preceding siblings ...)
2025-10-15 8:59 ` [PATCH v9 3/7] vhost_user.rst: Add SHMEM_MAP/_UNMAP to spec Albert Esteve
@ 2025-10-15 8:59 ` Albert Esteve
2025-10-15 8:59 ` [PATCH v9 5/7] vhost_user.rst: Add GET_SHMEM_CONFIG message Albert Esteve
` (2 subsequent siblings)
6 siblings, 0 replies; 17+ messages in thread
From: Albert Esteve @ 2025-10-15 8:59 UTC (permalink / raw)
To: qemu-devel
Cc: stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, manos.pitsidianakis, Fabiano Rosas,
Michael S. Tsirkin, stefanha, Stefano Garzarella, slp, dbassey,
Albert Esteve
The frontend can use this command to retrieve
VirtIO Shared Memory Regions configuration from
the backend. The response contains the number of
shared memory regions, their size, and shmid.
This is useful when the frontend is unaware of
the specific backend type and configuration,
for example, in the `vhost-user-device` case.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
hw/virtio/vhost-user.c | 44 +++++++++++++++++++++++++++++++
include/hw/virtio/vhost-backend.h | 10 +++++++
include/hw/virtio/vhost-user.h | 1 +
include/hw/virtio/virtio.h | 2 ++
4 files changed, 57 insertions(+)
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 8a4385a5ad..c39cf52dbc 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -104,6 +104,7 @@ typedef enum VhostUserRequest {
VHOST_USER_GET_SHARED_OBJECT = 41,
VHOST_USER_SET_DEVICE_STATE_FD = 42,
VHOST_USER_CHECK_DEVICE_STATE = 43,
+ VHOST_USER_GET_SHMEM_CONFIG = 44,
VHOST_USER_MAX
} VhostUserRequest;
@@ -138,6 +139,12 @@ typedef struct VhostUserMemRegMsg {
VhostUserMemoryRegion region;
} VhostUserMemRegMsg;
+typedef struct VhostUserShMemConfig {
+ uint32_t nregions;
+ uint32_t padding;
+ uint64_t memory_sizes[VIRTIO_MAX_SHMEM_REGIONS];
+} VhostUserShMemConfig;
+
typedef struct VhostUserLog {
uint64_t mmap_size;
uint64_t mmap_offset;
@@ -244,6 +251,7 @@ typedef union {
VhostUserShared object;
VhostUserTransferDeviceState transfer_state;
VhostUserMMap mmap;
+ VhostUserShMemConfig shmem;
} VhostUserPayload;
typedef struct VhostUserMsg {
@@ -3226,6 +3234,41 @@ static int vhost_user_check_device_state(struct vhost_dev *dev, Error **errp)
return 0;
}
+static int vhost_user_get_shmem_config(struct vhost_dev *dev,
+ int *nregions,
+ uint64_t *memory_sizes,
+ Error **errp)
+{
+ int ret;
+ VhostUserMsg msg = {
+ .hdr.request = VHOST_USER_GET_SHMEM_CONFIG,
+ .hdr.flags = VHOST_USER_VERSION,
+ };
+
+ if (!virtio_has_feature(dev->protocol_features,
+ VHOST_USER_PROTOCOL_F_SHMEM)) {
+ *nregions = 0;
+ return 0;
+ }
+
+ ret = vhost_user_write(dev, &msg, NULL, 0);
+ if (ret < 0) {
+ return ret;
+ }
+
+ ret = vhost_user_read(dev, &msg);
+ if (ret < 0) {
+ return ret;
+ }
+
+ assert(msg.payload.shmem.nregions <= VIRTIO_MAX_SHMEM_REGIONS);
+ *nregions = msg.payload.shmem.nregions;
+ memcpy(memory_sizes,
+ &msg.payload.shmem.memory_sizes,
+ sizeof(uint64_t) * VIRTIO_MAX_SHMEM_REGIONS);
+ return 0;
+}
+
const VhostOps user_ops = {
.backend_type = VHOST_BACKEND_TYPE_USER,
.vhost_backend_init = vhost_user_backend_init,
@@ -3264,4 +3307,5 @@ const VhostOps user_ops = {
.vhost_supports_device_state = vhost_user_supports_device_state,
.vhost_set_device_state_fd = vhost_user_set_device_state_fd,
.vhost_check_device_state = vhost_user_check_device_state,
+ .vhost_get_shmem_config = vhost_user_get_shmem_config,
};
diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
index ff94fa1734..ce58b4d2d6 100644
--- a/include/hw/virtio/vhost-backend.h
+++ b/include/hw/virtio/vhost-backend.h
@@ -163,6 +163,15 @@ typedef int (*vhost_set_device_state_fd_op)(struct vhost_dev *dev,
int *reply_fd,
Error **errp);
typedef int (*vhost_check_device_state_op)(struct vhost_dev *dev, Error **errp);
+/*
+ * Max regions is VIRTIO_MAX_SHMEM_REGIONS, so that is the maximum
+ * number of memory_sizes that will be accepted.
+ */
+typedef int (*vhost_get_shmem_config_op)(struct vhost_dev *dev,
+ int *nregions,
+ uint64_t *memory_sizes,
+ Error **errp);
+
typedef struct VhostOps {
VhostBackendType backend_type;
@@ -220,6 +229,7 @@ typedef struct VhostOps {
vhost_supports_device_state_op vhost_supports_device_state;
vhost_set_device_state_fd_op vhost_set_device_state_fd;
vhost_check_device_state_op vhost_check_device_state;
+ vhost_get_shmem_config_op vhost_get_shmem_config;
} VhostOps;
int vhost_backend_update_device_iotlb(struct vhost_dev *dev,
diff --git a/include/hw/virtio/vhost-user.h b/include/hw/virtio/vhost-user.h
index 9a3f238b43..bacc7d184c 100644
--- a/include/hw/virtio/vhost-user.h
+++ b/include/hw/virtio/vhost-user.h
@@ -32,6 +32,7 @@ enum VhostUserProtocolFeature {
/* Feature 17 reserved for VHOST_USER_PROTOCOL_F_XEN_MMAP. */
VHOST_USER_PROTOCOL_F_SHARED_OBJECT = 18,
VHOST_USER_PROTOCOL_F_DEVICE_STATE = 19,
+ VHOST_USER_PROTOCOL_F_SHMEM = 20,
VHOST_USER_PROTOCOL_F_MAX
};
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index cee807e236..b05c4c8252 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -82,6 +82,8 @@ typedef struct VirtQueueElement
#define VIRTIO_NO_VECTOR 0xffff
+#define VIRTIO_MAX_SHMEM_REGIONS 256
+
/* special index value used internally for config irqs */
#define VIRTIO_CONFIG_IRQ_IDX -1
--
2.49.0
* [PATCH v9 5/7] vhost_user.rst: Add GET_SHMEM_CONFIG message
2025-10-15 8:59 [PATCH v9 0/7] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
` (3 preceding siblings ...)
2025-10-15 8:59 ` [PATCH v9 4/7] vhost_user: Add frontend get_shmem_config command Albert Esteve
@ 2025-10-15 8:59 ` Albert Esteve
2025-10-15 8:59 ` [PATCH v9 6/7] qmp: add shmem feature map Albert Esteve
2025-10-15 8:59 ` [PATCH v9 7/7] vhost-user-device: Add shared memory BAR Albert Esteve
6 siblings, 0 replies; 17+ messages in thread
From: Albert Esteve @ 2025-10-15 8:59 UTC (permalink / raw)
To: qemu-devel
Cc: stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, manos.pitsidianakis, Fabiano Rosas,
Michael S. Tsirkin, stefanha, Stefano Garzarella, slp, dbassey,
Albert Esteve
Add GET_SHMEM_CONFIG vhost-user frontend
message to the spec documentation.
Reviewed-by: Alyssa Ross <hi@alyssa.is>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
docs/interop/vhost-user.rst | 39 +++++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)
diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index dab9f3af42..54d64ae75f 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -371,6 +371,20 @@ MMAP request
- 0: Pages are mapped read-only
- 1: Pages are mapped read-write
+VIRTIO Shared Memory Region configuration
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
++-------------+---------+------------+----+--------------+
+| num regions | padding | mem size 0 | .. | mem size 255 |
++-------------+---------+------------+----+--------------+
+
+:num regions: a 32-bit number of regions
+
+:padding: 32-bit
+
+:mem size: contains ``num regions`` 64-bit fields representing the size of each
+ VIRTIO Shared Memory Region
+
C structure
-----------
@@ -397,6 +411,7 @@ In QEMU the vhost-user message is implemented with the following struct:
VhostUserShared object;
VhostUserTransferDeviceState transfer_state;
VhostUserMMap mmap;
+ VhostUserShMemConfig shmem;
};
} QEMU_PACKED VhostUserMsg;
@@ -1754,6 +1769,30 @@ Front-end message types
Using this function requires prior negotiation of the
``VHOST_USER_PROTOCOL_F_DEVICE_STATE`` feature.
+``VHOST_USER_GET_SHMEM_CONFIG``
+ :id: 44
+ :equivalent ioctl: N/A
+ :request payload: N/A
+ :reply payload: ``struct VhostUserShMemConfig``
+
+ When the ``VHOST_USER_PROTOCOL_F_SHMEM`` protocol feature has been
+ successfully negotiated, this message can be submitted by the front-end
+ to gather the VIRTIO Shared Memory Region configuration. The back-end will
+ respond with the number of VIRTIO Shared Memory Regions it requires, and
+ each shared memory region size in an array. The shared memory IDs are
+ represented by the array index. The information returned shall comply
+ with the following rules:
+
+ * The shared information will remain valid and unchanged for the entire
+ lifetime of the connection.
+
+ * The Shared Memory Region size must be a multiple of the page size
+ supported by mmap(2).
+
+ * The size may be 0 if the region is unused. This can happen when the
+ device does not support an optional feature but does support a feature
+ that uses a higher shmid.
+
Back-end message types
----------------------
--
2.49.0
* [PATCH v9 6/7] qmp: add shmem feature map
2025-10-15 8:59 [PATCH v9 0/7] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
` (4 preceding siblings ...)
2025-10-15 8:59 ` [PATCH v9 5/7] vhost_user.rst: Add GET_SHMEM_CONFIG message Albert Esteve
@ 2025-10-15 8:59 ` Albert Esteve
2025-10-15 8:59 ` [PATCH v9 7/7] vhost-user-device: Add shared memory BAR Albert Esteve
6 siblings, 0 replies; 17+ messages in thread
From: Albert Esteve @ 2025-10-15 8:59 UTC (permalink / raw)
To: qemu-devel
Cc: stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, manos.pitsidianakis, Fabiano Rosas,
Michael S. Tsirkin, stefanha, Stefano Garzarella, slp, dbassey,
Albert Esteve
Add the new vhost-user protocol feature
VHOST_USER_PROTOCOL_F_SHMEM to the
feature map.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
hw/virtio/virtio-qmp.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/hw/virtio/virtio-qmp.c b/hw/virtio/virtio-qmp.c
index b338344c6c..b58a4b1e0e 100644
--- a/hw/virtio/virtio-qmp.c
+++ b/hw/virtio/virtio-qmp.c
@@ -127,6 +127,9 @@ static const qmp_virtio_feature_map_t vhost_user_protocol_map[] = {
FEATURE_ENTRY(VHOST_USER_PROTOCOL_F_DEVICE_STATE, \
"VHOST_USER_PROTOCOL_F_DEVICE_STATE: Backend device state transfer "
"supported"),
+ FEATURE_ENTRY(VHOST_USER_PROTOCOL_F_SHMEM, \
+ "VHOST_USER_PROTOCOL_F_SHMEM: Backend shared memory mapping "
+ "supported"),
{ -1, "" }
};
--
2.49.0
* [PATCH v9 7/7] vhost-user-device: Add shared memory BAR
2025-10-15 8:59 [PATCH v9 0/7] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
` (5 preceding siblings ...)
2025-10-15 8:59 ` [PATCH v9 6/7] qmp: add shmem feature map Albert Esteve
@ 2025-10-15 8:59 ` Albert Esteve
2025-10-15 15:35 ` Stefan Hajnoczi
6 siblings, 1 reply; 17+ messages in thread
From: Albert Esteve @ 2025-10-15 8:59 UTC (permalink / raw)
To: qemu-devel
Cc: stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, manos.pitsidianakis, Fabiano Rosas,
Michael S. Tsirkin, stefanha, Stefano Garzarella, slp, dbassey,
Albert Esteve
Add shared memory BAR support to vhost-user-device-pci
to enable direct file mapping for VIRTIO Shared
Memory Regions.
The implementation creates a consolidated shared
memory BAR that contains all VIRTIO Shared
Memory Regions as subregions. Each region is
configured with its proper shmid, size, and
offset within the BAR. The number and size of
regions are retrieved via VHOST_USER_GET_SHMEM_CONFIG
message sent by vhost-user-base during realization
after virtio_init().
Specifically, it uses BAR 4 to avoid conflicts, as
it is currently unused.
The shared memory BAR is only created when the
backend supports VHOST_USER_PROTOCOL_F_SHMEM and
has configured shared memory regions. This maintains
backward compatibility with backends that do not
support shared memory functionality.
Signed-off-by: Albert Esteve <aesteve@redhat.com>
---
hw/virtio/vhost-user-base.c | 47 ++++++++++++++++++++++++--
hw/virtio/vhost-user-test-device-pci.c | 35 +++++++++++++++++--
2 files changed, 77 insertions(+), 5 deletions(-)
diff --git a/hw/virtio/vhost-user-base.c b/hw/virtio/vhost-user-base.c
index ff67a020b4..82f49500e4 100644
--- a/hw/virtio/vhost-user-base.c
+++ b/hw/virtio/vhost-user-base.c
@@ -16,6 +16,7 @@
#include "hw/virtio/virtio-bus.h"
#include "hw/virtio/vhost-user-base.h"
#include "qemu/error-report.h"
+#include "migration/blocker.h"
static void vub_start(VirtIODevice *vdev)
{
@@ -276,7 +277,9 @@ static void vub_device_realize(DeviceState *dev, Error **errp)
{
VirtIODevice *vdev = VIRTIO_DEVICE(dev);
VHostUserBase *vub = VHOST_USER_BASE(dev);
- int ret;
+ uint64_t memory_sizes[VIRTIO_MAX_SHMEM_REGIONS];
+ g_autofree char *name = NULL;
+ int i, ret, nregions, regions_processed = 0;
if (!vub->chardev.chr) {
error_setg(errp, "vhost-user-base: missing chardev");
@@ -319,7 +322,7 @@ static void vub_device_realize(DeviceState *dev, Error **errp)
/* Allocate queues */
vub->vqs = g_ptr_array_sized_new(vub->num_vqs);
- for (int i = 0; i < vub->num_vqs; i++) {
+ for (i = 0; i < vub->num_vqs; i++) {
g_ptr_array_add(vub->vqs,
virtio_add_queue(vdev, vub->vq_size,
vub_handle_output));
@@ -333,11 +336,49 @@ static void vub_device_realize(DeviceState *dev, Error **errp)
VHOST_BACKEND_TYPE_USER, 0, errp);
if (ret < 0) {
- do_vhost_user_cleanup(vdev, vub);
+ goto err;
+ }
+
+ ret = vub->vhost_dev.vhost_ops->vhost_get_shmem_config(&vub->vhost_dev,
+ &nregions,
+ memory_sizes,
+ errp);
+
+ if (ret < 0) {
+ goto err;
+ }
+
+ for (i = 0; i < VIRTIO_MAX_SHMEM_REGIONS && regions_processed < nregions; i++) {
+ if (memory_sizes[i]) {
+ regions_processed++;
+ if (vub->vhost_dev.migration_blocker == NULL) {
+ error_setg(&vub->vhost_dev.migration_blocker,
+ "Migration disabled: devices with VIRTIO Shared Memory "
+ "Regions do not support migration yet.");
+ ret = migrate_add_blocker_normal(
+ &vub->vhost_dev.migration_blocker,
+ errp);
+
+ if (ret < 0) {
+ goto err;
+ }
+ }
+
+ if (memory_sizes[i] % qemu_real_host_page_size() != 0) {
+ error_setg(errp, "Shared memory %d size must be a multiple "
+ "of the page size", i);
+ goto err;
+ }
+
+ virtio_new_shmem_region(vdev, i, memory_sizes[i]);
+ }
}
qemu_chr_fe_set_handlers(&vub->chardev, NULL, NULL, vub_event, NULL,
dev, NULL, true);
+ return;
+err:
+ do_vhost_user_cleanup(vdev, vub);
}
static void vub_device_unrealize(DeviceState *dev)
diff --git a/hw/virtio/vhost-user-test-device-pci.c b/hw/virtio/vhost-user-test-device-pci.c
index b4ed0efb50..60e59f5b1c 100644
--- a/hw/virtio/vhost-user-test-device-pci.c
+++ b/hw/virtio/vhost-user-test-device-pci.c
@@ -8,14 +8,18 @@
*/
#include "qemu/osdep.h"
+#include "qapi/error.h"
#include "hw/qdev-properties.h"
#include "hw/virtio/vhost-user-base.h"
#include "hw/virtio/virtio-pci.h"
+#define VIRTIO_DEVICE_PCI_SHMEM_BAR 4
+
struct VHostUserDevicePCI {
VirtIOPCIProxy parent_obj;
VHostUserBase vub;
+ MemoryRegion shmembar;
};
#define TYPE_VHOST_USER_TEST_DEVICE_PCI "vhost-user-test-device-pci-base"
@@ -25,10 +29,37 @@ OBJECT_DECLARE_SIMPLE_TYPE(VHostUserDevicePCI, VHOST_USER_TEST_DEVICE_PCI)
static void vhost_user_device_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
{
VHostUserDevicePCI *dev = VHOST_USER_TEST_DEVICE_PCI(vpci_dev);
- DeviceState *vdev = DEVICE(&dev->vub);
+ DeviceState *dev_state = DEVICE(&dev->vub);
+ VirtIODevice *vdev = VIRTIO_DEVICE(dev_state);
+ VirtioSharedMemory *shmem, *next;
+ uint64_t offset = 0, shmem_size = 0;
+ vpci_dev->modern_mem_bar_idx = 2;
vpci_dev->nvectors = 1;
- qdev_realize(vdev, BUS(&vpci_dev->bus), errp);
+ qdev_realize(dev_state, BUS(&vpci_dev->bus), errp);
+
+ QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
+ if (shmem->mr.size > UINT64_MAX - shmem_size) {
+ error_setg(errp, "Total shared memory required overflow");
+ return;
+ }
+ shmem_size = shmem_size + shmem->mr.size;
+ }
+ if (shmem_size) {
+ memory_region_init(&dev->shmembar, OBJECT(vpci_dev),
+ "vhost-device-pci-shmembar", shmem_size);
+ QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
+ memory_region_add_subregion(&dev->shmembar, offset, &shmem->mr);
+ virtio_pci_add_shm_cap(vpci_dev, VIRTIO_DEVICE_PCI_SHMEM_BAR,
+ offset, shmem->mr.size, shmem->shmid);
+ offset = offset + shmem->mr.size;
+ }
+ pci_register_bar(&vpci_dev->pci_dev, VIRTIO_DEVICE_PCI_SHMEM_BAR,
+ PCI_BASE_ADDRESS_SPACE_MEMORY |
+ PCI_BASE_ADDRESS_MEM_PREFETCH |
+ PCI_BASE_ADDRESS_MEM_TYPE_64,
+ &dev->shmembar);
+ }
}
static void vhost_user_device_pci_class_init(ObjectClass *klass,
--
2.49.0
* Re: [PATCH v9 3/7] vhost_user.rst: Add SHMEM_MAP/_UNMAP to spec
2025-10-15 8:59 ` [PATCH v9 3/7] vhost_user.rst: Add SHMEM_MAP/_UNMAP to spec Albert Esteve
@ 2025-10-15 9:29 ` Manos Pitsidianakis
0 siblings, 0 replies; 17+ messages in thread
From: Manos Pitsidianakis @ 2025-10-15 9:29 UTC (permalink / raw)
To: Albert Esteve, qemu-devel
Cc: stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, manos.pitsidianakis, Fabiano Rosas,
Michael S. Tsirkin, stefanha, Stefano Garzarella, slp, dbassey,
Albert Esteve
On Wed, 15 Oct 2025 11:59, Albert Esteve <aesteve@redhat.com> wrote:
>Add SHMEM_MAP/_UNMAP request to the vhost-user
>spec documentation.
>
>Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>Signed-off-by: Albert Esteve <aesteve@redhat.com>
>---
I found a minor typo (response inline) but otherwise LGTM
Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
> docs/interop/vhost-user.rst | 58 +++++++++++++++++++++++++++++++++++++
> 1 file changed, 58 insertions(+)
>
>diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
>index 436a94c0ee..dab9f3af42 100644
>--- a/docs/interop/vhost-user.rst
>+++ b/docs/interop/vhost-user.rst
>@@ -350,6 +350,27 @@ Device state transfer parameters
> In the future, additional phases might be added e.g. to allow
> iterative migration while the device is running.
>
>+MMAP request
>+^^^^^^^^^^^^
>+
>++-------+---------+-----------+------------+-----+-------+
>+| shmid | padding | fd_offset | shm_offset | len | flags |
>++-------+---------+-----------+------------+-----+-------+
>+
>+:shmid: a 8-bit shared memory region identifier
>+
>+:fd_offset: a 64-bit offset of this area from the start
>+ of the supplied file descriptor
>+
>+:shm_offset: a 64-bit offset from the start of the
>+ pointed shared memory region
>+
>+:len: a 64-bit size of the memory to map
>+
>+:flags: a 64-bit value:
>+ - 0: Pages are mapped read-only
>+ - 1: Pages are mapped read-write
>+
> C structure
> -----------
>
>@@ -375,6 +396,7 @@ In QEMU the vhost-user message is implemented with the following struct:
> VhostUserInflight inflight;
> VhostUserShared object;
> VhostUserTransferDeviceState transfer_state;
>+ VhostUserMMap mmap;
> };
> } QEMU_PACKED VhostUserMsg;
>
>@@ -1057,6 +1079,7 @@ Protocol features
> #define VHOST_USER_PROTOCOL_F_XEN_MMAP 17
> #define VHOST_USER_PROTOCOL_F_SHARED_OBJECT 18
> #define VHOST_USER_PROTOCOL_F_DEVICE_STATE 19
>+ #define VHOST_USER_PROTOCOL_F_SHMEM 20
>
> Front-end message types
> -----------------------
>@@ -1865,6 +1888,41 @@ is sent by the front-end.
> when the operation is successful, or non-zero otherwise. Note that if the
> operation fails, no fd is sent to the backend.
>
>+``VHOST_USER_BACKEND_SHMEM_MAP``
>+ :id: 9
>+ :equivalent ioctl: N/A
>+ :request payload: fd and ``struct VhostUserMMap``
>+ :reply payload: N/A
>+
>+ When the ``VHOST_USER_PROTOCOL_F_SHMEM`` protocol feature has been
>+ successfully negotiated, this message can be submitted by the backends to
>+ advertise a new mapping to be made in a given VIRTIO Shared Memory Region.
>+ Upon receiving the message, the front-end will mmap the given fd into the
>+ VIRTIO Shared Memory Region with the requested ``shmid``.
>+ If``VHOST_USER_PROTOCOL_F_REPLY_ACK`` is negotiated, and
Space missing after initial "If"
>+ back-end set the ``VHOST_USER_NEED_REPLY`` flag, the front-end
>+ must respond with zero when operation is successfully completed,
>+ or non-zero otherwise.
>+
>+ Mapping over an already existing map is not allowed and requests shall fail.
>+ Therefore, the memory range in the request must correspond with a valid,
>+ free region of the VIRTIO Shared Memory Region. Also, note that mappings
>+ consume resources and that the request can fail when there are no resources
>+ available.
>+
>+``VHOST_USER_BACKEND_SHMEM_UNMAP``
>+ :id: 10
>+ :equivalent ioctl: N/A
>+ :request payload: ``struct VhostUserMMap``
>+ :reply payload: N/A
>+
>+ When the ``VHOST_USER_PROTOCOL_F_SHMEM`` protocol feature has been
>+ successfully negotiated, this message can be submitted by the backends so
>+ that the front-end un-mmaps a given range (``shm_offset``, ``len``) in the
>+ VIRTIO Shared Memory Region with the requested ``shmid``. Note that the
>+ given range shall correspond to the entirety of a valid mapped region.
>+ A reply is generated indicating whether unmapping succeeded.
>+
> .. _reply_ack:
>
> VHOST_USER_PROTOCOL_F_REPLY_ACK
>--
>2.49.0
>
* Re: [PATCH v9 1/7] vhost-user: Add VirtIO Shared Memory map request
2025-10-15 8:59 ` [PATCH v9 1/7] vhost-user: Add VirtIO Shared Memory map request Albert Esteve
@ 2025-10-15 9:47 ` Manos Pitsidianakis
2025-10-15 12:29 ` Albert Esteve
2025-10-15 15:27 ` Stefan Hajnoczi
1 sibling, 1 reply; 17+ messages in thread
From: Manos Pitsidianakis @ 2025-10-15 9:47 UTC (permalink / raw)
To: Albert Esteve, qemu-devel
Cc: stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, manos.pitsidianakis, Fabiano Rosas,
Michael S. Tsirkin, stefanha, Stefano Garzarella, slp, dbassey,
Albert Esteve
On Wed, 15 Oct 2025 11:59, Albert Esteve <aesteve@redhat.com> wrote:
>Add SHMEM_MAP/UNMAP requests to vhost-user for dynamic management of
>VIRTIO Shared Memory mappings.
>
>This implementation introduces VirtioSharedMemoryMapping as a unified
>QOM object that manages both the mapping metadata and MemoryRegion
>lifecycle. This object provides reference-counted lifecycle management
>with automatic cleanup of file descriptors and memory regions
>through QOM finalization.
>
>This request allows backends to dynamically map file descriptors into a
>VIRTIO Shared Memory Region identified by their shmid. Maps are created
>using memory_region_init_ram_from_fd() with configurable read/write
>permissions, and the resulting MemoryRegions are added as subregions to
>the shmem container region. The mapped memory is then advertised to the
>guest VIRTIO drivers as a base address plus offset for reading and
>writing according to the requested mmap flags.
>
>The backend can unmap memory ranges within a given VIRTIO Shared Memory
>Region to free resources. Upon receiving this message, the frontend
>removes the MemoryRegion as a subregion and automatically unreferences
>the VirtioSharedMemoryMapping object, triggering cleanup if no other
>references exist.
>
>Error handling has been improved to ensure consistent behavior across
>handlers that manage their own vhost_user_send_resp() calls. Since
>these handlers clear the VHOST_USER_NEED_REPLY_MASK flag, explicit
>error checking ensures proper connection closure on failures,
>maintaining the expected error flow.
>
>Note the memory region commit for these operations needs to be delayed
>until after we reply to the backend to avoid deadlocks. Otherwise,
>the MemoryListener would send a VHOST_USER_SET_MEM_TABLE message
>before the reply.
>
>Signed-off-by: Albert Esteve <aesteve@redhat.com>
>---
> hw/virtio/vhost-user.c | 213 ++++++++++++++++++++++
> hw/virtio/virtio.c | 207 +++++++++++++++++++++
> include/hw/virtio/virtio.h | 136 ++++++++++++++
> subprojects/libvhost-user/libvhost-user.c | 70 +++++++
> subprojects/libvhost-user/libvhost-user.h | 54 ++++++
> 5 files changed, 680 insertions(+)
>
>diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
>index 36c9c2e04d..8a4385a5ad 100644
>--- a/hw/virtio/vhost-user.c
>+++ b/hw/virtio/vhost-user.c
>@@ -115,6 +115,8 @@ typedef enum VhostUserBackendRequest {
> VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
>+ VHOST_USER_BACKEND_SHMEM_MAP = 9,
>+ VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> VHOST_USER_BACKEND_MAX
> } VhostUserBackendRequest;
>
>@@ -192,6 +194,23 @@ typedef struct VhostUserShared {
> unsigned char uuid[16];
> } VhostUserShared;
>
>+/* For the flags field of VhostUserMMap */
>+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
>+
>+typedef struct {
>+ /* VIRTIO Shared Memory Region ID */
>+ uint8_t shmid;
>+ uint8_t padding[7];
>+ /* File offset */
>+ uint64_t fd_offset;
>+ /* Offset within the VIRTIO Shared Memory Region */
>+ uint64_t shm_offset;
>+ /* Size of the mapping */
>+ uint64_t len;
>+ /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
>+ uint64_t flags;
>+} VhostUserMMap;
>+
> typedef struct {
> VhostUserRequest request;
>
>@@ -224,6 +243,7 @@ typedef union {
> VhostUserInflight inflight;
> VhostUserShared object;
> VhostUserTransferDeviceState transfer_state;
>+ VhostUserMMap mmap;
> } VhostUserPayload;
>
> typedef struct VhostUserMsg {
>@@ -1768,6 +1788,186 @@ vhost_user_backend_handle_shared_object_lookup(struct vhost_user *u,
> return 0;
> }
>
>+/**
>+ * vhost_user_backend_handle_shmem_map() - Handle SHMEM_MAP backend request
>+ * @dev: vhost device
>+ * @ioc: QIOChannel for communication
>+ * @hdr: vhost-user message header
>+ * @payload: message payload containing mapping details
>+ * @fd: file descriptor for the shared memory region
>+ *
>+ * Handles VHOST_USER_BACKEND_SHMEM_MAP requests from the backend. Creates
>+ * a VirtioSharedMemoryMapping to manage the shared memory mapping and adds
>+ * it to the appropriate VirtIO shared memory region. The mapping object
>+ * serves as an intermediate parent for the MemoryRegion, ensuring proper
>+ * lifecycle management with reference counting.
>+ *
>+ * Returns: 0 on success, negative errno on failure
>+ */
>+static int
>+vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
>+ QIOChannel *ioc,
>+ VhostUserHeader *hdr,
>+ VhostUserPayload *payload,
>+ int fd)
>+{
>+ VirtioSharedMemory *shmem;
>+ VhostUserMMap *vu_mmap = &payload->mmap;
>+ VirtioSharedMemoryMapping *existing;
>+ Error *local_err = NULL;
>+ int ret = 0;
>+
>+ if (fd < 0) {
>+ error_report("Bad fd for map");
>+ ret = -EBADF;
>+ goto send_reply;
>+ }
>+
>+ if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
>+ error_report("Device has no VIRTIO Shared Memory Regions. "
>+ "Requested ID: %d", vu_mmap->shmid);
>+ ret = -EFAULT;
>+ goto send_reply;
>+ }
>+
>+ shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
>+ if (!shmem) {
>+ error_report("VIRTIO Shared Memory Region at "
>+ "ID %d not found or uninitialized", vu_mmap->shmid);
>+ ret = -EFAULT;
>+ goto send_reply;
>+ }
>+
>+ if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
>+ (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
>+ error_report("Bad offset/len for mmap %" PRIx64 "+%" PRIx64,
>+ vu_mmap->shm_offset, vu_mmap->len);
>+ ret = -EFAULT;
>+ goto send_reply;
>+ }
>+
>+ QTAILQ_FOREACH(existing, &shmem->mmaps, link) {
>+ if (ranges_overlap(existing->offset, existing->len,
>+ vu_mmap->shm_offset, vu_mmap->len)) {
>+ error_report("VIRTIO Shared Memory mapping overlap");
>+ ret = -EFAULT;
>+ goto send_reply;
>+ }
>+ }
>+
>+ memory_region_transaction_begin();
>+
>+ /* Create VirtioSharedMemoryMapping object */
>+ VirtioSharedMemoryMapping *mapping = virtio_shared_memory_mapping_new(
>+ vu_mmap->shmid, fd, vu_mmap->fd_offset, vu_mmap->shm_offset,
>+ vu_mmap->len, vu_mmap->flags & VHOST_USER_FLAG_MAP_RW);
>+
>+ if (!mapping) {
>+ ret = -EFAULT;
>+ goto send_reply;
>+ }
>+
>+ /* Add the mapping to the shared memory region */
>+ if (virtio_add_shmem_map(shmem, mapping) != 0) {
>+ error_report("Failed to add shared memory mapping");
>+ object_unref(OBJECT(mapping));
>+ ret = -EFAULT;
>+ goto send_reply;
>+ }
>+
>+send_reply:
>+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
>+ payload->u64 = !!ret;
>+ hdr->size = sizeof(payload->u64);
>+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
>+ error_report_err(local_err);
>+ memory_region_transaction_commit();
>+ return -EFAULT;
>+ }
>+ }
>+
>+ memory_region_transaction_commit();
>+
>+ return 0;
>+}
>+
>+/**
>+ * vhost_user_backend_handle_shmem_unmap() - Handle SHMEM_UNMAP backend request
>+ * @dev: vhost device
>+ * @ioc: QIOChannel for communication
>+ * @hdr: vhost-user message header
>+ * @payload: message payload containing unmapping details
>+ *
>+ * Handles VHOST_USER_BACKEND_SHMEM_UNMAP requests from the backend. Removes
>+ * the specified memory mapping from the VirtIO shared memory region. This
>+ * automatically unreferences the associated VirtioSharedMemoryMapping, which may
>+ * trigger its finalization and cleanup (munmap, close fd) if no other
>+ * references exist.
>+ *
>+ * Returns: 0 on success, negative errno on failure
>+ */
>+static int
>+vhost_user_backend_handle_shmem_unmap(struct vhost_dev *dev,
>+ QIOChannel *ioc,
>+ VhostUserHeader *hdr,
>+ VhostUserPayload *payload)
>+{
>+ VirtioSharedMemory *shmem;
>+ VirtioSharedMemoryMapping *mmap = NULL;
>+ VhostUserMMap *vu_mmap = &payload->mmap;
>+ Error *local_err = NULL;
>+ int ret = 0;
>+
>+ if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
>+ error_report("Device has no VIRTIO Shared Memory Regions. "
>+ "Requested ID: %d", vu_mmap->shmid);
>+ ret = -EFAULT;
>+ goto send_reply;
>+ }
>+
>+ shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
>+ if (!shmem) {
>+ error_report("VIRTIO Shared Memory Region at "
>+ "ID %d not found or uninitialized", vu_mmap->shmid);
>+ ret = -EFAULT;
>+ goto send_reply;
>+ }
>+
>+ if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
>+ (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
>+        error_report("Bad offset/len for unmap %" PRIx64 "+%" PRIx64,
>+ vu_mmap->shm_offset, vu_mmap->len);
>+ ret = -EFAULT;
>+ goto send_reply;
>+ }
>+
>+ mmap = virtio_find_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
>+ if (!mmap) {
>+ error_report("Shared memory mapping not found at offset %" PRIx64
>+ " with length %" PRIx64,
>+ vu_mmap->shm_offset, vu_mmap->len);
>+ ret = -EFAULT;
>+ goto send_reply;
>+ }
>+
>+send_reply:
>+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
>+ payload->u64 = !!ret;
>+ hdr->size = sizeof(payload->u64);
>+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
>+ error_report_err(local_err);
>+ return -EFAULT;
>+ }
>+ }
>+
>+ if (!ret && mmap) {
>+ /* Free the MemoryRegion only after reply */
>+ virtio_del_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
>+ }
>+
>+ return 0;
>+}
>+
> static void close_backend_channel(struct vhost_user *u)
> {
> g_source_destroy(u->backend_src);
>@@ -1836,6 +2036,19 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
> ret = vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
> &hdr, &payload);
> break;
>+ case VHOST_USER_BACKEND_SHMEM_MAP:
>+ /* Handler manages its own response, check error and close connection */
>+ if (vhost_user_backend_handle_shmem_map(dev, ioc, &hdr, &payload,
>+ fd ? fd[0] : -1) < 0) {
>+ goto err;
>+ }
>+ break;
>+ case VHOST_USER_BACKEND_SHMEM_UNMAP:
>+ /* Handler manages its own response, check error and close connection */
>+ if (vhost_user_backend_handle_shmem_unmap(dev, ioc, &hdr, &payload) < 0) {
>+ goto err;
>+ }
>+ break;
> default:
> error_report("Received unexpected msg type: %d.", hdr.request);
> ret = -EINVAL;
>diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
>index 153ee0a0cf..37070e7951 100644
>--- a/hw/virtio/virtio.c
>+++ b/hw/virtio/virtio.c
>@@ -3086,6 +3086,181 @@ int virtio_save(VirtIODevice *vdev, QEMUFile *f)
> return vmstate_save_state(f, &vmstate_virtio, vdev, NULL, &error_fatal);
> }
>
>+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid, uint64_t size)
>+{
>+ VirtioSharedMemory *elem;
>+ g_autofree char *name = NULL;
>+
>+ elem = g_new0(VirtioSharedMemory, 1);
>+ elem->shmid = shmid;
>+
>+ /* Initialize embedded MemoryRegion as container for shmem mappings */
>+ name = g_strdup_printf("virtio-shmem-%d", shmid);
>+ memory_region_init(&elem->mr, OBJECT(vdev), name, size);
>+ QTAILQ_INIT(&elem->mmaps);
>+ QSIMPLEQ_INSERT_TAIL(&vdev->shmem_list, elem, entry);
>+ return elem;
>+}
>+
>+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid)
>+{
>+ VirtioSharedMemory *shmem, *next;
>+ QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
>+ if (shmem->shmid == shmid) {
>+ return shmem;
>+ }
>+ }
>+ return NULL;
>+}
>+
>+static void virtio_shared_memory_mapping_instance_init(Object *obj)
>+{
>+ VirtioSharedMemoryMapping *mapping = VIRTIO_SHARED_MEMORY_MAPPING(obj);
>+
>+ mapping->shmid = 0;
>+ mapping->fd = -1;
>+ mapping->offset = 0;
>+ mapping->len = 0;
>+ mapping->mr = NULL;
>+}
>+
>+static void virtio_shared_memory_mapping_instance_finalize(Object *obj)
>+{
>+ VirtioSharedMemoryMapping *mapping = VIRTIO_SHARED_MEMORY_MAPPING(obj);
>+
>+ /* Clean up MemoryRegion if it exists */
>+ if (mapping->mr) {
>+ /* Unparent the MemoryRegion to trigger cleanup */
>+ object_unparent(OBJECT(mapping->mr));
>+ mapping->mr = NULL;
>+ }
>+
>+ /* Close file descriptor */
>+ if (mapping->fd >= 0) {
>+ close(mapping->fd);
>+ mapping->fd = -1;
>+ }
>+}
>+
>+VirtioSharedMemoryMapping *virtio_shared_memory_mapping_new(uint8_t shmid,
>+ int fd,
>+ uint64_t fd_offset,
>+ uint64_t shm_offset,
>+ uint64_t len,
>+ bool allow_write)
>+{
>+ VirtioSharedMemoryMapping *mapping;
>+ MemoryRegion *mr;
>+ g_autoptr(GString) mr_name = g_string_new(NULL);
>+ uint32_t ram_flags;
>+ Error *local_err = NULL;
>+
>+ if (len == 0) {
>+ error_report("Shared memory mapping size cannot be zero");
>+ return NULL;
>+ }
>+
>+ fd = dup(fd);
>+ if (fd < 0) {
>+ error_report("Failed to duplicate fd: %s", strerror(errno));
>+ return NULL;
>+ }
>+
>+ /* Determine RAM flags */
>+ ram_flags = RAM_SHARED;
>+ if (!allow_write) {
>+ ram_flags |= RAM_READONLY_FD;
>+ }
>+
>+ /* Create the VirtioSharedMemoryMapping */
>+ mapping = VIRTIO_SHARED_MEMORY_MAPPING(
>+ object_new(TYPE_VIRTIO_SHARED_MEMORY_MAPPING));
>+
>+ /* Set up object properties */
>+ mapping->shmid = shmid;
>+ mapping->fd = fd;
>+ mapping->offset = shm_offset;
>+ mapping->len = len;
>+
>+ /* Create MemoryRegion as a child of this object */
>+ mr = g_new0(MemoryRegion, 1);
>+ g_string_printf(mr_name, "virtio-shmem-%d-%" PRIx64, shmid, shm_offset);
>+
>+ /* Initialize MemoryRegion with file descriptor */
>+ if (!memory_region_init_ram_from_fd(mr, OBJECT(mapping), mr_name->str,
>+ len, ram_flags, fd, fd_offset,
>+ &local_err)) {
>+ error_report_err(local_err);
>+ g_free(mr);
>+ close(fd);
>+ object_unref(OBJECT(mapping));
>+ return NULL;
>+ }
>+
>+ mapping->mr = mr;
>+ return mapping;
>+}
>+
>+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
>+ VirtioSharedMemoryMapping *mapping)
>+{
>+ if (!mapping) {
>+ error_report("VirtioSharedMemoryMapping cannot be NULL");
>+ return -1;
>+ }
>+ if (!mapping->mr) {
>+ error_report("VirtioSharedMemoryMapping has no MemoryRegion");
>+ return -1;
>+ }
>+
>+ /* Validate boundaries against the VIRTIO shared memory region */
>+ if (mapping->offset + mapping->len > shmem->mr.size) {
>+ error_report("Memory exceeds the shared memory boundaries");
>+ return -1;
>+ }
>+
>+ /* Add as subregion to the VIRTIO shared memory */
>+ memory_region_add_subregion(&shmem->mr, mapping->offset, mapping->mr);
>+
>+ /* Add to the mapped regions list */
>+ QTAILQ_INSERT_TAIL(&shmem->mmaps, mapping, link);
>+
>+ return 0;
>+}
>+
>+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
>+ hwaddr offset, uint64_t size)
>+{
>+ VirtioSharedMemoryMapping *mapping;
>+ QTAILQ_FOREACH(mapping, &shmem->mmaps, link) {
>+ if (mapping->offset == offset && mapping->mr->size == size) {
>+ return mapping;
>+ }
>+ }
>+ return NULL;
>+}
>+
>+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
>+ uint64_t size)
>+{
>+ VirtioSharedMemoryMapping *mapping = virtio_find_shmem_map(shmem, offset, size);
>+ if (mapping == NULL) {
>+ return;
>+ }
>+
>+ /*
>+ * Remove from memory region first
>+ */
>+ memory_region_del_subregion(&shmem->mr, mapping->mr);
>+
>+ /*
>+ * Remove from list and unref the mapping which will trigger automatic cleanup
>+ * when the reference count reaches zero.
>+ */
>+ QTAILQ_REMOVE(&shmem->mmaps, mapping, link);
>+ object_unref(OBJECT(mapping));
>+}
>+
> /* A wrapper for use as a VMState .put function */
> static int virtio_device_put(QEMUFile *f, void *opaque, size_t size,
> const VMStateField *field, JSONWriter *vmdesc)
>@@ -3212,6 +3387,7 @@ void virtio_reset(void *opaque)
> {
> VirtIODevice *vdev = opaque;
> VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
>+ VirtioSharedMemory *shmem;
> uint64_t features[VIRTIO_FEATURES_NU64S];
> int i;
>
>@@ -3251,6 +3427,14 @@ void virtio_reset(void *opaque)
> for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
> __virtio_queue_reset(vdev, i);
> }
>+
>+ /* Mappings are removed to prevent stale fds from remaining open. */
>+ QSIMPLEQ_FOREACH(shmem, &vdev->shmem_list, entry) {
>+ while (!QTAILQ_EMPTY(&shmem->mmaps)) {
>+ VirtioSharedMemoryMapping *mapping = QTAILQ_FIRST(&shmem->mmaps);
>+ virtio_del_shmem_map(shmem, mapping->offset, mapping->mr->size);
>+ }
>+ }
> }
>
> static void virtio_device_check_notification_compatibility(VirtIODevice *vdev,
>@@ -3574,6 +3758,7 @@ void virtio_init(VirtIODevice *vdev, uint16_t device_id, size_t config_size)
> NULL, virtio_vmstate_change, vdev);
> vdev->device_endian = virtio_default_endian();
> vdev->use_guest_notifier_mask = true;
>+ QSIMPLEQ_INIT(&vdev->shmem_list);
> }
>
> /*
>@@ -4085,11 +4270,24 @@ static void virtio_device_free_virtqueues(VirtIODevice *vdev)
> static void virtio_device_instance_finalize(Object *obj)
> {
> VirtIODevice *vdev = VIRTIO_DEVICE(obj);
>+ VirtioSharedMemory *shmem;
>
> virtio_device_free_virtqueues(vdev);
>
> g_free(vdev->config);
> g_free(vdev->vector_queues);
>+ while (!QSIMPLEQ_EMPTY(&vdev->shmem_list)) {
>+ shmem = QSIMPLEQ_FIRST(&vdev->shmem_list);
>+ while (!QTAILQ_EMPTY(&shmem->mmaps)) {
>+ VirtioSharedMemoryMapping *mapping = QTAILQ_FIRST(&shmem->mmaps);
>+ virtio_del_shmem_map(shmem, mapping->offset, mapping->mr->size);
>+ }
>+
>+ /* Clean up the embedded MemoryRegion */
>+ object_unparent(OBJECT(&shmem->mr));
>+ QSIMPLEQ_REMOVE_HEAD(&vdev->shmem_list, entry);
>+ g_free(shmem);
>+ }
> }
>
> static const Property virtio_properties[] = {
>@@ -4455,9 +4653,18 @@ static const TypeInfo virtio_device_info = {
> .class_size = sizeof(VirtioDeviceClass),
> };
>
>+static const TypeInfo virtio_shared_memory_mapping_info = {
>+ .name = TYPE_VIRTIO_SHARED_MEMORY_MAPPING,
>+ .parent = TYPE_OBJECT,
>+ .instance_size = sizeof(VirtioSharedMemoryMapping),
>+ .instance_init = virtio_shared_memory_mapping_instance_init,
>+ .instance_finalize = virtio_shared_memory_mapping_instance_finalize,
>+};
>+
> static void virtio_register_types(void)
> {
> type_register_static(&virtio_device_info);
>+ type_register_static(&virtio_shared_memory_mapping_info);
> }
>
> type_init(virtio_register_types)
>diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
>index d97529c3f1..cee807e236 100644
>--- a/include/hw/virtio/virtio.h
>+++ b/include/hw/virtio/virtio.h
>@@ -99,6 +99,46 @@ enum virtio_device_endian {
> VIRTIO_DEVICE_ENDIAN_BIG,
> };
>
>+#define TYPE_VIRTIO_SHARED_MEMORY_MAPPING "virtio-shared-memory-mapping"
>+OBJECT_DECLARE_SIMPLE_TYPE(VirtioSharedMemoryMapping, VIRTIO_SHARED_MEMORY_MAPPING)
>+
>+/**
>+ * VirtioSharedMemoryMapping:
>+ * @parent: Parent QOM object
>+ * @shmid: VIRTIO Shared Memory Region ID
>+ * @fd: File descriptor for the shared memory region
>+ * @offset: Offset within the VIRTIO Shared Memory Region
>+ * @len: Size of the mapping
>+ * @mr: MemoryRegion associated with this shared memory mapping
>+ * @link: List entry for the shared memory region's mapping list
>+ *
>+ * A QOM object that represents an individual file descriptor-based shared
>+ * memory mapping within a VIRTIO Shared Memory Region. It manages the
>+ * MemoryRegion lifecycle and file descriptor cleanup through QOM reference
>+ * counting. When the object is unreferenced and its reference count drops
>+ * to zero, it automatically cleans up the MemoryRegion and closes the file
>+ * descriptor.
>+ */
>+struct VirtioSharedMemoryMapping {
>+ Object parent;
>+
>+ uint8_t shmid;
>+ int fd;
>+ hwaddr offset;
>+ uint64_t len;
>+ MemoryRegion *mr;
>+ QTAILQ_ENTRY(VirtioSharedMemoryMapping) link;
>+};
>+
>+struct VirtioSharedMemory {
>+ uint8_t shmid;
>+ MemoryRegion mr;
>+ QTAILQ_HEAD(, VirtioSharedMemoryMapping) mmaps;
>+ QSIMPLEQ_ENTRY(VirtioSharedMemory) entry;
>+};
>+
>+typedef struct VirtioSharedMemory VirtioSharedMemory;
>+
> /**
> * struct VirtIODevice - common VirtIO structure
> * @name: name of the device
>@@ -168,6 +208,8 @@ struct VirtIODevice
> */
> EventNotifier config_notifier;
> bool device_iotlb_enabled;
>+ /* Shared memory region for mappings. */
>+ QSIMPLEQ_HEAD(, VirtioSharedMemory) shmem_list;
> };
>
> struct VirtioDeviceClass {
>@@ -298,6 +340,100 @@ void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
>
> int virtio_save(VirtIODevice *vdev, QEMUFile *f);
>
>+/**
>+ * virtio_new_shmem_region() - Create a new shared memory region
>+ * @vdev: VirtIODevice
>+ * @shmid: Shared memory ID
>+ * @size: Size of the shared memory region
>+ *
>+ * Creates a new VirtioSharedMemory region for the given device and ID.
>+ * The returned VirtioSharedMemory is owned by the VirtIODevice and will
>+ * be automatically freed when the device is destroyed. The caller
>+ * should not free the returned pointer.
>+ *
>+ * Returns: Pointer to the new VirtioSharedMemory region, or NULL on failure
>+ */
>+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid, uint64_t size);
>+
>+/**
>+ * virtio_find_shmem_region() - Find an existing shared memory region
>+ * @vdev: VirtIODevice
>+ * @shmid: Shared memory ID to find
>+ *
>+ * Finds an existing VirtioSharedMemory region by ID. The returned pointer
>+ * is owned by the VirtIODevice and should not be freed by the caller.
>+ *
>+ * Returns: Pointer to the VirtioSharedMemory region, or NULL if not found
>+ */
>+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid);
>+
>+/**
>+ * virtio_shared_memory_mapping_new() - Create a new VirtioSharedMemoryMapping
>+ * @shmid: VIRTIO Shared Memory Region ID
>+ * @fd: File descriptor for the shared memory
>+ * @fd_offset: Offset within the file descriptor
>+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
>+ * @len: Size of the mapping
>+ * @allow_write: Whether to allow write access to the mapping
>+ *
>+ * Creates a new VirtioSharedMemoryMapping that manages a shared memory mapping.
>+ * The object will create a MemoryRegion using memory_region_init_ram_from_fd()
>+ * as a child object. When the object is finalized, it will automatically
>+ * clean up the MemoryRegion and close the file descriptor.
>+ *
>+ * Return: A new VirtioSharedMemoryMapping on success, NULL on error.
>+ */
>+VirtioSharedMemoryMapping *virtio_shared_memory_mapping_new(uint8_t shmid,
>+ int fd,
>+ uint64_t fd_offset,
>+ uint64_t shm_offset,
>+ uint64_t len,
>+ bool allow_write);
>+
>+/**
>+ * virtio_add_shmem_map() - Add a memory mapping to a shared region
>+ * @shmem: VirtioSharedMemory region
>+ * @mapping: VirtioSharedMemoryMapping to add (transfers ownership)
>+ *
>+ * Adds a memory mapping to the shared memory region. The VirtioSharedMemoryMapping
>+ * ownership is transferred to the shared memory region and will be automatically
>+ * cleaned up through QOM reference counting when virtio_del_shmem_map() is
>+ * called or when the shared memory region is destroyed.
>+ *
>+ * Returns: 0 on success, negative errno on failure
>+ */
>+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
>+ VirtioSharedMemoryMapping *mapping);
>+
>+/**
>+ * virtio_find_shmem_map() - Find a memory mapping in a shared region
>+ * @shmem: VirtioSharedMemory region
>+ * @offset: Offset within the shared memory region
>+ * @size: Size of the mapping to find
>+ *
>+ * Finds an existing memory mapping that covers the specified range.
>+ * The returned VirtioSharedMemoryMapping is owned by the VirtioSharedMemory
>+ * region and should not be freed by the caller.
>+ *
>+ * Returns: Pointer to the VirtioSharedMemoryMapping, or NULL if not found
>+ */
>+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
>+ hwaddr offset, uint64_t size);
>+
>+/**
>+ * virtio_del_shmem_map() - Remove a memory mapping from a shared region
>+ * @shmem: VirtioSharedMemory region
>+ * @offset: Offset of the mapping to remove
>+ * @size: Size of the mapping to remove
>+ *
>+ * Removes a memory mapping from the shared memory region. This will
>+ * automatically unref the associated VirtioSharedMemoryMapping, which may
>+ * trigger its finalization and cleanup if no other references exist.
>+ * The mapping's MemoryRegion will be properly unmapped and cleaned up.
>+ */
>+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
>+ uint64_t size);
>+
> extern const VMStateInfo virtio_vmstate_info;
>
> #define VMSTATE_VIRTIO_DEVICE \
>diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
>index 9c630c2170..034cbfdc3c 100644
>--- a/subprojects/libvhost-user/libvhost-user.c
>+++ b/subprojects/libvhost-user/libvhost-user.c
>@@ -1592,6 +1592,76 @@ vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN])
> return vu_send_message(dev, &msg);
> }
>
>+bool
>+vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
>+ uint64_t shm_offset, uint64_t len, uint64_t flags, int fd)
>+{
>+ VhostUserMsg vmsg = {
>+ .request = VHOST_USER_BACKEND_SHMEM_MAP,
>+ .size = sizeof(vmsg.payload.mmap),
>+ .flags = VHOST_USER_VERSION,
>+ .payload.mmap = {
>+ .shmid = shmid,
>+ .fd_offset = fd_offset,
>+ .shm_offset = shm_offset,
>+ .len = len,
>+ .flags = flags,
>+ },
>+ .fd_num = 1,
>+ .fds[0] = fd,
>+ };
>+
>+ if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
>+ return false;
>+ }
>+
>+ if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
>+ vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
>+ }
>+
>+ pthread_mutex_lock(&dev->backend_mutex);
>+ if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
>+ pthread_mutex_unlock(&dev->backend_mutex);
>+ return false;
>+ }
>+
>+ /* Also unlocks the backend_mutex */
>+ return vu_process_message_reply(dev, &vmsg);
>+}
>+
>+bool
>+vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset, uint64_t len)
>+{
>+ VhostUserMsg vmsg = {
>+ .request = VHOST_USER_BACKEND_SHMEM_UNMAP,
>+ .size = sizeof(vmsg.payload.mmap),
>+ .flags = VHOST_USER_VERSION,
>+ .payload.mmap = {
>+ .shmid = shmid,
>+ .fd_offset = 0,
>+ .shm_offset = shm_offset,
>+ .len = len,
>+ },
>+ };
>+
>+ if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
>+ return false;
>+ }
>+
>+ if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
>+ vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
>+ }
>+
>+ pthread_mutex_lock(&dev->backend_mutex);
>+ if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
>+ pthread_mutex_unlock(&dev->backend_mutex);
>+ return false;
>+ }
>+
>+ /* Also unlocks the backend_mutex */
>+ return vu_process_message_reply(dev, &vmsg);
>+}
>+
> static bool
> vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
> {
>diff --git a/subprojects/libvhost-user/libvhost-user.h b/subprojects/libvhost-user/libvhost-user.h
>index 2ffc58c11b..6a2d0f9fae 100644
>--- a/subprojects/libvhost-user/libvhost-user.h
>+++ b/subprojects/libvhost-user/libvhost-user.h
>@@ -69,6 +69,8 @@ enum VhostUserProtocolFeature {
> /* Feature 16 is reserved for VHOST_USER_PROTOCOL_F_STATUS. */
> /* Feature 17 reserved for VHOST_USER_PROTOCOL_F_XEN_MMAP. */
> VHOST_USER_PROTOCOL_F_SHARED_OBJECT = 18,
>+ /* Feature 19 is reserved for VHOST_USER_PROTOCOL_F_DEVICE_STATE */
>+ VHOST_USER_PROTOCOL_F_SHMEM = 20,
> VHOST_USER_PROTOCOL_F_MAX
> };
>
>@@ -127,6 +129,8 @@ typedef enum VhostUserBackendRequest {
> VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
>+ VHOST_USER_BACKEND_SHMEM_MAP = 9,
>+ VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> VHOST_USER_BACKEND_MAX
> } VhostUserBackendRequest;
>
>@@ -186,6 +190,23 @@ typedef struct VhostUserShared {
> unsigned char uuid[UUID_LEN];
> } VhostUserShared;
>
>+/* For the flags field of VhostUserMMap */
>+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
>+
>+typedef struct {
>+ /* VIRTIO Shared Memory Region ID */
>+ uint8_t shmid;
>+ uint8_t padding[7];
>+ /* File offset */
>+ uint64_t fd_offset;
>+ /* Offset within the VIRTIO Shared Memory Region */
>+ uint64_t shm_offset;
>+ /* Size of the mapping */
>+ uint64_t len;
>+ /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
>+ uint64_t flags;
>+} VhostUserMMap;
>+
> #define VU_PACKED __attribute__((packed))
>
> typedef struct VhostUserMsg {
>@@ -210,6 +231,7 @@ typedef struct VhostUserMsg {
> VhostUserVringArea area;
> VhostUserInflight inflight;
> VhostUserShared object;
>+ VhostUserMMap mmap;
> } payload;
>
> int fds[VHOST_MEMORY_BASELINE_NREGIONS];
>@@ -593,6 +615,38 @@ bool vu_add_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
> */
> bool vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
>
>+/**
>+ * vu_shmem_map:
>+ * @dev: a VuDev context
>+ * @shmid: VIRTIO Shared Memory Region ID
>+ * @fd_offset: File offset
>+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
>+ * @len: Size of the mapping
>+ * @flags: Flags for the mmap operation
>+ * @fd: A file descriptor
>+ *
>+ * Advertises a new mapping to be made in a given VIRTIO Shared Memory Region.
>+ *
>+ * Returns: TRUE on success, FALSE on failure.
>+ */
>+bool vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
>+ uint64_t shm_offset, uint64_t len, uint64_t flags, int fd);
>+
>+/**
>+ * vu_shmem_unmap:
>+ * @dev: a VuDev context
>+ * @shmid: VIRTIO Shared Memory Region ID
>+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
>+ * @len: Size of the mapping
>+ *
>+ * Requests that the front-end unmap a given range in the VIRTIO Shared
>+ * Memory Region with the requested `shmid`.
>+ *
>+ * Returns: TRUE on success, FALSE on failure.
>+ */
>+bool vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset,
>+ uint64_t len);
>+
> /**
> * vu_queue_set_notification:
> * @dev: a VuDev context
>--
>2.49.0
>
* Re: [PATCH v9 1/7] vhost-user: Add VirtIO Shared Memory map request
2025-10-15 9:47 ` Manos Pitsidianakis
@ 2025-10-15 12:29 ` Albert Esteve
2025-10-15 12:45 ` Manos Pitsidianakis
0 siblings, 1 reply; 17+ messages in thread
From: Albert Esteve @ 2025-10-15 12:29 UTC (permalink / raw)
To: Manos Pitsidianakis
Cc: qemu-devel, stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, Fabiano Rosas, Michael S. Tsirkin,
stefanha, Stefano Garzarella, slp, dbassey
On Wed, Oct 15, 2025 at 11:52 AM Manos Pitsidianakis
<manos.pitsidianakis@linaro.org> wrote:
>
> On Wed, 15 Oct 2025 11:59, Albert Esteve <aesteve@redhat.com> wrote:
> >Add SHMEM_MAP/UNMAP requests to vhost-user for dynamic management of
> >VIRTIO Shared Memory mappings.
> >
> >This implementation introduces VirtioSharedMemoryMapping as a unified
> >QOM object that manages both the mapping metadata and MemoryRegion
> >lifecycle. This object provides reference-counted lifecycle management
> >with automatic cleanup of file descriptors and memory regions
> >through QOM finalization.
> >
> >This request allows backends to dynamically map file descriptors into a
> >VIRTIO Shared Memory Region identified by their shmid. Maps are created
> >using memory_region_init_ram_from_fd() with configurable read/write
> >permissions, and the resulting MemoryRegions are added as subregions to
> >the shmem container region. The mapped memory is then advertised to the
> >guest VIRTIO drivers as a base address plus offset for reading and
> >writing according to the requested mmap flags.
> >
> >The backend can unmap memory ranges within a given VIRTIO Shared Memory
> >Region to free resources. Upon receiving this message, the frontend
> >removes the MemoryRegion as a subregion and automatically unreferences
> >the VirtioSharedMemoryMapping object, triggering cleanup if no other
> >references exist.
> >
> >Error handling has been improved to ensure consistent behavior across
> >handlers that manage their own vhost_user_send_resp() calls. Since
> >these handlers clear the VHOST_USER_NEED_REPLY_MASK flag, explicit
> >error checking ensures proper connection closure on failures,
> >maintaining the expected error flow.
> >
> >Note the memory region commit for these operations needs to be delayed
> >until after we reply to the backend to avoid deadlocks. Otherwise,
> >the MemoryListener would send a VHOST_USER_SET_MEM_TABLE message
> >before the reply.
> >
> >Signed-off-by: Albert Esteve <aesteve@redhat.com>
> >---
> > hw/virtio/vhost-user.c | 213 ++++++++++++++++++++++
> > hw/virtio/virtio.c | 207 +++++++++++++++++++++
> > include/hw/virtio/virtio.h | 136 ++++++++++++++
> > subprojects/libvhost-user/libvhost-user.c | 70 +++++++
> > subprojects/libvhost-user/libvhost-user.h | 54 ++++++
> > 5 files changed, 680 insertions(+)
> >
> >diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> >index 36c9c2e04d..8a4385a5ad 100644
> >--- a/hw/virtio/vhost-user.c
> >+++ b/hw/virtio/vhost-user.c
> >@@ -115,6 +115,8 @@ typedef enum VhostUserBackendRequest {
> > VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> > VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> > VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> >+ VHOST_USER_BACKEND_SHMEM_MAP = 9,
> >+ VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> > VHOST_USER_BACKEND_MAX
> > } VhostUserBackendRequest;
> >
> >@@ -192,6 +194,23 @@ typedef struct VhostUserShared {
> > unsigned char uuid[16];
> > } VhostUserShared;
> >
> >+/* For the flags field of VhostUserMMap */
> >+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
> >+
> >+typedef struct {
> >+ /* VIRTIO Shared Memory Region ID */
> >+ uint8_t shmid;
> >+ uint8_t padding[7];
> >+ /* File offset */
> >+ uint64_t fd_offset;
> >+ /* Offset within the VIRTIO Shared Memory Region */
> >+ uint64_t shm_offset;
> >+ /* Size of the mapping */
> >+ uint64_t len;
> >+ /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
> >+ uint64_t flags;
> >+} VhostUserMMap;
> >+
> > typedef struct {
> > VhostUserRequest request;
> >
> >@@ -224,6 +243,7 @@ typedef union {
> > VhostUserInflight inflight;
> > VhostUserShared object;
> > VhostUserTransferDeviceState transfer_state;
> >+ VhostUserMMap mmap;
> > } VhostUserPayload;
> >
> > typedef struct VhostUserMsg {
> >@@ -1768,6 +1788,186 @@ vhost_user_backend_handle_shared_object_lookup(struct vhost_user *u,
> > return 0;
> > }
> >
> >+/**
> >+ * vhost_user_backend_handle_shmem_map() - Handle SHMEM_MAP backend request
> >+ * @dev: vhost device
> >+ * @ioc: QIOChannel for communication
> >+ * @hdr: vhost-user message header
> >+ * @payload: message payload containing mapping details
> >+ * @fd: file descriptor for the shared memory region
> >+ *
> >+ * Handles VHOST_USER_BACKEND_SHMEM_MAP requests from the backend. Creates
> >+ * a VirtioSharedMemoryMapping to manage the shared memory mapping and adds
> >+ * it to the appropriate VirtIO shared memory region. The mapping object
> >+ * serves as an intermediate parent for the MemoryRegion, ensuring proper
> >+ * lifecycle management with reference counting.
> >+ *
> >+ * Returns: 0 on success, negative errno on failure
> >+ */
> >+static int
> >+vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> >+ QIOChannel *ioc,
> >+ VhostUserHeader *hdr,
> >+ VhostUserPayload *payload,
> >+ int fd)
> >+{
> >+ VirtioSharedMemory *shmem;
> >+ VhostUserMMap *vu_mmap = &payload->mmap;
> >+ VirtioSharedMemoryMapping *existing;
> >+ Error *local_err = NULL;
> >+ int ret = 0;
> >+
> >+ if (fd < 0) {
> >+ error_report("Bad fd for map");
> >+ ret = -EBADF;
> >+ goto send_reply;
>
> Apologies if I missed something obvious, but won't going to send_reply
> at this point cause memory_region_transaction_commit() to be called?
> Maybe we should have two cleanup labels, depending on whether
> memory_region_transaction_begin() has been called yet?
You are right, this is not good: at transaction_depth 0 it would trip
the assertion in memory_region_transaction_commit().
I am not sure another cleanup label would work, though, since the
transaction commit has to go after the reply. Maybe we can make the
reply within the handler optional (leveraging the REPLY_MASK flag),
but that will need to be tuned when this patch lands:
https://lists.gnu.org/archive/html/qemu-devel/2025-10/msg03632.html
Or just a simpler solution like this would do?
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index c39cf52dbc..46126eed3b 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -1824,6 +1824,7 @@ vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
VirtioSharedMemoryMapping *existing;
Error *local_err = NULL;
int ret = 0;
+ bool transaction_started = false;
if (fd < 0) {
error_report("Bad fd for map");
@@ -1864,6 +1865,7 @@ vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
}
memory_region_transaction_begin();
+ transaction_started = true;
/* Create VirtioSharedMemoryMapping object */
VirtioSharedMemoryMapping *mapping = virtio_shared_memory_mapping_new(
@@ -1889,14 +1891,18 @@ send_reply:
hdr->size = sizeof(payload->u64);
if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
error_report_err(local_err);
- memory_region_transaction_commit();
+ if (transaction_started) {
+ memory_region_transaction_commit();
+ }
return -EFAULT;
}
}
- memory_region_transaction_commit();
-
- return 0;
+ /* Only commit if we started a transaction */
+ if (transaction_started) {
+ memory_region_transaction_commit();
+ }
+ return ret;
}
>
> >+ }
> >+
> >+ if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
> >+ error_report("Device has no VIRTIO Shared Memory Regions. "
> >+ "Requested ID: %d", vu_mmap->shmid);
> >+ ret = -EFAULT;
> >+ goto send_reply;
> >+ }
> >+
> >+ shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
> >+ if (!shmem) {
> >+ error_report("VIRTIO Shared Memory Region at "
> >+ "ID %d not found or uninitialized", vu_mmap->shmid);
> >+ ret = -EFAULT;
> >+ goto send_reply;
> >+ }
> >+
> >+ if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> >+ (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
> >+ error_report("Bad offset/len for mmap %" PRIx64 "+%" PRIx64,
> >+ vu_mmap->shm_offset, vu_mmap->len);
> >+ ret = -EFAULT;
> >+ goto send_reply;
> >+ }
> >+
> >+ QTAILQ_FOREACH(existing, &shmem->mmaps, link) {
> >+ if (ranges_overlap(existing->offset, existing->len,
> >+ vu_mmap->shm_offset, vu_mmap->len)) {
> >+ error_report("VIRTIO Shared Memory mapping overlap");
> >+ ret = -EFAULT;
> >+ goto send_reply;
> >+ }
> >+ }
> >+
> >+ memory_region_transaction_begin();
> >+
> >+ /* Create VirtioSharedMemoryMapping object */
> >+ VirtioSharedMemoryMapping *mapping = virtio_shared_memory_mapping_new(
> >+ vu_mmap->shmid, fd, vu_mmap->fd_offset, vu_mmap->shm_offset,
> >+ vu_mmap->len, vu_mmap->flags & VHOST_USER_FLAG_MAP_RW);
> >+
> >+ if (!mapping) {
> >+ ret = -EFAULT;
> >+ goto send_reply;
> >+ }
> >+
> >+ /* Add the mapping to the shared memory region */
> >+ if (virtio_add_shmem_map(shmem, mapping) != 0) {
> >+ error_report("Failed to add shared memory mapping");
> >+ object_unref(OBJECT(mapping));
> >+ ret = -EFAULT;
> >+ goto send_reply;
> >+ }
> >+
> >+send_reply:
> >+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> >+ payload->u64 = !!ret;
> >+ hdr->size = sizeof(payload->u64);
> >+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> >+ error_report_err(local_err);
> >+ memory_region_transaction_commit();
> >+ return -EFAULT;
> >+ }
> >+ }
> >+
> >+ memory_region_transaction_commit();
> >+
> >+ return 0;
> >+}
> >+
> >+/**
> >+ * vhost_user_backend_handle_shmem_unmap() - Handle SHMEM_UNMAP backend request
> >+ * @dev: vhost device
> >+ * @ioc: QIOChannel for communication
> >+ * @hdr: vhost-user message header
> >+ * @payload: message payload containing unmapping details
> >+ *
> >+ * Handles VHOST_USER_BACKEND_SHMEM_UNMAP requests from the backend. Removes
> >+ * the specified memory mapping from the VirtIO shared memory region. This
> >+ * automatically unreferences the associated VirtioSharedMemoryMapping, which may
> >+ * trigger its finalization and cleanup (munmap, close fd) if no other
> >+ * references exist.
> >+ *
> >+ * Returns: 0 on success, negative errno on failure
> >+ */
> >+static int
> >+vhost_user_backend_handle_shmem_unmap(struct vhost_dev *dev,
> >+ QIOChannel *ioc,
> >+ VhostUserHeader *hdr,
> >+ VhostUserPayload *payload)
> >+{
> >+ VirtioSharedMemory *shmem;
> >+ VirtioSharedMemoryMapping *mmap = NULL;
> >+ VhostUserMMap *vu_mmap = &payload->mmap;
> >+ Error *local_err = NULL;
> >+ int ret = 0;
> >+
> >+ if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
> >+ error_report("Device has no VIRTIO Shared Memory Regions. "
> >+ "Requested ID: %d", vu_mmap->shmid);
> >+ ret = -EFAULT;
> >+ goto send_reply;
> >+ }
> >+
> >+ shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
> >+ if (!shmem) {
> >+ error_report("VIRTIO Shared Memory Region at "
> >+ "ID %d not found or uninitialized", vu_mmap->shmid);
> >+ ret = -EFAULT;
> >+ goto send_reply;
> >+ }
> >+
> >+ if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> >+ (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
> >+ error_report("Bad offset/len for unmap %" PRIx64 "+%" PRIx64,
> >+ vu_mmap->shm_offset, vu_mmap->len);
> >+ ret = -EFAULT;
> >+ goto send_reply;
> >+ }
> >+
> >+ mmap = virtio_find_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
> >+ if (!mmap) {
> >+ error_report("Shared memory mapping not found at offset %" PRIx64
> >+ " with length %" PRIx64,
> >+ vu_mmap->shm_offset, vu_mmap->len);
> >+ ret = -EFAULT;
> >+ goto send_reply;
> >+ }
> >+
> >+send_reply:
> >+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> >+ payload->u64 = !!ret;
> >+ hdr->size = sizeof(payload->u64);
> >+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> >+ error_report_err(local_err);
> >+ return -EFAULT;
> >+ }
> >+ }
> >+
> >+ if (!ret && mmap) {
> >+ /* Free the MemoryRegion only after reply */
> >+ virtio_del_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
> >+ }
> >+
> >+ return 0;
> >+}
> >+
> > static void close_backend_channel(struct vhost_user *u)
> > {
> > g_source_destroy(u->backend_src);
> >@@ -1836,6 +2036,19 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
> > ret = vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
> > &hdr, &payload);
> > break;
> >+ case VHOST_USER_BACKEND_SHMEM_MAP:
> >+ /* Handler manages its own response, check error and close connection */
> >+ if (vhost_user_backend_handle_shmem_map(dev, ioc, &hdr, &payload,
> >+ fd ? fd[0] : -1) < 0) {
> >+ goto err;
> >+ }
> >+ break;
> >+ case VHOST_USER_BACKEND_SHMEM_UNMAP:
> >+ /* Handler manages its own response, check error and close connection */
> >+ if (vhost_user_backend_handle_shmem_unmap(dev, ioc, &hdr, &payload) < 0) {
> >+ goto err;
> >+ }
> >+ break;
> > default:
> > error_report("Received unexpected msg type: %d.", hdr.request);
> > ret = -EINVAL;
> >diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> >index 153ee0a0cf..37070e7951 100644
> >--- a/hw/virtio/virtio.c
> >+++ b/hw/virtio/virtio.c
> >@@ -3086,6 +3086,181 @@ int virtio_save(VirtIODevice *vdev, QEMUFile *f)
> > return vmstate_save_state(f, &vmstate_virtio, vdev, NULL, &error_fatal);
> > }
> >
> >+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid, uint64_t size)
> >+{
> >+ VirtioSharedMemory *elem;
> >+ g_autofree char *name = NULL;
> >+
> >+ elem = g_new0(VirtioSharedMemory, 1);
> >+ elem->shmid = shmid;
> >+
> >+ /* Initialize embedded MemoryRegion as container for shmem mappings */
> >+ name = g_strdup_printf("virtio-shmem-%d", shmid);
> >+ memory_region_init(&elem->mr, OBJECT(vdev), name, size);
> >+ QTAILQ_INIT(&elem->mmaps);
> >+ QSIMPLEQ_INSERT_TAIL(&vdev->shmem_list, elem, entry);
> >+ return elem;
> >+}
> >+
> >+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid)
> >+{
> >+ VirtioSharedMemory *shmem, *next;
> >+ QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
> >+ if (shmem->shmid == shmid) {
> >+ return shmem;
> >+ }
> >+ }
> >+ return NULL;
> >+}
> >+
> >+static void virtio_shared_memory_mapping_instance_init(Object *obj)
> >+{
> >+ VirtioSharedMemoryMapping *mapping = VIRTIO_SHARED_MEMORY_MAPPING(obj);
> >+
> >+ mapping->shmid = 0;
> >+ mapping->fd = -1;
> >+ mapping->offset = 0;
> >+ mapping->len = 0;
> >+ mapping->mr = NULL;
> >+}
> >+
> >+static void virtio_shared_memory_mapping_instance_finalize(Object *obj)
> >+{
> >+ VirtioSharedMemoryMapping *mapping = VIRTIO_SHARED_MEMORY_MAPPING(obj);
> >+
> >+ /* Clean up MemoryRegion if it exists */
> >+ if (mapping->mr) {
> >+ /* Unparent the MemoryRegion to trigger cleanup */
> >+ object_unparent(OBJECT(mapping->mr));
> >+ mapping->mr = NULL;
> >+ }
> >+
> >+ /* Close file descriptor */
> >+ if (mapping->fd >= 0) {
> >+ close(mapping->fd);
> >+ mapping->fd = -1;
> >+ }
> >+}
> >+
> >+VirtioSharedMemoryMapping *virtio_shared_memory_mapping_new(uint8_t shmid,
> >+ int fd,
> >+ uint64_t fd_offset,
> >+ uint64_t shm_offset,
> >+ uint64_t len,
> >+ bool allow_write)
> >+{
> >+ VirtioSharedMemoryMapping *mapping;
> >+ MemoryRegion *mr;
> >+ g_autoptr(GString) mr_name = g_string_new(NULL);
> >+ uint32_t ram_flags;
> >+ Error *local_err = NULL;
> >+
> >+ if (len == 0) {
> >+ error_report("Shared memory mapping size cannot be zero");
> >+ return NULL;
> >+ }
> >+
> >+ fd = dup(fd);
> >+ if (fd < 0) {
> >+ error_report("Failed to duplicate fd: %s", strerror(errno));
> >+ return NULL;
> >+ }
> >+
> >+ /* Determine RAM flags */
> >+ ram_flags = RAM_SHARED;
> >+ if (!allow_write) {
> >+ ram_flags |= RAM_READONLY_FD;
> >+ }
> >+
> >+ /* Create the VirtioSharedMemoryMapping */
> >+ mapping = VIRTIO_SHARED_MEMORY_MAPPING(
> >+ object_new(TYPE_VIRTIO_SHARED_MEMORY_MAPPING));
> >+
> >+ /* Set up object properties */
> >+ mapping->shmid = shmid;
> >+ mapping->fd = fd;
> >+ mapping->offset = shm_offset;
> >+ mapping->len = len;
> >+
> >+ /* Create MemoryRegion as a child of this object */
> >+ mr = g_new0(MemoryRegion, 1);
> >+ g_string_printf(mr_name, "virtio-shmem-%d-%" PRIx64, shmid, shm_offset);
> >+
> >+ /* Initialize MemoryRegion with file descriptor */
> >+ if (!memory_region_init_ram_from_fd(mr, OBJECT(mapping), mr_name->str,
> >+ len, ram_flags, fd, fd_offset,
> >+ &local_err)) {
> >+ error_report_err(local_err);
> >+ g_free(mr);
> >+ close(fd);
> >+ object_unref(OBJECT(mapping));
> >+ return NULL;
> >+ }
> >+
> >+ mapping->mr = mr;
> >+ return mapping;
> >+}
> >+
> >+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
> >+ VirtioSharedMemoryMapping *mapping)
> >+{
> >+ if (!mapping) {
> >+ error_report("VirtioSharedMemoryMapping cannot be NULL");
> >+ return -1;
> >+ }
> >+ if (!mapping->mr) {
> >+ error_report("VirtioSharedMemoryMapping has no MemoryRegion");
> >+ return -1;
> >+ }
> >+
> >+ /* Validate boundaries against the VIRTIO shared memory region */
> >+ if (mapping->offset + mapping->len > shmem->mr.size) {
> >+ error_report("Memory exceeds the shared memory boundaries");
> >+ return -1;
> >+ }
> >+
> >+ /* Add as subregion to the VIRTIO shared memory */
> >+ memory_region_add_subregion(&shmem->mr, mapping->offset, mapping->mr);
> >+
> >+ /* Add to the mapped regions list */
> >+ QTAILQ_INSERT_TAIL(&shmem->mmaps, mapping, link);
> >+
> >+ return 0;
> >+}
> >+
> >+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
> >+ hwaddr offset, uint64_t size)
> >+{
> >+ VirtioSharedMemoryMapping *mapping;
> >+ QTAILQ_FOREACH(mapping, &shmem->mmaps, link) {
> >+ if (mapping->offset == offset && mapping->mr->size == size) {
> >+ return mapping;
> >+ }
> >+ }
> >+ return NULL;
> >+}
> >+
> >+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
> >+ uint64_t size)
> >+{
> >+ VirtioSharedMemoryMapping *mapping = virtio_find_shmem_map(shmem, offset, size);
> >+ if (mapping == NULL) {
> >+ return;
> >+ }
> >+
> >+ /*
> >+ * Remove from memory region first
> >+ */
> >+ memory_region_del_subregion(&shmem->mr, mapping->mr);
> >+
> >+ /*
> >+ * Remove from list and unref the mapping which will trigger automatic cleanup
> >+ * when the reference count reaches zero.
> >+ */
> >+ QTAILQ_REMOVE(&shmem->mmaps, mapping, link);
> >+ object_unref(OBJECT(mapping));
> >+}
> >+
> > /* A wrapper for use as a VMState .put function */
> > static int virtio_device_put(QEMUFile *f, void *opaque, size_t size,
> > const VMStateField *field, JSONWriter *vmdesc)
> >@@ -3212,6 +3387,7 @@ void virtio_reset(void *opaque)
> > {
> > VirtIODevice *vdev = opaque;
> > VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
> >+ VirtioSharedMemory *shmem;
> > uint64_t features[VIRTIO_FEATURES_NU64S];
> > int i;
> >
> >@@ -3251,6 +3427,14 @@ void virtio_reset(void *opaque)
> > for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
> > __virtio_queue_reset(vdev, i);
> > }
> >+
> >+ /* Mappings are removed to prevent stale fds from remaining open. */
> >+ QSIMPLEQ_FOREACH(shmem, &vdev->shmem_list, entry) {
> >+ while (!QTAILQ_EMPTY(&shmem->mmaps)) {
> >+ VirtioSharedMemoryMapping *mapping = QTAILQ_FIRST(&shmem->mmaps);
> >+ virtio_del_shmem_map(shmem, mapping->offset, mapping->mr->size);
> >+ }
> >+ }
> > }
> >
> > static void virtio_device_check_notification_compatibility(VirtIODevice *vdev,
> >@@ -3574,6 +3758,7 @@ void virtio_init(VirtIODevice *vdev, uint16_t device_id, size_t config_size)
> > NULL, virtio_vmstate_change, vdev);
> > vdev->device_endian = virtio_default_endian();
> > vdev->use_guest_notifier_mask = true;
> >+ QSIMPLEQ_INIT(&vdev->shmem_list);
> > }
> >
> > /*
> >@@ -4085,11 +4270,24 @@ static void virtio_device_free_virtqueues(VirtIODevice *vdev)
> > static void virtio_device_instance_finalize(Object *obj)
> > {
> > VirtIODevice *vdev = VIRTIO_DEVICE(obj);
> >+ VirtioSharedMemory *shmem;
> >
> > virtio_device_free_virtqueues(vdev);
> >
> > g_free(vdev->config);
> > g_free(vdev->vector_queues);
> >+ while (!QSIMPLEQ_EMPTY(&vdev->shmem_list)) {
> >+ shmem = QSIMPLEQ_FIRST(&vdev->shmem_list);
> >+ while (!QTAILQ_EMPTY(&shmem->mmaps)) {
> >+ VirtioSharedMemoryMapping *mapping = QTAILQ_FIRST(&shmem->mmaps);
> >+ virtio_del_shmem_map(shmem, mapping->offset, mapping->mr->size);
> >+ }
> >+
> >+ /* Clean up the embedded MemoryRegion */
> >+ object_unparent(OBJECT(&shmem->mr));
> >+ QSIMPLEQ_REMOVE_HEAD(&vdev->shmem_list, entry);
> >+ g_free(shmem);
> >+ }
> > }
> >
> > static const Property virtio_properties[] = {
> >@@ -4455,9 +4653,18 @@ static const TypeInfo virtio_device_info = {
> > .class_size = sizeof(VirtioDeviceClass),
> > };
> >
> >+static const TypeInfo virtio_shared_memory_mapping_info = {
> >+ .name = TYPE_VIRTIO_SHARED_MEMORY_MAPPING,
> >+ .parent = TYPE_OBJECT,
> >+ .instance_size = sizeof(VirtioSharedMemoryMapping),
> >+ .instance_init = virtio_shared_memory_mapping_instance_init,
> >+ .instance_finalize = virtio_shared_memory_mapping_instance_finalize,
> >+};
> >+
> > static void virtio_register_types(void)
> > {
> > type_register_static(&virtio_device_info);
> >+ type_register_static(&virtio_shared_memory_mapping_info);
> > }
> >
> > type_init(virtio_register_types)
> >diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> >index d97529c3f1..cee807e236 100644
> >--- a/include/hw/virtio/virtio.h
> >+++ b/include/hw/virtio/virtio.h
> >@@ -99,6 +99,46 @@ enum virtio_device_endian {
> > VIRTIO_DEVICE_ENDIAN_BIG,
> > };
> >
> >+#define TYPE_VIRTIO_SHARED_MEMORY_MAPPING "virtio-shared-memory-mapping"
> >+OBJECT_DECLARE_SIMPLE_TYPE(VirtioSharedMemoryMapping, VIRTIO_SHARED_MEMORY_MAPPING)
> >+
> >+/**
> >+ * VirtioSharedMemoryMapping:
> >+ * @parent: Parent QOM object
> >+ * @shmid: VIRTIO Shared Memory Region ID
> >+ * @fd: File descriptor for the shared memory region
> >+ * @offset: Offset within the VIRTIO Shared Memory Region
> >+ * @len: Size of the mapping
> >+ * @mr: MemoryRegion associated with this shared memory mapping
> >+ * @link: List entry for the shared memory region's mapping list
> >+ *
> >+ * A QOM object that represents an individual file descriptor-based shared
> >+ * memory mapping within a VIRTIO Shared Memory Region. It manages the
> >+ * MemoryRegion lifecycle and file descriptor cleanup through QOM reference
> >+ * counting. When the object is unreferenced and its reference count drops
> >+ * to zero, it automatically cleans up the MemoryRegion and closes the file
> >+ * descriptor.
> >+ */
> >+struct VirtioSharedMemoryMapping {
> >+ Object parent;
> >+
> >+ uint8_t shmid;
> >+ int fd;
> >+ hwaddr offset;
> >+ uint64_t len;
> >+ MemoryRegion *mr;
> >+ QTAILQ_ENTRY(VirtioSharedMemoryMapping) link;
> >+};
> >+
> >+struct VirtioSharedMemory {
> >+ uint8_t shmid;
> >+ MemoryRegion mr;
> >+ QTAILQ_HEAD(, VirtioSharedMemoryMapping) mmaps;
> >+ QSIMPLEQ_ENTRY(VirtioSharedMemory) entry;
> >+};
> >+
> >+typedef struct VirtioSharedMemory VirtioSharedMemory;
> >+
> > /**
> > * struct VirtIODevice - common VirtIO structure
> > * @name: name of the device
> >@@ -168,6 +208,8 @@ struct VirtIODevice
> > */
> > EventNotifier config_notifier;
> > bool device_iotlb_enabled;
> >+ /* Shared memory region for mappings. */
> >+ QSIMPLEQ_HEAD(, VirtioSharedMemory) shmem_list;
> > };
> >
> > struct VirtioDeviceClass {
> >@@ -298,6 +340,100 @@ void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
> >
> > int virtio_save(VirtIODevice *vdev, QEMUFile *f);
> >
> >+/**
> >+ * virtio_new_shmem_region() - Create a new shared memory region
> >+ * @vdev: VirtIODevice
> >+ * @shmid: Shared memory ID
> >+ * @size: Size of the shared memory region
> >+ *
> >+ * Creates a new VirtioSharedMemory region for the given device and ID.
> >+ * The returned VirtioSharedMemory is owned by the VirtIODevice and will
> >+ * be automatically freed when the device is destroyed. The caller
> >+ * should not free the returned pointer.
> >+ *
> >+ * Returns: Pointer to the new VirtioSharedMemory region, or NULL on failure
> >+ */
> >+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid, uint64_t size);
> >+
> >+/**
> >+ * virtio_find_shmem_region() - Find an existing shared memory region
> >+ * @vdev: VirtIODevice
> >+ * @shmid: Shared memory ID to find
> >+ *
> >+ * Finds an existing VirtioSharedMemory region by ID. The returned pointer
> >+ * is owned by the VirtIODevice and should not be freed by the caller.
> >+ *
> >+ * Returns: Pointer to the VirtioSharedMemory region, or NULL if not found
> >+ */
> >+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid);
> >+
> >+/**
> >+ * virtio_shared_memory_mapping_new() - Create a new VirtioSharedMemoryMapping
> >+ * @shmid: VIRTIO Shared Memory Region ID
> >+ * @fd: File descriptor for the shared memory
> >+ * @fd_offset: Offset within the file descriptor
> >+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
> >+ * @len: Size of the mapping
> >+ * @allow_write: Whether to allow write access to the mapping
> >+ *
> >+ * Creates a new VirtioSharedMemoryMapping that manages a shared memory mapping.
> >+ * The object will create a MemoryRegion using memory_region_init_ram_from_fd()
> >+ * as a child object. When the object is finalized, it will automatically
> >+ * clean up the MemoryRegion and close the file descriptor.
> >+ *
> >+ * Return: A new VirtioSharedMemoryMapping on success, NULL on error.
> >+ */
> >+VirtioSharedMemoryMapping *virtio_shared_memory_mapping_new(uint8_t shmid,
> >+ int fd,
> >+ uint64_t fd_offset,
> >+ uint64_t shm_offset,
> >+ uint64_t len,
> >+ bool allow_write);
> >+
> >+/**
> >+ * virtio_add_shmem_map() - Add a memory mapping to a shared region
> >+ * @shmem: VirtioSharedMemory region
> >+ * @mapping: VirtioSharedMemoryMapping to add (transfers ownership)
> >+ *
> >+ * Adds a memory mapping to the shared memory region. The VirtioSharedMemoryMapping
> >+ * ownership is transferred to the shared memory region and will be automatically
> >+ * cleaned up through QOM reference counting when virtio_del_shmem_map() is
> >+ * called or when the shared memory region is destroyed.
> >+ *
> >+ * Returns: 0 on success, -1 on failure
> >+ */
> >+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
> >+ VirtioSharedMemoryMapping *mapping);
> >+
> >+/**
> >+ * virtio_find_shmem_map() - Find a memory mapping in a shared region
> >+ * @shmem: VirtioSharedMemory region
> >+ * @offset: Offset within the shared memory region
> >+ * @size: Size of the mapping to find
> >+ *
> >+ * Finds an existing memory mapping that covers the specified range.
> >+ * The returned VirtioSharedMemoryMapping is owned by the VirtioSharedMemory
> >+ * region and should not be freed by the caller.
> >+ *
> >+ * Returns: Pointer to the VirtioSharedMemoryMapping, or NULL if not found
> >+ */
> >+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
> >+ hwaddr offset, uint64_t size);
> >+
> >+/**
> >+ * virtio_del_shmem_map() - Remove a memory mapping from a shared region
> >+ * @shmem: VirtioSharedMemory region
> >+ * @offset: Offset of the mapping to remove
> >+ * @size: Size of the mapping to remove
> >+ *
> >+ * Removes a memory mapping from the shared memory region. This will
> >+ * automatically unref the associated VirtioSharedMemoryMapping, which may
> >+ * trigger its finalization and cleanup if no other references exist.
> >+ * The mapping's MemoryRegion will be properly unmapped and cleaned up.
> >+ */
> >+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
> >+ uint64_t size);
> >+
> > extern const VMStateInfo virtio_vmstate_info;
> >
> > #define VMSTATE_VIRTIO_DEVICE \
> >diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> >index 9c630c2170..034cbfdc3c 100644
> >--- a/subprojects/libvhost-user/libvhost-user.c
> >+++ b/subprojects/libvhost-user/libvhost-user.c
> >@@ -1592,6 +1592,76 @@ vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN])
> > return vu_send_message(dev, &msg);
> > }
> >
> >+bool
> >+vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> >+ uint64_t shm_offset, uint64_t len, uint64_t flags, int fd)
> >+{
> >+ VhostUserMsg vmsg = {
> >+ .request = VHOST_USER_BACKEND_SHMEM_MAP,
> >+ .size = sizeof(vmsg.payload.mmap),
> >+ .flags = VHOST_USER_VERSION,
> >+ .payload.mmap = {
> >+ .shmid = shmid,
> >+ .fd_offset = fd_offset,
> >+ .shm_offset = shm_offset,
> >+ .len = len,
> >+ .flags = flags,
> >+ },
> >+ .fd_num = 1,
> >+ .fds[0] = fd,
> >+ };
> >+
> >+ if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
> >+ return false;
> >+ }
> >+
> >+ if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> >+ vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> >+ }
> >+
> >+ pthread_mutex_lock(&dev->backend_mutex);
> >+ if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> >+ pthread_mutex_unlock(&dev->backend_mutex);
> >+ return false;
> >+ }
> >+
> >+ /* Also unlocks the backend_mutex */
> >+ return vu_process_message_reply(dev, &vmsg);
> >+}
> >+
> >+bool
> >+vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset, uint64_t len)
> >+{
> >+ VhostUserMsg vmsg = {
> >+ .request = VHOST_USER_BACKEND_SHMEM_UNMAP,
> >+ .size = sizeof(vmsg.payload.mmap),
> >+ .flags = VHOST_USER_VERSION,
> >+ .payload.mmap = {
> >+ .shmid = shmid,
> >+ .fd_offset = 0,
> >+ .shm_offset = shm_offset,
> >+ .len = len,
> >+ },
> >+ };
> >+
> >+ if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
> >+ return false;
> >+ }
> >+
> >+ if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> >+ vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> >+ }
> >+
> >+ pthread_mutex_lock(&dev->backend_mutex);
> >+ if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> >+ pthread_mutex_unlock(&dev->backend_mutex);
> >+ return false;
> >+ }
> >+
> >+ /* Also unlocks the backend_mutex */
> >+ return vu_process_message_reply(dev, &vmsg);
> >+}
> >+
> > static bool
> > vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
> > {
> >diff --git a/subprojects/libvhost-user/libvhost-user.h b/subprojects/libvhost-user/libvhost-user.h
> >index 2ffc58c11b..6a2d0f9fae 100644
> >--- a/subprojects/libvhost-user/libvhost-user.h
> >+++ b/subprojects/libvhost-user/libvhost-user.h
> >@@ -69,6 +69,8 @@ enum VhostUserProtocolFeature {
> > /* Feature 16 is reserved for VHOST_USER_PROTOCOL_F_STATUS. */
> > /* Feature 17 reserved for VHOST_USER_PROTOCOL_F_XEN_MMAP. */
> > VHOST_USER_PROTOCOL_F_SHARED_OBJECT = 18,
> >+ /* Feature 19 is reserved for VHOST_USER_PROTOCOL_F_DEVICE_STATE */
> >+ VHOST_USER_PROTOCOL_F_SHMEM = 20,
> > VHOST_USER_PROTOCOL_F_MAX
> > };
> >
> >@@ -127,6 +129,8 @@ typedef enum VhostUserBackendRequest {
> > VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> > VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> > VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> >+ VHOST_USER_BACKEND_SHMEM_MAP = 9,
> >+ VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> > VHOST_USER_BACKEND_MAX
> > } VhostUserBackendRequest;
> >
> >@@ -186,6 +190,23 @@ typedef struct VhostUserShared {
> > unsigned char uuid[UUID_LEN];
> > } VhostUserShared;
> >
> >+/* For the flags field of VhostUserMMap */
> >+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
> >+
> >+typedef struct {
> >+ /* VIRTIO Shared Memory Region ID */
> >+ uint8_t shmid;
> >+ uint8_t padding[7];
> >+ /* File offset */
> >+ uint64_t fd_offset;
> >+ /* Offset within the VIRTIO Shared Memory Region */
> >+ uint64_t shm_offset;
> >+ /* Size of the mapping */
> >+ uint64_t len;
> >+ /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
> >+ uint64_t flags;
> >+} VhostUserMMap;
> >+
> > #define VU_PACKED __attribute__((packed))
> >
> > typedef struct VhostUserMsg {
> >@@ -210,6 +231,7 @@ typedef struct VhostUserMsg {
> > VhostUserVringArea area;
> > VhostUserInflight inflight;
> > VhostUserShared object;
> >+ VhostUserMMap mmap;
> > } payload;
> >
> > int fds[VHOST_MEMORY_BASELINE_NREGIONS];
> >@@ -593,6 +615,38 @@ bool vu_add_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
> > */
> > bool vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
> >
> >+/**
> >+ * vu_shmem_map:
> >+ * @dev: a VuDev context
> >+ * @shmid: VIRTIO Shared Memory Region ID
> >+ * @fd_offset: File offset
> >+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
> >+ * @len: Size of the mapping
> >+ * @flags: Flags for the mmap operation
> >+ * @fd: A file descriptor
> >+ *
> >+ * Advertises a new mapping to be made in a given VIRTIO Shared Memory Region.
> >+ *
> >+ * Returns: TRUE on success, FALSE on failure.
> >+ */
> >+bool vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> >+ uint64_t shm_offset, uint64_t len, uint64_t flags, int fd);
> >+
> >+/**
> >+ * vu_shmem_unmap:
> >+ * @dev: a VuDev context
> >+ * @shmid: VIRTIO Shared Memory Region ID
> >+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
> >+ * @len: Size of the mapping
> >+ *
> >+ * The front-end unmaps a given range in the VIRTIO Shared Memory Region
> >+ * with the requested `shmid`.
> >+ *
> >+ * Returns: TRUE on success, FALSE on failure.
> >+ */
> >+bool vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset,
> >+ uint64_t len);
> >+
> > /**
> > * vu_queue_set_notification:
> > * @dev: a VuDev context
> >--
> >2.49.0
> >
>
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH v9 1/7] vhost-user: Add VirtIO Shared Memory map request
2025-10-15 12:29 ` Albert Esteve
@ 2025-10-15 12:45 ` Manos Pitsidianakis
2025-10-16 13:42 ` Albert Esteve
0 siblings, 1 reply; 17+ messages in thread
From: Manos Pitsidianakis @ 2025-10-15 12:45 UTC (permalink / raw)
To: Albert Esteve
Cc: qemu-devel, stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, Fabiano Rosas, Michael S. Tsirkin,
stefanha, Stefano Garzarella, slp, dbassey
On Wed, 15 Oct 2025 15:29, Albert Esteve <aesteve@redhat.com> wrote:
>On Wed, Oct 15, 2025 at 11:52 AM Manos Pitsidianakis
><manos.pitsidianakis@linaro.org> wrote:
>>
>> On Wed, 15 Oct 2025 11:59, Albert Esteve <aesteve@redhat.com> wrote:
>> >Add SHMEM_MAP/UNMAP requests to vhost-user for dynamic management of
>> >VIRTIO Shared Memory mappings.
>> >
>> >This implementation introduces VirtioSharedMemoryMapping as a unified
>> >QOM object that manages both the mapping metadata and MemoryRegion
>> >lifecycle. This object provides reference-counted lifecycle management
>> >with automatic cleanup of file descriptors and memory regions
>> >through QOM finalization.
>> >
>> >This request allows backends to dynamically map file descriptors into a
>> >VIRTIO Shared Memory Region identified by their shmid. Maps are created
>> >using memory_region_init_ram_from_fd() with configurable read/write
>> >permissions, and the resulting MemoryRegions are added as subregions to
>> >the shmem container region. The mapped memory is then advertised to the
>> >guest VIRTIO drivers as a base address plus offset for reading and
>> >writing according to the requested mmap flags.
>> >
>> >The backend can unmap memory ranges within a given VIRTIO Shared Memory
>> >Region to free resources. Upon receiving this message, the frontend
>> >removes the MemoryRegion as a subregion and automatically unreferences
>> >the VirtioSharedMemoryMapping object, triggering cleanup if no other
>> >references exist.
>> >
>> >Error handling has been improved to ensure consistent behavior across
>> >handlers that manage their own vhost_user_send_resp() calls. Since
>> >these handlers clear the VHOST_USER_NEED_REPLY_MASK flag, explicit
>> >error checking ensures proper connection closure on failures,
>> >maintaining the expected error flow.
>> >
>> >Note the memory region commit for these operations needs to be delayed
>> >until after we reply to the backend to avoid deadlocks. Otherwise,
>> >the MemoryListener would send a VHOST_USER_SET_MEM_TABLE message
>> >before the reply.
>> >
>> >Signed-off-by: Albert Esteve <aesteve@redhat.com>
>> >---
>> > hw/virtio/vhost-user.c | 213 ++++++++++++++++++++++
>> > hw/virtio/virtio.c | 207 +++++++++++++++++++++
>> > include/hw/virtio/virtio.h | 136 ++++++++++++++
>> > subprojects/libvhost-user/libvhost-user.c | 70 +++++++
>> > subprojects/libvhost-user/libvhost-user.h | 54 ++++++
>> > 5 files changed, 680 insertions(+)
>> >
>> >diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
>> >index 36c9c2e04d..8a4385a5ad 100644
>> >--- a/hw/virtio/vhost-user.c
>> >+++ b/hw/virtio/vhost-user.c
>> >@@ -115,6 +115,8 @@ typedef enum VhostUserBackendRequest {
>> > VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
>> > VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
>> > VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
>> >+ VHOST_USER_BACKEND_SHMEM_MAP = 9,
>> >+ VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
>> > VHOST_USER_BACKEND_MAX
>> > } VhostUserBackendRequest;
>> >
>> >@@ -192,6 +194,23 @@ typedef struct VhostUserShared {
>> > unsigned char uuid[16];
>> > } VhostUserShared;
>> >
>> >+/* For the flags field of VhostUserMMap */
>> >+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
>> >+
>> >+typedef struct {
>> >+ /* VIRTIO Shared Memory Region ID */
>> >+ uint8_t shmid;
>> >+ uint8_t padding[7];
>> >+ /* File offset */
>> >+ uint64_t fd_offset;
>> >+ /* Offset within the VIRTIO Shared Memory Region */
>> >+ uint64_t shm_offset;
>> >+ /* Size of the mapping */
>> >+ uint64_t len;
>> >+ /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
>> >+ uint64_t flags;
>> >+} VhostUserMMap;
>> >+
>> > typedef struct {
>> > VhostUserRequest request;
>> >
>> >@@ -224,6 +243,7 @@ typedef union {
>> > VhostUserInflight inflight;
>> > VhostUserShared object;
>> > VhostUserTransferDeviceState transfer_state;
>> >+ VhostUserMMap mmap;
>> > } VhostUserPayload;
>> >
>> > typedef struct VhostUserMsg {
>> >@@ -1768,6 +1788,186 @@ vhost_user_backend_handle_shared_object_lookup(struct vhost_user *u,
>> > return 0;
>> > }
>> >
>> >+/**
>> >+ * vhost_user_backend_handle_shmem_map() - Handle SHMEM_MAP backend request
>> >+ * @dev: vhost device
>> >+ * @ioc: QIOChannel for communication
>> >+ * @hdr: vhost-user message header
>> >+ * @payload: message payload containing mapping details
>> >+ * @fd: file descriptor for the shared memory region
>> >+ *
>> >+ * Handles VHOST_USER_BACKEND_SHMEM_MAP requests from the backend. Creates
>> >+ * a VirtioSharedMemoryMapping to manage the shared memory mapping and adds
>> >+ * it to the appropriate VirtIO shared memory region. The mapping object
>> >+ * serves as an intermediate parent for the MemoryRegion, ensuring proper
>> >+ * lifecycle management with reference counting.
>> >+ *
>> >+ * Returns: 0 on success, negative errno on failure
>> >+ */
>> >+static int
>> >+vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
>> >+ QIOChannel *ioc,
>> >+ VhostUserHeader *hdr,
>> >+ VhostUserPayload *payload,
>> >+ int fd)
>> >+{
>> >+ VirtioSharedMemory *shmem;
>> >+ VhostUserMMap *vu_mmap = &payload->mmap;
>> >+ VirtioSharedMemoryMapping *existing;
>> >+ Error *local_err = NULL;
>> >+ int ret = 0;
>> >+
>> >+ if (fd < 0) {
>> >+ error_report("Bad fd for map");
>> >+ ret = -EBADF;
>> >+ goto send_reply;
>>
>> Apologies if I missed something obvious, but won't going to send_reply
>> at this point cause memory_region_transaction_commit() to be called?
>> Maybe we should have two cleanup labels, depending on whether
>> memory_region_transaction_begin() has been called yet?
>
>You are right, this is not good: at transaction depth 0 it would fail
>an assertion within memory_region_transaction_commit().
>
>Not sure another cleanup label will work, though, as the transaction
>commit has to go after the reply. Maybe we can make the reply within
>the handler optional (leveraging the REPLY_MASK flag). But that will
>need to be tuned when this patch lands:
>https://lists.gnu.org/archive/html/qemu-devel/2025-10/msg03632.html
>
>Or just a simpler solution like this would do?
Yes, this looks like enough. The other cleanup-label alternative I
meant would be to do:
+send_reply_no_commit:
+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
+ payload->u64 = !!ret;
+ hdr->size = sizeof(payload->u64);
+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
+ error_report_err(local_err);
+ return -EFAULT;
+ }
+ }
+ return 0;
+send_reply:
+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
+ payload->u64 = !!ret;
+ hdr->size = sizeof(payload->u64);
+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
+ error_report_err(local_err);
+ memory_region_transaction_commit();
+ return -EFAULT;
+ }
+ }
+
+ memory_region_transaction_commit();
+
+ return 0;
+}
But this is not much prettier. (In Rust this would be trivial with an MR
transaction RAII object...)
>
>diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
>index c39cf52dbc..46126eed3b 100644
>--- a/hw/virtio/vhost-user.c
>+++ b/hw/virtio/vhost-user.c
>@@ -1824,6 +1824,7 @@ vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> VirtioSharedMemoryMapping *existing;
> Error *local_err = NULL;
> int ret = 0;
>+ bool transaction_started = false;
>
> if (fd < 0) {
> error_report("Bad fd for map");
>@@ -1864,6 +1865,7 @@ vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> }
>
> memory_region_transaction_begin();
>+ transaction_started = true;
>
> /* Create VirtioSharedMemoryMapping object */
> VirtioSharedMemoryMapping *mapping = virtio_shared_memory_mapping_new(
>@@ -1889,14 +1891,18 @@ send_reply:
> hdr->size = sizeof(payload->u64);
> if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> error_report_err(local_err);
>- memory_region_transaction_commit();
>+ if (transaction_started) {
>+ memory_region_transaction_commit();
>+ }
> return -EFAULT;
> }
> }
>
>- memory_region_transaction_commit();
>-
>- return 0;
>+ /* Only commit if we started a transaction */
>+ if (transaction_started) {
>+ memory_region_transaction_commit();
>+ }
>+ return ret;
> }
>
>>
>> >+ }
>> >+
>> >+ if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
>> >+ error_report("Device has no VIRTIO Shared Memory Regions. "
>> >+ "Requested ID: %d", vu_mmap->shmid);
>> >+ ret = -EFAULT;
>> >+ goto send_reply;
>> >+ }
>> >+
>> >+ shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
>> >+ if (!shmem) {
>> >+ error_report("VIRTIO Shared Memory Region at "
>> >+ "ID %d not found or uninitialized", vu_mmap->shmid);
>> >+ ret = -EFAULT;
>> >+ goto send_reply;
>> >+ }
>> >+
>> >+ if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
>> >+ (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
>> >+ error_report("Bad offset/len for mmap %" PRIx64 "+%" PRIx64,
>> >+ vu_mmap->shm_offset, vu_mmap->len);
>> >+ ret = -EFAULT;
>> >+ goto send_reply;
>> >+ }
>> >+
>> >+ QTAILQ_FOREACH(existing, &shmem->mmaps, link) {
>> >+ if (ranges_overlap(existing->offset, existing->len,
>> >+ vu_mmap->shm_offset, vu_mmap->len)) {
>> >+ error_report("VIRTIO Shared Memory mapping overlap");
>> >+ ret = -EFAULT;
>> >+ goto send_reply;
>> >+ }
>> >+ }
>> >+
>> >+ memory_region_transaction_begin();
>> >+
>> >+ /* Create VirtioSharedMemoryMapping object */
>> >+ VirtioSharedMemoryMapping *mapping = virtio_shared_memory_mapping_new(
>> >+ vu_mmap->shmid, fd, vu_mmap->fd_offset, vu_mmap->shm_offset,
>> >+ vu_mmap->len, vu_mmap->flags & VHOST_USER_FLAG_MAP_RW);
>> >+
>> >+ if (!mapping) {
>> >+ ret = -EFAULT;
>> >+ goto send_reply;
>> >+ }
>> >+
>> >+ /* Add the mapping to the shared memory region */
>> >+ if (virtio_add_shmem_map(shmem, mapping) != 0) {
>> >+ error_report("Failed to add shared memory mapping");
>> >+ object_unref(OBJECT(mapping));
>> >+ ret = -EFAULT;
>> >+ goto send_reply;
>> >+ }
>> >+
>> >+send_reply:
>> >+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
>> >+ payload->u64 = !!ret;
>> >+ hdr->size = sizeof(payload->u64);
>> >+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
>> >+ error_report_err(local_err);
>> >+ memory_region_transaction_commit();
>> >+ return -EFAULT;
>> >+ }
>> >+ }
>> >+
>> >+ memory_region_transaction_commit();
>> >+
>> >+ return 0;
>> >+}
>> >+
>> >+/**
>> >+ * vhost_user_backend_handle_shmem_unmap() - Handle SHMEM_UNMAP backend request
>> >+ * @dev: vhost device
>> >+ * @ioc: QIOChannel for communication
>> >+ * @hdr: vhost-user message header
>> >+ * @payload: message payload containing unmapping details
>> >+ *
>> >+ * Handles VHOST_USER_BACKEND_SHMEM_UNMAP requests from the backend. Removes
>> >+ * the specified memory mapping from the VirtIO shared memory region. This
>> >+ * automatically unreferences the associated VhostUserShmemObject, which may
>> >+ * trigger its finalization and cleanup (munmap, close fd) if no other
>> >+ * references exist.
>> >+ *
>> >+ * Returns: 0 on success, negative errno on failure
>> >+ */
>> >+static int
>> >+vhost_user_backend_handle_shmem_unmap(struct vhost_dev *dev,
>> >+ QIOChannel *ioc,
>> >+ VhostUserHeader *hdr,
>> >+ VhostUserPayload *payload)
>> >+{
>> >+ VirtioSharedMemory *shmem;
>> >+ VirtioSharedMemoryMapping *mmap = NULL;
>> >+ VhostUserMMap *vu_mmap = &payload->mmap;
>> >+ Error *local_err = NULL;
>> >+ int ret = 0;
>> >+
>> >+ if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
>> >+ error_report("Device has no VIRTIO Shared Memory Regions. "
>> >+ "Requested ID: %d", vu_mmap->shmid);
>> >+ ret = -EFAULT;
>> >+ goto send_reply;
>> >+ }
>> >+
>> >+ shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
>> >+ if (!shmem) {
>> >+ error_report("VIRTIO Shared Memory Region at "
>> >+ "ID %d not found or uninitialized", vu_mmap->shmid);
>> >+ ret = -EFAULT;
>> >+ goto send_reply;
>> >+ }
>> >+
>> >+ if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
>> >+ (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
>> >+ error_report("Bad offset/len for unmap %" PRIx64 "+%" PRIx64,
>> >+ vu_mmap->shm_offset, vu_mmap->len);
>> >+ ret = -EFAULT;
>> >+ goto send_reply;
>> >+ }
>> >+
>> >+ mmap = virtio_find_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
>> >+ if (!mmap) {
>> >+ error_report("Shared memory mapping not found at offset %" PRIx64
>> >+ " with length %" PRIx64,
>> >+ vu_mmap->shm_offset, vu_mmap->len);
>> >+ ret = -EFAULT;
>> >+ goto send_reply;
>> >+ }
>> >+
>> >+send_reply:
>> >+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
>> >+ payload->u64 = !!ret;
>> >+ hdr->size = sizeof(payload->u64);
>> >+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
>> >+ error_report_err(local_err);
>> >+ return -EFAULT;
>> >+ }
>> >+ }
>> >+
>> >+ if (!ret && mmap) {
>> >+ /* Free the MemoryRegion only after reply */
>> >+ virtio_del_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
>> >+ }
>> >+
>> >+ return 0;
>> >+}
>> >+
>> > static void close_backend_channel(struct vhost_user *u)
>> > {
>> > g_source_destroy(u->backend_src);
>> >@@ -1836,6 +2036,19 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
>> > ret = vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
>> > &hdr, &payload);
>> > break;
>> >+ case VHOST_USER_BACKEND_SHMEM_MAP:
>> >+ /* Handler manages its own response, check error and close connection */
>> >+ if (vhost_user_backend_handle_shmem_map(dev, ioc, &hdr, &payload,
>> >+ fd ? fd[0] : -1) < 0) {
>> >+ goto err;
>> >+ }
>> >+ break;
>> >+ case VHOST_USER_BACKEND_SHMEM_UNMAP:
>> >+ /* Handler manages its own response, check error and close connection */
>> >+ if (vhost_user_backend_handle_shmem_unmap(dev, ioc, &hdr, &payload) < 0) {
>> >+ goto err;
>> >+ }
>> >+ break;
>> > default:
>> > error_report("Received unexpected msg type: %d.", hdr.request);
>> > ret = -EINVAL;
>> >diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
>> >index 153ee0a0cf..37070e7951 100644
>> >--- a/hw/virtio/virtio.c
>> >+++ b/hw/virtio/virtio.c
>> >@@ -3086,6 +3086,181 @@ int virtio_save(VirtIODevice *vdev, QEMUFile *f)
>> > return vmstate_save_state(f, &vmstate_virtio, vdev, NULL, &error_fatal);
>> > }
>> >
>> >+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid, uint64_t size)
>> >+{
>> >+ VirtioSharedMemory *elem;
>> >+ g_autofree char *name = NULL;
>> >+
>> >+ elem = g_new0(VirtioSharedMemory, 1);
>> >+ elem->shmid = shmid;
>> >+
>> >+ /* Initialize embedded MemoryRegion as container for shmem mappings */
>> >+ name = g_strdup_printf("virtio-shmem-%d", shmid);
>> >+ memory_region_init(&elem->mr, OBJECT(vdev), name, size);
>> >+ QTAILQ_INIT(&elem->mmaps);
>> >+ QSIMPLEQ_INSERT_TAIL(&vdev->shmem_list, elem, entry);
>> >+ return elem;
>> >+}
>> >+
>> >+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid)
>> >+{
>> >+ VirtioSharedMemory *shmem, *next;
>> >+ QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
>> >+ if (shmem->shmid == shmid) {
>> >+ return shmem;
>> >+ }
>> >+ }
>> >+ return NULL;
>> >+}
>> >+
>> >+static void virtio_shared_memory_mapping_instance_init(Object *obj)
>> >+{
>> >+ VirtioSharedMemoryMapping *mapping = VIRTIO_SHARED_MEMORY_MAPPING(obj);
>> >+
>> >+ mapping->shmid = 0;
>> >+ mapping->fd = -1;
>> >+ mapping->offset = 0;
>> >+ mapping->len = 0;
>> >+ mapping->mr = NULL;
>> >+}
>> >+
>> >+static void virtio_shared_memory_mapping_instance_finalize(Object *obj)
>> >+{
>> >+ VirtioSharedMemoryMapping *mapping = VIRTIO_SHARED_MEMORY_MAPPING(obj);
>> >+
>> >+ /* Clean up MemoryRegion if it exists */
>> >+ if (mapping->mr) {
>> >+ /* Unparent the MemoryRegion to trigger cleanup */
>> >+ object_unparent(OBJECT(mapping->mr));
>> >+ mapping->mr = NULL;
>> >+ }
>> >+
>> >+ /* Close file descriptor */
>> >+ if (mapping->fd >= 0) {
>> >+ close(mapping->fd);
>> >+ mapping->fd = -1;
>> >+ }
>> >+}
>> >+
>> >+VirtioSharedMemoryMapping *virtio_shared_memory_mapping_new(uint8_t shmid,
>> >+ int fd,
>> >+ uint64_t fd_offset,
>> >+ uint64_t shm_offset,
>> >+ uint64_t len,
>> >+ bool allow_write)
>> >+{
>> >+ VirtioSharedMemoryMapping *mapping;
>> >+ MemoryRegion *mr;
>> >+ g_autoptr(GString) mr_name = g_string_new(NULL);
>> >+ uint32_t ram_flags;
>> >+ Error *local_err = NULL;
>> >+
>> >+ if (len == 0) {
>> >+ error_report("Shared memory mapping size cannot be zero");
>> >+ return NULL;
>> >+ }
>> >+
>> >+ fd = dup(fd);
>> >+ if (fd < 0) {
>> >+ error_report("Failed to duplicate fd: %s", strerror(errno));
>> >+ return NULL;
>> >+ }
>> >+
>> >+ /* Determine RAM flags */
>> >+ ram_flags = RAM_SHARED;
>> >+ if (!allow_write) {
>> >+ ram_flags |= RAM_READONLY_FD;
>> >+ }
>> >+
>> >+ /* Create the VirtioSharedMemoryMapping */
>> >+ mapping = VIRTIO_SHARED_MEMORY_MAPPING(
>> >+ object_new(TYPE_VIRTIO_SHARED_MEMORY_MAPPING));
>> >+
>> >+ /* Set up object properties */
>> >+ mapping->shmid = shmid;
>> >+ mapping->fd = fd;
>> >+ mapping->offset = shm_offset;
>> >+ mapping->len = len;
>> >+
>> >+ /* Create MemoryRegion as a child of this object */
>> >+ mr = g_new0(MemoryRegion, 1);
>> >+ g_string_printf(mr_name, "virtio-shmem-%d-%" PRIx64, shmid, shm_offset);
>> >+
>> >+ /* Initialize MemoryRegion with file descriptor */
>> >+ if (!memory_region_init_ram_from_fd(mr, OBJECT(mapping), mr_name->str,
>> >+ len, ram_flags, fd, fd_offset,
>> >+ &local_err)) {
>> >+ error_report_err(local_err);
>> >+ g_free(mr);
>> >+ close(fd);
>> >+ object_unref(OBJECT(mapping));
>> >+ return NULL;
>> >+ }
>> >+
>> >+ mapping->mr = mr;
>> >+ return mapping;
>> >+}
>> >+
>> >+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
>> >+ VirtioSharedMemoryMapping *mapping)
>> >+{
>> >+ if (!mapping) {
>> >+ error_report("VirtioSharedMemoryMapping cannot be NULL");
>> >+ return -1;
>> >+ }
>> >+ if (!mapping->mr) {
>> >+ error_report("VirtioSharedMemoryMapping has no MemoryRegion");
>> >+ return -1;
>> >+ }
>> >+
>> >+ /* Validate boundaries against the VIRTIO shared memory region */
>> >+ if (mapping->offset + mapping->len > shmem->mr.size) {
>> >+ error_report("Memory exceeds the shared memory boundaries");
>> >+ return -1;
>> >+ }
>> >+
>> >+ /* Add as subregion to the VIRTIO shared memory */
>> >+ memory_region_add_subregion(&shmem->mr, mapping->offset, mapping->mr);
>> >+
>> >+ /* Add to the mapped regions list */
>> >+ QTAILQ_INSERT_TAIL(&shmem->mmaps, mapping, link);
>> >+
>> >+ return 0;
>> >+}
>> >+
>> >+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
>> >+ hwaddr offset, uint64_t size)
>> >+{
>> >+ VirtioSharedMemoryMapping *mapping;
>> >+ QTAILQ_FOREACH(mapping, &shmem->mmaps, link) {
>> >+ if (mapping->offset == offset && mapping->mr->size == size) {
>> >+ return mapping;
>> >+ }
>> >+ }
>> >+ return NULL;
>> >+}
>> >+
>> >+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
>> >+ uint64_t size)
>> >+{
>> >+ VirtioSharedMemoryMapping *mapping = virtio_find_shmem_map(shmem, offset, size);
>> >+ if (mapping == NULL) {
>> >+ return;
>> >+ }
>> >+
>> >+ /*
>> >+ * Remove from memory region first
>> >+ */
>> >+ memory_region_del_subregion(&shmem->mr, mapping->mr);
>> >+
>> >+ /*
>> >+ * Remove from list and unref the mapping which will trigger automatic cleanup
>> >+ * when the reference count reaches zero.
>> >+ */
>> >+ QTAILQ_REMOVE(&shmem->mmaps, mapping, link);
>> >+ object_unref(OBJECT(mapping));
>> >+}
>> >+
>> > /* A wrapper for use as a VMState .put function */
>> > static int virtio_device_put(QEMUFile *f, void *opaque, size_t size,
>> > const VMStateField *field, JSONWriter *vmdesc)
>> >@@ -3212,6 +3387,7 @@ void virtio_reset(void *opaque)
>> > {
>> > VirtIODevice *vdev = opaque;
>> > VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
>> >+ VirtioSharedMemory *shmem;
>> > uint64_t features[VIRTIO_FEATURES_NU64S];
>> > int i;
>> >
>> >@@ -3251,6 +3427,14 @@ void virtio_reset(void *opaque)
>> > for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
>> > __virtio_queue_reset(vdev, i);
>> > }
>> >+
>> >+ /* Mappings are removed to prevent stale fds from remaining open. */
>> >+ QSIMPLEQ_FOREACH(shmem, &vdev->shmem_list, entry) {
>> >+ while (!QTAILQ_EMPTY(&shmem->mmaps)) {
>> >+ VirtioSharedMemoryMapping *mapping = QTAILQ_FIRST(&shmem->mmaps);
>> >+ virtio_del_shmem_map(shmem, mapping->offset, mapping->mr->size);
>> >+ }
>> >+ }
>> > }
>> >
>> > static void virtio_device_check_notification_compatibility(VirtIODevice *vdev,
>> >@@ -3574,6 +3758,7 @@ void virtio_init(VirtIODevice *vdev, uint16_t device_id, size_t config_size)
>> > NULL, virtio_vmstate_change, vdev);
>> > vdev->device_endian = virtio_default_endian();
>> > vdev->use_guest_notifier_mask = true;
>> >+ QSIMPLEQ_INIT(&vdev->shmem_list);
>> > }
>> >
>> > /*
>> >@@ -4085,11 +4270,24 @@ static void virtio_device_free_virtqueues(VirtIODevice *vdev)
>> > static void virtio_device_instance_finalize(Object *obj)
>> > {
>> > VirtIODevice *vdev = VIRTIO_DEVICE(obj);
>> >+ VirtioSharedMemory *shmem;
>> >
>> > virtio_device_free_virtqueues(vdev);
>> >
>> > g_free(vdev->config);
>> > g_free(vdev->vector_queues);
>> >+ while (!QSIMPLEQ_EMPTY(&vdev->shmem_list)) {
>> >+ shmem = QSIMPLEQ_FIRST(&vdev->shmem_list);
>> >+ while (!QTAILQ_EMPTY(&shmem->mmaps)) {
>> >+ VirtioSharedMemoryMapping *mapping = QTAILQ_FIRST(&shmem->mmaps);
>> >+ virtio_del_shmem_map(shmem, mapping->offset, mapping->mr->size);
>> >+ }
>> >+
>> >+ /* Clean up the embedded MemoryRegion */
>> >+ object_unparent(OBJECT(&shmem->mr));
>> >+ QSIMPLEQ_REMOVE_HEAD(&vdev->shmem_list, entry);
>> >+ g_free(shmem);
>> >+ }
>> > }
>> >
>> > static const Property virtio_properties[] = {
>> >@@ -4455,9 +4653,18 @@ static const TypeInfo virtio_device_info = {
>> > .class_size = sizeof(VirtioDeviceClass),
>> > };
>> >
>> >+static const TypeInfo virtio_shared_memory_mapping_info = {
>> >+ .name = TYPE_VIRTIO_SHARED_MEMORY_MAPPING,
>> >+ .parent = TYPE_OBJECT,
>> >+ .instance_size = sizeof(VirtioSharedMemoryMapping),
>> >+ .instance_init = virtio_shared_memory_mapping_instance_init,
>> >+ .instance_finalize = virtio_shared_memory_mapping_instance_finalize,
>> >+};
>> >+
>> > static void virtio_register_types(void)
>> > {
>> > type_register_static(&virtio_device_info);
>> >+ type_register_static(&virtio_shared_memory_mapping_info);
>> > }
>> >
>> > type_init(virtio_register_types)
>> >diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
>> >index d97529c3f1..cee807e236 100644
>> >--- a/include/hw/virtio/virtio.h
>> >+++ b/include/hw/virtio/virtio.h
>> >@@ -99,6 +99,46 @@ enum virtio_device_endian {
>> > VIRTIO_DEVICE_ENDIAN_BIG,
>> > };
>> >
>> >+#define TYPE_VIRTIO_SHARED_MEMORY_MAPPING "virtio-shared-memory-mapping"
>> >+OBJECT_DECLARE_SIMPLE_TYPE(VirtioSharedMemoryMapping, VIRTIO_SHARED_MEMORY_MAPPING)
>> >+
>> >+/**
>> >+ * VirtioSharedMemoryMapping:
>> >+ * @parent: Parent QOM object
>> >+ * @shmid: VIRTIO Shared Memory Region ID
>> >+ * @fd: File descriptor for the shared memory region
>> >+ * @offset: Offset within the VIRTIO Shared Memory Region
>> >+ * @len: Size of the mapping
>> >+ * @mr: MemoryRegion associated with this shared memory mapping
>> >+ * @link: List entry for the shared memory region's mapping list
>> >+ *
>> >+ * A QOM object that represents an individual file descriptor-based shared
>> >+ * memory mapping within a VIRTIO Shared Memory Region. It manages the
>> >+ * MemoryRegion lifecycle and file descriptor cleanup through QOM reference
>> >+ * counting. When the object is unreferenced and its reference count drops
>> >+ * to zero, it automatically cleans up the MemoryRegion and closes the file
>> >+ * descriptor.
>> >+ */
>> >+struct VirtioSharedMemoryMapping {
>> >+ Object parent;
>> >+
>> >+ uint8_t shmid;
>> >+ int fd;
>> >+ hwaddr offset;
>> >+ uint64_t len;
>> >+ MemoryRegion *mr;
>> >+ QTAILQ_ENTRY(VirtioSharedMemoryMapping) link;
>> >+};
>> >+
>> >+struct VirtioSharedMemory {
>> >+ uint8_t shmid;
>> >+ MemoryRegion mr;
>> >+ QTAILQ_HEAD(, VirtioSharedMemoryMapping) mmaps;
>> >+ QSIMPLEQ_ENTRY(VirtioSharedMemory) entry;
>> >+};
>> >+
>> >+typedef struct VirtioSharedMemory VirtioSharedMemory;
>> >+
>> > /**
>> > * struct VirtIODevice - common VirtIO structure
>> > * @name: name of the device
>> >@@ -168,6 +208,8 @@ struct VirtIODevice
>> > */
>> > EventNotifier config_notifier;
>> > bool device_iotlb_enabled;
>> >+ /* Shared memory region for mappings. */
>> >+ QSIMPLEQ_HEAD(, VirtioSharedMemory) shmem_list;
>> > };
>> >
>> > struct VirtioDeviceClass {
>> >@@ -298,6 +340,100 @@ void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
>> >
>> > int virtio_save(VirtIODevice *vdev, QEMUFile *f);
>> >
>> >+/**
>> >+ * virtio_new_shmem_region() - Create a new shared memory region
>> >+ * @vdev: VirtIODevice
>> >+ * @shmid: Shared memory ID
>> >+ * @size: Size of the shared memory region
>> >+ *
>> >+ * Creates a new VirtioSharedMemory region for the given device and ID.
>> >+ * The returned VirtioSharedMemory is owned by the VirtIODevice and will
>> >+ * be automatically freed when the device is destroyed. The caller
>> >+ * should not free the returned pointer.
>> >+ *
>> >+ * Returns: Pointer to the new VirtioSharedMemory region, or NULL on failure
>> >+ */
>> >+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid, uint64_t size);
>> >+
>> >+/**
>> >+ * virtio_find_shmem_region() - Find an existing shared memory region
>> >+ * @vdev: VirtIODevice
>> >+ * @shmid: Shared memory ID to find
>> >+ *
>> >+ * Finds an existing VirtioSharedMemory region by ID. The returned pointer
>> >+ * is owned by the VirtIODevice and should not be freed by the caller.
>> >+ *
>> >+ * Returns: Pointer to the VirtioSharedMemory region, or NULL if not found
>> >+ */
>> >+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid);
>> >+
>> >+/**
>> >+ * virtio_shared_memory_mapping_new() - Create a new VirtioSharedMemoryMapping
>> >+ * @shmid: VIRTIO Shared Memory Region ID
>> >+ * @fd: File descriptor for the shared memory
>> >+ * @fd_offset: Offset within the file descriptor
>> >+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
>> >+ * @len: Size of the mapping
>> >+ * @allow_write: Whether to allow write access to the mapping
>> >+ *
>> >+ * Creates a new VirtioSharedMemoryMapping that manages a shared memory mapping.
>> >+ * The object will create a MemoryRegion using memory_region_init_ram_from_fd()
>> >+ * as a child object. When the object is finalized, it will automatically
>> >+ * clean up the MemoryRegion and close the file descriptor.
>> >+ *
>> >+ * Return: A new VirtioSharedMemoryMapping on success, NULL on error.
>> >+ */
>> >+VirtioSharedMemoryMapping *virtio_shared_memory_mapping_new(uint8_t shmid,
>> >+ int fd,
>> >+ uint64_t fd_offset,
>> >+ uint64_t shm_offset,
>> >+ uint64_t len,
>> >+ bool allow_write);
>> >+
>> >+/**
>> >+ * virtio_add_shmem_map() - Add a memory mapping to a shared region
>> >+ * @shmem: VirtioSharedMemory region
>> >+ * @mapping: VirtioSharedMemoryMapping to add (transfers ownership)
>> >+ *
>> >+ * Adds a memory mapping to the shared memory region. The VirtioSharedMemoryMapping
>> >+ * ownership is transferred to the shared memory region and will be automatically
>> >+ * cleaned up through QOM reference counting when virtio_del_shmem_map() is
>> >+ * called or when the shared memory region is destroyed.
>> >+ *
>> >+ * Returns: 0 on success, negative errno on failure
>> >+ */
>> >+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
>> >+ VirtioSharedMemoryMapping *mapping);
>> >+
>> >+/**
>> >+ * virtio_find_shmem_map() - Find a memory mapping in a shared region
>> >+ * @shmem: VirtioSharedMemory region
>> >+ * @offset: Offset within the shared memory region
>> >+ * @size: Size of the mapping to find
>> >+ *
>> >+ * Finds an existing memory mapping that covers the specified range.
>> >+ * The returned VirtioSharedMemoryMapping is owned by the VirtioSharedMemory
>> >+ * region and should not be freed by the caller.
>> >+ *
>> >+ * Returns: Pointer to the VirtioSharedMemoryMapping, or NULL if not found
>> >+ */
>> >+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
>> >+ hwaddr offset, uint64_t size);
>> >+
>> >+/**
>> >+ * virtio_del_shmem_map() - Remove a memory mapping from a shared region
>> >+ * @shmem: VirtioSharedMemory region
>> >+ * @offset: Offset of the mapping to remove
>> >+ * @size: Size of the mapping to remove
>> >+ *
>> >+ * Removes a memory mapping from the shared memory region. This will
>> >+ * automatically unref the associated VhostUserShmemObject, which may
>> >+ * trigger its finalization and cleanup if no other references exist.
>> >+ * The mapping's MemoryRegion will be properly unmapped and cleaned up.
>> >+ */
>> >+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
>> >+ uint64_t size);
>> >+
>> > extern const VMStateInfo virtio_vmstate_info;
>> >
>> > #define VMSTATE_VIRTIO_DEVICE \
>> >diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
>> >index 9c630c2170..034cbfdc3c 100644
>> >--- a/subprojects/libvhost-user/libvhost-user.c
>> >+++ b/subprojects/libvhost-user/libvhost-user.c
>> >@@ -1592,6 +1592,76 @@ vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN])
>> > return vu_send_message(dev, &msg);
>> > }
>> >
>> >+bool
>> >+vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
>> >+ uint64_t shm_offset, uint64_t len, uint64_t flags, int fd)
>> >+{
>> >+ VhostUserMsg vmsg = {
>> >+ .request = VHOST_USER_BACKEND_SHMEM_MAP,
>> >+ .size = sizeof(vmsg.payload.mmap),
>> >+ .flags = VHOST_USER_VERSION,
>> >+ .payload.mmap = {
>> >+ .shmid = shmid,
>> >+ .fd_offset = fd_offset,
>> >+ .shm_offset = shm_offset,
>> >+ .len = len,
>> >+ .flags = flags,
>> >+ },
>> >+ .fd_num = 1,
>> >+ .fds[0] = fd,
>> >+ };
>> >+
>> >+ if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
>> >+ return false;
>> >+ }
>> >+
>> >+ if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
>> >+ vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
>> >+ }
>> >+
>> >+ pthread_mutex_lock(&dev->backend_mutex);
>> >+ if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
>> >+ pthread_mutex_unlock(&dev->backend_mutex);
>> >+ return false;
>> >+ }
>> >+
>> >+ /* Also unlocks the backend_mutex */
>> >+ return vu_process_message_reply(dev, &vmsg);
>> >+}
>> >+
>> >+bool
>> >+vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset, uint64_t len)
>> >+{
>> >+ VhostUserMsg vmsg = {
>> >+ .request = VHOST_USER_BACKEND_SHMEM_UNMAP,
>> >+ .size = sizeof(vmsg.payload.mmap),
>> >+ .flags = VHOST_USER_VERSION,
>> >+ .payload.mmap = {
>> >+ .shmid = shmid,
>> >+ .fd_offset = 0,
>> >+ .shm_offset = shm_offset,
>> >+ .len = len,
>> >+ },
>> >+ };
>> >+
>> >+ if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
>> >+ return false;
>> >+ }
>> >+
>> >+ if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
>> >+ vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
>> >+ }
>> >+
>> >+ pthread_mutex_lock(&dev->backend_mutex);
>> >+ if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
>> >+ pthread_mutex_unlock(&dev->backend_mutex);
>> >+ return false;
>> >+ }
>> >+
>> >+ /* Also unlocks the backend_mutex */
>> >+ return vu_process_message_reply(dev, &vmsg);
>> >+}
>> >+
>> > static bool
>> > vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
>> > {
>> >diff --git a/subprojects/libvhost-user/libvhost-user.h b/subprojects/libvhost-user/libvhost-user.h
>> >index 2ffc58c11b..6a2d0f9fae 100644
>> >--- a/subprojects/libvhost-user/libvhost-user.h
>> >+++ b/subprojects/libvhost-user/libvhost-user.h
>> >@@ -69,6 +69,8 @@ enum VhostUserProtocolFeature {
>> > /* Feature 16 is reserved for VHOST_USER_PROTOCOL_F_STATUS. */
>> > /* Feature 17 reserved for VHOST_USER_PROTOCOL_F_XEN_MMAP. */
>> > VHOST_USER_PROTOCOL_F_SHARED_OBJECT = 18,
>> >+ /* Feature 19 is reserved for VHOST_USER_PROTOCOL_F_DEVICE_STATE */
>> >+ VHOST_USER_PROTOCOL_F_SHMEM = 20,
>> > VHOST_USER_PROTOCOL_F_MAX
>> > };
>> >
>> >@@ -127,6 +129,8 @@ typedef enum VhostUserBackendRequest {
>> > VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
>> > VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
>> > VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
>> >+ VHOST_USER_BACKEND_SHMEM_MAP = 9,
>> >+ VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
>> > VHOST_USER_BACKEND_MAX
>> > } VhostUserBackendRequest;
>> >
>> >@@ -186,6 +190,23 @@ typedef struct VhostUserShared {
>> > unsigned char uuid[UUID_LEN];
>> > } VhostUserShared;
>> >
>> >+/* For the flags field of VhostUserMMap */
>> >+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
>> >+
>> >+typedef struct {
>> >+ /* VIRTIO Shared Memory Region ID */
>> >+ uint8_t shmid;
>> >+ uint8_t padding[7];
>> >+ /* File offset */
>> >+ uint64_t fd_offset;
>> >+ /* Offset within the VIRTIO Shared Memory Region */
>> >+ uint64_t shm_offset;
>> >+ /* Size of the mapping */
>> >+ uint64_t len;
>> >+ /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
>> >+ uint64_t flags;
>> >+} VhostUserMMap;
>> >+
>> > #define VU_PACKED __attribute__((packed))
>> >
>> > typedef struct VhostUserMsg {
>> >@@ -210,6 +231,7 @@ typedef struct VhostUserMsg {
>> > VhostUserVringArea area;
>> > VhostUserInflight inflight;
>> > VhostUserShared object;
>> >+ VhostUserMMap mmap;
>> > } payload;
>> >
>> > int fds[VHOST_MEMORY_BASELINE_NREGIONS];
>> >@@ -593,6 +615,38 @@ bool vu_add_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
>> > */
>> > bool vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
>> >
>> >+/**
>> >+ * vu_shmem_map:
>> >+ * @dev: a VuDev context
>> >+ * @shmid: VIRTIO Shared Memory Region ID
>> >+ * @fd_offset: File offset
>> >+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
>> >+ * @len: Size of the mapping
>> >+ * @flags: Flags for the mmap operation
>> >+ * @fd: A file descriptor
>> >+ *
>> >+ * Advertises a new mapping to be made in a given VIRTIO Shared Memory Region.
>> >+ *
>> >+ * Returns: TRUE on success, FALSE on failure.
>> >+ */
>> >+bool vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
>> >+ uint64_t shm_offset, uint64_t len, uint64_t flags, int fd);
>> >+
>> >+/**
>> >+ * vu_shmem_unmap:
>> >+ * @dev: a VuDev context
>> >+ * @shmid: VIRTIO Shared Memory Region ID
>> >+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
>> >+ * @len: Size of the mapping
>> >+ *
>> >+ * The front-end un-mmaps a given range in the VIRTIO Shared Memory Region
>> >+ * with the requested `shmid`.
>> >+ *
>> >+ * Returns: TRUE on success, FALSE on failure.
>> >+ */
>> >+bool vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset,
>> >+ uint64_t len);
>> >+
>> > /**
>> > * vu_queue_set_notification:
>> > * @dev: a VuDev context
>> >--
>> >2.49.0
>> >
>>
>
* Re: [PATCH v9 1/7] vhost-user: Add VirtIO Shared Memory map request
2025-10-15 8:59 ` [PATCH v9 1/7] vhost-user: Add VirtIO Shared Memory map request Albert Esteve
2025-10-15 9:47 ` Manos Pitsidianakis
@ 2025-10-15 15:27 ` Stefan Hajnoczi
1 sibling, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2025-10-15 15:27 UTC (permalink / raw)
To: Albert Esteve
Cc: qemu-devel, stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, manos.pitsidianakis, Fabiano Rosas,
Michael S. Tsirkin, Stefano Garzarella, slp, dbassey
On Wed, Oct 15, 2025 at 10:59:24AM +0200, Albert Esteve wrote:
> +static void virtio_shared_memory_mapping_instance_finalize(Object *obj)
> +{
> + VirtioSharedMemoryMapping *mapping = VIRTIO_SHARED_MEMORY_MAPPING(obj);
> +
> + /* Clean up MemoryRegion if it exists */
> + if (mapping->mr) {
> + /* Unparent the MemoryRegion to trigger cleanup */
> + object_unparent(OBJECT(mapping->mr));
> + mapping->mr = NULL;
> + }
> +
> + /* Close file descriptor */
> + if (mapping->fd >= 0) {
> + close(mapping->fd);
> + mapping->fd = -1;
> + }
This looks like a double close(), which could accidentally close a
reused fd in a multi-threaded program. reclaim_ramblock() already closes
the file descriptor (it's stored in RAMBlock), so I think the
VirtioSharedMemoryMapping struct should not hold on to fd and close it
here.
Other than that:
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* Re: [PATCH v9 7/7] vhost-user-device: Add shared memory BAR
2025-10-15 8:59 ` [PATCH v9 7/7] vhost-user-device: Add shared memory BAR Albert Esteve
@ 2025-10-15 15:35 ` Stefan Hajnoczi
0 siblings, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2025-10-15 15:35 UTC (permalink / raw)
To: Albert Esteve
Cc: qemu-devel, stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, manos.pitsidianakis, Fabiano Rosas,
Michael S. Tsirkin, Stefano Garzarella, slp, dbassey
On Wed, Oct 15, 2025 at 10:59:30AM +0200, Albert Esteve wrote:
> @@ -25,10 +29,37 @@ OBJECT_DECLARE_SIMPLE_TYPE(VHostUserDevicePCI, VHOST_USER_TEST_DEVICE_PCI)
> static void vhost_user_device_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
> {
> VHostUserDevicePCI *dev = VHOST_USER_TEST_DEVICE_PCI(vpci_dev);
> - DeviceState *vdev = DEVICE(&dev->vub);
> + DeviceState *dev_state = DEVICE(&dev->vub);
> + VirtIODevice *vdev = VIRTIO_DEVICE(dev_state);
> + VirtioSharedMemory *shmem, *next;
> + uint64_t offset = 0, shmem_size = 0;
>
> + vpci_dev->modern_mem_bar_idx = 2;
> vpci_dev->nvectors = 1;
> - qdev_realize(vdev, BUS(&vpci_dev->bus), errp);
> + qdev_realize(dev_state, BUS(&vpci_dev->bus), errp);
> +
> + QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
> + if (shmem->mr.size > UINT64_MAX - shmem_size) {
> + error_setg(errp, "Total shared memory required overflow");
> + return;
> + }
> + shmem_size = shmem_size + shmem->mr.size;
> + }
> + if (shmem_size) {
Let's reject invalid BAR configurations when VIRTIO Shared Memory
Regions are in use:
if (vpci_dev->flags & VIRTIO_PCI_FLAG_MODERN_PIO_NOTIFY) {
error_setg(errp, "modern-pio-notify is not supported due to PCI BAR layout limitations");
return;
}
Otherwise:
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* Re: [PATCH v9 1/7] vhost-user: Add VirtIO Shared Memory map request
2025-10-15 12:45 ` Manos Pitsidianakis
@ 2025-10-16 13:42 ` Albert Esteve
2025-10-16 13:53 ` Manos Pitsidianakis
0 siblings, 1 reply; 17+ messages in thread
From: Albert Esteve @ 2025-10-16 13:42 UTC (permalink / raw)
To: Manos Pitsidianakis
Cc: qemu-devel, stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, Fabiano Rosas, Michael S. Tsirkin,
stefanha, Stefano Garzarella, slp, dbassey
On Wed, Oct 15, 2025 at 2:51 PM Manos Pitsidianakis
<manos.pitsidianakis@linaro.org> wrote:
>
> On Wed, 15 Oct 2025 15:29, Albert Esteve <aesteve@redhat.com> wrote:
> >On Wed, Oct 15, 2025 at 11:52 AM Manos Pitsidianakis
> ><manos.pitsidianakis@linaro.org> wrote:
> >>
> >> On Wed, 15 Oct 2025 11:59, Albert Esteve <aesteve@redhat.com> wrote:
> >> >Add SHMEM_MAP/UNMAP requests to vhost-user for dynamic management of
> >> >VIRTIO Shared Memory mappings.
> >> >
> >> >This implementation introduces VirtioSharedMemoryMapping as a unified
> >> >QOM object that manages both the mapping metadata and MemoryRegion
> >> >lifecycle. This object provides reference-counted lifecycle management
> >> >with automatic cleanup of file descriptors and memory regions
> >> >through QOM finalization.
> >> >
> >> >This request allows backends to dynamically map file descriptors into a
> >> >VIRTIO Shared Memory Region identified by their shmid. Maps are created
> >> >using memory_region_init_ram_from_fd() with configurable read/write
> >> >permissions, and the resulting MemoryRegions are added as subregions to
> >> >the shmem container region. The mapped memory is then advertised to the
> >> >guest VIRTIO drivers as a base address plus offset for reading and
> >> >writing according to the requested mmap flags.
> >> >
> >> >The backend can unmap memory ranges within a given VIRTIO Shared Memory
> >> >Region to free resources. Upon receiving this message, the frontend
> >> >removes the MemoryRegion as a subregion and automatically unreferences
> >> >the VirtioSharedMemoryMapping object, triggering cleanup if no other
> >> >references exist.
> >> >
> >> >Error handling has been improved to ensure consistent behavior across
> >> >handlers that manage their own vhost_user_send_resp() calls. Since
> >> >these handlers clear the VHOST_USER_NEED_REPLY_MASK flag, explicit
> >> >error checking ensures proper connection closure on failures,
> >> >maintaining the expected error flow.
> >> >
> >> >Note the memory region commit for these operations needs to be delayed
> >> >until after we reply to the backend to avoid deadlocks. Otherwise,
> >> >the MemoryListener would send a VHOST_USER_SET_MEM_TABLE message
> >> >before the reply.
> >> >
> >> >Signed-off-by: Albert Esteve <aesteve@redhat.com>
> >> >---
> >> > hw/virtio/vhost-user.c | 213 ++++++++++++++++++++++
> >> > hw/virtio/virtio.c | 207 +++++++++++++++++++++
> >> > include/hw/virtio/virtio.h | 136 ++++++++++++++
> >> > subprojects/libvhost-user/libvhost-user.c | 70 +++++++
> >> > subprojects/libvhost-user/libvhost-user.h | 54 ++++++
> >> > 5 files changed, 680 insertions(+)
> >> >
> >> >diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> >> >index 36c9c2e04d..8a4385a5ad 100644
> >> >--- a/hw/virtio/vhost-user.c
> >> >+++ b/hw/virtio/vhost-user.c
> >> >@@ -115,6 +115,8 @@ typedef enum VhostUserBackendRequest {
> >> > VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> >> > VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> >> > VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> >> >+ VHOST_USER_BACKEND_SHMEM_MAP = 9,
> >> >+ VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> >> > VHOST_USER_BACKEND_MAX
> >> > } VhostUserBackendRequest;
> >> >
> >> >@@ -192,6 +194,23 @@ typedef struct VhostUserShared {
> >> > unsigned char uuid[16];
> >> > } VhostUserShared;
> >> >
> >> >+/* For the flags field of VhostUserMMap */
> >> >+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
> >> >+
> >> >+typedef struct {
> >> >+ /* VIRTIO Shared Memory Region ID */
> >> >+ uint8_t shmid;
> >> >+ uint8_t padding[7];
> >> >+ /* File offset */
> >> >+ uint64_t fd_offset;
> >> >+ /* Offset within the VIRTIO Shared Memory Region */
> >> >+ uint64_t shm_offset;
> >> >+ /* Size of the mapping */
> >> >+ uint64_t len;
> >> >+ /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
> >> >+ uint64_t flags;
> >> >+} VhostUserMMap;
> >> >+
> >> > typedef struct {
> >> > VhostUserRequest request;
> >> >
> >> >@@ -224,6 +243,7 @@ typedef union {
> >> > VhostUserInflight inflight;
> >> > VhostUserShared object;
> >> > VhostUserTransferDeviceState transfer_state;
> >> >+ VhostUserMMap mmap;
> >> > } VhostUserPayload;
> >> >
> >> > typedef struct VhostUserMsg {
> >> >@@ -1768,6 +1788,186 @@ vhost_user_backend_handle_shared_object_lookup(struct vhost_user *u,
> >> > return 0;
> >> > }
> >> >
> >> >+/**
> >> >+ * vhost_user_backend_handle_shmem_map() - Handle SHMEM_MAP backend request
> >> >+ * @dev: vhost device
> >> >+ * @ioc: QIOChannel for communication
> >> >+ * @hdr: vhost-user message header
> >> >+ * @payload: message payload containing mapping details
> >> >+ * @fd: file descriptor for the shared memory region
> >> >+ *
> >> >+ * Handles VHOST_USER_BACKEND_SHMEM_MAP requests from the backend. Creates
> >> >+ * a VhostUserShmemObject to manage the shared memory mapping and adds it
> >> >+ * to the appropriate VirtIO shared memory region. The VhostUserShmemObject
> >> >+ * serves as an intermediate parent for the MemoryRegion, ensuring proper
> >> >+ * lifecycle management with reference counting.
> >> >+ *
> >> >+ * Returns: 0 on success, negative errno on failure
> >> >+ */
> >> >+static int
> >> >+vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> >> >+ QIOChannel *ioc,
> >> >+ VhostUserHeader *hdr,
> >> >+ VhostUserPayload *payload,
> >> >+ int fd)
> >> >+{
> >> >+ VirtioSharedMemory *shmem;
> >> >+ VhostUserMMap *vu_mmap = &payload->mmap;
> >> >+ VirtioSharedMemoryMapping *existing;
> >> >+ Error *local_err = NULL;
> >> >+ int ret = 0;
> >> >+
> >> >+ if (fd < 0) {
> >> >+ error_report("Bad fd for map");
> >> >+ ret = -EBADF;
> >> >+ goto send_reply;
> >>
> >> Apologies if I missed something obvious, but won't going to send_reply
> >> at this point cause memory_region_transaction_commit() to be called?
> >> Maybe we should have two cleanup labels, depending on whether
> >> memory_region_transaction_begin() has been called yet?
> >
> >You are right, this is not good, at transaction_depth 0 it would fail
> >an assertion within memory_region_transaction_commit().
> >
> >Not sure another cleanup label will work, though, as the transaction
> >commit has to go after the reply. Maybe we can make the reply within
> >the handler optional (leveraging the REPLY_MASK flag). But that will
> >need to be tuned when this patch lands:
> >https://lists.gnu.org/archive/html/qemu-devel/2025-10/msg03632.html
> >
> >Or just a simpler solution like this would do?
>
>
> Yes, this looks sufficient. The other cleanup-label alternative I
> meant was:
>
> +send_reply_no_commit:
> + if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> + payload->u64 = !!ret;
> + hdr->size = sizeof(payload->u64);
> + if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> + error_report_err(local_err);
> + return -EFAULT;
> + }
> + }
> + return 0;
> +send_reply:
> + if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> + payload->u64 = !!ret;
> + hdr->size = sizeof(payload->u64);
> + if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> + error_report_err(local_err);
> + memory_region_transaction_commit();
> + return -EFAULT;
> + }
> + }
> +
> + memory_region_transaction_commit();
> +
> + return 0;
> +}
>
> But this is not much prettier. (In Rust this would be trivial with a MR
> transaction RAII object...)
I tried this solution too, but it fails with `Illegal instruction (core
dumped)`. I am not sure why; the logic should be essentially the same,
which has me puzzled.
So I'll use the boolean-variable solution, which seems to work fine.
>
> >
> >diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> >index c39cf52dbc..46126eed3b 100644
> >--- a/hw/virtio/vhost-user.c
> >+++ b/hw/virtio/vhost-user.c
> >@@ -1824,6 +1824,7 @@ vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> > VirtioSharedMemoryMapping *existing;
> > Error *local_err = NULL;
> > int ret = 0;
> >+ bool transaction_started = false;
> >
> > if (fd < 0) {
> > error_report("Bad fd for map");
> >@@ -1864,6 +1865,7 @@ vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> > }
> >
> > memory_region_transaction_begin();
> >+ transaction_started = true;
> >
> > /* Create VirtioSharedMemoryMapping object */
> > VirtioSharedMemoryMapping *mapping = virtio_shared_memory_mapping_new(
> >@@ -1889,14 +1891,18 @@ send_reply:
> > hdr->size = sizeof(payload->u64);
> > if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> > error_report_err(local_err);
> >- memory_region_transaction_commit();
> >+ if (transaction_started) {
> >+ memory_region_transaction_commit();
> >+ }
> > return -EFAULT;
> > }
> > }
> >
> >- memory_region_transaction_commit();
> >-
> >- return 0;
> >+ /* Only commit if we started a transaction */
> >+ if (transaction_started) {
> >+ memory_region_transaction_commit();
> >+ }
> >+ return ret;
> > }
> >
> >>
> >> >+ }
> >> >+
> >> >+ if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
> >> >+ error_report("Device has no VIRTIO Shared Memory Regions. "
> >> >+ "Requested ID: %d", vu_mmap->shmid);
> >> >+ ret = -EFAULT;
> >> >+ goto send_reply;
> >> >+ }
> >> >+
> >> >+ shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
> >> >+ if (!shmem) {
> >> >+ error_report("VIRTIO Shared Memory Region at "
> >> >+ "ID %d not found or uninitialized", vu_mmap->shmid);
> >> >+ ret = -EFAULT;
> >> >+ goto send_reply;
> >> >+ }
> >> >+
> >> >+ if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> >> >+ (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
> >> >+ error_report("Bad offset/len for mmap %" PRIx64 "+%" PRIx64,
> >> >+ vu_mmap->shm_offset, vu_mmap->len);
> >> >+ ret = -EFAULT;
> >> >+ goto send_reply;
> >> >+ }
> >> >+
> >> >+ QTAILQ_FOREACH(existing, &shmem->mmaps, link) {
> >> >+ if (ranges_overlap(existing->offset, existing->len,
> >> >+ vu_mmap->shm_offset, vu_mmap->len)) {
> >> >+ error_report("VIRTIO Shared Memory mapping overlap");
> >> >+ ret = -EFAULT;
> >> >+ goto send_reply;
> >> >+ }
> >> >+ }
> >> >+
> >> >+ memory_region_transaction_begin();
> >> >+
> >> >+ /* Create VirtioSharedMemoryMapping object */
> >> >+ VirtioSharedMemoryMapping *mapping = virtio_shared_memory_mapping_new(
> >> >+ vu_mmap->shmid, fd, vu_mmap->fd_offset, vu_mmap->shm_offset,
> >> >+ vu_mmap->len, vu_mmap->flags & VHOST_USER_FLAG_MAP_RW);
> >> >+
> >> >+ if (!mapping) {
> >> >+ ret = -EFAULT;
> >> >+ goto send_reply;
> >> >+ }
> >> >+
> >> >+ /* Add the mapping to the shared memory region */
> >> >+ if (virtio_add_shmem_map(shmem, mapping) != 0) {
> >> >+ error_report("Failed to add shared memory mapping");
> >> >+ object_unref(OBJECT(mapping));
> >> >+ ret = -EFAULT;
> >> >+ goto send_reply;
> >> >+ }
> >> >+
> >> >+send_reply:
> >> >+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> >> >+ payload->u64 = !!ret;
> >> >+ hdr->size = sizeof(payload->u64);
> >> >+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> >> >+ error_report_err(local_err);
> >> >+ memory_region_transaction_commit();
> >> >+ return -EFAULT;
> >> >+ }
> >> >+ }
> >> >+
> >> >+ memory_region_transaction_commit();
> >> >+
> >> >+ return 0;
> >> >+}
> >> >+
> >> >+/**
> >> >+ * vhost_user_backend_handle_shmem_unmap() - Handle SHMEM_UNMAP backend request
> >> >+ * @dev: vhost device
> >> >+ * @ioc: QIOChannel for communication
> >> >+ * @hdr: vhost-user message header
> >> >+ * @payload: message payload containing unmapping details
> >> >+ *
> >> >+ * Handles VHOST_USER_BACKEND_SHMEM_UNMAP requests from the backend. Removes
> >> >+ * the specified memory mapping from the VirtIO shared memory region. This
> >> >+ * automatically unreferences the associated VirtioSharedMemoryMapping, which may
> >> >+ * trigger its finalization and cleanup (munmap, close fd) if no other
> >> >+ * references exist.
> >> >+ *
> >> >+ * Returns: 0 on success, negative errno on failure
> >> >+ */
> >> >+static int
> >> >+vhost_user_backend_handle_shmem_unmap(struct vhost_dev *dev,
> >> >+ QIOChannel *ioc,
> >> >+ VhostUserHeader *hdr,
> >> >+ VhostUserPayload *payload)
> >> >+{
> >> >+ VirtioSharedMemory *shmem;
> >> >+ VirtioSharedMemoryMapping *mmap = NULL;
> >> >+ VhostUserMMap *vu_mmap = &payload->mmap;
> >> >+ Error *local_err = NULL;
> >> >+ int ret = 0;
> >> >+
> >> >+ if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
> >> >+ error_report("Device has no VIRTIO Shared Memory Regions. "
> >> >+ "Requested ID: %d", vu_mmap->shmid);
> >> >+ ret = -EFAULT;
> >> >+ goto send_reply;
> >> >+ }
> >> >+
> >> >+ shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
> >> >+ if (!shmem) {
> >> >+ error_report("VIRTIO Shared Memory Region at "
> >> >+ "ID %d not found or uninitialized", vu_mmap->shmid);
> >> >+ ret = -EFAULT;
> >> >+ goto send_reply;
> >> >+ }
> >> >+
> >> >+ if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> >> >+ (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
> >> >+ error_report("Bad offset/len for unmap %" PRIx64 "+%" PRIx64,
> >> >+ vu_mmap->shm_offset, vu_mmap->len);
> >> >+ ret = -EFAULT;
> >> >+ goto send_reply;
> >> >+ }
> >> >+
> >> >+ mmap = virtio_find_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
> >> >+ if (!mmap) {
> >> >+ error_report("Shared memory mapping not found at offset %" PRIx64
> >> >+ " with length %" PRIx64,
> >> >+ vu_mmap->shm_offset, vu_mmap->len);
> >> >+ ret = -EFAULT;
> >> >+ goto send_reply;
> >> >+ }
> >> >+
> >> >+send_reply:
> >> >+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> >> >+ payload->u64 = !!ret;
> >> >+ hdr->size = sizeof(payload->u64);
> >> >+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> >> >+ error_report_err(local_err);
> >> >+ return -EFAULT;
> >> >+ }
> >> >+ }
> >> >+
> >> >+ if (!ret && mmap) {
> >> >+ /* Free the MemoryRegion only after reply */
> >> >+ virtio_del_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
> >> >+ }
> >> >+
> >> >+ return 0;
> >> >+}
> >> >+
> >> > static void close_backend_channel(struct vhost_user *u)
> >> > {
> >> > g_source_destroy(u->backend_src);
> >> >@@ -1836,6 +2036,19 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
> >> > ret = vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
> >> > &hdr, &payload);
> >> > break;
> >> >+ case VHOST_USER_BACKEND_SHMEM_MAP:
> >> >+ /* Handler manages its own response, check error and close connection */
> >> >+ if (vhost_user_backend_handle_shmem_map(dev, ioc, &hdr, &payload,
> >> >+ fd ? fd[0] : -1) < 0) {
> >> >+ goto err;
> >> >+ }
> >> >+ break;
> >> >+ case VHOST_USER_BACKEND_SHMEM_UNMAP:
> >> >+ /* Handler manages its own response, check error and close connection */
> >> >+ if (vhost_user_backend_handle_shmem_unmap(dev, ioc, &hdr, &payload) < 0) {
> >> >+ goto err;
> >> >+ }
> >> >+ break;
> >> > default:
> >> > error_report("Received unexpected msg type: %d.", hdr.request);
> >> > ret = -EINVAL;
> >> >diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> >> >index 153ee0a0cf..37070e7951 100644
> >> >--- a/hw/virtio/virtio.c
> >> >+++ b/hw/virtio/virtio.c
> >> >@@ -3086,6 +3086,181 @@ int virtio_save(VirtIODevice *vdev, QEMUFile *f)
> >> > return vmstate_save_state(f, &vmstate_virtio, vdev, NULL, &error_fatal);
> >> > }
> >> >
> >> >+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid, uint64_t size)
> >> >+{
> >> >+ VirtioSharedMemory *elem;
> >> >+ g_autofree char *name = NULL;
> >> >+
> >> >+ elem = g_new0(VirtioSharedMemory, 1);
> >> >+ elem->shmid = shmid;
> >> >+
> >> >+ /* Initialize embedded MemoryRegion as container for shmem mappings */
> >> >+ name = g_strdup_printf("virtio-shmem-%d", shmid);
> >> >+ memory_region_init(&elem->mr, OBJECT(vdev), name, size);
> >> >+ QTAILQ_INIT(&elem->mmaps);
> >> >+ QSIMPLEQ_INSERT_TAIL(&vdev->shmem_list, elem, entry);
> >> >+ return elem;
> >> >+}
> >> >+
> >> >+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid)
> >> >+{
> >> >+ VirtioSharedMemory *shmem, *next;
> >> >+ QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
> >> >+ if (shmem->shmid == shmid) {
> >> >+ return shmem;
> >> >+ }
> >> >+ }
> >> >+ return NULL;
> >> >+}
> >> >+
> >> >+static void virtio_shared_memory_mapping_instance_init(Object *obj)
> >> >+{
> >> >+ VirtioSharedMemoryMapping *mapping = VIRTIO_SHARED_MEMORY_MAPPING(obj);
> >> >+
> >> >+ mapping->shmid = 0;
> >> >+ mapping->fd = -1;
> >> >+ mapping->offset = 0;
> >> >+ mapping->len = 0;
> >> >+ mapping->mr = NULL;
> >> >+}
> >> >+
> >> >+static void virtio_shared_memory_mapping_instance_finalize(Object *obj)
> >> >+{
> >> >+ VirtioSharedMemoryMapping *mapping = VIRTIO_SHARED_MEMORY_MAPPING(obj);
> >> >+
> >> >+ /* Clean up MemoryRegion if it exists */
> >> >+ if (mapping->mr) {
> >> >+ /* Unparent the MemoryRegion to trigger cleanup */
> >> >+ object_unparent(OBJECT(mapping->mr));
> >> >+ mapping->mr = NULL;
> >> >+ }
> >> >+
> >> >+ /* Close file descriptor */
> >> >+ if (mapping->fd >= 0) {
> >> >+ close(mapping->fd);
> >> >+ mapping->fd = -1;
> >> >+ }
> >> >+}
> >> >+
> >> >+VirtioSharedMemoryMapping *virtio_shared_memory_mapping_new(uint8_t shmid,
> >> >+ int fd,
> >> >+ uint64_t fd_offset,
> >> >+ uint64_t shm_offset,
> >> >+ uint64_t len,
> >> >+ bool allow_write)
> >> >+{
> >> >+ VirtioSharedMemoryMapping *mapping;
> >> >+ MemoryRegion *mr;
> >> >+ g_autoptr(GString) mr_name = g_string_new(NULL);
> >> >+ uint32_t ram_flags;
> >> >+ Error *local_err = NULL;
> >> >+
> >> >+ if (len == 0) {
> >> >+ error_report("Shared memory mapping size cannot be zero");
> >> >+ return NULL;
> >> >+ }
> >> >+
> >> >+ fd = dup(fd);
> >> >+ if (fd < 0) {
> >> >+ error_report("Failed to duplicate fd: %s", strerror(errno));
> >> >+ return NULL;
> >> >+ }
> >> >+
> >> >+ /* Determine RAM flags */
> >> >+ ram_flags = RAM_SHARED;
> >> >+ if (!allow_write) {
> >> >+ ram_flags |= RAM_READONLY_FD;
> >> >+ }
> >> >+
> >> >+ /* Create the VirtioSharedMemoryMapping */
> >> >+ mapping = VIRTIO_SHARED_MEMORY_MAPPING(
> >> >+ object_new(TYPE_VIRTIO_SHARED_MEMORY_MAPPING));
> >> >+
> >> >+ /* Set up object properties */
> >> >+ mapping->shmid = shmid;
> >> >+ mapping->fd = fd;
> >> >+ mapping->offset = shm_offset;
> >> >+ mapping->len = len;
> >> >+
> >> >+ /* Create MemoryRegion as a child of this object */
> >> >+ mr = g_new0(MemoryRegion, 1);
> >> >+ g_string_printf(mr_name, "virtio-shmem-%d-%" PRIx64, shmid, shm_offset);
> >> >+
> >> >+ /* Initialize MemoryRegion with file descriptor */
> >> >+ if (!memory_region_init_ram_from_fd(mr, OBJECT(mapping), mr_name->str,
> >> >+ len, ram_flags, fd, fd_offset,
> >> >+ &local_err)) {
> >> >+ error_report_err(local_err);
> >> >+ g_free(mr);
> >> >+ close(fd);
> >> >+ object_unref(OBJECT(mapping));
> >> >+ return NULL;
> >> >+ }
> >> >+
> >> >+ mapping->mr = mr;
> >> >+ return mapping;
> >> >+}
> >> >+
> >> >+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
> >> >+ VirtioSharedMemoryMapping *mapping)
> >> >+{
> >> >+ if (!mapping) {
> >> >+ error_report("VirtioSharedMemoryMapping cannot be NULL");
> >> >+ return -1;
> >> >+ }
> >> >+ if (!mapping->mr) {
> >> >+ error_report("VirtioSharedMemoryMapping has no MemoryRegion");
> >> >+ return -1;
> >> >+ }
> >> >+
> >> >+ /* Validate boundaries against the VIRTIO shared memory region */
> >> >+ if (mapping->offset + mapping->len > shmem->mr.size) {
> >> >+ error_report("Memory exceeds the shared memory boundaries");
> >> >+ return -1;
> >> >+ }
> >> >+
> >> >+ /* Add as subregion to the VIRTIO shared memory */
> >> >+ memory_region_add_subregion(&shmem->mr, mapping->offset, mapping->mr);
> >> >+
> >> >+ /* Add to the mapped regions list */
> >> >+ QTAILQ_INSERT_TAIL(&shmem->mmaps, mapping, link);
> >> >+
> >> >+ return 0;
> >> >+}
> >> >+
> >> >+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
> >> >+ hwaddr offset, uint64_t size)
> >> >+{
> >> >+ VirtioSharedMemoryMapping *mapping;
> >> >+ QTAILQ_FOREACH(mapping, &shmem->mmaps, link) {
> >> >+ if (mapping->offset == offset && mapping->mr->size == size) {
> >> >+ return mapping;
> >> >+ }
> >> >+ }
> >> >+ return NULL;
> >> >+}
> >> >+
> >> >+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
> >> >+ uint64_t size)
> >> >+{
> >> >+ VirtioSharedMemoryMapping *mapping = virtio_find_shmem_map(shmem, offset, size);
> >> >+ if (mapping == NULL) {
> >> >+ return;
> >> >+ }
> >> >+
> >> >+ /*
> >> >+ * Remove from memory region first
> >> >+ */
> >> >+ memory_region_del_subregion(&shmem->mr, mapping->mr);
> >> >+
> >> >+ /*
> >> >+ * Remove from list and unref the mapping which will trigger automatic cleanup
> >> >+ * when the reference count reaches zero.
> >> >+ */
> >> >+ QTAILQ_REMOVE(&shmem->mmaps, mapping, link);
> >> >+ object_unref(OBJECT(mapping));
> >> >+}
> >> >+
> >> > /* A wrapper for use as a VMState .put function */
> >> > static int virtio_device_put(QEMUFile *f, void *opaque, size_t size,
> >> > const VMStateField *field, JSONWriter *vmdesc)
> >> >@@ -3212,6 +3387,7 @@ void virtio_reset(void *opaque)
> >> > {
> >> > VirtIODevice *vdev = opaque;
> >> > VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
> >> >+ VirtioSharedMemory *shmem;
> >> > uint64_t features[VIRTIO_FEATURES_NU64S];
> >> > int i;
> >> >
> >> >@@ -3251,6 +3427,14 @@ void virtio_reset(void *opaque)
> >> > for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
> >> > __virtio_queue_reset(vdev, i);
> >> > }
> >> >+
> >> >+ /* Mappings are removed to prevent stale fds from remaining open. */
> >> >+ QSIMPLEQ_FOREACH(shmem, &vdev->shmem_list, entry) {
> >> >+ while (!QTAILQ_EMPTY(&shmem->mmaps)) {
> >> >+ VirtioSharedMemoryMapping *mapping = QTAILQ_FIRST(&shmem->mmaps);
> >> >+ virtio_del_shmem_map(shmem, mapping->offset, mapping->mr->size);
> >> >+ }
> >> >+ }
> >> > }
> >> >
> >> > static void virtio_device_check_notification_compatibility(VirtIODevice *vdev,
> >> >@@ -3574,6 +3758,7 @@ void virtio_init(VirtIODevice *vdev, uint16_t device_id, size_t config_size)
> >> > NULL, virtio_vmstate_change, vdev);
> >> > vdev->device_endian = virtio_default_endian();
> >> > vdev->use_guest_notifier_mask = true;
> >> >+ QSIMPLEQ_INIT(&vdev->shmem_list);
> >> > }
> >> >
> >> > /*
> >> >@@ -4085,11 +4270,24 @@ static void virtio_device_free_virtqueues(VirtIODevice *vdev)
> >> > static void virtio_device_instance_finalize(Object *obj)
> >> > {
> >> > VirtIODevice *vdev = VIRTIO_DEVICE(obj);
> >> >+ VirtioSharedMemory *shmem;
> >> >
> >> > virtio_device_free_virtqueues(vdev);
> >> >
> >> > g_free(vdev->config);
> >> > g_free(vdev->vector_queues);
> >> >+ while (!QSIMPLEQ_EMPTY(&vdev->shmem_list)) {
> >> >+ shmem = QSIMPLEQ_FIRST(&vdev->shmem_list);
> >> >+ while (!QTAILQ_EMPTY(&shmem->mmaps)) {
> >> >+ VirtioSharedMemoryMapping *mapping = QTAILQ_FIRST(&shmem->mmaps);
> >> >+ virtio_del_shmem_map(shmem, mapping->offset, mapping->mr->size);
> >> >+ }
> >> >+
> >> >+ /* Clean up the embedded MemoryRegion */
> >> >+ object_unparent(OBJECT(&shmem->mr));
> >> >+ QSIMPLEQ_REMOVE_HEAD(&vdev->shmem_list, entry);
> >> >+ g_free(shmem);
> >> >+ }
> >> > }
> >> >
> >> > static const Property virtio_properties[] = {
> >> >@@ -4455,9 +4653,18 @@ static const TypeInfo virtio_device_info = {
> >> > .class_size = sizeof(VirtioDeviceClass),
> >> > };
> >> >
> >> >+static const TypeInfo virtio_shared_memory_mapping_info = {
> >> >+ .name = TYPE_VIRTIO_SHARED_MEMORY_MAPPING,
> >> >+ .parent = TYPE_OBJECT,
> >> >+ .instance_size = sizeof(VirtioSharedMemoryMapping),
> >> >+ .instance_init = virtio_shared_memory_mapping_instance_init,
> >> >+ .instance_finalize = virtio_shared_memory_mapping_instance_finalize,
> >> >+};
> >> >+
> >> > static void virtio_register_types(void)
> >> > {
> >> > type_register_static(&virtio_device_info);
> >> >+ type_register_static(&virtio_shared_memory_mapping_info);
> >> > }
> >> >
> >> > type_init(virtio_register_types)
> >> >diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> >> >index d97529c3f1..cee807e236 100644
> >> >--- a/include/hw/virtio/virtio.h
> >> >+++ b/include/hw/virtio/virtio.h
> >> >@@ -99,6 +99,46 @@ enum virtio_device_endian {
> >> > VIRTIO_DEVICE_ENDIAN_BIG,
> >> > };
> >> >
> >> >+#define TYPE_VIRTIO_SHARED_MEMORY_MAPPING "virtio-shared-memory-mapping"
> >> >+OBJECT_DECLARE_SIMPLE_TYPE(VirtioSharedMemoryMapping, VIRTIO_SHARED_MEMORY_MAPPING)
> >> >+
> >> >+/**
> >> >+ * VirtioSharedMemoryMapping:
> >> >+ * @parent: Parent QOM object
> >> >+ * @shmid: VIRTIO Shared Memory Region ID
> >> >+ * @fd: File descriptor for the shared memory region
> >> >+ * @offset: Offset within the VIRTIO Shared Memory Region
> >> >+ * @len: Size of the mapping
> >> >+ * @mr: MemoryRegion associated with this shared memory mapping
> >> >+ * @link: List entry for the shared memory region's mapping list
> >> >+ *
> >> >+ * A QOM object that represents an individual file descriptor-based shared
> >> >+ * memory mapping within a VIRTIO Shared Memory Region. It manages the
> >> >+ * MemoryRegion lifecycle and file descriptor cleanup through QOM reference
> >> >+ * counting. When the object is unreferenced and its reference count drops
> >> >+ * to zero, it automatically cleans up the MemoryRegion and closes the file
> >> >+ * descriptor.
> >> >+ */
> >> >+struct VirtioSharedMemoryMapping {
> >> >+ Object parent;
> >> >+
> >> >+ uint8_t shmid;
> >> >+ int fd;
> >> >+ hwaddr offset;
> >> >+ uint64_t len;
> >> >+ MemoryRegion *mr;
> >> >+ QTAILQ_ENTRY(VirtioSharedMemoryMapping) link;
> >> >+};
> >> >+
> >> >+struct VirtioSharedMemory {
> >> >+ uint8_t shmid;
> >> >+ MemoryRegion mr;
> >> >+ QTAILQ_HEAD(, VirtioSharedMemoryMapping) mmaps;
> >> >+ QSIMPLEQ_ENTRY(VirtioSharedMemory) entry;
> >> >+};
> >> >+
> >> >+typedef struct VirtioSharedMemory VirtioSharedMemory;
> >> >+
> >> > /**
> >> > * struct VirtIODevice - common VirtIO structure
> >> > * @name: name of the device
> >> >@@ -168,6 +208,8 @@ struct VirtIODevice
> >> > */
> >> > EventNotifier config_notifier;
> >> > bool device_iotlb_enabled;
> >> >+ /* List of VIRTIO Shared Memory Regions for mappings. */
> >> >+ QSIMPLEQ_HEAD(, VirtioSharedMemory) shmem_list;
> >> > };
> >> >
> >> > struct VirtioDeviceClass {
> >> >@@ -298,6 +340,100 @@ void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
> >> >
> >> > int virtio_save(VirtIODevice *vdev, QEMUFile *f);
> >> >
> >> >+/**
> >> >+ * virtio_new_shmem_region() - Create a new shared memory region
> >> >+ * @vdev: VirtIODevice
> >> >+ * @shmid: Shared memory ID
> >> >+ * @size: Size of the shared memory region
> >> >+ *
> >> >+ * Creates a new VirtioSharedMemory region for the given device and ID.
> >> >+ * The returned VirtioSharedMemory is owned by the VirtIODevice and will
> >> >+ * be automatically freed when the device is destroyed. The caller
> >> >+ * should not free the returned pointer.
> >> >+ *
> >> >+ * Returns: Pointer to the new VirtioSharedMemory region, or NULL on failure
> >> >+ */
> >> >+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid, uint64_t size);
> >> >+
> >> >+/**
> >> >+ * virtio_find_shmem_region() - Find an existing shared memory region
> >> >+ * @vdev: VirtIODevice
> >> >+ * @shmid: Shared memory ID to find
> >> >+ *
> >> >+ * Finds an existing VirtioSharedMemory region by ID. The returned pointer
> >> >+ * is owned by the VirtIODevice and should not be freed by the caller.
> >> >+ *
> >> >+ * Returns: Pointer to the VirtioSharedMemory region, or NULL if not found
> >> >+ */
> >> >+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid);
> >> >+
> >> >+/**
> >> >+ * virtio_shared_memory_mapping_new() - Create a new VirtioSharedMemoryMapping
> >> >+ * @shmid: VIRTIO Shared Memory Region ID
> >> >+ * @fd: File descriptor for the shared memory
> >> >+ * @fd_offset: Offset within the file descriptor
> >> >+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
> >> >+ * @len: Size of the mapping
> >> >+ * @allow_write: Whether to allow write access to the mapping
> >> >+ *
> >> >+ * Creates a new VirtioSharedMemoryMapping that manages a shared memory mapping.
> >> >+ * The object will create a MemoryRegion using memory_region_init_ram_from_fd()
> >> >+ * as a child object. When the object is finalized, it will automatically
> >> >+ * clean up the MemoryRegion and close the file descriptor.
> >> >+ *
> >> >+ * Return: A new VirtioSharedMemoryMapping on success, NULL on error.
> >> >+ */
> >> >+VirtioSharedMemoryMapping *virtio_shared_memory_mapping_new(uint8_t shmid,
> >> >+ int fd,
> >> >+ uint64_t fd_offset,
> >> >+ uint64_t shm_offset,
> >> >+ uint64_t len,
> >> >+ bool allow_write);
> >> >+
> >> >+/**
> >> >+ * virtio_add_shmem_map() - Add a memory mapping to a shared region
> >> >+ * @shmem: VirtioSharedMemory region
> >> >+ * @mapping: VirtioSharedMemoryMapping to add (transfers ownership)
> >> >+ *
> >> >+ * Adds a memory mapping to the shared memory region. The VirtioSharedMemoryMapping
> >> >+ * ownership is transferred to the shared memory region and will be automatically
> >> >+ * cleaned up through QOM reference counting when virtio_del_shmem_map() is
> >> >+ * called or when the shared memory region is destroyed.
> >> >+ *
> >> >+ * Returns: 0 on success, negative errno on failure
> >> >+ */
> >> >+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
> >> >+ VirtioSharedMemoryMapping *mapping);
> >> >+
> >> >+/**
> >> >+ * virtio_find_shmem_map() - Find a memory mapping in a shared region
> >> >+ * @shmem: VirtioSharedMemory region
> >> >+ * @offset: Offset within the shared memory region
> >> >+ * @size: Size of the mapping to find
> >> >+ *
> >> >+ * Finds an existing memory mapping that covers the specified range.
> >> >+ * The returned VirtioSharedMemoryMapping is owned by the VirtioSharedMemory
> >> >+ * region and should not be freed by the caller.
> >> >+ *
> >> >+ * Returns: Pointer to the VirtioSharedMemoryMapping, or NULL if not found
> >> >+ */
> >> >+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
> >> >+ hwaddr offset, uint64_t size);
> >> >+
> >> >+/**
> >> >+ * virtio_del_shmem_map() - Remove a memory mapping from a shared region
> >> >+ * @shmem: VirtioSharedMemory region
> >> >+ * @offset: Offset of the mapping to remove
> >> >+ * @size: Size of the mapping to remove
> >> >+ *
> >> >+ * Removes a memory mapping from the shared memory region. This will
> >> >+ * automatically unref the associated VirtioSharedMemoryMapping, which may
> >> >+ * trigger its finalization and cleanup if no other references exist.
> >> >+ * The mapping's MemoryRegion will be properly unmapped and cleaned up.
> >> >+ */
> >> >+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
> >> >+ uint64_t size);
> >> >+
> >> > extern const VMStateInfo virtio_vmstate_info;
> >> >
> >> > #define VMSTATE_VIRTIO_DEVICE \
> >> >diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> >> >index 9c630c2170..034cbfdc3c 100644
> >> >--- a/subprojects/libvhost-user/libvhost-user.c
> >> >+++ b/subprojects/libvhost-user/libvhost-user.c
> >> >@@ -1592,6 +1592,76 @@ vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN])
> >> > return vu_send_message(dev, &msg);
> >> > }
> >> >
> >> >+bool
> >> >+vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> >> >+ uint64_t shm_offset, uint64_t len, uint64_t flags, int fd)
> >> >+{
> >> >+ VhostUserMsg vmsg = {
> >> >+ .request = VHOST_USER_BACKEND_SHMEM_MAP,
> >> >+ .size = sizeof(vmsg.payload.mmap),
> >> >+ .flags = VHOST_USER_VERSION,
> >> >+ .payload.mmap = {
> >> >+ .shmid = shmid,
> >> >+ .fd_offset = fd_offset,
> >> >+ .shm_offset = shm_offset,
> >> >+ .len = len,
> >> >+ .flags = flags,
> >> >+ },
> >> >+ .fd_num = 1,
> >> >+ .fds[0] = fd,
> >> >+ };
> >> >+
> >> >+ if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
> >> >+ return false;
> >> >+ }
> >> >+
> >> >+ if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> >> >+ vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> >> >+ }
> >> >+
> >> >+ pthread_mutex_lock(&dev->backend_mutex);
> >> >+ if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> >> >+ pthread_mutex_unlock(&dev->backend_mutex);
> >> >+ return false;
> >> >+ }
> >> >+
> >> >+ /* Also unlocks the backend_mutex */
> >> >+ return vu_process_message_reply(dev, &vmsg);
> >> >+}
> >> >+
> >> >+bool
> >> >+vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset, uint64_t len)
> >> >+{
> >> >+ VhostUserMsg vmsg = {
> >> >+ .request = VHOST_USER_BACKEND_SHMEM_UNMAP,
> >> >+ .size = sizeof(vmsg.payload.mmap),
> >> >+ .flags = VHOST_USER_VERSION,
> >> >+ .payload.mmap = {
> >> >+ .shmid = shmid,
> >> >+ .fd_offset = 0,
> >> >+ .shm_offset = shm_offset,
> >> >+ .len = len,
> >> >+ },
> >> >+ };
> >> >+
> >> >+ if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
> >> >+ return false;
> >> >+ }
> >> >+
> >> >+ if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> >> >+ vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> >> >+ }
> >> >+
> >> >+ pthread_mutex_lock(&dev->backend_mutex);
> >> >+ if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> >> >+ pthread_mutex_unlock(&dev->backend_mutex);
> >> >+ return false;
> >> >+ }
> >> >+
> >> >+ /* Also unlocks the backend_mutex */
> >> >+ return vu_process_message_reply(dev, &vmsg);
> >> >+}
> >> >+
> >> > static bool
> >> > vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
> >> > {
> >> >diff --git a/subprojects/libvhost-user/libvhost-user.h b/subprojects/libvhost-user/libvhost-user.h
> >> >index 2ffc58c11b..6a2d0f9fae 100644
> >> >--- a/subprojects/libvhost-user/libvhost-user.h
> >> >+++ b/subprojects/libvhost-user/libvhost-user.h
> >> >@@ -69,6 +69,8 @@ enum VhostUserProtocolFeature {
> >> > /* Feature 16 is reserved for VHOST_USER_PROTOCOL_F_STATUS. */
> >> > /* Feature 17 reserved for VHOST_USER_PROTOCOL_F_XEN_MMAP. */
> >> > VHOST_USER_PROTOCOL_F_SHARED_OBJECT = 18,
> >> >+ /* Feature 19 is reserved for VHOST_USER_PROTOCOL_F_DEVICE_STATE */
> >> >+ VHOST_USER_PROTOCOL_F_SHMEM = 20,
> >> > VHOST_USER_PROTOCOL_F_MAX
> >> > };
> >> >
> >> >@@ -127,6 +129,8 @@ typedef enum VhostUserBackendRequest {
> >> > VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> >> > VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> >> > VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> >> >+ VHOST_USER_BACKEND_SHMEM_MAP = 9,
> >> >+ VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> >> > VHOST_USER_BACKEND_MAX
> >> > } VhostUserBackendRequest;
> >> >
> >> >@@ -186,6 +190,23 @@ typedef struct VhostUserShared {
> >> > unsigned char uuid[UUID_LEN];
> >> > } VhostUserShared;
> >> >
> >> >+/* For the flags field of VhostUserMMap */
> >> >+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
> >> >+
> >> >+typedef struct {
> >> >+ /* VIRTIO Shared Memory Region ID */
> >> >+ uint8_t shmid;
> >> >+ uint8_t padding[7];
> >> >+ /* File offset */
> >> >+ uint64_t fd_offset;
> >> >+ /* Offset within the VIRTIO Shared Memory Region */
> >> >+ uint64_t shm_offset;
> >> >+ /* Size of the mapping */
> >> >+ uint64_t len;
> >> >+ /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
> >> >+ uint64_t flags;
> >> >+} VhostUserMMap;
> >> >+
> >> > #define VU_PACKED __attribute__((packed))
> >> >
> >> > typedef struct VhostUserMsg {
> >> >@@ -210,6 +231,7 @@ typedef struct VhostUserMsg {
> >> > VhostUserVringArea area;
> >> > VhostUserInflight inflight;
> >> > VhostUserShared object;
> >> >+ VhostUserMMap mmap;
> >> > } payload;
> >> >
> >> > int fds[VHOST_MEMORY_BASELINE_NREGIONS];
> >> >@@ -593,6 +615,38 @@ bool vu_add_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
> >> > */
> >> > bool vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
> >> >
> >> >+/**
> >> >+ * vu_shmem_map:
> >> >+ * @dev: a VuDev context
> >> >+ * @shmid: VIRTIO Shared Memory Region ID
> >> >+ * @fd_offset: File offset
> >> >+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
> >> >+ * @len: Size of the mapping
> >> >+ * @flags: Flags for the mmap operation
> >> >+ * @fd: A file descriptor
> >> >+ *
> >> >+ * Advertises a new mapping to be made in a given VIRTIO Shared Memory Region.
> >> >+ *
> >> >+ * Returns: TRUE on success, FALSE on failure.
> >> >+ */
> >> >+bool vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> >> >+ uint64_t shm_offset, uint64_t len, uint64_t flags, int fd);
> >> >+
> >> >+/**
> >> >+ * vu_shmem_unmap:
> >> >+ * @dev: a VuDev context
> >> >+ * @shmid: VIRTIO Shared Memory Region ID
> >> >+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
> >> >+ * @len: Size of the mapping
> >> >+ *
> >> >+ * Requests the front-end to unmap a given range in the VIRTIO Shared
> >> >+ * Memory Region with the requested `shmid`.
> >> >+ *
> >> >+ * Returns: TRUE on success, FALSE on failure.
> >> >+ */
> >> >+bool vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset,
> >> >+ uint64_t len);
> >> >+
> >> > /**
> >> > * vu_queue_set_notification:
> >> > * @dev: a VuDev context
> >> >--
> >> >2.49.0
> >> >
> >>
> >
>
* Re: [PATCH v9 1/7] vhost-user: Add VirtIO Shared Memory map request
2025-10-16 13:42 ` Albert Esteve
@ 2025-10-16 13:53 ` Manos Pitsidianakis
2025-10-16 14:20 ` Albert Esteve
0 siblings, 1 reply; 17+ messages in thread
From: Manos Pitsidianakis @ 2025-10-16 13:53 UTC (permalink / raw)
To: Albert Esteve
Cc: qemu-devel, stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, Fabiano Rosas, Michael S. Tsirkin,
stefanha, Stefano Garzarella, slp, dbassey
On Thu, Oct 16, 2025 at 4:43 PM Albert Esteve <aesteve@redhat.com> wrote:
>
> On Wed, Oct 15, 2025 at 2:51 PM Manos Pitsidianakis
> <manos.pitsidianakis@linaro.org> wrote:
> >
> > On Wed, 15 Oct 2025 15:29, Albert Esteve <aesteve@redhat.com> wrote:
> > >On Wed, Oct 15, 2025 at 11:52 AM Manos Pitsidianakis
> > ><manos.pitsidianakis@linaro.org> wrote:
> > >>
> > >> On Wed, 15 Oct 2025 11:59, Albert Esteve <aesteve@redhat.com> wrote:
> > >> >Add SHMEM_MAP/UNMAP requests to vhost-user for dynamic management of
> > >> >VIRTIO Shared Memory mappings.
> > >> >
> > >> >This implementation introduces VirtioSharedMemoryMapping as a unified
> > >> >QOM object that manages both the mapping metadata and MemoryRegion
> > >> >lifecycle. This object provides reference-counted lifecycle management
> > >> >with automatic cleanup of file descriptors and memory regions
> > >> >through QOM finalization.
> > >> >
> > >> >This request allows backends to dynamically map file descriptors into a
> > >> >VIRTIO Shared Memory Region identified by their shmid. Maps are created
> > >> >using memory_region_init_ram_from_fd() with configurable read/write
> > >> >permissions, and the resulting MemoryRegions are added as subregions to
> > >> >the shmem container region. The mapped memory is then advertised to the
> > >> >guest VIRTIO drivers as a base address plus offset for reading and
> > >> >writing according to the requested mmap flags.
> > >> >
> > >> >The backend can unmap memory ranges within a given VIRTIO Shared Memory
> > >> >Region to free resources. Upon receiving this message, the frontend
> > >> >removes the MemoryRegion as a subregion and automatically unreferences
> > >> >the VirtioSharedMemoryMapping object, triggering cleanup if no other
> > >> >references exist.
> > >> >
> > >> >Error handling has been improved to ensure consistent behavior across
> > >> >handlers that manage their own vhost_user_send_resp() calls. Since
> > >> >these handlers clear the VHOST_USER_NEED_REPLY_MASK flag, explicit
> > >> >error checking ensures proper connection closure on failures,
> > >> >maintaining the expected error flow.
> > >> >
> > >> >Note the memory region commit for these operations needs to be delayed
> > >> >until after we reply to the backend to avoid deadlocks. Otherwise,
> > >> >the MemoryListener would send a VHOST_USER_SET_MEM_TABLE message
> > >> >before the reply.
> > >> >
> > >> >Signed-off-by: Albert Esteve <aesteve@redhat.com>
> > >> >---
> > >> > hw/virtio/vhost-user.c | 213 ++++++++++++++++++++++
> > >> > hw/virtio/virtio.c | 207 +++++++++++++++++++++
> > >> > include/hw/virtio/virtio.h | 136 ++++++++++++++
> > >> > subprojects/libvhost-user/libvhost-user.c | 70 +++++++
> > >> > subprojects/libvhost-user/libvhost-user.h | 54 ++++++
> > >> > 5 files changed, 680 insertions(+)
> > >> >
> > >> >diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > >> >index 36c9c2e04d..8a4385a5ad 100644
> > >> >--- a/hw/virtio/vhost-user.c
> > >> >+++ b/hw/virtio/vhost-user.c
> > >> >@@ -115,6 +115,8 @@ typedef enum VhostUserBackendRequest {
> > >> > VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> > >> > VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> > >> > VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> > >> >+ VHOST_USER_BACKEND_SHMEM_MAP = 9,
> > >> >+ VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> > >> > VHOST_USER_BACKEND_MAX
> > >> > } VhostUserBackendRequest;
> > >> >
> > >> >@@ -192,6 +194,23 @@ typedef struct VhostUserShared {
> > >> > unsigned char uuid[16];
> > >> > } VhostUserShared;
> > >> >
> > >> >+/* For the flags field of VhostUserMMap */
> > >> >+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
> > >> >+
> > >> >+typedef struct {
> > >> >+ /* VIRTIO Shared Memory Region ID */
> > >> >+ uint8_t shmid;
> > >> >+ uint8_t padding[7];
> > >> >+ /* File offset */
> > >> >+ uint64_t fd_offset;
> > >> >+ /* Offset within the VIRTIO Shared Memory Region */
> > >> >+ uint64_t shm_offset;
> > >> >+ /* Size of the mapping */
> > >> >+ uint64_t len;
> > >> >+ /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
> > >> >+ uint64_t flags;
> > >> >+} VhostUserMMap;
> > >> >+
> > >> > typedef struct {
> > >> > VhostUserRequest request;
> > >> >
> > >> >@@ -224,6 +243,7 @@ typedef union {
> > >> > VhostUserInflight inflight;
> > >> > VhostUserShared object;
> > >> > VhostUserTransferDeviceState transfer_state;
> > >> >+ VhostUserMMap mmap;
> > >> > } VhostUserPayload;
> > >> >
> > >> > typedef struct VhostUserMsg {
> > >> >@@ -1768,6 +1788,186 @@ vhost_user_backend_handle_shared_object_lookup(struct vhost_user *u,
> > >> > return 0;
> > >> > }
> > >> >
> > >> >+/**
> > >> >+ * vhost_user_backend_handle_shmem_map() - Handle SHMEM_MAP backend request
> > >> >+ * @dev: vhost device
> > >> >+ * @ioc: QIOChannel for communication
> > >> >+ * @hdr: vhost-user message header
> > >> >+ * @payload: message payload containing mapping details
> > >> >+ * @fd: file descriptor for the shared memory region
> > >> >+ *
> > >> >+ * Handles VHOST_USER_BACKEND_SHMEM_MAP requests from the backend. Creates
> > >> >+ * a VirtioSharedMemoryMapping to manage the shared memory mapping and adds
> > >> >+ * it to the appropriate VirtIO shared memory region. The mapping object
> > >> >+ * serves as an intermediate parent for the MemoryRegion, ensuring proper
> > >> >+ * lifecycle management with reference counting.
> > >> >+ *
> > >> >+ * Returns: 0 on success, negative errno on failure
> > >> >+ */
> > >> >+static int
> > >> >+vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> > >> >+ QIOChannel *ioc,
> > >> >+ VhostUserHeader *hdr,
> > >> >+ VhostUserPayload *payload,
> > >> >+ int fd)
> > >> >+{
> > >> >+ VirtioSharedMemory *shmem;
> > >> >+ VhostUserMMap *vu_mmap = &payload->mmap;
> > >> >+ VirtioSharedMemoryMapping *existing;
> > >> >+ Error *local_err = NULL;
> > >> >+ int ret = 0;
> > >> >+
> > >> >+ if (fd < 0) {
> > >> >+ error_report("Bad fd for map");
> > >> >+ ret = -EBADF;
> > >> >+ goto send_reply;
> > >>
> > >> Apologies if I missed something obvious, but won't going to send_reply
> > >> at this point cause memory_region_transaction_commit() to be called?
> > >> Maybe we should have two cleanup labels, depending on whether
> > >> memory_region_transaction_begin() has been called yet?
> > >
> > >You are right, this is not good, at transaction_depth 0 it would fail
> > >an assertion within memory_region_transaction_commit().
> > >
> > >Not sure another cleanup label will work, though, as the transaction
> > >commit has to go after the reply. Maybe we can make the reply within
> > >the handler optional (leveraging the REPLY_MASK flag). But that will
> > >need to be tuned when this patch lands:
> > >https://lists.gnu.org/archive/html/qemu-devel/2025-10/msg03632.html
> > >
> > >Or just a simpler solution like this would do?
> >
> >
> > Yes this looks like enough. The another cleanup label alternative I
> > meant would be to do:
> >
> > +send_reply_no_commit:
> > + if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> > + payload->u64 = !!ret;
> > + hdr->size = sizeof(payload->u64);
> > + if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> > + error_report_err(local_err);
> > + return -EFAULT;
> > + }
> > + }
> > + return 0;
> > +send_reply:
> > + if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> > + payload->u64 = !!ret;
> > + hdr->size = sizeof(payload->u64);
> > + if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> > + error_report_err(local_err);
> > + memory_region_transaction_commit();
> > + return -EFAULT;
> > + }
> > + }
> > +
> > + memory_region_transaction_commit();
> > +
> > + return 0;
> > +}
> >
> > But this is not much prettier. (In rust this would be trivial with a MR
> > transaction RAII object...)
>
> I tried this solution too and it fails with `Illegal instruction (core
> dumped)`. I am not sure why, the logic should be basically the same.
> It has me puzzled.
>
> So... I'll use the boolean variable solution. That one seems to work fine.
Sounds like you're hitting the abort, because `send_reply_no_commit`
is first, and it runs on success (it should be after `send_reply`).
Oops :)
>
> >
> > >
> > >diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > >index c39cf52dbc..46126eed3b 100644
> > >--- a/hw/virtio/vhost-user.c
> > >+++ b/hw/virtio/vhost-user.c
> > >@@ -1824,6 +1824,7 @@ vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> > > VirtioSharedMemoryMapping *existing;
> > > Error *local_err = NULL;
> > > int ret = 0;
> > >+ bool transaction_started = false;
> > >
> > > if (fd < 0) {
> > > error_report("Bad fd for map");
> > >@@ -1864,6 +1865,7 @@ vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> > > }
> > >
> > > memory_region_transaction_begin();
> > >+ transaction_started = true;
> > >
> > > /* Create VirtioSharedMemoryMapping object */
> > > VirtioSharedMemoryMapping *mapping = virtio_shared_memory_mapping_new(
> > >@@ -1889,14 +1891,18 @@ send_reply:
> > > hdr->size = sizeof(payload->u64);
> > > if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> > > error_report_err(local_err);
> > >- memory_region_transaction_commit();
> > >+ if (transaction_started) {
> > >+ memory_region_transaction_commit();
> > >+ }
> > > return -EFAULT;
> > > }
> > > }
> > >
> > >- memory_region_transaction_commit();
> > >-
> > >- return 0;
> > >+ /* Only commit if we started a transaction */
> > >+ if (transaction_started) {
> > >+ memory_region_transaction_commit();
> > >+ }
> > >+ return ret;
> > > }
> > >
> > >>
> > >> >+ }
> > >> >+
> > >> >+ if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
> > >> >+ error_report("Device has no VIRTIO Shared Memory Regions. "
> > >> >+ "Requested ID: %d", vu_mmap->shmid);
> > >> >+ ret = -EFAULT;
> > >> >+ goto send_reply;
> > >> >+ }
> > >> >+
> > >> >+ shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
> > >> >+ if (!shmem) {
> > >> >+ error_report("VIRTIO Shared Memory Region at "
> > >> >+ "ID %d not found or uninitialized", vu_mmap->shmid);
> > >> >+ ret = -EFAULT;
> > >> >+ goto send_reply;
> > >> >+ }
> > >> >+
> > >> >+ if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> > >> >+ (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
> > >> >+ error_report("Bad offset/len for mmap %" PRIx64 "+%" PRIx64,
> > >> >+ vu_mmap->shm_offset, vu_mmap->len);
> > >> >+ ret = -EFAULT;
> > >> >+ goto send_reply;
> > >> >+ }
> > >> >+
> > >> >+ QTAILQ_FOREACH(existing, &shmem->mmaps, link) {
> > >> >+ if (ranges_overlap(existing->offset, existing->len,
> > >> >+ vu_mmap->shm_offset, vu_mmap->len)) {
> > >> >+ error_report("VIRTIO Shared Memory mapping overlap");
> > >> >+ ret = -EFAULT;
> > >> >+ goto send_reply;
> > >> >+ }
> > >> >+ }
> > >> >+
> > >> >+ memory_region_transaction_begin();
> > >> >+
> > >> >+ /* Create VirtioSharedMemoryMapping object */
> > >> >+ VirtioSharedMemoryMapping *mapping = virtio_shared_memory_mapping_new(
> > >> >+ vu_mmap->shmid, fd, vu_mmap->fd_offset, vu_mmap->shm_offset,
> > >> >+ vu_mmap->len, vu_mmap->flags & VHOST_USER_FLAG_MAP_RW);
> > >> >+
> > >> >+ if (!mapping) {
> > >> >+ ret = -EFAULT;
> > >> >+ goto send_reply;
> > >> >+ }
> > >> >+
> > >> >+ /* Add the mapping to the shared memory region */
> > >> >+ if (virtio_add_shmem_map(shmem, mapping) != 0) {
> > >> >+ error_report("Failed to add shared memory mapping");
> > >> >+ object_unref(OBJECT(mapping));
> > >> >+ ret = -EFAULT;
> > >> >+ goto send_reply;
> > >> >+ }
> > >> >+
> > >> >+send_reply:
> > >> >+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> > >> >+ payload->u64 = !!ret;
> > >> >+ hdr->size = sizeof(payload->u64);
> > >> >+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> > >> >+ error_report_err(local_err);
> > >> >+ memory_region_transaction_commit();
> > >> >+ return -EFAULT;
> > >> >+ }
> > >> >+ }
> > >> >+
> > >> >+ memory_region_transaction_commit();
> > >> >+
> > >> >+ return 0;
> > >> >+}
> > >> >+
> > >> >+/**
> > >> >+ * vhost_user_backend_handle_shmem_unmap() - Handle SHMEM_UNMAP backend request
> > >> >+ * @dev: vhost device
> > >> >+ * @ioc: QIOChannel for communication
> > >> >+ * @hdr: vhost-user message header
> > >> >+ * @payload: message payload containing unmapping details
> > >> >+ *
> > >> >+ * Handles VHOST_USER_BACKEND_SHMEM_UNMAP requests from the backend. Removes
> > >> >+ * the specified memory mapping from the VirtIO shared memory region. This
> > >> >+ * automatically unreferences the associated VhostUserShmemObject, which may
> > >> >+ * trigger its finalization and cleanup (munmap, close fd) if no other
> > >> >+ * references exist.
> > >> >+ *
> > >> >+ * Returns: 0 on success, negative errno on failure
> > >> >+ */
> > >> >+static int
> > >> >+vhost_user_backend_handle_shmem_unmap(struct vhost_dev *dev,
> > >> >+ QIOChannel *ioc,
> > >> >+ VhostUserHeader *hdr,
> > >> >+ VhostUserPayload *payload)
> > >> >+{
> > >> >+ VirtioSharedMemory *shmem;
> > >> >+ VirtioSharedMemoryMapping *mmap = NULL;
> > >> >+ VhostUserMMap *vu_mmap = &payload->mmap;
> > >> >+ Error *local_err = NULL;
> > >> >+ int ret = 0;
> > >> >+
> > >> >+ if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
> > >> >+ error_report("Device has no VIRTIO Shared Memory Regions. "
> > >> >+ "Requested ID: %d", vu_mmap->shmid);
> > >> >+ ret = -EFAULT;
> > >> >+ goto send_reply;
> > >> >+ }
> > >> >+
> > >> >+ shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
> > >> >+ if (!shmem) {
> > >> >+ error_report("VIRTIO Shared Memory Region at "
> > >> >+ "ID %d not found or uninitialized", vu_mmap->shmid);
> > >> >+ ret = -EFAULT;
> > >> >+ goto send_reply;
> > >> >+ }
> > >> >+
> > >> >+ if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> > >> >+ (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
> > >> >+ error_report("Bad offset/len for unmap %" PRIx64 "+%" PRIx64,
> > >> >+ vu_mmap->shm_offset, vu_mmap->len);
> > >> >+ ret = -EFAULT;
> > >> >+ goto send_reply;
> > >> >+ }
> > >> >+
> > >> >+ mmap = virtio_find_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
> > >> >+ if (!mmap) {
> > >> >+ error_report("Shared memory mapping not found at offset %" PRIx64
> > >> >+ " with length %" PRIx64,
> > >> >+ vu_mmap->shm_offset, vu_mmap->len);
> > >> >+ ret = -EFAULT;
> > >> >+ goto send_reply;
> > >> >+ }
> > >> >+
> > >> >+send_reply:
> > >> >+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> > >> >+ payload->u64 = !!ret;
> > >> >+ hdr->size = sizeof(payload->u64);
> > >> >+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> > >> >+ error_report_err(local_err);
> > >> >+ return -EFAULT;
> > >> >+ }
> > >> >+ }
> > >> >+
> > >> >+ if (!ret && mmap) {
> > >> >+ /* Free the MemoryRegion only after reply */
> > >> >+ virtio_del_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
> > >> >+ }
> > >> >+
> > >> >+ return 0;
> > >> >+}
> > >> >+
> > >> > static void close_backend_channel(struct vhost_user *u)
> > >> > {
> > >> > g_source_destroy(u->backend_src);
> > >> >@@ -1836,6 +2036,19 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
> > >> > ret = vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
> > >> > &hdr, &payload);
> > >> > break;
> > >> >+ case VHOST_USER_BACKEND_SHMEM_MAP:
> > >> >+ /* Handler manages its own response, check error and close connection */
> > >> >+ if (vhost_user_backend_handle_shmem_map(dev, ioc, &hdr, &payload,
> > >> >+ fd ? fd[0] : -1) < 0) {
> > >> >+ goto err;
> > >> >+ }
> > >> >+ break;
> > >> >+ case VHOST_USER_BACKEND_SHMEM_UNMAP:
> > >> >+ /* Handler manages its own response, check error and close connection */
> > >> >+ if (vhost_user_backend_handle_shmem_unmap(dev, ioc, &hdr, &payload) < 0) {
> > >> >+ goto err;
> > >> >+ }
> > >> >+ break;
> > >> > default:
> > >> > error_report("Received unexpected msg type: %d.", hdr.request);
> > >> > ret = -EINVAL;
> > >> >diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> > >> >index 153ee0a0cf..37070e7951 100644
> > >> >--- a/hw/virtio/virtio.c
> > >> >+++ b/hw/virtio/virtio.c
> > >> >@@ -3086,6 +3086,181 @@ int virtio_save(VirtIODevice *vdev, QEMUFile *f)
> > >> > return vmstate_save_state(f, &vmstate_virtio, vdev, NULL, &error_fatal);
> > >> > }
> > >> >
> > >> >+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid, uint64_t size)
> > >> >+{
> > >> >+ VirtioSharedMemory *elem;
> > >> >+ g_autofree char *name = NULL;
> > >> >+
> > >> >+ elem = g_new0(VirtioSharedMemory, 1);
> > >> >+ elem->shmid = shmid;
> > >> >+
> > >> >+ /* Initialize embedded MemoryRegion as container for shmem mappings */
> > >> >+ name = g_strdup_printf("virtio-shmem-%d", shmid);
> > >> >+ memory_region_init(&elem->mr, OBJECT(vdev), name, size);
> > >> >+ QTAILQ_INIT(&elem->mmaps);
> > >> >+ QSIMPLEQ_INSERT_TAIL(&vdev->shmem_list, elem, entry);
> > >> >+ return elem;
> > >> >+}
> > >> >+
> > >> >+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid)
> > >> >+{
> > >> >+ VirtioSharedMemory *shmem, *next;
> > >> >+ QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
> > >> >+ if (shmem->shmid == shmid) {
> > >> >+ return shmem;
> > >> >+ }
> > >> >+ }
> > >> >+ return NULL;
> > >> >+}
> > >> >+
> > >> >+static void virtio_shared_memory_mapping_instance_init(Object *obj)
> > >> >+{
> > >> >+ VirtioSharedMemoryMapping *mapping = VIRTIO_SHARED_MEMORY_MAPPING(obj);
> > >> >+
> > >> >+ mapping->shmid = 0;
> > >> >+ mapping->fd = -1;
> > >> >+ mapping->offset = 0;
> > >> >+ mapping->len = 0;
> > >> >+ mapping->mr = NULL;
> > >> >+}
> > >> >+
> > >> >+static void virtio_shared_memory_mapping_instance_finalize(Object *obj)
> > >> >+{
> > >> >+ VirtioSharedMemoryMapping *mapping = VIRTIO_SHARED_MEMORY_MAPPING(obj);
> > >> >+
> > >> >+ /* Clean up MemoryRegion if it exists */
> > >> >+ if (mapping->mr) {
> > >> >+ /* Unparent the MemoryRegion to trigger cleanup */
> > >> >+ object_unparent(OBJECT(mapping->mr));
> > >> >+ mapping->mr = NULL;
> > >> >+ }
> > >> >+
> > >> >+ /* Close file descriptor */
> > >> >+ if (mapping->fd >= 0) {
> > >> >+ close(mapping->fd);
> > >> >+ mapping->fd = -1;
> > >> >+ }
> > >> >+}
> > >> >+
> > >> >+VirtioSharedMemoryMapping *virtio_shared_memory_mapping_new(uint8_t shmid,
> > >> >+ int fd,
> > >> >+ uint64_t fd_offset,
> > >> >+ uint64_t shm_offset,
> > >> >+ uint64_t len,
> > >> >+ bool allow_write)
> > >> >+{
> > >> >+ VirtioSharedMemoryMapping *mapping;
> > >> >+ MemoryRegion *mr;
> > >> >+ g_autoptr(GString) mr_name = g_string_new(NULL);
> > >> >+ uint32_t ram_flags;
> > >> >+ Error *local_err = NULL;
> > >> >+
> > >> >+ if (len == 0) {
> > >> >+ error_report("Shared memory mapping size cannot be zero");
> > >> >+ return NULL;
> > >> >+ }
> > >> >+
> > >> >+ fd = dup(fd);
> > >> >+ if (fd < 0) {
> > >> >+ error_report("Failed to duplicate fd: %s", strerror(errno));
> > >> >+ return NULL;
> > >> >+ }
> > >> >+
> > >> >+ /* Determine RAM flags */
> > >> >+ ram_flags = RAM_SHARED;
> > >> >+ if (!allow_write) {
> > >> >+ ram_flags |= RAM_READONLY_FD;
> > >> >+ }
> > >> >+
> > >> >+ /* Create the VirtioSharedMemoryMapping */
> > >> >+ mapping = VIRTIO_SHARED_MEMORY_MAPPING(
> > >> >+ object_new(TYPE_VIRTIO_SHARED_MEMORY_MAPPING));
> > >> >+
> > >> >+ /* Set up object properties */
> > >> >+ mapping->shmid = shmid;
> > >> >+ mapping->fd = fd;
> > >> >+ mapping->offset = shm_offset;
> > >> >+ mapping->len = len;
> > >> >+
> > >> >+ /* Create MemoryRegion as a child of this object */
> > >> >+ mr = g_new0(MemoryRegion, 1);
> > >> >+ g_string_printf(mr_name, "virtio-shmem-%d-%" PRIx64, shmid, shm_offset);
> > >> >+
> > >> >+ /* Initialize MemoryRegion with file descriptor */
> > >> >+ if (!memory_region_init_ram_from_fd(mr, OBJECT(mapping), mr_name->str,
> > >> >+ len, ram_flags, fd, fd_offset,
> > >> >+ &local_err)) {
> > >> >+ error_report_err(local_err);
> > >> >+ g_free(mr);
> > >> >+ close(fd);
> > >> >+ object_unref(OBJECT(mapping));
> > >> >+ return NULL;
> > >> >+ }
> > >> >+
> > >> >+ mapping->mr = mr;
> > >> >+ return mapping;
> > >> >+}
> > >> >+
> > >> >+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
> > >> >+ VirtioSharedMemoryMapping *mapping)
> > >> >+{
> > >> >+ if (!mapping) {
> > >> >+ error_report("VirtioSharedMemoryMapping cannot be NULL");
> > >> >+ return -1;
> > >> >+ }
> > >> >+ if (!mapping->mr) {
> > >> >+ error_report("VirtioSharedMemoryMapping has no MemoryRegion");
> > >> >+ return -1;
> > >> >+ }
> > >> >+
> > >> >+ /* Validate boundaries against the VIRTIO shared memory region */
> > >> >+ if (mapping->offset + mapping->len > shmem->mr.size) {
> > >> >+ error_report("Memory exceeds the shared memory boundaries");
> > >> >+ return -1;
> > >> >+ }
> > >> >+
> > >> >+ /* Add as subregion to the VIRTIO shared memory */
> > >> >+ memory_region_add_subregion(&shmem->mr, mapping->offset, mapping->mr);
> > >> >+
> > >> >+ /* Add to the mapped regions list */
> > >> >+ QTAILQ_INSERT_TAIL(&shmem->mmaps, mapping, link);
> > >> >+
> > >> >+ return 0;
> > >> >+}
> > >> >+
> > >> >+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
> > >> >+ hwaddr offset, uint64_t size)
> > >> >+{
> > >> >+ VirtioSharedMemoryMapping *mapping;
> > >> >+ QTAILQ_FOREACH(mapping, &shmem->mmaps, link) {
> > >> >+ if (mapping->offset == offset && mapping->mr->size == size) {
> > >> >+ return mapping;
> > >> >+ }
> > >> >+ }
> > >> >+ return NULL;
> > >> >+}
> > >> >+
> > >> >+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
> > >> >+ uint64_t size)
> > >> >+{
> > >> >+ VirtioSharedMemoryMapping *mapping = virtio_find_shmem_map(shmem, offset, size);
> > >> >+ if (mapping == NULL) {
> > >> >+ return;
> > >> >+ }
> > >> >+
> > >> >+ /*
> > >> >+ * Remove from memory region first
> > >> >+ */
> > >> >+ memory_region_del_subregion(&shmem->mr, mapping->mr);
> > >> >+
> > >> >+ /*
> > >> >+ * Remove from list and unref the mapping which will trigger automatic cleanup
> > >> >+ * when the reference count reaches zero.
> > >> >+ */
> > >> >+ QTAILQ_REMOVE(&shmem->mmaps, mapping, link);
> > >> >+ object_unref(OBJECT(mapping));
> > >> >+}
> > >> >+
> > >> > /* A wrapper for use as a VMState .put function */
> > >> > static int virtio_device_put(QEMUFile *f, void *opaque, size_t size,
> > >> > const VMStateField *field, JSONWriter *vmdesc)
> > >> >@@ -3212,6 +3387,7 @@ void virtio_reset(void *opaque)
> > >> > {
> > >> > VirtIODevice *vdev = opaque;
> > >> > VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
> > >> >+ VirtioSharedMemory *shmem;
> > >> > uint64_t features[VIRTIO_FEATURES_NU64S];
> > >> > int i;
> > >> >
> > >> >@@ -3251,6 +3427,14 @@ void virtio_reset(void *opaque)
> > >> > for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
> > >> > __virtio_queue_reset(vdev, i);
> > >> > }
> > >> >+
> > >> >+ /* Mappings are removed to prevent stale fds from remaining open. */
> > >> >+ QSIMPLEQ_FOREACH(shmem, &vdev->shmem_list, entry) {
> > >> >+ while (!QTAILQ_EMPTY(&shmem->mmaps)) {
> > >> >+ VirtioSharedMemoryMapping *mapping = QTAILQ_FIRST(&shmem->mmaps);
> > >> >+ virtio_del_shmem_map(shmem, mapping->offset, mapping->mr->size);
> > >> >+ }
> > >> >+ }
> > >> > }
> > >> >
> > >> > static void virtio_device_check_notification_compatibility(VirtIODevice *vdev,
> > >> >@@ -3574,6 +3758,7 @@ void virtio_init(VirtIODevice *vdev, uint16_t device_id, size_t config_size)
> > >> > NULL, virtio_vmstate_change, vdev);
> > >> > vdev->device_endian = virtio_default_endian();
> > >> > vdev->use_guest_notifier_mask = true;
> > >> >+ QSIMPLEQ_INIT(&vdev->shmem_list);
> > >> > }
> > >> >
> > >> > /*
> > >> >@@ -4085,11 +4270,24 @@ static void virtio_device_free_virtqueues(VirtIODevice *vdev)
> > >> > static void virtio_device_instance_finalize(Object *obj)
> > >> > {
> > >> > VirtIODevice *vdev = VIRTIO_DEVICE(obj);
> > >> >+ VirtioSharedMemory *shmem;
> > >> >
> > >> > virtio_device_free_virtqueues(vdev);
> > >> >
> > >> > g_free(vdev->config);
> > >> > g_free(vdev->vector_queues);
> > >> >+ while (!QSIMPLEQ_EMPTY(&vdev->shmem_list)) {
> > >> >+ shmem = QSIMPLEQ_FIRST(&vdev->shmem_list);
> > >> >+ while (!QTAILQ_EMPTY(&shmem->mmaps)) {
> > >> >+ VirtioSharedMemoryMapping *mapping = QTAILQ_FIRST(&shmem->mmaps);
> > >> >+ virtio_del_shmem_map(shmem, mapping->offset, mapping->mr->size);
> > >> >+ }
> > >> >+
> > >> >+ /* Clean up the embedded MemoryRegion */
> > >> >+ object_unparent(OBJECT(&shmem->mr));
> > >> >+ QSIMPLEQ_REMOVE_HEAD(&vdev->shmem_list, entry);
> > >> >+ g_free(shmem);
> > >> >+ }
> > >> > }
> > >> >
> > >> > static const Property virtio_properties[] = {
> > >> >@@ -4455,9 +4653,18 @@ static const TypeInfo virtio_device_info = {
> > >> > .class_size = sizeof(VirtioDeviceClass),
> > >> > };
> > >> >
> > >> >+static const TypeInfo virtio_shared_memory_mapping_info = {
> > >> >+ .name = TYPE_VIRTIO_SHARED_MEMORY_MAPPING,
> > >> >+ .parent = TYPE_OBJECT,
> > >> >+ .instance_size = sizeof(VirtioSharedMemoryMapping),
> > >> >+ .instance_init = virtio_shared_memory_mapping_instance_init,
> > >> >+ .instance_finalize = virtio_shared_memory_mapping_instance_finalize,
> > >> >+};
> > >> >+
> > >> > static void virtio_register_types(void)
> > >> > {
> > >> > type_register_static(&virtio_device_info);
> > >> >+ type_register_static(&virtio_shared_memory_mapping_info);
> > >> > }
> > >> >
> > >> > type_init(virtio_register_types)
> > >> >diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> > >> >index d97529c3f1..cee807e236 100644
> > >> >--- a/include/hw/virtio/virtio.h
> > >> >+++ b/include/hw/virtio/virtio.h
> > >> >@@ -99,6 +99,46 @@ enum virtio_device_endian {
> > >> > VIRTIO_DEVICE_ENDIAN_BIG,
> > >> > };
> > >> >
> > >> >+#define TYPE_VIRTIO_SHARED_MEMORY_MAPPING "virtio-shared-memory-mapping"
> > >> >+OBJECT_DECLARE_SIMPLE_TYPE(VirtioSharedMemoryMapping, VIRTIO_SHARED_MEMORY_MAPPING)
> > >> >+
> > >> >+/**
> > >> >+ * VirtioSharedMemoryMapping:
> > >> >+ * @parent: Parent QOM object
> > >> >+ * @shmid: VIRTIO Shared Memory Region ID
> > >> >+ * @fd: File descriptor for the shared memory region
> > >> >+ * @offset: Offset within the VIRTIO Shared Memory Region
> > >> >+ * @len: Size of the mapping
> > >> >+ * @mr: MemoryRegion associated with this shared memory mapping
> > >> >+ * @link: List entry for the shared memory region's mapping list
> > >> >+ *
> > >> >+ * A QOM object that represents an individual file descriptor-based shared
> > >> >+ * memory mapping within a VIRTIO Shared Memory Region. It manages the
> > >> >+ * MemoryRegion lifecycle and file descriptor cleanup through QOM reference
> > >> >+ * counting. When the object is unreferenced and its reference count drops
> > >> >+ * to zero, it automatically cleans up the MemoryRegion and closes the file
> > >> >+ * descriptor.
> > >> >+ */
> > >> >+struct VirtioSharedMemoryMapping {
> > >> >+ Object parent;
> > >> >+
> > >> >+ uint8_t shmid;
> > >> >+ int fd;
> > >> >+ hwaddr offset;
> > >> >+ uint64_t len;
> > >> >+ MemoryRegion *mr;
> > >> >+ QTAILQ_ENTRY(VirtioSharedMemoryMapping) link;
> > >> >+};
> > >> >+
> > >> >+struct VirtioSharedMemory {
> > >> >+ uint8_t shmid;
> > >> >+ MemoryRegion mr;
> > >> >+ QTAILQ_HEAD(, VirtioSharedMemoryMapping) mmaps;
> > >> >+ QSIMPLEQ_ENTRY(VirtioSharedMemory) entry;
> > >> >+};
> > >> >+
> > >> >+typedef struct VirtioSharedMemory VirtioSharedMemory;
> > >> >+
> > >> > /**
> > >> > * struct VirtIODevice - common VirtIO structure
> > >> > * @name: name of the device
> > >> >@@ -168,6 +208,8 @@ struct VirtIODevice
> > >> > */
> > >> > EventNotifier config_notifier;
> > >> > bool device_iotlb_enabled;
> > >> >+ /* Shared memory region for mappings. */
> > >> >+ QSIMPLEQ_HEAD(, VirtioSharedMemory) shmem_list;
> > >> > };
> > >> >
> > >> > struct VirtioDeviceClass {
> > >> >@@ -298,6 +340,100 @@ void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
> > >> >
> > >> > int virtio_save(VirtIODevice *vdev, QEMUFile *f);
> > >> >
> > >> >+/**
> > >> >+ * virtio_new_shmem_region() - Create a new shared memory region
> > >> >+ * @vdev: VirtIODevice
> > >> >+ * @shmid: Shared memory ID
> > >> >+ * @size: Size of the shared memory region
> > >> >+ *
> > >> >+ * Creates a new VirtioSharedMemory region for the given device and ID.
> > >> >+ * The returned VirtioSharedMemory is owned by the VirtIODevice and will
> > >> >+ * be automatically freed when the device is destroyed. The caller
> > >> >+ * should not free the returned pointer.
> > >> >+ *
> > >> >+ * Returns: Pointer to the new VirtioSharedMemory region, or NULL on failure
> > >> >+ */
> > >> >+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid, uint64_t size);
> > >> >+
> > >> >+/**
> > >> >+ * virtio_find_shmem_region() - Find an existing shared memory region
> > >> >+ * @vdev: VirtIODevice
> > >> >+ * @shmid: Shared memory ID to find
> > >> >+ *
> > >> >+ * Finds an existing VirtioSharedMemory region by ID. The returned pointer
> > >> >+ * is owned by the VirtIODevice and should not be freed by the caller.
> > >> >+ *
> > >> >+ * Returns: Pointer to the VirtioSharedMemory region, or NULL if not found
> > >> >+ */
> > >> >+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid);
> > >> >+
> > >> >+/**
> > >> >+ * virtio_shared_memory_mapping_new() - Create a new VirtioSharedMemoryMapping
> > >> >+ * @shmid: VIRTIO Shared Memory Region ID
> > >> >+ * @fd: File descriptor for the shared memory
> > >> >+ * @fd_offset: Offset within the file descriptor
> > >> >+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
> > >> >+ * @len: Size of the mapping
> > >> >+ * @allow_write: Whether to allow write access to the mapping
> > >> >+ *
> > >> >+ * Creates a new VirtioSharedMemoryMapping that manages a shared memory mapping.
> > >> >+ * The object will create a MemoryRegion using memory_region_init_ram_from_fd()
> > >> >+ * as a child object. When the object is finalized, it will automatically
> > >> >+ * clean up the MemoryRegion and close the file descriptor.
> > >> >+ *
> > >> >+ * Return: A new VirtioSharedMemoryMapping on success, NULL on error.
> > >> >+ */
> > >> >+VirtioSharedMemoryMapping *virtio_shared_memory_mapping_new(uint8_t shmid,
> > >> >+ int fd,
> > >> >+ uint64_t fd_offset,
> > >> >+ uint64_t shm_offset,
> > >> >+ uint64_t len,
> > >> >+ bool allow_write);
> > >> >+
> > >> >+/**
> > >> >+ * virtio_add_shmem_map() - Add a memory mapping to a shared region
> > >> >+ * @shmem: VirtioSharedMemory region
> > >> >+ * @mapping: VirtioSharedMemoryMapping to add (transfers ownership)
> > >> >+ *
> > >> >+ * Adds a memory mapping to the shared memory region. The VirtioSharedMemoryMapping
> > >> >+ * ownership is transferred to the shared memory region and will be automatically
> > >> >+ * cleaned up through QOM reference counting when virtio_del_shmem_map() is
> > >> >+ * called or when the shared memory region is destroyed.
> > >> >+ *
> > >> >+ * Returns: 0 on success, negative errno on failure
> > >> >+ */
> > >> >+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
> > >> >+ VirtioSharedMemoryMapping *mapping);
> > >> >+
> > >> >+/**
> > >> >+ * virtio_find_shmem_map() - Find a memory mapping in a shared region
> > >> >+ * @shmem: VirtioSharedMemory region
> > >> >+ * @offset: Offset within the shared memory region
> > >> >+ * @size: Size of the mapping to find
> > >> >+ *
> > >> >+ * Finds an existing memory mapping that covers the specified range.
> > >> >+ * The returned VirtioSharedMemoryMapping is owned by the VirtioSharedMemory
> > >> >+ * region and should not be freed by the caller.
> > >> >+ *
> > >> >+ * Returns: Pointer to the VirtioSharedMemoryMapping, or NULL if not found
> > >> >+ */
> > >> >+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
> > >> >+ hwaddr offset, uint64_t size);
> > >> >+
> > >> >+/**
> > >> >+ * virtio_del_shmem_map() - Remove a memory mapping from a shared region
> > >> >+ * @shmem: VirtioSharedMemory region
> > >> >+ * @offset: Offset of the mapping to remove
> > >> >+ * @size: Size of the mapping to remove
> > >> >+ *
> > >> >+ * Removes a memory mapping from the shared memory region. This will
> > >> >+ * automatically unref the associated VirtioSharedMemoryMapping, which may
> > >> >+ * trigger its finalization and cleanup if no other references exist.
> > >> >+ * The mapping's MemoryRegion will be properly unmapped and cleaned up.
> > >> >+ */
> > >> >+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
> > >> >+ uint64_t size);
> > >> >+
> > >> > extern const VMStateInfo virtio_vmstate_info;
> > >> >
> > >> > #define VMSTATE_VIRTIO_DEVICE \
> > >> >diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> > >> >index 9c630c2170..034cbfdc3c 100644
> > >> >--- a/subprojects/libvhost-user/libvhost-user.c
> > >> >+++ b/subprojects/libvhost-user/libvhost-user.c
> > >> >@@ -1592,6 +1592,76 @@ vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN])
> > >> > return vu_send_message(dev, &msg);
> > >> > }
> > >> >
> > >> >+bool
> > >> >+vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> > >> >+ uint64_t shm_offset, uint64_t len, uint64_t flags, int fd)
> > >> >+{
> > >> >+ VhostUserMsg vmsg = {
> > >> >+ .request = VHOST_USER_BACKEND_SHMEM_MAP,
> > >> >+ .size = sizeof(vmsg.payload.mmap),
> > >> >+ .flags = VHOST_USER_VERSION,
> > >> >+ .payload.mmap = {
> > >> >+ .shmid = shmid,
> > >> >+ .fd_offset = fd_offset,
> > >> >+ .shm_offset = shm_offset,
> > >> >+ .len = len,
> > >> >+ .flags = flags,
> > >> >+ },
> > >> >+ .fd_num = 1,
> > >> >+ .fds[0] = fd,
> > >> >+ };
> > >> >+
> > >> >+ if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
> > >> >+ return false;
> > >> >+ }
> > >> >+
> > >> >+ if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> > >> >+ vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> > >> >+ }
> > >> >+
> > >> >+ pthread_mutex_lock(&dev->backend_mutex);
> > >> >+ if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> > >> >+ pthread_mutex_unlock(&dev->backend_mutex);
> > >> >+ return false;
> > >> >+ }
> > >> >+
> > >> >+ /* Also unlocks the backend_mutex */
> > >> >+ return vu_process_message_reply(dev, &vmsg);
> > >> >+}
> > >> >+
> > >> >+bool
> > >> >+vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset, uint64_t len)
> > >> >+{
> > >> >+ VhostUserMsg vmsg = {
> > >> >+ .request = VHOST_USER_BACKEND_SHMEM_UNMAP,
> > >> >+ .size = sizeof(vmsg.payload.mmap),
> > >> >+ .flags = VHOST_USER_VERSION,
> > >> >+ .payload.mmap = {
> > >> >+ .shmid = shmid,
> > >> >+ .fd_offset = 0,
> > >> >+ .shm_offset = shm_offset,
> > >> >+ .len = len,
> > >> >+ },
> > >> >+ };
> > >> >+
> > >> >+ if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
> > >> >+ return false;
> > >> >+ }
> > >> >+
> > >> >+ if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> > >> >+ vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> > >> >+ }
> > >> >+
> > >> >+ pthread_mutex_lock(&dev->backend_mutex);
> > >> >+ if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> > >> >+ pthread_mutex_unlock(&dev->backend_mutex);
> > >> >+ return false;
> > >> >+ }
> > >> >+
> > >> >+ /* Also unlocks the backend_mutex */
> > >> >+ return vu_process_message_reply(dev, &vmsg);
> > >> >+}
> > >> >+
> > >> > static bool
> > >> > vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
> > >> > {
> > >> >diff --git a/subprojects/libvhost-user/libvhost-user.h b/subprojects/libvhost-user/libvhost-user.h
> > >> >index 2ffc58c11b..6a2d0f9fae 100644
> > >> >--- a/subprojects/libvhost-user/libvhost-user.h
> > >> >+++ b/subprojects/libvhost-user/libvhost-user.h
> > >> >@@ -69,6 +69,8 @@ enum VhostUserProtocolFeature {
> > >> > /* Feature 16 is reserved for VHOST_USER_PROTOCOL_F_STATUS. */
> > >> > /* Feature 17 reserved for VHOST_USER_PROTOCOL_F_XEN_MMAP. */
> > >> > VHOST_USER_PROTOCOL_F_SHARED_OBJECT = 18,
> > >> >+ /* Feature 19 is reserved for VHOST_USER_PROTOCOL_F_DEVICE_STATE */
> > >> >+ VHOST_USER_PROTOCOL_F_SHMEM = 20,
> > >> > VHOST_USER_PROTOCOL_F_MAX
> > >> > };
> > >> >
> > >> >@@ -127,6 +129,8 @@ typedef enum VhostUserBackendRequest {
> > >> > VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> > >> > VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> > >> > VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> > >> >+ VHOST_USER_BACKEND_SHMEM_MAP = 9,
> > >> >+ VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> > >> > VHOST_USER_BACKEND_MAX
> > >> > } VhostUserBackendRequest;
> > >> >
> > >> >@@ -186,6 +190,23 @@ typedef struct VhostUserShared {
> > >> > unsigned char uuid[UUID_LEN];
> > >> > } VhostUserShared;
> > >> >
> > >> >+/* For the flags field of VhostUserMMap */
> > >> >+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
> > >> >+
> > >> >+typedef struct {
> > >> >+ /* VIRTIO Shared Memory Region ID */
> > >> >+ uint8_t shmid;
> > >> >+ uint8_t padding[7];
> > >> >+ /* File offset */
> > >> >+ uint64_t fd_offset;
> > >> >+ /* Offset within the VIRTIO Shared Memory Region */
> > >> >+ uint64_t shm_offset;
> > >> >+ /* Size of the mapping */
> > >> >+ uint64_t len;
> > >> >+ /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
> > >> >+ uint64_t flags;
> > >> >+} VhostUserMMap;
> > >> >+
> > >> > #define VU_PACKED __attribute__((packed))
> > >> >
> > >> > typedef struct VhostUserMsg {
> > >> >@@ -210,6 +231,7 @@ typedef struct VhostUserMsg {
> > >> > VhostUserVringArea area;
> > >> > VhostUserInflight inflight;
> > >> > VhostUserShared object;
> > >> >+ VhostUserMMap mmap;
> > >> > } payload;
> > >> >
> > >> > int fds[VHOST_MEMORY_BASELINE_NREGIONS];
> > >> >@@ -593,6 +615,38 @@ bool vu_add_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
> > >> > */
> > >> > bool vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
> > >> >
> > >> >+/**
> > >> >+ * vu_shmem_map:
> > >> >+ * @dev: a VuDev context
> > >> >+ * @shmid: VIRTIO Shared Memory Region ID
> > >> >+ * @fd_offset: File offset
> > >> >+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
> > >> >+ * @len: Size of the mapping
> > >> >+ * @flags: Flags for the mmap operation
> > >> >+ * @fd: A file descriptor
> > >> >+ *
> > >> >+ * Advertises a new mapping to be made in a given VIRTIO Shared Memory Region.
> > >> >+ *
> > >> >+ * Returns: TRUE on success, FALSE on failure.
> > >> >+ */
> > >> >+bool vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> > >> >+ uint64_t shm_offset, uint64_t len, uint64_t flags, int fd);
> > >> >+
> > >> >+/**
> > >> >+ * vu_shmem_unmap:
> > >> >+ * @dev: a VuDev context
> > >> >+ * @shmid: VIRTIO Shared Memory Region ID
> > >> >+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
> > >> >+ * @len: Size of the mapping
> > >> >+ *
> > >> >+ * The front-end un-mmaps a given range in the VIRTIO Shared Memory Region
> > >> >+ * with the requested `shmid`.
> > >> >+ *
> > >> >+ * Returns: TRUE on success, FALSE on failure.
> > >> >+ */
> > >> >+bool vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset,
> > >> >+ uint64_t len);
> > >> >+
> > >> > /**
> > >> > * vu_queue_set_notification:
> > >> > * @dev: a VuDev context
> > >> >--
> > >> >2.49.0
> > >> >
> > >>
> > >
> >
>
* Re: [PATCH v9 1/7] vhost-user: Add VirtIO Shared Memory map request
2025-10-16 13:53 ` Manos Pitsidianakis
@ 2025-10-16 14:20 ` Albert Esteve
0 siblings, 0 replies; 17+ messages in thread
From: Albert Esteve @ 2025-10-16 14:20 UTC (permalink / raw)
To: Manos Pitsidianakis
Cc: qemu-devel, stevensd, jasowang, Paolo Bonzini, hi, Laurent Vivier,
Alex Bennée, david, Fabiano Rosas, Michael S. Tsirkin,
stefanha, Stefano Garzarella, slp, dbassey
On Thu, Oct 16, 2025 at 3:54 PM Manos Pitsidianakis
<manos.pitsidianakis@linaro.org> wrote:
>
> On Thu, Oct 16, 2025 at 4:43 PM Albert Esteve <aesteve@redhat.com> wrote:
> >
> > On Wed, Oct 15, 2025 at 2:51 PM Manos Pitsidianakis
> > <manos.pitsidianakis@linaro.org> wrote:
> > >
> > > On Wed, 15 Oct 2025 15:29, Albert Esteve <aesteve@redhat.com> wrote:
> > > >On Wed, Oct 15, 2025 at 11:52 AM Manos Pitsidianakis
> > > ><manos.pitsidianakis@linaro.org> wrote:
> > > >>
> > > >> On Wed, 15 Oct 2025 11:59, Albert Esteve <aesteve@redhat.com> wrote:
> > > >> >Add SHMEM_MAP/UNMAP requests to vhost-user for dynamic management of
> > > >> >VIRTIO Shared Memory mappings.
> > > >> >
> > > >> >This implementation introduces VirtioSharedMemoryMapping as a unified
> > > >> >QOM object that manages both the mapping metadata and MemoryRegion
> > > >> >lifecycle. This object provides reference-counted lifecycle management
> > > >> >with automatic cleanup of file descriptors and memory regions
> > > >> >through QOM finalization.
> > > >> >
> > > >> >This request allows backends to dynamically map file descriptors into a
> > > >> >VIRTIO Shared Memory Region identified by their shmid. Maps are created
> > > >> >using memory_region_init_ram_from_fd() with configurable read/write
> > > >> >permissions, and the resulting MemoryRegions are added as subregions to
> > > >> >the shmem container region. The mapped memory is then advertised to the
> > > >> >guest VIRTIO drivers as a base address plus offset for reading and
> > > >> >writing according to the requested mmap flags.
> > > >> >
> > > >> >The backend can unmap memory ranges within a given VIRTIO Shared Memory
> > > >> >Region to free resources. Upon receiving this message, the frontend
> > > >> >removes the MemoryRegion as a subregion and automatically unreferences
> > > >> >the VirtioSharedMemoryMapping object, triggering cleanup if no other
> > > >> >references exist.
> > > >> >
> > > >> >Error handling has been improved to ensure consistent behavior across
> > > >> >handlers that manage their own vhost_user_send_resp() calls. Since
> > > >> >these handlers clear the VHOST_USER_NEED_REPLY_MASK flag, explicit
> > > >> >error checking ensures proper connection closure on failures,
> > > >> >maintaining the expected error flow.
> > > >> >
> > > >> >Note the memory region commit for these operations needs to be delayed
> > > >> >until after we reply to the backend to avoid deadlocks. Otherwise,
> > > >> >the MemoryListener would send a VHOST_USER_SET_MEM_TABLE message
> > > >> >before the reply.
> > > >> >
> > > >> >Signed-off-by: Albert Esteve <aesteve@redhat.com>
> > > >> >---
> > > >> > hw/virtio/vhost-user.c | 213 ++++++++++++++++++++++
> > > >> > hw/virtio/virtio.c | 207 +++++++++++++++++++++
> > > >> > include/hw/virtio/virtio.h | 136 ++++++++++++++
> > > >> > subprojects/libvhost-user/libvhost-user.c | 70 +++++++
> > > >> > subprojects/libvhost-user/libvhost-user.h | 54 ++++++
> > > >> > 5 files changed, 680 insertions(+)
> > > >> >
> > > >> >diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > > >> >index 36c9c2e04d..8a4385a5ad 100644
> > > >> >--- a/hw/virtio/vhost-user.c
> > > >> >+++ b/hw/virtio/vhost-user.c
> > > >> >@@ -115,6 +115,8 @@ typedef enum VhostUserBackendRequest {
> > > >> > VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> > > >> > VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> > > >> > VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> > > >> >+ VHOST_USER_BACKEND_SHMEM_MAP = 9,
> > > >> >+ VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> > > >> > VHOST_USER_BACKEND_MAX
> > > >> > } VhostUserBackendRequest;
> > > >> >
> > > >> >@@ -192,6 +194,23 @@ typedef struct VhostUserShared {
> > > >> > unsigned char uuid[16];
> > > >> > } VhostUserShared;
> > > >> >
> > > >> >+/* For the flags field of VhostUserMMap */
> > > >> >+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
> > > >> >+
> > > >> >+typedef struct {
> > > >> >+ /* VIRTIO Shared Memory Region ID */
> > > >> >+ uint8_t shmid;
> > > >> >+ uint8_t padding[7];
> > > >> >+ /* File offset */
> > > >> >+ uint64_t fd_offset;
> > > >> >+ /* Offset within the VIRTIO Shared Memory Region */
> > > >> >+ uint64_t shm_offset;
> > > >> >+ /* Size of the mapping */
> > > >> >+ uint64_t len;
> > > >> >+ /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
> > > >> >+ uint64_t flags;
> > > >> >+} VhostUserMMap;
> > > >> >+
> > > >> > typedef struct {
> > > >> > VhostUserRequest request;
> > > >> >
> > > >> >@@ -224,6 +243,7 @@ typedef union {
> > > >> > VhostUserInflight inflight;
> > > >> > VhostUserShared object;
> > > >> > VhostUserTransferDeviceState transfer_state;
> > > >> >+ VhostUserMMap mmap;
> > > >> > } VhostUserPayload;
> > > >> >
> > > >> > typedef struct VhostUserMsg {
> > > >> >@@ -1768,6 +1788,186 @@ vhost_user_backend_handle_shared_object_lookup(struct vhost_user *u,
> > > >> > return 0;
> > > >> > }
> > > >> >
> > > >> >+/**
> > > >> >+ * vhost_user_backend_handle_shmem_map() - Handle SHMEM_MAP backend request
> > > >> >+ * @dev: vhost device
> > > >> >+ * @ioc: QIOChannel for communication
> > > >> >+ * @hdr: vhost-user message header
> > > >> >+ * @payload: message payload containing mapping details
> > > >> >+ * @fd: file descriptor for the shared memory region
> > > >> >+ *
> > > >> >+ * Handles VHOST_USER_BACKEND_SHMEM_MAP requests from the backend. Creates
> > > >> >+ * a VhostUserShmemObject to manage the shared memory mapping and adds it
> > > >> >+ * to the appropriate VirtIO shared memory region. The VhostUserShmemObject
> > > >> >+ * serves as an intermediate parent for the MemoryRegion, ensuring proper
> > > >> >+ * lifecycle management with reference counting.
> > > >> >+ *
> > > >> >+ * Returns: 0 on success, negative errno on failure
> > > >> >+ */
> > > >> >+static int
> > > >> >+vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> > > >> >+ QIOChannel *ioc,
> > > >> >+ VhostUserHeader *hdr,
> > > >> >+ VhostUserPayload *payload,
> > > >> >+ int fd)
> > > >> >+{
> > > >> >+ VirtioSharedMemory *shmem;
> > > >> >+ VhostUserMMap *vu_mmap = &payload->mmap;
> > > >> >+ VirtioSharedMemoryMapping *existing;
> > > >> >+ Error *local_err = NULL;
> > > >> >+ int ret = 0;
> > > >> >+
> > > >> >+ if (fd < 0) {
> > > >> >+ error_report("Bad fd for map");
> > > >> >+ ret = -EBADF;
> > > >> >+ goto send_reply;
> > > >>
> > > >> Apologies if I missed something obvious, but won't going to send_reply
> > > >> at this point cause memory_region_transaction_commit() to be called?
> > > >> Maybe we should have two cleanup labels, depending on whether
> > > >> memory_region_transaction_begin() has been called yet?
> > > >
> > > >You are right, this is not good, at transaction_depth 0 it would fail
> > > >an assertion within memory_region_transaction_commit().
> > > >
> > > >Not sure another cleanup label will work, though, as the transaction
> > > >commit has to go after the reply. Maybe we can make the reply within
> > > >the handler optional (leveraging the REPLY_MASK flag). But that will
> > > >need to be tuned when this patch lands:
> > > >https://lists.gnu.org/archive/html/qemu-devel/2025-10/msg03632.html
> > > >
> > > >Or just a simpler solution like this would do?
> > >
> > >
> > > Yes this looks like enough. The another cleanup label alternative I
> > > meant would be to do:
> > >
> > > +send_reply_no_commit:
> > > + if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> > > + payload->u64 = !!ret;
> > > + hdr->size = sizeof(payload->u64);
> > > + if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> > > + error_report_err(local_err);
> > > + return -EFAULT;
> > > + }
> > > + }
> > > + return 0;
> > > +send_reply:
> > > + if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> > > + payload->u64 = !!ret;
> > > + hdr->size = sizeof(payload->u64);
> > > + if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> > > + error_report_err(local_err);
> > > + memory_region_transaction_commit();
> > > + return -EFAULT;
> > > + }
> > > + }
> > > +
> > > + memory_region_transaction_commit();
> > > +
> > > + return 0;
> > > +}
> > >
> > > But this is not much prettier. (In rust this would be trivial with a MR
> > > transaction RAII object...)
> >
> > I tried this solution too and it fails with `Illegal instruction (core
> > dumped)`. I am not sure why, the logic should be basically the same.
> > It has me puzzled.
> >
> > So... I'll use the boolean variable solution. That one seems to work fine.
>
> Sounds like you're hitting the abort, because `send_reply_no_commit`
> is first, and it runs on success (it should be after `send_reply`).
> Oops :)
Ah right! I was so focused on the gotos that I didn't see that the
success path doesn't go through them. I spent more time than I'd like
to admit trying to see where it went south.
Well, thanks! Then I'll switch and go with the label solution instead :)
>
> >
> > >
> > > >
> > > >diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > > >index c39cf52dbc..46126eed3b 100644
> > > >--- a/hw/virtio/vhost-user.c
> > > >+++ b/hw/virtio/vhost-user.c
> > > >@@ -1824,6 +1824,7 @@ vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> > > > VirtioSharedMemoryMapping *existing;
> > > > Error *local_err = NULL;
> > > > int ret = 0;
> > > >+ bool transaction_started = false;
> > > >
> > > > if (fd < 0) {
> > > > error_report("Bad fd for map");
> > > >@@ -1864,6 +1865,7 @@ vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> > > > }
> > > >
> > > > memory_region_transaction_begin();
> > > >+ transaction_started = true;
> > > >
> > > > /* Create VirtioSharedMemoryMapping object */
> > > > VirtioSharedMemoryMapping *mapping = virtio_shared_memory_mapping_new(
> > > >@@ -1889,14 +1891,18 @@ send_reply:
> > > > hdr->size = sizeof(payload->u64);
> > > > if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> > > > error_report_err(local_err);
> > > >- memory_region_transaction_commit();
> > > >+ if (transaction_started) {
> > > >+ memory_region_transaction_commit();
> > > >+ }
> > > > return -EFAULT;
> > > > }
> > > > }
> > > >
> > > >- memory_region_transaction_commit();
> > > >-
> > > >- return 0;
> > > >+ /* Only commit if we started a transaction */
> > > >+ if (transaction_started) {
> > > >+ memory_region_transaction_commit();
> > > >+ }
> > > >+ return ret;
> > > > }
> > > >
> > > >>
> > > >> >+ }
> > > >> >+
> > > >> >+ if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
> > > >> >+ error_report("Device has no VIRTIO Shared Memory Regions. "
> > > >> >+ "Requested ID: %d", vu_mmap->shmid);
> > > >> >+ ret = -EFAULT;
> > > >> >+ goto send_reply;
> > > >> >+ }
> > > >> >+
> > > >> >+ shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
> > > >> >+ if (!shmem) {
> > > >> >+ error_report("VIRTIO Shared Memory Region at "
> > > >> >+ "ID %d not found or uninitialized", vu_mmap->shmid);
> > > >> >+ ret = -EFAULT;
> > > >> >+ goto send_reply;
> > > >> >+ }
> > > >> >+
> > > >> >+ if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> > > >> >+ (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
> > > >> >+ error_report("Bad offset/len for mmap %" PRIx64 "+%" PRIx64,
> > > >> >+ vu_mmap->shm_offset, vu_mmap->len);
> > > >> >+ ret = -EFAULT;
> > > >> >+ goto send_reply;
> > > >> >+ }
> > > >> >+
> > > >> >+ QTAILQ_FOREACH(existing, &shmem->mmaps, link) {
> > > >> >+ if (ranges_overlap(existing->offset, existing->len,
> > > >> >+ vu_mmap->shm_offset, vu_mmap->len)) {
> > > >> >+ error_report("VIRTIO Shared Memory mapping overlap");
> > > >> >+ ret = -EFAULT;
> > > >> >+ goto send_reply;
> > > >> >+ }
> > > >> >+ }
> > > >> >+
> > > >> >+ memory_region_transaction_begin();
> > > >> >+
> > > >> >+ /* Create VirtioSharedMemoryMapping object */
> > > >> >+ VirtioSharedMemoryMapping *mapping = virtio_shared_memory_mapping_new(
> > > >> >+ vu_mmap->shmid, fd, vu_mmap->fd_offset, vu_mmap->shm_offset,
> > > >> >+ vu_mmap->len, vu_mmap->flags & VHOST_USER_FLAG_MAP_RW);
> > > >> >+
> > > >> >+ if (!mapping) {
> > > >> >+ ret = -EFAULT;
> > > >> >+ goto send_reply;
> > > >> >+ }
> > > >> >+
> > > >> >+ /* Add the mapping to the shared memory region */
> > > >> >+ if (virtio_add_shmem_map(shmem, mapping) != 0) {
> > > >> >+ error_report("Failed to add shared memory mapping");
> > > >> >+ object_unref(OBJECT(mapping));
> > > >> >+ ret = -EFAULT;
> > > >> >+ goto send_reply;
> > > >> >+ }
> > > >> >+
> > > >> >+send_reply:
> > > >> >+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> > > >> >+ payload->u64 = !!ret;
> > > >> >+ hdr->size = sizeof(payload->u64);
> > > >> >+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> > > >> >+ error_report_err(local_err);
> > > >> >+ memory_region_transaction_commit();
> > > >> >+ return -EFAULT;
> > > >> >+ }
> > > >> >+ }
> > > >> >+
> > > >> >+ memory_region_transaction_commit();
> > > >> >+
> > > >> >+ return 0;
> > > >> >+}
> > > >> >+
> > > >> >+/**
> > > >> >+ * vhost_user_backend_handle_shmem_unmap() - Handle SHMEM_UNMAP backend request
> > > >> >+ * @dev: vhost device
> > > >> >+ * @ioc: QIOChannel for communication
> > > >> >+ * @hdr: vhost-user message header
> > > >> >+ * @payload: message payload containing unmapping details
> > > >> >+ *
> > > >> >+ * Handles VHOST_USER_BACKEND_SHMEM_UNMAP requests from the backend. Removes
> > > >> >+ * the specified memory mapping from the VirtIO shared memory region. This
> > > >> >+ * automatically unreferences the associated VirtioSharedMemoryMapping,
> > > >> >+ * which may trigger its finalization and cleanup (munmap, close fd) if no
> > > >> >+ * other references exist.
> > > >> >+ *
> > > >> >+ * Returns: 0 on success, negative errno on failure
> > > >> >+ */
> > > >> >+static int
> > > >> >+vhost_user_backend_handle_shmem_unmap(struct vhost_dev *dev,
> > > >> >+ QIOChannel *ioc,
> > > >> >+ VhostUserHeader *hdr,
> > > >> >+ VhostUserPayload *payload)
> > > >> >+{
> > > >> >+ VirtioSharedMemory *shmem;
> > > >> >+ VirtioSharedMemoryMapping *mmap = NULL;
> > > >> >+ VhostUserMMap *vu_mmap = &payload->mmap;
> > > >> >+ Error *local_err = NULL;
> > > >> >+ int ret = 0;
> > > >> >+
> > > >> >+ if (QSIMPLEQ_EMPTY(&dev->vdev->shmem_list)) {
> > > >> >+ error_report("Device has no VIRTIO Shared Memory Regions. "
> > > >> >+ "Requested ID: %d", vu_mmap->shmid);
> > > >> >+ ret = -EFAULT;
> > > >> >+ goto send_reply;
> > > >> >+ }
> > > >> >+
> > > >> >+ shmem = virtio_find_shmem_region(dev->vdev, vu_mmap->shmid);
> > > >> >+ if (!shmem) {
> > > >> >+ error_report("VIRTIO Shared Memory Region at "
> > > >> >+ "ID %d not found or uninitialized", vu_mmap->shmid);
> > > >> >+ ret = -EFAULT;
> > > >> >+ goto send_reply;
> > > >> >+ }
> > > >> >+
> > > >> >+ if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> > > >> >+ (vu_mmap->shm_offset + vu_mmap->len) > shmem->mr.size) {
> > > >> >+ error_report("Bad offset/len for unmmap %" PRIx64 "+%" PRIx64,
> > > >> >+ vu_mmap->shm_offset, vu_mmap->len);
> > > >> >+ ret = -EFAULT;
> > > >> >+ goto send_reply;
> > > >> >+ }
> > > >> >+
> > > >> >+ mmap = virtio_find_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
> > > >> >+ if (!mmap) {
> > > >> >+ error_report("Shared memory mapping not found at offset %" PRIx64
> > > >> >+ " with length %" PRIx64,
> > > >> >+ vu_mmap->shm_offset, vu_mmap->len);
> > > >> >+ ret = -EFAULT;
> > > >> >+ goto send_reply;
> > > >> >+ }
> > > >> >+
> > > >> >+send_reply:
> > > >> >+ if (hdr->flags & VHOST_USER_NEED_REPLY_MASK) {
> > > >> >+ payload->u64 = !!ret;
> > > >> >+ hdr->size = sizeof(payload->u64);
> > > >> >+ if (!vhost_user_send_resp(ioc, hdr, payload, &local_err)) {
> > > >> >+ error_report_err(local_err);
> > > >> >+ return -EFAULT;
> > > >> >+ }
> > > >> >+ }
> > > >> >+
> > > >> >+ if (!ret && mmap) {
> > > >> >+ /* Free the MemoryRegion only after reply */
> > > >> >+ virtio_del_shmem_map(shmem, vu_mmap->shm_offset, vu_mmap->len);
> > > >> >+ }
> > > >> >+
> > > >> >+ return 0;
> > > >> >+}
> > > >> >+
> > > >> > static void close_backend_channel(struct vhost_user *u)
> > > >> > {
> > > >> > g_source_destroy(u->backend_src);
> > > >> >@@ -1836,6 +2036,19 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
> > > >> > ret = vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
> > > >> > &hdr, &payload);
> > > >> > break;
> > > >> >+ case VHOST_USER_BACKEND_SHMEM_MAP:
> > > >> >+ /* Handler manages its own response, check error and close connection */
> > > >> >+ if (vhost_user_backend_handle_shmem_map(dev, ioc, &hdr, &payload,
> > > >> >+ fd ? fd[0] : -1) < 0) {
> > > >> >+ goto err;
> > > >> >+ }
> > > >> >+ break;
> > > >> >+ case VHOST_USER_BACKEND_SHMEM_UNMAP:
> > > >> >+ /* Handler manages its own response, check error and close connection */
> > > >> >+ if (vhost_user_backend_handle_shmem_unmap(dev, ioc, &hdr, &payload) < 0) {
> > > >> >+ goto err;
> > > >> >+ }
> > > >> >+ break;
> > > >> > default:
> > > >> > error_report("Received unexpected msg type: %d.", hdr.request);
> > > >> > ret = -EINVAL;
> > > >> >diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> > > >> >index 153ee0a0cf..37070e7951 100644
> > > >> >--- a/hw/virtio/virtio.c
> > > >> >+++ b/hw/virtio/virtio.c
> > > >> >@@ -3086,6 +3086,181 @@ int virtio_save(VirtIODevice *vdev, QEMUFile *f)
> > > >> > return vmstate_save_state(f, &vmstate_virtio, vdev, NULL, &error_fatal);
> > > >> > }
> > > >> >
> > > >> >+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid, uint64_t size)
> > > >> >+{
> > > >> >+ VirtioSharedMemory *elem;
> > > >> >+ g_autofree char *name = NULL;
> > > >> >+
> > > >> >+ elem = g_new0(VirtioSharedMemory, 1);
> > > >> >+ elem->shmid = shmid;
> > > >> >+
> > > >> >+ /* Initialize embedded MemoryRegion as container for shmem mappings */
> > > >> >+ name = g_strdup_printf("virtio-shmem-%d", shmid);
> > > >> >+ memory_region_init(&elem->mr, OBJECT(vdev), name, size);
> > > >> >+ QTAILQ_INIT(&elem->mmaps);
> > > >> >+ QSIMPLEQ_INSERT_TAIL(&vdev->shmem_list, elem, entry);
> > > >> >+ return elem;
> > > >> >+}
> > > >> >+
> > > >> >+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid)
> > > >> >+{
> > > >> >+ VirtioSharedMemory *shmem, *next;
> > > >> >+ QSIMPLEQ_FOREACH_SAFE(shmem, &vdev->shmem_list, entry, next) {
> > > >> >+ if (shmem->shmid == shmid) {
> > > >> >+ return shmem;
> > > >> >+ }
> > > >> >+ }
> > > >> >+ return NULL;
> > > >> >+}
> > > >> >+
> > > >> >+static void virtio_shared_memory_mapping_instance_init(Object *obj)
> > > >> >+{
> > > >> >+ VirtioSharedMemoryMapping *mapping = VIRTIO_SHARED_MEMORY_MAPPING(obj);
> > > >> >+
> > > >> >+ mapping->shmid = 0;
> > > >> >+ mapping->fd = -1;
> > > >> >+ mapping->offset = 0;
> > > >> >+ mapping->len = 0;
> > > >> >+ mapping->mr = NULL;
> > > >> >+}
> > > >> >+
> > > >> >+static void virtio_shared_memory_mapping_instance_finalize(Object *obj)
> > > >> >+{
> > > >> >+ VirtioSharedMemoryMapping *mapping = VIRTIO_SHARED_MEMORY_MAPPING(obj);
> > > >> >+
> > > >> >+ /* Clean up MemoryRegion if it exists */
> > > >> >+ if (mapping->mr) {
> > > >> >+ /* Unparent the MemoryRegion to trigger cleanup */
> > > >> >+ object_unparent(OBJECT(mapping->mr));
> > > >> >+ mapping->mr = NULL;
> > > >> >+ }
> > > >> >+
> > > >> >+ /* Close file descriptor */
> > > >> >+ if (mapping->fd >= 0) {
> > > >> >+ close(mapping->fd);
> > > >> >+ mapping->fd = -1;
> > > >> >+ }
> > > >> >+}
> > > >> >+
> > > >> >+VirtioSharedMemoryMapping *virtio_shared_memory_mapping_new(uint8_t shmid,
> > > >> >+ int fd,
> > > >> >+ uint64_t fd_offset,
> > > >> >+ uint64_t shm_offset,
> > > >> >+ uint64_t len,
> > > >> >+ bool allow_write)
> > > >> >+{
> > > >> >+ VirtioSharedMemoryMapping *mapping;
> > > >> >+ MemoryRegion *mr;
> > > >> >+ g_autoptr(GString) mr_name = g_string_new(NULL);
> > > >> >+ uint32_t ram_flags;
> > > >> >+ Error *local_err = NULL;
> > > >> >+
> > > >> >+ if (len == 0) {
> > > >> >+ error_report("Shared memory mapping size cannot be zero");
> > > >> >+ return NULL;
> > > >> >+ }
> > > >> >+
> > > >> >+ fd = dup(fd);
> > > >> >+ if (fd < 0) {
> > > >> >+ error_report("Failed to duplicate fd: %s", strerror(errno));
> > > >> >+ return NULL;
> > > >> >+ }
> > > >> >+
> > > >> >+ /* Determine RAM flags */
> > > >> >+ ram_flags = RAM_SHARED;
> > > >> >+ if (!allow_write) {
> > > >> >+ ram_flags |= RAM_READONLY_FD;
> > > >> >+ }
> > > >> >+
> > > >> >+ /* Create the VirtioSharedMemoryMapping */
> > > >> >+ mapping = VIRTIO_SHARED_MEMORY_MAPPING(
> > > >> >+ object_new(TYPE_VIRTIO_SHARED_MEMORY_MAPPING));
> > > >> >+
> > > >> >+ /* Set up object properties */
> > > >> >+ mapping->shmid = shmid;
> > > >> >+ mapping->fd = fd;
> > > >> >+ mapping->offset = shm_offset;
> > > >> >+ mapping->len = len;
> > > >> >+
> > > >> >+ /* Create MemoryRegion as a child of this object */
> > > >> >+ mr = g_new0(MemoryRegion, 1);
> > > >> >+ g_string_printf(mr_name, "virtio-shmem-%d-%" PRIx64, shmid, shm_offset);
> > > >> >+
> > > >> >+ /* Initialize MemoryRegion with file descriptor */
> > > >> >+ if (!memory_region_init_ram_from_fd(mr, OBJECT(mapping), mr_name->str,
> > > >> >+ len, ram_flags, fd, fd_offset,
> > > >> >+ &local_err)) {
> > > >> >+ error_report_err(local_err);
> > > >> >+ g_free(mr);
> > > >> >+ close(fd);
> > > >> >+ object_unref(OBJECT(mapping));
> > > >> >+ return NULL;
> > > >> >+ }
> > > >> >+
> > > >> >+ mapping->mr = mr;
> > > >> >+ return mapping;
> > > >> >+}
> > > >> >+
> > > >> >+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
> > > >> >+ VirtioSharedMemoryMapping *mapping)
> > > >> >+{
> > > >> >+ if (!mapping) {
> > > >> >+ error_report("VirtioSharedMemoryMapping cannot be NULL");
> > > >> >+ return -1;
> > > >> >+ }
> > > >> >+ if (!mapping->mr) {
> > > >> >+ error_report("VirtioSharedMemoryMapping has no MemoryRegion");
> > > >> >+ return -1;
> > > >> >+ }
> > > >> >+
> > > >> >+ /* Validate boundaries against the VIRTIO shared memory region */
> > > >> >+ if (mapping->offset + mapping->len > shmem->mr.size) {
> > > >> >+ error_report("Memory exceeds the shared memory boundaries");
> > > >> >+ return -1;
> > > >> >+ }
> > > >> >+
> > > >> >+ /* Add as subregion to the VIRTIO shared memory */
> > > >> >+ memory_region_add_subregion(&shmem->mr, mapping->offset, mapping->mr);
> > > >> >+
> > > >> >+ /* Add to the mapped regions list */
> > > >> >+ QTAILQ_INSERT_TAIL(&shmem->mmaps, mapping, link);
> > > >> >+
> > > >> >+ return 0;
> > > >> >+}
> > > >> >+
> > > >> >+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
> > > >> >+ hwaddr offset, uint64_t size)
> > > >> >+{
> > > >> >+ VirtioSharedMemoryMapping *mapping;
> > > >> >+ QTAILQ_FOREACH(mapping, &shmem->mmaps, link) {
> > > >> >+ if (mapping->offset == offset && mapping->mr->size == size) {
> > > >> >+ return mapping;
> > > >> >+ }
> > > >> >+ }
> > > >> >+ return NULL;
> > > >> >+}
> > > >> >+
> > > >> >+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
> > > >> >+ uint64_t size)
> > > >> >+{
> > > >> >+ VirtioSharedMemoryMapping *mapping = virtio_find_shmem_map(shmem, offset, size);
> > > >> >+ if (mapping == NULL) {
> > > >> >+ return;
> > > >> >+ }
> > > >> >+
> > > >> >+ /*
> > > >> >+ * Remove from memory region first
> > > >> >+ */
> > > >> >+ memory_region_del_subregion(&shmem->mr, mapping->mr);
> > > >> >+
> > > >> >+ /*
> > > >> >+ * Remove from list and unref the mapping which will trigger automatic cleanup
> > > >> >+ * when the reference count reaches zero.
> > > >> >+ */
> > > >> >+ QTAILQ_REMOVE(&shmem->mmaps, mapping, link);
> > > >> >+ object_unref(OBJECT(mapping));
> > > >> >+}
> > > >> >+
> > > >> > /* A wrapper for use as a VMState .put function */
> > > >> > static int virtio_device_put(QEMUFile *f, void *opaque, size_t size,
> > > >> > const VMStateField *field, JSONWriter *vmdesc)
> > > >> >@@ -3212,6 +3387,7 @@ void virtio_reset(void *opaque)
> > > >> > {
> > > >> > VirtIODevice *vdev = opaque;
> > > >> > VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
> > > >> >+ VirtioSharedMemory *shmem;
> > > >> > uint64_t features[VIRTIO_FEATURES_NU64S];
> > > >> > int i;
> > > >> >
> > > >> >@@ -3251,6 +3427,14 @@ void virtio_reset(void *opaque)
> > > >> > for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
> > > >> > __virtio_queue_reset(vdev, i);
> > > >> > }
> > > >> >+
> > > >> >+ /* Mappings are removed to prevent stale fds from remaining open. */
> > > >> >+ QSIMPLEQ_FOREACH(shmem, &vdev->shmem_list, entry) {
> > > >> >+ while (!QTAILQ_EMPTY(&shmem->mmaps)) {
> > > >> >+ VirtioSharedMemoryMapping *mapping = QTAILQ_FIRST(&shmem->mmaps);
> > > >> >+ virtio_del_shmem_map(shmem, mapping->offset, mapping->mr->size);
> > > >> >+ }
> > > >> >+ }
> > > >> > }
> > > >> >
> > > >> > static void virtio_device_check_notification_compatibility(VirtIODevice *vdev,
> > > >> >@@ -3574,6 +3758,7 @@ void virtio_init(VirtIODevice *vdev, uint16_t device_id, size_t config_size)
> > > >> > NULL, virtio_vmstate_change, vdev);
> > > >> > vdev->device_endian = virtio_default_endian();
> > > >> > vdev->use_guest_notifier_mask = true;
> > > >> >+ QSIMPLEQ_INIT(&vdev->shmem_list);
> > > >> > }
> > > >> >
> > > >> > /*
> > > >> >@@ -4085,11 +4270,24 @@ static void virtio_device_free_virtqueues(VirtIODevice *vdev)
> > > >> > static void virtio_device_instance_finalize(Object *obj)
> > > >> > {
> > > >> > VirtIODevice *vdev = VIRTIO_DEVICE(obj);
> > > >> >+ VirtioSharedMemory *shmem;
> > > >> >
> > > >> > virtio_device_free_virtqueues(vdev);
> > > >> >
> > > >> > g_free(vdev->config);
> > > >> > g_free(vdev->vector_queues);
> > > >> >+ while (!QSIMPLEQ_EMPTY(&vdev->shmem_list)) {
> > > >> >+ shmem = QSIMPLEQ_FIRST(&vdev->shmem_list);
> > > >> >+ while (!QTAILQ_EMPTY(&shmem->mmaps)) {
> > > >> >+ VirtioSharedMemoryMapping *mapping = QTAILQ_FIRST(&shmem->mmaps);
> > > >> >+ virtio_del_shmem_map(shmem, mapping->offset, mapping->mr->size);
> > > >> >+ }
> > > >> >+
> > > >> >+ /* Clean up the embedded MemoryRegion */
> > > >> >+ object_unparent(OBJECT(&shmem->mr));
> > > >> >+ QSIMPLEQ_REMOVE_HEAD(&vdev->shmem_list, entry);
> > > >> >+ g_free(shmem);
> > > >> >+ }
> > > >> > }
> > > >> >
> > > >> > static const Property virtio_properties[] = {
> > > >> >@@ -4455,9 +4653,18 @@ static const TypeInfo virtio_device_info = {
> > > >> > .class_size = sizeof(VirtioDeviceClass),
> > > >> > };
> > > >> >
> > > >> >+static const TypeInfo virtio_shared_memory_mapping_info = {
> > > >> >+ .name = TYPE_VIRTIO_SHARED_MEMORY_MAPPING,
> > > >> >+ .parent = TYPE_OBJECT,
> > > >> >+ .instance_size = sizeof(VirtioSharedMemoryMapping),
> > > >> >+ .instance_init = virtio_shared_memory_mapping_instance_init,
> > > >> >+ .instance_finalize = virtio_shared_memory_mapping_instance_finalize,
> > > >> >+};
> > > >> >+
> > > >> > static void virtio_register_types(void)
> > > >> > {
> > > >> > type_register_static(&virtio_device_info);
> > > >> >+ type_register_static(&virtio_shared_memory_mapping_info);
> > > >> > }
> > > >> >
> > > >> > type_init(virtio_register_types)
> > > >> >diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> > > >> >index d97529c3f1..cee807e236 100644
> > > >> >--- a/include/hw/virtio/virtio.h
> > > >> >+++ b/include/hw/virtio/virtio.h
> > > >> >@@ -99,6 +99,46 @@ enum virtio_device_endian {
> > > >> > VIRTIO_DEVICE_ENDIAN_BIG,
> > > >> > };
> > > >> >
> > > >> >+#define TYPE_VIRTIO_SHARED_MEMORY_MAPPING "virtio-shared-memory-mapping"
> > > >> >+OBJECT_DECLARE_SIMPLE_TYPE(VirtioSharedMemoryMapping, VIRTIO_SHARED_MEMORY_MAPPING)
> > > >> >+
> > > >> >+/**
> > > >> >+ * VirtioSharedMemoryMapping:
> > > >> >+ * @parent: Parent QOM object
> > > >> >+ * @shmid: VIRTIO Shared Memory Region ID
> > > >> >+ * @fd: File descriptor for the shared memory region
> > > >> >+ * @offset: Offset within the VIRTIO Shared Memory Region
> > > >> >+ * @len: Size of the mapping
> > > >> >+ * @mr: MemoryRegion associated with this shared memory mapping
> > > >> >+ * @link: List entry for the shared memory region's mapping list
> > > >> >+ *
> > > >> >+ * A QOM object that represents an individual file descriptor-based shared
> > > >> >+ * memory mapping within a VIRTIO Shared Memory Region. It manages the
> > > >> >+ * MemoryRegion lifecycle and file descriptor cleanup through QOM reference
> > > >> >+ * counting. When the object is unreferenced and its reference count drops
> > > >> >+ * to zero, it automatically cleans up the MemoryRegion and closes the file
> > > >> >+ * descriptor.
> > > >> >+ */
> > > >> >+struct VirtioSharedMemoryMapping {
> > > >> >+ Object parent;
> > > >> >+
> > > >> >+ uint8_t shmid;
> > > >> >+ int fd;
> > > >> >+ hwaddr offset;
> > > >> >+ uint64_t len;
> > > >> >+ MemoryRegion *mr;
> > > >> >+ QTAILQ_ENTRY(VirtioSharedMemoryMapping) link;
> > > >> >+};
> > > >> >+
> > > >> >+struct VirtioSharedMemory {
> > > >> >+ uint8_t shmid;
> > > >> >+ MemoryRegion mr;
> > > >> >+ QTAILQ_HEAD(, VirtioSharedMemoryMapping) mmaps;
> > > >> >+ QSIMPLEQ_ENTRY(VirtioSharedMemory) entry;
> > > >> >+};
> > > >> >+
> > > >> >+typedef struct VirtioSharedMemory VirtioSharedMemory;
> > > >> >+
> > > >> > /**
> > > >> > * struct VirtIODevice - common VirtIO structure
> > > >> > * @name: name of the device
> > > >> >@@ -168,6 +208,8 @@ struct VirtIODevice
> > > >> > */
> > > >> > EventNotifier config_notifier;
> > > >> > bool device_iotlb_enabled;
> > > >> >+ /* List of VIRTIO Shared Memory Regions and their mappings. */
> > > >> >+ QSIMPLEQ_HEAD(, VirtioSharedMemory) shmem_list;
> > > >> > };
> > > >> >
> > > >> > struct VirtioDeviceClass {
> > > >> >@@ -298,6 +340,100 @@ void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
> > > >> >
> > > >> > int virtio_save(VirtIODevice *vdev, QEMUFile *f);
> > > >> >
> > > >> >+/**
> > > >> >+ * virtio_new_shmem_region() - Create a new shared memory region
> > > >> >+ * @vdev: VirtIODevice
> > > >> >+ * @shmid: Shared memory ID
> > > >> >+ * @size: Size of the shared memory region
> > > >> >+ *
> > > >> >+ * Creates a new VirtioSharedMemory region for the given device and ID.
> > > >> >+ * The returned VirtioSharedMemory is owned by the VirtIODevice and will
> > > >> >+ * be automatically freed when the device is destroyed. The caller
> > > >> >+ * should not free the returned pointer.
> > > >> >+ *
> > > >> >+ * Returns: Pointer to the new VirtioSharedMemory region, or NULL on failure
> > > >> >+ */
> > > >> >+VirtioSharedMemory *virtio_new_shmem_region(VirtIODevice *vdev, uint8_t shmid, uint64_t size);
> > > >> >+
> > > >> >+/**
> > > >> >+ * virtio_find_shmem_region() - Find an existing shared memory region
> > > >> >+ * @vdev: VirtIODevice
> > > >> >+ * @shmid: Shared memory ID to find
> > > >> >+ *
> > > >> >+ * Finds an existing VirtioSharedMemory region by ID. The returned pointer
> > > >> >+ * is owned by the VirtIODevice and should not be freed by the caller.
> > > >> >+ *
> > > >> >+ * Returns: Pointer to the VirtioSharedMemory region, or NULL if not found
> > > >> >+ */
> > > >> >+VirtioSharedMemory *virtio_find_shmem_region(VirtIODevice *vdev, uint8_t shmid);
> > > >> >+
> > > >> >+/**
> > > >> >+ * virtio_shared_memory_mapping_new() - Create a new VirtioSharedMemoryMapping
> > > >> >+ * @shmid: VIRTIO Shared Memory Region ID
> > > >> >+ * @fd: File descriptor for the shared memory
> > > >> >+ * @fd_offset: Offset within the file descriptor
> > > >> >+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
> > > >> >+ * @len: Size of the mapping
> > > >> >+ * @allow_write: Whether to allow write access to the mapping
> > > >> >+ *
> > > >> >+ * Creates a new VirtioSharedMemoryMapping that manages a shared memory mapping.
> > > >> >+ * The object will create a MemoryRegion using memory_region_init_ram_from_fd()
> > > >> >+ * as a child object. When the object is finalized, it will automatically
> > > >> >+ * clean up the MemoryRegion and close the file descriptor.
> > > >> >+ *
> > > >> >+ * Return: A new VirtioSharedMemoryMapping on success, NULL on error.
> > > >> >+ */
> > > >> >+VirtioSharedMemoryMapping *virtio_shared_memory_mapping_new(uint8_t shmid,
> > > >> >+ int fd,
> > > >> >+ uint64_t fd_offset,
> > > >> >+ uint64_t shm_offset,
> > > >> >+ uint64_t len,
> > > >> >+ bool allow_write);
> > > >> >+
> > > >> >+/**
> > > >> >+ * virtio_add_shmem_map() - Add a memory mapping to a shared region
> > > >> >+ * @shmem: VirtioSharedMemory region
> > > >> >+ * @mapping: VirtioSharedMemoryMapping to add (transfers ownership)
> > > >> >+ *
> > > >> >+ * Adds a memory mapping to the shared memory region. The VirtioSharedMemoryMapping
> > > >> >+ * ownership is transferred to the shared memory region and will be automatically
> > > >> >+ * cleaned up through QOM reference counting when virtio_del_shmem_map() is
> > > >> >+ * called or when the shared memory region is destroyed.
> > > >> >+ *
> > > >> >+ * Returns: 0 on success, negative errno on failure
> > > >> >+ */
> > > >> >+int virtio_add_shmem_map(VirtioSharedMemory *shmem,
> > > >> >+ VirtioSharedMemoryMapping *mapping);
> > > >> >+
> > > >> >+/**
> > > >> >+ * virtio_find_shmem_map() - Find a memory mapping in a shared region
> > > >> >+ * @shmem: VirtioSharedMemory region
> > > >> >+ * @offset: Offset within the shared memory region
> > > >> >+ * @size: Size of the mapping to find
> > > >> >+ *
> > > >> >+ * Finds an existing memory mapping that covers the specified range.
> > > >> >+ * The returned VirtioSharedMemoryMapping is owned by the VirtioSharedMemory
> > > >> >+ * region and should not be freed by the caller.
> > > >> >+ *
> > > >> >+ * Returns: Pointer to the VirtioSharedMemoryMapping, or NULL if not found
> > > >> >+ */
> > > >> >+VirtioSharedMemoryMapping *virtio_find_shmem_map(VirtioSharedMemory *shmem,
> > > >> >+ hwaddr offset, uint64_t size);
> > > >> >+
> > > >> >+/**
> > > >> >+ * virtio_del_shmem_map() - Remove a memory mapping from a shared region
> > > >> >+ * @shmem: VirtioSharedMemory region
> > > >> >+ * @offset: Offset of the mapping to remove
> > > >> >+ * @size: Size of the mapping to remove
> > > >> >+ *
> > > >> >+ * Removes a memory mapping from the shared memory region. This will
> > > >> >+ * automatically unref the associated VirtioSharedMemoryMapping, which may
> > > >> >+ * trigger its finalization and cleanup if no other references exist.
> > > >> >+ * The mapping's MemoryRegion will be properly unmapped and cleaned up.
> > > >> >+ */
> > > >> >+void virtio_del_shmem_map(VirtioSharedMemory *shmem, hwaddr offset,
> > > >> >+ uint64_t size);
> > > >> >+
> > > >> > extern const VMStateInfo virtio_vmstate_info;
> > > >> >
> > > >> > #define VMSTATE_VIRTIO_DEVICE \
> > > >> >diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> > > >> >index 9c630c2170..034cbfdc3c 100644
> > > >> >--- a/subprojects/libvhost-user/libvhost-user.c
> > > >> >+++ b/subprojects/libvhost-user/libvhost-user.c
> > > >> >@@ -1592,6 +1592,76 @@ vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN])
> > > >> > return vu_send_message(dev, &msg);
> > > >> > }
> > > >> >
> > > >> >+bool
> > > >> >+vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> > > >> >+ uint64_t shm_offset, uint64_t len, uint64_t flags, int fd)
> > > >> >+{
> > > >> >+ VhostUserMsg vmsg = {
> > > >> >+ .request = VHOST_USER_BACKEND_SHMEM_MAP,
> > > >> >+ .size = sizeof(vmsg.payload.mmap),
> > > >> >+ .flags = VHOST_USER_VERSION,
> > > >> >+ .payload.mmap = {
> > > >> >+ .shmid = shmid,
> > > >> >+ .fd_offset = fd_offset,
> > > >> >+ .shm_offset = shm_offset,
> > > >> >+ .len = len,
> > > >> >+ .flags = flags,
> > > >> >+ },
> > > >> >+ .fd_num = 1,
> > > >> >+ .fds[0] = fd,
> > > >> >+ };
> > > >> >+
> > > >> >+ if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
> > > >> >+ return false;
> > > >> >+ }
> > > >> >+
> > > >> >+ if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> > > >> >+ vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> > > >> >+ }
> > > >> >+
> > > >> >+ pthread_mutex_lock(&dev->backend_mutex);
> > > >> >+ if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> > > >> >+ pthread_mutex_unlock(&dev->backend_mutex);
> > > >> >+ return false;
> > > >> >+ }
> > > >> >+
> > > >> >+ /* Also unlocks the backend_mutex */
> > > >> >+ return vu_process_message_reply(dev, &vmsg);
> > > >> >+}
> > > >> >+
> > > >> >+bool
> > > >> >+vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset, uint64_t len)
> > > >> >+{
> > > >> >+ VhostUserMsg vmsg = {
> > > >> >+ .request = VHOST_USER_BACKEND_SHMEM_UNMAP,
> > > >> >+ .size = sizeof(vmsg.payload.mmap),
> > > >> >+ .flags = VHOST_USER_VERSION,
> > > >> >+ .payload.mmap = {
> > > >> >+ .shmid = shmid,
> > > >> >+ .fd_offset = 0,
> > > >> >+ .shm_offset = shm_offset,
> > > >> >+ .len = len,
> > > >> >+ },
> > > >> >+ };
> > > >> >+
> > > >> >+ if (!vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_SHMEM)) {
> > > >> >+ return false;
> > > >> >+ }
> > > >> >+
> > > >> >+ if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> > > >> >+ vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> > > >> >+ }
> > > >> >+
> > > >> >+ pthread_mutex_lock(&dev->backend_mutex);
> > > >> >+ if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> > > >> >+ pthread_mutex_unlock(&dev->backend_mutex);
> > > >> >+ return false;
> > > >> >+ }
> > > >> >+
> > > >> >+ /* Also unlocks the backend_mutex */
> > > >> >+ return vu_process_message_reply(dev, &vmsg);
> > > >> >+}
> > > >> >+
> > > >> > static bool
> > > >> > vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
> > > >> > {
> > > >> >diff --git a/subprojects/libvhost-user/libvhost-user.h b/subprojects/libvhost-user/libvhost-user.h
> > > >> >index 2ffc58c11b..6a2d0f9fae 100644
> > > >> >--- a/subprojects/libvhost-user/libvhost-user.h
> > > >> >+++ b/subprojects/libvhost-user/libvhost-user.h
> > > >> >@@ -69,6 +69,8 @@ enum VhostUserProtocolFeature {
> > > >> > /* Feature 16 is reserved for VHOST_USER_PROTOCOL_F_STATUS. */
> > > >> > /* Feature 17 reserved for VHOST_USER_PROTOCOL_F_XEN_MMAP. */
> > > >> > VHOST_USER_PROTOCOL_F_SHARED_OBJECT = 18,
> > > >> >+ /* Feature 19 is reserved for VHOST_USER_PROTOCOL_F_DEVICE_STATE */
> > > >> >+ VHOST_USER_PROTOCOL_F_SHMEM = 20,
> > > >> > VHOST_USER_PROTOCOL_F_MAX
> > > >> > };
> > > >> >
> > > >> >@@ -127,6 +129,8 @@ typedef enum VhostUserBackendRequest {
> > > >> > VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> > > >> > VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> > > >> > VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> > > >> >+ VHOST_USER_BACKEND_SHMEM_MAP = 9,
> > > >> >+ VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> > > >> > VHOST_USER_BACKEND_MAX
> > > >> > } VhostUserBackendRequest;
> > > >> >
> > > >> >@@ -186,6 +190,23 @@ typedef struct VhostUserShared {
> > > >> > unsigned char uuid[UUID_LEN];
> > > >> > } VhostUserShared;
> > > >> >
> > > >> >+/* For the flags field of VhostUserMMap */
> > > >> >+#define VHOST_USER_FLAG_MAP_RW (1u << 0)
> > > >> >+
> > > >> >+typedef struct {
> > > >> >+ /* VIRTIO Shared Memory Region ID */
> > > >> >+ uint8_t shmid;
> > > >> >+ uint8_t padding[7];
> > > >> >+ /* File offset */
> > > >> >+ uint64_t fd_offset;
> > > >> >+ /* Offset within the VIRTIO Shared Memory Region */
> > > >> >+ uint64_t shm_offset;
> > > >> >+ /* Size of the mapping */
> > > >> >+ uint64_t len;
> > > >> >+ /* Flags for the mmap operation, from VHOST_USER_FLAG_MAP_* */
> > > >> >+ uint64_t flags;
> > > >> >+} VhostUserMMap;
> > > >> >+
> > > >> > #define VU_PACKED __attribute__((packed))
> > > >> >
> > > >> > typedef struct VhostUserMsg {
> > > >> >@@ -210,6 +231,7 @@ typedef struct VhostUserMsg {
> > > >> > VhostUserVringArea area;
> > > >> > VhostUserInflight inflight;
> > > >> > VhostUserShared object;
> > > >> >+ VhostUserMMap mmap;
> > > >> > } payload;
> > > >> >
> > > >> > int fds[VHOST_MEMORY_BASELINE_NREGIONS];
> > > >> >@@ -593,6 +615,38 @@ bool vu_add_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
> > > >> > */
> > > >> > bool vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
> > > >> >
> > > >> >+/**
> > > >> >+ * vu_shmem_map:
> > > >> >+ * @dev: a VuDev context
> > > >> >+ * @shmid: VIRTIO Shared Memory Region ID
> > > >> >+ * @fd_offset: File offset
> > > >> >+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
> > > >> >+ * @len: Size of the mapping
> > > >> >+ * @flags: Flags for the mmap operation
> > > >> >+ * @fd: A file descriptor
> > > >> >+ *
> > > >> >+ * Advertises a new mapping to be made in a given VIRTIO Shared Memory Region.
> > > >> >+ *
> > > >> >+ * Returns: TRUE on success, FALSE on failure.
> > > >> >+ */
> > > >> >+bool vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> > > >> >+ uint64_t shm_offset, uint64_t len, uint64_t flags, int fd);
> > > >> >+
> > > >> >+/**
> > > >> >+ * vu_shmem_unmap:
> > > >> >+ * @dev: a VuDev context
> > > >> >+ * @shmid: VIRTIO Shared Memory Region ID
> > > >> >+ * @shm_offset: Offset within the VIRTIO Shared Memory Region
> > > >> >+ * @len: Size of the mapping
> > > >> >+ *
> > > >> >+ * Requests that the front-end un-mmaps a given range in the VIRTIO Shared
> > > >> >+ * with the requested `shmid`.
> > > >> >+ *
> > > >> >+ * Returns: TRUE on success, FALSE on failure.
> > > >> >+ */
> > > >> >+bool vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t shm_offset,
> > > >> >+ uint64_t len);
> > > >> >+
> > > >> > /**
> > > >> > * vu_queue_set_notification:
> > > >> > * @dev: a VuDev context
> > > >> >--
> > > >> >2.49.0
> > > >> >
> > > >>
> > > >
> > >
> >
>
end of thread, other threads:[~2025-10-16 14:22 UTC | newest]
Thread overview: 17+ messages
2025-10-15 8:59 [PATCH v9 0/7] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
2025-10-15 8:59 ` [PATCH v9 1/7] vhost-user: Add VirtIO Shared Memory map request Albert Esteve
2025-10-15 9:47 ` Manos Pitsidianakis
2025-10-15 12:29 ` Albert Esteve
2025-10-15 12:45 ` Manos Pitsidianakis
2025-10-16 13:42 ` Albert Esteve
2025-10-16 13:53 ` Manos Pitsidianakis
2025-10-16 14:20 ` Albert Esteve
2025-10-15 15:27 ` Stefan Hajnoczi
2025-10-15 8:59 ` [PATCH v9 2/7] vhost_user.rst: Align VhostUserMsg excerpt members Albert Esteve
2025-10-15 8:59 ` [PATCH v9 3/7] vhost_user.rst: Add SHMEM_MAP/_UNMAP to spec Albert Esteve
2025-10-15 9:29 ` Manos Pitsidianakis
2025-10-15 8:59 ` [PATCH v9 4/7] vhost_user: Add frontend get_shmem_config command Albert Esteve
2025-10-15 8:59 ` [PATCH v9 5/7] vhost_user.rst: Add GET_SHMEM_CONFIG message Albert Esteve
2025-10-15 8:59 ` [PATCH v9 6/7] qmp: add shmem feature map Albert Esteve
2025-10-15 8:59 ` [PATCH v9 7/7] vhost-user-device: Add shared memory BAR Albert Esteve
2025-10-15 15:35 ` Stefan Hajnoczi