From: Albert Esteve <aesteve@redhat.com>
Date: Wed, 11 Sep 2024 13:57:15 +0200
Subject: Re: [RFC PATCH v2 1/5] vhost-user: Add VIRTIO Shared Memory map request
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, jasowang@redhat.com, david@redhat.com, slp@redhat.com, Alex Bennée, "Michael S. Tsirkin"

On Thu, Sep 5, 2024 at 6:45 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> On Tue, Sep 03, 2024 at 01:54:12PM +0200, Albert Esteve wrote:
> > On Tue, Sep 3, 2024 at 11:54 AM Albert Esteve <aesteve@redhat.com> wrote:
> > >
> > > On Thu, Jul 11, 2024 at 9:45 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > >
> > >> On Fri, Jun 28, 2024 at 04:57:06PM +0200, Albert Esteve wrote:
> > >> > Add SHMEM_MAP/UNMAP requests to vhost-user to
> > >> > handle VIRTIO Shared Memory mappings.
> > >> >
> > >> > This request allows backends to dynamically map
> > >> > fds into a VIRTIO Shared Memory Region indentified
> > >> > by its `shmid`. Then, the fd memory is advertised
> > >> > to the driver as a base addres + offset, so it
> > >> > can be read/written (depending on the mmap flags
> > >> > requested) while its valid.
> > >> >
> > >> > The backend can munmap the memory range
> > >> > in a given VIRTIO Shared Memory Region (again,
> > >> > identified by its `shmid`), to free it. Upon
> > >> > receiving this message, the front-end must
> > >> > mmap the regions with PROT_NONE to reserve
> > >> > the virtual memory space.
> > >> >
> > >> > The device model needs to create MemoryRegion
> > >> > instances for the VIRTIO Shared Memory Regions
> > >> > and add them to the `VirtIODevice` instance.
> > >> >
> > >> > Signed-off-by: Albert Esteve <aesteve@redhat.com>
> > >> > ---
> > >> >  docs/interop/vhost-user.rst               |  27 +++++
> > >> >  hw/virtio/vhost-user.c                    | 122 ++++++++++++++++++++++
> > >> >  hw/virtio/virtio.c                        |  12 +++
> > >> >  include/hw/virtio/virtio.h                |   5 +
> > >> >  subprojects/libvhost-user/libvhost-user.c |  65 ++++++++++++
> > >> >  subprojects/libvhost-user/libvhost-user.h |  53 ++++++++++
> > >> >  6 files changed, 284 insertions(+)
> > >> >
> > >> > diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
> > >> > index d8419fd2f1..d52ba719d5 100644
> > >> > --- a/docs/interop/vhost-user.rst
> > >> > +++ b/docs/interop/vhost-user.rst
> > >> > @@ -1859,6 +1859,33 @@ is sent by the front-end.
> > >> >    when the operation is successful, or non-zero otherwise. Note that
> > >> >    if the operation fails, no fd is sent to the backend.
> > >> >
> > >> > +``VHOST_USER_BACKEND_SHMEM_MAP``
> > >> > +  :id: 9
> > >> > +  :equivalent ioctl: N/A
> > >> > +  :request payload: fd and ``struct VhostUserMMap``
> > >> > +  :reply payload: N/A
> > >> > +
> > >> > +  This message can be submitted by the backends to advertise a new mapping
> > >> > +  to be made in a given VIRTIO Shared Memory Region. Upon receiving the message,
> > >> > +  The front-end will mmap the given fd into the VIRTIO Shared Memory Region
> > >> > +  with the requested ``shmid``. A reply is generated indicating whether mapping
> > >> > +  succeeded.
> > >> > +
> > >> > +  Mapping over an already existing map is not allowed and request shall fail.
> > >> > +  Therefore, the memory range in the request must correspond with a valid,
> > >> > +  free region of the VIRTIO Shared Memory Region.
> > >> > +
> > >> > +``VHOST_USER_BACKEND_SHMEM_UNMAP``
> > >> > +  :id: 10
> > >> > +  :equivalent ioctl: N/A
> > >> > +  :request payload: ``struct VhostUserMMap``
> > >> > +  :reply payload: N/A
> > >> > +
> > >> > +  This message can be submitted by the backends so that the front-end un-mmap
> > >> > +  a given range (``offset``, ``len``) in the VIRTIO Shared Memory Region with
> > >>
> > >> s/offset/shm_offset/
> > >>
> > >> > +  the requested ``shmid``.
> > >>
> > >> Please clarify that <offset, len> must correspond to the entirety of a
> > >> valid mapped region.
> > >>
> > >> By the way, the VIRTIO 1.3 gives the following behavior for the virtiofs
> > >> DAX Window:
> > >>
> > >>   When a FUSE_SETUPMAPPING request perfectly overlaps a previous
> > >>   mapping, the previous mapping is replaced. When a mapping partially
> > >>   overlaps a previous mapping, the previous mapping is split into one or
> > >>   two smaller mappings. When a mapping is partially unmapped it is also
> > >>   split into one or two smaller mappings.
> > >>
> > >>   Establishing new mappings or splitting existing mappings consumes
> > >>   resources. If the device runs out of resources the FUSE_SETUPMAPPING
> > >>   request fails until resources are available again following
> > >>   FUSE_REMOVEMAPPING.
> > >>
> > >> I think SETUPMAPPING/REMOVMAPPING can be implemented using
> > >> SHMEM_MAP/UNMAP. SHMEM_MAP/UNMAP do not allow atomically replacing
> > >> partial ranges, but as far as I know that's not necessary for virtiofs
> > >> in practice.
> > >>
> > >> It's worth mentioning that mappings consume resources and that SHMEM_MAP
> > >> can fail when there are no resources available. The process-wide limit
> > >> is vm.max_map_count on Linux although a vhost-user frontend may reduce
> > >> it further to control vhost-user resource usage.
> > >>
> > >> > +  A reply is generated indicating whether unmapping succeeded.
> > >> > +
> > >> >  .. _reply_ack:
> > >> >
> > >> >  VHOST_USER_PROTOCOL_F_REPLY_ACK
> > >> > diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > >> > index cdf9af4a4b..7ee8a472c6 100644
> > >> > --- a/hw/virtio/vhost-user.c
> > >> > +++ b/hw/virtio/vhost-user.c
> > >> > @@ -115,6 +115,8 @@ typedef enum VhostUserBackendRequest {
> > >> >      VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> > >> >      VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> > >> >      VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> > >> > +    VHOST_USER_BACKEND_SHMEM_MAP = 9,
> > >> > +    VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> > >> >      VHOST_USER_BACKEND_MAX
> > >> >  } VhostUserBackendRequest;
> > >> >
> > >> > @@ -192,6 +194,24 @@ typedef struct VhostUserShared {
> > >> >      unsigned char uuid[16];
> > >> >  } VhostUserShared;
> > >> >
> > >> > +/* For the flags field of VhostUserMMap */
> > >> > +#define VHOST_USER_FLAG_MAP_R (1u << 0)
> > >> > +#define VHOST_USER_FLAG_MAP_W (1u << 1)
> > >> > +
> > >> > +typedef struct {
> > >> > +    /* VIRTIO Shared Memory Region ID */
> > >> > +    uint8_t shmid;
> > >> > +    uint8_t padding[7];
> > >> > +    /* File offset */
> > >> > +    uint64_t fd_offset;
> > >> > +    /* Offset within the VIRTIO Shared Memory Region */
> > >> > +    uint64_t shm_offset;
> > >> > +    /* Size of the mapping */
> > >> > +    uint64_t len;
> > >> > +    /* Flags for the mmap operation, from VHOST_USER_FLAG_* */
> > >> > +    uint64_t flags;
> > >> > +} VhostUserMMap;
> > >> > +
> > >> >  typedef struct {
> > >> >      VhostUserRequest request;
> > >> >
> > >> > @@ -224,6 +244,7 @@ typedef union {
> > >> >          VhostUserInflight inflight;
> > >> >          VhostUserShared object;
> > >> >          VhostUserTransferDeviceState transfer_state;
> > >> > +        VhostUserMMap mmap;
> > >> >  } VhostUserPayload;
> > >> >
> > >> >  typedef struct VhostUserMsg {
> > >> > @@ -1748,6 +1769,100 @@ vhost_user_backend_handle_shared_object_lookup(struct vhost_user *u,
> > >> >      return 0;
> > >> >  }
> > >> >
> > >> > +static int
> > >> > +vhost_user_backend_handle_shmem_map(struct vhost_dev *dev,
> > >> > +                                    VhostUserMMap *vu_mmap,
> > >> > +                                    int fd)
> > >> > +{
> > >> > +    void *addr = 0;
> > >> > +    MemoryRegion *mr = NULL;
> > >> > +
> > >> > +    if (fd < 0) {
> > >> > +        error_report("Bad fd for map");
> > >> > +        return -EBADF;
> > >> > +    }
> > >> > +
> > >> > +    if (!dev->vdev->shmem_list ||
> > >> > +        dev->vdev->n_shmem_regions <= vu_mmap->shmid) {
> > >> > +        error_report("Device only has %d VIRTIO Shared Memory Regions. "
> > >> > +                     "Requested ID: %d",
> > >> > +                     dev->vdev->n_shmem_regions, vu_mmap->shmid);
> > >> > +        return -EFAULT;
> > >> > +    }
> > >> > +
> > >> > +    mr = &dev->vdev->shmem_list[vu_mmap->shmid];
> > >> > +
> > >> > +    if (!mr) {
> > >> > +        error_report("VIRTIO Shared Memory Region at "
> > >> > +                     "ID %d unitialized", vu_mmap->shmid);
> > >> > +        return -EFAULT;
> > >> > +    }
> > >> > +
> > >> > +    if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> > >> > +        (vu_mmap->shm_offset + vu_mmap->len) > mr->size) {
> > >> > +        error_report("Bad offset/len for mmap %" PRIx64 "+%" PRIx64,
> > >> > +                     vu_mmap->shm_offset, vu_mmap->len);
> > >> > +        return -EFAULT;
> > >> > +    }
> > >> > +
> > >> > +    void *shmem_ptr = memory_region_get_ram_ptr(mr);
> > >> > +
> > >> > +    addr = mmap(shmem_ptr + vu_mmap->shm_offset, vu_mmap->len,
> > >>
> > >> Missing check for overlap between range [shm_offset, shm_offset + len)
> > >> and existing mappings.
> > >>
> > >
> > > Not sure how to do this check. Specifically, I am not sure how previous
> > > ranges are stored within the MemoryRegion. Is looping through
> > > mr->subregions
> > > a valid option?
> > >
> >
> > Maybe something like this would do?
> > ```
> >      if (memory_region_find(mr, vu_mmap->shm_offset, vu_mmap->len).mr) {
> >          error_report("Requested memory (%" PRIx64 "+%" PRIx64 " overalps "
> >                       "with previously mapped memory",
> >                       vu_mmap->shm_offset, vu_mmap->len);
> >          return -EFAULT;
> >      }
> > ```
>
> I don't think that works because the QEMU MemoryRegion covers the entire
> range, some of which contains mappings and some of which is empty. It
> would be necessary to track mappings that have been made.
>
> I'm not aware of a security implication if the overlap check is missing,
> so I guess it may be okay to skip it and rely on the vhost-user back-end
> author to honor the spec. I'm not totally against that because it's
> faster and less code, but it feels a bit iffy to not enforce the input
> validation that the spec requires.
>
> Maintain a list of mappings so this check can be performed?
>

OK, I prefer to aim for the better solution and see where that takes us.
So I will add a mapped_regions list (or something like that) to the
MemoryRegion struct in a new commit, so that it can be reviewed
independently. With the infrastructure code in the patch we can decide
whether it is worth having.

Thank you!
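To make that concrete, here is a rough, untested sketch of the kind of
bookkeeping I have in mind. It is standalone C, not QEMU code; the names
(ShmemMapping, ShmemRegionState) are placeholders and the real list may end
up hanging off VirtIODevice rather than MemoryRegion. It only illustrates
tracking each mapped [shm_offset, shm_offset + len) range so that SHMEM_MAP
can reject overlaps and SHMEM_UNMAP can require an exact match:

```
/*
 * Standalone sketch (not QEMU code): track mapped ranges of a VIRTIO
 * Shared Memory Region so MAP requests can be rejected when they overlap
 * an existing mapping and UNMAP requests must match an existing mapping
 * exactly. All names are placeholders.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct ShmemMapping {
    uint64_t shm_offset;
    uint64_t len;
    struct ShmemMapping *next;
} ShmemMapping;

/* Would live per VIRTIO Shared Memory Region in the real patch. */
typedef struct {
    ShmemMapping *mappings;
} ShmemRegionState;

static bool ranges_overlap(uint64_t a_off, uint64_t a_len,
                           uint64_t b_off, uint64_t b_len)
{
    return a_off < b_off + b_len && b_off < a_off + a_len;
}

/* Returns false if [shm_offset, shm_offset + len) overlaps an existing map. */
static bool shmem_region_add_mapping(ShmemRegionState *s,
                                     uint64_t shm_offset, uint64_t len)
{
    ShmemMapping *m;

    for (m = s->mappings; m != NULL; m = m->next) {
        if (ranges_overlap(shm_offset, len, m->shm_offset, m->len)) {
            return false;
        }
    }

    m = calloc(1, sizeof(*m));
    if (!m) {
        return false;
    }
    m->shm_offset = shm_offset;
    m->len = len;
    m->next = s->mappings;
    s->mappings = m;
    return true;
}

/* Returns false unless the range matches an existing mapping exactly. */
static bool shmem_region_del_mapping(ShmemRegionState *s,
                                     uint64_t shm_offset, uint64_t len)
{
    ShmemMapping **pm;

    for (pm = &s->mappings; *pm != NULL; pm = &(*pm)->next) {
        if ((*pm)->shm_offset == shm_offset && (*pm)->len == len) {
            ShmemMapping *victim = *pm;
            *pm = victim->next;
            free(victim);
            return true;
        }
    }
    return false;
}

int main(void)
{
    ShmemRegionState s = { NULL };

    printf("map 0x0+0x1000:   %d\n", shmem_region_add_mapping(&s, 0x0, 0x1000));
    printf("map 0x800+0x1000: %d\n", shmem_region_add_mapping(&s, 0x800, 0x1000)); /* overlap */
    printf("unmap 0x0+0x800:  %d\n", shmem_region_del_mapping(&s, 0x0, 0x800));    /* not exact */
    printf("unmap 0x0+0x1000: %d\n", shmem_region_del_mapping(&s, 0x0, 0x1000));
    return 0;
}
```

In the real handlers the add would only happen after mmap() succeeds, and
the unmap path would only remap with PROT_NONE when the exact-range lookup
succeeds, but that part still needs to be worked out in the new commit.
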
> >
> > >>
> > >> > +        ((vu_mmap->flags & VHOST_USER_FLAG_MAP_R) ? PROT_READ : 0) |
> > >> > +        ((vu_mmap->flags & VHOST_USER_FLAG_MAP_W) ? PROT_WRITE : 0),
> > >> > +        MAP_SHARED | MAP_FIXED, fd, vu_mmap->fd_offset);
> > >> > +
> > >> > +    if (addr == MAP_FAILED) {
> > >> > +        error_report("Failed to mmap mem fd");
> > >> > +        return -EFAULT;
> > >> > +    }
> > >> > +
> > >> > +    return 0;
> > >> > +}
> > >> > +
> > >> > +static int
> > >> > +vhost_user_backend_handle_shmem_unmap(struct vhost_dev *dev,
> > >> > +                                      VhostUserMMap *vu_mmap)
> > >> > +{
> > >> > +    void *addr = 0;
> > >> > +    MemoryRegion *mr = NULL;
> > >> > +
> > >> > +    if (!dev->vdev->shmem_list ||
> > >> > +        dev->vdev->n_shmem_regions <= vu_mmap->shmid) {
> > >> > +        error_report("Device only has %d VIRTIO Shared Memory Regions. "
> > >> > +                     "Requested ID: %d",
> > >> > +                     dev->vdev->n_shmem_regions, vu_mmap->shmid);
> > >> > +        return -EFAULT;
> > >> > +    }
> > >> > +
> > >> > +    mr = &dev->vdev->shmem_list[vu_mmap->shmid];
> > >> > +
> > >> > +    if (!mr) {
> > >> > +        error_report("VIRTIO Shared Memory Region at "
> > >> > +                     "ID %d unitialized", vu_mmap->shmid);
> > >> > +        return -EFAULT;
> > >> > +    }
> > >> > +
> > >> > +    if ((vu_mmap->shm_offset + vu_mmap->len) < vu_mmap->len ||
> > >> > +        (vu_mmap->shm_offset + vu_mmap->len) > mr->size) {
> > >> > +        error_report("Bad offset/len for mmap %" PRIx64 "+%" PRIx64,
> > >> > +                     vu_mmap->shm_offset, vu_mmap->len);
> > >> > +        return -EFAULT;
> > >> > +    }
> > >> > +
> > >> > +    void *shmem_ptr = memory_region_get_ram_ptr(mr);
> > >> > +
> > >> > +    addr = mmap(shmem_ptr + vu_mmap->shm_offset, vu_mmap->len,
> > >>
> > >> Missing check for existing mapping with exact range [shm_offset, len)
> > >> match.
> > >>
> > >> > +        PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
> > >> > +
> > >> > +    if (addr == MAP_FAILED) {
> > >> > +        error_report("Failed to unmap memory");
> > >> > +        return -EFAULT;
> > >> > +    }
> > >> > +
> > >> > +    return 0;
> > >> > +}
> > >> > +
> > >> >  static void close_backend_channel(struct vhost_user *u)
> > >> >  {
> > >> >      g_source_destroy(u->backend_src);
> > >> > @@ -1816,6 +1931,13 @@ static gboolean backend_read(QIOChannel *ioc, GIOCondition condition,
> > >> >          ret = vhost_user_backend_handle_shared_object_lookup(dev->opaque, ioc,
> > >> >                                                               &hdr, &payload);
> > >> >          break;
> > >> > +    case VHOST_USER_BACKEND_SHMEM_MAP:
> > >> > +        ret = vhost_user_backend_handle_shmem_map(dev, &payload.mmap,
> > >> > +                                                  fd ? fd[0] : -1);
> > >> > +        break;
> > >> > +    case VHOST_USER_BACKEND_SHMEM_UNMAP:
> > >> > +        ret = vhost_user_backend_handle_shmem_unmap(dev, &payload.mmap);
> > >> > +        break;
> > >> >      default:
> > >> >          error_report("Received unexpected msg type: %d.", hdr.request);
> > >> >          ret = -EINVAL;
> > >> > diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> > >> > index 893a072c9d..9f2da5b11e 100644
> > >> > --- a/hw/virtio/virtio.c
> > >> > +++ b/hw/virtio/virtio.c
> > >> > @@ -2856,6 +2856,16 @@ int virtio_save(VirtIODevice *vdev, QEMUFile *f)
> > >> >      return vmstate_save_state(f, &vmstate_virtio, vdev, NULL);
> > >> >  }
> > >> >
> > >> > +MemoryRegion *virtio_new_shmem_region(VirtIODevice *vdev)
> > >> > +{
> > >> > +    MemoryRegion *mr = g_new0(MemoryRegion, 1);
> > >> > +    ++vdev->n_shmem_regions;
> > >> > +    vdev->shmem_list = g_renew(MemoryRegion, vdev->shmem_list,
> > >> > +                               vdev->n_shmem_regions);
> > >>
> > >> Where is shmem_list freed?
> > >>
> > >> The name "list" is misleading since this is an array, not a list.
> > >>
> > >> > +    vdev->shmem_list[vdev->n_shmem_regions - 1] = *mr;
> > >> > +    return mr;
> > >> > +}
> > >>
> > >> This looks weird. The contents of mr are copied into shmem_list[] and
> > >> then the pointer to mr is returned? Did you mean for the field's type to
> > >> be MemoryRegion **shmem_list and then vdev->shmem_list[...] = mr would
> > >> stash the pointer?
> > >>
> > >> > +
> > >> >  /* A wrapper for use as a VMState .put function */
> > >> >  static int virtio_device_put(QEMUFile *f, void *opaque, size_t size,
> > >> >                               const VMStateField *field, JSONWriter *vmdesc)
> > >> > @@ -3264,6 +3274,8 @@ void virtio_init(VirtIODevice *vdev, uint16_t device_id, size_t config_size)
> > >> >                            virtio_vmstate_change, vdev);
> > >> >      vdev->device_endian = virtio_default_endian();
> > >> >      vdev->use_guest_notifier_mask = true;
> > >> > +    vdev->shmem_list = NULL;
> > >> > +    vdev->n_shmem_regions = 0;
> > >> >  }
> > >> >
> > >> >  /*
> > >> > diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> > >> > index 7d5ffdc145..16d598aadc 100644
> > >> > --- a/include/hw/virtio/virtio.h
> > >> > +++ b/include/hw/virtio/virtio.h
> > >> > @@ -165,6 +165,9 @@ struct VirtIODevice
> > >> >       */
> > >> >      EventNotifier config_notifier;
> > >> >      bool device_iotlb_enabled;
> > >> > +    /* Shared memory region for vhost-user mappings. */
> > >> > +    MemoryRegion *shmem_list;
> > >> > +    int n_shmem_regions;
> > >> >  };
> > >> >
> > >> >  struct VirtioDeviceClass {
> > >> > @@ -280,6 +283,8 @@ void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
> > >> >
> > >> >  int virtio_save(VirtIODevice *vdev, QEMUFile *f);
> > >> >
> > >> > +MemoryRegion *virtio_new_shmem_region(VirtIODevice *vdev);
> > >> > +
> > >> >  extern const VMStateInfo virtio_vmstate_info;
> > >> >
> > >> >  #define VMSTATE_VIRTIO_DEVICE \
> > >> > diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> > >> > index a879149fef..28556d183a 100644
> > >> > --- a/subprojects/libvhost-user/libvhost-user.c
> > >> > +++ b/subprojects/libvhost-user/libvhost-user.c
> > >> > @@ -1586,6 +1586,71 @@ vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN])
> > >> >      return vu_send_message(dev, &msg);
> > >> >  }
> > >> >
> > >> > +bool
> > >> > +vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> > >> > +             uint64_t shm_offset, uint64_t len, uint64_t flags)
> > >> > +{
> > >> > +    bool result = false;
> > >> > +    VhostUserMsg msg_reply;
> > >> > +    VhostUserMsg vmsg = {
> > >> > +        .request = VHOST_USER_BACKEND_SHMEM_MAP,
> > >> > +        .size = sizeof(vmsg.payload.mmap),
> > >> > +        .flags = VHOST_USER_VERSION,
> > >> > +        .payload.mmap = {
> > >> > +            .shmid = shmid,
> > >> > +            .fd_offset = fd_offset,
> > >> > +            .shm_offset = shm_offset,
> > >> > +            .len = len,
> > >> > +            .flags = flags,
> > >> > +        },
> > >> > +    };
> > >> > +
> > >> > +    if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> > >> > +        vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> > >> > +    }
> > >> > +
> > >> > +    pthread_mutex_lock(&dev->backend_mutex);
> > >> > +    if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> > >> > +        pthread_mutex_unlock(&dev->backend_mutex);
> > >> > +        return false;
> > >> > +    }
> > >> > +
> > >> > +    /* Also unlocks the backend_mutex */
> > >> > +    return vu_process_message_reply(dev, &vmsg);
> > >> > +}
> > >> > +
> > >> > +bool
> > >> > +vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> > >> > +               uint64_t shm_offset, uint64_t len)
> > >> > +{
> > >> > +    bool result = false;
> > >> > +    VhostUserMsg msg_reply;
> > >> > +    VhostUserMsg vmsg = {
> > >> > +        .request = VHOST_USER_BACKEND_SHMEM_UNMAP,
> > >> > +        .size = sizeof(vmsg.payload.mmap),
> > >> > +        .flags = VHOST_USER_VERSION,
> > >> > +        .payload.mmap = {
> > >> > +            .shmid = shmid,
> > >> > +            .fd_offset = fd_offset,
> > >>
> > >> What is the meaning of this field? I expected it to be set to 0.
> > >>
> > >> > +            .shm_offset = shm_offset,
> > >> > +            .len = len,
> > >> > +        },
> > >> > +    };
> > >> > +
> > >> > +    if (vu_has_protocol_feature(dev, VHOST_USER_PROTOCOL_F_REPLY_ACK)) {
> > >> > +        vmsg.flags |= VHOST_USER_NEED_REPLY_MASK;
> > >> > +    }
> > >> > +
> > >> > +    pthread_mutex_lock(&dev->backend_mutex);
> > >> > +    if (!vu_message_write(dev, dev->backend_fd, &vmsg)) {
> > >> > +        pthread_mutex_unlock(&dev->backend_mutex);
> > >> > +        return false;
> > >> > +    }
> > >> > +
> > >> > +    /* Also unlocks the backend_mutex */
> > >> > +    return vu_process_message_reply(dev, &vmsg);
> > >> > +}
> > >> > +
> > >> >  static bool
> > >> >  vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
> > >> >  {
> > >> > diff --git a/subprojects/libvhost-user/libvhost-user.h b/subprojects/libvhost-user/libvhost-user.h
> > >> > index deb40e77b3..7f6c22cc1a 100644
> > >> > --- a/subprojects/libvhost-user/libvhost-user.h
> > >> > +++ b/subprojects/libvhost-user/libvhost-user.h
> > >> > @@ -127,6 +127,8 @@ typedef enum VhostUserBackendRequest {
> > >> >      VHOST_USER_BACKEND_SHARED_OBJECT_ADD = 6,
> > >> >      VHOST_USER_BACKEND_SHARED_OBJECT_REMOVE = 7,
> > >> >      VHOST_USER_BACKEND_SHARED_OBJECT_LOOKUP = 8,
> > >> > +    VHOST_USER_BACKEND_SHMEM_MAP = 9,
> > >> > +    VHOST_USER_BACKEND_SHMEM_UNMAP = 10,
> > >> >      VHOST_USER_BACKEND_MAX
> > >> >  } VhostUserBackendRequest;
> > >> >
> > >> > @@ -186,6 +188,24 @@ typedef struct VhostUserShared {
> > >> >      unsigned char uuid[UUID_LEN];
> > >> >  } VhostUserShared;
> > >> >
> > >> > +/* For the flags field of VhostUserMMap */
> > >> > +#define VHOST_USER_FLAG_MAP_R (1u << 0)
> > >> > +#define VHOST_USER_FLAG_MAP_W (1u << 1)
> > >> > +
> > >> > +typedef struct {
> > >> > +    /* VIRTIO Shared Memory Region ID */
> > >> > +    uint8_t shmid;
> > >> > +    uint8_t padding[7];
> > >> > +    /* File offset */
> > >> > +    uint64_t fd_offset;
> > >> > +    /* Offset within the VIRTIO Shared Memory Region */
> > >> > +    uint64_t shm_offset;
> > >> > +    /* Size of the mapping */
> > >> > +    uint64_t len;
> > >> > +    /* Flags for the mmap operation, from VHOST_USER_FLAG_* */
> > >> > +    uint64_t flags;
> > >> > +} VhostUserMMap;
> > >> > +
> > >> >  #if defined(_WIN32) && (defined(__x86_64__) || defined(__i386__))
> > >> >  # define VU_PACKED __attribute__((gcc_struct, packed))
> > >> >  #else
> > >> > @@ -214,6 +234,7 @@ typedef struct VhostUserMsg {
> > >> >          VhostUserVringArea area;
> > >> >          VhostUserInflight inflight;
> > >> >          VhostUserShared object;
> > >> > +        VhostUserMMap mmap;
> > >> >      } payload;
> > >> >
> > >> >      int fds[VHOST_MEMORY_BASELINE_NREGIONS];
> > >> > @@ -597,6 +618,38 @@ bool vu_add_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
> > >> >   */
> > >> >  bool vu_rm_shared_object(VuDev *dev, unsigned char uuid[UUID_LEN]);
> > >> >
> > >> > +/**
> > >> > + * vu_shmem_map:
> > >> > + * @dev: a VuDev context
> > >> > + * @shmid: VIRTIO Shared Memory Region ID
> > >> > + * @fd_offset: File offset
> > >> > + * @shm_offset: Offset within the VIRTIO Shared Memory Region
> > >> > + * @len: Size of the mapping
> > >> > + * @flags: Flags for the mmap operation
> > >> > + *
> > >> > + * Advertises a new mapping to be made in a given VIRTIO Shared Memory Region.
> > >> > + *
> > >> > + * Returns: TRUE on success, FALSE on failure.
> > >> > + */
> > >> > +bool vu_shmem_map(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> > >> > +                  uint64_t shm_offset, uint64_t len, uint64_t flags);
> > >> > +
> > >> > +/**
> > >> > + * vu_shmem_map:
> > >> > + * @dev: a VuDev context
> > >> > + * @shmid: VIRTIO Shared Memory Region ID
> > >> > + * @fd_offset: File offset
> > >> > + * @shm_offset: Offset within the VIRTIO Shared Memory Region
> > >> > + * @len: Size of the mapping
> > >> > + *
> > >> > + * The front-end un-mmaps a given range in the VIRTIO Shared Memory Region
> > >> > + * with the requested `shmid`.
> > >> > + *
> > >> > + * Returns: TRUE on success, FALSE on failure.
> > >> > + */
> > >> > +bool vu_shmem_unmap(VuDev *dev, uint8_t shmid, uint64_t fd_offset,
> > >> > +                    uint64_t shm_offset, uint64_t len);
> > >> > +
> > >> >  /**
> > >> >   * vu_queue_set_notification:
> > >> >   * @dev: a VuDev context
> > >> > --
> > >> > 2.45.2
> > >> >
> > >>
> > >



<= div dir=3D"ltr" class=3D"gmail_attr">On Thu, Sep 5, 2024 at 6:45=E2=80=AFPM= Stefan Hajnoczi <stefanha@redhat= .com> wrote: