qemu-devel.nongnu.org archive mirror
* [PATCH v3 0/3] vhost: memslot handling improvements
@ 2023-05-03 17:21 David Hildenbrand
  2023-05-03 17:21 ` [PATCH v3 1/3] vhost: Rework memslot filtering and fix "used_memslot" tracking David Hildenbrand
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: David Hildenbrand @ 2023-05-03 17:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: David Hildenbrand, Michael S. Tsirkin, Stefan Hajnoczi,
	Igor Mammedov, Paolo Bonzini, Peter Xu,
	Philippe Mathieu-Daudé

Following up on my previous work to make virtio-mem consume multiple
memslots dynamically [1] that requires precise accounting between used vs.
reserved memslots, I realized that vhost makes this extra hard by
filtering out some memory region sections (so they don't consume a
memslot) in the vhost-user case, which messes up the whole memslot
accounting.

This series fixes what I found to be broken and prepares for more work on
[1]. Further, it cleans up the merge checks that I consider unnecessary.

[1] https://lkml.kernel.org/r/20211027124531.57561-8-david@redhat.com

Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: "Philippe Mathieu-Daudé" <philmd@linaro.org>

v2 -> v3:
- Add ACKs
- "softmmu/physmem: Fixup qemu_ram_block_from_host() documentation"
-- Fix typo in description

v1 -> v2:
- "vhost: Rework memslot filtering and fix "used_memslot" tracking"
-- New approach: keep filtering, but make filtering less generic and
   track separately. This should keep any existing setups working.
- "softmmu/physmem: Fixup qemu_ram_block_from_host() documentation"
-- As requested by Igor

David Hildenbrand (3):
  vhost: Rework memslot filtering and fix "used_memslot" tracking
  vhost: Remove vhost_backend_can_merge() callback
  softmmu/physmem: Fixup qemu_ram_block_from_host() documentation

 hw/virtio/vhost-user.c            | 21 ++---------
 hw/virtio/vhost-vdpa.c            |  1 -
 hw/virtio/vhost.c                 | 62 ++++++++++++++++++++++++-------
 include/exec/cpu-common.h         | 15 ++++++++
 include/hw/virtio/vhost-backend.h |  9 +----
 softmmu/physmem.c                 | 17 ---------
 6 files changed, 68 insertions(+), 57 deletions(-)

-- 
2.40.0



^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH v3 1/3] vhost: Rework memslot filtering and fix "used_memslot" tracking
  2023-05-03 17:21 [PATCH v3 0/3] vhost: memslot handling improvements David Hildenbrand
@ 2023-05-03 17:21 ` David Hildenbrand
  2023-05-23 15:34   ` Peter Xu
  2023-05-03 17:21 ` [PATCH v3 2/3] vhost: Remove vhost_backend_can_merge() callback David Hildenbrand
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 9+ messages in thread
From: David Hildenbrand @ 2023-05-03 17:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: David Hildenbrand, Michael S. Tsirkin, Stefan Hajnoczi,
	Igor Mammedov, Paolo Bonzini, Peter Xu,
	Philippe Mathieu-Daudé, Tiwei Bie

Having multiple vhost devices, some filtering out fd-less memslots and
some not, can mess up the "used_memslot" accounting. Consequently our
"free memslot" checks become unreliable and we might run out of free
memslots at runtime later.

An example sequence which can trigger a potential issue that involves
different vhost backends (vhost-kernel and vhost-user) and hotplugged
memory devices can be found at [1].

Let's make the filtering mechanism less generic and distinguish between
backends that support private memslots (without a fd) and ones that only
support shared memslots (with a fd). Track the used_memslots for both
cases separately and use the corresponding value when required.

Note: Most probably we should filter out MAP_PRIVATE fd-based RAM regions
(for example, via memory-backend-memfd,...,shared=off or as default with
 memory-backend-file) as well. When not using MAP_SHARED, it might not work
as expected. Add a TODO for now.

[1] https://lkml.kernel.org/r/fad9136f-08d3-3fd9-71a1-502069c000cf@redhat.com

Fixes: 988a27754bbb ("vhost: allow backends to filter memory sections")
Cc: Tiwei Bie <tiwei.bie@intel.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 hw/virtio/vhost-user.c            |  7 ++--
 hw/virtio/vhost.c                 | 56 ++++++++++++++++++++++++++-----
 include/hw/virtio/vhost-backend.h |  5 ++-
 3 files changed, 52 insertions(+), 16 deletions(-)

diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index e5285df4ba..0c3e2702b1 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -2453,10 +2453,9 @@ vhost_user_crypto_close_session(struct vhost_dev *dev, uint64_t session_id)
     return 0;
 }
 
-static bool vhost_user_mem_section_filter(struct vhost_dev *dev,
-                                          MemoryRegionSection *section)
+static bool vhost_user_no_private_memslots(struct vhost_dev *dev)
 {
-    return memory_region_get_fd(section->mr) >= 0;
+    return true;
 }
 
 static int vhost_user_get_inflight_fd(struct vhost_dev *dev,
@@ -2686,6 +2685,7 @@ const VhostOps user_ops = {
         .vhost_backend_init = vhost_user_backend_init,
         .vhost_backend_cleanup = vhost_user_backend_cleanup,
         .vhost_backend_memslots_limit = vhost_user_memslots_limit,
+        .vhost_backend_no_private_memslots = vhost_user_no_private_memslots,
         .vhost_set_log_base = vhost_user_set_log_base,
         .vhost_set_mem_table = vhost_user_set_mem_table,
         .vhost_set_vring_addr = vhost_user_set_vring_addr,
@@ -2712,7 +2712,6 @@ const VhostOps user_ops = {
         .vhost_set_config = vhost_user_set_config,
         .vhost_crypto_create_session = vhost_user_crypto_create_session,
         .vhost_crypto_close_session = vhost_user_crypto_close_session,
-        .vhost_backend_mem_section_filter = vhost_user_mem_section_filter,
         .vhost_get_inflight_fd = vhost_user_get_inflight_fd,
         .vhost_set_inflight_fd = vhost_user_set_inflight_fd,
         .vhost_dev_start = vhost_user_dev_start,
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 746d130c74..4fe08c809f 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -46,20 +46,33 @@
 static struct vhost_log *vhost_log;
 static struct vhost_log *vhost_log_shm;
 
+/* Memslots used by backends that support private memslots (without an fd). */
 static unsigned int used_memslots;
+
+/* Memslots used by backends that only support shared memslots (with an fd). */
+static unsigned int used_shared_memslots;
+
 static QLIST_HEAD(, vhost_dev) vhost_devices =
     QLIST_HEAD_INITIALIZER(vhost_devices);
 
 bool vhost_has_free_slot(void)
 {
-    unsigned int slots_limit = ~0U;
+    unsigned int free = UINT_MAX;
     struct vhost_dev *hdev;
 
     QLIST_FOREACH(hdev, &vhost_devices, entry) {
         unsigned int r = hdev->vhost_ops->vhost_backend_memslots_limit(hdev);
-        slots_limit = MIN(slots_limit, r);
+        unsigned int cur_free;
+
+        if (hdev->vhost_ops->vhost_backend_no_private_memslots &&
+            hdev->vhost_ops->vhost_backend_no_private_memslots(hdev)) {
+            cur_free = r - used_shared_memslots;
+        } else {
+            cur_free = r - used_memslots;
+        }
+        free = MIN(free, cur_free);
     }
-    return slots_limit > used_memslots;
+    return free > 1;
 }
 
 static void vhost_dev_sync_region(struct vhost_dev *dev,
@@ -475,8 +488,7 @@ static int vhost_verify_ring_mappings(struct vhost_dev *dev,
  * vhost_section: identify sections needed for vhost access
  *
  * We only care about RAM sections here (where virtqueue and guest
- * internals accessed by virtio might live). If we find one we still
- * allow the backend to potentially filter it out of our list.
+ * internals accessed by virtio might live).
  */
 static bool vhost_section(struct vhost_dev *dev, MemoryRegionSection *section)
 {
@@ -503,8 +515,16 @@ static bool vhost_section(struct vhost_dev *dev, MemoryRegionSection *section)
             return false;
         }
 
-        if (dev->vhost_ops->vhost_backend_mem_section_filter &&
-            !dev->vhost_ops->vhost_backend_mem_section_filter(dev, section)) {
+        /*
+         * Some backends (like vhost-user) can only handle memory regions
+         * that have an fd (can be mapped into a different process). Filter
+         * the ones without an fd out, if requested.
+         *
+         * TODO: we might have to limit to MAP_SHARED as well.
+         */
+        if (memory_region_get_fd(section->mr) < 0 &&
+            dev->vhost_ops->vhost_backend_no_private_memslots &&
+            dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
             trace_vhost_reject_section(mr->name, 2);
             return false;
         }
@@ -569,7 +589,14 @@ static void vhost_commit(MemoryListener *listener)
                        dev->n_mem_sections * sizeof dev->mem->regions[0];
     dev->mem = g_realloc(dev->mem, regions_size);
     dev->mem->nregions = dev->n_mem_sections;
-    used_memslots = dev->mem->nregions;
+
+    if (dev->vhost_ops->vhost_backend_no_private_memslots &&
+        dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
+        used_shared_memslots = dev->mem->nregions;
+    } else {
+        used_memslots = dev->mem->nregions;
+    }
+
     for (i = 0; i < dev->n_mem_sections; i++) {
         struct vhost_memory_region *cur_vmr = dev->mem->regions + i;
         struct MemoryRegionSection *mrs = dev->mem_sections + i;
@@ -1387,6 +1414,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
                    VhostBackendType backend_type, uint32_t busyloop_timeout,
                    Error **errp)
 {
+    unsigned int used;
     uint64_t features;
     int i, r, n_initialized_vqs = 0;
 
@@ -1482,7 +1510,17 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
     memory_listener_register(&hdev->memory_listener, &address_space_memory);
     QLIST_INSERT_HEAD(&vhost_devices, hdev, entry);
 
-    if (used_memslots > hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
+    /*
+     * The listener we registered properly updated the corresponding counter.
+     * So we can trust that these values are accurate.
+     */
+    if (hdev->vhost_ops->vhost_backend_no_private_memslots &&
+        hdev->vhost_ops->vhost_backend_no_private_memslots(hdev)) {
+        used = used_shared_memslots;
+    } else {
+        used = used_memslots;
+    }
+    if (used > hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
         error_setg(errp, "vhost backend memory slots limit is less"
                    " than current number of present memory slots");
         r = -EINVAL;
diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
index ec3fbae58d..2349a4a7d2 100644
--- a/include/hw/virtio/vhost-backend.h
+++ b/include/hw/virtio/vhost-backend.h
@@ -108,8 +108,7 @@ typedef int (*vhost_crypto_create_session_op)(struct vhost_dev *dev,
 typedef int (*vhost_crypto_close_session_op)(struct vhost_dev *dev,
                                              uint64_t session_id);
 
-typedef bool (*vhost_backend_mem_section_filter_op)(struct vhost_dev *dev,
-                                                MemoryRegionSection *section);
+typedef bool (*vhost_backend_no_private_memslots_op)(struct vhost_dev *dev);
 
 typedef int (*vhost_get_inflight_fd_op)(struct vhost_dev *dev,
                                         uint16_t queue_size,
@@ -138,6 +137,7 @@ typedef struct VhostOps {
     vhost_backend_init vhost_backend_init;
     vhost_backend_cleanup vhost_backend_cleanup;
     vhost_backend_memslots_limit vhost_backend_memslots_limit;
+    vhost_backend_no_private_memslots_op vhost_backend_no_private_memslots;
     vhost_net_set_backend_op vhost_net_set_backend;
     vhost_net_set_mtu_op vhost_net_set_mtu;
     vhost_scsi_set_endpoint_op vhost_scsi_set_endpoint;
@@ -172,7 +172,6 @@ typedef struct VhostOps {
     vhost_set_config_op vhost_set_config;
     vhost_crypto_create_session_op vhost_crypto_create_session;
     vhost_crypto_close_session_op vhost_crypto_close_session;
-    vhost_backend_mem_section_filter_op vhost_backend_mem_section_filter;
     vhost_get_inflight_fd_op vhost_get_inflight_fd;
     vhost_set_inflight_fd_op vhost_set_inflight_fd;
     vhost_dev_start_op vhost_dev_start;
-- 
2.40.0




* [PATCH v3 2/3] vhost: Remove vhost_backend_can_merge() callback
  2023-05-03 17:21 [PATCH v3 0/3] vhost: memslot handling improvements David Hildenbrand
  2023-05-03 17:21 ` [PATCH v3 1/3] vhost: Rework memslot filtering and fix "used_memslot" tracking David Hildenbrand
@ 2023-05-03 17:21 ` David Hildenbrand
  2023-05-23 15:40   ` Peter Xu
  2023-05-03 17:21 ` [PATCH v3 3/3] softmmu/physmem: Fixup qemu_ram_block_from_host() documentation David Hildenbrand
  2023-05-23 14:25 ` [PATCH v3 0/3] vhost: memslot handling improvements David Hildenbrand
  3 siblings, 1 reply; 9+ messages in thread
From: David Hildenbrand @ 2023-05-03 17:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: David Hildenbrand, Michael S. Tsirkin, Stefan Hajnoczi,
	Igor Mammedov, Paolo Bonzini, Peter Xu,
	Philippe Mathieu-Daudé

Checking whether the memory regions are equal is sufficient: if they are
equal, then most certainly the contained fd is equal.

The whole vhost-user memslot handling is suboptimal and overly
complicated. We shouldn't have to look up a RAM memory region we were
notified about in vhost_user_get_mr_data() using a host pointer. But that
requires a bigger rework -- especially an alternative vhost_set_mem_table()
backend call that simply consumes MemoryRegionSections.

For now, let's just drop vhost_backend_can_merge().

Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 hw/virtio/vhost-user.c            | 14 --------------
 hw/virtio/vhost-vdpa.c            |  1 -
 hw/virtio/vhost.c                 |  6 +-----
 include/hw/virtio/vhost-backend.h |  4 ----
 4 files changed, 1 insertion(+), 24 deletions(-)

diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 0c3e2702b1..831375a967 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -2195,19 +2195,6 @@ static int vhost_user_migration_done(struct vhost_dev *dev, char* mac_addr)
     return -ENOTSUP;
 }
 
-static bool vhost_user_can_merge(struct vhost_dev *dev,
-                                 uint64_t start1, uint64_t size1,
-                                 uint64_t start2, uint64_t size2)
-{
-    ram_addr_t offset;
-    int mfd, rfd;
-
-    (void)vhost_user_get_mr_data(start1, &offset, &mfd);
-    (void)vhost_user_get_mr_data(start2, &offset, &rfd);
-
-    return mfd == rfd;
-}
-
 static int vhost_user_net_set_mtu(struct vhost_dev *dev, uint16_t mtu)
 {
     VhostUserMsg msg;
@@ -2704,7 +2691,6 @@ const VhostOps user_ops = {
         .vhost_set_vring_enable = vhost_user_set_vring_enable,
         .vhost_requires_shm_log = vhost_user_requires_shm_log,
         .vhost_migration_done = vhost_user_migration_done,
-        .vhost_backend_can_merge = vhost_user_can_merge,
         .vhost_net_set_mtu = vhost_user_net_set_mtu,
         .vhost_set_iotlb_callback = vhost_user_set_iotlb_callback,
         .vhost_send_device_iotlb_msg = vhost_user_send_device_iotlb_msg,
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index bc6bad23d5..38d98528e7 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -1355,7 +1355,6 @@ const VhostOps vdpa_ops = {
         .vhost_set_config = vhost_vdpa_set_config,
         .vhost_requires_shm_log = NULL,
         .vhost_migration_done = NULL,
-        .vhost_backend_can_merge = NULL,
         .vhost_net_set_mtu = NULL,
         .vhost_set_iotlb_callback = NULL,
         .vhost_send_device_iotlb_msg = NULL,
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 4fe08c809f..6148892798 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -729,11 +729,7 @@ static void vhost_region_add_section(struct vhost_dev *dev,
             size_t offset = mrs_gpa - prev_gpa_start;
 
             if (prev_host_start + offset == mrs_host &&
-                section->mr == prev_sec->mr &&
-                (!dev->vhost_ops->vhost_backend_can_merge ||
-                 dev->vhost_ops->vhost_backend_can_merge(dev,
-                    mrs_host, mrs_size,
-                    prev_host_start, prev_size))) {
+                section->mr == prev_sec->mr) {
                 uint64_t max_end = MAX(prev_host_end, mrs_host + mrs_size);
                 need_add = false;
                 prev_sec->offset_within_address_space =
diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
index 2349a4a7d2..f3ba7b676b 100644
--- a/include/hw/virtio/vhost-backend.h
+++ b/include/hw/virtio/vhost-backend.h
@@ -86,9 +86,6 @@ typedef int (*vhost_set_vring_enable_op)(struct vhost_dev *dev,
 typedef bool (*vhost_requires_shm_log_op)(struct vhost_dev *dev);
 typedef int (*vhost_migration_done_op)(struct vhost_dev *dev,
                                        char *mac_addr);
-typedef bool (*vhost_backend_can_merge_op)(struct vhost_dev *dev,
-                                           uint64_t start1, uint64_t size1,
-                                           uint64_t start2, uint64_t size2);
 typedef int (*vhost_vsock_set_guest_cid_op)(struct vhost_dev *dev,
                                             uint64_t guest_cid);
 typedef int (*vhost_vsock_set_running_op)(struct vhost_dev *dev, int start);
@@ -163,7 +160,6 @@ typedef struct VhostOps {
     vhost_set_vring_enable_op vhost_set_vring_enable;
     vhost_requires_shm_log_op vhost_requires_shm_log;
     vhost_migration_done_op vhost_migration_done;
-    vhost_backend_can_merge_op vhost_backend_can_merge;
     vhost_vsock_set_guest_cid_op vhost_vsock_set_guest_cid;
     vhost_vsock_set_running_op vhost_vsock_set_running;
     vhost_set_iotlb_callback_op vhost_set_iotlb_callback;
-- 
2.40.0




* [PATCH v3 3/3] softmmu/physmem: Fixup qemu_ram_block_from_host() documentation
  2023-05-03 17:21 [PATCH v3 0/3] vhost: memslot handling improvements David Hildenbrand
  2023-05-03 17:21 ` [PATCH v3 1/3] vhost: Rework memslot filtering and fix "used_memslot" tracking David Hildenbrand
  2023-05-03 17:21 ` [PATCH v3 2/3] vhost: Remove vhost_backend_can_merge() callback David Hildenbrand
@ 2023-05-03 17:21 ` David Hildenbrand
  2023-05-23 15:42   ` Peter Xu
  2023-05-23 14:25 ` [PATCH v3 0/3] vhost: memslot handling improvements David Hildenbrand
  3 siblings, 1 reply; 9+ messages in thread
From: David Hildenbrand @ 2023-05-03 17:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: David Hildenbrand, Michael S. Tsirkin, Stefan Hajnoczi,
	Igor Mammedov, Paolo Bonzini, Peter Xu,
	Philippe Mathieu-Daudé

Let's fixup the documentation (e.g., removing traces of the ram_addr
parameter that no longer exists) and move it to the header file while at
it.

Suggested-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/exec/cpu-common.h | 15 +++++++++++++++
 softmmu/physmem.c         | 17 -----------------
 2 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 1be4a3117e..cd23a7535e 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -75,6 +75,21 @@ void qemu_ram_remap(ram_addr_t addr, ram_addr_t length);
 ram_addr_t qemu_ram_addr_from_host(void *ptr);
 ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr);
 RAMBlock *qemu_ram_block_by_name(const char *name);
+
+/*
+ * Translates a host ptr back to a RAMBlock and an offset in that RAMBlock.
+ *
+ * @ptr: The host pointer to translate.
+ * @round_offset: Whether to round the result offset down to a target page
+ * @offset: Will be set to the offset within the returned RAMBlock.
+ *
+ * Returns: RAMBlock (or NULL if not found)
+ *
+ * By the time this function returns, the returned pointer is not protected
+ * by RCU anymore.  If the caller is not within an RCU critical section and
+ * does not hold the iothread lock, it must have other means of protecting the
+ * pointer, such as a reference to the memory region that owns the RAMBlock.
+ */
 RAMBlock *qemu_ram_block_from_host(void *ptr, bool round_offset,
                                    ram_addr_t *offset);
 ram_addr_t qemu_ram_block_host_offset(RAMBlock *rb, void *host);
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 0e0182d9f2..f5b0fa5b17 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -2169,23 +2169,6 @@ ram_addr_t qemu_ram_block_host_offset(RAMBlock *rb, void *host)
     return res;
 }
 
-/*
- * Translates a host ptr back to a RAMBlock, a ram_addr and an offset
- * in that RAMBlock.
- *
- * ptr: Host pointer to look up
- * round_offset: If true round the result offset down to a page boundary
- * *ram_addr: set to result ram_addr
- * *offset: set to result offset within the RAMBlock
- *
- * Returns: RAMBlock (or NULL if not found)
- *
- * By the time this function returns, the returned pointer is not protected
- * by RCU anymore.  If the caller is not within an RCU critical section and
- * does not hold the iothread lock, it must have other means of protecting the
- * pointer, such as a reference to the region that includes the incoming
- * ram_addr_t.
- */
 RAMBlock *qemu_ram_block_from_host(void *ptr, bool round_offset,
                                    ram_addr_t *offset)
 {
-- 
2.40.0




* Re: [PATCH v3 0/3] vhost: memslot handling improvements
  2023-05-03 17:21 [PATCH v3 0/3] vhost: memslot handling improvements David Hildenbrand
                   ` (2 preceding siblings ...)
  2023-05-03 17:21 ` [PATCH v3 3/3] softmmu/physmem: Fixup qemu_ram_block_from_host() documentation David Hildenbrand
@ 2023-05-23 14:25 ` David Hildenbrand
  3 siblings, 0 replies; 9+ messages in thread
From: David Hildenbrand @ 2023-05-23 14:25 UTC (permalink / raw)
  To: qemu-devel
  Cc: Michael S. Tsirkin, Stefan Hajnoczi, Igor Mammedov, Paolo Bonzini,
	Peter Xu, Philippe Mathieu-Daudé

On 03.05.23 19:21, David Hildenbrand wrote:
> Following up on my previous work to make virtio-mem consume multiple
> memslots dynamically [1] that requires precise accounting between used vs.
> reserved memslots, I realized that vhost makes this extra hard by
> filtering out some memory region sections (so they don't consume a
> memslot) in the vhost-user case, which messes up the whole memslot
> accounting.
> 
> This series fixes what I found to be broken and prepares for more work on
> [1]. Further, it cleans up the merge checks that I consider unnecessary.
> 
> [1] https://lkml.kernel.org/r/20211027124531.57561-8-david@redhat.com
> 
> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> Cc: Stefan Hajnoczi <stefanha@redhat.com>
> Cc: Igor Mammedov <imammedo@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: "Philippe Mathieu-Daudé" <philmd@linaro.org>
> 

Ping, probably fell through the cracks.

-- 
Thanks,

David / dhildenb




* Re: [PATCH v3 1/3] vhost: Rework memslot filtering and fix "used_memslot" tracking
  2023-05-03 17:21 ` [PATCH v3 1/3] vhost: Rework memslot filtering and fix "used_memslot" tracking David Hildenbrand
@ 2023-05-23 15:34   ` Peter Xu
  2023-05-23 15:42     ` David Hildenbrand
  0 siblings, 1 reply; 9+ messages in thread
From: Peter Xu @ 2023-05-23 15:34 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: qemu-devel, Michael S. Tsirkin, Stefan Hajnoczi, Igor Mammedov,
	Paolo Bonzini, Philippe Mathieu-Daudé, Tiwei Bie

On Wed, May 03, 2023 at 07:21:19PM +0200, David Hildenbrand wrote:
> Having multiple vhost devices, some filtering out fd-less memslots and
> some not, can mess up the "used_memslot" accounting. Consequently our
> "free memslot" checks become unreliable and we might run out of free
> memslots at runtime later.
> 
> An example sequence which can trigger a potential issue that involves
> different vhost backends (vhost-kernel and vhost-user) and hotplugged
> memory devices can be found at [1].
> 
> Let's make the filtering mechanism less generic and distinguish between
> backends that support private memslots (without a fd) and ones that only
> support shared memslots (with a fd). Track the used_memslots for both
> cases separately and use the corresponding value when required.
> 
> Note: Most probably we should filter out MAP_PRIVATE fd-based RAM regions
> (for example, via memory-backend-memfd,...,shared=off or as default with
>  memory-backend-file) as well. When not using MAP_SHARED, it might not work
> as expected. Add a TODO for now.
> 
> [1] https://lkml.kernel.org/r/fad9136f-08d3-3fd9-71a1-502069c000cf@redhat.com
> 
> Fixes: 988a27754bbb ("vhost: allow backends to filter memory sections")
> Cc: Tiwei Bie <tiwei.bie@intel.com>
> Acked-by: Igor Mammedov <imammedo@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  hw/virtio/vhost-user.c            |  7 ++--
>  hw/virtio/vhost.c                 | 56 ++++++++++++++++++++++++++-----
>  include/hw/virtio/vhost-backend.h |  5 ++-
>  3 files changed, 52 insertions(+), 16 deletions(-)
> 
> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> index e5285df4ba..0c3e2702b1 100644
> --- a/hw/virtio/vhost-user.c
> +++ b/hw/virtio/vhost-user.c
> @@ -2453,10 +2453,9 @@ vhost_user_crypto_close_session(struct vhost_dev *dev, uint64_t session_id)
>      return 0;
>  }
>  
> -static bool vhost_user_mem_section_filter(struct vhost_dev *dev,
> -                                          MemoryRegionSection *section)
> +static bool vhost_user_no_private_memslots(struct vhost_dev *dev)
>  {
> -    return memory_region_get_fd(section->mr) >= 0;
> +    return true;
>  }
>  
>  static int vhost_user_get_inflight_fd(struct vhost_dev *dev,
> @@ -2686,6 +2685,7 @@ const VhostOps user_ops = {
>          .vhost_backend_init = vhost_user_backend_init,
>          .vhost_backend_cleanup = vhost_user_backend_cleanup,
>          .vhost_backend_memslots_limit = vhost_user_memslots_limit,
> +        .vhost_backend_no_private_memslots = vhost_user_no_private_memslots,
>          .vhost_set_log_base = vhost_user_set_log_base,
>          .vhost_set_mem_table = vhost_user_set_mem_table,
>          .vhost_set_vring_addr = vhost_user_set_vring_addr,
> @@ -2712,7 +2712,6 @@ const VhostOps user_ops = {
>          .vhost_set_config = vhost_user_set_config,
>          .vhost_crypto_create_session = vhost_user_crypto_create_session,
>          .vhost_crypto_close_session = vhost_user_crypto_close_session,
> -        .vhost_backend_mem_section_filter = vhost_user_mem_section_filter,
>          .vhost_get_inflight_fd = vhost_user_get_inflight_fd,
>          .vhost_set_inflight_fd = vhost_user_set_inflight_fd,
>          .vhost_dev_start = vhost_user_dev_start,
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index 746d130c74..4fe08c809f 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -46,20 +46,33 @@
>  static struct vhost_log *vhost_log;
>  static struct vhost_log *vhost_log_shm;
>  
> +/* Memslots used by backends that support private memslots (without an fd). */
>  static unsigned int used_memslots;
> +
> +/* Memslots used by backends that only support shared memslots (with an fd). */
> +static unsigned int used_shared_memslots;

It's just that these vars are updated multiple times when >1 vhost is
there, accessing these fields are still a bit confusing - I think it's
implicitly protected by BQL so looks always safe.

Since we already have the shared/private handling, maybe for the long term
it'll be nicer to just keep such info per-device e.g. in vhost_dev so we
can also drop vhost_backend_no_private_memslots().  Anyway the code is
internal so can be done on top even if worthwhile.

> +
>  static QLIST_HEAD(, vhost_dev) vhost_devices =
>      QLIST_HEAD_INITIALIZER(vhost_devices);
>  
>  bool vhost_has_free_slot(void)
>  {
> -    unsigned int slots_limit = ~0U;
> +    unsigned int free = UINT_MAX;
>      struct vhost_dev *hdev;
>  
>      QLIST_FOREACH(hdev, &vhost_devices, entry) {
>          unsigned int r = hdev->vhost_ops->vhost_backend_memslots_limit(hdev);
> -        slots_limit = MIN(slots_limit, r);
> +        unsigned int cur_free;
> +
> +        if (hdev->vhost_ops->vhost_backend_no_private_memslots &&
> +            hdev->vhost_ops->vhost_backend_no_private_memslots(hdev)) {
> +            cur_free = r - used_shared_memslots;
> +        } else {
> +            cur_free = r - used_memslots;
> +        }
> +        free = MIN(free, cur_free);
>      }
> -    return slots_limit > used_memslots;
> +    return free > 1;

Should here be "free > 0" instead?

Trivial but maybe still matter when some device used exactly the size of
all memslots of a device..

Other than this the patch looks all good here.

Thanks,

-- 
Peter Xu




* Re: [PATCH v3 2/3] vhost: Remove vhost_backend_can_merge() callback
  2023-05-03 17:21 ` [PATCH v3 2/3] vhost: Remove vhost_backend_can_merge() callback David Hildenbrand
@ 2023-05-23 15:40   ` Peter Xu
  0 siblings, 0 replies; 9+ messages in thread
From: Peter Xu @ 2023-05-23 15:40 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: qemu-devel, Michael S. Tsirkin, Stefan Hajnoczi, Igor Mammedov,
	Paolo Bonzini, Philippe Mathieu-Daudé

On Wed, May 03, 2023 at 07:21:20PM +0200, David Hildenbrand wrote:
> Checking whether the memory regions are equal is sufficient: if they are
> equal, then most certainly the contained fd is equal.

Looks reasonable to me.

I double checked the src of the change and there's no bug report attached
either.  Maybe just a double safety belt, but definitely Michael will know
the best.

https://lore.kernel.org/qemu-devel/1456067090-18187-1-git-send-email-mst@redhat.com/

> 
> The whole vhost-user memslot handling is suboptimal and overly
> complicated. We shouldn't have to look up a RAM memory region we were
> notified about in vhost_user_get_mr_data() using a host pointer. But that
> requires a bigger rework -- especially an alternative vhost_set_mem_table()
> backend call that simply consumes MemoryRegionSections.
> 
> For now, let's just drop vhost_backend_can_merge().
> 
> Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
> Reviewed-by: Igor Mammedov <imammedo@redhat.com>
> Acked-by: Igor Mammedov <imammedo@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu




* Re: [PATCH v3 3/3] softmmu/physmem: Fixup qemu_ram_block_from_host() documentation
  2023-05-03 17:21 ` [PATCH v3 3/3] softmmu/physmem: Fixup qemu_ram_block_from_host() documentation David Hildenbrand
@ 2023-05-23 15:42   ` Peter Xu
  0 siblings, 0 replies; 9+ messages in thread
From: Peter Xu @ 2023-05-23 15:42 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: qemu-devel, Michael S. Tsirkin, Stefan Hajnoczi, Igor Mammedov,
	Paolo Bonzini, Philippe Mathieu-Daudé

On Wed, May 03, 2023 at 07:21:21PM +0200, David Hildenbrand wrote:
> Let's fixup the documentation (e.g., removing traces of the ram_addr
> parameter that no longer exists) and move it to the header file while at
> it.
> 
> Suggested-by: Igor Mammedov <imammedo@redhat.com>
> Acked-by: Igor Mammedov <imammedo@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu




* Re: [PATCH v3 1/3] vhost: Rework memslot filtering and fix "used_memslot" tracking
  2023-05-23 15:34   ` Peter Xu
@ 2023-05-23 15:42     ` David Hildenbrand
  0 siblings, 0 replies; 9+ messages in thread
From: David Hildenbrand @ 2023-05-23 15:42 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Michael S. Tsirkin, Stefan Hajnoczi, Igor Mammedov,
	Paolo Bonzini, Philippe Mathieu-Daudé, Tiwei Bie

[...]

>> --- a/hw/virtio/vhost.c
>> +++ b/hw/virtio/vhost.c
>> @@ -46,20 +46,33 @@
>>   static struct vhost_log *vhost_log;
>>   static struct vhost_log *vhost_log_shm;
>>   
>> +/* Memslots used by backends that support private memslots (without an fd). */
>>   static unsigned int used_memslots;
>> +
>> +/* Memslots used by backends that only support shared memslots (with an fd). */
>> +static unsigned int used_shared_memslots;
> 
> It's just that these vars are updated multiple times when >1 vhost is
> there, accessing these fields are still a bit confusing - I think it's
> implicitly protected by BQL so looks always safe.

Yes, like the existing variable.

> 
> Since we already have the shared/private handling, maybe for the long term
> it'll be nicer to just keep such info per-device e.g. in vhost_dev so we
> can also drop vhost_backend_no_private_memslots().  Anyway the code is
> internal so can be done on top even if worthwhile.

Might be possible, but I remember there was a catch to it when 
hotplugging a device.

> 
>> +
>>   static QLIST_HEAD(, vhost_dev) vhost_devices =
>>       QLIST_HEAD_INITIALIZER(vhost_devices);
>>   
>>   bool vhost_has_free_slot(void)
>>   {
>> -    unsigned int slots_limit = ~0U;
>> +    unsigned int free = UINT_MAX;
>>       struct vhost_dev *hdev;
>>   
>>       QLIST_FOREACH(hdev, &vhost_devices, entry) {
>>           unsigned int r = hdev->vhost_ops->vhost_backend_memslots_limit(hdev);
>> -        slots_limit = MIN(slots_limit, r);
>> +        unsigned int cur_free;
>> +
>> +        if (hdev->vhost_ops->vhost_backend_no_private_memslots &&
>> +            hdev->vhost_ops->vhost_backend_no_private_memslots(hdev)) {
>> +            cur_free = r - used_shared_memslots;
>> +        } else {
>> +            cur_free = r - used_memslots;
>> +        }
>> +        free = MIN(free, cur_free);
>>       }
>> -    return slots_limit > used_memslots;
>> +    return free > 1;
> 
> Should here be "free > 0" instead?
> 
> Trivial but maybe still matter when some device used exactly the size of
> all memslots of a device..

Very good catch, thanks Peter!

-- 
Thanks,

David / dhildenb



