* [PATCH 0/2] Postcopy migration and vhost-user errors
@ 2024-07-11 13:14 Prasad Pandit
From: Prasad Pandit @ 2024-07-11 13:14 UTC
  To: qemu-devel
  Cc: Peter Xu, Fabiano Rosas, Jason Wang, Michael S. Tsirkin,
	mcoqueli, Prasad Pandit

From: Prasad Pandit <pjp@fedoraproject.org>

Hello,

* virsh(1) offers multiple options to initiate Postcopy migration:

    1) virsh migrate --postcopy --postcopy-after-precopy
    2) virsh migrate --postcopy + virsh migrate-postcopy
    3) virsh migrate --postcopy --timeout <N> --timeout-postcopy

When Postcopy migration is invoked via method (2) or (3) above,
the guest on the destination host sometimes appears to hang or get stuck.

* During Postcopy migration, multiple threads are spawned on the destination
host to start the guest and set up devices. One such thread starts the vhost
device via the vhost_dev_start() function, and another, called fault_thread,
handles page faults in user space using the kernel's userfaultfd(2) facility.

When fault_thread exits upon completion of Postcopy migration, it sends a
'postcopy_end' message to the vhost-user device. But sometimes the
'postcopy_end' message is sent while the vhost device is still being set up
via vhost_dev_start().

     Thread-1                                  Thread-2

vhost_dev_start                        postcopy_ram_incoming_cleanup
 vhost_device_iotlb_miss                postcopy_notify
  vhost_backend_update_device_iotlb      vhost_user_postcopy_notifier
   vhost_user_send_device_iotlb_msg       vhost_user_postcopy_end
    process_message_reply                  process_message_reply
     vhost_user_read                        vhost_user_read
      vhost_user_read_header                 vhost_user_read_header
       "Fail to update device iotlb"          "Failed to receive reply to postcopy_end"

This creates confusion: the vhost device receives the 'postcopy_end' message
while it is still trying to update IOTLB entries.
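
For illustration, here is a minimal, self-contained sketch of this kind of
race (plain POSIX C, not QEMU code): two threads perform unlocked write/read
cycles on one shared socket, so either thread can consume the reply meant
for the other, just as happens on the shared vhost-user chardev above.
===
/*
 * Sketch (not QEMU code): two client threads share one socket and each
 * performs an unlocked write/read cycle against an echo-style backend,
 * the way QEMU threads call vhost_user_write/read on the shared
 * vhost-user chardev.  Build with: cc -pthread -o race race.c
 */
#include <pthread.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static int shared_fd;                  /* the "QEMU" end of the socket */

static void *backend(void *arg)        /* replies in arrival order */
{
    int fd = *(int *)arg;
    char tag;

    while (read(fd, &tag, 1) == 1) {
        usleep(1000);                  /* delay widens the race window */
        write(fd, &tag, 1);
    }
    return NULL;
}

static void *qemu_thread(void *arg)
{
    char want = *(const char *)arg, got;

    for (int i = 0; i < 1000; i++) {
        write(shared_fd, &want, 1);    /* cf. vhost_user_write() */
        read(shared_fd, &got, 1);      /* cf. vhost_user_read()  */
        if (got != want) {
            printf("thread %c read reply %c: crossed messages\n", want, got);
            return NULL;
        }
    }
    return NULL;
}

int main(void)
{
    int sv[2];
    pthread_t b, t1, t2;

    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
    shared_fd = sv[0];
    pthread_create(&b, NULL, backend, &sv[1]);
    pthread_create(&t1, NULL, qemu_thread, "I");   /* cf. iotlb update */
    pthread_create(&t2, NULL, qemu_thread, "P");   /* cf. fault_thread */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
===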

This seems to leave the guest in a stranded/hung state, because fault_thread
has exited saying Postcopy migration has ended, but the vhost device is
probably still expecting updates. QEMU logs the following errors on the
destination host:
===
...
qemu-kvm: vhost_user_read_header: 700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.
qemu-kvm: vhost_device_iotlb_miss: 700871,700871: Fail to update device iotlb
qemu-kvm: vhost_user_postcopy_end: 700871,700900: Failed to receive reply to postcopy_end
qemu-kvm: vhost_user_read_header: 700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.
qemu-kvm: vhost_device_iotlb_miss: 700871,700871: Fail to update device iotlb
qemu-kvm: vhost_user_read_header: 700871,700871: Failed to read msg header. Flags 0x8 instead of 0x5.
qemu-kvm: vhost_device_iotlb_miss: 700871,700871: Fail to update device iotlb
qemu-kvm: vhost_user_read_header: 700871,700871: Failed to read msg header. Flags 0x16 instead of 0x5.
qemu-kvm: vhost_device_iotlb_miss: 700871,700871: Fail to update device iotlb
qemu-kvm: vhost_user_read_header: 700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.
qemu-kvm: vhost_device_iotlb_miss: 700871,700871: Fail to update device iotlb
===

* The two patches here help to fix/handle these errors.

Thank you.
---
Prasad Pandit (2):
  vhost-user: add a write-read lock
  vhost: fail device start if iotlb update fails

 hw/virtio/vhost-user.c         | 423 +++++++++++++++++++--------------
 hw/virtio/vhost.c              |   6 +-
 include/hw/virtio/vhost-user.h |   3 +
 3 files changed, 259 insertions(+), 173 deletions(-)

--
2.45.2




* [PATCH 1/2] vhost-user: add a write-read lock
From: Prasad Pandit @ 2024-07-11 13:14 UTC
  To: qemu-devel
  Cc: Peter Xu, Fabiano Rosas, Jason Wang, Michael S. Tsirkin,
	mcoqueli, Prasad Pandit

From: Prasad Pandit <pjp@fedoraproject.org>

QEMU threads use vhost_user_write/read calls to send messages
to, and receive messages from, a vhost-user device. When multiple
threads communicate with the same vhost-user device, they can
receive each other's messages, resulting in an erroneous state.

 vhost_user_read_header:
  700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.
 vhost_device_iotlb_miss:
  700871,700871: Fail to update device iotlb
 vhost_user_postcopy_end:
  700871,700900: Failed to receive reply to postcopy_end
 vhost_user_read_header:
  700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.

Here the fault thread seems to end postcopy migration while
another thread is starting the vhost-user device.

Add a lock that is held across one vhost_user_write/read cycle
to avoid such race conditions.
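
Condensed from the diff below, the locking pattern applied to each
message cycle looks like this (sketch only):

    struct vhost_user *u = dev->opaque;
    struct VhostUserState *us = u->user;

    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
        ret = vhost_user_write(dev, &msg, NULL, 0);
        if (ret < 0) {
            return ret;   /* the lock guard releases the mutex on return */
        }

        ret = vhost_user_read(dev, &msg);
        if (ret < 0) {
            return ret;
        }
    }
    /* reply validation continues outside the critical section */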

Suggested-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Prasad Pandit <pjp@fedoraproject.org>
---
 hw/virtio/vhost-user.c         | 423 +++++++++++++++++++--------------
 include/hw/virtio/vhost-user.h |   3 +
 2 files changed, 254 insertions(+), 172 deletions(-)

diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 00561daa06..99881c487f 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -24,6 +24,7 @@
 #include "qemu/main-loop.h"
 #include "qemu/uuid.h"
 #include "qemu/sockets.h"
+#include "qemu/lockable.h"
 #include "sysemu/runstate.h"
 #include "sysemu/cryptodev.h"
 #include "migration/postcopy-ram.h"
@@ -433,6 +434,8 @@ int vhost_user_gpu_set_socket(struct vhost_dev *dev, int fd)
 static int vhost_user_set_log_base(struct vhost_dev *dev, uint64_t base,
                                    struct vhost_log *log)
 {
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     int fds[VHOST_USER_MAX_RAM_SLOTS];
     size_t fd_num = 0;
     bool shmfd = virtio_has_feature(dev->protocol_features,
@@ -455,23 +458,25 @@ static int vhost_user_set_log_base(struct vhost_dev *dev, uint64_t base,
         fds[fd_num++] = log->fd;
     }
 
-    ret = vhost_user_write(dev, &msg, fds, fd_num);
-    if (ret < 0) {
-        return ret;
-    }
-
-    if (shmfd) {
-        msg.hdr.size = 0;
-        ret = vhost_user_read(dev, &msg);
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, fds, fd_num);
         if (ret < 0) {
             return ret;
         }
 
-        if (msg.hdr.request != VHOST_USER_SET_LOG_BASE) {
-            error_report("Received unexpected msg type. "
-                         "Expected %d received %d",
-                         VHOST_USER_SET_LOG_BASE, msg.hdr.request);
-            return -EPROTO;
+        if (shmfd) {
+            msg.hdr.size = 0;
+            ret = vhost_user_read(dev, &msg);
+            if (ret < 0) {
+                return ret;
+            }
+
+            if (msg.hdr.request != VHOST_USER_SET_LOG_BASE) {
+                error_report("Received unexpected msg type. "
+                             "Expected %d received %d",
+                             VHOST_USER_SET_LOG_BASE, msg.hdr.request);
+                return -EPROTO;
+            }
         }
     }
 
@@ -664,6 +669,7 @@ static int send_remove_regions(struct vhost_dev *dev,
                                bool reply_supported)
 {
     struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     struct vhost_memory_region *shadow_reg;
     int i, fd, shadow_reg_idx, ret;
     ram_addr_t offset;
@@ -685,16 +691,18 @@ static int send_remove_regions(struct vhost_dev *dev,
             vhost_user_fill_msg_region(&region_buffer, shadow_reg, 0);
             msg->payload.mem_reg.region = region_buffer;
 
-            ret = vhost_user_write(dev, msg, NULL, 0);
-            if (ret < 0) {
-                return ret;
-            }
-
-            if (reply_supported) {
-                ret = process_message_reply(dev, msg);
-                if (ret) {
+            WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+                ret = vhost_user_write(dev, msg, NULL, 0);
+                if (ret < 0) {
                     return ret;
                 }
+
+                if (reply_supported) {
+                    ret = process_message_reply(dev, msg);
+                    if (ret) {
+                        return ret;
+                    }
+                }
             }
         }
 
@@ -725,6 +733,9 @@ static int send_add_regions(struct vhost_dev *dev,
     VhostUserMsg msg_reply;
     VhostUserMemoryRegion region_buffer;
 
+    struct VhostUserState *us = u->user;
+    QEMU_LOCK_GUARD(&us->vhost_user_rw_lock);
+
     for (i = 0; i < nr_add_reg; i++) {
         reg = add_reg[i].region;
         reg_idx = add_reg[i].reg_idx;
@@ -893,6 +904,7 @@ static int vhost_user_set_mem_table_postcopy(struct vhost_dev *dev,
                                              bool config_mem_slots)
 {
     struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     int fds[VHOST_MEMORY_BASELINE_NREGIONS];
     size_t fd_num = 0;
     VhostUserMsg msg_reply;
@@ -926,14 +938,16 @@ static int vhost_user_set_mem_table_postcopy(struct vhost_dev *dev,
             return ret;
         }
 
-        ret = vhost_user_write(dev, &msg, fds, fd_num);
-        if (ret < 0) {
-            return ret;
-        }
+        WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+            ret = vhost_user_write(dev, &msg, fds, fd_num);
+            if (ret < 0) {
+                return ret;
+            }
 
-        ret = vhost_user_read(dev, &msg_reply);
-        if (ret < 0) {
-            return ret;
+            ret = vhost_user_read(dev, &msg_reply);
+            if (ret < 0) {
+                return ret;
+            }
         }
 
         if (msg_reply.hdr.request != VHOST_USER_SET_MEM_TABLE) {
@@ -1005,6 +1019,7 @@ static int vhost_user_set_mem_table(struct vhost_dev *dev,
                                     struct vhost_memory *mem)
 {
     struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     int fds[VHOST_MEMORY_BASELINE_NREGIONS];
     size_t fd_num = 0;
     bool do_postcopy = u->postcopy_listen && u->postcopy_fd.handler;
@@ -1044,13 +1059,15 @@ static int vhost_user_set_mem_table(struct vhost_dev *dev,
             return ret;
         }
 
-        ret = vhost_user_write(dev, &msg, fds, fd_num);
-        if (ret < 0) {
-            return ret;
-        }
+        WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+            ret = vhost_user_write(dev, &msg, fds, fd_num);
+            if (ret < 0) {
+                return ret;
+            }
 
-        if (reply_supported) {
-            return process_message_reply(dev, &msg);
+            if (reply_supported) {
+                return process_message_reply(dev, &msg);
+            }
         }
     }
 
@@ -1080,6 +1097,8 @@ static int vhost_user_set_vring_endian(struct vhost_dev *dev,
 static int vhost_user_get_u64(struct vhost_dev *dev, int request, uint64_t *u64)
 {
     int ret;
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     VhostUserMsg msg = {
         .hdr.request = request,
         .hdr.flags = VHOST_USER_VERSION,
@@ -1089,14 +1108,16 @@ static int vhost_user_get_u64(struct vhost_dev *dev, int request, uint64_t *u64)
         return 0;
     }
 
-    ret = vhost_user_write(dev, &msg, NULL, 0);
-    if (ret < 0) {
-        return ret;
-    }
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, NULL, 0);
+        if (ret < 0) {
+            return ret;
+        }
 
-    ret = vhost_user_read(dev, &msg);
-    if (ret < 0) {
-        return ret;
+        ret = vhost_user_read(dev, &msg);
+        if (ret < 0) {
+            return ret;
+        }
     }
 
     if (msg.hdr.request != request) {
@@ -1129,6 +1150,8 @@ static int vhost_user_write_sync(struct vhost_dev *dev, VhostUserMsg *msg,
                                  bool wait_for_reply)
 {
     int ret;
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
 
     if (wait_for_reply) {
         bool reply_supported = virtio_has_feature(dev->protocol_features,
@@ -1138,25 +1161,27 @@ static int vhost_user_write_sync(struct vhost_dev *dev, VhostUserMsg *msg,
         }
     }
 
-    ret = vhost_user_write(dev, msg, NULL, 0);
-    if (ret < 0) {
-        return ret;
-    }
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, msg, NULL, 0);
+        if (ret < 0) {
+            return ret;
+        }
 
-    if (wait_for_reply) {
-        uint64_t dummy;
+        if (wait_for_reply) {
+            uint64_t dummy;
 
-        if (msg->hdr.flags & VHOST_USER_NEED_REPLY_MASK) {
-            return process_message_reply(dev, msg);
-        }
+            if (msg->hdr.flags & VHOST_USER_NEED_REPLY_MASK) {
+                return process_message_reply(dev, msg);
+            }
 
-       /*
-        * We need to wait for a reply but the backend does not
-        * support replies for the command we just sent.
-        * Send VHOST_USER_GET_FEATURES which makes all backends
-        * send a reply.
-        */
-        return vhost_user_get_features(dev, &dummy);
+           /*
+            * We need to wait for a reply but the backend does not
+            * support replies for the command we just sent.
+            * Send VHOST_USER_GET_FEATURES which makes all backends
+            * send a reply.
+            */
+            return vhost_user_get_features(dev, &dummy);
+        }
     }
 
     return 0;
@@ -1277,20 +1302,23 @@ static int vhost_user_get_vring_base(struct vhost_dev *dev,
         .hdr.size = sizeof(msg.payload.state),
     };
     struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
 
     VhostUserHostNotifier *n = fetch_notifier(u->user, ring->index);
     if (n) {
         vhost_user_host_notifier_remove(n, dev->vdev);
     }
 
-    ret = vhost_user_write(dev, &msg, NULL, 0);
-    if (ret < 0) {
-        return ret;
-    }
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, NULL, 0);
+        if (ret < 0) {
+            return ret;
+        }
 
-    ret = vhost_user_read(dev, &msg);
-    if (ret < 0) {
-        return ret;
+        ret = vhost_user_read(dev, &msg);
+        if (ret < 0) {
+            return ret;
+        }
     }
 
     if (msg.hdr.request != VHOST_USER_GET_VRING_BASE) {
@@ -1661,6 +1689,7 @@ int vhost_user_get_shared_object(struct vhost_dev *dev, unsigned char *uuid,
                                  int *dmabuf_fd)
 {
     struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     CharBackend *chr = u->user->chr;
     int ret;
     VhostUserMsg msg = {
@@ -1669,14 +1698,16 @@ int vhost_user_get_shared_object(struct vhost_dev *dev, unsigned char *uuid,
     };
     memcpy(msg.payload.object.uuid, uuid, sizeof(msg.payload.object.uuid));
 
-    ret = vhost_user_write(dev, &msg, NULL, 0);
-    if (ret < 0) {
-        return ret;
-    }
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, NULL, 0);
+        if (ret < 0) {
+            return ret;
+        }
 
-    ret = vhost_user_read(dev, &msg);
-    if (ret < 0) {
-        return ret;
+        ret = vhost_user_read(dev, &msg);
+        if (ret < 0) {
+            return ret;
+        }
     }
 
     if (msg.hdr.request != VHOST_USER_GET_SHARED_OBJECT) {
@@ -1858,6 +1889,7 @@ static int vhost_setup_backend_channel(struct vhost_dev *dev)
         .hdr.flags = VHOST_USER_VERSION,
     };
     struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     int sv[2], ret = 0;
     bool reply_supported = virtio_has_feature(dev->protocol_features,
                                               VHOST_USER_PROTOCOL_F_REPLY_ACK);
@@ -1889,15 +1921,16 @@ static int vhost_setup_backend_channel(struct vhost_dev *dev)
         msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
     }
 
-    ret = vhost_user_write(dev, &msg, &sv[1], 1);
-    if (ret) {
-        goto out;
-    }
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, &sv[1], 1);
+        if (ret) {
+            goto out;
+        }
 
-    if (reply_supported) {
-        ret = process_message_reply(dev, &msg);
+        if (reply_supported) {
+            ret = process_message_reply(dev, &msg);
+        }
     }
-
 out:
     close(sv[1]);
     if (ret) {
@@ -1985,6 +2018,7 @@ static int vhost_user_postcopy_advise(struct vhost_dev *dev, Error **errp)
 {
 #ifdef CONFIG_LINUX
     struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     CharBackend *chr = u->user->chr;
     int ufd;
     int ret;
@@ -1993,16 +2027,18 @@ static int vhost_user_postcopy_advise(struct vhost_dev *dev, Error **errp)
         .hdr.flags = VHOST_USER_VERSION,
     };
 
-    ret = vhost_user_write(dev, &msg, NULL, 0);
-    if (ret < 0) {
-        error_setg(errp, "Failed to send postcopy_advise to vhost");
-        return ret;
-    }
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, NULL, 0);
+        if (ret < 0) {
+            error_setg(errp, "Failed to send postcopy_advise to vhost");
+            return ret;
+        }
 
-    ret = vhost_user_read(dev, &msg);
-    if (ret < 0) {
-        error_setg(errp, "Failed to get postcopy_advise reply from vhost");
-        return ret;
+        ret = vhost_user_read(dev, &msg);
+        if (ret < 0) {
+            error_setg(errp, "Failed to get postcopy_advise reply from vhost");
+            return ret;
+        }
     }
 
     if (msg.hdr.request != VHOST_USER_POSTCOPY_ADVISE) {
@@ -2051,16 +2087,19 @@ static int vhost_user_postcopy_listen(struct vhost_dev *dev, Error **errp)
 
     trace_vhost_user_postcopy_listen();
 
-    ret = vhost_user_write(dev, &msg, NULL, 0);
-    if (ret < 0) {
-        error_setg(errp, "Failed to send postcopy_listen to vhost");
-        return ret;
-    }
+    struct VhostUserState *us = u->user;
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, NULL, 0);
+        if (ret < 0) {
+            error_setg(errp, "Failed to send postcopy_listen to vhost");
+            return ret;
+        }
 
-    ret = process_message_reply(dev, &msg);
-    if (ret) {
-        error_setg(errp, "Failed to receive reply to postcopy_listen");
-        return ret;
+        ret = process_message_reply(dev, &msg);
+        if (ret) {
+            error_setg(errp, "Failed to receive reply to postcopy_listen");
+            return ret;
+        }
     }
 
     return 0;
@@ -2077,19 +2116,22 @@ static int vhost_user_postcopy_end(struct vhost_dev *dev, Error **errp)
     };
     int ret;
     struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
 
     trace_vhost_user_postcopy_end_entry();
 
-    ret = vhost_user_write(dev, &msg, NULL, 0);
-    if (ret < 0) {
-        error_setg(errp, "Failed to send postcopy_end to vhost");
-        return ret;
-    }
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, NULL, 0);
+        if (ret < 0) {
+            error_setg(errp, "Failed to send postcopy_end to vhost");
+            return ret;
+        }
 
-    ret = process_message_reply(dev, &msg);
-    if (ret) {
-        error_setg(errp, "Failed to receive reply to postcopy_end");
-        return ret;
+        ret = process_message_reply(dev, &msg);
+        if (ret) {
+            error_setg(errp, "Failed to receive reply to postcopy_end");
+            return ret;
+        }
     }
     postcopy_unregister_shared_ufd(&u->postcopy_fd);
     close(u->postcopy_fd.fd);
@@ -2359,6 +2401,8 @@ static int vhost_user_net_set_mtu(struct vhost_dev *dev, uint16_t mtu)
     bool reply_supported = virtio_has_feature(dev->protocol_features,
                                               VHOST_USER_PROTOCOL_F_REPLY_ACK);
     int ret;
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
 
     if (!(dev->protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_NET_MTU))) {
         return 0;
@@ -2372,14 +2416,16 @@ static int vhost_user_net_set_mtu(struct vhost_dev *dev, uint16_t mtu)
         msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
     }
 
-    ret = vhost_user_write(dev, &msg, NULL, 0);
-    if (ret < 0) {
-        return ret;
-    }
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, NULL, 0);
+        if (ret < 0) {
+            return ret;
+        }
 
-    /* If reply_ack supported, backend has to ack specified MTU is valid */
-    if (reply_supported) {
-        return process_message_reply(dev, &msg);
+        /* If reply_ack supported, backend has to ack specified MTU is valid */
+        if (reply_supported) {
+            return process_message_reply(dev, &msg);
+        }
     }
 
     return 0;
@@ -2396,12 +2442,19 @@ static int vhost_user_send_device_iotlb_msg(struct vhost_dev *dev,
         .payload.iotlb = *imsg,
     };
 
-    ret = vhost_user_write(dev, &msg, NULL, 0);
-    if (ret < 0) {
-        return ret;
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
+
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, NULL, 0);
+        if (ret < 0) {
+            return ret;
+        }
+
+        ret = process_message_reply(dev, &msg);
     }
 
-    return process_message_reply(dev, &msg);
+    return ret;
 }
 
 
@@ -2414,6 +2467,8 @@ static int vhost_user_get_config(struct vhost_dev *dev, uint8_t *config,
                                  uint32_t config_len, Error **errp)
 {
     int ret;
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     VhostUserMsg msg = {
         .hdr.request = VHOST_USER_GET_CONFIG,
         .hdr.flags = VHOST_USER_VERSION,
@@ -2430,16 +2485,19 @@ static int vhost_user_get_config(struct vhost_dev *dev, uint8_t *config,
 
     msg.payload.config.offset = 0;
     msg.payload.config.size = config_len;
-    ret = vhost_user_write(dev, &msg, NULL, 0);
-    if (ret < 0) {
-        error_setg_errno(errp, -ret, "vhost_get_config failed");
-        return ret;
-    }
 
-    ret = vhost_user_read(dev, &msg);
-    if (ret < 0) {
-        error_setg_errno(errp, -ret, "vhost_get_config failed");
-        return ret;
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, NULL, 0);
+        if (ret < 0) {
+            error_setg_errno(errp, -ret, "vhost_get_config failed");
+            return ret;
+        }
+
+        ret = vhost_user_read(dev, &msg);
+        if (ret < 0) {
+            error_setg_errno(errp, -ret, "vhost_get_config failed");
+            return ret;
+        }
     }
 
     if (msg.hdr.request != VHOST_USER_GET_CONFIG) {
@@ -2464,6 +2522,8 @@ static int vhost_user_set_config(struct vhost_dev *dev, const uint8_t *data,
 {
     int ret;
     uint8_t *p;
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     bool reply_supported = virtio_has_feature(dev->protocol_features,
                                               VHOST_USER_PROTOCOL_F_REPLY_ACK);
 
@@ -2492,13 +2552,15 @@ static int vhost_user_set_config(struct vhost_dev *dev, const uint8_t *data,
     p = msg.payload.config.region;
     memcpy(p, data, size);
 
-    ret = vhost_user_write(dev, &msg, NULL, 0);
-    if (ret < 0) {
-        return ret;
-    }
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, NULL, 0);
+        if (ret < 0) {
+            return ret;
+        }
 
-    if (reply_supported) {
-        return process_message_reply(dev, &msg);
+        if (reply_supported) {
+            return process_message_reply(dev, &msg);
+        }
     }
 
     return 0;
@@ -2509,6 +2571,8 @@ static int vhost_user_crypto_create_session(struct vhost_dev *dev,
                                             uint64_t *session_id)
 {
     int ret;
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     bool crypto_session = virtio_has_feature(dev->protocol_features,
                                        VHOST_USER_PROTOCOL_F_CRYPTO_SESSION);
     CryptoDevBackendSessionInfo *backend_info = session_info;
@@ -2572,18 +2636,21 @@ static int vhost_user_crypto_create_session(struct vhost_dev *dev,
 
     msg.payload.session.op_code = backend_info->op_code;
     msg.payload.session.session_id = backend_info->session_id;
-    ret = vhost_user_write(dev, &msg, NULL, 0);
-    if (ret < 0) {
-        error_report("vhost_user_write() return %d, create session failed",
-                     ret);
-        return ret;
-    }
 
-    ret = vhost_user_read(dev, &msg);
-    if (ret < 0) {
-        error_report("vhost_user_read() return %d, create session failed",
-                     ret);
-        return ret;
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, NULL, 0);
+        if (ret < 0) {
+            error_report("vhost_user_write() return %d, create session failed",
+                         ret);
+            return ret;
+        }
+
+        ret = vhost_user_read(dev, &msg);
+        if (ret < 0) {
+            error_report("vhost_user_read() return %d, create session failed",
+                         ret);
+            return ret;
+        }
     }
 
     if (msg.hdr.request != VHOST_USER_CREATE_CRYPTO_SESSION) {
@@ -2648,6 +2715,7 @@ static int vhost_user_get_inflight_fd(struct vhost_dev *dev,
     int fd;
     int ret;
     struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     CharBackend *chr = u->user->chr;
     VhostUserMsg msg = {
         .hdr.request = VHOST_USER_GET_INFLIGHT_FD,
@@ -2662,14 +2730,16 @@ static int vhost_user_get_inflight_fd(struct vhost_dev *dev,
         return 0;
     }
 
-    ret = vhost_user_write(dev, &msg, NULL, 0);
-    if (ret < 0) {
-        return ret;
-    }
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, NULL, 0);
+        if (ret < 0) {
+            return ret;
+        }
 
-    ret = vhost_user_read(dev, &msg);
-    if (ret < 0) {
-        return ret;
+        ret = vhost_user_read(dev, &msg);
+        if (ret < 0) {
+            return ret;
+        }
     }
 
     if (msg.hdr.request != VHOST_USER_GET_INFLIGHT_FD) {
@@ -2757,6 +2827,7 @@ bool vhost_user_init(VhostUserState *user, CharBackend *chr, Error **errp)
     user->memory_slots = 0;
     user->notifiers = g_ptr_array_new_full(VIRTIO_QUEUE_MAX / 4,
                                            &vhost_user_state_destroy);
+    qemu_mutex_init(&user->vhost_user_rw_lock);
     return true;
 }
 
@@ -2769,6 +2840,7 @@ void vhost_user_cleanup(VhostUserState *user)
     user->notifiers = (GPtrArray *) g_ptr_array_free(user->notifiers, true);
     memory_region_transaction_commit();
     user->chr = NULL;
+    qemu_mutex_destroy(&user->vhost_user_rw_lock);
 }
 
 
@@ -2882,6 +2954,7 @@ static int vhost_user_set_device_state_fd(struct vhost_dev *dev,
 {
     int ret;
     struct vhost_user *vu = dev->opaque;
+    struct VhostUserState *us = vu->user;
     VhostUserMsg msg = {
         .hdr = {
             .request = VHOST_USER_SET_DEVICE_STATE_FD,
@@ -2902,19 +2975,21 @@ static int vhost_user_set_device_state_fd(struct vhost_dev *dev,
         return -ENOTSUP;
     }
 
-    ret = vhost_user_write(dev, &msg, &fd, 1);
-    close(fd);
-    if (ret < 0) {
-        error_setg_errno(errp, -ret,
-                         "Failed to send SET_DEVICE_STATE_FD message");
-        return ret;
-    }
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, &fd, 1);
+        close(fd);
+        if (ret < 0) {
+            error_setg_errno(errp, -ret,
+                             "Failed to send SET_DEVICE_STATE_FD message");
+            return ret;
+        }
 
-    ret = vhost_user_read(dev, &msg);
-    if (ret < 0) {
-        error_setg_errno(errp, -ret,
-                         "Failed to receive SET_DEVICE_STATE_FD reply");
-        return ret;
+        ret = vhost_user_read(dev, &msg);
+        if (ret < 0) {
+            error_setg_errno(errp, -ret,
+                             "Failed to receive SET_DEVICE_STATE_FD reply");
+            return ret;
+        }
     }
 
     if (msg.hdr.request != VHOST_USER_SET_DEVICE_STATE_FD) {
@@ -2951,6 +3026,8 @@ static int vhost_user_set_device_state_fd(struct vhost_dev *dev,
 
 static int vhost_user_check_device_state(struct vhost_dev *dev, Error **errp)
 {
+    struct vhost_user *u = dev->opaque;
+    struct VhostUserState *us = u->user;
     int ret;
     VhostUserMsg msg = {
         .hdr = {
@@ -2965,18 +3042,20 @@ static int vhost_user_check_device_state(struct vhost_dev *dev, Error **errp)
         return -ENOTSUP;
     }
 
-    ret = vhost_user_write(dev, &msg, NULL, 0);
-    if (ret < 0) {
-        error_setg_errno(errp, -ret,
-                         "Failed to send CHECK_DEVICE_STATE message");
-        return ret;
-    }
+    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
+        ret = vhost_user_write(dev, &msg, NULL, 0);
+        if (ret < 0) {
+            error_setg_errno(errp, -ret,
+                             "Failed to send CHECK_DEVICE_STATE message");
+            return ret;
+        }
 
-    ret = vhost_user_read(dev, &msg);
-    if (ret < 0) {
-        error_setg_errno(errp, -ret,
-                         "Failed to receive CHECK_DEVICE_STATE reply");
-        return ret;
+        ret = vhost_user_read(dev, &msg);
+        if (ret < 0) {
+            error_setg_errno(errp, -ret,
+                             "Failed to receive CHECK_DEVICE_STATE reply");
+            return ret;
+        }
     }
 
     if (msg.hdr.request != VHOST_USER_CHECK_DEVICE_STATE) {
diff --git a/include/hw/virtio/vhost-user.h b/include/hw/virtio/vhost-user.h
index 324cd8663a..387ab6da2e 100644
--- a/include/hw/virtio/vhost-user.h
+++ b/include/hw/virtio/vhost-user.h
@@ -67,6 +67,9 @@ typedef struct VhostUserState {
     GPtrArray *notifiers;
     int memory_slots;
     bool supports_config;
+
+    /* Hold lock for vhost_user_write/read cycle */
+    QemuMutex vhost_user_rw_lock;
 } VhostUserState;
 
 /**
-- 
2.45.2




* [PATCH 2/2] vhost: fail device start if iotlb update fails
From: Prasad Pandit @ 2024-07-11 13:14 UTC
  To: qemu-devel
  Cc: Peter Xu, Fabiano Rosas, Jason Wang, Michael S. Tsirkin,
	mcoqueli, Prasad Pandit

From: Prasad Pandit <pjp@fedoraproject.org>

While starting a vhost device, updating IOTLB entries
via 'vhost_device_iotlb_miss' may return an error.

  qemu-kvm: vhost_device_iotlb_miss:
    700871,700871: Fail to update device iotlb

Fail device start when such an error occurs.

Signed-off-by: Prasad Pandit <pjp@fedoraproject.org>
---
 hw/virtio/vhost.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 06fc71746e..a70b7422b5 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -2151,7 +2151,11 @@ int vhost_dev_start(struct vhost_dev *hdev, VirtIODevice *vdev, bool vrings)
          * vhost-kernel code requires for this.*/
         for (i = 0; i < hdev->nvqs; ++i) {
             struct vhost_virtqueue *vq = hdev->vqs + i;
-            vhost_device_iotlb_miss(hdev, vq->used_phys, true);
+            r = vhost_device_iotlb_miss(hdev, vq->used_phys, true);
+            if (r) {
+                VHOST_OPS_DEBUG(r, "vhost_device_iotlb_miss failed");
+                goto fail_start;
+            }
         }
     }
     vhost_start_config_intr(hdev);
-- 
2.45.2




* Re: [PATCH 1/2] vhost-user: add a write-read lock
From: Michael S. Tsirkin @ 2024-07-11 14:39 UTC
  To: Prasad Pandit
  Cc: qemu-devel, Peter Xu, Fabiano Rosas, Jason Wang, mcoqueli,
	Prasad Pandit

On Thu, Jul 11, 2024 at 06:44:23PM +0530, Prasad Pandit wrote:
> From: Prasad Pandit <pjp@fedoraproject.org>
> 
> QEMU threads use vhost_user_write/read calls to send messages
> to, and receive messages from, a vhost-user device. When multiple
> threads communicate with the same vhost-user device, they can
> receive each other's messages, resulting in an erroneous state.
> 
>  vhost_user_read_header:
>   700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.
>  vhost_device_iotlb_miss:
>   700871,700871: Fail to update device iotlb
>  vhost_user_postcopy_end:
>   700871,700900: Failed to receive reply to postcopy_end
>  vhost_user_read_header:
>   700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.
> 
> Here the fault thread seems to end postcopy migration while
> another thread is starting the vhost-user device.
> 
> Add a lock that is held across one vhost_user_write/read cycle
> to avoid such race conditions.
> 
> Suggested-by: Peter Xu <peterx@redhat.com>
> Signed-off-by: Prasad Pandit <pjp@fedoraproject.org>


Could you supply a Fixes tag here? What commit introduced the race?


> [...]
> -    if (ret < 0) {
> -        error_setg_errno(errp, -ret,
> -                         "Failed to receive CHECK_DEVICE_STATE reply");
> -        return ret;
> +        ret = vhost_user_read(dev, &msg);
> +        if (ret < 0) {
> +            error_setg_errno(errp, -ret,
> +                             "Failed to receive CHECK_DEVICE_STATE reply");
> +            return ret;
> +        }
>      }
>  
>      if (msg.hdr.request != VHOST_USER_CHECK_DEVICE_STATE) {
> diff --git a/include/hw/virtio/vhost-user.h b/include/hw/virtio/vhost-user.h
> index 324cd8663a..387ab6da2e 100644
> --- a/include/hw/virtio/vhost-user.h
> +++ b/include/hw/virtio/vhost-user.h
> @@ -67,6 +67,9 @@ typedef struct VhostUserState {
>      GPtrArray *notifiers;
>      int memory_slots;
>      bool supports_config;
> +
> +    /* Hold lock for vhost_user_write/read cycle */
> +    QemuMutex vhost_user_rw_lock;
>  } VhostUserState;
>  
>  /**
> -- 
> 2.45.2



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/2] Postcopy migration and vhost-user errors
  2024-07-11 13:14 [PATCH 0/2] Postcopy migration and vhost-user errors Prasad Pandit
  2024-07-11 13:14 ` [PATCH 1/2] vhost-user: add a write-read lock Prasad Pandit
  2024-07-11 13:14 ` [PATCH 2/2] vhost: fail device start if iotlb update fails Prasad Pandit
@ 2024-07-11 15:38 ` Peter Xu
  2024-07-15 10:14   ` Prasad Pandit
  2 siblings, 1 reply; 24+ messages in thread
From: Peter Xu @ 2024-07-11 15:38 UTC (permalink / raw)
  To: Prasad Pandit
  Cc: qemu-devel, Fabiano Rosas, Jason Wang, Michael S . Tsirkin,
	mcoqueli, Prasad Pandit

On Thu, Jul 11, 2024 at 06:44:22PM +0530, Prasad Pandit wrote:
> From: Prasad Pandit <pjp@fedoraproject.org>
> 
> Hello,
> 
> * virsh(1) offers multiple options to initiate Postcopy migration:
> 
>     1) virsh migrate --postcopy --postcopy-after-precopy
>     2) virsh migrate --postcopy + virsh migrate-postcopy
>     3) virsh migrate --postcopy --timeout <N> --timeout-postcopy
> 
> When Postcopy migration is invoked via method (2) or (3) above,
> the guest on the destination host seems to hang or get stuck sometimes.
> 
> * During Postcopy migration, multiple threads are spawned on the destination
> host to start the guest and setup devices. One such thread starts vhost
> device via vhost_dev_start() function and another called fault_thread handles

Hmm, I thought it was one of the vcpu threads that invoked
vhost_dev_start(), rather than any migration thread?

> page faults in user space using kernel's userfaultfd(2) system.
> 
> When fault_thread exits upon completion of Postcopy migration, it sends a
> 'postcopy_end' message to the vhost-user device. But sometimes 'postcopy_end'
> message is sent while vhost device is being setup via vhost_dev_start().
> 
>      Thread-1                                  Thread-2
> 
> vhost_dev_start                        postcopy_ram_incoming_cleanup
>  vhost_device_iotlb_miss                postcopy_notify
>   vhost_backend_update_device_iotlb      vhost_user_postcopy_notifier
>    vhost_user_send_device_iotlb_msg       vhost_user_postcopy_end
>     process_message_reply                  process_message_reply
>      vhost_user_read                        vhost_user_read
>       vhost_user_read_header                 vhost_user_read_header
>        "Fail to update device iotlb"          "Failed to receive reply to postcopy_end"
> 
> This creates confusion when vhost device receives 'postcopy_end' message while
> it is still trying to update IOTLB entries.
> 
> This seems to leave the guest in a stranded/hung state because fault_thread
> has exited saying Postcopy migration has ended, but vhost-device is probably
> still expecting updates. QEMU logs following errors on the destination host
> ===
> ...
> qemu-kvm: vhost_user_read_header: 700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.
> qemu-kvm: vhost_device_iotlb_miss: 700871,700871: Fail to update device iotlb
> qemu-kvm: vhost_user_postcopy_end: 700871,700900: Failed to receive reply to postcopy_end
> qemu-kvm: vhost_user_read_header: 700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.
> qemu-kvm: vhost_device_iotlb_miss: 700871,700871: Fail to update device iotlb
> qemu-kvm: vhost_user_read_header: 700871,700871: Failed to read msg header. Flags 0x8 instead of 0x5.
> qemu-kvm: vhost_device_iotlb_miss: 700871,700871: Fail to update device iotlb
> qemu-kvm: vhost_user_read_header: 700871,700871: Failed to read msg header. Flags 0x16 instead of 0x5.
> qemu-kvm: vhost_device_iotlb_miss: 700871,700871: Fail to update device iotlb
> qemu-kvm: vhost_user_read_header: 700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.
> qemu-kvm: vhost_device_iotlb_miss: 700871,700871: Fail to update device iotlb
> ===
> 
> * Couple of patches here help to fix/handle these errors.

I remember after you added the rwlock, there's still a hang issue.

Did you investigate that?  Or do you mean this series will fix all the
problems?

Thanks,

> 
> Thank you.
> ---
> Prasad Pandit (2):
>   vhost-user: add a write-read lock
>   vhost: fail device start if iotlb update fails
> 
>  hw/virtio/vhost-user.c         | 423 +++++++++++++++++++--------------
>  hw/virtio/vhost.c              |   6 +-
>  include/hw/virtio/vhost-user.h |   3 +
>  3 files changed, 259 insertions(+), 173 deletions(-)
> 
> --
> 2.45.2
> 

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/2] vhost-user: add a write-read lock
  2024-07-11 13:14 ` [PATCH 1/2] vhost-user: add a write-read lock Prasad Pandit
  2024-07-11 14:39   ` Michael S. Tsirkin
@ 2024-07-11 15:41   ` Peter Xu
  2024-07-15  8:14     ` Prasad Pandit
  2024-07-20 19:41   ` Michael S. Tsirkin
  2 siblings, 1 reply; 24+ messages in thread
From: Peter Xu @ 2024-07-11 15:41 UTC (permalink / raw)
  To: Prasad Pandit
  Cc: qemu-devel, Fabiano Rosas, Jason Wang, Michael S . Tsirkin,
	mcoqueli, Prasad Pandit

On Thu, Jul 11, 2024 at 06:44:23PM +0530, Prasad Pandit wrote:
> From: Prasad Pandit <pjp@fedoraproject.org>
> 
> QEMU threads use vhost_user_write/read calls to send
> and receive messages from a vhost-user device. When multiple
> threads communicate with the same vhost-user device, they can
> receive each other's messages, resulting in an erroneous state.
> 
>  vhost_user_read_header:
>   700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.
>  vhost_device_iotlb_miss:
>   700871,700871: Fail to update device iotlb
>  vhost_user_postcopy_end:
>   700871,700900: Failed to receive reply to postcopy_end
>  vhost_user_read_header:
>   700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.
> 
> Here fault thread seems to end the postcopy migration while
> another thread is starting the vhost-user device.
> 
> Add a rw lock to hold for one vhost_user_write/read cycle
> and avoid such race conditions.
> 
> Suggested-by: Peter Xu <peterx@redhat.com>
> Signed-off-by: Prasad Pandit <pjp@fedoraproject.org>
> ---
>  hw/virtio/vhost-user.c         | 423 +++++++++++++++++++--------------
>  include/hw/virtio/vhost-user.h |   3 +

I apologize if I suggested WITH_QEMU_LOCK_GUARD when we talked.. I don't
remember which one I suggested, but in this case IIUC it'll be much easier
to review if you use the other sister function QEMU_LOCK_GUARD()
instead.. That should make the diff much, much smaller.

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/2] vhost-user: add a write-read lock
  2024-07-11 15:41   ` Peter Xu
@ 2024-07-15  8:14     ` Prasad Pandit
  2024-07-15 13:27       ` Peter Xu
  0 siblings, 1 reply; 24+ messages in thread
From: Prasad Pandit @ 2024-07-15  8:14 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Fabiano Rosas, Jason Wang, Michael S . Tsirkin,
	mcoqueli, Prasad Pandit

On Thu, 11 Jul 2024 at 21:12, Peter Xu <peterx@redhat.com> wrote:
> I apologize if I suggested WITH_QEMU_LOCK_GUARD when we talked.. I don't
> remember which one I suggested, but in this case IIUC it'll be much easier
> to review if you use the other sister function QEMU_LOCK_GUARD()
> instead.. That should make the diff much, much smaller.

* Yes, QEMU_LOCK_GUARD simplifies the diff, but it may extend the time
for which the lock is held, delaying other threads; is that okay?

Thank you.
---
  - Prasad



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/2] Postcopy migration and vhost-user errors
  2024-07-11 15:38 ` [PATCH 0/2] Postcopy migration and vhost-user errors Peter Xu
@ 2024-07-15 10:14   ` Prasad Pandit
  2024-07-15 13:39     ` Peter Xu
  0 siblings, 1 reply; 24+ messages in thread
From: Prasad Pandit @ 2024-07-15 10:14 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Fabiano Rosas, Jason Wang, Michael S . Tsirkin,
	mcoqueli, Prasad Pandit

On Thu, 11 Jul 2024 at 21:08, Peter Xu <peterx@redhat.com> wrote:
> Hmm, I thought it was one of the vcpu threads that invoked
> vhost_dev_start(), rather than any migration thread?

    [QEMU=vhost-user-front-end]  <===========>  [QEMU=vhost-user-front-end]
                 ^                                           ^
                 |                                           |
                 |                                           |
                 |                                           |
                 V                                           V
  [external-process=vhost-user-back-end]   [external-process=vhost-user-back-end]
===
vhost-user-protocol:
    -> https://www.qemu.org/docs/master/interop/vhost-user.html#vhost-user-proto

* It is not clear which thread calls the vhost_dev_start() routine; it
could be a vCPU thread.  Sending the 'postcopy_end' message to the
vhost-user back-end hints that the device was being migrated and
migration finished before the device set-up was done. The protocol
above says

    "...The nature of the channel is implementation-defined, but it
must generally behave like a pipe: The writing end will write all the
data it has into it, signalling the end of data by closing its end.
The reading end must read all of this data (until encountering the end
of file) and process it."

* It does not mention sending the 'postcopy_end' message. But it talks
about the front-end sending 'VHOST_USER_CHECK_DEVICE_STATE' to the
back-end to check if the migration of the device state was successful
or not.

> I remember after you added the rwlock, there's still a hang issue.
> Did you investigated that?  Or do you mean this series will fix all the problems?

* No, this series does not fix the guest hang issue; its root cause is
still a mystery. If migration ends abruptly before all of the guest
state is migrated, the guest hang scenario seems possible. Adding the
vhost-user rw-lock does not address the end-of-migration issue.

* From the protocol page above, it is not clear if the front-end
should allow/have multiple threads talking to the same vhost-user
device.

Thank you.
---
  - Prasad



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/2] vhost-user: add a write-read lock
  2024-07-11 14:39   ` Michael S. Tsirkin
@ 2024-07-15 10:57     ` Prasad Pandit
  0 siblings, 0 replies; 24+ messages in thread
From: Prasad Pandit @ 2024-07-15 10:57 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: qemu-devel, Peter Xu, Fabiano Rosas, Jason Wang, mcoqueli,
	Prasad Pandit

On Thu, 11 Jul 2024 at 20:11, Michael S. Tsirkin <mst@redhat.com> wrote:
> Could you supply a Fixes tag here? What commit introduced the race?

'postcopy_end' message was added by:
  -> https://github.com/qemu/qemu/commit/46343570c06e63b4499f619011df80f91349cd49

Not sure if this race condition also began with it. It is not clear if
the front-end should allow multiple threads to talk to the same
vhost-user device.

Thank you.
---
  - Prasad



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/2] vhost-user: add a write-read lock
  2024-07-15  8:14     ` Prasad Pandit
@ 2024-07-15 13:27       ` Peter Xu
  2024-07-16 10:19         ` Prasad Pandit
  0 siblings, 1 reply; 24+ messages in thread
From: Peter Xu @ 2024-07-15 13:27 UTC (permalink / raw)
  To: Prasad Pandit
  Cc: qemu-devel, Fabiano Rosas, Jason Wang, Michael S . Tsirkin,
	mcoqueli, Prasad Pandit

On Mon, Jul 15, 2024 at 01:44:00PM +0530, Prasad Pandit wrote:
> On Thu, 11 Jul 2024 at 21:12, Peter Xu <peterx@redhat.com> wrote:
> > I apologize if I suggested WITH_QEMU_LOCK_GUARD when we talked.. I don't
> > remember which one I suggested, but in this case IIUC it'll be much easier
> > to review if you use the other sister function QEMU_LOCK_GUARD()
> > instead.. That should make the diff much, much smaller.
> 
> * Yes, QEMU_LOCK_GUARD simplifies the diff, but it may extend the time
> for which the lock is held, delaying other threads; is that okay?

I think it shouldn't be a major deal in most cases, if the extended cycles
only cover a bunch of instructions. In special cases we can still use
WITH_QEMU_LOCK_GUARD, but I'd start with the simple form first and only switch
if necessary.
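
For reference, the difference between the two is just the scope of the
critical section. A minimal sketch (the lock and function names below
are made up for illustration; both macros live in include/qemu/lockable.h):

#include "qemu/osdep.h"
#include "qemu/thread.h"
#include "qemu/lockable.h"

/* Made-up lock for illustration; assume qemu_mutex_init() has run on it. */
static QemuMutex example_lock;

static int function_scope_guard(void)
{
    QEMU_LOCK_GUARD(&example_lock);
    /* The lock is held from here until the function returns,
     * on every return path. */
    return 0;
}

static int block_scope_guard(void)
{
    WITH_QEMU_LOCK_GUARD(&example_lock) {
        /* The lock is held only inside this block; it is released
         * at the closing brace, or on an early return out of it. */
    }
    /* The lock has already been released by this point. */
    return 0;
}

So the extra hold time with QEMU_LOCK_GUARD is just the tail of the
function after the reply is read, which is the concern raised above.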

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/2] Postcopy migration and vhost-user errors
  2024-07-15 10:14   ` Prasad Pandit
@ 2024-07-15 13:39     ` Peter Xu
  2024-07-16 10:14       ` Prasad Pandit
  0 siblings, 1 reply; 24+ messages in thread
From: Peter Xu @ 2024-07-15 13:39 UTC (permalink / raw)
  To: Prasad Pandit
  Cc: qemu-devel, Fabiano Rosas, Jason Wang, Michael S . Tsirkin,
	mcoqueli, Prasad Pandit

On Mon, Jul 15, 2024 at 03:44:06PM +0530, Prasad Pandit wrote:
> > I remember after you added the rwlock, there's still a hang issue.
> > Did you investigate that?  Or do you mean this series will fix all the problems?
> 
> * No, this series does not fix the guest hang issue; its root cause is
> still a mystery. If migration ends abruptly before all of the guest
> state is migrated, the guest hang scenario seems possible. Adding the
> vhost-user rw-lock does not address the end-of-migration issue.

IMHO it's better we debug and fix all the issues before merging this one,
otherwise we may overlook something.  You could pass over the patch to
whoever is going to debug this, so it will be included in the whole set to be
posted when the bug is completely fixed.

> * From the protocol page above, it is not clear if the front-end
> should allow/have multiple threads talking to the same vhost-user
> device.

The protocol should have no restriction on the thread model of a front-end.
It only describes the wire protocol.

IIUC the protocol was designed to be serial by nature (there is no
request ID, so a reply cannot be matched to any particular previous
request); the front-end then needs to manage its threads to serialize
all the requests, e.g. by using this rwlock.
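
(As a side note on why a reply cannot be matched: the wire header is
roughly the following. This is a simplified sketch of VhostUserHeader
from hw/virtio/vhost-user.c, with my own comments:)

#include <stdint.h>

/* Simplified sketch of the vhost-user wire header.  There is no
 * request/transaction ID field, so a reply can only be paired with
 * the request most recently sent on the channel. */
typedef struct {
    uint32_t request;   /* VhostUserRequest code */
    uint32_t flags;     /* version and reply-ack bits */
    uint32_t size;      /* size of the payload that follows */
} VhostUserHeaderSketch;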

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/2] Postcopy migration and vhost-user errors
  2024-07-15 13:39     ` Peter Xu
@ 2024-07-16 10:14       ` Prasad Pandit
  2024-07-16 22:02         ` Peter Xu
  0 siblings, 1 reply; 24+ messages in thread
From: Prasad Pandit @ 2024-07-16 10:14 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Fabiano Rosas, Jason Wang, Michael S . Tsirkin,
	mcoqueli, Prasad Pandit

Hello Peter,

On Mon, 15 Jul 2024 at 19:10, Peter Xu <peterx@redhat.com> wrote:
> IMHO it's better we debug and fix all the issues before merging this one,
> otherwise we may overlook something.

* Well, we don't know where the issue is, so we're not sure where the
fix may go; e.g. if the issue turns out to be how virsh(1) invokes
migrate-postcopy, the fix may go in virsh(1). Patches in this series
don't help to fix the migration convergence issue anyway, so I guess
they could be reviewed independently.

> You could pass over the patch to whoever is going to debug this, so it will be included in the whole set to be
> posted when the bug is completely fixed.

* Yes, this patch series is linked there.

> The protocol should have no restriction on the thread model of a front-end.
> It only describes the wire protocol.
>
> IIUC the protocol was designed to be serial by nature (there is no
> request ID, so a reply cannot be matched to any particular previous
> request); the front-end then needs to manage its threads to serialize
> all the requests, e.g. by using this rwlock.

* I see, okay. The simple protocol definition seems to indicate that
it is meant for one front-end/back-end pair. If we are dividing the
front-end across multiple threads, maybe we need a document to
describe those threads and how they work, at least for the QEMU
(front-end) side. Because the back-end could be a non-QEMU process, we
cannot do much there. (just thinking)

Thank you.
---
  - Prasad



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/2] vhost-user: add a write-read lock
  2024-07-15 13:27       ` Peter Xu
@ 2024-07-16 10:19         ` Prasad Pandit
  0 siblings, 0 replies; 24+ messages in thread
From: Prasad Pandit @ 2024-07-16 10:19 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Fabiano Rosas, Jason Wang, Michael S . Tsirkin,
	mcoqueli, Prasad Pandit

On Mon, 15 Jul 2024 at 18:57, Peter Xu <peterx@redhat.com> wrote:
> I think it shouldn't be a major deal in most cases, if the extended cycles
> only cover a bunch of instructions. In special cases we can still use
> WITH_QEMU_LOCK_GUARD, but I'd start with the simple form first and only switch
> if necessary.

* Okay, will send patch v2.

Thank you.
---
  - Prasad



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/2] Postcopy migration and vhost-user errors
  2024-07-16 10:14       ` Prasad Pandit
@ 2024-07-16 22:02         ` Peter Xu
  2024-07-17  8:55           ` Michael S. Tsirkin
  0 siblings, 1 reply; 24+ messages in thread
From: Peter Xu @ 2024-07-16 22:02 UTC (permalink / raw)
  To: Prasad Pandit
  Cc: qemu-devel, Fabiano Rosas, Jason Wang, Michael S . Tsirkin,
	mcoqueli, Prasad Pandit

On Tue, Jul 16, 2024 at 03:44:54PM +0530, Prasad Pandit wrote:
> Hello Peter,
> 
> On Mon, 15 Jul 2024 at 19:10, Peter Xu <peterx@redhat.com> wrote:
> > IMHO it's better we debug and fix all the issues before merging this one,
> > otherwise we may overlook something.
> 
> * Well, we don't know where the issue is, so we're not sure where the
> fix may go; e.g. if the issue turns out to be how virsh(1) invokes
> migrate-postcopy, the fix may go in virsh(1). Patches in this series
> don't help to fix the migration convergence issue anyway, so I guess
> they could be reviewed independently.

I still think we should find a complete solution before merging anything,
because I'm not 100% confident the issue to be further investigated is
irrelevant to this patch.

No strong opinions, I'll leave that to Michael to decide.

> 
> > You could pass over the patch to whoever is going to debug this, so it will be included in the whole set to be
> > posted when the bug is completely fixed.
> 
> * Yes, this patch series is linked there.
> 
> > The protocol should have no restriction on the thread model of a front-end.
> > It only describes the wire protocol.
> >
> > IIUC the protocol was designed to be serial by nature (there is no
> > request ID, so a reply cannot be matched to any particular previous
> > request); the front-end then needs to manage its threads to serialize
> > all the requests, e.g. by using this rwlock.
> 
> * I see, okay. The simple protocol definition seems to indicate that
> it is meant for one front-end/back-end pair. If we are dividing the
> front-end across multiple threads, maybe we need a document to
> describe those threads and how they work, at least for the QEMU
> (front-end) side. Because the back-end could be a non-QEMU process, we
> cannot do much there. (just thinking)

IMHO that's not part of the protocol but impl details, so the current doc
looks all fine to me.

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/2] Postcopy migration and vhost-user errors
  2024-07-16 22:02         ` Peter Xu
@ 2024-07-17  8:55           ` Michael S. Tsirkin
  2024-07-17 13:33             ` Peter Xu
  0 siblings, 1 reply; 24+ messages in thread
From: Michael S. Tsirkin @ 2024-07-17  8:55 UTC (permalink / raw)
  To: Peter Xu
  Cc: Prasad Pandit, qemu-devel, Fabiano Rosas, Jason Wang, mcoqueli,
	Prasad Pandit

On Tue, Jul 16, 2024 at 06:02:50PM -0400, Peter Xu wrote:
> On Tue, Jul 16, 2024 at 03:44:54PM +0530, Prasad Pandit wrote:
> > Hello Peter,
> > 
> > On Mon, 15 Jul 2024 at 19:10, Peter Xu <peterx@redhat.com> wrote:
> > > IMHO it's better we debug and fix all the issues before merging this one,
> > > otherwise we may overlook something.
> > 
> > * Well, we don't know where the issue is, so we're not sure where the
> > fix may go; e.g. if the issue turns out to be how virsh(1) invokes
> > migrate-postcopy, the fix may go in virsh(1). Patches in this series
> > don't help to fix the migration convergence issue anyway, so I guess
> > they could be reviewed independently.
> 
> I still think we should find a complete solution before merging anything,
> because I'm not 100% confident the issue to be further investigated is
> irrelevant to this patch.
> 
> No strong opinions, I'll leave that to Michael to decide.
> 
> > 
> > > You could pass over the patch to whoever is going to debug this, so it will be included in the whole set to be
> > > posted when the bug is completely fixed.
> > 
> > * Yes, this patch series is linked there.
> > 
> > > The protocol should have no restriction on the thread model of a front-end.
> > > It only describes the wire protocol.
> > >
> > > IIUC the protocol was designed to be serial by nature (there is no
> > > request ID, so a reply cannot be matched to any particular previous
> > > request); the front-end then needs to manage its threads to serialize
> > > all the requests, e.g. by using this rwlock.
> > 
> > * I see, okay. The simple protocol definition seems to indicate that
> > it is meant for one front-end/back-end pair. If we are dividing the
> > front-end across multiple threads, maybe we need a document to
> > describe those threads and how they work, at least for the QEMU
> > (front-end) side. Because the back-end could be a non-QEMU process, we
> > cannot do much there. (just thinking)
> 
> IMHO that's not part of the protocol but impl details, so the current doc
> looks all fine to me.
> 
> Thanks,
> 
> -- 
> Peter Xu


I just want to understand how we managed to have two threads
talking in parallel. BQL is normally enough; which path
manages to invoke vhost-user with the BQL not taken?
Just check that the BQL is taken on each vhost-user invocation and
you will figure it out.
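
(A minimal sketch of such a check, assuming the bql_locked() helper
from include/qemu/main-loop.h (formerly qemu_mutex_iothread_locked());
this is a debug aid, not part of this series:)

#include "qemu/osdep.h"
#include "qemu/main-loop.h"   /* bql_locked() */

/* Debug aid: call at the top of vhost_user_write()/vhost_user_read()
 * to flag any code path that reaches vhost-user without the BQL. */
static inline void vhost_user_assert_bql(const char *caller)
{
    if (!bql_locked()) {
        fprintf(stderr, "%s called without BQL held\n", caller);
        abort();
    }
}

Running the failing scenario with that in place would show which of the
two threads in the trace runs unlocked.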


-- 
MST



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/2] Postcopy migration and vhost-user errors
  2024-07-17  8:55           ` Michael S. Tsirkin
@ 2024-07-17 13:33             ` Peter Xu
  2024-07-17 13:40               ` Michael S. Tsirkin
  0 siblings, 1 reply; 24+ messages in thread
From: Peter Xu @ 2024-07-17 13:33 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Prasad Pandit, qemu-devel, Fabiano Rosas, Jason Wang, mcoqueli,
	Prasad Pandit

Hi, Michael,

On Wed, Jul 17, 2024 at 04:55:52AM -0400, Michael S. Tsirkin wrote:
> I just want to understand how we managed to have two threads
> talking in parallel. BQL is normally enough; which path
> manages to invoke vhost-user with the BQL not taken?
> Just check that the BQL is taken on each vhost-user invocation and
> you will figure it out.

Prasad mentioned how the race happened in the cover letter:

https://lore.kernel.org/r/20240711131424.181615-1-ppandit@redhat.com

     Thread-1                                  Thread-2

vhost_dev_start                        postcopy_ram_incoming_cleanup
 vhost_device_iotlb_miss                postcopy_notify
  vhost_backend_update_device_iotlb      vhost_user_postcopy_notifier
   vhost_user_send_device_iotlb_msg       vhost_user_postcopy_end
    process_message_reply                  process_message_reply
     vhost_user_read                        vhost_user_read
      vhost_user_read_header                 vhost_user_read_header
       "Fail to update device iotlb"          "Failed to receive reply to postcopy_end"

The normal case should be that thread-2 is postcopy_ram_listen_thread(),
and this happens when postcopy migration is close to the end.

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/2] Postcopy migration and vhost-user errors
  2024-07-17 13:33             ` Peter Xu
@ 2024-07-17 13:40               ` Michael S. Tsirkin
  2024-07-17 13:47                 ` Peter Xu
  0 siblings, 1 reply; 24+ messages in thread
From: Michael S. Tsirkin @ 2024-07-17 13:40 UTC (permalink / raw)
  To: Peter Xu
  Cc: Prasad Pandit, qemu-devel, Fabiano Rosas, Jason Wang, mcoqueli,
	Prasad Pandit

On Wed, Jul 17, 2024 at 09:33:01AM -0400, Peter Xu wrote:
> Hi, Michael,
> 
> On Wed, Jul 17, 2024 at 04:55:52AM -0400, Michael S. Tsirkin wrote:
> > I just want to understand how we managed to have two threads
> > talking in parallel. BQL is normally enough; which path
> > manages to invoke vhost-user with the BQL not taken?
> > Just check that the BQL is taken on each vhost-user invocation and
> > you will figure it out.
> 
> Prasad mentioned how the race happened in the cover letter:
> 
> https://lore.kernel.org/r/20240711131424.181615-1-ppandit@redhat.com
> 
>      Thread-1                                  Thread-2
> 
> vhost_dev_start                        postcopy_ram_incoming_cleanup
>  vhost_device_iotlb_miss                postcopy_notify
>   vhost_backend_update_device_iotlb      vhost_user_postcopy_notifier
>    vhost_user_send_device_iotlb_msg       vhost_user_postcopy_end
>     process_message_reply                  process_message_reply
>      vhost_user_read                        vhost_user_read
>       vhost_user_read_header                 vhost_user_read_header
>        "Fail to update device iotlb"          "Failed to receive reply to postcopy_end"
> 
> The normal case should be that thread-2 is postcopy_ram_listen_thread(),
> and this happens when postcopy migration is close to the end.
> 
> Thanks,
> 
> -- 
> Peter Xu


OK, so postcopy_ram_ things run without the BQL?

-- 
MST



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/2] Postcopy migration and vhost-user errors
  2024-07-17 13:40               ` Michael S. Tsirkin
@ 2024-07-17 13:47                 ` Peter Xu
  2024-07-20 19:41                   ` Michael S. Tsirkin
  0 siblings, 1 reply; 24+ messages in thread
From: Peter Xu @ 2024-07-17 13:47 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Prasad Pandit, qemu-devel, Fabiano Rosas, Jason Wang, mcoqueli,
	Prasad Pandit

On Wed, Jul 17, 2024 at 09:40:06AM -0400, Michael S. Tsirkin wrote:
> On Wed, Jul 17, 2024 at 09:33:01AM -0400, Peter Xu wrote:
> > Hi, Michael,
> > 
> > On Wed, Jul 17, 2024 at 04:55:52AM -0400, Michael S. Tsirkin wrote:
> > > I just want to understand how we managed to have two threads
> > > talking in parallel. BQL is normally enough; which path
> > > manages to invoke vhost-user with the BQL not taken?
> > > Just check that the BQL is taken on each vhost-user invocation and
> > > you will figure it out.
> > 
> > Prasad mentioned how the race happened in the cover letter:
> > 
> > https://lore.kernel.org/r/20240711131424.181615-1-ppandit@redhat.com
> > 
> >      Thread-1                                  Thread-2
> > 
> > vhost_dev_start                        postcopy_ram_incoming_cleanup
> >  vhost_device_iotlb_miss                postcopy_notify
> >   vhost_backend_update_device_iotlb      vhost_user_postcopy_notifier
> >    vhost_user_send_device_iotlb_msg       vhost_user_postcopy_end
> >     process_message_reply                  process_message_reply
> >      vhost_user_read                        vhost_user_read
> >       vhost_user_read_header                 vhost_user_read_header
> >        "Fail to update device iotlb"          "Failed to receive reply to postcopy_end"
> > 
> > The normal case should be that thread-2 is postcopy_ram_listen_thread(),
> > and this happens when postcopy migration is close to the end.
> > 
> > Thanks,
> > 
> > -- 
> > Peter Xu
> 
> 
> OK, so postcopy_ram_ things run without the BQL?

There are a lot of postcopy_ram_* functions; I didn't check all of them, but
I think it's true in this case.  Thanks.

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/2] vhost-user: add a write-read lock
  2024-07-11 13:14 ` [PATCH 1/2] vhost-user: add a write-read lock Prasad Pandit
  2024-07-11 14:39   ` Michael S. Tsirkin
  2024-07-11 15:41   ` Peter Xu
@ 2024-07-20 19:41   ` Michael S. Tsirkin
  2024-07-23  4:58     ` Prasad Pandit
  2 siblings, 1 reply; 24+ messages in thread
From: Michael S. Tsirkin @ 2024-07-20 19:41 UTC (permalink / raw)
  To: Prasad Pandit
  Cc: qemu-devel, Peter Xu, Fabiano Rosas, Jason Wang, mcoqueli,
	Prasad Pandit

On Thu, Jul 11, 2024 at 06:44:23PM +0530, Prasad Pandit wrote:
> From: Prasad Pandit <pjp@fedoraproject.org>
> 
> QEMU threads use vhost_user_write/read calls to send
> and receive messages from a vhost-user device. When multiple
> threads communicate with the same vhost-user device, they can
> receive each other's messages, resulting in an erroneous state.
> 
>  vhost_user_read_header:
>   700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.
>  vhost_device_iotlb_miss:
>   700871,700871: Fail to update device iotlb
>  vhost_user_postcopy_end:
>   700871,700900: Failed to receive reply to postcopy_end
>  vhost_user_read_header:
>   700871,700871: Failed to read msg header. Flags 0x0 instead of 0x5.
> 
> Here fault thread seems to end the postcopy migration while
> another thread is starting the vhost-user device.
> 
> Add a rw lock to hold for one vhost_user_write/read cycle
> and avoid such race conditions.
> 
> Suggested-by: Peter Xu <peterx@redhat.com>
> Signed-off-by: Prasad Pandit <pjp@fedoraproject.org>
> ---
>  hw/virtio/vhost-user.c         | 423 +++++++++++++++++++--------------
>  include/hw/virtio/vhost-user.h |   3 +
>  2 files changed, 254 insertions(+), 172 deletions(-)
> 
> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> index 00561daa06..99881c487f 100644
> --- a/hw/virtio/vhost-user.c
> +++ b/hw/virtio/vhost-user.c
> @@ -24,6 +24,7 @@
>  #include "qemu/main-loop.h"
>  #include "qemu/uuid.h"
>  #include "qemu/sockets.h"
> +#include "qemu/lockable.h"
>  #include "sysemu/runstate.h"
>  #include "sysemu/cryptodev.h"
>  #include "migration/postcopy-ram.h"
> @@ -433,6 +434,8 @@ int vhost_user_gpu_set_socket(struct vhost_dev *dev, int fd)
>  static int vhost_user_set_log_base(struct vhost_dev *dev, uint64_t base,
>                                     struct vhost_log *log)
>  {
> +    struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>      int fds[VHOST_USER_MAX_RAM_SLOTS];
>      size_t fd_num = 0;
>      bool shmfd = virtio_has_feature(dev->protocol_features,
> @@ -455,23 +458,25 @@ static int vhost_user_set_log_base(struct vhost_dev *dev, uint64_t base,
>          fds[fd_num++] = log->fd;
>      }
>  
> -    ret = vhost_user_write(dev, &msg, fds, fd_num);
> -    if (ret < 0) {
> -        return ret;
> -    }
> -
> -    if (shmfd) {
> -        msg.hdr.size = 0;
> -        ret = vhost_user_read(dev, &msg);
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, fds, fd_num);
>          if (ret < 0) {
>              return ret;
>          }
>  
> -        if (msg.hdr.request != VHOST_USER_SET_LOG_BASE) {
> -            error_report("Received unexpected msg type. "
> -                         "Expected %d received %d",
> -                         VHOST_USER_SET_LOG_BASE, msg.hdr.request);
> -            return -EPROTO;
> +        if (shmfd) {
> +            msg.hdr.size = 0;
> +            ret = vhost_user_read(dev, &msg);
> +            if (ret < 0) {
> +                return ret;
> +            }
> +
> +            if (msg.hdr.request != VHOST_USER_SET_LOG_BASE) {
> +                error_report("Received unexpected msg type. "
> +                             "Expected %d received %d",
> +                             VHOST_USER_SET_LOG_BASE, msg.hdr.request);
> +                return -EPROTO;
> +            }
>          }
>      }
>  
> @@ -664,6 +669,7 @@ static int send_remove_regions(struct vhost_dev *dev,
>                                 bool reply_supported)
>  {
>      struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>      struct vhost_memory_region *shadow_reg;
>      int i, fd, shadow_reg_idx, ret;
>      ram_addr_t offset;
> @@ -685,16 +691,18 @@ static int send_remove_regions(struct vhost_dev *dev,
>              vhost_user_fill_msg_region(&region_buffer, shadow_reg, 0);
>              msg->payload.mem_reg.region = region_buffer;
>  
> -            ret = vhost_user_write(dev, msg, NULL, 0);
> -            if (ret < 0) {
> -                return ret;
> -            }
> -
> -            if (reply_supported) {
> -                ret = process_message_reply(dev, msg);
> -                if (ret) {
> +            WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +                ret = vhost_user_write(dev, msg, NULL, 0);
> +                if (ret < 0) {
>                      return ret;
>                  }
> +
> +                if (reply_supported) {
> +                    ret = process_message_reply(dev, msg);
> +                    if (ret) {
> +                        return ret;
> +                    }
> +                }
>              }
>          }
>  
> @@ -725,6 +733,9 @@ static int send_add_regions(struct vhost_dev *dev,
>      VhostUserMsg msg_reply;
>      VhostUserMemoryRegion region_buffer;
>  
> +    struct VhostUserState *us = u->user;
> +    QEMU_LOCK_GUARD(&us->vhost_user_rw_lock);
> +
>      for (i = 0; i < nr_add_reg; i++) {
>          reg = add_reg[i].region;
>          reg_idx = add_reg[i].reg_idx;
> @@ -893,6 +904,7 @@ static int vhost_user_set_mem_table_postcopy(struct vhost_dev *dev,
>                                               bool config_mem_slots)
>  {
>      struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>      int fds[VHOST_MEMORY_BASELINE_NREGIONS];
>      size_t fd_num = 0;
>      VhostUserMsg msg_reply;
> @@ -926,14 +938,16 @@ static int vhost_user_set_mem_table_postcopy(struct vhost_dev *dev,
>              return ret;
>          }
>  
> -        ret = vhost_user_write(dev, &msg, fds, fd_num);
> -        if (ret < 0) {
> -            return ret;
> -        }
> +        WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +            ret = vhost_user_write(dev, &msg, fds, fd_num);
> +            if (ret < 0) {
> +                return ret;
> +            }
>  
> -        ret = vhost_user_read(dev, &msg_reply);
> -        if (ret < 0) {
> -            return ret;
> +            ret = vhost_user_read(dev, &msg_reply);
> +            if (ret < 0) {
> +                return ret;
> +            }
>          }
>  
>          if (msg_reply.hdr.request != VHOST_USER_SET_MEM_TABLE) {
> @@ -1005,6 +1019,7 @@ static int vhost_user_set_mem_table(struct vhost_dev *dev,
>                                      struct vhost_memory *mem)
>  {
>      struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>      int fds[VHOST_MEMORY_BASELINE_NREGIONS];
>      size_t fd_num = 0;
>      bool do_postcopy = u->postcopy_listen && u->postcopy_fd.handler;
> @@ -1044,13 +1059,15 @@ static int vhost_user_set_mem_table(struct vhost_dev *dev,
>              return ret;
>          }
>  
> -        ret = vhost_user_write(dev, &msg, fds, fd_num);
> -        if (ret < 0) {
> -            return ret;
> -        }
> +        WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +            ret = vhost_user_write(dev, &msg, fds, fd_num);
> +            if (ret < 0) {
> +                return ret;
> +            }
>  
> -        if (reply_supported) {
> -            return process_message_reply(dev, &msg);
> +            if (reply_supported) {
> +                return process_message_reply(dev, &msg);
> +            }
>          }
>      }
>  
> @@ -1080,6 +1097,8 @@ static int vhost_user_set_vring_endian(struct vhost_dev *dev,
>  static int vhost_user_get_u64(struct vhost_dev *dev, int request, uint64_t *u64)
>  {
>      int ret;
> +    struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>      VhostUserMsg msg = {
>          .hdr.request = request,
>          .hdr.flags = VHOST_USER_VERSION,
> @@ -1089,14 +1108,16 @@ static int vhost_user_get_u64(struct vhost_dev *dev, int request, uint64_t *u64)
>          return 0;
>      }
>  
> -    ret = vhost_user_write(dev, &msg, NULL, 0);
> -    if (ret < 0) {
> -        return ret;
> -    }
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, NULL, 0);
> +        if (ret < 0) {
> +            return ret;
> +        }
>  
> -    ret = vhost_user_read(dev, &msg);
> -    if (ret < 0) {
> -        return ret;
> +        ret = vhost_user_read(dev, &msg);
> +        if (ret < 0) {
> +            return ret;
> +        }
>      }
>  
>      if (msg.hdr.request != request) {
> @@ -1129,6 +1150,8 @@ static int vhost_user_write_sync(struct vhost_dev *dev, VhostUserMsg *msg,
>                                   bool wait_for_reply)
>  {
>      int ret;
> +    struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>  
>      if (wait_for_reply) {
>          bool reply_supported = virtio_has_feature(dev->protocol_features,
> @@ -1138,25 +1161,27 @@ static int vhost_user_write_sync(struct vhost_dev *dev, VhostUserMsg *msg,
>          }
>      }
>  
> -    ret = vhost_user_write(dev, msg, NULL, 0);
> -    if (ret < 0) {
> -        return ret;
> -    }
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, msg, NULL, 0);
> +        if (ret < 0) {
> +            return ret;
> +        }
>  
> -    if (wait_for_reply) {
> -        uint64_t dummy;
> +        if (wait_for_reply) {
> +            uint64_t dummy;
>  
> -        if (msg->hdr.flags & VHOST_USER_NEED_REPLY_MASK) {
> -            return process_message_reply(dev, msg);
> -        }
> +            if (msg->hdr.flags & VHOST_USER_NEED_REPLY_MASK) {
> +                return process_message_reply(dev, msg);
> +            }
>  
> -       /*
> -        * We need to wait for a reply but the backend does not
> -        * support replies for the command we just sent.
> -        * Send VHOST_USER_GET_FEATURES which makes all backends
> -        * send a reply.
> -        */
> -        return vhost_user_get_features(dev, &dummy);
> +           /*
> +            * We need to wait for a reply but the backend does not
> +            * support replies for the command we just sent.
> +            * Send VHOST_USER_GET_FEATURES which makes all backends
> +            * send a reply.
> +            */
> +            return vhost_user_get_features(dev, &dummy);
> +        }
>      }
>  
>      return 0;
> @@ -1277,20 +1302,23 @@ static int vhost_user_get_vring_base(struct vhost_dev *dev,
>          .hdr.size = sizeof(msg.payload.state),
>      };
>      struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>  
>      VhostUserHostNotifier *n = fetch_notifier(u->user, ring->index);
>      if (n) {
>          vhost_user_host_notifier_remove(n, dev->vdev);
>      }
>  
> -    ret = vhost_user_write(dev, &msg, NULL, 0);
> -    if (ret < 0) {
> -        return ret;
> -    }
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, NULL, 0);
> +        if (ret < 0) {
> +            return ret;
> +        }
>  
> -    ret = vhost_user_read(dev, &msg);
> -    if (ret < 0) {
> -        return ret;
> +        ret = vhost_user_read(dev, &msg);
> +        if (ret < 0) {
> +            return ret;
> +        }
>      }
>  
>      if (msg.hdr.request != VHOST_USER_GET_VRING_BASE) {
> @@ -1661,6 +1689,7 @@ int vhost_user_get_shared_object(struct vhost_dev *dev, unsigned char *uuid,
>                                   int *dmabuf_fd)
>  {
>      struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>      CharBackend *chr = u->user->chr;
>      int ret;
>      VhostUserMsg msg = {
> @@ -1669,14 +1698,16 @@ int vhost_user_get_shared_object(struct vhost_dev *dev, unsigned char *uuid,
>      };
>      memcpy(msg.payload.object.uuid, uuid, sizeof(msg.payload.object.uuid));
>  
> -    ret = vhost_user_write(dev, &msg, NULL, 0);
> -    if (ret < 0) {
> -        return ret;
> -    }
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, NULL, 0);
> +        if (ret < 0) {
> +            return ret;
> +        }
>  
> -    ret = vhost_user_read(dev, &msg);
> -    if (ret < 0) {
> -        return ret;
> +        ret = vhost_user_read(dev, &msg);
> +        if (ret < 0) {
> +            return ret;
> +        }
>      }
>  
>      if (msg.hdr.request != VHOST_USER_GET_SHARED_OBJECT) {
> @@ -1858,6 +1889,7 @@ static int vhost_setup_backend_channel(struct vhost_dev *dev)
>          .hdr.flags = VHOST_USER_VERSION,
>      };
>      struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>      int sv[2], ret = 0;
>      bool reply_supported = virtio_has_feature(dev->protocol_features,
>                                                VHOST_USER_PROTOCOL_F_REPLY_ACK);
> @@ -1889,15 +1921,16 @@ static int vhost_setup_backend_channel(struct vhost_dev *dev)
>          msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
>      }
>  
> -    ret = vhost_user_write(dev, &msg, &sv[1], 1);
> -    if (ret) {
> -        goto out;
> -    }
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, &sv[1], 1);
> +        if (ret) {
> +            goto out;
> +        }
>  
> -    if (reply_supported) {
> -        ret = process_message_reply(dev, &msg);
> +        if (reply_supported) {
> +            ret = process_message_reply(dev, &msg);
> +        }
>      }
> -
>  out:
>      close(sv[1]);
>      if (ret) {
> @@ -1985,6 +2018,7 @@ static int vhost_user_postcopy_advise(struct vhost_dev *dev, Error **errp)
>  {
>  #ifdef CONFIG_LINUX
>      struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>      CharBackend *chr = u->user->chr;
>      int ufd;
>      int ret;
> @@ -1993,16 +2027,18 @@ static int vhost_user_postcopy_advise(struct vhost_dev *dev, Error **errp)
>          .hdr.flags = VHOST_USER_VERSION,
>      };
>  
> -    ret = vhost_user_write(dev, &msg, NULL, 0);
> -    if (ret < 0) {
> -        error_setg(errp, "Failed to send postcopy_advise to vhost");
> -        return ret;
> -    }
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, NULL, 0);
> +        if (ret < 0) {
> +            error_setg(errp, "Failed to send postcopy_advise to vhost");
> +            return ret;
> +        }
>  
> -    ret = vhost_user_read(dev, &msg);
> -    if (ret < 0) {
> -        error_setg(errp, "Failed to get postcopy_advise reply from vhost");
> -        return ret;
> +        ret = vhost_user_read(dev, &msg);
> +        if (ret < 0) {
> +            error_setg(errp, "Failed to get postcopy_advise reply from vhost");
> +            return ret;
> +        }
>      }
>  
>      if (msg.hdr.request != VHOST_USER_POSTCOPY_ADVISE) {
> @@ -2051,16 +2087,19 @@ static int vhost_user_postcopy_listen(struct vhost_dev *dev, Error **errp)
>  
>      trace_vhost_user_postcopy_listen();
>  
> -    ret = vhost_user_write(dev, &msg, NULL, 0);
> -    if (ret < 0) {
> -        error_setg(errp, "Failed to send postcopy_listen to vhost");
> -        return ret;
> -    }
> +    struct VhostUserState *us = u->user;
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, NULL, 0);
> +        if (ret < 0) {
> +            error_setg(errp, "Failed to send postcopy_listen to vhost");
> +            return ret;
> +        }
>  
> -    ret = process_message_reply(dev, &msg);
> -    if (ret) {
> -        error_setg(errp, "Failed to receive reply to postcopy_listen");
> -        return ret;
> +        ret = process_message_reply(dev, &msg);
> +        if (ret) {
> +            error_setg(errp, "Failed to receive reply to postcopy_listen");
> +            return ret;
> +        }
>      }
>  
>      return 0;
> @@ -2077,19 +2116,22 @@ static int vhost_user_postcopy_end(struct vhost_dev *dev, Error **errp)
>      };
>      int ret;
>      struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>  
>      trace_vhost_user_postcopy_end_entry();
>  
> -    ret = vhost_user_write(dev, &msg, NULL, 0);
> -    if (ret < 0) {
> -        error_setg(errp, "Failed to send postcopy_end to vhost");
> -        return ret;
> -    }
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, NULL, 0);
> +        if (ret < 0) {
> +            error_setg(errp, "Failed to send postcopy_end to vhost");
> +            return ret;
> +        }
>  
> -    ret = process_message_reply(dev, &msg);
> -    if (ret) {
> -        error_setg(errp, "Failed to receive reply to postcopy_end");
> -        return ret;
> +        ret = process_message_reply(dev, &msg);
> +        if (ret) {
> +            error_setg(errp, "Failed to receive reply to postcopy_end");
> +            return ret;
> +        }
>      }
>      postcopy_unregister_shared_ufd(&u->postcopy_fd);
>      close(u->postcopy_fd.fd);
> @@ -2359,6 +2401,8 @@ static int vhost_user_net_set_mtu(struct vhost_dev *dev, uint16_t mtu)
>      bool reply_supported = virtio_has_feature(dev->protocol_features,
>                                                VHOST_USER_PROTOCOL_F_REPLY_ACK);
>      int ret;
> +    struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>  
>      if (!(dev->protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_NET_MTU))) {
>          return 0;
> @@ -2372,14 +2416,16 @@ static int vhost_user_net_set_mtu(struct vhost_dev *dev, uint16_t mtu)
>          msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
>      }
>  
> -    ret = vhost_user_write(dev, &msg, NULL, 0);
> -    if (ret < 0) {
> -        return ret;
> -    }
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, NULL, 0);
> +        if (ret < 0) {
> +            return ret;
> +        }
>  
> -    /* If reply_ack supported, backend has to ack specified MTU is valid */
> -    if (reply_supported) {
> -        return process_message_reply(dev, &msg);
> +        /* If reply_ack supported, backend has to ack specified MTU is valid */
> +        if (reply_supported) {
> +            return process_message_reply(dev, &msg);
> +        }
>      }
>  
>      return 0;
> @@ -2396,12 +2442,19 @@ static int vhost_user_send_device_iotlb_msg(struct vhost_dev *dev,
>          .payload.iotlb = *imsg,
>      };
>  
> -    ret = vhost_user_write(dev, &msg, NULL, 0);
> -    if (ret < 0) {
> -        return ret;
> +    struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
> +
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, NULL, 0);
> +        if (ret < 0) {
> +            return ret;
> +        }
> +
> +        ret = process_message_reply(dev, &msg);
>      }
>  
> -    return process_message_reply(dev, &msg);
> +    return ret;
>  }
>  
>  
> @@ -2414,6 +2467,8 @@ static int vhost_user_get_config(struct vhost_dev *dev, uint8_t *config,
>                                   uint32_t config_len, Error **errp)
>  {
>      int ret;
> +    struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>      VhostUserMsg msg = {
>          .hdr.request = VHOST_USER_GET_CONFIG,
>          .hdr.flags = VHOST_USER_VERSION,
> @@ -2430,16 +2485,19 @@ static int vhost_user_get_config(struct vhost_dev *dev, uint8_t *config,
>  
>      msg.payload.config.offset = 0;
>      msg.payload.config.size = config_len;
> -    ret = vhost_user_write(dev, &msg, NULL, 0);
> -    if (ret < 0) {
> -        error_setg_errno(errp, -ret, "vhost_get_config failed");
> -        return ret;
> -    }
>  
> -    ret = vhost_user_read(dev, &msg);
> -    if (ret < 0) {
> -        error_setg_errno(errp, -ret, "vhost_get_config failed");
> -        return ret;
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, NULL, 0);
> +        if (ret < 0) {
> +            error_setg_errno(errp, -ret, "vhost_get_config failed");
> +            return ret;
> +        }
> +
> +        ret = vhost_user_read(dev, &msg);
> +        if (ret < 0) {
> +            error_setg_errno(errp, -ret, "vhost_get_config failed");
> +            return ret;
> +        }
>      }
>  
>      if (msg.hdr.request != VHOST_USER_GET_CONFIG) {
> @@ -2464,6 +2522,8 @@ static int vhost_user_set_config(struct vhost_dev *dev, const uint8_t *data,
>  {
>      int ret;
>      uint8_t *p;
> +    struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>      bool reply_supported = virtio_has_feature(dev->protocol_features,
>                                                VHOST_USER_PROTOCOL_F_REPLY_ACK);
>  
> @@ -2492,13 +2552,15 @@ static int vhost_user_set_config(struct vhost_dev *dev, const uint8_t *data,
>      p = msg.payload.config.region;
>      memcpy(p, data, size);
>  
> -    ret = vhost_user_write(dev, &msg, NULL, 0);
> -    if (ret < 0) {
> -        return ret;
> -    }
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, NULL, 0);
> +        if (ret < 0) {
> +            return ret;
> +        }
>  
> -    if (reply_supported) {
> -        return process_message_reply(dev, &msg);
> +        if (reply_supported) {
> +            return process_message_reply(dev, &msg);
> +        }
>      }
>  
>      return 0;
> @@ -2509,6 +2571,8 @@ static int vhost_user_crypto_create_session(struct vhost_dev *dev,
>                                              uint64_t *session_id)
>  {
>      int ret;
> +    struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>      bool crypto_session = virtio_has_feature(dev->protocol_features,
>                                         VHOST_USER_PROTOCOL_F_CRYPTO_SESSION);
>      CryptoDevBackendSessionInfo *backend_info = session_info;
> @@ -2572,18 +2636,21 @@ static int vhost_user_crypto_create_session(struct vhost_dev *dev,
>  
>      msg.payload.session.op_code = backend_info->op_code;
>      msg.payload.session.session_id = backend_info->session_id;
> -    ret = vhost_user_write(dev, &msg, NULL, 0);
> -    if (ret < 0) {
> -        error_report("vhost_user_write() return %d, create session failed",
> -                     ret);
> -        return ret;
> -    }
>  
> -    ret = vhost_user_read(dev, &msg);
> -    if (ret < 0) {
> -        error_report("vhost_user_read() return %d, create session failed",
> -                     ret);
> -        return ret;
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, NULL, 0);
> +        if (ret < 0) {
> +            error_report("vhost_user_write() return %d, create session failed",
> +                         ret);
> +            return ret;
> +        }
> +
> +        ret = vhost_user_read(dev, &msg);
> +        if (ret < 0) {
> +            error_report("vhost_user_read() return %d, create session failed",
> +                         ret);
> +            return ret;
> +        }
>      }
>  
>      if (msg.hdr.request != VHOST_USER_CREATE_CRYPTO_SESSION) {
> @@ -2648,6 +2715,7 @@ static int vhost_user_get_inflight_fd(struct vhost_dev *dev,
>      int fd;
>      int ret;
>      struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>      CharBackend *chr = u->user->chr;
>      VhostUserMsg msg = {
>          .hdr.request = VHOST_USER_GET_INFLIGHT_FD,
> @@ -2662,14 +2730,16 @@ static int vhost_user_get_inflight_fd(struct vhost_dev *dev,
>          return 0;
>      }
>  
> -    ret = vhost_user_write(dev, &msg, NULL, 0);
> -    if (ret < 0) {
> -        return ret;
> -    }
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, NULL, 0);
> +        if (ret < 0) {
> +            return ret;
> +        }
>  
> -    ret = vhost_user_read(dev, &msg);
> -    if (ret < 0) {
> -        return ret;
> +        ret = vhost_user_read(dev, &msg);
> +        if (ret < 0) {
> +            return ret;
> +        }
>      }
>  
>      if (msg.hdr.request != VHOST_USER_GET_INFLIGHT_FD) {
> @@ -2757,6 +2827,7 @@ bool vhost_user_init(VhostUserState *user, CharBackend *chr, Error **errp)
>      user->memory_slots = 0;
>      user->notifiers = g_ptr_array_new_full(VIRTIO_QUEUE_MAX / 4,
>                                             &vhost_user_state_destroy);
> +    qemu_mutex_init(&user->vhost_user_rw_lock);
>      return true;
>  }
>  
> @@ -2769,6 +2840,7 @@ void vhost_user_cleanup(VhostUserState *user)
>      user->notifiers = (GPtrArray *) g_ptr_array_free(user->notifiers, true);
>      memory_region_transaction_commit();
>      user->chr = NULL;
> +    qemu_mutex_destroy(&user->vhost_user_rw_lock);
>  }
>  
>  
> @@ -2882,6 +2954,7 @@ static int vhost_user_set_device_state_fd(struct vhost_dev *dev,
>  {
>      int ret;
>      struct vhost_user *vu = dev->opaque;
> +    struct VhostUserState *us = vu->user;
>      VhostUserMsg msg = {
>          .hdr = {
>              .request = VHOST_USER_SET_DEVICE_STATE_FD,
> @@ -2902,19 +2975,21 @@ static int vhost_user_set_device_state_fd(struct vhost_dev *dev,
>          return -ENOTSUP;
>      }
>  
> -    ret = vhost_user_write(dev, &msg, &fd, 1);
> -    close(fd);
> -    if (ret < 0) {
> -        error_setg_errno(errp, -ret,
> -                         "Failed to send SET_DEVICE_STATE_FD message");
> -        return ret;
> -    }
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, &fd, 1);
> +        close(fd);
> +        if (ret < 0) {
> +            error_setg_errno(errp, -ret,
> +                             "Failed to send SET_DEVICE_STATE_FD message");
> +            return ret;
> +        }
>  
> -    ret = vhost_user_read(dev, &msg);
> -    if (ret < 0) {
> -        error_setg_errno(errp, -ret,
> -                         "Failed to receive SET_DEVICE_STATE_FD reply");
> -        return ret;
> +        ret = vhost_user_read(dev, &msg);
> +        if (ret < 0) {
> +            error_setg_errno(errp, -ret,
> +                             "Failed to receive SET_DEVICE_STATE_FD reply");
> +            return ret;
> +        }
>      }
>  
>      if (msg.hdr.request != VHOST_USER_SET_DEVICE_STATE_FD) {
> @@ -2951,6 +3026,8 @@ static int vhost_user_set_device_state_fd(struct vhost_dev *dev,
>  
>  static int vhost_user_check_device_state(struct vhost_dev *dev, Error **errp)
>  {
> +    struct vhost_user *u = dev->opaque;
> +    struct VhostUserState *us = u->user;
>      int ret;
>      VhostUserMsg msg = {
>          .hdr = {
> @@ -2965,18 +3042,20 @@ static int vhost_user_check_device_state(struct vhost_dev *dev, Error **errp)
>          return -ENOTSUP;
>      }
>  
> -    ret = vhost_user_write(dev, &msg, NULL, 0);
> -    if (ret < 0) {
> -        error_setg_errno(errp, -ret,
> -                         "Failed to send CHECK_DEVICE_STATE message");
> -        return ret;
> -    }
> +    WITH_QEMU_LOCK_GUARD(&us->vhost_user_rw_lock) {
> +        ret = vhost_user_write(dev, &msg, NULL, 0);
> +        if (ret < 0) {
> +            error_setg_errno(errp, -ret,
> +                             "Failed to send CHECK_DEVICE_STATE message");
> +            return ret;
> +        }
>  
> -    ret = vhost_user_read(dev, &msg);
> -    if (ret < 0) {
> -        error_setg_errno(errp, -ret,
> -                         "Failed to receive CHECK_DEVICE_STATE reply");
> -        return ret;
> +        ret = vhost_user_read(dev, &msg);
> +        if (ret < 0) {
> +            error_setg_errno(errp, -ret,
> +                             "Failed to receive CHECK_DEVICE_STATE reply");
> +            return ret;
> +        }
>      }
>  
>      if (msg.hdr.request != VHOST_USER_CHECK_DEVICE_STATE) {
> diff --git a/include/hw/virtio/vhost-user.h b/include/hw/virtio/vhost-user.h
> index 324cd8663a..387ab6da2e 100644
> --- a/include/hw/virtio/vhost-user.h
> +++ b/include/hw/virtio/vhost-user.h
> @@ -67,6 +67,9 @@ typedef struct VhostUserState {
>      GPtrArray *notifiers;
>      int memory_slots;
>      bool supports_config;
> +
> +    /* Hold lock for vhost_user_write/read cycle */
> +    QemuMutex vhost_user_rw_lock;
>  } VhostUserState;

So it's not a rw lock. It's just a mutex.
Locks should be named after what they protect, not
after where they are held.
In this case, this one ensures only one request is
outstanding at a time.
So: vhost_user_request_reply_lock.
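
As a standalone sketch of the invariant (plain pthreads rather than
QEMU's QemuMutex, and backend_write()/backend_read() are hypothetical
stand-ins for vhost_user_write()/vhost_user_read()):

===
#include <pthread.h>
#include <stdio.h>

/* Hypothetical stand-ins for the vhost-user write/read pair. */
static int backend_write(int req) { printf("write request %d\n", req); return 0; }
static int backend_read(int req)  { printf("read reply for %d\n", req); return 0; }

/* Named after what it protects: the request-reply cycle. */
static pthread_mutex_t request_reply_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Send one request and wait for its reply.  Holding the mutex across
 * both calls guarantees that no second request can interleave between
 * the write and its matching read, i.e. only one request is ever
 * outstanding at a time.
 */
static int send_request(int req)
{
    int ret;

    pthread_mutex_lock(&request_reply_lock);
    ret = backend_write(req);
    if (ret == 0) {
        ret = backend_read(req);
    }
    pthread_mutex_unlock(&request_reply_lock);
    return ret;
}

int main(void)
{
    return send_request(42);
}
===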


>  
>  /**
> -- 
> 2.45.2



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/2] Postcopy migration and vhost-user errors
  2024-07-17 13:47                 ` Peter Xu
@ 2024-07-20 19:41                   ` Michael S. Tsirkin
  2024-07-23  5:03                     ` Prasad Pandit
  0 siblings, 1 reply; 24+ messages in thread
From: Michael S. Tsirkin @ 2024-07-20 19:41 UTC (permalink / raw)
  To: Peter Xu
  Cc: Prasad Pandit, qemu-devel, Fabiano Rosas, Jason Wang, mcoqueli,
	Prasad Pandit

On Wed, Jul 17, 2024 at 09:47:17AM -0400, Peter Xu wrote:
> On Wed, Jul 17, 2024 at 09:40:06AM -0400, Michael S. Tsirkin wrote:
> > On Wed, Jul 17, 2024 at 09:33:01AM -0400, Peter Xu wrote:
> > > Hi, Michael,
> > > 
> > > On Wed, Jul 17, 2024 at 04:55:52AM -0400, Michael S. Tsirkin wrote:
> > > > I just want to understand how we managed to have two threads
> > > > talking in parallel. BQL is normally enough, which path
> > > > manages to invoke vhost-user with BQL not taken?
> > > > Just check BQL taken on each vhost user invocation and
> > > > you will figure it out.
> > > 
> > > Prasad mentioned how the race happened in the cover letter:
> > > 
> > > https://lore.kernel.org/r/20240711131424.181615-1-ppandit@redhat.com
> > > 
> > >      Thread-1                                  Thread-2
> > > 
> > > vhost_dev_start                        postcopy_ram_incoming_cleanup
> > >  vhost_device_iotlb_miss                postcopy_notify
> > >   vhost_backend_update_device_iotlb      vhost_user_postcopy_notifier
> > >    vhost_user_send_device_iotlb_msg       vhost_user_postcopy_end
> > >     process_message_reply                  process_message_reply
> > >      vhost_user_read                        vhost_user_read
> > >       vhost_user_read_header                 vhost_user_read_header
> > >        "Fail to update device iotlb"          "Failed to receive reply to postcopy_end"
> > > 
> > > The normal case should be that thread-2 is postcopy_ram_listen_thread(),
> > > and this happens when postcopy migration is close to the end.
> > > 
> > > Thanks,
> > > 
> > > -- 
> > > Peter Xu
> > 
> > 
> > OK, so postcopy_ram_* things run without the BQL?
> 
> There are a lot of postcopy_ram_* functions, I didn't check all of them but
> I think it's true in this case.  Thanks.
> 
> -- 
> Peter Xu


So pls document this in the commit log.



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/2] vhost-user: add a write-read lock
  2024-07-20 19:41   ` Michael S. Tsirkin
@ 2024-07-23  4:58     ` Prasad Pandit
  0 siblings, 0 replies; 24+ messages in thread
From: Prasad Pandit @ 2024-07-23  4:58 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: qemu-devel, Peter Xu, Fabiano Rosas, Jason Wang, mcoqueli,
	Prasad Pandit

On Sun, 21 Jul 2024 at 01:11, Michael S. Tsirkin <mst@redhat.com> wrote:
> So it's not a rw lock. It's just a mutex.
> Locks should be named after what they protect, not
> after where they are held.
> In this case, this one ensures only one request is
> outstanding at a time.
> So: vhost_user_request_reply_lock.

Okay, will do this change. Thank you.
---
  - Prasad



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/2] Postcopy migration and vhost-user errors
  2024-07-20 19:41                   ` Michael S. Tsirkin
@ 2024-07-23  5:03                     ` Prasad Pandit
  2024-07-23 17:52                       ` Peter Xu
  2024-07-23 17:57                       ` Prasad Pandit
  0 siblings, 2 replies; 24+ messages in thread
From: Prasad Pandit @ 2024-07-23  5:03 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Xu, qemu-devel, Fabiano Rosas, Jason Wang, mcoqueli,
	Prasad Pandit

On Sun, 21 Jul 2024 at 01:11, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > On Wed, Jul 17, 2024 at 04:55:52AM -0400, Michael S. Tsirkin wrote:
> > > > > I just want to understand how we managed to have two threads
> > > > > talking in parallel. BQL is normally enough, which path
> > > > > manages to invoke vhost-user with BQL not taken?
> > > > > Just check BQL taken on each vhost user invocation and
> > > > > you will figure it out.
> > > >
> > > OK, so postcopy_ram_* things run without the BQL?
> >
> > There are a lot of postcopy_ram_* functions, I didn't check all of them but
> > I think it's true in this case.  Thanks.
> >
> So pls document this in the commit log.

* i.e. IIUC, if we take the BQL in the postcopy_ram_* functions, we don't
need this 'vhost_user_request_reply_lock' patch? I'll check the
postcopy_ram_* functions to see what's happening there.
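
For instance, a rough and untested sketch of that alternative could be
to take the BQL around the postcopy-end notification in the incoming
cleanup path (depending on the QEMU version the helpers are
qemu_mutex_lock_iothread()/qemu_mutex_unlock_iothread() or
bql_lock()/bql_unlock(); the older names are used below):

===
#include "qemu/osdep.h"
#include "qemu/main-loop.h"
#include "qemu/error-report.h"
#include "qapi/error.h"
#include "migration/postcopy-ram.h"

/*
 * Hypothetical: fire the POSTCOPY_NOTIFY_INBOUND_END notifier with the
 * BQL held, so it cannot interleave with vhost_dev_start(), which runs
 * under the BQL.
 */
static int postcopy_notify_end_locked(void)
{
    Error *local_err = NULL;
    int ret;

    qemu_mutex_lock_iothread();
    ret = postcopy_notify(POSTCOPY_NOTIFY_INBOUND_END, &local_err);
    qemu_mutex_unlock_iothread();

    if (ret) {
        error_report_err(local_err);
    }
    return ret;
}
===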

Thank you.
---
  - Prasad



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/2] Postcopy migration and vhost-user errors
  2024-07-23  5:03                     ` Prasad Pandit
@ 2024-07-23 17:52                       ` Peter Xu
  2024-07-23 17:57                       ` Prasad Pandit
  1 sibling, 0 replies; 24+ messages in thread
From: Peter Xu @ 2024-07-23 17:52 UTC (permalink / raw)
  To: Prasad Pandit
  Cc: Michael S. Tsirkin, qemu-devel, Fabiano Rosas, Jason Wang,
	mcoqueli, Prasad Pandit

On Tue, Jul 23, 2024 at 10:33:58AM +0530, Prasad Pandit wrote:
> On Sun, 21 Jul 2024 at 01:11, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > > On Wed, Jul 17, 2024 at 04:55:52AM -0400, Michael S. Tsirkin wrote:
> > > > > > I just want to understand how we managed to have two threads
> > > > > > talking in parallel. BQL is normally enough, which path
> > > > > > manages to invoke vhost-user with BQL not taken?
> > > > > > Just check BQL taken on each vhost user invocation and
> > > > > > you will figure it out.
> > > > >
> > > > OK, so postcopy_ram_* things run without the BQL?
> > >
> > > There are a lot of postcopy_ram_* functions, I didn't check all of them but
> > > I think it's true in this case.  Thanks.
> > >
> > So pls document this in the commit log.
> 
> * i.e. IIUC, if we take the BQL in the postcopy_ram_* functions, we don't
> need this 'vhost_user_request_reply_lock' patch? I'll check the
> postcopy_ram_* functions to see what's happening there.

Go ahead, Prasad.  But just to mention: if the postcopy code doesn't take
the BQL, that may be for a good reason.  So if you're not confident about
some changes, you can share a patch before testing it in an involved
environment.

Personally, I'd avoid using the BQL as long as it can be replaced by a
finer-grained lock.  Otherwise the lock semantics become more unclear, and
we'll never get rid of the BQL.  IOW, I'd prefer a smaller lock in general
where possible, to avoid the BQL, but we can see how the patch looks when
it's ready.

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/2] Postcopy migration and vhost-user errors
  2024-07-23  5:03                     ` Prasad Pandit
  2024-07-23 17:52                       ` Peter Xu
@ 2024-07-23 17:57                       ` Prasad Pandit
  1 sibling, 0 replies; 24+ messages in thread
From: Prasad Pandit @ 2024-07-23 17:57 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Xu, qemu-devel, Fabiano Rosas, Jason Wang, mcoqueli,
	Prasad Pandit

Hi,

On Tue, 23 Jul 2024 at 10:33, Prasad Pandit <ppandit@redhat.com> wrote:
> On Sun, 21 Jul 2024 at 01:11, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > > On Wed, Jul 17, 2024 at 04:55:52AM -0400, Michael S. Tsirkin wrote:
> > > > > > I just want to understand how we managed to have two threads
> > > > > > talking in parallel. BQL is normally enough, which path
> > > > > > manages to invoke vhost-user with BQL not taken?
> > > > > > Just check BQL taken on each vhost user invocation and
> > > > > > you will figure it out.
> > > > >
> > > > OK, so postcopy_ram_* things run without the BQL?
>
> I'll check the postcopy_ram_* functions to see what's happening there.
===
2024-07-23T17:11:25.934756Z qemu-kvm: vhost_user_postcopy_end: 161184:161213: BQL not taken
2024-07-23T17:11:25.934994Z qemu-kvm: vhost_user_postcopy_end: 161184:161213: BQL not taken
2024-07-23T17:11:25.935095Z qemu-kvm: vhost_user_postcopy_end: 161184:161213: BQL not taken
===
* postcopy_ram_listen_thread() does not take the BQL. I'm trying to see
where to take it.
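
A debug check along these lines can produce output like the log above
(a rough sketch, not necessarily the exact instrumentation used;
qemu_mutex_iothread_locked() is the older spelling of what newer trees
call bql_locked()):

===
#include "qemu/osdep.h"
#include "qemu/main-loop.h"
#include "qemu/error-report.h"

/*
 * Hypothetical debug helper: report when a vhost-user request is made
 * without the BQL held.  Call check_bql(__func__) at the top of
 * suspect paths, e.g. vhost_user_postcopy_end().
 */
static void check_bql(const char *func)
{
    if (!qemu_mutex_iothread_locked()) {
        error_report("%s: BQL not taken", func);
    }
}
===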

Thank you.
---
  - Prasad



^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2024-07-23 17:58 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-07-11 13:14 [PATCH 0/2] Postcopy migration and vhost-user errors Prasad Pandit
2024-07-11 13:14 ` [PATCH 1/2] vhost-user: add a write-read lock Prasad Pandit
2024-07-11 14:39   ` Michael S. Tsirkin
2024-07-15 10:57     ` Prasad Pandit
2024-07-11 15:41   ` Peter Xu
2024-07-15  8:14     ` Prasad Pandit
2024-07-15 13:27       ` Peter Xu
2024-07-16 10:19         ` Prasad Pandit
2024-07-20 19:41   ` Michael S. Tsirkin
2024-07-23  4:58     ` Prasad Pandit
2024-07-11 13:14 ` [PATCH 2/2] vhost: fail device start if iotlb update fails Prasad Pandit
2024-07-11 15:38 ` [PATCH 0/2] Postcopy migration and vhost-user errors Peter Xu
2024-07-15 10:14   ` Prasad Pandit
2024-07-15 13:39     ` Peter Xu
2024-07-16 10:14       ` Prasad Pandit
2024-07-16 22:02         ` Peter Xu
2024-07-17  8:55           ` Michael S. Tsirkin
2024-07-17 13:33             ` Peter Xu
2024-07-17 13:40               ` Michael S. Tsirkin
2024-07-17 13:47                 ` Peter Xu
2024-07-20 19:41                   ` Michael S. Tsirkin
2024-07-23  5:03                     ` Prasad Pandit
2024-07-23 17:52                       ` Peter Xu
2024-07-23 17:57                       ` Prasad Pandit

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).