* [PATCH v3 0/7] vhost-user-blk: fix the migration issue and enhance qtests
@ 2020-08-31 9:27 Dima Stepanov
2020-08-31 9:27 ` [PATCH v3 1/7] vhost: recheck dev state in the vhost_migration_log routine Dima Stepanov
` (6 more replies)
0 siblings, 7 replies; 9+ messages in thread
From: Dima Stepanov @ 2020-08-31 9:27 UTC (permalink / raw)
To: qemu-devel
Cc: kwolf, lvivier, thuth, qemu-block, mst, jasowang, dgilbert,
mreitz, fengli, yc-core, pbonzini, raphael.norwitz
v2 -> v3:
- update commit message for the
"vhost: recheck dev state in the vhost_migration_log routine" commit
- rename "started" field of the VhostUserBlk structure to
"started_vu", so there will be no confustion with the VHOST started
field
- update vhost-user-test.c to always initialize nq local variable
(spotted by patchew)
v1 -> v2:
- add comments to connected/started fields in the header file
- move the "s->started" logic from the vhost_user_blk_disconnect
routine to the vhost_user_blk_stop routine
Reference e-mail threads:
- https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg01509.html
- https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg05241.html
If a vhost-user daemon is used as a backend for a vhost device, then we
should consider the possibility of a disconnect at any moment. The general
question here was: should a disconnect during migration be treated as an
error or as an acceptable state for vhost-user devices?
I think the disconnect event for the vhost-user devices should not break the
migration process, because:
- the device will be in the stopped state, so it will not be changed
during migration
- if a reconnect happens, the migration log will be reinitialized as
part of the reconnect/init process:
#0 vhost_log_global_start (listener=0x563989cf7be0)
at hw/virtio/vhost.c:920
#1 0x000056398603d8bc in listener_add_address_space (listener=0x563989cf7be0,
as=0x563986ea4340 <address_space_memory>)
at softmmu/memory.c:2664
#2 0x000056398603dd30 in memory_listener_register (listener=0x563989cf7be0,
as=0x563986ea4340 <address_space_memory>)
at softmmu/memory.c:2740
#3 0x0000563985fd6956 in vhost_dev_init (hdev=0x563989cf7bd8,
opaque=0x563989cf7e30, backend_type=VHOST_BACKEND_TYPE_USER,
busyloop_timeout=0)
at hw/virtio/vhost.c:1385
#4 0x0000563985f7d0b8 in vhost_user_blk_connect (dev=0x563989cf7990)
at hw/block/vhost-user-blk.c:315
#5 0x0000563985f7d3f6 in vhost_user_blk_event (opaque=0x563989cf7990,
event=CHR_EVENT_OPENED)
at hw/block/vhost-user-blk.c:379
The first patch in the patchset fixes this issue by setting the vhost device
to the stopped state in the disconnect handler and checking it in the
vhost_migration_log() routine before returning from the function.
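In short, the fix looks roughly like this (a simplified sketch of the idea,
not the exact code; see patch 1 for the full change):

    /* vhost-user-blk disconnect handler: mark the vhost dev as stopped */
    s->dev.started = false;

    /* vhost_migration_log(): tolerate a disconnect during log setup */
    r = vhost_dev_set_log(dev, enable);
    if (!dev->started) {
        /* the device was stopped by a disconnect; not a migration error */
        r = 0;
    }
    return r;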
The qtest framework was updated to test vhost-user-blk functionality. The
vhost-user-blk/vhost-user-blk-tests/migrate_reconnect test was added to
reproduce the original issue.
Dima Stepanov (7):
vhost: recheck dev state in the vhost_migration_log routine
vhost: check queue state in the vhost_dev_set_log routine
tests/qtest/vhost-user-test: prepare the tests for adding new dev
class
tests/qtest/libqos/virtio-blk: add support for vhost-user-blk
tests/qtest/vhost-user-test: add support for the vhost-user-blk device
tests/qtest/vhost-user-test: add migrate_reconnect test
tests/qtest/vhost-user-test: enable the reconnect tests
hw/block/vhost-user-blk.c | 19 ++-
hw/virtio/vhost.c | 39 ++++-
include/hw/virtio/vhost-user-blk.h | 10 ++
tests/qtest/libqos/virtio-blk.c | 14 ++
tests/qtest/vhost-user-test.c | 290 +++++++++++++++++++++++++++++++------
5 files changed, 323 insertions(+), 49 deletions(-)
--
2.7.4
* [PATCH v3 1/7] vhost: recheck dev state in the vhost_migration_log routine
2020-08-31 9:27 [PATCH v3 0/7] vhost-user-blk: fix the migration issue and enhance qtests Dima Stepanov
@ 2020-08-31 9:27 ` Dima Stepanov
2020-09-01 3:49 ` Raphael Norwitz
2020-08-31 9:27 ` [PATCH v3 2/7] vhost: check queue state in the vhost_dev_set_log routine Dima Stepanov
` (5 subsequent siblings)
6 siblings, 1 reply; 9+ messages in thread
From: Dima Stepanov @ 2020-08-31 9:27 UTC (permalink / raw)
To: qemu-devel
Cc: kwolf, lvivier, thuth, qemu-block, mst, jasowang, dgilbert,
mreitz, fengli, yc-core, pbonzini, raphael.norwitz
vhost-user devices can get a disconnect in the middle of the VHOST-USER
handshake when migration starts. If the disconnect happens right before
the next VHOST-USER command is sent, then the vhost_dev_set_log() call
in the vhost_migration_log() function will return an error. This error
will trigger the assert() and terminate the QEMU migration source process.
For the vhost-user devices the disconnect event should not break the
migration process, because:
- the device will be in the stopped state, so it will not be changed
during migration
- if a reconnect happens, the migration log will be reinitialized as
part of the reconnect/init process:
#0 vhost_log_global_start (listener=0x563989cf7be0)
at hw/virtio/vhost.c:920
#1 0x000056398603d8bc in listener_add_address_space (listener=0x563989cf7be0,
as=0x563986ea4340 <address_space_memory>)
at softmmu/memory.c:2664
#2 0x000056398603dd30 in memory_listener_register (listener=0x563989cf7be0,
as=0x563986ea4340 <address_space_memory>)
at softmmu/memory.c:2740
#3 0x0000563985fd6956 in vhost_dev_init (hdev=0x563989cf7bd8,
opaque=0x563989cf7e30, backend_type=VHOST_BACKEND_TYPE_USER,
busyloop_timeout=0)
at hw/virtio/vhost.c:1385
#4 0x0000563985f7d0b8 in vhost_user_blk_connect (dev=0x563989cf7990)
at hw/block/vhost-user-blk.c:315
#5 0x0000563985f7d3f6 in vhost_user_blk_event (opaque=0x563989cf7990,
event=CHR_EVENT_OPENED)
at hw/block/vhost-user-blk.c:379
Update the vhost-user-blk device with an internal started_vu field which
will be used for initialization (vhost_user_blk_start) and cleanup
(vhost_user_blk_stop). This additional flag in the VHostUserBlk structure
will be used to track whether the device really needs to be stopped and
cleaned up at the vhost-user level.
The disconnect event will set the overall vhost device (not vhost-user) to
the stopped state, so it can be used by the generic vhost_migration_log
routine.
Such an approach could be propagated to the other vhost-user devices, but
a better idea is to share the same connect/disconnect code across all
vhost-user devices.
This migration issue was slightly discussed earlier:
- https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg01509.html
- https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg05241.html
Signed-off-by: Dima Stepanov <dimastep@yandex-team.ru>
---
hw/block/vhost-user-blk.c | 19 ++++++++++++++++---
hw/virtio/vhost.c | 27 ++++++++++++++++++++++++---
include/hw/virtio/vhost-user-blk.h | 10 ++++++++++
3 files changed, 50 insertions(+), 6 deletions(-)
diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c
index 39aec42..a076b1e 100644
--- a/hw/block/vhost-user-blk.c
+++ b/hw/block/vhost-user-blk.c
@@ -150,6 +150,7 @@ static int vhost_user_blk_start(VirtIODevice *vdev)
error_report("Error starting vhost: %d", -ret);
goto err_guest_notifiers;
}
+ s->started_vu = true;
/* guest_notifier_mask/pending not used yet, so just unmask
* everything here. virtio-pci will do the right thing by
@@ -175,6 +176,11 @@ static void vhost_user_blk_stop(VirtIODevice *vdev)
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
int ret;
+ if (!s->started_vu) {
+ return;
+ }
+ s->started_vu = false;
+
if (!k->set_guest_notifiers) {
return;
}
@@ -341,9 +347,7 @@ static void vhost_user_blk_disconnect(DeviceState *dev)
}
s->connected = false;
- if (s->dev.started) {
- vhost_user_blk_stop(vdev);
- }
+ vhost_user_blk_stop(vdev);
vhost_dev_cleanup(&s->dev);
}
@@ -399,6 +403,15 @@ static void vhost_user_blk_event(void *opaque, QEMUChrEvent event)
NULL, NULL, false);
aio_bh_schedule_oneshot(ctx, vhost_user_blk_chr_closed_bh, opaque);
}
+
+ /*
+ * Move the vhost device to the stopped state. The vhost-user device
+ * will be cleaned up and disconnected in a BH. This is useful for
+ * the vhost migration code: if a disconnect was caught, the generic
+ * vhost code can check the device state without knowing its type
+ * (vhost-user in this case).
+ */
+ s->dev.started = false;
break;
case CHR_EVENT_BREAK:
case CHR_EVENT_MUX_IN:
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 1a1384e..ffef7ab 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -861,21 +861,42 @@ static int vhost_migration_log(MemoryListener *listener, bool enable)
dev->log_enabled = enable;
return 0;
}
+
+ r = 0;
if (!enable) {
r = vhost_dev_set_log(dev, false);
if (r < 0) {
- return r;
+ goto check_dev_state;
}
vhost_log_put(dev, false);
} else {
vhost_dev_log_resize(dev, vhost_get_log_size(dev));
r = vhost_dev_set_log(dev, true);
if (r < 0) {
- return r;
+ goto check_dev_state;
}
}
+
+check_dev_state:
dev->log_enabled = enable;
- return 0;
+ /*
+ * vhost-user-* devices could change their state during log
+ * initialization due to a disconnect. So check the device state
+ * after the vhost communication.
+ */
+ if (!dev->started) {
+ /*
+ * Since the device is in the stopped state, it is okay for
+ * migration. Return success.
+ */
+ r = 0;
+ }
+ if (r) {
+ /* An error occurred. */
+ dev->log_enabled = false;
+ }
+
+ return r;
}
static void vhost_log_global_start(MemoryListener *listener)
diff --git a/include/hw/virtio/vhost-user-blk.h b/include/hw/virtio/vhost-user-blk.h
index 292d171..4d6f4c0 100644
--- a/include/hw/virtio/vhost-user-blk.h
+++ b/include/hw/virtio/vhost-user-blk.h
@@ -40,7 +40,17 @@ typedef struct VHostUserBlk {
VhostUserState vhost_user;
struct vhost_virtqueue *vhost_vqs;
VirtQueue **virtqs;
+
+ /*
+ * There are at least two steps of initialization of the
+ * vhost-user device. The first is a "connect" step and
+ * second is a "start" step. Make a separation between
+ * those initialization phases by using two fields.
+ */
+ /* vhost_user_blk_connect/vhost_user_blk_disconnect */
bool connected;
+ /* vhost_user_blk_start/vhost_user_blk_stop */
+ bool started_vu;
} VHostUserBlk;
#endif
--
2.7.4
* [PATCH v3 2/7] vhost: check queue state in the vhost_dev_set_log routine
2020-08-31 9:27 [PATCH v3 0/7] vhost-user-blk: fix the migration issue and enhance qtests Dima Stepanov
2020-08-31 9:27 ` [PATCH v3 1/7] vhost: recheck dev state in the vhost_migration_log routine Dima Stepanov
@ 2020-08-31 9:27 ` Dima Stepanov
2020-08-31 9:27 ` [PATCH v3 3/7] tests/qtest/vhost-user-test: prepare the tests for adding new dev class Dima Stepanov
` (4 subsequent siblings)
6 siblings, 0 replies; 9+ messages in thread
From: Dima Stepanov @ 2020-08-31 9:27 UTC (permalink / raw)
To: qemu-devel
Cc: kwolf, lvivier, thuth, qemu-block, mst, jasowang, dgilbert,
mreitz, fengli, yc-core, pbonzini, raphael.norwitz
If the vhost-user-blk daemon provides only one virtqueue, but the device was
added with several queues, then QEMU will send more VHOST-USER commands
than the daemon side expects. The vhost_virtqueue_start() routine
handles such a case by checking the return value of the
virtio_queue_get_desc_addr() function call. Add the same check to the
vhost_dev_set_log() routine.
Signed-off-by: Dima Stepanov <dimastep@yandex-team.ru>
---
hw/virtio/vhost.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index ffef7ab..a33ffd4 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -825,12 +825,24 @@ static int vhost_dev_set_features(struct vhost_dev *dev,
static int vhost_dev_set_log(struct vhost_dev *dev, bool enable_log)
{
int r, i, idx;
+ hwaddr addr;
+
r = vhost_dev_set_features(dev, enable_log);
if (r < 0) {
goto err_features;
}
for (i = 0; i < dev->nvqs; ++i) {
idx = dev->vhost_ops->vhost_get_vq_index(dev, dev->vq_index + i);
+ addr = virtio_queue_get_desc_addr(dev->vdev, idx);
+ if (!addr) {
+ /*
+ * The queue might not be ready to start. If that is the
+ * case, there is no reason to continue the process.
+ * Similar logic is used by the vhost_virtqueue_start()
+ * routine.
+ */
+ break;
+ }
r = vhost_virtqueue_set_addr(dev, dev->vqs + i, idx,
enable_log);
if (r < 0) {
--
2.7.4
* [PATCH v3 3/7] tests/qtest/vhost-user-test: prepare the tests for adding new dev class
2020-08-31 9:27 [PATCH v3 0/7] vhost-user-blk: fix the migration issue and enhance qtests Dima Stepanov
2020-08-31 9:27 ` [PATCH v3 1/7] vhost: recheck dev state in the vhost_migration_log routine Dima Stepanov
2020-08-31 9:27 ` [PATCH v3 2/7] vhost: check queue state in the vhost_dev_set_log routine Dima Stepanov
@ 2020-08-31 9:27 ` Dima Stepanov
2020-08-31 9:27 ` [PATCH v3 4/7] tests/qtest/libqos/virtio-blk: add support for vhost-user-blk Dima Stepanov
` (3 subsequent siblings)
6 siblings, 0 replies; 9+ messages in thread
From: Dima Stepanov @ 2020-08-31 9:27 UTC (permalink / raw)
To: qemu-devel
Cc: kwolf, lvivier, thuth, qemu-block, mst, jasowang, dgilbert,
mreitz, fengli, yc-core, pbonzini, raphael.norwitz
For now only the vhost-user-net device is supported by the test; other
vhost-user devices are not tested. As a first step, refactor the source
code so that new devices can reuse the same test routines. To do this,
provide a new vhost_user_ops structure with methods to initialize a
device, build its command line, and produce the proper vhost-user
responses.
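The intent is that a new device class only has to provide its own ops and
pass them as the test argument, roughly like this (a hypothetical sketch
with made-up "foo" names; the real vhost-user-blk ops are added later in
this series):

    static struct vhost_user_ops g_vu_foo_ops = {
        .type = VHOST_USER_FOO,               /* hypothetical device type */
        .append_opts = append_vhost_foo_opts, /* builds the device cmdline */
        .set_features = vu_foo_set_features,
        .get_protocol_features = vu_foo_get_protocol_features,
    };

    /* in register_vhost_user_test(): */
    opts.arg = &g_vu_foo_ops;
    opts.before = vhost_user_test_setup;
    qos_add_test("some-test", "foo-device", test_something, &opts);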
Signed-off-by: Dima Stepanov <dimastep@yandex-team.ru>
---
tests/qtest/vhost-user-test.c | 105 ++++++++++++++++++++++++++++++------------
1 file changed, 76 insertions(+), 29 deletions(-)
diff --git a/tests/qtest/vhost-user-test.c b/tests/qtest/vhost-user-test.c
index 9ee0f1e..3df5322 100644
--- a/tests/qtest/vhost-user-test.c
+++ b/tests/qtest/vhost-user-test.c
@@ -135,6 +135,10 @@ enum {
TEST_FLAGS_END,
};
+enum {
+ VHOST_USER_NET,
+};
+
typedef struct TestServer {
gchar *socket_path;
gchar *mig_path;
@@ -154,10 +158,25 @@ typedef struct TestServer {
bool test_fail;
int test_flags;
int queues;
+ struct vhost_user_ops *vu_ops;
} TestServer;
+struct vhost_user_ops {
+ /* Device types. */
+ int type;
+ void (*append_opts)(TestServer *s, GString *cmd_line,
+ const char *chr_opts);
+
+ /* VHOST-USER commands. */
+ void (*set_features)(TestServer *s, CharBackend *chr,
+ VhostUserMsg *msg);
+ void (*get_protocol_features)(TestServer *s,
+ CharBackend *chr, VhostUserMsg *msg);
+};
+
static const char *init_hugepagefs(void);
-static TestServer *test_server_new(const gchar *name);
+static TestServer *test_server_new(const gchar *name,
+ struct vhost_user_ops *ops);
static void test_server_free(TestServer *server);
static void test_server_listen(TestServer *server);
@@ -167,7 +186,7 @@ enum test_memfd {
TEST_MEMFD_NO,
};
-static void append_vhost_opts(TestServer *s, GString *cmd_line,
+static void append_vhost_net_opts(TestServer *s, GString *cmd_line,
const char *chr_opts)
{
g_string_append_printf(cmd_line, QEMU_CMD_CHR QEMU_CMD_NETDEV,
@@ -332,25 +351,15 @@ static void chr_read(void *opaque, const uint8_t *buf, int size)
break;
case VHOST_USER_SET_FEATURES:
- g_assert_cmpint(msg.payload.u64 & (0x1ULL << VHOST_USER_F_PROTOCOL_FEATURES),
- !=, 0ULL);
- if (s->test_flags == TEST_FLAGS_DISCONNECT) {
- qemu_chr_fe_disconnect(chr);
- s->test_flags = TEST_FLAGS_BAD;
+ if (s->vu_ops->set_features) {
+ s->vu_ops->set_features(s, chr, &msg);
}
break;
case VHOST_USER_GET_PROTOCOL_FEATURES:
- /* send back features to qemu */
- msg.flags |= VHOST_USER_REPLY_MASK;
- msg.size = sizeof(m.payload.u64);
- msg.payload.u64 = 1 << VHOST_USER_PROTOCOL_F_LOG_SHMFD;
- msg.payload.u64 |= 1 << VHOST_USER_PROTOCOL_F_CROSS_ENDIAN;
- if (s->queues > 1) {
- msg.payload.u64 |= 1 << VHOST_USER_PROTOCOL_F_MQ;
+ if (s->vu_ops->get_protocol_features) {
+ s->vu_ops->get_protocol_features(s, chr, &msg);
}
- p = (uint8_t *) &msg;
- qemu_chr_fe_write_all(chr, p, VHOST_USER_HDR_SIZE + msg.size);
break;
case VHOST_USER_GET_VRING_BASE:
@@ -467,7 +476,8 @@ static const char *init_hugepagefs(void)
#endif
}
-static TestServer *test_server_new(const gchar *name)
+static TestServer *test_server_new(const gchar *name,
+ struct vhost_user_ops *ops)
{
TestServer *server = g_new0(TestServer, 1);
char template[] = "/tmp/vhost-test-XXXXXX";
@@ -495,6 +505,7 @@ static TestServer *test_server_new(const gchar *name)
server->log_fd = -1;
server->queues = 1;
+ server->vu_ops = ops;
return server;
}
@@ -669,11 +680,11 @@ static void vhost_user_test_cleanup(void *s)
static void *vhost_user_test_setup(GString *cmd_line, void *arg)
{
- TestServer *server = test_server_new("vhost-user-test");
+ TestServer *server = test_server_new("vhost-user-test", arg);
test_server_listen(server);
append_mem_opts(server, cmd_line, 256, TEST_MEMFD_AUTO);
- append_vhost_opts(server, cmd_line, "");
+ server->vu_ops->append_opts(server, cmd_line, "");
g_test_queue_destroy(vhost_user_test_cleanup, server);
@@ -682,11 +693,11 @@ static void *vhost_user_test_setup(GString *cmd_line, void *arg)
static void *vhost_user_test_setup_memfd(GString *cmd_line, void *arg)
{
- TestServer *server = test_server_new("vhost-user-test");
+ TestServer *server = test_server_new("vhost-user-test", arg);
test_server_listen(server);
append_mem_opts(server, cmd_line, 256, TEST_MEMFD_YES);
- append_vhost_opts(server, cmd_line, "");
+ server->vu_ops->append_opts(server, cmd_line, "");
g_test_queue_destroy(vhost_user_test_cleanup, server);
@@ -720,7 +731,7 @@ static void test_migrate(void *obj, void *arg, QGuestAllocator *alloc)
return;
}
- dest = test_server_new("dest");
+ dest = test_server_new("dest", s->vu_ops);
dest_cmdline = g_string_new(qos_get_current_command_line());
uri = g_strdup_printf("%s%s", "unix:", dest->mig_path);
@@ -730,7 +741,7 @@ static void test_migrate(void *obj, void *arg, QGuestAllocator *alloc)
test_server_listen(dest);
g_string_append_printf(dest_cmdline, " -incoming %s", uri);
append_mem_opts(dest, dest_cmdline, 256, TEST_MEMFD_AUTO);
- append_vhost_opts(dest, dest_cmdline, "");
+ dest->vu_ops->append_opts(dest, dest_cmdline, "");
to = qtest_init(dest_cmdline->str);
/* This would be where you call qos_allocate_objects(to, NULL), if you want
@@ -831,11 +842,11 @@ connect_thread(gpointer data)
static void *vhost_user_test_setup_reconnect(GString *cmd_line, void *arg)
{
- TestServer *s = test_server_new("reconnect");
+ TestServer *s = test_server_new("reconnect", arg);
g_thread_new("connect", connect_thread, s);
append_mem_opts(s, cmd_line, 256, TEST_MEMFD_AUTO);
- append_vhost_opts(s, cmd_line, ",server");
+ s->vu_ops->append_opts(s, cmd_line, ",server");
g_test_queue_destroy(vhost_user_test_cleanup, s);
@@ -866,13 +877,13 @@ static void test_reconnect(void *obj, void *arg, QGuestAllocator *alloc)
static void *vhost_user_test_setup_connect_fail(GString *cmd_line, void *arg)
{
- TestServer *s = test_server_new("connect-fail");
+ TestServer *s = test_server_new("connect-fail", arg);
s->test_fail = true;
g_thread_new("connect", connect_thread, s);
append_mem_opts(s, cmd_line, 256, TEST_MEMFD_AUTO);
- append_vhost_opts(s, cmd_line, ",server");
+ s->vu_ops->append_opts(s, cmd_line, ",server");
g_test_queue_destroy(vhost_user_test_cleanup, s);
@@ -881,13 +892,13 @@ static void *vhost_user_test_setup_connect_fail(GString *cmd_line, void *arg)
static void *vhost_user_test_setup_flags_mismatch(GString *cmd_line, void *arg)
{
- TestServer *s = test_server_new("flags-mismatch");
+ TestServer *s = test_server_new("flags-mismatch", arg);
s->test_flags = TEST_FLAGS_DISCONNECT;
g_thread_new("connect", connect_thread, s);
append_mem_opts(s, cmd_line, 256, TEST_MEMFD_AUTO);
- append_vhost_opts(s, cmd_line, ",server");
+ s->vu_ops->append_opts(s, cmd_line, ",server");
g_test_queue_destroy(vhost_user_test_cleanup, s);
@@ -924,11 +935,47 @@ static void test_multiqueue(void *obj, void *arg, QGuestAllocator *alloc)
wait_for_rings_started(s, s->queues * 2);
}
+static void vu_net_set_features(TestServer *s, CharBackend *chr,
+ VhostUserMsg *msg)
+{
+ g_assert_cmpint(msg->payload.u64 &
+ (0x1ULL << VHOST_USER_F_PROTOCOL_FEATURES), !=, 0ULL);
+ if (s->test_flags == TEST_FLAGS_DISCONNECT) {
+ qemu_chr_fe_disconnect(chr);
+ s->test_flags = TEST_FLAGS_BAD;
+ }
+}
+
+static void vu_net_get_protocol_features(TestServer *s, CharBackend *chr,
+ VhostUserMsg *msg)
+{
+ /* send back features to qemu */
+ msg->flags |= VHOST_USER_REPLY_MASK;
+ msg->size = sizeof(m.payload.u64);
+ msg->payload.u64 = 1 << VHOST_USER_PROTOCOL_F_LOG_SHMFD;
+ msg->payload.u64 |= 1 << VHOST_USER_PROTOCOL_F_CROSS_ENDIAN;
+ if (s->queues > 1) {
+ msg->payload.u64 |= 1 << VHOST_USER_PROTOCOL_F_MQ;
+ }
+ qemu_chr_fe_write_all(chr, (uint8_t *)msg, VHOST_USER_HDR_SIZE + msg->size);
+}
+
+/* Each VHOST-USER device should have its ops structure defined. */
+static struct vhost_user_ops g_vu_net_ops = {
+ .type = VHOST_USER_NET,
+
+ .append_opts = append_vhost_net_opts,
+
+ .set_features = vu_net_set_features,
+ .get_protocol_features = vu_net_get_protocol_features,
+};
+
static void register_vhost_user_test(void)
{
QOSGraphTestOptions opts = {
.before = vhost_user_test_setup,
.subprocess = true,
+ .arg = &g_vu_net_ops,
};
qemu_add_opts(&qemu_chardev_opts);
--
2.7.4
* [PATCH v3 4/7] tests/qtest/libqos/virtio-blk: add support for vhost-user-blk
2020-08-31 9:27 [PATCH v3 0/7] vhost-user-blk: fix the migration issue and enhance qtests Dima Stepanov
` (2 preceding siblings ...)
2020-08-31 9:27 ` [PATCH v3 3/7] tests/qtest/vhost-user-test: prepare the tests for adding new dev class Dima Stepanov
@ 2020-08-31 9:27 ` Dima Stepanov
2020-08-31 9:27 ` [PATCH v3 5/7] tests/qtest/vhost-user-test: add support for the vhost-user-blk device Dima Stepanov
` (2 subsequent siblings)
6 siblings, 0 replies; 9+ messages in thread
From: Dima Stepanov @ 2020-08-31 9:27 UTC (permalink / raw)
To: qemu-devel
Cc: kwolf, lvivier, thuth, qemu-block, mst, jasowang, dgilbert,
mreitz, fengli, yc-core, pbonzini, raphael.norwitz
Add support for the vhost-user-blk-pci device. This node can be used by
the vhost-user-blk tests. Tests for the vhost-user-blk device are added
in the following patches.
Signed-off-by: Dima Stepanov <dimastep@yandex-team.ru>
---
tests/qtest/libqos/virtio-blk.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/tests/qtest/libqos/virtio-blk.c b/tests/qtest/libqos/virtio-blk.c
index 5da0259..959c5dc 100644
--- a/tests/qtest/libqos/virtio-blk.c
+++ b/tests/qtest/libqos/virtio-blk.c
@@ -36,6 +36,9 @@ static void *qvirtio_blk_get_driver(QVirtioBlk *v_blk,
if (!g_strcmp0(interface, "virtio")) {
return v_blk->vdev;
}
+ if (!g_strcmp0(interface, "vhost-user-blk")) {
+ return v_blk;
+ }
fprintf(stderr, "%s not present in virtio-blk-device\n", interface);
g_assert_not_reached();
@@ -120,6 +123,17 @@ static void virtio_blk_register_nodes(void)
qos_node_produces("virtio-blk-pci", "virtio-blk");
g_free(arg);
+
+ /* vhost-user-blk-pci */
+ arg = g_strdup_printf("id=drv0,chardev=chdev0,addr=%x.%x",
+ PCI_SLOT, PCI_FN);
+ opts.extra_device_opts = arg;
+ add_qpci_address(&opts, &addr);
+ qos_node_create_driver("vhost-user-blk-pci", virtio_blk_pci_create);
+ qos_node_consumes("vhost-user-blk-pci", "pci-bus", &opts);
+ qos_node_produces("vhost-user-blk-pci", "vhost-user-blk");
+
+ g_free(arg);
}
libqos_init(virtio_blk_register_nodes);
--
2.7.4
* [PATCH v3 5/7] tests/qtest/vhost-user-test: add support for the vhost-user-blk device
2020-08-31 9:27 [PATCH v3 0/7] vhost-user-blk: fix the migration issue and enhance qtests Dima Stepanov
` (3 preceding siblings ...)
2020-08-31 9:27 ` [PATCH v3 4/7] tests/qtest/libqos/virtio-blk: add support for vhost-user-blk Dima Stepanov
@ 2020-08-31 9:27 ` Dima Stepanov
2020-08-31 9:27 ` [PATCH v3 6/7] tests/qtest/vhost-user-test: add migrate_reconnect test Dima Stepanov
2020-08-31 9:27 ` [PATCH v3 7/7] tests/qtest/vhost-user-test: enable the reconnect tests Dima Stepanov
6 siblings, 0 replies; 9+ messages in thread
From: Dima Stepanov @ 2020-08-31 9:27 UTC (permalink / raw)
To: qemu-devel
Cc: kwolf, lvivier, thuth, qemu-block, mst, jasowang, dgilbert,
mreitz, fengli, yc-core, pbonzini, raphael.norwitz
Add a vhost_user_ops structure for the vhost-user-blk device class, and
add the test_reconnect and test_migrate tests for this device.
Signed-off-by: Dima Stepanov <dimastep@yandex-team.ru>
---
tests/qtest/vhost-user-test.c | 139 +++++++++++++++++++++++++++++++++++++++++-
1 file changed, 137 insertions(+), 2 deletions(-)
diff --git a/tests/qtest/vhost-user-test.c b/tests/qtest/vhost-user-test.c
index 3df5322..a8af613 100644
--- a/tests/qtest/vhost-user-test.c
+++ b/tests/qtest/vhost-user-test.c
@@ -24,6 +24,7 @@
#include "libqos/libqos.h"
#include "libqos/pci-pc.h"
#include "libqos/virtio-pci.h"
+#include "libqos/virtio-blk.h"
#include "libqos/malloc-pc.h"
#include "hw/virtio/virtio-net.h"
@@ -31,6 +32,7 @@
#include "standard-headers/linux/vhost_types.h"
#include "standard-headers/linux/virtio_ids.h"
#include "standard-headers/linux/virtio_net.h"
+#include "standard-headers/linux/virtio_blk.h"
#ifdef CONFIG_LINUX
#include <sys/vfs.h>
@@ -43,6 +45,7 @@
" -numa node,memdev=mem"
#define QEMU_CMD_CHR " -chardev socket,id=%s,path=%s%s"
#define QEMU_CMD_NETDEV " -netdev vhost-user,id=hs0,chardev=%s,vhostforce"
+#define QEMU_CMD_BLKCHR " -chardev socket,id=chdev0,path=%s%s"
#define HUGETLBFS_MAGIC 0x958458f6
@@ -55,6 +58,7 @@
#define VHOST_USER_PROTOCOL_F_MQ 0
#define VHOST_USER_PROTOCOL_F_LOG_SHMFD 1
#define VHOST_USER_PROTOCOL_F_CROSS_ENDIAN 6
+#define VHOST_USER_PROTOCOL_F_CONFIG 9
#define VHOST_LOG_PAGE 0x1000
@@ -78,6 +82,8 @@ typedef enum VhostUserRequest {
VHOST_USER_SET_PROTOCOL_FEATURES = 16,
VHOST_USER_GET_QUEUE_NUM = 17,
VHOST_USER_SET_VRING_ENABLE = 18,
+ VHOST_USER_GET_CONFIG = 24,
+ VHOST_USER_SET_CONFIG = 25,
VHOST_USER_MAX
} VhostUserRequest;
@@ -99,6 +105,14 @@ typedef struct VhostUserLog {
uint64_t mmap_offset;
} VhostUserLog;
+#define VHOST_USER_MAX_CONFIG_SIZE 256
+typedef struct VhostUserConfig {
+ uint32_t offset;
+ uint32_t size;
+ uint32_t flags;
+ uint8_t region[VHOST_USER_MAX_CONFIG_SIZE];
+} VhostUserConfig;
+
typedef struct VhostUserMsg {
VhostUserRequest request;
@@ -114,6 +128,7 @@ typedef struct VhostUserMsg {
struct vhost_vring_addr addr;
VhostUserMemory memory;
VhostUserLog log;
+ VhostUserConfig config;
} payload;
} QEMU_PACKED VhostUserMsg;
@@ -137,6 +152,7 @@ enum {
enum {
VHOST_USER_NET,
+ VHOST_USER_BLK,
};
typedef struct TestServer {
@@ -166,12 +182,15 @@ struct vhost_user_ops {
int type;
void (*append_opts)(TestServer *s, GString *cmd_line,
const char *chr_opts);
+ void (*driver_init)(void *obj, QGuestAllocator *alloc);
/* VHOST-USER commands. */
void (*set_features)(TestServer *s, CharBackend *chr,
VhostUserMsg *msg);
void (*get_protocol_features)(TestServer *s,
CharBackend *chr, VhostUserMsg *msg);
+ void (*get_config)(TestServer *s, CharBackend *chr,
+ VhostUserMsg *msg);
};
static const char *init_hugepagefs(void);
@@ -194,6 +213,14 @@ static void append_vhost_net_opts(TestServer *s, GString *cmd_line,
chr_opts, s->chr_name);
}
+static void append_vhost_blk_opts(TestServer *s, GString *cmd_line,
+ const char *chr_opts)
+{
+ g_string_append_printf(cmd_line, QEMU_CMD_BLKCHR,
+ s->socket_path,
+ chr_opts);
+}
+
static void append_mem_opts(TestServer *server, GString *cmd_line,
int size, enum test_memfd memfd)
{
@@ -425,6 +452,12 @@ static void chr_read(void *opaque, const uint8_t *buf, int size)
qemu_chr_fe_write_all(chr, p, VHOST_USER_HDR_SIZE + msg.size);
break;
+ case VHOST_USER_GET_CONFIG:
+ if (s->vu_ops->get_config) {
+ s->vu_ops->get_config(s, chr, &msg);
+ }
+ break;
+
default:
break;
}
@@ -727,6 +760,9 @@ static void test_migrate(void *obj, void *arg, QGuestAllocator *alloc)
guint8 *log;
guint64 size;
+ if (s->vu_ops->driver_init) {
+ s->vu_ops->driver_init(obj, alloc);
+ }
if (!wait_for_fds(s)) {
return;
}
@@ -796,6 +832,24 @@ static void test_migrate(void *obj, void *arg, QGuestAllocator *alloc)
g_string_free(dest_cmdline, true);
}
+static void vu_blk_driver_init(void *obj, QGuestAllocator *alloc)
+{
+ QVirtioBlk *blk_if;
+ QVirtioDevice *dev;
+ QVirtQueue *vq;
+ uint64_t features;
+
+ blk_if = obj;
+ dev = blk_if->vdev;
+ features = qvirtio_get_features(dev);
+ qvirtio_set_features(dev, features);
+
+ vq = qvirtqueue_setup(dev, alloc, 0);
+ g_assert(vq);
+
+ qvirtio_set_driver_ok(dev);
+}
+
static void wait_for_rings_started(TestServer *s, size_t count)
{
gint64 end_time;
@@ -857,12 +911,21 @@ static void test_reconnect(void *obj, void *arg, QGuestAllocator *alloc)
{
TestServer *s = arg;
GSource *src;
+ int nq;
+ if (s->vu_ops->driver_init) {
+ s->vu_ops->driver_init(obj, alloc);
+ }
if (!wait_for_fds(s)) {
return;
}
- wait_for_rings_started(s, 2);
+ nq = 1;
+ if (s->vu_ops->type == VHOST_USER_NET) {
+ /* tx and rx queues */
+ nq = 2;
+ }
+ wait_for_rings_started(s, nq);
/* reconnect */
s->fds_num = 0;
@@ -872,7 +935,7 @@ static void test_reconnect(void *obj, void *arg, QGuestAllocator *alloc)
g_source_attach(src, s->context);
g_source_unref(src);
g_assert(wait_for_fds(s));
- wait_for_rings_started(s, 2);
+ wait_for_rings_started(s, nq);
}
static void *vhost_user_test_setup_connect_fail(GString *cmd_line, void *arg)
@@ -960,6 +1023,56 @@ static void vu_net_get_protocol_features(TestServer *s, CharBackend *chr,
qemu_chr_fe_write_all(chr, (uint8_t *)msg, VHOST_USER_HDR_SIZE + msg->size);
}
+static void vu_blk_set_features(TestServer *s, CharBackend *chr,
+ VhostUserMsg *msg)
+{
+ if (s->test_flags == TEST_FLAGS_DISCONNECT) {
+ qemu_chr_fe_disconnect(chr);
+ s->test_flags = TEST_FLAGS_BAD;
+ }
+}
+
+static void vu_blk_get_protocol_features(TestServer *s,
+ CharBackend *chr, VhostUserMsg *msg)
+{
+ /* send back features to qemu */
+ msg->flags |= VHOST_USER_REPLY_MASK;
+ msg->size = sizeof(m.payload.u64);
+ msg->payload.u64 = 1 << VHOST_USER_PROTOCOL_F_LOG_SHMFD;
+ msg->payload.u64 |= 1 << VHOST_USER_PROTOCOL_F_CONFIG;
+ if (s->queues > 1) {
+ msg->payload.u64 |= 1 << VHOST_USER_PROTOCOL_F_MQ;
+ }
+ qemu_chr_fe_write_all(chr, (uint8_t *)msg, VHOST_USER_HDR_SIZE + msg->size);
+}
+
+static void vu_blk_get_config(TestServer *s, CharBackend *chr,
+ VhostUserMsg *msg)
+{
+ VhostUserConfig *config;
+ struct virtio_blk_config *blk_config;
+
+ config = &msg->payload.config;
+ memset(config, 0, sizeof(*config));
+ config->size = sizeof(*blk_config);
+
+ blk_config = (struct virtio_blk_config *)&config->region;
+ /*
+ * Represent a 128 MiB test disk (262144 512-byte sectors),
+ * with no real backend, just to test vhost-user functionality.
+ */
+ blk_config->capacity = 262144;
+ blk_config->size_max = 0x20000;
+ blk_config->seg_max = 0x7e;
+ blk_config->blk_size = 512;
+ blk_config->min_io_size = 0x1;
+ blk_config->num_queues = 0x1;
+
+ msg->size = sizeof(*config) - sizeof(config->region) + config->size;
+ msg->flags |= VHOST_USER_REPLY_MASK;
+ qemu_chr_fe_write_all(chr, (uint8_t *)msg, VHOST_USER_HDR_SIZE + msg->size);
+}
+
/* Each VHOST-USER device should have its ops structure defined. */
static struct vhost_user_ops g_vu_net_ops = {
.type = VHOST_USER_NET,
@@ -970,6 +1083,17 @@ static struct vhost_user_ops g_vu_net_ops = {
.get_protocol_features = vu_net_get_protocol_features,
};
+static struct vhost_user_ops g_vu_blk_ops = {
+ .type = VHOST_USER_BLK,
+
+ .append_opts = append_vhost_blk_opts,
+ .driver_init = vu_blk_driver_init,
+
+ .set_features = vu_blk_set_features,
+ .get_protocol_features = vu_blk_get_protocol_features,
+ .get_config = vu_blk_get_config,
+};
+
static void register_vhost_user_test(void)
{
QOSGraphTestOptions opts = {
@@ -1015,5 +1139,16 @@ static void register_vhost_user_test(void)
qos_add_test("vhost-user/multiqueue",
"virtio-net",
test_multiqueue, &opts);
+ opts.edge.extra_device_opts = NULL;
+
+ /* vhost-user-blk tests */
+ opts.arg = &g_vu_blk_ops;
+ opts.before = vhost_user_test_setup_reconnect;
+ qos_add_test("reconnect", "vhost-user-blk",
+ test_reconnect, &opts);
+
+ opts.before = vhost_user_test_setup_memfd;
+ qos_add_test("migrate", "vhost-user-blk",
+ test_migrate, &opts);
}
libqos_init(register_vhost_user_test);
--
2.7.4
* [PATCH v3 6/7] tests/qtest/vhost-user-test: add migrate_reconnect test
2020-08-31 9:27 [PATCH v3 0/7] vhost-user-blk: fix the migration issue and enhance qtests Dima Stepanov
` (4 preceding siblings ...)
2020-08-31 9:27 ` [PATCH v3 5/7] tests/qtest/vhost-user-test: add support for the vhost-user-blk device Dima Stepanov
@ 2020-08-31 9:27 ` Dima Stepanov
2020-08-31 9:27 ` [PATCH v3 7/7] tests/qtest/vhost-user-test: enable the reconnect tests Dima Stepanov
6 siblings, 0 replies; 9+ messages in thread
From: Dima Stepanov @ 2020-08-31 9:27 UTC (permalink / raw)
To: qemu-devel
Cc: kwolf, lvivier, thuth, qemu-block, mst, jasowang, dgilbert,
mreitz, fengli, yc-core, pbonzini, raphael.norwitz
Add a new migrate_reconnect test for the vhost-user-blk device. It performs
a disconnect after sending the response to the VHOST_USER_SET_LOG_BASE
command.
Signed-off-by: Dima Stepanov <dimastep@yandex-team.ru>
---
tests/qtest/vhost-user-test.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
diff --git a/tests/qtest/vhost-user-test.c b/tests/qtest/vhost-user-test.c
index a8af613..4b715d3 100644
--- a/tests/qtest/vhost-user-test.c
+++ b/tests/qtest/vhost-user-test.c
@@ -146,6 +146,7 @@ static VhostUserMsg m __attribute__ ((unused));
enum {
TEST_FLAGS_OK,
TEST_FLAGS_DISCONNECT,
+ TEST_FLAGS_MIGRATE_DISCONNECT,
TEST_FLAGS_BAD,
TEST_FLAGS_END,
};
@@ -436,6 +437,15 @@ static void chr_read(void *opaque, const uint8_t *buf, int size)
qemu_chr_fe_write_all(chr, p, VHOST_USER_HDR_SIZE);
g_cond_broadcast(&s->data_cond);
+ /*
+ * Perform a disconnect after sending a response. In this
+ * case the next write command on the QEMU side (currently
+ * SET_FEATURES) will return -1 because of the disconnect.
+ */
+ if (s->test_flags == TEST_FLAGS_MIGRATE_DISCONNECT) {
+ qemu_chr_fe_disconnect(chr);
+ s->test_flags = TEST_FLAGS_BAD;
+ }
break;
case VHOST_USER_SET_VRING_BASE:
@@ -737,6 +747,17 @@ static void *vhost_user_test_setup_memfd(GString *cmd_line, void *arg)
return server;
}
+static void *vhost_user_test_setup_migrate_reconnect(GString *cmd_line,
+ void *arg)
+{
+ TestServer *server;
+
+ server = vhost_user_test_setup_memfd(cmd_line, arg);
+ server->test_flags = TEST_FLAGS_MIGRATE_DISCONNECT;
+
+ return server;
+}
+
static void test_read_guest_mem(void *obj, void *arg, QGuestAllocator *alloc)
{
TestServer *server = arg;
@@ -1150,5 +1171,9 @@ static void register_vhost_user_test(void)
opts.before = vhost_user_test_setup_memfd;
qos_add_test("migrate", "vhost-user-blk",
test_migrate, &opts);
+
+ opts.before = vhost_user_test_setup_migrate_reconnect;
+ qos_add_test("migrate_reconnect", "vhost-user-blk",
+ test_migrate, &opts);
}
libqos_init(register_vhost_user_test);
--
2.7.4
* [PATCH v3 7/7] tests/qtest/vhost-user-test: enable the reconnect tests
2020-08-31 9:27 [PATCH v3 0/7] vhost-user-blk: fix the migration issue and enhance qtests Dima Stepanov
` (5 preceding siblings ...)
2020-08-31 9:27 ` [PATCH v3 6/7] tests/qtest/vhost-user-test: add migrate_reconnect test Dima Stepanov
@ 2020-08-31 9:27 ` Dima Stepanov
6 siblings, 0 replies; 9+ messages in thread
From: Dima Stepanov @ 2020-08-31 9:27 UTC (permalink / raw)
To: qemu-devel
Cc: kwolf, lvivier, thuth, qemu-block, mst, jasowang, dgilbert,
mreitz, fengli, yc-core, pbonzini, raphael.norwitz
For now a QTEST_VHOST_USER_FIXME environment variable is used to
gate the reconnect tests for the vhost-user-net device. The reconnect
functionality now looks pretty stable, so this separation is no longer
needed. Remove it and enable these tests in the default run.
Signed-off-by: Dima Stepanov <dimastep@yandex-team.ru>
---
tests/qtest/vhost-user-test.c | 25 +++++++++++--------------
1 file changed, 11 insertions(+), 14 deletions(-)
diff --git a/tests/qtest/vhost-user-test.c b/tests/qtest/vhost-user-test.c
index 4b715d3..4b96312 100644
--- a/tests/qtest/vhost-user-test.c
+++ b/tests/qtest/vhost-user-test.c
@@ -1140,20 +1140,17 @@ static void register_vhost_user_test(void)
"virtio-net",
test_migrate, &opts);
- /* keeps failing on build-system since Aug 15 2017 */
- if (getenv("QTEST_VHOST_USER_FIXME")) {
- opts.before = vhost_user_test_setup_reconnect;
- qos_add_test("vhost-user/reconnect", "virtio-net",
- test_reconnect, &opts);
-
- opts.before = vhost_user_test_setup_connect_fail;
- qos_add_test("vhost-user/connect-fail", "virtio-net",
- test_vhost_user_started, &opts);
-
- opts.before = vhost_user_test_setup_flags_mismatch;
- qos_add_test("vhost-user/flags-mismatch", "virtio-net",
- test_vhost_user_started, &opts);
- }
+ opts.before = vhost_user_test_setup_reconnect;
+ qos_add_test("vhost-user/reconnect", "virtio-net",
+ test_reconnect, &opts);
+
+ opts.before = vhost_user_test_setup_connect_fail;
+ qos_add_test("vhost-user/connect-fail", "virtio-net",
+ test_vhost_user_started, &opts);
+
+ opts.before = vhost_user_test_setup_flags_mismatch;
+ qos_add_test("vhost-user/flags-mismatch", "virtio-net",
+ test_vhost_user_started, &opts);
opts.before = vhost_user_test_setup_multiqueue;
opts.edge.extra_device_opts = "mq=on";
--
2.7.4
* Re: [PATCH v3 1/7] vhost: recheck dev state in the vhost_migration_log routine
2020-08-31 9:27 ` [PATCH v3 1/7] vhost: recheck dev state in the vhost_migration_log routine Dima Stepanov
@ 2020-09-01 3:49 ` Raphael Norwitz
0 siblings, 0 replies; 9+ messages in thread
From: Raphael Norwitz @ 2020-09-01 3:49 UTC (permalink / raw)
To: Dima Stepanov
Cc: Kevin Wolf, Laurent Vivier, Thomas Huth, qemu-block,
Michael S. Tsirkin, jasowang, QEMU, Dr. David Alan Gilbert,
Raphael Norwitz, fengli, yc-core, Paolo Bonzini, Max Reitz
On Mon, Aug 31, 2020 at 5:28 AM Dima Stepanov <dimastep@yandex-team.ru> wrote:
>
> vhost-user devices can get a disconnect in the middle of the VHOST-USER
> handshake when migration starts. If the disconnect happens right before
> the next VHOST-USER command is sent, then the vhost_dev_set_log() call
> in the vhost_migration_log() function will return an error. This error
> will trigger the assert() and terminate the QEMU migration source process.
> For the vhost-user devices the disconnect event should not break the
> migration process, because:
> - the device will be in the stopped state, so it will not be changed
> during migration
> - if a reconnect happens, the migration log will be reinitialized as
> part of the reconnect/init process:
> #0 vhost_log_global_start (listener=0x563989cf7be0)
> at hw/virtio/vhost.c:920
> #1 0x000056398603d8bc in listener_add_address_space (listener=0x563989cf7be0,
> as=0x563986ea4340 <address_space_memory>)
> at softmmu/memory.c:2664
> #2 0x000056398603dd30 in memory_listener_register (listener=0x563989cf7be0,
> as=0x563986ea4340 <address_space_memory>)
> at softmmu/memory.c:2740
> #3 0x0000563985fd6956 in vhost_dev_init (hdev=0x563989cf7bd8,
> opaque=0x563989cf7e30, backend_type=VHOST_BACKEND_TYPE_USER,
> busyloop_timeout=0)
> at hw/virtio/vhost.c:1385
> #4 0x0000563985f7d0b8 in vhost_user_blk_connect (dev=0x563989cf7990)
> at hw/block/vhost-user-blk.c:315
> #5 0x0000563985f7d3f6 in vhost_user_blk_event (opaque=0x563989cf7990,
> event=CHR_EVENT_OPENED)
> at hw/block/vhost-user-blk.c:379
> Update the vhost-user-blk device with an internal started_vu field which
> will be used for initialization (vhost_user_blk_start) and cleanup
> (vhost_user_blk_stop). This additional flag in the VHostUserBlk structure
> will be used to track whether the device really needs to be stopped and
> cleaned up at the vhost-user level.
> The disconnect event will set the overall vhost device (not vhost-user) to
> the stopped state, so it can be used by the generic vhost_migration_log
> routine.
> Such an approach could be propagated to the other vhost-user devices, but
> a better idea is to share the same connect/disconnect code across all
> vhost-user devices.
>
> This migration issue was slightly discussed earlier:
> - https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg01509.html
> - https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg05241.html
>
> Signed-off-by: Dima Stepanov <dimastep@yandex-team.ru>
Reviewed-by: Raphael Norwitz <raphael.norwitz@nutanix.com>