* [PATCH v7 0/5] Implement reconnect for vhost-user-scsi
@ 2023-10-08 9:12 Li Feng
2023-10-08 9:12 ` [PATCH v7 1/5] vhost-user-common: send get_inflight_fd once Li Feng
` (4 more replies)
0 siblings, 5 replies; 8+ messages in thread
From: Li Feng @ 2023-10-08 9:12 UTC (permalink / raw)
To: Markus Armbruster, Michael S. Tsirkin, Raphael Norwitz,
Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
Alex Bennée, Viresh Kumar, open list:Block layer core,
open list:All patches CC here
Cc: Li Feng
Changes for v7:
- [PATCH 3/5] vhost-user-scsi: support reconnect to backend
- Add reporting the error in vhost-scsi;
- Rebase to master and fix the conflict.
- Add "Reviewed-by" tags.
Changes for v6:
- [PATCH] vhost-user: fix lost reconnect
- Fix missing assignment of event_cb.
Changes for v5:
- No logic changes; just move part of the code from patch 4 to patch 5.
Changes for v4:
- Merge
https://lore.kernel.org/all/20230830045722.611224-1-fengli@smartx.com/ to
this series.
- Add ERRP_GUARD in vhost_user_scsi_realize;
- Reword the commit messages.
Changes for v3:
- Split the vhost_user_scsi_handle_output to a separate patch;
- Move the started_vu from vhost scsi common header to vhost-user-scsi header;
- Fix a log print error;
Changes for v2:
- Split the v1 patch to small separate patchset;
- New patch for fixing an fd leak, which was sent to reviewers in another
mail;
- Implement the `vhost_user_scsi_handle_output`;
- Add the started_vu safe check;
- Fix error handler;
- Check the inflight before set/get inflight fd.
Li Feng (5):
vhost-user-common: send get_inflight_fd once
vhost: move and rename the conn retry times
vhost-user-scsi: support reconnect to backend
vhost-user-scsi: start vhost when guest kicks
vhost-user: fix lost reconnect
hw/block/vhost-user-blk.c | 6 +-
hw/scsi/vhost-scsi-common.c | 47 ++---
hw/scsi/vhost-scsi.c | 6 +-
hw/scsi/vhost-user-scsi.c | 253 +++++++++++++++++++++++---
hw/virtio/vhost-user-gpio.c | 5 +-
hw/virtio/vhost-user.c | 10 +-
include/hw/virtio/vhost-scsi-common.h | 2 +-
include/hw/virtio/vhost-user-scsi.h | 4 +
include/hw/virtio/vhost-user.h | 3 +-
include/hw/virtio/vhost.h | 2 +
10 files changed, 278 insertions(+), 60 deletions(-)
--
2.41.0
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH v7 1/5] vhost-user-common: send get_inflight_fd once
2023-10-08 9:12 [PATCH v7 0/5] Implement reconnect for vhost-user-scsi Li Feng
@ 2023-10-08 9:12 ` Li Feng
2023-10-08 9:12 ` [PATCH v7 2/5] vhost: move and rename the conn retry times Li Feng
` (3 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Li Feng @ 2023-10-08 9:12 UTC (permalink / raw)
To: Markus Armbruster, Raphael Norwitz, Michael S. Tsirkin,
Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
Alex Bennée, Viresh Kumar, open list:Block layer core,
open list:All patches CC here
Cc: Li Feng
Currently, get_inflight_fd is sent every time the device is started, and
the backend allocates shared memory to save the inflight state. If the
backend receives a second get_inflight_fd, it releases the previous shared
memory, which breaks the inflight tracking logic.
This patch is a preparation for the following patches.
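The fetch-once guard the patch introduces can be pictured with a tiny stand-alone sketch (all names here are illustrative stand-ins, not the actual QEMU API): the shared region is requested from the backend only while it is still unmapped, so a device restart re-sends the existing mapping instead of asking for a fresh one.

```c
#include <stdlib.h>

/* Hypothetical miniature of the inflight handshake. */
struct inflight {
    void *addr;             /* shared memory mapped from the backend */
};

static int backend_alloc_count; /* how often the backend (re)allocated */

/* Stands in for vhost_dev_get_inflight(): ask the backend for the fd.
 * Each call makes the backend drop the old region and allocate a new one,
 * which is exactly what must not happen across restarts. */
static void get_inflight(struct inflight *inf)
{
    backend_alloc_count++;
    inf->addr = malloc(64);
}

/* Device start path: fetch the region only on the very first start;
 * later (re)starts reuse the mapping and merely re-send it via the
 * set_inflight equivalent. */
static void device_start(struct inflight *inf)
{
    if (!inf->addr) {
        get_inflight(inf);
    }
    /* ... vhost_dev_set_inflight(inf) would follow here ... */
}
```

Starting the device twice now touches the backend allocator only once, matching the lifetime change in the diff below (allocation moves out of vhost_scsi_common_start()).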
Signed-off-by: Li Feng <fengli@smartx.com>
Reviewed-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
hw/scsi/vhost-scsi-common.c | 37 ++++++++++++++++++-------------------
1 file changed, 18 insertions(+), 19 deletions(-)
diff --git a/hw/scsi/vhost-scsi-common.c b/hw/scsi/vhost-scsi-common.c
index a06f01af26..a61cd0e907 100644
--- a/hw/scsi/vhost-scsi-common.c
+++ b/hw/scsi/vhost-scsi-common.c
@@ -52,20 +52,28 @@ int vhost_scsi_common_start(VHostSCSICommon *vsc)
vsc->dev.acked_features = vdev->guest_features;
- assert(vsc->inflight == NULL);
- vsc->inflight = g_new0(struct vhost_inflight, 1);
- ret = vhost_dev_get_inflight(&vsc->dev,
- vs->conf.virtqueue_size,
- vsc->inflight);
+ ret = vhost_dev_prepare_inflight(&vsc->dev, vdev);
if (ret < 0) {
- error_report("Error get inflight: %d", -ret);
+ error_report("Error setting inflight format: %d", -ret);
goto err_guest_notifiers;
}
- ret = vhost_dev_set_inflight(&vsc->dev, vsc->inflight);
- if (ret < 0) {
- error_report("Error set inflight: %d", -ret);
- goto err_guest_notifiers;
+ if (vsc->inflight) {
+ if (!vsc->inflight->addr) {
+ ret = vhost_dev_get_inflight(&vsc->dev,
+ vs->conf.virtqueue_size,
+ vsc->inflight);
+ if (ret < 0) {
+ error_report("Error getting inflight: %d", -ret);
+ goto err_guest_notifiers;
+ }
+ }
+
+ ret = vhost_dev_set_inflight(&vsc->dev, vsc->inflight);
+ if (ret < 0) {
+ error_report("Error setting inflight: %d", -ret);
+ goto err_guest_notifiers;
+ }
}
ret = vhost_dev_start(&vsc->dev, vdev, true);
@@ -85,9 +93,6 @@ int vhost_scsi_common_start(VHostSCSICommon *vsc)
return ret;
err_guest_notifiers:
- g_free(vsc->inflight);
- vsc->inflight = NULL;
-
k->set_guest_notifiers(qbus->parent, vsc->dev.nvqs, false);
err_host_notifiers:
vhost_dev_disable_notifiers(&vsc->dev, vdev);
@@ -111,12 +116,6 @@ void vhost_scsi_common_stop(VHostSCSICommon *vsc)
}
assert(ret >= 0);
- if (vsc->inflight) {
- vhost_dev_free_inflight(vsc->inflight);
- g_free(vsc->inflight);
- vsc->inflight = NULL;
- }
-
vhost_dev_disable_notifiers(&vsc->dev, vdev);
}
--
2.41.0
^ permalink raw reply related [flat|nested] 8+ messages in thread
* [PATCH v7 2/5] vhost: move and rename the conn retry times
2023-10-08 9:12 [PATCH v7 0/5] Implement reconnect for vhost-user-scsi Li Feng
2023-10-08 9:12 ` [PATCH v7 1/5] vhost-user-common: send get_inflight_fd once Li Feng
@ 2023-10-08 9:12 ` Li Feng
2023-10-08 9:12 ` [PATCH v7 3/5] vhost-user-scsi: support reconnect to backend Li Feng
` (2 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Li Feng @ 2023-10-08 9:12 UTC (permalink / raw)
To: Markus Armbruster, Michael S. Tsirkin, Raphael Norwitz,
Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
Alex Bennée, Viresh Kumar, open list:Block layer core,
open list:All patches CC here
Cc: Li Feng
Multiple devices need this macro; move it to a common header.
Signed-off-by: Li Feng <fengli@smartx.com>
Reviewed-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
hw/block/vhost-user-blk.c | 4 +---
hw/virtio/vhost-user-gpio.c | 3 +--
include/hw/virtio/vhost.h | 2 ++
3 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c
index eecf3f7a81..3c69fa47d5 100644
--- a/hw/block/vhost-user-blk.c
+++ b/hw/block/vhost-user-blk.c
@@ -32,8 +32,6 @@
#include "sysemu/sysemu.h"
#include "sysemu/runstate.h"
-#define REALIZE_CONNECTION_RETRIES 3
-
static const int user_feature_bits[] = {
VIRTIO_BLK_F_SIZE_MAX,
VIRTIO_BLK_F_SEG_MAX,
@@ -482,7 +480,7 @@ static void vhost_user_blk_device_realize(DeviceState *dev, Error **errp)
s->inflight = g_new0(struct vhost_inflight, 1);
s->vhost_vqs = g_new0(struct vhost_virtqueue, s->num_queues);
- retries = REALIZE_CONNECTION_RETRIES;
+ retries = VU_REALIZE_CONN_RETRIES;
assert(!*errp);
do {
if (*errp) {
diff --git a/hw/virtio/vhost-user-gpio.c b/hw/virtio/vhost-user-gpio.c
index 3d7fae3984..fc784e4213 100644
--- a/hw/virtio/vhost-user-gpio.c
+++ b/hw/virtio/vhost-user-gpio.c
@@ -15,7 +15,6 @@
#include "standard-headers/linux/virtio_ids.h"
#include "trace.h"
-#define REALIZE_CONNECTION_RETRIES 3
#define VHOST_NVQS 2
/* Features required from VirtIO */
@@ -365,7 +364,7 @@ static void vu_gpio_device_realize(DeviceState *dev, Error **errp)
qemu_chr_fe_set_handlers(&gpio->chardev, NULL, NULL, vu_gpio_event, NULL,
dev, NULL, true);
- retries = REALIZE_CONNECTION_RETRIES;
+ retries = VU_REALIZE_CONN_RETRIES;
g_assert(!*errp);
do {
if (*errp) {
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index 6a173cb9fa..ca3131b1af 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -8,6 +8,8 @@
#define VHOST_F_DEVICE_IOTLB 63
#define VHOST_USER_F_PROTOCOL_FEATURES 30
+#define VU_REALIZE_CONN_RETRIES 3
+
/* Generic structures common for any vhost based device. */
struct vhost_inflight {
--
2.41.0
^ permalink raw reply related [flat|nested] 8+ messages in thread
* [PATCH v7 3/5] vhost-user-scsi: support reconnect to backend
2023-10-08 9:12 [PATCH v7 0/5] Implement reconnect for vhost-user-scsi Li Feng
2023-10-08 9:12 ` [PATCH v7 1/5] vhost-user-common: send get_inflight_fd once Li Feng
2023-10-08 9:12 ` [PATCH v7 2/5] vhost: move and rename the conn retry times Li Feng
@ 2023-10-08 9:12 ` Li Feng
2023-10-08 10:46 ` Manos Pitsidianakis
2023-10-08 9:12 ` [PATCH v7 4/5] vhost-user-scsi: start vhost when guest kicks Li Feng
2023-10-08 9:12 ` [PATCH v7 5/5] vhost-user: fix lost reconnect Li Feng
4 siblings, 1 reply; 8+ messages in thread
From: Li Feng @ 2023-10-08 9:12 UTC (permalink / raw)
To: Markus Armbruster, Michael S. Tsirkin, Raphael Norwitz,
Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
Alex Bennée, Viresh Kumar, open list:Block layer core,
open list:All patches CC here
Cc: Li Feng
If the backend crashes and restarts, the device is broken.
This patch adds reconnect support for vhost-user-scsi.
It also improves the error messages and reports some previously silent
errors.
Tested with an SPDK backend.
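The realize-time retry shape used in this patch can be sketched in isolation (the backend stub and function names are hypothetical; only the retry count mirrors the VU_REALIZE_CONN_RETRIES macro from patch 2): one initial attempt plus up to three retries before realize gives up.

```c
/* Matches the macro added in patch 2; everything else is illustrative. */
#define VU_REALIZE_CONN_RETRIES 3

static int failures_left;   /* simulated backend: fail this many connects */

/* Stands in for vhost_user_scsi_realize_connect(). */
static int try_connect(void)
{
    return (failures_left-- > 0) ? -1 : 0;
}

/* Mirrors the do/while in vhost_user_scsi_realize(): the first attempt
 * is unconditional, then `retries` further attempts are allowed. */
static int realize_connect(void)
{
    int retries = VU_REALIZE_CONN_RETRIES;
    int ret;

    do {
        ret = try_connect();
    } while (ret < 0 && retries--);

    return ret;
}
```

Note the post-decrement: `ret < 0 && retries--` tests the counter before decrementing, so the loop runs at most 1 + VU_REALIZE_CONN_RETRIES times in total.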
Signed-off-by: Li Feng <fengli@smartx.com>
---
hw/scsi/vhost-scsi-common.c | 16 +-
hw/scsi/vhost-scsi.c | 6 +-
hw/scsi/vhost-user-scsi.c | 204 +++++++++++++++++++++++---
include/hw/virtio/vhost-scsi-common.h | 2 +-
include/hw/virtio/vhost-user-scsi.h | 4 +
5 files changed, 202 insertions(+), 30 deletions(-)
diff --git a/hw/scsi/vhost-scsi-common.c b/hw/scsi/vhost-scsi-common.c
index a61cd0e907..4c8637045d 100644
--- a/hw/scsi/vhost-scsi-common.c
+++ b/hw/scsi/vhost-scsi-common.c
@@ -16,6 +16,7 @@
*/
#include "qemu/osdep.h"
+#include "qapi/error.h"
#include "qemu/error-report.h"
#include "qemu/module.h"
#include "hw/virtio/vhost.h"
@@ -25,7 +26,7 @@
#include "hw/virtio/virtio-access.h"
#include "hw/fw-path-provider.h"
-int vhost_scsi_common_start(VHostSCSICommon *vsc)
+int vhost_scsi_common_start(VHostSCSICommon *vsc, Error **errp)
{
int ret, i;
VirtIODevice *vdev = VIRTIO_DEVICE(vsc);
@@ -35,18 +36,19 @@ int vhost_scsi_common_start(VHostSCSICommon *vsc)
VirtIOSCSICommon *vs = (VirtIOSCSICommon *)vsc;
if (!k->set_guest_notifiers) {
- error_report("binding does not support guest notifiers");
+ error_setg(errp, "binding does not support guest notifiers");
return -ENOSYS;
}
ret = vhost_dev_enable_notifiers(&vsc->dev, vdev);
if (ret < 0) {
+ error_setg_errno(errp, -ret, "Error enabling host notifiers");
return ret;
}
ret = k->set_guest_notifiers(qbus->parent, vsc->dev.nvqs, true);
if (ret < 0) {
- error_report("Error binding guest notifier");
+ error_setg_errno(errp, -ret, "Error binding guest notifier");
goto err_host_notifiers;
}
@@ -54,7 +56,7 @@ int vhost_scsi_common_start(VHostSCSICommon *vsc)
ret = vhost_dev_prepare_inflight(&vsc->dev, vdev);
if (ret < 0) {
- error_report("Error setting inflight format: %d", -ret);
+ error_setg_errno(errp, -ret, "Error setting inflight format");
goto err_guest_notifiers;
}
@@ -64,21 +66,21 @@ int vhost_scsi_common_start(VHostSCSICommon *vsc)
vs->conf.virtqueue_size,
vsc->inflight);
if (ret < 0) {
- error_report("Error getting inflight: %d", -ret);
+ error_setg_errno(errp, -ret, "Error getting inflight");
goto err_guest_notifiers;
}
}
ret = vhost_dev_set_inflight(&vsc->dev, vsc->inflight);
if (ret < 0) {
- error_report("Error setting inflight: %d", -ret);
+ error_setg_errno(errp, -ret, "Error setting inflight");
goto err_guest_notifiers;
}
}
ret = vhost_dev_start(&vsc->dev, vdev, true);
if (ret < 0) {
- error_report("Error start vhost dev");
+ error_setg_errno(errp, -ret, "Error starting vhost dev");
goto err_guest_notifiers;
}
diff --git a/hw/scsi/vhost-scsi.c b/hw/scsi/vhost-scsi.c
index 443f67daa4..95cadb93e7 100644
--- a/hw/scsi/vhost-scsi.c
+++ b/hw/scsi/vhost-scsi.c
@@ -75,6 +75,7 @@ static int vhost_scsi_start(VHostSCSI *s)
int ret, abi_version;
VHostSCSICommon *vsc = VHOST_SCSI_COMMON(s);
const VhostOps *vhost_ops = vsc->dev.vhost_ops;
+ Error *local_err = NULL;
ret = vhost_ops->vhost_scsi_get_abi_version(&vsc->dev, &abi_version);
if (ret < 0) {
@@ -88,14 +89,15 @@ static int vhost_scsi_start(VHostSCSI *s)
return -ENOSYS;
}
- ret = vhost_scsi_common_start(vsc);
+ ret = vhost_scsi_common_start(vsc, &local_err);
if (ret < 0) {
+ error_reportf_err(local_err, "Error starting vhost-scsi");
return ret;
}
ret = vhost_scsi_set_endpoint(s);
if (ret < 0) {
- error_report("Error setting vhost-scsi endpoint");
+ error_reportf_err(local_err, "Error setting vhost-scsi endpoint");
vhost_scsi_common_stop(vsc);
}
diff --git a/hw/scsi/vhost-user-scsi.c b/hw/scsi/vhost-user-scsi.c
index df6b66cc1a..5df24faff4 100644
--- a/hw/scsi/vhost-user-scsi.c
+++ b/hw/scsi/vhost-user-scsi.c
@@ -39,26 +39,56 @@ static const int user_feature_bits[] = {
VHOST_INVALID_FEATURE_BIT
};
+static int vhost_user_scsi_start(VHostUserSCSI *s, Error **errp)
+{
+ VHostSCSICommon *vsc = VHOST_SCSI_COMMON(s);
+ int ret;
+
+ ret = vhost_scsi_common_start(vsc, errp);
+ s->started_vu = (ret < 0 ? false : true);
+
+ return ret;
+}
+
+static void vhost_user_scsi_stop(VHostUserSCSI *s)
+{
+ VHostSCSICommon *vsc = VHOST_SCSI_COMMON(s);
+
+ if (!s->started_vu) {
+ return;
+ }
+ s->started_vu = false;
+
+ vhost_scsi_common_stop(vsc);
+}
+
static void vhost_user_scsi_set_status(VirtIODevice *vdev, uint8_t status)
{
VHostUserSCSI *s = (VHostUserSCSI *)vdev;
+ DeviceState *dev = &s->parent_obj.parent_obj.parent_obj.parent_obj;
VHostSCSICommon *vsc = VHOST_SCSI_COMMON(s);
- bool start = (status & VIRTIO_CONFIG_S_DRIVER_OK) && vdev->vm_running;
+ VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(dev);
+ bool should_start = virtio_device_should_start(vdev, status);
+ Error *local_err = NULL;
+ int ret;
- if (vhost_dev_is_started(&vsc->dev) == start) {
+ if (!s->connected) {
return;
}
- if (start) {
- int ret;
+ if (vhost_dev_is_started(&vsc->dev) == should_start) {
+ return;
+ }
- ret = vhost_scsi_common_start(vsc);
+ if (should_start) {
+ ret = vhost_user_scsi_start(s, &local_err);
if (ret < 0) {
- error_report("unable to start vhost-user-scsi: %s", strerror(-ret));
- exit(1);
+ error_reportf_err(local_err, "unable to start vhost-user-scsi: %s",
+ strerror(-ret));
+ qemu_chr_fe_disconnect(&vs->conf.chardev);
}
} else {
- vhost_scsi_common_stop(vsc);
+ vhost_user_scsi_stop(s);
}
}
@@ -85,14 +115,127 @@ static void vhost_dummy_handle_output(VirtIODevice *vdev, VirtQueue *vq)
{
}
+static int vhost_user_scsi_connect(DeviceState *dev, Error **errp)
+{
+ VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+ VHostUserSCSI *s = VHOST_USER_SCSI(vdev);
+ VHostSCSICommon *vsc = VHOST_SCSI_COMMON(s);
+ VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(dev);
+ int ret = 0;
+
+ if (s->connected) {
+ return 0;
+ }
+ s->connected = true;
+
+ vsc->dev.num_queues = vs->conf.num_queues;
+ vsc->dev.nvqs = VIRTIO_SCSI_VQ_NUM_FIXED + vs->conf.num_queues;
+ vsc->dev.vqs = s->vhost_vqs;
+ vsc->dev.vq_index = 0;
+ vsc->dev.backend_features = 0;
+
+ ret = vhost_dev_init(&vsc->dev, &s->vhost_user, VHOST_BACKEND_TYPE_USER, 0,
+ errp);
+ if (ret < 0) {
+ return ret;
+ }
+
+ /* restore vhost state */
+ if (virtio_device_started(vdev, vdev->status)) {
+ ret = vhost_user_scsi_start(s, errp);
+ if (ret < 0) {
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+static void vhost_user_scsi_event(void *opaque, QEMUChrEvent event);
+
+static void vhost_user_scsi_disconnect(DeviceState *dev)
+{
+ VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+ VHostUserSCSI *s = VHOST_USER_SCSI(vdev);
+ VHostSCSICommon *vsc = VHOST_SCSI_COMMON(s);
+ VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(dev);
+
+ if (!s->connected) {
+ return;
+ }
+ s->connected = false;
+
+ vhost_user_scsi_stop(s);
+
+ vhost_dev_cleanup(&vsc->dev);
+
+ /* Re-instate the event handler for new connections */
+ qemu_chr_fe_set_handlers(&vs->conf.chardev, NULL, NULL,
+ vhost_user_scsi_event, NULL, dev, NULL, true);
+}
+
+static void vhost_user_scsi_event(void *opaque, QEMUChrEvent event)
+{
+ DeviceState *dev = opaque;
+ VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+ VHostUserSCSI *s = VHOST_USER_SCSI(vdev);
+ VHostSCSICommon *vsc = VHOST_SCSI_COMMON(s);
+ VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(dev);
+ Error *local_err = NULL;
+
+ switch (event) {
+ case CHR_EVENT_OPENED:
+ if (vhost_user_scsi_connect(dev, &local_err) < 0) {
+ error_report_err(local_err);
+ qemu_chr_fe_disconnect(&vs->conf.chardev);
+ return;
+ }
+ break;
+ case CHR_EVENT_CLOSED:
+ /* defer close until later to avoid circular close */
+ vhost_user_async_close(dev, &vs->conf.chardev, &vsc->dev,
+ vhost_user_scsi_disconnect);
+ break;
+ case CHR_EVENT_BREAK:
+ case CHR_EVENT_MUX_IN:
+ case CHR_EVENT_MUX_OUT:
+ /* Ignore */
+ break;
+ }
+}
+
+static int vhost_user_scsi_realize_connect(VHostUserSCSI *s, Error **errp)
+{
+ DeviceState *dev = &s->parent_obj.parent_obj.parent_obj.parent_obj;
+ VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(dev);
+ int ret;
+
+ s->connected = false;
+
+ ret = qemu_chr_fe_wait_connected(&vs->conf.chardev, errp);
+ if (ret < 0) {
+ return ret;
+ }
+
+ ret = vhost_user_scsi_connect(dev, errp);
+ if (ret < 0) {
+ qemu_chr_fe_disconnect(&vs->conf.chardev);
+ return ret;
+ }
+ assert(s->connected);
+
+ return 0;
+}
+
static void vhost_user_scsi_realize(DeviceState *dev, Error **errp)
{
+ ERRP_GUARD();
VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(dev);
VHostUserSCSI *s = VHOST_USER_SCSI(dev);
VHostSCSICommon *vsc = VHOST_SCSI_COMMON(s);
- struct vhost_virtqueue *vqs = NULL;
Error *err = NULL;
int ret;
+ int retries = VU_REALIZE_CONN_RETRIES;
if (!vs->conf.chardev.chr) {
error_setg(errp, "vhost-user-scsi: missing chardev");
@@ -111,18 +254,28 @@ static void vhost_user_scsi_realize(DeviceState *dev, Error **errp)
goto free_virtio;
}
- vsc->dev.nvqs = VIRTIO_SCSI_VQ_NUM_FIXED + vs->conf.num_queues;
- vsc->dev.vqs = g_new0(struct vhost_virtqueue, vsc->dev.nvqs);
- vsc->dev.vq_index = 0;
- vsc->dev.backend_features = 0;
- vqs = vsc->dev.vqs;
+ vsc->inflight = g_new0(struct vhost_inflight, 1);
+ s->vhost_vqs = g_new0(struct vhost_virtqueue,
+ VIRTIO_SCSI_VQ_NUM_FIXED + vs->conf.num_queues);
+
+ assert(!*errp);
+ do {
+ if (*errp) {
+ error_prepend(errp, "Reconnecting after error: ");
+ error_report_err(*errp);
+ *errp = NULL;
+ }
+ ret = vhost_user_scsi_realize_connect(s, errp);
+ } while (ret < 0 && retries--);
- ret = vhost_dev_init(&vsc->dev, &s->vhost_user,
- VHOST_BACKEND_TYPE_USER, 0, errp);
if (ret < 0) {
goto free_vhost;
}
+ /* we're fully initialized, now we can operate, so add the handler */
+ qemu_chr_fe_set_handlers(&vs->conf.chardev, NULL, NULL,
+ vhost_user_scsi_event, NULL, (void *)dev,
+ NULL, true);
/* Channel and lun both are 0 for bootable vhost-user-scsi disk */
vsc->channel = 0;
vsc->lun = 0;
@@ -131,8 +284,12 @@ static void vhost_user_scsi_realize(DeviceState *dev, Error **errp)
return;
free_vhost:
+ g_free(s->vhost_vqs);
+ s->vhost_vqs = NULL;
+ g_free(vsc->inflight);
+ vsc->inflight = NULL;
vhost_user_cleanup(&s->vhost_user);
- g_free(vqs);
+
free_virtio:
virtio_scsi_common_unrealize(dev);
}
@@ -142,16 +299,23 @@ static void vhost_user_scsi_unrealize(DeviceState *dev)
VirtIODevice *vdev = VIRTIO_DEVICE(dev);
VHostUserSCSI *s = VHOST_USER_SCSI(dev);
VHostSCSICommon *vsc = VHOST_SCSI_COMMON(s);
- struct vhost_virtqueue *vqs = vsc->dev.vqs;
+ VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(dev);
/* This will stop the vhost backend. */
vhost_user_scsi_set_status(vdev, 0);
+ qemu_chr_fe_set_handlers(&vs->conf.chardev, NULL, NULL, NULL, NULL, NULL,
+ NULL, false);
vhost_dev_cleanup(&vsc->dev);
- g_free(vqs);
+ g_free(s->vhost_vqs);
+ s->vhost_vqs = NULL;
+
+ vhost_dev_free_inflight(vsc->inflight);
+ g_free(vsc->inflight);
+ vsc->inflight = NULL;
- virtio_scsi_common_unrealize(dev);
vhost_user_cleanup(&s->vhost_user);
+ virtio_scsi_common_unrealize(dev);
}
static Property vhost_user_scsi_properties[] = {
diff --git a/include/hw/virtio/vhost-scsi-common.h b/include/hw/virtio/vhost-scsi-common.h
index 18f115527c..c5d2c09455 100644
--- a/include/hw/virtio/vhost-scsi-common.h
+++ b/include/hw/virtio/vhost-scsi-common.h
@@ -39,7 +39,7 @@ struct VHostSCSICommon {
struct vhost_inflight *inflight;
};
-int vhost_scsi_common_start(VHostSCSICommon *vsc);
+int vhost_scsi_common_start(VHostSCSICommon *vsc, Error **errp);
void vhost_scsi_common_stop(VHostSCSICommon *vsc);
char *vhost_scsi_common_get_fw_dev_path(FWPathProvider *p, BusState *bus,
DeviceState *dev);
diff --git a/include/hw/virtio/vhost-user-scsi.h b/include/hw/virtio/vhost-user-scsi.h
index 521b08e559..b405ec952a 100644
--- a/include/hw/virtio/vhost-user-scsi.h
+++ b/include/hw/virtio/vhost-user-scsi.h
@@ -29,6 +29,10 @@ OBJECT_DECLARE_SIMPLE_TYPE(VHostUserSCSI, VHOST_USER_SCSI)
struct VHostUserSCSI {
VHostSCSICommon parent_obj;
VhostUserState vhost_user;
+ bool connected;
+ bool started_vu;
+
+ struct vhost_virtqueue *vhost_vqs;
};
#endif /* VHOST_USER_SCSI_H */
--
2.41.0
^ permalink raw reply related [flat|nested] 8+ messages in thread
* [PATCH v7 4/5] vhost-user-scsi: start vhost when guest kicks
2023-10-08 9:12 [PATCH v7 0/5] Implement reconnect for vhost-user-scsi Li Feng
` (2 preceding siblings ...)
2023-10-08 9:12 ` [PATCH v7 3/5] vhost-user-scsi: support reconnect to backend Li Feng
@ 2023-10-08 9:12 ` Li Feng
2023-10-08 9:12 ` [PATCH v7 5/5] vhost-user: fix lost reconnect Li Feng
4 siblings, 0 replies; 8+ messages in thread
From: Li Feng @ 2023-10-08 9:12 UTC (permalink / raw)
To: Markus Armbruster, Michael S. Tsirkin, Raphael Norwitz,
Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
Alex Bennée, Viresh Kumar, open list:Block layer core,
open list:All patches CC here
Cc: Li Feng
Keep the same behavior as vhost-user-blk: some old guests kick the
virtqueue before setting VIRTIO_CONFIG_S_DRIVER_OK.
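The decision made in the new kick handler can be reduced to a minimal stand-alone sketch (struct and function names are hypothetical): when an old guest kicks before writing DRIVER_OK, the kick itself starts the vhost backend, but only if the device is on a legacy start-on-kick path, the backend connection is up, and vhost is not already running.

```c
#include <stdbool.h>

/* Hypothetical miniature of vhost_user_scsi_handle_output(). */
struct dev {
    bool start_on_kick; /* guest kicks before VIRTIO_CONFIG_S_DRIVER_OK */
    bool connected;     /* chardev link to the backend is up */
    bool started;       /* vhost backend already running */
};

static void handle_kick(struct dev *d)
{
    if (!d->start_on_kick || !d->connected || d->started) {
        return;             /* nothing to do, or cannot start yet */
    }
    d->started = true;      /* stands in for vhost_user_scsi_start() */
    /* ... then re-kick each configured queue so requests already in the
     * vring are processed, as the loop in the diff below does ... */
}
```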
Signed-off-by: Li Feng <fengli@smartx.com>
Reviewed-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
hw/scsi/vhost-user-scsi.c | 48 +++++++++++++++++++++++++++++++++++----
1 file changed, 44 insertions(+), 4 deletions(-)
diff --git a/hw/scsi/vhost-user-scsi.c b/hw/scsi/vhost-user-scsi.c
index 5df24faff4..5afb514398 100644
--- a/hw/scsi/vhost-user-scsi.c
+++ b/hw/scsi/vhost-user-scsi.c
@@ -111,8 +111,48 @@ static void vhost_user_scsi_reset(VirtIODevice *vdev)
}
}
-static void vhost_dummy_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+static void vhost_user_scsi_handle_output(VirtIODevice *vdev, VirtQueue *vq)
{
+ VHostUserSCSI *s = (VHostUserSCSI *)vdev;
+ DeviceState *dev = &s->parent_obj.parent_obj.parent_obj.parent_obj;
+ VHostSCSICommon *vsc = VHOST_SCSI_COMMON(s);
+ VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(dev);
+
+ Error *local_err = NULL;
+ int i, ret;
+
+ if (!vdev->start_on_kick) {
+ return;
+ }
+
+ if (!s->connected) {
+ return;
+ }
+
+ if (vhost_dev_is_started(&vsc->dev)) {
+ return;
+ }
+
+ /*
+ * Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
+ * vhost here instead of waiting for .set_status().
+ */
+ ret = vhost_user_scsi_start(s, &local_err);
+ if (ret < 0) {
+ error_reportf_err(local_err, "vhost-user-scsi: vhost start failed: ");
+ qemu_chr_fe_disconnect(&vs->conf.chardev);
+ return;
+ }
+
+ /* Kick right away to begin processing requests already in vring */
+ for (i = 0; i < vsc->dev.nvqs; i++) {
+ VirtQueue *kick_vq = virtio_get_queue(vdev, i);
+
+ if (!virtio_queue_get_desc_addr(vdev, i)) {
+ continue;
+ }
+ event_notifier_set(virtio_queue_get_host_notifier(kick_vq));
+ }
}
static int vhost_user_scsi_connect(DeviceState *dev, Error **errp)
@@ -242,9 +282,9 @@ static void vhost_user_scsi_realize(DeviceState *dev, Error **errp)
return;
}
- virtio_scsi_common_realize(dev, vhost_dummy_handle_output,
- vhost_dummy_handle_output,
- vhost_dummy_handle_output, &err);
+ virtio_scsi_common_realize(dev, vhost_user_scsi_handle_output,
+ vhost_user_scsi_handle_output,
+ vhost_user_scsi_handle_output, &err);
if (err != NULL) {
error_propagate(errp, err);
return;
--
2.41.0
^ permalink raw reply related [flat|nested] 8+ messages in thread
* [PATCH v7 5/5] vhost-user: fix lost reconnect
2023-10-08 9:12 [PATCH v7 0/5] Implement reconnect for vhost-user-scsi Li Feng
` (3 preceding siblings ...)
2023-10-08 9:12 ` [PATCH v7 4/5] vhost-user-scsi: start vhost when guest kicks Li Feng
@ 2023-10-08 9:12 ` Li Feng
4 siblings, 0 replies; 8+ messages in thread
From: Li Feng @ 2023-10-08 9:12 UTC (permalink / raw)
To: Markus Armbruster, Raphael Norwitz, Michael S. Tsirkin,
Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
Alex Bennée, Viresh Kumar, open list:Block layer core,
open list:All patches CC here
Cc: Li Feng
When vhost-user is reconnecting to the backend and fails at get_features
in vhost_dev_init(), the reconnect fails and is never retriggered.
The reason is:
When vhost-user fails at get_features, vhost_dev_cleanup() is called
immediately.
vhost_dev_cleanup() does 'memset(hdev, 0, sizeof(struct vhost_dev))'.
The reconnect path is:
vhost_user_blk_event
vhost_user_async_close(.. vhost_user_blk_disconnect ..)
qemu_chr_fe_set_handlers <----- clear the notifier callback
schedule vhost_user_async_close_bh
Because vhost->vdev is NULL, vhost_user_blk_disconnect() is not called,
and the event fd callback is never reinstalled.
All vhost-user devices have this issue, including vhost-user-blk/scsi.
With this patch, if vhost->vdev is NULL, the fd callback is still
reinstalled.
Fixes: 71e076a07d ("hw/virtio: generalise CHR_EVENT_CLOSED handling")
Signed-off-by: Li Feng <fengli@smartx.com>
Reviewed-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
hw/block/vhost-user-blk.c | 2 +-
hw/scsi/vhost-user-scsi.c | 3 ++-
hw/virtio/vhost-user-gpio.c | 2 +-
hw/virtio/vhost-user.c | 10 ++++++++--
include/hw/virtio/vhost-user.h | 3 ++-
5 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c
index 3c69fa47d5..95c758200d 100644
--- a/hw/block/vhost-user-blk.c
+++ b/hw/block/vhost-user-blk.c
@@ -391,7 +391,7 @@ static void vhost_user_blk_event(void *opaque, QEMUChrEvent event)
case CHR_EVENT_CLOSED:
/* defer close until later to avoid circular close */
vhost_user_async_close(dev, &s->chardev, &s->dev,
- vhost_user_blk_disconnect);
+ vhost_user_blk_disconnect, vhost_user_blk_event);
break;
case CHR_EVENT_BREAK:
case CHR_EVENT_MUX_IN:
diff --git a/hw/scsi/vhost-user-scsi.c b/hw/scsi/vhost-user-scsi.c
index 5afb514398..dbe864c0e5 100644
--- a/hw/scsi/vhost-user-scsi.c
+++ b/hw/scsi/vhost-user-scsi.c
@@ -234,7 +234,8 @@ static void vhost_user_scsi_event(void *opaque, QEMUChrEvent event)
case CHR_EVENT_CLOSED:
/* defer close until later to avoid circular close */
vhost_user_async_close(dev, &vs->conf.chardev, &vsc->dev,
- vhost_user_scsi_disconnect);
+ vhost_user_scsi_disconnect,
+ vhost_user_scsi_event);
break;
case CHR_EVENT_BREAK:
case CHR_EVENT_MUX_IN:
diff --git a/hw/virtio/vhost-user-gpio.c b/hw/virtio/vhost-user-gpio.c
index fc784e4213..aff2d7eff6 100644
--- a/hw/virtio/vhost-user-gpio.c
+++ b/hw/virtio/vhost-user-gpio.c
@@ -289,7 +289,7 @@ static void vu_gpio_event(void *opaque, QEMUChrEvent event)
case CHR_EVENT_CLOSED:
/* defer close until later to avoid circular close */
vhost_user_async_close(dev, &gpio->chardev, &gpio->vhost_dev,
- vu_gpio_disconnect);
+ vu_gpio_disconnect, vu_gpio_event);
break;
case CHR_EVENT_BREAK:
case CHR_EVENT_MUX_IN:
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 3766b415f8..7395bfc531 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -2765,6 +2765,7 @@ typedef struct {
DeviceState *dev;
CharBackend *cd;
struct vhost_dev *vhost;
+ IOEventHandler *event_cb;
} VhostAsyncCallback;
static void vhost_user_async_close_bh(void *opaque)
@@ -2779,7 +2780,10 @@ static void vhost_user_async_close_bh(void *opaque)
*/
if (vhost->vdev) {
data->cb(data->dev);
- }
+ } else if (data->event_cb) {
+ qemu_chr_fe_set_handlers(data->cd, NULL, NULL, data->event_cb,
+ NULL, data->dev, NULL, true);
+ }
g_free(data);
}
@@ -2791,7 +2795,8 @@ static void vhost_user_async_close_bh(void *opaque)
*/
void vhost_user_async_close(DeviceState *d,
CharBackend *chardev, struct vhost_dev *vhost,
- vu_async_close_fn cb)
+ vu_async_close_fn cb,
+ IOEventHandler *event_cb)
{
if (!runstate_check(RUN_STATE_SHUTDOWN)) {
/*
@@ -2807,6 +2812,7 @@ void vhost_user_async_close(DeviceState *d,
data->dev = d;
data->cd = chardev;
data->vhost = vhost;
+ data->event_cb = event_cb;
/* Disable any further notifications on the chardev */
qemu_chr_fe_set_handlers(chardev,
diff --git a/include/hw/virtio/vhost-user.h b/include/hw/virtio/vhost-user.h
index 9f9ddf878d..6b06ecb1bd 100644
--- a/include/hw/virtio/vhost-user.h
+++ b/include/hw/virtio/vhost-user.h
@@ -106,6 +106,7 @@ typedef void (*vu_async_close_fn)(DeviceState *cb);
void vhost_user_async_close(DeviceState *d,
CharBackend *chardev, struct vhost_dev *vhost,
- vu_async_close_fn cb);
+ vu_async_close_fn cb,
+ IOEventHandler *event_cb);
#endif
--
2.41.0
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH v7 3/5] vhost-user-scsi: support reconnect to backend
2023-10-08 9:12 ` [PATCH v7 3/5] vhost-user-scsi: support reconnect to backend Li Feng
@ 2023-10-08 10:46 ` Manos Pitsidianakis
2023-10-09 4:34 ` Li Feng
0 siblings, 1 reply; 8+ messages in thread
From: Manos Pitsidianakis @ 2023-10-08 10:46 UTC (permalink / raw)
To: qemu-block
Cc: Li Feng, Markus Armbruster, Michael S. Tsirkin, Raphael Norwitz,
Kevin Wolf, Hanna Reitz, Paolo Bonzini, Fam Zheng,
Alex Bennée, Viresh Kumar, qemu-devel
Hello Li, I have some trivial style comments you could possibly address
in a next version:
On Sun, 08 Oct 2023 12:12, Li Feng <fengli@smartx.com> wrote:
>diff --git a/hw/scsi/vhost-user-scsi.c b/hw/scsi/vhost-user-scsi.c
>index df6b66cc1a..5df24faff4 100644
>--- a/hw/scsi/vhost-user-scsi.c
>+++ b/hw/scsi/vhost-user-scsi.c
>@@ -39,26 +39,56 @@ static const int user_feature_bits[] = {
> VHOST_INVALID_FEATURE_BIT
> };
>
>+static int vhost_user_scsi_start(VHostUserSCSI *s, Error **errp)
>+{
>+ VHostSCSICommon *vsc = VHOST_SCSI_COMMON(s);
>+ int ret;
>+
>+ ret = vhost_scsi_common_start(vsc, errp);
>+ s->started_vu = (ret < 0 ? false : true);
-+ s->started_vu = (ret < 0 ? false : true);
++ s->started_vu = !(ret < 0);
> static void vhost_user_scsi_set_status(VirtIODevice *vdev, uint8_t
> status)
> {
> VHostUserSCSI *s = (VHostUserSCSI *)vdev;
>+ DeviceState *dev = &s->parent_obj.parent_obj.parent_obj.parent_obj;
-+ DeviceState *dev = &s->parent_obj.parent_obj.parent_obj.parent_obj;
++ DeviceState *dev = DEVICE(vdev);
>+static int vhost_user_scsi_connect(DeviceState *dev, Error **errp)
>+{
>+ VirtIODevice *vdev = VIRTIO_DEVICE(dev);
>+ VHostUserSCSI *s = VHOST_USER_SCSI(vdev);
>+ VHostSCSICommon *vsc = VHOST_SCSI_COMMON(s);
>+ VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(dev);
>+ int ret = 0;
>+
>+ if (s->connected) {
>+ return 0;
>+ }
>+ s->connected = true;
>+
>+ vsc->dev.num_queues = vs->conf.num_queues;
>+ vsc->dev.nvqs = VIRTIO_SCSI_VQ_NUM_FIXED + vs->conf.num_queues;
>+ vsc->dev.vqs = s->vhost_vqs;
>+ vsc->dev.vq_index = 0;
>+ vsc->dev.backend_features = 0;
>+
>+ ret = vhost_dev_init(&vsc->dev, &s->vhost_user, VHOST_BACKEND_TYPE_USER, 0,
>+ errp);
>+ if (ret < 0) {
>+ return ret;
>+ }
>+
>+ /* restore vhost state */
>+ if (virtio_device_started(vdev, vdev->status)) {
>+ ret = vhost_user_scsi_start(s, errp);
>+ if (ret < 0) {
>+ return ret;
>+ }
>+ }
>+
>+ return 0;
>+}
-+ if (virtio_device_started(vdev, vdev->status)) {
-+ ret = vhost_user_scsi_start(s, errp);
-+ if (ret < 0) {
-+ return ret;
-+ }
-+ }
-+
-+ return 0;
-+}
++ if (virtio_device_started(vdev, vdev->status)) {
++ ret = vhost_user_scsi_start(s, errp);
++ }
++
++ return ret;
++}
[skipping..]
>+static int vhost_user_scsi_realize_connect(VHostUserSCSI *s, Error **errp)
>+{
>+ DeviceState *dev = &s->parent_obj.parent_obj.parent_obj.parent_obj;
-+ DeviceState *dev = &s->parent_obj.parent_obj.parent_obj.parent_obj;
++ DeviceState *dev = DEVICE(s);
>diff --git a/include/hw/virtio/vhost-user-scsi.h b/include/hw/virtio/vhost-user-scsi.h
>index 521b08e559..b405ec952a 100644
>--- a/include/hw/virtio/vhost-user-scsi.h
>+++ b/include/hw/virtio/vhost-user-scsi.h
>@@ -29,6 +29,10 @@ OBJECT_DECLARE_SIMPLE_TYPE(VHostUserSCSI, VHOST_USER_SCSI)
> struct VHostUserSCSI {
> VHostSCSICommon parent_obj;
> VhostUserState vhost_user;
>+ bool connected;
>+ bool started_vu;
>+
>+ struct vhost_virtqueue *vhost_vqs;
+ bool connected;
+ bool started_vu;
-+
+ struct vhost_virtqueue *vhost_vqs;
See https://www.qemu.org/docs/master/devel/style.html#qemu-object-model-declarations
The definition should look like:
struct VHostUserSCSI {
VHostSCSICommon parent_obj;
/* Properties */
bool connected;
bool started_vu;
VhostUserState vhost_user;
struct vhost_virtqueue *vhost_vqs;
};
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v7 3/5] vhost-user-scsi: support reconnect to backend
2023-10-08 10:46 ` Manos Pitsidianakis
@ 2023-10-09 4:34 ` Li Feng
0 siblings, 0 replies; 8+ messages in thread
From: Li Feng @ 2023-10-09 4:34 UTC (permalink / raw)
To: Manos Pitsidianakis
Cc: open list:Block layer core, Markus Armbruster, Michael S. Tsirkin,
Raphael Norwitz, Kevin Wolf, Hanna Reitz, Paolo Bonzini,
Fam Zheng, Alex Bennée, Viresh Kumar,
open list:All patches CC here
Thanks for your comments. I will submit v8.
> On 8 Oct 2023, at 6:46 PM, Manos Pitsidianakis <manos.pitsidianakis@linaro.org> wrote:
>
> Hello Li, I have some trivial style comments you could possibly address in a next version:
>
> On Sun, 08 Oct 2023 12:12, Li Feng <fengli@smartx.com> wrote:
>> diff --git a/hw/scsi/vhost-user-scsi.c b/hw/scsi/vhost-user-scsi.c
>> index df6b66cc1a..5df24faff4 100644
>> --- a/hw/scsi/vhost-user-scsi.c
>> +++ b/hw/scsi/vhost-user-scsi.c
>> @@ -39,26 +39,56 @@ static const int user_feature_bits[] = {
>> VHOST_INVALID_FEATURE_BIT
>> };
>> +static int vhost_user_scsi_start(VHostUserSCSI *s, Error **errp)
>> +{
>> + VHostSCSICommon *vsc = VHOST_SCSI_COMMON(s);
>> + int ret;
>> +
>> + ret = vhost_scsi_common_start(vsc, errp);
>> + s->started_vu = (ret < 0 ? false : true);
>
> -+ s->started_vu = (ret < 0 ? false : true);
> ++ s->started_vu = !(ret < 0);
>
>> static void vhost_user_scsi_set_status(VirtIODevice *vdev, uint8_t status)
>> {
>> VHostUserSCSI *s = (VHostUserSCSI *)vdev;
>> + DeviceState *dev = &s->parent_obj.parent_obj.parent_obj.parent_obj;
>
> -+ DeviceState *dev = &s->parent_obj.parent_obj.parent_obj.parent_obj;
> ++ DeviceState *dev = DEVICE(vdev);
>
>> +static int vhost_user_scsi_connect(DeviceState *dev, Error **errp)
>> +{
>> + VirtIODevice *vdev = VIRTIO_DEVICE(dev);
>> + VHostUserSCSI *s = VHOST_USER_SCSI(vdev);
>> + VHostSCSICommon *vsc = VHOST_SCSI_COMMON(s);
>> + VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(dev);
>> + int ret = 0;
>> +
>> + if (s->connected) {
>> + return 0;
>> + }
>> + s->connected = true;
>> +
>> + vsc->dev.num_queues = vs->conf.num_queues;
>> + vsc->dev.nvqs = VIRTIO_SCSI_VQ_NUM_FIXED + vs->conf.num_queues;
>> + vsc->dev.vqs = s->vhost_vqs;
>> + vsc->dev.vq_index = 0;
>> + vsc->dev.backend_features = 0;
>> +
>> + ret = vhost_dev_init(&vsc->dev, &s->vhost_user, VHOST_BACKEND_TYPE_USER, 0,
>> + errp);
>> + if (ret < 0) {
>> + return ret;
>> + }
>> +
>> + /* restore vhost state */
>> + if (virtio_device_started(vdev, vdev->status)) {
>> + ret = vhost_user_scsi_start(s, errp);
>> + if (ret < 0) {
>> + return ret;
>> + }
>> + }
>> +
>> + return 0;
>> +}
>
>
> -+ if (virtio_device_started(vdev, vdev->status)) {
> -+ ret = vhost_user_scsi_start(s, errp);
> -+ if (ret < 0) {
> -+ return ret;
> -+ }
> -+ }
> -+
> -+ return 0;
> -+}
> ++ if (virtio_device_started(vdev, vdev->status)) {
> ++ ret = vhost_user_scsi_start(s, errp);
> ++ }
> ++
> ++ return ret;
> ++}
>
> [skipping..]
>
>> +static int vhost_user_scsi_realize_connect(VHostUserSCSI *s, Error **errp)
>> +{
>> + DeviceState *dev = &s->parent_obj.parent_obj.parent_obj.parent_obj;
>
>
> -+ DeviceState *dev = &s->parent_obj.parent_obj.parent_obj.parent_obj;
> ++ DeviceState *dev = DEVICE(s);
>
>> diff --git a/include/hw/virtio/vhost-user-scsi.h b/include/hw/virtio/vhost-user-scsi.h
>> index 521b08e559..b405ec952a 100644
>> --- a/include/hw/virtio/vhost-user-scsi.h
>> +++ b/include/hw/virtio/vhost-user-scsi.h
>> @@ -29,6 +29,10 @@ OBJECT_DECLARE_SIMPLE_TYPE(VHostUserSCSI, VHOST_USER_SCSI)
>> struct VHostUserSCSI {
>> VHostSCSICommon parent_obj;
>> VhostUserState vhost_user;
>> + bool connected;
>> + bool started_vu;
>> +
>> + struct vhost_virtqueue *vhost_vqs;
>
> + bool connected;
> + bool started_vu;
> -+
> + struct vhost_virtqueue *vhost_vqs;
>
> See https://www.qemu.org/docs/master/devel/style.html#qemu-object-model-declarations
>
> The definition should look like:
>
> struct VHostUserSCSI {
> VHostSCSICommon parent_obj;
>
> /* Properties */
> bool connected;
> bool started_vu;
>
> VhostUserState vhost_user;
> struct vhost_virtqueue *vhost_vqs;
> };
Thread overview: 8+ messages
2023-10-08 9:12 [PATCH v7 0/5] Implement reconnect for vhost-user-scsi Li Feng
2023-10-08 9:12 ` [PATCH v7 1/5] vhost-user-common: send get_inflight_fd once Li Feng
2023-10-08 9:12 ` [PATCH v7 2/5] vhost: move and rename the conn retry times Li Feng
2023-10-08 9:12 ` [PATCH v7 3/5] vhost-user-scsi: support reconnect to backend Li Feng
2023-10-08 10:46 ` Manos Pitsidianakis
2023-10-09 4:34 ` Li Feng
2023-10-08 9:12 ` [PATCH v7 4/5] vhost-user-scsi: start vhost when guest kicks Li Feng
2023-10-08 9:12 ` [PATCH v7 5/5] vhost-user: fix lost reconnect Li Feng