* [PATCH v3 0/3] vhost-user-blk: support inflight migration
@ 2025-11-10 10:39 Alexandr Moshkov
2025-11-10 10:39 ` [PATCH v3 1/3] vmstate: introduce VMSTATE_VBUFFER_UINT64 Alexandr Moshkov
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: Alexandr Moshkov @ 2025-11-10 10:39 UTC (permalink / raw)
To: qemu-devel
Cc: Raphael Norwitz, Michael S. Tsirkin, Stefano Garzarella,
Kevin Wolf, Hanna Reitz, Peter Xu, Fabiano Rosas, Eric Blake,
Markus Armbruster, Alexandr Moshkov
v3:
- use pre_load_errp instead of pre_load in vhost.c
- change vhost-user-blk property to
"skip-get-vring-base-inflight-migration"
- refactor vhost-user-blk.c, by moving vhost_user_blk_inflight_needed() higher
v2:
- rewrite migration using VMSD instead of qemufile API
- add vhost-user-blk parameter instead of migration capability
I'm not sure whether VMSD is used cleanly in the migration implementation,
so comments are welcome.
Based on Vladimir's work:
[PATCH v2 00/25] vhost-user-blk: live-backend local migration
which was based on:
- [PATCH v4 0/7] chardev: postpone connect
(which in turn is based on [PATCH 0/2] remove deprecated 'reconnect' options)
- [PATCH v3 00/23] vhost refactoring and fixes
- [PATCH v8 14/19] migration: introduce .pre_incoming() vmsd handler
Based-on: <20250924133309.334631-1-vsementsov@yandex-team.ru>
Based-on: <20251015212051.1156334-1-vsementsov@yandex-team.ru>
Based-on: <20251015145808.1112843-1-vsementsov@yandex-team.ru>
Based-on: <20251015132136.1083972-15-vsementsov@yandex-team.ru>
Based-on: <20251016114104.1384675-1-vsementsov@yandex-team.ru>
---
Hi!
During inter-host migration, waiting for disk requests to be drained
in the vhost-user backend can incur significant downtime.
This can be avoided if QEMU migrates the inflight region in vhost-user-blk.
Thus, during QEMU migration, the vhost-user backend can cancel all inflight
requests, and after migration they will be re-executed on the other host.
At first, I tried to implement migration for all vhost-user devices that support inflight at once,
but that would require many changes both in vhost-user-blk (to move it onto the base class) and
in the vhost-user-base class (implementing and reworking inflight, plus a large refactor).
Therefore, I decided to leave that idea for later and
implement inflight region migration for vhost-user-blk first.
Alexandr Moshkov (3):
vmstate: introduce VMSTATE_VBUFFER_UINT64
vhost: add vmstate for inflight region with inner buffer
vhost-user-blk: support inter-host inflight migration
hw/block/vhost-user-blk.c | 29 +++++++++++++++++++++
hw/virtio/vhost.c | 42 ++++++++++++++++++++++++++++++
include/hw/virtio/vhost-user-blk.h | 1 +
include/hw/virtio/vhost.h | 6 +++++
include/migration/vmstate.h | 10 +++++++
5 files changed, 88 insertions(+)
--
2.34.1
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH v3 1/3] vmstate: introduce VMSTATE_VBUFFER_UINT64
2025-11-10 10:39 [PATCH v3 0/3] vhost-user-blk: support inflight migration Alexandr Moshkov
@ 2025-11-10 10:39 ` Alexandr Moshkov
2025-11-10 10:39 ` [PATCH v3 2/3] vhost: add vmstate for inflight region with inner buffer Alexandr Moshkov
` (2 subsequent siblings)
3 siblings, 0 replies; 11+ messages in thread
From: Alexandr Moshkov @ 2025-11-10 10:39 UTC (permalink / raw)
To: qemu-devel
Cc: Raphael Norwitz, Michael S. Tsirkin, Stefano Garzarella,
Kevin Wolf, Hanna Reitz, Peter Xu, Fabiano Rosas, Eric Blake,
Markus Armbruster, Alexandr Moshkov
This is an analog of the VMSTATE_VBUFFER_UINT32 macro, but for the uint64 type.
Signed-off-by: Alexandr Moshkov <dtalexundeer@yandex-team.ru>
Acked-by: Peter Xu <peterx@redhat.com>
---
include/migration/vmstate.h | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index 7f1f1c166a..4c9e212d58 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -707,6 +707,16 @@ extern const VMStateInfo vmstate_info_qlist;
.offset = offsetof(_state, _field), \
}
+#define VMSTATE_VBUFFER_UINT64(_field, _state, _version, _test, _field_size) { \
+ .name = (stringify(_field)), \
+ .version_id = (_version), \
+ .field_exists = (_test), \
+ .size_offset = vmstate_offset_value(_state, _field_size, uint64_t),\
+ .info = &vmstate_info_buffer, \
+ .flags = VMS_VBUFFER | VMS_POINTER, \
+ .offset = offsetof(_state, _field), \
+}
+
#define VMSTATE_VBUFFER_ALLOC_UINT32(_field, _state, _version, \
_test, _field_size) { \
.name = (stringify(_field)), \
--
2.34.1
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v3 2/3] vhost: add vmstate for inflight region with inner buffer
2025-11-10 10:39 [PATCH v3 0/3] vhost-user-blk: support inflight migration Alexandr Moshkov
2025-11-10 10:39 ` [PATCH v3 1/3] vmstate: introduce VMSTATE_VBUFFER_UINT64 Alexandr Moshkov
@ 2025-11-10 10:39 ` Alexandr Moshkov
2025-11-10 10:39 ` [PATCH v3 3/3] vhost-user-blk: support inter-host inflight migration Alexandr Moshkov
2025-11-18 20:24 ` [PATCH v3 0/3] vhost-user-blk: support " Vladimir Sementsov-Ogievskiy
3 siblings, 0 replies; 11+ messages in thread
From: Alexandr Moshkov @ 2025-11-10 10:39 UTC (permalink / raw)
To: qemu-devel
Cc: Raphael Norwitz, Michael S. Tsirkin, Stefano Garzarella,
Kevin Wolf, Hanna Reitz, Peter Xu, Fabiano Rosas, Eric Blake,
Markus Armbruster, Alexandr Moshkov
Prepare for future inflight region migration in vhost-user-blk.
We need to migrate the size, the queue_size, and the inner buffer.
So we first migrate the size and queue_size fields, then allocate a buffer
of the migrated size, and finally migrate the inner buffer itself.
Signed-off-by: Alexandr Moshkov <dtalexundeer@yandex-team.ru>
---
hw/virtio/vhost.c | 42 +++++++++++++++++++++++++++++++++++++++
include/hw/virtio/vhost.h | 6 ++++++
2 files changed, 48 insertions(+)
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index c46203eb9c..9a746c9861 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -2028,6 +2028,48 @@ const VMStateDescription vmstate_backend_transfer_vhost_inflight = {
}
};
+static int vhost_inflight_buffer_pre_load(void *opaque, Error **errp)
+{
+    struct vhost_inflight *inflight = opaque;
+    info_report("vhost_inflight_buffer_pre_load");
+
+ int fd = -1;
+ void *addr = qemu_memfd_alloc("vhost-inflight", inflight->size,
+ F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL,
+ &fd, errp);
+    if (!addr) {
+ return -ENOMEM;
+ }
+
+ inflight->offset = 0;
+ inflight->addr = addr;
+ inflight->fd = fd;
+
+ return 0;
+}
+
+const VMStateDescription vmstate_vhost_inflight_region_buffer = {
+ .name = "vhost-inflight-region/buffer",
+ .pre_load_errp = vhost_inflight_buffer_pre_load,
+ .fields = (const VMStateField[]) {
+ VMSTATE_VBUFFER_UINT64(addr, struct vhost_inflight, 0, NULL, size),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
+const VMStateDescription vmstate_vhost_inflight_region = {
+ .name = "vhost-inflight-region",
+ .fields = (const VMStateField[]) {
+ VMSTATE_UINT64(size, struct vhost_inflight),
+ VMSTATE_UINT16(queue_size, struct vhost_inflight),
+ VMSTATE_END_OF_LIST()
+ },
+ .subsections = (const VMStateDescription * const []) {
+ &vmstate_vhost_inflight_region_buffer,
+ NULL
+ }
+};
+
const VMStateDescription vmstate_vhost_virtqueue = {
.name = "vhost-virtqueue",
.fields = (const VMStateField[]) {
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index 13ca2c319f..dd552de91f 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -596,6 +596,12 @@ extern const VMStateDescription vmstate_backend_transfer_vhost_inflight;
vmstate_backend_transfer_vhost_inflight, \
struct vhost_inflight)
+extern const VMStateDescription vmstate_vhost_inflight_region;
+#define VMSTATE_VHOST_INFLIGHT_REGION(_field, _state) \
+ VMSTATE_STRUCT_POINTER(_field, _state, \
+ vmstate_vhost_inflight_region, \
+ struct vhost_inflight)
+
extern const VMStateDescription vmstate_vhost_dev;
#define VMSTATE_BACKEND_TRANSFER_VHOST(_field, _state) \
VMSTATE_STRUCT(_field, _state, 0, vmstate_vhost_dev, struct vhost_dev)
--
2.34.1
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v3 3/3] vhost-user-blk: support inter-host inflight migration
2025-11-10 10:39 [PATCH v3 0/3] vhost-user-blk: support inflight migration Alexandr Moshkov
2025-11-10 10:39 ` [PATCH v3 1/3] vmstate: introduce VMSTATE_VBUFFER_UINT64 Alexandr Moshkov
2025-11-10 10:39 ` [PATCH v3 2/3] vhost: add vmstate for inflight region with inner buffer Alexandr Moshkov
@ 2025-11-10 10:39 ` Alexandr Moshkov
2025-11-18 20:24 ` [PATCH v3 0/3] vhost-user-blk: support " Vladimir Sementsov-Ogievskiy
3 siblings, 0 replies; 11+ messages in thread
From: Alexandr Moshkov @ 2025-11-10 10:39 UTC (permalink / raw)
To: qemu-devel
Cc: Raphael Norwitz, Michael S. Tsirkin, Stefano Garzarella,
Kevin Wolf, Hanna Reitz, Peter Xu, Fabiano Rosas, Eric Blake,
Markus Armbruster, Alexandr Moshkov
During inter-host migration, waiting for disk requests to be drained
in the vhost-user backend can incur significant downtime.
This can be avoided if QEMU migrates the inflight region in
vhost-user-blk.
Thus, during QEMU migration, the vhost-user backend can cancel all
inflight requests, and after migration they will be re-executed on the
other host.
In vhost_user_blk_stop(), during outgoing inter-host migration (at
RUN_STATE_FINISH_MIGRATE), set force_stop = true so that GET_VRING_BASE
is not executed.
Signed-off-by: Alexandr Moshkov <dtalexundeer@yandex-team.ru>
---
hw/block/vhost-user-blk.c | 29 +++++++++++++++++++++++++++++
include/hw/virtio/vhost-user-blk.h | 1 +
2 files changed, 30 insertions(+)
diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c
index a8fd90480a..2d9b398de6 100644
--- a/hw/block/vhost-user-blk.c
+++ b/hw/block/vhost-user-blk.c
@@ -139,6 +139,14 @@ const VhostDevConfigOps blk_ops = {
.vhost_dev_config_notifier = vhost_user_blk_handle_config_change,
};
+static bool vhost_user_blk_inflight_needed(void *opaque)
+{
+ struct VHostUserBlk *s = opaque;
+
+ return s->skip_get_vring_base_inflight_migration &&
+ !migrate_local_vhost_user_blk();
+}
+
static int vhost_user_blk_start(VirtIODevice *vdev, Error **errp)
{
VHostUserBlk *s = VHOST_USER_BLK(vdev);
@@ -242,6 +250,11 @@ static int vhost_user_blk_stop(VirtIODevice *vdev)
force_stop = s->skip_get_vring_base_on_force_shutdown &&
qemu_force_shutdown_requested();
+ if (vhost_user_blk_inflight_needed(s) &&
+ runstate_check(RUN_STATE_FINISH_MIGRATE)) {
+ force_stop = true;
+ }
+
s->dev.backend_transfer = s->dev.backend_transfer ||
(runstate_check(RUN_STATE_FINISH_MIGRATE) &&
migrate_local_vhost_user_blk());
@@ -656,6 +669,16 @@ static struct vhost_dev *vhost_user_blk_get_vhost(VirtIODevice *vdev)
return &s->dev;
}
+static const VMStateDescription vmstate_vhost_user_blk_inflight = {
+ .name = "vhost-user-blk/inflight",
+ .version_id = 1,
+ .needed = vhost_user_blk_inflight_needed,
+ .fields = (const VMStateField[]) {
+ VMSTATE_VHOST_INFLIGHT_REGION(inflight, VHostUserBlk),
+ VMSTATE_END_OF_LIST()
+ },
+};
+
static bool vhost_user_blk_pre_incoming(void *opaque, Error **errp)
{
VHostUserBlk *s = VHOST_USER_BLK(opaque);
@@ -678,6 +701,10 @@ static const VMStateDescription vmstate_vhost_user_blk = {
VMSTATE_VIRTIO_DEVICE,
VMSTATE_END_OF_LIST()
},
+ .subsections = (const VMStateDescription * const []) {
+ &vmstate_vhost_user_blk_inflight,
+ NULL
+ }
};
static bool vhost_user_needed(void *opaque)
@@ -751,6 +778,8 @@ static const Property vhost_user_blk_properties[] = {
VIRTIO_BLK_F_WRITE_ZEROES, true),
DEFINE_PROP_BOOL("skip-get-vring-base-on-force-shutdown", VHostUserBlk,
skip_get_vring_base_on_force_shutdown, false),
+ DEFINE_PROP_BOOL("skip-get-vring-base-inflight-migration", VHostUserBlk,
+ skip_get_vring_base_inflight_migration, false),
};
static void vhost_user_blk_class_init(ObjectClass *klass, const void *data)
diff --git a/include/hw/virtio/vhost-user-blk.h b/include/hw/virtio/vhost-user-blk.h
index b06f55fd6f..859fb96956 100644
--- a/include/hw/virtio/vhost-user-blk.h
+++ b/include/hw/virtio/vhost-user-blk.h
@@ -52,6 +52,7 @@ struct VHostUserBlk {
bool started_vu;
bool skip_get_vring_base_on_force_shutdown;
+ bool skip_get_vring_base_inflight_migration;
bool incoming_backend;
};
--
2.34.1
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH v3 0/3] vhost-user-blk: support inflight migration
2025-11-10 10:39 [PATCH v3 0/3] vhost-user-blk: support inflight migration Alexandr Moshkov
` (2 preceding siblings ...)
2025-11-10 10:39 ` [PATCH v3 3/3] vhost-user-blk: support inter-host inflight migration Alexandr Moshkov
@ 2025-11-18 20:24 ` Vladimir Sementsov-Ogievskiy
2025-11-18 22:05 ` Peter Xu
3 siblings, 1 reply; 11+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2025-11-18 20:24 UTC (permalink / raw)
To: Alexandr Moshkov, qemu-devel
Cc: Raphael Norwitz, Michael S. Tsirkin, Stefano Garzarella,
Kevin Wolf, Hanna Reitz, Peter Xu, Fabiano Rosas, Eric Blake,
Markus Armbruster, Daniel P. Berrangé
Add Daniel
On 10.11.25 13:39, Alexandr Moshkov wrote:
> v3:
> - use pre_load_errp instead of pre_load in vhost.c
> - change vhost-user-blk property to
> "skip-get-vring-base-inflight-migration"
> - refactor vhost-user-blk.c, by moving vhost_user_blk_inflight_needed() higher
>
> v2:
> - rewrite migration using VMSD instead of qemufile API
> - add vhost-user-blk parameter instead of migration capability
>
> I don't know if VMSD was used cleanly in migration implementation, so
> feel free for comments.
>
> Based on Vladimir's work:
> [PATCH v2 00/25] vhost-user-blk: live-backend local migration
> which was based on:
> - [PATCH v4 0/7] chardev: postpone connect
> (which in turn is based on [PATCH 0/2] remove deprecated 'reconnect' options)
> - [PATCH v3 00/23] vhost refactoring and fixes
> - [PATCH v8 14/19] migration: introduce .pre_incoming() vmsd handler
>
Hi!
On my series about backend-transfer migration, the final consensus (or at least
I hope it's a consensus :) is that using device properties to control migration
channel content is wrong, and that we should use migration parameters instead.
(discussion here: https://lore.kernel.org/qemu-devel/29aa1d66-9fa7-4e44-b0e3-2ca26e77accf@yandex-team.ru/ )
So the API for backend-transfer features is a migration parameter
backend-transfer = [ list of QOM paths of devices, for which we want to enable backend-transfer ]
so the user doesn't have to change device properties at runtime to set up the upcoming migration.
So I assume, similar practice should be applied here: don't use device
properties to control migration.
So, should it be a parameter like
migrate-inflight-region = [ list of QOM paths of vhost-user devices ]
?
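As a rough illustration of the proposed interface (this parameter does not exist in QEMU; the name "migrate-inflight-region" and the device path are hypothetical, taken from the suggestion above), the setup step on both source and target might look like:

```json
{ "execute": "migrate-set-parameters",
  "arguments": {
      "migrate-inflight-region": [
          "/machine/peripheral/vub0"
      ] } }
```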
> Based-on: <20250924133309.334631-1-vsementsov@yandex-team.ru>
> Based-on: <20251015212051.1156334-1-vsementsov@yandex-team.ru>
> Based-on: <20251015145808.1112843-1-vsementsov@yandex-team.ru>
> Based-on: <20251015132136.1083972-15-vsementsov@yandex-team.ru>
> Based-on: <20251016114104.1384675-1-vsementsov@yandex-team.ru>
>
> ---
>
> Hi!
>
> During inter-host migration, waiting for disk requests to be drained
> in the vhost-user backend can incur significant downtime.
>
> This can be avoided if QEMU migrates the inflight region in vhost-user-blk.
> Thus, during the qemu migration, the vhost-user backend can cancel all inflight requests and
> then, after migration, they will be executed on another host.
>
> At first, I tried to implement migration for all vhost-user devices that support inflight at once,
> but this would require a lot of changes both in vhost-user-blk (to transfer it to the base class) and
> in the vhost-user-base base class (inflight implementation and remodeling + a large refactor).
>
> Therefore, for now I decided to leave this idea for later and
> implement the migration of the inflight region first for vhost-user-blk.
>
> Alexandr Moshkov (3):
> vmstate: introduce VMSTATE_VBUFFER_UINT64
> vhost: add vmstate for inflight region with inner buffer
> vhost-user-blk: support inter-host inflight migration
>
> hw/block/vhost-user-blk.c | 29 +++++++++++++++++++++
> hw/virtio/vhost.c | 42 ++++++++++++++++++++++++++++++
> include/hw/virtio/vhost-user-blk.h | 1 +
> include/hw/virtio/vhost.h | 6 +++++
> include/migration/vmstate.h | 10 +++++++
> 5 files changed, 88 insertions(+)
>
--
Best regards,
Vladimir
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v3 0/3] vhost-user-blk: support inflight migration
2025-11-18 20:24 ` [PATCH v3 0/3] vhost-user-blk: support " Vladimir Sementsov-Ogievskiy
@ 2025-11-18 22:05 ` Peter Xu
2025-12-04 19:55 ` Vladimir Sementsov-Ogievskiy
0 siblings, 1 reply; 11+ messages in thread
From: Peter Xu @ 2025-11-18 22:05 UTC (permalink / raw)
To: Vladimir Sementsov-Ogievskiy
Cc: Alexandr Moshkov, qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Fabiano Rosas,
Eric Blake, Markus Armbruster, Daniel P. Berrangé
On Tue, Nov 18, 2025 at 11:24:12PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Add Daniel
>
> On 10.11.25 13:39, Alexandr Moshkov wrote:
> > v3:
> > - use pre_load_errp instead of pre_load in vhost.c
> > - change vhost-user-blk property to
> > "skip-get-vring-base-inflight-migration"
> > - refactor vhost-user-blk.c, by moving vhost_user_blk_inflight_needed() higher
> >
> > v2:
> > - rewrite migration using VMSD instead of qemufile API
> > - add vhost-user-blk parameter instead of migration capability
> >
> > I don't know if VMSD was used cleanly in migration implementation, so
> > feel free for comments.
> >
> > Based on Vladimir's work:
> > [PATCH v2 00/25] vhost-user-blk: live-backend local migration
> > which was based on:
> > - [PATCH v4 0/7] chardev: postpone connect
> > (which in turn is based on [PATCH 0/2] remove deprecated 'reconnect' options)
> > - [PATCH v3 00/23] vhost refactoring and fixes
> > - [PATCH v8 14/19] migration: introduce .pre_incoming() vmsd handler
> >
>
> Hi!
>
> On my series about backend-transfer migration, the final consensus (or at least,
> I hope that it's a consensus:) is that using device properties to control migration
> channel content is wrong. And we should instead use migration parameters.
>
> (discussion here: https://lore.kernel.org/qemu-devel/29aa1d66-9fa7-4e44-b0e3-2ca26e77accf@yandex-team.ru/ )
>
> So the API for backend-transfer features is a migration parameter
>
> backend-transfer = [ list of QOM paths of devices, for which we want to enable backend-transfer ]
>
> and user don't have to change device properties in runtime to setup the following migration.
>
> So I assume, similar practice should be applied here: don't use device
> properties to control migration.
>
> So, should it be a parameter like
>
> migrate-inflight-region = [ list of QOM paths of vhost-user devices ]
>
> ?
I'm concerned that if we start doing this more, migration qapi/ will become
completely messed up.
Imagine a world where there'll be tons of lists like:
migrate-dev1-some-feature-1 = [list of devices (almost only dev1 typed)]
migrate-dev2-some-feature-2 = [list of devices (almost only dev2 typed)]
migrate-dev3-some-feature-3 = [list of devices (almost only dev3 typed)]
...
That doesn't look reasonable at all. If some feature is likely only
supported in one device, that should not appear in migration.json but only
in the specific device.
I'm not fully convinced that we can't enable some form of machine-type
properties (with QDEV or not) on backends; if we can, we should stick with
something like that. I can have a closer look this week, but even if not, I
still think migration shouldn't care about the specific behavior of a
specific device.
If we really want to have some way to probe device features, maybe we
should also think about a generic interface (rather than "one new list
every time"). We also have some recent discussions on a proper interface
to query TAP backend features like USO*. Maybe they share some of the
goals here.
Thanks,
--
Peter Xu
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v3 0/3] vhost-user-blk: support inflight migration
2025-11-18 22:05 ` Peter Xu
@ 2025-12-04 19:55 ` Vladimir Sementsov-Ogievskiy
2025-12-09 16:51 ` Peter Xu
0 siblings, 1 reply; 11+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2025-12-04 19:55 UTC (permalink / raw)
To: Peter Xu
Cc: Alexandr Moshkov, qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Fabiano Rosas,
Eric Blake, Markus Armbruster, Daniel P. Berrangé
On 19.11.25 01:05, Peter Xu wrote:
> On Tue, Nov 18, 2025 at 11:24:12PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Add Daniel
>>
>> On 10.11.25 13:39, Alexandr Moshkov wrote:
>>> v3:
>>> - use pre_load_errp instead of pre_load in vhost.c
>>> - change vhost-user-blk property to
>>> "skip-get-vring-base-inflight-migration"
>>> - refactor vhost-user-blk.c, by moving vhost_user_blk_inflight_needed() higher
>>>
>>> v2:
>>> - rewrite migration using VMSD instead of qemufile API
>>> - add vhost-user-blk parameter instead of migration capability
>>>
>>> I don't know if VMSD was used cleanly in migration implementation, so
>>> feel free for comments.
>>>
>>> Based on Vladimir's work:
>>> [PATCH v2 00/25] vhost-user-blk: live-backend local migration
>>> which was based on:
>>> - [PATCH v4 0/7] chardev: postpone connect
>>> (which in turn is based on [PATCH 0/2] remove deprecated 'reconnect' options)
>>> - [PATCH v3 00/23] vhost refactoring and fixes
>>> - [PATCH v8 14/19] migration: introduce .pre_incoming() vmsd handler
>>>
>>
>> Hi!
>>
>> On my series about backend-transfer migration, the final consensus (or at least,
>> I hope that it's a consensus:) is that using device properties to control migration
>> channel content is wrong. And we should instead use migration parameters.
>>
>> (discussion here: https://lore.kernel.org/qemu-devel/29aa1d66-9fa7-4e44-b0e3-2ca26e77accf@yandex-team.ru/ )
>>
>> So the API for backend-transfer features is a migration parameter
>>
>> backend-transfer = [ list of QOM paths of devices, for which we want to enable backend-transfer ]
>>
>> and user don't have to change device properties in runtime to setup the following migration.
>>
>> So I assume, similar practice should be applied here: don't use device
>> properties to control migration.
>>
>> So, should it be a parameter like
>>
>> migrate-inflight-region = [ list of QOM paths of vhost-user devices ]
>>
>> ?
>
> I have concern that if we start doing this more, migration qapi/ will be
> completely messed up.
>
> Imagine a world where there'll be tons of lists like:
>
> migrate-dev1-some-feature-1 = [list of devices (almost only dev1 typed)]
> migrate-dev2-some-feature-2 = [list of devices (almost only dev2 typed)]
> migrate-dev3-some-feature-3 = [list of devices (almost only dev3 typed)]
> ...
Yes, it's hard to argue against that.
I still hope Daniel will share his opinion..
From our side, we are OK with whatever interface gets accepted :)
Let me summarize in short the variants I see:
===
1. lists
Add migrations parameters for such features:
migrate-inflight-region = [ list of QOM paths of vhost-user devices ]
backend-transfer = [ list of QOM paths of devices, which backend should be migrated ]
This way, we just need to set the same sets on the source and target QEMU before migration,
and it has no relation to machine types.
PROS: Like any other migration capability, it configures what (and how) should migrate, with no
relation to device properties and MT.
CONS: Logically, this is the same as adding a device property, but instead we implement
lists of devices and create extra QOM-path links.
===
2. device parameters
Before migration we should loop through devices and call corresponding
qom-set commands, like
qom-set {path: QOM_PATH, property: "backend-transfer", "value": true}
qom-set {path: QOM_PATH, property: "migrate-inflight-region", "value": true}
And of course, we must take care to set the same values for corresponding devices on the source
and target.
In this case, we also _can_ rely on machine types for defaults.
Note that "migrate-inflight-region" may become the default in the 11.0 MT,
but backend-transfer can't be a default, as that would break remote migration.
PROS: No lists, native properties
CONS: These properties do not define any portion of device state, internal or
guest-visible. It's not a property of the device, but an option for the migration
of that device.
===
2.1 = [2] assisted by one boolean migration-parameter
Still, if we want to make backend-transfer "a kind of" default, we'll need one boolean
migration parameter, "it-is-local-migration", and modified logic:
really_do_backend_transfer = it-is-local-migration and device.backend-transfer
really_do_migrate_inflight_region = not it-is-local-migration and device.migrate-inflight-region
PROS: starting from some MT, we'll have good defaults, so the user doesn't have
to enable/disable the option per device for every migration.
CONS: a precedent of behavior driven by the combination of a device property and a
corresponding migration parameter (or do we already have something similar?)
===
4. mixed
Keep [2] for this series, and [1] for backend-transfer.
PROS: list for backend-transfer remains "the only exclusion" instead of "the practice",
so we will not have tons of such lists :)
CONS: inconsistent solutions for similar things
===
5. implement "per device" migration parameters
They could be set by an additional QMP command, qmp-migrate-set-device-parameters, which
would take an additional qom-path parameter.
Or, we may add one list of structures like
[{
qom_path: ...
parameters: ..
}, ...]
into common migration parameters.
PROS: keep new features as a property of migration, but avoid several lists of QOM paths
CONS: ?
Hmm, we could also select devices not only by qom_path but by type, for example, to enable
a feature for all virtio-net devices. And this type could also be used as a discriminator
for the parameters, which could be a QAPI union type..
===
Thoughts?
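As a rough sketch of variant 5's list-of-structures form (all names here are hypothetical; no such parameter exists in QEMU today), the QMP setup might look like:

```json
{ "execute": "migrate-set-parameters",
  "arguments": {
      "device-parameters": [
          { "qom-path": "/machine/peripheral/vub0",
            "parameters": { "migrate-inflight-region": true } },
          { "qom-path": "/machine/peripheral/vub1",
            "parameters": { "backend-transfer": true } }
      ] } }
```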
>
> That doesn't look reasonable at all. If some feature is likely only
> supported in one device, that should not appear in migration.json but only
> in the specific device.
>
> I don't think I'm fully convinced we can't enable some form of machine type
> properties (with QDEV or not) on backends we should stick with something
> like that. I can have some closer look this week, but.. even if not, I
> still think migration shouldn't care about some specific behavior of a
> specific device.
>
> If we really want to have some way to probe device features, maybe we
> should also think about a generic interface (rather than "one new list
> every time"). We also have some recent discussions on a proper interface
> to query TAP backend features like USO*. Maybe they share some of the
> goals here.
>
What do you mean by probing device features? Isn't that the qom-get command?
--
Best regards,
Vladimir
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v3 0/3] vhost-user-blk: support inflight migration
2025-12-04 19:55 ` Vladimir Sementsov-Ogievskiy
@ 2025-12-09 16:51 ` Peter Xu
2025-12-10 11:41 ` Vladimir Sementsov-Ogievskiy
0 siblings, 1 reply; 11+ messages in thread
From: Peter Xu @ 2025-12-09 16:51 UTC (permalink / raw)
To: Vladimir Sementsov-Ogievskiy
Cc: Alexandr Moshkov, qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Fabiano Rosas,
Eric Blake, Markus Armbruster, Daniel P. Berrangé
On Thu, Dec 04, 2025 at 10:55:33PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> On 19.11.25 01:05, Peter Xu wrote:
> > On Tue, Nov 18, 2025 at 11:24:12PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > > Add Daniel
> > >
> > > On 10.11.25 13:39, Alexandr Moshkov wrote:
> > > > v3:
> > > > - use pre_load_errp instead of pre_load in vhost.c
> > > > - change vhost-user-blk property to
> > > > "skip-get-vring-base-inflight-migration"
> > > > - refactor vhost-user-blk.c, by moving vhost_user_blk_inflight_needed() higher
> > > >
> > > > v2:
> > > > - rewrite migration using VMSD instead of qemufile API
> > > > - add vhost-user-blk parameter instead of migration capability
> > > >
> > > > I don't know if VMSD was used cleanly in migration implementation, so
> > > > feel free for comments.
> > > >
> > > > Based on Vladimir's work:
> > > > [PATCH v2 00/25] vhost-user-blk: live-backend local migration
> > > > which was based on:
> > > > - [PATCH v4 0/7] chardev: postpone connect
> > > > (which in turn is based on [PATCH 0/2] remove deprecated 'reconnect' options)
> > > > - [PATCH v3 00/23] vhost refactoring and fixes
> > > > - [PATCH v8 14/19] migration: introduce .pre_incoming() vmsd handler
> > > >
> > >
> > > Hi!
> > >
> > > On my series about backend-transfer migration, the final consensus (or at least,
> > > I hope that it's a consensus:) is that using device properties to control migration
> > > channel content is wrong. And we should instead use migration parameters.
> > >
> > > (discussion here: https://lore.kernel.org/qemu-devel/29aa1d66-9fa7-4e44-b0e3-2ca26e77accf@yandex-team.ru/ )
> > >
> > > So the API for backend-transfer features is a migration parameter
> > >
> > > backend-transfer = [ list of QOM paths of devices, for which we want to enable backend-transfer ]
> > >
> > > and user don't have to change device properties in runtime to setup the following migration.
> > >
> > > So I assume, similar practice should be applied here: don't use device
> > > properties to control migration.
> > >
> > > So, should it be a parameter like
> > >
> > > migrate-inflight-region = [ list of QOM paths of vhost-user devices ]
> > >
> > > ?
> >
> > I have concern that if we start doing this more, migration qapi/ will be
> > completely messed up.
> >
> > Imagine a world where there'll be tons of lists like:
> >
> > migrate-dev1-some-feature-1 = [list of devices (almost only dev1 typed)]
> > migrate-dev2-some-feature-2 = [list of devices (almost only dev2 typed)]
> > migrate-dev3-some-feature-3 = [list of devices (almost only dev3 typed)]
> > ...
>
>
> Yes, hard to argue against it.
>
> I still hope, Daniel will share his opinion..
>
> From our side, we are OK with any interface, which is accepted)
>
>
> Let me summarize in short the variants I see:
>
> ===
>
> 1. lists
>
> Add migrations parameters for such features:
>
> migrate-inflight-region = [ list of QOM paths of vhost-user devices ]
> backend-transfer = [ list of QOM paths of devices, which backend should be migrated ]
>
> This way, we just need to set the same sets for source and target QEMU before migration,
> and it have no relation to machine types.
>
> PROS: Like any other migration-capability, setup what (and how) should migrate, no
> relation to device properties and MT.
>
> CONS: Logically, that's the same as add a device property, but instead we implement
> lists of devices, and create extra QOM_PATH-links.
>
> ===
>
> 2. device parameters
>
> Before migration we should loop through devices and call corresponding
> qom-set commands, like
>
> qom-set {path: QOM_PATH, property: "backend-transfer", "value": true}
> qom-set {path: QOM_PATH, property: "migrate-inflight-region", "value": true}
>
> And of course, we should care to set same values for corresponding devices on source
> and target.
>
> In this case, we also _can_ rely on machine types for defaults.
>
> Note, that "migrate-inflight-region" may become the default in the 11.0 MT.
> But backend-transfer can't be a default, as this way we'll break remote migration.
>
> PROS: No lists, native properties
>
> CONS: These properties does not define any portion of device state, internal or
> visible to guest. It's not a property of device, but it's and option for migration
> of that device.
>
> ===
>
> 2.1 = [2] assisted by one boolean migration-parameter
>
> Still, if we want make backend-transfer "a kind of" default, we'll need one boolean
> migration parameter "it-is-local-migration", and modify logic to
>
> really_do_backend_transfer = it-is-local-migration and device.backend-transfer
> really_do_migrate_inflight_region = not it-is-local-migration and device.migrate-inflight-region
>
> PROS: starting from some MT, we'll have good defaults, so that users don't have
> to enable/disable the option per device for every migration.
>
> CONS: a precedent of behavior driven by a combination of a device property and a
> corresponding migration parameter (or do we already have something similar?)
>
> ===
>
> 4. mixed
>
> Keep [2] for this series, and [1] for backend-transfer.
>
> PROS: the list for backend-transfer remains "the only exception" instead of "the practice",
> so we will not have tons of such lists :)
>
> CONS: inconsistent solutions for similar things
>
> ===
>
> 5. implement "per device" migration parameters
>
> They may be set by an additional QMP command, qmp-migrate-set-device-parameters, which
> will take an additional qom-path parameter.
>
> Or, we may add one list of structures like
>
> [{
> qom_path: ...
> parameters: ..
> }, ...]
>
> into common migration parameters.
>
> PROS: keep new features as a property of migration, but avoid several lists of QOM paths
> CONS: ?
>
> Hmm, we may also select devices not only by qom_path but by type, for example, to enable
> a feature for all virtio-net devices. Hmm, and this type may also be used as a discriminator
> for parameters, which may be a QAPI union type..
>
> ===
>
>
> Thoughts?
Sorry to respond late, I kept getting other things interrupting me when I
wanted to look at this..
I just sent a series here, allowing TYPE_OBJECT of any kind to be able to
work with machine compat properties:
https://lore.kernel.org/r/20251209162857.857593-1-peterx@redhat.com
I still want to see if we can stick with compat properties in general
whenever it's about defining guest ABI.
What you proposed should work, but it involves a second way of probing
"what is the guest ABI": a new QMP query command, after which mgmt queries
both QEMUs and sets the common subset on each side. It would be
finer-grained, but as I discussed previously, I think it's re-inventing the
wheel, and it may bloat mgmt with caring about too many trivial
per-device details.
Please have a look to see the feasibility. As mentioned in the cover
letter, that will need further work, e.g. QOMifying TAP first, at least for
your series. But I don't yet see that as a blocker: once QOMified, it can
directly inherit OBJECT_COMPAT, and then TAP can add compat properties.
I wonder if vhost-user-blk can already use compat properties.
Thanks,
>
> >
> > That doesn't look reasonable at all. If some feature is likely only
> > supported in one device, that should not appear in migration.json but only
> > in the specific device.
> >
> > I'm not fully convinced that we can't enable some form of machine-type
> > properties (with QDEV or not) on backends; if we can, we should stick with
> > something like that. I can have a closer look this week, but even if not, I
> > still think migration shouldn't care about some specific behavior of a
> > specific device.
> >
> > If we really want to have some way to probe device features, maybe we
> > should also think about a generic interface (rather than "one new list
> > every time"). We also have some recent discussions on a proper interface
> > to query TAP backend features like USO*. Maybe they share some of the
> > goals here.
> >
> What do you mean by probing device features? Isn't that the qom-get command?
>
> --
> Best regards,
> Vladimir
>
--
Peter Xu
* Re: [PATCH v3 0/3] vhost-user-blk: support inflight migration
2025-12-09 16:51 ` Peter Xu
@ 2025-12-10 11:41 ` Vladimir Sementsov-Ogievskiy
2025-12-10 15:20 ` Peter Xu
0 siblings, 1 reply; 11+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2025-12-10 11:41 UTC (permalink / raw)
To: Peter Xu
Cc: Alexandr Moshkov, qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Fabiano Rosas,
Eric Blake, Markus Armbruster, Daniel P. Berrangé
On 09.12.25 19:51, Peter Xu wrote:
> On Thu, Dec 04, 2025 at 10:55:33PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> On 19.11.25 01:05, Peter Xu wrote:
>>> On Tue, Nov 18, 2025 at 11:24:12PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>>>> Add Daniel
>>>>
>>>> On 10.11.25 13:39, Alexandr Moshkov wrote:
>>>>> v3:
>>>>> - use pre_load_errp instead of pre_load in vhost.c
>>>>> - change vhost-user-blk property to
>>>>> "skip-get-vring-base-inflight-migration"
>>>>> - refactor vhost-user-blk.c, by moving vhost_user_blk_inflight_needed() higher
>>>>>
>>>>> v2:
>>>>> - rewrite migration using VMSD instead of qemufile API
>>>>> - add vhost-user-blk parameter instead of migration capability
>>>>>
>>>>> I don't know if VMSD was used cleanly in migration implementation, so
>>>>> feel free for comments.
>>>>>
>>>>> Based on Vladimir's work:
>>>>> [PATCH v2 00/25] vhost-user-blk: live-backend local migration
>>>>> which was based on:
>>>>> - [PATCH v4 0/7] chardev: postpone connect
>>>>> (which in turn is based on [PATCH 0/2] remove deprecated 'reconnect' options)
>>>>> - [PATCH v3 00/23] vhost refactoring and fixes
>>>>> - [PATCH v8 14/19] migration: introduce .pre_incoming() vmsd handler
>>>>>
>>>>
>>>> Hi!
>>>>
>>>> On my series about backend-transfer migration, the final consensus (or at least,
>>>> I hope that it's a consensus:) is that using device properties to control migration
>>>> channel content is wrong. And we should instead use migration parameters.
>>>>
>>>> (discussion here: https://lore.kernel.org/qemu-devel/29aa1d66-9fa7-4e44-b0e3-2ca26e77accf@yandex-team.ru/ )
>>>>
>>>> So the API for backend-transfer features is a migration parameter
>>>>
>>>> backend-transfer = [ list of QOM paths of devices, for which we want to enable backend-transfer ]
>>>>
>>>>> and the user doesn't have to change device properties at runtime to set up the following migration.
>>>>
>>>>> So I assume a similar practice should be applied here: don't use device
>>>> properties to control migration.
>>>>
>>>> So, should it be a parameter like
>>>>
>>>> migrate-inflight-region = [ list of QOM paths of vhost-user devices ]
>>>>
>>>> ?
>>>
>>> I have a concern that if we start doing this more, migration qapi/ will be
>>> completely messed up.
>>>
>>> Imagine a world where there'll be tons of lists like:
>>>
>>> migrate-dev1-some-feature-1 = [list of devices (almost only dev1 typed)]
>>> migrate-dev2-some-feature-2 = [list of devices (almost only dev2 typed)]
>>> migrate-dev3-some-feature-3 = [list of devices (almost only dev3 typed)]
>>> ...
>>
>>
>> Yes, hard to argue against it.
>>
>> I still hope, Daniel will share his opinion..
>>
>> From our side, we are OK with any interface that gets accepted :)
>>
>>
>> Let me briefly summarize the variants I see:
>>
>> ===
>>
>> 1. lists
>>
>> Add migrations parameters for such features:
>>
>> migrate-inflight-region = [ list of QOM paths of vhost-user devices ]
>> backend-transfer = [ list of QOM paths of devices, which backend should be migrated ]
>>
>> This way, we just need to set the same sets for source and target QEMU before migration,
>> and it has no relation to machine types.
>>
>> PROS: Like any other migration capability, we set up what (and how) should migrate, with no
>> relation to device properties and MT.
>>
>> CONS: Logically, that's the same as adding a device property, but instead we implement
>> lists of devices and create extra QOM_PATH links.
>>
>> ===
>>
>> 2. device parameters
>>
>> Before migration we should loop through devices and call corresponding
>> qom-set commands, like
>>
>> qom-set {path: QOM_PATH, property: "backend-transfer", "value": true}
>> qom-set {path: QOM_PATH, property: "migrate-inflight-region", "value": true}
>>
>> And of course, we should take care to set the same values for corresponding devices on the
>> source and target.
>>
>> In this case, we also _can_ rely on machine types for defaults.
>>
>> Note, that "migrate-inflight-region" may become the default in the 11.0 MT.
>> But backend-transfer can't be a default, as this way we'll break remote migration.
>>
>> PROS: No lists, native properties
>>
>> CONS: These properties do not define any portion of device state, internal or
>> visible to the guest. It's not a property of the device, but an option for migration
>> of that device.
>>
>> ===
>>
>> 2.1 = [2] assisted by one boolean migration-parameter
>>
>> Still, if we want to make backend-transfer "a kind of" default, we'll need one boolean
>> migration parameter "it-is-local-migration", and modify the logic to
>>
>> really_do_backend_transfer = it-is-local-migration and device.backend-transfer
>> really_do_migrate_inflight_region = not it-is-local-migration and device.migrate-inflight-region
>>
>> PROS: starting from some MT, we'll have good defaults, so that users don't have
>> to enable/disable the option per device for every migration.
>>
>> CONS: a precedent of behavior driven by a combination of a device property and a
>> corresponding migration parameter (or do we already have something similar?)
>>
>> ===
>>
>> 4. mixed
>>
>> Keep [2] for this series, and [1] for backend-transfer.
>>
>> PROS: the list for backend-transfer remains "the only exception" instead of "the practice",
>> so we will not have tons of such lists :)
>>
>> CONS: inconsistent solutions for similar things
>>
>> ===
>>
>> 5. implement "per device" migration parameters
>>
>> They may be set by an additional QMP command, qmp-migrate-set-device-parameters, which
>> will take an additional qom-path parameter.
>>
>> Or, we may add one list of structures like
>>
>> [{
>> qom_path: ...
>> parameters: ..
>> }, ...]
>>
>> into common migration parameters.
>>
>> PROS: keep new features as a property of migration, but avoid several lists of QOM paths
>> CONS: ?
>>
>> Hmm, we may also select devices not only by qom_path but by type, for example, to enable
>> a feature for all virtio-net devices. Hmm, and this type may also be used as a discriminator
>> for parameters, which may be a QAPI union type..
>>
>> ===
>>
>>
>> Thoughts?
>
> Sorry to respond late, I kept getting other things interrupting me when I
> wanted to look at this..
>
> I just sent a series here, allowing TYPE_OBJECT of any kind to be able to
> work with machine compat properties:
>
> https://lore.kernel.org/r/20251209162857.857593-1-peterx@redhat.com
>
> I still want to see if we can stick with compat properties in general
> whenever it's about defining guest ABI.
>
> What you proposed should work, but it involves a second way of probing
> "what is the guest ABI": a new QMP query command, after which mgmt queries
> both QEMUs and sets the common subset on each side. It would be
> finer-grained, but as I discussed previously, I think it's re-inventing the
> wheel, and it may bloat mgmt with caring about too many trivial
> per-device details.
>
> Please have a look to see the feasibility. As mentioned in the cover
> letter, that will need further work, e.g. QOMifying TAP first, at least for
> your series. But I don't yet see that as a blocker: once QOMified, it can
> directly inherit OBJECT_COMPAT, and then TAP can add compat properties.
>
> I wonder if vhost-user-blk can already use compat properties.
>
Yes, it can. And regardless of the way we choose, qdev properties or QAPI,
I don't think we need a property on the backend itself. We need a property
(or migration capability) on vhost-user-blk itself, saying that its
backend should be migrated.
It's a lot simpler to migrate the backend inside the frontend state. If we
migrate the backend separately, we can't control the order of the backend and
frontend states, and will have to implement some late point in the state-load
process, where both are already loaded and we can do our post-load logic.
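A toy model of that ordering problem (all names here are illustrative, not QEMU code): if the backend and frontend sections can arrive in either order, the combined post-load step has to be deferred until both are present.

```python
# Toy model of migrating backend and frontend as separate sections: since we
# cannot control arrival order, the combined post-load hook must wait until
# both pieces have been loaded.  All names are invented for illustration.

class InflightLoad:
    def __init__(self):
        self.backend = None
        self.frontend = None

    def load_section(self, name, state):
        """Record one loaded section; run post-load once both are present."""
        setattr(self, name, state)
        if self.backend is not None and self.frontend is not None:
            return self._post_load()
        return None

    def _post_load(self):
        # Only now can inflight requests be resubmitted against the backend.
        return ("resubmit", self.backend["inflight"], self.frontend["vring"])

m = InflightLoad()
first = m.load_section("frontend", {"vring": 2})   # arrives first, must wait
second = m.load_section("backend", {"inflight": 5})  # second arrival triggers post-load
```

Migrating the backend as part of the frontend's vmstate sidesteps this entirely, since a single section loads atomically.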
>
>>
>>>
>>> That doesn't look reasonable at all. If some feature is likely only
>>> supported in one device, that should not appear in migration.json but only
>>> in the specific device.
>>>
>>> I'm not fully convinced that we can't enable some form of machine-type
>>> properties (with QDEV or not) on backends; if we can, we should stick with
>>> something like that. I can have a closer look this week, but even if not, I
>>> still think migration shouldn't care about some specific behavior of a
>>> specific device.
>>>
>>> If we really want to have some way to probe device features, maybe we
>>> should also think about a generic interface (rather than "one new list
>>> every time"). We also have some recent discussions on a proper interface
>>> to query TAP backend features like USO*. Maybe they share some of the
>>> goals here.
>>>
>> What do you mean by probing device features? Isn't that the qom-get command?
>>
>> --
>> Best regards,
>> Vladimir
>>
>
--
Best regards,
Vladimir
* Re: [PATCH v3 0/3] vhost-user-blk: support inflight migration
2025-12-10 11:41 ` Vladimir Sementsov-Ogievskiy
@ 2025-12-10 15:20 ` Peter Xu
2025-12-10 15:33 ` Vladimir Sementsov-Ogievskiy
0 siblings, 1 reply; 11+ messages in thread
From: Peter Xu @ 2025-12-10 15:20 UTC (permalink / raw)
To: Vladimir Sementsov-Ogievskiy
Cc: Alexandr Moshkov, qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Fabiano Rosas,
Eric Blake, Markus Armbruster, Daniel P. Berrangé
On Wed, Dec 10, 2025 at 02:41:20PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Yes, it can. And regardless of the way we choose, qdev properties or QAPI,
> I don't think we need a property on the backend itself. We need a property
> (or migration capability) on vhost-user-blk itself, saying that its
> backend should be migrated.
The problem then is that we'd need to introduce the new property to every
frontend that supports the backend. If it's a backend property, it can be one
property on the backend that all the frontends consume.
>
> It's a lot simpler to migrate the backend inside the frontend state. If we
> migrate the backend separately, we can't control the order of the backend and
> frontend states, and will have to implement some late point in the state-load
> process, where both are already loaded and we can do our post-load logic.
Would MigrationPriority help when defining the VMSD?
Thanks,
--
Peter Xu
* Re: [PATCH v3 0/3] vhost-user-blk: support inflight migration
2025-12-10 15:20 ` Peter Xu
@ 2025-12-10 15:33 ` Vladimir Sementsov-Ogievskiy
0 siblings, 0 replies; 11+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2025-12-10 15:33 UTC (permalink / raw)
To: Peter Xu
Cc: Alexandr Moshkov, qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Fabiano Rosas,
Eric Blake, Markus Armbruster, Daniel P. Berrangé
On 10.12.25 18:20, Peter Xu wrote:
> On Wed, Dec 10, 2025 at 02:41:20PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Yes, it can. And regardless of the way we choose, qdev properties or QAPI,
>> I don't think we need a property on the backend itself. We need a property
>> (or migration capability) on vhost-user-blk itself, saying that its
>> backend should be migrated.
>
> The problem then is that we'd need to introduce the new property to every
> frontend that supports the backend. If it's a backend property, it can be one
> property on the backend that all the frontends consume.
>
Hmm, agree, that's right.. So we may not touch the frontend at all, and only
set up the backend to be migrated. This remains transparent to the frontend side.
>>
>> It's a lot simpler to migrate the backend inside the frontend state. If we
>> migrate the backend separately, we can't control the order of the backend and
>> frontend states, and will have to implement some late point in the state-load
>> process, where both are already loaded and we can do our post-load logic.
>
> Would MigrationPriority help when defining the VMSD?
>
Didn't know about it. Most probably it will help: we just set things up so that
backends migrate before frontends.
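Sketch of the idea (the MIG_PRI_BACKEND value is invented for illustration; QEMU's real MigrationPriority enum with MIG_PRI_DEFAULT and friends lives in the vmstate headers): handlers with higher priority are processed first, so giving backends a higher priority than frontends fixes the ordering without a late post-load hook.

```python
# Rough model of MigrationPriority-style ordering: vmstate sections with a
# higher priority are saved/loaded before lower-priority ones.  The priority
# value MIG_PRI_BACKEND and the section names are illustrative only.

MIG_PRI_DEFAULT = 0
MIG_PRI_BACKEND = 1  # hypothetical: make backends load before frontends

sections = [
    ("vhost-user-blk-frontend", MIG_PRI_DEFAULT),
    ("vhost-user-blk-backend", MIG_PRI_BACKEND),
]

# Higher priority first, roughly mirroring how QEMU inserts savevm handlers
# into the handler list sorted by priority.
load_order = [name for name, _ in sorted(sections, key=lambda s: -s[1])]
```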
--
Best regards,
Vladimir
Thread overview: 11+ messages:
2025-11-10 10:39 [PATCH v3 0/3] vhost-user-blk: support inflight migration Alexandr Moshkov
2025-11-10 10:39 ` [PATCH v3 1/3] vmstate: introduce VMSTATE_VBUFFER_UINT64 Alexandr Moshkov
2025-11-10 10:39 ` [PATCH v3 2/3] vhost: add vmstate for inflight region with inner buffer Alexandr Moshkov
2025-11-10 10:39 ` [PATCH v3 3/3] vhost-user-blk: support inter-host inflight migration Alexandr Moshkov
2025-11-18 20:24 ` [PATCH v3 0/3] vhost-user-blk: support " Vladimir Sementsov-Ogievskiy
2025-11-18 22:05 ` Peter Xu
2025-12-04 19:55 ` Vladimir Sementsov-Ogievskiy
2025-12-09 16:51 ` Peter Xu
2025-12-10 11:41 ` Vladimir Sementsov-Ogievskiy
2025-12-10 15:20 ` Peter Xu
2025-12-10 15:33 ` Vladimir Sementsov-Ogievskiy