* [PATCH 0/2] vhost-user-blk: support inflight migration
@ 2025-10-20 5:44 Alexandr Moshkov
2025-10-20 5:44 ` [PATCH 1/2] vhost: support inflight save/load Alexandr Moshkov
` (3 more replies)
0 siblings, 4 replies; 20+ messages in thread
From: Alexandr Moshkov @ 2025-10-20 5:44 UTC (permalink / raw)
To: qemu-devel
Cc: Raphael Norwitz, Michael S. Tsirkin, Stefano Garzarella,
Kevin Wolf, Hanna Reitz, Peter Xu, Fabiano Rosas, Eric Blake,
Markus Armbruster, Alexandr Moshkov
Hi!
During inter-host migration, waiting for disk requests to be drained
in the vhost-user backend can incur significant downtime.
This can be avoided if QEMU migrates the inflight region in vhost-user-blk.
Thus, during the qemu migration, the vhost-user backend can cancel all inflight requests and
then, after migration, they will be executed on another host.
At first, I tried to implement migration for all vhost-user devices that support inflight at once,
but this would require a lot of changes both in vhost-user-blk (to transfer it to the base class) and
in the vhost-user-base base class (inflight implementation and remodeling + a large refactor).
Therefore, for now I decided to leave this idea for later and
implement the migration of the inflight region first for vhost-user-blk.
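For readers unfamiliar with the stream layout: patch 1 below serializes the inflight region as a big-endian 64-bit size, a big-endian 16-bit queue size, and then the raw region bytes (size 0 meaning "no region"). A rough self-contained sketch of that layout, with hypothetical helper names standing in for QEMU's qemu_put_be64()/qemu_get_be64() family:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-ins for QEMU's QEMUFile big-endian accessors. */
static void put_be64(uint8_t **p, uint64_t v)
{
    for (int i = 7; i >= 0; i--) {
        *(*p)++ = (uint8_t)(v >> (i * 8));
    }
}

static void put_be16(uint8_t **p, uint16_t v)
{
    *(*p)++ = (uint8_t)(v >> 8);
    *(*p)++ = (uint8_t)v;
}

static uint64_t get_be64(const uint8_t **p)
{
    uint64_t v = 0;
    for (int i = 0; i < 8; i++) {
        v = (v << 8) | *(*p)++;
    }
    return v;
}

static uint16_t get_be16(const uint8_t **p)
{
    uint16_t v = (uint16_t)(((*p)[0] << 8) | (*p)[1]);
    *p += 2;
    return v;
}

/* Round-trip one inflight region through the stream layout
 * used by vhost_dev_save_inflight()/vhost_dev_load_inflight(). */
uint16_t roundtrip_queue_size(void)
{
    uint8_t region[16] = { 1, 2, 3 };   /* pretend inflight->addr */
    uint8_t stream[64];
    uint8_t *w = stream;

    put_be64(&w, sizeof(region));       /* inflight->size */
    put_be16(&w, 128);                  /* inflight->queue_size */
    memcpy(w, region, sizeof(region));  /* region contents */

    const uint8_t *r = stream;
    uint64_t size = get_be64(&r);
    uint16_t queue_size = get_be16(&r);
    assert(size == sizeof(region));
    assert(memcmp(r, region, (size_t)size) == 0);
    return queue_size;
}
```

This is only a sketch of the wire format, not the actual QEMU code; the real implementation streams through a QEMUFile.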
Alexandr Moshkov (2):
vhost: support inflight save/load
vhost-user-blk: support inflight migration
hw/block/vhost-user-blk.c | 52 ++++++++++++++++++++++++++++++++++++
hw/virtio/vhost.c | 56 +++++++++++++++++++++++++++++++++++++++
include/hw/virtio/vhost.h | 2 ++
migration/options.c | 7 +++++
migration/options.h | 1 +
qapi/migration.json | 9 +++++--
6 files changed, 125 insertions(+), 2 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH 1/2] vhost: support inflight save/load
2025-10-20 5:44 [PATCH 0/2] vhost-user-blk: support inflight migration Alexandr Moshkov
@ 2025-10-20 5:44 ` Alexandr Moshkov
2025-10-21 19:29 ` Peter Xu
2025-10-20 5:44 ` [PATCH 2/2] vhost-user-blk: support inflight migration Alexandr Moshkov
` (2 subsequent siblings)
3 siblings, 1 reply; 20+ messages in thread
From: Alexandr Moshkov @ 2025-10-20 5:44 UTC (permalink / raw)
To: qemu-devel
Cc: Raphael Norwitz, Michael S. Tsirkin, Stefano Garzarella,
Kevin Wolf, Hanna Reitz, Peter Xu, Fabiano Rosas, Eric Blake,
Markus Armbruster, Alexandr Moshkov
vhost_dev_load_inflight and vhost_dev_save_inflight have been deleted
by:
abe9ff2 ("vhost: Remove unused vhost_dev_{load|save}_inflight")
They are now needed by a following commit, so bring them back,
along with their helper vhost_dev_resize_inflight().
Signed-off-by: Alexandr Moshkov <dtalexundeer@yandex-team.ru>
---
hw/virtio/vhost.c | 56 +++++++++++++++++++++++++++++++++++++++
include/hw/virtio/vhost.h | 2 ++
2 files changed, 58 insertions(+)
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 266a11514a..16ce9a6037 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -2013,6 +2013,62 @@ int vhost_dev_get_inflight(struct vhost_dev *dev, uint16_t queue_size,
return 0;
}
+static int vhost_dev_resize_inflight(struct vhost_inflight *inflight,
+ uint64_t new_size)
+{
+ Error *err = NULL;
+ int fd = -1;
+ void *addr = qemu_memfd_alloc("vhost-inflight", new_size,
+ F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL,
+ &fd, &err);
+ if (err) {
+ error_report_err(err);
+ return -ENOMEM;
+ }
+
+ vhost_dev_free_inflight(inflight);
+ inflight->offset = 0;
+ inflight->addr = addr;
+ inflight->fd = fd;
+ inflight->size = new_size;
+
+ return 0;
+}
+
+void vhost_dev_save_inflight(struct vhost_inflight *inflight, QEMUFile *f)
+{
+ if (inflight->addr) {
+ qemu_put_be64(f, inflight->size);
+ qemu_put_be16(f, inflight->queue_size);
+ qemu_put_buffer(f, inflight->addr, inflight->size);
+ } else {
+ qemu_put_be64(f, 0);
+ }
+}
+
+int vhost_dev_load_inflight(struct vhost_inflight *inflight, QEMUFile *f)
+{
+ uint64_t size;
+
+ size = qemu_get_be64(f);
+ if (!size) {
+ return 0;
+ }
+
+ if (inflight->size != size) {
+ int ret = vhost_dev_resize_inflight(inflight, size);
+ if (ret < 0) {
+ return ret;
+ }
+ }
+
+ inflight->queue_size = qemu_get_be16(f);
+
+ qemu_get_buffer(f, inflight->addr, size);
+
+ return 0;
+}
+
static int vhost_dev_set_vring_enable(struct vhost_dev *hdev, int enable)
{
if (!hdev->vhost_ops->vhost_set_vring_enable) {
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index 08bbb4dfe9..da1f0c2361 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -402,6 +402,8 @@ int vhost_virtqueue_stop(struct vhost_dev *dev, struct VirtIODevice *vdev,
void vhost_dev_reset_inflight(struct vhost_inflight *inflight);
void vhost_dev_free_inflight(struct vhost_inflight *inflight);
+void vhost_dev_save_inflight(struct vhost_inflight *inflight, QEMUFile *f);
+int vhost_dev_load_inflight(struct vhost_inflight *inflight, QEMUFile *f);
int vhost_dev_prepare_inflight(struct vhost_dev *hdev, VirtIODevice *vdev);
int vhost_dev_set_inflight(struct vhost_dev *dev,
struct vhost_inflight *inflight);
--
2.34.1
* [PATCH 2/2] vhost-user-blk: support inflight migration
2025-10-20 5:44 [PATCH 0/2] vhost-user-blk: support inflight migration Alexandr Moshkov
2025-10-20 5:44 ` [PATCH 1/2] vhost: support inflight save/load Alexandr Moshkov
@ 2025-10-20 5:44 ` Alexandr Moshkov
2025-10-20 9:55 ` Markus Armbruster
2025-10-23 14:29 ` Lei Yang
2025-10-20 9:55 ` [PATCH 0/2] " Markus Armbruster
2025-10-21 18:54 ` Raphael Norwitz
3 siblings, 2 replies; 20+ messages in thread
From: Alexandr Moshkov @ 2025-10-20 5:44 UTC (permalink / raw)
To: qemu-devel
Cc: Raphael Norwitz, Michael S. Tsirkin, Stefano Garzarella,
Kevin Wolf, Hanna Reitz, Peter Xu, Fabiano Rosas, Eric Blake,
Markus Armbruster, Alexandr Moshkov
In vhost_user_blk_stop(), when an outgoing migration is finishing, set
force_stop = true so that GET_VRING_BASE will not be executed.
Signed-off-by: Alexandr Moshkov <dtalexundeer@yandex-team.ru>
---
hw/block/vhost-user-blk.c | 52 +++++++++++++++++++++++++++++++++++++++
migration/options.c | 7 ++++++
migration/options.h | 1 +
qapi/migration.json | 9 +++++--
4 files changed, 67 insertions(+), 2 deletions(-)
diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c
index c0cc5f6942..49f67d0451 100644
--- a/hw/block/vhost-user-blk.c
+++ b/hw/block/vhost-user-blk.c
@@ -31,6 +31,7 @@
#include "hw/virtio/virtio-access.h"
#include "system/system.h"
#include "system/runstate.h"
+#include "migration/options.h"
static const int user_feature_bits[] = {
VIRTIO_BLK_F_SIZE_MAX,
@@ -224,6 +225,11 @@ static int vhost_user_blk_stop(VirtIODevice *vdev)
force_stop = s->skip_get_vring_base_on_force_shutdown &&
qemu_force_shutdown_requested();
+ if (migrate_inflight_vhost_user_blk() &&
+ runstate_check(RUN_STATE_FINISH_MIGRATE)) {
+ force_stop = true;
+ }
+
ret = force_stop ? vhost_dev_force_stop(&s->dev, vdev, true) :
vhost_dev_stop(&s->dev, vdev, true);
@@ -568,12 +574,58 @@ static struct vhost_dev *vhost_user_blk_get_vhost(VirtIODevice *vdev)
return &s->dev;
}
+static int vhost_user_blk_save(QEMUFile *f, void *pv, size_t size,
+ const VMStateField *field, JSONWriter *vmdesc)
+{
+ VirtIODevice *vdev = pv;
+ VHostUserBlk *s = VHOST_USER_BLK(vdev);
+
+ if (!migrate_inflight_vhost_user_blk()) {
+ return 0;
+ }
+
+ vhost_dev_save_inflight(s->inflight, f);
+
+ return 0;
+}
+
+static int vhost_user_blk_load(QEMUFile *f, void *pv, size_t size,
+ const VMStateField *field)
+{
+ VirtIODevice *vdev = pv;
+ VHostUserBlk *s = VHOST_USER_BLK(vdev);
+ int ret;
+
+ if (!migrate_inflight_vhost_user_blk()) {
+ return 0;
+ }
+
+ ret = vhost_dev_load_inflight(s->inflight, f);
+ if (ret < 0) {
+ g_autofree char *path = object_get_canonical_path(OBJECT(vdev));
+ error_report("%s [%s]: can't load in-flight requests",
+ path, TYPE_VHOST_USER_BLK);
+ return ret;
+ }
+
+ return 0;
+}
+
static const VMStateDescription vmstate_vhost_user_blk = {
.name = "vhost-user-blk",
.minimum_version_id = 1,
.version_id = 1,
.fields = (const VMStateField[]) {
VMSTATE_VIRTIO_DEVICE,
+ {
+ .name = "backend state",
+ .info = &(const VMStateInfo) {
+ .name = "vhost-user-blk backend state",
+ .get = vhost_user_blk_load,
+ .put = vhost_user_blk_save,
+ },
+ .flags = VMS_SINGLE,
+ },
VMSTATE_END_OF_LIST()
},
};
diff --git a/migration/options.c b/migration/options.c
index 5183112775..fcae2b4559 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -262,6 +262,13 @@ bool migrate_mapped_ram(void)
return s->capabilities[MIGRATION_CAPABILITY_MAPPED_RAM];
}
+bool migrate_inflight_vhost_user_blk(void)
+{
+ MigrationState *s = migrate_get_current();
+
+ return s->capabilities[MIGRATION_CAPABILITY_INFLIGHT_VHOST_USER_BLK];
+}
+
bool migrate_ignore_shared(void)
{
MigrationState *s = migrate_get_current();
diff --git a/migration/options.h b/migration/options.h
index 82d839709e..eab1485d1a 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -30,6 +30,7 @@ bool migrate_colo(void);
bool migrate_dirty_bitmaps(void);
bool migrate_events(void);
bool migrate_mapped_ram(void);
+bool migrate_inflight_vhost_user_blk(void);
bool migrate_ignore_shared(void);
bool migrate_late_block_activate(void);
bool migrate_multifd(void);
diff --git a/qapi/migration.json b/qapi/migration.json
index be0f3fcc12..c9fea59515 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -517,9 +517,13 @@
# each RAM page. Requires a migration URI that supports seeking,
# such as a file. (since 9.0)
#
+# @inflight-vhost-user-blk: If enabled, QEMU will migrate inflight
+# region for vhost-user-blk. (since 10.2)
+#
# Features:
#
-# @unstable: Members @x-colo and @x-ignore-shared are experimental.
+# @unstable: Members @x-colo and @x-ignore-shared,
+# @inflight-vhost-user-blk are experimental.
# @deprecated: Member @zero-blocks is deprecated as being part of
# block migration which was already removed.
#
@@ -536,7 +540,8 @@
{ 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
'validate-uuid', 'background-snapshot',
'zero-copy-send', 'postcopy-preempt', 'switchover-ack',
- 'dirty-limit', 'mapped-ram'] }
+ 'dirty-limit', 'mapped-ram',
+ { 'name': 'inflight-vhost-user-blk', 'features': [ 'unstable' ] } ] }
##
# @MigrationCapabilityStatus:
--
2.34.1
* Re: [PATCH 0/2] vhost-user-blk: support inflight migration
2025-10-20 5:44 [PATCH 0/2] vhost-user-blk: support inflight migration Alexandr Moshkov
2025-10-20 5:44 ` [PATCH 1/2] vhost: support inflight save/load Alexandr Moshkov
2025-10-20 5:44 ` [PATCH 2/2] vhost-user-blk: support inflight migration Alexandr Moshkov
@ 2025-10-20 9:55 ` Markus Armbruster
2025-10-20 10:16 ` Alexandr Moshkov
2025-10-21 18:54 ` Raphael Norwitz
3 siblings, 1 reply; 20+ messages in thread
From: Markus Armbruster @ 2025-10-20 9:55 UTC (permalink / raw)
To: Alexandr Moshkov
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Peter Xu,
Fabiano Rosas, Eric Blake, Markus Armbruster,
Vladimir Sementsov-Ogievskiy
Alexandr Moshkov <dtalexundeer@yandex-team.ru> writes:
> Hi!
>
> During inter-host migration, waiting for disk requests to be drained
> in the vhost-user backend can incur significant downtime.
>
> This can be avoided if QEMU migrates the inflight region in vhost-user-blk.
> Thus, during the qemu migration, the vhost-user backend can cancel all inflight requests and
> then, after migration, they will be executed on another host.
>
> At first, I tried to implement migration for all vhost-user devices that support inflight at once,
> but this would require a lot of changes both in vhost-user-blk (to transfer it to the base class) and
> in the vhost-user-base base class (inflight implementation and remodeling + a large refactor).
>
> Therefore, for now I decided to leave this idea for later and
> implement the migration of the inflight region first for vhost-user-blk.
How is this work related to Vladimir's "vhost-user-blk: live-backend
local migration"?
* Re: [PATCH 2/2] vhost-user-blk: support inflight migration
2025-10-20 5:44 ` [PATCH 2/2] vhost-user-blk: support inflight migration Alexandr Moshkov
@ 2025-10-20 9:55 ` Markus Armbruster
2025-10-20 10:34 ` Alexandr Moshkov
2025-10-23 14:29 ` Lei Yang
1 sibling, 1 reply; 20+ messages in thread
From: Markus Armbruster @ 2025-10-20 9:55 UTC (permalink / raw)
To: Alexandr Moshkov
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Peter Xu,
Fabiano Rosas, Eric Blake, Markus Armbruster
Alexandr Moshkov <dtalexundeer@yandex-team.ru> writes:
> In vhost_user_blk_stop(), when an outgoing migration is finishing, set
> force_stop = true so that GET_VRING_BASE will not be executed.
>
> Signed-off-by: Alexandr Moshkov <dtalexundeer@yandex-team.ru>
Your cover letter explains why this is useful. Please work it into your
commit message.
[...]
> diff --git a/qapi/migration.json b/qapi/migration.json
> index be0f3fcc12..c9fea59515 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -517,9 +517,13 @@
> # each RAM page. Requires a migration URI that supports seeking,
> # such as a file. (since 9.0)
> #
> +# @inflight-vhost-user-blk: If enabled, QEMU will migrate inflight
> +# region for vhost-user-blk. (since 10.2)
> +#
Any guidance why and when users would want to enable it?
Is it a good idea to have device-specific capabilities?
> # Features:
> #
> -# @unstable: Members @x-colo and @x-ignore-shared are experimental.
> +# @unstable: Members @x-colo and @x-ignore-shared,
> +# @inflight-vhost-user-blk are experimental.
"and" is misplaced now. Fix:
# @unstable: Members @x-colo, @x-ignore-shared, and
# @inflight-vhost-user-blk are experimental.
Use the opportunity and insert a blank line here.
> # @deprecated: Member @zero-blocks is deprecated as being part of
> # block migration which was already removed.
> #
> @@ -536,7 +540,8 @@
> { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
> 'validate-uuid', 'background-snapshot',
> 'zero-copy-send', 'postcopy-preempt', 'switchover-ack',
> - 'dirty-limit', 'mapped-ram'] }
> + 'dirty-limit', 'mapped-ram',
> + { 'name': 'inflight-vhost-user-blk', 'features': [ 'unstable' ] } ] }
Long line. Obvious line break:
{ 'name': 'inflight-vhost-user-blk',
'features': [ 'unstable' ] } ] }
>
> ##
> # @MigrationCapabilityStatus:
* Re: [PATCH 0/2] vhost-user-blk: support inflight migration
2025-10-20 9:55 ` [PATCH 0/2] " Markus Armbruster
@ 2025-10-20 10:16 ` Alexandr Moshkov
0 siblings, 0 replies; 20+ messages in thread
From: Alexandr Moshkov @ 2025-10-20 10:16 UTC (permalink / raw)
To: Markus Armbruster
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Peter Xu,
Fabiano Rosas, Eric Blake, Vladimir Sementsov-Ogievskiy
On 10/20/25 14:55, Markus Armbruster wrote:
> Alexandr Moshkov<dtalexundeer@yandex-team.ru> writes:
>
>> Hi!
>>
>> During inter-host migration, waiting for disk requests to be drained
>> in the vhost-user backend can incur significant downtime.
>>
>> This can be avoided if QEMU migrates the inflight region in vhost-user-blk.
>> Thus, during the qemu migration, the vhost-user backend can cancel all inflight requests and
>> then, after migration, they will be executed on another host.
>>
>> At first, I tried to implement migration for all vhost-user devices that support inflight at once,
>> but this would require a lot of changes both in vhost-user-blk (to transfer it to the base class) and
>> in the vhost-user-base base class (inflight implementation and remodeling + a large refactor).
>>
>> Therefore, for now I decided to leave this idea for later and
>> implement the migration of the inflight region first for vhost-user-blk.
> How is this work related to Vladimir's "vhost-user-blk: live-backend
> local migration"?
Hi!
Vladimir's work only covers local migration (including passing the inflight
region fd), whereas my patch allows migrating inflight requests to another
host, reducing downtime by not waiting for disk requests to be drained.
* Re: [PATCH 2/2] vhost-user-blk: support inflight migration
2025-10-20 9:55 ` Markus Armbruster
@ 2025-10-20 10:34 ` Alexandr Moshkov
2025-10-20 11:47 ` Markus Armbruster
0 siblings, 1 reply; 20+ messages in thread
From: Alexandr Moshkov @ 2025-10-20 10:34 UTC (permalink / raw)
To: Markus Armbruster
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Peter Xu,
Fabiano Rosas, Eric Blake
Thanks for the review!
On 10/20/25 14:55, Markus Armbruster wrote:
> Alexandr Moshkov<dtalexundeer@yandex-team.ru> writes:
>
>> In vhost_user_blk_stop(), when an outgoing migration is finishing, set
>> force_stop = true so that GET_VRING_BASE will not be executed.
>>
>> Signed-off-by: Alexandr Moshkov<dtalexundeer@yandex-team.ru>
> Your cover letter explains why this is useful. Please work it into your
> commit message.
Ok
> [...]
>
>> diff --git a/qapi/migration.json b/qapi/migration.json
>> index be0f3fcc12..c9fea59515 100644
>> --- a/qapi/migration.json
>> +++ b/qapi/migration.json
>> @@ -517,9 +517,13 @@
>> # each RAM page. Requires a migration URI that supports seeking,
>> # such as a file. (since 9.0)
>> #
>> +# @inflight-vhost-user-blk: If enabled, QEMU will migrate inflight
>> +# region for vhost-user-blk. (since 10.2)
>> +#
> Any guidance why and when users would want to enable it?
>
> Is it a good idea to have device-specific capabilities?
Hmm, maybe it would be better to make this a parameter of vhost-user-blk
instead of a migration capability?
What do you think?
>> # Features:
>> #
>> -# @unstable: Members @x-colo and @x-ignore-shared are experimental.
>> +# @unstable: Members @x-colo and @x-ignore-shared,
>> +# @inflight-vhost-user-blk are experimental.
> "and" is misplaced now. Fix:
>
> # @unstable: Members @x-colo, @x-ignore-shared, and
> # @inflight-vhost-user-blk are experimental.
>
> Use the opportunity and insert a blank line here.
>
>> # @deprecated: Member @zero-blocks is deprecated as being part of
>> # block migration which was already removed.
>> #
>> @@ -536,7 +540,8 @@
>> { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
>> 'validate-uuid', 'background-snapshot',
>> 'zero-copy-send', 'postcopy-preempt', 'switchover-ack',
>> - 'dirty-limit', 'mapped-ram'] }
>> + 'dirty-limit', 'mapped-ram',
>> + { 'name': 'inflight-vhost-user-blk', 'features': [ 'unstable' ] } ] }
> Long line. Obvious line break:
>
> { 'name': 'inflight-vhost-user-blk',
> 'features': [ 'unstable' ] } ] }
>
>>
>> ##
>> # @MigrationCapabilityStatus:
Will be fixed, thanks!
* Re: [PATCH 2/2] vhost-user-blk: support inflight migration
2025-10-20 10:34 ` Alexandr Moshkov
@ 2025-10-20 11:47 ` Markus Armbruster
2025-10-23 19:24 ` Peter Xu
0 siblings, 1 reply; 20+ messages in thread
From: Markus Armbruster @ 2025-10-20 11:47 UTC (permalink / raw)
To: Alexandr Moshkov
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Peter Xu,
Fabiano Rosas, Eric Blake
Alexandr Moshkov <dtalexundeer@yandex-team.ru> writes:
> Thanks for review!
>
> On 10/20/25 14:55, Markus Armbruster wrote:
>> Alexandr Moshkov<dtalexundeer@yandex-team.ru> writes:
>>
>>> In vhost_user_blk_stop(), when an outgoing migration is finishing, set
>>> force_stop = true so that GET_VRING_BASE will not be executed.
>>>
>>> Signed-off-by: Alexandr Moshkov<dtalexundeer@yandex-team.ru>
>> Your cover letter explains why this is useful. Please work it into your
>> commit message.
>
> Ok
>
>> [...]
>>
>>> diff --git a/qapi/migration.json b/qapi/migration.json
>>> index be0f3fcc12..c9fea59515 100644
>>> --- a/qapi/migration.json
>>> +++ b/qapi/migration.json
>>> @@ -517,9 +517,13 @@
>>> # each RAM page. Requires a migration URI that supports seeking,
>>> # such as a file. (since 9.0)
>>> #
>>> +# @inflight-vhost-user-blk: If enabled, QEMU will migrate inflight
>>> +# region for vhost-user-blk. (since 10.2)
>>> +#
>> Any guidance why and when users would want to enable it?
>>
>> Is it a good idea to have device-specific capabilities?
>
> Hmm, maybe it would be better to make this a parameter of vhost-user-blk instead of a migration capability?
>
> What do you think?
I think this is a question for the migration maintainers :)
[...]
* Re: [PATCH 0/2] vhost-user-blk: support inflight migration
2025-10-20 5:44 [PATCH 0/2] vhost-user-blk: support inflight migration Alexandr Moshkov
` (2 preceding siblings ...)
2025-10-20 9:55 ` [PATCH 0/2] " Markus Armbruster
@ 2025-10-21 18:54 ` Raphael Norwitz
2025-10-22 7:59 ` Alexandr Moshkov
3 siblings, 1 reply; 20+ messages in thread
From: Raphael Norwitz @ 2025-10-21 18:54 UTC (permalink / raw)
To: Alexandr Moshkov
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Peter Xu,
Fabiano Rosas, Eric Blake, Markus Armbruster
The logic looks ok from the vhost-user-blk side but some comments inline.
On Mon, Oct 20, 2025 at 1:47 AM Alexandr Moshkov
<dtalexundeer@yandex-team.ru> wrote:
>
> Hi!
>
> During inter-host migration, waiting for disk requests to be drained
> in the vhost-user backend can incur significant downtime.
>
> This can be avoided if QEMU migrates the inflight region in vhost-user-blk.
> Thus, during the qemu migration, the vhost-user backend can cancel all inflight requests and
> then, after migration, they will be executed on another host.
>
> At first, I tried to implement migration for all vhost-user devices that support inflight at once,
> but this would require a lot of changes both in vhost-user-blk (to transfer it to the base class) and
> in the vhost-user-base base class (inflight implementation and remodeling + a large refactor).
>
Even if it's a more significant change I'd rather generalize as much
logic as possible and expose it as a vhost-user protocol feature. IMO
too much vhost-user device-agnostic code is being pushed into
vhost-user-blk.
As Markus noted this also conflicts significantly with Vladimir's
series so I'd suggest waiting until those are in, or possibly
attempting to generalize on top of his changes.
> Therefore, for now I decided to leave this idea for later and
> implement the migration of the inflight region first for vhost-user-blk.
>
> Alexandr Moshkov (2):
> vhost: support inflight save/load
> vhost-user-blk: support inflight migration
>
> hw/block/vhost-user-blk.c | 52 ++++++++++++++++++++++++++++++++++++
> hw/virtio/vhost.c | 56 +++++++++++++++++++++++++++++++++++++++
> include/hw/virtio/vhost.h | 2 ++
> migration/options.c | 7 +++++
> migration/options.h | 1 +
> qapi/migration.json | 9 +++++--
> 6 files changed, 125 insertions(+), 2 deletions(-)
>
> --
> 2.34.1
>
>
* Re: [PATCH 1/2] vhost: support inflight save/load
2025-10-20 5:44 ` [PATCH 1/2] vhost: support inflight save/load Alexandr Moshkov
@ 2025-10-21 19:29 ` Peter Xu
2025-10-23 6:46 ` Alexandr Moshkov
0 siblings, 1 reply; 20+ messages in thread
From: Peter Xu @ 2025-10-21 19:29 UTC (permalink / raw)
To: Alexandr Moshkov
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Fabiano Rosas,
Eric Blake, Markus Armbruster
On Mon, Oct 20, 2025 at 10:44:14AM +0500, Alexandr Moshkov wrote:
> vhost_dev_load_inflight and vhost_dev_save_inflight have been deleted
> by:
>
> abe9ff2 ("vhost: Remove unused vhost_dev_{load|save}_inflight")
>
> They are now needed by a following commit, so bring them back,
> along with their helper vhost_dev_resize_inflight().
>
> Signed-off-by: Alexandr Moshkov <dtalexundeer@yandex-team.ru>
> ---
> hw/virtio/vhost.c | 56 +++++++++++++++++++++++++++++++++++++++
> include/hw/virtio/vhost.h | 2 ++
> 2 files changed, 58 insertions(+)
>
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index 266a11514a..16ce9a6037 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -2013,6 +2013,62 @@ int vhost_dev_get_inflight(struct vhost_dev *dev, uint16_t queue_size,
> return 0;
> }
>
> +static int vhost_dev_resize_inflight(struct vhost_inflight *inflight,
> + uint64_t new_size)
> +{
> + Error *err = NULL;
> + int fd = -1;
> + void *addr = qemu_memfd_alloc("vhost-inflight", new_size,
> + F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL,
> + &fd, &err);
> + if (err) {
> + error_report_err(err);
> + return -ENOMEM;
> + }
> +
> + vhost_dev_free_inflight(inflight);
> + inflight->offset = 0;
> + inflight->addr = addr;
> + inflight->fd = fd;
> + inflight->size = new_size;
> +
> + return 0;
> +}
> +
> +void vhost_dev_save_inflight(struct vhost_inflight *inflight, QEMUFile *f)
> +{
> + if (inflight->addr) {
> + qemu_put_be64(f, inflight->size);
> + qemu_put_be16(f, inflight->queue_size);
> + qemu_put_buffer(f, inflight->addr, inflight->size);
> + } else {
> + qemu_put_be64(f, 0);
> + }
Can we use VMSD (extra fields, or subsections) to describe any new data for
migration?
In general, we want to avoid using qemufile API as much as possible in the
future. Hard-coded VMStateInfo is not suggested.
Thanks,
> +}
> +
> +int vhost_dev_load_inflight(struct vhost_inflight *inflight, QEMUFile *f)
> +{
> + uint64_t size;
> +
> + size = qemu_get_be64(f);
> + if (!size) {
> + return 0;
> + }
> +
> + if (inflight->size != size) {
> + int ret = vhost_dev_resize_inflight(inflight, size);
> + if (ret < 0) {
> + return ret;
> + }
> + }
> +
> + inflight->queue_size = qemu_get_be16(f);
> +
> + qemu_get_buffer(f, inflight->addr, size);
> +
> + return 0;
> +}
> +
> static int vhost_dev_set_vring_enable(struct vhost_dev *hdev, int enable)
> {
> if (!hdev->vhost_ops->vhost_set_vring_enable) {
> diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> index 08bbb4dfe9..da1f0c2361 100644
> --- a/include/hw/virtio/vhost.h
> +++ b/include/hw/virtio/vhost.h
> @@ -402,6 +402,8 @@ int vhost_virtqueue_stop(struct vhost_dev *dev, struct VirtIODevice *vdev,
>
> void vhost_dev_reset_inflight(struct vhost_inflight *inflight);
> void vhost_dev_free_inflight(struct vhost_inflight *inflight);
> +void vhost_dev_save_inflight(struct vhost_inflight *inflight, QEMUFile *f);
> +int vhost_dev_load_inflight(struct vhost_inflight *inflight, QEMUFile *f);
> int vhost_dev_prepare_inflight(struct vhost_dev *hdev, VirtIODevice *vdev);
> int vhost_dev_set_inflight(struct vhost_dev *dev,
> struct vhost_inflight *inflight);
> --
> 2.34.1
>
--
Peter Xu
* Re: [PATCH 0/2] vhost-user-blk: support inflight migration
2025-10-21 18:54 ` Raphael Norwitz
@ 2025-10-22 7:59 ` Alexandr Moshkov
2025-10-22 21:33 ` Raphael Norwitz
0 siblings, 1 reply; 20+ messages in thread
From: Alexandr Moshkov @ 2025-10-22 7:59 UTC (permalink / raw)
To: Raphael Norwitz
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Peter Xu,
Fabiano Rosas, Eric Blake, Markus Armbruster
Hi!
On 10/21/25 23:54, Raphael Norwitz wrote:
> The logic looks ok from the vhost-user-blk side but some comments inline.
>
> On Mon, Oct 20, 2025 at 1:47 AM Alexandr Moshkov
> <dtalexundeer@yandex-team.ru> wrote:
>> Hi!
>>
>> During inter-host migration, waiting for disk requests to be drained
>> in the vhost-user backend can incur significant downtime.
>>
>> This can be avoided if QEMU migrates the inflight region in vhost-user-blk.
>> Thus, during the qemu migration, the vhost-user backend can cancel all inflight requests and
>> then, after migration, they will be executed on another host.
>>
>> At first, I tried to implement migration for all vhost-user devices that support inflight at once,
>> but this would require a lot of changes both in vhost-user-blk (to transfer it to the base class) and
>> in the vhost-user-base base class (inflight implementation and remodeling + a large refactor).
>>
> Even if it's a more significant change I'd rather generalize as much
> logic as possible and expose it as a vhost-user protocol feature. IMO
> too much vhost-user device-agnostic code is being pushed into
> vhost-user-blk.
As far as I understand (correct me if I'm wrong), this feature must
be implemented in the device itself (along with the inflight field) or
in some base class for vhost-user devices, which vhost-user-blk doesn't
have yet. The closest base class I could find is vhost-user-base, but to
use it, all the device-agnostic code from vhost-user-blk would have to be
implemented there without breaking the existing vhost-user-base derived
devices. For example, to support inflight migration in the base class, it
would first need an inflight field of its own, along with the reconnect
feature. Then, with a big refactor, vhost-user-blk would somehow have to
inherit from it, along with other devices such as vhost-user-fs.
So, IMO it looks like a whole separate track that will need to be tackled.
> As Markus noted this also conflicts significantly with Vladimir's
> series so I'd suggest waiting until those are in, or possibly
> attempting to generalize on top of his changes.
Yeah, but I think I just need to perform inflight migration only when
some non-local migration flag is set.
And I think Vladimir's series already has that flag.
Thanks for the comments!
* Re: [PATCH 0/2] vhost-user-blk: support inflight migration
2025-10-22 7:59 ` Alexandr Moshkov
@ 2025-10-22 21:33 ` Raphael Norwitz
2025-10-23 6:44 ` Alexandr Moshkov
0 siblings, 1 reply; 20+ messages in thread
From: Raphael Norwitz @ 2025-10-22 21:33 UTC (permalink / raw)
To: Alexandr Moshkov
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Peter Xu,
Fabiano Rosas, Eric Blake, Markus Armbruster
On Wed, Oct 22, 2025 at 3:59 AM Alexandr Moshkov
<dtalexundeer@yandex-team.ru> wrote:
>
> Hi!
>
> On 10/21/25 23:54, Raphael Norwitz wrote:
>
> The logic looks ok from the vhost-user-blk side but some comments inline.
>
> On Mon, Oct 20, 2025 at 1:47 AM Alexandr Moshkov
> <dtalexundeer@yandex-team.ru> wrote:
>
> Hi!
>
> During inter-host migration, waiting for disk requests to be drained
> in the vhost-user backend can incur significant downtime.
>
> This can be avoided if QEMU migrates the inflight region in vhost-user-blk.
> Thus, during the qemu migration, the vhost-user backend can cancel all inflight requests and
> then, after migration, they will be executed on another host.
>
> At first, I tried to implement migration for all vhost-user devices that support inflight at once,
> but this would require a lot of changes both in vhost-user-blk (to transfer it to the base class) and
> in the vhost-user-base base class (inflight implementation and remodeling + a large refactor).
>
> Even if it's a more significant change I'd rather generalize as much
> logic as possible and expose it as a vhost-user protocol feature. IMO
> too much vhost-user device-agnostic code is being pushed into
> vhost-user-blk.
>
> As far as I understand (correct me if I'm wrong), but this feature must be implemented in device itself (along with inflight field location) or in some base class for vhost-user devices. For now vhost-user-blk doesn't have one yet. The closest base class that i could find is vhost-user-base, but for using it, it has to implement all device-agnostic code from vhost-user-blk, and don't break all existing vhost-user-base derived devices. For example, to support inflight migration in base class, there first need to implement inflight field in it, along with the reconnect feature. Then, with a big refactor, somehow inherit vhost-user-blk from it, along with other devices such as vhost-user-fs.
>
> So, IMO it looks like a whole separate track that will need to be tackled.
>
I get that inflight is inside of struct VhostUserBlk, not in struct
vhost_user. Even if not all device types support inflight I think that
could probably be changed? If we do keep things as you have it here
patch 1 should probably bring the vhost_dev_load_inflight() and
vhost_dev_save_inflight() into vhost_user.c rather than vhost.c since
only vhost_user devices seem to use inflight.
> As Markus noted this also conflicts significantly with Vladimir's
> series so I'd suggest waiting until those are in, or possibly
> attempting to generalize on top of his changes.
>
> Yes, but I think I just need to perform inflight migration only when some non-local migration flag is set.
>
I'm not sure about that. While it may be completely safe in your case
to avoid GET_VRING_BASE/quiescing the backend, it could introduce
corruption cases on other vhost-user-blk storage backends which work
differently. I think this would require an "opt-in" migration
parameter over and above checking that it's not a local migration.
Ideally there would be some way to have the vhost-user backends expose
support per device with a protocol feature and, if the source and
destination both advertise support, we perform the inflight migration
skipping GET_VRING_BASE. I don't know enough about the migration code
to know how that would be done or if it is practical.
Also (and I think this also applies to Vladimir's series), it should
be possible to add multiple vhost-user-blk devices to the same Qemu
instance but have the devices backed by very different backend
implementations. What happens if some of those backend implementations
support things like lightweight local migration and/or inflight
migration but others don't? Ideally you would want the API to be able
to select target vhost-user-blk devices for accelerated migration
rather than assuming they will all want to use the same features. I
don't know if anyone is doing that or if we care about such cases but
I just want to throw the concern out there.
> And I think Vladimir's series already has that flag.
>
>
> Thanks for the comments!
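The per-device negotiation Raphael describes could, in principle, reuse the same mechanism as the existing inflight handshake. Below is a minimal self-contained sketch; the VHOST_USER_PROTOCOL_F_INFLIGHT_MIGRATION bit is invented (it is not part of the vhost-user spec), and QEMU's virtio_has_feature() helper and struct vhost_dev are stubbed so the fragment compiles on its own:

```c
#include <stdbool.h>
#include <stdint.h>
#include <assert.h>

/* Invented bit number -- NOT part of the vhost-user spec.  The idea
 * mirrors how the real VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD bit gates
 * the inflight shared-memory handshake. */
#define VHOST_USER_PROTOCOL_F_INFLIGHT_MIGRATION 18

/* Minimal stand-in for the QEMU structure. */
struct vhost_dev {
    uint64_t protocol_features;  /* bits negotiated with the backend */
};

/* Stub of QEMU's virtio_has_feature() helper. */
static bool virtio_has_feature(uint64_t features, unsigned int fbit)
{
    return (features & (1ULL << fbit)) != 0;
}

/* Skip GET_VRING_BASE only when the backend advertised the bit, so
 * backends that cannot cancel in-flight requests keep the old
 * drain-and-quiesce behaviour. */
static bool vhost_user_can_migrate_inflight(const struct vhost_dev *dev)
{
    return virtio_has_feature(dev->protocol_features,
                              VHOST_USER_PROTOCOL_F_INFLIGHT_MIGRATION);
}
```

A real implementation would also need the destination QEMU to check the bit before accepting migrated inflight state, which is the part that touches migration code.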
* Re: [PATCH 0/2] vhost-user-blk: support inflight migration
2025-10-22 21:33 ` Raphael Norwitz
@ 2025-10-23 6:44 ` Alexandr Moshkov
2025-10-23 17:53 ` Raphael Norwitz
0 siblings, 1 reply; 20+ messages in thread
From: Alexandr Moshkov @ 2025-10-23 6:44 UTC (permalink / raw)
To: Raphael Norwitz
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Peter Xu,
Fabiano Rosas, Eric Blake, Markus Armbruster
On 10/23/25 02:33, Raphael Norwitz wrote:
> On Wed, Oct 22, 2025 at 3:59 AM Alexandr Moshkov
> <dtalexundeer@yandex-team.ru> wrote:
>> Hi!
>>
>> On 10/21/25 23:54, Raphael Norwitz wrote:
>>
>> The logic looks ok from the vhost-user-blk side but some comments inline.
>>
>> On Mon, Oct 20, 2025 at 1:47 AM Alexandr Moshkov
>> <dtalexundeer@yandex-team.ru> wrote:
>>
>> Hi!
>>
>> During inter-host migration, waiting for disk requests to be drained
>> in the vhost-user backend can incur significant downtime.
>>
>> This can be avoided if QEMU migrates the inflight region in vhost-user-blk.
>> Thus, during the qemu migration, the vhost-user backend can cancel all inflight requests and
>> then, after migration, they will be executed on another host.
>>
>> At first, I tried to implement migration for all vhost-user devices that support inflight at once,
>> but this would require a lot of changes both in vhost-user-blk (to transfer it to the base class) and
>> in the vhost-user-base base class (inflight implementation and remodeling + a large refactor).
>>
>> Even if it's a more significant change I'd rather generalize as much
>> logic as possible and expose it as a vhost-user protocol feature. IMO
>> too much vhost-user device-agnostic code is being pushed into
>> vhost-user-blk.
>>
>> As far as I understand (correct me if I'm wrong), this feature must be implemented in the device itself (along with the inflight field location) or in some base class for vhost-user devices, and for now vhost-user-blk doesn't have one. The closest base class I could find is vhost-user-base, but to use it, it would have to implement all the device-agnostic code from vhost-user-blk without breaking the existing vhost-user-base derived devices. For example, to support inflight migration in the base class, it would first need an inflight field of its own, along with the reconnect feature. Then, with a big refactor, vhost-user-blk would somehow have to inherit from it, along with other devices such as vhost-user-fs.
>>
>> So, IMO it looks like a whole separate track that will need to be tackled.
>>
> I get that inflight is inside of struct VhostUserBlk, not in struct
> vhost_user. Even if not all device types support inflight I think that
> could probably be changed? If we do keep things as you have it here
> patch 1 should probably bring the vhost_dev_load_inflight() and
> vhost_dev_save_inflight() into vhost_user.c rather than vhost.c since
> only vhost_user devices seem to use inflight.
Firstly, I tried to implement inflight in the vhost_user structure, but
this structure is initialized on every vhost_user_blk_connect() and then
destroyed on every vhost_user_blk_disconnect(). But the inflight region
must survive device disconnect.
During active migration, the vhost_user struct will be destroyed before
the inflight saving.
And you are right about the location of the functions in patch 1. I will
move them to the vhost-user.c file, which will be more appropriate. Thanks!
>> As Markus noted this also conflicts significantly with Vladimir's
>> series so I'd suggest waiting until those are in, or possibly
>> attempting to generalize on top of his changes.
>>
>> Yes, but I think I just need to perform inflight migration only when some non-local migration flag is set.
>>
> I'm not sure about that. While it may be completely safe in your case
> to avoid GET_VRING_BASE/quiescing the backend, it could introduce
> corruption cases on other vhost-user-blk storage backends which work
> differently. I think this would require an "opt-in" migration
> parameter over and above checking that it's not a local migration.
In patch 2 I introduce a migration capability for that. Or do you think
that we need something like a per-device parameter?
>
> Ideally there would be some way to have the vhost-user backends expose
> support per device with a protocol feature and, if the source and
> destination both advertise support, we perform the inflight migration
> skipping GET_VRING_BASE. I don't know enough about the migration code
> to know how that would be done or if it is practical.
>
> Also (and I think this also applies to Vladimir's series), it should
> be possible to add multiple vhost-user-blk devices to the same Qemu
> instance but have the devices backed by very different backend
> implementations. What happens if some of those backend implementations
> support things like lightweight local migration and/or inflight
> migration but others don't? Ideally you would want the API to be able
> to select target vhost-user-blk devices for accelerated migration
> rather than assuming they will all want to use the same features. I
> don't know if anyone is doing that or if we care about such cases but
> I just want to throw the concern out there.
Hmm, it looks like we need some per-device parameter in order to turn
this feature on for them.
I guess we need an opinion from the migration maintainers about such cases.
* Re: [PATCH 1/2] vhost: support inflight save/load
2025-10-21 19:29 ` Peter Xu
@ 2025-10-23 6:46 ` Alexandr Moshkov
0 siblings, 0 replies; 20+ messages in thread
From: Alexandr Moshkov @ 2025-10-23 6:46 UTC (permalink / raw)
To: Peter Xu
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Fabiano Rosas,
Eric Blake, Markus Armbruster
Hi! Thanks for the review!
On 10/22/25 00:29, Peter Xu wrote:
> On Mon, Oct 20, 2025 at 10:44:14AM +0500, Alexandr Moshkov wrote:
>> vhost_dev_load_inflight and vhost_dev_save_inflight have been deleted
>> by:
>>
>> abe9ff2 ("vhost: Remove unused vhost_dev_{load|save}_inflight")
>>
>> So now they are needed for a future commit.
>> Bring them back, along with their helper vhost_dev_resize_inflight.
>>
>> Signed-off-by: Alexandr Moshkov <dtalexundeer@yandex-team.ru>
>> ---
>> hw/virtio/vhost.c | 56 +++++++++++++++++++++++++++++++++++++++
>> include/hw/virtio/vhost.h | 2 ++
>> 2 files changed, 58 insertions(+)
>>
>> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
>> index 266a11514a..16ce9a6037 100644
>> --- a/hw/virtio/vhost.c
>> +++ b/hw/virtio/vhost.c
>> @@ -2013,6 +2013,62 @@ int vhost_dev_get_inflight(struct vhost_dev *dev, uint16_t queue_size,
>> return 0;
>> }
>>
>> +static int vhost_dev_resize_inflight(struct vhost_inflight *inflight,
>> + uint64_t new_size)
>> +{
>> + Error *err = NULL;
>> + int fd = -1;
>> + void *addr = qemu_memfd_alloc("vhost-inflight", new_size,
>> + F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL,
>> + &fd, &err);
>> + if (err) {
>> + error_report_err(err);
>> + return -ENOMEM;
>> + }
>> +
>> + vhost_dev_free_inflight(inflight);
>> + inflight->offset = 0;
>> + inflight->addr = addr;
>> + inflight->fd = fd;
>> + inflight->size = new_size;
>> +
>> + return 0;
>> +}
>> +
>> +void vhost_dev_save_inflight(struct vhost_inflight *inflight, QEMUFile *f)
>> +{
>> + if (inflight->addr) {
>> + qemu_put_be64(f, inflight->size);
>> + qemu_put_be16(f, inflight->queue_size);
>> + qemu_put_buffer(f, inflight->addr, inflight->size);
>> + } else {
>> + qemu_put_be64(f, 0);
>> + }
> Can we use VMSD (extra fields, or subsections) to describe any new data for
> migration?
>
> In general, we want to avoid using qemufile API as much as possible in the
> future. Hard-coded VMStateInfo is not suggested.
>
> Thanks,
Ok! I'll fix it
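For reference, the VMSD-based description Peter asks for might look roughly like the subsection below. This is a hypothetical sketch, not code from the series: the `size32` field is invented, standing in for a 32-bit mirror of `inflight->size` that a real conversion would need, because VMSTATE_VBUFFER_UINT32() takes a uint32_t length field while `inflight->size` is a uint64_t.

```c
/* Hypothetical sketch of a VMSD subsection for the inflight region.
 * `size32` does not exist in struct vhost_inflight today; see the
 * caveat above. */
static bool vhost_inflight_needed(void *opaque)
{
    struct vhost_inflight *inflight = opaque;

    return inflight->addr != NULL;
}

static const VMStateDescription vmstate_vhost_inflight = {
    .name = "vhost-inflight",
    .version_id = 1,
    .minimum_version_id = 1,
    .needed = vhost_inflight_needed,
    .fields = (const VMStateField[]) {
        VMSTATE_UINT32(size32, struct vhost_inflight),
        VMSTATE_UINT16(queue_size, struct vhost_inflight),
        VMSTATE_VBUFFER_UINT32(addr, struct vhost_inflight, 1, NULL, size32),
        VMSTATE_END_OF_LIST()
    },
};
```

The `.needed` hook replaces the hand-coded "size == 0 means no region" convention from the quoted patch.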
>
>> +}
>> +
>> +int vhost_dev_load_inflight(struct vhost_inflight *inflight, QEMUFile *f)
>> +{
>> + uint64_t size;
>> +
>> + size = qemu_get_be64(f);
>> + if (!size) {
>> + return 0;
>> + }
>> +
>> + if (inflight->size != size) {
>> + int ret = vhost_dev_resize_inflight(inflight, size);
>> + if (ret < 0) {
>> + return ret;
>> + }
>> + }
>> +
>> + inflight->queue_size = qemu_get_be16(f);
>> +
>> + qemu_get_buffer(f, inflight->addr, size);
>> +
>> + return 0;
>> +}
>> +
>> static int vhost_dev_set_vring_enable(struct vhost_dev *hdev, int enable)
>> {
>> if (!hdev->vhost_ops->vhost_set_vring_enable) {
>> diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
>> index 08bbb4dfe9..da1f0c2361 100644
>> --- a/include/hw/virtio/vhost.h
>> +++ b/include/hw/virtio/vhost.h
>> @@ -402,6 +402,8 @@ int vhost_virtqueue_stop(struct vhost_dev *dev, struct VirtIODevice *vdev,
>>
>> void vhost_dev_reset_inflight(struct vhost_inflight *inflight);
>> void vhost_dev_free_inflight(struct vhost_inflight *inflight);
>> +void vhost_dev_save_inflight(struct vhost_inflight *inflight, QEMUFile *f);
>> +int vhost_dev_load_inflight(struct vhost_inflight *inflight, QEMUFile *f);
>> int vhost_dev_prepare_inflight(struct vhost_dev *hdev, VirtIODevice *vdev);
>> int vhost_dev_set_inflight(struct vhost_dev *dev,
>> struct vhost_inflight *inflight);
>> --
>> 2.34.1
>>
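For readers following along: the save/load pair in the quoted patch defines a simple wire layout, a big-endian u64 region size (0 meaning "no inflight region"), a big-endian u16 queue_size, then the raw region bytes. Below is a standalone sketch of that layout with QEMUFile swapped for a plain memory buffer; the struct and helper names are simplified assumptions, not QEMU code.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Simplified stand-in for struct vhost_inflight. */
struct inflight_sketch {
    uint64_t size;
    uint16_t queue_size;
    uint8_t *addr;
};

static uint8_t *put_be64(uint8_t *p, uint64_t v)
{
    for (int i = 7; i >= 0; i--) {
        *p++ = (uint8_t)(v >> (i * 8));
    }
    return p;
}

static uint8_t *put_be16(uint8_t *p, uint16_t v)
{
    *p++ = (uint8_t)(v >> 8);
    *p++ = (uint8_t)v;
    return p;
}

static const uint8_t *get_be64(const uint8_t *p, uint64_t *v)
{
    *v = 0;
    for (int i = 0; i < 8; i++) {
        *v = (*v << 8) | *p++;
    }
    return p;
}

static const uint8_t *get_be16(const uint8_t *p, uint16_t *v)
{
    *v = (uint16_t)((p[0] << 8) | p[1]);
    return p + 2;
}

/* Mirrors vhost_dev_save_inflight(): size == 0 stands for "nothing to
 * migrate" so the destination can skip loading entirely. */
static size_t save_inflight(const struct inflight_sketch *in, uint8_t *out)
{
    uint8_t *p = out;

    if (in->addr) {
        p = put_be64(p, in->size);
        p = put_be16(p, in->queue_size);
        memcpy(p, in->addr, in->size);
        p += in->size;
    } else {
        p = put_be64(p, 0);
    }
    return (size_t)(p - out);
}

/* Mirrors vhost_dev_load_inflight(), minus the memfd resize step;
 * in->addr must already point at a large-enough buffer here. */
static int load_inflight(struct inflight_sketch *in, const uint8_t *buf)
{
    uint64_t size;
    const uint8_t *p = get_be64(buf, &size);

    if (!size) {
        return 0;
    }
    in->size = size;
    p = get_be16(p, &in->queue_size);
    memcpy(in->addr, p, size);
    return 0;
}
```

A VMSD-based version (as Peter suggests) would make this layout implicit in the field descriptions instead of hand-coded.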
* Re: [PATCH 2/2] vhost-user-blk: support inflight migration
2025-10-20 5:44 ` [PATCH 2/2] vhost-user-blk: support inflight migration Alexandr Moshkov
2025-10-20 9:55 ` Markus Armbruster
@ 2025-10-23 14:29 ` Lei Yang
2025-10-24 8:37 ` Alexandr Moshkov
1 sibling, 1 reply; 20+ messages in thread
From: Lei Yang @ 2025-10-23 14:29 UTC (permalink / raw)
To: Alexandr Moshkov
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Peter Xu,
Fabiano Rosas, Eric Blake, Markus Armbruster
Hi Alexandr
According to my test results, this series of patches introduces an issue:
the build prints the following error messages after applying your patches.
The test based on this commit:
commit 3a2d5612a7422732b648b46d4b934e2e54622fd6 (origin/master, origin/HEAD)
Author: Peter Maydell <peter.maydell@linaro.org>
Date: Fri Oct 17 14:31:56 2025 +0100
Error messages:
[1849/2964] Compiling C object
libqemu-x86_64-softmmu.a.p/hw_block_vhost-user-blk.c.o
FAILED: libqemu-x86_64-softmmu.a.p/hw_block_vhost-user-blk.c.o
cc -m64 -Ilibqemu-x86_64-softmmu.a.p -I. -I.. -Itarget/i386
-I../target/i386 -Isubprojects/dtc/libfdt -I../subprojects/dtc/libfdt
-Isubprojects/libvduse -I../subprojects/libvduse -Iqapi -Itrace -Iui
-Iui/shader -I/usr/include/pixman-1 -I/usr/include/glib-2.0
-I/usr/lib64/glib-2.0/include -I/usr/include/libmount
-I/usr/include/blkid -I/usr/include/sysprof-6
-I/usr/include/gio-unix-2.0 -I/usr/include/slirp
-fdiagnostics-color=auto -Wall -Winvalid-pch -Werror -std=gnu11 -O0 -g
-fstack-protector-strong -Wempty-body -Wendif-labels
-Wexpansion-to-defined -Wformat-security -Wformat-y2k
-Wignored-qualifiers -Wimplicit-fallthrough=2 -Winit-self
-Wmissing-format-attribute -Wmissing-prototypes -Wnested-externs
-Wold-style-declaration -Wold-style-definition -Wredundant-decls
-Wshadow=local -Wstrict-prototypes -Wtype-limits -Wundef -Wvla
-Wwrite-strings -Wno-missing-include-dirs -Wno-psabi
-Wno-shift-negative-value -isystem
/mnt/tests/distribution/command/qemu/linux-headers -isystem
linux-headers -iquote . -iquote /mnt/tests/distribution/command/qemu
-iquote /mnt/tests/distribution/command/qemu/include -iquote
/mnt/tests/distribution/command/qemu/host/include/x86_64 -iquote
/mnt/tests/distribution/command/qemu/host/include/generic -iquote
/mnt/tests/distribution/command/qemu/tcg/i386 -pthread -mcx16 -msse2
-D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE
-fno-strict-aliasing -fno-common -fwrapv -ftrivial-auto-var-init=zero
-fzero-call-used-regs=used-gpr -fPIE -DWITH_GZFILEOP
-isystem../linux-headers -isystemlinux-headers -DCOMPILING_PER_TARGET
'-DCONFIG_TARGET="x86_64-softmmu-config-target.h"'
'-DCONFIG_DEVICES="x86_64-softmmu-config-devices.h"' -MD -MQ
libqemu-x86_64-softmmu.a.p/hw_block_vhost-user-blk.c.o -MF
libqemu-x86_64-softmmu.a.p/hw_block_vhost-user-blk.c.o.d -o
libqemu-x86_64-softmmu.a.p/hw_block_vhost-user-blk.c.o -c
../hw/block/vhost-user-blk.c
In file included from
/mnt/tests/distribution/command/qemu/migration/options.h:19,
from ../hw/block/vhost-user-blk.c:34:
/mnt/tests/distribution/command/qemu/include/migration/client-options.h:26:1:
error: unknown type name ‘MigMode’
26 | MigMode migrate_mode(void);
| ^~~~~~~
/mnt/tests/distribution/command/qemu/migration/options.h:66:7: error:
unknown type name ‘BitmapMigrationNodeAliasList’
66 | const BitmapMigrationNodeAliasList *migrate_block_bitmap_mapping(void);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/mnt/tests/distribution/command/qemu/migration/options.h:80:1: error:
unknown type name ‘MultiFDCompression’
80 | MultiFDCompression migrate_multifd_compression(void);
| ^~~~~~~~~~~~~~~~~~
/mnt/tests/distribution/command/qemu/migration/options.h:89:1: error:
unknown type name ‘ZeroPageDetection’
89 | ZeroPageDetection migrate_zero_page_detection(void);
| ^~~~~~~~~~~~~~~~~
/mnt/tests/distribution/command/qemu/migration/options.h:93:27: error:
unknown type name ‘MigrationParameters’; did you mean
‘MigrationState’?
93 | bool migrate_params_check(MigrationParameters *params, Error **errp);
| ^~~~~~~~~~~~~~~~~~~
| MigrationState
/mnt/tests/distribution/command/qemu/migration/options.h:94:26: error:
unknown type name ‘MigrationParameters’; did you mean
‘MigrationState’?
94 | void migrate_params_init(MigrationParameters *params);
| ^~~~~~~~~~~~~~~~~~~
| MigrationState
ninja: build stopped: subcommand failed.
make[1]: *** [Makefile:168: run-ninja] Error 1
Thanks
Lei
On Mon, Oct 20, 2025 at 1:47 PM Alexandr Moshkov
<dtalexundeer@yandex-team.ru> wrote:
>
> In vhost_user_blk_stop() on incoming migration make force_stop = true,
> so GET_VRING_BASE will not be executed.
>
> Signed-off-by: Alexandr Moshkov <dtalexundeer@yandex-team.ru>
> ---
> hw/block/vhost-user-blk.c | 52 +++++++++++++++++++++++++++++++++++++++
> migration/options.c | 7 ++++++
> migration/options.h | 1 +
> qapi/migration.json | 9 +++++--
> 4 files changed, 67 insertions(+), 2 deletions(-)
>
> diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c
> index c0cc5f6942..49f67d0451 100644
> --- a/hw/block/vhost-user-blk.c
> +++ b/hw/block/vhost-user-blk.c
> @@ -31,6 +31,7 @@
> #include "hw/virtio/virtio-access.h"
> #include "system/system.h"
> #include "system/runstate.h"
> +#include "migration/options.h"
>
> static const int user_feature_bits[] = {
> VIRTIO_BLK_F_SIZE_MAX,
> @@ -224,6 +225,11 @@ static int vhost_user_blk_stop(VirtIODevice *vdev)
> force_stop = s->skip_get_vring_base_on_force_shutdown &&
> qemu_force_shutdown_requested();
>
> + if (migrate_inflight_vhost_user_blk() &&
> + runstate_check(RUN_STATE_FINISH_MIGRATE)) {
> + force_stop = true;
> + }
> +
> ret = force_stop ? vhost_dev_force_stop(&s->dev, vdev, true) :
> vhost_dev_stop(&s->dev, vdev, true);
>
> @@ -568,12 +574,58 @@ static struct vhost_dev *vhost_user_blk_get_vhost(VirtIODevice *vdev)
> return &s->dev;
> }
>
> +static int vhost_user_blk_save(QEMUFile *f, void *pv, size_t size,
> + const VMStateField *field, JSONWriter *vmdesc)
> +{
> + VirtIODevice *vdev = pv;
> + VHostUserBlk *s = VHOST_USER_BLK(vdev);
> +
> + if (!migrate_inflight_vhost_user_blk()) {
> + return 0;
> + }
> +
> + vhost_dev_save_inflight(s->inflight, f);
> +
> + return 0;
> +}
> +
> +static int vhost_user_blk_load(QEMUFile *f, void *pv, size_t size,
> + const VMStateField *field)
> +{
> + VirtIODevice *vdev = pv;
> + VHostUserBlk *s = VHOST_USER_BLK(vdev);
> + int ret;
> +
> + if (!migrate_inflight_vhost_user_blk()) {
> + return 0;
> + }
> +
> + ret = vhost_dev_load_inflight(s->inflight, f);
> + if (ret < 0) {
> + g_autofree char *path = object_get_canonical_path(OBJECT(vdev));
> + error_report("%s [%s]: can't load in-flight requests",
> + path, TYPE_VHOST_USER_BLK);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> static const VMStateDescription vmstate_vhost_user_blk = {
> .name = "vhost-user-blk",
> .minimum_version_id = 1,
> .version_id = 1,
> .fields = (const VMStateField[]) {
> VMSTATE_VIRTIO_DEVICE,
> + {
> + .name = "backend state",
> + .info = &(const VMStateInfo) {
> + .name = "vhost-user-blk backend state",
> + .get = vhost_user_blk_load,
> + .put = vhost_user_blk_save,
> + },
> + .flags = VMS_SINGLE,
> + },
> VMSTATE_END_OF_LIST()
> },
> };
> diff --git a/migration/options.c b/migration/options.c
> index 5183112775..fcae2b4559 100644
> --- a/migration/options.c
> +++ b/migration/options.c
> @@ -262,6 +262,13 @@ bool migrate_mapped_ram(void)
> return s->capabilities[MIGRATION_CAPABILITY_MAPPED_RAM];
> }
>
> +bool migrate_inflight_vhost_user_blk(void)
> +{
> + MigrationState *s = migrate_get_current();
> +
> + return s->capabilities[MIGRATION_CAPABILITY_INFLIGHT_VHOST_USER_BLK];
> +}
> +
> bool migrate_ignore_shared(void)
> {
> MigrationState *s = migrate_get_current();
> diff --git a/migration/options.h b/migration/options.h
> index 82d839709e..eab1485d1a 100644
> --- a/migration/options.h
> +++ b/migration/options.h
> @@ -30,6 +30,7 @@ bool migrate_colo(void);
> bool migrate_dirty_bitmaps(void);
> bool migrate_events(void);
> bool migrate_mapped_ram(void);
> +bool migrate_inflight_vhost_user_blk(void);
> bool migrate_ignore_shared(void);
> bool migrate_late_block_activate(void);
> bool migrate_multifd(void);
> diff --git a/qapi/migration.json b/qapi/migration.json
> index be0f3fcc12..c9fea59515 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -517,9 +517,13 @@
> # each RAM page. Requires a migration URI that supports seeking,
> # such as a file. (since 9.0)
> #
> +# @inflight-vhost-user-blk: If enabled, QEMU will migrate the inflight
> +# region for vhost-user-blk. (since 10.2)
> +#
> # Features:
> #
> -# @unstable: Members @x-colo and @x-ignore-shared are experimental.
> +# @unstable: Members @x-colo, @x-ignore-shared, and
> +# @inflight-vhost-user-blk are experimental.
> # @deprecated: Member @zero-blocks is deprecated as being part of
> # block migration which was already removed.
> #
> @@ -536,7 +540,8 @@
> { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
> 'validate-uuid', 'background-snapshot',
> 'zero-copy-send', 'postcopy-preempt', 'switchover-ack',
> - 'dirty-limit', 'mapped-ram'] }
> + 'dirty-limit', 'mapped-ram',
> + { 'name': 'inflight-vhost-user-blk', 'features': [ 'unstable' ] } ] }
>
> ##
> # @MigrationCapabilityStatus:
> --
> 2.34.1
>
>
* Re: [PATCH 0/2] vhost-user-blk: support inflight migration
2025-10-23 6:44 ` Alexandr Moshkov
@ 2025-10-23 17:53 ` Raphael Norwitz
0 siblings, 0 replies; 20+ messages in thread
From: Raphael Norwitz @ 2025-10-23 17:53 UTC (permalink / raw)
To: Alexandr Moshkov
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Peter Xu,
Fabiano Rosas, Eric Blake, Markus Armbruster
On Thu, Oct 23, 2025 at 2:44 AM Alexandr Moshkov
<dtalexundeer@yandex-team.ru> wrote:
>
>
> On 10/23/25 02:33, Raphael Norwitz wrote:
>
> On Wed, Oct 22, 2025 at 3:59 AM Alexandr Moshkov
> <dtalexundeer@yandex-team.ru> wrote:
>
> Hi!
>
> On 10/21/25 23:54, Raphael Norwitz wrote:
>
> The logic looks ok from the vhost-user-blk side but some comments inline.
>
> On Mon, Oct 20, 2025 at 1:47 AM Alexandr Moshkov
> <dtalexundeer@yandex-team.ru> wrote:
>
> Hi!
>
> During inter-host migration, waiting for disk requests to be drained
> in the vhost-user backend can incur significant downtime.
>
> This can be avoided if QEMU migrates the inflight region in vhost-user-blk.
> Thus, during the qemu migration, the vhost-user backend can cancel all inflight requests and
> then, after migration, they will be executed on another host.
>
> At first, I tried to implement migration for all vhost-user devices that support inflight at once,
> but this would require a lot of changes both in vhost-user-blk (to transfer it to the base class) and
> in the vhost-user-base base class (inflight implementation and remodeling + a large refactor).
>
> Even if it's a more significant change I'd rather generalize as much
> logic as possible and expose it as a vhost-user protocol feature. IMO
> too much vhost-user device-agnostic code is being pushed into
> vhost-user-blk.
>
> As far as I understand (correct me if I'm wrong), this feature must be implemented in the device itself (along with the inflight field location) or in some base class for vhost-user devices, and for now vhost-user-blk doesn't have one. The closest base class I could find is vhost-user-base, but to use it, it would have to implement all the device-agnostic code from vhost-user-blk without breaking the existing vhost-user-base derived devices. For example, to support inflight migration in the base class, it would first need an inflight field of its own, along with the reconnect feature. Then, with a big refactor, vhost-user-blk would somehow have to inherit from it, along with other devices such as vhost-user-fs.
>
> So, IMO it looks like a whole separate track that will need to be tackled.
>
> I get that inflight is inside of struct VhostUserBlk, not in struct
> vhost_user. Even if not all device types support inflight I think that
> could probably be changed? If we do keep things as you have it here
> patch 1 should probably bring the vhost_dev_load_inflight() and
> vhost_dev_save_inflight() into vhost_user.c rather than vhost.c since
> only vhost_user devices seem to use inflight.
>
> Firstly, I tried to implement inflight in the vhost_user structure, but this structure is initialized on every vhost_user_blk_connect() and then destroyed on every vhost_user_blk_disconnect(). But the inflight region must survive device disconnect.
>
> During active migration, the vhost_user struct will be destroyed before the inflight saving.
>
In struct vhost we should probably add some data which persists
between reconnects. I take your point that we cannot generalize
inflight migration without doing that first.
> And you are right about the location of the functions in patch 1. I will move them to the vhost-user.c file, which will be more appropriate. Thanks!
>
> As Markus noted this also conflicts significantly with Vladimir's
> series so I'd suggest waiting until those are in, or possibly
> attempting to generalize on top of his changes.
>
> Yes, but I think I just need to perform inflight migration only when some non-local migration flag is set.
>
> I'm not sure about that. While it may be completely safe in your case
> to avoid GET_VRING_BASE/quiescing the backend, it could introduce
> corruption cases on other vhost-user-blk storage backends which work
> differently. I think this would require an "opt-in" migration
> parameter over and above checking that it's not a local migration.
>
> In patch 2 I introduce a migration capability for that. Or do you think that we need something like a per-device parameter?
>
My point here was that you will definitely need a parameter like the one
you have. I'll leave it to others to say whether we actually need a
per-device parameter.
> Ideally there would be some way to have the vhost-user backends expose
> support per device with a protocol feature and, if the source and
> destination both advertise support, we perform the inflight migration
> skipping GET_VRING_BASE. I don't know enough about the migration code
> to know how that would be done or if it is practical.
>
> Also (and I think this also applies to Vladimir's series), it should
> be possible to add multiple vhost-user-blk devices to the same Qemu
> instance but have the devices backed by very different backend
> implementations. What happens if some of those backend implementations
> support things like lightweight local migration and/or inflight
> migration but others don't? Ideally you would want the API to be able
> to select target vhost-user-blk devices for accelerated migration
> rather than assuming they will all want to use the same features. I
> don't know if anyone is doing that or if we care about such cases but
> I just want to throw the concern out there.
>
> Hmm, it looks like we need some per-device parameter in order to turn this feature on for them.
>
> I guess we need an opinion from the migration maintainers about such cases.
* Re: [PATCH 2/2] vhost-user-blk: support inflight migration
2025-10-20 11:47 ` Markus Armbruster
@ 2025-10-23 19:24 ` Peter Xu
2025-10-24 8:42 ` Alexandr Moshkov
0 siblings, 1 reply; 20+ messages in thread
From: Peter Xu @ 2025-10-23 19:24 UTC (permalink / raw)
To: Markus Armbruster
Cc: Alexandr Moshkov, qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Fabiano Rosas,
Eric Blake
On Mon, Oct 20, 2025 at 01:47:12PM +0200, Markus Armbruster wrote:
> Alexandr Moshkov <dtalexundeer@yandex-team.ru> writes:
>
> > Thanks for the review!
> >
> > On 10/20/25 14:55, Markus Armbruster wrote:
> >> Alexandr Moshkov<dtalexundeer@yandex-team.ru> writes:
> >>
> >>> In vhost_user_blk_stop() on incoming migration make force_stop = true,
> >>> so GET_VRING_BASE will not be executed.
> >>>
> >>> Signed-off-by: Alexandr Moshkov<dtalexundeer@yandex-team.ru>
> >> Your cover letter explains why this is useful. Please work it into your
> >> commit message.
> >
> > Ok
> >
> >> [...]
> >>
> >>> diff --git a/qapi/migration.json b/qapi/migration.json
> >>> index be0f3fcc12..c9fea59515 100644
> >>> --- a/qapi/migration.json
> >>> +++ b/qapi/migration.json
> >>> @@ -517,9 +517,13 @@
> >>> # each RAM page. Requires a migration URI that supports seeking,
> >>> # such as a file. (since 9.0)
> >>> #
> >>> +# @inflight-vhost-user-blk: If enabled, QEMU will migrate inflight
> >>> +# region for vhost-user-blk. (since 10.2)
> >>> +#
> >> Any guidance why and when users would want to enable it?
> >>
> >> Is it a good idea to have device-specific capabilities?
> >
> > Hmm, maybe it's better to make a parameter for vhost-user-blk instead of a migration capability?
> >
> > What do you think?
>
> I think this is a question for the migration maintainers :)
Oops, I missed this email previously..
We discussed similar things with Vladimir on virtio-net. Unless extremely
necessary, we should avoid adding any cap into migration that is relevant
to a specific device. Yes, per-device is better..
Thanks,
--
Peter Xu
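The per-device opt-in Peter prefers could be a plain qdev property on vhost-user-blk rather than a global migration capability. A hypothetical fragment; the property name `migrate-inflight` and the `migrate_inflight` field are assumptions, not code from the series:

```c
/* Hypothetical: a vhost-user-blk property replacing the global
 * migration capability.  Neither the name nor the field exists today. */
static const Property vhost_user_blk_properties[] = {
    /* ...existing vhost-user-blk properties... */
    DEFINE_PROP_BOOL("migrate-inflight", VHostUserBlk,
                     migrate_inflight, false),
};
```

vhost_user_blk_stop() and the VMSD hooks would then check s->migrate_inflight instead of migrate_inflight_vhost_user_blk(), letting each device opt in independently even when backends differ.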
* Re: [PATCH 2/2] vhost-user-blk: support inflight migration
2025-10-23 14:29 ` Lei Yang
@ 2025-10-24 8:37 ` Alexandr Moshkov
2025-10-25 14:19 ` Lei Yang
0 siblings, 1 reply; 20+ messages in thread
From: Alexandr Moshkov @ 2025-10-24 8:37 UTC (permalink / raw)
To: Lei Yang
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Peter Xu,
Fabiano Rosas, Eric Blake, Markus Armbruster
Hi, thanks for testing!
On 10/23/25 19:29, Lei Yang wrote:
> Hi Alexandr
>
> According to my test results, this series of patches introduces an issue:
> the build prints the following error messages after applying your patches.
> The test based on this commit:
> commit 3a2d5612a7422732b648b46d4b934e2e54622fd6 (origin/master, origin/HEAD)
> Author: Peter Maydell<peter.maydell@linaro.org>
> Date: Fri Oct 17 14:31:56 2025 +0100
>
> Error messages:
> [1849/2964] Compiling C object
> libqemu-x86_64-softmmu.a.p/hw_block_vhost-user-blk.c.o
> FAILED: libqemu-x86_64-softmmu.a.p/hw_block_vhost-user-blk.c.o
> cc -m64 -Ilibqemu-x86_64-softmmu.a.p -I. -I.. -Itarget/i386
> -I../target/i386 -Isubprojects/dtc/libfdt -I../subprojects/dtc/libfdt
> -Isubprojects/libvduse -I../subprojects/libvduse -Iqapi -Itrace -Iui
> -Iui/shader -I/usr/include/pixman-1 -I/usr/include/glib-2.0
> -I/usr/lib64/glib-2.0/include -I/usr/include/libmount
> -I/usr/include/blkid -I/usr/include/sysprof-6
> -I/usr/include/gio-unix-2.0 -I/usr/include/slirp
> -fdiagnostics-color=auto -Wall -Winvalid-pch -Werror -std=gnu11 -O0 -g
> -fstack-protector-strong -Wempty-body -Wendif-labels
> -Wexpansion-to-defined -Wformat-security -Wformat-y2k
> -Wignored-qualifiers -Wimplicit-fallthrough=2 -Winit-self
> -Wmissing-format-attribute -Wmissing-prototypes -Wnested-externs
> -Wold-style-declaration -Wold-style-definition -Wredundant-decls
> -Wshadow=local -Wstrict-prototypes -Wtype-limits -Wundef -Wvla
> -Wwrite-strings -Wno-missing-include-dirs -Wno-psabi
> -Wno-shift-negative-value -isystem
> /mnt/tests/distribution/command/qemu/linux-headers -isystem
> linux-headers -iquote . -iquote /mnt/tests/distribution/command/qemu
> -iquote /mnt/tests/distribution/command/qemu/include -iquote
> /mnt/tests/distribution/command/qemu/host/include/x86_64 -iquote
> /mnt/tests/distribution/command/qemu/host/include/generic -iquote
> /mnt/tests/distribution/command/qemu/tcg/i386 -pthread -mcx16 -msse2
> -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE
> -fno-strict-aliasing -fno-common -fwrapv -ftrivial-auto-var-init=zero
> -fzero-call-used-regs=used-gpr -fPIE -DWITH_GZFILEOP
> -isystem../linux-headers -isystemlinux-headers -DCOMPILING_PER_TARGET
> '-DCONFIG_TARGET="x86_64-softmmu-config-target.h"'
> '-DCONFIG_DEVICES="x86_64-softmmu-config-devices.h"' -MD -MQ
> libqemu-x86_64-softmmu.a.p/hw_block_vhost-user-blk.c.o -MF
> libqemu-x86_64-softmmu.a.p/hw_block_vhost-user-blk.c.o.d -o
> libqemu-x86_64-softmmu.a.p/hw_block_vhost-user-blk.c.o -c
> ../hw/block/vhost-user-blk.c
> In file included from
> /mnt/tests/distribution/command/qemu/migration/options.h:19,
> from ../hw/block/vhost-user-blk.c:34:
> /mnt/tests/distribution/command/qemu/include/migration/client-options.h:26:1:
> error: unknown type name ‘MigMode’
> 26 | MigMode migrate_mode(void);
> | ^~~~~~~
> /mnt/tests/distribution/command/qemu/migration/options.h:66:7: error:
> unknown type name ‘BitmapMigrationNodeAliasList’
> 66 | const BitmapMigrationNodeAliasList *migrate_block_bitmap_mapping(void);
> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
> /mnt/tests/distribution/command/qemu/migration/options.h:80:1: error:
> unknown type name ‘MultiFDCompression’
> 80 | MultiFDCompression migrate_multifd_compression(void);
> | ^~~~~~~~~~~~~~~~~~
> /mnt/tests/distribution/command/qemu/migration/options.h:89:1: error:
> unknown type name ‘ZeroPageDetection’
> 89 | ZeroPageDetection migrate_zero_page_detection(void);
> | ^~~~~~~~~~~~~~~~~
> /mnt/tests/distribution/command/qemu/migration/options.h:93:27: error:
> unknown type name ‘MigrationParameters’; did you mean
> ‘MigrationState’?
> 93 | bool migrate_params_check(MigrationParameters *params, Error **errp);
> | ^~~~~~~~~~~~~~~~~~~
> | MigrationState
> /mnt/tests/distribution/command/qemu/migration/options.h:94:26: error:
> unknown type name ‘MigrationParameters’; did you mean
> ‘MigrationState’?
> 94 | void migrate_params_init(MigrationParameters *params);
> | ^~~~~~~~~~~~~~~~~~~
> | MigrationState
> ninja: build stopped: subcommand failed.
> make[1]: *** [Makefile:168: run-ninja] Error 1
>
> Thanks
> Lei
I have the same issue on my machine...
In file included from
/home/dtalexundeer/code/qemu-upstream/migration/options.h:19,
from ../hw/block/vhost-user-blk.c:34:
/home/dtalexundeer/code/qemu-upstream/include/migration/client-options.h:26:1:
error: unknown type name ‘MigMode’
26 | MigMode migrate_mode(void);
| ^~~~~~~
In file included from ../hw/block/vhost-user-blk.c:34:
/home/dtalexundeer/code/qemu-upstream/migration/options.h:65:7: error:
unknown type name ‘BitmapMigrationNodeAliasList’
65 | const BitmapMigrationNodeAliasList
*migrate_block_bitmap_mapping(void);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/dtalexundeer/code/qemu-upstream/migration/options.h:79:1: error:
unknown type name ‘MultiFDCompression’
79 | MultiFDCompression migrate_multifd_compression(void);
| ^~~~~~~~~~~~~~~~~~
/home/dtalexundeer/code/qemu-upstream/migration/options.h:88:1: error:
unknown type name ‘ZeroPageDetection’
88 | ZeroPageDetection migrate_zero_page_detection(void);
| ^~~~~~~~~~~~~~~~~
/home/dtalexundeer/code/qemu-upstream/migration/options.h:92:27: error:
unknown type name ‘MigrationParameters’; did you mean ‘MigrationState’?
92 | bool migrate_params_check(MigrationParameters *params, Error
**errp);
| ^~~~~~~~~~~~~~~~~~~
| MigrationState
/home/dtalexundeer/code/qemu-upstream/migration/options.h:93:26: error:
unknown type name ‘MigrationParameters’; did you mean ‘MigrationState’?
93 | void migrate_params_init(MigrationParameters *params);
| ^~~~~~~~~~~~~~~~~~~
| MigrationState
When I send a patch, do you know whether any CI (such as a compile check) is run on it?
At first I thought the issue was in my environment (or in some
other code), but it reproduces even on the latest master if I add
this include to vhost-user-blk.c:
diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c
index c0cc5f6942..70235737f0 100644
--- a/hw/block/vhost-user-blk.c
+++ b/hw/block/vhost-user-blk.c
@@ -31,6 +31,7 @@
 #include "hw/virtio/virtio-access.h"
 #include "system/system.h"
 #include "system/runstate.h"
+#include "migration/options.h"
So it looks like the problem is in client-options.h (which needs to
include the QAPI header) or in the QAPI generation process. If I
apply this patch:
diff --git a/include/migration/client-options.h b/include/migration/client-options.h
index 289c9d7762..38cf53388d 100644
--- a/include/migration/client-options.h
+++ b/include/migration/client-options.h
@@ -10,6 +10,7 @@
 #ifndef QEMU_MIGRATION_CLIENT_OPTIONS_H
 #define QEMU_MIGRATION_CLIENT_OPTIONS_H
+#include "qapi/qapi-types-migration.h" /* properties */
the problem goes away.
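For illustration, here is a minimal, self-contained sketch (not actual QEMU code; the names are stand-ins mirroring the ones in the errors above) of the failure mode being described: a header declares functions returning QAPI-generated types but relies on some earlier include to have pulled in their definitions, so any translation unit that includes it "cold" hits "unknown type name". Making the header self-contained, as the patch above does, fixes it.

```c
/* Stand-in for the QAPI-generated header (qapi/qapi-types-migration.h).
 * In the broken setup, this definition was NOT visible from
 * client-options.h, so the prototype below failed to compile with
 * "unknown type name 'MigMode'". */
typedef enum MigMode {
    MIG_MODE_NORMAL,
    MIG_MODE_CPR_REBOOT,
} MigMode;

/* Stand-in for include/migration/client-options.h: because the typedef
 * above is in scope, this declaration compiles regardless of what the
 * including .c file pulled in first. */
MigMode migrate_mode(void);

/* Stand-in for the implementation in migration/options.c. */
static MigMode current_mode = MIG_MODE_NORMAL;

MigMode migrate_mode(void)
{
    return current_mode;
}
```

The general rule this illustrates: a public header should include (or forward-declare) everything its own declarations need, rather than depending on include order in its consumers.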
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH 2/2] vhost-user-blk: support inflight migration
2025-10-23 19:24 ` Peter Xu
@ 2025-10-24 8:42 ` Alexandr Moshkov
0 siblings, 0 replies; 20+ messages in thread
From: Alexandr Moshkov @ 2025-10-24 8:42 UTC (permalink / raw)
To: Peter Xu, Markus Armbruster
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Fabiano Rosas,
Eric Blake
On 10/24/25 00:24, Peter Xu wrote:
> On Mon, Oct 20, 2025 at 01:47:12PM +0200, Markus Armbruster wrote:
>> Alexandr Moshkov <dtalexundeer@yandex-team.ru> writes:
>>>> [...]
>>>>
>>>>> diff --git a/qapi/migration.json b/qapi/migration.json
>>>>> index be0f3fcc12..c9fea59515 100644
>>>>> --- a/qapi/migration.json
>>>>> +++ b/qapi/migration.json
>>>>> @@ -517,9 +517,13 @@
>>>>> # each RAM page. Requires a migration URI that supports seeking,
>>>>> # such as a file. (since 9.0)
>>>>> #
>>>>> +# @inflight-vhost-user-blk: If enabled, QEMU will migrate inflight
>>>>> +# region for vhost-user-blk. (since 10.2)
>>>>> +#
>>>> Any guidance why and when users would want to enable it?
>>>>
>>>> Is it a good idea to have device-specific capabilities?
>>> Hmm, maybe it's better way to make a parameter for the vhost-user-blk instead of migration capability?
>>>
>>> What do you think?
>> I think this is a question for the migration maintainers :)
> Oops, I missed this email previously..
>
> We discussed similar things with Vladimir on virtio-net. Unless extremely
> necessary, we should avoid adding any cap into migration that is relevant
> to a specific device. Yes, per-device is better..
>
> Thanks,
Alright, thanks!
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH 2/2] vhost-user-blk: support inflight migration
2025-10-24 8:37 ` Alexandr Moshkov
@ 2025-10-25 14:19 ` Lei Yang
0 siblings, 0 replies; 20+ messages in thread
From: Lei Yang @ 2025-10-25 14:19 UTC (permalink / raw)
To: Alexandr Moshkov
Cc: qemu-devel, Raphael Norwitz, Michael S. Tsirkin,
Stefano Garzarella, Kevin Wolf, Hanna Reitz, Peter Xu,
Fabiano Rosas, Eric Blake, Markus Armbruster
On Fri, Oct 24, 2025 at 4:38 PM Alexandr Moshkov
<dtalexundeer@yandex-team.ru> wrote:
>
> Hi, thanks for testing!
>
> On 10/23/25 19:29, Lei Yang wrote:
>
> Hi Alexandr
>
> According to my test result, this series of patches introduce issues,
> it prints the following error messages when compiling the process
> after applying your patch.
> The test based on this commit:
> commit 3a2d5612a7422732b648b46d4b934e2e54622fd6 (origin/master, origin/HEAD)
> Author: Peter Maydell <peter.maydell@linaro.org>
> Date: Fri Oct 17 14:31:56 2025 +0100
>
> Error messages:
> [1849/2964] Compiling C object
> libqemu-x86_64-softmmu.a.p/hw_block_vhost-user-blk.c.o
> FAILED: libqemu-x86_64-softmmu.a.p/hw_block_vhost-user-blk.c.o
> cc -m64 -Ilibqemu-x86_64-softmmu.a.p -I. -I.. -Itarget/i386
> -I../target/i386 -Isubprojects/dtc/libfdt -I../subprojects/dtc/libfdt
> -Isubprojects/libvduse -I../subprojects/libvduse -Iqapi -Itrace -Iui
> -Iui/shader -I/usr/include/pixman-1 -I/usr/include/glib-2.0
> -I/usr/lib64/glib-2.0/include -I/usr/include/libmount
> -I/usr/include/blkid -I/usr/include/sysprof-6
> -I/usr/include/gio-unix-2.0 -I/usr/include/slirp
> -fdiagnostics-color=auto -Wall -Winvalid-pch -Werror -std=gnu11 -O0 -g
> -fstack-protector-strong -Wempty-body -Wendif-labels
> -Wexpansion-to-defined -Wformat-security -Wformat-y2k
> -Wignored-qualifiers -Wimplicit-fallthrough=2 -Winit-self
> -Wmissing-format-attribute -Wmissing-prototypes -Wnested-externs
> -Wold-style-declaration -Wold-style-definition -Wredundant-decls
> -Wshadow=local -Wstrict-prototypes -Wtype-limits -Wundef -Wvla
> -Wwrite-strings -Wno-missing-include-dirs -Wno-psabi
> -Wno-shift-negative-value -isystem
> /mnt/tests/distribution/command/qemu/linux-headers -isystem
> linux-headers -iquote . -iquote /mnt/tests/distribution/command/qemu
> -iquote /mnt/tests/distribution/command/qemu/include -iquote
> /mnt/tests/distribution/command/qemu/host/include/x86_64 -iquote
> /mnt/tests/distribution/command/qemu/host/include/generic -iquote
> /mnt/tests/distribution/command/qemu/tcg/i386 -pthread -mcx16 -msse2
> -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE
> -fno-strict-aliasing -fno-common -fwrapv -ftrivial-auto-var-init=zero
> -fzero-call-used-regs=used-gpr -fPIE -DWITH_GZFILEOP
> -isystem../linux-headers -isystemlinux-headers -DCOMPILING_PER_TARGET
> '-DCONFIG_TARGET="x86_64-softmmu-config-target.h"'
> '-DCONFIG_DEVICES="x86_64-softmmu-config-devices.h"' -MD -MQ
> libqemu-x86_64-softmmu.a.p/hw_block_vhost-user-blk.c.o -MF
> libqemu-x86_64-softmmu.a.p/hw_block_vhost-user-blk.c.o.d -o
> libqemu-x86_64-softmmu.a.p/hw_block_vhost-user-blk.c.o -c
> ../hw/block/vhost-user-blk.c
> In file included from
> /mnt/tests/distribution/command/qemu/migration/options.h:19,
> from ../hw/block/vhost-user-blk.c:34:
> /mnt/tests/distribution/command/qemu/include/migration/client-options.h:26:1:
> error: unknown type name ‘MigMode’
> 26 | MigMode migrate_mode(void);
> | ^~~~~~~
> /mnt/tests/distribution/command/qemu/migration/options.h:66:7: error:
> unknown type name ‘BitmapMigrationNodeAliasList’
> 66 | const BitmapMigrationNodeAliasList *migrate_block_bitmap_mapping(void);
> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
> /mnt/tests/distribution/command/qemu/migration/options.h:80:1: error:
> unknown type name ‘MultiFDCompression’
> 80 | MultiFDCompression migrate_multifd_compression(void);
> | ^~~~~~~~~~~~~~~~~~
> /mnt/tests/distribution/command/qemu/migration/options.h:89:1: error:
> unknown type name ‘ZeroPageDetection’
> 89 | ZeroPageDetection migrate_zero_page_detection(void);
> | ^~~~~~~~~~~~~~~~~
> /mnt/tests/distribution/command/qemu/migration/options.h:93:27: error:
> unknown type name ‘MigrationParameters’; did you mean
> ‘MigrationState’?
> 93 | bool migrate_params_check(MigrationParameters *params, Error **errp);
> | ^~~~~~~~~~~~~~~~~~~
> | MigrationState
> /mnt/tests/distribution/command/qemu/migration/options.h:94:26: error:
> unknown type name ‘MigrationParameters’; did you mean
> ‘MigrationState’?
> 94 | void migrate_params_init(MigrationParameters *params);
> | ^~~~~~~~~~~~~~~~~~~
> | MigrationState
> ninja: build stopped: subcommand failed.
> make[1]: *** [Makefile:168: run-ninja] Error 1
>
> Thanks
> Lei
>
> I have the same issue on my machine...
>
> In file included from /home/dtalexundeer/code/qemu-upstream/migration/options.h:19,
> from ../hw/block/vhost-user-blk.c:34:
> /home/dtalexundeer/code/qemu-upstream/include/migration/client-options.h:26:1: error: unknown type name ‘MigMode’
> 26 | MigMode migrate_mode(void);
> | ^~~~~~~
> In file included from ../hw/block/vhost-user-blk.c:34:
> /home/dtalexundeer/code/qemu-upstream/migration/options.h:65:7: error: unknown type name ‘BitmapMigrationNodeAliasList’
> 65 | const BitmapMigrationNodeAliasList *migrate_block_bitmap_mapping(void);
> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
> /home/dtalexundeer/code/qemu-upstream/migration/options.h:79:1: error: unknown type name ‘MultiFDCompression’
> 79 | MultiFDCompression migrate_multifd_compression(void);
> | ^~~~~~~~~~~~~~~~~~
> /home/dtalexundeer/code/qemu-upstream/migration/options.h:88:1: error: unknown type name ‘ZeroPageDetection’
> 88 | ZeroPageDetection migrate_zero_page_detection(void);
> | ^~~~~~~~~~~~~~~~~
> /home/dtalexundeer/code/qemu-upstream/migration/options.h:92:27: error: unknown type name ‘MigrationParameters’; did you mean ‘MigrationState’?
> 92 | bool migrate_params_check(MigrationParameters *params, Error **errp);
> | ^~~~~~~~~~~~~~~~~~~
> | MigrationState
> /home/dtalexundeer/code/qemu-upstream/migration/options.h:93:26: error: unknown type name ‘MigrationParameters’; did you mean ‘MigrationState’?
> 93 | void migrate_params_init(MigrationParameters *params);
> | ^~~~~~~~~~~~~~~~~~~
> | MigrationState
>
Hi Alexandr
> When I send a patch, do you know whether any CI (such as a compile check) is run on it?
To be honest, I'm not sure whether such a check exists upstream.
My local CI tool noticed the files modified by your patch, picked
out the ones I was interested in, and triggered the tests; I only
spotted the failure when I looked at the results.
Thanks
Lei
>
> At first I thought the issue was in my environment (or in some other code), but it reproduces even on the latest master if I add this include to vhost-user-blk.c:
>
> diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c
> index c0cc5f6942..70235737f0 100644
> --- a/hw/block/vhost-user-blk.c
> +++ b/hw/block/vhost-user-blk.c
> @@ -31,6 +31,7 @@
>  #include "hw/virtio/virtio-access.h"
>  #include "system/system.h"
>  #include "system/runstate.h"
> +#include "migration/options.h"
>
> So it looks like the problem is in client-options.h (which needs to include the QAPI header) or in the QAPI generation process. If I apply this patch:
>
> diff --git a/include/migration/client-options.h b/include/migration/client-options.h
> index 289c9d7762..38cf53388d 100644
> --- a/include/migration/client-options.h
> +++ b/include/migration/client-options.h
> @@ -10,6 +10,7 @@
>  #ifndef QEMU_MIGRATION_CLIENT_OPTIONS_H
>  #define QEMU_MIGRATION_CLIENT_OPTIONS_H
> +#include "qapi/qapi-types-migration.h" /* properties */
>
> the problem goes away.
^ permalink raw reply [flat|nested] 20+ messages in thread
end of thread, other threads:[~2025-10-25 14:20 UTC | newest]
Thread overview: 20+ messages (download: mbox.gz | follow: Atom feed)
-- links below jump to the message on this page --
2025-10-20 5:44 [PATCH 0/2] vhost-user-blk: support inflight migration Alexandr Moshkov
2025-10-20 5:44 ` [PATCH 1/2] vhost: support inflight save/load Alexandr Moshkov
2025-10-21 19:29 ` Peter Xu
2025-10-23 6:46 ` Alexandr Moshkov
2025-10-20 5:44 ` [PATCH 2/2] vhost-user-blk: support inflight migration Alexandr Moshkov
2025-10-20 9:55 ` Markus Armbruster
2025-10-20 10:34 ` Alexandr Moshkov
2025-10-20 11:47 ` Markus Armbruster
2025-10-23 19:24 ` Peter Xu
2025-10-24 8:42 ` Alexandr Moshkov
2025-10-23 14:29 ` Lei Yang
2025-10-24 8:37 ` Alexandr Moshkov
2025-10-25 14:19 ` Lei Yang
2025-10-20 9:55 ` [PATCH 0/2] " Markus Armbruster
2025-10-20 10:16 ` Alexandr Moshkov
2025-10-21 18:54 ` Raphael Norwitz
2025-10-22 7:59 ` Alexandr Moshkov
2025-10-22 21:33 ` Raphael Norwitz
2025-10-23 6:44 ` Alexandr Moshkov
2025-10-23 17:53 ` Raphael Norwitz
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).