qemu-devel.nongnu.org archive mirror
From: "Cédric Le Goater" <clg@redhat.com>
To: "Maciej S. Szmigiero" <mail@maciej.szmigiero.name>
Cc: "Alex Williamson" <alex.williamson@redhat.com>,
	"Eric Blake" <eblake@redhat.com>, "Peter Xu" <peterx@redhat.com>,
	"Fabiano Rosas" <farosas@suse.de>,
	"Markus Armbruster" <armbru@redhat.com>,
	"Daniel P . Berrangé" <berrange@redhat.com>,
	"Avihai Horon" <avihaih@nvidia.com>,
	"Joao Martins" <joao.m.martins@oracle.com>,
	qemu-devel@nongnu.org
Subject: Re: [PATCH v3 22/24] vfio/migration: Multifd device state transfer support - receive side
Date: Thu, 19 Dec 2024 15:13:17 +0100	[thread overview]
Message-ID: <ddd0df36-54bd-4fa1-8697-188f2a947ea0@redhat.com> (raw)
In-Reply-To: <2aecdbf0-945a-43d5-a197-fd6761e3b81e@maciej.szmigiero.name>

On 12/11/24 00:04, Maciej S. Szmigiero wrote:
> Hi Cédric,
> 
> On 2.12.2024 18:56, Cédric Le Goater wrote:
>> Hello Maciej,
>>
>> On 11/17/24 20:20, Maciej S. Szmigiero wrote:
>>> From: "Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>
>>>
>>> The multifd received data needs to be reassembled since device state
>>> packets sent via different multifd channels can arrive out-of-order.
>>>
>>> Therefore, each VFIO device state packet carries a header indicating its
>>> position in the stream.
>>>
>>> The last such VFIO device state packet should have
>>> VFIO_DEVICE_STATE_CONFIG_STATE flag set and carry the device config state.
>>>
>>> Since it's important to finish loading the device state transferred via the
>>> main migration channel (via the save_live_iterate SaveVMHandler) before
>>> starting to load the data asynchronously transferred via multifd, the thread
>>> doing the actual loading of the multifd-transferred data is only started
>>> from the switchover_start SaveVMHandler.
>>>
>>> switchover_start handler is called when MIG_CMD_SWITCHOVER_START
>>> sub-command of QEMU_VM_COMMAND is received via the main migration channel.
>>>
>>> This sub-command is only sent after all save_live_iterate data have already
>>> been posted, so it is safe to commence loading of the multifd-transferred
>>> device state upon receiving it - loading of save_live_iterate data happens
>>> synchronously in the main migration thread (much like the processing of
>>> MIG_CMD_SWITCHOVER_START), so by the time MIG_CMD_SWITCHOVER_START is
>>> processed all the preceding data must have already been loaded.
>>>
>>> Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
>>> ---
>>>   hw/vfio/migration.c           | 402 ++++++++++++++++++++++++++++++++++
>>
>> This is quite a significant update to introduce all at once. It lacks a
>> comprehensive overview of the design for those who were not involved in
>> the earlier discussions adding support for multifd migration of device
>> state. There are multiple threads and migration streams involved at
>> load time which deserve some descriptions. I think the best place
>> would be at the end of :
>>
>>     https://qemu.readthedocs.io/en/v9.1.0/devel/migration/vfio.html
> 
> Will try to add some design/implementations descriptions to
> docs/devel/migration/vfio.rst.
> 
>> Could you please break down the patch to progressively introduce the
>> various elements needed for the receive sequence ? Something like :
>>
>>    - data structures first
>>    - init phase
>>    - run time
>>    - and clean up phase
>>    - toggles to enable/disable/tune
>>    - finally, documentation update (under vfio migration)
> 
> Obviously I can split the VFIO patch into smaller fragments,
> but this means that the intermediate form won't be testable
> (I guess that's okay).

As long as bisect is not broken, it is fine. Typically, the last patch
of a series is the one activating the new proposed feature.
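
For per-patch build testing while reworking the series (a hypothetical invocation; the build directory name is a placeholder), something like this keeps bisect usable:

```shell
# Rebuild at every commit of the series so a build break is caught
# before it reaches git bisect.
git rebase -i --exec 'make -C build -j"$(nproc)"' master
```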

> 
>> Some more below,
>>
>>>   hw/vfio/pci.c                 |   2 +
>>>   hw/vfio/trace-events          |   6 +
>>>   include/hw/vfio/vfio-common.h |  19 ++
>>>   4 files changed, 429 insertions(+)
>>>
>>> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
>>> index 683f2ae98d5e..b54879fe6209 100644
>>> --- a/hw/vfio/migration.c
>>> +++ b/hw/vfio/migration.c
>>> @@ -15,6 +15,7 @@
>>>   #include <linux/vfio.h>
>>>   #include <sys/ioctl.h>
>>> +#include "io/channel-buffer.h"
>>>   #include "sysemu/runstate.h"
>>>   #include "hw/vfio/vfio-common.h"
>>>   #include "migration/misc.h"
>>> @@ -55,6 +56,15 @@
>>>    */
>>>   #define VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE (1 * MiB)
>>> +#define VFIO_DEVICE_STATE_CONFIG_STATE (1)
>>> +
>>> +typedef struct VFIODeviceStatePacket {
>>> +    uint32_t version;
>>> +    uint32_t idx;
>>> +    uint32_t flags;
>>> +    uint8_t data[0];
>>> +} QEMU_PACKED VFIODeviceStatePacket;
>>> +
>>>   static int64_t bytes_transferred;
>>>   static const char *mig_state_to_str(enum vfio_device_mig_state state)
>>> @@ -254,6 +264,292 @@ static int vfio_load_buffer(QEMUFile *f, VFIODevice *vbasedev,
>>>       return ret;
>>>   }
>>> +typedef struct VFIOStateBuffer {
>>> +    bool is_present;
>>> +    char *data;
>>> +    size_t len;
>>> +} VFIOStateBuffer;
>>> +
>>> +static void vfio_state_buffer_clear(gpointer data)
>>> +{
>>> +    VFIOStateBuffer *lb = data;
>>> +
>>> +    if (!lb->is_present) {
>>> +        return;
>>> +    }
>>> +
>>> +    g_clear_pointer(&lb->data, g_free);
>>> +    lb->is_present = false;
>>> +}
>>> +
>>> +static void vfio_state_buffers_init(VFIOStateBuffers *bufs)
>>> +{
>>> +    bufs->array = g_array_new(FALSE, TRUE, sizeof(VFIOStateBuffer));
>>> +    g_array_set_clear_func(bufs->array, vfio_state_buffer_clear);
>>> +}
>>> +
>>> +static void vfio_state_buffers_destroy(VFIOStateBuffers *bufs)
>>> +{
>>> +    g_clear_pointer(&bufs->array, g_array_unref);
>>> +}
>>> +
>>> +static void vfio_state_buffers_assert_init(VFIOStateBuffers *bufs)
>>> +{
>>> +    assert(bufs->array);
>>> +}
>>> +
>>> +static guint vfio_state_buffers_size_get(VFIOStateBuffers *bufs)
>>> +{
>>> +    return bufs->array->len;
>>> +}
>>> +
>>> +static void vfio_state_buffers_size_set(VFIOStateBuffers *bufs, guint size)
>>> +{
>>> +    g_array_set_size(bufs->array, size);
>>> +}
>>> +
>>> +static VFIOStateBuffer *vfio_state_buffers_at(VFIOStateBuffers *bufs, guint idx)
>>> +{
>>> +    return &g_array_index(bufs->array, VFIOStateBuffer, idx);
>>> +}
>>> +
>>> +static int vfio_load_state_buffer(void *opaque, char *data, size_t data_size,
>>> +                                  Error **errp)
>>> +{
>>> +    VFIODevice *vbasedev = opaque;
>>> +    VFIOMigration *migration = vbasedev->migration;
>>> +    VFIODeviceStatePacket *packet = (VFIODeviceStatePacket *)data;
>>> +    VFIOStateBuffer *lb;
>>> +
>>> +    /*
>>> +     * Holding BQL here would violate the lock order and can cause
>>> +     * a deadlock once we attempt to lock load_bufs_mutex below.
>>> +     */
>>> +    assert(!bql_locked());
>>> +
>>> +    if (!migration->multifd_transfer) {
>>
>> Hmm, why is 'multifd_transfer' a migration attribute ? Shouldn't it
>> be at the device level ? 
> 
> I thought migration-time data goes into VFIOMigration?

yes. Sorry, I was confused by the object MigrationState which is global.
VFIOMigration is a device-level object. We are fine.

AFAICT, this supports hybrid configs: some devices using multifd
migration and some others using standard migration.

> I don't have any strong objections against moving it into VFIODevice though.
> 
>> Or should all devices of a VM support multifd
>> transfer ? That said, I'm a bit unclear about the limitations, if there
>> are any. Could you please explain a bit more when the migration sequence
>> is set up for the device?
>>
> 
> The reason we need this setting on the receive side is because we
> need to know whether to start the load_bufs_thread (the migration
> core will later wait for this thread to finish before proceeding further).
> 
> We also need to know whether to allocate multifd-related data structures
> in the VFIO driver based on this setting.
> 
> This setting ultimately comes from "x-migration-multifd-transfer"
> VFIOPCIDevice setting, which is an ON_OFF_AUTO setting (the "AUTO" value means
> that multifd use in the driver is attempted in configurations that
> otherwise support it).
> 
>>
>>> +        error_setg(errp,
>>> +                   "got device state packet but not doing multifd transfer");
>>> +        return -1;
>>> +    }
>>> +
>>> +    if (data_size < sizeof(*packet)) {
>>> +        error_setg(errp, "packet too short at %zu (min is %zu)",
>>> +                   data_size, sizeof(*packet));
>>> +        return -1;
>>> +    }
>>> +
>>> +    if (packet->version != 0) {
>>> +        error_setg(errp, "packet has unknown version %" PRIu32,
>>> +                   packet->version);
>>> +        return -1;
>>> +    }
>>> +
>>> +    if (packet->idx == UINT32_MAX) {
>>> +        error_setg(errp, "packet has too high idx %" PRIu32,
>>> +                   packet->idx);
>>> +        return -1;
>>> +    }
>>> +
>>> +    trace_vfio_load_state_device_buffer_incoming(vbasedev->name, packet->idx);
>>> +
>>> +    QEMU_LOCK_GUARD(&migration->load_bufs_mutex);
>>> +
>>> +    /* config state packet should be the last one in the stream */
>>> +    if (packet->flags & VFIO_DEVICE_STATE_CONFIG_STATE) {
>>> +        migration->load_buf_idx_last = packet->idx;
>>> +    }
>>> +
>>> +    vfio_state_buffers_assert_init(&migration->load_bufs);
>>> +    if (packet->idx >= vfio_state_buffers_size_get(&migration->load_bufs)) {
>>> +        vfio_state_buffers_size_set(&migration->load_bufs, packet->idx + 1);
>>> +    }
>>> +
>>> +    lb = vfio_state_buffers_at(&migration->load_bufs, packet->idx);
>>> +    if (lb->is_present) {
>>> +        error_setg(errp, "state buffer %" PRIu32 " already filled",
>>> +                   packet->idx);
>>> +        return -1;
>>> +    }
>>> +
>>> +    assert(packet->idx >= migration->load_buf_idx);
>>> +
>>> +    migration->load_buf_queued_pending_buffers++;
>>> +    if (migration->load_buf_queued_pending_buffers >
>>> +        vbasedev->migration_max_queued_buffers) {
>>> +        error_setg(errp,
>>> +                   "queuing state buffer %" PRIu32 " would exceed the max of %" PRIu64,
>>> +                   packet->idx, vbasedev->migration_max_queued_buffers);
>>> +        return -1;
>>> +    }
>>> +
>>> +    lb->data = g_memdup2(&packet->data, data_size - sizeof(*packet));
>>> +    lb->len = data_size - sizeof(*packet);
>>> +    lb->is_present = true;
>>> +
>>> +    qemu_cond_signal(&migration->load_bufs_buffer_ready_cond);
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static int vfio_load_device_config_state(QEMUFile *f, void *opaque);
>>> +
>>> +static int vfio_load_bufs_thread_load_config(VFIODevice *vbasedev)
>>> +{
>>> +    VFIOMigration *migration = vbasedev->migration;
>>> +    VFIOStateBuffer *lb;
>>> +    g_autoptr(QIOChannelBuffer) bioc = NULL;
>>> +    QEMUFile *f_out = NULL, *f_in = NULL;
>>> +    uint64_t mig_header;
>>> +    int ret;
>>> +
>>> +    assert(migration->load_buf_idx == migration->load_buf_idx_last);
>>> +    lb = vfio_state_buffers_at(&migration->load_bufs, migration->load_buf_idx);
>>> +    assert(lb->is_present);
>>> +
>>> +    bioc = qio_channel_buffer_new(lb->len);
>>> +    qio_channel_set_name(QIO_CHANNEL(bioc), "vfio-device-config-load");
>>> +
>>> +    f_out = qemu_file_new_output(QIO_CHANNEL(bioc));
>>> +    qemu_put_buffer(f_out, (uint8_t *)lb->data, lb->len);
>>> +
>>> +    ret = qemu_fflush(f_out);
>>> +    if (ret) {
>>> +        g_clear_pointer(&f_out, qemu_fclose);
>>> +        return ret;
>>> +    }
>>> +
>>> +    qio_channel_io_seek(QIO_CHANNEL(bioc), 0, 0, NULL);
>>> +    f_in = qemu_file_new_input(QIO_CHANNEL(bioc));
>>> +
>>> +    mig_header = qemu_get_be64(f_in);
>>> +    if (mig_header != VFIO_MIG_FLAG_DEV_CONFIG_STATE) {
>>> +        g_clear_pointer(&f_out, qemu_fclose);
>>> +        g_clear_pointer(&f_in, qemu_fclose);
>>> +        return -EINVAL;
>>> +    }
>>
>> All the above code is using the QIOChannel interface which is sort of an
>> internal API of the migration subsystem. Can we move it under migration ?
> 
> hw/remote and hw/virtio are also using QIOChannel API, not to mention
> qemu-nbd, block/nbd and backends/tpm, so definitely it's not just the
> core migration code that uses it.

These examples are not device models.

> I don't think introducing a tiny generic migration core helper which takes
> VFIO-specific buffer with config data and ends calling VFIO-specific
> device config state load function really makes sense.

qemu_file_new_input/output, qio_channel_buffer_new and qio_channel_io_seek
are solely used in migration code. That's why I am reluctant to use them
directly in VFIO.

I agree it is small, for now.

> 
>>
>>> +
>>> +    bql_lock();
>>> +    ret = vfio_load_device_config_state(f_in, vbasedev);
>>> +    bql_unlock();
>>> +
>>> +    g_clear_pointer(&f_out, qemu_fclose);
>>> +    g_clear_pointer(&f_in, qemu_fclose);
>>> +    if (ret < 0) {
>>> +        return ret;
>>> +    }
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static bool vfio_load_bufs_thread_want_abort(VFIODevice *vbasedev,
>>> +                                             bool *abort_flag)
>>> +{
>>> +    VFIOMigration *migration = vbasedev->migration;
>>> +
>>> +    return migration->load_bufs_thread_want_exit || qatomic_read(abort_flag);
>>> +}
>>> +
>>> +static int vfio_load_bufs_thread(bool *abort_flag, void *opaque)
>>> +{
>>> +    VFIODevice *vbasedev = opaque;
>>> +    VFIOMigration *migration = vbasedev->migration;
>>> +    QEMU_LOCK_GUARD(&migration->load_bufs_mutex);
>>> +    int ret;
>>> +
>>> +    assert(migration->load_bufs_thread_running);
>>> +
>>> +    while (!vfio_load_bufs_thread_want_abort(vbasedev, abort_flag)) {
>>> +        VFIOStateBuffer *lb;
>>> +        guint bufs_len;
>>> +        bool starved;
>>> +
>>> +        assert(migration->load_buf_idx <= migration->load_buf_idx_last);
>>> +
>>> +        bufs_len = vfio_state_buffers_size_get(&migration->load_bufs);
>>> +        if (migration->load_buf_idx >= bufs_len) {
>>> +            assert(migration->load_buf_idx == bufs_len);
>>> +            starved = true;
>>> +        } else {
>>> +            lb = vfio_state_buffers_at(&migration->load_bufs,
>>> +                                       migration->load_buf_idx);
>>> +            starved = !lb->is_present;
>>> +        }
>>> +
>>> +        if (starved) {
>>> +            trace_vfio_load_state_device_buffer_starved(vbasedev->name,
>>> +                                                        migration->load_buf_idx);
>>> +            qemu_cond_wait(&migration->load_bufs_buffer_ready_cond,
>>> +                           &migration->load_bufs_mutex);
>>> +            continue;
>>> +        }
>>> +
>>> +        if (migration->load_buf_idx == migration->load_buf_idx_last) {
>>> +            break;
>>> +        }
>>> +
>>> +        if (migration->load_buf_idx == 0) {
>>> +            trace_vfio_load_state_device_buffer_start(vbasedev->name);
>>> +        }
>>> +
>>> +        if (lb->len) {
>>> +            g_autofree char *buf = NULL;
>>> +            size_t buf_len;
>>> +            ssize_t wr_ret;
>>> +            int errno_save;
>>> +
>>> +            trace_vfio_load_state_device_buffer_load_start(vbasedev->name,
>>> +                                                           migration->load_buf_idx);
>>> +
>>> +            /* lb might become re-allocated when we drop the lock */
>>> +            buf = g_steal_pointer(&lb->data);
>>> +            buf_len = lb->len;
>>> +
>>> +            /*
>>> +             * Loading data to the device takes a while,
>>> +             * drop the lock during this process.
>>> +             */
>>> +            qemu_mutex_unlock(&migration->load_bufs_mutex);
>>> +            wr_ret = write(migration->data_fd, buf, buf_len);
>>> +            errno_save = errno;
>>> +            qemu_mutex_lock(&migration->load_bufs_mutex);
>>> +
>>> +            if (wr_ret < 0) {
>>> +                ret = -errno_save;
>>> +                goto ret_signal;
>>> +            } else if (wr_ret < buf_len) {
>>> +                ret = -EINVAL;
>>> +                goto ret_signal;
>>> +            }
>>> +
>>> +            trace_vfio_load_state_device_buffer_load_end(vbasedev->name,
>>> +                                                         migration->load_buf_idx);
>>> +        }
>>> +
>>> +        assert(migration->load_buf_queued_pending_buffers > 0);
>>> +        migration->load_buf_queued_pending_buffers--;
>>> +
>>> +        if (migration->load_buf_idx == migration->load_buf_idx_last - 1) {
>>> +            trace_vfio_load_state_device_buffer_end(vbasedev->name);
>>> +        }
>>> +
>>> +        migration->load_buf_idx++;
>>> +    }
>>> +
>>> +    if (vfio_load_bufs_thread_want_abort(vbasedev, abort_flag)) {
>>> +        ret = -ECANCELED;
>>> +        goto ret_signal;
>>> +    }
>>> +
>>> +    ret = vfio_load_bufs_thread_load_config(vbasedev);
>>> +
>>> +ret_signal:
>>> +    migration->load_bufs_thread_running = false;
>>> +    qemu_cond_signal(&migration->load_bufs_thread_finished_cond);
>>> +
>>> +    return ret;
>>
>> Is the error reported to the migration subsystem?
> 
> Yes, via setting "load_threads_ret" in qemu_loadvm_load_thread().
> 
>>> +}
>>> +
>>>   static int vfio_save_device_config_state(QEMUFile *f, void *opaque,
>>>                                            Error **errp)
>>>   {
>>> @@ -430,6 +726,12 @@ static bool vfio_precopy_supported(VFIODevice *vbasedev)
>>>       return migration->mig_flags & VFIO_MIGRATION_PRE_COPY;
>>>   }
>>> +static bool vfio_multifd_transfer_supported(void)
>>> +{
>>> +    return migration_has_device_state_support() &&
>>> +        migrate_send_switchover_start();
>>> +}
>>> +
>>>   /* ---------------------------------------------------------------------- */
>>>   static int vfio_save_prepare(void *opaque, Error **errp)
>>> @@ -695,17 +997,73 @@ static int vfio_load_setup(QEMUFile *f, void *opaque, Error **errp)
>>>       assert(!migration->load_setup);
>>> +    /*
>>> +     * Make a copy of this setting at the start in case it is changed
>>> +     * mid-migration.
>>> +     */
>>> +    if (vbasedev->migration_multifd_transfer == ON_OFF_AUTO_AUTO) {
>>> +        migration->multifd_transfer = vfio_multifd_transfer_supported();
>>> +    } else {
>>> +        migration->multifd_transfer =
>>> +            vbasedev->migration_multifd_transfer == ON_OFF_AUTO_ON;
>>> +    }
>>> +
>>> +    if (migration->multifd_transfer && !vfio_multifd_transfer_supported()) {
>>> +        error_setg(errp,
>>> +                   "%s: Multifd device transfer requested but unsupported in the current config",
>>> +                   vbasedev->name);
>>> +        return -EINVAL;
>>> +    }
>>
>> Can we move these checks earlier? In vfio_migration_realize()?
>> If possible, it would be good to avoid the multifd_transfer attribute as well.
> 
> We can't since the value is changeable at runtime, so it could have been
> changed after the VFIO device got realized.

We will need to discuss this part again. Let's keep it that way for now.

>>>       ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_RESUMING,
>>>                                      migration->device_state, errp);
>>>       if (ret) {
>>>           return ret;
>>>       }
>>> +    if (migration->multifd_transfer) {
>>> +        assert(!migration->load_bufs.array);
>>> +        vfio_state_buffers_init(&migration->load_bufs);
>>> +
>>> +        qemu_mutex_init(&migration->load_bufs_mutex);
>>> +
>>> +        migration->load_buf_idx = 0;
>>> +        migration->load_buf_idx_last = UINT32_MAX;
>>> +        migration->load_buf_queued_pending_buffers = 0;
>>> +        qemu_cond_init(&migration->load_bufs_buffer_ready_cond);
>>> +
>>> +        migration->load_bufs_thread_running = false;
>>> +        migration->load_bufs_thread_want_exit = false;
>>> +        qemu_cond_init(&migration->load_bufs_thread_finished_cond);
>>
>> Please provide an helper routine to initialize all the multifd transfer
>> attributes. We might want to add a struct to gather them all by the way.
> 
> Will move these to a new helper.
> 
>>> +    }
>>> +
>>>       migration->load_setup = true;
>>>       return 0;
>>>   }
>>> +static void vfio_load_cleanup_load_bufs_thread(VFIODevice *vbasedev)
>>> +{
>>> +    VFIOMigration *migration = vbasedev->migration;
>>> +
>>> +    /* The lock order is load_bufs_mutex -> BQL so unlock BQL here first */
>>> +    bql_unlock();
>>> +    WITH_QEMU_LOCK_GUARD(&migration->load_bufs_mutex) {
>>> +        if (!migration->load_bufs_thread_running) {
>>> +            break;
>>> +        }
>>> +
>>> +        migration->load_bufs_thread_want_exit = true;
>>> +
>>> +        qemu_cond_signal(&migration->load_bufs_buffer_ready_cond);
>>> +        qemu_cond_wait(&migration->load_bufs_thread_finished_cond,
>>> +                       &migration->load_bufs_mutex);
>>> +
>>> +        assert(!migration->load_bufs_thread_running);
>>> +    }
>>> +    bql_lock();
>>> +}
>>> +
>>>   static int vfio_load_cleanup(void *opaque)
>>>   {
>>>       VFIODevice *vbasedev = opaque;
>>> @@ -715,7 +1073,19 @@ static int vfio_load_cleanup(void *opaque)
>>>           return 0;
>>>       }
>>> +    if (migration->multifd_transfer) {
>>> +        vfio_load_cleanup_load_bufs_thread(vbasedev);
>>> +    }
>>> +
>>>       vfio_migration_cleanup(vbasedev);
>>
>> Why is the cleanup done in two steps ?
> 
> I'm not sure what "two steps" here refers to, but
> if you mean to move the "if (migration->multifd_transfer)"
> block below to the similar one above then it should be possible.

good. It is preferable.


Thanks,

C.




> 
>>> +
>>> +    if (migration->multifd_transfer) {
>>> +        qemu_cond_destroy(&migration->load_bufs_thread_finished_cond);
>>> +        vfio_state_buffers_destroy(&migration->load_bufs);
>>> +        qemu_cond_destroy(&migration->load_bufs_buffer_ready_cond);
>>> +        qemu_mutex_destroy(&migration->load_bufs_mutex);
>>> +    }
>>> +
>>>       migration->load_setup = false;
>>>       trace_vfio_load_cleanup(vbasedev->name);
> (..)
> 
> 
>> Thanks,
>>
>> C.
>>
> 
> Thanks,
> Maciej
> 


