qemu-devel.nongnu.org archive mirror
From: Avihai Horon <avihaih@nvidia.com>
To: "Maciej S. Szmigiero" <mail@maciej.szmigiero.name>,
	Peter Xu <peterx@redhat.com>
Cc: "Alex Williamson" <alex.williamson@redhat.com>,
	"Cédric Le Goater" <clg@redhat.com>,
	"Fabiano Rosas" <farosas@suse.de>,
	"Eric Blake" <eblake@redhat.com>,
	"Markus Armbruster" <armbru@redhat.com>,
	"Daniel P. Berrangé" <berrange@redhat.com>,
	"Joao Martins" <joao.m.martins@oracle.com>,
	qemu-devel@nongnu.org
Subject: Re: [PATCH v2 15/17] vfio/migration: Multifd device state transfer support - receive side
Date: Thu, 12 Sep 2024 11:20:39 +0300
Message-ID: <5a9dba27-96e5-4404-b429-c36d9b1c5707@nvidia.com>
In-Reply-To: <ad637771-3eff-4492-b3a8-e72bb1e91551@maciej.szmigiero.name>


On 09/09/2024 21:06, Maciej S. Szmigiero wrote:
> External email: Use caution opening links or attachments
>
>
> On 9.09.2024 10:55, Avihai Horon wrote:
>>
>> On 27/08/2024 20:54, Maciej S. Szmigiero wrote:
>>>
>>>
>>> From: "Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>
>>>
>>> The multifd received data needs to be reassembled since device state
>>> packets sent via different multifd channels can arrive out-of-order.
>>>
>>> Therefore, each VFIO device state packet carries a header indicating
>>> its position in the stream.
>>>
>>> The last such VFIO device state packet should have
>>> VFIO_DEVICE_STATE_CONFIG_STATE flag set and carry the device config
>>> state.
>>>
>>> Since it's important to finish loading the device state transferred via
>>> the main migration channel (via the save_live_iterate handler) before
>>> loading the data transferred asynchronously via multifd, a new
>>> VFIO_MIG_FLAG_DEV_DATA_STATE_COMPLETE flag is introduced to mark the
>>> end of the main migration channel data.
>>>
>>> The device state loading process waits until that flag is seen before
>>> commencing loading of the multifd-transferred device state.
>>>
>>> Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
>>> ---
>>>   hw/vfio/migration.c           | 338 +++++++++++++++++++++++++++++++++-
>>>   hw/vfio/pci.c                 |   2 +
>>>   hw/vfio/trace-events          |   9 +-
>>>   include/hw/vfio/vfio-common.h |  17 ++
>>>   4 files changed, 362 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
>>> index 24679d8c5034..57c1542528dc 100644
>>> --- a/hw/vfio/migration.c
>>> +++ b/hw/vfio/migration.c
>>> @@ -15,6 +15,7 @@
>>>   #include <linux/vfio.h>
>>>   #include <sys/ioctl.h>
>>>
>>> +#include "io/channel-buffer.h"
>>>   #include "sysemu/runstate.h"
>>>   #include "hw/vfio/vfio-common.h"
>>>   #include "migration/misc.h"
>>> @@ -47,6 +48,7 @@
>>>   #define VFIO_MIG_FLAG_DEV_SETUP_STATE (0xffffffffef100003ULL)
>>>   #define VFIO_MIG_FLAG_DEV_DATA_STATE (0xffffffffef100004ULL)
>>>   #define VFIO_MIG_FLAG_DEV_INIT_DATA_SENT (0xffffffffef100005ULL)
>>> +#define VFIO_MIG_FLAG_DEV_DATA_STATE_COMPLETE (0xffffffffef100006ULL)
>>>
>>>   /*
>>>    * This is an arbitrary size based on migration of mlx5 devices, where typically
>>> @@ -55,6 +57,15 @@
>>>    */
>>>   #define VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE (1 * MiB)
>>>
>>> +#define VFIO_DEVICE_STATE_CONFIG_STATE (1)
>>> +
>>> +typedef struct VFIODeviceStatePacket {
>>> +    uint32_t version;
>>> +    uint32_t idx;
>>> +    uint32_t flags;
>>> +    uint8_t data[0];
>>> +} QEMU_PACKED VFIODeviceStatePacket;
>>> +
>>>   static int64_t bytes_transferred;
>>>
>>>   static const char *mig_state_to_str(enum vfio_device_mig_state state)
>>> @@ -254,6 +265,188 @@ static int vfio_load_buffer(QEMUFile *f, VFIODevice *vbasedev,
>>>       return ret;
>>>   }
>>>
>>> +typedef struct LoadedBuffer {
>>> +    bool is_present;
>>> +    char *data;
>>> +    size_t len;
>>> +} LoadedBuffer;
>>
>> Maybe rename LoadedBuffer to a more specific name, like VFIOStateBuffer?
>
> Will do.
>
>> I also feel like LoadedBuffer deserves a separate commit.
>> Plus, I think it would be good to add a full API for this that wraps
>> the g_array_* calls and holds the extra members.
>> E.g., VFIOStateBuffer, VFIOStateArray (holding load_buf_idx,
>> load_buf_idx_last, etc.), vfio_state_array_destroy(),
>> vfio_state_array_alloc(), vfio_state_array_get(), etc.
>> IMHO, this will make it clearer.
>
> Will think about wrapping GArray accesses in separate methods;
> however, wrapping a single-line GArray call in a separate function
> would normally seem a bit excessive.

Sure, let's do it only if it makes the code cleaner.
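For illustration, the suggested wrapper could look roughly like the sketch below. All names (VFIOStateBuffer, VFIOStateArray, the vfio_state_array_* helpers) are hypothetical, taken only from the suggestion above, and plain realloc() stands in for the GArray the patch actually uses:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical API from the review suggestion; the patch itself keeps a
 * bare GArray of LoadedBuffer.  realloc() stands in for GArray growth. */
typedef struct VFIOStateBuffer {
    bool is_present;
    char *data;
    size_t len;
} VFIOStateBuffer;

typedef struct VFIOStateArray {
    VFIOStateBuffer *bufs;
    size_t len;                 /* number of slots, grows on demand */
    uint32_t load_buf_idx;      /* next buffer to load into the device */
    uint32_t load_buf_idx_last; /* unknown until the config packet arrives */
} VFIOStateArray;

static void vfio_state_array_init(VFIOStateArray *arr)
{
    memset(arr, 0, sizeof(*arr));
    arr->load_buf_idx_last = UINT32_MAX;
}

/* Get slot idx, growing the array with zeroed slots as needed. */
static VFIOStateBuffer *vfio_state_array_get(VFIOStateArray *arr, uint32_t idx)
{
    if (idx >= arr->len) {
        size_t new_len = (size_t)idx + 1;

        arr->bufs = realloc(arr->bufs, new_len * sizeof(*arr->bufs));
        assert(arr->bufs);
        memset(&arr->bufs[arr->len], 0,
               (new_len - arr->len) * sizeof(*arr->bufs));
        arr->len = new_len;
    }
    return &arr->bufs[idx];
}

static void vfio_state_array_destroy(VFIOStateArray *arr)
{
    for (size_t i = 0; i < arr->len; i++) {
        free(arr->bufs[i].data);
    }
    free(arr->bufs);
    memset(arr, 0, sizeof(*arr));
}
```

The point of such a wrapper is that growth, zero-initialization and teardown live in one place instead of being open-coded at each g_array_* call site.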

>
>>> +
>>> +static void loaded_buffer_clear(gpointer data)
>>> +{
>>> +    LoadedBuffer *lb = data;
>>> +
>>> +    if (!lb->is_present) {
>>> +        return;
>>> +    }
>>> +
>>> +    g_clear_pointer(&lb->data, g_free);
>>> +    lb->is_present = false;
>>> +}
>>> +
>>> +static int vfio_load_state_buffer(void *opaque, char *data, size_t data_size,
>>> +                                  Error **errp)
>>> +{
>>> +    VFIODevice *vbasedev = opaque;
>>> +    VFIOMigration *migration = vbasedev->migration;
>>> +    VFIODeviceStatePacket *packet = (VFIODeviceStatePacket *)data;
>>> +    QEMU_LOCK_GUARD(&migration->load_bufs_mutex);
>>
>> Move lock to where it's needed? I.e., after 
>> trace_vfio_load_state_device_buffer_incoming(vbasedev->name, 
>> packet->idx)
>
> It's a declaration of a new variable so I guess it should always be
> at the top of the code block in the kernel / QEMU code style?

Yes, but it's opaque to the user.
Looking at other QEMU_LOCK_GUARD call sites in the code, it seems
people use it in the middle of code blocks as well.

>
> Also, these checks below are very unlikely to fail and even if they do,
> I doubt a failed migration due to bit stream corruption is a scenario
> worth optimizing run time performance for.

IMO, in this case it's more for readability, but we can go either way 
and let the maintainer decide.

>
>>> +    LoadedBuffer *lb;
>>> +
>>> +    if (data_size < sizeof(*packet)) {
>>> +        error_setg(errp, "packet too short at %zu (min is %zu)",
>>> +                   data_size, sizeof(*packet));
>>> +        return -1;
>>> +    }
>>> +
>>> +    if (packet->version != 0) {
>>> +        error_setg(errp, "packet has unknown version %" PRIu32,
>>> +                   packet->version);
>>> +        return -1;
>>> +    }
>>> +
>>> +    if (packet->idx == UINT32_MAX) {
>>> +        error_setg(errp, "packet has too high idx %" PRIu32,
>>> +                   packet->idx);
>>> +        return -1;
>>> +    }
>>> +
>>> +    trace_vfio_load_state_device_buffer_incoming(vbasedev->name, packet->idx);
>>> +
>>> +    /* config state packet should be the last one in the stream */
>>> +    if (packet->flags & VFIO_DEVICE_STATE_CONFIG_STATE) {
>>> +        migration->load_buf_idx_last = packet->idx;
>>> +    }
>>> +
>>> +    assert(migration->load_bufs);
>>> +    if (packet->idx >= migration->load_bufs->len) {
>>> +        g_array_set_size(migration->load_bufs, packet->idx + 1);
>>> +    }
>>> +
>>> +    lb = &g_array_index(migration->load_bufs, typeof(*lb), packet->idx);
>>> +    if (lb->is_present) {
>>> +        error_setg(errp, "state buffer %" PRIu32 " already filled", packet->idx);
>>> +        return -1;
>>> +    }
>>> +
>>> +    assert(packet->idx >= migration->load_buf_idx);
>>> +
>>> +    migration->load_buf_queued_pending_buffers++;
>>> +    if (migration->load_buf_queued_pending_buffers >
>>> +        vbasedev->migration_max_queued_buffers) {
>>> +        error_setg(errp,
>>> +                   "queuing state buffer %" PRIu32 " would exceed the max of %" PRIu64,
>>> +                   packet->idx, vbasedev->migration_max_queued_buffers);
>>> +        return -1;
>>> +    }
>>
>> I feel like max_queued_buffers accounting/checking/configuration 
>> should be split to a separate patch that will come after this patch.
>> Also, should we count bytes instead of buffers? Current buffer size 
>> is 1MB but this could change, and the normal user should not care or 
>> know what is the buffer size.
>> So maybe rename to migration_max_pending_bytes or such?
>
> Since it was Peter who asked for this limit to be introduced in the
> first place, I would like to ask for his preference here.
>
> @Peter: max queued buffers or bytes?
>
>>> +
>>> +    lb->data = g_memdup2(&packet->data, data_size - sizeof(*packet));
>>> +    lb->len = data_size - sizeof(*packet);
>>> +    lb->is_present = true;
>>> +
>>> +    qemu_cond_broadcast(&migration->load_bufs_buffer_ready_cond);
>>
>> There is only one thread waiting, shouldn't signal be enough?
>
> Will change this to _signal() since it clearly doesn't
> make sense to have a future-proof API here - it's an
> implementation detail.
>
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static void *vfio_load_bufs_thread(void *opaque)
>>> +{
>>> +    VFIODevice *vbasedev = opaque;
>>> +    VFIOMigration *migration = vbasedev->migration;
>>> +    Error **errp = &migration->load_bufs_thread_errp;
>>> +    g_autoptr(QemuLockable) locker = qemu_lockable_auto_lock(
>>> +        QEMU_MAKE_LOCKABLE(&migration->load_bufs_mutex));
>>
>> Any special reason to use QemuLockable?
>
> I prefer automatic lock management (RAII-like) for the same reason
> I prefer automatic memory management: it makes it much harder to
> forget to unlock the lock (or free memory) in some error path.
>
> That's the reason these primitives were introduced in QEMU in the
> first place (apparently modeled after their GLib equivalents) and
> why they are being (slowly) introduced into the Linux kernel too.

Agree, I guess what I really meant is why not use QEMU_LOCK_GUARD()?
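For what it's worth, one practical difference shows up later in this very function: the thread explicitly drops the lock with g_clear_pointer(&locker, qemu_lockable_auto_unlock) before the load-finish-ready section, which, as far as I can tell, a purely scope-bound QEMU_LOCK_GUARD cannot express. A standalone sketch of the two idioms follows; it uses pthread plus the GCC/Clang cleanup attribute (the same mechanism QEMU's guards rely on), and every name in it is made up, not QEMU API:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Made-up standalone analogue of QEMU's lock guards. */

static int unlock_count; /* counts unlocks so the behaviour is observable */

static void guard_unlock(pthread_mutex_t **mp)
{
    if (*mp) {                  /* NULL means "already released early" */
        pthread_mutex_unlock(*mp);
        unlock_count++;
    }
}

/* Scope-bound guard: unlock happens automatically when the block ends. */
#define SCOPED_LOCK_GUARD(m) \
    __attribute__((cleanup(guard_unlock))) \
    pthread_mutex_t *guard_ = (pthread_mutex_lock(m), (m))

/* Early release: unlock now and disarm the cleanup handler. */
static void guard_release_early(pthread_mutex_t **mp)
{
    pthread_mutex_unlock(*mp);
    unlock_count++;
    *mp = NULL;
}

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void scope_bound(void)
{
    SCOPED_LOCK_GUARD(&lock);
    /* critical section; the unlock is implicit at the closing brace */
}

static void early_release(void)
{
    SCOPED_LOCK_GUARD(&lock);
    /* critical section */
    guard_release_early(&guard_);   /* drop the lock before slow tail work */
    /* lock-free tail, e.g. broadcasting a "finished" condition */
}
```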

>
>>> +    LoadedBuffer *lb;
>>> +
>>> +    while (!migration->load_bufs_device_ready &&
>>> +           !migration->load_bufs_thread_want_exit) {
>>> +        qemu_cond_wait(&migration->load_bufs_device_ready_cond,
>>> +                       &migration->load_bufs_mutex);
>>> +    }
>>> +
>>> +    while (!migration->load_bufs_thread_want_exit) {
>>> +        bool starved;
>>> +        ssize_t ret;
>>> +
>>> +        assert(migration->load_buf_idx <= migration->load_buf_idx_last);
>>> +
>>> +        if (migration->load_buf_idx >= migration->load_bufs->len) {
>>> +            assert(migration->load_buf_idx == migration->load_bufs->len);
>>> +            starved = true;
>>> +        } else {
>>> +            lb = &g_array_index(migration->load_bufs, typeof(*lb),
>>> +                                migration->load_buf_idx);
>>> +            starved = !lb->is_present;
>>> +        }
>>> +
>>> +        if (starved) {
>>> +            trace_vfio_load_state_device_buffer_starved(vbasedev->name,
>>> +                                                        migration->load_buf_idx);
>>> +            qemu_cond_wait(&migration->load_bufs_buffer_ready_cond,
>>> +                           &migration->load_bufs_mutex);
>>> +            continue;
>>> +        }
>>> +
>>> +        if (migration->load_buf_idx == migration->load_buf_idx_last) {
>>> +            break;
>>> +        }
>>> +
>>> +        if (migration->load_buf_idx == 0) {
>>> +            trace_vfio_load_state_device_buffer_start(vbasedev->name);
>>> +        }
>>> +
>>> +        if (lb->len) {
>>> +            g_autofree char *buf = NULL;
>>> +            size_t buf_len;
>>> +            int errno_save;
>>> +
>>> +            trace_vfio_load_state_device_buffer_load_start(vbasedev->name,
>>> +                                                           migration->load_buf_idx);
>>> +
>>> +            /* lb might become re-allocated when we drop the lock */
>>> +            buf = g_steal_pointer(&lb->data);
>>> +            buf_len = lb->len;
>>> +
>>> +            /* Loading data to the device takes a while, drop the lock during this process */
>>> +            qemu_mutex_unlock(&migration->load_bufs_mutex);
>>> +            ret = write(migration->data_fd, buf, buf_len);
>>> +            errno_save = errno;
>>> +            qemu_mutex_lock(&migration->load_bufs_mutex);
>>> +
>>> +            if (ret < 0) {
>>> +                error_setg(errp, "write to state buffer %" PRIu32 " failed with %d",
>>> +                           migration->load_buf_idx, errno_save);
>>> +                break;
>>> +            } else if (ret < buf_len) {
>>> +                error_setg(errp, "write to state buffer %" PRIu32 " incomplete %zd / %zu",
>>> +                           migration->load_buf_idx, ret, buf_len);
>>> +                break;
>>> +            }
>>> +
>>> +            trace_vfio_load_state_device_buffer_load_end(vbasedev->name,
>>> +                                                         migration->load_buf_idx);
>>> +        }
>>> +
>>> +        assert(migration->load_buf_queued_pending_buffers > 0);
>>> +        migration->load_buf_queued_pending_buffers--;
>>> +
>>> +        if (migration->load_buf_idx == migration->load_buf_idx_last - 1) {
>>> +            trace_vfio_load_state_device_buffer_end(vbasedev->name);
>>> +        }
>>> +
>>> +        migration->load_buf_idx++;
>>> +    }
>>> +
>>> +    if (migration->load_bufs_thread_want_exit &&
>>> +        !*errp) {
>>> +        error_setg(errp, "load bufs thread asked to quit");
>>> +    }
>>> +
>>> +    g_clear_pointer(&locker, qemu_lockable_auto_unlock);
>>> +
>>> +    qemu_loadvm_load_finish_ready_lock();
>>> +    migration->load_bufs_thread_finished = true;
>>> +    qemu_loadvm_load_finish_ready_broadcast();
>>> +    qemu_loadvm_load_finish_ready_unlock();
>>> +
>>> +    return NULL;
>>> +}
>>> +
>>>   static int vfio_save_device_config_state(QEMUFile *f, void *opaque,
>>>                                            Error **errp)
>>>   {
>>> @@ -285,6 +478,8 @@ static int vfio_load_device_config_state(QEMUFile *f, void *opaque)
>>>       VFIODevice *vbasedev = opaque;
>>>       uint64_t data;
>>>
>>> +    trace_vfio_load_device_config_state_start(vbasedev->name);
>>
>> Maybe split this and below trace_vfio_load_device_config_state_end to 
>> a separate patch?
>
> I guess you mean to add these trace points in a separate patch?
> Can do.
>
>>> +
>>>       if (vbasedev->ops && vbasedev->ops->vfio_load_config) {
>>>           int ret;
>>>
>>> @@ -303,7 +498,7 @@ static int vfio_load_device_config_state(QEMUFile *f, void *opaque)
>>>           return -EINVAL;
>>>       }
>>>
>>> -    trace_vfio_load_device_config_state(vbasedev->name);
>>> +    trace_vfio_load_device_config_state_end(vbasedev->name);
>>>       return qemu_file_get_error(f);
>>>   }
>>>
>>> @@ -687,16 +882,70 @@ static void vfio_save_state(QEMUFile *f, void *opaque)
>>>   static int vfio_load_setup(QEMUFile *f, void *opaque, Error **errp)
>>>   {
>>>       VFIODevice *vbasedev = opaque;
>>> +    VFIOMigration *migration = vbasedev->migration;
>>> +    int ret;
>>> +
>>> +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_RESUMING,
>>> +                                   vbasedev->migration->device_state, errp);
>>> +    if (ret) {
>>> +        return ret;
>>> +    }
>>> +
>>> +    assert(!migration->load_bufs);
>>> +    migration->load_bufs = g_array_new(FALSE, TRUE, sizeof(LoadedBuffer));
>>> +    g_array_set_clear_func(migration->load_bufs, loaded_buffer_clear);
>>> +
>>> +    qemu_mutex_init(&migration->load_bufs_mutex);
>>> +
>>> +    migration->load_bufs_device_ready = false;
>>> +    qemu_cond_init(&migration->load_bufs_device_ready_cond);
>>>
>>> +    migration->load_buf_idx = 0;
>>> +    migration->load_buf_idx_last = UINT32_MAX;
>>> +    migration->load_buf_queued_pending_buffers = 0;
>>> +    qemu_cond_init(&migration->load_bufs_buffer_ready_cond);
>>> +
>>> +    migration->config_state_loaded_to_dev = false;
>>> +
>>> +    assert(!migration->load_bufs_thread_started);
>>
>> Maybe do all these allocations (and de-allocations) only if multifd 
>> device state is supported and enabled?
>> Extracting this to its own function could also be good.
>
> Sure, will try to avoid unnecessarily allocating multifd device state
> related things if this functionality is unavailable anyway.
>
>>>
>>> -    return vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_RESUMING,
>>> -                                    vbasedev->migration->device_state, errp);
>>> +    migration->load_bufs_thread_finished = false;
>>> +    migration->load_bufs_thread_want_exit = false;
>>> +    qemu_thread_create(&migration->load_bufs_thread, "vfio-load-bufs",
>>> +                       vfio_load_bufs_thread, opaque, QEMU_THREAD_JOINABLE);
>>
>> The device state save threads are managed by the migration core thread
>> pool. Don't we want to apply the same thread management scheme to
>> the load flow as well?
>
> I think that (in contrast with the device state saving threads)
> the buffer loading / reordering thread is an implementation detail
> of the VFIO driver, so I don't think it really makes sense for multifd 
> code
> to manage it.

Hmm, yes I understand.

Thanks.
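(For anyone skimming the thread: the reassembly scheme the commit message describes -- a per-packet idx, a config-state flag marking the last packet, strictly in-order consumption -- boils down to the toy model below. The names, the fixed 16-slot array and the single-byte payloads are illustrative only, not the patch's wire format.)

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define CONFIG_STATE 1u
#define MAX_BUFS     16     /* toy bound; the patch grows a GArray instead */

typedef struct {
    uint32_t idx;           /* position in the stream */
    uint32_t flags;         /* CONFIG_STATE marks the final packet */
    char data;              /* single-byte stand-in for a state buffer */
} Packet;

typedef struct {
    bool present[MAX_BUFS];
    char data[MAX_BUFS];
    uint32_t next;          /* a la load_buf_idx: next buffer to consume */
    uint32_t last;          /* a la load_buf_idx_last, UINT32_MAX = unknown */
} Reassembler;

/* Store one (possibly out-of-order) packet with idx < MAX_BUFS; append any
 * buffers that became consumable, in order, to out[]; return how many. */
static int receive(Reassembler *r, const Packet *p, char *out)
{
    int n = 0;

    if (p->flags & CONFIG_STATE) {
        r->last = p->idx;   /* config packet is the last one in the stream */
    }
    r->present[p->idx] = true;
    r->data[p->idx] = p->data;

    /* Drain the contiguous in-order prefix, stopping at the config packet. */
    while (r->next < r->last && r->next < MAX_BUFS && r->present[r->next]) {
        out[n++] = r->data[r->next++];
    }
    return n;
}
```

The real implementation does this draining in a dedicated thread that sleeps on load_bufs_buffer_ready_cond while "starved" and writes each buffer to the device's migration data_fd with the mutex dropped.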



