From: Peter Xu <peterx@redhat.com>
To: Prasad Pandit <ppandit@redhat.com>
Cc: qemu-devel@nongnu.org, "Juraj Marcin" <jmarcin@redhat.com>,
	"Kirti Wankhede" <kwankhede@nvidia.com>,
	"Maciej S . Szmigiero" <mail@maciej.szmigiero.name>,
	"Daniel P . Berrangé" <berrange@redhat.com>,
	"Joao Martins" <joao.m.martins@oracle.com>,
	"Alex Williamson" <alex@shazbot.org>,
	"Yishai Hadas" <yishaih@nvidia.com>,
	"Fabiano Rosas" <farosas@suse.de>,
	"Pranav Tyagi" <prtyagi@redhat.com>,
	"Zhiyi Guo" <zhguo@redhat.com>,
	"Markus Armbruster" <armbru@redhat.com>,
	"Avihai Horon" <avihaih@nvidia.com>,
	"Cédric Le Goater" <clg@redhat.com>,
	"Halil Pasic" <pasic@linux.ibm.com>,
	"Christian Borntraeger" <borntraeger@linux.ibm.com>,
	"Jason Herne" <jjherne@linux.ibm.com>,
	"Eric Farman" <farman@linux.ibm.com>,
	"Matthew Rosato" <mjrosato@linux.ibm.com>,
	"Richard Henderson" <richard.henderson@linaro.org>,
	"Ilya Leoshkevich" <iii@linux.ibm.com>,
	"David Hildenbrand" <david@kernel.org>,
	"Cornelia Huck" <cohuck@redhat.com>,
	"Eric Blake" <eblake@redhat.com>,
	"Vladimir Sementsov-Ogievskiy" <vsementsov@yandex-team.ru>,
	"John Snow" <jsnow@redhat.com>
Subject: Re: [PATCH RFC 05/12] migration/treewide: Merge @state_pending_{exact|estimate} APIs
Date: Wed, 1 Apr 2026 16:53:03 -0400	[thread overview]
Message-ID: <ac2FrwjNPfwrKjnL@x1.local> (raw)
In-Reply-To: <CAE8KmOzjUviaivX_Z6vk=S9DKMyM7aSyx2D1-M25tRsamc1PyA@mail.gmail.com>

On Tue, Mar 24, 2026 at 04:05:00PM +0530, Prasad Pandit wrote:
> > +typedef struct MigPendingData {
> > +    /* How many bytes are pending for precopy / stopcopy? */
> > +    uint64_t precopy_bytes;
> > +    /* How many bytes are pending that can be transferred in postcopy? */
> > +    uint64_t postcopy_bytes;
> > +    /* Is this a fastpath query (which can be inaccurate)? */
> > +    bool fastpath;
> > +} MigPendingData;
> > +
> 
> * Pending precopy bytes are: bytes still pending to be sent + dirty
> (updated) bytes; How do we measure/estimate pending postcopy bytes?
> 
> * Could we do away with separating pending bytes into Precopy and
> Postcopy bytes? Instead go with just pending_bytes on the source side?
> 
> * Let's not add another term/variable called stopcopy. It'll only add
> to the confusion.

What would you suggest to solve this problem?
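For reference, the reason the split matters: only the precopy portion counts against the downtime budget at switchover, while postcopy-able bytes can still be sent after the target starts, so a single pending_bytes would lose exactly the information the migration core needs. A minimal sketch (hypothetical helper, not the actual QEMU API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified copy of the struct proposed in the patch */
typedef struct MigPendingData {
    uint64_t precopy_bytes;   /* must be sent before the target starts */
    uint64_t postcopy_bytes;  /* may be sent after the target starts */
    bool fastpath;
} MigPendingData;

/*
 * Hypothetical consumer: the switchover decision only needs the bytes
 * that must go out during the blackout window.  With a merged
 * pending_bytes field this check could not be written.
 */
static bool can_switchover(const MigPendingData *p,
                           uint64_t bandwidth,      /* bytes per second */
                           uint64_t max_downtime_ms)
{
    uint64_t budget = bandwidth / 1000 * max_downtime_ms;
    return p->precopy_bytes <= budget;
}
```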

> 
> 
> >  /**
> >   * struct SaveVMHandlers: handler structure to finely control
> >   * migration of complex subsystems and devices, such as RAM, block and
> > @@ -197,46 +206,17 @@ typedef struct SaveVMHandlers {
> >      bool (*save_postcopy_prepare)(QEMUFile *f, void *opaque, Error **errp);
> >
> >      /**
> > -     * @state_pending_estimate
> > -     *
> > -     * This estimates the remaining data to transfer
> > -     *
> > -     * Sum of @can_postcopy and @must_postcopy is the whole amount of
> > -     * pending data.
> > -     *
> > -     * @opaque: data pointer passed to register_savevm_live()
> > -     * @must_precopy: amount of data that must be migrated in precopy
> > -     *                or in stopped state, i.e. that must be migrated
> > -     *                before target start.
> > -     * @can_postcopy: amount of data that can be migrated in postcopy
> > -     *                or in stopped state, i.e. after target start.
> > -     *                Some can also be migrated during precopy (RAM).
> > -     *                Some must be migrated after source stops
> > -     *                (block-dirty-bitmap)
> > -     */
> > -    void (*state_pending_estimate)(void *opaque, uint64_t *must_precopy,
> > -                                   uint64_t *can_postcopy);
> > -
> > -    /**
> > -     * @state_pending_exact
> > -     *
> > -     * This calculates the exact remaining data to transfer
> > +     * @save_query_pending
> >       *
> > -     * Sum of @can_postcopy and @must_postcopy is the whole amount of
> > -     * pending data.
> > +     * This estimates the remaining data to transfer on the source side.
> > +     * It is highly suggested that a module implement both fastpath
> > +     * and slowpath versions when its query can be slow (for more
> > +     * information, see the pending->fastpath field).
> >       *
> >       * @opaque: data pointer passed to register_savevm_live()
> > -     * @must_precopy: amount of data that must be migrated in precopy
> > -     *                or in stopped state, i.e. that must be migrated
> > -     *                before target start.
> > -     * @can_postcopy: amount of data that can be migrated in postcopy
> > -     *                or in stopped state, i.e. after target start.
> > -     *                Some can also be migrated during precopy (RAM).
> > -     *                Some must be migrated after source stops
> > -     *                (block-dirty-bitmap)
> > +     * @pending: pointer to a MigPendingData struct
> >       */
> > -    void (*state_pending_exact)(void *opaque, uint64_t *must_precopy,
> > -                                uint64_t *can_postcopy);
> > +    void (*save_query_pending)(void *opaque, MigPendingData *pending);
> >
> >      /**
> >       * @load_state
> > diff --git a/migration/savevm.h b/migration/savevm.h
> > index b3d1e8a13c..b116933bce 100644
> > --- a/migration/savevm.h
> > +++ b/migration/savevm.h
> > @@ -14,6 +14,8 @@
> >  #ifndef MIGRATION_SAVEVM_H
> >  #define MIGRATION_SAVEVM_H
> >
> > +#include "migration/register.h"
> > +
> >  #define QEMU_VM_FILE_MAGIC           0x5145564d
> >  #define QEMU_VM_FILE_VERSION_COMPAT  0x00000002
> >  #define QEMU_VM_FILE_VERSION         0x00000003
> > @@ -43,6 +45,7 @@ int qemu_savevm_state_iterate(QEMUFile *f, bool postcopy);
> >  void qemu_savevm_state_cleanup(void);
> >  void qemu_savevm_state_complete_postcopy(QEMUFile *f);
> >  int qemu_savevm_state_complete_precopy(MigrationState *s);
> > +void qemu_savevm_query_pending(MigPendingData *pending, bool fastpath);
> >  void qemu_savevm_state_pending_exact(uint64_t *must_precopy,
> >                                       uint64_t *can_postcopy);
> >  void qemu_savevm_state_pending_estimate(uint64_t *must_precopy,
> > diff --git a/hw/s390x/s390-stattrib.c b/hw/s390x/s390-stattrib.c
> > index d808ece3b9..b1ec51c77a 100644
> > --- a/hw/s390x/s390-stattrib.c
> > +++ b/hw/s390x/s390-stattrib.c
> > @@ -187,15 +187,14 @@ static int cmma_save_setup(QEMUFile *f, void *opaque, Error **errp)
> >      return 0;
> >  }
> >
> > -static void cmma_state_pending(void *opaque, uint64_t *must_precopy,
> > -                               uint64_t *can_postcopy)
> > +static void cmma_state_pending(void *opaque, MigPendingData *pending)
> >  {
> >      S390StAttribState *sas = S390_STATTRIB(opaque);
> >      S390StAttribClass *sac = S390_STATTRIB_GET_CLASS(sas);
> >      long long res = sac->get_dirtycount(sas);
> >
> >      if (res >= 0) {
> > -        *must_precopy += res;
> > +        pending->precopy_bytes += res;
> >      }
> >  }
> >
> > @@ -340,8 +339,7 @@ static SaveVMHandlers savevm_s390_stattrib_handlers = {
> >      .save_setup = cmma_save_setup,
> >      .save_live_iterate = cmma_save_iterate,
> >      .save_complete = cmma_save_complete,
> > -    .state_pending_exact = cmma_state_pending,
> > -    .state_pending_estimate = cmma_state_pending,
> > +    .save_query_pending = cmma_state_pending,
> >      .save_cleanup = cmma_save_cleanup,
> >      .load_state = cmma_load,
> >      .is_active = cmma_active,
> > diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> > index 827d3ded63..c054c749b0 100644
> > --- a/hw/vfio/migration.c
> > +++ b/hw/vfio/migration.c
> > @@ -570,42 +570,51 @@ static void vfio_save_cleanup(void *opaque)
> >      trace_vfio_save_cleanup(vbasedev->name);
> >  }
> >
> > -static void vfio_state_pending_estimate(void *opaque, uint64_t *must_precopy,
> > -                                        uint64_t *can_postcopy)
> > +static void vfio_state_pending_sync(VFIODevice *vbasedev)
> >  {
> > -    VFIODevice *vbasedev = opaque;
> >      VFIOMigration *migration = vbasedev->migration;
> >
> > -    if (!vfio_device_state_is_precopy(vbasedev)) {
> > -        return;
> > -    }
> > +    vfio_query_stop_copy_size(vbasedev);
> >
> > -    *must_precopy +=
> > -        migration->precopy_init_size + migration->precopy_dirty_size;
> > +    if (vfio_device_state_is_precopy(vbasedev)) {
> > +        vfio_query_precopy_size(migration);
> > +    }
> >
> > -    trace_vfio_state_pending_estimate(vbasedev->name, *must_precopy,
> > -                                      *can_postcopy,
> > -                                      migration->precopy_init_size,
> > -                                      migration->precopy_dirty_size);
> > +    /*
> > +     * In all cases, STOPCOPY data should be no more than all PRECOPY data.
> > +     * Otherwise we have a problem.  For now, only report an error.
> > +     */
> > +    if (migration->precopy_init_size + migration->precopy_dirty_size <
> > +        migration->stopcopy_size) {
> > +        error_report_once("%s: wrong pending data (init=%" PRIx64
> > +                          ", dirty=%"PRIx64", stop=%"PRIx64")",
> > +                          __func__, migration->precopy_init_size,
> > +                          migration->precopy_dirty_size,
> > +                          migration->stopcopy_size);
> > +    }
> >  }
> >
> > -static void vfio_state_pending_exact(void *opaque, uint64_t *must_precopy,
> > -                                     uint64_t *can_postcopy)
> > +static void vfio_state_pending(void *opaque, MigPendingData *pending)
> >  {
> >      VFIODevice *vbasedev = opaque;
> >      VFIOMigration *migration = vbasedev->migration;
> > +    uint64_t remain;
> >
> > -    vfio_query_stop_copy_size(vbasedev);
> > -    *must_precopy += migration->stopcopy_size;
> > -
> > -    if (vfio_device_state_is_precopy(vbasedev)) {
> > -        vfio_query_precopy_size(migration);
> > +    if (pending->fastpath) {
> > +        if (!vfio_device_state_is_precopy(vbasedev)) {
> > +            return;
> > +        }
> > +        remain = migration->precopy_init_size + migration->precopy_dirty_size;
> > +    } else {
> > +        vfio_state_pending_sync(vbasedev);
> > +        remain = migration->stopcopy_size;
> >      }
> >
> > -    trace_vfio_state_pending_exact(vbasedev->name, *must_precopy, *can_postcopy,
> > -                                   migration->stopcopy_size,
> > -                                   migration->precopy_init_size,
> > -                                   migration->precopy_dirty_size);
> > +    pending->precopy_bytes += remain;
> > +
> > +    trace_vfio_state_pending(vbasedev->name, migration->stopcopy_size,
> > +                             migration->precopy_init_size,
> > +                             migration->precopy_dirty_size);
> >  }
> >
> >  static bool vfio_is_active_iterate(void *opaque)
> > @@ -850,8 +859,7 @@ static const SaveVMHandlers savevm_vfio_handlers = {
> >      .save_prepare = vfio_save_prepare,
> >      .save_setup = vfio_save_setup,
> >      .save_cleanup = vfio_save_cleanup,
> > -    .state_pending_estimate = vfio_state_pending_estimate,
> > -    .state_pending_exact = vfio_state_pending_exact,
> > +    .save_query_pending = vfio_state_pending,
> >      .is_active_iterate = vfio_is_active_iterate,
> >      .save_live_iterate = vfio_save_iterate,
> >      .save_complete = vfio_save_complete_precopy,
> > diff --git a/migration/block-dirty-bitmap.c b/migration/block-dirty-bitmap.c
> > index a061aad817..376a9b43ac 100644
> > --- a/migration/block-dirty-bitmap.c
> > +++ b/migration/block-dirty-bitmap.c
> > @@ -766,9 +766,7 @@ static int dirty_bitmap_save_complete(QEMUFile *f, void *opaque)
> >      return 0;
> >  }
> >
> > -static void dirty_bitmap_state_pending(void *opaque,
> > -                                       uint64_t *must_precopy,
> > -                                       uint64_t *can_postcopy)
> > +static void dirty_bitmap_state_pending(void *opaque, MigPendingData *data)
> >  {
> >      DBMSaveState *s = &((DBMState *)opaque)->save;
> >      SaveBitmapState *dbms;
> > @@ -788,7 +786,7 @@ static void dirty_bitmap_state_pending(void *opaque,
> >
> >      trace_dirty_bitmap_state_pending(pending);
> >
> > -    *can_postcopy += pending;
> > +    data->postcopy_bytes += pending;
> >  }
> >
> >  /* First occurrence of this bitmap. It should be created if doesn't exist */
> > @@ -1250,8 +1248,7 @@ static SaveVMHandlers savevm_dirty_bitmap_handlers = {
> >      .save_setup = dirty_bitmap_save_setup,
> >      .save_complete = dirty_bitmap_save_complete,
> >      .has_postcopy = dirty_bitmap_has_postcopy,
> > -    .state_pending_exact = dirty_bitmap_state_pending,
> > -    .state_pending_estimate = dirty_bitmap_state_pending,
> > +    .save_query_pending = dirty_bitmap_state_pending,
> >      .save_live_iterate = dirty_bitmap_save_iterate,
> >      .is_active_iterate = dirty_bitmap_is_active_iterate,
> >      .load_state = dirty_bitmap_load,
> > diff --git a/migration/ram.c b/migration/ram.c
> > index 979751f61b..89f761a471 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -3443,30 +3443,17 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
> >      return qemu_fflush(f);
> >  }
> >
> > -static void ram_state_pending_estimate(void *opaque, uint64_t *must_precopy,
> > -                                       uint64_t *can_postcopy)
> > -{
> > -    RAMState **temp = opaque;
> > -    RAMState *rs = *temp;
> > -
> > -    uint64_t remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
> > -
> > -    if (migrate_postcopy_ram()) {
> > -        /* We can do postcopy, and all the data is postcopiable */
> > -        *can_postcopy += remaining_size;
> > -    } else {
> > -        *must_precopy += remaining_size;
> > -    }
> > -}
> > -
> > -static void ram_state_pending_exact(void *opaque, uint64_t *must_precopy,
> > -                                    uint64_t *can_postcopy)
> > +static void ram_state_pending(void *opaque, MigPendingData *pending)
> >  {
> >      RAMState **temp = opaque;
> >      RAMState *rs = *temp;
> >      uint64_t remaining_size;
> >
> > -    if (!migration_in_postcopy()) {
> > +    /*
> > +     * Sync is only needed either with: (1) a fast query, or (2) postcopy
> > +     * as started (in which case no new dirty will generate anymore).
> 
> * as -> has
> * (... no new dirty will generate anymore)  - ...?
> 
> > +     */
> > +    if (!pending->fastpath && !migration_in_postcopy()) {
> 
> * The comment above and 'if' conditionals don't seem to match. We are
> saying a call to 'migration_bitmap_sync_precopy()' is needed either
> with fast query ie. 'pending->fastpath = true'  OR Postcopy has
> started ie. migration_in_postcopy() = true, right? What is the
> right/intended behaviour here?

It should have been s/only/not/.  I'll update the comment to:

    /*
     * Sync is not needed either with: (1) a fast query, or (2) after
     * postcopy has started (no new dirty will generate anymore).
     */
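The corrected condition then boils down to a small predicate (illustrative names, not the real QEMU symbols):

```c
#include <stdbool.h>

/*
 * Illustrative predicate matching the corrected comment: the expensive
 * dirty-bitmap sync is skipped both for fast (estimate) queries and
 * once postcopy has started, since no new pages get dirtied after that.
 */
static bool need_bitmap_sync(bool fastpath, bool in_postcopy)
{
    return !fastpath && !in_postcopy;
}
```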

> 
> >          bql_lock();
> >          WITH_RCU_READ_LOCK_GUARD() {
> >              migration_bitmap_sync_precopy(false);
> > @@ -3478,9 +3465,9 @@ static void ram_state_pending_exact(void *opaque, uint64_t *must_precopy,
> >
> >      if (migrate_postcopy_ram()) {
> >          /* We can do postcopy, and all the data is postcopiable */
> > -        *can_postcopy += remaining_size;
> > +        pending->postcopy_bytes += remaining_size;
> >      } else {
> > -        *must_precopy += remaining_size;
> > +        pending->precopy_bytes += remaining_size;
> >      }
> >  }
> >
> > @@ -4703,8 +4690,7 @@ static SaveVMHandlers savevm_ram_handlers = {
> >      .save_live_iterate = ram_save_iterate,
> >      .save_complete = ram_save_complete,
> >      .has_postcopy = ram_has_postcopy,
> > -    .state_pending_exact = ram_state_pending_exact,
> > -    .state_pending_estimate = ram_state_pending_estimate,
> > +    .save_query_pending = ram_state_pending,
> >      .load_state = ram_load,
> >      .save_cleanup = ram_save_cleanup,
> >      .load_setup = ram_load_setup,
> > diff --git a/migration/savevm.c b/migration/savevm.c
> > index dd58f2a705..6268e68382 100644
> > --- a/migration/savevm.c
> > +++ b/migration/savevm.c
> > @@ -1762,46 +1762,45 @@ int qemu_savevm_state_complete_precopy(MigrationState *s)
> >      return qemu_fflush(f);
> >  }
> >
> > -/* Give an estimate of the amount left to be transferred,
> > - * the result is split into the amount for units that can and
> > - * for units that can't do postcopy.
> > - */
> > -void qemu_savevm_state_pending_estimate(uint64_t *must_precopy,
> > -                                        uint64_t *can_postcopy)
> > +void qemu_savevm_query_pending(MigPendingData *pending, bool fastpath)
> >  {
> >      SaveStateEntry *se;
> >
> > -    *must_precopy = 0;
> > -    *can_postcopy = 0;
> > +    pending->precopy_bytes = 0;
> > +    pending->postcopy_bytes = 0;
> > +    pending->fastpath = fastpath;
> >
> >      QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
> > -        if (!se->ops || !se->ops->state_pending_estimate) {
> > +        if (!se->ops || !se->ops->save_query_pending) {
> >              continue;
> >          }
> >          if (!qemu_savevm_state_active(se)) {
> >              continue;
> >          }
> > -        se->ops->state_pending_estimate(se->opaque, must_precopy, can_postcopy);
> > +        se->ops->save_query_pending(se->opaque, pending);
> >      }
> >  }
> >
> > +void qemu_savevm_state_pending_estimate(uint64_t *must_precopy,
> > +                                        uint64_t *can_postcopy)
> > +{
> > +    MigPendingData pending;
> > +
> > +    qemu_savevm_query_pending(&pending, true);
> > +
> > +    *must_precopy = pending.precopy_bytes;
> > +    *can_postcopy = pending.postcopy_bytes;
> > +}
> > +
> >  void qemu_savevm_state_pending_exact(uint64_t *must_precopy,
> >                                       uint64_t *can_postcopy)
> >  {
> > -    SaveStateEntry *se;
> > +    MigPendingData pending;
> >
> > -    *must_precopy = 0;
> > -    *can_postcopy = 0;
> > +    qemu_savevm_query_pending(&pending, false);
> >
> > -    QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
> > -        if (!se->ops || !se->ops->state_pending_exact) {
> > -            continue;
> > -        }
> > -        if (!qemu_savevm_state_active(se)) {
> > -            continue;
> > -        }
> > -        se->ops->state_pending_exact(se->opaque, must_precopy, can_postcopy);
> > -    }
> > +    *must_precopy = pending.precopy_bytes;
> > +    *can_postcopy = pending.postcopy_bytes;
> >  }
> 
> * IIUC, we send fastpath=true from _pending_estimate() because we want
> to avoid a call to migration_bitmap_sync_precopy(), as calling it won't
> be fast. And we send fastpath=false from the _pending_exact() function
> because we want to trigger a call to the
> migration_bitmap_sync_precopy() routine.
> 
> * Instead of calling the boolean parameter 'fastpath',  let's use
> 'bool use_precision'  OR 'bool sync_data'. And then in
> qemu_savevm_query_pending()
> 
>     qemu_savevm_query_pending():
>             if (use_precision || sync_data) {
>                      migration_bitmap_sync_precopy()
>             }
> 
> * When reading code, equating fastpath with 'estimation' and slowpath
> with 'precision/accuracy' is unnatural.

I'll use "exact" as suggested by Juraj.
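With the rename, the flag reads naturally at call sites. A sketch of the possible shape (field and function names are hypothetical until the respin):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Possible shape after the rename discussed in this thread */
typedef struct MigPendingData {
    uint64_t precopy_bytes;
    uint64_t postcopy_bytes;
    bool exact;  /* true: precise but possibly slow; false: cheap estimate */
} MigPendingData;

/* Hypothetical entry point; the real code would iterate save handlers */
static void query_pending(MigPendingData *p, bool exact)
{
    memset(p, 0, sizeof(*p));
    p->exact = exact;
    /* ... se->ops->save_query_pending(se->opaque, p) for each handler ... */
}
```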

Thanks,

-- 
Peter Xu



Thread overview: 67+ messages
2026-03-19 23:12 [PATCH RFC 00/12] migration/vfio: Fix a few issues on API misuse or statistic reports Peter Xu
2026-03-19 23:12 ` [PATCH RFC 01/12] migration: Fix low possibility downtime violation Peter Xu
2026-03-20 12:26   ` Prasad Pandit
2026-03-27 14:35     ` Juraj Marcin
2026-03-30 11:52       ` Prasad Pandit
2026-03-31 12:49         ` Juraj Marcin
2026-04-06  7:21           ` Prasad Pandit
2026-04-01 19:11       ` Peter Xu
2026-03-27 15:05   ` Juraj Marcin
2026-03-19 23:12 ` [PATCH RFC 02/12] migration/qapi: Rename MigrationStats to MigrationRAMStats Peter Xu
2026-03-19 23:26   ` Peter Xu
2026-03-20  6:54   ` Markus Armbruster
2026-04-01 19:38     ` Peter Xu
2026-04-01 19:47     ` Peter Xu
2026-03-19 23:12 ` [PATCH RFC 03/12] vfio/migration: Throttle vfio_save_block() on data size to read Peter Xu
2026-03-25 14:10   ` Avihai Horon
2026-04-01 20:36     ` Peter Xu
2026-04-06 11:21       ` Avihai Horon
2026-04-07 15:18         ` Peter Xu
2026-03-19 23:12 ` [PATCH RFC 04/12] vfio/migration: Cache stop size in VFIOMigration Peter Xu
2026-03-25 14:15   ` Avihai Horon
2026-04-01 20:41     ` Peter Xu
2026-04-06 11:28       ` Avihai Horon
2026-03-19 23:12 ` [PATCH RFC 05/12] migration/treewide: Merge @state_pending_{exact|estimate} APIs Peter Xu
2026-03-24 10:35   ` Prasad Pandit
2026-04-01 20:53     ` Peter Xu [this message]
2026-03-25 15:20   ` Avihai Horon
2026-04-01 21:22     ` Peter Xu
2026-04-06 11:54       ` Avihai Horon
2026-03-27 15:17   ` Juraj Marcin
2026-03-19 23:12 ` [PATCH RFC 06/12] migration: Use the new save_query_pending() API directly Peter Xu
2026-03-24  9:35   ` Prasad Pandit
2026-03-27 15:24   ` Juraj Marcin
2026-04-01 22:28     ` Peter Xu
2026-03-19 23:12 ` [PATCH RFC 07/12] migration: Introduce stopcopy_bytes in save_query_pending() Peter Xu
2026-03-24 11:05   ` Prasad Pandit
2026-03-25 16:54   ` Avihai Horon
2026-04-02 14:09     ` Peter Xu
2026-04-06 12:20       ` Avihai Horon
2026-04-07 15:30         ` Peter Xu
2026-03-27 16:43   ` Juraj Marcin
2026-04-02 15:16     ` Peter Xu
2026-04-07 15:19       ` Juraj Marcin
2026-04-07 15:32         ` Peter Xu
2026-03-19 23:12 ` [PATCH RFC 08/12] vfio/migration: Fix incorrect reporting for VFIO pending data Peter Xu
2026-03-25 17:32   ` Avihai Horon
2026-04-02 15:28     ` Peter Xu
2026-04-02 15:55       ` Peter Xu
2026-04-06 12:34         ` Avihai Horon
2026-04-07 15:45           ` Peter Xu
2026-03-19 23:12 ` [PATCH RFC 09/12] migration: Make iteration counter out of RAM Peter Xu
2026-03-20  6:12   ` Yong Huang
2026-03-20  9:49   ` Prasad Pandit
2026-04-02 15:35     ` Peter Xu
2026-03-27 16:49   ` Juraj Marcin
2026-04-02 15:42     ` Peter Xu
2026-03-19 23:13 ` [PATCH RFC 10/12] migration: Introduce a helper to return switchover bw estimate Peter Xu
2026-03-23 10:26   ` Prasad Pandit
2026-03-27 17:07   ` Juraj Marcin
2026-04-07 17:27     ` Peter Xu
2026-04-08 14:33       ` Juraj Marcin
2026-03-19 23:13 ` [PATCH RFC 11/12] migration: Calculate expected downtime on demand Peter Xu
2026-03-27 17:17   ` Juraj Marcin
2026-04-07 17:33     ` Peter Xu
2026-03-19 23:13 ` [PATCH RFC 12/12] migration: Fix calculation of expected_downtime to take VFIO info Peter Xu
2026-03-23 12:05   ` Prasad Pandit
2026-04-07 17:40     ` Peter Xu
