From: Yong Huang <yong.huang@smartx.com>
To: Peter Xu <peterx@redhat.com>
Cc: qemu-devel@nongnu.org, "Juraj Marcin" <jmarcin@redhat.com>,
"Kirti Wankhede" <kwankhede@nvidia.com>,
"Maciej S . Szmigiero" <mail@maciej.szmigiero.name>,
"Daniel P . Berrangé" <berrange@redhat.com>,
"Joao Martins" <joao.m.martins@oracle.com>,
"Alex Williamson" <alex@shazbot.org>,
"Yishai Hadas" <yishaih@nvidia.com>,
"Fabiano Rosas" <farosas@suse.de>,
"Pranav Tyagi" <prtyagi@redhat.com>,
"Zhiyi Guo" <zhguo@redhat.com>,
"Markus Armbruster" <armbru@redhat.com>,
"Avihai Horon" <avihaih@nvidia.com>,
"Cédric Le Goater" <clg@redhat.com>
Subject: Re: [PATCH RFC 09/12] migration: Make iteration counter out of RAM
Date: Fri, 20 Mar 2026 14:12:51 +0800 [thread overview]
Message-ID: <CAK9dgmZb1RGvU5r-+Gudf2iZF4keG+0rJxfg6Rthws6v35XBFg@mail.gmail.com> (raw)
In-Reply-To: <20260319231302.123135-10-peterx@redhat.com>
Thanks,
Reviewed-by: Hyman Huang <yong.huang@smartx.com>
On Fri, Mar 20, 2026 at 7:13 AM Peter Xu <peterx@redhat.com> wrote:
> The iteration counter used to hide in the RAM dirty sync path. Now that
> more modules can slow-sync dirty information, keeping it there is no
> longer a good fit, because iterations are not a RAM-only concept: all
> modules should follow them.
>
> More importantly, mgmt may try to query dirty info (to make policy
> decisions like adjusting downtime) by listening to iteration count changes
> via QMP events. So we must make sure the iteration count is boosted only
> _after_ the dirty sync operations, in whatever form (RAM's dirty bitmap
> sync, or VFIO's various ioctls fetching the latest dirty info from the
> kernel).
>
> Move this into the core migration path, together with the event
> generation, so that it can be well ordered with the sync operations of all
> modules.
>
> This brings a good side effect: it fixes an old issue where
> cpu_throttle_dirty_sync_timer_tick() could randomly boost the iteration
> count (because it invokes sync ops). Now it won't, which is actually the
> right behavior.
>
> That said, we have code (not only QEMU, but likely mgmt too) assuming the
> dirty sync count is 1 during the 1st iteration. Initialize it to 1 this
> time, because the dirty sync during setup() will no longer boost this
> counter.
>
> Cc: Yong Huang <yong.huang@smartx.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/migration-stats.h |  3 ++-
>  migration/migration.c       | 29 ++++++++++++++++++++++++++---
>  migration/ram.c             |  6 ------
>  3 files changed, 28 insertions(+), 10 deletions(-)
>
> diff --git a/migration/migration-stats.h b/migration/migration-stats.h
> index 1153520f7a..326ddb0088 100644
> --- a/migration/migration-stats.h
> +++ b/migration/migration-stats.h
> @@ -43,7 +43,8 @@ typedef struct {
>       */
>      uint64_t dirty_pages_rate;
>      /*
> -     * Number of times we have synchronized guest bitmaps.
> +     * Number of times we have synchronized guest bitmaps. This always
> +     * starts from 1 for the 1st iteration.
>       */
>      uint64_t dirty_sync_count;
>      /*
> diff --git a/migration/migration.c b/migration/migration.c
> index 42facb16d1..ad8a824585 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1654,10 +1654,15 @@ int migrate_init(MigrationState *s, Error **errp)
>      s->threshold_size = 0;
>      s->switchover_acked = false;
>      s->rdma_migration = false;
> +
>      /*
> -     * set mig_stats memory to zero for a new migration
> +     * set mig_stats memory to zero for a new migration.. except the
> +     * iteration counter, which we want to make sure it returns 1 for the
> +     * first iteration.
>       */
>      memset(&mig_stats, 0, sizeof(mig_stats));
> +    mig_stats.dirty_sync_count = 1;
> +
>      migration_reset_vfio_bytes_transferred();
>
>      s->postcopy_package_loaded = false;
> @@ -3230,10 +3235,28 @@ static bool migration_iteration_next_ready(MigrationState *s,
>  static void migration_iteration_go_next(MigPendingData *pending)
>  {
>      /*
> -     * Do a slow sync will achieve this.  TODO: move RAM iteration code
> -     * into the core layer.
> +     * Do a slow sync first before boosting the iteration count.
>       */
>      qemu_savevm_query_pending(pending, false);
> +
> +    /*
> +     * Boost dirty sync count to reflect we finished one iteration.
> +     *
> +     * NOTE: we need to make sure when this happens (together with the
> +     * event sent below) all modules have slow-synced the pending data
> +     * above. That means a write mem barrier, but qatomic_add() should be
> +     * enough.
> +     *
> +     * It's because a mgmt could wait on the iteration event to query again
> +     * on pending data for policy changes (e.g. downtime adjustments). The
> +     * ordering will make sure the query will fetch the latest results from
> +     * all the modules.
> +     */
> +    qatomic_add(&mig_stats.dirty_sync_count, 1);
> +
> +    if (migrate_events()) {
> +        qapi_event_send_migration_pass(mig_stats.dirty_sync_count);
> +    }
>  }
>
>  /*
> diff --git a/migration/ram.c b/migration/ram.c
> index 89f761a471..29e9608715 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1136,8 +1136,6 @@ static void migration_bitmap_sync(RAMState *rs, bool last_stage)
>      RAMBlock *block;
>      int64_t end_time;
>
> -    qatomic_add(&mig_stats.dirty_sync_count, 1);
> -
>      if (!rs->time_last_bitmap_sync) {
>          rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
>      }
> @@ -1172,10 +1170,6 @@ static void migration_bitmap_sync(RAMState *rs, bool last_stage)
>          rs->num_dirty_pages_period = 0;
>          rs->bytes_xfer_prev = migration_transferred_bytes();
>      }
> -    if (migrate_events()) {
> -        uint64_t generation = qatomic_read(&mig_stats.dirty_sync_count);
> -        qapi_event_send_migration_pass(generation);
> -    }
>  }
>
> void migration_bitmap_sync_precopy(bool last_stage)
> --
> 2.50.1
>
>
--
Best regards