* [PATCH v5 0/2] Migration: Make more ram_counters atomic
@ 2023-04-26 20:10 Juan Quintela
  2023-04-26 20:10 ` [PATCH v5 1/2] migration: Make dirty_pages_rate atomic Juan Quintela
  ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread

From: Juan Quintela @ 2023-04-26 20:10 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Peter Xu, Leonardo Bras, Richard Henderson

Hi

In this v5:

Not only change the type of the counters, but also use the __nocheck()
variants of the functions.

Please, review.

[v4]
- Change aligned_uint64_t to size_t to make (some) 32-bit hosts happy.

Please review.

[v3]
- Addressed reviews.
- All counters are now atomic, either Stat64 or atomic.
- Rename duplicated to zero_pages.
- Rename normal to normal_pages.

Please review.

[v2]
- Fix typos found by David Edmondson.
- Add Reviewed-by tags.

Please review.

[v1]
In the previous series we created ram_atomic_counters.  But we
basically need all counters to be atomic.  So move back to having only
ram_counters, just with a new type that allows atomic counters.

Once there, move the update of stats out of the RAM mutex.
And make multifd_bytes atomic.

Later, Juan.

Juan Quintela (2):
  migration: Make dirty_pages_rate atomic
  migration: Make dirty_bytes_last_sync atomic

 migration/migration.c | 9 ++++++---
 migration/ram.c       | 7 ++++---
 migration/ram.h       | 4 ++--
 3 files changed, 12 insertions(+), 8 deletions(-)

-- 
2.40.0

^ permalink raw reply	[flat|nested] 7+ messages in thread
* [PATCH v5 1/2] migration: Make dirty_pages_rate atomic
  2023-04-26 20:10 [PATCH v5 0/2] Migration: Make more ram_counters atomic Juan Quintela
@ 2023-04-26 20:10 ` Juan Quintela
  2023-04-27  8:17   ` Paolo Bonzini
  2023-04-26 20:10 ` [PATCH v5 2/2] migration: Make dirty_bytes_last_sync atomic Juan Quintela
  2023-04-27  8:41 ` [PATCH v5 0/2] Migration: Make more ram_counters atomic Juan Quintela
  2 siblings, 1 reply; 7+ messages in thread

From: Juan Quintela @ 2023-04-26 20:10 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Peter Xu, Leonardo Bras, Richard Henderson

In this case we use qatomic operations instead of the Stat64 wrapper
because there is no stat64_set().  Defining the 64-bit wrapper is
trivial.  The one without atomics is more interesting.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
Don't use __nocheck() variants
---
 migration/migration.c | 6 ++++--
 migration/ram.c       | 5 +++--
 migration/ram.h       | 2 +-
 3 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 22e8586623..712f802962 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1005,7 +1005,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
 
     if (s->state != MIGRATION_STATUS_COMPLETED) {
         info->ram->remaining = ram_bytes_remaining();
-        info->ram->dirty_pages_rate = ram_counters.dirty_pages_rate;
+        info->ram->dirty_pages_rate =
+            qatomic_read(&ram_counters.dirty_pages_rate);
     }
 }
 
@@ -2751,7 +2752,8 @@ static void migration_update_counters(MigrationState *s,
      * if we haven't sent anything, we don't want to
      * recalculate. 10000 is a small enough number for our purposes
      */
-    if (ram_counters.dirty_pages_rate && transferred > 10000) {
+    if (qatomic_read(&ram_counters.dirty_pages_rate) &&
+        transferred > 10000) {
         s->expected_downtime = ram_counters.remaining / bandwidth;
     }
 
diff --git a/migration/ram.c b/migration/ram.c
index 01356f60a4..7c534a41e0 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1129,8 +1129,9 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
     double compressed_size;
 
     /* calculate period counters */
-    ram_counters.dirty_pages_rate = rs->num_dirty_pages_period * 1000
-        / (end_time - rs->time_last_bitmap_sync);
+    qatomic_set(&ram_counters.dirty_pages_rate,
+                rs->num_dirty_pages_period * 1000 /
+                (end_time - rs->time_last_bitmap_sync));
 
     if (!page_count) {
         return;
diff --git a/migration/ram.h b/migration/ram.h
index a6e0d70226..3db0a9d65c 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -41,7 +41,7 @@
  * one thread).
  */
 typedef struct {
-    int64_t dirty_pages_rate;
+    aligned_uint64_t dirty_pages_rate;
     Stat64 dirty_sync_count;
     Stat64 dirty_sync_missed_zero_copy;
     Stat64 downtime_bytes;
-- 
2.40.0

^ permalink raw reply related	[flat|nested] 7+ messages in thread
* Re: [PATCH v5 1/2] migration: Make dirty_pages_rate atomic
  2023-04-26 20:10 ` [PATCH v5 1/2] migration: Make dirty_pages_rate atomic Juan Quintela
@ 2023-04-27  8:17   ` Paolo Bonzini
  0 siblings, 0 replies; 7+ messages in thread

From: Paolo Bonzini @ 2023-04-27 8:17 UTC (permalink / raw)
  To: Juan Quintela, qemu-devel; +Cc: Peter Xu, Leonardo Bras, Richard Henderson

On 4/26/23 22:10, Juan Quintela wrote:
> In this case we use qatomic operations instead of Stat64 wrapper
> because there is no stat64_set().  Defining the 64 bit wrapper is
> trivial.  The one without atomics is more interesting.

This does not work if CONFIG_ATOMIC64 is not defined.

I actually have stat64_set in a patchset that I still have to submit,
so you can reuse it:

diff --git a/include/qemu/stats64.h b/include/qemu/stats64.h
index 802402254b6f..99b5cb724a89 100644
--- a/include/qemu/stats64.h
+++ b/include/qemu/stats64.h
@@ -40,6 +40,11 @@ static inline uint64_t stat64_get(const Stat64 *s)
     return qatomic_read__nocheck(&s->value);
 }
 
+static inline void stat64_set(Stat64 *s, uint64_t value)
+{
+    qatomic_set__nocheck(&s->value, value);
+}
+
 static inline void stat64_add(Stat64 *s, uint64_t value)
 {
     qatomic_add(&s->value, value);
@@ -62,6 +67,7 @@ static inline void stat64_max(Stat64 *s,
 }
 #else
 uint64_t stat64_get(const Stat64 *s);
+void stat64_set(Stat64 *s, uint64_t value);
 bool stat64_min_slow(Stat64 *s, uint64_t value);
 bool stat64_max_slow(Stat64 *s, uint64_t value);
 bool stat64_add32_carry(Stat64 *s, uint32_t low, uint32_t high);
diff --git a/util/stats64.c b/util/stats64.c
index 897613c94965..efa04d5759f0 100644
--- a/util/stats64.c
+++ b/util/stats64.c
@@ -57,6 +57,17 @@ uint64_t stat64_get(const Stat64 *s)
     return ((uint64_t)high << 32) | low;
 }
 
+void stat64_set(Stat64 *s, uint64_t val)
+{
+    while (!stat64_wrtrylock(s)) {
+        cpu_relax();
+    }
+
+    qatomic_set(&s->high, val >> 32);
+    qatomic_set(&s->low, val);
+    stat64_wrunlock(s);
+}
+
 bool stat64_add32_carry(Stat64 *s, uint32_t low, uint32_t high)
 {
     uint32_t old;

Paolo

^ permalink raw reply related	[flat|nested] 7+ messages in thread
* [PATCH v5 2/2] migration: Make dirty_bytes_last_sync atomic
  2023-04-26 20:10 [PATCH v5 0/2] Migration: Make more ram_counters atomic Juan Quintela
  2023-04-26 20:10 ` [PATCH v5 1/2] migration: Make dirty_pages_rate atomic Juan Quintela
@ 2023-04-26 20:10 ` Juan Quintela
  2023-04-27  8:19   ` Paolo Bonzini
  2023-04-27  8:41 ` [PATCH v5 0/2] Migration: Make more ram_counters atomic Juan Quintela
  2 siblings, 1 reply; 7+ messages in thread

From: Juan Quintela @ 2023-04-26 20:10 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Peter Xu, Leonardo Bras, Richard Henderson

As we set its value, it needs to be operated on with atomics.  We
rename it from "remaining" to better reflect its meaning.

Statistics always return the real remaining bytes.  This field was
used to store how many bytes were dirty on the previous generation, so
we can calculate the expected downtime as:

    dirty_bytes_last_sync / current_bandwidth

If we used the actual remaining bytes, we would see a very small value
at the end of the iteration.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
I am open to using ram_bytes_remaining() in its only use and being
more "optimistic" about the downtime.

Don't use __nocheck() functions.
---
 migration/migration.c | 3 ++-
 migration/ram.c       | 2 +-
 migration/ram.h       | 2 +-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 712f802962..c81c65bf28 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2754,7 +2754,8 @@ static void migration_update_counters(MigrationState *s,
      */
     if (qatomic_read(&ram_counters.dirty_pages_rate) &&
         transferred > 10000) {
-        s->expected_downtime = ram_counters.remaining / bandwidth;
+        s->expected_downtime =
+            qatomic_read(&ram_counters.dirty_bytes_last_sync) / bandwidth;
     }
 
     qemu_file_reset_rate_limit(s->to_dst_file);
diff --git a/migration/ram.c b/migration/ram.c
index 7c534a41e0..704df661d1 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1223,7 +1223,7 @@ static void migration_bitmap_sync(RAMState *rs)
         RAMBLOCK_FOREACH_NOT_IGNORED(block) {
             ramblock_sync_dirty_bitmap(rs, block);
         }
-        ram_counters.remaining = ram_bytes_remaining();
+        qatomic_set(&ram_counters.dirty_bytes_last_sync, ram_bytes_remaining());
     }
     qemu_mutex_unlock(&rs->bitmap_mutex);
diff --git a/migration/ram.h b/migration/ram.h
index 3db0a9d65c..11a0fde99b 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -41,6 +41,7 @@
  * one thread).
  */
 typedef struct {
+    aligned_uint64_t dirty_bytes_last_sync;
     aligned_uint64_t dirty_pages_rate;
     Stat64 dirty_sync_count;
     Stat64 dirty_sync_missed_zero_copy;
@@ -51,7 +52,6 @@ typedef struct {
     Stat64 postcopy_bytes;
     Stat64 postcopy_requests;
     Stat64 precopy_bytes;
-    int64_t remaining;
     Stat64 transferred;
 } RAMStats;
-- 
2.40.0

^ permalink raw reply related	[flat|nested] 7+ messages in thread
* Re: [PATCH v5 2/2] migration: Make dirty_bytes_last_sync atomic
  2023-04-26 20:10 ` [PATCH v5 2/2] migration: Make dirty_bytes_last_sync atomic Juan Quintela
@ 2023-04-27  8:19   ` Paolo Bonzini
  2023-04-27  8:40     ` Juan Quintela
  0 siblings, 1 reply; 7+ messages in thread

From: Paolo Bonzini @ 2023-04-27 8:19 UTC (permalink / raw)
  To: Juan Quintela, qemu-devel; +Cc: Peter Xu, Leonardo Bras, Richard Henderson

On 4/26/23 22:10, Juan Quintela wrote:
> Don't use __nocheck() functions.

Doesn't this break on 32-bit platforms?

#if defined(__x86_64__) || defined(__sparc__) || defined(__mips64)
# define ATOMIC_REG_SIZE 8
#else
# define ATOMIC_REG_SIZE sizeof(void *)
#endif

#define qatomic_set(ptr, i) do {                             \
    qemu_build_assert(sizeof(*ptr) <= ATOMIC_REG_SIZE);      \
    qatomic_set__nocheck(ptr, i);                            \
} while(0)

So if sizeof(void *) == 4 it would trigger a compile-time assertion.

Paolo

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH v5 2/2] migration: Make dirty_bytes_last_sync atomic
  2023-04-27  8:19 ` Paolo Bonzini
@ 2023-04-27  8:40   ` Juan Quintela
  0 siblings, 0 replies; 7+ messages in thread

From: Juan Quintela @ 2023-04-27 8:40 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel, Peter Xu, Leonardo Bras, Richard Henderson

Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 4/26/23 22:10, Juan Quintela wrote:
>> Don't use __nocheck() functions.
>
> Doesn't this break on 32-bit platforms?
>
> #if defined(__x86_64__) || defined(__sparc__) || defined(__mips64)
> # define ATOMIC_REG_SIZE 8
> #else
> # define ATOMIC_REG_SIZE sizeof(void *)
> #endif
>
> #define qatomic_set(ptr, i) do {                             \
>     qemu_build_assert(sizeof(*ptr) <= ATOMIC_REG_SIZE);      \
>     qatomic_set__nocheck(ptr, i);                            \
> } while(0)
>
> So if sizeof(void *) == 4 it would trigger a compile-time assertion.
>
> Paolo

Yeap.  Really I was waiting for the stat64_set() that you have just
sent; I will do it on top of that.

And yes, I already broke the build with that patch.

Later, Juan.

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH v5 0/2] Migration: Make more ram_counters atomic
  2023-04-26 20:10 [PATCH v5 0/2] Migration: Make more ram_counters atomic Juan Quintela
  2023-04-26 20:10 ` [PATCH v5 1/2] migration: Make dirty_pages_rate atomic Juan Quintela
  2023-04-26 20:10 ` [PATCH v5 2/2] migration: Make dirty_bytes_last_sync atomic Juan Quintela
@ 2023-04-27  8:41 ` Juan Quintela
  2 siblings, 0 replies; 7+ messages in thread

From: Juan Quintela @ 2023-04-27 8:41 UTC (permalink / raw)
  To: qemu-devel; +Cc: Peter Xu, Leonardo Bras, Richard Henderson

Juan Quintela <quintela@redhat.com> wrote:

self-NACK

Working on top of Paolo's stat64_set() function.

> Hi
>
> In this v5:
>
> Not only change the type of the counters, but also use the __nocheck()
> variants of the functions.
>
> Please, review.
>
> [v4]
> - Change aligned_uint64_t to size_t to make (some) 32-bit hosts happy.
>
> Please review.
>
> [v3]
> - Addressed reviews.
> - All counters are now atomic, either Stat64 or atomic.
> - Rename duplicated to zero_pages.
> - Rename normal to normal_pages.
>
> Please review.
>
> [v2]
> - Fix typos found by David Edmondson.
> - Add Reviewed-by tags.
>
> Please review.
>
> [v1]
> In the previous series we created ram_atomic_counters.  But we
> basically need all counters to be atomic.  So move back to having only
> ram_counters, just with a new type that allows atomic counters.
>
> Once there, move the update of stats out of the RAM mutex.
> And make multifd_bytes atomic.
>
> Later, Juan.
>
> Juan Quintela (2):
>   migration: Make dirty_pages_rate atomic
>   migration: Make dirty_bytes_last_sync atomic
>
>  migration/migration.c | 9 ++++++---
>  migration/ram.c       | 7 ++++---
>  migration/ram.h       | 4 ++--
>  3 files changed, 12 insertions(+), 8 deletions(-)

^ permalink raw reply	[flat|nested] 7+ messages in thread
end of thread, other threads:[~2023-04-27  8:41 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-26 20:10 [PATCH v5 0/2] Migration: Make more ram_counters atomic Juan Quintela
2023-04-26 20:10 ` [PATCH v5 1/2] migration: Make dirty_pages_rate atomic Juan Quintela
2023-04-27  8:17   ` Paolo Bonzini
2023-04-26 20:10 ` [PATCH v5 2/2] migration: Make dirty_bytes_last_sync atomic Juan Quintela
2023-04-27  8:19   ` Paolo Bonzini
2023-04-27  8:40     ` Juan Quintela
2023-04-27  8:41 ` [PATCH v5 0/2] Migration: Make more ram_counters atomic Juan Quintela