* [PATCH v4 0/2] Migration: Make more ram_counters atomic
@ 2023-04-24 13:55 Juan Quintela
From: Juan Quintela @ 2023-04-24 13:55 UTC (permalink / raw)
To: qemu-devel; +Cc: Juan Quintela, Peter Xu, Leonardo Bras
Hi
In this v4:
- Change aligned_uint64_t to size_t to make (some) 32bit hosts happy.
Please review.
[v3]
- Addressed reviews
- All counters are now atomic, either Stat64 or atomic.
- Rename duplicated to zero_pages
- Rename normal to normal_pages
Please review.
[v2]
- fix typos found by David Edmondson
- Add review-by tags.
Please review.
[v1]
On the previous series we created ram_atomic_counters. But we basically
need all counters to be atomic. So move back to only having
ram_counters, just with a new type that allows the atomic counters.
Once there, move update of stats out of RAM mutex.
And make multifd_bytes atomic.
Later, Juan.
Juan Quintela (2):
migration: Make dirty_pages_rate atomic
migration: Make dirty_bytes_last_sync atomic
migration/migration.c | 10 +++++++---
migration/ram.c | 8 +++++---
migration/ram.h | 4 ++--
3 files changed, 14 insertions(+), 8 deletions(-)
--
2.39.2
* [PATCH v4 1/2] migration: Make dirty_pages_rate atomic
From: Juan Quintela @ 2023-04-24 13:55 UTC (permalink / raw)
To: qemu-devel; +Cc: Juan Quintela, Peter Xu, Leonardo Bras
In this case we use qatomic operations instead of the Stat64 wrapper
because there is no stat64_set(). Defining the 64 bit wrapper is
trivial. The one for hosts without 64 bit atomics is more interesting.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
Change aligned_uint64_t to size_t to make (some) 32bit hosts happy.
---
migration/migration.c | 6 ++++--
migration/ram.c | 5 +++--
migration/ram.h | 2 +-
3 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 53dd59f6f6..4cded4e0ae 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1014,7 +1014,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
if (s->state != MIGRATION_STATUS_COMPLETED) {
info->ram->remaining = ram_bytes_remaining();
- info->ram->dirty_pages_rate = ram_counters.dirty_pages_rate;
+ info->ram->dirty_pages_rate =
+ qatomic_read__nocheck(&ram_counters.dirty_pages_rate);
}
}
@@ -3180,7 +3181,8 @@ static void migration_update_counters(MigrationState *s,
* if we haven't sent anything, we don't want to
* recalculate. 10000 is a small enough number for our purposes
*/
- if (ram_counters.dirty_pages_rate && transferred > 10000) {
+ if (qatomic_read__nocheck(&ram_counters.dirty_pages_rate) &&
+ transferred > 10000) {
s->expected_downtime = ram_counters.remaining / bandwidth;
}
diff --git a/migration/ram.c b/migration/ram.c
index 01356f60a4..e6f14023e4 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1129,8 +1129,9 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
double compressed_size;
/* calculate period counters */
- ram_counters.dirty_pages_rate = rs->num_dirty_pages_period * 1000
- / (end_time - rs->time_last_bitmap_sync);
+ qatomic_set__nocheck(&ram_counters.dirty_pages_rate,
+ rs->num_dirty_pages_period * 1000 /
+ (end_time - rs->time_last_bitmap_sync));
if (!page_count) {
return;
diff --git a/migration/ram.h b/migration/ram.h
index a6e0d70226..d7f534162d 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -41,7 +41,7 @@
* one thread).
*/
typedef struct {
- int64_t dirty_pages_rate;
+ size_t dirty_pages_rate;
Stat64 dirty_sync_count;
Stat64 dirty_sync_missed_zero_copy;
Stat64 downtime_bytes;
--
2.39.2
* [PATCH v4 2/2] migration: Make dirty_bytes_last_sync atomic
From: Juan Quintela @ 2023-04-24 13:55 UTC (permalink / raw)
To: qemu-devel; +Cc: Juan Quintela, Peter Xu, Leonardo Bras
As we set its value, it needs to be operated on with atomics.
We rename it from remaining to better reflect its meaning.
Statistics always return the real remaining bytes. This field was used
to store how many bytes were dirty at the previous sync, so we can
calculate the expected downtime as: dirty_bytes_last_sync /
current_bandwidth.
If we used the actual remaining bytes, we would see a very small value
at the end of the iteration.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
I am open to use ram_bytes_remaining() in its only use and be more
"optimistic" about the downtime.
Change aligned_uint64_t to size_t to make (some) 32bit hosts happy.
---
migration/migration.c | 4 +++-
migration/ram.c | 3 ++-
migration/ram.h | 2 +-
3 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 4cded4e0ae..1dd848015d 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3183,7 +3183,9 @@ static void migration_update_counters(MigrationState *s,
*/
if (qatomic_read__nocheck(&ram_counters.dirty_pages_rate) &&
transferred > 10000) {
- s->expected_downtime = ram_counters.remaining / bandwidth;
+ s->expected_downtime =
+ qatomic_read__nocheck(&ram_counters.dirty_bytes_last_sync) /
+ bandwidth;
}
qemu_file_reset_rate_limit(s->to_dst_file);
diff --git a/migration/ram.c b/migration/ram.c
index e6f14023e4..3862ec0853 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1223,7 +1223,8 @@ static void migration_bitmap_sync(RAMState *rs)
RAMBLOCK_FOREACH_NOT_IGNORED(block) {
ramblock_sync_dirty_bitmap(rs, block);
}
- ram_counters.remaining = ram_bytes_remaining();
+ qatomic_set__nocheck(&ram_counters.dirty_bytes_last_sync,
+ ram_bytes_remaining());
}
qemu_mutex_unlock(&rs->bitmap_mutex);
diff --git a/migration/ram.h b/migration/ram.h
index d7f534162d..801732ff13 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -41,6 +41,7 @@
* one thread).
*/
typedef struct {
+ size_t dirty_bytes_last_sync;
size_t dirty_pages_rate;
Stat64 dirty_sync_count;
Stat64 dirty_sync_missed_zero_copy;
@@ -51,7 +52,6 @@ typedef struct {
Stat64 postcopy_bytes;
Stat64 postcopy_requests;
Stat64 precopy_bytes;
- int64_t remaining;
Stat64 transferred;
} RAMStats;
--
2.39.2
* Re: [PATCH v4 0/2] Migration: Make more ram_counters atomic
From: Juan Quintela @ 2023-04-26 19:57 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Leonardo Bras
Juan Quintela <quintela@redhat.com> wrote:
> Hi
>
> In this v4:
> - Change aligned_uint64_t to size_t to make (some) 32bit hosts happy.
>
> Please review.
self-NACK
Still missing the removal of the __nocheck() functions.
>
> [v3]
> - Addressed reviews
> - All counters are now atomic, either Stat64 or atomic.
> - Rename duplicated to zero_pages
> - Rename normal to normal_pages
>
> Please review.
>
> [v2]
> - fix typos found by David Edmondson
> - Add review-by tags.
>
> Please review.
>
> [v1]
> On the previous series we created ram_atomic_counters. But we basically
> need all counters to be atomic. So move back to only having
> ram_counters, just with a new type that allows the atomic counters.
>
> Once there, move update of stats out of RAM mutex.
> And make multifd_bytes atomic.
>
> Later, Juan.
>
> Juan Quintela (2):
> migration: Make dirty_pages_rate atomic
> migration: Make dirty_bytes_last_sync atomic
>
> migration/migration.c | 10 +++++++---
> migration/ram.c | 8 +++++---
> migration/ram.h | 4 ++--
> 3 files changed, 14 insertions(+), 8 deletions(-)