* [PATCH v3 01/12] migration: Merge ram_counters and ram_atomic_counters
2023-04-19 16:24 [PATCH v3 00/12] Migration: Make more ram_counters atomic Juan Quintela
@ 2023-04-19 16:24 ` Juan Quintela
2023-04-19 16:24 ` [PATCH v3 02/12] migration: Update atomic stats out of the mutex Juan Quintela
` (11 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Juan Quintela @ 2023-04-19 16:24 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Juan Quintela, Leonardo Bras
Using MigrationStats as the type for ram_counters meant that we didn't
have to re-declare each value in another struct. The need for atomic
counters made us create MigrationAtomicStats for those atomic
counters.
Create a RAMStats type, which is a merge of MigrationStats and
MigrationAtomicStats with the unused members removed.
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
Fix typos found by David Edmondson
---
migration/ram.h | 28 +++++++++++++++-------------
migration/migration.c | 8 ++++----
migration/multifd.c | 4 ++--
migration/ram.c | 39 ++++++++++++++++-----------------------
4 files changed, 37 insertions(+), 42 deletions(-)
diff --git a/migration/ram.h b/migration/ram.h
index 81cbb0947c..7c026b5242 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -35,25 +35,27 @@
#include "qemu/stats64.h"
/*
- * These are the migration statistic counters that need to be updated using
- * atomic ops (can be accessed by more than one thread). Here since we
- * cannot modify MigrationStats directly to use Stat64 as it was defined in
- * the QAPI scheme, we define an internal structure to hold them, and we
- * propagate the real values when QMP queries happen.
- *
- * IOW, the corresponding fields within ram_counters on these specific
- * fields will be always zero and not being used at all; they're just
- * placeholders to make it QAPI-compatible.
+ * These are the ram migration statistic counters. It is loosely
+ * based on MigrationStats. We change to Stat64 any counter that
+ * needs to be updated using atomic ops (can be accessed by more than
+ * one thread).
*/
typedef struct {
- Stat64 transferred;
+ int64_t dirty_pages_rate;
+ int64_t dirty_sync_count;
+ uint64_t dirty_sync_missed_zero_copy;
+ uint64_t downtime_bytes;
Stat64 duplicate;
+ uint64_t multifd_bytes;
Stat64 normal;
Stat64 postcopy_bytes;
-} MigrationAtomicStats;
+ int64_t postcopy_requests;
+ uint64_t precopy_bytes;
+ int64_t remaining;
+ Stat64 transferred;
+} RAMStats;
-extern MigrationAtomicStats ram_atomic_counters;
-extern MigrationStats ram_counters;
+extern RAMStats ram_counters;
extern XBZRLECacheStats xbzrle_counters;
extern CompressionStats compression_counters;
diff --git a/migration/migration.c b/migration/migration.c
index bda4789193..10483f3cab 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1140,12 +1140,12 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
size_t page_size = qemu_target_page_size();
info->ram = g_malloc0(sizeof(*info->ram));
- info->ram->transferred = stat64_get(&ram_atomic_counters.transferred);
+ info->ram->transferred = stat64_get(&ram_counters.transferred);
info->ram->total = ram_bytes_total();
- info->ram->duplicate = stat64_get(&ram_atomic_counters.duplicate);
+ info->ram->duplicate = stat64_get(&ram_counters.duplicate);
/* legacy value. It is not used anymore */
info->ram->skipped = 0;
- info->ram->normal = stat64_get(&ram_atomic_counters.normal);
+ info->ram->normal = stat64_get(&ram_counters.normal);
info->ram->normal_bytes = info->ram->normal * page_size;
info->ram->mbps = s->mbps;
info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
@@ -1157,7 +1157,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
info->ram->pages_per_second = s->pages_per_second;
info->ram->precopy_bytes = ram_counters.precopy_bytes;
info->ram->downtime_bytes = ram_counters.downtime_bytes;
- info->ram->postcopy_bytes = stat64_get(&ram_atomic_counters.postcopy_bytes);
+ info->ram->postcopy_bytes = stat64_get(&ram_counters.postcopy_bytes);
if (migrate_use_xbzrle()) {
info->xbzrle_cache = g_malloc0(sizeof(*info->xbzrle_cache));
diff --git a/migration/multifd.c b/migration/multifd.c
index cbc0dfe39b..01fab01a92 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -433,7 +433,7 @@ static int multifd_send_pages(QEMUFile *f)
transferred = ((uint64_t) pages->num) * p->page_size + p->packet_len;
qemu_file_acct_rate_limit(f, transferred);
ram_counters.multifd_bytes += transferred;
- stat64_add(&ram_atomic_counters.transferred, transferred);
+ stat64_add(&ram_counters.transferred, transferred);
qemu_mutex_unlock(&p->mutex);
qemu_sem_post(&p->sem);
@@ -628,7 +628,7 @@ int multifd_send_sync_main(QEMUFile *f)
p->pending_job++;
qemu_file_acct_rate_limit(f, p->packet_len);
ram_counters.multifd_bytes += p->packet_len;
- stat64_add(&ram_atomic_counters.transferred, p->packet_len);
+ stat64_add(&ram_counters.transferred, p->packet_len);
qemu_mutex_unlock(&p->mutex);
qemu_sem_post(&p->sem);
}
diff --git a/migration/ram.c b/migration/ram.c
index 79d881f735..95ba5ea0c5 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -458,25 +458,18 @@ uint64_t ram_bytes_remaining(void)
0;
}
-/*
- * NOTE: not all stats in ram_counters are used in reality. See comments
- * for struct MigrationAtomicStats. The ultimate result of ram migration
- * counters will be a merged version with both ram_counters and the atomic
- * fields in ram_atomic_counters.
- */
-MigrationStats ram_counters;
-MigrationAtomicStats ram_atomic_counters;
+RAMStats ram_counters;
void ram_transferred_add(uint64_t bytes)
{
if (runstate_is_running()) {
ram_counters.precopy_bytes += bytes;
} else if (migration_in_postcopy()) {
- stat64_add(&ram_atomic_counters.postcopy_bytes, bytes);
+ stat64_add(&ram_counters.postcopy_bytes, bytes);
} else {
ram_counters.downtime_bytes += bytes;
}
- stat64_add(&ram_atomic_counters.transferred, bytes);
+ stat64_add(&ram_counters.transferred, bytes);
}
void dirty_sync_missed_zero_copy(void)
@@ -756,7 +749,7 @@ void mig_throttle_counter_reset(void)
rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
rs->num_dirty_pages_period = 0;
- rs->bytes_xfer_prev = stat64_get(&ram_atomic_counters.transferred);
+ rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
}
/**
@@ -1130,8 +1123,8 @@ uint64_t ram_pagesize_summary(void)
uint64_t ram_get_total_transferred_pages(void)
{
- return stat64_get(&ram_atomic_counters.normal) +
- stat64_get(&ram_atomic_counters.duplicate) +
+ return stat64_get(&ram_counters.normal) +
+ stat64_get(&ram_counters.duplicate) +
compression_counters.pages + xbzrle_counters.pages;
}
@@ -1192,7 +1185,7 @@ static void migration_trigger_throttle(RAMState *rs)
MigrationState *s = migrate_get_current();
uint64_t threshold = s->parameters.throttle_trigger_threshold;
uint64_t bytes_xfer_period =
- stat64_get(&ram_atomic_counters.transferred) - rs->bytes_xfer_prev;
+ stat64_get(&ram_counters.transferred) - rs->bytes_xfer_prev;
uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE;
uint64_t bytes_dirty_threshold = bytes_xfer_period * threshold / 100;
@@ -1255,7 +1248,7 @@ static void migration_bitmap_sync(RAMState *rs)
/* reset period counters */
rs->time_last_bitmap_sync = end_time;
rs->num_dirty_pages_period = 0;
- rs->bytes_xfer_prev = stat64_get(&ram_atomic_counters.transferred);
+ rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
}
if (migrate_use_events()) {
qapi_event_send_migration_pass(ram_counters.dirty_sync_count);
@@ -1331,7 +1324,7 @@ static int save_zero_page(PageSearchStatus *pss, QEMUFile *f, RAMBlock *block,
int len = save_zero_page_to_file(pss, f, block, offset);
if (len) {
- stat64_add(&ram_atomic_counters.duplicate, 1);
+ stat64_add(&ram_counters.duplicate, 1);
ram_transferred_add(len);
return 1;
}
@@ -1368,9 +1361,9 @@ static bool control_save_page(PageSearchStatus *pss, RAMBlock *block,
}
if (bytes_xmit > 0) {
- stat64_add(&ram_atomic_counters.normal, 1);
+ stat64_add(&ram_counters.normal, 1);
} else if (bytes_xmit == 0) {
- stat64_add(&ram_atomic_counters.duplicate, 1);
+ stat64_add(&ram_counters.duplicate, 1);
}
return true;
@@ -1402,7 +1395,7 @@ static int save_normal_page(PageSearchStatus *pss, RAMBlock *block,
qemu_put_buffer(file, buf, TARGET_PAGE_SIZE);
}
ram_transferred_add(TARGET_PAGE_SIZE);
- stat64_add(&ram_atomic_counters.normal, 1);
+ stat64_add(&ram_counters.normal, 1);
return 1;
}
@@ -1458,7 +1451,7 @@ static int ram_save_multifd_page(QEMUFile *file, RAMBlock *block,
if (multifd_queue_page(file, block, offset) < 0) {
return -1;
}
- stat64_add(&ram_atomic_counters.normal, 1);
+ stat64_add(&ram_counters.normal, 1);
return 1;
}
@@ -1497,7 +1490,7 @@ update_compress_thread_counts(const CompressParam *param, int bytes_xmit)
ram_transferred_add(bytes_xmit);
if (param->zero_page) {
- stat64_add(&ram_atomic_counters.duplicate, 1);
+ stat64_add(&ram_counters.duplicate, 1);
return;
}
@@ -2632,9 +2625,9 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
uint64_t pages = size / TARGET_PAGE_SIZE;
if (zero) {
- stat64_add(&ram_atomic_counters.duplicate, pages);
+ stat64_add(&ram_counters.duplicate, pages);
} else {
- stat64_add(&ram_atomic_counters.normal, pages);
+ stat64_add(&ram_counters.normal, pages);
ram_transferred_add(size);
qemu_file_credit_transfer(f, size);
}
--
2.39.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH v3 02/12] migration: Update atomic stats out of the mutex
2023-04-19 16:24 [PATCH v3 00/12] Migration: Make more ram_counters atomic Juan Quintela
2023-04-19 16:24 ` [PATCH v3 01/12] migration: Merge ram_counters and ram_atomic_counters Juan Quintela
@ 2023-04-19 16:24 ` Juan Quintela
2023-04-19 16:24 ` [PATCH v3 03/12] migration: Make multifd_bytes atomic Juan Quintela
` (10 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Juan Quintela @ 2023-04-19 16:24 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Juan Quintela, Leonardo Bras, David Edmondson
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/multifd.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index 01fab01a92..6ef3a27938 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -433,8 +433,8 @@ static int multifd_send_pages(QEMUFile *f)
transferred = ((uint64_t) pages->num) * p->page_size + p->packet_len;
qemu_file_acct_rate_limit(f, transferred);
ram_counters.multifd_bytes += transferred;
+ qemu_mutex_unlock(&p->mutex);
stat64_add(&ram_counters.transferred, transferred);
- qemu_mutex_unlock(&p->mutex);
qemu_sem_post(&p->sem);
return 1;
@@ -628,8 +628,8 @@ int multifd_send_sync_main(QEMUFile *f)
p->pending_job++;
qemu_file_acct_rate_limit(f, p->packet_len);
ram_counters.multifd_bytes += p->packet_len;
+ qemu_mutex_unlock(&p->mutex);
stat64_add(&ram_counters.transferred, p->packet_len);
- qemu_mutex_unlock(&p->mutex);
qemu_sem_post(&p->sem);
}
for (i = 0; i < migrate_multifd_channels(); i++) {
--
2.39.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH v3 03/12] migration: Make multifd_bytes atomic
2023-04-19 16:24 [PATCH v3 00/12] Migration: Make more ram_counters atomic Juan Quintela
2023-04-19 16:24 ` [PATCH v3 01/12] migration: Merge ram_counters and ram_atomic_counters Juan Quintela
2023-04-19 16:24 ` [PATCH v3 02/12] migration: Update atomic stats out of the mutex Juan Quintela
@ 2023-04-19 16:24 ` Juan Quintela
2023-04-19 16:24 ` [PATCH v3 04/12] migration: Make dirty_sync_missed_zero_copy atomic Juan Quintela
` (9 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Juan Quintela @ 2023-04-19 16:24 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Juan Quintela, Leonardo Bras, David Edmondson
In the spirit of:
commit 394d323bc3451e4d07f13341cb8817fac8dfbadd
Author: Peter Xu <peterx@redhat.com>
Date: Tue Oct 11 17:55:51 2022 -0400
migration: Use atomic ops properly for page accountings
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.h | 2 +-
migration/migration.c | 4 ++--
migration/multifd.c | 4 ++--
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/migration/ram.h b/migration/ram.h
index 7c026b5242..ed70391317 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -46,7 +46,7 @@ typedef struct {
uint64_t dirty_sync_missed_zero_copy;
uint64_t downtime_bytes;
Stat64 duplicate;
- uint64_t multifd_bytes;
+ Stat64 multifd_bytes;
Stat64 normal;
Stat64 postcopy_bytes;
int64_t postcopy_requests;
diff --git a/migration/migration.c b/migration/migration.c
index 10483f3cab..c3debe71f6 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1153,7 +1153,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
ram_counters.dirty_sync_missed_zero_copy;
info->ram->postcopy_requests = ram_counters.postcopy_requests;
info->ram->page_size = page_size;
- info->ram->multifd_bytes = ram_counters.multifd_bytes;
+ info->ram->multifd_bytes = stat64_get(&ram_counters.multifd_bytes);
info->ram->pages_per_second = s->pages_per_second;
info->ram->precopy_bytes = ram_counters.precopy_bytes;
info->ram->downtime_bytes = ram_counters.downtime_bytes;
@@ -3778,7 +3778,7 @@ static MigThrError migration_detect_error(MigrationState *s)
static uint64_t migration_total_bytes(MigrationState *s)
{
return qemu_file_total_transferred(s->to_dst_file) +
- ram_counters.multifd_bytes;
+ stat64_get(&ram_counters.multifd_bytes);
}
static void migration_calculate_complete(MigrationState *s)
diff --git a/migration/multifd.c b/migration/multifd.c
index 6ef3a27938..1c992abf53 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -432,9 +432,9 @@ static int multifd_send_pages(QEMUFile *f)
p->pages = pages;
transferred = ((uint64_t) pages->num) * p->page_size + p->packet_len;
qemu_file_acct_rate_limit(f, transferred);
- ram_counters.multifd_bytes += transferred;
qemu_mutex_unlock(&p->mutex);
stat64_add(&ram_counters.transferred, transferred);
+ stat64_add(&ram_counters.multifd_bytes, transferred);
qemu_sem_post(&p->sem);
return 1;
@@ -627,9 +627,9 @@ int multifd_send_sync_main(QEMUFile *f)
p->flags |= MULTIFD_FLAG_SYNC;
p->pending_job++;
qemu_file_acct_rate_limit(f, p->packet_len);
- ram_counters.multifd_bytes += p->packet_len;
qemu_mutex_unlock(&p->mutex);
stat64_add(&ram_counters.transferred, p->packet_len);
+ stat64_add(&ram_counters.multifd_bytes, p->packet_len);
qemu_sem_post(&p->sem);
}
for (i = 0; i < migrate_multifd_channels(); i++) {
--
2.39.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH v3 04/12] migration: Make dirty_sync_missed_zero_copy atomic
2023-04-19 16:24 [PATCH v3 00/12] Migration: Make more ram_counters atomic Juan Quintela
` (2 preceding siblings ...)
2023-04-19 16:24 ` [PATCH v3 03/12] migration: Make multifd_bytes atomic Juan Quintela
@ 2023-04-19 16:24 ` Juan Quintela
2023-04-19 16:24 ` [PATCH v3 05/12] migration: Make precopy_bytes atomic Juan Quintela
` (8 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Juan Quintela @ 2023-04-19 16:24 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Juan Quintela, Leonardo Bras
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.h | 4 +---
migration/migration.c | 2 +-
migration/multifd.c | 2 +-
migration/ram.c | 5 -----
4 files changed, 3 insertions(+), 10 deletions(-)
diff --git a/migration/ram.h b/migration/ram.h
index ed70391317..2170c55e67 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -43,7 +43,7 @@
typedef struct {
int64_t dirty_pages_rate;
int64_t dirty_sync_count;
- uint64_t dirty_sync_missed_zero_copy;
+ Stat64 dirty_sync_missed_zero_copy;
uint64_t downtime_bytes;
Stat64 duplicate;
Stat64 multifd_bytes;
@@ -114,6 +114,4 @@ void ram_write_tracking_prepare(void);
int ram_write_tracking_start(void);
void ram_write_tracking_stop(void);
-void dirty_sync_missed_zero_copy(void);
-
#endif
diff --git a/migration/migration.c b/migration/migration.c
index c3debe71f6..66e5197b77 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1150,7 +1150,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
info->ram->mbps = s->mbps;
info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
info->ram->dirty_sync_missed_zero_copy =
- ram_counters.dirty_sync_missed_zero_copy;
+ stat64_get(&ram_counters.dirty_sync_missed_zero_copy);
info->ram->postcopy_requests = ram_counters.postcopy_requests;
info->ram->page_size = page_size;
info->ram->multifd_bytes = stat64_get(&ram_counters.multifd_bytes);
diff --git a/migration/multifd.c b/migration/multifd.c
index 1c992abf53..903df2117b 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -576,7 +576,7 @@ static int multifd_zero_copy_flush(QIOChannel *c)
return -1;
}
if (ret == 1) {
- dirty_sync_missed_zero_copy();
+ stat64_add(&ram_counters.dirty_sync_missed_zero_copy, 1);
}
return ret;
diff --git a/migration/ram.c b/migration/ram.c
index 95ba5ea0c5..72e3d78589 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -472,11 +472,6 @@ void ram_transferred_add(uint64_t bytes)
stat64_add(&ram_counters.transferred, bytes);
}
-void dirty_sync_missed_zero_copy(void)
-{
- ram_counters.dirty_sync_missed_zero_copy++;
-}
-
struct MigrationOps {
int (*ram_save_target_page)(RAMState *rs, PageSearchStatus *pss);
};
--
2.39.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH v3 05/12] migration: Make precopy_bytes atomic
2023-04-19 16:24 [PATCH v3 00/12] Migration: Make more ram_counters atomic Juan Quintela
` (3 preceding siblings ...)
2023-04-19 16:24 ` [PATCH v3 04/12] migration: Make dirty_sync_missed_zero_copy atomic Juan Quintela
@ 2023-04-19 16:24 ` Juan Quintela
2023-04-19 16:24 ` [PATCH v3 06/12] migration: Make downtime_bytes atomic Juan Quintela
` (7 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Juan Quintela @ 2023-04-19 16:24 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Juan Quintela, Leonardo Bras
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.h | 2 +-
migration/migration.c | 2 +-
migration/ram.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/migration/ram.h b/migration/ram.h
index 2170c55e67..a766b895fa 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -50,7 +50,7 @@ typedef struct {
Stat64 normal;
Stat64 postcopy_bytes;
int64_t postcopy_requests;
- uint64_t precopy_bytes;
+ Stat64 precopy_bytes;
int64_t remaining;
Stat64 transferred;
} RAMStats;
diff --git a/migration/migration.c b/migration/migration.c
index 66e5197b77..cbd6f6f235 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1155,7 +1155,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
info->ram->page_size = page_size;
info->ram->multifd_bytes = stat64_get(&ram_counters.multifd_bytes);
info->ram->pages_per_second = s->pages_per_second;
- info->ram->precopy_bytes = ram_counters.precopy_bytes;
+ info->ram->precopy_bytes = stat64_get(&ram_counters.precopy_bytes);
info->ram->downtime_bytes = ram_counters.downtime_bytes;
info->ram->postcopy_bytes = stat64_get(&ram_counters.postcopy_bytes);
diff --git a/migration/ram.c b/migration/ram.c
index 72e3d78589..14529fe928 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -463,7 +463,7 @@ RAMStats ram_counters;
void ram_transferred_add(uint64_t bytes)
{
if (runstate_is_running()) {
- ram_counters.precopy_bytes += bytes;
+ stat64_add(&ram_counters.precopy_bytes, bytes);
} else if (migration_in_postcopy()) {
stat64_add(&ram_counters.postcopy_bytes, bytes);
} else {
--
2.39.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH v3 06/12] migration: Make downtime_bytes atomic
2023-04-19 16:24 [PATCH v3 00/12] Migration: Make more ram_counters atomic Juan Quintela
` (4 preceding siblings ...)
2023-04-19 16:24 ` [PATCH v3 05/12] migration: Make precopy_bytes atomic Juan Quintela
@ 2023-04-19 16:24 ` Juan Quintela
2023-04-19 16:24 ` [PATCH v3 07/12] migration: Make dirty_sync_count atomic Juan Quintela
` (6 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Juan Quintela @ 2023-04-19 16:24 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Juan Quintela, Leonardo Bras
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.h | 2 +-
migration/migration.c | 2 +-
migration/ram.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/migration/ram.h b/migration/ram.h
index a766b895fa..bb52632424 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -44,7 +44,7 @@ typedef struct {
int64_t dirty_pages_rate;
int64_t dirty_sync_count;
Stat64 dirty_sync_missed_zero_copy;
- uint64_t downtime_bytes;
+ Stat64 downtime_bytes;
Stat64 duplicate;
Stat64 multifd_bytes;
Stat64 normal;
diff --git a/migration/migration.c b/migration/migration.c
index cbd6f6f235..4ca2173d85 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1156,7 +1156,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
info->ram->multifd_bytes = stat64_get(&ram_counters.multifd_bytes);
info->ram->pages_per_second = s->pages_per_second;
info->ram->precopy_bytes = stat64_get(&ram_counters.precopy_bytes);
- info->ram->downtime_bytes = ram_counters.downtime_bytes;
+ info->ram->downtime_bytes = stat64_get(&ram_counters.downtime_bytes);
info->ram->postcopy_bytes = stat64_get(&ram_counters.postcopy_bytes);
if (migrate_use_xbzrle()) {
diff --git a/migration/ram.c b/migration/ram.c
index 14529fe928..35abbcda02 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -467,7 +467,7 @@ void ram_transferred_add(uint64_t bytes)
} else if (migration_in_postcopy()) {
stat64_add(&ram_counters.postcopy_bytes, bytes);
} else {
- ram_counters.downtime_bytes += bytes;
+ stat64_add(&ram_counters.downtime_bytes, bytes);
}
stat64_add(&ram_counters.transferred, bytes);
}
--
2.39.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH v3 07/12] migration: Make dirty_sync_count atomic
2023-04-19 16:24 [PATCH v3 00/12] Migration: Make more ram_counters atomic Juan Quintela
` (5 preceding siblings ...)
2023-04-19 16:24 ` [PATCH v3 06/12] migration: Make downtime_bytes atomic Juan Quintela
@ 2023-04-19 16:24 ` Juan Quintela
2023-04-19 16:24 ` [PATCH v3 08/12] migration: Make postcopy_requests atomic Juan Quintela
` (5 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Juan Quintela @ 2023-04-19 16:24 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Juan Quintela, Leonardo Bras
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.h | 2 +-
migration/migration.c | 3 ++-
migration/ram.c | 13 +++++++------
3 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/migration/ram.h b/migration/ram.h
index bb52632424..8c0d07c43a 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -42,7 +42,7 @@
*/
typedef struct {
int64_t dirty_pages_rate;
- int64_t dirty_sync_count;
+ Stat64 dirty_sync_count;
Stat64 dirty_sync_missed_zero_copy;
Stat64 downtime_bytes;
Stat64 duplicate;
diff --git a/migration/migration.c b/migration/migration.c
index 4ca2173d85..97c227aa85 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1148,7 +1148,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
info->ram->normal = stat64_get(&ram_counters.normal);
info->ram->normal_bytes = info->ram->normal * page_size;
info->ram->mbps = s->mbps;
- info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
+ info->ram->dirty_sync_count =
+ stat64_get(&ram_counters.dirty_sync_count);
info->ram->dirty_sync_missed_zero_copy =
stat64_get(&ram_counters.dirty_sync_missed_zero_copy);
info->ram->postcopy_requests = ram_counters.postcopy_requests;
diff --git a/migration/ram.c b/migration/ram.c
index 35abbcda02..b391546020 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -764,7 +764,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
/* We don't care if this fails to allocate a new cache page
* as long as it updated an old one */
cache_insert(XBZRLE.cache, current_addr, XBZRLE.zero_target_page,
- ram_counters.dirty_sync_count);
+ stat64_get(&ram_counters.dirty_sync_count));
}
#define ENCODING_FLAG_XBZRLE 0x1
@@ -790,13 +790,13 @@ static int save_xbzrle_page(RAMState *rs, PageSearchStatus *pss,
int encoded_len = 0, bytes_xbzrle;
uint8_t *prev_cached_page;
QEMUFile *file = pss->pss_channel;
+ uint64_t generation = stat64_get(&ram_counters.dirty_sync_count);
- if (!cache_is_cached(XBZRLE.cache, current_addr,
- ram_counters.dirty_sync_count)) {
+ if (!cache_is_cached(XBZRLE.cache, current_addr, generation)) {
xbzrle_counters.cache_miss++;
if (!rs->last_stage) {
if (cache_insert(XBZRLE.cache, current_addr, *current_data,
- ram_counters.dirty_sync_count) == -1) {
+ generation) == -1) {
return -1;
} else {
/* update *current_data when the page has been
@@ -1209,7 +1209,7 @@ static void migration_bitmap_sync(RAMState *rs)
RAMBlock *block;
int64_t end_time;
- ram_counters.dirty_sync_count++;
+ stat64_add(&ram_counters.dirty_sync_count, 1);
if (!rs->time_last_bitmap_sync) {
rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
@@ -1246,7 +1246,8 @@ static void migration_bitmap_sync(RAMState *rs)
rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
}
if (migrate_use_events()) {
- qapi_event_send_migration_pass(ram_counters.dirty_sync_count);
+ uint64_t generation = stat64_get(&ram_counters.dirty_sync_count);
+ qapi_event_send_migration_pass(generation);
}
}
--
2.39.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH v3 08/12] migration: Make postcopy_requests atomic
2023-04-19 16:24 [PATCH v3 00/12] Migration: Make more ram_counters atomic Juan Quintela
` (6 preceding siblings ...)
2023-04-19 16:24 ` [PATCH v3 07/12] migration: Make dirty_sync_count atomic Juan Quintela
@ 2023-04-19 16:24 ` Juan Quintela
2023-04-19 16:24 ` [PATCH v3 09/12] migration: Make dirty_pages_rate atomic Juan Quintela
` (4 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Juan Quintela @ 2023-04-19 16:24 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Juan Quintela, Leonardo Bras
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.h | 2 +-
migration/migration.c | 3 ++-
migration/ram.c | 2 +-
3 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/migration/ram.h b/migration/ram.h
index 8c0d07c43a..afa68521d7 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -49,7 +49,7 @@ typedef struct {
Stat64 multifd_bytes;
Stat64 normal;
Stat64 postcopy_bytes;
- int64_t postcopy_requests;
+ Stat64 postcopy_requests;
Stat64 precopy_bytes;
int64_t remaining;
Stat64 transferred;
diff --git a/migration/migration.c b/migration/migration.c
index 97c227aa85..09b37a6603 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1152,7 +1152,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
stat64_get(&ram_counters.dirty_sync_count);
info->ram->dirty_sync_missed_zero_copy =
stat64_get(&ram_counters.dirty_sync_missed_zero_copy);
- info->ram->postcopy_requests = ram_counters.postcopy_requests;
+ info->ram->postcopy_requests =
+ stat64_get(&ram_counters.postcopy_requests);
info->ram->page_size = page_size;
info->ram->multifd_bytes = stat64_get(&ram_counters.multifd_bytes);
info->ram->pages_per_second = s->pages_per_second;
diff --git a/migration/ram.c b/migration/ram.c
index b391546020..b4aa07118a 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2169,7 +2169,7 @@ int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
RAMBlock *ramblock;
RAMState *rs = ram_state;
- ram_counters.postcopy_requests++;
+ stat64_add(&ram_counters.postcopy_requests, 1);
RCU_READ_LOCK_GUARD();
if (!rbname) {
--
2.39.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH v3 09/12] migration: Make dirty_pages_rate atomic
2023-04-19 16:24 [PATCH v3 00/12] Migration: Make more ram_counters atomic Juan Quintela
` (7 preceding siblings ...)
2023-04-19 16:24 ` [PATCH v3 08/12] migration: Make postcopy_requests atomic Juan Quintela
@ 2023-04-19 16:24 ` Juan Quintela
2023-04-19 16:24 ` [PATCH v3 10/12] migration: Make dirty_bytes_last_sync atomic Juan Quintela
` (3 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Juan Quintela @ 2023-04-19 16:24 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Juan Quintela, Leonardo Bras
In this case we use qatomic operations instead of the Stat64 wrapper
because there is no stat64_set(). Defining the 64-bit wrapper is
trivial. The one without atomics is more interesting.
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.h | 2 +-
migration/migration.c | 6 ++++--
migration/ram.c | 5 +++--
3 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/migration/ram.h b/migration/ram.h
index afa68521d7..574a604b72 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -41,7 +41,7 @@
* one thread).
*/
typedef struct {
- int64_t dirty_pages_rate;
+ aligned_uint64_t dirty_pages_rate;
Stat64 dirty_sync_count;
Stat64 dirty_sync_missed_zero_copy;
Stat64 downtime_bytes;
diff --git a/migration/migration.c b/migration/migration.c
index 09b37a6603..50eae2fbcd 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1190,7 +1190,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
if (s->state != MIGRATION_STATUS_COMPLETED) {
info->ram->remaining = ram_bytes_remaining();
- info->ram->dirty_pages_rate = ram_counters.dirty_pages_rate;
+ info->ram->dirty_pages_rate =
+ qatomic_read__nocheck(&ram_counters.dirty_pages_rate);
}
}
@@ -3844,7 +3845,8 @@ static void migration_update_counters(MigrationState *s,
* if we haven't sent anything, we don't want to
* recalculate. 10000 is a small enough number for our purposes
*/
- if (ram_counters.dirty_pages_rate && transferred > 10000) {
+ if (qatomic_read__nocheck(&ram_counters.dirty_pages_rate) &&
+ transferred > 10000) {
s->expected_downtime = ram_counters.remaining / bandwidth;
}
diff --git a/migration/ram.c b/migration/ram.c
index b4aa07118a..55fabec1fe 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1129,8 +1129,9 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
double compressed_size;
/* calculate period counters */
- ram_counters.dirty_pages_rate = rs->num_dirty_pages_period * 1000
- / (end_time - rs->time_last_bitmap_sync);
+ qatomic_set__nocheck(&ram_counters.dirty_pages_rate,
+ rs->num_dirty_pages_period * 1000 /
+ (end_time - rs->time_last_bitmap_sync));
if (!page_count) {
return;
--
2.39.2
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH v3 10/12] migration: Make dirty_bytes_last_sync atomic
2023-04-19 16:24 [PATCH v3 00/12] Migration: Make more ram_counters atomic Juan Quintela
` (8 preceding siblings ...)
2023-04-19 16:24 ` [PATCH v3 09/12] migration: Make dirty_pages_rate atomic Juan Quintela
@ 2023-04-19 16:24 ` Juan Quintela
2023-04-19 16:24 ` [PATCH v3 11/12] migration: Rename duplicate to zero_pages Juan Quintela
` (2 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Juan Quintela @ 2023-04-19 16:24 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Juan Quintela, Leonardo Bras
As we set its value, it needs to be operated on with atomics.
We rename it from 'remaining' to better reflect its meaning.
Statistics should always return the real remaining bytes. This field
was used to store how much was dirty at the previous generation, so we
can calculate the expected downtime as: dirty_bytes_last_sync /
current_bandwidth.
If we used the actual remaining bytes, we would see a very small value
at the end of the iteration.
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
I am open to using ram_bytes_remaining() at its only use site and being
more "optimistic" about the downtime.
---
migration/ram.h | 2 +-
migration/migration.c | 4 +++-
migration/ram.c | 3 ++-
3 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/migration/ram.h b/migration/ram.h
index 574a604b72..8093ebc210 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -41,6 +41,7 @@
* one thread).
*/
typedef struct {
+ aligned_uint64_t dirty_bytes_last_sync;
aligned_uint64_t dirty_pages_rate;
Stat64 dirty_sync_count;
Stat64 dirty_sync_missed_zero_copy;
@@ -51,7 +52,6 @@ typedef struct {
Stat64 postcopy_bytes;
Stat64 postcopy_requests;
Stat64 precopy_bytes;
- int64_t remaining;
Stat64 transferred;
} RAMStats;
diff --git a/migration/migration.c b/migration/migration.c
index 50eae2fbcd..83d3bfbf62 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3847,7 +3847,9 @@ static void migration_update_counters(MigrationState *s,
*/
if (qatomic_read__nocheck(&ram_counters.dirty_pages_rate) &&
transferred > 10000) {
- s->expected_downtime = ram_counters.remaining / bandwidth;
+ s->expected_downtime =
+ qatomic_read__nocheck(&ram_counters.dirty_bytes_last_sync) /
+ bandwidth;
}
qemu_file_reset_rate_limit(s->to_dst_file);
diff --git a/migration/ram.c b/migration/ram.c
index 55fabec1fe..771596d377 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1224,7 +1224,8 @@ static void migration_bitmap_sync(RAMState *rs)
RAMBLOCK_FOREACH_NOT_IGNORED(block) {
ramblock_sync_dirty_bitmap(rs, block);
}
- ram_counters.remaining = ram_bytes_remaining();
+ qatomic_set__nocheck(&ram_counters.dirty_bytes_last_sync,
+ ram_bytes_remaining());
}
qemu_mutex_unlock(&rs->bitmap_mutex);
--
2.39.2
* [PATCH v3 11/12] migration: Rename duplicate to zero_pages
2023-04-19 16:24 [PATCH v3 00/12] Migration: Make more ram_counters atomic Juan Quintela
` (9 preceding siblings ...)
2023-04-19 16:24 ` [PATCH v3 10/12] migration: Make dirty_bytes_last_sync atomic Juan Quintela
@ 2023-04-19 16:24 ` Juan Quintela
2023-04-19 16:24 ` [PATCH v3 12/12] migration: Rename normal to full_pages Juan Quintela
2023-04-19 19:29 ` [PATCH v3 00/12] Migration: Make more ram_counters atomic Peter Xu
12 siblings, 0 replies; 15+ messages in thread
From: Juan Quintela @ 2023-04-19 16:24 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Juan Quintela, Leonardo Bras
The rest of the counters that refer to pages have a _pages suffix.
Historically, this counter showed the number of pages composed of the
same character, hence the name "duplicate". But for years now it has
counted only zero pages.
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.h | 2 +-
migration/migration.c | 2 +-
migration/ram.c | 10 +++++-----
3 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/migration/ram.h b/migration/ram.h
index 8093ebc210..b27ce01f2e 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -46,7 +46,7 @@ typedef struct {
Stat64 dirty_sync_count;
Stat64 dirty_sync_missed_zero_copy;
Stat64 downtime_bytes;
- Stat64 duplicate;
+ Stat64 zero_pages;
Stat64 multifd_bytes;
Stat64 normal;
Stat64 postcopy_bytes;
diff --git a/migration/migration.c b/migration/migration.c
index 83d3bfbf62..20ef5b683b 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1142,7 +1142,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
info->ram = g_malloc0(sizeof(*info->ram));
info->ram->transferred = stat64_get(&ram_counters.transferred);
info->ram->total = ram_bytes_total();
- info->ram->duplicate = stat64_get(&ram_counters.duplicate);
+ info->ram->duplicate = stat64_get(&ram_counters.zero_pages);
/* legacy value. It is not used anymore */
info->ram->skipped = 0;
info->ram->normal = stat64_get(&ram_counters.normal);
diff --git a/migration/ram.c b/migration/ram.c
index 771596d377..34126f0274 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1119,7 +1119,7 @@ uint64_t ram_pagesize_summary(void)
uint64_t ram_get_total_transferred_pages(void)
{
return stat64_get(&ram_counters.normal) +
- stat64_get(&ram_counters.duplicate) +
+ stat64_get(&ram_counters.zero_pages) +
compression_counters.pages + xbzrle_counters.pages;
}
@@ -1322,7 +1322,7 @@ static int save_zero_page(PageSearchStatus *pss, QEMUFile *f, RAMBlock *block,
int len = save_zero_page_to_file(pss, f, block, offset);
if (len) {
- stat64_add(&ram_counters.duplicate, 1);
+ stat64_add(&ram_counters.zero_pages, 1);
ram_transferred_add(len);
return 1;
}
@@ -1361,7 +1361,7 @@ static bool control_save_page(PageSearchStatus *pss, RAMBlock *block,
if (bytes_xmit > 0) {
stat64_add(&ram_counters.normal, 1);
} else if (bytes_xmit == 0) {
- stat64_add(&ram_counters.duplicate, 1);
+ stat64_add(&ram_counters.zero_pages, 1);
}
return true;
@@ -1488,7 +1488,7 @@ update_compress_thread_counts(const CompressParam *param, int bytes_xmit)
ram_transferred_add(bytes_xmit);
if (param->zero_page) {
- stat64_add(&ram_counters.duplicate, 1);
+ stat64_add(&ram_counters.zero_pages, 1);
return;
}
@@ -2623,7 +2623,7 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
uint64_t pages = size / TARGET_PAGE_SIZE;
if (zero) {
- stat64_add(&ram_counters.duplicate, pages);
+ stat64_add(&ram_counters.zero_pages, pages);
} else {
stat64_add(&ram_counters.normal, pages);
ram_transferred_add(size);
--
2.39.2
* [PATCH v3 12/12] migration: Rename normal to full_pages
2023-04-19 16:24 [PATCH v3 00/12] Migration: Make more ram_counters atomic Juan Quintela
` (10 preceding siblings ...)
2023-04-19 16:24 ` [PATCH v3 11/12] migration: Rename duplicate to zero_pages Juan Quintela
@ 2023-04-19 16:24 ` Juan Quintela
2023-04-19 19:29 ` [PATCH v3 00/12] Migration: Make more ram_counters atomic Peter Xu
12 siblings, 0 replies; 15+ messages in thread
From: Juan Quintela @ 2023-04-19 16:24 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Juan Quintela, Leonardo Bras
The rest of the counters that refer to pages have a _pages suffix.
Historically, this counter showed the number of full pages
transferred. The name "normal" referred to the fact that they were
sent without any optimization (compression, xbzrle, zero page, ...).
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.h | 2 +-
migration/migration.c | 2 +-
migration/ram.c | 10 +++++-----
3 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/migration/ram.h b/migration/ram.h
index b27ce01f2e..421673aa26 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -48,7 +48,7 @@ typedef struct {
Stat64 downtime_bytes;
Stat64 zero_pages;
Stat64 multifd_bytes;
- Stat64 normal;
+ Stat64 full_pages;
Stat64 postcopy_bytes;
Stat64 postcopy_requests;
Stat64 precopy_bytes;
diff --git a/migration/migration.c b/migration/migration.c
index 20ef5b683b..26c61ece55 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1145,7 +1145,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
info->ram->duplicate = stat64_get(&ram_counters.zero_pages);
/* legacy value. It is not used anymore */
info->ram->skipped = 0;
- info->ram->normal = stat64_get(&ram_counters.normal);
+ info->ram->normal = stat64_get(&ram_counters.full_pages);
info->ram->normal_bytes = info->ram->normal * page_size;
info->ram->mbps = s->mbps;
info->ram->dirty_sync_count =
diff --git a/migration/ram.c b/migration/ram.c
index 34126f0274..09ed5cdf27 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1118,7 +1118,7 @@ uint64_t ram_pagesize_summary(void)
uint64_t ram_get_total_transferred_pages(void)
{
- return stat64_get(&ram_counters.normal) +
+ return stat64_get(&ram_counters.full_pages) +
stat64_get(&ram_counters.zero_pages) +
compression_counters.pages + xbzrle_counters.pages;
}
@@ -1359,7 +1359,7 @@ static bool control_save_page(PageSearchStatus *pss, RAMBlock *block,
}
if (bytes_xmit > 0) {
- stat64_add(&ram_counters.normal, 1);
+ stat64_add(&ram_counters.full_pages, 1);
} else if (bytes_xmit == 0) {
stat64_add(&ram_counters.zero_pages, 1);
}
@@ -1393,7 +1393,7 @@ static int save_normal_page(PageSearchStatus *pss, RAMBlock *block,
qemu_put_buffer(file, buf, TARGET_PAGE_SIZE);
}
ram_transferred_add(TARGET_PAGE_SIZE);
- stat64_add(&ram_counters.normal, 1);
+ stat64_add(&ram_counters.full_pages, 1);
return 1;
}
@@ -1449,7 +1449,7 @@ static int ram_save_multifd_page(QEMUFile *file, RAMBlock *block,
if (multifd_queue_page(file, block, offset) < 0) {
return -1;
}
- stat64_add(&ram_counters.normal, 1);
+ stat64_add(&ram_counters.full_pages, 1);
return 1;
}
@@ -2625,7 +2625,7 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
if (zero) {
stat64_add(&ram_counters.zero_pages, pages);
} else {
- stat64_add(&ram_counters.normal, pages);
+ stat64_add(&ram_counters.full_pages, pages);
ram_transferred_add(size);
qemu_file_credit_transfer(f, size);
}
--
2.39.2
* Re: [PATCH v3 00/12] Migration: Make more ram_counters atomic
2023-04-19 16:24 [PATCH v3 00/12] Migration: Make more ram_counters atomic Juan Quintela
` (11 preceding siblings ...)
2023-04-19 16:24 ` [PATCH v3 12/12] migration: Rename normal to full_pages Juan Quintela
@ 2023-04-19 19:29 ` Peter Xu
2023-04-19 19:54 ` Juan Quintela
12 siblings, 1 reply; 15+ messages in thread
From: Peter Xu @ 2023-04-19 19:29 UTC (permalink / raw)
To: Juan Quintela; +Cc: qemu-devel, Leonardo Bras
On Wed, Apr 19, 2023 at 06:24:03PM +0200, Juan Quintela wrote:
> Juan Quintela (12):
> migration: Merge ram_counters and ram_atomic_counters
> migration: Update atomic stats out of the mutex
> migration: Make multifd_bytes atomic
> migration: Make dirty_sync_missed_zero_copy atomic
> migration: Make precopy_bytes atomic
> migration: Make downtime_bytes atomic
> migration: Make dirty_sync_count atomic
> migration: Make postcopy_requests atomic
> migration: Make dirty_pages_rate atomic
> migration: Make dirty_bytes_last_sync atomic
> migration: Rename duplicate to zero_pages
> migration: Rename normal to full_pages
Reviewed-by: Peter Xu <peterx@redhat.com>
One trivial comment on the last patch: full_pages is slightly
confusing to me, probably because "normal" matches the code
(save_normal_page()), while "full" makes me think of small/huge
pages, where a "full" page would contrast with a "partial" page.
I'd think "normal_pages" could be slightly better? No strong
opinions, though.
--
Peter Xu
* Re: [PATCH v3 00/12] Migration: Make more ram_counters atomic
2023-04-19 19:29 ` [PATCH v3 00/12] Migration: Make more ram_counters atomic Peter Xu
@ 2023-04-19 19:54 ` Juan Quintela
0 siblings, 0 replies; 15+ messages in thread
From: Juan Quintela @ 2023-04-19 19:54 UTC (permalink / raw)
To: Peter Xu; +Cc: qemu-devel, Leonardo Bras
Peter Xu <peterx@redhat.com> wrote:
> On Wed, Apr 19, 2023 at 06:24:03PM +0200, Juan Quintela wrote:
>> Juan Quintela (12):
>> migration: Merge ram_counters and ram_atomic_counters
>> migration: Update atomic stats out of the mutex
>> migration: Make multifd_bytes atomic
>> migration: Make dirty_sync_missed_zero_copy atomic
>> migration: Make precopy_bytes atomic
>> migration: Make downtime_bytes atomic
>> migration: Make dirty_sync_count atomic
>> migration: Make postcopy_requests atomic
>> migration: Make dirty_pages_rate atomic
>> migration: Make dirty_bytes_last_sync atomic
>> migration: Rename duplicate to zero_pages
>> migration: Rename normal to full_pages
>
> Reviewed-by: Peter Xu <peterx@redhat.com>
>
> One trivial comment on last patch: full_pages is slightly confusing to me,
> probably because "normal" matches with the code (save_normal_page()),
> meanwhile "full" makes me think of small/huge page where it can be a
> huge/full page (comparing to a "partial" page).
>
> I'd think "normal_pages" could be slightly better? No strong opinions
> though.
Ok, I will move it back to normal.
I think this comes from Spanish: if some pages are normal, the others
are non-normal, and that sounds really weird.
In this case "full" was used to mean that we are sending the full
page, but I can see that it is confusing that a non-full page would
be a partial page.
Later, Juan.