* [PULL 00/20] Migration 20230420 patches
@ 2023-04-20 13:17 Juan Quintela
  2023-04-20 13:17 ` [PULL 01/20] migration: remove extra whitespace character for code style Juan Quintela
                   ` (20 more replies)
  0 siblings, 21 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini

The following changes since commit 2d82c32b2ceaca3dc3da5e36e10976f34bfcb598:

  Open 8.1 development tree (2023-04-20 10:05:25 +0100)

are available in the Git repository at:

  https://gitlab.com/juan.quintela/qemu.git tags/migration-20230420-pull-request

for you to fetch changes up to cdf07846e6fe07a2e20c93eed5902114dc1d3dcf:

  migration: Pass migrate_caps_check() the old and new caps (2023-04-20 15:10:58 +0200)

----------------------------------------------------------------
Migration Pull request

This series includes everything reviewed for migration:

- fix for disk stop/start (eric)
- detect filesystem of hostmem (peter)
- rename qatomic_mb_read (paolo)
- whitespace cleanup (李皆俊)
  I hope copy and paste work for the name O:-)
- atomic_counters series (juan)
- first two patches of capabilities (juan)

Please apply,

----------------------------------------------------------------

Eric Blake (1):
  migration: Handle block device inactivation failures better

Juan Quintela (14):
  migration: Merge ram_counters and ram_atomic_counters
  migration: Update atomic stats out of the mutex
  migration: Make multifd_bytes atomic
  migration: Make dirty_sync_missed_zero_copy atomic
  migration: Make precopy_bytes atomic
  migration: Make downtime_bytes atomic
  migration: Make dirty_sync_count atomic
  migration: Make postcopy_requests atomic
  migration: Make dirty_pages_rate atomic
  migration: Make dirty_bytes_last_sync atomic
  migration: Rename duplicate to zero_pages
  migration: Rename normal to normal_pages
  migration: rename enabled_capabilities to capabilities
  migration: Pass migrate_caps_check() the old and new caps

Paolo Bonzini (1):
  postcopy-ram: do not use qatomic_mb_read

Peter Xu (3):
  util/mmap-alloc: qemu_fd_getfs()
  vl.c: Create late backends before migration object
  migration/postcopy: Detect file system on dest host

李皆俊 (1):
  migration: remove extra whitespace character for code style

 include/qemu/mmap-alloc.h |   7 ++
 migration/migration.c     | 167 +++++++++++++++++---------------------
 migration/migration.h     |   2 +-
 migration/multifd.c       |  10 +--
 migration/postcopy-ram.c  |  36 ++++++--
 migration/ram.c           |  73 ++++++++---------
 migration/ram.h           |  34 ++++----
 migration/rdma.c          |   4 +-
 migration/savevm.c        |   6 +-
 softmmu/vl.c              |   9 +-
 util/mmap-alloc.c         |  28 +++++++
 11 files changed, 209 insertions(+), 167 deletions(-)

-- 
2.39.2




* [PULL 01/20] migration: remove extra whitespace character for code style
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 02/20] postcopy-ram: do not use qatomic_mb_read Juan Quintela
                   ` (19 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, 李皆俊

From: 李皆俊 <a_lijiejun@163.com>

Fix code style.

Signed-off-by: 李皆俊 <a_lijiejun@163.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/migration/ram.c b/migration/ram.c
index 79d881f735..0e68099bf9 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3293,7 +3293,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
 
     migration_ops = g_malloc0(sizeof(MigrationOps));
     migration_ops->ram_save_target_page = ram_save_target_page_legacy;
-    ret =  multifd_send_sync_main(f);
+    ret = multifd_send_sync_main(f);
     if (ret < 0) {
         return ret;
     }
-- 
2.39.2




* [PULL 02/20] postcopy-ram: do not use qatomic_mb_read
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
  2023-04-20 13:17 ` [PULL 01/20] migration: remove extra whitespace character for code style Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 03/20] migration: Merge ram_counters and ram_atomic_counters Juan Quintela
                   ` (18 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini

From: Paolo Bonzini <pbonzini@redhat.com>

It does not even pair with a qatomic_mb_set(), so it is clearer to use
load-acquire in this case; the two are synonyms anyway.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/postcopy-ram.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 93f39f8e06..7d24dac397 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -1500,7 +1500,7 @@ static PostcopyState incoming_postcopy_state;
 
 PostcopyState  postcopy_state_get(void)
 {
-    return qatomic_mb_read(&incoming_postcopy_state);
+    return qatomic_load_acquire(&incoming_postcopy_state);
 }
 
 /* Set the state and return the old state */
-- 
2.39.2




* [PULL 03/20] migration: Merge ram_counters and ram_atomic_counters
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
  2023-04-20 13:17 ` [PULL 01/20] migration: remove extra whitespace character for code style Juan Quintela
  2023-04-20 13:17 ` [PULL 02/20] postcopy-ram: do not use qatomic_mb_read Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 04/20] migration: Update atomic stats out of the mutex Juan Quintela
                   ` (17 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Peter Xu

Using MigrationStats as the type for ram_counters meant that we didn't
have to re-declare each value in another struct.  The need for atomic
counters then made us create MigrationAtomicStats for those counters.

Create a RAMStats type that merges MigrationStats and
MigrationAtomicStats, dropping the unused members.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>

---

Fix typos found by David Edmondson
---
 migration/migration.c |  8 ++++----
 migration/multifd.c   |  4 ++--
 migration/ram.c       | 39 ++++++++++++++++-----------------------
 migration/ram.h       | 28 +++++++++++++++-------------
 4 files changed, 37 insertions(+), 42 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index bda4789193..10483f3cab 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1140,12 +1140,12 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     size_t page_size = qemu_target_page_size();
 
     info->ram = g_malloc0(sizeof(*info->ram));
-    info->ram->transferred = stat64_get(&ram_atomic_counters.transferred);
+    info->ram->transferred = stat64_get(&ram_counters.transferred);
     info->ram->total = ram_bytes_total();
-    info->ram->duplicate = stat64_get(&ram_atomic_counters.duplicate);
+    info->ram->duplicate = stat64_get(&ram_counters.duplicate);
     /* legacy value.  It is not used anymore */
     info->ram->skipped = 0;
-    info->ram->normal = stat64_get(&ram_atomic_counters.normal);
+    info->ram->normal = stat64_get(&ram_counters.normal);
     info->ram->normal_bytes = info->ram->normal * page_size;
     info->ram->mbps = s->mbps;
     info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
@@ -1157,7 +1157,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram->pages_per_second = s->pages_per_second;
     info->ram->precopy_bytes = ram_counters.precopy_bytes;
     info->ram->downtime_bytes = ram_counters.downtime_bytes;
-    info->ram->postcopy_bytes = stat64_get(&ram_atomic_counters.postcopy_bytes);
+    info->ram->postcopy_bytes = stat64_get(&ram_counters.postcopy_bytes);
 
     if (migrate_use_xbzrle()) {
         info->xbzrle_cache = g_malloc0(sizeof(*info->xbzrle_cache));
diff --git a/migration/multifd.c b/migration/multifd.c
index cbc0dfe39b..01fab01a92 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -433,7 +433,7 @@ static int multifd_send_pages(QEMUFile *f)
     transferred = ((uint64_t) pages->num) * p->page_size + p->packet_len;
     qemu_file_acct_rate_limit(f, transferred);
     ram_counters.multifd_bytes += transferred;
-    stat64_add(&ram_atomic_counters.transferred, transferred);
+    stat64_add(&ram_counters.transferred, transferred);
     qemu_mutex_unlock(&p->mutex);
     qemu_sem_post(&p->sem);
 
@@ -628,7 +628,7 @@ int multifd_send_sync_main(QEMUFile *f)
         p->pending_job++;
         qemu_file_acct_rate_limit(f, p->packet_len);
         ram_counters.multifd_bytes += p->packet_len;
-        stat64_add(&ram_atomic_counters.transferred, p->packet_len);
+        stat64_add(&ram_counters.transferred, p->packet_len);
         qemu_mutex_unlock(&p->mutex);
         qemu_sem_post(&p->sem);
     }
diff --git a/migration/ram.c b/migration/ram.c
index 0e68099bf9..71320ed27a 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -458,25 +458,18 @@ uint64_t ram_bytes_remaining(void)
                        0;
 }
 
-/*
- * NOTE: not all stats in ram_counters are used in reality.  See comments
- * for struct MigrationAtomicStats.  The ultimate result of ram migration
- * counters will be a merged version with both ram_counters and the atomic
- * fields in ram_atomic_counters.
- */
-MigrationStats ram_counters;
-MigrationAtomicStats ram_atomic_counters;
+RAMStats ram_counters;
 
 void ram_transferred_add(uint64_t bytes)
 {
     if (runstate_is_running()) {
         ram_counters.precopy_bytes += bytes;
     } else if (migration_in_postcopy()) {
-        stat64_add(&ram_atomic_counters.postcopy_bytes, bytes);
+        stat64_add(&ram_counters.postcopy_bytes, bytes);
     } else {
         ram_counters.downtime_bytes += bytes;
     }
-    stat64_add(&ram_atomic_counters.transferred, bytes);
+    stat64_add(&ram_counters.transferred, bytes);
 }
 
 void dirty_sync_missed_zero_copy(void)
@@ -756,7 +749,7 @@ void mig_throttle_counter_reset(void)
 
     rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
     rs->num_dirty_pages_period = 0;
-    rs->bytes_xfer_prev = stat64_get(&ram_atomic_counters.transferred);
+    rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
 }
 
 /**
@@ -1130,8 +1123,8 @@ uint64_t ram_pagesize_summary(void)
 
 uint64_t ram_get_total_transferred_pages(void)
 {
-    return  stat64_get(&ram_atomic_counters.normal) +
-        stat64_get(&ram_atomic_counters.duplicate) +
+    return stat64_get(&ram_counters.normal) +
+        stat64_get(&ram_counters.duplicate) +
         compression_counters.pages + xbzrle_counters.pages;
 }
 
@@ -1192,7 +1185,7 @@ static void migration_trigger_throttle(RAMState *rs)
     MigrationState *s = migrate_get_current();
     uint64_t threshold = s->parameters.throttle_trigger_threshold;
     uint64_t bytes_xfer_period =
-        stat64_get(&ram_atomic_counters.transferred) - rs->bytes_xfer_prev;
+        stat64_get(&ram_counters.transferred) - rs->bytes_xfer_prev;
     uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE;
     uint64_t bytes_dirty_threshold = bytes_xfer_period * threshold / 100;
 
@@ -1255,7 +1248,7 @@ static void migration_bitmap_sync(RAMState *rs)
         /* reset period counters */
         rs->time_last_bitmap_sync = end_time;
         rs->num_dirty_pages_period = 0;
-        rs->bytes_xfer_prev = stat64_get(&ram_atomic_counters.transferred);
+        rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
     }
     if (migrate_use_events()) {
         qapi_event_send_migration_pass(ram_counters.dirty_sync_count);
@@ -1331,7 +1324,7 @@ static int save_zero_page(PageSearchStatus *pss, QEMUFile *f, RAMBlock *block,
     int len = save_zero_page_to_file(pss, f, block, offset);
 
     if (len) {
-        stat64_add(&ram_atomic_counters.duplicate, 1);
+        stat64_add(&ram_counters.duplicate, 1);
         ram_transferred_add(len);
         return 1;
     }
@@ -1368,9 +1361,9 @@ static bool control_save_page(PageSearchStatus *pss, RAMBlock *block,
     }
 
     if (bytes_xmit > 0) {
-        stat64_add(&ram_atomic_counters.normal, 1);
+        stat64_add(&ram_counters.normal, 1);
     } else if (bytes_xmit == 0) {
-        stat64_add(&ram_atomic_counters.duplicate, 1);
+        stat64_add(&ram_counters.duplicate, 1);
     }
 
     return true;
@@ -1402,7 +1395,7 @@ static int save_normal_page(PageSearchStatus *pss, RAMBlock *block,
         qemu_put_buffer(file, buf, TARGET_PAGE_SIZE);
     }
     ram_transferred_add(TARGET_PAGE_SIZE);
-    stat64_add(&ram_atomic_counters.normal, 1);
+    stat64_add(&ram_counters.normal, 1);
     return 1;
 }
 
@@ -1458,7 +1451,7 @@ static int ram_save_multifd_page(QEMUFile *file, RAMBlock *block,
     if (multifd_queue_page(file, block, offset) < 0) {
         return -1;
     }
-    stat64_add(&ram_atomic_counters.normal, 1);
+    stat64_add(&ram_counters.normal, 1);
 
     return 1;
 }
@@ -1497,7 +1490,7 @@ update_compress_thread_counts(const CompressParam *param, int bytes_xmit)
     ram_transferred_add(bytes_xmit);
 
     if (param->zero_page) {
-        stat64_add(&ram_atomic_counters.duplicate, 1);
+        stat64_add(&ram_counters.duplicate, 1);
         return;
     }
 
@@ -2632,9 +2625,9 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
     uint64_t pages = size / TARGET_PAGE_SIZE;
 
     if (zero) {
-        stat64_add(&ram_atomic_counters.duplicate, pages);
+        stat64_add(&ram_counters.duplicate, pages);
     } else {
-        stat64_add(&ram_atomic_counters.normal, pages);
+        stat64_add(&ram_counters.normal, pages);
         ram_transferred_add(size);
         qemu_file_credit_transfer(f, size);
     }
diff --git a/migration/ram.h b/migration/ram.h
index 81cbb0947c..7c026b5242 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -35,25 +35,27 @@
 #include "qemu/stats64.h"
 
 /*
- * These are the migration statistic counters that need to be updated using
- * atomic ops (can be accessed by more than one thread).  Here since we
- * cannot modify MigrationStats directly to use Stat64 as it was defined in
- * the QAPI scheme, we define an internal structure to hold them, and we
- * propagate the real values when QMP queries happen.
- *
- * IOW, the corresponding fields within ram_counters on these specific
- * fields will be always zero and not being used at all; they're just
- * placeholders to make it QAPI-compatible.
+ * These are the ram migration statistic counters.  It is loosely
+ * based on MigrationStats.  We change to Stat64 any counter that
+ * needs to be updated using atomic ops (can be accessed by more than
+ * one thread).
  */
 typedef struct {
-    Stat64 transferred;
+    int64_t dirty_pages_rate;
+    int64_t dirty_sync_count;
+    uint64_t dirty_sync_missed_zero_copy;
+    uint64_t downtime_bytes;
     Stat64 duplicate;
+    uint64_t multifd_bytes;
     Stat64 normal;
     Stat64 postcopy_bytes;
-} MigrationAtomicStats;
+    int64_t postcopy_requests;
+    uint64_t precopy_bytes;
+    int64_t remaining;
+    Stat64 transferred;
+} RAMStats;
 
-extern MigrationAtomicStats ram_atomic_counters;
-extern MigrationStats ram_counters;
+extern RAMStats ram_counters;
 extern XBZRLECacheStats xbzrle_counters;
 extern CompressionStats compression_counters;
 
-- 
2.39.2




* [PULL 04/20] migration: Update atomic stats out of the mutex
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (2 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 03/20] migration: Merge ram_counters and ram_atomic_counters Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 05/20] migration: Make multifd_bytes atomic Juan Quintela
                   ` (16 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, David Edmondson, Peter Xu

Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index 01fab01a92..6ef3a27938 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -433,8 +433,8 @@ static int multifd_send_pages(QEMUFile *f)
     transferred = ((uint64_t) pages->num) * p->page_size + p->packet_len;
     qemu_file_acct_rate_limit(f, transferred);
     ram_counters.multifd_bytes += transferred;
+    qemu_mutex_unlock(&p->mutex);
     stat64_add(&ram_counters.transferred, transferred);
-    qemu_mutex_unlock(&p->mutex);
     qemu_sem_post(&p->sem);
 
     return 1;
@@ -628,8 +628,8 @@ int multifd_send_sync_main(QEMUFile *f)
         p->pending_job++;
         qemu_file_acct_rate_limit(f, p->packet_len);
         ram_counters.multifd_bytes += p->packet_len;
+        qemu_mutex_unlock(&p->mutex);
         stat64_add(&ram_counters.transferred, p->packet_len);
-        qemu_mutex_unlock(&p->mutex);
         qemu_sem_post(&p->sem);
     }
     for (i = 0; i < migrate_multifd_channels(); i++) {
-- 
2.39.2




* [PULL 05/20] migration: Make multifd_bytes atomic
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (3 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 04/20] migration: Update atomic stats out of the mutex Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 06/20] migration: Make dirty_sync_missed_zero_copy atomic Juan Quintela
                   ` (15 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, David Edmondson, Peter Xu

In the spirit of:

commit 394d323bc3451e4d07f13341cb8817fac8dfbadd
Author: Peter Xu <peterx@redhat.com>
Date:   Tue Oct 11 17:55:51 2022 -0400

    migration: Use atomic ops properly for page accountings

Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/migration.c | 4 ++--
 migration/multifd.c   | 4 ++--
 migration/ram.h       | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 10483f3cab..c3debe71f6 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1153,7 +1153,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
             ram_counters.dirty_sync_missed_zero_copy;
     info->ram->postcopy_requests = ram_counters.postcopy_requests;
     info->ram->page_size = page_size;
-    info->ram->multifd_bytes = ram_counters.multifd_bytes;
+    info->ram->multifd_bytes = stat64_get(&ram_counters.multifd_bytes);
     info->ram->pages_per_second = s->pages_per_second;
     info->ram->precopy_bytes = ram_counters.precopy_bytes;
     info->ram->downtime_bytes = ram_counters.downtime_bytes;
@@ -3778,7 +3778,7 @@ static MigThrError migration_detect_error(MigrationState *s)
 static uint64_t migration_total_bytes(MigrationState *s)
 {
     return qemu_file_total_transferred(s->to_dst_file) +
-        ram_counters.multifd_bytes;
+        stat64_get(&ram_counters.multifd_bytes);
 }
 
 static void migration_calculate_complete(MigrationState *s)
diff --git a/migration/multifd.c b/migration/multifd.c
index 6ef3a27938..1c992abf53 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -432,9 +432,9 @@ static int multifd_send_pages(QEMUFile *f)
     p->pages = pages;
     transferred = ((uint64_t) pages->num) * p->page_size + p->packet_len;
     qemu_file_acct_rate_limit(f, transferred);
-    ram_counters.multifd_bytes += transferred;
     qemu_mutex_unlock(&p->mutex);
     stat64_add(&ram_counters.transferred, transferred);
+    stat64_add(&ram_counters.multifd_bytes, transferred);
     qemu_sem_post(&p->sem);
 
     return 1;
@@ -627,9 +627,9 @@ int multifd_send_sync_main(QEMUFile *f)
         p->flags |= MULTIFD_FLAG_SYNC;
         p->pending_job++;
         qemu_file_acct_rate_limit(f, p->packet_len);
-        ram_counters.multifd_bytes += p->packet_len;
         qemu_mutex_unlock(&p->mutex);
         stat64_add(&ram_counters.transferred, p->packet_len);
+        stat64_add(&ram_counters.multifd_bytes, p->packet_len);
         qemu_sem_post(&p->sem);
     }
     for (i = 0; i < migrate_multifd_channels(); i++) {
diff --git a/migration/ram.h b/migration/ram.h
index 7c026b5242..ed70391317 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -46,7 +46,7 @@ typedef struct {
     uint64_t dirty_sync_missed_zero_copy;
     uint64_t downtime_bytes;
     Stat64 duplicate;
-    uint64_t multifd_bytes;
+    Stat64 multifd_bytes;
     Stat64 normal;
     Stat64 postcopy_bytes;
     int64_t postcopy_requests;
-- 
2.39.2




* [PULL 06/20] migration: Make dirty_sync_missed_zero_copy atomic
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (4 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 05/20] migration: Make multifd_bytes atomic Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 07/20] migration: Make precopy_bytes atomic Juan Quintela
                   ` (14 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Peter Xu

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 2 +-
 migration/multifd.c   | 2 +-
 migration/ram.c       | 5 -----
 migration/ram.h       | 4 +---
 4 files changed, 3 insertions(+), 10 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index c3debe71f6..66e5197b77 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1150,7 +1150,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram->mbps = s->mbps;
     info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
     info->ram->dirty_sync_missed_zero_copy =
-            ram_counters.dirty_sync_missed_zero_copy;
+        stat64_get(&ram_counters.dirty_sync_missed_zero_copy);
     info->ram->postcopy_requests = ram_counters.postcopy_requests;
     info->ram->page_size = page_size;
     info->ram->multifd_bytes = stat64_get(&ram_counters.multifd_bytes);
diff --git a/migration/multifd.c b/migration/multifd.c
index 1c992abf53..903df2117b 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -576,7 +576,7 @@ static int multifd_zero_copy_flush(QIOChannel *c)
         return -1;
     }
     if (ret == 1) {
-        dirty_sync_missed_zero_copy();
+        stat64_add(&ram_counters.dirty_sync_missed_zero_copy, 1);
     }
 
     return ret;
diff --git a/migration/ram.c b/migration/ram.c
index 71320ed27a..93e0a48af4 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -472,11 +472,6 @@ void ram_transferred_add(uint64_t bytes)
     stat64_add(&ram_counters.transferred, bytes);
 }
 
-void dirty_sync_missed_zero_copy(void)
-{
-    ram_counters.dirty_sync_missed_zero_copy++;
-}
-
 struct MigrationOps {
     int (*ram_save_target_page)(RAMState *rs, PageSearchStatus *pss);
 };
diff --git a/migration/ram.h b/migration/ram.h
index ed70391317..2170c55e67 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -43,7 +43,7 @@
 typedef struct {
     int64_t dirty_pages_rate;
     int64_t dirty_sync_count;
-    uint64_t dirty_sync_missed_zero_copy;
+    Stat64 dirty_sync_missed_zero_copy;
     uint64_t downtime_bytes;
     Stat64 duplicate;
     Stat64 multifd_bytes;
@@ -114,6 +114,4 @@ void ram_write_tracking_prepare(void);
 int ram_write_tracking_start(void);
 void ram_write_tracking_stop(void);
 
-void dirty_sync_missed_zero_copy(void);
-
 #endif
-- 
2.39.2




* [PULL 07/20] migration: Make precopy_bytes atomic
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (5 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 06/20] migration: Make dirty_sync_missed_zero_copy atomic Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 08/20] migration: Make downtime_bytes atomic Juan Quintela
                   ` (13 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Peter Xu

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 2 +-
 migration/ram.c       | 2 +-
 migration/ram.h       | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 66e5197b77..cbd6f6f235 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1155,7 +1155,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram->page_size = page_size;
     info->ram->multifd_bytes = stat64_get(&ram_counters.multifd_bytes);
     info->ram->pages_per_second = s->pages_per_second;
-    info->ram->precopy_bytes = ram_counters.precopy_bytes;
+    info->ram->precopy_bytes = stat64_get(&ram_counters.precopy_bytes);
     info->ram->downtime_bytes = ram_counters.downtime_bytes;
     info->ram->postcopy_bytes = stat64_get(&ram_counters.postcopy_bytes);
 
diff --git a/migration/ram.c b/migration/ram.c
index 93e0a48af4..0b4693215e 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -463,7 +463,7 @@ RAMStats ram_counters;
 void ram_transferred_add(uint64_t bytes)
 {
     if (runstate_is_running()) {
-        ram_counters.precopy_bytes += bytes;
+        stat64_add(&ram_counters.precopy_bytes, bytes);
     } else if (migration_in_postcopy()) {
         stat64_add(&ram_counters.postcopy_bytes, bytes);
     } else {
diff --git a/migration/ram.h b/migration/ram.h
index 2170c55e67..a766b895fa 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -50,7 +50,7 @@ typedef struct {
     Stat64 normal;
     Stat64 postcopy_bytes;
     int64_t postcopy_requests;
-    uint64_t precopy_bytes;
+    Stat64 precopy_bytes;
     int64_t remaining;
     Stat64 transferred;
 } RAMStats;
-- 
2.39.2




* [PULL 08/20] migration: Make downtime_bytes atomic
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (6 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 07/20] migration: Make precopy_bytes atomic Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 09/20] migration: Make dirty_sync_count atomic Juan Quintela
                   ` (12 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Peter Xu

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 2 +-
 migration/ram.c       | 2 +-
 migration/ram.h       | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index cbd6f6f235..4ca2173d85 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1156,7 +1156,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram->multifd_bytes = stat64_get(&ram_counters.multifd_bytes);
     info->ram->pages_per_second = s->pages_per_second;
     info->ram->precopy_bytes = stat64_get(&ram_counters.precopy_bytes);
-    info->ram->downtime_bytes = ram_counters.downtime_bytes;
+    info->ram->downtime_bytes = stat64_get(&ram_counters.downtime_bytes);
     info->ram->postcopy_bytes = stat64_get(&ram_counters.postcopy_bytes);
 
     if (migrate_use_xbzrle()) {
diff --git a/migration/ram.c b/migration/ram.c
index 0b4693215e..b1722b6071 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -467,7 +467,7 @@ void ram_transferred_add(uint64_t bytes)
     } else if (migration_in_postcopy()) {
         stat64_add(&ram_counters.postcopy_bytes, bytes);
     } else {
-        ram_counters.downtime_bytes += bytes;
+        stat64_add(&ram_counters.downtime_bytes, bytes);
     }
     stat64_add(&ram_counters.transferred, bytes);
 }
diff --git a/migration/ram.h b/migration/ram.h
index a766b895fa..bb52632424 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -44,7 +44,7 @@ typedef struct {
     int64_t dirty_pages_rate;
     int64_t dirty_sync_count;
     Stat64 dirty_sync_missed_zero_copy;
-    uint64_t downtime_bytes;
+    Stat64 downtime_bytes;
     Stat64 duplicate;
     Stat64 multifd_bytes;
     Stat64 normal;
-- 
2.39.2




* [PULL 09/20] migration: Make dirty_sync_count atomic
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (7 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 08/20] migration: Make downtime_bytes atomic Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 10/20] migration: Make postcopy_requests atomic Juan Quintela
                   ` (11 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Peter Xu

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c |  3 ++-
 migration/ram.c       | 13 +++++++------
 migration/ram.h       |  2 +-
 3 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 4ca2173d85..97c227aa85 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1148,7 +1148,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram->normal = stat64_get(&ram_counters.normal);
     info->ram->normal_bytes = info->ram->normal * page_size;
     info->ram->mbps = s->mbps;
-    info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
+    info->ram->dirty_sync_count =
+        stat64_get(&ram_counters.dirty_sync_count);
     info->ram->dirty_sync_missed_zero_copy =
         stat64_get(&ram_counters.dirty_sync_missed_zero_copy);
     info->ram->postcopy_requests = ram_counters.postcopy_requests;
diff --git a/migration/ram.c b/migration/ram.c
index b1722b6071..3c13136559 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -764,7 +764,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
     /* We don't care if this fails to allocate a new cache page
      * as long as it updated an old one */
     cache_insert(XBZRLE.cache, current_addr, XBZRLE.zero_target_page,
-                 ram_counters.dirty_sync_count);
+                 stat64_get(&ram_counters.dirty_sync_count));
 }
 
 #define ENCODING_FLAG_XBZRLE 0x1
@@ -790,13 +790,13 @@ static int save_xbzrle_page(RAMState *rs, PageSearchStatus *pss,
     int encoded_len = 0, bytes_xbzrle;
     uint8_t *prev_cached_page;
     QEMUFile *file = pss->pss_channel;
+    uint64_t generation = stat64_get(&ram_counters.dirty_sync_count);
 
-    if (!cache_is_cached(XBZRLE.cache, current_addr,
-                         ram_counters.dirty_sync_count)) {
+    if (!cache_is_cached(XBZRLE.cache, current_addr, generation)) {
         xbzrle_counters.cache_miss++;
         if (!rs->last_stage) {
             if (cache_insert(XBZRLE.cache, current_addr, *current_data,
-                             ram_counters.dirty_sync_count) == -1) {
+                             generation) == -1) {
                 return -1;
             } else {
                 /* update *current_data when the page has been
@@ -1209,7 +1209,7 @@ static void migration_bitmap_sync(RAMState *rs)
     RAMBlock *block;
     int64_t end_time;
 
-    ram_counters.dirty_sync_count++;
+    stat64_add(&ram_counters.dirty_sync_count, 1);
 
     if (!rs->time_last_bitmap_sync) {
         rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
@@ -1246,7 +1246,8 @@ static void migration_bitmap_sync(RAMState *rs)
         rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
     }
     if (migrate_use_events()) {
-        qapi_event_send_migration_pass(ram_counters.dirty_sync_count);
+        uint64_t generation = stat64_get(&ram_counters.dirty_sync_count);
+        qapi_event_send_migration_pass(generation);
     }
 }
 
diff --git a/migration/ram.h b/migration/ram.h
index bb52632424..8c0d07c43a 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -42,7 +42,7 @@
  */
 typedef struct {
     int64_t dirty_pages_rate;
-    int64_t dirty_sync_count;
+    Stat64 dirty_sync_count;
     Stat64 dirty_sync_missed_zero_copy;
     Stat64 downtime_bytes;
     Stat64 duplicate;
-- 
2.39.2




* [PULL 10/20] migration: Make postcopy_requests atomic
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (8 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 09/20] migration: Make dirty_sync_count atomic Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 11/20] migration: Make dirty_pages_rate atomic Juan Quintela
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Peter Xu

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 3 ++-
 migration/ram.c       | 2 +-
 migration/ram.h       | 2 +-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 97c227aa85..09b37a6603 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1152,7 +1152,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
         stat64_get(&ram_counters.dirty_sync_count);
     info->ram->dirty_sync_missed_zero_copy =
         stat64_get(&ram_counters.dirty_sync_missed_zero_copy);
-    info->ram->postcopy_requests = ram_counters.postcopy_requests;
+    info->ram->postcopy_requests =
+        stat64_get(&ram_counters.postcopy_requests);
     info->ram->page_size = page_size;
     info->ram->multifd_bytes = stat64_get(&ram_counters.multifd_bytes);
     info->ram->pages_per_second = s->pages_per_second;
diff --git a/migration/ram.c b/migration/ram.c
index 3c13136559..fe69ecaef4 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2169,7 +2169,7 @@ int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
     RAMBlock *ramblock;
     RAMState *rs = ram_state;
 
-    ram_counters.postcopy_requests++;
+    stat64_add(&ram_counters.postcopy_requests, 1);
     RCU_READ_LOCK_GUARD();
 
     if (!rbname) {
diff --git a/migration/ram.h b/migration/ram.h
index 8c0d07c43a..afa68521d7 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -49,7 +49,7 @@ typedef struct {
     Stat64 multifd_bytes;
     Stat64 normal;
     Stat64 postcopy_bytes;
-    int64_t postcopy_requests;
+    Stat64 postcopy_requests;
     Stat64 precopy_bytes;
     int64_t remaining;
     Stat64 transferred;
-- 
2.39.2




* [PULL 11/20] migration: Make dirty_pages_rate atomic
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (9 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 10/20] migration: Make postcopy_requests atomic Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 12/20] migration: Make dirty_bytes_last_sync atomic Juan Quintela
                   ` (9 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Peter Xu

In this case we use qatomic operations instead of the Stat64 wrapper
because there is no stat64_set().  Defining the wrapper for hosts with
64-bit atomics would be trivial; the variant for hosts without them is
the more interesting part.
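
For illustration only (not part of this patch): on a host with native
64-bit atomics, a stat64_set() could be as simple as the sketch below.
The Stat64 "value" member and the helper itself are assumptions here;
the lock-based fallback needed on hosts without 64-bit atomics is what
makes the wrapper non-trivial in general.

    /*
     * Hypothetical sketch of stat64_set(), assuming the CONFIG_ATOMIC64
     * layout of Stat64 (a single 64-bit "value" member).  Hosts without
     * 64-bit atomics would need a lock-based variant instead.
     */
    static inline void stat64_set(Stat64 *s, uint64_t value)
    {
        qatomic_set__nocheck(&s->value, value);
    }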

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 6 ++++--
 migration/ram.c       | 5 +++--
 migration/ram.h       | 2 +-
 3 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 09b37a6603..50eae2fbcd 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1190,7 +1190,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
 
     if (s->state != MIGRATION_STATUS_COMPLETED) {
         info->ram->remaining = ram_bytes_remaining();
-        info->ram->dirty_pages_rate = ram_counters.dirty_pages_rate;
+        info->ram->dirty_pages_rate =
+           qatomic_read__nocheck(&ram_counters.dirty_pages_rate);
     }
 }
 
@@ -3844,7 +3845,8 @@ static void migration_update_counters(MigrationState *s,
      * if we haven't sent anything, we don't want to
      * recalculate. 10000 is a small enough number for our purposes
      */
-    if (ram_counters.dirty_pages_rate && transferred > 10000) {
+    if (qatomic_read__nocheck(&ram_counters.dirty_pages_rate) &&
+        transferred > 10000) {
         s->expected_downtime = ram_counters.remaining / bandwidth;
     }
 
diff --git a/migration/ram.c b/migration/ram.c
index fe69ecaef4..7400abf5e1 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1129,8 +1129,9 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
     double compressed_size;
 
     /* calculate period counters */
-    ram_counters.dirty_pages_rate = rs->num_dirty_pages_period * 1000
-                / (end_time - rs->time_last_bitmap_sync);
+    qatomic_set__nocheck(&ram_counters.dirty_pages_rate,
+                         rs->num_dirty_pages_period * 1000 /
+                         (end_time - rs->time_last_bitmap_sync));
 
     if (!page_count) {
         return;
diff --git a/migration/ram.h b/migration/ram.h
index afa68521d7..574a604b72 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -41,7 +41,7 @@
  * one thread).
  */
 typedef struct {
-    int64_t dirty_pages_rate;
+    aligned_uint64_t dirty_pages_rate;
     Stat64 dirty_sync_count;
     Stat64 dirty_sync_missed_zero_copy;
     Stat64 downtime_bytes;
-- 
2.39.2




* [PULL 12/20] migration: Make dirty_bytes_last_sync atomic
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (10 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 11/20] migration: Make dirty_pages_rate atomic Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 13/20] migration: Rename duplicate to zero_pages Juan Quintela
                   ` (8 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Peter Xu

As we set its value, it needs to be operated on with atomics.
We rename it from "remaining" to better reflect its meaning.

Statistics always return the real remaining bytes.  This field was used
to store how many pages were dirty at the previous sync generation, so
we can calculate the expected downtime as: dirty_bytes_last_sync /
current_bandwidth.

If we used the actual remaining bytes, we would see a very small value
at the end of the iteration.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>

---

I am open to using ram_bytes_remaining() at its only call site and
being more "optimistic" about the downtime.
---
 migration/migration.c | 4 +++-
 migration/ram.c       | 3 ++-
 migration/ram.h       | 2 +-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 50eae2fbcd..83d3bfbf62 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3847,7 +3847,9 @@ static void migration_update_counters(MigrationState *s,
      */
     if (qatomic_read__nocheck(&ram_counters.dirty_pages_rate) &&
         transferred > 10000) {
-        s->expected_downtime = ram_counters.remaining / bandwidth;
+        s->expected_downtime =
+            qatomic_read__nocheck(&ram_counters.dirty_bytes_last_sync) /
+                                  bandwidth;
     }
 
     qemu_file_reset_rate_limit(s->to_dst_file);
diff --git a/migration/ram.c b/migration/ram.c
index 7400abf5e1..7bbaf8cd86 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1224,7 +1224,8 @@ static void migration_bitmap_sync(RAMState *rs)
         RAMBLOCK_FOREACH_NOT_IGNORED(block) {
             ramblock_sync_dirty_bitmap(rs, block);
         }
-        ram_counters.remaining = ram_bytes_remaining();
+        qatomic_set__nocheck(&ram_counters.dirty_bytes_last_sync,
+                             ram_bytes_remaining());
     }
     qemu_mutex_unlock(&rs->bitmap_mutex);
 
diff --git a/migration/ram.h b/migration/ram.h
index 574a604b72..8093ebc210 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -41,6 +41,7 @@
  * one thread).
  */
 typedef struct {
+    aligned_uint64_t dirty_bytes_last_sync;
     aligned_uint64_t dirty_pages_rate;
     Stat64 dirty_sync_count;
     Stat64 dirty_sync_missed_zero_copy;
@@ -51,7 +52,6 @@ typedef struct {
     Stat64 postcopy_bytes;
     Stat64 postcopy_requests;
     Stat64 precopy_bytes;
-    int64_t remaining;
     Stat64 transferred;
 } RAMStats;
 
-- 
2.39.2




* [PULL 13/20] migration: Rename duplicate to zero_pages
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (11 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 12/20] migration: Make dirty_bytes_last_sync atomic Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 14/20] migration: Rename normal to normal_pages Juan Quintela
                   ` (7 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Peter Xu

The rest of the counters that refer to pages have a _pages suffix.
Historically, this counter showed the number of pages composed of the
same repeated character, hence the name "duplicate".  But for years now
it has counted only zero pages.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c |  2 +-
 migration/ram.c       | 10 +++++-----
 migration/ram.h       |  2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 83d3bfbf62..20ef5b683b 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1142,7 +1142,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram = g_malloc0(sizeof(*info->ram));
     info->ram->transferred = stat64_get(&ram_counters.transferred);
     info->ram->total = ram_bytes_total();
-    info->ram->duplicate = stat64_get(&ram_counters.duplicate);
+    info->ram->duplicate = stat64_get(&ram_counters.zero_pages);
     /* legacy value.  It is not used anymore */
     info->ram->skipped = 0;
     info->ram->normal = stat64_get(&ram_counters.normal);
diff --git a/migration/ram.c b/migration/ram.c
index 7bbaf8cd86..1a098a9c21 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1119,7 +1119,7 @@ uint64_t ram_pagesize_summary(void)
 uint64_t ram_get_total_transferred_pages(void)
 {
     return stat64_get(&ram_counters.normal) +
-        stat64_get(&ram_counters.duplicate) +
+        stat64_get(&ram_counters.zero_pages) +
         compression_counters.pages + xbzrle_counters.pages;
 }
 
@@ -1322,7 +1322,7 @@ static int save_zero_page(PageSearchStatus *pss, QEMUFile *f, RAMBlock *block,
     int len = save_zero_page_to_file(pss, f, block, offset);
 
     if (len) {
-        stat64_add(&ram_counters.duplicate, 1);
+        stat64_add(&ram_counters.zero_pages, 1);
         ram_transferred_add(len);
         return 1;
     }
@@ -1361,7 +1361,7 @@ static bool control_save_page(PageSearchStatus *pss, RAMBlock *block,
     if (bytes_xmit > 0) {
         stat64_add(&ram_counters.normal, 1);
     } else if (bytes_xmit == 0) {
-        stat64_add(&ram_counters.duplicate, 1);
+        stat64_add(&ram_counters.zero_pages, 1);
     }
 
     return true;
@@ -1488,7 +1488,7 @@ update_compress_thread_counts(const CompressParam *param, int bytes_xmit)
     ram_transferred_add(bytes_xmit);
 
     if (param->zero_page) {
-        stat64_add(&ram_counters.duplicate, 1);
+        stat64_add(&ram_counters.zero_pages, 1);
         return;
     }
 
@@ -2623,7 +2623,7 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
     uint64_t pages = size / TARGET_PAGE_SIZE;
 
     if (zero) {
-        stat64_add(&ram_counters.duplicate, pages);
+        stat64_add(&ram_counters.zero_pages, pages);
     } else {
         stat64_add(&ram_counters.normal, pages);
         ram_transferred_add(size);
diff --git a/migration/ram.h b/migration/ram.h
index 8093ebc210..b27ce01f2e 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -46,7 +46,7 @@ typedef struct {
     Stat64 dirty_sync_count;
     Stat64 dirty_sync_missed_zero_copy;
     Stat64 downtime_bytes;
-    Stat64 duplicate;
+    Stat64 zero_pages;
     Stat64 multifd_bytes;
     Stat64 normal;
     Stat64 postcopy_bytes;
-- 
2.39.2




* [PULL 14/20] migration: Rename normal to normal_pages
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (12 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 13/20] migration: Rename duplicate to zero_pages Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 15/20] migration: Handle block device inactivation failures better Juan Quintela
                   ` (6 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Peter Xu

The rest of the counters that refer to pages have a _pages suffix.
Historically, this counter showed the number of full pages transferred.
The name "normal" referred to the fact that they were sent without any
optimization (compression, xbzrle, zero page, ...).

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c |  2 +-
 migration/ram.c       | 10 +++++-----
 migration/ram.h       |  2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 20ef5b683b..f311bb5f93 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1145,7 +1145,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram->duplicate = stat64_get(&ram_counters.zero_pages);
     /* legacy value.  It is not used anymore */
     info->ram->skipped = 0;
-    info->ram->normal = stat64_get(&ram_counters.normal);
+    info->ram->normal = stat64_get(&ram_counters.normal_pages);
     info->ram->normal_bytes = info->ram->normal * page_size;
     info->ram->mbps = s->mbps;
     info->ram->dirty_sync_count =
diff --git a/migration/ram.c b/migration/ram.c
index 1a098a9c21..7ad92f8756 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1118,7 +1118,7 @@ uint64_t ram_pagesize_summary(void)
 
 uint64_t ram_get_total_transferred_pages(void)
 {
-    return stat64_get(&ram_counters.normal) +
+    return stat64_get(&ram_counters.normal_pages) +
         stat64_get(&ram_counters.zero_pages) +
         compression_counters.pages + xbzrle_counters.pages;
 }
@@ -1359,7 +1359,7 @@ static bool control_save_page(PageSearchStatus *pss, RAMBlock *block,
     }
 
     if (bytes_xmit > 0) {
-        stat64_add(&ram_counters.normal, 1);
+        stat64_add(&ram_counters.normal_pages, 1);
     } else if (bytes_xmit == 0) {
         stat64_add(&ram_counters.zero_pages, 1);
     }
@@ -1393,7 +1393,7 @@ static int save_normal_page(PageSearchStatus *pss, RAMBlock *block,
         qemu_put_buffer(file, buf, TARGET_PAGE_SIZE);
     }
     ram_transferred_add(TARGET_PAGE_SIZE);
-    stat64_add(&ram_counters.normal, 1);
+    stat64_add(&ram_counters.normal_pages, 1);
     return 1;
 }
 
@@ -1449,7 +1449,7 @@ static int ram_save_multifd_page(QEMUFile *file, RAMBlock *block,
     if (multifd_queue_page(file, block, offset) < 0) {
         return -1;
     }
-    stat64_add(&ram_counters.normal, 1);
+    stat64_add(&ram_counters.normal_pages, 1);
 
     return 1;
 }
@@ -2625,7 +2625,7 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
     if (zero) {
         stat64_add(&ram_counters.zero_pages, pages);
     } else {
-        stat64_add(&ram_counters.normal, pages);
+        stat64_add(&ram_counters.normal_pages, pages);
         ram_transferred_add(size);
         qemu_file_credit_transfer(f, size);
     }
diff --git a/migration/ram.h b/migration/ram.h
index b27ce01f2e..11a0fde99b 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -48,7 +48,7 @@ typedef struct {
     Stat64 downtime_bytes;
     Stat64 zero_pages;
     Stat64 multifd_bytes;
-    Stat64 normal;
+    Stat64 normal_pages;
     Stat64 postcopy_bytes;
     Stat64 postcopy_requests;
     Stat64 precopy_bytes;
-- 
2.39.2




* [PULL 15/20] migration: Handle block device inactivation failures better
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (13 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 14/20] migration: Rename normal to normal_pages Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 16/20] util/mmap-alloc: qemu_fd_getfs() Juan Quintela
                   ` (5 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Eric Blake, Lukas Straub

From: Eric Blake <eblake@redhat.com>

Consider what happens when performing a migration between two host
machines connected to an NFS server serving multiple block devices to
the guest, when the NFS server becomes unavailable.  The migration
attempts to inactivate all block devices on the source (a necessary
step before the destination can take over); but if the NFS server is
non-responsive, the attempt to inactivate can itself fail.  When that
happens, the destination fails to get the migrated guest (good,
because the source wasn't able to flush everything properly):

  (qemu) qemu-kvm: load of migration failed: Input/output error

at which point, our only hope for the guest is for the source to take
back control.  With the current code base, the host outputs a message, but then appears to resume:

  (qemu) qemu-kvm: qemu_savevm_state_complete_precopy_non_iterable: bdrv_inactivate_all() failed (-1)

  (src qemu)info status
   VM status: running

but a second migration attempt now asserts:

  (src qemu) qemu-kvm: ../block.c:6738: int bdrv_inactivate_recurse(BlockDriverState *): Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.

Whether the guest is recoverable on the source after the first failure
is debatable, but what we do not want is to have qemu itself fail due
to an assertion.  It looks like the problem is as follows:

In migration.c:migration_completion(), the source sets 'inactivate' to
true (since COLO is not enabled), then tries
savevm.c:qemu_savevm_state_complete_precopy() with a request to
inactivate block devices.  In turn, this calls
block.c:bdrv_inactivate_all(), which fails when flushing runs up
against the non-responsive NFS server.  With savevm failing, we are
now left in a state where some, but not all, of the block devices have
been inactivated; but migration_completion() then jumps to 'fail'
rather than 'fail_invalidate' and skips an attempt to reclaim those
disks by calling bdrv_activate_all().  Even if we do attempt to
reclaim disks, we aren't taking note of failure there, either.

Thus, we have reached a state where the migration engine has forgotten
all state about whether a block device is inactive, because we did not
set s->block_inactive in enough places; so migration allows the source
to reach vm_start() and resume execution, violating the block layer
invariant that the guest CPUs should not be restarted while a device
is inactive.  Note that the code in migration.c:migrate_fd_cancel()
will also try to reactivate all block devices if s->block_inactive was
set, but because we failed to set that flag after the first failure,
the source assumes it has reclaimed all devices, even though it still
has remaining inactivated devices and does not try again.  Normally,
qmp_cont() will also try to reactivate all disks (or correctly fail if
the disks are not reclaimable because NFS is not yet back up), but the
auto-resumption of the source after a migration failure does not go
through qmp_cont().  And because we have left the block layer in an
inconsistent state with devices still inactivated, the later migration
attempt is hitting the assertion failure.

Since it is important to not resume the source with inactive disks,
this patch marks s->block_inactive before attempting inactivation,
rather than after succeeding, in order to prevent any vm_start() until
it has successfully reactivated all devices.

See also https://bugzilla.redhat.com/show_bug.cgi?id=2058982

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Acked-by: Lukas Straub <lukasstraub2@web.de>
Tested-by: Lukas Straub <lukasstraub2@web.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/migration.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index f311bb5f93..d630193272 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3447,13 +3447,11 @@ static void migration_completion(MigrationState *s)
                                             MIGRATION_STATUS_DEVICE);
             }
             if (ret >= 0) {
+                s->block_inactive = inactivate;
                 qemu_file_set_rate_limit(s->to_dst_file, INT64_MAX);
                 ret = qemu_savevm_state_complete_precopy(s->to_dst_file, false,
                                                          inactivate);
             }
-            if (inactivate && ret >= 0) {
-                s->block_inactive = true;
-            }
         }
         qemu_mutex_unlock_iothread();
 
@@ -3525,6 +3523,7 @@ fail_invalidate:
         bdrv_activate_all(&local_err);
         if (local_err) {
             error_report_err(local_err);
+            s->block_inactive = true;
         } else {
             s->block_inactive = false;
         }
-- 
2.39.2




* [PULL 16/20] util/mmap-alloc: qemu_fd_getfs()
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (14 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 15/20] migration: Handle block device inactivation failures better Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 17/20] vl.c: Create late backends before migration object Juan Quintela
                   ` (4 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Peter Xu, David Hildenbrand

From: Peter Xu <peterx@redhat.com>

This new helper fetches the file system type for an fd.  Only Linux is
implemented so far.  Currently only tmpfs and hugetlbfs are defined,
but the list can grow as needed.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/qemu/mmap-alloc.h |  7 +++++++
 util/mmap-alloc.c         | 28 ++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/include/qemu/mmap-alloc.h b/include/qemu/mmap-alloc.h
index 2825e231a7..8344daaa03 100644
--- a/include/qemu/mmap-alloc.h
+++ b/include/qemu/mmap-alloc.h
@@ -1,8 +1,15 @@
 #ifndef QEMU_MMAP_ALLOC_H
 #define QEMU_MMAP_ALLOC_H
 
+typedef enum {
+    QEMU_FS_TYPE_UNKNOWN = 0,
+    QEMU_FS_TYPE_TMPFS,
+    QEMU_FS_TYPE_HUGETLBFS,
+    QEMU_FS_TYPE_NUM,
+} QemuFsType;
 
 size_t qemu_fd_getpagesize(int fd);
+QemuFsType qemu_fd_getfs(int fd);
 
 /**
  * qemu_ram_mmap: mmap anonymous memory, the specified file or device.
diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
index 5ed7d29183..ed14f9c64d 100644
--- a/util/mmap-alloc.c
+++ b/util/mmap-alloc.c
@@ -27,8 +27,36 @@
 
 #ifdef CONFIG_LINUX
 #include <sys/vfs.h>
+#include <linux/magic.h>
 #endif
 
+QemuFsType qemu_fd_getfs(int fd)
+{
+#ifdef CONFIG_LINUX
+    struct statfs fs;
+    int ret;
+
+    if (fd < 0) {
+        return QEMU_FS_TYPE_UNKNOWN;
+    }
+
+    do {
+        ret = fstatfs(fd, &fs);
+    } while (ret != 0 && errno == EINTR);
+
+    switch (fs.f_type) {
+    case TMPFS_MAGIC:
+        return QEMU_FS_TYPE_TMPFS;
+    case HUGETLBFS_MAGIC:
+        return QEMU_FS_TYPE_HUGETLBFS;
+    default:
+        return QEMU_FS_TYPE_UNKNOWN;
+    }
+#else
+    return QEMU_FS_TYPE_UNKNOWN;
+#endif
+}
+
 size_t qemu_fd_getpagesize(int fd)
 {
 #ifdef CONFIG_LINUX
-- 
2.39.2
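For illustration only, not part of this patch: a hypothetical caller could
use the new helper to accept only shared-memory-like backends, along these
lines (fd_is_shmem_like is an invented name):

#include "qemu/mmap-alloc.h"

/* Hypothetical usage sketch: true if the fd lives on tmpfs or hugetlbfs. */
static bool fd_is_shmem_like(int fd)
{
    QemuFsType fs = qemu_fd_getfs(fd);

    /* qemu_fd_getfs() returns QEMU_FS_TYPE_UNKNOWN for fd < 0 or non-Linux. */
    return fs == QEMU_FS_TYPE_TMPFS || fs == QEMU_FS_TYPE_HUGETLBFS;
}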



^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PULL 17/20] vl.c: Create late backends before migration object
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (15 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 16/20] util/mmap-alloc: qemu_fd_getfs() Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 18/20] migration/postcopy: Detect file system on dest host Juan Quintela
                   ` (3 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Peter Xu, David Hildenbrand

From: Peter Xu <peterx@redhat.com>

The migration object may want to check against different types of memory
when it is initialized.  Delay its creation until after the late backends
have been created.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 softmmu/vl.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/softmmu/vl.c b/softmmu/vl.c
index ea20b23e4c..ad394b402f 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -3583,14 +3583,19 @@ void qemu_init(int argc, char **argv)
                      machine_class->name, machine_class->deprecation_reason);
     }
 
+    /*
+     * Create backends before creating migration objects, so that it can
+     * check against compatibilities on the backend memories (e.g. postcopy
+     * over memory-backend-file objects).
+     */
+    qemu_create_late_backends();
+
     /*
      * Note: creates a QOM object, must run only after global and
      * compat properties have been set up.
      */
     migration_object_init();
 
-    qemu_create_late_backends();
-
     /* parse features once if machine provides default cpu_type */
     current_machine->cpu_type = machine_class->default_cpu_type;
     if (cpu_option) {
-- 
2.39.2



^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PULL 18/20] migration/postcopy: Detect file system on dest host
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (16 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 17/20] vl.c: Create late backends before migration object Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 19/20] migration: rename enabled_capabilities to capabilities Juan Quintela
                   ` (2 subsequent siblings)
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Peter Xu

From: Peter Xu <peterx@redhat.com>

Postcopy requires the memory to support userfaultfd in order to work.
Right now we check for it, but a bit too late (when switching to
postcopy migration).

Do that check early, right when postcopy is enabled.

Note that this is still only best effort, because ramblocks can be
created dynamically.  We could add a check at hostmem creation and fail
if postcopy is enabled, but maybe that's too aggressive.

Still, we get a chance to fail in the most obvious case, where we know
there is an existing unsupported ramblock.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/postcopy-ram.c | 34 ++++++++++++++++++++++++++++++----
 1 file changed, 30 insertions(+), 4 deletions(-)

diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 7d24dac397..d7b48dd920 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -36,6 +36,7 @@
 #include "yank_functions.h"
 #include "tls.h"
 #include "qemu/userfaultfd.h"
+#include "qemu/mmap-alloc.h"
 
 /* Arbitrary limit on size of each discard command,
  * keeps them around ~200 bytes
@@ -336,11 +337,12 @@ static bool ufd_check_and_apply(int ufd, MigrationIncomingState *mis)
 
 /* Callback from postcopy_ram_supported_by_host block iterator.
  */
-static int test_ramblock_postcopiable(RAMBlock *rb, void *opaque)
+static int test_ramblock_postcopiable(RAMBlock *rb)
 {
     const char *block_name = qemu_ram_get_idstr(rb);
     ram_addr_t length = qemu_ram_get_used_length(rb);
     size_t pagesize = qemu_ram_pagesize(rb);
+    QemuFsType fs;
 
     if (length % pagesize) {
         error_report("Postcopy requires RAM blocks to be a page size multiple,"
@@ -348,6 +350,15 @@ static int test_ramblock_postcopiable(RAMBlock *rb, void *opaque)
                      "page size of 0x%zx", block_name, length, pagesize);
         return 1;
     }
+
+    if (rb->fd >= 0) {
+        fs = qemu_fd_getfs(rb->fd);
+        if (fs != QEMU_FS_TYPE_TMPFS && fs != QEMU_FS_TYPE_HUGETLBFS) {
+            error_report("Host backend files need to be TMPFS or HUGETLBFS only");
+            return 1;
+        }
+    }
+
     return 0;
 }
 
@@ -366,6 +377,7 @@ bool postcopy_ram_supported_by_host(MigrationIncomingState *mis)
     struct uffdio_range range_struct;
     uint64_t feature_mask;
     Error *local_err = NULL;
+    RAMBlock *block;
 
     if (qemu_target_page_size() > pagesize) {
         error_report("Target page size bigger than host page size");
@@ -390,9 +402,23 @@ bool postcopy_ram_supported_by_host(MigrationIncomingState *mis)
         goto out;
     }
 
-    /* We don't support postcopy with shared RAM yet */
-    if (foreach_not_ignored_block(test_ramblock_postcopiable, NULL)) {
-        goto out;
+    /*
+     * We don't support postcopy with some type of ramblocks.
+     *
+     * NOTE: we explicitly ignored ramblock_is_ignored() instead we checked
+     * all possible ramblocks.  This is because this function can be called
+     * when creating the migration object, during the phase RAM_MIGRATABLE
+     * is not even properly set for all the ramblocks.
+     *
+     * A side effect of this is we'll also check against RAM_SHARED
+     * ramblocks even if migrate_ignore_shared() is set (in which case
+     * we'll never migrate RAM_SHARED at all), but normally this shouldn't
+     * affect in reality, or we can revisit.
+     */
+    RAMBLOCK_FOREACH(block) {
+        if (test_ramblock_postcopiable(block)) {
+            goto out;
+        }
     }
 
     /*
-- 
2.39.2



^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PULL 19/20] migration: rename enabled_capabilities to capabilities
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (17 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 18/20] migration/postcopy: Detect file system on dest host Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-20 13:17 ` [PULL 20/20] migration: Pass migrate_caps_check() the old and new caps Juan Quintela
  2023-04-22  5:09 ` [PULL 00/20] Migration 20230420 patches Richard Henderson
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Vladimir Sementsov-Ogievskiy

It is clear from the context what that means, and such a long name,
combined with the extra-long capability names, makes it very difficult
to stay inside the 80-column limit.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
---
 migration/migration.c | 52 +++++++++++++++++++++----------------------
 migration/migration.h |  2 +-
 migration/rdma.c      |  4 ++--
 migration/savevm.c    |  6 ++---
 4 files changed, 31 insertions(+), 33 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index d630193272..0c2376bc7e 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -364,8 +364,7 @@ static bool migrate_late_block_activate(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[
-        MIGRATION_CAPABILITY_LATE_BLOCK_ACTIVATE];
+    return s->capabilities[MIGRATION_CAPABILITY_LATE_BLOCK_ACTIVATE];
 }
 
 /*
@@ -944,7 +943,7 @@ MigrationCapabilityStatusList *qmp_query_migrate_capabilities(Error **errp)
 #endif
         caps = g_malloc0(sizeof(*caps));
         caps->capability = i;
-        caps->state = s->enabled_capabilities[i];
+        caps->state = s->capabilities[i];
         QAPI_LIST_APPEND(tail, caps);
     }
 
@@ -1495,13 +1494,13 @@ void qmp_migrate_set_capabilities(MigrationCapabilityStatusList *params,
         return;
     }
 
-    memcpy(cap_list, s->enabled_capabilities, sizeof(cap_list));
+    memcpy(cap_list, s->capabilities, sizeof(cap_list));
     if (!migrate_caps_check(cap_list, params, errp)) {
         return;
     }
 
     for (cap = params; cap; cap = cap->next) {
-        s->enabled_capabilities[cap->value->capability] = cap->value->state;
+        s->capabilities[cap->value->capability] = cap->value->state;
     }
 }
 
@@ -2570,7 +2569,7 @@ bool migrate_release_ram(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_RELEASE_RAM];
+    return s->capabilities[MIGRATION_CAPABILITY_RELEASE_RAM];
 }
 
 bool migrate_postcopy_ram(void)
@@ -2579,7 +2578,7 @@ bool migrate_postcopy_ram(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_POSTCOPY_RAM];
+    return s->capabilities[MIGRATION_CAPABILITY_POSTCOPY_RAM];
 }
 
 bool migrate_postcopy(void)
@@ -2593,7 +2592,7 @@ bool migrate_auto_converge(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_AUTO_CONVERGE];
+    return s->capabilities[MIGRATION_CAPABILITY_AUTO_CONVERGE];
 }
 
 bool migrate_zero_blocks(void)
@@ -2602,7 +2601,7 @@ bool migrate_zero_blocks(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_ZERO_BLOCKS];
+    return s->capabilities[MIGRATION_CAPABILITY_ZERO_BLOCKS];
 }
 
 bool migrate_postcopy_blocktime(void)
@@ -2611,7 +2610,7 @@ bool migrate_postcopy_blocktime(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_POSTCOPY_BLOCKTIME];
+    return s->capabilities[MIGRATION_CAPABILITY_POSTCOPY_BLOCKTIME];
 }
 
 bool migrate_use_compression(void)
@@ -2620,7 +2619,7 @@ bool migrate_use_compression(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_COMPRESS];
+    return s->capabilities[MIGRATION_CAPABILITY_COMPRESS];
 }
 
 int migrate_compress_level(void)
@@ -2665,7 +2664,7 @@ bool migrate_dirty_bitmaps(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_DIRTY_BITMAPS];
+    return s->capabilities[MIGRATION_CAPABILITY_DIRTY_BITMAPS];
 }
 
 bool migrate_ignore_shared(void)
@@ -2674,7 +2673,7 @@ bool migrate_ignore_shared(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_X_IGNORE_SHARED];
+    return s->capabilities[MIGRATION_CAPABILITY_X_IGNORE_SHARED];
 }
 
 bool migrate_validate_uuid(void)
@@ -2683,7 +2682,7 @@ bool migrate_validate_uuid(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_VALIDATE_UUID];
+    return s->capabilities[MIGRATION_CAPABILITY_VALIDATE_UUID];
 }
 
 bool migrate_use_events(void)
@@ -2692,7 +2691,7 @@ bool migrate_use_events(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_EVENTS];
+    return s->capabilities[MIGRATION_CAPABILITY_EVENTS];
 }
 
 bool migrate_use_multifd(void)
@@ -2701,7 +2700,7 @@ bool migrate_use_multifd(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD];
+    return s->capabilities[MIGRATION_CAPABILITY_MULTIFD];
 }
 
 bool migrate_pause_before_switchover(void)
@@ -2710,8 +2709,7 @@ bool migrate_pause_before_switchover(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[
-        MIGRATION_CAPABILITY_PAUSE_BEFORE_SWITCHOVER];
+    return s->capabilities[MIGRATION_CAPABILITY_PAUSE_BEFORE_SWITCHOVER];
 }
 
 int migrate_multifd_channels(void)
@@ -2758,7 +2756,7 @@ bool migrate_use_zero_copy_send(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_ZERO_COPY_SEND];
+    return s->capabilities[MIGRATION_CAPABILITY_ZERO_COPY_SEND];
 }
 #endif
 
@@ -2777,7 +2775,7 @@ int migrate_use_xbzrle(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_XBZRLE];
+    return s->capabilities[MIGRATION_CAPABILITY_XBZRLE];
 }
 
 uint64_t migrate_xbzrle_cache_size(void)
@@ -2804,7 +2802,7 @@ bool migrate_use_block(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_BLOCK];
+    return s->capabilities[MIGRATION_CAPABILITY_BLOCK];
 }
 
 bool migrate_use_return_path(void)
@@ -2813,7 +2811,7 @@ bool migrate_use_return_path(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_RETURN_PATH];
+    return s->capabilities[MIGRATION_CAPABILITY_RETURN_PATH];
 }
 
 bool migrate_use_block_incremental(void)
@@ -2831,7 +2829,7 @@ bool migrate_background_snapshot(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_BACKGROUND_SNAPSHOT];
+    return s->capabilities[MIGRATION_CAPABILITY_BACKGROUND_SNAPSHOT];
 }
 
 bool migrate_postcopy_preempt(void)
@@ -2840,7 +2838,7 @@ bool migrate_postcopy_preempt(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_POSTCOPY_PREEMPT];
+    return s->capabilities[MIGRATION_CAPABILITY_POSTCOPY_PREEMPT];
 }
 
 /* migration thread support */
@@ -3582,7 +3580,7 @@ fail:
 bool migrate_colo_enabled(void)
 {
     MigrationState *s = migrate_get_current();
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_X_COLO];
+    return s->capabilities[MIGRATION_CAPABILITY_X_COLO];
 }
 
 typedef enum MigThrError {
@@ -4448,7 +4446,7 @@ void migration_global_dump(Monitor *mon)
 }
 
 #define DEFINE_PROP_MIG_CAP(name, x)             \
-    DEFINE_PROP_BOOL(name, MigrationState, enabled_capabilities[x], false)
+    DEFINE_PROP_BOOL(name, MigrationState, capabilities[x], false)
 
 static Property migration_properties[] = {
     DEFINE_PROP_BOOL("store-global-state", MigrationState,
@@ -4647,7 +4645,7 @@ static bool migration_object_check(MigrationState *ms, Error **errp)
     }
 
     for (i = 0; i < MIGRATION_CAPABILITY__MAX; i++) {
-        if (ms->enabled_capabilities[i]) {
+        if (ms->capabilities[i]) {
             QAPI_LIST_PREPEND(head, migrate_cap_add(i, true));
         }
     }
diff --git a/migration/migration.h b/migration/migration.h
index 310ae8901b..04e0860b4e 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -310,7 +310,7 @@ struct MigrationState {
     int64_t downtime_start;
     int64_t downtime;
     int64_t expected_downtime;
-    bool enabled_capabilities[MIGRATION_CAPABILITY__MAX];
+    bool capabilities[MIGRATION_CAPABILITY__MAX];
     int64_t setup_time;
     /*
      * Whether guest was running when we enter the completion stage.
diff --git a/migration/rdma.c b/migration/rdma.c
index df646be35e..f35f021963 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -4179,7 +4179,7 @@ void rdma_start_outgoing_migration(void *opaque,
     }
 
     ret = qemu_rdma_source_init(rdma,
-        s->enabled_capabilities[MIGRATION_CAPABILITY_RDMA_PIN_ALL], errp);
+        s->capabilities[MIGRATION_CAPABILITY_RDMA_PIN_ALL], errp);
 
     if (ret) {
         goto err;
@@ -4201,7 +4201,7 @@ void rdma_start_outgoing_migration(void *opaque,
         }
 
         ret = qemu_rdma_source_init(rdma_return_path,
-            s->enabled_capabilities[MIGRATION_CAPABILITY_RDMA_PIN_ALL], errp);
+            s->capabilities[MIGRATION_CAPABILITY_RDMA_PIN_ALL], errp);
 
         if (ret) {
             goto return_path_err;
diff --git a/migration/savevm.c b/migration/savevm.c
index aa54a67fda..589ef926ab 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -253,7 +253,7 @@ static uint32_t get_validatable_capabilities_count(void)
     uint32_t result = 0;
     int i;
     for (i = 0; i < MIGRATION_CAPABILITY__MAX; i++) {
-        if (should_validate_capability(i) && s->enabled_capabilities[i]) {
+        if (should_validate_capability(i) && s->capabilities[i]) {
             result++;
         }
     }
@@ -275,7 +275,7 @@ static int configuration_pre_save(void *opaque)
     state->capabilities = g_renew(MigrationCapability, state->capabilities,
                                   state->caps_count);
     for (i = j = 0; i < MIGRATION_CAPABILITY__MAX; i++) {
-        if (should_validate_capability(i) && s->enabled_capabilities[i]) {
+        if (should_validate_capability(i) && s->capabilities[i]) {
             state->capabilities[j++] = i;
         }
     }
@@ -325,7 +325,7 @@ static bool configuration_validate_capabilities(SaveState *state)
             continue;
         }
         source_state = test_bit(i, source_caps_bm);
-        target_state = s->enabled_capabilities[i];
+        target_state = s->capabilities[i];
         if (source_state != target_state) {
             error_report("Capability %s is %s, but received capability is %s",
                          MigrationCapability_str(i),
-- 
2.39.2



^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PULL 20/20] migration: Pass migrate_caps_check() the old and new caps
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (18 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 19/20] migration: rename enabled_capabilities to capabilities Juan Quintela
@ 2023-04-20 13:17 ` Juan Quintela
  2023-04-22  5:09 ` [PULL 00/20] Migration 20230420 patches Richard Henderson
  20 siblings, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-20 13:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Juan Quintela, Paolo Bonzini, Vladimir Sementsov-Ogievskiy

We used to pass the old capabilities as an array and the new
capabilities as a list.  Pass both as plain bool arrays instead, and let
the caller apply the requested changes before calling the check.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
---
 migration/migration.c | 80 +++++++++++++++++--------------------------
 1 file changed, 31 insertions(+), 49 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 0c2376bc7e..7b0d4a9d8f 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1300,30 +1300,20 @@ WriteTrackingSupport migrate_query_write_tracking(void)
 }
 
 /**
- * @migration_caps_check - check capability validity
+ * @migration_caps_check - check capability compatibility
  *
- * @cap_list: old capability list, array of bool
- * @params: new capabilities to be applied soon
+ * @old_caps: old capability list
+ * @new_caps: new capability list
  * @errp: set *errp if the check failed, with reason
  *
  * Returns true if check passed, otherwise false.
  */
-static bool migrate_caps_check(bool *cap_list,
-                               MigrationCapabilityStatusList *params,
-                               Error **errp)
+static bool migrate_caps_check(bool *old_caps, bool *new_caps, Error **errp)
 {
-    MigrationCapabilityStatusList *cap;
-    bool old_postcopy_cap;
     MigrationIncomingState *mis = migration_incoming_get_current();
 
-    old_postcopy_cap = cap_list[MIGRATION_CAPABILITY_POSTCOPY_RAM];
-
-    for (cap = params; cap; cap = cap->next) {
-        cap_list[cap->value->capability] = cap->value->state;
-    }
-
 #ifndef CONFIG_LIVE_BLOCK_MIGRATION
-    if (cap_list[MIGRATION_CAPABILITY_BLOCK]) {
+    if (new_caps[MIGRATION_CAPABILITY_BLOCK]) {
         error_setg(errp, "QEMU compiled without old-style (blk/-b, inc/-i) "
                    "block migration");
         error_append_hint(errp, "Use drive_mirror+NBD instead.\n");
@@ -1332,7 +1322,7 @@ static bool migrate_caps_check(bool *cap_list,
 #endif
 
 #ifndef CONFIG_REPLICATION
-    if (cap_list[MIGRATION_CAPABILITY_X_COLO]) {
+    if (new_caps[MIGRATION_CAPABILITY_X_COLO]) {
         error_setg(errp, "QEMU compiled without replication module"
                    " can't enable COLO");
         error_append_hint(errp, "Please enable replication before COLO.\n");
@@ -1340,12 +1330,13 @@ static bool migrate_caps_check(bool *cap_list,
     }
 #endif
 
-    if (cap_list[MIGRATION_CAPABILITY_POSTCOPY_RAM]) {
+    if (new_caps[MIGRATION_CAPABILITY_POSTCOPY_RAM]) {
         /* This check is reasonably expensive, so only when it's being
          * set the first time, also it's only the destination that needs
          * special support.
          */
-        if (!old_postcopy_cap && runstate_check(RUN_STATE_INMIGRATE) &&
+        if (!old_caps[MIGRATION_CAPABILITY_POSTCOPY_RAM] &&
+            runstate_check(RUN_STATE_INMIGRATE) &&
             !postcopy_ram_supported_by_host(mis)) {
             /* postcopy_ram_supported_by_host will have emitted a more
              * detailed message
@@ -1354,13 +1345,13 @@ static bool migrate_caps_check(bool *cap_list,
             return false;
         }
 
-        if (cap_list[MIGRATION_CAPABILITY_X_IGNORE_SHARED]) {
+        if (new_caps[MIGRATION_CAPABILITY_X_IGNORE_SHARED]) {
             error_setg(errp, "Postcopy is not compatible with ignore-shared");
             return false;
         }
     }
 
-    if (cap_list[MIGRATION_CAPABILITY_BACKGROUND_SNAPSHOT]) {
+    if (new_caps[MIGRATION_CAPABILITY_BACKGROUND_SNAPSHOT]) {
         WriteTrackingSupport wt_support;
         int idx;
         /*
@@ -1384,7 +1375,7 @@ static bool migrate_caps_check(bool *cap_list,
          */
         for (idx = 0; idx < check_caps_background_snapshot.size; idx++) {
             int incomp_cap = check_caps_background_snapshot.caps[idx];
-            if (cap_list[incomp_cap]) {
+            if (new_caps[incomp_cap]) {
                 error_setg(errp,
                         "Background-snapshot is not compatible with %s",
                         MigrationCapability_str(incomp_cap));
@@ -1394,10 +1385,10 @@ static bool migrate_caps_check(bool *cap_list,
     }
 
 #ifdef CONFIG_LINUX
-    if (cap_list[MIGRATION_CAPABILITY_ZERO_COPY_SEND] &&
-        (!cap_list[MIGRATION_CAPABILITY_MULTIFD] ||
-         cap_list[MIGRATION_CAPABILITY_COMPRESS] ||
-         cap_list[MIGRATION_CAPABILITY_XBZRLE] ||
+    if (new_caps[MIGRATION_CAPABILITY_ZERO_COPY_SEND] &&
+        (!new_caps[MIGRATION_CAPABILITY_MULTIFD] ||
+         new_caps[MIGRATION_CAPABILITY_COMPRESS] ||
+         new_caps[MIGRATION_CAPABILITY_XBZRLE] ||
          migrate_multifd_compression() ||
          migrate_use_tls())) {
         error_setg(errp,
@@ -1405,15 +1396,15 @@ static bool migrate_caps_check(bool *cap_list,
         return false;
     }
 #else
-    if (cap_list[MIGRATION_CAPABILITY_ZERO_COPY_SEND]) {
+    if (new_caps[MIGRATION_CAPABILITY_ZERO_COPY_SEND]) {
         error_setg(errp,
                    "Zero copy currently only available on Linux");
         return false;
     }
 #endif
 
-    if (cap_list[MIGRATION_CAPABILITY_POSTCOPY_PREEMPT]) {
-        if (!cap_list[MIGRATION_CAPABILITY_POSTCOPY_RAM]) {
+    if (new_caps[MIGRATION_CAPABILITY_POSTCOPY_PREEMPT]) {
+        if (!new_caps[MIGRATION_CAPABILITY_POSTCOPY_RAM]) {
             error_setg(errp, "Postcopy preempt requires postcopy-ram");
             return false;
         }
@@ -1424,14 +1415,14 @@ static bool migrate_caps_check(bool *cap_list,
          * different compression channels, which is not compatible with the
          * preempt assumptions on channel assignments.
          */
-        if (cap_list[MIGRATION_CAPABILITY_COMPRESS]) {
+        if (new_caps[MIGRATION_CAPABILITY_COMPRESS]) {
             error_setg(errp, "Postcopy preempt not compatible with compress");
             return false;
         }
     }
 
-    if (cap_list[MIGRATION_CAPABILITY_MULTIFD]) {
-        if (cap_list[MIGRATION_CAPABILITY_COMPRESS]) {
+    if (new_caps[MIGRATION_CAPABILITY_MULTIFD]) {
+        if (new_caps[MIGRATION_CAPABILITY_COMPRESS]) {
             error_setg(errp, "Multifd is not compatible with compress");
             return false;
         }
@@ -1487,15 +1478,19 @@ void qmp_migrate_set_capabilities(MigrationCapabilityStatusList *params,
 {
     MigrationState *s = migrate_get_current();
     MigrationCapabilityStatusList *cap;
-    bool cap_list[MIGRATION_CAPABILITY__MAX];
+    bool new_caps[MIGRATION_CAPABILITY__MAX];
 
     if (migration_is_running(s->state)) {
         error_setg(errp, QERR_MIGRATION_ACTIVE);
         return;
     }
 
-    memcpy(cap_list, s->capabilities, sizeof(cap_list));
-    if (!migrate_caps_check(cap_list, params, errp)) {
+    memcpy(new_caps, s->capabilities, sizeof(new_caps));
+    for (cap = params; cap; cap = cap->next) {
+        new_caps[cap->value->capability] = cap->value->state;
+    }
+
+    if (!migrate_caps_check(s->capabilities, new_caps, errp)) {
         return;
     }
 
@@ -4635,27 +4630,14 @@ static void migration_instance_init(Object *obj)
  */
 static bool migration_object_check(MigrationState *ms, Error **errp)
 {
-    MigrationCapabilityStatusList *head = NULL;
     /* Assuming all off */
-    bool cap_list[MIGRATION_CAPABILITY__MAX] = { 0 }, ret;
-    int i;
+    bool old_caps[MIGRATION_CAPABILITY__MAX] = { 0 };
 
     if (!migrate_params_check(&ms->parameters, errp)) {
         return false;
     }
 
-    for (i = 0; i < MIGRATION_CAPABILITY__MAX; i++) {
-        if (ms->capabilities[i]) {
-            QAPI_LIST_PREPEND(head, migrate_cap_add(i, true));
-        }
-    }
-
-    ret = migrate_caps_check(cap_list, head, errp);
-
-    /* It works with head == NULL */
-    qapi_free_MigrationCapabilityStatusList(head);
-
-    return ret;
+    return migrate_caps_check(old_caps, ms->capabilities, errp);
 }
 
 static const TypeInfo migration_type = {
-- 
2.39.2



^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PULL 00/20] Migration 20230420 patches
  2023-04-20 13:17 [PULL 00/20] Migration 20230420 patches Juan Quintela
                   ` (19 preceding siblings ...)
  2023-04-20 13:17 ` [PULL 20/20] migration: Pass migrate_caps_check() the old and new caps Juan Quintela
@ 2023-04-22  5:09 ` Richard Henderson
  2023-04-22  9:21   ` Juan Quintela
  20 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2023-04-22  5:09 UTC (permalink / raw)
  To: Juan Quintela, qemu-devel; +Cc: Paolo Bonzini

On 4/20/23 14:17, Juan Quintela wrote:
> The following changes since commit 2d82c32b2ceaca3dc3da5e36e10976f34bfcb598:
> 
>    Open 8.1 development tree (2023-04-20 10:05:25 +0100)
> 
> are available in the Git repository at:
> 
>    https://gitlab.com/juan.quintela/qemu.git  tags/migration-20230420-pull-request
> 
> for you to fetch changes up to cdf07846e6fe07a2e20c93eed5902114dc1d3dcf:
> 
>    migration: Pass migrate_caps_check() the old and new caps (2023-04-20 15:10:58 +0200)
> 
> ----------------------------------------------------------------
> Migration Pull request
> 
> This series include everything reviewed for migration:
> 
> - fix for disk stop/start (eric)
> - detect filesystem of hostmem (peter)
> - rename qatomic_mb_read (paolo)
> - whitespace cleanup (李皆俊)
>    I hope copy and paste work for the name O:-)
> - atomic_counters series (juan)
> - two first patches of capabilities (juan)
> 
> Please apply,

Fails CI:
https://gitlab.com/qemu-project/qemu/-/jobs/4159279870#L2896

/usr/lib/gcc-cross/mipsel-linux-gnu/10/../../../../mipsel-linux-gnu/bin/ld: 
libcommon.fa.p/migration_migration.c.o: undefined reference to symbol 
'__atomic_load_8@@LIBATOMIC_1.0'

You're using an atomic 8-byte operation on a host that doesn't support it.  Did you use 
qatomic_read__nocheck instead of qatomic_read to try and get around a build failure on 
i686?  The check is there for a reason...


r~


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PULL 00/20] Migration 20230420 patches
  2023-04-22  5:09 ` [PULL 00/20] Migration 20230420 patches Richard Henderson
@ 2023-04-22  9:21   ` Juan Quintela
  2023-04-22  9:57     ` Richard Henderson
  0 siblings, 1 reply; 27+ messages in thread
From: Juan Quintela @ 2023-04-22  9:21 UTC (permalink / raw)
  To: Richard Henderson; +Cc: qemu-devel, Paolo Bonzini

Richard Henderson <richard.henderson@linaro.org> wrote:
> On 4/20/23 14:17, Juan Quintela wrote:
>> The following changes since commit 2d82c32b2ceaca3dc3da5e36e10976f34bfcb598:
>>    Open 8.1 development tree (2023-04-20 10:05:25 +0100)
>> are available in the Git repository at:
>>    https://gitlab.com/juan.quintela/qemu.git
>> tags/migration-20230420-pull-request
>> for you to fetch changes up to
>> cdf07846e6fe07a2e20c93eed5902114dc1d3dcf:
>>    migration: Pass migrate_caps_check() the old and new caps
>> (2023-04-20 15:10:58 +0200)
>> ----------------------------------------------------------------
>> Migration Pull request
>> This series include everything reviewed for migration:
>> - fix for disk stop/start (eric)
>> - detect filesystem of hostmem (peter)
>> - rename qatomic_mb_read (paolo)
>> - whitespace cleanup (李皆俊)
>>    I hope copy and paste work for the name O:-)
>> - atomic_counters series (juan)
>> - two first patches of capabilities (juan)
>> Please apply,
>
> Fails CI:
> https://gitlab.com/qemu-project/qemu/-/jobs/4159279870#L2896
>
> /usr/lib/gcc-cross/mipsel-linux-gnu/10/../../../../mipsel-linux-gnu/bin/ld:
> libcommon.fa.p/migration_migration.c.o: undefined reference to symbol
> '__atomic_load_8@@LIBATOMIC_1.0'

Hi Richard

First of all, I have no doubt that you know better than me in this
regard (*).

That said, it looks like one case of "my toolchain is better than
yours":

$ ls qemu-system-mips
qemu-system-mips        qemu-system-mips64el.p/ qemu-system-mipsel.p/
qemu-system-mips64      qemu-system-mips64.p/   qemu-system-mips.p/
qemu-system-mips64el    qemu-system-mipsel

This is Fedora37 with updates.

There are two possibilities here that come to mind, in order of
probability:
- myself with:

-    if (ram_counters.dirty_pages_rate && transferred > 10000) {
+    if (qatomic_read__nocheck(&ram_counters.dirty_pages_rate) &&
+        transferred > 10000) {

- paolo:

 PostcopyState  postcopy_state_get(void)
 {
-    return qatomic_mb_read(&incoming_postcopy_state);
+    return qatomic_load_acquire(&incoming_postcopy_state);
 }

> You're using an atomic 8-byte operation on a host that doesn't support
> it.  Did you use qatomic_read__nocheck instead of qatomic_read to try
> and get around a build failure on i686?  The check is there for a
> reason...

No, I am changing all ram_counters values to atomic.  Almost all of them
move from [u]int64_t to Stat64.  Notice that I don't care about 63 vs
64 bits, and anyway I think it was an error that they were int64_t in
the first place (blame the old days of QAPI, when it didn't have unsigned
types).

But there is no stat64_set() function.  The closest thing that appears
here is stat64_init(), but it cheats by not being atomic at all.

Almost all ram_counters values are fine with stat64_add() and stat64_get()
operations.  But some of them we need to reset to zero (or to some
other value, but that would not be complicated).

(*) And here is where the caveat from the 1st paragraph comes in:
see how stat64_get() is implemented for !CONFIG_ATOMIC64; I didn't
even try to write a stat64_set() on my own.

Here is one example of how I used it:

     if (qatomic_read__nocheck(&ram_counters.dirty_pages_rate) &&
         transferred > 10000) {
-        s->expected_downtime = ram_counters.remaining / bandwidth;
+        s->expected_downtime =
+            qatomic_read__nocheck(&ram_counters.dirty_bytes_last_sync) /
+                                  bandwidth;
     }
 
     qemu_file_reset_rate_limit(s->to_dst_file);
diff --git a/migration/ram.c b/migration/ram.c
index 7400abf5e1..7bbaf8cd86 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1224,7 +1224,8 @@ static void migration_bitmap_sync(RAMState *rs)
         RAMBLOCK_FOREACH_NOT_IGNORED(block) {
             ramblock_sync_dirty_bitmap(rs, block);
         }
-        ram_counters.remaining = ram_bytes_remaining();
+        qatomic_set__nocheck(&ram_counters.dirty_bytes_last_sync,
+                             ram_bytes_remaining());
:


And why did I use qatomic_*__nocheck() instead of the proper operations?
Because, reading this:

#define qatomic_read__nocheck(ptr) \
    __atomic_load_n(ptr, __ATOMIC_RELAXED)

#define qatomic_read(ptr)                              \
    ({                                                 \
    qemu_build_assert(sizeof(*ptr) <= ATOMIC_REG_SIZE); \
    qatomic_read__nocheck(ptr);                        \
    })

#define qatomic_set__nocheck(ptr, i) \
    __atomic_store_n(ptr, i, __ATOMIC_RELAXED)

#define qatomic_set(ptr, i)  do {                      \
    qemu_build_assert(sizeof(*ptr) <= ATOMIC_REG_SIZE); \
    qatomic_set__nocheck(ptr, i);                      \
} while(0)

I was completely sure that we would never hit the qemu_build_assert().

I know, I know.

And now that I have explained myself, what is the correct way of doing
this?

I declared the value as:

+    aligned_uint64_t dirty_bytes_last_sync;
-    int64_t remaining;

I just want to make sure that *all* ram_counters are atomic, so that I
can use them from any thread.  All the counters that use Stat64 already
are.  But for these two to work, I would need a way to set them to a
given value.

And while we are at it, I would like to have:

stat64_inc(): just adds 1.  I know, I can create a macro.

and

stat64_reset(): as its name says, it resets the value to zero.

I am still missing a couple of stats in migration that I need to reset
to zero from time to time:

./ram.c:380:    uint64_t bytes_xfer_prev;
./ram.c:747:    rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
./ram.c:1183:        stat64_get(&ram_counters.transferred) - rs->bytes_xfer_prev;
./ram.c:1247:        rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);

You can claim that this operation always happens on the migration
thread, but I have found that it is more difficult to document which
ones are atomic and which are not than to just make all of them atomic.
These variables are get/set once a second, so performance is not one of
the issues.

And:

./ram.c:382:    uint64_t num_dirty_pages_period;
./ram.c:746:    rs->num_dirty_pages_period = 0;
./ram.c:1095:    rs->num_dirty_pages_period += new_dirty_pages;
./ram.c:1133:                         rs->num_dirty_pages_period * 1000 /
./ram.c:1184:    uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE;
./ram.c:1232:    trace_migration_bitmap_sync_end(rs->num_dirty_pages_period);
./ram.c:1246:        rs->num_dirty_pages_period = 0;

The problem here is that we reset the value every second, but for
everything else it is a Stat64.

Thanks, Juan.



^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PULL 00/20] Migration 20230420 patches
  2023-04-22  9:21   ` Juan Quintela
@ 2023-04-22  9:57     ` Richard Henderson
  2023-04-23  9:45       ` Juan Quintela
  0 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2023-04-22  9:57 UTC (permalink / raw)
  To: quintela; +Cc: qemu-devel, Paolo Bonzini

On 4/22/23 10:21, Juan Quintela wrote:
> Richard Henderson <richard.henderson@linaro.org> wrote:
>> On 4/20/23 14:17, Juan Quintela wrote:
>>> The following changes since commit 2d82c32b2ceaca3dc3da5e36e10976f34bfcb598:
>>>     Open 8.1 development tree (2023-04-20 10:05:25 +0100)
>>> are available in the Git repository at:
>>>     https://gitlab.com/juan.quintela/qemu.git
>>> tags/migration-20230420-pull-request
>>> for you to fetch changes up to
>>> cdf07846e6fe07a2e20c93eed5902114dc1d3dcf:
>>>     migration: Pass migrate_caps_check() the old and new caps
>>> (2023-04-20 15:10:58 +0200)
>>> ----------------------------------------------------------------
>>> Migration Pull request
>>> This series include everything reviewed for migration:
>>> - fix for disk stop/start (eric)
>>> - detect filesystem of hostmem (peter)
>>> - rename qatomic_mb_read (paolo)
>>> - whitespace cleanup (李皆俊)
>>>     I hope copy and paste work for the name O:-)
>>> - atomic_counters series (juan)
>>> - two first patches of capabilities (juan)
>>> Please apply,
>>
>> Fails CI:
>> https://gitlab.com/qemu-project/qemu/-/jobs/4159279870#L2896
>>
>> /usr/lib/gcc-cross/mipsel-linux-gnu/10/../../../../mipsel-linux-gnu/bin/ld:
>> libcommon.fa.p/migration_migration.c.o: undefined reference to symbol
>> '__atomic_load_8@@LIBATOMIC_1.0'
> 
> Hi Richard
> 
> First of all, I have no doubt that you know better that me in this
> regard (*).
> 
> Once told that, it looks like one case of "my toolchain is better than
> yours":
> 
> $ ls qemu-system-mips
> qemu-system-mips        qemu-system-mips64el.p/ qemu-system-mipsel.p/
> qemu-system-mips64      qemu-system-mips64.p/   qemu-system-mips.p/
> qemu-system-mips64el    qemu-system-mipsel
> 
> This is Fedora37 with updates.

I'm sure it's not true that "my toolchain is better", because mips32 simply does not have 
the ability.  (And of course mips64 does, but that's a different test.)

I'll note that mips32 and armv6 (that is, *not* debian's armv7 based armhf distro) are the 
only hosts we have that don't have an atomic 8-byte operation.


> There are two posibilities here that came to mind, in order of
> probability;
> - myself with:
> 
> -    if (ram_counters.dirty_pages_rate && transferred > 10000) {
> +    if (qatomic_read__nocheck(&ram_counters.dirty_pages_rate) &&
> +        transferred > 10000) {

I think it's this one...

> - paolo:
> 
>   PostcopyState  postcopy_state_get(void)
>   {
> -    return qatomic_mb_read(&incoming_postcopy_state);
> +    return qatomic_load_acquire(&incoming_postcopy_state);

... because this one was already atomic, with different barriers.

> and why I used qatomic_*__nocheck() instead of the proper operations?
> Because reading this:
> 
> #define qatomic_read__nocheck(ptr) \
>      __atomic_load_n(ptr, __ATOMIC_RELAXED)
> 
> #define qatomic_read(ptr)                              \
>      ({                                                 \
>      qemu_build_assert(sizeof(*ptr) <= ATOMIC_REG_SIZE); \
>      qatomic_read__nocheck(ptr);                        \
>      })
> 
> #define qatomic_set__nocheck(ptr, i) \
>      __atomic_store_n(ptr, i, __ATOMIC_RELAXED)
> 
> #define qatomic_set(ptr, i)  do {                      \
>      qemu_build_assert(sizeof(*ptr) <= ATOMIC_REG_SIZE); \
>      qatomic_set__nocheck(ptr, i);                      \
> } while(0)
> 
> I was complely sure that we will never get the qemu_build_assert().
> 
> I know, I know.

:-)

> And now that I have explained myself, what is the correct way of doing
> this?
> 
> I declared the value as:
> 
> +    aligned_uint64_t dirty_bytes_last_sync;
> -    int64_t remaining;
> 
> I just want to make sure that *all* ram_counters are atomic and then I
> can use them from any thread.  All the counters that use stat64 already
> are.  But for this two to work, I would need to have a way to set and
> old value.
> 
> And once that we are here, I would like ta have:
> 
> stat64_inc(): just add 1, I know, I can create a macro.
> 
> and
> 
> stat64_reset(): as its name says, it returns the value to zero.
> 
> I still miss a couple of stats in migration, where I need to reset them
> to zero from time to time:

How critical are the statistics?  Are they integral to the algorithm, or are they merely
for diagnostics and user display?  What happens if they're not atomic and we do race?

If we really need atomicity, then the only answer is a mutex or spinlock.

> ./ram.c:380:    uint64_t bytes_xfer_prev;
> ./ram.c:747:    rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
> ./ram.c:1183:        stat64_get(&ram_counters.transferred) - rs->bytes_xfer_prev;
> ./ram.c:1247:        rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
> 
> You can clame that this operation happens always on the migration
> thread, but I have found that it is more difficult to document which
> ones are atomic and which not, that make all of them atomic.  This
> variable are get/set once a second, so performance is not one of the
> issues.

For access once per second, it sounds like a spinlock would be fine.
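
For illustration only (this is not QEMU's Stat64 implementation, and the
counter64_* names are invented for the sketch), a 64-bit counter that falls
back to a spinlock on hosts without 8-byte atomics could look something
like this:

#include <pthread.h>
#include <stdint.h>

typedef struct {
    uint64_t value;
    pthread_spinlock_t lock;    /* only needed when 64-bit atomics are missing */
} Counter64;

static inline void counter64_init(Counter64 *c)
{
    c->value = 0;
    pthread_spin_init(&c->lock, PTHREAD_PROCESS_PRIVATE);
}

static inline void counter64_add(Counter64 *c, uint64_t delta)
{
    pthread_spin_lock(&c->lock);
    c->value += delta;
    pthread_spin_unlock(&c->lock);
}

static inline void counter64_set(Counter64 *c, uint64_t value)
{
    pthread_spin_lock(&c->lock);
    c->value = value;           /* counter64_set(c, 0) doubles as a reset */
    pthread_spin_unlock(&c->lock);
}

static inline uint64_t counter64_get(Counter64 *c)
{
    uint64_t value;

    pthread_spin_lock(&c->lock);
    value = c->value;
    pthread_spin_unlock(&c->lock);
    return value;
}

Stat64 already does roughly that split internally: plain relaxed atomics
when CONFIG_ATOMIC64 is available, and a small lock-based slow path
otherwise.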


r~


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PULL 00/20] Migration 20230420 patches
  2023-04-22  9:57     ` Richard Henderson
@ 2023-04-23  9:45       ` Juan Quintela
  2023-04-23 15:00         ` Richard Henderson
  2023-04-24  9:57         ` Juan Quintela
  0 siblings, 2 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-23  9:45 UTC (permalink / raw)
  To: Richard Henderson; +Cc: qemu-devel, Paolo Bonzini

Richard Henderson <richard.henderson@linaro.org> wrote:
> On 4/22/23 10:21, Juan Quintela wrote:
>> Richard Henderson <richard.henderson@linaro.org> wrote:
>>> On 4/20/23 14:17, Juan Quintela wrote:

>> Hi Richard
>> First of all, I have no doubt that you know better that me in this
>> regard (*).
>> Once told that, it looks like one case of "my toolchain is better
>> than
>> yours":

Quotes were here for a reason O:-)

>> $ ls qemu-system-mips
>> qemu-system-mips        qemu-system-mips64el.p/ qemu-system-mipsel.p/
>> qemu-system-mips64      qemu-system-mips64.p/   qemu-system-mips.p/
>> qemu-system-mips64el    qemu-system-mipsel
>> This is Fedora37 with updates.
>
> I'm sure it's not true that "my toolchain is better", because mips32
> simply does not have the ability.  (And of course mips64 does, but
> that's a different test.)

It was a kind of apology, to say that I had really compiled mipsel.  I
compile everything that has a cross-compiler in Fedora:

TARGET_DIRS=aarch64-softmmu alpha-softmmu arm-softmmu avr-softmmu
cris-softmmu hppa-softmmu i386-softmmu loongarch64-softmmu m68k-softmmu
microblazeel-softmmu microblaze-softmmu mips64el-softmmu mips64-softmmu
mipsel-softmmu mips-softmmu nios2-softmmu or1k-softmmu ppc64-softmmu
ppc-softmmu riscv32-softmmu riscv64-softmmu rx-softmmu s390x-softmmu
sh4eb-softmmu sh4-softmmu sparc64-softmmu sparc-softmmu tricore-softmmu
x86_64-softmmu xtensaeb-softmmu xtensa-softmmu aarch64_be-linux-user
aarch64-linux-user alpha-linux-user armeb-linux-user arm-linux-user
cris-linux-user hexagon-linux-user hppa-linux-user i386-linux-user
loongarch64-linux-user m68k-linux-user microblazeel-linux-user
microblaze-linux-user mips64el-linux-user mips64-linux-user
mipsel-linux-user mips-linux-user mipsn32el-linux-user
mipsn32-linux-user nios2-linux-user or1k-linux-user ppc64le-linux-user
ppc64-linux-user ppc-linux-user riscv32-linux-user riscv64-linux-user
s390x-linux-user sh4eb-linux-user sh4-linux-user sparc32plus-linux-user
sparc64-linux-user sparc-linux-user x86_64-linux-user
xtensaeb-linux-user xtensa-linux-user

And I still get these "build" failures.

> I'll note that mips32 and armv6 (that is, *not* debian's armv7 based
> armhf distro) are the only hosts we have that don't have an atomic
> 8-byte operation.

This is the kind of trouble where I don't know what to do.  I am pretty
sure that nobody is going to migrate a host that has so much RAM that it
needs a 64-bit counter on those two architectures (or any 32-bit
architecture, for what it's worth).

A couple of minutes after sending the 1st email, I considered sending
another one saying "my toolchain lies better than yours".

I moved to the atomic operations that do the build check and ran make again:

$ rm -f qemu-system-mips*
$ time make

[....]

[2/5] Linking target qemu-system-mipsel
[3/5] Linking target qemu-system-mips
[4/5] Linking target qemu-system-mips64el
[5/5] Linking target qemu-system-mips64

So clearly my toolchain is lying O:-)

>> There are two posibilities here that came to mind, in order of
>> probability;
>> - myself with:
>> -    if (ram_counters.dirty_pages_rate && transferred > 10000) {
>> +    if (qatomic_read__nocheck(&ram_counters.dirty_pages_rate) &&
>> +        transferred > 10000) {
>
> I think it's this one...

O:-)

>> and why I used qatomic_*__nocheck() instead of the proper operations?
>> Because reading this:
>> #define qatomic_read__nocheck(ptr) \
>>      __atomic_load_n(ptr, __ATOMIC_RELAXED)
>> #define qatomic_read(ptr)                              \
>>      ({                                                 \
>>      qemu_build_assert(sizeof(*ptr) <= ATOMIC_REG_SIZE); \
>>      qatomic_read__nocheck(ptr);                        \
>>      })
>> #define qatomic_set__nocheck(ptr, i) \
>>      __atomic_store_n(ptr, i, __ATOMIC_RELAXED)
>> #define qatomic_set(ptr, i)  do {                      \
>>      qemu_build_assert(sizeof(*ptr) <= ATOMIC_REG_SIZE); \
>>      qatomic_set__nocheck(ptr, i);                      \
>> } while(0)
>> I was complely sure that we will never get the qemu_build_assert().
>> I know, I know.
>
> :-)
>
>> And now that I have explained myself, what is the correct way of doing
>> this?
>> I declared the value as:
>> +    aligned_uint64_t dirty_bytes_last_sync;
>> -    int64_t remaining;
>> I just want to make sure that *all* ram_counters are atomic and then
>> I
>> can use them from any thread.  All the counters that use stat64 already
>> are.  But for this two to work, I would need to have a way to set and
>> old value.
>> And once that we are here, I would like ta have:
>> stat64_inc(): just add 1, I know, I can create a macro.
>> and
>> stat64_reset(): as its name says, it returns the value to zero.
>> I still miss a couple of stats in migration, where I need to reset
>> them
>> to zero from time to time:
>
> How critical are the statistics?  Are they integral to the algorithm
> or are they merely for diagnostics and user display?  What happens
> they're not atomic and we do race?
>
> If we really need atomicity, then the only answer is a mutex or spinlock.

I think we can extend the Stat64 operations with at least a stat64_reset()
operation.  What I don't want is half of the counters needing to be
updated with a spinlock and the other half with atomic operations; that
makes it difficult to explain.

If I have the stat64_reset() operation, then stat64_set() becomes
stat64_reset() + stat64_add().  I put a wrapper around that and call it a
day.  As said, this one is not so speed critical; see the sketch below.

Yes, other counters are speed critical, updated once for each transmitted
page.  But others are only updated every time that we try to finish
migration, or once per iteration.  The two giving trouble are of the
last kind.
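
As a rough sketch of that wrapper (stat64_reset() is hypothetical here,
it does not exist yet; stat64_add() does):

static inline void stat64_set(Stat64 *s, uint64_t value)
{
    /*
     * Not atomic as a whole: a concurrent reader can observe the
     * intermediate zero, which is acceptable for once-a-second stats.
     */
    stat64_reset(s);
    stat64_add(s, value);
}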

>> ./ram.c:380:    uint64_t bytes_xfer_prev;
>> ./ram.c:747:    rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
>> ./ram.c:1183:        stat64_get(&ram_counters.transferred) - rs->bytes_xfer_prev;
>> ./ram.c:1247:        rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
>> You can clame that this operation happens always on the migration
>> thread, but I have found that it is more difficult to document which
>> ones are atomic and which not, that make all of them atomic.  This
>> variable are get/set once a second, so performance is not one of the
>> issues.
>
> For access once per second, it sounds like a spinlock would be fine.

Orthogonality is more important to me than speed here.  I will wait
until someone (hint, hint) comes up with an implementation of
stat64_clear() that also works for non-ATOMIC64 machines O:-)

Later, Juan.



^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PULL 00/20] Migration 20230420 patches
  2023-04-23  9:45       ` Juan Quintela
@ 2023-04-23 15:00         ` Richard Henderson
  2023-04-24  9:57         ` Juan Quintela
  1 sibling, 0 replies; 27+ messages in thread
From: Richard Henderson @ 2023-04-23 15:00 UTC (permalink / raw)
  To: quintela; +Cc: qemu-devel, Paolo Bonzini

On 4/23/23 10:45, Juan Quintela wrote:
> This is the kind of trouble that I don'k now what to do.  I am pretty
> sure that nobody is goigng to migrate a host that has so much RAM than
> needs a 64bit counter in that two architectures (or any 32 architectures
> for what is worth).

Does it really need to be a 64-bit counter? Should it be size_t instead?
Given that a 32-bit host can't represent more than, say, 2**20 pages, we
shouldn't need 64 bits to count them either.
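(For reference: with 4 KiB pages, a full 32-bit address space is
2^32 / 2^12 = 2^20 pages, so a 32-bit count leaves plenty of headroom.)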


r~



^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PULL 00/20] Migration 20230420 patches
  2023-04-23  9:45       ` Juan Quintela
  2023-04-23 15:00         ` Richard Henderson
@ 2023-04-24  9:57         ` Juan Quintela
  1 sibling, 0 replies; 27+ messages in thread
From: Juan Quintela @ 2023-04-24  9:57 UTC (permalink / raw)
  To: Richard Henderson; +Cc: qemu-devel, Paolo Bonzini

Juan Quintela <quintela@redhat.com> wrote:
> Richard Henderson <richard.henderson@linaro.org> wrote:
>> On 4/22/23 10:21, Juan Quintela wrote:
>>> Richard Henderson <richard.henderson@linaro.org> wrote:
>>>> On 4/20/23 14:17, Juan Quintela wrote:

>> I'll note that mips32 and armv6 (that is, *not* debian's armv7 based
>> armhf distro) are the only hosts we have that don't have an atomic
>> 8-byte operation.
>
> This is the kind of trouble that I don'k now what to do.  I am pretty
> sure that nobody is goigng to migrate a host that has so much RAM than
> needs a 64bit counter in that two architectures (or any 32 architectures
> for what is worth).
>
> A couple of minutes after sending the 1st email, I considederd sending
> another one saying "my toolchain lies better than yours".
>
> I moved the atomic operations that do the buildcheck and run make again:
>
> $ rm -f qemu-system-mips*
> $ time make
>
> [....]
>
> [2/5] Linking target qemu-system-mipsel
> [3/5] Linking target qemu-system-mips
> [4/5] Linking target qemu-system-mips64el
> [5/5] Linking target qemu-system-mips64
>
> So clearly my toolchain is lying O:-)

And here I am.
Wearing a brown paper bag on my head for a week.

These are emulators for MIPS, not QEMU cross-compiled to run on MIPS.

/me hides in the hills in shame.

Later, Juan.



^ permalink raw reply	[flat|nested] 27+ messages in thread
