* [PULL 01/30] migration/multifd: Rename threadinfo.c functions
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
@ 2023-06-22 16:54 ` Juan Quintela
2023-06-22 16:54 ` [PULL 02/30] migration/multifd: Protect accesses to migration_threads Juan Quintela
` (29 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:54 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block, Fabiano Rosas,
Philippe Mathieu-Daudé
From: Fabiano Rosas <farosas@suse.de>
We're about to add more functions to this file, so make it use the same
coding style as the rest of the code.
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20230607161306.31425-2-farosas@suse.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/threadinfo.h | 5 ++---
migration/migration.c | 4 ++--
migration/multifd.c | 4 ++--
migration/threadinfo.c | 4 ++--
4 files changed, 8 insertions(+), 9 deletions(-)
diff --git a/migration/threadinfo.h b/migration/threadinfo.h
index 4d69423c0a..8aa6999d58 100644
--- a/migration/threadinfo.h
+++ b/migration/threadinfo.h
@@ -23,6 +23,5 @@ struct MigrationThread {
QLIST_ENTRY(MigrationThread) node;
};
-MigrationThread *MigrationThreadAdd(const char *name, int thread_id);
-
-void MigrationThreadDel(MigrationThread *info);
+MigrationThread *migration_threads_add(const char *name, int thread_id);
+void migration_threads_remove(MigrationThread *info);
diff --git a/migration/migration.c b/migration/migration.c
index dc05c6f6ea..3a001dd042 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2922,7 +2922,7 @@ static void *migration_thread(void *opaque)
MigThrError thr_error;
bool urgent = false;
- thread = MigrationThreadAdd("live_migration", qemu_get_thread_id());
+ thread = migration_threads_add("live_migration", qemu_get_thread_id());
rcu_register_thread();
@@ -3000,7 +3000,7 @@ static void *migration_thread(void *opaque)
migration_iteration_finish(s);
object_unref(OBJECT(s));
rcu_unregister_thread();
- MigrationThreadDel(thread);
+ migration_threads_remove(thread);
return NULL;
}
diff --git a/migration/multifd.c b/migration/multifd.c
index 3387d8277f..4c6cee6547 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -651,7 +651,7 @@ static void *multifd_send_thread(void *opaque)
int ret = 0;
bool use_zero_copy_send = migrate_zero_copy_send();
- thread = MigrationThreadAdd(p->name, qemu_get_thread_id());
+ thread = migration_threads_add(p->name, qemu_get_thread_id());
trace_multifd_send_thread_start(p->id);
rcu_register_thread();
@@ -767,7 +767,7 @@ out:
qemu_mutex_unlock(&p->mutex);
rcu_unregister_thread();
- MigrationThreadDel(thread);
+ migration_threads_remove(thread);
trace_multifd_send_thread_end(p->id, p->num_packets, p->total_normal_pages);
return NULL;
diff --git a/migration/threadinfo.c b/migration/threadinfo.c
index 1de8b31855..3dd9b14ae6 100644
--- a/migration/threadinfo.c
+++ b/migration/threadinfo.c
@@ -14,7 +14,7 @@
static QLIST_HEAD(, MigrationThread) migration_threads;
-MigrationThread *MigrationThreadAdd(const char *name, int thread_id)
+MigrationThread *migration_threads_add(const char *name, int thread_id)
{
MigrationThread *thread = g_new0(MigrationThread, 1);
thread->name = name;
@@ -25,7 +25,7 @@ MigrationThread *MigrationThreadAdd(const char *name, int thread_id)
return thread;
}
-void MigrationThreadDel(MigrationThread *thread)
+void migration_threads_remove(MigrationThread *thread)
{
if (thread) {
QLIST_REMOVE(thread, node);
--
2.40.1
* [PULL 02/30] migration/multifd: Protect accesses to migration_threads
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
2023-06-22 16:54 ` [PULL 01/30] migration/multifd: Rename threadinfo.c functions Juan Quintela
@ 2023-06-22 16:54 ` Juan Quintela
2023-06-22 16:55 ` [PULL 03/30] softmmu/dirtylimit: Add parameter check for hmp "set_vcpu_dirty_limit" Juan Quintela
` (28 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:54 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block, Fabiano Rosas
From: Fabiano Rosas <farosas@suse.de>
This doubly linked list is shared by all the multifd and migration
threads, so we need to avoid concurrent access.
Add a mutex to protect the data from concurrent access. This fixes a
crash when removing two MigrationThread objects from the list at the
same time during cleanup of multifd threads.
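The same lock now also covers the QMP command query-migrationthreads (see
the last hunk of the diff below), so the list can be queried safely while
multifd threads are being created or torn down. An illustrative QMP
exchange (the reply values are made up for the example):
  -> { "execute": "query-migrationthreads" }
  <- { "return": [ { "name": "live_migration", "thread-id": 123456 } ] }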
Fixes: 671326201d ("migration: Introduce interface query-migrationthreads")
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230607161306.31425-3-farosas@suse.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/threadinfo.h | 2 --
migration/threadinfo.c | 15 ++++++++++++++-
2 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/migration/threadinfo.h b/migration/threadinfo.h
index 8aa6999d58..2f356ff312 100644
--- a/migration/threadinfo.h
+++ b/migration/threadinfo.h
@@ -10,8 +10,6 @@
* See the COPYING file in the top-level directory.
*/
-#include "qemu/queue.h"
-#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qapi/qapi-commands-migration.h"
diff --git a/migration/threadinfo.c b/migration/threadinfo.c
index 3dd9b14ae6..262990dd75 100644
--- a/migration/threadinfo.c
+++ b/migration/threadinfo.c
@@ -10,23 +10,35 @@
* See the COPYING file in the top-level directory.
*/
+#include "qemu/osdep.h"
+#include "qemu/queue.h"
+#include "qemu/lockable.h"
#include "threadinfo.h"
+QemuMutex migration_threads_lock;
static QLIST_HEAD(, MigrationThread) migration_threads;
+static void __attribute__((constructor)) migration_threads_init(void)
+{
+ qemu_mutex_init(&migration_threads_lock);
+}
+
MigrationThread *migration_threads_add(const char *name, int thread_id)
{
MigrationThread *thread = g_new0(MigrationThread, 1);
thread->name = name;
thread->thread_id = thread_id;
- QLIST_INSERT_HEAD(&migration_threads, thread, node);
+ WITH_QEMU_LOCK_GUARD(&migration_threads_lock) {
+ QLIST_INSERT_HEAD(&migration_threads, thread, node);
+ }
return thread;
}
void migration_threads_remove(MigrationThread *thread)
{
+ QEMU_LOCK_GUARD(&migration_threads_lock);
if (thread) {
QLIST_REMOVE(thread, node);
g_free(thread);
@@ -39,6 +51,7 @@ MigrationThreadInfoList *qmp_query_migrationthreads(Error **errp)
MigrationThreadInfoList **tail = &head;
MigrationThread *thread = NULL;
+ QEMU_LOCK_GUARD(&migration_threads_lock);
QLIST_FOREACH(thread, &migration_threads, node) {
MigrationThreadInfo *info = g_new0(MigrationThreadInfo, 1);
info->name = g_strdup(thread->name);
--
2.40.1
* [PULL 03/30] softmmu/dirtylimit: Add parameter check for hmp "set_vcpu_dirty_limit"
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
2023-06-22 16:54 ` [PULL 01/30] migration/multifd: Rename threadinfo.c functions Juan Quintela
2023-06-22 16:54 ` [PULL 02/30] migration/multifd: Protect accesses to migration_threads Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 04/30] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter Juan Quintela
` (27 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block,
Hyman Huang(黄勇)
From: Hyman Huang(黄勇) <yong.huang@smartx.com>
The dirty_rate parameter of the HMP command "set_vcpu_dirty_limit" is
invalid if it is less than 0, so add a parameter check for it.
Note that this patch also deletes the unsolicited help message and
cleans up the code.
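For illustration, a monitor session sketch (assuming the documented
argument order: dirty rate in MB/s, then an optional CPU index; the error
text comes from this patch, the exact output formatting may differ):
  (qemu) set_vcpu_dirty_limit 200
  (qemu) set_vcpu_dirty_limit -1
  Error: invalid dirty page limit -1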
Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <168618975839.6361.17407633874747688653-1@git.sr.ht>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
softmmu/dirtylimit.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/softmmu/dirtylimit.c b/softmmu/dirtylimit.c
index 015a9038d1..e80201097a 100644
--- a/softmmu/dirtylimit.c
+++ b/softmmu/dirtylimit.c
@@ -515,14 +515,15 @@ void hmp_set_vcpu_dirty_limit(Monitor *mon, const QDict *qdict)
int64_t cpu_index = qdict_get_try_int(qdict, "cpu_index", -1);
Error *err = NULL;
+ if (dirty_rate < 0) {
+ error_setg(&err, "invalid dirty page limit %" PRId64, dirty_rate);
+ goto out;
+ }
+
qmp_set_vcpu_dirty_limit(!!(cpu_index != -1), cpu_index, dirty_rate, &err);
- if (err) {
- hmp_handle_error(mon, err);
- return;
- }
- monitor_printf(mon, "[Please use 'info vcpu_dirty_limit' to query "
- "dirty limit for virtual CPU]\n");
+out:
+ hmp_handle_error(mon, err);
}
static struct DirtyLimitInfo *dirtylimit_query_vcpu(int cpu_index)
--
2.40.1
* [PULL 04/30] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (2 preceding siblings ...)
2023-06-22 16:55 ` [PULL 03/30] softmmu/dirtylimit: Add parameter check for hmp "set_vcpu_dirty_limit" Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 05/30] qapi/migration: Introduce vcpu-dirty-limit parameters Juan Quintela
` (26 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block,
Hyman Huang(黄勇)
From: Hyman Huang(黄勇) <yong.huang@smartx.com>
Introduce "x-vcpu-dirty-limit-period" migration experimental
parameter, which is in the range of 1 to 1000ms and used to
make dirtyrate calculation period configurable.
Currently with the "x-vcpu-dirty-limit-period" varies, the
total time of live migration changes, test results show the
optimal value of "x-vcpu-dirty-limit-period" ranges from
500ms to 1000 ms. "x-vcpu-dirty-limit-period" should be made
stable once it proves best value can not be determined with
developer's experiments.
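An illustrative QMP invocation; the 500 ms value is only an example
within the allowed 1 to 1000 ms range:
  -> { "execute": "migrate-set-parameters",
       "arguments": { "x-vcpu-dirty-limit-period": 500 } }
  <- { "return": {} }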
Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <168618975839.6361.17407633874747688653-2@git.sr.ht>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
qapi/migration.json | 34 +++++++++++++++++++++++++++-------
migration/migration-hmp-cmds.c | 8 ++++++++
migration/options.c | 28 ++++++++++++++++++++++++++++
3 files changed, 63 insertions(+), 7 deletions(-)
diff --git a/qapi/migration.json b/qapi/migration.json
index 5bb5ab82a0..67c26d9dea 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -779,9 +779,14 @@
# Nodes are mapped to their block device name if there is one, and
# to their node name otherwise. (Since 5.2)
#
+# @x-vcpu-dirty-limit-period: Periodic time (in milliseconds) of dirty limit during
+# live migration. Should be in the range 1 to 1000ms,
+# defaults to 1000ms. (Since 8.1)
+#
# Features:
#
-# @unstable: Member @x-checkpoint-delay is experimental.
+# @unstable: Members @x-checkpoint-delay and @x-vcpu-dirty-limit-period
+# are experimental.
#
# Since: 2.4
##
@@ -799,8 +804,9 @@
'multifd-channels',
'xbzrle-cache-size', 'max-postcopy-bandwidth',
'max-cpu-throttle', 'multifd-compression',
- 'multifd-zlib-level' ,'multifd-zstd-level',
- 'block-bitmap-mapping' ] }
+ 'multifd-zlib-level', 'multifd-zstd-level',
+ 'block-bitmap-mapping',
+ { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] } ] }
##
# @MigrateSetParameters:
@@ -935,9 +941,14 @@
# Nodes are mapped to their block device name if there is one, and
# to their node name otherwise. (Since 5.2)
#
+# @x-vcpu-dirty-limit-period: Periodic time (in milliseconds) of dirty limit during
+# live migration. Should be in the range 1 to 1000ms,
+# defaults to 1000ms. (Since 8.1)
+#
# Features:
#
-# @unstable: Member @x-checkpoint-delay is experimental.
+# @unstable: Members @x-checkpoint-delay and @x-vcpu-dirty-limit-period
+# are experimental.
#
# TODO: either fuse back into MigrationParameters, or make
# MigrationParameters members mandatory
@@ -972,7 +983,9 @@
'*multifd-compression': 'MultiFDCompression',
'*multifd-zlib-level': 'uint8',
'*multifd-zstd-level': 'uint8',
- '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ] } }
+ '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
+ '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
+ 'features': [ 'unstable' ] } } }
##
# @migrate-set-parameters:
@@ -1127,9 +1140,14 @@
# Nodes are mapped to their block device name if there is one, and
# to their node name otherwise. (Since 5.2)
#
+# @x-vcpu-dirty-limit-period: Periodic time (in milliseconds) of dirty limit during
+# live migration. Should be in the range 1 to 1000ms,
+# defaults to 1000ms. (Since 8.1)
+#
# Features:
#
-# @unstable: Member @x-checkpoint-delay is experimental.
+# @unstable: Members @x-checkpoint-delay and @x-vcpu-dirty-limit-period
+# are experimental.
#
# Since: 2.4
##
@@ -1161,7 +1179,9 @@
'*multifd-compression': 'MultiFDCompression',
'*multifd-zlib-level': 'uint8',
'*multifd-zstd-level': 'uint8',
- '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ] } }
+ '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
+ '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
+ 'features': [ 'unstable' ] } } }
##
# @query-migrate-parameters:
diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c
index 9885d7c9f7..352e9ec716 100644
--- a/migration/migration-hmp-cmds.c
+++ b/migration/migration-hmp-cmds.c
@@ -364,6 +364,10 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
}
}
}
+
+ monitor_printf(mon, "%s: %" PRIu64 " ms\n",
+ MigrationParameter_str(MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT_PERIOD),
+ params->x_vcpu_dirty_limit_period);
}
qapi_free_MigrationParameters(params);
@@ -620,6 +624,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
error_setg(&err, "The block-bitmap-mapping parameter can only be set "
"through QMP");
break;
+ case MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT_PERIOD:
+ p->has_x_vcpu_dirty_limit_period = true;
+ visit_type_size(v, param, &p->x_vcpu_dirty_limit_period, &err);
+ break;
default:
assert(0);
}
diff --git a/migration/options.c b/migration/options.c
index b62ab30cd5..9743dea3ab 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -80,6 +80,8 @@
#define DEFINE_PROP_MIG_CAP(name, x) \
DEFINE_PROP_BOOL(name, MigrationState, capabilities[x], false)
+#define DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD 1000 /* milliseconds */
+
Property migration_properties[] = {
DEFINE_PROP_BOOL("store-global-state", MigrationState,
store_global_state, true),
@@ -163,6 +165,9 @@ Property migration_properties[] = {
DEFINE_PROP_STRING("tls-creds", MigrationState, parameters.tls_creds),
DEFINE_PROP_STRING("tls-hostname", MigrationState, parameters.tls_hostname),
DEFINE_PROP_STRING("tls-authz", MigrationState, parameters.tls_authz),
+ DEFINE_PROP_UINT64("x-vcpu-dirty-limit-period", MigrationState,
+ parameters.x_vcpu_dirty_limit_period,
+ DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD),
/* Migration capabilities */
DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
@@ -891,6 +896,9 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
s->parameters.block_bitmap_mapping);
}
+ params->has_x_vcpu_dirty_limit_period = true;
+ params->x_vcpu_dirty_limit_period = s->parameters.x_vcpu_dirty_limit_period;
+
return params;
}
@@ -923,6 +931,7 @@ void migrate_params_init(MigrationParameters *params)
params->has_announce_max = true;
params->has_announce_rounds = true;
params->has_announce_step = true;
+ params->has_x_vcpu_dirty_limit_period = true;
}
/*
@@ -1083,6 +1092,15 @@ bool migrate_params_check(MigrationParameters *params, Error **errp)
}
#endif
+ if (params->has_x_vcpu_dirty_limit_period &&
+ (params->x_vcpu_dirty_limit_period < 1 ||
+ params->x_vcpu_dirty_limit_period > 1000)) {
+ error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
+ "x-vcpu-dirty-limit-period",
+ "a value between 1 and 1000");
+ return false;
+ }
+
return true;
}
@@ -1182,6 +1200,11 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
dest->has_block_bitmap_mapping = true;
dest->block_bitmap_mapping = params->block_bitmap_mapping;
}
+
+ if (params->has_x_vcpu_dirty_limit_period) {
+ dest->x_vcpu_dirty_limit_period =
+ params->x_vcpu_dirty_limit_period;
+ }
}
static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
@@ -1300,6 +1323,11 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
QAPI_CLONE(BitmapMigrationNodeAliasList,
params->block_bitmap_mapping);
}
+
+ if (params->has_x_vcpu_dirty_limit_period) {
+ s->parameters.x_vcpu_dirty_limit_period =
+ params->x_vcpu_dirty_limit_period;
+ }
}
void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
--
2.40.1
* [PULL 05/30] qapi/migration: Introduce vcpu-dirty-limit parameters
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (3 preceding siblings ...)
2023-06-22 16:55 ` [PULL 04/30] qapi/migration: Introduce x-vcpu-dirty-limit-period parameter Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 06/30] migration: Introduce dirty-limit capability Juan Quintela
` (25 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block,
Hyman Huang(黄勇)
From: Hyman Huang(黄勇) <yong.huang@smartx.com>
Introduce "vcpu-dirty-limit" migration parameter used
to limit dirty page rate during live migration.
"vcpu-dirty-limit" and "x-vcpu-dirty-limit-period" are
two dirty-limit-related migration parameters, which can
be set before and during live migration by qmp
migrate-set-parameters.
This two parameters are used to help implement the dirty
page rate limit algo of migration.
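An illustrative migrate-set-parameters invocation; the 100 MB/s quota is
only an example value:
  -> { "execute": "migrate-set-parameters",
       "arguments": { "vcpu-dirty-limit": 100 } }
  <- { "return": {} }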
Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
Acked-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <168618975839.6361.17407633874747688653-3@git.sr.ht>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
qapi/migration.json | 18 +++++++++++++++---
migration/migration-hmp-cmds.c | 8 ++++++++
migration/options.c | 21 +++++++++++++++++++++
3 files changed, 44 insertions(+), 3 deletions(-)
diff --git a/qapi/migration.json b/qapi/migration.json
index 67c26d9dea..e7243c0c0d 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -783,6 +783,9 @@
# live migration. Should be in the range 1 to 1000ms,
# defaults to 1000ms. (Since 8.1)
#
+# @vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
+# Defaults to 1. (Since 8.1)
+#
# Features:
#
# @unstable: Members @x-checkpoint-delay and @x-vcpu-dirty-limit-period
@@ -806,7 +809,8 @@
'max-cpu-throttle', 'multifd-compression',
'multifd-zlib-level', 'multifd-zstd-level',
'block-bitmap-mapping',
- { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] } ] }
+ { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] },
+ 'vcpu-dirty-limit'] }
##
# @MigrateSetParameters:
@@ -945,6 +949,9 @@
# live migration. Should be in the range 1 to 1000ms,
# defaults to 1000ms. (Since 8.1)
#
+# @vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
+# Defaults to 1. (Since 8.1)
+#
# Features:
#
# @unstable: Members @x-checkpoint-delay and @x-vcpu-dirty-limit-period
@@ -985,7 +992,8 @@
'*multifd-zstd-level': 'uint8',
'*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
'*x-vcpu-dirty-limit-period': { 'type': 'uint64',
- 'features': [ 'unstable' ] } } }
+ 'features': [ 'unstable' ] },
+ '*vcpu-dirty-limit': 'uint64'} }
##
# @migrate-set-parameters:
@@ -1144,6 +1152,9 @@
# live migration. Should be in the range 1 to 1000ms,
# defaults to 1000ms. (Since 8.1)
#
+# @vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
+# Defaults to 1. (Since 8.1)
+#
# Features:
#
# @unstable: Members @x-checkpoint-delay and @x-vcpu-dirty-limit-period
@@ -1181,7 +1192,8 @@
'*multifd-zstd-level': 'uint8',
'*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
'*x-vcpu-dirty-limit-period': { 'type': 'uint64',
- 'features': [ 'unstable' ] } } }
+ 'features': [ 'unstable' ] },
+ '*vcpu-dirty-limit': 'uint64'} }
##
# @query-migrate-parameters:
diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c
index 352e9ec716..35e8020bbf 100644
--- a/migration/migration-hmp-cmds.c
+++ b/migration/migration-hmp-cmds.c
@@ -368,6 +368,10 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
monitor_printf(mon, "%s: %" PRIu64 " ms\n",
MigrationParameter_str(MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT_PERIOD),
params->x_vcpu_dirty_limit_period);
+
+ monitor_printf(mon, "%s: %" PRIu64 " MB/s\n",
+ MigrationParameter_str(MIGRATION_PARAMETER_VCPU_DIRTY_LIMIT),
+ params->vcpu_dirty_limit);
}
qapi_free_MigrationParameters(params);
@@ -628,6 +632,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
p->has_x_vcpu_dirty_limit_period = true;
visit_type_size(v, param, &p->x_vcpu_dirty_limit_period, &err);
break;
+ case MIGRATION_PARAMETER_VCPU_DIRTY_LIMIT:
+ p->has_vcpu_dirty_limit = true;
+ visit_type_size(v, param, &p->vcpu_dirty_limit, &err);
+ break;
default:
assert(0);
}
diff --git a/migration/options.c b/migration/options.c
index 9743dea3ab..8acf5f1d2c 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -81,6 +81,7 @@
DEFINE_PROP_BOOL(name, MigrationState, capabilities[x], false)
#define DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD 1000 /* milliseconds */
+#define DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT 1 /* MB/s */
Property migration_properties[] = {
DEFINE_PROP_BOOL("store-global-state", MigrationState,
@@ -168,6 +169,9 @@ Property migration_properties[] = {
DEFINE_PROP_UINT64("x-vcpu-dirty-limit-period", MigrationState,
parameters.x_vcpu_dirty_limit_period,
DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD),
+ DEFINE_PROP_UINT64("vcpu-dirty-limit", MigrationState,
+ parameters.vcpu_dirty_limit,
+ DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT),
/* Migration capabilities */
DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
@@ -898,6 +902,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
params->has_x_vcpu_dirty_limit_period = true;
params->x_vcpu_dirty_limit_period = s->parameters.x_vcpu_dirty_limit_period;
+ params->has_vcpu_dirty_limit = true;
+ params->vcpu_dirty_limit = s->parameters.vcpu_dirty_limit;
return params;
}
@@ -932,6 +938,7 @@ void migrate_params_init(MigrationParameters *params)
params->has_announce_rounds = true;
params->has_announce_step = true;
params->has_x_vcpu_dirty_limit_period = true;
+ params->has_vcpu_dirty_limit = true;
}
/*
@@ -1101,6 +1108,14 @@ bool migrate_params_check(MigrationParameters *params, Error **errp)
return false;
}
+ if (params->has_vcpu_dirty_limit &&
+ (params->vcpu_dirty_limit < 1)) {
+ error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
+ "vcpu_dirty_limit",
+ "is invalid, it must greater then 1 MB/s");
+ return false;
+ }
+
return true;
}
@@ -1205,6 +1220,9 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
dest->x_vcpu_dirty_limit_period =
params->x_vcpu_dirty_limit_period;
}
+ if (params->has_vcpu_dirty_limit) {
+ dest->vcpu_dirty_limit = params->vcpu_dirty_limit;
+ }
}
static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
@@ -1328,6 +1346,9 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
s->parameters.x_vcpu_dirty_limit_period =
params->x_vcpu_dirty_limit_period;
}
+ if (params->has_vcpu_dirty_limit) {
+ s->parameters.vcpu_dirty_limit = params->vcpu_dirty_limit;
+ }
}
void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
--
2.40.1
* [PULL 06/30] migration: Introduce dirty-limit capability
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (4 preceding siblings ...)
2023-06-22 16:55 ` [PULL 05/30] qapi/migration: Introduce vcpu-dirty-limit parameters Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 07/30] migration: Refactor auto-converge capability logic Juan Quintela
` (24 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block,
Hyman Huang(黄勇)
From: Hyman Huang(黄勇) <yong.huang@smartx.com>
Introduce the migration dirty-limit capability, which can be turned
on before live migration and limits the dirty page rate during live
migration.
Introduce the migrate_dirty_limit function to help check whether the
dirty-limit capability is enabled during live migration.
Meanwhile, refactor vcpu_dirty_rate_stat_collect so that the
calculation period can be configured instead of hardcoded.
The dirty-limit capability is similar to auto-converge, but it uses
the dirty limit instead of the traditional cpu-throttle mechanism to
throttle the guest down. To enable this feature, turn on the
dirty-limit capability before live migration using
migrate-set-capabilities, and set the parameters
"x-vcpu-dirty-limit-period" and "vcpu-dirty-limit" suitably to speed
up convergence.
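A sketch of that QMP sequence, assuming the VM runs under KVM with a dirty
ring configured (e.g. -accel kvm,dirty-ring-size=4096); the parameter
values are only examples:
  -> { "execute": "migrate-set-capabilities",
       "arguments": { "capabilities": [
           { "capability": "dirty-limit", "state": true } ] } }
  <- { "return": {} }
  -> { "execute": "migrate-set-parameters",
       "arguments": { "x-vcpu-dirty-limit-period": 1000,
                      "vcpu-dirty-limit": 100 } }
  <- { "return": {} }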
Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
Acked-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <168618975839.6361.17407633874747688653-4@git.sr.ht>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
qapi/migration.json | 12 +++++++++++-
migration/options.h | 1 +
migration/options.c | 23 +++++++++++++++++++++++
softmmu/dirtylimit.c | 18 ++++++++++++++----
4 files changed, 49 insertions(+), 5 deletions(-)
diff --git a/qapi/migration.json b/qapi/migration.json
index e7243c0c0d..621e6604c6 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -487,6 +487,16 @@
# and should not affect the correctness of postcopy migration.
# (since 7.1)
#
+# @dirty-limit: If enabled, migration will use the dirty-limit algo to
+# throttle down guest instead of auto-converge algo.
+# Throttle algo only works when vCPU's dirtyrate greater
+# than 'vcpu-dirty-limit', read processes in guest os
+# aren't penalized any more, so this algo can improve
+# performance of vCPU during live migration. This is an
+# optional performance feature and should not affect the
+# correctness of the existing auto-converge algo.
+# (since 8.1)
+#
# Features:
#
# @unstable: Members @x-colo and @x-ignore-shared are experimental.
@@ -502,7 +512,7 @@
'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
{ 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
'validate-uuid', 'background-snapshot',
- 'zero-copy-send', 'postcopy-preempt'] }
+ 'zero-copy-send', 'postcopy-preempt', 'dirty-limit'] }
##
# @MigrationCapabilityStatus:
diff --git a/migration/options.h b/migration/options.h
index 45991af3c2..51964eff29 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -29,6 +29,7 @@ bool migrate_block(void);
bool migrate_colo(void);
bool migrate_compress(void);
bool migrate_dirty_bitmaps(void);
+bool migrate_dirty_limit(void);
bool migrate_events(void);
bool migrate_ignore_shared(void);
bool migrate_late_block_activate(void);
diff --git a/migration/options.c b/migration/options.c
index 8acf5f1d2c..ba1010e08b 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -27,6 +27,7 @@
#include "qemu-file.h"
#include "ram.h"
#include "options.h"
+#include "sysemu/kvm.h"
/* Maximum migrate downtime set to 2000 seconds */
#define MAX_MIGRATE_DOWNTIME_SECONDS 2000
@@ -194,6 +195,7 @@ Property migration_properties[] = {
DEFINE_PROP_MIG_CAP("x-zero-copy-send",
MIGRATION_CAPABILITY_ZERO_COPY_SEND),
#endif
+ DEFINE_PROP_MIG_CAP("x-dirty-limit", MIGRATION_CAPABILITY_DIRTY_LIMIT),
DEFINE_PROP_END_OF_LIST(),
};
@@ -240,6 +242,13 @@ bool migrate_dirty_bitmaps(void)
return s->capabilities[MIGRATION_CAPABILITY_DIRTY_BITMAPS];
}
+bool migrate_dirty_limit(void)
+{
+ MigrationState *s = migrate_get_current();
+
+ return s->capabilities[MIGRATION_CAPABILITY_DIRTY_LIMIT];
+}
+
bool migrate_events(void)
{
MigrationState *s = migrate_get_current();
@@ -556,6 +565,20 @@ bool migrate_caps_check(bool *old_caps, bool *new_caps, Error **errp)
}
}
+ if (new_caps[MIGRATION_CAPABILITY_DIRTY_LIMIT]) {
+ if (new_caps[MIGRATION_CAPABILITY_AUTO_CONVERGE]) {
+ error_setg(errp, "dirty-limit conflicts with auto-converge"
+ " either of then available currently");
+ return false;
+ }
+
+ if (!kvm_enabled() || !kvm_dirty_ring_enabled()) {
+ error_setg(errp, "dirty-limit requires KVM with accelerator"
+ " property 'dirty-ring-size' set");
+ return false;
+ }
+ }
+
return true;
}
diff --git a/softmmu/dirtylimit.c b/softmmu/dirtylimit.c
index e80201097a..942d876523 100644
--- a/softmmu/dirtylimit.c
+++ b/softmmu/dirtylimit.c
@@ -24,6 +24,9 @@
#include "hw/boards.h"
#include "sysemu/kvm.h"
#include "trace.h"
+#include "migration/misc.h"
+#include "migration/migration.h"
+#include "migration/options.h"
/*
* Dirtylimit stop working if dirty page rate error
@@ -75,14 +78,21 @@ static bool dirtylimit_quit;
static void vcpu_dirty_rate_stat_collect(void)
{
+ MigrationState *s = migrate_get_current();
VcpuStat stat;
int i = 0;
+ int64_t period = DIRTYLIMIT_CALC_TIME_MS;
+
+ if (migrate_dirty_limit() &&
+ migration_is_active(s)) {
+ period = s->parameters.x_vcpu_dirty_limit_period;
+ }
/* calculate vcpu dirtyrate */
- vcpu_calculate_dirtyrate(DIRTYLIMIT_CALC_TIME_MS,
- &stat,
- GLOBAL_DIRTY_LIMIT,
- false);
+ vcpu_calculate_dirtyrate(period,
+ &stat,
+ GLOBAL_DIRTY_LIMIT,
+ false);
for (i = 0; i < stat.nvcpu; i++) {
vcpu_dirty_rate_stat->stat.rates[i].id = i;
--
2.40.1
* [PULL 07/30] migration: Refactor auto-converge capability logic
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (5 preceding siblings ...)
2023-06-22 16:55 ` [PULL 06/30] migration: Introduce dirty-limit capability Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 08/30] migration: Put the detection logic before auto-converge checking Juan Quintela
` (23 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block,
Hyman Huang(黄勇)
From: Hyman Huang(黄勇) <yong.huang@smartx.com>
Check whether block migration is running before throttling the guest
down in the auto-converge way.
Note that this modification is essentially a code cleanup, because
block migration does not depend on the auto-converge capability, so
the order of the checks can be adjusted.
Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
Acked-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <168618975839.6361.17407633874747688653-5@git.sr.ht>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/migration/ram.c b/migration/ram.c
index 5283a75f02..78746849b5 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -995,7 +995,11 @@ static void migration_trigger_throttle(RAMState *rs)
/* During block migration the auto-converge logic incorrectly detects
* that ram migration makes no progress. Avoid this by disabling the
* throttling logic during the bulk phase of block migration. */
- if (migrate_auto_converge() && !blk_mig_bulk_active()) {
+ if (blk_mig_bulk_active()) {
+ return;
+ }
+
+ if (migrate_auto_converge()) {
/* The following detection logic can be refined later. For now:
Check to see if the ratio between dirtied bytes and the approx.
amount of bytes that just got transferred since the last time
--
2.40.1
* [PULL 08/30] migration: Put the detection logic before auto-converge checking
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (6 preceding siblings ...)
2023-06-22 16:55 ` [PULL 07/30] migration: Refactor auto-converge capability logic Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 09/30] migration: Implement dirty-limit convergence algo Juan Quintela
` (22 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block,
Hyman Huang(黄勇)
From: Hyman Huang(黄勇) <yong.huang@smartx.com>
This commit prepares for the implementation of the dirty-limit
convergence algorithm.
The detection logic of the throttling condition applies to both the
auto-converge and dirty-limit algorithms, so move it before the
checking logic for the auto-converge feature.
Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-ID: <168733225273.5845.15871826788879741674-6@git.sr.ht>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 78746849b5..b6559f9312 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -999,17 +999,18 @@ static void migration_trigger_throttle(RAMState *rs)
return;
}
- if (migrate_auto_converge()) {
- /* The following detection logic can be refined later. For now:
- Check to see if the ratio between dirtied bytes and the approx.
- amount of bytes that just got transferred since the last time
- we were in this routine reaches the threshold. If that happens
- twice, start or increase throttling. */
-
- if ((bytes_dirty_period > bytes_dirty_threshold) &&
- (++rs->dirty_rate_high_cnt >= 2)) {
+ /*
+ * The following detection logic can be refined later. For now:
+ * Check to see if the ratio between dirtied bytes and the approx.
+ * amount of bytes that just got transferred since the last time
+ * we were in this routine reaches the threshold. If that happens
+ * twice, start or increase throttling.
+ */
+ if ((bytes_dirty_period > bytes_dirty_threshold) &&
+ (++rs->dirty_rate_high_cnt >= 2)) {
+ rs->dirty_rate_high_cnt = 0;
+ if (migrate_auto_converge()) {
trace_migration_throttle();
- rs->dirty_rate_high_cnt = 0;
mig_throttle_guest_down(bytes_dirty_period,
bytes_dirty_threshold);
}
--
2.40.1
* [PULL 09/30] migration: Implement dirty-limit convergence algo
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (7 preceding siblings ...)
2023-06-22 16:55 ` [PULL 08/30] migration: Put the detection logic before auto-converge checking Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 10/30] migration: Extend query-migrate to provide dirty page limit info Juan Quintela
` (21 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block,
Hyman Huang(黄勇)
From: Hyman Huang(黄勇) <yong.huang@smartx.com>
Implement the dirty-limit convergence algorithm for live migration,
which is similar to the auto-converge algorithm but uses the dirty
limit instead of cpu throttle to make migration converge.
Enable the dirty page limit if dirty_rate_high_cnt is greater than 2
while the dirty-limit capability is enabled, and disable the dirty
limit if migration is cancelled.
Note that the "set_vcpu_dirty_limit" and "cancel_vcpu_dirty_limit"
commands are not allowed during dirty-limit live migration.
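For illustration, issuing the standalone QMP command while a dirty-limit
migration is in progress is now refused (the error description comes from
this patch; the reply shape is a sketch):
  -> { "execute": "set-vcpu-dirty-limit", "arguments": { "dirty-rate": 200 } }
  <- { "error": { "class": "GenericError",
        "desc": "can't set dirty page rate limit while migration is running" } }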
Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-ID: <168733225273.5845.15871826788879741674-7@git.sr.ht>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/migration.c | 3 +++
migration/ram.c | 36 ++++++++++++++++++++++++++++++++++++
softmmu/dirtylimit.c | 29 +++++++++++++++++++++++++++++
migration/trace-events | 1 +
4 files changed, 69 insertions(+)
diff --git a/migration/migration.c b/migration/migration.c
index 3a001dd042..c101784dfa 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -165,6 +165,9 @@ void migration_cancel(const Error *error)
if (error) {
migrate_set_error(current_migration, error);
}
+ if (migrate_dirty_limit()) {
+ qmp_cancel_vcpu_dirty_limit(false, -1, NULL);
+ }
migrate_fd_cancel(current_migration);
}
diff --git a/migration/ram.c b/migration/ram.c
index b6559f9312..8a86363216 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -46,6 +46,7 @@
#include "qapi/error.h"
#include "qapi/qapi-types-migration.h"
#include "qapi/qapi-events-migration.h"
+#include "qapi/qapi-commands-migration.h"
#include "qapi/qmp/qerror.h"
#include "trace.h"
#include "exec/ram_addr.h"
@@ -59,6 +60,8 @@
#include "multifd.h"
#include "sysemu/runstate.h"
#include "options.h"
+#include "sysemu/dirtylimit.h"
+#include "sysemu/kvm.h"
#include "hw/boards.h" /* for machine_dump_guest_core() */
@@ -984,6 +987,37 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
}
}
+/*
+ * Enable dirty-limit to throttle down the guest
+ */
+static void migration_dirty_limit_guest(void)
+{
+ /*
+ * dirty page rate quota for all vCPUs fetched from
+ * migration parameter 'vcpu_dirty_limit'
+ */
+ static int64_t quota_dirtyrate;
+ MigrationState *s = migrate_get_current();
+
+ /*
+ * If dirty limit already enabled and migration parameter
+ * vcpu-dirty-limit untouched.
+ */
+ if (dirtylimit_in_service() &&
+ quota_dirtyrate == s->parameters.vcpu_dirty_limit) {
+ return;
+ }
+
+ quota_dirtyrate = s->parameters.vcpu_dirty_limit;
+
+ /*
+ * Set all vCPU a quota dirtyrate, note that the second
+ * parameter will be ignored if setting all vCPU for the vm
+ */
+ qmp_set_vcpu_dirty_limit(false, -1, quota_dirtyrate, NULL);
+ trace_migration_dirty_limit_guest(quota_dirtyrate);
+}
+
static void migration_trigger_throttle(RAMState *rs)
{
uint64_t threshold = migrate_throttle_trigger_threshold();
@@ -1013,6 +1047,8 @@ static void migration_trigger_throttle(RAMState *rs)
trace_migration_throttle();
mig_throttle_guest_down(bytes_dirty_period,
bytes_dirty_threshold);
+ } else if (migrate_dirty_limit()) {
+ migration_dirty_limit_guest();
}
}
}
diff --git a/softmmu/dirtylimit.c b/softmmu/dirtylimit.c
index 942d876523..a6d854d161 100644
--- a/softmmu/dirtylimit.c
+++ b/softmmu/dirtylimit.c
@@ -436,6 +436,23 @@ static void dirtylimit_cleanup(void)
dirtylimit_state_finalize();
}
+/*
+ * dirty page rate limit is not allowed to set if migration
+ * is running with dirty-limit capability enabled.
+ */
+static bool dirtylimit_is_allowed(void)
+{
+ MigrationState *ms = migrate_get_current();
+
+ if (migration_is_running(ms->state) &&
+ (!qemu_thread_is_self(&ms->thread)) &&
+ migrate_dirty_limit() &&
+ dirtylimit_in_service()) {
+ return false;
+ }
+ return true;
+}
+
void qmp_cancel_vcpu_dirty_limit(bool has_cpu_index,
int64_t cpu_index,
Error **errp)
@@ -449,6 +466,12 @@ void qmp_cancel_vcpu_dirty_limit(bool has_cpu_index,
return;
}
+ if (!dirtylimit_is_allowed()) {
+ error_setg(errp, "can't cancel dirty page rate limit while"
+ " migration is running");
+ return;
+ }
+
if (!dirtylimit_in_service()) {
return;
}
@@ -499,6 +522,12 @@ void qmp_set_vcpu_dirty_limit(bool has_cpu_index,
return;
}
+ if (!dirtylimit_is_allowed()) {
+ error_setg(errp, "can't set dirty page rate limit while"
+ " migration is running");
+ return;
+ }
+
if (!dirty_rate) {
qmp_cancel_vcpu_dirty_limit(has_cpu_index, cpu_index, errp);
return;
diff --git a/migration/trace-events b/migration/trace-events
index cdaef7a1ea..c5cb280d95 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -91,6 +91,7 @@ migration_bitmap_sync_start(void) ""
migration_bitmap_sync_end(uint64_t dirty_pages) "dirty_pages %" PRIu64
migration_bitmap_clear_dirty(char *str, uint64_t start, uint64_t size, unsigned long page) "rb %s start 0x%"PRIx64" size 0x%"PRIx64" page 0x%lx"
migration_throttle(void) ""
+migration_dirty_limit_guest(int64_t dirtyrate) "guest dirty page rate limit %" PRIi64 " MB/s"
ram_discard_range(const char *rbname, uint64_t start, size_t len) "%s: start: %" PRIx64 " %zx"
ram_load_loop(const char *rbname, uint64_t addr, int flags, void *host) "%s: addr: 0x%" PRIx64 " flags: 0x%x host: %p"
ram_load_postcopy_loop(int channel, uint64_t addr, int flags) "chan=%d addr=0x%" PRIx64 " flags=0x%x"
--
2.40.1
* [PULL 10/30] migration: Extend query-migrate to provide dirty page limit info
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (8 preceding siblings ...)
2023-06-22 16:55 ` [PULL 09/30] migration: Implement dirty-limit convergence algo Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 11/30] migration-test: Be consistent for ppc Juan Quintela
` (20 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block,
Hyman Huang(黄勇)
From: Hyman Huang(黄勇) <yong.huang@smartx.com>
Extend query-migrate to provide the throttle time and the estimated
ring full time when the dirty-limit capability is enabled, through
which we can observe whether the dirty limit takes effect during live
migration.
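For example, with the dirty-limit capability enabled the query-migrate
reply gains the two new optional fields; the numbers below are
illustrative only:
  -> { "execute": "query-migrate" }
  <- { "return": { "status": "active",
        "dirty-limit-throttle-time-per-round": 500,
        "dirty-limit-ring-full-time": 800,
        ... } }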
Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-ID: <168733225273.5845.15871826788879741674-8@git.sr.ht>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
qapi/migration.json | 16 +++++++++++++-
include/sysemu/dirtylimit.h | 2 ++
migration/migration-hmp-cmds.c | 10 +++++++++
migration/migration.c | 10 +++++++++
softmmu/dirtylimit.c | 39 ++++++++++++++++++++++++++++++++++
5 files changed, 76 insertions(+), 1 deletion(-)
diff --git a/qapi/migration.json b/qapi/migration.json
index 621e6604c6..e9b24fc410 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -250,6 +250,18 @@
# blocked. Present and non-empty when migration is blocked.
# (since 6.0)
#
+# @dirty-limit-throttle-time-per-round: Maximum throttle time (in microseconds) of virtual
+# CPUs each dirty ring full round, which shows how
+# MigrationCapability dirty-limit affects the guest
+# during live migration. (since 8.1)
+#
+# @dirty-limit-ring-full-time: Estimated average dirty ring full time (in microseconds)
+# each dirty ring full round, note that the value equals
+# dirty ring memory size divided by average dirty page rate
+# of virtual CPU, which can be used to observe the average
+# memory load of virtual CPU indirectly. Note that zero
+# means guest doesn't dirty memory (since 8.1)
+#
# Since: 0.14
##
{ 'struct': 'MigrationInfo',
@@ -267,7 +279,9 @@
'*postcopy-blocktime' : 'uint32',
'*postcopy-vcpu-blocktime': ['uint32'],
'*compression': 'CompressionStats',
- '*socket-address': ['SocketAddress'] } }
+ '*socket-address': ['SocketAddress'],
+ '*dirty-limit-throttle-time-per-round': 'uint64',
+ '*dirty-limit-ring-full-time': 'uint64'} }
##
# @query-migrate:
diff --git a/include/sysemu/dirtylimit.h b/include/sysemu/dirtylimit.h
index 8d2c1f3a6b..d11ebbbbdb 100644
--- a/include/sysemu/dirtylimit.h
+++ b/include/sysemu/dirtylimit.h
@@ -34,4 +34,6 @@ void dirtylimit_set_vcpu(int cpu_index,
void dirtylimit_set_all(uint64_t quota,
bool enable);
void dirtylimit_vcpu_execute(CPUState *cpu);
+uint64_t dirtylimit_throttle_time_per_round(void);
+uint64_t dirtylimit_ring_full_time(void);
#endif
diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c
index 35e8020bbf..c115ef2d23 100644
--- a/migration/migration-hmp-cmds.c
+++ b/migration/migration-hmp-cmds.c
@@ -190,6 +190,16 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
info->cpu_throttle_percentage);
}
+ if (info->has_dirty_limit_throttle_time_per_round) {
+ monitor_printf(mon, "dirty-limit throttle time: %" PRIu64 " us\n",
+ info->dirty_limit_throttle_time_per_round);
+ }
+
+ if (info->has_dirty_limit_ring_full_time) {
+ monitor_printf(mon, "dirty-limit ring full time: %" PRIu64 " us\n",
+ info->dirty_limit_ring_full_time);
+ }
+
if (info->has_postcopy_blocktime) {
monitor_printf(mon, "postcopy blocktime: %u\n",
info->postcopy_blocktime);
diff --git a/migration/migration.c b/migration/migration.c
index c101784dfa..719f91573f 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -64,6 +64,7 @@
#include "yank_functions.h"
#include "sysemu/qtest.h"
#include "options.h"
+#include "sysemu/dirtylimit.h"
static NotifierList migration_state_notifiers =
NOTIFIER_LIST_INITIALIZER(migration_state_notifiers);
@@ -968,6 +969,15 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
info->ram->dirty_pages_rate =
stat64_get(&mig_stats.dirty_pages_rate);
}
+
+ if (migrate_dirty_limit() && dirtylimit_in_service()) {
+ info->has_dirty_limit_throttle_time_per_round = true;
+ info->dirty_limit_throttle_time_per_round =
+ dirtylimit_throttle_time_per_round();
+
+ info->has_dirty_limit_ring_full_time = true;
+ info->dirty_limit_ring_full_time = dirtylimit_ring_full_time();
+ }
}
static void populate_disk_info(MigrationInfo *info)
diff --git a/softmmu/dirtylimit.c b/softmmu/dirtylimit.c
index a6d854d161..3c275ee55b 100644
--- a/softmmu/dirtylimit.c
+++ b/softmmu/dirtylimit.c
@@ -565,6 +565,45 @@ out:
hmp_handle_error(mon, err);
}
+/* Return the max throttle time of each virtual CPU */
+uint64_t dirtylimit_throttle_time_per_round(void)
+{
+ CPUState *cpu;
+ int64_t max = 0;
+
+ CPU_FOREACH(cpu) {
+ if (cpu->throttle_us_per_full > max) {
+ max = cpu->throttle_us_per_full;
+ }
+ }
+
+ return max;
+}
+
+/*
+ * Estimate average dirty ring full time of each virtaul CPU.
+ * Return 0 if guest doesn't dirty memory.
+ */
+uint64_t dirtylimit_ring_full_time(void)
+{
+ CPUState *cpu;
+ uint64_t curr_rate = 0;
+ int nvcpus = 0;
+
+ CPU_FOREACH(cpu) {
+ if (cpu->running) {
+ nvcpus++;
+ curr_rate += vcpu_dirty_rate_get(cpu->cpu_index);
+ }
+ }
+
+ if (!curr_rate || !nvcpus) {
+ return 0;
+ }
+
+ return dirtylimit_dirty_ring_full_time(curr_rate / nvcpus);
+}
+
static struct DirtyLimitInfo *dirtylimit_query_vcpu(int cpu_index)
{
DirtyLimitInfo *info = NULL;
--
2.40.1
* [PULL 11/30] migration-test: Be consistent for ppc
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (9 preceding siblings ...)
2023-06-22 16:55 ` [PULL 10/30] migration: Extend query-migrate to provide dirty page limit info Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 12/30] migration-test: Make machine_opts regular with other options Juan Quintela
` (19 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block
It makes no sense that we don't have the same configuration on both sides.
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Message-ID: <20230608224943.3877-2-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
tests/qtest/migration-test.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index b0c355bbd9..c5e0c69c6b 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -646,7 +646,7 @@ static int test_migrate_start(QTestState **from, QTestState **to,
"'nvramrc=hex .\" _\" begin %x %x "
"do i c@ 1 + i c! 1000 +loop .\" B\" 0 "
"until'", end_address, start_address);
- arch_target = g_strdup("");
+ arch_target = g_strdup("-nodefaults");
} else if (strcmp(arch, "aarch64") == 0) {
init_bootfile(bootpath, aarch64_kernel, sizeof(aarch64_kernel));
machine_opts = "virt,gic-version=max";
--
2.40.1
* [PULL 12/30] migration-test: Make machine_opts regular with other options
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (10 preceding siblings ...)
2023-06-22 16:55 ` [PULL 11/30] migration-test: Be consistent for ppc Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 13/30] migration-test: Create arch_opts Juan Quintela
` (18 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20230608224943.3877-5-quintela@redhat.com>
---
tests/qtest/migration-test.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index c5e0c69c6b..79157d600b 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -637,7 +637,7 @@ static int test_migrate_start(QTestState **from, QTestState **to,
start_address = S390_TEST_MEM_START;
end_address = S390_TEST_MEM_END;
} else if (strcmp(arch, "ppc64") == 0) {
- machine_opts = "vsmt=8";
+ machine_opts = "-machine vsmt=8";
memory_size = "256M";
start_address = PPC_TEST_MEM_START;
end_address = PPC_TEST_MEM_END;
@@ -649,7 +649,7 @@ static int test_migrate_start(QTestState **from, QTestState **to,
arch_target = g_strdup("-nodefaults");
} else if (strcmp(arch, "aarch64") == 0) {
init_bootfile(bootpath, aarch64_kernel, sizeof(aarch64_kernel));
- machine_opts = "virt,gic-version=max";
+ machine_opts = "-machine virt,gic-version=max";
memory_size = "150M";
arch_source = g_strdup_printf("-cpu max "
"-kernel %s",
@@ -689,14 +689,13 @@ static int test_migrate_start(QTestState **from, QTestState **to,
shmem_opts = g_strdup("");
}
- cmd_source = g_strdup_printf("-accel kvm%s -accel tcg%s%s "
+ cmd_source = g_strdup_printf("-accel kvm%s -accel tcg %s "
"-name source,debug-threads=on "
"-m %s "
"-serial file:%s/src_serial "
"%s %s %s %s",
args->use_dirty_ring ?
",dirty-ring-size=4096" : "",
- machine_opts ? " -machine " : "",
machine_opts ? machine_opts : "",
memory_size, tmpfs,
arch_source, shmem_opts,
@@ -709,7 +708,7 @@ static int test_migrate_start(QTestState **from, QTestState **to,
&got_src_stop);
}
- cmd_target = g_strdup_printf("-accel kvm%s -accel tcg%s%s "
+ cmd_target = g_strdup_printf("-accel kvm%s -accel tcg %s "
"-name target,debug-threads=on "
"-m %s "
"-serial file:%s/dest_serial "
@@ -717,7 +716,6 @@ static int test_migrate_start(QTestState **from, QTestState **to,
"%s %s %s %s",
args->use_dirty_ring ?
",dirty-ring-size=4096" : "",
- machine_opts ? " -machine " : "",
machine_opts ? machine_opts : "",
memory_size, tmpfs, uri,
arch_target, shmem_opts,
--
2.40.1
* [PULL 13/30] migration-test: Create arch_opts
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (11 preceding siblings ...)
2023-06-22 16:55 ` [PULL 12/30] migration-test: Make machine_opts regular with other options Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 14/30] migration-test: machine_opts is really arch specific Juan Quintela
` (17 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block
This will contain the options needed for both source and target.
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-ID: <20230608224943.3877-6-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
tests/qtest/migration-test.c | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 79157d600b..4d8542f5c7 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -600,6 +600,8 @@ static int test_migrate_start(QTestState **from, QTestState **to,
{
g_autofree gchar *arch_source = NULL;
g_autofree gchar *arch_target = NULL;
+ /* options for source and target */
+ g_autofree gchar *arch_opts = NULL;
g_autofree gchar *cmd_source = NULL;
g_autofree gchar *cmd_target = NULL;
const gchar *ignore_stderr;
@@ -625,15 +627,13 @@ static int test_migrate_start(QTestState **from, QTestState **to,
assert(sizeof(x86_bootsect) == 512);
init_bootfile(bootpath, x86_bootsect, sizeof(x86_bootsect));
memory_size = "150M";
- arch_source = g_strdup_printf("-drive file=%s,format=raw", bootpath);
- arch_target = g_strdup(arch_source);
+ arch_opts = g_strdup_printf("-drive file=%s,format=raw", bootpath);
start_address = X86_TEST_MEM_START;
end_address = X86_TEST_MEM_END;
} else if (g_str_equal(arch, "s390x")) {
init_bootfile(bootpath, s390x_elf, sizeof(s390x_elf));
memory_size = "128M";
- arch_source = g_strdup_printf("-bios %s", bootpath);
- arch_target = g_strdup(arch_source);
+ arch_opts = g_strdup_printf("-bios %s", bootpath);
start_address = S390_TEST_MEM_START;
end_address = S390_TEST_MEM_END;
} else if (strcmp(arch, "ppc64") == 0) {
@@ -641,20 +641,16 @@ static int test_migrate_start(QTestState **from, QTestState **to,
memory_size = "256M";
start_address = PPC_TEST_MEM_START;
end_address = PPC_TEST_MEM_END;
- arch_source = g_strdup_printf("-nodefaults "
- "-prom-env 'use-nvramrc?=true' -prom-env "
+ arch_source = g_strdup_printf("-prom-env 'use-nvramrc?=true' -prom-env "
"'nvramrc=hex .\" _\" begin %x %x "
"do i c@ 1 + i c! 1000 +loop .\" B\" 0 "
"until'", end_address, start_address);
- arch_target = g_strdup("-nodefaults");
+ arch_opts = g_strdup("-nodefaults");
} else if (strcmp(arch, "aarch64") == 0) {
init_bootfile(bootpath, aarch64_kernel, sizeof(aarch64_kernel));
machine_opts = "-machine virt,gic-version=max";
memory_size = "150M";
- arch_source = g_strdup_printf("-cpu max "
- "-kernel %s",
- bootpath);
- arch_target = g_strdup(arch_source);
+ arch_opts = g_strdup_printf("-cpu max -kernel %s", bootpath);
start_address = ARM_TEST_MEM_START;
end_address = ARM_TEST_MEM_END;
@@ -693,12 +689,14 @@ static int test_migrate_start(QTestState **from, QTestState **to,
"-name source,debug-threads=on "
"-m %s "
"-serial file:%s/src_serial "
- "%s %s %s %s",
+ "%s %s %s %s %s",
args->use_dirty_ring ?
",dirty-ring-size=4096" : "",
machine_opts ? machine_opts : "",
memory_size, tmpfs,
- arch_source, shmem_opts,
+ arch_opts ? arch_opts : "",
+ arch_source ? arch_source : "",
+ shmem_opts,
args->opts_source ? args->opts_source : "",
ignore_stderr);
if (!args->only_target) {
@@ -713,12 +711,14 @@ static int test_migrate_start(QTestState **from, QTestState **to,
"-m %s "
"-serial file:%s/dest_serial "
"-incoming %s "
- "%s %s %s %s",
+ "%s %s %s %s %s",
args->use_dirty_ring ?
",dirty-ring-size=4096" : "",
machine_opts ? machine_opts : "",
memory_size, tmpfs, uri,
- arch_target, shmem_opts,
+ arch_opts ? arch_opts : "",
+ arch_target ? arch_target : "",
+ shmem_opts,
args->opts_target ? args->opts_target : "",
ignore_stderr);
*to = qtest_init(cmd_target);
--
2.40.1
* [PULL 14/30] migration-test: machine_opts is really arch specific
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (12 preceding siblings ...)
2023-06-22 16:55 ` [PULL 13/30] migration-test: Create arch_opts Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 15/30] migration-test: Create kvm_opts Juan Quintela
` (16 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block
And it needs to be in both source and target, so put it in arch_opts.
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-ID: <20230608224943.3877-7-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
tests/qtest/migration-test.c | 14 +++++---------
1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 4d8542f5c7..fc3337b7bb 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -609,7 +609,6 @@ static int test_migrate_start(QTestState **from, QTestState **to,
g_autofree char *shmem_opts = NULL;
g_autofree char *shmem_path = NULL;
const char *arch = qtest_get_arch();
- const char *machine_opts = NULL;
const char *memory_size;
if (args->use_shmem) {
@@ -637,7 +636,6 @@ static int test_migrate_start(QTestState **from, QTestState **to,
start_address = S390_TEST_MEM_START;
end_address = S390_TEST_MEM_END;
} else if (strcmp(arch, "ppc64") == 0) {
- machine_opts = "-machine vsmt=8";
memory_size = "256M";
start_address = PPC_TEST_MEM_START;
end_address = PPC_TEST_MEM_END;
@@ -645,12 +643,12 @@ static int test_migrate_start(QTestState **from, QTestState **to,
"'nvramrc=hex .\" _\" begin %x %x "
"do i c@ 1 + i c! 1000 +loop .\" B\" 0 "
"until'", end_address, start_address);
- arch_opts = g_strdup("-nodefaults");
+ arch_opts = g_strdup("-nodefaults -machine vsmt=8");
} else if (strcmp(arch, "aarch64") == 0) {
init_bootfile(bootpath, aarch64_kernel, sizeof(aarch64_kernel));
- machine_opts = "-machine virt,gic-version=max";
memory_size = "150M";
- arch_opts = g_strdup_printf("-cpu max -kernel %s", bootpath);
+ arch_opts = g_strdup_printf("-machine virt,gic-version=max -cpu max "
+ "-kernel %s", bootpath);
start_address = ARM_TEST_MEM_START;
end_address = ARM_TEST_MEM_END;
@@ -685,14 +683,13 @@ static int test_migrate_start(QTestState **from, QTestState **to,
shmem_opts = g_strdup("");
}
- cmd_source = g_strdup_printf("-accel kvm%s -accel tcg %s "
+ cmd_source = g_strdup_printf("-accel kvm%s -accel tcg "
"-name source,debug-threads=on "
"-m %s "
"-serial file:%s/src_serial "
"%s %s %s %s %s",
args->use_dirty_ring ?
",dirty-ring-size=4096" : "",
- machine_opts ? machine_opts : "",
memory_size, tmpfs,
arch_opts ? arch_opts : "",
arch_source ? arch_source : "",
@@ -706,7 +703,7 @@ static int test_migrate_start(QTestState **from, QTestState **to,
&got_src_stop);
}
- cmd_target = g_strdup_printf("-accel kvm%s -accel tcg %s "
+ cmd_target = g_strdup_printf("-accel kvm%s -accel tcg "
"-name target,debug-threads=on "
"-m %s "
"-serial file:%s/dest_serial "
@@ -714,7 +711,6 @@ static int test_migrate_start(QTestState **from, QTestState **to,
"%s %s %s %s %s",
args->use_dirty_ring ?
",dirty-ring-size=4096" : "",
- machine_opts ? machine_opts : "",
memory_size, tmpfs, uri,
arch_opts ? arch_opts : "",
arch_target ? arch_target : "",
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PULL 15/30] migration-test: Create kvm_opts
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (13 preceding siblings ...)
2023-06-22 16:55 ` [PULL 14/30] migration-test: machine_opts is really arch specific Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 16/30] migration-test: bootpath is the same for all tests and for all archs Juan Quintela
` (15 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block
So the dirty-ring option becomes just one more option like the others.
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-ID: <20230608224943.3877-8-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
tests/qtest/migration-test.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index fc3337b7bb..40967fdffc 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -608,6 +608,7 @@ static int test_migrate_start(QTestState **from, QTestState **to,
g_autofree char *bootpath = NULL;
g_autofree char *shmem_opts = NULL;
g_autofree char *shmem_path = NULL;
+ const char *kvm_opts = NULL;
const char *arch = qtest_get_arch();
const char *memory_size;
@@ -683,13 +684,16 @@ static int test_migrate_start(QTestState **from, QTestState **to,
shmem_opts = g_strdup("");
}
+ if (args->use_dirty_ring) {
+ kvm_opts = ",dirty-ring-size=4096";
+ }
+
cmd_source = g_strdup_printf("-accel kvm%s -accel tcg "
"-name source,debug-threads=on "
"-m %s "
"-serial file:%s/src_serial "
"%s %s %s %s %s",
- args->use_dirty_ring ?
- ",dirty-ring-size=4096" : "",
+ kvm_opts ? kvm_opts : "",
memory_size, tmpfs,
arch_opts ? arch_opts : "",
arch_source ? arch_source : "",
@@ -709,8 +713,7 @@ static int test_migrate_start(QTestState **from, QTestState **to,
"-serial file:%s/dest_serial "
"-incoming %s "
"%s %s %s %s %s",
- args->use_dirty_ring ?
- ",dirty-ring-size=4096" : "",
+ kvm_opts ? kvm_opts : "",
memory_size, tmpfs, uri,
arch_opts ? arch_opts : "",
arch_target ? arch_target : "",
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PULL 16/30] migration-test: bootpath is the same for all tests and for all archs
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (14 preceding siblings ...)
2023-06-22 16:55 ` [PULL 15/30] migration-test: Create kvm_opts Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 17/30] migration-test: Add bootfile_create/delete() functions Juan Quintela
` (14 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block
So just make it a global variable.
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-ID: <20230608224943.3877-9-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
tests/qtest/migration-test.c | 20 +++++++++-----------
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 40967fdffc..0f80dbfe80 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -102,6 +102,7 @@ static bool ufd_version_check(void)
#endif
static char *tmpfs;
+static char *bootpath;
/* The boot file modifies memory area in [start_address, end_address)
* repeatedly. It outputs a 'B' at a fixed rate while it's still running.
@@ -110,7 +111,7 @@ static char *tmpfs;
#include "tests/migration/aarch64/a-b-kernel.h"
#include "tests/migration/s390x/a-b-bios.h"
-static void init_bootfile(const char *bootpath, void *content, size_t len)
+static void init_bootfile(void *content, size_t len)
{
FILE *bootfile = fopen(bootpath, "wb");
@@ -605,7 +606,6 @@ static int test_migrate_start(QTestState **from, QTestState **to,
g_autofree gchar *cmd_source = NULL;
g_autofree gchar *cmd_target = NULL;
const gchar *ignore_stderr;
- g_autofree char *bootpath = NULL;
g_autofree char *shmem_opts = NULL;
g_autofree char *shmem_path = NULL;
const char *kvm_opts = NULL;
@@ -621,17 +621,16 @@ static int test_migrate_start(QTestState **from, QTestState **to,
got_src_stop = false;
got_dst_resume = false;
- bootpath = g_strdup_printf("%s/bootsect", tmpfs);
if (strcmp(arch, "i386") == 0 || strcmp(arch, "x86_64") == 0) {
/* the assembled x86 boot sector should be exactly one sector large */
assert(sizeof(x86_bootsect) == 512);
- init_bootfile(bootpath, x86_bootsect, sizeof(x86_bootsect));
+ init_bootfile(x86_bootsect, sizeof(x86_bootsect));
memory_size = "150M";
arch_opts = g_strdup_printf("-drive file=%s,format=raw", bootpath);
start_address = X86_TEST_MEM_START;
end_address = X86_TEST_MEM_END;
} else if (g_str_equal(arch, "s390x")) {
- init_bootfile(bootpath, s390x_elf, sizeof(s390x_elf));
+ init_bootfile(s390x_elf, sizeof(s390x_elf));
memory_size = "128M";
arch_opts = g_strdup_printf("-bios %s", bootpath);
start_address = S390_TEST_MEM_START;
@@ -646,7 +645,7 @@ static int test_migrate_start(QTestState **from, QTestState **to,
"until'", end_address, start_address);
arch_opts = g_strdup("-nodefaults -machine vsmt=8");
} else if (strcmp(arch, "aarch64") == 0) {
- init_bootfile(bootpath, aarch64_kernel, sizeof(aarch64_kernel));
+ init_bootfile(aarch64_kernel, sizeof(aarch64_kernel));
memory_size = "150M";
arch_opts = g_strdup_printf("-machine virt,gic-version=max -cpu max "
"-kernel %s", bootpath);
@@ -764,7 +763,6 @@ static void test_migrate_end(QTestState *from, QTestState *to, bool test_dest)
qtest_quit(to);
- cleanup("bootsect");
cleanup("migsocket");
cleanup("src_serial");
cleanup("dest_serial");
@@ -2493,12 +2491,10 @@ static QTestState *dirtylimit_start_vm(void)
QTestState *vm = NULL;
g_autofree gchar *cmd = NULL;
const char *arch = qtest_get_arch();
- g_autofree char *bootpath = NULL;
assert((strcmp(arch, "x86_64") == 0));
- bootpath = g_strdup_printf("%s/bootsect", tmpfs);
assert(sizeof(x86_bootsect) == 512);
- init_bootfile(bootpath, x86_bootsect, sizeof(x86_bootsect));
+ init_bootfile(x86_bootsect, sizeof(x86_bootsect));
cmd = g_strdup_printf("-accel kvm,dirty-ring-size=4096 "
"-name dirtylimit-test,debug-threads=on "
@@ -2514,7 +2510,6 @@ static QTestState *dirtylimit_start_vm(void)
static void dirtylimit_stop_vm(QTestState *vm)
{
qtest_quit(vm);
- cleanup("bootsect");
cleanup("vm_serial");
}
@@ -2676,6 +2671,7 @@ int main(int argc, char **argv)
g_get_tmp_dir(), err->message);
}
g_assert(tmpfs);
+ bootpath = g_strdup_printf("%s/bootsect", tmpfs);
module_call_init(MODULE_INIT_QOM);
@@ -2819,6 +2815,8 @@ int main(int argc, char **argv)
g_assert_cmpint(ret, ==, 0);
+ cleanup("bootsect");
+ g_free(bootpath);
ret = rmdir(tmpfs);
if (ret != 0) {
g_test_message("unable to rmdir: path (%s): %s",
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PULL 17/30] migration-test: Add bootfile_create/delete() functions
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (15 preceding siblings ...)
2023-06-22 16:55 ` [PULL 16/30] migration-test: bootpath is the same for all tests and for all archs Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 18/30] migration-test: dirtylimit checks for x86_64 arch before Juan Quintela
` (13 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block
The bootsector code is read-only from the guest's point of view
(otherwise we would have problems with it being used by both source
and destination).
Create a single copy for all the tests.
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-ID: <20230608224943.3877-10-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
tests/qtest/migration-test.c | 50 ++++++++++++++++++++++++++----------
1 file changed, 36 insertions(+), 14 deletions(-)
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 0f80dbfe80..eb6a11e758 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -111,14 +111,47 @@ static char *bootpath;
#include "tests/migration/aarch64/a-b-kernel.h"
#include "tests/migration/s390x/a-b-bios.h"
-static void init_bootfile(void *content, size_t len)
+static void bootfile_create(char *dir)
{
+ const char *arch = qtest_get_arch();
+ unsigned char *content;
+ size_t len;
+
+ bootpath = g_strdup_printf("%s/bootsect", dir);
+ if (strcmp(arch, "i386") == 0 || strcmp(arch, "x86_64") == 0) {
+ /* the assembled x86 boot sector should be exactly one sector large */
+ g_assert(sizeof(x86_bootsect) == 512);
+ content = x86_bootsect;
+ len = sizeof(x86_bootsect);
+ } else if (g_str_equal(arch, "s390x")) {
+ content = s390x_elf;
+ len = sizeof(s390x_elf);
+ } else if (strcmp(arch, "ppc64") == 0) {
+ /*
+ * sane architectures can be programmed at the boot prompt
+ */
+ return;
+ } else if (strcmp(arch, "aarch64") == 0) {
+ content = aarch64_kernel;
+ len = sizeof(aarch64_kernel);
+ g_assert(sizeof(aarch64_kernel) <= ARM_TEST_MAX_KERNEL_SIZE);
+ } else {
+ g_assert_not_reached();
+ }
+
FILE *bootfile = fopen(bootpath, "wb");
g_assert_cmpint(fwrite(content, len, 1, bootfile), ==, 1);
fclose(bootfile);
}
+static void bootfile_delete(void)
+{
+ unlink(bootpath);
+ g_free(bootpath);
+ bootpath = NULL;
+}
+
/*
* Wait for some output in the serial output file,
* we get an 'A' followed by an endless string of 'B's
@@ -622,15 +655,11 @@ static int test_migrate_start(QTestState **from, QTestState **to,
got_src_stop = false;
got_dst_resume = false;
if (strcmp(arch, "i386") == 0 || strcmp(arch, "x86_64") == 0) {
- /* the assembled x86 boot sector should be exactly one sector large */
- assert(sizeof(x86_bootsect) == 512);
- init_bootfile(x86_bootsect, sizeof(x86_bootsect));
memory_size = "150M";
arch_opts = g_strdup_printf("-drive file=%s,format=raw", bootpath);
start_address = X86_TEST_MEM_START;
end_address = X86_TEST_MEM_END;
} else if (g_str_equal(arch, "s390x")) {
- init_bootfile(s390x_elf, sizeof(s390x_elf));
memory_size = "128M";
arch_opts = g_strdup_printf("-bios %s", bootpath);
start_address = S390_TEST_MEM_START;
@@ -645,14 +674,11 @@ static int test_migrate_start(QTestState **from, QTestState **to,
"until'", end_address, start_address);
arch_opts = g_strdup("-nodefaults -machine vsmt=8");
} else if (strcmp(arch, "aarch64") == 0) {
- init_bootfile(aarch64_kernel, sizeof(aarch64_kernel));
memory_size = "150M";
arch_opts = g_strdup_printf("-machine virt,gic-version=max -cpu max "
"-kernel %s", bootpath);
start_address = ARM_TEST_MEM_START;
end_address = ARM_TEST_MEM_END;
-
- g_assert(sizeof(aarch64_kernel) <= ARM_TEST_MAX_KERNEL_SIZE);
} else {
g_assert_not_reached();
}
@@ -2493,9 +2519,6 @@ static QTestState *dirtylimit_start_vm(void)
const char *arch = qtest_get_arch();
assert((strcmp(arch, "x86_64") == 0));
- assert(sizeof(x86_bootsect) == 512);
- init_bootfile(x86_bootsect, sizeof(x86_bootsect));
-
cmd = g_strdup_printf("-accel kvm,dirty-ring-size=4096 "
"-name dirtylimit-test,debug-threads=on "
"-m 150M -smp 1 "
@@ -2671,7 +2694,7 @@ int main(int argc, char **argv)
g_get_tmp_dir(), err->message);
}
g_assert(tmpfs);
- bootpath = g_strdup_printf("%s/bootsect", tmpfs);
+ bootfile_create(tmpfs);
module_call_init(MODULE_INIT_QOM);
@@ -2815,8 +2838,7 @@ int main(int argc, char **argv)
g_assert_cmpint(ret, ==, 0);
- cleanup("bootsect");
- g_free(bootpath);
+ bootfile_delete();
ret = rmdir(tmpfs);
if (ret != 0) {
g_test_message("unable to rmdir: path (%s): %s",
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PULL 18/30] migration-test: dirtylimit checks for x86_64 arch before
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (16 preceding siblings ...)
2023-06-22 16:55 ` [PULL 17/30] migration-test: Add bootfile_create/delete() functions Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 19/30] migration-test: simplify shmem_opts handling Juan Quintela
` (12 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block
So there is no need to assert that we are on x86_64.
While at it, refactor the function to remove useless variables.
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-ID: <20230608224943.3877-11-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
tests/qtest/migration-test.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index eb6a11e758..fbe9db23cf 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -2515,10 +2515,7 @@ static int64_t get_limit_rate(QTestState *who)
static QTestState *dirtylimit_start_vm(void)
{
QTestState *vm = NULL;
- g_autofree gchar *cmd = NULL;
- const char *arch = qtest_get_arch();
-
- assert((strcmp(arch, "x86_64") == 0));
+ g_autofree gchar *
cmd = g_strdup_printf("-accel kvm,dirty-ring-size=4096 "
"-name dirtylimit-test,debug-threads=on "
"-m 150M -smp 1 "
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PULL 19/30] migration-test: simplify shmem_opts handling
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (17 preceding siblings ...)
2023-06-22 16:55 ` [PULL 18/30] migration-test: dirtylimit checks for x86_64 arch before Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 20/30] migration: Update error description whenever migration fails Juan Quintela
` (11 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-ID: <20230608224943.3877-4-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
tests/qtest/migration-test.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index fbe9db23cf..e3e7d54216 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -704,9 +704,6 @@ static int test_migrate_start(QTestState **from, QTestState **to,
"-object memory-backend-file,id=mem0,size=%s"
",mem-path=%s,share=on -numa node,memdev=mem0",
memory_size, shmem_path);
- } else {
- shmem_path = NULL;
- shmem_opts = g_strdup("");
}
if (args->use_dirty_ring) {
@@ -722,7 +719,7 @@ static int test_migrate_start(QTestState **from, QTestState **to,
memory_size, tmpfs,
arch_opts ? arch_opts : "",
arch_source ? arch_source : "",
- shmem_opts,
+ shmem_opts ? shmem_opts : "",
args->opts_source ? args->opts_source : "",
ignore_stderr);
if (!args->only_target) {
@@ -742,7 +739,7 @@ static int test_migrate_start(QTestState **from, QTestState **to,
memory_size, tmpfs, uri,
arch_opts ? arch_opts : "",
arch_target ? arch_target : "",
- shmem_opts,
+ shmem_opts ? shmem_opts : "",
args->opts_target ? args->opts_target : "",
ignore_stderr);
*to = qtest_init(cmd_target);
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PULL 20/30] migration: Update error description whenever migration fails
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (18 preceding siblings ...)
2023-06-22 16:55 ` [PULL 19/30] migration-test: simplify shmem_opts handling Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 21/30] migration: Refactor repeated call of yank_unregister_instance Juan Quintela
` (10 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block, Tejus GK
From: Tejus GK <tejus.gk@nutanix.com>
There are places in migration.c where the migration is marked failed with
MIGRATION_STATUS_FAILED, but the failure reason is never updated. Hence
libvirt doesn't know why the migration failed when it queries for it.
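The pattern the patch applies is small and repeats throughout the diff
below; a minimal sketch of it (with "step_failed" as a placeholder
condition, not code from this series) looks like this:

    Error *local_err = NULL;

    if (step_failed) {
        error_setg(&local_err, "what went wrong");
        migrate_set_state(&s->state, s->state, MIGRATION_STATUS_FAILED);
        /* record the reason in MigrationState so query-migrate can report it */
        migrate_set_error(s, local_err);
        /* still print it locally; this also frees local_err */
        error_report_err(local_err);
    }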
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Tejus GK <tejus.gk@nutanix.com>
Message-ID: <20230621130940.178659-2-tejus.gk@nutanix.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/migration.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 719f91573f..e6bff2e848 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1679,7 +1679,7 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
if (!(has_resume && resume)) {
yank_unregister_instance(MIGRATION_YANK_INSTANCE);
}
- error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "uri",
+ error_setg(&local_err, QERR_INVALID_PARAMETER_VALUE, "uri",
"a valid migration protocol");
migrate_set_state(&s->state, MIGRATION_STATUS_SETUP,
MIGRATION_STATUS_FAILED);
@@ -2066,7 +2066,7 @@ migration_wait_main_channel(MigrationState *ms)
* Switch from normal iteration to postcopy
* Returns non-0 on error
*/
-static int postcopy_start(MigrationState *ms)
+static int postcopy_start(MigrationState *ms, Error **errp)
{
int ret;
QIOChannelBuffer *bioc;
@@ -2176,7 +2176,7 @@ static int postcopy_start(MigrationState *ms)
*/
ret = qemu_file_get_error(ms->to_dst_file);
if (ret) {
- error_report("postcopy_start: Migration stream errored (pre package)");
+ error_setg(errp, "postcopy_start: Migration stream errored (pre package)");
goto fail_closefb;
}
@@ -2213,7 +2213,7 @@ static int postcopy_start(MigrationState *ms)
ret = qemu_file_get_error(ms->to_dst_file);
if (ret) {
- error_report("postcopy_start: Migration stream errored");
+ error_setg(errp, "postcopy_start: Migration stream errored");
migrate_set_state(&ms->state, MIGRATION_STATUS_POSTCOPY_ACTIVE,
MIGRATION_STATUS_FAILED);
}
@@ -2720,6 +2720,7 @@ typedef enum {
static MigIterateState migration_iteration_run(MigrationState *s)
{
uint64_t must_precopy, can_postcopy;
+ Error *local_err = NULL;
bool in_postcopy = s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE;
qemu_savevm_state_pending_estimate(&must_precopy, &can_postcopy);
@@ -2742,8 +2743,9 @@ static MigIterateState migration_iteration_run(MigrationState *s)
/* Still a significant amount to transfer */
if (!in_postcopy && must_precopy <= s->threshold_size &&
qatomic_read(&s->start_postcopy)) {
- if (postcopy_start(s)) {
- error_report("%s: postcopy failed to start", __func__);
+ if (postcopy_start(s, &local_err)) {
+ migrate_set_error(s, local_err);
+ error_report_err(local_err);
}
return MIG_ITERATE_SKIP;
}
@@ -3234,8 +3236,10 @@ void migrate_fd_connect(MigrationState *s, Error *error_in)
*/
if (migrate_postcopy_ram() || migrate_return_path()) {
if (open_return_path_on_source(s, !resume)) {
- error_report("Unable to open return-path for postcopy");
+ error_setg(&local_err, "Unable to open return-path for postcopy");
migrate_set_state(&s->state, s->state, MIGRATION_STATUS_FAILED);
+ migrate_set_error(s, local_err);
+ error_report_err(local_err);
migrate_fd_cleanup(s);
return;
}
@@ -3259,6 +3263,7 @@ void migrate_fd_connect(MigrationState *s, Error *error_in)
}
if (multifd_save_setup(&local_err) != 0) {
+ migrate_set_error(s, local_err);
error_report_err(local_err);
migrate_set_state(&s->state, MIGRATION_STATUS_SETUP,
MIGRATION_STATUS_FAILED);
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PULL 21/30] migration: Refactor repeated call of yank_unregister_instance
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (19 preceding siblings ...)
2023-06-22 16:55 ` [PULL 20/30] migration: Update error description whenever migration fails Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-07-27 8:28 ` Tejus GK
2023-06-22 16:55 ` [PULL 22/30] migration: enforce multifd and postcopy preempt to be set before incoming Juan Quintela
` (9 subsequent siblings)
30 siblings, 1 reply; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block, Tejus GK,
Daniel P . Berrangé
From: Tejus GK <tejus.gk@nutanix.com>
In the function qmp_migrate(), yank_unregister_instance() gets called
twice, which isn't required. Hence, refactor it so that it gets called
only once, during the local_err cleanup.
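For context, the shared cleanup block at the end of qmp_migrate()
(paraphrased here, it is not part of the diff below) already handles the
unregister, which is why the per-branch call and the early return can go:

    if (local_err) {
        if (!(has_resume && resume)) {
            yank_unregister_instance(MIGRATION_YANK_INSTANCE);
        }
        migrate_fd_error(s, local_err);
        error_propagate(errp, local_err);
        return;
    }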
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Tejus GK <tejus.gk@nutanix.com>
Message-ID: <20230621130940.178659-3-tejus.gk@nutanix.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/migration.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index e6bff2e848..7a4ba2e846 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1676,15 +1676,11 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
} else if (strstart(uri, "fd:", &p)) {
fd_start_outgoing_migration(s, p, &local_err);
} else {
- if (!(has_resume && resume)) {
- yank_unregister_instance(MIGRATION_YANK_INSTANCE);
- }
error_setg(&local_err, QERR_INVALID_PARAMETER_VALUE, "uri",
"a valid migration protocol");
migrate_set_state(&s->state, MIGRATION_STATUS_SETUP,
MIGRATION_STATUS_FAILED);
block_cleanup_parameters();
- return;
}
if (local_err) {
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* Re: [PULL 21/30] migration: Refactor repeated call of yank_unregister_instance
2023-06-22 16:55 ` [PULL 21/30] migration: Refactor repeated call of yank_unregister_instance Juan Quintela
@ 2023-07-27 8:28 ` Tejus GK
0 siblings, 0 replies; 48+ messages in thread
From: Tejus GK @ 2023-07-27 8:28 UTC (permalink / raw)
To: Juan Quintela; +Cc: qemu-devel
On 22/06/23 10:25 pm, Juan Quintela wrote:
> From: Tejus GK <tejus.gk@nutanix.com>
>
> In the function qmp_migrate(), yank_unregister_instance() gets called
> twice which isn't required. Hence, refactoring it so that it gets called
> during the local_error cleanup.
>
> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
> Reviewed-by: Juan Quintela <quintela@redhat.com>
> Acked-by: Peter Xu <peterx@redhat.com>
> Signed-off-by: Tejus GK <tejus.gk@nutanix.com>
> Message-ID: <20230621130940.178659-3-tejus.gk@nutanix.com>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
> migration/migration.c | 4 ----
> 1 file changed, 4 deletions(-)
>
> diff --git a/migration/migration.c b/migration/migration.c
> index e6bff2e848..7a4ba2e846 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1676,15 +1676,11 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
> } else if (strstart(uri, "fd:", &p)) {
> fd_start_outgoing_migration(s, p, &local_err);
> } else {
> - if (!(has_resume && resume)) {
> - yank_unregister_instance(MIGRATION_YANK_INSTANCE);
> - }
> error_setg(&local_err, QERR_INVALID_PARAMETER_VALUE, "uri",
> "a valid migration protocol");
> migrate_set_state(&s->state, MIGRATION_STATUS_SETUP,
> MIGRATION_STATUS_FAILED);
> block_cleanup_parameters();
> - return;
> }
>
> if (local_err) {
Hi Juan,
I saw that this patch wasn't queued in yesterday's migration PULL; is
there any reason why? Without this refactor, the error description
change (which got merged yesterday) in this function is quite
redundant.
Regards,
Tejus
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PULL 22/30] migration: enforce multifd and postcopy preempt to be set before incoming
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (20 preceding siblings ...)
2023-06-22 16:55 ` [PULL 21/30] migration: Refactor repeated call of yank_unregister_instance Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 23/30] qtest/migration-tests.c: use "-incoming defer" for postcopy tests Juan Quintela
` (8 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block, Wei Wang
From: Wei Wang <wei.w.wang@intel.com>
qemu_start_incoming_migration needs to check the number of multifd
channels or postcopy ram channels to configure the backlog parameter (i.e.
the maximum length to which the queue of pending connections for sockfd
may grow) of listen(). So enforce the usage of postcopy-preempt and
multifd as below:
- need to use "-incoming defer" on the destination; and
- set_capability and set_parameter need to be done before migrate_incoming
Otherwise, refuse to enable the features and report an error message to
remind users to adjust the commands.
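As a rough sketch of why the ordering matters (the helper names mirror
migration/options.c and are assumptions here, not part of this diff):
the destination sizes its listen() backlog from the configured
capabilities, so that number has to be final before listening starts.

    /*
     * Sketch only: how many pending connections the incoming side
     * needs to allow for.  Once listen() has run, changing the
     * capabilities can no longer grow the backlog.
     */
    static int incoming_backlog(void)
    {
        if (migrate_multifd()) {
            return migrate_multifd_channels();
        }
        if (migrate_postcopy_preempt()) {
            return 2;   /* main channel + preempt channel */
        }
        return 1;
    }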
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20230606101910.20456-2-wei.w.wang@intel.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Acked-by: Juan Quintela <quintela@redhat.com>
---
migration/options.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/migration/options.c b/migration/options.c
index ba1010e08b..c072c2fab7 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -433,6 +433,11 @@ INITIALIZE_MIGRATE_CAPS_SET(check_caps_background_snapshot,
MIGRATION_CAPABILITY_VALIDATE_UUID,
MIGRATION_CAPABILITY_ZERO_COPY_SEND);
+static bool migrate_incoming_started(void)
+{
+ return !!migration_incoming_get_current()->transport_data;
+}
+
/**
* @migration_caps_check - check capability compatibility
*
@@ -556,6 +561,12 @@ bool migrate_caps_check(bool *old_caps, bool *new_caps, Error **errp)
error_setg(errp, "Postcopy preempt not compatible with compress");
return false;
}
+
+ if (migrate_incoming_started()) {
+ error_setg(errp,
+ "Postcopy preempt must be set before incoming starts");
+ return false;
+ }
}
if (new_caps[MIGRATION_CAPABILITY_MULTIFD]) {
@@ -563,6 +574,10 @@ bool migrate_caps_check(bool *old_caps, bool *new_caps, Error **errp)
error_setg(errp, "Multifd is not compatible with compress");
return false;
}
+ if (migrate_incoming_started()) {
+ error_setg(errp, "Multifd must be set before incoming starts");
+ return false;
+ }
}
if (new_caps[MIGRATION_CAPABILITY_DIRTY_LIMIT]) {
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PULL 23/30] qtest/migration-tests.c: use "-incoming defer" for postcopy tests
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (21 preceding siblings ...)
2023-06-22 16:55 ` [PULL 22/30] migration: enforce multifd and postcopy preempt to be set before incoming Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 24/30] qemu-file: Rename qemu_file_transferred_ fast -> noflush Juan Quintela
` (7 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block, Wei Wang
From: Wei Wang <wei.w.wang@intel.com>
The Postcopy preempt capability is expected to be set before incoming
starts, so change the postcopy tests to start with deferred incoming and
call migrate-incoming after the cap has been set.
Why didn't the existing tests (without this patch) fail?
There could be two reasons:
1) "backlog" specifies the number of pending connections. As long as the
server accepts the connections faster than the clients side connecting,
connection will succeed. For the preempt test, it uses only 2 channels,
so very likely to not have pending connections.
2) per my tests (on kernel 6.2), the number of pending connections allowed
is actually "backlog + 1", which is 2 in this case.
That said, the implementation of socket_start_incoming_migration_internal
expects "migrate defer" to be used, and for safety, change the test to
work with the expected usage.
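In test terms, the expected usage boils down to the ordering below (a
sketch reusing the helpers that appear in the diff; "to" and "from" are
the destination and source QTestStates, and the destination was started
with "-incoming defer"):

    /* 1. set capabilities/parameters while incoming is still deferred */
    migrate_set_capability(to, "postcopy-ram", true);
    /* 2. only then let the destination start listening */
    qtest_qmp_assert_success(to, "{ 'execute': 'migrate-incoming',"
                             " 'arguments': { 'uri': 'tcp:127.0.0.1:0' }}");
    /* 3. finally point the source at the address the destination picked */
    g_autofree char *uri = migrate_get_socket_address(to, "socket-address");
    migrate_qmp(from, uri, "{}");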
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20230606101910.20456-3-wei.w.wang@intel.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
tests/qtest/migration-test.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index e3e7d54216..c694685923 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -1161,10 +1161,10 @@ static int migrate_postcopy_prepare(QTestState **from_ptr,
QTestState **to_ptr,
MigrateCommon *args)
{
- g_autofree char *uri = g_strdup_printf("unix:%s/migsocket", tmpfs);
+ g_autofree char *uri = NULL;
QTestState *from, *to;
- if (test_migrate_start(&from, &to, uri, &args->start)) {
+ if (test_migrate_start(&from, &to, "defer", &args->start)) {
return -1;
}
@@ -1183,9 +1183,13 @@ static int migrate_postcopy_prepare(QTestState **from_ptr,
migrate_ensure_non_converge(from);
+ qtest_qmp_assert_success(to, "{ 'execute': 'migrate-incoming',"
+ " 'arguments': { 'uri': 'tcp:127.0.0.1:0' }}");
+
/* Wait for the first serial output from the source */
wait_for_serial("src_serial");
+ uri = migrate_get_socket_address(to, "socket-address");
migrate_qmp(from, uri, "{}");
wait_for_migration_pass(from);
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PULL 24/30] qemu-file: Rename qemu_file_transferred_ fast -> noflush
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (22 preceding siblings ...)
2023-06-22 16:55 ` [PULL 23/30] qtest/migration-tests.c: use "-incoming defer" for postcopy tests Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 25/30] migration: Change qemu_file_transferred to noflush Juan Quintela
` (6 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block,
Philippe Mathieu-Daudé
"Fast" doesn't say much. "Noflush" indicates more clearly that it is like
qemu_file_transferred but without the flush.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20230530183941.7223-2-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/qemu-file.h | 11 +++++------
migration/qemu-file.c | 2 +-
migration/savevm.c | 4 ++--
migration/vmstate.c | 4 ++--
4 files changed, 10 insertions(+), 11 deletions(-)
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index e649718492..aa6eee66da 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -86,16 +86,15 @@ int qemu_fclose(QEMUFile *f);
uint64_t qemu_file_transferred(QEMUFile *f);
/*
- * qemu_file_transferred_fast:
+ * qemu_file_transferred_noflush:
*
- * As qemu_file_transferred except for writable
- * files, where no flush is performed and the reported
- * amount will include the size of any queued buffers,
- * on top of the amount actually transferred.
+ * As qemu_file_transferred except for writable files, where no flush
+ * is performed and the reported amount will include the size of any
+ * queued buffers, on top of the amount actually transferred.
*
* Returns: the total bytes transferred and queued
*/
-uint64_t qemu_file_transferred_fast(QEMUFile *f);
+uint64_t qemu_file_transferred_noflush(QEMUFile *f);
/*
* put_buffer without copying the buffer.
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index acc282654a..fdf115b5da 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -694,7 +694,7 @@ int coroutine_mixed_fn qemu_get_byte(QEMUFile *f)
return result;
}
-uint64_t qemu_file_transferred_fast(QEMUFile *f)
+uint64_t qemu_file_transferred_noflush(QEMUFile *f)
{
uint64_t ret = f->total_transferred;
int i;
diff --git a/migration/savevm.c b/migration/savevm.c
index bc284087f9..f26b455764 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -927,9 +927,9 @@ static int vmstate_load(QEMUFile *f, SaveStateEntry *se)
static void vmstate_save_old_style(QEMUFile *f, SaveStateEntry *se,
JSONWriter *vmdesc)
{
- uint64_t old_offset = qemu_file_transferred_fast(f);
+ uint64_t old_offset = qemu_file_transferred_noflush(f);
se->ops->save_state(f, se->opaque);
- uint64_t size = qemu_file_transferred_fast(f) - old_offset;
+ uint64_t size = qemu_file_transferred_noflush(f) - old_offset;
if (vmdesc) {
json_writer_int64(vmdesc, "size", size);
diff --git a/migration/vmstate.c b/migration/vmstate.c
index af01d54b6f..31842c3afb 100644
--- a/migration/vmstate.c
+++ b/migration/vmstate.c
@@ -361,7 +361,7 @@ int vmstate_save_state_v(QEMUFile *f, const VMStateDescription *vmsd,
void *curr_elem = first_elem + size * i;
vmsd_desc_field_start(vmsd, vmdesc_loop, field, i, n_elems);
- old_offset = qemu_file_transferred_fast(f);
+ old_offset = qemu_file_transferred_noflush(f);
if (field->flags & VMS_ARRAY_OF_POINTER) {
assert(curr_elem);
curr_elem = *(void **)curr_elem;
@@ -391,7 +391,7 @@ int vmstate_save_state_v(QEMUFile *f, const VMStateDescription *vmsd,
return ret;
}
- written_bytes = qemu_file_transferred_fast(f) - old_offset;
+ written_bytes = qemu_file_transferred_noflush(f) - old_offset;
vmsd_desc_field_end(vmsd, vmdesc_loop, field, written_bytes, i);
/* Compressed arrays only care about the first element */
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PULL 25/30] migration: Change qemu_file_transferred to noflush
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (23 preceding siblings ...)
2023-06-22 16:55 ` [PULL 24/30] qemu-file: Rename qemu_file_transferred_ fast -> noflush Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 26/30] migration: Use qemu_file_transferred_noflush() for block migration Juan Quintela
` (5 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block,
Philippe Mathieu-Daudé
We do a qemu_fclose() just after that, which also does a qemu_fflush(),
so using the noflush variant here drops one redundant qemu_fflush().
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20230530183941.7223-3-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/savevm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/migration/savevm.c b/migration/savevm.c
index f26b455764..b2199d1039 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2952,7 +2952,7 @@ bool save_snapshot(const char *name, bool overwrite, const char *vmstate,
goto the_end;
}
ret = qemu_savevm_state(f, errp);
- vm_state_size = qemu_file_transferred(f);
+ vm_state_size = qemu_file_transferred_noflush(f);
ret2 = qemu_fclose(f);
if (ret < 0) {
goto the_end;
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PULL 26/30] migration: Use qemu_file_transferred_noflush() for block migration.
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (24 preceding siblings ...)
2023-06-22 16:55 ` [PULL 25/30] migration: Change qemu_file_transferred to noflush Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 27/30] qemu_file: Make qemu_file_is_writable() static Juan Quintela
` (4 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block, Fabiano Rosas
We only care about the number of bytes transferred. Flushing is done
by the system somewhere else.
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20230530183941.7223-4-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/block.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/migration/block.c b/migration/block.c
index b9580a6c7e..b29e80bdc4 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -748,7 +748,7 @@ static int block_save_setup(QEMUFile *f, void *opaque)
static int block_save_iterate(QEMUFile *f, void *opaque)
{
int ret;
- uint64_t last_bytes = qemu_file_transferred(f);
+ uint64_t last_bytes = qemu_file_transferred_noflush(f);
trace_migration_block_save("iterate", block_mig_state.submitted,
block_mig_state.transferred);
@@ -800,7 +800,7 @@ static int block_save_iterate(QEMUFile *f, void *opaque)
}
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
- uint64_t delta_bytes = qemu_file_transferred(f) - last_bytes;
+ uint64_t delta_bytes = qemu_file_transferred_noflush(f) - last_bytes;
return (delta_bytes > 0);
}
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PULL 27/30] qemu_file: Make qemu_file_is_writable() static
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (25 preceding siblings ...)
2023-06-22 16:55 ` [PULL 26/30] migration: Use qemu_file_transferred_noflush() for block migration Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 28/30] qemu-file: Simplify qemu_file_shutdown() Juan Quintela
` (3 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block
It is not used outside of qemu-file.c, and it shouldn't be.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20230530183941.7223-19-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/qemu-file.h | 1 -
migration/qemu-file.c | 2 +-
2 files changed, 1 insertion(+), 2 deletions(-)
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index aa6eee66da..a081ef6c3f 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -103,7 +103,6 @@ uint64_t qemu_file_transferred_noflush(QEMUFile *f);
void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, size_t size,
bool may_free);
bool qemu_file_mode_is_not_valid(const char *mode);
-bool qemu_file_is_writable(QEMUFile *f);
#include "migration/qemu-file-types.h"
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index fdf115b5da..9a89e17924 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -228,7 +228,7 @@ void qemu_file_set_error(QEMUFile *f, int ret)
qemu_file_set_error_obj(f, ret, NULL);
}
-bool qemu_file_is_writable(QEMUFile *f)
+static bool qemu_file_is_writable(QEMUFile *f)
{
return f->is_writable;
}
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PULL 28/30] qemu-file: Simplify qemu_file_shutdown()
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (26 preceding siblings ...)
2023-06-22 16:55 ` [PULL 27/30] qemu_file: Make qemu_file_is_writable() static Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 29/30] qemu-file: Make qemu_file_get_error_obj() static Juan Quintela
` (2 subsequent siblings)
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-ID: <20230530183941.7223-20-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/qemu-file.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 9a89e17924..4c577bdff8 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -65,8 +65,6 @@ struct QEMUFile {
*/
int qemu_file_shutdown(QEMUFile *f)
{
- int ret = 0;
-
/*
* We must set qemufile error before the real shutdown(), otherwise
* there can be a race window where we thought IO all went though
@@ -96,10 +94,10 @@ int qemu_file_shutdown(QEMUFile *f)
}
if (qio_channel_shutdown(f->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL) < 0) {
- ret = -EIO;
+ return -EIO;
}
- return ret;
+ return 0;
}
bool qemu_file_mode_is_not_valid(const char *mode)
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PULL 29/30] qemu-file: Make qemu_file_get_error_obj() static
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (27 preceding siblings ...)
2023-06-22 16:55 ` [PULL 28/30] qemu-file: Simplify qemu_file_shutdown() Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-22 16:55 ` [PULL 30/30] migration/rdma: Split qemu_fopen_rdma() into input/output functions Juan Quintela
2023-06-23 5:45 ` [PULL 00/30] Next patches Richard Henderson
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block
It was not used outside of qemu-file.c anyway.
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-ID: <20230530183941.7223-21-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/qemu-file.h | 1 -
migration/qemu-file.c | 2 +-
2 files changed, 1 insertion(+), 2 deletions(-)
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index a081ef6c3f..8b8b7d27fe 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -128,7 +128,6 @@ void qemu_file_skip(QEMUFile *f, int size);
* accounting information tracks the total migration traffic.
*/
void qemu_file_credit_transfer(QEMUFile *f, size_t size);
-int qemu_file_get_error_obj(QEMUFile *f, Error **errp);
int qemu_file_get_error_obj_any(QEMUFile *f1, QEMUFile *f2, Error **errp);
void qemu_file_set_error_obj(QEMUFile *f, int ret, Error *err);
void qemu_file_set_error(QEMUFile *f, int ret);
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 4c577bdff8..d30bf3c377 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -158,7 +158,7 @@ void qemu_file_set_hooks(QEMUFile *f, const QEMUFileHooks *hooks)
* is not 0.
*
*/
-int qemu_file_get_error_obj(QEMUFile *f, Error **errp)
+static int qemu_file_get_error_obj(QEMUFile *f, Error **errp)
{
if (errp) {
*errp = f->last_error_obj ? error_copy(f->last_error_obj) : NULL;
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PULL 30/30] migration/rdma: Split qemu_fopen_rdma() into input/output functions
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (28 preceding siblings ...)
2023-06-22 16:55 ` [PULL 29/30] qemu-file: Make qemu_file_get_error_obj() static Juan Quintela
@ 2023-06-22 16:55 ` Juan Quintela
2023-06-23 5:45 ` [PULL 00/30] Next patches Richard Henderson
30 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-22 16:55 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block
This is how everything else in QEMUFile is structured.
As a bonus, it is three fewer lines of code.
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-ID: <20230530183941.7223-17-quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/qemu-file.h | 1 -
migration/qemu-file.c | 12 ------------
migration/rdma.c | 39 +++++++++++++++++++--------------------
3 files changed, 19 insertions(+), 33 deletions(-)
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index 8b8b7d27fe..47015f5201 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -102,7 +102,6 @@ uint64_t qemu_file_transferred_noflush(QEMUFile *f);
*/
void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, size_t size,
bool may_free);
-bool qemu_file_mode_is_not_valid(const char *mode);
#include "migration/qemu-file-types.h"
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index d30bf3c377..19c33c9985 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -100,18 +100,6 @@ int qemu_file_shutdown(QEMUFile *f)
return 0;
}
-bool qemu_file_mode_is_not_valid(const char *mode)
-{
- if (mode == NULL ||
- (mode[0] != 'r' && mode[0] != 'w') ||
- mode[1] != 'b' || mode[2] != 0) {
- fprintf(stderr, "qemu_fopen: Argument validity check failed\n");
- return true;
- }
-
- return false;
-}
-
static QEMUFile *qemu_file_new_impl(QIOChannel *ioc, bool is_writable)
{
QEMUFile *f;
diff --git a/migration/rdma.c b/migration/rdma.c
index dd1c039e6c..ca430d319d 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -4053,27 +4053,26 @@ static void qio_channel_rdma_register_types(void)
type_init(qio_channel_rdma_register_types);
-static QEMUFile *qemu_fopen_rdma(RDMAContext *rdma, const char *mode)
+static QEMUFile *rdma_new_input(RDMAContext *rdma)
{
- QIOChannelRDMA *rioc;
+ QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(object_new(TYPE_QIO_CHANNEL_RDMA));
- if (qemu_file_mode_is_not_valid(mode)) {
- return NULL;
- }
+ rioc->file = qemu_file_new_input(QIO_CHANNEL(rioc));
+ rioc->rdmain = rdma;
+ rioc->rdmaout = rdma->return_path;
+ qemu_file_set_hooks(rioc->file, &rdma_read_hooks);
- rioc = QIO_CHANNEL_RDMA(object_new(TYPE_QIO_CHANNEL_RDMA));
+ return rioc->file;
+}
- if (mode[0] == 'w') {
- rioc->file = qemu_file_new_output(QIO_CHANNEL(rioc));
- rioc->rdmaout = rdma;
- rioc->rdmain = rdma->return_path;
- qemu_file_set_hooks(rioc->file, &rdma_write_hooks);
- } else {
- rioc->file = qemu_file_new_input(QIO_CHANNEL(rioc));
- rioc->rdmain = rdma;
- rioc->rdmaout = rdma->return_path;
- qemu_file_set_hooks(rioc->file, &rdma_read_hooks);
- }
+static QEMUFile *rdma_new_output(RDMAContext *rdma)
+{
+ QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(object_new(TYPE_QIO_CHANNEL_RDMA));
+
+ rioc->file = qemu_file_new_output(QIO_CHANNEL(rioc));
+ rioc->rdmaout = rdma;
+ rioc->rdmain = rdma->return_path;
+ qemu_file_set_hooks(rioc->file, &rdma_write_hooks);
return rioc->file;
}
@@ -4099,9 +4098,9 @@ static void rdma_accept_incoming_migration(void *opaque)
return;
}
- f = qemu_fopen_rdma(rdma, "rb");
+ f = rdma_new_input(rdma);
if (f == NULL) {
- fprintf(stderr, "RDMA ERROR: could not qemu_fopen_rdma\n");
+ fprintf(stderr, "RDMA ERROR: could not open RDMA for input\n");
qemu_rdma_cleanup(rdma);
return;
}
@@ -4224,7 +4223,7 @@ void rdma_start_outgoing_migration(void *opaque,
trace_rdma_start_outgoing_migration_after_rdma_connect();
- s->to_dst_file = qemu_fopen_rdma(rdma, "wb");
+ s->to_dst_file = rdma_new_output(rdma);
migrate_fd_connect(s, NULL);
return;
return_path_err:
--
2.40.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* Re: [PULL 00/30] Next patches
2023-06-22 16:54 [PULL 00/30] Next patches Juan Quintela
` (29 preceding siblings ...)
2023-06-22 16:55 ` [PULL 30/30] migration/rdma: Split qemu_fopen_rdma() into input/output functions Juan Quintela
@ 2023-06-23 5:45 ` Richard Henderson
2023-06-23 7:34 ` Juan Quintela
` (3 more replies)
30 siblings, 4 replies; 48+ messages in thread
From: Richard Henderson @ 2023-06-23 5:45 UTC (permalink / raw)
To: Juan Quintela, qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block
On 6/22/23 18:54, Juan Quintela wrote:
> The following changes since commit b455ce4c2f300c8ba47cba7232dd03261368a4cb:
>
> Merge tag 'q800-for-8.1-pull-request' of https://github.com/vivier/qemu-m68k into staging (2023-06-22 10:18:32 +0200)
>
> are available in the Git repository at:
>
> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>
> for you to fetch changes up to 23e4307eadc1497bd0a11ca91041768f15963b68:
>
> migration/rdma: Split qemu_fopen_rdma() into input/output functions (2023-06-22 18:11:58 +0200)
>
> ----------------------------------------------------------------
> Migration Pull request (20230621) take 2
>
> In this pull request the only change is fixing a 32-bit compilation issue.
>
> Please apply.
>
> [take 1]
> - fix for multifd thread creation (fabiano)
> - dirtylimit (hyman)
> * migration-test will go on next PULL request, as it has failures.
> - Improve error description (tejus)
> - improve -incoming and set parameters before calling incoming (wei)
> - migration atomic counters reviewed patches (quintela)
> - migration-test refactoring reviewed (quintela)
New failure with check-cfi-x86_64:
https://gitlab.com/qemu-project/qemu/-/jobs/4527202764#L188
/builds/qemu-project/qemu/build/pyvenv/bin/meson test --no-rebuild -t 0 --num-processes
1 --print-errorlogs
1/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/qom-test OK
6.55s 8 subtests passed
▶ 2/350 ERROR:../tests/qtest/migration-test.c:320:check_guests_ram: assertion failed:
(bad == 0) ERROR
2/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test ERROR
151.99s killed by signal 6 SIGABRT
>>> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh
MALLOC_PERTURB_=3 QTEST_QEMU_IMG=./qemu-img
QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon
QTEST_QEMU_BINARY=./qemu-system-x86_64
/builds/qemu-project/qemu/build/tests/qtest/migration-test --tap -k
――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
stderr:
qemu-system-x86_64: Unable to read from socket: Connection reset by peer
Memory content inconsistency at 4f65000 first_byte = 30 last_byte = 2f current = 88
hit_edge = 1
**
ERROR:../tests/qtest/migration-test.c:320:check_guests_ram: assertion failed: (bad == 0)
(test program exited with status code -6)
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
r~
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PULL 00/30] Next patches
2023-06-23 5:45 ` [PULL 00/30] Next patches Richard Henderson
@ 2023-06-23 7:34 ` Juan Quintela
2023-06-25 22:01 ` Juan Quintela
` (2 subsequent siblings)
3 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-23 7:34 UTC (permalink / raw)
To: Richard Henderson
Cc: qemu-devel, Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 6/22/23 18:54, Juan Quintela wrote:
>> The following changes since commit b455ce4c2f300c8ba47cba7232dd03261368a4cb:
>> Merge tag 'q800-for-8.1-pull-request'
>> of https://github.com/vivier/qemu-m68k into staging (2023-06-22
>> 10:18:32 +0200)
>> are available in the Git repository at:
>> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>> for you to fetch changes up to
>> 23e4307eadc1497bd0a11ca91041768f15963b68:
>> migration/rdma: Split qemu_fopen_rdma() into input/output
>> functions (2023-06-22 18:11:58 +0200)
>> ----------------------------------------------------------------
>> Migration Pull request (20230621) take 2
>> In this pull request the only change is fixing a 32-bit compilation
>> issue.
>> Please apply.
>> [take 1]
>> - fix for multifd thread creation (fabiano)
>> - dirtylimit (hyman)
>> * migration-test will go on next PULL request, as it has failures.
>> - Improve error description (tejus)
>> - improve -incoming and set parameters before calling incoming (wei)
>> - migration atomic counters reviewed patches (quintela)
>> - migration-test refactoring reviewed (quintela)
I had the feeling when I woke up that today was going to be a great day.
Confirmed.
> New failure with check-cfi-x86_64:
Aha. CFI. Something I don't even know what it is, and it's failing on me.
/me googles.
/me enables cfi+lto and compiles with clang.
[50/491] Compiling C object subprojects/berkeley-testfloat-3/libtestfloat.a.p/source_test_az_f128_rx.c.o
[51/491] Compiling C object subprojects/berkeley-testfloat-3/libtestfloat.a.p/source_test_az_f128.c.o
[52/491] Compiling C object subprojects/berkeley-testfloat-3/libtestfloat.a.p/source_test_abz_f128.c.o
[53/491] Compiling C object subprojects/berkeley-testfloat-3/libtestfloat.a.p/source_test_abcz_f128.c.o
[54/491] Compiling C object subprojects/berkeley-testfloat-3/libtestfloat.a.p/source_test_ab_f128_z_bool.c.o
[55/491] Linking target qemu-system-x86_64
FAILED: qemu-system-x86_64
clang++ -m64 -mcx16 @qemu-system-x86_64.rsp
/usr/bin/ld: cannot find libchardev.fa: Too many open files
/usr/bin/ld: cannot find libqmp.fa: Too many open files
/usr/bin/ld: cannot find libpage-vary-common.a: Too many open files
/usr/bin/ld: cannot find libqemuutil.a: Too many open files
/usr/bin/ld: cannot find subprojects/libvhost-user/libvhost-user-glib.a: Too many open files
/usr/bin/ld: cannot find subprojects/libvhost-user/libvhost-user.a: Too many open files
/usr/bin/ld: cannot find tcg/libtcg_softmmu.fa: Too many open files
/usr/bin/ld: cannot find libmigration.fa: Too many open files
/usr/bin/ld: cannot find libhwcore.fa: Too many open files
/usr/bin/ld: cannot find libqom.fa: Too many open files
/usr/bin/ld: cannot find gdbstub/libgdb_softmmu.fa: Too many open files
/usr/bin/ld: cannot find libio.fa: Too many open files
/usr/bin/ld: cannot find libcrypto.fa: Too many open files
/usr/bin/ld: cannot find libauthz.fa: Too many open files
/usr/bin/ld: cannot find libblockdev.fa: Too many open files
/usr/bin/ld: cannot find libblock.fa: Too many open files
/usr/bin/ld: cannot find libchardev.fa: Too many open files
/usr/bin/ld: cannot find libqmp.fa: Too many open files
/usr/bin/ld: cannot find /usr/lib64/libpixman-1.so: Too many open files
/usr/bin/ld: cannot find /usr/lib64/libepoxy.so: Too many open files
/usr/bin/ld: cannot find /usr/lib64/libxenctrl.so: Too many open files
/usr/bin/ld: cannot find /usr/lib64/libxenstore.so: Too many open files
/usr/bin/ld: cannot find /usr/lib64/libxenforeignmemory.so: Too many open files
/usr/bin/ld: cannot find /usr/lib64/libxengnttab.so: Too many open files
/usr/bin/ld: cannot find /usr/lib64/libxenevtchn.so: Too many open files
Confirmed, today is going to be a great day.
No check-cfi<anything> target for me.
/me investigates what is going on. Found this and retries.
AR=llvm-ar CC=clang CXX=clang++ /mnt/code/qemu/full/configure
--enable-cfi --target-list=x86_64-softmmu
Gives the same error.
After a while of desperation trying to disable features, etc., etc.,
just doing a plain ulimit -n 4096 fixed the problem.
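For reference, the full sequence that worked here, more or less (the
configure line is the one from my local tree above; the 4096 value and the
final ninja step are assumptions, adjust to taste):

# Raise the per-process open-file limit first; the CFI+LTO link keeps
# many intermediate objects open at the same time.
ulimit -n 4096
# Same configure invocation as above (local source path), then relink.
AR=llvm-ar CC=clang CXX=clang++ /mnt/code/qemu/full/configure \
    --enable-cfi --target-list=x86_64-softmmu
ninja qemu-system-x86_64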
Here we go.
Later, Juan.
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PULL 00/30] Next patches
2023-06-23 5:45 ` [PULL 00/30] Next patches Richard Henderson
2023-06-23 7:34 ` Juan Quintela
@ 2023-06-25 22:01 ` Juan Quintela
2023-06-26 6:37 ` Richard Henderson
2023-06-26 13:09 ` Juan Quintela
2023-06-27 9:07 ` Juan Quintela
3 siblings, 1 reply; 48+ messages in thread
From: Juan Quintela @ 2023-06-25 22:01 UTC (permalink / raw)
To: Richard Henderson
Cc: qemu-devel, Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 6/22/23 18:54, Juan Quintela wrote:
>> The following changes since commit b455ce4c2f300c8ba47cba7232dd03261368a4cb:
>> Merge tag 'q800-for-8.1-pull-request'
>> ofhttps://github.com/vivier/qemu-m68k into staging (2023-06-22
>> 10:18:32 +0200)
>> are available in the Git repository at:
>> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>> for you to fetch changes up to
>> 23e4307eadc1497bd0a11ca91041768f15963b68:
>> migration/rdma: Split qemu_fopen_rdma() into input/output
>> functions (2023-06-22 18:11:58 +0200)
>> ----------------------------------------------------------------
>> Migration Pull request (20230621) take 2
>> In this pull request the only change is fixing 32 bits complitaion
>> issue.
>> Please apply.
>> [take 1]
>> - fix for multifd thread creation (fabiano)
>> - dirtylimity (hyman)
>> * migration-test will go on next PULL request, as it has failures.
>> - Improve error description (tejus)
>> - improve -incoming and set parameters before calling incoming (wei)
>> - migration atomic counters reviewed patches (quintela)
>> - migration-test refacttoring reviewed (quintela)
>
> New failure with check-cfi-x86_64:
>
> https://gitlab.com/qemu-project/qemu/-/jobs/4527202764#L188
First of all, is there a way to get to the test log? In particular, I
am interested in knowing at least which test has failed (yes,
migration-test doesn't tell you much more than that).
After a bit more wrestling, I have been able to get things compiling
with this command:
$ /mnt/code/qemu/full/configure --enable-cfi
--target-list=x86_64-softmmu --enable-cfi-debug --cc=clang --cxx=clang++
--disable-docs --enable-safe-stack --disable-slirp
It should basically be the one that check-cfi-x86_64 is using if I
understand the build recipes correctly (that is a BIG IF).
And it passes for me with flying colors.
Here I have Fedora 38; the builder has Fedora 37.
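(What I ran to check it, roughly, from the build directory — this mirrors
the CI invocation in the log below, and the relative paths are assumptions
about my local layout:)

QTEST_QEMU_IMG=./qemu-img QTEST_QEMU_BINARY=./qemu-system-x86_64 \
    ./tests/qtest/migration-test --tap -k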
> /builds/qemu-project/qemu/build/pyvenv/bin/meson test --no-rebuild -t
> 0 --num-processes 1 --print-errorlogs
> 1/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/qom-test
> OK 6.55s 8 subtests passed
> ▶ 2/350 ERROR:../tests/qtest/migration-test.c:320:check_guests_ram:
> assertion failed: (bad == 0) ERROR
> 2/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test
> ERROR 151.99s killed by signal 6 SIGABRT
>>>>
> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh
> MALLOC_PERTURB_=3 QTEST_QEMU_IMG=./qemu-img
> QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon
> QTEST_QEMU_BINARY=./qemu-system-x86_64
> /builds/qemu-project/qemu/build/tests/qtest/migration-test --tap
> -k
> ――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
> stderr:
> qemu-system-x86_64: Unable to read from socket: Connection reset by peer
This is the interesting bit: why is the connection closed?
> Memory content inconsistency at 4f65000 first_byte = 30 last_byte = 2f
> current = 88 hit_edge = 1
> **
> ERROR:../tests/qtest/migration-test.c:320:check_guests_ram: assertion failed: (bad == 0)
>
> (test program exited with status code -6)
This makes zero sense, unless we haven't migrated all the guest
state, which is what seems to have happened.
Is there a place on the web interface to see the full logs? Or is that
the only thing that the CI system stores?
Later, Juan.
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PULL 00/30] Next patches
2023-06-25 22:01 ` Juan Quintela
@ 2023-06-26 6:37 ` Richard Henderson
2023-06-26 13:05 ` Juan Quintela
0 siblings, 1 reply; 48+ messages in thread
From: Richard Henderson @ 2023-06-26 6:37 UTC (permalink / raw)
To: quintela
Cc: qemu-devel, Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block
On 6/26/23 00:01, Juan Quintela wrote:
> Richard Henderson <richard.henderson@linaro.org> wrote:
>> On 6/22/23 18:54, Juan Quintela wrote:
>>> The following changes since commit b455ce4c2f300c8ba47cba7232dd03261368a4cb:
>>> Merge tag 'q800-for-8.1-pull-request'
>>> ofhttps://github.com/vivier/qemu-m68k into staging (2023-06-22
>>> 10:18:32 +0200)
>>> are available in the Git repository at:
>>> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>>> for you to fetch changes up to
>>> 23e4307eadc1497bd0a11ca91041768f15963b68:
>>> migration/rdma: Split qemu_fopen_rdma() into input/output
>>> functions (2023-06-22 18:11:58 +0200)
>>> ----------------------------------------------------------------
>>> Migration Pull request (20230621) take 2
>>> In this pull request the only change is fixing 32 bits complitaion
>>> issue.
>>> Please apply.
>>> [take 1]
>>> - fix for multifd thread creation (fabiano)
>>> - dirtylimity (hyman)
>>> * migration-test will go on next PULL request, as it has failures.
>>> - Improve error description (tejus)
>>> - improve -incoming and set parameters before calling incoming (wei)
>>> - migration atomic counters reviewed patches (quintela)
>>> - migration-test refacttoring reviewed (quintela)
>>
>> New failure with check-cfi-x86_64:
>>
>> https://gitlab.com/qemu-project/qemu/-/jobs/4527202764#L188
>
> First of all, is there a way to get to the test log? In particular, I
> am interested in knowing at least what test has failed (yes,
> migration-test don't tell you much more).
>
> After a bit more wrestling, I have been able to get things compiling
> with this command:
>
> $ /mnt/code/qemu/full/configure --enable-cfi
> --target-list=x86_64-softmmu --enable-cfi-debug --cc=clang --cxx=clang++
> --disable-docs --enable-safe-stack --disable-slirp
>
> It should basically be the one that check-cfi-x86_64 is using if I
> understand the build recipes correctly (that is a BIG IF).
>
> And it passes for me with flying colors.
> Here I have Fedora38, builder has F37.
>
>> /builds/qemu-project/qemu/build/pyvenv/bin/meson test --no-rebuild -t
>> 0 --num-processes 1 --print-errorlogs
>> 1/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/qom-test
>> OK 6.55s 8 subtests passed
>> ▶ 2/350 ERROR:../tests/qtest/migration-test.c:320:check_guests_ram:
>> assertion failed: (bad == 0) ERROR
>> 2/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test
>> ERROR 151.99s killed by signal 6 SIGABRT
>>>>>
>> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh
>> MALLOC_PERTURB_=3 QTEST_QEMU_IMG=./qemu-img
>> QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon
>> QTEST_QEMU_BINARY=./qemu-system-x86_64
>> /builds/qemu-project/qemu/build/tests/qtest/migration-test --tap
>> -k
>> ――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
>> stderr:
>> qemu-system-x86_64: Unable to read from socket: Connection reset by peer
>
> This is the interesting bit, why is the conection closed.
>
>> Memory content inconsistency at 4f65000 first_byte = 30 last_byte = 2f
>> current = 88 hit_edge = 1
>> **
>> ERROR:../tests/qtest/migration-test.c:320:check_guests_ram: assertion failed: (bad == 0)
>>
>> (test program exited with status code -6)
>
> This makes zero sense, except if we haven't migrated all the guest
> state, that it is what it has happened.
>
> Is there a place on the web interface to see the full logs? Or that is
> the only thing that the CI system stores?
The "full logs" are
https://gitlab.com/qemu-project/qemu/-/jobs/4527202764/artifacts/download?file_type=trace
r~
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PULL 00/30] Next patches
2023-06-26 6:37 ` Richard Henderson
@ 2023-06-26 13:05 ` Juan Quintela
2023-06-26 13:29 ` Richard Henderson
0 siblings, 1 reply; 48+ messages in thread
From: Juan Quintela @ 2023-06-26 13:05 UTC (permalink / raw)
To: Richard Henderson
Cc: qemu-devel, Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 6/26/23 00:01, Juan Quintela wrote:
>> Richard Henderson <richard.henderson@linaro.org> wrote:
>>> On 6/22/23 18:54, Juan Quintela wrote:
>>>> The following changes since commit b455ce4c2f300c8ba47cba7232dd03261368a4cb:
>>>> Merge tag 'q800-for-8.1-pull-request'
>>>> ofhttps://github.com/vivier/qemu-m68k into staging (2023-06-22
>>>> 10:18:32 +0200)
>>>> are available in the Git repository at:
>>>> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>>>> for you to fetch changes up to
>>>> 23e4307eadc1497bd0a11ca91041768f15963b68:
>>>> migration/rdma: Split qemu_fopen_rdma() into input/output
>>>> functions (2023-06-22 18:11:58 +0200)
>>>> ----------------------------------------------------------------
>>>> Migration Pull request (20230621) take 2
>>>> In this pull request the only change is fixing 32 bits complitaion
>>>> issue.
>>>> Please apply.
>>>> [take 1]
>>>> - fix for multifd thread creation (fabiano)
>>>> - dirtylimity (hyman)
>>>> * migration-test will go on next PULL request, as it has failures.
>>>> - Improve error description (tejus)
>>>> - improve -incoming and set parameters before calling incoming (wei)
>>>> - migration atomic counters reviewed patches (quintela)
>>>> - migration-test refacttoring reviewed (quintela)
>>>
>>> New failure with check-cfi-x86_64:
>>>
>>> https://gitlab.com/qemu-project/qemu/-/jobs/4527202764#L188
>> First of all, is there a way to get to the test log? In particular,
>> I
>> am interested in knowing at least what test has failed (yes,
>> migration-test don't tell you much more).
>> After a bit more wrestling, I have been able to get things compiling
>> with this command:
>> $ /mnt/code/qemu/full/configure --enable-cfi
>> --target-list=x86_64-softmmu --enable-cfi-debug --cc=clang --cxx=clang++
>> --disable-docs --enable-safe-stack --disable-slirp
>> It should basically be the one that check-cfi-x86_64 is using if I
>> understand the build recipes correctly (that is a BIG IF).
>> And it passes for me with flying colors.
>> Here I have Fedora38, builder has F37.
>>
>>> /builds/qemu-project/qemu/build/pyvenv/bin/meson test --no-rebuild -t
>>> 0 --num-processes 1 --print-errorlogs
>>> 1/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/qom-test
>>> OK 6.55s 8 subtests passed
>>> ▶ 2/350 ERROR:../tests/qtest/migration-test.c:320:check_guests_ram:
>>> assertion failed: (bad == 0) ERROR
>>> 2/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test
>>> ERROR 151.99s killed by signal 6 SIGABRT
>>>>>>
>>> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh
>>> MALLOC_PERTURB_=3 QTEST_QEMU_IMG=./qemu-img
>>> QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon
>>> QTEST_QEMU_BINARY=./qemu-system-x86_64
>>> /builds/qemu-project/qemu/build/tests/qtest/migration-test --tap
>>> -k
>>> ――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
>>> stderr:
>>> qemu-system-x86_64: Unable to read from socket: Connection reset by peer
>> This is the interesting bit, why is the conection closed.
>>
>>> Memory content inconsistency at 4f65000 first_byte = 30 last_byte = 2f
>>> current = 88 hit_edge = 1
>>> **
>>> ERROR:../tests/qtest/migration-test.c:320:check_guests_ram: assertion failed: (bad == 0)
>>>
>>> (test program exited with status code -6)
>> This makes zero sense, except if we haven't migrated all the guest
>> state, that it is what it has happened.
>> Is there a place on the web interface to see the full logs? Or that
>> is
>> the only thing that the CI system stores?
>
> The "full logs" are
>
> https://gitlab.com/qemu-project/qemu/-/jobs/4527202764/artifacts/download?file_type=trace
Not useful. I was hoping for something like the output one gets when running
./tests/qtest/migration-test
by hand.
Anyway, to make things faster:
- Configuring with
/mnt/code/qemu/full/configure --enable-cfi
--target-list=x86_64-softmmu --enable-cfi-debug --cc=clang --cxx=clang++
--disable-docs --enable-safe-stack --disable-slirp
worked like a charm.
- Your test run:
qemu-system-x86_64: Unable to read from socket: Connection reset by peer
One of the sides died, so anything else after that doesn't matter.
Either I don't understand what CFI is (and I don't rule out that
possibility), or I can't understand how checking indirect function calls
can make migration-test die without a single CFI error message.
- I tried the CI pipeline myself, same exact source:
https://gitlab.com/juan.quintela/qemu/-/commit/23e4307eadc1497bd0a11ca91041768f15963b68/pipelines?ref=sent%2Fmigration-20230621b
This is what fails:
https://gitlab.com/juan.quintela/qemu/-/jobs/4527782025
16/395 ERROR:../tests/qtest/qos-test.c:191:subprocess_run_one_test: child process (/x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-net-pci/virtio-net/virtio-net-tests/vhost-user/reconnect/subprocess [4569]) failed unexpectedly ERROR
16/395 qemu:qtest+qtest-x86_64 / qtest-x86_64/qos-test ERROR 27.46s killed by signal 6 SIGABRT
>>> MALLOC_PERTURB_=92 QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon QTEST_QEMU_BINARY=./qemu-system-x86_64 QTEST_QEMU_IMG=./qemu-img G_TEST_DBUS_DAEMON=/builds/juan.quintela/qemu/tests/dbus-vmstate-daemon.sh /builds/juan.quintela/qemu/build/tests/qtest/qos-test --tap -k
――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
stderr:
Vhost user backend fails to broadcast fake RARP
qemu-system-x86_64: -chardev socket,id=chr-reconnect,path=/tmp/vhost-test-8XUX61/reconnect.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-8XUX61/reconnect.sock,server=on
qemu-system-x86_64: Failed to set msg fds.
qemu-system-x86_64: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
qemu-system-x86_64: Failed to set msg fds.
qemu-system-x86_64: vhost VQ 1 ring restore failed: -22: Invalid argument (22)
**
ERROR:../tests/qtest/vhost-user-test.c:890:wait_for_rings_started: assertion failed (ctpop64(s->rings) == count): (1 == 2)
**
ERROR:../tests/qtest/qos-test.c:191:subprocess_run_one_test: child
process
(/x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-net-pci/virtio-net/virtio-net-tests/vhost-user/reconnect/subprocess
[4569]) failed unexpectedly
vhost? virtio-queue? In a non-migration test?
I don't know what is going on, but this is weird.
Do we have a way to run on that image:
./tests/qtest/migration-test
in a loop until it fails, and at least see what test is failing?
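(Locally, a dumb loop along these lines would at least get that far — it is
not a CI job, and it assumes the build-directory layout from the logs above:)

# Keep re-running migration-test until it fails; the last TAP output
# then shows which subtest tripped.
while QTEST_QEMU_BINARY=./qemu-system-x86_64 \
      ./tests/qtest/migration-test --tap -k; do
    :
done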
Later, Juan.
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PULL 00/30] Next patches
2023-06-26 13:05 ` Juan Quintela
@ 2023-06-26 13:29 ` Richard Henderson
0 siblings, 0 replies; 48+ messages in thread
From: Richard Henderson @ 2023-06-26 13:29 UTC (permalink / raw)
To: quintela
Cc: qemu-devel, Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block
On 6/26/23 15:05, Juan Quintela wrote:
>> The "full logs" are
>>
>> https://gitlab.com/qemu-project/qemu/-/jobs/4527202764/artifacts/download?file_type=trace
>
> Not useful. I was hoping that there is something like when one runs
> ./tests/qtest/migration-test
I thought I saw a patch today to save more artifacts.
But the bottom line is that we don't emit enough stuff from any of our tests to debug them
from logs -- we're too used to using other methods.
> And I don't understand what CFI is (and I don't rule out that
> posibility) or I can't understand how checking indirect functions call
> can make migration-test die without a single CFI error message?
CFI (control-flow integrity) adds checks on indirect call paths, which
may affect timing.
This is almost certainly some sort of race condition.
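(Illustrative only, nothing here is from the thread: the kind of check
clang's CFI adds on an indirect call can be seen with a tiny standalone
reproducer; the flags follow the usual clang CFI recipe and may need
tweaking.)

cat > cfi-demo.c <<'EOF'
typedef int (*int_fn)(int);
static void greet(void) { }
int main(void)
{
    /* Call a void(void) function through an int(int) pointer: with CFI
     * enabled, the type-mismatched indirect call traps at runtime
     * instead of proceeding. */
    int_fn f = (int_fn)greet;
    return f(21);
}
EOF
clang -flto -fvisibility=hidden -fsanitize=cfi-icall -fuse-ld=lld \
    cfi-demo.c -o cfi-demo
./cfi-demo   # expected to die with an illegal-instruction trap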
> Do we have a way to run on that image:
>
> ./tests/qtest/migration-test
>
> in a loop until it fails, and at least see what test is failing?
Not as is, no. You'd have to create a new CI job, and for that you'll need advice beyond
myself.
r~
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PULL 00/30] Next patches
2023-06-23 5:45 ` [PULL 00/30] Next patches Richard Henderson
2023-06-23 7:34 ` Juan Quintela
2023-06-25 22:01 ` Juan Quintela
@ 2023-06-26 13:09 ` Juan Quintela
2023-06-27 9:07 ` Juan Quintela
3 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-26 13:09 UTC (permalink / raw)
To: Richard Henderson
Cc: qemu-devel, Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 6/22/23 18:54, Juan Quintela wrote:
>> The following changes since commit b455ce4c2f300c8ba47cba7232dd03261368a4cb:
>> Merge tag 'q800-for-8.1-pull-request'
>> ofhttps://github.com/vivier/qemu-m68k into staging (2023-06-22
>> 10:18:32 +0200)
>> are available in the Git repository at:
>> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>> for you to fetch changes up to
>> 23e4307eadc1497bd0a11ca91041768f15963b68:
>> migration/rdma: Split qemu_fopen_rdma() into input/output
>> functions (2023-06-22 18:11:58 +0200)
>> ----------------------------------------------------------------
>> Migration Pull request (20230621) take 2
>> In this pull request the only change is fixing 32 bits complitaion
>> issue.
>> Please apply.
>> [take 1]
>> - fix for multifd thread creation (fabiano)
>> - dirtylimity (hyman)
>> * migration-test will go on next PULL request, as it has failures.
>> - Improve error description (tejus)
>> - improve -incoming and set parameters before calling incoming (wei)
>> - migration atomic counters reviewed patches (quintela)
>> - migration-test refacttoring reviewed (quintela)
>
> New failure with check-cfi-x86_64:
I am looking at the whole series. I can't see a single function in it
that is new, changes prototypes, or anything of the sort.
So is this problem related to CFI? Or is it a migration problem that
somehow only happens when one uses CFI?
Inquiring minds want to know. Any clue?
Later, Juan.
>
> https://gitlab.com/qemu-project/qemu/-/jobs/4527202764#L188
>
> /builds/qemu-project/qemu/build/pyvenv/bin/meson test --no-rebuild -t
> 0 --num-processes 1 --print-errorlogs
> 1/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/qom-test
> OK 6.55s 8 subtests passed
> ▶ 2/350 ERROR:../tests/qtest/migration-test.c:320:check_guests_ram:
> assertion failed: (bad == 0) ERROR
> 2/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test
> ERROR 151.99s killed by signal 6 SIGABRT
>>>>
> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh
> MALLOC_PERTURB_=3 QTEST_QEMU_IMG=./qemu-img
> QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon
> QTEST_QEMU_BINARY=./qemu-system-x86_64
> /builds/qemu-project/qemu/build/tests/qtest/migration-test --tap
> -k
> ――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
> stderr:
> qemu-system-x86_64: Unable to read from socket: Connection reset by peer
> Memory content inconsistency at 4f65000 first_byte = 30 last_byte = 2f
> current = 88 hit_edge = 1
> **
> ERROR:../tests/qtest/migration-test.c:320:check_guests_ram: assertion failed: (bad == 0)
>
> (test program exited with status code -6)
> ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
>
>
> r~
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PULL 00/30] Next patches
2023-06-23 5:45 ` [PULL 00/30] Next patches Richard Henderson
` (2 preceding siblings ...)
2023-06-26 13:09 ` Juan Quintela
@ 2023-06-27 9:07 ` Juan Quintela
3 siblings, 0 replies; 48+ messages in thread
From: Juan Quintela @ 2023-06-27 9:07 UTC (permalink / raw)
To: Richard Henderson
Cc: qemu-devel, Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block, Stefan Hajnoczi, Kevin Wolf
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 6/22/23 18:54, Juan Quintela wrote:
>> The following changes since commit b455ce4c2f300c8ba47cba7232dd03261368a4cb:
>> Merge tag 'q800-for-8.1-pull-request'
>> ofhttps://github.com/vivier/qemu-m68k into staging (2023-06-22
>> 10:18:32 +0200)
>> are available in the Git repository at:
>> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>> for you to fetch changes up to
>> 23e4307eadc1497bd0a11ca91041768f15963b68:
>> migration/rdma: Split qemu_fopen_rdma() into input/output
>> functions (2023-06-22 18:11:58 +0200)
>> ----------------------------------------------------------------
>> Migration Pull request (20230621) take 2
>> In this pull request the only change is fixing 32 bits complitaion
>> issue.
>> Please apply.
>> [take 1]
>> - fix for multifd thread creation (fabiano)
>> - dirtylimity (hyman)
>> * migration-test will go on next PULL request, as it has failures.
>> - Improve error description (tejus)
>> - improve -incoming and set parameters before calling incoming (wei)
>> - migration atomic counters reviewed patches (quintela)
>> - migration-test refacttoring reviewed (quintela)
>
> New failure with check-cfi-x86_64:
>
> https://gitlab.com/qemu-project/qemu/-/jobs/4527202764#L188
>
> /builds/qemu-project/qemu/build/pyvenv/bin/meson test --no-rebuild -t
> 0 --num-processes 1 --print-errorlogs
> 1/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/qom-test
> OK 6.55s 8 subtests passed
> ▶ 2/350 ERROR:../tests/qtest/migration-test.c:320:check_guests_ram:
> assertion failed: (bad == 0) ERROR
> 2/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test
> ERROR 151.99s killed by signal 6 SIGABRT
>>>>
> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh
> MALLOC_PERTURB_=3 QTEST_QEMU_IMG=./qemu-img
> QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon
> QTEST_QEMU_BINARY=./qemu-system-x86_64
> /builds/qemu-project/qemu/build/tests/qtest/migration-test --tap
> -k
> ――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
> stderr:
> qemu-system-x86_64: Unable to read from socket: Connection reset by peer
> Memory content inconsistency at 4f65000 first_byte = 30 last_byte = 2f
> current = 88 hit_edge = 1
> **
> ERROR:../tests/qtest/migration-test.c:320:check_guests_ram: assertion failed: (bad == 0)
>
> (test program exited with status code -6)
> ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
Still running in bisect mode (this takes forever).
[cc'ing stefan and kevin]
And now I get this problem with gcov:
https://gitlab.com/juan.quintela/qemu/-/jobs/4546094720
357/423 qemu:block / io-qcow2-copy-before-write ERROR 6.23s exit status 1
>>> PYTHON=/builds/juan.quintela/qemu/build/pyvenv/bin/python3 MALLOC_PERTURB_=154 /builds/juan.quintela/qemu/build/pyvenv/bin/python3 /builds/juan.quintela/qemu/build/../tests/qemu-iotests/check -tap -qcow2 copy-before-write --source-dir /builds/juan.quintela/qemu/tests/qemu-iotests --build-dir /builds/juan.quintela/qemu/build/tests/qemu-iotests
――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
stderr:
--- /builds/juan.quintela/qemu/tests/qemu-iotests/tests/copy-before-write.out
+++ /builds/juan.quintela/qemu/build/scratch/qcow2-file-copy-before-write/copy-before-write.out.bad
@@ -1,5 +1,21 @@
-....
+...F
+======================================================================
+FAIL: test_timeout_break_snapshot (__main__.TestCbwError)
+----------------------------------------------------------------------
+Traceback (most recent call last):
+ File "/builds/juan.quintela/qemu/tests/qemu-iotests/tests/copy-before-write", line 210, in test_timeout_break_snapshot
+ self.assertEqual(log, """\
+AssertionError: 'wrot[195 chars]read 1048576/1048576 bytes at offset 0\n1 MiB,[46 chars]c)\n' != 'wrot[195 chars]read failed: Permission denied\n'
+ wrote 524288/524288 bytes at offset 0
+ 512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+ wrote 524288/524288 bytes at offset 524288
+ 512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
++ read failed: Permission denied
+- read 1048576/1048576 bytes at offset 0
+- 1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+
----------------------------------------------------------------------
Ran 4 tests
-OK
+FAILED (failures=1)
(test program exited with status code 1)
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
I have no clue how my changes could make the qtests fail.
Especially with a read permission error.
Any clue?
Later, Juan.
PS. Yep, continuing the bisect.
^ permalink raw reply [flat|nested] 48+ messages in thread