* [Qemu-devel] [PULL 01/10] typedefs: add QJSON
From: Dr. David Alan Gilbert (git) @ 2018-06-15 15:15 UTC
To: qemu-devel, quintela, peterx, groug, vsementsov, xiaoguangrong,
bala24
From: Greg Kurz <groug@kaod.org>
Since commit 83ee768d6247b, we now have two places that define the
QJSON type:
$ git grep 'typedef struct QJSON QJSON'
include/migration/vmstate.h:typedef struct QJSON QJSON;
migration/qjson.h:typedef struct QJSON QJSON;
This breaks docker-test-build@centos6:
In file included from /tmp/qemu-test/src/migration/savevm.c:59:
/tmp/qemu-test/src/migration/qjson.h:16: error: redefinition of typedef
'QJSON'
/tmp/qemu-test/src/include/migration/vmstate.h:30: note: previous
declaration of 'QJSON' was here
make: *** [migration/savevm.o] Error 1
This happens because CentOS 6 ships the old GCC 4.4.7. Even though
redefining a typedef with the same type has been permitted since GCC 4.6
(unless -pedantic is passed), there is no reason to do it on purpose.
Let's have a single definition in <qemu/typedefs.h> instead.
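For illustration, a minimal sketch of the failure mode (hypothetical
header names; only the duplicate typedef matters):

  /* first.h */
  typedef struct QJSON QJSON;

  /* second.h, pulled into the same translation unit */
  typedef struct QJSON QJSON;  /* GCC 4.4: "error: redefinition of typedef 'QJSON'" */

C11, and GCC >= 4.6 by default, accept a duplicate typedef as long as
both name the same type; GCC 4.4 rejects it outright.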
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <152844714981.11789.3657734445739553287.stgit@bahia.lan>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
include/migration/vmstate.h | 2 --
include/qemu/typedefs.h | 1 +
migration/qjson.h | 2 --
3 files changed, 1 insertion(+), 4 deletions(-)
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index 3747110f95..42b946ce90 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -27,8 +27,6 @@
#ifndef QEMU_VMSTATE_H
#define QEMU_VMSTATE_H
-typedef struct QJSON QJSON;
-
typedef struct VMStateInfo VMStateInfo;
typedef struct VMStateDescription VMStateDescription;
typedef struct VMStateField VMStateField;
diff --git a/include/qemu/typedefs.h b/include/qemu/typedefs.h
index 325c72de33..3ec0e13a96 100644
--- a/include/qemu/typedefs.h
+++ b/include/qemu/typedefs.h
@@ -97,6 +97,7 @@ typedef struct QEMUTimer QEMUTimer;
typedef struct QEMUTimerListGroup QEMUTimerListGroup;
typedef struct QBool QBool;
typedef struct QDict QDict;
+typedef struct QJSON QJSON;
typedef struct QList QList;
typedef struct QNull QNull;
typedef struct QNum QNum;
diff --git a/migration/qjson.h b/migration/qjson.h
index 2978b5f371..41664f2d71 100644
--- a/migration/qjson.h
+++ b/migration/qjson.h
@@ -13,8 +13,6 @@
#ifndef QEMU_QJSON_H
#define QEMU_QJSON_H
-typedef struct QJSON QJSON;
-
QJSON *qjson_new(void);
void qjson_destroy(QJSON *json);
void json_prop_str(QJSON *json, const char *name, const char *str);
--
2.17.1
* [Qemu-devel] [PULL 02/10] migration: Fixes for non-migratable RAMBlocks
From: Dr. David Alan Gilbert (git) @ 2018-06-15 15:15 UTC
To: qemu-devel, quintela, peterx, groug, vsementsov, xiaoguangrong,
bala24
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
There are still a few cases where migration code uses the macros
and functions that iterate over all RAMBlocks rather than just the
migratable ones; fix those up.
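In sketch form, the conversion applied by the hunks below is simply:

  RAMBlock *block;
  uint64_t pages = 0;

  /* before: RAMBLOCK_FOREACH(block) { ... } -- walks every RAMBlock */
  RAMBLOCK_FOREACH_MIGRATABLE(block) {      /* walks only migratable blocks */
      pages += bitmap_count_one(block->bmap,
                                block->used_length >> TARGET_PAGE_BITS);
  }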
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20180605162545.80778-2-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/ram.c | 4 ++--
migration/rdma.c | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index a500015a2f..a7807cea84 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2516,7 +2516,7 @@ static void ram_state_resume_prepare(RAMState *rs, QEMUFile *out)
* about dirty page logging as well.
*/
- RAMBLOCK_FOREACH(block) {
+ RAMBLOCK_FOREACH_MIGRATABLE(block) {
pages += bitmap_count_one(block->bmap,
block->used_length >> TARGET_PAGE_BITS);
}
@@ -3431,7 +3431,7 @@ static int ram_dirty_bitmap_sync_all(MigrationState *s, RAMState *rs)
trace_ram_dirty_bitmap_sync_start();
- RAMBLOCK_FOREACH(block) {
+ RAMBLOCK_FOREACH_MIGRATABLE(block) {
qemu_savevm_send_recv_bitmap(file, block->idstr);
trace_ram_dirty_bitmap_request(block->idstr);
ramblock_count++;
diff --git a/migration/rdma.c b/migration/rdma.c
index 05aee3d591..8bd7159059 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -635,7 +635,7 @@ static int qemu_rdma_init_ram_blocks(RDMAContext *rdma)
assert(rdma->blockmap == NULL);
memset(local, 0, sizeof *local);
- qemu_ram_foreach_block(qemu_rdma_init_one_block, rdma);
+ qemu_ram_foreach_migratable_block(qemu_rdma_init_one_block, rdma);
trace_qemu_rdma_init_ram_blocks(local->nb_blocks);
rdma->dest_blocks = g_new0(RDMADestBlock,
rdma->local_ram_blocks.nb_blocks);
--
2.17.1
* [Qemu-devel] [PULL 03/10] migration: Poison ramblock loops in migration
From: Dr. David Alan Gilbert (git) @ 2018-06-15 15:15 UTC
To: qemu-devel, quintela, peterx, groug, vsementsov, xiaoguangrong,
bala24
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
The migration code should be using RAMBLOCK_FOREACH_MIGRATABLE and
qemu_ram_foreach_block_migratable, not the all-block versions; poison
the generic versions so that we can't accidentally use them.
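A condensed sketch of the poisoning pattern (all three pieces appear in
the hunks below): the migratable iterator is rebuilt on top of a renamed
internal macro, and the generic name is then undefined so later uses in
migration/ram.c fail to compile:

  #define INTERNAL_RAMBLOCK_FOREACH(block) \
      QLIST_FOREACH_RCU(block, &ram_list.blocks, next)

  #define RAMBLOCK_FOREACH_MIGRATABLE(block)       \
      INTERNAL_RAMBLOCK_FOREACH(block)             \
          if (!qemu_ram_is_migratable(block)) {} else

  #undef RAMBLOCK_FOREACH  /* any stray RAMBLOCK_FOREACH now breaks the build */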
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20180605162545.80778-3-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
include/exec/ramlist.h | 4 +++-
migration/migration.h | 3 +++
migration/ram.c | 4 +++-
3 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/include/exec/ramlist.h b/include/exec/ramlist.h
index 2e2ac6cb99..bc4faa1b00 100644
--- a/include/exec/ramlist.h
+++ b/include/exec/ramlist.h
@@ -56,8 +56,10 @@ typedef struct RAMList {
extern RAMList ram_list;
/* Should be holding either ram_list.mutex, or the RCU lock. */
-#define RAMBLOCK_FOREACH(block) \
+#define INTERNAL_RAMBLOCK_FOREACH(block) \
QLIST_FOREACH_RCU(block, &ram_list.blocks, next)
+/* Never use the INTERNAL_ version except for defining other macros */
+#define RAMBLOCK_FOREACH(block) INTERNAL_RAMBLOCK_FOREACH(block)
void qemu_mutex_lock_ramlist(void);
void qemu_mutex_unlock_ramlist(void);
diff --git a/migration/migration.h b/migration/migration.h
index 5af57d616c..31d3ed12dc 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -284,4 +284,7 @@ void migrate_send_rp_resume_ack(MigrationIncomingState *mis, uint32_t value);
void dirty_bitmap_mig_before_vm_start(void);
void init_dirty_bitmap_incoming_migration(void);
+#define qemu_ram_foreach_block \
+ #warning "Use qemu_ram_foreach_block_migratable in migration code"
+
#endif
diff --git a/migration/ram.c b/migration/ram.c
index a7807cea84..e0d19305ee 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -159,9 +159,11 @@ out:
/* Should be holding either ram_list.mutex, or the RCU lock. */
#define RAMBLOCK_FOREACH_MIGRATABLE(block) \
- RAMBLOCK_FOREACH(block) \
+ INTERNAL_RAMBLOCK_FOREACH(block) \
if (!qemu_ram_is_migratable(block)) {} else
+#undef RAMBLOCK_FOREACH
+
static void ramblock_recv_map_init(void)
{
RAMBlock *rb;
--
2.17.1
* [Qemu-devel] [PULL 04/10] migration/block-dirty-bitmap: fix dirty_bitmap_load
From: Dr. David Alan Gilbert (git) @ 2018-06-15 15:15 UTC
To: qemu-devel, quintela, peterx, groug, vsementsov, xiaoguangrong,
bala24
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
The return code of dirty_bitmap_load_header is obtained but never
checked. Fix this.
The bug was introduced in commit b35ebdf076d697bc
("migration: add postcopy migration of dirty bitmaps") along with the
whole function.
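The fix is the standard error-propagation idiom, sketched here:

  ret = dirty_bitmap_load_header(f, &s);
  if (ret < 0) {
      return ret;  /* propagate the failure instead of acting on stale state */
  }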
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20180530112424.204835-1-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/block-dirty-bitmap.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/migration/block-dirty-bitmap.c b/migration/block-dirty-bitmap.c
index eeccaff34b..3bafbbdc4c 100644
--- a/migration/block-dirty-bitmap.c
+++ b/migration/block-dirty-bitmap.c
@@ -672,6 +672,9 @@ static int dirty_bitmap_load(QEMUFile *f, void *opaque, int version_id)
do {
ret = dirty_bitmap_load_header(f, &s);
+ if (ret < 0) {
+ return ret;
+ }
if (s.flags & DIRTY_BITMAP_MIG_FLAG_START) {
ret = dirty_bitmap_load_start(f, &s);
--
2.17.1
* [Qemu-devel] [PULL 05/10] migration: fix counting xbzrle cache_miss_rate
From: Dr. David Alan Gilbert (git) @ 2018-06-15 15:15 UTC
To: qemu-devel, quintela, peterx, groug, vsementsov, xiaoguangrong,
bala24
From: Xiao Guangrong <xiaoguangrong@tencent.com>
Sync up xbzrle_cache_miss_prev only after the migration iteration has
gone forward; otherwise cache misses accumulated while the iteration
count is unchanged get dropped from the next rate calculation.
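Sketched against the hunk below, the corrected accounting moves the
snapshot inside the conditional, so the numerator delta and the
denominator delta cover the same window:

  if (rs->iterations_prev != rs->iterations) {
      xbzrle_counters.cache_miss_rate =
          (double)(xbzrle_counters.cache_miss - rs->xbzrle_cache_miss_prev) /
          (rs->iterations - rs->iterations_prev);
      rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss;  /* moved here */
  }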
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Message-Id: <20180604095520.8563-4-xiaoguangrong@tencent.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/ram.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/migration/ram.c b/migration/ram.c
index e0d19305ee..d273a19699 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1200,9 +1200,9 @@ static void migration_bitmap_sync(RAMState *rs)
(double)(xbzrle_counters.cache_miss -
rs->xbzrle_cache_miss_prev) /
(rs->iterations - rs->iterations_prev);
+ rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss;
}
rs->iterations_prev = rs->iterations;
- rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss;
}
/* reset period counters */
--
2.17.1
* [Qemu-devel] [PULL 06/10] migration: introduce migration_update_rates
From: Dr. David Alan Gilbert (git) @ 2018-06-15 15:15 UTC
To: qemu-devel, quintela, peterx, groug, vsementsov, xiaoguangrong,
bala24
From: Xiao Guangrong <xiaoguangrong@tencent.com>
This slightly cleans the code up; no logic is changed.
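After the refactor the call site in migration_bitmap_sync() reduces to
(per the hunks below):

  migration_update_rates(rs, end_time);
  rs->iterations_prev = rs->iterations;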
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Message-Id: <20180604095520.8563-5-xiaoguangrong@tencent.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/ram.c | 35 ++++++++++++++++++++++-------------
1 file changed, 22 insertions(+), 13 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index d273a19699..77071a43ed 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1141,6 +1141,25 @@ uint64_t ram_pagesize_summary(void)
return summary;
}
+static void migration_update_rates(RAMState *rs, int64_t end_time)
+{
+ uint64_t iter_count = rs->iterations - rs->iterations_prev;
+
+ /* calculate period counters */
+ ram_counters.dirty_pages_rate = rs->num_dirty_pages_period * 1000
+ / (end_time - rs->time_last_bitmap_sync);
+
+ if (!iter_count) {
+ return;
+ }
+
+ if (migrate_use_xbzrle()) {
+ xbzrle_counters.cache_miss_rate = (double)(xbzrle_counters.cache_miss -
+ rs->xbzrle_cache_miss_prev) / iter_count;
+ rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss;
+ }
+}
+
static void migration_bitmap_sync(RAMState *rs)
{
RAMBlock *block;
@@ -1170,9 +1189,6 @@ static void migration_bitmap_sync(RAMState *rs)
/* more than 1 second = 1000 milliseconds */
if (end_time > rs->time_last_bitmap_sync + 1000) {
- /* calculate period counters */
- ram_counters.dirty_pages_rate = rs->num_dirty_pages_period * 1000
- / (end_time - rs->time_last_bitmap_sync);
bytes_xfer_now = ram_counters.transferred;
/* During block migration the auto-converge logic incorrectly detects
@@ -1194,16 +1210,9 @@ static void migration_bitmap_sync(RAMState *rs)
}
}
- if (migrate_use_xbzrle()) {
- if (rs->iterations_prev != rs->iterations) {
- xbzrle_counters.cache_miss_rate =
- (double)(xbzrle_counters.cache_miss -
- rs->xbzrle_cache_miss_prev) /
- (rs->iterations - rs->iterations_prev);
- rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss;
- }
- rs->iterations_prev = rs->iterations;
- }
+ migration_update_rates(rs, end_time);
+
+ rs->iterations_prev = rs->iterations;
/* reset period counters */
rs->time_last_bitmap_sync = end_time;
--
2.17.1
* [Qemu-devel] [PULL 07/10] migration/postcopy: Add max-postcopy-bandwidth parameter
From: Dr. David Alan Gilbert (git) @ 2018-06-15 15:15 UTC
To: qemu-devel, quintela, peterx, groug, vsementsov, xiaoguangrong,
bala24
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Limit the background transfer bandwidth during the postcopy
phase to the value set by this new parameter. The default, 0,
corresponds to the existing behaviour, which is unlimited bandwidth.
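A usage sketch (the bandwidth value is an arbitrary example). From HMP:

  (qemu) migrate_set_parameter max-postcopy-bandwidth 4G

or via QMP, where the value is given in bytes per second:

  { "execute": "migrate-set-parameters",
    "arguments": { "max-postcopy-bandwidth": 4294967296 } }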
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20180613102642.23995-2-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
hmp.c | 7 +++++++
migration/migration.c | 35 ++++++++++++++++++++++++++++++++++-
qapi/migration.json | 19 ++++++++++++++++---
3 files changed, 57 insertions(+), 4 deletions(-)
diff --git a/hmp.c b/hmp.c
index ef93f4878b..f40d8279cf 100644
--- a/hmp.c
+++ b/hmp.c
@@ -370,6 +370,9 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
monitor_printf(mon, "%s: %" PRIu64 "\n",
MigrationParameter_str(MIGRATION_PARAMETER_XBZRLE_CACHE_SIZE),
params->xbzrle_cache_size);
+ monitor_printf(mon, "%s: %" PRIu64 "\n",
+ MigrationParameter_str(MIGRATION_PARAMETER_MAX_POSTCOPY_BANDWIDTH),
+ params->max_postcopy_bandwidth);
}
qapi_free_MigrationParameters(params);
@@ -1676,6 +1679,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
}
p->xbzrle_cache_size = cache_size;
break;
+ case MIGRATION_PARAMETER_MAX_POSTCOPY_BANDWIDTH:
+ p->has_max_postcopy_bandwidth = true;
+ visit_type_size(v, param, &p->max_postcopy_bandwidth, &err);
+ break;
default:
assert(0);
}
diff --git a/migration/migration.c b/migration/migration.c
index 1e99ec9b7e..3a50d4c35c 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -82,6 +82,11 @@
#define DEFAULT_MIGRATE_MULTIFD_CHANNELS 2
#define DEFAULT_MIGRATE_MULTIFD_PAGE_COUNT 16
+/* Background transfer rate for postcopy, 0 means unlimited, note
+ * that page requests can still exceed this limit.
+ */
+#define DEFAULT_MIGRATE_MAX_POSTCOPY_BANDWIDTH 0
+
static NotifierList migration_state_notifiers =
NOTIFIER_LIST_INITIALIZER(migration_state_notifiers);
@@ -659,6 +664,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
params->x_multifd_page_count = s->parameters.x_multifd_page_count;
params->has_xbzrle_cache_size = true;
params->xbzrle_cache_size = s->parameters.xbzrle_cache_size;
+ params->has_max_postcopy_bandwidth = true;
+ params->max_postcopy_bandwidth = s->parameters.max_postcopy_bandwidth;
return params;
}
@@ -1066,6 +1073,9 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
if (params->has_xbzrle_cache_size) {
dest->xbzrle_cache_size = params->xbzrle_cache_size;
}
+ if (params->has_max_postcopy_bandwidth) {
+ dest->max_postcopy_bandwidth = params->max_postcopy_bandwidth;
+ }
}
static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
@@ -1138,6 +1148,9 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
s->parameters.xbzrle_cache_size = params->xbzrle_cache_size;
xbzrle_cache_resize(params->xbzrle_cache_size, errp);
}
+ if (params->has_max_postcopy_bandwidth) {
+ s->parameters.max_postcopy_bandwidth = params->max_postcopy_bandwidth;
+ }
}
void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
@@ -1887,6 +1900,16 @@ int64_t migrate_xbzrle_cache_size(void)
return s->parameters.xbzrle_cache_size;
}
+static int64_t migrate_max_postcopy_bandwidth(void)
+{
+ MigrationState *s;
+
+ s = migrate_get_current();
+
+ return s->parameters.max_postcopy_bandwidth;
+}
+
+
bool migrate_use_block(void)
{
MigrationState *s;
@@ -2226,6 +2249,7 @@ static int postcopy_start(MigrationState *ms)
QIOChannelBuffer *bioc;
QEMUFile *fb;
int64_t time_at_stop = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+ int64_t bandwidth = migrate_max_postcopy_bandwidth();
bool restart_block = false;
int cur_state = MIGRATION_STATUS_ACTIVE;
if (!migrate_pause_before_switchover()) {
@@ -2280,7 +2304,12 @@ static int postcopy_start(MigrationState *ms)
* will notice we're in POSTCOPY_ACTIVE and not actually
* wrap their state up here
*/
- qemu_file_set_rate_limit(ms->to_dst_file, INT64_MAX);
+ /* 0 max-postcopy-bandwidth means unlimited */
+ if (!bandwidth) {
+ qemu_file_set_rate_limit(ms->to_dst_file, INT64_MAX);
+ } else {
+ qemu_file_set_rate_limit(ms->to_dst_file, bandwidth / XFER_LIMIT_RATIO);
+ }
if (migrate_postcopy_ram()) {
/* Ping just for debugging, helps line traces up */
qemu_savevm_send_ping(ms->to_dst_file, 2);
@@ -3042,6 +3071,9 @@ static Property migration_properties[] = {
DEFINE_PROP_SIZE("xbzrle-cache-size", MigrationState,
parameters.xbzrle_cache_size,
DEFAULT_MIGRATE_XBZRLE_CACHE_SIZE),
+ DEFINE_PROP_SIZE("max-postcopy-bandwidth", MigrationState,
+ parameters.max_postcopy_bandwidth,
+ DEFAULT_MIGRATE_MAX_POSTCOPY_BANDWIDTH),
/* Migration capabilities */
DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
@@ -3110,6 +3142,7 @@ static void migration_instance_init(Object *obj)
params->has_x_multifd_channels = true;
params->has_x_multifd_page_count = true;
params->has_xbzrle_cache_size = true;
+ params->has_max_postcopy_bandwidth = true;
qemu_sem_init(&ms->postcopy_pause_sem, 0);
qemu_sem_init(&ms->postcopy_pause_rp_sem, 0);
diff --git a/qapi/migration.json b/qapi/migration.json
index f7e10ee90f..1b4c1db670 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -517,6 +517,9 @@
# and a power of 2
# (Since 2.11)
#
+# @max-postcopy-bandwidth: Background transfer bandwidth during postcopy.
+# Defaults to 0 (unlimited). In bytes per second.
+# (Since 3.0)
# Since: 2.4
##
{ 'enum': 'MigrationParameter',
@@ -525,7 +528,7 @@
'tls-creds', 'tls-hostname', 'max-bandwidth',
'downtime-limit', 'x-checkpoint-delay', 'block-incremental',
'x-multifd-channels', 'x-multifd-page-count',
- 'xbzrle-cache-size' ] }
+ 'xbzrle-cache-size', 'max-postcopy-bandwidth' ] }
##
# @MigrateSetParameters:
@@ -593,6 +596,10 @@
# needs to be a multiple of the target page size
# and a power of 2
# (Since 2.11)
+#
+# @max-postcopy-bandwidth: Background transfer bandwidth during postcopy.
+# Defaults to 0 (unlimited). In bytes per second.
+# (Since 3.0)
# Since: 2.4
##
# TODO either fuse back into MigrationParameters, or make
@@ -611,7 +618,8 @@
'*block-incremental': 'bool',
'*x-multifd-channels': 'int',
'*x-multifd-page-count': 'int',
- '*xbzrle-cache-size': 'size' } }
+ '*xbzrle-cache-size': 'size',
+ '*max-postcopy-bandwidth': 'size' } }
##
# @migrate-set-parameters:
@@ -694,6 +702,10 @@
# needs to be a multiple of the target page size
# and a power of 2
# (Since 2.11)
+#
+# @max-postcopy-bandwidth: Background transfer bandwidth during postcopy.
+# Defaults to 0 (unlimited). In bytes per second.
+# (Since 3.0)
# Since: 2.4
##
{ 'struct': 'MigrationParameters',
@@ -710,7 +722,8 @@
'*block-incremental': 'bool' ,
'*x-multifd-channels': 'uint8',
'*x-multifd-page-count': 'uint32',
- '*xbzrle-cache-size': 'size' } }
+ '*xbzrle-cache-size': 'size',
+ '*max-postcopy-bandwidth': 'size' } }
##
# @query-migrate-parameters:
--
2.17.1
* [Qemu-devel] [PULL 08/10] migration: Wake rate limiting for urgent requests
From: Dr. David Alan Gilbert (git) @ 2018-06-15 15:15 UTC
To: qemu-devel, quintela, peterx, groug, vsementsov, xiaoguangrong,
bala24
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Rate limiting sleeps the migration thread for a while when it runs
out of bandwidth; but sometimes we want to wake up to get on with
something more urgent (like a postcopy request). Here we use
a semaphore with a timedwait instead of a simple sleep; incrementing
the semaphore will wake it up sooner. Anything that consumes
these urgent events must decrement the semaphore.
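The heart of the change, sketched from the hunk below: instead of
g_usleep(), the thread blocks on the semaphore with a timeout, so a
post from migration_make_urgent_request() ends the sleep early:

  int ms = s->iteration_start_time + BUFFER_DELAY - current_time;
  if (qemu_sem_timedwait(&s->rate_limit_sem, ms) == 0) {
      /* Something urgent posted the semaphore; hand the token back so
       * the consumer can account for it, and stop sleeping early. */
      qemu_sem_post(&s->rate_limit_sem);
      urgent = true;
  }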
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20180613102642.23995-3-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/migration.c | 35 +++++++++++++++++++++++++++++++----
migration/migration.h | 8 ++++++++
migration/trace-events | 2 ++
3 files changed, 41 insertions(+), 4 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 3a50d4c35c..108c3d7142 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2852,6 +2852,16 @@ static void migration_iteration_finish(MigrationState *s)
qemu_mutex_unlock_iothread();
}
+void migration_make_urgent_request(void)
+{
+ qemu_sem_post(&migrate_get_current()->rate_limit_sem);
+}
+
+void migration_consume_urgent_request(void)
+{
+ qemu_sem_wait(&migrate_get_current()->rate_limit_sem);
+}
+
/*
* Master migration thread on the source VM.
* It drives the migration and pumps the data down the outgoing channel.
@@ -2861,6 +2871,7 @@ static void *migration_thread(void *opaque)
MigrationState *s = opaque;
int64_t setup_start = qemu_clock_get_ms(QEMU_CLOCK_HOST);
MigThrError thr_error;
+ bool urgent = false;
rcu_register_thread();
@@ -2901,7 +2912,7 @@ static void *migration_thread(void *opaque)
s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE) {
int64_t current_time;
- if (!qemu_file_rate_limit(s->to_dst_file)) {
+ if (urgent || !qemu_file_rate_limit(s->to_dst_file)) {
MigIterateState iter_state = migration_iteration_run(s);
if (iter_state == MIG_ITERATE_SKIP) {
continue;
@@ -2932,10 +2943,24 @@ static void *migration_thread(void *opaque)
migration_update_counters(s, current_time);
+ urgent = false;
if (qemu_file_rate_limit(s->to_dst_file)) {
- /* usleep expects microseconds */
- g_usleep((s->iteration_start_time + BUFFER_DELAY -
- current_time) * 1000);
+ /* Wait for a delay to do rate limiting OR
+ * something urgent to post the semaphore.
+ */
+ int ms = s->iteration_start_time + BUFFER_DELAY - current_time;
+ trace_migration_thread_ratelimit_pre(ms);
+ if (qemu_sem_timedwait(&s->rate_limit_sem, ms) == 0) {
+ /* We were woken by one or more urgent things but
+ * the timedwait will have consumed one of them.
+ * The service routine for the urgent wake will decrement
+ * the semaphore itself for each item it consumes,
+ * so add back the one we just consumed.
+ */
+ qemu_sem_post(&s->rate_limit_sem);
+ urgent = true;
+ }
+ trace_migration_thread_ratelimit_post(urgent);
}
}
@@ -3109,6 +3134,7 @@ static void migration_instance_finalize(Object *obj)
qemu_mutex_destroy(&ms->qemu_file_lock);
g_free(params->tls_hostname);
g_free(params->tls_creds);
+ qemu_sem_destroy(&ms->rate_limit_sem);
qemu_sem_destroy(&ms->pause_sem);
qemu_sem_destroy(&ms->postcopy_pause_sem);
qemu_sem_destroy(&ms->postcopy_pause_rp_sem);
@@ -3147,6 +3173,7 @@ static void migration_instance_init(Object *obj)
qemu_sem_init(&ms->postcopy_pause_sem, 0);
qemu_sem_init(&ms->postcopy_pause_rp_sem, 0);
qemu_sem_init(&ms->rp_state.rp_sem, 0);
+ qemu_sem_init(&ms->rate_limit_sem, 0);
qemu_mutex_init(&ms->qemu_file_lock);
}
diff --git a/migration/migration.h b/migration/migration.h
index 31d3ed12dc..64a7b33735 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -121,6 +121,11 @@ struct MigrationState
*/
QemuMutex qemu_file_lock;
+ /*
+ * Used to allow urgent requests to override rate limiting.
+ */
+ QemuSemaphore rate_limit_sem;
+
/* bytes already sent at the beginning of the current iteration */
uint64_t iteration_initial_bytes;
/* time at the start of current iteration */
@@ -287,4 +292,7 @@ void init_dirty_bitmap_incoming_migration(void);
#define qemu_ram_foreach_block \
#warning "Use qemu_ram_foreach_block_migratable in migration code"
+void migration_make_urgent_request(void);
+void migration_consume_urgent_request(void);
+
#endif
diff --git a/migration/trace-events b/migration/trace-events
index 4a768eaaeb..3f67758893 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -108,6 +108,8 @@ migration_return_path_end_before(void) ""
migration_return_path_end_after(int rp_error) "%d"
migration_thread_after_loop(void) ""
migration_thread_file_err(void) ""
+migration_thread_ratelimit_pre(int ms) "%d ms"
+migration_thread_ratelimit_post(int urgent) "urgent: %d"
migration_thread_setup_complete(void) ""
open_return_path_on_source(void) ""
open_return_path_on_source_continue(void) ""
--
2.17.1
* [Qemu-devel] [PULL 09/10] migration/postcopy: Wake rate limit sleep on postcopy request
From: Dr. David Alan Gilbert (git) @ 2018-06-15 15:15 UTC
To: qemu-devel, quintela, peterx, groug, vsementsov, xiaoguangrong,
bala24
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Use the 'urgent request' mechanism added in the previous patch
for entries added to the postcopy request queue for RAM. Ignore
the rate limiting while we have requests.
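In sketch form, the producer/consumer pairing looks like this (both
sides appear in the hunks below):

  /* producer: a postcopy page request arrives */
  qemu_mutex_lock(&rs->src_page_req_mutex);
  QSIMPLEQ_INSERT_TAIL(&rs->src_page_requests, new_entry, next_req);
  migration_make_urgent_request();      /* posts rate_limit_sem */
  qemu_mutex_unlock(&rs->src_page_req_mutex);

  /* consumer: the request is dequeued and sent */
  QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req);
  migration_consume_urgent_request();   /* takes the token back */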
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20180613102642.23995-4-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/ram.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/migration/ram.c b/migration/ram.c
index 77071a43ed..225b201aff 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1547,6 +1547,7 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset)
memory_region_unref(block->mr);
QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req);
g_free(entry);
+ migration_consume_urgent_request();
}
}
qemu_mutex_unlock(&rs->src_page_req_mutex);
@@ -1695,6 +1696,7 @@ int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
memory_region_ref(ramblock->mr);
qemu_mutex_lock(&rs->src_page_req_mutex);
QSIMPLEQ_INSERT_TAIL(&rs->src_page_requests, new_entry, next_req);
+ migration_make_urgent_request();
qemu_mutex_unlock(&rs->src_page_req_mutex);
rcu_read_unlock();
@@ -2643,9 +2645,14 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
t0 = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
i = 0;
- while ((ret = qemu_file_rate_limit(f)) == 0) {
+ while ((ret = qemu_file_rate_limit(f)) == 0 ||
+ !QSIMPLEQ_EMPTY(&rs->src_page_requests)) {
int pages;
+ if (qemu_file_get_error(f)) {
+ break;
+ }
+
pages = ram_find_and_save_block(rs, false);
/* no more pages to sent */
if (pages == 0) {
--
2.17.1
* [Qemu-devel] [PULL 10/10] migration: calculate expected_downtime with ram_bytes_remaining()
From: Dr. David Alan Gilbert (git) @ 2018-06-15 15:15 UTC
To: qemu-devel, quintela, peterx, groug, vsementsov, xiaoguangrong,
bala24
From: Balamuruhan S <bala24@linux.vnet.ibm.com>
The expected_downtime value is not accurate when derived from
dirty_pages_rate * page_size; using ram_bytes_remaining() yields a more
reasonable estimate.
Read the remaining RAM just after the dirty page count has been updated
by migration_bitmap_sync_range() in migration_bitmap_sync(), and reuse
the `remaining` field in ram_counters to hold ram_bytes_remaining()
for calculating expected_downtime.
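The change to the estimate, sketched from the hunks below: rather than
scaling the dirty-page rate by the page size, divide the bytes still to
be sent by the measured bandwidth:

  /* in migration_bitmap_sync(), right after the dirty bitmap sync: */
  ram_counters.remaining = ram_bytes_remaining();

  /* in migration_update_counters(): */
  s->expected_downtime = ram_counters.remaining / bandwidth;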
Reported-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20180612085009.17594-2-bala24@linux.vnet.ibm.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/migration.c | 3 +--
migration/ram.c | 1 +
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 108c3d7142..e1eaa97df4 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2746,8 +2746,7 @@ static void migration_update_counters(MigrationState *s,
* recalculate. 10000 is a small enough number for our purposes
*/
if (ram_counters.dirty_pages_rate && transferred > 10000) {
- s->expected_downtime = ram_counters.dirty_pages_rate *
- qemu_target_page_size() / bandwidth;
+ s->expected_downtime = ram_counters.remaining / bandwidth;
}
qemu_file_reset_rate_limit(s->to_dst_file);
diff --git a/migration/ram.c b/migration/ram.c
index 225b201aff..cd5f55117d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1180,6 +1180,7 @@ static void migration_bitmap_sync(RAMState *rs)
RAMBLOCK_FOREACH_MIGRATABLE(block) {
migration_bitmap_sync_range(rs, block, 0, block->used_length);
}
+ ram_counters.remaining = ram_bytes_remaining();
rcu_read_unlock();
qemu_mutex_unlock(&rs->bitmap_mutex);
--
2.17.1
* Re: [Qemu-devel] [PULL 00/10] migration queue
From: Peter Maydell @ 2018-06-15 18:23 UTC
To: Dr. David Alan Gilbert (git)
Cc: QEMU Developers, Juan Quintela, Peter Xu, Greg Kurz,
Vladimir Sementsov-Ogievskiy, xiaoguangrong, bala24
On 15 June 2018 at 16:15, Dr. David Alan Gilbert (git)
<dgilbert@redhat.com> wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
>
> The following changes since commit 2702c2d3eb74e3908c0c5dbf3a71c8987595a86e:
>
> Merge remote-tracking branch 'remotes/stsquad/tags/pull-travis-updates-140618-1' into staging (2018-06-15 12:49:36 +0100)
>
> are available in the Git repository at:
>
> git://github.com/dagrh/qemu.git tags/pull-migration-20180615a
>
> for you to fetch changes up to 650af8907bd567db914b7ce3a7e9df4c323f4619:
>
> migration: calculate expected_downtime with ram_bytes_remaining() (2018-06-15 14:40:56 +0100)
>
> ----------------------------------------------------------------
> Migration pull 2018-06-15
>
> ----------------------------------------------------------------
Applied, thanks.
-- PMM