* [Qemu-devel] [PULL 1/2] Assume madvise for (no)hugepage works
2015-11-25 14:32 [Qemu-devel] [PULL 0/2] Migration pull request Juan Quintela
@ 2015-11-25 14:32 ` Juan Quintela
2015-11-25 14:32 ` [Qemu-devel] [PULL 2/2] block-migration: limit the memory usage Juan Quintela
2015-11-26 9:43 ` [Qemu-devel] [PULL 0/2] Migration pull request Peter Maydell
2 siblings, 0 replies; 4+ messages in thread
From: Juan Quintela @ 2015-11-25 14:32 UTC (permalink / raw)
To: qemu-devel; +Cc: amit.shah, dgilbert
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
madvise() returns EINVAL for many kinds of failure, but it also
returns EINVAL when the host kernel doesn't have THP enabled.
Postcopy only really cares that THP is off before it detects faults,
and turns it back on afterwards; so we're going to have to assume
that if the madvise fails then the host just doesn't do THP, and we
can carry on with the postcopy.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Tested-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/postcopy-ram.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 22d6b18..3946aa9 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -241,10 +241,7 @@ static int cleanup_range(const char *block_name, void *host_addr,
* We turned off hugepage for the precopy stage with postcopy enabled
* we can turn it back on now.
*/
- if (qemu_madvise(host_addr, length, QEMU_MADV_HUGEPAGE)) {
- error_report("%s HUGEPAGE: %s", __func__, strerror(errno));
- return -1;
- }
+ qemu_madvise(host_addr, length, QEMU_MADV_HUGEPAGE);
/*
* We can also turn off userfault now since we should have all the
@@ -345,10 +342,7 @@ static int nhp_range(const char *block_name, void *host_addr,
* do delete areas of the page, even if THP thinks a hugepage would
* be a good idea, so force hugepages off.
*/
- if (qemu_madvise(host_addr, length, QEMU_MADV_NOHUGEPAGE)) {
- error_report("%s: NOHUGEPAGE: %s", __func__, strerror(errno));
- return -1;
- }
+ qemu_madvise(host_addr, length, QEMU_MADV_NOHUGEPAGE);
return 0;
}
--
2.5.0
* [Qemu-devel] [PULL 2/2] block-migration: limit the memory usage
From: Juan Quintela @ 2015-11-25 14:32 UTC (permalink / raw)
To: qemu-devel; +Cc: amit.shah, dgilbert
From: Wen Congyang <wency@cn.fujitsu.com>
If the migration speed is set to a very large value, block-migration
will try to read all of the data into memory, because
(block_mig_state.submitted + block_mig_state.read_done) * BLOCK_SIZE
overflows and is then always less than the rate limit.
There is no need to read that much data into memory when the rate
limit is very large, so capping the number of in-flight requests
fixes the overflow problem.
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/block.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/migration/block.c b/migration/block.c
index 310e2b3..656f38f 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -36,6 +36,8 @@
#define MAX_IS_ALLOCATED_SEARCH 65536
+#define MAX_INFLIGHT_IO 512
+
//#define DEBUG_BLK_MIGRATION
#ifdef DEBUG_BLK_MIGRATION
@@ -665,7 +667,10 @@ static int block_save_iterate(QEMUFile *f, void *opaque)
blk_mig_lock();
while ((block_mig_state.submitted +
block_mig_state.read_done) * BLOCK_SIZE <
- qemu_file_get_rate_limit(f)) {
+ qemu_file_get_rate_limit(f) &&
+ (block_mig_state.submitted +
+ block_mig_state.read_done) <
+ MAX_INFLIGHT_IO) {
blk_mig_unlock();
if (block_mig_state.bulk_completed == 0) {
/* first finish the bulk phase */
--
2.5.0