* [Qemu-devel] [PATCH v4 0/3] Add bitmap for received pages in postcopy migration
[not found] <CGME20170626083547eucas1p10fc2d64db7b6db0a1bd1b94b56c06ca9@eucas1p1.samsung.com>
@ 2017-06-26 8:35 ` Alexey Perevalov
[not found] ` <CGME20170626083551eucas1p1b0cb811534fae257da120e71d32eab37@eucas1p1.samsung.com>
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Alexey Perevalov @ 2017-06-26 8:35 UTC (permalink / raw)
To: qemu-devel; +Cc: Alexey Perevalov, peterx, i.maximets, quintela, dgilbert
This is the 4th version of
[PATCH v1 0/2] Add bitmap for copied pages in postcopy migration;
the cover message below is carried over from there.
This is a separate patch set; it was derived from
https://www.mail-archive.com/qemu-devel@nongnu.org/msg456004.html
There are several possible use cases:
1. Solving the issue with postcopy live migration and shared memory:
OVS-VSWITCH requires information about copied pages in order to fallocate
newly allocated pages.
2. Calculating vCPU blocktime;
for more details see
https://www.mail-archive.com/qemu-devel@nongnu.org/msg456004.html
3. Recovery after a failure in the middle of postcopy migration
Declarations are placed in two locations, include/migration/migration.h and
migration/postcopy-ram.h, because some functions are required in virtio and
in the public header include/exec/ram_addr.h.
----------------------------------------------------------------
V3 -> V4
- clear_bit instead of ramblock_recv_bitmap_clear in ramblock_recv_bitmap_clear_range, which reduces the number of operations (comment from Juan)
- for postcopy, ramblock_recv_bitmap_set is called after the page has been copied,
only in case of success (comment from David)
- indentation fixes (comment from Juan)
V2 -> V3
- ramblock_recv_map_init call is placed into migration_incoming_get_current,
which looks like the common place for both the precopy and postcopy cases.
- releasing of the received bitmap memory is placed into ram_load_cleanup;
unfortunately, it is called only in the precopy case.
- precopy case and discard ram block case
- function renaming and other minor cleanups
V1 -> V2
- change in terminology s/copied/received/g
- granularity became TARGET_PAGE_SIZE rather than the actual page size of the
ramblock
- movecopiedmap & get_copiedmap_size were removed, until the patch set where
they become necessary
- releasing memory of receivedmap was added into ram_load_cleanup
- new patch "migration: introduce qemu_ufd_copy_ioctl helper"
Patchset is based on Juan's patchset:
[PATCH v2 0/5] Create setup/cleanup methods for migration incoming side
Alexey Perevalov (3):
migration: postcopy_place_page factoring out
migration: introduce qemu_ufd_copy_ioctl helper
migration: add bitmap for received page
include/exec/ram_addr.h | 10 ++++++++
migration/migration.c | 1 +
migration/postcopy-ram.c | 57 ++++++++++++++++++++++++++++---------------
migration/postcopy-ram.h | 4 +--
migration/ram.c | 63 ++++++++++++++++++++++++++++++++++++++++++++----
migration/ram.h | 6 +++++
6 files changed, 115 insertions(+), 26 deletions(-)
--
1.9.1
^ permalink raw reply [flat|nested] 7+ messages in thread
* [Qemu-devel] [PATCH v4 1/3] migration: postcopy_place_page factoring out
[not found] ` <CGME20170626083551eucas1p1b0cb811534fae257da120e71d32eab37@eucas1p1.samsung.com>
@ 2017-06-26 8:35 ` Alexey Perevalov
0 siblings, 0 replies; 7+ messages in thread
From: Alexey Perevalov @ 2017-06-26 8:35 UTC (permalink / raw)
To: qemu-devel; +Cc: Alexey Perevalov, peterx, i.maximets, quintela, dgilbert
We need to mark copied pages as close as possible to the place where they
are tracked. That will be necessary in a further patch.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
---
migration/postcopy-ram.c | 13 +++++++------
migration/postcopy-ram.h | 4 ++--
migration/ram.c | 4 ++--
3 files changed, 11 insertions(+), 10 deletions(-)
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index c8c4500..dae41b5 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -566,9 +566,10 @@ int postcopy_ram_enable_notify(MigrationIncomingState *mis)
* returns 0 on success
*/
int postcopy_place_page(MigrationIncomingState *mis, void *host, void *from,
- size_t pagesize)
+ RAMBlock *rb)
{
struct uffdio_copy copy_struct;
+ size_t pagesize = qemu_ram_pagesize(rb);
copy_struct.dst = (uint64_t)(uintptr_t)host;
copy_struct.src = (uint64_t)(uintptr_t)from;
@@ -597,11 +598,11 @@ int postcopy_place_page(MigrationIncomingState *mis, void *host, void *from,
* returns 0 on success
*/
int postcopy_place_page_zero(MigrationIncomingState *mis, void *host,
- size_t pagesize)
+ RAMBlock *rb)
{
trace_postcopy_place_page_zero(host);
- if (pagesize == getpagesize()) {
+ if (qemu_ram_pagesize(rb) == getpagesize()) {
struct uffdio_zeropage zero_struct;
zero_struct.range.start = (uint64_t)(uintptr_t)host;
zero_struct.range.len = getpagesize();
@@ -631,7 +632,7 @@ int postcopy_place_page_zero(MigrationIncomingState *mis, void *host,
memset(mis->postcopy_tmp_zero_page, '\0', mis->largest_page_size);
}
return postcopy_place_page(mis, host, mis->postcopy_tmp_zero_page,
- pagesize);
+ rb);
}
return 0;
@@ -694,14 +695,14 @@ int postcopy_ram_enable_notify(MigrationIncomingState *mis)
}
int postcopy_place_page(MigrationIncomingState *mis, void *host, void *from,
- size_t pagesize)
+ RAMBlock *rb)
{
assert(0);
return -1;
}
int postcopy_place_page_zero(MigrationIncomingState *mis, void *host,
- size_t pagesize)
+ RAMBlock *rb)
{
assert(0);
return -1;
diff --git a/migration/postcopy-ram.h b/migration/postcopy-ram.h
index 52d51e8..78a3591 100644
--- a/migration/postcopy-ram.h
+++ b/migration/postcopy-ram.h
@@ -72,14 +72,14 @@ void postcopy_discard_send_finish(MigrationState *ms,
* returns 0 on success
*/
int postcopy_place_page(MigrationIncomingState *mis, void *host, void *from,
- size_t pagesize);
+ RAMBlock *rb);
/*
* Place a zero page at (host) atomically
* returns 0 on success
*/
int postcopy_place_page_zero(MigrationIncomingState *mis, void *host,
- size_t pagesize);
+ RAMBlock *rb);
/* The current postcopy state is read/set by postcopy_state_get/set
* which update it atomically.
diff --git a/migration/ram.c b/migration/ram.c
index 8dbdfdb..f50479d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2465,10 +2465,10 @@ static int ram_load_postcopy(QEMUFile *f)
if (all_zero) {
ret = postcopy_place_page_zero(mis, place_dest,
- block->page_size);
+ block);
} else {
ret = postcopy_place_page(mis, place_dest,
- place_source, block->page_size);
+ place_source, block);
}
}
if (!ret) {
--
1.9.1
* [Qemu-devel] [PATCH v4 2/3] migration: introduce qemu_ufd_copy_ioctl helper
[not found] ` <CGME20170626083552eucas1p24efde91b6a4544fb77927f3b93d83cdf@eucas1p2.samsung.com>
@ 2017-06-26 8:35 ` Alexey Perevalov
0 siblings, 0 replies; 7+ messages in thread
From: Alexey Perevalov @ 2017-06-26 8:35 UTC (permalink / raw)
To: qemu-devel; +Cc: Alexey Perevalov, peterx, i.maximets, quintela, dgilbert
Just to place auxiliary operations inside a helper;
auxiliary operations like tracking received pages and
notifying about the copy operation come in further patches.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
---
migration/postcopy-ram.c | 34 +++++++++++++++++++++-------------
1 file changed, 21 insertions(+), 13 deletions(-)
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index dae41b5..293db97 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -561,6 +561,25 @@ int postcopy_ram_enable_notify(MigrationIncomingState *mis)
return 0;
}
+static int qemu_ufd_copy_ioctl(int userfault_fd, void *host_addr,
+ void *from_addr, uint64_t pagesize)
+{
+ if (from_addr) {
+ struct uffdio_copy copy_struct;
+ copy_struct.dst = (uint64_t)(uintptr_t)host_addr;
+ copy_struct.src = (uint64_t)(uintptr_t)from_addr;
+ copy_struct.len = pagesize;
+ copy_struct.mode = 0;
+ return ioctl(userfault_fd, UFFDIO_COPY, &copy_struct);
+ } else {
+ struct uffdio_zeropage zero_struct;
+ zero_struct.range.start = (uint64_t)(uintptr_t)host_addr;
+ zero_struct.range.len = pagesize;
+ zero_struct.mode = 0;
+ return ioctl(userfault_fd, UFFDIO_ZEROPAGE, &zero_struct);
+ }
+}
+
/*
* Place a host page (from) at (host) atomically
* returns 0 on success
@@ -568,20 +587,14 @@ int postcopy_ram_enable_notify(MigrationIncomingState *mis)
int postcopy_place_page(MigrationIncomingState *mis, void *host, void *from,
RAMBlock *rb)
{
- struct uffdio_copy copy_struct;
size_t pagesize = qemu_ram_pagesize(rb);
- copy_struct.dst = (uint64_t)(uintptr_t)host;
- copy_struct.src = (uint64_t)(uintptr_t)from;
- copy_struct.len = pagesize;
- copy_struct.mode = 0;
-
/* copy also acks to the kernel waking the stalled thread up
* TODO: We can inhibit that ack and only do it if it was requested
* which would be slightly cheaper, but we'd have to be careful
* of the order of updating our page state.
*/
- if (ioctl(mis->userfault_fd, UFFDIO_COPY, &copy_struct)) {
+ if (qemu_ufd_copy_ioctl(mis->userfault_fd, host, from, pagesize)) {
int e = errno;
error_report("%s: %s copy host: %p from: %p (size: %zd)",
__func__, strerror(e), host, from, pagesize);
@@ -603,12 +616,7 @@ int postcopy_place_page_zero(MigrationIncomingState *mis, void *host,
trace_postcopy_place_page_zero(host);
if (qemu_ram_pagesize(rb) == getpagesize()) {
- struct uffdio_zeropage zero_struct;
- zero_struct.range.start = (uint64_t)(uintptr_t)host;
- zero_struct.range.len = getpagesize();
- zero_struct.mode = 0;
-
- if (ioctl(mis->userfault_fd, UFFDIO_ZEROPAGE, &zero_struct)) {
+ if (qemu_ufd_copy_ioctl(mis->userfault_fd, host, NULL, getpagesize())) {
int e = errno;
error_report("%s: %s zero host: %p",
__func__, strerror(e), host);
--
1.9.1
* [Qemu-devel] [PATCH v4 3/3] migration: add bitmap for received page
[not found] ` <CGME20170626083552eucas1p2db6a8bd9188a95d72aad188d6b633dd9@eucas1p2.samsung.com>
@ 2017-06-26 8:35 ` Alexey Perevalov
2017-06-26 18:52 ` Dr. David Alan Gilbert
2017-06-27 4:03 ` Peter Xu
0 siblings, 2 replies; 7+ messages in thread
From: Alexey Perevalov @ 2017-06-26 8:35 UTC (permalink / raw)
To: qemu-devel; +Cc: Alexey Perevalov, peterx, i.maximets, quintela, dgilbert
This patch adds the ability to track already received
pages; it's necessary for calculating vCPU block time in
the postcopy migration feature, and maybe for recovery after
a postcopy migration failure.
It's also necessary to solve the shared memory issue in
postcopy live migration. Information about received pages
will be transferred to the software virtual bridge
(e.g. OVS-VSWITCHD) to avoid fallocate (unmap) for
already received pages. The fallocate syscall is required for
remapped shared memory, because remapping itself blocks
ioctl(UFFDIO_COPY); the ioctl in this case will fail with an
EEXIST error (the struct page exists after remap).
The bitmap is placed into RAMBlock alongside the other
postcopy/precopy related bitmaps.
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
---
include/exec/ram_addr.h | 10 ++++++++
migration/migration.c | 1 +
migration/postcopy-ram.c | 20 ++++++++++++----
migration/ram.c | 59 +++++++++++++++++++++++++++++++++++++++++++++---
migration/ram.h | 6 +++++
5 files changed, 88 insertions(+), 8 deletions(-)
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 140efa8..4170656 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -47,6 +47,8 @@ struct RAMBlock {
* of the postcopy phase
*/
unsigned long *unsentmap;
+ /* bitmap of already received pages in postcopy */
+ unsigned long *receivedmap;
};
static inline bool offset_in_ramblock(RAMBlock *b, ram_addr_t offset)
@@ -60,6 +62,14 @@ static inline void *ramblock_ptr(RAMBlock *block, ram_addr_t offset)
return (char *)block->host + offset;
}
+static inline unsigned long int ramblock_recv_bitmap_offset(void *host_addr,
+ RAMBlock *rb)
+{
+ uint64_t host_addr_offset =
+ (uint64_t)(uintptr_t)(host_addr - (void *)rb->host);
+ return host_addr_offset >> TARGET_PAGE_BITS;
+}
+
long qemu_getrampagesize(void);
unsigned long last_ram_page(void);
RAMBlock *qemu_ram_alloc_from_file(ram_addr_t size, MemoryRegion *mr,
diff --git a/migration/migration.c b/migration/migration.c
index 71e38bc..53fbd41 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -143,6 +143,7 @@ MigrationIncomingState *migration_incoming_get_current(void)
qemu_mutex_init(&mis_current.rp_mutex);
qemu_event_init(&mis_current.main_thread_load_event, false);
once = true;
+ ramblock_recv_map_init();
}
return &mis_current;
}
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 293db97..372c691 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -562,22 +562,31 @@ int postcopy_ram_enable_notify(MigrationIncomingState *mis)
}
static int qemu_ufd_copy_ioctl(int userfault_fd, void *host_addr,
- void *from_addr, uint64_t pagesize)
+ void *from_addr, uint64_t pagesize, RAMBlock *rb)
{
+ int ret;
if (from_addr) {
struct uffdio_copy copy_struct;
copy_struct.dst = (uint64_t)(uintptr_t)host_addr;
copy_struct.src = (uint64_t)(uintptr_t)from_addr;
copy_struct.len = pagesize;
copy_struct.mode = 0;
- return ioctl(userfault_fd, UFFDIO_COPY, &copy_struct);
+ ret = ioctl(userfault_fd, UFFDIO_COPY, &copy_struct);
} else {
struct uffdio_zeropage zero_struct;
zero_struct.range.start = (uint64_t)(uintptr_t)host_addr;
zero_struct.range.len = pagesize;
zero_struct.mode = 0;
- return ioctl(userfault_fd, UFFDIO_ZEROPAGE, &zero_struct);
+ ret = ioctl(userfault_fd, UFFDIO_ZEROPAGE, &zero_struct);
+ }
+ /* the received page isn't specific to blocktime calculation,
+ * it's a more general entity, so keep it here;
+ * however the gap between the two following operations could be high,
+ * and in that case blocktime for such a small interval will be lost */
+ if (!ret) {
+ ramblock_recv_bitmap_set(host_addr, rb);
}
+ return ret;
}
/*
@@ -594,7 +603,7 @@ int postcopy_place_page(MigrationIncomingState *mis, void *host, void *from,
* which would be slightly cheaper, but we'd have to be careful
* of the order of updating our page state.
*/
- if (qemu_ufd_copy_ioctl(mis->userfault_fd, host, from, pagesize)) {
+ if (qemu_ufd_copy_ioctl(mis->userfault_fd, host, from, pagesize, rb)) {
int e = errno;
error_report("%s: %s copy host: %p from: %p (size: %zd)",
__func__, strerror(e), host, from, pagesize);
@@ -616,7 +625,8 @@ int postcopy_place_page_zero(MigrationIncomingState *mis, void *host,
trace_postcopy_place_page_zero(host);
if (qemu_ram_pagesize(rb) == getpagesize()) {
- if (qemu_ufd_copy_ioctl(mis->userfault_fd, host, NULL, getpagesize())) {
+ if (qemu_ufd_copy_ioctl(mis->userfault_fd, host, NULL, getpagesize(),
+ rb)) {
int e = errno;
error_report("%s: %s zero host: %p",
__func__, strerror(e), host);
diff --git a/migration/ram.c b/migration/ram.c
index f50479d..37299ed 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -151,6 +151,34 @@ out:
return ret;
}
+void ramblock_recv_map_init(void)
+{
+ RAMBlock *rb;
+
+ RAMBLOCK_FOREACH(rb) {
+ unsigned long pages;
+ pages = rb->max_length >> TARGET_PAGE_BITS;
+ assert(!rb->receivedmap);
+ rb->receivedmap = bitmap_new(pages);
+ }
+}
+
+int ramblock_recv_bitmap_test(void *host_addr, RAMBlock *rb)
+{
+ return test_bit(ramblock_recv_bitmap_offset(host_addr, rb),
+ rb->receivedmap);
+}
+
+void ramblock_recv_bitmap_set(void *host_addr, RAMBlock *rb)
+{
+ set_bit_atomic(ramblock_recv_bitmap_offset(host_addr, rb), rb->receivedmap);
+}
+
+void ramblock_recv_bitmap_clear(void *host_addr, RAMBlock *rb)
+{
+ clear_bit(ramblock_recv_bitmap_offset(host_addr, rb), rb->receivedmap);
+}
+
/*
* An outstanding page request, on the source, having been received
* and queued
@@ -1773,6 +1801,18 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
return ret;
}
+static void ramblock_recv_bitmap_clear_range(uint64_t start, size_t length,
+ RAMBlock *rb)
+{
+ int i, range_count;
+ long nr_bit = start >> TARGET_PAGE_BITS;
+ range_count = length >> TARGET_PAGE_BITS;
+ for (i = 0; i < range_count; i++) {
+ clear_bit(nr_bit, rb->receivedmap);
+ nr_bit += 1;
+ }
+}
+
/**
* ram_discard_range: discard dirtied pages at the beginning of postcopy
*
@@ -1797,6 +1837,7 @@ int ram_discard_range(const char *rbname, uint64_t start, size_t length)
goto err;
}
+ ramblock_recv_bitmap_clear_range(start, length, rb);
ret = ram_block_discard_range(rb, start, length);
err:
@@ -2324,8 +2365,14 @@ static int ram_load_setup(QEMUFile *f, void *opaque)
static int ram_load_cleanup(void *opaque)
{
+ RAMBlock *rb;
xbzrle_load_cleanup();
compress_threads_load_cleanup();
+
+ RAMBLOCK_FOREACH(rb) {
+ g_free(rb->receivedmap);
+ rb->receivedmap = NULL;
+ }
return 0;
}
@@ -2513,6 +2560,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
ram_addr_t addr, total_ram_bytes;
void *host = NULL;
uint8_t ch;
+ RAMBlock *rb = NULL;
addr = qemu_get_be64(f);
flags = addr & ~TARGET_PAGE_MASK;
@@ -2520,15 +2568,15 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
if (flags & (RAM_SAVE_FLAG_ZERO | RAM_SAVE_FLAG_PAGE |
RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
- RAMBlock *block = ram_block_from_stream(f, flags);
+ rb = ram_block_from_stream(f, flags);
- host = host_from_ram_block_offset(block, addr);
+ host = host_from_ram_block_offset(rb, addr);
if (!host) {
error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
ret = -EINVAL;
break;
}
- trace_ram_load_loop(block->idstr, (uint64_t)addr, flags, host);
+ trace_ram_load_loop(rb->idstr, (uint64_t)addr, flags, host);
}
switch (flags & ~RAM_SAVE_FLAG_CONTINUE) {
@@ -2582,10 +2630,12 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
case RAM_SAVE_FLAG_ZERO:
ch = qemu_get_byte(f);
+ ramblock_recv_bitmap_set(host, rb);
ram_handle_compressed(host, ch, TARGET_PAGE_SIZE);
break;
case RAM_SAVE_FLAG_PAGE:
+ ramblock_recv_bitmap_set(host, rb);
qemu_get_buffer(f, host, TARGET_PAGE_SIZE);
break;
@@ -2596,10 +2646,13 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
ret = -EINVAL;
break;
}
+
+ ramblock_recv_bitmap_set(host, rb);
decompress_data_with_multi_threads(f, host, len);
break;
case RAM_SAVE_FLAG_XBZRLE:
+ ramblock_recv_bitmap_set(host, rb);
if (load_xbzrle(f, addr, host) < 0) {
error_report("Failed to decompress XBZRLE page at "
RAM_ADDR_FMT, addr);
diff --git a/migration/ram.h b/migration/ram.h
index c081fde..98d68df 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -52,4 +52,10 @@ int ram_discard_range(const char *block_name, uint64_t start, size_t length);
int ram_postcopy_incoming_init(MigrationIncomingState *mis);
void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
+
+void ramblock_recv_map_init(void);
+int ramblock_recv_bitmap_test(void *host_addr, RAMBlock *rb);
+void ramblock_recv_bitmap_set(void *host_addr, RAMBlock *rb);
+void ramblock_recv_bitmap_clear(void *host_addr, RAMBlock *rb);
+
#endif
--
1.9.1
* Re: [Qemu-devel] [PATCH v4 3/3] migration: add bitmap for received page
2017-06-26 8:35 ` [Qemu-devel] [PATCH v4 3/3] migration: add bitmap for received page Alexey Perevalov
@ 2017-06-26 18:52 ` Dr. David Alan Gilbert
2017-06-27 4:03 ` Peter Xu
1 sibling, 0 replies; 7+ messages in thread
From: Dr. David Alan Gilbert @ 2017-06-26 18:52 UTC (permalink / raw)
To: Alexey Perevalov; +Cc: qemu-devel, peterx, i.maximets, quintela
* Alexey Perevalov (a.perevalov@samsung.com) wrote:
> This patch adds ability to track down already received
> pages, it's necessary for calculation vCPU block time in
> postcopy migration feature, maybe for restore after
> postcopy migration failure.
> Also it's necessary to solve shared memory issue in
> postcopy livemigration. Information about received pages
> will be transferred to the software virtual bridge
> (e.g. OVS-VSWITCHD), to avoid fallocate (unmap) for
> already received pages. fallocate syscall is required for
> remmaped shared memory, due to remmaping itself blocks
> ioctl(UFFDIO_COPY, ioctl in this case will end with EEXIT
> error (struct page is exists after remmap).
>
> Bitmap is placed into RAMBlock as another postcopy/precopy
> related bitmaps.
>
> Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
> ---
> include/exec/ram_addr.h | 10 ++++++++
> migration/migration.c | 1 +
> migration/postcopy-ram.c | 20 ++++++++++++----
> migration/ram.c | 59 +++++++++++++++++++++++++++++++++++++++++++++---
> migration/ram.h | 6 +++++
> 5 files changed, 88 insertions(+), 8 deletions(-)
>
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index 140efa8..4170656 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -47,6 +47,8 @@ struct RAMBlock {
> * of the postcopy phase
> */
> unsigned long *unsentmap;
> + /* bitmap of already received pages in postcopy */
> + unsigned long *receivedmap;
> };
>
> static inline bool offset_in_ramblock(RAMBlock *b, ram_addr_t offset)
> @@ -60,6 +62,14 @@ static inline void *ramblock_ptr(RAMBlock *block, ram_addr_t offset)
> return (char *)block->host + offset;
> }
>
> +static inline unsigned long int ramblock_recv_bitmap_offset(void *host_addr,
> + RAMBlock *rb)
> +{
> + uint64_t host_addr_offset =
> + (uint64_t)(uintptr_t)(host_addr - (void *)rb->host);
> + return host_addr_offset >> TARGET_PAGE_BITS;
> +}
> +
> long qemu_getrampagesize(void);
> unsigned long last_ram_page(void);
> RAMBlock *qemu_ram_alloc_from_file(ram_addr_t size, MemoryRegion *mr,
> diff --git a/migration/migration.c b/migration/migration.c
> index 71e38bc..53fbd41 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -143,6 +143,7 @@ MigrationIncomingState *migration_incoming_get_current(void)
> qemu_mutex_init(&mis_current.rp_mutex);
> qemu_event_init(&mis_current.main_thread_load_event, false);
> once = true;
> + ramblock_recv_map_init();
> }
> return &mis_current;
> }
> diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
> index 293db97..372c691 100644
> --- a/migration/postcopy-ram.c
> +++ b/migration/postcopy-ram.c
> @@ -562,22 +562,31 @@ int postcopy_ram_enable_notify(MigrationIncomingState *mis)
> }
>
> static int qemu_ufd_copy_ioctl(int userfault_fd, void *host_addr,
> - void *from_addr, uint64_t pagesize)
> + void *from_addr, uint64_t pagesize, RAMBlock *rb)
> {
> + int ret;
> if (from_addr) {
> struct uffdio_copy copy_struct;
> copy_struct.dst = (uint64_t)(uintptr_t)host_addr;
> copy_struct.src = (uint64_t)(uintptr_t)from_addr;
> copy_struct.len = pagesize;
> copy_struct.mode = 0;
> - return ioctl(userfault_fd, UFFDIO_COPY, &copy_struct);
> + ret = ioctl(userfault_fd, UFFDIO_COPY, &copy_struct);
> } else {
> struct uffdio_zeropage zero_struct;
> zero_struct.range.start = (uint64_t)(uintptr_t)host_addr;
> zero_struct.range.len = pagesize;
> zero_struct.mode = 0;
> - return ioctl(userfault_fd, UFFDIO_ZEROPAGE, &zero_struct);
> + ret = ioctl(userfault_fd, UFFDIO_ZEROPAGE, &zero_struct);
> + }
> + /* received page isn't feature of blocktime calculation,
> + * it's more general entity, so keep it here,
> + * but gup betwean two following operation could be high,
> + * and in this case blocktime for such small interval will be lost */
> + if (!ret) {
> + ramblock_recv_bitmap_set(host_addr, rb);
> }
> + return ret;
> }
>
> /*
> @@ -594,7 +603,7 @@ int postcopy_place_page(MigrationIncomingState *mis, void *host, void *from,
> * which would be slightly cheaper, but we'd have to be careful
> * of the order of updating our page state.
> */
> - if (qemu_ufd_copy_ioctl(mis->userfault_fd, host, from, pagesize)) {
> + if (qemu_ufd_copy_ioctl(mis->userfault_fd, host, from, pagesize, rb)) {
> int e = errno;
> error_report("%s: %s copy host: %p from: %p (size: %zd)",
> __func__, strerror(e), host, from, pagesize);
> @@ -616,7 +625,8 @@ int postcopy_place_page_zero(MigrationIncomingState *mis, void *host,
> trace_postcopy_place_page_zero(host);
>
> if (qemu_ram_pagesize(rb) == getpagesize()) {
> - if (qemu_ufd_copy_ioctl(mis->userfault_fd, host, NULL, getpagesize())) {
> + if (qemu_ufd_copy_ioctl(mis->userfault_fd, host, NULL, getpagesize(),
> + rb)) {
> int e = errno;
> error_report("%s: %s zero host: %p",
> __func__, strerror(e), host);
> diff --git a/migration/ram.c b/migration/ram.c
> index f50479d..37299ed 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -151,6 +151,34 @@ out:
> return ret;
> }
>
> +void ramblock_recv_map_init(void)
> +{
> + RAMBlock *rb;
> +
> + RAMBLOCK_FOREACH(rb) {
> + unsigned long pages;
> + pages = rb->max_length >> TARGET_PAGE_BITS;
> + assert(!rb->receivedmap);
> + rb->receivedmap = bitmap_new(pages);
> + }
> +}
> +
> +int ramblock_recv_bitmap_test(void *host_addr, RAMBlock *rb)
> +{
> + return test_bit(ramblock_recv_bitmap_offset(host_addr, rb),
> + rb->receivedmap);
> +}
> +
> +void ramblock_recv_bitmap_set(void *host_addr, RAMBlock *rb)
> +{
> + set_bit_atomic(ramblock_recv_bitmap_offset(host_addr, rb), rb->receivedmap);
> +}
> +
> +void ramblock_recv_bitmap_clear(void *host_addr, RAMBlock *rb)
> +{
> + clear_bit(ramblock_recv_bitmap_offset(host_addr, rb), rb->receivedmap);
> +}
> +
> /*
> * An outstanding page request, on the source, having been received
> * and queued
> @@ -1773,6 +1801,18 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
> return ret;
> }
>
> +static void ramblock_recv_bitmap_clear_range(uint64_t start, size_t length,
> + RAMBlock *rb)
> +{
> + int i, range_count;
> + long nr_bit = start >> TARGET_PAGE_BITS;
> + range_count = length >> TARGET_PAGE_BITS;
> + for (i = 0; i < range_count; i++) {
> + clear_bit(nr_bit, rb->receivedmap);
> + nr_bit += 1;
> + }
Can you just use the bitmap_clear(map,start,nr) function in bitmap.h for
that loop?
The rest looks OK (although I'd bet we probably need something special
for RDMA, but that's normal for RDMA).
Dave
> +}
> +
> /**
> * ram_discard_range: discard dirtied pages at the beginning of postcopy
> *
> @@ -1797,6 +1837,7 @@ int ram_discard_range(const char *rbname, uint64_t start, size_t length)
> goto err;
> }
>
> + ramblock_recv_bitmap_clear_range(start, length, rb);
> ret = ram_block_discard_range(rb, start, length);
>
> err:
> @@ -2324,8 +2365,14 @@ static int ram_load_setup(QEMUFile *f, void *opaque)
>
> static int ram_load_cleanup(void *opaque)
> {
> + RAMBlock *rb;
> xbzrle_load_cleanup();
> compress_threads_load_cleanup();
> +
> + RAMBLOCK_FOREACH(rb) {
> + g_free(rb->receivedmap);
> + rb->receivedmap = NULL;
> + }
> return 0;
> }
>
> @@ -2513,6 +2560,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
> ram_addr_t addr, total_ram_bytes;
> void *host = NULL;
> uint8_t ch;
> + RAMBlock *rb = NULL;
>
> addr = qemu_get_be64(f);
> flags = addr & ~TARGET_PAGE_MASK;
> @@ -2520,15 +2568,15 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
>
> if (flags & (RAM_SAVE_FLAG_ZERO | RAM_SAVE_FLAG_PAGE |
> RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
> - RAMBlock *block = ram_block_from_stream(f, flags);
> + rb = ram_block_from_stream(f, flags);
>
> - host = host_from_ram_block_offset(block, addr);
> + host = host_from_ram_block_offset(rb, addr);
> if (!host) {
> error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
> ret = -EINVAL;
> break;
> }
> - trace_ram_load_loop(block->idstr, (uint64_t)addr, flags, host);
> + trace_ram_load_loop(rb->idstr, (uint64_t)addr, flags, host);
> }
>
> switch (flags & ~RAM_SAVE_FLAG_CONTINUE) {
> @@ -2582,10 +2630,12 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
>
> case RAM_SAVE_FLAG_ZERO:
> ch = qemu_get_byte(f);
> + ramblock_recv_bitmap_set(host, rb);
> ram_handle_compressed(host, ch, TARGET_PAGE_SIZE);
> break;
>
> case RAM_SAVE_FLAG_PAGE:
> + ramblock_recv_bitmap_set(host, rb);
> qemu_get_buffer(f, host, TARGET_PAGE_SIZE);
> break;
>
> @@ -2596,10 +2646,13 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
> ret = -EINVAL;
> break;
> }
> +
> + ramblock_recv_bitmap_set(host, rb);
> decompress_data_with_multi_threads(f, host, len);
> break;
>
> case RAM_SAVE_FLAG_XBZRLE:
> + ramblock_recv_bitmap_set(host, rb);
> if (load_xbzrle(f, addr, host) < 0) {
> error_report("Failed to decompress XBZRLE page at "
> RAM_ADDR_FMT, addr);
> diff --git a/migration/ram.h b/migration/ram.h
> index c081fde..98d68df 100644
> --- a/migration/ram.h
> +++ b/migration/ram.h
> @@ -52,4 +52,10 @@ int ram_discard_range(const char *block_name, uint64_t start, size_t length);
> int ram_postcopy_incoming_init(MigrationIncomingState *mis);
>
> void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
> +
> +void ramblock_recv_map_init(void);
> +int ramblock_recv_bitmap_test(void *host_addr, RAMBlock *rb);
> +void ramblock_recv_bitmap_set(void *host_addr, RAMBlock *rb);
> +void ramblock_recv_bitmap_clear(void *host_addr, RAMBlock *rb);
> +
> #endif
> --
> 1.9.1
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Re: [Qemu-devel] [PATCH v4 3/3] migration: add bitmap for received page
2017-06-26 8:35 ` [Qemu-devel] [PATCH v4 3/3] migration: add bitmap for received page Alexey Perevalov
2017-06-26 18:52 ` Dr. David Alan Gilbert
@ 2017-06-27 4:03 ` Peter Xu
2017-06-27 8:28 ` Alexey
1 sibling, 1 reply; 7+ messages in thread
From: Peter Xu @ 2017-06-27 4:03 UTC (permalink / raw)
To: Alexey Perevalov; +Cc: qemu-devel, i.maximets, quintela, dgilbert
On Mon, Jun 26, 2017 at 11:35:20AM +0300, Alexey Perevalov wrote:
> This patch adds ability to track down already received
> pages, it's necessary for calculation vCPU block time in
> postcopy migration feature, maybe for restore after
> postcopy migration failure.
> Also it's necessary to solve shared memory issue in
> postcopy livemigration. Information about received pages
> will be transferred to the software virtual bridge
> (e.g. OVS-VSWITCHD), to avoid fallocate (unmap) for
> already received pages. fallocate syscall is required for
> remmaped shared memory, due to remmaping itself blocks
> ioctl(UFFDIO_COPY, ioctl in this case will end with EEXIT
> error (struct page is exists after remmap).
>
> Bitmap is placed into RAMBlock as another postcopy/precopy
> related bitmaps.
>
> Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
Mostly good to me, some minor nits only...
[...]
> static int qemu_ufd_copy_ioctl(int userfault_fd, void *host_addr,
> - void *from_addr, uint64_t pagesize)
> + void *from_addr, uint64_t pagesize, RAMBlock *rb)
> {
> + int ret;
> if (from_addr) {
> struct uffdio_copy copy_struct;
> copy_struct.dst = (uint64_t)(uintptr_t)host_addr;
> copy_struct.src = (uint64_t)(uintptr_t)from_addr;
> copy_struct.len = pagesize;
> copy_struct.mode = 0;
> - return ioctl(userfault_fd, UFFDIO_COPY, &copy_struct);
> + ret = ioctl(userfault_fd, UFFDIO_COPY, &copy_struct);
> } else {
> struct uffdio_zeropage zero_struct;
> zero_struct.range.start = (uint64_t)(uintptr_t)host_addr;
> zero_struct.range.len = pagesize;
> zero_struct.mode = 0;
> - return ioctl(userfault_fd, UFFDIO_ZEROPAGE, &zero_struct);
> + ret = ioctl(userfault_fd, UFFDIO_ZEROPAGE, &zero_struct);
> + }
> + /* received page isn't feature of blocktime calculation,
> + * it's more general entity, so keep it here,
> + * but gup betwean two following operation could be high,
> + * and in this case blocktime for such small interval will be lost */
I would drop this comment from this patch. It didn't help me understand
the code better; it made things a bit more messy... Maybe it suits some
place in the blocktime series? Not sure.
[...]
> +void ramblock_recv_map_init(void)
> +{
> + RAMBlock *rb;
> +
> + RAMBLOCK_FOREACH(rb) {
> + unsigned long pages;
> + pages = rb->max_length >> TARGET_PAGE_BITS;
> + assert(!rb->receivedmap);
> + rb->receivedmap = bitmap_new(pages);
I'd prefer removing the pages variable since it's used only once.
[...]
> +static void ramblock_recv_bitmap_clear_range(uint64_t start, size_t length,
> + RAMBlock *rb)
> +{
> + int i, range_count;
> + long nr_bit = start >> TARGET_PAGE_BITS;
> + range_count = length >> TARGET_PAGE_BITS;
> + for (i = 0; i < range_count; i++) {
> + clear_bit(nr_bit, rb->receivedmap);
> + nr_bit += 1;
(Dave commented this one)
[...]
> @@ -2513,6 +2560,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
> ram_addr_t addr, total_ram_bytes;
> void *host = NULL;
> uint8_t ch;
> + RAMBlock *rb = NULL;
>
> addr = qemu_get_be64(f);
> flags = addr & ~TARGET_PAGE_MASK;
> @@ -2520,15 +2568,15 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
>
> if (flags & (RAM_SAVE_FLAG_ZERO | RAM_SAVE_FLAG_PAGE |
> RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
> - RAMBlock *block = ram_block_from_stream(f, flags);
> + rb = ram_block_from_stream(f, flags);
>
> - host = host_from_ram_block_offset(block, addr);
> + host = host_from_ram_block_offset(rb, addr);
> if (!host) {
> error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
> ret = -EINVAL;
> break;
> }
IMHO it's ok to set the bit once here. Thanks,
> - trace_ram_load_loop(block->idstr, (uint64_t)addr, flags, host);
> + trace_ram_load_loop(rb->idstr, (uint64_t)addr, flags, host);
> }
>
> switch (flags & ~RAM_SAVE_FLAG_CONTINUE) {
> @@ -2582,10 +2630,12 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
>
> case RAM_SAVE_FLAG_ZERO:
> ch = qemu_get_byte(f);
> + ramblock_recv_bitmap_set(host, rb);
> ram_handle_compressed(host, ch, TARGET_PAGE_SIZE);
> break;
>
> case RAM_SAVE_FLAG_PAGE:
> + ramblock_recv_bitmap_set(host, rb);
> qemu_get_buffer(f, host, TARGET_PAGE_SIZE);
> break;
>
> @@ -2596,10 +2646,13 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
> ret = -EINVAL;
> break;
> }
> +
> + ramblock_recv_bitmap_set(host, rb);
> decompress_data_with_multi_threads(f, host, len);
> break;
>
> case RAM_SAVE_FLAG_XBZRLE:
> + ramblock_recv_bitmap_set(host, rb);
> if (load_xbzrle(f, addr, host) < 0) {
> error_report("Failed to decompress XBZRLE page at "
> RAM_ADDR_FMT, addr);
--
Peter Xu
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [Qemu-devel] [PATCH v4 3/3] migration: add bitmap for received page
2017-06-27 4:03 ` Peter Xu
@ 2017-06-27 8:28 ` Alexey
0 siblings, 0 replies; 7+ messages in thread
From: Alexey @ 2017-06-27 8:28 UTC (permalink / raw)
To: Peter Xu; +Cc: i.maximets, qemu-devel, dgilbert, quintela
On Tue, Jun 27, 2017 at 12:03:10PM +0800, Peter Xu wrote:
> On Mon, Jun 26, 2017 at 11:35:20AM +0300, Alexey Perevalov wrote:
> > This patch adds the ability to track already received
> > pages; it's necessary for calculating vCPU block time in
> > the postcopy migration feature, and possibly for recovery
> > after a postcopy migration failure.
> > Also it's necessary to solve the shared memory issue in
> > postcopy live migration. Information about received pages
> > will be transferred to the software virtual bridge
> > (e.g. OVS-VSWITCHD), to avoid fallocate (unmap) for
> > already received pages. The fallocate syscall is required for
> > remapped shared memory, because remapping itself blocks
> > ioctl(UFFDIO_COPY); the ioctl in this case fails with an EEXIST
> > error (the struct page still exists after the remap).
> >
> > The bitmap is placed into RAMBlock alongside the other
> > postcopy/precopy related bitmaps.
> >
> > Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
>
> Mostly good to me, some minor nits only...
>
> [...]
>
> > static int qemu_ufd_copy_ioctl(int userfault_fd, void *host_addr,
> > - void *from_addr, uint64_t pagesize)
> > + void *from_addr, uint64_t pagesize, RAMBlock *rb)
> > {
> > + int ret;
> > if (from_addr) {
> > struct uffdio_copy copy_struct;
> > copy_struct.dst = (uint64_t)(uintptr_t)host_addr;
> > copy_struct.src = (uint64_t)(uintptr_t)from_addr;
> > copy_struct.len = pagesize;
> > copy_struct.mode = 0;
> > - return ioctl(userfault_fd, UFFDIO_COPY, &copy_struct);
> > + ret = ioctl(userfault_fd, UFFDIO_COPY, &copy_struct);
> > } else {
> > struct uffdio_zeropage zero_struct;
> > zero_struct.range.start = (uint64_t)(uintptr_t)host_addr;
> > zero_struct.range.len = pagesize;
> > zero_struct.mode = 0;
> > - return ioctl(userfault_fd, UFFDIO_ZEROPAGE, &zero_struct);
> > + ret = ioctl(userfault_fd, UFFDIO_ZEROPAGE, &zero_struct);
> > + }
> > + /* a received page isn't specific to blocktime calculation,
> > + * it's a more general entity, so keep it here;
> > + * but the gap between the two following operations could be large,
> > + * and in that case blocktime for such a small interval will be lost */
>
> I would drop this comment from this patch. It didn't make the code
> clearer to me, just a bit more messy... Maybe it suits some place in
> the blocktime series? Not sure.
yes, there could be a problem in the stats here, so it's worth
mentioning it in the stats series.
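The v4 change discussed above (capture ret instead of returning the ioctl result directly, then mark the page as received only on success) can be illustrated with a small standalone sketch. All names here are hypothetical stand-ins: the copy is simulated with memcpy in place of the real UFFDIO_COPY/UFFDIO_ZEROPAGE ioctls, and the bitmap helpers mimic the per-RAMBlock receivedmap only in spirit.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define TARGET_PAGE_BITS 12
#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Toy stand-in for a RAMBlock's receivedmap, indexed by page number. */
static unsigned long receivedmap[4];

static void recv_bitmap_set(uint64_t offset)
{
    long nr = offset >> TARGET_PAGE_BITS;
    receivedmap[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

static int recv_bitmap_test(uint64_t offset)
{
    long nr = offset >> TARGET_PAGE_BITS;
    return (receivedmap[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1;
}

/* Simulated copy: 0 on success, -1 on failure (in the real code this
 * is the return value of the UFFDIO_COPY/UFFDIO_ZEROPAGE ioctl). */
static int fake_copy_page(void *dst, const void *src, size_t len, int fail)
{
    if (fail) {
        return -1;
    }
    memcpy(dst, src, len);
    return 0;
}

/* The point of the v4 change: capture ret, and set the received bit
 * only when the copy actually succeeded. */
static int place_page(void *dst, const void *src, size_t len,
                      uint64_t offset, int fail)
{
    int ret = fake_copy_page(dst, src, len, fail);
    if (ret == 0) {
        recv_bitmap_set(offset);
    }
    return ret;
}
```

A failed copy thus leaves the bitmap untouched, which is what later consumers (blocktime accounting, shared-memory fallocate avoidance) rely on.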
>
> [...]
>
> > +void ramblock_recv_map_init(void)
> > +{
> > + RAMBlock *rb;
> > +
> > + RAMBLOCK_FOREACH(rb) {
> > + unsigned long pages;
> > + pages = rb->max_length >> TARGET_PAGE_BITS;
> > + assert(!rb->receivedmap);
> > + rb->receivedmap = bitmap_new(pages);
>
> I'd prefer removing the pages variable since it's used only once.
no problem )
>
> [...]
>
> > +static void ramblock_recv_bitmap_clear_range(uint64_t start, size_t length,
> > + RAMBlock *rb)
> > +{
> > + int i, range_count;
> > + long nr_bit = start >> TARGET_PAGE_BITS;
> > + range_count = length >> TARGET_PAGE_BITS;
> > + for (i = 0; i < range_count; i++) {
> > + clear_bit(nr_bit, rb->receivedmap);
> > + nr_bit += 1;
>
> (Dave commented this one)
I agree with Dave; looks like I reinvented bitmap_clear ;)
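To make the equivalence concrete, here is a self-contained sketch. bitmap_clear below is a minimal bit-at-a-time stand-in for QEMU's real helper in util/bitmap.c (which clears a word at a time); the point is only that the open-coded loop from the patch collapses to a single call.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define TARGET_PAGE_BITS 12
#define BITS_PER_LONG (8 * sizeof(unsigned long))

static void clear_bit(long nr, unsigned long *map)
{
    map[nr / BITS_PER_LONG] &= ~(1UL << (nr % BITS_PER_LONG));
}

static int test_bit(long nr, const unsigned long *map)
{
    return (map[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1;
}

/* Minimal stand-in for QEMU's bitmap_clear(): clear nr bits from
 * 'start'. The real implementation masks whole words at once. */
static void bitmap_clear(unsigned long *map, long start, long nr)
{
    for (long i = start; i < start + nr; i++) {
        clear_bit(i, map);
    }
}

/* The per-bit loop in ramblock_recv_bitmap_clear_range() then
 * reduces to one call. */
static void recv_bitmap_clear_range(unsigned long *map,
                                    uint64_t start, size_t length)
{
    bitmap_clear(map, start >> TARGET_PAGE_BITS,
                 length >> TARGET_PAGE_BITS);
}
```

Besides being shorter, the library call lets the word-sized masking optimization apply for free.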
>
> [...]
>
> > @@ -2513,6 +2560,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
> > ram_addr_t addr, total_ram_bytes;
> > void *host = NULL;
> > uint8_t ch;
> > + RAMBlock *rb = NULL;
> >
> > addr = qemu_get_be64(f);
> > flags = addr & ~TARGET_PAGE_MASK;
> > @@ -2520,15 +2568,15 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
> >
> > if (flags & (RAM_SAVE_FLAG_ZERO | RAM_SAVE_FLAG_PAGE |
> > RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
> > - RAMBlock *block = ram_block_from_stream(f, flags);
> > + rb = ram_block_from_stream(f, flags);
> >
> > - host = host_from_ram_block_offset(block, addr);
> > + host = host_from_ram_block_offset(rb, addr);
> > if (!host) {
> > error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
> > ret = -EINVAL;
> > break;
> > }
>
> IMHO it's ok to set the bit once here. Thanks,
yes, this is the common place for all copying operations here.
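The shape being agreed on, one bit-set at the shared point where host is resolved instead of a call inside every switch case, can be sketched as follows. Everything here is a toy simplification with hypothetical names; the real resolution and per-flag handlers are indicated only in comments.

```c
#include <assert.h>

#define FLAG_ZERO    0x1
#define FLAG_PAGE    0x2
#define FLAG_XBZRLE  0x4
#define PAGE_CARRYING (FLAG_ZERO | FLAG_PAGE | FLAG_XBZRLE)

static int recv_bits;  /* toy stand-in for the per-RAMBlock receivedmap */

static void recv_bitmap_set(int page)
{
    recv_bits |= 1 << page;
}

/* Sketch of the ram_load() dispatch: mark the page as received once,
 * where host is resolved, rather than repeating the call in each
 * page-carrying case of the switch below. */
static void load_one(int flags, int page)
{
    if (flags & PAGE_CARRYING) {
        /* host = host_from_ram_block_offset(rb, addr); ... */
        recv_bitmap_set(page);          /* the single common call */
    }
    switch (flags) {
    case FLAG_ZERO:   /* ram_handle_compressed(...) */ break;
    case FLAG_PAGE:   /* qemu_get_buffer(...) */       break;
    case FLAG_XBZRLE: /* load_xbzrle(...) */           break;
    default: break;
    }
}
```

Note this marks the page before the payload is actually copied, which is the trade-off being accepted here for the precopy load path (the postcopy path keeps the set-on-success ordering from patch 2).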
>
> > - trace_ram_load_loop(block->idstr, (uint64_t)addr, flags, host);
> > + trace_ram_load_loop(rb->idstr, (uint64_t)addr, flags, host);
> > }
> >
> > switch (flags & ~RAM_SAVE_FLAG_CONTINUE) {
> > @@ -2582,10 +2630,12 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
> >
> > case RAM_SAVE_FLAG_ZERO:
> > ch = qemu_get_byte(f);
> > + ramblock_recv_bitmap_set(host, rb);
> > ram_handle_compressed(host, ch, TARGET_PAGE_SIZE);
> > break;
> >
> > case RAM_SAVE_FLAG_PAGE:
> > + ramblock_recv_bitmap_set(host, rb);
> > qemu_get_buffer(f, host, TARGET_PAGE_SIZE);
> > break;
> >
> > @@ -2596,10 +2646,13 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
> > ret = -EINVAL;
> > break;
> > }
> > +
> > + ramblock_recv_bitmap_set(host, rb);
> > decompress_data_with_multi_threads(f, host, len);
> > break;
> >
> > case RAM_SAVE_FLAG_XBZRLE:
> > + ramblock_recv_bitmap_set(host, rb);
> > if (load_xbzrle(f, addr, host) < 0) {
> > error_report("Failed to decompress XBZRLE page at "
> > RAM_ADDR_FMT, addr);
>
> --
> Peter Xu
>
--
BR
Alexey
end of thread [~2017-06-27 8:29 UTC]
Thread overview: 7+ messages
[not found] <CGME20170626083547eucas1p10fc2d64db7b6db0a1bd1b94b56c06ca9@eucas1p1.samsung.com>
2017-06-26 8:35 ` [Qemu-devel] [PATCH v4 0/3] Add bitmap for received pages in postcopy migration Alexey Perevalov
[not found] ` <CGME20170626083551eucas1p1b0cb811534fae257da120e71d32eab37@eucas1p1.samsung.com>
2017-06-26 8:35 ` [Qemu-devel] [PATCH v4 1/3] migration: postcopy_place_page factoring out Alexey Perevalov
[not found] ` <CGME20170626083552eucas1p24efde91b6a4544fb77927f3b93d83cdf@eucas1p2.samsung.com>
2017-06-26 8:35 ` [Qemu-devel] [PATCH v4 2/3] migration: introduce qemu_ufd_copy_ioctl helper Alexey Perevalov
[not found] ` <CGME20170626083552eucas1p2db6a8bd9188a95d72aad188d6b633dd9@eucas1p2.samsung.com>
2017-06-26 8:35 ` [Qemu-devel] [PATCH v4 3/3] migration: add bitmap for received page Alexey Perevalov
2017-06-26 18:52 ` Dr. David Alan Gilbert
2017-06-27 4:03 ` Peter Xu
2017-06-27 8:28 ` Alexey