* [PATCH v5 0/8] Migration: Transmit and detect zero pages in the multifd threads
@ 2022-03-10 15:34 Juan Quintela
  2022-03-10 15:34 ` [PATCH v5 1/8] migration: Export ram_transferred_ram() Juan Quintela
                   ` (7 more replies)
  0 siblings, 8 replies; 11+ messages in thread
From: Juan Quintela @ 2022-03-10 15:34 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Philippe Mathieu-Daudé,
	Peter Xu, Dr. David Alan Gilbert, Yanan Wang, Leonardo Bras

Hi

In this version:
- Rebase to latest
- Address all comments
- statistics about zero pages are now correct (or at least much better than before)
- changed how we calculate the amount of transferred ram
- numbers, because who doesn't like numbers?

Everything has been checked with a guest launched with the command
below.  Migration runs through localhost.  I will send numbers from
real hardware as soon as I get access to the machines (I checked with
previous versions already, but not with this one).

/scratch/qemu/multifd/x64/x86_64-softmmu/qemu-system-x86_64
-name guest=src,debug-threads=on
-m 16G
-smp 6
-machine q35,accel=kvm,usb=off,dump-guest-core=off
-boot strict=on
-cpu host
-no-hpet
-rtc base=utc,driftfix=slew
-global kvm-pit.lost_tick_policy=delay
-global ICH9-LPC.disable_s3=1
-global ICH9-LPC.disable_s4=1
-device pcie-root-port,id=root.1,chassis=1,addr=0x2.0,multifunction=on
-device pcie-root-port,id=root.2,chassis=2,addr=0x2.1
-device pcie-root-port,id=root.3,chassis=3,addr=0x2.2
-device pcie-root-port,id=root.4,chassis=4,addr=0x2.3
-device pcie-root-port,id=root.5,chassis=5,addr=0x2.4
-device pcie-root-port,id=root.6,chassis=6,addr=0x2.5
-device pcie-root-port,id=root.7,chassis=7,addr=0x2.6
-device pcie-root-port,id=root.8,chassis=8,addr=0x2.7
-blockdev driver=file,node-name=storage0,filename=/mnt/vm/test/a3.qcow2,auto-read-only=true,discard=unmap
-blockdev driver=qcow2,node-name=format0,read-only=false,file=storage0
-device virtio-blk-pci,id=virtio-disk0,drive=format0,bootindex=1,bus=root.1
-netdev tap,id=hostnet0,vhost=on,script=/etc/kvm-ifup,downscript=/etc/kvm-ifdown
-device virtio-net-pci,id=net0,netdev=hostnet0,mac=52:54:00:9d:30:23,bus=root.2
-device virtio-serial-pci,id=virtio-serial0,bus=root.3
-device virtio-balloon-pci,id=balloon0,bus=root.4 -display none
-chardev socket,id=charconsole0,path=/tmp/console-src,server=off
-device virtconsole,id=console0,chardev=charconsole0
-uuid 9d3be7da-e1ff-41a0-ac39-8b2e04de2c19
-nodefaults
-msg timestamp=on
-no-user-config
-chardev socket,id=monitor0,path=/tmp/monitor-src,server=off
-mon monitor0
-trace events=/home/quintela/tmp/events
-global migration.x-multifd=on
-global migration.multifd-channels=4
-global migration.x-max-bandwidth=6442450944

Using a maximum bandwidth of 6GB/s (on my test machine anything bigger
than that will not help, and then we would not be checking that the
limitation works).
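
(As a quick check: 6442450944 bytes/s is exactly 6 * 1024^3 bytes/s,
i.e. 6 GiB/s, which matches the x-max-bandwidth value on the command
line above.)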

First, the part where multifd + zero pages don't shine.

Migrate the guest just booted and idle.

Precopy (changed multifd=off on the command line)

Migration status: completed
total time: 851 ms
downtime: 29 ms
setup: 4 ms
transferred ram: 634400 kbytes
throughput: 6139.32 mbps
duplicate: 4045125 pages
normal: 149420 pages
normal bytes: 597680 kbytes
dirty sync count: 3
multifd bytes: 0 kbytes
pages-per-second: 3088960
precopy ram: 598949 kbytes
downtime ram: 35450 kbytes

Current multifd:

Migration status: completed
total time: 922 ms
downtime: 27 ms
setup: 4 ms
transferred ram: 621342 kbytes
throughput: 5547.97 mbps
total ram: 16777992 kbytes
duplicate: 4048826 pages
normal: 146032 pages
normal bytes: 584128 kbytes
dirty sync count: 3
multifd bytes: 585757 kbytes
pages-per-second: 5918137
precopy ram: 35585 kbytes

Migration status: completed
total time: 946 ms
downtime: 60 ms
setup: 10 ms
transferred ram: 621784 kbytes
throughput: 5445.14 mbps
total ram: 16777992 kbytes
duplicate: 4048586 pages
normal: 146146 pages
normal bytes: 584584 kbytes
dirty sync count: 3
multifd bytes: 586201 kbytes
pages-per-second: 5297106
precopy ram: 35582 kbytes
downtime ram: 1 kbytes

See that the times are similar; the accounting is still wrong for
multifd (downtime ram, for instance).  Notice that multifd is almost
not used, because almost everything is a zero page that is sent
through the main channel.
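
(As a rough sanity check, assuming the 4 KiB page size reported in the
stats: 4048826 duplicate (zero) pages * 4 KiB is about 15.4 GiB of the
16 GiB guest, so only around 0.6 GiB worth of normal pages actually
goes through the multifd channels.)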

Multifd + zero page:

Migration status: completed
total time: 1854 ms
downtime: 150 ms
setup: 4 ms
transferred ram: 12086679 kbytes
throughput: 53522.74 mbps
total ram: 16777992 kbytes
duplicate: 4039509 pages
normal: 146476 pages
normal bytes: 585904 kbytes
dirty sync count: 3
multifd bytes: 12086679 kbytes
pages-per-second: 2274851
precopy ram: 10698366 kbytes
downtime ram: 1388313 kbytes

Migration status: completed
total time: 1547 ms
downtime: 143 ms
setup: 4 ms
transferred ram: 9877449 kbytes
throughput: 52442.68 mbps
total ram: 16777992 kbytes
duplicate: 4037502 pages
normal: 149056 pages
normal bytes: 596224 kbytes
dirty sync count: 3
multifd bytes: 9877449 kbytes
pages-per-second: 3011840
precopy ram: 8811411 kbytes
downtime ram: 1066038 kbytes

Here we are sending zero pages through the multifd channels, so the
packets that we send through them are smaller, and the bandwidth that
we get is worse than when sending them through the main channel.
Notice that the bandwidth numbers are now correct.

Now a better scenario.  We use the same guest, but we run on it:

 stress --vm 4 --vm-keep --vm-bytes 400M

i.e. 4 threads dirtying 1.6GB in total in the 6GB test.

Precopy:

total time: 120947 ms
expected downtime: 678 ms
setup: 5 ms
transferred ram: 283655333 kbytes
throughput: 19818.30 mbps
remaining ram: 1048116 kbytes
total ram: 16777992 kbytes
duplicate: 3635761 pages
normal: 70767622 pages
normal bytes: 283070488 kbytes
dirty sync count: 286
multifd bytes: 0 kbytes
pages-per-second: 603627
dirty pages rate: 598555 pages
precopy ram: 283655333 kbytes

As you can see, it will never converge.
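
(A rough estimate of why, assuming 4 KiB pages: a dirty rate of 598555
pages/s is about 2.3 GiB/s, roughly the same as the ~19.8 Gbit/s
(~2.3 GiB/s) that precopy manages to push, so it can never get ahead
of the guest.)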

Current multifd:

Migration status: completed
total time: 2249 ms
downtime: 273 ms
setup: 6 ms
transferred ram: 5432984 kbytes
throughput: 19843.96 mbps
total ram: 16777992 kbytes
duplicate: 3635610 pages
normal: 1346731 pages
normal bytes: 5386924 kbytes
dirty sync count: 7
multifd bytes: 5401030 kbytes
pages-per-second: 1388326
precopy ram: 31953 kbytes

Migration status: completed
total time: 3383 ms
downtime: 230 ms
setup: 4 ms
transferred ram: 10828047 kbytes
throughput: 26252.25 mbps
total ram: 16777992 kbytes
duplicate: 3638556 pages
normal: 2691985 pages
normal bytes: 10767940 kbytes
dirty sync count: 22
page size: 4 kbytes
multifd bytes: 10796067 kbytes
pages-per-second: 1375137
precopy ram: 31980 kbytes

See that the time we get depends on luck, but we end up converging
sooner or later (yes, I started with 512MB, but multifd will not
converge with that).
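
(Rough numbers, taking the dirty rate reported in the precopy run
above (~598k pages/s, i.e. ~2.3 GiB/s with 4 KiB pages) as a
reference: the ~20-26 Gbit/s (~2.3-3 GiB/s) that multifd reaches is at
or slightly above that dirty rate, which is why it converges at all
and why the exact time depends on luck.)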

Multifd + zero page

Migration status: completed
total time: 2785 ms
downtime: 292 ms
setup: 4 ms
transferred ram: 16108184 kbytes
throughput: 47451.00 mbps
total ram: 16777992 kbytes
duplicate: 3633781 pages
normal: 957319 pages
normal bytes: 3829276 kbytes
dirty sync count: 4
multifd bytes: 16108184 kbytes
pages-per-second: 1861120
precopy ram: 14303936 kbytes
downtime ram: 1804247 kbytes

Migration status: completed
total time: 2789 ms
downtime: 302 ms
setup: 6 ms
transferred ram: 16198682 kbytes
throughput: 47683.29 mbps
total ram: 16777992 kbytes
duplicate: 3631338 pages
normal: 959689 pages
normal bytes: 3838756 kbytes
dirty sync count: 4
multifd bytes: 16198681 kbytes
pages-per-second: 1820160
precopy ram: 14385777 kbytes
downtime ram: 1812904 kbytes

Notice that we only need 4 iterations; with normal multifd, it depends on luck.
The statistics are right here, and the total time that we need is much more consistent.

Please review, Juan.

[v4]
In this version
- Rebase to latest
- Address all comments from previous versions
- code cleanup

Please review.

[v2]
This is a rebase against last master.

And the reason for the resend is to configure git-publish properly and
hope that this time git-publish sends all the patches.

Please, review.

[v1]
Since Friday version:
- More cleanups on the code
- Remove repeated calls to qemu_target_page_size()
- Establish normal pages and zero pages
- detect zero pages on the multifd threads
- send zero pages through the multifd channels.
- reviews by Richard addressed.

It passes migration-test, so it should be perfect O:+)

ToDo for next version:
- check the version changes
  I need 6.2 to be out to check against 7.0.
  This code doesn't exist at all for that reason.
- Send measurements of the differences

Please, review.

[

Friday version that just created a single writev instead of
write+writev.

]

Right now, multifd does a write() for the header and a writev() for
each group of pages.  Simplify it so we send the header as another
member of the IOV.
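
A rough sketch of the idea (the function name send_header_and_pages()
and its parameters are illustrative only, not the actual QEMU code):

    #include <stddef.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    /*
     * Illustrative sketch: make the packet header element 0 of the
     * iovec array and the pages the following elements, so a single
     * writev() replaces the old write(header) + writev(pages) pair.
     */
    static ssize_t send_header_and_pages(int fd, void *header,
                                         size_t header_len,
                                         unsigned char *host,
                                         const size_t *offset,
                                         int num_pages, size_t page_size,
                                         struct iovec *iov)
    {
        /* iov must have room for num_pages + 1 entries */
        iov[0].iov_base = header;
        iov[0].iov_len = header_len;
        for (int i = 0; i < num_pages; i++) {
            iov[i + 1].iov_base = host + offset[i];
            iov[i + 1].iov_len = page_size;
        }
        return writev(fd, iov, num_pages + 1);
    }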

Once there, I got several simplifications:
* is_zero_range() was used only once, just use its body.
* same with is_zero_page().
* Be consistent and use the offset inside the ramblock everywhere.
* Now that we have the offsets of the ramblock, we can drop the iov.
* Now that nothing uses iovs except the NOCOMP method, move the iovs
  from pages to methods.
* Now we can use iovs with a single field for zlib/zstd.
* The send_write() method is the same in all the implementations, so
  use it directly.
* Now, we can use a single writev() to write everything.

ToDo: Move zero page detection to the multifd threads.

With RAM sizes in the terabytes, detecting zero pages takes too much
time on the main thread.

The last patch of the series removes the detection of zero pages in
the main thread for multifd.  In the next posting of the series, I
will add how to detect the zero pages and send them over the multifd
channels.

Please review.

Later, Juan.

Juan Quintela (8):
  migration: Export ram_transferred_ram()
  multifd: Count the number of sent bytes correctly
  migration: Make ram_save_target_page() a pointer
  multifd: Add property to enable/disable zero_page
  migration: Export ram_release_page()
  multifd: Support for zero pages transmission
  multifd: Zero pages transmission
  migration: Use multifd before we check for the zero page

 migration/migration.h  |  3 ++
 migration/multifd.h    | 24 ++++++++++-
 migration/ram.h        |  3 ++
 hw/core/machine.c      |  4 +-
 migration/migration.c  | 11 +++++
 migration/multifd.c    | 93 ++++++++++++++++++++++++++++++++++--------
 migration/ram.c        | 48 ++++++++++++++++++----
 migration/trace-events |  8 ++--
 8 files changed, 162 insertions(+), 32 deletions(-)

-- 
2.34.1





* [PATCH v5 1/8] migration: Export ram_transferred_ram()
  2022-03-10 15:34 [PATCH v5 0/8] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
@ 2022-03-10 15:34 ` Juan Quintela
  2022-03-21 11:50   ` Dr. David Alan Gilbert
  2022-03-29 11:48   ` David Edmondson
  2022-03-10 15:34 ` [PATCH v5 2/8] multifd: Count the number of sent bytes correctly Juan Quintela
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 11+ messages in thread
From: Juan Quintela @ 2022-03-10 15:34 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Philippe Mathieu-Daudé,
	Peter Xu, Dr. David Alan Gilbert, Yanan Wang, Leonardo Bras

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.h | 2 ++
 migration/ram.c | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/migration/ram.h b/migration/ram.h
index 2c6dc3675d..2e27c49f90 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -64,6 +64,8 @@ int ram_postcopy_incoming_init(MigrationIncomingState *mis);
 
 void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
 
+void ram_transferred_add(uint64_t bytes);
+
 int ramblock_recv_bitmap_test(RAMBlock *rb, void *host_addr);
 bool ramblock_recv_bitmap_test_byte_offset(RAMBlock *rb, uint64_t byte_offset);
 void ramblock_recv_bitmap_set(RAMBlock *rb, void *host_addr);
diff --git a/migration/ram.c b/migration/ram.c
index 170e522a1f..947ed44c89 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -394,7 +394,7 @@ uint64_t ram_bytes_remaining(void)
 
 MigrationStats ram_counters;
 
-static void ram_transferred_add(uint64_t bytes)
+void ram_transferred_add(uint64_t bytes)
 {
     if (runstate_is_running()) {
         ram_counters.precopy_bytes += bytes;
-- 
2.34.1




* [PATCH v5 2/8] multifd: Count the number of sent bytes correctly
  2022-03-10 15:34 [PATCH v5 0/8] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
  2022-03-10 15:34 ` [PATCH v5 1/8] migration: Export ram_transferred_ram() Juan Quintela
@ 2022-03-10 15:34 ` Juan Quintela
  2022-03-10 15:34 ` [PATCH v5 3/8] migration: Make ram_save_target_page() a pointer Juan Quintela
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Juan Quintela @ 2022-03-10 15:34 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Philippe Mathieu-Daudé,
	Peter Xu, Dr. David Alan Gilbert, Yanan Wang, Leonardo Bras

Current code assumes that all pages are whole.  That is already not
true for compression, for example.  Fix it by creating a new field
->sent_bytes that accounts for it.

All ram_counters are used only from the migration thread, so we have
two options:
- Put a mutex and fill everything when we send it (not only
ram_counters, also qemu_file->xfer_bytes).
- Create a local variable that tracks how much has been sent through
each channel, and when we push another packet, we "add" the previous
stats.

I chose the second option because it means fewer changes overall.  In
the previous code we increased the transferred counter and then sent.
The current code goes the other way around: it sends the data and
updates the counters after the fact.  Notice that each channel can
have at most half a megabyte of data uncounted, so it is not very
important.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.h |  2 ++
 migration/multifd.c | 15 ++++++---------
 2 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index 4dda900a0b..3afba8a198 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -119,6 +119,8 @@ typedef struct {
     uint32_t normal_num;
     /* used for compression methods */
     void *data;
+    /* How many bytes have we sent on the last packet */
+    uint64_t sent_bytes;
 }  MultiFDSendParams;
 
 typedef struct {
diff --git a/migration/multifd.c b/migration/multifd.c
index 76b57a7177..ab87879471 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -398,7 +398,6 @@ static int multifd_send_pages(QEMUFile *f)
     static int next_channel;
     MultiFDSendParams *p = NULL; /* make happy gcc */
     MultiFDPages_t *pages = multifd_send_state->pages;
-    uint64_t transferred;
 
     if (qatomic_read(&multifd_send_state->exiting)) {
         return -1;
@@ -433,11 +432,10 @@ static int multifd_send_pages(QEMUFile *f)
     p->packet_num = multifd_send_state->packet_num++;
     multifd_send_state->pages = p->pages;
     p->pages = pages;
-    transferred = ((uint64_t) pages->num) * qemu_target_page_size()
-                + p->packet_len;
-    qemu_file_update_transfer(f, transferred);
-    ram_counters.multifd_bytes += transferred;
-    ram_counters.transferred += transferred;
+    ram_transferred_add(p->sent_bytes);
+    ram_counters.multifd_bytes += p->sent_bytes;
+    qemu_file_update_transfer(f, p->sent_bytes);
+    p->sent_bytes = 0;
     qemu_mutex_unlock(&p->mutex);
     qemu_sem_post(&p->sem);
 
@@ -597,9 +595,6 @@ void multifd_send_sync_main(QEMUFile *f)
         p->packet_num = multifd_send_state->packet_num++;
         p->flags |= MULTIFD_FLAG_SYNC;
         p->pending_job++;
-        qemu_file_update_transfer(f, p->packet_len);
-        ram_counters.multifd_bytes += p->packet_len;
-        ram_counters.transferred += p->packet_len;
         qemu_mutex_unlock(&p->mutex);
         qemu_sem_post(&p->sem);
     }
@@ -675,6 +670,8 @@ static void *multifd_send_thread(void *opaque)
             }
 
             qemu_mutex_lock(&p->mutex);
+            p->sent_bytes += p->packet_len;
+            p->sent_bytes += p->next_packet_size;
             p->pending_job--;
             qemu_mutex_unlock(&p->mutex);
 
-- 
2.34.1




* [PATCH v5 3/8] migration: Make ram_save_target_page() a pointer
  2022-03-10 15:34 [PATCH v5 0/8] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
  2022-03-10 15:34 ` [PATCH v5 1/8] migration: Export ram_transferred_ram() Juan Quintela
  2022-03-10 15:34 ` [PATCH v5 2/8] multifd: Count the number of sent bytes correctly Juan Quintela
@ 2022-03-10 15:34 ` Juan Quintela
  2022-03-10 15:34 ` [PATCH v5 4/8] multifd: Add property to enable/disable zero_page Juan Quintela
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Juan Quintela @ 2022-03-10 15:34 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Philippe Mathieu-Daudé,
	Peter Xu, Dr. David Alan Gilbert, Yanan Wang, Leonardo Bras

We are going to create a new function for multifd later in the series.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 947ed44c89..1006d8d585 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -295,6 +295,9 @@ struct RAMSrcPageRequest {
     QSIMPLEQ_ENTRY(RAMSrcPageRequest) next_req;
 };
 
+typedef struct RAMState RAMState;
+typedef struct PageSearchStatus PageSearchStatus;
+
 /* State of RAM for migration */
 struct RAMState {
     /* QEMUFile used for this migration */
@@ -349,8 +352,8 @@ struct RAMState {
     /* Queue of outstanding page requests from the destination */
     QemuMutex src_page_req_mutex;
     QSIMPLEQ_HEAD(, RAMSrcPageRequest) src_page_requests;
+    int (*ram_save_target_page)(RAMState *rs, PageSearchStatus *pss);
 };
-typedef struct RAMState RAMState;
 
 static RAMState *ram_state;
 
@@ -2126,14 +2129,14 @@ static bool save_compress_page(RAMState *rs, RAMBlock *block, ram_addr_t offset)
 }
 
 /**
- * ram_save_target_page: save one target page
+ * ram_save_target_page_legacy: save one target page
  *
  * Returns the number of pages written
  *
  * @rs: current RAM state
  * @pss: data about the page we want to send
  */
-static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
+static int ram_save_target_page_legacy(RAMState *rs, PageSearchStatus *pss)
 {
     RAMBlock *block = pss->block;
     ram_addr_t offset = ((ram_addr_t)pss->page) << TARGET_PAGE_BITS;
@@ -2208,7 +2211,7 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
     do {
         /* Check the pages is dirty and if it is send it */
         if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
-            tmppages = ram_save_target_page(rs, pss);
+            tmppages = rs->ram_save_target_page(rs, pss);
             if (tmppages < 0) {
                 return tmppages;
             }
@@ -2937,6 +2940,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
     ram_control_before_iterate(f, RAM_CONTROL_SETUP);
     ram_control_after_iterate(f, RAM_CONTROL_SETUP);
 
+    (*rsp)->ram_save_target_page = ram_save_target_page_legacy;
     multifd_send_sync_main(f);
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     qemu_fflush(f);
-- 
2.34.1




* [PATCH v5 4/8] multifd: Add property to enable/disable zero_page
  2022-03-10 15:34 [PATCH v5 0/8] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (2 preceding siblings ...)
  2022-03-10 15:34 ` [PATCH v5 3/8] migration: Make ram_save_target_page() a pointer Juan Quintela
@ 2022-03-10 15:34 ` Juan Quintela
  2022-03-10 15:34 ` [PATCH v5 5/8] migration: Export ram_release_page() Juan Quintela
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Juan Quintela @ 2022-03-10 15:34 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Philippe Mathieu-Daudé,
	Peter Xu, Dr. David Alan Gilbert, Yanan Wang, Leonardo Bras

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/migration.h |  3 +++
 hw/core/machine.c     |  4 +++-
 migration/migration.c | 11 +++++++++++
 3 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/migration/migration.h b/migration/migration.h
index 2de861df01..5048d5241f 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -333,6 +333,8 @@ struct MigrationState {
      * This save hostname when out-going migration starts
      */
     char *hostname;
+    /* Use multifd channel to send zero pages */
+    bool multifd_zero_pages;
 };
 
 void migrate_set_state(int *state, int old_state, int new_state);
@@ -375,6 +377,7 @@ int migrate_multifd_channels(void);
 MultiFDCompression migrate_multifd_compression(void);
 int migrate_multifd_zlib_level(void);
 int migrate_multifd_zstd_level(void);
+bool migrate_use_multifd_zero_page(void);
 
 int migrate_use_xbzrle(void);
 uint64_t migrate_xbzrle_cache_size(void);
diff --git a/hw/core/machine.c b/hw/core/machine.c
index d856485cb4..5d2bd2144b 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -37,7 +37,9 @@
 #include "hw/virtio/virtio.h"
 #include "hw/virtio/virtio-pci.h"
 
-GlobalProperty hw_compat_6_2[] = {};
+GlobalProperty hw_compat_6_2[] = {
+    { "migration", "multifd-zero-pages", "false" },
+};
 const size_t hw_compat_6_2_len = G_N_ELEMENTS(hw_compat_6_2);
 
 GlobalProperty hw_compat_6_1[] = {
diff --git a/migration/migration.c b/migration/migration.c
index 695f0f2900..a655fc0b79 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2511,6 +2511,15 @@ bool migrate_use_multifd(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD];
 }
 
+bool migrate_use_multifd_zero_page(void)
+{
+    MigrationState *s;
+
+    s = migrate_get_current();
+
+    return s->multifd_zero_pages;
+}
+
 bool migrate_pause_before_switchover(void)
 {
     MigrationState *s;
@@ -4158,6 +4167,8 @@ static Property migration_properties[] = {
                       clear_bitmap_shift, CLEAR_BITMAP_SHIFT_DEFAULT),
 
     /* Migration parameters */
+    DEFINE_PROP_BOOL("multifd-zero-pages", MigrationState,
+                      multifd_zero_pages, true),
     DEFINE_PROP_UINT8("x-compress-level", MigrationState,
                       parameters.compress_level,
                       DEFAULT_MIGRATE_COMPRESS_LEVEL),
-- 
2.34.1




* [PATCH v5 5/8] migration: Export ram_release_page()
  2022-03-10 15:34 [PATCH v5 0/8] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (3 preceding siblings ...)
  2022-03-10 15:34 ` [PATCH v5 4/8] multifd: Add property to enable/disable zero_page Juan Quintela
@ 2022-03-10 15:34 ` Juan Quintela
  2022-03-10 15:34 ` [PATCH v5 6/8] multifd: Support for zero pages transmission Juan Quintela
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Juan Quintela @ 2022-03-10 15:34 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Philippe Mathieu-Daudé,
	Peter Xu, Dr. David Alan Gilbert, Yanan Wang, Leonardo Bras

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.h | 1 +
 migration/ram.c | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/migration/ram.h b/migration/ram.h
index 2e27c49f90..33686055b4 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -65,6 +65,7 @@ int ram_postcopy_incoming_init(MigrationIncomingState *mis);
 void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
 
 void ram_transferred_add(uint64_t bytes);
+void ram_release_page(const char *rbname, uint64_t offset);
 
 int ramblock_recv_bitmap_test(RAMBlock *rb, void *host_addr);
 bool ramblock_recv_bitmap_test_byte_offset(RAMBlock *rb, uint64_t byte_offset);
diff --git a/migration/ram.c b/migration/ram.c
index 1006d8d585..1a642f1e70 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1180,7 +1180,7 @@ static void migration_bitmap_sync_precopy(RAMState *rs)
     }
 }
 
-static void ram_release_page(const char *rbname, uint64_t offset)
+void ram_release_page(const char *rbname, uint64_t offset)
 {
     if (!migrate_release_ram() || !migration_in_postcopy()) {
         return;
-- 
2.34.1




* [PATCH v5 6/8] multifd: Support for zero pages transmission
  2022-03-10 15:34 [PATCH v5 0/8] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (4 preceding siblings ...)
  2022-03-10 15:34 ` [PATCH v5 5/8] migration: Export ram_release_page() Juan Quintela
@ 2022-03-10 15:34 ` Juan Quintela
  2022-03-10 15:34 ` [PATCH v5 7/8] multifd: Zero " Juan Quintela
  2022-03-10 15:34 ` [PATCH v5 8/8] migration: Use multifd before we check for the zero page Juan Quintela
  7 siblings, 0 replies; 11+ messages in thread
From: Juan Quintela @ 2022-03-10 15:34 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Philippe Mathieu-Daudé,
	Peter Xu, Dr. David Alan Gilbert, Yanan Wang, Leonardo Bras

This patch adds counters and similar infrastructure.  The logic will
be added in the following patch.

Signed-off-by: Juan Quintela <quintela@redhat.com>

---

Added counters for duplicated/non duplicated pages.
Removed reviewed by from David.
Add total_zero_pages
---
 migration/multifd.h    | 17 ++++++++++++++++-
 migration/multifd.c    | 36 +++++++++++++++++++++++++++++-------
 migration/ram.c        |  2 --
 migration/trace-events |  8 ++++----
 4 files changed, 49 insertions(+), 14 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index 3afba8a198..06c52081ab 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -49,7 +49,10 @@ typedef struct {
     /* size of the next packet that contains pages */
     uint32_t next_packet_size;
     uint64_t packet_num;
-    uint64_t unused[4];    /* Reserved for future use */
+    /* zero pages */
+    uint32_t zero_pages;
+    uint32_t unused32[1];    /* Reserved for future use */
+    uint64_t unused64[3];    /* Reserved for future use */
     char ramblock[256];
     uint64_t offset[];
 } __attribute__((packed)) MultiFDPacket_t;
@@ -107,6 +110,8 @@ typedef struct {
     uint64_t num_packets;
     /* non zero pages sent through this channel */
     uint64_t total_normal_pages;
+    /* zero pages sent through this channel */
+    uint64_t total_zero_pages;
     /* syncs main thread and channels */
     QemuSemaphore sem_sync;
     /* buffers to send */
@@ -117,6 +122,10 @@ typedef struct {
     ram_addr_t *normal;
     /* num of non zero pages */
     uint32_t normal_num;
+    /* Pages that are  zero */
+    ram_addr_t *zero;
+    /* num of zero pages */
+    uint32_t zero_num;
     /* used for compression methods */
     void *data;
     /* How many bytes have we sent on the last packet */
@@ -156,6 +165,8 @@ typedef struct {
     uint64_t num_packets;
     /* non zero pages recv through this channel */
     uint64_t total_normal_pages;
+    /* zero pages recv through this channel */
+    uint64_t total_zero_pages;
     /* syncs main thread and channels */
     QemuSemaphore sem_sync;
     /* buffers to recv */
@@ -164,6 +175,10 @@ typedef struct {
     ram_addr_t *normal;
     /* num of non zero pages */
     uint32_t normal_num;
+    /* Pages that are  zero */
+    ram_addr_t *zero;
+    /* num of zero pages */
+    uint32_t zero_num;
     /* used for de-compression methods */
     void *data;
 } MultiFDRecvParams;
diff --git a/migration/multifd.c b/migration/multifd.c
index ab87879471..41769ff99f 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -265,6 +265,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
     packet->normal_pages = cpu_to_be32(p->normal_num);
     packet->next_packet_size = cpu_to_be32(p->next_packet_size);
     packet->packet_num = cpu_to_be64(p->packet_num);
+    packet->zero_pages = cpu_to_be32(p->zero_num);
 
     if (p->pages->block) {
         strncpy(packet->ramblock, p->pages->block->idstr, 256);
@@ -327,7 +328,15 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
     p->next_packet_size = be32_to_cpu(packet->next_packet_size);
     p->packet_num = be64_to_cpu(packet->packet_num);
 
-    if (p->normal_num == 0) {
+    p->zero_num = be32_to_cpu(packet->zero_pages);
+    if (p->zero_num > packet->pages_alloc - p->normal_num) {
+        error_setg(errp, "multifd: received packet "
+                   "with %u zero pages and expected maximum pages are %u",
+                   p->zero_num, packet->pages_alloc - p->normal_num) ;
+        return -1;
+    }
+
+    if (p->normal_num == 0 && p->zero_num == 0) {
         return 0;
     }
 
@@ -436,6 +445,8 @@ static int multifd_send_pages(QEMUFile *f)
     ram_counters.multifd_bytes += p->sent_bytes;
     qemu_file_update_transfer(f, p->sent_bytes);
     p->sent_bytes = 0;
+    ram_counters.normal += p->normal_num;
+    ram_counters.duplicate += p->zero_num;
     qemu_mutex_unlock(&p->mutex);
     qemu_sem_post(&p->sem);
 
@@ -551,6 +562,8 @@ void multifd_save_cleanup(void)
         p->iov = NULL;
         g_free(p->normal);
         p->normal = NULL;
+        g_free(p->zero);
+        p->zero = NULL;
         multifd_send_state->ops->send_cleanup(p, &local_err);
         if (local_err) {
             migrate_set_error(migrate_get_current(), local_err);
@@ -636,6 +649,7 @@ static void *multifd_send_thread(void *opaque)
             uint32_t flags = p->flags;
             p->iovs_num = 1;
             p->normal_num = 0;
+            p->zero_num = 0;
 
             for (int i = 0; i < p->pages->num; i++) {
                 p->normal[p->normal_num] = p->pages->offset[i];
@@ -653,12 +667,13 @@ static void *multifd_send_thread(void *opaque)
             p->flags = 0;
             p->num_packets++;
             p->total_normal_pages += p->normal_num;
+            p->total_zero_pages += p->zero_num;
             p->pages->num = 0;
             p->pages->block = NULL;
             qemu_mutex_unlock(&p->mutex);
 
-            trace_multifd_send(p->id, packet_num, p->normal_num, flags,
-                               p->next_packet_size);
+            trace_multifd_send(p->id, packet_num, p->normal_num, p->zero_num,
+                               flags, p->next_packet_size);
 
             p->iov[0].iov_len = p->packet_len;
             p->iov[0].iov_base = p->packet;
@@ -709,7 +724,8 @@ out:
     qemu_mutex_unlock(&p->mutex);
 
     rcu_unregister_thread();
-    trace_multifd_send_thread_end(p->id, p->num_packets, p->total_normal_pages);
+    trace_multifd_send_thread_end(p->id, p->num_packets, p->total_normal_pages,
+                                  p->total_zero_pages);
 
     return NULL;
 }
@@ -910,6 +926,7 @@ int multifd_save_setup(Error **errp)
         /* We need one extra place for the packet header */
         p->iov = g_new0(struct iovec, page_count + 1);
         p->normal = g_new0(ram_addr_t, page_count);
+        p->zero = g_new0(ram_addr_t, page_count);
         socket_send_channel_create(multifd_new_send_channel_async, p);
     }
 
@@ -1011,6 +1028,8 @@ int multifd_load_cleanup(Error **errp)
         p->iov = NULL;
         g_free(p->normal);
         p->normal = NULL;
+        g_free(p->zero);
+        p->zero = NULL;
         multifd_recv_state->ops->recv_cleanup(p);
     }
     qemu_sem_destroy(&multifd_recv_state->sem_sync);
@@ -1084,10 +1103,11 @@ static void *multifd_recv_thread(void *opaque)
         flags = p->flags;
         /* recv methods don't know how to handle the SYNC flag */
         p->flags &= ~MULTIFD_FLAG_SYNC;
-        trace_multifd_recv(p->id, p->packet_num, p->normal_num, flags,
-                           p->next_packet_size);
+        trace_multifd_recv(p->id, p->packet_num, p->normal_num, p->zero_num,
+                           flags, p->next_packet_size);
         p->num_packets++;
         p->total_normal_pages += p->normal_num;
+        p->total_zero_pages += p->zero_num;
         qemu_mutex_unlock(&p->mutex);
 
         if (p->normal_num) {
@@ -1112,7 +1132,8 @@ static void *multifd_recv_thread(void *opaque)
     qemu_mutex_unlock(&p->mutex);
 
     rcu_unregister_thread();
-    trace_multifd_recv_thread_end(p->id, p->num_packets, p->total_normal_pages);
+    trace_multifd_recv_thread_end(p->id, p->num_packets, p->total_normal_pages,
+                                  p->total_zero_pages);
 
     return NULL;
 }
@@ -1150,6 +1171,7 @@ int multifd_load_setup(Error **errp)
         p->name = g_strdup_printf("multifdrecv_%d", i);
         p->iov = g_new0(struct iovec, page_count);
         p->normal = g_new0(ram_addr_t, page_count);
+        p->zero = g_new0(ram_addr_t, page_count);
     }
 
     for (i = 0; i < thread_count; i++) {
diff --git a/migration/ram.c b/migration/ram.c
index 1a642f1e70..141817d6a7 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1354,8 +1354,6 @@ static int ram_save_multifd_page(RAMState *rs, RAMBlock *block,
     if (multifd_queue_page(rs->f, block, offset) < 0) {
         return -1;
     }
-    ram_counters.normal++;
-
     return 1;
 }
 
diff --git a/migration/trace-events b/migration/trace-events
index 1aec580e92..d70e89dbb9 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -114,21 +114,21 @@ unqueue_page(char *block, uint64_t offset, bool dirty) "ramblock '%s' offset 0x%
 
 # multifd.c
 multifd_new_send_channel_async(uint8_t id) "channel %u"
-multifd_recv(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " pages %u flags 0x%x next packet size %u"
+multifd_recv(uint8_t id, uint64_t packet_num, uint32_t normal, uint32_t zero, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " normal pages %u zero pages %u flags 0x%x next packet size %u"
 multifd_recv_new_channel(uint8_t id) "channel %u"
 multifd_recv_sync_main(long packet_num) "packet num %ld"
 multifd_recv_sync_main_signal(uint8_t id) "channel %u"
 multifd_recv_sync_main_wait(uint8_t id) "channel %u"
 multifd_recv_terminate_threads(bool error) "error %d"
-multifd_recv_thread_end(uint8_t id, uint64_t packets, uint64_t pages) "channel %u packets %" PRIu64 " pages %" PRIu64
+multifd_recv_thread_end(uint8_t id, uint64_t packets, uint64_t normal_pages, uint64_t zero_pages) "channel %u packets %" PRIu64 " normal pages %" PRIu64 " zero pages %" PRIu64
 multifd_recv_thread_start(uint8_t id) "%u"
-multifd_send(uint8_t id, uint64_t packet_num, uint32_t normal, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " normal pages %u flags 0x%x next packet size %u"
+multifd_send(uint8_t id, uint64_t packet_num, uint32_t normalpages, uint32_t zero_pages, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " normal pages %u zero pages %u flags 0x%x next packet size %u"
 multifd_send_error(uint8_t id) "channel %u"
 multifd_send_sync_main(long packet_num) "packet num %ld"
 multifd_send_sync_main_signal(uint8_t id) "channel %u"
 multifd_send_sync_main_wait(uint8_t id) "channel %u"
 multifd_send_terminate_threads(bool error) "error %d"
-multifd_send_thread_end(uint8_t id, uint64_t packets, uint64_t normal_pages) "channel %u packets %" PRIu64 " normal pages %"  PRIu64
+multifd_send_thread_end(uint8_t id, uint64_t packets, uint64_t normal_pages, uint64_t zero_pages) "channel %u packets %" PRIu64 " normal pages %"  PRIu64 " zero pages %"  PRIu64
 multifd_send_thread_start(uint8_t id) "%u"
 multifd_tls_outgoing_handshake_start(void *ioc, void *tioc, const char *hostname) "ioc=%p tioc=%p hostname=%s"
 multifd_tls_outgoing_handshake_error(void *ioc, const char *err) "ioc=%p err=%s"
-- 
2.34.1




* [PATCH v5 7/8] multifd: Zero pages transmission
  2022-03-10 15:34 [PATCH v5 0/8] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (5 preceding siblings ...)
  2022-03-10 15:34 ` [PATCH v5 6/8] multifd: Support for zero pages transmission Juan Quintela
@ 2022-03-10 15:34 ` Juan Quintela
  2022-03-10 15:34 ` [PATCH v5 8/8] migration: Use multifd before we check for the zero page Juan Quintela
  7 siblings, 0 replies; 11+ messages in thread
From: Juan Quintela @ 2022-03-10 15:34 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Philippe Mathieu-Daudé,
	Peter Xu, Dr. David Alan Gilbert, Yanan Wang, Leonardo Bras

This implements the zero page detection and handling.

Signed-off-by: Juan Quintela <quintela@redhat.com>

---

Add comment for offset (dave)
Use local variables for offset/block to have shorter lines
---
 migration/multifd.h |  5 +++++
 migration/multifd.c | 42 ++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 45 insertions(+), 2 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index 06c52081ab..e84ce0ebcd 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -54,6 +54,11 @@ typedef struct {
     uint32_t unused32[1];    /* Reserved for future use */
     uint64_t unused64[3];    /* Reserved for future use */
     char ramblock[256];
+    /*
+     * This array contains the pointers to:
+     *  - normal pages (initial normal_pages entries)
+     *  - zero pages (following zero_pages entries)
+     */
     uint64_t offset[];
 } __attribute__((packed)) MultiFDPacket_t;
 
diff --git a/migration/multifd.c b/migration/multifd.c
index 41769ff99f..1d7b6ffe24 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -11,6 +11,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/cutils.h"
 #include "qemu/rcu.h"
 #include "exec/target_page.h"
 #include "sysemu/sysemu.h"
@@ -277,6 +278,12 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
 
         packet->offset[i] = cpu_to_be64(temp);
     }
+    for (i = 0; i < p->zero_num; i++) {
+        /* there are architectures where ram_addr_t is 32 bit */
+        uint64_t temp = p->zero[i];
+
+        packet->offset[p->normal_num + i] = cpu_to_be64(temp);
+    }
 }
 
 static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
@@ -362,6 +369,18 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
         p->normal[i] = offset;
     }
 
+    for (i = 0; i < p->zero_num; i++) {
+        uint64_t offset = be64_to_cpu(packet->offset[p->normal_num + i]);
+
+        if (offset > (block->used_length - page_size)) {
+            error_setg(errp, "multifd: offset too long %" PRIu64
+                       " (max " RAM_ADDR_FMT ")",
+                       offset, block->used_length);
+            return -1;
+        }
+        p->zero[i] = offset;
+    }
+
     return 0;
 }
 
@@ -624,6 +643,8 @@ static void *multifd_send_thread(void *opaque)
 {
     MultiFDSendParams *p = opaque;
     Error *local_err = NULL;
+    /* qemu older than 7.0 don't understand zero page on multifd channel */
+    bool use_zero_page = migrate_use_multifd_zero_page();
     int ret = 0;
 
     trace_multifd_send_thread_start(p->id);
@@ -645,6 +666,7 @@ static void *multifd_send_thread(void *opaque)
         qemu_mutex_lock(&p->mutex);
 
         if (p->pending_job) {
+            RAMBlock *rb = p->pages->block;
             uint64_t packet_num = p->packet_num;
             uint32_t flags = p->flags;
             p->iovs_num = 1;
@@ -652,8 +674,17 @@ static void *multifd_send_thread(void *opaque)
             p->zero_num = 0;
 
             for (int i = 0; i < p->pages->num; i++) {
-                p->normal[p->normal_num] = p->pages->offset[i];
-                p->normal_num++;
+                uint64_t offset = p->pages->offset[i];
+                size_t page_size = qemu_target_page_size();
+                if (use_zero_page &&
+                    buffer_is_zero(rb->host + offset, page_size)) {
+                    p->zero[p->zero_num] = offset;
+                    p->zero_num++;
+                    ram_release_page(rb->idstr, offset);
+                } else {
+                    p->normal[p->normal_num] = offset;
+                    p->normal_num++;
+                }
             }
 
             if (p->normal_num) {
@@ -1117,6 +1148,13 @@ static void *multifd_recv_thread(void *opaque)
             }
         }
 
+        for (int i = 0; i < p->zero_num; i++) {
+            void *page = p->host + p->zero[i];
+            if (!buffer_is_zero(page, qemu_target_page_size())) {
+                memset(page, 0, qemu_target_page_size());
+            }
+        }
+
         if (flags & MULTIFD_FLAG_SYNC) {
             qemu_sem_post(&multifd_recv_state->sem_sync);
             qemu_sem_wait(&p->sem_sync);
-- 
2.34.1




* [PATCH v5 8/8] migration: Use multifd before we check for the zero page
  2022-03-10 15:34 [PATCH v5 0/8] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
                   ` (6 preceding siblings ...)
  2022-03-10 15:34 ` [PATCH v5 7/8] multifd: Zero " Juan Quintela
@ 2022-03-10 15:34 ` Juan Quintela
  7 siblings, 0 replies; 11+ messages in thread
From: Juan Quintela @ 2022-03-10 15:34 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Juan Quintela, Philippe Mathieu-Daudé,
	Peter Xu, Dr. David Alan Gilbert, Yanan Wang, Leonardo Bras

So we use multifd to transmit zero pages.

Signed-off-by: Juan Quintela <quintela@redhat.com>

---

- Check zero_page property before using new code (Dave)
---
 migration/ram.c | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/migration/ram.c b/migration/ram.c
index 141817d6a7..628b5554ba 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2175,6 +2175,32 @@ static int ram_save_target_page_legacy(RAMState *rs, PageSearchStatus *pss)
     return ram_save_page(rs, pss);
 }
 
+/**
+ * ram_save_target_page_multifd: save one target page
+ *
+ * Returns the number of pages written
+ *
+ * @rs: current RAM state
+ * @pss: data about the page we want to send
+ */
+static int ram_save_target_page_multifd(RAMState *rs, PageSearchStatus *pss)
+{
+    RAMBlock *block = pss->block;
+    ram_addr_t offset = ((ram_addr_t)pss->page) << TARGET_PAGE_BITS;
+    int res;
+
+    if (!migration_in_postcopy()) {
+        return ram_save_multifd_page(rs, block, offset);
+    }
+
+    res = save_zero_page(rs, block, offset);
+    if (res > 0) {
+        return res;
+    }
+
+    return ram_save_page(rs, pss);
+}
+
 /**
  * ram_save_host_page: save a whole host page
  *
@@ -2938,7 +2964,11 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
     ram_control_before_iterate(f, RAM_CONTROL_SETUP);
     ram_control_after_iterate(f, RAM_CONTROL_SETUP);
 
-    (*rsp)->ram_save_target_page = ram_save_target_page_legacy;
+    if (migrate_use_multifd() && migrate_use_multifd_zero_page()) {
+        (*rsp)->ram_save_target_page = ram_save_target_page_multifd;
+    } else {
+        (*rsp)->ram_save_target_page = ram_save_target_page_legacy;
+    }
     multifd_send_sync_main(f);
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     qemu_fflush(f);
-- 
2.34.1




* Re: [PATCH v5 1/8] migration: Export ram_transferred_ram()
  2022-03-10 15:34 ` [PATCH v5 1/8] migration: Export ram_transferred_ram() Juan Quintela
@ 2022-03-21 11:50   ` Dr. David Alan Gilbert
  2022-03-29 11:48   ` David Edmondson
  1 sibling, 0 replies; 11+ messages in thread
From: Dr. David Alan Gilbert @ 2022-03-21 11:50 UTC (permalink / raw)
  To: Juan Quintela
  Cc: Eduardo Habkost, qemu-devel, Peter Xu,
	Philippe Mathieu-Daudé, Yanan Wang, Leonardo Bras

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.h | 2 ++
>  migration/ram.c | 2 +-
>  2 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/migration/ram.h b/migration/ram.h
> index 2c6dc3675d..2e27c49f90 100644
> --- a/migration/ram.h
> +++ b/migration/ram.h
> @@ -64,6 +64,8 @@ int ram_postcopy_incoming_init(MigrationIncomingState *mis);
>  
>  void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
>  
> +void ram_transferred_add(uint64_t bytes);
> +
>  int ramblock_recv_bitmap_test(RAMBlock *rb, void *host_addr);
>  bool ramblock_recv_bitmap_test_byte_offset(RAMBlock *rb, uint64_t byte_offset);
>  void ramblock_recv_bitmap_set(RAMBlock *rb, void *host_addr);
> diff --git a/migration/ram.c b/migration/ram.c
> index 170e522a1f..947ed44c89 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -394,7 +394,7 @@ uint64_t ram_bytes_remaining(void)
>  
>  MigrationStats ram_counters;
>  
> -static void ram_transferred_add(uint64_t bytes)
> +void ram_transferred_add(uint64_t bytes)
>  {
>      if (runstate_is_running()) {
>          ram_counters.precopy_bytes += bytes;
> -- 
> 2.34.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: [PATCH v5 1/8] migration: Export ram_transferred_ram()
  2022-03-10 15:34 ` [PATCH v5 1/8] migration: Export ram_transferred_ram() Juan Quintela
  2022-03-21 11:50   ` Dr. David Alan Gilbert
@ 2022-03-29 11:48   ` David Edmondson
  1 sibling, 0 replies; 11+ messages in thread
From: David Edmondson @ 2022-03-29 11:48 UTC (permalink / raw)
  To: Juan Quintela
  Cc: Eduardo Habkost, Dr. David Alan Gilbert, Peter Xu, qemu-devel,
	Yanan Wang, Leonardo Bras, Philippe Mathieu-Daudé

On Thursday, 2022-03-10 at 16:34:47 +01, Juan Quintela wrote:

> Subject: Re: [PATCH v5 1/8] migration: Export ram_transferred_ram()

The function is "ram_transferred_add()".

> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: David Edmondson <david.edmondson@oracle.com>

> ---
>  migration/ram.h | 2 ++
>  migration/ram.c | 2 +-
>  2 files changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/migration/ram.h b/migration/ram.h
> index 2c6dc3675d..2e27c49f90 100644
> --- a/migration/ram.h
> +++ b/migration/ram.h
> @@ -64,6 +64,8 @@ int ram_postcopy_incoming_init(MigrationIncomingState *mis);
>
>  void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
>
> +void ram_transferred_add(uint64_t bytes);
> +
>  int ramblock_recv_bitmap_test(RAMBlock *rb, void *host_addr);
>  bool ramblock_recv_bitmap_test_byte_offset(RAMBlock *rb, uint64_t byte_offset);
>  void ramblock_recv_bitmap_set(RAMBlock *rb, void *host_addr);
> diff --git a/migration/ram.c b/migration/ram.c
> index 170e522a1f..947ed44c89 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -394,7 +394,7 @@ uint64_t ram_bytes_remaining(void)
>
>  MigrationStats ram_counters;
>
> -static void ram_transferred_add(uint64_t bytes)
> +void ram_transferred_add(uint64_t bytes)
>  {
>      if (runstate_is_running()) {
>          ram_counters.precopy_bytes += bytes;

dme.
-- 
Oh well, I'm dressed up so nice and I'm doing my best.


