From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Juan Quintela <quintela@redhat.com>
Cc: "Eduardo Habkost" <eduardo@habkost.net>,
qemu-devel@nongnu.org, "Peter Xu" <peterx@redhat.com>,
"Philippe Mathieu-Daudé" <f4bug@amsat.org>,
"Yanan Wang" <wangyanan55@huawei.com>,
"Leonardo Bras" <leobras@redhat.com>
Subject: Re: [PATCH v4 21/23] multifd: Zero pages transmission
Date: Tue, 18 Jan 2022 19:55:35 +0000 [thread overview]
Message-ID: <YecbN5MbUvL3oVKm@work-vm> (raw)
In-Reply-To: <20220111130024.5392-22-quintela@redhat.com>
* Juan Quintela (quintela@redhat.com) wrote:
> This implements the zero page detection and handling.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> ---
>
> Add comment for offset (dave)
> ---
> migration/multifd.h | 4 ++++
> migration/multifd.c | 36 ++++++++++++++++++++++++++++++++++--
> 2 files changed, 38 insertions(+), 2 deletions(-)
>
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 4c6d29c954..d014747122 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -54,6 +54,10 @@ typedef struct {
> uint32_t unused32[1]; /* Reserved for future use */
> uint64_t unused64[3]; /* Reserved for future use */
> char ramblock[256];
> + /* This array contains the pointers to:
> + - normal pages (initial normal_pages entries)
> + - zero pages (following zero_pages entries)
> + */
> uint64_t offset[];
> } __attribute__((packed)) MultiFDPacket_t;
>
> diff --git a/migration/multifd.c b/migration/multifd.c
> index cfa9f75d13..ded13289e7 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -11,6 +11,7 @@
> */
>
> #include "qemu/osdep.h"
> +#include "qemu/cutils.h"
> #include "qemu/rcu.h"
> #include "exec/target_page.h"
> #include "sysemu/sysemu.h"
> @@ -277,6 +278,12 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>
> packet->offset[i] = cpu_to_be64(temp);
> }
> + for (i = 0; i < p->zero_num; i++) {
> + /* there are architectures where ram_addr_t is 32 bit */
> + uint64_t temp = p->zero[i];
> +
> + packet->offset[p->normal_num + i] = cpu_to_be64(temp);
> + }
> }
>
> static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
> @@ -362,6 +369,18 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
> p->normal[i] = offset;
> }
>
> + for (i = 0; i < p->zero_num; i++) {
> + uint64_t offset = be64_to_cpu(packet->offset[p->normal_num + i]);
> +
> + if (offset > (block->used_length - page_size)) {
> + error_setg(errp, "multifd: offset too long %" PRIu64
> + " (max " RAM_ADDR_FMT ")",
> + offset, block->used_length);
> + return -1;
> + }
> + p->zero[i] = offset;
> + }
> +
> return 0;
> }
>
> @@ -627,6 +646,8 @@ static void *multifd_send_thread(void *opaque)
> {
> MultiFDSendParams *p = opaque;
> Error *local_err = NULL;
> + /* QEMU older than 7.0 doesn't understand zero pages on the multifd channel */
> + bool use_zero_page = migrate_use_multifd_zero_page();
> int ret = 0;
>
> trace_multifd_send_thread_start(p->id);
> @@ -655,8 +676,15 @@ static void *multifd_send_thread(void *opaque)
> p->zero_num = 0;
>
> for (int i = 0; i < p->pages->num; i++) {
> - p->normal[p->normal_num] = p->pages->offset[i];
> - p->normal_num++;
> + if (use_zero_page &&
> + buffer_is_zero(p->pages->block->host + p->pages->offset[i],
> + qemu_target_page_size())) {
> + p->zero[p->zero_num] = p->pages->offset[i];
> + p->zero_num++;
> + } else {
> + p->normal[p->normal_num] = p->pages->offset[i];
> + p->normal_num++;
> + }
> }
>
> if (p->normal_num) {
> @@ -1115,6 +1143,10 @@ static void *multifd_recv_thread(void *opaque)
> }
> }
>
> + for (int i = 0; i < p->zero_num; i++) {
> + memset(p->host + p->zero[i], 0, qemu_target_page_size());
> + }
> +
In the existing code, it tries to avoid doing the memset if the target
page size matches; that avoids allocating the zero pages on the
destination host. Should we try to do the same here?
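For reference, the shape of that existing receive-side check is roughly
the following (a sketch only, not the actual migration/ram.c code;
`range_is_zero` and `recv_zero_page` are illustrative names). The point
is that a page we never write to stays backed by the kernel's shared
zero page, so no anonymous memory gets allocated on the destination:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Return 1 if the whole range is zero bytes. */
static int range_is_zero(const uint8_t *p, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (p[i] != 0) {
            return 0;
        }
    }
    return 1;
}

/* Make the page zero, but only touch it when it actually has stale
 * non-zero data (e.g. from a previous migration iteration).  Skipping
 * the memset on an untouched page avoids faulting it in on the
 * destination host. */
static void recv_zero_page(uint8_t *host, size_t page_size)
{
    if (!range_is_zero(host, page_size)) {
        memset(host, 0, page_size);
    }
}
```
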
Dave
> if (flags & MULTIFD_FLAG_SYNC) {
> qemu_sem_post(&multifd_recv_state->sem_sync);
> qemu_sem_wait(&p->sem_sync);
> --
> 2.34.1
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK