From: Fabiano Rosas <farosas@suse.de>
To: qemu-devel@nongnu.org
Cc: Peter Xu <peterx@redhat.com>,
"Maciej S . Szmigiero" <mail@maciej.szmigiero.name>
Subject: Re: [RFC PATCH 5/7] migration/multifd: Isolate ram pages packet data
Date: Fri, 19 Jul 2024 11:40:29 -0300 [thread overview]
Message-ID: <87a5id1m3m.fsf@suse.de> (raw)
In-Reply-To: <20240620212111.29319-6-farosas@suse.de>

Fabiano Rosas <farosas@suse.de> writes:

> While we cannot yet disentangle the multifd packet from page data, we
> can make the code a bit cleaner by setting the page-related fields in
> a separate function.
>
> Signed-off-by: Fabiano Rosas <farosas@suse.de>
> ---
> migration/multifd.c | 104 +++++++++++++++++++++++++++++---------------
> 1 file changed, 68 insertions(+), 36 deletions(-)
>
> diff --git a/migration/multifd.c b/migration/multifd.c
> index c4a952576d..6fe339b378 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -407,65 +407,64 @@ static void multifd_pages_clear(MultiFDPages_t *pages)
>      g_free(pages);
>  }
>
> -void multifd_send_fill_packet(MultiFDSendParams *p)
> +static void multifd_ram_fill_packet(MultiFDSendParams *p)
>  {
>      MultiFDPacket_t *packet = p->packet;
>      MultiFDPages_t *pages = p->data->opaque;
> -    uint64_t packet_num;
>      uint32_t zero_num = pages->num - pages->normal_num;
> -    int i;
>
> -    packet->flags = cpu_to_be32(p->flags);
>      packet->pages_alloc = cpu_to_be32(pages->allocated);
>      packet->normal_pages = cpu_to_be32(pages->normal_num);
>      packet->zero_pages = cpu_to_be32(zero_num);
> -    packet->next_packet_size = cpu_to_be32(p->next_packet_size);
> -
> -    packet_num = qatomic_fetch_inc(&multifd_send_state->packet_num);
> -    packet->packet_num = cpu_to_be64(packet_num);
>
>      if (pages->block) {
>          strncpy(packet->ramblock, pages->block->idstr, 256);
>      }
>
> -    for (i = 0; i < pages->num; i++) {
> +    for (int i = 0; i < pages->num; i++) {
>          /* there are architectures where ram_addr_t is 32 bit */
>          uint64_t temp = pages->offset[i];
>
>          packet->offset[i] = cpu_to_be64(temp);
>      }
>
> -    p->packets_sent++;
>      p->total_normal_pages += pages->normal_num;
>      p->total_zero_pages += zero_num;
> +}
>
> -    trace_multifd_send(p->id, packet_num, pages->normal_num, zero_num,
> +void multifd_send_fill_packet(MultiFDSendParams *p)
> +{
> +    MultiFDPacket_t *packet = p->packet;
> +    uint64_t packet_num;
> +
> +    memset(packet, 0, p->packet_len);
> +
> +    packet->magic = cpu_to_be32(MULTIFD_MAGIC);
> +    packet->version = cpu_to_be32(MULTIFD_VERSION);
> +
> +    packet->flags = cpu_to_be32(p->flags);
> +    packet->next_packet_size = cpu_to_be32(p->next_packet_size);
> +
> +    packet_num = qatomic_fetch_inc(&multifd_send_state->packet_num);
> +    packet->packet_num = cpu_to_be64(packet_num);
> +
> +    p->packets_sent++;
> +
> +    if (p->data) {
This needs to be !(p->flags & MULTIFD_FLAG_SYNC). In v2 I'll add it in a
separate patch to make it clear:
-->8--
From e0dd1e0f10b6adb5d419ff68c1ef3b76d2fcf1d4 Mon Sep 17 00:00:00 2001
From: Fabiano Rosas <farosas@suse.de>
Date: Fri, 19 Jul 2024 11:28:33 -0300
Subject: [PATCH] migration/multifd: Don't send ram data during SYNC

Skip saving and loading any ram data in the packet in the case of a
SYNC. This fixes a shortcoming of the current code which requires a
reset of the MultiFDPages_t fields right after the previous
pending_job finishes, otherwise the very next job might be a SYNC and
multifd_send_fill_packet() will put the stale values in the packet.

By not calling multifd_ram_fill_packet(), we can stop resetting
MultiFDPages_t in the multifd core and leave that to the client code.

Actually moving the reset function is not yet done because
pages->num==0 is used by the client code to determine whether the
MultiFDPages_t needs to be flushed. The subsequent patches will
replace that with a generic flag that is not dependent on
MultiFDPages_t.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index 3809890082..6e6e62d352 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -431,6 +431,7 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
 {
     MultiFDPacket_t *packet = p->packet;
     uint64_t packet_num;
+    bool sync_packet = p->flags & MULTIFD_FLAG_SYNC;
 
     memset(packet, 0, p->packet_len);
 
@@ -445,7 +446,9 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
 
     p->packets_sent++;
 
-    multifd_ram_fill_packet(p);
+    if (!sync_packet) {
+        multifd_ram_fill_packet(p);
+    }
 
     trace_multifd_send(p->id, packet_num,
                        be32_to_cpu(packet->normal_pages),
@@ -556,7 +559,12 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
     p->packet_num = be64_to_cpu(packet->packet_num);
     p->packets_recved++;
 
-    ret = multifd_ram_unfill_packet(p, errp);
+    if (p->flags & MULTIFD_FLAG_SYNC) {
+        p->normal_num = 0;
+        p->zero_num = 0;
+    } else {
+        ret = multifd_ram_unfill_packet(p, errp);
+    }
 
     trace_multifd_recv(p->id, p->packet_num, p->normal_num, p->zero_num,
                        p->flags, p->next_packet_size);
--
2.35.3
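
Just to illustrate the intended behaviour outside of the QEMU tree, here's a
minimal standalone sketch (the Packet struct, SYNC_FLAG and fill_packet()
below are simplified stand-ins, not the real multifd types): the generic
header is filled unconditionally, while the ram-specific fields are only
filled for non-SYNC packets, so stale page counts can no longer leak into a
SYNC packet.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins for the real multifd types and flags. */
#define SYNC_FLAG 0x1u

typedef struct {
    uint32_t flags;
    uint32_t normal_pages;  /* ram-specific field */
    uint32_t zero_pages;    /* ram-specific field */
} Packet;

/*
 * Fill the generic header unconditionally; fill the ram-specific fields
 * only when this is not a SYNC packet, so whatever stale values the caller
 * still holds can never end up in a SYNC packet.
 */
static void fill_packet(Packet *pkt, uint32_t flags,
                        uint32_t normal, uint32_t zero)
{
    memset(pkt, 0, sizeof(*pkt));
    pkt->flags = flags;

    if (!(flags & SYNC_FLAG)) {
        pkt->normal_pages = normal;
        pkt->zero_pages = zero;
    }
}

int main(void)
{
    Packet pkt;

    /* SYNC packet: page counts stay zero regardless of the values passed. */
    fill_packet(&pkt, SYNC_FLAG, 42, 7);
    printf("sync: normal=%" PRIu32 " zero=%" PRIu32 "\n",
           pkt.normal_pages, pkt.zero_pages);

    /* Data packet: page counts are carried as usual. */
    fill_packet(&pkt, 0, 42, 7);
    printf("data: normal=%" PRIu32 " zero=%" PRIu32 "\n",
           pkt.normal_pages, pkt.zero_pages);

    return 0;
}

That split is what the patch above does with multifd_ram_fill_packet() on the
send side and multifd_ram_unfill_packet() on the receive side.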