From: Fabiano Rosas <farosas@suse.de>
To: Peter Xu <peterx@redhat.com>
Cc: qemu-devel@nongnu.org,
"Maciej S . Szmigiero" <mail@maciej.szmigiero.name>
Subject: Re: [RFC PATCH v2 7/9] migration/multifd: Isolate ram pages packet data
Date: Mon, 22 Jul 2024 17:34:44 -0300 [thread overview]
Message-ID: <87h6chyxln.fsf@suse.de> (raw)
In-Reply-To: <Zp6089I1ozCg1Eaq@x1n>
Peter Xu <peterx@redhat.com> writes:
> On Mon, Jul 22, 2024 at 02:59:12PM -0300, Fabiano Rosas wrote:
>> While we cannot yet disentangle the multifd packet from page data, we
>> can make the code a bit cleaner by setting the page-related fields in
>> a separate function.
>>
>> Signed-off-by: Fabiano Rosas <farosas@suse.de>
>> ---
>> migration/multifd.c | 97 ++++++++++++++++++++++++++++-----------------
>> 1 file changed, 61 insertions(+), 36 deletions(-)
>>
>> diff --git a/migration/multifd.c b/migration/multifd.c
>> index fcdb12e04f..d25b8658b2 100644
>> --- a/migration/multifd.c
>> +++ b/migration/multifd.c
>> @@ -409,65 +409,62 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp)
>> return msg.id;
>> }
>>
>> -void multifd_send_fill_packet(MultiFDSendParams *p)
>> +static void multifd_ram_fill_packet(MultiFDSendParams *p)
>> {
>> MultiFDPacket_t *packet = p->packet;
>> MultiFDPages_t *pages = &p->data->u.ram;
>> - uint64_t packet_num;
>> uint32_t zero_num = pages->num - pages->normal_num;
>> - int i;
>>
>> - packet->flags = cpu_to_be32(p->flags);
>> packet->pages_alloc = cpu_to_be32(pages->allocated);
>> packet->normal_pages = cpu_to_be32(pages->normal_num);
>> packet->zero_pages = cpu_to_be32(zero_num);
>> - packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>
> Definitely good intention, but I had a feeling that this will need to be
> reorganized again when Maciej reworks on top, due to the fact that
> next_packet_size will be ram-private field, simply because it's defined
> after pages_alloc and normal_pages...
>
> E.g., see:
>
> https://lore.kernel.org/r/41dedaf2c9abebb5e45f88c052daa26320715a92.1718717584.git.maciej.szmigiero@oracle.com
>
> Where the new MultiFDPacketHdr_t cannot include next_packet_size (even if
> VFIO will need that too..).
Isn't it just a matter of setting next_packet_size in the other path as
well?
@Maciej, can you comment?
>
> typedef struct {
> uint32_t magic;
> uint32_t version;
> uint32_t flags;
> } __attribute__((packed)) MultiFDPacketHdr_t;
>
> So _maybe_ it's easier if we drop this patch and leave that part to
> Maciej to identify which is common and which is ram/vfio specific. No
> strong opinions here.
>
I could drop it if that's preferable. However, patch 8/9 is absolutely
necessary so we can remove this madness of having to clear the
MultiFDPages_t fields at the multifd_send_thread() top level. It took me
a whole day to figure that one out, and that bug has been manifesting
ever since I started working on multifd.

I'm not sure how we'll do that without this patch. Maybe it's better if
I fix in this one whatever you think needs fixing.
>> -
>> - packet_num = qatomic_fetch_inc(&multifd_send_state->packet_num);
>> - packet->packet_num = cpu_to_be64(packet_num);
>>
>> if (pages->block) {
>> strncpy(packet->ramblock, pages->block->idstr, 256);
>> }
>>
>> - for (i = 0; i < pages->num; i++) {
>> + for (int i = 0; i < pages->num; i++) {
>> /* there are architectures where ram_addr_t is 32 bit */
>> uint64_t temp = pages->offset[i];
>>
>> packet->offset[i] = cpu_to_be64(temp);
>> }
>>
>> - p->packets_sent++;
>> p->total_normal_pages += pages->normal_num;
>> p->total_zero_pages += zero_num;
>> +}
>>
>> - trace_multifd_send(p->id, packet_num, pages->normal_num, zero_num,
>> +void multifd_send_fill_packet(MultiFDSendParams *p)
>> +{
>> + MultiFDPacket_t *packet = p->packet;
>> + uint64_t packet_num;
>> +
>> + memset(packet, 0, p->packet_len);
>> +
>> + packet->magic = cpu_to_be32(MULTIFD_MAGIC);
>> + packet->version = cpu_to_be32(MULTIFD_VERSION);
>> +
>> + packet->flags = cpu_to_be32(p->flags);
>> + packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>> +
>> + packet_num = qatomic_fetch_inc(&multifd_send_state->packet_num);
>> + packet->packet_num = cpu_to_be64(packet_num);
>> +
>> + p->packets_sent++;
>> +
>> + multifd_ram_fill_packet(p);
>> +
>> + trace_multifd_send(p->id, packet_num,
>> + be32_to_cpu(packet->normal_pages),
>> + be32_to_cpu(packet->zero_pages),
>> p->flags, p->next_packet_size);
>> }
>>
>> -static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>> +static int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp)
>> {
>> MultiFDPacket_t *packet = p->packet;
>> int i;
>>
>> - packet->magic = be32_to_cpu(packet->magic);
>> - if (packet->magic != MULTIFD_MAGIC) {
>> - error_setg(errp, "multifd: received packet "
>> - "magic %x and expected magic %x",
>> - packet->magic, MULTIFD_MAGIC);
>> - return -1;
>> - }
>> -
>> - packet->version = be32_to_cpu(packet->version);
>> - if (packet->version != MULTIFD_VERSION) {
>> - error_setg(errp, "multifd: received packet "
>> - "version %u and expected version %u",
>> - packet->version, MULTIFD_VERSION);
>> - return -1;
>> - }
>> -
>> - p->flags = be32_to_cpu(packet->flags);
>> -
>> packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
>> /*
>> * If we received a packet that is 100 times bigger than expected
>> @@ -496,15 +493,9 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>> return -1;
>> }
>>
>> - p->next_packet_size = be32_to_cpu(packet->next_packet_size);
>> - p->packet_num = be64_to_cpu(packet->packet_num);
>> - p->packets_recved++;
>> p->total_normal_pages += p->normal_num;
>> p->total_zero_pages += p->zero_num;
>>
>> - trace_multifd_recv(p->id, p->packet_num, p->normal_num, p->zero_num,
>> - p->flags, p->next_packet_size);
>> -
>> if (p->normal_num == 0 && p->zero_num == 0) {
>> return 0;
>> }
>> @@ -546,6 +537,40 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>> return 0;
>> }
>>
>> +static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
>> +{
>> + MultiFDPacket_t *packet = p->packet;
>> + int ret = 0;
>> +
>> + packet->magic = be32_to_cpu(packet->magic);
>> + if (packet->magic != MULTIFD_MAGIC) {
>> + error_setg(errp, "multifd: received packet "
>> + "magic %x and expected magic %x",
>> + packet->magic, MULTIFD_MAGIC);
>> + return -1;
>> + }
>> +
>> + packet->version = be32_to_cpu(packet->version);
>> + if (packet->version != MULTIFD_VERSION) {
>> + error_setg(errp, "multifd: received packet "
>> + "version %u and expected version %u",
>> + packet->version, MULTIFD_VERSION);
>> + return -1;
>> + }
>> +
>> + p->flags = be32_to_cpu(packet->flags);
>> + p->next_packet_size = be32_to_cpu(packet->next_packet_size);
>> + p->packet_num = be64_to_cpu(packet->packet_num);
>> + p->packets_recved++;
>> +
>> + ret = multifd_ram_unfill_packet(p, errp);
>> +
>> + trace_multifd_recv(p->id, p->packet_num, p->normal_num, p->zero_num,
>> + p->flags, p->next_packet_size);
>> +
>> + return ret;
>> +}
>> +
>> static bool multifd_send_should_exit(void)
>> {
>> return qatomic_read(&multifd_send_state->exiting);
>> --
>> 2.35.3
>>