From: Fabiano Rosas <farosas@suse.de>
To: Prasad Pandit <ppandit@redhat.com>
Cc: "Peter Xu" <peterx@redhat.com>,
qemu-devel@nongnu.org,
"Maciej S . Szmigiero" <mail@maciej.szmigiero.name>,
"Cédric Le Goater" <clg@redhat.com>
Subject: Re: [PATCH 1/2] migration: Add some documentation for multifd
Date: Thu, 20 Mar 2025 10:38:47 -0300
Message-ID: <878qozbz4o.fsf@suse.de>
In-Reply-To: <CAE8KmOwkLoPB=wLuE5WC0HERzmUqAqjP9ZECTvxBELaN31yBVQ@mail.gmail.com>
Prasad Pandit <ppandit@redhat.com> writes:
> On Tue, 11 Mar 2025 at 00:59, Fabiano Rosas <farosas@suse.de> wrote:
>> Peter Xu <peterx@redhat.com> writes:
>> > To me, this is a fairly important question to ask. Fundamentally, the very
>> > initial question is why we need periodic flush and sync at all. It's
>> > because we want to make sure new versions of pages land later than old
>> > versions.
> ...
>> > Then v1 and v2 of page P are ordered.
>> > If there is no message on the main channel:
>> > Then I don't see what prevents reordered arrival of messages like:
> ...
>> That's all fine. As long as the recv part doesn't see them out of
>> order. I'll try to write some code to confirm so I don't waste too much
>> of your time.
>
> * Relying on this receive order seems like a passive solution. On the
> one hand we are saying there is no defined 'requirement' on the network
> or compute capacity/quality for migration, i.e. compute and network can
> be as bad as possible, yet migration shall always work reliably.
>
> * When receiving different versions of pages, couldn't multifd_recv
> check the latest version present in guest RAM and accept the incoming
> version only if it is fresher than the one already present? I.e. if v1
> arrives later than v2 on the receive side, the receive side
> could/should discard v1 because v2 has already been received.
>
"in guest RAM" I don't think so, the performance would probably be
affected. We could have a sequence number that gets bumped per
iteration, but I'm not sure how much of a improvement that would be.
Without a sync, we'd need some sort of per-page handling*. I have a gut
feeling this would get costly.
*- maybe per-iovec depending on how we queue pages to multifd.
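
For illustration only, here's a rough, self-contained sketch of what that
per-page sequence check might look like. Every name in it (page_seq,
recv_page_if_newer, the toy guest_ram array) is made up for this example
and is not existing QEMU code; it's just to show why this turns into
per-page work on the receive path: each incoming copy needs a lookup and
compare before it can be applied.

/*
 * Hypothetical sketch of the "accept only fresher pages" idea, assuming
 * a per-page sequence number bumped once per migration iteration.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NR_PAGES   8          /* toy RAM size, in pages */
#define PAGE_SIZE  4096

static uint64_t page_seq[NR_PAGES];        /* last accepted iteration per page */
static uint8_t  guest_ram[NR_PAGES][PAGE_SIZE];

/* Apply the incoming copy only if it is not older than what we already have. */
static bool recv_page_if_newer(uint64_t page_idx, uint64_t iter,
                               const uint8_t *data)
{
    if (iter < page_seq[page_idx]) {
        return false;          /* stale version, e.g. v1 arriving after v2 */
    }
    page_seq[page_idx] = iter;
    memcpy(guest_ram[page_idx], data, PAGE_SIZE);
    return true;
}

int main(void)
{
    uint8_t v1[PAGE_SIZE], v2[PAGE_SIZE];

    memset(v1, 0x11, sizeof(v1));   /* page P, iteration 1 */
    memset(v2, 0x22, sizeof(v2));   /* page P, iteration 2 */

    /* Channels deliver out of order: v2 lands before v1. */
    printf("v2 accepted: %d\n", recv_page_if_newer(0, 2, v2));  /* 1 */
    printf("v1 accepted: %d\n", recv_page_if_newer(0, 1, v1));  /* 0: discarded */
    printf("page 0 byte: 0x%02x\n", guest_ram[0][0]);           /* 0x22 */
    return 0;
}

The compare-before-copy is the part I'd worry about: it sits on the hot
path for every page (or iovec) received, which is the cost I was alluding
to above.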
> Thank you.
> ---
> - Prasad