From: Peter Xu <peterx@redhat.com>
To: Fabiano Rosas <farosas@suse.de>
Cc: qemu-devel@nongnu.org,
"Maciej S . Szmigiero" <mail@maciej.szmigiero.name>
Subject: Re: [PATCH v3 12/14] migration/multifd: Allow multifd sync without flush
Date: Thu, 22 Aug 2024 13:36:59 -0400 [thread overview]
Message-ID: <Zsd3O_OCbpyctm-K@x1n> (raw)
In-Reply-To: <87ttfcxzc5.fsf@suse.de>
On Thu, Aug 22, 2024 at 02:05:30PM -0300, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
>
> > On Thu, Aug 22, 2024 at 12:03:47PM -0400, Peter Xu wrote:
> >> On Thu, Aug 01, 2024 at 09:35:14AM -0300, Fabiano Rosas wrote:
> >> > Separate the multifd sync from flushing the client data to the
> >> > channels. These two operations are closely related but not strictly
> >> > necessary to be executed together.
> >> >
> >> > The multifd sync is intrinsic to how multifd works. The multiple
> >> > channels operate independently and may finish IO out of order in
> >> > relation to each other. This applies also between the source and
> >> > destination QEMU.
> >> >
> >> > Flushing the data that is left in the client-owned data structures
> >> > (e.g. MultiFDPages_t) prior to sync is usually the right thing to do,
> >> > but that is particular to how the ram migration is implemented with
> >> > several passes over dirty data.
> >> >
> >> > Make these two routines separate, allowing future code to call the
> >> > sync by itself if needed. This also allows the usage of
> >> > multifd_ram_send to be isolated to ram code.
> >>
> >> What's the usage of sync but not flush here?
> >
> > Oh I think I see your point.. I think flush+sync is always needed; it's
> > just that RAM may not always be the one doing the flush, it could be
> > something else.  Makes sense then.
> >
>
> I'm thinking of "flush" here as a last multifd_send() before sync. We
> need multiple multifd_send() along the migration to send the data, but
> we might not need this extra flush. It could be that there's nothing to
> flush and the code guarantees it:
>
> <populate MultiFDSendData>
> multifd_send()
> sync
>
> Where RAM currently does:
>
> multifd_queue_page()
> multifd_queue_page()
> multifd_queue_page()
> ...
> multifd_queue_page()
> multifd_send()
> sync
>
> Today there is a multifd_send() inside multifd_queue_page() and the
> amount sent depends on the ram.c code. At the time sync gets called,
> there could be data queued but not yet sent. Another client (not ram)
> could just produce data in a deterministic manner and match that with
> calls to multifd_send().
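
To restate the RAM path today as a rough sketch (multifd_send_sync_main()
standing in for "sync", and the batch-full logic paraphrased rather than the
exact code):

    while (dirty pages remain) {
        /* queues the page into MultiFDPages_t; internally kicks
         * multifd_send() whenever the batch fills up */
        multifd_queue_page(block, offset);
    }
    /* the "flush": push out whatever partial batch is still queued */
    multifd_send();
    /* the sync across the channels (and with the destination) */
    multifd_send_sync_main();

So at sync time there can be a partially filled batch that still needs that
one last multifd_send().
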
I hope I read it right.. I suppose you meant we'd have the chance to do:

    ram_send()
    vfio_send()
    flush()

Instead of:

    ram_send()
    flush()
    vfio_send()
    flush()

Am I right?
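
Mapping that shorthand onto the code, that would be roughly (vfio_send() is
purely hypothetical here, and multifd_send_sync_main() stands in for the sync
part of flush()):

    /* each client flushes its own data, one sync at the end */
    multifd_ram_send();            /* RAM pushes its pending pages        */
    vfio_send();                   /* hypothetical second multifd client  */
    multifd_send_sync_main();

    /* instead of a full flush+sync round per client */
    multifd_ram_send();
    multifd_send_sync_main();
    vfio_send();                   /* hypothetical */
    multifd_send_sync_main();

IOW the bare sync becomes callable on its own once each client has done its
own flush beforehand.
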
>
> > If you want, you may touch up the commit message to clarify that. E.g. I
> > still don't see any use case where we want to sync without a flush, so
> > that part might be a bit ambiguous.
> >
> > If my understanding is correct, take this:
> >
> > Reviewed-by: Peter Xu <peterx@redhat.com>
>
--
Peter Xu