From: Peter Xu <peterx@redhat.com>
To: Fabiano Rosas <farosas@suse.de>
Cc: qemu-devel@nongnu.org, berrange@redhat.com, armbru@redhat.com,
Juan Quintela <quintela@redhat.com>,
Leonardo Bras <leobras@redhat.com>,
Claudio Fontana <cfontana@suse.de>
Subject: Re: [PATCH v2 21/29] migration/multifd: Add pages to the receiving side
Date: Wed, 1 Nov 2023 11:55:02 -0400
Message-ID: <ZUJ01lcAJS1PaAIw@x1n>
In-Reply-To: <87il6mcrf5.fsf@suse.de>
On Tue, Oct 31, 2023 at 08:18:06PM -0300, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
>
> > On Mon, Oct 23, 2023 at 05:36:00PM -0300, Fabiano Rosas wrote:
> >> Currently multifd does not need to have knowledge of pages on the
> >> receiving side because all the information needed is within the
> >> packets that come in the stream.
> >>
> >> We're about to add support for fixed-ram migration, which cannot use
> >> packets because it expects the ramblock section in the migration file
> >> to contain only the guest page data.
> >>
> >> Add a pointer to MultiFDPages in the multifd_recv_state and use the
> >> pages similarly to what we already do on the sending side. The pages
> >> are used to transfer data between the ram migration code in the main
> >> migration thread and the multifd receiving threads.
> >>
> >> Signed-off-by: Fabiano Rosas <farosas@suse.de>
> >
> > If it'll be new code to maintain anyway, I think we don't necessarily
> > have to always use the multifd structs, right?
> >
>
> For the sending side, unrelated to this series, I'm experimenting with
> defining a generic structure to be passed into multifd:
>
> struct MultiFDData_t {
>     void *opaque;
>     size_t size;
>     bool ready;
>     void (*cleanup_fn)(void *);
> };
>
> The client code (ram.c) would use the opaque field to put whatever it
> wants in it. Maybe we could have a similar concept on the receiving
> side?
>
> Here's a PoC I'm writing, if you're interested:
>
> https://github.com/farosas/qemu/commits/multifd-packet-cleanups
>
> (I'm delaying sending this to the list because we already have a
> reasonable backlog of features and refactorings to merge.)
I went through the idea, and I agree it's reasonable to generalize multifd
to drop the page constraints. Actually, I'm wondering whether it would be
better to have a thread pool model for migration, with multifd built on top
of that.

Something like: job submission, proper locking, notifications, quits, etc.,
with a set of APIs to manipulate the thread pool.

And actually.. I just noticed we already have one. :) See
util/thread-pool.c. I didn't have a closer look, but it looks like
something we could either build on top of (e.g., I don't think we want the
bottom halves..) or refactor to satisfy all our needs from the migration
POV. Not something I'm asking for right away, but maybe we can at least
keep an eye on it.
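To sketch the kind of interface I mean (every name below is hypothetical;
it only shows the shape of the job-submission model, not existing code):

  /* Hypothetical migration thread-pool API -- nothing here exists yet. */
  typedef struct MigThreadPool MigThreadPool;
  typedef void (*MigThreadPoolFunc)(void *opaque);

  MigThreadPool *mig_thread_pool_new(const char *name, int nthreads);
  void mig_thread_pool_submit(MigThreadPool *pool,
                              MigThreadPoolFunc fn, void *opaque);
  void mig_thread_pool_wait(MigThreadPool *pool);    /* sync/flush point */
  void mig_thread_pool_free(MigThreadPool *pool);    /* quit + join */

  /* multifd send could then become a thin job on top of it, e.g.: */
  static void multifd_send_job(void *opaque)
  {
      MultiFDSendParams *p = opaque;
      Error *local_err = NULL;

      if (qio_channel_writev_all(p->c, p->iov, p->iovs_num, &local_err) < 0) {
          error_report_err(local_err);  /* real code would flag the error */
      }
  }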
>
> > Rather than introducing MultiFDPages_t on the recv side, can we allow
> > pages to be distributed in chunks of (ramblock, start_offset,
> > end_offset) tuples? That'll be much more efficient than per-page. We
> > don't need page granularity here on the recv side; we want to load
> > chunks of memory fast.
> >
> > We don't even need page granularity on the sender side, but since only
> > I cared about perf.. and obviously the plan is to eventually drop
> > auto-pause, so the VM can be running there, the sender must do it
> > per-page for now. But on the recv side the VM must be stopped until
> > all RAM is loaded, so there's no such problem. And since we'll
> > introduce new code anyway, IMHO we can decide how to do that even if
> > we want to reuse multifd.
> >
> > The main thread can assign these (ramblock, start_offset, end_offset)
> > jobs to recv threads. If a ramblock is too small (e.g. 1M), assign it
> > to one thread anyway. If a ramblock is >512MB, cut it into slices and
> > feed them to the multifd threads one by one. All the rest can stay the
> > same.
> >
> > Would that be better? I would expect a measurable loading speed
> > difference with much larger chunks and those range-based tuples.
>
> I need to check how that would interact with the existing recv_thread
> code. Hopefully there's nothing there preventing us from using a
> different data structure.
Sure, thanks. Maybe there's a good middle ground between "fewer code
changes" and "easily maintainable", if that helps get this series merged.

What I want to make sure of is that we don't end up introducing new
complicated logic that still doesn't do the job as correctly as we could.
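For concreteness, the range-based slicing I described above could look
roughly like the sketch below; the 512MB cap and the submit helper are
placeholders, not existing code:

  /* Sketch only: multifd_recv_submit_range() stands in for however a
   * (ramblock, start, end) job would actually reach a recv thread. */
  #define RECV_SLICE_SIZE (512 * MiB)

  static void multifd_recv_assign_ranges(RAMBlock *block)
  {
      uint64_t offset = 0;

      while (offset < block->used_length) {
          uint64_t len = MIN(RECV_SLICE_SIZE, block->used_length - offset);

          /* small blocks (e.g. 1M) still become a single job */
          multifd_recv_submit_range(block, offset, offset + len);
          offset += len;
      }
  }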
--
Peter Xu