From: Peter Xu <peterx@redhat.com>
To: Fabiano Rosas <farosas@suse.de>
Cc: qemu-devel@nongnu.org,
"Maciej S . Szmigiero" <mail@maciej.szmigiero.name>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>
Subject: Re: [PATCH v6 19/19] migration/multifd: Add documentation for multifd methods
Date: Tue, 27 Aug 2024 15:09:31 -0400
Message-ID: <Zs4ka2-q6JJbL1KA@x1n>
In-Reply-To: <87mskxx0ck.fsf@suse.de>
On Tue, Aug 27, 2024 at 03:54:51PM -0300, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
>
> > On Tue, Aug 27, 2024 at 02:46:06PM -0300, Fabiano Rosas wrote:
> >> Add documentation clarifying the usage of the multifd methods. The
> >> general idea is that the client code calls into multifd to trigger
> >> send/recv of data and multifd then calls these hooks back from the
> >> worker threads at opportune moments so the client can process a
> >> portion of the data.
> >>
> >> Suggested-by: Peter Xu <peterx@redhat.com>
> >> Signed-off-by: Fabiano Rosas <farosas@suse.de>
> >> ---
> >> Note that the doc is not symmetrical among send/recv because the recv
> >> side is still wonky. It doesn't give the packet to the hooks, which
> >> forces the p->normal, p->zero, etc. to be processed at the top level
> >> of the threads, where no client-specific information should be.
> >> ---
> >> migration/multifd.h | 76 +++++++++++++++++++++++++++++++++++++++++----
> >> 1 file changed, 70 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/migration/multifd.h b/migration/multifd.h
> >> index 13e7a88c01..ebb17bdbcf 100644
> >> --- a/migration/multifd.h
> >> +++ b/migration/multifd.h
> >> @@ -229,17 +229,81 @@ typedef struct {
> >> } MultiFDRecvParams;
> >>
> >> typedef struct {
> >> - /* Setup for sending side */
> >> + /*
> >> + * The send_setup, send_cleanup, send_prepare are only called on
> >> + * the QEMU instance at the migration source.
> >> + */
> >> +
> >> + /*
> >> + * Setup for sending side. Called once per channel during channel
> >> + * setup phase.
> >> + *
> >> + * Must allocate p->iov. If packets are in use (default), one
> >
> > Pure thoughts: I wonder whether we can assert(p->iov) after the hook
> > returns in the code, to match this line.
>
> Not worth the extra instructions in my opinion. It would crash
> immediately once the thread touches p->iov anyway.
It might still be good IMHO to have that assert(), not only to abort
earlier, but also to serve as a comment in code form. Your call when you
resend.
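To make it concrete, something along these lines right where the hook is
invoked, e.g. in multifd_send_setup() (rough sketch only, the surrounding
code is from memory and may not match the tree exactly):

  ret = multifd_send_state->ops->send_setup(p, &local_err);
  if (ret) {
      /* existing error handling stays as it is */
      ...
  }
  /*
   * Every method is now documented to allocate p->iov in send_setup();
   * abort here rather than when the send thread first dereferences
   * p->iov.
   */
  assert(p->iov);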
PS: feel free to queue existing patches into your own tree without
resending the whole series!
>
> >
> >> + * extra iovec must be allocated for the packet header. Any memory
> >> + * allocated in this hook must be released at send_cleanup.
> >> + *
> >> + * p->write_flags may be used for passing flags to the QIOChannel.
> >> + *
> >> + * p->compress_data may be used by compression methods to store
> >> + * compression data.
> >> + */
> >> int (*send_setup)(MultiFDSendParams *p, Error **errp);
> >> - /* Cleanup for sending side */
> >> +
> >> + /*
> >> + * Cleanup for sending side. Called once per channel during
> >> + * channel cleanup phase. May be empty.
> >
> > Hmm, if we require p->iov allocation per-ops, then they must free it here?
> > I wonder whether we leaked it in most compressors.
>
> Sorry, this one shouldn't have that text.
I still want to double-check with you: did we leak iov[] in most
compressors here, or did I overlook something?
That's definitely more important than the doc update itself.
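To spell out what I'd expect if the rule is "send_setup() allocates
p->iov": each method's send_cleanup() would need to end with something
like the below (hypothetical sketch, "xyz" is a made-up method name, not
a claim about what any compressor currently does):

  static void xyz_send_cleanup(MultiFDSendParams *p, Error **errp)
  {
      /* release method-specific state first (streams, buffers, ...) */

      /* pair with the allocation done in xyz_send_setup() */
      g_free(p->iov);
      p->iov = NULL;
  }

and then the assert(p->iov == NULL) mentioned above would catch whoever
forgets.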
>
> >
> > With that, I wonder whether we should also assert(p->iov == NULL) after
> > this one returns (squash in this same patch).
> >
> >> + */
> >> void (*send_cleanup)(MultiFDSendParams *p, Error **errp);
> >> - /* Prepare the send packet */
> >> +
> >> + /*
> >> + * Prepare the send packet. Called from multifd_send(), with p
> >
> > multifd_send_thread()?
>
> No, I meant called as a result of multifd_send(), which is the function
> the client uses to trigger a send on the thread.
OK, but it's confusing. Some of the rewordings you mentioned below could work.
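Maybe the comment could spell out the path explicitly; the way I read
the flow is roughly (hand-written sketch, not actual function bodies):

  /* client code, e.g. the ram migration path */
  multifd_send(...);      /* enqueue data, kick an idle channel */

  /* migration/multifd.c, per-channel worker */
  multifd_send_thread()
      -> ops->send_prepare(p, &local_err);   /* p is the channel that
                                                picked the data up */

so "called from multifd_send()" reads as if the client thread ran the
hook itself.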
>
> >
> >> + * pointing to the MultiFDSendParams of a channel that is
> >> + * currently idle.
> >> + *
> >> + * Must populate p->iov with the data to be sent, increment
> >> + * p->iovs_num to match the amount of iovecs used and set
> >> + * p->next_packet_size with the amount of data currently present
> >> + * in p->iov.
> >> + *
> >> + * Must indicate whether this is a compression packet by setting
> >> + * p->flags.
> >
> > Sigh.. I wonder whether we could avoid mentioning this, and also avoid
> > adding new flags for new compressors, relying on libvirt to guard things.
> > Then when we have the handshakes, that's something we can verify there.
> >
>
> I understand that part is not in the best shape, but we must document
> the current state. There's no problem changing this later.
>
> Besides, there's the whole "the migration stream should be considered
> hostile" argument, which might mean we should really keep these sanity
> check flags around in case something really weird happens, so we don't
> carry on with a bad stream.
Yep, it's OK.
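For the record, the sanity check we'd be keeping is basically the
per-method test at the top of each recv() implementation, something like
(paraphrased from memory, not an exact quote of any file):

  uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;

  if (flags != MULTIFD_FLAG_ZLIB) {
      error_setg(errp, "multifd %u: flags received %x, expected %x",
                 p->id, flags, MULTIFD_FLAG_ZLIB);
      return -1;
  }

Cheap enough, and it stops us from feeding garbage to the decompressor
if the stream somehow got mixed up.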
>
> >> + *
> >> + * As a last step, if packets are in use (default), must prepare
> >> + * the packet by calling multifd_send_fill_packet().
> >> + */
> >> int (*send_prepare)(MultiFDSendParams *p, Error **errp);
> >> - /* Setup for receiving side */
> >> +
> >> + /*
> >> + * The recv_setup, recv_cleanup, recv are only called on the QEMU
> >> + * instance at the migration destination.
> >> + */
> >> +
> >> + /*
> >> + * Setup for receiving side. Called once per channel during
> >> + * channel setup phase. May be empty.
> >> + *
> >> + * May allocate data structures for the receiving of data. May use
> >> + * p->iov. Compression methods may use p->compress_data.
> >> + */
> >> int (*recv_setup)(MultiFDRecvParams *p, Error **errp);
> >> - /* Cleanup for receiving side */
> >> +
> >> + /*
> >> + * Cleanup for receiving side. Called once per channel during
> >> + * channel cleanup phase. May be empty.
> >> + */
> >> void (*recv_cleanup)(MultiFDRecvParams *p);
> >> - /* Read all data */
> >> +
> >> + /*
> >> + * Data receive method. Called from multifd_recv(), with p
> >
> > multifd_recv_thread()?
>
> Same as before. I'll reword this somehow.
>
> >
> >> + * pointing to the MultiFDRecvParams of a channel that is
> >> + * currently idle. Only called if there is data available to
> >> + * receive.
> >> + *
> >> + * Must validate p->flags according to what was set at
> >> + * send_prepare.
> >> + *
> >> + * Must read the data from the QIOChannel p->c.
> >> + */
> >> int (*recv)(MultiFDRecvParams *p, Error **errp);
> >> } MultiFDMethods;
> >>
> >> --
> >> 2.35.3
> >>
>
--
Peter Xu