From: Peter Xu <peterx@redhat.com>
To: "Wang, Lei" <lei4.wang@intel.com>
Cc: Fabiano Rosas <farosas@suse.de>,
qemu-devel@nongnu.org,
"Maciej S . Szmigiero" <mail@maciej.szmigiero.name>
Subject: Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
Date: Thu, 27 Jun 2024 10:40:11 -0400
Message-ID: <Zn15y693g0AkDbYD@x1n>
In-Reply-To: <e60bc0c7-dc49-400e-88f1-a30c32943f25@intel.com>
On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote:
> > Or graphically:
> >
> > 1) client fills the active slot with data. Channels point to nothing
> > at this point:
> > [a] <-- active slot
> > [][][][] <-- free slots, one per-channel
> >
> > [][][][] <-- channels' p->data pointers
> >
> > 2) multifd_send() swaps the pointers inside the client slot. Channels
> > still point to nothing:
> > []
> > [a][][][]
> >
> > [][][][]
> >
> > 3) multifd_send() finds an idle channel and updates its pointer:
>
> It seems the action "finds an idle channel" happens in step 2 rather than step
> 3, which means the free slot is selected based on the id of the channel found.
> Am I understanding correctly?
I think you're right.
Actually I also feel like the description here is ambiguous, even though I
think I get what Fabiano wanted to say.

Finding the idle channel (and with it the free slot) should really be the
first action of steps 2 and 3 combined. What Fabiano really wanted to
suggest here is that we move the free buffer array out of the multifd
channels and into the callers, so that the caller can pass in whatever data
it wants to send.

So I think maybe it's cleaner to write it like this in code (note: I didn't
really change the code, just the ordering and the comments):
===8<===
@@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots)
      */
     active_slot = slots->active;
     slots->active = slots->free[p->id];
-    p->data = active_slot;
-
-    /*
-     * By the next time we arrive here, the channel will certainly
-     * have consumed the active slot. Put it back on the free list
-     * now.
-     */
     slots->free[p->id] = active_slot;
+
+    /* Assign the current active slot to the chosen thread */
+    p->data = active_slot;
===8<===
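
(For reference, the diff above operates on a slots structure roughly shaped
like the below; this is my reconstruction of what the series proposes, not
verbatim code:)

typedef struct MultiFDSendData MultiFDSendData;

typedef struct {
    /* The slot the client (e.g. ram.c) is currently filling */
    MultiFDSendData *active;
    /* One free slot per channel, indexed by p->id */
    MultiFDSendData **free;
} MultiFDSlots;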
The comment I removed is slightly misleading to me too, because right now
active_slot contains data that hasn't yet been delivered to multifd, so
we're "putting it back on the free list" not because it's free, but because
we know it won't get used until the multifd send thread consumes it (before
that, the thread will be busy, and upcoming send()s won't touch that
buffer).
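
If we keep the assignment where the series has it, a more accurate comment
(my wording, just to illustrate the point above) could be:

    /*
     * active_slot still holds data the send thread hasn't consumed
     * yet.  It's safe to put it on the free list already, because
     * the channel stays busy until the thread is done with the slot,
     * so no later send() can pick it up from the free list first.
     */
    slots->free[p->id] = active_slot;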
And then, looking at this again, I think maybe it's slight overkill, and
maybe we can still keep the "opaque data" managed by multifd. One reason
might be that I don't expect the "opaque data" payload to keep growing at
all: it should really be either RAM or device state, as I commented
elsewhere in a relevant thread; after all, it's a threading model only for
migration purposes, to move vmstates...

Keeping it managed by the multifd threads should involve less change than
this series, but it could look like this:
typedef enum {
    MULTIFD_PAYLOAD_RAM = 0,
    MULTIFD_PAYLOAD_DEVICE_STATE = 1,
} MultifdPayloadType;

typedef union {
    MultiFDPages_t ram_payload;
    MultifdDeviceState_t device_payload;
} MultifdPayload;

struct MultiFDSendData {
    MultifdPayloadType type;
    MultifdPayload data;
};
Then the "enum" makes sure the payload only consumes only the max of both
types; a side benefit to save some memory.
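
With that, each send thread can switch on the type tag to pick the right
marshalling path. A minimal sketch of the idea (the two per-payload helpers
are made-up names for illustration, not functions from the series):

static void multifd_send_fill_packet(MultiFDSendParams *p)
{
    MultiFDSendData *data = p->data;

    switch (data->type) {
    case MULTIFD_PAYLOAD_RAM:
        /* Existing RAM path, fed from the union's ram_payload */
        multifd_ram_fill_packet(p, &data->data.ram_payload);
        break;
    case MULTIFD_PAYLOAD_DEVICE_STATE:
        /* New device state path (e.g. VFIO) */
        multifd_device_state_fill_packet(p, &data->data.device_payload);
        break;
    default:
        g_assert_not_reached();
    }
}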
I think we need to make sure MultifdDeviceState_t is generic enough so that
it will work for mostly everything (especially normal VMSDs). In this case
the VFIO series should be good, as it currently defines:
typedef struct {
    MultiFDPacketHdr_t hdr;
    char idstr[256] QEMU_NONSTRING;
    uint32_t instance_id;
    /* size of the next packet that contains the actual data */
    uint32_t next_packet_size;
} __attribute__((packed)) MultiFDPacketDeviceState_t;
IIUC that is exactly what we need with idstr+instance_id, so as to pin down
exactly where the "opaque device state" should go, then load it with a
buffer-based loader when it's ready (starting with VFIO, to get rid of
qemufile). For VMSDs in the future, if ever possible, that would be a
modified version of vmstate_load() which may take buffers instead of
qemufiles.
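
Just to sketch what I mean by a buffer-based loader, it could have a
signature along these lines (hypothetical, not an existing QEMU API):

/*
 * Hypothetical buffer-based counterpart of vmstate_load(): look up
 * the device via idstr+instance_id (as carried in the packet header
 * above), then parse its state from a memory buffer rather than a
 * QEMUFile.
 */
int vmstate_load_buffer(const char *idstr, uint32_t instance_id,
                        const void *buf, size_t len);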
To Maciej: please see whether the above makes sense to you, and if you
agree, please consider it in your VFIO work.
Thanks,
>
> > []
> > [a][][][]
> >
> > [a][][][]
> > ^idle
--
Peter Xu