From: Peter Xu <peterx@redhat.com>
To: "Maciej S. Szmigiero" <mail@maciej.szmigiero.name>
Cc: Fabiano Rosas <farosas@suse.de>, qemu-devel@nongnu.org
Subject: Re: [RFC PATCH 0/7] migration/multifd: Introduce storage slots
Date: Fri, 21 Jun 2024 11:56:12 -0400
Message-ID: <ZnWinGjeZGRGVOF-@x1n>
In-Reply-To: <dfe0384e-a765-4bfb-81c8-529329d76052@maciej.szmigiero.name>
On Fri, Jun 21, 2024 at 05:31:54PM +0200, Maciej S. Szmigiero wrote:
> On 21.06.2024 17:04, Fabiano Rosas wrote:
> > "Maciej S. Szmigiero" <mail@maciej.szmigiero.name> writes:
> >
> > > Hi Fabiano,
> > >
> > > On 20.06.2024 23:21, Fabiano Rosas wrote:
> > > > Hi folks,
> > > >
> > > > First of all, apologies for the roughness of the series. I'm off for
> > > > the next couple of weeks and wanted to put something together early
> > > > for your consideration.
> > > >
> > > > This series is a refactoring (based on an earlier, off-list
> > > > attempt[0]) aimed at removing the usage of the MultiFDPages_t type
> > > > in the multifd core. If we're going to add support for more data
> > > > types to multifd, we first need to clean that up.
> > > >
> > > > This time around, the work was prompted by Maciej's series[1]. I see
> > > > you're having to add a bunch of is_device_state checks to work around
> > > > the rigidity of the code.
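> > > >
> > > > (To illustrate the kind of rigidity I mean -- the device-state helper
> > > > name below is made up, only multifd_send_fill_packet() exists today --
> > > > every payload-specific step grows a branch like:
> > > >
> > > >     if (p->is_device_state) {
> > > >         multifd_device_state_fill_packet(p);
> > > >     } else {
> > > >         multifd_send_fill_packet(p);
> > > >     }
> > > >
> > > > and that doesn't scale as more payload types are added.)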
> > > >
> > > > Aside from the VFIO work, there is also the intent (going back to
> > > > Juan's ideas) to make multifd the default code path for migration,
> > > > which will have to include the vmstate migration and anything else we
> > > > put on the stream via QEMUFile.
> > > >
> > > > I have long been bothered by having 'pages' sprinkled all over the
> > > > code, so I might be coming at this with a bit of a narrow focus, but I
> > > > believe that in order to support more types of payloads in multifd, we
> > > > first need to allow the scheduling at multifd_send_pages() to be
> > > > independent of MultiFDPages_t. So here it is. Let me know what you
> > > > think.
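> > > >
> > > > Roughly, the direction is the following (a sketch only -- all of
> > > > these names are hypothetical, not code from the series):
> > > >
> > > >     typedef enum {
> > > >         MULTIFD_PAYLOAD_NONE,
> > > >         MULTIFD_PAYLOAD_RAM,
> > > >         /* later: device state, vmstate, ... */
> > > >     } MultiFDPayloadType;
> > > >
> > > >     typedef struct {
> > > >         MultiFDPayloadType type;
> > > >         void *opaque;   /* MultiFDPages_t when type is RAM */
> > > >     } MultiFDSendData;
> > > >
> > > >     /*
> > > >      * The scheduler hands a MultiFDSendData to a free channel and
> > > >      * never looks inside the payload, so adding a payload type
> > > >      * doesn't touch the scheduling code at all.
> > > >      */
> > > >     bool multifd_send(MultiFDSendData **send_data);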
> > >
> > > Thanks for the patch set; I quickly glanced at these patches and they
> > > definitely make sense to me.
> > >
> (..)
> > > > (as I said, I'll be off for a couple of weeks, so feel free to
> > > > incorporate any of this code if it's useful. Or to ignore it
> > > > completely).
> > >
> > > I guess you are targeting QEMU 9.2 rather than 9.1 since 9.1 has
> > > feature freeze in about a month, correct?
> > >
> >
> > For general code improvements like this I'm not thinking about QEMU
> > releases at all. But this series is not super complex, so I could
> > imagine us merging it in time for 9.1 if we reach an agreement.
> >
> > Are you thinking your series might miss the target? Or do you have
> > concerns over the stability of the refactoring? Within reason, we can
> > merge code based on the current framework and improve things on top;
> > we already did something similar when merging zero-page support. I
> > don't have an issue with that.
>
> The reason I asked whether you are targeting 9.1 is that my patch set
> is definitely targeting that release.
>
> At the same time, my patch set will need to be rebased/refactored on
> top of this patch set if it is to be merged for 9.1 too.
>
> If this patch set gets merged quickly, that's not really a problem.
>
> On the other hand, if more iterations are needed AND you are not
> available in the coming weeks to work on them, then there's a question
> of whether we will make the required deadline.

I think it's a bit rushed to merge the VFIO series in this release. I'm
not sure there's enough time for it to be properly reviewed, reposted,
retested, etc.

I've already started looking at it, and so far my doubts are not only
about the device_state special-casing, where I agree with Fabiano that
it's better avoided, but also about whether we can at least make the
worker threads generic too: a direct beneficiary could be vDPA in the
near future, if anyone cares, and I don't want modules to create
threads at will during migration.
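
To make that concrete, the rough shape I have in mind is below (a
sketch only; none of these names exist in the tree today). The point
is that the migration core owns worker creation and teardown, instead
of each device module spawning threads on its own:

    /* assumes "qemu/osdep.h" and "qemu/thread.h" */
    typedef struct MigWorker {
        QemuThread thread;
    } MigWorker;

    /* hypothetical: created here, joined later by the migration core */
    MigWorker *mig_worker_create(const char *name,
                                 void *(*fn)(void *), void *opaque)
    {
        MigWorker *w = g_new0(MigWorker, 1);
        qemu_thread_create(&w->thread, name, fn, opaque,
                           QEMU_THREAD_JOINABLE);
        return w;
    }

Then VFIO (or vDPA) would ask for a worker here rather than calling
qemu_thread_create() directly.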

Meanwhile, I'm also wondering whether "the thread needs to dump all of
its data, and we can't do that during iteration" is really a good
reason not to support this during the iterative phase.

I haven't replied yet because I don't think I've thought everything
through, but I'll get there.

So I'm not saying the design is problematic; IMHO it's just not mature
enough to assume it will land in 9.1, considering it's still a large
series and the first non-RFC version was posted just two days ago.

Thanks,
--
Peter Xu

Thread overview: 46+ messages
2024-06-20 21:21 [RFC PATCH 0/7] migration/multifd: Introduce storage slots Fabiano Rosas
2024-06-20 21:21 ` [RFC PATCH 1/7] migration/multifd: Reduce access to p->pages Fabiano Rosas
2024-06-21 14:42 ` Peter Xu
2024-06-20 21:21 ` [RFC PATCH 2/7] migration/multifd: Pass in MultiFDPages_t to file_write_ramblock_iov Fabiano Rosas
2024-06-20 21:21 ` [RFC PATCH 3/7] migration/multifd: Replace p->pages with an opaque pointer Fabiano Rosas
2024-06-20 21:21 ` [RFC PATCH 4/7] migration/multifd: Move pages accounting into multifd_send_zero_page_detect() Fabiano Rosas
2024-06-20 21:21 ` [RFC PATCH 5/7] migration/multifd: Isolate ram pages packet data Fabiano Rosas
2024-07-19 14:40 ` Fabiano Rosas
2024-06-20 21:21 ` [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters Fabiano Rosas
2024-06-27 3:27 ` Wang, Lei
2024-06-27 14:40 ` Peter Xu
2024-06-27 15:17 ` Peter Xu
2024-07-10 16:10 ` Fabiano Rosas
2024-07-10 19:10 ` Peter Xu
2024-07-10 20:16 ` Fabiano Rosas
2024-07-10 21:55 ` Peter Xu
2024-07-11 14:12 ` Fabiano Rosas
2024-07-11 16:11 ` Peter Xu
2024-07-11 19:37 ` Fabiano Rosas
2024-07-11 20:27 ` Peter Xu
2024-07-11 21:12 ` Fabiano Rosas
2024-07-11 22:06 ` Peter Xu
2024-07-12 12:44 ` Fabiano Rosas
2024-07-12 15:37 ` Peter Xu
2024-07-18 19:39 ` Fabiano Rosas
2024-07-18 21:12 ` Peter Xu
2024-07-18 21:27 ` Fabiano Rosas
2024-07-18 21:52 ` Peter Xu
2024-07-18 22:32 ` Fabiano Rosas
2024-07-19 14:04 ` Peter Xu
2024-07-19 16:54 ` Fabiano Rosas
2024-07-19 17:58 ` Peter Xu
2024-07-19 21:30 ` Fabiano Rosas
2024-07-16 20:10 ` Maciej S. Szmigiero
2024-07-17 19:00 ` Peter Xu
2024-07-17 21:07 ` Maciej S. Szmigiero
2024-07-17 21:30 ` Peter Xu
2024-06-20 21:21 ` [RFC PATCH 7/7] migration/multifd: Hide multifd slots implementation Fabiano Rosas
2024-06-21 14:44 ` [RFC PATCH 0/7] migration/multifd: Introduce storage slots Maciej S. Szmigiero
2024-06-21 15:04 ` Fabiano Rosas
2024-06-21 15:31 ` Maciej S. Szmigiero
2024-06-21 15:56 ` Peter Xu [this message]
2024-06-21 17:40 ` Maciej S. Szmigiero
2024-06-21 20:54 ` Peter Xu
2024-06-23 20:25 ` Maciej S. Szmigiero
2024-06-23 20:45 ` Peter Xu