From: Peter Xu <peterx@redhat.com>
To: Avihai Horon <avihaih@nvidia.com>
Cc: Fabiano Rosas <farosas@suse.de>, qemu-devel@nongnu.org
Subject: Re: [PATCH 04/17] migration/multifd: Set p->running = true in the right place
Date: Tue, 30 Jan 2024 13:57:40 +0800
Message-ID: <ZbiP1Ayqxj9BLdY7@x1n>
In-Reply-To: <0f75090d-bbe1-43cb-b649-a0880bc413c4@nvidia.com>

On Mon, Jan 29, 2024 at 02:20:35PM +0200, Avihai Horon wrote:
> 
> On 29/01/2024 6:17, Peter Xu wrote:
> > 
> > On Sun, Jan 28, 2024 at 05:43:52PM +0200, Avihai Horon wrote:
> > > On 25/01/2024 22:57, Fabiano Rosas wrote:
> > > > 
> > > > Avihai Horon <avihaih@nvidia.com> writes:
> > > > 
> > > > > The commit in the fixes line moved multifd thread creation to a
> > > > > different location, but forgot to move the p->running = true assignment
> > > > > as well. Thus, p->running is set to true before the multifd thread is
> > > > > actually created.
> > > > > 
> > > > > p->running is used in multifd_save_cleanup() to decide whether to join
> > > > > the multifd thread or not.
> > > > > 
> > > > > With TLS, an error in multifd_tls_channel_connect() can lead to a
> > > > > segmentation fault because p->running is true but p->thread is never
> > > > > initialized, so multifd_save_cleanup() tries to join an uninitialized
> > > > > thread.
> > > > > 
> > > > > Fix it by moving the p->running = true assignment to right after multifd
> > > > > thread creation. Also move qio_channel_set_delay() there, as this is where
> > > > > it was originally.
> > > > > 
> > > > > Fixes: 29647140157a ("migration/tls: add support for multifd tls-handshake")
> > > > > Signed-off-by: Avihai Horon <avihaih@nvidia.com>
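
(For reference, the ordering described above boils down to roughly the
fragment below.  This is a simplified sketch based on the commit message,
not the actual diff, and it leans on QEMU's existing MultiFDSendParams,
qemu_thread_create() and qio_channel_set_delay() declarations.)

    static void multifd_channel_connect(MultiFDSendParams *p, QIOChannel *ioc)
    {
        /* ... TLS handshake or plain connect handled before this point ... */
        qio_channel_set_delay(ioc, false);
        qemu_thread_create(&p->thread, p->name, multifd_send_thread, p,
                           QEMU_THREAD_JOINABLE);
        /* Mark the channel as running only after the thread really exists,
         * so multifd_save_cleanup() never tries to join an uninitialized
         * thread on the TLS error path. */
        p->running = true;
    }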
> > > > Just for context, I haven't looked at this patch yet, but we were
> > > > planning to remove p->running altogether:
> > > > 
> > > > https://lore.kernel.org/r/20231110200241.20679-1-farosas@suse.de
> > > Thanks for putting me in the picture.
> > > I see that there has been a discussion about the multifd creation/teardown
> > > flow.
> > > In light of this discussion, I can already see a few problems in my series
> > > that I didn't notice before (such as the TLS handshake thread leak).
> > > The thread you mentioned here and some of my patches point out some problems
> > > in multifd creation/teardown. I guess we can discuss it and see what's the
> > > best way to solve them.
> > > 
> > > Regarding this patch, your solution indeed solves the bug that this patch
> > > addresses, so maybe this could be dropped (or only noted in your patch).
> > > 
> > > Maybe I should also put you (and Peter) in context for this whole series --
> > > I am writing it as preparation for adding a separate migration channel for
> > > VFIO device migration, so VFIO devices could be migrated in parallel.
> > > So this series tries to lay down some foundations to facilitate it.
> > Avihai, is the throughput the only reason that VFIO would like to have a
> > separate channel?
> 
> Actually, the main reason is to be able to send and load multiple VFIO
> devices' data in parallel.
> For example, today if we have three VFIO devices, they are migrated
> sequentially one after another.
> This particularly hurts during the complete pre-copy phase (downtime), as
> loading the VFIO data in the destination involves FW interaction and resource
> allocation, which takes time and simply blocks the other devices from
> sending and loading their data.
> Providing a separate channel and thread for each VFIO device solves this
> problem and ideally reduces the VFIO contribution to downtime from sum{VFIO
> device #1, ..., VFIO device #N} to max{VFIO device #1, ..., VFIO device #N}.

I see.
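
To make the claimed saving concrete with made-up numbers (purely
illustrative, not measured): if three devices take 3s, 2s and 1s to load,
then

    sequential:  3s + 2s + 1s     = 6s  of downtime contribution
    parallel:    max(3s, 2s, 1s)  = 3s  of downtime contribution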

> 
> > 
> > I'm wondering if we can also use multifd threads to send vfio data at some
> > point.  Now multifd indeed is closely bound to ram pages but maybe it'll
> > change in the near future to take any load?
> > 
> > Multifd is for solving the throughput issue already. If vfio has the same
> > goal, IMHO it'll be good to keep them using the same thread model, instead
> > of managing different threads in different places.  With that, any user
> > setting (for example, multifd-n-threads) will naturally apply to all
> > components, rather than relying on yet-another vfio-migration-threads-num
> > parameter.
> 
> Frankly, I didn't really pay much attention to the throughput factor, and my
> plan is to introduce only a single thread per device.
> VFIO devices may have many GBs of data to migrate (e.g., vGPUs) and even
> mlx5 VFs can have a few GBs of data.
> So what you are saying here is interesting, although I didn't test such a
> scenario to see the actual benefit.
> 
> I am trying to think if/how this could work and I have a few concerns:
> 1. RAM is made of fixed-positioned pages that can be randomly read/written,
> so sending these pages over multiple channels and loading them in the
> destination can work pretty naturally without much overhead.
>    VFIO device data, on the other hand, is just an opaque stream of bytes
> from QEMU's point of view. This means that if we break this data into
> "packets" and send them over multiple channels, we must preserve the order
> in which this data was originally read from the device and write the data
> in the same order to the destination device.
>    I am wondering if the overhead of maintaining such order may hurt
> performance, making it not worthwhile.

Indeed, it seems to me VFIO migration is based on a streaming model where
there's no easy way to index a chunk of data.

Is there any background on how that kernel interface was designed?  It
seems pretty unfriendly to concurrency: even if multiple devices can
migrate concurrently, a single device can already hold GBs of data, as you
said, and that is a pretty common case.  I'm a bit surprised the kernel
interface was designed this way for such devices.

I was wondering about the possibility of VFIO providing data chunks with
indexes, just like RAM (which is indexed by ramblock offsets).
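
Just to illustrate the direction (purely hypothetical; nothing like this
exists in the uAPI today), an indexed chunk could carry framing along the
lines of:

    #include <stdint.h>

    /* Hypothetical framing for an indexed VFIO device-state chunk, for
     * illustration only. */
    typedef struct {
        uint32_t device_id;   /* which VFIO device the chunk belongs to */
        uint64_t offset;      /* position of the chunk within the device's
                               * saved-state stream */
        uint32_t size;        /* length in bytes of the payload that follows */
    } VFIODeviceStateChunk;

With an offset like that, the destination could place chunks that arrive
out of order on different channels, much like ramblock offsets let multifd
scatter RAM pages across channels.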

> 
> 2. As I mentioned above, the main motivation for a separate VFIO migration
> channel/thread, as I see it today, is to allow parallel migration of VFIO
> devices.
>    AFAIU multifd, as it is today, doesn't provide such parallelism (i.e., in
> the complete pre-copy phase each device, be it RAM or VFIO, will fully send
> its data over the multifd threads and only after finishing will the next
> device send its data).

Indeed. That's actually an issue not only for VFIO but for migration in
general: we can't migrate device states concurrently, and multifd is out of
the picture here so far, but maybe there's a chance.

Consider huge VMs that can already have ~500 vCPUs, each needing its own
get()/put() of CPU state from/to KVM.  It would be nice if we could do this
in concurrent threads too.  VFIO is one of the devices that would also
benefit from such a design, and greatly.
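
As a rough sketch of the idea (hypothetical helper names, no error
handling, just to show the shape of it):

    /* Hypothetical: save each device's state from its own thread instead
     * of serially on the migration thread.  device_save_thread() and the
     * devices[] array are made up for illustration. */
    QemuThread threads[MAX_DEVICES];

    for (int i = 0; i < n_devices; i++) {
        qemu_thread_create(&threads[i], "dev-save", device_save_thread,
                           &devices[i], QEMU_THREAD_JOINABLE);
    }
    for (int i = 0; i < n_devices; i++) {
        qemu_thread_join(&threads[i]);
    }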

I added a todo in our wiki for this, which I see as a general improvement,
and hopefully someone can look into it:

https://wiki.qemu.org/ToDo/LiveMigration#Device_state_concurrency

I hope VFIO can consider resolving this as a generic issue, rather than
providing its own solution.

> 
> This is just what came to mind. Maybe we can think this through more
> thoroughly and see if it could work somehow, now or in the future.
> However, I think making the multifd threads generic so they can send any
> kind of data is a good thing in general, regardless of VFIO.

Right.

In general, having a separate VFIO channel may solve the immediate issue,
but it still may not solve everything.  Meanwhile, it would introduce the
first example of a completely separate channel that migration cannot easily
manage itself, which IMHO can make migration much harder to maintain in the
future.

It may also become technical debt that VFIO will need to carry even if such
a solution is merged, because VFIO would end up with its own model for
handling problems similar to the ones migration already has.

I hope there's a way for us to work together to improve the framework,
providing a clean approach and considering the long term.

Thanks,

-- 
Peter Xu



