From: Peter Xu <peterx@redhat.com>
To: Fabiano Rosas <farosas@suse.de>
Cc: qemu-devel@nongnu.org, "Avihai Horon" <avihaih@nvidia.com>,
"Daniel P . Berrangé" <berrange@redhat.com>
Subject: Re: [PATCH v2 6/6] migration/multifd: Add a synchronization point for channel creation
Date: Tue, 6 Feb 2024 11:37:11 +0800
Message-ID: <ZcGpZ5V2YUcUXaLB@x1n>
In-Reply-To: <20240205194929.28963-7-farosas@suse.de>
On Mon, Feb 05, 2024 at 04:49:29PM -0300, Fabiano Rosas wrote:
> It is possible that one of the multifd channels fails to be created at
> multifd_new_send_channel_async() while the rest of the channel
> creation tasks are still in flight.
>
> This could lead to multifd_save_cleanup() executing the
> qemu_thread_join() loop too early, without waiting for threads that
> haven't been created yet, and thus freeing resources that the newly
> created threads will then try to access, causing a crash.
>
> Add a synchronization point after which no further attempts at
> thread creation will be made; calling multifd_save_cleanup() past
> that point therefore ensures it properly waits for all the threads.
>
> A note about performance: Prior to this patch, if a channel took too
> long to be established, other channels could finish connecting first
> and already start taking load. Now we're bounded by the
> slowest-connecting channel.
>
> Reported-by: Avihai Horon <avihaih@nvidia.com>
> Signed-off-by: Fabiano Rosas <farosas@suse.de>
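For readers following along: the scheme boils down to posting a
semaphore exactly once per channel creation attempt, success or
failure, and having whoever wants to join the threads first wait for
one post per channel. Below is a standalone pthreads sketch of that
pattern; the names (channels_created, create_channel, and so on) are
made up for the illustration, and this is not the actual QEMU code:

    /* Standalone illustration of the synchronization point
     * (hypothetical names; not QEMU code).  Each creation attempt
     * posts the semaphore exactly once, success or failure, so
     * cleanup can wait for every attempt to finish before it starts
     * joining threads. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_CHANNELS 4

    static sem_t channels_created;  /* posted once per creation attempt */
    static pthread_t threads[NUM_CHANNELS];
    static bool thread_valid[NUM_CHANNELS];

    static void *channel_thread(void *arg)
    {
        /* ... the channel's send loop would run here ... */
        return NULL;
    }

    static void *create_channel(void *arg)
    {
        long id = (long)arg;
        bool ok = (id != 2);  /* simulate one channel failing to connect */

        if (ok) {
            thread_valid[id] = !pthread_create(&threads[id], NULL,
                                               channel_thread, NULL);
        }
        /* The crucial part: post unconditionally, even on failure, so
         * the wait loop below neither blocks forever nor runs early. */
        sem_post(&channels_created);
        return NULL;
    }

    int main(void)
    {
        pthread_t creators[NUM_CHANNELS];

        sem_init(&channels_created, 0, 0);
        for (long i = 0; i < NUM_CHANNELS; i++) {
            pthread_create(&creators[i], NULL, create_channel, (void *)i);
        }

        /* Synchronization point: after this loop no creation attempt
         * is still in flight, so the joins below are safe. */
        for (int i = 0; i < NUM_CHANNELS; i++) {
            sem_wait(&channels_created);
        }

        for (int i = 0; i < NUM_CHANNELS; i++) {
            pthread_join(creators[i], NULL);
            if (thread_valid[i]) {
                pthread_join(threads[i], NULL);
            }
        }
        puts("all channels accounted for");
        sem_destroy(&channels_created);
        return 0;
    }

The unconditional post in the failure path is what guarantees the wait
loop terminates, and the wait loop in turn is what makes the joins safe.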
[...]
> @@ -934,7 +936,6 @@ static void multifd_new_send_channel_async(QIOTask *task, gpointer opaque)
> MultiFDSendParams *p = opaque;
> QIOChannel *ioc = QIO_CHANNEL(qio_task_get_source(task));
> Error *local_err = NULL;
> -
This line removal belongs in the previous patch. I can touch that up.
> bool ret;
>
> trace_multifd_new_send_channel_async(p->id);
Reviewed-by: Peter Xu <peterx@redhat.com>
--
Peter Xu
Thread overview: 16+ messages
2024-02-05 19:49 [PATCH v2 0/6] migration/multifd: Fix channel creation vs. cleanup races Fabiano Rosas
2024-02-05 19:49 ` [PATCH v2 1/6] migration/multifd: Join the TLS thread Fabiano Rosas
2024-02-06 8:53 ` Daniel P. Berrangé
2024-02-06 9:15 ` Peter Xu
2024-02-06 10:06 ` Daniel P. Berrangé
2024-02-05 19:49 ` [PATCH v2 2/6] migration/multifd: Remove p->running Fabiano Rosas
2024-02-05 19:49 ` [PATCH v2 3/6] migration/multifd: Move multifd_send_setup error handling in to the function Fabiano Rosas
2024-02-05 19:49 ` [PATCH v2 4/6] migration/multifd: Move multifd_send_setup into migration thread Fabiano Rosas
2024-02-05 19:49 ` [PATCH v2 5/6] migration/multifd: Unify multifd and TLS connection paths Fabiano Rosas
2024-02-06 3:33 ` Peter Xu
2024-02-06 12:44 ` Avihai Horon
2024-02-06 14:30 ` Fabiano Rosas
2024-02-06 14:44 ` Avihai Horon
2024-02-05 19:49 ` [PATCH v2 6/6] migration/multifd: Add a synchronization point for channel creation Fabiano Rosas
2024-02-06 3:37 ` Peter Xu [this message]
2024-02-06 3:42 ` [PATCH v2 0/6] migration/multifd: Fix channel creation vs. cleanup races Peter Xu