From: Peter Xu <peterx@redhat.com>
To: Fabiano Rosas <farosas@suse.de>
Cc: qemu-devel@nongnu.org, "Juan Quintela" <quintela@redhat.com>,
	"Leonardo Bras" <leobras@redhat.com>,
	"Philippe Mathieu-Daudé" <philmd@linaro.org>
Subject: Re: [RFC PATCH 2/2] migration/multifd: Move semaphore release into main thread
Date: Mon, 13 Nov 2023 11:45:59 -0500
Message-ID: <ZVJSx6FOg8WfSbrz@x1n>
In-Reply-To: <87pm0hzucq.fsf@suse.de>

On Fri, Nov 10, 2023 at 09:05:41AM -0300, Fabiano Rosas wrote:

[...]

> > Then assuming we have a clear model with all these threads issue fixed (no
> > matter whether we'd shrink 2N threads into N threads), then what we need to
> > do, IMHO, is making sure to join() all of them before destroying anything
> > (say, per-channel MultiFDSendParams).  Then when we destroy everything
> > safely, either mutex/sem/etc..  Because no one will race us anymore.
> 
> This doesn't address the race. There's a data dependency between the
> multifd channels and the migration thread around the channels_ready
> semaphore. So we cannot join the migration thread because it could be
> stuck waiting for the semaphore, which means we cannot join+cleanup the
> channel thread because the semaphore is still being used.
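
(For anyone following along: the hazard being described reduces to the
pattern below.  This is a minimal standalone POSIX sketch with made-up
names, standing in for the multifd code rather than quoting it: destroying
a semaphore while another thread may still be blocked in, or returning
from, sem_wait() on it is undefined.)

#include <pthread.h>
#include <semaphore.h>

static sem_t channels_ready;            /* stand-in for multifd's channels_ready */

static void *migration_thread_fn(void *opaque)
{
    sem_wait(&channels_ready);          /* may still be inside sem_wait()... */
    return NULL;
}

static void *channel_thread_fn(void *opaque)
{
    sem_post(&channels_ready);          /* ...after the channel has already posted */
    return NULL;
}

int main(void)
{
    pthread_t mig, chan;

    sem_init(&channels_ready, 0, 0);
    pthread_create(&mig, NULL, migration_thread_fn, NULL);
    pthread_create(&chan, NULL, channel_thread_fn, NULL);

    pthread_join(chan, NULL);           /* the channel thread is gone, but... */
    sem_destroy(&channels_ready);       /* ...the waiter may not have returned yet: UB */
    pthread_join(mig, NULL);
    return 0;
}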

I think this is the major source of confusion: why this can happen at all.

The thing is, afaik, multifd_save_cleanup() is only called by
migrate_fd_cleanup(), which in turn is only called from:

  1) migrate_fd_cleanup_bh()
  2) migrate_fd_connect()

For 1): the BH only runs when migration completes/fails/etc. (in all cases,
right before the migration thread quits), and it's the migration thread that
kicks off migrate_fd_cleanup_schedule().  So the migration thread shouldn't
be stuck, afaiu, or it wouldn't have been able to kick that BH in the first
place.

For 2): it's called from the main thread, at a point where the migration
thread has not yet been created.

With that, I don't see how migrate_fd_cleanup() would need to worry about
the migration thread at all.
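
To spell out the ordering I'm assuming (again as a minimal pthread sketch
with hypothetical names, not the actual QEMU code): the destroy only ever
happens on the main thread, after every thread that can still touch
channels_ready has been joined.

#include <pthread.h>
#include <semaphore.h>

enum { N_CHANNELS = 2 };

static sem_t channels_ready;
static pthread_t channel_threads[N_CHANNELS];
static pthread_t mig_thread;

static void *channel_fn(void *opaque)
{
    sem_post(&channels_ready);              /* "this channel is ready for more" */
    return NULL;
}

static void *migration_fn(void *opaque)
{
    for (int i = 0; i < N_CHANNELS; i++) {
        sem_wait(&channels_ready);          /* every wait completes before exit */
    }
    /* the real migration thread schedules the cleanup BH right before it quits */
    return NULL;
}

/* stand-in for the cleanup path running in the main thread */
static void cleanup(void)
{
    pthread_join(mig_thread, NULL);         /* it has finished all its sem_wait()s */
    for (int i = 0; i < N_CHANNELS; i++) {
        pthread_join(channel_threads[i], NULL);
    }
    sem_destroy(&channels_ready);           /* no thread can be waiting any more */
}

int main(void)
{
    sem_init(&channels_ready, 0, 0);
    pthread_create(&mig_thread, NULL, migration_fn, NULL);
    for (int i = 0; i < N_CHANNELS; i++) {
        pthread_create(&channel_threads[i], NULL, channel_fn, NULL);
    }
    cleanup();                              /* the "BH" runs only on the main thread */
    return 0;
}

As long as sem_destroy() is sequenced after both joins, there's nothing
left for it to race with.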

Did I miss something?

Thanks,

-- 
Peter Xu



Thread overview: 17+ messages
2023-11-09 16:58 [RFC PATCH 0/2] migration: Fix multifd qemu_mutex_destroy race Fabiano Rosas
2023-11-09 16:58 ` [RFC PATCH 1/2] migration: Report error in incoming migration Fabiano Rosas
2023-11-09 18:57   ` Peter Xu
2023-11-10 10:58     ` Fabiano Rosas
2023-11-13 16:51       ` Peter Xu
2023-11-14  1:54         ` Fabiano Rosas
2023-11-09 16:58 ` [RFC PATCH 2/2] migration/multifd: Move semaphore release into main thread Fabiano Rosas
2023-11-09 18:56   ` Peter Xu
2023-11-10 12:05     ` Fabiano Rosas
2023-11-10 12:37       ` Fabiano Rosas
2023-11-16 15:51         ` Juan Quintela
2023-11-13 16:45       ` Peter Xu [this message]
2023-11-14  1:50         ` Fabiano Rosas
2023-11-14 17:28           ` Peter Xu
2023-11-16 15:44       ` Juan Quintela
2023-11-16 14:56     ` Juan Quintela
2023-11-16 18:13       ` Fabiano Rosas
